76 FR 27232 - Airworthiness Directives; Airbus Model A310 Series Airplanes
2011-05-11
..., 1979); and 3. Will not have a significant economic impact, positive or negative, on a substantial...) of this AD; or modification of the reinforcement angle runout in accordance with Airbus Service Bulletin A310-53-2019, Revision 3, dated...
76 FR 42 - Airworthiness Directives; Airbus Model A310 Series Airplanes
2011-01-03
... Procedures (44 FR 11034, February 26, 1979); and 3. Will not have a significant economic impact, positive or...) of this AD; or modification of the reinforcement angle runout in accordance with Airbus Service Bulletin A310-53-2019, Revision 3, dated...
2011-03-22
... of Airbus All Operators Telex (AOT) A310-25A2203, Revision 02, dated March 2, 2009; or Airbus AOT... Service information: Airbus All Operators Telex A300-25A6215, Revision 02, dated March 2, 2009; Airbus All Operators Telex A310-25A2203, Revision 02, dated March 2, 2009. Airbus Mandatory Service...
2010-12-01
... SOGERMA 2510113 series co-pilot seats, in accordance with the instructions of Airbus All Operators Telex... Airbus All Operators Telex A300-25A6215, Revision 02, dated March 2, 2009; Airbus All Operators Telex A310-25A2203...
76 FR 421 - Airworthiness Directives; Airbus Model A310 Series Airplanes
2011-01-05
... the criteria of the Regulatory Flexibility Act. We prepared a regulatory evaluation of the estimated... the NPRM, the regulatory evaluation, any comments received, and other information. The street address... irregularity. Available lighting is normally supplemented with a direct source of good lighting at an...
76 FR 62653 - Airworthiness Directives; Airbus Model A310 Series Airplanes
2011-10-11
... of chafing with the new routing of the wire bundle 2S in the RH wing pylon area to the generator wire... the fuel electrical circuit in the Right Hand (RH) wing must be modified to ensure better segregation between fuel...
Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)
2004-01-01
This book considers periodic time series models for seasonal data, characterized by parameters that differ across the seasons, and focuses on their usefulness for out-of-sample forecasting. Providing an up-to-date survey of the recent developments in periodic time series, the book
Introduction to Time Series Modeling
Kitagawa, Genshiro
2010-01-01
In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f
2010-01-01
... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Recruitment. 8a.310 Section 8a.310... in Admission and Recruitment Prohibited § 8a.310 Recruitment. (a) Nondiscriminatory recruitment. A... recruitment and admission of students. A recipient may be required to undertake additional recruitment efforts...
Models for dependent time series
Tunnicliffe Wilson, Granville; Haywood, John
2015-01-01
Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data.The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational mater
2011-03-22
... reported several cases of wire damage at the pylon/wing interface. Analysis revealed that the wire damage is due to installation quality issues..., this AD requires the modification of the electrical installation in the pylon/wing interface to...
Lag space estimation in time series modelling
Goutte, Cyril
1997-01-01
The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...
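One simple way to probe the relevant lag space is to fit autoregressions of increasing order and compare an information criterion; the sketch below uses a least-squares AIC on a simulated AR(2) series. This is an illustrative baseline, not the regressor-selection techniques studied in the article.

```python
import numpy as np

def ar_aic(x, max_lag=8):
    """Fit AR(p) by least squares for p = 1..max_lag and return AIC values.

    AIC here is n*log(RSS/n) + 2p, a common least-squares form (an
    illustrative choice, not the criterion used in the article).
    """
    aics = {}
    for p in range(1, max_lag + 1):
        # Build the lagged regressor matrix (the "lag space").
        X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
        y = x[p:]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        n = len(y)
        aics[p] = n * np.log(rss / n) + 2 * p
    return aics

rng = np.random.default_rng(0)
# Simulate an AR(2) process, so the relevant lag space is {1, 2}.
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
aics = ar_aic(x)
best = min(aics, key=aics.get)
```

On the simulated data, adding the second lag should lower the criterion, since the true process depends on two lags.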
A Simple Fuzzy Time Series Forecasting Model
Ortiz-Arroyo, Daniel
2016-01-01
In this paper we describe a new first order fuzzy time series forecasting model. We show that our automatic fuzzy partitioning method provides an accurate approximation to the time series that, when combined with rule forecasting and an OWA operator, improves forecasting accuracy. Our model does not attempt to provide the best results in comparison with other forecasting methods but to show how to improve first order models using simple techniques. However, we show that our first order model is still capable of outperforming some more complex higher order fuzzy time series models.
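A minimal first order fuzzy time series forecaster in the classic Chen style can illustrate the idea: partition the range, fuzzify observations, collect first-order logical relationships, and defuzzify. The equal-width partitioning and the toy data are assumptions for illustration; the paper's automatic partitioning and OWA aggregation are not reproduced here.

```python
import numpy as np

def chen_fuzzy_forecast(series, n_intervals=7):
    """Chen-style first-order fuzzy time series forecast (one step ahead).

    Equal-width partitioning is a simplification; the paper proposes an
    automatic partitioning method instead.
    """
    lo, hi = min(series), max(series)
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    # Fuzzify: map each observation to the index of its interval.
    labels = np.clip(np.searchsorted(edges, series, side="right") - 1,
                     0, n_intervals - 1)
    # First-order logical relationships: A_i -> {A_j, ...}.
    rules = {}
    for a, b in zip(labels[:-1], labels[1:]):
        rules.setdefault(a, set()).add(b)
    last = labels[-1]
    succ = sorted(rules.get(last, {last}))
    # Defuzzify: average of the midpoints of the consequent intervals.
    return float(np.mean([mids[j] for j in succ]))

data = [30, 50, 80, 120, 100, 70, 60, 90, 110, 95]
forecast = chen_fuzzy_forecast(data)
```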
Time series modeling, computation, and inference
Prado, Raquel
2010-01-01
"The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book." (Hsun-Hsien Chang, Computing Reviews, March 2012) "My favorite chapters were on dynamic linear models and vector AR and vector ARMA models." (William Seaver, Technometrics, August 2011) "… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit
THE FOURIER SERIES MODEL IN MAP ANALYSIS.
During the past several years the double Fourier Series has been applied to the analysis of contour-type maps as an alternative to the more commonly...used polynomial model. The double Fourier Series has high potential in the study of areal variations, inasmuch as a succession of trend maps based on...and it is shown that the double Fourier Series can be used to summarize the directional properties of areally-distributed data. An Appendix lists
Nonlinear time series modelling: an introduction
Simon M. Potter
1999-01-01
Recent developments in nonlinear time series modelling are reviewed. Three main types of nonlinear models are discussed: Markov Switching, Threshold Autoregression and Smooth Transition Autoregression. Classical and Bayesian estimation techniques are described for each model. Parametric tests for nonlinearity are reviewed with examples from the three types of models. Finally, forecasting and impulse response analysis is developed.
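Of the three model classes reviewed, threshold autoregression is the easiest to sketch: the autoregressive coefficient switches when a lagged value crosses a threshold. The two-regime SETAR simulation below uses illustrative coefficients, not parameters from the review.

```python
import numpy as np

def simulate_setar(n, threshold=0.0, phi_low=0.9, phi_high=-0.5, seed=1):
    """Simulate a two-regime self-exciting threshold AR(1) (SETAR) model.

    The regime is chosen by the previous value: y_{t-1} <= threshold uses
    phi_low, otherwise phi_high. Coefficients are illustrative.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        phi = phi_low if y[t - 1] <= threshold else phi_high
        y[t] = phi * y[t - 1] + rng.normal()
    return y

y = simulate_setar(1000)
# The sample lag-1 autocorrelation mixes the two regimes' dynamics.
acf1 = np.corrcoef(y[:-1], y[1:])[0, 1]
```

Markov switching and smooth transition models differ only in how the regime indicator is generated (a hidden Markov chain, or a smooth function of the lagged value, respectively).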
An Excel™ model of a radioactive series
Andrews, D. G. H.
2009-01-01
A computer model of the decay of a radioactive series, written in Visual Basic in Excel™, is presented. The model is based on the random selection of cells in an array. The results compare well with the theoretical equations. The model is a useful tool in teaching this aspect of radioactivity.
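The same idea, random selection deciding which atoms decay each step, can be sketched outside Excel. The decay probabilities and population below are illustrative assumptions, not values from the article.

```python
import random

def decay_chain(n_atoms=10000, p_a=0.02, p_b=0.01, steps=200, seed=42):
    """Monte Carlo of a two-member radioactive series A -> B -> C (stable).

    Each time step, every A atom decays with probability p_a and every B
    atom with probability p_b (illustrative rates, not from the article).
    Returns the population history of A and B.
    """
    random.seed(seed)
    a, b = n_atoms, 0
    history = [(a, b)]
    for _ in range(steps):
        decayed_a = sum(1 for _ in range(a) if random.random() < p_a)
        decayed_b = sum(1 for _ in range(b) if random.random() < p_b)
        a -= decayed_a
        b += decayed_a - decayed_b
        history.append((a, b))
    return history

hist = decay_chain()
```

The A population decays roughly exponentially while B first grows and then decays, matching the qualitative behaviour of the theoretical equations.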
Feature Matching in Time Series Modelling
Xia, Yingcun
2011-01-01
Using a time series model to mimic an observed time series has a long history. However, with regard to this objective, conventional estimation methods for discrete-time dynamical models are frequently found to be wanting. In the absence of a true model, we prefer an alternative approach to conventional model fitting that typically involves one-step-ahead prediction errors. Our primary aim is to match the joint probability distribution of the observable time series, including long-term features of the dynamics that underpin the data, such as cycles, long memory and others, rather than short-term prediction. For want of a better name, we call this specific aim "feature matching". The challenges of model mis-specification, measurement errors and the scarcity of data are forever present in real time series modelling. In this paper, by synthesizing earlier attempts into an extended-likelihood, we develop a systematic approach to empirical time series analysis to address these challenges and to aim at achieving...
Building Chaotic Model From Incomplete Time Series
Siek, Michael; Solomatine, Dimitri
2010-05-01
This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from observed time series and the prediction is made by a global model or adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, the building of any data-driven models depends on the completeness and quality of the data itself. However, the completeness of the data availability can not always be guaranteed since the measurement or data transmission is intermittently not working properly due to some reasons. We propose two main solutions dealing with incomplete time series: using imputing and non-imputing methods. For imputing methods, we utilized the interpolation methods (weighted sum of linear interpolations, Bayesian principle component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space from observed time series with missing values. This reconstruction results in non-continuous trajectories. However, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland as the entrance of Rotterdam Port. The hourly surge time series is available for duration of 1990-1996. For measuring the performance of the proposed methods, a synthetic time series with missing values generated by a particular random variable to the original (complete) time series is utilized. There exist two main performance measures used in this work: (1) error measures between the actual
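The core step described above, time-delayed phase space reconstruction, can be sketched compactly. The embedding dimension and delay below are illustrative assumptions; the paper's imputation methods and local-model prediction machinery are not reproduced.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Reconstruct a time-delayed phase space from a scalar series.

    Each row is a point (x[t], x[t-tau], ..., x[t-(dim-1)*tau]).
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - k) * tau:(dim - 1 - k) * tau + n]
                            for k in range(dim)])

t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)                      # a simple periodic "observed" series
points = delay_embed(x, dim=3, tau=5)
```

For incomplete series, the non-imputing approach in the paper amounts to building these rows only from time indices where no coordinate is missing.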
Estimating High-Dimensional Time Series Models
Medeiros, Marcelo C.; Mendes, Eduardo F.
We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly...
Forecasting with periodic autoregressive time series models
Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)
1999-01-01
This paper is concerned with forecasting univariate seasonal time series data using periodic autoregressive models. We show how one should account for unit roots and deterministic terms when generating out-of-sample forecasts. We illustrate the models for various quarterly UK consumption
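A periodic AR(1) lets the lag-1 coefficient differ by season and can be fitted by per-season least squares, as sketched below on simulated quarterly data. The coefficients are illustrative, and the paper's treatment of unit roots and deterministic terms is not covered.

```python
import numpy as np

def fit_par1(y, n_seasons=4):
    """Fit a periodic AR(1): y_t = phi_{s(t)} * y_{t-1} + e_t,
    with one coefficient per season, by per-season least squares."""
    phis = np.zeros(n_seasons)
    t = np.arange(1, len(y))
    for s in range(n_seasons):
        idx = t[t % n_seasons == s]
        phis[s] = np.dot(y[idx - 1], y[idx]) / np.dot(y[idx - 1], y[idx - 1])
    return phis

rng = np.random.default_rng(3)
true_phi = np.array([0.9, 0.2, -0.4, 0.6])   # one AR coefficient per quarter
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = true_phi[t % 4] * y[t - 1] + rng.normal()
phis = fit_par1(y)
```

The product of the seasonal coefficients here has modulus below one, so the simulated process is periodically stationary and the per-season estimates recover the true values.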
Outlier Detection in Structural Time Series Models
Marczak, Martyna; Proietti, Tommaso
Structural change affects the estimation of economic signals, like the underlying growth rate or the seasonally adjusted series. An important issue, which has attracted a great deal of attention also in the seasonal adjustment literature, is its detection by an expert procedure. … We investigate via Monte Carlo simulations how this approach performs for detecting additive outliers and level shifts in the analysis of nonstationary seasonal time series. The reference model is the basic structural model, featuring a local linear trend, possibly integrated of order two, stochastic seasonality and a stationary component. Further, we apply both kinds of indicator saturation to detect additive outliers and level shifts in the industrial production series in five European countries.
Modeling noisy time series Physiological tremor
Timmer, J
1998-01-01
Empirical time series often contain observational noise. We investigate the effect of this noise on the estimated parameters of models fitted to the data. For data of physiological tremor, i.e. a small amplitude oscillation of the outstretched hand of healthy subjects, we compare the results for a linear model that explicitly includes additional observational noise to one that ignores this noise. We discuss problems and possible solutions for nonlinear deterministic as well as nonlinear stochastic processes. Especially we discuss the state space model applicable for modeling noisy stochastic systems and Bock's algorithm capable for modeling noisy deterministic systems.
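The attenuation effect the paper examines can be demonstrated on an AR(1): ignoring additive observational noise biases the least-squares coefficient toward zero. The numbers below are illustrative, not the tremor data.

```python
import numpy as np

rng = np.random.default_rng(7)
n, phi = 5000, 0.9
# Latent AR(1) dynamics plus additive observational noise.
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(scale=1.5, size=n)     # noisy observations

def ols_ar1(z):
    """Lag-1 least-squares AR coefficient, ignoring observational noise."""
    return np.dot(z[:-1], z[1:]) / np.dot(z[:-1], z[:-1])

phi_clean = ols_ar1(x)   # close to the true 0.9
phi_noisy = ols_ar1(y)   # attenuated toward zero
```

A state space model, as discussed in the paper, avoids this bias by modelling the observational noise explicitly.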
Forecasting with nonlinear time series models
Kock, Anders Bredahl; Teräsvirta, Timo
… and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a partic...
Time Series Modelling using Proc Varmax
Milhøj, Anders
2007-01-01
In this paper it will be demonstrated how various time series problems can be addressed using Proc Varmax. The procedure is rather new, and hence new features like cointegration and testing for Granger causality are included, but it also means that more traditional ARIMA modelling, as outlined by Box & Jenkins, is performed in a more modern way using the computer resources which are now available...
Time series modeling for automatic target recognition
Sokolnikov, Andre
2012-05-01
Time series modeling is proposed for identification of targets whose images are not clearly seen. The model building takes into account air turbulence, precipitation, fog, smoke and other factors obscuring and distorting the image. The complex of library data (of images, etc.) serving as a basis for identification provides the deterministic part of the identification process, while the partial image features, distorted parts, irrelevant pieces and absence of particular features comprise the stochastic part of the target identification. The missing data approach is elaborated that helps the prediction process for the image creation or reconstruction. The results are provided.
Forecasting Daily Time Series using Periodic Unobserved Components Time Series Models
Koopman, Siem Jan; Ooms, Marius
2004-01-01
We explore a periodic analysis in the context of unobserved components time series models that decompose time series into components of interest such as trend and seasonal. Periodic time series models allow dynamic characteristics to depend on the period of the year, month, week or day. In the stand
Time series modeling for syndromic surveillance
Mandl Kenneth D
2003-01-01
Background: Emergency department (ED)-based syndromic surveillance systems identify abnormally high visit rates that may be an early signal of a bioterrorist attack. For example, an anthrax outbreak might first be detectable as an unusual increase in the number of patients reporting to the ED with respiratory symptoms. Reliably identifying these abnormal visit patterns requires a good understanding of the normal patterns of healthcare usage. Unfortunately, systematic methods for determining the expected number of ED visits on a particular day have not yet been well established. We present here a generalized methodology for developing models of expected ED visit rates. Methods: Using time-series methods, we developed robust models of ED utilization for the purpose of defining expected visit rates. The models were based on nearly a decade of historical data at a major metropolitan academic, tertiary care pediatric emergency department. The historical data were fit using trimmed-mean seasonal models, and additional models were fit with autoregressive integrated moving average (ARIMA) residuals to account for recent trends in the data. The detection capabilities of the model were tested with simulated outbreaks. Results: Models were built both for overall visits and for respiratory-related visits, classified according to the chief complaint recorded at the beginning of each visit. The mean absolute percentage error of the ARIMA models was 9.37% for overall visits and 27.54% for respiratory visits. A simple detection system based on the ARIMA model of overall visits was able to detect 7-day-long simulated outbreaks of 30 visits per day with 100% sensitivity and 97% specificity. Sensitivity decreased with outbreak size, dropping to 94% for outbreaks of 20 visits per day, and 57% for 10 visits per day, all while maintaining a 97% benchmark specificity. Conclusions: Time series methods applied to historical ED utilization data are an important tool
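The trimmed-mean seasonal baseline can be sketched directly: the expected visit count for each position in the weekly cycle is a trimmed mean over historical weeks. The trim fraction and synthetic data below are assumptions for illustration; the ARIMA residual stage is omitted.

```python
import numpy as np

def trimmed_mean_baseline(visits, period=7, trim=0.1):
    """Expected visit counts per position in the weekly cycle, using a
    trimmed mean over historical weeks (trim fraction per tail)."""
    visits = np.asarray(visits, dtype=float)
    baseline = np.zeros(period)
    for d in range(period):
        vals = np.sort(visits[d::period])
        k = int(trim * len(vals))
        baseline[d] = vals[k:len(vals) - k].mean()
    return baseline

rng = np.random.default_rng(5)
# Two years of synthetic daily ED visits with a weekday pattern.
pattern = np.array([120, 100, 95, 95, 100, 110, 130])
visits = rng.poisson(np.tile(pattern, 104))
baseline = trimmed_mean_baseline(visits)
```

Trimming makes the baseline robust to past outbreak days, which would otherwise inflate the expected rate.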
Time series models of symptoms in schizophrenia.
Tschacher, Wolfgang; Kupper, Zeno
2002-12-15
The symptom courses of 84 schizophrenia patients (mean age: 24.4 years; mean previous admissions: 1.3; 64% males) of a community-based acute ward were examined to identify dynamic patterns of symptoms and to investigate the relation between these patterns and treatment outcome. The symptoms were monitored by systematic daily staff ratings using a scale composed of three factors: psychoticity, excitement, and withdrawal. Patients showed moderate to high symptomatic improvement documented by effect size measures. Each of the 84 symptom trajectories was analyzed by time series methods using vector autoregression (VAR) that models the day-to-day interrelations between symptom factors. Multiple and stepwise regression analyses were then performed on the basis of the VAR models. Two VAR parameters were found to be associated significantly with favorable outcome in this exploratory study: 'withdrawal preceding a reduction of psychoticity' as well as 'excitement preceding an increase of withdrawal'. The findings were interpreted as generating hypotheses about how patients cope with psychotic episodes.
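Day-to-day interrelations of the kind estimated here can be sketched with a minimal VAR(1) fitted by least squares; the two-variable system and coefficients below are simulated illustrations, not the clinical symptom ratings.

```python
import numpy as np

def fit_var1(Y):
    """Least-squares VAR(1): Y_t = A @ Y_{t-1} + e_t (no intercept).
    Returns the coefficient matrix A."""
    X, Z = Y[:-1], Y[1:]
    # Solve Z = X @ A.T in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(X, Z, rcond=None)
    return A_T.T

rng = np.random.default_rng(11)
A_true = np.array([[0.5, 0.2],
                   [-0.3, 0.4]])    # illustrative cross-effects
Y = np.zeros((3000, 2))
for t in range(1, 3000):
    Y[t] = A_true @ Y[t - 1] + rng.normal(size=2)
A_hat = fit_var1(Y)
```

The off-diagonal entries of the estimated matrix play the role of the "factor X preceding a change in factor Y" parameters examined in the study.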
Genetic programming-based chaotic time series modeling
张伟; 吴智铭; 杨根科
2004-01-01
This paper proposes a Genetic Programming-Based Modeling (GPM) algorithm on chaotic time series. GP is used here to search for appropriate model structures in function space, and the Particle Swarm Optimization (PSO) algorithm is used for Nonlinear Parameter Estimation (NPE) of dynamic model structures. In addition, GPM integrates the results of Nonlinear Time Series Analysis (NTSA) to adjust the parameters and takes them as the criteria of established models. Experiments showed the effectiveness of such improvements on chaotic time series modeling.
Modeling Time Series Data for Supervised Learning
Baydogan, Mustafa Gokce
2012-01-01
Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science and multimedia naturally generate TS data. Each series provides a high-dimensional data vector that challenges the learning…
TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION
Goran Klepac
2007-12-01
The REFII model is an authorial mathematical model for time series data mining. The main purpose of the model is to automate time series analysis through a unique transformation model of time series. An advantage of this approach to time series analysis is the linkage of different methods for time series analysis, linking traditional data mining tools to time series, and constructing new algorithms for analyzing time series. It is worth mentioning that the REFII model is not a closed system, which means that we do not have a finite set of methods. At its core, this is a model for transformation of values of time series, which prepares data used by different sets of methods based on the same model of transformation in a domain of problem space. The REFII model gives a new approach to time series analysis based on a unique model of transformation, which is a base for all kinds of time series analysis. The advantage of the REFII model is its possible application in many different areas such as finance, medicine, voice recognition, face recognition and text mining.
Hidden Markov Models for Time Series An Introduction Using R
Zucchini, Walter
2009-01-01
Illustrates the flexibility of HMMs as general-purpose models for time series data. This work presents an overview of HMMs for analyzing time series data, from continuous-valued, circular, and multivariate series to binary data, bounded and unbounded counts and categorical observations.
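The forward recursion at the heart of HMM likelihood evaluation can be sketched in a few lines; the book works in R, so this Python version with an illustrative two-state discrete-emission model is an assumption-laden stand-in, not the book's code.

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete-observation HMM via the scaled
    forward algorithm. pi: initial state probs, A: transition matrix,
    B[state, symbol]: emission probs."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],     # state 0 mostly emits symbol 0
              [0.2, 0.8]])    # state 1 mostly emits symbol 1
ll = hmm_log_likelihood([0, 0, 1, 1, 1, 0], pi, A, B)
```

The per-step rescaling keeps the recursion numerically stable for long series, which is why the function accumulates the log-likelihood from the scaling factors.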
Modelling road accidents: An approach using structural time series
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
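The simplest structural model identified there, the local level, can be sketched with a Kalman filter; the variances and simulated data below are illustrative assumptions, and the seasonal component of the chosen specification is omitted.

```python
import numpy as np

def local_level_filter(y, var_eps, var_eta, a0=0.0, p0=1e7):
    """Kalman filter for the local level model:
        y_t = mu_t + eps_t,   mu_{t+1} = mu_t + eta_t.
    Returns the filtered level estimates."""
    a, p = a0, p0
    levels = []
    for obs in y:
        # Update with the new observation.
        f = p + var_eps                 # innovation variance
        k = p / f                       # Kalman gain
        a = a + k * (obs - a)
        p = p * (1 - k)
        levels.append(a)
        # Predict the next level (random walk: mean unchanged).
        p = p + var_eta
    return np.array(levels)

rng = np.random.default_rng(9)
mu = np.cumsum(rng.normal(scale=0.3, size=300))   # true random-walk level
y = mu + rng.normal(scale=1.0, size=300)          # noisy observations
level = local_level_filter(y, var_eps=1.0, var_eta=0.09)
```

The filtered level tracks the true signal more closely than the raw observations, which is the basis for the smoothed trend and prediction error variance reported in such models.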
Fourier Series, the DFT and Shape Modelling
Skoglund, Karl
2004-01-01
This report provides an introduction to Fourier series, the discrete Fourier transform, complex geometry and Fourier descriptors for shape analysis. The content is aimed at undergraduate and graduate students who wish to learn about Fourier analysis in general, as well as its application to shape...
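Fourier descriptors of a closed contour follow directly from the DFT: treat the points as complex numbers, transform, and keep low-order coefficients. numpy's FFT stands in here for whatever implementation the report uses; the test shape is an illustrative ellipse.

```python
import numpy as np

def fourier_descriptors(contour, n_keep=10):
    """Fourier descriptors of a closed 2-D contour.

    The contour points are treated as complex numbers x + iy and
    transformed with the DFT; keeping only low-order coefficients gives
    a smooth, compact shape description."""
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z)
    # Zero out high frequencies, keeping n_keep from each end of the spectrum.
    kept = np.zeros_like(coeffs)
    kept[:n_keep] = coeffs[:n_keep]
    kept[-n_keep:] = coeffs[-n_keep:]
    smooth = np.fft.ifft(kept)
    return coeffs, np.column_stack([smooth.real, smooth.imag])

theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
# An ellipse as a test shape: its spectrum lives entirely at frequencies +-1.
contour = np.column_stack([2 * np.cos(theta), np.sin(theta)])
coeffs, reconstructed = fourier_descriptors(contour)
```

Normalizing the coefficients by the first harmonic's magnitude and discarding phase yields descriptors invariant to scale and rotation, a standard next step in shape analysis.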
Trend time-series modeling and forecasting with neural networks.
Qi, Min; Zhang, G Peter
2008-05-01
Despite its great importance, there has been no general consensus on how to model the trends in time-series data. Compared to traditional approaches, neural networks (NNs) have shown some promise in time-series forecasting. This paper investigates how to best model trend time series using NNs. Four different strategies (raw data, raw data with time index, detrending, and differencing) are used to model various trend patterns (linear, nonlinear, deterministic, stochastic, and breaking trend). We find that with NNs differencing often gives meritorious results regardless of the underlying data generating processes (DGPs). This finding is also confirmed by the real gross national product (GNP) series.
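Two of the preprocessing strategies compared there, differencing and detrending, are one-liners in numpy; the sketch below applies them to an illustrative linear-trend series, with no neural network involved.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(200, dtype=float)
y = 0.5 * t + rng.normal(size=200)   # deterministic linear trend + noise

# Strategy 1: differencing removes the trend level by level.
diffed = np.diff(y)                  # mean approximately equals the slope

# Strategy 2: detrending subtracts a fitted linear trend.
slope, intercept = np.polyfit(t, y, 1)
detrended = y - (slope * t + intercept)
```

For a stochastic (unit-root) trend, only differencing yields a stationary input, which is one reason the paper finds it performs well across data generating processes.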
Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models
Price, Larry R.
2012-01-01
The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…
A multivariate approach to modeling univariate seasonal time series
Ph.H.B.F. Franses (Philip Hans)
1994-01-01
A seasonal time series can be represented by a vector autoregressive model for the annual series containing the seasonal observations. This model allows for periodically varying coefficients. When the vector elements are integrated, the maximum likelihood cointegration method can be used
Parameterizing unconditional skewness in models for financial time series
He, Changli; Silvennoinen, Annastiina; Teräsvirta, Timo
In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series such as returns is skewed. Therefore it is of importance to know what properties a model should possess if it is to accommodate...
Modeling Persistence In Hydrological Time Series Using Fractional Differencing
Hosking, J. R. M.
1984-12-01
The class of autoregressive integrated moving average (ARIMA) time series models may be generalized by permitting the degree of differencing d to take fractional values. Models including fractional differencing are capable of representing persistent series (d > 0) or short-memory series (d = 0). The class of fractionally differenced ARIMA processes provides a more flexible way than has hitherto been available of simultaneously modeling the long-term and short-term behavior of a time series. In this paper some fundamental properties of fractionally differenced ARIMA processes are presented. Methods of simulating these processes are described. Estimation of the parameters of fractionally differenced ARIMA models is discussed, and an approximate maximum likelihood method is proposed. The methodology is illustrated by fitting fractionally differenced models to time series of streamflows and annual temperatures.
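The fractional difference operator (1-B)^d expands into weights computable by a simple recursion, w_0 = 1 and w_k = w_{k-1}(k-1-d)/k, which is the standard binomial expansion rather than anything specific to this paper. The truncation length in the sketch is an illustrative choice.

```python
import numpy as np

def fracdiff_weights(d, n):
    """Weights of (1-B)^d: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def fracdiff(x, d, trunc=100):
    """Truncated fractional differencing of a series."""
    w = fracdiff_weights(d, trunc)
    out = np.empty(len(x))
    for t in range(len(x)):
        m = min(t + 1, trunc)
        out[t] = np.dot(w[:m], x[t::-1][:m])
    return out

w = fracdiff_weights(0.4, 6)        # slowly decaying weights for 0 < d < 0.5
x = np.arange(10, dtype=float)
dx = fracdiff(x, 1.0)               # d = 1 reproduces the first difference
```

For 0 < d < 0.5 the weights decay hyperbolically rather than geometrically, which is what lets ARFIMA models capture persistence that short-memory ARIMA models cannot.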
LIFE DISTRIBUTION OF SERIES UNDER THE SUCCESSIVE DAMAGE MODEL
WANG Dongqian; C. D. Lai; LI Guoying
2003-01-01
We analyse further the reliability behaviour of series and parallel systems in the successive damage model initiated by Downton. The results are compared with those obtained for other models with different bivariate distributions.
General expression for linear and nonlinear time series models
Ren HUANG; Feiyun XU; Ruwen CHEN
2009-01-01
The typical time series models such as ARMA, AR, and MA are founded on the normality and stationarity of a system and expressed by a linear difference equation; therefore, they are strictly limited to linear systems. However, practical systems often contain nonlinear factors, so it is difficult to fit real systems with the above models. This paper proposes a general expression for linear and nonlinear auto-regressive time series models (GNAR). With the gradient optimization method and a modified AIC information criterion integrated with the prediction error, parameter estimation and order determination are achieved. Model simulation and experiments show that the GNAR model can accurately approximate the dynamic characteristics of most of the nonlinear models applied in academia and engineering. The modeling and prediction accuracy of the GNAR model is superior to the classical time series models. The proposed GNAR model is flexible and effective.
Madsen, Henrik; Pearson, Charles P.; Rosbjerg, Dan
1997-01-01
Two regional estimation schemes, based on, respectively, partial duration series (PDS) and annual maximum series (AMS), are compared. The PDS model assumes a generalized Pareto (GP) distribution for modeling threshold exceedances, corresponding to a generalized extreme value (GEV) distribution for annual maxima. First, the accuracy of PDS/GP and AMS/GEV regional index-flood T-year event estimators is compared using Monte Carlo simulations. For estimation in typical regions assuming a realistic degree of heterogeneity, the PDS/GP index-flood model is more efficient. The regional PDS and AMS...
Lagrangian Time Series Models for Ocean Surface Drifter Trajectories
Sykulski, Adam M; Lilly, Jonathan M; Danioux, Eric
2016-01-01
This paper proposes stochastic models for the analysis of ocean surface trajectories obtained from freely-drifting satellite-tracked instruments. The proposed time series models are used to summarise large multivariate datasets and infer important physical parameters of inertial oscillations and other ocean processes. Nonstationary time series methods are employed to account for the spatiotemporal variability of each trajectory. Because the datasets are large, we construct computationally efficient methods through the use of frequency-domain modelling and estimation, with the data expressed as complex-valued time series. We detail how practical issues related to sampling and model misspecification may be addressed using semi-parametric techniques for time series, and we demonstrate the effectiveness of our stochastic models through application to both real-world data and to numerical model output.
Time series modelling of overflow structures
Carstensen, J.; Harremoës, P.
1997-01-01
The dynamics of a storage pipe is examined using a grey-box model based on on-line measured data. The grey-box modelling approach uses a combination of physically-based and empirical terms in the model formulation. The model provides an on-line state estimate of the overflows, pumping capacities … to the overflow structures. The capacity of a pump draining the storage pipe has been estimated for two rain events, revealing that the pump was malfunctioning during the first rain event. The grey-box modelling approach is applicable for automated on-line surveillance and control. (C) 1997 IAWQ. Published …
Stochastic modelling of regional archaeomagnetic series
Hellio, G; Bouligand, C; Jault, D
2015-01-01
SUMMARY We report a new method to infer continuous time series of the declination, inclination and intensity of the magnetic field from archeomagnetic data. Adopting a Bayesian perspective, we need to specify a priori knowledge about the time evolution of the magnetic field. It consists in a time correlation function that we choose to be compatible with present knowledge about the geomagnetic time spectra. The results are presented as distributions of possible values for the declination, inclination or intensity. We find that the methodology can be adapted to account for the age uncertainties of archeological artefacts and we use Markov Chain Monte Carlo to explore the possible dates of observations. We apply the method to intensity datasets from Mari, Syria and to intensity and directional datasets from Paris, France. Our reconstructions display more rapid variations than previous studies and we find that the possible values of geomagnetic field elements are not necessarily normally distributed. Another outp...
The use of synthetic input sequences in time series modeling
Oliveira, Dair Jose de [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil); Letellier, Christophe [CORIA/CNRS UMR 6614, Universite et INSA de Rouen, Av. de l' Universite, BP 12, F-76801 Saint-Etienne du Rouvray cedex (France); Gomes, Murilo E.D. [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil); Aguirre, Luis A. [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil)], E-mail: aguirre@cpdee.ufmg.br
2008-08-04
In many situations time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input that is included to prevent the model from settling down to a trivial solution, while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure.
Ruin Probability in Linear Time Series Model
ZHANG Lihong
2005-01-01
This paper analyzes a continuous-time risk model in which a linear time series model describes the claim process. Time is discretized stochastically at the instants when claims occur; Doob's stopping time theorem and martingale inequalities are then used to obtain expressions for the ruin probability, together with exponential and non-exponential upper bounds for the ruin probability over an infinite time horizon. Numerical results are included to illustrate the accuracy of the non-exponential bound.
Long Memory Models to Generate Synthetic Hydrological Series
Guilherme Armando de Almeida Pereira
2014-01-01
In Brazil, much of the energy production comes from hydroelectric plants whose planning is not trivial due to the strong dependence on rainfall regimes. This planning is accomplished through optimization models that use inputs such as synthetic hydrologic series generated from the statistical model PAR(p) (periodic autoregressive). Recently, Brazil began the search for alternative models able to capture effects that the traditional PAR(p) model does not incorporate, such as long memory effects. Long memory in a time series can be defined as a significant dependence between lags separated by a long period of time. Thus, this research develops a study of the effects of long dependence in the series of streamflow natural energy in the South subsystem, in order to estimate a long memory model capable of generating synthetic hydrologic series.
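The periodic autoregressive idea behind PAR(p) can be sketched in miniature. The following is a minimal, hypothetical PAR(1) simulator, with one autoregressive coefficient and one noise scale per calendar month; the coefficient values are invented for illustration and this is not the Brazilian planning model itself:

```python
import random

def generate_par1(phi, sigma, n_years, seed=0):
    """Generate a synthetic monthly series from a PAR(1) model:
    x[t] = phi[m] * x[t-1] + e[t],  e[t] ~ N(0, sigma[m]^2),
    where m = t % 12 is the month (period) of time step t."""
    rng = random.Random(seed)
    x, prev = [], 0.0
    for t in range(12 * n_years):
        m = t % 12
        prev = phi[m] * prev + rng.gauss(0.0, sigma[m])
        x.append(prev)
    return x

# Illustrative parameters: stronger persistence in the first half-year,
# larger noise in the second (both invented).
phi = [0.8] * 6 + [0.4] * 6
sigma = [1.0] * 6 + [2.0] * 6
series = generate_par1(phi, sigma, n_years=10)
```

A full PAR(p) generator would carry p lagged terms per month and fit the coefficients from historical inflow data; the recursion above only shows the period-dependent structure.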
A generalized trigonometric series function model for determining ionospheric delay
YUAN Yunbin; OU Jikun
2004-01-01
A generalized trigonometric series function (GTSF) model, with an adjustable number of parameters, is proposed and analyzed to study the ionosphere using GPS, especially to provide ionospheric delay corrections for single-frequency GPS users. The preliminary results show that, in comparison with the trigonometric series function (TSF) model and the polynomial (POLY) model, the GTSF model can more precisely describe ionospheric variation and more efficiently provide the ionospheric correction when GPS data are used to investigate or extract the earth's ionospheric total electron content. It is also shown that the GTSF model can further improve the precision and accuracy of modeling local ionospheric delays.
Madsen, Henrik; Rasmussen, Peter F.; Rosbjerg, Dan
1997-01-01
Two different models for analyzing extreme hydrologic events, based on, respectively, partial duration series (PDS) and annual maximum series (AMS), are compared. The PDS model assumes a generalized Pareto distribution for modeling threshold exceedances, corresponding to a generalized extreme value distribution for annual maxima. The performance of the two models in terms of the uncertainty of the T-year event estimator is evaluated in the cases of estimation with, respectively, the maximum likelihood (ML) method, the method of moments (MOM), and the method of probability weighted moments (PWM). ... model with ML estimation for large positive shape parameters. Since heavy-tailed distributions, corresponding to negative shape parameters, are by far the most common in hydrology, the PDS model generally is to be preferred for at-site quantile estimation.
Fisher Information Framework for Time Series Modeling
Venkatesan, R C
2016-01-01
A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the establishment of a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least squares constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defi...
Convergent series for lattice models with polynomial interactions
Ivanov, Aleksandr S.; Sazonov, Vasily K.
2017-01-01
The standard perturbative weak-coupling expansions in lattice models are asymptotic. The reason for this is hidden in the incorrect interchange of the summation and integration. However, by substituting the Gaussian initial approximation of the perturbative expansions with a certain interacting model, or by regularizing the original lattice integrals, one can construct the desired convergent series. In this paper we develop methods based on the joint and separate utilization of the regularization and the new initial approximation. We prove that the convergent series exist and can be expressed as re-summed standard perturbation theory for any model on the finite lattice with a polynomial interaction of even degree. We discuss properties of such series and study their applicability to practical computations on the example of the lattice ϕ⁴-model. We calculate the ⟨ϕ_n²⟩ expectation value using the convergent series; the comparison of the results with the Borel re-summation and Monte Carlo simulations shows good agreement between all these methods.
Parameterizing unconditional skewness in models for financial time series
He, Changli; Silvennoinen, Annastiina; Teräsvirta, Timo
In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series such as returns is skewed. Therefore it is of importance to know what properties a model should possess if it is to accommodate unconditional skewness. We consider modelling the unconditional mean and variance using models that respond nonlinearly or asymmetrically to shocks. We investigate the implications of these models on the third-moment structure of the marginal distribution as well as conditions under which the unconditional ...
Structural Equation Modeling of Multivariate Time Series
du Toit, Stephen H. C.; Browne, Michael W.
2007-01-01
The covariance structure of a vector autoregressive process with moving average residuals (VARMA) is derived. It differs from other available expressions for the covariance function of a stationary VARMA process and is compatible with current structural equation methodology. Structural equation modeling programs, such as LISREL, may therefore be…
Statistical modelling of agrometeorological time series by exponential smoothing
Murat, Małgorzata; Malinowska, Iwona; Hoffmann, Holger; Baranowski, Piotr
2016-01-01
Meteorological time series are used in modelling agrophysical processes of the soil-plant-atmosphere system which determine plant growth and yield. Additionally, long-term meteorological series are used in climate change scenarios. Such studies often require forecasting or projection of meteorological variables, e.g. the projection of the occurrence of extreme events. The aim of the article was to determine the most suitable exponential smoothing models to generate forecasts using data on air temperature, wind speed, and precipitation time series in Jokioinen (Finland), Dikopshof (Germany), Lleida (Spain), and Lublin (Poland). These series exhibit regular additive seasonality or non-seasonality without any trend, which is confirmed by their autocorrelation functions and partial autocorrelation functions. The most suitable models were indicated by the smallest mean absolute error and the smallest root mean squared error.
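The selection criterion described above (smallest MAE and RMSE over candidate smoothing models) can be illustrated with the simplest member of the exponential smoothing family. This is a minimal sketch of simple exponential smoothing with a grid search over the smoothing weight; the toy data and the grid are invented, and the seasonal variants the article also considers are not shown:

```python
def ses_forecasts(y, alpha):
    """One-step-ahead forecasts from simple exponential smoothing:
    level <- alpha * observation + (1 - alpha) * level."""
    level = y[0]
    fcast = [y[0]]            # trivial forecast for the first observation
    for obs in y[1:]:
        fcast.append(level)   # forecast made before seeing obs
        level = alpha * obs + (1 - alpha) * level
    return fcast

def mae(y, f):
    return sum(abs(a - b) for a, b in zip(y, f)) / len(y)

def rmse(y, f):
    return (sum((a - b) ** 2 for a, b in zip(y, f)) / len(y)) ** 0.5

# Pick the smoothing weight with the smallest in-sample MAE (toy data).
y = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0, 13.5, 15.0]
best_alpha = min((a / 10 for a in range(1, 10)),
                 key=lambda a: mae(y, ses_forecasts(y, a)))
```

In practice the error measures would be computed on a held-out forecast horizon rather than in sample, but the comparison logic is the same.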
An Excel[TM] Model of a Radioactive Series
Andrews, D. G. H.
2009-01-01
A computer model of the decay of a radioactive series, written in Visual Basic in Excel[TM], is presented. The model is based on the random selection of cells in an array. The results compare well with the theoretical equations. The model is a useful tool in teaching this aspect of radioactivity. (Contains 4 figures.)
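The random-selection idea of the spreadsheet model translates directly into any language with a random number generator. Below is a hypothetical Python sketch (not the article's Visual Basic code) of a two-stage decay chain A -> B -> C with invented per-step decay probabilities, tracking counts instead of an array of cells:

```python
import random

def simulate_chain(n_atoms, p_a, p_b, n_steps, seed=1):
    """Monte Carlo model of a decay chain A -> B -> C (stable).
    At each time step every A atom decays with probability p_a and
    every B atom with probability p_b, mirroring the random-cell idea
    of the spreadsheet model."""
    rng = random.Random(seed)
    a, b, c = n_atoms, 0, 0
    history = [(a, b, c)]
    for _ in range(n_steps):
        decayed_a = sum(rng.random() < p_a for _ in range(a))
        decayed_b = sum(rng.random() < p_b for _ in range(b))
        a -= decayed_a
        b += decayed_a - decayed_b
        c += decayed_b
        history.append((a, b, c))
    return history

history = simulate_chain(1000, 0.1, 0.05, 50)
```

For small p the counts approach the Bateman equations for the chain, which is the comparison with "the theoretical equations" the abstract refers to.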
With string model to time series forecasting
Pinčák, Richard; Bartoš, Erik
2015-10-01
The overwhelming majority of econometric models applied on a long-term basis in the financial forex market do not work sufficiently well. The reason is that transaction costs and arbitrage opportunities are not included, so the real financial markets are not simulated. Analyses are conducted not on non-equidistant data but rather on aggregated data, which also does not reflect the real financial case. In this paper, we would like to show a new way to analyze and, moreover, forecast the financial market. We utilize the projections of the real exchange rate dynamics onto a string-like topology in the OANDA market. The latter approach allows us to build stable prediction models for trading in the financial forex market. A real application of the multi-string structures is provided to demonstrate our ideas for solving the problem of robust portfolio selection. A comparison with trend-following strategies was performed, and the stability of the algorithm with respect to transaction costs over long trade periods was confirmed.
Multiplicative ARMA models to generate hourly series of global irradiation
Mora-Lopez, L. [Universidad de Malaga (Spain). Dpto. Lenguajes y C. Computacion; Sidrach-de-Cardona, M. [Universidad de Malaga (Spain). Dpto. Fisica Aplicada
1998-11-01
A methodology to generate hourly series of global irradiation is proposed. The only input parameter which is required is the monthly mean value of daily global irradiation, which is available for most locations. The procedure to obtain new series is based on the use of a multiplicative autoregressive moving-average statistical model for time series with regular and seasonal components. The multiplicative nature of these models enables capture of the two types of relationships observed in recorded hourly series of global irradiation: on the one hand, the relationship between the value at one hour and the value at the previous hour; and on the other hand, the relationship between the value at one hour in one day and the value at the same hour in the previous day. In this paper the main drawback which arises when using these models to generate new series is solved: namely, the need for available recorded series in order to obtain the three parameters contained in the statistical ARMA model which is proposed (autoregressive coefficient, moving-average coefficient and variance of the error term). Specifically, expressions which enable estimation of these parameters using only monthly mean values of daily global irradiation are proposed in this paper. (author)
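The two relationships the abstract describes (hour-to-previous-hour and hour-to-same-hour-previous-day) are exactly what the multiplicative seasonal AR polynomial encodes. The following is a hypothetical sketch of the autoregressive skeleton of a multiplicative (1,0,0)x(1,0,0)_24 model with invented coefficients; it simulates rather than estimates, and the paper's actual contribution (estimating the parameters from monthly means alone) is not reproduced here:

```python
import random

def simulate_seasonal_ar(phi, Phi, sigma, n, period=24, seed=0):
    """Simulate the AR part of a multiplicative (1,0,0)x(1,0,0)_24 model:
      x[t] = phi*x[t-1] + Phi*x[t-period] - phi*Phi*x[t-period-1] + e[t]
    The cross term -phi*Phi*x[t-period-1] arises because the hourly and
    daily AR polynomials multiply: (1 - phi*B)(1 - Phi*B^period)."""
    rng = random.Random(seed)
    x = [0.0] * (period + 1)              # zero pre-sample values
    for _ in range(n):
        e = rng.gauss(0.0, sigma)
        x.append(phi * x[-1] + Phi * x[-period]
                 - phi * Phi * x[-period - 1] + e)
    return x[period + 1:]                 # drop the pre-sample burn-in

hourly = simulate_seasonal_ar(phi=0.7, Phi=0.5, sigma=1.0, n=480)
```

A real irradiation generator would add the moving-average terms and a deterministic clear-sky profile so that simulated values stay physically meaningful (non-negative, zero at night).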
Stochastic modeling of hourly rainfall times series in Campania (Italy)
Giorgio, M.; Greco, R.
2009-04-01
Occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlation between recorded rainfall data and observed landslides and/or river discharges. Effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which allows larger lead-times to be gained. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX and ARMAX (e.g. Salas [1992]). Such models gave the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool to implement an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of the Campania Region civil...
Models for Pooled Time-Series Cross-Section Data
Lawrence E Raffalovich
2015-07-01
Several models are available for the analysis of pooled time-series cross-section (TSCS) data, defined as "repeated observations on fixed units" (Beck and Katz 1995). In this paper, we run the following models: (1) a completely pooled model, (2) fixed effects models, and (3) multi-level/hierarchical linear models. To illustrate these models, we use a Generalized Least Squares (GLS) estimator with cross-section weights and panel-corrected standard errors (with EViews 8) on the cross-national homicide trends data of forty countries from 1950 to 2005, which we source from published research (Messner et al. 2011). We describe and discuss the similarities and differences between the models, and what information each can contribute to help answer substantive research questions. We conclude with a discussion of how the models we present may help to mitigate validity threats inherent in pooled time-series cross-section data analysis.
Hybrid grey model to forecast monitoring series with seasonality
WANG Qi-jie; LIAO Xin-hao; ZHOU Yong-hong; ZOU Zheng-rong; ZHU Jian-jun; PENG Yue
2005-01-01
The grey forecasting model has been successfully applied to many fields. However, the precision of the GM(1,1) model is not high. In order to remove the seasonal fluctuations in monitoring series before building the GM(1,1) model, the forecasting series of GM(1,1) was built, and an inverse process was used to restore the seasonal fluctuations. Two deseasonalization methods were presented, i.e., seasonal index-based deseasonalization and standard normal distribution-based deseasonalization. They were combined with the GM(1,1) model to form hybrid grey models. A simple but practical method to further improve the forecasting results was also suggested. For comparison, a conventional periodic function model was investigated. The concept and algorithms were tested with four years of monthly monitoring data. The results show that on the whole the seasonal index-GM(1,1) model outperforms the conventional periodic function model, which in turn outperforms the SND-GM(1,1) model. The mean absolute error and mean square error of the seasonal index-GM(1,1) model are 30.69% and 54.53% smaller, respectively, than those of the conventional periodic function model. The high accuracy and straightforward, easy implementation of the proposed hybrid seasonal index-grey model make it a powerful analysis technique for seasonal monitoring series.
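The GM(1,1) core of the hybrid above is small enough to write out. This is a sketch of the textbook GM(1,1) procedure only (accumulate, fit the grey differential equation by least squares, invert), with invented toy data; the paper's deseasonalization wrappers are not included:

```python
import math

def gm11(x0, horizon=0):
    """Textbook GM(1,1): fit dx1/dt + a*x1 = b to the accumulated series
    x1 and return fitted values plus `horizon` extra forecasts."""
    n = len(x0)
    x1, s = [], 0.0
    for v in x0:                      # accumulated generating operation (AGO)
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    m = n - 1
    # Least squares for y = u*z + b, then a = -u.
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    u = (m * szy - sz * sy) / (m * szz - sz * sz)
    b = (sy - u * sz) / m
    a = -u
    # Time response, then difference back to the original scale.
    x1_hat = lambda k: (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + horizon)]

# Toy near-geometric series: GM(1,1) should track it closely.
fitted = gm11([100.0, 110.0, 121.0, 133.1, 146.41], horizon=2)
```

The hybrid models in the paper would first divide out (or subtract) a seasonal index, run this fit on the deseasonalized series, and then reapply the index to the forecasts.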
A multivariate heuristic model for fuzzy time-series forecasting.
Huarng, Kun-Huang; Yu, Tiffany Hui-Kuang; Hsu, Yu Wei
2007-08-01
Fuzzy time-series models have been widely applied due to their ability to handle nonlinear data directly and because no rigid assumptions for the data are needed. In addition, many such models have been shown to provide better forecasting results than their conventional counterparts. However, since most of these models require complicated matrix computations, this paper proposes the adoption of a multivariate heuristic function that can be integrated with univariate fuzzy time-series models into multivariate models. Such a multivariate heuristic function can easily be extended and integrated with various univariate models. Furthermore, the integrated model can handle multiple variables to improve forecasting results and, at the same time, avoid complicated computations due to the inclusion of multiple variables.
Unit root modeling for trending stock market series
Afees A. Salisu
2016-06-01
In this paper, we examine how the unit root for stock market series should be modeled. We employ the Narayan and Liu (2015) trend GARCH-based unit root test and its variants in order to more carefully capture the inherent statistical behavior of the series. We utilize daily, weekly and monthly data covering nineteen countries across the regions of America, Asia and Europe. We find that the nature of data frequency matters for unit root testing when dealing with stock market data. Our evidence also suggests that stock market data are better modeled in the presence of structural breaks, conditional heteroscedasticity and a time trend.
Analyzing the Dynamics of Nonlinear Multivariate Time Series Models
Denghua Zhong; Zhengfeng Zhang; Donghai Liu; Stefan Mittnik
2004-01-01
This paper analyzes the dynamics of nonlinear multivariate time series models, as represented by generalized impulse response functions and asymmetric functions. We illustrate measures of shock persistence and asymmetric effects of shocks derived from the generalized impulse response functions and asymmetric functions in bivariate smooth transition regression models. The empirical work investigates a bivariate smooth transition model of US GDP and the unemployment rate.
Modelling, simulation and inference for multivariate time series of counts
Veraart, Almut E. D.
2016-01-01
This article presents a new continuous-time modelling framework for multivariate time series of counts which have an infinitely divisible marginal distribution. The model is based on a mixed moving average process driven by Lévy noise, called a trawl process, where the serial correlation and the cross-sectional dependence are modelled independently of each other. Such processes can exhibit short or long memory. We derive a stochastic simulation algorithm and a statistical inference meth...
Quality Quandaries- Time Series Model Selection and Parsimony
Bisgaard, Søren; Kulahci, Murat
2009-01-01
Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.
Model and Variable Selection Procedures for Semiparametric Time Series Regression
Risa Kato
2009-01-01
Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
A generalized exponential time series regression model for electricity prices
Haldrup, Niels; Knapik, Oskar; Proietti, Tomasso
We consider the issue of modeling and forecasting daily electricity spot prices on the Nord Pool Elspot power market. We propose a method that can handle seasonal and non-seasonal persistence by modelling the price series as a generalized exponential process. As the presence of spikes can distort the estimation of the dynamic structure of the series, we consider an iterative estimation strategy which, conditional on a set of parameter estimates, clears the spikes using a data cleaning algorithm, and re-estimates the parameters using the cleaned data so as to robustify the estimates. Conditional on the estimated model, the best linear predictor is constructed. Our modeling approach provides a good fit within sample and outperforms competing benchmark predictors in terms of forecasting accuracy. We also find that building separate models for each hour of the day and averaging the forecasts is a better...
Deriving dynamic marketing effectiveness from econometric time series models
C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)
2003-01-01
textabstractTo understand the relevance of marketing efforts, it has become standard practice to estimate the long-run and short-run effects of the marketing-mix, using, say, weekly scanner data. A common vehicle for this purpose is an econometric time series model. Issues that are addressed in the
Combined forecasts from linear and nonlinear time series models
N. Terui (Nobuhiko); H.K. van Dijk (Herman)
1999-01-01
textabstractCombined forecasts from a linear and a nonlinear model are investigated for time series with possibly nonlinear characteristics. The forecasts are combined by a constant coefficient regression method as well as a time varying method. The time varying method allows for a locally (non)line
Sparse time series chain graphical models for reconstructing genetic networks
Abegaz, Fentaw; Wit, Ernst
2013-01-01
We propose a sparse high-dimensional time series chain graphical model for reconstructing genetic networks from gene expression data parametrized by a precision matrix and autoregressive coefficient matrix. We consider the time steps as blocks or chains. The proposed approach explores patterns of co
Modeling irregularly spaced residual series as a continuous stochastic process
Von Asmuth, J.R.; Bierkens, M.F.P.
2005-01-01
In this paper, the background and functioning of a simple but effective continuous time approach for modeling irregularly spaced residual series is presented. The basic equations were published earlier by von Asmuth et al. (2002), who used them as part of a continuous time transfer function noise mo
Optimization of recurrent neural networks for time series modeling
Pedersen, Morten With
1997-01-01
The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks are considered, working from only a single external input, with one layer of nonlinear hidden units and a linear output unit, applied to prediction of discrete time...
Modeling Philippine Stock Exchange Composite Index Using Time Series Analysis
Gayo, W. S.; Urrutia, J. D.; Temple, J. M. F.; Sandoval, J. R. D.; Sanglay, J. E. A.
2015-06-01
This study was conducted to develop a time series model of the Philippine Stock Exchange Composite Index (PSEi) and its volatility using a finite mixture of ARIMA models with conditional variance equations such as ARCH, GARCH, EGARCH, TARCH and PARCH models. The study also aimed to find out the reason behind the behavior of PSEi, that is, which of the economic variables - Consumer Price Index, crude oil price, foreign exchange rate, gold price, interest rate, money supply, price-earnings ratio, Producers' Price Index and terms of trade - can be used in projecting future values of PSEi; this was examined using the Granger Causality Test. The findings showed that the best time series model for the Philippine Stock Exchange Composite Index is ARIMA(1,1,5) - ARCH(1). Also, the Consumer Price Index, crude oil price and foreign exchange rate are factors found to Granger-cause the Philippine Stock Exchange Composite Index.
Recursive Bayesian recurrent neural networks for time-series modeling.
Mirikitani, Derrick T; Nikolaev, Nikolay
2010-02-01
This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
A Comparative Study of Portmanteau Tests for Univariate Time Series Models
Sohail Chand
2006-07-01
Time series model diagnostic checking is the most important stage of time series model building. In this paper, a comparison among several suggested diagnostic tests has been made using simulated time series data.
Time series ARIMA models for daily price of palm oil
Ariff, Noratiqah Mohd; Zamhawari, Nor Hashimah; Bakar, Mohd Aftar Abu
2015-02-01
Palm oil is deemed one of the most important commodities forming the economic backbone of Malaysia. Modeling and forecasting the daily price of palm oil is of great interest for Malaysia's economic growth. In this study, time series ARIMA models are used to fit the daily price of palm oil. The Akaike Information Criterion (AIC), the Akaike Information Criterion with a correction for finite sample sizes (AICc), and the Bayesian Information Criterion (BIC) are used to compare the different ARIMA models considered. It is found that the ARIMA(1,2,1) model is suitable for the daily price of crude palm oil in Malaysia for the years 2010 to 2012.
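The three criteria named above differ only in how they penalize parameter count. A minimal sketch of the Gaussian forms and of the comparison step follows; the residual sums of squares for the two candidate fits are invented for illustration and are not the paper's numbers:

```python
import math

def aic(n, k, rss):
    """Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

def aicc(n, k, rss):
    """Small-sample corrected AIC: AIC + 2k(k+1)/(n-k-1)."""
    return aic(n, k, rss) + 2 * k * (k + 1) / (n - k - 1)

def bic(n, k, rss):
    """Bayesian Information Criterion: n*ln(RSS/n) + k*ln(n)."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical residual sums of squares from two candidate ARIMA fits
# on the same n = 750 observations: (k parameters, RSS).
candidates = {"ARIMA(1,2,1)": (3, 41.2), "ARIMA(2,2,2)": (5, 41.15)}
n = 750
best = min(candidates, key=lambda name: aicc(n, *candidates[name]))
```

Here the larger model reduces the RSS only slightly, so the penalty terms tip the choice to the smaller ARIMA(1,2,1), which is the kind of trade-off the abstract's comparison performs.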
Calorimetric measurement and modelling of the equivalent series of capacitors
Seguin, B.; Gosse, J. P.; Ferrieux, J. P.
1999-12-01
The equivalent series resistance of polypropylene capacitors has been determined under rated voltage, in the range 1 kHz to 1 MHz and between 220 K and 370 K, by a calorimetric technique. The original feature of this determination of capacitor losses lies in the use of isothermal calorimetry and in the measurement of an electrical power rather than a temperature increase. The frequency dependence of the equivalent series resistance at various temperatures makes it possible to separate the losses in the conducting material from those in the dielectric and to obtain their respective variations as functions of frequency and temperature. These variations of the equivalent series resistance with frequency at a given temperature have been reproduced using an equivalent circuit composed of resistors, inductors and capacitors. This model has been verified for non-sinusoidal waveforms such as those met in a filtering circuit and is used to evaluate the capacitor losses by simulation.
A refined fuzzy time series model for stock market forecasting
Jilani, Tahseen Ahmed; Burney, Syed Muhammad Aqil
2008-05-01
Time series models have been used to make predictions of stock prices, academic enrollments, weather, road accident casualties, etc. In this paper we present a simple time-variant fuzzy time series forecasting method. The proposed method uses a heuristic approach to define frequency-density-based partitions of the universe of discourse. We propose a fuzzy metric that uses the frequency-density-based partitioning together with a trend predictor to calculate the forecast. The new method is applied to forecasting the TAIEX and enrollments at the University of Alabama. It is shown that the proposed method works with higher accuracy compared to other fuzzy time series methods developed for forecasting the TAIEX and enrollments at the University of Alabama.
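Frequency-density-based partitioning, as described here, amounts to cutting the universe of discourse so that intervals in dense regions are narrower; a minimal equal-frequency sketch (the function name is illustrative):

```python
def freq_density_partitions(data, k):
    """Split the range of `data` into k intervals holding (nearly) equal
    numbers of observations, so dense regions get narrow intervals."""
    s = sorted(data)
    n = len(s)
    cuts = [s[(i * n) // k] for i in range(1, k)]
    return [s[0]] + cuts + [s[-1]]          # k+1 interval boundaries

data = [1, 1, 2, 2, 2, 3, 3, 10, 20, 40]
print(freq_density_partitions(data, 2))     # → [1, 3, 40]
```

The narrow interval [1, 3] covers the dense cluster while the sparse tail gets one wide interval, which is the effect the fuzzification step relies on.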
Neural network versus classical time series forecasting models
Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam
2017-05-01
Artificial neural networks (ANNs) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using the mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecasts when the Box-Cox transformation was used for data preprocessing.
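The Box-Cox preprocessing that gave the best ANN forecasts transforms positive data toward normality; a small sketch with the usual profile-likelihood grid search for λ (names and grid are illustrative):

```python
import math

def boxcox(x, lam):
    """Box-Cox transform of a single positive value."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def best_lambda(data, grid):
    """Pick the lambda in `grid` maximizing the Box-Cox profile log-likelihood."""
    n = len(data)
    log_sum = sum(math.log(v) for v in data)
    best, best_llf = None, -float("inf")
    for lam in grid:
        y = [boxcox(v, lam) for v in data]
        mean = sum(y) / n
        var = sum((v - mean) ** 2 for v in y) / n
        llf = -0.5 * n * math.log(var) + (lam - 1) * log_sum   # profile likelihood
        if llf > best_llf:
            best, best_llf = lam, llf
    return best

# lambda = 0 is the log transform; lambda = 1 only shifts the data
print(boxcox(math.e, 0))   # 1.0
print(boxcox(5.0, 1.0))    # 4.0
```

For exponentially growing series (like many prices), the search typically lands near λ = 0, i.e. log preprocessing.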
TIME SERIES FORECASTING WITH MULTIPLE CANDIDATE MODELS: SELECTING OR COMBINING?
YU Lean; WANG Shouyang; K. K. Lai; Y.Nakamori
2005-01-01
Various mathematical models have been commonly used in time series analysis and forecasting. In these processes, academic researchers and business practitioners often come up against two important problems. One is whether to select an appropriate modeling approach for prediction purposes or to combine different individual approaches into a single forecast, when the modeling approaches are different or dissimilar. The other is whether to select the best candidate model for forecasting or to mix the various candidate models with different parameters into a new forecast, when the modeling approaches are the same or similar. In this study, we propose a set of computational procedures to solve the above two issues via two judgmental criteria. Meanwhile, in view of the problems presented in the literature, a novel modeling technique is also proposed to overcome the drawbacks of existing combined forecasting methods. To verify the efficiency and reliability of the proposed procedure and modeling technique, simulations and real data examples are conducted in this study. The results obtained reveal that the proposed procedure and modeling technique can be used as a feasible solution for time series forecasting with multiple candidate models.
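A standard instance of the "combining" option the authors weigh is an inverse-error weighted average of the candidate forecasts (a common baseline, not the paper's own procedure):

```python
def combine_forecasts(forecasts, past_errors):
    """Weight each model's forecast by the inverse of its past mean squared error."""
    mses = [sum(e * e for e in errs) / len(errs) for errs in past_errors]
    inv = [1.0 / m for m in mses]
    weights = [v / sum(inv) for v in inv]
    combined = sum(w * f for w, f in zip(weights, forecasts))
    return combined, weights

# model A was historically twice as accurate (in MSE) as model B
combined, w = combine_forecasts(
    forecasts=[10.0, 14.0],
    past_errors=[[1.0, -1.0], [2.0, 0.0]],
)
print(w)         # → [0.666..., 0.333...]: A gets weight 2/3
print(combined)
```

Selection is the degenerate case where the best model gets weight 1; the paper's judgmental criteria decide which regime applies.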
Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective
Chen, Shyi-Ming
2013-01-01
Temporal and spatiotemporal data form an inherent fabric of society, as we are faced with streams of data coming from numerous sensors, data feeds, and recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). A quest for a thorough analysis, interpretation, modeling and prediction of time series comes with an ongoing challenge to develop models that are both accurate and user-friendly (interpretable). The volume aims to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of CI technologies, bringing together ideas, algorithms, and numeric studies which convincingly demonstrate their relevance, maturity and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...
Time series regression model for infectious disease and weather.
Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro
2015-10-01
Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
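The two modifications the authors motivate from SIR models — log-lagged case counts to absorb autocorrelation from true contagion, and sums of past cases as a proxy for the immune population — are simple to build as regressors (the window length below is illustrative):

```python
import math

def build_regressors(cases, immunity_window=4):
    """For each time t, return (log of lagged cases, recent-case immunity proxy)."""
    rows = []
    for t in range(immunity_window, len(cases)):
        log_lag = math.log(cases[t - 1] + 1)          # +1 guards against log(0)
        immune = sum(cases[t - immunity_window:t])    # proxy for the immune pool
        rows.append((log_lag, immune))
    return rows

weekly_cases = [0, 2, 5, 9, 4, 1, 0, 3]
for t, (log_lag, immune) in zip(range(4, 8), build_regressors(weekly_cases)):
    print(t, round(log_lag, 3), immune)
```

These columns would then enter a quasi-Poisson or negative binomial regression alongside the weather terms, as the abstract describes.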
Forecasting the Reference Evapotranspiration Using Time Series Model
H. Zare Abyaneh
2016-10-01
Introduction: Reference evapotranspiration is one of the most important factors in irrigation timing and field management. Moreover, reference evapotranspiration forecasting can play a vital role in future developments. Therefore, in this study the seasonal autoregressive integrated moving average (ARIMA) model was used to forecast the reference evapotranspiration time series at the Esfahan, Semnan, Shiraz, Kerman, and Yazd synoptic stations. Materials and Methods: At all stations (characteristics of the synoptic stations are given in Table 1), the meteorological data, including mean, maximum and minimum air temperature, relative humidity, dry- and wet-bulb temperature, dew-point temperature, wind speed, precipitation, air vapor pressure and sunshine hours, were collected from the Islamic Republic of Iran Meteorological Organization (IRIMO) for the 41 years from 1965 to 2005. The FAO Penman-Monteith equation was used to calculate the monthly reference evapotranspiration at the five synoptic stations, and the evapotranspiration time series were formed. The unit root test was used to identify whether each time series was stationary; then, using the Box-Jenkins method, seasonal ARIMA models were applied to the sample data.
Table 1. Geographical location and climate conditions of the synoptic stations (longitude E, latitude N, altitude in m, mean annual air temperature in °C with min-max range, mean precipitation in mm, climate according to the De Martonne index classification):
Esfahan: 51°40' E, 32°37' N, 1550.4 m, 16.36 °C (9.4-23.3), 122 mm, Arid
Semnan: 53°33' E, 35°35' N, 1130.8 m, 18.0 °C (12.4-23.8), 140 mm, Arid
Shiraz: 52°36' E, 29°32' N, 1484 m, 18.0 °C (10.2-25.9), 324 mm, Semi-arid
Kerman: 56°58' E, 30°15' N, 1753.8 m, 15.6 °C (6.7-24.6), 142 mm, Arid
Yazd: 54°17' E, 31°54' N, 1237.2 m, 19.2 °C (11.8-26.0), 61 mm, Arid
Results and Discussion: The monthly meteorological data were used as input for the Ref-ET software and monthly reference
Exact series model of Langevin transducers with internal losses.
Nishamol, P A; Ebenezer, D D
2014-03-01
An exact series method is presented to analyze classical Langevin transducers with arbitrary boundary conditions. The transducers consist of an axially polarized piezoelectric solid cylinder sandwiched between two elastic solid cylinders. All three cylinders are of the same diameter. The length to diameter ratio is arbitrary. Complex piezoelectric and elastic coefficients are used to model internal losses. Solutions to the exact linearized governing equations for each cylinder include four series. Each term in each series is an exact solution to the governing equations. Bessel and trigonometric functions that form complete and orthogonal sets in the radial and axial directions, respectively, are used in the series. Asymmetric transducers and boundary conditions are modeled by using axially symmetric and anti-symmetric sets of functions. All interface and boundary conditions are satisfied in a weighted-average sense. The computed input electrical admittance, displacement, and stress in transducers are presented in tables and figures, and are in very good agreement with those obtained using ATILA, a finite element package for the analysis of sonar transducers. For all the transducers considered in the analysis, the maximum difference between the first three resonance frequencies calculated using the present method and ATILA is less than 0.03%.
A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series
Fernando Luiz Cyrino Oliveira
2014-01-01
The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics. As such, it can be considered unique in the world. It is a high-dimension hydrothermal system with huge participation of hydro plants. Such strong dependency on hydrological regimes implies uncertainties related to energy planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulations of monthly inflow series using the family of Periodic Autoregressive models, PAR(p), one for each period (month) of the year. In this paper the problems in fitting these models under the current system are shown, particularly the identification of the autoregressive order "p" and the corresponding parameter estimation. This is followed by a proposal of a new approach to set both the model order and the parameter estimates of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The results obtained using the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, lead to a better use of water resources in energy operation planning.
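The core move — resampling residuals to obtain confidence intervals for autoregressive parameters — can be sketched for a plain AR(1); the paper works with periodic PAR(p) models, so this is only the non-periodic skeleton:

```python
import random

def fit_ar1(x):
    """Least-squares slope of x[t] on x[t-1] (zero-mean AR(1))."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(v * v for v in x[:-1])
    return num / den

def bootstrap_ci(x, B=500, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the AR(1) coefficient via residual resampling."""
    rng = random.Random(seed)
    phi = fit_ar1(x)
    resid = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
    phis = []
    for _ in range(B):
        xb = [x[0]]
        for _ in range(len(x) - 1):     # rebuild a series from resampled residuals
            xb.append(phi * xb[-1] + rng.choice(resid))
        phis.append(fit_ar1(xb))
    phis.sort()
    return phis[int(B * alpha / 2)], phis[int(B * (1 - alpha / 2)) - 1]

rng = random.Random(0)
x = [0.0]
for _ in range(299):                    # simulate AR(1) with phi = 0.6
    x.append(0.6 * x[-1] + rng.gauss(0, 1))
lo, hi = bootstrap_ci(x)
print(lo < hi)   # a non-degenerate interval around the estimate
```

In the PAR(p) setting the same resampling is applied per calendar month, and the interval widths inform the order choice.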
Kālī: Time series data modeler
Kasliwal, Vishal P.
2016-07-01
The fully parallelized and vectorized software package Kālī models time series data using various stochastic processes, such as continuous-time ARMA (C-ARMA) processes, and uses Bayesian Markov Chain Monte Carlo (MCMC) for inference on stochastic light curves. Kālī is written in C++ with Python language bindings for ease of use. Kālī is named jointly after the Hindu goddess of time, change, and power and also as an acronym for KArma LIbrary.
Modelling of series of types of automated trenchless works tunneling
Gendarz, P.; Rzasinski, R.
2016-08-01
Microtunneling is the newest method for making underground installations. The method is the result of experience with, and techniques from, previous methods of trenchless underground works. It is considered reasonable to elaborate a series of types of construction of tunneling machines in order to develop this particular earthworks method. There are many design solutions for such machines, but the current goal is to develop a non-excavation robotized machine. Erosion machines with main tunnel dimensions of 1600, 2000, 2500 and 3150 are designed with the use of computer-aided methods. The creation of the series of types of construction of tunneling machines was preceded by an analysis of the current state. The verification of the practical methodology for creating the systematic part series was based on the designed series of types of erosion machines. The following were developed: a method of construction similarity of the erosion machines, algorithmic methods for variant analyses of quantitative construction attributes in the I-DEAS advanced graphical program, and relational and program parameterization. A manufacturing process for the parts will be created, which allows the technological process to be verified on CNC machines. The models of the designed machines will be modified, and the construction will be consulted with erosion machine users and manufacturers such as Tauber Rohrbau GmbH & Co. KG from Münster and OHL ZS a.s. from Brno. The companies' acceptance will result in practical verification by the JUMARPOL company.
Unsupervised Classification During Time-Series Model Building.
Gates, Kathleen M; Lane, Stephanie T; Varangis, E; Giovanello, K; Guiskewicz, K
2017-01-01
Researchers who collect multivariate time-series data across individuals must decide whether to model the dynamic processes at the individual level or at the group level. A recent innovation, group iterative multiple model estimation (GIMME), offers one solution to this dichotomy by identifying group-level time-series models in a data-driven manner while also reliably recovering individual-level patterns of dynamic effects. GIMME is unique in that it does not assume homogeneity in processes across individuals in terms of the patterns or weights of temporal effects. However, it can be difficult to make inferences from the nuances in varied individual-level patterns. The present article introduces an algorithm that arrives at subgroups of individuals that have similar dynamic models. Importantly, the researcher does not need to decide the number of subgroups. The final models contain reliable group-, subgroup-, and individual-level patterns that enable generalizable inferences, subgroups of individuals with shared model features, and individual-level patterns and estimates. We show that integrating community detection into the GIMME algorithm improves upon current standards in two important ways: (1) providing reliable classification and (2) increasing the reliability in the recovery of individual-level effects. We demonstrate this method on functional MRI from a sample of former American football players.
Single-Index Additive Vector Autoregressive Time Series Models
LI, YEHUA
2009-09-01
We study a new class of nonlinear autoregressive models for vector time series, where the current vector depends on single-indexes defined on the past lags and the effects of different lags have an additive form. A sufficient condition is provided for stationarity of such models. We also study estimation of the proposed model using P-splines, hypothesis testing, asymptotics, selection of the order of the autoregression and of the smoothing parameters and nonlinear forecasting. We perform simulation experiments to evaluate our model in various settings. We illustrate our methodology on a climate data set and show that our model provides more accurate yearly forecasts of the El Niño phenomenon, the unusual warming of water in the Pacific Ocean. © 2009 Board of the Foundation of the Scandinavian Journal of Statistics.
Disease management with ARIMA model in time series.
Sato, Renato Cesar
2013-01-01
The evaluation of infectious and noninfectious disease management can be done through the use of a time series analysis. In this study, we expect to measure the results and prevent intervention effects on the disease. Clinical studies have benefited from the use of these techniques, particularly for the wide applicability of the ARIMA model. This study briefly presents the process of using the ARIMA model. This analytical tool offers a great contribution for researchers and healthcare managers in the evaluation of healthcare interventions in specific populations.
Modeling, design, and optimization of Mindwalker series elastic joint.
Wang, Shiqian; Meijneke, Cor; van der Kooij, Herman
2013-06-01
Weight and power autonomy limit the daily use of wearable exoskeletons. Lightweight, efficient and powerful actuation systems are not easy to achieve. Choosing the right combination of existing technologies, such as battery, gear and motor, is not a trivial task. In this paper, we propose an optimization framework by setting up a power-based quasi-static model of the exoskeleton joint drivetrain. The goal is to find the most efficient and lightweight combinations. This framework can be generalized to other similar applications by extending or adapting the model to their needs. We also present the Mindwalker exoskeleton joint, for which a novel series elastic actuator, consisting of a ballscrew-driven linear actuator and a double spiral spring, was developed and tested. This linear actuator is capable of outputting 960 W of power, and the exoskeleton joint can output 100 Nm peak torque continuously. The double spiral spring can sense torque between 0.08 Nm and 100 Nm and exhibits linearity of 99.99%, with no backlash or hysteresis. The series elastic joint can track a chirp torque profile with an amplitude of 100 Nm over 6 Hz (large torque bandwidth), and for small torque (2 Nm peak-to-peak) it has a bandwidth over 38 Hz. The integrated exoskeleton joint, including the ballscrew-driven linear actuator, the series spring, electronics and the metal housing which hosts these components, weighs 2.9 kg.
A series of rat segmental forelimb ectopic implantation models.
Zhou, Xianyu; Luo, Xusong; Gao, Bowen; Liu, Fei; Gu, Chuan; Yu, Qingxiong; Li, Qingfeng; Zhu, Hainan
2017-05-09
Temporary ectopic implantation has been performed in clinical practice to salvage devascularized amputated tissues for delayed replantation. In this study, we established a series of segmental forelimb ectopic implantation models in rats, including forelimb, forearm, forepaw, digit, and double forelimbs, to mimic the clinical context. The time of amputated limb harvesting in donors and of the ectopic implantation process in recipients was recorded. Survival time and mortality of recipients were also recorded. Sixty days after ectopic implantation, a full-field laser perfusion imager (FLPI) was used to detect the blood flow of the amputated limbs, and micro-CT imaging was used to examine bone morphological changes. Histological sections of amputated limbs were stained with hematoxylin and eosin to evaluate pathological changes. Implanted amputated limbs in all models achieved long-term survival, and no obvious morphological or histological changes were found according to the results of the micro-CT and histology studies. Thus, a series of rat segmental forelimb temporary ectopic implantation models has been well established. To our knowledge, this is the first rodent animal model related to forelimb temporary ectopic implantation. These models might facilitate further research related to salvage, reconstruction and better aesthetic and functional outcomes of the upper extremity/digit in the temporary ectopic implantation scenario.
Modeling Periodic Impulsive Effects on Online TV Series Diffusion.
Fu, Peihua; Zhu, Anding; Fang, Qiwen; Wang, Xi
Online broadcasting substantially affects the production, distribution, and profit of TV series. In addition, online word-of-mouth significantly affects the diffusion of TV series. Because on-demand streaming rates are the most important factor that influences the earnings of online video suppliers, streaming statistics and forecasting trends are valuable. In this paper, we investigate the effects of periodic impulsive stimulation and pre-launch promotion on on-demand streaming dynamics. We consider imbalanced audience feverish distribution using an impulsive susceptible-infected-removed (SIR)-like model. In addition, we perform a correlation analysis of online buzz volume based on Baidu Index data. We propose a PI-SIR model to evolve audience dynamics and translate them into on-demand streaming fluctuations, which can be observed and comprehended by online video suppliers. Six South Korean TV series datasets are used to test the model. We develop a coarse-to-fine two-step fitting scheme to estimate the model parameters, first by fitting inter-period accumulation and then by fitting inner-period feverish distribution. We find that audience members display similar viewing habits: they seek new episodes every update day but then fade away. This outcome means that impulsive intensity plays a crucial role in on-demand streaming diffusion. In addition, the initial audience size and online buzz are significant factors. On-demand streaming fluctuation is highly correlated with online buzz fluctuation. To stimulate audience attention and interpersonal diffusion, it is worthwhile to invest in promotion near update days. Strong pre-launch promotion is also a good marketing tool to improve overall performance. It is not advisable for online video providers to promote several popular TV series on the same update day. Inter-period accumulation is a feasible forecasting tool to predict the future trend of the on-demand streaming amount. The buzz in public social communities
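A generic SIR-style model with periodic impulses (update days renewing audience interest) can be sketched with simple Euler steps; all parameters below are illustrative, not the paper's fitted PI-SIR values:

```python
def impulsive_sir(days, beta=0.4, gamma=0.2, pulse_every=7, pulse=0.1, dt=0.1):
    """SIR dynamics with a periodic impulse moving mass from R back to S
    (audience attention renewed on each update day)."""
    S, I, R = 0.9, 0.1, 0.0
    steps_per_day = int(1 / dt)
    history = []
    for day in range(days):
        if day > 0 and day % pulse_every == 0:   # update-day impulse
            boost = min(pulse, R)
            S, R = S + boost, R - boost
        for _ in range(steps_per_day):           # Euler integration within a day
            new_inf = beta * S * I * dt
            rec = gamma * I * dt
            S, I, R = S - new_inf, I + new_inf - rec, R + rec
        history.append((S, I, R))
    return history

hist = impulsive_sir(30)
S, I, R = hist[-1]
print(round(S + I + R, 6))   # total audience mass is conserved: 1.0
```

The impulses produce the saw-tooth "feverish" pattern the authors fit: attention decays between update days and jumps on each one.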
Series Connected Photovoltaic Cells—Modelling and Analysis
Anas Al Tarabsheh
2017-03-01
As solar energy costs continue to drop, the number of large-scale deployment projects increases, and the need for different analysis models for photovoltaic (PV) modules in both academia and industry rises. This paper proposes a modified equivalent-circuit model for PV modules. A PV module comprises several series-connected PV cells, to generate more electrical power, where each PV cell has an internal shunt resistance. Our proposed model simplifies the standard one-diode equivalent-circuit (SEC) model by removing the shunt resistance and including its effect in the diode part of the circuit, while retaining the original model's accuracy. Our proposed equivalent circuit, called here a modified SEC (MSEC), has fewer circuit elements. All of the PV cells are assumed to be operating under the same ambient conditions, where they share the same electric voltage and current values. To ensure the simplification did not come at the cost of reduced accuracy relative to the SEC model, we validate our MSEC model by simulating both under the same conditions and calculating and comparing their current/voltage (I/V) characteristics. Our results validate the accuracy of our model, with the difference between the two models falling below 1%. Therefore, the proposed model can be adopted as an alternative representation of the equivalent circuit for PV cells and modules.
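The single-diode idea behind both the SEC and MSEC models can be illustrated with the ideal diode equation for one cell, with series cells sharing the same current (shunt and series resistances omitted for brevity; the numbers are typical values, not from the paper):

```python
import math

def cell_current(v, i_ph=5.0, i_0=1e-9, n=1.2, v_t=0.02585):
    """Ideal single-diode PV cell: I = Iph - I0*(exp(V/(n*Vt)) - 1)."""
    return i_ph - i_0 * (math.exp(v / (n * v_t)) - 1)

def open_circuit_voltage(i_ph=5.0, i_0=1e-9, n=1.2, v_t=0.02585):
    """Voltage at which the cell current drops to zero."""
    return n * v_t * math.log(i_ph / i_0 + 1)

v_oc = open_circuit_voltage()
print(round(cell_current(0.0), 6))    # short-circuit current equals Iph
print(round(cell_current(v_oc), 6))   # ~0 at open circuit
# m identical series-connected cells: same current, m times the voltage
module_voc = 36 * v_oc
```

Adding shunt/series resistances makes the equation implicit in I, which is exactly the complexity the MSEC simplification targets.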
A Simple Pile-up Model for Time Series Analysis
Sevilla, Diego J. R.
2017-07-01
In this paper, a simple pile-up model is presented. This model calculates the probability P(n|N) of having n counts if N particles collide with a sensor during an exposure time. Through some approximations, an analytic expression depending on only one parameter is obtained. This parameter characterizes the pile-up magnitude, and depends on features of the instrument and the source. The statistical model obtained permits the determination of probability distributions of measured counts from the probability distributions of incoming particles, which is valuable for time series analysis. Applicability limits are discussed, and an example of the improvement that can be achieved in the statistical analysis considering the proposed pile-up model is shown by analyzing real data.
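The flavor of P(n|N) can be reproduced with a toy Monte Carlo in which N particles land uniformly in K time bins and coincident arrivals merge into a single count (an illustrative pile-up mechanism, not the paper's exact derivation):

```python
import random

def pileup_counts(n_particles, n_bins, trials, seed=0):
    """Monte Carlo samples of measured counts under bin pile-up."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        occupied = {rng.randrange(n_bins) for _ in range(n_particles)}
        results.append(len(occupied))         # merged arrivals -> one count
    return results

counts = pileup_counts(n_particles=5, n_bins=10, trials=20000)
mean = sum(counts) / len(counts)
expected = 10 * (1 - (1 - 1 / 10) ** 5)       # analytic mean of occupied bins
print(round(mean, 2), round(expected, 2))     # close agreement
```

The measured mean falls below N = 5, which is the systematic undercount that a pile-up-aware statistical model must correct for.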
Hybrid perturbation methods based on statistical time series models
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
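The hybrid scheme — integrate a simplified analytical model, then let a statistical model absorb the residual dynamics — can be miniaturized with simple exponential smoothing standing in for the Holt-Winters component (all names and signals are illustrative):

```python
def hybrid_forecast(truth, coarse_model, alpha=0.3):
    """Correct a coarse analytical model with exponentially smoothed residuals."""
    level = 0.0
    corrected = []
    for t, y in enumerate(truth):
        approx = coarse_model(t)
        corrected.append(approx + level)   # prediction made before seeing y
        level = alpha * (y - approx) + (1 - alpha) * level   # learn the residual
    return corrected

# toy "orbit": the true signal has a constant bias the coarse model misses
truth = [t + 2.0 for t in range(50)]
coarse = lambda t: float(t)
pred = hybrid_forecast(truth, coarse)
print(abs(pred[-1] - truth[-1]) < 0.1)   # the bias is largely absorbed
```

Holt-Winters adds trend and seasonal terms to the same recursion, which is what lets it capture the periodic residual signatures of unmodeled perturbations.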
On the maximum-entropy/autoregressive modeling of time series
Chao, B. F.
1984-01-01
The autoregressive (AR) model of a random process is interpreted in light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or z domain) on the one hand to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, though ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
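The pole/complex-frequency correspondence described here is easy to check numerically: build an AR(2) with a known pole pair r·e^{±iθ} and recover it from the roots of the characteristic polynomial (numpy used only for the root finding):

```python
import numpy as np

# Target pole pair: modulus r (damping), angle theta (oscillation frequency)
r, theta = 0.9, np.pi / 4
a1 = 2 * r * np.cos(theta)      # AR(2): x_t = a1*x_{t-1} + a2*x_{t-2} + e_t
a2 = -r ** 2

# The poles are the roots of z^2 - a1*z - a2 = 0
poles = np.roots([1.0, -a1, -a2])
moduli = np.abs(poles)
angles = np.abs(np.angle(poles))

print(np.allclose(moduli, r))       # damping recovered
print(np.allclose(angles, theta))   # spectral-peak frequency recovered
```

A pole modulus near 1 gives a sharp ME/AR spectral peak at frequency θ, while a small modulus gives a broad one — exactly the position/shape claim in the abstract.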
Clustering Multivariate Time Series Using Hidden Markov Models
Shima Ghassempour
2014-03-01
In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs, and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab, and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistical knowledge, and which are therefore accessible to a wide range of researchers.
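The pipeline — map each trajectory to a model, define a distance between models, cluster the distance matrix — can be miniaturized with observable Markov chains standing in for HMMs (a deliberate simplification of the paper's approach):

```python
def transition_matrix(seq, states):
    """MLE transition probabilities of an observable Markov chain."""
    idx = {s: i for i, s in enumerate(states)}
    counts = [[0.0] * len(states) for _ in states]
    for a, b in zip(seq, seq[1:]):
        counts[idx[a]][idx[b]] += 1
    return [[c / max(sum(row), 1) for c in row] for row in counts]

def model_distance(seq_a, seq_b, states):
    """Frobenius distance between the two fitted transition matrices."""
    A = transition_matrix(seq_a, states)
    B = transition_matrix(seq_b, states)
    return sum((A[i][j] - B[i][j]) ** 2
               for i in range(len(states)) for j in range(len(states))) ** 0.5

states = ["well", "ill"]
alternating = ["well", "ill"] * 10       # switches every step
sticky = ["well"] * 10 + ["ill"] * 10    # switches once
alternating2 = ["ill", "well"] * 10

# trajectories with the same dynamics are close; different dynamics are far
d_same = model_distance(alternating, alternating2, states)
d_diff = model_distance(alternating, sticky, states)
print(d_same < d_diff)
```

The resulting distance matrix can then feed any standard clustering routine, which is the final step of the paper's method.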
Modeling financial time series with S-plus
Zivot, Eric
2003-01-01
The field of financial econometrics has exploded over the last decade. This book represents an integration of theory, methods, and examples using the S-PLUS statistical modeling language and the S+FinMetrics module to facilitate the practice of financial econometrics. This is the first book to show the power of S-PLUS for the analysis of time series data. It is written for researchers and practitioners in the finance industry, academic researchers in economics and finance, and advanced MBA and graduate students in economics and finance. Readers are assumed to have a basic knowledge of S-PLUS and a solid grounding in basic statistics and time series concepts. Eric Zivot is an associate professor and Gary Waterman Distinguished Scholar in the Economics Department at the University of Washington, and is co-director of the nascent Professional Master's Program in Computational Finance. He regularly teaches courses on econometric theory, financial econometrics and time series econometrics, and is the recipient of the He...
Sørup, Hjalte Jomo Danielsen; Madsen, Henrik; Arnbjerg-Nielsen, Karsten
2011-01-01
A very fine temporal and volumetric resolution precipitation time series is modeled using Markov models. Both 1st and 2nd order Markov models as well as seasonal and diurnal models are investigated and evaluated using likelihood based techniques. The 2nd order Markov model is found to be insignif...
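Comparing 1st- and 2nd-order occurrence models by likelihood, as the authors do, reduces to counting transitions at each order (a wet/dry sketch; seasonal and diurnal terms are omitted):

```python
import math

def log_lik_order1(seq):
    """Log-likelihood of a sequence under a fitted 1st-order Markov chain."""
    counts = {}
    for a, b in zip(seq, seq[1:]):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    ll = 0.0
    for (a, b), c in counts.items():
        row = sum(v for (x, _), v in counts.items() if x == a)
        ll += c * math.log(c / row)
    return ll

def log_lik_order2(seq):
    """Same, conditioning on the two previous states."""
    counts = {}
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        counts[(a, b, c)] = counts.get((a, b, c), 0) + 1
    ll = 0.0
    for (a, b, c), n in counts.items():
        row = sum(v for (x, y, _), v in counts.items() if (x, y) == (a, b))
        ll += n * math.log(n / row)
    return ll

seq = [1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1]  # 1 = wet, 0 = dry
ll1, ll2 = log_lik_order1(seq), log_lik_order2(seq)
# richer memory can only raise the in-sample likelihood; a penalized
# criterion (AIC/BIC) decides whether the extra parameters are warranted
print(ll2 >= ll1)
```

This is the likelihood-based comparison the abstract refers to; the significance of the 2nd-order terms is then judged with a likelihood-ratio or information-criterion test.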
Crop Yield Forecasted Model Based on Time Series Techniques
Li Hong-ying; Hou Yan-lin; Zhou Yong-juan; Zhao Hui-ming
2012-01-01
Traditional studies on potential yield mainly referred to attainable yield: the maximum yield which could be reached by a crop in a given environment. A new concept of crop yield under average climate conditions, which is affected by the advancement of science and technology, was defined in this paper. Based on this new concept of crop yield, time series techniques relying on past yield data were employed to set up a forecasting model. The model was tested using average grain yields of Liaoning Province in China from 1949 to 2005. The testing combined dynamic n-choosing with micro tendency rectification, and the average forecasting error was 1.24%. A turning point may occur in the trend line of yield change; in that case, an inflexion model was used to handle the yield turning point.
Empirical intrinsic geometry for nonlinear modeling and time series filtering.
Talmon, Ronen; Coifman, Ronald R
2013-07-30
In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.
Modeling Large Time Series for Efficient Approximate Query Processing
Perera, Kasun S; Hahmann, Martin; Lehner, Wolfgang
2015-01-01
Evolving customer requirements and increasing competition force business organizations to store increasing amounts of data and query them for information at any given time. Due to the current growth of data volumes, timely extraction of relevant information becomes more and more difficult...... these issues, compression techniques have been introduced in many areas of data processing. In this paper, we outline a new system that does not query complete datasets but instead utilizes models to extract the requested information. For time series data we use Fourier and Cosine transformations and piece...
Modeling Glacier Elevation Change from DEM Time Series
Di Wang
2015-08-01
In this study, a methodology for glacier elevation reconstruction from Digital Elevation Model (DEM) time series (tDEM) is described for modeling the evolution of glacier elevation and estimating the related volume change, with a focus on medium-resolution and noisy satellite DEMs. The method is robust with respect to outliers in individual DEM products. Fox Glacier and Franz Josef Glacier in New Zealand are used as test cases, based on 31 Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) DEMs and the Shuttle Radar Topography Mission (SRTM) DEM. We obtained mean surface elevation lowering rates of −0.51 ± 0.02 m·a⁻¹ and −0.09 ± 0.02 m·a⁻¹ between 2000 and 2014 for Fox and Franz Josef Glacier, respectively. The specific volume difference between 2000 and 2014 was estimated as −0.77 ± 0.13 m·a⁻¹ and −0.33 ± 0.06 m·a⁻¹ by our tDEM method. The comparably moderate thinning rates are mainly due to volume gains after 2013 that compensate for larger thinning rates earlier in the series. Terminus thickening prevailed between 2002 and 2007.
Incorporating Satellite Time-Series Data into Modeling
Gregg, Watson
2008-01-01
In situ time series observations have provided a multi-decadal view of long-term changes in ocean biology. These observations are sufficiently reliable to enable discernment of even relatively small changes, and provide continuous information on a host of variables. Their key drawback is their limited domain. Satellite observations from ocean color sensors do not suffer this drawback of domain, and simultaneously view the global oceans. This attribute lends credence to their use in global and regional model validation and data assimilation. We focus on these applications using the NASA Ocean Biogeochemical Model. The enhancement of the satellite data using data assimilation is featured, and the limitation of long-term satellite data sets is also discussed.
Forecasting inflation in Montenegro using univariate time series models
Milena Lipovina-Božović
2015-04-01
The analysis of price trends and their prognosis is one of the key tasks of the economic authorities in each country. Because the Montenegrin economy is small and open and uses the euro as its currency, forecasting inflation is very specific, and it is made more difficult by the low quality of the data. This paper analyzes the utility and applicability of univariate time series models for forecasting the price index in Montenegro. Analysis of key macroeconomic movements in previous decades indicates the presence of many possible determinants that could influence the forecasting result. This paper concludes that forecasting models (ARIMA) based only on a series' own previous values cannot adequately cover the key factors that determine the price level in the future, probably because of the existence of numerous external factors that influence price movements in Montenegro.
Madsen, Henrik; Pearson, Charles P.; Rosbjerg, Dan
1997-04-01
Two regional estimation schemes, based on, respectively, partial duration series (PDS) and annual maximum series (AMS), are compared. The PDS model assumes a generalized Pareto (GP) distribution for modeling threshold exceedances corresponding to a generalized extreme value (GEV) distribution for annual maxima. First, the accuracy of PDS/GP and AMS/GEV regional index-flood T-year event estimators are compared using Monte Carlo simulations. For estimation in typical regions assuming a realistic degree of heterogeneity, the PDS/GP index-flood model is more efficient. The regional PDS and AMS procedures are subsequently applied to flood records from 48 catchments in New Zealand. To identify homogeneous groupings of catchments, a split-sample regionalization approach based on catchment characteristics is adopted. The defined groups are more homogeneous for PDS data than for AMS data; a two-way grouping based on annual average rainfall is sufficient to attain homogeneity for PDS, whereas a further partitioning is necessary for AMS. In determination of the regional parent distribution using L-moment ratio diagrams, PDS data, in contrast to AMS data, provide an unambiguous interpretation, supporting a GP distribution.
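The PDS/GP ingredient above can be sketched with a method-of-moments fit of the generalized Pareto distribution to threshold exceedances. This is an illustrative simplification, not the paper's regional index-flood procedure: the exceedance data are synthetic and the shape/scale values are assumed.

```python
import random

def gpd_sample(n, shape, scale, rng):
    # inverse-CDF sampling from a generalized Pareto distribution (shape != 0)
    return [scale / shape * ((1 - rng.random()) ** (-shape) - 1) for _ in range(n)]

def gpd_fit_moments(x):
    """Method-of-moments GP estimators (valid for shape < 1/2):
       shape = (1 - mean^2/var) / 2,  scale = mean * (1 - shape)."""
    n = len(x)
    m = sum(x) / n
    v = sum((xi - m) ** 2 for xi in x) / (n - 1)
    shape = 0.5 * (1 - m * m / v)
    scale = m * (1 - shape)
    return shape, scale

rng = random.Random(42)
exceedances = gpd_sample(20000, 0.1, 2.0, rng)  # synthetic threshold exceedances
shape_hat, scale_hat = gpd_fit_moments(exceedances)
```

In a regional scheme the at-site estimates would then be pooled via index-flood scaling; here only the single-site fit is shown.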
Self-organising mixture autoregressive model for non-stationary time series modelling.
Ni, He; Yin, Hujun
2008-12-01
Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
Time series modelling and forecasting of emergency department overcrowding.
Kadri, Farid; Harrou, Fouzi; Chaabane, Sondès; Tahon, Christian
2014-09-01
Efficient management of patient flow (demand) in emergency departments (EDs) has become an urgent issue for many hospital administrations. Today, more and more attention is being paid to hospital management systems to optimally manage patient flow and to improve management strategies, efficiency and safety in such establishments. To this end, EDs require significant human and material resources, but unfortunately these are limited. Within such a framework, the ability to accurately forecast demand in emergency departments has considerable implications for hospitals to improve resource allocation and strategic planning. The aim of this study was to develop models for forecasting daily attendances at the hospital emergency department in Lille, France. The study demonstrates how time-series analysis can be used to forecast, at least in the short term, demand for emergency services in a hospital emergency department. The forecasts were based on daily patient attendances at the paediatric emergency department in Lille regional hospital centre, France, from January 2012 to December 2012. An autoregressive integrated moving average (ARIMA) method was applied separately to each of the two GEMSA categories and total patient attendances. Time-series analysis was shown to provide a useful, readily available tool for forecasting emergency department demand.
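The autoregressive core of an ARIMA forecast can be sketched as follows. This is an illustrative stand-in, not the study's model or data: a plain least-squares AR(1) fitted to synthetic daily attendance counts (the mean of 120 patients is an assumed value), with a multi-step forecast that reverts toward the fitted mean.

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares fit of an AR(p) model: y_t = c + sum_i phi_i * y_{t-i} + e_t."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))] + [y[p - i:-i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef  # [c, phi_1, ..., phi_p]

def forecast_ar(y, coef, steps):
    # iterate the fitted recursion forward, feeding forecasts back in
    p = len(coef) - 1
    hist = list(y[-p:])
    out = []
    for _ in range(steps):
        nxt = coef[0] + sum(coef[i] * hist[-i] for i in range(1, p + 1))
        hist.append(nxt)
        out.append(nxt)
    return out

# synthetic daily attendances: AR(1) fluctuations around a mean of 120 patients
rng = np.random.default_rng(0)
y = [120.0]
for _ in range(999):
    y.append(120 + 0.6 * (y[-1] - 120) + rng.normal(0, 5))
y = np.array(y)

coef = fit_ar(y, 1)
fc = forecast_ar(y, coef, 7)   # one-week-ahead forecast
```

A full ARIMA adds differencing and a moving-average part on top of this autoregressive step.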
2013-10-31
... Learjet Model 45 series airplanes. The Model 45 series airplanes are swept-wing aircraft equipped with two... type certification basis for Learjet Model 45 series airplanes. System Security Protection for Aircraft... ensure that continued airworthiness of the aircraft is maintained, including all...
Using the Neumann series expansion for assembling Reduced Order Models
Nasisi S.
2014-06-01
An efficient method to remove the limitation in selecting the master degrees of freedom in a finite element model by means of model order reduction is presented. A major difficulty of the Guyan reduction and the IRS (Improved Reduced System) method is the need to appropriately select the master and slave degrees of freedom for the rate of convergence to be high. This study approaches this limitation by using a particular arrangement of the rows and columns of the assembled matrices K and M, and by employing a combination of the IRS method and a variant of the analytical selection of masters presented in (Shah, V. N., Raymund, M., Analytical selection of masters for the reduced eigenvalue problem, International Journal for Numerical Methods in Engineering 18 (1), 1982) in case the first lowest frequencies are sought. One of the most significant characteristics of the approach is the use of the Neumann series expansion, which motivates this particular arrangement of the matrices' entries. The method shows a higher rate of convergence when compared to the standard IRS and very accurate results for the lowest reduced frequencies. To show the effectiveness of the proposed method, two test structures and the human vocal tract model employed in (Vampola, T., Horacek, J., Svec, J. G., FE modeling of human vocal tract acoustics. Part I: Production of Czech vowels, Acta Acustica united with Acustica 94 (3), 2008) are presented.
Modeling PSInSAR time series without phase unwrapping
Zhang, L.; Ding, X.; Lu, Zhiming
2011-01-01
In this paper, we propose a least-squares-based method for multitemporal synthetic aperture radar interferometry that allows one to estimate deformations without the need for phase unwrapping. The method utilizes a series of multimaster wrapped differential interferograms with short baselines and focuses on arcs at which there are no phase ambiguities. An outlier detector is used to identify and remove the arcs with phase ambiguities, and a pseudoinverse of the variance-covariance matrix is used as the weight matrix of the correlated observations. The deformation rates at coherent points are estimated with a least squares model constrained by reference points. The proposed approach is verified with a set of simulated data. © 2006 IEEE.
Predicting chaotic time series with a partial model.
Hamilton, Franz; Berry, Tyrus; Sauer, Timothy
2015-07-01
Methods for forecasting time series are a critical aspect of the understanding and control of complex networks. When the model of the network is unknown, nonparametric methods for prediction have been developed, based on concepts of attractor reconstruction pioneered by Takens and others. In this Rapid Communication we consider how to make use of a subset of the system equations, if they are known, to improve the predictive capability of forecasting methods. A counterintuitive implication of the results is that knowledge of the evolution equation of even one variable, if known, can improve forecasting of all variables. The method is illustrated on data from the Lorenz attractor and from a small network with chaotic dynamics.
Forecasting electricity usage using univariate time series models
Hock-Eam, Lim; Chee-Yin, Yip
2014-12-01
Electricity is one of the important energy sources. A sufficient supply of electricity is vital to support a country's development and growth. Due to the changing of socio-economic characteristics, increasing competition and deregulation of electricity supply industry, the electricity demand forecasting is even more important than before. It is imperative to evaluate and compare the predictive performance of various forecasting methods. This will provide further insights on the weakness and strengths of each method. In literature, there are mixed evidences on the best forecasting methods of electricity demand. This paper aims to compare the predictive performance of univariate time series models for forecasting the electricity demand using a monthly data of maximum electricity load in Malaysia from January 2003 to December 2013. Results reveal that the Box-Jenkins method produces the best out-of-sample predictive performance. On the other hand, Holt-Winters exponential smoothing method is a good forecasting method for in-sample predictive performance.
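A minimal additive Holt-Winters implementation, one of the two method families compared above, can be sketched as follows. The monthly load series is synthetic (linear trend plus sinusoidal seasonality) and the smoothing constants are arbitrary assumed values, not those used in the study.

```python
import math

def holt_winters_additive(y, season_len, alpha, beta, gamma, horizon):
    """Additive Holt-Winters: level + trend + seasonal components,
    initialised from the first two seasons, then forecast `horizon` steps ahead."""
    level = sum(y[:season_len]) / season_len
    trend = (sum(y[season_len:2 * season_len]) - sum(y[:season_len])) / season_len ** 2
    season = [y[i] - level for i in range(season_len)]
    for t in range(season_len, len(y)):
        s = season[t % season_len]
        last_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % season_len] = gamma * (y[t] - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + season[(len(y) + h) % season_len]
            for h in range(horizon)]

# synthetic monthly maximum load: trend plus annual seasonality
y = [100 + 0.5 * t + 10 * math.sin(2 * math.pi * t / 12) for t in range(120)]
fc = holt_winters_additive(y, season_len=12, alpha=0.3, beta=0.1, gamma=0.2, horizon=12)
```

On this noiseless series the 12-month-ahead forecasts closely track the true continuation of the trend-plus-seasonal pattern.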
MODELLING GASOLINE DEMAND IN GHANA: A STRUCTURAL TIME SERIES ANALYSIS
Ishmael Ackah
2014-01-01
Concerns about the role of energy consumption in global warming have led to policy designs that seek to reduce fossil fuel consumption or find a less polluting alternative, especially for the transport sector. This study seeks to estimate the elasticities of price, income, education and technology on gasoline demand in Ghana's transport sector. The Structural Time Series Model reports short-run price and income elasticities of -0.0088 and 0.713. Total factor productivity is -0.408, whilst the elasticity for education is 2.33. In the long run, the reported price and income elasticities are -0.065 and 5.129, respectively. The long-run elasticity for productivity is -2.935. The study recommends that, in order to enhance the efficiency of gasoline consumption in the transport sector, there should be investment in productivity.
Auto-Regressive Models of Non-Stationary Time Series with Finite Length
FEI Wanchun; BAI Lun
2005-01-01
To analyze and simulate non-stationary time series with finite length, the statistical characteristics and auto-regressive (AR) models of such series are discussed and studied. A new AR model, called the time-varying parameter AR model, is proposed for modeling non-stationary time series with finite length. The auto-covariances of time series simulated by means of several AR models are analyzed. The result shows that the new AR model can be used to simulate and generate a new time series with the same auto-covariance as the original time series. The size curves of cocoon filaments, regarded as non-stationary time series with finite length, are experimentally simulated. The simulation results are significantly better than those obtained so far, and illustrate the availability of the time-varying parameter AR model. The results are useful for analyzing and simulating non-stationary time series with finite length.
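One simple way to realize a time-varying parameter AR model, shown only as a sketch and not the authors' estimator, is to track a drifting AR(1) coefficient with sliding-window least squares on a synthetic series; the drift range and window length are assumed values.

```python
import numpy as np

def rolling_ar1(y, window):
    """Sliding-window least-squares estimates of a time-varying AR(1) coefficient."""
    est = []
    for t in range(window, len(y)):
        x, z = y[t - window:t - 1], y[t - window + 1:t]
        est.append(float(np.dot(x, z) / np.dot(x, x)))
    return est

rng = np.random.default_rng(3)
n = 4000
phi = np.linspace(0.2, 0.8, n)      # AR coefficient drifts over the record
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi[t] * y[t - 1] + rng.normal()

est = rolling_ar1(y, window=400)
```

The windowed estimates rise with the true coefficient, which is the qualitative behaviour a time-varying parameter AR model is meant to capture.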
Optimal model-free prediction from multivariate time series.
Runge, Jakob; Donner, Reik V; Kurths, Jürgen
2015-05-01
Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.
Testing coefficients of AR and bilinear time series models by a graphical approach
Ip, Wai-Cheung
2008-01-01
AR and bilinear time series models are expressed as time series chain graphical models, based on which it is shown that the coefficients of AR and bilinear models are the conditional correlation coefficients conditioned on the other components of the time series. A graphically based procedure is then proposed to test the significance of the coefficients of AR and bilinear time series. Simulations show that our procedure performs well in both size and power.
INDUSTRIAL PRODUCTION IN GERMANY AND AUSTRIA: A CASE STUDY IN STRUCTURAL TIME SERIES MODELLING
Gerhard THURY
2003-01-01
Industrial production series are volatile and often cyclical. Time series models can be used to establish certain stylized facts, such as trends and cycles, which may be present in these series. In certain situations, it is also possible that common factors, which may have an interesting interpretation, can be detected in production series. Series from two neighboring countries with close economic relationships, such as Germany and Austria, are especially likely to exhibit such joint stylized facts.
Entin Hidayah
2011-02-01
Disaggregation of hourly rainfall data is very important for providing the input to continuous rainfall-runoff models when the availability of automatic rainfall records is limited. Continuous rainfall-runoff modeling requires rainfall data in the form of hourly series. Such a specification can be obtained by temporal disaggregation at a single site. The paper attempts to generate a single-site rainfall model based upon a time series (AR1) model by adjusting and establishing a dummy procedure. Estimated with Bayesian Markov Chain Monte Carlo (MCMC), the objective variable is hourly rainfall depth. The performance of the model has been evaluated by comparing historical data with model predictions. The result shows that the model performs well for dry interval periods, as represented by a small MAE of 0.21.
Elmore, Donald E.; Guayasamin, Ryann C.; Kieffer, Madeleine E.
2010-01-01
As computational modeling plays an increasingly central role in biochemical research, it is important to provide students with exposure to common modeling methods in their undergraduate curriculum. This article describes a series of computer labs designed to introduce undergraduate students to energy minimization, molecular dynamics simulations,…
Thin stillage fractionation using ultrafiltration: resistance in series model.
Arora, Amit; Dien, Bruce S; Belyea, Ronald L; Wang, Ping; Singh, Vijay; Tumbleson, M E; Rausch, Kent D
2009-02-01
The corn-based dry grind process is the most widely used method in the US for fuel ethanol production. Fermentation of corn to ethanol produces whole stillage after ethanol is removed by distillation. This is centrifuged to separate thin stillage from wet grains. Thin stillage contains 5-10% solids. Concentrating the solids of thin stillage requires evaporation of large amounts of water and maintenance of evaporators. Evaporator maintenance requires excess evaporator capacity at the facility, increases capital expenses, requires plant slowdowns or shutdowns, and results in revenue losses. Membrane filtration is one method that could improve the value of thin stillage and may offer an alternative to evaporation. Fractionation of thin stillage using ultrafiltration was conducted to evaluate membranes as an alternative to evaporators in the ethanol industry. Two regenerated cellulose membranes with molecular weight cut-offs of 10 and 100 kDa were evaluated. Total solids (suspended and soluble) contents recovered through the membrane separation process were similar to those from commercial evaporators. Permeate flux decline of thin stillage was determined using a resistance-in-series model. Each of the four components of total resistance was evaluated experimentally. Effects of operating variables such as transmembrane pressure and temperature on permeate flux rate and resistances were determined, and optimum conditions for maximum flux rates were evaluated. Model equations were developed to evaluate the resistance components responsible for fouling and to predict total flux decline with respect to time. Modeling results were in agreement with experimental results (R² > 0.98).
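The resistance-in-series model itself reduces to a Darcy-law expression, J = ΔP / (μ · ΣR), where the total resistance is the sum of the membrane, cake, pore-blocking and fouling resistances. A sketch with purely illustrative (not measured) values:

```python
def permeate_flux(delta_p, viscosity, resistances):
    """Darcy-law flux for a resistance-in-series model: J = dP / (mu * sum(R))."""
    return delta_p / (viscosity * sum(resistances))

# illustrative (not measured) resistance components, in 1/m:
# membrane, cake layer, pore blocking, irreversible fouling
R = [2.0e12, 1.5e12, 0.5e12, 1.0e12]
dP = 2.0e5     # transmembrane pressure, Pa
mu = 1.0e-3    # permeate viscosity, Pa*s
J = permeate_flux(dP, mu, R)   # flux in m^3 / (m^2 * s)
```

Fitting each resistance component against time is what lets such a model attribute the observed flux decline to fouling mechanisms.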
Model of a synthetic wind speed time series generator
Negra, N.B.; Holmstrøm, O.; Bak-Jensen, B.
2008-01-01
of possible wind conditions. If these information are not available, synthetic wind speed time series may be a useful tool as well, but their generator must preserve statistical and stochastic features of the phenomenon. This paper deals with this issue: a generator for synthetic wind speed time series...
Forecasting Financial Time-Series using Artificial Market Models
Gupta, N; Johnson, N F; Gupta, Nachi; Hauser, Raphael; Johnson, Neil F.
2005-01-01
We discuss the theoretical machinery involved in predicting financial market movements using an artificial market model which has been trained on real financial data. This approach to market prediction - in particular, forecasting financial time-series by training a third-party or 'black box' game on the financial data itself -- was discussed by Johnson et al. in cond-mat/0105303 and cond-mat/0105258 and was based on some encouraging preliminary investigations of the dollar-yen exchange rate, various individual stocks, and stock market indices. However, the initial attempts lacked a clear formal methodology. Here we present a detailed methodology, using optimization techniques to build an estimate of the strategy distribution across the multi-trader population. In contrast to earlier attempts, we are able to present a systematic method for identifying 'pockets of predictability' in real-world markets. We find that as each pocket closes up, the black-box system needs to be 'reset' - which is equivalent to sayi...
Time-varying parameter auto-regressive models for autocovariance nonstationary time series
FEI WanChun; BAI Lun
2009-01-01
In this paper, autocovariance nonstationary time series is clearly defined on a family of time series. We propose three types of TVPAR (time-varying parameter auto-regressive) models: the full order TVPAR model, the time-unvarying order TVPAR model and the time-varying order TVPAR model for autocovariance nonstationary time series. Related minimum AIC (Akaike information criterion) estimations are carried out.
Kennedy Curtis E
2011-10-01
Background: Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk of cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units, though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods: We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results: Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: (1) selecting candidate variables; (2) specifying measurement parameters; (3) defining data format; (4) defining time window duration and resolution; (5) calculating latent variables for candidate variables not directly measured; (6) calculating time series features as latent variables; (7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; (8)
Richly parameterized linear models additive, time series, and spatial models using random effects
Hodges, James S
2013-01-01
A First Step toward a Unified Theory of Richly Parameterized Linear Models. Using mixed linear models to analyze data often leads to results that are mysterious, inconvenient, or wrong. Further compounding the problem, statisticians lack a cohesive resource for acquiring a systematic, theory-based understanding of models with random effects. Richly Parameterized Linear Models: Additive, Time Series, and Spatial Models Using Random Effects takes a first step in developing a full theory of richly parameterized models, which would allow statisticians to better understand their analysis results. The aut
The First European Parabolic Flight Campaign with the Airbus A310 ZERO-G
Pletser, Vladimir; Rouquette, Sebastien; Friedrich, Ulrike; Clervoy, Jean-Francois; Gharib, Thierry; Gai, Frederic; Mora, Christophe
2016-12-01
Aircraft parabolic flights repetitively provide up to 23 seconds of reduced gravity during ballistic flight manoeuvres. Parabolic flights are used to conduct short microgravity investigations in Physical and Life Sciences and in Technology, to test instrumentation prior to space flights and to train astronauts before a space mission. The use of parabolic flights is complementary to other microgravity carriers (drop towers, sounding rockets), and preparatory to manned space missions on board the International Space Station and other manned spacecraft, such as Shenzhou and the future Chinese Space Station. After 17 years of using the Airbus A300 ZERO-G, the French company Novespace, a subsidiary of the 'Centre National d'Etudes Spatiales' (CNES, French Space Agency), based in Bordeaux, France, purchased a new aircraft, an Airbus A310, to perform parabolic flights for microgravity research in Europe. Since April 2015, the European Space Agency (ESA), CNES and the 'Deutsches Zentrum für Luft- und Raumfahrt e.V.' (DLR, the German Aerospace Center) have used this new aircraft, the Airbus A310 ZERO-G, for research experiments in microgravity. The first campaign was a cooperative campaign shared by the three agencies, followed by respectively a CNES, an ESA and a DLR campaign. This paper presents the new Airbus A310 ZERO-G, its main characteristics and its interfaces for scientific experiments. The experiments conducted during the first European campaign are presented.
Prediction and interpolation of time series by state space models
Helske, Jouni
2015-01-01
A large amount of data collected today is in the form of a time series. In order to make realistic inferences based on time series forecasts, in addition to point predictions, prediction intervals or other measures of uncertainty should be presented. Multiple sources of uncertainty are often ignored due to the complexities involved in accounting for them correctly. In this dissertation, some of these problems are reviewed and some new solutions are presented. A state space approach...
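The canonical state space example is a local-level model filtered with scalar Kalman recursions. The sketch below is a generic illustration with assumed noise variances, not code or data from the dissertation; missing observations (None) are skipped in the update step, which is how such models interpolate over gaps.

```python
def kalman_local_level(y, q, r, m0=0.0, p0=1e6):
    """Kalman filter for a local-level state space model:
       state:  m_t = m_{t-1} + w_t,  w_t ~ N(0, q)
       obs:    y_t = m_t + v_t,      v_t ~ N(0, r)
    A diffuse prior (large p0) lets the first observation dominate."""
    m, p, out = m0, p0, []
    for obs in y:
        p = p + q                    # predict step
        if obs is not None:          # update step, skipped for missing data
            k = p / (p + r)          # Kalman gain
            m = m + k * (obs - m)
            p = (1 - k) * p
        out.append(m)
    return out

# short series with a gap; values are illustrative only
series = [10.0, 10.2, None, None, 10.8, 11.0]
filtered = kalman_local_level(series, q=0.01, r=0.1)
```

During the gap the state estimate is simply carried forward while its variance grows, so later observations receive correspondingly higher gain.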
The Exponential Model for the Spectrum of a Time Series: Extensions and Applications
Proietti, Tommaso; Luati, Alessandra
The exponential model for the spectrum of a time series and its fractional extensions are based on the Fourier series expansion of the logarithm of the spectral density. The coefficients of the expansion form the cepstrum of the time series. After deriving the cepstrum of important classes of time...
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
New Models for Forecasting Enrollments: Fuzzy Time Series and Neural Network Approaches.
Song, Qiang; Chissom, Brad S.
Since university enrollment forecasting is very important, many different methods and models have been proposed by researchers. Two new methods for enrollment forecasting are introduced: (1) the fuzzy time series model; and (2) the artificial neural networks model. Fuzzy time series has been proposed to deal with forecasting problems within a…
Time series, correlation matrices and random matrix models
Vinayak (Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México, C.P. 62210 Cuernavaca, Mexico); Seligman, Thomas H. (Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México, C.P. 62210 Cuernavaca, Mexico, and Centro Internacional de Ciencias, C.P. 62210 Cuernavaca, Mexico)
2014-01-08
In this set of five lectures the authors present techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role in describing a null hypothesis, or a minimum-information hypothesis, for the description of a quantum system or subsystem. In the former case we consider various forms of correlation matrices of time series associated with the classical observables of some system. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time influence leads to a random or stochastic component in these time series. In consequence the correlation matrices have a random component, and corresponding ensembles are used. In the latter case we use random matrices to describe a high-temperature environment or uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.
Time series count data models: an empirical application to traffic accidents.
Quddus, Mohammed A
2008-09-01
Count data are primarily categorised as cross-sectional, time series, and panel. Over the past decade, Poisson and Negative Binomial (NB) models have been used widely to analyse cross-sectional and time series count data, and random effect and fixed effect Poisson and NB models have been used to analyse panel count data. However, recent literature suggests that although the underlying distributional assumptions of these models are appropriate for cross-sectional count data, they are not capable of taking into account the effect of serial correlation often found in pure time series count data. Real-valued time series models, such as the autoregressive integrated moving average (ARIMA) model introduced by Box and Jenkins, have been used in many applications over the last few decades. However, when modelling non-negative integer-valued data such as traffic accidents at a junction over time, Box and Jenkins models may be inappropriate. This is mainly due to the normality assumption of errors in the ARIMA model. Over the last few years, a new class of time series models, known as integer-valued autoregressive (INAR) Poisson models, has been studied by many authors. This class of models is particularly applicable to the analysis of time series count data, as these models retain the properties of Poisson regression, are able to deal with serial correlation, and therefore offer an alternative to the real-valued time series models. The primary objective of this paper is to introduce the class of INAR models for the time series analysis of traffic accidents in Great Britain. Different types of time series count data are considered: aggregated time series data where both the spatial and temporal units of observation are relatively large (e.g., Great Britain and years) and disaggregated time series data where both the spatial and temporal units are relatively small (e.g., a congestion charging zone and months). The performance of the INAR models is compared with the class of Box and Jenkins models…
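The binomial-thinning mechanism behind INAR models is compact enough to sketch. The function below (names, parameters, and the Knuth-style Poisson sampler are our own illustrative choices, not taken from the paper) simulates an INAR(1) process X_t = α∘X_{t-1} + ε_t with Poisson(λ) innovations, where α∘X keeps each of the X_{t-1} counts independently with probability α:

```python
import math
import random

def inar1(alpha, lam, n, seed=42):
    """Simulate an INAR(1) series: X_t = alpha o X_{t-1} + eps_t.

    'o' is binomial thinning: each of the X_{t-1} counts survives
    independently with probability alpha; eps_t ~ Poisson(lam).
    """
    rng = random.Random(seed)

    def poisson(rate):
        # Knuth's method: multiply uniforms until the product drops below exp(-rate).
        L, k, p = math.exp(-rate), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    x = poisson(lam / (1 - alpha))  # start near the stationary mean lam/(1-alpha)
    series = [x]
    for _ in range(n - 1):
        survivors = sum(rng.random() < alpha for _ in range(x))  # binomial thinning
        x = survivors + poisson(lam)
        series.append(x)
    return series
```

Unlike a Gaussian ARIMA fit, every simulated value is a non-negative integer, which is exactly the property that makes this class attractive for accident counts.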
Financial-Economic Time Series Modeling and Prediction Techniques – Review
2014-01-01
Financial-economic time series are distinguished from other time series because they contain a portion of uncertainty. Because of this, statistical theory and methods play an important role in their analysis. Moreover, the external influence of various parameters on the values in the time series makes them non-linear, which in turn suggests the employment of more complex techniques for their modeling. To cope with this challenging problem, many researchers and scientists have developed various models a...
Evolution of Black-Box Models Based on Volterra Series
Daniel D. Silveira
2015-01-01
This paper presents a historical review of the many behavioral models actually used to model radio frequency power amplifiers and a new classification of these behavioral models. It also discusses the evolution of these models, from a single polynomial to multirate Volterra models, presenting equations and estimation methods. New trends in RF power amplifier behavioral modeling are suggested.
Multivariate nonlinear time series modeling of exposure and risk in road safety research
Bijleveld, F.; Commandeur, J.; Montfort, van K.; Koopman, S.J.
2010-01-01
A multivariate non-linear time series model for road safety data is presented. The model is applied in a case study of the development of a yearly time series of numbers of fatal accidents (inside and outside urban areas) and numbers of kilometres driven by motor vehicles in the Netherlands between…
Model-Coupled Autoencoder for Time Series Visualisation
Gianniotis, Nikolaos; Tiňo, Peter; Polsterer, Kai L
2016-01-01
We present an approach for the visualisation of a set of time series that combines an echo state network with an autoencoder. For each time series in the dataset we train an echo state network, using a common and fixed reservoir of hidden neurons, and use the optimised readout weights as the new representation. Dimensionality reduction is then performed via an autoencoder on the readout weight representations. The crux of the work is to equip the autoencoder with a loss function that correctly interprets the reconstructed readout weights by associating them with a reconstruction error measured in the data space of sequences. This essentially amounts to measuring the predictive performance that the reconstructed readout weights exhibit on their corresponding sequences when plugged back into the echo state network with the same fixed reservoir. We demonstrate that the proposed visualisation framework can deal with real-valued as well as binary sequences. We derive magnification factors in order to…
Travel cost inference from sparse, spatio-temporally correlated time series using markov models
Yang, B.; Guo, C.; Jensen, C.S.
2013-01-01
The monitoring of a system can yield a set of measurements that can be modeled as a collection of time series. These time series are often sparse, due to missing measurements, and spatio-temporally correlated, meaning that spatially close time series exhibit temporal correlation. The analysis of such time series offers insight into the underlying system and enables prediction of system behavior. While the techniques presented in the paper apply more generally, we consider the case of transportation systems and aim to predict travel cost from GPS tracking data from probe vehicles. Specifically, each road segment has an associated travel-cost time series, which is derived from GPS data. We use spatio-temporal hidden Markov models (STHMM) to model correlations among different traffic time series. We provide algorithms that are able to learn the parameters of an STHMM while contending…
Learning restricted Boolean network model by time-series data.
Ouyang, Hongjia; Fang, Jie; Shen, Liangzhong; Dougherty, Edward R; Liu, Wenbin
2014-01-01
Restricted Boolean networks are simplified Boolean networks that are required for either negative or positive regulations between genes. Higa et al. (BMC Proc 5:S5, 2011) proposed a three-rule algorithm to infer a restricted Boolean network from time-series data. However, the algorithm suffers from a major drawback, namely, it is very sensitive to noise. In this paper, we systematically analyze the regulatory relationships between genes based on the state switch of the target gene and propose an algorithm with which restricted Boolean networks may be inferred from time-series data. We compare the proposed algorithm with the three-rule algorithm and the best-fit algorithm based on both synthetic networks and a well-studied budding yeast cell cycle network. The performance of the algorithms is evaluated by three distance metrics: the normalized-edge Hamming distance μhame, the normalized Hamming distance of state transition μhamst, and the steady-state distribution distance μssd. Results show that the proposed algorithm outperforms the others according to both μhame and μhamst, whereas its performance according to μssd is intermediate between the best-fit and three-rule algorithms. Thus, our new algorithm is more appropriate for inferring interactions between genes from time-series data.
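A structural distance of the normalized-edge Hamming kind can be sketched in a few lines. The normalization below (disagreements divided by all gene pairs) is one plausible reading only; the paper's exact definition is not reproduced in this abstract:

```python
def normalized_edge_hamming(adj_a, adj_b):
    """Normalized-edge Hamming distance between two network structures.

    adj_a and adj_b are n x n 0/1 adjacency matrices (inferred vs. true
    network); the distance is the fraction of ordered gene pairs on
    which the two matrices disagree.
    """
    n = len(adj_a)
    diff = sum(adj_a[i][j] != adj_b[i][j] for i in range(n) for j in range(n))
    return diff / (n * n)
```

The metric is 0 for identical networks and 1 when every possible edge decision differs.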
The modified Yule-Walker method for α-stable time series models
Kruczek, Piotr; Wyłomańska, Agnieszka; Teuerle, Marek; Gajda, Janusz
2017-03-01
This paper discusses the problem of parameter estimation for stable periodic autoregressive (PAR) time series. The considered models generalize the popular and widely accepted autoregressive (AR) time series. By examining measures of dependence for α-stable processes, we first introduce a new empirical estimator of the autocovariation for α-stable sequences. Based on this approach, we generalize the Yule-Walker method to the estimation of the parameters of PAR time series, thus filling a gap in estimation methods for non-Gaussian models. We test the proposed procedure and show its consistency. Moreover, we use our approach to model real empirical data, showing the usefulness of heavy-tailed models in statistical modelling.
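For reference, the classical second-order Yule-Walker procedure that the paper generalizes can be sketched as follows. This is the standard Gaussian version built on sample autocovariances, not the autocovariation-based estimator the authors propose for α-stable data:

```python
def acov(x, lag):
    # biased sample autocovariance at the given lag
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n

def yule_walker_ar2(x):
    """Solve the order-2 Yule-Walker equations for (phi1, phi2).

    [r0 r1; r1 r0] [phi1; phi2] = [r1; r2], solved by Cramer's rule.
    """
    r0, r1, r2 = acov(x, 0), acov(x, 1), acov(x, 2)
    det = r0 * r0 - r1 * r1
    phi1 = (r0 * r1 - r1 * r2) / det
    phi2 = (r0 * r2 - r1 * r1) / det
    return phi1, phi2
```

For α-stable series the autocovariance does not exist, which is precisely why the paper substitutes the autocovariation in these equations.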
An Exponential Model for the Spectrum of a Scalar Time Series
A new class of parametric models for the spectrum of a scalar time series is proposed, in which the logarithm of the spectral density function is represented by a finite Fourier series. Two alternative parameter estimation procedures are described, and the use of a fitted model to provide forecasts of future values is discussed. The model has been compared with the more conventional autoregressive/moving-average model, and the results of their comparison are given.
Time Series Modelling of Syphilis Incidence in China from 2005 to 2012.
Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau
2016-01-01
The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). The syphilis incidence rates increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed the highest goodness-of-fit with the ARIMA(0,0,1)×(0,1,1) model. Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance to the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis.
HIGH ORDER FUZZY TIME SERIES MODEL AND ITS APPLICATION TO IMKB
Çağdaş Hakan ALADAĞ
2010-12-01
The observations of some real time series, such as temperature and stock market series, can take different values within a day. Instead of representing the observations of these time series by real numbers, employing linguistic values or fuzzy sets can be more appropriate. In recent years, many approaches have been introduced to analyze time series consisting of observations which are fuzzy sets; such time series are called fuzzy time series. In this study, a novel approach is proposed to analyze a high order fuzzy time series model. The proposed method is applied to IMKB data and the obtained results are discussed. The IMKB data is also analyzed using some other fuzzy time series methods available in the literature, and the results are compared with those obtained from the proposed method. The comparison shows that the proposed method produces accurate forecasts.
Fitting ARMA Time Series by Structural Equation Models.
van Buuren, Stef
1997-01-01
This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)
Extracting the relevant delays in time series modelling
Goutte, Cyril
1997-01-01
In this contribution, we suggest a convenient way to use generalisation error to extract the relevant delays from a time-varying process, i.e. the delays that lead to the best prediction performance. We design a generalisation-based algorithm that takes its inspiration from traditional variable selection, and more precisely stepwise forward selection. The method is compared to other forward selection schemes, as well as to a nonparametric test aimed at estimating the embedding dimension of time series. The final application extends these results to the efficient estimation of FIR filters on some…
Markov Model of Wind Power Time Series Using Bayesian Inference of Transition Matrix
Chen, Peiyuan; Berthelsen, Kasper Klitgaard; Bak-Jensen, Birgitte
2009-01-01
This paper proposes to use Bayesian inference of the transition matrix when developing a discrete Markov model of a wind speed/power time series, together with a 95% credible interval for model verification. The Dirichlet distribution is used as a conjugate prior for the transition matrix. Three discrete Markov models are compared, i.e. the basic Markov model, the Bayesian Markov model and the birth-and-death Markov model. The proposed Bayesian Markov model shows the best accuracy in modeling the autocorrelation of the wind power time series.
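Because the Dirichlet is conjugate to the categorical transition likelihood, the posterior mean of the transition matrix is just smoothed counts. A minimal sketch, assuming a symmetric Dirichlet(α, …, α) prior on each row (the paper's exact prior choice may differ):

```python
def bayes_transition_matrix(states, n_states, alpha=1.0):
    """Posterior-mean transition matrix under a symmetric Dirichlet prior.

    Each row gets an independent Dirichlet(alpha, ..., alpha) prior; with
    observed transition counts c_ij, the posterior mean of row i is
    (c_ij + alpha) / (sum_j c_ij + n_states * alpha).
    """
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):  # consecutive state pairs
        counts[a][b] += 1
    post = []
    for row in counts:
        total = sum(row) + n_states * alpha
        post.append([(c + alpha) / total for c in row])
    return post
```

The prior pseudo-counts keep every row a proper probability distribution even for states that were never visited, which is what makes the credible-interval verification in the abstract well defined.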
Modeling refractive metasurfaces in series as a single metasurface
Toor, Fatima; Guneratne, Ananda C.
2016-03-01
Metasurfaces are boundaries between two media that are engineered to induce an abrupt phase shift in propagating light over a distance comparable to the wavelength of the light. Metasurface applications exploit this rapid phase shift to allow precise control of wavefronts. The phase gradient is used to compute the angle at which light is refracted using the generalized Snell's law [1]. In practice, refractive metasurfaces are designed using a relatively small number of phase-shifting elements, such that the phase gradient is discrete rather than continuous. Designing such a metasurface requires finding phase-shifting elements that cover a full range of phases (a phase range) from 0 to 360 degrees. We demonstrate an analytical technique to calculate the refraction angle due to multiple metasurfaces arranged in series without needing to account for the effect of each individual metasurface. The phase gradients of refractive metasurfaces in series may be summed to obtain the phase gradient of a single equivalent refractive metasurface. This result is relevant to any application that requires a system with multiple metasurfaces, such as biomedical imaging [2], wavefront correctors [3], and beam shaping [4].
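The summed-gradient result can be sketched numerically with the generalized Snell's law, n_t sin θ_t − n_i sin θ_i = (λ/2π) dΦ/dx. Function and parameter names below are illustrative, not from the paper:

```python
import math

def refraction_angle(theta_i_deg, wavelength, phase_gradients, n_i=1.0, n_t=1.0):
    """Refraction angle (degrees) from the generalized Snell's law.

    For several metasurfaces in series, the individual phase gradients
    (rad/m) are summed into one equivalent gradient, as the abstract states.
    Returns None when the refracted beam would be evanescent.
    """
    total_gradient = sum(phase_gradients)  # equivalent single-surface gradient
    s = (n_i * math.sin(math.radians(theta_i_deg))
         + wavelength * total_gradient / (2.0 * math.pi)) / n_t
    if abs(s) > 1.0:
        return None  # |sin(theta_t)| > 1: no propagating refracted beam
    return math.degrees(math.asin(s))
```

With a zero gradient and matched indices the law reduces to ordinary refraction, and splitting one gradient across two surfaces leaves the refraction angle unchanged.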
On the Practice of Bayesian Inference in Basic Economic Time Series Models using Gibbs Sampling
M.D. de Pooter (Michiel); R. Segers (René); H.K. van Dijk (Herman)
2006-01-01
Several lessons learned from a Bayesian analysis of basic economic time series models by means of the Gibbs sampling algorithm are presented. Models include the Cochrane-Orcutt model for serial correlation, the Koyck distributed lag model, the Unit Root model, the Instrumental Variables…
van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András
2015-04-01
For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, intelligent design concepts often use uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different approaches of precipitation models regarding their ability to reproduce rainfall and runoff characteristics. These include one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach from a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event based statistics like mean dry spell and wet spell duration, wet spell amount and intensity, long term means of precipitation sum and number of events, and extreme value distributions for different durations. Then they are compared regarding simulated discharge characteristics using an urban hydrological model on a fictitious sewage network. First results show a principal suitability of all rainfall models but with different strengths and weaknesses regarding the different rainfall and runoff characteristics considered.
A flexible coefficient smooth transition time series model.
Medeiros, Marcelo C; Veiga, Alvaro
2005-01-01
In this paper, we consider a flexible smooth transition autoregressive (STAR) model with multiple regimes and multiple transition variables. This formulation can be interpreted as a time varying linear model where the coefficients are the outputs of a single hidden layer feedforward neural network. This proposal has the major advantage of nesting several nonlinear models, such as, the self-exciting threshold autoregressive (SETAR), the autoregressive neural network (AR-NN), and the logistic STAR models. Furthermore, if the neural network is interpreted as a nonparametric universal approximation to any Borel measurable function, our formulation is directly comparable to the functional coefficient autoregressive (FAR) and the single-index coefficient regression models. A model building procedure is developed based on statistical inference arguments. A Monte Carlo experiment showed that the procedure works in small samples, and its performance improves, as it should, in medium size samples. Several real examples are also addressed.
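The smooth-transition idea can be illustrated for the simplest two-regime logistic STAR(1) case. The coefficient values below are arbitrary illustrative choices, not taken from the paper:

```python
import math

def lstar_mean(y_lag, gamma=5.0, c=0.0, phi_low=(0.1, 0.8), phi_high=(0.5, -0.4)):
    """Conditional mean of a two-regime logistic STAR(1) model.

    The logistic weight G smoothly switches between two AR(1) regimes as
    the transition variable (here the lagged value itself) crosses the
    threshold c; gamma controls how sharp the switch is.
    """
    G = 1.0 / (1.0 + math.exp(-gamma * (y_lag - c)))
    low = phi_low[0] + phi_low[1] * y_lag    # regime active when y_lag << c
    high = phi_high[0] + phi_high[1] * y_lag  # regime active when y_lag >> c
    return (1.0 - G) * low + G * high
```

As gamma grows the model approaches a SETAR model with a hard threshold, which is one of the nestings the abstract mentions.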
Calibration of transient groundwater models using time series analysis and moment matching
Bakker, M.; Maas, K.; Von Asmuth, J.R.
2008-01-01
A comprehensive and efficient approach is presented for the calibration of transient groundwater models. The approach starts with time series analysis of the measured heads in observation wells, using all active stresses as input series, which may include rainfall, evaporation, and surface water levels…
Design considerations for case series models with exposure onset measurement error.
Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V
2013-02-28
The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model.
Mixed Portmanteau Test for Diagnostic Checking of Time Series Models
Sohail Chand
2014-01-01
Model criticism is an important stage of model building, and goodness-of-fit tests thus provide a set of tools for diagnostic checking of the fitted model. Several tests are suggested in the literature for diagnostic checking. These tests use the autocorrelation or partial autocorrelation of the residuals to judge the adequacy of the fitted model. The main idea underlying these portmanteau tests is to identify whether there is any dependence structure which is as yet unexplained by the fitted model. In this paper, we suggest mixed portmanteau tests based on the autocorrelation and partial autocorrelation functions of the residuals. We derive the asymptotic distribution of the mixture test and study its size and power using Monte Carlo simulations.
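A standard autocorrelation-based portmanteau statistic of the kind such mixed tests build on is the Ljung-Box Q. The sketch below implements the textbook formula, not the authors' mixed statistic:

```python
def ljung_box(resid, max_lag):
    """Ljung-Box portmanteau statistic on a residual series.

    Q = n(n+2) * sum_{k=1..m} r_k^2 / (n - k), where r_k is the lag-k
    sample autocorrelation of the residuals; a large Q signals leftover
    serial dependence that the fitted model failed to capture.
    """
    n = len(resid)
    m = sum(resid) / n
    denom = sum((e - m) ** 2 for e in resid)
    q = 0.0
    for k in range(1, max_lag + 1):
        r_k = sum((resid[t] - m) * (resid[t + k] - m) for t in range(n - k)) / denom
        q += r_k * r_k / (n - k)
    return n * (n + 2) * q
```

Under an adequate model Q is approximately chi-squared with m degrees of freedom (reduced by the number of fitted ARMA parameters), so strongly autocorrelated residuals produce a far larger Q than white noise.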
A feature fusion based forecasting model for financial time series.
Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie
2014-01-01
Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in the area of prediction than other two similar models.
Application of uncertainty reasoning based on cloud model in time series prediction
张锦春; 胡谷雨
2003-01-01
Time series prediction has been successfully used in several application areas, such as meteorological forecasting, market prediction, network traffic forecasting, etc., and a number of techniques have been developed for modeling and predicting time series. In the traditional exponential smoothing method, a fixed weight is assigned to data history, and the trend changes of time series are ignored. In this paper, an uncertainty reasoning method, based on cloud model, is employed in time series prediction, which uses cloud logic controller to adjust the smoothing coefficient of the simple exponential smoothing method dynamically to fit the current trend of the time series. The validity of this solution was proved by experiments on various data sets.
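The fixed-weight limitation described above is easy to see in code. The sketch below pairs plain exponential smoothing with a crude error-driven adaptive coefficient (a Trigg-Leach-style tracking signal), used here only as a stand-in for the paper's cloud logic controller, which is not reproduced:

```python
def adaptive_ses(series, alpha0=0.3, phi=0.2):
    """Exponential smoothing with a simple adaptive smoothing coefficient.

    Plain SES uses s_t = alpha*x_t + (1-alpha)*s_{t-1} with a fixed alpha.
    Here alpha_t = |E_t| / M_t, where E_t and M_t are exponentially
    smoothed signed and absolute one-step errors: alpha rises toward 1
    when errors become persistently one-sided (i.e. the trend shifts).
    """
    s = series[0]
    E = M = 0.0
    alpha = alpha0
    fitted = [s]
    for x in series[1:]:
        err = x - s
        E = phi * err + (1 - phi) * E          # smoothed signed error
        M = phi * abs(err) + (1 - phi) * M     # smoothed absolute error
        if M > 1e-12:
            alpha = min(1.0, abs(E) / M)       # tracking signal as new alpha
        s = alpha * x + (1 - alpha) * s
        fitted.append(s)
    return fitted
```

On a series with a sudden level shift, the fixed-alpha smoother lags behind while the adaptive version snaps to the new level, which is the behaviour the cloud-model controller is also after.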
Mathematical Model of Thyristor Inverter Including a Series-parallel Resonant Circuit
Miroslaw Luft
2008-01-01
The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit, derived with the aid of the state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.
Multilayer stock forecasting model using fuzzy time series.
Javedani Sadaei, Hossein; Lee, Muhammad Hisyam
2014-01-01
After reviewing the vast body of literature on using FTS in stock market forecasting, certain deficiencies are identified in the hybridization of findings. In addition, the lack of a constructive systematic framework, which could help to indicate the direction of growth of entire FTS forecasting systems, is notable. In this study, we propose a multilayer model for stock market forecasting comprising five logically significant layers. Every single layer has its own detailed concern and assists forecast development by resolving certain problems exclusively. To verify the model, a large dataset comprising the Taiwan Stock Index (TAIEX), the National Association of Securities Dealers Automated Quotations (NASDAQ), the Dow Jones Industrial Average (DJI), and the S&P 500 was chosen as the experimental data. The results indicate that the proposed methodology has the potential to be accepted as a framework for model development in stock market forecasting using FTS.
Nonlinear Time Series Model for Shape Classification Using Neural Networks
(no author listed)
2000-01-01
A complex nonlinear exponential autoregressive (CNEAR) model for invariant feature extraction is developed for recognizing arbitrary shapes on a plane. A neural network is used to calculate the CNEAR coefficients. The coefficients, which constitute the feature set, are proven to be invariant to boundary transformations such as translation, rotation, scale and choice of starting point in tracing the boundary. The feature set is then used as the input to a complex multilayer perceptron (C-MLP) network for learning and classification. Experimental results show that complicated shapes can be accurately recognized even with the low-order model and that the classification method has good fault tolerance when noise is present.
Modeling Financial Time Series Based on a Market Microstructure Model with Leverage Effect
Yanhui Xi
2016-01-01
The basic market microstructure model specifies that the price/return innovation and the volatility innovation are independent Gaussian white noise processes. However, the financial leverage effect has been found to be statistically significant in many financial time series. In this paper, a novel market microstructure model with leverage effects is proposed. The model specification assumes a negative correlation between the errors of the price/return innovation and the volatility innovation. With the new representation, a theoretical explanation of the leverage effect is provided. Simulated data and daily stock market indices (the Shanghai composite index, the Shenzhen component index, and the Standard and Poor's 500 Composite index) are used, via the Bayesian Markov chain Monte Carlo (MCMC) method, to estimate the leverage market microstructure model. The results verify the effectiveness of the model and of the estimation approach proposed in the paper, and also indicate that the stock markets have strong leverage effects. Compared with the classical leverage stochastic volatility (SV) model in terms of the DIC (Deviance Information Criterion), the leverage market microstructure model fits the data better.
Bayesian Modelling of fMRI Time Series
Højen-Sørensen, Pedro; Hansen, Lars Kai; Rasmussen, Carl Edward
2000-01-01
We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single-trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short-time learning effects between repeated trials is possible, since inference is based only on single-trial experiments.
Linear models for multivariate, time series, and spatial data
Christensen, Ronald
1991-01-01
This is a companion volume to Plane Answers to Complex Questions: The Theory of Linear Models. It consists of six additional chapters written in the same spirit as the last six chapters of the earlier book. Brief introductions are given to topics related to linear model theory. No attempt is made to give a comprehensive treatment of the topics. Such an effort would be futile. Each chapter is on a topic so broad that an in-depth discussion would require a book-length treatment. People need to impose structure on the world in order to understand it. There is a limit to the number of unrelated facts that anyone can remember. If ideas can be put within a broad, sophisticatedly simple structure, not only are they easier to remember but often new insights become available. In fact, sophisticatedly simple models of the world may be the only ones that work. I have often heard Arnold Zellner say that, to the best of his knowledge, this is true in econometrics. The process of modeling is fundamental to understand...
Prerequisites for modeling price and return data series for the Bucharest Stock Exchange
Andrei TINCA
2013-11-01
Full Text Available Time series data from the capital market exhibit certain qualities which invalidate the hypotheses necessary for obtaining meaningful results from statistical modeling. This paper presents some of these qualities by looking at the time series for prices and returns on the Romanian Stock Exchange. The examples are based on the price and return time series of the Antibiotice securities and the BET-C index. The choice of a certain security and of the stock exchange index has been made with the intention of analyzing, in the future, the correlation between these two variables and drawing significant conclusions which can be used for forecasts. Firstly, we identify the empirical properties of the capital market, as they are described in the field research. Secondly, we investigate the prerequisites for modeling chronological data series, namely stationary mean and variance. Three methods are used: graphical representation, autocorrelation and the ADF (Augmented Dickey-Fuller) test. For the frequent cases where the mean is not stationary, we present the time series differencing method, which can be used to obtain stationary values. Lastly, we investigate the normality of the time series through the skewness and kurtosis measures and through the Jarque-Bera statistic. We find, as a characteristic of the capital market, that the majority of the time series for securities have non-normal distributions.
Research on power grid loss prediction model based on Granger causality property of time series
Wang, J. [North China Electric Power Univ., Beijing (China); State Grid Corp., Beijing (China); Yan, W.P.; Yuan, J. [North China Electric Power Univ., Beijing (China); Xu, H.M.; Wang, X.L. [State Grid Information and Telecommunications Corp., Beijing (China)
2009-03-11
This paper described a method of predicting power transmission line losses using the Granger causality property of time series. The stationarity of the time series was investigated using unit root tests. The Granger causality relationship between line losses and other variables was then determined. Granger-caused time series were then used to create the following 3 prediction models: (1) a binomial model of line losses with electricity sales as the predictor variable, (2) a model that considered both power sales and grid capacity, and (3) a model based on autoregressive distributed lag (ARDL) approaches that incorporated both power sales and the square of power sales as variables. A case study of data from China's electric power grid between 1980 and 2008 was used to evaluate model performance. Results of the study showed that the model error rates ranged between 2.7 and 3.9 percent. 6 refs., 3 tabs., 1 fig.
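The causality screen described above can be sketched with a plain F-test: do lags of a candidate driver (e.g. electricity sales) reduce the residual sum of squares of an autoregression of line losses? This is a minimal numpy/scipy sketch, not the paper's ARDL models, and the data below are synthetic.

```python
import numpy as np
from scipy import stats

def granger_f_test(y, x, lags=2):
    """Test whether lags of x help predict y beyond lags of y:
    compare a restricted AR model of y with one augmented by lags
    of x, via the usual F-statistic."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    target = y[lags:]
    ylags = np.column_stack([y[lags - j:n - j] for j in range(1, lags + 1)])
    xlags = np.column_stack([x[lags - j:n - j] for j in range(1, lags + 1)])
    ones = np.ones((n - lags, 1))
    Xr = np.hstack([ones, ylags])          # restricted model
    Xu = np.hstack([ones, ylags, xlags])   # augmented with lags of x

    def rss(X):
        beta = np.linalg.lstsq(X, target, rcond=None)[0]
        return float(np.sum((target - X @ beta) ** 2))

    rss_r, rss_u = rss(Xr), rss(Xu)
    df_u = (n - lags) - Xu.shape[1]
    F = ((rss_r - rss_u) / lags) / (rss_u / df_u)
    return F, stats.f.sf(F, lags, df_u)
```

A small p-value says x Granger-causes y at the chosen lag order; in the paper's setting y would be line losses and x a candidate such as power sales.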
A solution to the problem of constructing a state space model from time series
David Di Ruscio
1994-01-01
Full Text Available The problem of constructing minimal realizations from arbitrary input-output time series which are only covariance stationary (not necessarily stationary) is considered. An algorithm which solves this problem for a fairly nonrestrictive class of exogenous (input) signals is presented. The algorithm is based upon modeling nonzero exogenous signals by linear models and including these in the total system model.
Nonlinearity, Breaks, and Long-Range Dependence in Time-Series Models
Hillebrand, Eric Tobias; Medeiros, Marcelo C.
We study the simultaneous occurrence of long memory and nonlinear effects, such as parameter changes and threshold effects, in ARMA time series models and apply our modeling framework to daily realized volatility. Asymptotic theory for parameter estimation is developed and two model building...
Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series
Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik
2016-06-01
Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance a robust retrieval of biophysical parameters over the entire growing season at very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetically active radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high-resolution remote sensing. 30 Landsat 8 OLI scenes were available for our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season of 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their explicitly strong exploratory and predictive character. Variable importance measures allowed for analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the plant stock evolvement. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85, RMSE = 0.11; LAI: R² = 0.64, RMSE = 0.9; and chlorophyll content (SPAD): R² = 0.80, RMSE = 4.9. Our results demonstrate the great potential of machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model...
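A hedged sketch of the random-forest regression step on stand-in data: the real study used Landsat 8 OLI bands and field-measured FPAR/LAI/SPAD, whereas here a synthetic "band 0" drives a LAI-like target so the variable-importance readout can be checked.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical stand-in data: 5 "spectral band" features, where band 0
# drives a LAI-like target (coefficients and noise level are invented).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 5))
lai = 3.0 * X[:, 0] + 0.2 * rng.normal(size=300)

# Fit the forest and read out which band explains the target best,
# mirroring the variable-importance analysis described above.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X, lai)
importances = rf.feature_importances_  # band 0 should dominate
```

In the study, the same readout identifies the most informative Landsat bands and vegetation indices per biophysical parameter.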
A Stepwise Time Series Regression Procedure for Water Demand Model Identification
Miaou, Shaw-Pin
1990-09-01
Annual time series water demand has traditionally been studied through multiple linear regression analysis. Four associated model specification problems have long been recognized: (1) the length of the available time series data is relatively short, (2) a large set of candidate explanatory or "input" variables needs to be considered, (3) input variables can be highly correlated with each other (multicollinearity problem), and (4) model error series are often highly autocorrelated or even nonstationary. A stepwise time series regression identification procedure is proposed to alleviate these problems. The proposed procedure adopts the sequential input variable selection concept of stepwise regression and the "three-step" time series model building strategy of Box and Jenkins. Autocorrelated model error is assumed to follow an autoregressive integrated moving average (ARIMA) process. The stepwise selection procedure begins with a univariate time series demand model with no input variables. Subsequently, input variables are selected and inserted into the equation one at a time until the last entered variable is found to be statistically insignificant. The order of insertion is determined by a statistical measure called between-variable partial correlation. This correlation measure is free from the contamination of serial autocorrelation. Three data sets from previous studies are employed to illustrate the proposed procedure. The results are then compared with those from their original studies.
Monitoring Poisson time series using multi-process models
Engebjerg, Malene Dahl Skov; Lundbye-Christensen, Søren; Kjær, Birgitte B.
Surveillance of infectious diseases based on routinely collected public health data is important for at least three reasons: the early detection of an epidemic may facilitate prompt interventions; the seasonal variations and long-term trend may be of general epidemiological interest; and aspects of health resource management may also be addressed. In this paper we center on the detection of outbreaks of infectious diseases. This is achieved by a multi-process Poisson state space model taking autocorrelation and overdispersion into account, which has been applied to a data set concerning...
A four-stage hybrid model for hydrological time series forecasting.
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.
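The four-stage "denoising, decomposition and ensemble" structure above can be outlined as follows. This sketch keeps only the pipeline shape: a moving average stands in for EMD denoising, a trend/remainder split stands in for the EEMD decomposition, and persistence stands in for the RBFNN component predictors and the LNN ensemble.

```python
import numpy as np

def hybrid_forecast_skeleton(x, window=5):
    """One-step forecast following the four-stage shape of the hybrid
    model described above, with simple stand-ins for each stage."""
    x = np.asarray(x, float)
    # Stage 1: denoise (centered moving average, edges padded).
    kernel = np.ones(window) / window
    denoised = np.convolve(np.pad(x, window // 2, mode='edge'),
                           kernel, mode='valid')
    # Stage 2: decompose into a slow trend and a remainder component.
    trend = np.convolve(np.pad(denoised, 10, mode='edge'),
                        np.ones(21) / 21, mode='valid')
    remainder = denoised - trend
    # Stage 3: predict each component (persistence / linear stand-in).
    comp_preds = [trend[-1] + (trend[-1] - trend[-2]), remainder[-1]]
    # Stage 4: ensemble the component forecasts (plain sum here;
    # the paper trains a linear neural network for this step).
    return float(sum(comp_preds))
```

In the paper each stage is replaced by its stronger counterpart (EMD, EEMD, RBFNN, LNN); only the data flow is shown here.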
Moeeni, Hamid; Bonakdari, Hossein; Fatemi, Seyed Ehsan
2017-04-01
Because time series stationarization has a key role in stochastic modeling results, three methods are analyzed in this study. The methods are seasonal differencing, seasonal standardization and spectral analysis to eliminate the periodic effect on time series stationarity. First, six time series including 4 streamflow series and 2 water temperature series are stationarized. The stochastic term for these series obtained with ARIMA is subsequently modeled. For the analysis, 9228 models are introduced. It is observed that seasonal standardization and spectral analysis eliminate the periodic term completely, while seasonal differencing maintains seasonal correlation structures. The obtained results indicate that all three methods present acceptable performance overall. However, model accuracy in monthly streamflow prediction is higher with seasonal differencing than with the other two methods. Another advantage of seasonal differencing over the other methods is that the monthly streamflow is never estimated as negative. Standardization is the best method for predicting monthly water temperature although it is quite similar to seasonal differencing, while spectral analysis performed the weakest in all cases. It is concluded that for each monthly seasonal series, seasonal differencing is the best stationarization method in terms of periodic effect elimination. Moreover, the monthly water temperature is predicted with more accuracy than monthly streamflow. The criteria of the average stochastic term divided by the amplitude of the periodic term obtained for monthly streamflow and monthly water temperature were 0.19 and 0.30, 0.21 and 0.13, and 0.07 and 0.04 respectively. As a result, the periodic term is more dominant than the stochastic term for water temperature in the monthly water temperature series compared to streamflow series.
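The two stationarization methods that perform best above, seasonal differencing and seasonal standardization, reduce to a few lines for a monthly (period-12) series:

```python
import numpy as np

def seasonal_difference(x, period=12):
    """Seasonal differencing: x_t - x_{t-period}."""
    x = np.asarray(x, float)
    return x[period:] - x[:-period]

def seasonal_standardize(x, period=12):
    """Subtract each calendar month's mean and divide by its standard
    deviation, removing the periodic component month by month."""
    x = np.asarray(x, float)
    out = np.empty_like(x)
    for m in range(period):
        vals = x[m::period]
        out[m::period] = (vals - vals.mean()) / vals.std()
    return out
```

Differencing keeps the series in its original units (one reason, per the abstract, it never produces negative streamflow after back-transformation), while standardization removes the periodic mean and variance entirely.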
Road safety forecasts in five European countries using structural time series models.
Antoniou, Constantinos; Papadimitriou, Eleonora; Yannis, George
2014-01-01
Modeling road safety development is a complex task and needs to consider both the quantifiable impact of specific parameters as well as the underlying trends that cannot always be measured or observed. The objective of this research is to apply structural time series models for obtaining reliable medium- to long-term forecasts of road traffic fatality risk using data from 5 countries with different characteristics from all over Europe (Cyprus, Greece, Hungary, Norway, and Switzerland). Two structural time series models are considered: (1) the local linear trend model and (2) the latent risk time series model. Furthermore, a structured decision tree for the selection of the applicable model for each situation (developed within the Road Safety Data, Collection, Transfer and Analysis [DaCoTA] research project, cofunded by the European Commission) is outlined. First, the fatality and exposure data that are used for the development of the models are presented and explored. Then, the modeling process is presented, including the model selection process, introduction of intervention variables, and development of mobility scenarios. The forecasts using the developed models appear to be realistic and within acceptable confidence intervals. The proposed methodology is proved to be very efficient for handling different cases of data availability and quality, providing an appropriate alternative from the family of structural time series models in each country. A concluding section providing perspectives and directions for future research is presented.
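The first of the two structural models above, the local linear trend model, can be sketched as a two-state Kalman filter (level and slope). The variance parameters below are illustrative defaults, not the maximum-likelihood estimates a real application would use.

```python
import numpy as np

def local_linear_trend_filter(y, obs_var=1.0, level_var=0.1, slope_var=0.01):
    """Kalman filter for the local linear trend model:
    y_t = level_t + eps_t, level_{t+1} = level_t + slope_t + xi_t,
    slope_{t+1} = slope_t + zeta_t."""
    T = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    Z = np.array([1.0, 0.0])                 # observation vector
    Q = np.diag([level_var, slope_var])      # state noise covariance
    a = np.zeros(2)                          # state mean (level, slope)
    P = np.eye(2) * 1e6                      # near-diffuse prior
    levels = []
    for obs in y:
        # Prediction error and its variance.
        v = obs - Z @ a
        F = Z @ P @ Z + obs_var
        # Measurement update (filtering).
        K = P @ Z / F
        a = a + K * v
        P = P - np.outer(K, Z @ P)
        levels.append(a[0])
        # Time update.
        a = T @ a
        P = T @ P @ T.T + Q
    return np.array(levels), a  # filtered levels, one-step state forecast
```

Forecasting fatality risk then amounts to iterating the time update beyond the sample, with confidence intervals from the propagated state covariance.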
Hybrid model for forecasting time series with trend, seasonal and calendar variation patterns
Suhartono; Rahayu, S. P.; Prastyo, D. D.; Wijayanti, D. G. P.; Juliyanto
2017-09-01
Most monthly time series data in economics and business in Indonesia and other Moslem countries not only contain trend and seasonal patterns, but are also affected by two types of calendar variation effects, i.e. the effect of the number of working or trading days, and holiday effects. The purpose of this research is to develop a hybrid model, i.e. a combination of several forecasting models, to predict time series that contain trend, seasonal and calendar variation patterns. This hybrid model is a combination of classical models (namely time series regression and the ARIMA model) and/or modern methods (an artificial intelligence method, i.e. Artificial Neural Networks). A simulation study is used to show that the proposed procedure for building the hybrid model works well for forecasting time series with trend, seasonal and calendar variation patterns. Furthermore, the proposed hybrid model is applied to forecasting real data, i.e. monthly data on the inflow and outflow of currency at Bank Indonesia. The results show that the hybrid model tends to provide more accurate forecasts than the individual forecasting models. Moreover, this result is in line with the third conclusion of the M3 competition, i.e. that hybrid models on average provide more accurate forecasts than individual models.
On Fire regime modelling using satellite TM time series
Oddi, F.; Ghermandi, L.; Lanorte, A.; Lasaponara, R.
2009-04-01
Wildfires can cause environmental deterioration by modifying vegetation dynamics, since they have the capacity to change vegetation diversity and physiognomy. In semiarid regions like northwestern Patagonia, fire disturbance is also important because it can impact the potential productivity of the ecosystem: plant biomass is reduced, and with it the animal carrying capacity and/or the forest site quality, with negative economic implications. Knowledge of the fire regime in a region is therefore of great importance for understanding and predicting the responses of vegetation and their possible effects on the regional economy. Studies of this type at the landscape level can be addressed using GIS tools. Satellite imagery allows burned areas to be detected, and through a temporal analysis the fire regime can be determined and changes detected at the landscape scale. The study area is located east of the city of Bariloche, including the San Ramon Ranch (22,000 ha) and its environs, in the ecotone formed by the sub-Antarctic forest and the Patagonian steppe. We worked with multispectral Landsat TM and Landsat ETM+ images of 30 m spatial resolution obtained at different times. For the spatial analysis we used the software Erdas Imagine 9.0 and ArcView 3.3. Vegetation types were discriminated, and areas affected by fires in different years were determined. We determined the level of change in vegetation induced by fire. In the future, the use of high spatial resolution images combined with higher spectral resolution will allow burned areas to be distinguished with greater precision in the study area. Also, the use of digital terrain models derived from satellite imagery, associated with climatic variables, will allow the relationship between them and the dynamics of vegetation to be modeled.
Stochastic modeling of Lake Van water level time series with jumps and multiple trends
H. Aksoy
2013-06-01
Full Text Available In the 1990s, the water level in the closed-basin Lake Van, located in Eastern Anatolia, Turkey, rose by about 2 m. Analysis of the hydrometeorological data shows that the change in the water level is related to the water budget of the lake. In this study, stochastic models are proposed for simulating monthly water level data. Two models, considering mono- and multiple-trend time series, are developed. The models are derived after removal of trend and periodicity from the dataset. The trend observed in the lake water level time series is fitted by mono- and multiple-trend lines. In the so-called mono-trend model, the time series is treated as a whole under the hypothesis that the lake water level has an increasing trend. In the second (so-called multiple-trend) model, the time series is divided into a number of segments, to each of which a linear trend can be fitted separately. Application to the lake water level data shows that four segments, each fitted with a trend line, are meaningful. Both the mono- and multiple-trend models are used for simulation of synthetic lake water level time series under the hypothesis that the observed mono- and multiple-trend structure of the lake water level persists during the simulation period. The multiple-trend model is found better for planning future infrastructural projects in areas surrounding the lake, as it generates higher maxima for the simulated lake water level.
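The multiple-trend idea reduces to fitting a separate least-squares line per segment once breakpoints are chosen. In the paper the segmentation itself is part of the analysis, so the known-breakpoint version below is only a sketch:

```python
import numpy as np

def multiple_trend_fit(x, breakpoints):
    """Fit a separate least-squares trend line to each segment of x,
    with segment boundaries given by `breakpoints` (indices)."""
    x = np.asarray(x, float)
    t = np.arange(len(x))
    fitted = np.empty_like(x)
    edges = [0] + list(breakpoints) + [len(x)]
    for a, b in zip(edges[:-1], edges[1:]):
        coef = np.polyfit(t[a:b], x[a:b], 1)   # slope, intercept
        fitted[a:b] = np.polyval(coef, t[a:b])
    return fitted
```

Removing `fitted` from the series leaves the detrended residual that the stochastic model then simulates; the mono-trend model is the special case of an empty breakpoint list.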
A probabilistic method for constructing wave time-series at inshore locations using model scenarios
Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.
2014-01-01
Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
Almaraz, Pablo
2005-04-01
Time-series analyses in ecology usually involve autoregressive modelling through direct and/or delayed difference equations, which severely restricts the ability of the modeler to structure complex causal relationships within a multivariate frame. This is especially problematic in the field of population regulation, where the proximate and ultimate causes of fluctuations in population size have been hotly debated for decades. Here it is shown that this debate can benefit from applying structural equation modelling with latent constructs (SEM) to time-series analysis in ecology. A nonparametric bootstrap scheme illustrates how this modelling approach can circumvent some problems posed by the climate-ecology interface. Stochastic Monte Carlo simulation is further used to assess the effects of increasing time-series length and of different parameter estimation methods on the performance of several model fit indexes. Throughout, the advantages and limitations of the SEM method are highlighted.
MIMO model of an interacting series process for Robust MPC via System Identification.
Wibowo, Tri Chandra S; Saad, Nordin
2010-07-01
This paper discusses empirical modeling using system identification techniques, with a focus on an interacting series process. The study is carried out experimentally using a gaseous pilot plant as the process, the dynamics of which exhibit the typical dynamics of an interacting series process. Three practical approaches are investigated and their performances are evaluated. The models developed are also examined in a real-time implementation of linear model predictive control. The selected model is able to reproduce the main dynamic characteristics of the plant in open loop and produces zero steady-state error in the closed-loop control system. Several issues concerning the identification process and the construction of a MIMO state space model for an interacting series process are deliberated.
Neutron scattering and models: Iron. Nuclear data and measurements series
Smith, A.B. [Argonne National Lab., IL (United States)
1995-08-01
Differential elastic and inelastic neutron-scattering cross sections of elemental iron are measured from 4.5 to 10 MeV in increments of ≈0.5 MeV. At each incident energy the measurements are made at forty or more scattering angles distributed between ≈17° and 160°, with emphasis on elastic scattering and on inelastic scattering due to the excitation of the yrast 2+ state. The measured data are combined with earlier lower-energy results from this laboratory, with recent high-precision ≈9.5-15 MeV results from the Physikalisch-Technische Bundesanstalt, and with selected values from the literature to provide a detailed neutron-scattering data base extending from ≈1.5 to 26 MeV. These data are interpreted in the context of phenomenological spherical-optical and coupled-channels (vibrational and rotational) models, and the physical implications are discussed. Deformation, coupling, asymmetry and dispersive effects are explored. It is shown that, particularly in a collective context, a good description of the interaction of neutrons with iron is achieved over the energy range ≈0-26 MeV, avoiding the dichotomy between high- and low-energy interpretations found in previous work.
Travel cost inference from sparse, spatio-temporally correlated time series using markov models
Yang, B.; Guo, C.; Jensen, C.S.
2013-01-01
...of such time series offers insight into the underlying system and enables prediction of system behavior. While the techniques presented in the paper apply more generally, we consider the case of transportation systems and aim to predict travel cost from GPS tracking data from probe vehicles. Specifically, each road segment has an associated travel-cost time series, which is derived from GPS data. We use spatio-temporal hidden Markov models (STHMM) to model correlations among different traffic time series. We provide algorithms that are able to learn the parameters of an STHMM while contending with the sparsity, spatio-temporal correlation, and heterogeneity of the time series. Using the resulting STHMM, near-future travel costs in the transportation network, e.g., travel time or greenhouse gas emissions, can be inferred, enabling a variety of routing services, e.g., eco-routing. Empirical studies...
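The filtering core behind an STHMM is the standard HMM forward recursion. The sketch below is a plain HMM in numpy and does not model the spatio-temporal coupling or the sparsity handling that the paper contributes.

```python
import numpy as np

def hmm_filter(obs_loglik, trans, init):
    """Normalized forward (filtering) recursion of a hidden Markov model.
    obs_loglik: (T, K) log-likelihood of each observation under each of
    K hidden states (e.g. congestion regimes of a road segment);
    trans: (K, K) row-stochastic transition matrix; init: (K,) prior."""
    T, K = obs_loglik.shape
    init = np.asarray(init, float)
    alpha = np.empty((T, K))
    a = init * np.exp(obs_loglik[0])
    alpha[0] = a / a.sum()
    for t in range(1, T):
        # Propagate through the transition matrix, weight by evidence.
        a = (alpha[t - 1] @ trans) * np.exp(obs_loglik[t])
        alpha[t] = a / a.sum()
    return alpha  # filtered state probabilities per time step
```

Given filtered state probabilities, a travel-cost forecast is the state-conditional cost averaged under `alpha[-1] @ trans`.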
Multifractal Detrended Fluctuation Analysis of Interevent Time Series in a Modified OFC Model
LIN Min; YAN Shuang-Xi; ZHAO Gang; WANG Gang
2013-01-01
We use the multifractal detrended fluctuation analysis (MF-DFA) method to investigate the multifractal behavior of the interevent time series in a modified Olami-Feder-Christensen (OFC) earthquake model on assortative scale-free networks. We determine the generalized Hurst exponent and singularity spectrum and find that these fluctuations have multifractal nature. Comparing the MF-DFA results for the original interevent time series with those for shuffled and surrogate series, we conclude that the origin of the multifractality is due to both the broadness of the probability density function and long-range correlation.
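The fluctuation function at the heart of MF-DFA can be sketched in numpy (first-order detrending, non-overlapping windows; a full MF-DFA would sweep q over positive and negative orders and derive the singularity spectrum from the resulting exponents):

```python
import numpy as np

def dfa_fluctuation(x, scales, q=2.0):
    """q-th order detrended fluctuation function F_q(s): build the
    profile (cumulative sum), split it into windows of size s, detrend
    each window linearly, and average the q/2 moments of the residual
    variances. The slope of log F_q vs log s is the generalized Hurst
    exponent h(q)."""
    profile = np.cumsum(np.asarray(x, float) - np.mean(x))
    F = []
    for s in scales:
        n_seg = len(profile) // s
        t = np.arange(s)
        var = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            var.append(np.mean((seg - trend) ** 2))
        var = np.array(var)
        F.append(np.mean(var ** (q / 2.0)) ** (1.0 / q))
    return np.array(F)
```

For uncorrelated noise h(2) ≈ 0.5; a spread of h(q) across q values signals the multifractality discussed above.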
Adaptive time-variant models for fuzzy-time-series forecasting.
Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching
2010-12-01
A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.
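For contrast with the adaptive ATVF model above, a fixed-window first-order fuzzy-time-series forecaster in the style of Chen's classic method fits in a few lines (the interval count and the data are illustrative):

```python
import numpy as np

def fuzzy_ts_forecast(series, n_intervals=7):
    """One-step fuzzy-time-series forecast: fuzzify values into
    equal-width intervals over the universe of discourse, collect
    first-order rules A_i -> {A_j} from consecutive observations,
    and defuzzify with the midpoints of the rule consequents."""
    lo, hi = min(series), max(series)
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2.0

    def label(v):
        return min(int(np.searchsorted(edges, v, side='right')) - 1,
                   n_intervals - 1)

    labels = [label(v) for v in series]
    rules = {}
    for a, b in zip(labels[:-1], labels[1:]):
        rules.setdefault(a, set()).add(b)
    consequents = rules.get(labels[-1], {labels[-1]})
    return float(np.mean([mids[j] for j in consequents]))
```

The ATVF model's contribution is precisely what this baseline lacks: the analysis window size is adapted from training-phase accuracy instead of being fixed.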
Modeling lanthanide series binding sites on humic acid.
Pourret, Olivier; Martinez, Raul E
2009-02-01
Lanthanide (Ln) binding to humic acid (HA) has been investigated by combining ultrafiltration and ICP-MS techniques. A Langmuir-sorption-isotherm metal-complexation model was used in conjunction with a linear programming method (LPM) to fit experimental data representing various experimental conditions, both in HA/Ln ratio (varying between 5 and 20) and in pH (from 2 to 10), at an ionic strength of 10^-3 mol L^-1. The LPM approach, not requiring prior knowledge of surface complexation parameters, was used to resolve the existing discrepancies in Ln-HA binding constants and site densities. The application of the LPM to experimental data revealed the presence of two discrete metal binding sites at low humic acid concentrations (5 mg L^-1), with log metal complexation constants (log K_S,j) of 2.65 ± 0.05 and 7.00 (depending on the Ln). The corresponding site densities were 2.71 ± 0.57 × 10^-8 and 0.58 ± 0.32 × 10^-8 mol of Ln^3+ per mg of HA (depending on the Ln). Total site densities of 3.28 ± 0.28 × 10^-8, 4.99 ± 0.02 × 10^-8, and 5.01 ± 0.01 × 10^-8 mol mg^-1 were obtained by LPM for humic acid concentrations of 5, 10, and 20 mg L^-1, respectively. These results confirm that lanthanide binding occurs mainly at weak sites (ca. 80%) and secondarily at strong sites (ca. 20%). The first group of discrete metal binding sites may be attributed to carboxylic groups (known to be the main binding sites of Ln in HA), and the second to phenolic moieties. Moreover, this study evidences heterogeneity in the distribution of the binding sites among the Ln. Overall, the LPM approach produced feasible and reasonable results, was less sensitive to error, and did not require an a priori assumption of the number and concentration of binding sites.
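The two-site Langmuir description above can be written down directly. The constants below are the ones reported in the abstract; superposing independent site types is the standard assumption here, not a reproduction of the paper's LPM fitting procedure.

```python
import numpy as np

def langmuir_bound(c_free, K, site_density):
    """Langmuir-isotherm bound-metal concentration for one site type:
    [bound] = site_density * K*c / (1 + K*c)."""
    c_free = np.asarray(c_free, float)
    return site_density * K * c_free / (1.0 + K * c_free)

def total_bound(c_free, sites):
    """Sum over discrete site types given as (K_j, density_j) pairs,
    assuming the sites bind independently."""
    return sum(langmuir_bound(c_free, K, d) for K, d in sites)

# Two-site parameters from the abstract: weak (carboxylic-type) and
# strong (phenolic-type) sites, log K = 2.65 and 7.00.
sites = [(10 ** 2.65, 2.71e-8), (10 ** 7.0, 0.58e-8)]
```

At saturating free-metal concentration the bound amount approaches the total site density (≈3.29 × 10^-8 mol mg^-1 with these numbers), matching the reported total.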
Stochastic modeling of Lake Van water level time series with jumps and multiple trends
H. Aksoy
2013-02-01
Full Text Available In the 1990s, the water level in the closed-basin Lake Van, located in Eastern Anatolia, Turkey, rose by about 2 m. Analysis of the hydrometeorological data shows that the change in the water level is related to the water budget of the lake. In this study, a stochastic model is generated using the measured monthly water level data of the lake. The model is derived after removal of trend and periodicity from the data set. The trend observed in the lake water level time series is fitted by mono- and multiple-trend lines. For the multiple-trend case, the time series is first divided into homogeneous segments by means of SEGMENTER, a segmentation software. Four segments, each fitted with a trend line, are found practically meaningful. Two models considering mono- and multiple-trend time series are developed. The multiple-trend model is found better for planning future development in areas surrounding the lake.
Fluctuation complexity of agent-based financial time series model by stochastic Potts system
Hong, Weijia; Wang, Jun
2015-03-01
Financial markets are complex evolving dynamic systems with high volatility and noise, and the modeling and analysis of financial time series are regarded as rather challenging tasks in financial research. In this work, by applying the Potts dynamic system, a random agent-based financial time series model is developed in an attempt to uncover empirical laws in finance, where the Potts model is introduced to imitate the trading interactions among investing agents. Based on computer simulation in conjunction with statistical and nonlinear analysis, we present numerical research investigating the fluctuation behaviors of the proposed time series model. Furthermore, in order to reach a robust conclusion, we consider the daily returns of the Shanghai Composite Index and the Shenzhen Component Index, and a comparison of return behaviors between the simulated data and the actual data is exhibited.
The application of time series models to cloud field morphology analysis
Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.
1987-01-01
A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering, and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.
Pietkiewicz, A.; Tollik, D.; Klaassens, J. B.
1989-08-01
A simple small-signal low-frequency model of an idealized series resonant converter employing peak capacitor voltage prediction and switching frequency control is proposed. Two different versions of the model describe all possible conversion modes. It is found that the step-down modes offer better dynamic characteristics than the step-up modes over the most important network functions. The dynamical model of the series resonant converter with peak capacitor voltage prediction and switching frequency programming is much simpler than that of such popular control strategies as VCO (voltage-controlled oscillator) based frequency control or diode conduction angle control.
A new modeling and control scheme for thyristor-controlled series capacitor
Zhizhong MAO
2009-01-01
In order to design an optimal controller for the thyristor-controlled series capacitor (TCSC), a novel TCSC control model is developed. In the model, the delay angle of the thyristor valves is the input, and the inductor current is chosen as the output. Theoretical analysis and simulation studies show that the TCSC is a non-linear system whose parameters vary with the operating point. In consideration of these special characteristics of the TCSC, an improved model algorithmic control (IMAC) scheme is proposed to control the TCSC effectively. Good performance can be observed from simulation results when IMAC is applied to a series-compensated radial system.
NON-LINEAR VIBRATION MODELING WITH THE HELP OF FUNCTIONAL SERIES
Z. M. Ghasanov
2010-06-01
An algorithm for modeling significantly nonlinear processes ("black boxes") is proposed. It uses functional series. The algorithm is described on the example of modeling the complex oscillations that occur in acoustic flaw detection.
Ørregård Nielsen, Morten
2015-01-01
This article proves consistency and asymptotic normality for the conditional-sum-of-squares estimator, which is equivalent to the conditional maximum likelihood estimator, in multivariate fractional time-series models. The model is parametric and quite general and, in particular, encompasses...
Development of Simulink-Based SiC MOSFET Modeling Platform for Series Connected Devices
Tsolaridis, Georgios; Ilves, Kalle; Reigosa, Paula Diaz
2016-01-01
A new MATLAB/Simulink-based modeling platform has been developed for SiC MOSFET power modules. The modeling platform describes the electrical behavior of a single 1.2 kV/350 A SiC MOSFET power module, as well as the series connection of two of them. A fast parameter initialization is followed...
An Alternative Bayesian Approach to Structural Breaks in Time Series Models
S. van den Hauwe (Sjoerd); R. Paap (Richard); D.J.C. van Dijk (Dick)
2011-01-01
We propose a new approach to deal with structural breaks in time series models. The key contribution is an alternative dynamic stochastic specification for the model parameters which describes potential breaks. After a break, new parameter values are generated from a so-called baseline pr
Applying ARIMA model for annual volume time series of the Magdalena River
Gloria Amaris
2017-04-01
Conclusions: The simulated results obtained with the ARIMA model, compared to the observed data, showed a fairly good fit of the minimum and maximum magnitudes. This allows concluding that the model is a good tool for estimating minimum and maximum volumes, even though it is not capable of simulating the exact behaviour of an annual volume time series.
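A minimal sketch of the ARIMA idea applied to an annual volume series: an ARIMA(1,0,0), i.e. AR(1), model fitted by least squares. The data here are simulated with a known coefficient, not the Magdalena River record.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "annual volume" anomaly series from a known AR(1) process
phi_true, n = 0.6, 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

# Fit AR(1) by least squares: y_t = phi * y_{t-1} + e_t
x, target = y[:-1], y[1:]
phi_hat = (x @ target) / (x @ x)

# One-step-ahead forecast from the last observation
forecast = phi_hat * y[-1]
```

In practice a library such as statsmodels would also handle differencing and moving-average terms; the least-squares step above is only the autoregressive core.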
Multi-Scale Gaussian Processes: a Novel Model for Chaotic Time Series Prediction
ZHOU Ya-Tong; ZHANG Tai-Yi; SUN Jian-Cheng
2007-01-01
Based on the classical Gaussian process (GP) model, we propose a multi-scale Gaussian process (MGP) model to predict chaotic time series. The MGP employs a covariance function that is constructed by a scaling function with its different dilations and translations, ensuring that the optimal hyperparameter is easy to determine.
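The multi-scale covariance idea can be illustrated with a sum of squared-exponential kernels at several length scales; this is a stand-in for the paper's scaling-function construction, which is not reproduced here, and the scale values are arbitrary.

```python
import numpy as np

def multiscale_kernel(a, b, scales=(0.5, 1.0, 2.0)):
    """Sum of RBF kernels evaluated at several length scales."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return sum(np.exp(-d2 / (2.0 * s ** 2)) for s in scales)

def gp_predict(x_train, y_train, x_test, noise=1e-3):
    """Standard GP regression mean prediction with a jitter term."""
    k = multiscale_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = multiscale_kernel(x_test, x_train)
    return k_star @ np.linalg.solve(k, y_train)

x = np.linspace(0.0, 6.0, 40)
y = np.sin(x)
y_hat = gp_predict(x, y, x)   # near-interpolation of the training data
```

Combining scales lets one kernel capture both slow trends and fast wiggles, which is the intuition behind the MGP construction.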
Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick
2017-04-06
When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can be retrospectively viewed simultaneously (static mode) rather than experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment, but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model, the adaptive anchoring model (ADAM), to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task.
Analyzing Multiple Multivariate Time Series Data Using Multilevel Dynamic Factor Models.
Song, Hairong; Zhang, Zhiyong
2014-01-01
Multivariate time series data offer researchers opportunities to study the dynamics of various systems in the social and behavioral sciences. The dynamic factor model (DFM), as an idiographic approach for studying intraindividual variability and dynamics, has typically been applied to time series data obtained from a single unit. When multivariate time series data are collected from multiple units, how to synchronize the dynamical information becomes a salient issue. To address this issue, the current study presents a multilevel dynamic factor model (MDFM) that analyzes multiple multivariate time series in a multilevel SEM framework. The MDFM not only disentangles within- and between-person variability but also models the dynamics of the intraindividual processes. To illustrate the uses of MDFMs, we applied lag-0, lag-1, and lag-2 MDFMs to empirical data on affect collected from 205 dating couples who had at least 50 consecutive days of observations. We also considered a model extension where the dynamical coefficients were allowed to vary randomly in the population. The empirical analysis yielded interesting findings regarding affect regulation and coregulation within couples, demonstrating promising uses of MDFMs in analyzing multiple multivariate time series. In the end, we discuss a number of methodological issues in the application of MDFMs and point out possible directions for future research.
Model-based Clustering of Categorical Time Series with Multinomial Logit Classification
Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea
2010-09-01
A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership, we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
Time series decomposition methods were applied to meteorological and air quality data and their numerical model estimates. Decomposition techniques express a time series as the sum of a small number of independent modes which hypothetically represent identifiable forcings, thereb...
Cointegration and Error Correction Modelling in Time-Series Analysis: A Brief Introduction
Helmut Thome
2015-07-01
Criminological research is often based on time-series data showing some type of trend movement. Trending time series may correlate strongly even in cases where no causal relationship exists (spurious causality). To avoid this problem researchers often apply some technique of detrending their data, such as differencing the series. This approach, however, may bring up another problem: that of spurious non-causality. Both problems can, in principle, be avoided if the series under investigation are "difference-stationary" (if the trend movements are stochastic) and "cointegrated" (if the stochastically changing trend movements in different variables correspond to each other). The article gives a brief introduction to key instruments and interpretative tools applied in cointegration modelling.
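The cointegration logic can be sketched with the two-step Engle-Granger idea, using simulated data (this illustration is not from the article): two trending series driven by a shared stochastic trend are regressed on each other, and the residual, unlike the raw series, should mean-revert.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# A common stochastic trend (random walk) drives both series
trend = np.cumsum(rng.normal(size=n))
x = trend + rng.normal(size=n)
y = 2.0 * trend + rng.normal(size=n)      # cointegrated with x, true beta = 2

# Step 1: cointegrating regression y = alpha + beta * x + u
X = np.column_stack([np.ones(n), x])
alpha, beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - alpha - beta * x

# Step 2: the residual should be stationary (lag-1 autocorrelation well
# below 1), whereas the raw trending series behaves like a random walk
rho_resid = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
rho_raw = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
```

A full analysis would apply a proper unit-root test (e.g. augmented Dickey-Fuller) to the residual instead of eyeballing the autocorrelation; the sketch only shows why differencing is not needed when series are cointegrated.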
Keil, Petr; Herben, Tomás; Rosindell, James; Storch, David
2010-07-07
There has recently been increasing interest in neutral models of biodiversity and their ability to reproduce the patterns observed in nature, such as species abundance distributions. Here we investigate the ability of a neutral model to predict phenomena observed in single-population time series, a study complementary to most existing work, which concentrates on snapshots in time of the whole community. We consider tests for density dependence, the dominant frequencies of population fluctuation (spectral density) and a relationship between the mean and variance of a fluctuating population (Taylor's power law). We simulated an archipelago model of a set of interconnected local communities with variable mortality rate, migration rate, speciation rate, size of local community and number of local communities. Our spectral analysis showed 'pink noise': a departure from standard random-walk dynamics in favor of higher-frequency fluctuations, which is partly consistent with empirical data. We detected density dependence in local community time series but not in metacommunity time series. The slope of the Taylor's power law in the model was similar to the slopes observed in natural populations, but the fit to the power law was worse. Our observations of pink noise and density dependence can be attributed to the presence of an upper limit to community sizes and to the effect of migration, which distorts temporal autocorrelation in local time series. We conclude that some of the phenomena observed in natural time series can emerge from neutral processes, as a result of random zero-sum birth, death and migration. This suggests the neutral model would be a parsimonious null model for future studies of time series data.
Bayesian dynamic modeling of time series of dengue disease case counts.
Daniel Adyro Martínez-Bello
2017-07-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) statistic for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage error at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease
Modeling and Forecasting of Water Demand in Isfahan Using Underlying Trend Concept and Time Series
H. Sadeghi
2016-02-01
Introduction: Accurate modeling of a city's water demand is very important for forecasting and for adopting policies related to water resources management. Thus, for estimating, forecasting, and modeling future water requirements, it is important to use models with small errors. Water holds a special place among basic human needs, since human life cannot continue without it, which makes managing its extraction and consumption a necessity. Municipal water applications include a variety of water demands: domestic, public, industrial, and commercial. Predicting urban water demand supports better planning of water resources in arid and semi-arid regions faced with water restrictions. Materials and Methods: Technological change is one of the most important factors affecting production and demand functions, and special attention must be paid to its pattern. Technology development is of concern not only technically; other, non-economic aspects (population, geographical and social factors) can also be analyzed. The model examined in this study is a regression model composed of a series of structural components that are allowed to change unobservably over time. Explanatory variables for technology (both crystalline and amorphous) are said to make such a model better, but because they cannot be measured over time they cannot be entered into the model. In this study, structural time series models (STSM) and ARMA time series models have been used to model and estimate the water demand in Isfahan. Moreover, in order to find the more efficient procedure, both models have been compared to each other. The data used in this research include water consumption in Isfahan, water price and the monthly pay
Estimation of time of death with a fourier series unsteady-state heat transfer model.
Smart, Jimmy L
2010-11-01
The purpose of this study was to return to fundamental principles of heat transfer and derive a suitable model to establish a firm basis for constructing a postmortem human cooling curve. A Fourier series model was successfully applied to unsteady heat transfer within a wooden cylinder under controlled laboratory conditions; wood has thermal diffusivity properties similar to those of human tissue. By manipulation of the model, sensitivity analyses were performed to observe the impact of changes in the values of input variables. The initial temperature of the cylinder and the ambient surrounding temperature were shown to be very sensitive variables with the most impact on the predictive results of the model. The model was also used to demonstrate the existence of an initial temperature plateau, which is often the subject of controversy in estimating time of death. Finally, it was demonstrated how the Fourier series model can be applied to estimate time of death for humans.
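The flavor of such a series solution can be shown with the one-dimensional plane-wall analog rather than the cylindrical geometry of the study (which requires Bessel functions): a slab at uniform initial temperature cooling toward a constant ambient temperature. The diffusivity value below is an illustrative assumption, not a figure from the paper.

```python
import math

def dimensionless_temp(x, t, length, alpha, n_terms=500):
    """Fourier series solution of 1-D unsteady conduction in a slab.

    theta = (T - T_ambient) / (T_initial - T_ambient), with both faces
    held at the ambient temperature; alpha is thermal diffusivity (m^2/s).
    """
    theta = 0.0
    for n in range(1, 2 * n_terms, 2):          # only odd terms survive
        lam = n * math.pi / length
        theta += (4.0 / (n * math.pi)) * math.sin(lam * x) \
                 * math.exp(-alpha * lam ** 2 * t)
    return theta

alpha = 1.4e-7                                   # tissue-like value, assumed
theta0 = dimensionless_temp(0.05, 0.0, 0.1, alpha)     # mid-plane, t = 0
theta1 = dimensionless_temp(0.05, 3600.0, 0.1, alpha)  # mid-plane after 1 h
```

Inverting this relation, i.e. solving for t given a measured theta, is exactly the time-of-death estimation step the abstract describes.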
Modeling of signal transmitting of avionic systems based on Volterra series
Юрий Владимирович Пепа
2014-11-01
The paper deals with mathematical modeling methods for the formation and transmission of analogue and digital signals in avionics systems using Volterra series. A mathematical model of the modulation in the presence of various initial data is developed, and computer modeling is conducted. The processes of analog modulation are simulated using MATLAB+SIMULINK, which allows these processes to be simulated as well as explored.
On Modelling of Nonlinear Systems and Phenomena with the Use of Volterra and Wiener Series
Andrzej Borys
2015-03-01
This is a short tutorial on applications of Volterra and Wiener series to the modelling of nonlinear systems and phenomena, and also a survey of recent achievements in this area. In particular, we show how the philosophies standing behind each of the two theories differ from each other. On the other hand, we also discuss the mathematical relationships between Volterra and Wiener kernels and operators. The problem of best approximation of large-scale nonlinear systems using Volterra operators in weighted Fock spaces is described as well. The examples of applications considered are the following: the use of Volterra series in the description of nonlinear distortions in satellite systems and their equalization or compensation, exploiting Wiener kernels in the modelling of biological systems, and the use of both Volterra and Wiener theories in the description of ocean waves and in magnetic resonance spectroscopy. Moreover, connections between Volterra series and neural network models, and also input-output descriptions of quantum systems by Volterra series, are discussed. Finally, we consider the application of Volterra series to solving some nonlinear problems occurring in hydrology, navigation, and transportation.
A new approach to calibrate steady groundwater flow models with time series of head observations
Obergfell, C.; Bakker, M.; Maas, C.
2012-04-01
We developed a new method to calibrate aquifer parameters of steady-state well field models using measured time series of head fluctuations. Our method is an alternative to standard pumping tests and is based on time series analysis using parametric impulse response functions. First, the pumping influence is isolated from the overall groundwater fluctuation observed at monitoring wells around the well field, and response functions are determined for each individual well. Time series parameters are optimized using a quasi-Newton algorithm. For one monitoring well, time series model parameters are also optimized by means of SCEM-UA, a Markov chain Monte Carlo algorithm, as a control on the validity of the parameters obtained by the faster quasi-Newton method. Subsequently, the drawdown corresponding to an average yearly pumping rate is calculated from the response functions determined by time series analysis. The drawdown values estimated with acceptable confidence intervals are used as calibration targets of a steady groundwater flow model. A case study is presented of the drinking water supply well field of Waalwijk (Netherlands). In this case study, a uniform aquifer transmissivity is optimized together with the conductance of ditches in the vicinity of the well field. Groundwater recharge or boundary heads do not have to be entered, which eliminates two important sources of uncertainty. The method constitutes a cost-efficient alternative to pumping tests and allows the determination of pumping influences without changes in well field operation.
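The core step, going from a fitted impulse response to a steady drawdown target, can be sketched as follows. A generic exponential impulse response stands in for the method's parametric response functions; all numbers (gain, decay, pumping rate) are illustrative assumptions.

```python
import numpy as np

def impulse_response(t, gain=4.0e-4, decay=0.05):
    """Illustrative exponential impulse response: drawdown per unit pumping.

    The response integrates to `gain` (m of drawdown per m^3/day pumped).
    """
    return gain * decay * np.exp(-decay * t)

t = np.arange(0.0, 400.0)                 # days, daily step
pumping = np.full_like(t, 5000.0)         # m^3/day, constant rate

# Drawdown is the convolution of the pumping series with the response
ir = impulse_response(t)
drawdown = np.convolve(pumping, ir)[: len(t)]

# The steady-state drawdown approaches pumping rate times the response
# gain; this value becomes the calibration target of the flow model
steady = drawdown[-1]
```

In the method itself, the response-function parameters are estimated from observed head fluctuations (quasi-Newton or SCEM-UA), and the resulting steady drawdowns calibrate the groundwater flow model.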
Yolanda Navarro-Abal
2012-12-01
Television fiction series sometimes generate an unreal vision of life, especially among young people, becoming a mirror in which they can see themselves reflected. The series become models of values, attitudes, skills and behaviours that tend to be imitated by some viewers. The aim of this study was to analyze the conflict management behavioural styles presented by the main characters of television fiction series. We evaluated the association between these styles and the age and sex of the main characters, as well as the nationality and genre of the series. Sixteen fiction series were assessed, selecting two characters of both sexes from each series. We adapted the Rahim Organizational Conflict Inventory-II for observing and recording the data. The results show that there is no direct association between the conflict management behavioural styles presented in the drama series and the sex of the main characters. However, associations were found between these styles and the age of the characters and the genre of the fiction series.
Guy J. Abel
2013-12-01
Background: Population forecasts are widely used for public policy purposes. Methods to quantify the uncertainty in forecasts tend to ignore model uncertainty and to be based on a single model. Objective: In this paper, we use Bayesian time series models to obtain future population estimates with associated measures of uncertainty. The models are compared based on Bayesian posterior model probabilities, which are then used to provide model-averaged forecasts. Methods: The focus is on a simple projection model with the historical data representing population change in England and Wales from 1841 to 2007. Bayesian forecasts to the year 2032 are obtained based on a range of models, including autoregression models, stochastic volatility models and random variance shift models. The computational steps to fit each of these models using the OpenBUGS software via R are illustrated. Results: We show that the Bayesian approach is adept at capturing multiple sources of uncertainty in population projections, including model uncertainty. The inclusion of non-constant variance improves the fit of the models and provides more realistic predictive uncertainty levels. The forecasting methodology is assessed through fitting the models to various truncated data series.
Time-series analysis with a hybrid Box-Jenkins ARIMA and neural network model
Dilli R Aryal; WANG Yao-wu(王要武)
2004-01-01
Time-series analysis is important to a wide range of disciplines, transcending both the physical and social sciences, for proactive policy decisions. Statistical models have a sound theoretical basis and have been successfully used in a number of problem domains in time series forecasting. Due to its power and flexibility, the Box-Jenkins ARIMA model has gained enormous popularity in many areas and in research practice over the last three decades. More recently, neural networks have been shown to be a promising alternative tool for modeling and forecasting owing to their ability to capture nonlinearity in the data. However, despite the popularity and the superiority of ARIMA and ANN models, the empirical forecasting performance has been rather mixed, so that no single method is best in every situation. In this study, a hybrid ARIMA and neural network model for time series forecasting is proposed. The basic idea behind the model combination is to use each model's unique features to capture different patterns in the data. With three real data sets, empirical results evidently show that the hybrid model noticeably outperforms the ARIMA and ANN models used in isolation in terms of forecasting accuracy.
A Series Solution of the Cauchy Problem for Turing Reaction-diffusion Model
L. Päivärinta
2011-12-01
In this paper, the series-pattern solution of the Cauchy problem for the Turing reaction-diffusion model is obtained by using the homotopy analysis method (HAM). The Turing reaction-diffusion model is a nonlinear reaction-diffusion system which usually has power-law nonlinearities or may be rewritten in the form of power-law nonlinearities. Using the HAM, it is possible to find the exact solution or an approximate solution of the problem. This technique provides a series of functions which converges rapidly to the exact solution of the problem. The efficiency of the approach is shown by applying the procedure to two problems. Furthermore, the so-called homotopy-Padé technique (HPT) is applied to enlarge the convergence region and rate of the solution series given by the HAM.
Modelling changes in the unconditional variance of long stock return series
Amado, Cristina; Teräsvirta, Timo
2014-01-01
In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long daily return series. For this purpose we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta...... (2012, 2013). The latter component is modelled such that the unconditional time-varying component evolves slowly over time. Statistical inference is used for specifying the parameterization of the time-varying component by applying a sequence of Lagrange multiplier tests. The model building procedure...... that the apparent long memory property in volatility may be interpreted as changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecasting accuracy of the new model over the GJR-GARCH model at all horizons for eight...
Time-series models on somatic cell score improve detection of mastitis
Norberg, E; Korsgaard, I R; Sloth, K H M N
2008-01-01
In-line detection of mastitis using frequent milk sampling was studied in 241 cows in a Danish research herd. Somatic cell scores obtained on a daily basis were analyzed using a mixture of four time-series models. Probabilities were assigned to each model for the observations to belong to a normal "steady-state" development, a change in "level", a change of "slope", or an "outlier". Mastitis was indicated from the sum of probabilities for the "level" and "slope" models. The time-series models were based on the Kalman filter. Reference data were obtained from veterinary assessment of health status combined with bacteriological findings. At a sensitivity of 90% the corresponding specificity was 68%, which increased to 83% using a one-step back smoothing. It is concluded that mixture models based on Kalman filters are efficient in handling in-line sensor data for detection of mastitis and may be useful for similar...
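A minimal sketch of the Kalman-filter building block underlying such mixture models: a local-level model filtered over a noisy series with a step change, standing in for a somatic-cell-score "level" shift. The data are synthetic, and the mixture over "steady-state"/"level"/"slope"/"outlier" alternatives is not reproduced here.

```python
import numpy as np

def local_level_filter(y, q=0.01, r=1.0):
    """Kalman filter for the local level model:
    state:       m_t = m_{t-1} + w_t,  w_t ~ N(0, q)
    observation: y_t = m_t + v_t,      v_t ~ N(0, r)
    Returns the filtered level estimates."""
    m, p = y[0], 1.0
    out = np.empty_like(y)
    out[0] = m
    for t in range(1, len(y)):
        p = p + q                      # predict step
        k = p / (p + r)                # Kalman gain
        m = m + k * (y[t] - m)         # update with the innovation
        p = (1.0 - k) * p
        out[t] = m
    return out

rng = np.random.default_rng(1)
level = np.concatenate([np.zeros(100), np.full(100, 3.0)])   # step change
y = level + rng.normal(scale=1.0, size=200)
filtered = local_level_filter(y)
```

In the mastitis application, the size of the innovations under each competing model is what drives the model probabilities that flag a "level" or "slope" change.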
Markov Chain Modelling for Short-Term NDVI Time Series Forecasting
Stepčenko Artūrs
2016-12-01
In this paper, an NDVI time series forecasting model has been developed based on the use of a discrete-time, continuous-state Markov chain of suitable order. The normalised difference vegetation index (NDVI) is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation; therefore, it is an important variable for vegetation forecasting. A Markov chain is a stochastic process defined on a state space; the process undergoes transitions from one state to another in the state space with certain probabilities. A Markov chain forecast model is flexible in accommodating various forecast assumptions and structures. The present paper discusses the considerations and techniques in building a Markov chain forecast model at each step. The continuous-state Markov chain model is described analytically. Finally, the application of the proposed Markov chain model is illustrated with reference to a set of NDVI time series data.
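A sketch of the Markov-chain forecasting step, using a discretized state space for brevity (the paper's model is continuous-state): transition probabilities are estimated from the series itself, and the forecast is the expected next-state value. The synthetic NDVI-like series and the number of states are illustrative choices.

```python
import numpy as np

def fit_transition_matrix(series, n_states=4):
    """Discretize a series into equal-width states and count transitions."""
    edges = np.linspace(series.min(), series.max(), n_states + 1)
    states = np.clip(np.digitize(series, edges[1:-1]), 0, n_states - 1)
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1.0
    # Row-normalize to probabilities (uniform row for never-visited states)
    counts[counts.sum(axis=1) == 0] = 1.0
    probs = counts / counts.sum(axis=1, keepdims=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return probs, centers, states

# Synthetic seasonal NDVI-like series on [0, 1]
t = np.arange(240)
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * t / 24) + 0.02 * np.sin(t)
probs, centers, states = fit_transition_matrix(ndvi)

# One-step-ahead forecast: expected state value given the last observation
forecast = probs[states[-1]] @ centers
```

The continuous-state version replaces the counted transition matrix with an analytically specified transition kernel, but the forecast is formed the same way: an expectation over the next state.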
Time-series modeling of long-term weight self-monitoring data.
Helander, Elina; Pavel, Misha; Jimison, Holly; Korhonen, Ilkka
2015-08-01
Long-term self-monitoring of weight is beneficial for weight maintenance, especially after weight loss. Connected weight scales accumulate time series information over the long term and hence enable time series analysis of the data. The analysis can reveal individual patterns, provide more sensitive detection of significant weight trends, and enable more accurate and timely prediction of weight outcomes. However, long-term self-weighing data have several challenges that complicate the analysis; in particular, irregular sampling, missing data, and the existence of periodic (e.g. diurnal and weekly) patterns are common. In this study, we apply a time series modeling approach to daily weight time series from two individuals and describe the information that can be extracted from this kind of data. We study the properties of weight time series data, missing data and its link to individuals' behavior, periodic patterns, and weight series segmentation. Understanding behavior through weight data and giving relevant feedback is expected to support positive interventions on health behaviors.
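The irregular-sampling and missing-data challenges described above start at preprocessing. The sketch below shows one common first step, mapping irregular self-weighing events onto a daily grid, averaging same-day measurements and linearly interpolating short gaps; the gap limit and all names are illustrative assumptions, not the paper's method.

```python
# Hypothetical preprocessing sketch: irregular weigh-ins -> daily series.
from datetime import date, timedelta

def daily_series(measurements, max_gap=3):
    """measurements: list of (date, weight_kg); returns list of (date, weight)."""
    by_day = {}
    for d, w in measurements:
        by_day.setdefault(d, []).append(w)
    series = []
    prev_day, prev_w = None, None
    for d in sorted(by_day):
        w = sum(by_day[d]) / len(by_day[d])      # average same-day weigh-ins
        if prev_day is not None:
            gap = (d - prev_day).days
            if 1 < gap <= max_gap:               # fill short gaps linearly
                for k in range(1, gap):
                    frac = k / gap
                    series.append((prev_day + timedelta(days=k),
                                   prev_w + frac * (w - prev_w)))
        series.append((d, w))
        prev_day, prev_w = d, w
    return series
```

Longer gaps are deliberately left unfilled here, since the paper notes that missingness itself carries behavioral information.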
Generation of Natural Runoff Monthly Series at Ungauged Sites Using a Regional Regressive Model
Dario Pumo
2016-05-01
Full Text Available Many hydrologic applications require reliable estimates of runoff in river basins to face the widespread lack of data, both in time and in space. A regional method for the reconstruction of monthly runoff series is here developed and applied to Sicily (Italy). A simple modeling structure is adopted, consisting of a regression-based rainfall-runoff model with four model parameters, calibrated through a two-step procedure. Monthly runoff estimates are based on precipitation and temperature, and exploit the autocorrelation with runoff at the previous month. Model parameters are assessed by specific regional equations as a function of easily measurable physical and climate basin descriptors. The first calibration step is aimed at identifying a set of parameters that optimizes model performance at the level of a single basin. Such "optimal" sets are used in the second step, a regional regression analysis, to establish the regional equations for assessing model parameters as a function of basin attributes. All the gauged watersheds across the region have been analyzed, selecting 53 basins for model calibration and using the other six basins exclusively for validation. Performance, quantitatively evaluated by different statistical indexes, demonstrates the model's ability to reproduce the observed hydrological time series at both the monthly and coarser time resolutions. The methodology, which is easily transferable to other arid and semi-arid areas, provides a reliable tool for filling/reconstructing runoff time series at any gauged or ungauged basin of a region.
Prediction of altimetric sea level anomalies using time series models based on spatial correlation
Miziński, Bartłomiej; Niedzielski, Tomasz
2014-05-01
Sea level anomaly (SLA) time series, which are time-varying gridded data, can be modelled and predicted using time series methods. This approach has been shown to provide accurate forecasts within the Prognocean system, the novel infrastructure for anticipating sea level change designed and built at the University of Wrocław (Poland), which utilizes the real-time SLA data from Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO). The system runs a few models concurrently, and our ocean prediction experiment includes both uni- and multivariate time series methods. The univariate ones are: extrapolation of a polynomial-harmonic model (PH), extrapolation of a polynomial-harmonic model with autoregressive prediction (PH+AR), and extrapolation of a polynomial-harmonic model with self-exciting threshold autoregressive prediction (PH+SETAR). The following multivariate methods are used: extrapolation of a polynomial-harmonic model with vector autoregressive prediction (PH+VAR), and extrapolation of a polynomial-harmonic model with generalized space-time autoregressive prediction (PH+GSTAR). Because the aforementioned models and the corresponding forecasts are computed in real time, independently and in the same computational setting, we can compare the accuracies offered by the models. The objective of this work is to verify the hypothesis that the multivariate prediction techniques, which make use of cross-correlation and spatial correlation, perform better than the univariate ones. The analysis is based on the daily-fitted and updated time series models predicting the SLA data (lead time of two weeks) over several months when El Niño/Southern Oscillation (ENSO) was in its neutral state.
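The PH+AR idea above can be sketched compactly: fit a deterministic polynomial-harmonic component by least squares, then correct the extrapolation with an autoregressive term on the residuals. The sketch below uses a linear trend plus one harmonic and an AR(1) residual; the model orders, the period argument, and all function names are illustrative assumptions, not the Prognocean implementation.

```python
# Pure-Python PH+AR sketch: least-squares trend+harmonic, AR(1) residual.
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting (for the normal equations).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ph_ar_forecast(y, period, horizon=1):
    n = len(y)
    X = [[1.0, t, math.sin(2 * math.pi * t / period),
          math.cos(2 * math.pi * t / period)] for t in range(n)]
    p = len(X[0])
    AtA = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Atb = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(AtA, Atb)
    fit = [sum(b * x for b, x in zip(beta, row)) for row in X]
    res = [yi - fi for yi, fi in zip(y, fit)]
    # AR(1) coefficient of residuals (lag-1 regression through the origin).
    den = sum(r * r for r in res[:-1])
    phi = (sum(a * b for a, b in zip(res[1:], res[:-1])) / den) if den > 1e-12 else 0.0
    t = n - 1 + horizon
    trend = (beta[0] + beta[1] * t
             + beta[2] * math.sin(2 * math.pi * t / period)
             + beta[3] * math.cos(2 * math.pi * t / period))
    return trend + (phi ** horizon) * res[-1]
```

The multivariate variants (PH+VAR, PH+GSTAR) replace the scalar AR residual step with vector models that borrow strength from neighbouring grid cells.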
Transfer function modeling of the monthly accumulated rainfall series over the Iberian Peninsula
Mateos, Vidal L.; Garcia, Jose A.; Serrano, Antonio; De la Cruz Gallego, Maria [Departamento de Fisica, Universidad de Extremadura, Badajoz (Spain)
2002-10-01
In order to improve the results given by Autoregressive Moving-Average (ARMA) modeling of the monthly accumulated rainfall series taken at 19 observatories of the Iberian Peninsula, a Discrete Linear Transfer Function Noise (DLTFN) model was applied taking the local pressure series (LP), North Atlantic sea level pressure series (SLP) and North Atlantic sea surface temperature (SST) as input variables, and the rainfall series as the output series. In all cases, the performance of the DLTFN models, measured by the explained variance of the rainfall series, is better than that of the ARMA models. The best performance is given by the models that take the local pressure as the input variable, followed by the sea level pressure models and the sea surface temperature models. Geographically speaking, the models fitted to the observatories located in the west of the Iberian Peninsula work better than those in the north and east of the Peninsula. It was also found that there is a region located between 0° N and 20° N that shows the highest cross-correlation between SST and the peninsular rainfalls. This region moves to the west and northwest of the Peninsula when the SLP series are used.
Data on copula modeling of mixed discrete and continuous neural time series
Meng Hu
2016-06-01
Full Text Available Copula is an important tool for modeling neural dependence. Recent work on copulas has been expanded to jointly model mixed time series in neuroscience (Hu et al., 2016, "Joint Analysis of Spikes and Local Field Potentials using Copula" [1]). Here we present further data for the joint analysis of spikes and local field potentials (LFP) with copula modeling. In particular, the details of different model orders and the influence of possible spike contamination in LFP data from the same and different electrode recordings are presented. To further facilitate the use of our copula model for the analysis of mixed data, we provide the Matlab codes, together with example data.
Gil-Alana, L.A.; Moreno, A; Pérez-de-Gracia, F. (Fernando)
2011-01-01
The last 20 years have witnessed a considerable increase in the use of time series techniques in econometrics. The articles in this important set have been chosen to illustrate the main themes in time series work as it relates to econometrics. The editor has written a new concise introduction to accompany the articles. Sections covered include: Ad Hoc Forecasting Procedures, ARIMA Modelling, Structural Time Series Models, Unit Roots, Detrending and Non-stationarity, Seasonality, Seasonal Adju...
Point Processes Modeling of Time Series Exhibiting Power-Law Statistics
Kaulakys, B; Gontis, V
2010-01-01
We consider stochastic point processes generating time series exhibiting power laws of spectrum and distribution density (Phys. Rev. E 71, 051105 (2005)) and apply them for modeling the trading activity in the financial markets and for the frequencies of word occurrences in the language.
ShapeSelectForest: a new r package for modeling landsat time series
Mary Meyer; Xiyue Liao; Gretchen Moisen; Elizabeth. Freeman
2015-01-01
We present a new R package called ShapeSelectForest recently posted to the Comprehensive R Archive Network. The package was developed to fit nonparametric shape-restricted regression splines to time series of Landsat imagery for the purpose of modeling, mapping, and monitoring annual forest disturbance dynamics over nearly three decades. For each pixel and spectral...
Time series modeling of daily abandoned calls in a call centre ...
Time series modeling of daily abandoned calls in a call centre. ... were shown to be both parsimonious and adequate using the P-P plots, Q-Q plots and residual analysis. ... The data for the application were obtained from a GSM telephone provider.
Molenaar, P.C.M.
1987-01-01
Outlines a frequency domain analysis of the dynamic factor model and proposes a solution to the problem of constructing a causal filter of lagged factor loadings. The method is illustrated with applications to simulated and real multivariate time series. The latter applications involve topographic a
Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox
Nonejad, Nima
This paper details particle Markov chain Monte Carlo (PMCMC) techniques for the analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a very compelling, computationally fast...
The Evolutionary Modeling and Short-range Climatic Prediction for Meteorological Element Time Series
YU Kangqing; ZHOU Yuehua; YANG Jing'an; KANG Zhuo
2005-01-01
The time series of precipitation in the flood season (May-September) at Wuhan Station, taken as an example of a time series with chaotic characteristics, is split into two parts: one includes macro climatic timescale period waves that are affected by some relatively steady climatic factors such as astronomical factors (sunspots, etc.) and other known and/or unknown factors, and the other includes micro climatic timescale period waves superimposed on the macro one. Evolutionary modeling (EM), which develops from genetic programming (GP), is well suited to simulating the former part because it creates a nonlinear ordinary differential equation (NODE) from the data series. Natural fractals (NF) are used to simulate the latter part. The final prediction is the sum of the results from both methods; thus the model can reflect multi-timescale effects of forcing factors in the climate system. The results of this example for 2002 and 2003 are satisfactory for operational climatic prediction. The NODE can show how the data vary with time, which is helpful for short-range climatic analysis and prediction. A comparison in principle between evolutionary modeling and linear modeling indicates that the evolutionary approach is a better way to simulate complex time series with nonlinear characteristics.
Commandeur, J.J.F. Wesemann, P. Bijleveld, F.D. Chhoun, V. & Sann, S.
2017-01-01
The authors present the methodology used for estimating forecasts for the number of road traffic fatalities in 2011-2020 in Cambodia based on observed developments in Cambodian road traffic fatalities and motor vehicle ownership in the years 1995-2009. Using the latent risk time series model
Comparison of time series models for predicting campylobacteriosis risk in New Zealand.
Al-Sakkaf, A; Jones, G
2014-05-01
Predicting campylobacteriosis cases is a matter of considerable concern in New Zealand, after the number of notified cases was the highest among developed countries in 2006. Thus, there is a need to develop a model or tool to predict the number of campylobacteriosis cases accurately, as the Microbial Risk Assessment Model previously used for this purpose failed to predict the actual number of cases. We explore the appropriateness of classical time series modelling approaches for predicting campylobacteriosis. Finding the most appropriate time series model for New Zealand data has additional practical considerations given a possible structural change, that is, a specific and sudden change in response to the implemented interventions. A univariate methodological approach was used to predict monthly disease cases using New Zealand surveillance data of campylobacteriosis incidence from 1998 to 2009. The data from the years 1998 to 2008 were used to model the time series, with the year 2009 held out of the data set for model validation. The best two models were then fitted to the full 1998-2009 data and used to predict for each month of 2010. The Holt-Winters (multiplicative) and ARIMA (additive) intervention models were considered the best models for predicting campylobacteriosis in New Zealand. It was noticed that the prediction by an additive ARIMA with intervention was slightly better than the prediction by the Holt-Winters multiplicative method for the annual total in year 2010, the former predicting only 23 cases fewer than the actual reported cases. It is confirmed that classical time series techniques such as ARIMA with intervention and Holt-Winters can provide a good prediction performance for campylobacteriosis risk in New Zealand. The results reported by this study are useful to the New Zealand Health and Safety Authority's efforts in addressing the problem of the campylobacteriosis epidemic. © 2013 Blackwell Verlag GmbH.
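The multiplicative Holt-Winters method compared in the study can be sketched in pure Python. The smoothing constants, the initialisation from the first two cycles, and the function name below are illustrative assumptions; a real analysis would optimise them against held-out data as the authors do.

```python
# Multiplicative Holt-Winters smoothing sketch (illustrative constants).
def holt_winters_mult(y, period, alpha=0.3, beta=0.05, gamma=0.1, horizon=1):
    # Initialise level, trend and seasonal indices from the first cycles.
    level = sum(y[:period]) / period
    trend = (sum(y[period:2 * period]) - sum(y[:period])) / period ** 2
    season = [y[i] / level for i in range(period)]
    for t in range(period, len(y)):
        s = season[t % period]
        last_level = level
        level = alpha * y[t] / s + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % period] = gamma * y[t] / level + (1 - gamma) * s
    # h-step forecasts: extrapolated level/trend times the seasonal index.
    return [(level + h * trend) * season[(len(y) + h - 1) % period]
            for h in range(1, horizon + 1)]
```

The "multiplicative" choice means seasonal swings scale with the level of the series, which suits count data whose seasonal amplitude grows with incidence.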
Szolgayová Elena
2014-03-01
Full Text Available Short term streamflow forecasting is important for operational control and risk management in hydrology. Despite the wide range of models available, the impact of long range dependence is often neglected when considering short term forecasting. In this paper, the forecasting performance of a new model combining a long range dependent autoregressive fractionally integrated moving average (ARFIMA) model with a wavelet transform used as a method of deseasonalization is examined. It is analysed whether applying wavelets to model the seasonal component in a hydrological time series is an alternative to moving average deseasonalization in combination with an ARFIMA model. The one-to-ten-steps-ahead forecasting performance of this model is compared with that of two other models: an ARFIMA model with moving average deseasonalization, and a multiresolution wavelet based model. All models are applied to a time series of mean daily discharge exhibiting long range dependence. For one and two day forecasting horizons, the combined wavelet-ARFIMA approach shows a performance similar to the other models tested. However, for longer forecasting horizons, the wavelet deseasonalization-ARFIMA combination outperforms the other two models. The results show that wavelets provide an attractive alternative to moving average deseasonalization.
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2017-02-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (STL). The data series, daily Poaceae pollen concentrations over the period 2006-2014, was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
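The additive split y = trend + seasonal + residual that STL produces can be illustrated with a much simpler stand-in: a centred moving average for the trend and periodic means for the seasonal component. Real STL uses iterated LOESS fits; this sketch only shows the decomposition structure, assumes an odd period so the centred window is symmetric, and all names are our own.

```python
# Simplified seasonal-trend decomposition sketch (stand-in for STL).
def decompose(y, period):
    n = len(y)
    half = period // 2
    trend = [None] * n
    for t in range(half, n - half):
        window = y[t - half:t + half + 1]        # odd-length centred window
        trend[t] = sum(window) / len(window)
    detr = [y[t] - trend[t] for t in range(half, n - half)]
    season_means = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(detr, start=half):
        season_means[i % period] += v
        counts[i % period] += 1
    season_means = [s / c for s, c in zip(season_means, counts)]
    m = sum(season_means) / period               # centre seasonal to mean zero
    seasonal = [season_means[t % period] - m for t in range(n)]
    resid = [y[t] - trend[t] - seasonal[t] if trend[t] is not None else None
             for t in range(n)]
    return trend, seasonal, resid
```

As in the study, it is the residual component left over after this split that would then be regressed on meteorological drivers.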
Analysis of Data from a Series of Events by a Geometric Process Model
Yeh Lam; Li-xing Zhu; Jennifer S. K. Chan; Qun Liu
2004-01-01
Geometric processes were first introduced by Lam [10,11]. A stochastic process {X_i, i = 1, 2, ...} is called a geometric process (GP) if, for some a > 0, {a^(i-1) X_i, i = 1, 2, ...} forms a renewal process. In this paper, the GP is used to analyze data from a series of events. A nonparametric method is introduced for the estimation of the three parameters in the GP. The limiting distributions of the three estimators are studied. Through the analysis of some real data sets, the GP model is compared with three other homogeneous and nonhomogeneous Poisson models. It seems that, on average, the GP model is the best of these four models for analyzing data from a series of events.
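One way to see how the ratio a can be estimated: since a^(i-1) X_i is a renewal process, log X_i is (apart from noise) linear in i with slope -log a, so a simple regression slope recovers a. The sketch below illustrates this idea only; it is not the paper's exact nonparametric estimator, and the function name is our own.

```python
# Illustrative ratio estimator for a geometric process via log-regression.
import math

def estimate_ratio(x):
    logs = [math.log(v) for v in x]
    n = len(x)
    ibar = (n - 1) / 2
    lbar = sum(logs) / n
    num = sum((i - ibar) * (l - lbar) for i, l in enumerate(logs))
    den = sum((i - ibar) ** 2 for i in range(n))
    return math.exp(-num / den)   # slope of log X_i on i is -log a
```

Values a > 1 indicate shrinking inter-event times (a deteriorating system), a < 1 growing ones, and a = 1 reduces the GP to an ordinary renewal process.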
Nonlinear Fluctuation Behavior of Financial Time Series Model by Statistical Physics System
Wuyang Cheng
2014-01-01
Full Text Available We develop a random financial time series model of the stock market by one of the statistical physics systems, the stochastic contact interacting system. The contact process is a continuous-time Markov process; one interpretation of this model is as a model for the spread of an infection, where the epidemic spreading mimics the interplay of local infections and recovery of individuals. From this financial model, we study the statistical behaviors of return time series, and the corresponding behaviors of returns for the Shanghai Stock Exchange Composite Index (SSECI) and the Hang Seng Index (HSI) are also comparatively studied. Further, we investigate the Zipf distribution and multifractal phenomenon of returns and price changes. Zipf analysis and MF-DFA analysis are applied to investigate the nature of fluctuations in the stock market.
Research on Optimize Prediction Model and Algorithm about Chaotic Time Series
JIANG Wei-jin; XU Yu-sheng
2004-01-01
We put forward a chaotic estimation model that uses the parameters of the chaotic system, the sensitivity of the parameters to perturbation, and control of the disturbance of the system, and we estimate the parameters of the model using the best update option. Finally, we forecast the coming series values in phase space. The example shows that the precision of the estimation process can be increased by selecting the best model steps. The approach not only overcomes the drawback of using delay-embedding technology alone, but also reduces the blindness of using the forecast error to decide the input model directly, and its results are better than those of statistical and other series methods.
Automated Bayesian model development for frequency detection in biological time series
Oldroyd Giles ED
2011-06-01
Full Text Available Abstract Background A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. Results In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlight the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Conclusions Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and
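For a single stationary sinusoid in zero-mean data, Bayesian Spectrum Analysis in the spirit this paper builds on (Bretthorst-style) turns the Schuster periodogram C(w) into a posterior via p(w|y) proportional to [1 - 2C(w)/(N·ybar²)]^((2-N)/2), with uniform priors and the noise level marginalised out. The sketch below is an illustrative sanity-check implementation under those assumptions; grid choice, clipping, and names are our own.

```python
# Sketch of single-frequency Bayesian spectrum analysis (Bretthorst-style).
import math

def frequency_posterior(y, freqs):
    n = len(y)
    ybar2 = sum(v * v for v in y) / n
    logpost = []
    for w in freqs:
        r = sum(v * math.cos(w * t) for t, v in enumerate(y))
        im = sum(v * math.sin(w * t) for t, v in enumerate(y))
        c = (r * r + im * im) / n               # Schuster periodogram
        arg = max(1.0 - 2.0 * c / (n * ybar2), 1e-300)  # clip for log safety
        logpost.append((2.0 - n) / 2.0 * math.log(arg))
    # Normalise to a discrete posterior over the frequency grid.
    mx = max(logpost)
    w_ = [math.exp(l - mx) for l in logpost]
    s = sum(w_)
    return [v / s for v in w_]
```

Because the exponent (2-N)/2 grows with the data length, the posterior sharpens far faster than the raw periodogram, which is what makes the method useful for the short, noisy series discussed above.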
The partial duration series method in regional index-flood modeling
Madsen, Henrik; Rosbjerg, Dan
1997-01-01
A regional index-flood method based on the partial duration series model is introduced. The model comprises the assumptions of a Poisson-distributed number of threshold exceedances and generalized Pareto (GP) distributed peak magnitudes. The regional T-year event estimator is based on a regional...... preferable to at-site estimation in moderately heterogeneous and homogeneous regions for large sample sizes. Modest intersite dependence has only a small effect on the performance of the regional index-flood estimator....
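The at-site building block of such a model can be sketched directly: a Poisson rate lambda for threshold exceedances plus generalized Pareto peak magnitudes, with F(x) = 1 - (1 - kappa·x/alpha)^(1/kappa), give the T-year exceedance magnitude x_T = (alpha/kappa)·(1 - (lambda·T)^(-kappa)) above the threshold. The moment estimators and names below are a standard illustrative choice, not necessarily the paper's regional estimator.

```python
# At-site partial duration series T-year event sketch (Poisson + GPD).
import math

def t_year_event(exceedances, years, T):
    """Return the T-year event magnitude ABOVE the threshold u
    (the caller adds u back). exceedances: peak sizes above u."""
    lam = len(exceedances) / years                 # Poisson exceedance rate
    m = sum(exceedances) / len(exceedances)
    var = sum((x - m) ** 2 for x in exceedances) / (len(exceedances) - 1)
    kappa = 0.5 * (m * m / var - 1.0)              # GP moment estimators
    alpha = m * (1.0 + kappa)
    if abs(kappa) < 1e-9:                          # exponential limit
        return alpha * math.log(lam * T)
    return (alpha / kappa) * (1.0 - (lam * T) ** (-kappa))
```

The regional index-flood step then replaces these at-site parameter estimates with regionally pooled ones, scaled by a site-specific index, which is what improves performance in homogeneous regions.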
Lohani, A. K.; Kumar, Rakesh; Singh, R. D.
2012-06-01
Summary: Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and neuro-fuzzy systems for monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR) models, artificial neural networks (ANNs) and an adaptive neural-based fuzzy inference system (ANFIS). To take care of the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and ANFIS models, a comparison is made with the autoregressive (AR) models. The ANFIS model trained with an input data vector including previous inflows and cyclic terms of monthly periodicity shows a significant improvement in forecast accuracy in comparison with the ANFIS models trained with input vectors considering only previous inflows. In all cases ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with the cyclic terms is shown to provide a better representation of monthly inflow forecasting for the planning and operation of reservoirs.
Structural damage detection using ARMAX time series models and cepstral distances
K LAKSHMI; A RAMA MOHAN RAO
2016-09-01
A novel damage detection algorithm for structural health monitoring using time series models is presented. The proposed algorithm uses output-only acceleration time series obtained from sensors on the structure, which are fitted using auto-regressive moving-average with exogenous inputs (ARMAX) models. The algorithm uses cepstral distances between the ARMAX models of decorrelated data obtained from the healthy and any other current condition of the structure as the damage indicator. A numerical model of a simply supported beam with variations due to temperature and operating conditions, along with measurement noise, is used to demonstrate the effectiveness of the proposed damage diagnostic technique using the ARMAX time series models and their cepstral distances with novelty indices. The effectiveness of the proposed method is validated using the benchmark data of the 8-DOF system made available to the public by the Engineering Institute of LANL and the simulated vibration data obtained from the FEM model of the IASC-ASCE 12-DOF steel frame. The results of the studies indicate that the proposed algorithm is robust in identifying the damage from acceleration data contaminated with noise under varied environmental and operational conditions.
Zakynthinaki, M. S.; Stirling, J. R.
2007-01-01
Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
Zhu, Y. K.; Yu, Y. G.; Li, L.; Jiang, T.; Wang, X. Y.; Zheng, X. J.
2016-07-01
A Timoshenko beam model combined with piezoelectric constitutive equations and an electrical model was proposed to describe the energy harvesting performance of multilayered d15 mode PZT-51 piezoelectric bimorphs in series and parallel connections. The effect of different clamped conditions was considered for non-piezoelectric and piezoelectric layers in the theoretical model. The frequency dependences of output peak voltage and power at different load resistances and excitation voltages were studied theoretically, and the results were verified by finite element modeling (FEM) simulation and experimental measurements. Results show that the theoretical model considering different clamped conditions for non-piezoelectric and piezoelectric layers could make a reliable prediction of the energy harvesting performance of multilayered d15 mode piezoelectric bimorphs. The multilayered d15 mode piezoelectric bimorph in a series connection exhibits a higher output peak voltage and power than that of a parallel connection at a load resistance of 1 MΩ. A criterion for choosing a series or parallel connection for a multilayered d15 mode piezoelectric bimorph is dependent on the comparison of the applied load resistance with the critical resistance of about 55 kΩ. The proposed model may provide some useful guidelines for the design and performance optimization of d15 mode piezoelectric energy harvesters.
High-temperature series analyses of the classical Heisenberg and XY model
Adler, J; Janke, W
1993-01-01
Although there is now a good measure of agreement between Monte Carlo and high-temperature series expansion estimates for Ising (n=1) models, published results for the critical temperature from series expansions up to 12th order for the three-dimensional classical Heisenberg (n=3) and XY (n=2) models do not agree very well with recent high-precision Monte Carlo estimates. In order to clarify this discrepancy we have analyzed extended high-temperature series expansions of the susceptibility, the second correlation moment, and the second field derivative of the susceptibility, which were derived a few years ago by Lüscher and Weisz for general O(n) vector spin models on D-dimensional hypercubic lattices up to 14th order in K ≡ J/k_B T. By analyzing these series expansions in three dimensions with two different methods that allow for confluent correction terms, we obtain good agreement with the standard field theory exponent estimates and with the critical temperature estimates...
Trottier Helen
2006-08-01
Full Text Available Abstract The goal of this paper is to analyze the stochastic dynamics of childhood infectious disease time series. We present a univariate time series analysis of pertussis, mumps, measles and rubella based on Box-Jenkins or AutoRegressive Integrated Moving Average (ARIMA) modeling. The method, which enables the dependency structure embedded in time series data to be modeled, has potential research applications in studies of infectious disease dynamics. Canadian chronological series of pertussis, mumps, measles and rubella, before and after mass vaccination, are analyzed to characterize the statistical structure of these diseases. Despite the fact that these infectious diseases are biologically different, it is found that they are all represented by simple models with the same basic statistical structure. Aside from seasonal effects, the number of new cases is given by the incidence in the previous period and by periodically recurrent random factors. It is also shown that mass vaccination does not change this stochastic dependency. We conclude that the Box-Jenkins methodology does identify the collective pattern of the dynamics, but not the specifics of the diseases at the biological individual level.
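The structure the authors find, this period's cases driven by the previous period's incidence plus periodically recurrent factors, can be sketched as an AR(1) on seasonally differenced counts (a bare-bones stand-in for a seasonal ARIMA model). Coefficient estimation here is plain lag-1 regression; a real Box-Jenkins analysis adds identification and diagnostic checking, and all names below are illustrative.

```python
# Seasonal-difference + AR(1) forecast sketch (bare-bones seasonal ARIMA).
def seasonal_ar1_forecast(y, period=12):
    d = [y[t] - y[t - period] for t in range(period, len(y))]  # seasonal diff
    num = sum(a * b for a, b in zip(d[1:], d[:-1]))
    den = sum(v * v for v in d[:-1])
    phi = num / den if den else 0.0            # lag-1 AR coefficient
    next_d = phi * d[-1]                       # AR(1) one-step forecast
    return y[-period] + next_d                 # undo the seasonal difference
```

The seasonal difference absorbs the recurrent annual factors, leaving the AR term to carry exactly the "incidence in the previous period" dependency described above.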
A Bayesian Surrogate Model for Rapid Time Series Analysis and Application to Exoplanet Observations
Ford, Eric B; Veras, Dimitri
2011-01-01
We present a Bayesian surrogate model for the analysis of periodic or quasi-periodic time series data. We describe a computationally efficient implementation that enables Bayesian model comparison. We apply this model to simulated and real exoplanet observations. We discuss the results and demonstrate some of the challenges for applying our surrogate model to realistic exoplanet data sets. In particular, we find that analyses of real world data should pay careful attention to the effects of uneven spacing of observations and the choice of prior for the "jitter" parameter.
Nonlinear Behaviors of Tail Dependence and Cross-Correlation of Financial Time Series Model
Wei Deng
2014-01-01
Nonlinear behaviors of tail dependence and cross-correlation of financial time series are reproduced and investigated by a stochastic voter dynamic system. The voter process is a continuous-time Markov process and one of the interacting dynamic systems. The tail dependence of return time series for pairs of Chinese stock markets and the proposed financial models is studied by copula analysis, in an attempt to detect and illustrate the relevant correlation relationships. Further, the multifractality of cross-correlations for return series is studied by multifractal detrended cross-correlation analysis, which indicates analogous cross-correlations and some fractal characters for both actual and simulated data, and provides intuitive evidence for market inefficiency.
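Tail dependence measures how often extremes co-occur. The sketch below is a crude empirical stand-in for the copula-based upper-tail dependence coefficient (the conditional probability that one series exceeds its q-quantile given the other does), not the copula analysis used in the paper:

```python
def upper_tail_dependence(x, y, q):
    """Empirical P(Y > Q_y(q) | X > Q_x(q)): a finite-sample approximation
    to the copula upper-tail dependence coefficient."""
    def quantile(values, level):
        s = sorted(values)
        return s[int(level * (len(s) - 1))]
    qx, qy = quantile(x, q), quantile(y, q)
    tail = [(a, b) for a, b in zip(x, y) if a > qx]
    if not tail:
        return 0.0
    return sum(1 for _, b in tail if b > qy) / len(tail)
```

On perfectly comonotone data this estimator returns 1.0 and on perfectly antithetic data 0.0; proper copula estimators also study how the value behaves as q approaches 1.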
Neural modeling for time series: A statistical stepwise method for weight elimination.
Cottrell, M; Girard, B; Girard, Y; Mangeas, M; Muller, C
1995-01-01
Many authors use feedforward neural networks for modeling and forecasting time series. Most of these applications are mainly experimental, and it is often difficult to extract a general methodology from the published studies. In particular, the choice of architecture is a tricky problem. We try to combine the statistical techniques of linear and nonlinear time series with the connectionist approach. The asymptotic properties of the estimators lead us to propose a systematic methodology to determine which weights are nonsignificant and to eliminate them to simplify the architecture. This method (SSM, or statistical stepwise method) is compared to other pruning techniques and is applied to some artificial series, to the famous Sunspots benchmark, and to daily electrical consumption data.
The Heir to Super CCD: A Review of the Fujifilm FinePix A310 Digital Camera
Rio
2003-01-01
The Fujifilm FinePix A310 digital camera adopts fourth-generation Super CCD HR technology. Its glossy strip-shaped power switch slides sideways; as the switch is pulled, the lens cover quickly retracts into the body and the lens extends.
Generation of future high-resolution rainfall time series with a disaggregation model
Müller, Hannes; Haberlandt, Uwe
2017-04-01
High-resolution rainfall data are needed in many fields of hydrology and water resources management. For analyses of future rainfall conditions, climate scenarios with hourly rainfall values exist. However, the direct use of these data is associated with uncertainties, which can be indicated by comparing observations with C20 control runs. An alternative is to derive changes in rainfall behavior over time from climate simulations. Conclusions about future rainfall conditions can then be drawn by adding these changes to observed time series. A multiplicative cascade model is used in this investigation for the disaggregation of daily rainfall amounts to hourly values. Model parameters can be estimated from REMO rainfall time series (UBA, BfG and ENS realizations), based on ECHAM5. Parameter estimation is carried out for the C20 period as well as for the near-term and long-term future (2021-2050 and 2071-2100). Change factors for both future periods are derived by parameter comparison and added to the parameters estimated from observed time series. This enables the generation of hourly rainfall time series from observed daily values with respect to future changes. The investigation is carried out for rain gauges in Lower Saxony. Generated time series are analyzed regarding statistical characteristics, e.g. extreme values, event-based characteristics (wet spell duration and amounts, dry spell duration, …) and continuum characteristics (average intensity, fraction of dry intervals, …). The generation of the time series is validated by comparing the changes in the statistical characteristics from the REMO data and from the disaggregated data.
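A multiplicative cascade disaggregator can be sketched in a few lines. The uniform weight distribution below is a toy assumption (real cascade models calibrate the weight law, and daily-to-hourly schemes must handle the factor 24 rather than powers of two), but it shows the mass-conserving recursive split:

```python
import random

def cascade_disaggregate(total, levels, rng):
    """Disaggregate a daily rainfall total into 2**levels sub-intervals with a
    multiplicative random cascade: at each level, every interval's amount is
    split between its two halves by a random weight w in (0, 1)."""
    values = [total]
    for _ in range(levels):
        nxt = []
        for v in values:
            w = rng.uniform(0.2, 0.8)  # toy weight law; real models fit this from data
            nxt.extend([v * w, v * (1.0 - w)])
        values = nxt
    return values

rng = random.Random(42)
parts = cascade_disaggregate(24.0, 3, rng)  # splits the daily total into 8 sub-intervals
```

By construction the cascade conserves the daily total exactly (up to floating-point error), which is the defining property of this disaggregation approach.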
Li, Qiongge; Chan, Maria F
2017-01-01
Over half of cancer patients receive radiotherapy (RT) as partial or full cancer treatment. Daily quality assurance (QA) of RT in cancer treatment closely monitors the performance of the medical linear accelerator (Linac) and is critical for continuous improvement of patient safety and quality of care. Cumulative longitudinal QA measurements are valuable for understanding the behavior of the Linac and allow physicists to identify trends in the output and take preventive actions. In this study, artificial neural network (ANN) and autoregressive moving average (ARMA) time-series prediction modeling techniques were both applied to 5-year daily Linac QA data. Verification tests and other evaluations were then performed for all models. Preliminary results showed that ANN time-series predictive modeling has advantages over ARMA techniques for accurate and effective applicability in the dosimetry and QA field.
Time series models of environmental exposures: Good predictions or good understanding.
Barnett, Adrian G; Stephen, Dimity; Huang, Cunrui; Wolkewitz, Martin
2017-04-01
Time series data are popular in environmental epidemiology as they make use of the natural experiment of how changes in exposure over time might impact on disease. Many published time series papers have used parameter-heavy models that fully explained the second order patterns in disease to give residuals that have no short-term autocorrelation or seasonality. This is often achieved by including predictors of past disease counts (autoregression) or seasonal splines with many degrees of freedom. These approaches give great residuals, but add little to our understanding of cause and effect. We argue that modelling approaches should rely more on good epidemiology and less on statistical tests. This includes thinking about causal pathways, making potential confounders explicit, fitting a limited number of models, and not over-fitting at the cost of under-estimating the true association between exposure and disease.
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
Series-expansion thermal tensor network approach for quantum lattice models
Chen, Bin-Bin; Liu, Yun-Jing; Chen, Ziyu; Li, Wei
2017-04-01
We propose a series-expansion thermal tensor network (SETTN) approach for efficient simulations of quantum lattice models. This continuous-time SETTN method is based on the numerically exact Taylor series expansion of the equilibrium density operator e^(-βH) (with H the total Hamiltonian and β the imaginary time), and is thus Trotter-error free. We discover, through simulating the XXZ spin chain and square-lattice quantum Ising models, that not only the Hamiltonian H but also its powers H^n can be efficiently expressed as matrix product operators, which enables us to calculate with high precision the equilibrium and dynamical properties of quantum lattice models at finite temperatures. Our SETTN method provides an alternative to conventional Trotter-Suzuki renormalization-group (RG) approaches, and achieves a very high standard of thermal RG simulations in terms of accuracy and flexibility.
Statistical models and time series forecasting of sulfur dioxide: a case study Tehran.
Hassanzadeh, S; Hosseinibalam, F; Alizadeh, R
2009-08-01
This study performed a time-series analysis, frequency distribution and prediction of SO2 levels for five stations (Pardisan, Vila, Azadi, Gholhak and Bahman) in Tehran for the period 2000-2005. Most sites show quite similar characteristics, with the highest pollution in autumn-winter and the least pollution in spring-summer. The frequency distributions show higher peaks at the two residential sites. The potential for SO2 problems is high because of high emissions and the close geographical proximity of the major industrial and urban centers. The ACF and PACF are nonzero for several lags, indicating a mixed (ARMA) model, so an ARMA model was used at the Bahman station for forecasting SO2. The partial autocorrelations become close to 0 after about 5 lags, while the autocorrelations remain strong through all the lags shown. The results proved that the ARMA(2,2) model can provide reliable, satisfactory predictions for the time series.
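The ACF/PACF diagnostics used here for model identification have direct definitions. A stdlib sketch (illustrative, not the study's software) of the sample ACF and the Durbin-Levinson recursion for the PACF:

```python
import random

def acf(series, max_lag):
    """Sample autocorrelation function up to max_lag."""
    n = len(series)
    mean = sum(series) / n
    dev = [v - mean for v in series]
    c0 = sum(d * d for d in dev) / n
    out = [1.0]
    for k in range(1, max_lag + 1):
        ck = sum(dev[t] * dev[t - k] for t in range(k, n)) / n
        out.append(ck / c0)
    return out

def pacf(series, max_lag):
    """Partial autocorrelations via the Durbin-Levinson recursion."""
    rho = acf(series, max_lag)
    vals, phi_prev = [1.0], []
    for k in range(1, max_lag + 1):
        if k == 1:
            phi = [rho[1]]
        else:
            num = rho[k] - sum(phi_prev[j] * rho[k - 1 - j] for j in range(k - 1))
            den = 1.0 - sum(phi_prev[j] * rho[j + 1] for j in range(k - 1))
            a = num / den
            phi = [phi_prev[j] - a * phi_prev[k - 2 - j] for j in range(k - 1)] + [a]
        vals.append(phi[-1])
        phi_prev = phi
    return vals

# For an AR(1) process the PACF should cut off after lag 1.
random.seed(1)
y = [0.0]
for _ in range(4000):
    y.append(0.6 * y[-1] + random.gauss(0.0, 1.0))
p = pacf(y, 4)
```

A sharp PACF cutoff suggests a pure AR model; when both ACF and PACF decay gradually, as the abstract reports, a mixed ARMA model is indicated.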
Multi-factor high-order intuitionistic fuzzy time series forecasting model
Yanan Wang; Yingjie Lei; Yang Lei; Xiaoshi Fan
2016-01-01
Fuzzy set theory cannot describe the neutrality degree of data, which has largely limited the objectivity of fuzzy time series in uncertain data forecasting. In this regard, a multi-factor high-order intuitionistic fuzzy time series forecasting model is built. In the new model, a fuzzy clustering algorithm is used to get unequal intervals, and a more objective technique for ascertaining the membership and non-membership functions of the intuitionistic fuzzy set is proposed. On these bases, forecast rules based on multidimensional intuitionistic fuzzy modus ponens inference are established. Finally, contrast experiments on the daily mean temperature of Beijing are carried out, which show that the novel model has a clear advantage in improving forecast accuracy.
Intuitionistic Fuzzy Time Series Forecasting Model Based on Intuitionistic Fuzzy Reasoning
Ya’nan Wang
2016-01-01
Fuzzy set theory cannot describe data comprehensively, which has greatly limited the objectivity of fuzzy time series in uncertain data forecasting. In this regard, an intuitionistic fuzzy time series forecasting model is built. In the new model, a fuzzy clustering algorithm is used to divide the universe of discourse into unequal intervals, and a more objective technique for ascertaining the membership and non-membership functions of the intuitionistic fuzzy set is proposed. On these bases, forecast rules based on intuitionistic fuzzy approximate reasoning are established. Finally, contrast experiments on the enrollments of the University of Alabama and the Taiwan Stock Exchange Capitalization Weighted Stock Index are carried out. The results show that the new model has a clear advantage in improving forecast accuracy.
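For illustration only, here is a toy construction of an intuitionistic fuzzy pair from a triangular membership function; the fixed hesitation margin is a simplifying assumption and not the technique proposed in the paper:

```python
def triangular_membership(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def intuitionistic_pair(x, a, b, c, hesitation=0.1):
    """Toy intuitionistic fuzzy pair (mu, nu) with mu + nu + hesitation = 1;
    the constant hesitation margin is an illustrative assumption."""
    mu = (1.0 - hesitation) * triangular_membership(x, a, b, c)
    nu = 1.0 - hesitation - mu
    return mu, nu
```

The key difference from an ordinary fuzzy set is that membership mu and non-membership nu need not sum to 1; the gap is the hesitation (neutrality) degree that the abstract says plain fuzzy sets cannot express.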
High-Order Fuzzy Time Series Model Based on Generalized Fuzzy Logical Relationship
Wangren Qiu
2013-01-01
Among techniques for constructing high-order fuzzy time series models, there are three kinds of methods, based respectively on advanced algorithms, computational methods, and grouping of the fuzzy logical relationships. The last kind of model has been widely applied and researched because it is easy for decision makers to understand. To improve the fuzzy time series forecasting model, this paper presents a novel high-order fuzzy time series model, denoted GTS(M,N), on the basis of generalized fuzzy logical relationships. Firstly, the paper introduces some concepts of the generalized fuzzy logical relationship and an operation for combining the generalized relationships. Then, the proposed model is implemented in forecasting enrollments of the University of Alabama. As an example of in-depth research, the proposed approach is also applied to forecast the close price of the Shanghai Stock Exchange Composite Index. Finally, the effects of the number of orders and hierarchies of fuzzy logical relationships on the forecasting results are discussed.
Application of Time-Series Model to Predict Groundwater Dynamic in Sanjiang Plain,Northeast China
LUAN Zhaoqing; LIU Guihua; YAN Baixing
2011-01-01
To study the groundwater dynamic in a typical region of the Sanjiang Plain, long-term groundwater level observation data from the Honghe State Farm were collected and analyzed in this paper. The seasonal and long-term groundwater dynamic was explored. From 1996 to 2008, the groundwater level kept declining due to intensive exploitation of groundwater resources for rice irrigation. A decline of nearly 5 m was found for almost all the monitoring wells. A time-series method was established to model the groundwater dynamic. Modeled results showed that the groundwater level in this region would keep declining at the current exploitation intensity, with a total drawdown of 1.07 m from 2009 to 2012. The time-series model can be used to model and forecast the groundwater dynamic with high accuracy. Measures including control of groundwater exploitation and application of water-saving irrigation techniques should be taken to prevent the continuing decline of groundwater in the Sanjiang Plain.
A Bayesian Approach for Summarizing and Modeling Time-Series Exposure Data with Left Censoring.
Houseman, E Andres; Virji, M Abbas
2017-08-01
Direct reading instruments are valuable tools for measuring exposure as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to the limit of detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and estimate summary statistics for tasks or other covariates of interest. A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulation studies are also conducted to evaluate method performance. Simulation studies with the percent of measurements below the LOD ranging from 0 to 50% showed the lowest root mean squared errors for task means and the least biased standard deviations from the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different. Parameter estimates for covariates
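The left-censoring treatment (integrating over the left tail below the LOD) can be illustrated for a plain normal model. This is a simple likelihood sketch under assumed parameters, not the paper's Bayesian spline model:

```python
import math

def norm_logpdf(x, mu, sigma):
    """Log-density of the normal distribution."""
    return -0.5 * math.log(2.0 * math.pi * sigma * sigma) - (x - mu) ** 2 / (2.0 * sigma * sigma)

def norm_logcdf(x, mu, sigma):
    """Log of the normal CDF, via the error function."""
    return math.log(0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0)))))

def censored_loglik(data, lod, mu, sigma):
    """Normal log-likelihood where values below the LOD are left-censored:
    a censored point contributes log Phi((lod - mu) / sigma) instead of a density term."""
    ll = 0.0
    for x in data:
        if x < lod:
            ll += norm_logcdf(lod, mu, sigma)   # integrate over the left tail
        else:
            ll += norm_logpdf(x, mu, sigma)
    return ll

data = [0.8, 1.5, 2.1, 2.4, 3.0]  # 0.8 is below the assumed LOD of 1.0
ll_near = censored_loglik(data, 1.0, 2.0, 1.0)
ll_far = censored_loglik(data, 1.0, 5.0, 1.0)
```

Maximizing (or placing a prior on) mu and sigma with this likelihood avoids the bias of substituting LOD/2 for censored values, which is the motivation for the integration approach the abstract describes.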
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2016-08-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture—for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments—as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series—daily Poaceae pollen concentrations over the period 2006-2014—was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
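A seasonal-trend split in the spirit of (but much simpler than) STL can be sketched with a centered moving-average trend and per-phase seasonal means; STL itself replaces both steps with iterated LOESS fits:

```python
def decompose(series, period):
    """Additive decomposition into trend (moving average), seasonal (per-phase mean
    of the detrended series) and residual. For even periods the window is a plain
    length-`period` average, a simplification of the usual half-weighted ends."""
    n = len(series)
    half = period // 2
    trend = [None] * n
    for t in range(half, n - half):
        window = series[t - half:t + half + 1] if period % 2 else series[t - half:t + half]
        trend[t] = sum(window) / len(window)
    detrended = [(series[t] - trend[t], t % period) for t in range(n) if trend[t] is not None]
    seasonal = []
    for phase in range(period):
        vals = [d for d, p in detrended if p == phase]
        seasonal.append(sum(vals) / len(vals) if vals else 0.0)
    resid = [series[t] - trend[t] - seasonal[t % period] if trend[t] is not None else None
             for t in range(n)]
    return trend, seasonal, resid

# Pure alternating seasonal signal: the decomposition should recover it exactly.
series = [1.0, 3.0] * 10
trend, seasonal, resid = decompose(series, 2)
```

As in the study, once the seasonal component is removed, the residuals can be modeled separately against meteorological covariates.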
A regional GIS-based model for reconstructing natural monthly streamflow series at ungauged sites
Pumo, Dario; Lo Conti, Francesco; Viola, Francesco; Noto, Leonardo V.
2016-04-01
Several hydrologic applications require reliable estimates of monthly runoff in river basins, given the widespread lack of data in both time and space. The main aim of this work is to propose a regional model for the estimation of monthly natural runoff series at ungauged sites, analyzing its applicability, reliability and limitations. A GIS (Geographic Information System) based model is developed and applied to the entire region of Sicily (Italy). The core of this tool is a regional model for the estimation of monthly natural runoff series with a simple modelling structure: a regression-based rainfall-runoff model with only four parameters. The monthly runoff is obtained as a function of precipitation and mean temperature in the same month and runoff in the previous month. For a given basin, the four model parameters are assessed by specific regional equations as a function of easily measurable geomorphic and climate descriptors of the basin. The model is calibrated by a two-step procedure applied to a number of gauged basins over the region. The first step identifies, for each calibration basin, a set of parameters optimizing model performance at the level of the single basin. These "optimal" parameter sets are subsequently used in a regional regression analysis, performed in the second step, by which the regional equations for model parameter assessment are defined and calibrated. All the gauged watersheds across Sicily have been analyzed, with 53 basins selected for model calibration and another 6 used exclusively for validation. Model performance, quantitatively evaluated with different statistical indexes, demonstrates a relevant ability to capture the observed hydrological response at the monthly level and at higher time scales (seasonal and annual). One of the key features of the proposed methodology is its easy transferability to other arid and semiarid
Watanabe, Hayafumi; Sano, Yukie; Takayasu, Hideki; Takayasu, Misako
2016-11-01
To elucidate the nontrivial empirical statistical properties of fluctuations of a typical nonsteady time series representing the appearance of words in blogs, we investigated approximately 3×10^9 Japanese blog articles over a period of six years and analyzed some corresponding mathematical models. First, we introduce a solvable nonsteady extension of the random diffusion model, which can be deduced by modeling the behavior of heterogeneous random bloggers. Next, we deduce theoretical expressions for both the temporal and ensemble fluctuation scalings of this model, and demonstrate that these expressions can reproduce all empirical scalings over eight orders of magnitude. Furthermore, we show that the model can reproduce other statistical properties of time series representing the appearance of words in blogs, such as the functional forms of the probability density and correlations in the total number of blogs. As an application, we quantify the abnormality of special nationwide events by measuring the fluctuation scalings of 1771 basic adjectives.
Parneet Paul
2013-02-01
Computer modelling and simulation of wastewater treatment plants and their specific technologies, such as membrane bioreactors (MBRs), are becoming increasingly useful to consultant engineers when designing, upgrading, retrofitting, operating and controlling these plants. This research uses traditional phenomenological mechanistic models based on MBR filtration and biochemical processes to measure the effectiveness of alternative and novel time series models based upon input-output system identification methods. Both model types are calibrated and validated using similar plant layouts and data sets derived for this purpose. Results prove that although both approaches have their advantages, they also have specific disadvantages. In conclusion, the MBR plant designer and/or operator who wishes to use good-quality, calibrated models to gain a better understanding of their process should carefully consider which model type to select based on their initial modelling objectives. Each situation usually proves unique.
Time series modeling for analysis and control advanced autopilot and monitoring systems
Ohtsu, Kohei; Kitagawa, Genshiro
2015-01-01
This book presents multivariate time series methods for the analysis and optimal control of feedback systems. Although ships’ autopilot systems are considered through the entire book, the methods set forth in this book can be applied to many other complicated, large, or noisy feedback control systems for which it is difficult to derive a model of the entire system based on theory in that subject area. The basic models used in this method are the multivariate autoregressive model with exogenous variables (ARX) model and the radial bases function net-type coefficients ARX model. The noise contribution analysis can then be performed through the estimated autoregressive (AR) model and various types of autopilot systems can be designed through the state–space representation of the models. The marine autopilot systems addressed in this book include optimal controllers for course-keeping motion, rolling reduction controllers with rudder motion, engine governor controllers, noise adaptive autopilots, route-tracki...
Dilip P Ahalpara; Amit Verma; Jiterndra C Parikh; Prasanta K Panigrahi
2008-09-01
A method based on the wavelet transform is developed to characterize variations at multiple scales in non-stationary time series. We consider two different financial time series: the S&P CNX Nifty closing index of the National Stock Exchange (India) and the Dow Jones industrial average closing values. These time series are chosen since they are known to comprise stochastic fluctuations as well as cyclic variations at different scales. The wavelet transform isolates cyclic variations at higher scales when random fluctuations are averaged out; this corroborates the correlated behaviour observed earlier in financial time series through random matrix studies. Analysis is carried out with Haar, Daubechies-4 and continuous Morlet wavelets to study the character of fluctuations at different scales, and shows that cyclic variations emerge at intermediate time scales. It is found that the Daubechies family of wavelets can be effectively used to capture cyclic variations since these are local in nature. To get insight into the occurrence of cyclic variations, we then model these wavelet coefficients using a genetic programming (GP) approach and the standard embedding technique in the reconstructed phase space. It is found that the standard methods (GP as well as artificial neural networks) fail to model these variations because of poor convergence. A novel interpolation approach is developed that overcomes this difficulty. The dynamical model equations have, primarily, linear terms with additive Padé-type terms. It is seen that the emergence of cyclic variations is due to an interplay of a few important terms in the model. Interestingly, the GP model captures smooth variations as well as bursty behaviour quite nicely.
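The Haar transform mentioned above is the simplest wavelet and can be written in a few lines. A sketch of one orthonormal decomposition level and its inverse (illustrative, not the authors' implementation):

```python
import math

def haar_decompose(signal):
    """One level of the orthonormal Haar wavelet transform: pairwise scaled
    averages (approximation) and differences (detail). Length must be even."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Inverse of one Haar decomposition level."""
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([s * (a + d), s * (a - d)])
    return out
```

Applying the decomposition recursively to the approximation coefficients yields the multi-scale separation of slow cyclic variations from fast fluctuations that the paper exploits; the transform is orthonormal, so signal energy is preserved across levels.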
Model for the respiratory modulation of the heart beat-to-beat time interval series
Capurro, Alberto; Diambra, Luis; Malta, C. P.
2005-09-01
In this study we present a model for the respiratory modulation of the heart beat-to-beat interval series. The model consists of a set of differential equations used to simulate the membrane potential of a single rabbit sinoatrial node cell, excited with a periodic input signal with added correlated noise. This signal, which simulates the input from the autonomic nervous system to the sinoatrial node, was included in the pacemaker equations as a modulation of the i_NaK pump current and the potassium current i_K. We focus on modeling the heart beat-to-beat time interval series from normal subjects during meditation of the Kundalini Yoga and Chi techniques. The analysis of the experimental data indicates that while the embedding of pre-meditation and control cases has a roughly circular shape, it acquires a polygonal shape during meditation: triangular for the Kundalini Yoga data and quadrangular for the Chi data. The model was used to assess the waveshape of the respiratory signals needed to reproduce the trajectory of the experimental data in the phase space. The embedding of the Chi data could be reproduced using a periodic signal obtained by smoothing a square wave. In the case of the Kundalini Yoga data, the embedding was reproduced with a periodic signal obtained by smoothing a triangular wave whose rising branch is of longer duration than its decreasing branch. Our study provides an estimation of the respiratory signal using only the heart beat-to-beat time interval series.
Water quality management using statistical analysis and time-series prediction model
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at 95% confidence limits, and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, the predicted series is close to the original series, providing a good fit. All parameters except pH and WT cross the limits prescribed by the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
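The validation metrics listed (root mean square error, mean absolute error, mean absolute percentage error) have direct definitions; a minimal sketch:

```python
def rmse(actual, pred):
    """Root mean square error."""
    n = len(actual)
    return (sum((a - p) ** 2 for a, p in zip(actual, pred)) / n) ** 0.5

def mae(actual, pred):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean absolute percentage error; assumes no zero actual values."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)
```

RMSE penalizes large errors more heavily than MAE, while MAPE expresses error relative to the observed magnitude, which is why studies like this one report several of them together.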
A time series model: First-order integer-valued autoregressive (INAR(1))
Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.
2017-07-01
Nonnegative integer-valued time series arise in many applications. A first-order Integer-valued AutoRegressive (INAR(1)) model is constructed with the binomial thinning operator to model nonnegative integer-valued time series. INAR(1) depends on the process one period before. The parameters of the model can be estimated by Conditional Least Squares (CLS). The specification of INAR(1) follows that of AR(1). Forecasting in INAR(1) uses the median or Bayesian forecasting methodology. The median forecasting methodology finds the least integer s whose cumulative distribution function (CDF) value is at least 0.5. The Bayesian forecasting methodology forecasts h steps ahead by generating the model and innovation-term parameters using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the least integer s where the CDF is at least u, with u drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 to April 2016.
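The binomial thinning operator underlying INAR(1) is easy to simulate. A stdlib sketch with Poisson innovations, where the innovation law and parameter values are illustrative assumptions, not those fitted to the pneumonia data:

```python
import math
import random

def binomial_thin(x, alpha, rng):
    """Binomial thinning 'alpha o x': each of the x counts survives with probability alpha."""
    return sum(1 for _ in range(x) if rng.random() < alpha)

def poisson_draw(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def inar1_step(x_prev, alpha, lam, rng):
    """One step of INAR(1): X_t = alpha o X_{t-1} + eps_t, with eps_t ~ Poisson(lam)."""
    return binomial_thin(x_prev, alpha, rng) + poisson_draw(lam, rng)

rng = random.Random(7)
path = [5]
for _ in range(2000):
    path.append(inar1_step(path[-1], 0.5, 1.0, rng))
# The stationary mean is lam / (1 - alpha) = 2.0 for these parameters.
```

Because thinning and the innovation are both integer-valued, every simulated value stays a nonnegative integer, which is exactly the property that ordinary AR(1) models lack for count data.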
Wavelet time series MPARIMA modeling for power system short term load forecasting
冉启文; 单永正; 王建赜; 王骐
2003-01-01
The wavelet power system short-term load forecasting (STLF) method uses a multiple periodical autoregressive integrated moving average (MPARIMA) model to capture the multiple near-periodicity, nonstationarity and nonlinearity present in power system short-term quarter-hour load time series. It can therefore accurately forecast the quarter-hour loads of weekdays and weekends, and provides more accurate results than conventional techniques such as artificial neural networks and autoregressive moving average (ARMA) models. Test results obtained with a power system network in a city in the northeastern part of China confirm the validity of the proposed approach.
Time series models to simulate and forecast hourly averaged wind speeds in Quetta, Pakistan
Lalarukh Kamal (Balochistan University, Quetta, Pakistan, Dept. of Mathematics); Yasmin Zahra Jafri (Balochistan University, Quetta, Pakistan, Dept. of Statistics)
1997-07-01
Stochastic simulation and forecast models of hourly average wind speeds are presented. The time series models take into account several basic features of wind speed data, including autocorrelation, non-Gaussian distribution, and diurnal nonstationarity. The positive correlation between consecutive wind speed observations is taken into account by fitting an ARMA (p,q) process to wind speed data transformed to make their distribution approximately Gaussian and standardized to remove scattering of the transformed data. Diurnal variations have been taken into account to examine forecasts and their dependence on lead time. We find the ARMA (p,q) model suitable for prediction intervals and probability forecasts. (author)
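The transform-standardize-fit pipeline described above can be sketched as follows; the synthetic hourly data and the square-root Gaussianizing transform are assumptions for illustration, and the authors' actual transformation and ARMA orders may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly wind speeds: diurnal cycle plus AR(1) noise (illustrative only)
hours = np.arange(24 * 200)
noise = np.zeros(hours.size)
for t in range(1, hours.size):
    noise[t] = 0.7 * noise[t - 1] + rng.normal(0.0, 1.0)
speed = np.maximum(6.0 + 2.0 * np.sin(2 * np.pi * hours / 24) + noise, 0.1)

# 1) Gaussianizing transform (a square root is one common choice for wind speed)
z = np.sqrt(speed)

# 2) Remove diurnal nonstationarity: standardize by hour of day
hod = hours % 24
mu = np.array([z[hod == h].mean() for h in range(24)])
sd = np.array([z[hod == h].std() for h in range(24)])
w = (z - mu[hod]) / sd[hod]

# 3) Fit AR(1) to the standardized series (lag-1 autocorrelation)
phi = np.corrcoef(w[:-1], w[1:])[0, 1]
one_step = phi * w[-1]   # forecast in standardized units; invert steps 2 and 1 for m/s
print(phi)
```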
Extracting Knowledge From Time Series An Introduction to Nonlinear Empirical Modeling
Bezruchko, Boris P
2010-01-01
This book addresses the fundamental question of how to construct mathematical models for the evolution of dynamical systems from experimentally-obtained time series. It places emphasis on chaotic signals and nonlinear modeling and discusses different approaches to the forecast of future system evolution. In particular, it teaches readers how to construct difference and differential model equations depending on the amount of a priori information that is available on the system in addition to the experimental data sets. This book will benefit graduate students and researchers from all natural sciences who seek a self-contained and thorough introduction to this subject.
Modeling and Simulation of Series Compensator to Mitigate Power Quality Problems
S. Sadaiappan
2010-12-01
Power electronics and advanced control technologies have made it possible to mitigate power quality problems and maintain the operation of sensitive loads. Among power system disturbances, voltage sags, swells, and harmonics are some of the most severe problems for sensitive loads. The series compensation method is best suited to protect such loads against these disturbances. The use of a series compensator (SC) to improve power quality in an isolated power system is investigated. The role of the compensator is not only to mitigate the effects of voltage sag but also to reduce the harmonic distortion due to nonlinear loads in the network. In this paper, a series compensator is proposed, a method of harmonic compensation is described, and a method to mitigate voltage sag is investigated. The proposed series compensator consists of an energy storage system (ESS), a voltage source inverter (VSI), and an injection transformer. The ESS can be a capacitor of suitable capacity; it acts as a buffer and provides the energy needed for load ride-through during voltage sag. The injection transformer injects the voltage into the transmission line at the appropriate level. In this way the terminal voltage of the protected sensitive load can be regulated at a constant level. The modeling and simulation of the proposed series compensator were implemented in the Matlab Simulink workspace. Simulation results showed that the proposed series compensator was efficient in mitigating voltage sags and harmonics and thus improves the power quality of the isolated power system. This approach differs from conventional methods, provides an effective solution, and could be further enhanced in future work.
Ahalpara, Dilip P.; Panigrahi, Prasanta K.; Parikh, Jitendra C.; Verma, Amit
2006-01-01
A method based on wavelet transform and genetic programming is proposed for characterizing and modeling variations at multiple scales in non-stationary time series. The cyclic variations, extracted by wavelets and smoothened by cubic splines, are well captured by genetic programming in the form of dynamical equations. For the purpose of illustration, we analyze two different non-stationary financial time series, S&P CNX Nifty closing index of the National Stock Exchange (India) and Dow Jones industrial average closing values through Haar, Daubechies-4 and continuous Morlet wavelets for studying the character of fluctuations at different scales, before modeling the cyclic behavior through GP. Cyclic variations emerge at intermediate time scales and the corresponding dynamical equations reveal characteristic behavior at different scales.
Prawirodirdjo, Linette; Ben-Zion, Yehuda; Bock, Yehuda
2006-02-01
We suggest that strain in the elastic part of the Earth's crust induced by surface temperature variations is a significant contributor to the seasonal variations observed in the spatially filtered daily position time series of Southern California Integrated GPS Network (SCIGN) stations. We compute the predicted thermoelastic strain from the observed local atmospheric temperature record assuming an elastically decoupled layer over a uniform elastic half-space and compare the seasonal variations in thermoelastic strain to the horizontal GPS position time series. We consider three regions (Palmdale, 29 Palms, and Idyllwild), each with one temperature station and three to six GPS stations. The temperature time series is used to compute thermoelastic strain at each station on the basis of its relative location in the temperature field. For each region we assume a wavelength for the temperature field that is related to the local topography. The depth of the decoupled layer is inferred from the phase delay between the temperature record and the GPS time series. The relative amplitude of strain variation at each GPS station, calculated to be on the order of 0.1 μstrain, is related to the relative location of that station in the temperature field. The goodness of fit between model and data is evaluated from the relative amplitudes of the seasonal signals, as well as the appropriateness of the chosen temperature field wavelength and decoupled layer depth. The analysis shows a good fit between the predicted strains and the GPS time series. This suggests that the model captures the key first-order ingredients that determine the thermoelastic strain in a given area. The results can be used to improve the signal/noise ratio in GPS data.
Applications of soft computing in time series forecasting simulation and modeling techniques
Singh, Pritpal
2016-01-01
This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...
A note on the Fourier series model for analysing line transect data.
Buckland, S T
1982-06-01
The Fourier series model offers a powerful procedure for the estimation of animal population density from line transect data. The estimate is reliable over a wide range of detection functions. In contrast, analytic confidence intervals yield, at best, 90% confidence for nominal 95% intervals. Three solutions, one using Monte Carlo techniques, another making direct use of replicate lines and the third based on the jackknife method, are discussed and compared.
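Of the three interval methods mentioned, the jackknife based on replicate lines is the simplest to sketch. A minimal Python illustration with hypothetical per-line density estimates (not Buckland's data); note that for a plain mean the jackknife pseudo-values reduce to the per-line estimates themselves:

```python
import numpy as np

def jackknife_ci(line_densities, z=1.96):
    """Jackknife estimate and approximate 95% CI from per-line density estimates."""
    d = np.asarray(line_densities, dtype=float)
    n = len(d)
    loo = np.array([np.delete(d, i).mean() for i in range(n)])  # leave-one-line-out
    theta = d.mean()
    pseudo = n * theta - (n - 1) * loo     # jackknife pseudo-values
    est = pseudo.mean()
    se = pseudo.std(ddof=1) / np.sqrt(n)
    return est, (est - z * se, est + z * se)

# Hypothetical per-line density estimates (animals per km^2)
est, (lo, hi) = jackknife_ci([12.0, 9.5, 14.2, 11.1, 10.4])
print(est, lo, hi)
```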
Uniting Mandelbrot’s Noah and Joseph Effects in Toy Models of Natural Hazard Time Series
Credgington, D.; Watkins, N. W.; Chapman, S. C.; Rosenberg, S. J.; Sanchez, R.
2009-12-01
The forecasting of extreme events is a highly topical, cross-disciplinary problem. One aspect which is potentially tractable even when the events themselves are stochastic is the probability of a "burst" of a given size and duration, defined as the area between a time series and a constant threshold. Many natural time series depart from the simplest, Brownian, case, and in the 1960s Mandelbrot developed the use of fractals to describe these departures. In particular he proposed two kinds of fractal model to capture the way in which natural data are often persistent in time (his "Joseph effect", common in hydrology and exemplified by fractional Brownian motion) and/or prone to heavy-tailed jumps (the "Noah effect", typical of economic index time series, for which he gave Levy flights as an exemplar). Much of the earlier modelling, however, has emphasised one of the Noah and Joseph parameters (the tail exponent mu, or one derived from the temporal behaviour such as the power spectral exponent beta) at the other's expense. I will describe work [1] in which we applied a simple self-affine stable model, linear fractional stable motion (LFSM), which unifies both effects to better describe natural data, in this case from space physics. I will show how we have resolved some contradictions seen in earlier work, where purely Joseph or Noah descriptions had been sought. I will also show recent work [2] using numerical simulations of LFSM and simple analytic scaling arguments to study the problem of the area between a fractional Levy model time series and a threshold. [1] Watkins et al., Space Science Reviews [2005]. [2] Watkins et al., Physical Review E [2009].
Renormalized scattering series for frequency-domain waveform modelling of strong velocity contrasts
Jakobsen, M.; Wu, R. S.
2016-08-01
An improved description of scattering and inverse scattering processes in reflection seismology may be obtained on the basis of a scattering series solution to the Helmholtz equation, which allows one to model primary and multiple reflections separately. However, the popular Born scattering series is of limited value for seismic modelling, since it is only guaranteed to converge if the global contrast is relatively small. For frequency-domain waveform modelling of realistic contrasts, some kind of renormalization may be required. The concept of renormalization is normally associated with quantum field theory, where it is essential for the treatment of infinities in connection with observable quantities, but the renormalization program is also highly relevant for classical systems, especially when interaction effects act across different length scales. In the scattering series of De Wolf, a renormalization of the Green's functions is achieved by splitting the scattering potential operator into fore- and backscattering parts, which leads to an effective reorganization and partial re-summation of the terms in the Born series, so that their order better reflects the physics of reflection seismology. It has been demonstrated that the leading (single-return) term in the De Wolf series (DWS) gives much more accurate results than the corresponding Born approximation, especially for models with high contrasts that lead to a large accumulation of phase changes in the forward direction. However, the higher-order terms in the DWS associated with internal multiples have not previously been studied numerically. In this paper, we report on a systematic numerical investigation of the convergence properties of the DWS based on two new operator representations of the DWS. The first operator representation is relatively similar to the original scattering potential formulation, but more global and explicit in nature. The second
Klein, A.A.B.; Melard, G.; Zahaf, T.
2000-01-01
The Fisher information matrix is of fundamental importance for the analysis of parameter estimation of time series models. In this paper the exact information matrix of a multivariate Gaussian time series model expressed in state space form is derived. A computationally efficient procedure is used b
Yang, Hyun-Ho; Han, Chang-Hoon; Oen Lee, Jeong; Yoon, Jun-Bo
2014-06-01
As a powerful method to reduce actuation voltage in an electrostatic micro-actuator, we propose and investigate an electrostatic micro-actuator with a pre-charged series capacitor. In contrast to a conventional electrostatic actuator, the injected pre-charges into the series capacitor can freely modulate the pull-in voltage of the proposed actuator even after the completion of fabrication. The static characteristics of the proposed actuator were investigated by first developing analytical models based on a parallel-plate capacitor model. We then successfully designed and demonstrated a micro-switch with a pre-charged series capacitor. The pull-in voltage of the fabricated micro-switch was reduced from 65.4 to 0.6 V when pre-charged with 46.3 V. The on-resistance of the fabricated micro-switch was almost the same as the initial one, even when the device was pre-charged, which was demonstrated for the first time. All results from the analytical models, finite element method simulations, and measurements were in good agreement with deviations of less than 10%. This work can be favorably adapted to electrostatic micro-switches which need a low actuation voltage without noticeable degradation of performance.
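For context, the classic parallel-plate pull-in voltage sets the baseline that the pre-charge then offsets. A rough Python sketch with hypothetical device parameters (the first-order offset view below is an assumption for illustration; the paper's own model couples the series capacitor into the electrostatic force balance):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, g0, area):
    """Classic parallel-plate result: V_pi = sqrt(8*k*g0^3 / (27*eps0*A))."""
    return math.sqrt(8.0 * k * g0 ** 3 / (27.0 * EPS0 * area))

# Hypothetical device parameters (not the fabricated switch in the paper)
k = 10.0                  # spring constant, N/m
g0 = 2e-6                 # initial gap, m
area = 100e-6 * 100e-6    # electrode area, m^2

v_pi = pull_in_voltage(k, g0, area)

# First-order view of the effect: the bias stored on the pre-charged series
# capacitor offsets the voltage the supply must provide to reach pull-in.
v_pre = 0.9 * v_pi
print(v_pi, v_pi - v_pre)
```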
A Landsat Time-Series Stacks Model for Detection of Cropland Change
Chen, J.; Chen, J.; Zhang, J.
2017-09-01
Global, timely, accurate, and cost-effective cropland monitoring at fine spatial resolution will dramatically improve our understanding of the effects of agriculture on greenhouse gas emissions, food safety, and human health. Time-series remote sensing imagery has shown particular potential for describing land cover dynamics. Traditional change detection techniques are often not capable of detecting land cover changes within time series that are severely influenced by seasonal differences, and are therefore likely to generate pseudo changes. Here we introduce and test LTSM (Landsat time-series stacks model), an improvement on the previously proposed Continuous Change Detection and Classification (CCDC) approach, to extract spectral trajectories of land surface change using dense Landsat time-series stacks (LTS). The method is expected to eliminate pseudo changes caused by seasonally driven phenology. The main idea is that, using all available Landsat 8 images within a year, an LTSM consisting of a two-term harmonic function is estimated iteratively for each pixel in each spectral band. LTSM defines change areas by differencing the predicted and observed Landsat images. The LTSM approach was compared with the change vector analysis (CVA) method. The results indicate that LTSM correctly detected true changes without overestimating false ones, while CVA identified true change pixels along with a large number of false changes. The detection of change areas achieved an overall accuracy of 92.37 %, with a kappa coefficient of 0.676.
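The per-pixel two-term harmonic fit at the heart of such methods can be sketched with ordinary least squares; synthetic reflectance stands in for real Landsat 8 stacks, and the 3-sigma change rule is an illustrative choice rather than the paper's exact threshold:

```python
import numpy as np

rng = np.random.default_rng(2)

def harmonic_design(doy, n_terms=2, period=365.25):
    """Design matrix: intercept plus n_terms cosine/sine harmonic pairs."""
    doy = np.atleast_1d(np.asarray(doy, dtype=float))
    cols = [np.ones_like(doy)]
    for k in range(1, n_terms + 1):
        cols.append(np.cos(2 * np.pi * k * doy / period))
        cols.append(np.sin(2 * np.pi * k * doy / period))
    return np.column_stack(cols)

# Synthetic per-pixel reflectance with a seasonal cycle (illustrative only)
doy = np.sort(rng.uniform(0, 365, 40))                 # acquisition days of year
obs = 0.3 + 0.1 * np.sin(2 * np.pi * doy / 365.25) + rng.normal(0, 0.01, doy.size)

coef, *_ = np.linalg.lstsq(harmonic_design(doy), obs, rcond=None)
pred = harmonic_design(doy) @ coef
resid_sd = np.std(obs - pred)

# Difference a new observation against the harmonic prediction to flag change
new_doy, new_obs = 180.0, 0.6
new_pred = (harmonic_design(new_doy) @ coef)[0]
is_change = abs(new_obs - new_pred) > 3 * resid_sd
print(is_change)
```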
Neighbourhood selection for local modelling and prediction of hydrological time series
Jayawardena, A. W.; Li, W. K.; Xu, P.
2002-02-01
The prediction of a time series using the dynamical systems approach requires knowledge of three parameters: the time delay, the embedding dimension, and the number of nearest neighbours. In this paper, a new criterion based on generalized degrees of freedom is presented for selecting the number of nearest neighbours needed for a better local model for time series prediction. The validity of the proposed method is examined using time series known to be chaotic under certain initial conditions (the Lorenz, Henon, and Logistic maps) and real hydro-meteorological time series (discharge data from the Chao Phraya River in Thailand and the Mekong River in Thailand and Laos, and sea surface temperature anomaly data). The predicted results are compared with observations, and with similar predictions obtained using arbitrarily fixed numbers of neighbours. The results indicate superior predictive capability, as measured by the mean square errors and coefficients of variation, for the proposed approach compared with the traditional approach of using a fixed number of neighbours.
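The underlying setup (delay embedding plus nearest-neighbour local prediction) can be sketched as follows, using the logistic map as in the paper's chaotic test cases; the fixed k here is for illustration only, since the paper's contribution is precisely a criterion for choosing the number of neighbours:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Delay-coordinate embedding: row t is [x_t, x_{t-tau}, ..., x_{t-(dim-1)tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)][::-1])

def knn_predict(x, dim=2, tau=1, k=5):
    """Predict the next value as the mean target of the k nearest embedded states."""
    emb = delay_embed(x, dim, tau)
    states, targets = emb[:-1], x[(dim - 1) * tau + 1 :]
    dist = np.linalg.norm(states - emb[-1], axis=1)
    nearest = np.argsort(dist)[:k]
    return targets[nearest].mean()

# Logistic map, a standard chaotic test series
x = np.empty(2000)
x[0] = 0.4
for t in range(1, x.size):
    x[t] = 3.9 * x[t - 1] * (1 - x[t - 1])

pred = knn_predict(x[:-1], dim=2, tau=1, k=5)
print(abs(pred - x[-1]))  # one-step-ahead error should be small
```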
Big Data impacts on stochastic Forecast Models: Evidence from FX time series
Sebastian Dietz
2013-12-01
With the rise of the Big Data paradigm, new tasks for prediction models have appeared. In addition to the volume problem of such data sets, nonlinearity becomes important, as more detailed data sets also contain more comprehensive information, e.g. about non-regular seasonal or cyclical movements as well as jumps in time series. This essay compares two nonlinear methods for predicting a high-frequency time series, the USD/Euro exchange rate. The first method investigated is the autoregressive neural network process (ARNN), a neural-network-based nonlinear extension of classical autoregressive process models from time series analysis (see Dietz 2011). Its advantage is its simple but scalable time series process model architecture, which can include all kinds of nonlinearities based on the universal approximation theorem of Hornik, Stinchcombe and White (1989) and the extensions of Hornik (1993). However, restrictions related to the numeric estimation procedures limit the flexibility of the model. The alternative is a support vector machine model (SVM; Vapnik 1995). The two methods differ in their approach to error minimization (empirical error minimization for the ARNN vs. structural error minimization for the SVM). Our new finding is that time series data classified as "Big Data" need new methods for prediction. Estimation and prediction were performed using the statistical programming language R. Besides prediction results, we also discuss the impact of Big Data on the data preparation and model validation steps.
Finlay, Chris; Olsen, Nils; Tøffner-Clausen, Lars
Ten months of data from ESA's Swarm mission, together with recent ground observatory monthly means, are used to update the CHAOS series of geomagnetic field models, with a focus on time changes of the core field. As for previous CHAOS field models, quiet-time, night-side data selection criteria are used. The time dependence is parameterized by a spline representation with knot points spaced at 0.5-year intervals. The resulting field model is able to consistently fit data from six independent low Earth orbit satellites: Oersted, CHAMP, SAC-C and the three Swarm satellites. As an example, we present comparisons of the excellent model…
Mukhin, Dmitry; Gavrilov, Andrey; Loskutov, Evgeny; Feigin, Alexander
2016-04-01
We suggest a method for empirical forecasting of climate dynamics based on the reconstruction of reduced dynamical models in the form of random dynamical systems [1,2] derived from observational time series. The construction of a proper embedding, the set of variables determining the phase space in which the model works, is no doubt the most important step in such modeling, but this task is non-trivial due to the huge dimension of time series of typical climatic fields. An appropriate expansion of the observational time series is needed, yielding a number of principal components considered as phase variables that are efficient for the construction of a low-dimensional evolution operator. We emphasize two main features the reduced models should have to capture the main dynamical properties of the system: (i) taking into account time-lagged teleconnections in the atmosphere-ocean system and (ii) reflecting the nonlinear nature of these teleconnections. In accordance with these principles, in this report we present a methodology which combines a new way of constructing an embedding by spatio-temporal data expansion with nonlinear model construction on the basis of artificial neural networks. The methodology is applied to NCEP/NCAR reanalysis data, including fields of sea level pressure, geopotential height, and wind speed covering the Northern Hemisphere. Its efficiency for the interannual forecast of various climate phenomena, including ENSO, PDO, NAO, and strong blocking event conditions over the midlatitudes, is demonstrated. We also investigate the ability of the models to reproduce and predict the evolution of qualitative features of the dynamics, such as spectral peaks, critical transitions, and statistics of extremes. This research was supported by the Government of the Russian Federation (Agreement No. 14.Z50.31.0033 with the Institute of Applied Physics RAS). [1] Y. I. Molkov, E. M. Loskutov, D. N. Mukhin, and A. M. Feigin, "Random
Time-series gas prediction model using LS-SVR within a Bayesian framework
Qiao Meiying; Ma Xiaoping; Lan Jianyi; Wang Ying
2011-01-01
The traditional least squares support vector regression (LS-SVR) model, which uses cross validation to determine the regularization parameter and kernel parameter, is time-consuming. We propose a Bayesian evidence framework to infer the LS-SVR model parameters. Three levels of Bayesian inference are used to determine the model parameters and regularization hyperparameters and to tune the kernel parameters by model comparison. On this basis, we establish Bayesian LS-SVR time-series gas forecasting models and give the steps of the algorithm. Gas outburst data from a working face of the Hebi 10th mine are used to validate the model. The optimal embedding dimension and delay time of the time series were obtained by the smallest differential entropy method. Finally, within a MATLAB 7.1 environment, we used actual coal gas data to compare the traditional LS-SVR and the Bayesian LS-SVR using the LS-SVMlab1.5 Toolbox. The results show that the Bayesian framework for LS-SVR significantly improves the speed and accuracy of the forecast.
A stochastic HMM-based forecasting model for fuzzy time series.
Li, Sheng-Tun; Cheng, Yi-Chung
2010-10-01
Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model by enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the nature of conjecture and randomness of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index by forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus the result statistically approximates the real mean of the target value being forecast.
Partitioning and interpolation based hybrid ARIMA–ANN model for time series forecasting
C NARENDRA BABU; PALLAVIRAM SURE
2016-07-01
Time series data (TSD) originating from different applications have dissimilar characteristics, so diversified varieties of prediction models exist for TSD. In many applications, hybrid models provide more accurate predictions than individual models. One such hybrid model, the autoregressive integrated moving average - artificial neural network (ARIMA-ANN), is devised in many different ways in the literature. However, the prediction accuracy of the hybrid ARIMA-ANN model can be further improved by devising suitable processing techniques. In this paper, a hybrid ARIMA-ANN model is proposed which combines the concepts of the recently developed moving average (MA) filter based hybrid ARIMA-ANN model with a processing technique involving a partitioning-interpolation (PI) step. The improved prediction accuracy of the proposed PI based hybrid ARIMA-ANN model is justified using a simulation experiment. Further, on different experimental TSD, such as sunspot TSD and electricity price TSD, the proposed hybrid model is applied along with four existing state-of-the-art models; it is found that the proposed model outperforms all the others and hence is a promising model for TSD prediction.
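The filter-decompose-model-recombine idea behind the MA-filter hybrid can be sketched in a few lines; here simple AR(1) fits stand in for both the ARIMA and ANN components, an illustrative simplification of the actual hybrid:

```python
import numpy as np

rng = np.random.default_rng(3)

def ma_filter(x, window=5):
    """Trailing moving-average split of a series into smooth + rough components."""
    kernel = np.ones(window) / window
    smooth = np.convolve(x, kernel, mode="valid")   # smooth[j] = mean(x[j:j+window])
    aligned = x[window - 1 :]                       # align each mean with its last sample
    return smooth, aligned - smooth

def ar1_forecast(s):
    """One-step AR(1) forecast fitted by least squares (stand-in for ARIMA/ANN)."""
    zc = s - s.mean()
    phi = np.dot(zc[:-1], zc[1:]) / np.dot(zc[:-1], zc[:-1])
    return s.mean() * (1 - phi) + phi * s[-1]

# Synthetic series: slow trend + cycle + noise (illustrative only)
t = np.arange(300)
x = 0.01 * t + np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.1, t.size)

smooth, rough = ma_filter(x)
forecast = ar1_forecast(smooth) + ar1_forecast(rough)  # recombine the two components
print(forecast)
```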
Time Series Model of Occupational Injuries Analysis in Ghanaian Mines-A Case Study
S.J. Aidoo
2012-02-01
This study models occupational injuries at Gold Fields Ghana Limited (GFGL), Tarkwa Mine, using time series analysis. Data were collected from the Safety and Environment Department for January 2007 to December 2010. A line-graph test for stationarity in the Statistical Package for Social Sciences (SPSS, 17.0 edition) failed, so the Box-Jenkins method of differencing was used, and the series became stationary after the first difference. An ARIMA (1,1,1) model was then applied to the stationary data, and model diagnostics were performed to ensure its appropriateness. The model was further used to forecast the occurrence of injuries at GFGL for the two-year period from January 2011 to December 2012. The results show that occupational injuries at GFGL will move slightly up and down from January 2011 to May 2011, after which they will stabilize at almost zero from June 2011 to December 2012.
A Trend-Switching Financial Time Series Model with Level-Duration Dependence
Qingsheng Wang
2012-01-01
… overcome the difficult problem that motivates the research in this paper. An asymmetric and nonlinear model in which the change of the local trend depends on a local high-low turning point process is first proposed. As the point process can be decomposed into two different processes, a high-low level process and an up-down duration process, we then establish the so-called trend-switching model, which depends on both level and duration (Trend-LD). The proposed model can efficiently predict the direction and magnitude of the local trend of a time series by incorporating local high-low turning point information. Numerical results on six indices in world stock markets show that the proposed Trend-LD model is suitable for fitting the market data and able to outperform the traditional random walk model.
Lihua Yang
2015-04-01
Export volume forecasting of fresh fruit is a complex task due to the large number of factors affecting demand. To guide fruit growers' sales, decrease cultivation costs, and increase their incomes, a hybrid fresh-apple export volume forecasting model is proposed. Using actual data on fresh apple export volumes, a Seasonal Decomposition (SD) time-series model and a Radial Basis Function (RBF) artificial neural network model are built. The predictive results of the three forecasting models are compared using the Mean Absolute Percentage Error (MAPE) criterion. The results indicate that the proposed combined forecasting model is effective because it improves the prediction accuracy for fresh apple export volumes.
Kamel, S.; Jurado, F.; Chen, Zhe
2015-01-01
This paper presents an implicit model of the Static Synchronous Series Compensator (SSSC) in the Newton-Raphson load flow method. The load flow algorithm is based on the revised current injection formulation, and the developed SSSC model relies on the current injection approach. In this model, the voltage source representation of the SSSC is transformed into a current source, and this current is injected at the sending and auxiliary buses. The injected currents at the terminals of the SSSC are a function of the required line flow and bus voltages, and they can be easily included in the original mismatches at the terminal buses of the SSSC. The developed model can be used to control active and reactive line flow together or individually. The implicit modeling of the SSSC device decreases the complexity of the load flow code; modification of the Jacobian matrix is avoided, and the change only…
I MADE ARYA ANTARA
2015-02-01
This paper elaborates and compares the performance of a Fuzzy Time Series (FTS) model and a Markov Chain (MC) model in forecasting the Gross Regional Domestic Product (GDRP) of Bali Province. Both methods are considered forecasting methods in the soft modeling domain. The data used are quarterly Bali GDRP data for 1992 through 2013 from the Indonesian Bureau of Statistics, Denpasar office. Instead of using the original data, the rate of change between two consecutive quarters was modeled. From the in-sample forecasting conducted, the Average Forecasting Error Rate (AFER) for the FTS and MC models was 0.78 percent and 2.74 percent, respectively. Based on these findings, FTS outperformed MC in in-sample forecasting of Bali's GDRP data.
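AFER, the comparison criterion used above, is simply the mean absolute percentage error expressed in percent. A small Python sketch with hypothetical growth-rate figures (not Bali's GDRP data):

```python
import numpy as np

def afer(actual, forecast):
    """Average Forecasting Error Rate: mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs(actual - forecast) / np.abs(actual)) * 100.0)

# Hypothetical quarterly growth rates and two competing in-sample forecasts
actual = [2.0, 2.5, 1.8, 2.2]
fts_fc = [2.1, 2.4, 1.9, 2.1]
mc_fc = [2.4, 2.0, 1.5, 2.6]
print(afer(actual, fts_fc), afer(actual, mc_fc))  # the model with lower AFER wins
```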
Zhang, Yong; Zhong, Miner; Geng, Nana; Jiang, Yunjian
2017-01-01
The market demand for electric vehicles (EVs) has increased in recent years, and suitable models are necessary to understand and forecast EV sales. This study presents singular spectrum analysis (SSA) as a univariate time-series model and a vector autoregressive (VAR) model as a multivariate model. Empirical results suggest that SSA satisfactorily captures the evolving trend and provides reasonable results. The VAR model, which incorporates exogenous market-related parameters on a monthly basis, significantly improves the prediction accuracy. EV sales in China, categorized into battery and plug-in EVs, are predicted in both the short term (up to December 2017) and the long term (up to 2020), providing statistical evidence of the growth of the Chinese EV industry.
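The SSA step can be sketched as a Hankel (trajectory-matrix) embedding followed by a truncated SVD and anti-diagonal averaging. A minimal Python illustration on synthetic trend-plus-seasonal data (window length and component count are arbitrary choices, and the EV sales data are not used):

```python
import numpy as np

def ssa_reconstruct(x, window, n_components):
    """Basic SSA: Hankel embedding, truncated SVD, anti-diagonal averaging."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i : i + window] for i in range(k)])  # trajectory matrix
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):                       # average over anti-diagonals
        recon[j : j + window] += approx[:, j]
        counts[j : j + window] += 1.0
    return recon / counts

# Trend + seasonal + noise (illustrative, not the Chinese EV sales data)
rng = np.random.default_rng(4)
t = np.arange(120)
clean = 0.05 * t + np.sin(2 * np.pi * t / 12)
x = clean + rng.normal(0, 0.2, t.size)

trend_seasonal = ssa_reconstruct(x, window=24, n_components=4)
print(np.mean((trend_seasonal - clean) ** 2))  # well below the noise variance of 0.04
```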
Time Series Model of Wind Speed for Multi Wind Turbines based on Mixed Copula
Nie Dan
2016-01-01
Full Text Available Because wind power is intermittent and random, large-scale grid integration will directly affect the safe and stable operation of the power grid. In order to study quantitatively the wind speed characteristics of wind turbines, a wind speed time series model for multiple wind turbines is constructed using the mixed Copula-ARMA function, and a numerical example is given. The results show that the model can effectively predict wind speed, ensure efficient operation of the wind turbines, and provide a theoretical basis for the stability of grid-connected wind power operation.
Interception modeling with vegetation time series derived from Landsat TM data
Polo, M. J.; Díaz-Gutiérrez, A.; González-Dugo, M. P.
2011-11-01
Rainfall interception by vegetation may constitute a significant fraction of the water budget at local and watershed scales, especially in Mediterranean areas. Different approaches can be found to model the interception fraction locally, but a distributed analysis requires time series of vegetation along the watershed for the study period, including both vegetation type and ground cover fraction. In heterogeneous watersheds, remote sensing is usually the only viable alternative for characterizing medium to large areas, but the high number of scenes necessary to capture the temporal variability over long periods, together with the sometimes extreme scarcity of data during the wet season, makes it necessary to work with a limited number of images and interpolate vegetation maps between consecutive dates. This work presents an interception model for heterogeneous watersheds which combines a continuous interception simulation derived from the Gash model and its derivations with a time series of vegetation cover fraction and type from Landsat TM data and vegetation inventories. A mountainous watershed in Southern Spain where a physical hydrological model had been previously calibrated was selected for this study. The dominant species distribution and their relevant characteristics regarding the interception process were analyzed from the literature and digital cartography; the evolution of the vegetation cover fraction along the watershed during the study period (2002-2005) was produced by applying an NDVI analysis to the available Landsat TM scenes. The model was further calibrated with field data collected in selected areas of the watershed.
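As a hedged illustration of how a Gash-type per-event rule combines with a remotely sensed cover fraction, here is a deliberately simplified variant: it omits the wet-up evaporation term of the full Gash model, and all parameter values are hypothetical, not taken from the study:

```python
def event_interception(P, c, S, E_over_R):
    """Per-storm canopy interception loss (mm), simplified Gash-type rule.

    P        : gross rainfall of the event (mm)
    c        : ground cover fraction of the canopy (0..1), e.g. NDVI-derived
    S        : canopy storage capacity (mm)
    E_over_R : mean wet-canopy evaporation rate over mean rainfall rate

    Below saturation (P < S) all intercepted water evaporates; once the
    storage fills, the loss grows with the evaporation/rain-rate ratio.
    (The full Gash model adds a logarithmic wet-up term, omitted here.)"""
    if P < S:
        return c * P
    return c * (S + E_over_R * (P - S))

# Same storm, two pixels whose cover fractions differ (as a distributed
# model would obtain them from interpolated Landsat-derived maps).
storm = 12.0                                            # mm gross rainfall
dense = event_interception(storm, c=0.8, S=1.5, E_over_R=0.15)
sparse = event_interception(storm, c=0.3, S=1.5, E_over_R=0.15)
print(dense > sparse)  # True: denser canopy intercepts more
```

Driving this rule with a per-pixel, per-date cover fraction time series is the essence of the distributed scheme the abstract describes.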
A sequential approach to calibrate ecosystem models with multiple time series data
Oliveros-Ramos, Ricardo; Verley, Philippe; Echevin, Vincent; Shin, Yunne-Jai
2017-02-01
When models are aimed at supporting decision-making, their credibility is essential to consider. Model fit to observed data is one major criterion for assessing such credibility. However, because the complexity of ecosystem models makes their calibration more challenging, the scientific community has given more attention to the exploration of model behavior than to rigorous comparison with observations. This work highlights some issues related to the comparison of complex ecosystem models to data and proposes a methodology for a sequential multi-phase calibration (or parameter estimation) of ecosystem models. We first propose two criteria to classify the parameters of a model: the model dependency and the time variability of the parameters. These criteria, together with the availability of approximate initial estimates, are then used as decision rules to determine which parameters need to be estimated and their precedence order in the sequential calibration process. The end-to-end (E2E) ecosystem model ROMS-PISCES-OSMOSE, applied to the Northern Humboldt Current Ecosystem, is used as an illustrative case study. The model is calibrated using an evolutionary algorithm and a likelihood approach to fit time series data of landings, abundance indices, and catch-at-length distributions from 1992 to 2008. Testing different calibration schemes regarding the number of phases, the precedence of the parameters' estimation, and the consideration of time-varying parameters, the results show that the multiple-phase calibration conducted under our criteria improved the model fit.
Cheng, C. M.; Peng, Z. K.; Zhang, W. M.; Meng, G.
2017-03-01
Nonlinear problems have drawn great interest and extensive attention from engineers, physicists, mathematicians and many other scientists, because most real systems are inherently nonlinear in nature. To model and analyze nonlinear systems, many mathematical theories and methods have been developed, including the Volterra series. In this paper, the basic definition of the Volterra series is recapitulated, together with some frequency domain concepts derived from it, including the generalized frequency response function (GFRF), the nonlinear output frequency response function (NOFRF), the output frequency response function (OFRF) and the associated frequency response function (AFRF). The relationships between the Volterra series and other nonlinear system models and nonlinear problem solving methods are discussed, including the Taylor series, Wiener series, NARMAX model, Hammerstein model, Wiener model, Wiener-Hammerstein model, harmonic balance method, perturbation method and Adomian decomposition. The challenging problems and their state of the art in series convergence and kernel identification studies are comprehensively introduced. In addition, a detailed review is given of the applications of the Volterra series in mechanical engineering, aeroelasticity, control engineering, and electronic and electrical engineering.
Sample correlations of infinite variance time series models: an empirical and theoretical study
Jason Cohen
1998-01-01
Full Text Available When the elements of a stationary ergodic time series have finite variance, the sample correlation function converges (with probability 1) to the theoretical correlation function. What happens in the case where the variance is infinite? In certain cases, the sample correlation function converges in probability to a constant, but not always. If, within a class of heavy-tailed time series, the sample correlation functions do not converge to a constant, then more care must be taken in making inferences and in model selection on the basis of sample autocorrelations. We experimented with simulating various heavy-tailed stationary sequences in an attempt to understand what causes the sample correlation function to converge or not converge to a constant. In two new cases, namely the sum of two independent moving averages and a random permutation scheme, we are able to provide theoretical explanations for a random limit of the sample autocorrelation function as the sample grows.
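The contrast is easy to reproduce numerically. This sketch compares a finite-variance Gaussian MA(1), whose sample ACF at lag 1 settles near the theoretical value θ/(1+θ²), with an infinite-variance analogue built from symmetrized Pareto noise (tail index 1.5, so the variance does not exist); none of this is the paper's simulation design:

```python
import numpy as np

def sample_acf(x, lag):
    """Sample autocorrelation at the given lag (mean-corrected)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(1)
theta = 0.5
n = 200_000

# Finite-variance MA(1): sample ACF(1) -> theta / (1 + theta^2) = 0.4.
z = rng.normal(size=n)
x = z[1:] + theta * z[:-1]
print(abs(sample_acf(x, 1) - theta / (1 + theta**2)) < 0.01)  # True

# Infinite-variance MA(1) from heavy-tailed noise: the statistic is still
# computable, but its limit behaviour differs and can be random, so
# inference based on it needs the extra care the paper discusses.
w = rng.pareto(1.5, size=n) * rng.choice([-1.0, 1.0], size=n)
y = w[1:] + theta * w[:-1]
acf_heavy = sample_acf(y, 1)
```

Re-running the heavy-tailed case with different seeds shows much larger run-to-run variability than the Gaussian case, which is the practical symptom of the phenomenon studied here.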
Modeling and Simulation of Time Series Prediction Based on Dynamic Neural Network
王雪松; 程玉虎; 彭光正
2004-01-01
Modeling and simulation of time series prediction based on a dynamic neural network (NN) are studied. A prediction model for non-linear and time-varying systems is proposed based on a dynamic Jordan NN. To address the intrinsic defect of the back-propagation (BP) algorithm, which cannot update network weights incrementally, a hybrid algorithm combining the temporal difference (TD) method with BP is put forward to train the Jordan NN. The proposed method is applied to real-time, multi-step prediction of the ash content of clean coal in jigging production. A practical example is given, and its application results indicate that the method performs better than others and offers a useful reference for the prediction of nonlinear time series.
A model-free characterization of recurrences in stationary time series
Chicheportiche, Rémy; Chakraborti, Anirban
2017-05-01
The study of recurrences in earthquakes, climate, financial time series, etc. is crucial to better forecast disasters and limit their consequences. Most previous phenomenological studies of recurrences have involved only a long-ranged autocorrelation function and ignored the multi-scaling properties induced by potential higher-order dependencies. We argue that copulas are a natural model-free framework to study non-linear dependencies in time series and related concepts like recurrences. We find that (i) non-linear dependences do impact both the statistics and dynamics of recurrence times, and (ii) the scaling arguments for the unconditional distribution may not be applicable. Hence, fitting and/or simulating the intertemporal distribution of recurrence intervals is very much system specific and cannot actually benefit from universal features, in contrast to previous claims. This has important implications in epilepsy prognosis and financial risk management applications.
Magdy A. El-Tawil
2012-07-01
Full Text Available The homotopy analysis method (HAM) is used to find approximate analytical solutions of continuous population models for single and interacting species. The HAM contains the auxiliary parameter $\hbar$, which provides a simple way to adjust and control the convergence region of the series solution. The solutions are compared with the numerical results obtained using NDSolve, an ordinary differential equation solver in the Mathematica package, and good agreement is found. The solutions are also compared with the available analytic results obtained by other methods, and a more accurate and convergent series solution is found. The convergence region is also computed, which shows the validity of the HAM solution. This method is reliable and manageable.
Chattopadhyay, Goutami; 10.1140/epjp/i2012-12043-9
2012-01-01
This study reports a statistical analysis of the monthly sunspot number time series and observes non-homogeneity and asymmetry within it. Using the Mann-Kendall test, a linear trend is revealed. After identifying stationarity within the time series, we generate autoregressive AR(p) and autoregressive moving average ARMA(p,q) models. Based on minimization of the AIC, we find 3 and 1 to be the best values of p and q, respectively. In the next phase, an autoregressive neural network (AR-NN(3)) is generated by training a generalized feedforward neural network (GFNN). Assessing the model performances by means of Willmott's index of second order and the coefficient of determination, AR-NN(3) is found to perform better than AR(3) and ARMA(3,1).
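The AR order selection step can be sketched with least-squares AR fitting and the AIC. The simulated series and its coefficients below are illustrative assumptions, not the sunspot data, and the AIC formula is the standard Gaussian-likelihood form:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns (coefficients, AIC)."""
    y = x[p:]
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ a) ** 2)
    n = len(y)
    aic = n * np.log(rss / n) + 2 * (p + 1)   # Gaussian AIC up to a constant
    return a, aic

# Simulate a stationary AR(3) process (hypothetical coefficients).
rng = np.random.default_rng(0)
true_a = np.array([0.3, -0.2, 0.15])
m = 5100                                       # includes 100 burn-in samples
x = np.zeros(m)
e = rng.normal(size=m)
for t in range(3, m):
    x[t] = 0.3 * x[t - 1] - 0.2 * x[t - 2] + 0.15 * x[t - 3] + e[t]
x = x[100:]                                    # drop burn-in

aics = {p: fit_ar(x, p)[1] for p in range(1, 7)}
print(aics[3] < aics[2])  # True: the third lag clearly pays its AIC penalty
```

With enough data the AIC-minimizing order sits at (or occasionally just above) the true order; the paper applies the same minimization to pick AR(3) and ARMA(3,1) for the sunspot series.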
The fictionality of topic modeling: Machine reading Anthony Trollope's Barsetshire series
Rachel Sagner Buurma
2015-12-01
Full Text Available This essay describes how using unsupervised topic modeling (specifically the latent Dirichlet allocation topic modeling algorithm in MALLET) on relatively small corpora can help scholars of literature circumvent the limitations of some existing theories of the novel. Using an example drawn from work on Victorian novelist Anthony Trollope's Barsetshire series, it argues that unsupervised topic modeling's counter-factual and retrospective reconstruction of the topics out of which a given set of novels has been created allows for a denaturalizing and unfamiliar (though crucially not “objective” or “unbiased”) view. In other words, topic models are fictions, and scholars of literature should consider reading them as such. Drawing on one aspect of Stephen Ramsay's idea of algorithmic criticism, the essay emphasizes the continuities between “big data” methods and techniques and longer-standing methods of literary study.
Endo, Vitor Takashi; de Carvalho Pereira, José Carlos
2017-05-01
Material properties description and understanding are essential aspects when computational solid mechanics is applied to product development. In order to promote injected fiber reinforced thermoplastic materials for structural applications, it is very relevant to develop material characterization procedures, considering mechanical properties variation in terms of fiber orientation and loading time. Therefore, a methodology considering sample manufacturing, mechanical tests and data treatment is described in this study. The mathematical representation of the material properties was solved by a linear viscoelastic constitutive model described by Prony series, which was properly adapted to orthotropic materials. Due to the large number of proposed constitutive model coefficients, a parameter identification method was employed to define mathematical functions. This procedure promoted good correlation among experimental tests, and analytical and numerical creep models. Such results encourage the use of numerical simulations for the development of structural components with the proposed linear viscoelastic orthotropic constitutive model. A case study was presented to illustrate an industrial application of proposed methodology.
Analysis of hohlraum energetics of the SG series and the NIF experiments with energy balance model
Guoli Ren
2017-01-01
Full Text Available The basic energy balance model is applied to analyze the hohlraum energetics data from the Shenguang (SG) series laser facilities and the National Ignition Facility (NIF) experiments published in the past few years. The analysis shows that the overall hohlraum energetics data agree with the energy balance model within 20% deviation. The 20% deviation might be caused by the diversity in hohlraum parameters, such as material, laser pulse, and gas filling density. In addition, the NIF's ignition target designs and our ignition target designs given by simulations are also in accordance with the energy balance model. This work confirms the value of the energy balance model for ignition target design and experimental data assessment, and demonstrates that the NIF energy is sufficient to achieve ignition provided that a 1D spherical radiation drive could be created and that both laser plasma instabilities and hydrodynamic instabilities could be suppressed.
Simpson, Shawn E
2013-03-01
A primary objective in the application of postmarketing drug safety surveillance is to ascertain the relationship between time-varying drug exposures and recurrent adverse events (AEs) related to health outcomes. The self-controlled case series (SCCS) method is one approach to analysis in this context. It is based on a conditional Poisson regression model, which assumes that events at different time points are conditionally independent given the covariate process. This requirement is problematic when the occurrence of an event can alter the future event risk. In a clinical setting, for example, patients who have a first myocardial infarction (MI) may be at higher subsequent risk for a second. In this work, we propose the positive dependence self-controlled case series (PD-SCCS) method: a generalization of SCCS that allows the occurrence of an event to increase the future event risk, yet maintains the advantages of the original model by controlling for fixed baseline covariates and relying solely on data from cases. As in the SCCS model, individual-level baseline parameters drop out of the PD-SCCS likelihood. Data sources used for postmarketing surveillance can contain tens of millions of people, so in this situation it is particularly advantageous that PD-SCCS avoids doing a costly estimation of individual parameters. We develop expressions for large sample inference and optimization for PD-SCCS and compare the results of our generalized model with the more restrictive SCCS approach. Copyright © 2013, The International Biometric Society.
Wangren Qiu
2015-01-01
Full Text Available Among techniques for constructing high-order fuzzy time series models, there are three types: those based on advanced algorithms, those based on computational methods, and those based on grouping the fuzzy logical relationships. The last type of model is easy to understand for a decision maker who knows nothing about fuzzy set theory or advanced algorithms. To deal with forecasting problems, this paper presents novel high-order fuzzy time series models, denoted GTS(M, N), based on generalized fuzzy logical relationships and automatic clustering. The paper introduces the concept of the generalized fuzzy logical relationship and an operation for combining the generalized relationships. The procedure of the proposed model was then applied to forecasting enrollment data at the University of Alabama. To demonstrate the considerably outperforming results, the proposed approach was also applied to forecasting the Shanghai Stock Exchange Composite Index. Finally, the effects of the parameters M and N, the number of orders, and the concerned principal fuzzy logical relationships on the forecasting results are also discussed.
Kansuporn eSriyudthsak
2016-05-01
Full Text Available The high-throughput acquisition of metabolome data is greatly anticipated for the complete understanding of cellular metabolism in living organisms. A variety of analytical technologies have been developed to acquire large-scale metabolic profiles under different biological or environmental conditions. Time series data are useful for predicting the most likely metabolic pathways because they provide important information regarding the accumulation of metabolites, which implies causal relationships in the metabolic reaction network. Considerable effort has been undertaken to utilize these data for constructing a mathematical model that merges system properties and quantitatively characterizes a whole metabolic system in toto. However, there are technical difficulties in bridging the provision and utilization of data. Although hundreds of metabolites can be measured, providing information on the metabolic reaction system, the simultaneous measurement of thousands of metabolites is still challenging. In addition, it is nontrivial to logically predict the dynamic behaviors of unmeasurable metabolite concentrations without sufficient information on the metabolic reaction network. Consolidating the advantages of advancements in both metabolomics and mathematical modeling thus remains to be accomplished. This review outlines the conceptual basis of, and recent advances in, technologies in both research fields. It also highlights the potential for constructing a large-scale mathematical model by estimating model parameters from time series metabolome data in order to comprehensively understand metabolism at the systems level.
Siggiridou, Elsa; Kugiumtzis, Dimitris
2016-04-01
Granger causality has been used for the investigation of the inter-dependence structure of the underlying systems of multi-variate time series. In particular, the direct causal effects are commonly estimated by the conditional Granger causality index (CGCI). In the presence of many observed variables and relatively short time series, CGCI may fail because it is based on vector autoregressive models (VAR) involving a large number of coefficients to be estimated. In this work, the VAR is restricted by a scheme that modifies the recently developed method of backward-in-time selection (BTS) of the lagged variables and the CGCI is combined with BTS. Further, the proposed approach is compared favorably to other restricted VAR representations, such as the top-down strategy, the bottom-up strategy, and the least absolute shrinkage and selection operator (LASSO), in terms of sensitivity and specificity of CGCI. This is shown by using simulations of linear and nonlinear, low and high-dimensional systems and different time series lengths. For nonlinear systems, CGCI from the restricted VAR representations are compared with analogous nonlinear causality indices. Further, CGCI in conjunction with BTS and other restricted VAR representations is applied to multi-channel scalp electroencephalogram (EEG) recordings of epileptic patients containing epileptiform discharges. CGCI on the restricted VAR, and BTS in particular, could track the changes in brain connectivity before, during and after epileptiform discharges, which was not possible using the full VAR representation.
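A minimal bivariate version of the Granger causality index that underlies CGCI can be sketched as the log ratio of restricted to full regression residual sums of squares. The full method conditions on all other observed variables and restricts the VAR (e.g. via BTS or LASSO); this sketch, on synthetic data, only contrasts restricted vs. full regressions for two series:

```python
import numpy as np

def gc_index(target, driver, p=1):
    """Granger causality index ln(RSS_restricted / RSS_full):
    how much do the driver's lags improve prediction of the target?"""
    n = len(target)
    y = target[p:]
    own = np.column_stack([target[p - k:n - k] for k in range(1, p + 1)])
    drv = np.column_stack([driver[p - k:n - k] for k in range(1, p + 1)])
    def rss(X):
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ b) ** 2)
    return np.log(rss(own) / rss(np.hstack([own, drv])))

# Simulate a one-directional coupling: x drives y, but not vice versa.
rng = np.random.default_rng(2)
n = 4000
e1, e2 = rng.normal(size=(2, n))
x = np.zeros(n)
y_ = np.zeros(n)
for t in range(1, n):
    x[t] = 0.3 * x[t - 1] + e1[t]
    y_[t] = 0.5 * y_[t - 1] + 0.4 * x[t - 1] + e2[t]

print(gc_index(y_, x) > gc_index(x, y_))  # True: causality detected x -> y only
```

In the high-dimensional setting of the paper, the number of lagged regressors in the full model explodes, which is exactly why the restricted VAR representations (BTS, top-down, bottom-up, LASSO) are needed before computing such indices.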
FY 2016 Status Report on the Modeling of the M8 Calibration Series using MAMMOTH
Baker, Benjamin Allen [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ortensi, Javier [Idaho National Lab. (INL), Idaho Falls, ID (United States); DeHart, Mark David [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-09-01
This report provides a summary of the progress made towards validating the multi-physics reactor analysis application MAMMOTH using data from measurements performed at the Transient Reactor Test facility, TREAT. The work completed consists of a series of comparisons of TREAT element types (standard and control rod assemblies) in small geometries as well as slotted mini-cores to reference Monte Carlo simulations to ascertain the accuracy of cross section preparation techniques. After the successful completion of these smaller problems, a full core model of the half slotted core used in the M8 Calibration series was assembled. Full core MAMMOTH simulations were compared to Serpent reference calculations to assess the cross section preparation process for this larger configuration. As part of the validation process the M8 Calibration series included a steady state wire irradiation experiment and coupling factors for the experiment region. The shape of the power distribution obtained from the MAMMOTH simulation shows excellent agreement with the experiment. Larger differences were encountered in the calculation of the coupling factors, but there is also great uncertainty on how the experimental values were obtained. Future work will focus on resolving some of these differences.
Loos, Martin; Krauss, Martin; Fenner, Kathrin
2012-09-18
Formation of soil nonextractable residues (NER) is central to the fate and persistence of pesticides. To investigate pools and extent of NER formation, an established inverse modeling approach for pesticide soil degradation time series was evaluated with a Monte Carlo Markov Chain (MCMC) sampling procedure. It was found that only half of 73 pesticide degradation time series from a homogeneous soil source allowed for well-behaved identification of kinetic parameters with a four-pool model containing a parent compound, a metabolite, a volatile, and a NER pool. A subsequent simulation indeed confirmed distinct parameter combinations of low identifiability. Taking the resulting uncertainties into account, several conclusions regarding NER formation and its impact on persistence assessment could nonetheless be drawn. First, rate constants for transformation of parent compounds to metabolites were correlated to those for transformation of parent compounds to NER, leading to degradation half-lives (DegT50) typically not being larger than disappearance half-lives (DT50) by more than a factor of 2. Second, estimated rate constants were used to evaluate NER formation over time. This showed that NER formation, particularly through the metabolite pool, may be grossly underestimated when using standard incubation periods. It further showed that amounts and uncertainties in (i) total NER, (ii) NER formed from the parent pool, and (iii) NER formed from the metabolite pool vary considerably among data sets at t→∞, with no clear dominance between (ii) and (iii). However, compounds containing aromatic amine moieties were found to form significantly more total NER when extrapolating to t→∞ than the other compounds studied. Overall, our study stresses the general need for assessing uncertainties, identifiability issues, and resulting biases when using inverse modeling of degradation time series for evaluating persistence and NER formation.
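A sketch of the four-pool structure described above (parent, metabolite, volatile, NER) as linear first-order kinetics, with hypothetical rate constants and simple forward-Euler integration; the study estimates such parameters by MCMC, and none of the values below come from it:

```python
import numpy as np

# Hypothetical first-order rate constants (1/day), for illustration only:
# parent -> metabolite, parent -> NER, parent -> volatile, metabolite -> NER.
k_pm, k_pn, k_pv, k_mn = 0.04, 0.02, 0.005, 0.01

def simulate(days=400, dt=0.01):
    """Forward-Euler integration of the linear four-pool model,
    recording the pool fractions once per day."""
    P, M, V, N = 1.0, 0.0, 0.0, 0.0
    series = []
    steps_per_day = int(round(1 / dt))
    for step in range(int(days / dt) + 1):
        if step % steps_per_day == 0:
            series.append((P, M, V, N))
        dP = -(k_pm + k_pn + k_pv) * P
        dM = k_pm * P - k_mn * M
        dV = k_pv * P
        dN = k_pn * P + k_mn * M
        P += dP * dt; M += dM * dt; V += dV * dt; N += dN * dt
    return np.array(series)

s = simulate()
# Mass balance holds, and NER keeps growing long after a standard
# 100-day incubation: stopping early underestimates final NER.
print(abs(s[-1].sum() - 1.0) < 1e-6, s[400, 3] > s[100, 3])  # True True
```

The slow metabolite-to-NER route in this toy run illustrates the abstract's point that NER formed through the metabolite pool can be grossly underestimated under standard incubation periods.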
A LANDSAT TIME-SERIES STACKS MODEL FOR DETECTION OF CROPLAND CHANGE
J. Chen
2017-09-01
Full Text Available Global, timely, accurate and cost-effective cropland monitoring at a fine spatial resolution will dramatically improve our understanding of the effects of agriculture on greenhouse gas emissions, food safety, and human health. Time-series remote sensing imagery has shown particular potential to describe land cover dynamics. Traditional change detection techniques are often not capable of detecting land cover changes within time series that are severely influenced by seasonal differences, and are therefore more likely to generate pseudo changes. Here, we introduce and test the LTSM (Landsat time-series stacks model), an improvement of the previously proposed Continuous Change Detection and Classification (CCDC) approach, to extract spectral trajectories of land surface change using dense Landsat time-series stacks (LTS). The method is expected to eliminate pseudo changes caused by phenology driven by seasonal patterns. The main idea of the method is that, using all available Landsat 8 images within a year, an LTSM consisting of a two-term harmonic function is estimated iteratively for each pixel in each spectral band. The LTSM defines change areas by differencing the predicted and observed Landsat images. The LTSM approach was compared with the change vector analysis (CVA) method. The results indicate that the LTSM method correctly detected the “true changes” without overestimating the “false” ones, while CVA identified “true change” pixels along with a large number of “false changes”. The detection of change areas achieved an overall accuracy of 92.37%, with a kappa coefficient of 0.676.
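The per-pixel core of such a model can be sketched as a two-term harmonic least-squares fit with residual-based change flagging. The threshold rule and the synthetic reflectance series below are assumptions for illustration, not the LTSM specifics:

```python
import numpy as np

def fit_harmonic(doy, y, period=365.25):
    """Least-squares fit of a two-term harmonic model (intercept plus
    annual and semi-annual sine/cosine terms) to one pixel's series."""
    w = 2 * np.pi * np.asarray(doy, dtype=float) / period
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w),
                         np.cos(2 * w), np.sin(2 * w)])
    c, *_ = np.linalg.lstsq(X, y, rcond=None)
    return c, X @ c

rng = np.random.default_rng(3)
doy = np.arange(0, 730, 16)             # ~16-day revisit over two years
y = 0.3 + 0.1 * np.sin(2 * np.pi * doy / 365.25) \
    + rng.normal(0, 0.01, doy.size)     # seasonal signal + noise
y[-1] += 0.2                            # an abrupt disturbance at the end

c, pred = fit_harmonic(doy, y)
resid = np.abs(y - pred)
thresh = 3 * 1.4826 * np.median(resid)  # robust 3-sigma via the MAD
flag = resid > thresh
print(bool(flag[-1]))  # True: the disturbance is flagged, phenology is not
```

Because the harmonic terms absorb the seasonal cycle, only departures from the fitted phenology exceed the threshold, which is the mechanism by which such models suppress the pseudo changes that trip up CVA-style differencing.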
Time series modeling of soil moisture dynamics on a steep mountainous hillside
Kim, Sanghyun
2016-05-01
The response of soil moisture to rainfall events along hillslope transects is an important hydrologic process and a critical component of interactions between soil vegetation and the atmosphere. In this context, the research described in this article addresses the spatial distribution of soil moisture as a function of topography. In order to characterize the temporal variation in soil moisture on a steep mountainous hillside, a transfer function, including a model for noise, was introduced. Soil moisture time series with similar rainfall amounts, but different wetness gradients were measured in the spring and fall. Water flux near the soil moisture sensors was modeled and mathematical expressions were developed to provide a basis for input-output modeling of rainfall and soil moisture using hydrological processes such as infiltration, exfiltration and downslope lateral flow. The characteristics of soil moisture response can be expressed in terms of model structure. A seasonal comparison of models reveals differences in soil moisture response to rainfall, possibly associated with eco-hydrological process and evapotranspiration. Modeling results along the hillslope indicate that the spatial structure of the soil moisture response patterns mainly appears in deeper layers. Similarities between topographic attributes and stochastic model structures are spatially organized. The impact of temporal and spatial discretization scales on parameter expression is addressed in the context of modeling results that link rainfall events and soil moisture.
Modeling Inter-Country Connection from Geotagged News Reports: A Time-Series Analysis
Yuan, Yihong
2016-01-01
The development of theories and techniques for big data analytics offers tremendous flexibility for investigating large-scale events and patterns that emerge over space and time. In this research, we utilize a unique open-access dataset, "The Global Data on Events, Location and Tone" (GDELT), to model the image of China in mass media, specifically how China has related to the rest of the world and how this connection has evolved over time, based on an autoregressive integrated moving average (ARIMA) model. The results of this research contribute both methodological and empirical perspectives: we examined the effectiveness of time series models in predicting trends in long-term mass media data, and we identified various types of connection strength patterns between China and its top 15 related countries. This study generates valuable input for interpreting China's diplomatic and regional relations based on mass media data, as well as providing methodological references for investigating international rel...
Non-stationary time series modeling on caterpillars pest of palm oil for early warning system
Setiyowati, Susi; Nugraha, Rida F.; Mukhaiyar, Utriweni
2015-12-01
Oil palm production has an important role in the plantation and economic sectors in Indonesia. One of the important problems in oil palm cultivation is pests, which damage the quality of the fruit. Caterpillar pests, which feed on palm leaves, cause a decline in the quality of palm oil production. An early warning system is needed to minimize losses due to this pest. Here, we applied non-stationary time series modeling, especially the family of autoregressive models, to predict the number of pests based on historical data. We observed some unique features of these pest data, i.e., spike values that occur almost periodically. Through simulations and a case study, we find that the selection of the constant factor has a significant influence on the model, enabling it to capture the spike values precisely.
Model for the heart beat-to-beat time series during meditation
Capurro, A.; Diambra, L.; Malta, C. P.
2003-09-01
We present a model for the respiratory modulation of the heart beat-to-beat interval series. The model consists of a pacemaker, that simulates the membrane potential of the sinoatrial node, modulated by a periodic input signal plus correlated noise that simulates the respiratory input. The model was used to assess the waveshape of the respiratory signals needed to reproduce in the phase space the trajectory of experimental heart beat-to-beat interval data. The data sets were recorded during meditation practices of the Chi and Kundalini Yoga techniques. Our study indicates that in the first case the respiratory signal has the shape of a smoothed square wave, and in the second case it has the shape of a smoothed triangular wave.
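A toy integrate-and-fire version of such a pacemaker model, with the respiratory drive supplied as a waveform function; the breathing period, modulation depth and noise level are assumptions for illustration, not the paper's fitted values:

```python
import numpy as np

def beat_intervals(resp_wave, n_beats=200, dt=0.001, base_rate=1.0, mod=0.3):
    """Integrate-and-fire 'sinoatrial node': the membrane variable v rises
    at a rate modulated by a respiratory signal plus noise; a beat fires
    when v crosses threshold 1, and v resets to 0."""
    rng = np.random.default_rng(4)
    intervals, v, t, last = [], 0.0, 0.0, 0.0
    while len(intervals) < n_beats:
        drive = base_rate * (1 + mod * resp_wave(t)) + 0.05 * rng.normal()
        v += drive * dt
        t += dt
        if v >= 1.0:
            intervals.append(t - last)
            last, v = t, 0.0
    return np.array(intervals)

# The two waveshapes the abstract contrasts, as idealized functions:
# a smoothed square wave (Chi) and a triangular wave (Kundalini Yoga),
# both with an assumed ~5 s breathing period.
square = lambda t: np.tanh(5 * np.sin(2 * np.pi * t / 5.0))
tri = lambda t: (2 / np.pi) * np.arcsin(np.sin(2 * np.pi * t / 5.0))

rr_sq = beat_intervals(square)
rr_tri = beat_intervals(tri)
print(len(rr_sq), len(rr_tri))  # 200 200
```

Plotting each RR series against its lagged copy (the phase-space view the paper uses) reveals how the two respiratory waveshapes imprint different trajectory shapes on the beat-to-beat intervals.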
Estimating and Analyzing Savannah Phenology with a Lagged Time Series Model
Boke-Olen, Niklas; Lehsten, Veiko; Ardo, Jonas
2016-01-01
Savannah regions are predicted to undergo changes in precipitation patterns according to current climate change projections. This change will affect leaf phenology, which controls net primary productivity. It is important to study this since savannahs play a significant role in the global carbon cycle due to their areal coverage, and can have an effect on food security in regions that depend on subsistence farming. In this study we investigate how soil moisture, mean annual precipitation, and day length control savannah phenology by developing a lagged time series model. The model uses climate data for 15 flux tower sites across four continents, and normalized difference vegetation index from satellite, to optimize a statistical phenological model. We show that all three variables can be used to estimate savannah phenology on a global scale. However, it was not possible to create...
Dynamics modeling for sugar cane sucrose estimation using time series satellite imagery
Zhao, Yu; Justina, Diego Della; Kazama, Yoriko; Rocha, Jansle Vieira; Graziano, Paulo Sergio; Lamparelli, Rubens Augusto Camargo
2016-10-01
Sugarcane, one of the mainstay crops in Brazil, plays an essential role in ethanol production. To monitor sugarcane crop growth and predict sucrose content, remote sensing technology is essential, as accurate and timely crop growth information is significant, particularly for large-scale farming. We focused on sugarcane sucrose content estimation using time-series satellite images. First, we calculated spectral features and vegetation indices so that they correspond to the biological mechanism of sucrose accumulation. Second, we improved the statistical regression model by considering additional factors. The evaluation achieved a precision of 90%, about 20% higher than the conventional method. The validation results show that the prediction accuracy using our sugarcane growth modeling and improved mixed model is satisfactory.
Development of New Loan Payment Models with Piecewise Geometric Gradient Series
Erdal Aydemir
2014-12-01
Engineering economics plays an important role in decision making, and cash flows, the time value of money, and interest rates are central topics in mathematical finance. Generalized formulae derived from existing models of the time value of money and cash flows are inadequate for some problems. In this study, a new generalized formula is derived, for the first time, from a loan payment model in which a certain number of payment amounts are set by the customer at the beginning of the payment period and the remaining repayments follow a piecewise linear gradient series. Numerical examples with solutions are given for the developed models.
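The paper's generalized formulae are not reproduced here, but the standard building block they extend, the present value of a geometric gradient payment series, can be sketched as follows (function names and the brute-force cross-check are illustrative, not from the paper):

```python
def pv_geometric_gradient(a1, g, i, n):
    """Present value of n payments growing geometrically,
    A_k = a1 * (1+g)**(k-1), discounted at rate i per period."""
    if abs(i - g) < 1e-12:            # degenerate case i == g
        return n * a1 / (1.0 + i)
    return a1 * (1.0 - ((1.0 + g) / (1.0 + i)) ** n) / (i - g)

def pv_by_summation(a1, g, i, n):
    """Brute-force check: discount each payment individually."""
    return sum(a1 * (1.0 + g) ** (k - 1) / (1.0 + i) ** k
               for k in range(1, n + 1))
```

The closed form follows from summing the geometric series of discounted payments; the summation version is only a sanity check.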
Modeling of ionosphere time series using wavelet neural networks (case study: N-W of Iran)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2016-07-01
Wavelet neural networks (WNNs) are important tools for analyzing time series, especially non-linear and non-stationary ones, as they combine the high resolution of wavelets with the feed-forward nature of neural networks (NNs). In this paper, WNNs are used to model ionosphere time series over Iran. Observations collected at 22 GPS stations over 12 successive days of 2012 (DOY 219-230) from the Azerbaijan local GPS network are used. The WNN is trained with the back-propagation (BP) algorithm. The results are compared with the International Reference Ionosphere 2012 (IRI-2012) and International GNSS Service (IGS) products, using relative and absolute errors as statistical indicators. The minimum relative error of the WNN with respect to GPS TEC is 6.37% and the maximum is 12.94%; the maximum and minimum absolute errors are 6.32 and 0.13 TECU, respectively. Comparison of diurnal predicted TEC values from the WNN model and the IRI-2012 with GPS TEC revealed that the WNN provides more accurate predictions than the IRI-2012 model and IGS products in the test area.
Voyant, Cyril; Muselli, Marc; Paoli, Christophe; Nivet, Marie Laure
2014-01-01
When a territory is poorly instrumented, geostationary satellite data can be used to predict global solar radiation. In this paper, we use geostationary satellite data to generate 2-D time series of solar radiation for the next hour. The results presented relate to a particular territory, Corsica Island, but as the data used are available for the entire surface of the globe, our method can easily be applied elsewhere. 2-D hourly time series are extracted from the HelioClim-3 surface solar irradiation database treated by the Heliosat-2 model. Each point of the map has been used as training data and input for artificial neural networks (ANN), and as input for two persistence models (scaled or not). Comparisons between these models and clear-sky estimations were carried out to evaluate performance. We found a normalized root mean square error (nRMSE) close to 16.5% for the two best predictors (scaled persistence and ANN), equivalent to 35-45% relative to ground measurements. F...
Osborne, Melissa; Gomez, Daniel; Feng, Zhihua; McEwen, Corissa; Beltran, Jose; Cirillo, Kim; El-Khodor, Bassem; Lin, Ming-Yi; Li, Yun; Knowlton, Wendy M; McKemy, David D; Bogdanik, Laurent; Butts-Dehm, Katherine; Martens, Kimberly; Davis, Crystal; Doty, Rosalinda; Wardwell, Keegan; Ghavami, Afshin; Kobayashi, Dione; Ko, Chien-Ping; Ramboz, Sylvie; Lutz, Cathleen
2012-10-15
A number of mouse models for spinal muscular atrophy (SMA) have been genetically engineered to recapitulate the severity of human SMA by using a targeted null mutation at the mouse Smn1 locus coupled with the transgenic addition of varying copy numbers of human SMN2 genes. Although this approach has been useful in modeling severe SMA and very mild SMA, a mouse model of the intermediate form of the disease would provide an additional research tool amenable for drug discovery. In addition, many of the previously engineered SMA strains are multi-allelic by design, containing a combination of transgenes and targeted mutations in the homozygous state, making further genetic manipulation difficult. A new genetic engineering approach was developed whereby variable numbers of SMN2 sequences were incorporated directly into the murine Smn1 locus. Using combinations of these alleles, we generated an allelic series of SMA mouse strains harboring no, one, two, three, four, five, six or eight copies of SMN2. We report here the characterization of SMA mutants in this series that displayed a range in disease severity from embryonic lethal to viable with mild neuromuscular deficits.
Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting
Zhang, Ningning; Lin, Aijing; Shang, Pengjian
2017-07-01
In this paper, we propose a new two-stage methodology that combines ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) to forecast the closing price and high price of stocks simultaneously. Modified k-nearest neighbors (KNN) algorithms are increasingly widely applied to prediction across many fields. Empirical mode decomposition (EMD) decomposes a nonlinear, non-stationary signal into a series of intrinsic mode functions (IMFs); however, it cannot reveal the characteristic information of the signal with much accuracy because of mode mixing. Ensemble empirical mode decomposition (EEMD), an improved variant of EMD, resolves this weakness by adding white noise to the original data, so that components with true physical meaning can be extracted from the time series. Exploiting the advantages of EEMD and MKNN, the proposed EEMD-MKNN model has high predictive precision for short-term forecasting. Moreover, we extend the methodology to two dimensions to forecast the closing price and high price of four stocks (NAS, S&P500, DJI and STI stock indices) at the same time. The results indicate that the proposed EEMD-MKNN model has higher forecast precision than EMD-KNN, the KNN method and ARIMA.
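As a rough sketch of the nearest-neighbor half of such a pipeline (without the EEMD stage, and with illustrative parameter names), a one-step KNN forecaster on delay-embedded vectors might look like:

```python
import numpy as np

def knn_forecast(series, m=3, k=5):
    """One-step-ahead forecast by k-nearest neighbors on delay vectors.
    m: embedding dimension, k: number of neighbors."""
    x = np.asarray(series, dtype=float)
    # Build a library of (delay vector, next value) pairs.
    vecs = np.array([x[i:i + m] for i in range(len(x) - m)])
    nxt = x[m:]
    query = x[-m:]                      # most recent m observations
    d = np.linalg.norm(vecs - query, axis=1)
    idx = np.argsort(d)[:k]             # k closest historical patterns
    return nxt[idx].mean()              # average of their successors
```

On a purely periodic series the nearest patterns repeat exactly, so the forecast reproduces the cycle; the EEMD stage in the paper would first split the raw series into smoother components before such a forecaster is applied.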
A Score Type Test for General Autoregressive Models in Time Series
Jian-hong Wu; Li-xing Zhu
2007-01-01
This paper is devoted to the goodness-of-fit test for general autoregressive models in time series. By averaging the weighted residuals, we construct a score type test which is asymptotically standard chi-squared under the null and has desirable power properties under the alternatives. Specifically, the test is sensitive to alternatives and can detect alternatives approaching the null, along a direction, at a rate arbitrarily close to n^(-1/2). Furthermore, when the alternatives are not directional, we construct asymptotically distribution-free maximin tests for a large class of alternatives. The performance of the tests is evaluated through simulation studies.
Multi-horizon solar radiation forecasting for Mediterranean locations using time series models
Voyant, Cyril; Paoli, Christophe; Muselli, Marc; Nivet, Marie Laure
2013-01-01
From the grid manager's point of view, prediction needs for an intermittent energy resource like photovoltaics can be distinguished by horizon: the following days (d+1, d+2 and d+3), the next day by hourly step (h+24), the next hour (h+1) and the next few minutes (m+5, e.g.). Through this work, we have identified time-series-model methodologies for these prediction horizons of global radiation and photovoltaic power. What we present here i...
Multivariate time series modeling of short-term system scale irrigation demand
Perera, Kushan C.; Western, Andrew W.; George, Biju; Nawarathna, Bandara
2015-12-01
Travel time limits the ability of irrigation system operators to react to short-term irrigation demand fluctuations that result from variations in weather, including very hot periods and rainfall events, as well as the various other pressures and opportunities that farmers face. Short-term system-wide irrigation demand forecasts can assist in system operation. Here we developed multivariate time series (ARMAX) models to forecast irrigation demand in terms of aggregated service point flows (IDCGi, ASP) and off-take regulator flows (IDCGi, OTR) across 5 command areas, covering four irrigation channels and the study area. These command-area-specific ARMAX models forecast daily IDCGi, ASP and IDCGi, OTR 1-5 days ahead using real-time flow data recorded at the service points and the uppermost regulators, together with observed meteorological data from automatic weather stations. Model efficiency and predictive performance were quantified using the root mean squared error (RMSE), Nash-Sutcliffe model efficiency coefficient (NSE), anomaly correlation coefficient (ACC) and mean square skill score (MSSS). During the evaluation period, NSE for IDCGi, ASP and IDCGi, OTR across the 5 command areas ranged from 0.78 to 0.98. The models generated skillful forecasts (MSSS ⩾ 0.5 and ACC ⩾ 0.6) of IDCGi, ASP and IDCGi, OTR for all 5 lead days, and these forecasts were better than using the long-term monthly mean irrigation demand. Overall the predictive performance of these ARMAX models was higher than in almost all previous studies we are aware of. Further, the forecasts improved operators' ability to react to near-future demand fluctuations, as the ARMAX models are self-adaptive to short-term changes in irrigation demand arising from the various pressures and opportunities that farmers face, such as
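A full ARMAX estimator is beyond a short sketch, but the autoregressive-with-exogenous-input (ARX) core can be illustrated with ordinary least squares in NumPy. All names are illustrative, and the paper's models also include moving-average terms, which this sketch omits:

```python
import numpy as np

def fit_arx(y, x, p=2):
    """Fit y_t = c + sum_j a_j * y_{t-j} + b * x_t by ordinary least squares.
    A pared-down stand-in for a full ARMAX estimator (no MA terms)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    rows = [[1.0] + [y[t - j] for j in range(1, p + 1)] + [x[t]]
            for t in range(p, len(y))]
    A, b = np.array(rows), y[p:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef                        # [c, a_1..a_p, b_x]

def forecast_arx(coef, y_hist, x_next, p=2):
    """One-step forecast from the last p observations and the next
    exogenous value (e.g. a weather forecast)."""
    lags = [y_hist[-j] for j in range(1, p + 1)]
    return coef[0] + np.dot(coef[1:1 + p], lags) + coef[-1] * x_next
```

Multi-day-ahead forecasts would iterate `forecast_arx`, feeding each prediction back in as the newest lag.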
Modeling commodity salam contract between two parties for discrete and continuous time series
Hisham, Azie Farhani Badrol; Jaffar, Maheran Mohd
2017-08-01
For Islamic finance to remain competitive with conventional finance, new syariah-compliant products such as Islamic derivatives are needed to manage risk. Under syariah principles and regulations, all financial instruments must avoid five prohibited elements: riba (interest paid), rishwah (corruption), gharar (uncertainty or unnecessary risk), maysir (speculation or gambling) and jahl (taking advantage of the counterparty's ignorance). This study proposes that the traditional Islamic contract known as salam can be built into an Islamic derivative product. Although many studies have discussed and proposed the implementation of the salam contract as an Islamic product, they are mostly qualitative and concerned with legal issues. Given the lack of quantitative work on the salam contract, this study introduces mathematical models that can value the appropriate salam price for a commodity salam contract between two parties. In modeling the contract, the existing conventional derivative model is modified with adjustments to comply with syariah rules and regulations. The cost-of-carry model is chosen as the foundation for developing the commodity salam model between two parties for discrete and continuous time series. However, the conventional time value of money stems from the concept of interest, which is prohibited in Islam; this study therefore adopts the Islamic time value of money, known as positive time preference, in modeling the commodity salam contract for discrete and continuous time series.
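The conventional cost-of-carry model the authors take as their foundation prices a forward as the spot compounded at the carrying rate, in either discrete or continuous time. A minimal sketch of that conventional starting point (before any of the syariah-motivated adjustments the paper develops; function names are illustrative) is:

```python
import math

def forward_price_discrete(spot, r, n):
    """Discrete-compounding cost-of-carry price over n periods:
    F = S * (1 + r)**n."""
    return spot * (1.0 + r) ** n

def forward_price_continuous(spot, r, t):
    """Continuous-compounding cost-of-carry price over t years:
    F = S * exp(r * t)."""
    return spot * math.exp(r * t)
```

The paper's contribution is to replace the interest-based rate r in these formulas with a positive-time-preference rate consistent with the Islamic time value of money.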
Lukas Falat
2014-01-01
In this paper, the authors apply a feed-forward artificial neural network (ANN) of RBF type to modelling and forecasting the future value of the USD/CAD time series. They test a customized version of the RBF network and add an evolutionary approach to it. They also combine the standard weight-adaptation algorithm with an unsupervised clustering algorithm, K-means. Finally, they suggest a new hybrid model combining a standard ANN with a moving average for error modeling, used to enhance the network's outputs via the error part of the original RBF. Using high-frequency data, they examine the ability to forecast exchange rate values over a one-day horizon. To determine forecasting efficiency, the authors perform a comparative out-of-sample analysis of the suggested hybrid model against statistical models and a standard neural network.
Modeling time-series count data: the unique challenges facing political communication studies.
Fogarty, Brian J; Monogan, James E
2014-05-01
This paper demonstrates the importance of proper model specification when analyzing time-series count data in political communication studies. It is common for scholars of media and politics to investigate counts of coverage of an issue as it evolves over time. Many scholars rightly consider the issues of time dependence and dynamic causality to be the most important when crafting a model. However, to ignore the count features of the outcome variable overlooks an important feature of the data. This is particularly the case when modeling data with a low number of counts. In this paper, we argue that the Poisson autoregressive model (Brandt and Williams, 2001) accurately meets the needs of many media studies. We replicate the analyses of Flemming et al. (1997), Peake and Eshbaugh-Soha (2008), and Ura (2009) and demonstrate that models missing some of the assumptions of the Poisson autoregressive model often yield invalid inferences. We also demonstrate that the effect of any of these models can be illustrated dynamically with estimates of uncertainty through a simulation procedure. The paper concludes with implications of these findings for the practical researcher.
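The count feature the paper emphasizes can be illustrated by simulating a simple count process whose Poisson intensity depends on the previous count. This is an INGARCH-style sketch of the general idea, not the Brandt and Williams (2001) Poisson autoregressive estimator itself, and all parameter values are illustrative:

```python
import numpy as np

def simulate_poisson_ar(n, omega=0.5, alpha=0.5, seed=0):
    """Simulate a Poisson count process with autoregressive intensity:
    y_t ~ Poisson(lambda_t), lambda_t = omega + alpha * y_{t-1}.
    Stationary mean is omega / (1 - alpha) for 0 <= alpha < 1."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n, dtype=int)
    for t in range(1, n):
        lam = omega + alpha * y[t - 1]
        y[t] = rng.poisson(lam)
    return y
```

Fitting a Gaussian linear model to such data ignores that the outcome is a non-negative integer with variance tied to its mean, which is exactly the misspecification the paper warns against for low-count media coverage series.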
YE Liming; YANG Guixia; Eric VAN RANST; TANG Huajun
2013-01-01
A generalized, structural, time series modeling framework was developed to analyze the monthly records of absolute surface temperature, one of the most important environmental parameters, using a deterministic-stochastic combined (DSC) approach. Although the development of the framework was based on the characterization of the variation patterns of a global dataset, the methodology could be applied to any monthly absolute temperature record. Deterministic processes were used to characterize the variation patterns of the global trend and the cyclic oscillations of the temperature signal, involving polynomial functions and the Fourier method, respectively, while stochastic processes were employed to account for any remaining patterns in the temperature signal, involving seasonal autoregressive integrated moving average (SARIMA) models. A prediction of the monthly global surface temperature during the second decade of the 21st century using the DSC model shows that the global temperature will likely continue to rise at twice the average rate of the past 150 years. The evaluation of prediction accuracy shows that DSC models perform systematically well against selected models of other authors, suggesting that DSC models, when coupled with other eco-environmental models, can be used as a supplemental tool for short-term (~10-year) environmental planning and decision making.
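The deterministic half of such a decomposition, a polynomial trend plus Fourier seasonal harmonics fitted by least squares, can be sketched as follows (parameter names are illustrative; the stochastic SARIMA stage would then model the residual y - fit):

```python
import numpy as np

def fit_trend_plus_seasonal(y, period=12, degree=2, harmonics=2):
    """Least-squares fit of a polynomial trend plus Fourier seasonal
    terms: the deterministic part of a trend/cycle/residual split."""
    y = np.asarray(y, float)
    t = np.arange(len(y), dtype=float)
    cols = [t ** d for d in range(degree + 1)]          # 1, t, t^2, ...
    for h in range(1, harmonics + 1):                   # seasonal harmonics
        cols.append(np.sin(2 * np.pi * h * t / period))
        cols.append(np.cos(2 * np.pi * h * t / period))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta                                     # fitted deterministic part
```

When the underlying signal really is a polynomial plus harmonics at the given period, the fit reproduces it almost exactly; on real temperature records the residual carries the remaining stochastic structure.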
Keller, D. E.; Fischer, A. M.; Frei, C.; Liniger, M. A.; Appenzeller, C.; Knutti, R.
2014-07-01
Many climate impact assessments over topographically complex terrain require high-resolution precipitation time-series that have a spatio-temporal correlation structure consistent with observations. This consistency is essential for spatially distributed modelling of processes with non-linear responses to precipitation input (e.g. soil water and river runoff modelling). In this regard, weather generators (WGs) designed and calibrated for multiple sites are an appealing technique to stochastically simulate time-series that approximate the observed temporal and spatial dependencies. In this study, we present a stochastic multi-site precipitation generator and validate it over the hydrological catchment Thur in the Swiss Alps. The model consists of several Richardson-type WGs that are run with correlated random number streams reflecting the observed correlation structure among all possible station pairs. A first-order two-state Markov process simulates the intermittence of daily precipitation, while precipitation amounts are simulated from a mixture model of two exponential distributions. The model is calibrated separately for each month over the period 1961-2011. The WG is skilful at individual sites in representing the annual cycle of the precipitation statistics, such as mean wet day frequency and intensity as well as monthly precipitation sums. It realistically reproduces multi-day statistics such as the frequencies of dry and wet spell lengths and precipitation sums over consecutive wet days. Substantial added value is demonstrated in simulating daily areal precipitation sums in comparison to multiple WGs that lack the spatial dependency in the stochastic process: the multi-site WG is capable of capturing about 95% of the observed variability in daily area sums, while the summed time-series from multiple single-site WGs only explains about 13%. Limitations of the WG have been detected in reproducing the observed variability from year to year, a component that has
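The single-site core of such a Richardson-type generator, a first-order two-state Markov chain for occurrence plus a two-component exponential mixture for wet-day amounts, can be sketched as follows (all parameter values are illustrative, not the paper's calibrated ones; the multi-site extension would drive several such generators with correlated random streams):

```python
import numpy as np

def simulate_precip(n_days, p_wd=0.3, p_ww=0.6,
                    w=0.7, mean1=2.0, mean2=15.0, seed=1):
    """Single-site Richardson-type generator.
    Occurrence: two-state Markov chain, p_wd = P(wet|dry), p_ww = P(wet|wet).
    Amounts: mixture of two exponentials, weight w on the light-rain
    component (mean1), 1-w on the heavy-rain component (mean2)."""
    rng = np.random.default_rng(seed)
    wet = False
    amounts = np.zeros(n_days)
    for d in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        if wet:
            mean = mean1 if rng.random() < w else mean2
            amounts[d] = rng.exponential(mean)
    return amounts
```

With these transition probabilities the stationary wet-day frequency is p_wd / (1 + p_wd - p_ww), and the mean wet-day amount is w*mean1 + (1-w)*mean2, which long simulations should approach.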
Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe
2017-09-01
We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded on the identification of the coefficients of an autoregressive model, on the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and on the assessment of its distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of the cardiac autonomic control, namely the low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded on information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrement of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and the physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
Olsen, Seth
2012-01-01
We propose a single effective Hamiltonian to describe the low-energy electronic structure of a series of symmetric cationic diarylmethanes, which are all bridge-substituted derivatives of Michler's Hydrol Blue. Three-state diabatic Hamiltonians for the dyes are calculated using four-electron three-orbital state-averaged complete active space self-consistent field and multi-state multi-reference perturbation theory models. The approach takes advantage of an isolobal analogy that can be established between the orbitals spanning the active spaces of the different substituted dyes. The solutions of the chemical problem are expressed in a diabatic Hilbert space that is analogous to classical resonance models. The effective Hamiltonians for all dyes can be fit to a single functional form that depends on the mixing angle between a bridge-charged diabatic state and a superposition representing the canonical resonance. We find that the structure of the bridge-charged state changes in a regular fashion across the serie...
Time Series Stream Temperature And Dissolved Oxygen Modeling In The Lower Flint River Basin
Li, G.; Jackson, C. R.
2004-12-01
The tributaries of the Lower Flint River Basin (LFRB) are incised into the upper Floridan semi-confined limestone aquifer, and thus seepage of relatively old groundwater sustains baseflows and provides some control over temperature and dissolved oxygen fluctuations. This hydrologic and geologic setting creates aquatic habitat that is unique in the state of Georgia. Groundwater withdrawals and possible water supply reservoirs threaten to exacerbate low flow conditions during summer droughts, which may negatively impact stream temperature and dissolved oxygen (DO). To evaluate the possible effects of human modifications to stream habitat, summer time series (at 15-min intervals) of stream temperature and DO were monitored over the last three years along these streams, and a Continuously Stirred Tank Reactor (CSTR) model was developed and calibrated with these data. The driving forces of the diel trends and the overall levels of stream temperature and DO were identified by this model. Simulations were conducted with assumed managed flow conditions to illustrate potential effects of various stream flow regimes on stream temperature and DO time series. The goal of this research is to provide an accurate simulation tool to guide management decisions.
Linear genetic programming for time-series modelling of daily flow rate
Aytac Guven
2009-04-01
In this study linear genetic programming (LGP), which is a variant of genetic programming, and two versions of neural networks (NNs) are used in predicting time series of daily flow rates at a station on the Schuylkill River at Berne, PA, USA. The present daily flow rate is predicted based on different time-series scenarios. For this purpose, various LGP and NN models are calibrated with training sets and validated on testing sets. Additionally, the robustness of the proposed LGP and NN models is evaluated on application data, which are used neither in training nor at the testing stage. The results showed that both techniques predicted the flow rate data in quite good agreement with the observed ones, and the predictions of LGP and NN are competitive. The performance of LGP, which was moderately better than NN, is very promising and hence supports the use of LGP in predicting river flow data.
Stochastic modeling for time series InSAR: with emphasis on atmospheric effects
Cao, Yunmeng; Li, Zhiwei; Wei, Jianchao; Hu, Jun; Duan, Meng; Feng, Guangcai
2017-08-01
Despite the many applications of time series interferometric synthetic aperture radar (TS-InSAR) techniques in geophysical problems, error analysis and assessment have been largely overlooked. Tropospheric propagation error is still the dominant error source of InSAR observations. However, the spatiotemporal variation of atmospheric effects is seldom considered in the present standard TS-InSAR techniques, such as persistent scatterer interferometry and small baseline subset interferometry. The failure to consider the stochastic properties of atmospheric effects not only affects the accuracy of the estimators, but also makes it difficult to assess the uncertainty of the final geophysical results. To address this issue, this paper proposes a network-based variance-covariance estimation method to model the spatiotemporal variation of tropospheric signals, and to estimate the temporal variance-covariance matrix of TS-InSAR observations. The constructed stochastic model is then incorporated into the TS-InSAR estimators both for parameters (e.g., deformation velocity, topography residual) estimation and uncertainty assessment. It is an incremental and positive improvement to the traditional weighted least squares methods to solve the multitemporal InSAR time series. The performance of the proposed method is validated by using both simulated and real datasets.
Bodmer, James E; English, Anthony; Brady, Megan; Blackwell, Ken; Haxhinasto, Kari; Fotedar, Sunaina; Borgman, Kurt; Bai, Er-Wei; Moy, Alan B
2005-09-01
Transendothelial impedance across an endothelial monolayer grown on a microelectrode has previously been modeled as a repeating pattern of disks in which the electrical circuit consists of a resistor and capacitor in series. Although this numerical model breaks down barrier function into measurements of cell-cell adhesion, cell-matrix adhesion, and membrane capacitance, such solution parameters can be inaccurate without understanding model stability and error. In this study, we have evaluated modeling stability and error by using a chi(2) evaluation and Levenberg-Marquardt nonlinear least-squares (LM-NLS) method of the real and/or imaginary data in which the experimental measurement is compared with the calculated measurement derived by the model. Modeling stability and error were dependent on current frequency and the type of experimental data modeled. Solution parameters of cell-matrix adhesion were most susceptible to modeling instability. Furthermore, the LM-NLS method displayed frequency-dependent instability of the solution parameters, regardless of whether the real or imaginary data were analyzed. However, the LM-NLS method identified stable and reproducible solution parameters between all types of experimental data when a defined frequency spectrum of the entire data set was selected on the basis of a criterion of minimizing error. The frequency bandwidth that produced stable solution parameters varied greatly among different data types. Thus a numerical model based on characterizing transendothelial impedance as a resistor and capacitor in series and as a repeating pattern of disks is not sufficient to characterize the entire frequency spectrum of experimental transendothelial impedance.
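The underlying circuit model in this work, a resistor and capacitor in series, has the closed-form impedance Z(f) = R + 1/(j·2πf·C). A minimal sketch of that forward model, the function one would fit with a Levenberg-Marquardt routine (names are illustrative):

```python
import numpy as np

def rc_series_impedance(freq_hz, r, c):
    """Complex impedance of a resistor and capacitor in series:
    Z(f) = R + 1 / (j * 2*pi*f * C)."""
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    return r + 1.0 / (1j * omega * c)
```

At low frequency the capacitive reactance dominates and at high frequency Z approaches R, which is why, as the abstract notes, the stability of fitted parameters depends strongly on which part of the frequency spectrum is included.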
U.S. Geological Survey, Department of the Interior — Abstract: This data release presents modeled time series of nearshore waves along the southern California coast, from Point Conception to the Mexican border,...
Accurate estimation of energy expenditure (EE) in children and adolescents is required for a better understanding of physiological, behavioral, and environmental factors affecting energy balance. Cross-sectional time series (CSTS) models, which account for correlation structure of repeated observati...
Cooling load calculation by the radiant time series method - effect of solar radiation models
Costa, Alexandre M.S. [Universidade Estadual de Maringa (UEM), PR (Brazil)], E-mail: amscosta@uem.br
2010-07-01
In this work the effect of three different solar radiation models on the cooling load calculated by the radiant time series method was analyzed numerically. The solar radiation models implemented were clear sky, isotropic sky and anisotropic sky. The radiant time series (RTS) method was proposed by ASHRAE (2001) to replace classical cooling load calculation methods such as TETD/TA. The method is based on computing the effect of the space's thermal energy storage on the instantaneous cooling load. This is carried out by splitting the heat gain components into convective and radiant parts; the radiant part is then transformed using a time series whose coefficients are a function of the construction type and of the heat gain (solar or non-solar). The transformed result is added to the convective part, giving the instantaneous cooling load. The method was applied to an example room. The location used was 23°S, 51°W and the day was 21 January, a typical summer day in the southern hemisphere. The room was composed of two vertical walls with windows exposed to outdoors, with azimuths facing west and east. The output of the different solar radiation models for the two walls, in terms of direct and diffuse components as well as heat gains, was investigated. The clear-sky model was the least conservative (highest values) for the direct component of solar radiation, with the opposite trend for the diffuse component. For the heat gain, the clear-sky model gives the highest values, three times higher at the peak hours than the other models. The isotropic and anisotropic models predicted similar magnitudes for the heat gain. The same behavior was also verified for the cooling load. The effect of room thermal inertia was to decrease the cooling load during the peak hours; on the other hand, the higher the thermal inertia, the greater the load during the non-peak hours. The effect
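The core of the RTS method, convolving the hourly radiant heat gain with 24 radiant time factors over a periodic design day and adding the convective part, can be sketched as follows (the uniform coefficients in the test are illustrative placeholders, not ASHRAE tabulated values):

```python
import numpy as np

def rts_cooling_load(radiant_gain, convective_gain, rts_coeffs):
    """Hourly cooling load via the radiant time series method: the
    radiant gain is convolved with radiant time factors (summing to 1)
    over a periodic 24-h design day, then added to the convective gain."""
    r = np.asarray(rts_coeffs, float)
    q_rad = np.asarray(radiant_gain, float)
    n = len(q_rad)
    load = np.empty(n)
    for t in range(n):
        # wrap around the 24-hour design day
        load[t] = sum(r[k] * q_rad[(t - k) % n] for k in range(len(r)))
    return load + np.asarray(convective_gain, float)
```

Because the coefficients sum to one, the convolution redistributes the radiant gain in time without creating or destroying energy over the design day, which is how thermal inertia shifts load from peak to non-peak hours.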
A Long-Term Prediction Model of Beijing Haze Episodes Using Time Series Analysis
Yang, Xiaoping; Zhang, Zhongxia; Zhang, Zhongqiu; Sun, Liren; Xu, Cui; Yu, Li
2016-01-01
The rapid industrial development has led to the intermittent outbreak of pm2.5 or haze in developing countries, which has brought about great environmental issues, especially in big cities such as Beijing and New Delhi. We investigated the factors and mechanisms of haze change and present a long-term prediction model of Beijing haze episodes using time series analysis. We construct a dynamic structural measurement model of daily haze increment and reduce the model to a vector autoregressive model. Typical case studies on 886 continuous days indicate that our model performs very well on next day's Air Quality Index (AQI) prediction, and in severely polluted cases (AQI ≥ 300) the accuracy rate of AQI prediction even reaches up to 87.8%. The experiment of one-week prediction shows that our model has excellent sensitivity when a sudden haze burst or dissipation happens, which results in good long-term stability on the accuracy of the next 3–7 days' AQI prediction. PMID:27597861
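The vector autoregressive reduction described above can be illustrated with a minimal VAR(1) fitted by equation-wise least squares (a generic sketch, not the paper's calibrated haze model; all names are illustrative):

```python
import numpy as np

def fit_var1(Y):
    """Fit a first-order vector autoregression Y_t = c + A @ Y_{t-1}
    by least squares. Y has shape (T, k)."""
    Y = np.asarray(Y, float)
    X = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])   # intercept + lag
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B[0], B[1:].T                                  # c (k,), A (k, k)

def forecast_var1(c, A, y_last, steps=7):
    """Iterate the fitted VAR(1) forward, e.g. for a 3-7 day outlook."""
    out, y = [], np.asarray(y_last, float)
    for _ in range(steps):
        y = c + A @ y
        out.append(y)
    return np.array(out)
```

Iterating the fitted system forward is what makes multi-day outlooks possible, at the cost of compounding one-step errors as the horizon grows.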
A Long-Term Prediction Model of Beijing Haze Episodes Using Time Series Analysis
Xiaoping Yang
2016-01-01
Full Text Available The rapid industrial development has led to the intermittent outbreak of pm2.5 or haze in developing countries, which has brought about great environmental issues, especially in big cities such as Beijing and New Delhi. We investigated the factors and mechanisms of haze change and present a long-term prediction model of Beijing haze episodes using time series analysis. We construct a dynamic structural measurement model of daily haze increment and reduce the model to a vector autoregressive model. Typical case studies on 886 continuous days indicate that our model performs very well on next day’s Air Quality Index (AQI prediction, and in severely polluted cases (AQI ≥ 300 the accuracy rate of AQI prediction even reaches up to 87.8%. The experiment of one-week prediction shows that our model has excellent sensitivity when a sudden haze burst or dissipation happens, which results in good long-term stability on the accuracy of the next 3–7 days’ AQI prediction.
A Long-Term Prediction Model of Beijing Haze Episodes Using Time Series Analysis.
Yang, Xiaoping; Zhang, Zhongxia; Zhang, Zhongqiu; Sun, Liren; Xu, Cui; Yu, Li
2016-01-01
The rapid industrial development has led to the intermittent outbreak of pm2.5 or haze in developing countries, which has brought about great environmental issues, especially in big cities such as Beijing and New Delhi. We investigated the factors and mechanisms of haze change and present a long-term prediction model of Beijing haze episodes using time series analysis. We construct a dynamic structural measurement model of daily haze increment and reduce the model to a vector autoregressive model. Typical case studies on 886 continuous days indicate that our model performs very well on next day's Air Quality Index (AQI) prediction, and in severely polluted cases (AQI ≥ 300) the accuracy rate of AQI prediction even reaches up to 87.8%. The experiment of one-week prediction shows that our model has excellent sensitivity when a sudden haze burst or dissipation happens, which results in good long-term stability on the accuracy of the next 3-7 days' AQI prediction.
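The record above reduces a structural model to a vector autoregression for next-day AQI forecasting. As an illustration only (the authors' model and data are not reproduced here), the one-step-ahead idea can be sketched with a scalar AR(1) fitted by ordinary least squares; the coefficients and the toy series are assumptions:

```python
# Minimal sketch, not the authors' code: one-day-ahead forecasting with a
# scalar AR(1) model y[t] = a*y[t-1] + b fitted by ordinary least squares.

def fit_ar1(series):
    """Estimate (a, b) of y[t] = a * y[t-1] + b by least squares."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    a = cov / var
    return a, my - a * mx

def forecast(series, horizon=1):
    """Iterate the fitted AR(1) to predict the next `horizon` values."""
    a, b = fit_ar1(series)
    out, last = [], series[-1]
    for _ in range(horizon):
        last = a * last + b
        out.append(last)
    return out

# Toy daily AQI-like series relaxing from a severe episode toward a mean of 100.
aqi = [300.0]
for _ in range(60):
    aqi.append(0.8 * aqi[-1] + 20.0)

print(round(forecast(aqi, horizon=1)[0], 2))
```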
Scaling symmetry, renormalization, and time series modeling: the case of financial assets dynamics.
Zamparo, Marco; Baldovin, Fulvio; Caraglio, Michele; Stella, Attilio L
2013-12-01
We present and discuss a stochastic model of financial assets dynamics based on the idea of an inverse renormalization group strategy. With this strategy we construct the multivariate distributions of elementary returns based on the scaling with time of the probability density of their aggregates. In its simplest version the model is the product of an endogenous autoregressive component and a random rescaling factor designed to embody also exogenous influences. Mathematical properties like increments' stationarity and ergodicity can be proven. Thanks to the relatively low number of parameters, model calibration can be conveniently based on a method of moments, as exemplified in the case of historical data of the S&P500 index. The calibrated model accounts very well for many stylized facts, like volatility clustering, power-law decay of the volatility autocorrelation function, and multiscaling with time of the aggregated return distribution. In agreement with empirical evidence in finance, the dynamics is not invariant under time reversal, and, with suitable generalizations, skewness of the return distribution and leverage effects can be included. The analytical tractability of the model opens interesting perspectives for applications, for instance, in terms of obtaining closed formulas for derivative pricing. Further important features are the possibility of making contact, in certain limits, with autoregressive models widely used in finance and the possibility of partially resolving the long- and short-memory components of the volatility, with consistent results when applied to historical series.
Estimating and Analyzing Savannah Phenology with a Lagged Time Series Model.
Niklas Boke-Olén
Savannah regions are predicted to undergo changes in precipitation patterns under current climate change projections. This change will affect leaf phenology, which controls net primary productivity. Studying this is important because savannahs play a major role in the global carbon cycle due to their areal coverage, and because they can affect food security in regions that depend on subsistence farming. In this study we investigate how soil moisture, mean annual precipitation, and day length control savannah phenology by developing a lagged time series model. The model uses climate data from 15 flux tower sites across four continents, together with the satellite-derived normalized difference vegetation index, to optimize a statistical phenological model. We show that all three variables can be used to estimate savannah phenology on a global scale. However, it was not possible to create a simplified savannah model that works equally well for all sites on the global scale without including more site-specific parameters. The simplified model showed no bias towards tree cover or between continents and resulted in a cross-validated r2 of 0.6 and a root mean squared error of 0.1. We therefore expect similar average results when applying the model to other savannah areas, and further expect that it could be used to estimate the productivity of savannah regions.
Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine
2017-06-01
One of the most hazardous physical polluting agents, considering its effects on human health, is acoustical noise. Airports are a strong source of noise, due to the airplanes' turbines, the aerodynamic noise of transits, acceleration and braking during take-off and landing, road traffic around the airport, etc. Monitoring and predicting the acoustical levels emitted by airports can be very useful for assessing the impact on human health and activities. In the airport noise scenario, thanks to flight scheduling, the predominant sources may have a periodic behaviour. Thus, a time series analysis approach can be adopted, since a general trend and a seasonal behaviour can be identified and used to build a predictive model. In this paper, two different approaches are adopted and two predictive models are constructed and tested. The first model is based on deterministic decomposition and is built by composing the trend, i.e. the long-term behaviour, the seasonality, i.e. the periodic component, and the random variations. The second model is based on a seasonal autoregressive moving average, and belongs to the stochastic class of models. The two models are fitted to an acoustical level dataset collected close to the Nice (France) international airport. Results are encouraging and show good prediction performance for both strategies. A residual analysis is performed in order to quantify the characteristics of the forecasting error.
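The deterministic decomposition described above (trend + seasonality + random variations) can be sketched as follows; the weekly period and the synthetic "daily noise level" series are assumptions standing in for the Nice airport data, which is not public here:

```python
# Illustrative sketch of deterministic decomposition for forecasting:
# linear trend + seasonal means, extrapolated to a future time index.

def decompose(series, period):
    """Return (trend, seasonal_means, residuals) of an additive model."""
    n = len(series)
    t_mean = (n - 1) / 2.0
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den                       # least-squares linear trend
    intercept = y_mean - slope * t_mean
    trend = [intercept + slope * t for t in range(n)]
    detrended = [y - tr for y, tr in zip(series, trend)]
    seasonal = [sum(detrended[k::period]) / len(detrended[k::period])
                for k in range(period)]     # mean per position in the cycle
    resid = [d - seasonal[t % period] for t, d in enumerate(detrended)]
    return trend, seasonal, resid

def forecast(series, period, t):
    """Extrapolate trend + seasonality to (future) time index t."""
    trend, seasonal, _ = decompose(series, period)
    slope = trend[1] - trend[0]
    return trend[0] + slope * t + seasonal[t % period]

# Synthetic "daily equivalent level" with a weekly flight-schedule pattern.
pattern = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
level = [60.0 + 0.01 * t + pattern[t % 7] for t in range(70)]
print(round(forecast(level, 7, 70), 3))
```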
Creating Discriminative Models for Time Series Classification and Clustering by HMM Ensembles.
Asadi, Nazanin; Mirzaei, Abdolreza; Haghshenas, Ehsan
2016-12-01
Classification of temporal data sequences is a fundamental branch of machine learning with a broad range of real-world applications. Since the dimensionality of temporal data is significantly larger than that of static data, and its modeling and interpretation are more complicated, classification and clustering of temporal data are more complex as well. Hidden Markov models (HMMs) are well-known statistical models for modeling and analyzing sequence data. Ensemble methods, which combine multiple models to obtain a target model, have also shown good performance in past experiments. Together, these facts strongly motivate employing HMM ensembles for the classification and clustering of time series data. So far, no effective classification or clustering method based on HMM ensembles has been proposed, and the few existing HMM ensemble methods have trouble with the vital task of separating models of distinct classes. In this paper, a new framework based on HMM ensembles for classification and clustering is therefore proposed. In addition to a strong theoretical background obtained by employing the Rényi entropy in the ensemble learning procedure, the main contribution of the proposed method is to address the difficulty HMM-based methods have in separating models of distinct classes, by considering the inverse emission matrix of the opposite class to build an opposite model. The proposed algorithms perform more effectively than other methods, especially other HMM ensemble-based methods. Moreover, the proposed clustering framework, which draws benefits from both similarity-based and model-based methods, together with the Rényi-based ensemble method, proved superior in several measurements.
Artificial neural networks for modeling time series of beach litter in the southern North Sea.
Schulz, Marcus; Matthies, Michael
2014-07-01
In European marine waters, existing monitoring programs of beach litter need to be improved concerning litter items used as indicators of pollution levels, efficiency, and effectiveness. In order to ease and focus future monitoring of beach litter on few important litter items, feed-forward neural networks consisting of three layers were developed to relate single litter items to general categories of marine litter. The neural networks developed were applied to seven beaches in the southern North Sea and modeled time series of five general categories of marine litter, such as litter from fishing, shipping, and tourism. Results of regression analyses show that general categories were predicted significantly moderately to well. Measured and modeled data were in the same order of magnitude, and minima and maxima overlapped well. Neural networks were found to be eligible tools to deliver reliable predictions of marine litter with low computational effort and little input of information. Copyright © 2014 Elsevier Ltd. All rights reserved.
Research on Time-series Modeling and Filtering Methods for MEMS Gyroscope Random Drift Error
Wang, Xiao Yi; Meng, Xiu Yun
2017-03-01
The precision of MEMS gyroscopes is reduced by random drift error. This paper applies time series analysis to model the random drift error of a MEMS gyroscope. Based on the established model, a Kalman filter is employed to compensate for the error. To overcome the disadvantages of the conventional Kalman filter, the Sage-Husa adaptive filtering algorithm is utilized to improve the accuracy of the filtering results, and the orthogonality of the innovation sequence during filtering is used to handle outliers. The results show that, compared with the conventional Kalman filter, the modified filter not only enhances filtering accuracy but also resists outliers, ensuring the stability of the filtering and thus improving gyroscope performance.
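The "time series model + Kalman filter" pipeline above can be sketched with a scalar example; the AR(1) drift model, its coefficients, and the noise variances are assumptions for illustration, and the Sage-Husa adaptive variant is not reproduced:

```python
# Illustrative sketch only: a scalar Kalman filter tracking an AR(1)
# random-drift state x[t] = phi*x[t-1] + w from noisy observations z = x + v.
import random

random.seed(0)

phi, q, r = 0.95, 0.01, 0.25       # state coeff., process var., measurement var.

# Simulate the drift (truth) and its noisy measurements.
truth, meas = [0.0], []
for _ in range(200):
    truth.append(phi * truth[-1] + random.gauss(0.0, q ** 0.5))
    meas.append(truth[-1] + random.gauss(0.0, r ** 0.5))

x, p = 0.0, 1.0                    # state estimate and its variance
err_raw = err_kf = 0.0
for t, z in enumerate(meas):
    x_pred = phi * x               # predict step
    p_pred = phi * phi * p + q
    k = p_pred / (p_pred + r)      # Kalman gain
    x = x_pred + k * (z - x_pred)  # update step
    p = (1.0 - k) * p_pred
    err_raw += (z - truth[t + 1]) ** 2
    err_kf += (x - truth[t + 1]) ** 2

print(err_kf < err_raw)            # the filter should beat raw measurements
```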
Modelling the neurovascular habituation effect on fMRI time series
Ciuciu, Ph.; Sockeel, S.; Vincent, T. [NeuroSpin/CEA, F-91191 Gif-sur-Yvette (France); Idier, J. [IRCCyN/CNRS, 1 rue de la Noe 44300 Nantes (France)
2009-07-01
In this paper, a novel non-stationary model of functional Magnetic Resonance Imaging (fMRI) time series is proposed. It allows us to account for a putative habituation effect arising in event-related fMRI paradigms, which involves the so-called repetition-suppression phenomenon and induces response magnitudes that decrease over successive trials. Likewise, this model is defined over functionally homogeneous regions of interest (ROIs) and embedded in a joint detection-estimation approach to brain activity. Importantly, its non-stationary character is embodied in the trial-varying nature of the BOLD response magnitude. Habituation and activation maps are then estimated within the Bayesian framework in a fully unsupervised MCMC procedure. On artificial fMRI datasets, we show that habituation effects can be accurately recovered in activating voxels. (authors)
The Volterra series as special case of artificial neural network model
Napiorkowski, J.; O Kane, J. P.
2003-04-01
The geophysical processes contributing to the hydrological cycle are described by theoretically sound non-linear partial differential equations of mass and energy transfer. The hydrodynamic equations describing hydrological processes were developed in non-linear form in the nineteenth century. In the case of surface runoff from a natural catchment or flow in an open channel, an accurate application of the hydraulic approach requires a detailed topographical survey and determination of roughness parameters. In order to avoid these difficulties, alternative approaches, e.g. via conceptual models and black-box models, were developed in the second half of the last century. The conceptual model approach is to simulate the nature of the catchment response or the channel response by a relatively simple non-linear model built up from simple non-linear elements, e.g. a cascade of non-linear reservoirs. Each non-linear reservoir is responsible for part of the attenuation of the system response. This lumped dynamic model can be represented by a set of ordinary differential equations:

$$\dot S_1(t) = -f[S_1(t)] + x(t), \quad \dot S_2(t) = -f[S_2(t)] + f[S_1(t)], \quad \ldots, \quad \dot S_n(t) = -f[S_n(t)] + f[S_{n-1}(t)], \quad y(t) = f[S_n(t)] \qquad (1)$$

where x is the input signal (rainfall or flow at the upstream end of the channel), S_i is the storage in the i-th reservoir, f(.) represents the outflow-storage relation and y is the output signal (surface runoff or flow at the downstream end of the channel). Non-linear black-box analysis is concerned with representing a system by a functional Volterra series in the form of a sum of convolution integrals:

$$y(t) = \int_0^t h_1(\tau)\,x(t-\tau)\,d\tau + \int_0^t\!\!\int_0^t h_2(\tau_1,\tau_2)\,x(t-\tau_1)\,x(t-\tau_2)\,d\tau_1\,d\tau_2 + \int_0^t\!\!\int_0^t\!\!\int_0^t h_3(\tau_1,\tau_2,\tau_3)\,x(t-\tau_1)\,x(t-\tau_2)\,x(t-\tau_3)\,d\tau_1\,d\tau_2\,d\tau_3 + \cdots$$
Rexer, Moritz; Claessens, Sten; Hirt, Christian
2016-04-01
The number of relevant terms of binomial series expansions used in spectral forward modelling of the gravitational potential is known to rise substantially as the resolution of the models increases. Here, we investigate and compare the binomial series expansions in forward modelling w.r.t. a sphere and w.r.t. an ellipsoid (Claessens and Hirt, 2013) in view of high-degree forward modelling (d/o 10800). The series in each case depend on different parameters - such as the elevation of the topographic function or the ellipsoidal radius/co-latitude - and reveal different maximum orders of truncation for a 1% convergence level (= relative error). The results are verified in a real data scenario up to d/o 5400 by spot checks using direct integral solutions that do not depend on binomial series expansions. In conclusion, our study demonstrates that for d/o 10800 modelling, up to 30 terms of the binomial series accounting for the radial integral are needed in both the spherical and the ellipsoidal case, while up to 60 terms are needed for the binomial series accounting for the oblateness of the Earth in the ellipsoidal case for convergence at the 1% level. References: Claessens, S.J.; Hirt, C.: Ellipsoidal topographic potential - new solutions for spectral forward gravity modelling of topography with respect to a reference ellipsoid; Journal of Geophysical Research (JGR) - Solid Earth, Vol. 118, DOI: 10.1002/2013JB010457, 2013.
Modeling of the jack rabbit series of experiments with a temperature based reactive burn model
Desbiens, Nicolas
2017-01-01
The Jack Rabbit experiments, performed by Lawrence Livermore National Laboratory, focus on detonation wave corner turning and shock desensitization. Indeed, while important for safety or charge design, the behaviour of explosives in these regimes is poorly understood. In this paper, our temperature based reactive burn model is calibrated for LX-17 and compared to the Jack Rabbit data. It is shown that our model can reproduce the corner turning and shock desensitization behaviour of four out of the five experiments.
Buishand, T. A.; Klein Tank, A. M. G.
1996-05-01
The precipitation amounts on wet days at De Bilt (the Netherlands) are linked to temperature and surface air pressure through advanced regression techniques. Temperature is chosen as a covariate to use the model for generating synthetic time series of daily precipitation in a CO2 induced warmer climate. The precipitation-temperature dependence can partly be ascribed to the phenomenon that warmer air can contain more moisture. Spline functions are introduced to reproduce the non-monotonous change of the mean daily precipitation amount with temperature. Because the model is non-linear and the variance of the errors depends on the expected response, an iteratively reweighted least-squares technique is needed to estimate the regression coefficients. A representative rainfall sequence for the situation of a systematic temperature rise is obtained by multiplying the precipitation amounts in the observed record with a temperature dependent factor based on a fitted regression model. For a temperature change of 3°C (reasonable guess for a doubled CO2 climate according to the present-day general circulation models) this results in an increase in the annual average amount of 9% (20% in winter and 4% in summer). An extended model with both temperature and surface air pressure is presented which makes it possible to study the additional effects of a potential systematic change in surface air pressure on precipitation.
Time-series microarray data simulation modeled with a case-control label.
Liu, Y J; Zhang, J Y
2016-05-12
With advances in molecular biology, microarray data have become an important resource in the exploration of complex human diseases. Although gene chip technology continues to grow, there are still many barriers to overcome, such as high costs, small sample sizes, complex procedures, poor repeatability, and the dependence on data analysis methods. To avoid these problems, simulation data have a vital role in the study of complex diseases. A simulation method of microarray data is introduced in this study to model the occurrence and development of general diseases. Using classic statistics and control theory, five risk models are proposed. One or more models can be introduced into the baseline simulation dataset with a case-control label. In addition, time-series gene expression data can be generated to model the dynamic evolutionary process of a disease. The prevalence of each model is estimated and disease-associated genes are tested by significance analysis of microarrays. The source code, written in MATLAB, is freely and publicly available at http://sourceforge.net/projects/genesimulation/files/.
The string prediction models as invariants of time series in the forex market
Pincak, R.
2013-12-01
In this paper we apply a new approach based on string theory to the real financial market. The models are constructed around the idea of prediction models based on string invariants (PMBSI). The performance of PMBSI is compared to support vector machines (SVM) and artificial neural networks (ANN) on an artificial and a financial time series, and a brief overview of the results and analysis is given. The first model is based on the correlation function as invariant, and the second is an application based on deviations from the closed string/pattern form (PMBCS). We found a clear difference between the two approaches: the first model cannot predict the behavior of the forex market as efficiently as the second, which is, in addition, able to make a relevant profit per year. The presented string models could be useful for portfolio creation and financial risk management in the banking sector, as well as for a nonlinear statistical approach to data optimization.
Optimizing the De-Noise Neural Network Model for GPS Time-Series Monitoring of Structures
Mosbeh R. Kaloop
2015-09-01
The Global Positioning System (GPS) is now widely used for structures and other applications. Nonetheless, GPS accuracy still suffers from the errors afflicting the measurements, particularly for the short-period displacement of structural components. Previously, a multi-filter method was utilized to remove the displacement errors. This paper aims at using a novel application of neural network prediction models to improve GPS monitoring time series data. Four prediction models, based on different learning algorithms, are applied with neural network solutions: back-propagation, cascade-forward back-propagation, adaptive filtering, and the extended Kalman filter, to assess which model can be recommended. A noise simulation and a bridge's short-period GPS monitoring displacement component, sampled at 1 Hz, are used to validate the four models and the previous method. The results show that the adaptive neural network filter is recommended for de-noising the observations, specifically for the GPS displacement components of structures. This model is also expected to have a significant influence on the design of structures with respect to low-frequency responses and measurement content.
A detection model of underwater topography with a series of SAR images acquired at different time
YANG Jungang; ZHANG Jie; MENG Junmin
2010-01-01
Underwater topography is one of the oceanic features detected by Synthetic Aperture Radar (SAR). The SAR imaging mechanism of underwater topography shows that tidal current is the important factor in underwater topography SAR imaging. Thus, under the same wind field condition, SAR images of the same area acquired at different times contain different information about the underwater topography. To utilize SAR images acquired at different times synchronously for underwater topography detection and to improve the detection precision, a detection model of underwater topography using a series of SAR images acquired at different times is developed, based on the detection model with a single SAR image and the periodicity of the tidal current, combined with numerical simulation of the tide and tidal current. To test the feasibility of the presented model, the Taiwan Shoal, located at the south outlet of the Taiwan Strait, is selected as the study area and three SAR images are used in the underwater topography detection. The detection results are compared with field observations of water depth carried out by R/V Dongfanghong 2, and the detection errors are compared with those obtained from a single SAR image. All comparisons show that the detection model presented in this paper improves the precision of underwater topography SAR detection and that the presented model is feasible.
An advection-based model to increase the temporal resolution of PIV time series.
Scarano, Fulvio; Moore, Peter
A numerical implementation of the advection equation is proposed to increase the temporal resolution of PIV time series. The method is based on the principle that velocity fluctuations are transported passively, similar to Taylor's hypothesis of frozen turbulence. In the present work, the advection model is extended to unsteady three-dimensional flows. The main objective of the method is to lower the requirement on the PIV repetition rate from the Eulerian frequency toward the Lagrangian one. The local trajectory of the fluid parcel is obtained by forward projection of the instantaneous velocity at the preceding time instant and backward projection from the subsequent time step. The trajectories are approximated by the instantaneous streamlines, which yields accurate results when the amplitude of the velocity fluctuations is small with respect to the convective motion. The verification is performed with two experiments conducted at temporal resolutions significantly higher than that dictated by the Nyquist criterion. The flow past the trailing edge of a NACA0012 airfoil closely approximates frozen turbulence, where the largest ratio between the Lagrangian and Eulerian temporal scales is expected. An order-of-magnitude reduction of the needed acquisition frequency is demonstrated by the velocity spectra of the super-sampled series. The application to three-dimensional data is made with time-resolved tomographic PIV measurements of a transitional jet. Here, the 3D advection equation is implemented to estimate the fluid trajectories. The reduction in the minimum sampling rate achieved by super-sampling is smaller in this case, because vortices occurring in the jet shear layer are not well approximated by advection alone at large time separations. Both cases reveal that the current requirements for time-resolved PIV experiments can be revised when information is poured from space to time. An additional favorable effect is observed by the analysis in the frequency
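The frozen-turbulence principle behind the super-sampling above can be shown in one dimension; the traveling-wave field and advection speed are assumptions chosen so that the shifted-snapshot reconstruction is exact:

```python
# Sketch of the frozen-turbulence idea: if fluctuations advect with speed c,
# a field can be evaluated at an intermediate instant by shifting an earlier
# snapshot in space, u(x, t + dt) ≈ u(x - c*dt, t). A traveling sine wave
# makes the approximation exact.
import math

c = 2.0            # assumed advection (convective) speed

def u(x, t):
    """The 'true' field: a wave traveling with speed c."""
    return math.sin(x - c * t)

dt = 0.5           # time gap between coarse PIV snapshots
xs = [0.1 * i for i in range(100)]

# Reconstruct u(x, dt) purely by advecting the snapshot taken at t = 0.
max_err = max(abs(u(x - c * dt, 0.0) - u(x, dt)) for x in xs)
print(max_err)
```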
Du, Kongchang; Zhao, Ying; Lei, Jiaqiang
2017-09-01
In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then obtain a set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction, we found that this usage of SSA and DWT in building hybrid models is incorrect. Since SSA and DWT use 'future' values to perform the calculation, the series generated by SSA reconstruction or DWT decomposition contain information about 'future' values. These hybrid models therefore report incorrectly 'high' prediction performance and may cause large errors in practice.
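The leakage mechanism described above can be demonstrated with a centered moving average as a stand-in for SSA/DWT reconstruction (both use samples on either side of each point): the "preprocessed" value at time t changes when only future samples change, so feeding it to a predictor lets future information flow backwards in time.

```python
# Minimal demonstration of preprocessing leakage: a centered smoother
# applied to the whole series mixes future samples into past values.

def centered_smooth(series, half=2):
    """Centered moving average with a window of 2*half + 1 samples."""
    out = []
    for t in range(len(series)):
        lo, hi = max(0, t - half), min(len(series), t + half + 1)
        window = series[lo:hi]
        out.append(sum(window) / len(window))
    return out

series_a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
series_b = series_a[:5] + [60.0, 70.0, 80.0]   # same past, different future

smooth_a = centered_smooth(series_a)
smooth_b = centered_smooth(series_b)

# The smoothed value at t = 4 depends on t = 5, 6: future data has leaked.
print(smooth_a[4], smooth_b[4])
```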
Study on Apparent Kinetic Prediction Model of the Smelting Reduction Based on the Time-Series
Guo-feng Fan
2012-01-01
A series of direct smelting reduction experiments has been carried out with high-phosphorus iron ore of different basicities using a thermogravimetric analyzer. The derivative thermogravimetric (DTG) data have been obtained from the experiments. The one-step-ahead local weighted linear (LWL) method, one of the most suitable ways of predicting chaotic time series with a focus on the errors, is used to predict the DTG. Meanwhile, empirical mode decomposition-autoregressive (EMD-AR) modeling, a data mining technique in signal processing, is also used to predict the DTG. The results show that (1) EMD-AR(4) is the most appropriate model and its error is smaller than that of the former; (2) the root mean square error (RMSE) has decreased by about two-thirds; (3) the standardized root mean square error (NMSE) has decreased by an order of magnitude. Finally, the EMD-AR method has been improved by golden section weighting, making its error smaller still. The improved EMD-AR model is therefore a promising alternative for modeling the apparent reaction rate (DTG). The analytical results are an important reference in the field of industrial control.
Multi-horizon solar radiation forecasting for Mediterranean locations using time series models
Voyant, Cyril; Muselli, Marc; Nivet, Marie Laure
2013-01-01
From the grid manager's point of view, needs in terms of prediction of intermittent energy such as the photovoltaic resource can be distinguished according to the horizon considered: the following days (d+1, d+2 and d+3), the next day by hourly step (h+24), the next hour (h+1) and the next few minutes (m+5, e.g.). Through this work, we have identified methodologies using time series models for the prediction of global radiation and photovoltaic power at these horizons. We present a comparison of the different predictors developed and tested, in order to propose a hierarchy. For horizons d+1 and h+1, without advanced ad hoc time series pre-processing (stationarization), we find it is not easy to differentiate between autoregressive moving average (ARMA) models and the multilayer perceptron (MLP). However, we observed that using exogenous variables significantly improves the results for the MLP. We have shown that MLPs were better adapted for horizons h+24 and m+5. In summary, our results are complementary and improve the existing prediction techniques ...
Diffusive and subdiffusive dynamics of indoor microclimate: a time series modeling.
Maciejewska, Monika; Szczurek, Andrzej; Sikora, Grzegorz; Wyłomańska, Agnieszka
2012-09-01
The indoor microclimate is an issue in modern society, where people spend about 90% of their time indoors. Temperature and relative humidity are commonly used for its evaluation. In this context, the two parameters are usually considered as behaving in the same manner, just inversely correlated. This opinion comes from observation of the deterministic components of temperature and humidity time series. We focus on the dynamics and the dependency structure of the time series of these parameters, without deterministic components. Here we apply the mean square displacement, the autoregressive integrated moving average (ARIMA), and the methodology for studying anomalous diffusion. The analyzed data originated from five monitoring locations inside a modern office building, covering a period of nearly one week. It was found that the temperature data exhibited a transition between diffusive and subdiffusive behavior, when the building occupancy pattern changed from the weekday to the weekend pattern. At the same time the relative humidity consistently showed diffusive character. Also the structures of the dependencies of the temperature and humidity data sets were different, as shown by the different structures of the ARIMA models which were found appropriate. In the space domain, the dynamics and dependency structure of the particular parameter were preserved. This work proposes an approach to describe the very complex conditions of indoor air and it contributes to the improvement of the representative character of microclimate monitoring.
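The diffusive-vs-subdiffusive diagnostic named above rests on the mean square displacement, MSD(k) = <(x[t+k] - x[t])^2>, which grows linearly in the lag k for ordinary diffusion and sublinearly for subdiffusion. A hedged sketch on a simple random walk (the canonical diffusive process; the walk itself is an assumption, not the building data):

```python
# Time-averaged mean square displacement of a random walk: for diffusion,
# MSD(2k)/MSD(k) ≈ 2, while subdiffusion would give a ratio below 2.
import random

random.seed(1)

walk = [0.0]
for _ in range(5000):
    walk.append(walk[-1] + random.choice((-1.0, 1.0)))

def msd(x, k):
    """Time-averaged mean square displacement at lag k."""
    return sum((x[t + k] - x[t]) ** 2 for t in range(len(x) - k)) / (len(x) - k)

ratio = msd(walk, 20) / msd(walk, 10)
print(round(ratio, 2))
```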
Siggiridou, Elsa
2015-01-01
Granger causality has been used for the investigation of the inter-dependence structure of the underlying systems of multi-variate time series. In particular, the direct causal effects are commonly estimated by the conditional Granger causality index (CGCI). In the presence of many observed variables and relatively short time series, CGCI may fail because it is based on vector autoregressive models (VAR) involving a large number of coefficients to be estimated. In this work, the VAR is restricted by a scheme that modifies the recently developed method of backward-in-time selection (BTS) of the lagged variables and the CGCI is combined with BTS. Further, the proposed approach is compared favorably to other restricted VAR representations, such as the top-down strategy, the bottom-up strategy, and the least absolute shrinkage and selection operator (LASSO), in terms of sensitivity and specificity of CGCI. This is shown by using simulations of linear and nonlinear, low and high-dimensional systems and different t...
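The Granger causality index underlying the CGCI above compares a full VAR with a restricted VAR that omits the lagged values of the candidate driver: GCI = ln(RSS_restricted / RSS_full). A hedged sketch on an assumed bivariate lag-1 system in which x drives y (the coefficients and the OLS details are illustrative, not the paper's restricted-VAR scheme):

```python
# Sketch of the Granger causality index for a bivariate lag-1 VAR.
import math
import random

random.seed(2)

# Simulate x -> y coupling: y depends on lagged x, x evolves on its own.
n = 2000
x, y = [0.0], [0.0]
for _ in range(n):
    x_new = 0.9 * x[-1] + random.gauss(0.0, 1.0)
    y_new = 0.5 * x[-1] + 0.5 * y[-1] + random.gauss(0.0, 1.0)
    x.append(x_new)
    y.append(y_new)

def rss(target, regs):
    """Residual sum of squares of OLS (no intercept) via normal equations."""
    k = len(regs)
    g = [[sum(a * b for a, b in zip(regs[i], regs[j])) for j in range(k)]
         for i in range(k)]
    c = [sum(a * t for a, t in zip(regs[i], target)) for i in range(k)]
    for i in range(k):                       # Gaussian elimination
        for j in range(i + 1, k):
            f = g[j][i] / g[i][i]
            for m in range(i, k):
                g[j][m] -= f * g[i][m]
            c[j] -= f * c[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):           # back substitution
        beta[i] = (c[i] - sum(g[i][m] * beta[m]
                              for m in range(i + 1, k))) / g[i][i]
    return sum((t - sum(beta[m] * regs[m][idx] for m in range(k))) ** 2
               for idx, t in enumerate(target))

def gci(effect, cause):
    """Granger causality index for cause -> effect with one lag."""
    target, lag_e, lag_c = effect[1:], effect[:-1], cause[:-1]
    return math.log(rss(target, [lag_e]) / rss(target, [lag_e, lag_c]))

print(gci(y, x) > gci(x, y))   # x drives y, not the other way round
```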
Dynamic Modeling and Simulation of a Switched Reluctance Motor in a Series Hybrid Electric Vehicle
Siavash Sadeghi
2010-04-01
Full Text Available Dynamic behavior analysis of electric motors is required in order to accuratelyevaluate the performance, energy consumption and pollution level of hybrid electricvehicles. Simulation tools for hybrid electric vehicles are divided into steady state anddynamic models. Tools with steady-state models are useful for system-level analysiswhereas tools that utilize dynamic models give in-depth information about the behavior ofsublevel components. For the accurate prediction of hybrid electric vehicle performance,dynamic modeling of the motor and other components is necessary. Whereas the switchedreluctance machine is well suited for electric and hybrid electric vehicles, due to the simpleand rugged construction, low cost, and ability to operate over a wide speed range atconstant power, in this paper dynamic performance of the switched reluctance motor for eseries hybrid electric vehicles is investigated. For this purpose a switched reluctance motorwith its electrical drive is modeld and simulated first, and then the other components of aseries hybrid electric vehicle, such as battery, generator, internal combusion engine, andgearbox, are designed and linked with the electric motor. Finally a typical series hybridelectric vehicle is simulated for different drive cycles. The extensive simulation results showthe dynamic performance of SRM, battery, fuel consumption, and emissions.
Zhang, Tingting; Wu, Jingwei; Li, Fan; Caffo, Brian; Boatman-Reich, Dana
2015-03-01
We introduce a dynamic directional model (DDM) for studying brain effective connectivity based on intracranial electrocorticographic (ECoG) time series. The DDM consists of two parts: a set of differential equations describing the neuronal activity of brain components (state equations), and observation equations linking the underlying neuronal states to the observed data. When applied to functional MRI or EEG data, DDMs usually have complex formulations and thus can accommodate only a few regions, due to limitations in the spatial and/or temporal resolution of these imaging modalities. In contrast, we formulate our model in the context of ECoG data. The combined high temporal and spatial resolution of ECoG data results in a much simpler DDM, allowing investigation of complex connections between many regions. To identify functionally segregated sub-networks, a biologically economical form of brain network, we propose the Potts model for the DDM parameters. The neuronal states of brain components are represented by cubic spline bases, and the parameters are estimated by minimizing a log-likelihood criterion that combines the state and observation equations. The Potts model is converted to the Potts penalty in the penalized regression approach to achieve sparsity in parameter estimation, for which a fast iterative algorithm is developed. The methods are applied to an auditory ECoG dataset.
Cheng, Qing; Lu, Xin; Wu, Joseph T.; Liu, Zhong; Huang, Jincai
2016-01-01
In 2014, Guangdong experienced the largest dengue epidemic in recent history; the number of dengue cases was the highest of the previous 10 years and comprised more than 90% of all cases. In order to analyze heterogeneous transmission of dengue, a multivariate time series model decomposing dengue risk additively into endemic, autoregressive, and spatiotemporal components was used to model dengue transmission. Moreover, random effects were introduced into the model to deal with heterogeneous dengue transmission and incidence levels, and a power-law approach was embedded in the model to account for spatial interaction. There was little spatial variation in the autoregressive component. In contrast, for the endemic component there was a pronounced heterogeneity between the Pearl River Delta area and the remaining districts. For the spatiotemporal component, there was considerable heterogeneity across districts, with the highest values in some western and eastern districts. Clustering analysis revealed the patterns driving dengue transmission: the endemic component appears to be important in the Pearl River Delta area, where incidence is high (95 per 100,000), while areas with relatively low incidence (4 per 100,000) are highly dependent on spatiotemporal spread and local autoregression. PMID:27666657
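The additive endemic/autoregressive/spatiotemporal mean structure described above can be sketched in a few lines; the district counts, neighbourhood weights, and parameter values below are invented for illustration, not fitted to the Guangdong data.

```python
import numpy as np

# mu[i,t] = nu[i] + lambda[i]*y[i,t-1] + phi[i]*sum_j w[j,i]*y[j,t-1]
# i.e. endemic + autoregressive + spatiotemporal parts.
rng = np.random.default_rng(5)
n_dist, n_week = 4, 30
nu = np.array([5.0, 1.0, 1.0, 1.0])   # endemic level (district 0 high)
lam = np.full(n_dist, 0.4)            # within-district autoregression
phi = np.full(n_dist, 0.2)            # spread from neighbouring districts
w = (np.ones((n_dist, n_dist)) - np.eye(n_dist)) / (n_dist - 1)

y = np.zeros((n_dist, n_week))
y[:, 0] = nu
for t in range(1, n_week):
    mu = nu + lam * y[:, t - 1] + phi * (w.T @ y[:, t - 1])
    y[:, t] = rng.poisson(mu)         # observed counts are Poisson draws
print(y[:, -1])
```

District 0, with the larger endemic term, settles at a visibly higher mean count than the others, mimicking the Pearl River Delta pattern in the abstract.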
Bayesian models of thermal and pluviometric time series in the Fucino plateau
Adriana Trabucco
2011-09-01
This work was developed within the project Metodologie e sistemi integrati per la qualificazione di produzioni orticole del Fucino (Methodologies and integrated systems for the classification of horticultural products in the Fucino plateau), sponsored by the Italian Ministry of Education, University and Research, Strategic Projects, Law 448/97. Agro-system management, especially when high quality in speciality crops must be achieved, requires knowledge of the main features and intrinsic variability of the climate. Statistical models may properly summarize the structure behind the observed variability; furthermore, they may support the agronomic manager by providing the probability that meteorological events happen in a time window of interest. More than 30 years of daily values collected at four sites located on the Fucino plateau, Abruzzo region, Italy, were studied by fitting Bayesian generalized linear models to maximum/minimum air temperature and rainfall time series. Bayesian predictive distributions of climate variables supporting decision-making processes were calculated at different timescales, 5 days for temperature and 10 days for rainfall, both to reduce computational effort and to simplify statistical model assumptions. Technicians and field operators, even with limited statistical training, may exploit the model output by inspecting graphs and climatic profiles of the cultivated areas during decision-making processes. Realizations taken from the predictive distributions may also be used as input for agro-ecological models (e.g. models of crop growth or water balance). Fitted models may be exploited to monitor climatic changes and to revise the climatic profiles of areas of interest, periodically updating the probability distributions of the target climatic variables. For the sake of brevity, the description of results is limited to just one of the four sites; results for all other sites are available as supplementary information.
Tri-Vien Vu
2014-10-01
This study applied a model predictive control (MPC) framework to solve the cruising control problem of a series hydraulic hybrid vehicle (SHHV). The controller regulates not only vehicle velocity but also engine torque, engine speed, and accumulator pressure to their corresponding reference values. At each time step, a quadratic programming problem is solved within a predictive horizon to obtain the optimal control inputs, with the objective of minimizing the output error. This approach ensures that the components operate at high efficiency, thereby improving the total efficiency of the system. The proposed SHHV control system was evaluated under urban and highway driving conditions. By handling constraints and input-output interactions, the MPC-based control system ensures that the system operates safely and efficiently. The fuel economy of the proposed control scheme shows a noticeable improvement in comparison with the PID-based system, in which three proportional-integral-derivative (PID) controllers are used for cruising control.
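The receding-horizon idea can be sketched for a toy one-dimensional cruise-control model. The dynamics, weights, and actuator limit below are assumptions, and the input bound is enforced by clipping the unconstrained solution rather than by a true constrained QP as in the paper.

```python
import numpy as np

# Toy discrete-time velocity model v[k+1] = a*v[k] + b*u[k] (assumed numbers).
a, b = 0.95, 0.1
N, r = 10, 0.01            # horizon length, input weight
u_max = 20.0               # actuator limit (handled by clipping here)

# Prediction matrices over the horizon: V = A*v0 + B*U.
A = np.array([a ** (k + 1) for k in range(N)])
B = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        B[i, j] = a ** (i - j) * b

def mpc_step(v0, v_ref):
    """One receding-horizon step: minimize ||V - v_ref||^2 + r*||U||^2."""
    U = np.linalg.solve(B.T @ B + r * np.eye(N), B.T @ (v_ref - A * v0))
    return float(np.clip(U[0], -u_max, u_max))  # apply only the first input

v, v_ref = 0.0, 15.0
for _ in range(60):
    v = a * v + b * mpc_step(v, v_ref)
print(round(v, 2))  # settles close to the reference velocity
```

Re-solving at every step is what gives MPC its feedback character: only the first optimal input is applied, then the horizon slides forward.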
A LINEAR-NEURAL HYBRID MODEL FOR ANALYSIS AND FORECASTING OF TIME-SERIES
MARCELO CUNHA MEDEIROS
1998-01-01
This dissertation presents a nonlinear autoregressive model with exogenous variables (ARX) for time series analysis and forecasting. The model coefficients are estimated by the output of a feed-forward neural network trained with a hybrid optimization algorithm. The results obtained are compared with both linear and nonlinear models.
Goodness-of-fit tests for vector autoregressive models in time series
(no author listed)
2010-01-01
The paper proposes and studies some diagnostic tools for checking the goodness-of-fit of general parametric vector autoregressive models in time series. The resulting tests are asymptotically chi-squared under the null hypothesis and can detect alternatives converging to the null at a parametric rate. The tests involve weight functions, which provide the flexibility to choose scores for enhancing power performance, especially under directional alternatives. When the alternatives are not directional, we construct asymptotically distribution-free maximin tests for a large class of alternatives. A possibility to construct score-based omnibus tests is discussed when the alternative is saturated. The power performance is also investigated. In addition, when the sample size is small, a nonparametric Monte Carlo test approach for dependent data is proposed to improve the performance of the tests. The algorithm is easy to implement. Simulation studies and real applications are carried out for illustration.
Low-derivative operators of the Standard Model effective field theory via Hilbert series methods
Lehman, Landon
2015-01-01
In this work, we explore an extension of Hilbert series techniques to count operators that include derivatives. For sufficiently low-derivative operators, we find an algorithm that gives the number of invariant operators, properly accounting for redundancies due to the equations of motion and integration by parts. Specifically, the technique can be applied whenever there is only one Lorentz invariant for a given partitioning of derivatives among the fields. At higher numbers of derivatives, equation of motion redundancies can be removed, but the increased number of Lorentz contractions spoils the subtraction of integration by parts redundancies. While restricted, this technique is sufficient to automatically generate the complete set of invariant operators of the Standard Model effective field theory for dimensions 6 and 7 (for arbitrary numbers of flavors). At dimension 8, the algorithm does not automatically generate the complete operator set; however, it suffices for all but five classes of operators. For ...
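The counting step can be illustrated with a toy Hilbert series. For a free ring with one generator of degree 2 and one of degree 3 (no relations, and none of the EOM/IBP subtractions that the SM EFT case requires), the coefficient of q^d in 1/((1-q^2)(1-q^3)) counts the independent operators of dimension d.

```python
# Power-series coefficients of prod_g 1/(1 - q^g) up to q^d_max,
# i.e. the number of monomials in generators of the given degrees.
def series_coeffs(gen_degrees, d_max):
    coeffs = [1] + [0] * d_max          # start from the constant series 1
    for g in gen_degrees:
        for d in range(g, d_max + 1):   # multiply by 1/(1 - q^g)
            coeffs[d] += coeffs[d - g]
    return coeffs

print(series_coeffs([2, 3], 12))
# -> [1, 0, 1, 1, 1, 1, 2, 1, 2, 2, 2, 2, 3]
```

For example, the coefficient 3 at q^12 counts the monomials x^6, x^3 y^2, and y^4 for generators x (degree 2) and y (degree 3); the real calculation in the paper replaces this free counting with characters of the symmetry group.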
Volterra prediction model for speech signal series
张玉梅; 胡小俊; 吴晓军; 白树林; 路纲
2015-01-01
The given English phonemes, words, and sentences are sampled and preprocessed. For these measured speech signal series, the time delay and embedding dimension are determined by the mutual information method and Cao's method, respectively, so as to perform phase space reconstruction of the series. Using the small-data-set method, the largest Lyapunov exponent of the speech signal series is calculated; the fact that its value is greater than zero establishes the chaotic characteristics of the series and thus serves as their chaotic characteristic identification. By introducing a second-order Volterra series, we put forward a type of nonlinear prediction model with an explicit structure. To overcome the intrinsic shortcomings caused by improper parameter selection when the least mean square (LMS) algorithm is used to update the Volterra model coefficients, a variable convergence factor based on an a posteriori error assumption is combined with the LMS algorithm to construct a novel Davidon-Fletcher-Powell-based second-order Volterra filter (DFPSOVF), which is applied to predict the speech signal series of the given English phonemes, words, and sentences with chaotic characteristics. Simulation results in the MATLAB 7.0 environment show that the proposed nonlinear model DFPSOVF guarantees stability and convergence, avoiding the divergence problems of the LMS algorithm; for single-frame and multi-frame measured speech signals, with root mean square error (RMSE) as the evaluation criterion, the prediction accuracy of the proposed nonlinear prediction model DFPSOVF is better than that of the traditionally employed linear prediction (LP). The primary results of single-frame and multi-frame predictions are given. The proposed DFPSOVF model can thus substitute for the linear prediction model under certain conditions. Meanwhile, it can better reflect trends and regularity
Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition.
Munoz-Organero, Mario; Ruiz-Blazquez, Ramona
2017-02-08
Body-worn sensors in general and accelerometers in particular have been widely used in order to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates (F = 0.77) even in the case of using different people executing a different sequence of movements and using different hardware.
Morton, Kenneth D., Jr.; Torrione, Peter A.; Collins, Leslie
2010-04-01
Time domain ground penetrating radar (GPR) has been shown to be a powerful sensing phenomenology for detecting buried objects such as landmines. Landmine detection with GPR data typically utilizes a feature-based pattern classification algorithm to discriminate buried landmines from other sub-surface objects. In high-fidelity GPR, the time-frequency characteristics of a landmine response should be indicative of the physical construction and material composition of the landmine and could therefore be useful for discrimination from other non-threatening sub-surface objects. In this research we propose modeling landmine time-domain responses with a nonparametric Bayesian time-series model, and we perform clustering of these time-series models with a hierarchical nonparametric Bayesian model. Each time series is modeled as a hidden Markov model (HMM) with autoregressive (AR) state densities. The proposed nonparametric Bayesian prior allows for automated learning of the number of states in the HMM as well as the AR order within each state density. This creates a flexible time-series model with complexity determined by the data. Furthermore, a hierarchical nonparametric Bayesian prior is used to group landmine responses with similar HMM parameters, thus learning the number of distinct landmine response models within a data set. Model inference is accomplished using a fast variational mean field approximation that can be implemented for on-line learning.
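The AR state densities that make up each HMM state can be illustrated by their basic building block, a least-squares AR fit; state assignment and order selection are what the nonparametric prior automates. The AR(2) test signal below is synthetic.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: returns coefficients and noise variance."""
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, (y - X @ a).var()

# Synthetic AR(2) segment standing in for one HMM state's output
rng = np.random.default_rng(2)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + rng.standard_normal()
a, s2 = fit_ar(x, 2)
print(np.round(a, 2), round(s2, 2))
```

In the full model, one such AR density is attached to each hidden state, and the HMM machinery decides which state (and hence which set of AR coefficients) generated each stretch of the GPR response.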
Marc, Odin; Hovius, Niels; Meunier, Patrick; Uchida, Taro; Gorum, Tolga
2016-04-01
Earthquakes impart a catastrophic forcing on hillslopes that often leads to widespread landsliding and can contribute significantly to sedimentary and organic matter fluxes. We present a new expression for the total area and volume of populations of earthquake-induced landslides. This model builds on a set of scaling relationships between key parameters, such as landslide density, ground acceleration, fault size, earthquake source depth and seismic moment, derived from geomorphological and seismological observations. To assess the model we have assembled and normalized a catalogue of landslide inventories for 40 earthquakes. We have found that low landscape steepness systematically leads to over-prediction of the total area and volume of landslides. When this effect is accounted for, the model is able to predict within a factor of 2 the landslide areas and associated volumes for about two thirds of the cases in our databases. This is a significant improvement on a previously published empirical expression based only on earthquake moment. The model is suitable for integration into landscape evolution models and for application to the assessment of secondary hazards and risks associated with earthquakes. However, it only models landslides associated with strong ground shaking and neglects the intrinsic permanent damage that also occurs on hillslopes and persists for longer periods. With time series of landslide maps we have constrained the magnitude of the change in landslide susceptibility in the epicentral areas of 4 intermediate to large earthquakes. We propose likely causes for these transient ground strength perturbations and compare our observations to other observations of transient perturbations in epicentral areas, such as increases in suspended sediment transport, seismic velocity reductions and hydrological perturbations. We conclude with some preliminary observations on the coseismic mass wasting and post-seismic landslide enhancement caused by the 2015 Mw.7
Suhartono Suhartono
2005-01-01
Many business and economic time series are non-stationary time series that contain trend and seasonal variations. Seasonality is a periodic and recurrent pattern caused by factors such as weather, holidays, or repeating promotions. A stochastic trend often accompanies the seasonal variations and can have a significant impact on various forecasting methods. In this paper, we investigate and compare some forecasting methods for modeling time series with both trend and seasonal patterns. These methods are the Winter's, Decomposition, Time Series Regression, ARIMA and Neural Network models. In this empirical research, we study the effectiveness of the forecasting performance, particularly to answer whether a complex method always gives a better forecast than a simpler one. We use real data, namely airline passenger data. The results show that the more complex model does not always yield a better result than a simpler one. Additionally, we identify possibilities for further research, especially the use of hybrid models that combine several forecasting methods to obtain better forecasts, for example a combination of decomposition (as data preprocessing) and a neural network model.
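Winter's method, the simplest of the compared approaches, can be sketched in a few lines; the additive formulation, naive initialization, and smoothing constants below are illustrative choices, not the paper's settings.

```python
# Minimal additive Holt-Winters ("Winter's method") one-step forecaster.
def holt_winters_additive(y, m, alpha=0.4, beta=0.1, gamma=0.3):
    """One-step-ahead forecasts for series y with season length m."""
    level = sum(y[:m]) / m                            # naive initialization
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / m ** 2
    season = [y[i] - level for i in range(m)]
    forecasts = []
    for t in range(m, len(y)):
        forecasts.append(level + trend + season[t % m])
        prev_level = level
        level = alpha * (y[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * season[t % m]
    return forecasts

# Synthetic trend-plus-seasonal data, loosely like airline passengers
y = [10 + 0.5 * t + [0, 5, -3, 1][t % 4] for t in range(80)]
f = holt_winters_additive(y, m=4)
err = [abs(a - b) for a, b in zip(f[40:], y[44:])]   # f[i] predicts y[m+i]
print(max(err))
```

On this deterministic series, the forecast error shrinks toward zero once the level, trend, and seasonal indices have converged; production code would also optimize the smoothing constants.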
Development of the NACA 2412 series foil as a diving system for a submarine model
Ali Munazid
2015-06-01
A foil generates lift when fluid flows past it, because the interaction between the flow and the foil surface makes the pressure on the upper surface lower than that on the lower surface. This work applies foil theory to the hydroplanes of a submarine as a diving system: by inverting the foil, the lift force becomes a downward force, allowing the submarine to dive, hover, and maneuver underwater, much as an aircraft flies and glides using its wings. The diving capability (diving plan) of the NACA 2412 foil was studied and observed on a submarine model by measuring the lift coefficient (Cl) in the laboratory, designing the submarine hull form, and analyzing the forces acting on the model; when the sum of the upward forces is lower than the sum of the downward forces, the submarine is able to dive. The hydroplane can thus be applied as a diving plane; the diving capability is influenced by the hydroplane flip angle and the model speed: the greater the speed and flip angle, the greater the achievable diving depth.
Monitoring and modeling of wetland environment using time-series bi-sensor remotely sensed data
Michishita, Ryo
More than half of the wetlands in the world have been lost in the last century, mainly due to human activities. Since natural wetlands receive a significant amount of untreated runoff from urban and agricultural areas, it is necessary to account for other landscapes adjacent to wetlands, such as water bodies, agricultural areas, and urban areas, in the protection and restoration of the wetlands. The goal of this dissertation is to monitor and model land cover changes using time-series Landsat-5 TM and Terra MODIS data in the Poyang Lake area of China from two perspectives: wetland cover changes and urbanization. A bi-scale monitoring approach was adopted in the monitoring and modeling of wetland cover changes to examine the similarities and differences derived from remotely sensed imagery with different spatial resolutions. The effects of different modeling settings of multiple endmember spectral mixture analysis (MESMA) were examined utilizing a single pair of TM and MODIS scenes. MESMA applied to nine pairs of TM and MODIS scenes acquired from July 2004 to October 2005 captured phenological and hydrological trends of land cover fractions (LCFs) and LCF agreement between the image pairs. Ground surface reflectance, rather than LCFs, was chosen as the key parameter in the blending of bi-scale remotely sensed data, which utilizes the spatial details of one data type and the temporal details of the other. This research customized an existing fusion model to overcome the problem of unobserved pixels in MODIS data acquired on TM data acquisition dates. Interestingly, the input data combination that accounted for water level change achieved higher accuracy. In the monitoring of urbanization, this research investigated the relationship between urban land cover and human activities, and detected the areas of new urban development and redevelopment of built-up areas. Different urbanization processes, largely influenced by the economic reforms of China, were demonstrated
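The core inversion inside spectral mixture analysis models each pixel's reflectance as a fraction-weighted sum of endmember spectra. The sketch below inverts a single, entirely hypothetical endmember set with a sum-to-one constraint appended as an extra weighted equation; MESMA proper additionally searches over multiple candidate endmember sets per pixel.

```python
import numpy as np

# Toy 4-band endmember spectra (rows), purely illustrative values.
endmembers = np.array([[0.05, 0.03, 0.02, 0.01],    # water
                       [0.04, 0.08, 0.30, 0.35],    # vegetation
                       [0.12, 0.15, 0.20, 0.25]]).T  # soil -> (bands, 3)

true_frac = np.array([0.2, 0.5, 0.3])
pixel = endmembers @ true_frac \
    + 0.001 * np.random.default_rng(6).standard_normal(4)

# Append sum(fractions) = 1 as a heavily weighted extra equation.
A = np.vstack([endmembers, 100 * np.ones(3)])
b = np.append(pixel, 100.0)
frac, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(frac, 2))
```

With low noise the least-squares solve recovers the land cover fractions almost exactly; real implementations usually also enforce non-negativity and score each endmember set by its residual.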
Andriyas, S.; McKee, M.
2014-12-01
Anticipating farmers' irrigation decisions can provide the possibility of improving the efficiency of canal operations in on-demand irrigation systems. Although multiple factors are considered during irrigation decision making, for any given farmer there might be one factor playing a major role. Identifying the biophysical factor that led a farmer to decide to irrigate is difficult because of the high variability of those factors during the growing season. Analysis of the irrigation decisions of a group of farmers for a single crop can help to simplify the problem. We developed a hidden Markov model (HMM) to analyze irrigation decisions and explore the factor, and the level of that factor, at which the majority of farmers decide to irrigate. The model requires observed variables as inputs and the hidden states. The chosen model inputs were relatively easily measured or estimated biophysical data, including such factors (i.e., those variables believed to affect irrigation decision-making) as cumulative evapotranspiration, soil moisture depletion, soil stress coefficient, and canal flows. Irrigation decision series were the hidden states for the model. The data for the work come from the Canal B region of the Lower Sevier River Basin, near Delta, Utah. The main crops of the region are alfalfa, barley, and corn. A portion of the data was used to build the model and to test its capability to explore the factor and the level at which the farmer decides to irrigate for future irrigation events. Both group and individual level behavior can be studied using HMMs. The study showed that the farmers cannot be classified into certain classes based on their irrigation decisions, but vary in their behavior from one irrigation to the next across all years and crops. HMMs can be used to analyze what factor and, subsequently, what level of that factor the farmer most likely based the irrigation decision on. The study shows that the HMM is a capable tool to study a process
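The basic HMM computation behind such an analysis is the forward algorithm, which scores an observation sequence under assumed parameters. The two states, three discretized observation symbols, and all probabilities below are invented for illustration, not the study's fitted values.

```python
import numpy as np

def forward(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()              # scaling avoids numerical underflow
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

pi = np.array([0.5, 0.5])                  # hidden states: irrigate / wait
A = np.array([[0.7, 0.3],                  # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.6, 0.3, 0.1],             # P(depletion symbol | state)
              [0.1, 0.3, 0.6]])
seq_dry = [0, 0, 1, 0]                     # mostly high-depletion symbols
seq_wet = [2, 2, 1, 2]
print(forward(seq_dry, pi, A, B), forward(seq_wet, pi, A, B))
```

In a fitted model, comparing such likelihoods across candidate observation variables indicates which biophysical factor best explains the observed decision sequence.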
Modeling GPP in the Nordic Forest Landscape Using MODIS Time Series Data
Schubert, P.; Lagergren, F.; Aurela, M.; Christensen, T. R.; Grelle, A.; Heliasz, M.; Klemedtsson, L. K.; Lindroth, A.; Pilegaard, K.; Vesala, T.; Eklundh, L.
2011-12-01
Satellite sensor-derived images cover the ground surface continuously throughout the landscape and are therefore suitable for regional and global estimation of carbon dioxide (CO2) exchange. This study is aimed at developing an empirical model for regional estimation of gross primary productivity (GPP) in Nordic forests by using data from the Moderate Resolution Imaging Spectroradiometer (MODIS) and modeled incoming photosynthetic photon flux density (PPFD). Eddy covariance-measured net ecosystem exchange (NEE) from three deciduous and ten coniferous sites was partitioned into GPP. Linear regression analyses were made on 8-day averages of GPP in relation to MODIS 8-day composite data and 8-day averages of PPFD. Time series of the two-band enhanced vegetation index (EVI2) were calculated from MODIS 500 m reflectance data (MOD09A1). In order to reduce noise in the data, these time series were smoothed by a curve-fitting procedure. For most sites, fairly strong to strong relationships were found between GPP and the product of EVI2 and PPFD (deciduous: R2 = 0.45-0.86; coniferous: R2 = 0.49-0.90). Similar relationships were found for GPP versus the product of EVI2 and the MODIS 1 km daytime land surface temperature (LST, MOD11A2) (R2 = 0.55-0.81, 0.57-0.77), and for GPP versus EVI2, PPFD and daytime LST in multiple linear regressions (R2 = 0.73-0.89, 0.65-0.93). The slope coefficient for GPP versus the product of EVI2 and PPFD was used as a proxy variable for the light use efficiency (LUE). An attempt was made to model the between-site variation in slope by linear regressions against other variables, but all relationships were found to be weak or very weak. One year of data was collected from each coniferous site and treated as one sample, in order to derive one general empirical model for GPP versus the product of EVI2 and PPFD (R2 = 0.70). General models were also derived for GPP versus the product of EVI2 and daytime LST (R2 = 0.62) and for GPP versus EVI2, PPFD and
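The empirical model is essentially a straight-line fit of 8-day GPP against the EVI2 x PPFD product, with the slope serving as the LUE proxy. The sketch below uses synthetic numbers, not the study's data, and the units are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
evi2 = rng.uniform(0.1, 0.6, 46)        # one year of 8-day composites
ppfd = rng.uniform(5, 45, 46)           # mol m-2 d-1 (assumed units)
gpp = 0.9 * evi2 * ppfd + rng.normal(0, 0.8, 46)  # synthetic "truth"

# Regress GPP on the EVI2 * PPFD product; the slope is the LUE proxy.
x = evi2 * ppfd
slope, intercept = np.polyfit(x, gpp, 1)
pred = slope * x + intercept
r2 = 1 - np.sum((gpp - pred) ** 2) / np.sum((gpp - np.mean(gpp)) ** 2)
print(round(slope, 2), round(r2, 2))
```

Fitting this per site yields the site-specific slopes compared in the study; pooling all site-years into one regression yields the general model (R2 = 0.70 in the paper).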
Pan-Arctic TV Series on Inuit wellness: a northern model of communication for social change?
Johnson, Rhonda; Morales, Robin; Leavitt, Doreen; Carry, Catherine; Kinnon, Dianne; Rideout, Denise; Clarida, Kath
2011-06-01
This paper provides highlights of a utilization-focused evaluation of a collaborative Pan-Arctic Inuit Wellness TV Series that was broadcast live in Alaska and Canada in May 2009. This International Polar Year (IPY) communication and outreach project intended to (1) share information on International Polar Year research progress, disseminate findings and explore questions with Inuit in Alaska, Canada and Greenland; (2) provide a forum for Inuit in Alaska, Canada and Greenland to showcase innovative health and wellness projects; (3) ensure Inuit youth and adult engagement throughout; and (4) document and reflect on the overall experience for the purposes of developing and "testing" a participatory communication model. Utilization-focused formative evaluation of the project, with a focus on overall objectives, key messages and lessons learned to facilitate program improvement. Participant observation, surveys, key informant interviews, document review and website tracking. Promising community programs related to 3 themes - men's wellness, maternity care and youth resilience - in diverse circumpolar regions were highlighted, as were current and still-evolving findings from ongoing Arctic research. Multiple media methods were used to effectively deliver and receive key messages determined by both community and academic experts. Local capacity and new regional networks were strengthened. Evidence-based resources for health education and community action were archived in digital formats (websites and DVDs), increasing accessibility to otherwise isolated individuals and remote communities. The Pan-Arctic Inuit Wellness TV Series was an innovative, multi-dimensional communication project that raised both interest and awareness about complex health conditions in the North and stimulated community dialogue and potential for increased collaborative action. Consistent with a communication for social change approach, the project created new networks and increased motivation to act.
Revealing the Organization of Complex Adaptive Systems through Multivariate Time Series Modeling
David G. Angeler
2011-09-01
Revealing the adaptive responses of ecological, social, and economic systems to a transforming biosphere is crucial for understanding system resilience and preventing collapse. However, testing the theory that underpins complex adaptive system organization (e.g., panarchy theory) is challenging. We used multivariate time series modeling to identify scale-specific system organization and, by extension, apparent resilience mechanisms. We used a 20-year time series of invertebrates and phytoplankton from 26 Swedish lakes to test the proposition that a few key structuring environmental variables at specific scales create discontinuities in community dynamics. Cross-scale structure was manifested in two independent species groups within both communities across lakes. The first species group showed patterns of directional temporal change, which was related to environmental variables that acted at broad spatiotemporal scales (reduced sulfate deposition, North Atlantic Oscillation). The second species group showed fluctuation patterns, which often could not be explained by environmental variables. However, when significant relationships were found, species-group trends were predicted by variables (total organic carbon, nutrients) that acted at narrower spatial scales (i.e., catchment and lake). Although the sets of environmental variables that predicted the species groups differed between phytoplankton and invertebrates, the scale-specific imprints of keystone environmental variables for creating cross-scale structure were clear for both communities. Temporal trends of functional groups did not track the observed structural changes, suggesting functional stability despite structural change. Our approach allows for identifying scale-specific patterns and processes, thus providing opportunities for better characterization of complex adaptive system organization and dynamics. This, in turn, holds potential for more accurate evaluation of resilience in
Nielsen, Joakim Refslund; Dellwik, Ebba; Hahmann, Andrea N.
2014-01-01
A method is presented for the development of satellite green vegetation fraction (GVF) time series for use in the Weather Research and Forecasting (WRF) model. In the WRF model, the GVF data are used to describe the temporal evolution of many land surface parameters, in addition to the evolution...
Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction
Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.
2012-12-01
The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which comprises over 20 reservoirs and supplies over 1 billion gallons of water per day to more than 9 million customers. DEP's "West of Hudson" reservoirs located in the Catskill Mountains are unfiltered per a renewable filtration avoidance determination granted by the EPA. While water quality is usually pristine, high volume storm events occasionally cause the reservoirs to become highly turbid. A logical strategy for turbidity control is to temporarily remove the turbid reservoirs from service. While effective in limiting delivery of turbid water and reducing the need for in-reservoir alum flocculation, this strategy runs the risk of negatively impacting water supply reliability. Thus, it is advantageous for DEP to understand how long a particular turbidity event will affect their system. In order to understand the duration, intensity and total load of a turbidity event, predictions of future in-stream turbidity values are important. Traditionally, turbidity predictions have been carried out by applying streamflow observations/forecasts to a flow-turbidity rating curve. However, predictions from rating curves are often inaccurate due to inter- and intra-event variability in flow-turbidity relationships. Predictions can be improved by applying an autoregressive moving average (ARMA) time series model in combination with a traditional rating curve. Since 2003, DEP and the Upstate Freshwater Institute have compiled a relatively consistent set of 15-minute turbidity observations at various locations on Esopus Creek above Ashokan Reservoir. Using daily averages of these data and streamflow observations at nearby USGS gauges, flow-turbidity rating curves were developed via linear regression. Time series analysis revealed that the linear regression residuals may be represented using an ARMA(1,2) process. Based on this information, flow-turbidity regressions with
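The rating-curve-plus-ARMA idea can be sketched with synthetic data. The flow and turbidity values, the AR(1) residual structure (a simplified stand-in for the ARMA(1,2) model mentioned), and all coefficients below are illustrative assumptions, not DEP data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic daily streamflow (log space) and AR(1) "event memory" in the
# flow-turbidity relation (names and coefficients are illustrative only).
log_q = rng.normal(3.0, 0.8, n)
resid = np.zeros(n)
for t in range(1, n):
    resid[t] = 0.8 * resid[t - 1] + rng.normal(0, 0.2)
log_turb = 0.5 + 1.2 * log_q + resid           # true rating curve + AR(1) error

# 1) Rating curve: ordinary least squares in log-log space.
b, a = np.polyfit(log_q, log_turb, 1)
rating_pred = a + b * log_q
e = log_turb - rating_pred                      # rating-curve residuals

# 2) AR(1) model of the residuals (a minimal stand-in for ARMA(1,2)).
phi = np.dot(e[:-1], e[1:]) / np.dot(e[:-1], e[:-1])

# 3) One-step-ahead forecast: rating curve plus predicted residual.
combined_pred = rating_pred[1:] + phi * e[:-1]

rmse_rating = np.sqrt(np.mean(e[1:] ** 2))
rmse_combined = np.sqrt(np.mean((log_turb[1:] - combined_pred) ** 2))
print(rmse_rating, rmse_combined)
```

Because the residual autocorrelation carries event-to-event memory that the static rating curve cannot represent, the combined forecast error should come out noticeably lower than the rating-curve error alone.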
Vihermaa, Leena; Waldron, Susan; Newton, Jason
2013-04-01
Two small streams (New Colpita and Main Trail) and two rivers (Tambopata and La Torre), in the Tambopata National Reserve, Madre de Dios, Peru, were sampled for water chemistry (conductivity, pH and dissolved oxygen) and hydrology (stage height and flow velocity). In the small streams, water chemistry and hydrology variables were logged at 15-minute intervals from February 2011 to November 2012. Water samples were collected from all four channels during field campaigns spanning different seasons and targeting the hydrological extremes. All the samples were analysed for dissolved inorganic carbon (DIC) concentration and δ13C (sample size ranging from 77 to 172 depending on the drainage system) and a smaller subset for dissolved organic carbon (DOC) and particulate organic carbon (POC) concentrations. Strong positive relationships were found between conductivity and both DIC concentration and δ13C in the New Colpita stream and the La Torre river. In the Tambopata river the trends were less clear, and in the Main Trail stream there was very little change in DIC and isotopic composition. The conductivity data were used to model a continuous DIC time series for the New Colpita stream. The modelled DIC data agreed well with the measurements; the concordance correlation coefficients between predicted and measured data were 0.91 and 0.87 for mM-DIC and δ13C-DIC, respectively. The predictions of δ13C-DIC were improved when calendar month was included in the model, which indicates seasonal differences in the δ13C-DIC conductivity relationship. At present, continuous DIC sampling still requires expensive instrumentation. Therefore, modelling DIC from a proxy variable which can be monitored continuously with ease and at relatively low cost, such as conductivity, provides a powerful alternative method of DIC determination.
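A minimal sketch of predicting DIC from a conductivity proxy, with and without a calendar-month term, might look like the following; the data, coefficients, and the scoring of synthetic values with Lin's concordance correlation coefficient are all illustrative assumptions, not the Tambopata measurements:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative conductivity-DIC relation with a seasonal (monthly) offset
# (synthetic stand-in for the New Colpita data).
n = 300
month = rng.integers(1, 13, n)
cond = rng.uniform(20, 80, n)                        # uS/cm, illustrative
season = 0.05 * np.sin(2 * np.pi * month / 12)
dic = 0.02 * cond + season + rng.normal(0, 0.02, n)  # mM, illustrative

# Model 1: conductivity only.  Model 2: conductivity + month dummies.
X1 = np.column_stack([np.ones(n), cond])
X2 = np.column_stack([X1, np.eye(12)[month - 1][:, 1:]])  # 11 month dummies
pred1 = X1 @ np.linalg.lstsq(X1, dic, rcond=None)[0]
pred2 = X2 @ np.linalg.lstsq(X2, dic, rcond=None)[0]

def ccc(x, y):
    """Lin's concordance correlation coefficient."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

print(ccc(dic, pred1), ccc(dic, pred2))
```

With a genuine seasonal component in the data-generating process, the month-augmented model should score a higher concordance, mirroring the improvement the abstract reports when calendar month is included.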
Data-driven modeling based on volterra series for multidimensional blast furnace system.
Gao, Chuanhou; Jian, Ling; Liu, Xueyi; Chen, Jiming; Sun, Youxian
2011-12-01
The multidimensional blast furnace system is one of the most complex industrial systems and, as such, there are still many unsolved theoretical and experimental difficulties, such as silicon prediction and blast furnace automation. For this reason, this paper is concerned with developing data-driven models based on the Volterra series for this complex system. Three kinds of different low-order Volterra filters are designed to predict the hot metal silicon content collected from a pint-sized blast furnace, in which a sliding window technique is used to update the filter kernels in a timely manner. The predictive results indicate that the linear Volterra predictor can describe the evolution of the studied silicon sequence effectively, with a high percentage of hitting the target, very low root mean square error, and a satisfactory confidence level regarding the reliability of future predictions. These advantages and the low computational complexity reveal that the sliding-window linear Volterra filter holds considerable potential for the multidimensional blast furnace system. The limitations of the constructed Volterra models are also analyzed, and possible directions for future investigation are pointed out.
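A first-order (linear) Volterra filter with a sliding-window kernel update can be sketched as below; the synthetic series, window length, and kernel memory are illustrative assumptions, not the furnace data or the paper's exact filters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative silicon-content-like series: slow AR(2) dynamics plus noise
# (the furnace data themselves are not public; this is a synthetic stand-in).
n = 400
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.2 * x[t - 1] - 0.4 * x[t - 2] + rng.normal(0, 0.1)

p = 5        # memory length of the linear Volterra (FIR) kernel
w = 100      # sliding-window length for timely kernel updates

preds, actual = [], []
for t in range(w + p, n):
    # Build the regression matrix from the last `w` windows of length `p`.
    X = np.array([x[i - p:i] for i in range(t - w, t)])
    y = x[t - w:t]
    kernel, *_ = np.linalg.lstsq(X, y, rcond=None)  # kernel re-estimated each step
    preds.append(x[t - p:t] @ kernel)
    actual.append(x[t])

preds, actual = np.array(preds), np.array(actual)
rmse = np.sqrt(np.mean((preds - actual) ** 2))
# "Hit rate": fraction of predictions within one noise standard deviation.
hit = np.mean(np.abs(preds - actual) < 0.1)
print(rmse, hit)
```

Refitting the kernel on a moving window is what keeps the predictor tracking drifting process conditions; a second-order Volterra filter would simply append the pairwise products of the window samples as extra regressors.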
A Study of Time Series Model for Predicting Jute Yarn Demand: Case Study
C. L. Karmaker
2017-01-01
In today's competitive environment, predicting sales for upcoming periods in the right quantity is very crucial for ensuring product availability as well as improving customer satisfaction. This paper develops a model to identify the most appropriate method for prediction based on the least values of forecasting errors. Necessary sales data of jute yarn were collected from a jute product manufacturer in Bangladesh, namely, Akij Jute Mills, Akij Group Ltd., in Noapara, Jessore. A time series plot of the demand data indicates that demand fluctuates over the period of time. In this paper, eight different forecasting techniques including simple moving average, single exponential smoothing, trend analysis, Winters method, and Holt's method were performed using Minitab 17 statistical software. The performance of all methods was evaluated on the basis of forecasting accuracy, and the analysis shows that the Winters additive model gives the best performance in terms of the lowest error determinants. This work can be a guide for Bangladeshi manufacturers as well as other researchers to identify the most suitable forecasting technique for their industry.
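The kind of comparison described, scoring smoothing methods by forecast error on trending demand, can be illustrated with two of the simpler methods; the demand series and smoothing parameters below are invented for illustration, not the Akij Jute Mills data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative monthly demand with an upward trend.
t = np.arange(60)
demand = 100 + 3.0 * t + rng.normal(0, 5, 60)

def ses(y, alpha=0.3):
    """Single exponential smoothing: one-step-ahead forecasts."""
    f = np.empty_like(y)
    f[0] = y[0]
    for i in range(1, len(y)):
        f[i] = alpha * y[i - 1] + (1 - alpha) * f[i - 1]
    return f

def holt(y, alpha=0.3, beta=0.1):
    """Holt's linear-trend method: one-step-ahead forecasts."""
    f = np.empty_like(y)
    level, trend = y[0], y[1] - y[0]
    f[0] = y[0]
    for i in range(1, len(y)):
        f[i] = level + trend                       # forecast before seeing y[i]
        new_level = alpha * y[i] + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return f

def mape(y, f):
    return np.mean(np.abs((y - f) / y)) * 100

m_ses = mape(demand[1:], ses(demand)[1:])
m_holt = mape(demand[1:], holt(demand)[1:])
print(m_ses, m_holt)
```

On a trending series, the single-smoothing forecast lags the trend by roughly `trend/alpha`, so the trend-aware method should come out ahead; Winters' method would add a third recursion for a seasonal component in the same style.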
Atlantic-Arctic exchange in a series of ocean model simulations (CORE-II)
Roth, Christina; Behrens, Erik; Biastoch, Arne
2014-05-01
In this study we aim to improve the understanding of exchange processes between the North Atlantic and the Arctic Ocean. The Nordic Seas build an important connector between these regions, by receiving and modifying warm and saline Atlantic waters, and by providing dense overflow as a backbone of the Atlantic Meridional Overturning Circulation (AMOC). Using a hierarchy of global ocean/sea-ice models, the specific role of the Nordic Seas, both in providing a feedback with the AMOC and as a modulator of the Atlantic water flowing into the Arctic Ocean, is examined. The model simulations were performed under the CORE-II protocol, in which atmospheric forcing of the past 60 years was applied in a consecutive series of five iterations. During the course of this 300-year-long integration, the AMOC shows substantial changes, which are correlated with water mass characteristics of the Denmark Strait overflow. Quantitative analyses using Lagrangian trajectories explore the impact of these trends on the Arctic Ocean through the Barents Sea and the Fram Strait.
LU Wei-cai; XU Shao-quan
2004-01-01
Using the similar single-difference methodology (SSDM) to solve for the deformation values of the monitoring points, the deformation information series is sometimes unstable. In order to overcome this shortcoming, a Kalman filtering algorithm for this series is established, and its correctness and validity are verified with test data obtained on a movable platform in the plane. The results show that Kalman filtering can improve the correctness, reliability, and stability of the deformation information series.
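A scalar Kalman filter of the kind used to stabilize a noisy deformation series can be sketched as follows; the random-walk state model, the noise variances, and the synthetic data are illustrative assumptions, not the platform test data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative deformation series: slow drift observed through noisy
# single-difference measurements (all values synthetic).
n = 200
truth = np.cumsum(rng.normal(0, 0.02, n))   # random-walk deformation
obs = truth + rng.normal(0, 0.5, n)         # unstable raw series

# Scalar Kalman filter with a random-walk state model:
#   x_t = x_{t-1} + w_t,  w ~ N(0, q);   z_t = x_t + v_t,  v ~ N(0, r)
q, r = 0.02 ** 2, 0.5 ** 2
x, p = obs[0], 1.0
filtered = np.empty(n)
for t in range(n):
    p = p + q                      # predict step: propagate uncertainty
    k = p / (p + r)                # Kalman gain
    x = x + k * (obs[t] - x)       # update with measurement z_t
    p = (1 - k) * p
    filtered[t] = x

rmse_raw = np.sqrt(np.mean((obs - truth) ** 2))
rmse_kf = np.sqrt(np.mean((filtered - truth) ** 2))
print(rmse_raw, rmse_kf)
```

The steady-state error of this filter scales roughly with the geometric mean of the process and measurement noise, so the filtered series should track the underlying drift far more stably than the raw observations.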
Advanced methods for modeling water-levels and estimating drawdowns with SeriesSEE, an Excel add-in
Halford, Keith; Garcia, C. Amanda; Fenelon, Joe; Mirus, Benjamin B.
2012-12-21
Water-level modeling is used for multiple-well aquifer tests to reliably differentiate pumping responses from natural water-level changes in wells, or “environmental fluctuations.” Synthetic water levels are created during water-level modeling and represent the summation of multiple component fluctuations, including those caused by environmental forcing and pumping. Pumping signals are modeled by transforming step-wise pumping records into water-level changes by using superimposed Theis functions. Water-levels can be modeled robustly with this Theis-transform approach because environmental fluctuations and pumping signals are simulated simultaneously. Water-level modeling with Theis transforms has been implemented in the program SeriesSEE, which is a Microsoft® Excel add-in. Moving average, Theis, pneumatic-lag, and gamma functions transform time series of measured values into water-level model components in SeriesSEE. Earth tides and step transforms are additional computed water-level model components. Water-level models are calibrated by minimizing a sum-of-squares objective function where singular value decomposition and Tikhonov regularization stabilize results. Drawdown estimates from a water-level model are the summation of all Theis transforms minus residual differences between synthetic and measured water levels. The accuracy of drawdown estimates is limited primarily by noise in the data sets, not the Theis-transform approach. Drawdowns much smaller than environmental fluctuations have been detected across major fault structures, at distances of more than 1 mile from the pumping well, and with limited pre-pumping and recovery data at sites across the United States. In addition to water-level modeling, utilities exist in SeriesSEE for viewing, cleaning, manipulating, and analyzing time-series data.
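The Theis-transform idea, superposing well-function responses for each step change in pumping, can be sketched as below. The series form of W(u) and the transmissivity, storativity, and distance values are assumptions for the sketch (SeriesSEE itself is an Excel add-in, not Python):

```python
import numpy as np

def theis_W(u, terms=30):
    """Theis well function W(u) = -gamma - ln(u) + sum_{k>=1} (-1)^(k+1) u^k/(k*k!),
    accurate for the small u typical of aquifer tests."""
    gamma = 0.5772156649015329
    s = -gamma - np.log(u)
    fact = 1.0
    for k in range(1, terms + 1):
        fact *= k
        s += ((-1) ** (k + 1)) * u ** k / (k * fact)
    return s

# Drawdown from a step-wise pumping record by superposition of Theis functions.
# T (transmissivity), S (storativity), r (distance) are illustrative values.
T, S, r = 500.0, 1e-4, 100.0          # m^2/day, dimensionless, m

def drawdown(t, steps):
    """steps: list of (start_time, rate change dQ in m^3/day)."""
    s = 0.0
    for t0, dq in steps:
        if t > t0:
            u = r * r * S / (4.0 * T * (t - t0))
            s += dq / (4.0 * np.pi * T) * theis_W(u)
    return s

# Pump at 1000 m^3/day from t=0 days, then shut off at t=10 days (recovery).
steps = [(0.0, 1000.0), (10.0, -1000.0)]
s_pumping = drawdown(5.0, steps)     # during pumping
s_recovery = drawdown(30.0, steps)   # residual drawdown during recovery
print(s_pumping, s_recovery)
```

Each rate change contributes its own Theis response, so shut-off is simply a superposed step of opposite sign; the residual drawdown during recovery is therefore the small difference of two well functions.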
Nestorov, I A; Aarons, L J; Rowland, M
1997-08-01
Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, have been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. The following general conclusions were drawn: (i) the overall
Analysis of MODIS snow cover time series over the alpine regions as input for hydrological modeling
Notarnicola, Claudia; Rastner, Philipp; Irsara, Luca; Moelg, Nico; Bertoldi, Giacomo; Dalla Chiesa, Stefano; Endrizzi, Stefano; Zebisch, Marc
2010-05-01
Snow extent and relative physical properties are key parameters in hydrology, weather forecasting and hazard warning, as well as in climatological models. Satellite sensors offer a unique advantage in monitoring snow cover due to their temporal and spatial synoptic view. The Moderate Resolution Imaging Spectrometer (MODIS) from NASA is especially useful for this purpose due to its high frequency. However, in order to evaluate the role of snow in the water cycle of a catchment, such as runoff generation due to snowmelt, remote sensing data need to be assimilated into hydrological models. This study presents a comparison on a multi-temporal basis between snow cover data derived from (1) MODIS images, (2) LANDSAT images, and (3) predictions by the hydrological model GEOtop [1,3]. The test area is located in the catchment of the Matscher Valley (South Tyrol, Northern Italy). The snow cover maps derived from MODIS images are obtained using a newly developed algorithm taking into account the specific requirements of mountain regions, with a focus on the Alps [2]. This algorithm requires the standard MODIS products MOD09 and MOD02 as input data and generates snow cover maps at a spatial resolution of 250 m. The final output is a combination of MODIS AQUA and MODIS TERRA snow cover maps, thus reducing the presence of cloudy pixels and no-data values due to topography. By using these maps, daily time series starting from the winter season (November - May) 2002 till 2008/2009 have been created. Along with snow maps from MODIS images, some snow cover maps derived from LANDSAT images have also been used. Due to their high resolution (… [2] "…manto nevoso in aree alpine con dati MODIS multi-temporali e modelli idrologici", 13th ASITA National Conference, 1-4.12.2009, Bari, Italy. [3] Zanotti F., Endrizzi S., Bertoldi G. and Rigon R. 2004. The GEOtop snow module. Hydrological Processes, 18: 3667-3679. DOI:10.1002/hyp.5794.
Prechanon Kumkratug
2012-01-01
Problem statement: It is becoming increasingly important to fully utilize existing transmission system assets due to environmental legislation, rights-of-way issues, costs of construction, and deregulation policies introduced in recent years. The Thyristor Controlled Series Capacitor (TCSC) has been proposed for better control of power flow and dynamic performance. The exact long transmission line model consists of the resistance and reactance. Most previous research has studied the transient stability performance of the TCSC in an SMIB system while neglecting the resistance of the line. Thus the full capability of the TCSC for transient stability improvement of a power system may not be exploited. Consideration of the resistance makes the mathematical model difficult to derive. Approach: This study investigates the effect of the TCSC on the transient stability of a power system, taking into consideration the exact long transmission line model. The concept of a two-port network is applied to simplify the mathematical model of the power system. The proposed method is tested on a sample system and compared across various cases. Results: The first swing of the rotor angle curve of the faulted system without resistance is obviously higher than that with resistance, whereas the second swing of the faulted system without resistance is slightly less than that with resistance. The system with a TCSC can improve the transient stability of the power system. Conclusion: It was found from this study that the TCSC and the resistance of the line can improve the first swing of the rotor angle. However, the resistance of the line has a negative effect on the second swing of the rotor angle. The simulation results indicate that for a practical long line, the resistance is a very important parameter for evaluating the transient stability of the power system.
Prechanon Kumkratug
2012-01-01
Problem statement: It is becoming increasingly important to fully utilize existing transmission system assets due to environmental legislation, rights-of-way issues, costs of construction, and deregulation policies introduced in recent years. The Thyristor Controlled Series Capacitor (TCSC) has been proposed for better control of power flow and dynamic performance. The exact medium transmission line model consists of the resistance and reactance. Most previous research has studied the transient stability performance of the TCSC in an SMIB system while neglecting the resistance of the line. Thus the full capability of the TCSC for transient stability improvement of a power system may not be exploited. Consideration of the resistance makes the mathematical model difficult to derive. Approach: This study investigates the effect of the TCSC on the transient stability of a power system, taking into consideration the exact medium transmission line model. The concept of a two-port network is applied to simplify the mathematical model of the power system. The proposed method is tested on a sample system and compared across various cases. Results: The first swing of the rotor angle curve of the faulted system without resistance is obviously higher than that with resistance, whereas the second swing of the faulted system without resistance is slightly less than that with resistance. The system with a TCSC can improve the transient stability of the power system. Conclusion: It was found from this study that the TCSC and the resistance of the line can improve the first swing of the rotor angle. However, the resistance of the line has a negative effect on the second swing of the rotor angle. The simulation results indicate that for a practical medium line, the resistance is a very important parameter for evaluating the transient stability of the power system.
Kim, J R; Ko, J H; Im, J H; Lee, S H; Kim, S H; Kim, C W; Park, T J
2006-01-01
The information on the incoming load to wastewater treatment plants is often not available for applying modelling to evaluate the effect of control actions on a full-scale plant. In this paper, a time series model was developed to forecast flow rate, COD, NH4(+)-N and PO4(3-)-P in influent by using 250 days of field plant operation data. The data for 150 days and 100 days were used for model development and model validation, respectively. The missing data were interpolated by the spline method and the time series model. Three different methods were proposed for model development: one model and one-step to seven-step ahead forecasting (Method 1); seven models and one-step-ahead forecasting (Method 2); and one model and one-step-ahead forecasting (Method 3). Method 3 featured only one-step-ahead forecasting, which could avoid accumulated error and give simple estimation of coefficients. Therefore, Method 3 was a reliable approach to developing the time series model for the purpose of this research.
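The contrast between one-step-ahead forecasting (as in Method 3) and multi-step-ahead forecasting with accumulated error (as in Method 1) can be illustrated with a simple AR(1) influent-like series; the series parameters, the known mean, and the 150/100-day split analogue are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative influent-flow-like AR(1) series (not real plant data);
# the mean level of 50 is assumed known for simplicity.
n = 250
x = np.full(n, 50.0)
for t in range(1, n):
    x[t] = 50 + 0.7 * (x[t - 1] - 50) + rng.normal(0, 3)

train, test = x[:150], x[150:]

# Fit the AR(1) coefficient on the first 150 days by least squares.
phi = np.dot(train[:-1] - 50, train[1:] - 50) / np.dot(train[:-1] - 50, train[:-1] - 50)

# Method 3 analogue: one-step-ahead forecasts over the validation period.
one_step = 50 + phi * (x[149:-1] - 50)

# Method 1 analogue: seven-step-ahead recursive forecasts, where the
# innovation errors accumulate over the horizon.
seven_step = 50 + phi ** 7 * (x[143:-7] - 50)

rmse1 = np.sqrt(np.mean((test - one_step) ** 2))
rmse7 = np.sqrt(np.mean((test - seven_step) ** 2))
print(rmse1, rmse7)
```

The seven-step forecast variance grows with the horizon, which is exactly the accumulated-error effect that motivated restricting the model to one-step-ahead forecasting.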
76 FR 56290 - Airworthiness Directives; Bombardier, Inc. Model DHC-8-400 Series Airplanes
2011-09-13
... Review Board Report) of the Bombardier DHC-8 Series 400 Maintenance Requirements Manual (PSM 1-84-7...) of the Bombardier DHC-8 Series 400 Maintenance Requirements Manual (PSM 1-84-7). Detailed Inspection... (PSM 1-84-7). Doing this revision terminates the requirements of paragraphs (h) and (l) of this AD....
Horacio Fernández Castaño
2010-01-01
In modeling volatility with sudden changes, it is imperative to use models that can describe and analyze the dynamics of volatility, since investors, among other things, may be interested in estimating the rate of return and the volatility of a financial instrument or other derivatives only during the holding period. This article, the first of two installments, evaluates the asymmetric EGARCH model, which proves very useful for studying the dynamics of the General Index of the Colombian Stock Exchange (IGBC) and its volatility. It begins with a brief review of the GARCH model, highlighting its importance in the modeling of financial time series and identifying its weaknesses concerning the symmetry property for heavy-tailed distributions, which can generate forecast errors. It then shows the importance of the EGARCH model for capturing stylized facts that GARCH models cannot; the results suggest that asymmetric models could be more useful for capturing the stylized facts of Colombian market behavior, and their importance for estimating the volatility of daily returns of the IGBC is demonstrated.
ARMA modelled time-series classification for structural health monitoring of civil infrastructure
Peter Carden, E.; Brownjohn, James M. W.
2008-02-01
Structural health monitoring (SHM) is the subject of a great deal of ongoing research leading to the capability that reliable remote monitoring of civil infrastructure would allow a shift from schedule-based to condition-based maintenance strategies. The first stage in such a system would be the indication of an extraordinary change in the structure's behaviour. A statistical classification algorithm is presented here which is based on analysis of a structure's response in the time domain. The time-series responses are fitted with Autoregressive Moving Average (ARMA) models and the ARMA coefficients are fed to the classifier. The classifier is capable of learning in an unsupervised manner and of forming new classes when the structural response exhibits change. The approach is demonstrated with experimental data from the IASC-ASCE benchmark four-storey frame structure, the Z24 bridge and the Malaysia-Singapore Second Link bridge. The classifier is found to be capable of identifying structural change in all cases and of forming distinct classes corresponding to different structural states in most cases.
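The classification pipeline, fitting time-domain model coefficients to response windows and classifying in coefficient space, can be sketched as below; the AR(2) model order, the synthetic "healthy"/"damaged" dynamics, and the nearest-centroid rule are simplifying assumptions, not the paper's unsupervised classifier:

```python
import numpy as np

rng = np.random.default_rng(5)

def ar2_coeffs(y):
    """Least-squares AR(2) coefficients of one response window."""
    X = np.column_stack([y[1:-1], y[:-2]])
    b, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    return b

def simulate(a1, a2, n=300):
    """Synthetic structural response with given AR(2) dynamics."""
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + rng.normal(0, 1)
    return y

# Two structural states with different modal properties (a synthetic
# stand-in for the undamaged vs. changed structure).
healthy = [ar2_coeffs(simulate(1.5, -0.8)) for _ in range(20)]
damaged = [ar2_coeffs(simulate(1.2, -0.6)) for _ in range(20)]

c_h = np.mean(healthy, axis=0)
c_d = np.mean(damaged, axis=0)

# Classify a fresh response window by its nearest coefficient centroid.
new = ar2_coeffs(simulate(1.2, -0.6))
label = "damaged" if np.linalg.norm(new - c_d) < np.linalg.norm(new - c_h) else "healthy"
print(label)
```

Because a structural change shifts the modal frequencies and damping, it shifts the fitted coefficients, so windows from different structural states form separable clusters in coefficient space; an unsupervised variant would open a new class whenever a window falls far from every existing centroid.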
Low-derivative operators of the Standard Model effective field theory via Hilbert series methods
Lehman, Landon; Martin, Adam [Department of Physics, University of Notre Dame,Nieuwland Science Hall, Notre Dame, IN 46556 (United States)
2016-02-12
In this work, we explore an extension of Hilbert series techniques to count operators that include derivatives. For sufficiently low-derivative operators, we conjecture an algorithm that gives the number of invariant operators, properly accounting for redundancies due to the equations of motion and integration by parts. Specifically, the conjectured technique can be applied whenever there is only one Lorentz invariant for a given partitioning of derivatives among the fields. At higher numbers of derivatives, equation of motion redundancies can be removed, but the increased number of Lorentz contractions spoils the subtraction of integration by parts redundancies. While restricted, this technique is sufficient to automatically recreate the complete set of invariant operators of the Standard Model effective field theory for dimensions 6 and 7 (for arbitrary numbers of flavors). At dimension 8, the algorithm does not automatically generate the complete operator set; however, it suffices for all but five classes of operators. For these remaining classes, there is a well defined procedure to manually determine the number of invariants. Assuming our method is correct, we derive a set of 535 dimension-8 N_f = 1 operators.
T. V. O. Fabson
2011-11-01
Bullwhip (or whiplash) effect is an observed phenomenon in forecast-driven distribution channels, and careful management of these effects is of great importance to managers of supply chains. The bullwhip effect refers to situations where orders to the suppliers tend to have larger variance than sales to the buyer (demand distortion), and the distortion increases as we move up the supply chain. Because customer demand for a product is unstable, business managers must forecast in order to properly position inventory and other resources. Forecasts are statistically based and, in most cases, are not very accurate. The existence of forecast errors makes it necessary for organizations to often carry an inventory buffer called "safety stock". Moving up the supply chain from the end customers to the raw materials supplier, a lot of variation in demand can be observed, which calls for a greater need for safety stock. This study compares the efficacy of simulation and time series models in quantifying the bullwhip effect in supply chain management.
Low-derivative operators of the Standard Model effective field theory via Hilbert series methods
Lehman, Landon; Martin, Adam
2016-02-01
In this work, we explore an extension of Hilbert series techniques to count operators that include derivatives. For sufficiently low-derivative operators, we conjecture an algorithm that gives the number of invariant operators, properly accounting for redundancies due to the equations of motion and integration by parts. Specifically, the conjectured technique can be applied whenever there is only one Lorentz invariant for a given partitioning of derivatives among the fields. At higher numbers of derivatives, equation of motion redundancies can be removed, but the increased number of Lorentz contractions spoils the subtraction of integration by parts redundancies. While restricted, this technique is sufficient to automatically recreate the complete set of invariant operators of the Standard Model effective field theory for dimensions 6 and 7 (for arbitrary numbers of flavors). At dimension 8, the algorithm does not automatically generate the complete operator set; however, it suffices for all but five classes of operators. For these remaining classes, there is a well defined procedure to manually determine the number of invariants. Assuming our method is correct, we derive a set of 535 dimension-8 N_f = 1 operators.
Ramli, Nazirah; Mutalib, Siti Musleha Ab; Mohamad, Daud
2017-08-01
A fuzzy time series forecasting model has been proposed since 1993 to cater for data in linguistic values. Many improvements and modifications have been made to the model, such as enhancements on the length of intervals and the types of fuzzy logical relations. However, most of the improved models represent the linguistic terms in the form of discrete fuzzy sets. In this paper, a fuzzy time series model with data in the form of trapezoidal fuzzy numbers and a natural partitioning length approach is introduced for predicting the unemployment rate. Two types of fuzzy relations are used in this study, namely first-order and second-order fuzzy relations. The proposed model can produce the forecasted values under different degrees of confidence.
David E. Allen
2016-03-01
This paper features an analysis of major currency exchange rate movements in relation to the US dollar, as constituted in US dollar terms. The euro, British pound, Chinese yuan, and Japanese yen are modelled using a variety of non-linear models, including smooth transition regression models, logistic smooth transition regression models, threshold autoregressive models, nonlinear autoregressive models, and additive nonlinear autoregressive models, plus neural network models. The models are evaluated on the basis of error metrics for twenty-day out-of-sample forecasts using the mean average percentage error (MAPE). The results suggest that there is no dominating class of time series models, and the different currency pairs' relationships with the US dollar are captured best by neural net regression models, over the ten-year sample of daily exchange rate returns data, from August 2005 to August 2015.
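MAPE, the error metric used for ranking the forecasting models above, is straightforward to compute; the toy series and flat "naive" forecast below are illustrative values, not exchange rate data:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Toy check on a 20-day horizon (values illustrative only).
actual = np.array([1.10, 1.12, 1.11, 1.15, 1.13] * 4)
naive = np.full_like(actual, actual[0])    # flat forecast at the last known value
print(round(mape(actual, naive), 3))
```

Being scale-free, MAPE allows forecasts for differently priced currency pairs to be compared on one scale, though it is undefined when an actual value is zero and penalizes over-forecasts more than under-forecasts of the same absolute size.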
Louis, J.P.
2004-07-01
The modeling of a system to be automated is a key step in the determination of the control laws, because these laws are based on inverse models deduced from direct models. The ideal example is the DC actuator, whose simplicity allows one to shift directly from the modeling to the control law. For AC actuators, the modeling tools are based on the classical hypotheses: linearity, first harmonics, symmetry. They lead to very efficient models which allow one to study the properties, in dynamic and steady-state regimes, of the most important actuators: synchronous motors, asynchronous motors, voltage inverters. Some extensions to other kinds of machines which do not fulfill the classical hypotheses are also proposed: synchronous machines with non-sinusoidal field distribution and asynchronous machines in the saturated regime. (J.S.)
Almog, Assaf; Garlaschelli, Diego
2014-09-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
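The link between the binary projection (sign agreement across units) and extreme aggregate increments can be illustrated with a one-factor return model; the factor structure, loadings, and synthetic returns are assumptions for the sketch, not market data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic returns with a common factor: stocks co-move, so large aggregate
# moves should coincide with high sign agreement (illustrative only).
n_days, n_stocks = 2000, 30
market = rng.normal(0, 1, n_days)
returns = 0.6 * market[:, None] + rng.normal(0, 1, (n_days, n_stocks))

signs = np.sign(returns)
agreement = np.abs(signs.mean(axis=1))      # binary-projection statistic
magnitude = np.abs(returns).mean(axis=1)    # non-binary counterpart

# Compare sign agreement on extreme vs. ordinary aggregate-increment days.
extreme = magnitude > np.quantile(magnitude, 0.9)
print(agreement[extreme].mean(), agreement[~extreme].mean())
```

Days with large aggregate increments are dominated by the common factor, which drags most signs in the same direction, reproducing the qualitative binary/non-binary relation the paper quantifies with maximum-entropy ensembles.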
Z Jalali mola
2011-12-01
The Ising model is one of the simplest models describing interacting particles. In this work, we calculate the high-temperature series expansions of the zero-field susceptibility of the Ising model with ferromagnetic, antiferromagnetic, and one antiferromagnetic interaction on the two-dimensional kagome lattice. Using the Padé approximation, we calculate the susceptibility critical exponent of the ferromagnetic Ising model, γ ≈ 1.75, which is consistent with the universality hypothesis. However, the antiferromagnetic and one-antiferromagnetic-interaction Ising models do not show any transition at finite temperature, because of the effect of magnetic frustration.
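The Dlog-Padé step, locating the critical point as a pole of a Padé approximant to d(ln χ)/dβ and reading the exponent off the residue, can be checked exactly on a toy susceptibility; the closed-form χ below is an assumption chosen so that γ = 7/4 is recovered, not the kagome-lattice series itself:

```python
from fractions import Fraction

# Toy model: chi(beta) = (1 - beta/beta_c)^(-gamma) with beta_c = 1/2 and
# gamma = 7/4, whose Dlog series is d(ln chi)/d(beta) = 2*gamma*sum_k (2 beta)^k.
gamma = Fraction(7, 4)
dlog = [2 * gamma * 2 ** k for k in range(6)]   # exact series coefficients

# A [0/1] Pade approximant a0 / (1 + b1*beta) matched to the first two
# coefficients: a0 = c0 and b1 = -c1/c0.
a0 = dlog[0]
b1 = -dlog[1] / dlog[0]
beta_c = -1 / b1                 # simple pole of the approximant
gamma_est = a0 * beta_c          # residue at the pole gives the exponent
print(beta_c, gamma_est)
```

For real lattice series the coefficients are only known to finite order, so one forms a table of higher [L/M] approximants and reads off the stable pole and residue; exact rational arithmetic, as here, avoids any round-off in that step.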
Butera, P.; Comi, M.; Marchesini, G.
1990-06-01
We present simple tables of integers from which it is possible to reconstruct the high-temperature series coefficients through β^14 for the susceptibility, for the second correlation moment, and for the second field derivative of the susceptibility of the O(N) classical Heisenberg model on a simple (hyper)cubic lattice in dimension d = 2, 3, and 4 and for any N. To construct the tables we have used the recent extension of the high-temperature series by M. Lüscher and P. Weisz and some analytic properties in N that we have derived from the Schwinger-Dyson equations of the O(N) model. We also present a numerical study of these series in the d = 2 case. The main results are: (a) the extended series give further support to the Cardy-Hamber-Nienhuis exact formulas for the critical exponents when -2 ≤ N ≤ 2; (b) for N ≥ 3 there are no indications of any critical point at finite β; (c) the series are consistent with the low-temperature asymptotic forms predicted by the perturbative renormalization group.
Schulze-Halberg, Axel, E-mail: axgeschu@iun.edu, E-mail: xbataxel@gmail.com [Department of Mathematics and Actuarial Science and Department of Physics, Indiana University Northwest, 3400 Broadway, Gary, Indiana 46408 (United States); Wang, Jie, E-mail: wangjie@iun.edu [Department of Computer Information Systems, Indiana University Northwest, 3400 Broadway, Gary, Indiana 46408 (United States)
2015-07-15
We obtain series solutions, the discrete spectrum, and supersymmetric partners for a quantum double-oscillator system. Its potential features a superposition of the one-parameter Mathews-Lakshmanan interaction and a one-parameter harmonic or inverse harmonic oscillator contribution. Furthermore, our results are transferred to a generalized Pöschl-Teller model that is isospectral to the double-oscillator system.
2013-05-03
...; Lower Deck Crew Rest Compartments AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final... unusual design feature associated with the installation of lower deck crew rest (LDCR) compartments. The... certificate to install a lower deck crew rest (LDCR) compartment in Airbus Model A340-600 series...
Subanar Subanar
2006-01-01
Recently, one of the central topics for the neural network (NN) community has been the effect of data preprocessing on the use of NNs. In this paper we investigate this topic, in particular the effect of the decomposition method as data preprocessing for modeling time series with both trend and seasonal patterns. The limited empirical studies on seasonal time series forecasting with neural networks disagree: some find that neural networks are able to model seasonality directly and that prior deseasonalization is not necessary, while others conclude just the opposite. In this research we study the effectiveness of data preprocessing, namely detrending and deseasonalization via the decomposition method, on NN modeling and forecasting performance. We use two kinds of data: simulated and real. The simulated data combine multiplicative trend and seasonality patterns. The results are compared to those obtained from classical time series models. Our results show that combined detrending and deseasonalization by the decomposition method is an effective preprocessing step when using NNs to forecast time series with trend and seasonality.
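The decomposition preprocessing step can be sketched as follows. This is a minimal illustration on a synthetic multiplicative series, with a least-squares line standing in for the usual moving-average trend (a simplification, not the exact procedure of the paper):

```python
def decompose_multiplicative(y, period):
    """Classical-style multiplicative decomposition y = trend * seasonal * rest.
    Minimal sketch: a least-squares line stands in for the moving-average
    trend; seasonal indices are per-position means of the detrended series."""
    n = len(y)
    t = list(range(n))
    tbar, ybar = sum(t) / n, sum(y) / n
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
    trend = [ybar + slope * (ti - tbar) for ti in t]
    detrended = [yi / tr for yi, tr in zip(y, trend)]
    seasonal = [sum(detrended[s::period]) / len(detrended[s::period])
                for s in range(period)]
    deseasonalized = [yi / seasonal[i % period] for i, yi in enumerate(y)]
    return trend, seasonal, deseasonalized

# Synthetic trend-plus-season series (multiplicative, no noise).
season = [1.2, 0.8, 1.1, 0.9]
y = [(10.0 + 0.5 * t) * season[t % 4] for t in range(40)]
trend, seasonal, deseasonalized = decompose_multiplicative(y, period=4)
```

The deseasonalized series is then fed to the NN; forecasts are re-seasonalized by multiplying back the seasonal index of the forecast position.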
The scale mismatch between remotely sensed observations and crop growth models simulated state variables decreases the reliability of crop yield estimates. To overcome this problem, we used a two-step data assimilation phases: first we generated a complete leaf area index (LAI) time series by combin...
2013-12-19
... Series Airplanes; Rechargeable Lithium Ion Batteries and Battery Systems AGENCY: Federal Aviation... lithium ion batteries and battery system that will be used on an International Communications Group (ICG... uses rechargeable lithium ion batteries and battery systems in the Boeing Model 777-200, -300,...
Nedorezov, L V
2015-01-01
Verhulst and Gompertz models were used to approximate the well-known time series of Paramecium caudatum population dynamics (G. F. Gause, The Struggle for Existence, 1934). The parameters of each model were estimated in two different ways: with the least squares method (global fitting) and with a non-traditional approach (the method of extreme points). The results were compared with each other and with those reported by G. F. Gause. Deviations of the theoretical (model) trajectories from the experimental time series were tested using various non-parametric statistical tests. It is shown that the least squares estimates do not always meet the requirements imposed on a "fine" model, but in some cases a small modification of the least squares estimation allows a satisfactory approximation of the experimental data set.
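The "global fitting" route can be sketched as a grid-based least-squares fit of the Verhulst (logistic) model's analytic solution. The data below are synthetic (generated from assumed parameters, not Gause's measurements), and the grid search is a simple stand-in for a proper optimizer:

```python
from math import exp

def verhulst(t, n0, r, k):
    """Analytic solution of the Verhulst (logistic) growth model."""
    return k * n0 / (n0 + (k - n0) * exp(-r * t))

def fit_verhulst(times, obs, r_grid, k_grid):
    """Global least-squares fit of (r, K) over a parameter grid, with the
    initial population fixed to the first observation."""
    n0 = obs[0]
    best = None
    for r in r_grid:
        for k in k_grid:
            sse = sum((verhulst(t, n0, r, k) - y) ** 2
                      for t, y in zip(times, obs))
            if best is None or sse < best[0]:
                best = (sse, r, k)
    return best[1], best[2]

times = list(range(20))
obs = [verhulst(t, 2.0, 0.5, 100.0) for t in times]   # noiseless test data
r_hat, k_hat = fit_verhulst(times, obs,
                            [0.30 + 0.01 * i for i in range(41)],   # r: 0.30..0.70
                            [80.0 + 1.0 * i for i in range(41)])    # K: 80..120
```

On noiseless data the fit recovers the generating parameters; on real counts, the residual structure is what the non-parametric tests in the abstract are assessing.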
Principles of Time Series Modeling and Forecasting
王娜
2012-01-01
This article introduces the basic content of time series theory and the main classes of basic models, and concludes with forecasting formulas for each class of time series model.
Zhou, Fuqun; Zhang, Aining
2016-10-25
Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized to transfer sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
Spaeder, M C; Fackler, J C
2012-04-01
Respiratory syncytial virus (RSV) is the most common cause of documented viral respiratory infections, and the leading cause of hospitalization, in young children. We performed a retrospective time-series analysis of all patients aged … Forecasting models of weekly RSV incidence for the local community, the inpatient paediatric hospital, and the paediatric intensive-care unit (PICU) were created. Ninety-five percent confidence intervals calculated around our models' 2-week forecasts were accurate to ±9.3, ±7.5 and ±1.5 cases/week for the local community, inpatient hospital and PICU, respectively. Our results suggest that time-series models may be useful tools in forecasting the burden of RSV infection at the local and institutional levels, helping communities and institutions to optimize the distribution of resources based on the changing burden and severity of illness in their respective communities.
Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.
2009-01-01
The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.
Modeling and Application of Series Elastic Actuators for Force Control Multi Legged Robots
S, Arumugom; V, Ponselvan
2009-01-01
Series Elastic Actuators provide many benefits in force control of robots in unconstrained environments. These benefits include high force fidelity, extremely low impedance, low friction, and good force control bandwidth. Series Elastic Actuators employ a novel mechanical design architecture which goes against the common machine design principle of "stiffer is better". A compliant element is placed between the gear train and driven load to intentionally reduce the stiffness of the actuator. A position sensor measures the deflection, and the force output is accurately calculated using Hooke's Law (F = Kx). A control loop then servos the actuator to the desired output force. The resulting actuator has inherent shock tolerance, high force fidelity and extremely low impedance. These characteristics are desirable in many applications including legged robots, exoskeletons for human performance amplification, robotic arms, haptic interfaces, and adaptive suspensions. We describe several variations of Series Elastic Ac...
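The force-control idea described above (invert Hooke's law, then servo the deflection) can be sketched as a toy simulation. All numbers below (spring stiffness, gain, time step) are assumed illustrative values, the load side is held fixed, and the motor is idealized as a velocity source:

```python
K_SPRING = 2000.0   # N/m, compliant element stiffness (assumed value)
DT = 0.001          # s, control period (assumed)

def sea_force_servo(f_desired, x0=0.0, motor_gain=50.0, steps=2000):
    """Servo the spring deflection x toward F_desired / K, i.e. Hooke's law
    F = K x inverted.  Minimal sketch of series-elastic force control."""
    x = x0
    forces = []
    for _ in range(steps):
        forces.append(K_SPRING * x)          # force sensed via deflection
        x_target = f_desired / K_SPRING      # F = K x  =>  x = F / K
        # Proportional velocity command on the motor side of the spring.
        x += motor_gain * (x_target - x) * DT
    return x, forces

x_final, forces = sea_force_servo(f_desired=40.0)
```

The measured force converges to the 40 N setpoint; the compliant element is what turns a position-controlled motor into an accurate force source.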
Bendel, David; Beck, Ferdinand; Dittmer, Ulrich
2013-01-01
In the presented study climate change impacts on combined sewer overflows (CSOs) in Baden-Wuerttemberg, Southern Germany, were assessed based on continuous long-term rainfall-runoff simulations. As input data, synthetic rainfall time series were used. The applied precipitation generator NiedSim-Klima accounts for climate change effects on precipitation patterns. Time series for the past (1961-1990) and future (2041-2050) were generated for various locations. Comparing the simulated CSO activity of both periods we observe significantly higher overflow frequencies for the future. Changes in overflow volume and overflow duration depend on the type of overflow structure. Both values will increase at simple CSO structures that merely divide the flow, whereas they will decrease when the CSO structure is combined with a storage tank. However, there is a wide variation between the results of different precipitation time series (representative for different locations).
CHAN Kung-Sik; TONG Howell; STENSETH Nils Chr
2009-01-01
The study of the rodent fluctuations of the North was initiated in its modern form with Elton's pioneering work. Many scientific studies have been designed to collect yearly rodent abundance data, but the resulting time series are generally subject to at least two "problems": being short and non-linear. We explore the use of continuous threshold autoregressive (TAR) models for analyzing such data. In the simplest case, continuous TAR models are additive autoregressive models, piecewise linear in one lag and linear in all other lags. The location of the slope change is called the threshold parameter. The continuous TAR models for rodent abundance data can be derived from a general prey-predator model under some simplifying assumptions. The lag in which the threshold is located sheds important insights on the structure of the prey-predator system. We propose to assess the uncertainty on the location of the threshold via a new bootstrap called the nearest block bootstrap (NBB), which combines the methods of the moving block bootstrap and the nearest neighbor bootstrap. The NBB assumes an underlying finite-order time-homogeneous Markov process. Essentially, the NBB bootstraps blocks of random block sizes, with each block being drawn from a non-parametric estimate of the future distribution given the realized past bootstrap series. We illustrate the methods by simulations and on a particular rodent abundance time series from Kilpisjarvi, Northern Finland.
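The simplest continuous TAR fit can be sketched as a grid search over the threshold combined with exact least squares for the two slopes. This is a minimal illustration on simulated data (one lag, no intercept, arbitrary parameters); the NBB bootstrap of the abstract is not reproduced here:

```python
import random

def fit_ctar(y, thresholds):
    """Grid-search least squares for a continuous threshold AR(1):
    y_t = b1*y_{t-1} + b2*max(y_{t-1} - c, 0) + noise."""
    best = None
    x1 = y[:-1]
    t = y[1:]
    for c in thresholds:
        x2 = [max(v - c, 0.0) for v in x1]
        # 2x2 normal equations solved in closed form.
        s11 = sum(a * a for a in x1)
        s12 = sum(a * b for a, b in zip(x1, x2))
        s22 = sum(b * b for b in x2)
        r1 = sum(a * v for a, v in zip(x1, t))
        r2 = sum(b * v for b, v in zip(x2, t))
        det = s11 * s22 - s12 * s12
        if abs(det) < 1e-9:
            continue
        b1 = (r1 * s22 - r2 * s12) / det
        b2 = (r2 * s11 - r1 * s12) / det
        sse = sum((v - b1 * a - b2 * b) ** 2
                  for a, b, v in zip(x1, x2, t))
        if best is None or sse < best[0]:
            best = (sse, c, b1, b2)
    return best[1], best[2], best[3]

# Simulate a series whose autoregressive slope changes at y = 2.
random.seed(42)
y = [0.0]
for _ in range(3000):
    prev = y[-1]
    y.append(0.95 * prev - 0.6 * max(prev - 2.0, 0.0) + random.gauss(0, 1))

c_hat, b1_hat, b2_hat = fit_ctar(y, [1.0 + 0.25 * i for i in range(9)])
```

The fitted threshold and slopes approximate the generating values; in the ecological setting the lag carrying the threshold, not just its location, is the quantity of interest.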
Yoon, Heesung; Park, Eungyu; Yoon, Pilsun; Lee, Eunhee; Kim, Gyoo-Bum
2016-04-01
A method to filter out the effect of river stage fluctuations on groundwater level was designed using an artificial neural network-based time series model of groundwater level prediction. The designed method was applied to daily groundwater level data near the Gangjeong-Koryeong Barrage in the Nakdong river, South Korea. First, one-step ahead direct prediction time series models were successfully developed for both cases of before and after the barrage construction using past measurement data of rainfall, river stage, and groundwater level as inputs. The correlation coefficient values between observed and predicted data were over 0.97. Based on the direct prediction models, recursive prediction models for the simulation of groundwater level fluctuations were designed. The effect of river stage fluctuation on groundwater level data was filtered out by setting a constant value for river stage inputs of the recursive time series models. The hybrid water table fluctuation method was employed to estimate the groundwater recharge using the filtered data. The calculated ratios of groundwater recharge to precipitation before and after the barrage construction were 11.0% and 4.3%, respectively. It is expected that the proposed method can be a useful tool for groundwater level prediction and recharge estimation in the riverside area.
Ohkubo, Jun
2011-12-01
A scheme is developed for estimating state-dependent drift and diffusion coefficients in a stochastic differential equation from time-series data. The scheme does not require parametric forms for the drift and diffusion coefficients to be specified in advance. To perform the nonparametric estimation, a maximum likelihood method is combined with a concept based on kernel density estimation. To deal with discrete observation or sparsity of the time-series data, a local linearization method is employed, which enables fast estimation.
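The kernel side of such a scheme can be sketched with Nadaraya-Watson estimates of the conditional moments of the increments. This is a simplified stand-in (no maximum likelihood, no local linearization) on a simulated Ornstein-Uhlenbeck path with assumed parameters:

```python
import random
from math import exp

def kernel_drift_diffusion(path, dt, grid, h=0.1):
    """Nadaraya-Watson estimates of drift a(x) = E[dX | X=x]/dt and diffusion
    b(x) = E[dX^2 | X=x]/dt from a sampled path, with Gaussian kernel width h."""
    dx = [b - a for a, b in zip(path, path[1:])]
    drift, diff = [], []
    for g in grid:
        w = [exp(-0.5 * ((xi - g) / h) ** 2) for xi in path[:-1]]
        sw = sum(w)
        drift.append(sum(wi * d for wi, d in zip(w, dx)) / sw / dt)
        diff.append(sum(wi * d * d for wi, d in zip(w, dx)) / sw / dt)
    return drift, diff

# Euler path of an Ornstein-Uhlenbeck process dX = -X dt + 0.5 dW.
random.seed(1)
dt, sigma = 0.01, 0.5
path = [0.0]
for _ in range(100_000):
    path.append(path[-1] - path[-1] * dt
                + sigma * dt ** 0.5 * random.gauss(0, 1))

drift, diff = kernel_drift_diffusion(path, dt, [-0.5, 0.0, 0.5])
```

The estimated drift is close to a(x) = -x and the estimated diffusion close to sigma^2 = 0.25 at each grid point; kernel smoothing biases the drift slightly toward zero, which is one motivation for the more refined estimators in the abstract.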
75 FR 8551 - Airworthiness Directives; Airbus Model A300, A300-600, and A310 Airplanes
2010-02-25
... functional degradation, possibly resulting in reduced control of the aeroplane when combined with an air duct... in reduced control of the aeroplane when combined with an air duct leak, air conditioning system... degradation, possibly resulting in reduced control of the aeroplane when combined with an air duct leak,...
2011-05-11
...-2074, Revision 04, dated October 24, 2008. (A) If cracking is found and the radius of the rework is... of the rework is 20.0 mm (0.787 inch) or more, before further flight, repair in accordance with a... radius of the rework is 20.0 mm (0.787 inch) or more. Paragraph (g)(3)(i)(B) of this AD requires...
Modeling of U-series Radionuclide Transport Through Soil at Pena Blanca, Chihuahua, Mexico
Pekar, K. E.; Goodell, P. C.; Walton, J. C.; Anthony, E. Y.; Ren, M.
2007-05-01
The Nopal I uranium deposit is located at Pena Blanca in Chihuahua, Mexico. Mining of high-grade uranium ore occurred in the early 1980s, with the ore stockpiled nearby. The stockpile was mostly cleared in the 1990s; however, some of the high-grade boulders have remained there, creating localized sources of radioactivity for a period of 25-30 years. This provides a unique opportunity to study radionuclide transport, because the study area did not have any uranium contamination predating the stockpile in the 1980s. One high-grade boulder was selected for study based upon its shape, location, and high activity. The presumed drip-line off of the boulder was marked, samples from the boulder surface were taken, and then the boulder was moved several feet away. Soil samples were taken from directly beneath the boulder, around the drip-line, and down slope. Eight of these samples were collected in a vertical profile directly beneath the boulder. Visible flakes of boulder material were removed from the surficial soil samples, because they would have higher concentrations of U-series radionuclides and cause the activities in the soil samples to be excessively high. The vertical sampling profile used 2-inch thicknesses for each sample. The soil samples were packaged into thin plastic containers to minimize the attenuation and to standardize sample geometry, and then they were analyzed by gamma-ray spectroscopy with a Ge(Li) detector for Th-234, Pa-234, U-234, Th-230, Ra-226, Pb-214, Bi-214, and Pb-210. The raw counts were corrected for self-attenuation and normalized using BL-5, a uranium standard from Beaverlodge, Saskatchewan. BL-5 allowed the counts obtained on the Ge(Li) to be referenced to a known concentration or activity, which was then applied to the soil unknowns for a reliable calculation of their concentrations. Gamma-ray spectra of five soil samples from the vertical profile exhibit decreasing activities with increasing depth for the selected radionuclides.
A. Karsenty
2013-01-01
Ultrathin body (UTB) and nanoscale body (NSB) SOI-MOSFET devices, sharing a similar W/L but with channel thicknesses of 46 nm and below 5 nm, respectively, were fabricated using a selective "gate-recessed" process on the same silicon wafer. Their current-voltage characteristics measured at room temperature were found to differ, surprisingly, by several orders of magnitude. We analyzed this result by considering both severe mobility degradation and the influence of a large series resistance, and found the latter explanation more plausible. The electrical characteristics of the NSB can then be derived analytically by integrating a gate-voltage-dependent drain-source series resistance. In this paper, the influence of the channel thickness on the series resistance is reported for the first time. This influence is integrated into the analytical model in order to describe the trends of the saturation current with the channel thickness. This modeling approach may be useful for interpreting anomalous electrical behavior of other nanodevices in which series resistance and/or mobility degradation is of great concern.
J D Velásquez
2012-06-01
Many time series with trend and seasonal patterns are successfully modeled and forecasted by the airline model of Box and Jenkins; however, this model neglects the presence of nonlinearity in the data. In this paper, we propose a new nonlinear version of the airline model in which the linear moving-average component is replaced by a multilayer perceptron neural network. The proposed model is used for forecasting two benchmark time series; we find that it is able to forecast the time series with greater accuracy than other traditional approaches.
Chih-Chieh Young
2015-01-01
Accurate prediction of water level fluctuation is important in lake management due to its significant impacts in various aspects. This study utilizes four modeling approaches to predict water levels in the Yuan-Yang Lake (YYL) in Taiwan: a three-dimensional hydrodynamic model, an artificial neural network (ANN) model (a back-propagation neural network, BPNN), a time series forecasting model (autoregressive moving average with exogenous inputs, ARMAX), and a combined hydrodynamic and ANN model. In particular, the black-box ANN model and the physically based hydrodynamic model are coupled to more accurately predict water level fluctuation. Hourly water level data (a total of 7296 observations) were collected for model calibration (training) and validation. Three statistical indicators (mean absolute error, root mean square error, and coefficient of correlation) were adopted to evaluate model performance. Overall, the results demonstrate that the hydrodynamic model can satisfactorily predict hourly water level changes during the calibration stage but not during the validation stage. The ANN and ARMAX models predict the water level better than the hydrodynamic model does. Meanwhile, the results from the ANN model are superior to those of the ARMAX model in both training and validation phases. The proposed approach of using a three-dimensional hydrodynamic model in conjunction with an ANN model clearly improves the prediction accuracy of water level fluctuation.
Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang
2016-01-01
Objective To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. Design A time-series study using a regional death registry between 2009 and 2010. Setting 8 districts in a large metropolitan area in Northern China. Participants 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Main outcome measures Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. Results The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (−1.09 to 4.28 vs −1.08 to 3.93) and the PCs-based model (−2.23 to 4.07 vs −2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, −1.12 to 4.85 versus −1.11 to 4.83. Conclusions The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. PMID:27531727
Harmonic analysis of dense time series of landsat imagery for modeling change in forest conditions
Barry Tyler. Wilson
2015-01-01
This study examined the utility of dense time series of Landsat imagery for small area estimation and mapping of change in forest conditions over time. The study area was a region in north central Wisconsin for which Landsat 7 ETM+ imagery and field measurements from the Forest Inventory and Analysis program are available for the decade of 2003 to 2012. For the periods...
A dynamic factor model for the analysis of multivariate time series
Molenaar, P.C.M.
1985-01-01
Describes the new statistical technique of dynamic factor analysis (DFA), which accounts for the entire lagged covariance function of an arbitrary 2nd-order stationary time series. DFA is shown to be applicable to a relatively short stretch of observations and is therefore considered worthwhile for
76 FR 12629 - Airworthiness Directives; Bombardier, Inc. Model DHC-8-400 Series Airplanes
2011-03-08
... Requirements Manual (PSM 1-84-7). The actions described in this service information are intended to correct the... (Maintenance Review Board Report) of the Bombardier DHC-8 Series 400 Maintenance Requirements Manual (PSM 1-84... Maintenance Requirements Manual (PSM 1-84-7). Change to Applicability of AD 2007-22-09 AD 2007-22-09...
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitting distribution. The results show that the partial duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
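Fitting a Generalized Pareto distribution to threshold exceedances can be sketched with a method-of-moments estimator, a simpler stand-in for the Maximum Likelihood and L-moment fits used in the abstract. The sample below is synthetic (exponential excesses, which correspond to GPD shape ξ = 0), not rainfall data:

```python
import random

def fit_gpd_moments(excesses):
    """Method-of-moments estimates for the Generalized Pareto distribution,
    using mean = sigma/(1-xi) and var = sigma^2 / ((1-xi)^2 (1-2*xi)),
    so that xi = (1 - mean^2/var)/2 and sigma = mean*(1 - xi)."""
    n = len(excesses)
    m = sum(excesses) / n
    v = sum((e - m) ** 2 for e in excesses) / (n - 1)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = m * (1.0 - xi)
    return xi, sigma

random.seed(7)
# Exponential(rate 0.5) excesses: a GPD with shape xi = 0 and scale sigma = 2.
sample = [random.expovariate(0.5) for _ in range(50_000)]
xi_hat, sigma_hat = fit_gpd_moments(sample)
```

In a threshold-selection study like the one above, this fit is repeated across candidate thresholds and the stability of the estimates (or a bootstrap MSE) guides the choice.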
Practical Aspects of the Spectral Analysis of Irregularly Sampled Data With Time-Series Models
Broersen, P.M.T.
2009-01-01
Several algorithms for the spectral analysis of irregularly sampled random processes can estimate the spectral density for a low frequency range. A new time-series method extended that frequency range with a factor of thousand or more. The new algorithm has two requirements to give useful results. F
Vinkovic, Anton; Mihalic, Rafael [Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, 1000 Ljubljana (Slovenia)
2008-10-15
In this paper, a new approach to modeling a static synchronous series compensator (SSSC) for power-flow calculations by applying the Newton-Raphson method is presented. This new approach differs from known methods in terms of the interpretation of the device's branch. It is considered on the basis of its current and is therefore denoted as a current-based model of an SSSC. This approach might in principle be applicable also for other FACTS devices (i.e., UPFC, IPFC, GUPFC). In the paper, the current-based model of an SSSC is presented as the models of this device have difficulties with convergence in power-flow calculations and there are very few references covering these topics. First, the basic features of an SSSC are presented, as it is the basis for the current-based model that is incorporated into the Newton-Raphson load-flow model. The results of the tests at the IEEE 57-bus system are discussed in detail and compared with the existing injection SSSC load-flow model [X.P. Zhang, Advanced modeling of the multicontrol functional static synchronous series compensator (SSSC) in Newton power flow, IEEE Trans. Power Syst. 18 (November (4)) 2003]. (author)
Bronson, Jonathan E; Fei, Jingyi; Hofman, Jake M; Gonzalez, Ruben L; Wiggins, Chris H
2009-12-16
Time series data provided by single-molecule Förster resonance energy transfer (smFRET) experiments offer the opportunity to infer not only model parameters describing molecular complexes, e.g., rate constants, but also information about the model itself, e.g., the number of conformational states. Resolving whether such states exist or how many of them exist requires a careful approach to the problem of model selection, here meaning discrimination among models with differing numbers of states. The most straightforward approach to model selection generalizes the common idea of maximum likelihood--selecting the most likely parameter values--to maximum evidence: selecting the most likely model. In either case, such an inference presents a tremendous computational challenge, which we here address by exploiting an approximation technique termed variational Bayesian expectation maximization. We demonstrate how this technique can be applied to temporal data such as smFRET time series; show superior statistical consistency relative to the maximum likelihood approach; compare its performance on smFRET data generated from experiments on the ribosome; and illustrate how model selection in such probabilistic or generative modeling can facilitate analysis of closely related temporal data currently prevalent in biophysics. Source code used in this analysis, including a graphical user interface, is available open source via http://vbFRET.sourceforge.net.
A scalable database model for multiparametric time series: a volcano observatory case study
Montalto, Placido; Aliotta, Marco; Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea
2014-05-01
The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. With the term time series we refer to a set of observations of a given phenomenon acquired sequentially in time. When the time intervals are equally spaced one speaks of a period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), in order to acquire time series from different data sources and standardize them within a relational database. The standardization provides the ability to perform operations, such as querying and visualization, on many measures by synchronizing them on a common time scale. The proposed architecture follows a multiple layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the loader layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series on a specified time range, or to follow the real-time signal acquisition, according to a data access policy for the users.
Mayaud, C; Wagner, T; Benischke, R; Birk, S
2014-04-16
The Lurbach karst system (Styria, Austria) is drained by two major springs and replenished by both autogenic recharge from the karst massif itself and a sinking stream that originates in low permeable schists (allogenic recharge). Detailed data from two events recorded during a tracer experiment in 2008 demonstrate that an overflow from one of the sub-catchments to the other is activated if the discharge of the main spring exceeds a certain threshold. Time series analysis (autocorrelation and cross-correlation) was applied to examine to what extent the various available methods support the identification of the transient inter-catchment flow observed in this binary karst system. As inter-catchment flow is found to be intermittent, the evaluation was focused on single events. In order to support the interpretation of the results from the time series analysis a simplified groundwater flow model was built using MODFLOW. The groundwater model is based on the current conceptual understanding of the karst system and represents a synthetic karst aquifer for which the same methods were applied. Using the wetting capability package of MODFLOW, the model simulated an overflow similar to what has been observed during the tracer experiment. Various intensities of allogenic recharge were employed to generate synthetic discharge data for the time series analysis. In addition, geometric and hydraulic properties of the karst system were varied in several model scenarios. This approach helps to identify effects of allogenic recharge and aquifer properties in the results from the time series analysis. Comparing the results from the time series analysis of the observed data with those of the synthetic data a good agreement was found. For instance, the cross-correlograms show similar patterns with respect to time lags and maximum cross-correlation coefficients if appropriate hydraulic parameters are assigned to the groundwater model. The comparable behaviors of the real and the
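The cross-correlation step of such an analysis can be sketched directly. The two signals below are synthetic (one is a delayed, noisy copy of the other, a toy stand-in for spring discharge responses), and the lag of the cross-correlation maximum recovers the imposed delay:

```python
import random

def cross_correlation(x, y, max_lag):
    """Sample cross-correlation r_xy(k) for lags k = 0..max_lag (y lagging x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    out = []
    for k in range(max_lag + 1):
        num = sum((x[t] - mx) * (y[t + k] - my) for t in range(n - k))
        out.append(num / (sx * sy))
    return out

# Synthetic discharge-like signals: y is x delayed by 3 steps plus noise.
random.seed(3)
x = [random.gauss(0, 1) for _ in range(2000)]
y = [0.0] * 3 + x[:-3]
y = [v + random.gauss(0, 0.2) for v in y]

ccf = cross_correlation(x, y, max_lag=10)
lag_max = max(range(len(ccf)), key=lambda k: ccf[k])
```

In the karst study the position and height of such a maximum, computed between recharge and spring discharge series, is what the synthetic MODFLOW scenarios are used to interpret.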
Mayaud, C.; Wagner, T.; Benischke, R.; Birk, S.
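The lag-based cross-correlation analysis named in this abstract can be sketched in a few lines. The sketch below is illustrative Python, not the authors' code; the series, the 3-step delay, and the noise level are invented. It estimates the cross-correlogram between a synthetic recharge input and a delayed discharge response and recovers the imposed time lag:

```python
import numpy as np

def cross_correlogram(x, y, max_lag):
    """Sample cross-correlation r_xy(k), k = 0..max_lag, where r_xy(k)
    correlates the input x(t) with the output y(t + k)."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    n = len(x)
    denom = n * x.std() * y.std()
    return np.array([np.sum(x[:n - k] * y[k:]) / denom
                     for k in range(max_lag + 1)])

# Synthetic event: spring discharge responds to recharge with a 3-step delay.
rng = np.random.default_rng(0)
recharge = rng.exponential(1.0, 500)
discharge = np.roll(recharge, 3) + 0.1 * rng.standard_normal(500)
r = cross_correlogram(recharge, discharge, max_lag=10)
print(int(np.argmax(r)))   # lag of the cross-correlation peak -> 3
```

The lag and height of the cross-correlation peak are the quantities the study compares between the observed discharge series and the MODFLOW-generated synthetic ones.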
Farshad Fathian
2017-01-01
Introduction: Time series models are generally categorized as data-driven or mathematically based methods. These models are among the most important tools in the modeling and forecasting of hydrological processes and are used in the design and scientific management of water resources projects. On the other hand, a better understanding of the river flow process is vital for appropriate streamflow modeling and forecasting. One of the main concerns of hydrological time series modeling is whether the hydrologic variable is governed by linear or nonlinear models through time. Although linear time series models have been widely applied in hydrology research, there has been increasing recent interest in the application of nonlinear time series approaches. The threshold autoregressive (TAR) method is frequently applied in modeling the mean (first-order moment) of financial and economic time series, but this type of model has not yet received considerable attention from the hydrological community. The main purposes of this paper are to analyze and discuss stochastic modeling of daily river flow time series of the study area using linear models (ARMA: autoregressive moving average) and nonlinear models (two- and three-regime TAR). Material and Methods: The study area consists of four sub-basins, namely Saghez Chai, Jighato Chai, Khorkhoreh Chai and Sarogh Chai from west to east, which discharge into the Zarrineh Roud dam reservoir. River flow time series of six hydro-gauge stations located on rivers upstream of the Zarrineh Roud dam (in the southern part of the Urmia Lake basin) were considered for modeling purposes. All data series start on January 1, 1997 and end on December 31, 2011. In this study, daily river flow data from January 1, 1997 to December 31, 2009 (13 years) were chosen for calibration, and data from January 1, 2010 to December 31, 2011
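A two-regime self-exciting TAR fit of the kind discussed above can be sketched as ordinary least squares per regime plus a grid search over candidate thresholds. This is an illustrative sketch on synthetic data; the model form, parameters, and threshold grid are assumptions, not the paper's calibrated models:

```python
import numpy as np

def fit_setar2(y, thresholds):
    """Two-regime SETAR fit: y_t = a_i + b_i * y_{t-1} + e_t, regime i
    chosen by y_{t-1} <= r; returns the threshold r (from the candidate
    grid) minimizing the total residual sum of squares."""
    y = np.asarray(y, float)
    ylag, ycur = y[:-1], y[1:]
    best_r, best_rss = None, np.inf
    for r in thresholds:
        masks = (ylag <= r, ylag > r)
        if min(m.sum() for m in masks) < 2:
            continue                      # regime too small to fit
        rss = 0.0
        for m in masks:
            X = np.column_stack([np.ones(m.sum()), ylag[m]])
            coef, *_ = np.linalg.lstsq(X, ycur[m], rcond=None)
            rss += np.sum((ycur[m] - X @ coef) ** 2)
        if rss < best_rss:
            best_r, best_rss = r, rss
    return best_r

# Synthetic flow-like series with a regime switch at threshold 0.
rng = np.random.default_rng(1)
y = [0.0]
for _ in range(2000):
    if y[-1] <= 0:
        y.append(1.0 + 0.3 * y[-1] + 0.5 * rng.standard_normal())
    else:
        y.append(-1.0 + 0.3 * y[-1] + 0.5 * rng.standard_normal())
r_hat = fit_setar2(np.array(y), thresholds=np.linspace(-1.0, 1.0, 41))
print(r_hat)   # estimated threshold, close to the true value 0
```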
Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth
2007-01-01
In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network. Switching a linear, series network … are used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one. It allows switching a series element that contains a shunt branch, as for example a transmission line. Both methods require that harmonic measurements performed at two ends of the disconnected element are precisely synchronized.
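The transfer-impedance idea can be illustrated numerically: if synchronized voltage and current waveforms are recorded before and after a switching event, the harmonic phasors extracted by FFT give Z_h = ΔV_h / ΔI_h. The sketch below uses invented 5th-harmonic signals and an assumed impedance; it shows only the ratio-of-changes principle, not the authors' measurement procedure:

```python
import numpy as np

f0, fs, n = 50.0, 10_000.0, 2000          # fundamental (Hz), sample rate, samples
t = np.arange(n) / fs                     # window = 10 fundamental cycles
w5 = 2 * np.pi * 5 * f0                   # 5th-harmonic angular frequency

def phasor(x, h):
    """Complex phasor of harmonic h (exact when the window spans an
    integer number of fundamental cycles, so h lands on one FFT bin)."""
    return np.fft.rfft(x)[int(h * f0 * n / fs)] * 2 / n

Z5 = 3.0 + 4.0j                           # assumed transfer impedance (ohm)
I_pre, I_post = 10.0 + 0.0j, 16.0 * np.exp(-0.4j)   # harmonic current phasors

i_pre = np.real(I_pre * np.exp(1j * w5 * t))
v_pre = np.real(Z5 * I_pre * np.exp(1j * w5 * t))
i_post = np.real(I_post * np.exp(1j * w5 * t))
v_post = np.real(Z5 * I_post * np.exp(1j * w5 * t))

# Transfer harmonic impedance from the change caused by the switching event:
Z5_est = (phasor(v_post, 5) - phasor(v_pre, 5)) / (phasor(i_post, 5) - phasor(i_pre, 5))
print(Z5_est)   # recovers the assumed 3 + 4j
```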
Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert
We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional … time series models driven by conditionally homoskedastic shocks, to allow for both conditional and unconditional heteroskedasticity of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution … of the estimator now depends on nuisance parameters derived both from the weak dependence and the heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short …
Tolstov, Georgi P
1962-01-01
Richard A. Silverman's series of translations of outstanding Russian textbooks and monographs is well-known to people in the fields of mathematics, physics, and engineering. The present book is another excellent text from this series, a valuable addition to the English-language literature on Fourier series. This edition is organized into nine well-defined chapters: Trigonometric Fourier Series, Orthogonal Systems, Convergence of Trigonometric Fourier Series, Trigonometric Series with Decreasing Coefficients, Operations on Fourier Series, Summation of Trigonometric Fourier Series, Double Fourie
Onisko, Agnieszka; Druzdzel, Marek J.; Austin, R. Marshall
2016-01-01
Background: Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well recognized in the analysis of medical data. Aim: The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. Materials and Methods: This paper offers a comparison of two approaches to the analysis of medical time series data: (1) a classical statistical approach, such as the Kaplan–Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. Results: The main outcomes of our comparison are the cervical cancer risk assessments produced by the three approaches. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, interpretation of results, and model validation. Conclusion: Our study shows that the Bayesian approach (1) is much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome to obtain with classical statistical approaches. PMID:28163973
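The Kaplan–Meier estimator named in the abstract has a compact closed form: at each distinct event time t the survival estimate is multiplied by (1 − d_t / n_t), where d_t is the number of events at t and n_t the number of subjects still at risk. A minimal sketch on toy data (not the cervical-screening data set):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate at each distinct event time.
    times: follow-up times; events: 1 = event observed, 0 = censored."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    t_event = np.unique(times[events == 1])
    surv, s = [], 1.0
    for te in t_event:
        at_risk = np.sum(times >= te)              # n_t
        d = np.sum((times == te) & (events == 1))  # d_t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return t_event, np.array(surv)

# Six subjects, one censored at t = 4:
t, s = kaplan_meier([1, 2, 3, 4, 5, 6], [1, 1, 1, 0, 1, 1])
print(np.round(s, 3))   # S(t) = 0.833, 0.667, 0.5, 0.25, 0.0
```

Note how the censored subject leaves the risk set without triggering a survival drop, which is exactly what distinguishes the estimator from a naive empirical fraction.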
Jamal Shamsara
2014-01-01
MMP-12 is a member of the matrix metalloproteinase (MMP) family, which is involved in the pathogenesis of some inflammation-based diseases. The design of selective MMP inhibitors is still challenging because of binding-pocket similarities across the MMP family. We sought to generate a HQSAR (hologram quantitative structure-activity relationship) model for a series of MMP-12 inhibitors. Compounds in the series with reported biological activity against MMP-12 were used to construct a predictive HQSAR model for their inhibitory activity. The HQSAR model had statistically excellent properties and possessed good predictive ability for test set compounds. The model, obtained from the 26 training set compounds, showed a cross-validated q2 value of 0.697 and a conventional r2 value of 0.986. It was then externally validated using a test set of 9 compounds, and the predicted values were in good agreement with the experimental results (r2pred = 0.8733). The external validity of the model was further confirmed by the Golbraikh-Tropsha and rm2 metrics. Color code analysis based on the obtained HQSAR model provided useful insights into the structural features of the training set relevant to bioactivity against MMP-12 and was useful for the design of some new, not yet synthesized MMP-12 inhibitors.
Field Quality of the First LARP Nb3Sn 3.7 m-Long Quadrupole Model of LQ Series
Velev, G.V.; Schmalzle, J.; Ambrosio, G.; Andreev, N.; Anerella, M.; Bossert, R.; Caspi, S.; Chlachidze, G.; DiMarco, J.; Escallier, J.; Felice, H.; Ferracin, P.; Kashikhin, V.V.; Lamm, M.J.; Nobrega, F.; Prebys, E.; Sabbi, G.L.; Tartaglia, M.; Wanderer, P.; Zlobin, A.V.
2011-08-03
The US-LHC accelerator research program (LARP) built and tested the first 3.7-m long Nb3Sn quadrupole model of the LQ series with a 90 mm bore diameter and a target field gradient of 200 T/m. The LQ series, developed in collaboration among FNAL, LBNL and BNL, is a scale-up of the previously tested 1-m long technology quadrupoles of the TQ series based on similar coils and two different mechanical structures (shell-based TQS and collar-based TQC), with a primary goal of demonstrating Nb3Sn accelerator magnet technology for the luminosity upgrade of the LHC interaction regions. In this paper, we present the field quality measurements in the first 3.7-m long LQS01 model based on the modified TQS mechanical structure. The results are compared to the expectations from the magnet geometry and the magnetic properties of the coils and iron yoke. Moreover, we present a comparison between this magnet and the short models previously measured.
Modelling and Simulation of Single-Phase Series Active Compensator for Power Quality Improvement
Verma, Arun Kumar; Mathuria, Kirti; Singh, Bhim; Bhuvaneshwari, G.
2016-10-01
A single-phase active series compensator is proposed in this work to reduce harmonic currents at the ac mains and to regulate the dc link voltage of a diode bridge rectifier (DBR) that acts as the front-end converter for a voltage source inverter feeding an ac motor. Such ac motor drives are used in domestic, commercial and industrial appliances. Under fluctuating ac mains voltages, the dc link voltage of the DBR exhibits wide variations, and hence the ac motor must be used at a reduced rating compared to its name-plate rating. The active series compensator proposed here provides the dual functions of improving the power quality at the ac mains and regulating the dc link voltage, thus averting the need for derating the ac motor.
2013-01-01
Time series analysis can be used to quantitatively monitor, describe, explain, and predict road safety developments. Time series analysis techniques offer the possibility of quantitatively modelling road safety developments in such a way that the dependencies between the observations of time series
Mahammad A. Hannan
2017-09-01
This study aims to develop an accurate model of a charge equalization controller (CEC) that manages individual cell monitoring and equalizing by charging and discharging series-connected lithium-ion (Li-ion) battery cells. In this concept, an intelligent control algorithm is developed to activate bidirectional cell switches and control direct current (DC)-DC converter switches along with pulse-width modulation (PWM) generation. Individual models of an electric vehicle (EV)-sustainable Li-ion battery, optimal power rating, a bidirectional flyback DC-DC converter, and charging and discharging controllers are integrated to develop a small-scale CEC model that can be implemented for 10 series-connected Li-ion battery cells. Results show that the charge equalization controller operates at 91% efficiency and performs well in equalizing both overdischarged and overcharged cells on time. Moreover, the outputs of the CEC model show that the desired balancing level occurs at a 2% state-of-charge difference and that all cells operate within a normal range. The configuration, execution, control, power loss, cost, size, and efficiency of the developed CEC model are compared with those of existing controllers. The proposed model is shown to be suitable for high-tech storage systems, supporting the advancement of sustainable EV technologies and renewable-energy applications.
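The balancing logic described, shuttling charge from over- to under-charged cells through a lossy converter until the spread falls below the 2% SOC target, can be sketched as a toy loop. The cell values, step size, and stopping rule below are illustrative inventions; only the 91% efficiency figure is taken from the abstract:

```python
import numpy as np

def equalize(soc, step=0.1, efficiency=0.91, tol=2.0):
    """Toy equalization loop: repeatedly shuttle `step` percentage points
    of SOC from the most charged to the least charged cell; `efficiency`
    models converter loss, `tol` is the target max-min SOC spread."""
    soc = np.asarray(soc, float).copy()
    steps = 0
    while soc.max() - soc.min() > tol:
        soc[np.argmax(soc)] -= step
        soc[np.argmin(soc)] += step * efficiency
        steps += 1
    return soc, steps

# Invented SOC values (%) for 10 series-connected cells:
cells = [78.0, 85.0, 90.0, 73.0, 88.0, 81.0, 95.0, 79.0, 84.0, 76.0]
balanced, steps = equalize(cells)
print(round(float(balanced.max() - balanced.min()), 2), steps)
```

The converter loss shows up as a slowly shrinking total SOC, which is why equalization efficiency is one of the comparison criteria in the abstract.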
LI W.K.; LI GuoDong
2009-01-01
The authors are to be congratulated for an innovative paper in terms of both modelling methodology and subject matter significance. The analysis of short time series is known to be difficult even for linear models.
Dalla Valle, Nicolas; Wutzler, Thomas; Meyer, Stefanie; Potthast, Karin; Michalzik, Beate
2017-04-01
Dual-permeability type models are widely used to simulate water fluxes and solute transport in structured soils. These models contain two spatially overlapping flow domains with different parameterizations or even entirely different conceptual descriptions of flow processes. They are usually able to capture preferential flow phenomena, but a large set of parameters is needed, which are very laborious to obtain or cannot be measured at all. Therefore, model inversions are often used to derive the necessary parameters. Although these require sufficient input data themselves, they can use measurements of state variables instead, which are often easier to obtain and can be monitored by automated measurement systems. In this work we show a method to estimate soil hydraulic parameters from high frequency soil moisture time series data gathered at two different measurement depths by inversion of a simple one dimensional dual-permeability model. The model uses an advection equation based on the kinematic wave theory to describe the flow in the fracture domain and a Richards equation for the flow in the matrix domain. The soil moisture time series data were measured in mesocosms during sprinkling experiments. The inversion consists of three consecutive steps: First, the parameters of the water retention function were assessed using vertical soil moisture profiles in hydraulic equilibrium. This was done using two different exponential retention functions and the Campbell function. Second, the soil sorptivity and diffusivity functions were estimated from Boltzmann-transformed soil moisture data, which allowed the calculation of the hydraulic conductivity function. Third, the parameters governing flow in the fracture domain were determined using the whole soil moisture time series. The resulting retention functions were within the range of values predicted by pedotransfer functions apart from very dry conditions, where all retention functions predicted lower matrix potentials
Lefieux, V
2007-10-15
Reseau de Transport d'Electricite (RTE), in charge of operating the French electricity transmission grid, needs accurate forecasts of power consumption in order to operate it correctly. The forecasts used every day result from a model combining a nonlinear parametric regression and a SARIMA model. In order to obtain an adaptive forecasting model, nonparametric forecasting methods have already been tested without real success. In particular, it is known that a nonparametric predictor behaves badly with a great number of explanatory variables, which is commonly called the curse of dimensionality. Recently, semiparametric methods that improve on the pure nonparametric approach have been proposed to estimate a regression function. Based on the concept of dimension reduction, one of those methods (MAVE: Moving Average conditional Variance Estimate) can be applied to time series. We study empirically its effectiveness in predicting the future values of an autoregressive time series. We then adapt this method, from a practical point of view, to forecast power consumption. We propose a partially linear semiparametric model, based on the MAVE method, which allows taking into account simultaneously the autoregressive aspect of the problem and the exogenous variables. The proposed estimation procedure is practically efficient. (author)
The benefit of modeled ozone data for the reconstruction of a 99-year UV radiation time series
Junk, J.; Feister, U.; Helbig, A.; GöRgen, K.; Rozanov, E.; KrzyśCin, J. W.; Hoffmann, L.
2012-08-01
Solar erythemal UV radiation (UVER) is highly relevant for numerous biological processes that affect plants, animals, and human health. Nevertheless, long-term UVER records are scarce. As significant declines in the column ozone concentration were observed in the past and a recovery of the stratospheric ozone layer is anticipated by the middle of the 21st century, there is a strong interest in the temporal variation of UVER time series. Therefore, we combined ground-based measurements of different meteorological variables with modeled ozone data sets to reconstruct time series of daily totals of UVER at the Meteorological Observatory, Potsdam, Germany. Artificial neural networks were trained with measured UVER, sunshine duration, the day of year, measured and modeled total column ozone, as well as the minimum solar zenith angle. This allows for the reconstruction of daily totals of UVER for the period from 1901 to 1999. Additionally, analyses of the long-term variations from 1901 until 1999 of the reconstructed, new UVER data set are presented. The time series of monthly and annual totals of UVER provide a long-term meteorological basis for epidemiological investigations in human health and occupational medicine for the region of Potsdam and Berlin. A strong benefit of our ANN approach is the fact that it can be easily adapted to different geographical locations, as successfully tested in the framework of COST Action 726.
ERIC Clearinghouse on Educational Management, Eugene, OR.
This review analyzes current research trends in the application of planning models to broad educational systems. Planning models reviewed include systems approach models, simulation models, operational gaming, linear programing, Markov chain analysis, dynamic programing, and queuing techniques. A 77-item bibliography of recent literature is…
A time-series analysis framework for the flood-wave method to estimate groundwater model parameters
Obergfell, Christophe; Bakker, Mark; Maas, Kees
2016-11-01
The flood-wave method is implemented within the framework of time-series analysis to estimate aquifer parameters for use in a groundwater model. The resulting extended flood-wave method is applicable to situations where groundwater fluctuations are affected significantly by time-varying precipitation and evaporation. Response functions for time-series analysis are generated with an analytic groundwater model describing stream-aquifer interaction. Analytical response functions play the same role as the well function in a pumping test, which is to translate observed head variations into groundwater model parameters by means of a parsimonious model equation. An important difference as compared to the traditional flood-wave method and pumping tests is that aquifer parameters are inferred from the combined effects of precipitation, evaporation, and stream stage fluctuations. Naturally occurring fluctuations are separated in contributions from different stresses. The proposed method is illustrated with data collected near a lowland river in the Netherlands. Special emphasis is put on the interpretation of the streambed resistance. The resistance of the streambed is the result of stream-line contraction instead of a semi-pervious streambed, which is concluded through comparison with the head loss calculated with an analytical two-dimensional cross-section model.
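The core idea above, head fluctuations as the superposition of each stress convolved with a response function, can be sketched with an exponential impulse response standing in for the paper's analytic stream-aquifer responses. All gains, the decay time, and the stress series below are invented for illustration:

```python
import numpy as np

def simulate_head(precip, evap, stage, gain_r=0.8, gain_s=0.3, a=20.0, base=1.5):
    """Head as a superposition of stresses convolved with an exponential
    impulse response exp(-t/a)/a (a linear-reservoir stand-in for the
    analytic stream-aquifer response); gains and `a` are illustrative."""
    t = np.arange(len(precip))
    theta = np.exp(-t / a) / a                       # unit impulse response
    h_recharge = gain_r * np.convolve(precip - evap, theta)[:len(t)]
    h_stream = gain_s * np.convolve(stage, theta)[:len(t)]
    return base + h_recharge + h_stream

rng = np.random.default_rng(2)
precip = rng.exponential(2.0, 365)                   # daily precipitation
evap = np.full(365, 1.0)                             # constant evaporation
stage = np.sin(2 * np.pi * np.arange(365) / 365)     # seasonal stream stage
head = simulate_head(precip, evap, stage)
print(head.shape)
```

Fitting the gains and decay time of such response functions to observed heads is, in spirit, how the extended flood-wave method separates the contributions of precipitation, evaporation, and stream stage.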
Chan, Kung-Sik; Tong, Howell; Stenseth, Nils Chr.
2009-01-01
The study of the rodent fluctuations of the North was initiated in its modern form with Elton's pioneering work. Many scientific studies have been designed to collect yearly rodent abundance data, but the resulting time series are generally subject to at least two "problems": being short and non-linear. We explore the use of continuous threshold autoregressive (TAR) models for analyzing such data. In the simplest case, the continuous TAR models are additive autoregressive models, being piecewise linear in one lag and linear in all other lags. The location of the slope change is called the threshold parameter. The continuous TAR models for rodent abundance data can be derived from a general prey-predator model under some simplifying assumptions. The lag in which the threshold is located sheds important insights on the structure of the prey-predator system. We propose to assess the uncertainty on the location of the threshold via a new bootstrap called the nearest block bootstrap (NBB), which combines the methods of the moving block bootstrap and the nearest neighbor bootstrap. The NBB assumes an underlying finite-order time-homogeneous Markov process. Essentially, the NBB bootstraps blocks of random block sizes, with each block being drawn from a non-parametric estimate of the future distribution given the realized past bootstrap series. We illustrate the methods by simulations and on a particular rodent abundance time series from Kilpisjärvi, Northern Finland.
Yu-Long QI; Chen-Chen CAI; Ping-Zhen LANG
2013-01-01
The double-layer, multi-roller plate crusher is a new device that uses multi-stage series crushing to break particles, with the crushing ratio distribution directly influencing the machine's performance. Three crushing ratios of 2.25, 2.15 and 2.01, used for fuzzy physical programming, were determined. A comparison of the optimized result between the double-layer multi-roller plate crusher and a high pressure roll grinder showed that the double-layer multi-roller plate crusher had better performance, reducing crushing force and wear.
Fluorescence spectrum analysis using Fourier series modeling for Fluorescein solution in Ethanol
Hadi, Mahasin F
2011-01-01
We have measured the fluorescence spectrum of a fluorescein solution in ethanol with a concentration of 1 × 10^-3 mol/liter at temperatures from room temperature down to the freezing point of the solvent (T = 153, 183, 223, 253, and 303 K), using liquid nitrogen for cooling. The TableCurve 2D (version 5.01) program was used to determine the fitting curve and fitting equation for each fluorescence spectrum. A Fourier series (3 × 2) was the most suitable fitting equation for all spectra. The theoretical fluorescence spectrum of fluorescein in ethanol at T = 183 K was calculated and compared with the experimental fluorescence spectrum at the same temperature, and there is good similarity between them.
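A Fourier-series fit of the kind produced by curve-fitting packages can be reproduced by linear least squares on cosine/sine columns. The sketch below fits an order-3 series to a synthetic Gaussian-shaped band; the data, peak position, and noise level are invented, not the measured fluorescein spectra:

```python
import numpy as np

def fit_fourier(x, y, order):
    """Least-squares fit of y ~ a0 + sum_k [a_k cos(k*w*x) + b_k sin(k*w*x)],
    with the fundamental w set by the span of x, as in the 'Fourier' model
    of common curve-fitting packages."""
    w = 2 * np.pi / (x.max() - x.min())
    cols = [np.ones_like(x)]
    for k in range(1, order + 1):
        cols += [np.cos(k * w * x), np.sin(k * w * x)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef

# Synthetic band-shaped 'spectrum': a Gaussian peak plus measurement noise.
rng = np.random.default_rng(3)
wavelength = np.linspace(480.0, 600.0, 200)
intensity = np.exp(-((wavelength - 520.0) / 15.0) ** 2) \
    + 0.01 * rng.standard_normal(200)
coef, fitted = fit_fourier(wavelength, intensity, order=3)
rmse = float(np.sqrt(np.mean((fitted - intensity) ** 2)))
print(round(rmse, 3))
```

Because the fit is linear in the coefficients, adding harmonics only ever lowers the residual; an information criterion or validation set is needed to choose the order.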
ARIMA-Based Time Series Model of Stochastic Wind Power Generation
Chen, Peiyuan; Pedersen, Troels; Bak-Jensen, Birgitte
2010-01-01
This paper proposes a stochastic wind power model based on an autoregressive integrated moving average (ARIMA) process. The model takes into account the nonstationarity and physical limits of stochastic wind power generation. The model is constructed based on one year of wind power measurements from … the Nysted offshore wind farm in Denmark. The proposed limited-ARIMA (LARIMA) model introduces a limiter and characterizes the stochastic wind power generation by mean level, temporal correlation and driving noise. The model is validated against the measurement in terms of temporal correlation … and probability distribution. The LARIMA model outperforms a first-order transition-matrix-based discrete Markov model in terms of temporal correlation, probability distribution and number of model parameters. The proposed LARIMA model is further extended to include the monthly variation of the stochastic wind power …
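The LARIMA idea, a linear stochastic model whose output passes through a limiter enforcing the physical range [0, P_max], can be sketched with an AR(1) core. All parameter values below are illustrative, not the Nysted-fitted model:

```python
import numpy as np

def simulate_limited_ar(n, phi=0.98, sigma=0.05, p_max=1.0, seed=0):
    """AR(1) core with a hard limiter keeping output in [0, p_max],
    mimicking the limited-ARIMA idea (linear model + physical limits).
    All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    x, out = 0.5, np.empty(n)
    for k in range(n):
        x = 0.5 * (1 - phi) + phi * x + sigma * rng.standard_normal()
        x = min(max(x, 0.0), p_max)     # limiter at the physical bounds
        out[k] = x
    return out

p = simulate_limited_ar(5000)
print(round(float(np.corrcoef(p[:-1], p[1:])[0, 1]), 2))  # strong lag-1 correlation
```

The limiter produces the probability mass at zero and rated power that an unconstrained linear model cannot reproduce, which is why the abstract validates against both temporal correlation and the probability distribution.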
WIMAX TRAFFIC MODEL BASED ON TIME SERIES FOR FORECAST FUTURE VALUES OF TRAFFIC
Cesar Augusto Hernández Suarez
2009-03-01
The objective of this research is to demonstrate that time series are an excellent tool for modeling data traffic in WiMAX networks. To achieve this objective, the Box-Jenkins methodology, which is described in this article, was used. Modeling WiMAX traffic through correlated models such as time series makes it possible to capture much of the dynamics of the data behavior in an equation and, on that basis, to estimate future traffic values. This is an advantage for coverage planning, resource reservation, and more timely and efficient control, integrated across the different levels of the functional hierarchy of the WiMAX data network. As a result of the research, an ARIMA traffic model of order 18 was obtained, which produced traffic forecasts with relatively small mean squared error values for a period of 10 days.
The projection of world geothermal energy consumption from time series and regression model
Simanullang, Elwin Y.; Supriatna, Agus; Supriatna, Asep K.
2015-12-01
World population growth affects many human activities and related aspects. Among them is the increased use of energy to support daily human activities, covering industry, transportation, domestic activities, etc. It is plausible that the larger the population of a country, the higher its energy needs to support all aspects of human activity. Considering the depletion of petroleum and other fossil-based energy sources, there is a recent tendency to use geothermal energy as an alternative source. In this paper we discuss the prediction of world geothermal energy consumption by two different methods: via the time series of geothermal usage alone, and via the time series of geothermal usage combined with the prediction of the world total population. For the first case we use the simple exponential smoothing method, while for the second case we use the simple regression method. The results show that taking the predicted world population size into account gives a better short-term forecast of geothermal energy consumption.
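Simple exponential smoothing, the first method named above, is a one-line recursion: the smoothed level is a weighted average of the newest observation and the previous level, and the flat forecast is the final level. A minimal sketch with invented consumption figures (not the paper's data):

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: level = alpha*y + (1-alpha)*level;
    the flat h-step-ahead forecast is the final level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Invented annual consumption figures (illustrative units):
consumption = [50.0, 52.0, 55.0, 53.0, 58.0, 60.0, 62.0]
print(round(ses_forecast(consumption), 2))   # -> 57.65
```

Because the forecast is flat, simple exponential smoothing lags behind a trending series, which is one reason adding a population-based regressor can improve the forecast.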
Nahlawi, Layan; Goncalves, Caroline; Imani, Farhad; Gaed, Mena; Gomez, Jose A.; Moussa, Madeleine; Gibson, Eli; Fenster, Aaron; Ward, Aaron D.; Abolmaesumi, Purang; Mousavi, Parvin; Shatkay, Hagit
2017-03-01
Recent studies have shown the value of Temporal Enhanced Ultrasound (TeUS) imaging for tissue characterization in transrectal ultrasound-guided prostate biopsies. Here, we present results of experiments designed to study the impact of temporal order of the data in TeUS signals. We assess the impact of variations in temporal order on the ability to automatically distinguish benign prostate-tissue from malignant tissue. We have previously used Hidden Markov Models (HMMs) to model TeUS data, as HMMs capture temporal order in time series. In the work presented here, we use HMMs to model malignant and benign tissues; the models are trained and tested on TeUS signals while introducing variation to their temporal order. We first model the signals in their original temporal order, followed by modeling the same signals under various time rearrangements. We compare the performance of these models for tissue characterization. Our results show that models trained over the original order-preserving signals perform statistically significantly better for distinguishing between malignant and benign tissues, than those trained on rearranged signals. The performance degrades as the amount of temporal-variation increases. Specifically, accuracy of tissue characterization decreases from 85% using models trained on original signals to 62% using models trained and tested on signals that are completely temporally-rearranged. These results indicate the importance of order in characterization of tissue malignancy from TeUS data.
Soroosh Mahmoodi
2016-01-01
An interesting model able to recover and reuse braking energy, named the series hybrid hydraulic/electric system (SHHES), was investigated. The model was proposed for heavy hybrid vehicles to overcome the existing drawbacks of single energy storage sources. The novelty of this paper is the investigation of a new series hybrid vehicle with three power sources: a combustion engine, an electric motor, and a hydraulic unit. The system was simulated with MATLAB-Simulink, and the different operational modes of the control system were investigated. The aim was to improve the efficiency of the energy-loading components in the power train system and the transmission system independently. The ability to store and reuse kinetic energy was added to the system to prevent energy from being wasted while the vehicle brakes. Control models were also investigated to derive suitable control algorithms offering the best efficiency of system components under different vehicle conditions. A torque control strategy based on a fuzzy logic controller was proposed to achieve better vehicle performance while minimizing fuel consumption. The results indicate efficient energy storage and usage in the transmission system. A small vehicle model experimentally verified the simulation results.
Martin Gugat
2012-05-01
Compressible squeeze film damping is a phenomenon of great importance for micromachines. For example, for the optimal design of an electrostatically actuated micro-cantilever mass sensor that operates in air, it is essential to have a model for the system behavior that can be evaluated efficiently. An analytical model that is based upon a solution of the linearized Reynolds equation has been given by R.B. Darling. In this paper we explain how some infinite sums that appear in Darling’s model can be evaluated analytically. As an example of applications of these closed form representations, we compute an approximation for the critical frequency where the spring component of the reaction force on the microplate, due to the motion through the air, is equal to a certain given multiple of the damping component. We also show how some double series that appear in the model can be reduced to a single infinite series that can be approximated efficiently.
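Closed-form evaluation of infinite sums of the kind mentioned in this abstract can be checked numerically. The sketch below uses a classical identity of the same flavor, not one of Darling's actual sums: Σ_{n≥1} 1/(n² + a²) = π·coth(πa)/(2a) − 1/(2a²). A long partial sum is compared against the closed form; the parameter value is arbitrary.

```python
import math

def partial_sum(a, n_terms):
    # Truncated series: sum_{n=1}^{N} 1 / (n^2 + a^2).
    return sum(1.0 / (n * n + a * a) for n in range(1, n_terms + 1))

def closed_form(a):
    # Classical identity: pi * coth(pi*a) / (2a) - 1 / (2a^2).
    coth = 1.0 / math.tanh(math.pi * a)
    return math.pi * coth / (2 * a) - 1.0 / (2 * a * a)

a = 1.5
approx = partial_sum(a, 200_000)
exact = closed_form(a)
```

The truncation error of the partial sum decays like 1/N, so 200,000 terms agree with the closed form to a few parts in a million; the closed form itself costs a single `tanh` evaluation, which is the efficiency gain such representations buy.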
VARIABLE SELECTION BY PSEUDO WAVELETS IN HETEROSCEDASTIC REGRESSION MODELS INVOLVING TIME SERIES
(none listed)
2006-01-01
A simple but efficient method has been proposed to select variables in heteroscedastic regression models. It is shown that the pseudo empirical wavelet coefficients corresponding to the significant explanatory variables in the regression models are clearly larger than those nonsignificant ones, on the basis of which a procedure is developed to select variables in regression models. The coefficients of the models are also estimated. All estimators are proved to be consistent.
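The thresholding idea in this abstract, that coefficients of truly significant variables stand clearly above those of non-significant ones, can be sketched in a simplified form. The toy below is not the paper's pseudo empirical wavelet construction: it uses plain marginal coefficients on invented homoscedastic data with a fixed cutoff, purely to illustrate the separation principle.

```python
import random

random.seed(1)
n, p = 200, 6
# Toy data: only variables 0 and 2 truly enter the regression.
beta_true = [3.0, 0.0, -2.0, 0.0, 0.0, 0.0]
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [sum(b * x for b, x in zip(beta_true, row)) + random.gauss(0, 1)
     for row in X]

# Marginal "pseudo coefficients": scaled inner products X_j . y / n.
coef = [sum(X[i][j] * y[i] for i in range(n)) / n for j in range(p)]

# Fixed cutoff for the toy; in practice a data-driven threshold is used,
# scaling roughly like sigma * sqrt(2 * log(p) / n).
CUTOFF = 1.0
selected = [j for j, c in enumerate(coef) if abs(c) > CUTOFF]
```

With this sample size, the noise level of the coefficients is a fraction of the cutoff, so the two significant variables are selected and the four null variables are rejected, mirroring the separation the abstract describes.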
Robert E. Keane
2012-01-01
Simulation modeling can be a powerful tool for generating information about the historical range of variation (HRV) in landscape conditions. In this chapter, I will discuss several aspects of the use of simulation modeling to generate landscape HRV data, including (1) the advantages and disadvantages of using simulation, (2) a brief review of possible landscape models, and…
Jump-Preserving Varying-Coefficient Models for Nonlinear Time Series
Cizek, Pavel; Koo, Chao
2017-01-01
An important and widely used class of semiparametric models is formed by the varying-coefficient models. Although the varying coefficients are traditionally assumed to be smooth functions, the varying-coefficient model is considered here with the coefficient functions containing a finite set of discontinuities…
The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates
Sivo, Stephen; Fan, Xitao; Witta, Lea
2005-01-01
The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…
Forecasting ocean wave energy: A Comparison of the ECMWF wave model with time series methods
Reikard, Gordon; Pinson, Pierre; Bidlot, Jean
2011-01-01
Recently, the technology has been developed to make wave farms commercially viable. Since electricity is perishable, utilities will be interested in forecasting ocean wave energy. The horizons involved in short-term management of power grids range from as little as a few hours to as long as several days. In selecting a method, the forecaster has a choice between physics-based models and statistical techniques. A further idea is to combine both types of models. This paper analyzes the forecasting properties of a well-known physics-based model, the European Center for Medium-Range Weather Forecasts (ECMWF) … energy flux. In the initial tests, the ECMWF model and the statistical models are compared directly. The statistical models do better at short horizons, producing more accurate forecasts in the 1–5 h range. The ECMWF model is superior at longer horizons. The convergence point, at which the two methods…
A New Approach to Improve Accuracy of Grey Model GMC(1,n) in Time Series Prediction
Sompop Moonchai
2015-01-01
This paper presents a modified grey model GMC(1,n) for use in systems that involve one dependent system behavior and n−1 relative factors. The proposed model was developed from the conventional GMC(1,n) model in order to improve its prediction accuracy by modifying the formula for calculating the background value, the scheme of parameter estimation, and the model prediction equation. The modified GMC(1,n) model was verified on two cases: forecasting CO2 emission in Thailand and forecasting electricity consumption in Thailand. The results demonstrated that the modified GMC(1,n) model achieved higher fitting and prediction accuracy than the conventional GMC(1,n) and D-GMC(1,n) models.
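For background, the single-variable GM(1,1) model that GMC(1,n) generalizes can be sketched compactly: accumulate the series (AGO), fit the grey equation x⁰(k) + a·z(k) = b by least squares on the background values z, and difference the fitted accumulated series back. The implementation below is a generic textbook GM(1,1) with invented demand numbers, not the authors' modified GMC(1,n).

```python
import math

def gm11_fit(x0):
    # Accumulated generating operation (AGO) and background values.
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # Least squares for x0(k) = -a*z(k) + b via 2x2 normal equations.
    m = n - 1
    szz = sum(v * v for v in z)
    sz = sum(z)
    sy = sum(y)
    szy = sum(v * w for v, w in zip(z, y))
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    return a, b

def gm11_predict(x0, steps):
    # Fitted accumulated series, then inverse AGO (first differences).
    a, b = gm11_fit(x0)
    c = x0[0] - b / a
    x1_hat = [c * math.exp(-a * k) + b / a for k in range(len(x0) + steps)]
    return [x1_hat[0]] + [x1_hat[k] - x1_hat[k - 1]
                          for k in range(1, len(x1_hat))]

# Toy usage: a near-exponential series (hypothetical numbers).
series = [100.0 * math.exp(0.05 * k) for k in range(8)]
forecast = gm11_predict(series, 1)[8]
```

Because the GM(1,1) response is itself exponential, the one-step forecast on this exponentially growing toy series lands within a few percent of the true continuation; the modifications described in the abstract target exactly the background value z and the parameter estimation step shown here.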
Serinaldi, F.
2010-12-01
Discrete multiplicative random cascade (MRC) models have been extensively studied and applied to disaggregate rainfall data, thanks to their formal simplicity and the small number of parameters involved. Focusing on temporal disaggregation, the rationale of these models is to multiply the value assumed by a physical attribute (e.g., rainfall intensity) at a given time scale L by a suitable number b of random weights, to obtain b attribute values corresponding to statistically plausible observations at a smaller L/b time resolution. In the original formulation of the MRC models, the random weights were assumed to be independent and identically distributed. However, in several studies this hypothesis did not appear realistic for the observed rainfall series, as the distribution of the weights was shown to depend on the space-time scale and the rainfall intensity. Since these findings contrast with the scale-invariance assumption behind the MRC models and affect their applicability, it is worth studying their nature. This study explores the possible dependence of the parameters of two discrete MRC models on rainfall intensity and time scale, by analyzing point rainfall series with 5-min time resolution. Considering a discrete microcanonical (MC) model based on the beta distribution and a discrete canonical beta-logstable (BLS) model, the analysis points out that the relations between the parameters and rainfall intensity across the time scales are detectable and can be modeled by a set of simple functions accounting for the parameter-rainfall intensity relationship, and another set describing the link between the parameters and the time scale. Therefore, the MC and BLS models were modified to explicitly account for these relationships and compared with the continuous-in-scale universal multifractal (CUM) model, which is used as a physically based benchmark model. Monte Carlo simulations point out that the dependence of MC and BLS…
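The cascade mechanism described in this abstract can be sketched for the microcanonical (mass-conserving) case with branching number b = 2. The sketch below is generic: a symmetric beta-distributed weight splits each parent value into two children that sum exactly to the parent. In the modified models of the study the beta parameter would additionally depend on rainfall intensity and time scale; here it is held fixed, as in the original i.i.d. formulation, and the rainfall numbers are invented.

```python
import random

random.seed(42)

def mrc_disaggregate(values, levels, alpha=2.0):
    # Microcanonical binary cascade: each parent value is split into two
    # children whose sum equals the parent, so mass is conserved exactly
    # at every level of the cascade.
    for _ in range(levels):
        children = []
        for x in values:
            w = random.betavariate(alpha, alpha)  # symmetric weight in (0, 1)
            children.extend([w * x, (1.0 - w) * x])
        values = children
    return values

hourly = [0.0, 4.8, 12.4, 3.2]        # hypothetical hourly totals (mm)
fine = mrc_disaggregate(hourly, 3)    # 2**3 = 8 children per hour
```

Three cascade levels take each hourly total down to 7.5-min resolution; by construction, each block of eight fine-scale values sums back to its parent hour, which is the defining property of the microcanonical variant.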