WorldWideScience

Sample records for turbulence time series

  1. Turbulence time series data hole filling using Karhunen-Loeve and ARIMA methods

    International Nuclear Information System (INIS)

    Chang, M P J L; Nazari, H; Font, C O; Gilbreath, G C; Oh, E

    2007-01-01

    Measurements of optical turbulence time series data using unattended instruments over long time intervals inevitably lead to data drop-outs or degraded signals. We present a comparison of methods, based on Principal Component Analysis (also known as the Karhunen-Loeve decomposition) and on ARIMA modelling, that seek to correct for these event-induced and mechanically induced signal drop-outs and degradations. We report on the quality of the correction by examining the Intrinsic Mode Functions generated by Empirical Mode Decomposition. The data studied are optical turbulence parameter time series from a commercial long-path-length optical anemometer/scintillometer, measured over several hundred metres in outdoor environments.
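As a rough illustration of the gap-filling idea (not the authors' Karhunen-Loeve/ARIMA pipeline), one can fit a simple autoregressive model to the data preceding a drop-out and forward-predict across the hole. The function names, the AR(4) order, and the toy signal below are illustrative assumptions.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) coefficients for a 1-D series x."""
    # column k holds the lag-(k+1) values aligned with the targets x[p:]
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

def fill_hole(x, start, stop, p=4):
    """Replace x[start:stop] (a drop-out) with iterated AR(p) forecasts."""
    coeffs = fit_ar(x[:start], p)
    y = x.copy()
    for t in range(start, stop):
        y[t] = coeffs @ y[t - 1:t - p - 1:-1]   # most recent lag first
    return y

rng = np.random.default_rng(0)
t = np.arange(400)
x = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(400)
filled = fill_hole(x, 200, 220)   # pretend samples 200-219 dropped out
```

Outside the hole the series is untouched; inside, the AR forecast tracks the dominant oscillation, which is the same role the ARIMA stage plays in the record above.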

  2. Time-Series Analysis of Intermittent Velocity Fluctuations in Turbulent Boundary Layers

    Science.gov (United States)

    Zayernouri, Mohsen; Samiee, Mehdi; Meerschaert, Mark M.; Klewicki, Joseph

    2017-11-01

    Classical turbulence theory is modified under the inhomogeneities produced by the presence of a wall. In this regard, we propose a new time series model for the streamwise velocity fluctuations in the inertial sub-layer of turbulent boundary layers. The new model employs tempered fractional calculus and seamlessly extends the classical 5/3 spectral model of Kolmogorov in the inertial subrange to the whole spectrum from large to small scales. Moreover, the proposed time-series model allows the quantification of data uncertainties in the underlying stochastic cascade of turbulent kinetic energy. The model is tested using well-resolved streamwise velocity measurements up to friction Reynolds numbers of about 20,000. The physics of the energy cascade are briefly described within the context of the determined model parameters. This work was supported by the AFOSR Young Investigator Program (YIP) award (FA9550-17-1-0150) and partially by MURI/ARO (W911NF-15-1-0562).

  3. Tempered fractional time series model for turbulence in geophysical flows

    Science.gov (United States)

    Meerschaert, Mark M.; Sabzikar, Farzad; Phanikumar, Mantha S.; Zeleke, Aklilu

    2014-09-01

    We propose a new time series model for velocity data in turbulent flows. The new model employs tempered fractional calculus to extend the classical 5/3 spectral model of Kolmogorov. Application to wind speed and water velocity in a large lake are presented, to demonstrate the practical utility of the model.
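A sketch of the spectral idea (the tempered model in the paper is more elaborate): a tempered power law S(ω) ∝ (ω² + λ²)^(−5/6) reduces to Kolmogorov's ω^(−5/3) law well inside the inertial range, while the tempering parameter λ flattens the spectrum at the largest scales. The constant and the λ value below are arbitrary choices for illustration.

```python
import numpy as np

def tempered_spectrum(w, lam, c=1.0):
    """Tempered power-law spectral density: ~ w**(-5/3) for w >> lam,
    approximately flat for w << lam."""
    return c * (w**2 + lam**2) ** (-5.0 / 6.0)

w = np.logspace(-2, 2, 500)
s = tempered_spectrum(w, lam=0.1)

# log-log slope in the inertial range (w >> lam) approaches -5/3
hi = w > 10.0
slope = np.polyfit(np.log(w[hi]), np.log(s[hi]), 1)[0]
```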

  4. Tempered fractional time series model for turbulence in geophysical flows

    International Nuclear Information System (INIS)

    Meerschaert, Mark M; Sabzikar, Farzad; Phanikumar, Mantha S; Zeleke, Aklilu

    2014-01-01

    We propose a new time series model for velocity data in turbulent flows. The new model employs tempered fractional calculus to extend the classical 5/3 spectral model of Kolmogorov. Application to wind speed and water velocity in a large lake are presented, to demonstrate the practical utility of the model. (paper)

  5. Complexity analysis of the turbulent environmental fluid flow time series

    Science.gov (United States)

    Mihailović, D. T.; Nikolić-Đorić, E.; Drešković, N.; Mimić, G.

    2014-02-01

    We have used the Kolmogorov complexities, sample and permutation entropies to quantify the randomness degree in river flow time series of two mountain rivers in Bosnia and Herzegovina, representing the turbulent environmental fluid, for the period 1926-1990. In particular, we have examined the monthly river flow time series from two rivers (the Miljacka and the Bosnia) in the mountain part of their flow and then calculated the Kolmogorov complexity (KL) based on the Lempel-Ziv Algorithm (LZA) (lower-KLL and upper-KLU), sample entropy (SE) and permutation entropy (PE) values for each time series. The results indicate that the KLL, KLU, SE and PE values in two rivers are close to each other regardless of the amplitude differences in their monthly flow rates. We have illustrated the changes in mountain river flow complexity by experiments using (i) the data set for the Bosnia River and (ii) anticipated human activities and projected climate changes. We have explored the sensitivity of considered measures in dependence on the length of time series. In addition, we have divided the period 1926-1990 into three subintervals: (a) 1926-1945, (b) 1946-1965, (c) 1966-1990, and calculated the KLL, KLU, SE, PE values for the various time series in these subintervals. It is found that during the period 1946-1965, there is a decrease in their complexities, and corresponding changes in the SE and PE, in comparison to the period 1926-1990. This complexity loss may be primarily attributed to (i) human interventions, after the Second World War, on these two rivers because of their use for water consumption and (ii) climate change in recent times.
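Of the measures named above, permutation entropy is the simplest to sketch. A minimal NumPy version, normalized to [0, 1], with assumed defaults for the ordinal-pattern order and delay:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy (near 0 = regular, near 1 = random)."""
    n = len(x) - (order - 1) * delay
    # ordinal pattern (rank order) of each embedded vector
    patterns = np.array([np.argsort(x[i:i + order * delay:delay])
                         for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(factorial(order))

rng = np.random.default_rng(1)
pe_noise = permutation_entropy(rng.standard_normal(5000))            # ~1
pe_regular = permutation_entropy(np.sin(np.linspace(0, 40 * np.pi, 5000)))
```

White noise visits all ordinal patterns uniformly, while a smooth periodic flow is dominated by monotone patterns, which is the contrast the river-flow study exploits.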

  6. The application of complex network time series analysis in turbulent heated jets

    International Nuclear Information System (INIS)

    Charakopoulos, A. K.; Karakasidis, T. E.; Liakopoulos, A.; Papanicolaou, P. N.

    2014-01-01

    In the present study, we applied the methodology of complex network-based time series analysis to experimental temperature time series from a vertical turbulent heated jet. More specifically, we approach the hydrodynamic problem of discriminating time series corresponding to various regions relative to the jet axis, i.e., distinguishing time series recorded close to the jet axis from time series originating in regions with a different dynamical regime, based on the constructed network properties. Applying the phase-space transformation method (k nearest neighbors) and also the visibility algorithm, we transformed time series into networks and evaluated topological properties of the networks such as degree distribution, average path length, diameter, modularity, and clustering coefficient. The results show that the complex network approach allows distinguishing, identifying, and exploring in detail various dynamical regions of the jet flow, and associating them with the corresponding physical behavior. In addition, in order to reject the hypothesis that the studied networks originate from a stochastic process, we generated random networks and compared their statistical properties with those of the networks obtained from the experimental data. As far as the efficiency of the two methods for network construction is concerned, we conclude that both methodologies lead to network properties that present almost the same qualitative behavior and allow us to reveal the underlying system dynamics.
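As a concrete sketch of one of the two constructions, the natural visibility algorithm connects samples i and j whenever the straight line between them stays above all intermediate samples. A brute-force O(n²) version (illustrative, not the authors' implementation):

```python
import numpy as np

def natural_visibility_edges(x):
    """Edges (i, j) of the natural visibility graph of series x:
    i sees j if every intermediate sample lies below the line i-j."""
    n, edges = len(x), []
    for i in range(n - 1):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            line = x[i] + (x[j] - x[i]) * (k - i) / (j - i)
            if j == i + 1 or np.all(x[k] < line):
                edges.append((i, j))
    return edges

x = np.array([1.0, 3.0, 2.0, 4.0, 1.0])
edges = natural_visibility_edges(x)
degree = np.bincount(np.array(edges).ravel(), minlength=len(x))
```

Network statistics such as the degree distribution then follow directly from `edges`, as in the jet study above.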

  7. Possible signatures of dissipation from time-series analysis techniques using a turbulent laboratory magnetohydrodynamic plasma

    International Nuclear Information System (INIS)

    Schaffner, D. A.; Brown, M. R.; Rock, A. B.

    2016-01-01

    The frequency spectrum of magnetic fluctuations as measured on the Swarthmore Spheromak Experiment is broadband and exhibits a nearly Kolmogorov 5/3 scaling. It features a steepening region which is indicative of dissipation of magnetic fluctuation energy similar to that observed in fluid and magnetohydrodynamic turbulence systems. Two non-spectrum based time-series analysis techniques are implemented on this data set in order to seek other possible signatures of turbulent dissipation beyond just the steepening of fluctuation spectra. Presented here are results for the flatness, permutation entropy, and statistical complexity, each of which exhibits a particular character at spectral steepening scales which can then be compared to the behavior of the frequency spectrum.

  8. Time series with tailored nonlinearities

    Science.gov (United States)

    Räth, C.; Laut, I.

    2015-10-01

    It is demonstrated how to generate time series with tailored nonlinearities by inducing well-defined constraints on the Fourier phases. Correlations between the phase information of adjacent phases and (static and dynamic) measures of nonlinearities are established and their origin is explained. By applying a set of simple constraints on the phases of an originally linear and uncorrelated Gaussian time series, the observed scaling behavior of the intensity distribution of empirical time series can be reproduced. The power law character of the intensity distributions being typical for, e.g., turbulence and financial data can thus be explained in terms of phase correlations.
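The converse operation, destroying rather than inducing phase correlations, is the standard phase-randomized surrogate. A minimal sketch for a real-valued input series:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum as x but random Fourier
    phases: any nonlinearity encoded in phase correlations is destroyed."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = np.angle(X[0])           # keep the mean unchanged
    if len(x) % 2 == 0:
        phases[-1] = np.angle(X[-1])     # Nyquist bin must stay real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(1024))   # correlated toy series
s = phase_randomized_surrogate(x, rng)
```

Comparing a nonlinearity measure on `x` against an ensemble of such surrogates is the usual test for phase-encoded structure of the kind the paper tailors deliberately.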

  9. Time Change and Universality in Turbulence and Finance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.; Schmiegel, Jürgen; Shephard, Neil

    Empirical time series of turbulent flows and financial markets reveal some common basic stylized features. In particular, the densities of velocity increments and log returns are well fitted within the class of Normal inverse Gaussian distributions and show a similar evolution across time scales ...

  10. Mean Flow and Turbulence Near a Series of Dikes

    Science.gov (United States)

    Yaeger, M. A.; Duan, J. G.

    2008-12-01

    Scour around various structures obstructing flow in an open channel is a common problem faced by river engineers. To better understand why this occurs, two questions must be answered: what are the mean flow and turbulence distributions around these structures and how do these two fields affect sediment transport? In addition, are the mean flow or turbulence properties more important in predicting the local transport rate? To answer these questions, a near-bed turbulence and shear stress study was conducted in a flat, fixed bed laboratory flume. Three dikes were placed on the left wall at right angles to the flow, extending partway into the flow, and remaining fully emerged throughout the experiment. A micro acoustic Doppler velocimeter (ADV) was used to measure velocities near the bed in the x, y, and z directions and then the turbulence intensities and Reynolds stresses were calculated from these measurements. Preliminary results showed that mean velocity has no relation to the formation of scour near the tips of the dikes but that Reynolds stresses and turbulence intensities do. It was shown that the horizontal component of the Reynolds stress near the bed contributed the most to the formation of scour. The maximum value of this component was over 200 times that of the mean bed shear stress of the incoming flow, whereas in a single dike field, the same Reynolds stress is about 60 times that of the incoming flow. The magnitudes of the other two components of the Reynolds stress were less than that of the horizontal component, with magnitudes about 20 times that of the incoming flow. This may be attributed to the very small contribution of the vertical velocity in these components. Turbulence intensity magnitudes were about 3 to 5 times that of the incoming flow, with the largest being u'. The largest values for both Reynolds stresses and turbulence intensities were seen at the tip of the second dike in the series. 
Better understanding of these flow processes will

  11. Novel approaches to estimating the turbulent kinetic energy dissipation rate from low- and moderate-resolution velocity fluctuation time series

    Directory of Open Access Journals (Sweden)

    M. Wacławczyk

    2017-11-01

    In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method of Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation, the new approaches are suitable for use with signals originating from airborne experiments. Their suitability is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign, as well as synthetic turbulence data. They appear useful and complementary to existing methods. We show that the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power spectral method. Hence, their application to short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of TKE dissipation rate retrieval.
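A toy check of the zero-crossing principle (Rice's theorem) that underlies the method; the actual estimators and their spectral-cutoff corrections in the paper are more involved, and the Gaussian smoothing kernel and sampling interval below are arbitrary illustration choices. For a Gaussian signal, the zero-crossing rate equals (1/π)·σ(du/dt)/σ(u), which is the link between crossing counts and the small-scale (dissipation-range) content of the velocity signal.

```python
import numpy as np

def zero_crossing_rate(u, dt):
    """Zero crossings of the mean-removed signal per unit time."""
    u = u - u.mean()
    return np.count_nonzero(np.diff(np.sign(u)) != 0) / ((len(u) - 1) * dt)

def rice_rate(u, dt):
    """Rice's-theorem prediction: (1/pi) * std(du/dt) / std(u)."""
    return np.gradient(u, dt).std() / (np.pi * u.std())

dt = 1e-3
rng = np.random.default_rng(3)
# band-limited Gaussian signal: white noise smoothed by a Gaussian kernel
kernel = np.exp(-0.5 * (np.arange(-200, 201) * dt / 0.02) ** 2)
u = np.convolve(rng.standard_normal(100_000), kernel, mode="same")
measured, predicted = zero_crossing_rate(u, dt), rice_rate(u, dt)
```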

  12. Turbulencelike Behavior of Seismic Time Series

    International Nuclear Information System (INIS)

    Manshour, P.; Saberi, S.; Sahimi, Muhammad; Peinke, J.; Pacheco, Amalio F.; Rahimi Tabar, M. Reza

    2009-01-01

    We report on a stochastic analysis of Earth's vertical velocity time series by using methods originally developed for complex hierarchical systems and, in particular, for turbulent flows. Analysis of the fluctuations of the detrended increments of the series reveals a pronounced transition in their probability density function from Gaussian to non-Gaussian. The transition occurs 5-10 hours prior to a moderate or large earthquake, hence representing a new and reliable precursor for detecting such earthquakes.

  13. Turbulence modification in bubbly upward pipe flow. Extraction of time resolved turbulent microscopic structure by high speed PIV

    International Nuclear Information System (INIS)

    Yoshimura, Koki; Minato, Daiju; Sato, Yohei; Hishida, Koichi

    2004-01-01

    The objective of the present study is to obtain detailed information on the effects of bubbles on the modification of turbulent structure through time-series measurements using high-speed time-resolved PIV. The experiments were carried out in a fully developed vertical pipe upflow of water at a Reynolds number of 9700 and a void fraction of 0.5%. It is observed that turbulence production was decreased and the dissipation rate was enhanced in the whole domain. We analyzed the effects of bubbles on the modification of the energy cascade process from power spectra of the velocity fluctuations of the continuous phase. (author)

  14. De-trending of turbulence measurements

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose; Larsen, Gunner Chr.

    2007-01-01

    De-trending of turbulence standard deviations normally requires access to the basic time series. However, by including a suitable model of the time variation of the mean wind speed, it is possible to estimate an approximate (linear) trend correction based on time series statistics only. This paper presents such an algorithm for de-trending of the turbulence standard deviation. The performance of the proposed de-trending algorithm is assessed using a huge number of time series recorded at different types of terrain and orography. The strategy is the following: based on the available time series information, a conventional (linear) time series de-trending is performed and subsequently compared with the prediction from the proposed algorithm. The de-trended turbulence intensities are reduced in the range of 3 – 15 % compared to the raw turbulence intensity. The performed analysis shows that the proposed model, based on statistical …

  15. Stochastic nature of series of waiting times

    Science.gov (United States)

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H.; Salehi, E.; Behjat, E.; Qorbani, M.; Khazaei Nezhad, M.; Zirak, M.; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M. Reza Rahimi

    2013-06-01

    Although fluctuations in the waiting time series have been studied for a long time, some important issues, such as its long-range memory and its stochastic features in the presence of nonstationarity, have so far remained unstudied. Here we find that the “waiting times” series for a given increment level have long-range correlations, with Hurst exponents belonging to the interval 1/2 < H < 1, reflecting the long-range memory of the waiting time distribution. We find that the logarithmic difference of the waiting times series has a short-range correlation, and we then study its stochastic nature using the Markovian method and determine the corresponding Kramers-Moyal coefficients. As an example, we analyze the velocity fluctuations in high-Reynolds-number turbulence and determine the level dependence of Markov time scales, as well as the drift and diffusion coefficients. We show that the waiting time distributions exhibit power-law tails, and we were able to model the distribution with a continuous time random walk.
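A minimal sketch of one way such Hurst exponents can be estimated, the aggregated-variance method (the paper may well use other estimators): block means of a stationary series with Hurst exponent H have variance scaling like m^(2H−2) in the block size m, so a log-log fit recovers H.

```python
import numpy as np

def hurst_aggvar(x, scales):
    """Aggregated-variance Hurst estimate: Var of non-overlapping block
    means scales as m**(2H - 2) for Hurst exponent H."""
    v = [np.var([x[i:i + m].mean() for i in range(0, len(x) - m + 1, m)])
         for m in scales]
    slope = np.polyfit(np.log(scales), np.log(v), 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(4)
# uncorrelated noise has H = 1/2; persistent series give 1/2 < H < 1
h_white = hurst_aggvar(rng.standard_normal(20_000), [10, 20, 40, 80])
```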

  16. Using Indirect Turbulence Measurements for Real-Time Parameter Estimation in Turbulent Air

    Science.gov (United States)

    Martos, Borja; Morelli, Eugene A.

    2012-01-01

    The use of indirect turbulence measurements for real-time estimation of parameters in a linear longitudinal dynamics model in atmospheric turbulence was studied. It is shown that measuring the atmospheric turbulence makes it possible to treat the turbulence as a measured explanatory variable in the parameter estimation problem. Commercial off-the-shelf sensors were researched and evaluated, then compared to air data booms. Sources of colored noise in the explanatory variables resulting from typical turbulence measurement techniques were identified and studied. A major source of colored noise in the explanatory variables was identified as frequency-dependent upwash and time delay. The resulting upwash and time delay corrections were analyzed and compared to previous time-shift dynamic modeling research. Simulation data as well as flight test data in atmospheric turbulence were used to verify the time delay behavior. Recommendations are given for follow-on flight research and instrumentation.

  17. Order and turbulence in rf-driven Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1994-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find coherent, ordered, partially ordered and turbulent regimes in the IV characteristics. The ordered regime corresponds to giant Shapiro steps. In the turbulent regime there is a saturation of the broad band noise for a large number of junctions. This corresponds to a breaking of the law of large numbers already seen in globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. (author). 18 refs, 3 figs

  18. Compounding approach for univariate time series with nonstationary variances

    Science.gov (United States)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
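A sketch of the compounding mechanism (the lognormal variance mixture here is an assumed choice; the paper determines the parameter distribution empirically): locally Gaussian windows whose variance is itself random produce heavy tails, i.e. excess kurtosis, in the long-horizon sample statistics, even though each window is Gaussian.

```python
import numpy as np

rng = np.random.default_rng(5)
window, n_windows = 100, 2000

# locally Gaussian windows, but with a random (lognormal) local sigma
local_sigma = rng.lognormal(mean=0.0, sigma=0.5, size=n_windows)
series = np.concatenate([s * rng.standard_normal(window)
                         for s in local_sigma])

def excess_kurtosis(x):
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

k_compound = excess_kurtosis(series)   # clearly heavy-tailed
k_gauss = excess_kurtosis(rng.standard_normal(window * n_windows))
```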

  19. Inverse statistical approach in heartbeat time series

    International Nuclear Information System (INIS)

    Ebadi, H; Shirazi, A H; Mani, Ali R; Jafari, G R

    2011-01-01

    We present an investigation of heart cycle time series using inverse statistical analysis, a concept borrowed from the study of turbulence. Using this approach, we studied the distribution of the exit times needed to achieve a predefined level of heart rate alteration. Such analysis uncovers the most likely waiting time needed to reach a certain change in the heart rate. This analysis showed a significant difference between the raw data and shuffled data when the heart rate accelerates or decelerates to a rare event. We also report that inverse statistical analysis can distinguish between electrocardiograms taken from healthy volunteers and those from patients with heart failure.
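The exit-time construction can be sketched as follows, on a toy random-walk input (the paper applies it to heart-rate series): from each starting index, record the first time at which the series has moved by at least a given level.

```python
import numpy as np

def exit_times(x, level):
    """Inverse-statistics exit times: for each start t, the first lag i
    at which |x[t+i] - x[t]| reaches `level` (starts that never reach
    the level within the record are dropped)."""
    times = []
    for t in range(len(x)):
        hits = np.nonzero(np.abs(x[t + 1:] - x[t]) >= level)[0]
        if hits.size:
            times.append(hits[0] + 1)
    return np.array(times)

rng = np.random.default_rng(6)
walk = np.cumsum(rng.standard_normal(3000))
tau = exit_times(walk, level=5.0)   # distribution of waiting times
```

The histogram of `tau` is the exit-time distribution whose mode ("most likely waiting time") the abstract refers to.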

  20. Multi-time, multi-scale correlation functions in turbulence and in turbulent models

    NARCIS (Netherlands)

    Biferale, L.; Boffetta, G.; Celani, A.; Toschi, F.

    1999-01-01

    A multifractal-like representation for multi-time, multi-scale velocity correlation in turbulence and dynamical turbulent models is proposed. The importance of subleading contributions to time correlations is highlighted. The fulfillment of the dynamical constraints due to the equations of motion is

  1. Turbulent Fluid Motion 6: Turbulence, Nonlinear Dynamics, and Deterministic Chaos

    Science.gov (United States)

    Deissler, Robert G.

    1996-01-01

    Several turbulent and nonturbulent solutions of the Navier-Stokes equations are obtained. The unaveraged equations are used numerically in conjunction with tools and concepts from nonlinear dynamics, including time series, phase portraits, Poincare sections, Liapunov exponents, power spectra, and strange attractors. Initially neighboring solutions for a low-Reynolds-number fully developed turbulence are compared. The turbulence is sustained by a nonrandom time-independent external force. The solutions, on the average, separate exponentially with time, having a positive Liapunov exponent. Thus, the turbulence is characterized as chaotic. In a search for solutions which contrast with the turbulent ones, the Reynolds number (or strength of the forcing) is reduced. Several qualitatively different flows are noted. These are, respectively, fully chaotic, complex periodic, weakly chaotic, simple periodic, and fixed-point. Of these, we classify only the fully chaotic flows as turbulent. Those flows have both a positive Liapunov exponent and Poincare sections without pattern. By contrast, the weakly chaotic flows, although having positive Liapunov exponents, have some pattern in their Poincare sections. The fixed-point and periodic flows are nonturbulent, since turbulence, as generally understood, is both time-dependent and aperiodic.
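As a toy stand-in for the Liapunov-exponent diagnostic used above (the logistic map, not the Navier-Stokes solutions of the report): the orbit-averaged logarithm of the local stretching rate gives the exponent, and a positive value flags chaos. For the fully chaotic logistic map at r = 4 the exact value is ln 2.

```python
import numpy as np

def logistic_lyapunov(r, x0=0.2, n=100_000, transient=1_000):
    """Liapunov exponent of x -> r*x*(1-x), as the orbit average of
    log|f'(x)| = log|r*(1 - 2x)| after discarding a transient."""
    x, acc = x0, 0.0
    for i in range(n + transient):
        if i >= transient:
            acc += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

lam = logistic_lyapunov(4.0)   # positive => chaotic; exact value ln 2
```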

  2. Gaussian vs non-Gaussian turbulence: impact on wind turbine loads

    DEFF Research Database (Denmark)

    Berg, Jacob; Natarajan, Anand; Mann, Jakob

    2016-01-01

    From large-eddy simulations of atmospheric turbulence, a representation of Gaussian turbulence is constructed by randomizing the phases of the individual modes of variability. Time series of Gaussian turbulence are constructed and compared with their non-Gaussian counterpart. Time series from the two … taking into account the safety factor for extreme moments. Other extreme load moments as well as the fatigue loads are not affected by the use of non-Gaussian turbulent inflow. It is suggested that the turbine thus acts like a low-pass filter that averages out the non-Gaussian behaviour, which …

  3. Validity of the assumption of Gaussian turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, M.; Hansen, K.S.; Juul Pedersen, B.

    2000-07-01

    Wind turbines are designed to withstand the impact of turbulent winds, whose fluctuations are usually assumed to follow a Gaussian probability distribution. Based on a large number of measurements from many sites, this seems a reasonable assumption in flat homogeneous terrain, whereas it may fail in complex terrain. At these sites the wind speed often has a skew distribution with more frequent lulls than gusts. In order to simulate aerodynamic loads, a numerical turbulence simulation method was developed and implemented. This method can simulate multiple time series with specified, not necessarily Gaussian, distributions without distortion of the spectral distribution or spatial coherence. The simulated time series were used as input to the dynamic-response simulation program Vestas Turbine Simulator (VTS). In this way we simulated the dynamic response of systems exposed to turbulence of either Gaussian or extreme, yet realistic, non-Gaussian probability distribution. Certain loads on turbines with active pitch regulation were enhanced by up to 15% compared to pure Gaussian turbulence. It should, however, be said that the undesired effect depends on the dynamic system, and it might be mitigated by tuning the wind turbine regulation system to local turbulence characteristics. (au)

  4. Non-gaussian turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Hoejstrup, J [NEG Micon Project Development A/S, Randers (Denmark); Hansen, K S [Denmarks Technical Univ., Dept. of Energy Engineering, Lyngby (Denmark); Pedersen, B J [VESTAS Wind Systems A/S, Lem (Denmark); Nielsen, M [Risoe National Lab., Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    The PDFs of atmospheric turbulence have somewhat wider tails than a Gaussian, especially regarding accelerations, whereas velocities are close to Gaussian. This behaviour is being investigated using data from a large WEB-database in order to quantify the amount of non-Gaussianity. Models for non-Gaussian turbulence have been developed, by which artificial turbulence can be generated with specified distributions, spectra and cross-correlations. The artificial time series will then be used in load models, and the resulting loads in the Gaussian and the non-Gaussian cases will be compared. (au)

  5. Simulation of Ground Winds Time Series for the NASA Crew Launch Vehicle (CLV)

    Science.gov (United States)

    Adelfang, Stanley I.

    2008-01-01

    Simulation of wind time series based on power spectrum density (PSD) and spectral coherence models for ground wind turbulence is described. The wind models, originally developed for the Shuttle program, are based on wind measurements at the NASA 150-m meteorological tower at Cape Canaveral, FL. The current application is for the design and/or protection of the CLV from wind effects during on-pad exposure during periods from as long as days prior to launch, to seconds or minutes just prior to launch and seconds after launch. The evaluation of vehicle response to wind will influence the design and operation of constraint systems for support of the on-pad vehicle. Longitudinal and lateral wind component time series are simulated at critical vehicle locations. The PSD model for wind turbulence is a function of mean wind speed, elevation and temporal frequency. Integration of the PSD equation over a selected frequency range yields the variance of the time series to be simulated. The square root of the PSD defines a low-pass filter that is applied to adjust the components of the Fast Fourier Transform (FFT) of Gaussian white noise. The first simulated time series near the top of the launch vehicle is the inverse transform of the adjusted FFT. Simulation of the wind component time series at the nearest adjacent location (and all other succeeding next nearest locations) is based on a model for the coherence between winds at two locations as a function of frequency and separation distance, where the adjacent locations are separated vertically and/or horizontally. The coherence function is used to calculate a coherence weighted FFT of the wind at the next nearest location, given the FFT of the simulated time series at the previous location and the essentially incoherent FFT of the wind at the selected location derived a priori from the PSD model. The simulated time series at each adjacent location is the inverse Fourier transform of the coherence weighted FFT. 
For a selected
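The sqrt(PSD) filtering recipe described above can be sketched generically. The PSD form below is a hypothetical low-frequency-weighted shape, not the NASA 150-m tower model, and normalization constants are omitted; the coherence-weighting step for adjacent locations is likewise left out.

```python
import numpy as np

def simulate_from_psd(psd_func, n, dt, rng):
    """Shape the FFT of Gaussian white noise by sqrt(PSD) and invert,
    yielding a time series with (approximately) the target spectrum."""
    freqs = np.fft.rfftfreq(n, dt)
    white = np.fft.rfft(rng.standard_normal(n))
    shaped = white * np.sqrt(psd_func(freqs))
    return np.fft.irfft(shaped, n=n)

# hypothetical Lorentzian-type turbulence spectrum, corner at 0.5 Hz
psd = lambda f: 1.0 / (1.0 + (f / 0.5) ** 2)

rng = np.random.default_rng(7)
u = simulate_from_psd(psd, n=8192, dt=0.05, rng=rng)
```

Because the spectrum is concentrated at low frequencies, the simulated series is strongly correlated from sample to sample, unlike the white noise it started from.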

  6. Tetrahedral-Mesh Simulation of Turbulent Flows with the Space-Time Conservative Schemes

    Science.gov (United States)

    Chang, Chau-Lyan; Venkatachari, Balaji; Cheng, Gary C.

    2015-01-01

    Direct numerical simulations of turbulent flows are predominantly carried out using structured, hexahedral meshes despite decades of development in unstructured mesh methods. Tetrahedral meshes offer ease of mesh generation around complex geometries and the potential of an orientation-free grid that would provide un-biased small-scale dissipation and more accurate intermediate scale solutions. However, due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for triangular and tetrahedral meshes at the cell interfaces, numerical issues exist when flow discontinuities or stagnation regions are present. The space-time conservative conservation element solution element (CESE) method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to more accurately simulate turbulent flows using unstructured tetrahedral meshes. To pave the way towards accurate simulation of shock/turbulent boundary-layer interaction, a series of wave and shock interaction benchmark problems that increase in complexity are computed in this paper with triangular/tetrahedral meshes. Preliminary computations for the normal shock/turbulence interactions are carried out with a relatively coarse mesh, by direct numerical simulation standards, in order to assess other effects such as boundary conditions and the necessity of a buffer domain. The results indicate that qualitative agreement with previous studies can be obtained for flows where strong shocks coexist with unsteady waves displaying a broad range of scales, using a relatively compact computational domain and less stringent requirements for grid clustering near the shock. With the space-time conservation properties, stable solutions without any spurious wave reflections can be obtained without a need for buffer domains near the outflow/farfield boundaries. Computational results for the

  7. De-trending of turbulence measurements

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose; Larsen, Gunner Chr.

    2006-01-01

    … contribution to the wind speed turbulence intensity for a number of representative locations. A linear de-trending process has been implemented during indexing of the time series. The observed de-trended turbulence intensities are reduced by 3 – 15 % compared to the raw turbulence intensity. This reduction depends primarily on site characteristics and local mean wind speed variations. Reduced turbulence intensity will result in lower design fatigue loads. This aspect of de-trending is discussed by use of a simple heuristic load model. Finally, an empirical model for de-trending wind resource data …

  8. Comparisons between two wavelet functions in extracting coherent structures from solar wind time series

    International Nuclear Information System (INIS)

    Bolzani, M.J.A.; Guarnieri, F.L.; Vieira, Paulo Cesar

    2009-01-01

    Nowadays, wavelet analysis of turbulent flows has become increasingly popular. However, the geometric characteristics of wavelet functions are still poorly explored. In this work we compare the performance of two wavelet functions in extracting the coherent structures from solar wind velocity time series. The data series are from the years 1996 to 2002 (except 1998 and 1999). The wavelet algorithm decomposes each annual time series into two components, a coherent part and a non-coherent one, using the Daubechies-4 and Haar wavelet functions. The threshold assumed is based on a percentage of the maximum variance found in each dyadic scale. After the extraction procedure, we applied power spectral density analysis to the original and coherent time series to obtain spectral indices. The results show higher spectral indices for the coherent part obtained by Daubechies-4 than for that obtained by the Haar wavelet function. Using the kurtosis statistical parameter on the coherent and non-coherent time series, it was possible to conjecture that the differences found between the two wavelet functions may be associated with their geometric forms. (author)
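A one-level Haar version of the extraction step can be sketched as follows. The paper thresholds per dyadic scale using a percentage of the maximum variance and also compares with Daubechies-4; the single-level, max-based threshold here is a simplification, and the "incoherent" part below is just the discarded detail residual.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def extract_coherent(x, frac=0.1):
    """Keep detail coefficients above frac * max|d| (the 'coherent'
    part); the remainder is the incoherent background."""
    a, d = haar_dwt(x)
    mask = np.abs(d) >= frac * np.abs(d).max()
    coherent = haar_idwt(a, np.where(mask, d, 0.0))
    return coherent, x - coherent

rng = np.random.default_rng(8)
# a single sharp structure buried in weak noise
x = np.where(np.arange(512) == 256, 5.0, 0.0) + 0.1 * rng.standard_normal(512)
coh, incoh = extract_coherent(x)
```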

  9. A time-localized response of wave growth process under turbulent winds

    Directory of Open Access Journals (Sweden)

    Z. Ge

    2007-06-01

    Very short time series (with lengths of approximately 40 s, or 5-7 wave periods) of wind velocity fluctuations and wave elevation were recorded simultaneously and investigated using wavelet bispectral analysis. Rapid changes in the wave and wind spectra were detected, which were found to be intimately related to significant energy transfers through transient quadratic wind-wave and wave-wave interactions. A possible pattern of energy exchange between the wind and wave fields was further deduced. In particular, the generation and variation of the strong wave-induced perturbation velocity in the wind can be explained by the strengthening and diminishing of the associated quadratic interactions, which cannot be unveiled by linear theories. On small time scales, the wave-wave quadratic interactions were as active and effective in transferring energy as the wind-wave interactions. The results also showed that the wind turbulence was occasionally effective in transferring energy between the wind and the wave fields, so that the background turbulence in the wind cannot be completely neglected. Although these effects are all possibly significant over short times, the time-localized growth of the wave spectrum may not considerably affect the long-term process of wave development.

  10. Time series analysis of wind speed using VAR and the generalized impulse response technique

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, Bradley T. [Area of Information Systems and Quantitative Sciences, Rawls College of Business and Wind Science and Engineering Research Center, Texas Tech University, Lubbock, TX 79409-2101 (United States); Kruse, Jamie Brown [Center for Natural Hazard Research, East Carolina University, Greenville, NC (United States); Schroeder, John L. [Department of Geosciences and Wind Science and Engineering Research Center, Texas Tech University, Lubbock, TX (United States); Smith, Douglas A. [Department of Civil Engineering and Wind Science and Engineering Research Center, Texas Tech University, Lubbock, TX (United States)

    2007-03-15

    This research examines the interdependence in time series wind speed data measured at the same location at four different heights. A multiple-equation system known as a vector autoregression is proposed for characterizing the time series dynamics of wind. Additionally, the recently developed method of generalized impulse response analysis provides insight into the cross-effects of the wind series and their responses to shocks. Findings are based on analysis of contemporaneous wind speed time histories taken at 13, 33, 70 and 160 ft above ground level with a sampling rate of 10 Hz. The results indicate that the wind speed measured at 70 ft was the most variable. Further, the turbulence persisted longer at the 70-ft measurement than at the other heights. The greatest interdependence is observed at 13 ft. Gusts at 160 ft led to the greatest persistence to an 'own' shock and to the greatest persistence in the responses of the other wind series. (author)
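The two modelling steps above, a vector autoregression plus impulse responses, can be sketched in a few lines. The two-height setup and coefficients below are synthetic stand-ins, not the paper's tower data, and the impulse response shown is the plain (not generalized) variant.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic wind speeds at two heights: the lower level lags the upper one
n = 2000
u_hi = np.zeros(n); u_lo = np.zeros(n)
for t in range(1, n):
    u_hi[t] = 0.8 * u_hi[t - 1] + rng.standard_normal()
    u_lo[t] = 0.5 * u_lo[t - 1] + 0.3 * u_hi[t - 1] + rng.standard_normal()

Y = np.column_stack([u_hi, u_lo])                 # (n, 2)
X, Ynext = Y[:-1], Y[1:]
A = np.linalg.lstsq(X, Ynext, rcond=None)[0].T    # VAR(1) coefficient matrix

# impulse response: propagate a unit shock applied to the upper level
h = 10
resp = np.zeros((h, 2)); resp[0] = [1.0, 0.0]
for k in range(1, h):
    resp[k] = A @ resp[k - 1]
print(A.round(2))
```

Row `A[1]` captures the cross-effect of the upper-level wind on the lower level, and the decay of `resp` measures how long a gust "persists" in each series.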

  11. Stochastic Subspace Modelling of Turbulence

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Pedersen, B. J.; Nielsen, Søren R.K.

    2009-01-01

    Turbulence of the incoming wind field is of paramount importance to the dynamic response of civil engineering structures. Hence, reliable stochastic models of the turbulence should be available from which time series can be generated for dynamic response and structural safety analysis. In the paper, from a positive definite cross-spectral density matrix a frequency response matrix is constructed which determines the turbulence vector as a linear filtration of Gaussian white noise. Finally, an accurate state space modelling method is proposed which allows selection of an appropriate model order and estimation of a state space model for the vector turbulence process, incorporating its phase spectrum, in one stage; its results are compared with a conventional ARMA modelling method.

  12. On the Space-Time Structure of Sheared Turbulence

    DEFF Research Database (Denmark)

    de Mare, Martin Tobias; Mann, Jakob

    2016-01-01

    We develop a model that predicts all two-point correlations in high Reynolds number turbulent flow, in both space and time. This is accomplished by combining the design philosophies behind two existing models, the Mann spectral velocity tensor, in which isotropic turbulence is distorted according ... -assisted feed forward control and wind-turbine wake modelling.

  13. Time-series analysis to study the impact of an intersection on dispersion along a street canyon.

    Science.gov (United States)

    Richmond-Bryant, Jennifer; Eisner, Alfred D; Hahn, Intaek; Fortune, Christopher R; Drake-Richman, Zora E; Brixey, Laurie A; Talih, M; Wiener, Russell W; Ellenson, William D

    2009-12-01

    This paper presents data analysis from the Brooklyn Traffic Real-Time Ambient Pollutant Penetration and Environmental Dispersion (B-TRAPPED) study to assess the transport of ultrafine particulate matter (PM) across urban intersections. Experiments were performed in a street canyon perpendicular to a highway in Brooklyn, NY, USA. Real-time ultrafine PM samplers were positioned on either side of an intersection at multiple locations along a street to collect time-series number concentration data. Meteorology equipment was positioned within the street canyon and at an upstream background site to measure wind speed and direction. Time-series analysis was performed on the PM data to compute a transport velocity along the direction of the street for the cases where background winds were parallel and perpendicular to the street. The data were analyzed for sampler pairs located (1) on opposite sides of the intersection and (2) on the same block. The time-series analysis demonstrated along-street transport, including across the intersection, when background winds were parallel to the street canyon, and minimal transport with no communication across the intersection when background winds were perpendicular to the street canyon. Low but significant values of the cross-correlation function (CCF) underscore the turbulent nature of plume transport along the street canyon. The low correlations suggest that flow switching around corners or traffic-induced turbulence at the intersection may have aided dilution of the PM plume from the highway. This observation supports similar findings in the literature. Furthermore, the time-series analysis methodology applied in this study is introduced as a technique for studying spatiotemporal variation in the urban microscale environment.
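The lag-based transport estimate behind such a CCF analysis can be illustrated on synthetic plume data; the sampler spacing, sampling rate, delay, and noise level below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt, sep = 4000, 0.1, 25.0         # samples, sampling interval (s), sampler spacing (m)
lag_true = 50                        # the plume takes 50 samples to travel between samplers
src = rng.standard_normal(n + lag_true)
up = src[lag_true:]                  # upstream sampler sees the plume first
down = 0.8 * src[:n] + 0.2 * rng.standard_normal(n)   # downstream: delayed and diluted

# cross-correlation function over candidate lags; its peak gives the travel time
lags = np.arange(200)
ccf = np.array([np.corrcoef(up[:n - l], down[l:])[0, 1] for l in lags])

best = lags[np.argmax(ccf)]
speed = sep / (best * dt)            # along-street transport velocity (m/s)
print(best, speed)
```

The height of the CCF peak (here inflated by construction) is what the study uses to judge whether the two samplers "communicate"; a low peak with the correct lag indicates turbulent, diluted transport.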

  14. New time scale based k-epsilon model for near-wall turbulence

    Science.gov (United States)

    Yang, Z.; Shih, T. H.

    1993-01-01

    A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale, and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R_y = k^(1/2) y / ν instead of y+. Hence, the model can be used for flows with separation. The model constants used are the same as in the high-Reynolds-number standard k-epsilon model; thus, the proposed model is also suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
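The construction described above can be summarized schematically as follows; this is a hedged sketch in which C_μ, C_T, and the damping function f_μ are generic placeholders, not the paper's calibrated values.

```latex
\nu_t = C_\mu \, f_\mu(R_y)\, k \, T_t, \qquad
T_t = \frac{k}{\varepsilon} + C_T \sqrt{\frac{\nu}{\varepsilon}}, \qquad
R_y = \frac{\sqrt{k}\, y}{\nu}
```

The second term in T_t keeps the time scale above (a multiple of) the Kolmogorov time scale sqrt(ν/ε), so the reformulated dissipation equation stays regular as k → 0 at the wall, and the damping argument R_y avoids the wall-distance scaling y+ that breaks down in separated flows.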

  15. Effects of forcing time scale on the simulated turbulent flows and turbulent collision statistics of inertial particles

    International Nuclear Information System (INIS)

    Rosa, B.; Parishani, H.; Ayala, O.; Wang, L.-P.

    2015-01-01

    In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 and less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with regions of high energy dissipation rate.

  16. Simple Analytical Forms of the Perpendicular Diffusion Coefficient for Two-component Turbulence. III. Damping Model of Dynamical Turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Gammon, M.; Shalchi, A., E-mail: andreasm4@yahoo.com [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada)

    2017-10-01

    In several astrophysical applications one needs analytical forms of cosmic-ray diffusion parameters. Some examples are studies of diffusive shock acceleration and solar modulation. In the current article we explore perpendicular diffusion based on the unified nonlinear transport theory. While we focused on magnetostatic turbulence in Paper I, we included the effect of dynamical turbulence in Paper II of the series. In the latter paper we assumed that the temporal correlation time does not depend on the wavenumber. More realistic models have been proposed in the past, such as the so-called damping model of dynamical turbulence. In the present paper we derive analytical forms for the perpendicular diffusion coefficient of energetic particles in two-component turbulence for this type of time-dependent turbulence. We present new formulas for the perpendicular diffusion coefficient and we derive a condition for which the magnetostatic result is recovered.

  17. An advection-based model to increase the temporal resolution of PIV time series.

    Science.gov (United States)

    Scarano, Fulvio; Moore, Peter

    A numerical implementation of the advection equation is proposed to increase the temporal resolution of PIV time series. The method is based on the principle that velocity fluctuations are transported passively, similar to Taylor's hypothesis of frozen turbulence. In the present work, the advection model is extended to unsteady three-dimensional flows. The main objective of the method is to lower the requirement on the PIV repetition rate from the Eulerian frequency toward the Lagrangian one. The local trajectory of the fluid parcel is obtained by forward projection of the instantaneous velocity at the preceding time instant and backward projection from the subsequent time step. The trajectories are approximated by the instantaneous streamlines, which yields accurate results when the amplitude of velocity fluctuations is small with respect to the convective motion. The verification is performed with two experiments conducted at temporal resolutions significantly higher than that dictated by the Nyquist criterion. The flow past the trailing edge of a NACA0012 airfoil closely approximates frozen turbulence, where the largest ratio between the Lagrangian and Eulerian temporal scales is expected. An order-of-magnitude reduction of the needed acquisition frequency is demonstrated by the velocity spectra of super-sampled series. The application to three-dimensional data is made with time-resolved tomographic PIV measurements of a transitional jet. Here, the 3D advection equation is implemented to estimate the fluid trajectories. The reduction in the minimum sampling rate by the use of super-sampling is smaller in this case, because vortices occurring in the jet shear layer are not well approximated by sole advection at large time separation. Both cases reveal that the current requirements for time-resolved PIV experiments can be revised when information is poured from space to time. An additional favorable effect is observed by the analysis in the...
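The core idea, estimating the field between PIV snapshots by advecting the last snapshot, can be sketched in one dimension. The frozen-turbulence field and convection speed below are toy assumptions, not the paper's experimental data.

```python
import numpy as np

U, dx = 10.0, 0.1                     # assumed convection speed (m/s), probe spacing (m)
x = np.arange(0.0, 50.0, dx)

def field(xq):                        # spatial pattern, taken as frozen while it convects
    return np.sin(2 * np.pi * xq / 3.7) + 0.5 * np.sin(2 * np.pi * xq / 1.3)

tau = 0.004                           # target instant between two coarse PIV samples (s)
snap0 = field(x)                      # snapshot at t = 0
# advection (Taylor) model: u(x, tau) ≈ u(x - U*tau, 0), evaluated by interpolation
est = np.interp(x - U * tau, x, snap0)
true = field(x - U * tau)
err = np.max(np.abs(est - true)[10:-10])   # skip edges where interpolation clamps
print(err)
```

Super-sampling works exactly when this substitution of space for time holds; for the jet shear layer in the paper, vortex dynamics violate the frozen assumption at large `tau`, which is why the gain there is smaller.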

  18. Chaotic radiation/turbulence interactions in flames

    Energy Technology Data Exchange (ETDEWEB)

    Menguec, M.P.; McDonough, J.M.

    1998-11-01

    In this paper, the authors present a review of their recent efforts to model chaotic radiation-turbulence interactions in flames. The main focus is to characterize soot volume fraction fluctuations in turbulent diffusion flames, as they strongly contribute to these interactions. The approach is based on the hypothesis that the fluctuations of properties in turbulent flames are deterministic in nature, rather than random. The authors first discuss the theoretical details, and then briefly outline the experiments conducted to measure the scattered light signals from fluctuating soot particles along the axis of an ethylene-air diffusion flame. They compare the power spectra and time series obtained from experiments against the ad-hoc and rigorous models derived using a series of logistic maps. These logistic maps can be used in simulation of the fluctuations in these types of flames, without extensive computational effort or sacrifice of physical detail. Availability of accurate models of these kinds allows investigation of radiation-turbulence interactions at a more fundamental level than was previously possible.

  19. Characterizing the Severe Turbulence Environments Associated With Commercial Aviation Accidents: A Real-Time Turbulence Model (RTTM) Designed for the Operational Prediction of Hazardous Aviation Turbulence Environments

    Science.gov (United States)

    Kaplan, Michael L.; Lux, Kevin M.; Cetola, Jeffrey D.; Huffman, Allan W.; Riordan, Allen J.; Slusser, Sarah W.; Lin, Yuh-Lang; Charney, Joseph J.; Waight, Kenneth T.

    2004-01-01

    Real-time prediction of environments predisposed to producing moderate-to-severe aviation turbulence is studied. We describe the numerical model and its postprocessing system designed for this prediction, and present numerous examples of their utility. The numerical model is MASS version 5.13, which is integrated over three different grid matrices in real time on a university workstation in support of NASA Langley Research Center's B-757 turbulence research flight missions. The postprocessing system includes several turbulence-related products, including four turbulence forecasting indices, winds, streamlines, turbulence kinetic energy, and Richardson numbers. Additionally, there are convective products including precipitation, cloud height, cloud mass fluxes, lifted index, and K-index. Furthermore, soundings, sounding parameters, and Froude number plots are also provided. The horizontal cross-section plot products are provided from 16 000 to 46 000 ft in 2000-ft intervals. Products are available every 3 hours at the 60- and 30-km grid intervals and every 1.5 hours at the 15-km grid interval. The model is initialized from the NWS ETA analyses and integrated twice a day.

  20. GPS Position Time Series @ JPL

    Science.gov (United States)

    Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen

    2013-01-01

    Different flavors of GPS time series analysis at JPL - all use the same GPS Precise Point Positioning Analysis raw time series; variations in time series analysis/post-processing are driven by different users:
      • JPL Global Time Series/Velocities - researchers studying the reference frame, combining with VLBI/SLR/DORIS
      • JPL/SOPAC Combined Time Series/Velocities - crustal deformation for tectonic, volcanic, and ground water studies
      • ARIA Time Series/Coseismic Data Products - hazard monitoring and response focused
      • ARIA data system designed to integrate GPS and InSAR - GPS tropospheric delay used for correcting InSAR; Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR; Zhen Liu is talking tomorrow on InSAR time series analysis

  1. A turbulent time scale based k–ε model for probability density function modeling of turbulence/chemistry interactions: Application to HCCI combustion

    International Nuclear Information System (INIS)

    Maroteaux, Fadila; Pommier, Pierre-Lin

    2013-01-01

    Highlights: ► Turbulent time evolution is introduced in the stochastic modeling approach. ► The particle number is optimized through a restricted initial distribution. ► The initial distribution amplitude is modeled by the magnitude of the turbulence field. -- Abstract: Homogeneous Charge Compression Ignition (HCCI) engine technology is known as an alternative to reduce NOx and particulate matter (PM) emissions. As shown by several experimental studies published in the literature, the ideally homogeneous mixture charge becomes stratified in composition and temperature, and turbulent mixing is found to play an important role in controlling the combustion progress. In a previous study, an IEM (Interaction by Exchange with the Mean) model was used to describe the micromixing in a stochastic reactor model that simulates the HCCI process. The IEM model is a deterministic model based on the principle that each scalar value approaches the mean value over the entire volume with a characteristic mixing time. In that previous model, the turbulent time scale was treated as a fixed parameter. The present study focuses on the development of a micromixing time model, in order to take into account the physical phenomena it stands for. For that purpose, a (k-ε) model is used to express this micromixing time. The turbulence model used here is based on a zero-dimensional energy cascade applied during the compression and expansion strokes: mean kinetic energy is converted to turbulent kinetic energy, and turbulent kinetic energy is converted to heat through viscous dissipation. In addition, a relation to calculate the initial heterogeneity amplitude is proposed. The comparison of simulation results against experimental data shows overall satisfactory agreement at variable turbulent time scale.
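The IEM micromixing rule referenced above is simple enough to state directly. The following is a generic sketch of it; the particle count, mixing time, and temperature values are arbitrary, not the authors' engine configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_particles, tau_mix, dt, n_steps = 500, 2e-3, 1e-4, 100
phi = rng.normal(1000.0, 50.0, n_particles)   # notional particle temperatures (K)
mu0 = phi.mean()

spread = [phi.std()]
for _ in range(n_steps):
    # IEM: every particle relaxes toward the ensemble mean with time scale tau_mix
    phi += -(phi - phi.mean()) / tau_mix * dt
    spread.append(phi.std())

# the ensemble mean is conserved while the scalar variance decays at rate 2/tau_mix
print(round(spread[-1] / spread[0], 4))
```

Making `tau_mix` a function of a (k-ε)-derived turbulent time scale, instead of a constant, is exactly the refinement the study proposes.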

  2. Time series analysis: methods and applications

    CERN Document Server

    Rao, Tata Subba; Rao, C R

    2012-01-01

    The field of statistics not only affects all areas of scientific activity, but also many other matters such as public policy. It is branching rapidly into so many different subjects that a series of handbooks is the only way of comprehensively presenting the various aspects of statistical methodology, applications, and recent developments. The Handbook of Statistics is a series of self-contained reference books. Each volume is devoted to a particular topic in statistics, with Volume 30 dealing with time series. The series is addressed to the entire community of statisticians and scientists in various disciplines who use statistical methodology in their work. At the same time, special emphasis is placed on applications-oriented techniques, with the applied statistician in mind as the primary audience. Comprehensively presents the various aspects of statistical methodology Discusses a wide variety of diverse applications and recent developments Contributors are internationally renowned experts in their respect...

  3. Highly comparative time-series analysis: the empirical structure of time series and their methods.

    Science.gov (United States)

    Fulcher, Ben D; Little, Max A; Jones, Nick S

    2013-06-06

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
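The reduced, feature-based representation described above can be illustrated at toy scale: three hand-picked features and three synthetic series stand in for the thousands of each used in the paper.

```python
import numpy as np

def features(x):
    """A toy 'reduced representation': a few interpretable summary statistics."""
    x = (x - x.mean()) / x.std()
    ac1 = np.corrcoef(x[:-1], x[1:])[0, 1]                 # lag-1 autocorrelation
    return np.array([ac1, np.mean(x**3), np.mean(x**4)])   # ac1, skewness, kurtosis

rng = np.random.default_rng(4)
noise = rng.standard_normal(2000)
walk = np.cumsum(rng.standard_normal(2000))
sine = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)

f = {name: features(x) for name, x in [("noise", noise), ("walk", walk), ("sine", sine)]}
print({k: v.round(2) for k, v in f.items()})
```

In this feature space, the highly autocorrelated series (random walk, noisy sine) separate cleanly from white noise; organizing datasets and methods by such feature vectors is the "highly comparative" strategy of the paper, just at vastly larger scale.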

  4. Radiation turbulence interactions in pulverized coal flames: Chaotic map models of soot fluctuations in turbulent diffusion flames. Quarterly report, October 1995--December 1995

    Energy Technology Data Exchange (ETDEWEB)

    McDonough, J.M.; Menguc, M.P.; Mukerji, S.; Swabb, S.; Manickavasagam, S.; Ghosal, S.

    1995-12-31

    In this paper, we introduce a methodology to characterize soot volume fraction fluctuations in turbulent diffusion flames via chaotic maps. The approach is based on the hypothesis that the fluctuations of properties in turbulent flames are deterministic in nature, rather than statistical. Our objective is to develop models to mimic these fluctuations. The models will eventually be used in comprehensive algorithms to study the true physics of turbulent flames and the interaction of turbulence with radiation. To this end, we measured the time series of the soot scattering coefficient in an ethylene diffusion flame from light scattering experiments. Following this, the corresponding power spectra and delay maps were calculated. It was shown that if the data were averaged, the characteristics of the fluctuations were almost completely washed out. The power spectral densities from experiments were successfully modeled using a series of logistic maps.

  5. Investigation of a turbulent spot and a tripped turbulent boundary layer flow using time-resolved tomographic PIV

    NARCIS (Netherlands)

    Schröder, A.; Geisler, R.; Elsinga, G.E.; Scarano, F.; Dierksheide, U.

    2007-01-01

    In this feasibility study the tomographic PIV technique has been applied to time-resolved PIV recordings for the study of the growth of a turbulent spot in a laminar flat plate boundary layer and to visualize the topology of coherent flow structures within a tripped turbulent flat plate boundary layer.

  6. Characterizing and estimating noise in InSAR and InSAR time series with MODIS

    Science.gov (United States)

    Barnhart, William D.; Lohman, Rowena B.

    2013-01-01

    InSAR time series analysis is increasingly used to image subcentimeter displacement rates of the ground surface. The precision of InSAR observations is often affected by several noise sources, including spatially correlated noise from the turbulent atmosphere. Under ideal scenarios, InSAR time series techniques can substantially mitigate these effects; however, in practice the temporal distribution of InSAR acquisitions over much of the world exhibits seasonal biases, long temporal gaps, and insufficient acquisitions to confidently obtain the precisions desired for tectonic research. Here, we introduce a technique for constraining the magnitude of errors expected from atmospheric phase delays on the ground displacement rates inferred from an InSAR time series using independent observations of precipitable water vapor from MODIS. We implement a Monte Carlo error estimation technique based on multiple (100+) MODIS-based time series that sample date ranges close to the acquisition times of the available SAR imagery. This stochastic approach allows evaluation of the significance of signals present in the final time series product, in particular their correlation with topography and seasonality. We find that topographically correlated noise in individual interferograms is not spatially stationary, even over short spatial scales (<10 km). Overall, MODIS-inferred displacements and velocities exhibit errors of similar magnitude to the variability within an InSAR time series. We examine the MODIS-based confidence bounds in regions with a range of inferred displacement rates, and find we are capable of resolving velocities as low as 1.5 mm/yr with uncertainties increasing to ∼6 mm/yr in regions with higher topographic relief.
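A stripped-down version of such a Monte Carlo bound looks as follows; the acquisition times and noise amplitudes are synthetic, and the MODIS-derived delays are replaced by a seasonal term plus white noise purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 5, 30))             # irregular acquisition times (years)
true_rate = 2.0                                # mm/yr deformation

rates = []
for _ in range(200):                           # many atmosphere-like realizations
    seasonal = 4.0 * np.sin(2 * np.pi * t + rng.uniform(0, 2 * np.pi))
    noise = seasonal + 3.0 * rng.standard_normal(t.size)   # mm
    disp = true_rate * t + noise
    rates.append(np.polyfit(t, disp, 1)[0])    # fitted linear rate per realization

rates = np.asarray(rates)
print(round(rates.mean(), 2), round(rates.std(), 2))
```

The spread of the fitted rates is the confidence bound: a real displacement signal is only significant if it exceeds what the noise-only realizations produce, which is how seasonal biases in the acquisition schedule get priced into the error bar.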

  7. Introduction to Time Series Modeling

    CERN Document Server

    Kitagawa, Genshiro

    2010-01-01

    In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f

  8. ON QUIET-TIME SOLAR WIND ELECTRON DISTRIBUTIONS IN DYNAMICAL EQUILIBRIUM WITH LANGMUIR TURBULENCE

    International Nuclear Information System (INIS)

    Zaheer, S.; Yoon, P. H.

    2013-01-01

    A recent series of papers put forth a self-consistent theory of an asymptotically steady-state electron distribution function and Langmuir turbulence intensity. The theory was developed in terms of the κ distribution, which features Maxwellian low-energy electrons and a non-Maxwellian energetic power-law tail component. The present paper discusses a generalized κ distribution that features a Davydov-Druyvesteyn type of core component and an energetic power-law tail component. The physical motivation for such a generalization is so that the model may reflect the influence of low-energy electrons interacting with low-frequency kinetic Alfvénic turbulence as well as with high-frequency Langmuir turbulence. It is shown that such a solution and the accompanying Langmuir wave spectrum rigorously satisfy the balance requirement between the spontaneous and induced emission processes in both the particle and wave kinetic equations, and approximately satisfy the similar balance requirement between the spontaneous and induced scattering processes, which are nonlinear. In spite of the low-velocity modification of the electron distribution function, it is shown that the resulting asymptotic velocity power-law index α, where f_e ∼ v^(−α), is close to the average index observed during quiet-time solar wind conditions, i.e., α ∼ O(6.5), whereas α_average ∼ 6.69 according to observation.

  9. The deterministic chaos and random noise in turbulent jet

    International Nuclear Information System (INIS)

    Yao, Tian-Liang; Liu, Hai-Feng; Xu, Jian-Liang; Li, Wei-Feng

    2014-01-01

    A turbulent flow is usually treated as a superposition of coherent structure and incoherent turbulence. In this paper, the largest Lyapunov exponent and the random noise in the near fields of a round jet and a plane jet are estimated with our previously proposed method of chaotic time series analysis [T. L. Yao, et al., Chaos 22, 033102 (2012)]. The results show that the largest Lyapunov exponents of the round jet and plane jet are in direct proportion to the reciprocal of the integral time scale of turbulence, which is in accordance with the results of the dimensional analysis, and the proportionality coefficients are equal. In addition, the random noise of the round jet and plane jet has the same linear relation with the Kolmogorov velocity scale of turbulence. As a result, the random noise may well come from the incoherent disturbance in turbulence, and the coherent structure in turbulence may well follow the rule of chaotic motion.
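For a map with a known derivative, the largest Lyapunov exponent is just the long-run average of log|f′| along the orbit; the fully chaotic logistic map (r = 4, analytic value ln 2) makes a convenient check. This is a generic textbook estimator, not the authors' method for measured jet data, which must work from the time series alone.

```python
import math

r, x = 4.0, 0.123                     # fully chaotic logistic map, arbitrary seed
total, n = 0.0, 100_000
for _ in range(n):
    total += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)| along the orbit
    x = r * x * (1.0 - x)
lyap = total / n                      # largest Lyapunov exponent estimate
print(lyap)
```

A positive value confirms exponential divergence of nearby orbits, the defining signature of deterministic chaos that the paper seeks to separate from additive random noise.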

  10. International Work-Conference on Time Series

    CERN Document Server

    Pomares, Héctor; Valenzuela, Olga

    2017-01-01

    This volume of selected and peer-reviewed contributions on the latest developments in time series analysis and forecasting updates the reader on topics such as analysis of irregularly sampled time series, multi-scale analysis of univariate and multivariate time series, linear and non-linear time series models, advanced time series forecasting methods, applications in time series analysis and forecasting, advanced methods and online learning in time series and high-dimensional and complex/big data time series. The contributions were originally presented at the International Work-Conference on Time Series, ITISE 2016, held in Granada, Spain, June 27-29, 2016. The series of ITISE conferences provides a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing the disciplines of comput...

  11. Time-Filtered Navier-Stokes Approach and Emulation of Turbulence-Chemistry Interaction

    Science.gov (United States)

    Liu, Nan-Suey; Wey, Thomas; Shih, Tsan-Hsing

    2013-01-01

    This paper describes the time-filtered Navier-Stokes approach capable of capturing unsteady flow structures important for turbulent mixing and an accompanying subgrid model directly accounting for the major processes in turbulence-chemistry interaction. They have been applied to the computation of two-phase turbulent combustion occurring in a single-element lean-direct-injection combustor. Some of the preliminary results from this computational effort are presented in this paper.

  12. Transport equation for the time scale of a turbulent scalar field

    International Nuclear Information System (INIS)

    Kurbatskij, A.F.

    1999-01-01

    Two-parameter turbulence models encounter serious difficulties in modeling near-wall flows due to the absence of a natural wall boundary condition for the dissipation ε of the turbulence energy and the scalar field destruction rate ε_θ. This difficulty may be overcome if, instead of ε and ε_θ, the time scales of the turbulent dynamic and scalar fields are applied as the second parameter of the model. The transport equation for the scalar field time scale is derived, and the numerical coefficients included therein are determined from the simplest problems of turbulent heat transfer [ru]

  13. From Networks to Time Series

    Science.gov (United States)

    Shimada, Yutaka; Ikeguchi, Tohru; Shigehara, Takaomi

    2012-10-01

    In this Letter, we propose a framework to transform a complex network into a time series. The transformation from complex networks to time series is realized by classical multidimensional scaling. Applying the transformation method to a model proposed by Watts and Strogatz [Nature (London) 393, 440 (1998)], we show that ring lattices are transformed into periodic time series, small-world networks into noisy periodic time series, and random networks into random time series. We also show that these relationships hold analytically, using circulant-matrix theory and the perturbation theory of linear operators. The results are generalized to several high-dimensional lattices.
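
    The transformation described here can be illustrated with classical MDS applied to the shortest-path distance matrix of a ring lattice. The sketch below (numpy only; the 60-node size and function names are illustrative, not the authors' implementation) reads the leading embedding coordinate in node order as the resulting series:

```python
import numpy as np

def ring_distances(n):
    """Shortest-path distance matrix of an n-node ring lattice."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    return np.minimum(d, n - d).astype(float)

def classical_mds(D, k=1):
    """Classical multidimensional scaling: embed distance matrix D in k dims."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    order = np.argsort(w)[::-1]              # largest eigenvalues first
    w, V = w[order], V[:, order]
    return V[:, :k] * np.sqrt(np.maximum(w[:k], 0.0))

# Reading the leading MDS coordinate in node order yields a periodic series,
# since the ring's distance matrix is circulant and its eigenvectors are Fourier modes.
series = classical_mds(ring_distances(60), k=1)[:, 0]
```

    As a sanity check, the same routine applied to four equally spaced points on a line recovers their centered positions exactly, the textbook behavior of classical MDS.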

  14. Near bed suspended sediment flux by single turbulent events

    Science.gov (United States)

    Amirshahi, Seyed Mohammad; Kwoll, Eva; Winter, Christian

    2018-01-01

    The role of small-scale single turbulent events in the vertical mixing of near-bed suspended sediments was explored in a shallow shelf sea environment. High-frequency velocity and suspended sediment concentration (SSC; calibrated from the backscatter intensity) were collected using an Acoustic Doppler Velocimeter (ADV). Using quadrant analysis, the despiked velocity time series was divided into turbulent events and small background fluctuations. Reynolds stress and turbulent kinetic energy (TKE) calculated from all velocity samples were compared to the same turbulent statistics calculated only from velocity samples classified as turbulent events (Re_events and TKE_events). The comparison showed that Re_events and TKE_events increased by factors of 3 and 1.6, respectively, when small background fluctuations were removed, and that the correlation between TKE and SSC could be improved through removal of the latter. The correlation between instantaneous vertical turbulent flux (w′) and SSC fluctuations (SSC′) exhibits a tidal pattern, with maximum correlation at peak ebb and flood currents, when strong turbulent events appear. Individual turbulent events were characterized by type, strength, duration, and length. Cumulative vertical turbulent sediment fluxes and average SSC associated with individual turbulent events were calculated. Over the tidal cycle, ejections and sweeps were the most dominant events, transporting 50% and 36% of the cumulative vertical turbulent event sediment flux, respectively. Although the contribution of outward interactions to the vertical turbulent event sediment flux was low (11%), single outward interaction events were capable of inducing SSC′ similar to that of sweep events. The results suggest that on time scales of tens of minutes to hours, TKE may be appropriate to quantify turbulence in sediment transport studies, but that event characteristics, in particular the upward turbulent flux, need to be accounted for when considering sediment transport.
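
    The quadrant decomposition used in studies of this kind can be sketched as follows. This is a generic illustration on synthetic fluctuations; the variable names, the hole-size threshold, and the negatively correlated test data are assumptions, not the paper's settings:

```python
import numpy as np

# Synthetic fluctuating velocities (u' streamwise, w' vertical), negatively
# correlated as is typical near a bed; stand-ins for despiked ADV records.
rng = np.random.default_rng(0)
u = rng.normal(size=5000)
w = -0.3 * u + rng.normal(scale=0.5, size=5000)

def quadrant_fractions(u, w, hole=0.0):
    """Fraction of the Reynolds stress <u'w'> carried by each quadrant.

    Q1 outward interaction (u'>0, w'>0), Q2 ejection (u'<0, w'>0),
    Q3 inward interaction (u'<0, w'<0), Q4 sweep (u'>0, w'<0).
    Samples with |u'w'| <= hole * |<u'w'>| are treated as background.
    """
    uw = u * w
    total = uw.mean()
    strong = np.abs(uw) > hole * np.abs(total)
    masks = {
        "Q1": (u > 0) & (w > 0) & strong,
        "Q2": (u < 0) & (w > 0) & strong,
        "Q3": (u < 0) & (w < 0) & strong,
        "Q4": (u > 0) & (w < 0) & strong,
    }
    return {q: uw[m].sum() / (total * len(u)) for q, m in masks.items()}

frac = quadrant_fractions(u, w)
```

    With a hole size of zero the four fractions sum to one; for negatively correlated u′ and w′, ejections (Q2) and sweeps (Q4) carry most of the stress, as in the study's tidal records.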

  15. Duality between Time Series and Networks

    Science.gov (United States)

    Campanharo, Andriana S. L. O.; Sirer, M. Irmak; Malmgren, R. Dean; Ramos, Fernando M.; Amaral, Luís A. Nunes.

    2011-01-01

    Studying the interaction between a system's components and the temporal evolution of the system are two common ways to uncover and characterize its internal workings. Recently, several maps from a time series to a network have been proposed with the intent of using network metrics to characterize time series. Although these maps demonstrate that different time series result in networks with distinct topological properties, it remains unclear how these topological properties relate to the original time series. Here, we propose a map from a time series to a network with an approximate inverse operation, making it possible to use network statistics to characterize time series and time series statistics to characterize networks. As a proof of concept, we generate an ensemble of time series ranging from periodic to random and confirm that application of the proposed map retains much of the information encoded in the original time series (or networks) after application of the map (or its inverse). Our results suggest that network analysis can be used to distinguish different dynamic regimes in time series and, perhaps more importantly, time series analysis can provide a powerful set of tools that augment the traditional network analysis toolkit to quantify networks in new and useful ways. PMID:21858093

  16. Long time series

    DEFF Research Database (Denmark)

    Hisdal, H.; Holmqvist, E.; Hyvärinen, V.

    Awareness that emission of greenhouse gases will raise the global temperature and change the climate has led to studies trying to identify such changes in long-term climate and hydrologic time series. This report, written by the...

  17. A Course in Time Series Analysis

    CERN Document Server

    Peña, Daniel; Tsay, Ruey S

    2011-01-01

    New statistical methods and future directions of research in time series A Course in Time Series Analysis demonstrates how to build time series models for univariate and multivariate time series data. It brings together material previously available only in the professional literature and presents a unified view of the most advanced procedures available for time series model building. The authors begin with basic concepts in univariate time series, providing an up-to-date presentation of ARIMA models, including the Kalman filter, outlier analysis, automatic methods for building ARIMA models, a

  18. Measurements of Turbulent Convection Speeds in Multistream Jets Using Time-Resolved PIV

    Science.gov (United States)

    Bridges, James; Wernet, Mark P.

    2017-01-01

    Convection speeds of turbulent velocities in jets, including multi-stream jets with and without flight stream, were measured using an innovative application of time-resolved particle image velocimetry. The paper describes the unique instrumentation and data analysis that allows the measurement to be made. Extensive data is shown that relates convection speed, mean velocity, and turbulent velocities for multiple jet cases. These data support the overall observation that the local turbulent convection speed is roughly that of the local mean velocity, biased by the relative intensity of turbulence.
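
    One common way to extract a convection speed from two time-resolved signals is the lag of the peak cross-correlation between probes separated by a known streamwise distance. The sketch below uses a synthetic delayed signal, not the paper's PIV data; all numbers and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, dx = 1e-4, 0.02          # sampling interval [s], probe separation [m]
true_lag = 5                 # downstream signal delayed by 5 samples
n = 4096
base = rng.normal(size=n + true_lag)
upstream = base[true_lag:]   # upstream probe signal
downstream = base[:n]        # same pattern arrives true_lag samples later

def convection_speed(a, b, dx, dt, max_lag=50):
    """Convection speed from the lag maximizing the cross-correlation of b against a."""
    lags = np.arange(1, max_lag + 1)
    corr = [np.dot(a[:-L], b[L:]) for L in lags]
    best = lags[int(np.argmax(corr))]
    return dx / (best * dt)

u_c = convection_speed(upstream, downstream, dx, dt)
```

    For the synthetic 5-sample delay this recovers dx / (5 dt) = 40 m/s; on real turbulence data the correlation peak is broader and the estimate varies with the local mean velocity, as the paper discusses.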

  20. Sensible Heat Flux Related to Variations in Atmospheric Turbulence Kinetic Energy on a Sandy Beach

    Science.gov (United States)

    2017-06-01

    ... production, turbulent transport by pressure fluctuations, dissipation, and flux divergence. The TKE budget as explained by Srivastava and Sarthi (2002)... generation of turbulence. Term 3 is flux divergence, which describes the differential transport of TKE by turbulent eddies. Term 4, dissipation, is a sink... the time series data to align all signals to the same time base. Winds were rotated into a shore-normal frame of reference. All data outside of T...

  1. Kolmogorov Space in Time Series Data

    OpenAIRE

    Kanjamapornkul, K.; Pinčák, R.

    2016-01-01

    We provide a proof that the space of time series data is a Kolmogorov space with the $T_{0}$-separation axiom, using the loop space of time series data. In our approach, we define a cyclic coordinate of the intrinsic time scale of time series data after empirical mode decomposition. A spinor field of time series data comes from the rotation of the data around the price and time axes, obtained by defining a new extra dimension of the time series data. We show that there exist eight hidden dimensions in Kolmogorov space for ...

  2. Scaling for turbulent viscosity of buoyant plumes in stratified fluids: PIV measurement with implications for submarine hydrothermal plume turbulence

    Science.gov (United States)

    Zhang, Wei; He, Zhiguo; Jiang, Houshuo

    2017-11-01

    Time-resolved particle image velocimetry (PIV) has been used to measure instantaneous two-dimensional velocity vector fields of laboratory-generated turbulent buoyant plumes in linearly stratified saltwater over extended periods of time. From PIV-measured time-series flow data, characteristics of plume mean flow and turbulence have been quantified. To be specific, maximum plume penetration scaling and entrainment coefficient determined from the mean flow agree well with the theory based on the entrainment hypothesis for buoyant plumes in stratified fluids. Besides the well-known persistent entrainment along the plume stem (i.e., the 'plume-stem' entrainment), the mean plume velocity field shows persistent entrainment along the outer edge of the plume cap (i.e., the 'plume-cap' entrainment), thereby confirming predictions from previous numerical simulation studies. To our knowledge, the present PIV investigation provides the first measured flow field data in the plume cap region. As to measured plume turbulence, both the turbulent kinetic energy field and the turbulence dissipation rate field attain their maximum close to the source, while the turbulent viscosity field reaches its maximum within the plume cap region; the results also show that the maximum turbulent viscosity scales as νt,max = 0.030(B/N)^(1/2), where B is the source buoyancy flux and N is the ambient buoyancy frequency. These PIV data combined with previously published numerical simulation results have implications for understanding the roles of hydrothermal plume turbulence, i.e., plume turbulence within the cap region causes the 'plume-cap' entrainment, which plays a role equally important as the 'plume-stem' entrainment in supplying the final volume flux at the plume spreading level.
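
    The reported scaling can be evaluated directly. The buoyancy flux and buoyancy frequency below are illustrative values, not measurements from the cited experiments:

```python
# Worked instance of the reported scaling nu_t,max = 0.030 (B/N)^(1/2).
B = 1.0e-6                          # source buoyancy flux [m^4 s^-3] (illustrative)
N = 1.0e-2                          # ambient buoyancy frequency [s^-1] (illustrative)
nu_t_max = 0.030 * (B / N) ** 0.5   # maximum turbulent viscosity [m^2 s^-1]
```

    For these values the scaling gives a maximum turbulent viscosity of 3e-4 m^2/s, a few hundred times the molecular viscosity of water.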

  3. On the Use of Running Trends as Summary Statistics for Univariate Time Series and Time Series Association

    OpenAIRE

    Trottini, Mario; Vigo, Isabel; Belda, Santiago

    2015-01-01

    Given a time series, running trends analysis (RTA) involves evaluating least squares trends over overlapping time windows of L consecutive time points, with overlap by all but one observation. This produces a new series called the “running trends series,” which is used as summary statistics of the original series for further analysis. In recent years, RTA has been widely used in climate applied research as summary statistics for time series and time series association. There is no doubt that ...
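
    Running trends analysis as defined above can be sketched in a few lines (numpy only; the function name is illustrative):

```python
import numpy as np

def running_trends(x, L):
    """Least-squares slope over each window of L consecutive points;
    successive windows overlap by all but one observation."""
    t = np.arange(L, dtype=float)
    t -= t.mean()                       # centered time index
    denom = (t ** 2).sum()
    n = len(x) - L + 1
    return np.array([np.dot(t, x[i:i + L] - x[i:i + L].mean()) / denom
                     for i in range(n)])
```

    For a purely linear series the running trends series is constant and equal to the global slope, which is a quick way to check the window arithmetic.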

  4. Multiple Indicator Stationary Time Series Models.

    Science.gov (United States)

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  5. Time Series Momentum

    DEFF Research Database (Denmark)

    Moskowitz, Tobias J.; Ooi, Yao Hua; Heje Pedersen, Lasse

    2012-01-01

    We document significant “time series momentum” in equity index, currency, commodity, and bond futures for each of the 58 liquid instruments we consider. We find persistence in returns for one to 12 months that partially reverses over longer horizons, consistent with sentiment theories of initial under-reaction and delayed over-reaction. A diversified portfolio of time series momentum strategies across all asset classes delivers substantial abnormal returns with little exposure to standard asset pricing factors and performs best during extreme markets. Examining the trading activities...

  6. International Work-Conference on Time Series

    CERN Document Server

    Pomares, Héctor

    2016-01-01

    This volume presents selected peer-reviewed contributions from The International Work-Conference on Time Series, ITISE 2015, held in Granada, Spain, July 1-3, 2015. It discusses topics in time series analysis and forecasting, advanced methods and online learning in time series, high-dimensional and complex/big data time series as well as forecasting in real problems. The International Work-Conferences on Time Series (ITISE) provide a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing the disciplines of computer science, mathematics, statistics and econometrics.

  7. Bed slope effects on turbulent wave boundary layers: 1. Model validation and quantification of rough-turbulent results

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Fredsøe, Jørgen; Sumer, B. Mutlu

    2009-01-01

    A numerical model solving incompressible Reynolds-averaged Navier-Stokes equations, combined with a two-equation k-omega turbulence closure, is used to study converging-diverging effects from a sloping bed on turbulent (oscillatory) wave boundary layers. Bed shear stresses from the numerical model... measurements for steady streaming induced by a skewed free stream velocity signal is also provided. We then simulate a series of experiments involving oscillatory flow in a convergent-divergent smooth tunnel, and a good match with respect to bed shear stresses and streaming velocities is achieved. The streaming is conceptually explained using analogies from steady converging and diffuser flows. A parametric study is undertaken to assess both the peak and time-averaged bed shear stresses in converging and diverging half periods under rough-turbulent conditions. The results are presented as friction factor...

  8. Stochastic models for time series

    CERN Document Server

    Doukhan, Paul

    2018-01-01

    This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shifts time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long or short-range dependences (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...

  9. Graphical Data Analysis on the Circle: Wrap-Around Time Series Plots for (Interrupted) Time Series Designs.

    Science.gov (United States)

    Rodgers, Joseph Lee; Beasley, William Howard; Schuelke, Matthew

    2014-01-01

    Many data structures, particularly time series data, are naturally seasonal, cyclical, or otherwise circular. Past graphical methods for time series have focused on linear plots. In this article, we move graphical analysis onto the circle. We focus on 2 particular methods, one old and one new. Rose diagrams are circular histograms and can be produced in several different forms using the RRose software system. In addition, we propose, develop, illustrate, and provide software support for a new circular graphical method, called Wrap-Around Time Series Plots (WATS Plots), useful for supporting time series analyses in general and interrupted time series designs in particular. We illustrate the use of WATS Plots with an interrupted time series design evaluating the effect of the Oklahoma City bombing of the Murrah Building on birthrates in Oklahoma County during the 10 years surrounding the bombing. We compare WATS Plots with linear time series representations and overlay them with smoothing and error bands. Each method is shown to have advantages in relation to the other; in our example, the WATS Plots more clearly show the existence and effect size of the fertility differential.

  10. Time Series with Long Memory

    OpenAIRE

    西埜, 晴久

    2004-01-01

    The paper investigates an application of long-memory processes to economic time series. We present properties of long-memory processes that motivate their use in modelling long-memory phenomena in economic time series. An FARIMA model is described as an example of a long-memory model in statistical terms. The paper explains basic limit theorems and estimation methods for long-memory processes in order to apply long-memory models to economic time series.
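
    The fractional-differencing filter (1 − B)^d at the core of FARIMA models can be sketched as follows; the filter is truncated at the series start, and the function names are illustrative:

```python
import numpy as np

def fracdiff_weights(d, n):
    """First n coefficients of (1 - B)^d, the fractional-differencing filter.

    Uses the recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k, which follows
    from the binomial expansion of (1 - B)^d.
    """
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def fracdiff(x, d):
    """Apply (1 - B)^d to a series, truncating the filter at the series start."""
    w = fracdiff_weights(d, len(x))
    return np.array([np.dot(w[:i + 1], x[i::-1]) for i in range(len(x))])
```

    For d = 1 the filter reduces to the ordinary first difference, and for 0 < d < 1/2 the slowly decaying weights are what produce the long-memory autocorrelations.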

  11. Visibility Graph Based Time Series Analysis.

    Science.gov (United States)

    Stephen, Mutua; Gu, Changgui; Yang, Huijie

    2015-01-01

    Network-based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states, and the successively occurring states are linked. This procedure converts a time series into a temporal network and, at the same time, a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.
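
    The natural visibility graph construction underlying such methods can be sketched as a brute-force check of every pair of samples (adequate for short segments; this is a generic illustration, not the authors' implementation):

```python
import numpy as np

def visibility_graph(x):
    """Adjacency matrix of the natural visibility graph of series x.

    Samples i and j are connected if no intermediate sample k reaches the
    straight line between (i, x[i]) and (j, x[j]).
    """
    n = len(x)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            ks = np.arange(i + 1, j)
            line = x[j] + (x[i] - x[j]) * (j - ks) / (j - i)
            if np.all(x[ks] < line):        # vacuously true for adjacent samples
                A[i, j] = A[j, i] = True
    return A
```

    Adjacent samples are always connected, while a tall intermediate peak blocks visibility between the points on either side of it.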

  13. Turbulence-induced persistence in laser beam wandering.

    Science.gov (United States)

    Zunino, Luciano; Gulich, Damián; Funes, Gustavo; Pérez, Darío G

    2015-07-01

    We have experimentally confirmed the presence of long-memory correlations in the wandering of a thin Gaussian laser beam over a screen after propagating through a turbulent medium. A laboratory-controlled experiment was conducted in which coordinate fluctuations of the laser beam were recorded at a sufficiently high sampling rate for a wide range of turbulent conditions. Horizontal and vertical displacements of the laser beam centroid were subsequently analyzed by implementing detrended fluctuation analysis. This is a very well-known and widely used methodology to unveil memory effects from time series. Results obtained from this experimental analysis allow us to confirm that both coordinates behave as highly persistent signals for strong turbulent intensities. This finding is relevant for a better comprehension and modeling of the turbulence effects in free-space optical communication systems and other applications related to propagation of optical signals in the atmosphere.
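
    Detrended fluctuation analysis of the kind applied here can be sketched as follows (first-order detrending and illustrative scale choices; not the authors' code):

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64)):
    """DFA scaling exponent alpha of series x.

    alpha ~ 0.5 for uncorrelated noise; alpha > 0.5 indicates persistence.
    """
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        resid = []
        for b in range(len(y) // s):       # non-overlapping boxes of size s
            seg = y[b * s:(b + 1) * s]
            t = np.arange(s, dtype=float)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend
            resid.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(resid)))  # fluctuation function F(s)
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(2)
alpha_noise = dfa_alpha(rng.normal(size=8192))             # expect ~0.5
alpha_walk = dfa_alpha(np.cumsum(rng.normal(size=8192)))   # integrated noise, larger alpha
```

    White noise yields an exponent near 0.5, while its running sum yields a much larger exponent, the same qualitative distinction the beam-wandering coordinates show under strong turbulence.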

  14. Turbulent wind at the equatorial segment of an operating Darrieus wind turbine blade

    Science.gov (United States)

    Connell, J. R.; Morris, V. R.

    1989-09-01

    Six turbulent wind time series, measured at equally spaced equator-height locations on a circle 3 m outside a 34-m Darrieus rotor, are analyzed to approximate the wind fluctuations experienced by the rotor. The flatwise lower root-bending stress of one blade was concurrently recorded. The wind data are analyzed in three ways: wind components radial and tangential to the rotation of a blade were rotationally sampled; induction and wake effects of the rotor were estimated from the six Eulerian time series; and turbulence spectra were computed for both the measured wind and the wind modeled with the PNL theory of rotationally sampled turbulence. The wind and the rotor response are related by computing the spectral response function of the flatwise lower root-bending stress. Two bands of resonant response that surround the first and second flatwise modal frequencies shift with the rotor rotation rate.

  15. Network structure of multivariate time series.

    Science.gov (United States)

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-10-21

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exists, the increasing availability of massive data structures calls for new approaches to multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows information on a high-dimensional dynamical system to be extracted through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow us to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.

  16. What marketing scholars should know about time series analysis : time series applications in marketing

    NARCIS (Netherlands)

    Horváth, Csilla; Kornelis, Marcel; Leeflang, Peter S.H.

    2002-01-01

    In this review, we give a comprehensive summary of time series techniques in marketing, and discuss a variety of time series analysis (TSA) techniques and models. We classify them in the sets (i) univariate TSA, (ii) multivariate TSA, and (iii) multiple TSA. We provide relevant marketing

  17. Time change and universality in turbulence

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Schmiegel, Jürgen

    ... experiment. Taylor Reynolds numbers range from Rλ = 80 for the wind tunnel experiment up to Rλ = 17000 for the atmospheric boundary layer experiment. Empirical findings strongly support the appropriateness of normal inverse Gaussian distributions for a parsimonious and universal description of the probability densities of turbulent velocity increments. Furthermore, the application of a time change in terms of the scale parameter δ of the normal inverse Gaussian distribution results in a collapse of the densities of velocity increments onto Reynolds number independent distributions. We discuss this kind...

  18. Kinetic features of interchange turbulence

    International Nuclear Information System (INIS)

    Sarazin, Y; Grandgirard, V; Fleurence, E; Garbet, X; Ghendrih, Ph; Bertrand, P; Depret, G

    2005-01-01

    Non-linear gyrokinetic simulations of the interchange instability are discussed. The semi-Lagrangian numerical scheme allows two critical points to be addressed with simulations lasting several confinement times: an accurate statistical analysis of the fluctuations and the back-reaction of the turbulence on equilibrium profiles. Zonal flows are found to quench a 2D + 1D interchange turbulence when one of the species has a vanishing response to zonal modes. Conversely, when streamers dominate, the equilibrium profiles are found to be stiff. In the non-linear regime and steady-state turbulence, the distribution function exhibits a significant departure from a Maxwellian distribution. This property is characterized by an expansion in generalized Laguerre functions with a slow decay of the series of moments. This justifies the use of gyrokinetic simulations, since a standard fluid approach, based on a limited number of moments, would certainly require a complex closure to take into account the impact of these non-vanishing high-order moments

  19. Data mining in time series databases

    CERN Document Server

    Kandel, Abraham; Bunke, Horst

    2004-01-01

    Adding the time dimension to real-world databases produces Time Series Databases (TSDB) and introduces new aspects and difficulties to data mining and knowledge discovery. This book covers the state-of-the-art methodology for mining time series databases. The novel data mining methods presented in the book include techniques for efficient segmentation, indexing, and classification of noisy and dynamic time series. A graph-based method for anomaly detection in time series is described, and the book also studies the implications of a novel and potentially useful representation of time series as strings. The problem of detecting changes in data mining models that are induced from temporal databases is additionally discussed.

  20. Models for dependent time series

    CERN Document Server

    Tunnicliffe Wilson, Granville; Haywood, John

    2015-01-01

    Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data.The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational mater

  1. Visual time series analysis

    DEFF Research Database (Denmark)

    Fischer, Paul; Hilbert, Astrid

    2012-01-01

    We introduce a platform which supplies an easy-to-handle, interactive, extendable, and fast analysis tool for time series analysis. In contrast to other software suites like Maple, Matlab, or R, which use a command-line-like interface and where the user has to memorize or look up the appropriate commands, our application is select-and-click-driven. It allows the user to derive many different sequences of deviations for a given time series and to visualize them in different ways in order to judge their expressive power and to reuse the procedure found. For many transformations or model fits, the user may choose between manual and automated parameter selection. The user can define new transformations and add them to the system. The application contains efficient implementations of advanced and recent techniques for time series analysis including techniques related to extreme value analysis and filtering...

  2. A Review of Subsequence Time Series Clustering

    Directory of Open Access Journals (Sweden)

    Seyedjamal Zolhavarieh

    2014-01-01

    Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches to performing subsequence time series clustering are discussed under each of these categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies.
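
    The sliding-window subsequence extraction that the reviewed clustering methods share can be sketched as follows (the z-normalization step and function name are illustrative choices, not prescribed by the review):

```python
import numpy as np

def subsequences(x, w):
    """All sliding-window subsequences of length w, z-normalized --
    the usual starting point for subsequence time series clustering."""
    S = np.array([x[i:i + w] for i in range(len(x) - w + 1)])
    S = S - S.mean(axis=1, keepdims=True)          # remove each window's mean
    sd = S.std(axis=1, keepdims=True)
    return S / np.where(sd == 0, 1.0, sd)          # guard constant windows
```

    Note that heavily overlapping windows are highly similar by construction, which is precisely the trivial-match pitfall that much of the subsequence-clustering literature discusses.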

  5. Analysis of Heavy-Tailed Time Series

    DEFF Research Database (Denmark)

    Xie, Xiaolei

    This thesis is about analysis of heavy-tailed time series. We discuss tail properties of real-world equity return series and investigate the possibility that a single tail index is shared by all return series of actively traded equities in a market. Conditions for this hypothesis to be true...... are identified. We study the eigenvalues and eigenvectors of sample covariance and sample auto-covariance matrices of multivariate heavy-tailed time series, and particularly for time series with very high dimensions. Asymptotic approximations of the eigenvalues and eigenvectors of such matrices are found...... and expressed in terms of the parameters of the dependence structure, among others. Furthermore, we study an importance sampling method for estimating rare-event probabilities of multivariate heavy-tailed time series generated by matrix recursion. We show that the proposed algorithm is efficient in the sense...

  6. Time dependent plasma viscosity and relation between neoclassical transport and turbulent transport

    International Nuclear Information System (INIS)

    Shaing, K.C.

    2005-01-01

    Time dependent plasma viscosities for asymmetric toroidal plasmas in various collisionality regimes are calculated. It is known that in the symmetric limit the time dependent plasma viscosities accurately describe the plasma flow damping rate. Thus, time dependent plasma viscosities are important in modeling the radial electric field of the zonal flow. From the momentum balance equation, it is shown that, at steady state, the balance of the viscous force and the momentum source determines the radial electric field of the zonal flow. Thus, for a fixed source, the smaller the viscous force, the larger the radial electric field, which in turn suppresses the turbulent fluctuations more and improves turbulent transport. However, a smaller viscous force also implies smaller neoclassical transport fluxes, based on the neoclassical flux-force relationship. We thus show that when the neoclassical transport fluxes are improved, so are the turbulent fluxes in toroidal plasmas. (author)

  7. Adaptive time-variant models for fuzzy-time-series forecasting.

    Science.gov (United States)

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.

  8. Time Series Analysis and Forecasting by Example

    CERN Document Server

    Bisgaard, Soren

    2011-01-01

    An intuition-based approach enables you to master time series analysis with ease Time Series Analysis and Forecasting by Example provides the fundamental techniques in time series analysis using various examples. By introducing necessary theory through examples that showcase the discussed topics, the authors successfully help readers develop an intuitive understanding of seemingly complicated time series models and their implications. The book presents methodologies for time series analysis in a simplified, example-based approach. Using graphics, the authors discuss each presented example in

  9. Nonlinear time-series analysis of current signal in cathodic contact glow discharge electrolysis

    International Nuclear Information System (INIS)

    Allagui, Anis; Abdelkareem, Mohammad Ali; Rojas, Andrea Espinel; Bonny, Talal; Elwakil, Ahmed S.

    2016-01-01

    In the standard two-electrode configuration employed in electrolytic process, when the control dc voltage is brought to a critical value, the system undergoes a transition from conventional electrolysis to contact glow discharge electrolysis (CGDE), which has also been referred to as liquid-submerged micro-plasma, glow discharge plasma electrolysis, electrode effect, electrolytic plasma, etc. The light-emitting process is associated with the development of an irregular and erratic current time-series which has been arbitrarily labelled as “random,” and thus dissuaded further research in this direction. Here, we examine the current time-series signals measured in cathodic CGDE configuration in a concentrated KOH solution at different dc bias voltages greater than the critical voltage. We show that the signals are, in fact, not random according to the NIST SP. 800-22 test suite definition. We also demonstrate that post-processing low-pass filtered sequences requires less time than the native as-measured sequences, suggesting a superposition of low frequency chaotic fluctuations and high frequency behaviors (which may be produced by more than one possible source of entropy). Using an array of nonlinear time-series analyses for dynamical systems, i.e., the computation of largest Lyapunov exponents and correlation dimensions, and re-construction of phase portraits, we found that low-pass filtered datasets undergo a transition from quasi-periodic to chaotic to quasi-hyper-chaotic behavior, and back again to chaos when the voltage controlling-parameter is increased. The high frequency part of the signals is discussed in terms of highly nonlinear turbulent motion developed around the working electrode.

  10. Clustering of financial time series

    Science.gov (United States)

    D'Urso, Pierpaolo; Cappelli, Carmela; Di Lallo, Dario; Massari, Riccardo

    2013-05-01

    This paper addresses the topic of classifying financial time series in a fuzzy framework proposing two fuzzy clustering models both based on GARCH models. In general clustering of financial time series, due to their peculiar features, needs the definition of suitable distance measures. At this aim, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, in the framework of a partitioning around medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on partitioning around medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance, based on estimated GARCH parameters and covariances that takes into account the information about the volatility structure of time series. In order to illustrate the merits of the proposed fuzzy approaches an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, also comparing the fuzzy models with their crisp version.
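
The full fuzzy GARCH-based models above are beyond a short snippet, but the general recipe — represent each series by autoregressive coefficients, then cluster around medoids under a distance on those coefficients — can be sketched with a plain least-squares AR fit and a crisp k-medoids loop. Both are simplified stand-ins for the fuzzy, GARCH-based versions in the paper; all function names and parameters here are illustrative.

```python
import numpy as np

def ar_coeffs(x, p=2):
    """Least-squares AR(p) coefficients, used as the series' signature."""
    Y = x[p:]
    X = np.column_stack([x[p - k : len(x) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef

def k_medoids(D, k, iters=50):
    """Basic partitioning-around-medoids on a precomputed distance matrix."""
    medoids = list(range(k))                      # crude initialisation
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = []
        for j in range(k):
            members = np.where(labels == j)[0]
            within = D[np.ix_(members, members)].sum(axis=0)
            new.append(int(members[np.argmin(within)]))
        if new == medoids:
            break
        medoids = new
    return medoids, np.argmin(D[:, medoids], axis=1)

def simulate_ar1(phi, n, rng):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

# two groups of synthetic series with opposite autoregressive dynamics
rng = np.random.default_rng(0)
series = [simulate_ar1(phi, 500, rng)
          for phi in (0.9, -0.9, 0.9, -0.9, 0.9, -0.9)]
sigs = np.array([ar_coeffs(x) for x in series])
D = np.linalg.norm(sigs[:, None, :] - sigs[None, :, :], axis=2)
medoids, labels = k_medoids(D, k=2)
```

On this toy data the AR-coefficient distance cleanly separates the two dynamical regimes, which is the property the autoregressive metric is meant to exploit.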

  11. Stirring turbulence with turbulence

    NARCIS (Netherlands)

    Cekli, H.E.; Joosten, R.; van de Water, W.

    2015-01-01

    We stir wind-tunnel turbulence with an active grid that consists of rods with attached vanes. The time-varying angle of these rods is controlled by random numbers. We study the response of the turbulence to the statistical properties of these random numbers. The random numbers are generated by the

  12. Propagation of Porro "petal" beams through a turbulent atmosphere

    CSIR Research Space (South Africa)

    Burger, L

    2009-07-01

    Full Text Available How to measure turbulence (Kolmogorov turbulence model; h is height above sea level, k is the wave number): 1. Decompose the turbulence model into a series of orthogonal functions (basis set). 2. Construct a series of pseudo-random phase screens from the basis. 3. Implement optical wavefront changes from the pseudo-random phase screens. 4. Propagate the resulting beam to the far field and measure …

  13. Data Mining Smart Energy Time Series

    Directory of Open Access Journals (Sweden)

    Janina POPEANGA

    2015-07-01

    Full Text Available With the advent of smart metering technology the amount of energy data will increase significantly, and the utilities industry will have to face another big challenge - finding relationships within time-series data and, even more, analyzing such huge numbers of time series to find useful patterns and trends with fast or even real-time response. This study presents a short review of the literature in the field, aiming to demonstrate how essential the application of data mining techniques to time series is for making the best use of this large quantity of data, despite all the difficulties. The most important time series data mining techniques are also presented, highlighting their applicability in the energy domain.

  14. Predicting chaotic time series

    International Nuclear Information System (INIS)

    Farmer, J.D.; Sidorowich, J.J.

    1987-01-01

    We present a forecasting technique for chaotic data. After embedding a time series in a state space using delay coordinates, we "learn" the induced nonlinear mapping using local approximation. This allows us to make short-term predictions of the future behavior of a time series, using information based only on past values. We present an error estimate for this technique, and demonstrate its effectiveness by applying it to several examples, including data from the Mackey-Glass delay differential equation, Rayleigh-Benard convection, and Taylor-Couette flow
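
The delay-coordinate forecasting idea above can be sketched in a minimal, zeroth-order form: embed the series, find the k nearest past states to the current state, and average their one-step "images". This is a generic simplification, not the authors' exact local-approximation scheme; all names and parameter choices are illustrative.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Embed a scalar series in dim-dimensional delay coordinates."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def local_forecast(x, dim=3, tau=1, k=5):
    """Predict the next value of x by averaging the one-step images of
    the k nearest neighbours of the current state (zeroth-order model)."""
    emb = delay_embed(x, dim, tau)
    current, history = emb[-1], emb[:-1]
    # history state emb[i] maps to the "image" value one step ahead
    images = x[(dim - 1) * tau + 1 :]
    dists = np.linalg.norm(history - current, axis=1)
    nearest = np.argsort(dists)[:k]
    return images[nearest].mean()

# a noiseless periodic signal is perfectly predictable from its past
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)
pred = local_forecast(x, dim=3, tau=5, k=5)
```

Even this crude local model recovers the next value of a deterministic signal from past analogues, which is the essence of the technique.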

  15. Measuring multiscaling in financial time-series

    International Nuclear Information System (INIS)

    Buonocore, R.J.; Aste, T.; Di Matteo, T.

    2016-01-01

    We discuss the origin of multiscaling in financial time-series and investigate how to best quantify it. Our methodology consists in separating the different sources of measured multifractality by analyzing the multi/uni-scaling behavior of synthetic time-series with known properties. We use the results from the synthetic time-series to interpret the measure of multifractality of real log-returns time-series. The main finding is that the aggregation horizon of the returns can introduce a strong bias effect on the measure of multifractality. This effect can become especially important when returns distributions have power law tails with exponents in the range (2, 5). We discuss the right aggregation horizon to mitigate this bias.
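
One common way to quantify multi/uni-scaling on a synthetic series with known properties, as the abstract describes, is through the q-th order moments of increments over a range of aggregation horizons. The sketch below is a generic generalized-Hurst estimator, not necessarily the authors' exact measure: for a uniscaling process such as Brownian motion, the ratio zeta(q)/q is flat at the Hurst exponent.

```python
import numpy as np

def scaling_exponents(x, qs, taus):
    """Estimate zeta(q) from E|x(t+tau) - x(t)|^q ~ tau^zeta(q),
    via least-squares slopes in log-log coordinates."""
    zetas = []
    for q in qs:
        m = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
        zetas.append(np.polyfit(np.log(taus), np.log(m), 1)[0])
    return np.array(zetas)

rng = np.random.default_rng(1)
bm = np.cumsum(rng.standard_normal(200_000))     # uniscaling reference series
qs = np.array([1.0, 2.0, 3.0])
zetas = scaling_exponents(bm, qs, taus=[1, 2, 4, 8, 16, 32])
hurst = zetas / qs    # flat (≈ 0.5) for Brownian motion: no multiscaling
```

Deviations of zeta(q)/q from a constant on real log-returns are then interpreted against such synthetic baselines, which is where the aggregation-horizon bias discussed above enters.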

  16. Strange attractors in weakly turbulent Couette-Taylor flow

    Science.gov (United States)

    Brandstater, A.; Swinney, Harry L.

    1987-01-01

    An experiment is conducted on the transition from quasi-periodic to weakly turbulent flow of a fluid contained between concentric cylinders with the inner cylinder rotating and the outer cylinder at rest. Power spectra, phase-space portraits, and circle maps obtained from velocity time-series data indicate that the nonperiodic behavior observed is deterministic, that is, it is described by strange attractors. Various problems that arise in computing the dimension of strange attractors constructed from experimental data are discussed and it is shown that these problems impose severe requirements on the quantity and accuracy of data necessary for determining dimensions greater than about 5. In the present experiment the attractor dimension increases from 2 at the onset of turbulence to about 4 at a Reynolds number 50-percent above the onset of turbulence.

  17. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
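
The central observable above, the time averaged MSD, is simple to compute. The sketch below evaluates it for a simulated log-price random walk (an illustrative stand-in for the Dow Jones series, not the authors' data): for ordinary Brownian motion the TAMSD grows linearly in the lag.

```python
import numpy as np

def tamsd(x, lag):
    """Time averaged mean squared displacement of series x at a given lag."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

rng = np.random.default_rng(2)
log_price = np.cumsum(rng.standard_normal(100_000))   # toy log-price walk
lags = np.array([1, 2, 4, 8, 16, 32])
msd = np.array([tamsd(log_price, int(L)) for L in lags])
scaling = np.polyfit(np.log(lags), np.log(msd), 1)[0]  # ≈ 1 for Brownian motion
```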

  18. Applied time series analysis

    CERN Document Server

    Woodward, Wayne A; Elliott, Alan C

    2011-01-01

    ""There is scarcely a standard technique that the reader will find left out … this book is highly recommended for those requiring a ready introduction to applicable methods in time series and serves as a useful resource for pedagogical purposes.""-International Statistical Review (2014), 82""Current time series theory for practice is well summarized in this book.""-Emmanuel Parzen, Texas A&M University""What an extraordinary range of topics covered, all very insightfully. I like [the authors'] innovations very much, such as the AR factor table.""-David Findley, U.S. Census Bureau (retired)""…

  19. The nature of turbulence in a triangular lattice gas automaton

    Science.gov (United States)

    Duong-Van, Minh; Feit, M. D.; Keller, P.; Pound, M.

    1986-12-01

    Power spectra calculated from the coarse-graining of a simple lattice gas automaton, and those of time averages of other stochastic time series that we have investigated, have exponents in the range -1.6 to -2, consistent with observations of fully developed turbulence. This power spectrum is a natural consequence of coarse-graining; the exponent -2 represents the continuum limit.
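
The -2 spectral exponent quoted as the continuum limit can be illustrated on a generic stochastic series rather than the lattice gas itself: the periodogram of a simple random walk falls off as f^-2. The numpy sketch below (band limits chosen for illustration) fits that exponent by log-log regression.

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.cumsum(rng.standard_normal(2 ** 16))   # random walk: spectrum ~ f^-2
X = np.fft.rfft(x - x.mean())
f = np.fft.rfftfreq(len(x))                   # frequencies in cycles/sample
psd = np.abs(X) ** 2
# fit the spectral exponent over an intermediate frequency band
band = (f > 1e-3) & (f < 1e-1)
slope = np.polyfit(np.log(f[band]), np.log(psd[band]), 1)[0]
```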

  20. Entropic Analysis of Electromyography Time Series

    Science.gov (United States)

    Kaufman, Miron; Sung, Paul

    2005-03-01

    We are in the process of assessing the effectiveness of fractal and entropic measures for the diagnosis of low back pain from surface electromyography (EMG) time series. Surface EMG is used to assess patients with low back pain. In a typical EMG measurement, the voltage is measured every millisecond. We observed back muscle fatiguing during one minute, which results in a time series with 60,000 entries. We characterize the complexity of the time series by computing the time dependence of the Shannon entropy. The analysis of the time series from different relevant muscles of healthy and low back pain (LBP) individuals provides evidence that the level of variability of back muscle activity is much larger for healthy individuals than for individuals with LBP. In general the time dependence of the entropy shows a crossover from a diffusive regime to a regime characterized by long time correlations (self-organization) at about 0.01 s.
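
The entropy-versus-time computation can be sketched as a windowed Shannon entropy of the amplitude histogram. The window length, bin count, and surrogate Gaussian signal below are illustrative choices, not the authors' parameters.

```python
import numpy as np

def shannon_entropy(x, bins=32):
    """Shannon entropy (in bits) of the amplitude histogram of x."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()           # normalise occupied bins to probabilities
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(3)
emg = rng.standard_normal(60_000)    # stand-in for a 1 kHz, one-minute record
window = 1000
entropies = [shannon_entropy(emg[i : i + window])
             for i in range(0, len(emg) - window + 1, window)]
```

Tracking `entropies` over successive windows gives the entropy time dependence whose crossover behaviour is discussed in the abstract.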

  1. Quantifying memory in complex physiological time-series.

    Science.gov (United States)

    Shirazi, Amir H; Raoufy, Mohammad R; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R; Amodio, Piero; Jafari, G Reza; Montagnese, Sara; Mani, Ali R

    2013-01-01

    In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of "memory length" was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empiric evidence that rare fluctuations in cardio-respiratory time-series are 'forgotten' quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations.

  2. Effective Feature Preprocessing for Time Series Forecasting

    DEFF Research Database (Denmark)

    Zhao, Junhua; Dong, Zhaoyang; Xu, Zhao

    2006-01-01

    Time series forecasting is an important area in data mining research. Feature preprocessing techniques have significant influence on forecasting accuracy, therefore are essential in a forecasting model. Although several feature preprocessing techniques have been applied in time series forecasting...... performance in time series forecasting. It is demonstrated in our experiment that, effective feature preprocessing can significantly enhance forecasting accuracy. This research can be a useful guidance for researchers on effectively selecting feature preprocessing techniques and integrating them with time...... series forecasting models....

  3. Structure function scaling in a Reλ = 250 turbulent mixing layer

    KAUST Repository

    Attili, Antonio

    2011-12-22

    A highly resolved Direct Numerical Simulation of a spatially developing turbulent mixing layer is presented. In the fully developed region, the flow achieves a turbulent Reynolds number Reλ = 250, high enough for a clear separation between large and dissipative scales, and thus for the presence of an inertial range. Structure functions have been calculated in the self-similar region using velocity time series and Taylor's frozen turbulence hypothesis. The Extended Self-Similarity (ESS) concept has been employed to evaluate relative scaling exponents. A wide range of scales with scaling exponents and intermittency levels equal to homogeneous isotropic turbulence has been identified. Moreover an additional scaling range exists for larger scales; it is characterized by smaller exponents, similar to the values reported in the literature for flows with strong shear.

  4. Structure function scaling in a Reλ = 250 turbulent mixing layer

    KAUST Repository

    Attili, Antonio; Bisetti, Fabrizio

    2011-01-01

    A highly resolved Direct Numerical Simulation of a spatially developing turbulent mixing layer is presented. In the fully developed region, the flow achieves a turbulent Reynolds number Reλ = 250, high enough for a clear separation between large and dissipative scales, and thus for the presence of an inertial range. Structure functions have been calculated in the self-similar region using velocity time series and Taylor's frozen turbulence hypothesis. The Extended Self-Similarity (ESS) concept has been employed to evaluate relative scaling exponents. A wide range of scales with scaling exponents and intermittency levels equal to homogeneous isotropic turbulence has been identified. Moreover an additional scaling range exists for larger scales; it is characterized by smaller exponents, similar to the values reported in the literature for flows with strong shear.
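
The ESS procedure — regressing log S_p against log S_3 instead of against the separation r — can be sketched on a synthetic monofractal signal, for which the relative exponents are exactly p/3. This is an illustration of the ESS mechanics on surrogate data, not the DNS analysis itself.

```python
import numpy as np

def structure_functions(u, ps, rs):
    """S_p(r) = <|u(x + r) - u(x)|^p> for each order p and separation r."""
    return np.array([[np.mean(np.abs(u[r:] - u[:-r]) ** p) for r in rs]
                     for p in ps])

rng = np.random.default_rng(4)
u = np.cumsum(rng.standard_normal(200_000))   # monofractal surrogate (H = 1/2)
ps, rs = [1, 2, 3, 4], [2, 4, 8, 16, 32, 64]
S = structure_functions(u, ps, rs)
# ESS: slope of log S_p against log S_3 gives the relative exponent zeta_p/zeta_3
rel = [np.polyfit(np.log(S[2]), np.log(Sp), 1)[0] for Sp in S]
```

For the surrogate, `rel` is close to [1/3, 2/3, 1, 4/3]; intermittency in real turbulence shows up as departures of the higher-order relative exponents from p/3.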

  5. Statistical description of turbulent transport for flux driven toroidal plasmas

    Science.gov (United States)

    Anderson, J.; Imadera, K.; Kishimoto, Y.; Li, J. Q.; Nordman, H.

    2017-06-01

    A novel methodology to analyze non-Gaussian probability distribution functions (PDFs) of intermittent turbulent transport in global full-f gyrokinetic simulations is presented. In this work, the auto-regressive integrated moving average (ARIMA) model is applied to time series data of intermittent turbulent heat transport to separate noise and oscillatory trends, allowing for the extraction of non-Gaussian features of the PDFs. It was shown that non-Gaussian tails of the PDFs from first principles based gyrokinetic simulations agree with an analytical estimation based on a two fluid model.
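
A minimal stand-in for the ARIMA step described above: fit a low-order AR model by least squares, take the one-step residuals as the extracted "noise" component, and quantify its non-Gaussian tails through excess kurtosis. The model order, sample sizes, and heavy-tailed driving noise below are illustrative, not the simulation data of the paper.

```python
import numpy as np

def ar_residuals(x, p=2):
    """Least-squares AR(p) fit; returns one-step residuals as the
    extracted noise component (a simplified stand-in for ARIMA)."""
    Y = x[p:]
    X = np.column_stack([x[p - k : len(x) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return Y - X @ coef

def excess_kurtosis(r):
    r = r - r.mean()
    return np.mean(r ** 4) / np.mean(r ** 2) ** 2 - 3.0

rng = np.random.default_rng(5)
n = 50_000
bursts = rng.standard_t(df=10, size=n)        # heavy-tailed "intermittent" noise
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + bursts[t]          # autocorrelated trend + bursts
res = ar_residuals(x, p=2)
k = excess_kurtosis(res)                       # positive: non-Gaussian tails
```

Removing the autoregressive trend isolates the intermittent fluctuations, so their tail statistics can be compared against analytical PDF estimates as in the abstract.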

  6. Inverse scattering problem in turbulent magnetic fluctuations

    Directory of Open Access Journals (Sweden)

    R. A. Treumann

    2016-08-01

    Full Text Available We apply a particular form of the inverse scattering theory to turbulent magnetic fluctuations in a plasma. In the present note we develop the theory, formulate the magnetic fluctuation problem in terms of its electrodynamic turbulent response function, and reduce it to the solution of a special form of the famous Gelfand–Levitan–Marchenko equation of quantum mechanical scattering theory. The last of these applies to transmission and reflection in an active medium. The theory of turbulent magnetic fluctuations does not refer to such quantities. It requires a somewhat different formulation. We reduce the theory to the measurement of the low-frequency electromagnetic fluctuation spectrum, which is not the turbulent spectral energy density. The inverse theory in this form enables obtaining information about the turbulent response function of the medium. The dynamic causes of the electromagnetic fluctuations are implicit to it. Thus, it is of vital interest in low-frequency magnetic turbulence. The theory is developed until presentation of the equations in applicable form to observations of turbulent electromagnetic fluctuations as input from measurements. Solution of the final integral equation should be done by standard numerical methods based on iteration. We point to the possibility of treating power law fluctuation spectra as an example. Formulation of the problem to include observations of spectral power densities in turbulence is not attempted. This leads to severe mathematical problems and requires a reformulation of inverse scattering theory. One particular aspect of the present inverse theory of turbulent fluctuations is that its structure naturally leads to spatial information which is obtained from the temporal information that is inherent to the observation of time series. The Taylor assumption is not needed here. This is a consequence of Maxwell's equations, which couple space and time evolution. The inversion procedure takes

  7. Statistical criteria for characterizing irradiance time series.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Ellis, Abraham; Hansen, Clifford W.

    2010-10-01

    We propose and examine several statistical criteria for characterizing time series of solar irradiance. Time series of irradiance are used in analyses that seek to quantify the performance of photovoltaic (PV) power systems over time. Time series of irradiance are either measured or are simulated using models. Simulations of irradiance are often calibrated to or generated from statistics for observed irradiance and simulations are validated by comparing the simulation output to the observed irradiance. Criteria used in this comparison should derive from the context of the analyses in which the simulated irradiance is to be used. We examine three statistics that characterize time series and their use as criteria for comparing time series. We demonstrate these statistics using observed irradiance data recorded in August 2007 in Las Vegas, Nevada, and in June 2009 in Albuquerque, New Mexico.

  8. Homogenising time series: beliefs, dogmas and facts

    Science.gov (United States)

    Domonkos, P.

    2011-06-01

    In the recent decades various homogenisation methods have been developed, but the real effects of their application on time series are still not sufficiently known. The ongoing COST action HOME (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than earlier. As a part of the COST activity, a benchmark dataset was built whose characteristics approach well those of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods, and to analyse the real effects of selected theoretical recommendations. Empirical results show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities often have statistical characteristics similar to natural changes caused by climatic variability, thus the pure application of the classic theory that change-points of observed time series can be found and corrected one-by-one is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in raw time series. Some problems around detecting multiple structures of inhomogeneities, as well as around time series comparisons within homogenisation procedures, are discussed briefly in the study.

  9. Ocean time-series near Bermuda: Hydrostation S and the US JGOFS Bermuda Atlantic time-series study

    Science.gov (United States)

    Michaels, Anthony F.; Knap, Anthony H.

    1992-01-01

    Bermuda is the site of two ocean time-series programs. At Hydrostation S, the ongoing biweekly profiles of temperature, salinity and oxygen now span 37 years. This is one of the longest open-ocean time-series data sets and provides a view of decadal scale variability in ocean processes. In 1988, the U.S. JGOFS Bermuda Atlantic Time-series Study began a wide range of measurements at a frequency of 14-18 cruises each year to understand temporal variability in ocean biogeochemistry. On each cruise, the data range from chemical analyses of discrete water samples to data from electronic packages of hydrographic and optics sensors. In addition, a range of biological and geochemical rate measurements are conducted that integrate over time-periods of minutes to days. This sampling strategy yields a reasonable resolution of the major seasonal patterns and of decadal scale variability. The Sargasso Sea also has a variety of episodic production events on scales of days to weeks and these are only poorly resolved. In addition, there is a substantial amount of mesoscale variability in this region and some of the perceived temporal patterns are caused by the intersection of the biweekly sampling with the natural spatial variability. In the Bermuda time-series programs, we have added a series of additional cruises to begin to assess these other sources of variation and their impacts on the interpretation of the main time-series record. However, the adequate resolution of higher frequency temporal patterns will probably require the introduction of new sampling strategies and some emerging technologies such as biogeochemical moorings and autonomous underwater vehicles.

  10. Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    Science.gov (United States)

    Dahl, Milo D.

    2013-01-01

    Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.

  11. Multivariate Time Series Decomposition into Oscillation Components.

    Science.gov (United States)

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-08-01

    Many time series are considered to be a superposition of several oscillation components. We have proposed a method for decomposing univariate time series into oscillation components and estimating their phases (Matsuda & Komaki, 2017 ). In this study, we extend that method to multivariate time series. We assume that several oscillators underlie the given multivariate time series and that each variable corresponds to a superposition of the projections of the oscillators. Thus, the oscillators superpose on each variable with amplitude and phase modulation. Based on this idea, we develop gaussian linear state-space models and use them to decompose the given multivariate time series. The model parameters are estimated from data using the empirical Bayes method, and the number of oscillators is determined using the Akaike information criterion. Therefore, the proposed method extracts underlying oscillators in a data-driven manner and enables investigation of phase dynamics in a given multivariate time series. Numerical results show the effectiveness of the proposed method. From monthly mean north-south sunspot number data, the proposed method reveals an interesting phase relationship.

  12. Forecasting Enrollments with Fuzzy Time Series.

    Science.gov (United States)

    Song, Qiang; Chissom, Brad S.

    The concept of fuzzy time series is introduced and used to forecast the enrollment of a university. Fuzzy time series, an aspect of fuzzy set theory, forecasts enrollment using a first-order time-invariant model. To evaluate the model, the conventional linear regression technique is applied and the predicted values obtained are compared to the…
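
A first-order fuzzy time-series forecaster in the spirit described above can be sketched as follows. This is a Chen-style simplification (equal-width intervals, fuzzy logical relationship groups, mean-of-midpoints defuzzification) rather than Song and Chissom's exact time-invariant model; the toy data and interval count are illustrative.

```python
import numpy as np

def fuzzy_ts_forecast(x, n_intervals=7):
    """In-sample one-step forecasts with a first-order fuzzy time series:
    partition the universe of discourse, collect fuzzy logical
    relationship groups A_i -> {A_j}, and defuzzify each forecast as
    the mean of the consequent interval midpoints."""
    edges = np.linspace(min(x), max(x), n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    labels = np.clip(np.searchsorted(edges, x, side="right") - 1,
                     0, n_intervals - 1)
    groups = {}
    for a, b in zip(labels[:-1], labels[1:]):
        groups.setdefault(int(a), set()).add(int(b))
    preds = []
    for t in range(1, len(x)):
        cons = sorted(groups.get(int(labels[t - 1]), {int(labels[t - 1])}))
        preds.append(mids[cons].mean())
    return np.array(preds)

x = 15000 + 2000 * np.sin(np.arange(40) / 4.0)   # toy enrollment-like series
preds = fuzzy_ts_forecast(x)
mae = np.mean(np.abs(preds - x[1:]))
```

The forecast resolution is limited by the interval width, which is exactly why the partition of the discourse is one of the three factors the related studies focus on.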

  13. Forecasting Cryptocurrencies Financial Time Series

    DEFF Research Database (Denmark)

    Catania, Leopoldo; Grassi, Stefano; Ravazzolo, Francesco

    2018-01-01

    This paper studies the predictability of cryptocurrencies time series. We compare several alternative univariate and multivariate models in point and density forecasting of four of the most capitalized series: Bitcoin, Litecoin, Ripple and Ethereum. We apply a set of crypto–predictors and rely...

  14. Forecasting Cryptocurrencies Financial Time Series

    OpenAIRE

    Catania, Leopoldo; Grassi, Stefano; Ravazzolo, Francesco

    2018-01-01

    This paper studies the predictability of cryptocurrencies time series. We compare several alternative univariate and multivariate models in point and density forecasting of four of the most capitalized series: Bitcoin, Litecoin, Ripple and Ethereum. We apply a set of crypto–predictors and rely on Dynamic Model Averaging to combine a large set of univariate Dynamic Linear Models and several multivariate Vector Autoregressive models with different forms of time variation. We find statistical si...

  15. Turbulence

    CERN Document Server

    Bailly, Christophe

    2015-01-01

    This book covers the major problems of turbulence and turbulent processes, including  physical phenomena, their modeling and their simulation. After a general introduction in Chapter 1 illustrating many aspects dealing with turbulent flows, averaged equations and kinetic energy budgets are provided in Chapter 2. The concept of turbulent viscosity as a closure of the Reynolds stress is also introduced. Wall-bounded flows are presented in Chapter 3, and aspects specific to boundary layers and channel or pipe flows are also pointed out. Free shear flows, namely free jets and wakes, are considered in Chapter 4. Chapter 5 deals with vortex dynamics. Homogeneous turbulence, isotropy, and dynamics of isotropic turbulence are presented in Chapters 6 and 7. Turbulence is then described both in the physical space and in the wave number space. Time dependent numerical simulations are presented in Chapter 8, where an introduction to large eddy simulation is offered. The last three chapters of the book summarize remarka...

  16. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  17. Time Series Analysis Forecasting and Control

    CERN Document Server

    Box, George E P; Reinsel, Gregory C

    2011-01-01

A modernized new edition of one of the most trusted books on time series analysis. Since publication of the first edition in 1970, Time Series Analysis has served as one of the most influential and prominent works on the subject. This new edition maintains its balanced presentation of the tools for modeling and analyzing time series and also introduces the latest developments that have occurred in the field over the past decade through applications from areas such as business, finance, and engineering. The Fourth Edition provides a clearly written exploration of the key methods for building, cl

  18. Experimental studies of occupation times in turbulent flows

    DEFF Research Database (Denmark)

    Mann, J.; Ott, Søren; Pécseli, H.L.

    2003-01-01

    The motion of passively convected particles in turbulent flows is studied experimentally in approximately homogeneous and isotropic turbulent flows, generated in water by two moving grids. The simultaneous trajectories of many small passively convected, neutrally buoyant, polystyrene particles...

  19. Costationarity of Locally Stationary Time Series Using costat

    OpenAIRE

    Cardinali, Alessandro; Nason, Guy P.

    2013-01-01

This article describes the R package costat. This package enables a user to (i) perform a test for time series stationarity; (ii) compute and plot time-localized autocovariances; and (iii) determine and explore any costationary relationship between two locally stationary time series. Two locally stationary time series are said to be costationary if there exist two time-varying combination functions such that the linear combination of the two series with the functions produces another time...

  20. Two-scale analysis of intermittency in fully developed turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Badii, R; Talkner, P [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    A self-affinity test for turbulent time series is applied to experimental data for the estimation of intermittency exponents. The method employs exact relations satisfied by joint expectations of observables computed across two different length scales. One of these constitutes a verification tool for the existence and the extent of the inertial range. (author) 2 figs., 13 refs.

  1. Detecting nonlinear structure in time series

    International Nuclear Information System (INIS)

    Theiler, J.

    1991-01-01

    We describe an approach for evaluating the statistical significance of evidence for nonlinearity in a time series. The formal application of our method requires the careful statement of a null hypothesis which characterizes a candidate linear process, the generation of an ensemble of ''surrogate'' data sets which are similar to the original time series but consistent with the null hypothesis, and the computation of a discriminating statistic for the original and for each of the surrogate data sets. The idea is to test the original time series against the null hypothesis by checking whether the discriminating statistic computed for the original time series differs significantly from the statistics computed for each of the surrogate sets. While some data sets very cleanly exhibit low-dimensional chaos, there are many cases where the evidence is sketchy and difficult to evaluate. We hope to provide a framework within which such claims of nonlinearity can be evaluated. 5 refs., 4 figs
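The surrogate-data procedure summarized above can be sketched in a few lines: generate surrogates that preserve the linear (spectral) properties of the series, compute a discriminating statistic for the original and for each surrogate, and compare. This is a minimal illustration, not the paper's code; the phase-randomized surrogates, the time-irreversibility statistic and the rank-based p-value are illustrative choices.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum (hence autocorrelation) as x."""
    n = len(x)
    spec = np.fft.rfft(x)
    # Randomize phases, keep amplitudes; DC (and Nyquist) phases stay zero.
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0
    if n % 2 == 0:
        phases[-1] = 0.0
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

def time_irreversibility(x):
    """A simple discriminating statistic; zero in expectation for linear Gaussian series."""
    d = x[1:] - x[:-1]
    return np.mean(d ** 3)

def surrogate_test(x, n_surrogates=99, seed=0):
    """Rank-based test of the null hypothesis of a linear Gaussian process."""
    rng = np.random.default_rng(seed)
    stat = time_irreversibility(x)
    surr = [time_irreversibility(phase_randomized_surrogate(x, rng))
            for _ in range(n_surrogates)]
    # Fraction of surrogates at least as extreme as the original statistic.
    p = (1 + sum(abs(s) >= abs(stat) for s in surr)) / (n_surrogates + 1)
    return stat, p
```

A small p-value suggests the discriminating statistic separates the series from its linear surrogates, i.e. evidence against the linear null.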

  2. Introduction to time series and forecasting

    CERN Document Server

    Brockwell, Peter J

    2016-01-01

    This book is aimed at the reader who wishes to gain a working knowledge of time series and forecasting methods as applied to economics, engineering and the natural and social sciences. It assumes knowledge only of basic calculus, matrix algebra and elementary statistics. This third edition contains detailed instructions for the use of the professional version of the Windows-based computer package ITSM2000, now available as a free download from the Springer Extras website. The logic and tools of time series model-building are developed in detail. Numerous exercises are included and the software can be used to analyze and forecast data sets of the user's own choosing. The book can also be used in conjunction with other time series packages such as those included in R. The programs in ITSM2000 however are menu-driven and can be used with minimal investment of time in the computational details. The core of the book covers stationary processes, ARMA and ARIMA processes, multivariate time series and state-space mod...

  3. Constantin Carathéodory mathematics and politics in turbulent times

    CERN Document Server

    Georgiadou, Maria

    2004-01-01

Constantin Carathéodory - Mathematics and Politics in Turbulent Times is the biography of a mathematician, born in Berlin in 1873, who became famous during his lifetime, but has hitherto been ignored by historians for half a century since his death in 1950, in Munich. In a thought-provoking approach, Maria Georgiadou devotes to Constantin Carathéodory all the attention such a personality deserves. With breathtaking detail and the appropriate scrutiny she elucidates his oeuvre, life and turbulent political and historical surroundings. A descendant of the Greek élite of Constantinople, Carathéodory graduated from the military school of Brussels, became an engineer at the Assiout dam in Egypt and finally dedicated a life of effort to mathematics and education. He studied and embarked on an international academic career, haunted by wars, catastrophes and personal tragedies. Over the last years of his life, he stayed in Munich despite World War II, an ambiguous decision upon which the author sheds unprecede...

  4. PROTOSTELLAR OUTFLOW EVOLUTION IN TURBULENT ENVIRONMENTS

    International Nuclear Information System (INIS)

    Cunningham, Andrew J.; Frank, Adam; Carroll, Jonathan; Blackman, Eric G.; Quillen, Alice C.

    2009-01-01

    The link between turbulence in star-forming environments and protostellar jets remains controversial. To explore issues of turbulence and fossil cavities driven by young stellar outflows, we present a series of numerical simulations tracking the evolution of transient protostellar jets driven into a turbulent medium. Our simulations show both the effect of turbulence on outflow structures and, conversely, the effect of outflows on the ambient turbulence. We demonstrate how turbulence will lead to strong modifications in jet morphology. More importantly, we demonstrate that individual transient outflows have the capacity to re-energize decaying turbulence. Our simulations support a scenario in which the directed energy/momentum associated with cavities is randomized as the cavities are disrupted by dynamical instabilities seeded by the ambient turbulence. Consideration of the energy power spectra of the simulations reveals that the disruption of the cavities powers an energy cascade consistent with Burgers'-type turbulence and produces a driving scale length associated with the cavity propagation length. We conclude that fossil cavities interacting either with a turbulent medium or with other cavities have the capacity to sustain or create turbulent flows in star-forming environments. In the last section, we contrast our work and its conclusions with previous studies which claim that jets cannot be the source of turbulence.

  5. TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-12-01

Full Text Available The REFII1 model is an original mathematical model for time series data mining developed by the author. Its main purpose is to automate time series analysis through a unique transformation model of time series. An advantage of this approach is that it links different methods for time series analysis, connects traditional data mining tools with time series, and supports the construction of new algorithms for analyzing time series. It is worth mentioning that the REFII model is not a closed system, meaning it is not restricted to a finite set of methods. At its core, it is a model for transforming the values of a time series, which prepares data used by different sets of methods based on the same transformation model within the problem domain. The REFII model thus offers a new approach to time series analysis based on a unique model of transformation, which serves as a basis for all kinds of time series analysis. A further advantage of the REFII model is its applicability in many different areas, such as finance, medicine, voice recognition, face recognition and text mining.

  6. Frontiers in Time Series and Financial Econometrics

    OpenAIRE

    Ling, S.; McAleer, M.J.; Tong, H.

    2015-01-01

    __Abstract__ Two of the fastest growing frontiers in econometrics and quantitative finance are time series and financial econometrics. Significant theoretical contributions to financial econometrics have been made by experts in statistics, econometrics, mathematics, and time series analysis. The purpose of this special issue of the journal on “Frontiers in Time Series and Financial Econometrics” is to highlight several areas of research by leading academics in which novel methods have contrib...

  7. Scale-dependent intrinsic entropies of complex time series.

    Science.gov (United States)

    Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E

    2016-04-13

    Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
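The multi-scale entropy procedure described above is straightforward to sketch: coarse-grain the series at each scale and compute sample entropy, with the tolerance r fixed from the original series. This is a minimal version; the paper's EMD-based detrending step is omitted, and the parameter choices (m = 2, r = 0.15 sd) are conventional defaults, not the authors' exact settings.

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages of `scale` consecutive points (MSE coarse-graining)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.15):
    """SampEn(m, r): negative log of the conditional probability that sequences
    matching for m points (Chebyshev distance <= r) also match for m + 1 points."""
    def pair_matches(mm):
        w = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(w[:, None, :] - w[None, :, :]), axis=-1)
        return (np.count_nonzero(d <= r) - len(w)) / 2  # exclude self-matches
    b, a = pair_matches(m), pair_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales, m=2, r_factor=0.15):
    """MSE curve; r stays fixed from the original series, as in the standard algorithm."""
    r = r_factor * np.std(x)
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

For white noise the curve decreases with scale, while long-range-correlated signals stay flat or rise, which is the behaviour MSE exploits.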

  8. Elements of nonlinear time series analysis and forecasting

    CERN Document Server

    De Gooijer, Jan G

    2017-01-01

    This book provides an overview of the current state-of-the-art of nonlinear time series analysis, richly illustrated with examples, pseudocode algorithms and real-world applications. Avoiding a “theorem-proof” format, it shows concrete applications on a variety of empirical time series. The book can be used in graduate courses in nonlinear time series and at the same time also includes interesting material for more advanced readers. Though it is largely self-contained, readers require an understanding of basic linear time series concepts, Markov chains and Monte Carlo simulation methods. The book covers time-domain and frequency-domain methods for the analysis of both univariate and multivariate (vector) time series. It makes a clear distinction between parametric models on the one hand, and semi- and nonparametric models/methods on the other. This offers the reader the option of concentrating exclusively on one of these nonlinear time series analysis methods. To make the book as user friendly as possible...

  9. An Energy-Based Similarity Measure for Time Series

    Directory of Open Access Journals (Sweden)

    Pierre Brunagel

    2007-11-01

Full Text Available A new similarity measure for time series analysis, called SimilB, based on the cross-ΨB-energy operator (2004), is introduced. ΨB is a nonlinear measure which quantifies the interaction between two time series. Compared to the Euclidean distance (ED) or the Pearson correlation coefficient (CC), SimilB includes the temporal information and relative changes of the time series using the first and second derivatives of the time series. SimilB is well suited for both nonstationary and stationary time series, and particularly those presenting discontinuities. Some new properties of ΨB are presented. In particular, we show that ΨB as a similarity measure is robust to both scale and time shift. SimilB is illustrated with synthetic time series and an artificial dataset and compared to the CC and the ED measures.
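One commonly cited continuous form of a symmetric cross energy operator is Psi_B(x, y) = x'y' - 0.5 (x y'' + x'' y); the sketch below discretizes it with np.gradient and builds a normalized similarity score from it. Both the discretization and the normalization are illustrative assumptions, not the paper's exact definitions of ΨB or SimilB.

```python
import numpy as np

def cross_psi_b(x, y, dt=1.0):
    """Symmetric cross energy operator, Psi_B(x, y) = x'y' - 0.5*(x*y'' + x''*y),
    with derivatives approximated by np.gradient (central differences)."""
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)
    return dx * dy - 0.5 * (x * ddy + ddx * y)

def simil_b(x, y, dt=1.0):
    """Illustrative normalized similarity score built from the cross operator;
    equals 1 when the two series are identical."""
    num = np.sum(cross_psi_b(x, y, dt))
    den = np.sqrt(np.sum(cross_psi_b(x, x, dt)) * np.sum(cross_psi_b(y, y, dt)))
    return num / den if den > 0 else 0.0
```

For a pure cosine of angular frequency w, Psi_B(x, x) is constant and approximately w**2, which gives a quick sanity check on the discretization.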

  10. Describing temporal variability of the mean Estonian precipitation series in climate time scale

    Science.gov (United States)

    Post, P.; Kärner, O.

    2009-04-01

,1,1) model can be interpreted as consisting of a random walk in a noisy environment (Box and Jenkins, 1976). The fitted model appears to be weakly non-stationary, which allows us to use a stationary approximation if only the noise component of that sum of white noise and random walk is exploited. We obtain a convenient routine to generate a stationary precipitation climatology with reasonable accuracy, since the noise component variance is much larger than the dispersion of the random walk generator. This interpretation emphasizes the dominating role of a random component in the precipitation series. The result is understandable due to the small territory of Estonia, which is situated in the mid-latitude cyclone track. References Box, G.E.P. and G. Jenkins 1976: Time Series Analysis, Forecasting and Control (revised edn.), Holden Day, San Francisco, CA, 575 pp. Davis, A., Marshak, A., Wiscombe, W. and R. Cahalan 1996: Multifractal characterizations of intermittency in nonstationary geophysical signals and fields. In G. Trevino et al. (eds) Current Topics in Nonstationarity Analysis. World-Scientific, Singapore, 97-158. Kärner, O. 2002: On nonstationarity and antipersistency in global temperature series. J. Geophys. Res. D107; doi:10.1029/2001JD002024. Kärner, O. 2005: Some examples on negative feedback in the Earth climate system. Centr. European J. Phys. 3; 190-208. Monin, A.S. and A.M. Yaglom 1975: Statistical Fluid Mechanics, Vol. 2. Mechanics of Turbulence, MIT Press, Boston, Mass., 886 pp.
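The record above interprets the fitted model as a random walk observed in noise; the forecast function of such a model (an ARIMA(0,1,1), equivalently the local-level model) is simple exponential smoothing. A minimal sketch of that equivalence, with the smoothing parameter alpha as an assumed input:

```python
import numpy as np

def ses_forecast(y, alpha):
    """One-step-ahead forecasts of simple exponential smoothing; this is the
    forecast function of an ARIMA(0,1,1) model (MA coefficient theta = alpha - 1)."""
    f = np.empty(len(y))
    f[0] = y[0]  # initialize the level with the first observation
    for t in range(1, len(y)):
        # new forecast = old forecast corrected by a fraction of the last error
        f[t] = f[t - 1] + alpha * (y[t - 1] - f[t - 1])
    return f
```

With alpha = 1 this reduces to the naive random-walk forecast; smaller alpha puts more weight on the smoothed history, reflecting a larger noise component relative to the random-walk dispersion.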

  11. Detecting chaos in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W

    2013-09-01

Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum, which is computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series; the LSP, however, is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
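The substitution described above (the LSP in place of the DFT) can be sketched with scipy.signal.lombscargle on a hypothetical irregularly sampled sinusoid; the sampling scheme, noise level, and frequency grid below are illustrative choices, not the paper's. Note that lombscargle expects angular frequencies.

```python
import numpy as np
from scipy.signal import lombscargle

# Hypothetical irregular sampling: keep a random 60% of a regular grid.
rng = np.random.default_rng(0)
t = np.sort(rng.choice(np.linspace(0.0, 20.0, 400), size=240, replace=False))
y = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.standard_normal(t.size)

# Scan angular frequencies; the true peak sits at 2*pi*1.5 ~ 9.42 rad/s.
omegas = np.linspace(0.1, 20.0, 2000)
power = lombscargle(t, y - y.mean(), omegas)  # center the data first
peak = omegas[np.argmax(power)]
```

The periodogram peak recovers the driving frequency despite the missing samples, which is exactly what the DFT cannot do on this time base.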

  12. Building Chaotic Model From Incomplete Time Series

    Science.gov (United States)

    Siek, Michael; Solomatine, Dimitri

    2010-05-01

This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from observed time series and the prediction is made by a global model or adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, the building of any data-driven models depends on the completeness and quality of the data itself. However, the completeness of the data availability cannot always be guaranteed, since the measurement or data transmission may intermittently fail for various reasons. We propose two main solutions dealing with incomplete time series: using imputing and non-imputing methods. For imputing methods, we utilized interpolation methods (weighted sum of linear interpolations, Bayesian principal component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space from observed time series with missing values. This reconstruction results in non-continuous trajectories. However, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance to the Port of Rotterdam. The hourly surge time series is available for the period 1990-1996. For measuring the performance of the proposed methods, a synthetic time series with missing values generated by applying a particular random variable to the original (complete) time series is utilized. There exist two main performance measures used in this work: (1) error measures between the actual...
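One concrete illustration of the imputing route described above (a sketch, not the authors' implementation): fill gaps by linear interpolation, then reconstruct a time-delayed phase space from the completed series. The embedding dimension and delay below are placeholder values.

```python
import numpy as np

def impute_linear(y):
    """Fill NaN gaps by linear interpolation between the nearest observed values."""
    y = np.asarray(y, dtype=float).copy()
    idx = np.arange(len(y))
    missing = np.isnan(y)
    y[missing] = np.interp(idx[missing], idx[~missing], y[~missing])
    return y

def delay_embed(y, dim=3, tau=2):
    """Time-delayed phase-space reconstruction: rows are [y(t), y(t+tau), ...]."""
    n = len(y) - (dim - 1) * tau
    return np.column_stack([y[i * tau : i * tau + n] for i in range(dim)])
```

After embedding, a local model would predict from the nearest dynamical neighbours of the current state vector, as outlined in the abstract.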

  13. Multivariate Time Series Search

    Data.gov (United States)

    National Aeronautics and Space Administration — Multivariate Time-Series (MTS) are ubiquitous, and are generated in areas as disparate as sensor recordings in aerospace systems, music and video streams, medical...

  14. Analysing Stable Time Series

    National Research Council Canada - National Science Library

    Adler, Robert

    1997-01-01

    We describe how to take a stable, ARMA, time series through the various stages of model identification, parameter estimation, and diagnostic checking, and accompany the discussion with a goodly number...

  15. Neural Network Models for Time Series Forecasts

    OpenAIRE

    Tim Hill; Marcus O'Connor; William Remus

    1996-01-01

    Neural networks have been advocated as an alternative to traditional statistical forecasting methods. In the present experiment, time series forecasts produced by neural networks are compared with forecasts from six statistical time series methods generated in a major forecasting competition (Makridakis et al. [Makridakis, S., A. Anderson, R. Carbone, R. Fildes, M. Hibon, R. Lewandowski, J. Newton, E. Parzen, R. Winkler. 1982. The accuracy of extrapolation (time series) methods: Results of a ...

  16. Influence of turbulence on bed load sediment transport

    DEFF Research Database (Denmark)

    Sumer, B. Mutlu; Chua, L.; Cheng, N. S.

    2003-01-01

This paper summarizes the results of an experimental study on the influence of an external turbulence field on bedload sediment transport in an open channel. The external turbulence was generated by (1) a horizontal pipe placed halfway through the depth, h, or (2) a series of grids......-bed experiments and the ripple-covered-bed experiments. In the former case, the flow in the presence of the turbulence generator was adjusted so that the mean bed shear stress was the same as in the case without the turbulence generator, in order to single out the effect of the external turbulence on the sediment...... correlated with the sediment transport rate. The sediment transport increases markedly with increasing turbulence level.

  17. Time Series Observations in the North Indian Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shenoy, D.M.; Naik, H.; Kurian, S.; Naqvi, S.W.A.; Khare, N.

    Ocean and the ongoing time series study (Candolim Time Series; CaTS) off Goa. In addition, this article also focuses on the new time series initiative in the Arabian Sea and the Bay of Bengal under Sustained Indian Ocean Biogeochemistry and Ecosystem...

  18. Coastal Microstructure: From Active Overturn to Fossil Turbulence

    Science.gov (United States)

    Tau Leung, Pak

    2011-11-01

The Remote Anthropogenic Sensing Program was a five-year effort (2001-2005) to examine subsurface phenomena related to a sewage outfall off the coast of Oahu, Hawaii. This research has implications for basic ocean hydrodynamics, particularly for a greatly improved understanding of the evolution of turbulent patches. It was the first time microstructure measurements were used to study such buoyancy-driven turbulence generated by a sea-floor diffuser. In 2004, two stations were selected to represent the near field and ambient conditions. They have nearly identical bathymetric and hydrographic features and provide an ideal environment for a control experiment. Repeated vertical microstructure measurements were performed at both stations for 20 days. A time series of physical parameters was collected and used for statistical analysis. After comparing the data from both stations, it can be concluded that the turbulent mixing generated by the diffuser contributes to the elevated dissipation rate observed in the pycnocline and bottom boundary layer. To further understand the mixing processes in both regions, data were plotted on a Hydrodynamic Phase Diagram. The overturning stages of the turbulent patches are identified by the Hydrodynamic Phase Diagram. This technique provides detailed information on the evolution of the turbulent patches from active overturns to fossilized scalar microstructures in the water column. Results from this study offer new evidence to support the fossil turbulence theory. This study concluded that: 1. Field data collected near a sea-floor outfall diffuser show that turbulent patches evolve from active (overturning) to fossil (buoyancy-inhibited) stages, consistent with the process of turbulent patch evolution proposed by fossil turbulence theory. 2. The data show that active (overturning) and fossil (buoyancy-inhibited) patches have smaller length scales than the active+fossil (intermediate) stage of patch evolution, consistent with fossil

  19. Geometric noise reduction for multivariate time series.

    Science.gov (United States)

    Mera, M Eugenia; Morán, Manuel

    2006-03-01

We propose an algorithm for the reduction of observational noise in chaotic multivariate time series. The algorithm is based on a maximum likelihood criterion, and its goal is to reduce the mean distance of the points of the cleaned time series to the attractor. We give evidence of the convergence of the empirical measure associated with the cleaned time series to the underlying invariant measure, implying the possibility of predicting the long-run behavior of the true dynamics.

  20. BRITS: Bidirectional Recurrent Imputation for Time Series

    OpenAIRE

    Cao, Wei; Wang, Dong; Li, Jian; Zhou, Hao; Li, Lei; Li, Yitan

    2018-01-01

Time series are widely used as signals in many classification/regression tasks. It is ubiquitous that time series contain many missing values. Given multiple correlated time series data, how can we fill in missing values and predict their class labels? Existing imputation methods often impose strong assumptions about the underlying data-generating process, such as linear dynamics in the state space. In this paper, we propose BRITS, a novel method based on recurrent neural networks for missing va...

  1. Turbulence modulation induced by interaction between a bubble swarm and decaying turbulence in oscillating-grid turbulence

    International Nuclear Information System (INIS)

    Imaizumi, Ryota; Morikawa, Koichi; Higuchi, Masamori; Saito, Takayuki

    2009-01-01

In this study, the interaction between a bubble swarm and homogeneous isotropic turbulence was experimentally investigated. The objective is to clarify the turbulence modulation induced by the interaction between the bubble swarm and homogeneous isotropic turbulence without mean flow. In order to simultaneously generate ideally homogeneous isotropic turbulence and a sufficiently controlled bubble swarm, we employed both an oscillating grid and bubble generators equipped with audio speakers. First, the homogeneous isotropic turbulence was formed by operating the oscillating grid in a cylindrical acrylic pipe (height: 600 mm, inner diameter: 149 mm) filled with ion-exchanged and degassed water. Second, we stopped the oscillating grid at an arbitrary time after the homogeneous isotropic turbulence was achieved. A few moments later, the controlled bubble swarm (number of bubbles: 3, average equivalent bubble diameter: 3 mm, bubble Reynolds number: 859, Weber number: 3.48) was launched into the decaying turbulence described above, using the bubble generators. The bubble formation, bubble size and bubble-launch timing are controlled arbitrarily and precisely by this device. In this study, we conducted the following experiments: 1) measurement of the motion of bubbles in still water and in oscillating-grid turbulence via high-speed visualization, 2) measurement of the liquid-phase motion around the bubbles in still water via a PIV system with the LIF method, 3) measurement of the liquid-phase motion around the bubbles in oscillating-grid turbulence via a PIV system with the LIF method. In the visualization of the liquid-phase motion in both experiments, two high-speed video cameras were employed in order to simultaneously film large- and small-scale interrogation areas. The liquid-phase ambient turbulence hastened the change of the bubble motion from zigzag mode to spiral mode. The interaction between the bubble swarm and the liquid-phase turbulence increased the decay rate of the turbulence. (author)

  2. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
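The dynamic-programming scheme outlined above has the classic shape of an optimal partition into k contiguous segments. The sketch below shows that skeleton with a stand-in sum-of-squared-deviations cost; the paper's measure functions and segment-difference values would replace sse_cost, which is a placeholder.

```python
import numpy as np

def optimal_segmentation(cost_fn, n, k):
    """Split points 0..n-1 into k contiguous segments minimizing the summed
    per-segment cost, via the standard O(k * n^2) dynamic program."""
    INF = float("inf")
    # dp[j][i] = best cost of covering the first i points with j segments
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for j in range(1, k + 1):
        for i in range(j, n + 1):
            for s in range(j - 1, i):
                c = dp[j - 1][s] + cost_fn(s, i)
                if c < dp[j][i]:
                    dp[j][i], back[j][i] = c, s
    # Recover segment boundaries by backtracking.
    bounds, i = [], n
    for j in range(k, 0, -1):
        bounds.append((back[j][i], i))
        i = back[j][i]
    return dp[k][n], bounds[::-1]

def sse_cost(x):
    """Within-segment sum of squared deviations, a common placeholder cost."""
    def cost(s, i):
        seg = x[s:i]
        return float(np.sum((seg - seg.mean()) ** 2))
    return cost
```

Any measure function that scores a candidate segment [s, i) can be dropped in for the cost, which is what makes the DP skeleton reusable across the item-set measures described in the abstract.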

  3. Studies on time series applications in environmental sciences

    CERN Document Server

    Bărbulescu, Alina

    2016-01-01

Time series analysis and modelling represent a large field of study, approached from the perspectives of both time and frequency, with applications in different domains. Modelling hydro-meteorological time series is difficult due to the characteristics of these series, such as long-range dependence, spatial dependence, and correlation with other series. Continuous spatial data play an important role in planning, risk assessment and decision making in environmental management. In this context, in this book we present various statistical tests and modelling techniques used for time series analysis, as well as applications to hydro-meteorological series from Dobrogea, a region situated in the south-eastern part of Romania, less studied till now. Part of the results are accompanied by their R code.

  4. Global Population Density Grid Time Series Estimates

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Population Density Grid Time Series Estimates provide a back-cast time series of population density grids based on the year 2000 population grid from SEDAC's...

  5. Prediction and Geometry of Chaotic Time Series

    National Research Council Canada - National Science Library

    Leonardi, Mary

    1997-01-01

    This thesis examines the topic of chaotic time series. An overview of chaos, dynamical systems, and traditional approaches to time series analysis is provided, followed by an examination of state space reconstruction...

  6. Time series analysis of continuous-wave coherent Doppler Lidar wind measurements

    International Nuclear Information System (INIS)

    Sjoeholm, M; Mikkelsen, T; Mann, J; Enevoldsen, K; Courtney, M

    2008-01-01

    The influence of spatial volume averaging of a focused 1.55 μm continuous-wave coherent Doppler Lidar on observed wind turbulence measured in the atmospheric surface layer over homogeneous terrain is described and analysed. Comparison of Lidar-measured turbulent spectra with spectra simultaneously obtained from a mast-mounted sonic anemometer at 78 meters height at the test station for large wind turbines at Hoevsoere in Western Jutland, Denmark is presented for the first time

  7. Fractal-Markovian scaling of turbulent bursting process in open channel flow

    International Nuclear Information System (INIS)

    Keshavarzi, Ali Reza; Ziaei, Ali Naghi; Homayoun, Emdad; Shirvani, Amin

    2005-01-01

The turbulent coherent structure of flow in an open channel is a chaotic and stochastic process in nature. The coherent structure of the flow, or bursting process, consists of a series of eddies with a variety of different length scales, and it is very important for the entrainment of sediment particles from the bed. In this study, a fractal-Markovian process is applied to turbulent data measured in an open channel. The turbulent data were measured in an experimental flume using a three-dimensional acoustic Doppler velocimeter (ADV). A fractal interpolation function (FIF) algorithm was used to simulate more than 500,000 time series data points of measured instantaneous velocity fluctuations and Reynolds shear stress. The fractal interpolation functions (FIF) enable the simulation and construction of time series of u', v', and u'v' for any particular movement and state in the Markov process. The fractal dimension of the bursting events is calculated for 16 particular movements with the transition probability of the events based on a first-order Markov process. It was found that the average fractal dimensions of the streamwise flow velocity (u') are 1.73, 1.74, 1.71 and 1.74, with transition probabilities of 60.82%, 63.77%, 59.23% and 62.09% for the 1-1, 2-2, 3-3 and 4-4 movements, respectively. It was also found that the fractal dimensions of the Reynolds stress u'v' for quadrants 1, 2, 3 and 4 are 1.623, 1.623, 1.625 and 1.618, respectively.
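The first-order Markov description of bursting events can be illustrated by classifying each (u', v') sample into a quadrant event and estimating the transition probability matrix. The quadrant convention and the synthetic data below are illustrative, not the study's measurements.

```python
import numpy as np

def quadrant_events(u, v):
    """Quadrant event of each (u', v') sample: 1 = outward interaction, 2 = ejection,
    3 = inward interaction, 4 = sweep (samples on the axes are assigned arbitrarily)."""
    return np.select(
        [(u > 0) & (v > 0), (u <= 0) & (v > 0), (u <= 0) & (v <= 0)],
        [1, 2, 3],
        default=4,  # u' > 0, v' <= 0
    )

def transition_matrix(q, n_states=4):
    """First-order Markov estimate: P[i, j] = P(next event = j+1 | current event = i+1)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(q[:-1], q[1:]):
        counts[a - 1, b - 1] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```

The diagonal entries of the estimated matrix correspond to the self-transition probabilities (the 1-1, 2-2, 3-3, 4-4 movements) reported in the abstract.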

  8. Sensor-Generated Time Series Events: A Definition Language

    Science.gov (United States)

    Anguera, Aurea; Lara, Juan A.; Lizcano, David; Martínez, Maria Aurora; Pazos, Juan

    2012-01-01

    There are now a great many domains where information is recorded by sensors over a limited time period or on a permanent basis. This data flow leads to sequences of data known as time series. In many domains, like seismography or medicine, time series analysis focuses on particular regions of interest, known as events, whereas the remainder of the time series contains hardly any useful information. In these domains, there is a need for mechanisms to identify and locate such events. In this paper, we propose an event definition language that is general enough to be used to easily and naturally define events in time series recorded by sensors in any domain. The proposed language has been applied to the definition of time series events generated within the branch of medicine dealing with balance-related functions in human beings. A device called a posturograph is used to study balance-related functions. The platform has four sensors that record the pressure intensity being exerted on the platform, generating four interrelated time series. As opposed to the existing ad hoc proposals, the results confirm that the proposed language is valid, that is, generally applicable and accurate, for identifying the events contained in the time series.

  9. Reynolds number dependency in equilibrium two-dimensional turbulence

    Science.gov (United States)

    Bracco, A.; McWilliams, J.

    2009-04-01

    We use the Navier-Stokes equations for barotropic turbulence as a zero-order approximation of chaotic space-time patterns and equilibrium distributions that mimic turbulence in geophysical flows. In this overly simplified set-up, for which smooth solutions exist, we investigate whether it is possible to bound the uncertainty associated with the numerical domain discretization, i.e. with the limitation imposed by the Reynolds number range we can explore. To do so, we analyze a series of stationary barotropic turbulence simulations spanning a large range of Reynolds numbers and run over a three-year period for over 300,000 CPU hours. We find a persistent Reynolds number dependency in the energy power spectra and the second-order vorticity structure function, while distributions of dynamical quantities such as velocity, vorticity, dissipation rates and others are invariant in shape and have variances scaling with the viscosity coefficient according to simple power laws. The relevance of this work to the possibility of conceptually reducing uncertainties in climate models is discussed.

  10. Correlation and multifractality in climatological time series

    International Nuclear Information System (INIS)

    Pedron, I T

    2010-01-01

    Climate can be described by statistical analysis of the mean values of atmospheric variables over a period. It is possible to detect correlations in climatological time series and to classify their behavior. In this work the Hurst exponent, which can characterize correlation and persistence in time series, is obtained using the Detrended Fluctuation Analysis (DFA) method. Data series of temperature, precipitation, humidity, solar radiation, wind speed, maximum squall, atmospheric pressure and random series are studied. Furthermore, the multifractality of these series is analyzed by applying the Multifractal Detrended Fluctuation Analysis (MF-DFA) method. The results indicate the presence of correlation (persistent character) in all climatological series, as well as multifractality. A larger and longer set of data could provide better results, indicating the universality of the exponents.
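
    As an illustration of the DFA method cited above, here is a compact sketch (not the paper's implementation; the synthetic white-noise input and the scale range are assumptions). For uncorrelated noise the DFA exponent should come out near 0.5, and near 1.5 for its integral.

```python
import numpy as np

def dfa_exponent(x, scales):
    # Detrended Fluctuation Analysis: integrate the series into a
    # 'profile', split the profile into non-overlapping windows of size s,
    # remove a linear trend inside each window, and regress the log of
    # the fluctuation F(s) on log s.  The slope is the DFA exponent
    # (approximately the Hurst exponent for stationary signals).
    y = np.cumsum(x - np.mean(x))
    fluct = []
    for s in scales:
        n_win = len(y) // s
        t = np.arange(s)
        f2 = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(1)
white = rng.standard_normal(2 ** 14)
scales = [16, 32, 64, 128, 256, 512]
alpha_white = dfa_exponent(white, scales)             # uncorrelated: ~0.5
alpha_brown = dfa_exponent(np.cumsum(white), scales)  # integrated: ~1.5
```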

  11. Time Series Forecasting with Missing Values

    Directory of Open Access Journals (Sweden)

    Shin-Fu Wu

    2015-11-01

    Full Text Available Time series prediction has become more popular in various kinds of applications such as weather prediction, control engineering, financial analysis, industrial monitoring, etc. To deal with real-world problems, we are often faced with missing values in the data due to sensor malfunctions or human errors. Traditionally, the missing values are simply omitted or replaced by means of imputation methods. However, omitting those missing values may cause temporal discontinuity, while imputation methods may alter the original time series. In this study, we propose a novel forecasting method based on the least squares support vector machine (LSSVM). We employ input patterns with temporal information, defined as the local time index (LTI). Time series data as well as local time indexes are fed to the LSSVM for forecasting without imputation. We compare the forecasting performance of our method with that of other imputation methods. Experimental results show that the proposed method is promising and worth further investigation.
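
    A rough sketch of the idea of forecasting from lagged patterns plus a local-time feature while simply skipping (not imputing) patterns that touch missing values. Kernel ridge regression is used here as a stand-in for the paper's LSSVM (both solve a regularised least-squares dual system); the lag count, kernel width, and the normalised-time LTI feature are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def make_patterns(x, lags):
    # Lagged-value patterns plus a normalised local-time feature; any
    # pattern touching a missing (NaN) sample is skipped, not imputed.
    X, y = [], []
    for t in range(lags, len(x)):
        if np.isnan(x[t - lags:t + 1]).any():
            continue
        X.append(np.concatenate([x[t - lags:t], [t / len(x)]]))
        y.append(x[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(2)
t = np.arange(600)
x = np.sin(2.0 * np.pi * t / 50.0) + 0.1 * rng.standard_normal(600)
x[rng.choice(600, size=30, replace=False)] = np.nan   # 5% missing values

X, y = make_patterns(x, lags=4)
ntr = 400                                  # chronological train/test split
K = rbf_kernel(X[:ntr], X[:ntr])
coef = np.linalg.solve(K + 1e-2 * np.eye(ntr), y[:ntr])  # regularised dual solve
pred = rbf_kernel(X[ntr:], X[:ntr]) @ coef
rmse = np.sqrt(np.mean((pred - y[ntr:]) ** 2))
```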

  12. Reconstruction of ensembles of coupled time-delay systems from time series.

    Science.gov (United States)

    Sysoev, I V; Prokhorov, M D; Ponomarenko, V I; Bezruchko, B P

    2014-06-01

    We propose a method to recover from time series the parameters of coupled time-delay systems and the architecture of couplings between them. The method is based on a reconstruction of model delay-differential equations and estimation of statistical significance of couplings. It can be applied to networks composed of nonidentical nodes with an arbitrary number of unidirectional and bidirectional couplings. We test our method on chaotic and periodic time series produced by model equations of ensembles of diffusively coupled time-delay systems in the presence of noise, and apply it to experimental time series obtained from electronic oscillators with delayed feedback coupled by resistors.

  13. Application of simulated lidar scanning patterns to constrained Gaussian turbulence fields for load validation

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Natarajan, Anand

    2017-01-01

    We demonstrate a method for incorporating wind velocity measurements from multiple-point scanning lidars into threedimensional wind turbulence time series serving as input to wind turbine load simulations. Simulated lidar scanning patterns are implemented by imposing constraints on randomly gener...

  14. The analysis of time series: an introduction

    National Research Council Canada - National Science Library

    Chatfield, Christopher

    1989-01-01

    ... A variety of practical examples are given to support the theory. The book covers a wide range of time-series topics, including probability models for time series, Box-Jenkins forecasting, spectral analysis, linear systems and system identification...

  15. Time behaviours of visible lines in turbulently heated TRIAM-1 plasma

    Energy Technology Data Exchange (ETDEWEB)

    Hiraki, N; Nakamura, K; Nakamura, Y; Itoh, S [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics

    1981-08-01

    Spectroscopic studies were carried out on turbulently heated TRIAM-1 tokamak plasma. The temporal evolutions of the line radiance of visible lines were measured, and two types of time behaviour of the line radiance were identified. A remarkable reduction was observed, upon application of a pulsed electric field for turbulent heating, in the line radiance of visible lines that have low ionization potentials and are localized in the skin layer; it is attributed to strong plasma heating in the peripheral region. Spatial profiles of the neutrals and ions related to these lines are calculated, and the temporal variations of these profiles caused by the application of the heating pulse are discussed.

  16. Time series modeling in traffic safety research.

    Science.gov (United States)

    Lavrenz, Steven M; Vlahogianni, Eleni I; Gkritza, Konstantina; Ke, Yue

    2018-08-01

    The use of statistical models for analyzing traffic safety (crash) data has been well-established. However, time series techniques have traditionally been underrepresented in the corresponding literature, due to challenges in data collection, along with a limited knowledge of proper methodology. In recent years, new types of high-resolution traffic safety data, especially in measuring driver behavior, have made time series modeling techniques an increasingly salient topic of study. Yet there remains a dearth of information to guide analysts in their use. This paper provides an overview of the state of the art in using time series models in traffic safety research, and discusses some of the fundamental techniques and considerations in classic time series modeling. It also presents ongoing and future opportunities for expanding the use of time series models, and explores newer modeling techniques, including computational intelligence models, which hold promise in effectively handling ever-larger data sets. The information contained herein is meant to guide safety researchers in understanding this broad area of transportation data analysis, and provide a framework for understanding safety trends that can influence policy-making. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Applicability of Taylor's hypothesis in thermally driven turbulence

    Science.gov (United States)

    Kumar, Abhishek; Verma, Mahendra K.

    2018-04-01

    In this paper, we show that, in the presence of large-scale circulation (LSC), Taylor's hypothesis can be invoked to deduce the energy spectrum in thermal convection using real-space probes, a popular experimental tool. We perform numerical simulation of turbulent convection in a cube and observe that the velocity field follows Kolmogorov's spectrum (k^(-5/3)). We also record the velocity time series using real-space probes near the lateral walls. The corresponding frequency spectrum exhibits Kolmogorov's spectrum (f^(-5/3)), thus validating Taylor's hypothesis with the steady LSC playing the role of a mean velocity field. The aforementioned findings based on real-space probes provide valuable inputs for experimental measurements used for studying the spectrum of convective turbulence.
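
    The f^(-5/3) frequency-spectrum check described above can be illustrated on a synthetic signal with a prescribed Kolmogorov-like spectrum (an assumption; this is not the paper's simulation data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2 ** 16
freqs = np.fft.rfftfreq(n, d=1.0)

# Synthesise a probe-like signal whose power spectrum falls as f^(-5/3):
# amplitude = sqrt(power) = f^(-5/6), with random phases.
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-5.0 / 6.0)
phases = np.exp(2j * np.pi * rng.random(len(freqs)))
u = np.fft.irfft(amp * phases, n)

# Estimate the frequency spectrum from the time series and fit its
# log-log slope over an 'inertial range' surrogate band.
psd = np.abs(np.fft.rfft(u)) ** 2
band = (freqs > 1e-3) & (freqs < 1e-1)
slope, _ = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)
```

    The fitted slope recovers the prescribed -5/3 exponent, which is the same consistency check applied to the probe time series in the paper.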

  18. Time series prediction: statistical and neural techniques

    Science.gov (United States)

    Zahirniak, Daniel R.; DeSimio, Martin P.

    1996-03-01

    In this paper we compare the performance of nonlinear neural network techniques to that of linear filtering techniques in the prediction of time series. Specifically, we compare the results of using the nonlinear systems, known as multilayer perceptron and radial basis function neural networks, with the results obtained using the conventional linear Wiener filter, Kalman filter and Widrow-Hoff adaptive filter in predicting future values of stationary and non-stationary time series. Our results indicate that the performance of each type of system is heavily dependent upon the form of the time series being predicted and the size of the system used. In particular, the linear filters perform adequately for linear or near-linear processes, while the nonlinear systems perform better for nonlinear processes. Since the linear systems take much less time to develop, they should be tried prior to the nonlinear systems when the linearity properties of the time series process are unknown.
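
    A minimal sketch of the Widrow-Hoff (LMS) adaptive filter mentioned above, predicting one step ahead on a synthetic narrowband signal (the filter order, step size, and test signal are illustrative assumptions, not the paper's experimental set-up):

```python
import numpy as np

def lms_predict(x, order=4, mu=0.05):
    # Widrow-Hoff (LMS) adaptive linear predictor: the weight vector is
    # nudged against the instantaneous squared error at every step, so
    # the filter can track slowly varying signal statistics.
    w = np.zeros(order)
    preds = np.zeros(len(x))
    for t in range(order, len(x)):
        u = x[t - order:t][::-1]      # most recent sample first
        preds[t] = w @ u
        err = x[t] - preds[t]
        w += mu * err * u             # Widrow-Hoff update
    return preds

t = np.arange(4000)
x = np.sin(2.0 * np.pi * t / 40.0)    # predictable narrowband signal
p = lms_predict(x)
rmse_tail = np.sqrt(np.mean((p[2000:] - x[2000:]) ** 2))
```

    After the adaptation transient, the prediction error on this near-linear process is small, consistent with the paper's observation that linear filters suffice for such signals.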

  19. Effectiveness of Multivariate Time Series Classification Using Shapelets

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2015-01-01

    Full Text Available Typically, time series classifiers require signal pre-processing (filtering signals from noise, artifact removal, etc.), enhancement of signal features (amplitude, frequency, spectrum, etc.), and classification of signal features in space using classical techniques and classification algorithms for multivariate data. We consider a method of classifying time series which does not require enhancement of the signal features. The method uses shapelets of time series (time series shapelets), i.e. small fragments of the series that best reflect the properties of one of its classes. Despite the significant number of publications on the theory and applications of shapelets for the classification of time series, the task of evaluating the effectiveness of this technique remains relevant. The objective of this publication is to study the effectiveness of a number of modifications of the original shapelet method as applied to multivariate series classification, which is a little-studied problem. The paper presents the problem statement of multivariate time series classification using shapelets and describes the shapelet-based basic method of binary classification, as well as various generalizations and the proposed modification of the method. It also offers software that implements the modified method, and results of computational experiments confirming the effectiveness of the algorithmic and software solutions. The paper shows that the modified method and its software allow us to reach a classification accuracy of about 85% at best. The shapelet search time increases in proportion to the input data dimension.

  20. Constrained dynamics of an inertial particle in a turbulent flow

    International Nuclear Information System (INIS)

    Obligado, M; Baudet, C; Gagne, Y; Bourgoin, M

    2011-01-01

    Most theoretical and numerical work on freely advected particles in a turbulent flow considers only the drag force acting on the particles and fails to predict recent experimental results for the transport of finite-size particles. These questions have motivated a series of experiments that try to emphasize the actual role of the drag force by imposing it as an unambiguous leading forcing term acting on a particle in a turbulent background. This is achieved by considering the constrained dynamics of towed particles in a turbulent environment. In the present work, we focus on the influence of particle inertia on the particle's velocity and acceleration Lagrangian statistics and energy spectral density. Our results are consistent with a filtering scenario resulting from the viscous response time of an inertial particle whose dynamics is coupled to the surrounding fluid via a strong contribution of drag.

  1. Time-series-analysis techniques applied to nuclear-material accounting

    International Nuclear Information System (INIS)

    Pike, D.H.; Morrison, G.W.; Downing, D.J.

    1982-05-01

    This document is designed to introduce the reader to the applications of Time Series Analysis techniques to Nuclear Material Accountability data. Time series analysis techniques are designed to extract information from a collection of random variables ordered by time by seeking to identify any trends, patterns, or other structure in the series. Since nuclear material accountability data is a time series, one can extract more information using time series analysis techniques than by using other statistical techniques. Specifically, the objective of this document is to examine the applicability of time series analysis techniques to enhance loss detection of special nuclear materials. An introductory section examines the current industry approach which utilizes inventory differences. The error structure of inventory differences is presented. Time series analysis techniques discussed include the Shewhart Control Chart, the Cumulative Summation of Inventory Differences Statistics (CUSUM) and the Kalman Filter and Linear Smoother
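
    The two-sided tabular CUSUM mentioned above can be sketched as follows (illustrative only: the inventory differences are synthetic and assumed already standardised; k and h are conventional textbook values, not taken from this report):

```python
import numpy as np

def tabular_cusum(z, k=0.5, h=5.0):
    # Two-sided tabular CUSUM on standardised inventory differences (IDs).
    # k is the reference value (allowance) and h the decision interval,
    # both in units of the in-control ID standard deviation.
    c_plus = c_minus = 0.0
    alarms = []
    for i, zi in enumerate(z):
        c_plus = max(0.0, c_plus + zi - k)
        c_minus = max(0.0, c_minus - zi - k)
        if c_plus > h or c_minus > h:
            alarms.append(i)
    return alarms

rng = np.random.default_rng(4)
ids = rng.standard_normal(200)   # in-control, already standardised
ids[120:] += 1.0                 # sustained one-sigma loss from period 120
alarms = tabular_cusum(ids)
```

    Unlike a Shewhart chart, the CUSUM accumulates small shifts, so the sustained one-sigma loss is flagged within a few periods of its onset.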

  2. Time Resolved Scanning PIV measurements at fine scales in a turbulent jet

    International Nuclear Information System (INIS)

    Cheng, Y.; Torregrosa, M.M.; Villegas, A.; Diez, F.J.

    2011-01-01

    The temporal and spatial complexity of turbulent flows at intermediate and small scales has prevented the acquisition of full three-dimensional experimental data sets for validating classical turbulence theory and Direct Numerical Simulations (DNS). Experimental techniques like Particle Image Velocimetry (PIV) allow non-intrusive planar measurements of turbulent flows. The present work applied a Time Resolved Scanning PIV system (TRS-PIV), capable of obtaining three-dimensional two-component velocities, to measure the small scales of a turbulent jet. When probing the small scales of these flows with PIV, the uncertainty of the measured turbulent properties is determined by the characteristics of the PIV system, especially the thickness of the laser sheet. A measurement of the particle distribution across the thickness of the laser sheet is proposed as a more detailed description of the PIV sheet thickness. The high temporal and spatial resolution of the TRS-PIV system allowed obtaining quasi-instantaneous volumetric vector fields in the far field of a round turbulent jet in water, albeit at a low Reynolds number of 1478 due to the speed limitations of the present camera and scanning system. Six of the nine components of the velocity gradient tensor were calculated from the velocity measurements. This allowed the visualization, with near Kolmogorov-scale resolution, of the velocity gradient structures in three-dimensional space. In general, these structures had a complex geometry corresponding to elongated shapes in the form of sheets and tubes. An analysis of the probability density function (pdf) of the calculated velocity gradients showed that the on-diagonal (off-diagonal) velocity gradient components were very similar to each other, even for events at the tails of the pdfs, as required for homogeneous isotropy. The root mean square of the components of the velocity gradients is also calculated and their ratio of off-diagonal components to on-diagonal components

  3. Clinical and epidemiological rounds. Time series

    Directory of Open Access Journals (Sweden)

    León-Álvarez, Alba Luz

    2016-07-01

    Full Text Available Analysis of time series is a technique that involves the study of individuals or groups observed at successive moments in time. This type of analysis allows the study of potential causal relationships between different variables that change over time and relate to each other. It is the most important technique for making inferences about the future, predicting on the basis of what has happened in the past, and it is applied in different disciplines of knowledge. Here we discuss the different components of time series, the analysis technique and specific examples in health research.

  4. Eulerian short-time statistics of turbulent flow at large Reynolds number

    NARCIS (Netherlands)

    Brouwers, J.J.H.

    2004-01-01

    An asymptotic analysis is presented of the short-time behavior of second-order temporal velocity structure functions and Eulerian acceleration correlations in a frame that moves with the local mean velocity of the turbulent flow field. Expressions in closed-form are derived which cover the viscous

  5. Integer-valued time series

    NARCIS (Netherlands)

    van den Akker, R.

    2007-01-01

    This thesis addresses statistical problems in econometrics. The first part contributes statistical methodology for nonnegative integer-valued time series. The second part of this thesis discusses semiparametric estimation in copula models and develops semiparametric lower bounds for a large class of

  6. Robust Forecasting of Non-Stationary Time Series

    NARCIS (Netherlands)

    Croux, C.; Fried, R.; Gijbels, I.; Mahieu, K.

    2010-01-01

    This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression, and fitted by a localized MM-estimator, combining high robustness and large efficiency. The proposed method is shown to produce reliable

  7. Study of the fractal dimension of the wind and its relationships with turbulent and stability parameters

    Science.gov (United States)

    Tijera, Manuel; Maqueda, Gregorio; Cano, José L.; López, Pilar; Yagüe, Carlos

    2010-05-01

    The wind velocity series of the atmospheric turbulent flow in the planetary boundary layer (PBL), in spite of being highly erratic, present a self-similar structure (Frisch, 1995; Peitgen et al., 2004; Falkovich et al., 2006). So, the wind velocity can be seen as a fractal magnitude. We calculate the fractal dimension (Kolmogorov capacity or box-counting dimension) of the wind perturbation series (u' = u − ⟨u⟩) in physical space (namely velocity-time). The time evolution of the fractal dimension has been studied over different days and at three levels above the ground (5.8 m, 13.5 m, 32 m). The data analysed were recorded in the experimental campaign SABLES-98 (Cuxart et al., 2000) at the Research Centre for the Lower Atmosphere (CIBA) located in Valladolid (Spain). In this work the u, v and w components of the wind velocity series have been measured by sonic anemometers (20 Hz sampling rate). The fractal dimension versus the integral length scales of the mean wind series has been studied, as well as the influence of different turbulent parameters. A method for estimating these integral scales is developed using the normalized autocorrelation function and a Gaussian fit. Finally, the variation of the fractal dimension with stability parameters (such as the Richardson number) is analysed in order to explain some of the dominant features which are likely immersed in the fractal nature of these turbulent flows. References - Cuxart J, Yagüe C, Morales G, Terradellas E, Orbe J, Calvo J, Fernández A, Soler MR, Infante C, Buenestado P, Espinalt A, Joergensen HE, Rees JM, Vilá J, Redondo JM, Cantalapiedra IR and Conangla L (2000) Stable atmospheric boundary-layer experiment in Spain (SABLES98): a report. Boundary-Layer Meteorol 96:337-370 - Falkovich G and Sreenivasan KR (2006) Lessons from Hydrodynamic Turbulence. Physics Today 59: 43-49 - Frisch U (1995) Turbulence: The Legacy of A.N. Kolmogorov. Cambridge University Press 269pp - Peitgen H, Jürgens H and
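
    The integral-scale estimate via the normalised autocorrelation function and a Gaussian fit, as described above, can be sketched like this (the Gaussian-correlated synthetic series, the lag range, and the fitting threshold are assumptions, not SABLES-98 data):

```python
import numpy as np

def integral_scale(x, max_lag):
    # Normalised autocorrelation of the fluctuations, then a Gaussian
    # fit rho(tau) = exp(-tau^2 / (2 s^2)); the integral time scale is
    # the analytic integral of the fit over tau >= 0, i.e. s*sqrt(pi/2).
    x = x - x.mean()
    acf = np.array([np.mean(x[:len(x) - k] * x[k:]) for k in range(max_lag)])
    rho = acf / acf[0]
    taus = np.arange(max_lag)
    keep = rho > 0.05                  # use the reliable early part of the ACF
    # linear fit: -2*log(rho) = tau^2 / s^2 (plus an intercept)
    inv_s2 = np.polyfit(taus[keep] ** 2, -2.0 * np.log(rho[keep]), 1)[0]
    return np.sqrt(1.0 / inv_s2) * np.sqrt(np.pi / 2.0)

rng = np.random.default_rng(9)
# Gaussian-correlated surrogate: white noise smoothed by a Gaussian kernel
# of width 5 samples, so the true integral scale is 5*sqrt(pi) ~ 8.9
kernel = np.exp(-np.arange(-20, 21) ** 2 / (2.0 * 5.0 ** 2))
series = np.convolve(rng.standard_normal(50000), kernel, mode="same")
T = integral_scale(series, max_lag=30)
```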

  8. Progress in turbulence research

    International Nuclear Information System (INIS)

    Bradshaw, P.

    1990-01-01

    Recent developments in experiments and eddy simulations are reviewed as an introduction to a discussion of turbulence modeling for engineers. The most important advances of the last decade rely on computers: microcomputers to control laboratory experiments, especially for multidimensional imaging, and supercomputers to simulate turbulence. These basic studies in turbulence research are leading to genuine breakthroughs in prediction methods for engineers and earth scientists. The three main branches of turbulence research are discussed: experiments; simulations (numerically accurate three-dimensional, time-dependent solutions of the Navier-Stokes equations, with any empiricism confined to the smallest eddies); and modeling (empirical closure of time-averaged equations for turbulent flow). 33 refs

  9. Characterizing time series via complexity-entropy curves

    Science.gov (United States)

    Ribeiro, Haroldo V.; Jauregui, Max; Zunino, Luciano; Lenzi, Ervin K.

    2017-06-01

    The search for patterns in time series is a very common task when dealing with complex systems. This is usually accomplished by employing a complexity measure such as entropies and fractal dimensions. However, such measures usually only capture a single aspect of the system dynamics. Here, we propose a family of complexity measures for time series based on a generalization of the complexity-entropy causality plane. By replacing the Shannon entropy by a monoparametric entropy (Tsallis q entropy) and after considering the proper generalization of the statistical complexity (q complexity), we build up a parametric curve (the q -complexity-entropy curve) that is used for characterizing and classifying time series. Based on simple exact results and numerical simulations of stochastic processes, we show that these curves can distinguish among different long-range, short-range, and oscillating correlated behaviors. Also, we verify that simulated chaotic and stochastic time series can be distinguished based on whether these curves are open or closed. We further test this technique in experimental scenarios related to chaotic laser intensity, stock price, sunspot, and geomagnetic dynamics, confirming its usefulness. Finally, we prove that these curves enhance the automatic classification of time series with long-range correlations and interbeat intervals of healthy subjects and patients with heart disease.
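
    The Shannon (q → 1) endpoint of the complexity-entropy construction above rests on Bandt-Pompe ordinal-pattern probabilities. A minimal sketch (the embedding dimension and the test signals are illustrative assumptions) shows how permutation entropy separates noise from a chaotic map:

```python
import numpy as np
from itertools import permutations
from math import factorial

def permutation_entropy(x, d=3):
    # Bandt-Pompe ordinal-pattern probabilities of embedding dimension d,
    # then Shannon entropy normalised to [0, 1] by log(d!).
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        counts[tuple(np.argsort(x[i:i + d]))] += 1
    p = np.array(list(counts.values()), dtype=float)
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log(p)) / np.log(factorial(d)))

rng = np.random.default_rng(5)
noise = rng.standard_normal(5000)
logistic = np.empty(5000)
logistic[0] = 0.4
for i in range(1, 5000):
    logistic[i] = 4.0 * logistic[i - 1] * (1.0 - logistic[i - 1])

h_noise = permutation_entropy(noise)    # near 1: all patterns equally likely
h_map = permutation_entropy(logistic)   # lower: chaos forbids some patterns
```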

  10. Complex network approach to fractional time series

    Energy Technology Data Exchange (ETDEWEB)

    Manshour, Pouya [Physics Department, Persian Gulf University, Bushehr 75169 (Iran, Islamic Republic of)

    2015-10-15

    In order to extract the correlation information inherited in a stochastic time series, the visibility graph algorithm has recently been proposed, by which a time series can be mapped onto a complex network. We demonstrate that the visibility algorithm is not an appropriate one for studying the correlation aspects of a time series. We then employ the horizontal visibility algorithm, a much simpler one, to map fractional processes onto complex networks. The degree distributions are shown to have parabolic exponential forms with a Hurst-dependent fitting parameter. Further, we take into account other topological properties, such as the maximum eigenvalue of the adjacency matrix and the degree assortativity, and show that such topological quantities can also be used to predict the Hurst exponent, with an exception for anti-persistent fractional Gaussian noises. To solve this problem, we take into account the Spearman correlation coefficient between the nodes' degrees and their corresponding data values in the original time series.
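
    A compact sketch of the horizontal visibility algorithm discussed above (illustrative; for i.i.d. input the mean degree is known to approach 4):

```python
import numpy as np

def hvg_degrees(x):
    # Horizontal visibility graph: nodes i < j are linked iff every
    # sample strictly between them is lower than min(x[i], x[j]).
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        top = -np.inf                  # running max of samples between i and j
        for j in range(i + 1, n):
            if top < min(x[i], x[j]):
                deg[i] += 1
                deg[j] += 1
            top = max(top, x[j])
            if top >= x[i]:            # i is blocked from all later nodes
                break
    return deg

rng = np.random.default_rng(6)
deg = hvg_degrees(rng.random(3000))
mean_deg = deg.mean()                  # ~4 for an i.i.d. series
```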

  11. Experimental study of a DMD based compressive line sensing imaging system in the turbulence environment

    Science.gov (United States)

    Ouyang, Bing; Hou, Weilin; Gong, Cuiling; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.

    2016-05-01

    The Compressive Line Sensing (CLS) active imaging system has been demonstrated, through simulations and test-tank experiments, to be effective in scattering mediums such as turbid coastal water. Since turbulence is encountered in many atmospheric and underwater surveillance applications, a new CLS imaging prototype was developed to investigate the effectiveness of the CLS concept in a turbulence environment. Compared with the earlier optical bench-top prototype, the new system is significantly more robust and compact. A series of experiments were conducted at the Naval Research Lab's optical turbulence test facility with the imaging path subjected to various turbulence intensities. In addition to validating the system design, we obtained some unexpected and exciting results: in the strong turbulence environment, time-averaged measurements using the new CLS imaging prototype improved both the SNR and the resolution of the reconstructed images. We discuss the implications of the new findings, the challenges of acquiring data through a strong turbulence environment, and future enhancements.

  12. Introduction to time series analysis and forecasting

    CERN Document Server

    Montgomery, Douglas C; Kulahci, Murat

    2008-01-01

    An accessible introduction to the most current thinking in and practicality of forecasting techniques in the context of time-oriented data. Analyzing time-oriented data and forecasting are among the most important problems that analysts face across many fields, ranging from finance and economics to production operations and the natural sciences. As a result, there is a widespread need for large groups of people in a variety of fields to understand the basic concepts of time series analysis and forecasting. Introduction to Time Series Analysis and Forecasting presents the time series analysis branch of applied statistics as the underlying methodology for developing practical forecasts, and it also bridges the gap between theory and practice by equipping readers with the tools needed to analyze time-oriented data and construct useful, short- to medium-term, statistically based forecasts.

  13. Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance.

    Science.gov (United States)

    Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao

    2018-01-01

    Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. In biomedical engineering especially, outstanding clustering algorithms for time series may help improve people's health. Considering the data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on the Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online processing patterns, our algorithms can handle large-scale time series data by splitting the data into a set of chunks that are processed sequentially. Our algorithms also use DTW to measure the distance between pairs of time series, which encourages higher clustering accuracy because DTW can determine an optimal match between any two time series by stretching or compressing segments of the temporal data. The new algorithms are compared to existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy.
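
    The DTW distance at the core of the algorithms above can be sketched with the classic dynamic-programming recursion (an illustrative implementation, not the paper's code; no warping-window constraint is applied):

```python
import numpy as np

def dtw_distance(a, b):
    # Classic O(n*m) dynamic-programming DTW with unit steps: the
    # optimal alignment may stretch or compress segments in time.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0.0, 2.0 * np.pi, 100)
s1 = np.sin(t)
s2 = np.sin(t - 0.6)   # same shape, shifted in phase
s3 = np.cos(3.0 * t)   # genuinely different shape
```

    Because warping absorbs the phase shift, the DTW distance between s1 and s2 stays well below that between s1 and s3, which Euclidean distance would not guarantee.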

  15. The foundations of modern time series analysis

    CERN Document Server

    Mills, Terence C

    2011-01-01

    This book develops the analysis of Time Series from its formal beginnings in the 1890s through to the publication of Box and Jenkins' watershed publication in 1970, showing how these methods laid the foundations for the modern techniques of Time Series analysis that are in use today.

  16. Time series clustering in large data sets

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2011-01-01

    Full Text Available The clustering of time series is a widely researched area. There are many methods for dealing with this task. We are using the Self-Organizing Map (SOM) with an unsupervised learning algorithm for the clustering of time series. After the first experiment (Fejfar, Weinlichová, Šťastný, 2009) it seems that the whole concept of the clustering algorithm is correct, but that we have to perform time series clustering on a much larger dataset to obtain more accurate results and to find the correlation between the configured parameters and the results more precisely. A second requirement arose from the need for a well-defined evaluation of results. It seems useful to use sound recordings as instances of time series again. There are many recordings available in digital libraries, and many interesting features and patterns can be found in this area. In this experiment we are searching for recordings with a similar development of information density. This can be used for musical form investigation, cover song detection and many other applications. The objective of the presented paper is to compare clustering results obtained with different parameters of the feature vectors and of the SOM itself. We describe time series in a simplistic way, evaluating standard deviations for separate parts of recordings. The resulting feature vectors are clustered with the SOM in batch training mode, with different topologies varying from a few neurons to large maps. Other algorithms usable for finding similarities between time series are discussed, and finally conclusions for further research are presented. We also present an overview of the related current literature and projects.
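
    The simplistic feature extraction described above, standard deviations over consecutive parts of a recording, can be sketched as follows (the segment count and the synthetic 'recording' are illustrative assumptions):

```python
import numpy as np

def std_profile(signal, parts=8):
    # Describe a recording by the standard deviation of each of its
    # consecutive parts: the simplistic feature vector fed to the SOM.
    segments = np.array_split(np.asarray(signal, dtype=float), parts)
    return np.array([seg.std() for seg in segments])

rng = np.random.default_rng(8)
# a synthetic 'recording': quiet first half, loud second half
recording = np.concatenate([0.1 * rng.standard_normal(4000),
                            1.0 * rng.standard_normal(4000)])
fv = std_profile(recording)
```

    Recordings with a similar loudness (information-density) development map to nearby feature vectors and thus to nearby SOM neurons.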

  17. Transmission of linear regression patterns between time series: from relationship in time series to complex networks.

    Science.gov (United States)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can differ under different lengths of the observation period. If we study the whole period through a sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression pattern transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of transmission can be captured. The statistical results for weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of their distributions. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
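
The core idea can be sketched as follows: fit a regression in each sliding window, map the fitted slope to a coarse pattern label, and count transitions between consecutive patterns as weighted directed edges. The pattern definition here (slope sign only) is a deliberate simplification of the paper's parameter intervals plus significance testing:

```python
from collections import Counter

def window_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def pattern_transition_network(x, y, window):
    """Label each sliding window by its regression slope sign and
    count transitions between adjacent windows as edge weights."""
    patterns = []
    for i in range(len(x) - window + 1):
        b = window_slope(x[i:i + window], y[i:i + window])
        patterns.append('up' if b > 0 else 'down' if b < 0 else 'flat')
    edges = Counter(zip(patterns, patterns[1:]))  # weighted directed edges
    return patterns, dict(edges)
```

The resulting edge dictionary is the weighted adjacency list of the transmission network; out-degrees and centralities could then be computed on it.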

  18. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  19. Time-series prediction and applications a machine intelligence approach

    CERN Document Server

    Konar, Amit

    2017-01-01

    This book presents machine learning and type-2 fuzzy sets for the prediction of time-series with a particular focus on business forecasting applications. It also proposes new uncertainty management techniques in an economic time-series using type-2 fuzzy sets for prediction of the time-series at a given time point from its preceding value in fluctuating business environments. It employs machine learning to determine repetitively occurring similar structural patterns in the time-series and uses stochastic automaton to predict the most probabilistic structure at a given partition of the time-series. Such predictions help in determining probabilistic moves in a stock index time-series Primarily written for graduate students and researchers in computer science, the book is equally useful for researchers/professionals in business intelligence and stock index prediction. A background of undergraduate level mathematics is presumed, although not mandatory, for most of the sections. Exercises with tips are provided at...

  20. A Time Series Forecasting Method

    Directory of Open Access Journals (Sweden)

    Wang Zhao-Yu

    2017-01-01

    Full Text Available This paper proposes a novel time series forecasting method based on a weighted self-constructing clustering technique. The weighted self-constructing clustering processes all the data patterns incrementally. If a data pattern is not similar enough to an existing cluster, it forms a new cluster of its own. However, if a data pattern is similar enough to an existing cluster, it is removed from the cluster it currently belongs to and added to the most similar cluster. During the clustering process, weights are learned for each cluster. Given a series of time-stamped data up to time t, we divide it into a set of training patterns. By using the weighted self-constructing clustering, the training patterns are grouped into a set of clusters. To estimate the value at time t + 1, we find the k nearest neighbors of the input pattern and use these k neighbors to decide the estimation. Experimental results are shown to demonstrate the effectiveness of the proposed approach.
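
The nearest-neighbour estimation step can be sketched as below, with the weighted self-constructing clustering stage omitted: the series is cut into lag patterns, and the value at t + 1 is estimated from the successors of the k training patterns closest to the current one. Names and the plain averaging are illustrative simplifications:

```python
def knn_forecast(series, lag, k):
    """Estimate the next value of `series` from the successors of the
    k training patterns nearest to the most recent lag pattern."""
    patterns = [(series[i:i + lag], series[i + lag])
                for i in range(len(series) - lag)]
    query = series[-lag:]
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, query))
    nearest = sorted(patterns, key=lambda pt: dist(pt[0]))[:k]
    return sum(nxt for _, nxt in nearest) / k
```

In the paper's method the neighbours would come from the learned clusters and carry learned weights; here they are simply averaged.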

  1. Efficient Approximate OLAP Querying Over Time Series

    DEFF Research Database (Denmark)

    Perera, Kasun Baruhupolage Don Kasun Sanjeewa; Hahmann, Martin; Lehner, Wolfgang

    2016-01-01

    The ongoing trend for data gathering not only produces larger volumes of data, but also increases the variety of recorded data types. Out of these, especially time series, e.g. various sensor readings, have attracted attention in the domains of business intelligence and decision making. As OLAP...... queries play a major role in these domains, it is desirable to also execute them on time series data. While this is not a problem on the conceptual level, it can become a bottleneck with regards to query run-time. In general, processing OLAP queries gets more computationally intensive as the volume...... of data grows. This is a particular problem when querying time series data, which generally contains multiple measures recorded at fine time granularities. Usually, this issue is addressed either by scaling up hardware or by employing workload based query optimization techniques. However, these solutions...

  2. Investigation of coolant mixing in WWER-440/213 RPV with improved turbulence model

    International Nuclear Information System (INIS)

    Kiss, B.; Aszodi, A.

    2011-01-01

    A detailed and complex RPV model of the WWER-440/213 type reactor was developed at the Budapest University of Technology and Economics Institute of Nuclear Techniques in previous years. This model contains the main structural elements: the inlet and outlet nozzles, the guide baffles of the hydro-accumulator coolant, alignment drifts, perforated plates, the brake and guide tube chamber, and a simplified core. With the new vessel model, a series of parameter studies was performed in ANSYS CFX, considering turbulence models, discretization schemes, and modeling methods. In the course of the parameter studies, the coolant mixing in the RPV was investigated. The coolant flow was 'traced' with different scalar concentrations at the inlet nozzles, and its distribution was calculated at the core bottom. The simulation results were compared with mixing factors measured at Paks NPP (available from the FLOMIX project). Based on the comparison, the SST turbulence model, which unifies the advantages of the two-equation k-ω and k-ε models, was chosen for the further simulations. The most widely used turbulence models are Reynolds-averaged Navier-Stokes models, which are based on time-averaging of the equations. Time-averaging filters out all turbulent scales from the simulation, and the effect of turbulence on the mean flow is then re-introduced through appropriate modeling assumptions. Because of this characteristic of the SST turbulence model, a decision was made in 2011 to investigate the coolant mixing with an improved turbulence model as well. The hybrid SAS-SST turbulence model was chosen; it is capable of resolving large-scale turbulent structures without the time and grid-scale resolution restrictions of LES, often allowing the use of existing grids created for Reynolds-averaged Navier-Stokes simulations. As a first step, the coolant mixing was investigated in the downcomer only. Eddies occur after the loop connections because of the steep change in flow direction. This turbulent, vertiginous flow was

  3. A Dynamic Fuzzy Cluster Algorithm for Time Series

    Directory of Open Access Journals (Sweden)

    Min Ji

    2013-01-01

    clustering time series by introducing the definition of key point and improving FCM algorithm. The proposed algorithm works by determining those time series whose class labels are vague and further partitions them into different clusters over time. The main advantage of this approach compared with other existing algorithms is that the property of some time series belonging to different clusters over time can be partially revealed. Results from simulation-based experiments on geographical data demonstrate the excellent performance and the desired results have been obtained. The proposed algorithm can be applied to solve other clustering problems in data mining.

  4. Numerical study on turbulent flow inside a channel with an extended chamber

    International Nuclear Information System (INIS)

    Lee, Young Tae; Lim, Hee Chang

    2009-01-01

    The paper presents an LES numerical simulation of turbulent flow around an extended chamber. The simulations are carried out on a series of 3-dimensional cavities placed in a turbulent boundary layer at a Reynolds number of 1.0×10^5 based on U and h, which are the velocity at the top of the cavity and the cavity depth, respectively. In order to obtain an appropriate solution of the filtered Navier-Stokes equations for incompressible flow, the computational mesh is dense near the cavity surface and coarse in the far field, which reduces computational cost and aids rapid convergence. The Boussinesq hypothesis is employed in the subgrid-scale turbulence model. The subgrid-scale turbulent viscosity is obtained with the Smagorinsky-Lilly SGS model, and the CFL number for time marching is 1.0. The results include the flow variations inside cavities of different sizes and shapes.

  5. Numerical Study on Turbulent Flow Inside a Channel with an Extended Chamber

    International Nuclear Information System (INIS)

    Lee, Young Tae; Lim, Hee Chang

    2010-01-01

    The paper describes a Large Eddy Simulation (LES) study of turbulent flow around a cavity. A series of three-dimensional cavities placed in a turbulent boundary layer are simulated at a Reynolds number of 1.0×10^5 based on U and h, which represent the velocity at the top and the depth of the cavity, respectively. In order to obtain the appropriate solution of the filtered Navier-Stokes equations for incompressible flow, the computational mesh is dense close to the wall of the cavity but relatively coarse away from it; this helps reduce computation cost and ensures rapid convergence. The Boussinesq hypothesis is employed in the subgrid-scale turbulence model. The subgrid-scale turbulent viscosity is determined with the Smagorinsky-Lilly SGS model, and the CFL number for time marching is set to 1.0. The results show the flow variations inside cavities of different sizes and shapes.

  6. A novel weight determination method for time series data aggregation

    Science.gov (United States)

    Xu, Paiheng; Zhang, Rong; Deng, Yong

    2017-09-01

    Aggregation in time series is of great importance in time series smoothing, prediction and other time series analysis processes, which makes it crucial to determine the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced into the weight determination of time series, through which the time decay factor is taken into consideration. The VGA operator generates weights according to the degree distribution in the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
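
A simplified sketch of the idea: one weight vector rewards recency (a stand-in for the IOWA operator, induced here by time order with exponential decay), another uses node degrees of the natural visibility graph (the VGA ingredient), and the two are combined linearly before aggregating. The decay and mixing parameters are assumptions for illustration, not the paper's values:

```python
def visibility_degrees(series):
    """Degree of each point in the natural visibility graph: points a
    and b see each other if no intermediate point rises above the
    straight line joining them."""
    n = len(series)
    deg = [0] * n
    for a in range(n):
        for b in range(a + 1, n):
            if all(series[c] < series[a] + (series[b] - series[a])
                   * (c - a) / (b - a) for c in range(a + 1, b)):
                deg[a] += 1
                deg[b] += 1
    return deg

def aggregate(series, decay=0.9, alpha=0.5):
    """Linearly combine normalized recency weights and normalized
    visibility-graph degree weights, then take the weighted average."""
    n = len(series)
    rec = [decay ** (n - 1 - t) for t in range(n)]
    s = sum(rec)
    rec = [w / s for w in rec]
    deg = visibility_degrees(series)
    d = sum(deg)
    vga = [w / d for w in deg]
    weights = [alpha * r + (1 - alpha) * v for r, v in zip(rec, vga)]
    return sum(w * x for w, x in zip(weights, series))
```

Because both weight vectors are normalized, the combined weights sum to one, so aggregating a constant series returns the constant.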

  7. Foundations of Sequence-to-Sequence Modeling for Time Series

    OpenAIRE

    Kuznetsov, Vitaly; Mariet, Zelda

    2018-01-01

    The availability of large amounts of time series data, paired with the performance of deep-learning algorithms on a broad class of problems, has recently led to significant interest in the use of sequence-to-sequence models for time series forecasting. We provide the first theoretical analysis of this time series forecasting framework. We include a comparison of sequence-to-sequence modeling to classical time series models, and as such our theory can serve as a quantitative guide for practiti...

  8. The time scale for the transition to turbulence in a high Reynolds number, accelerated flow

    International Nuclear Information System (INIS)

    Robey, H.F.; Zhou Ye; Buckingham, A.C.; Keiter, P.; Remington, B.A.; Drake, R.P.

    2003-01-01

    An experiment is described in which an interface between materials of different density is subjected to an acceleration history consisting of a strong shock followed by a period of deceleration. The resulting flow at this interface, initiated by the deposition of strong laser radiation into the initially well characterized solid materials, is unstable to both the Richtmyer-Meshkov (RM) and Rayleigh-Taylor (RT) instabilities. These experiments are of importance in their ability to access a difficult experimental regime characterized by very high energy density (high temperature and pressure) as well as large Reynolds number and Mach number. Such conditions are of interest, for example, in the study of the RM/RT induced mixing that occurs during the explosion of a core-collapse supernova. Under these experimental conditions, the flow is in the plasma state and given enough time will transition to turbulence. By analysis of the experimental data and a corresponding one-dimensional numerical simulation of the experiment, it is shown that the Reynolds number is sufficiently large (Re > 10^5) to support a turbulent flow. An estimate of three key turbulence length scales (the Taylor and Kolmogorov microscales and a viscous diffusion scale), however, shows that the temporal duration of the present flow is insufficient to allow for the development of a turbulent inertial subrange. A methodology is described for estimating the time required under these conditions for the development of a fully turbulent flow.
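
The length scales the authors estimate follow from the standard homogeneous-turbulence definitions (Kolmogorov scales from viscosity and dissipation rate, Taylor microscale from the rms velocity fluctuation); a quick calculator with illustrative inputs, not the experiment's values:

```python
import math

def turbulence_scales(nu, eps, u_rms):
    """nu: kinematic viscosity, eps: dissipation rate, u_rms: rms
    velocity fluctuation. Returns Kolmogorov length and time scales,
    the Taylor microscale, and the Taylor Reynolds number."""
    eta = (nu ** 3 / eps) ** 0.25                   # Kolmogorov length
    tau_eta = (nu / eps) ** 0.5                     # Kolmogorov time
    lam = math.sqrt(15.0 * nu * u_rms ** 2 / eps)   # Taylor microscale
    re_lambda = u_rms * lam / nu                    # Taylor Reynolds number
    return eta, tau_eta, lam, re_lambda
```

Comparing these scales against the duration and size of the flow is exactly the kind of estimate the abstract describes for judging whether an inertial subrange can develop.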

  9. Climate Prediction Center (CPC) Global Precipitation Time Series

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The global precipitation time series provides time series charts showing observations of daily precipitation as well as accumulated precipitation compared to normal...

  10. Climate Prediction Center (CPC) Global Temperature Time Series

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The global temperature time series provides time series charts using station based observations of daily temperature. These charts provide information about the...

  11. Recurrent Neural Network Applications for Astronomical Time Series

    Science.gov (United States)

    Protopapas, Pavlos

    2017-06-01

    The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize to irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations, using the error estimates from astronomical light curves. In addition to this, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to correctly set hyperparameters for a stable and performant solution: we optimize ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of it.

  12. TIDAL TURBULENCE SPECTRA FROM A COMPLIANT MOORING

    Energy Technology Data Exchange (ETDEWEB)

    Thomson, Jim; Kilcher, Levi; Richmond, Marshall C.; Talbert, Joe; deKlerk, Alex; Polagye, Brian; Guerra, Maricarmen; Cienfuegos, Rodrigo

    2013-06-13

    A compliant mooring to collect high frequency turbulence data at a tidal energy site is evaluated in a series of short demonstration deployments. The Tidal Turbulence Mooring (TTM) improves upon recent bottom-mounted approaches by suspending Acoustic Doppler Velocimeters (ADVs) at mid-water depths (which are more relevant to tidal turbines). The ADV turbulence data are superior to Acoustic Doppler Current Profiler (ADCP) data, but are subject to motion contamination when suspended on a mooring in strong currents. In this demonstration, passive stabilization is shown to be sufficient for acquiring bulk statistics of the turbulence, without motion correction. With motion correction (post-processing), data quality is further improved; the relative merits of direct and spectral motion correction are discussed.

  13. Flume experiments on intermittency and zero-crossing properties of canopy turbulence

    Science.gov (United States)

    Poggi, Davide; Katul, Gabriel

    2009-06-01

    How the presence of a canopy alters clustering and the fine-scale intermittency exponents, and any possible connections between them, remains a vexing research problem in canopy turbulence. To begin progress on this problem, detailed flume experiments were conducted in which the longitudinal and vertical velocity time series were acquired using laser Doppler anemometry within and above a uniform canopy composed of densely arrayed rods. The time series analysis made use of the telegraphic approximation (TA) and phase-randomization (PR) methods. The TA preserved the so-called zero-crossing properties in the original turbulent velocity time series but eliminated amplitude variations, while the PR generated surrogate data that preserved the spectral scaling laws in the velocity series but randomized the acceleration statistics. Based on these experiments, it was shown that the variations in the dissipation intermittency exponents were well described by the Taylor microscale Reynolds number (Reλ) within and above the canopy. In terms of clustering, quantified here using the variance in zero-crossing density across scales, two scaling regimes emerged. For spatial scales much larger than the canopy height hc, representing the canonical scale of the vortices dominating the flow, no significant clustering was detected. For spatial scales much smaller than hc, significant clustering was discernible and follows an extensive scaling law inside the canopy. Moreover, the canopy signatures on the clustering scaling laws were weak. When these clustering measures were repeated on the PR data, the results were indistinguishable from the original series. Hence, clustering exponents derived from variances in zero-crossing density across scales primarily depended on the velocity correlation function and not on the distributional properties of the acceleration. In terms of the connection between dissipation intermittency and clustering exponents, there was no significant relationship.
While the former

  14. Turbulence modulation induced by bubble swarm in oscillating-grid turbulence

    International Nuclear Information System (INIS)

    Morikawa, Koichi; Urano, Shigeyuki; Saito, Takayuki

    2007-01-01

    In the present study, liquid-phase turbulence modulation induced by a bubble swarm ascending through arbitrary turbulence was experimentally investigated. Liquid-phase homogeneous isotropic turbulence was formed using an oscillating grid in a cylindrical acrylic vessel of 149 mm inner diameter. A bubble swarm consisting of 19 bubbles of 2.8 mm equivalent diameter was examined; the bubble size and launching time were completely controlled using a bubble launching device driven through audio speakers. This bubble launching device was able to control the bubble swarm repeatedly, arbitrarily and precisely. The bubble swarm was launched at a frequency of 4 Hz. The liquid phase motion was measured via two LDA (Laser Doppler Anemometer) probes. The turbulence intensity, spatial correlation and integral scale were calculated from LDA data obtained by two-point measurements at spatially separate locations. When the bubble swarm was added, the turbulence intensity dramatically changed. The original isotropic turbulence was modulated into anisotropic turbulence by the mutual interference between the bubble swarm and the ambient isotropic turbulence. The integral scales were calculated from the spatial correlation function. The effects of the bubble swarm on the integral scales showed tendencies similar to those on the turbulence intensity. (author)

  15. Transition Icons for Time-Series Visualization and Exploratory Analysis.

    Science.gov (United States)

    Nickerson, Paul V; Baharloo, Raheleh; Wanigatunga, Amal A; Manini, Todd M; Tighe, Patrick J; Rashidi, Parisa

    2018-03-01

    The modern healthcare landscape has seen the rapid emergence of techniques and devices that temporally monitor and record physiological signals. The prevalence of time-series data within the healthcare field necessitates the development of methods that can analyze the data in order to draw meaningful conclusions. Time-series behavior is notoriously difficult to intuitively understand due to its intrinsic high-dimensionality, which is compounded in the case of analyzing groups of time series collected from different patients. Our framework, which we call transition icons, renders common patterns in a visual format useful for understanding the shared behavior within groups of time series. Transition icons are adept at detecting and displaying subtle differences and similarities, e.g., between measurements taken from patients receiving different treatment strategies or stratified by demographics. We introduce various methods that collectively allow for exploratory analysis of groups of time series, while being free of distribution assumptions and including simple heuristics for parameter determination. Our technique extracts discrete transition patterns from symbolic aggregate approXimation representations, and compiles transition frequencies into a bag of patterns constructed for each group. These transition frequencies are normalized and aligned in icon form to intuitively display the underlying patterns. We demonstrate the transition icon technique for two time-series datasets: postoperative pain scores, and hip-worn accelerometer activity counts. We believe transition icons can be an important tool for researchers approaching time-series data, as they give rich and intuitive information about collective time-series behaviors.
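
A reduced sketch of the pipeline described above: discretize each series into symbols (a stand-in for the SAX step, using simple equal-width bins rather than SAX's Gaussian breakpoints), then compile the frequencies of symbol-to-symbol transitions into a normalized bag of patterns for a group of series. Function names and bin counts are illustrative:

```python
from collections import Counter

def symbolize(series, n_bins=3):
    """Map each value to an integer symbol via equal-width binning."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0  # guard against constant series
    return [min(int((x - lo) / width), n_bins - 1) for x in series]

def transition_bag(group_of_series, n_bins=3):
    """Count symbol-to-symbol transitions across a group of series and
    normalize the counts into frequencies."""
    bag = Counter()
    for s in group_of_series:
        sym = symbolize(s, n_bins)
        bag.update(zip(sym, sym[1:]))
    total = sum(bag.values())
    return {k: v / total for k, v in bag.items()}
```

In the paper these normalized frequencies are then laid out as icons so that two patient groups can be compared visually.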

  16. Multifractal analysis of visibility graph-based Ito-related connectivity time series.

    Science.gov (United States)

    Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano

    2016-02-01

    In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by the Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series of the numbers of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the procedure of the visibility graph which, by connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive in detecting wide "depressions" in input time series.

  17. Mathematical foundations of time series analysis a concise introduction

    CERN Document Server

    Beran, Jan

    2017-01-01

    This book provides a concise introduction to the mathematical foundations of time series analysis, with an emphasis on mathematical clarity. The text is reduced to the essential logical core, mostly using the symbolic language of mathematics, thus enabling readers to very quickly grasp the essential reasoning behind time series analysis. It appeals to anybody wanting to understand time series in a precise, mathematical manner. It is suitable for graduate courses in time series analysis but is equally useful as a reference work for students and researchers alike.

  18. Time series analysis in the social sciences the fundamentals

    CERN Document Server

    Shin, Youseop

    2017-01-01

    Time Series Analysis in the Social Sciences is a practical and highly readable introduction written exclusively for students and researchers whose mathematical background is limited to basic algebra. The book focuses on fundamental elements of time series analysis that social scientists need to understand so they can employ time series analysis for their research and practice. Through step-by-step explanations and using monthly violent crime rates as case studies, this book explains univariate time series from the preliminary visual analysis through the modeling of seasonality, trends, and re

  19. Fluid particles only separate exponentially in the dissipation range of turbulence after extremely long times

    Science.gov (United States)

    Dhariwal, Rohit; Bragg, Andrew D.

    2018-03-01

    In this paper, we consider how the statistical moments of the separation between two fluid particles grow with time when their separation lies in the dissipation range of turbulence. In this range, the fluid velocity field varies smoothly and the relative velocity of two fluid particles depends linearly upon their separation. While this may suggest that the rate at which fluid particles separate is exponential in time, this is not guaranteed because the strain rate governing their separation is a strongly fluctuating quantity in turbulence. Indeed, Afik and Steinberg [Nat. Commun. 8, 468 (2017), 10.1038/s41467-017-00389-8] argue that there is no convincing evidence that the moments of the separation between fluid particles grow exponentially with time in the dissipation range of turbulence. Motivated by this, we use direct numerical simulations (DNS) to compute the moments of particle separation over very long periods of time in a statistically stationary, isotropic turbulent flow to see if we ever observe evidence for exponential separation. Our results show that if the initial separation between the particles is infinitesimal, the moments of the particle separation first grow as power laws in time, but we then observe convincing evidence that at sufficiently long times the moments do grow exponentially. However, this exponential growth is only observed after extremely long times ≳ 200 τη, where τη is the Kolmogorov time scale. This is due to fluctuations in the strain rate about its mean value measured along the particle trajectories, the effect of which on the moments of the particle separation persists for very long times. We also consider the backward-in-time (BIT) moments of the particle separation, and observe that they too grow exponentially in the long-time regime. However, a dramatic consequence of the exponential separation is that at long times the difference between the rate of the particle separation forward in time (FIT) and BIT grows

  20. Data imputation analysis for Cosmic Rays time series

    Science.gov (United States)

    Fernandes, R. C.; Lucio, P. S.; Fernandez, J. H.

    2017-05-01

    The occurrence of missing data in Galactic Cosmic Ray (GCR) time series is inevitable, since data losses are due to mechanical and human failure, technical problems, and the different periods of operation of GCR stations. The aim of this study was to perform multiple-dataset imputation in order to depict the observational dataset. The study used the monthly time series of GCR counts at Climax (CLMX) and Roma (ROME) from 1960 to 2004 to simulate scenarios of 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80% and 90% missing data relative to the observed ROME series, with 50 replicates; the CLMX station was then used as a proxy for allocating these scenarios. Three different methods for monthly dataset imputation were selected: AMELIA II, which runs a bootstrap Expectation Maximization algorithm; MICE, which runs an algorithm based on Multivariate Imputation by Chained Equations; and MTSDI, an Expectation Maximization based method for imputation of missing values in multivariate normal time series. The synthetic time series were compared with the observed ROME series using several skill measures, such as RMSE, NRMSE, the Agreement Index, R, R2, the F-test and the t-test. The results showed that for CLMX and ROME, the R2 and R statistics were equal to 0.98 and 0.96, respectively. It was observed that increases in the number of gaps degrade the quality of the time series. Data imputation was most efficient with the MTSDI method, with negligible errors and the best skill coefficients. The results suggest a limit of about 60% missing data for imputation of monthly averages. It is noteworthy that the CLMX, ROME and KIEL stations present no missing data in the target period. This methodology allowed 43 time series to be reconstructed.
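
The skill measures used to score an imputed series against the observed one are standard; minimal implementations (with the Agreement Index in its common Willmott form, which may differ in detail from the paper's variant):

```python
import math

def rmse(obs, pred):
    """Root-mean-square error between observed and predicted series."""
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs))

def nrmse(obs, pred):
    """RMSE normalized by the range of the observations."""
    return rmse(obs, pred) / (max(obs) - min(obs))

def agreement_index(obs, pred):
    """Willmott's index of agreement: 1 means perfect agreement."""
    mo = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - mo) + abs(o - mo)) ** 2 for o, p in zip(obs, pred))
    return 1.0 - num / den
```
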

  1. Algorithm for Compressing Time-Series Data

    Science.gov (United States)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
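
The essence of the scheme can be sketched as below: fit a low-order Chebyshev series to each block of samples (mapped to the interval [-1, 1]) and transmit only the coefficients. This is a plain least-squares fit via the normal equations, not the actual flight algorithm, and the block size and degree are illustrative:

```python
def cheb_basis(t, degree):
    """Values T_0(t)..T_degree(t) via T_{k+1} = 2 t T_k - T_{k-1}."""
    T = [1.0, t]
    for _ in range(degree - 1):
        T.append(2.0 * t * T[-1] - T[-2])
    return T[:degree + 1]

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def compress_block(block, degree):
    """Least-squares Chebyshev coefficients for one block of samples."""
    n = len(block)
    ts = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    basis = [cheb_basis(t, degree) for t in ts]
    m = degree + 1
    A = [[sum(basis[i][j] * basis[i][k] for i in range(n)) for k in range(m)]
         for j in range(m)]
    b = [sum(basis[i][j] * block[i] for i in range(n)) for j in range(m)]
    return solve(A, b)

def decompress_block(coeffs, n):
    """Reconstruct n samples from the transmitted coefficients."""
    ts = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    return [sum(c * T for c, T in zip(coeffs, cheb_basis(t, len(coeffs) - 1)))
            for t in ts]
```

Transmitting degree + 1 coefficients instead of n samples gives the compression factor; the equal-error and min-max properties cited above are what make the per-block reconstruction error well behaved.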

  2. Modeling of Volatility with Non-linear Time Series Model

    OpenAIRE

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear time series models are combined to form a TAR (Threshold Auto-Regressive) model with an AARCH (Asymmetric Auto-Regressive Conditional Heteroskedasticity) error term, and its parameter estimation is studied.
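
As a rough illustration of the threshold idea, here is a minimal two-regime TAR(1) simulator. It uses plain constant-variance Gaussian noise rather than the paper's AARCH error term, and the threshold and coefficients are made-up values:

```python
import random

def simulate_tar(n, r=0.0, phi_low=0.6, phi_high=-0.4, sigma=1.0, seed=42):
    """Two-regime TAR(1): the AR coefficient switches when y crosses threshold r."""
    rng = random.Random(seed)
    y, series = 0.0, []
    for _ in range(n):
        phi = phi_low if y <= r else phi_high   # regime chosen by the previous value
        y = phi * y + rng.gauss(0.0, sigma)
        series.append(y)
    return series

path = simulate_tar(500)
```

In the full model of the paper, the innovation variance would itself follow an asymmetric ARCH recursion instead of being constant.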

  3. In-silico experiments on characteristic time scale at a shear-free gas-liquid interface in fully developed turbulence

    International Nuclear Information System (INIS)

    Nagaosa, Ryuichi; Handler, Robert A

    2011-01-01

    The purpose of this study is to model scalar transfer mechanisms in fully developed turbulence for accurate predictions of the turbulent scalar flux across a shear-free gas-liquid interface. The concept of the surface-renewal approximation (Danckwerts, 1951) is introduced in this study to establish predictive models for the interfacial scalar flux. Turbulent flow realizations obtained by a direct numerical simulation technique are employed to prepare detailed three-dimensional information on turbulence in the region very close to the interface. Two characteristic time scales at the interface have been examined for exact prediction of the scalar transfer flux. One is the reciprocal of the root-mean-square surface divergence, T_γ = (γγ)^(-1/2), where γ is the surface divergence. The other time scale to be examined is T_S = Λ/V, where Λ is the zero-correlation length of the surface divergence, taken as the interfacial length scale, and V is the root-mean-square velocity fluctuation in the streamwise direction, taken as the interfacial velocity scale. The results of this study suggest that T_γ is somewhat unsatisfactory for correlating the turbulent scalar flux at the gas-liquid interface based on the surface-renewal approximation. It is also found that the proportionality constant appears to be 0.19, which differs from the value of 0.34 observed in the laboratory experiments (Komori, Murakami, and Ueda, 1989). It is concluded that the time scale T_γ is of a different kind from the time scale observed in the laboratory experiments. On the other hand, the present in-silico experiments indicate that T_S predicts the turbulent scalar flux based on the surface-renewal approximation in a satisfactory manner. It is also found that the proportionality constant for T_S is approximately 0.36, which is very close to that found in the laboratory experiments. This fact shows that the time scale T_S appears to be essentially the same as the time scale

  4. In-silico experiments on characteristic time scale at a shear-free gas-liquid interface in fully developed turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Nagaosa, Ryuichi [Research Center for Compact Chemical System (CCS), AIST, 4-2-1 Nigatake, Miyagino, Sendai 983-8551 (Japan); Handler, Robert A, E-mail: ryuichi.nagaosa@aist.go.jp [Department of Mechanical Engineering, Texas A&M University, College Station, TX 77843-3123 (United States)

    2011-12-22

    The purpose of this study is to model scalar transfer mechanisms in fully developed turbulence for accurate predictions of the turbulent scalar flux across a shear-free gas-liquid interface. The concept of the surface-renewal approximation (Danckwerts, 1951) is introduced in this study to establish predictive models for the interfacial scalar flux. Turbulent flow realizations obtained by a direct numerical simulation technique are employed to prepare detailed three-dimensional information on turbulence in the region very close to the interface. Two characteristic time scales at the interface have been examined for exact prediction of the scalar transfer flux. One is the reciprocal of the root-mean-square surface divergence, T_γ = (γγ)^(-1/2), where γ is the surface divergence. The other time scale to be examined is T_S = Λ/V, where Λ is the zero-correlation length of the surface divergence, taken as the interfacial length scale, and V is the root-mean-square velocity fluctuation in the streamwise direction, taken as the interfacial velocity scale. The results of this study suggest that T_γ is somewhat unsatisfactory for correlating the turbulent scalar flux at the gas-liquid interface based on the surface-renewal approximation. It is also found that the proportionality constant appears to be 0.19, which differs from the value of 0.34 observed in the laboratory experiments (Komori, Murakami, and Ueda, 1989). It is concluded that the time scale T_γ is of a different kind from the time scale observed in the laboratory experiments. On the other hand, the present in-silico experiments indicate that T_S predicts the turbulent scalar flux based on the surface-renewal approximation in a satisfactory manner. It is also found that the proportionality constant for T_S is approximately 0.36, which is very close to that found by the laboratory experiments. This fact shows

  5. Layered Ensemble Architecture for Time Series Forecasting.

    Science.gov (United States)

    Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin

    2016-01-01

    Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown, and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both the accuracy and the diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble on different training sets with the aim of maintaining diversity among the networks. At the same time, it uses the appropriate lag and combines the best trained networks to construct the ensemble, reflecting LEA's emphasis on network accuracy. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 forecasting competitions, as well as on several standard benchmark time series datasets. In terms of forecasting accuracy, our experimental results show clearly that LEA is better than other ensemble and nonensemble methods.

  6. The effect of existing turbulence on stratified shear instability

    Science.gov (United States)

    Kaminski, Alexis; Smyth, William

    2017-11-01

    Ocean turbulence is an essential process governing, for example, heat uptake by the ocean. In the stably-stratified ocean interior, this turbulence occurs in discrete events driven by vertical variations of the horizontal velocity. Typically, these events have been modelled by assuming an initially laminar stratified shear flow which develops wavelike instabilities, becomes fully turbulent, and then relaminarizes into a stable state. However, in the real ocean there is always some level of turbulence left over from previous events, and it is not yet understood how this turbulence impacts the evolution of future mixing events. Here, we perform a series of direct numerical simulations of turbulent events developing in stratified shear flows that are already at least weakly turbulent. We do so by varying the amplitude of the initial perturbations, and examine the subsequent development of the instability and the impact on the resulting turbulent fluxes. This work is supported by NSF Grant OCE1537173.

  7. A self-affine multi-fractal wave turbulence discrimination method using data from single point fast response sensors in a nocturnal atmospheric boundary layer

    OpenAIRE

    Kamada, Ray; Decaria, Alex Joseph

    1992-01-01

    We present DA, a self-affine multi-fractal that may become the first routine wave/turbulence discriminant for time series data. Using nocturnal atmospheric data, we show the advantages of DA over self-similar fractals and standard turbulence measures such as FFTs, the Richardson number, the Brunt-Vaisala frequency, the buoyancy length scale, variances, turbulent kinetic energy, and phase averaging. DA also shows promise in resolving "wave-break" events. Since it uses local basis functions, DA may be...

  8. Strategic thinking in turbulent times

    Directory of Open Access Journals (Sweden)

    Bratianu Constantin

    2017-07-01

    Full Text Available The purpose of this paper is to present a structural analysis of the strategic thinking spectrum in turbulent times. Business excellence cannot be achieved without a well-defined strategic thinking spectrum able to elaborate and implement strategies in a fast-changing and unpredictable business environment. Strategic thinking means thinking toward a desirable future, which may lie 4-5 years ahead of the present, and making decisions to the best of our knowledge for that unknown business environment. Thus, the research question is: how can we conceive the spectrum of strategic thinking such that we are able to deal with a complex and unknown future and achieve a competitive advantage? The methodology used to answer this question is based on metaphorical thinking and multidimensional analysis. I consider four main dimensions: time, complexity, uncertainty, and novelty. On each of these dimensions I analyze the known thinking models and their attributes with respect to the requirements formulated in the research question, and then choose the thinking models that correspond to the characteristics of the future and integrate them into a continuous spectrum. On each dimension I consider three basic thinking models. On the time dimension they are inertial, dynamic, and entropic thinking; on the complexity dimension, linear, nonlinear, and systemic thinking; on the uncertainty dimension, deterministic, probabilistic, and chaotic thinking; and on the novelty dimension, template, intelligent, and creative thinking. Considering all the requirements of the unknown future, we conclude that the strategic thinking spectrum should contain the entropic, nonlinear and systemic, probabilistic and chaotic, and intelligent and creative thinking models. Such a spectrum increases the capacity of our understanding and, as a consequence, enhances our capability of making adequate decisions under complexity and uncertainty.

  9. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-10-01

    This study proposes a robust similarity score-based time series feature extraction method termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity, thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and to three precordial leads from a publicly available electrocardiogram (ECG) dataset. WTC is then compared, in terms of predictive accuracy and computational complexity, with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Comparison of turbulence mitigation algorithms

    Science.gov (United States)

    Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric

    2017-07-01

    When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.

  11. The Fiftieth Anniversary of Brookhaven National Laboratory: A Turbulent Time

    Science.gov (United States)

    Bond, Peter D.

    2018-03-01

    The fiftieth anniversary year of Brookhaven National Laboratory was momentous, but for reasons other than celebrating its scientific accomplishments. Legacy environmental contamination, community unrest, politics, and internal Department of Energy issues dominated the year. Those were the early days of perhaps the most turbulent time in the lab's history. The consequences included significant changes at the lab, but they also brought a change to the contracts for managing the Department of Energy laboratories.

  12. Prewhitening of hydroclimatic time series? Implications for inferred change and variability across time scales

    Science.gov (United States)

    Razavi, Saman; Vogel, Richard

    2018-02-01

    Prewhitening, the process of eliminating or reducing short-term stochastic persistence to enable detection of deterministic change, has been extensively applied to time series analysis of a range of geophysical variables. Despite the controversy around its utility, methodologies for prewhitening time series continue to be a critical feature of a variety of analyses including: trend detection of hydroclimatic variables and reconstruction of climate and/or hydrology through proxy records such as tree rings. With a focus on the latter, this paper presents a generalized approach to exploring the impact of a wide range of stochastic structures of short- and long-term persistence on the variability of hydroclimatic time series. Through this approach, we examine the impact of prewhitening on the inferred variability of time series across time scales. We document how a focus on prewhitened, residual time series can be misleading, as it can drastically distort (or remove) the structure of variability across time scales. Through examples with actual data, we show how such loss of information in prewhitened time series of tree rings (so-called "residual chronologies") can lead to the underestimation of extreme conditions in climate and hydrology, particularly droughts, reconstructed for centuries preceding the historical period.
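
The simplest and most common form of prewhitening removes estimated lag-1 (AR(1)) persistence; the following is a minimal sketch of that operation, whose side effects on variability across scales the paper examines (the synthetic series and coefficient are illustrative assumptions):

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[t] - mean) * (x[t - 1] - mean) for t in range(1, n))
    return cov / var

def prewhiten(x):
    """Remove estimated AR(1) persistence: r_t = x_t - r1 * x_{t-1}."""
    r1 = lag1_autocorr(x)
    return [x[t] - r1 * x[t - 1] for t in range(1, len(x))], r1

# Synthetic series with known short-term persistence (AR(1), coefficient 0.8).
rng = random.Random(0)
y = [0.0]
for _ in range(2000):
    y.append(0.8 * y[-1] + rng.gauss(0.0, 1.0))
resid, r1 = prewhiten(y)   # r1 lands near 0.8; resid is close to white noise
```

The paper's point is that working only with `resid` (the "residual chronology") can discard genuine low-frequency variability along with the noise.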

  13. DTW-APPROACH FOR UNCORRELATED MULTIVARIATE TIME SERIES IMPUTATION

    OpenAIRE

    Phan, Thi-Thu-Hong; Poisson Caillault, Emilie; Bigand, André; Lefebvre, Alain

    2017-01-01

    International audience; Missing data are inevitable in almost all domains of applied science. Data analysis with missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Some well-known methods for multivariate time series imputation require high correlations between series or their features. In this paper, we propose an approach based on the shape-behaviour relation in low/un-correlated multivariate time series under an assumption of...
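
For reference, the DTW of the title is the classic dynamic-time-warping distance between two sequences; a minimal O(nm) implementation looks like the following (the imputation method itself, which uses DTW to find similar shapes, is not reproduced here):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three possible alignments.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Identical shapes shifted in time still match exactly under DTW.
shifted = dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0])   # 0.0
```

This tolerance to temporal shifts is what lets shape-based methods match patterns across series whose features are otherwise weakly correlated.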

  14. Density Effects on Post-shock Turbulence Structure

    Science.gov (United States)

    Tian, Yifeng; Jaberi, Farhad; Livescu, Daniel; Li, Zhaorui; Michigan State University Collaboration; Los Alamos National Laboratory Collaboration; Texas A&M University-Corpus Christi Collaboration

    2017-11-01

    The effects of density variations due to mixture composition on post-shock turbulence structure are studied using turbulence-resolving shock-capturing simulations. This work extends the canonical Shock-Turbulence Interaction (STI) problem to include significant variable-density effects. The numerical method has been verified using a series of grid and LIA convergence tests, and is used to generate accurate post-shock turbulence data for a detailed flow study. Density effects on post-shock turbulent statistics are shown to be significant, leading to an increased amplification of turbulent kinetic energy (TKE). Eulerian and Lagrangian analyses show that the increase in the post-shock correlation between rotation and strain is weakened in the case with significant density variations (referred to as the ``multi-fluid'' case). Similar to previous single-fluid results and LIA predictions, the shock wave significantly changes the topology of the turbulent structures, exhibiting a symmetrization of the joint PDF of the second and third invariants of the deviatoric part of the velocity gradient tensor. In the multi-fluid case, this trend is more significant and mainly manifested in the heavy fluid regions. Lagrangian data are also used to study the evolution of turbulence structure away from the shock wave and assess the accuracy of Lagrangian dynamical models.

  15. Variable Selection in Time Series Forecasting Using Random Forests

    Directory of Open Access Journals (Sweden)

    Hristos Tyralis

    2017-10-01

    Full Text Available Time series forecasting using machine learning algorithms has gained popularity recently. Random forest is a machine learning algorithm implemented in time series forecasting; however, most of its forecasting properties have remained unexplored. Here we focus on assessing the performance of random forests in one-step forecasting using two large datasets of short time series, with the aim of suggesting an optimal set of predictor variables. Furthermore, we compare its performance to benchmarking methods. The first dataset is composed of 16,000 simulated time series from a variety of Autoregressive Fractionally Integrated Moving Average (ARFIMA) models. The second dataset consists of 135 mean annual temperature time series. The highest predictive performance of the random forest is observed when using a low number of recent lagged predictor variables. This outcome could be useful in relevant future applications, with the prospect of achieving higher predictive accuracy.
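
The experimental setup, one-step forecasting from lagged predictor variables, amounts to reframing the series as a supervised dataset; this sketch only builds that dataset, after which any regressor (e.g. a random forest) can be fit to X and y:

```python
def lagged_dataset(series, n_lags):
    """Turn a series into (lag vector -> next value) training pairs."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])   # the n_lags most recent values
        y.append(series[t])              # the one-step-ahead target
    return X, y

series = [1, 2, 3, 4, 5, 6]
X, y = lagged_dataset(series, n_lags=2)
# X = [[1, 2], [2, 3], [3, 4], [4, 5]], y = [3, 4, 5, 6]
```

The paper's finding translates to keeping `n_lags` small: a few recent values as predictors outperformed longer lag windows in their experiments.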

  16. Trend time-series modeling and forecasting with neural networks.

    Science.gov (United States)

    Qi, Min; Zhang, G Peter

    2008-05-01

    Despite its great importance, there has been no general consensus on how to model the trends in time-series data. Compared to traditional approaches, neural networks (NNs) have shown some promise in time-series forecasting. This paper investigates how to best model trend time series using NNs. Four different strategies (raw data, raw data with time index, detrending, and differencing) are used to model various trend patterns (linear, nonlinear, deterministic, stochastic, and breaking trend). We find that with NNs differencing often gives meritorious results regardless of the underlying data generating processes (DGPs). This finding is also confirmed by the real gross national product (GNP) series.

  17. TURBULENT DISKS ARE NEVER STABLE: FRAGMENTATION AND TURBULENCE-PROMOTED PLANET FORMATION

    Energy Technology Data Exchange (ETDEWEB)

    Hopkins, Philip F. [TAPIR, Mailcode 350-17, California Institute of Technology, Pasadena, CA 91125 (United States); Christiansen, Jessie L., E-mail: phopkins@caltech.edu [SETI Institute/NASA Ames Research Center, M/S 244-30, Moffett Field, CA 94035 (United States)

    2013-10-10

    A fundamental assumption in our understanding of disks is that when the Toomre Q >> 1, the disk is stable against fragmentation into self-gravitating objects (and so cannot form planets via direct collapse). But if disks are turbulent, this neglects a spectrum of stochastic density fluctuations that can produce rare, high-density mass concentrations. Here, we use a recently developed analytic framework to predict the statistics of these fluctuations, i.e., the rate of fragmentation and mass spectrum of fragments formed in a turbulent Keplerian disk. Turbulent disks are never completely stable: we calculate the (always finite) probability of forming self-gravitating structures via stochastic turbulent density fluctuations in such disks. Modest sub-sonic turbulence above Mach number M∼0.1 can produce a few stochastic fragmentation or 'direct collapse' events over ∼Myr timescales, even if Q >> 1 and cooling is slow (t_cool >> t_orbit). In transsonic turbulence this extends to Q ∼ 100. We derive the true Q-criterion needed to suppress such events, which scales exponentially with Mach number. We specify to turbulence driven by magneto-rotational instability, convection, or spiral waves and derive equivalent criteria in terms of Q and the cooling time. Cooling times ∼> 50 t_dyn may be required to completely suppress fragmentation. These gravo-turbulent events produce mass spectra peaked near ∼(Q M_disk/M_*)^2 M_disk (rocky-to-giant planet masses, increasing with distance from the star). We apply this to protoplanetary disk models and show that even minimum-mass solar nebulae could experience stochastic collapse events, provided a source of turbulence.

  18. TURBULENT DISKS ARE NEVER STABLE: FRAGMENTATION AND TURBULENCE-PROMOTED PLANET FORMATION

    International Nuclear Information System (INIS)

    Hopkins, Philip F.; Christiansen, Jessie L.

    2013-01-01

    A fundamental assumption in our understanding of disks is that when the Toomre Q >> 1, the disk is stable against fragmentation into self-gravitating objects (and so cannot form planets via direct collapse). But if disks are turbulent, this neglects a spectrum of stochastic density fluctuations that can produce rare, high-density mass concentrations. Here, we use a recently developed analytic framework to predict the statistics of these fluctuations, i.e., the rate of fragmentation and mass spectrum of fragments formed in a turbulent Keplerian disk. Turbulent disks are never completely stable: we calculate the (always finite) probability of forming self-gravitating structures via stochastic turbulent density fluctuations in such disks. Modest sub-sonic turbulence above Mach number M∼0.1 can produce a few stochastic fragmentation or 'direct collapse' events over ∼Myr timescales, even if Q >> 1 and cooling is slow (t_cool >> t_orbit). In transsonic turbulence this extends to Q ∼ 100. We derive the true Q-criterion needed to suppress such events, which scales exponentially with Mach number. We specify to turbulence driven by magneto-rotational instability, convection, or spiral waves and derive equivalent criteria in terms of Q and the cooling time. Cooling times ∼> 50 t_dyn may be required to completely suppress fragmentation. These gravo-turbulent events produce mass spectra peaked near ∼(Q M_disk/M_*)^2 M_disk (rocky-to-giant planet masses, increasing with distance from the star). We apply this to protoplanetary disk models and show that even minimum-mass solar nebulae could experience stochastic collapse events, provided a source of turbulence.

  19. Segmentation of Nonstationary Time Series with Geometric Clustering

    DEFF Research Database (Denmark)

    Bocharov, Alexei; Thiesson, Bo

    2013-01-01

    We introduce a non-parametric method for segmentation in regime-switching time-series models. The approach is based on spectral clustering of target-regressor tuples and derives a switching regression tree, where regime switches are modeled by oblique splits. Such models can be learned efficiently from data, where clustering is used to propose one single split candidate at each split level. We use the class of ART time series models to serve as illustration, but because of the non-parametric nature of our segmentation approach, it readily generalizes to a wide range of time-series models that go...

  20. Non-parametric characterization of long-term rainfall time series

    Science.gov (United States)

    Tiwari, Harinarayan; Pandey, Brij Kishor

    2018-03-01

    The statistical study of rainfall time series is one of the approaches to efficient hydrological system design. Identifying and characterizing long-term rainfall time series can aid in improving hydrological forecasting. In the present study, statistical analysis was applied to the long-term (1851-2006) rainfall time series of seven meteorological regions of India. Linear trend analysis was carried out using the Mann-Kendall test on the observed rainfall series, and the observed trend was confirmed using the innovative trend analysis method. Innovative trend analysis has been found to be a strong tool for detecting the general trend of rainfall time series. The sequential Mann-Kendall test has also been carried out to examine nonlinear trends in the series, and the partial sum of cumulative deviation test is likewise found suitable for detecting nonlinear trends. Innovative trend analysis, the sequential Mann-Kendall test, and the partial cumulative deviation test thus have the potential to detect both general and nonlinear trends in rainfall time series. Annual rainfall analysis suggests that the maximum rise in mean rainfall is 11.53% for West Peninsular India, whereas the maximum fall in mean rainfall is 7.8% for the North Mountainous India region. The innovative trend analysis method is also capable of finding the number of change points present in the time series. Additionally, we have performed the von Neumann ratio test and the cumulative deviation test to estimate departures from homogeneity. Singular spectrum analysis has been applied in this study to evaluate the order of departure from homogeneity in the rainfall time series. The monsoon season (JS) of the North Mountainous India and West Peninsular India zones shows a higher departure from homogeneity, and singular spectrum analysis yields results coherent with this finding.
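
The Mann-Kendall statistic used for the linear trend analysis above is built from the signs of all pairwise differences; a bare-bones version (without the variance normalization and tie corrections needed to obtain a p-value) is:

```python
def mann_kendall_s(x):
    """Mann-Kendall S statistic: +1 for each rising pair, -1 for each falling pair."""
    s = 0
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            s += (x[j] > x[i]) - (x[j] < x[i])
    return s

rising = mann_kendall_s([1, 2, 3, 4])    # all 6 pairs rise, so S = 6
falling = mann_kendall_s([4, 3, 2, 1])   # all 6 pairs fall, so S = -6
```

A strongly positive S suggests an upward trend, a strongly negative S a downward one; the sequential variant mentioned in the abstract applies this statistic progressively along the series.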

  1. Time Series Decomposition into Oscillation Components and Phase Estimation.

    Science.gov (United States)

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-02-01

    Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished by this model in the same way as the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
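
The method's building block, an oscillator whose instantaneous frequency fluctuates by noise (random frequency modulation), can be simulated in a few lines; the base frequency and noise level below are arbitrary illustrations, not the empirical-Bayes estimates the method would compute:

```python
import math
import random

def noisy_oscillator(n, freq=0.1, freq_noise=0.02, seed=1):
    """Oscillation component whose per-step frequency is perturbed by Gaussian noise."""
    rng = random.Random(seed)
    phase, out = 0.0, []
    for _ in range(n):
        phase += 2 * math.pi * (freq + rng.gauss(0.0, freq_noise))
        out.append(math.cos(phase))
    return out

x = noisy_oscillator(200)   # one such component; a real signal is a sum of several
```

In the full method, several such components (each a small linear-Gaussian state-space block) are summed, and the Kalman smoother recovers each component's amplitude and phase from the observed mixture.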

  2. Introduction to time series analysis and forecasting

    CERN Document Server

    Montgomery, Douglas C; Kulahci, Murat

    2015-01-01

    Praise for the First Edition: "…[t]he book is great for readers who need to apply the methods and models presented but have little background in mathematics and statistics." -MAA Reviews. Thoroughly updated throughout, Introduction to Time Series Analysis and Forecasting, Second Edition presents the underlying theories of time series analysis that are needed to analyze time-oriented data and construct real-world short- to medium-term statistical forecasts. Authored by highly experienced academics and professionals in engineering statistics, the Second Edition features discussions on both

  3. Multi-Scale Dissemination of Time Series Data

    DEFF Research Database (Denmark)

    Guo, Qingsong; Zhou, Yongluan; Su, Li

    2013-01-01

    In this paper, we consider the problem of continuous dissemination of time series data, such as sensor measurements, to a large number of subscribers. These subscribers fall into multiple subscription levels, where each subscription level is specified by the bandwidth constraint of a subscriber, which is an abstract indicator for both the physical limits and the amount of data that the subscriber would like to handle. To handle this problem, we propose a system framework for multi-scale time series data dissemination that employs a typical tree-based dissemination network and existing time...

  4. Turbulence closure: turbulence, waves and the wave-turbulence transition – Part 1: Vanishing mean shear

    Directory of Open Access Journals (Sweden)

    H. Z. Baumert

    2009-03-01

    Full Text Available This paper extends a turbulence closure-like model for stably stratified flows into a new dynamic domain in which turbulence is generated by internal gravity waves rather than mean shear. The model's turbulent kinetic energy (TKE, K) balance, its first equation, incorporates a term for the energy transfer from internal waves to turbulence. This energy source is in addition to the traditional shear production. The second variable of the new two-equation model is the turbulent enstrophy (Ω). Compared to the traditional shear-only case, the Ω-equation is modified to account for the effect of the waves on the turbulence time and space scales. This modification is based on the assumption of a non-zero constant flux Richardson number in the limit of vanishing mean shear, when turbulence is produced exclusively by internal waves. This paper is part 1 of a continuing theoretical development. It accounts for mean shear- and internal wave-driven mixing only in the two limits of mean shear without waves and waves without mean shear, respectively.

    The new model reproduces the wave-turbulence transition analyzed by D'Asaro and Lien (2000b). At small energy density E of the internal wave field, the turbulent dissipation rate (ε) scales like ε ~ E². This is what is observed in the deep sea. With increasing E, after the wave-turbulence transition has been passed, the scaling changes to ε ~ E¹. This is observed, for example, in the highly energetic tidal flow near a sill in Knight Inlet. The new model further exhibits a turbulent length scale proportional to the Ozmidov scale, as observed in the ocean, and predicts the ratio between the turbulent Thorpe and Ozmidov length scales well within the range observed in the ocean.
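
    The two regimes can be summarized as a single piecewise law. The following minimal sketch uses an illustrative transition energy E_tr and prefactor c (neither taken from the model), with the two branches matched at the transition so that ε is continuous:

```python
def dissipation_rate(E, E_tr=1.0, c=1.0):
    """Turbulent dissipation vs. internal-wave energy density E:
    eps ~ E^2 below the wave-turbulence transition (deep-sea regime),
    eps ~ E^1 above it (e.g. energetic tidal flows).
    E_tr and c are illustrative constants, not values from the model."""
    if E <= E_tr:
        return c * E ** 2
    # match the two branches at E = E_tr so the curve is continuous
    return c * E_tr * E
```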

  5. An alternative way to track the hot money in turbulent times

    Science.gov (United States)

    Sensoy, Ahmet

    2015-02-01

    During recent years, networks have proven to be an efficient way to characterize and investigate a wide range of complex financial systems. In this study, we first obtain the dynamic conditional correlations between filtered exchange rates (against US dollar) of several countries and introduce a time-varying threshold correlation level to define dynamic strong correlations between these exchange rates. Then, using evolving networks obtained from strong correlations, we propose an alternative approach to track the hot money in turbulent times. The approach is demonstrated for the time period including the financial turmoil of 2008. Other applications are also discussed.
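
    The network-building step can be sketched with plain static correlations and a fixed threshold; the function name is illustrative, and the paper's actual scheme uses dynamic conditional correlations with a time-varying threshold:

```python
import numpy as np

def strong_correlation_network(series, threshold):
    """Adjacency matrix linking series whose pairwise correlation exceeds a
    threshold. Rows of `series` are the filtered exchange-rate series; this
    static version stands in for the paper's dynamic (DCC-based) scheme."""
    corr = np.corrcoef(series)
    # strong correlations form edges; the diagonal is excluded
    return (corr > threshold) & ~np.eye(len(corr), dtype=bool)
```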

  6. RADON CONCENTRATION TIME SERIES MODELING AND APPLICATION DISCUSSION.

    Science.gov (United States)

    Stránský, V; Thinová, L

    2017-11-01

    In 2010, continual radon measurement was established at Mladeč Caves in the Czech Republic using a continual radon monitor RADIM3A. In order to model the radon time series over the years 2010-15, the Box-Jenkins methodology, often used in econometrics, was applied. Because of the behavior of radon concentrations (RCs), a seasonal autoregressive integrated moving average model with exogenous variables (SARIMAX) was chosen to model the measured time series. This model uses the time series seasonality, previously acquired values and delayed atmospheric parameters to forecast RC. The developed model for the RC time series is called regARIMA(5,1,3). Model residuals could be retrospectively compared with seismic evidence of local or global earthquakes that occurred during the RC measurements. This technique enables us to assess whether continuously measured RC could serve as an earthquake precursor. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
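
    The autoregressive core of such a model can be fitted by ordinary least squares. A minimal sketch follows; differencing, MA and seasonal terms, and the delayed atmospheric regressors belong to the full SARIMAX machinery (available, e.g., in statsmodels) and are omitted here:

```python
import numpy as np

def fit_ar(series, p):
    """Ordinary least-squares fit of an AR(p) model
    x_t = c + a_1 x_{t-1} + ... + a_p x_{t-p} + e_t.
    Only the autoregressive core is sketched; this is not regARIMA(5,1,3)."""
    lags = np.column_stack([series[p - i - 1:len(series) - i - 1]
                            for i in range(p)])
    X = np.column_stack([np.ones(len(lags)), lags])  # intercept + lagged values
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [c, a_1, ..., a_p]
```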

  7. Similarity estimators for irregular and age uncertain time series

    Science.gov (United States)

    Rehfeld, K.; Kurths, J.

    2013-09-01

    Paleoclimate time series are often irregularly sampled and age uncertain, which is an important technical challenge to overcome for successful reconstruction of past climate variability and dynamics. Visual comparison and interpolation-based linear correlation approaches have been used to infer dependencies from such proxy time series. While the first is subjective, not measurable and not suitable for the comparison of many datasets at a time, the latter introduces interpolation bias, and both face difficulties if the underlying dependencies are nonlinear. In this paper we investigate similarity estimators that could be suitable for the quantitative investigation of dependencies in irregular and age uncertain time series. We compare the Gaussian-kernel based cross correlation (gXCF, Rehfeld et al., 2011) and mutual information (gMI, Rehfeld et al., 2013) against their interpolation-based counterparts and the new event synchronization function (ESF). We test the efficiency of the methods in estimating coupling strength and coupling lag numerically, using ensembles of synthetic stalagmites with short, autocorrelated, linear and nonlinearly coupled proxy time series, and in the application to real stalagmite time series. In the linear test case coupling strength increases are identified consistently for all estimators, while in the nonlinear test case the correlation-based approaches fail. The lag at which the time series are coupled is identified correctly as the maximum of the similarity functions in around 60-55% (in the linear case) to 53-42% (for the nonlinear processes) of the cases when the dating of the synthetic stalagmite is perfectly precise. If the age uncertainty increases beyond 5% of the time series length, however, the true coupling lag is not identified more often than the others for which the similarity function was estimated. Age uncertainty contributes up to half of the uncertainty in the similarity estimation process. Time series irregularity
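
    The Gaussian-kernel idea can be sketched compactly: every pair of observations contributes the product of its standardized values, weighted by how close its time difference is to the requested lag, so no interpolation is needed. Names and the bandwidth choice below are illustrative, not the authors' implementation:

```python
import numpy as np

def gaussian_kernel_xcf(tx, x, ty, y, lag, h):
    """Sketch of Gaussian-kernel cross-correlation (gXCF) for irregularly
    sampled series. The bandwidth h is a free parameter, typically tied to
    the mean sampling interval."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None]               # all pairwise time differences
    w = np.exp(-0.5 * ((dt - lag) / h) ** 2)     # weight pairs near the lag
    return np.sum(w * np.outer(x, y)) / np.sum(w)
```

On a regular grid with a narrow kernel this reduces to the ordinary sample cross-correlation, which makes it easy to sanity-check.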

  8. Similarity estimators for irregular and age-uncertain time series

    Science.gov (United States)

    Rehfeld, K.; Kurths, J.

    2014-01-01

    Paleoclimate time series are often irregularly sampled and age uncertain, which is an important technical challenge to overcome for successful reconstruction of past climate variability and dynamics. Visual comparison and interpolation-based linear correlation approaches have been used to infer dependencies from such proxy time series. While the first is subjective, not measurable and not suitable for the comparison of many data sets at a time, the latter introduces interpolation bias, and both face difficulties if the underlying dependencies are nonlinear. In this paper we investigate similarity estimators that could be suitable for the quantitative investigation of dependencies in irregular and age-uncertain time series. We compare the Gaussian-kernel-based cross-correlation (gXCF, Rehfeld et al., 2011) and mutual information (gMI, Rehfeld et al., 2013) against their interpolation-based counterparts and the new event synchronization function (ESF). We test the efficiency of the methods in estimating coupling strength and coupling lag numerically, using ensembles of synthetic stalagmites with short, autocorrelated, linear and nonlinearly coupled proxy time series, and in the application to real stalagmite time series. In the linear test case, coupling strength increases are identified consistently for all estimators, while in the nonlinear test case the correlation-based approaches fail. The lag at which the time series are coupled is identified correctly as the maximum of the similarity functions in around 60-55% (in the linear case) to 53-42% (for the nonlinear processes) of the cases when the dating of the synthetic stalagmite is perfectly precise. If the age uncertainty increases beyond 5% of the time series length, however, the true coupling lag is not identified more often than the others for which the similarity function was estimated. Age uncertainty contributes up to half of the uncertainty in the similarity estimation process. Time series irregularity

  9. Robust Forecasting of Non-Stationary Time Series

    OpenAIRE

    Croux, C.; Fried, R.; Gijbels, I.; Mahieu, K.

    2010-01-01

    This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression, and fitted by a localized MM-estimator, combining high robustness and large efficiency. The proposed method is shown to produce reliable forecasts in the presence of outliers, non-linearity, and heteroscedasticity. In the absence of outliers, the forecasts are only slightly less precise than those based on a localized Least Squares estima...
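
    A localized MM-estimator is beyond a short sketch, but the robustness idea can be illustrated with the Theil-Sen estimator (median of pairwise slopes), which likewise resists outliers; this is a stand-in, not the paper's method:

```python
import numpy as np

def theil_sen_forecast(y, horizon):
    """Robust trend forecast: slope = median of all pairwise slopes,
    intercept = median residual. A single gross outlier barely moves
    either median, unlike least squares."""
    t = np.arange(len(y))
    slopes = [(y[j] - y[i]) / (j - i)
              for i in range(len(y)) for j in range(i + 1, len(y))]
    slope = np.median(slopes)
    intercept = np.median(y - slope * t)
    return intercept + slope * (len(y) - 1 + horizon)
```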

  10. Eulerian and Lagrangian views of a turbulent boundary layer flow using time-resolved tomographic PIV

    NARCIS (Netherlands)

    Schröder, A.; Geisler, R.; Staack, K.; Elsinga, G.E.; Scarano, F.; Wieneke, B.; Henning, A.; Poelma, C.; Westerweel, J.

    2010-01-01

    Coherent structures and their time evolution in the logarithmic region of a turbulent boundary layer, investigated by means of 3D space–time correlations and time-dependent conditional averaging techniques, are the focus of the present paper. Experiments have been performed in the water tunnel at TU

  11. Time Series Econometrics for the 21st Century

    Science.gov (United States)

    Hansen, Bruce E.

    2017-01-01

    The field of econometrics largely started with time series analysis because many early datasets were time-series macroeconomic data. As the field developed, more cross-sectional and longitudinal datasets were collected, which today dominate academic empirical research. In nonacademic (private sector, central bank, and governmental)…

  12. Effectiveness of firefly algorithm based neural network in time series ...

    African Journals Online (AJOL)

    Effectiveness of firefly algorithm based neural network in time series forecasting. ... In the experiments, three well known time series were used to evaluate the performance. Results obtained were compared with ... Keywords: Time series, Artificial Neural Network, Firefly Algorithm, Particle Swarm Optimization, Overfitting ...

  13. Time Series Analysis of Insar Data: Methods and Trends

    Science.gov (United States)

    Osmanoglu, Batuhan; Sunar, Filiz; Wdowinski, Shimon; Cano-Cabral, Enrique

    2015-01-01

    Time series analysis of InSAR data has emerged as an important tool for monitoring and measuring the displacement of the Earth's surface. Changes in the Earth's surface can result from a wide range of phenomena such as earthquakes, volcanoes, landslides, variations in ground water levels, and changes in wetland water levels. Time series analysis is applied to interferometric phase measurements, which wrap around when the observed motion is larger than one-half of the radar wavelength. Thus, the spatio-temporal "unwrapping" of phase observations is necessary to obtain physically meaningful results. Several different algorithms have been developed for time series analysis of InSAR data to solve for this ambiguity. These algorithms may employ different models for time series analysis, but they all generate a first-order deformation rate, which can be compared to each other. However, there is no single algorithm that can provide optimal results in all cases. Since time series analyses of InSAR data are used in a variety of applications with different characteristics, each algorithm possesses inherently unique strengths and weaknesses. In this review article, following a brief overview of InSAR technology, we discuss several algorithms developed for time series analysis of InSAR data using an example set of results for measuring subsidence rates in Mexico City.
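
    The wrapping ambiguity is easy to reproduce. In the sketch below (conventions and names are illustrative), two-way line-of-sight displacement d maps to phase 4πd/λ, so every λ/2 of motion wraps the phase by a full cycle; recovering d therefore requires unwrapping:

```python
import numpy as np

def wrap_phase(phi):
    """Wrap any phase into [-pi, pi), as an interferogram records it."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

def displacement_to_phase(d, wavelength):
    """Two-way line-of-sight displacement d gives interferometric phase
    4*pi*d/wavelength; motion beyond half a wavelength wraps around."""
    return wrap_phase(4 * np.pi * d / wavelength)
```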

  14. Interpretation of a compositional time series

    Science.gov (United States)

    Tolosana-Delgado, R.; van den Boogaart, K. G.

    2012-04-01

    Common methods for multivariate time series analysis use linear operations, from the definition of a time-lagged covariance/correlation to the prediction of new outcomes. However, when the time series response is a composition (a vector of positive components showing the relative importance of a set of parts in a total, like percentages and proportions), then linear operations are afflicted by several problems. For instance, it has long been recognised that (auto/cross-)correlations between raw percentages are spurious, more dependent on which other components are being considered than on any natural link between the components of interest. Also, a long-term forecast of a composition in models with a linear trend will ultimately predict negative components. In general terms, compositional data should not be treated on a raw scale, but after a log-ratio transformation (Aitchison, 1986: The Statistical Analysis of Compositional Data. Chapman and Hall). This is so because the information conveyed by compositional data is relative, as stated in their definition. The principle of working in coordinates allows any sort of multivariate analysis to be applied to a log-ratio transformed composition, as long as this transformation is invertible. This principle is fully applicable to time series analysis. We will discuss how results (both auto/cross-correlation functions and predictions) can be back-transformed, viewed and interpreted in a meaningful way. One view is to use the exhaustive set of all possible pairwise log-ratios, which allows the results to be expressed as D(D - 1)/2 separate, interpretable sets of one-dimensional models showing the behaviour of each possible pairwise log-ratio. Another view is the interpretation of estimated coefficients or correlations back-transformed in terms of compositions. These two views are compatible and complementary. These issues are illustrated with time series of seasonal precipitation patterns at different rain gauges of the USA
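
    The log-ratio principle can be sketched with the centred log-ratio (clr) transform, one common invertible choice (the abstract also discusses pairwise log-ratios); model in clr coordinates, then back-transform so forecasts are guaranteed to be valid compositions:

```python
import numpy as np

def clr(composition):
    """Centred log-ratio transform: maps a positive composition to
    coordinates where ordinary linear time-series tools are safe to use."""
    g = np.exp(np.mean(np.log(composition)))  # geometric mean
    return np.log(composition / g)

def clr_inverse(coords):
    """Back-transform model output to a composition summing to 1."""
    e = np.exp(coords)
    return e / e.sum()
```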

  15. Capturing Structure Implicitly from Time-Series having Limited Data

    OpenAIRE

    Emaasit, Daniel; Johnson, Matthew

    2018-01-01

    Scientific fields such as insider-threat detection and highway-safety planning often lack sufficient amounts of time-series data to estimate statistical models for the purpose of scientific discovery. Moreover, the available limited data are quite noisy. This presents a major challenge when estimating time-series models that are robust to overfitting and have well-calibrated uncertainty estimates. Most of the current literature in these fields involve visualizing the time-series for noticeabl...

  16. Transitional-turbulent spots and turbulent-turbulent spots in boundary layers.

    Science.gov (United States)

    Wu, Xiaohua; Moin, Parviz; Wallace, James M; Skarda, Jinhie; Lozano-Durán, Adrián; Hickey, Jean-Pierre

    2017-07-03

    Two observations drawn from a thoroughly validated direct numerical simulation of the canonical spatially developing, zero-pressure gradient, smooth, flat-plate boundary layer are presented here. The first is that, for bypass transition in the narrow sense defined herein, we found that the transitional-turbulent spot inception mechanism is analogous to the secondary instability of boundary-layer natural transition, namely a spanwise vortex filament becomes a Λ vortex and then a hairpin packet. Long streak meandering does occur but usually when a streak is infected by a nearby existing transitional-turbulent spot. Streak waviness and breakdown are, therefore, not the mechanisms for the inception of transitional-turbulent spots found here. Rather, they only facilitate the growth and spreading of existing transitional-turbulent spots. The second observation is the discovery, in the inner layer of the developed turbulent boundary layer, of what we call turbulent-turbulent spots. These turbulent-turbulent spots are dense concentrations of small-scale vortices with high swirling strength originating from hairpin packets. Although structurally quite similar to the transitional-turbulent spots, these turbulent-turbulent spots are generated locally in the fully turbulent environment, and they are persistent with a systematic variation of detection threshold level. They exert indentation, segmentation, and termination on the viscous sublayer streaks, and they coincide with local concentrations of high levels of Reynolds shear stress, enstrophy, and temperature fluctuations. The sublayer streaks seem to be passive and are often simply the rims of the indentation pockets arising from the turbulent-turbulent spots.

  17. Self-affinity in the dengue fever time series

    Science.gov (United States)

    Azevedo, S. M.; Saba, H.; Miranda, J. G. V.; Filho, A. S. Nascimento; Moret, M. A.

    2016-06-01

    Dengue is a complex public health problem that is common in tropical and subtropical regions. This disease has risen substantially in the last three decades, and the physical symptoms depict the self-affine behavior of the occurrences of reported dengue cases in Bahia, Brazil. This study uses detrended fluctuation analysis (DFA) to verify the scale behavior in a time series of dengue cases and to evaluate the long-range correlations that are characterized by the power law α exponent for different cities in Bahia, Brazil. The scaling exponent (α) presents different long-range correlations, i.e. uncorrelated, anti-persistent, persistent and diffusive behaviors. The long-range correlations highlight the complex behavior of the time series of this disease. The findings show that there are two distinct types of scale behavior. In the first behavior, the time series presents a persistent α exponent for a one-month period. For large periods, the time series signal approaches subdiffusive behavior. The hypothesis of the long-range correlations in the time series of the occurrences of reported dengue cases was validated. The observed self-affinity is useful as a forecasting tool for future periods through extrapolation of the α exponent behavior. This complex system has a higher predictability in a relatively short time (approximately one month), and it suggests a new tool in epidemiological control strategies. However, predictions for large periods using DFA are hidden by the subdiffusive behavior.
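
    A minimal single-scale step of first-order DFA might look as follows; the exponent α is then the slope of log F against log scale over a range of scales. Function and variable names are illustrative:

```python
import numpy as np

def dfa_fluctuation(x, scale):
    """Detrended fluctuation at one window size: integrate the series,
    split the profile into windows, remove a linear trend per window,
    and return the RMS residual F(scale)."""
    profile = np.cumsum(x - np.mean(x))
    n = len(profile) // scale
    F = []
    for k in range(n):
        seg = profile[k * scale:(k + 1) * scale]
        t = np.arange(scale)
        trend = np.polyval(np.polyfit(t, seg, 1), t)
        F.append(np.mean((seg - trend) ** 2))
    return np.sqrt(np.mean(F))
```

For uncorrelated noise the slope should come out near α = 0.5, the boundary between anti-persistent and persistent behavior mentioned in the abstract.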

  18. On the plurality of times: disunified time and the A-series | Nefdt ...

    African Journals Online (AJOL)

    Then, I attempt to show that disunified time is a problem for a semantics based on the A-series since A-truthmakers are hard to come by in a universe of temporally disconnected time-series. Finally, I provide a novel argument showing that presentists should be particularly fearful of such a universe. South African Journal of ...

  19. Time-series modeling of long-term weight self-monitoring data.

    Science.gov (United States)

    Helander, Elina; Pavel, Misha; Jimison, Holly; Korhonen, Ilkka

    2015-08-01

    Long-term self-monitoring of weight is beneficial for weight maintenance, especially after weight loss. Connected weight scales accumulate time series information over the long term and hence enable time series analysis of the data. The analysis can reveal individual patterns, provide more sensitive detection of significant weight trends, and enable more accurate and timely prediction of weight outcomes. However, long-term self-weighing data present several challenges that complicate the analysis. In particular, irregular sampling, missing data, and periodic (e.g. diurnal and weekly) patterns are common. In this study, we apply a time series modeling approach to daily weight time series from two individuals and describe the information that can be extracted from this kind of data. We study the properties of weight time series data, missing data and its link to individual behavior, periodic patterns and weight series segmentation. Understanding behavior through weight data and giving relevant feedback is desirable to enable positive interventions on health behaviors.
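
    The irregular-sampling and weekly-pattern points can be illustrated with a small sketch that places observations on a regular daily grid (gaps become NaN) and averages by weekday; function and variable names are illustrative, not the authors':

```python
import numpy as np

def weekly_pattern(days, weights):
    """Grid irregular self-weighing records by day (missing days -> NaN)
    and average by day of week to expose weekly periodicity.
    `days` are integer day offsets from the first observation."""
    grid = np.full(max(days) + 1, np.nan)
    grid[np.asarray(days)] = weights
    pattern = [float(np.nanmean(grid[d::7])) if np.any(~np.isnan(grid[d::7]))
               else float("nan") for d in range(7)]
    return grid, pattern
```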

  20. Time series prediction of apple scab using meteorological ...

    African Journals Online (AJOL)

    A new prediction model for the early warning of apple scab is proposed in this study. The method is based on artificial intelligence and time series prediction. The infection period of apple scab was evaluated as the time series prediction model instead of summation of wetness duration. Also, the relations of different ...

  1. Lagrangian statistics across the turbulent-nonturbulent interface in a turbulent plane jet.

    Science.gov (United States)

    Taveira, Rodrigo R; Diogo, José S; Lopes, Diogo C; da Silva, Carlos B

    2013-10-01

    Lagrangian statistics from millions of particles are used to study the turbulent entrainment mechanism in a direct numerical simulation of a turbulent plane jet at Re_λ ≈ 110. The particles (tracers) are initially seeded at the irrotational region of the jet near the turbulent shear layer and are followed as they are drawn into the turbulent region across the turbulent-nonturbulent interface (TNTI), allowing the study of the enstrophy buildup and thereby characterizing the turbulent entrainment mechanism in the jet. The use of Lagrangian statistics following fluid particles gives a more correct description of the entrainment mechanism than in previous works since the statistics in relation to the TNTI position involve data from the trajectories of the entraining fluid particles. The Lagrangian statistics for the particles show the existence of a velocity jump and a characteristic vorticity jump (with a thickness which is one order of magnitude greater than the Kolmogorov microscale), in agreement with previous results using Eulerian statistics. The particles initially acquire enstrophy by viscous diffusion and later by enstrophy production, which becomes "active" only deep inside the turbulent region. Both enstrophy diffusion and production near the TNTI differ substantially from inside the turbulent region. Only about 1% of all particles find their way into pockets of irrotational flow engulfed into the turbulent shear layer region, indicating that "engulfment" is not significant for the present flow, indirectly suggesting that the entrainment is largely due to "nibbling" small-scale mechanisms acting along the entire TNTI surface. Probability density functions of particle positions suggest that the particles spend more time crossing the region near the TNTI than traveling inside the turbulent region, consistent with the particles moving tangent to the interface around the time they cross it.

  2. Characterization of time series via Rényi complexity-entropy curves

    Science.gov (United States)

    Jauregui, M.; Zunino, L.; Lenzi, E. K.; Mendes, R. S.; Ribeiro, H. V.

    2018-05-01

    One of the most useful tools for distinguishing between chaotic and stochastic time series is the so-called complexity-entropy causality plane. This diagram involves two complexity measures: the Shannon entropy and the statistical complexity. Recently, this idea has been generalized by considering the Tsallis monoparametric generalization of the Shannon entropy, yielding complexity-entropy curves. These curves have proven to enhance the discrimination among different time series related to stochastic and chaotic processes of numerical and experimental nature. Here we further explore these complexity-entropy curves in the context of the Rényi entropy, which is another monoparametric generalization of the Shannon entropy. By combining the Rényi entropy with the proper generalization of the statistical complexity, we associate a parametric curve (the Rényi complexity-entropy curve) with a given time series. We explore this approach in a series of numerical and experimental applications, demonstrating the usefulness of this new technique for time series analysis. We show that the Rényi complexity-entropy curves enable the differentiation among time series of chaotic, stochastic, and periodic nature. In particular, time series of stochastic nature are associated with curves displaying positive curvature in a neighborhood of their initial points, whereas curves related to chaotic phenomena have a negative curvature; finally, periodic time series are represented by vertical straight lines.
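
    A sketch of the entropy half of the construction, using ordinal (Bandt-Pompe) patterns: scanning the parameter q, paired with the generalized statistical complexity (omitted here), traces the Rényi complexity-entropy curve. Names are illustrative:

```python
import numpy as np
from itertools import permutations

def ordinal_distribution(x, d=3):
    """Bandt-Pompe symbolization: relative frequencies of the d! ordinal
    patterns found in sliding windows of length d."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        counts[tuple(int(v) for v in np.argsort(x[i:i + d]))] += 1
    p = np.fromiter(counts.values(), dtype=float)
    return p / p.sum()

def renyi_entropy(p, q):
    """Normalized Renyi entropy H_q of a distribution p; q -> 1 recovers
    the Shannon case used in the classical causality plane."""
    n = len(p)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        h = -np.sum(p * np.log(p))
    else:
        h = np.log(np.sum(p ** q)) / (1.0 - q)
    return h / np.log(n)
```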

  3. Quantifying Selection with Pool-Seq Time Series Data.

    Science.gov (United States)

    Taus, Thomas; Futschik, Andreas; Schlötterer, Christian

    2017-11-01

    Allele frequency time series data constitute a powerful resource for unraveling mechanisms of adaptation, because the temporal dimension captures important information about evolutionary forces. In particular, Evolve and Resequence (E&R), the whole-genome sequencing of replicated experimentally evolving populations, is becoming increasingly popular. Based on computer simulations, several studies proposed experimental parameters to optimize the identification of the selection targets. No such recommendations are available for the underlying parameters, selection strength and dominance. Here, we introduce a highly accurate method to estimate selection parameters from replicated time series data, which is fast enough to be applied on a genome scale. Using this new method, we evaluate how experimental parameters can be optimized to obtain the most reliable estimates for selection parameters. We show that the effective population size (Ne) and the number of replicates have the largest impact. Because the number of time points and sequencing coverage had only a minor effect, we suggest that time series analysis is feasible without major increase in sequencing costs. We anticipate that time series analysis will become routine in E&R studies. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
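
    The deterministic core of reading a selection coefficient off a frequency time series can be sketched as follows; drift, dominance, replicates and Pool-Seq sampling noise, which the paper's estimator handles, are ignored here:

```python
import math

def estimate_s(p0, p1, generations):
    """Genic-selection sketch: with fitness ratio (1+s) per generation the
    allele's odds grow geometrically, p_t/(1-p_t) = p_0/(1-p_0)*(1+s)^t,
    so the log-odds are linear in time and s follows from two time points."""
    logit = lambda p: math.log(p / (1.0 - p))
    return math.exp((logit(p1) - logit(p0)) / generations) - 1.0
```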

  4. Electromagnetic radiation from strong Langmuir turbulence

    International Nuclear Information System (INIS)

    Akimoto, K.; Rowland, H.L.; Papadopoulos, K.

    1988-01-01

    A series of computer simulations is reported showing the generation of electromagnetic radiation by strong Langmuir turbulence. The simulations were carried out with a fully electromagnetic 2.5-dimensional fluid code. The radiation process takes place in two stages that reflect the evolution of the electrostatic turbulence. During the first stage, while the electrostatic turbulence is evolving from an initial linear wave packet into a planar soliton, the radiation is primarily at ω_e. During the second stage, when transverse instabilities lead to the collapse and dissipation of the solitons, 2ω_e and ω_e radiation are comparable, and 3ω_e is also present. The radiation power at ω = 2ω_e is in good agreement with theoretical predictions for electromagnetic emissions by collapsing solitons.

  5. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequence analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degenerating the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations, with associated costs, to transform the time series segments, we determine a new time series, that is our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allows us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
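
    The transformation-cost idea can be caricatured with a toy cost between two (time, value) segments; the greedy in-order pairing and the cost constants below are simplifications for illustration, not the actual TACTS operation set:

```python
def transformation_cost(seg_a, seg_b, c_add=1.0, c_shift=0.5, c_amp=0.5):
    """Toy cost of transforming one segment into the next: matched points
    pay for time shifts and amplitude changes, unmatched points pay an
    add/delete cost. Segments are lists of (time, value) pairs."""
    n = min(len(seg_a), len(seg_b))
    cost = c_add * abs(len(seg_a) - len(seg_b))   # points added or deleted
    for (ta, va), (tb, vb) in zip(seg_a[:n], seg_b[:n]):
        cost += c_shift * abs(tb - ta) + c_amp * abs(vb - va)
    return cost
```

Evaluating this cost for consecutive segment pairs yields a regularly sampled cost series that standard methods can then analyze.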

  6. Transformation-cost time-series method for analyzing irregularly sampled data

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G. Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequence analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degenerating the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations—with associated costs—to transform the time series segments, we determine a new time series, that is our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allows us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.

  7. A multidisciplinary database for geophysical time series management

    Science.gov (United States)

    Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.

    2013-12-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. With the term time series we refer to a set of observations of a given phenomenon acquired sequentially in time. When the time intervals are equally spaced, one speaks of a sampling period or sampling frequency. Our work describes in detail a possible methodology for storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), in order to acquire time series from different data sources and standardize them within a relational database. The operation of standardization provides the ability to perform operations, such as query and visualization, on many measures, synchronizing them using a common time scale. The proposed architecture follows a multiple layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the loader layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by using a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the ability to query different time series over a specified time range, or to follow signal acquisition in real time, according to a per-user data access policy.
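
    The standardization idea, one table keyed by series and timestamp so that heterogeneous sources share a common time scale and can be queried together, can be sketched with SQLite; the schema and names are illustrative, not TSDSystem's actual layout:

```python
import sqlite3

def make_tsdb():
    """Minimal relational layout: every sample, whatever its source,
    lands in one table keyed by (series, timestamp)."""
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE samples (
        series TEXT NOT NULL,
        ts     REAL NOT NULL,
        value  REAL,
        PRIMARY KEY (series, ts))""")
    return db

def query_range(db, series, t0, t1):
    """Query one series over a time range, on the common time scale."""
    cur = db.execute(
        "SELECT ts, value FROM samples "
        "WHERE series = ? AND ts BETWEEN ? AND ? ORDER BY ts",
        (series, t0, t1))
    return cur.fetchall()
```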

  8. Time-dependent transport of energetic particles in magnetic turbulence: computer simulations versus analytical theory

    Science.gov (United States)

    Arendt, V.; Shalchi, A.

    2018-06-01

    We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
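
    A running diffusion coefficient of the kind compared here can be computed from any trajectory ensemble; a minimal sketch (a discrete random walk stands in for the test-particle code, and names are illustrative):

```python
import numpy as np

def running_diffusion(x, dt):
    """Running diffusion coefficient d_xx(t) = <(x(t) - x(0))^2> / (2 t)
    from an ensemble of trajectories (rows = particles, columns = time
    steps). Its plateau is the diffusive limit compared with theory."""
    disp2 = np.mean((x - x[:, :1]) ** 2, axis=0)   # mean squared displacement
    t = dt * np.arange(x.shape[1])
    out = np.zeros_like(disp2)
    out[1:] = disp2[1:] / (2.0 * t[1:])            # skip t = 0
    return out
```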

  9. Modeling financial time series with S-plus

    CERN Document Server

    Zivot, Eric

    2003-01-01

    The field of financial econometrics has exploded over the last decade. This book represents an integration of theory, methods, and examples using the S-PLUS statistical modeling language and the S+FinMetrics module to facilitate the practice of financial econometrics. It is the first book to show the power of S-PLUS for the analysis of time series data. It is written for researchers and practitioners in the finance industry, academic researchers in economics and finance, and advanced MBA and graduate students in economics and finance. Readers are assumed to have a basic knowledge of S-PLUS and a solid grounding in basic statistics and time series concepts. Eric Zivot is an associate professor and Gary Waterman Distinguished Scholar in the Economics Department at the University of Washington, and is co-director of the nascent Professional Master's Program in Computational Finance. He regularly teaches courses on econometric theory, financial econometrics and time series econometrics, and is the recipient of the He...

  10. Application of Time Series Analysis in Determination of Lag Time in Jahanbin Basin

    Directory of Open Access Journals (Sweden)

    Seied Yahya Mirzaee

    2005-11-01

    One of the important issues that plays a significant role in the hydrological study of a basin is the determination of lag time. The quantity of rainfall-related lag time depends on several factors, such as permeability, vegetation cover, catchment slope, rainfall intensity, storm duration and type of rain. Lag time is an important parameter in many projects, such as dam design, and also in water resource studies. The lag time of a basin can be calculated using various methods; one of these is time series analysis of spectral density. The analysis is based on Fourier series: the time series is approximated with sine and cosine functions, and harmonically significant quantities with individual frequencies are identified. The spectral density of multiple time series can be used to obtain the basin lag time for annual runoff and short-term rainfall fluctuations. A long lag time could be due to snowmelt, as well as to ice melting caused by rainfall on freezing days. In this research the lag time of the Jahanbin basin has been determined using the spectral density method. The catchment is subject to both rainfall and snowfall. For short-term rainfall fluctuations with return periods of 2, 3 and 4 months, the lag times were found to be 0.18, 0.5 and 0.083 month, respectively.
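    The lag-time concept above can be illustrated with a simple cross-correlation estimate (a hedged stand-in, not the spectral-density procedure of the study): the lag between a rainfall series and the runoff it produces is taken as the offset at which their cross-correlation peaks. All numbers below are invented.

```python
import numpy as np

# Estimate the lag between two series from the peak of their
# cross-correlation. This is illustrative synthetic data, not basin data.
def estimate_lag(x, y, max_lag):
    """Lag k maximising corr(x[t], y[t+k]); positive k means y lags x."""
    n = len(x)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    best_k, best_c = 0, -np.inf
    for k in range(-max_lag, max_lag + 1):
        c = np.mean(x[:n - k] * y[k:]) if k >= 0 else np.mean(x[-k:] * y[:n + k])
        if c > best_c:
            best_k, best_c = k, c
    return best_k

rng = np.random.default_rng(0)
true_lag = 3                                   # in sampling intervals
rain = rng.gamma(shape=2.0, scale=1.0, size=500)
runoff = np.roll(rain, true_lag) + 0.1 * rng.standard_normal(500)
print(estimate_lag(rain, runoff, max_lag=10))  # → 3, the injected lag
```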

  11. Modeling Time Series Data for Supervised Learning

    Science.gov (United States)

    Baydogan, Mustafa Gokce

    2012-01-01

    Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science and multimedia naturally generate TS data. Each series provides a high-dimensional data vector that challenges the learning…

  12. Empirical method to measure stochasticity and multifractality in nonlinear time series

    Science.gov (United States)

    Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping

    2013-12-01

    An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of a time series from a Wiener process, so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed by using this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The results show that while developed markets evolve very much like an Ito process, emerging markets are far from efficient. Differences in the multifractal structures and leverage effects between developed and emerging markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.
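    One concrete way to measure deviation from a Wiener process (a hedged stand-in for the authors' algorithm, which is not reproduced here) is to fit the exponent H in Var[x(t+τ) − x(t)] ∼ τ^(2H); a Wiener process gives H = 0.5, while persistent or anti-persistent series deviate from it.

```python
import numpy as np

# Fit the increment-variance scaling exponent H from a log-log regression.
# A Wiener process (random walk with Gaussian steps) should give H ≈ 0.5.
def scaling_exponent(x, taus):
    pts = [(np.log(t), np.log(np.var(x[t:] - x[:-t]))) for t in taus]
    lt, lv = map(np.array, zip(*pts))
    return np.polyfit(lt, lv, 1)[0] / 2.0   # slope / 2 = H

rng = np.random.default_rng(1)
wiener = np.cumsum(rng.standard_normal(20000))
H = scaling_exponent(wiener, taus=[1, 2, 4, 8, 16, 32])
print(round(H, 2))  # close to 0.5 for a Wiener process
```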

  13. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

    Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effect of various patient management interventions, and for clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Clinical time series prediction: towards a hierarchical dynamical system framework

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

    Objective: Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods: Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results: We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion: A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive

  15. LDA measurements and turbulence spectral analysis in an agitated vessel

    Directory of Open Access Journals (Sweden)

    Chára Zdeněk

    2013-04-01

    During recent years, considerable improvement has been achieved in the derivation of the turbulence power spectrum from Laser Doppler Anemometry (LDA). Several methods have been proposed to approximate the irregularly sampled LDA data, e.g. the Lomb-Scargle method, which estimates the amplitude and phase of spectral lines from missing data; methods based on the reconstruction of the auto-correlation function (referred to as the correlation slotting technique); and methods based on reconstructing the time series by interpolating between the uneven samples and resampling. These different methods were applied to the LDA data measured in an agitated vessel and the results of the power spectrum calculations were compared. The measurements were performed in a mixing vessel with a flat bottom. The vessel was equipped with four baffles and agitated with a six-blade pitched-blade impeller. Three values of the impeller speed (Reynolds number) were tested. Long time series of the axial velocity component were measured at selected points. At each point the time series were analyzed and evaluated in the form of a power spectrum.

  16. LDA measurements and turbulence spectral analysis in an agitated vessel

    Science.gov (United States)

    Kysela, Bohuš; Konfršt, Jiří; Chára, Zdeněk

    2013-04-01

    During recent years, considerable improvement has been achieved in the derivation of the turbulence power spectrum from Laser Doppler Anemometry (LDA). Several methods have been proposed to approximate the irregularly sampled LDA data, e.g. the Lomb-Scargle method, which estimates the amplitude and phase of spectral lines from missing data; methods based on the reconstruction of the auto-correlation function (referred to as the correlation slotting technique); and methods based on reconstructing the time series by interpolating between the uneven samples and resampling. These different methods were applied to the LDA data measured in an agitated vessel and the results of the power spectrum calculations were compared. The measurements were performed in a mixing vessel with a flat bottom. The vessel was equipped with four baffles and agitated with a six-blade pitched-blade impeller. Three values of the impeller speed (Reynolds number) were tested. Long time series of the axial velocity component were measured at selected points. At each point the time series were analyzed and evaluated in the form of a power spectrum.
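    The Lomb-Scargle method named above can be illustrated on synthetic irregularly sampled data; SciPy provides an implementation. The tone frequency, sample times and noise level below are invented for illustration, not taken from the study.

```python
import numpy as np
from scipy.signal import lombscargle

# The Lomb-Scargle periodogram estimates spectral power directly from
# irregularly sampled data, such as LDA burst arrivals.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 100.0, size=2000))  # irregular sample times (s)
f0 = 0.7                                         # injected tone (Hz)
y = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)

freqs = np.linspace(0.05, 2.0, 400)              # trial frequencies (Hz)
power = lombscargle(t, y, 2 * np.pi * freqs)     # expects angular frequencies
print(round(freqs[np.argmax(power)], 2))         # → 0.7, the injected tone
```

    Note that `scipy.signal.lombscargle` takes angular frequencies, hence the factor 2π.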

  17. Characterizing time series: when Granger causality triggers complex networks

    Science.gov (United States)

    Ge, Tian; Cui, Yindong; Lin, Wei; Kurths, Jürgen; Liu, Chong

    2012-08-01

    In this paper, we propose a new approach to characterize time series with noise perturbations in both the time and frequency domains by combining Granger causality and complex networks. We construct directed and weighted complex networks from time series and use representative network measures to describe their physical and topological properties. Through analyzing the typical dynamical behaviors of some physical models and the MIT-BIH (Massachusetts Institute of Technology-Beth Israel Hospital) human electrocardiogram data sets, we show that the proposed approach is able to capture and characterize various dynamics and has much potential for analyzing real-world time series of rather short length.
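    The pairwise Granger test underlying the approach above can be sketched as follows: x "Granger-causes" y if past values of x improve a linear prediction of y beyond what past y alone achieves, judged by an F-statistic. This is a minimal self-contained sketch, not the authors' network construction; the coupled test system is invented.

```python
import numpy as np

def granger_f(y, x, p):
    """F-statistic for H0: p lags of x add nothing to an AR(p) model of y."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    restricted = np.column_stack([ones, lags_y])         # past y only
    full = np.column_stack([ones, lags_y, lags_x])       # past y and past x
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    df = (n - p) - full.shape[1]
    return ((rss(restricted) - rss(full)) / p) / (rss(full) / df)

rng = np.random.default_rng(3)
x = rng.standard_normal(1000)
y = np.zeros(1000)
for t in range(1, 1000):            # y is driven by lagged x, not vice versa
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
print(granger_f(y, x, p=2) > granger_f(x, y, p=2))  # → True
```

    In the network approach, such pairwise statistics would become weighted directed edges between nodes representing the series.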

  18. Characterizing time series: when Granger causality triggers complex networks

    International Nuclear Information System (INIS)

    Ge Tian; Cui Yindong; Lin Wei; Liu Chong; Kurths, Jürgen

    2012-01-01

    In this paper, we propose a new approach to characterize time series with noise perturbations in both the time and frequency domains by combining Granger causality and complex networks. We construct directed and weighted complex networks from time series and use representative network measures to describe their physical and topological properties. Through analyzing the typical dynamical behaviors of some physical models and the MIT-BIH human electrocardiogram data sets, we show that the proposed approach is able to capture and characterize various dynamics and has much potential for analyzing real-world time series of rather short length. (paper)

  19. Multivariate time series analysis with R and financial applications

    CERN Document Server

    Tsay, Ruey S

    2013-01-01

    Since the publication of his first book, Analysis of Financial Time Series, Ruey Tsay has become one of the most influential and prominent experts on the topic of time series. Different from the traditional and oftentimes complex approach to multivariate (MV) time series, this sequel book emphasizes structural specification, which results in simplified parsimonious VARMA modeling and, hence, eases comprehension. Through a fundamental balance between theory and applications, the book supplies readers with an accessible approach to financial econometric models and their applications to real-worl

  20. Measurements of spatial population synchrony: influence of time series transformations.

    Science.gov (United States)

    Chevalier, Mathieu; Laffaille, Pascal; Ferdy, Jean-Baptiste; Grenouillet, Gaël

    2015-09-01

    Two mechanisms have been proposed to explain spatial population synchrony: dispersal among populations, and the spatial correlation of density-independent factors (the "Moran effect"). To identify which of these two mechanisms drives spatial population synchrony, time series transformations (TSTs) of abundance data have been used to remove the signature of one mechanism and highlight the effect of the other. However, several issues with TSTs remain, and to date no consensus has emerged about how population time series should be handled in synchrony studies. Here, by using 3131 time series involving 34 fish species found in French rivers, we computed several metrics commonly used in synchrony studies to determine whether a large-scale climatic factor (temperature) influenced fish population dynamics at the regional scale, and to test the effect of three commonly used TSTs (detrending, prewhitening and a combination of both) on these metrics. We also tested whether the influence of TSTs on time series and population synchrony levels was related to the features of the time series, using both empirical and simulated time series. For several species, and regardless of the TST used, we found evidence of a Moran effect on freshwater fish populations. However, these results were globally biased downward by TSTs, which reduced our ability to detect significant signals. Depending on the species and the features of the time series, we found that TSTs could lead to contradictory results, regardless of the metric considered. Finally, we suggest guidelines on how population time series should be processed in synchrony studies.

  1. The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence

    Science.gov (United States)

    Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo

    2018-05-01

    The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the possibility that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with the approach adopted here, which we call the relaxed filtering method and which takes into account the occurrence of the Hurst phenomenon, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
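    The classical rescaled-range statistic behind the H_R estimator named above can be sketched as follows: compute R/S over non-overlapping windows of several sizes and estimate H as the slope of log(R/S) versus log(window size). The window sizes and the white-noise test signal are chosen purely for illustration.

```python
import numpy as np

def rescaled_range(x):
    """R/S of one window: range of the mean-adjusted cumulative sum over std."""
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std()

def hurst_rs(x, window_sizes):
    pts = []
    for w in window_sizes:
        chunks = [x[i:i + w] for i in range(0, len(x) - w + 1, w)]
        pts.append((np.log(w),
                    np.log(np.mean([rescaled_range(c) for c in chunks]))))
    lw, lrs = map(np.array, zip(*pts))
    return np.polyfit(lw, lrs, 1)[0]   # slope = H

rng = np.random.default_rng(4)
white = rng.standard_normal(65536)
H = hurst_rs(white, window_sizes=[64, 128, 256, 512, 1024, 2048])
print(round(H, 2))  # roughly 0.5 for white noise (R/S has a small-sample bias)
```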

  2. Behaviour of turbulence models near a turbulent/non-turbulent interface revisited

    International Nuclear Information System (INIS)

    Ferrey, P.; Aupoix, B.

    2006-01-01

    The behaviour of turbulence models near a turbulent/non-turbulent interface is investigated. The analysis holds for two-equation models as well as for Reynolds stress turbulence models using the Daly and Harlow diffusion model. The behaviour near the interface is shown not to be a power law, as usually considered, but a more complex parametric solution. It is explained why previous works seemed to confirm the power-law solution numerically. Constraints for turbulence modelling are drawn, i.e., conditions ensuring that models have a good behaviour near a turbulent/non-turbulent interface, so that the solution is not sensitive to the small turbulence levels imposed in the irrotational flow.

  3. Stochastic time series analysis of hydrology data for water resources

    Science.gov (United States)

    Sathish, S.; Khadar Babu, S. K.

    2017-11-01

    This paper applies stochastic time series analysis to hydrology data in its seasonal stages. Different statistical tests for predicting the hydrologic time series are based on the Thomas-Fiering model. Hydrologic time series of flood flows have received a great deal of attention worldwide, and the areas of time series analysis concerned with stochastic processes are expanding with growing concern about seasonal periods and global warming. A recent trend among researchers is to test for seasonal periods in hydrologic flow series using a stochastic process based on the Thomas-Fiering model. The present article proposes to predict the seasonal periods in hydrology using the Thomas-Fiering model.
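    The Thomas-Fiering model referenced above generates each month's flow from the previous month through a lag-one regression with month-specific means, standard deviations and correlations. The sketch below is a hedged illustration; all parameter values are invented, not taken from the paper.

```python
import numpy as np

# Thomas-Fiering synthetic monthly streamflow generation (illustrative).
rng = np.random.default_rng(5)
mu = np.array([30, 28, 35, 50, 80, 120, 90, 60, 45, 40, 35, 32], float)
sd = 0.2 * mu                 # monthly standard deviations (invented)
r = np.full(12, 0.6)          # lag-one correlation from month j-1 to j

def generate(n_years, q0):
    q, prev, years = q0, 11, []
    for _ in range(n_years):
        row = []
        for j in range(12):
            slope = r[j] * sd[j] / sd[prev]           # regression slope
            q = mu[j] + slope * (q - mu[prev]) \
                + sd[j] * np.sqrt(1.0 - r[j] ** 2) * rng.standard_normal()
            row.append(q)
            prev = j
        years.append(row)
    return np.array(years)

sim = generate(n_years=200, q0=mu[11])
print(np.allclose(sim.mean(axis=0), mu, rtol=0.15))  # → True
```

    The generated series reproduce the prescribed monthly means and, in stationarity, the prescribed monthly variances.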

  4. Neural network versus classical time series forecasting models

    Science.gov (United States)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach which can be trained to map past values of a time series. In this study the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average (SARIMA) model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using the mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when a Box-Cox transformation was used as data preprocessing.
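    The Box-Cox preprocessing found most effective above can be sketched as follows: the transform rescales positive data before training, and its inverse maps forecasts back to the original units. The price values and the λ = 0.5 choice are invented for illustration.

```python
import numpy as np

# Box-Cox transform and its inverse (lam = 0 reduces to the log transform).
def boxcox(y, lam):
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

prices = np.array([1200.0, 1250.0, 1310.0, 1290.0, 1400.0])  # invented data
z = boxcox(prices, lam=0.5)
print(np.allclose(inv_boxcox(z, lam=0.5), prices))  # → True: invertible
```

    In practice λ is chosen by maximum likelihood; forecasts made on the transformed scale must be passed through the inverse before computing error metrics in original units.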

  5. Nonlinear time series analysis of the human electrocardiogram

    International Nuclear Information System (INIS)

    Perc, Matjaz

    2005-01-01

    We analyse the human electrocardiogram with simple nonlinear time series analysis methods that are appropriate for graduate as well as undergraduate courses. In particular, attention is devoted to the notions of determinism and stationarity in physiological data. We emphasize that methods of nonlinear time series analysis can be successfully applied only if the studied data set originates from a deterministic stationary system. After positively establishing the presence of determinism and stationarity in the studied electrocardiogram, we calculate the maximal Lyapunov exponent, thus providing interesting insights into the dynamics of the human heart. Moreover, to facilitate interest and enable the integration of nonlinear time series analysis methods into the curriculum at an early stage of the educational process, we also provide user-friendly programs for each implemented method

  6. Multichannel biomedical time series clustering via hierarchical probabilistic latent semantic analysis.

    Science.gov (United States)

    Wang, Jin; Sun, Xiangping; Nahavandi, Saeid; Kouzani, Abbas; Wu, Yuchuan; She, Mary

    2014-11-01

    Biomedical time series clustering, which automatically groups a collection of time series according to their internal similarity, is of importance for medical record management and inspection, such as bio-signal archiving and retrieval. In this paper, a novel framework is proposed that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, the Hierarchical probabilistic Latent Semantic Analysis (H-pLSA), originally developed for visual motion analysis, to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSA are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel Electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to variations of parameters, including the length of local segments and the dictionary size. Although the experimental evaluation used multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for clustering multichannel biomedical time series according to their structural similarity, which has many applications in biomedical time series management. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Hidden Markov Models for Time Series An Introduction Using R

    CERN Document Server

    Zucchini, Walter

    2009-01-01

    Illustrates the flexibility of HMMs as general-purpose models for time series data. This work presents an overview of HMMs for analyzing time series data, from continuous-valued, circular, and multivariate series to binary data, bounded and unbounded counts and categorical observations.

  8. Constructing ordinal partition transition networks from multivariate time series.

    Science.gov (United States)

    Zhang, Jiayang; Zhou, Jie; Tang, Ming; Guo, Heng; Small, Michael; Zou, Yong

    2017-08-10

    A growing number of algorithms have been proposed to map a scalar time series into an ordinal partition transition network. However, most observable phenomena in the empirical sciences are of a multivariate nature. We construct ordinal partition transition networks for multivariate time series. This approach yields weighted directed networks representing the pattern transition properties of time series in velocity space, which hence provides dynamical insights into the underlying system. Furthermore, we propose a measure of entropy to characterize ordinal partition transition dynamics, which is sensitive to capturing the possible local geometric changes of phase space trajectories. We demonstrate the applicability of pattern transition networks to capture phase coherence to non-coherence transitions, and to characterize paths to phase synchronization. Therefore, we conclude that the ordinal partition transition network approach provides complementary insight to the traditional symbolic analysis of nonlinear multivariate time series.

  9. Permutation entropy of finite-length white-noise time series.

    Science.gov (United States)

    Little, Douglas J; Kane, Deb M

    2016-08-01

    Permutation entropy (PE) is commonly used to discriminate complex structure from white noise in a time series. While the PE of white noise is well understood in the long time-series limit, analysis in the general case is currently lacking. Here the expectation value and variance of white-noise PE are derived as functions of the number of ordinal pattern trials, N, and the embedding dimension, D. It is demonstrated that the probability distribution of the white-noise PE converges to a χ² distribution with D!-1 degrees of freedom as N becomes large. It is further demonstrated that the PE variance for an arbitrary time series can be estimated as the variance of a related metric, the Kullback-Leibler entropy (KLE), allowing the qualitative N≫D! condition to be recast as a quantitative estimate of the N required to achieve a desired PE calculation precision. Application of this theory to statistical inference is demonstrated in the case of an experimentally obtained noise series, where the probability of obtaining the observed PE value was calculated assuming a white-noise time series. Standard statistical inference can be used to draw conclusions whether the white-noise null hypothesis can be accepted or rejected. This methodology can be applied to other null hypotheses, such as discriminating whether two time series are generated from different complex system states.
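    The permutation entropy discussed above can be sketched as follows: count the ordinal pattern of each length-D window and take the Shannon entropy of the pattern distribution, normalised by log(D!) so that values lie in [0, 1]. The series lengths and D below are chosen only for illustration.

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, D):
    """Normalised permutation entropy of a series for embedding dimension D."""
    counts = {}
    for i in range(len(x) - D + 1):
        pattern = tuple(np.argsort(x[i:i + D]))   # ordinal pattern of window
        counts[pattern] = counts.get(pattern, 0) + 1
    n = sum(counts.values())
    h = -sum((c / n) * log(c / n) for c in counts.values())
    return h / log(factorial(D))                  # normalised to [0, 1]

pe_ordered = permutation_entropy(np.arange(100.0), D=3)   # one pattern only
rng = np.random.default_rng(6)
pe_noise = permutation_entropy(rng.standard_normal(20000), D=3)
print(pe_ordered == 0.0, round(pe_noise, 2))  # → True 1.0
```

    A fully ordered series produces a single pattern and zero entropy, while white noise uses all D! patterns almost uniformly, consistent with the large-N behaviour derived in the paper.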

  10. Multiresolution analysis of Bursa Malaysia KLCI time series

    Science.gov (United States)

    Ismail, Mohd Tahir; Dghais, Amel Abdoullah Ahmed

    2017-05-01

    In general, a time series is simply a sequence of numbers collected at regular intervals over a period. Financial time series data processing is concerned with the theory and practice of processing asset prices over time, such as currency, commodity, and stock market data. The primary aim of this study is to understand the fundamental characteristics of selected financial time series by using time-domain as well as frequency-domain analysis, after which prediction can be executed for the desired system for in-sample forecasting. In this study, multiresolution analysis, with the aid of the discrete wavelet transform (DWT) and the maximal overlap discrete wavelet transform (MODWT), is used to pinpoint special characteristics of the Bursa Malaysia KLCI (Kuala Lumpur Composite Index) daily closing prices and return values. In addition, a further case study discusses the modeling of the Bursa Malaysia KLCI using linear ARIMA with wavelets to address how the multiresolution approach improves fitting and forecasting results.
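    One level of the discrete wavelet transform used in such multiresolution analyses can be sketched with the Haar wavelet (chosen for brevity; the study may use a different filter): the signal splits into a smooth approximation and a detail series, each half the original length. The price values below are invented.

```python
import numpy as np

# One-level Haar DWT: orthonormal split into approximation and detail.
def haar_dwt(x):
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # local averages (trend)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # local differences
    return approx, detail

def haar_idwt(approx, detail):
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

prices = np.array([100.0, 102.0, 101.0, 105.0, 107.0, 106.0, 110.0, 108.0])
a, d = haar_dwt(prices)
print(np.allclose(haar_idwt(a, d), prices))  # → True: perfect reconstruction
```

    Applying the transform recursively to the approximation yields the multilevel decomposition; the MODWT variant avoids the downsampling so every level keeps the original length.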

  11. Modelling bursty time series

    International Nuclear Information System (INIS)

    Vajna, Szabolcs; Kertész, János; Tóth, Bálint

    2013-01-01

    Many human-related activities show power-law decaying interevent time distribution with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power-law decaying for any kind of input distributions that remain normalizable in the infinite list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of interevent time distribution (β) and autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distribution. We conclude that slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated. (paper)

  12. Timing calibration and spectral cleaning of LOFAR time series data

    NARCIS (Netherlands)

    Corstanje, A.; Buitink, S.; Enriquez, J. E.; Falcke, H.; Horandel, J. R.; Krause, M.; Nelles, A.; Rachen, J. P.; Schellart, P.; Scholten, O.; ter Veen, S.; Thoudam, S.; Trinh, T. N. G.

    We describe a method for spectral cleaning and timing calibration of short time series data of the voltage in individual radio interferometer receivers. It makes use of phase differences in fast Fourier transform (FFT) spectra across antenna pairs. For strong, localized terrestrial sources these are

  13. Time series momentum and contrarian effects in the Chinese stock market

    Science.gov (United States)

    Shi, Huai-Long; Zhou, Wei-Xing

    2017-10-01

    This paper concentrates on the time series momentum or contrarian effects in the Chinese stock market. We evaluate the performance of the time series momentum strategy applied to major stock indices in mainland China and explore the relation between the performance of time series momentum strategies and some firm-specific characteristics. Our findings indicate that there is a time series momentum effect in the short run and a contrarian effect in the long run in the Chinese stock market. The performances of the time series momentum and contrarian strategies are highly dependent on the look-back and holding periods and firm-specific characteristics.
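    A time series momentum rule of the kind evaluated above can be sketched as follows: go long (short) for one period when the trailing k-period return is positive (negative). The price path is simulated with an upward drift so the trend-following rule should profit; all numbers are invented, not the study's data.

```python
import numpy as np

def tsmom_returns(prices, lookback):
    """One-period strategy returns: sign of the trailing lookback return."""
    rets = np.diff(prices) / prices[:-1]
    signal = np.sign(prices[lookback:-1] - prices[:-lookback - 1])
    return signal * rets[lookback:]

rng = np.random.default_rng(7)
steps = 0.002 + 0.01 * rng.standard_normal(2000)   # drift + noise (log steps)
prices = 100.0 * np.exp(np.cumsum(steps))
strat = tsmom_returns(prices, lookback=20)
print(strat.mean() > 0)  # → True on this trending path
```

    Consistent with the paper's finding, the sign and size of such profits depend heavily on the look-back and holding periods chosen.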

  14. Time-Series Analysis: A Cautionary Tale

    Science.gov (United States)

    Damadeo, Robert

    2015-01-01

    Time-series analysis has often been a useful tool in atmospheric science for deriving long-term trends in various atmospherically important parameters (e.g., temperature or the concentration of trace gas species). In particular, time-series analysis has been repeatedly applied to satellite datasets in order to derive the long-term trends in stratospheric ozone, which is a critical atmospheric constituent. However, many of the potential pitfalls relating to the non-uniform sampling of the datasets were often ignored and the results presented by the scientific community have been unknowingly biased. A newly developed and more robust application of this technique is applied to the Stratospheric Aerosol and Gas Experiment (SAGE) II version 7.0 ozone dataset and the previous biases and newly derived trends are presented.

  15. Characterizing interdependencies of multiple time series theory and applications

    CERN Document Server

    Hosoya, Yuzo; Takimoto, Taro; Kinoshita, Ryo

    2017-01-01

    This book introduces academic researchers and professionals to the basic concepts and methods for characterizing interdependencies of multiple time series in the frequency domain. Detecting causal directions between a pair of time series and the extent of their effects, as well as testing the nonexistence of a feedback relation between them, have constituted major focal points in multiple time series analysis since Granger introduced the celebrated definition of causality in view of prediction improvement. Causality analysis has since been widely applied in many disciplines. Although most analyses are conducted from the perspective of the time domain, the frequency domain method introduced in this book sheds new light on another aspect that disentangles the interdependencies between multiple time series in terms of long-term or short-term effects, quantitatively characterizing them. The frequency domain method includes the Granger noncausality test as a special case. Chapters 2 and 3 of the book introduce an i...

  16. The Skipheia Wind Measurement Station. Instrumentation, Wind Speed Profiles and Turbulence Spectra

    Energy Technology Data Exchange (ETDEWEB)

    Aasen, S E

    1995-10-01

    This thesis describes the design of a measurement station for turbulent wind and presents results from an analysis of the collected data. The station is located at Skipheia near the south-west end of Froeya, an island off the coast of Mid-Norway. The station is unique for studies of turbulent winds because of the large number of sensors, which are located at various heights above ground up to 100 m, a sampling rate of 0.85 Hz and storage of the complete time series. The frequency of lightning and atmospheric discharges to the masts is quite high and much effort has gone into minimizing the damage caused by lightning activity. A major part of the thesis deals with data analysis and modelling. There are detailed discussions on the various types of wind sensors and their calibration, the data acquisition system and operating experiences with it, the database, data quality control, the wind speed profile and turbulence. 40 refs., 78 figs., 17 tabs.

  17. A perturbative approach for enhancing the performance of time series forecasting.

    Science.gov (United States)

    de Mattos Neto, Paulo S G; Ferreira, Tiago A E; Lima, Aranildo R; Vasconcelos, Germano C; Cavalcanti, George D C

    2017-04-01

    This paper proposes a method to perform time series prediction based on perturbation theory. The approach is based on continuously adjusting an initial forecasting model to asymptotically approximate a desired time series model. First, a predictive model generates an initial forecast for a time series. Second, a residual time series is calculated as the difference between the original time series and the initial forecast. If that residual series is not white noise, it can be used to improve the accuracy of the initial model, and a new predictive model is fitted to the residual series. The whole process is repeated until convergence or until the residual series becomes white noise. The output of the method is then given by summing the outputs of all trained predictive models in a perturbative sense. To test the method, an experimental investigation was conducted on six real-world time series. A comparison was made with six other methods and with ten results found in the literature. Results show that not only is the performance of the initial model significantly improved, but the proposed method also outperforms the previously published results. Copyright © 2017 Elsevier Ltd. All rights reserved.
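
    The fit-the-residual-then-sum loop can be sketched with plain least-squares AR(1) stages standing in for the paper's (more flexible) predictive models; the function names are illustrative:

```python
import numpy as np

def fit_ar1(x):
    """Least-squares fit of x[t] ~ a*x[t-1] + b."""
    A = np.column_stack([x[:-1], np.ones(len(x) - 1)])
    (a, b), *_ = np.linalg.lstsq(A, x[1:], rcond=None)
    return a, b

def perturbative_fit(x, n_stages=3):
    """Fit an initial model, then repeatedly fit the residual series and
    sum all stage outputs, in the spirit of the perturbative scheme."""
    x = np.asarray(x, dtype=float)
    total = np.zeros(len(x) - 1)          # aligned with x[1:]
    resid = x.copy()
    for _ in range(n_stages):
        a, b = fit_ar1(resid)
        stage = a * resid[:-1] + b        # one-step predictions of resid[1:]
        total += stage
        resid = np.concatenate([resid[:1], resid[1:] - stage])
    return total
```

    Because each stage is a least-squares fit of the current residuals, the in-sample error is non-increasing in the number of stages; the paper stops when the residuals are statistically white.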

  18. Drunk driving detection based on classification of multivariate time series.

    Science.gov (United States)

    Li, Zhenlong; Jin, Xue; Zhao, Xiaohua

    2015-09-01

    This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent multivariate time series. A bottom-up algorithm was then employed to separate multivariate time series. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify the driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is feasible and effective. The approach has implications for drunk driving detection. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
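
    The piecewise linear representation with bottom-up segmentation, and the slope/duration features it yields, can be sketched as follows (a generic bottom-up merge, not necessarily the paper's exact cost function; the SVM step is omitted):

```python
import numpy as np

def line_fit(t, y):
    """Least-squares line; returns (sum of squared residuals, slope)."""
    A = np.column_stack([t, np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2)), float(coef[0])

def bottom_up(y, k):
    """Greedily merge adjacent fine segments (cheapest merge first)
    until k segments remain; return segments and their (slope, duration)
    features, as used for classification."""
    t = np.arange(len(y), dtype=float)
    segs = [(i, min(i + 2, len(y))) for i in range(0, len(y), 2)]
    while len(segs) > k:
        costs = [line_fit(t[s[0]:e[1]], y[s[0]:e[1]])[0]
                 for s, e in zip(segs[:-1], segs[1:])]
        j = int(np.argmin(costs))
        segs[j] = (segs[j][0], segs[j + 1][1])
        del segs[j + 1]
    feats = [(line_fit(t[s:e], y[s:e])[1], e - s) for s, e in segs]
    return segs, feats
```

    In practice one would merge until a maximum per-segment error is exceeded rather than to a fixed k, then feed the (slope, duration) pairs to the classifier.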

  19. Time-resolved measurements of coherent structures in the turbulent boundary layer

    Science.gov (United States)

    LeHew, J. A.; Guala, M.; McKeon, B. J.

    2013-04-01

    Time-resolved particle image velocimetry was used to examine the structure and evolution of swirling coherent structure (SCS), one interpretation of which is a marker for a three-dimensional coherent vortex structure, in wall-parallel planes of a turbulent boundary layer with a large field of view, 4.3 δ × 2.2 δ. Measurements were taken at four different wall-normal locations ranging from y/ δ = 0.08-0.48 at a friction Reynolds number, Re τ = 410. The data set yielded statistically converged results over a larger field of view than typically observed in the literature. The method for identifying and tracking swirling coherent structure is discussed, and the resulting trajectories, convection velocities, and lifespan of these structures are analyzed at each wall-normal location. The ability of a model in which the entirety of an individual SCS travels at a single convection velocity, consistent with the attached eddy hypothesis of Townsend (The structure of turbulent shear flows. Cambridge University Press, Cambridge, 1976), to describe the data is investigated. A methodology for determining whether such structures are "attached" or "detached" from the wall is also proposed and used to measure the lifespan and convection velocity distributions of these different structures. SCS were found to persist for longer periods of time further from the wall, particularly those inferred to be "detached" from the wall, which could be tracked for longer than 5 eddy turnover times.

  20. Evaluation of scaling invariance embedded in short time series.

    Directory of Open Access Journals (Sweden)

    Xue Pan

    Full Text Available Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records.

  1. Evaluation of scaling invariance embedded in short time series.

    Science.gov (United States)

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale-invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records.
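
    For orientation, plain (uncorrected) diffusion entropy analysis works as follows: build diffusion displacements from window sums, estimate the Shannon entropy S(l) of their distribution, and read the scaling exponent delta from S(l) = const + delta*ln(l). This is the standard estimator that the paper's balanced, correlation-dependent version improves on for short records; the histogram-based entropy below is an illustrative choice:

```python
import numpy as np

def diffusion_entropy(xi, windows, bins=60):
    """Shannon entropy S(l) of the diffusion pdf built from overlapping
    window sums of the increment series xi."""
    S = []
    for l in windows:
        disp = np.convolve(xi, np.ones(l), mode="valid")   # window sums
        dens, edges = np.histogram(disp, bins=bins, density=True)
        w = np.diff(edges)
        m = dens > 0
        S.append(-np.sum(dens[m] * w[m] * np.log(dens[m])))
    return np.array(S)

def scaling_exponent(xi, windows):
    """Slope of S(l) against ln(l); ~0.5 for uncorrelated Gaussian noise."""
    S = diffusion_entropy(xi, windows)
    return float(np.polyfit(np.log(windows), S, 1)[0])
```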

  2. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
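
    The core cdf-inverse-cdf device can be sketched with stdlib tools: ranks map the data to uniforms, the inverse normal cdf maps them to Gaussian scores, and ordinary normal-theory AR fitting is applied in that transformed space. A minimal sketch (an empirical-cdf stand-in for the paper's nonparametric Bayesian marginal; function names are illustrative):

```python
import numpy as np
from statistics import NormalDist

def normal_scores(x):
    """cdf / inverse-cdf transform: ranks -> uniforms -> standard normal."""
    nd = NormalDist()
    ranks = np.argsort(np.argsort(x)) + 1
    return np.array([nd.inv_cdf(r / (len(x) + 1)) for r in ranks])

def fit_ar1_coeff(z):
    """Normal-theory AR(1) coefficient in the transformed (Gaussian) space."""
    return float(np.dot(z[:-1], z[1:]) / np.dot(z[:-1], z[:-1]))
```

    Forecasts made in the Gaussian space are mapped back through the (empirical) quantile function, so the non-Gaussian margin is recovered while the dynamics stay normal-theory.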

  3. Geomechanical time series and its singularity spectrum analysis

    Czech Academy of Sciences Publication Activity Database

    Lyubushin, Alexei A.; Kaláb, Zdeněk; Lednická, Markéta

    2012-01-01

    Roč. 47, č. 1 (2012), s. 69-77 ISSN 1217-8977 R&D Projects: GA ČR GA105/09/0089 Institutional research plan: CEZ:AV0Z30860518 Keywords : geomechanical time series * singularity spectrum * time series segmentation * laser distance meter Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 0.347, year: 2012 http://www.akademiai.com/content/88v4027758382225/fulltext.pdf

  4. Pseudo-random bit generator based on lag time series

    Science.gov (United States)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map using positive and negative values in the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we introduce a delay in the generation of the time series. When these new series are mapped, x_n against x_(n+1), they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
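
    A toy version of the idea: iterate the logistic map, emit only lagged iterates so that consecutive outputs no longer trace the parabola in the (x_n, x_(n+1)) plane, and threshold to bits. The subsampling-by-lag and the 0.5 threshold are illustrative choices, not the paper's exact construction (which also uses negative bifurcation-parameter values), and this sketch is nowhere near cryptographic strength:

```python
import numpy as np

def logistic_prbg(n_bits, r=3.99, lag=7, x0=0.615, burn=1000):
    """Bits from a logistic-map orbit, emitting every `lag`-th iterate
    so successive outputs do not reveal the underlying map."""
    n = burn + n_bits * lag
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    y = x[burn::lag][:n_bits]          # lagged (subsampled) series
    return (y > 0.5).astype(int)       # simple threshold bit extraction
```

    Any serious use would, as in the paper, be validated against the NIST statistical test suite.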

  5. Non-linear forecasting in high-frequency financial time series

    Science.gov (United States)

    Strozzi, F.; Zaldívar, J. M.

    2005-08-01

    A new methodology based on state space reconstruction techniques has been developed for trading in financial markets. The methodology has been tested using 18 high-frequency foreign exchange time series. The results are in apparent contradiction with the efficient market hypothesis which states that no profitable information about future movements can be obtained by studying the past prices series. In our (off-line) analysis positive gain may be obtained in all those series. The trading methodology is quite general and may be adapted to other financial time series. Finally, the steps for its on-line application are discussed.
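
    State space reconstruction here means delay-coordinate embedding followed by local prediction. A minimal nearest-neighbour forecaster in that spirit (a generic sketch, not the authors' trading rule; parameter values are illustrative):

```python
import numpy as np

def embed(x, dim, tau):
    """Delay-coordinate vectors [x_j, x_{j+tau}, ..., x_{j+(dim-1)tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def nn_forecast(x, dim=3, tau=1, k=4):
    """Average the successors of the k nearest reconstructed states."""
    E = embed(x, dim, tau)
    target = E[-1]                       # current state
    hist = E[:-1]
    future = x[(dim - 1) * tau + 1:]     # value following each past state
    d = np.linalg.norm(hist - target, axis=1)
    idx = np.argsort(d)[:k]
    return float(future[idx].mean())
```

    A trading rule would then compare the forecast against the current price rather than use the value directly.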

  6. Analysis of JET ELMy time series

    International Nuclear Information System (INIS)

    Zvejnieks, G.; Kuzovkov, V.N.

    2005-01-01

    Full text: Achievement of the planned operational regime in the next generation of tokamaks (such as ITER) still faces principal problems. One of the main challenges is obtaining control of edge localized modes (ELMs), which should lead to both long plasma pulse times and a reasonable divertor lifetime. In order to control ELMs, Degeling [1] proposed the hypothesis that ELMs exhibit features of chaotic dynamics and thus that standard chaos control methods might be applicable. However, our findings, which are based on the nonlinear autoregressive (NAR) model, contradict this hypothesis for JET ELMy time series. In turn, this means that ELM behavior is of a relaxation or random type. These conclusions coincide with our previous results obtained for ASDEX Upgrade time series [2]. [1] A.W. Degeling, Y.R. Martin, P.E. Bak, J. B.Lister, and X. Llobet, Plasma Phys. Control. Fusion 43, 1671 (2001). [2] G. Zvejnieks, V.N. Kuzovkov, O. Dumbrajs, A.W. Degeling, W. Suttrop, H. Urano, and H. Zohm, Physics of Plasmas 11, 5658 (2004)

  7. The Statistical Analysis of Time Series

    CERN Document Server

    Anderson, T W

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences George

  8. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account the meteorological persistence or temporal behavior, thereby identifying the memory of the analyzed process. This article seeks to present the concept of the size of an equivalent sample, which helps to identify sub-periods with a similar structure in the data series. Moreover, in this article we examine the alternative of adjusting the variance of the series, keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. This article presents two examples, the first one corresponding to seven simulated series with autoregressive structure of first order, and the second corresponding to seven meteorological series of anomalies of the air temperature at the surface in two Colombian regions
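
    For an AR(1)-like series, the standard equivalent (effective) sample size for the mean is n_eff = n(1 - rho)/(1 + rho), with rho the lag-1 autocorrelation; persistence shrinks the number of effectively independent observations. A sketch using that textbook formula (which may differ in detail from the article's adjustment):

```python
import numpy as np

def equivalent_sample_size(x):
    """n_eff = n(1 - rho)/(1 + rho), the AR(1) correction for the
    variance of the sample mean of a persistent series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    rho = float(np.dot(x[:-1], x[1:]) / np.dot(x, x))  # lag-1 autocorr
    rho = min(max(rho, 0.0), 0.99)   # clip to the persistent regime
    return len(x) * (1.0 - rho) / (1.0 + rho)
```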

  9. Scalable Prediction of Energy Consumption using Incremental Time Series Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Noor, Muhammad Usman

    2013-10-09

    Time series datasets are a canonical form of high velocity Big Data, and often generated by pervasive sensors, such as found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex, and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters, totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.
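
    The shape of such an incremental scheme can be sketched as a one-pass clustering loop: each incoming (normalized) series either joins the centroid with the highest affinity or opens a new cluster. The correlation affinity and running-mean centroid update below are illustrative stand-ins for the paper's novel affinity score:

```python
import numpy as np

def incremental_cluster(series_list, affinity=0.5):
    """One-pass clustering: join the centroid with the highest
    correlation affinity, or open a new cluster below the threshold."""
    centroids, counts, labels = [], [], []
    for s in series_list:
        s = (s - s.mean()) / s.std()                    # normalize shape
        affs = [float(np.mean(s * c)) for c in centroids]
        best = int(np.argmax(affs)) if affs else -1
        if best >= 0 and affs[best] >= affinity:
            counts[best] += 1
            centroids[best] += (s - centroids[best]) / counts[best]
            labels.append(best)
        else:
            centroids.append(s.copy())
            counts.append(1)
            labels.append(len(centroids) - 1)
    return labels, centroids
```

    Prediction error then drops because forecasts are made per cumulative cluster series rather than per noisy individual meter.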

  10. Turbulence from a microorganism's perspective: Does the open ocean feel different than a coral reef?

    Science.gov (United States)

    Pepper, Rachel; Variano, Evan; Koehl, M. A. R.

    2012-11-01

    Microorganisms in the ocean live in turbulent flows. Swimming microorganisms navigate through the water (e.g. larvae land on suitable substrata, predators find patches of prey), but the mechanisms by which they do so in turbulent flow are poorly understood as are the roles of passive transport versus active behaviors. Because microorganisms are smaller than the Kolmagorov length (the smallest scale of eddies in turbulent flow), they experience turbulence as a series of linear gradients in the velocity that vary in time. While the average strength of these gradients and a timescale can be computed from some typical characteristics of the flow, such as the turbulent kinetic energy or the dissipation rate, there are indications that organisms are disproportionally affected by rare, extreme events. Understanding the frequency of such events in different environments will be critical to understanding how microorganisms respond to and navigate in turbulence. To understand the hydrodynamic cues that microorganisms experience in the ocean we must measure velocity gradients in realistic turbulent flow on the spatial and temporal scales encountered by microorganisms. We have been exploring the effect of the spatial resolution of PIV and DNS of turbulent flow on the presence of velocity gradients of different magnitudes at the scale of microorganisms. Here we present some results of PIV taken at different resolutions in turbulent flow over rough biological substrata to illustrate the challenges of quantifying the fluctuations in velocity gradients encountered by aquatic microorganisms.

  11. Space-time statistics of the turbulence in the PRETEXT and TEXT tokamak edge plasmas

    International Nuclear Information System (INIS)

    Levinson, S.J.

    1986-01-01

    A study of the statistical space-time properties of the turbulence observed in the edge regions of the PRETEXT and TEXT tokamaks is reported. Computer estimates of the particle-transport spectrum T(omega) and of the local wavenumber-frequency spectra S(k,omega) for poloidal (k_y) and toroidal (k_z) wavenumbers were determined. A conventional fast-Fourier-transform technique is used initially for the analyses of the potential and density fluctuations obtained from spatially fixed Langmuir-probe pairs. Measurements of the fluctuation-induced particle transport revealed that the particle flux is outward for both PRETEXT and TEXT, and that it results primarily from the low-frequency, long-wavelength components of the turbulence. The S(k_y, omega) spectra are dominated by low frequencies ( -1 ) and appear broadened about an approximately linear statistical dispersion relation, k-bar(omega). The broadening is characterized by a spectral width sigma_k(omega) (the rms deviation about k-bar(omega)). In PRETEXT, sigma_k(omega) is of the order of k-bar(omega), and the turbulence appears to propagate poloidally with an apparent mean phase velocity of 1-2 × 10^5 cm/s in the ion diamagnetic drift direction. In TEXT, a reversal in the phase velocity of the turbulence in the edge plasma was observed
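
    The two-point estimate of a local dispersion relation from fixed probe pairs rests on a simple identity: for a travelling wave, the cross-spectral phase between two probes separated by dx equals k(omega)·dx. A minimal sketch of that phase-to-wavenumber step (a single-record FFT; real analyses average many records and weight by cross-power):

```python
import numpy as np

def local_wavenumber(s1, s2, dx, dt):
    """k(omega) from the cross-spectral phase of two fixed probes a
    distance dx apart; sign convention: s2 lags s1 for k > 0."""
    F1, F2 = np.fft.rfft(s1), np.fft.rfft(s2)
    freqs = np.fft.rfftfreq(len(s1), dt)
    k = -np.angle(np.conj(F1) * F2) / dx
    return freqs, k
```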

  12. Torque fluctuations caused by upstream mean flow and turbulence

    Science.gov (United States)

    Farr, T. D.; Hancock, P. E.

    2014-12-01

    A series of studies is in progress investigating the effects of turbine-array-wake interactions for a range of atmospheric boundary layer states by means of the EnFlo meteorological wind tunnel. The small, three-blade model wind turbines drive 4-quadrant motor-generators. Only a single turbine in neutral flow is considered here. The motor-generator current can be measured with adequate sensitivity by means of a current sensor, allowing the mean and fluctuating torque to be inferred. Spectra of torque fluctuations and streamwise velocity fluctuations ahead of the rotor, between 0.1 and 2 diameters, show that only the large-scale turbulent motions contribute significantly to the torque fluctuations. Time-lagged cross-correlations between upstream velocity and torque fluctuations are largest over the inner part of the blade. They also show the turbulence to be "frozen" in behaviour over the 2 diameters upstream of the turbine.
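
    The time-lagged cross-correlation used here is the standard normalized estimator; the peak lag identifies the transit delay between an upstream velocity fluctuation and the torque response. A minimal sketch (function name illustrative):

```python
import numpy as np

def lagged_xcorr(u, q, max_lag):
    """R(l) = <u(t) q(t+l)> for standardized signals, l = 0..max_lag."""
    u = (u - u.mean()) / u.std()
    q = (q - q.mean()) / q.std()
    n = len(u)
    return np.array([np.mean(u[:n - l] * q[l:]) for l in range(max_lag + 1)])
```

    For frozen (Taylor) turbulence, the peak lag should scale linearly with probe distance divided by the mean advection speed.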

  13. Forecasting with nonlinear time series models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model...... applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a partic...... and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently

  14. New Models for Velocity/Pressure-Gradient Correlations in Turbulent Boundary Layers

    Science.gov (United States)

    Poroseva, Svetlana; Murman, Scott

    2014-11-01

    To improve the performance of Reynolds-Averaged Navier-Stokes (RANS) turbulence models, one has to improve the accuracy of models for three physical processes: turbulent diffusion, interaction of turbulent pressure and velocity fluctuation fields, and dissipative processes. The accuracy of modeling the turbulent diffusion depends on the order of a statistical closure chosen as a basis for a RANS model. When the Gram-Charlier series expansions for the velocity correlations are used to close the set of RANS equations, no assumption on Gaussian turbulence is invoked and no unknown model coefficients are introduced into the modeled equations. In such a way, this closure procedure reduces the modeling uncertainty of fourth-order RANS (FORANS) closures. Experimental and direct numerical simulation data confirmed the validity of using the Gram-Charlier series expansions in various flows including boundary layers. We will address modeling the velocity/pressure-gradient correlations. New linear models will be introduced for the second- and higher-order correlations applicable to two-dimensional incompressible wall-bounded flows. Results of models' validation with DNS data in a channel flow and in a zero-pressure gradient boundary layer over a flat plate will be demonstrated. A part of the material is based upon work supported by NASA under award NNX12AJ61A.

  15. Nonparametric factor analysis of time series

    OpenAIRE

    Rodríguez-Poo, Juan M.; Linton, Oliver Bruce

    1998-01-01

    We introduce a nonparametric smoothing procedure for nonparametric factor analysis of multivariate time series. The asymptotic properties of the proposed procedures are derived. We present an application based on the residuals from the Fair macromodel.

  16. Time Series Outlier Detection Based on Sliding Window Prediction

    Directory of Open Access Journals (Sweden)

    Yufeng Yu

    2014-01-01

    Full Text Available In order to detect outliers in hydrological time series data, for improving data quality and the quality of decision-making related to the design, operation, and management of water resources, this research develops a time series outlier detection method for hydrologic data that can be used to identify data that deviate from historical patterns. The method first builds a forecasting model on the historical data and then uses it to predict future values. Anomalies are assumed to take place if the observed values fall outside a given prediction confidence interval (PCI), which can be calculated from the predicted value and a confidence coefficient. The use of the PCI as a threshold is mainly based on the fact that it accounts for the uncertainty in the data series parameters of the forecasting model, addressing the threshold selection problem. The method performs fast, incremental evaluation of data as it becomes available, scales to large quantities of data, and requires no preclassification of anomalies. Experiments with different real-world hydrologic time series showed that the proposed method is fast, correctly identifies abnormal data, and can be used for hydrologic time series analysis.

  17. Metagenomics meets time series analysis: unraveling microbial community dynamics

    NARCIS (Netherlands)

    Faust, K.; Lahti, L.M.; Gonze, D.; Vos, de W.M.; Raes, J.

    2015-01-01

    The recent increase in the number of microbial time series studies offers new insights into the stability and dynamics of microbial communities, from the world's oceans to human microbiota. Dedicated time series analysis tools allow taking full advantage of these data. Such tools can reveal periodic

  18. Time series forecasting based on deep extreme learning machine

    NARCIS (Netherlands)

    Guo, Xuqi; Pang, Y.; Yan, Gaowei; Qiao, Tiezhu; Yang, Guang-Hong; Yang, Dan

    2017-01-01

    Multi-layer Artificial Neural Networks (ANN) has caught widespread attention as a new method for time series forecasting due to the ability of approximating any nonlinear function. In this paper, a new local time series prediction model is established with the nearest neighbor domain theory, in

  19. THE MATRYOSHKA RUN. II. TIME-DEPENDENT TURBULENCE STATISTICS, STOCHASTIC PARTICLE ACCELERATION, AND MICROPHYSICS IMPACT IN A MASSIVE GALAXY CLUSTER

    International Nuclear Information System (INIS)

    Miniati, Francesco

    2015-01-01

    We use the Matryoshka run to study the time-dependent statistics of structure-formation-driven turbulence in the intracluster medium of a 10^15 M_☉ galaxy cluster. We investigate the turbulent cascade in the inner megaparsec for both compressional and incompressible velocity components. The flow maintains approximate conditions of fully developed turbulence, with departures thereof settling in about an eddy-turnover time. Turbulent velocity dispersion remains above 700 km s^-1 even at low mass accretion rate, with the fraction of compressional energy between 10% and 40%. The normalization and the slope of the compressional turbulence are susceptible to large variations on short timescales, unlike the incompressible counterpart. A major merger occurs around redshift z ≅ 0 and is accompanied by a long period of enhanced turbulence, ascribed to temporal clustering of mass accretion related to spatial clustering of matter. We test models of stochastic acceleration by compressional modes for the origin of diffuse radio emission in galaxy clusters. The turbulence simulation model constrains an important unknown of this complex problem and brings forth its dependence on the elusive microphysics of the intracluster plasma. In particular, the specifics of the plasma collisionality and the dissipation physics of weak shocks affect the cascade of compressional modes with strong impact on the acceleration rates. In this context radio halos emerge as complex phenomena in which a hierarchy of processes acting on progressively smaller scales are at work. Stochastic acceleration by compressional modes implies statistical correlation of radio power and spectral index with merging cores distance, both testable in principle with radio surveys

  20. False-nearest-neighbors algorithm and noise-corrupted time series

    International Nuclear Information System (INIS)

    Rhodes, C.; Morari, M.

    1997-01-01

    The false-nearest-neighbors (FNN) algorithm was originally developed to determine the embedding dimension for autonomous time series. For noise-free computer-generated time series, the algorithm does a good job in predicting the embedding dimension. However, the problem of predicting the embedding dimension when the time-series data are corrupted by noise was not fully examined in the original studies of the FNN algorithm. Here it is shown that with large data sets, even small amounts of noise can lead to incorrect prediction of the embedding dimension. Surprisingly, as the length of the time series analyzed by FNN grows larger, the cause of incorrect prediction becomes more pronounced. An analysis of the effect of noise on the FNN algorithm and a solution for dealing with the effects of noise are given here. Some results on the theoretically correct choice of the FNN threshold are also presented. copyright 1997 The American Physical Society
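
    The Kennel-style FNN test can be sketched as follows: find each point's nearest neighbour in dimension d, then count the pairs whose separation grows by more than a tolerance factor when the (d+1)-th delay coordinate is added. The noise sensitivity discussed above shows up directly: for noise, neighbours are spurious, so the false fraction stays high at every dimension. A brute-force O(n^2) sketch (rtol is the usual but tunable threshold):

```python
import numpy as np

def fnn_fraction(x, dim, tau=1, rtol=15.0):
    """Fraction of nearest neighbours in dimension `dim` that separate
    strongly when the (dim+1)-th delay coordinate is added."""
    n = len(x) - dim * tau
    E = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    extra = x[dim * tau: dim * tau + n]   # the added coordinate
    false = 0
    for i in range(n):
        d = np.linalg.norm(E - E[i], axis=1)
        d[i] = np.inf                     # exclude self-match
        j = int(np.argmin(d))
        if d[j] > 0 and abs(extra[i] - extra[j]) / d[j] > rtol:
            false += 1
    return false / n
```

    The embedding dimension is taken as the smallest d at which the fraction drops to (near) zero; with noisy data it never quite reaches zero, which is the failure mode the paper analyzes.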

  1. Hierarchy compensation of non-homogeneous intermittent atmospheric turbulence

    Science.gov (United States)

    Redondo, Jose M.; Mahjoub, Otman B.; Cantalapiedra, Inma R.

    2010-05-01

    In this work, both the internal turbulence energy-cascade intermittency evaluated from wind-speed series in the atmospheric boundary layer and the role of external or forcing intermittency based on the flatness (Vindel et al. 2008) are studied. The degree of intermittency in the stratified ABL flow (Cuxart et al. 2000) can be studied as the deviation, from the linear form, of the absolute scaling exponents of the structure functions, generalizing to non-isotropic and non-homogeneous turbulence, even in non-inertial ranges (in the Kolmogorov-Kraichnan sense) where the scaling exponents are not constant. The degree of intermittency, evaluated in the non-local quasi-inertial range, is explained from the variation with scale of the energy transfer as well as of the dissipation. The scale-to-scale transfer and the structure-function scaling exponents are calculated, and from these the intermittency parameters. The turbulent diffusivity could also be estimated and compared with Richardson's law. Some two-point correlations and time-lag calculations are used to investigate the temporal and spatial integral length scales obtained from both Lagrangian and Eulerian correlations and functions, and we compare these results with both theoretical and laboratory data. We develop a theoretical description of how to measure the different levels of intermittency following (Mahjoub et al. 1998, 2000) and of the role of locality in higher-order exponents of structure-function analysis. Vindel J.M., Yague C. and Redondo J.M. (2008) Structure function analysis and intermittency in the ABL. Nonlin. Processes Geophys., 15, 915-929. Cuxart J, Yague C, Morales G, Terradellas E, Orbe J, Calvo J, Fernández A, Soler M R, Infante C, Buenestado P, Espinalt A, Joergensen H E, Rees J M, Vilá J, Redondo J M, Cantalapiedra R and Conangla L (2000): Stable atmospheric boundary-layer experiment in Spain (Sables 98): a report, Boundary-Layer Meteorology 96, 337-370. Mahjoub O
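
    The structure-function scaling exponents zeta_q referred to above come from log-log fits of S_q(r) = <|u(t+r) - u(t)|^q> against the separation r; intermittency appears as a nonlinear dependence of zeta_q on q. A minimal estimator (absolute structure functions; the fit range must in practice be restricted to the inertial range):

```python
import numpy as np

def structure_exponents(u, orders, seps):
    """zeta_q from log-log fits of S_q(r) = <|u(t+r) - u(t)|^q>."""
    logr = np.log(seps)
    zetas = []
    for q in orders:
        logS = [np.log(np.mean(np.abs(u[r:] - u[:-r]) ** q)) for r in seps]
        zetas.append(float(np.polyfit(logr, logS, 1)[0]))
    return np.array(zetas)
```

    For a monofractal signal zeta_q = qH exactly; the deviation of measured zeta_q from that straight line is the intermittency measure discussed in the abstract.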

  2. Dynamic structure in self-sustained turbulence

    International Nuclear Information System (INIS)

    Itoh, K.; Itoh, S.; Yagi, M.; Fukuyama, A.

    1995-06-01

    The dynamical equation for the self-sustained, pressure-driven turbulence in toroidal plasmas is derived. The growth rate of the dressed test mode, which belongs to the subcritical turbulence, is obtained as a function of the turbulent transport coefficient. In the limit of low fluctuation level, the mode has the features of a nonlinear instability and shows explosive growth. The growth rate vanishes when the driven transport reaches the stationary turbulence level. The stationary solution is thermodynamically stable. The characteristic time by which the stationary, self-sustained turbulence is established scales with the ion-sound transit time and is shortened by bad magnetic curvature. The influences of the pressure gradient as well as of the radial electric field inhomogeneity are quantified. (author)

  3. Airborne-Measured Spatially-Averaged Temperature and Moisture Turbulent Structure Parameters Over a Heterogeneous Surface

    Science.gov (United States)

    Platis, Andreas; Martinez, Daniel; Bange, Jens

    2014-05-01

    Turbulent structure parameters of temperature and humidity can be derived from scintillometer measurements along horizontal paths of several hundred metres to several tens of kilometres. These parameters can be very useful for estimating the vertical turbulent heat fluxes at the surface (applying MOST). However, this method relies on several assumptions that can be checked against in situ data, e.g.: 1) Were CT2 and CQ2 correctly derived from the initial CN2 scintillometer data (the structure parameter of density or refractive-index fluctuations, respectively)? 2) What is the influence of the surrounding heterogeneous surface, regarding its footprint and the weighted averaging effect of the scintillometer method? 3) Does MOST provide the correct turbulent fluxes from scintillometer data? To check these issues, in situ data from low-level flight measurements are well suited, since research aircraft cover horizontal distances in a very short time (Taylor's hypothesis of a frozen turbulence structure can very likely be applied). From the airborne-measured time series, spatial series are calculated, and then their structure functions, which finally provide the structure parameters. The influence of the heterogeneous surface can be controlled by the definition of certain moving-average window sizes. UAVs are very useful instruments for this task, since they can fly very low and maintain altitude very precisely; however, the database of such unmanned operations is still quite thin. In this contribution we therefore present turbulence data obtained with the Helipod, a turbulence probe hanging below a manned helicopter. The structure parameters of temperature and moisture, CT2 and CQ2, in the lower convective boundary layer were derived from data measured using the Helipod in 2003. The measurements were carried out during the LITFASS03 campaign over a heterogeneous land surface around the boundary-layer field site of the Lindenberg Meteorological Observatory-Richard-Aßmann-Observatory (MOL) of the
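
    The processing chain described above (time series, then spatial series via Taylor's hypothesis, then structure functions, then structure parameters) can be sketched as follows. This is a hedged illustration with synthetic data (a Fourier-filtered signal with an f^-5/3 spectrum standing in for airborne temperature), not the Helipod processing itself; `structure_param` and the unit airspeed are assumptions for the sketch:

```python
import numpy as np

def structure_param(x, dx, r):
    """Estimate C_X^2 = D_X(r) / r^(2/3), with D_X the second-order structure
    function of the spatial series x sampled at spacing dx."""
    lag = max(1, int(round(r / dx)))
    d2 = np.mean((x[lag:] - x[:-lag]) ** 2)
    return d2 / (lag * dx) ** (2.0 / 3.0)

# Synthetic "temperature" record with an inertial-range spectrum (~ f^-5/3),
# built by Fourier-filtering white noise, so that D_T(r) ~ r^(2/3).
rng = np.random.default_rng(1)
n = 2 ** 16
freqs = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-5.0 / 6.0)   # amplitude ~ f^(-5/6)  ->  power ~ f^(-5/3)
T = np.fft.irfft(amp * np.fft.rfft(rng.standard_normal(n)))

# Airborne time series become spatial series via Taylor's hypothesis, r = TAS * dt;
# here dt = 1 s and TAS = 1 m/s for simplicity, so one sample = one metre.
c1 = structure_param(T, dx=1.0, r=8.0)
c2 = structure_param(T, dx=1.0, r=64.0)
# Within the inertial range the estimate is roughly independent of the chosen r.
```

    Checking that the estimate is flat across separations r is one way to verify that the chosen r lies in the inertial range before comparing against scintillometer-derived CT2.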

  4. CauseMap: fast inference of causality from complex time series.

    Science.gov (United States)

    Maher, M Cyrus; Hernandez, Ryan D

    2015-01-01

    Background. Establishing health-related causal relationships is a central pursuit in biomedical research. Yet, the interdependent non-linearity of biological systems renders causal dynamics laborious and at times impractical to disentangle. This pursuit is further impeded by the dearth of time series that are sufficiently long to observe and understand recurrent patterns of flux. However, as data generation costs plummet and technologies like wearable devices democratize data collection, we anticipate a coming surge in the availability of biomedically-relevant time series data. Given the life-saving potential of these burgeoning resources, it is critical to invest in the development of open source software tools that are capable of drawing meaningful insight from vast amounts of time series data. Results. Here we present CauseMap, the first open source implementation of convergent cross mapping (CCM), a method for establishing causality from long time series data (≳25 observations). Compared to existing time series methods, CCM has the advantage of being model-free and robust to unmeasured confounding that could otherwise induce spurious associations. CCM builds on Takens' Theorem, a well-established result from dynamical systems theory that requires only mild assumptions. This theorem allows us to reconstruct high dimensional system dynamics using a time series of only a single variable. These reconstructions can be thought of as shadows of the true causal system. If reconstructed shadows can predict points from opposing time series, we can infer that the corresponding variables are providing views of the same causal system, and so are causally related. Unlike traditional metrics, this test can establish the directionality of causation, even in the presence of feedback loops. Furthermore, since CCM can extract causal relationships from time series of, e.g., a single individual, it may be a valuable tool for personalized medicine. We implement CCM in Julia, a
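
    The shadow-manifold logic of CCM described above can be sketched in a few lines. This is an illustrative re-implementation in Python (not CauseMap, which is written in Julia), using the standard coupled-logistic-map example in which x drives y one-way; `embed` and `cross_map_skill` are hypothetical helper names introduced here:

```python
import numpy as np

def embed(x, E, tau):
    """Takens delay embedding: rows [x_t, x_(t-tau), ..., x_(t-(E-1)tau)]."""
    n = len(x) - (E - 1) * tau
    return np.column_stack(
        [x[(E - 1) * tau - j * tau: (E - 1) * tau - j * tau + n] for j in range(E)]
    )

def cross_map_skill(source, target, E=3, tau=1):
    """Correlation between target and its estimate from source's shadow manifold."""
    M = embed(source, E, tau)
    t_al = target[(E - 1) * tau:]
    d = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    est = np.empty(len(M))
    for i in range(len(M)):
        nn = np.argsort(d[i])[: E + 1]                   # E + 1 nearest neighbours
        w = np.exp(-d[i, nn] / max(d[i, nn[0]], 1e-12))  # exponential weights
        est[i] = np.dot(w, t_al[nn]) / np.sum(w)
    return float(np.corrcoef(t_al, est)[0, 1])

# Coupled logistic maps with one-way coupling: x drives y.
n = 1000
x = np.empty(n); y = np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])

skill_x_causes_y = cross_map_skill(y, x)  # x's influence recovered from y's shadow
skill_y_causes_x = cross_map_skill(x, y)  # should stay lower: y leaves no trace in x
```

    Asymmetry between the two skills is what establishes directionality: the driven variable's shadow manifold encodes the driver, but not vice versa.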

  5. CauseMap: fast inference of causality from complex time series

    Directory of Open Access Journals (Sweden)

    M. Cyrus Maher

    2015-03-01

    Full Text Available Background. Establishing health-related causal relationships is a central pursuit in biomedical research. Yet, the interdependent non-linearity of biological systems renders causal dynamics laborious and at times impractical to disentangle. This pursuit is further impeded by the dearth of time series that are sufficiently long to observe and understand recurrent patterns of flux. However, as data generation costs plummet and technologies like wearable devices democratize data collection, we anticipate a coming surge in the availability of biomedically-relevant time series data. Given the life-saving potential of these burgeoning resources, it is critical to invest in the development of open source software tools that are capable of drawing meaningful insight from vast amounts of time series data. Results. Here we present CauseMap, the first open source implementation of convergent cross mapping (CCM), a method for establishing causality from long time series data (≳25 observations). Compared to existing time series methods, CCM has the advantage of being model-free and robust to unmeasured confounding that could otherwise induce spurious associations. CCM builds on Takens' Theorem, a well-established result from dynamical systems theory that requires only mild assumptions. This theorem allows us to reconstruct high dimensional system dynamics using a time series of only a single variable. These reconstructions can be thought of as shadows of the true causal system. If reconstructed shadows can predict points from opposing time series, we can infer that the corresponding variables are providing views of the same causal system, and so are causally related. Unlike traditional metrics, this test can establish the directionality of causation, even in the presence of feedback loops. Furthermore, since CCM can extract causal relationships from time series of, e.g., a single individual, it may be a valuable tool for personalized medicine. We implement

  6. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature for mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines time-domain series system to which the traditional series system reliability model is not adequate. Then, system specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material priori/posterior strength expression, time-dependent and system specific load-strength interference analysis, as well as statistically dependent failure events treatment. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system specific reliability model and the traditional series system reliability model are illustrated by virtue of several numerical examples. - Highlights: • A new type of series system, i.e. time-domain multi-configuration series system is defined, that is of great significance to reliability modeling. • Multi-level statistical analysis based reliability modeling method is presented for gear transmission system. • Several system specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.

  7. Track Irregularity Time Series Analysis and Trend Forecasting

    Directory of Open Access Journals (Sweden)

    Jia Chaolong

    2012-01-01

    Full Text Available The combination of linear and nonlinear methods is widely used in the prediction of time series data. This paper analyzes track irregularity time series data using gray incidence degree models and data-transformation methods, seeking the underlying relationships within the time series data. In this paper, GM (1,1) is based on a first-order, single-variable linear differential equation; after an adaptive improvement and error correction, it is used to predict the long-term changing trend of track irregularity at a fixed measuring point; the stochastic linear AR model, Kalman filtering model, and artificial neural network model are applied to predict the short-term changing trend of track irregularity at the unit section. Predictions of both long-term and short-term changes show that the model is effective and can achieve the expected accuracy.
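
    A minimal sketch of the GM (1,1) step described above (without the paper's adaptive improvement and error correction): the series is accumulated (AGO), the first-order linear differential equation is fitted by least squares, and the fitted curve is de-accumulated (inverse AGO). The numbers are hypothetical track-irregularity values, purely for illustration:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Grey model GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series x1."""
    x1 = np.cumsum(x0)                         # accumulated generating operation (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)        # inverse AGO: fitted + forecast values

# Hypothetical track-irregularity index at a fixed measuring point.
x0 = np.array([2.87, 3.03, 3.25, 3.41, 3.59, 3.78])
fit = gm11_forecast(x0, steps=2)               # 6 fitted values + 2-step forecast
```

    GM (1,1) works well precisely when the accumulated series is near-exponential, which is why the paper layers error correction on top of it for real track data.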

  8. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    Science.gov (United States)

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock markets, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision and paid little attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which can segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient on real-time stream datasets, improving segmentation speed nearly tenfold. The novelty of this algorithm is further demonstrated by the application of PRESEE to segmenting real-time stream datasets from ChinaFLUX sensor network data streams.

  9. Time-varying surrogate data to assess nonlinearity in nonstationary time series: application to heart rate variability.

    Science.gov (United States)

    Faes, Luca; Zhao, He; Chon, Ki H; Nollo, Giandomenico

    2009-03-01

    We propose a method to extend to time-varying (TV) systems the procedure for generating typical surrogate time series, in order to test the presence of nonlinear dynamics in potentially nonstationary signals. The method is based on fitting a TV autoregressive (AR) model to the original series and then regressing the model coefficients with random replacements of the model residuals to generate TV AR surrogate series. The proposed surrogate series were used in combination with a TV sample entropy (SE) discriminating statistic to assess nonlinearity in both simulated and experimental time series, in comparison with traditional time-invariant (TIV) surrogates combined with the TIV SE discriminating statistic. Analysis of simulated time series showed that using TIV surrogates, linear nonstationary time series may be erroneously regarded as nonlinear and weak TV nonlinearities may remain unrevealed, while the use of TV AR surrogates markedly increases the probability of a correct interpretation. Application to short (500 beats) heart rate variability (HRV) time series recorded at rest (R), after head-up tilt (T), and during paced breathing (PB) showed: 1) modifications of the SE statistic that were well interpretable with the known cardiovascular physiology; 2) significant contribution of nonlinear dynamics to HRV in all conditions, with significant increase during PB at 0.2 Hz respiration rate; and 3) a disagreement between TV AR surrogates and TIV surrogates in about a quarter of the series, suggesting that nonstationarity may affect HRV recordings and bias the outcome of the traditional surrogate-based nonlinearity test.
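
    The core surrogate construction, fitting an AR model and re-driving it with randomly permuted residuals, can be sketched in its time-invariant form (the paper's contribution is the time-varying extension, which is not reproduced here). Function names and the synthetic AR(2) stand-in for an HRV recording are assumptions of the sketch:

```python
import numpy as np

def ar_surrogate(x, p, rng=None):
    """AR(p) surrogate: fit by least squares, then re-drive the fitted model
    with a random permutation of its own residuals (time-invariant version)."""
    rng = np.random.default_rng(rng)
    X = np.column_stack([x[p - 1 - j: len(x) - 1 - j] for j in range(p)])
    coef = np.linalg.lstsq(X, x[p:], rcond=None)[0]
    resid = x[p:] - X @ coef
    e = rng.permutation(resid)
    s = list(x[:p])                            # keep the original initial condition
    for t in range(len(e)):
        s.append(np.dot(coef, s[-1:-p - 1:-1]) + e[t])
    return np.array(s)

# Linear AR(2) test signal standing in for a short HRV recording (500 beats).
rng = np.random.default_rng(3)
n = 500
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.6 * x[t - 1] - 0.8 * x[t - 2] + rng.standard_normal()

surr = ar_surrogate(x, p=2, rng=4)
# The surrogate preserves the linear autocorrelation structure while destroying
# any possible nonlinear structure.
r1_original = float(np.corrcoef(x[:-1], x[1:])[0, 1])
r1_surrogate = float(np.corrcoef(surr[:-1], surr[1:])[0, 1])
```

    A discriminating statistic (the paper uses sample entropy) computed on the original and on an ensemble of such surrogates then yields the nonlinearity test.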

  10. Local normalization: Uncovering correlations in non-stationary financial time series

    Science.gov (United States)

    Schäfer, Rudi; Guhr, Thomas

    2010-09-01

    The measurement of correlations between financial time series is of vital importance for risk management. In this paper we address an estimation error that stems from the non-stationarity of the time series. We put forward a method to rid the time series of local trends and variable volatility, while preserving cross-correlations. We test this method in a Monte Carlo simulation, and apply it to empirical data for the S&P 500 stocks.
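
    The local-normalization idea can be sketched as follows: each return is standardized by the mean and standard deviation of a short sliding window (13 points is used here as a plausible choice), which strips local trends and local volatility while leaving cross-correlations in place. The data are synthetic stand-ins for stock returns, not S&P 500 data:

```python
import numpy as np

def local_normalize(r, window=13):
    """Standardize each point by the mean and std of a short sliding window,
    removing local trends and local volatility."""
    half = window // 2
    out = np.empty(len(r))
    for t in range(len(r)):
        seg = r[max(0, t - half): t + half + 1]
        out[t] = (r[t] - seg.mean()) / seg.std()
    return out

# Synthetic stand-ins for two return series: a common factor plus noise,
# distorted by a shared slow trend and time-varying volatility.
rng = np.random.default_rng(5)
n = 2000
common = rng.standard_normal(n)
vol = 1.0 + 0.9 * np.sin(np.linspace(0.0, 8.0 * np.pi, n)) ** 2
trend = np.linspace(0.0, 5.0, n)
a = vol * (0.6 * common + 0.8 * rng.standard_normal(n)) + trend
b = vol * (0.6 * common + 0.8 * rng.standard_normal(n)) + trend

a_n, b_n = local_normalize(a), local_normalize(b)
rho = float(np.corrcoef(a_n, b_n)[0, 1])  # recovers a positive cross-correlation
```

    After normalization each series is approximately zero-mean and unit-variance locally, so the estimated correlation reflects the stationary co-movement rather than the shared trend or volatility.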

  11. Fuzzy time-series based on Fibonacci sequence for stock price forecasting

    Science.gov (United States)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Jong Teoh, Hia

    2007-07-01

    Time-series models have been utilized to make reasonably accurate predictions in the areas of stock price movements, academic enrollments, weather, etc. To improve the forecasting performance of fuzzy time-series models, this paper proposes a new model, which incorporates the concept of the Fibonacci sequence, the framework of Song and Chissom's model and the weighted method of Yu's model. This paper employs a 5-year period of TSMC (Taiwan Semiconductor Manufacturing Company) stock price data and a 13-year period of TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) stock index data as experimental datasets. By comparing our forecasting performances with Chen's (Forecasting enrollments based on fuzzy time-series. Fuzzy Sets Syst. 81 (1996) 311-319), Yu's (Weighted fuzzy time-series models for TAIEX forecasting. Physica A 349 (2004) 609-624) and Huarng's (The application of neural networks to forecast fuzzy time series. Physica A 336 (2006) 481-491) models, we conclude that the proposed model surpasses these conventional fuzzy time-series models in accuracy.

  12. TIME-DEPENDENT TURBULENT HEATING OF OPEN FLUX TUBES IN THE CHROMOSPHERE, CORONA, AND SOLAR WIND

    Energy Technology Data Exchange (ETDEWEB)

    Woolsey, L. N.; Cranmer, S. R., E-mail: lwoolsey@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States)

    2015-10-01

    We investigate several key questions of plasma heating in open-field regions of the corona that connect to the solar wind. We present results for a model of Alfvén-wave-driven turbulence for three typical open magnetic field structures: a polar coronal hole, an open flux tube neighboring an equatorial streamer, and an open flux tube near a strong-field active region. We compare time-steady, one-dimensional turbulent heating models against fully time-dependent three-dimensional reduced-magnetohydrodynamic modeling with the BRAID code. We find that the time-steady results agree well with time-averaged results from BRAID. The time dependence allows us to investigate the variability of the magnetic fluctuations and of the heating in the corona. The high-frequency tail of the power spectrum of fluctuations forms a power law whose exponent varies with height, and we discuss the possible physical explanation for this behavior. The variability in the heating rate is bursty and nanoflare-like in nature, and we analyze the amount of energy lost via dissipative heating in transient events throughout the simulation. The average energy in these events is 10^21.91 erg, within the “picoflare” range, and many events reach classical “nanoflare” energies. We also estimated the multithermal distribution of temperatures that would result from the heating-rate variability, and found good agreement with observed widths of coronal differential emission measure distributions. The results of the modeling presented in this paper provide compelling evidence that turbulent heating in the solar atmosphere by Alfvén waves accelerates the solar wind in open flux tubes.

  13. Parameterizing unconditional skewness in models for financial time series

    DEFF Research Database (Denmark)

    He, Changli; Silvennoinen, Annastiina; Teräsvirta, Timo

    In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series such as returns is skewed. Therefore it is of importance to know what properties a model should possess if it is to accommodate...

  14. Forest - added Turbulence: A parametric study on Turbulence intensity in and around forests

    International Nuclear Information System (INIS)

    Pedersen, Henrik Sundgaard; Langreder, Wiebke

    2007-01-01

    The scope of the investigation is to take on-site measured wind data from a number of sites inside and close to forests. From the collected on-site data the ambient turbulence intensity is calculated and analysed as a function of the distance to the forest and the height above it. From this forest turbulence intensity database it is possible to get an overview of the general behaviour of turbulence above and downstream of a forest. The database currently consists of 65 measurement points from around the globe, and it will be continually updated as relevant sites become available. Using the database, a number of questions can be answered. How does the ambient turbulence intensity decay with height? What does the turbulence profile look like as a function of wind speed? Is it generally the case that high wind speeds create movement in the canopy tops, resulting in higher turbulence? How does the ambient turbulence intensity decay at different heights as a function of distance to the forest? From the forest turbulence database it can be seen that, in general, the majority of the turbulence intensity created by the forest is confined within 5 times the forest height in the vertical direction and within 500 meters downstream of the forest edge in the horizontal direction. Outside these boundaries the ambient turbulence intensity rapidly approaches normal values.

  15. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  16. Flux surface shaping effects on tokamak edge turbulence and flows

    Energy Technology Data Exchange (ETDEWEB)

    Kendl, A. [Innsbruck Univ., Institut fuer Theoretische Physik, Association EURATOM (Austria); Scott, B.D. [Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, Garching bei Muenchen (Germany)

    2004-07-01

    The influence of shaping of magnetic flux surfaces in tokamaks on gyro-fluid edge turbulence is studied numerically. Magnetic field shaping in tokamaks is mainly due to elongation, triangularity, shift and the presence of a divertor X-point. A series of tokamak configurations with varying elongation 1 ≤ κ ≤ 2 and triangularity 0 ≤ δ ≤ 0.4, and an actual ASDEX Upgrade divertor configuration are obtained with the equilibrium code HELENA and implemented into the gyro-fluid turbulence code GEM. The study finds minimal impact on the zonal flow physics itself, but strong impact on the turbulence and transport. (authors)

  17. Flux surface shaping effects on tokamak edge turbulence and flows

    International Nuclear Information System (INIS)

    Kendl, A.; Scott, B.D.

    2004-01-01

    The influence of shaping of magnetic flux surfaces in tokamaks on gyro-fluid edge turbulence is studied numerically. Magnetic field shaping in tokamaks is mainly due to elongation, triangularity, shift and the presence of a divertor X-point. A series of tokamak configurations with varying elongation 1 ≤ κ ≤ 2 and triangularity 0 ≤ δ ≤ 0.4, and an actual ASDEX Upgrade divertor configuration are obtained with the equilibrium code HELENA and implemented into the gyro-fluid turbulence code GEM. The study finds minimal impact on the zonal flow physics itself, but strong impact on the turbulence and transport. (authors)

  18. The Prediction of Teacher Turnover Employing Time Series Analysis.

    Science.gov (United States)

    Costa, Crist H.

    The purpose of this study was to combine knowledge of teacher demographic data with time-series forecasting methods to predict teacher turnover. Moving averages and exponential smoothing were used to forecast discrete time series. The study used data collected from the 22 largest school districts in Iowa, designated as FACT schools. Predictions…
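
    The two forecasting devices named above, moving averages and simple exponential smoothing, reduce to a few lines of code. The turnover figures below are hypothetical, purely to show the mechanics:

```python
import numpy as np

def moving_average_forecast(x, window=3):
    """One-step-ahead forecast: the mean of the last `window` observations."""
    return float(np.mean(x[-window:]))

def exp_smoothing_forecast(x, alpha=0.3):
    """Simple exponential smoothing; the final smoothed level is the forecast."""
    level = float(x[0])
    for obs in x[1:]:
        level = alpha * obs + (1.0 - alpha) * level
    return level

# Hypothetical annual teacher-turnover counts for one district.
turnover = np.array([120.0, 115.0, 130.0, 125.0, 128.0, 132.0])
ma = moving_average_forecast(turnover, window=3)   # mean of the last three years
es = exp_smoothing_forecast(turnover, alpha=0.3)
```

    The smoothing constant alpha trades responsiveness against noise suppression; small alpha weights the full history more heavily, which suits slowly drifting demographic series.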

  19. Stacked Heterogeneous Neural Networks for Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Florin Leon

    2010-01-01

    Full Text Available A hybrid model for time series forecasting is proposed. It is a stacked neural network containing two multilayer perceptrons: a standard one with bipolar sigmoid activation functions, and another with an exponential activation function in the output layer. As the case studies show, the proposed stacked hybrid neural model performs well on a variety of benchmark time series. The combination of the weights of the two stack components that leads to optimal performance is also studied.

  20. Chaotic time series prediction: From one to another

    International Nuclear Information System (INIS)

    Zhao Pengfei; Xing Lei; Yu Jun

    2009-01-01

    In this Letter, a new local linear prediction model is proposed to predict a chaotic time series of a component x(t) by using the chaotic time series of another component y(t) in the same system with x(t). Our approach is based on the phase space reconstruction coming from the Takens embedding theorem. To illustrate our results, we present an example of Lorenz system and compare with the performance of the original local linear prediction model.

  1. Grammar-based feature generation for time-series prediction

    CERN Document Server

    De Silva, Anthony Mihirana

    2015-01-01

    This book proposes a novel approach for time-series prediction using machine learning techniques with automatic feature generation. Application of machine learning techniques to predict time-series continues to attract considerable attention due to the difficulty of the prediction problems compounded by the non-linear and non-stationary nature of the real world time-series. The performance of machine learning techniques, among other things, depends on suitable engineering of features. This book proposes a systematic way for generating suitable features using context-free grammar. A number of feature selection criteria are investigated and a hybrid feature generation and selection algorithm using grammatical evolution is proposed. The book contains graphical illustrations to explain the feature generation process. The proposed approaches are demonstrated by predicting the closing price of major stock market indices, peak electricity load and net hourly foreign exchange client trade volume. The proposed method ...

  2. Forecasting autoregressive time series under changing persistence

    DEFF Research Database (Denmark)

    Kruse, Robinson

    Changing persistence in time series models means that a structural change from nonstationarity to stationarity or vice versa occurs over time. Such a change has important implications for forecasting, as negligence may lead to inaccurate model predictions. This paper derives generally applicable...

  3. A model for the response of vertical axis wind turbines to turbulent flow: Parts 1 and 2

    Science.gov (United States)

    Malcolm, D. R.

    1988-07-01

    This report describes a project intended to incorporate the effects of atmospheric turbulence into the structural response of Darrieus-rotor, vertical-axis wind turbines. The basis of the technique is the generation of a suitable time series of wind velocities, which is passed through a double multiple-streamtube aerodynamic representation of the rotor. The aerodynamic loads are decomposed into components of the real eigenvectors of the rotor and subsequently into full-power and cross-spectral densities. These modal spectra are submitted as input to a modified NASTRAN random load analysis, and the power spectra of selected responses are obtained. This procedure appears to be successful. Results at zero turbulence agree with alternative solutions, and when turbulence is included, the predicted stress spectra for the Indal 6400 rotor are in good agreement with field data. The model predicts that the effect of turbulence on harmonic frequency peaks and on all lead-lag bending will not be great. However, it appears that a turbulence intensity of only 11 percent can almost double the rms of cyclic flatwise blade bending.

  4. Recurrent Neural Networks for Multivariate Time Series with Missing Values.

    Science.gov (United States)

    Che, Zhengping; Purushotham, Sanjay; Cho, Kyunghyun; Sontag, David; Liu, Yan

    2018-04-17

    Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
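
    The masking and time-interval representations at the heart of GRU-D can be illustrated with its input-decay rule: a missing value decays from the last observed value toward the empirical mean as the gap since the last observation grows. This is a NumPy sketch of that one mechanism with a fixed decay rate (in GRU-D the decay weights are learned), not the full recurrent model; `decay_impute` is a name introduced here:

```python
import numpy as np

def decay_impute(x, mask, w=0.5):
    """GRU-D-style input decay: a missing value decays from the last observed
    value toward the empirical mean as the time since that observation grows."""
    x_mean = x[mask].mean()
    out = np.empty(len(x))
    last, delta = x_mean, 0.0
    for t in range(len(x)):
        if mask[t]:
            out[t] = x[t]
            last, delta = x[t], 0.0
        else:
            delta += 1.0
            gamma = np.exp(-w * delta)     # fixed here; a trainable weight in GRU-D
            out[t] = gamma * last + (1.0 - gamma) * x_mean
    return out

# Toy clinical-style signal with missing values (NaN) and its observation mask.
x = np.array([1.0, np.nan, np.nan, 2.0, np.nan, 3.0])
mask = ~np.isnan(x)
filled = decay_impute(np.nan_to_num(x), mask)
# Observed points pass through; gaps drift from the last value toward the mean.
```

    Feeding the decayed input together with the mask and time-interval features into the recurrent cell is what lets the model exploit informative missingness rather than discard it.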

  5. Conditional time series forecasting with convolutional neural networks

    NARCIS (Netherlands)

    A. Borovykh (Anastasia); S.M. Bohte (Sander); C.W. Oosterlee (Cornelis)

    2017-01-01

    Forecasting financial time series using past observations has been a significant topic of interest. While temporal relationships in the data exist, they are difficult to analyze and predict accurately due to the non-linear trends and noise present in the series. We propose to learn these

  6. Time Series Analysis of Wheat Futures Reward in China

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Unlike most existing research, which focuses on single futures contracts and lacks comparisons across periods, this paper describes the statistical characteristics of the wheat futures reward time series of the Zhengzhou Commodity Exchange over the recent three years. Beyond basic statistical analysis, the paper used GARCH and EGARCH models to describe the series exhibiting the ARCH effect, and analyzed the persistence of volatility shocks and the leverage effect. The results showed that, compared with a normal distribution, the wheat futures reward series is non-normal, with a leptokurtic, heavy-tailed distribution. The study also found that two of the reward series had no autocorrelation, and that among the six correlated series, three presented the ARCH effect. Using the autoregressive distributed lag model, the GARCH model and the EGARCH model, the paper demonstrates the persistence of volatility shocks and the leverage effect on the wheat futures reward time series. The results reveal that, on the one hand, the statistical characteristics of the wheat futures reward are broadly similar to those of mature futures markets abroad; on the other hand, they reflect some shortcomings, such as the immaturity of, and excessive government control over, the Chinese futures market.
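
    The ARCH-effect diagnostics referred to above can be illustrated by simulating a GARCH(1,1) process and checking two of its signature properties: excess kurtosis (a leptokurtic, heavy-tailed marginal) and positive autocorrelation of squared returns (persistence of volatility shocks). Parameter values are illustrative, not estimates from the wheat data:

```python
import numpy as np

def simulate_garch11(n, omega=0.05, alpha=0.1, beta=0.85, seed=6):
    """Simulate r_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * r_(t-1)^2 + beta * sigma_(t-1)^2."""
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    var = omega / (1.0 - alpha - beta)     # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(var) * rng.standard_normal()
        var = omega + alpha * r[t] ** 2 + beta * var
    return r

r = simulate_garch11(20000)
kurtosis = float(np.mean(r ** 4) / np.mean(r ** 2) ** 2)    # 3 for a Gaussian
acf_sq = float(np.corrcoef(r[:-1] ** 2, r[1:] ** 2)[0, 1])  # ARCH effect if > 0
```

    Even with Gaussian innovations, the time-varying conditional variance fattens the tails of the marginal distribution, which is exactly the leptokurtic, heavy-tailed behaviour reported for the reward series.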

  7. Time series analysis of continuous-wave coherent Doppler Lidar wind measurements

    DEFF Research Database (Denmark)

    Sjöholm, Mikael; Mikkelsen, Torben; Mann, Jakob

    2008-01-01

    The influence of spatial volume averaging of a focused 1.55 μm continuous-wave coherent Doppler Lidar on observed wind turbulence measured in the atmospheric surface layer over homogeneous terrain is described and analysed. Comparison of Lidar-measured turbulent spectra with spectra simultaneou...

  8. forecasting with nonlinear time series model: a monte-carlo

    African Journals Online (AJOL)

    PUBLICATIONS1

    ...generated recursively up to any step greater than one. For a nonlinear time series model, a point forecast for step one can be done easily as in the linear case, but forecasting for a step greater than or equal to ..... London. Franses, P. H. (1998). Time series models for business and economic forecasting, Cambridge University Press.

  9. Time series analysis of temporal networks

    Science.gov (United States)

    Sikdar, Sandipan; Ganguly, Niloy; Mukherjee, Animesh

    2016-01-01

    A common but important feature of all real-world networks is that they are temporal in nature, i.e., the network structure changes over time. Due to this dynamic nature, it becomes difficult to propose suitable growth models that can explain the various important characteristic properties of these networks. In fact, in many application-oriented studies, only knowing these properties is sufficient. For instance, if one wishes to launch a targeted attack on a network, this can be done even without knowledge of the full network structure; rather, an estimate of some of the properties is sufficient to launch the attack. In this paper we show that even if the network structure at a future time point is not available, one can still manage to estimate its properties. We propose a novel method to map a temporal network to a set of time series instances, analyze them, and, using a standard time series forecast model, predict the properties of the temporal network at a later time instance. To this end, we consider eight properties, such as number of active nodes, average degree, clustering coefficient, etc., and apply our prediction framework to them. We mainly focus on the temporal network of human face-to-face contacts and observe that it represents a stochastic process with memory that can be modeled as Auto-Regressive-Integrated-Moving-Average (ARIMA). We use cross-validation techniques to find the percentage accuracy of our predictions. An important observation is that the frequency-domain properties of the time series obtained from spectrogram analysis could be used to refine the prediction framework by identifying beforehand the cases where the error in prediction is likely to be high. This leads to an improvement of 7.96% (for error level ≤20%) in prediction accuracy on average across all datasets. As an application we show how such a prediction scheme can be used to launch targeted attacks on temporal networks. Contribution to the Topical Issue
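
    The prediction framework, mapping a network property to a time series and fitting a standard forecast model, can be sketched with the simplest member of the ARIMA family: an AR(1) fitted by least squares to a synthetic "number of active nodes" series (hypothetical data, not the face-to-face contact datasets):

```python
import numpy as np

def ar1_forecast(x):
    """Fit x_t = c + phi * x_(t-1) by least squares; return the one-step forecast."""
    A = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    c, phi = np.linalg.lstsq(A, x[1:], rcond=None)[0]
    return float(c + phi * x[-1])

# Hypothetical daily counts of active nodes in a contact network, generated
# as a stationary AR(1) (a memoryful, ARIMA-like property series).
rng = np.random.default_rng(7)
n = 300
active = np.empty(n)
active[0] = 100.0
for t in range(1, n):
    active[t] = 30.0 + 0.7 * active[t - 1] + 3.0 * rng.standard_normal()

pred = ar1_forecast(active)   # forecast of the next day's active-node count
```

    In practice each of the eight properties would get its own fitted model, and the forecasts together provide the property estimates needed, e.g., to plan a targeted attack without the future network structure.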

  10. The Hierarchical Spectral Merger Algorithm: A New Time Series Clustering Procedure

    KAUST Repository

    Euán, Carolina; Ombao, Hernando; Ortega, Joaquín

    2018-01-01

    We present a new method for time series clustering which we call the Hierarchical Spectral Merger (HSM) method. This procedure is based on the spectral theory of time series and identifies series that share similar oscillations or waveforms
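A minimal sketch of the core idea — clustering series by the similarity of their spectra — can be written with a pure-Python periodogram and a total variation distance between normalized spectra. This is an illustration of spectral clustering in general, not the HSM algorithm itself; the distance measure and test signals are assumptions of the example.

```python
import cmath, math, random

def periodogram(x):
    """Normalized periodogram (sums to 1) over positive Fourier frequencies."""
    n = len(x)
    mean = sum(x) / n
    p = []
    for k in range(1, n // 2):
        s = sum((x[t] - mean) * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        p.append(abs(s) ** 2)
    tot = sum(p)
    return [v / tot for v in p]

def tv_distance(p, q):
    """Total variation distance between two normalized spectra."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def closest_pair(specs):
    """Indices of the two series with the most similar spectra
    (the pair a hierarchical merger would combine first)."""
    pairs = [(tv_distance(specs[i], specs[j]), i, j)
             for i in range(len(specs)) for j in range(i + 1, len(specs))]
    _, i, j = min(pairs)
    return i, j

# Two noisy slow oscillations plus one fast one: the slow pair should merge first.
random.seed(0)
n = 128
slow = [[math.sin(2 * math.pi * 4 * t / n) + random.gauss(0, 0.2) for t in range(n)]
        for _ in range(2)]
fast = [math.sin(2 * math.pi * 20 * t / n) + random.gauss(0, 0.2) for t in range(n)]
i, j = closest_pair([periodogram(s) for s in slow + [fast]])
```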

  11. Notes on economic time series analysis system theoretic perspectives

    CERN Document Server

    Aoki, Masanao

    1983-01-01

    In seminars and graduate level courses I have had several opportunities to discuss modeling and analysis of time series with economists and economic graduate students during the past several years. These experiences made me aware of a gap between what economic graduate students are taught about vector-valued time series and what is available in recent system literature. Wishing to fill or narrow the gap that I suspect is more widely spread than my personal experiences indicate, I have written these notes to augment and reorganize materials I have given in these courses and seminars. I have endeavored to present, in as much a self-contained way as practicable, a body of results and techniques in system theory that I judge to be relevant and useful to economists interested in using time series in their research. I have essentially acted as an intermediary and interpreter of system theoretic results and perspectives in time series by filtering out non-essential details, and presenting coherent accounts of wha...

  12. Simultaneous identification of transfer functions and combustion noise of a turbulent flame

    Science.gov (United States)

    Merk, M.; Jaensch, S.; Silva, C.; Polifke, W.

    2018-05-01

    The Large Eddy Simulation/System Identification (LES/SI) approach makes it possible to deduce a flame transfer function (FTF) from LES of turbulent reacting flow: time series of fluctuations of reference velocity and global heat release rate, resulting from broad-band excitation of a simulated turbulent flame, are post-processed via SI techniques to derive a low-order model of the flame dynamics, from which the FTF is readily deduced. The current work investigates an extension of the established LES/SI approach: in addition to estimation of the FTF, a low-order model for the combustion noise source is deduced from the same time series data. By incorporating such a noise model into a linear thermoacoustic model, it is possible to predict the overall level as well as the spectral distribution of sound pressure in confined combustion systems that do not exhibit self-excited thermoacoustic instability. A variety of model structures for estimation of a noise model are tested in the present study. The suitability and quality of these model structures are compared against each other, and their sensitivity to certain time series properties is studied. The influence of time series length, signal-to-noise ratio, and the acoustic reflection coefficient of the boundary conditions on the identification is examined. It is shown that the Box-Jenkins model structure is superior to simpler approaches for the simultaneous identification of models that describe the FTF as well as the combustion noise source. Beyond the question of the most adequate model structure, the choice of optimal model order is addressed, as in particular the optimal parametrization of the noise model is not obvious. Akaike's Information Criterion and a model residual analysis are applied to draw qualitative and quantitative conclusions on the most suitable model order. All investigations are based on a surrogate data model, which allows a Monte Carlo study across a large parameter space with modest
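The basic SI step — estimating a low-order input/output model from broadband excitation data — can be sketched with a least-squares FIR fit. This is a simpler model structure than the Box-Jenkins structure the paper recommends, and the "flame" below is a known synthetic FIR system with additive noise, invented purely to show that the identification recovers the true coefficients.

```python
import random

def identify_fir(u, y, order):
    """Least-squares FIR fit y[t] ~ sum_k h[k] * u[t-k], via the normal equations."""
    n = order + 1
    rows = range(order, len(u))
    A = [[sum(u[t - i] * u[t - j] for t in rows) for j in range(n)] for i in range(n)]
    b = [sum(u[t - i] * y[t] for t in rows) for i in range(n)]
    # Gaussian elimination on the small, well-conditioned normal system.
    for c in range(n):
        piv = A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / piv
            A[r] = [A[r][k] - f * A[c][k] for k in range(n)]
            b[r] -= f * b[c]
    h = [0.0] * n
    for r in reversed(range(n)):
        h[r] = (b[r] - sum(A[r][k] * h[k] for k in range(r + 1, n))) / A[r][r]
    return h

# Broadband excitation of a known synthetic system plus measurement noise.
random.seed(2)
u = [random.gauss(0, 1) for _ in range(4000)]
y = [0.0, 0.0] + [0.5 * u[t] + 0.3 * u[t - 1] - 0.2 * u[t - 2] + random.gauss(0, 0.05)
                  for t in range(2, len(u))]
h = identify_fir(u, y, order=2)     # should recover roughly [0.5, 0.3, -0.2]
```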

  13. Is Insecurity Worse for Well-Being in Turbulent Times? Mental Health in Context

    Science.gov (United States)

    Lam, Jack; Fan, Wen; Moen, Phyllis

    2014-01-01

    Using General Social Survey data, we examine whether any association between job insecurity and well-being is contingent on economic climate (comparing those interviewed in turbulent 2010 vs. pre-recessionary 2006), as well as income and gender. We find respondents with higher levels of job insecurity in 2010 reported lower levels of happiness compared to those similarly insecure in 2006. The positive relationship between job insecurity and days of poor mental health becomes more pronounced for those in the 3rd quartile of personal income in 2010, suggesting middle-class vulnerability during the economic downturn. Men (but not women) with higher insecurity report more days of poor mental health in both 2006 and 2010. These findings reinforce a “cycles of control” theoretical approach, given the mental health-job insecurity relationship is heightened for workers in turbulent times. PMID:25436177

  14. Dynamical analysis and visualization of tornadoes time series.

    Directory of Open Access Journals (Sweden)

    António M Lopes

    Full Text Available In this paper we analyze the behavior of tornado time series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long-range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns.

  15. Dynamical analysis and visualization of tornadoes time series.

    Science.gov (United States)

    Lopes, António M; Tenreiro Machado, J A

    2015-01-01

    In this paper we analyze the behavior of tornado time series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long-range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns.

  16. "Observation Obscurer" - Time Series Viewer, Editor and Processor

    Science.gov (United States)

    Andronov, I. L.

    The program is described, which contains a set of subroutines suitable for fast viewing and interactive filtering and processing of regularly and irregularly spaced time series. Being a 32-bit DOS application, it may be used as a default fast viewer/editor of time series in any computer shell ("commander") or in Windows. It allows one to view the data in "time" or "phase" mode, to remove ("obscure") or filter outstanding bad points, to make scale transformations, and to smooth using a few methods (e.g. mean with phase binning, determination of the statistically optimal number of phase bins, or a "running parabola" fit (Andronov, 1997, As. Ap. Suppl., 125, 207)), and to perform time series analysis using methods such as correlation, autocorrelation and histogram analysis, determination of extrema, etc. Some features have been developed specially for variable star observers, e.g. the barycentric correction and the creation and fast analysis of "O-C" diagrams. The manual for "hot keys" is presented. The computer code was compiled with 32-bit Free Pascal (www.freepascal.org).
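The "phase mode with phase binning" operation mentioned above amounts to folding an irregularly sampled series on a trial period and averaging within phase bins. The sketch below is a generic illustration of that idea (the period, sampling, and noise level are invented), not a port of the Pascal program.

```python
import math, random

def phase_fold(times, values, period, nbins=10):
    """Fold observations on a trial period and average within phase bins."""
    sums = [0.0] * nbins
    counts = [0] * nbins
    for t, v in zip(times, values):
        phase = (t / period) % 1.0
        b = min(int(phase * nbins), nbins - 1)   # guard against phase == 1.0
        sums[b] += v
        counts[b] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Irregularly sampled variable-star-like signal with period 2.5 (illustrative).
random.seed(4)
times = sorted(random.uniform(0, 100) for _ in range(500))
values = [math.sin(2 * math.pi * t / 2.5) + random.gauss(0, 0.1) for t in times]
curve = phase_fold(times, values, 2.5)   # binned mean light curve over one cycle
```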

  17. Premixed autoignition in compressible turbulence

    Science.gov (United States)

    Konduri, Aditya; Kolla, Hemanth; Krisman, Alexander; Chen, Jacqueline

    2016-11-01

    Prediction of chemical ignition delay in an autoignition process is critical in combustion systems like compression ignition engines and gas turbines. Often, ignition delay times measured in simple homogeneous experiments or homogeneous calculations are not representative of actual autoignition processes in complex turbulent flows. This is due to the presence of turbulent mixing, which results in fluctuations in thermodynamic properties as well as chemical composition. In the present study the effect of fluctuations of thermodynamic variables on the ignition delay is quantified with direct numerical simulations of compressible isotropic turbulence. A premixed syngas-air mixture is used to remove the effects of inhomogeneity in the chemical composition. Preliminary results show a significant spatial variation in the ignition delay time. We analyze the topology of autoignition kernels and identify the influence of extreme events resulting from compressibility and intermittency. The dependence of ignition delay time on Reynolds and turbulent Mach numbers is also quantified. Supported by Basic Energy Sciences, Dept of Energy, United States.

  18. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
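The "local level" component of the structural model chosen above has a compact Kalman-filter form. The sketch below implements only that component (the seasonal part is omitted), on a synthetic monthly-count series; the variances and the data are assumptions of the example, not the paper's estimates.

```python
import random

def local_level_filter(y, var_eps, var_eta):
    """Kalman filter for the local level model:
       y[t] = mu[t] + eps_t,   mu[t+1] = mu[t] + eta_t."""
    mu, p = y[0], var_eps            # simple initialization at the first point
    level = [mu]
    for obs in y[1:]:
        p = p + var_eta              # predict
        k = p / (p + var_eps)        # Kalman gain
        mu = mu + k * (obs - mu)     # update with the innovation
        p = (1 - k) * p
        level.append(mu)
    return level

# Monthly-accident-like counts: a slowly drifting level plus observation noise.
random.seed(5)
true_mu, y = 400.0, []
for _ in range(144):                 # 12 years of monthly data (illustrative)
    true_mu += random.gauss(0, 2)
    y.append(true_mu + random.gauss(0, 20))
level = local_level_filter(y, var_eps=400.0, var_eta=4.0)
```

The filtered level is much smoother than the observations, which is the point of the local level specification.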

  19. Multiscale Poincaré plots for visualizing the structure of heartbeat time series.

    Science.gov (United States)

    Henriques, Teresa S; Mariani, Sara; Burykin, Anton; Rodrigues, Filipa; Silva, Tiago F; Goldberger, Ary L

    2016-02-09

    Poincaré delay maps are widely used in the analysis of cardiac interbeat interval (RR) dynamics. To facilitate visualization of the structure of these time series, we introduce multiscale Poincaré (MSP) plots. Starting with the original RR time series, the method employs a coarse-graining procedure to create a family of time series, each of which represents the system's dynamics in a different time scale. Next, the Poincaré plots are constructed for the original and the coarse-grained time series. Finally, as an optional adjunct, color can be added to each point to represent its normalized frequency. We illustrate the MSP method on simulated Gaussian white and 1/f noise time series. The MSP plots of 1/f noise time series reveal relative conservation of the phase space area over multiple time scales, while those of white noise show a marked reduction in area. We also show how MSP plots can be used to illustrate the loss of complexity when heartbeat time series from healthy subjects are compared with those from patients with chronic (congestive) heart failure syndrome or with atrial fibrillation. This generalized multiscale approach to Poincaré plots may be useful in visualizing other types of time series.
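The coarse-graining and delay-map construction described above can be sketched directly. The example uses simulated white noise (as in the paper's simulated illustrations); the scales and series length are arbitrary choices of the sketch. For white noise the spread of the Poincaré cloud shrinks with scale, which is the "marked reduction in area" the authors report.

```python
import random, statistics

def coarse_grain(x, scale):
    """Non-overlapping window means: the series at a coarser time scale."""
    return [sum(x[i:i + scale]) / scale for i in range(0, len(x) - scale + 1, scale)]

def poincare_points(x):
    """Delay-map pairs (x[n], x[n+1]) that make up a Poincaré plot."""
    return list(zip(x[:-1], x[1:]))

random.seed(6)
white = [random.gauss(0, 1) for _ in range(3000)]
plots = {s: poincare_points(coarse_grain(white, s)) for s in (1, 2, 4, 8)}

# Spread of the cloud along one axis, per scale: for white noise it decays
# roughly like 1/sqrt(scale); for 1/f noise it would stay nearly constant.
spread = {s: statistics.pstdev([p[0] for p in plots[s]]) for s in plots}
```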

  20. Analysis of chaos in plasma turbulence

    DEFF Research Database (Denmark)

    Pedersen, T.S.; Michelsen, Poul; Juul Rasmussen, J.

    1996-01-01

    -stationary turbulent state is reached in a finite time, independent of the initial conditions. Different regimes of the turbulent state can be obtained by varying the coupling parameter C, related to the parallel electron dynamics. The turbulence is described by using particle tracking and tools from chaos analysis...

  1. Statistical properties of turbulence in a toroidal magnetized ECR plasma

    International Nuclear Information System (INIS)

    Yu Yi; Lu Ronghua; Wang Zhijiang; Wen Yizhi; Yu Changxuan; Wan Shude; Liu, Wandong

    2008-01-01

    The statistical analyses of fluctuation data measured by electrostatic-probe arrays clearly show that self-organized criticality (SOC) avalanches are not the dominant behavior in a toroidal ECR plasma in the SMT (Simple Magnetic Torus) mode of the KT-5D device. The f^-1 index region in the auto-correlation spectra of the floating potential V_f and the ion saturation current I_s, which is a fingerprint of a SOC system, spans only a narrow frequency band. By investigating the Hurst exponents of increasingly coarse-grained time series, we find that at time scales τ>100 μs there is no, or only a very weak, long-range correlation over two decades in τ. The difference between the PDFs of I_s and V_f clearly shows the more global nature of the latter. The transport flux induced by the turbulence suggests that the natural intermittency of turbulent transport may be independent of the avalanches induced by near-criticality. The drift instability is dominant in a SMT plasma generated by means of ECR discharges
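Hurst-exponent estimation of the kind used above is commonly done with rescaled-range (R/S) analysis: compute R/S over windows of increasing size and fit the log-log slope. The sketch below applies it to white noise (expected H near 0.5, with a known small-sample upward bias); the window sizes and series are choices of the example, not the probe data.

```python
import math, random

def rescaled_range(x):
    """R/S statistic of one window: range of cumulative deviations over std."""
    m = sum(x) / len(x)
    z, cum = 0.0, []
    for v in x:
        z += v - m
        cum.append(z)
    r = max(cum) - min(cum)
    s = (sum((v - m) ** 2 for v in x) / len(x)) ** 0.5
    return r / s if s > 0 else 0.0

def hurst(x, window_sizes=(16, 32, 64, 128, 256)):
    """Hurst exponent: slope of log(mean R/S) versus log(window size)."""
    pts = []
    for w in window_sizes:
        rs = [rescaled_range(x[i:i + w]) for i in range(0, len(x) - w + 1, w)]
        pts.append((math.log(w), math.log(sum(rs) / len(rs))))
    n = len(pts)
    mx = sum(a for a, _ in pts) / n
    my = sum(b for _, b in pts) / n
    return (sum((a - mx) * (b - my) for a, b in pts)
            / sum((a - mx) ** 2 for a, _ in pts))

random.seed(7)
noise = [random.gauss(0, 1) for _ in range(4096)]
h = hurst(noise)   # near 0.5: no long-range correlation
```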

  2. Cosmic turbulence

    International Nuclear Information System (INIS)

    Drury, L.O.; Stewart, J.M.

    1976-01-01

    A generalization of a transformation due to Kurskov and Ozernoi is used to rewrite the usual equations governing subsonic turbulence in Robertson-Walker cosmological models as Navier-Stokes equations with a time-dependent viscosity. This paper first rederives some well-known results in a very simple way by means of this transformation. The main result however is that the establishment of a Kolmogorov spectrum at recombination appears to be incompatible with subsonic turbulence. The conditions after recombination are also discussed briefly. (author)

  3. Time series patterns and language support in DBMS

    Science.gov (United States)

    Telnarova, Zdenka

    2017-07-01

    This contribution is focused on the pattern type Time Series as a semantically rich representation of data. Some examples of implementations of this pattern type in traditional Database Management Systems are briefly presented. There are many approaches to manipulating and querying patterns. A crucial issue is a systematic approach to pattern management and a pattern-specific query language that takes the semantics of patterns into consideration. The query language SQL-TS for manipulating patterns is demonstrated on Time Series data.

  4. Two-fractal overlap time series: Earthquakes and market crashes

    Indian Academy of Sciences (India)

    velocity over the other and time series of stock prices. An anticipation method for some of the crashes has been proposed here, based on these observations. Keywords. Cantor set; time series; earthquake; market crash. PACS Nos 05.00; 02.50.-r; 64.60; 89.65.Gh; 95.75.Wx. 1. Introduction. Capturing dynamical patterns of ...

  5. Relevant criteria for testing the quality of turbulence models

    DEFF Research Database (Denmark)

    Frandsen, Sten Tronæs; Ejsing Jørgensen, Hans; Sørensen, J.D.

    2007-01-01

    Seeking relevant criteria for testing the quality of turbulence models, the scale of turbulence and the gust factor have been estimated from data and compared with predictions from first-order models of these two quantities. It is found that the mean of the measured length scales is approx. 10...% smaller than the IEC model, for wind turbine hub height levels. The mean is only marginally dependent on trends in time series. It is also found that the coefficient of variation of the measured length scales is about 50%. 3-sec and 10-sec pre-averaging of wind speed data are relevant for MW-size wind... turbines when seeking wind characteristics that correspond to one blade and the entire rotor, respectively. For heights exceeding 50-60 m the gust factor increases with wind speed. For heights larger than 60-80 m, present assumptions on the value of the gust factor are significantly conservative, both for 3...

  6. Nonlinear time series analysis with R

    CERN Document Server

    Huffaker, Ray; Rosa, Rodolfo

    2017-01-01

    In the process of data analysis, the investigator often faces highly volatile and random-appearing observed data. A vast body of literature shows that the assumption of underlying stochastic processes did not necessarily represent the nature of the processes under investigation and, when other tools were used, deterministic features emerged. Non Linear Time Series Analysis (NLTS) allows researchers to test whether observed volatility conceals systematic nonlinear behavior, and to rigorously characterize governing dynamics. Behavioral patterns detected by nonlinear time series analysis, along with scientific principles and other expert information, guide the specification of mechanistic models that serve to explain real-world behavior rather than merely reproducing it. Often there is a misconception regarding the complexity of the level of mathematics needed to understand and utilize the tools of NLTS (for instance Chaos theory). However, the mathematics used in NLTS is much simpler than many other subjec...

  7. Zero-crossing statistics for non-Markovian time series.

    Science.gov (United States)

    Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias

    2018-03-01

    In applications spanning from image analysis and speech recognition to energy dissipation in turbulence and time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula that gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.
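The Rice-type result referenced above has a simple discrete-time analogue that is easy to check by simulation: for a stationary Gaussian sequence, the probability of a sign change between consecutive samples is arccos(ρ(1))/π, where ρ(1) is the lag-1 autocorrelation (equal to the coefficient a for an AR(1) process). The sketch below verifies this for a = 0.6; the coefficient and sample size are choices of the example.

```python
import math, random

def crossing_rate(x):
    """Fraction of consecutive pairs with a sign change (a zero crossing)."""
    return sum(1 for a, b in zip(x, x[1:]) if a * b < 0) / (len(x) - 1)

random.seed(8)
a, x = 0.6, [0.0]
for _ in range(200000):
    x.append(a * x[-1] + random.gauss(0, 1))
x = x[1000:]                         # discard the transient

empirical = crossing_rate(x)
# Discrete-time analogue of Rice's result for a stationary Gaussian sequence:
predicted = math.acos(a) / math.pi   # about 0.295 for a = 0.6
```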

  9. System for simulating fluctuation diagnostics for application to turbulence computations

    International Nuclear Information System (INIS)

    Bravenec, R.V.; Nevins, W.M.

    2006-01-01

    Present-day nonlinear microstability codes are able to compute the saturated fluctuations of a turbulent fluid versus space and time, whether the fluid be liquid, gas, or plasma. They are therefore able to determine turbulence-induced fluid (or particle) and energy fluxes. These codes, however, must be tested against experimental data not only with respect to transport but also with respect to the characteristics of the fluctuations. The latter is challenging because of limitations in the diagnostics (e.g., finite spatial resolution) and the fact that the diagnostics typically do not measure exactly the quantities that the codes compute. In this work, we present a system based on IDL® analysis and visualization software in which user-supplied 'diagnostic filters' are applied to the code outputs to generate simulated diagnostic signals. The same analysis techniques as applied to the measurements, e.g., digital time-series analysis, may then be applied to the synthesized signals. Their statistical properties, such as rms fluctuation level, mean wave numbers, phase and group velocities, correlation lengths and times, and in some cases full S(k,ω) spectra, can then be compared directly to those of the measurements
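The simplest possible "diagnostic filter" of the kind described — modelling finite spatial resolution as a smoothing kernel applied to the computed field — can be sketched as follows. The field is a synthetic correlated-noise stand-in for a code output, and the moving-average kernel is an assumption of the example; real synthetic diagnostics use instrument-specific point-spread functions.

```python
import random

def diagnostic_filter(signal, width):
    """Model finite spatial resolution as a moving-average 'point spread'."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def rms(x):
    m = sum(x) / len(x)
    return (sum((v - m) ** 2 for v in x) / len(x)) ** 0.5

# Synthetic "computed" fluctuation field: short-correlation AR(1) noise.
random.seed(10)
field = [0.0]
for _ in range(5000):
    field.append(0.5 * field[-1] + random.gauss(0, 1))
synthetic = diagnostic_filter(field, width=9)
# Finite resolution attenuates the apparent fluctuation level: rms drops.
```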

  10. InSAR Deformation Time Series Processed On-Demand in the Cloud

    Science.gov (United States)

    Horn, W. B.; Weeden, R.; Dimarchi, H.; Arko, S. A.; Hogenson, K.

    2017-12-01

    During this past year, ASF has developed a cloud-based on-demand processing system known as HyP3 (http://hyp3.asf.alaska.edu/), the Hybrid Pluggable Processing Pipeline, for Synthetic Aperture Radar (SAR) data. The system makes it easy for a user who doesn't have the time or inclination to install and use complex SAR processing software to leverage SAR data in their research or operations. One such processing algorithm is generation of a deformation time series product, which is a series of images representing ground displacements over time, computed from a time series of interferometric SAR (InSAR) products. The set of software tools necessary to generate this useful product is difficult to install, configure, and use. Moreover, for a long time series with many images, the processing of just the interferograms can take days. Principally built by three undergraduate students at the ASF DAAC, the deformation time series processing relies on the new Amazon Batch service, which enables processing of jobs with complex interconnected dependencies in a straightforward and efficient manner. In the case of generating a deformation time series product from a stack of single-look complex SAR images, the system uses Batch to serialize the up-front processing, interferogram generation, optional tropospheric correction, and deformation time series generation. The most time-consuming portion is the interferogram generation, because even for a fairly small stack of images many interferograms need to be processed. By using AWS Batch, the interferograms are all generated in parallel; the entire process completes in hours rather than days. Additionally, the individual interferograms are saved in Amazon's cloud storage, so that when new data is acquired in the stack, an updated time series product can be generated with minimal additional processing.
This presentation will focus on the development techniques and enabling technologies that were used in developing the time

  11. Vector bilinear autoregressive time series model and its superiority ...

    African Journals Online (AJOL)

    In this research, a vector bilinear autoregressive time series model was proposed and used to model three revenue series (X1, X2, X3) . The “orders” of the three series were identified on the basis of the distribution of autocorrelation and partial autocorrelation functions and were used to construct the vector bilinear models.

  12. 25 years of time series forecasting

    NARCIS (Netherlands)

    de Gooijer, J.G.; Hyndman, R.J.

    2006-01-01

    We review the past 25 years of research into time series forecasting. In this silver jubilee issue, we naturally highlight results published in journals managed by the International Institute of Forecasters (Journal of Forecasting 1982-1985 and International Journal of Forecasting 1985-2005). During

  13. Markov Trends in Macroeconomic Time Series

    NARCIS (Netherlands)

    R. Paap (Richard)

    1997-01-01

    textabstractMany macroeconomic time series are characterised by long periods of positive growth, expansion periods, and short periods of negative growth, recessions. A popular model to describe this phenomenon is the Markov trend, which is a stochastic segmented trend where the slope depends on the

  14. Modeling seasonality in bimonthly time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1992-01-01

    textabstractA recurring issue in modeling seasonal time series variables is the choice of the most adequate model for the seasonal movements. One selection method for quarterly data is proposed in Hylleberg et al. (1990). Market response models are often constructed for bimonthly variables, and

  15. On clustering fMRI time series

    DEFF Research Database (Denmark)

    Goutte, Cyril; Toft, Peter Aundal; Rostrup, E.

    1999-01-01

    Analysis of fMRI time series is often performed by extracting one or more parameters for the individual voxels. Methods based, e.g., on various statistical tests are then used to yield parameters corresponding to probability of activation or activation strength. However, these methods do...

  16. The plasma transport equations derived by multiple time-scale expansions and turbulent transport. I. General theory

    International Nuclear Information System (INIS)

    Edenstrasser, J.W.

    1995-01-01

    A multiple time-scale derivative expansion scheme is applied to the dimensionless Fokker-Planck equation and to Maxwell's equations, where the parameter range of a typical fusion plasma was assumed. Within kinetic theory, the four time scales considered are those of Larmor gyration, particle transit, collisions, and classical transport. The corresponding magnetohydrodynamic (MHD) time scales are those of ion Larmor gyration, Alfvén, MHD collision, and resistive diffusion. The solution of the zeroth-order equations results in the force-free equilibria and ideal Ohm's law. The solution of the first-order equations leads, under the assumption of a weakly collisional plasma, to the ideal MHD equations. On the MHD-collision time scale, not only the full set of MHD transport equations is obtained, but also turbulent terms, where the related transport quantities are one order in the expansion parameter larger than those of classical transport. Finally, at the resistive diffusion time scale the known transport equations are arrived at, including, however, also turbulent contributions. copyright 1995 American Institute of Physics

  17. FALSE DETERMINATIONS OF CHAOS IN SHORT NOISY TIME SERIES. (R828745)

    Science.gov (United States)

    A method (NEMG) proposed in 1992 for diagnosing chaos in noisy time series with 50 or fewer observations entails fitting the time series with an empirical function which predicts an observation in the series from previous observations, and then estimating the rate of divergenc...

  18. Multiscale multifractal multiproperty analysis of financial time series based on Rényi entropy

    Science.gov (United States)

    Yujun, Yang; Jianping, Li; Yimei, Yang

    This paper introduces a multiscale multifractal multiproperty analysis based on Rényi entropy (3MPAR) method to analyze the short-range and long-range characteristics of financial time series, and then applies this method to five time series of five properties in four stock indices. Combining the two analysis techniques of Rényi entropy and multifractal detrended fluctuation analysis (MFDFA), the 3MPAR method focuses on the curves of Rényi entropy and the generalized Hurst exponent of five properties of four stock time series, which allows us to study more universal and subtle fluctuation characteristics of financial time series. By analyzing the curves of the Rényi entropy and the profiles of the logarithmic distribution of MFDFA of five properties of four stock indices, the 3MPAR method reveals some fluctuation characteristics of the financial time series and the stock markets. It also reveals richer information about the financial time series by comparing the profiles of five properties of four stock indices. In this paper, we focus not only on the multifractality of time series but also on the fluctuation characteristics of financial time series and the subtle differences between time series of different properties. We find that financial time series are far more complex than reported in some research works that use a single property of the time series.
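The Rényi entropy underlying the 3MPAR curves generalizes Shannon entropy with an order parameter q; scanning q weights frequent and rare values differently. The sketch below computes it from a value histogram of a synthetic "returns" series (the binning, series, and q values are assumptions of the example, not the paper's procedure); the resulting curve is non-increasing in q, a standard property.

```python
import math, random

def renyi_entropy(x, q, nbins=30):
    """Rényi entropy of order q from a histogram of the series values."""
    lo, hi = min(x), max(x)
    counts = [0] * nbins
    for v in x:
        b = min(int((v - lo) / (hi - lo) * nbins), nbins - 1)
        counts[b] += 1
    p = [c / len(x) for c in counts if c > 0]
    if q == 1.0:
        return -sum(pi * math.log(pi) for pi in p)       # Shannon limit
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

random.seed(9)
returns = [random.gauss(0, 1) for _ in range(20000)]     # stand-in for log-returns
curve = [renyi_entropy(returns, q) for q in (0.5, 1.0, 2.0, 5.0)]
```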

  19. A Literature Survey of Early Time Series Classification and Deep Learning

    OpenAIRE

    Santos, Tiago; Kern, Roman

    2017-01-01

    This paper provides an overview of current literature on time series classification approaches, in particular of early time series classification. A very common and effective time series classification approach is the 1-Nearest Neighbor classifier, with different distance measures such as the Euclidean or dynamic time warping distances. This paper starts by reviewing these baseline methods. More recently, with the gain in popularity in the application of deep neural networks to the field of...
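The baseline the survey starts from — 1-Nearest-Neighbor classification under the dynamic time warping (DTW) distance — fits in a few lines. The tiny training set below (two shifted sines versus a ramp) is invented for illustration; DTW's warping makes the phase-shifted query match the sine class.

```python
import math

def dtw(a, b):
    """Dynamic time warping distance between two sequences (full DP table)."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def knn1(train, query):
    """1-Nearest-Neighbor classification under the DTW distance."""
    return min(train, key=lambda item: dtw(item[0], query))[1]

# Tiny illustrative training set: two phase-shifted sines and one ramp.
train = [([math.sin(0.3 * t) for t in range(30)], "sine"),
         ([math.sin(0.3 * t + 1.0) for t in range(30)], "sine"),
         ([0.05 * t for t in range(30)], "ramp")]
query = [math.sin(0.3 * t + 0.5) for t in range(30)]
label = knn1(train, query)
```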

  20. Signal Processing for Time-Series Functions on a Graph

    Science.gov (United States)

    2018-02-01

    ARL-TR-8276, Feb 2018, US Army Research Laboratory, by Humberto Muñoz-Barona, Jean Vettel, and others. [Only front-matter fragments of this record survive: the caption of Fig. 1, "Time-series function on a fixed graph", and an equation fragment noting that the eigenbasis reconstruction recovers only the average of f over time.]

  1. Non-linear time series extreme events and integer value problems

    CERN Document Server

    Turkman, Kamil Feridun; Zea Bermudez, Patrícia

    2014-01-01

    This book offers a useful combination of probabilistic and statistical tools for analyzing nonlinear time series. Key features of the book include a study of the extremal behavior of nonlinear time series and a comprehensive list of nonlinear models that address different aspects of nonlinearity. Several inferential methods, including quasi likelihood methods, sequential Markov Chain Monte Carlo Methods and particle filters, are also included so as to provide an overall view of the available tools for parameter estimation for nonlinear models. A chapter on integer time series models based on several thinning operations, which brings together all recent advances made in this area, is also included. Readers should have attended a prior course on linear time series, and a good grasp of simulation-based inferential methods is recommended. This book offers a valuable resource for second-year graduate students and researchers in statistics and other scientific areas who need a basic understanding of nonlinear time ...

  2. Learning of time series through neuron-to-neuron instruction

    Energy Technology Data Exchange (ETDEWEB)

    Miyazaki, Y [Department of Physics, Kyoto University, Kyoto 606-8502, (Japan); Kinzel, W [Institut fuer Theoretische Physik, Universitaet Wurzburg, 97074 Wurzburg (Germany); Shinomoto, S [Department of Physics, Kyoto University, Kyoto (Japan)

    2003-02-07

    A model neuron with delay-line feedback connections can learn a time series generated by another model neuron. It has been known that some student neurons that have completed such learning under the instruction of a teacher's quasi-periodic sequence mimic the teacher's time series over a long interval, even after instruction has ceased. We found that in addition to such faithful students, there are unfaithful students whose time series eventually diverge exponentially from that of the teacher. In order to understand the circumstances that allow for such a variety of students, the orbit dimension was estimated numerically. The quasi-periodic orbits in question were found to be confined in spaces with dimensions significantly smaller than that of the full phase space.

  3. Learning of time series through neuron-to-neuron instruction

    International Nuclear Information System (INIS)

    Miyazaki, Y; Kinzel, W; Shinomoto, S

    2003-01-01

    A model neuron with delay-line feedback connections can learn a time series generated by another model neuron. It has been known that some student neurons that have completed such learning under the instruction of a teacher's quasi-periodic sequence mimic the teacher's time series over a long interval, even after instruction has ceased. We found that in addition to such faithful students, there are unfaithful students whose time series eventually diverge exponentially from that of the teacher. In order to understand the circumstances that allow for such a variety of students, the orbit dimension was estimated numerically. The quasi-periodic orbits in question were found to be confined in spaces with dimensions significantly smaller than that of the full phase space.

  4. Quirky patterns in time-series of estimates of recruitment could be artefacts

    DEFF Research Database (Denmark)

    Dickey-Collas, M.; Hinzen, N.T.; Nash, R.D.M.

    2015-01-01

    The accessibility of databases of global or regional stock assessment outputs is leading to an increase in meta-analysis of the dynamics of fish stocks. In most of these analyses, each of the time-series is generally assumed to be directly comparable. However, the approach to stock assessment employed, and the associated modelling assumptions, can have an important influence on the characteristics of each time-series. We explore this idea by investigating recruitment time-series with three different recruitment parameterizations: a stock–recruitment model, a random-walk time-series model... The treatment of recruitment time-series in databases is therefore not consistent across or within species and stocks. Caution is therefore required, as the characteristics of the time-series of stock dynamics may perhaps be determined by the model used to generate them, rather than by underlying ecological phenomena.

  5. The Hierarchical Spectral Merger Algorithm: A New Time Series Clustering Procedure

    KAUST Repository

    Euán, Carolina

    2018-04-12

    We present a new method for time series clustering which we call the Hierarchical Spectral Merger (HSM) method. This procedure is based on the spectral theory of time series and identifies series that share similar oscillations or waveforms. The extent of similarity between a pair of time series is measured using the total variation distance between their estimated spectral densities. At each step of the algorithm, when two clusters merge, a new spectral density is estimated using all the information present in both clusters, which is representative of all the series in the new cluster. The method is implemented in an R package, HSMClust. We present two applications of the HSM method, one to data coming from wave-height measurements in oceanography and the other to electroencephalogram (EEG) data.
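
    The merging step can be sketched in a few lines (a toy version, not the HSMClust implementation): estimate a normalized periodogram per cluster, measure dissimilarity by total variation distance, and repeatedly merge the closest pair, re-estimating the merged cluster's spectrum from all of its series. The two-frequency synthetic data are illustrative.

```python
import numpy as np

# Toy sketch of hierarchical spectral merging: four series, two dominant
# frequencies; series sharing a frequency should end up in the same cluster.
rng = np.random.default_rng(1)
t = np.arange(512)
series = [np.sin(0.2 * t) + 0.1 * rng.normal(size=512) for _ in range(2)] + \
         [np.sin(0.8 * t) + 0.1 * rng.normal(size=512) for _ in range(2)]

def norm_spectrum(xs):
    # average periodogram of a cluster's series, normalized to sum to 1
    p = np.mean([np.abs(np.fft.rfft(x - x.mean())) ** 2 for x in xs], axis=0)
    return p / p.sum()

def tv(p, q):
    return 0.5 * np.sum(np.abs(p - q))  # total variation distance

clusters = [[x] for x in series]
while len(clusters) > 2:
    specs = [norm_spectrum(c) for c in clusters]
    pairs = [(tv(specs[i], specs[j]), i, j)
             for i in range(len(clusters)) for j in range(i + 1, len(clusters))]
    _, i, j = min(pairs)
    clusters[i] = clusters[i] + clusters.pop(j)  # merge the closest pair
```

    Stopping at two clusters recovers the two frequency groups; the real method also provides rules for choosing the number of clusters.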

  6. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    International Nuclear Information System (INIS)

    Albers, D.J.; Hripcsak, George

    2012-01-01

    Highlights: ► Time-delayed mutual information for irregularly sampled time-series. ► Estimation bias for the time-delayed mutual information calculation. ► Fast, simple, PDF-estimator-independent, time-delayed mutual information bias estimate. ► Quantification of data-set-size limits of the time-delayed mutual information calculation. - Abstract: A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus, intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.
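
    The core idea, sketched below under simplifying assumptions (a regularly sampled AR(1) series and a plain histogram estimator, not the paper's irregular-sampling machinery): the time-delayed mutual information at a very large delay, where points are effectively independent, gives an empirical baseline for the estimation bias.

```python
import numpy as np

# Sketch: histogram estimate of time-delayed mutual information, with the
# value at a very large delay serving as an empirical bias baseline.
rng = np.random.default_rng(2)
n = 20000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):                  # strongly autocorrelated AR(1) series
    x[i] = 0.95 * x[i - 1] + rng.normal()

def delayed_mi(x, tau, bins=16):
    a, b = x[:-tau], x[tau:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

mi_short = delayed_mi(x, 1)     # dependent points: large MI
mi_far = delayed_mi(x, 5000)    # effectively independent: MI ~ estimator bias
```

    Subtracting the far-delay value from an MI curve is one simple way to read off how much of the signal survives the finite-sample bias.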

  7. MULTIFLUID MAGNETOHYDRODYNAMIC TURBULENT DECAY

    International Nuclear Information System (INIS)

    Downes, T. P.; O'Sullivan, S.

    2011-01-01

    It is generally believed that turbulence has a significant impact on the dynamics and evolution of molecular clouds and the star formation that occurs within them. Non-ideal magnetohydrodynamic (MHD) effects are known to influence the nature of this turbulence. We present the results of a suite of 512³-resolution simulations of the decay of initially super-Alfvénic and supersonic fully multifluid MHD turbulence. We find that ambipolar diffusion increases the rate of decay of the turbulence while the Hall effect has virtually no impact. The decay of the kinetic energy can be fitted as a power law in time, and the exponent is found to be -1.34 for fully multifluid MHD turbulence. The power spectra of density, velocity, and magnetic field are all steepened significantly by the inclusion of non-ideal terms. The dominant reason for this steepening is ambipolar diffusion, with the Hall effect again playing a minimal role except at short length scales where it creates extra structure in the magnetic field. Interestingly, we find that, at least at these resolutions, the majority of the physics of multifluid turbulence can be captured by simply introducing fixed (in time and space) resistive terms into the induction equation, without the need for a full multifluid MHD treatment. The velocity dispersion is also examined and, in common with previously published results, is found not to be power-law in nature.

  8. Inferring interdependencies from short time series

    Indian Academy of Sciences (India)

    Complex networks provide an invaluable framework for the study of interlinked dynamical systems. In many cases, such networks are constructed from observed time series by first estimating the ...

  9. Stochastic modeling of hourly rainfall time series in Campania (Italy)

    Science.gov (United States)

    Giorgio, M.; Greco, R.

    2009-04-01

    Occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlation between recorded rainfall data and observed landslides and/or river discharges. The effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which allows longer lead times. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX, and ARMAX (e.g. Salas [1992]). Such models gave the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool to implement an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of the Campania Region civil ...
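
    The alternating-renewal idea can be sketched as a generator: dry intervals and rain storms alternate, each storm depositing a rectangular intensity pulse. The exponential duration and intensity distributions below are illustrative placeholders, not the calibrated Campania model.

```python
import numpy as np

# Sketch of an alternating-renewal rainfall generator in the spirit of DRIP:
# dry spells and storms alternate; each storm is a constant-intensity
# rectangular pulse. All distributions/parameters are illustrative.
rng = np.random.default_rng(3)

hours, series = 0, []
while hours < 5000:
    dry = int(rng.exponential(30)) + 1    # dry-interval duration (h)
    wet = int(rng.exponential(6)) + 1     # storm duration (h)
    intensity = rng.exponential(2.0)      # pulse intensity (mm/h)
    series += [0.0] * dry + [intensity] * wet
    hours += dry + wet

rain = np.array(series[:5000])            # hourly rainfall series (mm/h)
wet_fraction = float((rain > 0).mean())   # intermittency of the series
```

    Unlike an AR-type model, the generated series is genuinely intermittent: long runs of exact zeros separated by bursts of positive intensity.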

  10. Steady turbulent flow in curved rectangular channels

    NARCIS (Netherlands)

    De Vriend, H.J.

    1979-01-01

    After the study of fully developed and developing steady laminar flow in curved channels of shallow rectangular wet cross-section (see earlier reports in this series), steady turbulent flow in such channels is investigated as a next step towards a mathematical model of the flow in shallow river

  11. Comparison of turbulence in a transitional boundary layer to turbulence in a developed boundary layer*

    Science.gov (United States)

    Park, G. I.; Wallace, J.; Wu, X.; Moin, P.

    2010-11-01

    Using a recent DNS of a flat-plate boundary layer, statistics of turbulence in transition at Reθ= 500 where spots merge (distributions of the mean velocity, rms velocity and vorticity fluctuations, Reynolds shear stress, kinetic energy production and dissipation rates and enstrophy) have been compared to these statistics for the developed boundary layer turbulence at Reθ= 1850. When the distributions in the transitional region, determined in narrow planes 0.03 Reθ wide, exclude regions and times when the flow is not turbulent, they closely resemble those in the developed turbulent state at the higher Reynolds number, especially in the buffer and sublayers. The skin friction coefficient, determined in this conditional manner in the transitional flow is, of course, much larger than that obtained by including both turbulent and non-turbulent information there, and is consistent with a value obtained by extrapolating from the developed turbulent region. We are attempting to perform this data analysis even further upstream in the transitioning flow at Reθ= 300 where the turbulent spots are individuated. These results add further evidence to support the view that the structure of a developed turbulent boundary layer is little different from its structure in its embryonic form in turbulent spots. *CTR 2010 Summer Program research.

  12. Consequences of variations in spatial turbulence characteristics for fatigue life time of wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.

    1998-09-01

    The fatigue loading of turbines situated in complex terrain is investigated in order to determine the crucial parameters in the spatial structure of the turbulence in such situations. The parameter study is performed by means of numerical calculations, and it embraces three different wind turbine types, representing a pitch controlled concept, a stall controlled concept, and a stall controlled concept with an extremely flexible tower. For each of the turbine concepts, the sensitivity of the fatigue load to the selected turbulence characteristics is investigated for three different mean wind speeds at hub height. The selected mean wind speeds represent the linear-, the stall-, and the post-stall aerodynamic region for the stall controlled turbines and, analogously, the unregulated-, the partly regulated-, and the fully regulated regime for the pitch controlled turbine. Denoting the turbulence component in the mean wind direction by u, the lateral turbulence component by v, and the vertical turbulence component by w, the selected turbulence characteristics comprise the u-turbulence length scale, the ratios of the v- and w-turbulence intensities to the u-turbulence intensity, the uu-coherence decay factor, and finally the u-v and u-w cross-correlations. The turbulence length scale in the mean wind direction gives rise to significant modification of the fatigue loading on all the investigated wind turbine concepts, but for the other parameter variations large individual differences exist between the turbines. With respect to sensitivity to the performed parameter variations, the Vestas V39 wind turbine is the most robust of the investigated turbines. The Nordtank 500/37 turbine, equipped with the (artificial) soft tower, is by far the most sensitive of the investigated turbine concepts - also much more sensitive than the conventional Nordtank 500/37 turbine equipped with a traditional tower. (au) 2 tabs., 43 ills., 7 refs.

  13. Using forbidden ordinal patterns to detect determinism in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W; Chobot, J M; Niskala, B J; Needhammer, C J

    2016-02-01

    It is known that when a time series is symbolized into ordinal patterns using the Bandt-Pompe (BP) methodology, there will be ordinal patterns, called forbidden patterns, that do not occur in a deterministic series. The existence of forbidden patterns can be used to identify deterministic dynamics. In this paper, the ability to use forbidden patterns to detect determinism in irregularly sampled time series is tested on data generated from a continuous model system. The study is done in three parts. First, the effects of sampling time on the number of forbidden patterns are studied on regularly sampled time series. The next two parts focus on two types of irregular sampling: missing data and timing jitter. It is shown that forbidden patterns can be used to detect determinism in irregularly sampled time series for low degrees of sampling irregularity (as defined in the paper). In addition, comments are made about the appropriateness of using the BP methodology to symbolize irregularly sampled time series.
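
    The forbidden-pattern test can be sketched for regularly sampled data (the simplest of the three cases above): symbolize windows of length 3 by their ordinal pattern and count patterns that never appear. For the fully chaotic logistic map, three consecutive strictly decreasing values are impossible, so at least one of the 3! patterns is forbidden, while white noise realizes all six.

```python
import numpy as np
from itertools import permutations

# Sketch of the Bandt-Pompe forbidden-pattern test for determinism.
def ordinal_patterns(x, order=3):
    seen = set()
    for i in range(len(x) - order + 1):
        window = x[i:i + order]
        seen.add(tuple(np.argsort(window)))  # ordinal pattern of the window
    return seen

def n_forbidden(x, order=3):
    n_all = len(list(permutations(range(order))))
    return n_all - len(ordinal_patterns(x, order))

# deterministic series: fully chaotic logistic map
x = np.empty(5000)
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

noise = np.random.default_rng(4).normal(size=5000)

forbidden_map = n_forbidden(x)        # >= 1: determinism leaves a gap
forbidden_noise = n_forbidden(noise)  # 0: noise realizes every pattern
```

    The paper's contribution is what happens to these counts when samples are dropped or jittered; the sketch above is only the regularly sampled baseline.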

  14. Forecasting the Reference Evapotranspiration Using Time Series Model

    Directory of Open Access Journals (Sweden)

    H. Zare Abyaneh

    2016-10-01

    Introduction: Reference evapotranspiration is one of the most important factors in irrigation timing and field management. Moreover, reference evapotranspiration forecasting can play a vital role in future developments. Therefore, in this study the seasonal autoregressive integrated moving average (ARIMA) model was used to forecast the reference evapotranspiration time series at the Esfahan, Semnan, Shiraz, Kerman, and Yazd synoptic stations. Materials and Methods: At all stations (characteristics of the synoptic stations are given in Table 1), the meteorological data, including mean, maximum and minimum air temperature, relative humidity, dry- and wet-bulb temperature, dew-point temperature, wind speed, precipitation, air vapor pressure and sunshine hours, were collected from the Islamic Republic of Iran Meteorological Organization (IRIMO) for the 41 years from 1965 to 2005. The FAO Penman-Monteith equation was used to calculate the monthly reference evapotranspiration at the five synoptic stations, and the evapotranspiration time series were formed. The unit root test was used to identify whether the time series was stationary; then, using the Box-Jenkins method, seasonal ARIMA models were applied to the sample data.

    Table 1. The geographical location and climate conditions of the synoptic stations

    Station  | Longitude (E) | Latitude (N) | Altitude (m) | Mean annual air temp. (°C) | Min.-Max. (°C) | Mean precipitation (mm) | Climate (De Martonne index)
    Esfahan  | 51° 40'       | 32° 37'      | 1550.4       | 16.36                      | 9.4-23.3       | 122                     | Arid
    Semnan   | 53° 33'       | 35° 35'      | 1130.8       | 18.0                       | 12.4-23.8      | 140                     | Arid
    Shiraz   | 52° 36'       | 29° 32'      | 1484         | 18.0                       | 10.2-25.9      | 324                     | Semi-arid
    Kerman   | 56° 58'       | 30° 15'      | 1753.8       | 15.6                       | 6.7-24.6       | 142                     | Arid
    Yazd     | 54° 17'       | 31° 54'      | 1237.2       | 19.2                       | 11.8-26.0      | 61                      | Arid

    Results and Discussion: The monthly meteorological data were used as input for the Ref-ET software and monthly reference ...
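
    The seasonal Box-Jenkins workflow used here can be sketched in plain numpy under strong simplifications (synthetic monthly data, a period-12 seasonal difference, and an AR(1) fitted by least squares instead of a full seasonal ARIMA estimation):

```python
import numpy as np

# Minimal Box-Jenkins-flavoured sketch: seasonally difference a monthly
# series, fit AR(1) to the differenced series by OLS, forecast one month
# ahead, then invert the differencing. Synthetic data, illustrative only.
rng = np.random.default_rng(5)
months = np.arange(41 * 12)                          # 41 years of monthly data
et0 = 5 + 3 * np.sin(2 * np.pi * months / 12) + 0.3 * rng.normal(size=months.size)

d = et0[12:] - et0[:-12]                      # seasonal difference, period 12
phi = (d[1:] @ d[:-1]) / (d[:-1] @ d[:-1])    # AR(1) coefficient by OLS
d_next = phi * d[-1]                          # forecast next differenced value
et_forecast = et0[-12] + d_next               # invert the seasonal difference
```

    The differencing step is what makes the series stationary before the AR fit, mirroring the unit-root check and seasonal ARIMA identification described in the abstract.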

  15. Complexity testing techniques for time series data: A comprehensive literature review

    International Nuclear Information System (INIS)

    Tang, Ling; Lv, Huiling; Yang, Fengmei; Yu, Lean

    2015-01-01

    Highlights: • A literature review of complexity testing techniques for time series data is provided. • Complexity measurements can generally fall into fractality, methods derived from nonlinear dynamics and entropy. • Different types investigate time series data from different perspectives. • Measures, applications and future studies for each type are presented. - Abstract: Complexity may be one of the most important measurements for analysing time series data; it covers or is at least closely related to different data characteristics within nonlinear system theory. This paper provides a comprehensive literature review examining the complexity testing techniques for time series data. According to different features, the complexity measurements for time series data can be divided into three primary groups, i.e., fractality (mono- or multi-fractality) for self-similarity (or system memorability or long-term persistence), methods derived from nonlinear dynamics (via attractor invariants or diagram descriptions) for attractor properties in phase-space, and entropy (structural or dynamical entropy) for the disorder state of a nonlinear system. These estimations analyse time series dynamics from different perspectives but are closely related to or even dependent on each other at the same time. In particular, a weaker self-similarity, a more complex structure of attractor, and a higher-level disorder state of a system consistently indicate that the observed time series data are at a higher level of complexity. Accordingly, this paper presents a historical tour of the important measures and works for each group, as well as ground-breaking and recent applications and future research directions.
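
    One representative of the review's entropy group can be sketched concretely: sample entropy (SampEn), which scores a predictable signal lower than noise. This is a compact textbook-style implementation, not tied to any particular paper in the review.

```python
import numpy as np

# Sketch of sample entropy: -log of the ratio of (m+1)-length to m-length
# template matches within tolerance r (Chebyshev distance, self-matches
# excluded). Low values = regular/predictable, high values = disordered.
def sample_entropy(x, m=2, r_frac=0.2):
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def count(m):
        templ = np.array([x[i:i + m] for i in range(len(x) - m)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (d <= r).sum() - len(templ)   # exclude self-matches
    return float(-np.log(count(m + 1) / count(m)))

rng = np.random.default_rng(6)
sine = np.sin(0.5 * np.arange(500))   # regular, low complexity
noise = rng.normal(size=500)          # disordered, high complexity

se_sine = sample_entropy(sine)
se_noise = sample_entropy(noise)
```

    As the review's taxonomy predicts, the disordered signal scores markedly higher than the regular one.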

  16. Complex dynamics in ecological time series

    Science.gov (United States)

    Peter Turchin; Andrew D. Taylor

    1992-01-01

    Although the possibility of complex dynamical behaviors-limit cycles, quasiperiodic oscillations, and aperiodic chaos-has been recognized theoretically, most ecologists are skeptical of their importance in nature. In this paper we develop a methodology for reconstructing endogenous (or deterministic) dynamics from ecological time series. Our method consists of fitting...

  17. Time Series Modelling using Proc Varmax

    DEFF Research Database (Denmark)

    Milhøj, Anders

    2007-01-01

    In this paper it will be demonstrated how various time series problems can be addressed using Proc Varmax. The procedure is rather new, and hence new features like cointegration and testing for Granger causality are included, but it also means that more traditional ARIMA modelling as outlined by Box...

  18. SensL B-Series and C-Series silicon photomultipliers for time-of-flight positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    O' Neill, K., E-mail: koneill@sensl.com; Jackson, C., E-mail: cjackson@sensl.com

    2015-07-01

    Silicon photomultipliers from SensL are designed for high performance, uniformity and low cost. They demonstrate peak photon detection efficiency of 41% at 420 nm, which is matched to the output spectrum of cerium doped lutetium orthosilicate. Coincidence resolving time of less than 220 ps is demonstrated. New process improvements have led to the development of the C-Series SiPM, which reduces the dark noise by over an order of magnitude. In this paper we will show characterization test results which include photon detection efficiency, dark count rate, crosstalk probability, afterpulse probability and coincidence resolving time, comparing B-Series to the newest pre-production C-Series. Additionally, we will discuss the effect of silicon photomultiplier microcell size on coincidence resolving time, allowing the optimal microcell size choice to be made for time-of-flight positron emission tomography systems.

  19. Kriging Methodology and Its Development in Forecasting Econometric Time Series

    Directory of Open Access Journals (Sweden)

    Andrej Gajdoš

    2017-03-01

    One of the approaches for forecasting future values of a time series or unknown spatial data is kriging. The main objective of the paper is to introduce a general scheme of kriging in forecasting econometric time series using a family of linear regression time series models (abbreviated FDSLRM) which apply regression not only to a trend but also to a random component of the observed time series. Simultaneously performing a Monte Carlo simulation study with a real electricity consumption dataset in the R computational language and environment, we investigate the well-known problem of “negative” estimates of variance components, when kriging predictions fail. Our subsequent theoretical analysis, which also draws on the modern apparatus of advanced multivariate statistics, gives the formulation and proof of a general theorem about the explicit form of moments (up to sixth order) for a Gaussian time series observation. This result provides a basis for further theoretical and computational research in the development of kriging methodology.
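
    The kriging predictor itself can be sketched for the simplest ordinary-kriging case, with an assumed exponential covariance and no nugget (illustrative choices, not the paper's FDSLRM setup). With no nugget the predictor interpolates the observations exactly; between points it returns a covariance-weighted average with weights summing to one.

```python
import numpy as np

# Sketch of ordinary kriging in 1D: solve the augmented linear system that
# enforces unit-sum weights via a Lagrange multiplier. Exponential
# covariance with an assumed length scale; illustrative only.
def ordinary_krige(t_obs, y_obs, t_new, length=5.0):
    cov = lambda a, b: np.exp(-np.abs(a[:, None] - b[None, :]) / length)
    n = len(t_obs)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(t_obs, t_obs)
    A[n, n] = 0.0                                   # Lagrange-multiplier row
    b = np.ones(n + 1)
    b[:n] = cov(t_obs, np.array([t_new]))[:, 0]
    w = np.linalg.solve(A, b)[:n]                   # kriging weights
    return float(w @ y_obs)

t_obs = np.array([0.0, 1.0, 2.5, 4.0])
y_obs = np.array([1.0, 2.0, 0.5, 1.5])

at_data = ordinary_krige(t_obs, y_obs, 1.0)   # exact at an observed time
between = ordinary_krige(t_obs, y_obs, 3.0)   # interpolated between points
```

    The "negative variance component" failure mode the paper studies arises one level up, when the covariance parameters themselves are estimated from data rather than assumed as here.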

  20. Use of Time-Series, ARIMA Designs to Assess Program Efficacy.

    Science.gov (United States)

    Braden, Jeffery P.; And Others

    1990-01-01

    Illustrates use of time-series designs for determining efficacy of interventions with fictitious data describing drug-abuse prevention program. Discusses problems and procedures associated with time-series data analysis using Auto Regressive Integrated Moving Averages (ARIMA) models. Example illustrates application of ARIMA analysis for…

  1. On the "early-time" evolution of variables relevant to turbulence models for Rayleigh-Taylor instability

    Energy Technology Data Exchange (ETDEWEB)

    Rollin, Bertrand [Los Alamos National Laboratory; Andrews, Malcolm J [Los Alamos National Laboratory

    2010-01-01

    We present our progress toward setting initial conditions in variable density turbulence models. In particular, we concentrate our efforts on the BHR turbulence model for turbulent Rayleigh-Taylor instability. Our approach is to predict profiles of relevant parameters before the fully turbulent regime and use them as initial conditions for the turbulence model. We use an idealized model of the mixing between two interpenetrating fluids to define the initial profiles for the turbulence model parameters. Velocities and volume fractions used in the idealized mixing model are obtained respectively from a set of ordinary differential equations modeling the growth of the Rayleigh-Taylor instability and from an idealization of the density profile in the mixing layer. A comparison between the predicted initial profiles for the turbulence model parameters and initial profiles of the parameters obtained from low-Atwood-number three-dimensional simulations shows reasonable agreement.

  2. Understanding Aggregation and Estimating Seasonal Abundance of Chrysaora quinquecirrha Medusae from a Fixed-station Time Series in the Choptank River, Chesapeake Bay

    Science.gov (United States)

    Tay, J.; Hood, R. R.

    2016-02-01

    Although jellyfish exert strong control over marine plankton dynamics (Richardson et al. 2009, Robison et al. 2014) and negatively impact human commercial and recreational activities (Purcell et al. 2007, Purcell 2012), jellyfish biomass is not well quantified, due primarily to sampling difficulties with plankton nets or fisheries trawls (Haddock 2004). As a result, some of the longest records of jellyfish are visual shore-based surveys, such as the fixed-station time series of Chrysaora quinquecirrha that began in 1960 in the Patuxent River in Chesapeake Bay, USA (Cargo and King 1990). Time series counts from fixed-station surveys capture two signals: 1) demographic change at timescales on the order of reproductive processes and 2) spatial patchiness at shorter timescales as different parcels of water move in and out of the survey area by tidal and estuarine advection and turbulent mixing (Lee and McAlice 1979). In this study, our goal was to separate these two signals using a 4-year time series of C. quinquecirrha medusa counts from a fixed station in the Choptank River, Chesapeake Bay. Idealized modeling of tidal and estuarine advection was used to conceptualize the sampling scheme. Change-point and time series analyses were used to detect demographic changes. Indices of aggregation (Negative Binomial coefficient, Taylor's Power Law coefficient, and Morisita's Index) were calculated to describe the spatial patchiness of the medusae. Abundance estimates revealed a bloom cycle that differed in duration and magnitude for each of the study years. Indices of aggregation indicated that medusae were aggregated and that patches grew in the number of individuals, and likely in size, as abundance increased. Further inference from the conceptual modeling suggested that medusae patch structure was generally homogeneous over the tidal extent. This study highlights the benefits of using fixed-station shore-based surveys for understanding the biology and ecology of jellyfish.

  3. An algorithm of Saxena-Easo on fuzzy time series forecasting

    Science.gov (United States)

    Ramadhani, L. C.; Anggraeni, D.; Kamsyakawuni, A.; Hadi, A. F.

    2018-04-01

    This paper presents a Saxena-Easo fuzzy time series forecasting model, applied to the prediction of Indonesia's inflation rate over 1970-2016. We implement the method in MATLAB. The Saxena-Easo fuzzy time series algorithm does not require stationarity, unlike conventional forecasting methods; it can handle time series values that are linguistic, and it has the advantage of reducing and simplifying the calculation process. Generally, it focuses on percentage change as the universe of discourse, interval partitioning and defuzzification. The results indicate that the forecast data are close to the actual data, with Root Mean Square Error (RMSE) = 1.5289.

  4. Philosophies and fallacies in turbulence modeling

    Science.gov (United States)

    Spalart, Philippe R.

    2015-04-01

    We present a set of positions, likely to be controversial, on turbulence modeling for the Reynolds-Averaged Navier-Stokes (RANS) equations. The paper has three themes. First is what we call the "fundamental paradox" of turbulence modeling, between the local character of the Partial Differential Equations strongly favored by CFD methods and the nonlocal physical nature of turbulence. Second, we oppose two philosophies. The "Systematic" philosophy attempts to model the exact transport equations for the Reynolds stresses or possibly higher moments term by term, gradually relegating the Closure Problem to higher moments and invoking the "Principle of Receding Influence" (although rarely formulating it). In contrast, the "Openly Empirical" philosophy produces models which satisfy strict constraints such as Galilean invariance, but lack an explicit connection with terms in the exact turbulence equations. The prime example is the eddy-viscosity assumption. Third, we explain a series of what we perceive as fallacies, many of them widely held and by senior observers, in turbulence knowledge, leading to turbulence models. We divide them into "hard" fallacies, for which a short mathematical argument demonstrates that a particular statement is wrong or meaningless, and "soft" fallacies, for which approximate physical arguments can be opposed, but we contend that a clear debate is overdue and wishful thinking has been involved. Some fallacies appear to be "intermediate." An example in the hard class is the supposed isotropy of the diagonal Reynolds stresses. Examples in the soft class are the need to match the decay rate of isotropic turbulence, and the value of realizability in a model. Our hope is to help direct effort in this field away from simplistic and hopeless lines of work, and to foster debates.

  5. Evolutionary Algorithms for the Detection of Structural Breaks in Time Series

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Fischer, Paul; Hilbert, Astrid

    2013-01-01

    Detecting structural breaks is an essential task for the statistical analysis of time series, for example, for fitting parametric models to it. In short, structural breaks are points in time at which the behavior of the time series changes. Typically, no solid background knowledge of the time...
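
    The objective such search procedures optimize can be sketched for the simplest case of a single mean shift: score each candidate split by the total squared error around per-segment means. An evolutionary algorithm would explore candidate splits stochastically; the sketch below simply scans them all.

```python
import numpy as np

# Sketch of single-breakpoint detection: minimize the sum of squared errors
# around per-segment means over all candidate split points. Synthetic series
# with a known mean shift at index 300.
rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0.0, 1.0, 300),    # regime 1
                    rng.normal(3.0, 1.0, 200)])   # regime 2

def sse(seg):
    return float(((seg - seg.mean()) ** 2).sum())

# exhaustive scan over splits (margins avoid degenerate tiny segments)
costs = [sse(x[:k]) + sse(x[k:]) for k in range(10, len(x) - 10)]
breakpoint_est = 10 + int(np.argmin(costs))
```

    With multiple unknown breaks the number of candidate segmentations explodes combinatorially, which is what motivates evolutionary search over this cost surface rather than exhaustive enumeration.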

  6. Global Turbulence Decision Support for Aviation

    Science.gov (United States)

    Williams, J.; Sharman, R.; Kessinger, C.; Feltz, W.; Wimmers, A.

    2009-09-01

    Turbulence is widely recognized as the leading cause of injuries to flight attendants and passengers on commercial air carriers, yet legacy decision support products such as SIGMETs and SIGWX charts provide relatively low spatial- and temporal-resolution assessments and forecasts of turbulence, with limited usefulness for strategic planning and tactical turbulence avoidance. A new effort is underway to develop an automated, rapid-update, gridded global turbulence diagnosis and forecast system that addresses upper-level clear-air turbulence, mountain-wave turbulence, and convectively-induced turbulence. This NASA-funded effort, modeled on the U.S. Federal Aviation Administration's Graphical Turbulence Guidance (GTG) and GTG Nowcast systems, employs NCEP Global Forecast System (GFS) model output and data from NASA and operational satellites to produce quantitative turbulence nowcasts and forecasts. A convective nowcast element based on GFS forecasts and satellite data provides a basis for diagnosing convective turbulence. An operational prototype "Global GTG" system has been running in real time at the U.S. National Center for Atmospheric Research since the spring of 2009. Initial verification, based on data from TRMM, CloudSat and MODIS (for the convection nowcasting) and AIREPs and AMDAR data (for turbulence), is presented. This product aims to provide the "single authoritative source" for global turbulence information for the U.S. Next Generation Air Transportation System.

  7. On modeling panels of time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    2002-01-01

    textabstractThis paper reviews research issues in modeling panels of time series. Examples of this type of data are annually observed macroeconomic indicators for all countries in the world, daily returns on the individual stocks listed in the S&P500, and the sales records of all items in a

  8. Unsupervised Symbolization of Signal Time Series for Extraction of the Embedded Information

    Directory of Open Access Journals (Sweden)

    Yue Li

    2017-03-01

    This paper formulates an unsupervised algorithm for symbolization of signal time series to capture the embedded dynamic behavior. The key idea is to convert the time series of the digital signal into a string of (spatially discrete) symbols from which the embedded dynamic information can be extracted in an unsupervised manner (i.e., no requirement for labeling of time series). The main challenges here are: (1) definition of the symbol assignment for the time series; (2) identification of the partitioning segment locations in the signal space of time series; and (3) construction of probabilistic finite-state automata (PFSA) from the symbol strings that contain temporal patterns. The reported work addresses these challenges by maximizing the mutual information measures between symbol strings and PFSA states. The proposed symbolization method has been validated by numerical simulation as well as by experimentation in a laboratory environment. Performance of the proposed algorithm has been compared to that of two commonly used algorithms of time series partitioning.
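
    The first two steps of this pipeline can be sketched with a deliberately simple partition (plain quantile binning, not the mutual-information-maximizing partition the paper proposes): symbolize the signal and estimate a PFSA transition matrix from the symbol string.

```python
import numpy as np

# Sketch: quantile symbolization of a signal, then a single-symbol-state
# PFSA estimated as an empirical transition matrix. Quantile partitioning
# is a stand-in for the paper's mutual-information-based partition.
rng = np.random.default_rng(8)
signal = np.sin(0.1 * np.arange(3000)) + 0.1 * rng.normal(size=3000)

n_symbols = 4
edges = np.quantile(signal, np.linspace(0, 1, n_symbols + 1)[1:-1])
symbols = np.digitize(signal, edges)        # symbol string over {0, 1, 2, 3}

counts = np.zeros((n_symbols, n_symbols))
for a, b in zip(symbols[:-1], symbols[1:]):  # count symbol-to-symbol moves
    counts[a, b] += 1
transition = counts / counts.sum(axis=1, keepdims=True)  # PFSA row-stochastic
```

    The rows of the resulting matrix are the state-conditional emission probabilities; the paper's contribution is choosing the partition so that these states retain as much mutual information with the raw signal as possible.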

  9. Classification of time-series images using deep convolutional neural networks

    Science.gov (United States)

    Hatami, Nima; Gavet, Yann; Debayle, Johan

    2018-04-01

    Convolutional Neural Networks (CNNs) have achieved great success in image recognition tasks by automatically learning a hierarchical feature representation from raw data. While the majority of the Time-Series Classification (TSC) literature is focused on 1D signals, this paper uses Recurrence Plots (RP) to transform time series into 2D texture images and then takes advantage of a deep CNN classifier. Image representation of time series introduces feature types that are not available for 1D signals, and therefore TSC can be treated as a texture image recognition task. The CNN model also allows learning different levels of representation together with a classifier, jointly and automatically. Therefore, using RP and CNN in a unified framework is expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to the existing deep architectures, but also to the state-of-the-art TSC algorithms.
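
    The RP transform itself is compact. A minimal sketch, assuming a delay embedding with dimension `m` and delay `tau`, and a common 10%-of-maximum-distance threshold (the threshold choice is an assumption, not taken from the paper):

    ```python
    import numpy as np

    def recurrence_plot(x, m=3, tau=1, eps=None):
        """Binary recurrence plot of a 1D series.

        Embeds x into m-dimensional delay vectors (delay tau), then marks
        pairs of embedded states closer than eps.
        """
        n = len(x) - (m - 1) * tau
        # Rows of emb are delay vectors [x(t), x(t+tau), ..., x(t+(m-1)tau)].
        emb = np.array([x[i:i + n] for i in range(0, m * tau, tau)]).T
        d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        if eps is None:
            eps = 0.1 * d.max()      # assumed heuristic threshold
        return (d <= eps).astype(np.uint8)

    x = np.sin(np.linspace(0, 6 * np.pi, 200))
    rp = recurrence_plot(x)          # 2D texture image fed to the CNN
    ```

    The resulting symmetric binary matrix is the "texture image" that the paper then classifies with a CNN.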

  10. ADIABATIC HEATING OF CONTRACTING TURBULENT FLUIDS

    International Nuclear Information System (INIS)

    Robertson, Brant; Goldreich, Peter

    2012-01-01

    Turbulence influences the behavior of many astrophysical systems, frequently by providing non-thermal pressure support through random bulk motions. Although turbulence is commonly studied in systems with constant volume and mean density, turbulent astrophysical gases often expand or contract under the influence of pressure or gravity. Here, we examine the behavior of turbulence in contracting volumes using idealized models of compressed gases. Employing numerical simulations and an analytical model, we identify a simple mechanism by which the turbulent motions of contracting gases 'adiabatically heat', experiencing an increase in their random bulk velocities until the largest eddies in the gas circulate over a Hubble time of the contraction. Adiabatic heating provides a mechanism for sustaining turbulence in gases where no large-scale driving exists. We describe this mechanism in detail and discuss some potential applications to turbulence in astrophysical settings.

  11. Long Range Dependence Prognostics for Bearing Vibration Intensity Chaotic Time Series

    Directory of Open Access Journals (Sweden)

    Qing Li

    2016-01-01

    Full Text Available According to the chaotic features and typical fractional-order characteristics of the bearing vibration intensity time series, a forecasting approach based on long range dependence (LRD) is proposed. In order to reveal the internal chaotic properties, the vibration intensity time series are reconstructed in phase space based on chaos theory: the delay time is computed with the C-C method, and the optimal embedding dimension and saturated correlation dimension are calculated via the Grassberger–Procaccia (G-P) method. The chaotic characteristics of the vibration intensity time series are then jointly determined by the largest Lyapunov exponent, calculated by the Wolf method, and the phase-plane trajectory, illustrated using a Duffing-Holmes Oscillator (DHO). The Hurst exponent and the long range dependence prediction method are used to verify the typical fractional-order features and to improve the prediction accuracy of the bearing vibration intensity time series, respectively. Experiments show that the vibration intensity time series have chaotic properties and that the LRD prediction method outperforms the other prediction methods (largest Lyapunov exponent, autoregressive moving average (ARMA) and BP neural network (BPNN) models) in prediction accuracy and performance, which provides a new approach for running-tendency prediction for rotating machinery and offers guidance for engineering practice.
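
    The Hurst exponent used here to establish long range dependence is classically estimated by rescaled-range (R/S) analysis. A minimal sketch of that estimator (window sizes and the non-overlapping-window scheme are assumptions; the paper does not specify its exact implementation):

    ```python
    import numpy as np

    def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
        """Estimate the Hurst exponent by rescaled-range (R/S) analysis.

        For each window size n, average R/S over non-overlapping windows,
        then fit log(R/S) against log(n); the slope is the Hurst estimate.
        """
        log_n, log_rs = [], []
        for n in window_sizes:
            rs_vals = []
            for start in range(0, len(x) - n + 1, n):
                w = x[start:start + n]
                z = np.cumsum(w - w.mean())   # cumulative deviation from mean
                r = z.max() - z.min()         # range of the cumulative sum
                s = w.std()                   # standard deviation in window
                if s > 0:
                    rs_vals.append(r / s)
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
        slope, _ = np.polyfit(log_n, log_rs, 1)
        return slope

    rng = np.random.default_rng(0)
    h = hurst_rs(rng.standard_normal(4096))   # white noise: H near 0.5
    ```

    H > 0.5 indicates persistence (long range dependence), H < 0.5 antipersistence, and H ≈ 0.5 an uncorrelated process.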

  12. Modelling the optical turbulence boiling and its effect on finite-exposure differential image motion

    Science.gov (United States)

    Berdja, A.; Borgnino, J.

    2007-07-01

    It is usually accepted that, whenever dealing with astronomical observation through the atmosphere, the temporal evolution of optical turbulence can be sufficiently described by the so-called frozen turbulence hypothesis. In this model, turbulence is assumed to be equivalent to a series of solid phase screens that slide horizontally in front of the observation field of view. Experimental evidence shows, however, that an additional physical process must be taken into account when describing the temporal behaviour of the optical turbulence. In fact, while translating above the observer, turbulence undergoes a proper temporal evolution that affects astronomical and, more specifically, astrometric observations differently. This proper temporal evolution of the turbulence-induced optical observable quantities is here called optical turbulence boiling. In this paper we propose a theoretical approach to modelling the optical turbulence temporal evolution when both the horizontal translation of the turbulent layer and the optical turbulence boiling are involved. The model we propose, though a working hypothesis, has direct relevance to differential astrometry because of its explicit dependence upon the optical turbulence temporal evolution. It can also be generalized to other techniques of high angular resolution astronomical observation through atmospheric turbulence.

  13. On the factors governing water vapor turbulence mixing in the convective boundary layer over land: Concept and data analysis technique using ground-based lidar measurements

    International Nuclear Information System (INIS)

    Pal, Sandip

    2016-01-01

    The convective boundary layer (CBL) turbulence is the key process for exchanging heat, momentum, moisture and trace gases between the earth's surface and the lower part of the troposphere. The turbulence parameterization of the CBL is a challenging but important component in numerical models. In particular, correct estimation of CBL turbulence features, parameterization, and the determination of the contribution of eddy diffusivity are important for simulating convection initiation and the dispersion of health-hazardous air pollutants and greenhouse gases. In general, measurements of higher-order moments of water vapor mixing ratio (q) variability yield unique estimates of turbulence in the CBL. Using high-resolution lidar-derived profiles of q variance, third-order moment, and skewness, and analyzing concurrent profiles of vertical velocity, potential temperature, horizontal wind and time series of near-surface measurements of surface flux and meteorological parameters, a conceptual framework based on a bottom-up approach is proposed here for the first time for a robust characterization of the turbulent structure of the CBL over land, so that our understanding of the processes governing CBL q turbulence can be improved. Finally, principal component analyses will be applied to the lidar-derived long-term data sets of q turbulence statistics to identify the meteorological factors and the dominant physical mechanisms governing the CBL turbulence features. - Highlights: • Lidar-based study of CBL turbulence features • Water vapor and aerosol turbulence profiles • Processes governing boundary layer turbulence profiles using lidars

  14. On the factors governing water vapor turbulence mixing in the convective boundary layer over land: Concept and data analysis technique using ground-based lidar measurements

    Energy Technology Data Exchange (ETDEWEB)

    Pal, Sandip, E-mail: sup252@PSU.EDU

    2016-06-01

    The convective boundary layer (CBL) turbulence is the key process for exchanging heat, momentum, moisture and trace gases between the earth's surface and the lower part of the troposphere. The turbulence parameterization of the CBL is a challenging but important component in numerical models. In particular, correct estimation of CBL turbulence features, parameterization, and the determination of the contribution of eddy diffusivity are important for simulating convection initiation and the dispersion of health-hazardous air pollutants and greenhouse gases. In general, measurements of higher-order moments of water vapor mixing ratio (q) variability yield unique estimates of turbulence in the CBL. Using high-resolution lidar-derived profiles of q variance, third-order moment, and skewness, and analyzing concurrent profiles of vertical velocity, potential temperature, horizontal wind and time series of near-surface measurements of surface flux and meteorological parameters, a conceptual framework based on a bottom-up approach is proposed here for the first time for a robust characterization of the turbulent structure of the CBL over land, so that our understanding of the processes governing CBL q turbulence can be improved. Finally, principal component analyses will be applied to the lidar-derived long-term data sets of q turbulence statistics to identify the meteorological factors and the dominant physical mechanisms governing the CBL turbulence features. - Highlights: • Lidar-based study of CBL turbulence features • Water vapor and aerosol turbulence profiles • Processes governing boundary layer turbulence profiles using lidars.

  15. Turbulence in the solar wind: spectra from Voyager 2 data at 5 AU

    International Nuclear Information System (INIS)

    Fraternale, F; Gallana, L; Iovieno, M; Tordella, D; Opher, M; Richardson, J D

    2016-01-01

    Fluctuations in the flow velocity and magnetic fields are ubiquitous in the Solar System. These fluctuations are turbulent, in the sense that they are disordered and span a broad range of scales in both space and time. The study of solar wind turbulence is motivated by a number of factors, all key to understanding the origin and thermodynamics of the solar wind. The solar wind spectral properties are far from uniform and evolve with increasing distance from the sun. Most of the available spectra of solar wind turbulence were computed at 1 astronomical unit, while accurate spectra over wide frequency ranges at larger distances are still few. In this paper we consider solar wind spectra derived from the data recorded by the Voyager 2 mission during 1979 at about 5 AU from the sun. Voyager 2 data form an incomplete time series with a voids/signal ratio that typically increases as the spacecraft moves away from the sun (45% missing data in 1979), making the analysis challenging. In order to estimate the uncertainty of the spectral slopes, different methods are tested on synthetic turbulence signals with the same gap distribution as the V2 data. Spectra of all variables show a power law scaling with exponents between −2.1 and −1.1, depending on the frequency subrange. Probability density functions (PDFs) and correlations indicate that the flow has a significant intermittency. (invited comment)
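
    The spectral slopes quoted above come from log-log fits to power spectral densities. A toy illustration of that step on an ideal Kolmogorov-like spectrum (the paper's gap-handling and spectral-estimation methods for the incomplete Voyager 2 series are not reproduced here):

    ```python
    import numpy as np

    def fit_spectral_slope(freqs, psd):
        """Least-squares slope of log10(PSD) versus log10(f)."""
        slope, _ = np.polyfit(np.log10(freqs), np.log10(psd), 1)
        return slope

    # Ideal inertial-range spectrum: P(f) proportional to f^(-5/3).
    f = np.logspace(-3, 0, 200)
    p = f ** (-5.0 / 3.0)
    slope = fit_spectral_slope(f, p)
    ```

    On real, gapped data the PSD itself must first be estimated with a gap-tolerant method (the paper tests several on synthetic signals with the V2 gap distribution) before such a fit is meaningful.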

  16. Contribution to the study of turbulence spectra

    Science.gov (United States)

    Dumas, R.

    1979-01-01

    An apparatus suitable for turbulence measurement in the ranges 1 to 5,000 cps and 6 to 16,000 cps was developed and is described. Turbulence spectra downstream of grids were examined with reference to their general characteristics, their low-frequency qualities, and the effects of periodic turbulence; medium and high frequencies are also discussed. Turbulence spectra in boundary layers are similarly examined, with reference to fluctuations at right angles to the wall and to lateral fluctuations. Turbulence spectra in a boundary layer with suction at the wall are discussed, as are induced turbulence and turbulence spectra at high Reynolds numbers. Calculations are presented relating to the effect of filtering on the value of the correlations in time and space.

  17. A Real-Time Turbulence Hazard Cockpit Display, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Aircraft encounters with turbulence are the leading cause of injuries in the airline industry and result in significant human, operational, and maintenance costs to...

  18. Critical values for unit root tests in seasonal time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); B. Hobijn (Bart)

    1997-01-01

    textabstractIn this paper, we present tables with critical values for a variety of tests for seasonal and non-seasonal unit roots in seasonal time series. We consider (extensions of) the Hylleberg et al. and Osborn et al. test procedures. These extensions concern time series with increasing seasonal

  19. Classification of time series patterns from complex dynamic systems

    Energy Technology Data Exchange (ETDEWEB)

    Schryver, J.C.; Rao, N.

    1998-07-01

    An increasing availability of high-performance computing and data storage media at decreasing cost is making possible the proliferation of large-scale numerical databases and data warehouses. Numeric warehousing enterprises on the order of hundreds of gigabytes to terabytes are a reality in many fields such as finance, retail sales, process systems monitoring, biomedical monitoring, surveillance and transportation. Large-scale databases are becoming more accessible to larger user communities through the internet, web-based applications and database connectivity. Consequently, most researchers now have access to a variety of massive datasets. This trend will probably only continue to grow over the next several years. Unfortunately, the availability of integrated tools to explore, analyze and understand the data warehoused in these archives is lagging far behind the ability to gain access to the same data. In particular, locating and identifying patterns of interest in numerical time series data is an increasingly important problem for which there are few available techniques. Temporal pattern recognition poses many interesting problems in classification, segmentation, prediction, diagnosis and anomaly detection. This research focuses on the problem of classification or characterization of numerical time series data. Highway vehicles and their drivers are examples of complex dynamic systems (CDS) which are being used by transportation agencies for field testing to generate large-scale time series datasets. Tools for effective analysis of numerical time series in databases generated by highway vehicle systems are not yet available, or have not been adapted to the target problem domain. However, analysis tools from similar domains may be adapted to the problem of classification of numerical time series data.

  20. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
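
    The two ingredients of the abstract, MWA input features and finite-difference forcing-response sensitivity, can be sketched generically. The `sensitivity` helper is a hypothetical illustration of the perturb-and-difference idea, not the authors' exact procedure, and the toy model stands in for the trained ANN:

    ```python
    import numpy as np

    def moving_window_average(x, window):
        """Trailing moving-window average (MWA) of a forcing series."""
        kernel = np.ones(window) / window
        return np.convolve(x, kernel, mode="valid")

    def sensitivity(model, x, delta=1e-3):
        """Finite-difference sensitivity: change in response per unit
        change of a uniform perturbation applied to the forcing."""
        return (model(x + delta) - model(x)) / delta

    rainfall = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
    mwa = moving_window_average(rainfall, 3)

    # Toy "model": the summed MWA response to the rainfall forcing.
    toy_model = lambda z: moving_window_average(z, 3).sum()
    s_rain = sensitivity(toy_model, rainfall)
    ```

    In the study the perturbation is applied to each MWA forcing input of a trained MWA-ANN model and the response is the predicted lake level, groundwater level, or spring flow.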

  1. Fractal analysis and nonlinear forecasting of indoor 222Rn time series

    International Nuclear Information System (INIS)

    Pausch, G.; Bossew, P.; Hofmann, W.; Steger, F.

    1998-01-01

    Fractal analyses of indoor 222Rn time series were performed using different chaos-theory-based measures such as the time delay method, Hurst's rescaled range analysis, the capacity (fractal) dimension, and the Lyapunov exponent. For all time series we calculated only positive Lyapunov exponents, which hints at chaos, while the Hurst exponents were well below 0.5, indicating antipersistent behaviour (past trends tend to reverse in the future). These time series were also analyzed with a nonlinear prediction method which allowed an estimation of the embedding dimensions with some restrictions, limiting the prediction to about three relative time steps. (orig.)

  2. Koopman Operator Framework for Time Series Modeling and Analysis

    Science.gov (United States)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.

  3. Testing for intracycle determinism in pseudoperiodic time series.

    Science.gov (United States)

    Coelho, Mara C S; Mendes, Eduardo M A M; Aguirre, Luis A

    2008-06-01

    A determinism test is proposed based on the well-known method of the surrogate data. Assuming predictability to be a signature of determinism, the proposed method checks for intracycle (e.g., short-term) determinism in the pseudoperiodic time series for which standard methods of surrogate analysis do not apply. The approach presented is composed of two steps. First, the data are preprocessed to reduce the effects of seasonal and trend components. Second, standard tests of surrogate analysis can then be used. The determinism test is applied to simulated and experimental pseudoperiodic time series and the results show the applicability of the proposed test.
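
    The second step of the proposed test relies on standard surrogate analysis. A minimal sketch of the classical phase-randomized surrogate (the paper's contribution, the seasonal/trend preprocessing applied first, is not reproduced here):

    ```python
    import numpy as np

    def phase_randomized_surrogate(x, rng):
        """Surrogate series with the same amplitude spectrum as x but
        randomized phases, destroying any nonlinear determinism."""
        X = np.fft.rfft(x)
        phases = rng.uniform(0, 2 * np.pi, len(X))
        X_new = np.abs(X) * np.exp(1j * phases)
        # Keep DC (and the Nyquist bin for even lengths) untouched so the
        # inverse transform is real with exactly the same amplitudes.
        X_new[0] = X[0]
        if len(x) % 2 == 0:
            X_new[-1] = X[-1]
        return np.fft.irfft(X_new, n=len(x))

    rng = np.random.default_rng(1)
    x = np.sin(np.linspace(0, 20 * np.pi, 512)) + 0.1 * rng.standard_normal(512)
    s = phase_randomized_surrogate(x, rng)
    ```

    A discriminating statistic (here, predictability) computed on the original series is then compared against its distribution over an ensemble of such surrogates; a significant difference rejects the linear-stochastic null hypothesis.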

  4. Simulation of turbulent flow over staggered tube bundles using multi-relaxation time lattice Boltzmann method

    International Nuclear Information System (INIS)

    Park, Jong Woon; Choi, Hyun Gyung

    2014-01-01

    A turbulent fluid flow over staggered tube bundles is of great interest in many engineering fields, including nuclear fuel rods, heat exchangers and especially the lower plenum of a gas cooled reactor. Computational methods for the simulation of such flows have evolved for decades, and the lattice Boltzmann method (LBM) is one of the attractive methods due to its sound physical basis and ease of computerization, including parallelization. In this study, to assess the computational performance of the LBM in turbulent flows over staggered tubes, a fluid flow analysis code employing the multi-relaxation time lattice Boltzmann method (MRT-LBM) is developed based on a 2-dimensional D2Q9 lattice model and the classical sub-grid eddy viscosity model of Smagorinsky. As a first step, the fundamental performance of the MRT-LBM is investigated against a standard problem of a flow past a cylinder at low Reynolds number in terms of drag forces. As a major step, benchmarking of the MRT-LBM is performed for a turbulent flow through staggered tube bundles at a Reynolds number of 18,000. For the flow past a single cylinder, the accuracy is validated against existing experimental data and previous computations in terms of drag forces on the cylinder. Mainly, the MRT-LBM computation for the flow through staggered tube bundles is performed and compared with experimental data and general purpose computational fluid dynamics (CFD) analyses with the standard k-ω turbulence model and large eddy simulation (LES) equipped with the turbulence closures of Smagorinsky-Lilly and the wall-adapting local eddy-viscosity (WALE) model. The agreement between the experimental and the computational results from the present MRT-LBM is found to be reasonably acceptable and even comparable to the LES, whereas the computational efficiency is superior. (orig.)

  5. Simulation of turbulent flow over staggered tube bundles using multi-relaxation time lattice Boltzmann method

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jong Woon; Choi, Hyun Gyung [Dongguk Univ., Gyeongju (Korea, Republic of). Nuclear and Energy Engineering Dept.

    2014-02-15

    A turbulent fluid flow over staggered tube bundles is of great interest in many engineering fields, including nuclear fuel rods, heat exchangers and especially the lower plenum of a gas cooled reactor. Computational methods for the simulation of such flows have evolved for decades, and the lattice Boltzmann method (LBM) is one of the attractive methods due to its sound physical basis and ease of computerization, including parallelization. In this study, to assess the computational performance of the LBM in turbulent flows over staggered tubes, a fluid flow analysis code employing the multi-relaxation time lattice Boltzmann method (MRT-LBM) is developed based on a 2-dimensional D2Q9 lattice model and the classical sub-grid eddy viscosity model of Smagorinsky. As a first step, the fundamental performance of the MRT-LBM is investigated against a standard problem of a flow past a cylinder at low Reynolds number in terms of drag forces. As a major step, benchmarking of the MRT-LBM is performed for a turbulent flow through staggered tube bundles at a Reynolds number of 18,000. For the flow past a single cylinder, the accuracy is validated against existing experimental data and previous computations in terms of drag forces on the cylinder. Mainly, the MRT-LBM computation for the flow through staggered tube bundles is performed and compared with experimental data and general purpose computational fluid dynamics (CFD) analyses with the standard k-ω turbulence model and large eddy simulation (LES) equipped with the turbulence closures of Smagorinsky-Lilly and the wall-adapting local eddy-viscosity (WALE) model. The agreement between the experimental and the computational results from the present MRT-LBM is found to be reasonably acceptable and even comparable to the LES, whereas the computational efficiency is superior. (orig.)
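
    The D2Q9 lattice underlying both records above is compact enough to write down. A sketch of its velocities, weights, and the standard second-order equilibrium distribution (the MRT collision of the paper relaxes moments at different rates, but the equilibrium is the same; variable names here are illustrative):

    ```python
    import numpy as np

    # D2Q9 lattice: 9 discrete velocities and their standard weights.
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

    def equilibrium(rho, u):
        """Second-order equilibrium f_i^eq for one cell with density rho
        and velocity u (lattice units, sound speed c_s^2 = 1/3)."""
        cu = c @ u                    # c_i . u for each of the 9 directions
        usq = u @ u
        return w * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

    feq = equilibrium(1.0, np.array([0.05, 0.0]))
    ```

    By construction the equilibrium conserves mass and momentum exactly: `feq.sum()` recovers rho and `feq @ c` recovers rho*u, which is a useful sanity check for any LBM implementation.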

  6. Time series analysis and its applications with R examples

    CERN Document Server

    Shumway, Robert H

    2017-01-01

    The fourth edition of this popular graduate textbook, like its predecessors, presents a balanced and comprehensive treatment of both time and frequency domain methods with accompanying theory. Numerous examples using nontrivial data illustrate solutions to problems such as discovering natural and anthropogenic climate change, evaluating pain perception experiments using functional magnetic resonance imaging, and monitoring a nuclear test ban treaty. The book is designed as a textbook for graduate level students in the physical, biological, and social sciences and as a graduate level text in statistics. Some parts may also serve as an undergraduate introductory course. Theory and methodology are separated to allow presentations on different levels. In addition to coverage of classical methods of time series regression, ARIMA models, spectral analysis and state-space models, the text includes modern developments including categorical time series analysis, multivariate spectral methods, long memory series, nonli...

  7. A KST framework for correlation network construction from time series signals

    Science.gov (United States)

    Qi, Jin-Peng; Gu, Quan; Zhu, Ying; Zhang, Ping

    2018-04-01

    A KST (Kolmogorov-Smirnov test and T statistic) method is used for the construction of a correlation network based on the fluctuation of each time series within the multivariate time signals. In this method, each time series is divided equally into multiple segments, and the maximal data fluctuation in each segment is calculated by a KST change detection procedure. Connections between the time series are derived from the data fluctuation matrix and are used for the construction of the fluctuation correlation network (FCN). The method was tested with synthetic simulations and the results were compared with those obtained using KS or T alone for the detection of data fluctuation. The novelty of this study is that the correlation analysis is based on the data fluctuation in each segment of each time series rather than on the original time signals, which is more meaningful for many real-world applications and for the analysis of large-scale time signals where prior knowledge is uncertain.
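
    The segment-fluctuation-then-correlate pipeline can be sketched as follows. As a loud caveat: the paper measures per-segment fluctuation with its KS-test/T-statistic change detector, which is replaced here by the simple segment range; the 0.7 threshold is likewise an assumed value:

    ```python
    import numpy as np

    def fluctuation_matrix(signals, n_segments=10):
        """Per-segment fluctuation of each series (rows of `signals`).

        Stand-in metric: segment range (max - min) instead of the paper's
        KST change-detection statistic.
        """
        segs = np.array_split(signals, n_segments, axis=1)
        return np.column_stack([s.max(axis=1) - s.min(axis=1) for s in segs])

    def correlation_network(signals, threshold=0.7):
        """Adjacency matrix connecting series whose fluctuation profiles
        correlate above the (assumed) threshold."""
        F = fluctuation_matrix(signals)
        C = np.corrcoef(F)
        A = (np.abs(C) >= threshold).astype(int)
        np.fill_diagonal(A, 0)        # no self-loops in the network
        return A

    rng = np.random.default_rng(2)
    base = rng.standard_normal(1000)
    signals = np.vstack([base + 0.1 * rng.standard_normal(1000)
                         for _ in range(4)])
    A = correlation_network(signals)
    ```

    Correlating the fluctuation profiles rather than the raw signals is what makes the construction robust to slow trends in the original series.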

  8. Multivariate stochastic analysis for Monthly hydrological time series at Cuyahoga River Basin

    Science.gov (United States)

    zhang, L.

    2011-12-01

    Copula has become a very powerful statistical and stochastic methodology for multivariate analysis in environmental and water resources engineering. In recent years, the popular one-parameter Archimedean copulas, e.g. the Gumbel-Hougaard copula, Cook-Johnson copula and Frank copula, and the meta-elliptical copulas, e.g. the Gaussian copula and Student-t copula, have been applied in multivariate hydrological analyses, e.g. multivariate rainfall (rainfall intensity, duration and depth), flood (peak discharge, duration and volume), and drought analyses (drought length, mean and minimum SPI values, and drought mean areal extent). Copula has also been applied in flood frequency analysis at the confluences of river systems by taking into account the dependence among upstream gauge stations rather than by using the hydrological routing technique. In most of the studies above, the annual time series have been treated as a stationary signal and assumed to consist of independent identically distributed (i.i.d.) random variables. But in reality, hydrological time series, especially daily and monthly hydrological time series, cannot be considered as i.i.d. random variables due to the periodicity in the data structure. The stationarity assumption is also under question due to climate change and land use and land cover (LULC) change in the past years. To this end, it is necessary to re-evaluate the classic approach to the study of hydrological time series by relaxing the stationarity assumption through a nonstationary approach. As to the study of the dependence structure of hydrological time series, the assumption of the same type of univariate distribution also needs to be relaxed by adopting copula theory. In this paper, the univariate monthly hydrological time series will be studied through the nonstationary time series analysis approach. The dependence structure of the multivariate monthly hydrological time series will be

  9. Forecasting daily meteorological time series using ARIMA and regression models

    Science.gov (United States)

    Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir

    2018-04-01

    The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 at four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. In our forecasting we used the Box-Jenkins and Holt-Winters methods, the seasonal autoregressive integrated moving-average model, the autoregressive integrated moving-average model with external regressors in the form of Fourier terms, and time series regression including trend and seasonality components, with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.
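
    The "Fourier terms as external regressors" ingredient is easy to illustrate. A minimal sketch of harmonic regression on a toy daily series with an annual cycle (the paper does this in R with full ARIMA errors; here plain least squares stands in for the ARIMAX fit, and all names are illustrative):

    ```python
    import numpy as np

    def fourier_terms(t, period, K):
        """K pairs of sin/cos regressors for a seasonal cycle of given
        period, the external regressors used alongside ARIMA errors."""
        cols = []
        for k in range(1, K + 1):
            cols.append(np.sin(2 * np.pi * k * t / period))
            cols.append(np.cos(2 * np.pi * k * t / period))
        return np.column_stack(cols)

    # Toy daily temperature-like series: annual cycle plus noise.
    rng = np.random.default_rng(3)
    t = np.arange(3 * 365)
    y = 10 + 8 * np.sin(2 * np.pi * t / 365) + rng.standard_normal(len(t))

    # Ordinary least squares on [intercept | Fourier terms]; a full
    # ARIMAX fit would additionally model the residuals as ARIMA.
    X = np.column_stack([np.ones_like(t, dtype=float),
                         fourier_terms(t, 365, K=2)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ coef
    ```

    Fourier regressors are preferred over 364 seasonal dummies for long-period seasonality because a handful of harmonics captures a smooth annual cycle with very few parameters.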

  10. Analysis of complex time series using refined composite multiscale entropy

    International Nuclear Information System (INIS)

    Wu, Shuen-De; Wu, Chiu-Wen; Lin, Shiou-Gwo; Lee, Kung-Yen; Peng, Chung-Kang

    2014-01-01

    Multiscale entropy (MSE) is an effective algorithm for measuring the complexity of a time series that has been applied in many fields successfully. However, MSE may yield an inaccurate estimation of entropy or induce undefined entropy because the coarse-graining procedure reduces the length of a time series considerably at large scales. Composite multiscale entropy (CMSE) was recently proposed to improve the accuracy of MSE, but it does not resolve undefined entropy. Here we propose a refined composite multiscale entropy (RCMSE) to improve CMSE. For short time series analyses, we demonstrate that RCMSE increases the accuracy of entropy estimation and reduces the probability of inducing undefined entropy.
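
    The coarse-graining step that causes the short-series problem, and the "composite" idea that RCMSE builds on, can both be shown in a few lines (a sketch of the preprocessing only; the entropy estimator itself is omitted):

    ```python
    import numpy as np

    def coarse_grain(x, scale):
        """Non-overlapping window averages used by MSE. At scale s the
        series shrinks by a factor of s, which is why entropy estimates
        degrade (or become undefined) at large scales."""
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    def composite_coarse_grain(x, scale):
        """All `scale` offset versions of the coarse-grained series; the
        'composite' ingredient of CMSE/RCMSE averages information across
        these offsets instead of using only offset 0."""
        return [coarse_grain(x[k:], scale) for k in range(scale)]

    x = np.arange(1.0, 9.0)          # 1, 2, ..., 8
    cg = coarse_grain(x, 2)
    offsets = composite_coarse_grain(x, 2)
    ```

    RCMSE then pools the template-match counts over all offset series before taking the logarithm, which is what reduces the variance of the entropy estimate and avoids the undefined-entropy case.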

  11. Tools for Generating Useful Time-series Data from PhenoCam Images

    Science.gov (United States)

    Milliman, T. E.; Friedl, M. A.; Frolking, S.; Hufkens, K.; Klosterman, S.; Richardson, A. D.; Toomey, M. P.

    2012-12-01

    The PhenoCam project (http://phenocam.unh.edu/) is tasked with acquiring, processing, and archiving digital repeat photography to be used for scientific studies of vegetation phenological processes. Over the past 5 years the PhenoCam project has collected over 2 million time series images, totaling over 700 GB of image data. Several papers have been published describing derived "vegetation indices" (such as green-chromatic-coordinate or gcc) which can be compared to standard measures such as NDVI or EVI. Imagery from our archive is available for download, but converting series of images for a particular camera into useful scientific data, while simple in principle, is complicated by a variety of factors. Cameras are often exposed to harsh weather conditions (high wind, rain, ice, snow pile-up), which result in images where the field of view (FOV) is partially obscured or completely blocked for periods of time. The FOV can also change for other reasons (mount failures, tower maintenance, etc.). Some of the relatively inexpensive cameras that are being used can also temporarily lose color balance or exposure control, resulting in loss of imagery. All these factors negatively influence the automated analysis of the image time series, making this a non-trivial task. Here we discuss the challenges of processing PhenoCam image time series for vegetation monitoring and the associated data management tasks. We describe our current processing framework and a simple standardized output format for the resulting time-series data. The time-series data in this format will be generated for specific "regions of interest" (ROIs) for each of the cameras in the PhenoCam network. This standardized output (which will be updated daily) can be considered 'the pulse' of a particular camera and will provide a default phenological dynamic for said camera. The time-series data can also be viewed as a higher level product which can be used to generate "vegetation indices", like gcc, for

  12. Multiple Time Series Ising Model for Financial Market Simulations

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2015-01-01

    In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction term that couples the spins of one system to those of the others. Simulations of our model show that the time series exhibit the volatility clustering often observed in real financial markets. Furthermore, we find non-zero cross correlations between the volatilities generated by our model. Thus our model can simulate stock markets where the volatilities of stocks are mutually correlated.
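    A much-reduced sketch of the idea of coupling spins across systems, here as a 1-D Metropolis update in which each spin of market A also feels the matching spin of market B (the lattice, dimensionality and coupling constants are illustrative assumptions, not the paper's exact model):

```python
import math
import random

def metropolis_sweep(spins_a, spins_b, beta, j_intra=1.0, j_inter=0.3):
    """One Metropolis sweep of system A on a 1-D ring; each spin of A also
    couples (strength j_inter) to the matching spin of system B."""
    n = len(spins_a)
    for _ in range(n):
        i = random.randrange(n)
        # local field: nearest neighbours within A plus the coupled spin in B
        h = j_intra * (spins_a[(i - 1) % n] + spins_a[(i + 1) % n]) + j_inter * spins_b[i]
        delta_e = 2.0 * spins_a[i] * h
        # flip if it lowers the energy, or with Boltzmann probability otherwise
        if delta_e <= 0 or random.random() < math.exp(-beta * delta_e):
            spins_a[i] = -spins_a[i]
    return spins_a
```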

  13. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012.

    Science.gov (United States)

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. An autoregressive integrated moving average (ARIMA) model was used to fit a univariate time series of syphilis incidence. A separate multivariable time series model for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). The syphilis incidence rates increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both the ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed the highest goodness-of-fit with the ARIMA(0,0,1)×(0,1,1) model. Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model outperformed the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis.
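    The D = 1 seasonal-differencing step of the seasonal ARIMA model above, together with a simple moving-average trend estimate, can be sketched as follows (a toy illustration, not the paper's fitted model):

```python
import numpy as np

def seasonal_difference(y, period=12):
    """Remove a repeating seasonal pattern: the D = 1 step of ARIMA(p,d,q)x(P,1,Q)."""
    y = np.asarray(y, dtype=float)
    return y[period:] - y[:-period]

def trend_moving_average(y, period=12):
    """Simple period-length moving average, a crude estimate of the long-term trend."""
    kernel = np.ones(period) / period
    return np.convolve(np.asarray(y, dtype=float), kernel, mode="valid")
```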

  14. FTSPlot: fast time series visualization for large datasets.

    Directory of Open Access Journals (Sweden)

    Michael Riss

    Full Text Available The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of O(n · log(N)); the visualization itself can be done with a complexity of O(1) and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with < 20 ms. The current 64-bit implementation theoretically supports datasets with up to 2^64 bytes; on the x86_64 architecture currently up to 2^48 bytes are supported, and benchmarks have been conducted with 2^40 bytes (1 TiB) or 1.3 × 10^11 double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments.
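    The hierarchic level-of-detail idea can be sketched as a min/max pyramid: an O(n) preprocessing pass builds coarser and coarser summaries, and rendering then reads from whichever level matches the screen resolution, independently of the dataset size (a conceptual sketch, not FTSPlot's actual data format):

```python
def build_pyramid(samples, block=2):
    """Precompute (min, max) summaries at successively coarser levels.
    Each level halves the resolution, so any zoom level can be drawn from a
    level whose size is proportional to the screen width rather than to N."""
    levels = [[(s, s) for s in samples]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        nxt = []
        for i in range(0, len(prev), block):
            chunk = prev[i:i + block]
            nxt.append((min(lo for lo, _ in chunk), max(hi for _, hi in chunk)))
        levels.append(nxt)
    return levels
```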

  15. Normalization methods in time series of platelet function assays

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from the high-dimensional data spaces of temporal multivariate clinical data, represented as multivariate time series, is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, and the most suitable approach for platelet function data series is discussed. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation preserved the correlation as calculated by the Spearman correlation test when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be abstracted from the charts, as was also the case when using all data as one dataset for normalization. PMID:27428217
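    Three of the normalizations named above can be sketched in a few lines; with time points as rows and assays as columns, `axis=0` normalizes per assay over all time points and `axis=1` per time point over all assays (a sketch under those layout assumptions, not the article's code):

```python
import numpy as np

def z_transform(x, axis=0):
    """Centre to mean 0 and scale to standard deviation 1 along the given axis."""
    return (x - x.mean(axis=axis, keepdims=True)) / x.std(axis=axis, keepdims=True)

def range_transform(x, axis=0):
    """Rescale linearly to the interval [0, 1] along the given axis."""
    lo = x.min(axis=axis, keepdims=True)
    hi = x.max(axis=axis, keepdims=True)
    return (x - lo) / (hi - lo)

def iqr_transform(x, axis=0):
    """Centre on the median and scale by the interquartile range (robust variant)."""
    q1 = np.percentile(x, 25, axis=axis, keepdims=True)
    q3 = np.percentile(x, 75, axis=axis, keepdims=True)
    return (x - np.median(x, axis=axis, keepdims=True)) / (q3 - q1)
```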

  16. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series.

    Science.gov (United States)

    Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp

    2011-08-18

    Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
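    For reference, the dynamic-programming core that DTW-S extends can be sketched as follows (the classic algorithm only; the significance-estimation layer of DTW-S is not reproduced here):

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping cost between two sequences. DTW-S builds a
    simulation step on top of such an alignment to attach significance values."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor alignments
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```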

  17. Automated Bayesian model development for frequency detection in biological time series

    Directory of Open Access Journals (Sweden)

    Oldroyd Giles ED

    2011-06-01

    Full Text Available Abstract Background A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. Results In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlights the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Conclusions Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and

  18. Automated Bayesian model development for frequency detection in biological time series.

    Science.gov (United States)

    Granqvist, Emma; Oldroyd, Giles E D; Morris, Richard J

    2011-06-24

    A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlights the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data. Biological time
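    In the simplest case of a single stationary frequency in Gaussian noise of known standard deviation, Bayesian Spectrum Analysis reduces to exponentiating the Schuster periodogram; a sketch of that special case (a deliberate simplification of the method described above):

```python
import numpy as np

def log_posterior(freqs, t, d, sigma):
    """Unnormalised log-posterior over candidate frequencies, assuming one
    sinusoid in Gaussian noise of known standard deviation sigma. In this
    special case the log-posterior is the Schuster periodogram over sigma^2."""
    d = d - d.mean()
    out = []
    for f in freqs:
        # Schuster periodogram value at frequency f
        c = np.abs(np.sum(d * np.exp(-2j * np.pi * f * t))) ** 2 / len(d)
        out.append(c / sigma ** 2)
    return np.array(out)
```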

  19. hctsa: A Computational Framework for Automated Time-Series Phenotyping Using Massive Feature Extraction.

    Science.gov (United States)

    Fulcher, Ben D; Jones, Nick S

    2017-11-22

    Phenotype measurements frequently take the form of time series, but we currently lack a systematic method for relating these complex data streams to scientifically meaningful outcomes, such as relating the movement dynamics of organisms to their genotype or measurements of brain dynamics of a patient to their disease diagnosis. Previous work addressed this problem by comparing implementations of thousands of diverse scientific time-series analysis methods in an approach termed highly comparative time-series analysis. Here, we introduce hctsa, a software tool for applying this methodological approach to data. hctsa includes an architecture for computing over 7,700 time-series features and a suite of analysis and visualization algorithms to automatically select useful and interpretable time-series features for a given application. Using exemplar applications to high-throughput phenotyping experiments, we show how hctsa allows researchers to leverage decades of time-series research to quantify and understand informative structure in time-series data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
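    hctsa computes over 7,700 features; the flavour of "one interpretable summary per feature" can be conveyed with a toy vector of five (illustrative features of our own choosing, not hctsa's library):

```python
import numpy as np

def basic_features(x):
    """A toy feature vector in the spirit of massive feature extraction:
    each entry is one interpretable summary of the series."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "lag1_autocorr": np.corrcoef(x[:-1], x[1:])[0, 1],
        "mean_abs_change": np.abs(dx).mean(),
        "prop_above_mean": (x > x.mean()).mean(),
    }
```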

  20. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  1. Applied time series analysis and innovative computing

    CERN Document Server

    Ao, Sio-Iong

    2010-01-01

    This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.

  2. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    Science.gov (United States)

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
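    The combined strategy discussed above (Fisher transformation followed by a Box-Cox transformation) can be sketched as follows; the positivity shift applied before Box-Cox and the fixed λ are our simplifying assumptions (in practice λ would be estimated from the data):

```python
import numpy as np

def fisher(r):
    """Fisher z-transformation of correlation values in (-1, 1)."""
    return np.arctanh(r)

def box_cox(x, lam):
    """Box-Cox transformation; x must be strictly positive."""
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x)
    return (x ** lam - 1.0) / lam

def stabilise(r_series, lam=0.5):
    """Fisher transform, shift to positive values, then Box-Cox (the combined strategy)."""
    z = fisher(np.asarray(r_series, dtype=float))
    shifted = z - z.min() + 1.0  # assumption: shift so Box-Cox is defined
    return box_cox(shifted, lam)
```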

  3. Turbulence-combustion interaction in direct injection diesel engine

    Directory of Open Access Journals (Sweden)

    Bencherif Mohamed

    2014-01-01

    Full Text Available Exact experimental measurement of chemical species and turbulence intensity during the closed part of the engine combustion cycle is currently unattainable. This paper deals with numerical investigations of an experimental direct injection Diesel engine and a commercial turbocharged heavy duty direct injection one. Simulations are carried out with the kiva3v2 code using the RNG k-ε model. A reduced mechanism for n-heptane was adopted for predicting auto-ignition and combustion processes. From the calibrated code based on experimental in-cylinder pressures, the study focuses on the turbulence parameters and combustion species evolution in an attempt to improve understanding of turbulence-chemistry interaction during the engine cycle. The turbulent kinetic energy and its dissipation rate are taken as representative parameters of turbulence. The results indicate that the chemical reactions of fuel oxidation during the auto-ignition delay increase the turbulence levels. The peak position of turbulent kinetic energy coincides systematically with the auto-ignition timing. This position seems to be governed by the viscous effects generated by the high pressure level reached at the auto-ignition timing. The hot regime flame then rapidly decreases the turbulence intensity, first through viscous effects during the fast premixed combustion and then through heat transfer during the other periods. It is shown that unstable species such as CO are due to deficiencies in local mixture preparation during the strong decrease of turbulence energy. Also, an attempt to build an innovative relationship between self-ignition and maximum turbulence level is proposed. This work justifies the suggestion to determine the self-ignition timing by other means.

  4. Characteristics of the transmission of autoregressive sub-patterns in financial time series

    Science.gov (United States)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong

    2014-09-01

    There are many types of autoregressive patterns in financial time series, and they form a transmission process. Here, we define autoregressive patterns quantitatively through an econometrical regression model. We present a computational algorithm that sets the autoregressive patterns as nodes and transmissions between patterns as edges, and then converts the transmission process of autoregressive patterns in a time series into a network. We utilised daily Shanghai (securities) composite index time series to study the transmission characteristics of autoregressive patterns. We found statistically significant evidence that the financial market is not random and that there are similar characteristics between parts and the whole of the time series. A few types of autoregressive sub-patterns and transmission patterns drive the oscillations of the financial market. A clustering effect on fluctuations appears in the transmission process, and certain non-major autoregressive sub-patterns have high mediating capabilities in the financial time series. Different stock indexes exhibit similar characteristics in the transmission of fluctuation information. This work not only proposes a distinctive perspective for analysing financial time series but also provides important information for investors.
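    The node-and-edge construction can be sketched by labelling sliding windows with a crude autoregressive pattern and counting transitions between consecutive labels (the sign of a lag-1 regression coefficient stands in for the paper's econometric pattern definition):

```python
import numpy as np
from collections import Counter

def ar1_sign(window):
    """Label a window by the sign of its lag-1 autoregression coefficient:
    a crude stand-in for regression-defined autoregressive patterns."""
    x = np.asarray(window, dtype=float)
    phi = np.polyfit(x[:-1], x[1:], 1)[0]
    return "+" if phi >= 0 else "-"

def transmission_edges(series, width=10):
    """Slide a window, label each segment, and count pattern-to-pattern transitions
    (the weighted edges of the transmission network)."""
    labels = [ar1_sign(series[i:i + width])
              for i in range(0, len(series) - width + 1, width)]
    return Counter(zip(labels[:-1], labels[1:]))
```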

  5. Numerical investigation of kinetic turbulence in relativistic pair plasmas - I. Turbulence statistics

    Science.gov (United States)

    Zhdankin, Vladimir; Uzdensky, Dmitri A.; Werner, Gregory R.; Begelman, Mitchell C.

    2018-02-01

    We describe results from particle-in-cell simulations of driven turbulence in collisionless, magnetized, relativistic pair plasma. This physical regime provides a simple setting for investigating the basic properties of kinetic turbulence and is relevant for high-energy astrophysical systems such as pulsar wind nebulae and astrophysical jets. In this paper, we investigate the statistics of turbulent fluctuations in simulations on lattices of up to 1024^3 cells and containing up to 2 × 10^11 particles. Due to the absence of a cooling mechanism in our simulations, turbulent energy dissipation reduces the magnetization parameter to order unity within a few dynamical times, causing turbulent motions to become sub-relativistic. In the developed stage, our results agree with predictions from magnetohydrodynamic turbulence phenomenology at inertial-range scales, including a power-law magnetic energy spectrum with index near -5/3, scale-dependent anisotropy of fluctuations described by critical balance, lognormal distributions for particle density and internal energy density (related by a 4/3 adiabatic index, as predicted for an ultra-relativistic ideal gas), and the presence of intermittency. We also present possible signatures of a kinetic cascade by measuring power-law spectra for the magnetic, electric and density fluctuations at sub-Larmor scales.

  6. Mathematical and physical theory of turbulence

    CERN Document Server

    Cannon, John

    2006-01-01

    Although the current dynamical system approach offers several important insights into the turbulence problem, issues still remain that present challenges to conventional methodologies and concepts. These challenges call for the advancement and application of new physical concepts, mathematical modeling, and analysis techniques. Bringing together experts from physics, applied mathematics, and engineering, Mathematical and Physical Theory of Turbulence discusses recent progress and some of the major unresolved issues in two- and three-dimensional turbulence as well as scalar compressible turbulence. Containing introductory overviews as well as more specialized sections, this book examines a variety of turbulence-related topics. The authors concentrate on theory, experiments, computational, and mathematical aspects of Navier-Stokes turbulence; geophysical flows; modeling; laboratory experiments; and compressible/magnetohydrodynamic effects. The topics discussed in these areas include finite-time singularities a...

  7. A Review of Some Aspects of Robust Inference for Time Series.

    Science.gov (United States)

    1984-09-01

    A Review of Some Aspects of Robust Inference for Time Series, R. D. Martin, Technical Report, September 1984, Department of Statistics, University of Washington, Seattle. ... One cannot hope to have a good method for dealing with outliers in time series by using only an instantaneous nonlinear transformation of the data.

  8. Refined composite multiscale weighted-permutation entropy of financial time series

    Science.gov (United States)

    Zhang, Yongping; Shang, Pengjian

    2018-04-01

    For quantifying the complexity of nonlinear systems, multiscale weighted-permutation entropy (MWPE) has recently been proposed. MWPE incorporates amplitude information and has been applied to account for the multiple inherent dynamics of time series. However, MWPE may be unreliable, because its estimated values show large fluctuations for slight variations of the data locations, and show a significant distinction only for different lengths of time series. Therefore, we propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Comparing the RCMWPE results with those of other methods on both synthetic data and financial time series, the RCMWPE method shows not only the advantages inherited from MWPE but also lower sensitivity to the data locations; it is more stable and much less dependent on the length of the time series. Moreover, we present and discuss the results of the RCMWPE method on the daily price return series from Asian and European stock markets. There are significant differences between Asian markets and European markets, and the entropy values of the Hang Seng Index (HSI) are close to but higher than those of the European markets. The reliability of the proposed RCMWPE method has been supported by simulations on generated and real data. It could be applied to a variety of fields to quantify the complexity of systems over multiple scales more accurately.
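    A compact sketch of the refined composite construction: compute the weighted ordinal-pattern distribution for every shifted coarse-graining of the series, average the distributions, and only then take the entropy (embedding dimension and variance weighting follow the usual WPE convention; details of the paper's implementation may differ):

```python
import numpy as np
from itertools import permutations
from math import log

def weighted_perm_distribution(x, m=3):
    """Weighted ordinal-pattern distribution: each pattern is counted with the
    variance of its embedding vector as weight (the amplitude information)."""
    patterns = {p: 0.0 for p in permutations(range(m))}
    for i in range(len(x) - m + 1):
        v = x[i:i + m]
        patterns[tuple(np.argsort(v))] += np.var(v)
    total = sum(patterns.values())
    return {p: w / total for p, w in patterns.items()} if total else patterns

def rcmwpe(x, scale, m=3):
    """Refined composite: average the weighted distributions of all `scale`
    shifted coarse-grainings, then take the entropy of the averaged distribution."""
    x = np.asarray(x, dtype=float)
    dists = []
    for k in range(scale):
        usable = (len(x) - k) // scale * scale
        cg = x[k:k + usable].reshape(-1, scale).mean(axis=1)  # coarse-grained series
        dists.append(weighted_perm_distribution(cg, m))
    avg = {p: np.mean([d[p] for d in dists]) for p in dists[0]}
    return -sum(w * log(w) for w in avg.values() if w > 0)
```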

  9. Localised burst reconstruction from space-time PODs in a turbulent channel

    Science.gov (United States)

    Garcia-Gutierrez, Adrian; Jimenez, Javier

    2017-11-01

    The traditional proper orthogonal decomposition of the turbulent velocity fluctuations in a channel is extended to time under the assumption that the attractor is statistically stationary and can be treated as periodic for long-enough times. The objective is to extract space- and time-localised eddies that optimally represent the kinetic energy (and two-event correlation) of the flow. Using time-resolved data of a small-box simulation at Reτ = 1880, minimal for y/h < 0.25, PODs are computed from the two-point spectral-density tensor Φ(kx, kz, y, y', ω). They are Fourier components in x, z and time, and depend on y and on the temporal frequency ω, or, equivalently, on the convection velocity c = ω/kx. Although the latter depends on y, a spatially and temporally localised `burst' can be synthesised by adding a range of PODs with specific phases. The results are localised bursts that are amplified and tilted, in a time-periodic version of Orr-like behaviour. Funded by the ERC COTURB project.
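    Classical (space-only) POD, which the record above extends to time and frequency, can be sketched via an SVD of the snapshot matrix:

```python
import numpy as np

def pod_modes(snapshots, n_modes=2):
    """Proper orthogonal decomposition of a (space x time) snapshot matrix.
    Columns are snapshots in time; the left singular vectors are the
    energy-optimal spatial modes, and s^2 gives the energy captured by each."""
    x = np.asarray(snapshots, dtype=float)
    x = x - x.mean(axis=1, keepdims=True)  # remove the temporal mean
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)       # fraction of fluctuation energy per mode
    return u[:, :n_modes], energy[:n_modes]
```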

  10. Turbulent times: effects of turbulence and violence exposure in adolescence on high school completion, health risk behavior, and mental health in young adulthood.

    Science.gov (United States)

    Boynton-Jarrett, Renée; Hair, Elizabeth; Zuckerman, Barry

    2013-10-01

    Turbulent social environments are associated with health and developmental risk, yet mechanisms have been understudied. Guided by a life course framework and stress theory, this study examined the association between turbulent life transitions (including frequent residential mobility, school transitions, family structure disruptions, and homelessness) and exposure to violence during adolescence and high school completion, mental health, and health risk behaviors in young adulthood. Participants (n = 4834) from the U.S. National Longitudinal Survey of Youth, 1997 cohort were followed prospectively from age 12-14 years for 10 years. We used structural equation models to investigate pathways between turbulence and cumulative exposure to violence (CEV), and high school completion, mental health, and health risk behaviors, while accounting for early life socio-demographics, family processes, and individual characteristics. Results indicated that turbulence index was associated with cumulative exposure to violence in adolescence. Both turbulence index and cumulative exposure to violence were positively associated with higher health risk behavior, poorer mental health, and inversely associated with high school completion. These findings highlight the importance of considering the cumulative impact of turbulent and adverse social environments when developing interventions to optimize health and developmental trajectory for adolescents transitioning into adulthood. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Parametric, nonparametric and parametric modelling of a chaotic circuit time series

    Science.gov (United States)

    Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.

    2000-09-01

    The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model or by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and if rejected suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit where we obtain an extremely accurate reconstruction of the observed attractor.

  12. CFD simulations in the nuclear containment using the DES turbulence models

    International Nuclear Information System (INIS)

    Ding, Peng; Chen, Meilan; Li, Wanai; Liu, Yulan; Wang, Biao

    2015-01-01

    Highlights: • The k-ε based DES model is used in the nuclear containment simulation. • The comparison of results between different turbulent models is obtained. • The superiority of DES models is analyzed. • The computational efficiency with the DES turbulence models is explained. - Abstract: Different species of gases would be released into the containment and could cause unpredicted disasters during severe nuclear accidents. It is important to accurately predict the transportation and stratification phenomena of these gas mixtures. CFD simulations of these thermal hydraulic issues in the nuclear containment are investigated in this paper. The main work is to study the influence of the turbulence model on the calculation of gas transportation and heat transfer. The k-ε based DES and other frequently used turbulence models are used in the steam and helium release simulations of the THAI series experiments. This paper shows the superiority of the DES turbulence model in terms of computational efficiency and accuracy against the experimental results, and analyzes the necessity of the DES model for simulating large-scale containment flows with both laminar and turbulent regions.

  13. CFD simulations in the nuclear containment using the DES turbulence models

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Peng [School of Engineering, Sun Yat-Sen University, Guangzhou (China); Chen, Meilan [China Nuclear Power Technology Research Institute, Shenzhen (China); Li, Wanai, E-mail: liwai@mail.sysu.edu.cn [Sino-French Institute of Nuclear Engineering & Technology, Sun Yat-Sen University, Guangzhou (China); Liu, Yulan [School of Engineering, Sun Yat-Sen University, Guangzhou (China); Wang, Biao [Sino-French Institute of Nuclear Engineering & Technology, Sun Yat-Sen University, Guangzhou (China)

    2015-06-15

    Highlights: • The k-ε based DES model is used in the nuclear containment simulation. • The comparison of results between different turbulent models is obtained. • The superiority of DES models is analyzed. • The computational efficiency with the DES turbulence models is explained. - Abstract: Different species of gases would be released into the containment and could cause unpredicted disasters during severe nuclear accidents. It is important to accurately predict the transportation and stratification phenomena of these gas mixtures. CFD simulations of these thermal hydraulic issues in the nuclear containment are investigated in this paper. The main work is to study the influence of the turbulence model on the calculation of gas transportation and heat transfer. The k-ε based DES and other frequently used turbulence models are used in the steam and helium release simulations of the THAI series experiments. This paper shows the superiority of the DES turbulence model in terms of computational efficiency and accuracy against the experimental results, and analyzes the necessity of the DES model for simulating large-scale containment flows with both laminar and turbulent regions.

  14. Application of Arbitrary-Order Hilbert Spectral Analysis to Passive Scalar Turbulence

    International Nuclear Information System (INIS)

    Huang, Y X; Lu, Z M; Liu, Y L; Schmitt, F G; Gagne, Y

    2011-01-01

    In previous work [Huang et al., PRE 82, 26319, 2010], we found that the passive scalar turbulence field may be less intermittent than previously believed. Here we apply the same method, namely arbitrary-order Hilbert spectral analysis, to a passive scalar (temperature) time series with a Taylor-microscale Reynolds number Re λ ≅ 3000. We find that the discrepancy between the Hilbert scaling exponents ξ θ (q) and Kolmogorov-Obukhov-Corrsin (KOC) theory increases with Reynolds number, and that consequently the discrepancy between the Hilbert and structure-function approaches could disappear at infinite Reynolds number.

  15. Synthetic river flow time series generator for dispatch and spot price forecast

    International Nuclear Information System (INIS)

    Flores, R.A.

    2007-01-01

    Decision-making in electricity markets is complicated by uncertainties in demand growth, power supplies and fuel prices. In Peru, where the electrical power system is highly dependent on water resources at dams and river flows, hydrological uncertainties play a primary role in planning, price and dispatch forecasting. This paper proposed a signal processing method for generating new synthetic river flow time series as a support for planning and spot market price forecasting. River flow time series are natural phenomena representing a continuous-time domain process. As an alternative synthetic representation of the original river flow time series, the proposed signal processing method preserves correlations, basic statistics and seasonality. It takes into account deterministic, periodic and non-periodic components, such as those due to the El Niño Southern Oscillation phenomenon. The new synthetic time series is strongly correlated with the original river flow time series, rendering it a suitable replacement for the classical method of sorting historical river flow time series. As a dispatch and planning approach to spot pricing, the proposed method offers more accurate modelling by decomposing the signal into deterministic, periodic, non-periodic and stochastic sub-signals. 4 refs., 4 tabs., 13 figs
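The seasonal-plus-residual split underlying such generators can be sketched in a few lines of Python (a minimal illustration only; the function name, the toy series and the period handling are our assumptions, and the paper's method additionally separates trend and non-periodic components such as ENSO effects):

```python
def decompose_seasonal(flow, period=12):
    # deterministic seasonal component: the mean of each position in
    # the cycle; the residual is the stochastic remainder
    seasonal = []
    for k in range(period):
        vals = flow[k::period]
        seasonal.append(sum(vals) / len(vals))
    residual = [v - seasonal[i % period] for i, v in enumerate(flow)]
    return seasonal, residual

# a purely periodic "river flow" decomposes with zero residual
flow = [3.0, 5.0, 2.0] * 8
seasonal, residual = decompose_seasonal(flow, period=3)
```

A synthetic series could then be produced by adding resampled residuals back onto the deterministic seasonal component, preserving basic statistics and seasonality in the spirit described above.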

  16. VNAP2: a computer program for computation of two-dimensional, time-dependent, compressible, turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Cline, M.C.

    1981-08-01

    VNAP2 is a computer program for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow. VNAP2 solves the two-dimensional, time-dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing-length model, a one-equation model, or the Jones-Launder two-equation model. The geometry may be a single- or a dual-flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference-plane-characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free-jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet-powered afterbodies, airfoils, and free-jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.

  17. Cross-sample entropy of foreign exchange time series

    Science.gov (United States)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for a period from 1995 to 2002. Cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The calculation method of confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series in periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before the Asian currency crisis. Comparison with the correlation coefficient shows that cross-SampEn is superior to describe the correlation between time series.
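The cross-SampEn statistic used above can be sketched directly from its definition (a minimal illustration; the template length m, tolerance r and the toy series are our assumptions, and in practice r is usually taken as a fraction of the series' standard deviation):

```python
import math

def cross_sampen(u, v, m=2, r=0.1):
    """Cross-sample entropy: -ln(A/B), where B counts template pairs of
    length m (one template from each series) within tolerance r, and A
    counts pairs of length m + 1."""
    def count_matches(mm):
        n = 0
        for i in range(len(u) - mm):
            for j in range(len(v) - mm):
                # Chebyshev (maximum) distance between the two templates
                if max(abs(u[i + k] - v[j + k]) for k in range(mm)) <= r:
                    n += 1
        return n
    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b)

# identical series are maximally synchronous: low cross-SampEn
x = [0.1, 0.5, 0.2, 0.6] * 2 + [0.1, 0.5]
print(cross_sampen(x, x, m=2, r=0.05))
```

A higher value indicates greater asynchrony between the two series, which is how the post-crisis exchange rate pairs are read in the abstract above.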

  18. Clustering Multivariate Time Series Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Shima Ghassempour

    2014-03-01

    In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistical knowledge, and therefore are accessible to a wide range of researchers.

  19. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    Science.gov (United States)

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important for understanding the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of the time and sample dimensions. The analysis of such data thus seeks gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting three-dimensional data, i.e. gene-time-condition. The computational complexity of analyzing such data is very high, compared to the already difficult NP-hard two-dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression patterns in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high-throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools, and only TimesVector successfully detected clusters with differential expression patterns across conditions. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr. Supplementary data are available at

  20. Stochastic generation of hourly wind speed time series

    International Nuclear Information System (INIS)

    Shamshad, A.; Wan Mohd Ali Wan Hussin; Bawadi, M.A.; Mohd Sanusi, S.A.

    2006-01-01

    In the present study, hourly wind speed data of Kuala Terengganu in Peninsular Malaysia are simulated using the transition matrix approach of a Markovian process. The wind speed time series is divided into various states based on certain criteria, and the next wind speed state is selected based on the previous state. A cumulative probability transition matrix is formed in which each row ends with 1. Using uniform random numbers between 0 and 1, a series of future states is generated; these states are then converted to the corresponding wind speed values using another uniform random number generator. The accuracy of the model has been determined by comparing statistical characteristics such as the average, standard deviation, root mean square error, probability density function and autocorrelation function of the generated data to those of the original data. The generated wind speed time series is capable of preserving the wind speed characteristics of the observed data
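The transition-matrix generator described above can be sketched as follows (a hedged illustration: the state discretization of raw wind speeds and the back-conversion to speed values are omitted, and all names and the toy state sequence are ours):

```python
import random

def build_cumulative_matrix(states, n_states):
    # count state-to-state transitions, then turn each row into a
    # cumulative probability distribution (each row ends with 1.0)
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    cum = []
    for row in counts:
        total = sum(row) or 1
        acc, crow = 0.0, []
        for c in row:
            acc += c / total
            crow.append(acc)
        crow[-1] = 1.0  # guard against floating-point round-off
        cum.append(crow)
    return cum

def generate(cum, start, n, rng):
    # walk the chain: draw u in [0, 1) and pick the first state whose
    # cumulative probability covers u
    out = [start]
    for _ in range(n - 1):
        u = rng.random()
        row = cum[out[-1]]
        out.append(next(i for i, p in enumerate(row) if u <= p))
    return out

observed = [0, 1, 2, 1, 0, 1, 2, 2, 1, 0]  # toy discretized wind states
cum = build_cumulative_matrix(observed, 3)
synthetic = generate(cum, 0, 200, random.Random(0))
```

The synthetic state sequence can then be mapped back to wind speeds, and its statistics compared against the observed data as the abstract describes.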

  1. Causal strength induction from time series data.

    Science.gov (United States)

    Soo, Kevin W; Rottman, Benjamin M

    2018-04-01

    One challenge when inferring the strength of cause-effect relations from time series data is that the cause and/or effect can exhibit temporal trends. If temporal trends are not accounted for, a learner could infer that a causal relation exists when it does not, or even infer that there is a positive causal relation when the relation is negative, or vice versa. We propose that learners use a simple heuristic to control for temporal trends: they focus not on the states of the cause and effect at a given instant, but on how the cause and effect change from one observation to the next, which we call transitions. Six experiments were conducted to understand how people infer causal strength from time series data. We found that participants indeed use transitions in addition to states, which helps them to reach more accurate causal judgments (Experiments 1A and 1B). Participants use transitions more when the stimuli are presented in a naturalistic visual format than a numerical format (Experiment 2), and the effect of transitions is not driven by primacy or recency effects (Experiment 3). Finally, we found that participants primarily use the direction in which variables change rather than the magnitude of the change for estimating causal strength (Experiments 4 and 5). Collectively, these studies provide evidence that people often use a simple yet effective heuristic for inferring causal strength from time series data. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
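The states-versus-transitions contrast can be made concrete with two invented series that share an upward trend but move oppositely from step to step (an illustrative sketch of the confound, not the authors' stimuli):

```python
def pearson(x, y):
    # Pearson correlation of two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def transitions(x):
    # step-to-step changes, the quantity the heuristic focuses on
    return [b - a for a, b in zip(x, x[1:])]

# two series that share an upward trend but move oppositely step to step
cause = [1, 2, 2, 4, 3, 6, 5, 8, 7, 10]
effect = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
print(pearson(cause, effect))                            # strongly positive
print(pearson(transitions(cause), transitions(effect)))  # strongly negative
```

Correlating raw states suggests a positive relation driven entirely by the shared trend, while correlating transitions reveals the opposite sign, which is exactly the reversal the abstract warns about.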

  2. Interpretable Categorization of Heterogeneous Time Series Data

    Science.gov (United States)

    Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Silbermann, Joshua

    2017-01-01

    We analyze data from simulated aircraft encounters to validate and inform the development of a prototype aircraft collision avoidance system. The high-dimensional and heterogeneous time series dataset is analyzed to discover properties of near mid-air collisions (NMACs) and categorize the NMAC encounters. Domain experts use these properties to better organize and understand NMAC occurrences. Existing solutions either are not capable of handling high-dimensional and heterogeneous time series datasets or do not provide explanations that are interpretable by a domain expert. The latter is critical to the acceptance and deployment of safety-critical systems. To address this gap, we propose grammar-based decision trees along with a learning algorithm. Our approach extends decision trees with a grammar framework for classifying heterogeneous time series data. A context-free grammar is used to derive decision expressions that are interpretable, application-specific, and support heterogeneous data types. In addition to classification, we show how grammar-based decision trees can also be used for categorization, which is a combination of clustering and generating interpretable explanations for each cluster. We apply grammar-based decision trees to a simulated aircraft encounter dataset and evaluate the performance of four variants of our learning algorithm. The best algorithm is used to analyze and categorize near mid-air collisions in the aircraft encounter dataset. We describe each discovered category in detail and discuss its relevance to aircraft collision avoidance.

  3. Minimum entropy density method for the time series analysis

    Science.gov (United States)

    Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae

    2009-01-01

    The entropy density is an intuitive and powerful concept for studying the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, defined as the scale at which the uncertainty is minimized and hence the pattern is most clearly revealed. The MEDM is applied to the financial time series of the Standard and Poor’s 500 index from February 1983 to April 2006. The temporal behavior of the structure scale is then obtained and analyzed in relation to the information delivery time and the efficient market hypothesis.

  4. Effects of spatially varying slip length on friction drag reduction in wall turbulence

    International Nuclear Information System (INIS)

    Hasegawa, Yosuke; Frohnapfel, Bettina; Kasagi, Nobuhide

    2011-01-01

    A series of direct numerical simulations has been performed of turbulent flow over hydrophobic surfaces, which are characterized by streamwise-periodic micro-grooves. By assuming that the size of the micro-grooves is much smaller than the typical length scale of near-wall turbulent structures, the dynamical boundary condition is expressed by a mobility tensor, which relates the slip velocity and the surface shear stress. Based on the derived mathematical relationship between the friction drag and the different dynamical contributions, it is shown how the turbulence contribution can be extracted and analyzed.

  5. Time series analysis of the developed financial markets' integration using visibility graphs

    Science.gov (United States)

    Zhuang, Enyu; Small, Michael; Feng, Gang

    2014-09-01

    A time series representing the developed financial markets' segmentation from 1973 to 2012 is studied. The time series reveals an obvious market integration trend. To further uncover the features of this time series, we divide it into seven windows and generate seven visibility graphs. The measuring capabilities of the visibility graphs provide means to quantitatively analyze the original time series. It is found that the important historical incidents that influenced market integration coincide with variations in the measured graphical node degree. Through the measure of neighborhood span, the frequencies of the historical incidents are disclosed. Moreover, it is also found that large "cycles" and significant noise in the time series are linked to large and small communities in the generated visibility graphs. For large cycles, how historical incidents significantly affected market integration is distinguished by density and compactness of the corresponding communities.

  6. Turbulent/non-turbulent interfaces detected in DNS of incompressible turbulent boundary layers

    Science.gov (United States)

    Watanabe, T.; Zhang, X.; Nagata, K.

    2018-03-01

    The turbulent/non-turbulent interface (TNTI) detected in direct numerical simulations is studied for incompressible, temporally developing turbulent boundary layers at a momentum-thickness Reynolds number Reθ ≈ 2000. The outer edge of the TNTI layer is detected as an isosurface of the vorticity magnitude, with the threshold determined from the dependence of the turbulent volume on the threshold level. The spanwise vorticity magnitude and a passive scalar are shown to be good markers of turbulent fluid: their conditional statistics on the distance from the outer edge of the TNTI layer are almost identical to those obtained with the vorticity magnitude. Significant differences are observed between the conditional statistics for the TNTI detected by the kinetic energy and by the vorticity magnitude. A widely used grid setting determined solely from the wall unit results in insufficient streamwise resolution in the outer region, whose influence is found in the geometry of the TNTI and in the vorticity jump across the TNTI layer. The present results suggest that the grid spacing should be similar in the streamwise and spanwise directions. Comparison of the TNTI layer among different flows requires appropriate normalization of the conditional statistics. Reference quantities of the turbulence near the TNTI layer are obtained by averaging over turbulent fluid in the intermittent region. The conditional statistics normalized by these reference turbulence characteristics show good quantitative agreement between the turbulent boundary layer and a planar jet when plotted against the distance from the outer edge of the TNTI layer divided by the Kolmogorov scale defined for turbulent fluid in the intermittent region.

  7. A cluster merging method for time series microarray with production values.

    Science.gov (United States)

    Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio

    2014-09-01

    A challenging task in time-course microarray data analysis is to cluster genes meaningfully by combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured at the same time points) and merge them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevance measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of the time series and the same shape-based algorithm.

  8. Constructing networks from a dynamical system perspective for multivariate nonlinear time series.

    Science.gov (United States)

    Nakamura, Tomomichi; Tanizawa, Toshihiro; Small, Michael

    2016-03-01

    We describe a method for constructing networks for multivariate nonlinear time series. We approach the interaction between the various scalar time series from a deterministic dynamical system perspective and provide a generic and algorithmic test for whether the interaction between two measured time series is statistically significant. The method can be applied even when the data exhibit no obvious qualitative similarity: a situation in which the naive method utilizing the cross correlation function directly cannot correctly identify connectivity. To establish the connectivity between nodes we apply the previously proposed small-shuffle surrogate (SSS) method, which can investigate whether there are correlation structures in short-term variabilities (irregular fluctuations) between two data sets from the viewpoint of deterministic dynamical systems. The procedure to construct networks based on this idea is composed of three steps: (i) each time series is considered as a basic node of a network, (ii) the SSS method is applied to verify the connectivity between each pair of time series taken from the whole multivariate time series, and (iii) the pair of nodes is connected with an undirected edge when the null hypothesis cannot be rejected. The network constructed by the proposed method indicates the intrinsic (essential) connectivity of the elements included in the system or the underlying (assumed) system. The method is demonstrated for numerical data sets generated by known systems and applied to several experimental time series.
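The small-shuffle surrogate at the heart of step (ii) can be sketched as follows (based on the published SSS idea: perturb each index with Gaussian noise of a chosen amplitude, then reorder the data by the perturbed indices; the parameter names and toy series are ours):

```python
import random

def small_shuffle(x, amplitude, rng):
    # perturb each index by Gaussian noise, then reorder the data by the
    # perturbed indices: short-term order is destroyed, global shape kept
    keys = [i + amplitude * rng.gauss(0, 1) for i in range(len(x))]
    order = sorted(range(len(x)), key=keys.__getitem__)
    return [x[i] for i in order]

x = [float(i % 7) for i in range(100)]  # toy series with short-term structure
s = small_shuffle(x, 1.0, random.Random(42))
```

Comparing a short-term correlation statistic of the data against an ensemble of such surrogates yields the significance test used to decide whether an undirected edge connects two nodes.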

  9. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012

    Science.gov (United States)

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    Background The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. Methods In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). Results The syphilis incidence rates increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed the highest goodness-of-fit results with the ARIMA(0,0,1)×(0,1,1) model. Conclusion Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance to the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis. PMID:26901682

  10. Reconstruction of tritium time series in precipitation

    International Nuclear Information System (INIS)

    Celle-Jeanton, H.; Gourcy, L.; Aggarwal, P.K.

    2002-01-01

    Tritium is commonly used in groundwater studies to calculate the recharge rate and to identify the presence of modern recharge. Knowledge of the 3H precipitation time series is therefore very important for the study of groundwater recharge. Rozanski and Araguas provided good information on the tritium content of precipitation at 180 stations of the GNIP network up to the end of 1987, but the record shows gaps in the measurements, either within a single chronicle or within a region (the Southern Hemisphere, for instance). It therefore seems essential to find a method to recalculate data for a region where no measurement is available. To solve this problem, we propose a method based on triangulation: it requires the 3H time series of three stations geographically surrounding a fourth station, for which the tritium input curve is to be reconstructed.

  11. Time Series, Stochastic Processes and Completeness of Quantum Theory

    International Nuclear Information System (INIS)

    Kupczynski, Marian

    2011-01-01

    Most physical experiments are described as repeated measurements of some random variables. Experimental data registered by on-line computers form time series of outcomes. The frequencies of different outcomes are compared with the probabilities provided by the algorithms of quantum theory (QT). In spite of the statistical character of QT's predictions, a claim was made that it provided the most complete description of the data and of the underlying physical phenomena. This claim could easily be rejected if some fine structures, averaged out in standard descriptive statistical analysis, were found in time series of experimental data. To search for these structures one has to use more subtle statistical tools developed to study time series produced by various stochastic processes. In this talk we review some of these tools. As an example we show how standard descriptive statistical analysis of the data is unable to reveal a fine structure in a simulated sample of an AR(2) stochastic process. We emphasize once again that the violation of Bell inequalities gives no information on the completeness or the non-locality of QT. The appropriate way to test the completeness of quantum theory is to search for fine structures in time series of the experimental data by means of purity tests or by studying the autocorrelation and partial autocorrelation functions.
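To make the AR(2) example concrete, here is a hedged sketch that simulates an AR(2) process and computes its sample autocorrelation function, the kind of tool the talk advocates (the coefficients, seed and sample size are arbitrary choices of ours):

```python
import random

def simulate_ar2(phi1, phi2, n, rng, sigma=1.0):
    # x_t = phi1 * x_{t-1} + phi2 * x_{t-2} + Gaussian noise
    x = [0.0, 0.0]
    for _ in range(n - 2):
        x.append(phi1 * x[-1] + phi2 * x[-2] + rng.gauss(0.0, sigma))
    return x

def autocorr(x, lag):
    # sample autocorrelation at the given lag
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / var

series = simulate_ar2(0.5, 0.3, 2000, random.Random(1))
```

For a stationary AR(2) with these coefficients, the Yule-Walker relations give a lag-1 autocorrelation of phi1/(1 - phi2) ≈ 0.71, a fine structure that simple frequency counts of outcomes would never reveal.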

  12. Efficient use of correlation entropy for analysing time series data

    Indian Academy of Sciences (India)

    Abstract. The correlation dimension D2 and the correlation entropy K2 are both important quantifiers in nonlinear time series analysis. However, D2 has been used more commonly than K2 as a discriminating measure. One reason for this is that D2 is a static measure that can be easily evaluated from a time series.
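Both quantifiers derive from the correlation sum, which can be sketched as follows (a minimal illustration: K2 is estimated from the ratio of correlation sums at successive embedding dimensions; the embedding parameters and the logistic-map test signal are our choices, not the paper's):

```python
import math

def correlation_sum(x, m, r, tau=1):
    # fraction of pairs of m-dimensional delay vectors closer than r
    # in the Chebyshev (maximum) norm
    pts = [x[i:i + m * tau:tau] for i in range(len(x) - (m - 1) * tau)]
    n = len(pts)
    close = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if max(abs(a - b) for a, b in zip(pts[i], pts[j])) < r
    )
    return 2 * close / (n * (n - 1))

def k2_estimate(x, m, r):
    # correlation entropy from successive embedding dimensions:
    # K2 ~ ln(C_m(r) / C_{m+1}(r))
    return math.log(correlation_sum(x, m, r) / correlation_sum(x, m + 1, r))

# chaotic test signal: the fully developed logistic map
x = [0.4]
for _ in range(500):
    x.append(4.0 * x[-1] * (1.0 - x[-1]))
x = x[100:]
```

D2 would instead be read off from how the correlation sum scales with r at fixed m, which is what makes it the easier, "static" quantity to evaluate.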

  13. Turbulent mass transfer in electrochemical systems: Turbulence for electrochemistry, electrochemistry for turbulence

    International Nuclear Information System (INIS)

    Vorotyntsev, M.A.

    1991-01-01

    Key problems of turbulent mass transfer at a solid wall are reviewed: closure problem for the concentration field, information on wall turbulence, applications of microelectrodes to study the structure of turbulence, correlation properties of current fluctuations. (author). 26 refs

  14. Financial time series analysis based on information categorization method

    Science.gov (United States)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper applies the information categorization method to analyze financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply it to quantify the similarity of different stock markets, and report similarity results for US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between the different stock markets varies across time periods, and that the similarity of the two stock markets became larger after these two crises. We also obtain similarity results for 10 stock indices in three regions, showing that the method can distinguish the markets of different regions via the resulting phylogenetic trees. The results show that satisfactory information can be extracted from financial markets by this method, which can be applied not only to physiologic time series but also to financial time series.

  15. Classification of biosensor time series using dynamic time warping: applications in screening cancer cells with characteristic biomarkers.

    Science.gov (United States)

    Rai, Shesh N; Trainor, Patrick J; Khosravi, Farhad; Kloecker, Goetz; Panchapakesan, Balaji

    2016-01-01

    The development of biosensors that produce time series data will facilitate improvements in biomedical diagnostics and in personalized medicine. The time series produced by these devices often contain characteristic features arising from biochemical interactions between the sample and the sensor. To use such characteristic features for determining sample class, similarity-based classifiers can be utilized. However, the construction of such classifiers is complicated by the variability in the time domains of such series, which renders traditional distance metrics such as Euclidean distance ineffective in distinguishing between biological variance and time domain variance. The dynamic time warping (DTW) algorithm is a sequence alignment algorithm that can be used to align two or more series to facilitate quantifying similarity. In this article, we evaluated the performance of DTW distance-based similarity classifiers for classifying time series that mimic electrical signals produced by nanotube biosensors. Simulation studies demonstrated the positive performance of such classifiers in discriminating between time series containing characteristic features that are obscured by noise in the intensity and time domains. We then applied a DTW distance-based k-nearest neighbors classifier to distinguish the presence/absence of a mesenchymal biomarker in cancer cells in buffy coats in a blinded test. Using a train-test approach, we found that the classifier had high sensitivity (90.9%) and specificity (81.8%) in differentiating between EpCAM-positive MCF7 cells spiked in buffy coats and those in plain buffy coats.
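For readers unfamiliar with DTW, the classic dynamic-programming recurrence can be written in a few lines (a textbook implementation, not the authors' code; the toy peak series are ours):

```python
def dtw_distance(a, b):
    # classic DTW: d[i][j] = local cost + min of the three predecessor
    # cells, allowing the time axes to stretch and compress
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# the same peak shifted by one step: Euclidean distance penalizes the
# shift, DTW warps it away entirely
s1 = [0, 0, 1, 3, 1, 0, 0, 0]
s2 = [0, 0, 0, 1, 3, 1, 0, 0]
print(dtw_distance(s1, s2))  # 0.0
```

This is exactly why DTW-based distances separate biological variance from time-domain variance: a feature that arrives slightly earlier or later contributes no distance, whereas a pointwise metric would penalize it heavily.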

  16. Finite mixture model applied in the analysis of a turbulent bistable flow on two parallel circular cylinders

    Energy Technology Data Exchange (ETDEWEB)

    Paula, A.V. de, E-mail: vagtinski@mecanica.ufrgs.br [PROMEC – Programa de Pós Graduação em Engenharia Mecânica, UFRGS – Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil); Möller, S.V., E-mail: svmoller@ufrgs.br [PROMEC – Programa de Pós Graduação em Engenharia Mecânica, UFRGS – Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil)

    2013-11-15

    This paper presents a study of the bistable phenomenon which occurs in the turbulent flow impinging on circular cylinders placed side-by-side. Time series of axial and transversal velocity obtained with the constant temperature hot wire anemometry technique in an aerodynamic channel are used as input data in a finite mixture model, to classify the observed data according to a family of probability density functions. Wavelet transforms are applied to analyze the unsteady turbulent signals. Results of flow visualization show that the flow is predominantly two-dimensional. A double-well energy model is suggested to describe the behavior of the bistable phenomenon in this case. -- Highlights: ► Bistable flow on two parallel cylinders is studied with hot wire anemometry as a first step toward applying the analysis to tube bank flow. ► The method of maximum likelihood estimation is applied to hot wire experimental series to classify the data according to PDF functions in a mixture model approach. ► Results show no evident correlation between the changes of flow modes with time. ► An energy model suggests the presence of more than two flow modes.

  17. A novel water quality data analysis framework based on time-series data mining.

    Science.gov (United States)

    Deng, Weihui; Wang, Guoyin

    2017-07-01

    The rapid development of time-series data mining provides an emerging method for water resource management research. In this paper, based on the time-series data mining methodology, we propose a novel and general analysis framework for water quality time-series data. It consists of two parts: implementation components and common tasks of time-series data mining in water quality data. In the first part, we propose to granulate the time series into several two-dimensional normal clouds and calculate the similarities at the granulated level. On the basis of the similarity matrix, the similarity search, anomaly detection, and pattern discovery tasks in the water quality time-series instance dataset can be easily implemented in the second part. We present a case study of this analysis framework on weekly dissolved oxygen (DO) time-series data collected from five monitoring stations on the upper reaches of the Yangtze River, China. The case study revealed the relationship between water quality in the mainstream and in the tributary, as well as the main changing patterns of DO. The experimental results show that the proposed analysis framework is a feasible and efficient method to mine the hidden and valuable knowledge from water quality historical time-series data. Copyright © 2017 Elsevier Ltd. All rights reserved.
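The similarity-matrix stage of such a framework can be sketched with a simple pairwise-correlation similarity, flagging as anomalous the station whose series is least similar to the rest (an illustrative stand-in for the paper's cloud-model similarity; function names are ours):

```python
import numpy as np

def similarity_matrix(series):
    """Pairwise similarity (absolute Pearson correlation) between equal-length series."""
    X = np.asarray(series, float)
    return np.abs(np.corrcoef(X))

def most_anomalous(series):
    """Index of the series least similar, on average, to all the others."""
    S = similarity_matrix(series)
    np.fill_diagonal(S, np.nan)          # ignore self-similarity
    return int(np.nanmean(S, axis=1).argmin())
```

With the similarity matrix in hand, similarity search reduces to sorting a row, and anomaly detection to ranking row means, exactly the "second part" tasks described above.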

  18. Three-dimensional simulation of the motion of a single particle under a simulated turbulent velocity field

    Science.gov (United States)

    Moreno-Casas, P. A.; Bombardelli, F. A.

    2015-12-01

    A 3D Lagrangian particle tracking model is coupled to a 3D channel velocity field to simulate the saltation motion of a single sediment particle moving in saltation mode. The turbulent field is a high-resolution three-dimensional velocity field that reproduces a bypass transition to turbulence on a flat plate due to free-stream turbulence passing above the plate. In order to reduce computational costs, a decoupled approach is used, i.e., the turbulent flow is simulated independently from the tracking model, and then used to feed the 3D Lagrangian particle model. The simulations are carried out using the point-particle approach. The particle tracking model contains three sub-models, namely, the particle free-flight, post-collision velocity, and bed representation sub-models. The free-flight sub-model considers the action of the following forces: submerged weight, non-linear drag, lift, virtual mass, Magnus and Basset forces. The model also includes the effect of particle angular velocity. The post-collision velocities are obtained by applying conservation of angular and linear momentum. The complete model was validated with experimental results from the literature within the sand range. Results for particle velocity time series and distribution of particle turbulent intensities are presented.
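The core of the free-flight sub-model is the integration of the particle equation of motion. A heavily simplified sketch, keeping only linear drag and gravity (the lift, virtual mass, Magnus and Basset terms of the full model are omitted, and the particle response time tau_p is an assumed parameter):

```python
import numpy as np

def advance_particle(pos, vel, fluid_vel, dt, tau_p=0.05, g=9.81):
    """One explicit-Euler step of a point particle with linear drag and gravity.

    tau_p is an assumed particle response time; the paper's non-linear drag,
    lift, virtual mass, Magnus and Basset forces are all omitted here.
    """
    drag = (fluid_vel - vel) / tau_p          # linear Stokes-type drag
    gravity = np.array([0.0, 0.0, -g])        # z points upward
    new_vel = vel + dt * (drag + gravity)
    new_pos = pos + dt * new_vel
    return new_pos, new_vel
```

In the decoupled approach described above, `fluid_vel` would be interpolated from the precomputed turbulent velocity field at the particle position at each step.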

  19. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series

    Directory of Open Access Journals (Sweden)

    Vingron Martin

    2011-08-01

    Full Text Available Abstract Background Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Results Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. Conclusions The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
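For intuition about what a time shift estimate is, a crude alternative to DTW-S is a plain cross-correlation scan over candidate lags (an illustrative sketch only; DTW-S additionally handles non-uniform warping, interpolated time points and significance estimation):

```python
import numpy as np

def estimate_shift(a, b, max_lag=10):
    """Lag (in samples) that best aligns the two series, by maximising
    the Pearson correlation of the overlapping parts."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)

    def corr_at(lag):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:lag], b[-lag:]
        return np.corrcoef(x, y)[0, 1]

    return max(range(-max_lag, max_lag + 1), key=corr_at)
```

A permutation test on the maximal correlation would then give a rough analogue of the FPR estimate that DTW-S computes from simulated series.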

  20. PhilDB: the time series database with built-in change logging

    Directory of Open Access Journals (Sweden)

    Andrew MacDonald

    2016-03-01

    Full Text Available PhilDB is an open-source time series database that supports storage of time series datasets that are dynamic; that is, it records updates to existing values in a log as they occur. PhilDB eases loading of data for the user by utilising an intelligent data write method. It preserves existing values during updates and abstracts the update complexity required to achieve logging of data value changes. It implements fast reads to make it practical to select data for analysis. Recent open-source systems have been developed to indefinitely store long-period high-resolution time series data without change logging. Unfortunately, such systems generally require a large initial installation investment before use because they are designed to operate over a cluster of servers to achieve high-performance writing of static data in real time. In essence, they have a ‘big data’ approach to storage and access. Other open-source projects for handling time series data that avoid the ‘big data’ approach are also relatively new and are complex or incomplete. None of these systems gracefully handle revision of existing data while tracking values that change. Unlike ‘big data’ solutions, PhilDB has been designed for single machine deployment on commodity hardware, reducing the barrier to deployment. PhilDB takes a unique approach to meta-data tracking: optional attribute attachment. This facilitates scaling the complexities of storing a wide variety of data. That is, it allows time series data to be loaded as time series instances with minimal initial meta-data, yet additional attributes can be created and attached to differentiate the time series instances when a wider variety of data is needed. PhilDB was written in Python, leveraging existing libraries. While some existing systems come close to meeting the needs PhilDB addresses, none cover all the needs at once. PhilDB was written to fill this gap in existing solutions. This paper explores existing time
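The write-with-change-logging idea can be sketched in a few lines (a hypothetical toy store for illustration; this is not PhilDB's actual API):

```python
import datetime

class LoggingStore:
    """Toy time series store that logs overwritten values, in the spirit of
    (but not the implementation of) PhilDB's change logging."""

    def __init__(self):
        self.data = {}   # timestamp -> current value
        self.log = []    # (timestamp, old value, new value, written-at)

    def write(self, ts, value):
        old = self.data.get(ts)
        if old is not None and old != value:
            # an existing value is being revised: record the change
            self.log.append((ts, old, value,
                             datetime.datetime.now(datetime.timezone.utc)))
        self.data[ts] = value
```

Re-writing an unchanged value leaves the log untouched, which is what makes repeated bulk loads of mostly-static data cheap while still tracking genuine revisions.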

  1. On the factors governing water vapor turbulence mixing in the convective boundary layer over land: Concept and data analysis technique using ground-based lidar measurements.

    Science.gov (United States)

    Pal, Sandip

    2016-06-01

    The convective boundary layer (CBL) turbulence is the key process for exchanging heat, momentum, moisture and trace gases between the earth's surface and the lower part of the troposphere. The turbulence parameterization of the CBL is a challenging but important component in numerical models. In particular, correct estimation of CBL turbulence features, parameterization, and the determination of the contribution of eddy diffusivity are important for simulating convection initiation and the dispersion of health-hazardous air pollutants and greenhouse gases. In general, measurements of higher-order moments of water vapor mixing ratio (q) variability yield unique estimates of turbulence in the CBL. Using the high-resolution lidar-derived profiles of q variance, third-order moment, and skewness, and analyzing concurrent profiles of vertical velocity, potential temperature, horizontal wind and time series of near-surface measurements of surface flux and meteorological parameters, a conceptual framework based on a bottom-up approach is proposed here for the first time for a robust characterization of the turbulent structure of the CBL over land, so that our understanding of the processes governing CBL q turbulence can be improved. Finally, principal component analyses will be applied to the lidar-derived long-term data sets of q turbulence statistics to identify the meteorological factors and the dominant physical mechanisms governing the CBL turbulence features. Copyright © 2016 Elsevier B.V. All rights reserved.
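The higher-order moments mentioned above are straightforward to compute from a fluctuating series (a generic sketch of moment estimation, not the lidar processing chain itself):

```python
import numpy as np

def turbulence_moments(q):
    """Variance, third-order central moment and skewness of a fluctuating series."""
    q = np.asarray(q, float)
    qp = q - q.mean()              # fluctuation about the mean, q' = q - <q>
    var = np.mean(qp ** 2)         # <q'^2>
    m3 = np.mean(qp ** 3)          # <q'^3>
    skew = m3 / var ** 1.5         # <q'^3> / <q'^2>^(3/2)
    return var, m3, skew
```

Positive q skewness near the CBL top, for instance, is commonly interpreted as narrow moist updrafts penetrating drier air, which is why the third-order moment carries physical information beyond the variance.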

  2. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Science.gov (United States)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
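The building block of Markov chain clustering, the likelihood of a categorical sequence under a first-order transition matrix, can be sketched as follows (an illustrative maximum-likelihood sketch, not the paper's Bayesian MCMC scheme; the add-one smoothing is our assumption):

```python
import numpy as np

def transition_matrix(seq, n_states):
    """First-order transition matrix estimated from a state sequence,
    with add-one (Laplace) smoothing to avoid zero probabilities."""
    C = np.ones((n_states, n_states))
    for s, t in zip(seq[:-1], seq[1:]):
        C[s, t] += 1
    return C / C.sum(axis=1, keepdims=True)

def loglik(seq, P):
    """Log-likelihood of a sequence under transition matrix P."""
    return sum(np.log(P[s, t]) for s, t in zip(seq[:-1], seq[1:]))
```

Clustering then amounts to assigning each series to the group whose transition matrix gives it the highest likelihood, with the multinomial logit model of the paper additionally letting covariates shift the prior group probabilities.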

  3. Time Series Discord Detection in Medical Data using a Parallel Relational Database

    Energy Technology Data Exchange (ETDEWEB)

    Woodbridge, Diane; Rintoul, Mark Daniel; Wilson, Andrew T.; Goldstein, Richard

    2015-10-01

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Because high-frequency medical sensors produce huge volumes of data, storing and processing continuous medical data is an emerging big-data problem. Detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is a subsequence that has the maximum distance to the rest of the time series subsequences, meaning that it exhibits abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the discord detection algorithm takes each possible subsequence and calculates its distance to the nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for better time efficiency. The study results showed efficient data loading, decoding and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
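The brute-force discord search described above can be sketched directly (an O(n²) illustrative sketch of the classic definition; the paper's heuristic trie-ordered version and the parallel DBMS implementation are not reproduced here):

```python
import numpy as np

def find_discord(series, w):
    """Brute-force time series discord: the subsequence of length w whose
    distance to its nearest non-self match is largest."""
    x = np.asarray(series, float)
    n = len(x) - w + 1
    best_pos, best_dist = -1, -np.inf
    for i in range(n):
        nearest = np.inf
        for j in range(n):
            if abs(i - j) < w:          # skip trivial self-matches
                continue
            d = np.linalg.norm(x[i:i + w] - x[j:j + w])
            nearest = min(nearest, d)
        if nearest > best_dist:
            best_pos, best_dist = i, nearest
    return best_pos, best_dist
```

On periodic waveform data every normal beat has a close match one period away, so the subsequence containing an anomaly is precisely the one whose nearest non-self match is far.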

  4. Estimation of system parameters in discrete dynamical systems from time series

    International Nuclear Information System (INIS)

    Palaniyandi, P.; Lakshmanan, M.

    2005-01-01

    We propose a simple method to estimate the parameters involved in discrete dynamical systems from time series. The method is based on the concept of controlling chaos by constant feedback. The major advantages of the method are that it needs a minimal number of time series data (either vector or scalar) and is applicable to dynamical systems of any dimension. The method also works extremely well even in the presence of noise in the time series. The method is specifically illustrated by means of the logistic and Hénon maps.
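To make the problem concrete for the logistic map, note that when the map form x_{n+1} = a·x_n·(1 − x_n) is known, the parameter can already be recovered by plain least squares (an illustrative regression sketch; the paper's method instead uses constant-feedback control and also handles noise and higher dimensions):

```python
import numpy as np

def estimate_logistic_a(x):
    """Least-squares estimate of a in x_{n+1} = a * x_n * (1 - x_n)
    from a scalar time series x."""
    x = np.asarray(x, float)
    y = x[:-1] * (1 - x[:-1])          # regressor: x_n * (1 - x_n)
    return float(np.dot(x[1:], y) / np.dot(y, y))
```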

  5. Evaluation of nonlinearity and validity of nonlinear modeling for complex time series.

    Science.gov (United States)

    Suzuki, Tomoya; Ikeguchi, Tohru; Suzuki, Masuo

    2007-10-01

    Even if an original time series exhibits nonlinearity, it is not always effective to approximate the time series by a nonlinear model because such nonlinear models have high complexity from the viewpoint of information criteria. Therefore, we propose two measures to evaluate both the nonlinearity of a time series and validity of nonlinear modeling applied to it by nonlinear predictability and information criteria. Through numerical simulations, we confirm that the proposed measures effectively detect the nonlinearity of an observed time series and evaluate the validity of the nonlinear model. The measures are also robust against observational noises. We also analyze some real time series: the difference of the number of chickenpox and measles patients, the number of sunspots, five Japanese vowels, and the chaotic laser. We can confirm that the nonlinear model is effective for the Japanese vowel /a/, the difference of the number of measles patients, and the chaotic laser.

  6. Price setting in turbulent times

    DEFF Research Database (Denmark)

    Ólafsson, Tjörvi; Pétursdóttir, Ásgerdur; Vignisdóttir, Karen Á.

    This price setting survey among Icelandic firms aims to make two contributions to the literature. First, it studies price setting in an advanced economy within a more turbulent macroeconomic environment than has previously been done. The results indicate that price adjustments are to a larger extent driven by exchange rate fluctuations than in most other advanced countries. The median Icelandic firm reviews its prices every four months and changes them every six months. The main sources of price rigidity and the most commonly used price setting methods are the same as in most other countries. A second contribution to the literature is our analysis of the nexus between price setting and exchange rate movements, a topic that has attracted surprisingly limited attention in this survey-based literature. A novel aspect of our approach is to base our analysis on a categorisation of firms…

  7. Plasma Soliton Turbulence and Statistical Mechanics

    International Nuclear Information System (INIS)

    Treumann, R.A.; Pottelette, R.

    1999-01-01

    Collisionless kinetic plasma turbulence is described approximately in terms of a superposition of non-interacting solitary waves. We discuss the relevance of such a description under astrophysical conditions. Several types of solitary waves may be of interest in this relation as generators of turbulence and turbulent transport. A consistent theory of turbulence can be given only in a few particular cases when the description can be reduced to the Korteweg-de Vries equation or some other simple equation like the Kadomtsev-Petviashvili equation. It turns out that the soliton turbulence is usually energetically harder than the ordinary weakly turbulent plasma description. This implies that interaction of particles with such kinds of turbulence can lead to stronger acceleration than in ordinary turbulence. However, the description in our model is only classical and non-relativistic. Transport in solitary turbulence is most important for drift wave turbulence. Such waves form solitary drift wave vortices which may provide cross-field transport. A more general discussion is given on transport. In a model of Levy flight trapping of particles in solitons (or solitary turbulence) one finds that the residence time of particles in the region of turbulence may be described by a generalized Lorentzian probability distribution. It is shown that under collisionless equilibrium conditions far away from thermal equilibrium such distributions are natural equilibrium distributions. A consistent thermodynamic description of such media can be given in terms of a generalized Lorentzian statistical mechanics and thermodynamics. (author)

  8. A Framework and Algorithms for Multivariate Time Series Analytics (MTSA): Learning, Monitoring, and Recommendation

    Science.gov (United States)

    Ngan, Chun-Kit

    2013-01-01

    Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…

  9. On the ""early-time"" evolution of variables relevant to turbulence models for the Rayleigh-Taylor instability

    Energy Technology Data Exchange (ETDEWEB)

    Rollin, Bertrand [Los Alamos National Laboratory; Andrews, Malcolm J [Los Alamos National Laboratory

    2010-01-01

    We present our progress toward setting initial conditions in variable density turbulence models. In particular, we concentrate our efforts on the BHR turbulence model for the turbulent Rayleigh-Taylor instability. Our approach is to predict profiles of the relevant variables before the fully turbulent regime and use them as initial conditions for the turbulence model. We use an idealized model of mixing between two interpenetrating fluids to define the initial profiles for the turbulence model variables. Velocities and volume fractions used in the idealized mixing model are obtained, respectively, from a set of ordinary differential equations modeling the growth of the Rayleigh-Taylor instability and from an idealization of the density profile in the mixing layer. A comparison between the predicted profiles for the turbulence model variables and profiles of the variables obtained from three-dimensional, low-Atwood-number simulations shows reasonable agreement.

  10. L-mode validation studies of gyrokinetic turbulence simulations via multiscale and multifield turbulence measurements on the DIII-D tokamak

    International Nuclear Information System (INIS)

    Rhodes, T.L.; Doyle, E.J.; Hillesheim, J.C.; Peebles, W.A.; Schmitz, L.; Holland, C.; Smith, S.P.; Burrell, K.H.; Candy, J.; DeBoo, J.C.; Kinsey, J.E.; Petty, C.C.; Prater, R.; Staebler, G.M.; Waltz, R.E.; White, A.E.; McKee, G.R.; Mikkelsen, D.; Parker, S.; Chen, Y.

    2011-01-01

    A series of carefully designed experiments on DIII-D have taken advantage of a broad set of turbulence and profile diagnostics to rigorously test gyrokinetic turbulence simulations. In this paper the goals, tools and experiments performed in these validation studies are reviewed and specific examples presented. It is found that predictions of transport and fluctuation levels in the mid-core region (0.4 < ρ < 0.75) are in better agreement with experiment than those in the outer region (ρ ≥ 0.75) where edge coupling effects may become increasingly important and multiscale simulations may also be necessary. Validation studies such as these are crucial in developing confidence in a first-principles based predictive capability for ITER.

  11. Strategic decisions in turbulent times: lessons from the energy industry

    DEFF Research Database (Denmark)

    Giones, Ferran; Brem, Alexander; Berger, Andreas

    2019-01-01

    Most of the firms currently in the S&P 500 will probably not be part of this list in 15 years. In times of great uncertainty managers are called to make the right choices in their strategy, they are asked to preserve the core businesses, and to prepare their organizations for an unclear future. How can managers make the right choices when the whole industry is under transformation? In this light, we explore how the popular VUCA framework can help to make sense of turbulent contexts and drive the decision-making of managers. We study the case of the energy industry, where, in a short period of time, traditional business models eroded, and dominant players lost their positions in the industry. Based on personal interviews with the CEOs from RWE (Germany) and NRG (USA) we analyze how they led the transformation of their organizations. We get immersed in their decision-making processes…

  12. Modeling vector nonlinear time series using POLYMARS

    NARCIS (Netherlands)

    de Gooijer, J.G.; Ray, B.K.

    2003-01-01

    A modified multivariate adaptive regression splines method for modeling vector nonlinear time series is investigated. The method results in models that can capture certain types of vector self-exciting threshold autoregressive behavior, as well as provide good predictions for more general vector

  13. Forecasting with periodic autoregressive time series models

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1999-01-01

    textabstractThis paper is concerned with forecasting univariate seasonal time series data using periodic autoregressive models. We show how one should account for unit roots and deterministic terms when generating out-of-sample forecasts. We illustrate the models for various quarterly UK consumption

  14. vector bilinear autoregressive time series model and its superiority

    African Journals Online (AJOL)

    KEYWORDS: Linear time series, Autoregressive process, Autocorrelation function, Partial autocorrelation function, Vector time series. … important result on matrix algebra with respect to the spectral … application to covariance analysis of super-…

  15. Correlation measure to detect time series distances, whence economy globalization

    Science.gov (United States)

    Miśkiewicz, Janusz; Ausloos, Marcel

    2008-11-01

    An instantaneous time series distance is defined through the equal-time correlation coefficient. The idea is applied to the Gross Domestic Product (GDP) yearly increments of 21 rich countries between 1950 and 2005 in order to test the process of economic globalisation. Some data discussion is first presented to decide what (EKS, GK, or derived) GDP series should be studied. Distances are then calculated from the correlation coefficient values between pairs of series. The role of time averaging of the distances over finite-size windows is discussed. Three network structures are next constructed based on the hierarchy of distances. It is shown that the mean distance between the most developed countries on several networks actually decreases in time, which we consider a proof of globalization. An empirical law is found for the evolution after 1990, similar to that found in flux creep. The optimal observation time window size is found to be ≃15 years.
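A standard way to turn an equal-time correlation coefficient into a distance is d = sqrt(2·(1 − corr)), which is 0 for perfectly correlated series and 2 for perfectly anti-correlated ones (an illustrative sketch of this common construction; whether the paper uses exactly this normalisation is our assumption):

```python
import numpy as np

def corr_distance(a, b):
    """Distance between two series via the equal-time correlation coefficient:
    d = sqrt(2 * (1 - corr)); d = 0 when corr = +1, d = 2 when corr = -1."""
    c = np.corrcoef(a, b)[0, 1]
    return float(np.sqrt(2.0 * (1.0 - c)))
```

A shrinking mean pairwise distance among GDP increment series, computed this way over moving windows, is the quantitative signature of globalisation the abstract refers to.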

  16. Time Series Analysis Using Geometric Template Matching.

    Science.gov (United States)

    Frank, Jordan; Mannor, Shie; Pineau, Joelle; Precup, Doina

    2013-03-01

    We present a novel framework for analyzing univariate time series data. At the heart of the approach is a versatile algorithm for measuring the similarity of two segments of time series called geometric template matching (GeTeM). First, we use GeTeM to compute a similarity measure for clustering and nearest-neighbor classification. Next, we present a semi-supervised learning algorithm that uses the similarity measure with hierarchical clustering in order to improve classification performance when unlabeled training data are available. Finally, we present a boosting framework called TDEBOOST, which uses an ensemble of GeTeM classifiers. TDEBOOST augments the traditional boosting approach with an additional step in which the features used as inputs to the classifier are adapted at each step to improve the training error. We empirically evaluate the proposed approaches on several datasets, such as accelerometer data collected from wearable sensors and ECG data.

  17. Saturation of ion-acoustic turbulence

    International Nuclear Information System (INIS)

    Bychenkov, V.Yu.; Gradov, O.M.

    1985-01-01

    The time evolution of ion-acoustic turbulence is investigated taking into consideration both the scattering of electrons and the induced scattering of waves by the ions. The growth rate of the ion-acoustic turbulence is studied as a function of the wavenumber, including the long-wave ion sound excitations. It is shown that the relaxation of the ion-acoustic turbulence leads to quasistationary noise distributions, which are the products of distributions in wavenumber and in angle. The spectra conform to the stationary theory. (D.Gy.)

  18. Magnetohydrodynamic turbulence

    CERN Document Server

    Biskamp, Dieter

    2003-01-01

    This book presents an introduction to, and modern account of, magnetohydrodynamic (MHD) turbulence, an active field both in general turbulence theory and in various areas of astrophysics. The book starts by introducing the MHD equations, certain useful approximations and the transition to turbulence. The second part of the book covers incompressible MHD turbulence, the macroscopic aspects connected with the different self-organization processes, the phenomenology of the turbulence spectra, two-point closure theory, and intermittency. The third considers two-dimensional turbulence and compressi

  19. Visible imaging of edge turbulence in NSTX

    International Nuclear Information System (INIS)

    Zweben, S.; Maqueda, R.; Hill, K.; Johnson, D.

    2000-01-01

    Edge plasma turbulence in tokamaks and stellarators is believed to cause the radial heat and particle flux across the separatrix and into the scrape-off-layers of these devices. This paper describes initial measurements of 2-D space-time structure of the edge density turbulence made using a visible imaging diagnostic in the National Spherical Torus Experiment (NSTX). The structure of the edge turbulence is most clearly visible using a method of gas puff imaging to locally illuminate the edge density turbulence

  20. Visible imaging of edge turbulence in NSTX

    International Nuclear Information System (INIS)

    S. Zweben; R. Maqueda; K. Hill; D. Johnson; S. Kaye; H. Kugel; F. Levinton; R. Maingi; L. Roquemore; S. Sabbagh; G. Wurden

    2000-01-01

    Edge plasma turbulence in tokamaks and stellarators is believed to cause the radial heat and particle flux across the separatrix and into the scrape-off-layers of these devices. This paper describes initial measurements of 2-D space-time structure of the edge density turbulence made using a visible imaging diagnostic in the National Spherical Torus Experiment (NSTX). The structure of the edge turbulence is most clearly visible using a method of ''gas puff imaging'' to locally illuminate the edge density turbulence

  1. On-line analysis of reactor noise using time-series analysis

    International Nuclear Information System (INIS)

    McGevna, V.G.

    1981-10-01

    A method to allow use of time series analysis for on-line noise analysis has been developed. On-line analysis of noise in nuclear power reactors has been limited primarily to spectral analysis and related frequency domain techniques. Time series analysis has many distinct advantages over spectral analysis in the automated processing of reactor noise. However, fitting an autoregressive-moving average (ARMA) model to time series data involves non-linear least squares estimation. Unless a high speed, general purpose computer is available, the calculations become too time consuming for on-line applications. To eliminate this problem, a special purpose algorithm was developed for fitting ARMA models. While it is based on a combination of steepest descent and Taylor series linearization, properties of the ARMA model are used so that the auto- and cross-correlation functions can be used to eliminate the need for estimating derivatives. The number of calculations per iteration varies linearly…
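The role of the autocorrelation function in avoiding derivative estimation can be illustrated with the pure-AR special case, where the Yule-Walker equations give the model coefficients directly from the sample autocovariances (an illustrative AR(p) sketch; the paper's algorithm handles the full ARMA case):

```python
import numpy as np

def yule_walker_ar(x, p):
    """Fit AR(p) coefficients from the sample autocovariance sequence
    by solving the Yule-Walker equations R a = r[1:p+1]."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # biased sample autocovariances r_0 .. r_p
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    # Toeplitz matrix R[i, j] = r_|i-j|
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])
```

Because only the lagged correlations enter, this fit is linear algebra on precomputed statistics, exactly the kind of fixed, cheap per-iteration cost that makes on-line use feasible.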

  2. Improving GNSS time series for volcano monitoring: application to Canary Islands (Spain)

    Science.gov (United States)

    García-Cañada, Laura; Sevilla, Miguel J.; Pereda de Pablo, Jorge; Domínguez Cerdeña, Itahiza

    2017-04-01

    The number of permanent GNSS stations has increased significantly in recent years for different geodetic applications such as volcano monitoring, which require a high precision. Recently we have started to have coordinate time series long enough that we can apply different analyses and filters that allow us to improve the GNSS coordinate results. Following this idea we have processed data from GNSS permanent stations used by the Spanish Instituto Geográfico Nacional (IGN) for volcano monitoring in the Canary Islands to obtain time series by the double-difference processing method with Bernese v5.0 for the period 2007-2014. We have identified the characteristics of these time series and obtained models to estimate velocities with greater accuracy and more realistic uncertainties. In order to improve the results we have used two kinds of filters. The first, a spatial filter, has been computed using the series of residuals of all stations in the Canary Islands without an anomalous behaviour, after removing a linear trend. This allows us to apply the filter to all sets of coordinates of the permanent stations, reducing their dispersion. The second filter takes account of the temporal correlation in the coordinate time series for each station individually. A study of how the estimated velocity evolves with series length has been carried out, and it demonstrated the need for time series of at least four years. Therefore, in those stations with more than four years of data, we calculated the velocity and the characteristic parameters in order to obtain time series of residuals. This methodology has been applied to the GNSS data network in El Hierro (Canary Islands) during the 2011-2012 eruption and the subsequent magmatic intrusions (2012-2014). The results show that in the new series it is easier to detect anomalous behaviours in the coordinates, making them more useful for detecting crustal deformations in volcano monitoring.
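The spatial filter described above amounts to subtracting, at each epoch, the common mode of the detrended network residuals (a minimal sketch assuming equal station weights and already-detrended input; the operational filter excludes anomalous stations first):

```python
import numpy as np

def spatial_filter(residuals):
    """Remove the network common mode from detrended coordinate residuals:
    subtract at each epoch the mean residual over all stations."""
    R = np.asarray(residuals, float)     # shape (n_stations, n_epochs)
    common_mode = R.mean(axis=0)         # shared regional signal per epoch
    return R - common_mode
```

Signals shared by the whole network (orbit, reference-frame and large-scale atmospheric errors) cancel, while station-specific deformation, the quantity of interest for volcano monitoring, is preserved.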

  3. Mapping Crop Cycles in China Using MODIS-EVI Time Series

    Directory of Open Access Journals (Sweden)

    Le Li

    2014-03-01

    Full Text Available As the Earth’s population continues to grow and demand for food increases, the need for improved and timely information related to the properties and dynamics of global agricultural systems is becoming increasingly important. Global land cover maps derived from satellite data provide indispensable information regarding the geographic distribution and areal extent of global croplands. However, land use information, such as cropping intensity (defined here as the number of cropping cycles per year), is not routinely available over large areas because mapping this information from remote sensing is challenging. In this study, we present a simple but efficient algorithm for automated mapping of cropping intensity based on data from NASA’s (National Aeronautics and Space Administration) MODerate Resolution Imaging Spectroradiometer (MODIS). The proposed algorithm first applies an adaptive Savitzky-Golay filter to smooth Enhanced Vegetation Index (EVI) time series derived from MODIS surface reflectance data. It then uses an iterative moving-window methodology to identify cropping cycles from the smoothed EVI time series. Comparison of results from our algorithm with national survey data at both the provincial and prefectural level in China shows that the algorithm provides estimates of gross sown area that agree well with inventory data. Accuracy assessment comparing visually interpreted time series with algorithm results for a random sample of agricultural areas in China indicates an overall accuracy of 91.0% for three classes defined based on the number of cycles observed in EVI time series. The algorithm therefore appears to provide a straightforward and efficient method for mapping cropping intensity from MODIS time series data.
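The smooth-then-count idea can be sketched as follows; here a plain moving-average filter and a crude prominence threshold stand in for the paper's adaptive Savitzky-Golay filter and iterative moving-window cycle detection (the window size and threshold are our assumptions):

```python
import numpy as np

def count_cycles(evi, smooth_win=5, min_prominence=0.2):
    """Count cropping cycles as prominent local maxima in a smoothed EVI series.

    Moving-average smoothing stands in for the adaptive Savitzky-Golay filter;
    smooth_win and min_prominence are illustrative parameter choices.
    """
    k = np.ones(smooth_win) / smooth_win
    s = np.convolve(evi, k, mode="same")        # smoothed EVI
    cycles = 0
    for i in range(1, len(s) - 1):
        is_peak = s[i] > s[i - 1] and s[i] >= s[i + 1]
        if is_peak and s[i] - s.min() > min_prominence:
            cycles += 1
    return cycles
```

A pixel whose annual EVI trace shows two prominent green-up peaks would thus be labelled double-cropped, which is the per-pixel quantity aggregated to gross sown area in the comparison with survey data.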

  4. Modelling and analysis of turbulent datasets using Auto Regressive Moving Average processes

    International Nuclear Information System (INIS)

    Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele; Saint-Michel, Brice; Herbert, Éric; Cortet, Pierre-Philippe

    2014-01-01

    We introduce a novel way to extract information from turbulent datasets by applying an Auto Regressive Moving Average (ARMA) statistical analysis. Such analysis goes well beyond the analysis of the mean flow and of the fluctuations and links the behavior of the recorded time series to a discrete version of a stochastic differential equation which is able to describe the correlation structure of the dataset. We introduce a new index Υ that measures the difference between the resulting analysis and the Obukhov model of turbulence, the simplest stochastic model reproducing both the Richardson law and the Kolmogorov spectrum. We test the method on datasets measured in a von Kármán swirling flow experiment. We find that the ARMA analysis is well correlated with spatial structures of the flow and can discriminate between two different flows with comparable mean velocities, obtained by changing the forcing. Moreover, we show that Υ is highest in regions where shear-layer vortices are present, thereby establishing a link between deviations from the Kolmogorov model and coherent structures. These deviations are consistent with the ones observed by computing the Hurst exponents for the same time series. We show that some salient features of the analysis are preserved when considering global instead of local observables. Finally, we analyze flow configurations with multistability features, where the ARMA technique is efficient in discriminating the different stability branches of the system.
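
The core of an ARMA-style analysis is fitting lagged-regression coefficients to the recorded series. A minimal sketch using a pure-AR special case fitted by least squares (the paper's Υ index and full ARMA machinery are not reproduced here):

```python
import numpy as np

# Simulate an AR(2) process x[n] = a1*x[n-1] + a2*x[n-2] + eps[n],
# then recover the coefficients by ordinary least squares on lagged values.
rng = np.random.default_rng(7)
a1, a2 = 0.5, -0.3
n = 20000
eps = rng.normal(0, 1, n)
x = np.zeros(n)
for i in range(2, n):
    x[i] = a1 * x[i - 1] + a2 * x[i - 2] + eps[i]

X = np.column_stack([x[1:-1], x[:-2]])   # lag-1 and lag-2 regressors
y = x[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)   # close to (0.5, -0.3)
```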

  5. Spectral Estimation of UV-Vis Absorbance Time Series for Water Quality Monitoring

    Directory of Open Access Journals (Sweden)

    Leonardo Plazas-Nossa

    2017-05-01

    Context: Signals recorded as multivariate time series by UV-Vis absorbance sensors installed in urban sewer systems can be non-stationary, complicating the analysis of water quality monitoring. This work proposes to perform spectral estimation using the Box-Cox transformation and differentiation in order to obtain multivariate time series that are stationary in the wide sense. Additionally, Principal Component Analysis (PCA) is applied to reduce their dimensionality. Method: Three different UV-Vis absorbance time series from different Colombian locations were studied: (i) El-Salitre Wastewater Treatment Plant (WWTP) in Bogotá; (ii) Gibraltar Pumping Station (GPS) in Bogotá; and (iii) San-Fernando WWTP in Itagüí. Each UV-Vis absorbance time series had an equal number of samples (5705). The estimation of the spectral power density is obtained using the average of modified periodograms with a rectangular window and an overlap of 50%, with the 20 most important harmonics from the Discrete Fourier Transform (DFT) and the Inverse Fast Fourier Transform (IFFT). Results: Dimensionality reduction of the absorbance time series using PCA resulted in 6, 8 and 7 principal components for the respective study sites, altogether explaining more than 97% of their variability. Differences below 30% were obtained in the UV range for the three study sites, while for the visible range the maximum differences obtained were: (i) 35% for El-Salitre WWTP; (ii) 61% for GPS; and (iii) 75% for San-Fernando WWTP. Conclusions: The Box-Cox transformation and the differentiation process applied to the UV-Vis absorbance time series for the study sites (El-Salitre, GPS and San-Fernando) made it possible to reduce variance and eliminate the trend of the time series. Pre-processing of UV-Vis absorbance time series is recommended to detect and remove outliers before applying the proposed process for spectral estimation. Language: Spanish.
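
The stationarization pipeline described above (Box-Cox, then differencing, then averaged modified periodograms with a rectangular window and 50% overlap) can be sketched with SciPy; the synthetic series and segment length below are illustrative assumptions:

```python
import numpy as np
from scipy import stats, signal

# A non-stationary, strictly positive series: exponential trend with
# multiplicative noise (stand-in for an absorbance record).
rng = np.random.default_rng(3)
n = 2048
x = np.exp(0.001 * np.arange(n) + rng.normal(0, 0.1, n))

# 1) Box-Cox transform to stabilize the variance (input must be > 0)
xt, lam = stats.boxcox(x)
# 2) First differencing to remove the remaining trend
xd = np.diff(xt)
# 3) Averaged modified periodograms, rectangular window, 50% overlap
freqs, psd = signal.welch(xd, window="boxcar", nperseg=256, noverlap=128)
print(lam, psd.shape)
```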

  6. Toward automatic time-series forecasting using neural networks.

    Science.gov (United States)

    Yan, Weizhong

    2012-07-01

    Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, a consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, which does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition. It was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.
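
The GRNN at the heart of the scheme is a kernel-weighted average of training targets with a single smoothing parameter. A minimal numpy sketch on a delay-embedded toy series (the embedding, σ value, and data are illustrative assumptions, not the paper's fused multi-GRNN design):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """GRNN prediction: a Gaussian-kernel weighted average of training
    targets; sigma is the network's single smoothing (design) parameter."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

# One-step-ahead forecasting of a toy series via a 2-lag delay embedding
t = np.linspace(0, 4 * np.pi, 200)
s = np.sin(t)
X = np.column_stack([s[:-2], s[1:-1]])   # inputs: (x[t-1], x[t])
y = s[2:]                                # target: x[t+1]
pred = grnn_predict(X, y, X, sigma=0.05)
err = np.abs(pred - y).mean()
print(err)
```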

  7. Toy models of developed turbulence

    Directory of Open Access Journals (Sweden)

    M.Hnatich

    2005-01-01

    We have investigated the advection of a passive scalar quantity by an incompressible helical turbulent flow within the framework of the extended Kraichnan model. Turbulent fluctuations of the velocity field are assumed to have Gaussian statistics with zero mean and prescribed noise with finite time correlation. Actual calculations have been done up to the two-loop approximation within the framework of the field-theoretic renormalization group approach. It turns out that space parity violation (helicity) of the turbulent environment does not affect anomalous scaling, which is a peculiar attribute of the corresponding model without helicity. However, the stability of the asymptotic regimes in which anomalous scaling takes place strongly depends on the amount of helicity. Moreover, helicity gives rise to turbulent diffusivity, which has been calculated in the one-loop approximation.

  8. Multi-granular trend detection for time-series analysis

    NARCIS (Netherlands)

    van Goethem, A.I.; Staals, F.; Löffler, M.; Dykes, J.; Speckmann, B.

    2017-01-01

    Time series (such as stock prices) and ensembles (such as model runs for weather forecasts) are two important types of one-dimensional time-varying data. Such data is readily available in large quantities but visual analysis of the raw data quickly becomes infeasible, even for moderately sized data

  9. The Timeseries Toolbox - A Web Application to Enable Accessible, Reproducible Time Series Analysis

    Science.gov (United States)

    Veatch, W.; Friedman, D.; Baker, B.; Mueller, C.

    2017-12-01

    The vast majority of data analyzed by climate researchers are repeated observations of physical processes, i.e., time series data. Such data lend themselves to a common set of statistical techniques and models designed to determine trends and variability (e.g., seasonality) of these repeated observations. Often, these same techniques and models can be applied to a wide variety of different time series data. The Timeseries Toolbox is a web application designed to standardize and streamline these common approaches to time series analysis and modeling, with particular attention to hydrologic time series used in climate preparedness and resilience planning and design by the U. S. Army Corps of Engineers. The application performs much of the pre-processing of time series data necessary for more complex techniques (e.g., interpolation, aggregation). With this tool, users can upload any dataset that conforms to a standard template and immediately begin applying these techniques to analyze their time series data.
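
The pre-processing steps mentioned (interpolation and aggregation on a standard date/value template) can be sketched with pandas; the column names and template below are hypothetical, not the Toolbox's actual schema:

```python
import numpy as np
import pandas as pd

# Hypothetical "standard template": a date column and a value column,
# with two missing days in the record.
dates = pd.date_range("2020-01-01", periods=10, freq="D").delete([3, 7])
df = pd.DataFrame({"date": dates, "stage_ft": np.linspace(1.0, 2.0, 8)})

ts = df.set_index("date")["stage_ft"]
ts = ts.asfreq("D").interpolate(method="time")   # fill the missing days
monthly = ts.resample("MS").mean()               # aggregate to monthly means
print(len(ts), float(monthly.iloc[0]))
```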

  10. Optimal transformations for categorical autoregressive time series

    NARCIS (Netherlands)

    Buuren, S. van

    1996-01-01

    This paper describes a method for finding optimal transformations for analyzing time series by autoregressive models. 'Optimal' implies that the agreement between the autoregressive model and the transformed data is maximal. Such transformations help 1) to increase the model fit, and 2) to analyze

  11. Kelvin-Helmholtz instability: the "atom" of geophysical turbulence?

    Science.gov (United States)

    Smyth, William

    2017-11-01

    Observations of small-scale turbulence in Earth's atmosphere and oceans have most commonly been interpreted in terms of the Kolmogorov theory of isotropic turbulence, despite the fact that the observed turbulence is significantly anisotropic due to density stratification and sheared large-scale flows. I will describe an alternative picture in which turbulence consists of distinct events that occur sporadically in space and time. The simplest model for an individual event is the "Kelvin-Helmholtz (KH) ansatz", in which turbulence relieves the dynamic instability of a localized shear layer. I will summarize evidence that the KH ansatz is a valid description of observed turbulence events, using microstructure measurements from the equatorial Pacific Ocean as an example. While the KH ansatz has been under study for many decades and is reasonably well understood, the bigger picture is much less clear. How are KH events distributed in space and time? How do different events interact with each other? I will describe some tentative steps toward a more thorough understanding.

  12. Satellite Image Time Series Decomposition Based on EEMD

    Directory of Open Access Journals (Sweden)

    Yun-long Kong

    2015-11-01

    Satellite Image Time Series (SITS) have recently been of great interest due to the emerging remote sensing capabilities for Earth observation. Trend and seasonal components are two crucial elements of SITS. In this paper, a novel framework for SITS decomposition based on Ensemble Empirical Mode Decomposition (EEMD) is proposed. EEMD is achieved by sifting an ensemble of adaptive orthogonal components called Intrinsic Mode Functions (IMFs). EEMD is noise-assisted and overcomes the drawback of mode mixing in conventional Empirical Mode Decomposition (EMD). Inspired by these advantages, the aim of this work is to employ EEMD to decompose SITS into IMFs and to choose the relevant IMFs for separating the seasonal and trend components. In a series of simulations, IMFs extracted by EEMD achieved a clear representation with physical meaning. The experimental results of 16-day compositions of Moderate Resolution Imaging Spectroradiometer (MODIS), Normalized Difference Vegetation Index (NDVI), and Global Environment Monitoring Index (GEMI) time series with disturbance illustrate the effectiveness and stability of the proposed approach for monitoring tasks, such as the detection of abrupt changes.
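
The noise-assisted idea behind EEMD can be sketched in a few lines: extract the first IMF by sifting, repeat over noise-perturbed copies of the signal, and average. The simplified sifting below (fixed sift count, cubic-spline envelopes, first IMF only) is an illustrative reduction of the full algorithm, not the paper's implementation:

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def first_imf(x, t, n_sifts=8):
    """Extract the first IMF by sifting: repeatedly subtract the mean of
    the cubic-spline envelopes through the local maxima and minima."""
    h = x.copy()
    for _ in range(n_sifts):
        imax = argrelextrema(h, np.greater)[0]
        imin = argrelextrema(h, np.less)[0]
        if imax.size < 2 or imin.size < 2:
            break
        upper = CubicSpline(t[imax], h[imax])(t)
        lower = CubicSpline(t[imin], h[imin])(t)
        h = h - (upper + lower) / 2
    return h

def eemd_first_imf(x, t, n_ensemble=50, noise_std=0.05):
    """Noise-assisted ensemble: average the first IMF over many
    noise-perturbed copies of the signal to suppress mode mixing."""
    rng = np.random.default_rng(0)
    imfs = [first_imf(x + rng.normal(0, noise_std, x.size), t)
            for _ in range(n_ensemble)]
    return np.mean(imfs, axis=0)

# Seasonal oscillation riding on a slow linear trend
t = np.linspace(0, 10, 500)
x = np.sin(2 * np.pi * 2 * t) + 0.3 * t
seasonal = eemd_first_imf(x, t)
trend = x - seasonal
```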

  13. An accuracy assessment of real-time GNSS time series toward semi-real-time seafloor geodetic observation

    Science.gov (United States)

    Osada, Y.; Ohta, Y.; Demachi, T.; Kido, M.; Fujimoto, H.; Azuma, R.; Hino, R.

    2013-12-01

    Large interplate earthquakes have repeatedly occurred in the Japan Trench. Recently, detailed crustal deformation has been revealed by the nationwide inland GPS network known as GEONET, operated by GSI. However, the region of maximum displacement for interplate earthquakes is mainly located offshore. GPS/Acoustic seafloor geodetic observation (hereafter GPS/A) is therefore important and useful for understanding the shallower part of the interplate coupling between the subducting and overriding plates. GPS/A is typically conducted in a specific ocean area in repeated campaign style using a research vessel or buoy, so the temporal variation of seafloor crustal deformation cannot be monitored in real time. One technical issue for real-time observation is kinematic GPS analysis, which requires both reference and rover data. If precise kinematic GPS analysis becomes possible in the offshore region, it would be a promising method for real-time GPS/A with a USV (Unmanned Surface Vehicle) or a moored buoy. We assessed the stability, precision and accuracy of the StarFire™ global satellite-based augmentation system, primarily testing StarFire under static conditions. To assess coordinate precision and accuracy, we compared the 1 Hz StarFire time series with post-processed precise point positioning (PPP) 1 Hz time series from the GIPSY-OASIS II processing software ver. 6.1.2 using three different product types (ultra-rapid, rapid, and final orbits). We also used different clock-information intervals (30 and 300 seconds) for the post-processed PPP processing. The standard deviation of the real-time StarFire time series is less than 30 mm (horizontal components) and 60 mm (vertical component) based on one month of continuous processing. We also assessed the noise spectra of the time series estimated by StarFire and by post-processed GIPSY PPP. We found that the noise spectrum of the StarFire time series shows a pattern similar to the GIPSY-OASIS II result based on the JPL rapid orbit

  14. The Exponential Model for the Spectrum of a Time Series: Extensions and Applications

    DEFF Research Database (Denmark)

    Proietti, Tommaso; Luati, Alessandra

    The exponential model for the spectrum of a time series and its fractional extensions are based on the Fourier series expansion of the logarithm of the spectral density. The coefficients of the expansion form the cepstrum of the time series. After deriving the cepstrum of important classes of time...

  15. Enteroclysis and small bowel series: Comparison of radiation dose and examination time

    International Nuclear Information System (INIS)

    Thoeni, R.F.; Gould, R.G.

    1991-01-01

    Respective radiation doses and total examination and fluoroscopy times were compared for 50 patients; 25 underwent enteroclysis and 25 underwent a small bowel series with (n = 17) or without (n = 8) an examination of the upper gastrointestinal (GI) tract. For enteroclysis, the mean skin entry radiation dose (12.3 rad [123 mGy]) and mean fluoroscopy time (18.4 minutes) were almost 1 1/2 times greater than those for the small bowel series with examination of the upper GI tract (8.4 rad [84 mGy]; 11.4 minutes) and almost three times greater than those for the small bowel series without upper GI examination (4.6 rad [46 mGy]; 6.3 minutes). However, the mean total examination completion time for enteroclysis (31.2 minutes) was almost half that of the small bowel series without upper GI examination (57.5 minutes) and about a quarter of that of the small bowel series with upper GI examination (114 minutes). The higher radiation dose of enteroclysis should be weighed against the shorter examination time, the age and clinical condition of the patient, and the reported higher accuracy when deciding on the appropriate radiographic examination of the small bowel
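
The dose and time ratios quoted above can be verified with a few lines of arithmetic (values taken directly from the abstract):

```python
# Values from the abstract: mean skin-entry dose (rad) and mean total
# examination time (minutes) for each examination type.
dose = {"enteroclysis": 12.3, "sbs_with_ugi": 8.4, "sbs_without_ugi": 4.6}
exam_time = {"enteroclysis": 31.2, "sbs_with_ugi": 114.0,
             "sbs_without_ugi": 57.5}

dose_ratio_vs_with = dose["enteroclysis"] / dose["sbs_with_ugi"]       # ~1.46
dose_ratio_vs_without = dose["enteroclysis"] / dose["sbs_without_ugi"] # ~2.67
time_ratio_vs_with = exam_time["sbs_with_ugi"] / exam_time["enteroclysis"]

print(dose_ratio_vs_with, dose_ratio_vs_without, time_ratio_vs_with)
```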

  16. Rotation in the dynamic factor modeling of multivariate stationary time series.

    NARCIS (Netherlands)

    Molenaar, P.C.M.; Nesselroade, J.R.

    2001-01-01

    A special rotation procedure is proposed for the exploratory dynamic factor model for stationary multivariate time series. The rotation procedure applies separately to each univariate component series of a q-variate latent factor series and transforms such a component, initially represented as white

  17. Is Fish Response related to Velocity and Turbulence Magnitudes? (Invited)

    Science.gov (United States)

    Wilson, C. A.; Hockley, F. A.; Cable, J.

    2013-12-01

    Riverine fish are subject to heterogeneous velocities and turbulence, and may use this to their advantage by selecting regions which balance energy expenditure for station holding whilst maximising energy gain through feeding opportunities. This study investigated microhabitat selection by guppies (Poecilia reticulata) in terms of the three-dimensional velocity structure generated by idealised boulders in an experimental flume. Velocity and turbulence influenced intra-species variation in swimming behaviour with respect to size, sex and parasite intensity. With increasing body length, fish swam further and more frequently between boulder regions. Larger guppies spent more time in the high velocity and low turbulence region, whereas smaller guppies preferred the low velocity and high shear stress region directly behind the boulders. Male guppies selected the region of low velocity, indicating a possible reduced swimming ability due to hydrodynamic drag imposed by their fins. With increasing parasite (Gyrodactylus turnbulli) burden, fish preferentially selected the region of moderate velocity which had the lowest bulk measure of turbulence of all regions and was also the most spatially homogeneous velocity and turbulence region. Overall the least amount of time was spent in the recirculation zone which had the highest magnitude of shear stresses and mean vertical turbulent length scale to fish length ratio. Shear stresses were a factor of two greater than in the most frequented moderate velocity region, while mean vertical turbulent length scale to fish length ratio were six times greater. Indeed the mean longitudinal turbulent scale was 2-6 times greater than the fish length in all regions. While it is impossible to discriminate between these two turbulence parameters (shear stress and turbulent length to fish length ratio) in influencing the fish preference, our study infers that there is a bias towards fish spending more time in a region where both the bulk

  18. A simple and fast representation space for classifying complex time series

    International Nuclear Information System (INIS)

    Zunino, Luciano; Olivares, Felipe; Bariviera, Aurelio F.; Rosso, Osvaldo A.

    2017-01-01

    In the context of time series analysis, considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series from stationary and non-stationary processes with long-range dependences. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has also been proposed. This multiscale analysis is essential to reach a higher discriminative power between physiological time series in health and disease. - Highlights: • A bidimensional scheme has been tested for classification purposes. • A multiscale generalization is introduced. • Several practical applications confirm its usefulness. • Different sets of financial and physiological data are efficiently distinguished. • This multiscale bidimensional approach has high potential as a discriminative tool.
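
Both coordinates of the representation space are easy to compute. A minimal numpy sketch (the Abbe-value normalization shown is the standard one; the test signals are illustrative):

```python
import numpy as np

def turning_points(x):
    """Number of local extrema: sign changes of successive increments."""
    dx = np.diff(x)
    return int(np.sum(dx[:-1] * dx[1:] < 0))

def abbe(x):
    """Abbe value: half the mean squared successive difference divided by
    the variance; near 1 for white noise, near 0 for smooth series."""
    return 0.5 * np.mean(np.diff(x) ** 2) / np.var(x)

rng = np.random.default_rng(5)
noise = rng.normal(0, 1, 1000)   # uncorrelated series
walk = np.cumsum(noise)          # strongly correlated series
print(abbe(noise), abbe(walk), turning_points(noise))
```

White noise sits near (1, 2n/3) in this plane, while a random walk collapses toward the origin of the Abbe axis, which is what makes the two coordinates discriminative.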

  19. A simple and fast representation space for classifying complex time series

    Energy Technology Data Exchange (ETDEWEB)

    Zunino, Luciano, E-mail: lucianoz@ciop.unlp.edu.ar [Centro de Investigaciones Ópticas (CONICET La Plata – CIC), C.C. 3, 1897 Gonnet (Argentina); Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina); Olivares, Felipe, E-mail: olivaresfe@gmail.com [Instituto de Física, Pontificia Universidad Católica de Valparaíso (PUCV), 23-40025 Valparaíso (Chile); Bariviera, Aurelio F., E-mail: aurelio.fernandez@urv.cat [Department of Business, Universitat Rovira i Virgili, Av. Universitat 1, 43204 Reus (Spain); Rosso, Osvaldo A., E-mail: oarosso@gmail.com [Instituto de Física, Universidade Federal de Alagoas (UFAL), BR 104 Norte km 97, 57072-970, Maceió, Alagoas (Brazil); Instituto Tecnológico de Buenos Aires (ITBA) and CONICET, C1106ACD, Av. Eduardo Madero 399, Ciudad Autónoma de Buenos Aires (Argentina); Complex Systems Group, Facultad de Ingeniería y Ciencias Aplicadas, Universidad de los Andes, Av. Mons. Álvaro del Portillo 12.455, Las Condes, Santiago (Chile)

    2017-03-18

    In the context of time series analysis, considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series from stationary and non-stationary processes with long-range dependences. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has also been proposed. This multiscale analysis is essential to reach a higher discriminative power between physiological time series in health and disease. - Highlights: • A bidimensional scheme has been tested for classification purposes. • A multiscale generalization is introduced. • Several practical applications confirm its usefulness. • Different sets of financial and physiological data are efficiently distinguished. • This multiscale bidimensional approach has high potential as a discriminative tool.

  20. Visibility graphlet approach to chaotic time series

    Energy Technology Data Exchange (ETDEWEB)

    Mutua, Stephen [Business School, University of Shanghai for Science and Technology, Shanghai 200093 (China); Computer Science Department, Masinde Muliro University of Science and Technology, P.O. Box 190-50100, Kakamega (Kenya); Gu, Changgui, E-mail: gu-changgui@163.com, E-mail: hjyang@ustc.edu.cn; Yang, Huijie, E-mail: gu-changgui@163.com, E-mail: hjyang@ustc.edu.cn [Business School, University of Shanghai for Science and Technology, Shanghai 200093 (China)

    2016-05-15

    Many novel methods have been proposed for mapping time series into complex networks. Although some dynamical behaviors can be effectively captured by existing approaches, the preservation and tracking of the temporal behaviors of a chaotic system remains an open problem. In this work, we extended the visibility graphlet approach to investigate both discrete and continuous chaotic time series. We applied visibility graphlets to capture the reconstructed local states, so that each is treated as a node and tracked downstream to create a temporal chain link. Our empirical findings show that the approach accurately captures the dynamical properties of chaotic systems. Networks constructed from periodic dynamic phases all converge to regular networks and to unique network structures for each model in the chaotic zones. Furthermore, our results show that the characterization of chaotic and non-chaotic zones in the Lorenz system corresponds to the maximal Lyapunov exponent, thus providing a simple and straightforward way to analyze chaotic systems.
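
A natural visibility graph, the basis of the graphlet approach, connects two samples whenever the straight line between them clears all intermediate samples. A minimal sketch (the brute-force O(n^3) construction below is illustrative; the paper's graphlet tracking is not reproduced):

```python
def visibility_edges(x):
    """Natural visibility graph: samples a and b are connected when every
    intermediate sample lies strictly below the line joining them."""
    n = len(x)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            if all(x[c] < x[a] + (x[b] - x[a]) * (c - a) / (b - a)
                   for c in range(a + 1, b)):
                edges.append((a, b))
    return edges

# Chaotic time series: logistic map at r = 4
x = [0.4]
for _ in range(99):
    x.append(4 * x[-1] * (1 - x[-1]))
edges = visibility_edges(x)
mean_degree = 2 * len(edges) / len(x)
print(len(edges), mean_degree)
```

A convex-down series such as [3, 1, 0, 1, 3] yields a complete graph (all 10 pairs visible), whereas collinear points block each other, so a monotone ramp yields only adjacent-neighbor edges.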