WorldWideScience

Sample records for sampling series based

  1. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as the concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy, and three-dimensional perceptual maps of the results are then provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper, namely multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. This implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can time series generated by different models be distinguished, but series generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions, respectively: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
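
    Illustrative note: the final step above (building a low-dimensional map from a dissimilarity matrix) can be sketched with classical MDS on a precomputed matrix. The dissimilarity used here is a placeholder, not the authors' KCSE/PCSE measures, and all data are toy series.

        import numpy as np
        from sklearn.manifold import MDS

        # Toy stand-in for a cross-sample-entropy-based dissimilarity matrix
        rng = np.random.default_rng(0)
        series = rng.standard_normal((6, 500))            # 6 synthetic time series
        D = np.zeros((6, 6))
        for i in range(6):
            for j in range(6):
                # placeholder dissimilarity; the paper would use KCSE or PCSE here
                D[i, j] = abs(series[i].std() - series[j].std())

        # 3-D perceptual map from the precomputed dissimilarities
        mds = MDS(n_components=3, dissimilarity='precomputed', random_state=0)
        coords = mds.fit_transform(D)
        print(coords)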

  2. Detecting chaos in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W

    2013-09-01

    Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum which is computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus, making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented, which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series. However, the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
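
    Illustrative note: a minimal sketch of the spectral step the paper modifies, using SciPy's Lomb-Scargle periodogram on an irregularly sampled toy signal; the signal, sampling times and frequency grid are invented, and the paper's chaos statistic itself is not reproduced.

        import numpy as np
        from scipy.signal import lombscargle

        # Irregularly sampled toy signal: a 0.2-Hz sine plus noise
        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0, 100, 400))
        y = np.sin(2 * np.pi * 0.2 * t) + 0.3 * rng.standard_normal(t.size)

        # Power spectrum on a grid of angular frequencies (no resampling needed)
        freqs = np.linspace(0.01, 2 * np.pi, 2000)
        power = lombscargle(t, y - y.mean(), freqs)
        print(freqs[np.argmax(power)] / (2 * np.pi))   # recovers roughly 0.2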

  3. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation, we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations, each with an associated cost, to transform the time series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition, we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.

  4. Transformation-cost time-series method for analyzing irregularly sampled data

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G. Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation, we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations, each with an associated cost, to transform the time series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition, we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.

  5. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  6. Using forbidden ordinal patterns to detect determinism in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W; Chobot, J M; Niskala, B J; Needhammer, C J

    2016-02-01

    It is known that when symbolizing a time series into ordinal patterns using the Bandt-Pompe (BP) methodology, there will be ordinal patterns, called forbidden patterns, that do not occur in a deterministic series. The existence of forbidden patterns can be used to identify deterministic dynamics. In this paper, the ability to use forbidden patterns to detect determinism in irregularly sampled time series is tested on data generated from a continuous model system. The study is done in three parts. First, the effects of sampling time on the number of forbidden patterns are studied on regularly sampled time series. The next two parts focus on two types of irregular sampling: missing data and timing jitter. It is shown that forbidden patterns can be used to detect determinism in irregularly sampled time series for low degrees of sampling irregularity (as defined in the paper). In addition, comments are made about the appropriateness of using the BP methodology to symbolize irregularly sampled time series.
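
    Illustrative note: a minimal sketch of counting Bandt-Pompe ordinal patterns and listing the ones that never occur; the two toy series (a fully chaotic logistic map and uniform noise) and the pattern order are choices made for the example, not the paper's model system.

        import numpy as np
        from itertools import permutations

        def ordinal_pattern_counts(x, order=3):
            """Count Bandt-Pompe ordinal patterns of the given order in x."""
            counts = {p: 0 for p in permutations(range(order))}
            for i in range(len(x) - order + 1):
                counts[tuple(np.argsort(x[i:i + order]))] += 1
            return counts

        rng = np.random.default_rng(2)
        logistic = np.empty(5000)
        logistic[0] = 0.4
        for i in range(1, 5000):                  # deterministic, chaotic series
            logistic[i] = 4.0 * logistic[i - 1] * (1 - logistic[i - 1])
        noise = rng.random(5000)                  # stochastic series

        for name, series in [("logistic", logistic), ("noise", noise)]:
            counts = ordinal_pattern_counts(series)
            forbidden = [p for p, c in counts.items() if c == 0]
            # the deterministic series should leave patterns unused; noise should not
            print(name, "forbidden patterns:", forbidden)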

  7. Cross-sample entropy of foreign exchange time series

    Science.gov (United States)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The calculation method for the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series in the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before the crisis. Comparison with the correlation coefficient shows that cross-SampEn is superior for describing the correlation between time series.
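
    Illustrative note: a common textbook formulation of cross-sample entropy on two toy return series; the template length m, tolerance r and the standardization step are typical defaults, not necessarily the parameters used in the paper.

        import numpy as np

        def cross_sampen(u, v, m=2, r=0.2):
            """Cross-SampEn of two series (standardized; Chebyshev distance)."""
            u = (u - u.mean()) / u.std()
            v = (v - v.mean()) / v.std()

            def match_rate(length):
                U = np.array([u[i:i + length] for i in range(len(u) - length)])
                V = np.array([v[j:j + length] for j in range(len(v) - length)])
                d = np.max(np.abs(U[:, None, :] - V[None, :, :]), axis=2)
                return np.mean(d <= r)

            return -np.log(match_rate(m + 1) / match_rate(m))

        rng = np.random.default_rng(3)
        x = rng.standard_normal(1000)
        y = 0.7 * x + 0.3 * rng.standard_normal(1000)     # partially synchronous
        z = rng.standard_normal(1000)                     # independent
        # the more synchronous pair should yield the lower (less asynchronous) value
        print(cross_sampen(x, y), cross_sampen(x, z))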

  8. Comparison of correlation analysis techniques for irregularly sampled time series

    Directory of Open Access Journals (Sweden)

    K. Rehfeld

    2011-06-01

    Full Text Available Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique and different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as the Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques.

    All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. We find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme in the analysis of highly irregular time series. For the cross correlation function (CCF) the RMSE is then lower by 60 %. The application of the Lomb-Scargle technique gave results comparable to the kernel methods in the univariate case, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods.

    We illustrate the performance of interpolation vs. the Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based, approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques and is suitable for large-scale application to paleo-data.

  9. Asymptotic theory for the sample covariance matrix of a heavy-tailed multivariate time series

    DEFF Research Database (Denmark)

    Davis, Richard A.; Mikosch, Thomas Valentin; Pfaffel, Olivier

    2016-01-01

    In this paper we give an asymptotic theory for the eigenvalues of the sample covariance matrix of a multivariate time series. The time series constitutes a linear process across time and between components. The input noise of the linear process has regularly varying tails with index α ∈ (0,4); in particular, the time series has infinite fourth moment. We derive the limiting behavior for the largest eigenvalues of the sample covariance matrix and show point process convergence of the normalized eigenvalues. The limiting process has an explicit form involving points of a Poisson process and eigenvalues of a non-negative definite matrix. Based on this convergence we derive limit theory for a host of other continuous functionals of the eigenvalues, including the joint convergence of the largest eigenvalues, the joint convergence of the largest eigenvalue and the trace of the sample covariance matrix…

  10. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-10-01

    This study proposes a robust similarity score-based time series feature extraction method termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves low computational complexity, thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and to three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account the meteorological persistence, or temporal behavior, thereby identifying the memory of the analyzed process. This article seeks to present the concept of the size of an equivalent sample, which helps to identify sub-periods with a similar structure in a data series. Moreover, in this article we examine the alternative of adjusting the variance of the series, keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. This article presents two examples, the first corresponding to seven simulated series with a first-order autoregressive structure, and the second corresponding to seven meteorological series of anomalies of the air temperature at the surface in two Colombian regions.
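
    Illustrative note: the abstract does not state the formula, but for an AR(1)-like series the equivalent (effective) sample size is commonly computed from the lag-1 autocorrelation as n' = n(1 - r1)/(1 + r1); the sketch below applies that standard large-sample formula to a simulated AR(1) series with an invented persistence parameter.

        import numpy as np

        def equivalent_sample_size(x):
            """n' = n (1 - r1) / (1 + r1), with r1 the lag-1 autocorrelation."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)
            return len(x) * (1 - r1) / (1 + r1)

        # AR(1) simulation with persistence phi = 0.6 (hypothetical value)
        rng = np.random.default_rng(4)
        phi, n = 0.6, 1000
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.standard_normal()
        print(equivalent_sample_size(x))   # roughly 1000 * 0.4 / 1.6 = 250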

  12. A Story-Based Simulation for Teaching Sampling Distributions

    Science.gov (United States)

    Turner, Stephen; Dabney, Alan R.

    2015-01-01

    Statistical inference relies heavily on the concept of sampling distributions. However, sampling distributions are difficult to teach. We present a series of short animations that are story-based, with associated assessments. We hope that our contribution can be useful as a tool to teach sampling distributions in the introductory statistics…

  13. Time Series Analysis Based on Running Mann Whitney Z Statistics

    Science.gov (United States)

    A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...

  14. Measuring time series regularity using nonlinear similarity-based sample entropy

    International Nuclear Information System (INIS)

    Xie Hongbo; He Weixing; Liu Hui

    2008-01-01

    Sample entropy (SampEn), a measure quantifying regularity and complexity, is believed to be effective for analyzing diverse settings that include both deterministic chaotic and stochastic processes, and is particularly useful in the analysis of physiological signals that involve relatively small amounts of data. However, the similarity definition of vectors is based on the Heaviside function, whose boundary is discontinuous and hard, which may cause problems for the validity and accuracy of SampEn. The sigmoid function is a smoothed and continuous version of the Heaviside function. To overcome the problems SampEn encounters, a modified SampEn (mSampEn) based on the nonlinear sigmoid function is proposed. The performance of mSampEn was tested on independent identically distributed (i.i.d.) uniform random numbers, the MIX stochastic model, the Rössler map, and the Hénon map. The results showed that mSampEn was superior to SampEn in several aspects, including a defined entropy value in the case of small parameters, better relative consistency, robustness to noise, and less dependence on record length when characterizing time series generated from either deterministic or stochastic systems with different regularities.
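
    Illustrative note: a minimal sketch of the idea of replacing the hard Heaviside match in SampEn with a smooth sigmoid of the template distance; the sigmoid slope, embedding dimension m and tolerance r below are arbitrary example values, not the parameters of the paper.

        import numpy as np

        def msampen(x, m=2, r=0.2, slope=50.0):
            """Sample entropy with a sigmoid similarity weight instead of
            the Heaviside step (slope controls how soft the boundary is)."""
            x = (x - x.mean()) / x.std()

            def phi(length):
                T = np.array([x[i:i + length] for i in range(len(x) - length)])
                d = np.max(np.abs(T[:, None, :] - T[None, :, :]), axis=2)
                w = 1.0 / (1.0 + np.exp(np.clip(slope * (d - r), -50, 50)))
                np.fill_diagonal(w, 0.0)              # exclude self-matches
                return w.sum() / (w.shape[0] * (w.shape[0] - 1))

            return -np.log(phi(m + 1) / phi(m))

        rng = np.random.default_rng(5)
        print(msampen(np.sin(0.1 * np.arange(600))),   # regular signal: low value
              msampen(rng.standard_normal(600)))       # white noise: high value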

  15. Yfiler® Plus population samples and dilution series

    DEFF Research Database (Denmark)

    Andersen, Mikkel Meyer; Mogensen, Helle Smidt; Eriksen, Poul Svante

    2017-01-01

    …DNA complicated the analysis by causing drop-ins of characteristic female DNA artefacts. Even though the customised analytical threshold in combination with the custom-made artefact filters gave more alleles, crime scene samples still needed special attention from the forensic geneticist… dynamics and performance. We determined dye-dependent analytical thresholds by receiver operating characteristics (ROC) and made a customised artefact filter that includes theoretically known artefacts by use of previously analysed population samples. Dilution series of known male DNA and a selection…

  16. Methodology Series Module 5: Sampling Strategies

    OpenAIRE

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice, a population that is accessible & available. Some of the non-probabilit...

  17. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice, a population that is accessible & available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
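
    Illustrative note: a toy contrast between the two probability-sampling methods named above (simple random and stratified random); the clinic strata, population size and per-stratum sample sizes are invented for the example.

        import random
        from collections import Counter

        random.seed(0)
        # hypothetical sampling frame: 1000 patients from three clinics
        population = [{"id": i, "clinic": random.choice("ABC")} for i in range(1000)]

        # simple random sample of 100
        srs = random.sample(population, 100)

        # stratified random sample: fixed quota drawn within each clinic
        strata = {c: [p for p in population if p["clinic"] == c] for c in "ABC"}
        stratified = [p for c, k in zip("ABC", (34, 33, 33))
                      for p in random.sample(strata[c], k)]

        print(Counter(p["clinic"] for p in srs))         # proportions vary by chance
        print(Counter(p["clinic"] for p in stratified))  # proportions fixed by design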

  18. Methodology series module 5: Sampling strategies

    Directory of Open Access Journals (Sweden)

    Maninder Singh Setia

    2016-01-01

    Full Text Available Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice, a population that is accessible & available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  19. Methodology Series Module 5: Sampling Strategies

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice, a population that is accessible & available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of these results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438

  20. Multiscale sample entropy and cross-sample entropy based on symbolic representation and similarity of stock markets

    Science.gov (United States)

    Wu, Yue; Shang, Pengjian; Li, Yilong

    2018-03-01

    A modified multiscale sample entropy measure based on symbolic representation and similarity (MSEBSS) is proposed in this paper to study the complexity of stock markets. The modified algorithm reduces the probability of inducing undefined entropies and is confirmed to be robust to strong noise. Considering validity and accuracy, MSEBSS is more reliable than multiscale entropy (MSE) for time series mingled with much noise, like financial time series. We apply MSEBSS to financial markets, and the results show that American stock markets have the lowest complexity compared with European and Asian markets. There are exceptions to the regularity that stock markets show a decreasing complexity over the time scale, indicating a periodicity at certain scales. Based on MSEBSS, we introduce the modified multiscale cross-sample entropy measure based on symbolic representation and similarity (MCSEBSS) to consider the degree of asynchrony between distinct time series. Stock markets from the same area have higher synchrony than those from different areas. For stock markets with relatively high synchrony, the entropy values decrease with increasing scale factor, while for stock markets with high asynchrony, the entropy values do not decrease with increasing scale factor; sometimes they even tend to increase. So both MSEBSS and MCSEBSS are able to distinguish stock markets of different areas, and they are more helpful if used together for studying other features of financial time series.
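
    Illustrative note: the multiscale step common to MSE-style measures is the coarse-graining of the series before an entropy is computed at each scale; the sketch below shows only that step on a toy return series, with the paper's symbolic sample entropy left as a placeholder.

        import numpy as np

        def coarse_grain(x, scale):
            """Non-overlapping averaging used by multiscale (cross-)entropy."""
            n = (len(x) // scale) * scale
            return x[:n].reshape(-1, scale).mean(axis=1)

        rng = np.random.default_rng(6)
        returns = rng.standard_normal(5000)          # toy stand-in for index returns
        for scale in (1, 2, 5, 10):
            y = coarse_grain(returns, scale)
            # symbolic_sampen(y) would be the paper's entropy at this scale;
            # here we only report the coarse-grained length and variance
            print(scale, len(y), round(float(y.var()), 3))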

  1. Mapping Rice Cropping Systems in Vietnam Using an NDVI-Based Time-Series Similarity Measurement Based on DTW Distance

    Directory of Open Access Journals (Sweden)

    Xudong Guan

    2016-01-01

    Full Text Available Normalized Difference Vegetation Index (NDVI) derived from Moderate Resolution Imaging Spectroradiometer (MODIS) time-series data has been widely used in the fields of crop and rice classification. The cloudy and rainy weather characteristics of the monsoon season greatly reduce the likelihood of obtaining high-quality optical remote sensing images. In addition, the diverse crop-planting system in Vietnam also hinders the comparison of NDVI among different crop stages. To address these problems, we apply a Dynamic Time Warping (DTW) distance-based similarity measure approach and use the entire yearly NDVI time series to reduce the inaccuracy of classification using a single image. We first de-noise the NDVI time series using S-G filtering based on the TIMESAT software. Then, a standard NDVI time-series base for rice growth is established based on field survey data and Google Earth sample data. NDVI time-series data for each pixel are constructed and the DTW distance with the standard rice growth NDVI time series is calculated. Then, we apply thresholds to extract rice growth areas. A qualitative assessment using statistical data and a spatial assessment using sampled data from the rice-cropping map reveal a high mapping accuracy at the national scale against the statistical data, with the corresponding R2 being as high as 0.809; however, the mapped rice accuracy decreased at the provincial scale due to the reduced number of rice planting areas per province. An analysis of the results indicates that the 500-m resolution MODIS data are limited in terms of mapping scattered rice parcels. The results demonstrate that the DTW-based similarity measure of the NDVI time series can be effectively used to map large-area rice cropping systems with diverse cultivation processes.
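
    Illustrative note: a plain dynamic-programming DTW distance between two toy yearly NDVI profiles (23 MODIS-like composites); the curve shapes and the phase shift are invented, and the thresholding/classification steps of the paper are not reproduced.

        import numpy as np

        def dtw_distance(a, b):
            """Classic dynamic-programming DTW distance between 1-D sequences."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # hypothetical reference rice NDVI curve and a pixel curve shifted in time
        t = np.arange(23)
        reference = 0.3 + 0.5 * np.exp(-0.5 * ((t - 10) / 3.0) ** 2)
        pixel = 0.3 + 0.5 * np.exp(-0.5 * ((t - 13) / 3.0) ** 2)
        # DTW stays small despite the phase shift, unlike the point-wise distance
        print(dtw_distance(reference, pixel), float(np.abs(reference - pixel).sum()))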

  2. Weighted statistical parameters for irregularly sampled time series

    Science.gov (United States)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.

  3. Visibility Graph Based Time Series Analysis.

    Science.gov (United States)

    Stephen, Mutua; Gu, Changgui; Yang, Huijie

    2015-01-01

    Network-based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.
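
    Illustrative note: a minimal construction of the natural visibility graph for one series segment; linking the graphs of successive segments into a temporal network, as the paper does, is not shown, and the segment here is random toy data.

        import numpy as np

        def visibility_edges(x):
            """Natural visibility graph: points i and j are linked if every
            intermediate sample lies strictly below the line joining them."""
            edges = []
            for i in range(len(x) - 1):
                for j in range(i + 1, len(x)):
                    visible = all(
                        x[k] < x[i] + (x[j] - x[i]) * (k - i) / (j - i)
                        for k in range(i + 1, j)
                    )
                    if visible:
                        edges.append((i, j))
            return edges

        rng = np.random.default_rng(7)
        segment = rng.standard_normal(50)          # one series segment -> one graph
        print(len(visibility_edges(segment)))      # number of visibility links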

  4. Visibility Graph Based Time Series Analysis.

    Directory of Open Access Journals (Sweden)

    Mutua Stephen

    Full Text Available Network-based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.

  5. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Science.gov (United States)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of
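
    Illustrative note: one of the estimation approaches discussed above, sketched in its simplest form: a Lomb-Scargle periodogram of an irregularly sampled random walk (Brown noise, β = 2 in theory) followed by a log-log slope fit; the sampling scheme and frequency grid are invented, and, as the abstract notes, this plain estimator tends to underestimate β.

        import numpy as np
        from scipy.signal import lombscargle

        # irregularly sampled random walk (Brown noise, beta = 2 in theory)
        rng = np.random.default_rng(8)
        t = np.sort(rng.uniform(0, 1000, 800))
        x = np.cumsum(rng.standard_normal(800))

        freqs = np.linspace(2 * np.pi / 1000, np.pi, 500)   # angular frequencies
        p = lombscargle(t, x - x.mean(), freqs)

        # spectral slope beta from a log-log least-squares fit, P(f) ~ f**(-beta)
        beta = -np.polyfit(np.log(freqs), np.log(p), 1)[0]
        print(beta)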

  6. Adaptive Sampling of Time Series During Remote Exploration

    Science.gov (United States)

    Thompson, David R.

    2012-01-01

    This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models
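
    Illustrative note: a toy greedy active-sampling loop with a Gaussian process, where the next measurement time is the candidate with the largest posterior standard deviation, a simple stand-in for the information-gain criterion; the stationary RBF kernel, the signal and all parameters are invented, unlike the nonstationary models discussed above.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def signal(t):                       # hypothetical signal with a local anomaly
            return np.sin(0.3 * t) + (np.abs(t - 60) < 5) * np.sin(3 * t)

        t_obs = list(np.linspace(0, 100, 8))            # sparse initial samples
        candidates = np.linspace(0, 100, 400)
        for _ in range(15):
            X = np.array(t_obs).reshape(-1, 1)
            gp = GaussianProcessRegressor(RBF(10.0) + WhiteKernel(1e-3))
            gp.fit(X, signal(X.ravel()))
            _, std = gp.predict(candidates.reshape(-1, 1), return_std=True)
            t_obs.append(float(candidates[np.argmax(std)]))   # next sample time

        print(sorted(round(t, 1) for t in t_obs))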

  7. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Science.gov (United States)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance-measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.

  8. Generalized sample entropy analysis for traffic signals based on similarity measure

    Science.gov (United States)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on similarity distance, presents a different way of matching signal patterns and reveals distinct behaviors of complexity. Simulations are conducted on synthetic data and traffic signals to provide a comparative study and show the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation concerning the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.

  9. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Directory of Open Access Journals (Sweden)

    Q. Zhang

    2018-02-01

    Full Text Available River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling – in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) – are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb–Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among

  10. Application of a series of artificial neural networks to on-site quantitative analysis of lead into real soil samples by laser induced breakdown spectroscopy

    International Nuclear Information System (INIS)

    El Haddad, J.; Bruyère, D.; Ismaël, A.; Gallou, G.; Laperche, V.; Michel, K.; Canioni, L.; Bousquet, B.

    2014-01-01

    Artificial neural networks were applied to process data from on-site LIBS analysis of soil samples. A first artificial neural network allowed retrieval of the relative amounts of silicate, calcareous and ore matrices in soils. As a consequence, each soil sample was correctly located inside the ternary diagram characterized by these three matrices, as verified by ICP-AES. Then a series of artificial neural networks were applied to quantify lead in soil samples. More precisely, two models were designed for classification purposes according to both the type of matrix and the range of lead concentrations. Then, three quantitative models were locally applied to three data subsets. This complete approach allowed reaching a relative error of prediction close to 20%, considered satisfactory in the case of on-site analysis. - Highlights: • Application of a series of artificial neural networks (ANN) to quantitative LIBS • Matrix-based classification of the soil samples by ANN • Concentration-based classification of the soil samples by ANN • Series of quantitative ANN models dedicated to the analysis of data subsets • Relative error of prediction lower than 20% for LIBS analysis of soil samples

  11. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical value for the analysis of stochastic and unstable time series with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recent prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique owing to its advantages of improved forecasting accuracy, applicability to limited and unstable data situations, and little computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
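
    Illustrative note: a toy rolling 1-step-ahead scheme with a plain least-squares AR(2) fit standing in for the paper's AR equation; the series, window length and model order are invented.

        import numpy as np

        def ar_fit_predict(w, p=2):
            """Fit AR(p) to window w by least squares; return 1-step prediction."""
            X = np.column_stack([w[p - k - 1:len(w) - k - 1] for k in range(p)])
            A = np.column_stack([np.ones(len(w) - p), X])
            coef, *_ = np.linalg.lstsq(A, w[p:], rcond=None)
            return coef[0] + coef[1:] @ w[-1:-p - 1:-1]

        rng = np.random.default_rng(10)
        series = np.cumsum(0.2 + 0.5 * rng.standard_normal(40))   # short sample
        window = list(series[:10])
        predictions = []
        for _ in range(5):                    # rolling mechanism, one step at a time
            yhat = float(ar_fit_predict(np.array(window)))
            predictions.append(yhat)
            window = window[1:] + [yhat]      # drop oldest value, append prediction
        print(predictions)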

  12. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1993-01-01

    Reliability-based design of structural systems is considered. In particular, systems where the reliability model is a series system of parallel systems are treated. A sensitivity analysis for this class of problems is presented. Optimization problems with series systems of parallel systems… optimization of series systems of parallel systems, but it is also efficient in reliability-based optimization of series systems in general…

  13. [Winter wheat area estimation with MODIS-NDVI time series based on parcel].

    Science.gov (United States)

    Li, Le; Zhang, Jin-shui; Zhu, Wen-quan; Hu, Tan-gao; Hou, Dong

    2011-05-01

    Several attributes of MODIS (Moderate Resolution Imaging Spectroradiometer) data, especially the short temporal intervals and the global coverage, provide an extremely efficient way to map cropland and monitor its seasonal change. However, the reliability of the measurement results is challenged by the limited spatial resolution. Parcel data have clear geo-location and obvious boundary information for cropland, and the spectral differences and the complexity of mixed pixels are weak within parcels. All of this means that area estimation based on parcels offers more advantages than estimation based on pixels. In the present study, winter wheat area estimation based on MODIS-NDVI time series has been performed with the support of cultivated land parcels in Tongzhou, Beijing. In order to extract the regional winter wheat acreage, multiple regression methods were used to simulate the stable regression relationship between MODIS-NDVI time series data and TM samples within parcels. In this way, the consistency of the extraction results from MODIS and TM can stably reach up to 96% when the amount of samples accounts for 15% of the whole area. The results show that the use of parcel data can effectively reduce the recognition errors in MODIS-NDVI based results caused by the low spatial resolution. Therefore, by combining moderate and low resolution data, winter wheat area estimation becomes feasible in large-scale regions that lack complete medium resolution coverage or whose images are covered by clouds. Meanwhile, this work provides preliminary experiments for the area estimation of other crops.

  14. Power Forecasting of Combined Heating and Cooling Systems Based on Chaotic Time Series

    Directory of Open Access Journals (Sweden)

    Liu Hai

    2015-01-01

    Full Text Available Theoretical analysis shows that the output power of a distributed generation system is nonlinear and chaotic, and is coupled with the microenvironment meteorological data. Chaos is an inherent property of nonlinear dynamic systems. Predicting the output power of the distributed generation system amounts to establishing a nonlinear model of the dynamic system, based on real time series, in the reconstructed phase space. Firstly, chaos should be detected and quantified for the intensive study of nonlinear systems: if the largest Lyapunov exponent is positive, the dynamical system must be chaotic. Then, the embedding dimension and the delay time are chosen based on the improved C-C method, and the attractor of the chaotic power time series can be reconstructed in the phase space using this embedding dimension and delay time. The neural network can then be trained on training samples observed from the distributed generation system, and the trained model will approximate the curve of output power adequately. Experimental results show that the maximum power point of the distributed generation system can be predicted based on the meteorological data. The system can be controlled effectively based on the prediction.
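
    Illustrative note: the phase-space reconstruction step described above, sketched as a plain time-delay embedding; the series (a logistic map standing in for a power signal), the embedding dimension and the delay are invented rather than chosen by the C-C method.

        import numpy as np

        def delay_embed(x, dim, tau):
            """Time-delay embedding: rows are reconstructed state vectors."""
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

        # toy chaotic series standing in for measured output power
        x = np.empty(3000)
        x[0] = 0.37
        for i in range(1, 3000):
            x[i] = 4.0 * x[i - 1] * (1 - x[i - 1])

        vectors = delay_embed(x, dim=3, tau=2)     # hypothetical dim and tau
        print(vectors.shape)                       # training samples for a network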

  15. An Energy-Based Similarity Measure for Time Series

    Directory of Open Access Journals (Sweden)

    Pierre Brunagel

    2007-11-01

    Full Text Available A new similarity measure for time series analysis, called SimilB, based on the cross-ΨB-energy operator (2004), is introduced. ΨB is a nonlinear measure which quantifies the interaction between two time series. Compared to the Euclidean distance (ED) or the Pearson correlation coefficient (CC), SimilB includes the temporal information and relative changes of the time series using the first and second derivatives of the time series. SimilB is well suited for both nonstationary and stationary time series and particularly those presenting discontinuities. Some new properties of ΨB are presented. In particular, we show that ΨB as a similarity measure is robust to both scale and time shift. SimilB is illustrated with synthetic time series and an artificial dataset and compared to the CC and the ED measures.

  16. Effectiveness of firefly algorithm based neural network in time series ...

    African Journals Online (AJOL)

    Effectiveness of firefly algorithm based neural network in time series forecasting. ... In the experiments, three well known time series were used to evaluate the performance. Results obtained were compared with ... Keywords: Time series, Artificial Neural Network, Firefly Algorithm, Particle Swarm Optimization, Overfitting ...

  17. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    Science.gov (United States)

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the limitations of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution, which is established from a Monte Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which the WTD method cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operated. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the deterministic sub-signal of the series with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by either the proposed method or WTD; such a series shows purely random rather than autocorrelated behavior, so de-noising is no longer needed. PMID:25360533

  18. Hemoglobin in samples with leukocytosis can be measured on ABL 700 series blood gas analyzers

    NARCIS (Netherlands)

    Scharnhorst, V.; Laar, van der P.D.; Vader, H.

    2003-01-01

    To compare lactate, bilirubin and Hemoglobin F concentrations obtained on ABL 700 series blood gas analyzers with those from laboratory methods. Pooled neonatal plasma, cord blood and adult plasma samples were used for comparison of bilirubin, hemoglobin F and lactate concentrations respectively.

  19. Pseudo-random bit generator based on lag time series

    Science.gov (United States)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map, using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we have used a delay in the generation of the time series. When these new series are mapped, xn against xn+1, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
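
    Illustrative note: a toy lag-based bit generator built from two delayed logistic-map trajectories; the parameters, seeds, lags and comparison rule are invented for the sketch and are not the authors' construction (in particular, no negative bifurcation parameter is used here), and the NIST tests are not run.

        import numpy as np

        def logistic_series(r, x0, n, skip=500):
            """Logistic-map trajectory of length n after discarding a transient."""
            x, out = x0, np.empty(n)
            for i in range(n + skip):
                x = r * x * (1 - x)
                if i >= skip:
                    out[i - skip] = x
            return out

        n, lag1, lag2 = 10000, 7, 13                       # hypothetical lags
        a = logistic_series(3.99, 0.41, n + lag1)
        b = logistic_series(3.97, 0.23, n + lag2)
        bits = (a[lag1:lag1 + n] > b[lag2:lag2 + n]).astype(int)

        print(bits[:32], bits.mean())    # the mean should sit near 0.5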

  20. Time Series Analysis of Non-Gaussian Observations Based on State Space Models from Both Classical and Bayesian Perspectives

    NARCIS (Netherlands)

    Durbin, J.; Koopman, S.J.M.

    1998-01-01

    The analysis of non-Gaussian time series using state space models is considered from both classical and Bayesian perspectives. The treatment in both cases is based on simulation using importance sampling and antithetic variables; Monte Carlo Markov chain methods are not employed. Non-Gaussian

  1. Drunk driving detection based on classification of multivariate time series.

    Science.gov (United States)

    Li, Zhenlong; Jin, Xue; Zhao, Xiaohua

    2015-09-01

    This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent multivariate time series. A bottom-up algorithm was then employed to separate multivariate time series. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify the driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is feasible and effective. The approach has implications for drunk driving detection. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.

  2. Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.

    Science.gov (United States)

    Jiang, Zhixing; Zhang, David; Lu, Guangming

    2018-04-19

    Radial artery pulse diagnosis has been playing an important role in traditional Chinese medicine (TCM). Because it is non-invasive and convenient, pulse diagnosis also has great significance for disease analysis in modern medicine. Practitioners sense the pulse waveforms at the patient's wrist to make diagnoses based on their non-objective personal experience. With research on pulse acquisition platforms and computerized analysis methods, the objective study of pulse diagnosis can help TCM keep up with the development of modern medicine. In this paper, we propose a new method to extract features from the pulse waveform based on the discrete Fourier series (DFS). It regards the waveform as a signal that consists of a series of sub-components represented by sine and cosine (SC) signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample using a discrete Fourier series by least squares. The feature vector is composed of the coefficients of the discrete Fourier series function. Compared with the fitting method using a Gaussian mixture function, the fitting errors of the proposed method are smaller, which indicates that our method can represent the original signal better. The classification performance of the proposed feature is superior to that of other features extracted from the waveform, such as the auto-regression model and the Gaussian mixture model. The coefficients of the optimized DFS function, which is used to fit the arterial pressure waveforms, achieve better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states. Copyright © 2018 Elsevier B.V. All rights reserved.
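
    Illustrative note: a least-squares fit of a truncated Fourier (sine/cosine) series to a toy pulse-like waveform, with the coefficients serving as the feature vector; the waveform, harmonic count and fundamental period are invented, not taken from the paper's data.

        import numpy as np

        def fit_fourier_series(t, y, n_harmonics=8):
            """Least-squares coefficients of a truncated Fourier series."""
            w = 2 * np.pi / (t[-1] - t[0])                 # fundamental frequency
            cols = [np.ones_like(t)]
            for k in range(1, n_harmonics + 1):
                cols += [np.cos(k * w * t), np.sin(k * w * t)]
            A = np.column_stack(cols)
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return coef, A @ coef                          # features, fitted curve

        # toy periodic waveform standing in for an averaged radial pulse
        t = np.linspace(0, 1, 300)
        y = np.exp(-60 * (t - 0.2) ** 2) + 0.4 * np.exp(-80 * (t - 0.5) ** 2)
        coef, fitted = fit_fourier_series(t, y)
        print(coef[:5], float(np.abs(y - fitted).max()))   # features and residual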

  3. Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis.

    Science.gov (United States)

    Moser, Albine; Korstjens, Irene

    2018-12-01

    In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By 'novice' we mean Master's students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of qualitative research papers. The second article focused on context, research questions and designs, and referred to publications for further reading. This third article addresses FAQs about sampling, data collection and analysis. The data collection plan needs to be broadly defined and open at first, and become flexible during data collection. Sampling strategies should be chosen in such a way that they yield rich information and are consistent with the methodological approach used. Data saturation determines sample size and will be different for each study. The most commonly used data collection methods are participant observation, face-to-face in-depth interviews and focus group discussions. Analyses in ethnographic, phenomenological, grounded theory, and content analysis studies yield different narrative findings: a detailed description of a culture, the essence of the lived experience, a theory, and a descriptive summary, respectively. The fourth and final article will focus on trustworthiness and publishing qualitative research.

  4. Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis

    Science.gov (United States)

    Moser, Albine; Korstjens, Irene

    2018-01-01

    Abstract In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By ‘novice’ we mean Master’s students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of qualitative research papers. The second article focused on context, research questions and designs, and referred to publications for further reading. This third article addresses FAQs about sampling, data collection and analysis. The data collection plan needs to be broadly defined and open at first, and become flexible during data collection. Sampling strategies should be chosen in such a way that they yield rich information and are consistent with the methodological approach used. Data saturation determines sample size and will be different for each study. The most commonly used data collection methods are participant observation, face-to-face in-depth interviews and focus group discussions. Analyses in ethnographic, phenomenological, grounded theory, and content analysis studies yield different narrative findings: a detailed description of a culture, the essence of the lived experience, a theory, and a descriptive summary, respectively. The fourth and final article will focus on trustworthiness and publishing qualitative research. PMID:29199486

  5. An advection-based model to increase the temporal resolution of PIV time series.

    Science.gov (United States)

    Scarano, Fulvio; Moore, Peter

    A numerical implementation of the advection equation is proposed to increase the temporal resolution of PIV time series. The method is based on the principle that velocity fluctuations are transported passively, similar to Taylor's hypothesis of frozen turbulence. In the present work, the advection model is extended to unsteady three-dimensional flows. The main objective of the method is to lower the requirement on the PIV repetition rate from the Eulerian frequency toward the Lagrangian one. The local trajectory of the fluid parcel is obtained by forward projection of the instantaneous velocity at the preceding time instant and backward projection from the subsequent time step. The trajectories are approximated by the instantaneous streamlines, which yields accurate results when the amplitude of velocity fluctuations is small with respect to the convective motion. The verification is performed with two experiments conducted at temporal resolutions significantly higher than that dictated by the Nyquist criterion. The flow past the trailing edge of a NACA0012 airfoil closely approximates frozen turbulence, where the largest ratio between the Lagrangian and Eulerian temporal scales is expected. An order of magnitude reduction of the needed acquisition frequency is demonstrated by the velocity spectra of super-sampled series. The application to three-dimensional data is made with time-resolved tomographic PIV measurements of a transitional jet. Here, the 3D advection equation is implemented to estimate the fluid trajectories. The reduction in the minimum sampling rate by the use of super-sampling is smaller in this case, because vortices occurring in the jet shear layer are not well approximated by sole advection at large time separations. Both cases reveal that the current requirements for time-resolved PIV experiments can be revised when information is poured from space to time. An additional favorable effect is observed by the analysis in the

  6. Hydrogeologic applications for historical records and images from rock samples collected at the Nevada National Security Site and vicinity, Nye County, Nevada - A supplement to Data Series 297

    Science.gov (United States)

    Wood, David B.

    2018-03-14

    Rock samples have been collected, analyzed, and interpreted from drilling and mining operations at the Nevada National Security Site for over one-half of a century. Records containing geologic and hydrologic analyses and interpretations have been compiled into a series of databases. Rock samples have been photographed and thin sections scanned. Records and images are preserved and available for public viewing and downloading at the U.S. Geological Survey ScienceBase, Mercury Core Library and Data Center Web site at https://www.sciencebase.gov/mercury/ and documented in U.S. Geological Survey Data Series 297. Example applications of these data and images are provided in this report.

  7. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Full Text Available The texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory and result in incoherent memory access patterns, causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g. a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects the texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color-blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty space skipping techniques in scenarios that need to render large dynamic volumes in a low-resolution image. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  8. Dual frequency modulation with two cantilevers in series: a possible means to rapidly acquire tip–sample interaction force curves with dynamic AFM

    International Nuclear Information System (INIS)

    Solares, Santiago D; Chawla, Gaurav

    2008-01-01

    One common application of atomic force microscopy (AFM) is the acquisition of tip–sample interaction force curves. However, this can be a slow process when the user is interested in studying non-uniform samples, because existing contact- and dynamic-mode methods require that the measurement be performed at one fixed surface point at a time. This paper proposes an AFM method based on dual frequency modulation using two cantilevers in series, which could be used to measure the tip–sample interaction force curves and topography of the entire sample with a single surface scan, in a time that is comparable to the time needed to collect a topographic image with current AFM imaging modes. Numerical simulation results are provided along with recommended parameters to characterize tip–sample interactions resembling those of conventional silicon tips and carbon nanotube tips tapping on silicon surfaces.

  9. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    Science.gov (United States)

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
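
    The core modelling idea, sketched in plain numpy under simplifying assumptions (fixed squared-exponential kernels and hand-picked hyperparameters rather than the authors' optimised implementation): replicates of a gene share a gene-level function, so the covariance of two observations is a shared kernel plus a replicate-level kernel that applies only when both points come from the same replicate. Irregular, replicate-specific sampling times pose no difficulty.

```python
import numpy as np

def rbf(t1, t2, variance, lengthscale):
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def hierarchical_gp_posterior(times, reps, y, t_star, rep_star,
                              var_g=1.0, len_g=2.0, var_r=0.3, len_r=1.0, noise=0.05):
    """GP posterior mean at new times for one replicate of one gene.

    Covariance = shared gene-level kernel + replicate-level kernel (same replicate only).
    """
    same = reps[:, None] == reps[None, :]
    K = rbf(times, times, var_g, len_g) + same * rbf(times, times, var_r, len_r)
    K += noise * np.eye(len(times))
    same_star = reps[None, :] == rep_star
    K_star = rbf(t_star, times, var_g, len_g) + same_star * rbf(t_star, times, var_r, len_r)
    return K_star @ np.linalg.solve(K, y)

# Two replicates observed at different (irregular) time points
t = np.array([0.0, 1.0, 2.5, 4.0, 0.5, 1.5, 3.0])
r = np.array([0,   0,   0,   0,   1,   1,   1  ])
y = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
mean = hierarchical_gp_posterior(t, r, y, t_star=np.linspace(0, 4, 9), rep_star=0)
print(mean.round(2))
```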

  10. Discovery and identification of a series of alkyl decalin isomers in petroleum geological samples.

    Science.gov (United States)

    Wang, Huitong; Zhang, Shuichang; Weng, Na; Zhang, Bin; Zhu, Guangyou; Liu, Lingyan

    2015-07-07

    Comprehensive two-dimensional gas chromatography/time-of-flight mass spectrometry (GC × GC/TOFMS) has been used to characterize a crude oil and a source rock extract sample. During the process, a series of pairwise components between monocyclic alkanes and mono-aromatics has been discovered. After tentative assignments of decahydronaphthalene isomers, a series of alkyl decalin isomers was synthesized and used for the identification and validation of these petroleum compounds. From both the MS and chromatography information, these pairwise compounds were identified as 2-alkyl-decahydronaphthalenes and 1-alkyl-decahydronaphthalenes, with the 1-alkyl-decahydronaphthalenes showing the stronger polarity. Their long-chain alkyl substituent groups may be due to bacterial transformation or different oil cracking events. This systematic profiling of alkyl-decahydronaphthalene isomers provides further understanding and recognition of these potential petroleum biomarkers.

  11. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    International Nuclear Information System (INIS)

    Albers, D.J.; Hripcsak, George

    2012-01-01

    Highlights: ► Time-delayed mutual information for irregularly sampled time-series. ► Estimation bias for the time-delayed mutual information calculation. ► Fast, simple, PDF estimator independent, time-delayed mutual information bias estimate. ► Quantification of data-set-size limits of the time-delayed mutual calculation. - Abstract: A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.
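
    A minimal sketch of the estimation idea, assuming a histogram (plug-in) mutual-information estimator: the time-delayed mutual information is computed at a short delay, and its bias is approximated by the mutual information at a delay so large that the true dependence has decayed. Bin counts, delays and the surrogate AR(1) series are illustrative choices, not the paper's data.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of the mutual information between two samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def time_delayed_mi(series, delay, bins=16):
    return mutual_information(series[:-delay], series[delay:], bins=bins)

rng = np.random.default_rng(1)
x = np.zeros(5000)
for i in range(1, x.size):                     # AR(1) surrogate with decaying dependence
    x[i] = 0.9 * x[i - 1] + rng.normal()

mi_short = time_delayed_mi(x, delay=5)         # delayed MI at a physically interesting lag
bias_proxy = time_delayed_mi(x, delay=2000)    # "infinite" separation -> bias estimate
print(mi_short, bias_proxy, mi_short - bias_proxy)
```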

  12. Measurement of radionuclide activities of uranium-238 series in soil samples by gamma spectrometry: case of Vinaninkarena

    International Nuclear Information System (INIS)

    Randrianantenaina, F.R.

    2017-01-01

    The aim of this work is to determine the activity level of radionuclides of the uranium-238 series. Eight soil samples were collected at the Rural Commune of Vinaninkarena. After secular equilibrium was reached, the samples were measured with a gamma spectrometry system in the Nuclear Analyses and Techniques Department of INSTN-Madagascar, using an HPGe detector (30% relative efficiency) and Genie 2000 software. The activities obtained vary from (78±2) Bq.kg⁻¹ to (49 231±415) Bq.kg⁻¹. Among the eight samples, three activity levels are observed: low activity, with values at or below (89±3) Bq.kg⁻¹; average activity, with values between (186±1) Bq.kg⁻¹ and (1049±7) Bq.kg⁻¹; and high activity, with values at or above (14501±209) Bq.kg⁻¹. According to UNSCEAR 2000, these values are all higher than the world average value of 35 Bq.kg⁻¹, which is due to the localities of the sampling points. The variation of the activity level depends on the radionuclide concentration of the uranium-238 series in the soil. [fr

  13. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to use these time-series MODIS datasets efficiently for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites and two features of Random Forests: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables, we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
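
    A hedged scikit-learn sketch of the two Random Forests features the study relies on. Variable importance ranks the time-series variables so that a reduced subset can be kept, and a simple agreement check stands in for the proximity-based outlier identification when transferring one year's samples to the next; data shapes, class definitions and thresholds are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_year1 = rng.normal(size=(500, 46))                           # e.g. 46 bands/dates of MODIS composites
y_year1 = np.digitize(X_year1[:, 0], [-1.0, -0.3, 0.3, 1.0])   # 5 synthetic land-cover classes

# 1) Rank variables by importance and keep a reduced subset for efficient mapping
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_year1, y_year1)
subset = np.argsort(rf.feature_importances_)[::-1][:23]        # roughly half of the variables
print("selected variables:", subset[:5], "...")

# 2) Transfer year-1 labels to year-2 imagery: keep only samples whose year-2 spectra
#    the model still assigns to the original class (a simple stand-in for the
#    proximity-based outlier identification used in the study)
X_year2 = X_year1 + rng.normal(scale=0.2, size=X_year1.shape)  # same sites, next year's imagery
keep = rf.predict(X_year2) == y_year1
print("samples retained for year-2 training:", int(keep.sum()), "of", len(y_year1))
```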

  14. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    Science.gov (United States)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using the seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series, daily Poaceae pollen concentrations over the period 2006-2014, was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed data series, and for this reason this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
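
    A compact sketch of the modelling chain using statsmodels' STL implementation and scikit-learn's PLSRegression, with synthetic data standing in for the Poaceae concentrations and meteorological series; the seasonal cycle of the final training year is reused for the held-out year, which is a simplification of the procedure described above.

```python
import numpy as np
from statsmodels.tsa.seasonal import STL
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
days = np.arange(365 * 8)                                   # eight years of daily data
pollen = np.clip(50 * np.sin(2 * np.pi * days / 365) ** 8
                 + rng.normal(scale=5, size=days.size), 0, None)
temperature = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(size=days.size)
rainfall = np.clip(rng.normal(2, 2, size=days.size), 0, None)

train, test = slice(0, 365 * 7), slice(365 * 7, None)       # hold out the final year

# 1) Seasonal-trend decomposition of the training years
res = STL(pollen[train], period=365, robust=True).fit()

# 2) Model the residual (stochastic) component from meteorology with PLSR
X = np.column_stack([temperature, rainfall])
pls = PLSRegression(n_components=2).fit(X[train], res.resid)

# 3) Predict the held-out year as (latest seasonal cycle) + (modelled residual)
prediction = res.seasonal[-365:] + pls.predict(X[test]).ravel()
print(np.corrcoef(prediction, pollen[test])[0, 1])          # predicted vs observed correlation
```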

  15. Spatial-dependence recurrence sample entropy

    Science.gov (United States)

    Pham, Tuan D.; Yan, Hong

    2018-03-01

    Measuring complexity in terms of the predictability of time series is a major area of research in science and engineering, and its applications are spreading throughout many scientific disciplines, where the analysis of physiological signals is perhaps the most widely reported in literature. Sample entropy is a popular measure for quantifying signal irregularity. However, the sample entropy does not take sequential information, which is inherently useful, into its calculation of sample similarity. Here, we develop a method that is based on the mathematical principle of the sample entropy and enables the capture of sequential information of a time series in the context of spatial dependence provided by the binary-level co-occurrence matrix of a recurrence plot. Experimental results on time-series data of the Lorenz system, physiological signals of gait maturation in healthy children, and gait dynamics in Huntington's disease show the potential of the proposed method.
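
    The proposed measure builds on standard sample entropy; the sketch below computes that base quantity (not the recurrence-plot extension) with the usual Chebyshev distance, embedding dimension m = 2 and tolerance r = 0.2 times the standard deviation, all of which are conventional illustrative choices.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Standard sample entropy SampEn(m, r) with the Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    if r is None:
        r = 0.2 * x.std()
    def matches(dim):
        # embed the series into `dim`-dimensional template vectors
        emb = np.array([x[i:i + dim] for i in range(N - m)])
        dists = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return np.sum(dists <= r) - len(emb)        # matching pairs, self-matches excluded
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(0)
print(sample_entropy(rng.normal(size=500)))            # white noise: high irregularity
print(sample_entropy(np.sin(np.linspace(0, 30, 500)))) # periodic signal: low irregularity
```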

  16. Ratio-based lengths of intervals to improve fuzzy time series forecasting.

    Science.gov (United States)

    Huarng, Kunhuang; Yu, Tiffany Hui-Kuang

    2006-04-01

    The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.

  17. An historically consistent and broadly applicable MRV system based on LiDAR sampling and Landsat time-series

    Science.gov (United States)

    W. Cohen; H. Andersen; S. Healey; G. Moisen; T. Schroeder; C. Woodall; G. Domke; Z. Yang; S. Stehman; R. Kennedy; C. Woodcock; Z. Zhu; J. Vogelmann; D. Steinwand; C. Huang

    2014-01-01

    The authors are developing a REDD+ MRV system that tests different biomass estimation frameworks and components. Design-based inference from a costly field plot network was compared to sampling with LiDAR strips and a smaller set of plots in combination with Landsat for disturbance monitoring. Biomass estimation uncertainties associated with these different data sets...

  18. Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis

    Science.gov (United States)

    Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal

    Missing values in time series data are a well-known and important problem which many researchers have studied extensively in various fields. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods. The comparison is done by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, can provide better imputation in comparison to other methods.

  19. Sample preparation for phosphoproteomic analysis of circadian time series in Arabidopsis thaliana.

    Science.gov (United States)

    Krahmer, Johanna; Hindle, Matthew M; Martin, Sarah F; Le Bihan, Thierry; Millar, Andrew J

    2015-01-01

    Systems biological approaches to study the Arabidopsis thaliana circadian clock have mainly focused on transcriptomics while little is known about the proteome, and even less about posttranslational modifications. Evidence has emerged that posttranslational protein modifications, in particular phosphorylation, play an important role for the clock and its output. Phosphoproteomics is the method of choice for a large-scale approach to gain more knowledge about rhythmic protein phosphorylation. Recent plant phosphoproteomics publications have identified several thousand phosphopeptides. However, the methods used in these studies are very labor-intensive and therefore not suitable to apply to a well-replicated circadian time series. To address this issue, we present and compare different strategies for sample preparation for phosphoproteomics that are compatible with large numbers of samples. Methods are compared regarding number of identifications, variability of quantitation, and functional categorization. We focus on the type of detergent used for protein extraction as well as methods for its removal. We also test a simple two-fraction separation of the protein extract. © 2015 Elsevier Inc. All rights reserved.

  20. Fourier Magnitude-Based Privacy-Preserving Clustering on Time-Series Data

    Science.gov (United States)

    Kim, Hea-Suk; Moon, Yang-Sae

    Privacy-preserving clustering (PPC for short) is important in publishing sensitive time-series data. Previous PPC solutions, however, either fail to preserve distance orders or incur privacy breaches. To solve this problem, we propose a new PPC approach that exploits the Fourier magnitudes of time-series. Our magnitude-based method does not cause a privacy breach even though its techniques and related parameters are publicly revealed. Using magnitudes only, however, incurs the distance order problem, and we thus present magnitude selection strategies to preserve as many Euclidean distance orders as possible. Through extensive experiments, we showcase the superiority of our magnitude-based approach.
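
    A minimal sketch of the overall idea: only Fourier magnitudes are released (phases, which would allow reconstruction, are withheld), a few strong coefficients are selected, and clustering is run on those magnitudes. The selection rule and the k-means clusterer are simplifications of the strategies discussed in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
length = 256
t = np.linspace(0, 1, length)

# Two groups of synthetic time series with different dominant frequencies
group_a = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 0.3, (30, length))
group_b = np.sin(2 * np.pi * 12 * t) + rng.normal(0, 0.3, (30, length))
data = np.vstack([group_a, group_b])

# Publish only Fourier magnitudes (phases are withheld); keeping a handful of the
# strongest coefficients mimics a magnitude selection strategy
mags = np.abs(np.fft.rfft(data, axis=1))
top = np.argsort(mags.mean(axis=0))[::-1][:10]
features = mags[:, top]

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels[:30].sum(), labels[30:].sum())       # the two groups separate cleanly
```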

  1. A Two-Dimensional Solar Tracking Stationary Guidance Method Based on Feature-Based Time Series

    Directory of Open Access Journals (Sweden)

    Keke Zhang

    2018-01-01

    Full Text Available The amount of satellite energy acquired has a direct impact on the operational capacities of the satellite. For practical high-functional-density microsatellites, the solar tracking guidance design of the solar panels plays an extremely important role. Targeting the stationary tracking problems of a new system that uses panels mounted on a two-dimensional turntable to acquire energy to the greatest extent, a two-dimensional solar tracking stationary guidance method based on feature-based time series was proposed under the constraint of limited satellite attitude coupling control capability. By analyzing solar vector variation characteristics within an orbit period and solar vector changes within the whole life cycle, the method establishes a two-dimensional solar tracking guidance model based on the feature-based time series to realize automatic switching of feature-based time series and stationary guidance under different β angles and maximum angular velocity control, and it is applicable to near-Earth orbits of all orbital inclinations. It was employed to design a two-dimensional solar tracking stationary guidance system, and a mathematical simulation of guidance performance was carried out under diverse conditions against the background of in-orbit application. The simulation results show that the solar tracking accuracy of the two-dimensional stationary guidance reaches 10° and below under the integrated constraints, which meets engineering application requirements.

  2. Fuzzy time-series based on Fibonacci sequence for stock price forecasting

    Science.gov (United States)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Jong Teoh, Hia

    2007-07-01

    Time-series models have been utilized to make reasonably accurate predictions in the areas of stock price movements, academic enrollments, weather, etc. For promoting the forecasting performance of fuzzy time-series models, this paper proposes a new model, which incorporates the concept of the Fibonacci sequence, the framework of Song and Chissom's model and the weighted method of Yu's model. This paper employs a 5-year period TSMC (Taiwan Semiconductor Manufacturing Company) stock price data and a 13-year period of TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) stock index data as experimental datasets. By comparing our forecasting performances with Chen's (Forecasting enrollments based on fuzzy time-series. Fuzzy Sets Syst. 81 (1996) 311-319), Yu's (Weighted fuzzy time-series models for TAIEX forecasting. Physica A 349 (2004) 609-624) and Huarng's (The application of neural networks to forecast fuzzy time series. Physica A 336 (2006) 481-491) models, we conclude that the proposed model surpasses in accuracy these conventional fuzzy time-series models.

  3. Satellite Image Time Series Decomposition Based on EEMD

    Directory of Open Access Journals (Sweden)

    Yun-long Kong

    2015-11-01

    Full Text Available Satellite Image Time Series (SITS) have recently been of great interest due to the emerging remote sensing capabilities for Earth observation. Trend and seasonal components are two crucial elements of SITS. In this paper, a novel framework of SITS decomposition based on Ensemble Empirical Mode Decomposition (EEMD) is proposed. EEMD is achieved by sifting an ensemble of adaptive orthogonal components called Intrinsic Mode Functions (IMFs). EEMD is noise-assisted and overcomes the drawback of mode mixing in conventional Empirical Mode Decomposition (EMD). Inspired by these advantages, the aim of this work is to employ EEMD to decompose SITS into IMFs and to choose relevant IMFs for the separation of seasonal and trend components. In a series of simulations, IMFs extracted by EEMD achieved a clear representation with physical meaning. The experimental results of 16-day compositions of Moderate Resolution Imaging Spectroradiometer (MODIS), Normalized Difference Vegetation Index (NDVI), and Global Environment Monitoring Index (GEMI) time series with disturbance illustrated the effectiveness and stability of the proposed approach to monitoring tasks, such as applications for the detection of abrupt changes.
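
    A short sketch of the decomposition step, assuming the third-party PyEMD package (distributed on PyPI as EMD-signal) and its EEMD interface; the NDVI-like series, the number of trials and the heuristic split of IMFs into seasonal-like and trend-like parts are all illustrative.

```python
import numpy as np
from PyEMD import EEMD          # third-party package, installed as "EMD-signal"

t = np.linspace(0, 8, 16 * 23)                           # ~8 years of 16-day composites
rng = np.random.default_rng(0)
ndvi = (0.3 * np.sin(2 * np.pi * t)                      # seasonal cycle
        + 0.02 * t                                       # slow greening trend
        + 0.05 * rng.normal(size=t.size))                # noise / disturbance

eemd = EEMD(trials=50)                                   # noise-assisted ensemble sifting
imfs = eemd.eemd(ndvi, t)                                # intrinsic mode functions
print(imfs.shape)

# Heuristic split: the fastest IMFs capture seasonality and noise,
# the slowest components approximate the trend
seasonal_like = imfs[:-2].sum(axis=0)
trend_like = imfs[-2:].sum(axis=0)
```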

  4. Performance enhancement of the single-phase series active filter by employing the load voltage waveform reconstruction and line current sampling delay reduction methods

    DEFF Research Database (Denmark)

    Senturk, O.S.; Hava, A.M.

    2011-01-01

    This paper proposes the waveform reconstruction method (WRM), which is utilized in the single-phase series active filter's (SAF's) control algorithm, in order to extract the load harmonic voltage component of voltage-harmonic-type single-phase diode rectifier loads. Employing WRM and the line current sampling delay reduction method, a single-phase SAF compensated system provides higher harmonic isolation performance and higher stability margins compared to the system using conventional synchronous-reference-frame-based methods. The analytical, simulation, and experimental studies of a 2.5 k...

  5. Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data

    Science.gov (United States)

    Hallac, David; Vare, Sagar; Boyd, Stephen; Leskovec, Jure

    2018-01-01

    Subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. Once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. For example, raw sensor data from a fitness-tracking application can be expressed as a timeline of a select few actions (i.e., walking, sitting, running). However, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. Furthermore, interpreting the resulting clusters is difficult, especially when the data is high-dimensional. Here we propose a new method of model-based clustering, which we call Toeplitz Inverse Covariance-based Clustering (TICC). Each cluster in the TICC method is defined by a correlation network, or Markov random field (MRF), characterizing the interdependencies between different observations in a typical subsequence of that cluster. Based on this graphical representation, TICC simultaneously segments and clusters the time series data. We solve the TICC problem through alternating minimization, using a variation of the expectation maximization (EM) algorithm. We derive closed-form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers (ADMM), respectively. We validate our approach by comparing TICC to several state-of-the-art baselines in a series of synthetic experiments, and we then demonstrate on an automobile sensor dataset how TICC can be used to learn interpretable clusters in real-world scenarios. PMID:29770257

  6. Silicon based ultrafast optical waveform sampling

    DEFF Research Database (Denmark)

    Ji, Hua; Galili, Michael; Pu, Minhao

    2010-01-01

    A 300 nm × 450 nm × 5 mm silicon nanowire is designed and fabricated for a four-wave-mixing-based non-linear optical gate. Based on this silicon nanowire, an ultra-fast optical sampling system is successfully demonstrated using a free-running fiber laser with a carbon nanotube-based mode-locker as the sampling source. A clear eye-diagram of a 320 Gbit/s data signal is obtained. The temporal resolution of the sampling system is estimated to be 360 fs.

  7. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  8. Case series and descriptive cohort studies in neurosurgery: the confusion and solution.

    Science.gov (United States)

    Esene, Ignatius N; Ngu, Julius; El Zoghby, Mohamed; Solaroglu, Ihsan; Sikod, Anna M; Kotb, Ali; Dechambenoit, Gilbert; El Husseiny, Hossam

    2014-08-01

    Case series (CS) are well-known designs in contemporary use in neurosurgery but are sometimes used in contexts that are incompatible with their true meaning as defined by epidemiologists. This inconsistent, inappropriate and incorrect use, and mislabeling impairs the appropriate indexing and sorting of evidence. Using PubMed, we systematically identified published articles that had "case series" in the "title" in 15 top-ranked neurosurgical journals from January 2008 to December 2012. The abstracts and/or full articles were scanned to identify those with descriptions of the principal method as being "case series" and then classified as "true case series" or "non-case series" by two independent investigators with 100 % inter-rater agreement. Sixty-four articles had the label "case series" in their "titles." Based on the definition of "case series" and our appraisal of the articles using Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines, 18 articles (28.13 %) were true case series, while 46 (71.87 %) were mislabeled. Thirty-five articles (54.69 %) mistook retrospective (descriptive) cohorts for CS. CS are descriptive with an outcome-based sampling, while "descriptive cohorts" have an exposure-based sampling of patients, followed over time to assess outcome(s). A comparison group is not a defining feature of a cohort study and distinguishes descriptive from analytic cohorts. A distinction between a case report, case series, and descriptive cohorts is absolutely necessary to enable the appropriate indexing, sorting, and application of evidence. Researchers need better training in methods and terminology, and editors and reviewers should scrutinize more carefully manuscripts claiming to be "case series" studies.

  9. Wet tropospheric delays forecast based on Vienna Mapping Function time series analysis

    Science.gov (United States)

    Rzepecka, Zofia; Kalita, Jakub

    2016-04-01

    It is well known that the dry part of the zenith tropospheric delay (ZTD) is much easier to model than the wet part (ZTW). The aim of this research is to apply stochastic modeling and prediction of ZTW using time series analysis tools. Application of time series analysis enables a closer understanding of ZTW behavior as well as short-term prediction of future ZTW values. The ZTW data used for the study were obtained from the GGOS service held by the Vienna University of Technology. The resolution of the data is six hours. ZTW values for the years 2010-2013 were adopted for the study. The International GNSS Service (IGS) permanent stations LAMA and GOPE, located in mid-latitudes, were selected for the investigations. Initially the seasonal part was separated and modeled using periodic signals and frequency analysis. The prominent annual and semi-annual signals were removed using sine and cosine functions. The autocorrelation of the resulting signal is significant for several days (20-30 samples). The residuals of this fitting were further analyzed and modeled with ARIMA processes. For both stations, optimal ARMA processes were obtained based on several criteria. On this basis, predicted ZTW values were computed for one day ahead, leaving white-noise residuals. The accuracy of the prediction can be estimated at about 3 cm.
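
    A sketch of the two-stage modelling described above, on synthetic 6-hourly data: annual and semi-annual harmonics are removed by least squares, an ARMA-type model is fitted to the residual with statsmodels, and a one-day-ahead (four-sample) forecast is formed. Model orders and the synthetic series are illustrative, not the LAMA/GOPE data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n = 4 * 365 * 4                                   # four years of 6-hourly samples
t = np.arange(n) / 4.0                            # time in days
seasonal_true = 0.05 * np.sin(2 * np.pi * t / 365.25) + 0.02 * np.sin(4 * np.pi * t / 365.25)
resid_true = np.zeros(n)
for i in range(1, n):                             # AR(1) stochastic component
    resid_true[i] = 0.95 * resid_true[i - 1] + 0.003 * rng.normal()
ztw = 0.12 + seasonal_true + resid_true           # synthetic wet delay in metres

# 1) Remove annual and semi-annual harmonics by least squares
A = np.column_stack([np.ones(n),
                     np.sin(2 * np.pi * t / 365.25), np.cos(2 * np.pi * t / 365.25),
                     np.sin(4 * np.pi * t / 365.25), np.cos(4 * np.pi * t / 365.25)])
coef, *_ = np.linalg.lstsq(A, ztw, rcond=None)
residual = ztw - A @ coef

# 2) Fit an ARMA model to the residual and forecast one day (4 samples) ahead
model = ARIMA(residual[:-4], order=(2, 0, 1)).fit()
forecast = model.forecast(steps=4) + (A @ coef)[-4:]
print(np.abs(forecast - ztw[-4:]))                # forecast error in metres
```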

  10. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    Reliability-based design of structural systems is considered. Especially systems where the reliability model is a series system of parallel systems are analysed. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization...

  11. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 1: Frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion
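
    The classical building block of this framework is the Lomb-Scargle periodogram; the sketch below applies scipy's implementation to an irregularly sampled series after a simple linear detrend (the paper instead treats the trend, the periodic component and their correlation jointly, and adds WOSA averaging and CARMA-based significance levels, none of which is reproduced here).

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
# Irregularly sampled record: a 41 kyr periodicity plus a linear trend and noise
t = np.sort(rng.uniform(0, 800, size=300))               # sample times in kyr, uneven spacing
y = 0.8 * np.sin(2 * np.pi * t / 41.0) + 0.002 * t + 0.5 * rng.normal(size=t.size)

# Remove the polynomial (here linear) trend before the classical periodogram
coef = np.polyfit(t, y, 1)
y_detrended = y - np.polyval(coef, t)
y_detrended -= y_detrended.mean()

periods = np.linspace(10, 200, 2000)                     # candidate periods in kyr
power = lombscargle(t, y_detrended, 2 * np.pi / periods)
print(periods[np.argmax(power)])                         # close to the true 41 kyr period
```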

  12. Analysis of Heavy-Tailed Time Series

    DEFF Research Database (Denmark)

    Xie, Xiaolei

    This thesis is about the analysis of heavy-tailed time series. We discuss tail properties of real-world equity return series and investigate the possibility that a single tail index is shared by all return series of actively traded equities in a market. Conditions for this hypothesis to be true are identified. We study the eigenvalues and eigenvectors of sample covariance and sample auto-covariance matrices of multivariate heavy-tailed time series, and particularly for time series with very high dimensions. Asymptotic approximations of the eigenvalues and eigenvectors of such matrices are found and expressed in terms of the parameters of the dependence structure, among others. Furthermore, we study an importance sampling method for estimating rare-event probabilities of multivariate heavy-tailed time series generated by matrix recursion. We show that the proposed algorithm is efficient in the sense...

  13. Quality Control Procedure Based on Partitioning of NMR Time Series

    Directory of Open Access Journals (Sweden)

    Michał Staniszewski

    2018-03-01

    Full Text Available The quality of magnetic resonance spectroscopy (MRS) depends on the stability of magnetic resonance (MR) system performance and optimal hardware functioning, which ensure adequate levels of signal-to-noise ratio (SNR) as well as good spectral resolution and minimal artifacts in the spectral data. MRS quality control (QC) protocols and methodologies are based on phantom measurements that are repeated regularly. In this work, a signal partitioning algorithm based on a dynamic programming (DP) method for QC assessment of the spectral data is described. The proposed algorithm allows detection of the change points, that is, the abrupt variations in the time series data. The proposed QC method was tested using simulated and real phantom data. Simulated data were randomly generated time series distorted by white noise. The real data were taken from the phantom quality control studies of the MRS scanner collected over four and a half years and analyzed by LCModel software. Along with the proposed algorithm, the performance of various literature methods was evaluated for a predefined number of change points, based on error values calculated by subtracting the mean value of each period between change points from the original data points. The time series were checked using external software, a set of external methods and the proposed tool, and the obtained results were comparable. The application of dynamic programming to the analysis of phantom MRS data is a novel approach to QC. The obtained results confirm that the presented change-point-detection tool can be used either for independent analysis of MRS (or any other) time series or as a part of quality control.
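
    A brief sketch of dynamic-programming change-point detection on a synthetic phantom QC series, using the third-party ruptures package as a stand-in for the authors' tool; the number of breakpoints, the cost model and the error measure are illustrative assumptions.

```python
import numpy as np
import ruptures as rpt          # third-party change-point detection package

rng = np.random.default_rng(0)
# Synthetic phantom QC series: piecewise-constant level (e.g. SNR) with abrupt shifts
snr = np.concatenate([rng.normal(40, 1.0, 200),
                      rng.normal(37, 1.0, 150),
                      rng.normal(42, 1.0, 250)])

# Dynamic-programming segmentation with a squared-error cost, for a chosen number of breaks
algo = rpt.Dynp(model="l2").fit(snr)
breakpoints = algo.predict(n_bkps=2)          # indices ending each segment
print(breakpoints)                            # approximately [200, 350, 600]

# Error measure used when comparing methods: deviation of data from segment means
segments = np.split(snr, breakpoints[:-1])
error = sum(np.sum((seg - seg.mean()) ** 2) for seg in segments)
print(error)
```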

  14. Identification of Dynamic Loads Based on Second-Order Taylor-Series Expansion Method

    OpenAIRE

    Li, Xiaowang; Deng, Zhongmin

    2016-01-01

    A new method based on the second-order Taylor-series expansion is presented to identify the structural dynamic loads in the time domain. This algorithm expresses the response vectors as Taylor-series approximation and then a series of formulas are deduced. As a result, an explicit discrete equation which associates system response, system characteristic, and input excitation together is set up. In a multi-input-multi-output (MIMO) numerical simulation study, sinusoidal excitation and white no...

  15. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    Science.gov (United States)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.

  16. Optimize the Coverage Probability of Prediction Interval for Anomaly Detection of Sensor-Based Monitoring Series

    Directory of Open Access Journals (Sweden)

    Jingyue Pang

    2018-03-01

    Full Text Available Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels, and provide uncertainty presentation, probability prediction methods (e.g., Gaussian process regression (GPR) and relevance vector machine (RVM)) are especially adaptable for performing anomaly detection on sensing series. Generally, one key parameter of prediction models is the coverage probability (CP), which controls the judging threshold of the testing sample and is generally set to a default value (e.g., 90% or 95%). There are few criteria to determine the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator of the receiver operating characteristic curve of prediction interval (ROC-PI) based on the definition of the ROC curve, which can depict the trade-off between the PI width and PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs, by the minimization of which the optimal CP is derived using the simulated annealing (SA) algorithm. Experiments conducted on two simulation datasets demonstrate the validity of the proposed method. In particular, an actual case study on sensing series from an on-orbit satellite illustrates its significant performance in practical application.
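
    A small sketch of how the coverage probability acts as the judging threshold, using scikit-learn's Gaussian process regression on a synthetic sensing series with injected anomalies; sweeping CP shows the trade-off that the ROC-PI curve and modified Youden index are designed to optimise. Kernel, CP values and anomaly sizes are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 400)
series = np.sin(t) + 0.05 * rng.normal(size=t.size)      # well-behaved sensing series

idx = rng.permutation(t.size)
train_idx, test_idx = np.sort(idx[:300]), np.sort(idx[300:])
y_test = series[test_idx].copy()
y_test[::10] += 0.6                                      # inject anomalies into held-out samples

gpr = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01), normalize_y=True)
gpr.fit(t[train_idx, None], series[train_idx])
mean, std = gpr.predict(t[test_idx, None], return_std=True)

def flagged(cp):
    """Samples outside the central prediction interval with coverage probability `cp`."""
    z = norm.ppf(0.5 + cp / 2)
    return (y_test < mean - z * std) | (y_test > mean + z * std)

# Sweeping CP shows the trade-off the ROC-PI curve formalises:
# a small CP flags many normal points, a large CP may miss anomalies
for cp in (0.80, 0.90, 0.95, 0.99):
    print(cp, int(flagged(cp).sum()), "of", y_test.size, "samples flagged")
```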

  17. MTU underfloor rail drives based on Series 1600 engines

    Energy Technology Data Exchange (ETDEWEB)

    Bamberger, Norbert; Lieb, Martin; Reich, Christian [MTU Friedrichshafen GmbH, Friedrichshafen (Germany)

    2013-05-15

    With the heavy demands now being placed on railcar drive systems, ever more powerful solutions are needed. For the new high-speed trains in Britain's Intercity Express Programme (IEP), Hitachi endorses the use of MTU's underfloor drives based on Series 1600 engines.

  18. Changes According to Incubation Periods in Some Microbiological Characteristics at Soil Samples of Some Soil Series from the Gelemen Agricultural Administration

    OpenAIRE

    KARA, Emine Erman

    1998-01-01

    Changes according to incubation periods in some microbiological characteristics of soil samples from soil series of the Gelemen Agricultural Administration were investigated in this study. The results show that bacteria and actinomycete populations had lower values in the first periods of incubation (30 ºC and field capacity) and increased in the following periods. However, the fungus population changed depending upon series properties and reached maximum values on the 24th and 32nd days after the beginning of incubation. During...

  19. Trend analysis using non-stationary time series clustering based on the finite element method

    Science.gov (United States)

    Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.

    2014-05-01

    In order to analyze low-frequency variability of climate, it is useful to model the climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering in a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods that can analyze multidimensional time series. One important attribute of this method is that it is not dependent on any statistical assumption and does not need local stationarity in the time series. In this paper, it is shown how the FEM-clustering method can be used to locate change points in the trend of temperature time series from in situ observations. This method is applied to the temperature time series of North Carolina (NC) and the results represent region-specific climate variability despite higher frequency harmonics in climatic time series. Next, we investigated the relationship between the climatic indices with the clusters/trends detected based on this clustering method. It appears that the natural variability of climate change in NC during 1950-2009 can be explained mostly by AMO and solar activity.

  20. Output Information Based Fault-Tolerant Iterative Learning Control for Dual-Rate Sampling Process with Disturbances and Output Delay

    Directory of Open Access Journals (Sweden)

    Hongfeng Tao

    2018-01-01

    Full Text Available For a class of single-input single-output (SISO) dual-rate sampling processes with disturbances and output delay, this paper presents a robust fault-tolerant iterative learning control algorithm based on output information. Firstly, the dual-rate sampling process with output delay is transformed into a discrete system in state-space model form with a slow sampling rate and without time delay by using lifting technology; then an output-information-based fault-tolerant iterative learning control scheme is designed and the control process is turned into an equivalent two-dimensional (2D) repetitive process. Moreover, based on repetitive process stability theory, the sufficient conditions for the stability of the system and the design method of the robust controller are given in terms of the linear matrix inequalities (LMIs) technique. Finally, the flow control simulations of two flow tanks in series demonstrate the feasibility and effectiveness of the proposed method.

  1. Volterra Series Based Distortion Effect

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    2010-01-01

    A large part of the characteristic sound of the electric guitar comes from nonlinearities in the signal path. Such nonlinearities may come from the input or output stage of the amplifier, which is often equipped with vacuum tubes, or from a dedicated distortion pedal. In this paper the Volterra series expansion for nonlinear systems is investigated with respect to generating good distortion. The Volterra series allows for unlimited adjustment of the level and frequency dependency of each distortion component. Subjectively relevant ways of linking the different orders are discussed.
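
    A minimal numpy sketch of a truncated second-order Volterra series used as a distortion effect: a linear kernel plus a quadratic kernel with memory, each of which can be shaped to set the level and frequency dependency of its distortion component. The kernels and the test tone are arbitrary illustrative choices.

```python
import numpy as np

def volterra_distortion(x, h1, h2):
    """Apply a truncated (second-order) discrete Volterra series to a signal.

    y[n] = sum_k h1[k] x[n-k] + sum_{k1,k2} h2[k1,k2] x[n-k1] x[n-k2]
    h1 is the 1-D linear kernel, h2 the 2-D quadratic kernel.
    """
    y = np.convolve(x, h1, mode="same")
    m = h2.shape[0]
    xpad = np.concatenate([np.zeros(m - 1), x])
    for n in range(len(x)):
        window = xpad[n:n + m][::-1]               # x[n], x[n-1], ..., x[n-m+1]
        y[n] += window @ h2 @ window
    return y

# Example: a mild linear kernel plus a small quadratic (even-harmonic) kernel
fs = 44100
t = np.arange(0, 0.02, 1 / fs)
guitar = np.sin(2 * np.pi * 440 * t)
h1 = np.array([0.9, 0.1])
h2 = 0.2 * np.outer([1.0, 0.3], [1.0, 0.3])        # separable quadratic kernel with memory
out = volterra_distortion(guitar, h1, h2)
print(out[:5])
```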

  2. The detection of local irreversibility in time series based on segmentation

    Science.gov (United States)

    Teng, Yue; Shang, Pengjian

    2018-06-01

    We propose a strategy for the detection of local irreversibility in stationary time series based on multiple scales. The detection is useful for evaluating the displacement of irreversibility toward local skewness. By means of this method, we can effectively discuss the local irreversible fluctuations of time series as the scale changes. The method was applied to simulated nonlinear signals generated by the ARFIMA process and the logistic map to show how the irreversibility functions react to increasing scale. The method was also applied to financial market series, i.e., the American, Chinese and European markets. The local irreversibility for different markets demonstrates distinct characteristics. Simulations and real data support the need to explore local irreversibility.

  3. Phase synchronization based minimum spanning trees for analysis of financial time series with nonlinear correlations

    Science.gov (United States)

    Radhakrishnan, Srinivasan; Duvvuru, Arjun; Sultornsanee, Sivarit; Kamarthi, Sagar

    2016-02-01

    The cross correlation coefficient has been widely applied in financial time series analysis, specifically for understanding chaotic behaviour in terms of stock price and index movements during crisis periods. To better understand time series correlation dynamics, the cross correlation matrices are represented as networks, in which a node stands for an individual time series and a link indicates cross correlation between a pair of nodes. These networks are converted into simpler trees using different schemes. In this context, Minimum Spanning Trees (MST) are the most favoured tree structures because of their ability to preserve all the nodes and thereby retain essential information imbued in the network. Although cross correlations underlying MSTs capture essential information, they do not faithfully capture the dynamic behaviour embedded in the time series data of financial systems, because cross correlation is a reliable measure only if the relationship between the time series is linear. To address this issue, this work investigates a new measure called phase synchronization (PS) for establishing correlations among different time series which relate to one another, linearly or nonlinearly. In this approach the strength of a link between a pair of time series (nodes) is determined by the level of phase synchronization between them. We compare the performance of the phase synchronization based MST with the cross correlation based MST along selected network measures across a temporal frame that includes economically good and crisis periods. We observe agreement in the directionality of the results across these two methods. They show similar trends, upward or downward, when comparing selected network measures. Though both methods give similar trends, the phase synchronization based MST is a more reliable representation of the dynamic behaviour of financial systems than the cross correlation based MST because of the former's ability to quantify nonlinear relationships among time
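
    A compact sketch of the pipeline: instantaneous phases are obtained with the Hilbert transform, the phase-locking value gives the synchronization strength between each pair of series (capturing nonlinear relationships that a linear cross correlation can miss), strengths are converted to distances, and scipy extracts the minimum spanning tree. The synthetic series and the distance transform are illustrative.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.sparse.csgraph import minimum_spanning_tree

def phase_locking_value(x, y):
    """Phase synchronization strength between two series (1 = fully locked, 0 = none)."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 2000)
# Three synthetic "index" series: two share a common, nonlinearly transformed driver
driver = np.sin(2 * np.pi * t) + 0.2 * rng.normal(size=t.size)
series = np.vstack([driver,
                    np.tanh(2 * driver) + 0.2 * rng.normal(size=t.size),  # nonlinear relative
                    rng.normal(size=t.size)])                             # unrelated series

n = series.shape[0]
plv = np.ones((n, n))
for i in range(n):
    for j in range(i + 1, n):
        plv[i, j] = plv[j, i] = phase_locking_value(series[i], series[j])

# Convert synchronization to a distance and extract the minimum spanning tree
distance = np.sqrt(2 * (1 - plv))
np.fill_diagonal(distance, 0)
mst = minimum_spanning_tree(distance).toarray()
print(np.round(mst, 3))          # nonzero entries are the retained tree edges
```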

  4. Reliable Quantification of the Potential for Equations Based on Spot Urine Samples to Estimate Population Salt Intake

    DEFF Research Database (Denmark)

    Huang, Liping; Crino, Michelle; Wu, Jason Hy

    2016-01-01

    BACKGROUND: Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. OBJECTIVE: The aim of this study is to identify a reliable method for estimating mean population salt intake from spot urine samples. This will be done by comparing the performance of existing equations against one another and against estimates derived from 24-hour urine samples. The effects of factors such as ethnicity, sex, age, body mass index, antihypertensive drug use, health status... to a standard format. Individual participant records will be compiled and a series of analyses will be completed to: (1) compare existing equations for estimating 24-hour salt intake from spot urine samples with 24-hour urine samples, and assess the degree of bias according to key demographic and clinical...

  5. Connected to TV series: Quantifying series watching engagement.

    Science.gov (United States)

    Tóth-Király, István; Bőthe, Beáta; Tóth-Fáber, Eszter; Hága, Győző; Orosz, Gábor

    2017-12-01

    Background and aims: Television series watching stepped into a new golden age with the appearance of online series. Being highly involved in series could potentially lead to negative outcomes, but highly engaged viewers should be distinguished from problematic ones. As no appropriate measure is available for identifying such differences, a short and valid measure was constructed in a multistudy investigation: the Series Watching Engagement Scale (SWES). Methods: In Study 1 (N_Sample1 = 740 and N_Sample2 = 740), exploratory structural equation modeling and confirmatory factor analysis were used to identify the most important facets of series watching engagement. In Study 2 (N = 944), measurement invariance of the SWES was investigated between males and females. In Study 3 (N = 1,520), latent profile analysis (LPA) was conducted to identify subgroups of viewers. Results: Five factors of engagement were identified in Study 1 that are of major relevance: persistence, identification, social interaction, overuse, and self-development. Study 2 supported the high levels of equivalence between males and females. In Study 3, three groups of viewers (low-, medium-, and high-engagement viewers) were identified. The highly engaged at-risk group can be differentiated from the other two along key variables of watching time and personality. Discussion: The present findings support the overall validity, reliability, and usefulness of the SWES, and the results of the LPA showed that it might be useful to identify at-risk viewers before the development of problematic use.

  6. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    Science.gov (United States)

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

    Detection of clustered microcalcification (MC) from mammograms plays an essential role in computer-aided diagnosis of early stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM), called G-FSVM, is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm. Then a series of fuzzy SVMs are integrated for classification, with each group of samples from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions are selected from 239 mammograms, and the measured Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR*(1-FPR) are 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results from synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
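
    A simplified scikit-learn sketch of the grouping idea: a Gaussian mixture fitted by EM partitions the sample space, and one weighted SVM is trained per group, with the mixture responsibilities standing in for the fuzzy membership values of the paper's fuzzy SVM. Data, group count and kernel settings are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic two-class data (lesion vs normal tissue features) with two sub-patterns
X = np.vstack([rng.normal([0, 0], 0.5, (150, 2)),    # pattern A, class 0
               rng.normal([0, 2], 0.5, (150, 2)),    # pattern A, class 1
               rng.normal([6, 0], 0.5, (150, 2)),    # pattern B, class 0
               rng.normal([6, 3], 0.5, (150, 2))])   # pattern B, class 1
y = np.array([0] * 150 + [1] * 150 + [0] * 150 + [1] * 150)

# 1) EM-based partition of the sample space into groups
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
groups = gmm.predict(X)
resp = gmm.predict_proba(X)                  # soft responsibilities, used as fuzzy weights

# 2) One weighted SVM per group (sample_weight stands in for fuzzy membership)
models = {}
for g in range(2):
    mask = groups == g
    models[g] = SVC(kernel="rbf", gamma="scale").fit(X[mask], y[mask],
                                                     sample_weight=resp[mask, g])

# 3) Classify a new sample with the SVM of its most likely group
x_new = np.array([[5.8, 2.7]])
g_new = int(gmm.predict(x_new)[0])
print("group:", g_new, "predicted class:", int(models[g_new].predict(x_new)[0]))
```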

  7. Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition

    Science.gov (United States)

    Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.

    2005-12-01

    Traditionally, forecasting and characterization of hydrologic systems have been performed using many techniques. Stochastic linear methods such as AR and ARIMA and nonlinear ones such as tools based on statistical learning theory have been extensively used. The common difficulty for all methods is the determination of sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series, combining the wavelet transform and a nonlinear model. The present model employs the merits of both the wavelet transform and nonlinear time series modelling. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are then simulated by the nonlinear model. The hybrid methodology is formulated in a manner to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand. Prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results show that the wavelet-based time series model can be used for simulating and forecasting hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.
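
    A minimal sketch of the decompose-then-model idea, assuming PyWavelets is available: the series is split into mono-component signals by reconstructing one wavelet level at a time, and a plain least-squares AR model stands in for the nonlinear component model described in the abstract.

```python
import numpy as np
import pywt

def wavelet_components(x, wavelet="db4", level=3):
    """Split a series into mono-component signals by zeroing all but one
    set of wavelet coefficients before reconstruction."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    components = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(kept, wavelet)[:len(x)])
    return components  # the components sum (approximately) back to x

def fit_ar_forecast(comp, order=3, horizon=12):
    """Simple AR(p) fit by least squares, a stand-in for the nonlinear model."""
    X = np.column_stack([comp[i:len(comp) - order + i] for i in range(order)])
    y = comp[order:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    hist = list(comp[-order:])
    preds = []
    for _ in range(horizon):
        nxt = float(np.dot(beta, hist[-order:]))
        preds.append(nxt)
        hist.append(nxt)
    return np.array(preds)

# the hybrid forecast is the sum of the per-component forecasts
x = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
forecast = sum(fit_ar_forecast(c) for c in wavelet_components(x))
```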

  8. Multiscale multifractal multiproperty analysis of financial time series based on Rényi entropy

    Science.gov (United States)

    Yujun, Yang; Jianping, Li; Yimei, Yang

    This paper introduces a multiscale multifractal multiproperty analysis based on Rényi entropy (3MPAR) method to analyze the short-range and long-range characteristics of financial time series, and then applies this method to time series of five properties in four stock indices. Combining the two analysis techniques of Rényi entropy and multifractal detrended fluctuation analysis (MFDFA), the 3MPAR method focuses on the curves of Rényi entropy and the generalized Hurst exponent of five properties of the four stock time series, which allows us to study more universal and subtle fluctuation characteristics of financial time series. By analyzing the curves of the Rényi entropy and the profiles of the logarithmic distribution of MFDFA for five properties of the four stock indices, the 3MPAR method reveals fluctuation characteristics of the financial time series and the stock markets. It also reveals richer information about the financial time series by comparing the profiles of five properties of the four stock indices. In this paper, we focus not only on the multifractality of the time series but also on the fluctuation characteristics of the financial time series and subtle differences between the time series of different properties. We find that financial time series are far more complex than reported in studies that use only one property of the time series.
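
    For readers unfamiliar with the entropy side of the method, the sketch below shows a plain histogram-based Rényi entropy estimate for a single property (e.g., a return series); it is only one ingredient of the 3MPAR analysis, which additionally varies the scale and combines the result with MFDFA.

```python
import numpy as np

def renyi_entropy(x, q, bins=50):
    """Rényi entropy of order q from a histogram estimate of the distribution;
    q -> 1 recovers the Shannon entropy."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1.0 - q)

# sweep q to obtain the Rényi entropy curve of a (heavy-tailed) return series
returns = np.random.standard_t(df=3, size=2000)
curve = [renyi_entropy(returns, q) for q in np.linspace(-5, 5, 21)]
```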

  9. ESTIMATING RELIABILITY OF DISTURBANCES IN SATELLITE TIME SERIES DATA BASED ON STATISTICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Z.-G. Zhou

    2016-06-01

    Full Text Available Normally, the status of land cover is inherently dynamic and changes continuously on a temporal scale. However, disturbances or abnormal changes of land cover (caused by, for example, forest fire, flood, deforestation, and plant diseases) occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is of importance for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, the detection results are only labelled with “Change/No change” by most of the present methods, while few methods focus on estimating the reliability (or confidence level) of the detected disturbances in image time series. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on Confidence Intervals (CI) and Confidence Levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood that occurred around the border of Russia and China. Results demonstrate that the method can estimate the reliability of disturbances detected in satellite image time series with an estimation error of less than 5% and an overall accuracy of up to 90%.
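
    A minimal sketch of the forecast-and-test idea, assuming NumPy/SciPy: a harmonic season-plus-trend model (a simplified stand-in for the BFAST history model) is fitted to the historical series, and the reliability of a detected disturbance is expressed as the confidence level at which the new observation falls outside the forecast interval.

```python
import numpy as np
from scipy.stats import norm

def fit_season_trend(t, y, period=23):
    """Linear trend plus one harmonic, fitted by least squares; 'period' assumes
    23 observations per year (e.g. 16-day composites)."""
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = (y - X @ beta).std(ddof=X.shape[1])
    return beta, sigma

def disturbance_reliability(t_new, y_new, beta, sigma, period=23):
    """Confidence level at which each new observation lies outside the forecast
    interval; values close to 1 indicate a reliably detected disturbance."""
    X_new = np.column_stack([np.ones_like(t_new), t_new,
                             np.sin(2 * np.pi * t_new / period),
                             np.cos(2 * np.pi * t_new / period)])
    z = np.abs(y_new - X_new @ beta) / sigma
    return 2 * norm.cdf(z) - 1
```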

  10. Automated CBED processing: Sample thickness estimation based on analysis of zone-axis CBED pattern

    Energy Technology Data Exchange (ETDEWEB)

    Klinger, M., E-mail: klinger@post.cz; Němec, M.; Polívka, L.; Gärtnerová, V.; Jäger, A.

    2015-03-15

    An automated processing of convergent beam electron diffraction (CBED) patterns is presented. The proposed methods are used in an automated tool for estimating the thickness of transmission electron microscopy (TEM) samples by matching an experimental zone-axis CBED pattern with a series of patterns simulated for known thicknesses. The proposed tool detects CBED disks, localizes a pattern in detected disks and unifies the coordinate system of the experimental pattern with the simulated one. The experimental pattern is then compared disk-by-disk with a series of simulated patterns each corresponding to different known thicknesses. The thickness of the most similar simulated pattern is then taken as the thickness estimate. The tool was tested on [0 1 1] Si, [0 1 0] α-Ti and [0 1 1] α-Ti samples prepared using different techniques. Results of the presented approach were compared with thickness estimates based on analysis of CBED patterns in two beam conditions. The mean difference between these two methods was 4.1% for the FIB-prepared silicon samples, 5.2% for the electro-chemically polished titanium and 7.9% for Ar{sup +} ion-polished titanium. The proposed techniques can also be employed in other established CBED analyses. Apart from the thickness estimation, it can potentially be used to quantify lattice deformation, structure factors, symmetry, defects or extinction distance. - Highlights: • Automated TEM sample thickness estimation using zone-axis CBED is presented. • Computer vision and artificial intelligence are employed in CBED processing. • This approach reduces operator effort, analysis time and increases repeatability. • Individual parts can be employed in other analyses of CBED/diffraction pattern.

  11. Machine learning methods as a tool to analyse incomplete or irregularly sampled radon time series data.

    Science.gov (United States)

    Janik, M; Bossew, P; Kurihara, O

    2018-07-15

    Machine learning is a class of statistical techniques which has proven to be a powerful tool for modelling the behaviour of complex systems, in which response quantities depend on assumed controls or predictors in a complicated way. In this paper, as our first purpose, we propose the application of machine learning to reconstruct incomplete or irregularly sampled indoor radon (222Rn) time series. The physical assumption underlying the modelling is that the Rn concentration in the air is controlled by environmental variables such as air temperature and pressure. The algorithms "learn" from complete sections of the multivariate series, derive a dependence model and apply it to sections where the controls are available but not the response (Rn), and in this way complete the Rn series. Three machine learning techniques are applied in this study, namely random forest, its extension called the gradient boosting machine, and deep learning. For comparison, we apply classical multiple regression in a generalized linear model version. Performance of the models is evaluated through different metrics. The performance of the gradient boosting machine is found to be superior to that of the other techniques. By applying learning machines, we show, as our second purpose, that missing data or periods of Rn series data can be reasonably reconstructed and resampled on a regular grid, provided that data for appropriate physical controls are available. The techniques also identify to which degree the assumed controls contribute to imputing missing Rn values. Our third purpose, though no less important from the viewpoint of physics, is identifying to which degree physical, in this case environmental, variables are relevant as Rn predictors, or in other words, which predictors explain most of the temporal variability of Rn. We show that the variables which contribute most to the Rn series reconstruction are temperature, relative humidity and day of the year. The first two are physical
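
    The reconstruction step can be illustrated with a short scikit-learn sketch: a gradient boosting model is trained on rows where both the radon response and the environmental controls are known, and is then used to fill the gaps. The column names here are hypothetical placeholders, not the study's actual variable names.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# hypothetical column names; the controls are environmental variables assumed
# to be available even where the radon response is missing
CONTROLS = ["temperature", "relative_humidity", "pressure", "day_of_year"]

def impute_radon(df: pd.DataFrame) -> pd.Series:
    """Learn radon ~ controls on complete rows, then fill gaps in the series."""
    known = df["radon"].notna()
    model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
    model.fit(df.loc[known, CONTROLS], df.loc[known, "radon"])
    filled = df["radon"].copy()
    filled[~known] = model.predict(df.loc[~known, CONTROLS])
    return filled
```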

  12. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 2: Extension to time-frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is however defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Monte Carlo Markov chain methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.

  13. Is mindfulness-based therapy an effective intervention for obsessive-intrusive thoughts: a case series.

    Science.gov (United States)

    Wilkinson-Tough, Megan; Bocci, Laura; Thorne, Kirsty; Herlihy, Jane

    2010-01-01

    Despite the efficacy of cognitive-behavioural interventions in improving the experience of obsessions and compulsions, some people do not benefit from this approach. The present research uses a case series design to establish whether mindfulness-based therapy could benefit those experiencing obsessive-intrusive thoughts by targeting thought-action fusion and thought suppression. Three participants received a relaxation control intervention followed by a six-session mindfulness-based intervention which emphasized daily practice. Following therapy all participants demonstrated reductions in Yale-Brown Obsessive-Compulsive Scale scores to below clinical levels, with two participants maintaining this at follow-up. Qualitative analysis of post-therapy feedback suggested that mindfulness skills such as observation, awareness and acceptance were seen as helpful in managing thought-action fusion and suppression. Despite being limited by small participant numbers, these results suggest that mindfulness may be beneficial to some people experiencing intrusive unwanted thoughts and that further research could establish the possible efficacy of this approach in larger samples. Copyright (c) 2009 John Wiley & Sons, Ltd.

  14. Detecting determinism with improved sensitivity in time series: rank-based nonlinear predictability score.

    Science.gov (United States)

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).

  15. Automated classification of Permanent Scatterers time-series based on statistical characterization tests

    Science.gov (United States)

    Berti, Matteo; Corsini, Alessandro; Franceschini, Silvia; Iannacone, Jean Pascal

    2013-04-01

    The application of space-borne synthetic aperture radar interferometry has progressed, over the last two decades, from the pioneering use of single interferograms for analyzing changes on the earth's surface to the development of advanced multi-interferogram techniques to analyze any sort of natural phenomenon which involves movements of the ground. The success of multi-interferogram techniques in the analysis of natural hazards such as landslides and subsidence is widely documented in the scientific literature and demonstrated by the consensus among end-users. Despite the great potential of this technique, radar interpretation of slope movements is generally based on the sole analysis of average displacement velocities, while the information embedded in multi-interferogram time series is often overlooked if not completely neglected. The underuse of PS time series is probably due to the detrimental effect of residual atmospheric errors, which leave the PS time series characterized by erratic, irregular fluctuations that are often difficult to interpret, and also to the difficulty of performing a visual, supervised analysis of the time series for a large dataset. In this work we present a procedure for automatic classification of PS time series based on a series of statistical characterization tests. The procedure allows the time series to be classified into six distinctive target trends (0=uncorrelated; 1=linear; 2=quadratic; 3=bilinear; 4=discontinuous without constant velocity; 5=discontinuous with change in velocity) and retrieves for each trend a series of descriptive parameters which can be efficiently used to characterize the temporal changes of ground motion. The classification algorithms were developed and tested using an ENVISAT dataset available in the frame of the EPRS-E project (Extraordinary Plan of Environmental Remote Sensing) of the Italian Ministry of Environment (track "Modena", Northern Apennines). This dataset was generated using standard processing, then the

  16. Intuitionistic Fuzzy Time Series Forecasting Model Based on Intuitionistic Fuzzy Reasoning

    Directory of Open Access Journals (Sweden)

    Ya’nan Wang

    2016-01-01

    Full Text Available Fuzzy set theory cannot describe the data comprehensively, which has greatly limited the objectivity of fuzzy time series in uncertain data forecasting. In this regard, an intuitionistic fuzzy time series forecasting model is built. In the new model, a fuzzy clustering algorithm is used to divide the universe of discourse into unequal intervals, and a more objective technique for ascertaining the membership function and non-membership function of the intuitionistic fuzzy set is proposed. On these bases, forecast rules based on intuitionistic fuzzy approximate reasoning are established. Finally, comparative experiments on the enrollments of the University of Alabama and the Taiwan Stock Exchange Capitalization Weighted Stock Index are carried out. The results show that the new model has a clear advantage in improving forecast accuracy.

  17. [Predicting Incidence of Hepatitis E in China Using Fuzzy Time Series Based on Fuzzy C-Means Clustering Analysis].

    Science.gov (United States)

    Luo, Yi; Zhang, Tao; Li, Xiao-song

    2016-05-01

    To explore the application of a fuzzy time series model based on fuzzy c-means clustering in forecasting the monthly incidence of Hepatitis E in mainland China. A predictive model (fuzzy time series method based on fuzzy c-means clustering) was developed using Hepatitis E incidence data in mainland China between January 2004 and July 2014. The incidence data from August 2014 to November 2014 were used to test the fitness of the predictive model. The forecasting results were compared with those resulting from traditional fuzzy time series models. The fuzzy time series model based on fuzzy c-means clustering had a fitting mean squared error (MSE) of 0.0011 and a forecasting MSE of 6.9775 × 10⁻⁴, compared with 0.0017 and 0.0014 from the traditional forecasting model. The results indicate that the fuzzy time series model based on fuzzy c-means clustering has a better performance in forecasting the incidence of Hepatitis E.

  18. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV)...... application despite the large sample. Unit root tests based on the IV estimator have better finite sample properties in this context....

  19. A novel water quality data analysis framework based on time-series data mining.

    Science.gov (United States)

    Deng, Weihui; Wang, Guoyin

    2017-07-01

    The rapid development of time-series data mining provides an emerging method for water resource management research. In this paper, based on the time-series data mining methodology, we propose a novel and general analysis framework for water quality time-series data. It consists of two parts: implementation components and common tasks of time-series data mining in water quality data. In the first part, we propose to granulate the time series into several two-dimensional normal clouds and calculate the similarities at the granulated level. On the basis of the similarity matrix, the similarity search, anomaly detection, and pattern discovery tasks in the water quality time-series instance dataset can be easily implemented in the second part. We present a case study of this analysis framework on weekly Dissolved Oxygen (DO) time-series data collected from five monitoring stations on the upper reaches of the Yangtze River, China. The case study revealed the relationship between water quality in the mainstream and its tributary, as well as the main changing patterns of DO. The experimental results show that the proposed analysis framework is a feasible and efficient method for mining hidden and valuable knowledge from historical water quality time-series data. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Markov transition probability-based network from time series for characterizing experimental two-phase flow

    International Nuclear Information System (INIS)

    Gao Zhong-Ke; Hu Li-Dan; Jin Ning-De

    2013-01-01

    We generate a directed weighted complex network by a method based on Markov transition probability to represent an experimental two-phase flow. We first systematically carry out gas-liquid two-phase flow experiments for measuring the time series of flow signals. Then we construct directed weighted complex networks from various time series in terms of a network generation method based on Markov transition probability. We find that the generated network inherits the main features of the time series in the network structure. In particular, the networks from time series with different dynamics exhibit distinct topological properties. Finally, we construct two-phase flow directed weighted networks from experimental signals and associate the dynamic behavior of gas-liquid two-phase flow with the topological statistics of the generated networks. The results suggest that the topological statistics of two-phase flow networks allow quantitative characterization of the dynamic flow behavior in the transitions among different gas-liquid flow patterns. (general)
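
    The network construction itself is simple to sketch: bin the signal amplitude into states (nodes) and let the edge weights be the empirical Markov transition probabilities between consecutive samples. The NumPy sketch below illustrates this generic recipe; it is not tuned to the flow-signal preprocessing used in the study.

```python
import numpy as np

def markov_transition_network(x, n_states=8):
    """Build a directed weighted network whose nodes are amplitude bins of the
    series and whose edge weights are Markov transition probabilities."""
    x = np.asarray(x, dtype=float)
    # assign each sample to an amplitude bin (node)
    edges = np.linspace(x.min(), x.max(), n_states + 1)
    states = np.clip(np.digitize(x, edges) - 1, 0, n_states - 1)
    # count transitions between consecutive samples
    W = np.zeros((n_states, n_states))
    for s, t in zip(states[:-1], states[1:]):
        W[s, t] += 1
    # row-normalise counts into transition probabilities (edge weights)
    row_sums = W.sum(axis=1, keepdims=True)
    return np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)
```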

  1. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    OpenAIRE

    Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting m...

  2. Trend analysis using non-stationary time series clustering based on the finite element method

    OpenAIRE

    Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.

    2014-01-01

    In order to analyze low-frequency variability of climate, it is useful to model the climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering in a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods ...

  3. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    International Nuclear Information System (INIS)

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-01-01

    Sampling-based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. due to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of uncertainty has been minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible compared to the impact from sampling the epistemic uncertainties. Obviously, this process may cause high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full-scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a large reduction in computing time, by factors on the order of 100. (authors)

  4. Theory of sampling and its application in tissue based diagnosis

    Directory of Open Access Journals (Sweden)

    Kayser Gian

    2009-02-01

    Full Text Available Abstract Background A general theory of sampling and its application in tissue-based diagnosis is presented. Sampling is defined as the extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on the search for the localization of specific compartments within the basic space, and the search for the presence of specific compartments. Methods When a sampling procedure is applied in diagnostic processes two different procedures can be distinguished: (I) the evaluation of the diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and (II) the probability of detecting these basic units. Sampling can be performed without or with external knowledge, such as the size of the searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation-invariant transformation results in Krige's formula, which is widely used in the search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationships. The first method is called random sampling, the second stratified sampling. Results Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction, and numerical, boundary and surface densities. Stratified sampling requires knowledge of the objects (and their features) and evaluates spatial features in relation to

  5. Modular microfluidic system for biological sample preparation

    Science.gov (United States)

    Rose, Klint A.; Mariella, Jr., Raymond P.; Bailey, Christopher G.; Ness, Kevin Dean

    2015-09-29

    A reconfigurable modular microfluidic system for preparation of a biological sample including a series of reconfigurable modules for automated sample preparation adapted to selectively include a) a microfluidic acoustic focusing filter module, b) a dielectrophoresis bacteria filter module, c) a dielectrophoresis virus filter module, d) an isotachophoresis nucleic acid filter module, e) a lysis module, and f) an isotachophoresis-based nucleic acid filter.

  6. Manualized Family-Based Treatment for Anorexia Nervosa: A Case Series.

    Science.gov (United States)

    Le Grange, Daniel; Binford, Roslyn; Loeb, Katharine L.

    2005-01-01

    Objective: The purpose of this study was to describe a case series of children and adolescents (mean age = 14.5 years, SD = 2.3; range 9-18) with anorexia nervosa who received manualized family-based treatment for their eating disorder. Method: Forty-five patients with anorexia nervosa were compared pre- and post-treatment on weight and menstrual…

  7. Novel inhibitors of IMPDH: a highly potent and selective quinolone-based series.

    Science.gov (United States)

    Watterson, Scott H; Carlsen, Marianne; Dhar, T G Murali; Shen, Zhongqi; Pitts, William J; Guo, Junqing; Gu, Henry H; Norris, Derek; Chorba, John; Chen, Ping; Cheney, Daniel; Witmer, Mark; Fleener, Catherine A; Rouleau, Katherine; Townsend, Robert; Hollenbaugh, Diane L; Iwanowicz, Edwin J

    2003-02-10

    A series of novel quinolone-based small molecule inhibitors of inosine monophosphate dehydrogenase (IMPDH) was explored. The synthesis and the structure-activity relationships (SARs) derived from in vitro studies are described.

  8. Using the modified sample entropy to detect determinism

    Energy Technology Data Exchange (ETDEWEB)

    Xie Hongbo, E-mail: xiehb@sjtu.or [Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong); Department of Biomedical Engineering, Jiangsu University, Zhenjiang (China); Guo Jingyi [Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong); Zheng Yongping, E-mail: ypzheng@ieee.or [Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong); Reseach Institute of Innovative Products and Technologies, Hong Kong Polytechnic University (Hong Kong)

    2010-08-23

    A modified sample entropy (mSampEn), based on a nonlinear continuous and convex function, has been proposed and proven to be superior to the standard sample entropy (SampEn) in several aspects. In this Letter, we empirically investigate the ability of the mSampEn statistic combined with the surrogate data method to detect determinism. The effects of dataset length and noise on the ability of the proposed method to differentiate between deterministic and stochastic dynamics are tested on several benchmark time series. The noise performance of the mSampEn statistic is also compared with that of the singular value decomposition (SVD) and symplectic geometry spectrum (SGS) based methods. The results indicate that the mSampEn statistic is a robust index for detecting determinism in short and noisy time series.
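
    As a point of reference, the standard SampEn that mSampEn modifies can be computed as below; the modification in the Letter replaces the hard Heaviside match count with a continuous, convex weighting, which is not reproduced here.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Standard sample entropy SampEn(m, r) with Chebyshev distance and
    tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(dim):
        # templates of length 'dim'; self-matches are excluded from the count
        templates = np.array([x[i:i + dim] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d <= r) - 1
        return count

    B = count_matches(m)        # matches of length m
    A = count_matches(m + 1)    # matches of length m + 1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```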

  9. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  10. Ocean time-series near Bermuda: Hydrostation S and the US JGOFS Bermuda Atlantic time-series study

    Science.gov (United States)

    Michaels, Anthony F.; Knap, Anthony H.

    1992-01-01

    Bermuda is the site of two ocean time-series programs. At Hydrostation S, the ongoing biweekly profiles of temperature, salinity and oxygen now span 37 years. This is one of the longest open-ocean time-series data sets and provides a view of decadal scale variability in ocean processes. In 1988, the U.S. JGOFS Bermuda Atlantic Time-series Study began a wide range of measurements at a frequency of 14-18 cruises each year to understand temporal variability in ocean biogeochemistry. On each cruise, the data range from chemical analyses of discrete water samples to data from electronic packages of hydrographic and optics sensors. In addition, a range of biological and geochemical rate measurements are conducted that integrate over time-periods of minutes to days. This sampling strategy yields a reasonable resolution of the major seasonal patterns and of decadal scale variability. The Sargasso Sea also has a variety of episodic production events on scales of days to weeks and these are only poorly resolved. In addition, there is a substantial amount of mesoscale variability in this region and some of the perceived temporal patterns are caused by the intersection of the biweekly sampling with the natural spatial variability. In the Bermuda time-series programs, we have added a series of additional cruises to begin to assess these other sources of variation and their impacts on the interpretation of the main time-series record. However, the adequate resolution of higher frequency temporal patterns will probably require the introduction of new sampling strategies and some emerging technologies such as biogeochemical moorings and autonomous underwater vehicles.

  11. Evaluating the coefficients of autocorrelation in a series of annual run-off of the Far East rivers

    Energy Technology Data Exchange (ETDEWEB)

    Sakharyuk, A V

    1981-01-01

    The autocorrelation coefficients of annual river run-off series are evaluated using group analysis, based on the distribution law of sample correlation coefficients for time series that follow a Pearson type III distribution.

  12. New prediction of chaotic time series based on local Lyapunov exponent

    International Nuclear Information System (INIS)

    Zhang Yong

    2013-01-01

    A new method of predicting chaotic time series is presented based on a local Lyapunov exponent, by quantitatively measuring the exponential rate of separation or attraction of two infinitely close trajectories in state space. After reconstructing state space from one-dimensional chaotic time series, neighboring multiple-state vectors of the predicting point are selected to deduce the prediction formula by using the definition of the local Lyapunov exponent. Numerical simulations are carried out to test its effectiveness and verify its higher precision over two older methods. The effects of the number of referential state vectors and added noise on forecasting accuracy are also studied numerically. (general)
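
    The general workflow (reconstruct the state space, select neighbouring state vectors of the predicting point, and combine their evolutions into a forecast) can be sketched as follows. The sketch simply averages the neighbours' one-step images rather than applying the local-Lyapunov-exponent formula derived in the paper.

```python
import numpy as np

def embed(x, dim=3, tau=1):
    """Delay-coordinate reconstruction of the state space."""
    n = len(x) - (dim - 1) * tau
    return np.array([x[i:i + dim * tau:tau] for i in range(n)])

def local_predict(x, dim=3, tau=1, k=5):
    """One-step prediction from the k nearest neighbours of the current state;
    the neighbours' one-step images are averaged as a simple local model."""
    x = np.asarray(x, dtype=float)
    states = embed(x, dim, tau)
    current, history = states[-1], states[:-1]
    dists = np.linalg.norm(history - current, axis=1)
    idx = np.argsort(dists)[:k]
    # neighbour i spans x[i .. i+(dim-1)*tau]; its one-step image follows it
    images = [x[i + (dim - 1) * tau + 1] for i in idx]
    return float(np.mean(images))
```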

  13. Taylor Series-Based Long-Term Creep-Life Prediction of Alloy 617

    International Nuclear Information System (INIS)

    Yin, Song Nan; Kim, Woo Gon; Kim, Yong Wan; Park, Jae Young; Kim, Soen Jin

    2010-01-01

    In this study, a Taylor series (T-S) model based on the Arrhenius, McVetty, and Monkman-Grant equations was developed using mathematical analysis. In order to reduce fitting errors, the McVetty equation was transformed by considering the first three terms of the Taylor series expansion. The model parameters were accurately determined by the statistical technique of maximum likelihood estimation, and the model was applied to the creep data of alloy 617. The T-S model results showed better agreement with the experimental data than other models such as the Eno, exponential, and L-M models. In particular, the T-S model was converted into an isothermal Taylor series (IT-S) model that can predict the creep strength at a given temperature. It was found that the estimates obtained using the converted IT-S model were better than those obtained using the T-S model for predicting the long-term creep life of alloy 617.

  14. Alpha Matting with KL-Divergence Based Sparse Sampling.

    Science.gov (United States)

    Karacan, Levent; Erdem, Aykut; Erdem, Erkut

    2017-06-22

    In this paper, we present a new sampling-based alpha matting approach for the accurate estimation of the foreground and background layers of an image. Previous sampling-based methods typically rely on certain heuristics in collecting representative samples from known regions, and thus their performance deteriorates if the underlying assumptions are not satisfied. To alleviate this, we take an entirely new approach and formulate sampling as a sparse subset selection problem where we propose to pick a small set of candidate samples that best explains the unknown pixels. Moreover, we describe a new dissimilarity measure for comparing two samples which is based on the KL divergence between the distributions of features extracted in the vicinity of the samples. The proposed framework is general and could be easily extended to video matting by additionally taking temporal information into account in the sampling process. Evaluation on standard benchmark datasets for image and video matting demonstrates that our approach provides more accurate results compared to the state-of-the-art methods.

  15. Identification of Dynamic Loads Based on Second-Order Taylor-Series Expansion Method

    Directory of Open Access Journals (Sweden)

    Xiaowang Li

    2016-01-01

    Full Text Available A new method based on the second-order Taylor-series expansion is presented to identify structural dynamic loads in the time domain. This algorithm expresses the response vectors as a Taylor-series approximation and then deduces a series of formulas. As a result, an explicit discrete equation which associates the system response, system characteristics, and input excitation is set up. In a multi-input multi-output (MIMO) numerical simulation study, sinusoidal excitation and white noise excitation are applied to a cantilever beam, respectively, to illustrate the effectiveness of this algorithm. A comparison is also made between the new method and the conventional state space method. The results show that the proposed method can obtain a more accurate identified force time history whether or not the responses are polluted by noise.

  16. Using learning analytics to evaluate a video-based lecture series.

    Science.gov (United States)

    Lau, K H Vincent; Farooque, Pue; Leydon, Gary; Schwartz, Michael L; Sadler, R Mark; Moeller, Jeremy J

    2018-01-01

    The video-based lecture (VBL), an important component of the flipped classroom (FC) and massive open online course (MOOC) approaches to medical education, has primarily been evaluated through direct learner feedback. Evaluation may be enhanced through learning analytics (LA) - analysis of quantitative audience usage data generated by video-sharing platforms. We applied LA to an experimental series of ten VBLs on electroencephalography (EEG) interpretation, uploaded to YouTube in the model of a publicly accessible MOOC. Trends in view count, total percentage of video viewed, and audience retention (AR, the percentage of viewers watching at a time point compared to the initial total) were examined. The pattern of average AR decline was characterized using regression analysis, revealing a uniform linear decline in viewership for each video, with no evidence of an optimal VBL length. Segments with transient increases in AR corresponded to those focused on core concepts, indicating content requiring more detailed evaluation. We propose a model for applying LA at four levels: global, series, video, and feedback. LA may be a useful tool in evaluating a VBL series. Our proposed model combines analytics data and learner self-report for comprehensive evaluation.

  17. Pattern-Based Development of Enterprise Systems: from Conceptual Framework to Series of Implementations

    Directory of Open Access Journals (Sweden)

    Sergey V. Zykov

    2013-04-01

    Full Text Available Building enterprise software is a dramatic challenge due to data size, complexity, and the rapid growth of both over time. The issue becomes even more dramatic when it comes to integrating heterogeneous applications. A uniform approach is therefore required which combines formal models and CASE tools. The methodology is based on extracting common ERP module-level patterns and applying them to a series of heterogeneous implementations. The approach includes a lifecycle model which extends the conventional spiral model with formal data representation/management models and DSL-based "low-level" CASE tools supporting the formalisms. The methodology has been successfully implemented as a series of portal-based ERP systems in the ITERA oil-and-gas corporation, and in a number of trading/banking enterprise applications for other enterprises. A semantic network-based airline dispatch system and a 6D-model-driven nuclear power plant construction support system are currently in progress.

  18. Uncertainty estimation with bias-correction for flow series based on rating curve

    Science.gov (United States)

    Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta

    2014-03-01

    Streamflow discharge constitutes one of the fundamental data required to perform water balance studies and develop hydrological models. A rating curve, designed on the basis of a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate complete discharge time series of reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available even though it has a significant impact on hydrological modelling. In this paper, we identify the discrepancies in the hydrographers' rating curves used to derive the historical discharge data series and propose a modification by bias correction, which takes the same power-function form as the traditional rating curve. In order to obtain the uncertainty estimate, we further propose applying a Box-Cox transformation to both sides to bring the regression residuals as close to the normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to the gauging stations on the Flinders and Gilbert rivers in north-west Queensland, Australia.
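
    A minimal sketch of the rating-curve-with-uncertainty idea, assuming NumPy: the power-law curve is fitted in log space (a simplification of the paper's bias-corrected, Box-Cox-transformed regression), and the residual spread is used to generate an ensemble of plausible discharge series.

```python
import numpy as np

def fit_rating_curve(stage, discharge):
    """Fit a power-law rating curve Q = a * h**b by linear regression in log
    space; returns the parameters and the residual standard deviation."""
    log_h, log_q = np.log(stage), np.log(discharge)
    b, log_a = np.polyfit(log_h, log_q, 1)
    resid = log_q - (log_a + b * log_h)
    return np.exp(log_a), b, resid.std(ddof=2)

def discharge_ensemble(stage, a, b, sigma, n_members=100, seed=0):
    """Generate an ensemble of discharge series by perturbing the rating curve
    with log-normal noise of the fitted residual spread."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_members, len(stage)))
    return a * np.asarray(stage) ** b * np.exp(noise)
```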

  19. Similarity estimators for irregular and age uncertain time series

    Science.gov (United States)

    Rehfeld, K.; Kurths, J.

    2013-09-01

    Paleoclimate time series are often irregularly sampled and age uncertain, which is an important technical challenge to overcome for successful reconstruction of past climate variability and dynamics. Visual comparison and interpolation-based linear correlation approaches have been used to infer dependencies from such proxy time series. While the first is subjective, not measurable and not suitable for the comparison of many datasets at a time, the latter introduces interpolation bias, and both face difficulties if the underlying dependencies are nonlinear. In this paper we investigate similarity estimators that could be suitable for the quantitative investigation of dependencies in irregular and age uncertain time series. We compare the Gaussian-kernel based cross correlation (gXCF, Rehfeld et al., 2011) and mutual information (gMI, Rehfeld et al., 2013) against their interpolation-based counterparts and the new event synchronization function (ESF). We test the efficiency of the methods in estimating coupling strength and coupling lag numerically, using ensembles of synthetic stalagmites with short, autocorrelated, linear and nonlinearly coupled proxy time series, and in the application to real stalagmite time series. In the linear test case coupling strength increases are identified consistently for all estimators, while in the nonlinear test case the correlation-based approaches fail. The lag at which the time series are coupled is identified correctly as the maximum of the similarity functions in around 60-55% (in the linear case) to 53-42% (for the nonlinear processes) of the cases when the dating of the synthetic stalagmite is perfectly precise. If the age uncertainty increases beyond 5% of the time series length, however, the true coupling lag is not identified more often than the others for which the similarity function was estimated. Age uncertainty contributes up to half of the uncertainty in the similarity estimation process. Time series irregularity
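
    The kernel-based estimator can be sketched directly: instead of interpolating, every pair of observations contributes to the correlation at a given lag with a Gaussian weight on how far its time difference is from that lag. This is a simplified illustration in the spirit of gXCF, not the authors' exact implementation.

```python
import numpy as np

def gaussian_kernel_xcf(tx, x, ty, y, lag, h=None):
    """Gaussian-kernel cross-correlation at a given lag for two irregularly
    sampled series; the series are standardized before weighting."""
    tx, x, ty, y = map(np.asarray, (tx, x, ty, y))
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    if h is None:                       # bandwidth ~ mean sampling interval
        h = 0.25 * (np.diff(tx).mean() + np.diff(ty).mean())
    # all pairwise time differences, shifted by the requested lag
    dt = (ty[None, :] - tx[:, None]) - lag
    w = np.exp(-0.5 * (dt / h) ** 2)
    return np.sum(w * np.outer(x, y)) / np.sum(w)
```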

  20. Similarity estimators for irregular and age-uncertain time series

    Science.gov (United States)

    Rehfeld, K.; Kurths, J.

    2014-01-01

    Paleoclimate time series are often irregularly sampled and age uncertain, which is an important technical challenge to overcome for successful reconstruction of past climate variability and dynamics. Visual comparison and interpolation-based linear correlation approaches have been used to infer dependencies from such proxy time series. While the first is subjective, not measurable and not suitable for the comparison of many data sets at a time, the latter introduces interpolation bias, and both face difficulties if the underlying dependencies are nonlinear. In this paper we investigate similarity estimators that could be suitable for the quantitative investigation of dependencies in irregular and age-uncertain time series. We compare the Gaussian-kernel-based cross-correlation (gXCF, Rehfeld et al., 2011) and mutual information (gMI, Rehfeld et al., 2013) against their interpolation-based counterparts and the new event synchronization function (ESF). We test the efficiency of the methods in estimating coupling strength and coupling lag numerically, using ensembles of synthetic stalagmites with short, autocorrelated, linear and nonlinearly coupled proxy time series, and in the application to real stalagmite time series. In the linear test case, coupling strength increases are identified consistently for all estimators, while in the nonlinear test case the correlation-based approaches fail. The lag at which the time series are coupled is identified correctly as the maximum of the similarity functions in around 60-55% (in the linear case) to 53-42% (for the nonlinear processes) of the cases when the dating of the synthetic stalagmite is perfectly precise. If the age uncertainty increases beyond 5% of the time series length, however, the true coupling lag is not identified more often than the others for which the similarity function was estimated. Age uncertainty contributes up to half of the uncertainty in the similarity estimation process. Time series irregularity

  1. Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.

    Science.gov (United States)

    Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin

    2016-10-10

    We address the problem of face video retrieval in TV-series, which searches video clips based on the presence of a specific character, given one face track of him or her. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max margin framework, which aims to strike a balance between the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and proceed to propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated in the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance by using an extremely compact code with only 128 bits.

  2. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    2000-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision of similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCF) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000)

  3. Assessment of the impact of a change of samplers on the uncertainty related to geothermal water sampling

    Science.gov (United States)

    Wątor, Katarzyna; Mika, Anna; Sekuła, Klaudia; Kmiecik, Ewa

    2018-02-01

    The aim of this study is to assess the impact of a change of samplers on the uncertainty associated with the process of geothermal water sampling. The study was carried out on geothermal water exploited in the Podhale region, southern Poland (Małopolska province). To estimate the uncertainty associated with sampling, the results of determinations of metasilicic acid (H2SiO3) in normal and duplicate samples collected in two series were used (in each series the samples were collected by a qualified sampler). Chemical analyses were performed using the ICP-OES method in the certified Hydrogeochemical Laboratory of the Hydrogeology and Engineering Geology Department at the AGH University of Science and Technology in Krakow (Certificate of Polish Centre for Accreditation No. AB 1050). To evaluate the uncertainty arising from sampling, the empirical approach was implemented, based on double analysis of normal and duplicate samples taken from the same well in each series of testing. The analyses of the results were done using ROBAN software, based on the robust analysis of variance (rANOVA) technique. The research showed that, in the case of qualified and experienced samplers, the uncertainty connected with sampling can be reduced, which results in a small overall measurement uncertainty.
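
    The duplicate-sample logic can be illustrated with the classical difference method, a simple stand-in for the robust ANOVA actually used in the study: the spread of relative differences between normal and duplicate samples bounds the combined sampling-plus-analysis uncertainty.

```python
import numpy as np

def duplicate_uncertainty(normal, duplicate):
    """Relative measurement (sampling + analysis) uncertainty, in percent,
    estimated from paired normal/duplicate determinations."""
    normal, duplicate = np.asarray(normal, float), np.asarray(duplicate, float)
    mean_pair = (normal + duplicate) / 2
    rel_diff = (normal - duplicate) / mean_pair
    # var(x1 - x2) = 2 * sigma^2 for independent duplicates of one true value
    s_single = np.sqrt(np.mean(rel_diff ** 2) / 2)
    return 100 * s_single
```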

  4. A robust anomaly based change detection method for time-series remote sensing images

    Science.gov (United States)

    Shoujing, Yin; Qiao, Wang; Chuanqing, Wu; Xiaoling, Chen; Wandong, Ma; Huiqin, Mao

    2014-03-01

    Time-series remote sensing images record changes happening on the earth surface, which include not only abnormal changes like human activities and emergencies (e.g. fire, drought, insect pests etc.), but also changes caused by vegetation phenology and climate change. Challenges therefore arise in analyzing global environmental changes and their internal driving forces. This paper proposes a robust Anomaly Based Change Detection method (ABCD) for time-series image analysis by detecting abnormal points in data sets, which do not need to follow a normal distribution. With ABCD we can detect when and where changes occur, which is the prerequisite of global change studies. ABCD was tested initially with 10-day SPOT VGT NDVI (Normalized Difference Vegetation Index) time series tracking land cover type changes, seasonality and noise, and then validated on real data over a large area in Jiangxi, in the south of China. Initial results show that ABCD can precisely detect spatial and temporal changes from long time-series images rapidly.
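
    A minimal sketch of the robust-anomaly idea, assuming NumPy: points are flagged with a median/MAD modified z-score, so no normal distribution is assumed. This is only an illustration of the principle, not the exact ABCD statistic.

```python
import numpy as np

def detect_anomalies(ndvi, threshold=3.5):
    """Flag anomalous points in an NDVI series using a robust modified z-score
    (median / median absolute deviation)."""
    ndvi = np.asarray(ndvi, dtype=float)
    med = np.median(ndvi)
    mad = max(np.median(np.abs(ndvi - med)), 1e-9)   # guard against zero MAD
    z = 0.6745 * (ndvi - med) / mad
    return np.abs(z) > threshold                      # boolean mask of anomalies
```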

  5. International Work-Conference on Time Series

    CERN Document Server

    Pomares, Héctor; Valenzuela, Olga

    2017-01-01

    This volume of selected and peer-reviewed contributions on the latest developments in time series analysis and forecasting updates the reader on topics such as analysis of irregularly sampled time series, multi-scale analysis of univariate and multivariate time series, linear and non-linear time series models, advanced time series forecasting methods, applications in time series analysis and forecasting, advanced methods and online learning in time series and high-dimensional and complex/big data time series. The contributions were originally presented at the International Work-Conference on Time Series, ITISE 2016, held in Granada, Spain, June 27-29, 2016. The series of ITISE conferences provides a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing the disciplines of comput...

  6. Multi-Cultural Competency-Based Vocational Curricula. Food Service. Multi-Cultural Competency-Based Vocational/Technical Curricula Series.

    Science.gov (United States)

    Hepburn, Larry; Shin, Masako

    This document, one of eight in a multi-cultural competency-based vocational/technical curricula series, is on food service. This program is designed to run 24 weeks and cover 15 instructional areas: orientation, sanitation, management/planning, preparing food for cooking, preparing beverages, cooking eggs, cooking meat, cooking vegetables,…

  7. Visibility graph analysis on quarterly macroeconomic series of China based on complex network theory

    Science.gov (United States)

    Wang, Na; Li, Dong; Wang, Qiwen

    2012-12-01

    The visibility graph approach and complex network theory provide a new insight into time series analysis. The inheritance by the visibility graph of properties of the original time series is further explored in this paper. We found that the degree distributions of visibility graphs extracted from pseudo Brownian motion series obtained by the frequency-domain algorithm exhibit exponential behaviors, in which the exponential exponent is a binomial function of the Hurst index inherited from the time series. Our simulations showed that the quantitative relations between the Hurst indexes and the exponents of the degree distribution function are different for different series, and that the visibility graph inherits some important features of the original time series. Further, we convert some quarterly macroeconomic series, including the growth rates of value-added of three industry series and the growth rates of Gross Domestic Product series of China, to graphs by the visibility algorithm and explore the topological properties of the graphs associated with the four macroeconomic series, namely, the degree distribution and correlations, the clustering coefficient, the average path length, and community structure. Based on complex network analysis we find that the degree distributions of the networks associated with the growth rates of value-added of the three industry series are almost exponential, and that the degree distributions of the networks associated with the growth rates of GDP series are scale free. We also discuss the assortativity and disassortativity of the four associated networks as they relate to the evolutionary process of the original macroeconomic series. All the constructed networks have “small-world” features. The community structures of the associated networks suggest dynamic changes in the original macroeconomic series. We also detected the relationship among government policy changes, community structures of associated networks and macroeconomic dynamics. We find great influences of government
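
    The visibility mapping itself is easy to state: two time points are connected if the straight line between them stays above every intermediate sample. The reference implementation below (quadratic in the series length) builds the graph and its degree distribution; it is a generic sketch, not the authors' code.

```python
import numpy as np

def visibility_graph(x):
    """Natural visibility graph: nodes are time points, and i-j are linked if
    the straight line between (i, x_i) and (j, x_j) passes above every
    intermediate sample."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            ks = np.arange(i + 1, j)
            line = x[i] + (x[j] - x[i]) * (ks - i) / (j - i)
            if np.all(x[ks] < line):      # empty range => adjacent points link
                edges.append((i, j))
    return edges

def degree_distribution(edges, n):
    """Histogram of node degrees of the visibility graph."""
    deg = np.zeros(n, dtype=int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return np.bincount(deg)
```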

  8. Computing exact Fourier series coefficients of IC rectilinear polygons from low-resolution fast Fourier coefficients

    Science.gov (United States)

    Scheibler, Robin; Hurley, Paul

    2012-03-01

    We present a novel, accurate and fast algorithm to obtain Fourier series coefficients from an IC layer whose description consists of rectilinear polygons on a plane, and show how to implement it using off-the-shelf hardware components. Based on properties of Fourier calculus, we derive a relationship between the Discrete Fourier Transforms of the sampled mask transmission function and its continuous Fourier series coefficients. The relationship leads to a straightforward algorithm for computing the continuous Fourier series coefficients in which one samples the mask transmission function, computes its discrete Fourier transform and applies a frequency-dependent multiplicative factor. The algorithm is guaranteed to yield the exact continuous Fourier series coefficients for any sampling that represents the mask function exactly. Computationally, this leads to significant savings by allowing one to choose the maximal such pixel size and reducing the fast Fourier transform size accordingly, without compromising accuracy. In addition, the continuous Fourier series is free from aliasing and follows closely the physical model of Fourier optics. We show that in some cases this can make a significant difference, especially in modern very low pitch technology nodes.
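
    The core relationship described above can be illustrated in one dimension: for a piecewise-constant (pixel) function, each continuous Fourier series coefficient equals the corresponding DFT value times a frequency-dependent factor. The sketch below assumes this simplified 1-D setting (the paper treats 2-D rectilinear polygons); the names and the test function are illustrative.

```python
import numpy as np

def continuous_fs_coefficients(samples, k):
    """Continuous Fourier series coefficients c_k of a 1-D piecewise-constant
    function of period L sampled on N equal pixels, from the DFT of the samples.
    For pixel width L/N one finds
        c_k = (1/N) * DFT_k * sinc(k/N) * exp(-i*pi*k/N),
    where sinc is the normalised sinc, sin(pi x)/(pi x)."""
    N = len(samples)
    dft = np.fft.fft(samples)
    k = np.asarray(k)
    factor = np.sinc(k / N) * np.exp(-1j * np.pi * k / N) / N
    return dft[np.mod(k, N)] * factor

# a crude check against direct numerical integration for one coefficient
f = np.array([0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0])   # pixel values, period L = 1
k = 3
c_fast = continuous_fs_coefficients(f, [k])[0]
x = np.linspace(0, 1, 20000, endpoint=False)
fx = f[np.floor(x * len(f)).astype(int)]
c_direct = np.mean(fx * np.exp(-2j * np.pi * k * x))
print(abs(c_fast - c_direct))   # should agree to a few decimal places
```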

  9. Forecasting Jakarta composite index (IHSG) based on chen fuzzy time series and firefly clustering algorithm

    Science.gov (United States)

    Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.

    2018-03-01

    This paper proposes the combination of the Firefly Algorithm (FA) and Chen Fuzzy Time Series Forecasting. Most of the existing fuzzy forecasting methods based on fuzzy time series use a static interval length. Therefore, we apply an artificial intelligence technique, the Firefly Algorithm (FA), to set a non-stationary interval length for each cluster in the Chen method. The method is evaluated by applying it to the Jakarta Composite Index (IHSG) and comparing it with classical Chen Fuzzy Time Series Forecasting. Its performance is verified through simulation using Matlab.

  10. Wind Speed Prediction with Wavelet Time Series Based on Lorenz Disturbance

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2017-08-01

    Full Text Available Due to its sustainable and pollution-free characteristics, wind energy has been one of the fastest growing renewable energy sources. However, the intermittent and random fluctuation of wind speed presents many challenges for reliable wind power integration and normal operation of wind farms. Accurate wind speed prediction is the key to ensuring the safe operation of the power system and to developing wind energy resources. Therefore, in this paper, combined with the atmospheric dynamical system, a wavelet-time series wind speed prediction model based on Lorenz disturbance is proposed, and wind turbines of different climate types in Spain and China are used to simulate the disturbances of the Lorenz equations with different initial values. The prediction results show that the improved model can effectively correct the preliminary prediction of wind speed, improving the prediction accuracy. In a word, the research work in this paper will be helpful to arrange the electric power dispatching plan and ensure the normal operation of the wind farm.
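
    The Lorenz disturbance mentioned above is obtained by integrating the Lorenz equations from a chosen initial state. A minimal sketch follows; the parameter values, initial condition and use of the x-component as the disturbance signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classic Lorenz-63 system: returns (dx/dt, dy/dt, dz/dt)."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# integrate from an illustrative initial value; different initial values give
# different disturbance sequences, as exploited in the paper
t_eval = np.linspace(0.0, 20.0, 2001)
sol = solve_ivp(lorenz, (0.0, 20.0), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-8)
disturbance = sol.y[0]          # e.g. use the x-component as the perturbation signal
print(disturbance[:5])
```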

  11. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
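
    The prediction component named above is the additive Holt-Winters method. A small hand-rolled sketch of additive Holt-Winters smoothing and forecasting is given below; the smoothing constants, seasonal period and synthetic series are illustrative, and the paper applies the method to orbit-propagation residuals rather than to this toy signal.

```python
import numpy as np

def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.1, horizon=12):
    """Additive Holt-Winters: level + trend + additive seasonality of period m.
    Returns the fitted one-step-ahead values and an out-of-sample forecast."""
    y = np.asarray(y, dtype=float)
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = list(y[:m] - level)
    fitted = []
    for t in range(len(y)):
        s = season[t % m]
        fitted.append(level + trend + s)
        prev_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s
    forecast = [level + (h + 1) * trend + season[(len(y) + h) % m]
                for h in range(horizon)]
    return np.array(fitted), np.array(forecast)

t = np.arange(120)
y = 0.05 * t + np.sin(2 * np.pi * t / 12) + 0.1 * np.random.default_rng(1).standard_normal(120)
fitted, forecast = holt_winters_additive(y, m=12)
print(forecast[:3])
```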

  12. A stochastic HMM-based forecasting model for fuzzy time series.

    Science.gov (United States)

    Li, Sheng-Tun; Cheng, Yi-Chung

    2010-10-01

    Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model by enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the nature of conjecture and randomness of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index by forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus, the result statistically approximates to the real mean of the target value being forecast.

  13. Multi-Cultural Competency-Based Vocational Curricula. Automotive Mechanics. Multi-Cultural Competency-Based Vocational/Technical Curricula Series.

    Science.gov (United States)

    Hepburn, Larry; Shin, Masako

    This document, one of eight in a multi-cultural competency-based vocational/technical curricula series, is on automotive mechanics. This program is designed to run 36 weeks and cover 10 instructional areas: the engine; drive trains--rear ends/drive shafts/manual transmission; carburetor; emission; ignition/tune-up; charging and starting;…

  14. United States Forest Disturbance Trends Observed Using Landsat Time Series

    Science.gov (United States)

    Masek, Jeffrey G.; Goward, Samuel N.; Kennedy, Robert E.; Cohen, Warren B.; Moisen, Gretchen G.; Schleeweis, Karen; Huang, Chengquan

    2013-01-01

    Disturbance events strongly affect the composition, structure, and function of forest ecosystems; however, existing U.S. land management inventories were not designed to monitor disturbance. To begin addressing this gap, the North American Forest Dynamics (NAFD) project has examined a geographic sample of 50 Landsat satellite image time series to assess trends in forest disturbance across the conterminous United States for 1985-2005. The geographic sample design used a probability-based scheme to encompass major forest types and maximize geographic dispersion. For each sample location disturbance was identified in the Landsat series using the Vegetation Change Tracker (VCT) algorithm. The NAFD analysis indicates that, on average, 2.77 Mha/yr of forests were disturbed annually, representing 1.09%/yr of US forestland. These satellite-based national disturbance rate estimates tend to be lower than those derived from land management inventories, reflecting both methodological and definitional differences. In particular the VCT approach used with a biennial time step has limited sensitivity to low-intensity disturbances. Unlike prior satellite studies, our biennial forest disturbance rates vary by nearly a factor of two between high and low years. High western US disturbance rates were associated with active fire years and insect activity, while variability in the east is more strongly related to harvest rates in managed forests. We note that generating a geographic sample based on representing forest type and variability may be problematic since the spatial pattern of disturbance does not necessarily correlate with forest type. We also find that the prevalence of diffuse, non-stand clearing disturbance in US forests makes the application of a biennial geographic sample problematic. Future satellite-based studies of disturbance at regional and national scales should focus on wall-to-wall analyses with an annual time step for improved accuracy.

  15. Non-invasive breast biopsy method using GD-DTPA contrast enhanced MRI series and F-18-FDG PET/CT dynamic image series

    Science.gov (United States)

    Magri, Alphonso William

    This study was undertaken to develop a nonsurgical breast biopsy from Gd-DTPA Contrast Enhanced Magnetic Resonance (CE-MR) images and F-18-FDG PET/CT dynamic image series. A five-step process was developed to accomplish this. (1) Dynamic PET series were nonrigidly registered to the initial frame using a finite element method (FEM) based registration that requires fiducial skin markers to sample the displacement field between image frames. A commercial FEM package (ANSYS) was used for meshing and FEM calculations. Dynamic PET image series registrations were evaluated using similarity measurements SAVD and NCC. (2) Dynamic CE-MR series were nonrigidly registered to the initial frame using two registration methods: a multi-resolution free-form deformation (FFD) registration driven by normalized mutual information, and a FEM-based registration method. Dynamic CE-MR image series registrations were evaluated using similarity measurements, localization measurements, and qualitative comparison of motion artifacts. FFD registration was found to be superior to FEM-based registration. (3) Nonlinear curve fitting was performed for each voxel of the PET/CT volume of activity versus time, based on a realistic two-compartmental Patlak model. Three parameters for this model were fitted; two of them describe the activity levels in the blood and in the cellular compartment, while the third characterizes the washout rate of F-18-FDG from the cellular compartment. (4) Nonlinear curve fitting was performed for each voxel of the MR volume of signal intensity versus time, based on a realistic two-compartment Brix model. Three parameters for this model were fitted: rate of Gd exiting the compartment, representing the extracellular space of a lesion; rate of Gd exiting a blood compartment; and a parameter that characterizes the strength of signal intensities. Curve fitting used for PET/CT and MR series was accomplished by application of the Levenberg-Marquardt nonlinear regression

  16. Computational design of new molecular scaffolds for medicinal chemistry, part II: generalization of analog series-based scaffolds

    Science.gov (United States)

    Dimova, Dilyana; Stumpfe, Dagmar; Bajorath, Jürgen

    2018-01-01

    Aim: Extending and generalizing the computational concept of analog series-based (ASB) scaffolds. Materials & methods: Methodological modifications were introduced to further increase the coverage of analog series (ASs) and compounds by ASB scaffolds. From bioactive compounds, ASs were systematically extracted and second-generation ASB scaffolds isolated. Results: More than 20,000 second-generation ASB scaffolds with single or multiple substitution sites were extracted from active compounds, achieving more than 90% coverage of ASs. Conclusion: Generalization of the ASB scaffold approach has yielded a large knowledge base of scaffold-capturing compound series and target information. PMID:29379641

  17. Optimal separable bases and series expansions

    International Nuclear Information System (INIS)

    Poirier, B.

    1997-01-01

    A method is proposed for the efficient calculation of the Green's functions and eigenstates for quantum systems of two or more dimensions. For a given Hamiltonian, the best possible separable approximation is obtained from the set of all Hilbert-space operators. It is shown that this determination itself, as well as the solution of the resultant approximation, is a problem of reduced dimensionality. Moreover, the approximate eigenstates constitute the optimal separable basis, in the sense of self-consistent field theory. The full solution is obtained from the approximation via iterative expansion. In the time-independent perturbation expansion for instance, all of the first-order energy corrections are zero. In the Green's function case, we have a distorted-wave Born series with optimized convergence properties. This series may converge even when the usual Born series diverges. Analytical results are presented for an application of the method to the two-dimensional shifted harmonic-oscillator system, in the course of which the quantum tanh^2 potential problem is solved exactly. The universal presence of bound states in the latter is shown to imply long-lived resonances in the former. In a comparison with other theoretical methods, we find that the reaction path Hamiltonian fails to predict such resonances. copyright 1997 The American Physical Society

  18. A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Online time series change detection is a critical component of many monitoring systems, such as space and air-borne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to handle such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using Gaussian processes in an online mode is the need to solve a large system of equations involving the associated covariance matrix which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining an O(T) memory footprint, compared to the O(T^4) time and O(T^2) memory requirement of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in the state of Iowa, USA. Our algorithm is able to detect different types of changes in a NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
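
    A compact sketch of the core idea, predicting each new observation with a Gaussian process over a recent window and flagging it when it leaves a control band around the predictive mean, is given below. The periodic kernel, window length, noise level and 3-sigma limit are illustrative assumptions; the paper's algorithm additionally exploits the structure of the covariance matrix to reach the complexity quoted above, which this plain sketch does not attempt.

```python
import numpy as np

def periodic_kernel(t1, t2, period=12.0, ell=2.0, sig=1.0):
    """A periodic (exp-sine-squared) covariance, suited to seasonal series."""
    d = np.subtract.outer(t1, t2)
    return sig**2 * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / ell**2)

def gp_predict(t_train, y_train, t_new, noise=0.05):
    """GP regression predictive mean and standard deviation at a single new time."""
    K = periodic_kernel(t_train, t_train) + noise**2 * np.eye(len(t_train))
    k_star = periodic_kernel(np.atleast_1d(t_new), t_train)
    alpha = np.linalg.solve(K, y_train)
    mean = k_star @ alpha
    k_ss = periodic_kernel(np.atleast_1d(t_new), np.atleast_1d(t_new)).diagonal()
    var = k_ss - np.einsum('ij,ji->i', k_star, np.linalg.solve(K, k_star.T))
    return mean, np.sqrt(np.maximum(var, 0.0) + noise**2)

# online loop: predict each new point from a sliding window and flag outliers
rng = np.random.default_rng(2)
t = np.arange(200.0)
y = np.sin(2 * np.pi * t / 12) + 0.05 * rng.standard_normal(200)
y[150:] += 1.5                      # an abrupt change to be detected
window = 48
for i in range(window, 200):
    mean, sd = gp_predict(t[i - window:i], y[i - window:i], t[i])
    if abs(y[i] - mean[0]) > 3 * sd[0]:
        print(f"change flagged at t={int(t[i])}")
        break
```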

  19. Development of Simulink-Based SiC MOSFET Modeling Platform for Series Connected Devices

    DEFF Research Database (Denmark)

    Tsolaridis, Georgios; Ilves, Kalle; Reigosa, Paula Diaz

    2016-01-01

    A new MATLAB/Simulink-based modeling platform has been developed for SiC MOSFET power modules. The modeling platform describes the electrical behavior of a single 1.2 kV/350 A SiC MOSFET power module, as well as the series connection of two of them. A fast parameter initialization is followed by an optimization process to facilitate the extraction of the model's parameters in a more automated way, relying on a small number of experimental waveforms. Through extensive experimental work, it is shown that the model accurately predicts both static and dynamic performances. The series connection of two SiC power modules has been investigated through the validation of the static and dynamic conditions. Thanks to the developed model, a better understanding of the challenges introduced by uneven voltage sharing among series connected devices is possible.

  20. Classification of biosensor time series using dynamic time warping: applications in screening cancer cells with characteristic biomarkers.

    Science.gov (United States)

    Rai, Shesh N; Trainor, Patrick J; Khosravi, Farhad; Kloecker, Goetz; Panchapakesan, Balaji

    2016-01-01

    The development of biosensors that produce time series data will facilitate improvements in biomedical diagnostics and in personalized medicine. The time series produced by these devices often contain characteristic features arising from biochemical interactions between the sample and the sensor. To use such characteristic features for determining sample class, similarity-based classifiers can be utilized. However, the construction of such classifiers is complicated by the variability in the time domains of such series, which renders traditional distance metrics such as Euclidean distance ineffective in distinguishing between biological variance and time domain variance. The dynamic time warping (DTW) algorithm is a sequence alignment algorithm that can be used to align two or more series to facilitate quantifying similarity. In this article, we evaluated the performance of DTW distance-based similarity classifiers for classifying time series that mimic electrical signals produced by nanotube biosensors. Simulation studies demonstrated the positive performance of such classifiers in discriminating between time series containing characteristic features that are obscured by noise in the intensity and time domains. We then applied a DTW distance-based k-nearest neighbors classifier to distinguish the presence/absence of a mesenchymal biomarker in cancer cells in buffy coats in a blinded test. Using a train-test approach, we find that the classifier had high sensitivity (90.9%) and specificity (81.8%) in differentiating between EpCAM-positive MCF7 cells spiked in buffy coats and those in plain buffy coats.
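
    A minimal sketch of the DTW distance and a nearest-neighbour classification step of the kind described above follows; the toy series, labels and function names are illustrative, not the biosensor data.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D series.
    Cost of aligning a[i] with b[j] is |a[i] - b[j]|."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw_predict(train_series, train_labels, query, k=1):
    """k-nearest-neighbour vote using DTW distance as the similarity measure."""
    d = np.array([dtw_distance(query, s) for s in train_series])
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(np.array(train_labels)[nearest], return_counts=True)
    return labels[np.argmax(counts)]

x1 = np.sin(np.linspace(0, 2 * np.pi, 60))
x2 = np.sin(np.linspace(0, 2 * np.pi, 80))        # same shape, different time base
x3 = np.zeros(70)
print(dtw_distance(x1, x2), dtw_distance(x1, x3))  # the first should be much smaller
print(knn_dtw_predict([x1, x3], ["signal", "flat"], x2))
```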

  1. Phenology-based Spartina alterniflora mapping in coastal wetland of the Yangtze Estuary using time series of GaoFen satellite no. 1 wide field of view imagery

    Science.gov (United States)

    Ai, Jinquan; Gao, Wei; Gao, Zhiqiang; Shi, Runhe; Zhang, Chao

    2017-04-01

    Spartina alterniflora is an aggressive invasive plant species that replaces native species, changes the structure and function of the ecosystem across coastal wetlands in China, and is thus a major conservation concern. Mapping the spread of its invasion is a necessary first step for the implementation of effective ecological management strategies. The performance of a phenology-based approach for S. alterniflora mapping is explored in the coastal wetland of the Yangtze Estuary using a time series of GaoFen satellite no. 1 wide field of view camera (GF-1 WFV) imagery. First, a time series of the normalized difference vegetation index (NDVI) was constructed to evaluate the phenology of S. alterniflora. Two phenological stages (the senescence stage from November to mid-December and the green-up stage from late April to May) were determined as important for S. alterniflora detection in the study area based on NDVI temporal profiles, spectral reflectance curves of S. alterniflora and its coexistent species, and field surveys. Three phenology feature sets representing three major phenology-based detection strategies were then compared to map S. alterniflora: (1) the single-date imagery acquired within the optimal phenological window, (2) the multitemporal imagery, including four images from the two important phenological windows, and (3) the monthly NDVI time series imagery. Support vector machines and maximum likelihood classifiers were applied on each phenology feature set at different training sample sizes. For all phenology feature sets, the overall results were produced consistently with high mapping accuracies under sufficient training samples sizes, although significantly improved classification accuracies (10%) were obtained when the monthly NDVI time series imagery was employed. The optimal single-date imagery had the lowest accuracies of all detection strategies. The multitemporal analysis demonstrated little reduction in the overall accuracy compared with the

  2. Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay

    International Nuclear Information System (INIS)

    Wang Na; Wu Zhi-Hai; Peng Li

    2014-01-01

    In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive the necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve the stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results. (interdisciplinary physics and related areas of science and technology)

  3. Multifractal analysis of visibility graph-based Ito-related connectivity time series.

    Science.gov (United States)

    Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano

    2016-02-01

    In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by the Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series of numbers of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the procedure of the visibility graph that, by connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive in detecting wide "depressions" in the input time series.

  4. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
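
    The cutpoint (indexed-search) idea for generalized sampling of a discrete distribution mentioned above can be sketched as follows: a precomputed table of cutpoints narrows the sequential CDF search to a short scan. The table size and example distribution are illustrative, and this is a plain-Python sketch rather than the paper's optimized implementation.

```python
import numpy as np

def build_cutpoints(p, m):
    """Cutpoint table for indexed search (Chen-Asau style):
    cut[j] is the smallest outcome i with CDF[i] > j/m."""
    cdf = np.cumsum(p)
    cut = np.searchsorted(cdf, np.arange(m) / m, side='right')
    return cdf, cut

def sample_cutpoint(cdf, cut, rng, size):
    """Draw `size` samples: jump to the cutpoint, then finish with a short scan."""
    m = len(cut)
    out = np.empty(size, dtype=int)
    for n in range(size):
        u = rng.random()
        i = cut[int(u * m)]                        # jump close to the answer...
        while cdf[i] < u and i < len(cdf) - 1:     # ...then a short sequential search
            i += 1
        out[n] = i
    return out

p = np.array([0.05, 0.30, 0.10, 0.40, 0.15])   # an arbitrary discrete distribution
cdf, cut = build_cutpoints(p, m=32)
rng = np.random.default_rng(3)
draws = sample_cutpoint(cdf, cut, rng, 100000)
print(np.bincount(draws, minlength=len(p)) / 100000)   # should approximate p
```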

  5. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple models fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, and approximation components and detail components are obtained. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, and an improved free search algorithm is utilized for predictive model parameter optimization. An auto regressive integrated moving average model (ARIMA) is used as the predictive model for the detail components. The predictive values of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the predicted results after fusion is less than that of any single model, so the prediction accuracy is improved. The simulation results are compared on two typical chaotic time series, the Lorenz time series and the Mackey–Glass time series. The simulation results show that the prediction method in this paper achieves better prediction accuracy.
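
    The fusion step referred to above combines several unbiased predictions with weights inversely proportional to their error variances (the scalar case of the Gauss–Markov, minimum-variance-unbiased combination, assuming independent errors). A small sketch with hypothetical numbers follows; the prediction values and variances are illustrative only.

```python
import numpy as np

def gauss_markov_fusion(predictions, error_variances):
    """Minimum-variance combination of unbiased predictors with independent errors:
    weights proportional to 1/variance; the fused variance never exceeds the best one."""
    predictions = np.asarray(predictions, dtype=float)
    variances = np.asarray(error_variances, dtype=float)
    w = (1.0 / variances) / np.sum(1.0 / variances)
    fused = np.sum(w * predictions)
    fused_var = 1.0 / np.sum(1.0 / variances)
    return fused, fused_var

# two hypothetical model predictions of the same next value, with their error variances
pred_a, var_a = 1.23, 0.04
pred_b, var_b = 1.31, 0.09
print(gauss_markov_fusion([pred_a, pred_b], [var_a, var_b]))
```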

  6. Increasing fMRI sampling rate improves Granger causality estimates.

    Directory of Open Access Journals (Sweden)

    Fa-Hsuan Lin

    Full Text Available Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we SINC interpolated additional data points to the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high temporal resolution of InI improves the Granger causality connectivity analysis of the human brain.
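
    A minimal sketch of the bivariate Granger causality test underlying the analysis above: compare a regression of y on its own lags with one that also includes lags of x, using an F-statistic. The lag order and the synthetic driver-response signals are illustrative; the study itself works with InI BOLD time series.

```python
import numpy as np
from scipy import stats

def granger_f_test(x, y, lags=3):
    """Does x Granger-cause y? Compare restricted (y's own lags) and
    unrestricted (y's and x's lags) least-squares models with an F-test."""
    n = len(y)
    Y = y[lags:]
    X_r = np.column_stack([np.ones(n - lags)] +
                          [y[lags - k:n - k] for k in range(1, lags + 1)])
    X_u = np.column_stack([X_r] + [x[lags - k:n - k] for k in range(1, lags + 1)])
    rss_r = np.sum((Y - X_r @ np.linalg.lstsq(X_r, Y, rcond=None)[0]) ** 2)
    rss_u = np.sum((Y - X_u @ np.linalg.lstsq(X_u, Y, rcond=None)[0]) ** 2)
    df1, df2 = lags, (n - lags) - X_u.shape[1]
    F = ((rss_r - rss_u) / df1) / (rss_u / df2)
    return F, stats.f.sf(F, df1, df2)

rng = np.random.default_rng(4)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):                   # y is driven by past x with a delay
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 2] + 0.1 * rng.standard_normal()
print(granger_f_test(x, y, lags=3))       # small p-value expected
print(granger_f_test(y, x, lags=3))       # no causality in the reverse direction
```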

  7. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    International Nuclear Information System (INIS)

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

    In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem. Also, the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it can produce higher quality images when compared to the algorithm based on the parallel or series models for the cases tested in this paper. It provides a new algorithm for ECT application

  8. An application of sample entropy to precipitation in Paraíba State, Brazil

    Science.gov (United States)

    Xavier, Sílvio Fernando Alves; da Silva Jale, Jader; Stosic, Tatijana; dos Santos, Carlos Antonio Costa; Singh, Vijay P.

    2018-05-01

    A climate system is characterized as a complex non-linear system. In order to describe the complex characteristics of precipitation series in Paraíba State, Brazil, we use sample entropy, an entropy-based algorithm, to evaluate the complexity of precipitation series. Sixty-nine meteorological stations are distributed over four macroregions: Zona da Mata, Agreste, Borborema, and Sertão. The results of the analysis show that the complexity of monthly average precipitation differs among the macroregions. Sample entropy is able to reflect the dynamic change of precipitation series, providing a new way to investigate the complexity of hydrological series. The complexity exhibits areal variation of local water resource systems, which can influence the basis for utilizing and developing resources in dry areas.
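
    Sample entropy as used above can be computed directly from its definition: the negative log-ratio of template matches at lengths m+1 and m. A compact sketch follows; the choices m=2, r=0.2*std and the test series are common illustrative defaults, not necessarily those of the study.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): negative log of the ratio between the number of template
    pairs matching at length m+1 and at length m (Chebyshev distance < r,
    self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)

    def count_matches(mm):
        n_templates = len(x) - m          # same template count for lengths m and m+1
        templates = np.array([x[i:i + mm] for i in range(n_templates)])
        count = 0
        for i in range(n_templates - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d < r)
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(5)
regular = np.sin(np.linspace(0, 40 * np.pi, 2000))
noisy = rng.standard_normal(2000)
print(sample_entropy(regular), sample_entropy(noisy))   # the noisy series scores higher
```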

  9. Analysis of the Main Factors Influencing Food Production in China Based on Time Series Trend Chart

    Institute of Scientific and Technical Information of China (English)

    Shuangjin WANG; Jianying LI

    2014-01-01

    Based on the annual sample data on food production in China since the reform and opening up, we select 8 main factors influencing the total food production (growing area, application rate of chemical fertilizer, effective irrigation area, the affected area, total machinery power, food production cost index, food production price index, financial funds for supporting agriculture, farmers and countryside), and put them into categories of material input, resources and environment, and policy factors. Using factor analysis, we carry out a multi-angle analysis of these typical influencing factors one by one through the time series trend chart. It is found that the application rate of chemical fertilizer, the growing area of food crops and the drought-affected area are the key factors affecting food production. On this basis, we set forth corresponding recommendations for improving comprehensive food production capacity.

  10. A 10kW series resonant converter design, transistor characterization, and base-drive optimization

    Science.gov (United States)

    Robson, R.; Hancock, D.

    1981-01-01

    Transistors are characterized for use as switches in resonant circuit applications. A base drive circuit to provide the optimal base drive to these transistors under resonant circuit conditions is developed and then used in the design, fabrication and testing of a breadboard, spaceborne type 10 kW series resonant converter.

  11. Magnetic circular dichroism of LaMn1-xAlxO3+δ series of samples

    CERN Document Server

    Banerjee, A; Krishnan, R V; Dasannacharya, B A; Muro, T; Saitoh, Y; Imada, S; Suga, S

    2003-01-01

    We report magnetic circular dichroism (MCD) studies on the polycrystalline LaMn1-xAlxO3+δ series with x=0-0.2. The Mn-2p MCD was recorded in the temperature range from 45 to 300 K for samples with x=0, 0.075, 0.1 and 0.15. It was seen that, unlike ac-susceptibility, no second transition in MCD was observed at lower temperatures in the samples with x>=0.075, indicating that it is not intrinsic to the samples but arises out of the dynamics of ferromagnetic clusters in the polycrystalline sample. More significantly, the MCD signal persists even 100 K above the ferromagnetic T_C, confirming that the observation of the magnetic correlation above T_C in bulk measurements is intrinsic to this type of system.

  12. Current Directional Protection of Series Compensated Line Using Intelligent Classifier

    Directory of Open Access Journals (Sweden)

    M. Mollanezhad Heydarabadi

    2016-12-01

    Full Text Available The current inversion condition leads to incorrect operation of current-based directional relays in power systems with series compensation. The application of an intelligent system for fault direction classification is suggested in this paper. A new current directional protection scheme based on an intelligent classifier is proposed for the series compensated line. The proposed classifier uses only half a cycle of pre-fault and post-fault current samples at the relay location to feed the classifier. A large number of forward and backward fault simulations under different system conditions on a transmission line with a fixed series capacitor are carried out using PSCAD/EMTDC software. The applicability of the decision tree (DT), probabilistic neural network (PNN) and support vector machine (SVM) is investigated using simulated data under different system conditions. The performance comparison of the classifiers indicates that the SVM is the most suitable classifier for fault direction discrimination. The backward faults can be accurately distinguished from forward faults even under current inversion, without requiring detection of the current inversion condition.

  13. Extended moment series and the parameters of the negative binomial distribution

    International Nuclear Information System (INIS)

    Bowman, K.O.

    1984-01-01

    Recent studies indicate that, for finite sample sizes, moment estimators may be superior to maximum likelihood estimators in some regions of parameter space. In this paper a statistic based on the central moment of the sample is expanded in a Taylor series using 24 derivatives and many more terms than previous expansions. A summary algorithm is required to find meaningful approximants using the higher-order coefficients. An example is presented and a comparison between theoretical assessment and simulation results is made.

  14. Time Series Based for Online Signature Verification

    Directory of Open Access Journals (Sweden)

    I Ketut Gede Darma Putra

    2013-11-01

    Full Text Available A signature verification system matches a tested signature against a claimed signature. This paper proposes a time-series-based feature extraction method and dynamic time warping as the matching method. The system was built by testing 900 signatures belonging to 50 participants: 3 signatures for reference and 5 test signatures each from the original user, simple impostors and trained impostors. The final system was tested with 50 participants and 3 references each. The test showed that system accuracy without impostors is 90.44897959% at threshold 44, with a rejection error (FNMR) of 5.2% and an acceptance error (FMR) of 4.35102%; with impostors, system accuracy is 80.1361% at threshold 27, with a rejection error (FNMR) of 15.6% and an acceptance error (average FMR) of 4.263946%, with details as follows: acceptance error is 0.391837%, acceptance error for simple impostors is 3.2% and acceptance error for trained impostors is 9.2%.

  15. A Python-based interface to examine motions in time series of solar images

    Science.gov (United States)

    Campos-Rozo, J. I.; Vargas Domínguez, S.

    2017-10-01

    Python is considered to be a mature programming language, besides being widely accepted as an engaging option for scientific analysis in multiple areas, as will be presented in this work for the particular case of solar physics research. SunPy is an open-source library based on Python that has been recently developed to furnish software tools for solar data analysis and visualization. In this work we present a graphical user interface (GUI) based on Python and Qt to effectively compute proper motions for the analysis of time series of solar data. This user-friendly computing interface, which is intended to be incorporated into the SunPy library, uses a local correlation tracking technique and some extra tools that allow the selection of different parameters to calculate, visualize and analyze vector velocity fields of solar data, i.e. time series of solar filtergrams and magnetograms.

  16. Tonal synchrony in mother-infant interaction based on harmonic and pentatonic series.

    Science.gov (United States)

    Van Puyvelde, Martine; Vanfleteren, Pol; Loots, Gerrit; Deschuyffeleer, Sara; Vinck, Bart; Jacquet, Wolfgang; Verhelst, Werner

    2010-12-01

    This study reports the occurrence of 'tonal synchrony' as a new dimension of early mother-infant interaction synchrony. The findings are based on a tonal and temporal analysis of vocal interactions between 15 mothers and their 3-month-old infants during 5 min of free-play in a laboratory setting. In total, 558 vocal exchanges were identified and analysed, of which 84% reflected harmonic or pentatonic series. Another 10% of the exchanges contained absolute and/or relative pitch and/or interval imitations. The total durations of dyads being in tonal synchrony were normally distributed (M=3.71, SD=2.44). Vocalisations based on harmonic series appeared organised around the major triad, containing significantly more simple frequency ratios (octave, fifth and third) than complex ones (non-major triad tones). Tonal synchrony and its characteristics are discussed in relation to infant-directed speech, communicative musicality, pre-reflective communication and its impact on the quality of early mother-infant interaction and child's development. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Shilling attack detection for recommender systems based on credibility of group users and rating time series.

    Science.gov (United States)

    Zhou, Wei; Wen, Junhao; Qu, Qiang; Zeng, Jun; Cheng, Tian

    2018-01-01

    Recommender systems are vulnerable to shilling attacks. Forged user-generated content data, such as user ratings and reviews, are used by attackers to manipulate recommendation rankings. Shilling attack detection in recommender systems is of great significance to maintain the fairness and sustainability of recommender systems. The current studies have problems in terms of the poor universality of algorithms, difficulty in selection of user profile attributes, and lack of an optimization mechanism. In this paper, a shilling behaviour detection structure based on abnormal group user findings and rating time series analysis is proposed. This paper adds to the current understanding in the field by studying the credibility evaluation model in-depth based on the rating prediction model to derive proximity-based predictions. A method for detecting suspicious ratings based on suspicious time windows and target item analysis is proposed. Suspicious rating time segments are determined by constructing a time series, and data streams of the rating items are examined and suspicious rating segments are checked. To analyse features of shilling attacks by a group user's credibility, an abnormal group user discovery method based on time series and time window is proposed. Standard testing datasets are used to verify the effect of the proposed method.

  18. Sampling genetic diversity in the sympatrically and allopatrically speciating Midas cichlid species complex over a 16 year time series

    Directory of Open Access Journals (Sweden)

    Bunje Paul ME

    2007-02-01

    Full Text Available Abstract Background Speciation often occurs in complex or uncertain temporal and spatial contexts. Processes such as reinforcement, allopatric divergence, and assortative mating can proceed at different rates and with different strengths as populations diverge. The Central American Midas cichlid fish species complex is an important case study for understanding the processes of speciation. Previous analyses have demonstrated that allopatric processes led to species formation among the lakes of Nicaragua as well as sympatric speciation that is occurring within at least one crater lake. However, since speciation is an ongoing process and sampling genetic diversity of such lineages can be biased by collection scheme or random factors, it is important to evaluate the robustness of conclusions drawn on individual time samples. Results In order to assess the validity and reliability of inferences based on different genetic samples, we have analyzed fish from several lakes in Nicaragua sampled at three different times over 16 years. In addition, this time series allows us to analyze the population genetic changes that have occurred between lakes, where allopatric speciation has operated, as well as between different species within lakes, some of which have originated by sympatric speciation. Focusing on commonly used genetic markers, we have analyzed both DNA sequences from the complete mitochondrial control region as well as nuclear DNA variation at ten microsatellite loci from these populations, sampled thrice in a 16 year time period, to develop a robust estimate of the population genetic history of these diversifying lineages. Conclusion The conclusions from previous work are well supported by our comprehensive analysis. In particular, we find that the genetic diversity of derived crater lake populations is lower than that of the source population regardless of when and how each population was sampled. Furthermore, changes in various estimates of

  19. The generalization ability of online SVM classification based on Markov sampling.

    Science.gov (United States)

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present the numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repository. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of training samples is larger.

  20. Time Series Outlier Detection Based on Sliding Window Prediction

    Directory of Open Access Journals (Sweden)

    Yufeng Yu

    2014-01-01

    Full Text Available In order to detect outliers in hydrological time series data for improving data quality and decision-making quality related to design, operation, and management of water resources, this research develops a time series outlier detection method for hydrologic data that can be used to identify data that deviate from historical patterns. The method first builds a forecasting model on the historical data and then uses it to predict future values. Anomalies are assumed to take place if the observed values fall outside a given prediction confidence interval (PCI), which can be calculated from the predicted value and a confidence coefficient. The use of the PCI as a threshold rests mainly on the fact that it considers the uncertainty in the data series parameters of the forecasting model, addressing the suitable threshold selection problem. The method performs fast, incremental evaluation of data as it becomes available, scales to large quantities of data, and requires no preclassification of anomalies. Experiments with different real-world hydrologic time series showed that the proposed method is fast, correctly identifies abnormal data, and can be used for hydrologic time series analysis.
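
    A small sketch of the workflow described above, fitting a model on a sliding window of history, predicting the next value, and flagging it when it falls outside the prediction confidence interval, is given below. The AR(3) least-squares model, window length and confidence coefficient are illustrative assumptions; the paper leaves the choice of forecasting model open.

```python
import numpy as np

def detect_outliers(y, window=30, p=3, z=2.58):
    """Sliding-window outlier detection: fit an AR(p) model by least squares on the
    last `window` points, predict the next value, and flag it if it falls outside
    the prediction confidence interval mean +/- z * residual std."""
    flags = np.zeros(len(y), dtype=bool)
    for t in range(window, len(y)):
        hist = y[t - window:t]
        # design matrix of lagged values within the window
        X = np.column_stack([np.ones(window - p)] +
                            [hist[p - k:window - k] for k in range(1, p + 1)])
        target = hist[p:]
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid_std = np.std(target - X @ coef)
        x_new = np.concatenate(([1.0], hist[::-1][:p]))   # most recent p values
        pred = x_new @ coef
        if abs(y[t] - pred) > z * resid_std:
            flags[t] = True
    return flags

rng = np.random.default_rng(6)
level = np.sin(np.linspace(0, 12 * np.pi, 400)) + 0.05 * rng.standard_normal(400)
level[123] += 1.5                                    # an injected spike
print(np.where(detect_outliers(level))[0])           # index 123 expected among the flags
```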

  1. Combining Different Conceptual Change Methods within Four-Step Constructivist Teaching Model: A Sample Teaching of Series and Parallel Circuits

    Science.gov (United States)

    Ipek, Hava; Calik, Muammer

    2008-01-01

    Based on students' alternative conceptions of the topics "electric circuits", "electric charge flows within an electric circuit", "how the brightness of bulbs and the resistance changes in series and parallel circuits", the current study aims to present a combination of different conceptual change methods within a four-step constructivist teaching…

  2. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five and the resulting data are also de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation for extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
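
    The two modelling routes above map naturally onto standard distribution fits: annual maxima to the Generalized Extreme Value distribution and threshold exceedances to the Generalized Pareto distribution. The sketch below uses scipy's maximum likelihood fits on synthetic daily rainfall; the synthetic data, threshold choice and return-level calculation are illustrative, and the study also uses L-moments and a formal threshold selection procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
daily_rain = rng.gamma(shape=0.8, scale=12.0, size=(31, 365))   # 31 synthetic years

# Annual Maximum series -> Generalized Extreme Value distribution
am = daily_rain.max(axis=1)
shape_gev, loc_gev, scale_gev = stats.genextreme.fit(am)
rl_100 = stats.genextreme.ppf(1 - 1.0 / 100, shape_gev, loc=loc_gev, scale=scale_gev)

# Partial Duration series -> exceedances over a threshold -> Generalized Pareto
threshold = np.quantile(daily_rain, 0.99)
exceed = daily_rain[daily_rain > threshold] - threshold
shape_gpd, loc_gpd, scale_gpd = stats.genpareto.fit(exceed, floc=0.0)

print(f"GEV 100-year return level: {rl_100:.1f}")
print(f"GPD shape/scale over threshold {threshold:.1f}: {shape_gpd:.3f}, {scale_gpd:.1f}")
```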

  3. A new non-parametric stationarity test of time series in the time domain

    KAUST Repository

    Jin, Lei

    2014-11-07

    © 2015 The Royal Statistical Society and Blackwell Publishing Ltd. We propose a new double-order selection test for checking second-order stationarity of a time series. To develop the test, a sequence of systematic samples is defined via Walsh functions. Then the deviations of the autocovariances based on these systematic samples from the corresponding autocovariances of the whole time series are calculated and the uniform asymptotic joint normality of these deviations over different systematic samples is obtained. With a double-order selection scheme, our test statistic is constructed by combining the deviations at different lags in the systematic samples. The null asymptotic distribution of the statistic proposed is derived and the consistency of the test is shown under fixed and local alternatives. Simulation studies demonstrate well-behaved finite sample properties of the method proposed. Comparisons with some existing tests in terms of power are given both analytically and empirically. In addition, the method proposed is applied to check the stationarity assumption of a chemical process viscosity readings data set.

  4. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    1999-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of quantity of sample results and the number of tanks characterized. More and better data is available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999) and the Final Safety Analysis Report (FSAR) (FDH 1999) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in developing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks

  5. A graph-based approach to detect spatiotemporal dynamics in satellite image time series

    Science.gov (United States)

    Guttler, Fabio; Ienco, Dino; Nin, Jordi; Teisseire, Maguelonne; Poncelet, Pascal

    2017-08-01

    Enhancing the frequency of satellite acquisitions represents a key issue for the Earth Observation community nowadays. Repeated observations are crucial for monitoring purposes, particularly when intra-annual processes should be taken into account. Time series of images constitute a valuable source of information in these cases. The goal of this paper is to propose a new methodological framework to automatically detect and extract spatiotemporal information from satellite image time series (SITS). Existing methods dealing with such kind of data are usually classification-oriented and cannot provide information about evolutions and temporal behaviors. In this paper we propose a graph-based strategy that combines object-based image analysis (OBIA) with data mining techniques. Image objects computed at each individual timestamp are connected across the time series and generate a set of evolution graphs. Each evolution graph is associated with a particular area within the study site and stores information about its temporal evolution. Such information can be deeply explored at the evolution graph scale or used to compare the graphs and supply a general picture at the study site scale. We validated our framework on two study sites located in the South of France and involving different types of natural, semi-natural and agricultural areas. The results obtained from a Landsat SITS support the quality of the methodological approach and illustrate how the framework can be employed to extract and characterize spatiotemporal dynamics.

  6. A highly sensitive monoclonal antibody based biosensor for quantifying 3–5 ring polycyclic aromatic hydrocarbons (PAHs) in aqueous environmental samples

    Directory of Open Access Journals (Sweden)

    Xin Li

    2016-03-01

    Full Text Available Immunoassays based on monoclonal antibodies (mAbs) are highly sensitive for the detection of polycyclic aromatic hydrocarbons (PAHs) and can be employed to determine concentrations in near real-time. A sensitive generic mAb against PAHs, named 2G8, was developed by a three-step screening procedure. It exhibited nearly uniformly high sensitivity against 3-ring to 5-ring unsubstituted PAHs and their common environmental methylated PAHs, with IC50 values between 1.68 and 31 μg/L (ppb). 2G8 has been successfully applied on the KinExA Inline Biosensor system for quantifying 3–5 ring PAHs in aqueous environmental samples. PAHs were detected at concentrations as low as 0.2 μg/L. Furthermore, the analyses only required 10 min for each sample. To evaluate the accuracy of the 2G8-based biosensor, the total PAH concentrations in a series of environmental samples analyzed by the biosensor and by GC–MS were compared. In most cases, the results yielded a good correlation between the methods. This indicates that the generic antibody 2G8-based biosensor holds significant promise as a low-cost, rapid method for PAH determination in aqueous samples. Keywords: Monoclonal antibody, PAH, Pore water, Biosensor, Pyrene

  7. Can trade opportunities and returns be generated in a trend persistent series? Evidence from global indices

    Science.gov (United States)

    Mitra, S. K.; Bawa, Jaslene

    2017-03-01

    In this study, we explore the possibility of generating trade opportunities and returns when a financial stock index series is trend persistent. Through application of the Hurst coefficient based on the modified range to standard deviation analysis (Weron, 2002) to a sample of 31 leading global indices during the period December 2000 to November 2015, we found periods of trend persistence. We developed and tested a set of trading strategies on these periods of trend persistence in the financial series and found that significant positive returns can be generated when a series displays upward trend persistence.
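
    A small sketch of rescaled-range (R/S) estimation of the Hurst exponent, the quantity used above to detect trend persistence (H > 0.5), is given below; the window sizes and test series are illustrative, and the study follows Weron's (2002) modified formulation rather than this plain version.

```python
import numpy as np

def hurst_rs(x, window_sizes=None):
    """Rescaled-range estimate of the Hurst exponent: the slope of
    log(R/S) versus log(window size)."""
    x = np.asarray(x, dtype=float)
    if window_sizes is None:
        window_sizes = np.unique(np.logspace(1, np.log10(len(x) // 2), 15).astype(int))
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())      # cumulative deviation from the mean
            r = dev.max() - dev.min()              # range of the cumulative deviation
            s = seg.std(ddof=1)                    # segment standard deviation
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(8)
noise = rng.standard_normal(4096)
trending = np.cumsum(noise)                 # an integrated (strongly persistent) series
print(hurst_rs(noise), hurst_rs(trending))  # roughly 0.5 vs close to 1
```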

  8. Variable screening and ranking using sampling-based sensitivity measures

    International Nuclear Information System (INIS)

    Wu, Y-T.; Mohanty, Sitakanta

    2006-01-01

    This paper presents a methodology for screening insignificant random variables and ranking significant random variables using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from the test-of-hypothesis, to classify significant and insignificant random variables. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have large numbers of random variables but relatively few significant random variables

  9. Grammar-based feature generation for time-series prediction

    CERN Document Server

    De Silva, Anthony Mihirana

    2015-01-01

    This book proposes a novel approach for time-series prediction using machine learning techniques with automatic feature generation. Application of machine learning techniques to predict time-series continues to attract considerable attention due to the difficulty of the prediction problems compounded by the non-linear and non-stationary nature of the real world time-series. The performance of machine learning techniques, among other things, depends on suitable engineering of features. This book proposes a systematic way for generating suitable features using context-free grammar. A number of feature selection criteria are investigated and a hybrid feature generation and selection algorithm using grammatical evolution is proposed. The book contains graphical illustrations to explain the feature generation process. The proposed approaches are demonstrated by predicting the closing price of major stock market indices, peak electricity load and net hourly foreign exchange client trade volume. The proposed method ...

  10. Time Series Analysis and Forecasting by Example

    CERN Document Server

    Bisgaard, Soren

    2011-01-01

    An intuition-based approach enables you to master time series analysis with ease Time Series Analysis and Forecasting by Example provides the fundamental techniques in time series analysis using various examples. By introducing necessary theory through examples that showcase the discussed topics, the authors successfully help readers develop an intuitive understanding of seemingly complicated time series models and their implications. The book presents methodologies for time series analysis in a simplified, example-based approach. Using graphics, the authors discuss each presented example in

  11. Time series forecasting based on deep extreme learning machine

    NARCIS (Netherlands)

    Guo, Xuqi; Pang, Y.; Yan, Gaowei; Qiao, Tiezhu; Yang, Guang-Hong; Yang, Dan

    2017-01-01

    Multi-layer Artificial Neural Networks (ANN) have attracted widespread attention as a new method for time series forecasting due to their ability to approximate any nonlinear function. In this paper, a new local time series prediction model is established with the nearest neighbor domain theory, in

  12. Item Anomaly Detection Based on Dynamic Partition for Time Series in Recommender Systems.

    Science.gov (United States)

    Gao, Min; Tian, Renli; Wen, Junhao; Xiong, Qingyu; Ling, Bin; Yang, Linda

    2015-01-01

    In recent years, recommender systems have become an effective method to process information overload. However, recommendation technology still suffers from many problems. One of these problems is shilling attacks: attackers inject spam user profiles to disturb the list of recommended items. There are two characteristics of all types of shilling attacks: 1) Item abnormality: the rating of target items is always maximum or minimum; and 2) Attack promptness: it takes only a very short period of time to inject attack profiles. Some papers have proposed item anomaly detection methods based on these two characteristics, but their detection rate, false alarm rate, and universality need to be further improved. To solve these problems, this paper proposes an item anomaly detection method based on dynamic partitioning for time series. This method first dynamically partitions item-rating time series based on important points. Then, the chi-square distribution (χ²) is used to detect abnormal intervals. The experimental results on MovieLens 100K and 1M indicate that this approach has a high detection rate and a low false alarm rate and is stable toward different attack models and filler sizes.

  14. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Meng Li

    2015-01-01

    Full Text Available This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of the phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model presented in this paper is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares it with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
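
    The phase space reconstruction step can be sketched as a plain Takens-style delay embedding with parameters (τ, m); the LS-SVM regression and the membrane computing optimization of (τ, m, γ, σ) are not reproduced here.

```python
import numpy as np

def delay_embed(x, tau, m):
    """Phase space reconstruction: rows are delay vectors [x(k), x(k+tau), ..., x(k+(m-1)tau)]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (tau, m)")
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

# Toy usage: embed a noisy sine and pair each delay vector with the next sample as target
tau, m = 5, 4
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
X = delay_embed(x, tau, m)
y = x[(m - 1) * tau + 1:]          # the sample following the end of each delay vector
X = X[:len(y)]
print(X.shape, y.shape)            # these (X, y) pairs would train the LS-SVM predictor
```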

  15. Classification of Small-Scale Eucalyptus Plantations Based on NDVI Time Series Obtained from Multiple High-Resolution Datasets

    Directory of Open Access Journals (Sweden)

    Hailang Qiao

    2016-02-01

    Full Text Available Eucalyptus, a short-rotation plantation, has been expanding rapidly in southeast China in recent years owing to its short growth cycle and high yield of wood. Effective identification of eucalyptus, therefore, is important for monitoring land use changes and investigating environmental quality. For this article, we used remote sensing images over 15 years (one per year) with a 30-m spatial resolution, including Landsat 5 Thematic Mapper images, Landsat 7 Enhanced Thematic Mapper images, and HJ 1A/1B images. These data were used to construct a 15-year Normalized Difference Vegetation Index (NDVI) time series for several cities in Guangdong Province, China. Eucalyptus reference NDVI time series sub-sequences, including one-year-long and two-year-long growing periods, were acquired using eucalyptus samples collected in the study region. In order to compensate for the discontinuity of the NDVI time series that is a consequence of the relatively coarse temporal resolution, we developed an inverted triangle area methodology. Using this methodology, the images were classified on the basis of the matching degree between the NDVI time series and the two reference NDVI time series sub-sequences during the growing period of the eucalyptus rotations. Three additional methodologies (Bounding Envelope, City Block, and Standardized Euclidian Distance) were also tested and used as a comparison group. Threshold coefficients for the algorithms were adjusted using commission-omission error criteria. The results show that the triangle area methodology outperformed the other methodologies in classifying eucalyptus plantations. Threshold coefficients and an optimal discriminant function were determined using a mosaic photograph that had been taken by an unmanned aerial vehicle platform. Good stability was found as we performed further validation using multiple-year data from the high-resolution Gaofen Satellite 1 (GF-1) observations of larger regions. Eucalyptus planting dates

  16. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    International Nuclear Information System (INIS)

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

    Sample size and computational uncertainty were varied in order to investigate the sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. The mean range, standard deviation range, and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties for 10 n-sample replicates was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor k_eff was accomplished by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed in the Monte Carlo process of the MCNPX code. (author)

  17. A Non-standard Empirical Likelihood for Time Series

    DEFF Research Database (Denmark)

    Nordman, Daniel J.; Bunzel, Helle; Lahiri, Soumendra N.

    Standard blockwise empirical likelihood (BEL) for stationary, weakly dependent time series requires specifying a fixed block length as a tuning parameter for setting confidence regions. This aspect can be difficult and impacts coverage accuracy. As an alternative, this paper proposes a new version of BEL based on a simple, though non-standard, data-blocking rule which uses a data block of every possible length. Consequently, the method involves no block selection and is also anticipated to exhibit better coverage performance. Its non-standard blocking scheme, however, induces non-standard asymptotics and requires a significantly different development compared to standard BEL. We establish the large-sample distribution of log-ratio statistics from the new BEL method for calibrating confidence regions for mean or smooth function parameters of time series. This limit law is not the usual chi...

  18. A novel PMT test system based on waveform sampling

    Science.gov (United States)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with the traditional test system based on a QDC, TDC, and scaler, a test system based on waveform sampling is constructed for signal sampling of the 8" R5912 and the 20" R12860 Hamamatsu PMTs in different energy states from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, the data acquisition software based on LabVIEW is developed and runs with a parallel mechanism. The analysis algorithm is realized in LabVIEW, and the spectra of charge, amplitude, signal width and rise time are analyzed offline. The results from the Charge-to-Digital Converter, Time-to-Digital Converter, and waveform sampling are discussed in a detailed comparison.
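
    A minimal offline pulse analysis of one sampled PMT waveform might look like the sketch below (baseline subtraction, then charge, amplitude, width and rise time); this is a generic illustration, not the authors' LabVIEW analysis code.

```python
import numpy as np

def analyze_pulse(waveform_mV, dt_ns, baseline_samples=50):
    """Extract charge, amplitude, width and rise time from one negative-going PMT pulse."""
    w = np.asarray(waveform_mV, dtype=float)
    baseline = w[:baseline_samples].mean()       # baseline from the pre-pulse region
    pulse = baseline - w                         # flip so the pulse is positive
    amplitude = pulse.max()
    peak = pulse.argmax()
    charge = pulse.sum() * dt_ns                 # integral in mV*ns (divide by 50 ohm for pC)
    above_half = np.where(pulse > 0.5 * amplitude)[0]
    width = (above_half[-1] - above_half[0]) * dt_ns     # full width at half maximum
    lead = pulse[:peak + 1]                      # leading edge: 10% -> 90% rise time
    rise = (np.argmax(lead > 0.9 * amplitude) - np.argmax(lead > 0.1 * amplitude)) * dt_ns
    return {"charge": charge, "amplitude": amplitude, "width": width, "rise_time": rise}

# Toy usage: a synthetic single-photoelectron-like pulse sampled at 1 GS/s (dt = 1 ns)
t = np.arange(200)
wave = 1000.0 - 30.0 * np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
print(analyze_pulse(wave, dt_ns=1.0))
```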

  19. Complexity analysis of the turbulent environmental fluid flow time series

    Science.gov (United States)

    Mihailović, D. T.; Nikolić-Đorić, E.; Drešković, N.; Mimić, G.

    2014-02-01

    We have used the Kolmogorov complexities, sample entropy and permutation entropy to quantify the degree of randomness in river flow time series of two mountain rivers in Bosnia and Herzegovina, representing the turbulent environmental fluid, for the period 1926-1990. In particular, we have examined the monthly river flow time series from two rivers (the Miljacka and the Bosnia) in the mountain part of their flow and then calculated the Kolmogorov complexity (KL) based on the Lempel-Ziv Algorithm (LZA) (lower-KLL and upper-KLU), sample entropy (SE) and permutation entropy (PE) values for each time series. The results indicate that the KLL, KLU, SE and PE values in the two rivers are close to each other regardless of the amplitude differences in their monthly flow rates. We have illustrated the changes in mountain river flow complexity by experiments using (i) the data set for the Bosnia River and (ii) anticipated human activities and projected climate changes. We have explored the sensitivity of the considered measures as a function of the length of the time series. In addition, we have divided the period 1926-1990 into three subintervals: (a) 1926-1945, (b) 1946-1965, (c) 1966-1990, and calculated the KLL, KLU, SE and PE values for the various time series in these subintervals. It is found that during the period 1946-1965 there is a decrease in their complexities, and corresponding changes in the SE and PE, in comparison to the period 1926-1990. This complexity loss may be primarily attributed to (i) human interventions, after the Second World War, on these two rivers because of their use for water consumption and (ii) climate change in recent times.
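
    Of the measures used above, permutation entropy is the simplest to sketch; the version below is a plain Bandt-Pompe estimator (the Lempel-Ziv-based Kolmogorov complexity and sample entropy are not shown).

```python
import numpy as np
from itertools import permutations
from math import factorial, log

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Permutation (ordinal-pattern) entropy of a 1-D series."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[tuple(int(k) for k in np.argsort(window))] += 1
    probs = np.array([c for c in counts.values() if c > 0], dtype=float) / n
    h = -np.sum(probs * np.log(probs))
    return h / log(factorial(order)) if normalize else h

# Irregular (white-noise-like) flow gives values near 1; a smooth monotone series gives 0
rng = np.random.default_rng(2)
print(round(permutation_entropy(rng.standard_normal(5000)), 3))       # close to 1
print(round(permutation_entropy(np.arange(5000, dtype=float)), 3))    # 0.0
```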

  20. Possibilities for automating coal sampling

    Energy Technology Data Exchange (ETDEWEB)

    Helekal, J; Vankova, J

    1987-11-01

    Outlines sampling equipment in use (AVR-, AVP-, AVN- and AVK-series samplers and RDK- and RDH-series separators produced by the Coal Research Institute, Ostrava; extractors, crushers and separators produced by ORGREZ). The Ostrava equipment covers bituminous coal needs while ORGREZ provides equipment for energy coal requirements. This equipment is designed to handle coal up to 200 mm in size at a throughput of up to 1200 t/h. Automation of sampling equipment is foreseen.

  1. Dimension reduction of frequency-based direct Granger causality measures on short time series.

    Science.gov (United States)

    Siggiridou, Elsa; Kimiskidis, Vasilios K; Kugiumtzis, Dimitris

    2017-09-01

    The mainstream in the estimation of effective brain connectivity relies on Granger causality measures in the frequency domain. If the measure is meant to capture direct causal effects accounting for the presence of other observed variables, as in multi-channel electroencephalograms (EEG), typically the fit of a vector autoregressive (VAR) model on the multivariate time series is required. For short time series of many variables, the estimation of the VAR may not be stable, requiring dimension reduction that results in restricted or sparse VAR models. The restricted VAR obtained by the modified backward-in-time selection method (mBTS) is adapted to the generalized partial directed coherence (GPDC), termed restricted GPDC (RGPDC). Dimension reduction on other frequency-based measures, such as the direct directed transfer function (dDTF), is straightforward. First, a simulation study using linear stochastic multivariate systems is conducted and RGPDC is favorably compared to GPDC on short time series in terms of sensitivity and specificity. Then the two measures are tested for their ability to detect changes in brain connectivity during an epileptiform discharge (ED) from multi-channel scalp EEG. It is shown that RGPDC identifies the connectivity structure of the simulated systems, as well as changes in brain connectivity, better than GPDC, and is less dependent on the free parameter of the VAR order. The proposed dimension reduction in frequency measures based on VAR constitutes an appropriate strategy to estimate brain networks reliably within short time windows. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. A Data-Driven Modeling Strategy for Smart Grid Power Quality Coupling Assessment Based on Time Series Pattern Matching

    Directory of Open Access Journals (Sweden)

    Hao Yu

    2018-01-01

    Full Text Available This study introduces a data-driven modeling strategy for smart grid power quality (PQ) coupling assessment based on time series pattern matching to quantify the influence of single and integrated disturbances among nodes in different pollution patterns. Periodic and random PQ patterns are constructed by using multidimensional frequency-domain decomposition for all disturbances. A multidimensional piecewise linear representation based on local extreme points is proposed to extract the pattern features of single and integrated disturbances in consideration of disturbance variation trend and severity. A feature distance of pattern (FDP) is developed to implement pattern matching on univariate PQ time series (UPQTS) and multivariate PQ time series (MPQTS) to quantify the influence of single and integrated disturbances among nodes in the pollution patterns. Case studies on a 14-bus distribution system are performed and analyzed; the accuracy and applicability of the FDP in smart grid PQ coupling assessment are verified by comparison with other time series pattern matching methods.

  3. Clustering of financial time series

    Science.gov (United States)

    D'Urso, Pierpaolo; Cappelli, Carmela; Di Lallo, Dario; Massari, Riccardo

    2013-05-01

    This paper addresses the topic of classifying financial time series in a fuzzy framework, proposing two fuzzy clustering models both based on GARCH models. In general, clustering of financial time series, due to their peculiar features, needs the definition of suitable distance measures. To this aim, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, in the framework of a partitioning around medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on the partitioning around medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance based on estimated GARCH parameters and covariances that takes into account the information about the volatility structure of the time series. In order to illustrate the merits of the proposed fuzzy approaches, an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, also comparing the fuzzy models with their crisp versions.

  4. [Extracting THz absorption coefficient spectrum based on accurate determination of sample thickness].

    Science.gov (United States)

    Li, Zhi; Zhang, Zhao-hui; Zhao, Xiao-yan; Su, Hai-xia; Yan, Fang

    2012-04-01

    Extracting the absorption spectrum in the THz band is one of the important aspects of THz applications. A sample's absorption coefficient has a complex nonlinear relationship with its thickness. However, as it is not convenient to measure the thickness directly, the absorption spectrum is often determined incorrectly. Based on the method proposed by Duvillaret, which was used to precisely determine the thickness of LiNbO3, the approach to measuring the absorption coefficient spectra of glutamine and histidine in the frequency range from 0.3 to 2.6 THz (1 THz = 10^12 Hz) was improved in this paper. In order to validate the correctness of this absorption spectrum, we designed a series of experiments to compare the linearity of the absorption coefficient of one amino acid at different concentrations. The results indicate that, in agreement with the Lambert-Beer law, the absorption coefficient spectrum of the amino acid obtained from the improved algorithm shows better linearity with concentration than that from the common algorithm, which can be the basis of quantitative analysis in further research.

  5. Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

    Directory of Open Access Journals (Sweden)

    Saeed Mian Qaisar

    2009-01-01

    Full Text Available The recent sophistication in the areas of mobile systems and sensor networks demands more and more processing resources. In order to maintain system autonomy, energy saving is becoming one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal are focused on improving embedded systems design and battery technology, but very few studies target exploiting the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. It is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive-rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. Indeed, the principle is to intelligently exploit the signal's local characteristics (which are usually never considered) to filter only the relevant signal parts, by employing filters of the relevant order. This idea leads towards a drastic gain in computational efficiency and hence in processing power when compared to classical techniques.
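
    A minimal sketch of the level-crossing idea (a sample is produced only when the signal crosses one of a set of uniformly spaced levels), without the adaptive-rate filtering stage described in the paper:

```python
import numpy as np

def level_crossing_sample(t, x, n_levels=16):
    """Return the (time, value) pairs where the signal crosses one of n_levels uniform levels."""
    levels = np.linspace(x.min(), x.max(), n_levels)
    times, values = [], []
    for i in range(1, len(x)):
        lo, hi = sorted((x[i - 1], x[i]))
        for L in levels[(levels > lo) & (levels <= hi)]:
            # linear interpolation of the crossing instant between the two grid points
            frac = (L - x[i - 1]) / (x[i] - x[i - 1])
            times.append(t[i - 1] + frac * (t[i] - t[i - 1]))
            values.append(L)
    return np.array(times), np.array(values)

# Toy usage: a bursty signal yields many samples in active parts and few in quiet parts
t = np.linspace(0, 1, 5000)
x = np.where(t < 0.5, 0.05 * np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 40 * t))
ts, vs = level_crossing_sample(t, x)
print(len(ts), "level-crossing samples instead of", len(t), "uniform samples")
```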

  6. Case Series Investigations in Cognitive Neuropsychology

    Science.gov (United States)

    Schwartz, Myrna F.; Dell, Gary S.

    2011-01-01

    Case series methodology involves the systematic assessment of a sample of related patients, with the goal of understanding how and why they differ from one another. This method has become increasingly important in cognitive neuropsychology, which has long been identified with single-subject research. We review case series studies dealing with impaired semantic memory, reading, and language production, and draw attention to the affinity of this methodology for testing theories that are expressed as computational models and for addressing questions about neuroanatomy. It is concluded that case series methods usefully complement single-subject techniques. PMID:21714756

  7. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows different polynomial regression models to be activated smoothly or abruptly. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real-world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  8. Short-Term Bus Passenger Demand Prediction Based on Time Series Model and Interactive Multiple Model Approach

    Directory of Open Access Journals (Sweden)

    Rui Xue

    2015-01-01

    Full Text Available Although bus passenger demand prediction has attracted increased attention during recent years, limited research has been conducted in the context of short-term passenger demand forecasting. This paper proposes an interactive multiple model (IMM) filter algorithm-based model to predict short-term passenger demand. After being aggregated into 15-min intervals, passenger demand data collected from a busy bus route over four months were used to generate time series. Considering that passenger demand exhibits various characteristics on different time scales, three time series were developed, named the weekly, daily, and 15-min time series. After correlation, periodicity, and stationarity analyses, time series models were constructed. In particular, the heteroscedasticity of the time series was explored to achieve better prediction performance. Finally, the IMM filter algorithm was applied to combine the individual forecasting models and dynamically predict passenger demand for the next interval. Different error indices were adopted for the analyses of the individual and hybrid models. The performance comparison indicates that the hybrid model forecasts are superior to the individual ones in accuracy. The findings of this study are of theoretical and practical significance for bus scheduling.

  9. Educating for Active Citizenship: Service-Learning, School-Based Service and Youth Civic Engagement. Youth Helping America Series

    Science.gov (United States)

    Spring, Kimberly; Dietz, Nathan; Grimm, Robert, Jr.

    2006-01-01

    This brief is the second in the Youth Helping America Series, a series of reports based on data from the Youth Volunteering and Civic Engagement Survey, a national survey of 3,178 American youth between the ages of 12 and 18 that was conducted by the Corporation for National and Community Service in 2005 in collaboration with the U.S. Census…

  10. Control charts for location based on different sampling schemes

    NARCIS (Netherlands)

    Mehmood, R.; Riaz, M.; Does, R.J.M.M.

    2013-01-01

    Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set

  11. Individual and pen-based oral fluid sampling: A welfare-friendly sampling method for group-housed gestating sows.

    Science.gov (United States)

    Pol, Françoise; Dorenlor, Virginie; Eono, Florent; Eudier, Solveig; Eveno, Eric; Liégard-Vanhecke, Dorine; Rose, Nicolas; Fablet, Christelle

    2017-11-01

    The aims of this study were to assess the feasibility of individual and pen-based oral fluid sampling (OFS) in 35 pig herds with group-housed sows, compare these methods to blood sampling, and assess the factors influencing the success of sampling. Individual samples were collected from at least 30 sows per herd. Pen-based OFS was performed using devices placed in at least three pens for 45 min. Information related to the farm, the sows, and their living conditions were collected. Factors significantly associated with the duration of sampling and the chewing behaviour of sows were identified by logistic regression. Individual OFS took 2 min 42 s on average; the type of floor, swab size, and operator were associated with a sampling time >2 min. Pen-based OFS was obtained from 112 devices (62.2%). The type of floor, parity, pen-level activity, and type of feeding were associated with chewing behaviour. Pen activity was associated with the latency to interact with the device. The type of floor, gestation stage, parity, group size, and latency to interact with the device were associated with a chewing time >10 min. After 15, 30 and 45 min of pen-based OFS, 48%, 60% and 65% of the sows were lying down, respectively. The time spent after the beginning of sampling, genetic type, and time elapsed since the last meal were associated with 50% of the sows lying down at one time point. The mean time to blood sample the sows was 1 min 16 s and 2 min 52 s if the number of operators required was considered in the sampling time estimation. The genetic type, parity, and type of floor were significantly associated with a sampling time higher than 1 min 30 s. This study shows that individual OFS is easy to perform in group-housed sows by a single operator, even though straw-bedded animals take longer to sample than animals housed on slatted floors, and suggests some guidelines to optimise pen-based OFS success. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Testing for Stationarity and Nonlinearity of Daily Streamflow Time Series Based on Different Statistical Tests (Case Study: Upstream Basin Rivers of Zarrineh Roud Dam

    Directory of Open Access Journals (Sweden)

    Farshad Fathian

    2017-02-01

    Full Text Available Introduction: Time series models are one of the most important tools for investigating and modeling hydrological processes in order to solve problems related to water resources management. Many hydrological time series show nonstationary and nonlinear behaviors. One of the important hydrological modeling tasks is determining the existence of nonstationarity and the way through which stationarity can be reached accordingly. On the other hand, streamflow processes are usually considered nonlinear mechanisms, while in many studies linear time series models are used to model streamflow time series. However, it is not clear what kind of nonlinearity acts underlying the streamflow processes and how intensive it is. Materials and Methods: Streamflow time series of 6 hydro-gauge stations located in the upstream basin rivers of the Zarrineh Roud dam (located in the southern part of the Urmia Lake basin) have been considered to investigate stationarity and nonlinearity. All data series used here start from January 1, 1997, and end on December 31, 2011. In this study, stationarity is tested by the ADF and KPSS tests, and nonlinearity is tested by the BDS, Keenan and TLRT tests. The stationarity test is carried out with two methods. The first method is the augmented Dickey-Fuller (ADF) unit root test, first proposed by Dickey and Fuller (1979) and modified by Said and Dickey (1984), which examines the presence of unit roots in time series. The second method is the KPSS test, proposed by Kwiatkowski et al. (1992), which examines the stationarity around a deterministic trend (trend stationarity) and the stationarity around a fixed level (level stationarity). The BDS test (Brock et al., 1996) is a nonparametric method for testing the serial independence and nonlinear structure in time series based on the correlation integral of the series. The null hypothesis is that the time series sample comes from an independent identically distributed (i.i.d.) process. The alternative hypothesis
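
    For illustration, the two stationarity tests can be run with statsmodels as sketched below (the BDS, Keenan and TLRT nonlinearity tests are not shown); this is a generic usage example, not the study's own code.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

def stationarity_report(series, name="series"):
    """ADF (H0: unit root) and KPSS (H0: level stationarity) on one streamflow-like series."""
    adf_stat, adf_p, *_ = adfuller(series, autolag="AIC")
    kpss_stat, kpss_p, *_ = kpss(series, regression="c", nlags="auto")
    print(f"{name}: ADF p = {adf_p:.3f} (small -> stationary), "
          f"KPSS p = {kpss_p:.3f} (small -> nonstationary)")

rng = np.random.default_rng(3)
stationarity_report(rng.standard_normal(500), "white noise")              # stationary
stationarity_report(np.cumsum(rng.standard_normal(500)), "random walk")   # nonstationary
```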

  13. Analog series-based scaffolds: computational design and exploration of a new type of molecular scaffolds for medicinal chemistry

    Science.gov (United States)

    Dimova, Dilyana; Stumpfe, Dagmar; Hu, Ye; Bajorath, Jürgen

    2016-01-01

    Aim: Computational design of and systematic search for a new type of molecular scaffolds termed analog series-based scaffolds. Materials & methods: From currently available bioactive compounds, analog series were systematically extracted, key compounds identified and new scaffolds isolated from them. Results: Using our computational approach, more than 12,000 scaffolds were extracted from bioactive compounds. Conclusion: A new scaffold definition is introduced and a computational methodology developed to systematically identify such scaffolds, yielding a large freely available scaffold knowledge base. PMID:28116132

  14. Remote Sensing Based Two-Stage Sampling for Accuracy Assessment and Area Estimation of Land Cover Changes

    Directory of Open Access Journals (Sweden)

    Heinz Gallaun

    2015-09-01

    Full Text Available Land cover change processes are accelerating at the regional to global level. The remote sensing community has developed reliable and robust methods for wall-to-wall mapping of land cover changes; however, land cover changes often occur at rates below the mapping errors. In the current publication, we propose a cost-effective approach to complement wall-to-wall land cover change maps with a sampling approach, which is used for accuracy assessment and accurate estimation of areas undergoing land cover changes, including provision of confidence intervals. We propose a two-stage sampling approach in order to keep accuracy, efficiency, and effort of the estimations in balance. Stratification is applied in both stages in order to gain control over the sample size allocated to rare land cover change classes on the one hand and the cost constraints for very high resolution reference imagery on the other. Bootstrapping is used to complement the accuracy measures and the area estimates with confidence intervals. The area estimates and verification estimations rely on a high quality visual interpretation of the sampling units based on time series of satellite imagery. To demonstrate the cost-effective operational applicability of the approach we applied it for assessment of deforestation in an area characterized by frequent cloud cover and very low change rate in the Republic of Congo, which makes accurate deforestation monitoring particularly challenging.
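
    A much-simplified sketch of the estimation step, assuming a single-stage stratified sample of 0/1 change labels per stratum and a percentile bootstrap for the confidence interval (the paper's actual design is two-stage and relies on visual interpretation of very high resolution imagery):

```python
import numpy as np

def stratified_area_estimate(strata, n_boot=1000, seed=0):
    """Estimate total change area and a 95% bootstrap CI from stratified reference samples.

    strata: list of dicts with keys
        'area'   - total mapped area of the stratum (e.g., km^2)
        'labels' - 0/1 reference labels (change / no change) of its sampled units
    """
    rng = np.random.default_rng(seed)

    def estimate(label_sets):
        # stratum area times the sampled change proportion, summed over strata
        return sum(s["area"] * np.mean(labels) for s, labels in zip(strata, label_sets))

    observed = estimate([s["labels"] for s in strata])
    boots = []
    for _ in range(n_boot):
        resampled = [rng.choice(s["labels"], size=len(s["labels"]), replace=True) for s in strata]
        boots.append(estimate(resampled))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return observed, (lo, hi)

# Toy usage: a small "likely change" stratum and a large "stable" stratum
strata = [
    {"area": 120.0, "labels": np.array([1] * 30 + [0] * 20)},   # 60% change in samples
    {"area": 4000.0, "labels": np.array([1] * 2 + [0] * 98)},   # rare change in stable stratum
]
print(stratified_area_estimate(strata))
```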

  15. A case series of family-based treatment for adolescents with atypical anorexia nervosa.

    Science.gov (United States)

    Hughes, Elizabeth K; Le Grange, Daniel; Court, Andrew; Sawyer, Susan M

    2017-04-01

    The aim of this case series was to examine engagement in and outcomes of family-based treatment (FBT) for adolescents with DSM-5 atypical AN, that is, adolescents who were not underweight at presentation. Consecutive referrals for FBT of adolescents with atypical AN to a specialist child and adolescent eating disorder program were examined. Engagement in treatment (i.e., dose of treatment, completion rate) and changes in psychological symptomatology (i.e., eating disorder symptoms, depressive symptoms, self-esteem, obsessive compulsiveness), weight, and menstrual function were examined. The need for additional interventions (i.e., hospitalization and medication) and estimated remission rates were also examined. The sample comprised 42 adolescents aged 12-18 years (88% female). Engagement in FBT was high, with 83% completing at least half the treatment dose. There were significant decreases in eating disorder and depressive symptoms during FBT. Adolescents who were not admitted to hospital prior to FBT gained some weight (M = 3.4 kg), while those who were admitted did not gain weight during FBT (M = 0.2 kg). However, more research is needed into systematic adaptations of FBT and other treatments that could improve overall remission rates for adolescents with atypical AN. © 2017 Wiley Periodicals, Inc.

  16. A SPIRAL-BASED DOWNSCALING METHOD FOR GENERATING 30 M TIME SERIES IMAGE DATA

    Directory of Open Access Journals (Sweden)

    B. Liu

    2017-09-01

    Full Text Available The spatial detail and updating frequency of land cover data are important factors influencing land surface dynamic monitoring applications at high spatial resolution. However, the fragmented patches and seasonal variability of some land cover types (e.g., small crop fields, wetlands) make the generation of land cover data labor-intensive and difficult. Utilizing high spatial resolution multi-temporal image data is a possible solution. Unfortunately, the spatial and temporal resolution of available remote sensing data like the Landsat or MODIS datasets can hardly satisfy the minimum mapping unit and frequency of current land cover mapping/updating at the same time. The generation of high-resolution time series may be a compromise to cover the shortage in the land cover updating process. One popular way is to downscale multi-temporal MODIS data with other high spatial resolution auxiliary data like Landsat. But the usual manner of downscaling a pixel based on a window may lead to an underdetermined problem in heterogeneous areas, resulting in uncertainty for some high spatial resolution pixels. Therefore, the downscaled multi-temporal data can hardly reach as high a spatial resolution as Landsat data. A spiral-based method was introduced to downscale low spatial, high temporal resolution image data to high spatial, high temporal resolution image data. By searching for similar pixels in the adjacent region along the spiral, the pixel set was built up pixel by pixel. The underdetermined problem is largely prevented when solving the linear system with the constructed pixel set. With the help of ordinary least squares, the method inverted the endmember values of the linear system. The high spatial resolution image was reconstructed on the basis of the high spatial resolution class map and the endmember values, band by band. Then, the high spatial resolution time series was formed with these
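
    The core ordinary-least-squares step can be sketched as solving, for each band, a small linear system that maps per-class area fractions inside each coarse pixel to per-class (endmember) values; the spiral-based construction of the pixel set is not reproduced here.

```python
import numpy as np

def solve_endmembers(fractions, coarse_values):
    """Least-squares endmember values for one band.

    fractions     : (n_coarse, n_classes) per-class area fractions inside each coarse pixel
    coarse_values : (n_coarse,) observed coarse-pixel values (e.g., MODIS NDVI)
    Returns the per-class (endmember) values that best reproduce the coarse observations.
    """
    endmembers, *_ = np.linalg.lstsq(fractions, coarse_values, rcond=None)
    return endmembers

# Toy usage: 6 coarse pixels, 3 land-cover classes with true NDVI 0.2, 0.5, 0.8
rng = np.random.default_rng(4)
F = rng.dirichlet(np.ones(3), size=6)                 # rows sum to 1 (class fractions)
true_values = np.array([0.2, 0.5, 0.8])
y = F @ true_values + 0.01 * rng.standard_normal(6)   # noisy coarse observations
print(np.round(solve_endmembers(F, y), 3))
# A fine-resolution pixel of class k would then receive the k-th endmember value, band by band.
```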

  17. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.

    Science.gov (United States)

    Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
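
    A minimal software echo state network (plain floating-point reservoir with a ridge-regression readout) illustrates the RC idea; the paper's contribution, the stochastic-computing FPGA implementation, is not reproduced here.

```python
import numpy as np

def esn_one_step_rmse(series, n_res=200, rho=0.9, ridge=1e-6, washout=100, seed=0):
    """One-step-ahead forecasting with a minimal echo state network (in-sample RMSE)."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, size=n_res)
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W *= rho / np.abs(np.linalg.eigvals(W)).max()   # scale spectral radius below 1
    u, y = series[:-1], series[1:]                  # input and one-step-ahead target
    states = np.zeros((len(u), n_res))
    s = np.zeros(n_res)
    for t, ut in enumerate(u):
        s = np.tanh(W @ s + W_in * ut)              # reservoir state update
        states[t] = s
    X, Y = states[washout:], y[washout:]
    # Ridge-regression readout via the normal equations (X^T X + lambda I) w = X^T Y
    w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
    return np.sqrt(np.mean((X @ w_out - Y) ** 2))

# Toy usage on a noisy sine wave
t = np.linspace(0, 40 * np.pi, 4000)
series = np.sin(t) + 0.05 * np.random.default_rng(5).standard_normal(t.size)
print("one-step RMSE:", round(esn_one_step_rmse(series), 4))
```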

  18. High efficiency graphene coated copper based thermocells connected in series

    Science.gov (United States)

    Sindhuja, Mani; Indubala, Emayavaramban; Sudha, Venkatachalam; Harinipriya, Seshadri

    2018-04-01

    Conversion of low-grade waste heat into electricity has so far been studied employing single thermocells or flow cells. Thermocells based on graphene-coated copper electrodes and connected in series displayed a relatively high efficiency of thermal energy harvesting. A maximum power output of 49.2 W/m2, normalized to the electrode cross-sectional area, is obtained at an inter-electrode temperature difference of 60 °C. A relative Carnot efficiency of 20.2% is obtained from the device. The importance of reducing the mass transfer and ion transfer resistance to improve the efficiency of the device is demonstrated. Degradation studies confirmed mild oxidation of the copper foil due to corrosion caused by the electrolyte.

  19. Analysis of financial time series using multiscale entropy based on skewness and kurtosis

    Science.gov (United States)

    Xu, Meng; Shang, Pengjian

    2018-01-01

    There is great interest in studying the dynamic characteristics of financial time series of daily stock closing prices in different regions. Multi-scale entropy (MSE) is effective mainly in quantifying the complexity of time series on different time scales. This paper applies a new method for assessing financial stability from the perspective of MSE based on skewness and kurtosis. To better understand the most suitable coarse-graining method for different kinds of stock indexes, we take into account the developmental characteristics of Asian, North American and European stock markets. We study the volatility of different financial time series and analyze the similarities and differences of the coarse-grained time series from the perspective of skewness and kurtosis. A correspondence between the entropy values of the stock sequences and the degree of stability of the financial markets was observed. The three stocks with particular characteristics among the eight stock sequences were discussed, and the findings match the graphical results of applying the MSE method. A comparative study is conducted over synthetic and real-world data. The results show that the modified method is more sensitive to changes in dynamics and carries more valuable information. At the same time, the skewness- and kurtosis-based discrimination is found to be clear as well as more stable.
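
    The coarse-graining step generalized to higher moments can be sketched as below (per-window mean, skewness or kurtosis at each scale); the subsequent sample-entropy computation that completes the MSE curve is not shown, and the exact coarse-graining used by the authors may differ.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def coarse_grain(x, scale, stat="mean"):
    """Coarse-grain a series at a given scale using the chosen per-window statistic."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    windows = x[:n].reshape(-1, scale)
    if stat == "mean":
        return windows.mean(axis=1)
    if stat == "skewness":
        return skew(windows, axis=1)
    if stat == "kurtosis":
        return kurtosis(windows, axis=1)
    raise ValueError("stat must be 'mean', 'skewness' or 'kurtosis'")

# Toy usage: coarse-grained daily-return-like series at scales 5, 10 and 20
rng = np.random.default_rng(6)
returns = rng.standard_normal(4000)
for scale in (5, 10, 20):
    print(scale, len(coarse_grain(returns, scale, stat="skewness")))
# Each coarse-grained series would then be passed to a sample-entropy estimator to build the MSE curve.
```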

  20. Improving Teachers' Knowledge of Functional Assessment-Based Interventions: Outcomes of a Professional Development Series

    Science.gov (United States)

    Lane, Kathleen Lynne; Oakes, Wendy Peia; Powers, Lisa; Diebold, Tricia; Germer, Kathryn; Common, Eric A.; Brunsting, Nelson

    2015-01-01

    This paper provides outcomes of a study examining the effectiveness of a year-long professional development training series designed to support in-service educators in learning a systematic approach to functional assessment-based interventions developed by Umbreit and colleagues (2007) that has met with demonstrated success when implemented with…

  1. Time Series Data Analysis of Wireless Sensor Network Measurements of Temperature.

    Science.gov (United States)

    Bhandari, Siddhartha; Bergmann, Neil; Jurdak, Raja; Kusy, Branislav

    2017-05-26

    Wireless sensor networks have gained significant traction in environmental signal monitoring and analysis. The cost or lifetime of the system typically depends on the frequency at which environmental phenomena are monitored. If sampling rates are reduced, energy is saved. Using empirical datasets collected from environmental monitoring sensor networks, this work performs time series analyses of measured temperature time series. Unlike previous works which have concentrated on suppressing the transmission of some data samples by time-series analysis but still maintaining high sampling rates, this work investigates reducing the sampling rate (and sensor wake up rate) and looks at the effects on accuracy. Results show that the sampling period of the sensor can be increased up to one hour while still allowing intermediate and future states to be estimated with interpolation RMSE less than 0.2 °C and forecasting RMSE less than 1 °C.

  2. RF Sub-sampling Receiver Architecture based on Milieu Adapting Techniques

    DEFF Research Database (Denmark)

    Behjou, Nastaran; Larsen, Torben; Jensen, Ole Kiel

    2012-01-01

    A novel sub-sampling-based architecture is proposed which has the ability to reduce the problem of image distortion and to improve the signal-to-noise ratio significantly. The technique is based on sensing the environment and adapting the sampling rate of the receiver to the best possible...

  3. Detrended fluctuation analysis based on higher-order moments of financial time series

    Science.gov (United States)

    Teng, Yue; Shang, Pengjian

    2018-01-01

    In this paper, a generalized method of detrended fluctuation analysis (DFA) is proposed as a new measure to assess the complexity of a complex dynamical system such as the stock market. We extend DFA and local scaling DFA to higher moments such as skewness and kurtosis (labeled SMDFA and KMDFA), so as to investigate the volatility scaling property of financial time series. Simulations are conducted over synthetic and financial data to provide a comparative study. We further report the results of volatility behaviors in three American, three Chinese and three European stock markets by using the DFA and LSDFA methods based on higher moments. They demonstrate the dynamic behaviors of the time series in different aspects, which can quantify the changes of complexity for stock market data and provide us with more meaningful information than a single exponent. The results also reveal some higher-moment volatility and higher-moment multiscale volatility details that cannot be obtained using the traditional DFA method.
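
    For reference, standard (second-moment) DFA can be sketched as below; the skewness- and kurtosis-based extensions (SMDFA, KMDFA) replace the per-window variance of the detrended profile with higher-moment statistics and are not shown here.

```python
import numpy as np

def dfa(x, scales=None, order=1):
    """Standard detrended fluctuation analysis; returns the scaling exponent alpha."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                       # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(np.log10(8), np.log10(len(x) // 4), 20).astype(int))
    fluct = []
    for s in scales:
        f2 = []
        for k in range(len(y) // s):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local polynomial trend
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

# White noise gives alpha near 0.5; its cumulative sum (random walk) gives alpha near 1.5
rng = np.random.default_rng(7)
print(round(dfa(rng.standard_normal(8000)), 2),
      round(dfa(np.cumsum(rng.standard_normal(8000))), 2))
```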

  4. Regional geochemical maps of the Tonopah 1 degree by 2 degrees Quadrangle, Nevada, based on samples of stream sediment and nonmagnetic heavy-mineral concentrate

    Science.gov (United States)

    Nash, J.T.; Siems, D.F.

    1988-01-01

    This report is part of a series of geologic, geochemical, and geophysical maps of the Tonopah 1° x 2° quadrangle, Nevada, prepared during studies of the area for the Conterminous United States Mineral Assessment Program (CUSMAP). Included here are 21 maps showing the distributions of selected elements or combinations of elements. These regional geochemical maps are based on chemical analyses of the minus-60 mesh (0.25 mm) fraction of stream-sediment samples and the nonmagnetic heavy-mineral concentrate derived from stream sediment. Stream sediments were collected at 1,217 sites. Our geochemical studies of mineralized rock samples provide a framework for evaluating the results from stream sediments.

  5. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    Full Text Available The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on the compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with the ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
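
    A minimal compressive-sampling sketch in the same spirit: a signal that is sparse in the DCT domain is recovered from a small set of randomly timed samples (standing in for the instants supplied by the high-resolution time-basis) using orthogonal matching pursuit. This is a generic illustration, not the authors' acquisition hardware or reconstruction procedure.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

n = 512                                   # length of the "virtual" high-rate sample grid
rng = np.random.default_rng(8)

# Test signal that is exactly sparse in the DCT domain (8 nonzero coefficients)
coeffs = np.zeros(n)
support = rng.choice(n, size=8, replace=False)
coeffs[support] = rng.uniform(1.0, 3.0, size=8) * rng.choice([-1.0, 1.0], size=8)
Psi = idct(np.eye(n), axis=0, norm="ortho")   # columns = time-domain DCT atoms
signal = Psi @ coeffs

# Keep only ~15% of the grid, at random sampling instants
m = 80
idx = np.sort(rng.choice(n, size=m, replace=False))
samples = signal[idx]

# Recover the sparse coefficients from the sampled rows, then rebuild the full-rate signal
A = Psi[idx, :]
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False).fit(A, samples)
reconstructed = Psi @ omp.coef_
print("reconstruction RMSE:", np.sqrt(np.mean((reconstructed - signal) ** 2)))
```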

  6. Time-series models on somatic cell score improve detection of mastitis

    DEFF Research Database (Denmark)

    Norberg, E; Korsgaard, I R; Sloth, K H M N

    2008-01-01

    In-line detection of mastitis using frequent milk sampling was studied in 241 cows in a Danish research herd. Somatic cell scores obtained on a daily basis were analyzed using a mixture of four time-series models. Probabilities were assigned to each model for the observations to belong to a normal "steady-state" development, a change in "level", a change of "slope" or an "outlier". Mastitis was indicated from the sum of probabilities for the "level" and "slope" models. The time-series models were based on the Kalman filter. Reference data were obtained from veterinary assessment of health status combined with bacteriological findings. At a sensitivity of 90% the corresponding specificity was 68%, which increased to 83% using a one-step-back smoothing. It is concluded that mixture models based on Kalman filters are efficient in handling in-line sensor data for detection of mastitis and may be useful for similar...
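
    A heavily reduced stand-in for the four-model mixture: a single local-level (random walk) Kalman filter on a daily somatic-cell-score-like series, flagging days whose standardized one-step-ahead innovation is unusually large; the noise variances q and r below are arbitrary illustrative values.

```python
import numpy as np

def kalman_level_monitor(y, q=0.01, r=0.25, flag_sigma=4.0):
    """Local-level Kalman filter that flags observations with large innovations."""
    level, var = y[0], 1.0                      # initial state estimate and variance
    flags = []
    for obs in y[1:]:
        var_pred = var + q                      # predict (state noise variance q)
        innov = obs - level                     # one-step-ahead prediction error
        s = var_pred + r                        # innovation variance (obs. noise r)
        flags.append(abs(innov) > flag_sigma * np.sqrt(s))
        gain = var_pred / s                     # Kalman gain
        level += gain * innov                   # update state estimate
        var = (1 - gain) * var_pred
    return np.array(flags)

# Toy usage: a step increase (a possible "level" change, e.g., mastitis onset) gets flagged
rng = np.random.default_rng(9)
scores = np.concatenate([rng.normal(3.0, 0.5, 60), rng.normal(6.5, 0.5, 10)])
flags = kalman_level_monitor(scores)
print("first flagged position:", int(np.argmax(flags)) + 1)  # index into scores; step starts at 60
```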

  7. Advances in paper-based sample pretreatment for point-of-care testing.

    Science.gov (United States)

    Tang, Rui Hua; Yang, Hui; Choi, Jane Ru; Gong, Yan; Feng, Shang Sheng; Pingguan-Murphy, Belinda; Huang, Qing Sheng; Shi, Jun Ling; Mei, Qi Bing; Xu, Feng

    2017-06-01

    In recent years, paper-based point-of-care testing (POCT) has been widely used in medical diagnostics, food safety and environmental monitoring. However, a high-cost, time-consuming and equipment-dependent sample pretreatment technique is generally required for raw sample processing, which is impractical for low-resource and disease-endemic areas. Therefore, there is an escalating demand for a cost-effective, simple and portable pretreatment technique to be coupled with the commonly used paper-based assays (e.g. lateral flow assays) in POCT. In this review, we focus on the importance of using paper as a platform for sample pretreatment. We first discuss the beneficial use of paper for sample pretreatment, including sample collection and storage, separation, extraction, and concentration. We highlight the working principle and fabrication of each sample pretreatment device, the existing challenges and the future perspectives for developing paper-based sample pretreatment techniques.

  8. [Correlation coefficient-based principle and method for the classification of jump degree in hydrological time series].

    Science.gov (United States)

    Wu, Zi Yi; Xie, Ping; Sang, Yan Fang; Gu, Hai Ting

    2018-04-01

    The phenomenon of jump is one of the important external forms of hydrological variability under environmental changes, representing the adaptation of hydrological nonlinear systems to the influence of external disturbances. Presently, related studies mainly focus on methods for identifying the jump positions and jump times in hydrological time series. In contrast, few studies have focused on the quantitative description and classification of the jump degree in hydrological time series, which makes it difficult to understand the environmental changes and evaluate their potential impacts. Here, we proposed a theoretically reliable and easy-to-apply method for the classification of the jump degree in hydrological time series, using the correlation coefficient as a basic index. Statistical tests verified the accuracy, reasonability, and applicability of this method. The relationship between the correlation coefficient and the jump degree of a series was described by a mathematical equation obtained by derivation. After that, several thresholds of correlation coefficients under different statistical significance levels were chosen, based on which the jump degree could be classified into five levels: none, weak, moderate, strong and very strong. Finally, our method was applied to five different observed hydrological time series with diverse geographic and hydrological conditions in China. The results of the classification of jump degrees in those series accorded closely with their physical hydrological mechanisms, indicating the practicability of our method.

  9. Characterization of the March 2017 Tank 15 Waste Removal Slurry Sample (Combination of Slurry Samples HTF-15-17-28 and HTF-15-17-29)

    Energy Technology Data Exchange (ETDEWEB)

    Reboul, S. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); King, W. D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Coleman, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-05-09

    Two March 2017 Tank 15 slurry samples (HTF-15-17-28 and HTF-15-17-29) were collected during the second bulk waste removal campaign and submitted to SRNL for characterization. At SRNL, the two samples were combined and then characterized by a series of physical, elemental, radiological, and ionic analysis methods. Sludge settling as a function of time was also quantified. The characterization results reported in this document are consistent with expectations based upon waste type, process knowledge, comparisons between alternate analysis techniques, and comparisons with the characterization results obtained for the November 2016 Tank 15 slurry sample (the sample collected during the first bulk waste removal campaign).

  10. Characterizing and estimating noise in InSAR and InSAR time series with MODIS

    Science.gov (United States)

    Barnhart, William D.; Lohman, Rowena B.

    2013-01-01

    InSAR time series analysis is increasingly used to image subcentimeter displacement rates of the ground surface. The precision of InSAR observations is often affected by several noise sources, including spatially correlated noise from the turbulent atmosphere. Under ideal scenarios, InSAR time series techniques can substantially mitigate these effects; however, in practice the temporal distribution of InSAR acquisitions over much of the world exhibits seasonal biases, long temporal gaps, and insufficient acquisitions to confidently obtain the precisions desired for tectonic research. Here, we introduce a technique for constraining the magnitude of errors expected from atmospheric phase delays on the ground displacement rates inferred from an InSAR time series using independent observations of precipitable water vapor from MODIS. We implement a Monte Carlo error estimation technique based on multiple (100+) MODIS-based time series that sample date ranges close to the acquisition times of the available SAR imagery. This stochastic approach allows evaluation of the significance of signals present in the final time series product, in particular their correlation with topography and seasonality. We find that topographically correlated noise in individual interferograms is not spatially stationary, even over short spatial scales (<10 km). Overall, MODIS-inferred displacements and velocities exhibit errors of similar magnitude to the variability within an InSAR time series. We examine the MODIS-based confidence bounds in regions with a range of inferred displacement rates, and find we are capable of resolving velocities as low as 1.5 mm/yr, with uncertainties increasing to ∼6 mm/yr in regions with higher topographic relief.

  11. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

    Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Clinical time series prediction: towards a hierarchical dynamical system framework

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

    Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patient in the training set, and then applying the model in order to predict future time series values on the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive

  13. Network structure of multivariate time series.

    Science.gov (United States)

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-10-21

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exists, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows information on a high-dimensional dynamical system to be extracted through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow us to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
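
    To make the mapping concrete, the following minimal Python sketch builds one layer of such a multiplex network per variable using a horizontal visibility graph and reports the mean degree of each layer as a simple structural descriptor. The construction rule and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def horizontal_visibility_edges(x):
    """HVG rule: i and j are linked if every sample strictly between them
    is lower than both x[i] and x[j]."""
    edges = []
    n = len(x)
    for i in range(n - 1):
        blocker = -np.inf                 # max of the values between i and j
        for j in range(i + 1, n):
            if x[j] > blocker:            # nothing in between blocks the view
                edges.append((i, j))
            blocker = max(blocker, x[j])
            if blocker >= x[i]:           # the view from i is now blocked
                break
    return edges

def multiplex_mean_degrees(series):
    """series: array of shape (n_samples, n_variables); one HVG layer per
    variable, summarised here by the layer's mean node degree."""
    n, m = series.shape
    means = []
    for k in range(m):
        deg = np.zeros(n)
        for i, j in horizontal_visibility_edges(series[:, k]):
            deg[i] += 1
            deg[j] += 1
        means.append(float(deg.mean()))
    return means

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy = rng.normal(size=(500, 3))       # toy 3-variable series
    print(multiplex_mean_degrees(toy))
```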

  14. Development of a standard data base for FBR core nuclear design (XIII). Analysis of small sample reactivity experiments at ZPPR-9

    International Nuclear Information System (INIS)

    Sato, Wakaei; Fukushima, Manabu; Ishikawa, Makoto

    2000-09-01

    A comprehensive study to evaluate and accumulate the abundant results of fast reactor physics is now in progress at the O-arai Engineering Center to improve analytical methods and the prediction accuracy of nuclear design for large fast breeder cores such as future commercial FBRs. The present report summarizes the analytical results of the sample reactivity experiments at the ZPPR-9 core, which had not yet been evaluated with the latest analytical method. The intention of the work is to extend and further generalize the standard data base for FBR core nuclear design. The analytical results of the sample reactivity experiments (samples: PU-30, U-6, DU-6, SS-1 and B-1) at the ZPPR-9 core in the JUPITER series, obtained with the latest nuclear data library JENDL-3.2 and the analytical method established by the JUPITER analysis, can be summarized as follows: the region-averaged final C/E values generally agreed with unity within 5% in the inner core region. However, the C/E values of every sample showed a radial space dependency, increasing from the core center to the core edge; the discrepancy for B-1 was the largest, at about 10%. Next, the influence of the present ZPPR-9 sample reactivity results on the cross-section adjustment was evaluated. The reference case was the unified cross-section set ADJ98 based on the recent JUPITER analysis. In conclusion, the present analytical results have sufficient physical consistency with other JUPITER data, and qualify as part of the standard data base for FBR core nuclear design. (author)

  15. Passive sampling as a tool for identifying micro-organic compounds in groundwater.

    Science.gov (United States)

    Mali, N; Cerar, S; Koroša, A; Auersperger, P

    2017-09-01

    The paper presents the use of a simple and cost-efficient passive sampling device with integrated activated carbon to test the possibility of determining the presence of micro-organic compounds (MOs) in groundwater, identifying the potential source of pollution, and assessing the seasonal variability of contamination. The advantage of the passive sampler is that it covers a long sampling period by integrating the pollutant concentration over time, so that analytical costs over the monitoring period can be reduced substantially. Passive samplers were installed in 15 boreholes in the Maribor City area in Slovenia, with two sampling campaigns covering a period of about one year. At all sampling sites, a total of 103 compounds were detected in the first series, and 144 in the second series. Of all detected compounds, the 53 most frequently detected were selected for further analysis. These were classified into eight groups based on the type of their source: Pesticides, Halogenated solvents, Non-halogenated solvents, Domestic and personal, Plasticizers and additives, Other industrial, Sterols and Natural compounds. The most frequently detected MO compounds in groundwater were tetrachloroethene and trichloroethene from the Halogenated solvents group. Among the compound groups, pesticides were the most frequently detected. Analysis of detection frequency also showed significant differences between the two sampling series, with less frequent detections in the summer series. To determine the origin of contamination, three groups of compounds were defined according to type of use: agricultural, urban and industrial. The frequency of detection indicates mixed land use in the recharge areas of the sampling sites, which makes it difficult to specify the dominant origin of the compounds. Passive sampling has proved to be a useful tool for identifying MOs in groundwater and for assessing groundwater quality. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Mapping Crop Cycles in China Using MODIS-EVI Time Series

    Directory of Open Access Journals (Sweden)

    Le Li

    2014-03-01

    Full Text Available As the Earth’s population continues to grow and demand for food increases, the need for improved and timely information related to the properties and dynamics of global agricultural systems is becoming increasingly important. Global land cover maps derived from satellite data provide indispensable information regarding the geographic distribution and areal extent of global croplands. However, land use information, such as cropping intensity (defined here as the number of cropping cycles per year), is not routinely available over large areas because mapping this information from remote sensing is challenging. In this study, we present a simple but efficient algorithm for automated mapping of cropping intensity based on data from NASA’s (National Aeronautics and Space Administration) MODerate Resolution Imaging Spectroradiometer (MODIS). The proposed algorithm first applies an adaptive Savitzky-Golay filter to smooth Enhanced Vegetation Index (EVI) time series derived from MODIS surface reflectance data. It then uses an iterative moving-window methodology to identify cropping cycles from the smoothed EVI time series. Comparison of results from our algorithm with national survey data at both the provincial and prefectural level in China shows that the algorithm provides estimates of gross sown area that agree well with inventory data. Accuracy assessment comparing visually interpreted time series with algorithm results for a random sample of agricultural areas in China indicates an overall accuracy of 91.0% for three classes defined based on the number of cycles observed in EVI time series. The algorithm therefore appears to provide a straightforward and efficient method for mapping cropping intensity from MODIS time series data.
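
    A rough Python sketch of the two core steps described above (Savitzky-Golay smoothing of an EVI time series followed by cycle counting) is given below; the window length, prominence threshold and toy data are assumptions for illustration, not the parameters used in the paper.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def count_cropping_cycles(evi, window=7, polyorder=3, min_prominence=0.15):
    """Smooth a one-year EVI series (16-day composites) and count greenness
    cycles as sufficiently prominent, well-separated peaks."""
    smooth = savgol_filter(evi, window_length=window, polyorder=polyorder)
    peaks, _ = find_peaks(smooth, prominence=min_prominence, distance=4)
    return len(peaks), smooth

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(23)                                   # 23 composites per year
    evi = (0.25 + 0.2 * np.maximum(0.0, np.sin(2 * np.pi * t / 11.5))
           + 0.02 * rng.normal(size=t.size))            # double-cropping toy signal
    n_cycles, _ = count_cropping_cycles(evi)
    print("cropping cycles detected:", n_cycles)
```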

  17. Low frequency of defective mismatch repair in a population-based series of upper urothelial carcinoma

    International Nuclear Information System (INIS)

    Ericson, Kajsa M; Isinger, Anna P; Isfoss, Björn L; Nilbert, Mef C

    2005-01-01

    Upper urothelial cancer (UUC), i.e. transitional cell carcinomas of the renal pelvis and the ureter, occur at an increased frequency in patients with hereditary nonpolyposis colorectal cancer (HNPCC). Defective mismatch repair (MMR) specifically characterizes HNPCC-associated tumors, but also occurs in subsets of some sporadic tumors, e.g. in gastrointestinal cancer and endometrial cancer. We assessed the contribution of defective MMR to the development of UUC in a population-based series from the southern Swedish Cancer Registry, through microsatellite instability (MSI) analysis and immunohistochemical evaluation of expression of the MMR proteins MLH1, PMS2, MSH2, and MSH6. A MSI-high phenotype was identified in 9/216 (4%) successfully analyzed patients and a MSI-low phenotype in 5/216 (2%). Loss of MMR protein immunostaining was found in 11/216 (5%) tumors, and affected most commonly MSH2 and MSH6. This population-based series indicates that somatic MMR inactivation is a minor pathway in the development of UUC, but tumors that display defective MMR are, based on the immunohistochemical expression pattern, likely to be associated with HNPCC

  18. Low frequency of defective mismatch repair in a population-based series of upper urothelial carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Kajsa M; Isinger, Anna P [Departments of Oncology, University Hospital, Lund (Sweden); Isfoss, Björn L [Departments of Pathology, University Hospital, Lund (Sweden); Nilbert, Mef C [Departments of Oncology, University Hospital, Lund (Sweden)

    2005-01-01

    Upper urothelial cancer (UUC), i.e. transitional cell carcinomas of the renal pelvis and the ureter, occur at an increased frequency in patients with hereditary nonpolyposis colorectal cancer (HNPCC). Defective mismatch repair (MMR) specifically characterizes HNPCC-associated tumors, but also occurs in subsets of some sporadic tumors, e.g. in gastrointestinal cancer and endometrial cancer. We assessed the contribution of defective MMR to the development of UUC in a population-based series from the southern Swedish Cancer Registry, through microsatellite instability (MSI) analysis and immunohistochemical evaluation of expression of the MMR proteins MLH1, PMS2, MSH2, and MSH6. A MSI-high phenotype was identified in 9/216 (4%) successfully analyzed patients and a MSI-low phenotype in 5/216 (2%). Loss of MMR protein immunostaining was found in 11/216 (5%) tumors, and affected most commonly MSH2 and MSH6. This population-based series indicates that somatic MMR inactivation is a minor pathway in the development of UUC, but tumors that display defective MMR are, based on the immunohistochemical expression pattern, likely to be associated with HNPCC.

  19. TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-12-01

    Full Text Available The REFII model is an original mathematical model for time series data mining. The main purpose of the model is to automate time series analysis through a unique transformation model of time series. An advantage of this approach to time series analysis is that it links different methods for time series analysis, connects traditional data mining tools to time series, and supports the construction of new algorithms for analyzing time series. It is worth mentioning that the REFII model is not a closed system, so it is not limited to a finite set of methods. First of all, it is a model for transforming time series values, which prepares the data used by different sets of methods based on the same transformation model within the problem domain. The REFII model offers a new approach to time series analysis based on a unique transformation model, which is a basis for all kinds of time series analysis. The advantage of the REFII model is its possible application in many different areas such as finance, medicine, voice recognition, face recognition and text mining.

  20. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition

  1. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD. (.; .); Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.

  2. Evaluation of physical sampling efficiency for cyclone-based personal bioaerosol samplers in moving air environments.

    Science.gov (United States)

    Su, Wei-Chung; Tolchinsky, Alexander D; Chen, Bean T; Sigaev, Vladimir I; Cheng, Yung Sung

    2012-09-01

    The need to determine occupational exposure to bioaerosols has notably increased in the past decade, especially for microbiology-related workplaces and laboratories. Recently, two new cyclone-based personal bioaerosol samplers were developed by the National Institute for Occupational Safety and Health (NIOSH) in the USA and the Research Center for Toxicology and Hygienic Regulation of Biopreparations (RCT & HRB) in Russia to monitor bioaerosol exposure in the workplace. Here, a series of wind tunnel experiments were carried out to evaluate the physical sampling performance of these two samplers in moving air conditions, which could provide information for personal biological monitoring in a moving air environment. The experiments were conducted in a small wind tunnel facility using three wind speeds (0.5, 1.0 and 2.0 m s(-1)) and three sampling orientations (0°, 90°, and 180°) with respect to the wind direction. Monodispersed particles ranging from 0.5 to 10 μm were employed as the test aerosols. The evaluation of the physical sampling performance was focused on the aspiration efficiency and capture efficiency of the two samplers. The test results showed that the orientation-averaged aspiration efficiencies of the two samplers closely agreed with the American Conference of Governmental Industrial Hygienists (ACGIH) inhalable convention within the particle sizes used in the evaluation tests, and the effect of the wind speed on the aspiration efficiency was found to be negligible. The capture efficiencies of these two samplers ranged from 70% to 80%. These data offer important insight into the physical sampling characteristics of the two test samplers.

  3. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting

    Directory of Open Access Journals (Sweden)

    Miquel L. Alomar

    2016-01-01

    Full Text Available Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
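
    For readers unfamiliar with Reservoir Computing, the following background sketch shows a conventional software echo state network (a fixed random reservoir plus a trained linear readout), i.e. the kind of RC system whose arithmetic the stochastic hardware design above implements with probabilistic logic. All sizes, parameters and data are illustrative assumptions, not the paper's design.

```python
import numpy as np

def train_esn(u, y, n_res=200, rho=0.9, ridge=1e-6, seed=0):
    """Fit a basic echo state network: fixed random reservoir, ridge readout."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, size=n_res)
    w_res = rng.normal(size=(n_res, n_res))
    w_res *= rho / np.max(np.abs(np.linalg.eigvals(w_res)))   # set spectral radius
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t, ut in enumerate(u):
        x = np.tanh(w_in * ut + w_res @ x)                     # reservoir update
        states[t] = x
    # only the linear readout is trained (ridge regression)
    w_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                            states.T @ y)
    return w_in, w_res, w_out, states

if __name__ == "__main__":
    t = np.arange(2000)
    u = np.sin(0.2 * t) * np.sin(0.0331 * t)    # toy quasi-periodic input
    y = np.roll(u, -1)                          # one-step-ahead target
    w_in, w_res, w_out, states = train_esn(u[:-1], y[:-1])
    pred = states @ w_out
    print("training RMSE:", round(float(np.sqrt(np.mean((pred - y[:-1]) ** 2))), 4))
```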

  4. High Efficiency Graphene Coated Copper Based Thermocells Connected in Series

    Directory of Open Access Journals (Sweden)

    Mani Sindhuja

    2018-04-01

    Full Text Available Conversion of low-grade waste heat into electricity has so far been studied employing single thermocells or flow cells. Thermocells based on graphene-coated copper electrodes and connected in series displayed relatively high thermal energy harvesting efficiency. A maximum power output of 49.2 W/m2, normalized to the cross-sectional electrode area, is obtained at an inter-electrode temperature difference of 60°C. A relative Carnot efficiency of 20.2% is obtained from the device. The importance of reducing the mass transfer and ion transfer resistance to improve the efficiency of the device is demonstrated. Degradation studies confirmed mild oxidation of the copper foil due to corrosion caused by the electrolyte.

  5. Accuracy of MFCC-Based Speaker Recognition in Series 60 Device

    Directory of Open Access Journals (Sweden)

    Pasi Fränti

    2005-10-01

    Full Text Available A fixed-point implementation of speaker recognition based on MFCC signal processing is considered. We analyze the numerical error of the MFCC and its effect on the recognition accuracy. Techniques to reduce the information loss in a converted fixed-point implementation are introduced. We increase the signal processing accuracy by adjusting the ratio of the representation accuracy of the operators and the signal. The signal processing error is found to be more important to the speaker recognition accuracy than the error in the classification algorithm. The results are verified by applying the alternative technique to speech data. We also discuss the specific programming requirements set by Symbian and Series 60.

  6. The Recent Developments in Sample Preparation for Mass Spectrometry-Based Metabolomics.

    Science.gov (United States)

    Gong, Zhi-Gang; Hu, Jing; Wu, Xi; Xu, Yong-Jiang

    2017-07-04

    Metabolomics is a critical component of systems biology. Although great progress has been achieved in metabolomics, there are still some problems in sample preparation, data processing and data interpretation. In this review, we intend to explore the roles, challenges and trends in sample preparation for mass spectrometry- (MS-) based metabolomics. The newly emerged sample preparation methods were also critically examined, including laser microdissection, in vivo sampling, dried blood spot, microwave, ultrasound and enzyme-assisted extraction, as well as microextraction techniques. Finally, we provide some conclusions and perspectives for sample preparation in MS-based metabolomics.

  7. Series expansion of the modified Einstein Procedure

    Science.gov (United States)

    Seema Chandrakant Shah-Fairbank

    2009-01-01

    This study examines calculating total sediment discharge based on the Modified Einstein Procedure (MEP). A new procedure based on the Series Expansion of the Modified Einstein Procedure (SEMEP) has been developed. This procedure contains four main modifications to MEP. First, SEMEP solves the Einstein integrals quickly and accurately based on a series expansion. Next,...

  8. Performance of local information-based link prediction: a sampling perspective

    Science.gov (United States)

    Zhao, Jichang; Feng, Xu; Dong, Li; Liang, Xiao; Xu, Ke

    2012-08-01

    Link prediction is pervasively employed to uncover the missing links in the snapshots of real-world networks, which are usually obtained through different kinds of sampling methods. In the previous literature, in order to evaluate the performance of the prediction, known edges in the sampled snapshot are divided into the training set and the probe set randomly, without considering the underlying sampling approaches. However, different sampling methods might lead to different missing links, especially for the biased ways. For this reason, random partition-based evaluation of performance is no longer convincing if we take the sampling method into account. In this paper, we try to re-evaluate the performance of local information-based link predictions through a sampling-method-governed division of the training set and the probe set. Interestingly, we find that each prediction approach performs unevenly across different sampling methods. Moreover, most of these predictions perform weakly when the sampling method is biased, which indicates that the performance of these methods might have been overestimated in the prior works.

  9. Networked Estimation for Event-Based Sampling Systems with Packet Dropouts

    Directory of Open Access Journals (Sweden)

    Young Soo Suh

    2009-04-01

    Full Text Available This paper is concerned with a networked estimation problem in which sensor data are transmitted over the network. In the event-based sampling scheme known as level-crossing or send-on-delta (SOD), sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-based sampling has been shown to be more efficient than the time-triggered one in some situations, especially in improving network bandwidth usage. However, it cannot detect packet dropout situations because data transmission and reception do not use a periodic time-stamp mechanism as found in time-triggered sampling systems. Motivated by this issue, we propose a modified event-based sampling scheme called modified SOD in which sensor data are sent when either the change of the sensor output exceeds a given threshold or the elapsed time since the last transmission exceeds a given interval. Through simulation results, we show that the proposed modified SOD sampling significantly improves estimation performance when packet dropouts happen.
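
    The modified SOD rule described above can be illustrated with a minimal Python sketch: a sample is transmitted when the value change exceeds a delta or when a maximum silence interval has elapsed, so the receiver can infer dropouts from missing heartbeat samples. Class and parameter names are illustrative assumptions, not from the paper.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModifiedSOD:
    delta: float                       # value-change threshold
    max_interval: float                # maximum silence between transmissions
    last_value: Optional[float] = None
    last_time: Optional[float] = None

    def should_send(self, t, value):
        if self.last_value is None:
            send = True                # always transmit the first sample
        else:
            send = (abs(value - self.last_value) > self.delta
                    or t - self.last_time >= self.max_interval)
        if send:
            self.last_value, self.last_time = value, t
        return send

if __name__ == "__main__":
    sod = ModifiedSOD(delta=0.2, max_interval=1.0)
    sent = [t for t in range(200)
            if sod.should_send(t * 0.05, math.sin(t * 0.05))]
    print(f"{len(sent)} of 200 samples transmitted")
```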

  10. Sampling Key Populations for HIV Surveillance: Results From Eight Cross-Sectional Studies Using Respondent-Driven Sampling and Venue-Based Snowball Sampling.

    Science.gov (United States)

    Rao, Amrita; Stahlman, Shauna; Hargreaves, James; Weir, Sharon; Edwards, Jessie; Rice, Brian; Kochelani, Duncan; Mavimbela, Mpumelelo; Baral, Stefan

    2017-10-20

    In using regularly collected or existing surveillance data to characterize engagement in human immunodeficiency virus (HIV) services among marginalized populations, differences in sampling methods may produce different pictures of the target population and may therefore result in different priorities for response. The objective of this study was to use existing data to evaluate the sample distribution of eight studies of female sex workers (FSW) and men who have sex with men (MSM), who were recruited using different sampling approaches in two locations within Sub-Saharan Africa: Manzini, Swaziland and Yaoundé, Cameroon. MSM and FSW participants were recruited using either respondent-driven sampling (RDS) or venue-based snowball sampling. Recruitment took place between 2011 and 2016. Participants at each study site were administered a face-to-face survey to assess sociodemographics, along with the prevalence of self-reported HIV status, frequency of HIV testing, stigma, and other HIV-related characteristics. Crude and RDS-adjusted prevalence estimates were calculated. Crude prevalence estimates from the venue-based snowball samples were compared with the overlap of the RDS-adjusted prevalence estimates, between both FSW and MSM in Cameroon and Swaziland. RDS samples tended to be younger (MSM aged 18-21 years in Swaziland: 47.6% [139/310] in RDS vs 24.3% [42/173] in Snowball, in Cameroon: 47.9% [99/306] in RDS vs 20.1% [52/259] in Snowball; FSW aged 18-21 years in Swaziland 42.5% [82/325] in RDS vs 8.0% [20/249] in Snowball; in Cameroon 15.6% [75/576] in RDS vs 8.1% [25/306] in Snowball). They were less educated (MSM: primary school completed or less in Swaziland 42.6% [109/310] in RDS vs 4.0% [7/173] in Snowball, in Cameroon 46.2% [138/306] in RDS vs 14.3% [37/259] in Snowball; FSW: primary school completed or less in Swaziland 86.6% [281/325] in RDS vs 23.9% [59/247] in Snowball, in Cameroon 87.4% [520/576] in RDS vs 77.5% [238/307] in Snowball) than the snowball

  11. Forest Disturbance Mapping Using Dense Synthetic Landsat/MODIS Time-Series and Permutation-Based Disturbance Index Detection

    Directory of Open Access Journals (Sweden)

    David Frantz

    2016-03-01

    Full Text Available Spatio-temporal information on process-based forest loss is essential for a wide range of applications. Despite remote sensing being the only feasible means of monitoring forest change at regional or greater scales, there is no retrospectively available remote sensor that meets the demand of monitoring forests with the required spatial detail and guaranteed high temporal frequency. As an alternative, we employed the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) to produce a dense synthetic time series by fusing Landsat and Moderate Resolution Imaging Spectroradiometer (MODIS) nadir Bidirectional Reflectance Distribution Function (BRDF) adjusted reflectance. Forest loss was detected by applying a multi-temporal disturbance detection approach implementing a Disturbance Index-based detection strategy. The detection thresholds were permutated with random numbers for the normal distribution in order to generate a multi-dimensional threshold confidence area. As a result, a more robust parameterization and a spatially more coherent detection could be achieved. (i) The original Landsat time series; (ii) the synthetic time series; and (iii) a combined hybrid approach were used to identify the timing and extent of disturbances. The identified clearings in the Landsat detection were verified using an annual woodland clearing dataset from Queensland’s Statewide Landcover and Trees Study. Disturbances caused by stand-replacing events were successfully identified. The increased temporal resolution of the synthetic time series indicated promising additional information on disturbance timing. The results of the hybrid detection unified the benefits of both approaches, i.e., the spatial quality and general accuracy of the Landsat detection and the increased temporal information of synthetic time series. Results indicated that a temporal improvement in the detection of the disturbance date could be achieved relative to the irregularly spaced Landsat

  12. Time series regression-based pairs trading in the Korean equities market

    Science.gov (United States)

    Kim, Saejoon; Heo, Jun

    2017-07-01

    Pairs trading is an instance of statistical arbitrage that relies on heavy quantitative data analysis to profit by capitalising on low-risk trading opportunities provided by anomalies of related assets. A key element in pairs trading is the rule by which open and close trading triggers are defined. This paper investigates the use of time series regression to define this rule, which has previously been specified with fixed threshold-based approaches. Empirical results indicate that our approach may yield significantly increased excess returns compared to ones obtained by previous approaches on large-capitalisation stocks in the Korean equities market.
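
    As a generic illustration of regression-based trigger construction (not the paper's exact rule), the sketch below estimates a hedge ratio by ordinary least squares, forms the log-price spread, and opens or closes positions on z-score crossings; thresholds and synthetic data are assumptions.

```python
import numpy as np

def pairs_positions(px_a, px_b, entry_z=2.0, exit_z=0.5):
    """Return a position series: +1 long-spread, -1 short-spread, 0 flat."""
    a, b = np.log(px_a), np.log(px_b)
    beta = np.polyfit(b, a, 1)[0]                  # OLS hedge ratio
    spread = a - beta * b
    z = (spread - spread.mean()) / spread.std()
    pos, positions = 0, []
    for zi in z:
        if pos == 0 and zi > entry_z:
            pos = -1                               # spread rich: short A, long B
        elif pos == 0 and zi < -entry_z:
            pos = 1                                # spread cheap: long A, short B
        elif pos != 0 and abs(zi) < exit_z:
            pos = 0                                # spread reverted: close out
        positions.append(pos)
    return np.array(positions), z

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    common = np.cumsum(rng.normal(0, 0.01, 1000))  # shared random-walk factor
    px_a = 100 * np.exp(common + rng.normal(0, 0.005, 1000))
    px_b = 50 * np.exp(common + rng.normal(0, 0.005, 1000))
    positions, _ = pairs_positions(px_a, px_b)
    print("time steps spent in a position:", int((positions != 0).sum()))
```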

  13. Rainfall Prediction of Indian Peninsula: Comparison of Time Series Based Approach and Predictor Based Approach using Machine Learning Techniques

    Science.gov (United States)

    Dash, Y.; Mishra, S. K.; Panigrahi, B. K.

    2017-12-01

    Prediction of northeast/post-monsoon rainfall, which occurs during October, November and December (OND) over the Indian peninsula, is a challenging task due to the dynamic nature of the uncertain, chaotic climate. It is imperative to elucidate this issue by examining the performance of different machine learning (ML) approaches. The prime objective of this research is to compare a) statistical prediction using historical rainfall observations and global atmosphere-ocean predictors like Sea Surface Temperature (SST) and Sea Level Pressure (SLP) and b) empirical prediction based on a time series analysis of past rainfall data without using any other predictors. Initially, ML techniques have been applied to SST and SLP data (1948-2014) obtained from the NCEP/NCAR reanalysis monthly means provided by the NOAA ESRL PSD. Later, this study investigated the applicability of ML methods using the OND rainfall time series for 1948-2014 and forecasted up to 2018. The predicted values of the aforementioned methods were verified using observed time series data collected from the Indian Institute of Tropical Meteorology, and the results revealed good performance of the ML algorithms with minimal error scores. Thus, it is found that both statistical and empirical methods are useful for long-range climatic projections.

  14. Series: Practical guidance to qualitative research : part 3: sampling, data collection and analysis

    NARCIS (Netherlands)

    Albine Moser; Irene Korstjens

    2017-01-01

    In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for

  15. Sampling and estimating recreational use.

    Science.gov (United States)

    Timothy G. Gregoire; Gregory J. Buhyoff

    1999-01-01

    Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.

  16. Genomic epidemiology of a major Mycobacterium tuberculosis outbreak: Retrospective cohort study in a low incidence setting using sparse time-series sampling

    DEFF Research Database (Denmark)

    Folkvardsen, Dorte Bek; Norman, Anders; Andersen, Åse Bengård

    2017-01-01

    cases belonging to this outbreak via routine MIRU-VNTR typing. Here, we present a retrospective analysis of the C2/1112-15 dataset, based on whole-genome data from a sparse time-series consisting of five randomly selected isolates from each of the 23 years. Even if these data are derived from only 12...

  17. Identification of flood-rich and flood-poor periods in flood series

    Science.gov (United States)

    Mediero, Luis; Santillán, David; Garrote, Luis

    2015-04-01

    Recently, a general concern about the non-stationarity of flood series has arisen, as changes in catchment response can be driven by several factors, such as climatic and land-use changes. Several studies to detect trends in flood series at either national or trans-national scales have been conducted. Trends are usually detected by the Mann-Kendall test. However, the results of this test depend on the starting and ending year of the series, which can lead to different conclusions depending on the period considered. The results can be conditioned by flood-poor and flood-rich periods located at the beginning or end of the series. A methodology to identify statistically significant flood-rich and flood-poor periods is developed, based on the comparison between the expected sampling variability of floods when stationarity is assumed and the observed variability of floods in a given series. The methodology is applied to a set of long series of annual maximum floods, peaks over threshold and counts of annual occurrences in peaks-over-threshold series observed in Spain in the period 1942-2009. Mediero et al. (2014) found a general decreasing trend in flood series in some parts of Spain that could be caused by a flood-rich period observed in 1950-1970, placed at the beginning of the flood series. The results of this study support the findings of Mediero et al. (2014), as a flood-rich period in 1950-1970 was identified in most of the selected sites. References: Mediero, L., Santillán, D., Garrote, L., Granados, A. Detection and attribution of trends in magnitude, frequency and timing of floods in Spain, Journal of Hydrology, 517, 1072-1088, 2014.
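
    A minimal Monte Carlo sketch of the underlying comparison is given below: window sums of annual peaks-over-threshold counts are compared with the sampling variability obtained by permuting the years, and windows outside the permutation bounds are flagged as flood-rich or flood-poor. Window length, significance level and the synthetic counts are assumptions, not the authors' settings.

```python
import numpy as np

def flood_periods(annual_counts, window=15, n_perm=2000, alpha=0.05, seed=0):
    """Flag moving windows whose occurrence counts fall outside the sampling
    variability expected under stationarity (estimated by permutation)."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(annual_counts, dtype=float)
    kernel = np.ones(window)
    obs = np.convolve(counts, kernel, mode="valid")            # window sums
    null = np.empty((n_perm, obs.size))
    for i in range(n_perm):
        null[i] = np.convolve(rng.permutation(counts), kernel, mode="valid")
    lo, hi = np.quantile(null, [alpha / 2, 1 - alpha / 2], axis=0)
    return obs > hi, obs < lo                                   # rich, poor flags

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    counts = rng.poisson(2.0, 68)                  # annual POT counts, 1942-2009
    counts[8:29] = counts[8:29] + rng.poisson(1.5, 21)   # inject a rich episode
    rich, poor = flood_periods(counts)
    print("flood-rich windows:", int(rich.sum()),
          "| flood-poor windows:", int(poor.sum()))
```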

  18. Risk-Based Sampling: I Don't Want to Weight in Vain.

    Science.gov (United States)

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
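
    The portfolio analogy can be made concrete with a small sketch contrasting closed-form minimum-variance weights with the equal-allocation heuristic mentioned above; the covariance matrix is purely illustrative.

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum-variance weights: w = C^-1 1 / (1' C^-1 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

if __name__ == "__main__":
    # illustrative covariance of risk estimates for three producers ("assets")
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.16]])
    w_opt = min_variance_weights(cov)
    w_eq = np.full(3, 1.0 / 3.0)                 # equal-allocation heuristic
    portfolio_var = lambda w: float(w @ cov @ w)
    print("optimized weights:", np.round(w_opt, 3),
          "variance:", round(portfolio_var(w_opt), 4))
    print("equal weights:    ", np.round(w_eq, 3),
          "variance:", round(portfolio_var(w_eq), 4))
```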

  19. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
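
    For reference, the sketch below computes two of the incidence-based estimators named above (first-order jackknife and bias-corrected Chao2) from a plot-by-species presence/absence matrix; the toy incidence data are an assumption for illustration only.

```python
import numpy as np

def richness_estimates(incidence):
    """incidence: 0/1 matrix of shape (n_plots, n_species)."""
    m = incidence.shape[0]                       # number of sample plots
    counts = incidence.sum(axis=0)               # plots in which each species occurs
    s_obs = int((counts > 0).sum())
    q1 = int((counts == 1).sum())                # "uniques"
    q2 = int((counts == 2).sum())                # "duplicates"
    jack1 = s_obs + q1 * (m - 1) / m
    chao2 = s_obs + ((m - 1) / m) * q1 * (q1 - 1) / (2 * (q2 + 1))  # bias-corrected
    return s_obs, round(jack1, 1), round(chao2, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # 40 plots, 60 species with uneven detection probabilities (patchiness)
    p = rng.uniform(0.01, 0.4, size=60)
    incidence = (rng.random((40, 60)) < p).astype(int)
    print("observed, Jack1, Chao2:", richness_estimates(incidence))
```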

  20. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia.

    Science.gov (United States)

    Hernandez-Valladares, Maria; Aasebø, Elise; Selheim, Frode; Berven, Frode S; Bruserud, Øystein

    2016-08-22

    Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  1. Hidden discriminative features extraction for supervised high-order time series modeling.

    Science.gov (United States)

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named as Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) reduces dimensionality while robustly mining the underlying discriminative features, ii) results in effective interpretable features that lead to an improved classification and visualization, and iii) reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue issue at each alternation step. Two real third-order tensor-structures of time series datasets (an epilepsy electroencephalogram (EEG) that is modeled as channel×frequency bin×time frame and a microarray data that is modeled as gene×sample×time) were used for the evaluation of the TDFE. The experiment results corroborate the advantages of the proposed method with averages of 98.26% and 89.63% for the classification accuracies of the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement on those of the matrix-based algorithms and recent tensor-based, discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. GAMMA-RAY CHARACTERIZATION OF THE U-SERIES INTERMEDIATE DAUGHTERS FROM SOIL SAMPLES AT THE PENA BLANCA NATURAL ANALOG, CHIHUAHUA, MEXICO

    International Nuclear Information System (INIS)

    French, D.C.; Anthony, E.Y.; Goodell, P.C.

    2005-01-01

    The Pena Blanca natural analog is located in the Sierra Pena Blanca, approximately 50 miles north of Chihuahua City, Mexico. The Sierra Pena Blanca is composed mainly of ash-flow tuffs, and the uranium in the region is contained in the brecciated zones of these tuffs. The Pena Blanca site is considered a natural analog to the proposed Yucca Mountain Nuclear Waste Repository because they share similar characteristics of structure, volcanic lithology, tectonic activity, and hydrologic regime. One of the mineralized zones, the Nopal I deposit, was mined in the early 1980s and the ore was stockpiled close to the mine. This stockpile area has subsequently been cleared and is referred to as the prior high-grade stockpile (PHGS) site. Soils surrounding boulders of high-grade ore associated with the PHGS site have been sampled. The purpose of this study is to characterize the transport of uranium-series radioisotopes from the boulder to the soil during the past 25 years. Transport is characterized by determining the activities of individual radionuclides and daughter-to-parent ratios. The daughter-to-parent ratios are used to establish whether the samples are in secular equilibrium. Activities are determined using gamma-ray spectroscopy. Isotopes of the uranium-series decay chain detected by gamma-ray spectroscopy include 210Pb, 234U, 234Th, 230Th, 226Ra, 214Pb, 214Bi, and 234Pa. Preliminary results indicate that some daughter-to-parent pairs appear to be in secular disequilibrium. Thorium is in excess relative to uranium, and radium is in excess relative to thorium. A deficiency appears to exist for 210Pb relative to 214Bi and 214Pb. If these results are borne out by further analysis, they would suggest transport of nuclides from the high-grade boulder into its surroundings, followed by continued leaching of uranium and lead from the environment

  3. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  4. Burned area detection based on Landsat time series in savannas of southern Burkina Faso

    Science.gov (United States)

    Liu, Jinxiu; Heiskanen, Janne; Maeda, Eduardo Eiji; Pellikka, Petri K. E.

    2018-02-01

    West African savannas are subject to regular fires, which have impacts on vegetation structure, biodiversity and carbon balance. An efficient and accurate mapping of the burned area associated with seasonal fires can greatly benefit decision making in land management. Since coarse-resolution burned area products cannot meet the accuracy needed for fire management and climate modelling at local scales, medium-resolution Landsat data is a promising alternative for local-scale studies. In this study, we developed an algorithm for continuous monitoring of annual burned areas using Landsat time series. The algorithm is based on burned pixel detection using harmonic model fitting with Landsat time series and breakpoint identification in the time series data. This approach was tested in a savanna area in southern Burkina Faso using 281 images acquired between October 2000 and April 2016. An overall accuracy of 79.2% was obtained with balanced omission and commission errors. This represents a significant improvement in comparison with the MODIS burned area product (67.6%), which had more omission errors than commission errors, indicating underestimation of the total burned area. By observing the spatial distribution of burned areas, we found that the Landsat-based method misclassified cropland and cloud shadows as burned areas due to the similar spectral response, and the MODIS burned area product omitted small and fragmented burned areas. The proposed algorithm is flexible and robust against decreased data availability caused by clouds and Landsat 7 missing lines, therefore having a high potential for being applied in other landscapes in future studies.
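
    A simplified Python sketch of the harmonic-model step is shown below: a seasonal harmonic is fitted to a single pixel's time series by least squares, and observations with strongly negative residuals are flagged as candidate burn-related breaks. The robust threshold and synthetic data are assumptions; the paper's breakpoint identification is more elaborate.

```python
import numpy as np

def harmonic_fit(t_days, y, period=365.25):
    """Least-squares fit of y ~ a0 + a1*cos(wt) + a2*sin(wt)."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t_days),
                         np.cos(w * t_days),
                         np.sin(w * t_days)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ coef
    return fitted, y - fitted

def flag_breaks(residuals, k=3.0):
    """Flag observations whose negative residual exceeds k robust sigmas."""
    sigma = 1.4826 * np.median(np.abs(residuals - np.median(residuals)))
    return residuals < -k * sigma

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    t = np.sort(rng.choice(np.arange(3 * 365), size=120, replace=False)).astype(float)
    y = 0.3 + 0.15 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 0.02, t.size)
    y[t > 800] -= 0.2                      # simulate an abrupt post-fire drop
    fitted, resid = harmonic_fit(t, y)
    print("flagged observations:", int(flag_breaks(resid).sum()))
```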

  5. Financial time series analysis based on information categorization method

    Science.gov (United States)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper mainly applies the information categorization method to analyze financial time series. The method is used to examine the similarity of different sequences by calculating the distances between them. We apply this method to quantify the similarity of different stock markets, and we report the similarity of the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show the differences in similarity between stock markets in different time periods, and that the similarity of the two stock markets becomes larger after these two crises. We also obtain similarity results for 10 stock indices in three regions, which shows that the method can distinguish the markets of different regions in the phylogenetic trees. The results show that we can obtain satisfactory information from financial markets by this method. The information categorization method can be used not only for physiologic time series, but also for financial time series.

  6. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    Science.gov (United States)

    Stadler, Tanja

    2009-11-07

    The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a Hepatitis C virus dataset from Egypt. We show that the transmission time estimates are significantly different: the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.

  7. Fuzzy Inference System Approach for Locating Series, Shunt, and Simultaneous Series-Shunt Faults in Double Circuit Transmission Lines.

    Science.gov (United States)

    Swetapadma, Aleena; Yadav, Anamika

    2015-01-01

    Many schemes are reported for shunt fault location estimation, but fault location estimation of series or open conductor faults has not been dealt with so far. The existing numerical relays only detect the open conductor (series) fault and give an indication of the faulty phase(s), but they are unable to locate the series fault. The repair crew needs to patrol the complete line to find the location of a series fault. In this paper, fuzzy-based fault detection/classification and location schemes in the time domain are proposed for series faults, shunt faults, and simultaneous series and shunt faults. The fault simulation studies and fault location algorithm have been developed using Matlab/Simulink. Synchronized phasors of the voltage and current signals at both ends of the line have been used as input to the proposed fuzzy-based fault location scheme. The percentage error in location is within 1% for series faults and within 5% for shunt faults for all the tested fault cases. Validation of the percentage error in location estimation is done using the Chi-square test at both the 1% and 5% levels of significance.

  8. Downsizer - A Graphical User Interface-Based Application for Browsing, Acquiring, and Formatting Time-Series Data for Hydrologic Modeling

    Science.gov (United States)

    Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.

    2009-01-01

    The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.

  9. Sample-Based Extreme Learning Machine with Missing Data

    Directory of Open Access Journals (Sweden)

    Hang Gao

    2015-01-01

    Full Text Available Extreme learning machine (ELM) has been extensively studied in the machine learning community during the last few decades due to its high efficiency and its unification of classification, regression, and so forth. Though bearing such merits, existing ELM algorithms cannot efficiently handle the issue of missing data, which is relatively common in practical applications. The problem of missing data is commonly handled by imputation (i.e., replacing missing values with substituted values according to available information). However, imputation methods are not always effective. In this paper, we propose a sample-based learning framework to address this issue. Based on this framework, we develop two sample-based ELM algorithms for classification and regression, respectively. Comprehensive experiments have been conducted on synthetic data sets, UCI benchmark data sets, and a real-world fingerprint image data set. As indicated, without introducing extra computational complexity, the proposed algorithms achieve more accurate and stable learning than other state-of-the-art ones, especially in the case of higher missing ratios.
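
    As background for the method above, the sketch below implements a standard ELM regressor (random hidden layer, least-squares readout); the sample-based extension for missing data proposed in the paper is not reproduced here, and all sizes and data are illustrative assumptions.

```python
import numpy as np

class ELMRegressor:
    """Basic ELM: random hidden layer, least-squares output weights."""

    def __init__(self, n_hidden=80, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y                      # analytic readout
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = rng.uniform(-3, 3, size=(300, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.05, 300)
    model = ELMRegressor().fit(X[:200], y[:200])
    rmse = np.sqrt(np.mean((model.predict(X[200:]) - y[200:]) ** 2))
    print("test RMSE:", round(float(rmse), 3))
```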

  10. Low frequency of defective mismatch repair in a population-based series of upper urothelial carcinoma

    Directory of Open Access Journals (Sweden)

    Isfoss Björn L

    2005-03-01

    Full Text Available Abstract Background Upper urothelial cancer (UUC), i.e. transitional cell carcinomas of the renal pelvis and the ureter, occur at an increased frequency in patients with hereditary nonpolyposis colorectal cancer (HNPCC). Defective mismatch repair (MMR) specifically characterizes HNPCC-associated tumors, but also occurs in subsets of some sporadic tumors, e.g. in gastrointestinal cancer and endometrial cancer. Methods We assessed the contribution of defective MMR to the development of UUC in a population-based series from the southern Swedish Cancer Registry, through microsatellite instability (MSI) analysis and immunohistochemical evaluation of expression of the MMR proteins MLH1, PMS2, MSH2, and MSH6. Results A MSI-high phenotype was identified in 9/216 (4%) successfully analyzed patients and a MSI-low phenotype in 5/216 (2%). Loss of MMR protein immunostaining was found in 11/216 (5%) tumors, and affected most commonly MSH2 and MSH6. Conclusion This population-based series indicates that somatic MMR inactivation is a minor pathway in the development of UUC, but tumors that display defective MMR are, based on the immunohistochemical expression pattern, likely to be associated with HNPCC.

  11. Time series analysis of wind speed using VAR and the generalized impulse response technique

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, Bradley T. [Area of Information Systems and Quantitative Sciences, Rawls College of Business and Wind Science and Engineering Research Center, Texas Tech University, Lubbock, TX 79409-2101 (United States); Kruse, Jamie Brown [Center for Natural Hazard Research, East Carolina University, Greenville, NC (United States); Schroeder, John L. [Department of Geosciences and Wind Science and Engineering Research Center, Texas Tech University, Lubbock, TX (United States); Smith, Douglas A. [Department of Civil Engineering and Wind Science and Engineering Research Center, Texas Tech University, Lubbock, TX (United States)

    2007-03-15

    This research examines the interdependence in time series wind speed data measured in the same location at four different heights. A multiple-equation system known as a vector autoregression is proposed for characterizing the time series dynamics of wind. Additionally, the recently developed method of generalized impulse response analysis provides insight into the cross-effects of the wind series and their responses to shocks. Findings are based on analysis of contemporaneous wind speed time histories taken at 13, 33, 70 and 160 ft above ground level with a sampling rate of 10 Hz. The results indicate that the wind speed measured at 70 ft was the most variable. Further, the turbulence persisted longer in the 70-ft measurement than at the other heights. The greatest interdependence is observed at 13 ft. Gusts at 160 ft showed the greatest persistence in response to an 'own' shock and led to the greatest persistence in the responses of the other wind series. (author)
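
    The modelling steps can be sketched with a generic VAR implementation; the example below fits a VAR to toy wind-speed-like series at four heights and computes standard impulse responses with statsmodels (the paper's generalized impulse responses are not computed here). Column names, lag settings and data are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
n = 2000
# toy wind-speed-like series at four heights sharing a common gust component
gust = np.convolve(rng.normal(size=n + 49), np.ones(50) / 50, mode="valid")
data = pd.DataFrame({
    "h13":  8.0 + 2.0 * gust + rng.normal(0, 0.8, n),
    "h33":  9.0 + 2.2 * gust + rng.normal(0, 0.7, n),
    "h70": 10.0 + 2.5 * gust + rng.normal(0, 1.0, n),
    "h160": 11.0 + 2.3 * gust + rng.normal(0, 0.6, n),
})

model = VAR(data)
results = model.fit(maxlags=10, ic="aic")   # lag order selected by AIC
irf = results.irf(20)                       # impulse responses over 20 steps
print("selected lag order:", results.k_ar)
print("impulse-response array shape:", irf.irfs.shape)
# irf.plot(orth=True)                       # uncomment to inspect cross-height responses
```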

  12. USING SURVEY OF SERIES IN AUDIT

    Directory of Open Access Journals (Sweden)

    OFILEANU DIMI

    2014-12-01

    Full Text Available The efficiency of a financial audit within an entity can be improved by applying statistical sampling techniques. International auditing standards offer the possibility of testing only part of an entity's financial information by means of different sampling techniques. The article is a theoretical and practical discussion of the methodology and of the possibility of applying a statistical survey of series to the examination of documents and accounting records.

  13. Time-varying surrogate data to assess nonlinearity in nonstationary time series: application to heart rate variability.

    Science.gov (United States)

    Faes, Luca; Zhao, He; Chon, Ki H; Nollo, Giandomenico

    2009-03-01

    We propose a method to extend to time-varying (TV) systems the procedure for generating typical surrogate time series, in order to test the presence of nonlinear dynamics in potentially nonstationary signals. The method is based on fitting a TV autoregressive (AR) model to the original series and then regressing the model coefficients with random replacements of the model residuals to generate TV AR surrogate series. The proposed surrogate series were used in combination with a TV sample entropy (SE) discriminating statistic to assess nonlinearity in both simulated and experimental time series, in comparison with traditional time-invariant (TIV) surrogates combined with the TIV SE discriminating statistic. Analysis of simulated time series showed that using TIV surrogates, linear nonstationary time series may be erroneously regarded as nonlinear and weak TV nonlinearities may remain unrevealed, while the use of TV AR surrogates markedly increases the probability of a correct interpretation. Application to short (500 beats) heart rate variability (HRV) time series recorded at rest (R), after head-up tilt (T), and during paced breathing (PB) showed: 1) modifications of the SE statistic that were well interpretable with the known cardiovascular physiology; 2) significant contribution of nonlinear dynamics to HRV in all conditions, with significant increase during PB at 0.2 Hz respiration rate; and 3) a disagreement between TV AR surrogates and TIV surrogates in about a quarter of the series, suggesting that nonstationarity may affect HRV recordings and bias the outcome of the traditional surrogate-based nonlinearity test.
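
    The surrogate-generation idea can be sketched in a time-invariant simplification: fit an AR model by least squares, shuffle its residuals, and regenerate a series from the fitted coefficients (the proposed method fits time-varying coefficients instead). Model order and toy data below are assumptions.

```python
import numpy as np

def ar_surrogate(x, order=8, seed=0):
    """Fit an AR(order) model by least squares and regenerate the series by
    feeding a random permutation of its residuals through the fitted model."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # lagged design matrix: column k holds the (k+1)-step lag of the target
    X = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    surrogate = list(x[:order])                   # keep the original initial values
    for e in rng.permutation(resid):
        lags = surrogate[-1:-order - 1:-1]        # most recent value first
        surrogate.append(float(np.dot(coef, lags)) + e)
    return np.array(surrogate)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    t = np.arange(1000)
    x = np.sin(0.05 * t) + 0.3 * rng.normal(size=t.size)
    s = ar_surrogate(x)
    print("original std:", round(float(x.std()), 3),
          "| surrogate std:", round(float(s.std()), 3))
```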

  14. A new wind speed forecasting strategy based on the chaotic time series modelling technique and the Apriori algorithm

    International Nuclear Information System (INIS)

    Guo, Zhenhai; Chi, Dezhong; Wu, Jie; Zhang, Wenyu

    2014-01-01

    Highlights: • Impact of meteorological factors on wind speed forecasting is taken into account. • Forecasted wind speed results are corrected by the association rules. • Forecasting accuracy is improved by the new wind speed forecasting strategy. • Robustness of the proposed model is validated by data sampled from different sites. - Abstract: Wind energy has been the fastest growing renewable energy resource in recent years. Because of the intermittent nature of wind, wind power is a fluctuating source of electrical energy. Therefore, to minimize the impact of wind power on the electrical grid, accurate and reliable wind power forecasting is mandatory. In this paper, a new wind speed forecasting approach based on the chaotic time series modelling technique and the Apriori algorithm has been developed. The new approach consists of four procedures: (I) Clustering by using the k-means clustering approach; (II) Employing the Apriori algorithm to discover the association rules; (III) Forecasting the wind speed according to the chaotic time series forecasting model; and (IV) Correcting the forecasted wind speed data using the association rules discovered previously. This procedure has been verified by 31-day-ahead daily average wind speed forecasting case studies, which employed the wind speed and other meteorological data collected from four meteorological stations located in the Hexi Corridor area of China. The results of these case studies reveal that the chaotic forecasting model can efficiently improve the accuracy of the wind speed forecasting, and the Apriori algorithm can effectively discover the association rules between the wind speed and other meteorological factors. In addition, the correction results demonstrate that the association rules discovered by the Apriori algorithm are highly effective in correcting the forecasted wind speed values when the forecasted values do not match the classification given by the association rules.
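
    The four-step strategy can be sketched as follows. The snippet is a hedged illustration with made-up column names and thresholds: scikit-learn's KMeans performs the clustering, a simple one-antecedent rule search stands in for the full Apriori algorithm, and a persistence forecast stands in for the chaotic time-series model.

```python
# Sketch of the cluster -> association-rule -> forecast -> correct strategy.
# File name, column names, thresholds and the persistence "forecast" are illustrative only.
import pandas as pd
from sklearn.cluster import KMeans

df = pd.read_csv("station_daily.csv")        # assumed columns: wind, temp, pressure, humidity

# (I) Cluster each variable into three discrete levels
levels = pd.DataFrame({
    col: KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(df[[col]])
    for col in ["wind", "temp", "pressure", "humidity"]
})

# (II) Simple one-antecedent rules "other-variable level -> wind level"
# (a stand-in for the full Apriori algorithm): keep rules with high support and confidence.
rules = []
for col in ["temp", "pressure", "humidity"]:
    for a in range(3):
        mask = levels[col] == a
        if mask.mean() < 0.1:                          # minimum support
            continue
        conf = levels.loc[mask, "wind"].value_counts(normalize=True)
        if conf.iloc[0] >= 0.8:                        # minimum confidence
            rules.append((col, a, int(conf.index[0]), float(conf.iloc[0])))

# (III) Placeholder forecast: persistence (the paper uses a chaotic time-series model)
forecast = df["wind"].shift(1).bfill()

# (IV) Correction idea: if a forecast's wind level disagrees with the level implied by a
# matching rule for that day's other variables, pull it toward that wind cluster's centre.
print(rules[:5])
```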

  15. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia

    Directory of Open Access Journals (Sweden)

    Maria Hernandez-Valladares

    2016-08-01

    Full Text Available Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  16. Lanthanide complexes as luminogenic probes to measure sulfide levels in industrial samples

    International Nuclear Information System (INIS)

    Thorson, Megan K.; Ung, Phuc; Leaver, Franklin M.; Corbin, Teresa S.; Tuck, Kellie L.; Graham, Bim; Barrios, Amy M.

    2015-01-01

    A series of lanthanide-based, azide-appended complexes were investigated as hydrogen sulfide-sensitive probes. Europium complex 1 and Tb complex 3 both displayed a sulfide-dependent increase in luminescence, while Tb complex 2 displayed a decrease in luminescence upon exposure to NaHS. The utility of the complexes for monitoring sulfide levels in industrial oil and water samples was investigated. Complex 3 provided a sensitive measure of sulfide levels in petrochemical water samples (detection limit ∼ 250 nM), while complex 1 was capable of monitoring μM levels of sulfide in partially refined crude oil. - Highlights: • Lanthanide–azide based sulfide sensors were synthesized and characterized. • The probes have excitation and emission profiles compatible with sulfide-contaminated samples from the petrochemical industry. • A terbium-based probe was used to measure the sulfide concentration in oil refinery wastewater. • A europium-based probe had compatibility with partially refined crude oil samples.

  17. Lanthanide complexes as luminogenic probes to measure sulfide levels in industrial samples

    Energy Technology Data Exchange (ETDEWEB)

    Thorson, Megan K. [Department of Medicinal Chemistry, University of Utah College of Pharmacy, Salt Lake City, UT 84108 (United States); Ung, Phuc [Monash Institute of Pharmaceutical Sciences, Monash University, Victoria 3052 (Australia); Leaver, Franklin M. [Water & Energy Systems Technology, Inc., Kaysville, UT 84037 (United States); Corbin, Teresa S. [Quality Services Laboratory, Tesoro Refining and Marketing, Salt Lake City, UT 84103 (United States); Tuck, Kellie L., E-mail: kellie.tuck@monash.edu [School of Chemistry, Monash University, Victoria 3800 (Australia); Graham, Bim, E-mail: bim.graham@monash.edu [Monash Institute of Pharmaceutical Sciences, Monash University, Victoria 3052 (Australia); Barrios, Amy M., E-mail: amy.barrios@utah.edu [Department of Medicinal Chemistry, University of Utah College of Pharmacy, Salt Lake City, UT 84108 (United States)

    2015-10-08

    A series of lanthanide-based, azide-appended complexes were investigated as hydrogen sulfide-sensitive probes. Europium complex 1 and Tb complex 3 both displayed a sulfide-dependent increase in luminescence, while Tb complex 2 displayed a decrease in luminescence upon exposure to NaHS. The utility of the complexes for monitoring sulfide levels in industrial oil and water samples was investigated. Complex 3 provided a sensitive measure of sulfide levels in petrochemical water samples (detection limit ∼ 250 nM), while complex 1 was capable of monitoring μM levels of sulfide in partially refined crude oil. - Highlights: • Lanthanide–azide based sulfide sensors were synthesized and characterized. • The probes have excitation and emission profiles compatible with sulfide-contaminated samples from the petrochemical industry. • A terbium-based probe was used to measure the sulfide concentration in oil refinery wastewater. • A europium-based probe had compatibility with partially refined crude oil samples.

  18. Leveraging Disturbance Observer Based Torque Control for Improved Impedance Rendering with Series Elastic Actuators

    Science.gov (United States)

    Mehling, Joshua S.; Holley, James; O'Malley, Marcia K.

    2015-01-01

    The fidelity with which series elastic actuators (SEAs) render desired impedances is important. Numerous approaches to SEA impedance control have been developed under the premise that high-precision actuator torque control is a prerequisite. Indeed, the design of an inner torque compensator has a significant impact on actuator impedance rendering. The disturbance observer (DOB) based torque control implemented in NASA's Valkyrie robot is considered here and a mathematical model of this torque control, cascaded with an outer impedance compensator, is constructed. While previous work has examined the impact a disturbance observer has on torque control performance, little has been done regarding DOBs and impedance rendering accuracy. Both simulation and a series of experiments are used to demonstrate the significant improvements possible in an SEA's ability to render desired dynamic behaviors when utilizing a DOB. Actuator transparency at low impedances is improved, closed loop hysteresis is reduced, and the actuator's dynamic response to both commands and interaction torques more faithfully matches that of the desired model. All of this is achieved by leveraging DOB based control rather than increasing compensator gains, thus making improved SEA impedance control easier to achieve in practice.
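
    The role of the disturbance observer in the inner torque loop can be illustrated with a toy discrete-time example. The sketch below is not Valkyrie's controller: it assumes a first-order nominal actuator model and a first-order low-pass Q filter, reconstructs the input disturbance from the inverted nominal model, and subtracts the filtered estimate from the torque command.

```python
# Toy discrete-time disturbance observer (DOB) on a first-order torque plant.
# Plant: y[k+1] = a*y[k] + b*(u[k] + d); nominal parameters (a, b) assumed known.
import numpy as np

a, b = 0.95, 0.05        # nominal plant (illustrative values)
alpha = 0.9              # Q-filter pole (first-order low-pass)
Kp, ref = 2.0, 1.0       # simple proportional outer loop and torque set-point
N = 400
d = 0.5                  # constant input disturbance, e.g. unmodeled friction

def run(use_dob):
    y = np.zeros(N + 1)
    d_hat, u_prev = 0.0, 0.0
    for k in range(N):
        if use_dob and k > 0:
            # Inverse nominal model gives last step's total input; the difference
            # from the command actually sent is the (filtered) disturbance estimate.
            d_raw = (y[k] - a * y[k - 1]) / b - u_prev
            d_hat = alpha * d_hat + (1 - alpha) * d_raw
        u = Kp * (ref - y[k]) - d_hat
        y[k + 1] = a * y[k] + b * (u + d)
        u_prev = u
    return y[-1]

nominal = b * Kp * ref / (1 - a + b * Kp)     # closed loop without any disturbance
print("nominal:", nominal, "with DOB:", run(True), "without DOB:", run(False))
```

    With the DOB active the closed loop settles to the nominal (disturbance-free) response, whereas without it the constant disturbance offsets the rendered torque; this mirrors the fidelity improvements described above.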

  19. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model summarily has three foci. First, this study uses five imputation methods to directly delete the missing value. Second, we identified the key variable via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level. This was done to compare with the listing method under the forecasting error. These experimental results indicate that the Random Forest forecasting model when applied to variable selection with full variables has better forecasting performance than the listing model. In addition, this experiment shows that the proposed variable selection can help determine five forecast methods used here to improve the forecasting capability.
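
    A minimal version of the impute, select variables, then fit a Random Forest pipeline can be sketched with scikit-learn. The file name, columns, imputation choice and feature-selection step below are placeholders; the study compares five imputation methods and uses factor analysis, neither of which is reproduced here.

```python
# Sketch of the impute -> select variables -> Random Forest forecasting pipeline.
# File name, columns and parameter choices are illustrative, not the paper's.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.impute import SimpleImputer
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("reservoir_daily.csv")                  # assumed daily reservoir + weather data
target = "water_level_next_day"
X, y = df.drop(columns=[target]), df[target]

# Keep the temporal ordering: train on the earlier part, test on the later part
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

model = make_pipeline(
    SimpleImputer(strategy="median"),                    # stand-in for the five imputation methods
    SelectKBest(f_regression, k=10),                     # stand-in for factor-analysis selection
    RandomForestRegressor(n_estimators=300, random_state=0),
)
model.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```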

  20. On-line diagnostic techniques for air-operated control valves based on time series analysis

    International Nuclear Information System (INIS)

    Ito, Kenji; Matsuoka, Yoshinori; Minamikawa, Shigeru; Komatsu, Yasuki; Satoh, Takeshi.

    1996-01-01

    The objective of this research is to study the feasibility of applying on-line diagnostic techniques based on time series analysis to air-operated control valves - numerous valves of the type which are used in PWR plants. Generally the techniques can detect anomalies by failures in the initial stages for which detection is difficult by conventional surveillance of process parameters measured directly. However, the effectiveness of these techniques depends on the system being diagnosed. The difficulties in applying diagnostic techniques to air-operated control valves seem to come from the reduced sensitivity of their response as compared with hydraulic control systems, as well as the need to identify anomalies in low level signals that fluctuate only slightly but continuously. In this research, simulation tests were performed by setting various kinds of failure modes for a test valve with the same specifications as of a valve actually used in the plants. Actual control signals recorded from an operating plant were then used as input signals for simulation. The results of the tests confirmed the feasibility of applying on-line diagnostic techniques based on time series analysis to air-operated control valves. (author)

  1. Cyclo-speed reducer 6000 series; Saikuro® gensokuki 6000 series

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-04-20

    This series was put on the market as the advanced speed reducer '6000 series' in April 2000, after further improvement of its various previous excellent features by adopting innovative technologies. The various series of these cyclo-speed reducers, which adopt a unique inscribed epicyclic gear mechanism, have reached 7 million units in cumulative sales. Main specifications: (1) Input capacity range: 0.1-132 kW, (2) Output torque: 24-68,200 N·m, (3) Reduction ratio: 6-1,000,000. Features: (1) High efficiency and long life by adopting an analysis system based on the latest analytical technology, (2) Noise reduction by a maximum of nearly 6 dB and tone improvement by adopting a new tooth profile, (3) Weight reduction by a maximum of nearly 40% by adopting a motor direct-coupled mechanism. (translated by NEDO)

  2. Data splitting for artificial neural networks using SOM-based stratified sampling.

    Science.gov (United States)

    May, R J; Maier, H R; Dandy, G C

    2010-03-01

    Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
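
    The idea of SOM-based stratified splitting can be sketched as follows: train a small SOM on the data, treat each SOM unit as a stratum, and allocate the hold-out sample across strata by Neyman allocation. The snippet assumes the third-party minisom package; the grid size, sample sizes and synthetic data are illustrative and this is not the authors' implementation.

```python
# SOM-based stratified data splitting sketch (Neyman allocation over SOM units).
# Assumes the third-party `minisom` package; grid and sample sizes are illustrative.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                   # model inputs
y = X[:, 0] ** 2 + rng.normal(0, 0.1, 2000)      # target used for Neyman allocation

som = MiniSom(4, 4, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)
strata = np.array([np.ravel_multi_index(som.winner(x), (4, 4)) for x in X])

# Neyman allocation: test-set share of each stratum proportional to size * std
n_test = 400
sizes = np.array([(strata == s).sum() for s in range(16)])
stds = np.array([y[strata == s].std() if (strata == s).sum() > 1 else 0.0 for s in range(16)])
alloc = np.round(n_test * sizes * stds / max((sizes * stds).sum(), 1e-12)).astype(int)

test_idx = []
for s in range(16):
    members = np.flatnonzero(strata == s)
    k = min(alloc[s], members.size)
    test_idx.extend(rng.choice(members, size=k, replace=False))

test_idx = np.array(test_idx)
train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
print(len(train_idx), "training samples,", len(test_idx), "test samples")
```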

  3. Uranium and thorium determination in water samples taken along River Kura

    International Nuclear Information System (INIS)

    Ahmadov, M.M.; Ibadov, N.A.; Safarova, K.S.; Humbatov, F.Y.; Suleymanov, B.A.

    2014-01-01

    Full text : In the present investigation, uranium and thorium concentrations in river waters of Azerbaijan have been measured using inductively coupled plasma mass spectrometry. An Agilent 7700x series ICP-MS was applied for the analysis of the water samples. The method is based on direct introduction of samples, without any chemical pre-treatment, into an inductively coupled plasma mass spectrometer. Uranium and thorium were determined at mass numbers 238 and 232, respectively, using Bi-209 as an internal standard. The main purpose of the study is to measure the levels of uranium and thorium in water samples taken along the river Kura.

  4. Plastid phylogenomics and adaptive evolution of Gaultheria series Trichophyllae (Ericaceae), a clade from sky islands of the Himalaya-Hengduan Mountains.

    Science.gov (United States)

    Zhang, Ming-Ying; Fritsch, Peter W; Ma, Peng-Fei; Wang, Hong; Lu, Lu; Li, De-Zhu

    2017-05-01

    Gaultheria series Trichophyllae Airy Shaw is an angiosperm clade of high-alpine shrublets endemic to the Himalaya-Hengduan Mountains and characterized by recent species divergence and convergent character evolution that has until recently caused much confusion in species circumscription. Although multiple DNA sequence regions have been employed previously, phylogenetic relationships among species in the group have remained largely unresolved. Here we examined the effectiveness of the plastid genome for improving phylogenetic resolution within the G. series Trichophyllae clade. Plastid genomes of 31 samples representing all 19 recognized species of the series and three outgroup species were sequenced with Illumina Sequencing technology. Maximum likelihood (ML), maximum parsimony (MP) and Bayesian inference (BI) phylogenetic analyses were performed with various datasets, i.e., that from the whole plastid genome, coding regions, noncoding regions, large single-copy region (LSC) and inverted-repeat region a (IRa). The partitioned whole plastid genome with inverted-repeat region b (IRb) excluded was also analyzed with ML and BI. Tree topologies based on the whole plastid genome, noncoding regions, and LSC region datasets across all analyses, and that based on the partitioned dataset with ML and BI analyses, are identical and generally strongly supported. Gaultheria series Trichophyllae form a clade with three species and one variety that is sister to a clade of the remaining 16 species; the latter comprises seven main subclades. Interspecific relationships within the series are strongly supported except for those based on the coding-region and IRa-region datasets. Eight divergence hotspot regions, each possessing >5% variable sites, were screened across the whole plastid genome of the 28 individuals sampled in the series. Results of morphological character evolution reconstruction diagnose several clades, and a hypothesis of adaptive evolution for plant habit is

  5. Estimation of time-series properties of ground-observed solar irradiance data using cloud properties derived from satellite observations

    Science.gov (United States)

    Watanabe, T.; Nohara, D.

    2017-12-01

    The shorter temporal scale variation in the downward solar irradiance at the ground level (DSI) is not well understood, because research on the shorter-scale variation in the DSI is based on ground observations and ground observation stations are sparsely located. Use of datasets derived from satellite observations can overcome this limitation. DSI data and the MODIS cloud properties product are analyzed simultaneously. Three metrics: mean, standard deviation and sample entropy, are used to evaluate time-series properties of the DSI. The three metrics are computed from two-hour time series centered at the observation time of MODIS over the ground observation stations. We apply regression methods to design prediction models of each of the three metrics from cloud properties. The validation of the model accuracy shows that mean and standard deviation are predicted with a higher degree of accuracy and that the accuracy of prediction of sample entropy, which represents the complexity of a time series, is not high. One of the causes of the lower prediction skill for sample entropy is the resolution of the MODIS cloud properties. Higher sample entropy corresponds to rapid fluctuations, which are caused by small and unordered clouds. It seems that such clouds are not retrieved well.
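
    Of the three metrics, sample entropy is the least standard, so a small self-contained implementation is sketched below (numpy only, with the common parameter choices m = 2 and r = 0.2 times the standard deviation). It follows the usual definition: the negative log of the ratio of template matches of length m+1 to those of length m, excluding self-matches.

```python
# Minimal sample entropy (SampEn) implementation for a 1-D series (numpy only).
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) of series x: -ln(A/B), where B counts pairs of length-m
    templates within tolerance r (Chebyshev distance) and A does the same for
    length m+1; self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n_templates = len(x) - m            # same number of templates for both lengths

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n_templates)])
        count = 0
        for i in range(n_templates - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)

# Example: white noise is more irregular (higher SampEn) than a smooth sine wave
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 1200)
print(sample_entropy(np.sin(t)), sample_entropy(rng.normal(size=1200)))
```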

  6. Asymptotic Effectiveness of the Event-Based Sampling According to the Integral Criterion

    Directory of Open Access Journals (Sweden)

    Marek Miskowicz

    2007-01-01

    Full Text Available Rapid progress in intelligent sensing technology creates new interest in the development of analysis and design of non-conventional sampling schemes. The investigation of event-based sampling according to the integral criterion is presented in this paper. The investigated sampling scheme is an extension of the pure linear send-on-delta/level-crossing algorithm utilized for reporting the state of objects monitored by intelligent sensors. The motivation for using event-based integral sampling is outlined. The related works in adaptive sampling are summarized. The analytical closed-form formulas for the evaluation of the mean rate of event-based traffic and the asymptotic integral sampling effectiveness are derived. The simulation results verifying the analytical formulas are reported. The effectiveness of the integral sampling is compared with the related linear send-on-delta/level-crossing scheme. The calculation of the asymptotic effectiveness for common signals, which model the state evolution of dynamic systems in time, is exemplified.
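
    The difference between the two triggering rules can be made concrete with a short simulation: send-on-delta transmits when the signal has moved by more than a threshold since the last report, while the integral (send-on-area) criterion transmits when the accumulated absolute deviation since the last report exceeds a threshold. The signal and thresholds below are illustrative.

```python
# Event-based sampling of a test signal: send-on-delta vs. integral (send-on-area) criterion.
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)
x = np.sin(2 * np.pi * 0.3 * t) + 0.2 * np.sin(2 * np.pi * 2.0 * t)   # illustrative signal

def send_on_delta(x, delta):
    events, last = [0], x[0]
    for k in range(1, len(x)):
        if abs(x[k] - last) >= delta:      # report when the level has changed by delta
            events.append(k)
            last = x[k]
    return events

def send_on_area(x, dt, area):
    events, last, acc = [0], x[0], 0.0
    for k in range(1, len(x)):
        acc += abs(x[k] - last) * dt       # integral of the deviation since the last event
        if acc >= area:
            events.append(k)
            last, acc = x[k], 0.0
    return events

print("send-on-delta events:", len(send_on_delta(x, delta=0.05)))
print("integral-criterion events:", len(send_on_area(x, dt, area=0.005)))
```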

  7. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya; Amato, Nancy M.

    2012-01-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented

  8. Validation of the inverse pulse wave transit time series as surrogate of systolic blood pressure in MVAR modeling.

    Science.gov (United States)

    Giassi, Pedro; Okida, Sergio; Oliveira, Maurício G; Moraes, Raimes

    2013-11-01

    Short-term cardiovascular regulation mediated by the sympathetic and parasympathetic branches of the autonomic nervous system has been investigated by multivariate autoregressive (MVAR) modeling, providing insightful analysis. MVAR models employ, as inputs, heart rate (HR), systolic blood pressure (SBP) and respiratory waveforms. ECG (from which HR series is obtained) and respiratory flow waveform (RFW) can be easily sampled from the patients. Nevertheless, the available methods for acquisition of beat-to-beat SBP measurements during exams hamper the wider use of MVAR models in clinical research. Recent studies show an inverse correlation between pulse wave transit time (PWTT) series and SBP fluctuations. PWTT is the time interval between the ECG R-wave peak and photoplethysmography waveform (PPG) base point within the same cardiac cycle. This study investigates the feasibility of using inverse PWTT (IPWTT) series as an alternative input to SBP for MVAR modeling of the cardiovascular regulation. For that, HR, RFW, and IPWTT series acquired from volunteers during postural changes and autonomic blockade were used as input of MVAR models. Obtained results show that IPWTT series can be used as input of MVAR models, replacing SBP measurements in order to overcome practical difficulties related to the continuous sampling of the SBP during clinical exams.

  9. Contingency inferences driven by base rates: Valid by sampling

    Directory of Open Access Journals (Sweden)

    Florian Kutzner

    2011-04-01

    Full Text Available Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.

  10. Review of Worcestershire On-line Fabric Type Series website

    Directory of Open Access Journals (Sweden)

    Beverley Nenk

    2003-06-01

    Full Text Available The study of archaeological ceramics is advanced through the creation and development of regional and national pottery type-series, which contain samples of each type of pottery identified from a particular area or region. Pottery researchers working in any period, from prehistoric to post-medieval, require access to such type-series, and to their associated data, in order to be able to advance the identification of all types of pottery, not only those types produced in the local area, but those produced in surrounding regions, as well as those imported from abroad. The publication of such type-series, as well as their accessibility to researchers, is essential if the information they contain is to be disseminated. The development of the Worcestershire On-Line Fabric Type Series is the first stage in a remarkable project designed to make the complete fabric and form type series for Worcestershire ceramics accessible on the internet. As part of the Historic Environment Record for Worcestershire, formerly the Sites and Monuments Record, it is designed to improve access to finds and environmental data, with the aim of encouraging and facilitating research. Funded by Worcestershire County Council as part of its commitment to e-government, it is being developed by Worcestershire County Council Archaeology Service with OxfordArchDigital. It is one of a proposed series of on-line specialist resources (to include, for example, clay pipes, environmental archaeology, flint tools and historic buildings), which are also designed to stand alone as research tools. The ceramics website is the first part of Pottery in Perspective, a web-based project to provide information on the pottery used and made in Worcestershire from prehistory to c. 1900 AD.

  11. Hybrid pregnant reference phantom series based on adult female ICRP reference phantom

    Science.gov (United States)

    Rafat-Motavalli, Laleh; Miri-Hakimabad, Hashem; Hoseinian-Azghadi, Elie

    2018-03-01

    This paper presents boundary representation (BREP) models of a pregnant female and her fetus at the end of each trimester. The International Commission on Radiological Protection (ICRP) female reference voxel phantom was used as a base template in the development process of the pregnant hybrid phantom series. The differences in shape and location of the displaced maternal organs caused by the enlarging uterus were also taken into account. CT and MR images of fetus specimens and pregnant patients of various ages were used to replace the maternal abdominal and pelvic organs of the template phantom and to insert the fetus inside the gravid uterus. Each fetal model contains 21 different organs and tissues. The skeletal model of the fetus also includes age-dependent cartilaginous and ossified skeletal components. The replaced maternal organ models were converted to NURBS surfaces and then modified to conform to the reference values of ICRP Publication 89. The particular feature of the current series, compared to previously developed pregnant phantoms, is that it is constructed on the basis of the ICRP reference phantom. The maternal replaced organ models are NURBS surfaces; with this great potential, they can feasibly be converted to high-quality polygon mesh phantoms.

  12. Data mining in time series databases

    CERN Document Server

    Kandel, Abraham; Bunke, Horst

    2004-01-01

    Adding the time dimension to real-world databases produces Time Series Databases (TSDB) and introduces new aspects and difficulties to data mining and knowledge discovery. This book covers the state-of-the-art methodology for mining time series databases. The novel data mining methods presented in the book include techniques for efficient segmentation, indexing, and classification of noisy and dynamic time series. A graph-based method for anomaly detection in time series is described and the book also studies the implications of a novel and potentially useful representation of time series as strings. The problem of detecting changes in data mining models that are induced from temporal databases is additionally discussed.

  13. Identification of pests and diseases of Dalbergia hainanensis based on EVI time series and classification of decision tree

    Science.gov (United States)

    Luo, Qiu; Xin, Wu; Qiming, Xiong

    2017-06-01

    In the process of extracting vegetation information from remote sensing data, phenological features and the low performance of remote sensing analysis algorithms are often not considered. To solve this problem, a method for extracting remote sensing vegetation information based on EVI time series and a decision-tree classification using multi-source branch similarity is proposed. Firstly, to improve the time-series stability of the recognition accuracy, the seasonal feature of vegetation is extracted based on the fitting span range of the time series. Secondly, the decision-tree similarity is distinguished by the adaptive selection path or the probability parameter of component prediction. As an index, it is used to evaluate the degree of task association, to decide whether to perform migration of the multi-source decision tree, and to ensure the speed of migration. Finally, the classification and recognition accuracy for pests and diseases reaches 87%-98% for commercial forest of Dalbergia hainanensis, which is significantly better than the MODIS coverage accuracy of 80%-96% in this area. Therefore, the validity of the proposed method is verified.

  14. Does self-directed and web-based support for parents enhance the effects of viewing a reality television series based on the Triple P-Positive Parenting Programme?

    Science.gov (United States)

    Sanders, Matthew; Calam, Rachel; Durand, Marianne; Liversidge, Tom; Carmont, Sue Ann

    2008-09-01

    This study investigated whether providing self-directed and web-based support for parents enhanced the effects of viewing a reality television series based on the Triple P - Positive Parenting Programme. Parents with a child aged 2 to 9 (N = 454) were randomly assigned to either a standard or enhanced intervention condition. In the standard television alone viewing condition, parents watched the six-episode weekly television series, 'Driving Mum and Dad Mad'. Parents in the enhanced television viewing condition received a self-help workbook, extra web support involving downloadable parenting tip sheets, audio and video streaming of positive parenting messages and email support, in addition to viewing the television series. Parents in both conditions reported significant improvements in their child's disruptive behaviour and improvements in dysfunctional parenting practices. Effects were greater for the enhanced condition as seen on the ECBI, two of the three parenting indicators and overall programme satisfaction. However, no significant differences were seen on other measures, including parent affect indicators. The level of improvement was related to number of episodes watched, with greatest changes occurring in families who watched each episode. Improvements achieved at post-intervention by parents in both groups were maintained at six-month follow-up. Online tip sheets were frequently accessed; uptake of web-based resources was highest early in the series. The value of combining self-help approaches, technology and media as part of a comprehensive public health approach to providing parenting support is discussed.

  15. Global Population Density Grid Time Series Estimates

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Population Density Grid Time Series Estimates provide a back-cast time series of population density grids based on the year 2000 population grid from SEDAC's...

  16. Soft magnetic properties of bulk amorphous Co-based samples

    International Nuclear Information System (INIS)

    Fuezer, J.; Bednarcik, J.; Kollar, P.

    2006-01-01

    Ball milling of melt-spun ribbons and subsequent compaction of the resulting powders in the supercooled liquid region were used to prepare disc-shaped bulk amorphous Co-based samples. Several bulk samples were prepared by hot compaction with subsequent heat treatment (500 °C - 575 °C). The influence of the consolidation temperature and the follow-up heat treatment on the magnetic properties of the bulk samples was investigated. The final heat treatment leads to a decrease of the coercivity to values between 7.5 and 9 A/m (Authors)

  17. A multidisciplinary database for geophysical time series management

    Science.gov (United States)

    Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.

    2013-12-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. With the term time series we refer to a set of observations of a given phenomenon acquired sequentially in time. When the time intervals are equally spaced, one speaks of the period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), in order to acquire time series from different data sources and standardize them within a relational database. The operation of standardization provides the ability to perform operations, such as querying and visualization, on many measures, synchronizing them using a common time scale. The proposed architecture follows a multiple layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the Loaders layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by using a smart table partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series on a specified time range, or to follow the real-time signal acquisition, according to a data access policy for the users.
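
    The core standardization idea, heterogeneous signals mapped onto a common relational layout keyed by channel and timestamp, can be sketched with sqlite3. This is only an illustration of the concept, not the TSDSystem schema; table and column names are invented.

```python
# Sketch of storing heterogeneous time series in one relational layout (not the TSDSystem schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE channel (id INTEGER PRIMARY KEY, name TEXT, unit TEXT, sampling_period_s REAL);
CREATE TABLE sample  (channel_id INTEGER REFERENCES channel(id),
                      t_utc TEXT, value REAL,
                      PRIMARY KEY (channel_id, t_utc));
""")
conn.execute("INSERT INTO channel VALUES (1, 'tremor_rms', 'um/s', 60.0)")
conn.execute("INSERT INTO channel VALUES (2, 'SO2_flux', 't/d', 86400.0)")
conn.executemany("INSERT INTO sample VALUES (?, ?, ?)", [
    (1, "2013-07-01T00:00:00Z", 3.2),
    (1, "2013-07-01T00:01:00Z", 3.5),
    (2, "2013-07-01T00:00:00Z", 410.0),
])

# Query two channels over a common time range (synchronization on the shared time axis)
rows = conn.execute("""
SELECT c.name, s.t_utc, s.value
FROM sample s JOIN channel c ON c.id = s.channel_id
WHERE s.t_utc BETWEEN '2013-07-01T00:00:00Z' AND '2013-07-01T00:05:00Z'
ORDER BY s.t_utc, c.name
""").fetchall()
print(rows)
```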

  18. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    KAUST Repository

    Fidel, Adam; Jacobs, Sam Ade; Sharma, Shishir; Amato, Nancy M.; Rauchwerger, Lawrence

    2014-01-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the sub problems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  19. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    KAUST Repository

    Fidel, Adam

    2014-05-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the sub problems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  20. Bioremediation of PAH contaminated soil samples

    International Nuclear Information System (INIS)

    Joshi, M.M.; Lee, S.

    1994-01-01

    Soils contaminated with polynuclear aromatic hydrocarbons (PAHs) pose a hazard to life. The remediation of such sites can be done using physical, chemical, and biological treatment methods or a combination of them. It is of interest to study the decontamination of soil using bioremediation. The experiments were conducted using Acinetobacter (ATCC 31012) at room temperature without pH or temperature control. In the first series of experiments, contaminated soil samples obtained from the Alberta Research Council were analyzed to determine the toxic contaminants and their composition in the soil. These samples were then treated using aerobic fermentation and the removal efficiency for each contaminant was determined. In the second series of experiments, a single contaminant was used to prepare a synthetic soil sample. This sample of known composition was then treated using aerobic fermentation in continuously stirred flasks. In one set of flasks, the contaminant was the only carbon source and in the other set, starch was an additional carbon source. In the third series of experiments, the synthetic contaminated soil sample was treated in continuously stirred flasks in the first set and in a fixed bed in the second set, and the removal efficiencies were compared. The removal efficiencies obtained indicated the extent of biodegradation for the various contaminants, the effect of an additional carbon source, and the performance in a fixed bed without external aeration.

  1. Short-term prediction method of wind speed series based on fractal interpolation

    International Nuclear Information System (INIS)

    Xiu, Chunbo; Wang, Tiantian; Tian, Meng; Li, Yanqing; Cheng, Yi

    2014-01-01

    Highlights: • An improved fractal interpolation prediction method is proposed. • The chaos optimization algorithm is used to obtain the iterated function system. • The fractal extrapolate interpolation prediction of wind speed series is performed. - Abstract: In order to improve the prediction performance for wind speed series, rescaled range analysis is used to analyze the fractal characteristics of the wind speed series. An improved fractal interpolation prediction method is proposed to predict wind speed series whose Hurst exponents are close to 1. An optimization function composed of the interpolation error and the constraint items of the vertical scaling factors in the fractal interpolation iterated function system is designed. The chaos optimization algorithm is used to optimize the function to resolve the optimal vertical scaling factors. According to the self-similarity characteristic and the scale invariance, the fractal extrapolate interpolation prediction can be performed by extending the fractal characteristic from the internal interval to the external interval. Simulation results show that the fractal interpolation prediction method can obtain better prediction results than other methods for wind speed series with the fractal characteristic, and the prediction performance of the proposed method can be improved further because the fractal characteristic of its iterated function system is similar to that of the predicted wind speed series.

  2. Fourier series

    CERN Document Server

    Tolstov, Georgi P

    1962-01-01

    Richard A. Silverman's series of translations of outstanding Russian textbooks and monographs is well-known to people in the fields of mathematics, physics, and engineering. The present book is another excellent text from this series, a valuable addition to the English-language literature on Fourier series.This edition is organized into nine well-defined chapters: Trigonometric Fourier Series, Orthogonal Systems, Convergence of Trigonometric Fourier Series, Trigonometric Series with Decreasing Coefficients, Operations on Fourier Series, Summation of Trigonometric Fourier Series, Double Fourie

  3. Using machine learning to accelerate sampling-based inversion

    Science.gov (United States)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally-cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and bridges the gap between prior- and posterior-sampling frameworks.
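
    The accelerate-by-surrogate idea can be sketched with scikit-learn's Gaussian process regressor inside a plain Metropolis sampler: the expensive forward operator is replaced by a GP trained on a modest number of exact evaluations (the refinement of the surrogate during sampling, mentioned above, is not shown). The forward model, noise level and tuning choices are toy values, not the presenters' method.

```python
# Toy surrogate-accelerated Metropolis sampler: a Gaussian process stands in for an
# "expensive" forward operator. Forward model, noise level and tuning are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def forward(m):                          # stand-in for an expensive simulation
    return m ** 3 + 2.0 * m

rng = np.random.default_rng(0)
m_true, sigma = 0.7, 0.05
d_obs = forward(m_true) + rng.normal(0, sigma)

# Train the surrogate on a small design of exact forward evaluations
m_train = np.linspace(-2, 2, 15)[:, None]
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
gp.fit(m_train, forward(m_train.ravel()))

def log_like(m):
    pred = gp.predict(np.array([[m]]))[0]         # cheap surrogate evaluation
    return -0.5 * ((d_obs - pred) / sigma) ** 2

# Plain Metropolis random walk over the single model parameter
m, ll, samples = 0.0, log_like(0.0), []
for _ in range(5000):
    prop = m + rng.normal(0, 0.05)
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        m, ll = prop, ll_prop
    samples.append(m)

print("posterior mean:", np.mean(samples[2000:]), "true value:", m_true)
```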

  4. Characterizing system dynamics with a weighted and directed network constructed from time series data

    International Nuclear Information System (INIS)

    Sun, Xiaoran; Small, Michael; Zhao, Yi; Xue, Xiaoping

    2014-01-01

    In this work, we propose a novel method to transform a time series into a weighted and directed network. For a given time series, we first generate a set of segments via a sliding window, and then use a doubly symbolic scheme to characterize every windowed segment by combining absolute amplitude information with an ordinal pattern characterization. Based on this construction, a network can be directly constructed from the given time series: segments corresponding to different symbol-pairs are mapped to network nodes and the temporal succession between nodes is represented by directed links. With this conversion, dynamics underlying the time series has been encoded into the network structure. We illustrate the potential of our networks with a well-studied dynamical model as a benchmark example. Results show that network measures for characterizing global properties can detect the dynamical transitions in the underlying system. Moreover, we employ a random walk algorithm to sample loops in our networks, and find that time series with different dynamics exhibit distinct cycle structures. That is, the relative prevalence of loops with different lengths can be used to identify the underlying dynamics.
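
    The construction described above can be sketched in a few lines with networkx: slide a window over the series, assign each window a symbol pair (its ordinal pattern plus a coarse amplitude level), and add a directed, weighted edge between the symbols of consecutive windows. The window length, number of amplitude bins and test signal are illustrative choices.

```python
# Sketch: map a time series to a weighted, directed network of (ordinal, amplitude) symbols.
import numpy as np
import networkx as nx

def series_to_network(x, w=4, n_amp_bins=3):
    x = np.asarray(x, dtype=float)
    edges = np.quantile(x, np.linspace(0, 1, n_amp_bins + 1)[1:-1])  # amplitude bin edges
    symbols = []
    for i in range(len(x) - w + 1):
        seg = x[i:i + w]
        ordinal = tuple(np.argsort(seg))                 # ordinal pattern of the window
        amp = int(np.digitize(seg.mean(), edges))        # coarse absolute-amplitude level
        symbols.append((ordinal, amp))

    G = nx.DiGraph()
    for u, v in zip(symbols[:-1], symbols[1:]):          # temporal succession -> directed link
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)
    return G

t = np.arange(3000)
signal = np.sin(0.07 * t) + 0.4 * np.random.default_rng(1).normal(size=t.size)
G = series_to_network(signal)
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "weighted directed edges")
```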

  5. Characterizing system dynamics with a weighted and directed network constructed from time series data

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Xiaoran, E-mail: sxr0806@gmail.com [Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen 518055 (China); School of Mathematics and Statistics, The University of Western Australia, Crawley WA 6009 (Australia); Small, Michael, E-mail: michael.small@uwa.edu.au [School of Mathematics and Statistics, The University of Western Australia, Crawley WA 6009 (Australia); Zhao, Yi [Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen 518055 (China); Xue, Xiaoping [Department of Mathematics, Harbin Institute of Technology, Harbin 150025 (China)

    2014-06-15

    In this work, we propose a novel method to transform a time series into a weighted and directed network. For a given time series, we first generate a set of segments via a sliding window, and then use a doubly symbolic scheme to characterize every windowed segment by combining absolute amplitude information with an ordinal pattern characterization. Based on this construction, a network can be directly constructed from the given time series: segments corresponding to different symbol-pairs are mapped to network nodes and the temporal succession between nodes is represented by directed links. With this conversion, dynamics underlying the time series has been encoded into the network structure. We illustrate the potential of our networks with a well-studied dynamical model as a benchmark example. Results show that network measures for characterizing global properties can detect the dynamical transitions in the underlying system. Moreover, we employ a random walk algorithm to sample loops in our networks, and find that time series with different dynamics exhibit distinct cycle structures. That is, the relative prevalence of loops with different lengths can be used to identify the underlying dynamics.

  6. Synthesis and Properties of Some polyurethane/ Partially Aromatic Polyester Casting Samples

    International Nuclear Information System (INIS)

    Sadek, E.M.; Mazroua, A.M.; Emam, A.S.; Motawie, A.M.

    2005-01-01

    A series of partially aromatic terephthalate polyesters were synthesized by melt transesterification of dimethyl terephthalate with various types of aliphatic diol compounds in 1:1.1 molar ratio. Ethylene-, di-, tri-, tetra ethylene glycol and polyethylene glycol with different molecular weights 1000, 4000, 6000 as well as the prepared dihydroxy natural rubber were used. Another series of partially aromatic adipate and sebacate polyesters based on the prepared bisphenol A and its tetrabromo derivative were also synthesized by direct polycondensation esterification with adipic and sebacic acid. Polyurethane with NCO/OH ratio equal 4 was prepared from the reaction of 2,4 toluene diisocyanate with polyethylene glycol 1000. The prepared polyurethane was mixed with different weight percentages (2, 4, 6, 8, 10 or 12 % w/w) of the prepared partially aromatic polyesters to give polyurethane/polyester compositions. Mechanical and electrical properties as well as water and chemical resistance of the prepared film samples with thickness 3-4 mm were determined and compared with those of polyurethane film sample without polyester. The data indicate that 10 % w/w of the added partially aromatic polyester increases polyurethane tensile strength, improves its insulation properties and hydrolytic stability as well as its chemical resistance. Film samples based on bisphenol A impart excellent properties as compared with those based on aliphatic glycol species and dihydroxy natural rubber. Keywords: Partially aromatic polyesters, Dimethyl terephthalate, Glycols, Bisphenol A, Tetrabromo bisphenol A, Natural rubber, Adipic acid, Sebacic acid, Polyurethane, Casting

  7. Sampling guidelines for oral fluid-based surveys of group-housed animals.

    Science.gov (United States)

    Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J

    2017-09-01

    Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
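
    The central simulation question, the barn-level probability of detecting at least one positive sample as a function of the number of pens sampled, prevalence and assay performance, can be sketched directly. The pen counts, prevalence and diagnostic sensitivity below are illustrative values, not the study's estimates.

```python
# Barn-level probability of detection for pen-based oral fluid sampling, by simulation.
# Pen count, prevalence and assay sensitivity are illustrative values only.
import numpy as np

def prob_detect(n_pens=48, prev=0.10, n_sampled=6, sens=0.95, n_sim=20000, seed=0):
    """Probability that at least one sampled pen tests positive (perfect specificity
    assumed for this illustration), under simple random sampling of pens."""
    rng = np.random.default_rng(seed)
    detections = 0
    for _ in range(n_sim):
        positive = rng.random(n_pens) < prev                          # truly positive pens
        sampled = rng.choice(n_pens, size=n_sampled, replace=False)   # simple random sample
        if (rng.random(n_sampled) < np.where(positive[sampled], sens, 0.0)).any():
            detections += 1
    return detections / n_sim

for n in (2, 4, 6, 10):
    print(f"pens sampled = {n:2d}  P(detect) ~ {prob_detect(n_sampled=n):.2f}")
```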

  8. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was assumed to be an importance sampling function. A Kriging metamodel was constructed in more detail in the vicinity of a limit state. The failure probability was calculated based on importance sampling, which was performed for the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for a kernel density in the vicinity of a limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.

  9. Adaptive list sequential sampling method for population-based observational studies

    NARCIS (Netherlands)

    Hof, Michel H.; Ravelli, Anita C. J.; Zwinderman, Aeilko H.

    2014-01-01

    In population-based observational studies, non-participation and delayed response to the invitation to participate are complications that often arise during the recruitment of a sample. When both are not properly dealt with, the composition of the sample can be different from the desired

  10. Judgment on the presence of radionuclides in sample analysis: A case study

    International Nuclear Information System (INIS)

    Muhamat Omar; Zalina Laili; Mohd Suhaimi Hamzah

    2012-01-01

    Qualitative and quantitative analysis of samples require good judgment from the analysts. These two aspects in gamma spectrometric analysis of Proficiency Test and solid radioactive waste samples for the determination of radionuclides are discussed. It is vital to judge and decide what energy peaks belong to which radionuclides prior to the creation of customized radionuclide library for the analysis of specific samples. Corrections due to radionuclide decay and growth, and the half-life assigned to a particular radionuclide in the uranium and thorium series are also discussed. Discussion on judgment to confirm the presence of thorium in food samples based on gamma spectrometry and neutron activation analysis is also provided. (author)

  11. Development of indicators of vegetation recovery based on time series analysis of SPOT Vegetation data

    Science.gov (United States)

    Lhermitte, S.; Tips, M.; Verbesselt, J.; Jonckheere, I.; Van Aardt, J.; Coppin, Pol

    2005-10-01

    Large-scale wild fires have direct impacts on natural ecosystems and play a major role in the vegetation ecology and carbon budget. Accurate methods for describing post-fire development of vegetation are therefore essential for the understanding and monitoring of terrestrial ecosystems. Time series analysis of satellite imagery offers the potential to quantify these parameters with spatial and temporal accuracy. Current research focuses on the potential of time series analysis of SPOT Vegetation S10 data (1999-2001) to quantify the vegetation recovery of large-scale burns detected in the framework of GBA2000. The objective of this study was to provide quantitative estimates of the spatio-temporal variation of vegetation recovery based on remote sensing indicators. Southern Africa was used as a pilot study area, given the availability of ground and satellite data. An automated technique was developed to extract consistent indicators of vegetation recovery from the SPOT-VGT time series. Reference areas were used to quantify the vegetation regrowth by means of Regeneration Indices (RI). Two kinds of recovery indicators (time-based and value-based) were tested for RIs of NDVI, SR, SAVI, NDWI, and pure band information. The effects of vegetation structure and temporal fire regime features on the recovery indicators were subsequently analyzed. Statistical analyses were conducted to assess whether the recovery indicators were different for different vegetation types and dependent on timing of the burning season. Results highlighted the importance of appropriate reference areas and the importance of correct normalization of the SPOT-VGT data.

  12. Stochastic bounded consensus tracking of leader-follower multi-agent systems with measurement noises based on sampled data with general sampling delay

    International Nuclear Information System (INIS)

    Wu Zhi-Hai; Peng Li; Xie Lin-Bo; Wen Ji-Wei

    2013-01-01

    In this paper we provide a unified framework for consensus tracking of leader-follower multi-agent systems with measurement noises based on sampled data with a general sampling delay. First, a stochastic bounded consensus tracking protocol based on sampled data with a general sampling delay is presented by employing the delay decomposition technique. Then, necessary and sufficient conditions are derived for guaranteeing leader-follower multi-agent systems with measurement noises and a time-varying reference state to achieve mean square bounded consensus tracking. The obtained results cover no sampling delay, a small sampling delay and a large sampling delay as three special cases. Last, simulations are provided to demonstrate the effectiveness of the theoretical results. (interdisciplinary physics and related areas of science and technology)

  13. Uranium series disequilibrium measurements at Mol, Belgium

    International Nuclear Information System (INIS)

    Ivanovich, M.; Wilkins, M.A.

    1985-02-01

    The contract just completed has funded two parallel uranium series disequilibrium studies and the aims of and the progress to completion of these studies are given in this report. The larger study was concerned with the measurement of uranium series disequilibrium in ground waters derived from sand layers above and below the Boom Clay formation in North East Belgium. The disequilibrium data are analysed in terms of uranium, thorium and radium isotopic geochemistries and in terms of water types and their mixing in the regional groundwater system. It is concluded that most sampled waters are mixtures of younger and older waters. No true old water end-members have been sampled. Simple considerations of the uranium isotopic data indicate that the longest residence times of the sampled waters are not much in excess of 1 to 10 x 10³ y. Detailed mixing patterns could not be established from this limited data set particularly in the absence of more detailed modelling in conjunction with groundwater hydraulic pressure and flow direction data. (author)

  14. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated by date into an integrated research dataset. The proposed time-series forecasting model has three main steps. First, this study uses five imputation methods to handle the missing values rather than simply deleting them. Second, the key variables are identified via factor analysis, and unimportant variables are then removed sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, and its forecasting error is compared with that of the listed methods. The experimental results indicate that the Random Forest forecasting model with variable selection has better forecasting performance than the listed models. In addition, the experiment shows that the proposed variable selection can help the five forecasting methods used here to improve their forecasting capability.
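
    As a rough illustration of the forecasting step described above, the sketch below imputes missing predictor values and fits a Random Forest to predict the next day's water level. It is a minimal sketch on synthetic data: the column names, the one-day lag structure and the mean-imputation choice are assumptions for illustration, not the study's actual Shimen Reservoir dataset or settings.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "rainfall": rng.gamma(2.0, 5.0, n),            # hypothetical daily predictors
    "inflow": rng.gamma(3.0, 4.0, n),
    "temperature": 20 + 8 * np.sin(np.linspace(0, 20, n)) + rng.normal(0, 1, n),
})
df["water_level"] = 240 + 2 * np.sin(np.linspace(0, 10, n)) + rng.normal(0, 0.2, n)
df.iloc[rng.choice(n, 25, replace=False), 0] = np.nan   # simulate gaps in the record

X = SimpleImputer(strategy="mean").fit_transform(df.drop(columns="water_level"))
y = df["water_level"].shift(-1).ffill().to_numpy()      # next-day level as the target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:-50], y[:-50])
print("held-out MAE:", np.abs(model.predict(X[-50:]) - y[-50:]).mean())
```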

  15. GAMMA-RAY CHARACTERIZATION OF THE U-SERIES INTERMEDIATE DAUGHTERS FROM SOIL SAMPLES AT THE PENA BLANCA NATURAL ANALOG, CHIHUAHUA, MEXICO

    Energy Technology Data Exchange (ETDEWEB)

    D.C. French; E.Y. Anthony; P.C. Goodell

    2005-07-18

    The Pena Blanca natural analog is located in the Sierra Pena Blanca, approximately 50 miles north of Chihuahua City, Mexico. The Sierra Pena Blanca is composed mainly of ash-flow tuffs, and the uranium in the region is contained in the brecciated zones of these tuffs. The Pena Blanca site is considered a natural analog to the proposed Yucca Mountain Nuclear Waste Repository because they share similar characteristics of structure, volcanic lithology, tectonic activity, and hydrologic regime. One of the mineralized zones, the Nopal I deposit, was mined in the early 1980s and the ore was stockpiled close to the mine. This stockpile area has subsequently been cleared and is referred to as the prior high-grade stockpile (PHGS) site. Soil surrounding boulders of high-grade ore associated with the PHGS site has been sampled. The purpose of this study is to characterize the transport of uranium series radioisotopes from the boulder to the soil during the past 25 years. Transport is characterized by determining the activities of individual radionuclides and daughter to parent ratios. The daughter to parent ratios are used to establish whether the samples are in secular equilibrium. Activities are determined using gamma-ray spectroscopy. Isotopes of the uranium series decay chain detected by gamma-ray spectroscopy include 210Pb, 234U, 234Th, 230Th, 226Ra, 214Pb, 214Bi, and 234Pa. Preliminary results indicate that some daughter to parent pairs appear to be in secular disequilibrium. Thorium is in excess relative to uranium, and radium is in excess relative to thorium. A deficiency appears to exist for 210Pb relative to 214Bi and 214Pb. If these results are borne out by further analysis, they would suggest transport of nuclides from the high-grade boulder into its surroundings, followed by continued leaching of uranium and lead from the environment.

  16. 238 series isotopes at different soil depths and disequilibrium over various geology and soil classifications along transects in selected parts of Ireland

    International Nuclear Information System (INIS)

    McAulay, I.R.; Hayes, A.

    1996-01-01

    Sampling of soils was carried out along linear transects in selected regions of the country, a technique known as Transect Sampling. This was a controlled rather than a random sampling technique. The transects were located in regions which were previously known to contain high levels of the 226Ra isotope, from the 238U series. The soil sampling was carried out at selected sites along these transects. At each transect site, two different soil depths were examined and the soil samples collected were identified as the top and bottom soil samples. This transect data set, consisting of the isotope activity levels and the influencing variables transect geology and soil types, provided a data base for investigation. Comparisons were made between the soil isotope activity levels measured at different soil depths. An examination of the 238U decay series showed the existence of disequilibrium. Relationships between the disequilibrium data and the associated geology and soil types were investigated. (author)

  17. Performing T-tests to Compare Autocorrelated Time Series Data Collected from Direct-Reading Instruments.

    Science.gov (United States)

    O'Shaughnessy, Patrick; Cavanaugh, Joseph E

    2015-01-01

    Industrial hygienists now commonly use direct-reading instruments to evaluate hazards in the workplace. The stored values over time from these instruments constitute a time series of measurements that are often autocorrelated. Given the need to statistically compare two occupational scenarios using values from a direct-reading instrument, a t-test must consider measurement autocorrelation or the resulting test will have a largely inflated type-1 error probability (false rejection of the null hypothesis). A method is described for both the one-sample and two-sample cases which properly adjusts for autocorrelation. This method involves the computation of an "equivalent sample size" that effectively decreases the actual sample size when determining the standard error of the mean for the time series. An example is provided for the one-sample case, and an example is given where a two-sample t-test is conducted for two autocorrelated time series comprised of lognormally distributed measurements.
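
    The sketch below illustrates the general idea of deflating the sample size before a one-sample t-test on an autocorrelated series. The AR(1)-based adjustment n_eff = n(1 - r1)/(1 + r1) is one common correction and is an assumption here; the article's exact equivalent-sample-size formula may differ.

```python
import numpy as np
from scipy import stats

def autocorrelated_t_test(x, mu0):
    x = np.asarray(x, dtype=float)
    n = len(x)
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]          # lag-1 autocorrelation estimate
    n_eff = n * (1 - r1) / (1 + r1)                # "equivalent sample size"
    se = x.std(ddof=1) / np.sqrt(n_eff)            # standard error of the mean, adjusted
    t = (x.mean() - mu0) / se
    p = 2 * stats.t.sf(abs(t), df=n_eff - 1)
    return t, p, n_eff

rng = np.random.default_rng(1)
x = np.empty(200)
x[0] = 0.5
for i in range(1, 200):                            # AR(1) series with mean 0.5
    x[i] = 0.5 + 0.7 * (x[i - 1] - 0.5) + rng.normal(0, 1)

print(autocorrelated_t_test(x, mu0=0.5))
```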

  18. Convergence of statistical moments of particle density time series in scrape-off layer plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Kube, R., E-mail: ralph.kube@uit.no; Garcia, O. E. [Department of Physics and Technology, UiT - The Arctic University of Norway, N-9037 Tromsø (Norway)

    2015-01-15

    Particle density fluctuations in the scrape-off layer of magnetically confined plasmas, as measured by gas-puff imaging or Langmuir probes, are modeled as the realization of a stochastic process in which a superposition of pulses with a fixed shape, an exponential distribution of waiting times, and amplitudes represents the radial motion of blob-like structures. With an analytic formulation of the process at hand, we derive expressions for the mean squared error on estimators of sample mean and sample variance as a function of sample length, sampling frequency, and the parameters of the stochastic process. Employing that the probability distribution function of a particularly relevant stochastic process is given by the gamma distribution, we derive estimators for sample skewness and kurtosis and expressions for the mean squared error on these estimators. Numerically, generated synthetic time series are used to verify the proposed estimators, the sample length dependency of their mean squared errors, and their performance. We find that estimators for sample skewness and kurtosis based on the gamma distribution are more precise and more accurate than common estimators based on the method of moments.

  19. Convergence of statistical moments of particle density time series in scrape-off layer plasmas

    International Nuclear Information System (INIS)

    Kube, R.; Garcia, O. E.

    2015-01-01

    Particle density fluctuations in the scrape-off layer of magnetically confined plasmas, as measured by gas-puff imaging or Langmuir probes, are modeled as the realization of a stochastic process in which a superposition of pulses with a fixed shape, an exponential distribution of waiting times, and amplitudes represents the radial motion of blob-like structures. With an analytic formulation of the process at hand, we derive expressions for the mean squared error on estimators of sample mean and sample variance as a function of sample length, sampling frequency, and the parameters of the stochastic process. Employing that the probability distribution function of a particularly relevant stochastic process is given by the gamma distribution, we derive estimators for sample skewness and kurtosis and expressions for the mean squared error on these estimators. Numerically, generated synthetic time series are used to verify the proposed estimators, the sample length dependency of their mean squared errors, and their performance. We find that estimators for sample skewness and kurtosis based on the gamma distribution are more precise and more accurate than common estimators based on the method of moments

  20. Predicting chaotic time series

    International Nuclear Information System (INIS)

    Farmer, J.D.; Sidorowich, J.J.

    1987-01-01

    We present a forecasting technique for chaotic data. After embedding a time series in a state space using delay coordinates, we ''learn'' the induced nonlinear mapping using local approximation. This allows us to make short-term predictions of the future behavior of a time series, using information based only on past values. We present an error estimate for this technique, and demonstrate its effectiveness by applying it to several examples, including data from the Mackey-Glass delay differential equation, Rayleigh-Benard convection, and Taylor-Couette flow
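
    A minimal sketch of the approach described above: embed the series with delay coordinates and forecast by averaging the futures of the nearest neighbours of the current state (a zeroth-order local approximation). The surrogate series, embedding dimension, delay and neighbour count are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def logistic_series(n=2000, r=3.9, x0=0.4):
    # Simple chaotic surrogate series so the example stays self-contained.
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

def local_forecast(series, m=3, tau=1, k=5, horizon=1):
    """Zeroth-order local prediction in delay coordinates."""
    t_all = np.arange((m - 1) * tau, len(series))
    # delay vectors [x(t), x(t - tau), ..., x(t - (m-1)*tau)]
    vecs = np.column_stack([series[t_all - j * tau] for j in range(m)])
    query = vecs[-1]                       # state at the last observed time
    usable = t_all[:-horizon]              # neighbours whose futures are already known
    dists = np.linalg.norm(vecs[: len(usable)] - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return series[usable[nearest] + horizon].mean()

x = logistic_series()
print("prediction:", local_forecast(x[:-1]), "actual:", x[-1])
```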

  1. Analysis of series resonant converter with series-parallel connection

    Science.gov (United States)

    Lin, Bor-Ren; Huang, Chien-Lan

    2011-02-01

    In this study, a parallel inductor-inductor-capacitor (LLC) resonant converter series-connected on the primary side and parallel-connected on the secondary side is presented for server power supply systems. Based on series resonant behaviour, the power metal-oxide-semiconductor field-effect transistors are turned on at zero voltage switching and the rectifier diodes are turned off at zero current switching. Thus, the switching losses on the power semiconductors are reduced. In the proposed converter, the primary windings of the two LLC converters are connected in series. Thus, the two converters have the same primary currents to ensure that they can supply the balance load current. On the output side, two LLC converters are connected in parallel to share the load current and to reduce the current stress on the secondary windings and the rectifier diodes. In this article, the principle of operation, steady-state analysis and design considerations of the proposed converter are provided and discussed. Experiments with a laboratory prototype with a 24 V/21 A output for server power supply were performed to verify the effectiveness of the proposed converter.

  2. Characterizing time series via complexity-entropy curves

    Science.gov (United States)

    Ribeiro, Haroldo V.; Jauregui, Max; Zunino, Luciano; Lenzi, Ervin K.

    2017-06-01

    The search for patterns in time series is a very common task when dealing with complex systems. This is usually accomplished by employing a complexity measure such as entropies and fractal dimensions. However, such measures usually only capture a single aspect of the system dynamics. Here, we propose a family of complexity measures for time series based on a generalization of the complexity-entropy causality plane. By replacing the Shannon entropy by a monoparametric entropy (Tsallis q entropy) and after considering the proper generalization of the statistical complexity (q complexity), we build up a parametric curve (the q -complexity-entropy curve) that is used for characterizing and classifying time series. Based on simple exact results and numerical simulations of stochastic processes, we show that these curves can distinguish among different long-range, short-range, and oscillating correlated behaviors. Also, we verify that simulated chaotic and stochastic time series can be distinguished based on whether these curves are open or closed. We further test this technique in experimental scenarios related to chaotic laser intensity, stock price, sunspot, and geomagnetic dynamics, confirming its usefulness. Finally, we prove that these curves enhance the automatic classification of time series with long-range correlations and interbeat intervals of healthy subjects and patients with heart disease.
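
    For orientation, the sketch below computes the ordinal (Bandt-Pompe) pattern probabilities and the normalised Shannon permutation entropy that such complexity-entropy analyses build on. The q-generalised entropy and statistical complexity of the paper are not reproduced; this is only the q = 1 building block, with illustrative signals.

```python
import numpy as np
from itertools import permutations
from math import factorial, log

def ordinal_probabilities(x, d=3):
    # Relative frequencies of the d! ordinal patterns in the series.
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        counts[tuple(int(v) for v in np.argsort(x[i:i + d]))] += 1
    p = np.array(list(counts.values()), dtype=float)
    return p / p.sum()

def permutation_entropy(x, d=3):
    p = ordinal_probabilities(x, d)
    p = p[p > 0]
    return -(p * np.log(p)).sum() / log(factorial(d))   # normalised to [0, 1]

rng = np.random.default_rng(0)
print("white noise:", permutation_entropy(rng.normal(size=5000)))          # close to 1
print("sine wave  :", permutation_entropy(np.sin(np.linspace(0, 60, 5000))))  # much lower
```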

  3. Stochastic modeling of hourly rainfall times series in Campania (Italy)

    Science.gov (United States)

    Giorgio, M.; Greco, R.

    2009-04-01

    Occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlation between recorded rainfall data and observed landslides and/or river discharges. Effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which can allow gaining larger lead times. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX, ARMAX (e.g. Salas [1992]). Such models gave the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool to implement an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of Campania Region civil

  4. Observer-Based Stabilization of Spacecraft Rendezvous with Variable Sampling and Sensor Nonlinearity

    Directory of Open Access Journals (Sweden)

    Zhuoshi Li

    2013-01-01

    Full Text Available This paper addresses the observer-based control problem of spacecraft rendezvous with nonuniform sampling period. The relative dynamic model is based on the classical Clohessy-Wiltshire equation, and sensor nonlinearity and sampling are considered together in a unified framework. The purpose of this paper is to perform an observer-based controller synthesis by using sampled and saturated output measurements, such that the resulting closed-loop system is exponentially stable. A time-dependent Lyapunov functional is developed which depends on time and the upper bound of the sampling period and also does not grow along the input update times. The controller design problem is solved in terms of the linear matrix inequality method, and the obtained results are less conservative than using the traditional Lyapunov functionals. Finally, a numerical simulation example is built to show the validity of the developed sampled-data control strategy.

  5. Quantitative evaluation of time-series GHG emissions by sector and region using consumption-based accounting

    International Nuclear Information System (INIS)

    Homma, Takashi; Akimoto, Keigo; Tomoda, Toshimasa

    2012-01-01

    This study estimates global time-series consumption-based GHG emissions by region from 1990 to 2005, including both CO2 and non-CO2 GHG emissions. Estimations are conducted for the whole economy and for two specific sectors: manufacturing and agriculture. Especially in the agricultural sector, it is important to include non-CO2 GHG emissions because these are the major emissions present. In most of the regions examined, the improvements in GHG intensities achieved in the manufacturing sector are larger than those in the agricultural sector. Compared with developing regions, most developed regions have consistently larger per-capita consumption-based GHG emissions over the whole economy, as well as higher production-based emissions. In the manufacturing sector, differences calculated by subtracting production-based emissions from consumption-based GHG emissions are determined by the regional economic level while, in the agricultural sector, they are dependent on regional production structures that are determined by international trade competitiveness. In the manufacturing sector, these differences are consistently and increasingly positive for the U.S., EU15 and Japan but negative for developing regions. In the agricultural sector, the differences calculated for the major agricultural importers like Japan and the EU15 are consistently positive while those of exporters like the U.S., Australia and New Zealand are consistently negative. - Highlights: ► We evaluate global time-series production-based and consumption-based GHG emissions. ► We focus on both CO2 and non-CO2 GHG emissions, broken down by region and by sector. ► Including non-CO2 GHG emissions is important in agricultural sector. ► In agriculture, differences in accountings are dependent on production structures. ► In manufacturing sector, differences in accountings are determined by economic level.

  6. A novel model for Time-Series Data Clustering Based on piecewise SVD and BIRCH for Stock Data Analysis on Hadoop Platform

    Directory of Open Access Journals (Sweden)

    Ibgtc Bowala

    2017-06-01

    Full Text Available With the rapid growth of financial markets, analysts are paying more attention to predictions. Stock data are time series data of huge volume. A feasible solution for handling the increasing amount of data is to use a cluster for parallel processing, and the Hadoop parallel computing platform is a typical representative. There are various statistical models for forecasting time series data, but accurate clusters are a pre-requirement. Clustering analysis for time series data is one of the main methods for mining time series data for many other analysis processes. However, general clustering algorithms cannot perform well on time series data because such data have a special structure and a high dimensionality, with highly correlated values and a high noise level. A novel model for time series clustering is presented using BIRCH, based on piecewise SVD, leading to a novel dimension reduction approach. Highly correlated features are handled using SVD with a novel approach for dimensionality reduction in order to keep the correlated behavior optimal, and BIRCH is then used for clustering. The algorithm is a novel model that can handle massive time series data. Finally, this new model is successfully applied to real stock time series data from Yahoo Finance with satisfactory results.
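
    A hedged sketch of the pipeline described above: reduce each long, noisy series to a short feature vector with a piecewise SVD-style step and then cluster the vectors with BIRCH. The segmentation scheme, the choice of keeping one leading singular value per segment, and the toy data are assumptions for illustration; the paper's exact reduction may differ.

```python
import numpy as np
from sklearn.cluster import Birch

def piecewise_svd_features(series, n_segments=8):
    # Cut the series into equal segments and represent each segment by the
    # leading singular value of a small two-row embedding of that segment.
    segs = np.array_split(np.asarray(series, dtype=float), n_segments)
    feats = []
    for s in segs:
        m = np.vstack([s[:-1], s[1:]])
        feats.append(np.linalg.svd(m, compute_uv=False)[0])
    return np.array(feats)

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 400)
group_a = [np.sin(t) + 0.3 * rng.normal(size=t.size) for _ in range(10)]   # oscillating series
group_b = [0.05 * t + 0.3 * rng.normal(size=t.size) for _ in range(10)]    # trending series
X = np.array([piecewise_svd_features(s) for s in group_a + group_b])

labels = Birch(n_clusters=2).fit_predict(X)
print(labels)
```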

  7. U-series dating using thermal ionisation mass spectrometry (TIMS)

    International Nuclear Information System (INIS)

    McCulloch, M.T.

    1999-01-01

    U-series dating is based on the decay of the two long-lived isotopes 238U (τ1/2 = 4.47 x 10^9 years) and 235U (τ1/2 = 0.7 x 10^9 years). 238U and its intermediate daughter isotopes 234U (τ1/2 = 245.4 ka) and 230Th (τ1/2 = 75.4 ka) have been the main focus of recently developed mass spectrometric techniques (Edwards et al., 1987), while the other, less frequently used decay chain is based on the decay of 235U to 231Pa (τ1/2 = 32.8 ka). Both the 238U and 235U decay chains terminate at the stable isotopes 206Pb and 207Pb, respectively. Thermal ionization mass spectrometry (TIMS) has a number of inherent advantages, mainly the ability to measure isotopic ratios at high precision on relatively small samples. In spite of these now obvious advantages, it was only in the mid-1980s, when Chen et al. (1986) made the first precise measurements of 234U and 232Th in seawater, followed by Edwards et al. (1987) who made combined 234U-230Th measurements, that the full potential of mass spectrometric methods was first realised. Several examples are given to illustrate various aspects of TIMS U-series

  8. `Indoor` series vending machines; `Indoor` series jido hanbaiki

    Energy Technology Data Exchange (ETDEWEB)

    Gensui, T.; Kida, A. [Fuji Electric Co. Ltd., Tokyo (Japan); Okumura, H. [Fuji Denki Reiki Co. Ltd., Tokyo (Japan)

    1996-07-10

    This paper introduces three series of vending machines that were designed to match the interior of an office building. The three series are vending machines for cups, paper packs, cans, and tobacco. Among the three series, `Interior` series has a symmetric design that was coated in a grain pattern. The inside of the `Interior` series is coated by laser satin to ensure a sense of superior quality and a refined style. The push-button used for product selection is hot-stamped on the plastic surface to ensure the hair-line luster. `Interior Phase II` series has a bay window design with a sense of superior quality and lightness. The inside of the `Interior Phase II` series is coated by laser satin. `Interior 21` series is integrated with the wall except the sales operation panel. The upper and lower dress panels can be detached and attached. The door lock is a wire-type structure with high operativity. The operation block is coated by titanium color. The dimensions of three series are standardized. 6 figs., 1 tab.

  9. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
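
    The multinomial-weighting idea can be sketched in a few lines of vectorised code (shown here in Python rather than R): draw multinomial counts, normalise them into weights, and obtain every bootstrap replication of a moment-based statistic through matrix products, illustrated for Pearson's correlation. The data and the number of replications are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 50, 10_000
x = rng.normal(size=n)
y = 0.6 * x + 0.8 * rng.normal(size=n)

# B x n matrix of bootstrap weights (multinomial counts divided by n)
W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n

mean_x = W @ x                               # all bootstrap means in one matmul
mean_y = W @ y
var_x = W @ x**2 - mean_x**2                 # weighted second moments
var_y = W @ y**2 - mean_y**2
cov_xy = W @ (x * y) - mean_x * mean_y
boot_corr = cov_xy / np.sqrt(var_x * var_y)  # B bootstrap correlation replications

print("bootstrap SE of the correlation:", boot_corr.std(ddof=1))
```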

  10. Dynamic Forecasting Conditional Probability of Bombing Attacks Based on Time-Series and Intervention Analysis.

    Science.gov (United States)

    Li, Shuying; Zhuang, Jun; Shen, Shifei

    2017-07-01

    In recent years, various types of terrorist attacks occurred, causing worldwide catastrophes. According to the Global Terrorism Database (GTD), among all attack tactics, bombing attacks happened most frequently, followed by armed assaults. In this article, a model for analyzing and forecasting the conditional probability of bombing attacks (CPBAs) based on time-series methods is developed. In addition, intervention analysis is used to analyze the sudden increase in the time-series process. The results show that the CPBA increased dramatically at the end of 2011. During that time, the CPBA increased by 16.0% in a two-month period to reach the peak value, but still stays 9.0% greater than the predicted level after the temporary effect gradually decays. By contrast, no significant fluctuation can be found in the conditional probability process of armed assault. It can be inferred that some social unrest, such as America's troop withdrawal from Afghanistan and Iraq, could have led to the increase of the CPBA in Afghanistan, Iraq, and Pakistan. The integrated time-series and intervention model is used to forecast the monthly CPBA in 2014 and through 2064. The average relative error compared with the real data in 2014 is 3.5%. The model is also applied to the total number of attacks recorded by the GTD between 2004 and 2014. © 2016 Society for Risk Analysis.
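
    As a sketch of time-series modelling with an intervention term, the code below fits an ARIMA-type model with an exogenous step regressor marking a suspected intervention date. The synthetic series, the chosen date and the model order are placeholders, not the article's actual GTD-based specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
dates = pd.date_range("2004-01-01", periods=132, freq="MS")
baseline = 0.3 + 0.02 * rng.standard_normal(132).cumsum()      # slowly drifting level
step = (dates >= "2011-11-01").astype(float)                   # hypothetical intervention
y = pd.Series(baseline + 0.16 * step + 0.03 * rng.standard_normal(132), index=dates)

model = SARIMAX(y, exog=step, order=(1, 0, 1)).fit(disp=False)
print(model.params)                                            # exog coefficient approximates the jump

forecast = model.get_forecast(steps=12, exog=np.ones((12, 1))) # keep the step "on" in the future
print(forecast.predicted_mean.head())
```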

  11. 4-H Science Inquiry Video Series

    Science.gov (United States)

    Green, Jeremy W.; Black, Lynette; Willis, Patrick

    2013-01-01

    Studies support science inquiry as a positive method and approach for 4-H professionals and volunteers to use for teaching science-based practices to youth. The development of a science inquiry video series has yielded positive results as it relates to youth development education and science. The video series highlights how to conduct science-rich…

  12. Spectral parameter power series representation for Hill's discriminant

    International Nuclear Information System (INIS)

    Khmelnytskaya, K.V.; Rosu, H.C.

    2010-01-01

    We establish a series representation of the Hill discriminant based on the spectral parameter power series (SPPS) recently introduced by Kravchenko. We also show the invariance of the Hill discriminant under a Darboux transformation and employing the Mathieu case the feasibility of this type of series for numerical calculations of the eigenspectrum.

  13. Frequency-based time-series gene expression recomposition using PRIISM

    Directory of Open Access Journals (Sweden)

    Rosa Bruce A

    2012-06-01

    Full Text Available Abstract Background Circadian rhythm pathways influence the expression patterns of as much as 31% of the Arabidopsis genome through complicated interaction pathways, and have been found to be significantly disrupted by biotic and abiotic stress treatments, complicating treatment-response gene discovery methods due to clock pattern mismatches in the fold change-based statistics. The PRIISM (Pattern Recomposition for the Isolation of Independent Signals in Microarray data) algorithm outlined in this paper is designed to separate pattern changes induced by different forces, including treatment-response pathways and circadian clock rhythm disruptions. Results Using the Fourier transform, high-resolution time-series microarray data is projected to the frequency domain. By identifying the clock frequency range from the core circadian clock genes, we separate the frequency spectrum into sections containing treatment-frequency (representing up- or down-regulation by an adaptive treatment response), clock-frequency (representing the circadian clock-disruption response) and noise-frequency components. Then, we project the components' spectra back to the expression domain to reconstruct isolated, independent gene expression patterns representing the effects of the different influences. By applying PRIISM to a high-resolution time-series Arabidopsis microarray dataset under a cold treatment, we systematically evaluated our method using maximum fold change and principal component analyses. The results of this study showed that the ranked treatment-frequency fold change results produce fewer false positives than the original methodology, and the 26-hour timepoint in our dataset was the best statistic for distinguishing the most known cold-response genes. In addition, six novel cold-response genes were discovered. PRIISM also provides gene expression data which represent only circadian clock influences, and may be useful for circadian clock studies.
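
    The frequency-domain separation can be illustrated with a small sketch: Fourier-transform an evenly sampled profile, keep only the coefficients in a chosen period band, and invert to recover the component attributed to that band. The 24-hour "clock" band, the hourly sampling grid and the toy signal are illustrative assumptions, not the PRIISM settings.

```python
import numpy as np

def band_component(signal, dt_hours, low_period_h, high_period_h):
    # Keep only Fourier coefficients whose period lies in [low_period_h, high_period_h].
    freqs = np.fft.rfftfreq(len(signal), d=dt_hours)          # cycles per hour
    spec = np.fft.rfft(signal)
    periods = 1.0 / np.maximum(freqs, 1e-12)
    keep = (freqs > 0) & (periods >= low_period_h) & (periods <= high_period_h)
    return np.fft.irfft(np.where(keep, spec, 0), n=len(signal))

t = np.arange(0, 96, 1.0)                                     # 4 days, hourly samples
clock = np.sin(2 * np.pi * t / 24)                            # circadian-like oscillation
response = 0.02 * t                                           # slow treatment-like trend
x = clock + response + 0.1 * np.random.default_rng(0).normal(size=t.size)

clock_part = band_component(x, dt_hours=1.0, low_period_h=20, high_period_h=28)
treatment_part = x - clock_part
print("correlation with true clock component:", np.corrcoef(clock_part, clock)[0, 1])
```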

  14. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    Science.gov (United States)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive for a large spectrum of gravity waves.
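
    A minimal sketch of the basic operation discussed above: fit a cubic spline on equidistant spline sampling points and inspect the residuals. The number of sampling points is the key tuning choice here; the repeating-spline refinement introduced in the paper is not reproduced, and the test signal is an illustrative assumption.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
data = (np.sin(2 * np.pi * t / 10)            # slowly varying "background"
        + 0.3 * np.sin(2 * np.pi * t / 0.7)   # superimposed fluctuation
        + 0.1 * rng.normal(size=t.size))

n_points = 8                                              # interior spline sampling points
knots = np.linspace(t[0], t[-1], n_points + 2)[1:-1]      # equidistant interior knots
background = LSQUnivariateSpline(t, data, knots, k=3)(t)  # cubic least-squares spline
residuals = data - background                             # candidate superimposed fluctuations
print("residual std:", residuals.std())
```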

  15. Time-series analysis in imatinib-resistant chronic myeloid leukemia K562-cells under different drug treatments.

    Science.gov (United States)

    Zhao, Yan-Hong; Zhang, Xue-Fang; Zhao, Yan-Qiu; Bai, Fan; Qin, Fan; Sun, Jing; Dong, Ying

    2017-08-01

    Chronic myeloid leukemia (CML) is characterized by the accumulation of active BCR-ABL protein. Imatinib is the first-line treatment of CML; however, many patients are resistant to this drug. In this study, we aimed to compare the differences in expression patterns and functions of time-series genes in imatinib-resistant CML cells under different drug treatments. GSE24946 was downloaded from the GEO database, which included 17 samples of K562-r cells with (n=12) or without drug administration (n=5). Three drug treatment groups were considered for this study: arsenic trioxide (ATO), AMN107, and ATO+AMN107. Each group had one sample at each time point (3, 12, 24, and 48 h). Time-series genes with a ratio of standard deviation/average (coefficient of variation) >0.15 were screened, and their expression patterns were revealed based on Short Time-series Expression Miner (STEM). Then, the functional enrichment analysis of time-series genes in each group was performed using DAVID, and the genes enriched in the top ten functional categories were extracted to detect their expression patterns. Different time-series genes were identified in the three groups, and most of them were enriched in the ribosome and oxidative phosphorylation pathways. Time-series genes in the three treatment groups had different expression patterns and functions. Time-series genes in the ATO group (e.g. CCNA2 and DAB2) were significantly associated with cell adhesion, those in the AMN107 group were related to cellular carbohydrate metabolic process, while those in the ATO+AMN107 group (e.g. AP2M1) were significantly related to cell proliferation and antigen processing. In imatinib-resistant CML cells, ATO could influence genes related to cell adhesion, AMN107 might affect genes involved in cellular carbohydrate metabolism, and the combination therapy might regulate genes involved in cell proliferation.

  16. HOMPRA Europe - A gridded precipitation data set from European homogenized time series

    Science.gov (United States)

    Rustemeier, Elke; Kapala, Alice; Meyer-Christoffer, Anja; Finger, Peter; Schneider, Udo; Venema, Victor; Ziese, Markus; Simmer, Clemens; Becker, Andreas

    2017-04-01

    Reliable monitoring data are essential for robust analyses of climate variability and, in particular, long-term trends. In this regard, a gridded, homogenized data set of monthly precipitation totals - HOMPRA Europe (HOMogenized PRecipitation Analysis of European in-situ data) - is presented. The data base consists of 5373 homogenized monthly time series, a carefully selected subset held by the Global Precipitation Climatology Centre (GPCC). The chosen series cover the period 1951-2005 and contain less than 10% missing values. Due to the large number of data, an automatic algorithm had to be developed for the homogenization of these precipitation series. In principle, the algorithm is based on three steps: * Selection of overlapping station networks in the same precipitation regime, based on rank correlation and Ward's method of minimal variance. Since the underlying time series should be as homogeneous as possible, the station selection is carried out by deterministic first derivation in order to reduce artificial influences. * The natural variability and trends were temporally removed by means of highly correlated neighboring time series to detect artificial break-points in the annual totals. This ensures that only artificial changes can be detected. The method is based on the algorithm of Caussinus and Mestre (2004). * In the last step, the detected breaks are corrected monthly by means of a multiple linear regression (Mestre, 2003). Due to the automation of the homogenization, validation of the algorithm is essential. Therefore, the method was tested on artificial data sets. Additionally, the sensitivity of the method was tested by varying the neighborhood series. If available in digitized form, the station history was also used to search for systematic errors in the jump detection. Finally, the actual HOMPRA Europe product is produced by interpolation of the homogenized series onto a 1° grid using one of the interpolation schemes used operationally at GPCC

  17. Sample preparation of sewage sludge and soil samples for the determination of polycyclic aromatic hydrocarbons based on one-pot microwave-assisted saponification and extraction

    Energy Technology Data Exchange (ETDEWEB)

    Pena, M.T.; Pensado, Luis; Casais, M.C.; Mejuto, M.C.; Cela, Rafael [Universidad de Santiago de Compostela, Dpto. Quimica Analitica, Nutricion y Bromatologia. Instituto de Investigacion y Analisis Alimentario, Santiago de Compostela (Spain)

    2007-04-15

    A microwave-assisted sample preparation (MASP) procedure was developed for the analysis of polycyclic aromatic hydrocarbons (PAHs) in sewage sludge and soil samples. The procedure involved the simultaneous microwave-assisted extraction of PAHs with n-hexane and the hydrolysis of samples with methanolic potassium hydroxide. Because of the complex nature of the samples, the extracts were submitted to further cleaning with silica and Florisil solid-phase extraction cartridges connected in series. Naphthalene, acenaphthene, fluorene, phenanthrene, anthracene, fluoranthene, pyrene, benz[a]anthracene, chrysene, benzo[e]pyrene, benzo[b]fluoranthene, benzo[k]fluoranthene, benzo[a]pyrene, dibenz[a,h]anthracene, benzo[g,h,i]perylene, and indeno[1,2,3-cd]pyrene, were considered in the study. Quantification limits obtained for all of these compounds (between 0.4 and 14.8 µg kg^-1 dry mass) were well below of the limits recommended in the USA and EU. Overall recovery values ranged from 60 to 100%, with most losses being due to evaporation in the solvent exchange stages of the procedure, although excellent extraction recoveries were obtained. Validation of the accuracy was carried out with BCR-088 (sewage sludge) and BCR-524 (contaminated industrial soil) reference materials. (orig.)

  18. Sample preparation of sewage sludge and soil samples for the determination of polycyclic aromatic hydrocarbons based on one-pot microwave-assisted saponification and extraction.

    Science.gov (United States)

    Pena, M Teresa; Pensado, Luis; Casais, M Carmen; Mejuto, M Carmen; Cela, Rafael

    2007-04-01

    A microwave-assisted sample preparation (MASP) procedure was developed for the analysis of polycyclic aromatic hydrocarbons (PAHs) in sewage sludge and soil samples. The procedure involved the simultaneous microwave-assisted extraction of PAHs with n-hexane and the hydrolysis of samples with methanolic potassium hydroxide. Because of the complex nature of the samples, the extracts were submitted to further cleaning with silica and Florisil solid-phase extraction cartridges connected in series. Naphthalene, acenaphthene, fluorene, phenanthrene, anthracene, fluoranthene, pyrene, benz[a]anthracene, chrysene, benzo[e]pyrene, benzo[b]fluoranthene, benzo[k]fluoranthene, benzo[a]pyrene, dibenz[a,h]anthracene, benzo[g,h,i]perylene, and indeno[1,2,3-cd]pyrene, were considered in the study. Quantification limits obtained for all of these compounds (between 0.4 and 14.8 microg kg(-1) dry mass) were well below of the limits recommended in the USA and EU. Overall recovery values ranged from 60 to 100%, with most losses being due to evaporation in the solvent exchange stages of the procedure, although excellent extraction recoveries were obtained. Validation of the accuracy was carried out with BCR-088 (sewage sludge) and BCR-524 (contaminated industrial soil) reference materials.

  19. Reactivity-worth estimates of the OSMOSE samples in the MINERVE reactor R1-UO2 configuration.

    Energy Technology Data Exchange (ETDEWEB)

    Klann, R. T.; Perret, G.; Nuclear Engineering Division

    2007-10-03

    An initial series of calculations of the reactivity-worth of the OSMOSE samples in the MINERVE reactor with the R1-UO2 core configuration were completed. The reactor model was generated using the REBUS code developed at Argonne National Laboratory. The calculations are based on the specifications for fabrication, so they are considered preliminary until sampling and analysis have been completed on the fabricated samples. The estimates indicate a range of reactivity effect from -22 pcm to +25 pcm compared to the natural U sample.

  20. Forecasting with periodic autoregressive time series models

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1999-01-01

    textabstractThis paper is concerned with forecasting univariate seasonal time series data using periodic autoregressive models. We show how one should account for unit roots and deterministic terms when generating out-of-sample forecasts. We illustrate the models for various quarterly UK consumption

  1. Simple nuclear norm based algorithms for imputing missing data and forecasting in time series

    OpenAIRE

    Butcher, Holly Louise; Gillard, Jonathan William

    2017-01-01

    There has been much recent progress on the use of the nuclear norm for the so-called matrix completion problem (the problem of imputing missing values of a matrix). In this paper we investigate the use of the nuclear norm for modelling time series, with particular attention to imputing missing data and forecasting. We introduce a simple alternating projections type algorithm based on the nuclear norm for these tasks, and consider a number of practical examples.
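
    A sketch of the general idea, under the assumption that the series is arranged in a Hankel-style trajectory matrix: alternately soft-threshold the singular values (the proximal step associated with the nuclear norm) and restore the observed entries, then average anti-diagonals back into a series. The window size, threshold and iteration count are illustrative; the authors' alternating-projections algorithm may differ in detail.

```python
import numpy as np

def soft_impute_series(x, window=20, tau=1.0, n_iter=200):
    x = np.asarray(x, dtype=float)
    n = len(x)
    rows = n - window + 1
    idx = np.arange(rows)[:, None] + np.arange(window)      # Hankel index pattern
    M = x[idx]
    mask = ~np.isnan(M)
    filled = np.where(mask, M, np.nanmean(x))               # crude initial fill
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (U * np.maximum(s - tau, 0)) @ Vt        # soft-threshold singular values
        filled = np.where(mask, M, low_rank)                # keep observed entries fixed
    out = np.zeros(n)
    counts = np.zeros(n)
    np.add.at(out, idx, filled)                             # average anti-diagonals
    np.add.at(counts, idx, 1)
    return out / counts

x = np.sin(np.linspace(0, 12, 300))
x[[40, 41, 42, 150, 220]] = np.nan                          # simulate missing values
print(np.round(soft_impute_series(x)[[40, 41, 42, 150, 220]], 3))
```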

  2. Principle and realization of segmenting contour series algorithm in reverse engineering based on X-ray computerized tomography

    International Nuclear Information System (INIS)

    Wang Yanfang; Liu Li; Yan Yonglian; Shan Baoci; Tang Xiaowei

    2007-01-01

    A new algorithm for segmenting contour series of images is presented, which can achieve three-dimensional reconstruction with parametric recognition in Reverse Engineering based on X-ray CT. First, in order to obtain the nesting relationship between contours, a ray cast at a certain angle is used. Second, to locate contours within one slice, another approach is presented that generates the contour tree by scanning the relevant vector only once. Last, a judging algorithm is put forward to accomplish contour matching between slices by adopting qualitative and quantitative properties. The example shows that this algorithm can segment contour series of CT parts rapidly and precisely. (authors)

  3. Characterization of Volatiles Loss from Soil Samples at Lunar Environments

    Science.gov (United States)

    Kleinhenz, Julie; Smith, Jim; Roush, Ted; Colaprete, Anthony; Zacny, Kris; Paulsen, Gale; Wang, Alex; Paz, Aaron

    2017-01-01

    Resource Prospector Integrated Thermal Vacuum Test Program: a series of ground-based dirty thermal vacuum tests is being conducted to better understand subsurface sampling operations for RP, covering volatiles loss during sampling operations, hardware performance, sample removal and transfer, concept of operations, and instrumentation. Five test campaigns over five years have been conducted with RP hardware, with advancing hardware designs and additional RP subsystems. Volatiles sampling (4 years): using flight-forward regolith sampling hardware, empirically determine volatile retention at lunar-relevant conditions, use the data to improve theoretical predictions, determine the driving variables for retention, and bound the water loss potential to define measurement uncertainties. The main goal of this talk is to introduce our approach to characterizing volatiles loss for RP: introduce the facility and its capabilities, give an overview of the RP hardware used in integrated testing (most recent iteration), summarize the test variables used thus far, and review a sample of the results.

  4. A support vector density-based importance sampling for reliability assessment

    International Nuclear Information System (INIS)

    Dai, Hongzhe; Zhang, Hao; Wang, Wei

    2012-01-01

    An importance sampling method based on the adaptive Markov chain simulation and support vector density estimation is developed in this paper for efficient structural reliability assessment. The methodology involves the generation of samples that can adaptively populate the important region by the adaptive Metropolis algorithm, and the construction of importance sampling density by support vector density. The use of the adaptive Metropolis algorithm may effectively improve the convergence and stability of the classical Markov chain simulation. The support vector density can approximate the sampling density with fewer samples in comparison to the conventional kernel density estimation. The proposed importance sampling method can effectively reduce the number of structural analysis required for achieving a given accuracy. Examples involving both numerical and practical structural problems are given to illustrate the application and efficiency of the proposed methodology.

  5. Microprocessor Card for Cuban Series polarimeters Laserpol

    International Nuclear Information System (INIS)

    Arista Romeu, E.; Mora Mazorra, W.

    2012-01-01

    We present the design of a card based on an 8-bit microprocessor that adds new software components, allowing new services to be delivered and expanding the possibilities of using the LASERPOL series polarimeters in other applications, such as polarimetric detection. Given the limitations of the original card, it was necessary to introduce a series of changes to address new user requirements and expand the possible applications of the instruments. The capacity of the EPROM and RAM memory was expanded, the memory-map decoder circuit was implemented using a programmable integrated circuit, and a real-time clock with non-volatile RAM was introduced. These features are exploited to add new functions such as calibration of the polarimeter by the user from a sample pattern or a calibration pattern used as a reference, and the incorporation of time and date into the measurement reports required by industry for quality-control processes. The card, together with the rest of the components, is pin-to-pin compatible with the LASERPOL 101M, 3M and LP4 series polarimeters, which facilitates its incorporation into polarimeters operating in industry as an 'in situ' replacement for cards from previous models, and allows the statistical processing, precision and accuracy of the instruments to be extended. Measurements in industry are improved, resulting in significant savings through the elimination of losses in production and raw materials. The reading response speed of the LASERPOL series polarimeters and polarimetric detectors is also improved. (Author)

  6. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    International Nuclear Information System (INIS)

    Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step-prediction of chaotic time series, it estimates the proper parameters of phase space reconstruction and optimizes the structure of recurrent neural networks by co-evolutionary strategy. The searching space was separated into two subspaces and the individuals are trained in a parallel computational procedure. It can dynamically combine the embedding method with the capability of recurrent neural network to incorporate past experience due to internal recurrence. The effectiveness of CERNN is evaluated by using three benchmark chaotic time series data sets: the Lorenz series, Mackey-Glass series and real-world sun spot series. The simulation results show that CERNN improves the performances of multi-step-prediction of chaotic time series

  7. Analysis of area-wide management of insect pests based on sampling

    Science.gov (United States)

    David W. Onstad; Mark S. Sisterson

    2011-01-01

    The control of invasive species greatly depends on area-wide pest management (AWPM) in heterogeneous landscapes. Decisions about when and where to treat a population with pesticide are based on sampling pest abundance. One of the challenges of AWPM is sampling large areas with limited funds to cover the cost of sampling. Additionally, AWPM programs are often confronted...

  8. Climate Prediction Center (CPC) Global Temperature Time Series

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The global temperature time series provides time series charts using station based observations of daily temperature. These charts provide information about the...

  9. Community-based survey versus sentinel site sampling in ...

    African Journals Online (AJOL)

    rural children. Implications for nutritional surveillance and the development of nutritional programmes. G. c. Solarsh, D. M. Sanders, C. A. Gibson, E. Gouws. A study of the anthropometric status of under-5-year-olds was conducted in the Nqutu district of Kwazulu by means of a representative community-based sample and.

  10. Periodic fluctuations in correlation-based connectivity density time series: Application to wind speed-monitoring network in Switzerland

    Science.gov (United States)

    Laib, Mohamed; Telesca, Luciano; Kanevski, Mikhail

    2018-02-01

    In this paper, we study the periodic fluctuations of the connectivity density time series of a wind speed-monitoring network in Switzerland. Using the correlogram-based robust periodogram, annual periodic oscillations were found in the correlation-based network. The intensity of these annual periodic oscillations is larger for lower correlation thresholds and smaller for higher ones. The annual periodicity in the connectivity density seems reasonably consistent with the seasonal meteo-climatic cycle.

  11. Sampling in interview-based qualitative research: A theoretical and practical guide

    OpenAIRE

    Robinson, Oliver

    2014-01-01

    Sampling is central to the practice of qualitative methods, but compared with data collection and analysis, its processes are discussed relatively little. A four-point approach to sampling in qualitative interview-based research is presented and critically discussed in this article, which integrates theory and process for the following: (1) Defining a sample universe, by way of specifying inclusion and exclusion criteria for potential participation; (2) Deciding upon a sample size, through th...

  12. Convergence and Summation of Series of Products of Arithmetic and Geometric Series

    Institute of Scientific and Technical Information of China (English)

    石会萍

    2012-01-01

    In series theory, it is generally difficult to determine the convergence or divergence of a series, and even when convergence can be established, finding the sum can be very difficult. Based on the characteristics of arithmetic and geometric series, this paper gives a method for determining the convergence and the sum of a class of infinite series whose terms are products of the terms of an arithmetic series and a geometric series.
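
    For orientation, a standard closed form for this class of series (an arithmetico-geometric series with first term a, common difference d and ratio r) is given below; it is a well-known result stated for reference, not quoted from the paper.

```latex
\[
  \sum_{n=0}^{\infty} (a + n d)\, r^{n}
  \;=\; \frac{a}{1-r} \;+\; \frac{d\, r}{(1-r)^{2}},
  \qquad |r| < 1 .
\]
```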

  13. TIME SERIES CHARACTERISTIC ANALYSIS OF RAINFALL, LAND USE AND FLOOD DISCHARGE BASED ON ARIMA BOX-JENKINS MODEL

    Directory of Open Access Journals (Sweden)

    Abror Abror

    2014-01-01

    Full Text Available Indonesia, located in the tropics, has a wet season and a dry season. In the last few years, however, river discharge in the dry season has been very low while, in contrast, the frequency of floods in the wet season has increased, with sharp peaks and increasingly high water elevations. The increased flood discharge may be due to changes in land use or changes in rainfall characteristics, and both factors need to be clarified. Therefore, research is needed to quantitatively analyze rainfall characteristics, land use and flood discharge in several watershed areas (DAS) from time series data. The research was conducted in DAS Gintung in Parakankidang, DAS Gung in Danawarih, DAS Rambut in Cipero, DAS Kemiri in Sidapurna and DAS Comal in Nambo, located in Tegal Regency and Pemalang Regency in Central Java Province. This research activity consisted of three main steps: input, DAS system and output. Input covers DAS determination and selection and the collection of secondary data. The DAS system step is the initial secondary data processing, consisting of rainfall analysis, HSS GAMA I parameters, land type analysis and DAS land use. Output is the final processing step, consisting of calculation of Tadashi Tanimoto and USSCS effective rainfall, flood discharge, ARIMA analysis, result analysis and conclusions. The analytical calculation of the ARIMA Box-Jenkins time series used the software Number Cruncher Statistical Systems and Power Analysis Sample Size (NCSS-PASS, version 2000), which yields time series characteristics in the form of time series pattern, mean square error (MSE), root mean square (RMS), autocorrelation of residuals and trend. The results of this research indicate that composite CN and flood discharge are proportional, meaning that when the composite CN trend increases the flood discharge trend also increases, and vice versa. Meanwhile, a decrease in the rainfall trend is not always followed by a decrease in the flood discharge trend. The main cause of the flood discharge characteristic is the DAS management characteristic, not change in

  14. The Hubble series: convergence properties and redshift variables

    International Nuclear Information System (INIS)

    Cattoen, Celine; Visser, Matt

    2007-01-01

    In cosmography, cosmokinetics and cosmology, it is quite common to encounter physical quantities expanded as a Taylor series in the cosmological redshift z. Perhaps the most well-known exemplar of this phenomenon is the Hubble relation between distance and redshift. However, we now have considerable high-z data available; for instance, we have supernova data at least back to redshift z ∼ 1.75. This opens up the theoretical question as to whether or not the Hubble series (or more generally any series expansion based on the z-redshift) actually converges for large redshift. Based on a combination of mathematical and physical reasoning, we argue that the radius of convergence of any series expansion in z is less than or equal to 1, and that z-based expansions must break down for z > 1, corresponding to a universe less than half of its current size. Furthermore, we shall argue on theoretical grounds for the utility of an improved parametrization y = z/(1 + z). In terms of the y-redshift, we again argue that the radius of convergence of any series expansion in y is less than or equal to 1, so that y-based expansions are likely to be good all the way back to the big bang (y = 1), but that y-based expansions must break down for y < -1, now corresponding to a universe more than twice its current size
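
    For clarity, the improved expansion variable mentioned above and its inverse are simply the algebraic relations implied by the text:

```latex
\[
  y \;=\; \frac{z}{1+z},
  \qquad
  z \;=\; \frac{y}{1-y},
  \qquad
  z \to \infty \;\Longrightarrow\; y \to 1 .
\]
```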

  15. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.

  16. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

    Full Text Available When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, the classification methods that rely on the assumption of normally distributed data are not as successful or accurate. It is hard to detect normality violations in small samples. The segmentation process produces segments that vary highly in size; samples can be very big or very small. This paper investigates whether the complexity within the segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, statistics and probability value equations of non-parametric two-sample Kolmogorov-Smirnov test and parametric Student’s t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees and compared to two commonly used object-based classifiers—k-Nearest Neighbor (k-NN and Support Vector Machine (SVM. Both proposed classifiers showed a slight improvement in the overall classification accuracies and produced more accurate classification maps when compared to the ground truth image.
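
    A hedged sketch of in-segment multiple sampling with the Kolmogorov-Smirnov measure: draw several random pixel subsets from a segment and from a class's reference pixels, compute the two-sample KS statistic for each draw, and aggregate. The sample sizes, number of draws, median aggregation and synthetic reflectance values are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.stats import ks_2samp

def multi_sample_ks(segment_pixels, class_pixels, n_draws=25, sample_size=50, seed=0):
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_draws):
        a = rng.choice(segment_pixels, size=min(sample_size, len(segment_pixels)), replace=False)
        b = rng.choice(class_pixels, size=min(sample_size, len(class_pixels)), replace=False)
        stats.append(ks_2samp(a, b).statistic)
    return np.median(stats)          # smaller = more similar distributions

rng = np.random.default_rng(1)
segment = rng.normal(0.42, 0.05, 800)        # pixels of one segment in a single band
grass_ref = rng.normal(0.40, 0.06, 5000)     # reference pixels for the "grass" class
road_ref = rng.normal(0.20, 0.04, 5000)      # reference pixels for the "road" class
print("vs grass:", multi_sample_ks(segment, grass_ref))
print("vs road :", multi_sample_ks(segment, road_ref))
```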

  17. Linearization and Control of Series-Series Compensated Inductive Power Transfer System Based on Extended Describing Function Concept

    Directory of Open Access Journals (Sweden)

    Kunwar Aditya

    2016-11-01

    Full Text Available The extended describing function (EDF is a well-known method for modelling resonant converters due to its high accuracy. However, it requires complex mathematical formulation effort. This paper presents a simplified non-linear mathematical model of series-series (SS compensated inductive power transfer (IPT system, considering zero-voltage switching in the inverter. This simplified mathematical model permits the user to derive the small-signal model using the EDF method, with less computational effort, while maintaining the accuracy of an actual physical model. The derived model has been verified using a frequency sweep method in PLECS. The small-signal model has been used to design the voltage loop controller for a SS compensated IPT system. The designed controller was implemented on a 3.6 kW experimental setup, to test its robustness.

  18. Point and Fixed Plot Sampling Inventory Estimates at the Savannah River Site, South Carolina.

    Energy Technology Data Exchange (ETDEWEB)

    Parresol, Bernard, R.

    2004-02-01

    This report provides calculation of systematic point sampling volume estimates for trees greater than or equal to 5 inches diameter breast height (dbh) and fixed radius plot volume estimates for trees < 5 inches dbh at the Savannah River Site (SRS), Aiken County, South Carolina. The inventory of 622 plots was started in March 1999 and completed in January 2002 (Figure 1). Estimates are given in cubic foot volume. The analyses are presented in a series of Tables and Figures. In addition, a preliminary analysis of fuel levels on the SRS is given, based on depth measurements of the duff and litter layers on the 622 inventory plots plus line transect samples of down coarse woody material. Potential standing live fuels are also included. The fuels analyses are presented in a series of tables.

  19. Research on PM2.5 time series characteristics based on data mining technology

    Science.gov (United States)

    Zhao, Lifang; Jia, Jin

    2018-02-01

    With the development of data mining technology and the establishment of environmental air quality databases, it is necessary to discover potential correlations and rules by mining the massive environmental air quality information and analyzing the air pollution process. In this paper, we present a sequential pattern mining method based on air quality data and pattern association technology to analyze the PM2.5 time series characteristics. Utilizing real-time monitoring data of urban air quality in China, the time series rules and variation properties of PM2.5 under different pollution levels are extracted and analyzed. The analysis results show that the time sequence features of the PM2.5 concentration are directly affected by changes in the pollution degree. The longest time that PM2.5 remained stable is about 24 hours. As the pollution degree becomes more severe, the instability time and step ascending time gradually change from 12-24 hours to 3 hours. The presented method is helpful for controlling and forecasting air quality while reducing measurement costs, which is of great significance for government regulation and public prevention of air pollution.
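    The extraction of stable-duration features can be illustrated with a small sketch: hourly PM2.5 values are discretized into pollution levels and the length of each constant-level run is recorded. The breakpoints and function names below are assumptions for illustration, not the paper's mining algorithm.

```python
import numpy as np

# Illustrative PM2.5 level breakpoints in ug/m3 (assumed grading, not the paper's):
BREAKPOINTS = [35, 75, 115, 150, 250]          # concentrations -> levels 0..5

def discretize(pm25_hourly):
    return np.digitize(pm25_hourly, BREAKPOINTS)

def stable_runs(levels):
    """Return (level, duration_in_hours) for each maximal run of a constant level."""
    runs, start = [], 0
    for i in range(1, len(levels) + 1):
        if i == len(levels) or levels[i] != levels[start]:
            runs.append((int(levels[start]), i - start))
            start = i
    return runs

pm25 = np.array([30, 40, 80, 85, 90, 160, 155, 70, 20])   # toy hourly values
print(stable_runs(discretize(pm25)))
```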

  20. Preparation of two series of materials with perovskite structure and investigation of their physical properties

    International Nuclear Information System (INIS)

    Mohamed, H.S.R.

    2010-01-01

    Results on the structural, electric transport and magnetic properties of a series of (Al/In) doped Ca-series and (Al/In) doped Sr-series samples are presented and discussed. The polycrystalline ceramic samples were prepared by the solid state reaction technique. Elemental analysis showed a reasonable agreement between nominal and actual sample compositions. The grain size (G.S.) of the Ca doped series increased with In content (G.S. (x = 0.2) = 79.5 nm and G.S. (x = 0.8) = 95.4 nm). For the Sr-series it has values in the range of 40-42 nm. Room temperature structural analysis using the Rietveld refinement technique showed no structural transitions with the variation of the Al/In ratio. The doped Ca-series had an orthorhombic symmetry with space group Pnma. The Sr-doped series is rhombohedral with space group (R3C). In both series the Mn-O bond distance was found to increase whereas the mean Mn-O-Mn bond angle decreased with x. This was ascribed to the size mismatch between the divalent A-site ions and the B-site as a result of the introduction of the large In3+ ion. The tolerance factor varies from 0.918-0.933 for the Ca-series and from 0.932-0.948 for the Sr-series as x varies from 0.0 to 1.0. The temperature dependence of the magnetic susceptibility and electric resistivity of the Ca-doped series showed distinct ferromagnetic metallic (FMM) to paramagnetic insulator (PMI) transitions near the Curie point (TC), which ranges from TC ∼ 210-100 K for x = 0.0 to 1.0, respectively. The temperature dependence of the resistivity for the Sr-doped series showed distinct FMM to PMI transitions for samples with x = 0.0, 0.2 and 1.0, whereas samples with x = 0.4, 0.6 and 0.8 showed FMM to PMM transitions. The transition temperature variation is not linear and lies within a narrow temperature range Tp ∼ 344-367 K. The results of the Sr-series showed that the size mismatch between the A- and B-sites is the major factor that controls the magnetic and electric properties.

  1. Fetal organ dosimetry for the Techa River and Ozyorsk offspring cohorts. Pt. 1. A Urals-based series of fetal computational phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Maynard, Matthew R.; Bolch, Wesley E. [University of Florida, Advanced Laboratory for Radiation Dosimetry Studies (ALRADS), J. Crayton Pruitt Family Department of Biomedical Engineering, Gainesville, FL (United States); Shagina, Natalia B.; Tolstykh, Evgenia I.; Degteva, Marina O. [Urals Research Center for Radiation Medicine, Chelyabinsk (Russian Federation); Fell, Tim P. [Public Health England, Centre for Radiation, Chemical and Environmental Health, Didcot, Chilton, Oxon (United Kingdom)

    2015-03-15

    The European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) project aims to improve understanding of cancer risks associated with chronic in utero radiation exposure. A comprehensive series of hybrid computational fetal phantoms was previously developed at the University of Florida in order to provide the SOLO project with the capability of computationally simulating and quantifying radiation exposures to individual fetal bones and soft tissue organs. To improve harmonization between the SOLO fetal biokinetic models and the computational phantoms, a subset of those phantoms was systematically modified to create a novel series of phantoms matching anatomical data representing Russian fetal biometry in the Southern Urals. Using previously established modeling techniques, eight computational Urals-based phantoms aged 8, 12, 18, 22, 26, 30, 34, and 38 weeks post-conception were constructed to match appropriate age-dependent femur lengths, biparietal diameters, individual bone masses and whole-body masses. Bone and soft tissue organ mass differences between the common ages of the subset of UF phantom series and the Urals-based phantom series illustrated the need for improved understanding of fetal bone densities as a critical parameter of computational phantom development. In anticipation for SOLO radiation dosimetry studies involving the developing fetus and pregnant female, the completed phantom series was successfully converted to a cuboidal voxel format easily interpreted by radiation transport software. (orig.)

  2. Preliminary study on the relationship between trends of tree-ring δ 13C series and site conditions

    International Nuclear Information System (INIS)

    Chen Tuo; Qin Dahe; Liu Xiaohong; Ren Jiawen

    2001-01-01

    The long-term trends of tree-ring δ13C series, taken respectively from Qilian of Qinghai and from Zhaosu and Aleitai of Xinjiang, were compared. The results showed that a similar trend existed between the Qilian series and the Zhaosu series, while there was a significant difference between them and the Aleitai series. The authors' analysis indicated that the site difference in the trends of tree-ring δ13C series was mainly associated with 'canopy effects' of tree growth. It is suggested that tree samples should be selected from sparse forests, or that the sampled tree foliage should be high above the whole canopy, when the history of δ13C of atmospheric CO2 is reconstructed from tree-ring series.

  3. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  4. A New Methodology Based on Imbalanced Classification for Predicting Outliers in Electricity Demand Time Series

    Directory of Open Access Journals (Sweden)

    Francisco Javier Duque-Pintor

    2016-09-01

    Full Text Available The occurrence of outliers in real-world phenomena is quite usual. If these anomalous data are not properly treated, unreliable models can be generated. Many approaches in the literature are focused on a posteriori detection of outliers. However, a new methodology to a priori predict the occurrence of such data is proposed here. Thus, the main goal of this work is to predict the occurrence of outliers in time series, by using, for the first time, imbalanced classification techniques. In this sense, the problem of forecasting outlying data has been transformed into a binary classification problem, in which the positive class represents the occurrence of outliers. Given that the number of outliers is much lower than the number of common values, the resultant classification problem is imbalanced. To create training and test sets, robust statistical methods have been used to detect outliers in both sets. Once the outliers have been detected, the instances of the dataset are labeled accordingly. Namely, if any of the samples composing the next instance are detected as an outlier, the label is set to one. As a study case, the methodology has been tested on electricity demand time series in the Spanish electricity market, in which most of the outliers were properly forecast.
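    A minimal sketch of how the labeling step might look, assuming hourly demand values in a 1-D array: outliers are flagged with a robust median/MAD rule (one possible robust detector, not necessarily the one used in the paper), and an instance is labeled positive if any value in its forecast window is an outlier.

```python
import numpy as np

def robust_outlier_mask(series, z_threshold=3.5):
    """Flag outliers with a median/MAD rule (one possible robust detector)."""
    median = np.median(series)
    mad = max(np.median(np.abs(series - median)), 1e-12)
    robust_z = 0.6745 * (series - median) / mad
    return np.abs(robust_z) > z_threshold

def build_classification_set(series, window=24, horizon=24):
    """Each instance holds the previous `window` values; the label is 1 (minority,
    positive class) if any of the next `horizon` values is an outlier."""
    is_outlier = robust_outlier_mask(series)
    X, y = [], []
    for t in range(window, len(series) - horizon + 1):
        X.append(series[t - window:t])
        y.append(int(is_outlier[t:t + horizon].any()))
    return np.array(X), np.array(y)
```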

  5. AFSC/ABL: Ugashik sockeye salmon scale time series

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A time series of scale samples (1956–2002) collected from adult sockeye salmon returning to the Ugashik River was retrieved from the Alaska Department of Fish and...

  6. AFSC/ABL: Naknek sockeye salmon scale time series

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A time series of scale samples (1956–2002) collected from adult sockeye salmon returning to the Naknek River was retrieved from the Alaska Department of Fish and Game....

  7. DNA-based molecular fingerprinting of eukaryotic protists and cyanobacteria contributing to sinking particle flux at the Bermuda Atlantic time-series study

    Science.gov (United States)

    Amacher, Jessica; Neuer, Susanne; Lomas, Michael

    2013-09-01

    We used denaturing gradient gel electrophoresis (DGGE) to examine the protist and cyanobacterial communities in the euphotic zone (0-120 m) and in corresponding 150 m particle interceptor traps at the Bermuda Atlantic Time-series Study (BATS) in a two-year monthly time-series from May 2008 to April 2010. Dinoflagellates were the most commonly detected taxa in both water column and trap samples throughout the time series. Diatom sequences were found only eight times in the water column, and only four times in trap material. Small-sized eukaryotic taxa, including the prasinophyte genera Ostreococcus, Micromonas, and Bathycoccus, were present in trap samples, as were the cyanobacteria Prochlorococcus and Synechococcus. Synechococcus was usually overrepresented in trap material, whereas Prochlorococcus was underrepresented compared to the water column. Both seasonal and temporal variability affected patterns of ribosomal DNA found in sediment traps. The two years of this study were quite different hydrographically, with higher storm activity and the passing of a cyclonic eddy causing unusually deep mixing in winter 2010. This was reflected in the DGGE fingerprints of the water column, which showed greater phylotype richness of eukaryotes and a lesser richness of cyanobacteria in winter of 2010 compared with the winter of 2009. Increases in eukaryotic richness could be traced to increased diversity of prasinophytes and prymnesiophytes. The decrease in cyanobacterial richness was in turn reflected in the trap composition, but the increase in eukaryotes was not, indicating a disproportionate contribution of certain taxa to sinking particle flux.

  8. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    Science.gov (United States)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    Urban catchments are typically characterised by a more flashy nature of the hydrological response compared to natural catchments. Predicting flow changes associated with urbanisation is not straightforward, as they are influenced by interactions between impervious cover, basin size, drainage connectivity and stormwater management infrastructure. In this study, we present an alternative approach to statistical analysis of hydrological response variability and basin flashiness, based on the distribution of inter-amount times. We analyse inter-amount time distributions of high-resolution streamflow time series for 17 (semi-)urbanised basins in North Carolina, USA, ranging from 13 to 238 km2 in size. We show that in the inter-amount-time framework, sampling frequency is tuned to the local variability of the flow pattern, resulting in a different representation and weighting of high and low flow periods in the statistical distribution. This leads to important differences in the way the distribution quantiles, mean, coefficient of variation and skewness vary across scales and results in lower mean intermittency and improved scaling. Moreover, we show that inter-amount-time distributions can be used to detect regulation effects on flow patterns, identify critical sampling scales and characterise flashiness of hydrological response. The possibility to use both the classical approach and the inter-amount-time framework to identify minimum observable scales and analyse flow data opens up interesting areas for future research.
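    The inter-amount-time idea can be sketched as follows: instead of sampling flow at fixed time steps, one records the time needed to accumulate each successive fixed amount of discharge and studies the distribution of those durations. The function below is an illustrative reconstruction under simple assumptions (regular input sampling, accumulation resolved only to the time step), not the authors' code.

```python
import numpy as np

def inter_amount_times(flow, dt_hours, amount):
    """Durations (hours) needed to accumulate successive fixed increments of flow.
    `flow` is a regularly sampled discharge series, `dt_hours` its time step and
    `amount` the accumulation increment, all in consistent units."""
    cumulative = np.cumsum(flow) * dt_hours
    n_increments = int(cumulative[-1] // amount)
    targets = amount * np.arange(1, n_increments + 1)
    crossing_idx = np.searchsorted(cumulative, targets)    # first step reaching each target
    crossing_times = (crossing_idx + 1) * dt_hours
    return np.diff(np.concatenate(([0.0], crossing_times)))

# Flashy regimes give strongly skewed inter-amount time distributions:
flow = np.concatenate([np.full(100, 0.1), np.full(5, 20.0), np.full(100, 0.1)])
iat = inter_amount_times(flow, dt_hours=1.0, amount=10.0)
print(iat.mean(), np.percentile(iat, 90))
```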

  9. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya

    2012-05-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination. © 2012 IEEE.

  10. Study of the relationship between chemical structure and antimicrobial activity in a series of hydrazine-based coordination compounds.

    Science.gov (United States)

    Dobrova, B N; Dimoglo, A S; Chumakov, Y M

    2000-08-01

    The dependence of antimicrobial activity on the structure of compounds is studied in a series of compounds based on hydrazine coordinated with ions of Cu(II), Ni(II) and Pd(II). The study has been carried out by means of the original electron-topological method developed earlier. A molecular fragment has been found that is only characteristic of biologically active compounds. Its spatial and electron parameters have been used for the quantitative assessment of the activity in view. The results obtained can be used for the antimicrobial activity prediction in a series of compounds with similar structures.

  11. Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems

    KAUST Repository

    Xinyu Tang,

    2010-01-25

    Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second. © 2010 The Author(s).

  12. A Study of Assimilation Bias in Name-Based Sampling of Migrants

    Directory of Open Access Journals (Sweden)

    Schnell Rainer

    2014-06-01

    Full Text Available The use of personal names for screening is an increasingly popular sampling technique for migrant populations. Although this is often an effective sampling procedure, very little is known about the properties of this method. Based on a large German survey, this article compares characteristics of respondents whose names have been correctly classified as belonging to a migrant population with respondents who are migrants and whose names have not been classified as belonging to a migrant population. Although significant differences were found for some variables even with some large effect sizes, the overall bias introduced by name-based sampling (NBS) is small as long as procedures with small false-negative rates are employed.

  13. Geometric noise reduction for multivariate time series.

    Science.gov (United States)

    Mera, M Eugenia; Morán, Manuel

    2006-03-01

    We propose an algorithm for the reduction of observational noise in chaotic multivariate time series. The algorithm is based on a maximum likelihood criterion, and its goal is to reduce the mean distance of the points of the cleaned time series to the attractor. We give evidence of the convergence of the empirical measure associated with the cleaned time series to the underlying invariant measure, implying the possibility to predict the long run behavior of the true dynamics.

  14. BRITS: Bidirectional Recurrent Imputation for Time Series

    OpenAIRE

    Cao, Wei; Wang, Dong; Li, Jian; Zhou, Hao; Li, Lei; Li, Yitan

    2018-01-01

    Time series are widely used as signals in many classification/regression tasks. It is ubiquitous that time series contain many missing values. Given multiple correlated time series data, how to fill in missing values and to predict their class labels? Existing imputation methods often impose strong assumptions of the underlying data generating process, such as linear dynamics in the state space. In this paper, we propose BRITS, a novel method based on recurrent neural networks for missing va...

  15. Advanced radar-interpretation of InSAR time series for mapping and characterization of geological processes

    OpenAIRE

    Cigna, F.; Del Ventisette, C.; Liguori, V.; Casagli, N.

    2011-01-01

    We present a new post-processing methodology for the analysis of InSAR (Synthetic Aperture Radar Interferometry) multi-temporal measures, based on the temporal under-sampling of displacement time series, the identification of potential changes occurring during the monitoring period and, eventually, the classification of different deformation behaviours. The potentials of this approach for the analysis of geological processes were tested on the case study of Naro (Italy), specifically selected...

  16. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  17. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  18. Volterra series based predistortion for broadband RF power amplifiers with memory effects

    Institute of Scientific and Technical Information of China (English)

    Jin Zhe; Song Zhihuan; He Jiaming

    2008-01-01

    RF power amplifiers (PAs) are usually considered as memoryless devices in most existing predistortion techniques. However, in broadband communication systems, such as WCDMA, the PA memory effects are significant, and memoryless predistortion cannot linearize the PAs effectively. After analyzing the PA memory effects, a novel predistortion method based on the simplified Volterra series is proposed to linearize broadband RF PAs with memory effects. The indirect learning architecture is adopted to design the predistortion scheme and the recursive least squares algorithm with forgetting factor is applied to identify the parameters of the predistorter. Simulation results show that the proposed predistortion method can compensate for the nonlinear distortion and memory effects of broadband RF PAs effectively.
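    As a hedged illustration of the modelling style (not the paper's exact simplified Volterra model), the sketch below builds a memory-polynomial basis, a commonly used pruned Volterra form, and identifies post-distorter coefficients with ordinary least squares in an indirect-learning fashion; the paper itself uses recursive least squares with a forgetting factor.

```python
import numpy as np

def memory_polynomial_matrix(x, K=5, M=3):
    """Basis matrix of a memory polynomial (a commonly used pruned Volterra form):
    columns are x[n-m] * |x[n-m]|**(k-1) for nonlinearity orders k = 1..K and
    memory depths m = 0..M-1."""
    N = len(x)
    columns = []
    for m in range(M):
        xm = np.concatenate((np.zeros(m, dtype=complex), x[:N - m]))
        for k in range(1, K + 1):
            columns.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(columns)

def identify_postdistorter(pa_input, pa_output, gain, K=5, M=3):
    """Indirect learning: fit coefficients mapping the normalised PA output back to
    the PA input (ordinary least squares here instead of the paper's RLS)."""
    Phi = memory_polynomial_matrix(np.asarray(pa_output) / gain, K, M)
    coefficients, *_ = np.linalg.lstsq(Phi, np.asarray(pa_input), rcond=None)
    return coefficients

def apply_predistorter(x, coefficients, K=5, M=3):
    """Copy the identified coefficients in front of the PA as the predistorter."""
    return memory_polynomial_matrix(np.asarray(x), K, M) @ coefficients
```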

  19. Reconstructing the temporal ordering of biological samples using microarray data.

    Science.gov (United States)

    Magwene, Paul M; Lizardi, Paul; Kim, Junhyong

    2003-05-01

    Accurate time series for biological processes are difficult to estimate due to problems of synchronization, temporal sampling and rate heterogeneity. Methods are needed that can utilize multi-dimensional data, such as those resulting from DNA microarray experiments, in order to reconstruct time series from unordered or poorly ordered sets of observations. We present a set of algorithms for estimating temporal orderings from unordered sets of sample elements. The techniques we describe are based on modifications of a minimum-spanning tree calculated from a weighted, undirected graph. We demonstrate the efficacy of our approach by applying these techniques to an artificial data set as well as several gene expression data sets derived from DNA microarray experiments. In addition to estimating orderings, the techniques we describe also provide useful heuristics for assessing relevant properties of sample datasets such as noise and sampling intensity, and we show how a data structure called a PQ-tree can be used to represent uncertainty in a reconstructed ordering. Academic implementations of the ordering algorithms are available as source code (in the programming language Python) on our web site, along with documentation on their use. The artificial 'jelly roll' data set upon which the algorithm was tested is also available from this web site. The publicly available gene expression data may be found at http://genome-www.stanford.edu/cellcycle/ and http://caulobacter.stanford.edu/CellCycle/.
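    One simple MST-based ordering heuristic in the spirit of the paper (though not its exact algorithm, which also uses PQ-trees to represent ordering uncertainty) is to take the diameter path of the minimum spanning tree built on pairwise expression distances:

```python
import networkx as nx
from scipy.spatial.distance import pdist, squareform

def estimate_ordering(expression):
    """expression: (n_samples, n_genes) array. Returns sample indices along the
    diameter path of the minimum spanning tree of pairwise Euclidean distances."""
    D = squareform(pdist(expression))
    G = nx.from_numpy_array(D)                       # complete weighted graph
    T = nx.minimum_spanning_tree(G)
    # Tree diameter: farthest node from an arbitrary start, then farthest from that one.
    dist_from_0 = nx.single_source_dijkstra_path_length(T, 0)
    u = max(dist_from_0, key=dist_from_0.get)
    dist_from_u = nx.single_source_dijkstra_path_length(T, u)
    v = max(dist_from_u, key=dist_from_u.get)
    return nx.dijkstra_path(T, u, v)                 # ordered list of sample indices
```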

  20. Ultrasonic-based membrane aided sample preparation of urine proteomes.

    Science.gov (United States)

    Jesus, Jemmyson Romário; Santos, Hugo M; López-Fernández, H; Lodeiro, Carlos; Arruda, Marco Aurélio Zezzi; Capelo, J L

    2018-02-01

    A new ultrafast ultrasonic-based method for shotgun proteomics as well as label-free protein quantification in urine samples is developed. The method first separates the urine proteins using nitrocellulose-based membranes; the proteins are then digested in-membrane using trypsin. The enzymatic digestion process is accelerated from overnight to four minutes using a sonoreactor ultrasonic device. Overall, the sample treatment pipeline comprising protein separation, digestion and identification is done in just 3 h. The process is assessed using urine of healthy volunteers. The method shows that males can be differentiated from females using the protein content of urine in a fast, easy and straightforward way. 232 and 226 proteins are identified in urine of males and females, respectively. Of these, 162 are common to both genders, whilst 70 are unique to males and 64 to females. Of the 162 common proteins, 13 are present at statistically different levels between genders. The workflow follows the minimalism concept as outlined by Halls, as each stage of this analysis is evaluated to minimize the time, cost, sample requirement, reagent consumption, energy requirements and production of waste products. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Las series televisivas juveniles: tramas y conflictos en una «teen series» Television Fiction Series Targeted at Young Audience: Plots and Conflicts Portrayed in a Teen Series

    Directory of Open Access Journals (Sweden)

    Núria García Muñoz

    2011-10-01

    In fact, the potential consumers of the teen series – the teenagers – find themselves at a key moment in the construction of their identities. First, the article presents a review of the background literature on young people’s portrayal in television fiction series. Secondly, it discusses the concept of teen series and their relationship with youth consumption. Finally, the article presents a case study that consisted of a content analysis of the North American teen drama Dawson’s Creek. Content analysis was conducted on a representative sample of three seasons of the show, in order to analyse two groups of variables: the variables of the characters’ personalities and those of plot and story characteristics. The article discusses the results of the second group of variables, focusing on the main characteristics of the plots and on the characters’ roles in the development and resolution of the conflicts. Acceptance of one’s personal identity, love and friendship have been identified as the most highly recurring themes. In addition, the importance of social relationships among the characters in the development of plots and conflicts has been highlighted.

  2. Homogenization of long instrumental temperature and precipitation series over the Spanish Northern Coast

    Science.gov (United States)

    Sigro, J.; Brunet, M.; Aguilar, E.; Stoll, H.; Jimenez, M.

    2009-04-01

    The Spanish-funded research project Rapid Climate Changes in the Iberian Peninsula (IP) Based on Proxy Calibration, Long Term Instrumental Series and High Resolution Analyses of Terrestrial and Marine Records (CALIBRE: ref. CGL2006-13327-C04/CLI) has as main objective to analyse climate dynamics during periods of rapid climate change by means of developing high-resolution paleoclimate proxy records from marine and terrestrial (lakes and caves) deposits over the IP and calibrating them with long-term and high-quality instrumental climate time series. Under CALIBRE, the coordinated project Developing and Enhancing a Climate Instrumental Dataset for Calibrating Climate Proxy Data and Analysing Low-Frequency Climate Variability over the Iberian Peninsula (CLICAL: CGL2006-13327-C04-03/CLI) is devoted to the development of homogenised climate records and sub-regional time series which can be confidently used in the calibration of the lacustrine, marine and speleothem time series generated under CALIBRE. Here we present the procedures followed in order to homogenise a dataset of maximum and minimum temperature and precipitation data on a monthly basis over the Spanish northern coast. The dataset is composed of thirty (twenty) precipitation (temperature) long monthly records. The data are quality controlled following the procedures recommended by Aguilar et al. (2003) and tested for homogeneity and adjusted by following the approach adopted by Brunet et al. (2008). Sub-regional time series of precipitation, maximum and minimum temperatures for the period 1853-2007 have been generated by averaging monthly anomalies and then adding back the base-period mean, according to the method of Jones and Hulme (1996). Also, a method to adjust the variance bias present in regional time series associated over time with varying sample size has been applied (Osborn et al., 1997). The results of this homogenisation exercise and the development of the associated sub-regional time series
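    A minimal sketch of the anomaly-averaging step (Jones and Hulme style), assuming station records are held in a pandas DataFrame with a monthly DatetimeIndex; the homogeneity adjustments and the variance correction of Osborn et al. are not reproduced here, and the base period is an illustrative assumption.

```python
import pandas as pd

def regional_series(monthly, base_start="1961-01", base_end="1990-12"):
    """monthly: DataFrame with a monthly DatetimeIndex and one column per station.
    Builds a sub-regional series from averaged monthly anomalies and adds back the
    base-period regional mean (no variance adjustment)."""
    base = monthly.loc[base_start:base_end]
    climatology = base.groupby(base.index.month).mean()        # per calendar month, per station
    anomalies = monthly.copy()
    for station in monthly.columns:
        anomalies[station] = monthly[station].to_numpy() - \
            climatology[station].reindex(monthly.index.month).to_numpy()
    regional_anomaly = anomalies.mean(axis=1, skipna=True)     # average available stations
    return regional_anomaly + base.stack().mean()              # add back base-period mean
```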

  3. Operation of multiple superconducting energy doubler magnets in series

    International Nuclear Information System (INIS)

    Kalbfleisch, G.; Limon, P.J.; Rode, C.

    1977-01-01

    In order to understand the operational characteristics of the Energy Doubler, a series of experiments were begun which were designed to be a practical test of running superconducting accelerator magnets in series. Two separate tests in which two Energy Doubler dipoles were powered in series are described. Of particular interest are the static losses of the cryostats and the behavior of the coils and cryostats during quenches. The results of the tests show that Energy Doubler magnets can be safely operated near their short sample limit, and that the various safety devices used are adequate to protect the coils and the cryostats from damage

  4. Urbanization and Income Inequality in Post-Reform China: A Causal Analysis Based on Time Series Data.

    Science.gov (United States)

    Chen, Guo; Glasmeier, Amy K; Zhang, Min; Shao, Yang

    2016-01-01

    This paper investigates the potential causal relationship(s) between China's urbanization and income inequality since the start of the economic reform. Based on the economic theory of urbanization and income distribution, we analyze the annual time series of China's urbanization rate and Gini index from 1978 to 2014. The results show that urbanization has an immediate alleviating effect on income inequality, as indicated by the negative relationship between the two time series at the same year (lag = 0). However, urbanization also seems to have a lagged aggravating effect on income inequality, as indicated by positive relationship between urbanization and the Gini index series at lag 1. Although the link between urbanization and income inequality is not surprising, the lagged aggravating effect of urbanization on the Gini index challenges the popular belief that urbanization in post-reform China generally helps reduce income inequality. At deeper levels, our results suggest an urgent need to focus on the social dimension of urbanization as China transitions to the next stage of modernization. Comprehensive social reforms must be prioritized to avoid a long-term economic dichotomy and permanent social segregation.
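    The contemporaneous versus lag-1 relationships described above can be checked with a simple lagged-correlation helper; the series below are synthetic placeholders, not the actual urbanization and Gini data, and this is only an illustration rather than the authors' full causal analysis.

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

# Synthetic placeholder series standing in for the 1978-2014 urbanization rate and Gini index
rng = np.random.default_rng(0)
urban = np.linspace(0.18, 0.55, 37) + rng.normal(0, 0.005, 37)
gini = 0.30 - 0.15 * urban + 0.20 * np.roll(urban, 1) + rng.normal(0, 0.005, 37)

print("same-year correlation          :", lagged_corr(urban, gini, 0))
print("urbanization leading by 1 year :", lagged_corr(urban, gini, 1))
```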

  5. Urbanization and Income Inequality in Post-Reform China: A Causal Analysis Based on Time Series Data.

    Directory of Open Access Journals (Sweden)

    Guo Chen

    Full Text Available This paper investigates the potential causal relationship(s) between China's urbanization and income inequality since the start of the economic reform. Based on the economic theory of urbanization and income distribution, we analyze the annual time series of China's urbanization rate and Gini index from 1978 to 2014. The results show that urbanization has an immediate alleviating effect on income inequality, as indicated by the negative relationship between the two time series at the same year (lag = 0). However, urbanization also seems to have a lagged aggravating effect on income inequality, as indicated by positive relationship between urbanization and the Gini index series at lag 1. Although the link between urbanization and income inequality is not surprising, the lagged aggravating effect of urbanization on the Gini index challenges the popular belief that urbanization in post-reform China generally helps reduce income inequality. At deeper levels, our results suggest an urgent need to focus on the social dimension of urbanization as China transitions to the next stage of modernization. Comprehensive social reforms must be prioritized to avoid a long-term economic dichotomy and permanent social segregation.

  6. Immunophenotype Discovery, Hierarchical Organization, and Template-based Classification of Flow Cytometry Samples

    Directory of Open Access Journals (Sweden)

    Ariful Azad

    2016-08-01

    Full Text Available We describe algorithms for discovering immunophenotypes from large collections of flow cytometry (FC) samples, and for using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (a group of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is on the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust since templates describe phenotypic signatures common to cell populations in several samples, while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several data sets, including one representing a healthy immune system and one of Acute Myeloid Leukemia (AML) samples. The last task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML, and were able to distinguish Acute Promyelocytic Leukemia from other subtypes of AML.

  7. The Toggle Local Planner for sampling-based motion planning

    KAUST Repository

    Denny, Jory; Amato, Nancy M.

    2012-01-01

    Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5

  8. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    Full Text Available Aimed at resolving the issues of the imbalance of resources and workloads at data centers and the overhead together with the high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy which is based on the cloud model time series workload prediction algorithm. By setting the upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by creating a workload time series using the cloud model, and stipulating a general VM migration criterion, workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host machine, carrying out the task of the VM migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, promoting an improved utilization of resources in the entire data center.
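    A rough sketch of the workload-aware migration (WAM) criterion, with the cloud-model predictor replaced by a simple moving-average forecast purely for illustration; thresholds, window size and return messages are assumptions.

```python
from statistics import mean

def forecast_next(workload_history, window=5):
    """Placeholder predictor: moving average of the last `window` observations.
    (The paper itself builds a cloud-model based time-series forecast.)"""
    return mean(workload_history[-window:])

def needs_migration(workload_history, upper=0.85, lower=0.20):
    """Workload-aware criterion: trigger a migration only if both the current and
    the predicted workload violate the bounds, so momentary peaks are ignored."""
    current = workload_history[-1]
    predicted = forecast_next(workload_history)
    if current > upper and predicted > upper:
        return "overloaded: migrate a VM away from this host"
    if current < lower and predicted < lower:
        return "underloaded: candidate source for consolidation"
    return "no migration"

print(needs_migration([0.4, 0.5, 0.95, 0.9, 0.92, 0.93]))
```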

  9. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated and of the fact that certain factors can influence calibration of the predictive model (discrimination, parameterization and incidence). Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure to determine the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
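    The bootstrap machinery underlying such an algorithm can be sketched as follows, assuming an already developed score and an external validation cohort: for a candidate number of events, repeatedly resample a validation set of the implied size and inspect the spread of the discrimination estimates. The estimated calibration index and the stopping rule of the paper are not reproduced; names and defaults are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_spread(score, outcome, n_events, n_boot=500, seed=None):
    """For a candidate number of validation events, draw bootstrap validation sets of
    the implied size and return the mean and spread of the AUC estimates."""
    rng = np.random.default_rng(seed)
    score, outcome = np.asarray(score), np.asarray(outcome)
    n_total = int(round(n_events / outcome.mean()))      # sample size implied by the event count
    aucs = []
    for _ in range(n_boot):
        idx = rng.choice(len(outcome), size=n_total, replace=True)
        if outcome[idx].sum() in (0, len(idx)):          # degenerate resample, skip it
            continue
        aucs.append(roc_auc_score(outcome[idx], score[idx]))
    aucs = np.array(aucs)
    return aucs.mean(), aucs.std()
```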

  10. Sampling rare fluctuations of discrete-time Markov chains

    Science.gov (United States)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.

  11. Principle of natural and artificial radioactive series equivalency

    International Nuclear Information System (INIS)

    Vasilyeva, A.N.; Starkov, O.V.

    2001-01-01

    In the present paper one approach used in the development of a radioactive waste management conception is considered. This approach is based on the principle of radiotoxic equivalency between natural and artificial radioactive series. The radioactivity of natural and artificial radioactive series has been calculated for a 10^9-year period. The toxicity evaluation for natural and artificial series has also been made. The correlation between natural radioactive series and their predecessors - actinides produced in thermal and fast reactors - has been considered. It has been shown that systematized reactor series data have great scientific significance and that the principle of differential calculation of radiotoxicity is necessary to realize the conception of radiotoxic equivalency between long-lived radioactive waste and uranium and thorium ores. The calculations show that fulfilment of the equivalency principle is possible for the uranium series (4n+2, 4n+1). It is a problem for the thorium series. This principle is impracticable for the neptunium series. (author)

  12. UniFIeD Univariate Frequency-based Imputation for Time Series Data

    OpenAIRE

    Friese, Martina; Stork, Jörg; Ramos Guerra, Ricardo; Bartz-Beielstein, Thomas; Thaker, Soham; Flasch, Oliver; Zaefferer, Martin

    2013-01-01

    This paper introduces UniFIeD, a new data preprocessing method for time series. UniFIeD can cope with large intervals of missing data. A scalable test function generator, which allows the simulation of time series with different gap sizes, is presented additionally. An experimental study demonstrates that (i) UniFIeD shows a significant better performance than simple imputation methods and (ii) UniFIeD is able to handle situations, where advanced imputation methods fail. The results are indep...

  13. Polymeric ionic liquid-based portable tip microextraction device for on-site sample preparation of water samples.

    Science.gov (United States)

    Chen, Lei; Pei, Junxian; Huang, Xiaojia; Lu, Min

    2018-06-05

    On-site sample preparation is highly desired because it avoids the transportation of large-volume samples and ensures the accuracy of the analytical results. In this work, a portable prototype of tip microextraction device (TMD) was designed and developed for on-site sample pretreatment. The assembly procedure of TMD is quite simple. Firstly, polymeric ionic liquid (PIL)-based adsorbent was in-situ prepared in a pipette tip. After that, the tip was connected with a syringe which was driven by a bidirectional motor. The flow rates in adsorption and desorption steps were controlled accurately by the motor. To evaluate the practicability of the developed device, the TMD was used to on-site sample preparation of waters and combined with high-performance liquid chromatography with diode array detection to measure trace estrogens in water samples. Under the most favorable conditions, the limits of detection (LODs, S/N = 3) for the target analytes were in the range of 4.9-22 ng/L, with good coefficients of determination. Confirmatory study well evidences that the extraction performance of TMD is comparable to that of the traditional laboratory solid-phase extraction process, but the proposed TMD is more simple and convenient. At the same time, the TMD avoids complicated sampling and transferring steps of large-volume water samples. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Iterative random vs. Kennard-Stone sampling for IR spectrum-based classification task using PLS2-DA

    Science.gov (United States)

    Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz

    2018-04-01

    External testing (ET) is preferred over auto-prediction (AP) or k-fold cross-validation for estimating the more realistic predictive ability of a statistical model. With IR spectra, the Kennard-Stone (KS) sampling algorithm is often used to split the data into training and test sets, i.e. respectively for model construction and for model testing. On the other hand, iterative random sampling (IRS) has not been the favored choice, though it is theoretically more likely to produce reliable estimation. The aim of this preliminary work is to compare the performance of KS and IRS in sampling a representative training set from an attenuated total reflectance - Fourier transform infrared spectral dataset (of four varieties of blue gel pen inks) for PLS2-DA modeling. The 'best' performance achievable from the dataset is estimated with AP on the full dataset (AP_F,error). Both IRS (n = 200) and KS were used to split the dataset in the ratio of 7:3. The classic decision rule (i.e. maximum value-based) is employed for new sample prediction via partial least squares - discriminant analysis (PLS2-DA). The error rate of each model was estimated repeatedly via: (a) AP on the full data (AP_F,error); (b) AP on the training set (AP_S,error); and (c) ET on the respective test set (ET_S,error). A good PLS2-DA model is expected to produce AP_S,error and ET_S,error values that are similar to AP_F,error. Bearing that in mind, the similarities between (a) AP_S,error vs. AP_F,error; (b) ET_S,error vs. AP_F,error; and (c) AP_S,error vs. ET_S,error were evaluated using correlation tests (i.e. Pearson and Spearman's rank tests) on series of PLS2-DA models computed from the KS-set and the IRS-set, respectively. Overall, models constructed from the IRS-set exhibit more similarity between the internal and external error rates than the respective KS-set models, i.e. less risk of overfitting. In conclusion, IRS is more reliable than KS in sampling a representative training set.
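    For reference, a compact (assumed, textbook-style) implementation of the Kennard-Stone split used in the comparison: start from the two most distant spectra and repeatedly add the sample whose minimum distance to the selected set is largest; the iterative random split is simply repeated random partitioning.

```python
import numpy as np
from scipy.spatial.distance import cdist

def kennard_stone(X, n_train):
    """Return indices of `n_train` samples chosen by the Kennard-Stone algorithm."""
    D = cdist(X, X)
    selected = list(np.unravel_index(np.argmax(D), D.shape))   # two most distant samples
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_train:
        # distance of every remaining sample to its closest already-selected sample
        min_dist = D[np.ix_(remaining, selected)].min(axis=1)
        nxt = remaining[int(np.argmax(min_dist))]
        selected.append(nxt)
        remaining.remove(nxt)
    return np.array(selected)

# 7:3 split as in the study (training indices from KS; the rest form the test set)
# X = spectra array of shape (n_samples, n_wavenumbers)
# train_idx = kennard_stone(X, int(0.7 * len(X)))
```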

  15. On Sums of Numerical Series and Fourier Series

    Science.gov (United States)

    Pavao, H. Germano; de Oliveira, E. Capelas

    2008-01-01

    We discuss a class of trigonometric functions whose corresponding Fourier series, on a conveniently chosen interval, can be used to calculate several numerical series. Particular cases are presented and two recent results involving numerical series are recovered. (Contains 1 note.)
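    A standard example of the kind of calculation discussed: the Fourier series of f(x) = x² on [-π, π], evaluated at x = π, yields the sum of the reciprocal squares.

```latex
\[
x^{2} = \frac{\pi^{2}}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^{n}}{n^{2}}\cos(nx),
\qquad -\pi \le x \le \pi ,
\]
so setting $x = \pi$, where $\cos(n\pi) = (-1)^{n}$, gives
\[
\pi^{2} = \frac{\pi^{2}}{3} + 4\sum_{n=1}^{\infty}\frac{1}{n^{2}}
\quad\Longrightarrow\quad
\sum_{n=1}^{\infty}\frac{1}{n^{2}} = \frac{\pi^{2}}{6}.
\]
```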

  16. Time-series modeling of long-term weight self-monitoring data.

    Science.gov (United States)

    Helander, Elina; Pavel, Misha; Jimison, Holly; Korhonen, Ilkka

    2015-08-01

    Long-term self-monitoring of weight is beneficial for weight maintenance, especially after weight loss. Connected weight scales accumulate time series information over long term and hence enable time series analysis of the data. The analysis can reveal individual patterns, provide more sensitive detection of significant weight trends, and enable more accurate and timely prediction of weight outcomes. However, long term self-weighing data has several challenges which complicate the analysis. Especially, irregular sampling, missing data, and existence of periodic (e.g. diurnal and weekly) patterns are common. In this study, we apply time series modeling approach on daily weight time series from two individuals and describe information that can be extracted from this kind of data. We study the properties of weight time series data, missing data and its link to individuals behavior, periodic patterns and weight series segmentation. Being able to understand behavior through weight data and give relevant feedback is desired to lead to positive intervention on health behaviors.

  17. Classification and authentication of unknown water samples using machine learning algorithms.

    Science.gov (United States)

    Kundu, Palash K; Panchariya, P C; Kundu, Madhusree

    2011-07-01

    This paper proposes the development of real-life water sample classification and authentication based on machine learning algorithms. The proposed techniques use experimental measurements from a pulse voltammetry method based on an electronic tongue (E-tongue) instrumentation system with silver and platinum electrodes. E-tongues include arrays of solid-state ion sensors, transducers (even of different types), data collectors and data analysis tools, all oriented to the classification of liquid samples and the authentication of unknown liquid samples. The time series signal and the corresponding raw data represent the measurement from a multi-sensor system. The E-tongue system, implemented in a laboratory environment for six different ISI (Bureau of Indian Standards) certified water samples (Aquafina, Bisleri, Kingfisher, Oasis, Dolphin, and McDowell), was the data source for developing two types of machine learning algorithms, classification and regression. A water data set consisting of six sample classes and 4402 features was considered. A PCA (principal component analysis) based classification and authentication tool was developed in this study as the machine learning component of the E-tongue system. A partial least squares (PLS) based classifier, dedicated to authenticating a specific category of water sample, also evolved as an integral part of the E-tongue instrumentation system. The developed PCA- and PLS-based E-tongue system delivered encouraging overall authentication accuracy and excellent performance for the aforesaid categories of water samples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Infinite series

    CERN Document Server

    Hirschman, Isidore Isaac

    2014-01-01

    This text for advanced undergraduate and graduate students presents a rigorous approach that also emphasizes applications. Encompassing more than the usual amount of material on the problems of computation with series, the treatment offers many applications, including those related to the theory of special functions. Numerous problems appear throughout the book.The first chapter introduces the elementary theory of infinite series, followed by a relatively complete exposition of the basic properties of Taylor series and Fourier series. Additional subjects include series of functions and the app

  19. Academic Primer Series: Key Papers About Competency-Based Medical Education

    Directory of Open Access Journals (Sweden)

    Robert Cooney

    2017-05-01

    Full Text Available Introduction: Competency-based medical education (CBME presents a paradigm shift in medical training. This outcome-based education movement has triggered substantive changes across the globe. Since this transition is only beginning, many faculty members may not have experience with CBME nor a solid foundation in the grounding literature. We identify and summarize key papers to help faculty members learn more about CBME. Methods: Based on the online discussions of the 2016–2017 ALiEM Faculty Incubator program, a series of papers on the topic of CBME was developed. Augmenting this list with suggestions by a guest expert and by an open call on Twitter for other important papers, we were able to generate a list of 21 papers in total. Subsequently, we used a modified Delphi study methodology to narrow the list to key papers that describe the importance and significance for educators interested in learning about CBME. To determine the most impactful papers, the mixed junior and senior faculty authorship group used three-round voting methodology based upon the Delphi method. Results: Summaries of the five most highly rated papers on the topic of CBME, as determined by this modified Delphi approach, are presented in this paper. Major themes include a definition of core CBME themes, CBME principles to consider in the design of curricula, a history of the development of the CBME movement, and a rationale for changes to accreditation with CBME. The application of the study findings to junior faculty and faculty developers is discussed. Conclusion: We present five key papers on CBME that junior faculty members and faculty experts identified as essential to faculty development. These papers are a mix of foundational and explanatory papers that may provide a basis from which junior faculty members may build upon as they help to implement CBME programs.

  20. Academic Primer Series: Key Papers About Competency-Based Medical Education.

    Science.gov (United States)

    Cooney, Robert; Chan, Teresa M; Gottlieb, Michael; Abraham, Michael; Alden, Sylvia; Mongelluzzo, Jillian; Pasirstein, Michael; Sherbino, Jonathan

    2017-06-01

    Competency-based medical education (CBME) presents a paradigm shift in medical training. This outcome-based education movement has triggered substantive changes across the globe. Since this transition is only beginning, many faculty members may not have experience with CBME nor a solid foundation in the grounding literature. We identify and summarize key papers to help faculty members learn more about CBME. Based on the online discussions of the 2016-2017 ALiEM Faculty Incubator program, a series of papers on the topic of CBME was developed. Augmenting this list with suggestions by a guest expert and by an open call on Twitter for other important papers, we were able to generate a list of 21 papers in total. Subsequently, we used a modified Delphi study methodology to narrow the list to key papers that describe the importance and significance for educators interested in learning about CBME. To determine the most impactful papers, the mixed junior and senior faculty authorship group used three-round voting methodology based upon the Delphi method. Summaries of the five most highly rated papers on the topic of CBME, as determined by this modified Delphi approach, are presented in this paper. Major themes include a definition of core CBME themes, CBME principles to consider in the design of curricula, a history of the development of the CBME movement, and a rationale for changes to accreditation with CBME. The application of the study findings to junior faculty and faculty developers is discussed. We present five key papers on CBME that junior faculty members and faculty experts identified as essential to faculty development. These papers are a mix of foundational and explanatory papers that may provide a basis from which junior faculty members may build upon as they help to implement CBME programs.

  1. Parametric Identification of Solar Series based on an Adaptive ...

    Indian Academy of Sciences (India)

    Department of Computer Science, University of Extremadura, Campus ... els, applying it to the case of sunspot series. .... inspired on the concept of artificial evolution (Goldberg 1989) (Rechenberg 1973) and ... benchmark when na = 5. ... clusion is that an accurate tuning for general purposes could be from na = 5, although.

  2. The Real-time Frequency Spectrum Analysis of Neutron Pulse Signal Series

    International Nuclear Information System (INIS)

    Tang Yuelin; Ren Yong; Wei Biao; Feng Peng; Mi Deling; Pan Yingjun; Li Jiansheng; Ye Cenming

    2009-01-01

    Frequency spectrum analysis of the neutron pulse signal is a very important method in nuclear stochastic signal processing. Focused on the special '0' and '1' structure of the neutron pulse signal series, this paper proposes a new rotation table and realizes a real-time frequency spectrum algorithm at a 1 GHz sample rate on a PC, using addition, addressing and SSE operations. The numerical experimental results show that, at a count rate of 3×10^6 s^-1, this algorithm is superior to FFTW in time consumption and can meet the real-time requirement of frequency spectrum analysis. (authors)

  3. Biological time series analysis using a context free language: applicability to pulsatile hormone data.

    Directory of Open Access Journals (Sweden)

    Dennis A Dean

    Full Text Available We present a novel approach for analyzing biological time-series data using a context-free language (CFL) representation that allows the extraction and quantification of important features from the time-series. This representation results in Hierarchically AdaPtive (HAP) analysis, a suite of multiple complementary techniques that enable rapid analysis of data and does not require the user to set parameters. HAP analysis generates hierarchically organized parameter distributions that allow multi-scale components of the time-series to be quantified and includes a data analysis pipeline that applies recursive analyses to generate hierarchically organized results that extend traditional outcome measures such as pharmacokinetics and inter-pulse interval. Pulsicons, a novel text-based time-series representation also derived from the CFL approach, are introduced as an objective qualitative comparison nomenclature. We apply HAP to the analysis of 24 hours of frequently sampled pulsatile cortisol hormone data, which has known analysis challenges, from 14 healthy women. HAP analysis generated results in seconds and produced dozens of figures for each participant. The results quantify the observed qualitative features of cortisol data as a series of pulse clusters, each consisting of one or more embedded pulses, and identify two ultradian phenotypes in this dataset. HAP analysis is designed to be robust to individual differences and to missing data and may be applied to other pulsatile hormones. Future work can extend HAP analysis to other time-series data types, including oscillatory and other periodic physiological signals.

  4. Time-Series Analysis: A Cautionary Tale

    Science.gov (United States)

    Damadeo, Robert

    2015-01-01

    Time-series analysis has often been a useful tool in atmospheric science for deriving long-term trends in various atmospherically important parameters (e.g., temperature or the concentration of trace gas species). In particular, time-series analysis has been repeatedly applied to satellite datasets in order to derive the long-term trends in stratospheric ozone, which is a critical atmospheric constituent. However, many of the potential pitfalls relating to the non-uniform sampling of the datasets were often ignored and the results presented by the scientific community have been unknowingly biased. A newly developed and more robust application of this technique is applied to the Stratospheric Aerosol and Gas Experiment (SAGE) II version 7.0 ozone dataset and the previous biases and newly derived trends are presented.

  5. Palmprint Verification Using Time Series Method

    Directory of Open Access Journals (Sweden)

    A. A. Ketut Agung Cahyawan Wiranatha

    2013-11-01

    Full Text Available The use of biometrics for automatic recognition is growing rapidly as a way to solve security problems, and the palmprint is one of the most frequently used biometric traits. This paper uses a two-step center-of-mass moment method for region of interest (ROI) segmentation and applies a time series method combined with a block window method for feature representation. Normalized Euclidean Distance is used to measure the similarity of two palmprint feature vectors. System testing was done using 500 palm samples, with 4 samples as reference images and 6 samples as test images. Experimental results show that the system can achieve a high performance, with a success rate of about 97.33% (FNMR=1.67%, FMR=1.00%, T=0.036).
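
    The matching step can be sketched as follows. The threshold T = 0.036 mirrors the number quoted in the record, while the exact normalization of the Euclidean distance and the toy feature vectors are assumptions, since the record does not define them.

```python
import numpy as np

def normalized_euclidean(u, v):
    # One common normalization: Euclidean distance scaled by vector length.
    # The record does not spell out its exact formula, so this is an assumption.
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.linalg.norm(u - v) / np.sqrt(len(u))

def verify(test_vec, reference_vecs, threshold=0.036):
    # Accept the claim if the best match against the enrolled references
    # falls below the decision threshold T.
    best = min(normalized_euclidean(test_vec, r) for r in reference_vecs)
    return best <= threshold, best

# Toy usage with random stand-ins for palmprint feature vectors.
rng = np.random.default_rng(1)
refs = [rng.normal(size=64) * 0.01 for _ in range(4)]
probe = refs[0] + rng.normal(scale=0.005, size=64)
print(verify(probe, refs))
```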

  6. Sample size for post-marketing safety studies based on historical controls.

    Science.gov (United States)

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because detection of rare events is the outcome of interest. The performance of the exact method is compared with its approximate large-sample-theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design retains the advantages and rationale of the two-group design while generally requiring smaller sample sizes. 2010 John Wiley & Sons, Ltd.
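
    The paper's hybrid-design formula is not reproduced in the record; the sketch below only illustrates the generic exact Poisson calculation that such designs build on, with made-up rates.

```python
from scipy.stats import poisson

def exact_poisson_sample_size(rate0, rate1, alpha=0.05, power=0.80, max_n=50000):
    # Smallest number of subjects n (one unit of follow-up each) for which an
    # exact one-sided Poisson test of H0: rate = rate0 against rate1 > rate0
    # reaches the requested power.  Generic textbook calculation, not the
    # hybrid-design formula derived in the paper.
    for n in range(1, max_n + 1):
        mu0, mu1 = n * rate0, n * rate1
        crit = int(poisson.ppf(1 - alpha, mu0)) + 1   # reject if X >= crit
        if poisson.sf(crit - 1, mu1) >= power:        # power under the alternative
            return n
    return None

# e.g. a background rate of 1 event per 1000 subjects vs a threefold increase
print(exact_poisson_sample_size(0.001, 0.003))
```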

  7. Study of the U and Th series in Crassostrea mangle shell

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Wellington M.; Damatto, Sandra R.; Silva, Paulo S.C., E-mail: wellington.m@usp.br, E-mail: damatto@ipen.br, E-mail: pscsilva@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Simone, Luiz R.L.; Amaral, Vanessa S., E-mail: lrsimone@usp.br, E-mail: vanessamolusco@gmail.com [Universidade de Sao Paulo (MZ/USP), Sao Paulo, SP (Brazil). Museu de Zoologia

    2015-07-01

    Foraminifera, coral and mollusk shells have been used as proxies in environmental, paleoenvironmental and climate change studies of marine systems, with elemental and isotopic ratios serving as recorders of such events. Nevertheless, there is little information available on the application of the U and Th radionuclide decay series in those fields. In this sense, the objective of this paper was to evaluate the activity concentrations of the U and Th decay series nuclides in Crassostrea mangle shell samples as a function of geographic location. Samples from the Sao Paulo, Parana, Alagoas, Rio Grande do Norte and Pernambuco states were analyzed by Neutron Activation Analysis and Gross Alpha and Beta Counting. Statistical analysis applied to the obtained results made it possible to distinguish samples coming from Sao Paulo from those coming from Parana. (author)

  8. Study of the U and Th series in Crassostrea mangle shell

    International Nuclear Information System (INIS)

    Farias, Wellington M.; Damatto, Sandra R.; Silva, Paulo S.C.; Simone, Luiz R.L.; Amaral, Vanessa S.

    2015-01-01

    Foraminifera, coral and mollusk shells have been used as proxies in environmental, paleoenvironmental and climate change studies of marine systems, with elemental and isotopic ratios serving as recorders of such events. Nevertheless, there is little information available on the application of the U and Th radionuclide decay series in those fields. In this sense, the objective of this paper was to evaluate the activity concentrations of the U and Th decay series nuclides in Crassostrea mangle shell samples as a function of geographic location. Samples from the Sao Paulo, Parana, Alagoas, Rio Grande do Norte and Pernambuco states were analyzed by Neutron Activation Analysis and Gross Alpha and Beta Counting. Statistical analysis applied to the obtained results made it possible to distinguish samples coming from Sao Paulo from those coming from Parana. (author)

  9. Stratified sampling design based on data mining.

    Science.gov (United States)

    Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung

    2013-09-01

    The objective was to explore classification rules based on data mining methodologies to be used in defining strata in stratified sampling of healthcare providers, with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics, then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single-specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and the population density of the provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and the number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by type of provider location and number of beds explained 2% and 0.2% of the variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
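
    A minimal sketch of the two-stage idea (cluster providers, then extract readable rules with a shallow tree) is shown below using scikit-learn; the provider features and their distributions are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

# 1) k-means groups providers with similar characteristics;
# 2) a shallow decision tree fit on the cluster labels yields readable
#    stratification rules based on a few variables.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.gamma(2.0, 30.0, 1000),    # hypothetical: inpatients per specialist
    rng.gamma(2.0, 50.0, 1000),    # hypothetical: number of beds
    rng.normal(500, 150, 1000),    # hypothetical: population density
])

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, clusters)
print(export_text(tree, feature_names=[
    "inpatients_per_specialist", "beds", "population_density"]))
```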

  10. Eating Disorders among a Community-Based Sample of Chilean Female Adolescents

    Science.gov (United States)

    Granillo, M. Teresa; Grogan-Kaylor, Andrew; Delva, Jorge; Castillo, Marcela

    2011-01-01

    The purpose of this study was to explore the prevalence and correlates of eating disorders among a community-based sample of female Chilean adolescents. Data were collected through structured interviews with 420 female adolescents residing in Santiago, Chile. Approximately 4% of the sample reported ever being diagnosed with an eating disorder.…

  11. Magnitude of 14C/12C variations based on archaeological samples

    International Nuclear Information System (INIS)

    Kusumgar, S.; Agrawal, D.P.

    1977-01-01

    The magnitude of 14C/12C variations in the periods A.D. 500 to 200 B.C. and 370 B.C. to 2900 B.C. is discussed. The 14C dates of well-dated archaeological samples from India and Egypt do not show any significant divergence from the historical ages. On the other hand, the corrections based on dendrochronological samples show marked deviations for the same time period. A plea is, therefore, made to study old tree samples from Anatolia and Irish bogs and archaeological samples from west Asia to arrive at a more realistic calibration curve. (author)

  12. Evaluating steady-state soil thickness by coupling uranium series and 10Be cosmogenic radionuclides

    Science.gov (United States)

    Vanacker, Veerle; Schoonejans, Jerome; Opfergelt, Sophie; Granet, Matthieu; Christl, Marcus; Chabaux, Francois

    2017-04-01

    Within the Critical Zone, the development of the regolith mantle is controlled by the downwards propagation of the weathering front into the bedrock and by denudation at the surface of the regolith through mass movements, water and wind erosion. When the removal of surface material is approximately balanced by soil production, the soil system is assumed to be in steady state. The steady-state soil thickness (SSST) can be considered a dynamic equilibrium of the system, in which the thickness of the soil mantle stays relatively constant over time. In this study, we present and compare analytical data from two independent isotopic techniques, in-situ produced cosmogenic nuclides and U-series disequilibria, to constrain soil development under semi-arid climatic conditions. The Spanish Betic Cordillera (Southeast Spain) was selected for this study, as it offers a unique opportunity to analyze soil thickness steady-state conditions for the thin soils of semiarid environments. Three soil profiles were sampled across the Betic Ranges, at the ridge crests of zero-order catchments with distinct topographic relief, hillslope gradient and 10Be-derived denudation rate. The soil production rates determined from U-series isotopes (238U, 234U, 230Th and 226Ra) are of the same order of magnitude as the 10Be-derived denudation rates, suggesting steady-state soil thickness in two out of three sampling sites. The results suggest that coupling U-series isotopes with in-situ produced radionuclides can provide new insights into the rates of soil development, and also illustrate the potential frontiers in applying U-series disequilibria to track soil production in rapidly eroding landscapes characterized by thin weathering depths.

  13. DTW-APPROACH FOR UNCORRELATED MULTIVARIATE TIME SERIES IMPUTATION

    OpenAIRE

    Phan , Thi-Thu-Hong; Poisson Caillault , Emilie; Bigand , André; Lefebvre , Alain

    2017-01-01

    International audience; Missing data are inevitable in almost all domains of applied sciences. Data analysis with missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Some well-known methods for multivariate time series imputation require high correlations between series or their features. In this paper, we propose an approach based on the shape-behaviour relation in low/un-correlated multivariate time series under an assumption of...
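
    The record names dynamic time warping (DTW) but not the full shape-behaviour imputation scheme; the sketch below shows only the elementary DTW distance that such an approach builds on.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series.
    This is only the elementary building block suggested by the title;
    the paper's full imputation scheme is not reproduced here."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1, 0]))
```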

  14. Students creative thinking skills in solving two dimensional arithmetic series through research-based learning

    Science.gov (United States)

    Tohir, M.; Abidin, Z.; Dafik; Hobri

    2018-04-01

    Arithmetic is one of the topics in mathematics that deals with logic and a detailed process of generalizing formulas. Creativity and flexibility are needed in generalizing the formula of an arithmetic series. This research aimed at analyzing students' creative thinking skills in generalizing arithmetic series. The triangulation method and research-based learning were used in this research. The subjects were students of the Master Program of Mathematics Education in the Faculty of Teacher Training and Education at Jember University. The data were collected by giving assignments to the students. The data collection was done by giving an open problem-solving task and a documentation study to the students to arrange a generalization pattern based on a formula for a function dependent on i and for a function dependent on i and j. Then, the students finished the next problem-solving task to construct arithmetic generalization patterns based on the formula of a function dependent on i and i + n and the sum formula of functions dependent on i and j of the arithmetic compiled. The data analysis technique used in this study was the Miles and Huberman analysis model. Based on the results of the data analysis of task 1, the levels of students' creative thinking skill were classified as follows: 22.22% of the students were categorized as "not creative", 38.89% as "less creative", 22.22% as "sufficiently creative" and 16.67% as "creative". By contrast, the results of the data analysis of task 2 found that 22.22% of the students were categorized as "sufficiently creative", 44.44% as "creative" and 33.33% as "very creative". This analysis result can set the basis for teaching references and for actualizing a better teaching model in order to increase students' creative thinking skills.

  15. ON SAMPLING BASED METHODS FOR THE DUBINS TRAVELING SALESMAN PROBLEM WITH NEIGHBORHOODS

    Directory of Open Access Journals (Sweden)

    Petr Váňa

    2015-12-01

    Full Text Available In this paper, we address the problem of path planning to visit a set of regions with a Dubins vehicle, which is known as the Dubins Traveling Salesman Problem with Neighborhoods (DTSPN). We propose a modification of an existing sampling-based approach to determine an increasing number of samples per goal region and thus improve the solution quality if more computational time is available. The proposed modification of the sampling-based algorithm has been compared with the performance of existing approaches to the DTSPN, and results on the quality of the solutions found and the required computational time are presented in the paper.

  16. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, which is referred to as the 'failure probability function (FPF)'. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology.
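
    The weighted-sum construction of a failure probability function can be illustrated as follows: one set of samples drawn from a fixed sampling density is reweighted to evaluate the failure probability at different design values. The limit state and densities below are made-up stand-ins, not the paper's examples.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch of the weighted-sample idea: samples drawn once from a fixed
# sampling density h are reused to approximate the failure probability as an
# explicit function of a design variable (here, the mean of a normal input).
rng = np.random.default_rng(0)
h_mean, h_std = 1.0, 1.5            # assumed sampling density h = N(1, 1.5)
x = rng.normal(h_mean, h_std, 100_000)
fail = x > 3.0                      # failure event for g(x) = 3 - x <= 0

def failure_probability(design_mean, design_std=1.0):
    # importance-sampling reweighting: w_i = f(x_i | design) / h(x_i)
    w = norm.pdf(x, design_mean, design_std) / norm.pdf(x, h_mean, h_std)
    return np.mean(fail * w)

for mu in (0.5, 1.0, 1.5):
    # compare the weighted estimate with the exact value for this toy case
    print(mu, failure_probability(mu), norm.sf(3.0, mu, 1.0))
```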

  17. CVD-graphene for low equivalent series resistance in rGO/CVD-graphene/Ni-based supercapacitors

    Science.gov (United States)

    Kwon, Young Hwi; Kumar, Sunil; Bae, Joonho; Seo, Yongho

    2018-05-01

    Reduced equivalent series resistance (ESR) is necessary, particularly at high current density, for high-performance supercapacitors, and the interface resistance between the current collector and the electrode material is one of the main components of ESR. In this report, we have optimized chemical vapor deposition-grown graphene (CVD-G) on a current collector (Ni foil), using reduced graphene oxide as the active electrode material, to fabricate an electric double layer capacitor with reduced ESR. The CVD-G was grown at different cooling rates (20 °C min-1, 40 °C min-1 and 100 °C min-1) to determine the optimum conditions. The lowest ESR, 0.38 Ω, was obtained for a cell with a 100 °C min-1 cooling rate, while the sample without a CVD-G interlayer exhibited 0.80 Ω. The CVD-G interlayer-based supercapacitors exhibited fast charge-discharge (CD) characteristics at high scan rates up to 10 V s-1 due to the low ESR. The specific capacitances of the cells with CVD-G were in the range of 145.6 F g-1 to 213.8 F g-1 at a voltage scan rate of 0.05 V s-1. A quasi-rectangular behavior was observed in the cyclic voltammetry curves, even at very high scan rates of 50 and 100 V s-1, for the cell with CVD-G optimized at the higher cooling rate, i.e. 100 °C min-1.

  18. mHealth Series: Factors influencing sample size calculations for mHealth–based studies – A mixed methods study in rural China

    Science.gov (United States)

    van Velthoven, Michelle Helena; Li, Ye; Wang, Wei; Du, Xiaozhen; Chen, Li; Wu, Qiong; Majeed, Azeem; Zhang, Yanfeng; Car, Josip

    2013-01-01

    Background: An important issue for mHealth evaluation is the lack of information for sample size calculations. Objective: To explore factors that influence sample size calculations for mHealth-based studies and to suggest strategies for increasing the participation rate. Methods: We explored factors influencing recruitment and follow-up of participants (caregivers of children) in an mHealth text-messaging data collection cross-over study. With the help of village doctors, we recruited 1026 (25%) caregivers of children under five out of the 4170 registered. To explore factors influencing recruitment and provide recommendations for improving recruitment, we conducted semi-structured interviews with village doctors. Of the 1014 included participants, 662 (65%) responded to the first question about willingness to participate, 538 (53%) responded to the first survey question and 356 (35%) completed the text message survey. To explore factors influencing follow-up and provide recommendations for improving follow-up, we conducted interviews with participants. We added views from the researchers who were involved in the study to contextualize the findings. Results: We found several factors influencing recruitment related to the following themes: experiences with recruitment, village doctors' work, village doctors' motivations, caregivers' characteristics, and caregivers' motivations. Village doctors gave several recommendations for ways to recruit more caregivers, and we added our views to these. We found the following factors influencing follow-up: mobile phone usage, ability to use a mobile phone, problems with the mobile phone, checking the mobile phone, available time, paying back text message costs, study incentives, subjective norm, culture, trust, perceived usefulness of the process, perceived usefulness of the outcome, perceived ease of use, attitude, behavioural intention to use, and actual use. From our perspective, factors influencing follow-up were: different

  19. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
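
    A much-reduced sketch of the segmentation idea follows. The measure function (segment-wise intersection) and the symmetric-difference cost are one illustrative choice among those the paper formalizes, the toy series is invented, and the memoized recursion stands in for the paper's dynamic-programming scheme.

```python
from functools import lru_cache

def segment_difference(series, i, j):
    # Measure function: intersection of the item sets in the segment.
    # Segment difference: total symmetric-difference size against that set.
    seg_set = set.intersection(*series[i:j])
    return sum(len(seg_set ^ t) for t in series[i:j])

def optimal_segmentation(series, k):
    # Split series into k contiguous segments minimizing total difference.
    n = len(series)

    @lru_cache(maxsize=None)
    def best(end, parts):
        # minimal cost of splitting series[:end] into `parts` segments,
        # together with the segment start indices
        if parts == 1:
            return segment_difference(series, 0, end), (0,)
        candidates = []
        for cut in range(parts - 1, end):
            prev_cost, prev_cuts = best(cut, parts - 1)
            candidates.append((prev_cost + segment_difference(series, cut, end),
                               prev_cuts + (cut,)))
        return min(candidates)

    return best(n, k)

series = [{"a", "b"}, {"a", "b", "c"}, {"a"}, {"x", "y"}, {"x"}, {"x", "z"}]
print(optimal_segmentation(series, 2))   # expected split between points 3 and 4
```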

  20. Mackenzie River Delta morphological change based on Landsat time series

    Science.gov (United States)

    Vesakoski, Jenni-Mari; Alho, Petteri; Gustafsson, David; Arheimer, Berit; Isberg, Kristina

    2015-04-01

    Arctic rivers are sensitive and yet quite unexplored river systems on which climate change will have an impact. Research has not focused in detail on the fluvial geomorphology of Arctic rivers, mainly due to the remoteness and vastness of the watersheds, problems with data availability and difficult accessibility. Nowadays, wide collaborative spatial databases in hydrology as well as extensive remote sensing datasets over the Arctic are available, and they enable improved investigation of Arctic watersheds. Thereby, it is also important to develop and improve methods that enable detecting the fluvio-morphological processes based on the available data. Furthermore, it is essential to reconstruct and improve the understanding of past fluvial processes in order to better understand prevailing and future fluvial processes. In this study we sum up the fluvial geomorphological change in the Mackenzie River Delta during the last ~30 years. The Mackenzie River Delta (~13 000 km2) is situated in the Northwest Territories, Canada, where the Mackenzie River enters the Beaufort Sea, Arctic Ocean, near the city of Inuvik. The Mackenzie River Delta is a lake-rich, productive ecosystem and an ecologically sensitive environment. The research objective is achieved through two sub-objectives: 1) interpretation of the deltaic river channel planform change by applying Landsat time series; 2) definition of the variables that have impacted the most on the detected changes by applying statistics and long hydrological time series derived from the Arctic-HYPE model (HYdrologic Predictions for Environment) developed by the Swedish Meteorological and Hydrological Institute. According to our satellite interpretation, field observations and statistical analyses, notable spatio-temporal changes have occurred in the morphology of the river channel and delta during the past 30 years. For example, the channels have been developing in braiding and sinuosity. In addition, various linkages between the studied

  1. Fabry-Pérot cavity based on chirped sampled fiber Bragg gratings.

    Science.gov (United States)

    Zheng, Jilin; Wang, Rong; Pu, Tao; Lu, Lin; Fang, Tao; Li, Weichun; Xiong, Jintian; Chen, Yingfang; Zhu, Huatao; Chen, Dalei; Chen, Xiangfei

    2014-02-10

    A novel kind of Fabry-Pérot (FP) structure based on chirped sampled fiber Bragg gratings (CSFBGs) is proposed and demonstrated. In this structure, the regular chirped FBG (CFBG) that functions as a reflecting mirror in the FP cavity is replaced by a CSFBG, which is realized by chirping the sampling periods of a sampled FBG having a uniform local grating period. The realization of such CSFBG-FPs with diverse properties requires only a single uniform-pitch phase mask and a sub-micrometer-precision moving stage. Compared with the conventional CFBG-FP, the design of CSFBG-FPs with diverse functions becomes more flexible, and the fabrication process is simpler. As a demonstration, based on the same experimental facilities, FPs with a uniform FSR (~73 pm) and with a chirped FSR (varying from 28 pm to 405 pm) were fabricated, showing good agreement with simulation results.

  2. Molecular-based rapid inventories of sympatric diversity: a comparison of DNA barcode clustering methods applied to geography-based vs clade-based sampling of amphibians.

    Science.gov (United States)

    Paz, Andrea; Crawford, Andrew J

    2012-11-01

    Molecular markers offer a universal source of data for quantifying biodiversity. DNA barcoding uses a standardized genetic marker and a curated reference database to identify known species and to reveal cryptic diversity within well-sampled clades. Rapid biological inventories, e.g. rapid assessment programs (RAPs), unlike most barcoding campaigns, are focused on particular geographic localities rather than on clades. Because of the potentially sparse phylogenetic sampling, the addition of DNA barcoding to RAPs may present a greater challenge for the identification of named species or for revealing cryptic diversity. In this article we evaluate the use of DNA barcoding for quantifying lineage diversity within a single sampling site as compared to clade-based sampling, and present examples from amphibians. We compared algorithms for identifying DNA barcode clusters (e.g. species, cryptic species or Evolutionary Significant Units) using previously published DNA barcode data obtained from geography-based sampling at a site in Central Panama, and from clade-based sampling in Madagascar. We found that clustering algorithms based on genetic distance performed similarly on sympatric as well as clade-based barcode data, while a promising coalescent-based method performed poorly on sympatric data. The various clustering algorithms were also compared in terms of speed and software implementation. Although each method has its shortcomings in certain contexts, we recommend the use of the ABGD method, which not only performs fairly well under either sampling method, but does so in a few seconds and with a user-friendly Web interface.

  3. Wave scattering theory a series approach based on the Fourier transformation

    CERN Document Server

    Eom, Hyo J

    2001-01-01

    The book provides a unified technique of Fourier transform to solve the wave scattering, diffraction, penetration, and radiation problems where the technique of separation of variables is applicable. The book discusses wave scattering from waveguide discontinuities, various apertures, and coupling structures, often encountered in electromagnetic, electrostatic, magnetostatic, and acoustic problems. A system of simultaneous equations for the modal coefficients is formulated and the rapidly-convergent series solutions amenable to numerical computation are presented. The series solutions find practical applications in the design of microwave/acoustic transmission lines, waveguide filters, antennas, and electromagnetic interference/compatibility-related problems.

  4. Towards quantitative laser-induced breakdown spectroscopy analysis of soil samples

    International Nuclear Information System (INIS)

    Bousquet, B.; Sirven, J.-B.; Canioni, L.

    2007-01-01

    A quantitative analysis of chromium in soil samples is presented. Different emission lines related to chromium are studied in order to select the best one for quantitative analysis. Important matrix effects are demonstrated from one soil to another, preventing any prediction of concentration in different soils on the basis of a univariate calibration curve. Finally, a classification of the LIBS data based on a series of Principal Component Analyses (PCA) is applied to a reduced dataset of selected spectral lines related to the major chemical elements in the soils. LIBS data of heterogeneous soils appear to be widely dispersed, which leads to a reconsideration of the sampling step in the analysis process.

  5. 46,XX males: a case series based on clinical and genetics evaluation.

    Science.gov (United States)

    Mohammadpour Lashkari, F; Totonchi, M; Zamanian, M R; Mansouri, Z; Sadighi Gilani, M A; Sabbaghian, M; Mohseni Meybodi, A

    2017-09-01

    46,XX male sex reversal syndrome is one of the rarest sex chromosomal aberrations. The presence of the SRY gene on one of the X chromosomes is the most frequent cause of this syndrome. Based on the Y chromosome profile, there are SRY-positive and SRY-negative forms. The purpose of our study was to report the first case series of Iranian patients and describe the different clinical appearances based on their genetic component. From the 8,114 azoospermic and severely oligozoospermic patients referred to Royan Institute, we diagnosed 57 cases as sex reversal patients. Based on the endocrinological history, we performed karyotyping and SRY and AZF microdeletion screening. Patients had a female karyotype. According to the available hormonal reports of 37 patients, 16 cases had low levels of testosterone (43.2%). On the other hand, 15 males were SRY positive (90.2%), while they lacked the genes encoding spermatogenic factors on Yq. The SRY gene is considered very important in initiating testicular differentiation in males. Owing to the homogeneous results of karyotyping and AZF deletion screening, there are both SRY-positive and SRY-negative cases that show similar sex reversal phenotypes. Evidence shows that diverse phenotypic differences can arise for various reasons. © 2016 Blackwell Verlag GmbH.

  6. MODIS Time Series to Detect Anthropogenic Interventions and Degradation Processes in Tropical Pasture

    Directory of Open Access Journals (Sweden)

    Daniel Alves Aguiar

    2017-01-01

    Full Text Available The unavoidable diet change in emerging countries, projected for the coming years, will significantly increase the global consumption of animal protein. It is expected that Brazilian livestock production, responsible for close to 15% of global production, will be prepared to answer the increasing demand for beef. Consequently, the evaluation of pasture quality at a regional scale is important to inform public policies towards a rational land use strategy directed at improving livestock productivity in the country. Our hypothesis is that MODIS images can be used to evaluate the processes of degradation, restoration and renovation of tropical pastures. To test this hypothesis, two field campaigns were performed covering a route of approximately 40,000 km through nine Brazilian states. To characterize the sampled pastures, biophysical parameters were measured and observations about the pastures, the adopted management and the landscape were collected. Each sampled pasture was evaluated using a time series of MODIS EVI2 images from 2000-2012, according to a new protocol based on seven phenological metrics, 14 Boolean criteria and two numerical criteria. The theoretical basis of this protocol was derived from interviews with producers and livestock experts during a third field campaign. The analysis of the MODIS EVI2 time series provided valuable historical information on the type of intervention and on the biological degradation process of the sampled pastures. Of the 782 pastures sampled, 26.6% experienced some type of intervention, 19.1% were under biological degradation, and 54.3% presented neither intervention nor a trend of biomass decrease during the period analyzed.

  7. Integrating field plots, lidar, and landsat time series to provide temporally consistent annual estimates of biomass from 1990 to present

    Science.gov (United States)

    Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan. Huang

    2015-01-01

    We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...

  8. Physiologically Based Pharmacokinetic Modeling in Lead Optimization. 1. Evaluation and Adaptation of GastroPlus To Predict Bioavailability of Medchem Series.

    Science.gov (United States)

    Daga, Pankaj R; Bolger, Michael B; Haworth, Ian S; Clark, Robert D; Martin, Eric J

    2018-03-05

    When medicinal chemists need to improve bioavailability (%F) within a chemical series during lead optimization, they synthesize new series members with systematically modified properties, mainly by following experience and general rules of thumb. More quantitative models that predict the %F of proposed compounds from chemical structure alone have proven elusive. Global empirical %F quantitative structure-property relationship (QSPR) models perform poorly, and projects have too little data to train local %F QSPR models. Mechanistic oral absorption and physiologically based pharmacokinetic (PBPK) models simulate the dissolution, absorption, systemic distribution, and clearance of a drug in preclinical species and humans. Attempts to build global PBPK models based purely on calculated inputs have not been successful. In this work, local GastroPlus PBPK models are instead customized for individual medchem series. The key innovation was building a local QSPR for a numerically fitted effective intrinsic clearance (CLloc). All inputs are subsequently computed from structure alone, so the models can be applied in advance of synthesis. Training CLloc on the first 15-18 rat %F measurements gave adequate predictions, with clear improvements up to about 30 measurements, and incremental improvements beyond that.

  9. Statistical sampling methods for soils monitoring

    Science.gov (United States)

    Ann M. Abbott

    2010-01-01

    Development of the best sampling design to answer a research question should be an interactive venture between the land manager or researcher and statisticians, and is the result of answering various questions. A series of questions that can be asked to guide the researcher in making decisions that will arrive at an effective sampling plan are described, and a case...

  10. Autoregressive-model-based missing value estimation for DNA microarray time series data.

    Science.gov (United States)

    Choong, Miew Keen; Charbit, Maurice; Yan, Hong

    2009-01-01

    Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experiment results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
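
    The full ARLSimpute algorithm is not given in the record; the sketch below only illustrates the underlying idea of alternating between fitting an autoregressive model and re-predicting the missing entries, on a made-up one-dimensional series.

```python
import numpy as np

def ar_impute(series, order=3, n_iter=20):
    """Very small sketch of autoregressive imputation (not the full ARLSimpute
    algorithm): initialise missing entries with the series mean, then
    alternately fit AR(order) coefficients by least squares and re-predict the
    missing positions until the values stabilise."""
    x = np.asarray(series, dtype=float).copy()
    missing = np.isnan(x)
    x[missing] = np.nanmean(series)
    for _ in range(n_iter):
        # build the least-squares design matrix from lagged values
        X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
        coef, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
        for t in np.where(missing)[0]:
            if t >= order:                      # leading points keep the mean fill
                x[t] = x[t - order:t][::-1] @ coef
    return x

t = np.arange(60)
clean = np.sin(0.3 * t)
obs = clean.copy()
obs[[10, 25, 26, 40]] = np.nan
print(np.round(ar_impute(obs) - clean, 3)[[10, 25, 26, 40]])   # residual errors
```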

  11. Permutation entropy analysis of financial time series based on Hill's diversity number

    Science.gov (United States)

    Zhang, Yali; Shang, Pengjian

    2017-12-01

    In this paper the permutation entropy based on Hill's diversity number (Nn,r) is introduced as a new way to assess the complexity of a complex dynamical system such as the stock market. We test the performance of this method with simulated data. Results show that Nn,r with appropriate parameters is more sensitive to changes in the system and describes the trends of complex systems clearly. In addition, we study stock closing price series from different datasets consisting of six indices, three US stock indices and three Chinese stock indices, during different periods; Nn,r can quantify the changes in complexity of stock market data. Moreover, we get richer information from Nn,r and obtain some properties of the differences between the US and Chinese stock indices.
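
    The exact definition of Nn,r is not reproduced in the record; the sketch below simply combines the two named ingredients, ordinal-pattern (permutation) probabilities and Hill's diversity number of order r, which is the assumption being made here.

```python
import numpy as np
from itertools import permutations
from math import factorial

def ordinal_pattern_probs(x, order=3):
    # Relative frequencies of the order! ordinal patterns in the series.
    patterns = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - order + 1):
        patterns[tuple(np.argsort(x[i:i + order]))] += 1
    counts = np.array(list(patterns.values()), dtype=float)
    return counts / counts.sum()

def hill_number(p, r):
    # Hill's diversity number of order r; r = 1 gives the exponential of the
    # Shannon entropy, recovering ordinary permutation entropy up to a log.
    p = p[p > 0]
    if np.isclose(r, 1.0):
        return np.exp(-np.sum(p * np.log(p)))
    return np.sum(p ** r) ** (1.0 / (1.0 - r))

rng = np.random.default_rng(0)
noise = rng.normal(size=5000)
walk = np.cumsum(noise)                 # persistent series, less pattern diversity
for name, series in [("white noise", noise), ("random walk", walk)]:
    p = ordinal_pattern_probs(series, order=3)
    print(name, round(hill_number(p, 2), 3), "of", factorial(3), "patterns")
```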

  12. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
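
    The encoder side described above can be sketched as follows; the kernel size and down-sampling factor are assumptions, and the sparsity-based soft decoder is not reproduced.

```python
import numpy as np
from scipy.signal import convolve2d

# Sketch of the encoder only: replace the usual anti-aliasing low-pass filter
# with a local random binary convolution kernel, then take a polyphase
# (every s-th pixel) down-sample.  The result is still an ordinary image.
rng = np.random.default_rng(0)
image = rng.random((256, 256))            # stand-in for a real input image

kernel = rng.integers(0, 2, size=(4, 4)).astype(float)
if kernel.sum() == 0:                     # guard against an all-zero draw
    kernel[0, 0] = 1.0
kernel /= kernel.sum()                    # keep the measurements in range

filtered = convolve2d(image, kernel, mode="same", boundary="symm")
s = 2                                     # assumed down-sampling factor
measurements = filtered[::s, ::s]         # local random measurements

print(image.shape, "->", measurements.shape)
```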

  13. Multivariate Time Series Decomposition into Oscillation Components.

    Science.gov (United States)

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-08-01

    Many time series are considered to be a superposition of several oscillation components. We have proposed a method for decomposing univariate time series into oscillation components and estimating their phases (Matsuda & Komaki, 2017 ). In this study, we extend that method to multivariate time series. We assume that several oscillators underlie the given multivariate time series and that each variable corresponds to a superposition of the projections of the oscillators. Thus, the oscillators superpose on each variable with amplitude and phase modulation. Based on this idea, we develop gaussian linear state-space models and use them to decompose the given multivariate time series. The model parameters are estimated from data using the empirical Bayes method, and the number of oscillators is determined using the Akaike information criterion. Therefore, the proposed method extracts underlying oscillators in a data-driven manner and enables investigation of phase dynamics in a given multivariate time series. Numerical results show the effectiveness of the proposed method. From monthly mean north-south sunspot number data, the proposed method reveals an interesting phase relationship.

  14. Quantifying memory in complex physiological time-series.

    Science.gov (United States)

    Shirazi, Amir H; Raoufy, Mohammad R; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R; Amodio, Piero; Jafari, G Reza; Montagnese, Sara; Mani, Ali R

    2013-01-01

    In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of "memory length" was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empiric evidence that rare fluctuations in cardio-respiratory time-series are 'forgotten' quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations.

  15. The Santander Atlantic Time-Series Station (SATS): A Time Series combination of a monthly hydrographic Station and The Biscay AGL Oceanic Observatory.

    Science.gov (United States)

    Lavin, Alicia; Somavilla, Raquel; Cano, Daniel; Rodriguez, Carmen; Gonzalez-Pola, Cesar; Viloria, Amaia; Tel, Elena; Ruiz-Villareal, Manuel

    2017-04-01

    height character with respect to the monthly average, and currents with respect to seasonal averages. Ocean-atmosphere heat fluxes (latent and sensible) are computed from the buoy atmospheric and oceanic measurements. Estimations of the mixed layer depth and bulk series at different water levels are provided on a monthly basis. Quality-controlled series are distributed for sea surface salinity, oxygen and chlorophyll data. Some sensors are particularly affected by biofouling, and monthly visits to the buoy permit following these sensors' behaviour. The chlorophyll-fluorescence sensor is the main concern, but the dissolved oxygen sensor is also problematic. Periods of realistic smooth variations present a strong offset that is corrected based on the Winkler analysis of water samples. The buoy wind, air temperature and humidity sensors are also compared monthly with the research vessel data. The next step will consist of better validation of the data, mainly the ten-year data from the Biscay AGL buoy, but also the 25-year data of station 7, close to the buoy. Data will be cleaned and analyzed, and the final product will be published and disseminated more widely to improve their use.

  16. Transition edge sensor series array bolometer

    Energy Technology Data Exchange (ETDEWEB)

    Beyer, J, E-mail: joern.beyer@ptb.d [Physikalisch-Technische Bundesanstalt (PTB), Abbestrasse 2-12, D-10587 Berlin (Germany)

    2010-10-15

    A transition edge sensor series array (TES-SA) is an array of identical TESs that are connected in series by low-inductance superconducting wiring. The array elements are equally and well thermally coupled to the absorber and respond to changes in the absorber temperature in synchronization. The TES-SA total resistance increases compared to a single TES while the shape of the superconducting transition is preserved. We are developing a TES-SA with a large number, hundreds to thousands, of array elements with the goal of enabling the readout of a TES-based bolometer operated at 4.2 K with a semiconductor-based amplifier located at room temperature. The noise and dynamic performance of a TES-SA bolometer based on a niobium/aluminum bilayer is analyzed. It is shown that stable readout of the bolometer with a low-noise transimpedance amplifier is feasible.

  17. Transition edge sensor series array bolometer

    International Nuclear Information System (INIS)

    Beyer, J

    2010-01-01

    A transition edge sensor series array (TES-SA) is an array of identical TESs that are connected in series by low-inductance superconducting wiring. The array elements are equally and well thermally coupled to the absorber and respond to changes in the absorber temperature in synchronization. The TES-SA total resistance increases compared to a single TES while the shape of the superconducting transition is preserved. We are developing a TES-SA with a large number, hundreds to thousands, of array elements with the goal of enabling the readout of a TES-based bolometer operated at 4.2 K with a semiconductor-based amplifier located at room temperature. The noise and dynamic performance of a TES-SA bolometer based on a niobium/aluminum bilayer is analyzed. It is shown that stable readout of the bolometer with a low-noise transimpedance amplifier is feasible.

  18. Children's Perceived Realism of Family Television Series.

    Science.gov (United States)

    Rabin, Beth E.; And Others

    This study examined the influence of grade level, program content, and ethnic match between viewer and television characters on children's perceptions of the realism of families portrayed in television series. In the 1986-87 school year, a sample of 1,692 children in 2nd, 5th, and 10th grades completed a 13-item questionnaire measuring their…

  19. The Hierarchical Spectral Merger Algorithm: A New Time Series Clustering Procedure

    KAUST Repository

    Euán, Carolina; Ombao, Hernando; Ortega, Joaquín

    2018-01-01

    We present a new method for time series clustering which we call the Hierarchical Spectral Merger (HSM) method. This procedure is based on the spectral theory of time series and identifies series that share similar oscillations or waveforms

  20. Magnetic Field Emission Comparison for Series-Parallel and Series-Series Wireless Power Transfer to Vehicles – PART 2/2

    DEFF Research Database (Denmark)

    Batra, Tushar; Schaltz, Erik

    2014-01-01

    Series-series and series-parallel topologies are the most favored topologies for the design of wireless power transfer systems for vehicle applications. The series-series topology has the advantage of reflecting only the resistive part on the primary side. On the other hand, the current source output characteristics of the series-parallel topology are more suited for the battery of the vehicle. This paper compares the two topologies in terms of magnetic emissions to the surroundings for the same input power, primary current, quality factor and inductors. Theoretical and simulation results show that the series

  1. Sample selection based on kernel-subclustering for the signal reconstruction of multifunctional sensors

    International Nuclear Information System (INIS)

    Wang, Xin; Wei, Guo; Sun, Jinwei

    2013-01-01

    Signal reconstruction methods based on inverse modeling for multifunctional sensors have been widely studied in recent years. To improve the accuracy, the reconstruction methods have become more and more complicated because of the increase in the model parameters and sample points. However, there is another factor that affects the reconstruction accuracy, the position of the sample points, which has not been studied. A reasonable selection of the sample points could improve the signal reconstruction quality in at least two ways: improved accuracy with the same number of sample points, or the same accuracy obtained with a smaller number of sample points. Both ways are valuable for improving the accuracy and decreasing the workload, especially for large batches of multifunctional sensors. In this paper, we propose a sample selection method based on kernel-subclustering to distill groupings of the sample data and produce a representation of the data set for inverse modeling. The method calculates the distance between two data points based on the kernel-induced distance instead of the conventional distance. The kernel function is a generalization of the distance metric obtained by mapping data that are non-separable in the original space into homogeneous groups in a high-dimensional space. The method obtained the best results compared with the other three methods in the simulation. (paper)
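
    The kernel-induced distance mentioned above has a standard closed form in the feature space; a small sketch with an RBF kernel is given below (the kernel choice is an assumption, as the record does not fix it).

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    # A common kernel choice; the record does not specify the kernel,
    # so the RBF form and gamma value here are assumptions.
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def kernel_induced_distance(x, y, kernel=rbf_kernel):
    # Distance in the feature space induced by the kernel:
    # d(x, y)^2 = K(x, x) - 2 K(x, y) + K(y, y)
    return np.sqrt(max(kernel(x, x) - 2 * kernel(x, y) + kernel(y, y), 0.0))

a, b = [0.2, 1.0, -0.3], [0.6, 0.4, 0.1]
print(kernel_induced_distance(a, b))
```

    Such distances are what drive the subclustering step: points that end up close in the kernel-induced space are grouped together, and one representative per group is kept for inverse modeling.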

  2. On the analyticity of Laguerre series

    International Nuclear Information System (INIS)

    Weniger, Ernst Joachim

    2008-01-01

    The transformation of a Laguerre series f(z) = Σ_{n=0}^∞ λ_n^(α) L_n^(α)(z) to a power series f(z) = Σ_{n=0}^∞ γ_n z^n is discussed. Since many nonanalytic functions can be expanded in terms of generalized Laguerre polynomials, success is not guaranteed and such a transformation can easily lead to a mathematically meaningless expansion containing power series coefficients that are infinite in magnitude. Simple sufficient conditions based on the decay rates and sign patterns of the Laguerre series coefficients λ_n^(α) as n → ∞ can be formulated which guarantee that the resulting power series represents an analytic function. The transformation produces a mathematically meaningful result if the coefficients λ_n^(α) either decay exponentially or factorially as n → ∞. The situation is much more complicated, but also much more interesting, if the λ_n^(α) decay only algebraically as n → ∞. If the λ_n^(α) ultimately have the same sign, the series expansions for the power series coefficients diverge, and the corresponding function is not analytic at the origin. If the λ_n^(α) ultimately have strictly alternating signs, the series expansions for the power series coefficients still diverge, but are summable to something finite, and the resulting power series represents an analytic function. If algebraically decaying and ultimately alternating Laguerre series coefficients λ_n^(α) possess sufficiently simple explicit analytical expressions, the summation of the divergent series for the power series coefficients can often be accomplished with the help of analytic continuation formulae for hypergeometric series p+1_F_p, but if the λ_n^(α) have a complicated structure or if only their numerical values are available, numerical summation techniques have to be employed. It is shown that certain nonlinear sequence transformations, in particular the so-called delta transformation (Weniger 1989 Comput. Phys. Rep. 10 189-371, equation (8.4-4)), are able to

  3. Automated Bayesian model development for frequency detection in biological time series

    Directory of Open Access Journals (Sweden)

    Oldroyd Giles ED

    2011-06-01

    the requirement for uniformly sampled data. Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure.

  4. Automated Bayesian model development for frequency detection in biological time series.

    Science.gov (United States)

    Granqvist, Emma; Oldroyd, Giles E D; Morris, Richard J

    2011-06-24

    A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlights the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data. Biological time
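
    A deliberately simplified sketch of Bayesian frequency detection is given below (not the paper's BSA implementation): for each candidate frequency the sinusoid amplitudes are marginalised under flat priors and the noise level under a Jeffreys prior, which also works for irregularly sampled time points.

```python
import numpy as np

# Simplified Bayesian frequency detection: a posterior over a grid of candidate
# frequencies for a single-sinusoid model, evaluated on irregular sample times.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 20, 120))           # irregular sampling times
y = 1.3 * np.sin(2 * np.pi * 0.7 * t + 0.4) + rng.normal(0, 0.5, t.size)
y = y - y.mean()

freqs = np.linspace(0.05, 2.0, 800)
log_post = np.empty_like(freqs)
for i, f in enumerate(freqs):
    X = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    _, logdet = np.linalg.slogdet(X.T @ X)
    # marginal likelihood up to a constant (flat prior on amplitudes,
    # Jeffreys prior on the noise level)
    log_post[i] = -0.5 * logdet - (t.size - 2) / 2.0 * np.log(rss)

post = np.exp(log_post - log_post.max())
post /= post.sum()                              # normalise over the discrete grid
print("posterior mode at f =", freqs[np.argmax(post)])   # should be near 0.7
```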

  5. New Homologues Series of Heterocyclic Schiff Base Ester: Synthesis and Characterization

    Directory of Open Access Journals (Sweden)

    Yee-Ting Chong

    2016-01-01

    Full Text Available A homologous series of liquid crystals bearing a heterocyclic thiophene Schiff base ester with an alkanoyloxy chain (CH3(CH2)nCOO-, where n = 4, 6, 8, 10, 12, 14, 16) was successfully synthesized through the modification of some reported methods. The compounds were isolated and their structures characterized by spectroscopic techniques such as FTIR, 1H and 13C NMR, and elemental analysis. Textural observation was carried out using a polarizing optical microscope (POM) over heating and cooling cycles. It was found that all synthesized compounds (3a-g) exhibited an enantiotropic nematic phase upon heating and cooling, with high thermal stability. Moreover, a characteristic bar transition texture was observed for compounds 3f and 3g, which showed a nematic-to-smectic C phase transition. This was further confirmed by the relative phase transition temperatures obtained using differential scanning calorimetry (DSC).

  6. An Architectural Based Framework for the Distributed Collection, Analysis and Query from Inhomogeneous Time Series Data Sets and Wearables for Biofeedback Applications

    Directory of Open Access Journals (Sweden)

    James Lee

    2017-02-01

    Full Text Available The increasing professionalism of sports persons and the desire of consumers to imitate this has led to an increased metrification of sport. This has been driven in no small part by the widespread availability of comparatively cheap assessment technologies and, more recently, wearable technologies. Historically, whilst these have produced large data sets, often only the most rudimentary analysis has taken place (Wisbey et al in: "Quantifying movement demands of AFL football using GPS tracking"). This paucity of analysis is due largely to the challenges of analysing large sets of data, often from disparate sources, to glean useful key performance indicators, which has been a labour-intensive process. This paper presents a framework, which can be cloud based, for the gathering, storing and algorithmic interpretation of large and inhomogeneous time series data sets. The framework is architecture based and technology agnostic in the data sources it can gather, and presents a model for multi-set analysis between and within devices and individual subjects. A sample implementation demonstrates the utility of the framework for sports performance data collected from distributed inertial sensors in the sport of swimming.

  7. U-series dating using thermal ionisation mass spectrometry (TIMS)

    Energy Technology Data Exchange (ETDEWEB)

    McCulloch, M.T. [Australian National University, Canberra, ACT (Australia). Research School of Earth Science

    1999-11-01

    U-series dating is based on the decay of the two long-lived isotopes 238U (τ1/2 = 4.47 × 10^9 years) and 235U (τ1/2 = 0.7 × 10^9 years). 238U and its intermediate daughter isotopes 234U (τ1/2 = 245.4 ka) and 230Th (τ1/2 = 75.4 ka) have been the main focus of recently developed mass spectrometric techniques (Edwards et al., 1987), while the other, less frequently used decay chain is based on the decay of 235U to 231Pa (τ1/2 = 32.8 ka). Both the 238U and 235U decay chains terminate at the stable isotopes 206Pb and 207Pb respectively. Thermal ionization mass spectrometry (TIMS) has a number of inherent advantages, mainly the ability to measure isotopic ratios at high precision on relatively small samples. In spite of these now obvious advantages, it was only from the mid-1980s, when Chen et al. (1986) made the first precise measurements of 234U and 232Th in seawater, followed by Edwards et al. (1987) who made combined 234U-230Th measurements, that the full potential of mass spectrometric methods was first realised. Several examples are given to illustrate various aspects of TIMS U-series dating. 9 refs., 3 figs.
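
    For illustration, the standard closed-system 230Th/U age equation implied by these decay chains can be solved numerically as below; the decay constants follow the half-lives quoted in the record, while the activity-ratio inputs are made-up numbers, not data from the text.

```python
import numpy as np
from scipy.optimize import brentq

# Decay constants from the half-lives quoted above (230Th: 75.4 ka, 234U: 245.4 ka).
lam230 = np.log(2) / 75.4e3     # 1/yr
lam234 = np.log(2) / 245.4e3    # 1/yr

def th230_u238(t, d234_measured):
    # Closed-system (230Th/238U) activity ratio after t years, written in terms
    # of the measured delta-234U (per mil).
    return (1 - np.exp(-lam230 * t)
            + (d234_measured / 1000.0) * lam230 / (lam230 - lam234)
            * (1 - np.exp(-(lam230 - lam234) * t)))

def u_series_age(measured_ratio, d234_measured):
    # Invert the age equation numerically within a plausible age window.
    return brentq(lambda t: th230_u238(t, d234_measured) - measured_ratio,
                  1.0, 600e3)

# Hypothetical inputs: (230Th/238U) = 0.55, delta-234U = 150 per mil.
print(round(u_series_age(0.55, 150.0)), "years")
```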

  8. Event-based stochastic point rainfall resampling for statistical replication and climate projection of historical rainfall series

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Korup Andersen, Aske; Larsen, Anders Badsberg

    2017-01-01

    Continuous and long rainfall series are a necessity in rural and urban hydrology for analysis and design purposes. Local historical point rainfall series often cover several decades, which makes it possible to estimate rainfall means at different timescales, and to assess return periods of extreme...... includes climate changes projected to a specific future period. This paper presents a framework for resampling of historical point rainfall series in order to generate synthetic rainfall series, which have the same statistical properties as the original series. Using a number of key target predictions...... for the future climate, such as winter and summer precipitation, and representation of extreme events, the resampled historical series are projected to represent rainfall properties in a future climate. Climate-projected rainfall series are simulated by brute force randomization of model parameters, which leads...

  9. How to statistically analyze nano exposure measurement results: Using an ARIMA time series approach

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fransman, W.; Brouwer, D.H.

    2011-01-01

    Measurement strategies for exposure to nano-sized particles differ from traditional integrated sampling methods for exposure assessment by the use of real-time instruments. The resulting measurement series is a time series, where typically the sequential measurements are not independent from each
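
    The record is truncated, but the central point - real-time exposure measurements form an autocorrelated series that should be modelled as such - can be sketched with a standard ARIMA fit. The AR(1) toy data and the statsmodels calls below are assumptions for illustration, not the authors' data or code.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)

        # Synthetic stand-in for a real-time particle-number concentration series:
        # an AR(1) process, i.e. sequential measurements are not independent.
        n = 500
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = 0.7 * x[t - 1] + rng.normal(scale=1.0)

        # Fit an ARIMA(1, 0, 0) model and inspect the estimated dependence
        res = ARIMA(x, order=(1, 0, 0)).fit()
        print(res.params)             # intercept and AR(1) coefficient (~0.7)
        print(res.forecast(steps=5))  # short-term forecast of the exposure level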

  10. A perturbative approach for enhancing the performance of time series forecasting.

    Science.gov (United States)

    de Mattos Neto, Paulo S G; Ferreira, Tiago A E; Lima, Aranildo R; Vasconcelos, Germano C; Cavalcanti, George D C

    2017-04-01

    This paper proposes a method to perform time series prediction based on perturbation theory. The approach is based on continuously adjusting an initial forecasting model to asymptotically approximate a desired time series model. First, a predictive model generates an initial forecast for a time series. Second, a residual time series is calculated as the difference between the original time series and the initial forecast. If that residual series is not white noise, it can be used to improve the accuracy of the initial model, and a new predictive model is fitted to the residual series. The whole process is repeated until convergence or until the residual series becomes white noise. The output of the method is then given by summing the outputs of all trained predictive models in a perturbative sense. To test the method, an experimental investigation was conducted on six real-world time series. A comparison was made with six other methods evaluated in the same experiments and with ten results found in the literature. The results show that the proposed method not only significantly improves the performance of the initial model but also outperforms the previously published results. Copyright © 2017 Elsevier Ltd. All rights reserved.
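
    A minimal sketch of the iterative residual-modelling idea follows, using plain least-squares AR predictors in place of whatever predictive models the paper actually employs; the lag-1 autocorrelation check is a crude stand-in for a formal white-noise test, and the final forecast would be obtained by summing the outputs of the stored models.

        import numpy as np

        def fit_ar(y, p=3):
            """Least-squares AR(p) fit; returns a one-step-ahead predictor."""
            X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
            X = np.column_stack([X, np.ones(len(X))])
            coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
            def predict(series):
                Z = np.column_stack([series[p - k - 1:len(series) - k - 1] for k in range(p)])
                Z = np.column_stack([Z, np.ones(len(Z))])
                return Z @ coef
            return predict

        def perturbative_forecast(y, p=3, max_models=5, tol=0.1):
            """Fit successive models to the residual series; the method's output
            would sum the outputs of all stored models."""
            models, target = [], np.asarray(y, dtype=float)
            for _ in range(max_models):
                predictor = fit_ar(target, p)
                models.append(predictor)
                target = target[p:] - predictor(target)        # residual series
                lag1 = np.corrcoef(target[:-1], target[1:])[0, 1]
                if abs(lag1) < tol:                            # roughly white noise
                    break
            return models

        rng = np.random.default_rng(0)
        t = np.arange(400)
        y = np.sin(0.07 * t) + 0.3 * rng.normal(size=t.size)
        print(len(perturbative_forecast(y)), "models fitted")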

  11. Color Spectrum Properties of Pure and Non-Pure LATEX in Discriminating Rubber Clone Series

    International Nuclear Information System (INIS)

    Noor Aishah Khairuzzaman; Hadzli Hashim; Nina Korlina Madzhi; Noor Ezan Abdullah; Faridatul Aima Ismail; Ahmad Faiz Sampian; Azhana Fatnin Che Will

    2015-01-01

    A study of the color spectrum properties of pure and non-pure latex for discriminating rubber clone series is presented in this paper. Five clones from the same series were used as samples: RRIM2002, RRIM2007, RRIM2008, RRIM2014 and RRIM3001. The main objective is to identify the significant color spectrum (RGB) from pure and non-pure latex that can discriminate rubber clone series. The significant color spectrum properties of pure and non-pure latex were determined using a spectrometer and the Statistical Package for the Social Sciences (SPSS). Visible light (VIS) from the spectrometer was used to illuminate the surface of each latex sample. Further numerical analysis of the color spectrum properties was conducted using SPSS. In conclusion, the blue color spectrum of non-pure latex is able to discriminate all rubber clone series, whereas for pure latex only certain color spectra can differentiate several clone series. (author)

  12. Acceptance Sampling Plans Based on Truncated Life Tests for Sushila Distribution

    Directory of Open Access Journals (Sweden)

    Amer Ibrahim Al-Omari

    2018-03-01

    Full Text Available An acceptance sampling plan problem based on truncated life tests, where the lifetime follows a Sushila distribution, is considered in this paper. For various acceptance numbers, confidence levels and values of the ratio between the fixed experiment time and the specified mean lifetime, the minimum sample sizes required to ascertain a specified mean life were found. The operating characteristic function values of the suggested sampling plans and the producer's risk are presented. Some tables are provided and the results are illustrated with an example based on a real data set.
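
    The usual construction behind such plans can be sketched as follows: the minimum sample size n is the smallest value for which the probability of observing at most c failures during the truncated test, when the true mean equals the specified mean, does not exceed 1 - P*. Since the record does not give the Sushila CDF, an exponential lifetime is used below as a placeholder, so the printed number is illustrative only.

        import math
        from scipy import stats

        def failure_prob_exponential(t_over_mu):
            """P(failure by the truncation time) for an exponential lifetime with
            the specified mean; a placeholder for the Sushila CDF, which the
            record does not give."""
            return 1.0 - math.exp(-t_over_mu)

        def min_sample_size(c, t_over_mu, confidence=0.95, n_max=1000):
            """Smallest n such that accepting the lot (<= c failures in the
            truncated test) happens with probability <= 1 - confidence when the
            true mean equals the specified mean."""
            p = failure_prob_exponential(t_over_mu)
            for n in range(c + 1, n_max):
                if stats.binom.cdf(c, n, p) <= 1.0 - confidence:
                    return n
            return None

        # Example: acceptance number c = 2, test time = 0.5 x specified mean life
        print(min_sample_size(c=2, t_over_mu=0.5, confidence=0.95))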

  13. Simultaneous multicopter-based air sampling and sensing of meteorological variables

    Science.gov (United States)

    Brosy, Caroline; Krampf, Karina; Zeeman, Matthias; Wolf, Benjamin; Junkermann, Wolfgang; Schäfer, Klaus; Emeis, Stefan; Kunstmann, Harald

    2017-08-01

    The state and composition of the lowest part of the planetary boundary layer (PBL), i.e., the atmospheric surface layer (SL), reflects the interactions of external forcing, land surface, vegetation, human influence and the atmosphere. Vertical profiles of atmospheric variables in the SL at high spatial (meters) and temporal (1 Hz and better) resolution increase our understanding of these interactions but are still challenging to measure appropriately. Traditional ground-based observations include towers that often cover only a few measurement heights at a fixed location. At the same time, most remote sensing techniques and aircraft measurements have limitations to achieve sufficient detail close to the ground (up to 50 m). Vertical and horizontal transects of the PBL can be complemented by unmanned aerial vehicles (UAV). Our aim in this case study is to assess the use of a multicopter-type UAV for the spatial sampling of air and simultaneously the sensing of meteorological variables for the study of the surface exchange processes. To this end, a UAV was equipped with onboard air temperature and humidity sensors, while wind conditions were determined from the UAV's flight control sensors. Further, the UAV was used to systematically change the location of a sample inlet connected to a sample tube, allowing the observation of methane abundance using a ground-based analyzer. Vertical methane gradients of about 0.3 ppm were found during stable atmospheric conditions. Our results showed that both methane and meteorological conditions were in agreement with other observations at the site during the ScaleX-2015 campaign. The multicopter-type UAV was capable of simultaneous in situ sensing of meteorological state variables and sampling of air up to 50 m above the surface, which extended the vertical profile height of existing tower-based infrastructure by a factor of 5.

  14. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a time-sampling technique, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this kind of study is important for health workforce planners to know if they want to apply the method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, a standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of the measurements taken from each participant. We investigated the impact of the number of participants and of the frequency of measurements per participant on the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data obtained from the GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that, precision continued to increase, but the gain per additional GP became smaller. Likewise, the analyses showed how the number of participants required decreases when more measurements per participant are taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires only 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
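
    A small sketch of the idea that both sample fluctuation (between GPs) and measurement fluctuation (within a GP's week) enter the CI is given below; the variance components, the two-sided normal approximation and all numbers are assumptions for illustration, not the study's actual figures.

        import numpy as np

        def ci_half_width(n_gps, n_measurements, sd_between=8.0, sd_within=12.0,
                          z=1.96):
            """Approximate 95% CI half-width (hours/week) of the estimated mean
            working time, combining between-GP variation with the measurement
            fluctuation of each GP's own estimate, which shrinks with the number
            of measurements per GP. All parameter values are illustrative."""
            var_per_gp = sd_between**2 + sd_within**2 / n_measurements
            return z * np.sqrt(var_per_gp / n_gps)

        for n in (10, 50, 100, 300):
            for m in (56, 168):   # one SMS per 3-h slot vs. one per hour, for a week
                print(n, m, round(ci_half_width(n, m), 2))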

  15. Characterization and uranium-series dating of travertine from Suettoe in Hungary

    Energy Technology Data Exchange (ETDEWEB)

    Sierralta, M; Geldern, R van; Frechen, M [Leibniz Institute for Applied Geosciences, Hannover (Germany)]; Kele, S [Hungarian Academy of Sciences (Hungary)]; Melcher, F [Federal Institute for Geosciences and Natural Resources (Germany)]

    2007-07-01

    Full text: Terrestrial carbonate formations, such as travertines, speleothems and lake sediments, are archives of terrestrial climate forcing. At the Suettoe section in Hungary, a succession of travertine is covered by a loess-palaeosol sequence; both are high-resolution terrestrial archives of climate and environmental change. Uranium-series ({sup 230}Th/U) dating using thermal ionisation mass spectrometry was carried out to set up a reliable chronological frame for the travertine. As the growth of travertine is complex, pore cements may cause serious problems for precise dating. Therefore, we applied microscopic, mineralogical and geochemical methods to determine the abundance of secondary calcite. The state of alteration of primary spar and micrite was characterized by cathodoluminescence and microprobe analyses. The travertine from Suettoe showed homogeneous phases of primary calcite, minor micropores and rare pore cements. Stable carbon and oxygen isotope analyses were carried out to characterize the depositional environment of the Suettoe travertines. The carbon isotopic composition indicated that the source of carbon was a mixture of atmospheric and soil-derived CO{sub 2}. Calculated water temperatures based on the oxygen isotope data ranged from 22{sup o}C to 31{sup o}C. For uranium-series dating, bulk samples were prepared from areas dominated by micrite and spar. {sup 230}Th/U ages were determined applying an isochron approach. As the travertine deposits had a dense structure, the bulk sampling method was successfully applied, yielding uranium-series ages with much higher precision than former alpha-spectrometry studies could achieve. The travertines from Suettoe yielded Mid-Pleistocene ages ranging from the antepenultimate glacial to the penultimate interglacial (310-240 ka). These results are in agreement with those from OSL and AAR dating of the overlying sediment, indicating at least an MIS 7 age for the travertine. (author)

  16. Uranium-series dating of fossil bones from alpine caves

    International Nuclear Information System (INIS)

    Leitner-Wild, E.; Steffan, I.

    1993-01-01

    During the course of an investigation of fossil cave bear populations the uranium-series method for absolute age determination has been applied to bone material. The applicability of the method to bone samples from alpine caves is demonstrated by the concordance of U/Th and U/Pa ages and cross-checks with the radiocarbon method. Stratigraphic agreement between bone ages and carbonate speleothem ages also indicates the potential of the uranium-series method as a suitable tool for the age determination of fossil bones from alpine cave environments. (Author)

  17. Spectral analysis of uneven time series of geological variables; Analisis espectral de series temporales de variables geologicas con muestreo irregular

    Energy Technology Data Exchange (ETDEWEB)

    Pardo-Iguzquiza, E.; Rodriguez-Tovar, F. J.

    2013-06-01

    In the geosciences the sampling of a time series tends to be uneven, sometimes because the sampling itself is random, because of hiatuses or completely missing data, or because of the difficulties involved in converting data from a spatial to a time scale when the sedimentation rate was not constant. Whatever the case, the best solution does not lie in interpolation but rather in resorting to a method that deals with the irregular data directly. We show here that the use of the smoothed Lomb-Scargle periodogram is both a practical and an efficient choice. We describe the effects on the estimated power spectrum of the type of irregular sampling, the number of data, interpolation, and the presence of drift. We propose the permutation test as an efficient way of calculating statistical confidence levels. By applying the Lomb-Scargle periodogram to a synthetic series with a known spectral content we are able to confirm the validity of this method in the face of the difficulties mentioned above. A case study with real data, including hiatuses, representing the thickness of the annual banding in a stalagmite, is chosen to demonstrate an application using the statistical and physical interpretation of spectral peaks. (Author)
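
    A minimal example of the (unsmoothed) Lomb-Scargle periodogram applied to an irregularly sampled synthetic series is sketched below using SciPy; the smoothing and the permutation-based confidence levels described above would be applied on top of this. The synthetic 23-unit cycle is an arbitrary illustration.

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(1)

        # Irregularly sampled record: a 23-unit cycle observed at random times
        t = np.sort(rng.uniform(0, 500, size=300))
        y = np.sin(2 * np.pi * t / 23.0) + 0.5 * rng.normal(size=t.size)

        # Angular frequencies to scan and the raw Lomb-Scargle periodogram
        periods = np.linspace(5, 100, 2000)
        omega = 2 * np.pi / periods
        power = lombscargle(t, y - y.mean(), omega)

        print("strongest period:", round(periods[np.argmax(power)], 1))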

  18. A home-based body weight supported treadmill training program for children with cerebral palsy: A case series.

    Science.gov (United States)

    Kenyon, Lisa K; Westman, Marci; Hefferan, Ashley; McCrary, Peter; Baker, Barbara J

    2017-07-01

    Contemporary approaches to the treatment of cerebral palsy (CP) advocate a task-specific approach that emphasizes repetition and practice of specific tasks. Recent studies suggest that body-weight-supported treadmill training (BWSTT) programs may be beneficial in clinical settings. The purposes of this case series were to explore the outcomes and feasibility of a home-based BWSTT program for three children with CP. Three children with CP at Gross Motor Function Classification System (GMFCS) Levels III or IV participated in this case series. Examination included the Functional Assessment Questionnaire (FAQ), the 10-meter walk test, the Gross Motor Function Measure (GMFM-66), and the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test (PEDI-CAT). A harness system was used to conduct the BWSTT program over an 8-12 week period. All of the families reported enjoying the BWSTT program and found the harness easy to use. Participant 2 increased from a 2 to a 4 on the FAQ, while Participant 3 increased from a 6 to a 7. Two of the participants demonstrated post-intervention improvements in functional mobility. In addition to mobility outcomes, future research should explore the potential health benefits of a home-based BWSTT program.

  19. Improved mesh based photon sampling techniques for neutron activation analysis

    International Nuclear Information System (INIS)

    Relson, E.; Wilson, P. P. H.; Biondo, E. D.

    2013-01-01

    The design of fusion power systems requires analysis of neutron activation of large, complex volumes, and of the resulting particles emitted from these volumes. Structured mesh-based discretization allows for improved modeling in these activation analysis problems. Finer discretization, however, results in large computational costs, which drives the investigation of more efficient methods. Within an ad hoc subroutine of the Monte Carlo transport code MCNP, we implement sampling of voxels and photon energies for volumetric sources using the alias method. The alias method enables efficient sampling of a discrete probability distribution, and operates in O(1) time, whereas the simpler direct discrete method requires O(log(n)) time. By using the alias method, voxel sampling becomes a viable alternative to the O(1) approach of uniformly sampling the problem volume. Additionally, with voxel sampling it is straightforward to introduce biasing of volumetric sources, and we implement this biasing of voxels as an additional variance reduction technique. We verify our implementation and compare the alias method, with and without biasing, to direct discrete sampling of voxels and to uniform sampling. We study the behavior of source biasing in a second set of tests and find trends between improvements and source shape, material, and material density. Overall, however, the magnitude of the improvements from source biasing appears to be limited. Future work will benefit from the implementation of efficient voxel sampling - particularly with conformal unstructured meshes, where the uniform sampling approach cannot be applied. (authors)
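
    A compact sketch of the alias method itself (the Vose variant, with O(n) setup and O(1) draws) is given below in Python; in the paper this sampling is implemented inside an MCNP subroutine, so the code is only an illustration of the technique, with an arbitrary weight vector standing in for voxel emission probabilities.

        import numpy as np

        def build_alias(probs):
            """Vose's alias method: O(n) setup for O(1) sampling from a discrete
            distribution (e.g. source voxels weighted by photon emission)."""
            n = len(probs)
            prob = np.array(probs, dtype=float) * n / np.sum(probs)
            alias = np.zeros(n, dtype=int)
            small = [i for i, p in enumerate(prob) if p < 1.0]
            large = [i for i, p in enumerate(prob) if p >= 1.0]
            while small and large:
                s, l = small.pop(), large.pop()
                alias[s] = l
                prob[l] -= 1.0 - prob[s]
                (small if prob[l] < 1.0 else large).append(l)
            while large:
                prob[large.pop()] = 1.0
            while small:
                prob[small.pop()] = 1.0
            return prob, alias

        def sample(prob, alias, rng):
            """Draw one index in O(1): pick a column, then keep it or its alias."""
            i = rng.integers(len(prob))
            return i if rng.random() < prob[i] else alias[i]

        rng = np.random.default_rng(0)
        prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
        draws = [sample(prob, alias, rng) for _ in range(100000)]
        print(np.bincount(draws) / len(draws))   # ~ [0.1, 0.2, 0.3, 0.4]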

  20. Accurate determination of rates from non-uniformly sampled relaxation data

    Energy Technology Data Exchange (ETDEWEB)

    Stetz, Matthew A.; Wand, A. Joshua, E-mail: wand@upenn.edu [University of Pennsylvania Perelman School of Medicine, Johnson Research Foundation and Department of Biochemistry and Biophysics (United States)

    2016-08-15

    The application of non-uniform sampling (NUS) to relaxation experiments traditionally used to characterize the fast internal motion of proteins is quantitatively examined. Experimentally acquired Poisson-gap sampled data reconstructed with iterative soft thresholding are compared to regular sequentially sampled (RSS) data. Using ubiquitin as a model system, it is shown that 25 % sampling is sufficient for the determination of quantitatively accurate relaxation rates. When the sampling density is fixed at 25 %, the accuracy of rates is shown to increase sharply with the total number of sampled points until eventually converging near the inherent reproducibility of the experiment. Perhaps contrary to some expectations, it is found that accurate peak height reconstruction is not required for the determination of accurate rates. Instead, inaccuracies in rates arise from inconsistencies in reconstruction across the relaxation series that primarily manifest as a non-linearity in the recovered peak height. This indicates that the performance of an NUS relaxation experiment cannot be predicted from comparison of peak heights using a single RSS reference spectrum. The generality of these findings was assessed using three alternative reconstruction algorithms, eight different relaxation measurements, and three additional proteins that exhibit varying degrees of spectral complexity. From these data, it is revealed that non-linearity in peak height reconstruction across the relaxation series is strongly correlated with errors in NUS-derived relaxation rates. Importantly, it is shown that this correlation can be exploited to reliably predict the performance of an NUS-relaxation experiment by using three or more RSS reference planes from the relaxation series. The RSS reference time points can also serve to provide estimates of the uncertainty of the sampled intensity, which for a typical relaxation time series incurs no penalty in total acquisition time.
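
    The quantitative step on which the accuracy ultimately rests - fitting an exponential decay to the peak heights recovered across the relaxation series - can be sketched as follows; the delays and intensities are hypothetical numbers, not data from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def decay(t, amplitude, rate):
            return amplitude * np.exp(-rate * t)

        # Hypothetical peak heights of one resonance across the relaxation series
        relaxation_delays = np.array([0.01, 0.03, 0.05, 0.09, 0.13, 0.17, 0.25])  # s
        peak_heights = np.array([0.95, 0.82, 0.71, 0.54, 0.41, 0.32, 0.19])

        popt, pcov = curve_fit(decay, relaxation_delays, peak_heights,
                               p0=(1.0, 10.0))
        rate, rate_err = popt[1], np.sqrt(pcov[1, 1])
        print(f"R = {rate:.2f} +/- {rate_err:.2f} s^-1")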

  1. Time-Scale and Time-Frequency Analyses of Irregularly Sampled Astronomical Time Series

    Directory of Open Access Journals (Sweden)

    S. Roques

    2005-09-01

    Full Text Available We evaluate the quality of spectral restoration in the case of irregularly sampled signals in astronomy. We study in detail a time-scale method leading to a global wavelet spectrum comparable to the Fourier spectrum, and a time-frequency matching pursuit allowing us to identify the frequencies and to control the error propagation. In both cases, the signals are first resampled with a linear interpolation. Both results are compared with those obtained using Lomb's periodogram and using the weighted wavelet Z-transform developed in astronomy for unevenly sampled variable star observations. These approaches are applied to simulations and to the light variations of four variable stars. This leads to the conclusion that the matching pursuit is more efficient for recovering the spectral contents of a pulsating star, even with a preliminary resampling. In particular, the results are almost independent of the quality of the initial irregular sampling.

  2. Series expansion in fractional calculus and fractional differential equations

    OpenAIRE

    Li, Ming-Fan; Ren, Ji-Rong; Zhu, Tao

    2009-01-01

    Fractional calculus is the calculus of differentiation and integration of non-integer orders. In a recent paper (Annals of Physics 323 (2008) 2756-2778), the Fundamental Theorem of Fractional Calculus is highlighted. Based on this theorem, in this paper we introduce a fractional series expansion method for fractional calculus. We define a kind of fractional Taylor series for an infinitely fractionally-differentiable function. Further, based on our definition we generalize hypergeometric functio...
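
    Since the record is truncated before the definition, the following is only one commonly quoted form of a fractional Taylor expansion (for 0 < α ≤ 1), shown to make the idea concrete; the authors' exact definition may differ.

        % One commonly quoted fractional Taylor expansion (0 < \alpha \le 1),
        % shown only as an illustration; the truncated record does not give
        % the authors' exact definition.
        f(x) \;=\; \sum_{k=0}^{\infty}
              \frac{(x-a)^{k\alpha}}{\Gamma(k\alpha + 1)}
              \bigl(D_a^{\alpha}\bigr)^{k} f(a),
        \qquad
        \text{which reduces to }
        \sum_{k=0}^{\infty}\frac{(x-a)^{k}}{k!}\, f^{(k)}(a)
        \text{ when } \alpha = 1 .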

  3. Time series prediction of apple scab using meteorological ...

    African Journals Online (AJOL)

    A new prediction model for the early warning of apple scab is proposed in this study. The method is based on artificial intelligence and time series prediction. The infection period of apple scab was evaluated as the time series prediction model instead of summation of wetness duration. Also, the relations of different ...

  4. An evaluation of sampling and full enumeration strategies for Fisher Jenks classification in big data settings

    Science.gov (United States)

    Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.

    2017-01-01

    Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. The impacts of spatial autocorrelation, number of desired classes, and form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.

  5. Problems with sampling desert tortoises: A simulation analysis based on field data

    Science.gov (United States)

    Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.

    2005-01-01

    The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990 based largely on population declines inferred from mark-recapture surveys of 2.59-km2 (1-mi2) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from 2 1-mi2 plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimate and program CAPTURE. Experimental subsampling with replacement of the 1-mi2 data using 1-km2 and 0.25-km2 plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi2 plots. We also tested distance sampling by saturating a 1-mi2 site with computer-simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km2 plots did not differ significantly from the estimates derived at 1-mi2. The 0.25-km2 subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high coefficient-of-variation-to-density ratios. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that poor performance and bias of both sampling procedures was driven by insufficient sample size, suggesting that all efforts must be directed to increasing numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.

  6. Volterra-series-based nonlinear system modeling and its engineering applications: A state-of-the-art review

    Science.gov (United States)

    Cheng, C. M.; Peng, Z. K.; Zhang, W. M.; Meng, G.

    2017-03-01

    Nonlinear problems have drawn great interest and extensive attention from engineers, physicists, mathematicians and many other scientists because most real systems are inherently nonlinear in nature. To model and analyze nonlinear systems, many mathematical theories and methods have been developed, including the Volterra series. In this paper, the basic definition of the Volterra series is recapitulated, together with some frequency domain concepts which are derived from the Volterra series, including the general frequency response function (GFRF), the nonlinear output frequency response function (NOFRF), the output frequency response function (OFRF) and the associated frequency response function (AFRF). The relationship between the Volterra series and other nonlinear system models and nonlinear problem-solving methods is discussed, including the Taylor series, Wiener series, NARMAX model, Hammerstein model, Wiener model, Wiener-Hammerstein model, harmonic balance method, perturbation method and Adomian decomposition. The challenging problems and the state of the art in the study of series convergence and kernel identification are comprehensively introduced. In addition, a detailed review is given on the applications of the Volterra series in mechanical engineering, aeroelasticity problems, control engineering, and electronic and electrical engineering.
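
    For reference, the time-domain Volterra series and the generalized frequency response functions (GFRFs) mentioned above take the following standard form.

        % Volterra series of a (time-invariant) nonlinear system
        y(t) = h_0 + \sum_{n=1}^{N}
               \int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty}
               h_n(\tau_1,\ldots,\tau_n)\,\prod_{i=1}^{n} x(t-\tau_i)\, d\tau_i ,
        % and the n-th order GFRF is the n-dimensional Fourier transform of the kernel:
        H_n(j\omega_1,\ldots,j\omega_n) =
               \int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty}
               h_n(\tau_1,\ldots,\tau_n)\,
               e^{-j(\omega_1\tau_1+\cdots+\omega_n\tau_n)}\, d\tau_1\cdots d\tau_n .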

  7. Linking the Negative Binomial and Logarithmic Series Distributions via their Associated Series

    OpenAIRE

    SADINLE, MAURICIO

    2008-01-01

    The negative binomial distribution is associated to the series obtained by taking derivatives of the logarithmic series. Conversely, the logarithmic series distribution is associated to the series found by integrating the series associated to the negative binomial distribution. The parameter of the number of failures of the negative binomial distribution is the number of derivatives needed to obtain the negative binomial series from the logarithmic series. The reasoning in this article could ...
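
    The link described above can be written out explicitly; with 0 < q < 1, repeated differentiation of the logarithmic series generates the negative binomial series, and normalizing the terms of each series gives the corresponding probability mass function.

        % Logarithmic series and its r-th derivative (the negative binomial series)
        -\ln(1-q) = \sum_{k=1}^{\infty} \frac{q^{k}}{k},
        \qquad
        \frac{d^{r}}{dq^{r}}\bigl[-\ln(1-q)\bigr]
           = \frac{(r-1)!}{(1-q)^{r}}
           = (r-1)!\sum_{k=0}^{\infty} \binom{k+r-1}{k} q^{k}.
        % Normalizing the terms of each series gives the two distributions:
        P_{\mathrm{log}}(K=k) = \frac{q^{k}}{k\,[-\ln(1-q)]},
        \qquad
        P_{\mathrm{NB}}(K=k) = \binom{k+r-1}{k}(1-q)^{r} q^{k}.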

  8. Design-based estimators for snowball sampling

    OpenAIRE

    Shafie, Termeh

    2010-01-01

    Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...

  9. Combined ESR and U-series isochron dating of fossil tooth from Longgupo cave

    International Nuclear Information System (INIS)

    Han Fei; Yin Gongming; Liu Chunru; Jean-Jacques Bahain

    2012-01-01

    Background: In ESR and luminescence archaeological dating, the assessment of the external radiation dose rate is one of the constant sources of uncertainty, because it varied in the past and cannot be determined accurately from present-day measurements. Purpose: The ESR isochron protocol was proposed to address this uncertainty for tooth samples. This protocol is applicable wherever multiple samples with different internal doses have all experienced a common external dose. The variable uranium concentration of tooth samples makes it possible to plot the equivalent dose versus the internal dose rate of each sample, and the slope of the isochron line then gives the age. For isochron dating of teeth, combined ESR/U-series dating analysis must be done together with the isochron protocol. Methods: In this study, we applied the combined ESR/U-series isochron method to 5 tooth samples collected from immediately adjacent squares in layer C Ⅲ'6 of the Longgupo archaeological site, Chongqing, China. Combined ESR/U-series analysis with the in situ external dose rate shows recent uranium uptake in all the samples. Results: The time-averaged external dose rate was calculated iteratively by the isochron protocol, and gives an isochron age of 1.77±0.09 Ma for layer C Ⅲ'6, which is consistent with the mean US-ESR age of the 5 samples (1.64+0.16/-0.21 Ma) within the error range. The calculated time-averaged external dose rate (∼807 μGy/a) was basically in agreement with the gamma dose rate value measured in situ in 2006 (8.50 μGy/a), indicating that geochemical alterations may not have occurred or did not affect the environmental dose rate appreciably during the burial history. Conclusions: This study indicates the potential of solving both the internal and external dose rate problems of ESR dating of fossil teeth by combining U-series analysis with the isochron protocol. (authors)

  10. Scale-dependent intrinsic entropies of complex time series.

    Science.gov (United States)

    Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E

    2016-04-13

    Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
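
    A minimal sketch of the classical MSE building blocks - coarse-graining followed by sample entropy - is given below; the paper's contribution (empirical mode decomposition detrending before the entropy step) would replace the simple coarse-graining, so this is only the baseline procedure, with illustrative parameter choices.

        import numpy as np

        def sample_entropy(x, m=2, r=0.2):
            """SampEn(m, r): r is an absolute tolerance; plain O(N^2) version."""
            x = np.asarray(x, dtype=float)
            def count_matches(mm):
                templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
                count = 0
                for i in range(len(templates)):
                    d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                    count += np.sum(d <= r)
                return count
            b, a = count_matches(m), count_matches(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf

        def coarse_grain(x, scale):
            """Non-overlapping averages of length `scale` (classical MSE step)."""
            n = (len(x) // scale) * scale
            return np.asarray(x[:n]).reshape(-1, scale).mean(axis=1)

        rng = np.random.default_rng(0)
        white = rng.normal(size=2000)
        r = 0.15 * white.std()            # tolerance fixed from the original series
        mse_curve = [sample_entropy(coarse_grain(white, s), m=2, r=r)
                     for s in range(1, 6)]
        print(np.round(mse_curve, 2))     # for white noise, entropy falls with scale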

  11. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    Science.gov (United States)

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.

  12. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    Science.gov (United States)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is associated with the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial conditions, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests the model with K+1 states (where K is the number of states of the best model) when its likelihood is close to that of the K-state model. Finally, an evaluation of the performance of GAMM, applied as a break-detection method in the homogenization of climate time series, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.

  13. The Earth Observation Monitor - Automated monitoring and alerting for spatial time-series data based on OGC web services

    Science.gov (United States)

    Eberle, J.; Hüttich, C.; Schmullius, C.

    2014-12-01

    Spatial time series data have been freely available around the globe from earth observation satellites and meteorological stations for many years. They provide useful and important information for detecting ongoing changes of the environment, but for end-users it is often too complex to extract this information from the original time series datasets. This issue led to the development of the Earth Observation Monitor (EOM), an operational framework and research project to provide simple access, analysis and monitoring tools for global spatial time series data. A multi-source data processing middleware in the backend is linked to MODIS data from the Land Processes Distributed Archive Center (LP DAAC) and Google Earth Engine, as well as daily climate station data from the NOAA National Climatic Data Center. OGC Web Processing Services are used to integrate datasets from linked data providers or external OGC-compliant interfaces to the EOM. Users can either use the web portal (webEOM) or the mobile application (mobileEOM) to execute these processing services and to retrieve the requested data for a given point or polygon in user-friendly file formats (CSV, GeoTiff). Besides providing data access tools, users can also perform further time series analyses such as trend calculations, breakpoint detections or the derivation of phenological parameters from vegetation time series data. Furthermore, data from climate stations can be aggregated over a given time interval. Calculated results can be visualized in the client and downloaded for offline usage. Automated monitoring and alerting of the time series data integrated by the user is provided by an OGC Sensor Observation Service with a coupled OGC Web Notification Service. Users can decide which datasets and parameters are monitored with a given filter expression (e.g., precipitation value higher than x millimeter per day, occurrence of a MODIS Fire point, detection of a time series anomaly). Datasets integrated in the SOS service are

  14. Comparing and Contrasting Traditional Membrane Bioreactor Models with Novel Ones Based on Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Parneet Paul

    2013-02-01

    Full Text Available The computer modelling and simulation of wastewater treatment plants and their specific technologies, such as membrane bioreactors (MBRs), are becoming increasingly useful to consultant engineers when designing, upgrading, retrofitting, operating and controlling these plants. This research uses traditional phenomenological mechanistic models based on MBR filtration and biochemical processes to measure the effectiveness of alternative and novel time series models based upon input–output system identification methods. Both model types are calibrated and validated using similar plant layouts and data sets derived for this purpose. Results prove that although both approaches have their advantages, they also have specific disadvantages. In conclusion, the MBR plant designer and/or operator who wishes to use good-quality, calibrated models to gain a better understanding of their process should carefully consider which model type to select based upon their initial modelling objectives. Each situation usually proves unique.

  15. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  16. Development of Stronger and More Reliable Cast Austenitic Stainless Steels (H-Series) Based on Scientific Design Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Muralidharan, G.; Sikka, V.K.; Pankiw, R.I.

    2006-04-15

    The goal of this program was to increase the high-temperature strength of the H-Series of cast austenitic stainless steels by 50% and upper use temperature by 86 to 140 F (30 to 60 C). Meeting this goal is expected to result in energy savings of 38 trillion Btu/year by 2020 and energy cost savings of $185 million/year. The higher strength H-Series of cast stainless steels (HK and HP type) have applications for the production of ethylene in the chemical industry, for radiant burner tubes and transfer rolls for secondary processing of steel in the steel industry, and for many applications in the heat-treating industry. The project was led by Duraloy Technologies, Inc. with research participation by the Oak Ridge National Laboratory (ORNL) and industrial participation by a diverse group of companies. Energy Industries of Ohio (EIO) was also a partner in this project. Each team partner had well-defined roles. Duraloy Technologies led the team by identifying the base alloys that were to be improved from this research. Duraloy Technologies also provided an extensive creep data base on current alloys, provided creep-tested specimens of certain commercial alloys, and carried out centrifugal casting and component fabrication of newly designed alloys. Nucor Steel was the first partner company that installed the radiant burner tube assembly in their heat-treating furnace. Other steel companies participated in project review meetings and are currently working with Duraloy Technologies to obtain components of the new alloys. EIO is promoting the enhanced performance of the newly designed alloys to Ohio-based companies. The Timken Company is one of the Ohio companies being promoted by EIO. The project management and coordination plan is shown in Fig. 1.1. A related project at University of Texas-Arlington (UT-A) is described in Development of Semi-Stochastic Algorithm for Optimizing Alloy Composition of High-Temperature Austenitic Stainless Steels (H-Series) for Desired

  17. National Clearinghouse for Drug Abuse Information Selected Reference Series, Series 4, No. 1.

    Science.gov (United States)

    National Inst. on Drug Abuse (DHEW/PHS), Rockville, MD. National Clearinghouse for Drug Abuse Information.

    This bibliography, which attempts to gather the significant research on the reproductive effects of the drugs of abuse, is one in a series prepared by the National Clearinghouse for Drug Abuse Information on subjects of topical interest. Selection of literature is based on its currency, its significance in the field, and its availability in local…

  18. Magnetic Field Emission Comparison for Series-Parallel and Series-Series Wireless Power Transfer to Vehicles – PART 1/2

    DEFF Research Database (Denmark)

    Batra, Tushar; Schaltz, Erik

    2014-01-01

    Resonant circuits of a wireless power transfer system can be designed in four possible ways by placing the primary and secondary capacitors in series or parallel with the corresponding inductor. The two topologies series-parallel and series-series under investigation have been...... already compared in terms of their output behavior (current or voltage source) and reflection of the secondary impedance on the primary side. In this paper it is shown that, for the same power rating, the series-parallel topology emits lower magnetic fields to the surroundings than its series...

  19. Profiling of adrenocorticotropic hormone and arginine vasopressin in human pituitary gland and tumor thin tissue sections using droplet-based liquid-microjunction surface-sampling-HPLC-ESI-MS-MS.

    Science.gov (United States)

    Kertesz, Vilmos; Calligaris, David; Feldman, Daniel R; Changelian, Armen; Laws, Edward R; Santagata, Sandro; Agar, Nathalie Y R; Van Berkel, Gary J

    2015-08-01

    Described here are the results from the profiling of the proteins arginine vasopressin (AVP) and adrenocorticotropic hormone (ACTH) from normal human pituitary gland and pituitary adenoma tissue sections, using a fully automated droplet-based liquid-microjunction surface-sampling-HPLC-ESI-MS-MS system for spatially resolved sampling, HPLC separation, and mass spectrometric detection. Excellent correlation was found between the protein distribution data obtained with this method and data obtained with matrix-assisted laser desorption/ionization (MALDI) chemical imaging analyses of serial sections of the same tissue. The protein distributions correlated with the visible anatomic pattern of the pituitary gland. AVP was most abundant in the posterior pituitary gland region (neurohypophysis), and ACTH was dominant in the anterior pituitary gland region (adenohypophysis). The relative amounts of AVP and ACTH sampled from a series of ACTH-secreting and non-secreting pituitary adenomas correlated with histopathological evaluation. ACTH was readily detected at significantly higher levels in regions of ACTH-secreting adenomas and in normal anterior adenohypophysis compared with non-secreting adenoma and neurohypophysis. AVP was mostly detected in normal neurohypophysis, as expected. This work reveals that a fully automated droplet-based liquid-microjunction surface-sampling system coupled to HPLC-ESI-MS-MS can be readily used for spatially resolved sampling, separation, detection, and semi-quantitation of physiologically-relevant peptide and protein hormones, including AVP and ACTH, directly from human tissue. In addition, the relative simplicity, rapidity, and specificity of this method support the potential of this basic technology, with further advancement, for assisting surgical decision-making. Graphical Abstract: Mass-spectrometry-based profiling of hormones in human pituitary gland and tumor thin tissue sections.

  20. The role of graphene-based sorbents in modern sample preparation techniques.

    Science.gov (United States)

    de Toffoli, Ana Lúcia; Maciel, Edvaldo Vasconcelos Soares; Fumes, Bruno Henrique; Lanças, Fernando Mauro

    2018-01-01

    The application of graphene-based sorbents in sample preparation techniques has increased significantly since 2011. These materials have good physicochemical properties for use as sorbents and have shown excellent results in different sample preparation techniques. Graphene and its precursor graphene oxide have been considered good candidates to improve the extraction and concentration of different classes of target compounds (e.g., parabens, polycyclic aromatic hydrocarbons, pyrethroids, triazines, and so on) present in complex matrices. They have been employed in the analysis of different matrices (e.g., environmental, biological and food). In this review, we highlight the most important characteristics of graphene-based materials, their properties, synthesis routes, and the most important applications in both off-line and on-line sample preparation techniques. The discussion of the off-line approaches includes methods derived from conventional solid-phase extraction, focusing on the miniaturized magnetic and dispersive modes. The microextraction techniques stir-bar sorptive extraction, solid-phase microextraction and microextraction by packed sorbent are also discussed. The on-line approaches focus on the use of graphene-based materials mainly in on-line solid-phase extraction, its variation called in-tube solid-phase microextraction, and on-line microdialysis systems. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Summation of series

    CERN Document Server

    Jolley, LB W

    2004-01-01

    Over 1,100 common series, all grouped for easy reference. Arranged by category, these series include arithmetical and geometrical progressions, powers and products of natural numbers, figurate and polygonal numbers, inverse natural numbers, exponential and logarithmic series, binomials, simple inverse products, factorials, trigonometrical and hyperbolic expansions, and additional series. 1961 edition.

  2. Reactivity Measurements On Burnt And Reference Fuel Samples In LWR-PROTEUS Phase II

    International Nuclear Information System (INIS)

    Murphy, M.; Jatuff, F.; Grimm, P.; Seiler, R.; Luethi, A.; Van Geemert, R.; Brogli, R.; Chawla, R.; Meier, G.; Berger, H.-D.

    2003-01-01

    During the year 2002, the PROTEUS research reactor was used to make a series of reactivity measurements on Pressurised Water Reactor (PWR) burnt fuel samples, and on a series of specially prepared standards. These investigations have been made in two different neutron spectra. In addition, the intrinsic neutron emissions of the burnt fuel samples have been determined. (author)

  3. A non linear analysis of human gait time series based on multifractal analysis and cross correlations

    International Nuclear Information System (INIS)

    Munoz-Diosdado, A

    2005-01-01

    We analyzed databases with gait time series of adults and of persons with Parkinson, Huntington and amyotrophic lateral sclerosis (ALS) diseases. We obtained the staircase graphs of accumulated events, which can be bounded by a straight line whose slope can be used to distinguish between gait time series from healthy and ill persons. The global Hurst exponent of these series does not show clear tendencies; we suggest that this is because some gait time series have monofractal behavior and others have multifractal behavior, so they cannot be characterized with a single Hurst exponent. We calculated the multifractal spectra, obtained the spectra width and found that the spectra of the healthy young persons are almost monofractal. The spectra of ill persons are wider than the spectra of healthy persons. In opposition to the interbeat time series, where pathology implies loss of multifractality, in the gait time series multifractal behavior emerges with the pathology. Data were collected from healthy and ill subjects as they walked in a roughly circular path; they had sensors on both feet, so we have one time series for the left foot and another for the right foot. First, we analyzed these time series separately, and then we compared the results, both directly and with a cross-correlation analysis. We tried to find differences between the two time series that could be used as indicators of equilibrium problems

  4. A non linear analysis of human gait time series based on multifractal analysis and cross correlations

    Energy Technology Data Exchange (ETDEWEB)

    Munoz-Diosdado, A [Department of Mathematics, Unidad Profesional Interdisciplinaria de Biotecnologia, Instituto Politecnico Nacional, Av. Acueducto s/n, 07340, Mexico City (Mexico)

    2005-01-01

    We analyzed databases with gait time series of adults and of persons with Parkinson, Huntington and amyotrophic lateral sclerosis (ALS) diseases. We obtained the staircase graphs of accumulated events, which can be bounded by a straight line whose slope can be used to distinguish between gait time series from healthy and ill persons. The global Hurst exponent of these series does not show clear tendencies; we suggest that this is because some gait time series have monofractal behavior and others have multifractal behavior, so they cannot be characterized with a single Hurst exponent. We calculated the multifractal spectra, obtained the spectra width and found that the spectra of the healthy young persons are almost monofractal. The spectra of ill persons are wider than the spectra of healthy persons. In opposition to the interbeat time series, where pathology implies loss of multifractality, in the gait time series multifractal behavior emerges with the pathology. Data were collected from healthy and ill subjects as they walked in a roughly circular path; they had sensors on both feet, so we have one time series for the left foot and another for the right foot. First, we analyzed these time series separately, and then we compared the results, both directly and with a cross-correlation analysis. We tried to find differences between the two time series that could be used as indicators of equilibrium problems.

  5. Importance sampling large deviations in nonequilibrium steady states. I

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  6. Importance sampling large deviations in nonequilibrium steady states. I.

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  7. Taxation, regulation, and addiction: a demand function for cigarettes based on time-series evidence.

    Science.gov (United States)

    Keeler, T E; Hu, T W; Barnett, P G; Manning, W G

    1993-04-01

    This work analyzes the effects of prices, taxes, income, and anti-smoking regulations on the consumption of cigarettes in California (a 25-cent-per-pack state tax increase in 1989 enhances the usefulness of this exercise). Analysis is based on monthly time-series data for 1980 through 1990. Results show a price elasticity of demand for cigarettes in the short run of -0.3 to -0.5 at mean data values, and -0.5 to -0.6 in the long run. We find at least some support for two further hypotheses: that antismoking regulations reduce cigarette consumption, and that consumers behave consistently with the model of rational addiction.
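
    In a double-log (constant-elasticity) demand specification the price coefficient is read directly as the elasticity; the sketch below shows that step on synthetic monthly data with a built-in elasticity of -0.4, and omits the tax, regulation and lagged-consumption terms that a time-series addiction model of the kind described above would include. All variables and numbers are illustrative.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)

        # Illustrative monthly data: log consumption responds to log real price
        # with a true elasticity of -0.4 plus noise (stand-in for the CA series).
        months = 132
        log_price = np.log(1.2 + 0.3 * rng.random(months))
        log_income = np.log(100 + 20 * rng.random(months))
        log_sales = (5.0 - 0.4 * log_price + 0.2 * log_income
                     + 0.05 * rng.normal(size=months))

        X = sm.add_constant(np.column_stack([log_price, log_income]))
        ols = sm.OLS(log_sales, X).fit()
        print(ols.params[1])   # estimated price elasticity, close to -0.4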

  8. Updating stand-level forest inventories using airborne laser scanning and Landsat time series data

    Science.gov (United States)

    Bolton, Douglas K.; White, Joanne C.; Wulder, Michael A.; Coops, Nicholas C.; Hermosilla, Txomin; Yuan, Xiaoping

    2018-04-01

    Vertical forest structure can be mapped over large areas by combining samples of airborne laser scanning (ALS) data with wall-to-wall spatial data, such as Landsat imagery. Here, we use samples of ALS data and Landsat time-series metrics to produce estimates of top height, basal area, and net stem volume for two timber supply areas near Kamloops, British Columbia, Canada, using an imputation approach. Both single-year and time series metrics were calculated from annual, gap-free Landsat reflectance composites representing 1984-2014. Metrics included long-term means of vegetation indices, as well as measures of the variance and slope of the indices through time. Terrain metrics, generated from a 30 m digital elevation model, were also included as predictors. We found that imputation models improved with the inclusion of Landsat time series metrics when compared to single-year Landsat metrics (relative RMSE decreased from 22.8% to 16.5% for top height, from 32.1% to 23.3% for basal area, and from 45.6% to 34.1% for net stem volume). Landsat metrics that characterized 30-years of stand history resulted in more accurate models (for all three structural attributes) than Landsat metrics that characterized only the most recent 10 or 20 years of stand history. To test model transferability, we compared imputed attributes against ALS-based estimates in nearby forest blocks (>150,000 ha) that were not included in model training or testing. Landsat-imputed attributes correlated strongly to ALS-based estimates in these blocks (R2 = 0.62 and relative RMSE = 13.1% for top height, R2 = 0.75 and relative RMSE = 17.8% for basal area, and R2 = 0.67 and relative RMSE = 26.5% for net stem volume), indicating model transferability. These findings suggest that in areas containing spatially-limited ALS data acquisitions, imputation models, and Landsat time series and terrain metrics can be effectively used to produce wall-to-wall estimates of key inventory attributes, providing an
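
    The record does not name the specific imputation algorithm, so the sketch below uses a k-nearest-neighbour regressor as a generic stand-in: cells with ALS-derived top height are used to train on Landsat time-series and terrain metrics, and the model is then applied to cells without ALS coverage. All variables and numbers are synthetic.

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Hypothetical predictor table: long-term mean NDVI, NDVI trend (slope),
        # variance of a disturbance index, elevation and slope for sampled cells
        n_cells = 2000
        X = rng.normal(size=(n_cells, 5))
        # Hypothetical response: ALS-derived top height (m), loosely tied to X
        top_height = 20 + 4 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=2, size=n_cells)

        X_train, X_test, y_train, y_test = train_test_split(X, top_height,
                                                            random_state=0)
        knn = KNeighborsRegressor(n_neighbors=5).fit(X_train, y_train)
        pred = knn.predict(X_test)

        rel_rmse = np.sqrt(np.mean((pred - y_test) ** 2)) / y_test.mean() * 100
        print(f"relative RMSE: {rel_rmse:.1f}%")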

  9. Pore formation during dehydration of a polycrystalline gypsum sample observed and quantified in a time-series synchrotron X-ray micro-tomography experiment

    Directory of Open Access Journals (Sweden)

    F. Fusseis

    2012-03-01

    Full Text Available We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra Gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min to acquire a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of 2.2 μm³ proved sufficient to pinpoint reaction initiation and the organization of drainage architecture in space and time.

    We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample in more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process.

    Novel parallelized computer codes allow quantifying the geometry of the porosity and the drainage architecture from the very large tomographic datasets (2048³ voxels) in unprecedented detail. We determined position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With on-going dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway.

    Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then, the internal stresses are released and dehydration happens efficiently, resulting in new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.

  10. Pore formation during dehydration of a polycrystalline gypsum sample observed and quantified in a time-series synchrotron X-ray micro-tomography experiment

    Science.gov (United States)

    Fusseis, F.; Schrank, C.; Liu, J.; Karrech, A.; Llana-Fúnez, S.; Xiao, X.; Regenauer-Lieb, K.

    2012-03-01

    We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra Gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min to acquire a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of 2.2 μm³ proved sufficient to pinpoint reaction initiation and the organization of drainage architecture in space and time. We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample in more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process. Novel parallelized computer codes allow quantifying the geometry of the porosity and the drainage architecture from the very large tomographic datasets (2048³ voxels) in unprecedented detail. We determined position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With on-going dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway. Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then, the internal stresses are released and dehydration happens efficiently, resulting in new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.

  11. Test plan for Series 2 spent fuel cladding containment credit tests

    International Nuclear Information System (INIS)

    Wilson, C.N.

    1984-10-01

    This test plan describes a second series of tests to be conducted by Westinghouse Hanford Company (WHC) to evaluate the effectiveness of breached cladding as a barrier to radionuclide release in the NNWSI-proposed geologic repository. These tests will be conducted at the Hanford Engineering Development Laboratory (HEDL). A first series of tests, initiated at HEDL during FY 1983, demonstrated specimen preparation and feasibility of the testing concept. The second series tests will be similar to the Series 1 tests with the following exceptions: NNWSI reference groundwater obtained from well J-13 will be used as the leachant instead of deionized water; fuel from a second source will be used; and certain refinements will be made in specimen preparation, sampling, and analytical procedures. 12 references, 5 figures, 5 tables

  12. Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF

    Energy Technology Data Exchange (ETDEWEB)

    Baz-Lomba, J.A., E-mail: jba@niva.no [Norwegian Institute for Water Research, Gaustadalléen 21, NO-0349, Oslo (Norway); Faculty of Medicine, University of Oslo, PO box 1078 Blindern, 0316, Oslo (Norway); Reid, Malcolm J.; Thomas, Kevin V. [Norwegian Institute for Water Research, Gaustadalléen 21, NO-0349, Oslo (Norway)

    2016-03-31

    The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches in monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method, using Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MSᵉ. Quantification and matrix-effect correction were achieved with the use of 25 isotopically labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4–187 ng L⁻¹). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge since their market is highly dynamic with new compounds continuously entering the market. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criterion of 5 ppm (of the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Applying MSᵉ data against a suspect screening database of over 1000 drugs and metabolites, this method becomes a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types: influent wastewater, passive sampler extracts and pooled urine samples, resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals. - Highlights: • A novel reiterative workflow

  13. Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF

    International Nuclear Information System (INIS)

    Baz-Lomba, J.A.; Reid, Malcolm J.; Thomas, Kevin V.

    2016-01-01

    The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches in monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method, using Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MSᵉ. Quantification and matrix-effect correction were achieved with the use of 25 isotopically labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4–187 ng L⁻¹). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge since their market is highly dynamic with new compounds continuously entering the market. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criterion of 5 ppm (of the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Applying MSᵉ data against a suspect screening database of over 1000 drugs and metabolites, this method becomes a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types: influent wastewater, passive sampler extracts and pooled urine samples, resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals. - Highlights: • A novel reiterative workflow based on three

  14. A cluster merging method for time series microarray with production values.

    Science.gov (United States)

    Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio

    2014-09-01

    A challenging task in time-course microarray data analysis is to cluster genes meaningfully by combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups with highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured at the same time points) and merging them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevant measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of time series and the same shape-based algorithm.
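
    A minimal sketch of the co-occurrence-based merging idea: count how often two genes share a cluster across the replicate clusterings, turn that agreement into a distance, and merge with hierarchical clustering. The function name, average-linkage choice, and threshold are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def merge_replicate_clusterings(label_sets, distance_threshold=0.5):
    """label_sets: list of 1-D integer arrays, one clustering per biological replicate,
    each assigning every gene to a cluster. Genes frequently grouped together across
    replicates end up in the same merged cluster."""
    label_sets = [np.asarray(l) for l in label_sets]
    n_genes = label_sets[0].size
    # Co-occurrence frequency: fraction of replicates in which two genes share a cluster.
    co = np.zeros((n_genes, n_genes))
    for labels in label_sets:
        co += (labels[:, None] == labels[None, :]).astype(float)
    co /= len(label_sets)
    # Turn agreement into a distance and merge with average-linkage hierarchical clustering.
    dist = 1.0 - co
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=distance_threshold, criterion="distance")

# Toy example: three replicate clusterings that mostly agree on two gene groups.
reps = [np.array([0, 0, 0, 1, 1, 1]),
        np.array([2, 2, 2, 3, 3, 3]),
        np.array([0, 0, 1, 1, 1, 1])]
print(merge_replicate_clusterings(reps))
```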

  15. Non-linear forecasting in high-frequency financial time series

    Science.gov (United States)

    Strozzi, F.; Zaldívar, J. M.

    2005-08-01

    A new methodology based on state space reconstruction techniques has been developed for trading in financial markets. The methodology has been tested using 18 high-frequency foreign exchange time series. The results are in apparent contradiction with the efficient market hypothesis, which states that no profitable information about future movements can be obtained by studying past price series. In our (off-line) analysis positive gain may be obtained in all those series. The trading methodology is quite general and may be adapted to other financial time series. Finally, the steps for its on-line application are discussed.
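
    A minimal sketch of the state-space reconstruction idea behind such methods: embed the scalar series with time delays and forecast by averaging the successors of the nearest reconstructed states (the method of analogues). The embedding dimension, delay, and neighbour count are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Reconstruct the state space of a scalar series by time-delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def analog_forecast(x, dim=3, tau=1, k=5):
    """One-step-ahead prediction: find the k nearest reconstructed states to the current
    state and average the values that followed them."""
    emb = delay_embed(x, dim, tau)
    current, history = emb[-1], emb[:-1]
    targets = x[(dim - 1) * tau + 1 :]      # value following each historical state
    d = np.linalg.norm(history - current, axis=1)
    nearest = np.argsort(d)[:k]
    return targets[nearest].mean()

# Toy example on a noisy sine wave (a stand-in for a high-frequency FX series).
t = np.arange(2000)
x = np.sin(0.07 * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(analog_forecast(x, dim=4, tau=3, k=10), np.sin(0.07 * 2000))
```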

  16. Time series analytics using sliding window metaheuristic optimization-based machine learning system for identifying building energy consumption patterns

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Ngo, Ngoc-Tri

    2016-01-01

    Highlights: • This study develops a novel time-series sliding window forecast system. • The system integrates metaheuristics, machine learning and time-series models. • Site experiment of smart grid infrastructure is installed to retrieve real-time data. • The proposed system accurately predicts energy consumption in residential buildings. • The forecasting system can help users minimize their electricity usage. - Abstract: Smart grids are a promising solution to the rapidly growing power demand because they can considerably increase building energy efficiency. This study developed a novel time-series sliding window metaheuristic optimization-based machine learning system for predicting real-time building energy consumption data collected by a smart grid. The proposed system integrates a seasonal autoregressive integrated moving average (SARIMA) model and metaheuristic firefly algorithm-based least squares support vector regression (MetaFA-LSSVR) model. Specifically, the proposed system fits the SARIMA model to linear data components in the first stage, and the MetaFA-LSSVR model captures nonlinear data components in the second stage. Real-time data retrieved from an experimental smart grid installed in a building were used to evaluate the efficacy and effectiveness of the proposed system. A k-week sliding window approach is proposed for employing historical data as input for the novel time-series forecasting system. The prediction system yielded high and reliable accuracy rates in 1-day-ahead predictions of building energy consumption, with a total error rate of 1.181% and mean absolute error of 0.026 kW h. Notably, the system demonstrates an improved accuracy rate in the range of 36.8–113.2% relative to those of the linear forecasting model (i.e., SARIMA) and nonlinear forecasting models (i.e., LSSVR and MetaFA-LSSVR). Therefore, end users can further apply the forecasted information to enhance efficiency of energy usage in their buildings, especially
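
    A much-simplified sketch of the sliding-window idea described above: refit a seasonal ARIMA model on the most recent window of observations and forecast one step ahead, sliding forward in time. It omits the MetaFA-LSSVR stage entirely; the statsmodels SARIMAX call, model orders, window length, and synthetic data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def sliding_window_forecast(y, window, order=(1, 0, 0), seasonal_order=(1, 0, 0, 24)):
    """Refit a SARIMA model on the most recent `window` observations and produce a
    1-step-ahead forecast, sliding forward one step at a time."""
    preds, idx = [], []
    for end in range(window, len(y)):
        history = y.iloc[end - window : end]
        fit = SARIMAX(history, order=order, seasonal_order=seasonal_order).fit(disp=False)
        preds.append(fit.forecast(steps=1).iloc[0])
        idx.append(y.index[end])
    return pd.Series(preds, index=idx)

# Toy hourly consumption series with a daily cycle (a stand-in for smart-grid data).
t = pd.date_range("2016-01-01", periods=24 * 9, freq="H")
y = pd.Series(1.0 + 0.3 * np.sin(2 * np.pi * np.arange(t.size) / 24)
              + 0.05 * np.random.default_rng(0).standard_normal(t.size), index=t)
pred = sliding_window_forecast(y, window=24 * 7)
print(pred.head())
```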

  17. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    Directory of Open Access Journals (Sweden)

    M.M. Mohie El-Din

    2011-10-01

    Full Text Available In this paper, two sample Bayesian prediction intervals for order statistics (OS) are obtained. This prediction is based on a certain class of the inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained in the two samples case. The class of the inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the loglogistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model such as the inverse exponential model and the inverse Rayleigh model are considered.
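
    For reference, the standard textbook parameterization of two members of this family mentioned in the abstract can be written as below; the inverse exponential and inverse Rayleigh models correspond to β = 1 and β = 2. This is the usual form and may differ in notation from the one used in the paper.

```latex
% Inverse Weibull CDF (usual parameterization); beta = 1 gives the inverse
% exponential model, beta = 2 the inverse Rayleigh model.
\[
  F(x \mid \alpha, \beta) \;=\; \exp\!\left[-\left(\frac{\alpha}{x}\right)^{\beta}\right],
  \qquad x > 0,\ \alpha > 0,\ \beta > 0 .
\]
```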

  18. Forecasting business cycle with chaotic time series based on neural network with weighted fuzzy membership functions

    International Nuclear Information System (INIS)

    Chai, Soo H.; Lim, Joon S.

    2016-01-01

    This study presents a forecasting model of cyclical fluctuations of the economy based on the time delay coordinate embedding method. The model uses a neuro-fuzzy network called neural network with weighted fuzzy membership functions (NEWFM). The preprocessed time series of the leading composite index using the time delay coordinate embedding method are used as input data to the NEWFM to forecast the business cycle. A comparative study is conducted using other methods based on wavelet transform and Principal Component Analysis for the performance comparison. The forecasting results are tested using a linear regression analysis to compare the approximation of the input data against the target class, gross domestic product (GDP). The chaos-based model captures nonlinear dynamics and interactions within the system, which the other two models ignore. The test results demonstrated that the chaos-based method significantly improved the prediction capability, thereby demonstrating superior performance to the other methods.

  19. Elucidating the association between the self-harm inventory and several borderline personality measures in an inpatient psychiatric sample.

    Science.gov (United States)

    Sellbom, Martin; Sansone, Randy A; Songer, Douglas A

    2017-09-01

    The current study evaluated the utility of the self-harm inventory (SHI) as a proxy for and screening measure of borderline personality disorder (BPD) using several Diagnostic and Statistical Manual of Mental Disorders (DSM)-based BPD measures as criteria. We used a sample of 145 psychiatric inpatients, who completed the SHI and a series of well-validated, DSM-based self-report measures of BPD. Using a series of latent trait and latent class analyses, we found that the SHI was substantially associated with a latent construct representing BPD, as well as differentiated latent classes of 'high' vs. 'low' BPD, with good accuracy. The SHI can serve as a proxy for and a good screening measure of BPD, but future research needs to replicate these findings using structured interview-based measurement of BPD.

  20. ESR dating of tooth enamel samples

    International Nuclear Information System (INIS)

    Chen Tiemei; Yang quan; Wu En

    1993-01-01

    Five tooth samples from the palaeoanthropological site of Jinniushan were dated with both electron-spin-resonance (ESR) and uranium-series techniques. The ESR age of about 230 ka is in good agreement with the U-series dating result, which confirms the hypothesis of possible coexistence of Homo erectus and Homo sapiens in China. Problems in ESR dating are discussed, such as: 1) the inappropriateness of simple exponential extrapolation for accumulated dose determination; 2) the experimental measurement of alpha detection efficiency and radon emanation; and 3) the selection of the U-uptake model

  1. Sample Entropy-Based Approach to Evaluate the Stability of Double-Wire Pulsed MIG Welding

    Directory of Open Access Journals (Sweden)

    Ping Yao

    2014-01-01

    Full Text Available Based on sample entropy, this paper presents a quantitative method to evaluate the current stability in double-wire pulsed MIG welding. Firstly, the sample entropy of current signals with different stability but the same parameters is calculated. The results show that the more stable the current, the smaller the value and the standard deviation of the sample entropy. Secondly, four parameters, which are pulse width, peak current, base current, and frequency, are selected for a four-level three-factor orthogonal experiment. The calculation and analysis of the desired signals indicate that sample entropy values are affected by the welding current parameters. Then, a quantitative method based on sample entropy is proposed. The experiment results show that the method can preferably quantify the welding current stability.
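
    A minimal sample-entropy implementation, to make the quantity concrete: a lower value indicates a more regular (more stable) signal. The tolerance r = 0.2·SD and embedding dimension m = 2 are conventional defaults, not necessarily the settings used in the paper, and the current traces below are synthetic stand-ins.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r): the negative log of the conditional probability that
    sequences matching for m points (within tolerance r, Chebyshev distance) also match
    for m + 1 points. Self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    N = len(x)

    def n_matching_pairs(length):
        # Templates of the given length starting at i = 0 .. N-m-1 (same count for m and m+1).
        templates = np.array([x[i : i + length] for i in range(N - m)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (np.count_nonzero(d <= r) - len(templates)) / 2.0   # drop self-matches

    B = n_matching_pairs(m)
    A = n_matching_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# A stable (periodic) current trace should give a smaller SampEn than an erratic one.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
stable = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(t.size)
erratic = rng.standard_normal(t.size)
print(sample_entropy(stable), sample_entropy(erratic))
```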

  2. Patch-based visual tracking with online representative sample selection

    Science.gov (United States)

    Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu

    2017-05-01

    Occlusion is one of the most challenging problems in visual object tracking. Recently, a lot of discriminative methods have been proposed to deal with this problem. For the discriminative methods, it is difficult to select the representative samples for the target template updating. In general, the holistic bounding boxes that contain tracked results are selected as the positive samples. However, when the objects are occluded, this simple strategy easily introduces the noises into the training data set and the target template and then leads the tracker to drift away from the target seriously. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Different from previous works, we divide the object and the candidates into several patches uniformly and propose a score function to calculate the score of each patch independently. Then, the average score is adopted to determine the optimal candidate. Finally, we utilize the non-negative least square method to find the representative samples, which are used to update the target template. The experimental results on the object tracking benchmark 2013 and on the 13 challenging sequences show that the proposed method is robust to the occlusion and achieves promising results.
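
    A minimal sketch of the representative-sample selection step using non-negative least squares: candidate patches become columns of a matrix, and the candidates receiving the largest non-negative weights in reconstructing the target are kept for template updating. The patch sizes, selection rule, and synthetic data are illustrative assumptions, not the authors' tracker.

```python
import numpy as np
from scipy.optimize import nnls

def select_representative_samples(candidates, target, top_k=5):
    """Find non-negative weights w >= 0 minimizing ||A w - target||_2, where the columns
    of A are flattened candidate patches. Candidates with the largest weights are treated
    as the representative samples used to update the target template."""
    A = np.column_stack([c.ravel() for c in candidates])
    w, residual = nnls(A, target.ravel())
    order = np.argsort(w)[::-1][:top_k]
    return order, w, residual

# Toy example with random "patches" (stand-ins for tracked image patches).
rng = np.random.default_rng(0)
candidates = [rng.random((8, 8)) for _ in range(20)]
target = 0.6 * candidates[3] + 0.4 * candidates[7]       # a mixture of two candidates
idx, w, res = select_representative_samples(candidates, target)
print(idx[:2], res)   # the two mixed-in candidates should dominate
```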

  3. Advanced data extraction infrastructure: Web based system for management of time series data

    Energy Technology Data Exchange (ETDEWEB)

    Chilingaryan, S; Beglarian, A [Forschungszentrum Karlsruhe, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Kopmann, A; Voecking, S, E-mail: Suren.Chilingaryan@kit.ed [University of Muenster, Institut fuer Kernphysik, Wilhelm-Klemm-Strasse 9, 48149 Münster (Germany)

    2010-04-01

    During operation of high energy physics experiments a big amount of slow control data is recorded. It is necessary to examine all collected data checking the integrity and validity of measurements. With growing maturity of AJAX technologies it becomes possible to construct sophisticated interfaces using web technologies only. Our solution for handling time series, generally slow control data, has a modular architecture: backend system for data analysis and preparation, a web service interface for data access and a fast AJAX web display. In order to provide fast interactive access the time series are aggregated over time slices of few predefined lengths. The aggregated values are stored in the temporary caching database and, then, are used to create generalizing data plots. These plots may include indication of data quality and are generated within few hundreds of milliseconds even if very high data rates are involved. The extensible export subsystem provides data in multiple formats including CSV, Excel, ROOT, and TDMS. The search engine can be used to find periods of time where indications of selected sensors are falling into the specified ranges. Utilization of the caching database allows performing most of such lookups within a second. Based on this functionality a web interface facilitating fast (Google-maps style) navigation through the data has been implemented. The solution is at the moment used by several slow control systems at Test Facility for Fusion Magnets (TOSKA) and Karlsruhe Tritium Neutrino (KATRIN).
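
    A toy sketch of the time-slice aggregation idea described above, written with pandas for brevity (the actual system uses a caching database): pre-compute aggregates over a few fixed slice lengths so that overview plots can be drawn without touching the raw samples. The channel data and slice lengths are illustrative assumptions.

```python
import numpy as np
import pandas as pd

# Toy slow-control channel sampled once per second (a stand-in for a sensor readout).
idx = pd.date_range("2010-01-01", periods=3600 * 6, freq="S")
raw = pd.Series(20 + np.cumsum(np.random.default_rng(0).normal(0, 0.01, idx.size)), index=idx)

# Pre-aggregate over a few fixed slice lengths; the aggregates would be cached and
# reused to render generalizing data plots quickly, as the abstract describes.
cache = {
    slice_len: raw.resample(slice_len).agg(["mean", "min", "max", "count"])
    for slice_len in ("1min", "10min", "1H")
}
print(cache["10min"].head())
```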

  4. Advanced data extraction infrastructure: Web based system for management of time series data

    International Nuclear Information System (INIS)

    Chilingaryan, S; Beglarian, A; Kopmann, A; Voecking, S

    2010-01-01

    During operation of high energy physics experiments a big amount of slow control data is recorded. It is necessary to examine all collected data checking the integrity and validity of measurements. With growing maturity of AJAX technologies it becomes possible to construct sophisticated interfaces using web technologies only. Our solution for handling time series, generally slow control data, has a modular architecture: backend system for data analysis and preparation, a web service interface for data access and a fast AJAX web display. In order to provide fast interactive access the time series are aggregated over time slices of few predefined lengths. The aggregated values are stored in the temporary caching database and, then, are used to create generalizing data plots. These plots may include indication of data quality and are generated within few hundreds of milliseconds even if very high data rates are involved. The extensible export subsystem provides data in multiple formats including CSV, Excel, ROOT, and TDMS. The search engine can be used to find periods of time where indications of selected sensors are falling into the specified ranges. Utilization of the caching database allows performing most of such lookups within a second. Based on this functionality a web interface facilitating fast (Google-maps style) navigation through the data has been implemented. The solution is at the moment used by several slow control systems at Test Facility for Fusion Magnets (TOSKA) and Karlsruhe Tritium Neutrino (KATRIN).

  5. Time Series Decomposition into Oscillation Components and Phase Estimation.

    Science.gov (United States)

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-02-01

    Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished by this model like the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes' method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
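
    A simplified sketch of a single oscillation component in such a state-space model: a damped 2-D rotation driven by Gaussian noise, filtered with a Kalman filter, with the phase read off the filtered state. It uses a fixed rotation frequency rather than the random frequency modulation described in the abstract, and all parameter values are illustrative assumptions.

```python
import numpy as np

def oscillator_kalman(y, freq, a=0.99, sigma_w=0.1, sigma_v=0.1):
    """Filter one stochastic-oscillator component with a linear Gaussian state-space model:
    the 2-D state rotates by `freq` radians per step with damping `a` and process noise,
    and the observation is its first coordinate. Returns filtered states and a phase."""
    c, s = np.cos(freq), np.sin(freq)
    F = a * np.array([[c, -s], [s, c]])
    Q = sigma_w**2 * np.eye(2)
    H = np.array([[1.0, 0.0]])
    R = np.array([[sigma_v**2]])
    x, P = np.zeros(2), np.eye(2)
    states = np.zeros((len(y), 2))
    for t, yt in enumerate(y):
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([yt]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        states[t] = x
    phase = np.arctan2(states[:, 1], states[:, 0])   # one common phase convention
    return states, phase

# Toy example: an oscillation of 0.2*pi rad/sample buried in observation noise.
rng = np.random.default_rng(0)
n = 500
y = np.cos(0.2 * np.pi * np.arange(n)) + 0.3 * rng.standard_normal(n)
states, phase = oscillator_kalman(y, freq=0.2 * np.pi)
print(phase[:5])
```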

  6. Time-series prediction of shellfish farm closure: A comparison of alternatives

    Directory of Open Access Journals (Sweden)

    Ashfaqur Rahman

    2014-08-01

    Full Text Available Shellfish farms are closed for harvest when microbial pollutants are present. Such pollutants are typically present in rainfall runoff from various land uses in catchments. Experts currently use a number of observable parameters (river flow, rainfall, salinity) as proxies to determine when to close farms. We have proposed using the short term historical rainfall data as a time-series prediction problem where we aim to predict the closure of shellfish farms based only on rainfall. Time-series event prediction consists of two steps: (i) feature extraction, and (ii) prediction. A number of data mining challenges exist for these scenarios: (i) which feature extraction method best captures the rainfall pattern over successive days that leads to opening or closure of the farms? (ii) The farm closure events occur infrequently and this leads to a class imbalance problem; the question is what is the best way to deal with this problem? In this paper we have analysed and compared different combinations of balancing methods (under-sampling and over-sampling), feature extraction methods (cluster profile, curve fitting, Fourier Transform, Piecewise Aggregate Approximation, and Wavelet Transform), and learning algorithms (neural network, support vector machine, k-nearest neighbour, decision tree, and Bayesian Network) to predict closure events accurately considering the above data mining challenges. We have identified the best combination of techniques to accurately predict shellfish farm closure from rainfall, given the above data mining challenges.
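
    A toy sketch of one such combination: lagged-rainfall features, random under-sampling of the majority (open) class, and a k-nearest-neighbour classifier, evaluated by sensitivity on the rare closure class. The synthetic rainfall, the closure rule, and the classifier choice are assumptions for illustration only, not the combination identified in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Toy daily rainfall series; closures are rare events triggered (here, artificially)
# by heavy rain over the preceding week -- a stand-in for the real farm records.
rain = rng.gamma(shape=0.4, scale=5.0, size=3000)
window = 7
X = np.array([rain[i - window:i] for i in range(window, len(rain))])   # last 7 days as features
y = (X.sum(axis=1) + rng.normal(0, 3, len(X)) > 35).astype(int)        # rare "closure" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# Random under-sampling of the majority (open) class to address the class imbalance.
pos = np.where(y_tr == 1)[0]
neg = rng.choice(np.where(y_tr == 0)[0], size=len(pos), replace=False)
keep = np.concatenate([pos, neg])
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr[keep], y_tr[keep])

# Report sensitivity on the minority class, the quantity that matters for closures.
pred = clf.predict(X_te)
print("closure rate:", y.mean(), "sensitivity:", (pred[y_te == 1] == 1).mean())
```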

  7. The MIDAS Touch: Mixed Data Sampling Regression Models

    OpenAIRE

    Ghysels, Eric; Santa-Clara, Pedro; Valkanov, Rossen

    2004-01-01

    We introduce Mixed Data Sampling (henceforth MIDAS) regression models. The regressions involve time series data sampled at different frequencies. Technically speaking MIDAS models specify conditional expectations as a distributed lag of regressors recorded at some higher sampling frequencies. We examine the asymptotic properties of MIDAS regression estimation and compare it with traditional distributed lag models. MIDAS regressions have wide applicability in macroeconomics and finance.
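
    A minimal sketch of a MIDAS regression with the exponential Almon weighting scheme commonly used in this literature: the high-frequency regressor is collapsed into a parametric weighted sum and the coefficients are found by nonlinear least squares. The lag length, weighting scheme, optimizer, and synthetic data are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag polynomial, a common MIDAS weighting scheme."""
    j = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()

def midas_fit(y, x_hf, n_lags=22):
    """Fit y_t = b0 + b1 * sum_j w_j(theta) * x_{t*m - j} by nonlinear least squares.
    y is low-frequency (e.g. monthly), x_hf high-frequency (m obs per low-frequency period);
    assumes n_lags <= m so the first period has enough high-frequency history."""
    m = len(x_hf) // len(y)

    def lagged_agg(theta):
        w = exp_almon_weights(theta[0], theta[1], n_lags)
        # Weight the n_lags most recent high-frequency observations, most recent first.
        return np.array([w @ x_hf[t * m - n_lags : t * m][::-1] for t in range(1, len(y) + 1)])

    def sse(params):
        b0, b1, t1, t2 = params
        return np.sum((y - b0 - b1 * lagged_agg((t1, t2))) ** 2)

    return minimize(sse, x0=[0.0, 1.0, 0.0, -0.05], method="Nelder-Mead").x

# Toy data: 60 "months" of a target driven by a weighted sum of the 22 most recent "days".
rng = np.random.default_rng(0)
x_hf = rng.standard_normal(60 * 22)
true_w = exp_almon_weights(0.1, -0.02, 22)
y = np.array([2.0 + 1.5 * (true_w @ x_hf[t * 22 - 22 : t * 22][::-1]) for t in range(1, 61)]) \
    + 0.1 * rng.standard_normal(60)
print(midas_fit(y, x_hf))   # roughly recovers b0=2.0, b1=1.5 and the weight parameters
```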

  8. Hierarchical Hidden Markov Models for Multivariate Integer-Valued Time-Series

    DEFF Research Database (Denmark)

    Catania, Leopoldo; Di Mari, Roberto

    2018-01-01

    We propose a new flexible dynamic model for multivariate nonnegative integer-valued time-series. Observations are assumed to depend on the realization of two additional unobserved integer-valued stochastic variables which control for the time- and cross-dependence of the data. An Expectation-Maximization algorithm for maximum likelihood estimation of the model's parameters is derived. We provide conditional and unconditional (cross)-moments implied by the model, as well as the limiting distribution of the series. A Monte Carlo experiment investigates the finite sample properties of our estimation...

  9. Robust Forecasting of Non-Stationary Time Series

    OpenAIRE

    Croux, C.; Fried, R.; Gijbels, I.; Mahieu, K.

    2010-01-01

    This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression, and fitted by a localized MM-estimator, combining high robustness and large efficiency. The proposed method is shown to produce reliable forecasts in the presence of outliers, non-linearity, and heteroscedasticity. In the absence of outliers, the forecasts are only slightly less precise than those based on a localized Least Squares estima...

  10. Generalized unscented Kalman filtering based radial basis function neural network for the prediction of ground radioactivity time series with missing data

    International Nuclear Information System (INIS)

    Wu Xue-Dong; Liu Wei-Ting; Zhu Zhi-Yu; Wang Yao-Nan

    2011-01-01

    On the assumption that random interruptions in the observation process are modeled by a sequence of independent Bernoulli random variables, we firstly generalize two kinds of nonlinear filtering methods with random interruption failures in the observation based on the extended Kalman filtering (EKF) and the unscented Kalman filtering (UKF), which were shortened as GEKF and GUKF in this paper, respectively. Then the nonlinear filtering model is established by using the radial basis function neural network (RBFNN) prototypes and the network weights as state equation and the output of RBFNN to present the observation equation. Finally, we take the filtering problem under missing observed data as a special case of nonlinear filtering with random intermittent failures by setting each missing data to be zero without needing to pre-estimate the missing data, and use the GEKF-based RBFNN and the GUKF-based RBFNN to predict the ground radioactivity time series with missing data. Experimental results demonstrate that the prediction results of GUKF-based RBFNN accord well with the real ground radioactivity time series while the prediction results of GEKF-based RBFNN are divergent. (geophysics, astronomy, and astrophysics)
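
    A heavily simplified, scalar linear analogue of the random-interruption idea (the paper combines EKF/UKF with an RBF network): a Bernoulli indicator gates the measurement update, so missing observations are handled by simply propagating the prediction. All model parameters and the AR(1)-like test signal are illustrative assumptions.

```python
import numpy as np

def kalman_with_interruptions(y, gamma, F=0.95, H=1.0, Q=0.05, R=0.2):
    """Scalar linear Kalman filter in which each observation y_t is used only when the
    Bernoulli indicator gamma_t = 1; when gamma_t = 0 (interrupted/missing data) the
    filter propagates the prediction without an update."""
    x, P = 0.0, 1.0
    est = np.zeros(len(y))
    for t in range(len(y)):
        # Predict.
        x, P = F * x, F * P * F + Q
        # Update only if the observation actually arrived.
        if gamma[t] == 1:
            K = P * H / (H * P * H + R)
            x = x + K * (y[t] - H * x)
            P = (1 - K * H) * P
        est[t] = x
    return est

# Toy AR(1)-like signal with 30% of observations missing at random.
rng = np.random.default_rng(0)
n = 300
truth = np.zeros(n)
for t in range(1, n):
    truth[t] = 0.95 * truth[t - 1] + rng.normal(0, np.sqrt(0.05))
y = truth + rng.normal(0, np.sqrt(0.2), n)
gamma = rng.binomial(1, 0.7, n)
est = kalman_with_interruptions(y, gamma)
print(np.mean((est - truth) ** 2))
```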

  11. A time series model: First-order integer-valued autoregressive (INAR(1))

    Science.gov (United States)

    Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.

    2017-07-01

    Nonnegative integer-valued time series arise in many applications. A time series model, the first-order integer-valued autoregressive (INAR(1)) model, is constructed with the binomial thinning operator to model nonnegative integer-valued time series. INAR(1) depends on one previous period of the process. The parameter of the model can be estimated by conditional least squares (CLS). The specification of INAR(1) follows that of AR(1). Forecasting in INAR(1) uses a median or a Bayesian forecasting methodology. The median forecasting methodology finds the least integer s for which the cumulative distribution function (CDF) up to s is greater than or equal to 0.5. The Bayesian forecasting methodology forecasts h steps ahead by generating the parameter of the model and the parameter of the innovation term using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), and then finding the least integer s for which the CDF up to s is greater than or equal to u, where u is a value drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 until April 2016.
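
    A minimal sketch of the INAR(1) mechanism and its conditional least squares (CLS) estimator: binomial thinning of the previous count plus an innovation, with α and λ recovered from the regression of X_t on X_{t−1}. The Poisson innovation and the parameter values are illustrative assumptions, not the model fitted to the pneumonia data.

```python
import numpy as np

def simulate_inar1(n, alpha, lam, seed=0):
    """INAR(1): X_t = alpha o X_{t-1} + e_t, where 'o' is binomial thinning (each of the
    X_{t-1} counts survives with probability alpha) and e_t ~ Poisson(lam)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))          # start near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning
        x[t] = survivors + rng.poisson(lam)
    return x

def cls_estimate(x):
    """Conditional least squares: regress X_t on X_{t-1};
    the slope estimates alpha and the intercept estimates lambda."""
    alpha_hat, lam_hat = np.polyfit(x[:-1], x[1:], 1)
    return alpha_hat, lam_hat

x = simulate_inar1(2000, alpha=0.6, lam=2.0)
print(cls_estimate(x))   # should be close to (0.6, 2.0)
```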

  12. Novel Stool-Based Protein Biomarkers for Improved Colorectal Cancer Screening: A Case-Control Study.

    Science.gov (United States)

    Bosch, Linda J W; de Wit, Meike; Pham, Thang V; Coupé, Veerle M H; Hiemstra, Annemieke C; Piersma, Sander R; Oudgenoeg, Gideon; Scheffer, George L; Mongera, Sandra; Sive Droste, Jochim Terhaar; Oort, Frank A; van Turenhout, Sietze T; Larbi, Ilhame Ben; Louwagie, Joost; van Criekinge, Wim; van der Hulst, Rene W M; Mulder, Chris J J; Carvalho, Beatriz; Fijneman, Remond J A; Jimenez, Connie R; Meijer, Gerrit A

    2017-12-19

    The fecal immunochemical test (FIT) for detecting hemoglobin is used widely for noninvasive colorectal cancer (CRC) screening, but its sensitivity leaves room for improvement. To identify novel protein biomarkers in stool that outperform or complement hemoglobin in detecting CRC and advanced adenomas. Case-control study. Colonoscopy-controlled referral population from several centers. 315 stool samples from one series of 12 patients with CRC and 10 persons without colorectal neoplasia (control samples) and a second series of 81 patients with CRC, 40 with advanced adenomas, and 43 with nonadvanced adenomas, as well as 129 persons without colorectal neoplasia (control samples); 72 FIT samples from a third independent series of 14 patients with CRC, 16 with advanced adenomas, and 18 with nonadvanced adenomas, as well as 24 persons without colorectal neoplasia (control samples). Stool samples were analyzed by mass spectrometry. Classification and regression tree (CART) analysis and logistic regression analyses were performed to identify protein combinations that differentiated CRC or advanced adenoma from control samples. Antibody-based assays for 4 selected proteins were done on FIT samples. In total, 834 human proteins were identified, 29 of which were statistically significantly enriched in CRC versus control stool samples in both series. Combinations of 4 proteins reached sensitivities of 80% and 45% for detecting CRC and advanced adenomas, respectively, at 95% specificity, which was higher than that of hemoglobin alone. Proof of concept that such proteins can be detected with antibody-based assays in small sample volumes indicates the potential of these biomarkers to be applied in population screening. Center for Translational Molecular Medicine, International Translational Cancer Research Dream Team, Stand Up to Cancer (American Association for Cancer Research and the Dutch Cancer Society), Dutch Digestive Foundation, and VU

  13. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study.

    Science.gov (United States)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-04-11

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using dried blood spots (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population-based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible

  14. Study of Railway Track Irregularity Standard Deviation Time Series Based on Data Mining and Linear Model

    Directory of Open Access Journals (Sweden)

    Jia Chaolong

    2013-01-01

    Full Text Available Good track geometry state ensures the safe operation of the railway passenger service and freight service. Railway transportation plays an important role in the Chinese economic and social development. This paper studies track irregularity standard deviation time series data and focuses on the characteristics and trend changes of track state by applying clustering analysis. A linear recursive model and a linear ARMA model based on wavelet decomposition and reconstruction are proposed, both of which offer support for the safe management of railway transportation.

  15. A Remote Sensing Approach for Regional-Scale Mapping of Agricultural Land-Use Systems Based on NDVI Time Series

    Directory of Open Access Journals (Sweden)

    Beatriz Bellón

    2017-06-01

    Full Text Available In response to the need for generic remote sensing tools to support large-scale agricultural monitoring, we present a new approach for regional-scale mapping of agricultural land-use systems (ALUS) based on object-based Normalized Difference Vegetation Index (NDVI) time series analysis. The approach consists of two main steps. First, to obtain relatively homogeneous land units in terms of phenological patterns, a principal component analysis (PCA) is applied to an annual MODIS NDVI time series, and an automatic segmentation is performed on the resulting high-order principal component images. Second, the resulting land units are classified into the crop agriculture domain or the livestock domain based on their land-cover characteristics. The crop agriculture domain land units are further classified into different cropping systems based on the correspondence of their NDVI temporal profiles with the phenological patterns associated with the cropping systems of the study area. A map of the main ALUS of the Brazilian state of Tocantins was produced for the 2013–2014 growing season with the new approach, and a significant coherence was observed between the spatial distribution of the cropping systems in the final ALUS map and in a reference map extracted from the official agricultural statistics of the Brazilian Institute of Geography and Statistics (IBGE). This study shows the potential of remote sensing techniques to provide valuable baseline spatial information for supporting agricultural monitoring and for large-scale land-use systems analysis.
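
    A toy sketch of the first step (PCA of an annual NDVI time series, pixel by pixel), whose leading component images would then be segmented into phenologically homogeneous land units. The synthetic NDVI stack and the scikit-learn PCA call are assumptions for illustration, not the processing chain used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy annual NDVI stack: 23 composites for a 100 x 100 pixel tile
# (a stand-in for the MODIS NDVI time series used in the abstract).
rng = np.random.default_rng(0)
n_dates, height, width = 23, 100, 100
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * np.arange(n_dates) / 23)[:, None, None] \
       + 0.05 * rng.standard_normal((n_dates, height, width))

# Each pixel becomes one sample whose features are its NDVI temporal profile.
X = ndvi.reshape(n_dates, -1).T                              # (pixels, dates)
pca = PCA(n_components=3).fit(X)
components = pca.transform(X).T.reshape(3, height, width)    # principal component images

# These component images would then feed an image segmentation step to delineate land units.
print(pca.explained_variance_ratio_)
```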

  16. Study on Apparent Kinetic Prediction Model of the Smelting Reduction Based on the Time-Series

    Directory of Open Access Journals (Sweden)

    Guo-feng Fan

    2012-01-01

    Full Text Available A series of direct smelting reduction experiments has been carried out with high-phosphorus iron ore of different bases by thermogravimetric analyzer. The derivative thermogravimetric (DTG) data have been obtained from the experiments. A one-step-ahead local weighted linear (LWL) method, one of the most suitable methods for predicting chaotic time series that focuses on the errors, is used to predict the DTG. In the meanwhile, empirical mode decomposition-autoregressive (EMD-AR), a data mining technique in signal processing, is also used to predict the DTG. The results show that (1) EMD-AR(4) is the most appropriate and its error is smaller than that of the former; (2) the root mean square error (RMSE) has decreased by about two-thirds; (3) the standardized root mean square error (NMSE) has decreased by an order of magnitude. Finally in this paper, the EMD-AR method has been improved by golden section weighting; its error would be smaller than before. Therefore, the improved EMD-AR model is a promising alternative for predicting the apparent reaction rate (DTG). The analytical results have been an important reference in the field of industrial control.

  17. Introduction to time series and forecasting

    CERN Document Server

    Brockwell, Peter J

    2016-01-01

    This book is aimed at the reader who wishes to gain a working knowledge of time series and forecasting methods as applied to economics, engineering and the natural and social sciences. It assumes knowledge only of basic calculus, matrix algebra and elementary statistics. This third edition contains detailed instructions for the use of the professional version of the Windows-based computer package ITSM2000, now available as a free download from the Springer Extras website. The logic and tools of time series model-building are developed in detail. Numerous exercises are included and the software can be used to analyze and forecast data sets of the user's own choosing. The book can also be used in conjunction with other time series packages such as those included in R. The programs in ITSM2000 however are menu-driven and can be used with minimal investment of time in the computational details. The core of the book covers stationary processes, ARMA and ARIMA processes, multivariate time series and state-space mod...

  18. Multiresolution analysis of Bursa Malaysia KLCI time series

    Science.gov (United States)

    Ismail, Mohd Tahir; Dghais, Amel Abdoullah Ahmed

    2017-05-01

    In general, a time series is simply a sequence of numbers collected at regular intervals over a period. Financial time series data processing is concerned with the theory and practice of processing asset prices over time, such as currency, commodity, and stock market data. The primary aim of this study is to understand the fundamental characteristics of selected financial time series by using both the time and the frequency domain analysis. After that, prediction can be executed for the desired system for in-sample forecasting. In this study, multiresolution analysis, with the aid of the discrete wavelet transform (DWT) and the maximal overlap discrete wavelet transform (MODWT), will be used to pinpoint special characteristics of Bursa Malaysia KLCI (Kuala Lumpur Composite Index) daily closing prices and return values. In addition, further case study discussions include the modeling of Bursa Malaysia KLCI using linear ARIMA with wavelets to address how the multiresolution approach improves fitting and forecasting results.
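
    A small illustration of the multiresolution decomposition on a synthetic price series using PyWavelets: `wavedec` gives the DWT, and the stationary wavelet transform (`swt`) is used here as an undecimated stand-in for the MODWT mentioned above. The wavelet choice, decomposition level, and data are illustrative assumptions, not those of the study.

```python
import numpy as np
import pywt

# Toy daily closing-price series (a stand-in for the Bursa Malaysia KLCI data).
rng = np.random.default_rng(0)
price = 1600 + np.cumsum(rng.normal(0, 5, 1024))
returns = np.diff(np.log(price))

# DWT multiresolution decomposition with a Daubechies wavelet: the approximation
# coefficients capture the smooth trend, the detail coefficients capture
# fluctuations at successively finer time scales.
coeffs = pywt.wavedec(returns, wavelet="db4", level=4)
approx, details = coeffs[0], coeffs[1:]
for d in details:   # ordered from the coarsest (level 4) to the finest (level 1) scale
    print(d.size, float(np.sum(d**2)))

# The stationary (undecimated) wavelet transform is PyWavelets' closest built-in
# analogue to the MODWT; it requires the input length to be a multiple of 2**level.
swt_coeffs = pywt.swt(returns[:1008], wavelet="db4", level=4)
```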

  19. A comparison of temporal and location-based sampling strategies for global positioning system-triggered electronic diaries.

    Science.gov (United States)

    Törnros, Tobias; Dorn, Helen; Reichert, Markus; Ebner-Priemer, Ulrich; Salize, Hans-Joachim; Tost, Heike; Meyer-Lindenberg, Andreas; Zipf, Alexander

    2016-11-21

    Self-reporting is a well-established approach within the medical and psychological sciences. In order to avoid recall bias, i.e. past events being remembered inaccurately, the reports can be filled out on a smartphone in real-time and in the natural environment. This is often referred to as ambulatory assessment and the reports are usually triggered at regular time intervals. With this sampling scheme, however, rare events (e.g. a visit to a park or recreation area) are likely to be missed. When addressing the correlation between mood and the environment, it may therefore be beneficial to include participant locations within the ambulatory assessment sampling scheme. Based on the geographical coordinates, the database query system then decides if a self-report should be triggered or not. We simulated four different ambulatory assessment sampling schemes based on movement data (coordinates by minute) from 143 voluntary participants tracked for seven consecutive days. Two location-based sampling schemes incorporating the environmental characteristics (land use and population density) at each participant's location were introduced and compared to a time-based sampling scheme triggering a report on the hour as well as to a sampling scheme incorporating physical activity. We show that location-based sampling schemes trigger a report less often, but we obtain more unique trigger positions and a greater spatial spread in comparison to sampling strategies based on time and distance. Additionally, the location-based methods trigger significantly more often at rarely visited types of land use and less often outside the study region where no underlying environmental data are available.

  20. Nonparametric factor analysis of time series

    OpenAIRE

    Rodríguez-Poo, Juan M.; Linton, Oliver Bruce

    1998-01-01

    We introduce a nonparametric smoothing procedure for nonparametric factor analysis of multivariate time series. The asymptotic properties of the proposed procedures are derived. We present an application based on the residuals from the Fair macromodel.