WorldWideScience

Sample records for sampling series based

  1. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as a concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy; three-dimensional perceptual maps of the results are then provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can time series generated by different models be distinguished, but series generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions: Europe, North America, South America, the Asian-Pacific (with the exception of mainland China), and mainland China and Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions than MDSC.
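
    A minimal sketch of the embedding step described above: classical MDS applied to a precomputed dissimilarity matrix. The entropy-based dissimilarities of the paper are replaced here by the Chebyshev distance of normalised series (the paper's MDSC baseline); the random-walk series and all parameter choices are hypothetical.

```python
# Sketch: MDS perceptual map from a precomputed dissimilarity matrix.
# The dissimilarity below is a stand-in for the paper's entropy-based measures.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
series = [rng.standard_normal(500).cumsum() for _ in range(6)]  # toy "indices"

def dissimilarity(x, y):
    # Chebyshev distance of normalised series (the MDSC baseline).
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.max(np.abs(x - y))

n = len(series)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = dissimilarity(series[i], series[j])

# Three-dimensional perceptual map of the six series.
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(coords.shape)  # (6, 3)
```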

  2. Measuring time series regularity using nonlinear similarity-based sample entropy

    International Nuclear Information System (INIS)

    Xie Hongbo; He Weixing; Liu Hui

    2008-01-01

    Sample entropy (SampEn), a measure quantifying regularity and complexity, is believed to be an effective method for analyzing diverse settings that include both deterministic chaotic and stochastic processes, and is particularly suited to the analysis of physiological signals involving relatively small amounts of data. However, the similarity definition of vectors is based on the Heaviside function, whose hard, discontinuous boundary may cause problems for the validity and accuracy of SampEn. The Sigmoid function is a smoothed and continuous version of the Heaviside function. To overcome the problems SampEn encounters, a modified SampEn (mSampEn) based on the nonlinear Sigmoid function was proposed. The performance of mSampEn was tested on independent identically distributed (i.i.d.) uniform random numbers, the MIX stochastic model, the Rössler map, and the Hénon map. The results showed that mSampEn was superior to SampEn in several respects, including yielding a defined entropy for small parameter values, better relative consistency, robustness to noise, and less dependence on record length when characterizing time series generated from either deterministic or stochastic systems with different regularities.
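
    As a rough illustration of the idea in this record, the sketch below replaces the Heaviside similarity in a plain sample-entropy estimator with a sigmoid. The parameter names (m, r, slope), the normalisation and the test signals are illustrative conventions, not the authors' settings.

```python
# Sketch of sample entropy with the hard Heaviside similarity replaced by
# a smooth sigmoid, in the spirit of mSampEn. Note the template counts
# differ by one between m and m + 1; this is a simplification.
import numpy as np

def msampen(x, m=2, r=0.2, slope=10.0):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def weighted_matches(mm):
        # All length-mm templates and their pairwise Chebyshev distances.
        T = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(T[:, None, :] - T[None, :, :]), axis=2)
        iu = np.triu_indices(len(T), k=1)
        # Sigmoid similarity: a smooth version of Heaviside(tol - d).
        return np.sum(1.0 / (1.0 + np.exp(slope * (d[iu] - tol))))

    return -np.log(weighted_matches(m + 1) / weighted_matches(m))

rng = np.random.default_rng(1)
print(msampen(rng.uniform(size=300)))            # larger: irregular noise
print(msampen(np.sin(np.linspace(0, 30, 300))))  # smaller: regular signal
```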

  3. Detecting chaos in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W

    2013-09-01

    Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) instead of the DFT to compute a series' power spectrum. The DFT is not appropriate for irregularly sampled time series, whereas the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
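
    The substitution at the core of this record is easy to demonstrate: scipy's Lomb-Scargle implementation recovers the frequency content of an irregularly sampled series where the DFT does not apply. The signal, noise level and frequency grid below are arbitrary choices for illustration.

```python
# Sketch: power spectrum of an irregularly sampled series via Lomb-Scargle.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 100, 400))          # irregular sample times
x = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)

freqs = np.linspace(0.01, 2.0, 1000)           # cycles per time unit
pgram = lombscargle(t, x - x.mean(), 2 * np.pi * freqs, normalize=True)

print(freqs[np.argmax(pgram)])                 # ~0.5, the true frequency
```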

  4. An historically consistent and broadly applicable MRV system based on LiDAR sampling and Landsat time-series

    Science.gov (United States)

    W. Cohen; H. Andersen; S. Healey; G. Moisen; T. Schroeder; C. Woodall; G. Domke; Z. Yang; S. Stehman; R. Kennedy; C. Woodcock; Z. Zhu; J. Vogelmann; D. Steinwand; C. Huang

    2014-01-01

    The authors are developing a REDD+ MRV system that tests different biomass estimation frameworks and components. Design-based inference from a costly field plot network was compared to sampling with LiDAR strips and a smaller set of plots in combination with Landsat for disturbance monitoring. Biomass estimation uncertainties associated with these different data sets...

  5. Methodology Series Module 5: Sampling Strategies

    OpenAIRE

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice and the population that is accessible and available. Some of the non-probabilit...

  6. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  7. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice and the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling method (such as simple random sample or stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention this method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  8. Methodology series module 5: Sampling strategies

    Directory of Open Access Journals (Sweden)

    Maninder Singh Setia

    2016-01-01

    Full Text Available Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice and the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling method (such as simple random sample or stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention this method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  9. Methodology Series Module 5: Sampling Strategies

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling – based on the researcher's choice and the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling method (such as simple random sample or stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention this method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of these results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438
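
    As a compact illustration of the two probability-sampling designs the module names, the sketch below draws a simple random sample and a proportionally allocated stratified sample with numpy; the population, strata and 10 % sampling fraction are invented.

```python
# Sketch: simple random sampling vs. stratified random sampling.
import numpy as np

rng = np.random.default_rng(3)
population = np.arange(1000)                      # hypothetical sampling frame
strata = np.repeat(["urban", "rural"], [700, 300])

# 1) Simple random sample: every unit has the same chance of selection.
srs = rng.choice(population, size=100, replace=False)

# 2) Stratified random sample: draw 10% within each stratum separately.
stratified = np.concatenate([
    rng.choice(population[strata == s],
               size=int(0.1 * np.sum(strata == s)), replace=False)
    for s in ("urban", "rural")
])
print(len(srs), len(stratified))                  # 100 100
```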

  10. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 1: Frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion article (Part 2).

  11. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 2: Extension to time-frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Markov chain Monte Carlo methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.

  12. Comparison of correlation analysis techniques for irregularly sampled time series

    Directory of Open Access Journals (Sweden)

    K. Rehfeld

    2011-06-01

    Full Text Available Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique and different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques.

    All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. We find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme, in the analysis of highly irregular time series. For the cross correlation function (CCF) the RMSE is then lower by 60 %. The application of the Lomb-Scargle technique gave results comparable to the kernel methods in the univariate case, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods.

    We illustrate the performance of interpolation vs. the Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ¹⁸O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques, and is suitable for large-scale application to paleo-data.
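
    The kernel-based estimator these records compare against interpolation can be sketched compactly: pairs of observations are weighted by a Gaussian kernel centred on the target lag. This is a simplified reading of the approach, with an arbitrary test signal and bandwidth; variable and parameter names are assumptions.

```python
# Sketch: Gaussian-kernel cross-correlation for irregularly sampled series.
import numpy as np

def kernel_ccf(tx, x, ty, y, lag, h):
    """CCF at one lag; h is the kernel bandwidth (e.g. mean sampling interval)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None]            # all pairwise time differences
    w = np.exp(-0.5 * ((dt - lag) / h) ** 2)  # Gaussian weights around the lag
    return np.sum(w * (x[:, None] * y[None, :])) / np.sum(w)

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 200, 300))         # irregular observation times
sig = np.sin(2 * np.pi * t / 20)
print(kernel_ccf(t, sig, t, sig, lag=0.0, h=1.0))  # close to 1 at zero lag
```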

  13. Cross-sample entropy of foreign exchange time series

    Science.gov (United States)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every pair of exchange rate time series to assess their degree of asynchrony. The calculation method for the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every pair of the exchange rate time series in the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every pair of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before it. Comparison with the correlation coefficient shows that cross-SampEn is superior in describing the correlation between time series.

  14. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account the meteorological persistence or temporal behavior, thereby identifying the memory of the analyzed process. This article seeks to present the concept of the size of an equivalent sample, which helps to identify sub-periods with a similar structure in the data series. Moreover, in this article we examine the alternative of adjusting the variance of the series, keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. This article presents two examples, the first corresponding to seven simulated series with an autoregressive structure of first order, and the second corresponding to seven meteorological series of anomalies of the air temperature at the surface in two Colombian regions.

  15. mHealth Series: Factors influencing sample size calculations for mHealth-based studies – A mixed methods study in rural China

    Science.gov (United States)

    van Velthoven, Michelle Helena; Li, Ye; Wang, Wei; Du, Xiaozhen; Chen, Li; Wu, Qiong; Majeed, Azeem; Zhang, Yanfeng; Car, Josip

    2013-01-01

    Background An important issue for mHealth evaluation is the lack of information for sample size calculations. Objective To explore factors that influence sample size calculations for mHealth-based studies and to suggest strategies for increasing the participation rate. Methods We explored factors influencing recruitment and follow-up of participants (caregivers of children) in an mHealth text messaging data collection cross-over study. With the help of village doctors, we recruited 1026 (25%) caregivers of children under five out of the 4170 registered. To explore factors influencing recruitment and provide recommendations for improving it, we conducted semi-structured interviews with village doctors. Of the 1014 included participants, 662 (65%) responded to the first question about willingness to participate, 538 (53%) responded to the first survey question and 356 (35%) completed the text message survey. To explore factors influencing follow-up and provide recommendations for improving it, we conducted interviews with participants. We added views from the researchers who were involved in the study to contextualize the findings. Results We found several factors influencing recruitment, related to the following themes: experiences with recruitment, village doctors' work, village doctors' motivations, caregivers' characteristics, and caregivers' motivations. Village doctors gave several recommendations for ways to recruit more caregivers, and we added our views to these. We found the following factors influencing follow-up: mobile phone usage, ability to use a mobile phone, problems with the mobile phone, checking the mobile phone, available time, paying back text message costs, study incentives, subjective norm, culture, trust, perceived usefulness of process, perceived usefulness of outcome, perceived ease of use, attitude, behavioural intention to use, and actual use. From our perspective, factors influencing follow-up were: different

  16. Volterra Series Based Distortion Effect

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    2010-01-01

    A large part of the characteristic sound of the electric guitar comes from nonlinearities in the signal path. Such nonlinearities may come from the input or output stage of the amplifier, which is often equipped with vacuum tubes, or from a dedicated distortion pedal. In this paper the Volterra series expansion for nonlinear systems is investigated with respect to generating good distortion. The Volterra series allows for unlimited adjustment of the level and frequency dependency of each distortion component. Subjectively relevant ways of linking the different orders are discussed.

  17. Yfiler® Plus population samples and dilution series

    DEFF Research Database (Denmark)

    Andersen, Mikkel Meyer; Mogensen, Helle Smidt; Eriksen, Poul Svante

    2017-01-01

    ... dynamics and performance. We determined dye-dependent analytical thresholds by receiver operating characteristics (ROC) and made a customised artefact filter that includes theoretically known artefacts by use of previously analysed population samples. Dilution series of known male DNA and a selection ... DNA complicated the analysis by causing drop-ins of characteristic female DNA artefacts. Even though the customised analytical threshold in combination with the custom-made artefact filters gave more alleles, crime scene samples still needed special attention from the forensic geneticist.

  18. Evaluating Site-Specific and Generic Spatial Models of Aboveground Forest Biomass Based on Landsat Time-Series and LiDAR Strip Samples in the Eastern USA

    Science.gov (United States)

    Ram Deo; Matthew Russell; Grant Domke; Hans-Erik Andersen; Warren Cohen; Christopher Woodall

    2017-01-01

    Large-area assessment of aboveground tree biomass (AGB) to inform regional or national forest monitoring programs can be efficiently carried out by combining remotely sensed data and field sample measurements through a generic statistical model, in contrast to site-specific models. We integrated forest inventory plot data with spatial predictors from Landsat time-...

  19. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations, with associated costs, to transform the time series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo, which is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.

  1. Weighted statistical parameters for irregularly sampled time series

    Science.gov (United States)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.

  2. Adaptive Sampling of Time Series During Remote Exploration

    Science.gov (United States)

    Thompson, David R.

    2012-01-01

    This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models

  3. Time Series Analysis Based on Running Mann Whitney Z Statistics

    Science.gov (United States)

    A sensitive and objective time series analysis method based on the calculation of Mann-Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
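
    A minimal sketch of the running-window idea, assuming adjacent windows are compared; the abstract's Monte Carlo normalization is replaced here with the standard normal approximation for the U statistic, and the window length and test series are arbitrary.

```python
# Sketch: running Mann-Whitney test, each window vs. the following one,
# with U converted to Z via the normal approximation.
import numpy as np
from scipy.stats import mannwhitneyu

def running_mw_z(x, window):
    z = []
    for i in range(len(x) - 2 * window):
        a, b = x[i:i + window], x[i + window:i + 2 * window]
        u = mannwhitneyu(a, b, alternative="two-sided").statistic
        mu = window * window / 2.0                            # E[U] under H0
        sigma = np.sqrt(window * window * (2 * window + 1) / 12.0)
        z.append((u - mu) / sigma)
    return np.array(z)

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])  # step change
z = running_mw_z(x, window=30)
print(np.abs(z).max() > 3)   # True: the shift is flagged
```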

  4. Visibility Graph Based Time Series Analysis.

    Science.gov (United States)

    Stephen, Mutua; Gu, Changgui; Yang, Huijie

    2015-01-01

    Network based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.

  5. Visibility Graph Based Time Series Analysis.

    Directory of Open Access Journals (Sweden)

    Mutua Stephen

    Full Text Available Network based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.
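
    The visibility criterion behind these records is compact enough to state directly in code: a sample k blocks the link (i, j) if it rises above the straight line joining them. This is a plain natural-visibility-graph sketch on a toy series, not the papers' full network-of-networks construction.

```python
# Sketch: natural visibility graph of a time series (Lacasa-style criterion).
import numpy as np

def visibility_edges(x):
    n = len(x)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            # Link (i, j) if every intermediate point lies below the chord.
            if all(x[k] < x[j] + (x[i] - x[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

rng = np.random.default_rng(6)
series = rng.standard_normal(50)
print(len(visibility_edges(series)))   # number of visibility links
```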

  6. Asymptotic theory for the sample covariance matrix of a heavy-tailed multivariate time series

    DEFF Research Database (Denmark)

    Davis, Richard A.; Mikosch, Thomas Valentin; Pfaffel, Olivier

    2016-01-01

    In this paper we give an asymptotic theory for the eigenvalues of the sample covariance matrix of a multivariate time series. The time series constitutes a linear process across time and between components. The input noise of the linear process has regularly varying tails with index α∈(0,4); in particular, the time series has infinite fourth moment. We derive the limiting behavior for the largest eigenvalues of the sample covariance matrix and show point process convergence of the normalized eigenvalues. The limiting process has an explicit form involving points of a Poisson process and eigenvalues of a non-negative definite matrix. Based on this convergence we derive limit theory for a host of other continuous functionals of the eigenvalues, including the joint convergence of the largest eigenvalues, the joint convergence of the largest eigenvalue and the trace of the sample covariance matrix...

  7. A Story-Based Simulation for Teaching Sampling Distributions

    Science.gov (United States)

    Turner, Stephen; Dabney, Alan R.

    2015-01-01

    Statistical inference relies heavily on the concept of sampling distributions. However, sampling distributions are difficult to teach. We present a series of short animations that are story-based, with associated assessments. We hope that our contribution can be useful as a tool to teach sampling distributions in the introductory statistics…

  8. Time Series Based for Online Signature Verification

    Directory of Open Access Journals (Sweden)

    I Ketut Gede Darma Putra

    2013-11-01

    Full Text Available A signature verification system matches a tested signature against a claimed signature. This paper proposes a time series based feature extraction method and dynamic time warping as the matching method. The system was built by testing 900 signatures belonging to 50 participants: 3 signatures for reference and 5 signatures each from the original user, simple imposters and trained imposters as test signatures. The final system was tested with 50 participants and 3 references. The tests showed that system accuracy without imposters is 90.44897959% at threshold 44, with a rejection error (FNMR) of 5.2% and an acceptance error (FMR) of 4.35102%; with imposters, system accuracy is 80.1361% at threshold 27, with a rejection error (FNMR) of 15.6% and an average acceptance error (FMR) of 4.263946%, with details as follows: acceptance error 0.391837%, acceptance error for simple imposters 3.2% and acceptance error for trained imposters 9.2%.
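
    The matching step named in this record, dynamic time warping, can be sketched with the textbook recurrence; the 1-D "signature traces" below are synthetic stand-ins for real pen trajectories, and the threshold logic of the paper is omitted.

```python
# Sketch: dynamic time warping distance between two 1-D feature sequences.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, or match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ref = np.sin(np.linspace(0, 2 * np.pi, 100))         # reference "signature"
test = np.sin(np.linspace(0, 2 * np.pi, 80)) + 0.05  # slightly warped copy
print(dtw_distance(ref, test))  # small for a genuine match, large for a forgery
```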

  9. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-10-01

    This study proposes a robust similarity score-based time series feature extraction method termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity, thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets.

  10. Using forbidden ordinal patterns to detect determinism in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W; Chobot, J M; Niskala, B J; Needhammer, C J

    2016-02-01

    It is known that when symbolizing a time series into ordinal patterns using the Bandt-Pompe (BP) methodology, there will be ordinal patterns called forbidden patterns that do not occur in a deterministic series. The existence of forbidden patterns can be used to identify deterministic dynamics. In this paper, the ability to use forbidden patterns to detect determinism in irregularly sampled time series is tested on data generated from a continuous model system. The study is done in three parts. First, the effects of sampling time on the number of forbidden patterns are studied on regularly sampled time series. The next two parts focus on two types of irregular-sampling, missing data and timing jitter. It is shown that forbidden patterns can be used to detect determinism in irregularly sampled time series for low degrees of sampling irregularity (as defined in the paper). In addition, comments are made about the appropriateness of using the BP methodology to symbolize irregularly sampled time series.
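
    A minimal sketch of the forbidden-pattern test on a regularly sampled series, assuming the usual Bandt-Pompe symbolization; the choice of the logistic map, series length and pattern length m = 4 is illustrative.

```python
# Sketch: count ordinal (Bandt-Pompe) patterns that never occur.
# Determinism leaves "forbidden" patterns; long i.i.d. noise visits all.
import numpy as np
from itertools import permutations

def forbidden_patterns(x, m=4):
    seen = {tuple(int(v) for v in np.argsort(x[i:i + m]))
            for i in range(len(x) - m + 1)}
    return [p for p in permutations(range(m)) if p not in seen]

# Logistic map (deterministic, chaotic) vs. uniform noise.
x = np.empty(2000); x[0] = 0.4
for i in range(1999):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
noise = np.random.default_rng(7).uniform(size=2000)

print(len(forbidden_patterns(x)))      # > 0: determinism leaves gaps
print(len(forbidden_patterns(noise)))  # 0 (or near 0) for noise
```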

  11. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1993-01-01

    Reliability-based design of structural systems is considered. In particular, systems where the reliability model is a series system of parallel systems are treated. A sensitivity analysis for this class of problems is presented. Optimization problems with series systems of parallel systems ... optimization of series systems of parallel systems, but it is also efficient in reliability-based optimization of series systems in general.

  12. Effectiveness of firefly algorithm based neural network in time series ...

    African Journals Online (AJOL)

    Effectiveness of firefly algorithm based neural network in time series forecasting. ... In the experiments, three well known time series were used to evaluate the performance. Results obtained were compared with ... Keywords: Time series, Artificial Neural Network, Firefly Algorithm, Particle Swarm Optimization, Overfitting ...

  13. Optimal separable bases and series expansions

    International Nuclear Information System (INIS)

    Poirier, B.

    1997-01-01

    A method is proposed for the efficient calculation of the Green's functions and eigenstates for quantum systems of two or more dimensions. For a given Hamiltonian, the best possible separable approximation is obtained from the set of all Hilbert-space operators. It is shown that this determination itself, as well as the solution of the resultant approximation, is a problem of reduced dimensionality. Moreover, the approximate eigenstates constitute the optimal separable basis, in the sense of self-consistent field theory. The full solution is obtained from the approximation via iterative expansion. In the time-independent perturbation expansion for instance, all of the first-order energy corrections are zero. In the Green's function case, we have a distorted-wave Born series with optimized convergence properties. This series may converge even when the usual Born series diverges. Analytical results are presented for an application of the method to the two-dimensional shifted harmonic-oscillator system, in the course of which the quantum tanh² potential problem is solved exactly. The universal presence of bound states in the latter is shown to imply long-lived resonances in the former. In a comparison with other theoretical methods, we find that the reaction path Hamiltonian fails to predict such resonances. © 1997 The American Physical Society

  14. Hemoglobin in samples with leukocytosis can be measured on ABL 700 series blood gas analyzers

    NARCIS (Netherlands)

    Scharnhorst, V.; Laar, van der P.D.; Vader, H.

    2003-01-01

    The aim was to compare lactate, bilirubin and hemoglobin F concentrations obtained on ABL 700 series blood gas analyzers with those from laboratory methods. Pooled neonatal plasma, cord blood and adult plasma samples were used for comparison of the bilirubin, hemoglobin F and lactate concentrations, respectively.

  15. Pseudo-random bit generator based on lag time series

    Science.gov (United States)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we have used a delay in the generation of the time series. When these new series are plotted as xn against xn+1, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
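
    A rough sketch of the construction described here: two delayed logistic-map series are combined into a bit stream. The lags, parameter values and the comparison rule for extracting bits are illustrative guesses, not the authors' design, and no NIST testing is attempted.

```python
# Sketch: pseudo-random bits from two lagged logistic-map series.
import numpy as np

def logistic_series(mu, x0, n, lag):
    x = x0
    out = np.empty(n)
    for i in range(n + lag):
        x = mu * x * (1.0 - x)
        if i >= lag:               # the delay hides the underlying map
            out[i - lag] = x
    return out

a = logistic_series(3.99, 0.41, 10000, lag=7)
b = logistic_series(3.97, 0.23, 10000, lag=11)
bits = (a > b).astype(int)         # combine the two series into bits

print(bits[:16], bits.mean())      # mean near 0.5 for a balanced stream
```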

  16. An Energy-Based Similarity Measure for Time Series

    Directory of Open Access Journals (Sweden)

    Pierre Brunagel

    2007-11-01

    Full Text Available A new similarity measure for time series analysis, called SimilB, based on the cross-ΨB-energy operator (2004), is introduced. ΨB is a nonlinear measure which quantifies the interaction between two time series. Compared to the Euclidean distance (ED) or the Pearson correlation coefficient (CC), SimilB includes the temporal information and relative changes of the time series using the first and second derivatives of the time series. SimilB is well suited for both nonstationary and stationary time series, and particularly those presenting discontinuities. Some new properties of ΨB are presented. In particular, we show that ΨB as a similarity measure is robust to both scale and time shift. SimilB is illustrated with synthetic time series and an artificial dataset, and compared to the CC and ED measures.

  17. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical importance for stochastic and unstable time series analysis with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recent prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique, with the advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and little computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
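
    The rolling mechanism can be sketched in a few lines: refit an AR model on the most recent window and issue a 1-step-ahead forecast at each step. The window length, model order and test series below are arbitrary, and the least-squares fit is a generic stand-in for the paper's AR construction.

```python
# Sketch: 1-step-ahead AR forecasting with a rolling window.
import numpy as np

def rolling_ar_forecast(x, order=2, window=30):
    preds = []
    for t in range(window, len(x)):
        seg = x[t - window:t]
        # Regression matrix for an AR(order) fit on the window:
        # column k holds the lag-(k+1) values aligned with the targets.
        X = np.column_stack([seg[order - k - 1:len(seg) - k - 1]
                             for k in range(order)])
        y = seg[order:]
        coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(y))]),
                                   y, rcond=None)
        # Forecast from the window's most recent values plus the intercept.
        preds.append(coef[:order] @ seg[-order:][::-1] + coef[-1])
    return np.array(preds)

rng = np.random.default_rng(8)
x = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)
p = rolling_ar_forecast(x)
print(np.mean((p - x[30:]) ** 2))   # small 1-step-ahead error
```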

  18. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Directory of Open Access Journals (Sweden)

    Q. Zhang

    2018-02-01

    Full Text Available River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling – in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) – are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2), and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb–Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among

  19. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Science.gov (United States)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of
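
    For concreteness, here is the "traditional" estimator these two records evaluate: fit a straight line to the log-log Lomb-Scargle periodogram of an irregularly sampled series and read off β as the negative slope. As the abstracts note, this naive estimator tends to be biased; the synthetic Brown-noise setup and frequency grid are illustrative.

```python
# Sketch: spectral slope (beta) from the log-log Lomb-Scargle periodogram.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(9)
t_full = np.arange(4096, dtype=float)
brown = rng.standard_normal(4096).cumsum()        # Brown noise, true beta = 2

keep = np.sort(rng.choice(4096, size=800, replace=False))  # irregular gaps
t, x = t_full[keep], brown[keep]

freqs = np.linspace(1e-3, 0.1, 400)               # below the pseudo-Nyquist rate
pgram = lombscargle(t, x - x.mean(), 2 * np.pi * freqs)

slope, _ = np.polyfit(np.log(freqs), np.log(pgram), 1)
print(-slope)   # beta estimate, expected near (but often below) 2
```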

  20. Discovery and identification of a series of alkyl decalin isomers in petroleum geological samples.

    Science.gov (United States)

    Wang, Huitong; Zhang, Shuichang; Weng, Na; Zhang, Bin; Zhu, Guangyou; Liu, Lingyan

    2015-07-07

    Comprehensive two-dimensional gas chromatography/time-of-flight mass spectrometry (GC × GC/TOFMS) has been used to characterize a crude oil and a source rock extract sample. During the process, a series of pairwise components between monocyclic alkanes and mono-aromatics was discovered. After tentative assignment as decahydronaphthalene isomers, a series of alkyl decalin isomers was synthesized and used for identification and validation of these petroleum compounds. From both the MS and chromatography information, these pairwise compounds were identified as 2-alkyl-decahydronaphthalenes and 1-alkyl-decahydronaphthalenes. The polarity of the 1-alkyl-decahydronaphthalenes was stronger. Their long-chain alkyl substituent groups may be due to bacterial transformation or different oil cracking events. This systematic profiling of alkyl-decahydronaphthalene isomers provides further understanding and recognition of these potential petroleum biomarkers.

  1. Silicon based ultrafast optical waveform sampling

    DEFF Research Database (Denmark)

    Ji, Hua; Galili, Michael; Pu, Minhao

    2010-01-01

    A 300 nm × 450 nm × 5 mm silicon nanowire is designed and fabricated for a four-wave-mixing based non-linear optical gate. Based on this silicon nanowire, an ultra-fast optical sampling system is successfully demonstrated using a free-running fiber laser with a carbon nanotube-based mode-locker as the sampling source. A clear eye-diagram of a 320 Gbit/s data signal is obtained. The temporal resolution of the sampling system is estimated to be 360 fs.

  2. Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis.

    Science.gov (United States)

    Moser, Albine; Korstjens, Irene

    2018-12-01

    In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By 'novice' we mean Master's students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of qualitative research papers. The second article focused on context, research questions and designs, and referred to publications for further reading. This third article addresses FAQs about sampling, data collection and analysis. The data collection plan needs to be broadly defined and open at first, and become flexible during data collection. Sampling strategies should be chosen in such a way that they yield rich information and are consistent with the methodological approach used. Data saturation determines sample size and will be different for each study. The most commonly used data collection methods are participant observation, face-to-face in-depth interviews and focus group discussions. Analyses in ethnographic, phenomenological, grounded theory, and content analysis studies yield different narrative findings: a detailed description of a culture, the essence of the lived experience, a theory, and a descriptive summary, respectively. The fourth and final article will focus on trustworthiness and publishing qualitative research.

  3. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    Science.gov (United States)

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in Python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.

  4. Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

    Directory of Open Access Journals (Sweden)

    Saeed Mian Qaisar

    2009-01-01

    Full Text Available The recent sophistication of mobile systems and sensor networks demands more and more processing resources. In order to maintain system autonomy, energy saving is becoming one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal are focused on improving embedded systems design and battery technology, but very few studies aim to exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. It is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. Indeed, the principle is to intelligently exploit the signal's local characteristics, which are usually never considered, to filter only the relevant signal parts by employing filters of the relevant order. This idea leads to a drastic gain in computational efficiency and hence in processing power when compared to the classical techniques.
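
    The sampling scheme at the heart of this record can be sketched simply: a sample is taken only when the signal crosses one of a set of uniformly spaced amplitude levels, so the sampling rate adapts to the signal's local activity. The level spacing, the crossing-tracking rule and the test signal below are arbitrary choices.

```python
# Sketch: level-crossing sampling of a uniformly generated test signal.
import numpy as np

def level_crossing_sample(x, delta):
    levels = np.arange(x.min(), x.max() + delta, delta)
    idx, last = [], x[0]
    for i in range(1, len(x)):
        lo, hi = sorted((last, x[i]))
        # Take a sample whenever a quantisation level is crossed.
        if np.any((levels > lo) & (levels <= hi)):
            idx.append(i)
            last = x[i]
    return np.array(idx)

t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)   # activity decays over time
idx = level_crossing_sample(x, delta=0.1)
print(len(idx))   # far fewer samples than 2000, densest where x moves fast
```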

  5. Drunk driving detection based on classification of multivariate time series.

    Science.gov (United States)

    Li, Zhenlong; Jin, Xue; Zhao, Xiaohua

    2015-09-01

    This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent the multivariate time series. A bottom-up algorithm was then employed to segment the multivariate time series. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify the driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is feasible and effective. The approach has implications for drunk driving detection. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
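
    A simplified stand-in for this pipeline is sketched below: each driving trace is split into segments, the slope of each segment forms the feature vector, and an SVM separates the two classes. The paper uses a bottom-up piecewise-linear segmentation; fixed-length segments and synthetic traces are used here purely for brevity.

    ```python
    # Segment slopes as features + SVM classifier (illustrative, synthetic data).
    import numpy as np
    from sklearn.svm import SVC

    def segment_slopes(series, n_segments=10):
        """Fit a line to each segment and return the slopes as a feature vector."""
        slopes = []
        for chunk in np.array_split(series, n_segments):
            x = np.arange(len(chunk))
            slopes.append(np.polyfit(x, chunk, 1)[0])
        return np.array(slopes)

    rng = np.random.default_rng(0)
    # Synthetic lateral-position traces: "drunk" traces drift more strongly.
    X = np.array([segment_slopes(np.cumsum(rng.normal(0, s, 500)))
                  for s in [0.1] * 20 + [0.5] * 20])
    y = np.array([0] * 20 + [1] * 20)    # 0 = normal, 1 = drunk

    clf = SVC(kernel="rbf").fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```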

  6. MTU underfloor rail drives based on Series 1600 engines

    Energy Technology Data Exchange (ETDEWEB)

    Bamberger, Norbert; Lieb, Martin; Reich, Christian [MTU Friedrichshafen GmbH, Friedrichshafen (Germany)

    2013-05-15

    With the heavy demands now being placed on railcar drive systems, ever more powerful solutions are needed. For the new high-speed trains in Britain's Intercity Express Programme (IEP), Hitachi endorses the use of MTU's underfloor drives based on Series 1600 engines.

  7. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    Reliability-based design of structural systems is considered. Especially systems where the reliability model is a series system of parallel systems are analysed. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization...

  8. Machine learning methods as a tool to analyse incomplete or irregularly sampled radon time series data.

    Science.gov (United States)

    Janik, M; Bossew, P; Kurihara, O

    2018-07-15

    Machine learning is a class of statistical techniques which has proven to be a powerful tool for modelling the behaviour of complex systems, in which response quantities depend on assumed controls or predictors in a complicated way. In this paper, as our first purpose, we propose the application of machine learning to reconstruct incomplete or irregularly sampled time series of indoor radon (²²²Rn). The physical assumption underlying the modelling is that the Rn concentration in the air is controlled by environmental variables such as air temperature and pressure. The algorithms "learn" from complete sections of the multivariate series, derive a dependence model and apply it to sections where the controls are available, but not the response (Rn), and in this way complete the Rn series. Three machine learning techniques are applied in this study, namely random forest, its extension called the gradient boosting machine, and deep learning. For comparison, we apply classical multiple regression in a generalized linear model version. Performance of the models is evaluated through different metrics. The performance of the gradient boosting machine is found to be superior to that of the other techniques. By applying learning machines, we show, as our second purpose, that missing data or periods of Rn series data can reasonably be reconstructed and resampled on a regular grid if data for appropriate physical controls are available. The techniques also identify to which degree the assumed controls contribute to imputing missing Rn values. Our third purpose, though no less important from the viewpoint of physics, is identifying to which degree physical, in this case environmental, variables are relevant as Rn predictors, or in other words, which predictors explain most of the temporal variability of Rn. We show that the variables which contribute most to the Rn series reconstruction are temperature, relative humidity and day of the year. The first two are physical
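
    The imputation step can be sketched as follows: train a regressor on time stamps where both the environmental controls and radon are observed, then predict radon where only the controls are available. This is a hedged illustration with synthetic data and illustrative column names, using a random forest as one of the techniques the paper compares.

    ```python
    # Impute missing radon values from environmental controls (synthetic sketch).
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n = 1000
    df = pd.DataFrame({
        "temperature": 15 + 10 * np.sin(np.arange(n) / 50) + rng.normal(0, 1, n),
        "pressure": 1013 + rng.normal(0, 5, n),
        "day_of_year": np.arange(n) % 365,
    })
    df["radon"] = 40 - df["temperature"] + rng.normal(0, 3, n)
    df.loc[rng.choice(n, 200, replace=False), "radon"] = np.nan   # knock out values

    controls = ["temperature", "pressure", "day_of_year"]
    known = df["radon"].notna()
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(df.loc[known, controls], df.loc[known, "radon"])

    df.loc[~known, "radon"] = model.predict(df.loc[~known, controls])
    # Feature importances indicate which controls explain Rn variability.
    print(dict(zip(controls, model.feature_importances_.round(2))))
    ```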

  9. Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis

    Science.gov (United States)

    Moser, Albine; Korstjens, Irene

    2018-01-01

    Abstract In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By ‘novice’ we mean Master’s students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of qualitative research papers. The second article focused on context, research questions and designs, and referred to publications for further reading. This third article addresses FAQs about sampling, data collection and analysis. The data collection plan needs to be broadly defined and open at first, and become flexible during data collection. Sampling strategies should be chosen in such a way that they yield rich information and are consistent with the methodological approach used. Data saturation determines sample size and will be different for each study. The most commonly used data collection methods are participant observation, face-to-face in-depth interviews and focus group discussions. Analyses in ethnographic, phenomenological, grounded theory, and content analysis studies yield different narrative findings: a detailed description of a culture, the essence of the lived experience, a theory, and a descriptive summary, respectively. The fourth and final article will focus on trustworthiness and publishing qualitative research. PMID:29199486

  10. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    2000-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision of similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCF) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000).

  11. Sample preparation for phosphoproteomic analysis of circadian time series in Arabidopsis thaliana.

    Science.gov (United States)

    Krahmer, Johanna; Hindle, Matthew M; Martin, Sarah F; Le Bihan, Thierry; Millar, Andrew J

    2015-01-01

    Systems biological approaches to study the Arabidopsis thaliana circadian clock have mainly focused on transcriptomics while little is known about the proteome, and even less about posttranslational modifications. Evidence has emerged that posttranslational protein modifications, in particular phosphorylation, play an important role for the clock and its output. Phosphoproteomics is the method of choice for a large-scale approach to gain more knowledge about rhythmic protein phosphorylation. Recent plant phosphoproteomics publications have identified several thousand phosphopeptides. However, the methods used in these studies are very labor-intensive and therefore not suitable to apply to a well-replicated circadian time series. To address this issue, we present and compare different strategies for sample preparation for phosphoproteomics that are compatible with large numbers of samples. Methods are compared regarding number of identifications, variability of quantitation, and functional categorization. We focus on the type of detergent used for protein extraction as well as methods for its removal. We also test a simple two-fraction separation of the protein extract. © 2015 Elsevier Inc. All rights reserved.

  12. Quality Control Procedure Based on Partitioning of NMR Time Series

    Directory of Open Access Journals (Sweden)

    Michał Staniszewski

    2018-03-01

    Full Text Available The quality of magnetic resonance spectroscopy (MRS) depends on the stability of magnetic resonance (MR) system performance and optimal hardware functioning, which ensure adequate levels of signal-to-noise ratio (SNR) as well as good spectral resolution and minimal artifacts in the spectral data. MRS quality control (QC) protocols and methodologies are based on phantom measurements that are repeated regularly. In this work, a signal partitioning algorithm based on a dynamic programming (DP) method for QC assessment of the spectral data is described. The proposed algorithm allows detection of change points, i.e., abrupt variations in the time series data. The proposed QC method was tested using simulated and real phantom data. Simulated data were randomly generated time series distorted by white noise. The real data were taken from the phantom quality control studies of the MRS scanner collected over four and a half years and analyzed by LCModel software. Along with the proposed algorithm, the performance of various literature methods was evaluated for a predefined number of change points, based on error values calculated by subtracting the mean values calculated for the periods between the change points from the original data points. The time series were checked using external software, a set of external methods and the proposed tool, and the obtained results were comparable. The application of dynamic programming to the analysis of phantom MRS data is a novel approach to QC. The obtained results confirm that the presented change-point-detection tool can be used either for independent analysis of MRS time series (or any other) or as a part of quality control.
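
    Dynamic-programming change-point detection of this kind can be sketched with the ruptures package (used here as a stand-in for the authors' implementation, which the abstract does not name). The simulated signal below mimics piecewise-constant phantom records distorted by white noise, and the within-segment error mirrors the evaluation criterion described above.

    ```python
    # DP change-point detection on a simulated QC series (illustrative sketch).
    import numpy as np
    import ruptures as rpt

    rng = np.random.default_rng(2)
    # Piecewise-constant signal with two abrupt level shifts plus white noise.
    signal = np.concatenate([rng.normal(m, 0.5, 150) for m in (0.0, 2.0, 1.0)])

    algo = rpt.Dynp(model="l2").fit(signal)
    breakpoints = algo.predict(n_bkps=2)   # indices that end each segment
    print(breakpoints)                     # expected near [150, 300, 450]

    # QC error: squared distance of each point from its segment mean.
    edges = [0] + breakpoints
    error = sum(np.sum((signal[a:b] - signal[a:b].mean()) ** 2)
                for a, b in zip(edges[:-1], edges[1:]))
    print("within-segment SSE:", round(error, 1))
    ```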

  13. Hydrogeologic applications for historical records and images from rock samples collected at the Nevada National Security Site and vicinity, Nye County, Nevada - A supplement to Data Series 297

    Science.gov (United States)

    Wood, David B.

    2018-03-14

    Rock samples have been collected, analyzed, and interpreted from drilling and mining operations at the Nevada National Security Site for over one-half of a century. Records containing geologic and hydrologic analyses and interpretations have been compiled into a series of databases. Rock samples have been photographed and thin sections scanned. Records and images are preserved and available for public viewing and downloading at the U.S. Geological Survey ScienceBase, Mercury Core Library and Data Center Web site at https://www.sciencebase.gov/mercury/ and documented in U.S. Geological Survey Data Series 297. Example applications of these data and images are provided in this report.

  14. Mapping Rice Cropping Systems in Vietnam Using an NDVI-Based Time-Series Similarity Measurement Based on DTW Distance

    Directory of Open Access Journals (Sweden)

    Xudong Guan

    2016-01-01

    Full Text Available Normalized Difference Vegetation Index (NDVI) derived from Moderate Resolution Imaging Spectroradiometer (MODIS) time-series data has been widely used in the fields of crop and rice classification. The cloudy and rainy weather characteristics of the monsoon season greatly reduce the likelihood of obtaining high-quality optical remote sensing images. In addition, the diverse crop-planting system in Vietnam also hinders the comparison of NDVI among different crop stages. To address these problems, we apply a Dynamic Time Warping (DTW) distance-based similarity measure approach and use the entire yearly NDVI time series to reduce the inaccuracy of classification using a single image. We first de-noise the NDVI time series using Savitzky-Golay (S-G) filtering based on the TIMESAT software. Then, a standard NDVI time-series baseline for rice growth is established based on field survey data and Google Earth sample data. NDVI time-series data for each pixel are constructed and the DTW distance from the standard rice growth NDVI time series is calculated. Then, we apply thresholds to extract rice growth areas. A qualitative assessment using statistical data and a spatial assessment using sampled data from the rice-cropping map reveal a high mapping accuracy at the national scale, with the corresponding R² being as high as 0.809; however, the mapped rice accuracy decreased at the provincial scale due to the reduced number of rice planting areas per province. An analysis of the results indicates that the 500-m resolution MODIS data are limited in terms of mapping scattered rice parcels. The results demonstrate that the DTW-based similarity measure of the NDVI time series can be effectively used to map large-area rice cropping systems with diverse cultivation processes.
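
    The DTW distance at the heart of this approach is a textbook dynamic program; a minimal version is sketched below, comparing a pixel's NDVI series against a reference rice-growth curve so that a threshold on the distance flags rice pixels. The reference profile here is synthetic, not the paper's field-derived baseline.

    ```python
    # Plain DTW distance between an NDVI series and a reference profile.
    import numpy as np

    def dtw_distance(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    t = np.linspace(0, 1, 46)                       # ~8-day composites over a year
    reference = np.clip(np.sin(np.pi * t), 0, 1)    # idealized rice NDVI profile
    pixel = reference + 0.05 * np.random.randn(46)
    print("DTW distance:", round(dtw_distance(pixel, reference), 3))
    ```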

  15. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    1999-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999) and the Final Safety Analysis Report (FSAR) (FDH 1999) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in developing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks.

  16. Satellite Image Time Series Decomposition Based on EEMD

    Directory of Open Access Journals (Sweden)

    Yun-long Kong

    2015-11-01

    Full Text Available Satellite Image Time Series (SITS) have recently been of great interest due to the emerging remote sensing capabilities for Earth observation. Trend and seasonal components are two crucial elements of SITS. In this paper, a novel framework of SITS decomposition based on Ensemble Empirical Mode Decomposition (EEMD) is proposed. EEMD is achieved by sifting an ensemble of adaptive orthogonal components called Intrinsic Mode Functions (IMFs). EEMD is noise-assisted and overcomes the drawback of mode mixing in conventional Empirical Mode Decomposition (EMD). Inspired by these advantages, the aim of this work is to employ EEMD to decompose SITS into IMFs and to choose relevant IMFs for the separation of seasonal and trend components. In a series of simulations, IMFs extracted by EEMD achieved a clear representation with physical meaning. The experimental results of 16-day compositions of Moderate Resolution Imaging Spectroradiometer (MODIS), Normalized Difference Vegetation Index (NDVI), and Global Environment Monitoring Index (GEMI) time series with disturbance illustrated the effectiveness and stability of the proposed approach for monitoring tasks, such as applications for the detection of abrupt changes.
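
    A hedged sketch of the decomposition step follows, using the PyEMD package as an assumed implementation (the paper does not name one). The split of fast IMFs into a "seasonal" part and slow IMFs into a "trend" part is purely illustrative of the IMF-selection idea described above.

    ```python
    # EEMD decomposition of a synthetic NDVI-like series into IMFs.
    import numpy as np
    from PyEMD import EEMD

    t = np.linspace(0, 8, 8 * 23)                        # 16-day composites, 8 years
    ndvi = (0.3 + 0.02 * t + 0.2 * np.sin(2 * np.pi * t)
            + 0.05 * np.random.randn(len(t)))            # trend + season + noise

    eemd = EEMD(trials=100)                              # noise-assisted ensemble
    imfs = eemd.eemd(ndvi, t)
    print("number of IMFs:", imfs.shape[0])

    # Illustrative reconstruction: fast IMFs as "season", slow ones as "trend".
    season = imfs[:2].sum(axis=0)
    trend = imfs[2:].sum(axis=0)
    ```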

  17. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Science.gov (United States)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
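
    To make the idea concrete without the full Bayesian machinery, the sketch below estimates a first-order transition matrix per individual and clusters the flattened matrices with k-means. This is only an illustrative stand-in: the paper fits a finite mixture with a multinomial logit prior via MCMC, not k-means.

    ```python
    # Cluster categorical series by their estimated transition matrices (sketch).
    import numpy as np
    from sklearn.cluster import KMeans

    def transition_matrix(seq, n_states):
        T = np.ones((n_states, n_states))            # add-one smoothing
        for a, b in zip(seq[:-1], seq[1:]):
            T[a, b] += 1
        return T / T.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(3)
    n_states = 3
    # Two behaviour types: "sticky" chains vs. rapidly switching chains.
    sticky = 0.8 * np.eye(n_states) + 0.2 / n_states
    mobile = np.full((n_states, n_states), 1 / n_states)

    def simulate(P, length=200):
        s = [0]
        for _ in range(length - 1):
            s.append(rng.choice(n_states, p=P[s[-1]]))
        return s

    seqs = [simulate(sticky) for _ in range(30)] + [simulate(mobile) for _ in range(30)]
    X = np.array([transition_matrix(s, n_states).ravel() for s in seqs])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels[:30].mean(), labels[30:].mean())    # the two groups separate
    ```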

  18. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
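
    The hybrid idea can be sketched in a few lines: the analytical propagator supplies a coarse prediction, and a Holt-Winters model fitted to its historical residuals (truth minus analytical) forecasts the missing dynamics. The residual series below is synthetic, and statsmodels' additive Holt-Winters stands in for the authors' formulation.

    ```python
    # Holt-Winters correction of an analytical propagator's residuals (sketch).
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(4)
    n, period = 400, 50
    t = np.arange(n)
    # Residual of the analytical theory: slow drift + periodic J2-like signature.
    resid = 0.002 * t + 0.5 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.05, n)

    fit = ExponentialSmoothing(resid[:300], trend="add",
                               seasonal="add", seasonal_periods=period).fit()
    correction = fit.forecast(100)

    rmse_before = np.sqrt(np.mean(resid[300:] ** 2))
    rmse_after = np.sqrt(np.mean((resid[300:] - correction) ** 2))
    print(rmse_before, rmse_after)   # the hybrid correction shrinks the error
    ```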

  19. Application of a series of artificial neural networks to on-site quantitative analysis of lead into real soil samples by laser induced breakdown spectroscopy

    International Nuclear Information System (INIS)

    El Haddad, J.; Bruyère, D.; Ismaël, A.; Gallou, G.; Laperche, V.; Michel, K.; Canioni, L.; Bousquet, B.

    2014-01-01

    Artificial neural networks were applied to process data from on-site LIBS analysis of soil samples. A first artificial neural network allowed retrieval of the relative amounts of silicate, calcareous and ore matrices in soils. As a consequence, each soil sample was correctly located inside the ternary diagram characterized by these three matrices, as verified by ICP-AES. Then a series of artificial neural networks were applied to quantify lead in soil samples. More precisely, two models were designed for classification purposes according to both the type of matrix and the range of lead concentrations. Then, three quantitative models were locally applied to three data subsets. This complete approach allowed reaching a relative error of prediction close to 20%, considered satisfactory in the case of on-site analysis. - Highlights: • Application of a series of artificial neural networks (ANN) to quantitative LIBS • Matrix-based classification of the soil samples by ANN • Concentration-based classification of the soil samples by ANN • Series of quantitative ANN models dedicated to the analysis of data subsets • Relative error of prediction lower than 20% for LIBS analysis of soil samples

  20. Stratified sampling design based on data mining.

    Science.gov (United States)

    Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung

    2013-09-01

    To explore classification rules based on data mining methodologies to be used in defining strata in stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics and then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011 in this study. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and the population density of the provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and the number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
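
    The two-step design translates directly into code: cluster providers on their profile variables, then fit a shallow decision tree on the cluster labels so the strata come with explicit, human-readable rules. The sketch below uses synthetic stand-ins for the study's variables.

    ```python
    # k-means strata + decision-tree rules for a sampling frame (illustrative).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(5)
    n = 500
    X = np.column_stack([
        rng.gamma(2.0, 30, n),     # inpatients per specialist (synthetic)
        rng.gamma(2.0, 300, n),    # population density of provider location
    ])

    clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, clusters)

    # The printed rules are the stratum definitions usable in a sampling frame.
    print(export_text(tree,
          feature_names=["inpatients_per_specialist", "population_density"]))
    ```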

  1. Theory of sampling and its application in tissue based diagnosis

    Directory of Open Access Journals (Sweden)

    Kayser Gian

    2009-02-01

    Full Text Available Abstract Background A general theory of sampling and its application in tissue based diagnosis is presented. Sampling is defined as extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on search for localization of specific compartments within the basic space, and search for presence of specific compartments. Methods When a sampling procedure is applied in diagnostic processes, two different procedures can be distinguished: (I) the evaluation of the diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and (II) the probability to detect these basic units. Sampling can be performed without or with external knowledge, such as size of searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation invariant transformation results in Kriege's formula, which is widely used in search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationship. The first method is called random sampling, the second stratified sampling. Results Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction, numerical, boundary and surface densities. Stratified sampling requires the knowledge of objects (and their features) and evaluates spatial features in relation to

  2. High efficiency graphene coated copper based thermocells connected in series

    Science.gov (United States)

    Sindhuja, Mani; Indubala, Emayavaramban; Sudha, Venkatachalam; Harinipriya, Seshadri

    2018-04-01

    Conversion of low-grade waste heat into electricity has so far been studied using single thermocells or flow cells. Graphene coated copper electrode based thermocells connected in series displayed relatively high efficiency of thermal energy harvesting. The maximum power output of 49.2 W/m² for normalized cross-sectional electrode area is obtained at 60 °C of inter-electrode temperature difference. A relative Carnot efficiency of 20.2% is obtained from the device. The importance of reducing the mass transfer and ion transfer resistance to improve the efficiency of the device is demonstrated. Degradation studies confirmed mild oxidation of the copper foil due to corrosion caused by the electrolyte.

  3. High Efficiency Graphene Coated Copper Based Thermocells Connected in Series

    Directory of Open Access Journals (Sweden)

    Mani Sindhuja

    2018-04-01

    Full Text Available Conversion of low-grade waste heat into electricity has so far been studied using single thermocells or flow cells. Graphene coated copper electrode based thermocells connected in series displayed relatively high efficiency of thermal energy harvesting. The maximum power output of 49.2 W/m² for normalized cross-sectional electrode area is obtained at 60 °C of inter-electrode temperature difference. A relative Carnot efficiency of 20.2% is obtained from the device. The importance of reducing the mass transfer and ion transfer resistance to improve the efficiency of the device is demonstrated. Degradation studies confirmed mild oxidation of the copper foil due to corrosion caused by the electrolyte.

  4. Sampling genetic diversity in the sympatrically and allopatrically speciating Midas cichlid species complex over a 16 year time series

    Directory of Open Access Journals (Sweden)

    Bunje Paul ME

    2007-02-01

    Full Text Available Abstract Background Speciation often occurs in complex or uncertain temporal and spatial contexts. Processes such as reinforcement, allopatric divergence, and assortative mating can proceed at different rates and with different strengths as populations diverge. The Central American Midas cichlid fish species complex is an important case study for understanding the processes of speciation. Previous analyses have demonstrated that allopatric processes led to species formation among the lakes of Nicaragua as well as sympatric speciation that is occurring within at least one crater lake. However, since speciation is an ongoing process and sampling genetic diversity of such lineages can be biased by collection scheme or random factors, it is important to evaluate the robustness of conclusions drawn on individual time samples. Results In order to assess the validity and reliability of inferences based on different genetic samples, we have analyzed fish from several lakes in Nicaragua sampled at three different times over 16 years. In addition, this time series allows us to analyze the population genetic changes that have occurred between lakes, where allopatric speciation has operated, as well as between different species within lakes, some of which have originated by sympatric speciation. Focusing on commonly used genetic markers, we have analyzed both DNA sequences from the complete mitochondrial control region as well as nuclear DNA variation at ten microsatellite loci from these populations, sampled thrice in a 16 year time period, to develop a robust estimate of the population genetic history of these diversifying lineages. Conclusion The conclusions from previous work are well supported by our comprehensive analysis. In particular, we find that the genetic diversity of derived crater lake populations is lower than that of the source population regardless of when and how each population was sampled. Furthermore, changes in various estimates of

  5. Multiscale sample entropy and cross-sample entropy based on symbolic representation and similarity of stock markets

    Science.gov (United States)

    Wu, Yue; Shang, Pengjian; Li, Yilong

    2018-03-01

    A modified multiscale sample entropy measure based on symbolic representation and similarity (MSEBSS) is proposed in this paper to research the complexity of stock markets. The modified algorithm reduces the probability of inducing undefined entropies and is confirmed to be robust to strong noise. Considering validity and accuracy, MSEBSS is more reliable than multiscale entropy (MSE) for time series mingled with much noise, like financial time series. We apply MSEBSS to financial markets and the results show that American stock markets have the lowest complexity compared with European and Asian markets. There are exceptions to the regularity that stock markets show a decreasing complexity over the time scale, indicating a periodicity at certain scales. Based on MSEBSS, we introduce the modified multiscale cross-sample entropy measure based on symbolic representation and similarity (MCSEBSS) to consider the degree of asynchrony between distinct time series. Stock markets from the same area have higher synchrony than those from different areas. For stock markets having relatively high synchrony, the entropy values will decrease with increasing scale factor, while for stock markets having high asynchrony, the entropy values will not decrease with increasing scale factor; sometimes they tend to increase. So both MSEBSS and MCSEBSS are able to distinguish stock markets of different areas, and they are more helpful if used together for studying other features of financial time series.

  6. Design-based estimators for snowball sampling

    OpenAIRE

    Shafie, Termeh

    2010-01-01

    Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...

  7. Series: Practical guidance to qualitative research : part 3: sampling, data collection and analysis

    NARCIS (Netherlands)

    Albine Moser; Irene Korstjens

    2017-01-01

    In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for

  8. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Meng Li

    2015-01-01

    Full Text Available This paper puts forward a prediction model based on a membrane computing optimization algorithm for chaotic time series; the model simultaneously optimizes the parameters of the phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the changing trend of parameters in the electromagnetic environment is an important basis for spectrum management, and can help decision makers adopt an optimal action. The model presented in this paper is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares the proposed forecast model with similar conventional models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
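
    The phase space reconstruction step that the membrane algorithm tunes is sketched below: the scalar series is delay-embedded with delay τ and dimension m, and the embedded vectors feed a kernel regressor. Kernel ridge regression (a close relative of the LS-SVM) is substituted here purely for brevity, and (τ, m) are fixed by hand rather than optimized.

    ```python
    # Delay embedding + kernel regression for one-step-ahead prediction (sketch).
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    def delay_embed(x, m, tau):
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

    # Logistic map as a stand-in chaotic series.
    x = np.empty(1200); x[0] = 0.4
    for i in range(1199):
        x[i + 1] = 3.9 * x[i] * (1 - x[i])

    m, tau = 3, 1                       # the paper tunes (tau, m) by optimization
    X = delay_embed(x[:-1], m, tau)
    y = x[(m - 1) * tau + 1:]           # one-step-ahead targets
    model = KernelRidge(alpha=1e-4, kernel="rbf", gamma=10.0)
    model.fit(X[:1000], y[:1000])
    print("test R^2:", model.score(X[1000:], y[1000:]))
    ```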

  9. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
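
    The subset-selection idea can be illustrated with scikit-learn's random forest importances: rank the variables, keep the top-ranked half, and check that accuracy barely drops. The data below are synthetic stand-ins for the 10-day composites.

    ```python
    # Variable-importance-based subset selection with a random forest (sketch).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    n, p, informative = 600, 36, 8          # 36 composite dates, 8 informative
    X = rng.normal(size=(n, p))
    y = (X[:, :informative].sum(axis=1) > 0).astype(int)   # land-cover label

    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]      # rank the dates

    full = cross_val_score(rf, X, y, cv=5).mean()
    half = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                           X[:, order[:p // 2]], y, cv=5).mean()
    print(f"full set: {full:.3f}  top half: {half:.3f}")   # nearly equal accuracy
    ```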

  10. Time series forecasting based on deep extreme learning machine

    NARCIS (Netherlands)

    Guo, Xuqi; Pang, Y.; Yan, Gaowei; Qiao, Tiezhu; Yang, Guang-Hong; Yang, Dan

    2017-01-01

    Multi-layer Artificial Neural Networks (ANN) have caught widespread attention as a new method for time series forecasting due to their ability to approximate any nonlinear function. In this paper, a new local time series prediction model is established with the nearest neighbor domain theory, in

  11. Financial time series analysis based on effective phase transfer entropy

    Science.gov (United States)

    Yang, Pengbo; Shang, Pengjian; Lin, Aijing

    2017-02-01

    Transfer entropy is a powerful technique which is able to quantify the impact of one dynamic system on another system. In this paper, we propose the effective phase transfer entropy method based on the transfer entropy method. We use simulated data to test the performance of this method, and the experimental results confirm that the proposed approach is capable of detecting the information transfer between the systems. We also explore the relationship between effective phase transfer entropy and some variables, such as data size, coupling strength and noise. The effective phase transfer entropy is positively correlated with the data size and the coupling strength. Even in the presence of a large amount of noise, it can detect the information transfer between systems, and it is very robust to noise. Moreover, this measure is indeed able to accurately estimate the information flow between systems compared with phase transfer entropy. In order to reflect the application of this method in practice, we apply this method to financial time series and gain new insight into the interactions between systems. It is demonstrated that the effective phase transfer entropy can be used to detect some economic fluctuations in the financial market. To summarize, the effective phase transfer entropy method is a very efficient tool to estimate the information flow between systems.

  12. Dual frequency modulation with two cantilevers in series: a possible means to rapidly acquire tip–sample interaction force curves with dynamic AFM

    International Nuclear Information System (INIS)

    Solares, Santiago D; Chawla, Gaurav

    2008-01-01

    One common application of atomic force microscopy (AFM) is the acquisition of tip–sample interaction force curves. However, this can be a slow process when the user is interested in studying non-uniform samples, because existing contact- and dynamic-mode methods require that the measurement be performed at one fixed surface point at a time. This paper proposes an AFM method based on dual frequency modulation using two cantilevers in series, which could be used to measure the tip–sample interaction force curves and topography of the entire sample with a single surface scan, in a time that is comparable to the time needed to collect a topographic image with current AFM imaging modes. Numerical simulation results are provided along with recommended parameters to characterize tip–sample interactions resembling those of conventional silicon tips and carbon nanotube tips tapping on silicon surfaces

  13. Mackenzie River Delta morphological change based on Landsat time series

    Science.gov (United States)

    Vesakoski, Jenni-Mari; Alho, Petteri; Gustafsson, David; Arheimer, Berit; Isberg, Kristina

    2015-04-01

    Arctic rivers are sensitive and still largely unexplored river systems on which climate change will have an impact. Research has not focused in detail on the fluvial geomorphology of Arctic rivers, mainly due to the remoteness and extent of the watersheds, problems with data availability, and difficult accessibility. Nowadays, wide collaborative spatial databases in hydrology as well as extensive remote sensing datasets over the Arctic are available, and they enable improved investigation of the Arctic watersheds. It is therefore important to develop and improve methods that enable detecting the fluvio-morphological processes based on the available data. Furthermore, it is essential to reconstruct and improve the understanding of past fluvial processes in order to better understand prevailing and future fluvial processes. In this study we sum up the fluvial geomorphological change in the Mackenzie River Delta during the last ~30 years. The Mackenzie River Delta (~13 000 km²) is situated in the Northwest Territories, Canada, where the Mackenzie River enters the Beaufort Sea, Arctic Ocean, near the city of Inuvik. The Mackenzie River Delta is a lake-rich, productive ecosystem and an ecologically sensitive environment. The research objective is achieved through two sub-objectives: 1) interpretation of the deltaic river channel planform change by applying a Landsat time series, and 2) definition of the variables that have impacted most on the detected changes by applying statistics and long hydrological time series derived from the Arctic-HYPE model (HYdrologic Predictions for Environment) developed by the Swedish Meteorological and Hydrological Institute. According to our satellite interpretation, field observations and statistical analyses, notable spatio-temporal changes have occurred in the morphology of the river channel and delta during the past 30 years. For example, the channels have been developing in braiding and sinuosity. In addition, various linkages between the studied

  14. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    Science.gov (United States)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series, daily Poaceae pollen concentrations over the period 2006-2014, was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed data series, and for this reason this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
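
    A hedged sketch of the two-stage model follows: STL separates a pollen-like series into seasonal and residual parts, then PLS regression relates the residuals to daily weather. The data are synthetic, and statsmodels' STL and scikit-learn's PLSRegression are assumed as implementations of the named techniques.

    ```python
    # STL decomposition + PLS regression on the residual component (sketch).
    import numpy as np
    from statsmodels.tsa.seasonal import STL
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(7)
    n, period = 9 * 365, 365
    day = np.arange(n)
    temp = 15 + 10 * np.sin(2 * np.pi * day / period) + rng.normal(0, 2, n)
    rain = np.maximum(rng.normal(1, 2, n), 0)
    pollen = (np.maximum(np.sin(2 * np.pi * day / period), 0) * 80
              + 2 * temp - 5 * rain + rng.normal(0, 5, n))

    decomp = STL(pollen, period=period, robust=True).fit()
    resid = decomp.resid                          # stochastic component

    XW = np.column_stack([temp, rain])
    pls = PLSRegression(n_components=2)
    pls.fit(XW[:-365], resid[:-365])              # train on all but the last year
    pred = pls.predict(XW[-365:]).ravel()         # predict the held-out year
    print("validation correlation:",
          round(np.corrcoef(pred, resid[-365:])[0, 1], 2))
    ```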

  15. Combining Different Conceptual Change Methods within Four-Step Constructivist Teaching Model: A Sample Teaching of Series and Parallel Circuits

    Science.gov (United States)

    Ipek, Hava; Calik, Muammer

    2008-01-01

    Based on students' alternative conceptions of the topics "electric circuits", "electric charge flows within an electric circuit", "how the brightness of bulbs and the resistance changes in series and parallel circuits", the current study aims to present a combination of different conceptual change methods within a four-step constructivist teaching…

  16. Generalized sample entropy analysis for traffic signals based on similarity measure

    Science.gov (United States)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on similarity distance, presents a different way of matching signal patterns, revealing distinct behaviors of complexity. Simulations are conducted on synthetic data and traffic signals to provide a comparative study showing the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation concerning the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.
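
    For reference, plain sample entropy, the baseline that the generalized measure modifies, counts template matches of length m and m+1 within tolerance r and takes the negative log of their ratio. Below is a textbook implementation, not the authors' similarity-based variant.

    ```python
    # Plain sample entropy: regular signals score low, noisy signals score high.
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        x = np.asarray(x, float)
        r *= x.std()                     # tolerance as a fraction of the SD
        def count(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            # Chebyshev distance between all template pairs
            d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
            return (np.sum(d <= r) - len(templates)) / 2.0   # pairs with i != j
        B, A = count(m), count(m + 1)
        return -np.log(A / B)

    rng = np.random.default_rng(8)
    print("white noise :", round(sample_entropy(rng.normal(size=500)), 2))
    print("sine wave   :", round(sample_entropy(np.sin(np.linspace(0, 40, 500))), 2))
    ```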

  17. Time Series Outlier Detection Based on Sliding Window Prediction

    Directory of Open Access Journals (Sweden)

    Yufeng Yu

    2014-01-01

    Full Text Available In order to detect outliers in hydrological time series data for improving data quality and decision-making quality related to design, operation, and management of water resources, this research develops a time series outlier detection method for hydrologic data that can be used to identify data that deviate from historical patterns. The method first builds a forecasting model on the historical data and then uses it to predict future values. Anomalies are assumed to take place if the observed values fall outside a given prediction confidence interval (PCI), which can be calculated from the predicted value and a confidence coefficient. The use of the PCI as threshold rests mainly on the fact that it considers the uncertainty in the data series parameters of the forecasting model, addressing the problem of suitable threshold selection. The method performs fast, incremental evaluation of data as it becomes available, scales to large quantities of data, and requires no preclassification of anomalies. Experiments with different real-world hydrologic time series showed that the proposed method is fast, correctly identifies abnormal data, and can be used for hydrologic time series analysis.
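
    A minimal sketch of the sliding-window scheme follows: each new value is predicted from a window of recent history (a simple linear-trend fit here, standing in for the paper's forecasting model), a prediction confidence interval is built from the window residuals, and observations outside it are flagged. The window length and confidence coefficient are illustrative.

    ```python
    # Sliding-window prediction + PCI-based outlier flagging (sketch).
    import numpy as np

    def detect_outliers(x, window=30, z=3.0):
        flags = np.zeros(len(x), bool)
        for i in range(window, len(x)):
            hist = x[i - window:i]
            coef = np.polyfit(np.arange(window), hist, 1)   # linear-trend model
            pred = np.polyval(coef, window)                 # one-step prediction
            resid_sd = np.std(hist - np.polyval(coef, np.arange(window)))
            # PCI = pred +/- z * residual standard deviation
            flags[i] = abs(x[i] - pred) > z * resid_sd
        return flags

    rng = np.random.default_rng(9)
    flow = 100 + 10 * np.sin(np.arange(365) / 20) + rng.normal(0, 1, 365)
    flow[[120, 250]] += 15                       # inject two anomalies
    print(np.flatnonzero(detect_outliers(flow)))
    ```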

  18. Power Forecasting of Combined Heating and Cooling Systems Based on Chaotic Time Series

    Directory of Open Access Journals (Sweden)

    Liu Hai

    2015-01-01

    Full Text Available Theoretical analysis shows that the output power of a distributed generation system is nonlinear and chaotic, and that it is coupled with the microenvironment meteorological data. Chaos is an inherent property of nonlinear dynamic systems. A predictor of the output power of the distributed generation system is built by establishing a nonlinear model of the dynamic system, based on real time series, in the reconstructed phase space. Firstly, chaos should be detected and quantified for the intensive study of nonlinear systems: if the largest Lyapunov exponent is positive, the dynamical system must be chaotic. Then, the embedding dimension and the delay time are chosen based on the improved C-C method, and the attractor of the chaotic power time series can be reconstructed in the phase space from the embedding dimension and delay time. The neural network can then be trained on training samples observed from the distributed generation system, and the resulting model will approximate the curve of output power adequately. Experimental results show that the maximum power point of the distributed generation system can be predicted based on the meteorological data, and the system can be controlled effectively based on the prediction.

  19. Time-Scale and Time-Frequency Analyses of Irregularly Sampled Astronomical Time Series

    Directory of Open Access Journals (Sweden)

    S. Roques

    2005-09-01

    Full Text Available We evaluate the quality of spectral restoration in the case of irregularly sampled signals in astronomy. We study in detail a time-scale method leading to a global wavelet spectrum comparable to the Fourier periodogram, and a time-frequency matching pursuit allowing us to identify the frequencies and to control the error propagation. In both cases, the signals are first resampled with a linear interpolation. Both results are compared with those obtained using Lomb's periodogram and using the weighted wavelet Z-transform developed in astronomy for unevenly sampled variable star observations. These approaches are applied to simulations and to the light variations of four variable stars. This leads to the conclusion that the matching pursuit is more efficient for recovering the spectral contents of a pulsating star, even with a preliminary resampling. In particular, the results are almost independent of the quality of the initial irregular sampling.
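
    Lomb's periodogram, one of the comparison baselines above, handles uneven sampling directly with no resampling step. A sketch using scipy's implementation on a synthetic irregularly sampled "star" signal:

    ```python
    # Lomb periodogram of an unevenly sampled sinusoid (synthetic sketch).
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(10)
    t = np.sort(rng.uniform(0, 100, 300))        # irregular observation times
    f_true = 0.73                                # cycles per unit time
    y = np.sin(2 * np.pi * f_true * t) + 0.3 * rng.normal(size=len(t))

    freqs = np.linspace(0.01, 2.0, 2000)         # trial frequencies (cycles/time)
    pgram = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)   # angular freqs
    print("peak at f =", round(freqs[np.argmax(pgram)], 3))   # ~0.73
    ```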

  20. Grammar-based feature generation for time-series prediction

    CERN Document Server

    De Silva, Anthony Mihirana

    2015-01-01

    This book proposes a novel approach for time-series prediction using machine learning techniques with automatic feature generation. Application of machine learning techniques to predict time series continues to attract considerable attention due to the difficulty of the prediction problems, compounded by the non-linear and non-stationary nature of real-world time series. The performance of machine learning techniques, among other things, depends on suitable engineering of features. This book proposes a systematic way of generating suitable features using a context-free grammar. A number of feature selection criteria are investigated and a hybrid feature generation and selection algorithm using grammatical evolution is proposed. The book contains graphical illustrations to explain the feature generation process. The proposed approaches are demonstrated by predicting the closing price of major stock market indices, peak electricity load and net hourly foreign exchange client trade volume. The proposed method ...

  1. An advection-based model to increase the temporal resolution of PIV time series.

    Science.gov (United States)

    Scarano, Fulvio; Moore, Peter

    A numerical implementation of the advection equation is proposed to increase the temporal resolution of PIV time series. The method is based on the principle that velocity fluctuations are transported passively, similar to Taylor's hypothesis of "frozen turbulence". In the present work, the advection model is extended to unsteady three-dimensional flows. The main objective of the method is that of lowering the requirement on the PIV repetition rate from the Eulerian frequency toward the Lagrangian one. The local trajectory of the fluid parcel is obtained by forward projection of the instantaneous velocity at the preceding time instant and backward projection from the subsequent time step. The trajectories are approximated by the instantaneous streamlines, which yields accurate results when the amplitude of velocity fluctuations is small with respect to the convective motion. The verification is performed with two experiments conducted at temporal resolutions significantly higher than that dictated by the Nyquist criterion. The flow past the trailing edge of a NACA0012 airfoil closely approximates "frozen turbulence", where the largest ratio between the Lagrangian and Eulerian temporal scales is expected. An order of magnitude reduction of the needed acquisition frequency is demonstrated by the velocity spectra of super-sampled series. The application to three-dimensional data is made with time-resolved tomographic PIV measurements of a transitional jet. Here, the 3D advection equation is implemented to estimate the fluid trajectories. The reduction in the minimum sampling rate by the use of super-sampling in this case is less, due to the fact that vortices occurring in the jet shear layer are not well approximated by sole advection at large time separation. Both cases reveal that the current requirements for time-resolved PIV experiments can be revised when information is poured from space to time. An additional favorable effect is observed by the analysis in the
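
    A one-dimensional toy version conveys the super-sampling idea: an intermediate field between two PIV snapshots is estimated by transporting the frozen fluctuation pattern with the convective velocity, forward from the earlier snapshot and backward from the later one, then blending. This is a didactic reduction of the 3D scheme, with all parameters illustrative.

    ```python
    # 1D advection-based super-sampling between two "snapshots" (toy sketch).
    import numpy as np

    def supersample(x, u0, u1, c, alpha):
        """Estimate the field at fraction alpha between snapshots u0 and u1."""
        fwd = np.interp(x - c * alpha, x, u0)          # u0 advected forward
        bwd = np.interp(x + c * (1 - alpha), x, u1)    # u1 advected backward
        return (1 - alpha) * fwd + alpha * bwd         # blended estimate

    def wave(xx):                                      # frozen fluctuation pattern
        return np.sin(2 * np.pi * xx / 2.5)

    x = np.linspace(0, 10, 200)
    c = 1.0                                            # convection speed
    u0, u1 = wave(x), wave(x - c * 1.0)                # two snapshots, dt = 1
    mid_est = supersample(x, u0, u1, c, 0.5)
    mid_true = wave(x - c * 0.5)
    # Compare away from the domain edges, where interpolation clamps.
    print("max interior error:", np.abs(mid_est - mid_true)[20:-20].max())
    ```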

  2. Time Series Analysis of Non-Gaussian Observations Based on State Space Models from Both Classical and Bayesian Perspectives

    NARCIS (Netherlands)

    Durbin, J.; Koopman, S.J.M.

    1998-01-01

    The analysis of non-Gaussian time series using state space models is considered from both classical and Bayesian perspectives. The treatment in both cases is based on simulation using importance sampling and antithetic variables; Monte Carlo Markov chain methods are not employed. Non-Gaussian

  3. Measurement of radionuclide activities of uranium-238 series in soil samples by gamma spectrometry: case of Vinaninkarena

    International Nuclear Information System (INIS)

    Randrianantenaina, F.R.

    2017-01-01

    The aim of this work is to determine the activity level of radionuclides of the uranium-238 series. Eight soil samples were collected at the Rural Commune of Vinaninkarena. After secular equilibrium was obtained, these samples were measured using the gamma spectrometry system in the Nuclear Analyses and Techniques Department of INSTN-Madagascar, with an HPGe detector (30% relative efficiency) and Genie 2000 software. The activities obtained vary from (78 ± 2) Bq·kg⁻¹ to (49 231 ± 415) Bq·kg⁻¹. Among these eight samples, three activity levels are observed. Low activity is an activity whose value is lower than or equal to (89 ± 3) Bq·kg⁻¹. Average activity is an activity whose value lies between (186 ± 1) Bq·kg⁻¹ and (1049 ± 7) Bq·kg⁻¹. And high activity is an activity whose value is higher than or equal to (14 501 ± 209) Bq·kg⁻¹. According to UNSCEAR 2000, these values are all higher than the world average value, which is 35 Bq·kg⁻¹; this is due to the localities of the sampling points. The variation of the activity level depends on the radionuclide concentration of the uranium-238 series in the soil.

  4. Parametric Identification of Solar Series based on an Adaptive ...

    Indian Academy of Sciences (India)

    Department of Computer Science, University of Extremadura, Campus ... els, applying it to the case of sunspot series. ... inspired by the concept of artificial evolution (Goldberg 1989) (Rechenberg 1973) and ... benchmark when na = 5. ... The conclusion is that an accurate tuning for general purposes could be from na = 5, although.

  5. [Winter wheat area estimation with MODIS-NDVI time series based on parcel].

    Science.gov (United States)

    Li, Le; Zhang, Jin-shui; Zhu, Wen-quan; Hu, Tan-gao; Hou, Dong

    2011-05-01

    Several attributes of MODIS (Moderate Resolution Imaging Spectroradiometer) data, especially the short temporal intervals and the global coverage, provide an extremely efficient way to map cropland and monitor its seasonal change. However, the reliability of the measurement results is challenged by the limited spatial resolution. Parcel data have clear geo-location and obvious boundary information for cropland, and the spectral differences and the complexity of mixed pixels are weak within parcels. All of this makes area estimation based on parcels more advantageous than estimation based on pixels. In the present study, winter wheat area estimation based on MODIS-NDVI time series was performed with the support of cultivated land parcels in Tongzhou, Beijing. In order to extract the regional winter wheat acreage, multiple regression methods were used to establish a stable regression relationship between the MODIS-NDVI time series data and TM samples within parcels. In this way, the consistency of the extraction results from MODIS and TM stably reaches up to 96% when the samples account for 15% of the whole area. The results show that the use of parcel data can effectively reduce the recognition errors in MODIS-NDVI based multi-series data caused by the low spatial resolution. Therefore, by combining moderate and low resolution data, winter wheat area estimation becomes available for large-scale regions that lack complete medium resolution coverage or have images covered with clouds. Meanwhile, this work serves as a preliminary experiment for the area estimation of other crops.

  6. Template-Based Sampling of Anisotropic BRDFs

    Czech Academy of Sciences Publication Activity Database

    Filip, Jiří; Vávra, Radomír

    2014-01-01

    Roč. 33, č. 7 (2014), s. 91-99 ISSN 0167-7055. [Pacific Graphics 2014. Seoul, 08.10.2014-10.10.2014] R&D Projects: GA ČR(CZ) GA14-02652S; GA ČR(CZ) GA14-10911S; GA ČR GAP103/11/0335 Institutional support: RVO:67985556 Keywords : BRDF database * material appearance * sampling * measurement Subject RIV: BD - Theory of Information Impact factor: 1.642, year: 2014 http://library.utia.cas.cz/separaty/2014/RO/filip-0432894.pdf

  7. Arrangement to measure the nuclear radiation of a series of radioactive samples

    International Nuclear Information System (INIS)

    Lohr, W.; Berthold, F.; Allington, R.W.

    1976-01-01

    Samples to be counted for radioactivity are contained in vials placed in rows in magazines and held in position by a grip. The magazines are positioned on a track and automatically shifted in a closed circuit, each at some stage reaching a working station. Here the magazine advances in discrete steps so that the bottom of each vial in turn is placed exactly above a small piston; the grip is released, the piston lowered, and the vial slid down to a counting station. After counting, the vial is returned to the magazine and gripped, and the magazine moves another step. (DVP) [de]

  8. Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.

    Science.gov (United States)

    Jiang, Zhixing; Zhang, David; Lu, Guangming

    2018-04-19

    Radial artery pulse diagnosis has long played an important role in traditional Chinese medicine (TCM). Because it is non-invasive and convenient, pulse diagnosis also has great significance for disease analysis in modern medicine. Practitioners sense the pulse waveforms at patients' wrists and make diagnoses based on subjective personal experience. With the development of pulse acquisition platforms and computerized analysis methods, objective study of pulse diagnosis can help TCM keep up with modern medicine. In this paper, we propose a new method to extract features from pulse waveforms based on the discrete Fourier series (DFS). It regards the waveform as a signal composed of a series of sub-components represented by sine and cosine (SC) signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample with a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with fitting using a Gaussian mixture function, the fitting errors of the proposed method are smaller, indicating that our method represents the original signal better. The classification performance of the proposed feature is superior to other features extracted from the waveform, such as the auto-regressive model and the Gaussian mixture model. The coefficients of the optimized DFS function used to fit the arterial pressure waveforms thus model the waveforms better and hold more potential information for distinguishing different physiological states. Copyright © 2018 Elsevier B.V. All rights reserved.
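
    A minimal sketch of the DFS least-squares fit described above, assuming a single preprocessed average waveform per sample; the truncation order and the toy waveform are arbitrary choices for illustration:

        import numpy as np

        def dfs_features(pulse, order=10):
            """Least-squares fit of a discrete Fourier series to one averaged
            pulse waveform; the fitted coefficients form the feature vector."""
            n = len(pulse)
            t = np.arange(n) / n                       # one pulse period mapped to [0, 1)
            cols = [np.ones(n)]
            for k in range(1, order + 1):              # sine/cosine sub-components
                cols.append(np.cos(2 * np.pi * k * t))
                cols.append(np.sin(2 * np.pi * k * t))
            A = np.column_stack(cols)
            coef, *_ = np.linalg.lstsq(A, pulse, rcond=None)
            rms = np.sqrt(np.mean((pulse - A @ coef) ** 2))
            return coef, rms                           # features, RMS fitting error

        # Toy waveform standing in for an averaged radial-pulse period.
        t = np.linspace(0, 1, 500, endpoint=False)
        pulse = np.exp(-((t - 0.2) / 0.05) ** 2) + 0.4 * np.exp(-((t - 0.5) / 0.1) ** 2)
        coef, err = dfs_features(pulse)
        print(len(coef), "coefficients, RMS error:", err)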

  9. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    Science.gov (United States)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
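
    The paper's central caution, that unrelated steep red-noise light curves often show high cross-correlations, can be reproduced in miniature. The sketch below uses random-phase power-law simulation on evenly sampled series (no uneven sampling, interpolation, or Hanning window, which the paper does handle); all parameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(1)

        def powerlaw_series(n, beta):
            """Simulate a series with power spectral density ~ f^(-beta)
            (random phases, amplitudes set by the PSD)."""
            f = np.fft.rfftfreq(n)[1:]
            amp = f ** (-beta / 2.0)
            spec = amp * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))
            x = np.fft.irfft(np.concatenate(([0], spec)), n)
            return (x - x.mean()) / x.std()

        def max_ccf(x, y):
            """Maximum of the normalized cross-correlation over all lags."""
            c = np.correlate(x, y, mode="full") / len(x)
            return np.max(np.abs(c))

        # Null distribution: cross-correlate many pairs of UNRELATED steep red-noise
        # light curves; high values are common, so a high CCF alone is weak evidence.
        null = [max_ccf(powerlaw_series(512, 2.5), powerlaw_series(512, 2.5))
                for _ in range(500)]
        print("95% significance threshold for |CCF|:", np.quantile(null, 0.95))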

  10. A Two-Dimensional Solar Tracking Stationary Guidance Method Based on Feature-Based Time Series

    Directory of Open Access Journals (Sweden)

    Keke Zhang

    2018-01-01

    The amount of energy a satellite acquires has a direct impact on its operational capacity. For practical high-functional-density microsatellites, the solar tracking guidance design of the solar panels plays an extremely important role. Targeting the stationary tracking problem in a new system that mounts the panels on a two-dimensional turntable to acquire the greatest possible energy, a two-dimensional solar tracking stationary guidance method based on feature-based time series was proposed under the constraint of limited satellite attitude coupling control capability. By analyzing the variation of the solar vector within an orbit period and over the whole life cycle, the method establishes a two-dimensional solar tracking guidance model based on feature-based time series, realizing automatic switching of the feature-based time series and stationary guidance under different β angles and maximum angular velocity control; it is applicable to near-earth orbits of all inclinations. The method was employed to design a two-dimensional solar tracking stationary guidance system, and a mathematical simulation of guidance performance was carried out under diverse in-orbit conditions. The simulation results show that the solar tracking accuracy of the two-dimensional stationary guidance reaches 10° or better under the integrated constraints, which meets engineering application requirements.

  11. Financial time series analysis based on information categorization method

    Science.gov (United States)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper applies the information categorization method to analyze financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply it to quantify the similarity of different stock markets, and we report the similarity of US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between the markets differs across time periods, and that the similarity of the two stock markets became larger after these two crises. We also obtain the similarity of 10 stock indices in three areas; the phylogenetic trees built from the distances can distinguish the markets of different areas. The results show that we can extract useful information from financial markets by this method. The information categorization method can be used not only on physiologic time series, but also on financial time series.
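
    The paper's information-based distance is not reproduced here; the sketch below substitutes a simple symbolic-histogram distance to illustrate the overall workflow of building a dissimilarity matrix between return series and deriving a phylogenetic-style tree from it (toy data, hypothetical index names):

        import numpy as np
        from scipy.cluster.hierarchy import linkage, dendrogram
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(2)

        def symbol_hist(returns, word_len=3):
            """Histogram of up/down words of length word_len (a crude stand-in
            for the paper's information-based characterization)."""
            sym = (returns > 0).astype(int)
            words = [tuple(sym[i:i + word_len]) for i in range(len(sym) - word_len)]
            keys = [tuple(map(int, np.binary_repr(k, word_len)))
                    for k in range(2 ** word_len)]
            counts = np.array([words.count(k) for k in keys], dtype=float)
            return counts / counts.sum()

        indices = {name: rng.normal(0, 1 + 0.2 * i, 300) for i, name in
                   enumerate(["US1", "US2", "CN1", "CN2"])}   # toy index returns
        names = list(indices)
        n = len(names)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                p, q = symbol_hist(indices[names[i]]), symbol_hist(indices[names[j]])
                D[i, j] = D[j, i] = np.abs(p - q).sum()       # L1 histogram distance

        # A phylogenetic-style tree from the pairwise distance matrix.
        tree = linkage(squareform(D), method="average")
        print(dendrogram(tree, labels=names, no_plot=True)["ivl"])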

  12. Optimize the Coverage Probability of Prediction Interval for Anomaly Detection of Sensor-Based Monitoring Series

    Directory of Open Access Journals (Sweden)

    Jingyue Pang

    2018-03-01

    Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels and provide an uncertainty presentation, probability prediction methods (e.g., Gaussian process regression (GPR) and the relevance vector machine (RVM)) are especially suitable for anomaly detection in sensing series. Generally, one key parameter of prediction models is the coverage probability (CP), which controls the judging threshold of the testing sample and is usually set to a default value (e.g., 90% or 95%). There are few criteria for determining the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator, the receiver operating characteristic curve of the prediction interval (ROC-PI), based on the definition of the ROC curve, which depicts the trade-off between PI width and PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs, and the optimal CP is derived by minimizing it with the simulated annealing (SA) algorithm. Experiments conducted on two simulation datasets demonstrate the validity of the proposed method. In particular, an actual case study on sensing series from an on-orbit satellite illustrates its significant performance in practical application.
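
    As a toy illustration of CP selection, the sketch below replaces simulated annealing with a plain grid search and uses the classical Youden index J = TPR - FPR (maximized, rather than the paper's modified, minimized variant), with Gaussian prediction intervals assumed throughout:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)

        # Toy setting: the predictor outputs mean 0 and sigma 1 for each test
        # point; anomalies are shifted samples.
        normal_pts = rng.normal(0, 1, 2000)
        anomalies = rng.normal(4, 1, 100)

        best = (None, -np.inf)
        for cp in np.linspace(0.80, 0.999, 50):
            half_width = norm.ppf(0.5 + cp / 2)            # PI half-width for this CP
            tpr = np.mean(np.abs(anomalies) > half_width)  # anomalies flagged
            fpr = np.mean(np.abs(normal_pts) > half_width) # normal points flagged
            youden = tpr - fpr                             # Youden index J = TPR - FPR
            if youden > best[1]:
                best = (cp, youden)
        print("optimal CP ~= %.3f (Youden J = %.3f)" % best)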

  13. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  14. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    International Nuclear Information System (INIS)

    Albers, D.J.; Hripcsak, George

    2012-01-01

    Highlights: ► Time-delayed mutual information for irregularly sampled time-series. ► Estimation bias for the time-delayed mutual information calculation. ► Fast, simple, PDF estimator independent, time-delayed mutual information bias estimate. ► Quantification of data-set-size limits of the time-delayed mutual calculation. - Abstract: A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.
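
    A minimal sketch of the idea, using a plain histogram (plug-in) estimator: the time-delayed mutual information at a very large delay, where the true value should be near zero, serves as the bias estimate (toy AR(1) data in place of the Lorenz or glucose series):

        import numpy as np

        def mutual_info(x, y, bins=16):
            """Plug-in (histogram) estimate of mutual information in nats."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

        def delayed_mi(x, tau, bins=16):
            return mutual_info(x[:-tau], x[tau:], bins)

        rng = np.random.default_rng(4)
        x = np.empty(5000)                  # toy stationary autocorrelated series
        x[0] = 0.0
        for i in range(1, x.size):
            x[i] = 0.95 * x[i - 1] + rng.normal()

        # Bias estimate: MI between points separated by a very large delay, where
        # the true time-delayed MI should have decayed to zero (the paper's argument).
        bias = delayed_mi(x, tau=2000)
        print("TDMI(tau=10) =", delayed_mi(x, 10), " estimated bias =", bias)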

  15. Performance enhancement of the single-phase series active filter by employing the load voltage waveform reconstruction and line current sampling delay reduction methods

    DEFF Research Database (Denmark)

    Senturk, O.S.; Hava, A.M.

    2011-01-01

    This paper proposes the waveform reconstruction method (WRM), which is utilized in the single-phase series active filter's (SAF's) control algorithm, in order to extract the load harmonic voltage component of voltage-harmonic-type single-phase diode rectifier loads. Employing WRM and the line current sampling delay reduction method, a single-phase SAF compensated system provides higher harmonic isolation performance and higher stability margins compared to a system using conventional synchronous-reference-frame-based methods. The analytical, simulation, and experimental studies of a 2.5 k…

  16. Measuring Coupling of Rhythmical Time Series Using Cross Sample Entropy and Cross Recurrence Quantification Analysis

    Directory of Open Access Journals (Sweden)

    John McCamley

    2017-01-01

    The aim of this investigation was to compare and contrast the use of cross sample entropy (xSE) and cross recurrence quantification analysis (cRQA) measures for assessing the coupling of rhythmical patterns. The measures were assessed using simulated signals with regular, chaotic, and random fluctuations in frequency, amplitude, and a combination of both. Biological data were studied as models of normal and abnormal locomotor-respiratory coupling. Nine signal types were generated for seven frequency ratios. Fifteen patients with COPD (abnormal coupling) and twenty-one healthy controls (normal coupling) walked on a treadmill at three speeds while breathing and walking were recorded. xSE and the cRQA measures of percent determinism, maximum line, mean line, and entropy were quantified for both the simulated and experimental data. In the simulated data, xSE, percent determinism, and entropy were influenced by the frequency manipulation. The 1:1 frequency ratio differed from the other frequency ratios for almost all measures and/or manipulations. The patients with COPD used a 2:3 ratio more often, and xSE, percent determinism, maximum line, mean line, and cRQA entropy were able to discriminate between the groups. Analysis of the effects of walking speed indicated that all measures were able to discriminate between speeds.
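
    A compact implementation sketch of cross sample entropy for two standardized series, following the usual template-matching definition (Chebyshev distance, tolerance r, template length m); the rhythmic toy signals stand in for the walking and breathing recordings:

        import numpy as np

        def cross_sample_entropy(x, y, m=2, r=0.2):
            """Cross sample entropy (xSE): -ln of the ratio of cross-template
            matches of length m+1 to matches of length m."""
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            n = min(len(x), len(y))

            def count_matches(m):
                tx = np.array([x[i:i + m] for i in range(n - m)])
                ty = np.array([y[i:i + m] for i in range(n - m)])
                # Chebyshev distance between every template of x and of y
                d = np.max(np.abs(tx[:, None, :] - ty[None, :, :]), axis=2)
                return np.sum(d < r)

            return -np.log(count_matches(m + 1) / count_matches(m))

        rng = np.random.default_rng(5)
        t = np.arange(1000)
        walk = np.sin(0.1 * t) + 0.1 * rng.normal(size=t.size)    # toy "walking" rhythm
        breath = np.sin(0.1 * t + 0.3) + 0.1 * rng.normal(size=t.size)
        print("xSE:", cross_sample_entropy(walk, breath))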

  17. Sample classroom activities based on climate science

    Science.gov (United States)

    Miler, T.

    2009-09-01

    We present several activities developed for middle school education based on climate science. The first activity was designed to teach about ocean acidification. A simple experiment can show that absorption of CO2 in water increases its acidity; a liquid pH indicator is suitable for the demonstration in a classroom. The second activity uses data containing the coordinates of a hurricane's position. Pupils draw the path of the hurricane eye in a tracking chart (a map of the Atlantic ocean), calculate the average speed of the hurricane, and investigate the development of its direction and intensity. The third activity uses pictures of the Arctic ocean in September, when the ice extent is usually at its lowest. Students measure the ice extent for several years using a square grid printed on a plastic foil, then plot a graph and discuss the results. All these activities can be used to improve natural science education and increase climate change literacy.

  18. Multiscale multifractal multiproperty analysis of financial time series based on Rényi entropy

    Science.gov (United States)

    Yujun, Yang; Jianping, Li; Yimei, Yang

    This paper introduces a multiscale multifractal multiproperty analysis based on Rényi entropy (3MPAR) method to analyze short-range and long-range characteristics of financial time series, and then applies this method to five time series of five properties in four stock indices. Combining the two analysis techniques of Rényi entropy and multifractal detrended fluctuation analysis (MFDFA), the 3MPAR method focuses on the curves of Rényi entropy and the generalized Hurst exponent of five properties of the four stock time series, which allows us to study more universal and subtle fluctuation characteristics of financial time series. By analyzing the curves of the Rényi entropy and the profiles of the logarithm distribution of MFDFA of five properties of four stock indices, the 3MPAR method reveals fluctuation characteristics of the financial time series and the stock markets. It also yields richer information about the financial time series by comparing the profiles of the five properties of the four stock indices. In this paper, we focus not only on the multifractality of time series but also on the fluctuation characteristics of financial time series and the subtle differences between the time series of different properties. We find that financial time series are far more complex than reported in research works that use only one property of the time series.
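
    A minimal sketch of the Rényi entropy ingredient of 3MPAR, computed from a histogram estimate of the distribution and swept over the order q (the MFDFA part is not reproduced here):

        import numpy as np

        def renyi_entropy(x, q, bins=50):
            """Rényi entropy of order q from a histogram estimate of the PDF."""
            p, _ = np.histogram(x, bins=bins)
            p = p[p > 0] / p.sum()
            if np.isclose(q, 1.0):                   # q -> 1 recovers Shannon entropy
                return -np.sum(p * np.log(p))
            return np.log(np.sum(p ** q)) / (1.0 - q)

        rng = np.random.default_rng(6)
        returns = rng.standard_t(df=3, size=5000)    # heavy-tailed toy "returns"

        # The Rényi entropy curve over q emphasizes different parts of the
        # distribution: q < 1 weights rare (tail) events, q > 1 frequent ones.
        for q in (0.5, 1.0, 2.0, 5.0):
            print("q = %.1f  H_q = %.3f" % (q, renyi_entropy(returns, q)))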

  19. SEM Based CARMA Time Series Modeling for Arbitrary N.

    Science.gov (United States)

    Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.
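
    ctsem is an R package, so the sketch below instead illustrates in Python the exact discrete-time form of a continuous-time AR(1) process that underlies such CARMA/SEM analyses; parameters and panel dimensions mirror the simulation described above but are otherwise arbitrary:

        import numpy as np

        rng = np.random.default_rng(7)

        # Continuous-time AR(1): dx = a*x dt + s dW, with drift a < 0.
        # Its exact discrete-time form for a step dt is
        #   x[t+dt] = exp(a*dt) * x[t] + e,  e ~ N(0, s^2 (exp(2a*dt) - 1) / (2a)),
        # which is what CARMA/SEM approaches estimate from panel data.
        a, s, dt, T = -0.5, 1.0, 1.0, 41
        phi = np.exp(a * dt)                           # discrete autoregression
        q = s**2 * (np.exp(2 * a * dt) - 1) / (2 * a)  # discrete innovation variance

        def simulate_person():
            x = np.empty(T)
            x[0] = rng.normal(0, np.sqrt(-s**2 / (2 * a)))   # stationary start
            for t in range(1, T):
                x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(q))
            return x

        panel = np.array([simulate_person() for _ in range(1000)])  # N=1000, T=41
        print("lag-1 autocorrelation ~= exp(a*dt) =", phi, "empirical:",
              np.corrcoef(panel[:, :-1].ravel(), panel[:, 1:].ravel())[0, 1])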

  20. Identification of Dynamic Loads Based on Second-Order Taylor-Series Expansion Method

    OpenAIRE

    Li, Xiaowang; Deng, Zhongmin

    2016-01-01

    A new method based on the second-order Taylor-series expansion is presented to identify structural dynamic loads in the time domain. This algorithm expresses the response vectors as a Taylor-series approximation and then deduces a series of formulas. As a result, an explicit discrete equation which associates system response, system characteristics, and input excitation together is set up. In a multi-input-multi-output (MIMO) numerical simulation study, sinusoidal excitation and white noise...
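
    The core step can be written as a generic second-order Taylor expansion combined with the structural equation of motion (a standard form, shown here for illustration rather than as the authors' exact derivation; M, C, K are the mass, damping and stiffness matrices and F the load to be identified):

        x(t+\Delta t) \approx x(t) + \Delta t\,\dot{x}(t) + \frac{\Delta t^{2}}{2}\,\ddot{x}(t),
        \qquad M\ddot{x}(t) + C\dot{x}(t) + Kx(t) = F(t).
        % Substituting \ddot{x} = M^{-1}\bigl(F - C\dot{x} - Kx\bigr) into the expansion
        % gives an explicit discrete relation between successive responses and the load,
        % which can be inverted step by step to identify F(t).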

  1. Trend analysis using non-stationary time series clustering based on the finite element method

    OpenAIRE

    Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.

    2014-01-01

    In order to analyze low-frequency variability of climate, it is useful to model the climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering in a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods ...

  2. Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay

    International Nuclear Information System (INIS)

    Wang Na; Wu Zhi-Hai; Peng Li

    2014-01-01

    In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive the necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results. (interdisciplinary physics and related areas of science and technology)
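
    As a toy illustration (not the paper's heterogeneous, delayed setting), the sketch below simulates first-order sampled-data consensus on a fixed undirected ring graph with no sampling delay:

        import numpy as np

        # Four homogeneous first-order agents on an undirected ring.
        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], float)   # adjacency matrix
        L = np.diag(A.sum(1)) - A              # graph Laplacian
        h = 0.2                                # sampling period (small enough for stability)

        x = np.array([1.0, -2.0, 0.5, 3.0])    # initial states
        for _ in range(100):
            x = x - h * L @ x                  # x(k+1) = x(k) - h * L x(k)
        print("states after 100 sampling periods:", x.round(4))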

  3. Stock price forecasting based on time series analysis

    Science.gov (United States)

    Chi, Wan Le

    2018-05-01

    Using historical stock price data, a sequence model that explains the intrinsic relationships within the data can be set up and used to forecast future prices. The models used are the auto-regressive (AR) model, the moving-average (MA) model, and the auto-regressive moving-average (ARMA) model. A unit root test was applied to judge whether the original data sequence was stationary. A non-stationary original sequence was first-order differenced and the stationarity of the differenced sequence re-inspected; if still non-stationary, second-order differencing was carried out. The autocorrelation and partial autocorrelation diagrams were used to identify the parameters of the ARMA model, including the model coefficients and the model order. Finally, the model was used to forecast the Shanghai Composite Index daily closing price with good precision. Results showed that the non-stationary original data series was stationary after second-order differencing, and the forecast values of the Shanghai Composite Index daily closing price were close to the actual values, indicating that the ARMA model in the paper has a certain accuracy.
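
    A minimal sketch of this workflow with statsmodels: ADF unit-root testing, differencing until stationarity, then an ARIMA fit and forecast. The price series is synthetic and the (p, q) orders are placeholders for values read off the ACF/PACF diagrams:

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.stattools import adfuller
        from statsmodels.tsa.arima.model import ARIMA

        # Hypothetical daily closing prices standing in for the Shanghai Composite.
        rng = np.random.default_rng(8)
        close = pd.Series(3000 + np.cumsum(np.cumsum(rng.normal(0, 1, 400))) * 0.01)

        # Difference until the ADF test rejects non-stationarity (max d = 2).
        d, series = 0, close
        while adfuller(series.dropna())[1] > 0.05 and d < 2:
            series, d = series.diff(), d + 1
        print("order of differencing d =", d)

        # Fit ARIMA(p, d, q) and forecast five steps ahead.
        model = ARIMA(close, order=(1, d, 1)).fit()
        print(model.forecast(steps=5))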

  4. Visibility graph analysis on quarterly macroeconomic series of China based on complex network theory

    Science.gov (United States)

    Wang, Na; Li, Dong; Wang, Qiwen

    2012-12-01

    The visibility graph approach and complex network theory provide a new insight into time series analysis. The inheritance of the visibility graph from the original time series was further explored in the paper. We found that degree distributions of visibility graphs extracted from Pseudo Brownian Motion series obtained by the Frequency Domain algorithm exhibit exponential behaviors, in which the exponential exponent is a binomial function of the Hurst index inherited from the time series. Our simulations show that the quantitative relations between the Hurst indexes and the exponents of the degree distribution function differ between series, and that the visibility graph inherits some important features of the original time series. Further, we convert several quarterly macroeconomic series of China, including the growth rates of value-added of three industries and the growth rate of Gross Domestic Product, to graphs by the visibility algorithm, and explore the topological properties of the associated graphs, namely the degree distribution and correlations, the clustering coefficient, the average path length, and community structure. Based on complex network analysis we find that the degree distributions of the networks associated with the growth rates of value-added of the three industries are almost exponential, while those associated with the GDP growth rates are scale free. We also discuss the assortativity and disassortativity of the four associated networks as they relate to the evolutionary process of the original macroeconomic series. All the constructed networks have "small-world" features. The community structures of the associated networks suggest dynamic changes in the original macroeconomic series. We also detected the relationship among government policy changes, community structures of associated networks, and macroeconomic dynamics, finding great influences of government policy changes on the dynamics of the macroeconomic series.
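
    A minimal sketch of the natural visibility algorithm used to convert a series into a graph: two time points are connected if the straight line between them stays above all intermediate samples (O(n²) toy implementation, synthetic data):

        import numpy as np

        def visibility_graph(x):
            """Natural visibility graph: nodes are time points; (a, b) are linked
            if every intermediate sample lies strictly below the line joining them."""
            n = len(x)
            edges = set()
            for a in range(n - 1):
                for b in range(a + 1, n):
                    visible = all(x[c] < x[a] + (x[b] - x[a]) * (c - a) / (b - a)
                                  for c in range(a + 1, b))
                    if visible:
                        edges.add((a, b))
            return edges

        rng = np.random.default_rng(9)
        series = np.cumsum(rng.normal(size=200))     # toy macroeconomic series
        edges = visibility_graph(series)
        degree = np.zeros(len(series), int)
        for a, b in edges:
            degree[a] += 1
            degree[b] += 1
        print("mean degree:", degree.mean(), " max degree:", degree.max())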

  5. Analysis of the Main Factors Influencing Food Production in China Based on Time Series Trend Chart

    Institute of Scientific and Technical Information of China (English)

    Shuangjin WANG; Jianying LI

    2014-01-01

    Based on annual sample data on food production in China since the reform and opening up, we select 8 main factors influencing total food production (growing area, application rate of chemical fertilizer, effective irrigation area, affected area, total machinery power, food production cost index, food production price index, and financial funds for supporting agriculture, farmers and the countryside), and group them into the categories of material input, resources and environment, and policy factors. Using factor analysis, we analyze these influencing factors one by one from multiple angles through time series trend charts. It is found that the application rate of chemical fertilizer, the growing area of food crops and the drought-affected area are the key factors affecting food production. On this basis, we set forth corresponding recommendations for improving comprehensive food production capacity.

  6. Control charts for location based on different sampling schemes

    NARCIS (Netherlands)

    Mehmood, R.; Riaz, M.; Does, R.J.M.M.

    2013-01-01

    Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set

  7. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
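
    The flavor of the discrete-sampling speedup can be illustrated with a sketch comparing a conventional sequential CDF scan against an indexed (binary-search) lookup, used here as a stand-in for the cutpoint table method:

        import numpy as np

        rng = np.random.default_rng(10)
        p = rng.random(64)
        p /= p.sum()                 # a discrete distribution over 64 bins
        cdf = np.cumsum(p)

        def sample_sequential(u):
            """Conventional sequential search through the CDF (O(k) per sample)."""
            for i, c in enumerate(cdf):
                if u <= c:
                    return i
            return len(cdf) - 1

        def sample_indexed(u):
            """Indexed lookup via binary search (O(log k)); a stand-in for the
            cutpoint table that yields the order-of-magnitude gain."""
            return int(np.searchsorted(cdf, u))

        u = rng.random(100000)
        fast = np.searchsorted(cdf, u)   # vectorized indexed lookup for all samples
        assert all(sample_indexed(v) == f for v, f in zip(u[:200], fast[:200]))
        assert all(sample_sequential(v) == f for v, f in zip(u[:200], fast[:200]))
        print("empirical vs true probability of bin 0:", (fast == 0).mean(), p[0])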

  8. Wet tropospheric delays forecast based on Vienna Mapping Function time series analysis

    Science.gov (United States)

    Rzepecka, Zofia; Kalita, Jakub

    2016-04-01

    It is well known that the dry part of the zenith tropospheric delay (ZTD) is much easier to model than the wet part (ZTW). The aim of this research is to apply stochastic modeling and prediction of ZTW using time series analysis tools. Application of time series analysis enables closer understanding of ZTW behavior as well as short-term prediction of future ZTW values. The ZTW data used for the studies were obtained from the GGOS service held by the Vienna University of Technology; the resolution of the data is six hours, and ZTW values for the years 2010-2013 were adopted for the study. The International GNSS Service (IGS) permanent stations LAMA and GOPE, located in mid-latitudes, were admitted for the investigations. Initially the seasonal part was separated and modeled using periodic signals and frequency analysis; the prominent annual and semi-annual signals were removed using sine and cosine functions. The autocorrelation of the resulting signal is significant for several days (20-30 samples). The residuals of this fitting were further analyzed and modeled with ARIMA processes. For both stations, optimal ARMA processes were obtained based on several criteria. On this basis, ZTW values were predicted one day ahead, leaving white-noise residuals. The accuracy of the prediction can be estimated at about 3 cm.
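
    A minimal sketch of the seasonal-separation step: harmonic (sine/cosine) regression of the annual and semi-annual signals, leaving residuals that would then be fed to ARMA model selection; the 6-hourly ZTW series is synthetic:

        import numpy as np

        rng = np.random.default_rng(11)
        t = np.arange(0, 4 * 365.25, 0.25)             # four years, 6-hour sampling (days)
        ztw = (0.12 + 0.05 * np.sin(2 * np.pi * t / 365.25 + 1.0)
               + 0.02 * np.sin(4 * np.pi * t / 365.25)
               + 0.01 * rng.normal(size=t.size))       # toy ZTW in metres

        # Harmonic regression: remove the prominent annual and semi-annual
        # signals with sine/cosine terms.
        w = 2 * np.pi / 365.25
        X = np.column_stack([np.ones_like(t),
                             np.cos(w * t), np.sin(w * t),           # annual
                             np.cos(2 * w * t), np.sin(2 * w * t)])  # semi-annual
        coef, *_ = np.linalg.lstsq(X, ztw, rcond=None)
        residual = ztw - X @ coef                      # input for ARMA modelling
        print("std before/after de-seasoning: %.4f / %.4f m" % (ztw.std(), residual.std()))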

  9. Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.

    Science.gov (United States)

    Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin

    2016-10-10

    We address the problem of face video retrieval in TV-series: searching video clips based on the presence of a specific character, given one face track of that character. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, while on the other hand the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max margin framework, which aims to strike a balance between the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated on the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance using an extremely compact code with only 128 bits.

  10. A Society Based on Work. Information Series No. 270.

    Science.gov (United States)

    Carnevale, Anthony Patrick

    American society is based on work. The industrial revolution exposed a growing proportion of the population to unemployment, underemployment, and dislocation. Early theoreticians believed that unemployment was a temporary labor market imbalance that would correct itself with downward wage adjustments. John Maynard Keynes, on the other hand, argued…

  11. Novel inhibitors of IMPDH: a highly potent and selective quinolone-based series.

    Science.gov (United States)

    Watterson, Scott H; Carlsen, Marianne; Dhar, T G Murali; Shen, Zhongqi; Pitts, William J; Guo, Junqing; Gu, Henry H; Norris, Derek; Chorba, John; Chen, Ping; Cheney, Daniel; Witmer, Mark; Fleener, Catherine A; Rouleau, Katherine; Townsend, Robert; Hollenbaugh, Diane L; Iwanowicz, Edwin J

    2003-02-10

    A series of novel quinolone-based small molecule inhibitors of inosine monophosphate dehydrogenase (IMPDH) was explored. The synthesis and the structure-activity relationships (SARs) derived from in vitro studies are described.

  12. Detecting determinism with improved sensitivity in time series: rank-based nonlinear predictability score.

    Science.gov (United States)

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
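
    A simplified sketch of the idea: nearest-neighbour forecasting in a delay embedding, scored with a rank (Spearman) correlation between predictions and actual futures instead of an amplitude error; embedding parameters and toy signals are arbitrary:

        import numpy as np
        from scipy.stats import spearmanr

        def rank_predictability(x, dim=3, lag=1, horizon=1, k=5):
            """Nearest-neighbour prediction in a delay embedding, scored by
            the rank correlation between predictions and true futures."""
            n = len(x) - (dim - 1) * lag - horizon
            emb = np.array([x[i:i + (dim - 1) * lag + 1:lag] for i in range(n)])
            future = np.array([x[i + (dim - 1) * lag + horizon] for i in range(n)])
            preds = np.empty(n)
            for i in range(n):
                d = np.max(np.abs(emb - emb[i]), axis=1)
                d[i] = np.inf                              # exclude self-match
                preds[i] = future[np.argsort(d)[:k]].mean()  # mean future of neighbours
            return spearmanr(preds, future)[0]

        rng = np.random.default_rng(12)
        t = np.arange(2000)
        # Deterministic signal with heavy-tailed noise vs pure heavy-tailed noise.
        deterministic = np.sin(0.3 * t) + 0.3 * rng.standard_t(df=2, size=t.size)
        noise = rng.standard_t(df=2, size=t.size)
        print("deterministic:", rank_predictability(deterministic))
        print("pure noise:   ", rank_predictability(noise))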

  13. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    OpenAIRE

    Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting m...

  14. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    Science.gov (United States)

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from Monte-Carlo tests. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operable. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method, and the suitable decomposition level for wavelet de-noising should correspond to the series' deterministic sub-signal with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by the proposed method or WTD; but such a series would show purely random rather than autocorrelated behavior, so de-noising is no longer needed. PMID:25360533
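
    A rough sketch of the energy-comparison idea, assuming the PyWavelets package is available: compare per-level wavelet energies of the observed series against a Monte-Carlo background built from noise realizations (the paper's test is more careful than this crude version):

        import numpy as np
        import pywt  # PyWavelets (assumed available)

        rng = np.random.default_rng(13)
        t = np.linspace(0, 10, 1024)
        signal = np.sin(2 * np.pi * 0.5 * t) + 0.5 * rng.normal(size=t.size)

        # Energy at each detail level of the observed series.
        wavelet, levels = "db4", 6
        coeffs = pywt.wavedec(signal, wavelet, level=levels)
        energy = np.array([np.sum(c ** 2) for c in coeffs[1:]])

        # Background energy distribution from Monte-Carlo noise realizations,
        # crudely calibrated to the series' overall std; levels whose energy
        # exceeds the 95th percentile of the background are kept as deterministic.
        mc = []
        for _ in range(200):
            nc = pywt.wavedec(rng.normal(size=t.size) * signal.std(), wavelet, level=levels)
            mc.append([np.sum(c ** 2) for c in nc[1:]])
        background = np.percentile(np.array(mc), 95, axis=0)
        print("detail levels kept as signal:", np.where(energy > background)[0] + 1)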

  15. Trend analysis using non-stationary time series clustering based on the finite element method

    Science.gov (United States)

    Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.

    2014-05-01

    In order to analyze low-frequency variability of climate, it is useful to model the climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering in a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods that can analyze multidimensional time series. One important attribute of this method is that it is not dependent on any statistical assumption and does not need local stationarity in the time series. In this paper, it is shown how the FEM-clustering method can be used to locate change points in the trend of temperature time series from in situ observations. This method is applied to the temperature time series of North Carolina (NC) and the results represent region-specific climate variability despite higher frequency harmonics in climatic time series. Next, we investigated the relationship between the climatic indices with the clusters/trends detected based on this clustering method. It appears that the natural variability of climate change in NC during 1950-2009 can be explained mostly by AMO and solar activity.
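
    FEM-based clustering itself is involved; the sketch below illustrates the underlying goal, locating a change point between linear trend segments, with a simple exhaustive two-segment least-squares search on synthetic annual temperatures:

        import numpy as np

        def best_single_changepoint(t, y):
            """Exhaustive least-squares search for one change point splitting
            the series into two linear trends (a simplification, not FEM clustering)."""
            best = (None, np.inf)
            for k in range(10, len(t) - 10):            # keep segments non-trivial
                sse = 0.0
                for seg in (slice(0, k), slice(k, len(t))):
                    A = np.column_stack([np.ones(len(t[seg])), t[seg]])
                    coef, *_ = np.linalg.lstsq(A, y[seg], rcond=None)
                    sse += np.sum((y[seg] - A @ coef) ** 2)
                if sse < best[1]:
                    best = (k, sse)
            return best[0]

        rng = np.random.default_rng(14)
        t = np.arange(1950, 2010, dtype=float)
        temp = np.where(t < 1978, 0.01 * (t - 1950), 0.28 + 0.03 * (t - 1978))
        temp = temp + 0.1 * rng.normal(size=t.size)     # toy annual temperatures
        print("detected trend change near year:", t[best_single_changepoint(t, temp)])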

  16. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized to optimize the predictive model parameters, while an auto-regressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused prediction is smaller than that of any single model, so the prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz and the Mackey–Glass series. The simulation results show that the prediction method in this paper achieves better prediction accuracy.
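
    The fusion step admits a compact illustration: Gauss–Markov (inverse-variance) weighting of the individual model predictions, which guarantees a fused error variance no larger than that of the best single model (toy numbers, not the paper's experiments):

        import numpy as np

        def gauss_markov_fuse(preds, variances):
            """Minimum-variance (Gauss-Markov) fusion: weight each model's
            prediction by the inverse of its error variance."""
            preds, variances = np.asarray(preds), np.asarray(variances)
            w = (1.0 / variances) / np.sum(1.0 / variances)
            fused = np.sum(w * preds)
            fused_var = 1.0 / np.sum(1.0 / variances)   # always <= min(variances)
            return fused, fused_var

        # E.g. an LSSVM forecast of the approximation part and an ARIMA forecast
        # of the detail part, with error variances estimated on a test set.
        fused, var = gauss_markov_fuse(preds=[1.02, 0.95], variances=[0.04, 0.09])
        print("fused prediction %.3f, fused variance %.4f" % (fused, var))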

  17. Phase synchronization based minimum spanning trees for analysis of financial time series with nonlinear correlations

    Science.gov (United States)

    Radhakrishnan, Srinivasan; Duvvuru, Arjun; Sultornsanee, Sivarit; Kamarthi, Sagar

    2016-02-01

    The cross correlation coefficient has been widely applied in financial time series analysis, in particular for understanding chaotic behaviour in terms of stock price and index movements during crisis periods. To better understand time series correlation dynamics, the cross correlation matrices are represented as networks, in which a node stands for an individual time series and a link indicates cross correlation between a pair of nodes. These networks are converted into simpler trees using different schemes. In this context, Minimum Spanning Trees (MST) are the most favoured tree structures because of their ability to preserve all the nodes and thereby retain the essential information imbued in the network. Although cross correlations underlying MSTs capture essential information, they do not faithfully capture the dynamic behaviour embedded in the time series data of financial systems, because cross correlation is a reliable measure only if the relationship between the time series is linear. To address this issue, this work investigates a measure called phase synchronization (PS) for establishing correlations among different time series which relate to one another linearly or nonlinearly. In this approach the strength of a link between a pair of time series (nodes) is determined by the level of phase synchronization between them. We compare the performance of the phase synchronization based MST with the cross correlation based MST along selected network measures across a temporal frame that includes economically good and crisis periods. We observe agreement in the directionality of the results across the two methods: they show similar upward or downward trends when comparing selected network measures. Though both methods give similar trends, the phase synchronization based MST is a more reliable representation of the dynamic behaviour of financial systems than the cross correlation based MST because of the former's ability to quantify nonlinear relationships among time series.
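
    A minimal sketch of the pipeline: phase locking values from Hilbert-transform phases as the similarity measure, converted to distances and reduced to an MST with SciPy; the six synthetic "stocks" are toy series, not market data:

        import numpy as np
        from scipy.signal import hilbert
        from scipy.sparse.csgraph import minimum_spanning_tree

        def phase_sync(x, y):
            """Phase locking value from Hilbert-transform instantaneous phases;
            1 = perfect synchronization, 0 = none (captures relationships that
            need not be linear)."""
            px, py = np.angle(hilbert(x)), np.angle(hilbert(y))
            return np.abs(np.mean(np.exp(1j * (px - py))))

        rng = np.random.default_rng(15)
        base = np.sin(2 * np.pi * 0.01 * np.arange(1000))
        stocks = np.array([base + 0.5 * rng.normal(size=1000) for _ in range(6)])

        n = len(stocks)
        S = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                S[i, j] = phase_sync(stocks[i], stocks[j])

        # Convert synchronization (similarity) to a distance and extract the MST.
        D = np.where(S > 0, 1.0 - S, 0.0)
        mst = minimum_spanning_tree(D)
        print("MST edges (i, j, distance):")
        for (i, j), d in zip(zip(*mst.nonzero()), mst.data):
            print(i, j, round(d, 3))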

  18. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  19. The Toggle Local Planner for sampling-based motion planning

    KAUST Repository

    Denny, Jory; Amato, Nancy M.

    2012-01-01

    Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5

  20. Variable screening and ranking using sampling-based sensitivity measures

    International Nuclear Information System (INIS)

    Wu, Y-T.; Mohanty, Sitakanta

    2006-01-01

    This paper presents a methodology for screening insignificant random variables and ranking significant random variables using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from the test-of-hypothesis, to classify significant and insignificant random variables. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have large numbers of random variables but relatively few numbers of significant random variables

  1. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya; Amato, Nancy M.

    2012-01-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented

  2. A novel water quality data analysis framework based on time-series data mining.

    Science.gov (United States)

    Deng, Weihui; Wang, Guoyin

    2017-07-01

    The rapid development of time-series data mining provides an emerging method for water resource management research. In this paper, based on the time-series data mining methodology, we propose a novel and general analysis framework for water quality time-series data. It consists of two parts: implementation components and common tasks of time-series data mining in water quality data. In the first part, we propose to granulate the time series into several two-dimensional normal clouds and calculate the similarities at the granulated level. On the basis of the similarity matrix, the similarity search, anomaly detection, and pattern discovery tasks on the water quality time-series dataset can be easily implemented in the second part. We present a case study of this analysis framework on weekly Dissolved Oxygen (DO) time-series data collected from five monitoring stations on the upper reaches of the Yangtze River, China. It revealed the relationship between water quality in the mainstream and its tributaries, as well as the main changing patterns of DO. The experimental results show that the proposed analysis framework is a feasible and efficient method for mining hidden and valuable knowledge from historical water quality time-series data. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    Science.gov (United States)

    Stadler, Tanja

    2009-11-07

    The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling; this implies that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is then compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between the two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a Hepatitis C virus dataset from Egypt, showing that the transmission time estimates differ significantly: the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.

  4. Forecasting Jakarta composite index (IHSG) based on chen fuzzy time series and firefly clustering algorithm

    Science.gov (United States)

    Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.

    2018-03-01

    This paper proposes a combination of the Firefly Algorithm (FA) and Chen's fuzzy time series forecasting. Most existing fuzzy forecasting methods based on fuzzy time series use a static length of intervals; therefore, we apply an artificial intelligence technique, the Firefly Algorithm, to set non-stationary lengths of intervals for each cluster in Chen's method. The method is evaluated on the Jakarta Composite Index (IHSG) and compared with classical Chen fuzzy time series forecasting; its performance is verified through simulation using Matlab.
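
    A sketch of the classical Chen method used as the baseline above, with static equal-length intervals (the firefly-tuned, non-stationary intervals are the paper's contribution and are not reproduced here); the index data are synthetic:

        import numpy as np

        def chen_forecast(series, n_intervals=7):
            """Chen's (1996) fuzzy time-series forecast with static,
            equal-length intervals."""
            lo, hi = series.min() - 1, series.max() + 1
            bounds = np.linspace(lo, hi, n_intervals + 1)
            mids = (bounds[:-1] + bounds[1:]) / 2

            def fuzzify(v):
                return int(np.clip(np.searchsorted(bounds, v) - 1, 0, n_intervals - 1))

            states = [fuzzify(v) for v in series]
            groups = {}                              # fuzzy logical relation groups
            for a, b in zip(states[:-1], states[1:]):
                groups.setdefault(a, set()).add(b)

            # One-step forecast: average of midpoints of the states reachable from
            # the current state (the state's own midpoint if it was never seen).
            cur = states[-1]
            nxt = groups.get(cur, {cur})
            return np.mean([mids[s] for s in nxt])

        rng = np.random.default_rng(16)
        ihsg = 6000 + np.cumsum(rng.normal(0, 30, 250))   # toy index levels
        print("next-day forecast:", round(chen_forecast(ihsg), 2))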

  5. Alpha Matting with KL-Divergence Based Sparse Sampling.

    Science.gov (United States)

    Karacan, Levent; Erdem, Aykut; Erdem, Erkut

    2017-06-22

    In this paper, we present a new sampling-based alpha matting approach for the accurate estimation of the foreground and background layers of an image. Previous sampling-based methods typically rely on certain heuristics in collecting representative samples from known regions, and thus their performance deteriorates if the underlying assumptions are not satisfied. To alleviate this, we take an entirely new approach and formulate sampling as a sparse subset selection problem where we propose to pick a small set of candidate samples that best explains the unknown pixels. Moreover, we describe a new dissimilarity measure for comparing two samples which is based on the KL-divergence between the distributions of features extracted in the vicinity of the samples. The proposed framework is general and could be easily extended to video matting by additionally taking temporal information into account in the sampling process. Evaluation on standard benchmark datasets for image and video matting demonstrates that our approach provides more accurate results compared to the state-of-the-art methods.

  6. Is mindfulness-based therapy an effective intervention for obsessive-intrusive thoughts: a case series.

    Science.gov (United States)

    Wilkinson-Tough, Megan; Bocci, Laura; Thorne, Kirsty; Herlihy, Jane

    2010-01-01

    Despite the efficacy of cognitive-behavioural interventions in improving the experience of obsessions and compulsions, some people do not benefit from this approach. The present research uses a case series design to establish whether mindfulness-based therapy could benefit those experiencing obsessive-intrusive thoughts by targeting thought-action fusion and thought suppression. Three participants received a relaxation control intervention followed by a six-session mindfulness-based intervention which emphasized daily practice. Following therapy all participants demonstrated reductions in Yale-Brown Obsessive-Compulsive Scale scores to below clinical levels, with two participants maintaining this at follow-up. Qualitative analysis of post-therapy feedback suggested that mindfulness skills such as observation, awareness and acceptance were seen as helpful in managing thought-action fusion and suppression. Despite being limited by small participant numbers, these results suggest that mindfulness may be beneficial to some people experiencing intrusive unwanted thoughts and that further research could establish the possible efficacy of this approach in larger samples. Copyright (c) 2009 John Wiley & Sons, Ltd.

  7. Changes According to Incubation Periods in Some Microbiological Characteristics at Soil Samples of Some Soil Series from the Gelemen Agricultural Administration

    OpenAIRE

    KARA, Emine Erman

    1998-01-01

    Changes in some microbiological characteristics of soil samples from soil series of the Gelemen Agricultural Administration were investigated over different incubation periods. The results show that bacteria and actinomycete populations had lower values in the first periods of incubation (30 ºC and field capacity) and increased in the following periods. The fungus population, however, changed depending on series properties and reached maximum values on the 24th and 32nd days after the beginning of incubation. During...

  8. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    International Nuclear Information System (INIS)

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

    Sample size and computational uncertainty were varied in order to investigate the sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and to map uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. The mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 n-sample replicates was adopted as the convergence criterion of the method. An estimate of 75 pcm uncertainty on the reactor k_eff was accomplished by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)

  9. Ratio-based lengths of intervals to improve fuzzy time series forecasting.

    Science.gov (United States)

    Huarng, Kunhuang; Yu, Tiffany Hui-Kuang

    2006-04-01

    The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.

  10. A 10kW series resonant converter design, transistor characterization, and base-drive optimization

    Science.gov (United States)

    Robson, R.; Hancock, D.

    1981-01-01

    Transistors are characterized for use as switches in resonant circuit applications. A base drive circuit to provide the optimal base drive to these transistors under resonant circuit conditions is developed and then used in the design, fabrication and testing of a breadboard, spaceborne type 10 kW series resonant converter.

  11. A robust anomaly based change detection method for time-series remote sensing images

    Science.gov (United States)

    Shoujing, Yin; Qiao, Wang; Chuanqing, Wu; Xiaoling, Chen; Wandong, Ma; Huiqin, Mao

    2014-03-01

    Time-series remote sensing images record changes happening on the earth's surface, which include not only abnormal changes like human activities and emergencies (e.g. fire, drought, insect pests), but also changes caused by vegetation phenology and climate change. This poses challenges for analyzing global environmental changes and their driving forces. This paper proposes a robust Anomaly Based Change Detection method (ABCD) for time-series image analysis that detects abnormal points in data sets which need not follow a normal distribution. With ABCD we can detect when and where changes occur, which is the prerequisite of global change studies. ABCD was tested initially with 10-day SPOT VGT NDVI (Normalized Difference Vegetation Index) time series tracking land cover type changes, seasonality and noise, and then validated on real data over a large area in Jiangxi, southern China. Initial results show that ABCD can precisely and rapidly detect spatial and temporal changes in long time-series images.
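
    The distribution-free spirit of ABCD can be illustrated with a median/MAD anomaly score on a deseasonalized toy NDVI series (this is a generic robust detector, not the authors' exact algorithm):

        import numpy as np

        def mad_anomalies(x, threshold=3.5):
            """Distribution-free anomaly flags using the median and the median
            absolute deviation (robust to non-normal data, unlike z-scores)."""
            med = np.median(x)
            mad = np.median(np.abs(x - med))
            score = 0.6745 * (x - med) / mad   # scaled so ~N(0,1) under normality
            return np.abs(score) > threshold

        rng = np.random.default_rng(17)
        ndvi = 0.6 + 0.2 * np.sin(2 * np.pi * np.arange(108) / 36)  # 3 years, 10-day NDVI
        ndvi[50] = 0.05                                             # e.g. a fire scar
        ndvi = ndvi + 0.02 * rng.normal(size=ndvi.size)

        # Separate abnormal change from phenology by removing the mean seasonal
        # cycle (epoch-wise average over the three years) before scoring.
        seasonal = ndvi.reshape(3, 36).mean(axis=0)
        residual = ndvi - np.tile(seasonal, 3)
        print("anomalous epochs:", np.where(mad_anomalies(residual))[0])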

  12. Soft magnetic properties of bulk amorphous Co-based samples

    International Nuclear Information System (INIS)

    Fuezer, J.; Bednarcik, J.; Kollar, P.

    2006-01-01

    Ball milling of melt-spun ribbons and subsequent compaction of the resulting powders in the supercooled liquid region were used to prepare disc-shaped bulk amorphous Co-based samples. Several bulk samples were prepared by hot compaction with subsequent heat treatment (500 deg C - 575 deg C). The influence of the consolidation temperature and the follow-up heat treatment on the magnetic properties of the bulk samples was investigated. The final heat treatment leads to a decrease of the coercivity to values between 7.5 and 9 A/m. (Authors)

  13. A novel PMT test system based on waveform sampling

    Science.gov (United States)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with a traditional test system based on a QDC, a TDC and a scaler, a test system based on waveform sampling was constructed to sample the signals of the 8" R5912 and the 20" R12860 Hamamatsu PMTs in different energy states, from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, data acquisition software based on LabVIEW was developed and runs with a parallel mechanism. The analysis algorithm is realized in LabVIEW, and the spectra of charge, amplitude, signal width and rising time are analyzed offline. The results from the charge-to-digital converter, time-to-digital converter and waveform sampling are compared in detail.
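
    A minimal sketch of the offline waveform analysis stage: extracting charge, amplitude, signal width and rise time from one sampled pulse. The negative-going toy pulse, sampling step and units are assumptions, not the actual system's parameters:

        import numpy as np

        def pulse_features(wave, dt_ns=1.0, baseline_samples=50):
            """Charge, amplitude, width and rise time from one sampled PMT pulse
            (negative-going pulse assumed, as from an anode signal)."""
            base = wave[:baseline_samples].mean()
            pulse = base - wave                       # flip to positive
            amp = pulse.max()
            charge = pulse.sum() * dt_ns              # integral (arb. units x ns)
            above = np.where(pulse > 0.5 * amp)[0]    # FWHM-style signal width
            width = (above[-1] - above[0]) * dt_ns
            i_peak = pulse.argmax()
            t10 = np.argmax(pulse[:i_peak + 1] > 0.1 * amp)
            t90 = np.argmax(pulse[:i_peak + 1] > 0.9 * amp)
            rise = (t90 - t10) * dt_ns                # 10%-90% rise time
            return charge, amp, width, rise

        rng = np.random.default_rng(18)
        t = np.arange(200.0)
        wave = 1000 - 80 * np.exp(-0.5 * ((t - 100) / 5) ** 2) + rng.normal(0, 1, 200)
        print("charge, amplitude, width, rise:", pulse_features(wave))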

  14. Clinical characteristics of patients with tinnitus evaluated with the Tinnitus Sample Case History Questionnaire in Japan: A case series.

    Directory of Open Access Journals (Sweden)

    Takashi Kojima

    The Tinnitus Sample Case History Questionnaire was established as a standardized questionnaire for obtaining patient case histories and for characterizing patients into subgroups at the Tinnitus Research Initiative in 2006. In this study, we developed a Japanese version of this questionnaire for evaluating the clinical characteristics of patients with tinnitus. The Japanese version will be available for evaluating treatments for tinnitus and for comparing data on tinnitus between research centers. The objective was to evaluate the clinical characteristics of patients with tinnitus in Japan using the newly developed Japanese version of the Tinnitus Sample Case History Questionnaire. This was a prospective study based on patient records, conducted at university hospitals, general hospitals, and clinics. We collected patient data using the Japanese translated version of the questionnaire. In total, 584 patients who visited our institutions in Japan between August 2012 and March 2014 were included (280 males and 304 females; age 13-92 years; mean age 60.8). We examined the patients after dividing them into two groups according to the presence or absence of hyperacusis, and compared the collected results with those from the Tinnitus Research Initiative database. Compared with the TRI database, there were significantly more elderly female patients and fewer patients with trauma-associated tinnitus, and a statistically lower ratio of patients with hyperacusis. We found that patients with tinnitus in addition to hyperacusis had greater tinnitus severity and exhibited higher rates of various complications. The Japanese version of the Tinnitus Sample Case History Questionnaire developed in this study can be a useful tool for evaluating patients with tinnitus in Japan. The results of this multicenter study reflect the characteristics of patients with tinnitus who require medical care in Japan, and provide a preliminary basis for international comparisons.

  15. Fourier Magnitude-Based Privacy-Preserving Clustering on Time-Series Data

    Science.gov (United States)

    Kim, Hea-Suk; Moon, Yang-Sae

    Privacy-preserving clustering (PPC for short) is important in publishing sensitive time-series data. Previous PPC solutions, however, either fail to preserve distance orders or incur privacy breaches. To solve this problem, we propose a new PPC approach that exploits the Fourier magnitudes of time series. Our magnitude-based method does not cause a privacy breach even if its techniques or related parameters are publicly revealed. Using magnitudes only, however, incurs the distance order problem, and we thus present magnitude selection strategies to preserve as many Euclidean distance orders as possible. Through extensive experiments, we showcase the superiority of our magnitude-based approach.
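
    As a rough illustration of the idea, the sketch below publishes only DFT magnitudes of a series (discarding phases, which prevents reconstruction of the raw data) and measures distances on them; with an orthonormal DFT, Parseval's theorem makes this a lower bound on the true Euclidean distance. The largest-magnitude selection rule is a placeholder, not the paper's actual selection strategies.

    ```python
    import numpy as np

    def publish_magnitudes(series, k):
        """Publish only DFT magnitudes (phases are discarded, so the raw
        series cannot be reconstructed); keep the k largest, zero the rest."""
        mags = np.abs(np.fft.rfft(series, norm="ortho"))
        out = np.zeros_like(mags)
        idx = np.argsort(mags)[::-1][:k]      # placeholder selection strategy
        out[idx] = mags[idx]
        return out

    def magnitude_distance(m1, m2):
        """With an orthonormal DFT this lower-bounds the true time-domain
        Euclidean distance (Parseval plus the triangle inequality)."""
        return float(np.linalg.norm(m1 - m2))
    ```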

  16. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    International Nuclear Information System (INIS)

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

    In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem. Also, the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it can produce higher-quality images when compared to algorithms based on the parallel or series models for the cases tested in this paper. It provides a new algorithm for ECT applications.
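
    In the common ECT formulation, the parallel and series normalization models are linear normalizations of the measured capacitance and of its reciprocal, respectively. The sketch below, a rough illustration under that assumption, forms a convex combination of the two; the fixed coefficient k is a placeholder, whereas the paper derives an adaptive coefficient by numerical optimization.

    ```python
    import numpy as np

    def combined_normalization(cm, cl, ch, k=0.5):
        """Normalized capacitance: parallel model (linear in C), series model
        (linear in 1/C), and their convex combination. cm is the measured
        capacitance, cl/ch the empty- and full-pipe calibrations."""
        lam_parallel = (cm - cl) / (ch - cl)
        lam_series = (1.0 / cm - 1.0 / cl) / (1.0 / ch - 1.0 / cl)
        return k * lam_parallel + (1.0 - k) * lam_series
    ```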

  17. A case series of family-based treatment for adolescents with atypical anorexia nervosa.

    Science.gov (United States)

    Hughes, Elizabeth K; Le Grange, Daniel; Court, Andrew; Sawyer, Susan M

    2017-04-01

    The aim of this case series was to examine engagement in and outcomes of family-based treatment (FBT) for adolescents with DSM-5 atypical AN, that is, adolescents who were not underweight at presentation. Consecutive referrals for FBT of adolescents with atypical AN to a specialist child and adolescent eating disorder program were examined. Engagement in treatment (i.e., dose of treatment, completion rate) and changes in psychological symptomatology (i.e., eating disorder symptoms, depressive symptoms, self-esteem, obsessive compulsiveness), weight, and menstrual function were examined. The need for additional interventions (i.e., hospitalization and medication) and estimated remission rates were also examined. The sample comprised 42 adolescents aged 12-18 years (88% female). Engagement in FBT was high, with 83% completing at least half the treatment dose. There were significant decreases in eating disorder and depressive symptoms during FBT. Adolescents who were not admitted to hospital prior to FBT gained some weight (M = 3.4 kg), while those who were admitted did not gain weight during FBT (M = 0.2 kg). These findings support FBT as a viable treatment for adolescents with atypical AN. However, more research is needed into systematic adaptations of FBT and other treatments that could improve overall remission rates. © 2017 Wiley Periodicals, Inc.

  18. Fuzzy time-series based on Fibonacci sequence for stock price forecasting

    Science.gov (United States)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Jong Teoh, Hia

    2007-07-01

    Time-series models have been utilized to make reasonably accurate predictions in the areas of stock price movements, academic enrollments, weather, etc. To improve the forecasting performance of fuzzy time-series models, this paper proposes a new model, which incorporates the concept of the Fibonacci sequence, the framework of Song and Chissom's model, and the weighted method of Yu's model. This paper employs a 5-year period of TSMC (Taiwan Semiconductor Manufacturing Company) stock price data and a 13-year period of TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) stock index data as experimental datasets. By comparing our forecasting performance with Chen's (Forecasting enrollments based on fuzzy time-series. Fuzzy Sets Syst. 81 (1996) 311-319), Yu's (Weighted fuzzy time-series models for TAIEX forecasting. Physica A 349 (2004) 609-624) and Huarng's (The application of neural networks to forecast fuzzy time series. Physica A 336 (2006) 481-491) models, we conclude that the proposed model surpasses these conventional fuzzy time-series models in accuracy.

  19. Improved mesh based photon sampling techniques for neutron activation analysis

    International Nuclear Information System (INIS)

    Relson, E.; Wilson, P. P. H.; Biondo, E. D.

    2013-01-01

    The design of fusion power systems requires analysis of neutron activation of large, complex volumes, and of the resulting particles emitted from these volumes. Structured mesh-based discretization of these problems allows for improved modeling in these activation analysis problems. Finer discretization of these problems results in large computational costs, which drives the investigation of more efficient methods. Within an ad hoc subroutine of the Monte Carlo transport code MCNP, we implement sampling of voxels and photon energies for volumetric sources using the alias method. The alias method enables efficient sampling of a discrete probability distribution, and operates in O(1) time, whereas the simpler direct discrete method requires O(log n) time. By using the alias method, voxel sampling becomes a viable alternative to sampling space with the O(1) approach of uniformly sampling the problem volume. Additionally, with voxel sampling it is straightforward to introduce biasing of volumetric sources, and we implement this biasing of voxels as an additional variance reduction technique that can be applied. We verify our implementation and compare the alias method, with and without biasing, to direct discrete sampling of voxels, and to uniform sampling. We study the behavior of source biasing in a second set of tests and find trends between improvements and source shape, material, and material density. Overall, however, the magnitude of improvements from source biasing appears to be limited. Future work will benefit from the implementation of efficient voxel sampling, particularly with conformal unstructured meshes, where the uniform sampling approach cannot be applied. (authors)
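
    For readers unfamiliar with the alias method, the sketch below is a generic implementation (Vose's variant) of the O(n)-preprocessing, O(1)-per-draw scheme the authors embed in their MCNP subroutine; it is not the authors' code.

    ```python
    import random

    def build_alias(probs):
        """Vose's alias method: O(n) preprocessing of a discrete
        distribution, then O(1) sampling per draw."""
        n = len(probs)
        scaled = [p * n for p in probs]
        small = [i for i, p in enumerate(scaled) if p < 1.0]
        large = [i for i, p in enumerate(scaled) if p >= 1.0]
        prob, alias = [0.0] * n, [0] * n
        while small and large:
            s, l = small.pop(), large.pop()
            prob[s], alias[s] = scaled[s], l
            scaled[l] -= 1.0 - scaled[s]          # move excess mass to l
            (small if scaled[l] < 1.0 else large).append(l)
        for i in small + large:                   # leftovers are exactly 1
            prob[i] = 1.0
        return prob, alias

    def sample(prob, alias):
        i = random.randrange(len(prob))           # pick a column uniformly
        return i if random.random() < prob[i] else alias[i]
    ```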

  20. A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Online time series change detection is a critical component of many monitoring systems, such as space- and air-borne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using a Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix, which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining an O(T) memory footprint, compared to the O(T^4) time and O(T^2) memory requirements of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use and land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa, USA. Our algorithm is able to detect different types of changes in an NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
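
    A minimal sketch of the prediction-plus-control-chart idea is given below, using scikit-learn's periodic kernel; the synthetic data, kernel settings and 3-sigma rule are illustrative assumptions. Note that this naive version pays the full cubic cost of GP regression and omits the paper's structured-covariance speedups.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

    # Hypothetical periodic observations (e.g. a seasonal sensor signal).
    t = np.arange(100.0).reshape(-1, 1)
    y = np.sin(2 * np.pi * t.ravel() / 12.0) + 0.1 * np.random.randn(100)

    kernel = ExpSineSquared(length_scale=1.0, periodicity=12.0) + WhiteKernel(0.01)
    gp = GaussianProcessRegressor(kernel=kernel).fit(t, y)

    def is_change(t_new, y_new, z=3.0):
        """Control-chart rule: flag a change when the new observation leaves
        the z-sigma band around the GP prediction."""
        mu, sd = gp.predict(np.array([[t_new]]), return_std=True)
        return abs(y_new - mu[0]) > z * sd[0]
    ```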

  1. Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis

    Science.gov (United States)

    Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal

    Missing values in time series data are a well-known and important problem which many researchers have studied extensively in various fields. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is the application of the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods. The comparison is done by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, can provide better imputation than other methods.

  2. Contingency inferences driven by base rates: Valid by sampling

    Directory of Open Access Journals (Sweden)

    Florian Kutzner

    2011-04-01

    Full Text Available Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.

  3. ESTIMATING RELIABILITY OF DISTURBANCES IN SATELLITE TIME SERIES DATA BASED ON STATISTICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Z.-G. Zhou

    2016-06-01

    Full Text Available Normally, the status of land cover is inherently dynamic and changes continuously on a temporal scale. However, disturbances or abnormal changes of land cover, caused by events such as forest fire, flood, deforestation, and plant diseases, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near-real-time or online disturbance detection using satellite image time series. However, most present methods only label the detection results with "change/no change", while few focus on estimating the reliability (or confidence level) of the detected disturbances. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on Confidence Intervals (CI) and Confidence Levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood around the border of Russia and China. Results demonstrate that the method can estimate the reliability of disturbances detected in satellite images with an estimation error of less than 5% and an overall accuracy of up to 90%.
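
    A minimal sketch of steps (2)-(3) under simplifying assumptions (a fixed trend-plus-harmonic model standing in for BFAST segmentation, Gaussian residuals) is given below: the history is modelled, a new observation is forecast, and a detected disturbance is assigned the confidence level at which it leaves the prediction interval.

    ```python
    import numpy as np
    from scipy import stats

    def design(t, period=365.25):
        """Trend + annual harmonic regressors (a simplified BFAST-style model)."""
        w = 2.0 * np.pi * t / period
        return np.column_stack([np.ones_like(t), t, np.sin(w), np.cos(w)])

    def disturbance_confidence(t_hist, y_hist, t_new, y_new, period=365.25):
        """Forecast y(t_new) from the history and return the confidence level
        at which the new observation falls outside the prediction interval."""
        X = design(np.asarray(t_hist, float), period)
        beta, *_ = np.linalg.lstsq(X, y_hist, rcond=None)
        sigma = (y_hist - X @ beta).std(ddof=X.shape[1])
        pred = design(np.atleast_1d(float(t_new)), period) @ beta
        z = abs(y_new - pred[0]) / sigma
        return 2.0 * stats.norm.cdf(z) - 1.0   # e.g. z = 1.96 -> CL ~ 0.95
    ```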

  4. Community-based survey versus sentinel site sampling in ...

    African Journals Online (AJOL)

    rural children. Implications for nutritional surveillance and the development of nutritional programmes. G. C. Solarsh, D. M. Sanders, C. A. Gibson, E. Gouws. A study of the anthropometric status of under-5-year-olds was conducted in the Nqutu district of KwaZulu by means of a representative community-based sample and.

  5. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya

    2012-05-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination. © 2012 IEEE.

  6. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  7. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treatment

  8. SeaWiFS technical report series. Volume 4: An analysis of GAC sampling algorithms. A case study

    Science.gov (United States)

    Yeh, Eueng-Nan (Editor); Hooker, Stanford B. (Editor); Mccain, Charles R. (Editor); Fu, Gary (Editor)

    1992-01-01

    The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) instrument will sample at approximately a 1 km resolution at nadir, which will be broadcast for reception by real-time ground stations. However, the global data set will consist of coarser 4 km data, which will be recorded and broadcast to the SeaWiFS Project for processing. Several algorithms for degrading the 1 km data to 4 km data are examined using imagery from the Coastal Zone Color Scanner (CZCS), in an effort to determine which algorithm best preserves the statistical characteristics of the derived products generated from the 1 km data. Of the algorithms tested, subsampling based on a fixed pixel within a 4 x 4 pixel array is judged to yield the most consistent results when compared to the 1 km data products.
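
    The fixed-pixel strategy amounts to simple strided indexing, as in the hedged sketch below (the array size and the chosen pixel position within each block are illustrative, not values from the report).

    ```python
    import numpy as np

    def subsample_fixed_pixel(img, row=1, col=1, block=4):
        """Degrade 1 km data to 4 km by keeping one fixed pixel per 4x4
        block (the strategy judged most consistent in this study)."""
        return img[row::block, col::block]

    # Example: a hypothetical 1 km scene becomes a quarter-resolution scene.
    scene = np.random.rand(4096, 4096)
    gac = subsample_fixed_pixel(scene)        # shape (1024, 1024)
    ```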

  9. Improving Teachers' Knowledge of Functional Assessment-Based Interventions: Outcomes of a Professional Development Series

    Science.gov (United States)

    Lane, Kathleen Lynne; Oakes, Wendy Peia; Powers, Lisa; Diebold, Tricia; Germer, Kathryn; Common, Eric A.; Brunsting, Nelson

    2015-01-01

    This paper provides outcomes of a study examining the effectiveness of a year-long professional development training series designed to support in-service educators in learning a systematic approach to functional assessment-based interventions developed by Umbreit and colleagues (2007) that has met with demonstrated success when implemented with…

  10. Manualized Family-Based Treatment for Anorexia Nervosa: A Case Series.

    Science.gov (United States)

    Le Grange, Daniel; Binford, Roslyn; Loeb, Katharine L.

    2005-01-01

    Objective: The purpose of this study was to describe a case series of children and adolescents (mean age = 14.5 years, SD = 2.3; range 9-18) with anorexia nervosa who received manualized family-based treatment for their eating disorder. Method: Forty-five patients with anorexia nervosa were compared pre- and post-treatment on weight and menstrual…

  11. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Full Text Available Texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory, resulting in incoherent memory access patterns and low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g., a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel execution on the GPU. In addition, the rendering performance of Warp Marching is view-independent, and it outperforms existing empty-space-skipping techniques in scenarios that require rendering large dynamic volumes at low image resolution. Through a series of micro-benchmarks and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  12. New prediction of chaotic time series based on local Lyapunov exponent

    International Nuclear Information System (INIS)

    Zhang Yong

    2013-01-01

    A new method of predicting chaotic time series is presented based on a local Lyapunov exponent, which quantitatively measures the exponential rate of separation or attraction of two infinitely close trajectories in state space. After reconstructing the state space from a one-dimensional chaotic time series, neighboring multiple-state vectors of the predicting point are selected to deduce the prediction formula using the definition of the local Lyapunov exponent. Numerical simulations are carried out to test its effectiveness and verify its higher precision compared with two older methods. The effects of the number of referential state vectors and of added noise on the forecasting accuracy are also studied numerically. (general)
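
    The sketch below illustrates the general skeleton of such local state-space predictors: reconstruct the state space by time-delay embedding, select neighbouring state vectors, and evolve them one step. The plain neighbour average shown is a stand-in for the paper's local-Lyapunov-exponent-based prediction formula, and all parameter values are assumptions.

    ```python
    import numpy as np

    def embed(x, dim=3, tau=1):
        """Time-delay embedding of a scalar series into state space."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    def predict_next(x, dim=3, tau=1, k=5):
        """Average the one-step evolution of the k state vectors closest to
        the current state; the paper replaces this plain average with a
        formula derived from the local Lyapunov exponent."""
        x = np.asarray(x, dtype=float)
        states = embed(x, dim, tau)
        current, history = states[-1], states[:-1]
        dists = np.linalg.norm(history - current, axis=1)
        idx = np.argsort(dists)[:k]                    # nearest neighbours
        return float(np.mean(x[idx + (dim - 1) * tau + 1]))
    ```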

  13. Pattern-Based Development of Enterprise Systems: from Conceptual Framework to Series of Implementations

    Directory of Open Access Journals (Sweden)

    Sergey V. Zykov

    2013-04-01

    Full Text Available Building enterprise software is a dramatic challenge due to data size and complexity, and the rapid growth of both over time. The issue becomes even more dramatic when it comes to integrating heterogeneous applications. A uniform approach is therefore required, one which combines formal models and CASE tools. The methodology is based on extracting common ERP module-level patterns and applying them to a series of heterogeneous implementations. The approach includes a lifecycle model which extends the conventional spiral model with formal data representation/management models and DSL-based "low-level" CASE tools supporting the formalisms. The methodology has been successfully implemented as a series of portal-based ERP systems in the ITERA oil-and-gas corporation, and in a number of trading/banking enterprise applications for other enterprises. A semantic network-based airline dispatch system and a 6D-model-driven nuclear power plant construction support system are currently in progress.

  14. Identification of Dynamic Loads Based on Second-Order Taylor-Series Expansion Method

    Directory of Open Access Journals (Sweden)

    Xiaowang Li

    2016-01-01

    Full Text Available A new method based on the second-order Taylor-series expansion is presented to identify structural dynamic loads in the time domain. The algorithm expresses the response vectors as a Taylor-series approximation and then deduces a series of formulas. As a result, an explicit discrete equation that links system response, system characteristics, and input excitation is set up. In a multi-input multi-output (MIMO) numerical simulation study, sinusoidal excitation and white noise excitation are applied to a cantilever beam, respectively, to illustrate the effectiveness of the algorithm. A comparison is also made between the new method and the conventional state space method. The results show that the proposed method obtains a more accurate identified force time history, whether or not the responses are polluted by noise.

  15. Development of Simulink-Based SiC MOSFET Modeling Platform for Series Connected Devices

    DEFF Research Database (Denmark)

    Tsolaridis, Georgios; Ilves, Kalle; Reigosa, Paula Diaz

    2016-01-01

    A new MATLAB/Simulink-based modeling platform has been developed for SiC MOSFET power modules. The modeling platform describes the electrical behavior of a single 1.2 kV/350 A SiC MOSFET power module, as well as the series connection of two of them. A fast parameter initialization is followed by an optimization process to facilitate the extraction of the model's parameters in a more automated way, relying on a small number of experimental waveforms. Through extensive experimental work, it is shown that the model accurately predicts both static and dynamic performance. The series connection of two SiC power modules has been investigated through validation under static and dynamic conditions. Thanks to the developed model, a better understanding of the challenges introduced by uneven voltage sharing among series-connected devices is possible.

  16. The detection of local irreversibility in time series based on segmentation

    Science.gov (United States)

    Teng, Yue; Shang, Pengjian

    2018-06-01

    We propose a strategy for the detection of local irreversibility in stationary time series based on multiple scales. The detection helps evaluate the displacement of irreversibility toward local skewness. By means of this method, we can effectively discuss the local irreversible fluctuations of time series as the scale changes. The method was applied to simulated nonlinear signals generated by the ARFIMA process and the logistic map to show how the irreversibility functions react to increasing scale. The method was also applied to financial market series, i.e., American, Chinese and European markets. The local irreversibility for the different markets demonstrates distinct characteristics. Simulations and real data support the need to explore local irreversibility.

  17. Identification of Unsaturated and 2H Polyfluorocarboxylate Homologous Series and Their Detection in Environmental Samples and as Polymer Degradation Products

    Science.gov (United States)

    A pair of homologous series of polyfluorinated degradation products have been identified, both having structures similar to perfluorocarboxylic acids but (i) having a H substitution for F on the α carbon for 2H polyfluorocarboxylic acids (2HPFCAs) and (ii) bearing a double ...

  18. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
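
    For the partial-duration branch, the core computation can be sketched with SciPy as below: exceedances over a threshold are fitted to a Generalized Pareto distribution by maximum likelihood and a return level is derived. The synthetic data, the fixed quantile threshold (standing in for the Adapted Hill search) and the omission of de-clustering are simplifications.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical daily rainfall (mm); real use would read a station record.
    rng = np.random.default_rng(1)
    rain = rng.gamma(shape=0.4, scale=8.0, size=30 * 365)

    threshold = np.quantile(rain, 0.95)      # placeholder threshold choice
    exceed = rain[rain > threshold] - threshold

    # Maximum-likelihood GPD fit to the peaks-over-threshold series.
    c, loc, scale = stats.genpareto.fit(exceed, floc=0.0)

    # T-year return level; lam = mean number of exceedances per year.
    lam = len(exceed) / 30.0
    T = 100
    return_level = threshold + stats.genpareto.ppf(1.0 - 1.0 / (lam * T),
                                                   c, scale=scale)
    ```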

  19. The determination of environmental levels of uranium and thorium series isotopes and 137Cs in aquatic and terrestrial samples

    International Nuclear Information System (INIS)

    Wilkinson, P.

    1985-01-01

    This publication details the analytical methods used at the Freshwater Institute for the radiochemical analysis of aquatic and terrestrial samples. Sample collection methods are described with emphasis on water sampling. A detailed 'Calculations' section contains the mathematical formulae used to determine the absolute activity of each isotope analyzed. 25 refs

  20. Research on test of product based on spatial sampling criteria and variable step sampling mechanism

    Science.gov (United States)

    Li, Ruihong; Han, Yueping

    2014-09-01

    This paper presents an effective approach for online testing of assembly structures inside products, using a multiple-view technique and an X-ray digital radiography system based on spatial sampling criteria and a variable-step sampling mechanism. For each object inside a product, there is a maximal rotary step within which the smallest structural size to be tested remains resolvable. In the offline learning process, the object is rotated by this step and imaged repeatedly until a complete cycle is finished, yielding an image sequence that contains the full structural information for recognition. The maximal rotary step is restricted by the smallest structural size and the inherent resolution of the imaging system. During online inspection, the program first finds the optimal matches for all target parts in the standard sequence, i.e., their exact angles within one cycle. Since most structures inside the product are larger than the smallest one, a variable step-size sampling mechanism rotates the product through specific angles, with different steps for different objects, before matching. Experimental results show that the variable step-size method can save considerable time compared with the traditional fixed-step inspection method while maintaining recognition accuracy.

  1. Linearization and Control of Series-Series Compensated Inductive Power Transfer System Based on Extended Describing Function Concept

    Directory of Open Access Journals (Sweden)

    Kunwar Aditya

    2016-11-01

    Full Text Available The extended describing function (EDF is a well-known method for modelling resonant converters due to its high accuracy. However, it requires complex mathematical formulation effort. This paper presents a simplified non-linear mathematical model of series-series (SS compensated inductive power transfer (IPT system, considering zero-voltage switching in the inverter. This simplified mathematical model permits the user to derive the small-signal model using the EDF method, with less computational effort, while maintaining the accuracy of an actual physical model. The derived model has been verified using a frequency sweep method in PLECS. The small-signal model has been used to design the voltage loop controller for a SS compensated IPT system. The designed controller was implemented on a 3.6 kW experimental setup, to test its robustness.

  2. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    International Nuclear Information System (INIS)

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-01-01

    Sampling-based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of the uncertainty has been minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible compared to the impact of sampling the epistemic uncertainties. Obviously, this process may incur high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA, in combination with the Monte Carlo transport code KENO-Va, to various critical assemblies and a full-scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a reduction of computing time by factors on the order of 100. (authors)

  3. Shilling attack detection for recommender systems based on credibility of group users and rating time series.

    Science.gov (United States)

    Zhou, Wei; Wen, Junhao; Qu, Qiang; Zeng, Jun; Cheng, Tian

    2018-01-01

    Recommender systems are vulnerable to shilling attacks. Forged user-generated content data, such as user ratings and reviews, are used by attackers to manipulate recommendation rankings. Shilling attack detection in recommender systems is of great significance to maintain the fairness and sustainability of recommender systems. The current studies have problems in terms of the poor universality of algorithms, difficulty in selection of user profile attributes, and lack of an optimization mechanism. In this paper, a shilling behaviour detection structure based on abnormal group user findings and rating time series analysis is proposed. This paper adds to the current understanding in the field by studying the credibility evaluation model in-depth based on the rating prediction model to derive proximity-based predictions. A method for detecting suspicious ratings based on suspicious time windows and target item analysis is proposed. Suspicious rating time segments are determined by constructing a time series, and data streams of the rating items are examined and suspicious rating segments are checked. To analyse features of shilling attacks by a group user's credibility, an abnormal group user discovery method based on time series and time window is proposed. Standard testing datasets are used to verify the effect of the proposed method.

  4. Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition

    Science.gov (United States)

    Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.

    2005-12-01

    Traditionally, forecasting and characterization of hydrologic systems are performed using many techniques. Stochastic linear methods such as AR and ARIMA, and nonlinear ones such as tools based on statistical learning theory, have been extensively used. The difficulty common to all methods is the determination of sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series, combining the wavelet transform and a nonlinear model, and thus exploiting the merits of both. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are then simulated by the nonlinear model. The hybrid methodology is formulated in a manner that improves the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand. Prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results show that a wavelet-based time series model can simulate and forecast hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.
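
    A minimal sketch of the hybrid idea is shown below: the series is split into additive mono-components via the discrete wavelet transform (using the transform's linearity), each component gets its own one-step model, and the component forecasts are summed. The AR(p) component model is a simple stand-in for the paper's nonlinear model, and the PyWavelets/db4/level settings are assumptions.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def additive_components(series, wavelet="db4", level=3):
        """Split a series into mono-components that sum back to the original
        by reconstructing each wavelet subband separately (the DWT is linear)."""
        coeffs = pywt.wavedec(series, wavelet, level=level)
        comps = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c)
                    for j, c in enumerate(coeffs)]
            comps.append(pywt.waverec(kept, wavelet)[:len(series)])
        return comps

    def ar_forecast(x, p=3):
        """One-step least-squares AR(p) forecast; a simple stand-in for the
        nonlinear component model."""
        X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
        coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        return float(coef @ x[-p:])

    def hybrid_forecast(series, **kwargs):
        """Forecast each mono-component separately and sum the forecasts."""
        return sum(ar_forecast(c) for c in additive_components(series, **kwargs))
    ```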

  5. Markov transition probability-based network from time series for characterizing experimental two-phase flow

    International Nuclear Information System (INIS)

    Gao Zhong-Ke; Hu Li-Dan; Jin Ning-De

    2013-01-01

    We generate a directed weighted complex network by a method based on Markov transition probability to represent an experimental two-phase flow. We first systematically carry out gas-liquid two-phase flow experiments for measuring the time series of flow signals. Then we construct directed weighted complex networks from various time series in terms of a network generation method based on Markov transition probability. We find that the generated network inherits the main features of the time series in the network structure. In particular, the networks from time series with different dynamics exhibit distinct topological properties. Finally, we construct two-phase flow directed weighted networks from experimental signals and associate the dynamic behavior of gas-liquid two-phase flow with the topological statistics of the generated networks. The results suggest that the topological statistics of two-phase flow networks allow quantitative characterization of the dynamic flow behavior in the transitions among different gas-liquid flow patterns. (general)
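
    A minimal sketch of the network generation step is given below, under the assumption of quantile-based amplitude binning (the paper's exact symbolization may differ): each bin is a node, and the edge weight from node a to node b is the empirical Markov transition probability between the bins.

    ```python
    import numpy as np

    def transition_network(series, n_states=8):
        """Map the series onto amplitude bins (nodes) and return the matrix
        of empirical Markov transition probabilities (directed edge weights)."""
        edges = np.quantile(series, np.linspace(0.0, 1.0, n_states + 1))
        states = np.clip(np.searchsorted(edges, series, side="right") - 1,
                         0, n_states - 1)
        W = np.zeros((n_states, n_states))
        for a, b in zip(states[:-1], states[1:]):
            W[a, b] += 1.0                      # count observed transitions
        rows = W.sum(axis=1, keepdims=True)
        return np.divide(W, rows, out=np.zeros_like(W), where=rows > 0)
    ```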

  6. Linear and nonlinear attributes of ultrasonic time series recorded from experimentally loaded rock samples and total failure prediction

    Czech Academy of Sciences Publication Activity Database

    Rudajev, Vladimír; Číž, R.

    2007-01-01

    Vol. 44, No. 3 (2007), pp. 457-467 ISSN 1365-1609 R&D Projects: GA ČR GA205/06/0906 Institutional research plan: CEZ:AV0Z30130516; CEZ:AV0Z30460519 Keywords: ultrasonic emission * microfracturing * time series * autocorrelation * fractal dimensions * neural networks Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 0.735, year: 2007

  7. Using machine learning to accelerate sampling-based inversion

    Science.gov (United States)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
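
    The sketch below shows the general pattern under stated assumptions (a toy forward operator and scikit-learn's default GP): train a surrogate on a batch of exact forward solves, then let the sampler's likelihood evaluations call the surrogate instead of the solver. All names and settings are illustrative.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive_forward(m):
        """Stand-in for a costly forward solver (e.g. synthetic seismograms)."""
        return np.array([np.sin(m[0]) + m[1] ** 2])

    rng = np.random.default_rng(0)
    designs = rng.uniform(-2.0, 2.0, size=(200, 2))    # training models
    data = np.vstack([expensive_forward(m) for m in designs])
    surrogate = GaussianProcessRegressor().fit(designs, data)

    def log_likelihood(m, d_obs, sigma=0.1):
        """Likelihood used inside the sampler; the surrogate replaces the
        accurate (expensive) forward calculation."""
        d_pred = surrogate.predict(m.reshape(1, -1))[0]
        return -0.5 * np.sum((d_pred - d_obs) ** 2) / sigma ** 2
    ```

    In a full scheme, models visited by the sampler would occasionally be re-evaluated exactly and added to the training set, refining the approximation as inversion proceeds.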

  8. Patch-based visual tracking with online representative sample selection

    Science.gov (United States)

    Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu

    2017-05-01

    Occlusion is one of the most challenging problems in visual object tracking. Recently, many discriminative methods have been proposed to deal with this problem. For discriminative methods, it is difficult to select representative samples for target template updating. In general, the holistic bounding boxes that contain tracked results are selected as positive samples. However, when the objects are occluded, this simple strategy easily introduces noise into the training data set and the target template, and then causes the tracker to drift away from the target. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Different from previous works, we divide the object and the candidates into several patches uniformly and propose a score function to calculate the score of each patch independently. Then, the average score is adopted to determine the optimal candidate. Finally, we utilize the non-negative least squares method to find the representative samples, which are used to update the target template. The experimental results on the object tracking benchmark 2013 and on 13 challenging sequences show that the proposed method is robust to occlusion and achieves promising results.

  9. The RBANS Effort Index: base rates in geriatric samples.

    Science.gov (United States)

    Duff, Kevin; Spering, Cynthia C; O'Bryant, Sid E; Beglinger, Leigh J; Moser, David J; Bayless, John D; Culp, Kennith R; Mold, James W; Adams, Russell L; Scott, James G

    2011-01-01

    The Effort Index (EI) of the RBANS was developed to assist clinicians in discriminating patients who demonstrate good effort from those with poor effort. However, there are concerns that older adults might be unfairly penalized by this index, which uses uncorrected raw scores. Using five independent samples of geriatric patients with a broad range of cognitive functioning (e.g., cognitively intact, nursing home residents, probable Alzheimer's disease), base rates of failure on the EI were calculated. In cognitively intact and mildly impaired samples, few older individuals were classified as demonstrating poor effort (e.g., 3% in cognitively intact). However, in the more severely impaired geriatric patients, over one third had EI scores that fell above suggested cutoff scores (e.g., 37% in nursing home residents, 33% in probable Alzheimer's disease). In the cognitively intact sample, older and less educated patients were more likely to have scores suggestive of poor effort. Education effects were observed in three of the four clinical samples. Overall cognitive functioning was significantly correlated with EI scores, with poorer cognition being associated with greater suspicion of low effort. The current results suggest that age, education, and level of cognitive functioning should be taken into consideration when interpreting EI results and that significant caution is warranted when examining EI scores in elders suspected of having dementia.

  10. Ultrasonic-based membrane aided sample preparation of urine proteomes.

    Science.gov (United States)

    Jesus, Jemmyson Romário; Santos, Hugo M; López-Fernández, H; Lodeiro, Carlos; Arruda, Marco Aurélio Zezzi; Capelo, J L

    2018-02-01

    A new ultrafast ultrasonic-based method for shotgun proteomics, as well as label-free protein quantification, in urine samples is developed. The method first separates the urine proteins using nitrocellulose-based membranes, and the proteins are then digested in-membrane using trypsin. The enzymatic digestion process is accelerated from overnight to four minutes using a sonoreactor ultrasonic device. Overall, the sample treatment pipeline comprising protein separation, digestion and identification is done in just 3 h. The process is assessed using urine of healthy volunteers. The method shows that males can be differentiated from females using the protein content of urine in a fast, easy and straightforward way. 232 and 226 proteins are identified in the urine of males and females, respectively. Of these, 162 are common to both genders, whilst 70 are unique to males and 64 to females. Of the 162 common proteins, 13 are present at statistically different levels. The method follows the minimalism concept as outlined by Halls, as each stage of the analysis is evaluated to minimize the time, cost, sample requirement, reagent consumption, energy requirements and production of waste products. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. A Python-based interface to examine motions in time series of solar images

    Science.gov (United States)

    Campos-Rozo, J. I.; Vargas Domínguez, S.

    2017-10-01

    Python is a mature programming language, widely accepted as an engaging option for scientific analysis in multiple areas, as presented in this work for the particular case of solar physics research. SunPy is an open-source library based on Python that has recently been developed to furnish software tools for solar data analysis and visualization. In this work we present a graphical user interface (GUI) based on Python and Qt to efficiently compute proper motions for the analysis of time series of solar data. This user-friendly computing interface, which is intended to be incorporated into the SunPy library, uses a local correlation tracking technique and some extra tools that allow the selection of different parameters to calculate, visualize and analyze vector velocity fields of solar data, i.e. time series of solar filtergrams and magnetograms.
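
    As a rough illustration of motion estimation between two frames of such a time series, the sketch below uses global phase correlation; the GUI described here instead applies local correlation tracking (the same correlation idea within small windows), so this is a stand-in rather than the tool's actual algorithm.

    ```python
    import numpy as np

    def phase_correlation_shift(f, g):
        """Estimate the integer-pixel displacement between two frames via
        phase correlation (FFT-based normalized cross-correlation)."""
        F, G = np.fft.fft2(f), np.fft.fft2(g)
        R = F * np.conj(G)
        R /= np.abs(R) + 1e-12                 # keep phase only
        corr = np.fft.ifft2(R).real
        dy, dx = np.unravel_index(corr.argmax(), corr.shape)
        if dy > f.shape[0] // 2:               # wrap negative shifts
            dy -= f.shape[0]
        if dx > f.shape[1] // 2:
            dx -= f.shape[1]
        return dy, dx
    ```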

  12. Intuitionistic Fuzzy Time Series Forecasting Model Based on Intuitionistic Fuzzy Reasoning

    Directory of Open Access Journals (Sweden)

    Ya’nan Wang

    2016-01-01

    Full Text Available Fuzzy set theory cannot describe data comprehensively, which has greatly limited the objectivity of fuzzy time series in uncertain data forecasting. In this regard, an intuitionistic fuzzy time series forecasting model is built. In the new model, a fuzzy clustering algorithm is used to divide the universe of discourse into unequal intervals, and a more objective technique for ascertaining the membership and nonmembership functions of the intuitionistic fuzzy set is proposed. On this basis, forecast rules based on intuitionistic fuzzy approximate reasoning are established. Finally, comparative experiments on the enrollments of the University of Alabama and the Taiwan Stock Exchange Capitalization Weighted Stock Index are carried out. The results show that the new model has a clear advantage in forecast accuracy.

  13. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We extend this to two-stage sampling, in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) did similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.

  14. Simple nuclear norm based algorithms for imputing missing data and forecasting in time series

    OpenAIRE

    Butcher, Holly Louise; Gillard, Jonathan William

    2017-01-01

    There has been much recent progress on the use of the nuclear norm for the so-called matrix completion problem (the problem of imputing missing values of a matrix). In this paper we investigate the use of the nuclear norm for modelling time series, with particular attention to imputing missing data and forecasting. We introduce a simple alternating projections type algorithm based on the nuclear norm for these tasks, and consider a number of practical examples.
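
    A minimal sketch of an alternating projections scheme of this flavour is given below: singular-value soft-thresholding (which decreases the nuclear norm) alternates with re-imposing the observed entries. For forecasting, the series would first be embedded in a trajectory (Hankel) matrix whose future entries are marked missing; the threshold tau and iteration count are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def impute_nuclear_norm(X, mask, tau=1.0, n_iter=200):
        """Alternating projections for matrix completion. `mask` is True
        where X is observed; missing entries start at zero."""
        Y = np.where(mask, X, 0.0)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            Y = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
            Y[mask] = X[mask]                         # restore observed data
        return Y
    ```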

  15. Uncertainty estimation with bias-correction for flow series based on rating curve

    Science.gov (United States)

    Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta

    2014-03-01

    Streamflow discharge constitutes one of the fundamental data required to perform water balance studies and develop hydrological models. A rating curve, designed from a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate complete discharge time series of reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available, even though it has a significant impact on hydrological modelling. In this paper, we identify discrepancies in the hydrographers' rating curves used to derive the historical discharge series and propose a bias correction that, like the traditional rating curve, takes the form of a power function. To obtain the uncertainty estimate, we propose a further two-sided Box-Cox transformation to bring the regression residuals as close to the normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to the gauging stations on the Flinders and Gilbert rivers in north-west Queensland, Australia.
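
    The basic rating-curve step can be sketched as below: fit Q = a(h - h0)^b in log space and carry the log-residual scatter forward as a multiplicative uncertainty band. The paper's additional power-function bias correction and two-sided Box-Cox stabilization are not reproduced here, and fixing h0 is an assumption.

    ```python
    import numpy as np

    def fit_rating_curve(stage, discharge, h0=0.0):
        """Fit Q = a * (h - h0)**b by least squares in log space; the standard
        deviation of the log residuals is a simple uncertainty measure.
        Assumes stage > h0 for all measurements."""
        x = np.log(stage - h0)
        y = np.log(discharge)
        b, log_a = np.polyfit(x, y, 1)
        resid = y - (log_a + b * x)
        return np.exp(log_a), b, resid.std(ddof=2)

    def discharge_band(h, a, b, s, h0=0.0, z=1.96):
        """Central estimate with an approximate 95% multiplicative band."""
        q = a * (h - h0) ** b
        return q * np.exp(-z * s), q, q * np.exp(z * s)
    ```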

  16. Analysis of financial time series using multiscale entropy based on skewness and kurtosis

    Science.gov (United States)

    Xu, Meng; Shang, Pengjian

    2018-01-01

    There is great interest in studying the dynamic characteristics of financial time series of daily stock closing prices in different regions. Multi-scale entropy (MSE) is effective mainly in quantifying the complexity of time series on different time scales. This paper applies a new MSE-based method, built on skewness and kurtosis, to the study of financial stability. To identify the better coarse-graining method for different kinds of stock indexes, we take into account the developmental characteristics of the stock markets of the three continents of Asia, North America and Europe. We study the volatility of different financial time series and analyze the similarities and differences of the coarse-grained time series from the perspective of skewness and kurtosis. A correspondence between the entropy value of stock sequences and the degree of stability of financial markets was observed. Three of the eight stock sequences, which have particular characteristics, are discussed; their behavior matches the graphical results of applying the MSE method. A comparative study is conducted over synthetic and real-world data. Results show that the modified method is more sensitive to changes in dynamics and carries more valuable information; at the same time, the skewness- and kurtosis-based discrimination is obvious as well as more stable.

  17. Sample-Based Extreme Learning Machine with Missing Data

    Directory of Open Access Journals (Sweden)

    Hang Gao

    2015-01-01

    Full Text Available Extreme learning machine (ELM) has been extensively studied in the machine learning community during the last few decades due to its high efficiency and its unification of classification, regression, and so forth. Despite these merits, existing ELM algorithms cannot efficiently handle missing data, which are relatively common in practical applications. The problem of missing data is commonly handled by imputation (i.e., replacing missing values with substituted values according to available information). However, imputation methods are not always effective. In this paper, we propose a sample-based learning framework to address this issue. Based on this framework, we develop two sample-based ELM algorithms, for classification and regression, respectively. Comprehensive experiments have been conducted on synthetic data sets, UCI benchmark data sets, and a real-world fingerprint image data set. As indicated, without introducing extra computational complexity, the proposed algorithms learn more accurately and stably than other state-of-the-art methods, especially in the case of higher missing ratios.

  18. Multiclass classification for skin cancer profiling based on the integration of heterogeneous gene expression series.

    Science.gov (United States)

    Gálvez, Juan Manuel; Castillo, Daniel; Herrera, Luis Javier; San Román, Belén; Valenzuela, Olga; Ortuño, Francisco Manuel; Rojas, Ignacio

    2018-01-01

    Most of the research studies developed applying microarray technology to the characterization of different pathological states of a disease may fail to reach statistically significant results. This is largely due to the small repertoire of analysed samples and to the limitation in the number of states or pathologies usually addressed. Moreover, the influence of potential deviations on the gene expression quantification is usually disregarded. In spite of the continuous changes in omic sciences, reflected for instance in the emergence of new Next-Generation Sequencing-related technologies, the existing availability of a vast amount of gene expression microarray datasets should be properly exploited. Therefore, this work proposes a novel methodological approach involving the integration of several heterogeneous skin cancer series and a subsequent multiclass classifier design. This approach thus provides clinicians with an intelligent diagnosis support tool based on the use of a robust set of selected biomarkers, which simultaneously distinguishes among different cancer-related skin states. To achieve this, a multi-platform combination of microarray datasets from Affymetrix and Illumina manufacturers was carried out. This integration is expected to strengthen the statistical robustness of the study as well as the finding of highly reliable skin cancer biomarkers. Specifically, the designed operation pipeline has allowed the identification of a small subset of 17 differentially expressed genes (DEGs) from which to distinguish among 7 involved skin states. These genes were obtained after assessing a number of potential batch effects on the gene expression data. The biological interpretation of these genes was inspected in the specific literature to understand their underlying information in relation to skin cancer. Finally, in order to assess their possible effectiveness in cancer diagnosis, a cross-validation Support Vector Machines (SVM)-based

  19. Genomic epidemiology of a major Mycobacterium tuberculosis outbreak: Retrospective cohort study in a low incidence setting using sparse time-series sampling

    DEFF Research Database (Denmark)

    Folkvardsen, Dorte Bek; Norman, Anders; Andersen, Åse Bengård

    2017-01-01

    cases belonging to this outbreak via routine MIRU-VNTR typing. Here, we present a retrospective analysis of the C2/1112-15 dataset, based on whole-genome data from a sparse time-series consisting of five randomly selected isolates from each of the 23 years. Even if these data are derived from only 12...

  20. Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data

    Science.gov (United States)

    Hallac, David; Vare, Sagar; Boyd, Stephen; Leskovec, Jure

    2018-01-01

    Subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. Once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. For example, raw sensor data from a fitness-tracking application can be expressed as a timeline of a select few actions (i.e., walking, sitting, running). However, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. Furthermore, interpreting the resulting clusters is difficult, especially when the data is high-dimensional. Here we propose a new method of model-based clustering, which we call Toeplitz Inverse Covariance-based Clustering (TICC). Each cluster in the TICC method is defined by a correlation network, or Markov random field (MRF), characterizing the interdependencies between different observations in a typical subsequence of that cluster. Based on this graphical representation, TICC simultaneously segments and clusters the time series data. We solve the TICC problem through alternating minimization, using a variation of the expectation maximization (EM) algorithm. We derive closed-form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers (ADMM), respectively. We validate our approach by comparing TICC to several state-of-the-art baselines in a series of synthetic experiments, and we then demonstrate on an automobile sensor dataset how TICC can be used to learn interpretable clusters in real-world scenarios. PMID:29770257

  1. CdTe detector based PIXE mapping of geological samples

    Energy Technology Data Exchange (ETDEWEB)

    Chaves, P.C., E-mail: cchaves@ctn.ist.utl.pt [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal); Taborda, A. [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal); Oliveira, D.P.S. de [Laboratório Nacional de Energia e Geologia (LNEG), Apartado 7586, 2611-901 Alfragide (Portugal); Reis, M.A. [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal)

    2014-01-01

    A sample collected from a borehole drilled approximately 10 km ESE of Bragança, Trás-os-Montes, was analysed by standard and high energy PIXE at both CTN (previously ITN) PIXE setups. The sample is a fine-grained metapyroxenite grading to coarse-grained at the base, with disseminated sulphides and fine veinlets of pyrrhotite and pyrite. Matrix composition was obtained at the standard PIXE setup using a 1.25 MeV H⁺ beam at three different spots. Medium and high Z elemental concentrations were then determined using the DT2fit and DT2simul codes (Reis et al., 2008, 2013 [1,2]), on the spectra obtained in the High Resolution and High Energy (HRHE)-PIXE setup (Chaves et al., 2013 [3]) by irradiation of the sample with a 3.8 MeV proton beam provided by the CTN 3 MV Tandetron accelerator. In this paper we present the results, discuss the detection limits of the method, and discuss the added value of the CdTe detector in this context.

  2. Automated classification of Permanent Scatterers time-series based on statistical characterization tests

    Science.gov (United States)

    Berti, Matteo; Corsini, Alessandro; Franceschini, Silvia; Iannacone, Jean Pascal

    2013-04-01

    The application of spaceborne synthetic aperture radar interferometry has progressed, over the last two decades, from the pioneering use of single interferograms for analyzing changes on the earth's surface to the development of advanced multi-interferogram techniques for analyzing any sort of natural phenomenon that involves movement of the ground. The success of multi-interferogram techniques in the analysis of natural hazards such as landslides and subsidence is widely documented in the scientific literature and demonstrated by the consensus among end-users. Despite the great potential of this technique, radar interpretation of slope movements is generally based on the sole analysis of average displacement velocities, while the information contained in multi-interferogram time series is often overlooked if not completely neglected. The underuse of PS time series is probably due to the detrimental effect of residual atmospheric errors, which leave the PS time series with erratic, irregular fluctuations that are often difficult to interpret, and also to the difficulty of performing a visual, supervised analysis of the time series for a large dataset. In this work we present a procedure for automatic classification of PS time series based on a series of statistical characterization tests. The procedure classifies the time series into six distinctive target trends (0=uncorrelated; 1=linear; 2=quadratic; 3=bilinear; 4=discontinuous without constant velocity; 5=discontinuous with change in velocity) and retrieves for each trend a series of descriptive parameters which can be efficiently used to characterize the temporal changes of ground motion. The classification algorithms were developed and tested using an ENVISAT dataset available in the frame of the EPRS-E project (Extraordinary Plan of Environmental Remote Sensing) of the Italian Ministry of Environment (track "Modena", Northern Apennines). This dataset was generated using standard processing, then the
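
    The flavor of such statistical characterization can be illustrated with a minimal sketch: fit competing polynomial models to a displacement series and keep the one with the lowest BIC. This toy covers only three of the six target trends and is an assumption of this note, not the authors' test battery.

      import numpy as np

      def classify_ps_series(t, y):
          # Fit constant, linear and quadratic trends; pick the lowest BIC.
          n = len(y)
          bic = {}
          for name, deg in [("uncorrelated", 0), ("linear", 1),
                            ("quadratic", 2)]:
              coef = np.polyfit(t, y, deg)
              rss = np.sum((y - np.polyval(coef, t)) ** 2)
              bic[name] = n * np.log(rss / n + 1e-12) + (deg + 1) * np.log(n)
          return min(bic, key=bic.get)

      # Example: a PS point moving at a constant rate plus atmospheric noise.
      t = np.arange(50, dtype=float)
      y = 0.8 * t + np.random.default_rng(1).normal(0, 2.0, 50)
      print(classify_ps_series(t, y))          # typically 'linear'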

  3. Magnetic circular dichroism of LaMn1-xAlxO3+δ series of samples

    CERN Document Server

    Banerjee, A; Krishnan, R V; Dasannacharya, B A; Muro, T; Saitoh, Y; Imada, S; Suga, S

    2003-01-01

    We report magnetic circular dichroism (MCD) studies on the polycrystalline LaMn1-xAlxO3+δ series with x=0-0.2. The Mn-2p MCD was recorded in the temperature range from 45 to 300 K for samples with x=0, 0.075, 0.1 and 0.15. Unlike in ac-susceptibility, no second transition in MCD was observed at lower temperatures in the samples with x>=0.075, indicating that this transition is not intrinsic to the samples but arises out of the dynamics of ferromagnetic clusters in the polycrystalline sample. More significantly, the MCD signal persists even 100 K above the ferromagnetic TC, confirming that the observation of magnetic correlation above TC in bulk measurements is intrinsic to this type of system.

  4. Chemometric classification of casework arson samples based on gasoline content.

    Science.gov (United States)

    Sinkov, Nikolai A; Sandercock, P Mark L; Harynuk, James J

    2014-02-01

    Detection and identification of ignitable liquids (ILs) in arson debris is a critical part of arson investigations. The challenge of this task is due to the complex and unpredictable chemical nature of arson debris, which also contains pyrolysis products from the fire. ILs, most commonly gasoline, are complex chemical mixtures containing hundreds of compounds that will be consumed or otherwise weathered by the fire to varying extents depending on factors such as temperature, air flow, the surface on which the IL was placed, etc. While methods such as ASTM E-1618 are effective, data interpretation can be a costly bottleneck in the analytical process for some laboratories. In this study, we address this issue through the application of chemometric tools. Prior to the application of chemometric tools such as PLS-DA and SIMCA, issues of chromatographic alignment and variable selection need to be addressed. Here we use an alignment strategy based on a ladder consisting of perdeuterated n-alkanes. Variable selection and model optimization were automated using a hybrid backward elimination (BE) and forward selection (FS) approach guided by the cluster resolution (CR) metric. In this work, we demonstrate the automated construction, optimization, and application of chemometric tools to casework arson data. The resulting PLS-DA and SIMCA classification models, trained with 165 training set samples, classified 55 validation set samples based on gasoline content with 100% specificity and sensitivity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
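
    As an illustration of the classification stage, a minimal PLS-DA can be written with scikit-learn by regressing one-hot class indicators on the (already aligned and variable-selected) chromatographic features. The alignment, BE/FS selection and CR metric of the paper are not reproduced here, and the data below are synthetic placeholders.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def plsda_fit(X, y, n_components=2):
          # PLS-DA: PLS regression against one-hot class indicators.
          classes = np.unique(y)
          Y = (y[:, None] == classes[None, :]).astype(float)
          return PLSRegression(n_components=n_components).fit(X, Y), classes

      def plsda_predict(model, classes, X):
          # Classify by the largest predicted class indicator.
          return classes[np.argmax(model.predict(X), axis=1)]

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (40, 30)), rng.normal(1, 1, (40, 30))])
      y = np.array(["none"] * 40 + ["gasoline"] * 40)
      model, classes = plsda_fit(X, y)
      print((plsda_predict(model, classes, X) == y).mean())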

  5. Reliable Quantification of the Potential for Equations Based on Spot Urine Samples to Estimate Population Salt Intake

    DEFF Research Database (Denmark)

    Huang, Liping; Crino, Michelle; Wu, Jason Hy

    2016-01-01

    BACKGROUND: Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. OBJECTIVE: The aim of this study is to identify a reliable method for estimating mean population salt intake from spot urine samples. This will be done by comparing the performance of existing equations against one another and against estimates derived from 24-hour urine samples. The effects of factors such as ethnicity, sex, age, body mass index, antihypertensive drug use, health status… converted to a standard format. Individual participant records will be compiled and a series of analyses will be completed to: (1) compare existing equations for estimating 24-hour salt intake from spot urine samples with 24-hour urine samples, and assess the degree of bias according to key demographic and clinical…

  6. Development of indicators of vegetation recovery based on time series analysis of SPOT Vegetation data

    Science.gov (United States)

    Lhermitte, S.; Tips, M.; Verbesselt, J.; Jonckheere, I.; Van Aardt, J.; Coppin, Pol

    2005-10-01

    Large-scale wild fires have direct impacts on natural ecosystems and play a major role in vegetation ecology and the carbon budget. Accurate methods for describing the post-fire development of vegetation are therefore essential for the understanding and monitoring of terrestrial ecosystems. Time series analysis of satellite imagery offers the potential to quantify these parameters with spatial and temporal accuracy. Current research focuses on the potential of time series analysis of SPOT Vegetation S10 data (1999-2001) to quantify the vegetation recovery of large-scale burns detected in the framework of GBA2000. The objective of this study was to provide quantitative estimates of the spatio-temporal variation of vegetation recovery based on remote sensing indicators. Southern Africa was used as a pilot study area, given the availability of ground and satellite data. An automated technique was developed to extract consistent indicators of vegetation recovery from the SPOT-VGT time series. Reference areas were used to quantify the vegetation regrowth by means of Regeneration Indices (RI). Two kinds of recovery indicators (time-based and value-based) were tested for RIs of NDVI, SR, SAVI, NDWI, and pure band information. The effects of vegetation structure and temporal fire regime features on the recovery indicators were subsequently analyzed. Statistical analyses were conducted to assess whether the recovery indicators differed between vegetation types and depended on the timing of the burning season. Results highlighted the importance of appropriate reference areas and of correct normalization of the SPOT-VGT data.

  7. Dimension reduction of frequency-based direct Granger causality measures on short time series.

    Science.gov (United States)

    Siggiridou, Elsa; Kimiskidis, Vasilios K; Kugiumtzis, Dimitris

    2017-09-01

    The mainstream in the estimation of effective brain connectivity relies on Granger causality measures in the frequency domain. If the measure is meant to capture direct causal effects accounting for the presence of other observed variables, as in multi-channel electroencephalograms (EEG), the fit of a vector autoregressive (VAR) model on the multivariate time series is typically required. For short time series of many variables, the estimation of the VAR may not be stable, requiring dimension reduction that results in restricted or sparse VAR models. The restricted VAR obtained by the modified backward-in-time selection method (mBTS) is adapted to the generalized partial directed coherence (GPDC), termed restricted GPDC (RGPDC). Dimension reduction on other frequency-based measures, such as the direct directed transfer function (dDTF), is straightforward. First, a simulation study using linear stochastic multivariate systems is conducted and RGPDC compares favorably to GPDC on short time series in terms of sensitivity and specificity. Then the two measures are tested for their ability to detect changes in brain connectivity during an epileptiform discharge (ED) from multi-channel scalp EEG. It is shown that RGPDC identifies the connectivity structure of the simulated systems, as well as changes in brain connectivity, better than GPDC, and is less dependent on the free parameter of the VAR order. The proposed dimension reduction in frequency measures based on VAR constitutes an appropriate strategy for reliably estimating brain networks within short time windows. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Burned area detection based on Landsat time series in savannas of southern Burkina Faso

    Science.gov (United States)

    Liu, Jinxiu; Heiskanen, Janne; Maeda, Eduardo Eiji; Pellikka, Petri K. E.

    2018-02-01

    West African savannas are subject to regular fires, which have impacts on vegetation structure, biodiversity and carbon balance. Efficient and accurate mapping of the burned area associated with seasonal fires can greatly benefit decision making in land management. Since coarse resolution burned area products cannot meet the accuracy needed for fire management and climate modelling at local scales, medium resolution Landsat data are a promising alternative for local scale studies. In this study, we developed an algorithm for continuous monitoring of annual burned areas using Landsat time series. The algorithm is based on burned pixel detection using harmonic model fitting with Landsat time series and breakpoint identification in the time series data. This approach was tested in a savanna area in southern Burkina Faso using 281 images acquired between October 2000 and April 2016. An overall accuracy of 79.2% was obtained with balanced omission and commission errors. This represents a significant improvement in comparison with the MODIS burned area product (67.6%), which had more omission errors than commission errors, indicating underestimation of the total burned area. By observing the spatial distribution of burned areas, we found that the Landsat-based method misclassified cropland and cloud shadows as burned areas due to their similar spectral response, and that the MODIS burned area product omitted small and fragmented burned areas. The proposed algorithm is flexible and robust against decreased data availability caused by clouds and Landsat 7 missing lines, and therefore has high potential for application to other landscapes in future studies.
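
    The harmonic-model step can be sketched as follows: fit a seasonal least-squares model to a Landsat index series and flag observations falling far below the seasonal prediction as candidate burn dates. The parameter choices here (two harmonics, a 3-sigma rule) are illustrative assumptions, not the paper's tuned algorithm.

      import numpy as np

      def harmonic_fit(t, y, period=365.25, n_harmonics=2):
          # Least-squares seasonal model of an index series (t in days).
          cols = [np.ones_like(t)]
          for h in range(1, n_harmonics + 1):
              w = 2 * np.pi * h * t / period
              cols += [np.cos(w), np.sin(w)]
          A = np.column_stack(cols)
          coef, *_ = np.linalg.lstsq(A, y, rcond=None)
          return A @ coef

      def flag_burn_candidates(t, y, k=3.0):
          # Flag points more than k robust sigmas below the seasonal model.
          resid = y - harmonic_fit(t, y)
          sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
          return resid < -k * sigma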

  9. A graph-based approach to detect spatiotemporal dynamics in satellite image time series

    Science.gov (United States)

    Guttler, Fabio; Ienco, Dino; Nin, Jordi; Teisseire, Maguelonne; Poncelet, Pascal

    2017-08-01

    Enhancing the frequency of satellite acquisitions represents a key issue for the Earth Observation community nowadays. Repeated observations are crucial for monitoring purposes, particularly when intra-annual processes should be taken into account. Time series of images constitute a valuable source of information in these cases. The goal of this paper is to propose a new methodological framework to automatically detect and extract spatiotemporal information from satellite image time series (SITS). Existing methods dealing with such data are usually classification-oriented and cannot provide information about evolutions and temporal behaviors. In this paper we propose a graph-based strategy that combines object-based image analysis (OBIA) with data mining techniques. Image objects computed at each individual timestamp are connected across the time series and generate a set of evolution graphs. Each evolution graph is associated with a particular area within the study site and stores information about its temporal evolution. Such information can be deeply explored at the evolution graph scale or used to compare the graphs and supply a general picture at the study site scale. We validated our framework on two study sites located in the South of France involving different types of natural, semi-natural and agricultural areas. The results obtained from a Landsat SITS support the quality of the methodological approach and illustrate how the framework can be employed to extract and characterize spatiotemporal dynamics.

  10. Detrended fluctuation analysis based on higher-order moments of financial time series

    Science.gov (United States)

    Teng, Yue; Shang, Pengjian

    2018-01-01

    In this paper, a generalized method of detrended fluctuation analysis (DFA) is proposed as a new measure to assess the complexity of a complex dynamical system such as a stock market. We extend DFA and local scaling DFA to higher moments such as skewness and kurtosis (labeled SMDFA and KMDFA) so as to investigate the volatility scaling properties of financial time series. Simulations are conducted over synthetic and financial data to provide a comparative study. We further report the results of volatility behaviors in three American, three Chinese and three European stock markets by using the DFA and LSDFA methods based on higher moments. They describe the dynamic behaviors of the time series in different aspects, which can quantify the changes in complexity of stock market data and provide more meaningful information than a single exponent. The results reveal some higher-moment volatility and higher-moment multiscale volatility details that cannot be obtained using the traditional DFA method.
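
    The extension can be sketched by replacing the variance of the detrended window residuals in classic DFA with a higher moment. The fragment below is a toy version under that reading; it is not the authors' code (classic DFA reports the square root of the mean variance, and the higher-moment variants here follow SMDFA/KMDFA only in spirit).

      import numpy as np
      from scipy.stats import kurtosis, skew

      def dfa_moment(x, scales, moment="var"):
          # Per-window statistic: variance (classic) or a higher moment.
          stat = {"var": np.var, "skew": skew, "kurt": kurtosis}[moment]
          profile = np.cumsum(x - np.mean(x))
          F = []
          for s in scales:
              vals = []
              for i in range(len(profile) // s):
                  seg = profile[i * s:(i + 1) * s]
                  tt = np.arange(s)
                  # Remove the local linear trend from each window.
                  resid = seg - np.polyval(np.polyfit(tt, seg, 1), tt)
                  vals.append(stat(resid))
              F.append(np.mean(vals))
          return np.asarray(F)

      x = np.random.default_rng(0).normal(size=4096)
      scales = np.unique(np.logspace(1, 3, 12).astype(int))
      print(dfa_moment(x, scales, "kurt")[:3])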

  11. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    Science.gov (United States)

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

    Detection of clustered microcalcification (MC) from mammograms plays an essential role in computer-aided diagnosis of early stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm. Then a series of fuzzy SVMs are integrated for classification, with each group of samples drawn from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms, and the measured Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR × (1 - FPR) are 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results from synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
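
    A stripped-down version of the grouping idea (EM-based partition of the sample space, then one SVM per group) can be written with scikit-learn as below. The fuzzy membership weighting of the actual G-FSVM is omitted, and the class name and parameters are assumptions of this sketch.

      import numpy as np
      from sklearn.mixture import GaussianMixture
      from sklearn.svm import SVC

      class GroupedSVM:
          # EM (Gaussian mixture) partition of the sample space, then one
          # SVM per group; test samples are routed to their group's SVM.
          def __init__(self, n_groups=3):
              self.gmm = GaussianMixture(n_components=n_groups,
                                         random_state=0)
              self.svms = {}

          def fit(self, X, y):
              g = self.gmm.fit_predict(X)
              for j in np.unique(g):
                  m = g == j
                  # Fall back to all data if a group is single-class.
                  Xj, yj = (X[m], y[m]) if len(np.unique(y[m])) > 1 else (X, y)
                  self.svms[j] = SVC(kernel="rbf", gamma="scale").fit(Xj, yj)
              return self

          def predict(self, X):
              g = self.gmm.predict(X)
              return np.array([self.svms[gi].predict(xi[None, :])[0]
                               for gi, xi in zip(g, X)])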

  12. [Predicting Incidence of Hepatitis E in China using Fuzzy Time Series Based on Fuzzy C-Means Clustering Analysis].

    Science.gov (United States)

    Luo, Yi; Zhang, Tao; Li, Xiao-song

    2016-05-01

    To explore the application of a fuzzy time series model based on fuzzy c-means clustering in forecasting the monthly incidence of Hepatitis E in mainland China. A predictive model (a fuzzy time series method based on fuzzy c-means clustering) was developed using Hepatitis E incidence data in mainland China between January 2004 and July 2014. The incidence data from August 2014 to November 2014 were used to test the fitness of the predictive model. The forecasting results were compared with those resulting from traditional fuzzy time series models. The fuzzy time series model based on fuzzy c-means clustering had a fitting mean squared error (MSE) of 0.0011 and a forecasting MSE of 6.9775 × 10⁻⁴, compared with 0.0017 and 0.0014 for the traditional forecasting model. The results indicate that the fuzzy time series model based on fuzzy c-means clustering has a better performance in forecasting the incidence of Hepatitis E.

  13. Forecasting business cycle with chaotic time series based on neural network with weighted fuzzy membership functions

    International Nuclear Information System (INIS)

    Chai, Soo H.; Lim, Joon S.

    2016-01-01

    This study presents a forecasting model of cyclical fluctuations of the economy based on the time delay coordinate embedding method. The model uses a neuro-fuzzy network called a neural network with weighted fuzzy membership functions (NEWFM). The time series of the leading composite index, preprocessed using the time delay coordinate embedding method, are used as input data to the NEWFM to forecast the business cycle. A comparative study is conducted against other methods based on the wavelet transform and Principal Component Analysis. The forecasting results are tested using a linear regression analysis to compare the approximation of the input data against the target class, gross domestic product (GDP). The chaos-based model captures nonlinear dynamics and interactions within the system, which the other two models ignore. The test results demonstrate that the chaos-based method significantly improves the prediction capability, showing superior performance to the other methods.
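
    The time delay coordinate embedding used as preprocessing has a compact standard form; a minimal sketch follows (the dimension and lag values are illustrative):

      import numpy as np

      def delay_embed(x, dim=3, tau=2):
          # Map a scalar series x(t) to vectors
          # [x(t), x(t-tau), ..., x(t-(dim-1)*tau)].
          n = len(x) - (dim - 1) * tau
          return np.column_stack(
              [x[(dim - 1 - j) * tau:(dim - 1 - j) * tau + n]
               for j in range(dim)])

      x = np.sin(0.3 * np.arange(100))
      print(delay_embed(x, dim=3, tau=2).shape)   # (96, 3)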

  14. Solution-based targeted genomic enrichment for precious DNA samples

    Directory of Open Access Journals (Sweden)

    Shearer Aiden

    2012-05-01

    Background: Solution-based targeted genomic enrichment (TGE) protocols permit selective sequencing of genomic regions of interest on a massively parallel scale. These protocols could be improved by: (1) modifying or eliminating time-consuming steps; (2) increasing yield to reduce input DNA and excessive PCR cycling; and (3) enhancing reproducibility. Results: We developed a solution-based TGE method for downstream Illumina sequencing in a non-automated workflow, adding standard Illumina barcode indexes during the post-hybridization amplification to allow for sample pooling prior to sequencing. The method utilizes Agilent SureSelect baits, primers and hybridization reagents for the capture, off-the-shelf reagents for the library preparation steps, and adaptor oligonucleotides for Illumina paired-end sequencing purchased directly from an oligonucleotide manufacturing company. Conclusions: This solution-based TGE method for Illumina sequencing is optimized for small- or medium-sized laboratories and addresses the weaknesses of standard protocols by reducing the amount of input DNA required, increasing capture yield, optimizing efficiency, and improving reproducibility.

  15. Preview-based sampling for controlling gaseous simulations

    KAUST Repository

    Huang, Ruoguan

    2011-01-01

    In this work, we describe an automated method for directing the control of a high resolution gaseous fluid simulation based on the results of a lower resolution preview simulation. Small variations in accuracy between low and high resolution grids can lead to divergent simulations, which is problematic for those wanting to achieve a desired behavior. Our goal is to provide a simple method for ensuring that the high resolution simulation matches key properties from the lower resolution simulation. We first let a user specify a fast, coarse simulation that will be used for guidance. Our automated method samples the data to be matched at various positions and scales in the simulation, or allows the user to identify key portions of the simulation to maintain. During the high resolution simulation, a matching process ensures that the properties sampled from the low resolution simulation are maintained. This matching process keeps the different resolution simulations aligned even for complex systems, and can ensure consistency of not only the velocity field, but also advected scalar values. Because the final simulation is naturally similar to the preview simulation, only minor controlling adjustments are needed, allowing a simpler control method than that used in prior keyframing approaches. Copyright © 2011 by the Association for Computing Machinery, Inc.

  16. Taylor Series-Based Long-Term Creep-Life Prediction of Alloy 617

    International Nuclear Information System (INIS)

    Yin, Song Nan; Kim, Woo Gon; Kim, Yong Wan; Park, Jae Young; Kim, Soen Jin

    2010-01-01

    In this study, a Taylor series (T-S) model based on the Arrhenius, McVetty, and Monkman-Grant equations was developed using mathematical analysis. In order to reduce fitting errors, the McVetty equation was transformed by considering the first three terms of the Taylor series expansion. The model parameters were accurately determined by the statistical technique of maximum likelihood estimation, and the model was applied to the creep data of alloy 617. The T-S model results showed better agreement with the experimental data than other models such as the Eno, exponential, and L-M models. In particular, the T-S model was converted into an isothermal Taylor series (IT-S) model that can predict the creep strength at a given temperature. The estimates obtained using the converted IT-S model were better than those obtained using the T-S model for predicting the long-term creep life of alloy 617.

  17. Dynamic Forecasting Conditional Probability of Bombing Attacks Based on Time-Series and Intervention Analysis.

    Science.gov (United States)

    Li, Shuying; Zhuang, Jun; Shen, Shifei

    2017-07-01

    In recent years, various types of terrorist attacks have occurred, causing worldwide catastrophes. According to the Global Terrorism Database (GTD), among all attack tactics, bombing attacks happened most frequently, followed by armed assaults. In this article, a model for analyzing and forecasting the conditional probability of bombing attacks (CPBAs) based on time-series methods is developed. In addition, intervention analysis is used to analyze the sudden increase in the time-series process. The results show that the CPBA increased dramatically at the end of 2011. During that time, the CPBA increased by 16.0% in a two-month period to reach the peak value, and it remains 9.0% greater than the predicted level after the temporary effect gradually decays. By contrast, no significant fluctuation can be found in the conditional probability process of armed assault. It can be inferred that social unrest, such as America's troop withdrawal from Afghanistan and Iraq, could have led to the increase of the CPBA in Afghanistan, Iraq, and Pakistan. The integrated time-series and intervention model is used to forecast the monthly CPBA in 2014 and through 2064. The average relative error compared with the real data in 2014 is 3.5%. The model is also applied to the total number of attacks recorded by the GTD between 2004 and 2014. © 2016 Society for Risk Analysis.
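
    The combination of a time-series model with a step intervention can be sketched with statsmodels: an ARMA-type model plus an exogenous step regressor whose coefficient measures the size of the level shift. The orders, dates and magnitudes below are toy assumptions, not the paper's fitted model.

      import numpy as np
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(0)
      n, t0 = 120, 80                            # monthly series, shift at t0
      y = 0.30 + rng.normal(0, 0.05, n)
      y[t0:] += 0.16                             # sudden increase (toy CPBA)
      step = (np.arange(n) >= t0).astype(float)  # step intervention regressor

      model = SARIMAX(y, exog=step, order=(1, 0, 1)).fit(disp=False)
      print(model.params)                        # exog coef ~ size of jump
      forecast = model.forecast(steps=12, exog=np.ones((12, 1)))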

  18. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows switching smoothly or abruptly between different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  19. Using learning analytics to evaluate a video-based lecture series.

    Science.gov (United States)

    Lau, K H Vincent; Farooque, Pue; Leydon, Gary; Schwartz, Michael L; Sadler, R Mark; Moeller, Jeremy J

    2018-01-01

    The video-based lecture (VBL), an important component of the flipped classroom (FC) and massive open online course (MOOC) approaches to medical education, has primarily been evaluated through direct learner feedback. Evaluation may be enhanced through learning analytics (LA) - analysis of quantitative audience usage data generated by video-sharing platforms. We applied LA to an experimental series of ten VBLs on electroencephalography (EEG) interpretation, uploaded to YouTube in the model of a publicly accessible MOOC. Trends in view count, total percentage of video viewed, and audience retention (AR) (percentage of viewers watching at a time point compared to the initial total) were examined. The pattern of average AR decline was characterized using regression analysis, revealing a uniform linear decline in viewership for each video, with no evidence of an optimal VBL length. Segments with transient increases in AR corresponded to those focused on core concepts, indicating content that merits more detailed evaluation. We propose a model for applying LA at four levels: global, series, video, and feedback. LA may be a useful tool in evaluating a VBL series. Our proposed model combines analytics data and learner self-report for comprehensive evaluation.

  20. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    Science.gov (United States)

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  1. Soil classification based on the spectral characteristics of topsoil samples

    Science.gov (United States)

    Liu, Huanjun; Zhang, Xiaokang; Zhang, Xinle

    2016-04-01

    Soil taxonomy plays an important role in soil utilization and management, but China has only a coarse soil map, created from 1980s data. New technology, e.g. spectroscopy, could simplify soil classification. This study attempts to classify soils based on the spectral characteristics of topsoil samples. 148 topsoil samples of typical soils, including Black soil, Chernozem, Blown soil and Meadow soil, were collected from the Songnen plain, Northeast China, and the laboratory spectral reflectance in the visible and near infrared region (400-2500 nm) was processed with weighted moving average, a resampling technique, and continuum removal. Spectral indices were extracted from the soil spectral characteristics, including the second absorption position of the spectral curve, the first absorption valley's area, and the slope of the spectral curve at 500-600 nm and 1340-1360 nm. Then K-means clustering and a decision tree were used respectively to build soil classification models. The results indicated that 1) the second absorption positions of Black soil and Chernozem were located at 610 nm and 650 nm respectively; 2) the spectral curve of Meadow soil is similar to that of its adjacent soils, which could be due to soil erosion; 3) the decision tree model showed higher classification accuracy: the accuracies for Black soil, Chernozem, Blown soil and Meadow soil were 100%, 88%, 97% and 50% respectively, and the accuracy for Blown soil could be increased to 100% by adding one more spectral index (the first two valleys' area) to the model, which shows that the model could be used for soil classification and soil mapping in the near future.
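
    The decision-tree stage can be illustrated with scikit-learn on the four spectral indices named in the abstract; the feature values below are invented placeholders, not measured data.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      # Columns: second absorption position (nm), first absorption valley
      # area, slope at 500-600 nm, slope at 1340-1360 nm (invented values).
      X = np.array([[610, 0.12, 0.0021, 0.0003],   # Black soil
                    [650, 0.09, 0.0018, 0.0004],   # Chernozem
                    [640, 0.05, 0.0030, 0.0006],   # Blown soil
                    [615, 0.10, 0.0024, 0.0005]])  # Meadow soil
      y = ["Black", "Chernozem", "Blown", "Meadow"]

      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
      print(tree.predict([[612, 0.11, 0.0022, 0.0003]]))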

  2. Automated CBED processing: Sample thickness estimation based on analysis of zone-axis CBED pattern

    Energy Technology Data Exchange (ETDEWEB)

    Klinger, M., E-mail: klinger@post.cz; Němec, M.; Polívka, L.; Gärtnerová, V.; Jäger, A.

    2015-03-15

    An automated processing of convergent beam electron diffraction (CBED) patterns is presented. The proposed methods are used in an automated tool for estimating the thickness of transmission electron microscopy (TEM) samples by matching an experimental zone-axis CBED pattern with a series of patterns simulated for known thicknesses. The proposed tool detects CBED disks, localizes a pattern in detected disks and unifies the coordinate system of the experimental pattern with the simulated one. The experimental pattern is then compared disk-by-disk with a series of simulated patterns each corresponding to different known thicknesses. The thickness of the most similar simulated pattern is then taken as the thickness estimate. The tool was tested on [0 1 1] Si, [0 1 0] α-Ti and [0 1 1] α-Ti samples prepared using different techniques. Results of the presented approach were compared with thickness estimates based on analysis of CBED patterns in two beam conditions. The mean difference between these two methods was 4.1% for the FIB-prepared silicon samples, 5.2% for the electro-chemically polished titanium and 7.9% for Ar⁺ ion-polished titanium. The proposed techniques can also be employed in other established CBED analyses. Apart from the thickness estimation, it can potentially be used to quantify lattice deformation, structure factors, symmetry, defects or extinction distance. - Highlights: • Automated TEM sample thickness estimation using zone-axis CBED is presented. • Computer vision and artificial intelligence are employed in CBED processing. • This approach reduces operator effort, analysis time and increases repeatability. • Individual parts can be employed in other analyses of CBED/diffraction pattern.
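
    The matching step (comparing the experimental pattern against a thickness series of simulations) reduces, in its simplest form, to picking the simulated pattern with the highest normalized cross-correlation. The sketch below assumes patterns already registered to a common coordinate system; the disk detection and localization of the paper are not reproduced.

      import numpy as np

      def estimate_thickness(experimental, simulated, thicknesses):
          # Return the thickness whose simulated CBED pattern correlates
          # best with the experimental one (patterns pre-registered).
          e = (experimental - experimental.mean()) / experimental.std()
          best, best_score = None, -np.inf
          for t, sim in zip(thicknesses, simulated):
              s = (sim - sim.mean()) / sim.std()
              score = np.mean(e * s)            # correlation coefficient
              if score > best_score:
                  best, best_score = t, score
          return best, best_score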

  3. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    Science.gov (United States)

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  4. Time series regression-based pairs trading in the Korean equities market

    Science.gov (United States)

    Kim, Saejoon; Heo, Jun

    2017-07-01

    Pairs trading is an instance of statistical arbitrage that relies on heavy quantitative data analysis to profit by capitalising on low-risk trading opportunities provided by anomalies of related assets. A key element in pairs trading is the rule by which open and close trading triggers are defined. This paper investigates the use of time series regression to define this rule, which has previously been identified with fixed threshold-based approaches. Empirical results indicate that our approach may yield significantly increased excess returns compared to those obtained by previous approaches on large capitalisation stocks in the Korean equities market.
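
    A rolling time-series-regression trigger rule of the kind discussed can be sketched as follows: regress one asset on the other over a trailing window, convert the residual spread into a z-score, and open or close positions at z-score thresholds. The window length and thresholds are illustrative, not the paper's.

      import numpy as np

      def pair_signals(pa, pb, window=60, z_open=2.0, z_close=0.5):
          # +1 = long spread, -1 = short spread, 0 = flat.
          n = len(pa)
          signal = np.zeros(n)
          for t in range(window, n):
              a, b = pa[t - window:t], pb[t - window:t]
              beta, alpha = np.polyfit(b, a, 1)
              resid = a - (alpha + beta * b)
              z = (pa[t] - (alpha + beta * pb[t])) / resid.std()
              if abs(z) > z_open:
                  signal[t] = -np.sign(z)       # bet on spread mean reversion
              elif abs(z) < z_close:
                  signal[t] = 0                 # spread has converged: close
              else:
                  signal[t] = signal[t - 1]     # hold the open position
          return signal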

  5. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.

    Science.gov (United States)

    Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.

  6. A SPIRAL-BASED DOWNSCALING METHOD FOR GENERATING 30 M TIME SERIES IMAGE DATA

    Directory of Open Access Journals (Sweden)

    B. Liu

    2017-09-01

    The spatial detail and updating frequency of land cover data are important factors influencing land surface dynamic monitoring applications at high spatial resolution. However, the fragmented patches and seasonal variability of some land cover types (e.g. small crop fields, wetlands) make the generation of land cover data labor-intensive and difficult. Utilizing high spatial resolution multi-temporal image data is a possible solution. Unfortunately, the spatial and temporal resolution of available remote sensing data like the Landsat or MODIS datasets can hardly satisfy the minimum mapping unit and frequency of current land cover mapping/updating at the same time. The generation of high resolution time series may be a compromise to cover the shortage in the land cover updating process. One popular way is to downscale multi-temporal MODIS data with other high spatial resolution auxiliary data like Landsat. But the usual manner of downscaling pixels based on a window may lead to an underdetermined problem in heterogeneous areas, resulting in uncertainty for some high spatial resolution pixels. Therefore, the downscaled multi-temporal data can hardly reach the spatial resolution of Landsat data. A spiral-based method is introduced to downscale low spatial and high temporal resolution image data to high spatial and high temporal resolution image data. By searching for similar pixels around the adjacent region along a spiral, the pixel set is built up in the adjacent region pixel by pixel. Adopting the constructed pixel set largely prevents the underdetermined problem when solving the linear system. With the help of ordinary least squares, the method inverts the endmember values of the linear system. The high spatial resolution image is reconstructed on the basis of the high spatial resolution class map and the endmember values, band by band. Then, the high spatial resolution time series was formed with these

  7. Study of Railway Track Irregularity Standard Deviation Time Series Based on Data Mining and Linear Model

    Directory of Open Access Journals (Sweden)

    Jia Chaolong

    2013-01-01

    Good track geometry ensures the safe operation of railway passenger and freight services, and railway transportation plays an important role in Chinese economic and social development. This paper studies track irregularity standard deviation time series data and focuses on the characteristics and trend changes of track state by applying clustering analysis. A linear recursive model and a linear-ARMA model based on wavelet decomposition and reconstruction are proposed, and they offer support for the safe management of railway transportation.

  8. Item Anomaly Detection Based on Dynamic Partition for Time Series in Recommender Systems.

    Science.gov (United States)

    Gao, Min; Tian, Renli; Wen, Junhao; Xiong, Qingyu; Ling, Bin; Yang, Linda

    2015-01-01

    In recent years, recommender systems have become an effective method to process information overload. However, recommendation technology still suffers from many problems. One of the problems is shilling attacks: attackers inject spam user profiles to disturb the list of recommendation items. There are two characteristics of all types of shilling attacks: 1) Item abnormality: the rating of target items is always maximum or minimum; and 2) Attack promptness: it takes only a very short period of time to inject attack profiles. Some papers have proposed item anomaly detection methods based on these two characteristics, but their detection rate, false alarm rate, and universality need to be further improved. To solve these problems, this paper proposes an item anomaly detection method based on dynamic partitioning for time series. This method first dynamically partitions item-rating time series based on important points. Then, the chi-square distribution (χ²) is used to detect abnormal intervals. The experimental results on MovieLens 100K and 1M indicate that this approach has a high detection rate and a low false alarm rate and is stable toward different attack models and filler sizes.

  9. Low frequency of defective mismatch repair in a population-based series of upper urothelial carcinoma

    International Nuclear Information System (INIS)

    Ericson, Kajsa M; Isinger, Anna P; Isfoss, Björn L; Nilbert, Mef C

    2005-01-01

    Upper urothelial cancer (UUC), i.e. transitional cell carcinomas of the renal pelvis and the ureter, occur at an increased frequency in patients with hereditary nonpolyposis colorectal cancer (HNPCC). Defective mismatch repair (MMR) specifically characterizes HNPCC-associated tumors, but also occurs in subsets of some sporadic tumors, e.g. in gastrointestinal cancer and endometrial cancer. We assessed the contribution of defective MMR to the development of UUC in a population-based series from the southern Swedish Cancer Registry, through microsatellite instability (MSI) analysis and immunohistochemical evaluation of expression of the MMR proteins MLH1, PMS2, MSH2, and MSH6. A MSI-high phenotype was identified in 9/216 (4%) successfully analyzed patients and a MSI-low phenotype in 5/216 (2%). Loss of MMR protein immunostaining was found in 11/216 (5%) tumors, and affected most commonly MSH2 and MSH6. This population-based series indicates that somatic MMR inactivation is a minor pathway in the development of UUC, but tumors that display defective MMR are, based on the immunohistochemical expression pattern, likely to be associated with HNPCC

  10. Low frequency of defective mismatch repair in a population-based series of upper urothelial carcinoma

    Directory of Open Access Journals (Sweden)

    Isfoss Björn L

    2005-03-01

    Abstract Background Upper urothelial cancer (UUC), i.e. transitional cell carcinomas of the renal pelvis and the ureter, occur at an increased frequency in patients with hereditary nonpolyposis colorectal cancer (HNPCC). Defective mismatch repair (MMR) specifically characterizes HNPCC-associated tumors, but also occurs in subsets of some sporadic tumors, e.g. in gastrointestinal cancer and endometrial cancer. Methods We assessed the contribution of defective MMR to the development of UUC in a population-based series from the southern Swedish Cancer Registry, through microsatellite instability (MSI) analysis and immunohistochemical evaluation of expression of the MMR proteins MLH1, PMS2, MSH2, and MSH6. Results A MSI-high phenotype was identified in 9/216 (4%) successfully analyzed patients and a MSI-low phenotype in 5/216 (2%). Loss of MMR protein immunostaining was found in 11/216 (5%) tumors, and affected most commonly MSH2 and MSH6. Conclusion This population-based series indicates that somatic MMR inactivation is a minor pathway in the development of UUC, but tumors that display defective MMR are, based on the immunohistochemical expression pattern, likely to be associated with HNPCC.

  11. Low frequency of defective mismatch repair in a population-based series of upper urothelial carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Kajsa M; Isinger, Anna P [Departments of Oncology, University Hospital, Lund (Sweden); Isfoss, Björn L [Departments of Pathology, University Hospital, Lund (Sweden); Nilbert, Mef C [Departments of Oncology, University Hospital, Lund (Sweden)

    2005-01-01

    Upper urothelial cancer (UUC), i.e. transitional cell carcinomas of the renal pelvis and the ureter, occur at an increased frequency in patients with hereditary nonpolyposis colorectal cancer (HNPCC). Defective mismatch repair (MMR) specifically characterizes HNPCC-associated tumors, but also occurs in subsets of some sporadic tumors, e.g. in gastrointestinal cancer and endometrial cancer. We assessed the contribution of defective MMR to the development of UUC in a population-based series from the southern Swedish Cancer Registry, through microsatellite instability (MSI) analysis and immunohistochemical evaluation of expression of the MMR proteins MLH1, PMS2, MSH2, and MSH6. A MSI-high phenotype was identified in 9/216 (4%) successfully analyzed patients and a MSI-low phenotype in 5/216 (2%). Loss of MMR protein immunostaining was found in 11/216 (5%) tumors, and affected most commonly MSH2 and MSH6. This population-based series indicates that somatic MMR inactivation is a minor pathway in the development of UUC, but tumors that display defective MMR are, based on the immunohistochemical expression pattern, likely to be associated with HNPCC.

  12. A stochastic HMM-based forecasting model for fuzzy time series.

    Science.gov (United States)

    Li, Sheng-Tun; Cheng, Yi-Chung

    2010-10-01

    Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model by enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the nature of conjecture and randomness of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index, forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus the result statistically approximates the real mean of the target value being forecast.

  13. Item Anomaly Detection Based on Dynamic Partition for Time Series in Recommender Systems

    Science.gov (United States)

    Gao, Min; Tian, Renli; Wen, Junhao; Xiong, Qingyu; Ling, Bin; Yang, Linda

    2015-01-01

    In recent years, recommender systems have become an effective method to process information overload. However, recommendation technology still suffers from many problems. One of the problems is shilling attacks: attackers inject spam user profiles to disturb the list of recommendation items. There are two characteristics of all types of shilling attacks: 1) Item abnormality: the rating of target items is always maximum or minimum; and 2) Attack promptness: it takes only a very short period of time to inject attack profiles. Some papers have proposed item anomaly detection methods based on these two characteristics, but their detection rate, false alarm rate, and universality need to be further improved. To solve these problems, this paper proposes an item anomaly detection method based on dynamic partitioning for time series. This method first dynamically partitions item-rating time series based on important points. Then, the chi-square distribution (χ²) is used to detect abnormal intervals. The experimental results on MovieLens 100K and 1M indicate that this approach has a high detection rate and a low false alarm rate and is stable toward different attack models and filler sizes. PMID:26267477

  14. A Table-Based Random Sampling Simulation for Bioluminescence Tomography

    Directory of Open Access Journals (Sweden)

    Xiaomeng Zhang

    2006-01-01

    The Monte Carlo (MC) method is a popular simulation of photon propagation in turbid media, but its main problem is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple steps of scattering to a single-step process through random table querying, thus greatly reducing the computing complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast algorithm for the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. We also present a reconstruction approach to estimate the position of the fluorescent source, based on trial and error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
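
    The flavor of table-based sampling can be shown in a few lines: tabulate the inverse CDF of the step-length (or angle) distribution once, then draw each random sample by a cheap table lookup. The exponential free-path example and table size are assumptions of this sketch, not the TBRS tables themselves.

      import numpy as np

      def build_table(pdf, x, n_entries=4096):
          # Tabulate the inverse CDF so each draw becomes a table lookup.
          cdf = np.cumsum(pdf)
          cdf /= cdf[-1]
          u = (np.arange(n_entries) + 0.5) / n_entries
          return x[np.searchsorted(cdf, u)]

      # Example: exponential free paths with mu_t = 10 cm^-1 (hypothetical).
      x = np.linspace(0, 2, 20000)
      table = build_table(10 * np.exp(-10 * x), x)
      rng = np.random.default_rng(0)
      steps = table[rng.integers(0, len(table), size=100000)]
      print(steps.mean())                       # ~0.1, i.e. 1/mu_t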

  15. Comparing and Contrasting Traditional Membrane Bioreactor Models with Novel Ones Based on Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Parneet Paul

    2013-02-01

    The computer modelling and simulation of wastewater treatment plants and their specific technologies, such as membrane bioreactors (MBRs), are becoming increasingly useful to consultant engineers when designing, upgrading, retrofitting, operating and controlling these plants. This research uses traditional phenomenological mechanistic models based on MBR filtration and biochemical processes to measure the effectiveness of alternative and novel time series models based upon input–output system identification methods. Both model types are calibrated and validated using similar plant layouts and data sets derived for this purpose. Results show that although both approaches have their advantages, they also have specific disadvantages. In conclusion, the MBR plant designer and/or operator who wishes to use good quality, calibrated models to gain a better understanding of their process should carefully consider which model type to select based upon their initial modelling objectives. Each situation usually proves unique.

  16. Multifractal analysis of visibility graph-based Ito-related connectivity time series.

    Science.gov (United States)

    Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano

    2016-02-01

    In this study, we investigate the multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by Ito equations with multiplicative power-law noise. We show that the multifractality of the connectivity time series (i.e., the series of the number of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility graph procedure which, by connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive for detecting wide "depressions" in the input time series.
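
    The connectivity series studied here, i.e. the degree of each node in the natural visibility graph, can be computed with a direct O(n²) reference implementation (a sketch; faster algorithms exist):

      import numpy as np

      def visibility_degree_series(x):
          # x[i] and x[j] are linked if every sample between them lies
          # strictly below the straight line joining them.
          n = len(x)
          deg = np.zeros(n, dtype=int)
          for i in range(n):
              for j in range(i + 1, n):
                  k = np.arange(i + 1, j)
                  line = x[i] + (x[j] - x[i]) * (k - i) / (j - i)
                  if np.all(x[k] < line):
                      deg[i] += 1
                      deg[j] += 1
          return deg

      x = np.random.default_rng(0).normal(size=200)
      print(visibility_degree_series(x)[:10])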

  17. Educating for Active Citizenship: Service-Learning, School-Based Service and Youth Civic Engagement. Youth Helping America Series

    Science.gov (United States)

    Spring, Kimberly; Dietz, Nathan; Grimm, Robert, Jr.

    2006-01-01

    This brief is the second in the Youth Helping America Series, a series of reports based on data from the Youth Volunteering and Civic Engagement Survey, a national survey of 3,178 American youth between the ages of 12 and 18 that was conducted by the Corporation for National and Community Service in 2005 in collaboration with the U.S. Census…

  18. Estimation of plant sampling uncertainty: an example based on chemical analysis of moss samples.

    Science.gov (United States)

    Dołęgowska, Sabina

    2016-11-01

    In order to estimate the level of uncertainty arising from sampling, 54 samples (primary and duplicate) of the moss species Pleurozium schreberi (Brid.) Mitt. were collected within three forested areas (Wierna Rzeka, Piaski, Posłowice Range) in the Holy Cross Mountains (south-central Poland). During the fieldwork, each primary sample, composed of 8 to 10 increments (subsamples), was taken over an area of 10 m², whereas duplicate samples were collected in the same way at a distance of 1-2 m. Subsequently, all samples were triple-rinsed with deionized water, dried, milled, and digested (8 mL HNO₃ (1:1) + 1 mL 30% H₂O₂) in a closed microwave system Multiwave 3000. The prepared solutions were analyzed twice for Cu, Fe, Mn, and Zn using FAAS and GFAAS techniques. All datasets were checked for normality, and for the normally distributed elements (Cu from Piaski; Zn from Posłowice; Fe and Zn from Wierna Rzeka) the sampling uncertainty was computed with (i) classical ANOVA, (ii) classical RANOVA, (iii) modified RANOVA, and (iv) range statistics. For the remaining elements, the sampling uncertainty was calculated with traditional and/or modified RANOVA (if the amount of outliers did not exceed 10%) or classical ANOVA after Box-Cox transformation (if the amount of outliers exceeded 10%). The highest concentrations of all elements were found in moss samples from Piaski, whereas the sampling uncertainty calculated with the different statistical methods ranged from 4.1 to 22%.
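
    The duplicate-based estimate behind the range-statistics option can be sketched in a few lines: with paired primary/duplicate concentrations, the combined sampling-plus-analytical standard deviation is sqrt(sum(d_i²) / (2n)) for pair differences d_i. The concentrations below are invented placeholders, not the study's data.

      import numpy as np

      def duplicate_uncertainty(primary, duplicate):
          # Relative (sampling + analytical) uncertainty from paired
          # primary/duplicate measurements: s = sqrt(sum(d^2) / (2n)).
          primary, duplicate = np.asarray(primary), np.asarray(duplicate)
          d = primary - duplicate
          s = np.sqrt(np.sum(d ** 2) / (2 * len(d)))
          return 100 * s / np.mean(np.concatenate([primary, duplicate]))

      # Hypothetical Zn concentrations (mg/kg) in paired moss samples.
      prim = [38.1, 41.5, 36.9, 44.0, 39.7]
      dupl = [35.4, 43.2, 38.8, 41.1, 40.6]
      print(f"{duplicate_uncertainty(prim, dupl):.1f} % relative")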

  19. A New Methodology Based on Imbalanced Classification for Predicting Outliers in Electricity Demand Time Series

    Directory of Open Access Journals (Sweden)

    Francisco Javier Duque-Pintor

    2016-09-01

    The occurrence of outliers in real-world phenomena is quite common. If these anomalous data are not properly treated, unreliable models can be generated. Many approaches in the literature are focused on a posteriori detection of outliers; however, a new methodology to a priori predict the occurrence of such data is proposed here. Thus, the main goal of this work is to predict the occurrence of outliers in time series by using, for the first time, imbalanced classification techniques. In this sense, the problem of forecasting outlying data has been transformed into a binary classification problem, in which the positive class represents the occurrence of outliers. Given that the number of outliers is much lower than the number of common values, the resulting classification problem is imbalanced. To create training and test sets, robust statistical methods have been used to detect outliers in both sets. Once the outliers have been detected, the instances of the dataset are labeled accordingly: namely, if any of the samples composing the next instance are detected as an outlier, the label is set to one. As a case study, the methodology has been tested on electricity demand time series in the Spanish electricity market, in which most of the outliers were properly forecast.
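
    The transformation from outlier forecasting to imbalanced binary classification can be sketched as follows: label each instance by a robust (median/MAD) outlier rule on the target value, build lagged features, and train a class-weighted classifier. The demand series, lag count and thresholds below are toy assumptions, not the paper's setup.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def label_outliers(y, k=3.5):
          # Robust labels: 1 = outlier, 0 = normal (median/MAD rule).
          med = np.median(y)
          mad = 1.4826 * np.median(np.abs(y - med))
          return (np.abs(y - med) > k * mad).astype(int)

      def make_lagged(y, lags=24):
          # Lagged feature matrix: row t holds y[t..t+lags-1], target y[t+lags].
          X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])
          return X, y[lags:]

      rng = np.random.default_rng(0)
      demand = 100 + 10 * np.sin(np.arange(2000) * 2 * np.pi / 24) \
               + rng.normal(0, 2, 2000)
      demand[rng.choice(2000, 20, replace=False)] += 40   # injected outliers

      X, target = make_lagged(demand)
      labels = label_outliers(target)
      # class_weight='balanced' compensates for the rare positive class.
      clf = RandomForestClassifier(class_weight="balanced", random_state=0)
      clf.fit(X, labels)
      print(labels.mean(), clf.score(X, labels))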

  20. Wind Speed Prediction with Wavelet Time Series Based on Lorenz Disturbance

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2017-08-01

    Due to its sustainable and pollution-free characteristics, wind energy has been one of the fastest growing renewable energy sources. However, the intermittent and random fluctuation of wind speed presents many challenges for reliable wind power integration and the normal operation of wind farms. Accurate wind speed prediction is the key to ensuring the safe operation of the power system and to developing wind energy resources. Therefore, in this paper, combined with the atmospheric dynamical system, a wavelet time series wind speed prediction model based on Lorenz disturbance is proposed, and wind turbines of different climate types in Spain and China are used to simulate the disturbances of the Lorenz equations with different initial values. The prediction results show that the improved model can effectively correct the preliminary prediction of wind speed, improving the prediction accuracy. In a word, the research work in this paper will be helpful for arranging the electric power dispatching plan and ensuring the normal operation of wind farms.

  1. Tonal synchrony in mother-infant interaction based on harmonic and pentatonic series.

    Science.gov (United States)

    Van Puyvelde, Martine; Vanfleteren, Pol; Loots, Gerrit; Deschuyffeleer, Sara; Vinck, Bart; Jacquet, Wolfgang; Verhelst, Werner

    2010-12-01

    This study reports the occurrence of 'tonal synchrony' as a new dimension of early mother-infant interaction synchrony. The findings are based on a tonal and temporal analysis of vocal interactions between 15 mothers and their 3-month-old infants during 5 min of free play in a laboratory setting. In total, 558 vocal exchanges were identified and analysed, of which 84% reflected harmonic or pentatonic series. Another 10% of the exchanges contained absolute and/or relative pitch and/or interval imitations. The total durations of dyads being in tonal synchrony were normally distributed (M=3.71, SD=2.44). Vocalisations based on harmonic series appeared organised around the major triad, containing significantly more simple frequency ratios (octave, fifth and third) than complex ones (non-major-triad tones). Tonal synchrony and its characteristics are discussed in relation to infant-directed speech, communicative musicality, pre-reflective communication and its impact on the quality of early mother-infant interaction and the child's development.

  2. Leveraging Disturbance Observer Based Torque Control for Improved Impedance Rendering with Series Elastic Actuators

    Science.gov (United States)

    Mehling, Joshua S.; Holley, James; O'Malley, Marcia K.

    2015-01-01

    The fidelity with which series elastic actuators (SEAs) render desired impedances is important. Numerous approaches to SEA impedance control have been developed under the premise that high-precision actuator torque control is a prerequisite. Indeed, the design of an inner torque compensator has a significant impact on actuator impedance rendering. The disturbance observer (DOB) based torque control implemented in NASA's Valkyrie robot is considered here and a mathematical model of this torque control, cascaded with an outer impedance compensator, is constructed. While previous work has examined the impact a disturbance observer has on torque control performance, little has been done regarding DOBs and impedance rendering accuracy. Both simulation and a series of experiments are used to demonstrate the significant improvements possible in an SEA's ability to render desired dynamic behaviors when utilizing a DOB. Actuator transparency at low impedances is improved, closed loop hysteresis is reduced, and the actuator's dynamic response to both commands and interaction torques more faithfully matches that of the desired model. All of this is achieved by leveraging DOB based control rather than increasing compensator gains, thus making improved SEA impedance control easier to achieve in practice.

  3. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. The study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated, date-ordered dataset that serves as the research dataset. The proposed time-series forecasting model has three foci. First, this study applies five imputation methods to handle missing values rather than deleting them directly. Second, the key variables are identified via factor analysis, and the unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level and compares it with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.

  5. On-line diagnostic techniques for air-operated control valves based on time series analysis

    International Nuclear Information System (INIS)

    Ito, Kenji; Matsuoka, Yoshinori; Minamikawa, Shigeru; Komatsu, Yasuki; Satoh, Takeshi.

    1996-01-01

    The objective of this research is to study the feasibility of applying on-line diagnostic techniques based on time series analysis to air-operated control valves, which are used in large numbers in PWR plants. In general, such techniques can detect anomalies caused by incipient failures that are difficult to detect through conventional surveillance of directly measured process parameters. However, the effectiveness of these techniques depends on the system being diagnosed. The difficulties in applying diagnostic techniques to air-operated control valves seem to stem from the reduced sensitivity of their response as compared with hydraulic control systems, as well as from the need to identify anomalies in low-level signals that fluctuate only slightly but continuously. In this research, simulation tests were performed by setting various failure modes for a test valve with the same specifications as a valve actually used in the plants. Actual control signals recorded from an operating plant were then used as input signals for the simulation. The results of the tests confirmed the feasibility of applying on-line diagnostic techniques based on time series analysis to air-operated control valves. (author)
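
    The record does not give the model details, but a common time-series baseline for this kind of diagnosis is residual monitoring under an autoregressive model fitted to healthy-valve signals; the sketch below is such a stand-in, with invented signals and an arbitrary alarm threshold.

```python
import numpy as np

def ar_design(x, p):
    """Design matrix and target for a least-squares AR(p) fit."""
    X = np.column_stack([np.ones(len(x) - p)] +
                        [x[p - i - 1:len(x) - i - 1] for i in range(p)])
    return X, x[p:]

rng = np.random.default_rng(0)
healthy = np.sin(np.linspace(0.0, 40.0, 1000)) + 0.05 * rng.normal(size=1000)

# Fit an AR(5) reference model on a healthy-valve signal.
X, y = ar_design(healthy, 5)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma0 = (y - X @ coef).std()

# Monitor new data with the same model: a growing residual spread
# indicates behaviour the healthy model cannot explain.
test = healthy + 0.2 * rng.normal(size=1000)   # stand-in for a degraded valve
Xt, yt = ar_design(test, 5)
ratio = (yt - Xt @ coef).std() / sigma0
print("anomalous" if ratio > 2.0 else "normal", f"(ratio = {ratio:.2f})")
```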

  6. The Toggle Local Planner for sampling-based motion planning

    KAUST Repository

    Denny, Jory

    2012-05-01

    Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5. An important primitive of these methods is the local planner, which is used for validation of simple paths between two configurations. The most common is the straight-line local planner, which interpolates along the straight line between the two configurations. In this paper, we introduce a new local planner, the Toggle Local Planner (Toggle LP), which extends local planning to a two-dimensional subspace of the overall planning space. If no path exists between the two configurations in the subspace, then Toggle LP is guaranteed to correctly return false. Intuitively, more connections can be found by Toggle LP than by the straight-line planner, resulting in better-connected roadmaps. As shown in our results, this is the case; additionally, the extra cost, in terms of time or storage, for Toggle LP is minimal. Moreover, our experimental analysis of the planner shows its benefit for a wide array of robots, with DOF as high as 70.

  7. Students creative thinking skills in solving two dimensional arithmetic series through research-based learning

    Science.gov (United States)

    Tohir, M.; Abidin, Z.; Dafik; Hobri

    2018-04-01

    Arithmetic is one of the topics in mathematics that deals with logical and detailed processes for generalizing formulas. Creativity and flexibility are needed in generalizing the formula of an arithmetic series. This research aimed at analyzing students' creative thinking skills in generalizing arithmetic series. The triangulation method and research-based learning were used in this research. The subjects were students of the Master Program of Mathematics Education in the Faculty of Teacher Training and Education at Jember University. The data were collected by giving assignments to the students: an open problem-solving task and a documentation study, in which the students arranged generalization patterns based on the formula of a function dependent on i and on a function dependent on i and j. The students then completed the next problem-solving task, constructing arithmetic generalization patterns based on the formula of a function dependent on i and i + n and on the sum formula of the functions dependent on i and j. The data analysis technique used in this study was the Miles and Huberman analysis model. Based on the results of the data analysis on task 1, the levels of students' creative thinking skill were classified as follows: 22.22% of the students were categorized as "not creative"; 38.89% as "less creative"; 22.22% as "sufficiently creative"; and 16.67% as "creative". By contrast, the results of the data analysis on task 2 showed that 22.22% of the students were categorized as "sufficiently creative", 44.44% as "creative" and 33.33% as "very creative". These results can serve as a basis for teaching references and for actualizing a better teaching model in order to increase students' creative thinking skills.

  8. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    Science.gov (United States)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied in different disciplines. In climatology, for example, it is associated with the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on the Hidden Markov Model (HMM) and the Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial conditions, a Genetic Algorithm was developed, characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests the model with K+1 states (where K is the number of states of the best model) when its likelihood is close to that of the K-state model. Finally, an evaluation of the GAMM performance, applied as a break-detection method in the homogenization of climate time series, is shown. References: G. Celeux and J.B. Durand, Comput Stat, 2008; A. Kehagias, Stoch Envir Res, 2004; P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev, 1981.
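
    A minimal sketch of the HMM segmentation core, using the third-party hmmlearn package: Baum-Welch fitting plus a Viterbi state sequence on a series with one artificial change point. The genetic-algorithm initialization and the left-to-right constraint of GAMM are omitted here for brevity.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # third-party: pip install hmmlearn

# Synthetic series with a mean shift (an artificial change point).
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(2.0, 1.0, 300)])
X = x.reshape(-1, 1)

# Baum-Welch fit with random initialization; the paper's genetic
# algorithm replaces this step to avoid poor local optima.
model = GaussianHMM(n_components=2, n_iter=200, random_state=0).fit(X)

states = model.predict(X)              # Viterbi best-state sequence
change_points = np.flatnonzero(np.diff(states)) + 1
print(change_points[:5])               # detected segment boundaries
```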

  9. Predicting Drug-Target Interactions Based on Small Positive Samples.

    Science.gov (United States)

    Hu, Pengwei; Chan, Keith C C; Hu, Yanxing

    2018-01-01

    evaluation of ODT shows that it can be potentially useful. It confirms that predicting potential or missing DTIs based on the known interactions is a promising direction to solve problems related to the use of uncertain and unreliable negative samples and those related to the great demand in computational resources.

  10. Rainfall Prediction of Indian Peninsula: Comparison of Time Series Based Approach and Predictor Based Approach using Machine Learning Techniques

    Science.gov (United States)

    Dash, Y.; Mishra, S. K.; Panigrahi, B. K.

    2017-12-01

    Prediction of the northeast/post-monsoon rainfall, which occurs during October, November and December (OND) over the Indian peninsula, is a challenging task due to the dynamic nature of the uncertain, chaotic climate. It is imperative to elucidate this issue by examining the performance of different machine learning (ML) approaches. The prime objective of this research is to compare (a) statistical prediction using historical rainfall observations and global atmosphere-ocean predictors like Sea Surface Temperature (SST) and Sea Level Pressure (SLP), and (b) empirical prediction based on a time series analysis of past rainfall data without using any other predictors. Initially, ML techniques were applied to SST and SLP data (1948-2014) obtained from the NCEP/NCAR reanalysis monthly means provided by the NOAA ESRL PSD. Later, this study investigated the applicability of ML methods using the OND rainfall time series for 1948-2014 and forecast up to 2018. The predicted values of the aforementioned methods were verified using observed time series data collected from the Indian Institute of Tropical Meteorology, and the results revealed good performance of the ML algorithms with minimal error scores. Thus, it is found that both statistical and empirical methods are useful for long-range climatic projections.

  11. Computational design of new molecular scaffolds for medicinal chemistry, part II: generalization of analog series-based scaffolds

    Science.gov (United States)

    Dimova, Dilyana; Stumpfe, Dagmar; Bajorath, Jürgen

    2018-01-01

    Aim: Extending and generalizing the computational concept of analog series-based (ASB) scaffolds. Materials & methods: Methodological modifications were introduced to further increase the coverage of analog series (ASs) and compounds by ASB scaffolds. From bioactive compounds, ASs were systematically extracted and second-generation ASB scaffolds isolated. Results: More than 20,000 second-generation ASB scaffolds with single or multiple substitution sites were extracted from active compounds, achieving more than 90% coverage of ASs. Conclusion: Generalization of the ASB scaffold approach has yielded a large knowledge base of scaffold-capturing compound series and target information. PMID:29379641

  12. Hybrid pregnant reference phantom series based on adult female ICRP reference phantom

    Science.gov (United States)

    Rafat-Motavalli, Laleh; Miri-Hakimabad, Hashem; Hoseinian-Azghadi, Elie

    2018-03-01

    This paper presents boundary representation (BREP) models of a pregnant female and her fetus at the end of each trimester. The International Commission on Radiological Protection (ICRP) female reference voxel phantom was used as a base template in the development of the pregnant hybrid phantom series. The differences in shape and location of the maternal organs displaced by the enlarging uterus were also taken into account. CT and MR images of fetus specimens and pregnant patients of various gestational ages were used to replace the maternal abdominal-pelvic organs of the template phantom and to insert the fetus inside the gravid uterus. Each fetal model contains 21 different organs and tissues. The skeletal model of the fetus also includes age-dependent cartilaginous and ossified skeletal components. The replaced maternal organ models were converted to NURBS surfaces and then modified to conform to the reference values of ICRP Publication 89. The particular feature of the current series, compared with previously developed pregnant phantoms, is that it is constructed on the basis of the ICRP reference phantom. Since the replaced maternal organ models are NURBS surfaces, they can readily be converted to high-quality polygon-mesh phantoms.

  13. Study on Apparent Kinetic Prediction Model of the Smelting Reduction Based on the Time-Series

    Directory of Open Access Journals (Sweden)

    Guo-feng Fan

    2012-01-01

    A series of direct smelting reduction experiments was carried out on high-phosphorus iron ores of different basicities using a thermogravimetric analyzer, from which derivative thermogravimetric (DTG) data were obtained. The one-step-ahead local weighted linear (LWL) method, one of the most suitable approaches for predicting chaotic time series with a focus on the errors, is used to predict the DTG. In parallel, empirical mode decomposition-autoregressive (EMD-AR) modelling, a data-mining technique in signal processing, is also used to predict the DTG. The results show that (1) EMD-AR(4) is the most appropriate model and its error is smaller than that of the former; (2) the root mean square error (RMSE) decreased by about two-thirds; and (3) the standardized root mean square error (NMSE) decreased by an order of magnitude. Finally, the EMD-AR method is improved by golden section weighting, making its error smaller still. The improved EMD-AR model is therefore a promising alternative for predicting the apparent reaction rate (DTG), and the analytical results provide an important reference in the field of industrial control.

  14. Advanced data extraction infrastructure: Web based system for management of time series data

    Energy Technology Data Exchange (ETDEWEB)

    Chilingaryan, S; Beglarian, A [Forschungszentrum Karlsruhe, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Kopmann, A; Voecking, S, E-mail: Suren.Chilingaryan@kit.edu [University of Muenster, Institut fuer Kernphysik, Wilhelm-Klemm-Strasse 9, 48149 Münster (Germany)]

    2010-04-01

    During the operation of high energy physics experiments, a large amount of slow control data is recorded. It is necessary to examine all collected data, checking the integrity and validity of the measurements. With the growing maturity of AJAX technologies, it has become possible to construct sophisticated interfaces using web technologies only. Our solution for handling time series, generally slow control data, has a modular architecture: a backend system for data analysis and preparation, a web service interface for data access, and a fast AJAX web display. In order to provide fast interactive access, the time series are aggregated over time slices of a few predefined lengths. The aggregated values are stored in a temporary caching database and then used to create generalizing data plots. These plots may include indications of data quality and are generated within a few hundred milliseconds, even when very high data rates are involved. The extensible export subsystem provides data in multiple formats including CSV, Excel, ROOT, and TDMS. The search engine can be used to find periods of time where the indications of selected sensors fall into specified ranges; utilization of the caching database allows most of such lookups to be performed within a second. Based on this functionality, a web interface facilitating fast (Google-maps-style) navigation through the data has been implemented. The solution is currently used by several slow control systems at the Test Facility for Fusion Magnets (TOSKA) and the Karlsruhe Tritium Neutrino experiment (KATRIN).

  15. Permutation entropy based time series analysis: Equalities in the input signal can lead to false conclusions

    Energy Technology Data Exchange (ETDEWEB)

    Zunino, Luciano, E-mail: lucianoz@ciop.unlp.edu.ar [Centro de Investigaciones Ópticas (CONICET La Plata – CIC), C.C. 3, 1897 Gonnet (Argentina); Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina); Olivares, Felipe, E-mail: olivaresfe@gmail.com [Instituto de Física, Pontificia Universidad Católica de Valparaíso (PUCV), 23-40025 Valparaíso (Chile); Scholkmann, Felix, E-mail: Felix.Scholkmann@gmail.com [Research Office for Complex Physical and Biological Systems (ROCoS), Mutschellenstr. 179, 8038 Zurich (Switzerland); Biomedical Optics Research Laboratory, Department of Neonatology, University Hospital Zurich, University of Zurich, 8091 Zurich (Switzerland); Rosso, Osvaldo A., E-mail: oarosso@gmail.com [Instituto de Física, Universidade Federal de Alagoas (UFAL), BR 104 Norte km 97, 57072-970, Maceió, Alagoas (Brazil); Instituto Tecnológico de Buenos Aires (ITBA) and CONICET, C1106ACD, Av. Eduardo Madero 399, Ciudad Autónoma de Buenos Aires (Argentina); Complex Systems Group, Facultad de Ingeniería y Ciencias Aplicadas, Universidad de los Andes, Av. Mons. Álvaro del Portillo 12.455, Las Condes, Santiago (Chile)

    2017-06-15

    A symbolic encoding scheme, based on the ordinal relation between the amplitude of neighboring values of a given data sequence, should be implemented before estimating the permutation entropy. Consequently, equalities in the analyzed signal, i.e. repeated equal values, deserve special attention and treatment. In this work, we carefully study the effect that the presence of equalities has on permutation entropy estimated values when these ties are symbolized, as it is commonly done, according to their order of appearance. On the one hand, the analysis of computer-generated time series is initially developed to understand the incidence of repeated values on permutation entropy estimations in controlled scenarios. The presence of temporal correlations is erroneously concluded when true pseudorandom time series with low amplitude resolutions are considered. On the other hand, the analysis of real-world data is included to illustrate how the presence of a significant number of equal values can give rise to false conclusions regarding the underlying temporal structures in practical contexts. - Highlights: • Impact of repeated values in a signal when estimating permutation entropy is studied. • Numerical and experimental tests are included for characterizing this limitation. • Non-negligible temporal correlations can be spuriously concluded by repeated values. • Data digitized with low amplitude resolutions could be especially affected. • Analysis with shuffled realizations can help to overcome this limitation.
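
    The symbolization pitfall discussed above is easy to reproduce: the sketch below computes a normalized permutation entropy in which ties are ranked by their order of appearance (a stable sort), and shows how rounding a pseudorandom series to a low amplitude resolution spuriously lowers the estimate.

```python
import numpy as np
from math import factorial, log
from collections import Counter

def permutation_entropy(x, d=3, tau=1):
    """Normalized permutation entropy of order d and delay tau.

    Ties are symbolized according to their order of appearance
    (stable sort) -- the convention whose side effects are studied above.
    """
    patterns = Counter()
    for i in range(len(x) - (d - 1) * tau):
        window = x[i:i + (d - 1) * tau + 1:tau]
        patterns[tuple(np.argsort(window, kind="stable"))] += 1
    total = sum(patterns.values())
    H = -sum((c / total) * log(c / total) for c in patterns.values())
    return H / log(factorial(d))

rng = np.random.default_rng(0)
x = rng.random(100_000)
print(permutation_entropy(x))               # close to 1 for white noise
print(permutation_entropy(np.round(x, 1)))  # ties mimic temporal structure
```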

  16. Research on PM2.5 time series characteristics based on data mining technology

    Science.gov (United States)

    Zhao, Lifang; Jia, Jin

    2018-02-01

    With the development of data mining technology and the establishment of environmental air quality databases, it is necessary to discover potential correlations and rules by mining the massive amount of environmental air quality information and analyzing the air pollution process. In this paper, we present a sequential pattern mining method based on air quality data and pattern association technology to analyze the PM2.5 time series characteristics. Utilizing real-time monitoring data of urban air quality in China, the time series rules and variation properties of PM2.5 under different pollution levels are extracted and analyzed. The analysis results show that the time sequence features of the PM2.5 concentration are directly affected by the alteration of the pollution degree. The longest time that PM2.5 remained stable is about 24 hours. As the pollution becomes more severe, the instability time and step-ascending time gradually change from 12-24 hours to 3 hours. The presented method is helpful for controlling and forecasting air quality while saving measurement costs, which is of great significance for government regulation and public prevention of air pollution.

  18. Modelling tourism demand in Madeira since 1946: an historical overview based on a time series approach

    Directory of Open Access Journals (Sweden)

    António Manuel Martins de Almeida

    2016-06-01

    Tourism is the leading economic sector on most islands, and for that reason market trends are closely monitored due to the large impact of relatively minor changes in demand patterns. An interesting line of research regarding the analysis of market trends concerns the examination of time series to obtain an historical overview of the data patterns. The modelling of demand patterns is obviously dependent on data availability, and the measurement of changes in demand patterns is quite often focused on a few decades. In this paper, we use long-term time-series data to analyse the evolution of the main markets in Madeira, by country of origin, in order to re-examine the Butler life-cycle model, based on data available from 1946 onwards. This study is an opportunity to document the historical development of the industry in Madeira and to introduce a discussion about the rejuvenation of a mature destination. Tourism development in Madeira experienced rapid growth until the late 1990s, making it one of the leading destinations in the European context. However, annual growth rates are no longer within acceptable ranges, which leads policy-makers and experts to recommend a thorough assessment of the industry's prospects.

  19. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    To address the imbalance of resources and workloads at data centers, as well as the overhead and high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy based on a cloud-model time series workload prediction algorithm. By setting upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by creating a workload time series using the cloud model, and stipulating a general VM migration criterion called workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host for carrying out the migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, promoting improved utilization of resources in the entire data center.
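
    A toy version of the workload-aware migration (WAM) criterion described above; the bounds, the one-step forecast values, and the VM selection rule are hypothetical stand-ins for the paper's cloud-model predictor and its exact policy.

```python
# Hypothetical upper/lower utilization bounds for host machines.
UPPER, LOWER = 0.80, 0.30

def plan_migration(forecast, vm_loads):
    """forecast: host -> predicted utilization; vm_loads: host -> {vm: load}."""
    overloaded = [h for h, u in forecast.items() if u > UPPER]
    underloaded = [h for h, u in forecast.items() if u < LOWER]
    if not overloaded or not underloaded:
        return None                                  # no migration needed
    src = max(overloaded, key=forecast.get)          # hottest source host
    dst = min(underloaded, key=forecast.get)         # coolest destination
    vm = max(vm_loads[src], key=vm_loads[src].get)   # move the heaviest VM
    return src, dst, vm

print(plan_migration({"h1": 0.92, "h2": 0.15, "h3": 0.55},
                     {"h1": {"vm1": 0.25, "vm2": 0.40}, "h2": {}, "h3": {}}))
# -> ('h1', 'h2', 'vm2')
```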

  20. Output Information Based Fault-Tolerant Iterative Learning Control for Dual-Rate Sampling Process with Disturbances and Output Delay

    Directory of Open Access Journals (Sweden)

    Hongfeng Tao

    2018-01-01

    For a class of single-input single-output (SISO) dual-rate sampling processes with disturbances and output delay, this paper presents a robust fault-tolerant iterative learning control algorithm based on output information. First, the dual-rate sampling process with output delay is transformed, by using lifting technology, into a discrete state-space model at the slow sampling rate without time delay; then an output-information-based fault-tolerant iterative learning control scheme is designed and the control process is turned into an equivalent two-dimensional (2D) repetitive process. Moreover, based on repetitive process stability theory, sufficient conditions for the stability of the system and a design method for the robust controller are given in terms of the linear matrix inequality (LMI) technique. Finally, flow control simulations of two flow tanks in series demonstrate the feasibility and effectiveness of the proposed method.

  1. ²³⁸U-series radionuclides in Finnish groundwater-based drinking water and effective doses

    International Nuclear Information System (INIS)

    Vesterbacka, P.

    2005-09-01

    The thesis deals with the occurrence of ²³⁸U-series radionuclides and particle-bound ²¹⁰Pb and ²¹⁰Po in Finnish groundwater-based drinking water, the methods used for removing ²³⁴U, ²³⁸U, ²¹⁰Pb and ²¹⁰Po, and the annual effective doses caused by ²³⁸U-series radionuclides in drinking water. In order to reduce radiation exposure and avoid high doses, it is important to examine the activity levels of natural radionuclides in groundwater. In this work, the activity concentrations of radon (²²²Rn), radium (²²⁶Ra), uranium (²³⁸U and ²³⁴U), lead (²¹⁰Pb) and polonium (²¹⁰Po) were determined from 472 private wells, selected randomly from across Finland. On the basis of the results, the activity concentrations in groundwater and the radiation exposure from drinking water of people living outside the public water supply in Finland were specified. The efficiency of ²³⁸U, ²³⁴U, ²¹⁰Pb and ²¹⁰Po removal from drinking water was examined at ten private homes. In order to obtain accurate results and correct estimates of effective doses, attention was paid to the sampling of ²²²Rn and ²¹⁰Pb, and to the determination of ²¹⁰Pb. The results revealed that the median activity concentrations of natural radionuclides were as much as ten times higher in drilled wells than in wells dug in soil. The average activity concentration of ²²²Rn in drilled wells was 460 Bq/l and in dug wells 50 Bq/l. The highest activity concentrations were found in southern Finland, although occasional high activity concentrations were found all over the country. The average activity concentrations of ²³⁴U and ²³⁸U in drilled wells were 0.35 and 0.26 Bq/l and in dug wells 0.020 and 0.015 Bq/l, respectively. The spatial distribution of ²³⁴U, ²³⁸U, ²¹⁰Pb and ²¹⁰Po was essentially similar to that of ²²²Rn. In contrast to the other natural radionuclides, the highest ²²⁶Ra activity concentrations were found in coastal areas, since drilled-well water near the sea has a higher salinity.

  2. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia.

    Science.gov (United States)

    Hernandez-Valladares, Maria; Aasebø, Elise; Selheim, Frode; Berven, Frode S; Bruserud, Øystein

    2016-08-22

    Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  4. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time base and compressive sampling. In particular, the high-resolution time base is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on the compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time base, thus significantly improving the inherent ADC sample rate. Several tests were carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with the ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
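
    The reconstruction step can be sketched as follows: a signal that is sparse in the DCT basis is observed at random instants from a dense time grid and recovered with orthogonal matching pursuit. The basis, the sparsity level, and the solver are illustrative choices; the paper does not prescribe them.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

N, M = 512, 64                        # dense grid length, acquired samples
rng = np.random.default_rng(0)

# Dictionary whose columns are DCT atoms on the dense grid.
Psi = idct(np.eye(N), axis=0, norm="ortho")

# Test signal that is exactly 2-sparse in this basis.
coefs = np.zeros(N)
coefs[[13, 40]] = [1.0, 0.6]
x = Psi @ coefs

# Random sampling instants provided by the high-resolution time base.
idx = np.sort(rng.choice(N, M, replace=False))
y = x[idx]

# Sparse recovery from the row-subsampled dictionary.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4,
                                fit_intercept=False).fit(Psi[idx], y)
x_hat = Psi @ omp.coef_
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```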

  5. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting

    Directory of Open Access Journals (Sweden)

    Miquel L. Alomar

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement the different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
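
    For readers unfamiliar with RC, a minimal software echo state network (with none of the paper's stochastic/FPGA arithmetic) shows why training is simple: only the linear readout is fitted. The sizes, scalings, and toy series below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 200

# Fixed random input and reservoir weights; rescale the reservoir to a
# spectral radius below 1 (the echo state property).
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    states, s = np.zeros((len(u), n_res)), np.zeros(n_res)
    for t, ut in enumerate(u):
        s = np.tanh(W_in * ut + W @ s)
        states[t] = s
    return states

# One-step-ahead forecasting of a noisy toy series; only the linear
# readout W_out is trained, via ridge regression.
u = np.sin(0.1 * np.arange(2000)) + 0.1 * rng.normal(size=2000)
S = run_reservoir(u[:-1])
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ u[1:])
print("train MSE:", np.mean((S @ W_out - u[1:]) ** 2))
```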

  6. Autoregressive-model-based missing value estimation for DNA microarray time series data.

    Science.gov (United States)

    Choong, Miew Keen; Charbit, Maurice; Yan, Hong

    2009-01-01

    Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
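
    A simplified stand-in for the AR-based idea (not the paper's ARLSimpute, which additionally exploits local similarity across genes): initialize missing entries by interpolation, then alternate AR(p) least-squares fits and re-prediction of the missing positions.

```python
import numpy as np

def ar_impute(x, p=3, n_iter=10):
    """Iterative AR(p) imputation of NaN entries in a 1-D series."""
    x = np.asarray(x, dtype=float).copy()
    miss = np.isnan(x)
    idx = np.arange(len(x))
    # Initial guess: linear interpolation over the observed values.
    x[miss] = np.interp(idx[miss], idx[~miss], x[~miss])
    for _ in range(n_iter):
        # Least-squares AR(p) fit on the current series.
        X = np.column_stack([np.ones(len(x) - p)] +
                            [x[p - i - 1:len(x) - i - 1] for i in range(p)])
        coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        # Re-predict only the originally missing positions.
        pred = X @ coef
        x[p:][miss[p:]] = pred[miss[p:]]
    return x

series = np.array([1.0, 1.2, np.nan, 1.5, 1.7, np.nan, 2.0, 2.2])
print(ar_impute(series))
```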

  7. Base catalysed isomerisation of aldoses of the arabino and lyxo series in the presence of aluminate.

    Science.gov (United States)

    Ekeberg, Dag; Morgenlie, Svein; Stenstrøm, Yngve

    2002-04-30

    Base-catalysed isomerisation of aldoses of the arabino and lyxo series in aluminate solution has been investigated. L-Arabinose and D-galactose give L-erythro-2-pentulose (L-ribulose) and D-lyxo-2-hexulose (D-tagatose), respectively, in good yields, whereas lower reactivity is observed for 6-deoxy-D-galactose (D-fucose). D-Lyxose, D-mannose and 6-deoxy-L-mannose (L-rhamnose) give mixtures of ketoses and C-2 epimeric aldoses, and small amounts of the 3-epimers of the ketoses are also formed. 6-Deoxy-L-arabino-2-hexulose (6-deoxy-L-fructose) and 6-deoxy-L-glucose (L-quinovose) were formed in low yields from 6-deoxy-L-mannose and were isolated as their O-isopropylidene derivatives. Explanations for the differences in reactivity and the course of the reaction are suggested on the basis of steric effects.

  8. Permutation entropy analysis of financial time series based on Hill's diversity number

    Science.gov (United States)

    Zhang, Yali; Shang, Pengjian

    2017-12-01

    In this paper, the permutation entropy based on Hill's diversity number (N_{n,r}) is introduced as a new way to assess the complexity of a complex dynamical system such as a stock market. We test the performance of this method with simulated data. Results show that N_{n,r} with appropriate parameters is more sensitive to changes in the system and describes the trends of complex systems clearly. In addition, we study stock closing price series comprising six indices: three US stock indices and three Chinese stock indices over different periods. N_{n,r} can quantify the changes in complexity of stock market data; moreover, it yields richer information, including some properties of the differences between the US and Chinese stock indices.

  9. Volterra series based predistortion for broadband RF power amplifiers with memory effects

    Institute of Scientific and Technical Information of China (English)

    Jin Zhe; Song Zhihuan; He Jiaming

    2008-01-01

    RF power amplifiers (PAs) are usually treated as memoryless devices in most existing predistortion techniques. However, in broadband communication systems such as WCDMA, PA memory effects are significant, and memoryless predistortion cannot linearize the PAs effectively. After analyzing the PA memory effects, a novel predistortion method based on a simplified Volterra series is proposed to linearize broadband RF PAs with memory effects. The indirect learning architecture is adopted to design the predistortion scheme, and the recursive least squares algorithm with a forgetting factor is applied to identify the parameters of the predistorter. Simulation results show that the proposed predistortion method can effectively compensate for the nonlinear distortion and memory effects of broadband RF PAs.
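
    The memory-polynomial model below is a common simplification of the Volterra series; batch least squares stands in for the paper's RLS-with-forgetting identification, the toy PA model is invented, and signals are kept real-valued for brevity.

```python
import numpy as np

def mp_basis(x, K=3, Q=2):
    """Memory-polynomial regressors x(n-q)*|x(n-q)|**(k-1), a pruned Volterra series."""
    cols = []
    for q in range(Q + 1):
        xq = np.roll(x, q)
        xq[:q] = 0.0                              # zero the wrapped-around samples
        for k in range(1, K + 1):
            cols.append(xq * np.abs(xq) ** (k - 1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.3, 5000)                    # PA input (real baseband)
pa_out = x - 0.1 * x**3 + 0.05 * np.roll(x, 1)    # invented PA with mild memory

# Indirect learning: fit a postdistorter that maps the PA output back to
# its input, then copy it in front of the PA as the predistorter.
A = mp_basis(pa_out)
w, *_ = np.linalg.lstsq(A, x, rcond=None)
x_pd = mp_basis(x) @ w                            # predistorted drive signal
```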

  10. Taxation, regulation, and addiction: a demand function for cigarettes based on time-series evidence.

    Science.gov (United States)

    Keeler, T E; Hu, T W; Barnett, P G; Manning, W G

    1993-04-01

    This work analyzes the effects of prices, taxes, income, and anti-smoking regulations on the consumption of cigarettes in California (a 25-cent-per-pack state tax increase in 1989 enhances the usefulness of this exercise). Analysis is based on monthly time-series data for 1980 through 1990. Results show a price elasticity of demand for cigarettes in the short run of -0.3 to -0.5 at mean data values, and -0.5 to -0.6 in the long run. We find at least some support for two further hypotheses: that antismoking regulations reduce cigarette consumption, and that consumers behave consistently with the model of rational addiction.

  11. New Homologues Series of Heterocyclic Schiff Base Ester: Synthesis and Characterization

    Directory of Open Access Journals (Sweden)

    Yee-Ting Chong

    2016-01-01

    A homologous series of liquid crystals bearing a heterocyclic thiophene Schiff base ester with an alkanoyloxy chain (CH3(CH2)nCOO–, where n = 4, 6, 8, 10, 12, 14, 16) was successfully synthesized through the modification of some reported methods. The compounds were isolated and structurally characterized through spectroscopic techniques such as FTIR, 1H and 13C NMR, and elemental analysis. Textural observation was carried out using a polarizing optical microscope (POM) over heating and cooling cycles. It was found that all synthesized compounds (3a-g) exhibited an enantiotropic nematic phase upon the heating and cooling cycles, with high thermal stability. Moreover, a characteristic bar transition texture was observed for compounds 3f and 3g, which showed a nematic-to-smectic C phase transition. This was further confirmed by obtaining the relative phase transition temperatures using differential scanning calorimetry (DSC).

  12. Accuracy of MFCC-Based Speaker Recognition in Series 60 Device

    Directory of Open Access Journals (Sweden)

    Pasi Fränti

    2005-10-01

    A fixed-point implementation of speaker recognition based on MFCC signal processing is considered. We analyze the numerical error of the MFCC and its effect on the recognition accuracy. Techniques to reduce the information loss in a converted fixed-point implementation are introduced. We increase the signal processing accuracy by adjusting the ratio of the representation accuracy of the operators and the signal. The signal processing error is found to be more important to the speaker recognition accuracy than the error in the classification algorithm. The results are verified by applying the alternative techniques to speech data. We also discuss the specific programming requirements set by Symbian and Series 60.

  13. A new wind speed forecasting strategy based on the chaotic time series modelling technique and the Apriori algorithm

    International Nuclear Information System (INIS)

    Guo, Zhenhai; Chi, Dezhong; Wu, Jie; Zhang, Wenyu

    2014-01-01

    Highlights: • The impact of meteorological factors on wind speed forecasting is taken into account. • Forecasted wind speed results are corrected by the association rules. • Forecasting accuracy is improved by the new wind speed forecasting strategy. • The robustness of the proposed model is validated with data sampled from different sites. - Abstract: Wind energy has been the fastest growing renewable energy resource in recent years. Because of the intermittent nature of wind, wind power is a fluctuating source of electrical energy. Therefore, to minimize the impact of wind power on the electrical grid, accurate and reliable wind power forecasting is mandatory. In this paper, a new wind speed forecasting approach based on the chaotic time series modelling technique and the Apriori algorithm has been developed. The new approach consists of four procedures: (I) clustering by using the k-means clustering approach; (II) employing the Apriori algorithm to discover the association rules; (III) forecasting the wind speed according to the chaotic time series forecasting model; and (IV) correcting the forecasted wind speed data using the association rules discovered previously. This procedure has been verified by 31-day-ahead daily average wind speed forecasting case studies, which employed the wind speed and other meteorological data collected from four meteorological stations located in the Hexi Corridor area of China. The results of these case studies reveal that the chaotic forecasting model can efficiently improve the accuracy of the wind speed forecasting, and the Apriori algorithm can effectively discover the association rules between the wind speed and other meteorological factors. In addition, the correction results demonstrate that the association rules discovered by the Apriori algorithm are powerful in correcting the forecasted wind speed values when the forecasted values do not match the classification discovered by the association rules.

  14. Preview-based sampling for controlling gaseous simulations

    KAUST Repository

    Huang, Ruoguan; Melek, Zeki; Keyser, John

    2011-01-01

    to maintain. During the high resolution simulation, a matching process ensures that the properties sampled from the low resolution simulation are maintained. This matching process keeps the different resolution simulations aligned even for complex systems

  15. 46,XX males: a case series based on clinical and genetics evaluation.

    Science.gov (United States)

    Mohammadpour Lashkari, F; Totonchi, M; Zamanian, M R; Mansouri, Z; Sadighi Gilani, M A; Sabbaghian, M; Mohseni Meybodi, A

    2017-09-01

    46,XX male sex reversal syndrome is one of the rarest sex chromosomal aberrations. The presence of the SRY gene on one of the X chromosomes is the most frequent cause of this syndrome. Based on the Y chromosome profile, there are SRY-positive and SRY-negative forms. The purpose of our study was to report the first case series of Iranian patients and to describe the different clinical appearances based on their genetic component. From the 8,114 azoospermic and severely oligozoospermic patients referred to the Royan institute, we diagnosed 57 cases as sex reversal patients. Based on the endocrinological history, we performed karyotyping and SRY and AZF microdeletion screening. The patients had a female karyotype. According to the available hormonal reports of 37 patients, 16 cases had low levels of testosterone (43.2%). On the other hand, 15 males were SRY-positive (90.2%), while they lacked the spermatogenic-factor-encoding genes on Yq. The SRY gene is considered very important in initiating testicular differentiation in males. Despite the homogeneous results of karyotyping and AZF deletion screening, both SRY-positive and SRY-negative cases show similar sex reversal phenotypes. Evidence shows that diverse phenotypic differences can arise for various reasons.

  16. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts, instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation on real and simulated data sets, bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was considerably faster for small and moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased time spent generating weight matrices via multinomial sampling.
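
    The paper's implementations are in R, but the moment-weighting trick is language-agnostic; here is a numpy version for Pearson's correlation, where every bootstrap replication is computed from weighted first and second sample moments with a few matrix products, never materializing resampled datasets.

```python
import numpy as np

def boot_corr(x, y, B=10000, seed=0):
    """Vectorized multinomial bootstrap of Pearson's correlation."""
    n = len(x)
    rng = np.random.default_rng(seed)
    # B rows of multinomial counts, normalized into observation weights.
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n
    # Weighted sample moments for all replications at once.
    mx, my = W @ x, W @ y
    mxx, myy, mxy = W @ (x * x), W @ (y * y), W @ (x * y)
    return (mxy - mx * my) / np.sqrt((mxx - mx**2) * (myy - my**2))

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 0.6 * x + 0.8 * rng.normal(size=50)
reps = boot_corr(x, y)
print(np.percentile(reps, [2.5, 97.5]))   # percentile bootstrap interval
```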

  17. Finding metastabilities in reversible Markov chains based on incomplete sampling

    Directory of Open Access Journals (Sweden)

    Fackeldey Konstantin

    2017-01-01

    In order to fully characterize the state-transition behaviour of a finite Markov chain, one needs the corresponding transition matrix P. In many applications, such as molecular simulation and drug design, the entries of the transition matrix P are estimated by generating realizations of the Markov chain and determining the one-step conditional probability Pij for a transition from state i to state j. This sampling can be computationally very demanding; therefore, it is a good idea to reduce the sampling effort. The main purpose of this paper is to design a sampling strategy which provides a partial sampling of only a subset of the rows of the matrix P. Our proposed approach fits very well to stochastic processes stemming from the simulation of molecular systems or random walks on graphs, and it differs from matrix completion approaches, which try to approximate the transition matrix using a low-rank assumption. It will be shown how Markov chains can be analyzed on the basis of a partial sampling. More precisely: first, we estimate the stationary distribution from the partially given matrix P; second, we estimate the infinitesimal generator Q of P on the basis of this stationary distribution; third, from the generator we compute the leading invariant subspace, which should be identical to the leading invariant subspace of P; fourth, we apply Robust Perron Cluster Analysis (PCCA+) in order to identify metastabilities using this subspace.

  18. Reliability assessment based on small samples of normal distribution

    International Nuclear Information System (INIS)

    Ma Zhibo; Zhu Jianshi; Xu Naixin

    2003-01-01

    When the pertinent parameter involved in the reliability definition follows a normal distribution, the conjugate prior of its distribution parameters (μ, h) is the normal-gamma distribution. With the help of the maximum entropy and moment-equivalence principles, the subjective information about the parameter and the sampling data of its independent variables are transformed into a Bayesian prior for (μ, h). The desired estimates are obtained either from the prior or from the posterior formed by combining the prior and the sampling data. Computing methods are described and examples are presented as demonstrations.
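
    For reference, the standard conjugate update for a normal likelihood with unknown mean μ and precision h under a normal-gamma prior looks as follows; the hyperparameter values are placeholders, and the maximum-entropy construction of the prior from subjective information is not shown.

```python
import numpy as np

def normal_gamma_update(x, mu0=0.0, kappa0=1.0, alpha0=2.0, beta0=1.0):
    """Posterior hyperparameters of the normal-gamma prior for (mu, h)
    after observing a small sample x from N(mu, 1/h)."""
    n, xbar = len(x), np.mean(x)
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2.0
    beta_n = (beta0 + 0.5 * np.sum((x - xbar) ** 2)
              + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n))
    return mu_n, kappa_n, alpha_n, beta_n

x = np.array([10.2, 9.8, 10.5, 10.1, 9.9])   # a small sample
print(normal_gamma_update(x))
```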

  19. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    International Nuclear Information System (INIS)

    Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step prediction of chaotic time series; it estimates the proper parameters of the phase space reconstruction and optimizes the structure of the recurrent neural network by a co-evolutionary strategy. The search space is separated into two subspaces and the individuals are trained in a parallel computational procedure. The method dynamically combines the embedding approach with the capability of the recurrent neural network to incorporate past experience through internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step prediction of chaotic time series.

  20. GAMMA-RAY CHARACTERIZATION OF THE U-SERIES INTERMEDIATE DAUGHTERS FROM SOIL SAMPLES AT THE PENA BLANCA NATURAL ANALOG, CHIHUAHUA, MEXICO

    Energy Technology Data Exchange (ETDEWEB)

    D.C. French; E.Y. Anthony; P.C. Goodell

    2005-07-18

    The Pena Blanca natural analog is located in the Sierra Pena Blanca, approximately 50 miles north of Chihuahua City, Mexico. The Sierra Pena Blanca is composed mainly of ash-flow tuffs, and the uranium in the region is contained in the brecciated zones of these tuffs. The Pena Blanca site is considered a natural analog to the proposed Yucca Mountain Nuclear Waste Repository because they share similar characteristics of structure, volcanic lithology, tectonic activity, and hydrologic regime. One of the mineralized zones, the Nopal I deposit, was mined in the early 1980s and the ore was stockpiled close to the mine. This stockpile area has subsequently been cleared and is referred to as the prior high-grade stockpile (PHGS) site. Soil surrounding boulders of high-grade ore associated with the PHGS site has been sampled. The purpose of this study is to characterize the transport of uranium series radioisotopes from the boulder to the soil during the past 25 years. Transport is characterized by determining the activities of individual radionuclides and daughter-to-parent ratios. The daughter-to-parent ratios are used to establish whether the samples are in secular equilibrium. Activities are determined using gamma-ray spectroscopy. Isotopes of the uranium series decay chain detected by gamma-ray spectroscopy include ²¹⁰Pb, ²³⁴U, ²³⁴Th, ²³⁰Th, ²²⁶Ra, ²¹⁴Pb, ²¹⁴Bi, and ²³⁴Pa. Preliminary results indicate that some daughter-to-parent pairs appear to be in secular disequilibrium. Thorium is in excess relative to uranium, and radium is in excess relative to thorium. A deficiency appears to exist for ²¹⁰Pb relative to ²¹⁴Bi and ²¹⁴Pb. If these results are borne out by further analysis, they would suggest transport of nuclides from the high-grade boulder into its surroundings, followed by continued leaching of uranium and lead from the environment.

  1. GAMMA-RAY CHARACTERIZATION OF THE U-SERIES INTERMEDIATE DAUGHTERS FROM SOIL SAMPLES AT THE PENA BLANCA NATURAL ANALOG, CHIHUAHUA, MEXICO

    International Nuclear Information System (INIS)

    French, D.C.; Anthony, E.Y.; Goodell, P.C.

    2005-01-01

    The Pena Blanca natural analog is located in the Sierra Pena Blanca, approximately 50 miles north of Chihuahua City, Mexico. The Sierra Pena Blanca is composed mainly of ash-flow tuffs, and the uranium in the region is contained in the brecciated zones of these tuffs. The Pena Blanca site is considered a natural analog to the proposed Yucca Mountain Nuclear Waste Repository because they share similar characteristics of structure, volcanic lithology, tectonic activity, and hydrologic regime. One of the mineralized zones, the Nopal I deposit, was mined in the early 1980s and the ore was stockpiled close to the mine. This stockpile area has subsequently been cleared and is referred to as the prior high-grade stockpile (PHGS) site. Soil surrounding boulders of high-grade ore associated with the PHGS site has been sampled. The purpose of this study is to characterize the transport of uranium series radioisotopes from the boulder to the soil during the past 25 years. Transport is characterized by determining the activities of individual radionuclides and daughter to parent ratios. The daughter to parent ratios are used to establish whether the samples are in secular equilibrium. Activities are determined using gamma-ray spectroscopy. Isotopes of the uranium series decay chain detected by gamma-ray spectroscopy include ²¹⁰Pb, ²³⁴U, ²³⁴Th, ²³⁰Th, ²²⁶Ra, ²¹⁴Pb, ²¹⁴Bi, and ²³⁴Pa. Preliminary results indicate that some daughter to parent pairs appear to be in secular disequilibrium. Thorium is in excess relative to uranium, and radium is in excess relative to thorium. A deficiency appears to exist for ²¹⁰Pb relative to ²¹⁴Bi and ²¹⁴Pb. If these results are borne out by further analysis, they would suggest transport of nuclides from the high-grade boulder into its surroundings, followed by continued leaching of uranium and lead from the environment

  2. Evaluation of physical sampling efficiency for cyclone-based personal bioaerosol samplers in moving air environments.

    Science.gov (United States)

    Su, Wei-Chung; Tolchinsky, Alexander D; Chen, Bean T; Sigaev, Vladimir I; Cheng, Yung Sung

    2012-09-01

    The need to determine occupational exposure to bioaerosols has notably increased in the past decade, especially for microbiology-related workplaces and laboratories. Recently, two new cyclone-based personal bioaerosol samplers were developed by the National Institute for Occupational Safety and Health (NIOSH) in the USA and the Research Center for Toxicology and Hygienic Regulation of Biopreparations (RCT & HRB) in Russia to monitor bioaerosol exposure in the workplace. Here, a series of wind tunnel experiments were carried out to evaluate the physical sampling performance of these two samplers in moving air conditions, which could provide information for personal biological monitoring in a moving air environment. The experiments were conducted in a small wind tunnel facility using three wind speeds (0.5, 1.0 and 2.0 m s⁻¹) and three sampling orientations (0°, 90°, and 180°) with respect to the wind direction. Monodispersed particles ranging from 0.5 to 10 μm were employed as the test aerosols. The evaluation of the physical sampling performance focused on the aspiration efficiency and capture efficiency of the two samplers. The test results showed that the orientation-averaged aspiration efficiencies of the two samplers closely agreed with the American Conference of Governmental Industrial Hygienists (ACGIH) inhalable convention within the particle sizes used in the evaluation tests, and the effect of the wind speed on the aspiration efficiency was found to be negligible. The capture efficiencies of these two samplers ranged from 70% to 80%. These data offer important insight into the physical sampling characteristics of the two test samplers.

  3. A nonlinear analysis of human gait time series based on multifractal analysis and cross correlations

    International Nuclear Information System (INIS)

    Munoz-Diosdado, A

    2005-01-01

    We analyzed databases with gait time series of adults and persons with Parkinson, Huntington and amyotrophic lateral sclerosis (ALS) diseases. We obtained the staircase graphs of accumulated events, which can be bounded by a straight line whose slope can be used to distinguish between gait time series from healthy and ill persons. The global Hurst exponents of these series do not show clear tendencies; we contend that this is because some gait time series have monofractal behavior and others have multifractal behavior, so they cannot be characterized with a single Hurst exponent. We calculated the multifractal spectra and their widths, and found that the spectra of healthy young persons are almost monofractal. The spectra of ill persons are wider than the spectra of healthy persons. In contrast to interbeat time series, where pathology implies a loss of multifractality, in gait time series multifractal behavior emerges with the pathology. Data were collected from healthy and ill subjects as they walked in a roughly circular path, with sensors on both feet, so there is one time series for the left foot and another for the right foot. First, we analyzed these time series separately, and then compared the results, both directly and with a cross-correlation analysis. We tried to find differences between the two time series that could be used as indicators of equilibrium problems

  4. A nonlinear analysis of human gait time series based on multifractal analysis and cross correlations

    Energy Technology Data Exchange (ETDEWEB)

    Munoz-Diosdado, A [Department of Mathematics, Unidad Profesional Interdisciplinaria de Biotecnologia, Instituto Politecnico Nacional, Av. Acueducto s/n, 07340, Mexico City (Mexico)]

    2005-01-01

    We analyzed databases with gait time series of adults and persons with Parkinson, Huntington and amyotrophic lateral sclerosis (ALS) diseases. We obtained the staircase graphs of accumulated events, which can be bounded by a straight line whose slope can be used to distinguish between gait time series from healthy and ill persons. The global Hurst exponents of these series do not show clear tendencies; we contend that this is because some gait time series have monofractal behavior and others have multifractal behavior, so they cannot be characterized with a single Hurst exponent. We calculated the multifractal spectra and their widths, and found that the spectra of healthy young persons are almost monofractal. The spectra of ill persons are wider than the spectra of healthy persons. In contrast to interbeat time series, where pathology implies a loss of multifractality, in gait time series multifractal behavior emerges with the pathology. Data were collected from healthy and ill subjects as they walked in a roughly circular path, with sensors on both feet, so there is one time series for the left foot and another for the right foot. First, we analyzed these time series separately, and then compared the results, both directly and with a cross-correlation analysis. We tried to find differences between the two time series that could be used as indicators of equilibrium problems.
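
    Studies of this kind usually estimate scaling exponents with detrended fluctuation analysis (DFA). A minimal monofractal DFA sketch (a single exponent only; the multifractal spectra discussed above would additionally vary the moment order q):

    ```python
    import numpy as np

    def dfa(x, scales):
        """First-order DFA; returns the scaling exponent alpha (~Hurst for noise)."""
        y = np.cumsum(x - np.mean(x))                 # integrated profile
        flucts = []
        for s in scales:
            segs = y[:(len(y) // s) * s].reshape(-1, s)
            t = np.arange(s)
            f2 = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                  for seg in segs]                    # linear detrend per window
            flucts.append(np.sqrt(np.mean(f2)))
        return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

    x = np.random.default_rng(0).standard_normal(4096)      # white noise
    print(round(dfa(x, scales=[16, 32, 64, 128, 256]), 2))  # ~0.5
    ```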

  5. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    Directory of Open Access Journals (Sweden)

    M.M. Mohie El-Din

    2011-10-01

    In this paper, two-sample Bayesian prediction intervals for order statistics (OS) are obtained. This prediction is based on a certain class of the inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained in the two-sample case. The class of inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the log-logistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model, such as the inverse exponential model and the inverse Rayleigh model, are considered.
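
    The paper derives closed-form predictive distributions; purely as an illustration of the two-sample setting, here is a Monte Carlo sketch that approximates a prediction interval for the r-th order statistic of a future sample under an inverse Weibull model (the posterior parameter draws are invented stand-ins, not the paper's posterior):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in posterior draws for the inverse Weibull shape c and scale s.
    post_c = rng.normal(2.0, 0.1, 5000).clip(min=0.5)
    post_s = rng.normal(1.5, 0.1, 5000).clip(min=0.1)

    m, r = 10, 3  # future sample of size m; predict its r-th order statistic
    draws = []
    for c, s in zip(post_c, post_s):
        u = rng.random(m)
        future = s * (-np.log(u)) ** (-1.0 / c)  # inverse Weibull via CDF inversion
        draws.append(np.sort(future)[r - 1])

    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"95% prediction interval for order statistic r={r}: [{lo:.2f}, {hi:.2f}]")
    ```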

  6. Individual and pen-based oral fluid sampling: A welfare-friendly sampling method for group-housed gestating sows.

    Science.gov (United States)

    Pol, Françoise; Dorenlor, Virginie; Eono, Florent; Eudier, Solveig; Eveno, Eric; Liégard-Vanhecke, Dorine; Rose, Nicolas; Fablet, Christelle

    2017-11-01

    The aims of this study were to assess the feasibility of individual and pen-based oral fluid sampling (OFS) in 35 pig herds with group-housed sows, compare these methods to blood sampling, and assess the factors influencing the success of sampling. Individual samples were collected from at least 30 sows per herd. Pen-based OFS was performed using devices placed in at least three pens for 45 min. Information related to the farm, the sows, and their living conditions was collected. Factors significantly associated with the duration of sampling and the chewing behaviour of sows were identified by logistic regression. Individual OFS took 2 min 42 s on average; the type of floor, swab size, and operator were associated with a sampling time >2 min. Pen-based OFS was obtained from 112 devices (62.2%). The type of floor, parity, pen-level activity, and type of feeding were associated with chewing behaviour. Pen activity was associated with the latency to interact with the device. The type of floor, gestation stage, parity, group size, and latency to interact with the device were associated with a chewing time >10 min. After 15, 30 and 45 min of pen-based OFS, 48%, 60% and 65% of the sows were lying down, respectively. The time spent after the beginning of sampling, genetic type, and time elapsed since the last meal were associated with 50% of the sows lying down at one time point. The mean time to blood sample the sows was 1 min 16 s, and 2 min 52 s if the number of operators required was considered in the sampling time estimation. The genetic type, parity, and type of floor were significantly associated with a sampling time higher than 1 min 30 s. This study shows that individual OFS is easy to perform in group-housed sows by a single operator, even though straw-bedded animals take longer to sample than animals housed on slatted floors, and suggests some guidelines to optimise pen-based OFS success. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Optimization of Contact Force and Pull-in Voltage for Series based MEMS Switch

    Directory of Open Access Journals (Sweden)

    Abhijeet KSHIRSAGAR

    2010-04-01

    Cantilever-based, metal-to-metal contact type MEMS series switches have many applications, notably in RF MEMS and power MEMS. A typical MEMS switch consists of a cantilever as the actuating element that makes the contact between the two metal terminals of the switch. The cantilever is pulled down by applying a pull-in voltage to the control electrode located below the middle portion of the cantilever, while only the tip portion of the cantilever makes contact between the two terminals. Detailed analysis of the bending of the cantilever for different pull-in voltages reveals some interesting facts. At a low pull-in voltage the cantilever tip barely touches the two terminals, resulting in very little contact area. To increase the contact area a very high pull-in voltage can be applied, but this lifts the tip from the free end due to concave curving of the cantilever in the middle region, where the electrode is located; again, the result is less contact area. Furthermore, the high pull-in voltage produces large stress at the base of the cantilever close to the anchor. Therefore, an optimum pull-in voltage must exist at which the concave curving is eliminated and the contact area is maximum. In this paper the authors report the finding of the optimum contact force and pull-in voltage.
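
    For orientation, the classic lumped parallel-plate approximation for an electrostatic actuator gives a pull-in voltage of sqrt(8·k·g0³ / (27·ε0·A)). A small sketch with illustrative cantilever-switch numbers (not values from the paper, which optimizes the full bending profile):

    ```python
    import numpy as np

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def pull_in_voltage(k, g0, area):
        """Lumped parallel-plate pull-in estimate.
        k: effective spring constant (N/m), g0: initial gap (m), area: electrode area (m^2)."""
        return np.sqrt(8.0 * k * g0**3 / (27.0 * EPS0 * area))

    # Illustrative numbers: k = 5 N/m, 2 um gap, 100 um x 50 um electrode.
    print(f"{pull_in_voltage(5.0, 2e-6, 100e-6 * 50e-6):.1f} V")  # ~16 V
    ```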

  8. A 10-kW series resonant converter design, transistor characterization, and base-drive optimization

    Science.gov (United States)

    Robson, R. R.; Hancock, D. J.

    1982-01-01

    The development, components, and performance of a transistor-based 10 kW series resonant converter for use in resonant circuits in space applications are described. The transistors serve to switch on the converter current, which has a half-sinusoid waveform when the transistor is in saturation. The goal of the program was to handle an input voltage range of 230-270 Vdc, an output voltage range of 200-500 Vdc, and a current limit range of 0-20 A. Testing procedures for the D60T and D7ST transistors are outlined and base-drive waveforms are presented. The total device dissipation was minimized and found to be independent of the regenerative feedback ratio at lower current levels. Dissipation was held to within 10% and rise times were found to be acceptable. The finished unit displayed 91% efficiency at full power levels of 500 V and 20 A, and 93.7% at 500 V and 10 A.

  9. Case Reports, Case Series - From Clinical Practice to Evidence-Based Medicine in Graduate Medical Education.

    Science.gov (United States)

    Sayre, Jerry W; Toklu, Hale Z; Ye, Fan; Mazza, Joseph; Yale, Steven

    2017-08-07

    Case reports and case series or case study research are descriptive studies prepared to illustrate novel, unusual, or atypical features identified in patients in medical practice, and they potentially generate new research questions. They are empirical inquiries or investigations of a patient or a group of patients in a natural, real-world clinical setting. Case study research is a method that focuses on the contextual analysis of a number of events or conditions and their relationships. There is disagreement among physicians on the value of case studies in the medical literature, particularly for educators focused on teaching evidence-based medicine (EBM) to student learners in graduate medical education. Despite their limitations, case study research is a beneficial tool and learning experience in graduate medical education and among novice researchers. The preparation and presentation of case studies can help students and graduate medical education programs evaluate and apply the six Accreditation Council for Graduate Medical Education (ACGME) competencies in the areas of medical knowledge, patient care, practice-based learning, professionalism, systems-based practice, and communication. A goal in graduate medical education should be to assist residents in expanding their critical thinking, problem-solving, and decision-making skills. These attributes are required in the teaching and practice of EBM. In this respect, case studies provide a platform for developing clinical skills and problem-based learning methods. Hence, graduate medical education programs should encourage, assist, and support residents in the publication of clinical case studies, and clinical teachers should encourage graduate students to publish case reports during their graduate medical education.

  10. Effect of an evidence-based website on healthcare usage: an interrupted time-series study

    Science.gov (United States)

    Spoelman, Wouter A; Bonten, Tobias N; de Waal, Margot W M; Drenthen, Ton; Smeele, Ivo J M; Nielen, Markus M J; Chavannes, Niels H

    2016-01-01

    Objectives: Healthcare costs and usage are rising. Evidence-based online health information may reduce healthcare usage, but the evidence is scarce. The objective of this study was to determine whether the release of a nationwide evidence-based health website was associated with a reduction in healthcare usage. Design: Interrupted time series analysis of observational primary care data of healthcare use in the Netherlands from 2009 to 2014. Setting: General community primary care. Population: 912 000 patients who visited their general practitioners 18.1 million times during the study period. Intervention: In March 2012, an evidence-based health information website was launched by the Dutch College of General Practitioners. It was easily accessible and understandable using plain language. At the end of the study period, the website had 2.9 million unique page views per month. Main outcome measures: Primary outcome was the change in consultation rate (consultations/1000 patients/month) before and after the release of the website. Additionally, a reference group was created by including consultations about topics not being viewed at the website. Subgroup analyses were performed for type of consultations, sex, age and socioeconomic status. Results: After launch of the website, the trend in consultation rate decreased by 1.620 consultations/1000 patients/month (p<0.001). This corresponds to a 12% decline in consultations 2 years after launch of the website. The trend in consultation rate of the reference group showed no change. The subgroup analyses showed a specific decline for consultations by phone and were significant for all other subgroups, except for the youngest age group. Conclusions: Healthcare usage decreased by 12% after providing high-quality evidence-based online health information. These findings show that e-Health can be effective to improve self-management and reduce healthcare usage in times of increasing healthcare costs. PMID:28186945
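
    The workhorse model behind an interrupted time-series analysis like this is segmented regression with level- and slope-change terms at the intervention date. A minimal sketch on synthetic monthly rates (all numbers invented; only the model structure mirrors the study design):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    t = np.arange(60)                      # 36 months pre, 24 months post
    post = (t >= 36).astype(float)         # indicator: after the website launch
    rate = 300 + 0.5 * t - 1.6 * post * (t - 36) + rng.normal(0, 3, 60)

    # Columns: baseline trend, level change at launch, post-launch trend change.
    X = sm.add_constant(np.column_stack([t, post, post * (t - 36)]))
    fit = sm.OLS(rate, X).fit()
    print(fit.params)  # [intercept, pre-trend, level change, trend change]
    ```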

  11. Academic Primer Series: Key Papers About Competency-Based Medical Education

    Directory of Open Access Journals (Sweden)

    Robert Cooney

    2017-05-01

    Introduction: Competency-based medical education (CBME) presents a paradigm shift in medical training. This outcome-based education movement has triggered substantive changes across the globe. Since this transition is only beginning, many faculty members may have neither experience with CBME nor a solid foundation in the grounding literature. We identify and summarize key papers to help faculty members learn more about CBME. Methods: Based on the online discussions of the 2016–2017 ALiEM Faculty Incubator program, a series of papers on the topic of CBME was developed. Augmenting this list with suggestions by a guest expert and by an open call on Twitter for other important papers, we generated a list of 21 papers in total. Subsequently, we used a modified Delphi study methodology to narrow the list to key papers that describe the importance and significance for educators interested in learning about CBME. To determine the most impactful papers, the mixed junior and senior faculty authorship group used a three-round voting methodology based upon the Delphi method. Results: Summaries of the five most highly rated papers on the topic of CBME, as determined by this modified Delphi approach, are presented in this paper. Major themes include a definition of core CBME themes, CBME principles to consider in the design of curricula, a history of the development of the CBME movement, and a rationale for changes to accreditation with CBME. The application of the study findings to junior faculty and faculty developers is discussed. Conclusion: We present five key papers on CBME that junior faculty members and faculty experts identified as essential to faculty development. These papers are a mix of foundational and explanatory papers that may provide a basis from which junior faculty members may build upon as they help to implement CBME programs.

  12. Academic Primer Series: Key Papers About Competency-Based Medical Education.

    Science.gov (United States)

    Cooney, Robert; Chan, Teresa M; Gottlieb, Michael; Abraham, Michael; Alden, Sylvia; Mongelluzzo, Jillian; Pasirstein, Michael; Sherbino, Jonathan

    2017-06-01

    Competency-based medical education (CBME) presents a paradigm shift in medical training. This outcome-based education movement has triggered substantive changes across the globe. Since this transition is only beginning, many faculty members may have neither experience with CBME nor a solid foundation in the grounding literature. We identify and summarize key papers to help faculty members learn more about CBME. Based on the online discussions of the 2016-2017 ALiEM Faculty Incubator program, a series of papers on the topic of CBME was developed. Augmenting this list with suggestions by a guest expert and by an open call on Twitter for other important papers, we generated a list of 21 papers in total. Subsequently, we used a modified Delphi study methodology to narrow the list to key papers that describe the importance and significance for educators interested in learning about CBME. To determine the most impactful papers, the mixed junior and senior faculty authorship group used a three-round voting methodology based upon the Delphi method. Summaries of the five most highly rated papers on the topic of CBME, as determined by this modified Delphi approach, are presented in this paper. Major themes include a definition of core CBME themes, CBME principles to consider in the design of curricula, a history of the development of the CBME movement, and a rationale for changes to accreditation with CBME. The application of the study findings to junior faculty and faculty developers is discussed. We present five key papers on CBME that junior faculty members and faculty experts identified as essential to faculty development. These papers are a mix of foundational and explanatory papers that may provide a basis from which junior faculty members may build upon as they help to implement CBME programs.

  13. Effect of an evidence-based website on healthcare usage: an interrupted time-series study.

    Science.gov (United States)

    Spoelman, Wouter A; Bonten, Tobias N; de Waal, Margot W M; Drenthen, Ton; Smeele, Ivo J M; Nielen, Markus M J; Chavannes, Niels H

    2016-11-09

    Healthcare costs and usage are rising. Evidence-based online health information may reduce healthcare usage, but the evidence is scarce. The objective of this study was to determine whether the release of a nationwide evidence-based health website was associated with a reduction in healthcare usage. Interrupted time series analysis of observational primary care data of healthcare use in the Netherlands from 2009 to 2014. General community primary care. 912 000 patients who visited their general practitioners 18.1 million times during the study period. In March 2012, an evidence-based health information website was launched by the Dutch College of General Practitioners. It was easily accessible and understandable using plain language. At the end of the study period, the website had 2.9 million unique page views per month. Primary outcome was the change in consultation rate (consultations/1000 patients/month) before and after the release of the website. Additionally, a reference group was created by including consultations about topics not being viewed at the website. Subgroup analyses were performed for type of consultations, sex, age and socioeconomic status. After launch of the website, the trend in consultation rate decreased by 1.620 consultations/1000 patients/month (p<0.001), corresponding to a 12% decline in consultations 2 years after launch. Healthcare usage decreased by 12% after providing high-quality evidence-based online health information. These findings show that e-Health can be effective to improve self-management and reduce healthcare usage in times of increasing healthcare costs. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  14. Dietary intakes of pesticides based on community duplicate diet samples.

    Science.gov (United States)

    Melnyk, Lisa Jo; Xue, Jianping; Brown, G Gordon; McCombs, Michelle; Nishioka, Marcia; Michael, Larry C

    2014-01-15

    The calculation of dietary intake of selected pesticides was accomplished using food samples collected from individual representatives of a defined demographic community using a community duplicate diet approach. A community of nine participants was identified in Apopka, FL, from which intake assessments of organophosphate (OP) and pyrethroid pesticides were made. From these nine participants, sixty-seven individual samples were collected and subsequently analyzed by gas chromatography/mass spectrometry. Measured concentrations were used to estimate dietary intakes for individuals and for the community. Individual intakes of total OP and pyrethroid pesticides ranged from 6.7 to 996 ng and 1.2 to 16,000 ng, respectively. The community intake was 256 ng for OPs and 3430 ng for pyrethroid pesticides. The most commonly detected pesticide was permethrin, but the highest overall intake was of bifenthrin, followed by esfenvalerate. These data indicate that the community in Apopka, FL, as represented by the nine individuals, was potentially exposed to both OP and pyrethroid pesticides at levels consistent with a dietary model and other field studies in which standard duplicate diet samples were collected. Higher levels of pyrethroid pesticides were measured than OPs, consistent with decreased usage of OPs. The diversity of pyrethroid pesticides detected in food samples was greater than expected. Continually changing pesticide usage patterns need to be considered when determining analytes of interest for large-scale epidemiology studies. The Community Duplicate Diet Methodology is a tool for researchers to meet emerging exposure measurement needs, leading to more accurate assessments of intake that may enhance decisions for chemical regulation. Successfully determining the intake of pesticides through the dietary route will allow for accurate assessments of pesticide exposures to a community of individuals, thereby significantly enhancing the research benefit

  15. The Performance Analysis Based on SAR Sample Covariance Matrix

    Directory of Open Access Journals (Sweden)

    Esra Erten

    2012-03-01

    Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for their utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. In practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has frequently been used in different applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is presented in a simplified form for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
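
    A minimal numerical sketch of the quantity under study: the sample covariance matrix of zero-mean circular complex Gaussian channels and its maximum eigenvalue (synthetic draws, not SAR measurements):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    p, n = 3, 25  # 3 channels, 25 looks
    z = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)

    # Sample covariance matrix; complex-Wishart-distributed for Gaussian data.
    C = z @ z.conj().T / n

    eigvals = np.linalg.eigvalsh(C)  # Hermitian solver, ascending order
    print(f"lambda_max = {eigvals[-1]:.3f}")  # statistic used for detection tasks
    ```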

  16. A time-series approach for clustering farms based on slaughterhouse health aberration data.

    Science.gov (United States)

    Hulsegge, B; de Greef, K H

    2018-05-01

    A large amount of data is collected routinely in meat inspection in pig slaughterhouses. A time series clustering approach is presented and applied that groups farms based on similar statistical characteristics of meat inspection data over time. A three-step characteristic-based clustering approach was used, based on the idea that the data contain more information than the incidence figures alone. A stratified subset containing 511,645 pigs was derived as a study set from 3.5 years of meat inspection data. The monthly averages of incidence of pleuritis and of pneumonia of 44 Dutch farms (delivering 5149 batches to 2 pig slaughterhouses) were subjected to 1) derivation of farm-level data characteristics, 2) factor analysis and 3) clustering into groups of farms. The characteristic-based clustering was able to cluster farms for both lung aberrations. Three groups of data characteristics were informative, describing incidence, time pattern and degree of autocorrelation. The consistency of clustering similar farms was confirmed by repetition of the analysis in a larger dataset. The robustness of the clustering was tested on a substantially extended dataset. This confirmed the earlier results: three data distribution aspects make up the majority of the distinction between groups of farms, and within these groups (clusters) the majority of the farms were allocated comparably to the earlier allocation (75% and 62% for pleuritis and pneumonia, respectively). The difference between pleuritis and pneumonia in their seasonal dependency was confirmed, supporting the biological relevance of the clustering. Comparison of the identified clusters of statistically comparable farms can be used to detect farm-level risk factors causing the health aberrations, beyond comparison on disease incidence and trend alone. Copyright © 2018 Elsevier B.V. All rights reserved.
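
    A condensed sketch of the three-step idea on synthetic data: derive per-farm characteristics, standardize them, and cluster (the paper interposes a factor analysis between the characteristic derivation and the clustering, omitted here):

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    series = rng.random((44, 42))  # stand-in: 44 farms x 42 monthly incidences

    def characteristics(x):
        """Per-farm characteristics: level, linear trend, lag-1 autocorrelation."""
        trend = np.polyfit(np.arange(len(x)), x, 1)[0]
        ac1 = np.corrcoef(x[:-1], x[1:])[0, 1]
        return [x.mean(), trend, ac1]

    feats = StandardScaler().fit_transform([characteristics(x) for x in series])
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
    print(np.bincount(labels))  # farms per cluster
    ```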

  17. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study

    International Nuclear Information System (INIS)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-01-01

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using dried blood spots (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population-based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible

  18. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study.

    Science.gov (United States)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-04-11

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using dried blood spots (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population-based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible

  19. Polymeric ionic liquid-based portable tip microextraction device for on-site sample preparation of water samples.

    Science.gov (United States)

    Chen, Lei; Pei, Junxian; Huang, Xiaojia; Lu, Min

    2018-06-05

    On-site sample preparation is highly desired because it avoids the transportation of large-volume samples and ensures the accuracy of the analytical results. In this work, a portable prototype of a tip microextraction device (TMD) was designed and developed for on-site sample pretreatment. The assembly procedure of the TMD is simple. First, a polymeric ionic liquid (PIL)-based adsorbent was prepared in situ in a pipette tip. The tip was then connected to a syringe driven by a bidirectional motor, which accurately controlled the flow rates in the adsorption and desorption steps. To evaluate the practicability of the device, the TMD was used for on-site preparation of water samples and combined with high-performance liquid chromatography with diode array detection to measure trace estrogens. Under the most favorable conditions, the limits of detection (LODs, S/N = 3) for the target analytes were in the range of 4.9-22 ng/L, with good coefficients of determination. A confirmatory study showed that the extraction performance of the TMD is comparable to that of the traditional laboratory solid-phase extraction process, while the TMD is simpler and more convenient. At the same time, the TMD avoids the complicated sampling and transfer steps of large-volume water samples. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Risk assessment of environmentally influenced airway diseases based on time-series analysis.

    Science.gov (United States)

    Herbarth, O

    1995-09-01

    Threshold values are of prime importance in providing a sound basis for public health decisions. A key issue is determining threshold or maximum exposure values for pollutants and assessing their potential health risks. Environmental epidemiology could be instrumental in assessing these levels, especially since the assessment of ambient exposures involves relatively low concentrations of pollutants. This paper presents a statistical method that allows the determination of threshold values as well as the assessment of the associated risk, using a retrospective, longitudinal study design with a prospective follow-up. Morbidity data were analyzed using the Fourier method, a time-series analysis that is based on the assumption of a high temporal resolution of the data. This method eliminates time-dependent responses like temporal inhomogeneity and pseudocorrelation. The frequency of calls for respiratory distress conditions to the regional Mobile Medical Emergency Service (MMES) in the city of Leipzig was investigated. The entire population of Leipzig served as a pool for data collection. In addition to the collection of morbidity data, air pollution measurements were taken every 30 min for the entire study period, using sulfur dioxide as the regional indicator variable. This approach allowed the calculation of a dose-response curve for respiratory diseases and air pollution indices in children and adults. Significantly higher morbidities were observed above a 24-hr mean value of 0.6 mg SO₂/m³ air for children and 0.8 mg SO₂/m³ for adults. (ABSTRACT TRUNCATED AT 250 WORDS)

  1. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made of the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation of the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with a user-selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on the geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
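
    Based on the abstract's description, UMBRAE can be sketched as the mean bounded relative absolute error, unscaled back to a relative-error scale; treat the exact form below as an assumption inferred from that description rather than a verified transcription of the paper:

    ```python
    import numpy as np

    def umbrae(actual, forecast, benchmark):
        """Unscaled Mean Bounded Relative Absolute Error (assumed form).
        Bounded error |e|/(|e|+|e*|) is averaged, then unscaled via m/(1-m)."""
        actual = np.asarray(actual, dtype=float)
        e = np.abs(actual - forecast)
        e_star = np.abs(actual - benchmark)   # benchmark (e.g., naive) errors
        brae = e / (e + e_star)               # bounded in [0, 1]; assumes no 0/0 ties
        mbrae = brae.mean()
        return mbrae / (1.0 - mbrae)          # < 1 means better than the benchmark

    y = [10.0, 12.0, 13.0, 12.5]
    f = [10.5, 11.5, 13.2, 12.0]
    naive = [9.0, 10.0, 12.0, 13.0]           # previous-value forecasts
    print(round(umbrae(y, f, naive), 3))      # ~0.429
    ```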

  2. A population based time series analysis of asthma hospitalisations in Ontario, Canada: 1988 to 2000

    Directory of Open Access Journals (Sweden)

    Upshur Ross EG

    2001-08-01

    Background: Asthma is a common yet incompletely understood health problem associated with a high morbidity burden. A wide variety of seasonally variable environmental stimuli, such as viruses and air pollution, are believed to influence asthma morbidity. This study set out to examine the seasonal patterns of asthma hospitalisations in relation to age and gender for the province of Ontario over a period of 12 years. Methods: A retrospective, population-based study design was used to assess temporal patterns in hospitalisations for asthma from April 1, 1988 to March 31, 2000. Approximately 14 million residents of Ontario eligible for universal healthcare coverage during this time were included for analysis. Time series analyses were conducted on monthly aggregations of hospitalisations. Results: There is strong evidence of an autumn peak and summer trough seasonal pattern occurring every year over the 12-year period (Fisher-Kappa (FK) = 23.93, p < 0.01; Bartlett-Kolmogorov-Smirnov (BKS) = 0.459, p < 0.01). Conclusions: A clear and consistent seasonal pattern was observed in this study for asthma hospitalisations. These findings have important implications for the development of effective management and prevention strategies.

  3. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputation (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  4. Frequency-based time-series gene expression recomposition using PRIISM

    Directory of Open Access Journals (Sweden)

    Rosa Bruce A

    2012-06-01

    Background: Circadian rhythm pathways influence the expression patterns of as much as 31% of the Arabidopsis genome through complicated interaction pathways, and have been found to be significantly disrupted by biotic and abiotic stress treatments, complicating treatment-response gene discovery methods due to clock pattern mismatches in the fold change-based statistics. The PRIISM (Pattern Recomposition for the Isolation of Independent Signals in Microarray data) algorithm outlined in this paper is designed to separate pattern changes induced by different forces, including treatment-response pathways and circadian clock rhythm disruptions. Results: Using the Fourier transform, high-resolution time-series microarray data are projected to the frequency domain. By identifying the clock frequency range from the core circadian clock genes, we separate the frequency spectrum into sections containing treatment-frequency (representing up- or down-regulation by an adaptive treatment response), clock-frequency (representing the circadian clock-disruption response) and noise-frequency components. Then, we project the components' spectra back to the expression domain to reconstruct isolated, independent gene expression patterns representing the effects of the different influences. By applying PRIISM to a high-resolution time-series Arabidopsis microarray dataset under a cold treatment, we systematically evaluated our method using maximum fold change and principal component analyses. The results of this study showed that the ranked treatment-frequency fold change results produce fewer false positives than the original methodology, and the 26-hour timepoint in our dataset was the best statistic for distinguishing the most known cold-response genes. In addition, six novel cold-response genes were discovered. PRIISM also provides gene expression data which represent only circadian clock influences, and may be useful for circadian clock studies.
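
    The core mechanics are a band split in the frequency domain followed by inverse transforms. A simplified single-gene sketch (hand-chosen band edges around the ~24 h clock frequency; PRIISM itself derives the clock band from the core clock genes):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Stand-in expression profile, hourly for 48 h: slow treatment trend
    # plus a ~24 h circadian oscillation plus noise.
    t = np.arange(48.0)
    expr = 0.04 * t + np.sin(2 * np.pi * t / 24.0) + 0.1 * rng.standard_normal(48)

    spec = np.fft.rfft(expr)
    freqs = np.fft.rfftfreq(48, d=1.0)  # cycles per hour

    clock_band = (freqs > 1 / 30) & (freqs < 1 / 18)                   # around 1/24 h^-1
    clock = np.fft.irfft(np.where(clock_band, spec, 0), n=48)          # clock component
    treatment = np.fft.irfft(np.where(freqs < 1 / 30, spec, 0), n=48)  # slow component
    print(clock[:3].round(2), treatment[:3].round(2))
    ```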

  5. Adaptive Sampling for Nonlinear Dimensionality Reduction Based on Manifold Learning

    DEFF Research Database (Denmark)

    Franz, Thomas; Zimmermann, Ralf; Goertz, Stefan

    2017-01-01

    We make use of the non-intrusive dimensionality reduction method Isomap in order to emulate nonlinear parametric flow problems that are governed by the Reynolds-averaged Navier-Stokes equations. Isomap is a manifold learning approach that provides a low-dimensional embedding space that is approximately ... to detect and fill up gaps in the sampling in the embedding space. The performance of the proposed manifold filling method will be illustrated by numerical experiments, where we consider nonlinear parameter-dependent steady-state Navier-Stokes flows in the transonic regime.
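
    A minimal sketch of the embedding step with scikit-learn's Isomap on stand-in snapshot data (the adaptive sampling/gap-filling strategy itself is not reproduced):

    ```python
    import numpy as np
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(6)

    # Stand-in for flow snapshots: 200 high-dimensional states driven by
    # two hidden parameters (think Mach number and angle of attack).
    params = rng.random((200, 2))
    snapshots = np.column_stack(
        [np.sin(3 * params[:, :1] + k) * params[:, 1:] for k in range(50)])

    embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(snapshots)
    print(embedding.shape)  # (200, 2): a low-dimensional embedding space
    ```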

  6. Analog series-based scaffolds: computational design and exploration of a new type of molecular scaffolds for medicinal chemistry

    Science.gov (United States)

    Dimova, Dilyana; Stumpfe, Dagmar; Hu, Ye; Bajorath, Jürgen

    2016-01-01

    Aim: Computational design of and systematic search for a new type of molecular scaffolds termed analog series-based scaffolds. Materials & methods: From currently available bioactive compounds, analog series were systematically extracted, key compounds identified and new scaffolds isolated from them. Results: Using our computational approach, more than 12,000 scaffolds were extracted from bioactive compounds. Conclusion: A new scaffold definition is introduced and a computational methodology developed to systematically identify such scaffolds, yielding a large freely available scaffold knowledge base. PMID:28116132

  7. All-polymer microfluidic systems for droplet based sample analysis

    DEFF Research Database (Denmark)

    Poulsen, Carl Esben

    In this PhD project, I pursued the development of an all-polymer, injection-moulded microfluidic platform with integrated droplet-based single-cell interrogation. To allow for a proper "one device - one experiment" methodology and to ensure high relevancy to non-academic settings, the systems presented ...

  8. Sampling in image space for vision based SLAM

    NARCIS (Netherlands)

    Booij, O.; Zivkovic, Z.; Kröse, B.

    2008-01-01

    Loop closing in vision based SLAM applications is a difficult task. Comparing new image data with all previous image data acquired for the map is practically impossible because of the high computational costs. This problem is part of the bigger problem to acquire local geometric constraints from

  9. Protein expression based multimarker analysis of breast cancer samples

    International Nuclear Information System (INIS)

    Presson, Angela P; Horvath, Steve; Yoon, Nam K; Bagryanova, Lora; Mah, Vei; Alavi, Mohammad; Maresh, Erin L; Rajasekaran, Ayyappan K; Goodglick, Lee; Chia, David

    2011-01-01

    Tissue microarray (TMA) data are commonly used to validate the prognostic accuracy of tumor markers. For example, breast cancer TMA data have led to the identification of several promising prognostic markers of survival time. Several studies have shown that TMA data can also be used to cluster patients into clinically distinct groups. Here we use breast cancer TMA data to cluster patients into distinct prognostic groups. We apply weighted correlation network analysis (WGCNA) to TMA data consisting of 26 putative tumor biomarkers measured on 82 breast cancer patients. Based on this analysis we identify three groups of patients with low (5.4%), moderate (22%) and high (50%) mortality rates, respectively. We then develop a simple threshold rule using a subset of three markers (p53, Na-KATPase-β1, and TGF β receptor II) that can approximately define these mortality groups. We compare the results of this correlation network analysis with results from a standard Cox regression analysis. We find that the rule-based grouping variable (referred to as WGCNA*) is an independent predictor of survival time. While WGCNA* is based on protein measurements (TMA data), it was validated in two independent Affymetrix microarray gene expression datasets (which measure mRNA abundance). We find that the WGCNA patient groups differed by 35% from mortality groups defined by a more conventional stepwise Cox regression analysis approach. We show that correlation network methods, which are primarily used to analyze the relationships between gene products, are also useful for analyzing the relationships between patients and for defining distinct patient groups based on TMA data. We identify a rule based on three tumor markers for predicting breast cancer survival outcomes

  10. Comprehensive model of annual plankton succession based on the whole-plankton time series approach.

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Romagnan

    Ecological succession provides a widely accepted description of seasonal changes in phytoplankton and mesozooplankton assemblages in the natural environment, but concurrent changes in smaller (i.e. microbes) and larger (i.e. macroplankton) organisms are not included in the model because plankton ranging from bacteria to jellies are seldom sampled and analyzed simultaneously. Here we studied, for the first time in the aquatic literature, the succession of marine plankton in the whole-plankton assemblage that spanned 5 orders of magnitude in size from microbes to macroplankton predators (not including fish or fish larvae, for which no consistent data were available). Samples were collected in the northwestern Mediterranean Sea (Bay of Villefranche) weekly during 10 months. Simultaneously collected samples were analyzed by flow cytometry, inverse microscopy, FlowCam, and ZooScan. The whole-plankton assemblage underwent sharp reorganizations that corresponded to bottom-up events of vertical mixing in the water-column, and its development was top-down controlled by large gelatinous filter feeders and predators. Based on the results provided by our novel whole-plankton assemblage approach, we propose a new comprehensive conceptual model of the annual plankton succession (i.e. whole plankton model) characterized by both stepwise stacking of four broad trophic communities from early spring through summer, which is a new concept, and progressive replacement of ecological plankton categories within the different trophic communities, as recognised traditionally.

  11. Pore formation during dehydration of a polycrystalline gypsum sample observed and quantified in a time-series synchrotron X-ray micro-tomography experiment

    Directory of Open Access Journals (Sweden)

    F. Fusseis

    2012-03-01

    We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra Gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min to acquire a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of 2.2 μm³ proved sufficient to pinpoint reaction initiation and the organization of drainage architecture in space and time.

    We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample in more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process.

    Novel parallelized computer codes allow quantifying the geometry of the porosity and the drainage architecture from the very large tomographic datasets (2048³ voxels) in unprecedented detail. We determined position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With on-going dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway.

    Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then, the internal stresses are released and dehydration happens efficiently, resulting in new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.

  12. Pore formation during dehydration of a polycrystalline gypsum sample observed and quantified in a time-series synchrotron X-ray micro-tomography experiment

    Science.gov (United States)

    Fusseis, F.; Schrank, C.; Liu, J.; Karrech, A.; Llana-Fúnez, S.; Xiao, X.; Regenauer-Lieb, K.

    2012-03-01

    We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra Gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min to acquire a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of 2.2 μm³ proved sufficient to pinpoint reaction initiation and the organization of drainage architecture in space and time. We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample in more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process. Novel parallelized computer codes allow quantifying the geometry of the porosity and the drainage architecture from the very large tomographic datasets (2048³ voxels) in unprecedented detail. We determined position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With on-going dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway. Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then, the internal stresses are released and dehydration happens efficiently, resulting in new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.
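
    The reported power-law pore-size distribution can be checked with a maximum-likelihood exponent fit. A sketch on synthetic volumes (the Hill-type estimator assumes a continuous power law above a lower cutoff; real volumes would come from the connected-component analysis of the tomograms):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic pore volumes drawn from p(v) ~ v^(-2.8) above v_min = 1.
    volumes = (1.0 - rng.random(5000)) ** (-1.0 / 1.8)

    v_min = 1.0
    v = volumes[volumes >= v_min]
    alpha = 1.0 + len(v) / np.sum(np.log(v / v_min))  # ML (Hill-type) estimate
    print(f"estimated exponent alpha = {alpha:.2f}")  # ~2.8
    ```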

  13. [Correlation coefficient-based principle and method for the classification of jump degree in hydrological time series].

    Science.gov (United States)

    Wu, Zi Yi; Xie, Ping; Sang, Yan Fang; Gu, Hai Ting

    2018-04-01

    The phenomenon of jump is one of the important external forms of hydrological variability under environmental changes, representing the adaptation of hydrological nonlinear systems to the influence of external disturbances. Present studies mainly focus on methods for identifying the jump positions and jump times in hydrological time series. In contrast, few studies have focused on the quantitative description and classification of jump degree in hydrological time series, which makes it difficult to understand the environmental changes and evaluate their potential impacts. Here, we propose a theoretically reliable and easy-to-apply method for the classification of jump degree in hydrological time series, using the correlation coefficient as a basic index. Statistical tests verified the accuracy, reasonability, and applicability of this method. The relationship between the correlation coefficient and the jump degree of a series was described mathematically by derivation. After that, several thresholds of correlation coefficients under different statistical significance levels were chosen, based on which the jump degree could be classified into five levels: none, weak, moderate, strong and very strong. Finally, our method was applied to five different observed hydrological time series with diverse geographic and hydrological conditions in China. The classification results for the jump degrees in those series accorded closely with their physical hydrological mechanisms, indicating the practicability of our method.

  14. Visualization studies on evidence-based medicine domain knowledge (series 3): visualization for dissemination of evidence based medicine information.

    Science.gov (United States)

    Shen, Jiantong; Yao, Leye; Li, Youping; Clarke, Mike; Gan, Qi; Li, Yifei; Fan, Yi; Gou, Yongchao; Wang, Li

    2011-05-01

    To identify patterns in information sharing between a series of Chinese evidence-based medicine (EBM) journals and the Cochrane Database of Systematic Reviews, to determine key evidence dissemination areas for EBM and to provide a scientific basis for improving the dissemination of EBM research. Data on citing and cited articles were collected from the Chinese Journal of Evidence-Based Medicine (CJEBM), Journal of Evidence-Based Medicine (JEBMc), Chinese Journal of Evidence Based Pediatrics (CJEBP), and the Cochrane Database of Systematic Reviews (CDSR). Relationships between citations were visualized. High-frequency key words from these sources were identified, to build a word co-occurrence matrix and to map research subjects. CDSR contains a large collection of information of relevance to EBM and its contents are widely cited across many journals, suggesting a well-developed citation environment. The content and citation of the Chinese journals have been increasing in recent years. However, their citation environments are much less developed, and there is a wide variation in the breadth and strength of their knowledge communication, with the ranking from highest to lowest being CJEBM, JEBMc and CJEBP. The content of CDSR is almost exclusively Cochrane intervention reviews examining the effects of healthcare interventions, so its contribution to EBM is mostly in disease control and treatment. On the other hand, the Chinese journals on evidence-based medicine and practice focused more on areas such as education and research, design and quality of clinical trials, evidence-based policymaking, evidence-based clinical practice, tumor treatment, and pediatrics. Knowledge and findings of EBM are widely communicated and disseminated. However, citation environments and the range of knowledge communication differ greatly between the journals examined in this study. The study finds that Chinese EBM has focused mainly on clinical medicine, Traditional Chinese Medicine, pediatrics, tumor

  15. Characteristics of the co-fluctuation matrix transmission network based on financial multi-time series

    OpenAIRE

    Huajiao Li; Haizhong An; Xiangyun Gao; Wei Fang

    2015-01-01

    The co-fluctuation of two time series has often been studied by analysing the correlation coefficient over a selected period. However, in both domestic and global financial markets, there are more than two active time series that fluctuate constantly as a result of various factors, including geographic locations, information communications and so on. In addition to correlation relationships over longer periods, daily co-fluctuation relationships and their transmission features are also import...

  16. Environmental sampling at remote sites based on radiological screening assessments

    International Nuclear Information System (INIS)

    Ebinger, M.H.; Hansen, W.R.; Wenz, G.; Oxenberg, T.P.

    1996-01-01

    Environmental radiation monitoring (ERM) data from remote sites on the White Sands Missile Range, New Mexico, were used to estimate doses to humans and terrestrial mammals from residual radiation deposited during testing of components containing depleted uranium (DU) and thorium (Th). ERM data were used with the DOE code RESRAD and a simple steady-state pathway code to estimate the potential adverse effects from DU and Th to workers in the contaminated zones, to hunters consuming animals from the contaminated zones, and to terrestrial mammals that inhabit the contaminated zones. Assessments of zones contaminated with DU and Th and DU alone were conducted. Radiological doses from Th and DU in soils were largest, with a maximum of about 3.5 mrem y⁻¹ in humans and a maximum of about 0.1 mrad d⁻¹ in deer. Dose estimates from DU alone in soils were significantly less, with a maximum of about 1 mrem y⁻¹ in humans and about 0.04 mrad d⁻¹ in deer. The results of the dose estimates suggest strongly that environmental sampling in these affected areas can be infrequent and still provide adequate assessments of radiological doses to workers, hunters, and terrestrial mammals

  17. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
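
    For reference, the two estimator families that performed well above have compact closed forms; the sketch below uses the standard textbook formulas (classic Chao1 from abundances, first-order jackknife from incidence), not the authors' code.

      import numpy as np

      def chao1(abundances):
          """Classic Chao1 estimator from per-species abundance counts."""
          a = np.asarray(abundances)
          s_obs = np.sum(a > 0)
          f1, f2 = np.sum(a == 1), np.sum(a == 2)   # singletons, doubletons
          if f2 == 0:                               # bias-corrected fallback
              return s_obs + f1 * (f1 - 1) / 2.0
          return s_obs + f1 ** 2 / (2.0 * f2)

      def jack1(incidence):
          """First-order jackknife from a plots-by-species presence matrix."""
          inc = np.asarray(incidence, dtype=bool)
          m = inc.shape[0]                          # number of sample plots
          s_obs = np.sum(inc.any(axis=0))
          q1 = np.sum(inc.sum(axis=0) == 1)         # species in exactly one plot
          return s_obs + q1 * (m - 1) / m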

  18. Short-term prediction method of wind speed series based on fractal interpolation

    International Nuclear Information System (INIS)

    Xiu, Chunbo; Wang, Tiantian; Tian, Meng; Li, Yanqing; Cheng, Yi

    2014-01-01

    Highlights: • An improved fractal interpolation prediction method is proposed. • The chaos optimization algorithm is used to obtain the iterated function system. • Fractal extrapolating interpolation prediction of wind speed series is performed. - Abstract: In order to improve the prediction performance for wind speed series, rescaled range analysis is used to analyze the fractal characteristics of the wind speed series. An improved fractal interpolation prediction method is proposed to predict wind speed series whose Hurst exponents are close to 1. An optimization function composed of the interpolation error and constraint terms on the vertical scaling factors in the fractal interpolation iterated function system is designed. The chaos optimization algorithm is used to optimize this function and solve for the optimal vertical scaling factors. According to the self-similarity characteristic and scale invariance, fractal extrapolation prediction can be performed by extending the fractal characteristic from the internal interval to the external interval. Simulation results show that the fractal interpolation prediction method achieves better prediction results than other methods for wind speed series with fractal characteristics, and the prediction performance of the proposed method can be improved further because the fractal characteristic of its iterated function system is similar to that of the predicted wind speed series
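
    The Hurst-exponent screening step mentioned above can be sketched with a standard rescaled-range (R/S) analysis; window sizes and the doubling scheme are illustrative choices.

      import numpy as np

      def hurst_rs(x, min_window=8):
          """Estimate the Hurst exponent by rescaled-range analysis."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          sizes, rs_means = [], []
          size = min_window
          while size <= n // 2:
              rs = []
              for start in range(0, n - size + 1, size):
                  chunk = x[start:start + size]
                  dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
                  r, s = dev.max() - dev.min(), chunk.std()
                  if s > 0:
                      rs.append(r / s)
              if rs:
                  sizes.append(size)
                  rs_means.append(np.mean(rs))
              size *= 2
          # Hurst exponent = slope of log(R/S) versus log(window size)
          return np.polyfit(np.log(sizes), np.log(rs_means), 1)[0]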

  19. Sampling Key Populations for HIV Surveillance: Results From Eight Cross-Sectional Studies Using Respondent-Driven Sampling and Venue-Based Snowball Sampling.

    Science.gov (United States)

    Rao, Amrita; Stahlman, Shauna; Hargreaves, James; Weir, Sharon; Edwards, Jessie; Rice, Brian; Kochelani, Duncan; Mavimbela, Mpumelelo; Baral, Stefan

    2017-10-20

    In using regularly collected or existing surveillance data to characterize engagement in human immunodeficiency virus (HIV) services among marginalized populations, differences in sampling methods may produce different pictures of the target population and may therefore result in different priorities for response. The objective of this study was to use existing data to evaluate the sample distribution of eight studies of female sex workers (FSW) and men who have sex with men (MSM), who were recruited using different sampling approaches in two locations within Sub-Saharan Africa: Manzini, Swaziland and Yaoundé, Cameroon. MSM and FSW participants were recruited using either respondent-driven sampling (RDS) or venue-based snowball sampling. Recruitment took place between 2011 and 2016. Participants at each study site were administered a face-to-face survey to assess sociodemographics, along with the prevalence of self-reported HIV status, frequency of HIV testing, stigma, and other HIV-related characteristics. Crude and RDS-adjusted prevalence estimates were calculated. Crude prevalence estimates from the venue-based snowball samples were compared with the overlap of the RDS-adjusted prevalence estimates, between both FSW and MSM in Cameroon and Swaziland. RDS samples tended to be younger (MSM aged 18-21 years in Swaziland: 47.6% [139/310] in RDS vs 24.3% [42/173] in Snowball, in Cameroon: 47.9% [99/306] in RDS vs 20.1% [52/259] in Snowball; FSW aged 18-21 years in Swaziland 42.5% [82/325] in RDS vs 8.0% [20/249] in Snowball; in Cameroon 15.6% [75/576] in RDS vs 8.1% [25/306] in Snowball). They were less educated (MSM: primary school completed or less in Swaziland 42.6% [109/310] in RDS vs 4.0% [7/173] in Snowball, in Cameroon 46.2% [138/306] in RDS vs 14.3% [37/259] in Snowball; FSW: primary school completed or less in Swaziland 86.6% [281/325] in RDS vs 23.9% [59/247] in Snowball, in Cameroon 87.4% [520/576] in RDS vs 77.5% [238/307] in Snowball) than the snowball

  20. Deducing magnetic resonance neuroimages based on knowledge from samples.

    Science.gov (United States)

    Jiang, Yuwei; Liu, Feng; Fan, Mingxia; Li, Xuzhou; Zhao, Zhiyong; Zeng, Zhaoling; Wang, Yi; Xu, Dongrong

    2017-12-01

    Because individual variance always exists, using the same set of predetermined parameters for magnetic resonance imaging (MRI) may not be exactly suitable for each participant. We propose a knowledge-based method that can repair MRI data of undesired contrast as if a new scan were acquired using imaging parameters that had been individually optimized. The method employs a strategy called analogical reasoning to deduce voxel-wise relaxation properties using morphological and biological similarity. The proposed framework involves steps of intensity normalization, tissue segmentation, relaxation time deducing, and image deducing. The approach has been preliminarily validated using conventional MRI data at 3T from several examples, including 5 normal and 9 clinical datasets. It can effectively improve the contrast of real MRI data by deducing imaging data using optimized imaging parameters based on deduced relaxation properties. The statistics of deduced images show a high correlation with real data that were actually collected using the same set of imaging parameters. The proposed method of deducing MRI data from knowledge of relaxation times provides an alternative way of repairing MRI data of less optimal contrast. The method is also capable of optimizing an MRI protocol for individual participants, thereby realizing personalized MR imaging. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented for performing reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, referred to as the 'failure probability function (FPF)'. The FPF is expressed as a weighted sum of sample values obtained in the simulation-based reliability analysis. The computational effort required for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. The proposed weighted approach is further combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology
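
    The weighting idea can be illustrated generically: one set of importance samples is reused to evaluate the failure probability at any candidate design by changing only the weights. The sketch below assumes a Gaussian design distribution whose mean is the design variable; it is a textbook importance-sampling estimator, not the authors' exact formulation, and all names are illustrative.

      import numpy as np

      def fpf_estimate(design_mu, samples, g, h_pdf, sigma=1.0):
          """Failure probability at design point design_mu by reweighting.

          samples : points drawn once from an importance density h
          g       : limit-state function; failure when g(x) < 0
          h_pdf   : importance density evaluated at the samples
          """
          def f_pdf(x, mu):                       # assumed Gaussian design density
              z = (x - mu) / sigma
              d = x.shape[1]
              return np.exp(-0.5 * np.sum(z * z, axis=1)) / (
                  (2.0 * np.pi * sigma ** 2) ** (d / 2.0))

          fails = (g(samples) < 0).astype(float)  # failure indicator
          weights = f_pdf(samples, design_mu) / h_pdf(samples)
          return np.mean(fails * weights)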

  2. Remote sensing-based time series models for malaria early warning in the highlands of Ethiopia

    Directory of Open Access Journals (Sweden)

    Midekisa Alemayehu

    2012-05-01

    Full Text Available Abstract Background Malaria is one of the leading public health problems in most of sub-Saharan Africa, particularly in Ethiopia. Almost all demographic groups are at risk of malaria because of seasonal and unstable transmission of the disease. Therefore, there is a need to develop malaria early-warning systems to enhance public health decision making for control and prevention of malaria epidemics. Data from orbiting earth-observing sensors can monitor environmental risk factors that trigger malaria epidemics. Remotely sensed environmental indicators were used to examine the influences of climatic and environmental variability on temporal patterns of malaria cases in the Amhara region of Ethiopia. Methods In this study seasonal autoregressive integrated moving average (SARIMA) models were used to quantify the relationship between malaria cases and remotely sensed environmental variables, including rainfall, land-surface temperature (LST), vegetation indices (NDVI and EVI), and actual evapotranspiration (ETa) with lags ranging from one to three months. Predictions from the best model with environmental variables were compared to the actual observations from the last 12 months of the time series. Results Malaria cases exhibited positive associations with LST at a lag of one month and positive associations with indicators of moisture (rainfall, EVI and ETa) at lags from one to three months. SARIMA models that included these environmental covariates had better fits and more accurate predictions, as evidenced by lower AIC and RMSE values, than models without environmental covariates. Conclusions Malaria risk indicators such as satellite-based rainfall estimates, LST, EVI, and ETa exhibited significant lagged associations with malaria cases in the Amhara region and improved model fit and prediction accuracy. These variables can be monitored frequently and extensively across large geographic areas using data from earth-observing sensors to support public
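
    A hedged sketch of the kind of seasonal model with lagged environmental covariates described above, using the SARIMAX implementation in statsmodels; the file name, column names, lag choices and (1,0,1)x(1,0,1,12) order are illustrative assumptions, not the study's fitted specification.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      # hypothetical monthly dataset with case counts and covariates
      df = pd.read_csv("amhara_monthly.csv", parse_dates=["month"],
                       index_col="month")

      exog = pd.DataFrame({
          "lst_lag1": df["lst"].shift(1),         # temperature, 1-month lag
          "rain_lag2": df["rainfall"].shift(2),   # moisture, 2-month lag
          "evi_lag2": df["evi"].shift(2),
      }).dropna()
      y = np.log1p(df.loc[exog.index, "cases"])   # stabilize variance

      train, test = y[:-12], y[-12:]              # hold out the last 12 months
      model = SARIMAX(train, exog=exog.loc[train.index],
                      order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
      pred = model.forecast(steps=12, exog=exog.loc[test.index])
      rmse = float(np.sqrt(np.mean((pred - test) ** 2)))
      print(model.aic, rmse)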

  3. Cost effective management of duodenal ulcers in Uganda: interventions based on a series of seven cases.

    Science.gov (United States)

    Nzarubara, Gabriel R

    2005-03-01

    Our understanding of the cause and treatment of peptic ulcer disease has changed dramatically over the last couple of decades. It was quite common some years ago to treat chronic ulcers surgically. These days, operative treatment is restricted to the small proportion of ulcer patients who have complications such as perforation. The author reports seven cases of perforated duodenal ulcers seen in a surgical clinic between 1995 and 2001. Recommendations on the criteria for selecting the appropriate surgical intervention for patients with perforated duodenal ulcer are given. The aim was to decide on the appropriate surgical interventions for patients with perforated duodenal ulcer. This is a case series of seven patients who presented with perforated duodenal ulcers without a history of peptic ulcer disease. The seven patients, who presented at a specialist surgical clinic in Kampala more than 72 hours after perforation, were analyzed, and appropriate management based on these patients is suggested. These patients had initially been treated in upcountry clinics for acute gastritis attributed either to alcohol consumption or to suspected food poisoning; there was no history of duodenal ulcer. As a result, they came to the specialist surgical clinic more than 72 hours after perforation. The diagnosis of perforated duodenal ulcer was made and they were operated on using the appropriate surgical intervention. Diagnoses of hangover and acute gastritis from alcohol consumption or suspected food poisoning should be treated with suspicion, because the symptoms and signs may mimic a perforated peptic ulcer in "silent" chronic ulcers. The final decision on the appropriate surgical intervention for patients with perforated duodenal ulcer stratifies them into two groups: previously fit patients, in whom the perforation imposes relatively mild physiological compromise on previously healthy organ systems, can withstand the operative stress of a definitive procedure. The second category includes patients who are

  4. Temporal trend of carpal tunnel release surgery: a population-based time series analysis.

    Directory of Open Access Journals (Sweden)

    Naif Fnais

    Full Text Available BACKGROUND: Carpal tunnel release (CTR) is among the most common hand surgeries, although little is known about its pattern. In this study, we aimed to investigate temporal trends, age and gender variation and current practice patterns in CTR surgeries. METHODS: We conducted a population-based time series analysis among over 13 million residents of Ontario who underwent operative management for carpal tunnel syndrome (CTS) from April 1, 1992 to March 31, 2010, using administrative claims data. RESULTS: The primary analysis revealed a fairly stable procedure rate of approximately 10 patients per 10,000 population per year receiving CTRs, without any significant, consistent temporal trend (p = 0.94). Secondary analyses revealed different trends in procedure rates according to age. The annual procedure rate among those aged >75 years increased from 22 per 10,000 population at the beginning of the study period to over 26 patients per 10,000 population (p<0.01) by the end of the study period. CTR surgical procedures were approximately two-fold more common among females relative to males (64.9% vs. 35.1%, respectively; p<0.01). Lastly, CTR procedures are increasingly being conducted in the outpatient setting while procedures in the inpatient setting have been declining steadily - the proportion of procedures performed in the outpatient setting increased from 13% to over 30% by 2010 (p<0.01). CONCLUSION: Overall, CTR surgical procedures are conducted at a rate of approximately 10 patients per 10,000 population annually, with significant variation with respect to age and gender. CTR surgical procedures in ambulatory-care facilities may soon outpace procedure rates in the in-hospital setting.

  5. Headache attributed to airplane travel ('airplane headache'): clinical profile based on a large case series.

    Science.gov (United States)

    Mainardi, F; Lisotto, C; Maggioni, F; Zanchin, G

    2012-06-01

    The 'headache attributed to airplane travel', also named 'airplane headache' (AH), is a recently described headache disorder that appears exclusively in relation to airplane flights, in particular during the landing phase. Based on the stereotypical nature of the attacks in all reported cases, we proposed provisional diagnostic criteria for AH in a previously published paper. Up to now 37 cases have been described in the literature. After our paper was disseminated via the Internet, we received several email messages from subjects around the world who had experienced such a peculiar headache. Their cooperation, by completing a structured questionnaire and allowing the direct observation of three subjects, enabled us to carry out a study on a total of 75 patients suffering from AH. Our survey confirmed the stereotypical nature of the attacks, in particular with regard to the short duration of the pain (lasting less than 30 minutes in up to 95% of the cases), the clear relationship with the landing phase, the unilateral pain, the male preponderance, and the absence of accompanying signs and/or symptoms. It is conceivable to consider barotrauma as one of the main mechanisms involved in the pathophysiology of AH. The observation that the pain appears inconstantly in the majority of cases, without any evident disorder affecting the paranasal sinuses, could be consistent with a multimodal pathogenesis underlying this condition, possibly resulting in the interaction between anatomic, environmental and temporary concurrent factors. This is by far the largest AH case series ever reported in the literature. The diagnostic criteria that we previously proposed proved to be valid when applied to a large number of patients suffering from this condition. We support its recognition as a new form of headache, to be included in the forthcoming update of the International Headache Society Classification, within '10. Headache attributed to disorder of homoeostasis'. Its formal

  6. Interactive Web-based Visualization of Atomic Position-time Series Data

    Science.gov (United States)

    Thapa, S.; Karki, B. B.

    2017-12-01

    Extracting and interpreting the information contained in large sets of time-varying three-dimensional positional data for the constituent atoms of a simulated material is a challenging task. We have recently implemented a web-based visualization system to analyze position-time series data extracted from local or remote hosts. It involves a pre-processing step for data reduction, which skips uninteresting parts of the data uniformly (at the full atomic configuration level) or non-uniformly (at the atomic species level or individual atom level). An atomic configuration snapshot is rendered using the ball-stick representation and can be animated by rendering successive configurations. The entire atomic dynamics can be captured as trajectories by rendering the atomic positions at all time steps together as points. The trajectories can be manipulated at both the species and atomic levels so that we can focus on one or more trajectories of interest, and they can also be superimposed with the instantaneous atomic structure. The implementation was done using WebGL and Three.js for graphical rendering, HTML5 and Javascript for the GUI, and Elasticsearch and JSON for data storage and retrieval within the Grails Framework. We have applied our visualization system to simulation datasets for proton-bearing forsterite (Mg2SiO4) - an abundant mineral of Earth's upper mantle. Visualization reveals that protons (hydrogen ions) incorporated as interstitials are much more mobile than protons substituting the host Mg and Si cation sites. The proton diffusion appears to be anisotropic, with high mobility along the x-direction and limited discrete jumps in the other two directions.

  7. Scleroderma prevalence: demographic variations in a population-based sample.

    Science.gov (United States)

    Bernatsky, S; Joseph, L; Pineau, C A; Belisle, P; Hudson, M; Clarke, A E

    2009-03-15

    To estimate the prevalence of systemic sclerosis (SSc) using population-based administrative data, and to assess the sensitivity of case ascertainment approaches. We ascertained SSc cases from Quebec physician billing and hospitalization databases (covering approximately 7.5 million individuals). Three case definition algorithms were compared, and statistical methods accounting for imperfect case ascertainment were used to estimate SSc prevalence and case ascertainment sensitivity. A hierarchical Bayesian latent class regression model that accounted for possible between-test dependence conditional on disease status estimated the effect of patient characteristics on SSc prevalence and the sensitivity of the 3 ascertainment algorithms. Accounting for error inherent in both the billing and the hospitalization data, we estimated SSc prevalence in 2003 at 74.4 cases per 100,000 women (95% credible interval [95% CrI] 69.3-79.7) and 13.3 cases per 100,000 men (95% CrI 11.1-16.1). Prevalence was higher for older individuals, particularly in urban women (161.2 cases per 100,000, 95% CrI 148.6-175.0). Prevalence was lowest in young men (in rural areas, as low as 2.8 cases per 100,000, 95% CrI 1.4-4.8). In general, no single algorithm was very sensitive, with point estimates for sensitivity ranging from 20-73%. We found marked differences in SSc prevalence according to age, sex, and region. In general, no single case ascertainment approach was very sensitive for SSc. Therefore, using data from multiple sources, with adjustment for the imperfect nature of each, is an important strategy in population-based studies of SSc and similar conditions.

  8. Design of a New Concentration Series for the Orthogonal Sample Design Approach and Estimation of the Number of Reactions in Chemical Systems.

    Science.gov (United States)

    Shi, Jiajia; Liu, Yuhai; Guo, Ran; Li, Xiaopei; He, Anqi; Gao, Yunlong; Wei, Yongju; Liu, Cuige; Zhao, Ying; Xu, Yizhuang; Noda, Isao; Wu, Jinguang

    2015-11-01

    A new concentration series is proposed for the construction of a two-dimensional (2D) synchronous spectrum for orthogonal sample design analysis to probe intermolecular interaction between solutes dissolved in the same solutions. The obtained 2D synchronous spectrum possesses the following two properties: (1) cross peaks in the 2D synchronous spectra can be used to reflect intermolecular interaction reliably, since interference portions that have nothing to do with intermolecular interaction are completely removed, and (2) the two-dimensional synchronous spectrum produced can effectively avoid accidental collinearity. Hence, the correct number of nonzero eigenvalues can be obtained so that the number of chemical reactions can be estimated. In a real chemical system, noise present in one-dimensional spectra may also produce nonzero eigenvalues. To get the correct number of chemical reactions, we classified nonzero eigenvalues into significant and insignificant ones. Significant nonzero eigenvalues can be identified by inspecting the pattern of the corresponding eigenvector with the help of the Durbin-Watson statistic. As a result, the correct number of chemical reactions can be obtained from significant nonzero eigenvalues. This approach provides a solid basis for gaining insight into subtle spectral variations caused by intermolecular interaction.
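
    A minimal numerical sketch of the eigenvalue-screening step, assuming the 2D synchronous spectrum is available as a symmetric matrix; noise-like eigenvectors give a Durbin-Watson statistic near 2 while smooth signal-bearing ones give much smaller values, and the cutoff used here is illustrative, not the paper's.

      import numpy as np

      def durbin_watson(v):
          """DW statistic of a sequence: ~2 for noise, small for smooth patterns."""
          dv = np.diff(v)
          return np.sum(dv * dv) / np.sum(v * v)

      def significant_eigenvalues(sync_matrix, dw_cut=1.0):
          """Keep eigenvalues whose eigenvectors look like signal, not noise."""
          w, v = np.linalg.eigh(sync_matrix)       # symmetric spectrum matrix
          keep = [i for i in np.argsort(w)[::-1]
                  if w[i] > 1e-10 and durbin_watson(v[:, i]) < dw_cut]
          return w[keep]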

  9. [Extracting THz absorption coefficient spectrum based on accurate determination of sample thickness].

    Science.gov (United States)

    Li, Zhi; Zhang, Zhao-hui; Zhao, Xiao-yan; Su, Hai-xia; Yan, Fang

    2012-04-01

    Extracting the absorption spectrum in the THz band is one of the important aspects of THz applications. A sample's absorption coefficient has a complex nonlinear relationship with its thickness; because the thickness is not convenient to measure directly, the absorption spectrum is often determined incorrectly. Based on the method proposed by Duvillaret, which was used to precisely determine the thickness of LiNbO3, the approach to measuring the absorption coefficient spectra of glutamine and histidine in the frequency range from 0.3 to 2.6 THz (1 THz = 10¹² Hz) was improved in this paper. In order to validate the correctness of this absorption spectrum, we designed a series of experiments to compare the linearity of the absorption coefficient of one kind of amino acid at different concentrations. The results indicate that, in agreement with the Lambert-Beer law, the absorption coefficient spectrum of the amino acid obtained from the improved algorithm shows better linearity with concentration than that from the common algorithm, which can serve as the basis of quantitative analysis in further research.
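
    The validation logic amounts to a linearity check against the Lambert-Beer law: absorption coefficients extracted at several concentrations should fall on a straight line. The numbers below are illustrative placeholders, not measured values.

      import numpy as np

      conc = np.array([0.05, 0.10, 0.20, 0.40])    # illustrative concentrations
      alpha = np.array([3.1, 6.3, 12.2, 24.8])     # extracted coefficients, cm^-1

      slope, intercept = np.polyfit(conc, alpha, 1)
      pred = slope * conc + intercept
      ss_res = np.sum((alpha - pred) ** 2)
      ss_tot = np.sum((alpha - alpha.mean()) ** 2)
      r_squared = 1.0 - ss_res / ss_tot            # closer to 1 = better linearity
      print(f"R^2 = {r_squared:.4f}")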

  10. Effects of CLIL on EAP Learners: Based on Sample Analysis of Doctoral Students of Science

    Directory of Open Access Journals (Sweden)

    Guizhen Gao

    2015-09-01

    Full Text Available In Europe, most studies of Content and Language Integrated Learning (CLIL) focus on language knowledge and language skills, and most are carried out in primary and secondary schools. As for the implementation of CLIL in China, most studies are theoretical and are carried out among undergraduates; CLIL is mainly applied in the teaching and learning of general English rather than of English for Academic Purposes (EAP). In order to gain a better understanding of the effect of CLIL on EAP learners, a sample analysis was undertaken among doctoral students of science. Two kinds of instruments were adopted in this paper to conduct both quantitative and qualitative study: two questionnaires and a series of classroom observations. The study obtained the following findings. Firstly, as CLIL is effective due to its dual focus, it is possible to implement CLIL in EAP teaching and learning. Secondly, class activities such as group work, pair work and class presentations, as well as task-based course activities such as translation, paper writing, paper analysis and rewriting practice, play an important role in motivating the participants to integrate discipline content and language. Besides, the four factors of CLIL (content, communication, culture and cognition) are regarded as highly important by learners. Finally, the increasing ability to integrate content and language, as well as the thinking patterns and cultural awareness in EAP writing, greatly contributes to the participants' further academic research.

  11. Impact of an Innovative Classroom-Based Lecture Series on Residents’ Evaluations of an Anesthesiology Rotation

    Directory of Open Access Journals (Sweden)

    Pedro Tanaka

    2016-01-01

    Full Text Available Introduction. Millennial resident learners may benefit from innovative instructional methods. The goal of this study is to assess the impact on resident post-rotation evaluation scores of a new lecture series in which a faculty member gave a daily 15-minute lecture on one anesthesia keyword each weekday. Methods. A quasi-experimental study design was implemented: residents' rotation evaluations for the 24-month period ending 7/30/2013, before the new lecture series was implemented, were compared with those for the 14-month period after the lecture series began on 8/1/2013. The primary endpoint was "overall teaching quality of this rotation." We also collected survey data from residents at clinical rotations at two other institutions during the same two evaluation periods that did not have the education intervention. Results. One hundred and thirty-one residents were eligible to participate in the study. Completed surveys ranged from 77 to 87% for the eight-question evaluation instrument. On a 5-point Likert-type scale, the mean score on "overall teaching quality of this rotation" increased significantly from 3.9 (SD 0.8) to 4.2 (SD 0.7) after addition of the lecture series, whereas the scores decreased slightly at the comparison sites. Conclusion. Rotation evaluation scores for overall teaching quality improved with implementation of the new structured daily lecture series.

  12. Optimal coordination of distance and over-current relays in series compensated systems based on MAPSO

    International Nuclear Information System (INIS)

    Moravej, Zahra; Jazaeri, Mostafa; Gholamzadeh, Mehdi

    2012-01-01

    Highlight: ► Optimal coordination problem between distance relays and Directional Over-Current Relays (DOCRs) is studied. ► A new problem formulation for both uncompensated and series compensated system is proposed. ► In order to solve the coordination problem a Modified Adaptive Particle Swarm Optimization (MAPSO) is employed. ► The optimum results are found in both uncompensated and series compensated systems. - Abstract: In this paper, a novel problem formulation for optimal coordination between distance relays and Directional Over-Current Relays (DOCRs) in series compensated systems is proposed. The integration of the series capacitor (SC) into the transmission line makes the coordination problem more complex. The main contribution of this paper is a new systematic method for computing the optimal second zone timing of distance relays and optimal settings of DOCRs, in series compensated and uncompensated transmission systems, which have a combined protection scheme with DOCRs and distance relays. In order to solve this coordination problem, which is a nonlinear and non-convex problem, a Modified Adaptive Particle Swarm Optimization (MAPSO) is employed. The new proposed method is supported by obtained results from a typical test case and a real power system network.

  13. Testing for Stationarity and Nonlinearity of Daily Streamflow Time Series Based on Different Statistical Tests (Case Study: Upstream Basin Rivers of Zarrineh Roud Dam)

    Directory of Open Access Journals (Sweden)

    Farshad Fathian

    2017-02-01

    Full Text Available Introduction: Time series models are among the most important tools for investigating and modeling hydrological processes in order to solve problems related to water resources management. Many hydrological time series show nonstationary and nonlinear behavior. One important hydrological modeling task is determining whether nonstationarity exists and how stationarity can be reached accordingly. On the other hand, streamflow processes are usually considered nonlinear mechanisms, while in many studies linear time series models are used to model streamflow time series. However, it is not clear what kind of nonlinearity underlies the streamflow processes and how intensive it is. Materials and Methods: Streamflow time series of 6 hydro-gauge stations located in the upstream basin rivers of the ZarrinehRoud dam (located in the southern part of the Urmia Lake basin) have been considered to investigate stationarity and nonlinearity. All data series used here start on January 1, 1997, and end on December 31, 2011. In this study, stationarity is tested by the ADF and KPSS tests and nonlinearity is tested by the BDS, Keenan and TLRT tests. The stationarity test is carried out with two methods. The first is the augmented Dickey-Fuller (ADF) unit root test, first proposed by Dickey and Fuller (1979) and modified by Said and Dickey (1984), which examines the presence of unit roots in time series. The second is the KPSS test, proposed by Kwiatkowski et al. (1992), which examines the stationarity around a deterministic trend (trend stationarity) and the stationarity around a fixed level (level stationarity). The BDS test (Brock et al., 1996) is a nonparametric method for testing the serial independence and nonlinear structure in time series based on the correlation integral of the series. The null hypothesis is that the time series sample comes from an independent identically distributed (i.i.d.) process. The alternative hypothesis
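
    A minimal sketch of the complementary stationarity checks named above, using the ADF and KPSS implementations in statsmodels: ADF tests a unit-root null while KPSS tests a stationarity null, so a series is most convincingly stationary when ADF rejects and KPSS does not. The synthetic series below merely stands in for daily streamflow data.

      import numpy as np
      from statsmodels.tsa.stattools import adfuller, kpss

      rng = np.random.default_rng(0)
      flow = 0.01 * rng.normal(size=1000).cumsum() + rng.normal(size=1000)

      adf_stat, adf_p = adfuller(flow)[:2]
      kpss_stat, kpss_p = kpss(flow, regression="c", nlags="auto")[:2]

      print(f"ADF  p = {adf_p:.3f}  (small p => reject unit root)")
      print(f"KPSS p = {kpss_p:.3f}  (small p => reject level stationarity)")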

  14. Principle and realization of segmenting contour series algorithm in reverse engineering based on X-ray computerized tomography

    International Nuclear Information System (INIS)

    Wang Yanfang; Liu Li; Yan Yonglian; Shan Baoci; Tang Xiaowei

    2007-01-01

    A new algorithm for segmenting contour series of images is presented, which enables three-dimensional reconstruction with parametric recognition in reverse engineering based on X-ray CT. First, in order to obtain the nesting relationship between contours, a method based on casting a ray at a certain angle is used. Second, to determine the contour locations within one slice, another approach is presented that generates the contour tree by scanning the relevant vector only once. Last, a judging algorithm is put forward to accomplish contour matching between slices by adopting qualitative and quantitative properties. An example shows that this algorithm can segment contour series of CT parts rapidly and precisely. (authors)

  15. Study of the relationship between chemical structure and antimicrobial activity in a series of hydrazine-based coordination compounds.

    Science.gov (United States)

    Dobrova, B N; Dimoglo, A S; Chumakov, Y M

    2000-08-01

    The dependence of antimicrobial activity on the structure of compounds is studied in a series of compounds based on hydrazine coordinated with ions of Cu(II), Ni(II) and Pd(II). The study has been carried out by means of the original electron-topological method developed earlier. A molecular fragment has been found that is only characteristic of biologically active compounds. Its spatial and electron parameters have been used for the quantitative assessment of the activity in view. The results obtained can be used for the antimicrobial activity prediction in a series of compounds with similar structures.

  16. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that, the precision continued to increase, but the gain was smaller for the same additional number of GPs. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
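
    The reported trade-off can be reproduced in rough outline by simulation: the CI half-width for mean weekly working hours shrinks with both the number of GPs and the number of prompts per GP. All distributional assumptions and parameter values below are illustrative, not those of the study.

      import numpy as np

      rng = np.random.default_rng(1)

      def ci_halfwidth(n_gps, n_prompts, frac=0.45, sd_gp=0.08,
                       n_sim=2000, hours_per_week=168):
          """Mean 95% CI half-width for estimated weekly hours."""
          widths = np.empty(n_sim)
          for s in range(n_sim):
              p = np.clip(rng.normal(frac, sd_gp, n_gps), 0, 1)
              hits = rng.binomial(n_prompts, p)        # "working" answers per GP
              est = hits / n_prompts * hours_per_week
              widths[s] = 1.96 * est.std(ddof=1) / np.sqrt(n_gps)
          return widths.mean()

      for n_gps, n_prompts in [(100, 56), (300, 56), (100, 168)]:
          print(n_gps, n_prompts, round(ci_halfwidth(n_gps, n_prompts), 2))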

  17. The generalization ability of online SVM classification based on Markov sampling.

    Science.gov (United States)

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present the numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repository. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of training samples is larger.
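
    An illustrative analogy only, not the paper's algorithm: scikit-learn's SGDClassifier with hinge loss behaves as an online SVM-style learner, and a random walk over the sample indices stands in for drawing training points from a uniformly ergodic Markov chain rather than i.i.d. at random.

      import numpy as np
      from sklearn.linear_model import SGDClassifier

      rng = np.random.default_rng(2)
      X = rng.normal(size=(2000, 5))
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy linearly separable task

      clf = SGDClassifier(loss="hinge", alpha=1e-4)
      clf.partial_fit(X[:1], y[:1], classes=np.array([0, 1]))

      idx = 0
      for step in range(5000):
          idx = (idx + int(rng.integers(-25, 26))) % len(X)  # Markov transition
          clf.partial_fit(X[idx:idx + 1], y[idx:idx + 1])

      print("training accuracy:", clf.score(X, y))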

  18. UniFIeD Univariate Frequency-based Imputation for Time Series Data

    OpenAIRE

    Friese, Martina; Stork, Jörg; Ramos Guerra, Ricardo; Bartz-Beielstein, Thomas; Thaker, Soham; Flasch, Oliver; Zaefferer, Martin

    2013-01-01

    This paper introduces UniFIeD, a new data preprocessing method for time series. UniFIeD can cope with large intervals of missing data. A scalable test function generator, which allows the simulation of time series with different gap sizes, is presented additionally. An experimental study demonstrates that (i) UniFIeD shows a significant better performance than simple imputation methods and (ii) UniFIeD is able to handle situations, where advanced imputation methods fail. The results are indep...

  19. Wave scattering theory: a series approach based on the Fourier transformation

    CERN Document Server

    Eom, Hyo J

    2001-01-01

    The book provides a unified technique of Fourier transform to solve the wave scattering, diffraction, penetration, and radiation problems where the technique of separation of variables is applicable. The book discusses wave scattering from waveguide discontinuities, various apertures, and coupling structures, often encountered in electromagnetic, electrostatic, magnetostatic, and acoustic problems. A system of simultaneous equations for the modal coefficients is formulated and rapidly-convergent series solutions amenable to numerical computation are presented. The series solutions find practical applications in the design of microwave/acoustic transmission lines, waveguide filters, antennas, and electromagnetic interference/compatibility-related problems.

  20. Periodic fluctuations in correlation-based connectivity density time series: Application to wind speed-monitoring network in Switzerland

    Science.gov (United States)

    Laib, Mohamed; Telesca, Luciano; Kanevski, Mikhail

    2018-02-01

    In this paper, we study the periodic fluctuations of connectivity density time series of a wind speed-monitoring network in Switzerland. By using the correlogram-based robust periodogram, annual periodic oscillations were found in the correlation-based network. The intensity of these annual periodic oscillations is larger for lower correlation thresholds and smaller for higher ones. The annual periodicity in the connectivity density appears reasonably consistent with the seasonal meteo-climatic cycle.

  1. Whole arm manipulation planning based on feedback velocity fields and sampling-based techniques.

    Science.gov (United States)

    Talaei, B; Abdollahi, F; Talebi, H A; Omidi Karkani, E

    2013-09-01

    Changing the configuration of a cooperative whole arm manipulator is not easy while enclosing an object. This difficulty is mainly because of risk of jamming caused by kinematic constraints. To reduce this risk, this paper proposes a feedback manipulation planning algorithm that takes grasp kinematics into account. The idea is based on a vector field that imposes perturbation in object motion inducing directions when the movement is considerably along manipulator redundant directions. Obstacle avoidance problem is then considered by combining the algorithm with sampling-based techniques. As experimental results confirm, the proposed algorithm is effective in avoiding jamming as well as obstacles for a 6-DOF dual arm whole arm manipulator. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Short-Term Bus Passenger Demand Prediction Based on Time Series Model and Interactive Multiple Model Approach

    Directory of Open Access Journals (Sweden)

    Rui Xue

    2015-01-01

    Full Text Available Although bus passenger demand prediction has attracted increased attention during recent years, limited research has been conducted in the context of short-term passenger demand forecasting. This paper proposes an interactive multiple model (IMM) filter algorithm-based model to predict short-term passenger demand. After being aggregated into 15-min intervals, passenger demand data collected from a busy bus route over four months were used to generate time series. Considering that passenger demand exhibits different characteristics at different time scales, three time series were developed, namely weekly, daily, and 15-min time series. After correlation, periodicity, and stationarity analyses, time series models were constructed. In particular, the heteroscedasticity of the time series was explored to achieve better prediction performance. Finally, the IMM filter algorithm was applied to combine the individual forecasting models and dynamically predict passenger demand for the next interval. Different error indices were adopted for the analyses of the individual and hybrid models. The performance comparison indicates that the hybrid model forecasts are superior to individual ones in accuracy. The findings of this study are of theoretical and practical significance for bus scheduling.
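
    The combination step can be sketched as a likelihood-weighted mixture (the mixing flavor of an IMM filter, not the full algorithm): each candidate model's weight is updated every interval from its recent forecast error. The Gaussian noise scale is an assumed constant.

      import numpy as np

      def imm_style_mix(demand, preds, sigma=5.0):
          """demand: observed series; preds: (n_models, n_steps) forecasts."""
          n_models, n_steps = preds.shape
          w = np.full(n_models, 1.0 / n_models)
          combined = np.empty(n_steps)
          for t in range(n_steps):
              combined[t] = w @ preds[:, t]             # weighted forecast
              err = demand[t] - preds[:, t]
              like = np.exp(-0.5 * (err / sigma) ** 2)  # Gaussian likelihoods
              w = w * like
              w = w / w.sum() if w.sum() > 0 else np.full(n_models, 1.0 / n_models)
          return combined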

  3. Modeling the impact of forecast-based regime switches on macroeconomic time series

    NARCIS (Netherlands)

    K. Bel (Koen); R. Paap (Richard)

    2013-01-01

    textabstractForecasts of key macroeconomic variables may lead to policy changes of governments, central banks and other economic agents. Policy changes in turn lead to structural changes in macroeconomic time series models. To describe this phenomenon we introduce a logistic smooth transition

  4. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    Directory of Open Access Journals (Sweden)

    Ching-Hsue Cheng

    2018-01-01

    Full Text Available The issue of financial distress prediction plays an important and challenging research topic in the financial field. Currently, there have been many methods for predicting firm bankruptcy and financial crisis, including the artificial intelligence and the traditional statistical methods, and the past studies have shown that the prediction result of the artificial intelligence method is better than the traditional statistical method. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed the nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages including the following: (i) the proposed model is different from the previous models lacking the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress for providing references to the investors and decision makers. The result shows that the proposed method is better than the listing classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.

  5. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    Science.gov (United States)

    2018-01-01

    The issue of financial distress prediction plays an important and challenging research topic in the financial field. Currently, there have been many methods for predicting firm bankruptcy and financial crisis, including the artificial intelligence and the traditional statistical methods, and the past studies have shown that the prediction result of the artificial intelligence method is better than the traditional statistical method. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed the nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages including the following: (i) the proposed model is different from the previous models lacking the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress for providing references to the investors and decision makers. The result shows that the proposed method is better than the listing classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies. PMID:29765399

  6. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress.

    Science.gov (United States)

    Cheng, Ching-Hsue; Chan, Chia-Pang; Yang, Jun-He

    2018-01-01

    The issue of financial distress prediction plays an important and challenging research topic in the financial field. Currently, there have been many methods for predicting firm bankruptcy and financial crisis, including the artificial intelligence and the traditional statistical methods, and the past studies have shown that the prediction result of the artificial intelligence method is better than the traditional statistical method. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed the nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages including the following: (i) the proposed model is different from the previous models lacking the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress for providing references to the investors and decision makers. The result shows that the proposed method is better than the listing classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.

  7. Neural network modeling of nonlinear systems based on Volterra series extension of a linear model

    Science.gov (United States)

    Soloway, Donald I.; Bialasiewicz, Jan T.

    1992-01-01

    A Volterra series approach was applied to the identification of nonlinear systems which are described by a neural network model. A procedure is outlined by which a mathematical model can be developed from experimental data obtained from the network structure. Applications of the results to the control of robotic systems are discussed.
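
    A minimal sketch of the kind of expansion involved: a truncated second-order Volterra model fitted by least squares, with an illustrative memory length and toy system (the paper identifies such kernels through a neural network model rather than directly).

      import numpy as np

      def volterra_features(x, M=3):
          """Design matrix for y[n] = h0 + sum h1[i] x[n-i] + sum h2[i,j] x[n-i] x[n-j]."""
          n = len(x)
          rows = []
          for t in range(M - 1, n):
              lin = x[t - M + 1:t + 1][::-1]            # x[n], ..., x[n-M+1]
              quad = np.outer(lin, lin)[np.triu_indices(M)]
              rows.append(np.concatenate(([1.0], lin, quad)))
          return np.array(rows)

      rng = np.random.default_rng(3)
      x = rng.normal(size=500)
      y = 0.8 * x + 0.3 * np.roll(x, 1) ** 2            # toy nonlinear system
      Phi = volterra_features(x, M=3)
      coef, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
      print(coef[:4])                                   # constant + linear kernel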

  8. Decoupling of modeling and measuring interval in groundwater time series analysis based on response characteristics

    NARCIS (Netherlands)

    Berendrecht, W.L.; Heemink, A.W.; Geer, F.C. van; Gehrels, J.C.

    2003-01-01

    A state-space representation of the transfer function-noise (TFN) model allows the choice of a modeling (input) interval that is smaller than the measuring interval of the output variable. Since in geohydrological applications the interval of the available input series (precipitation excess) is

  9. Phenology-based Spartina alterniflora mapping in coastal wetland of the Yangtze Estuary using time series of GaoFen satellite no. 1 wide field of view imagery

    Science.gov (United States)

    Ai, Jinquan; Gao, Wei; Gao, Zhiqiang; Shi, Runhe; Zhang, Chao

    2017-04-01

    Spartina alterniflora is an aggressive invasive plant species that replaces native species, changes the structure and function of the ecosystem across coastal wetlands in China, and is thus a major conservation concern. Mapping the spread of its invasion is a necessary first step for the implementation of effective ecological management strategies. The performance of a phenology-based approach for S. alterniflora mapping is explored in the coastal wetland of the Yangtze Estuary using a time series of GaoFen satellite no. 1 wide field of view camera (GF-1 WFV) imagery. First, a time series of the normalized difference vegetation index (NDVI) was constructed to evaluate the phenology of S. alterniflora. Two phenological stages (the senescence stage from November to mid-December and the green-up stage from late April to May) were determined as important for S. alterniflora detection in the study area based on NDVI temporal profiles, spectral reflectance curves of S. alterniflora and its coexistent species, and field surveys. Three phenology feature sets representing three major phenology-based detection strategies were then compared to map S. alterniflora: (1) the single-date imagery acquired within the optimal phenological window, (2) the multitemporal imagery, including four images from the two important phenological windows, and (3) the monthly NDVI time series imagery. Support vector machines and maximum likelihood classifiers were applied on each phenology feature set at different training sample sizes. For all phenology feature sets, the overall results were produced consistently with high mapping accuracies under sufficient training samples sizes, although significantly improved classification accuracies (10%) were obtained when the monthly NDVI time series imagery was employed. The optimal single-date imagery had the lowest accuracies of all detection strategies. The multitemporal analysis demonstrated little reduction in the overall accuracy compared with the
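
    A small sketch of the phenology feature construction implied above, assuming co-registered monthly red and near-infrared reflectance stacks are already in memory as arrays; shapes and the epsilon guard are illustrative.

      import numpy as np

      def ndvi(red, nir):
          """Normalized difference vegetation index, guarded against zero division."""
          return (nir - red) / (nir + red + 1e-9)

      def monthly_ndvi_features(red_stack, nir_stack):
          """(12, H, W) reflectance stacks -> (H*W, 12) per-pixel NDVI series."""
          series = ndvi(red_stack, nir_stack)           # (12, H, W)
          t, h, w = series.shape
          return series.reshape(t, h * w).T             # one 12-month row per pixel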

  10. A Data-Driven Modeling Strategy for Smart Grid Power Quality Coupling Assessment Based on Time Series Pattern Matching

    Directory of Open Access Journals (Sweden)

    Hao Yu

    2018-01-01

    Full Text Available This study introduces a data-driven modeling strategy for smart grid power quality (PQ) coupling assessment based on time series pattern matching to quantify the influence of single and integrated disturbances among nodes in different pollution patterns. Periodic and random PQ patterns are constructed by using multidimensional frequency-domain decomposition for all disturbances. A multidimensional piecewise linear representation based on local extreme points is proposed to extract the pattern features of single and integrated disturbances in consideration of disturbance variation trend and severity. A feature distance of pattern (FDP) is developed to implement pattern matching on univariate PQ time series (UPQTS) and multivariate PQ time series (MPQTS) to quantify the influence of single and integrated disturbances among nodes in the pollution patterns. Case studies on a 14-bus distribution system are performed and analyzed; the accuracy and applicability of the FDP in smart grid PQ coupling assessment are verified by comparison with other time series pattern matching methods.

  11. CVD-graphene for low equivalent series resistance in rGO/CVD-graphene/Ni-based supercapacitors

    Science.gov (United States)

    Kwon, Young Hwi; Kumar, Sunil; Bae, Joonho; Seo, Yongho

    2018-05-01

    Reduced equivalent series resistance (ESR) is necessary, particularly at high current density, for high-performance supercapacitors, and the interface resistance between the current collector and the electrode material is one of the main components of ESR. In this report, we have optimized chemical vapor deposition-grown graphene (CVD-G) on a current collector (Ni foil), using reduced graphene oxide as the active electrode material, to fabricate an electric double layer capacitor with reduced ESR. The CVD-G was grown at different cooling rates (20 °C min⁻¹, 40 °C min⁻¹ and 100 °C min⁻¹) to determine the optimum conditions. The lowest ESR, 0.38 Ω, was obtained for a cell with a 100 °C min⁻¹ cooling rate, while the sample without a CVD-G interlayer exhibited 0.80 Ω. The CVD-G interlayer-based supercapacitors exhibited fast charge-discharge (CD) characteristics at high scan rates up to 10 V s⁻¹ due to the low ESR. The specific capacitances of the cells deposited with CVD-G were in the range of 145.6 F g⁻¹ to 213.8 F g⁻¹ at a voltage scan rate of 0.05 V s⁻¹. A quasi-rectangular behavior was observed in the cyclic voltammetry curves, even at very high scan rates of 50 and 100 V s⁻¹, for the cell with CVD-G optimized at the higher cooling rate, i.e. 100 °C min⁻¹.

  12. TIME SERIES CHARACTERISTIC ANALYSIS OF RAINFALL, LAND USE AND FLOOD DISCHARGE BASED ON ARIMA BOX-JENKINS MODEL

    Directory of Open Access Journals (Sweden)

    Abror Abror

    2014-01-01

    Full Text Available Indonesia, located in the tropics, has a wet season and a dry season. In the last few years, however, river discharge in the dry season has been very low while, by contrast, the frequency of floods in the wet season has increased, with sharper peaks and greater water elevations. The increased flood discharge may be due to changes in land use or to changes in rainfall characteristics, and both possibilities need clarification. Therefore, research was carried out to analyze rainfall characteristics, land use and flood discharge in several watershed areas (DAS) quantitatively from time series data. The research was conducted in DAS Gintung in Parakankidang, DAS Gung in Danawarih, DAS Rambut in Cipero, DAS Kemiri in Sidapurna and DAS Comal in Nambo, located in Tegal Regency and Pemalang Regency in Central Java Province. The research activity consisted of three main steps: input, DAS system and output. Input comprises DAS determination and selection and the search for secondary data. The DAS system step is the initial processing of the secondary data, consisting of rainfall analysis, HSS GAMA I parameters, land type analysis and DAS land use. Output is the final processing step, consisting of calculation of Tadashi Tanimoto and USSCS effective rainfall, flood discharge, ARIMA analysis, result analysis and conclusions. The analytical calculation of the ARIMA Box-Jenkins time series used the software Number Cruncher Statistical Systems and Power Analysis Sample Size (NCSS-PASS) version 2000, which yields time series characteristics in the form of time series patterns, mean square error (MSE), root mean square (RMS), autocorrelation of residuals and trend. The results of this research indicate that composite CN and flood discharge are proportional: when the composite CN trend increases, the flood discharge trend also increases, and vice versa. Meanwhile, a decreasing rainfall trend is not always followed by a decreasing flood discharge trend. The main cause of flood discharge characteristics is the DAS management characteristic, not change in

  13. Effectiveness of mindfulness-based cognitive therapy in patients with bipolar affective disorder: A case series

    Directory of Open Access Journals (Sweden)

    Suvarna Shirish Joshi

    2018-01-01

    Full Text Available The present investigation was undertaken to examine the effects of mindfulness-based cognitive therapy (MBCT) on interepisodic symptoms, emotional regulation, and quality of life in patients with bipolar affective disorder (BPAD) in remission. The sample for the study comprised a total of five patients with the diagnosis of BPAD in partial or complete remission. Each patient was screened to fit the inclusion and exclusion criteria and later assessed on the Beck Depressive Inventory I, Beck Anxiety Inventory, Difficulties in Emotion Regulation Scale, Acceptance and Action Questionnaire-II, and The World Health Organization Quality of Life Assessment-BREF. Following preassessments, patients underwent 8–10 weeks of MBCT. A single case design with pre- and post-intervention assessment was adopted to evaluate the changes. Improvement was observed in all five cases on the outcome variables. The details of the results are discussed in the context of the available literature. Implications, limitations, and ideas for future investigations are also discussed.

  14. True random bit generators based on current time series of contact glow discharge electrolysis

    Science.gov (United States)

    Rojas, Andrea Espinel; Allagui, Anis; Elwakil, Ahmed S.; Alawadhi, Hussain

    2018-05-01

    Random bit generators (RBGs) in today's digital information and communication systems employ high-rate physical entropy sources such as electronic, photonic, or thermal time-series signals. However, the proper functioning of such physical systems is bound by specific constraints that make them in some cases weak and susceptible to external attacks. In this study, we show that the electrical current time series of contact glow discharge electrolysis, a dc voltage-powered micro-plasma in liquids, can be used for generating random bit sequences over a wide range of high dc voltages. The current signal is quantized into a binary stream by first applying a simple moving average function, which centers the distribution around zero, and then applying logical operations that enable the binarized data to pass all tests in the industry-standard randomness test suite of the National Institute of Standards and Technology. Furthermore, the robustness of this RBG against power supply attacks has been examined and verified.
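
    As a rough illustration of the quantization pipeline described above (not the authors' exact procedure), the sketch below centers a hypothetical current signal with a simple moving average, takes the sign of the residual as a raw bit, and applies a decimating XOR as a stand-in for the unspecified logical operations.

    ```python
    # Sketch: current time series -> centered residual -> sign bits -> XOR whitening.
    import numpy as np

    rng = np.random.default_rng(42)
    current = 5.0 + np.cumsum(rng.normal(0, 0.1, 100_000))  # hypothetical signal

    window = 51
    kernel = np.ones(window) / window
    centered = current - np.convolve(current, kernel, mode="same")  # zero-centered

    raw_bits = (centered > 0).astype(np.uint8)       # sign quantization
    bits = raw_bits[0::2] ^ raw_bits[1::2]           # simple decimating XOR

    print("fraction of ones:", bits.mean())          # should be close to 0.5
    ```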

  15. Wavelet based correlation coefficient of time series of Saudi Meteorological Data

    International Nuclear Information System (INIS)

    Rehman, S.; Siddiqi, A.H.

    2009-01-01

    In this paper, wavelet concepts are used to study the correlation between pairs of time series of meteorological parameters such as pressure, temperature, rainfall, relative humidity and wind speed. The study utilized the daily average values of meteorological parameters from nine meteorological stations of Saudi Arabia located at different strategic locations. The data used in this study cover a period of 16 years, between 1990 and 2005. Besides obtaining wavelet spectra, we also computed the wavelet correlation coefficients between the same parameter at two different locations and show that strong correlation or strong anti-correlation depends on scale. The cross-correlation coefficients of meteorological parameters between two stations were also calculated using a statistical function. For coastal-to-coastal pairs of stations, the pressure time series were found to be strongly correlated. In general, the temperature data were found to be strongly correlated for all pairs of stations, and the rainfall data the least correlated.
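
    A minimal sketch of a scale-wise wavelet correlation between two stations, using PyWavelets; the wavelet family, decomposition level and synthetic temperature series are assumptions for illustration.

    ```python
    # Sketch: decompose two series with the DWT and correlate the detail
    # coefficients scale by scale, showing that correlation depends on scale.
    import numpy as np
    import pywt

    rng = np.random.default_rng(1)
    t = np.arange(4096)
    station_a = np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.3, t.size)
    station_b = np.sin(2 * np.pi * t / 365 + 0.2) + rng.normal(0, 0.3, t.size)

    coeffs_a = pywt.wavedec(station_a, "db4", level=6)
    coeffs_b = pywt.wavedec(station_b, "db4", level=6)

    # coeffs[0] is the approximation; coeffs[1:] are details, coarse to fine.
    for level, (ca, cb) in enumerate(zip(coeffs_a[1:], coeffs_b[1:]), start=1):
        r = np.corrcoef(ca, cb)[0, 1]
        print(f"detail level {level}: correlation = {r:+.2f}")
    ```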

  16. Analysis of area-wide management of insect pests based on sampling

    Science.gov (United States)

    David W. Onstad; Mark S. Sisterson

    2011-01-01

    The control of invasive species greatly depends on area-wide pest management (AWPM) in heterogeneous landscapes. Decisions about when and where to treat a population with pesticide are based on sampling pest abundance. One of the challenges of AWPM is sampling large areas with limited funds to cover the cost of sampling. Additionally, AWPM programs are often confronted...

  17. The influence of noise on nonlinear time series detection based on Volterra-Wiener-Korenberg model

    Energy Technology Data Exchange (ETDEWEB)

    Lei Min [State Key Laboratory of Vibration, Shock and Noise, Shanghai Jiao Tong University, Shanghai 200030 (China)], E-mail: leimin@sjtu.edu.cn; Meng Guang [State Key Laboratory of Vibration, Shock and Noise, Shanghai Jiao Tong University, Shanghai 200030 (China)

    2008-04-15

    This paper studies the influence of noise on the Volterra-Wiener-Korenberg (VWK) nonlinear test model. Our numerical results reveal that different types of noise lead to different behavior of VWK model detection. With dynamic noise, it is difficult to distinguish chaos from nonchaotic but nonlinear determinism. For time series, measurement noise has no impact on the detection of chaotic determinism. This paper also discusses the behavior of VWK model detection with surrogate data for different noises.

  18. Modeling Financial Time Series Based on a Market Microstructure Model with Leverage Effect

    OpenAIRE

    Yanhui Xi; Hui Peng; Yemei Qin

    2016-01-01

    The basic market microstructure model specifies that the price/return innovation and the volatility innovation are independent Gaussian white noise processes. However, the financial leverage effect has been found to be statistically significant in many financial time series. In this paper, a novel market microstructure model with leverage effects is proposed. The model specification assumes a negative correlation in the errors between the price/return innovation and the volatility innovation....

  19. Time series modeling of live-cell shape dynamics for image-based phenotypic profiling.

    Science.gov (United States)

    Gordonov, Simon; Hwang, Mun Kyung; Wells, Alan; Gertler, Frank B; Lauffenburger, Douglas A; Bathe, Mark

    2016-01-01

    Live-cell imaging can be used to capture spatio-temporal aspects of cellular responses that are not accessible to fixed-cell imaging. As the use of live-cell imaging continues to increase, new computational procedures are needed to characterize and classify the temporal dynamics of individual cells. For this purpose, here we present the general experimental-computational framework SAPHIRE (Stochastic Annotation of Phenotypic Individual-cell Responses) to characterize phenotypic cellular responses from time series imaging datasets. Hidden Markov modeling is used to infer and annotate morphological state and state-switching properties from image-derived cell shape measurements. Time series modeling is performed on each cell individually, making the approach broadly useful for analyzing asynchronous cell populations. Two-color fluorescent cells simultaneously expressing actin and nuclear reporters enabled us to profile temporal changes in cell shape following pharmacological inhibition of cytoskeleton-regulatory signaling pathways. Results are compared with existing approaches conventionally applied to fixed-cell imaging datasets, and indicate that time series modeling captures heterogeneous dynamic cellular responses that can improve drug classification and offer additional important insight into mechanisms of drug action. The software is available at http://saphire-hcs.org.
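
    The SAPHIRE software itself is available at the address above; purely as a generic illustration of the hidden Markov modeling step, the sketch below fits a Gaussian HMM to one cell's shape-feature time series with the hmmlearn package (the two states, the features and the data are invented).

    ```python
    # Sketch: infer per-frame morphological states of a single cell from a
    # time series of shape features with a Gaussian hidden Markov model.
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(7)
    # Hypothetical per-frame features: [area, eccentricity], 200 frames.
    spread_frames = rng.normal([1.0, 0.8], 0.05, size=(100, 2))
    rounded_frames = rng.normal([0.5, 0.2], 0.05, size=(100, 2))
    features = np.vstack([spread_frames, rounded_frames])

    model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
    model.fit(features)               # one cell -> one sequence
    states = model.predict(features)  # per-frame state annotation

    print("state path (first 10 frames):", states[:10])
    print("transition matrix:\n", model.transmat_)
    ```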

  20. GEKF, GUKF and GGPF based prediction of chaotic time-series with additive and multiplicative noises

    International Nuclear Information System (INIS)

    Wu Xuedong; Song Zhihuan

    2008-01-01

    On the assumption that random interruptions in the observation process are modelled by a sequence of independent Bernoulli random variables, this paper generalizes the extended Kalman filter (EKF), the unscented Kalman filter (UKF) and the Gaussian particle filter (GPF) to the case in which there is a positive probability that the observation at each time instant consists of noise alone and does not contain the chaotic signal (the generalized algorithms are referred to as GEKF, GUKF and GGPF, respectively). The weights and network outputs of neural networks are used to form the state and observation equations for chaotic time-series prediction, yielding a linear state-transition equation that is continuously updated online, with the prediction of the chaotic time series represented by the predicted observation value. The proposed algorithms are applied to the prediction of Mackey–Glass time series with additive and multiplicative noises. Simulation results show that GGPF provides better prediction performance than GEKF and GUKF. (general)
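
    A small sketch of the interrupted-observation model underlying GEKF/GUKF/GGPF: an independent Bernoulli sequence decides whether each measurement carries the chaotic signal or noise alone (a logistic map stands in for the Mackey-Glass dynamics, and all parameters are assumptions).

    ```python
    # Sketch: generate observations that, with probability 1 - p, consist of
    # noise alone and do not contain the chaotic signal.
    import numpy as np

    rng = np.random.default_rng(3)
    n, p_signal = 500, 0.8           # P(observation contains the signal)

    x = np.empty(n)
    x[0] = 0.4
    for k in range(n - 1):           # logistic map as a crude chaotic state
        x[k + 1] = 3.9 * x[k] * (1.0 - x[k])

    gamma = rng.random(n) < p_signal                      # Bernoulli interruptions
    y = np.where(gamma, x, 0.0) + rng.normal(0, 0.05, n)  # noise-only when gamma is 0

    print("fraction of signal-bearing observations:", gamma.mean())
    ```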

  1. The Recent Developments in Sample Preparation for Mass Spectrometry-Based Metabolomics.

    Science.gov (United States)

    Gong, Zhi-Gang; Hu, Jing; Wu, Xi; Xu, Yong-Jiang

    2017-07-04

    Metabolomics is a critical branch of systems biology. Although great progress has been achieved in metabolomics, there are still some problems in sample preparation, data processing and data interpretation. In this review, we explore the roles, challenges and trends in sample preparation for mass spectrometry- (MS-) based metabolomics. Newly emerged sample preparation methods are also critically examined, including laser microdissection, in vivo sampling, dried blood spots, microwave-, ultrasound- and enzyme-assisted extraction, as well as microextraction techniques. Finally, we provide some conclusions and perspectives for sample preparation in MS-based metabolomics.

  2. Advances in paper-based sample pretreatment for point-of-care testing.

    Science.gov (United States)

    Tang, Rui Hua; Yang, Hui; Choi, Jane Ru; Gong, Yan; Feng, Shang Sheng; Pingguan-Murphy, Belinda; Huang, Qing Sheng; Shi, Jun Ling; Mei, Qi Bing; Xu, Feng

    2017-06-01

    In recent years, paper-based point-of-care testing (POCT) has been widely used in medical diagnostics, food safety and environmental monitoring. However, a high-cost, time-consuming and equipment-dependent sample pretreatment technique is generally required for raw sample processing, which is impractical for low-resource and disease-endemic areas. Therefore, there is an escalating demand for a cost-effective, simple and portable pretreatment technique to be coupled with the commonly used paper-based assays (e.g. lateral flow assays) in POCT. In this review, we focus on the importance of using paper as a platform for sample pretreatment. We first discuss the beneficial use of paper for sample pretreatment, including sample collection and storage, separation, extraction, and concentration. We then highlight the working principle and fabrication of each sample pretreatment device, the existing challenges, and the future perspectives for developing paper-based sample pretreatment techniques.

  3. Forest Disturbance Mapping Using Dense Synthetic Landsat/MODIS Time-Series and Permutation-Based Disturbance Index Detection

    Directory of Open Access Journals (Sweden)

    David Frantz

    2016-03-01

    Full Text Available Spatio-temporal information on process-based forest loss is essential for a wide range of applications. Despite remote sensing being the only feasible means of monitoring forest change at regional or greater scales, there is no retrospectively available remote sensor that meets the demand of monitoring forests with the required spatial detail and guaranteed high temporal frequency. As an alternative, we employed the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) to produce a dense synthetic time series by fusing Landsat and Moderate Resolution Imaging Spectroradiometer (MODIS) nadir Bidirectional Reflectance Distribution Function (BRDF)-adjusted reflectance. Forest loss was detected by applying a multi-temporal disturbance detection approach implementing a Disturbance Index-based detection strategy. The detection thresholds were permutated with random numbers from the normal distribution in order to generate a multi-dimensional threshold confidence area. As a result, a more robust parameterization and a spatially more coherent detection could be achieved. (i) The original Landsat time series; (ii) the synthetic time series; and (iii) a combined hybrid approach were used to identify the timing and extent of disturbances. The identified clearings in the Landsat detection were verified using an annual woodland clearing dataset from Queensland's Statewide Landcover and Trees Study. Disturbances caused by stand-replacing events were successfully identified. The increased temporal resolution of the synthetic time series indicated promising additional information on disturbance timing. The results of the hybrid detection unified the benefits of both approaches, i.e., the spatial quality and general accuracy of the Landsat detection and the increased temporal information of the synthetic time series. Results indicated that a temporal improvement in the detection of the disturbance date could be achieved relative to the irregularly spaced Landsat

  4. Antarctic Iceberg Tracking Based on Time Series of Aqua AMSR-E Microwave Brightness Temperature Measurements

    Science.gov (United States)

    Blonski, Slawomir; Peterson, Craig

    2006-01-01

    Observations of icebergs are identified as one of the requirements for the GEOSS (Global Earth Observation System of Systems) in the area of reducing loss of life and property from natural and human-induced disasters. However, iceberg observations are not included among targets in the GEOSS 10-Year Implementation Plan, and thus there is an unfulfilled need for iceberg detection and tracking in the near future. Large Antarctic icebergs have been tracked by the National Ice Center and by the academic community using a variety of satellite sensors including both passive and active microwave imagers, such as SSM/I (Special Sensor Microwave/Imager) deployed on the DMSP (Defense Meteorological Satellite Program) spacecraft. Improvements provided in recent years by NASA and non-NASA satellite radars, scatterometers, and radiometers resulted in an increased number of observed icebergs and even prompted a question: Is The Number of Antarctic Icebergs Really Increasing? [D.G. Long, J. Ballantyne, and C. Bertoia, Eos, Transactions of the American Geophysical Union 83 (42): 469 & 474, 15 October 2002]. AMSR-E (Advanced Microwave Scanning Radiometer for the Earth Observing System) represents an improvement over SSM/I, its predecessor. AMSR-E has more measurement channels and higher spatial resolution than SSM/I. For example, the instantaneous field of view of the AMSR-E's 89-GHz channels is 6 km by 4 km versus 16 km by 14 km for SSM/I's comparable 85-GHz channels. AMSR-E, deployed on the Aqua satellite, scans across a 1450-km swath and provides brightness temperature measurements with near-global coverage every one or two days. In polar regions, overlapping swaths generate coverage up to multiple times per day and allow for creation of image time series with high temporal resolution. Despite these advantages, only incidental usage of AMSR-E data for iceberg tracking has been reported so far, none in an operational environment. Therefore, an experiment was undertaken in the RPC

  5. Antarctic Iceberg Tracking Based on Time Series of Aqua AMSR-E Microwave Brightness Temperature Measurements

    Science.gov (United States)

    Blonski, S.; Peterson, C. A.

    2006-12-01

    Observations of icebergs are identified as one of the requirements for the GEOSS (Global Earth Observation System of Systems) in the area of reducing loss of life and property from natural and human-induced disasters. However, iceberg observations are not included among targets in the GEOSS 10-Year Implementation Plan, and thus there is an unfulfilled need for iceberg detection and tracking in the near future. Large Antarctic icebergs have been tracked by the National Ice Center and by the academic community using a variety of satellite sensors including both passive and active microwave imagers, such as SSM/I (Special Sensor Microwave/Imager) deployed on the DMSP (Defense Meteorological Satellite Program) spacecraft. Improvements provided in recent years by NASA and non-NASA satellite radars, scatterometers, and radiometers resulted in an increased number of observed icebergs and even prompted a question: `Is The Number of Antarctic Icebergs Really Increasing?' [D.G. Long, J. Ballantyne, and C. Bertoia, Eos, AGU Transactions 83(42):469&474, 15 October 2002]. AMSR-E (Advanced Microwave Scanning Radiometer for the Earth Observing System) represents an improvement over SSM/I, its predecessor. AMSR-E has more measurement channels and higher spatial resolution than SSM/I. For example, the instantaneous field of view of the AMSR-E's 89-GHz channels is 6 km by 4 km versus 16 km by 14 km for SSM/I's comparable 85-GHz channels. AMSR-E, deployed on the Aqua satellite, scans across a 1450-km swath and provides brightness temperature measurements with near-global coverage every one or two days. In polar regions, overlapping swaths generate coverage up to multiple times per day and allow for creation of image time series with high temporal resolution. Despite these advantages, only incidental usage of AMSR-E data for iceberg tracking has been reported so far, none in an operational environment. Therefore, an experiment was undertaken in the RPC (Rapid Prototyping Capability

  6. Downsizer - A Graphical User Interface-Based Application for Browsing, Acquiring, and Formatting Time-Series Data for Hydrologic Modeling

    Science.gov (United States)

    Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.

    2009-01-01

    The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.

  7. Sampling in interview-based qualitative research: A theoretical and practical guide

    OpenAIRE

    Robinson, Oliver

    2014-01-01

    Sampling is central to the practice of qualitative methods, but compared with data collection and analysis, its processes are discussed relatively little. A four-point approach to sampling in qualitative interview-based research is presented and critically discussed in this article, which integrates theory and process for the following: (1) Defining a sample universe, by way of specifying inclusion and exclusion criteria for potential participation; (2) Deciding upon a sample size, through th...

  8. RF Sub-sampling Receiver Architecture based on Milieu Adapting Techniques

    DEFF Research Database (Denmark)

    Behjou, Nastaran; Larsen, Torben; Jensen, Ole Kiel

    2012-01-01

    A novel sub-sampling based architecture is proposed which reduces the problem of image distortion and significantly improves the signal-to-noise ratio. The technique is based on sensing the environment and adapting the sampling rate of the receiver to the best possible...

  9. A Phenology-Based Method for Monitoring Woody and Herbaceous Vegetation in Mediterranean Forests from NDVI Time Series

    OpenAIRE

    David Helman; Itamar M. Lensky; Naama Tessler; Yagil Osem

    2015-01-01

    We present an efficient method for monitoring woody (i.e., evergreen) and herbaceous (i.e., ephemeral) vegetation in Mediterranean forests at a sub pixel scale from Normalized Difference Vegetation Index (NDVI) time series derived from the Moderate Resolution Imaging Spectroradiometer (MODIS). The method is based on the distinct development periods of those vegetation components. In the dry season, herbaceous vegetation is absent or completely dry in Mediterranean forests. Thus the mean NDVI ...

  10. Fourier series

    CERN Document Server

    Tolstov, Georgi P

    1962-01-01

    Richard A. Silverman's series of translations of outstanding Russian textbooks and monographs is well-known to people in the fields of mathematics, physics, and engineering. The present book is another excellent text from this series, a valuable addition to the English-language literature on Fourier series.This edition is organized into nine well-defined chapters: Trigonometric Fourier Series, Orthogonal Systems, Convergence of Trigonometric Fourier Series, Trigonometric Series with Decreasing Coefficients, Operations on Fourier Series, Summation of Trigonometric Fourier Series, Double Fourie

  11. Assessing multiscale complexity of short heart rate variability series through a model-based linear approach

    Science.gov (United States)

    Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe

    2017-09-01

    We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded in the identification of the coefficients of an autoregressive model, the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and the assessment of their distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of cardiac autonomic control, namely the low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded in information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrease of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and the physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
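
    A minimal sketch of the pole-based idea (not the authors' exact estimator): fit an autoregressive model to a short series, locate the poles whose frequencies fall in the LF band, and measure their mean distance from the unit circle; the synthetic heart-period series, sampling convention and model order are assumptions.

    ```python
    # Sketch: AR fit -> poles -> mean distance from the unit circle in a band.
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    rng = np.random.default_rng(5)
    n, fs = 300, 1.0                 # roughly one sample per beat is assumed
    t = np.arange(n)
    hp = 0.85 + 0.02 * np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.005, n)

    order = 10                       # model order is an assumption
    fit = AutoReg(hp - hp.mean(), lags=order).fit()
    ar = fit.params[1:]              # params[0] is the intercept
    poles = np.roots(np.r_[1.0, -ar])  # roots of z^p - a1*z^(p-1) - ... - ap

    freqs = np.abs(np.angle(poles)) * fs / (2 * np.pi)
    lf = (freqs >= 0.04) & (freqs <= 0.15)
    if lf.any():
        # Closer to the unit circle (distance -> 0) = more regular LF rhythm.
        print("mean LF pole distance from unit circle:",
              1.0 - np.abs(poles[lf]).mean())
    ```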

  12. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    Science.gov (United States)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated, and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy for complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
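
    A minimal one-dimensional sketch of the polynomial chaos idea: a model output is approximated as a series of probabilists' Hermite polynomials in a standard normal variable, with coefficients fitted by least-squares collocation (the model function, degree and sample size are assumptions; the study's multivariate PCE is more elaborate).

    ```python
    # Sketch: y(xi) ~ sum_k c_k He_k(xi) with xi ~ N(0, 1).
    import numpy as np
    from numpy.polynomial import hermite_e as He

    rng = np.random.default_rng(11)
    xi = rng.standard_normal(200)            # collocation points
    y = np.exp(0.3 * xi)                     # hypothetical model response

    degree = 4
    # Design matrix with columns He_0(xi) .. He_degree(xi).
    basis = [He.hermeval(xi, np.eye(degree + 1)[k]) for k in range(degree + 1)]
    A = np.stack(basis, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

    print("PCE coefficients:", np.round(coeffs, 4))
    print("PCE mean estimate:", coeffs[0])   # E[y] is the He_0 coefficient
    ```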

  13. Analysis of the development trend of China’s business administration based on time series

    OpenAIRE

    Jiang Rui

    2016-01-01

    In the overall direction of its economic system, China is in a crucial period of establishing a modern enterprise system and reforming the macroeconomic system, and many high-quality business administration talents are required for the stable development of China's economy. This paper carries out a time series analysis of the development of China's business administration major: on the whole, society currently shows an upward trend in the demand for the business adm...

  14. Determination of accelerated factors in gradient descent iterations based on Taylor's series

    Directory of Open Access Journals (Sweden)

    Petrović Milena

    2017-01-01

    Full Text Available In this paper, the efficiency of accelerated gradient descent methods with respect to the way the acceleration factor is determined is considered. Based on previous research, we assert that using a Taylor series expansion of the posed gradient descent iteration to calculate the acceleration parameter gives better final results than some other choices. We give a comparative analysis of the efficiency of several methods with different approaches to obtaining the acceleration parameter. From the results of the numerical experiments, we draw a conclusion about the most effective way of defining the acceleration parameter in accelerated gradient descent schemes.
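
    A generic sketch in the spirit of the paper (not its exact algorithm): steepest descent on a quadratic test problem, with the step length derived from a second-order Taylor estimate of the curvature along the gradient direction, obtained by a small probe step.

    ```python
    # Sketch: acceleration parameter theta from a Taylor curvature estimate.
    import numpy as np

    A = np.diag([1.0, 10.0])        # ill-conditioned quadratic f(x) = 0.5*x'Ax
    grad = lambda x: A @ x

    x = np.array([5.0, 5.0])
    eps = 1e-3                      # probe step for the curvature estimate

    for _ in range(30):
        g = grad(x)
        if g @ g < 1e-16:
            break
        g_probe = grad(x - eps * g)
        curvature = (g @ (g - g_probe)) / eps   # ~ g'Ag (2nd-order Taylor)
        theta = (g @ g) / curvature if curvature > 0 else 1e-2
        x = x - theta * g           # accelerated update

    print("final x:", x)            # approaches the minimizer at the origin
    ```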

  15. Time Series Model of Wind Speed for Multi Wind Turbines based on Mixed Copula

    Directory of Open Access Journals (Sweden)

    Nie Dan

    2016-01-01

    Full Text Available Because wind power is intermittent and random, large-scale grid integration will directly affect the safe and stable operation of the power grid. In order to study the wind speed characteristics of wind turbines quantitatively, a wind speed time series model for multiple wind turbine generators is constructed using a mixed Copula-ARMA function in this paper, and a numerical example is also given. The results show that the model can effectively predict the wind speed, ensure the efficient operation of the wind turbines, and provide a theoretical basis for the stability of grid-connected wind power operation.

  16. Classification of Small-Scale Eucalyptus Plantations Based on NDVI Time Series Obtained from Multiple High-Resolution Datasets

    Directory of Open Access Journals (Sweden)

    Hailang Qiao

    2016-02-01

    Full Text Available Eucalyptus, a short-rotation plantation crop, has been expanding rapidly in southeast China in recent years owing to its short growth cycle and high yield of wood. Effective identification of eucalyptus, therefore, is important for monitoring land use changes and investigating environmental quality. For this article, we used remote sensing images over 15 years (one per year) with a 30-m spatial resolution, including Landsat 5 Thematic Mapper images, Landsat 7 Enhanced Thematic Mapper images, and HJ 1A/1B images. These data were used to construct a 15-year Normalized Difference Vegetation Index (NDVI) time series for several cities in Guangdong Province, China. Eucalyptus reference NDVI time series sub-sequences, covering one-year-long and two-year-long growing periods, were acquired using surveyed eucalyptus samples in the study region. In order to compensate for the discontinuity of the NDVI time series that is a consequence of the relatively coarse temporal resolution, we developed an inverted triangle area methodology. Using this methodology, the images were classified on the basis of the degree of matching between the NDVI time series and the two reference NDVI time series sub-sequences during the growing period of the eucalyptus rotations. Three additional methodologies (Bounding Envelope, City Block, and Standardized Euclidian Distance) were also tested and used as a comparison group. Threshold coefficients for the algorithms were adjusted using commission-omission error criteria. The results show that the triangle area methodology out-performed the other methodologies in classifying eucalyptus plantations. Threshold coefficients and an optimal discriminant function were determined using a mosaic photograph that had been taken by an unmanned aerial vehicle platform. Good stability was found as we performed further validation using multiple-year data from high-resolution Gaofen Satellite 1 (GF-1) observations of larger regions. Eucalyptus planting dates

  17. Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting

    Science.gov (United States)

    Zhang, Ningning; Lin, Aijing; Shang, Pengjian

    2017-07-01

    In this paper, we propose a new two-stage methodology that combines ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) in order to forecast the closing price and high price of stocks simultaneously. Modified k-nearest neighbor (KNN) algorithms are finding increasingly wide application in prediction across many fields. Empirical mode decomposition (EMD) decomposes a nonlinear and non-stationary signal into a series of intrinsic mode functions (IMFs); however, it cannot reveal the characteristic information of the signal with much accuracy as a result of mode mixing. Ensemble empirical mode decomposition (EEMD), an improved version of EMD, is therefore used to resolve the weaknesses of EMD by adding white noise to the original data. With EEMD, components with true physical meaning can be extracted from the time series. Utilizing the advantages of EEMD and MKNN, the proposed EEMD-MKNN model has high predictive precision for short-term forecasting. Moreover, we extend this methodology to two dimensions to forecast the closing price and high price of four stock indices (NAS, S&P500, DJI and STI) at the same time. The results indicate that the proposed EEMD-MKNN model has a higher forecast precision than EMD-KNN, the KNN method and ARIMA.
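
    A minimal sketch of the two-stage idea using the PyEMD package (pip name "EMD-signal") and scikit-learn: decompose a synthetic price series into IMFs with EEMD, forecast each IMF one step ahead with a lag-vector KNN regressor, and sum the component forecasts; all settings are illustrative assumptions, and the paper's multidimensional KNN is reduced here to one dimension.

    ```python
    # Sketch: EEMD decomposition followed by per-IMF KNN forecasting.
    import numpy as np
    from PyEMD import EEMD
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(9)
    t = np.arange(512)
    price = np.sin(2 * np.pi * t / 64) + 0.001 * t + rng.normal(0, 0.1, t.size)

    imfs = EEMD(trials=50)(price)        # rows: IMFs plus the final residue

    lag, forecast = 5, 0.0
    for imf in imfs:
        X = np.stack([imf[i:i + lag] for i in range(len(imf) - lag)])
        y = imf[lag:]
        knn = KNeighborsRegressor(n_neighbors=3).fit(X, y)
        forecast += knn.predict(imf[-lag:].reshape(1, -1))[0]

    print("one-step-ahead forecast:", forecast)
    ```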

  18. Free vibration characteristics analysis of rectangular plate with rectangular opening based on Fourier series method

    Directory of Open Access Journals (Sweden)

    WANG Minhao

    2017-08-01

    Full Text Available Plate structures with openings are common in many engineering structures. The study of the vibration characteristics of such structures is directly related to the vibration reduction, noise reduction and stability analysis of the overall structure. This paper investigates the free vibration characteristics of a thin elastic plate with a rectangular opening, with edges parallel to those of the plate, in an arbitrary position. We use an improved Fourier series to represent the admissible displacement function of the rectangular plate with an opening, and divide the plate into eight zones to simplify the calculation. Linear springs, uniformly distributed along the boundary, are used to simulate the classical boundary conditions and the continuity conditions at the boundaries between the regions. From the energy functional and the variational method, the overall energy functional is obtained, and the generalized eigenvalue matrix equation follows from taking the extremum with respect to the unknown improved Fourier series expansion coefficients. Solving this equation yields the natural frequencies and corresponding vibration modes of the rectangular plate with an opening. The calculated results are compared with the finite element method to verify the accuracy and effectiveness of the method proposed in this paper. Finally, the influence of the boundary conditions, opening size and opening position on the vibration characteristics of a plate with an opening is studied, providing a theoretical reference for practical engineering applications.

  19. Complex dynamic behaviors of oriented percolation-based financial time series and Hang Seng index

    International Nuclear Information System (INIS)

    Niu, Hongli; Wang, Jun

    2013-01-01

    Highlights: • We develop a financial time series model by a two-dimensional oriented percolation system. • We investigate the statistical behaviors of returns for the HSI and the financial model by chaos-exploring methods. • We forecast the phase points of the reconstructed phase space by an RBF neural network. -- Abstract: We develop a financial price model based on the two-dimensional oriented (directed) percolation system. The oriented percolation model is a directed variant of ordinary (isotropic) percolation, and it is applied to describe the fluctuations of stock prices. In this work, we assume that the price fluctuations result from the participants' investment attitudes toward the market, and we investigate the information spreading among the traders and the corresponding effect on the price fluctuations. We study the complex dynamic behaviors of the return time series of the model using multiaspect chaos-exploring methods, and we also explore the corresponding behaviors of the actual market index (Hang Seng Index) for comparison. Further, we introduce the radial basis function (RBF) neural network to train and forecast the phase points of the reconstructed phase space

  20. Sampling guidelines for oral fluid-based surveys of group-housed animals.

    Science.gov (United States)

    Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J

    2017-09-01

    Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
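
    A small simulation sketch of the kind of question these guidelines answer: the barn-level probability of at least one positive test as a function of the number of oral-fluid samples (a plain binomial test model with assumed prevalence, sensitivity and specificity, not the paper's survival-model analysis).

    ```python
    # Sketch: P(detection) vs sample size under simple random sampling of pens.
    import numpy as np

    rng = np.random.default_rng(2024)

    def prob_detect(n_pens=60, n_sampled=6, prevalence=0.10,
                    sens=0.95, spec=0.99, n_sim=20_000):
        detected = 0
        for _ in range(n_sim):
            infected = rng.random(n_pens) < prevalence       # pen status
            sampled = rng.choice(n_pens, size=n_sampled, replace=False)
            p_pos = np.where(infected[sampled], sens, 1 - spec)
            detected += (rng.random(n_sampled) < p_pos).any()
        return detected / n_sim

    for n in (2, 4, 6, 10):
        print(f"{n:2d} samples -> P(detection) ~ {prob_detect(n_sampled=n):.2f}")
    ```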

  1. The Earth Observation Monitor - Automated monitoring and alerting for spatial time-series data based on OGC web services

    Science.gov (United States)

    Eberle, J.; Hüttich, C.; Schmullius, C.

    2014-12-01

    Spatial time series data have been freely available around the globe from earth observation satellites and meteorological stations for many years. They provide useful and important information for detecting ongoing changes of the environment, but for end-users it is often too complex to extract this information out of the original time series datasets. This issue led to the development of the Earth Observation Monitor (EOM), an operational framework and research project to provide simple access, analysis and monitoring tools for global spatial time series data. A multi-source data processing middleware in the backend is linked to MODIS data from the Land Processes Distributed Archive Center (LP DAAC) and Google Earth Engine as well as daily climate station data from the NOAA National Climatic Data Center. OGC Web Processing Services are used to integrate datasets from linked data providers or external OGC-compliant interfaces into the EOM. Users can either use the web portal (webEOM) or the mobile application (mobileEOM) to execute these processing services and to retrieve the requested data for a given point or polygon in user-friendly file formats (CSV, GeoTiff). Beyond simple data access, users can also run further time series analyses such as trend calculations, breakpoint detections or the derivation of phenological parameters from vegetation time series data. Furthermore, data from climate stations can be aggregated over a given time interval. Calculated results can be visualized in the client and downloaded for offline usage. Automated monitoring and alerting for the time series data integrated by the user is provided by an OGC Sensor Observation Service with a coupled OGC Web Notification Service. Users can decide which datasets and parameters are monitored with a given filter expression (e.g., precipitation value higher than x millimeter per day, occurrence of a MODIS fire point, detection of a time series anomaly). Datasets integrated in the SOS service are

  2. LVQ-SMOTE - Learning Vector Quantization based Synthetic Minority Over-sampling Technique for biomedical data.

    Science.gov (United States)

    Nakamura, Munehiro; Kajiwara, Yusuke; Otsuka, Atsushi; Kimura, Haruhiko

    2013-10-02

    Over-sampling methods based on the Synthetic Minority Over-sampling Technique (SMOTE) have been proposed for classification problems with imbalanced biomedical data. However, the existing over-sampling methods achieve slightly better or sometimes worse results than the simplest SMOTE. In order to improve the effectiveness of SMOTE, this paper presents a novel over-sampling method using codebooks obtained by learning vector quantization. In general, even when an existing SMOTE variant is applied to a biomedical dataset, its empty feature space is still so large that most classification algorithms would not perform well at estimating borderlines between classes. To tackle this problem, our over-sampling method generates synthetic samples that occupy more feature space than the other SMOTE algorithms. Briefly, our over-sampling method generates useful synthetic samples by referring to actual samples taken from real-world datasets. Experiments on eight real-world imbalanced datasets demonstrate that the proposed over-sampling method performs better than the simplest SMOTE on four of five standard classification algorithms. Moreover, the performance of our method increases further if the latest SMOTE variant, MWMOTE, is used in our algorithm. Experiments on datasets for β-turn type prediction show some important patterns that have not been seen in previous analyses. In summary, the proposed over-sampling method generates useful synthetic samples for the classification of imbalanced biomedical data and is compatible with basic classification algorithms and the existing over-sampling methods.
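
    A minimal sketch of the plain SMOTE interpolation that LVQ-SMOTE builds on: each synthetic minority sample lies on the segment between a minority point and one of its minority-class nearest neighbours (the LVQ codebook step itself is omitted, and the data are random stand-ins).

    ```python
    # Sketch: classic SMOTE-style synthetic sample generation.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def smote(X_min, n_new, k=5, seed=0):
        rng = np.random.default_rng(seed)
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
        _, idx = nn.kneighbors(X_min)          # idx[:, 0] is the point itself
        synthetic = np.empty((n_new, X_min.shape[1]))
        for i in range(n_new):
            j = rng.integers(len(X_min))        # pick a minority sample
            nb = X_min[rng.choice(idx[j, 1:])]  # one of its k neighbours
            synthetic[i] = X_min[j] + rng.random() * (nb - X_min[j])
        return synthetic

    X_minority = np.random.default_rng(1).normal(size=(30, 4))
    print(smote(X_minority, n_new=10).shape)    # (10, 4)
    ```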

  3. Multi-Cultural Competency-Based Vocational Curricula. Food Service. Multi-Cultural Competency-Based Vocational/Technical Curricula Series.

    Science.gov (United States)

    Hepburn, Larry; Shin, Masako

    This document, one of eight in a multi-cultural competency-based vocational/technical curricula series, is on food service. This program is designed to run 24 weeks and cover 15 instructional areas: orientation, sanitation, management/planning, preparing food for cooking, preparing beverages, cooking eggs, cooking meat, cooking vegetables,…

  4. Multi-Cultural Competency-Based Vocational Curricula. Automotive Mechanics. Multi-Cultural Competency-Based Vocational/Technical Curricula Series.

    Science.gov (United States)

    Hepburn, Larry; Shin, Masako

    This document, one of eight in a multi-cultural competency-based vocational/technical curricula series, is on automotive mechanics. This program is designed to run 36 weeks and cover 10 instructional areas: the engine; drive trains--rear ends/drive shafts/manual transmission; carburetor; emission; ignition/tune-up; charging and starting;…

  5. Visualization of and Software for Omnibus Test Based Change Detected in a Time Series of Polarimetric SAR Data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Skriver, Henning

    2017-01-01

    Based on an omnibus likelihood ratio test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution, and a factorization of this test statistic with associated p-values, change analysis in a time series of multilook polarimetric SAR data in the covariance matrix representation is carried out. The omnibus test statistic and its factorization detect if and when change occurs. Using airborne EMISAR and spaceborne RADARSAT-2 data, this paper focuses on change detection based on the p-values, on visualization of change at pixel as well as segment level, and on computer software.

  6. Identification of pests and diseases of Dalbergia hainanensis based on EVI time series and classification of decision tree

    Science.gov (United States)

    Luo, Qiu; Xin, Wu; Qiming, Xiong

    2017-06-01

    In vegetation remote sensing information extraction, phenological features and the low performance of remote sensing analysis algorithms are often not taken into account. To address this problem, a method combining EVI time series with a decision-tree classification based on multi-source branch similarity is proposed. First, to improve the stability of the recognition accuracy over the time series, the seasonal features of vegetation are extracted based on the fitting span of the time series. Second, decision-tree similarity is assessed through adaptive selection of the path or the probability parameter of component prediction; this similarity serves as an index to evaluate the degree of task association, to decide whether to migrate the multi-source decision tree, and to ensure the speed of migration. Finally, the classification and recognition accuracy for pests and diseases in Dalbergia hainanensis commercial forest reaches 87%-98%, significantly better than the 80%-96% accuracy of MODIS coverage in this area, which verifies the validity of the proposed method.

  7. Predicting Charging Time of Battery Electric Vehicles Based on Regression and Time-Series Methods: A Case Study of Beijing

    Directory of Open Access Journals (Sweden)

    Jun Bi

    2018-04-01

    Full Text Available Battery electric vehicles (BEVs reduce energy consumption and air pollution as compared with conventional vehicles. However, the limited driving range and potential long charging time of BEVs create new problems. Accurate charging time prediction of BEVs helps drivers determine travel plans and alleviate their range anxiety during trips. This study proposed a combined model for charging time prediction based on regression and time-series methods according to the actual data from BEVs operating in Beijing, China. After data analysis, a regression model was established by considering the charged amount for charging time prediction. Furthermore, a time-series method was adopted to calibrate the regression model, which significantly improved the fitting accuracy of the model. The parameters of the model were determined by using the actual data. Verification results confirmed the accuracy of the model and showed that the model errors were small. The proposed model can accurately depict the charging time characteristics of BEVs in Beijing.

  8. Increasing accuracy in the interval analysis by the improved format of interval extension based on the first order Taylor series

    Science.gov (United States)

    Li, Yi; Xu, Yan Long

    2018-05-01

    When the dependence of the function on the uncertain variables is non-monotonic over an interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series can exhibit significant errors. In order to reduce these errors, an improved format of the interval extension based on the first-order Taylor series is developed here, which takes the monotonicity of the function into account. Two typical mathematical examples are given to illustrate this methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of this method in practical applications; the only input data required are the function value at the central point of the interval and the sensitivity and deviation of the function. The results of the above examples show that the function intervals given by the method developed in this paper are more accurate than those obtained by the classic method.
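
    A minimal sketch of the contrast drawn above, under assumed test functions: the classic first-order Taylor interval extension versus a monotonicity-aware variant that falls back to endpoint evaluation when the derivative keeps one sign over the interval.

    ```python
    # Sketch: classic vs monotonicity-aware first-order Taylor interval extension.
    import numpy as np

    def taylor_interval(f, df, xc, dx):
        """Classic extension: f(xc) +/- |f'(xc)|*dx (loose if f is non-monotonic)."""
        r = abs(df(xc)) * dx
        return f(xc) - r, f(xc) + r

    def improved_interval(f, df, xc, dx):
        """If f' keeps one sign on the interval, the exact range is at the endpoints."""
        lo, hi = xc - dx, xc + dx
        if df(lo) * df(hi) > 0:                # monotonic over [lo, hi]
            return tuple(sorted((f(lo), f(hi))))
        return taylor_interval(f, df, xc, dx)

    f, df = np.sin, np.cos
    print(taylor_interval(f, df, 1.0, 0.3))    # first-order approximation
    print(improved_interval(f, df, 1.0, 0.3))  # tighter: sin is monotonic here
    ```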

  9. Evaluation of physical sampling efficiency for cyclone-based personal bioaerosol samplers in moving air environments

    OpenAIRE

    Su, Wei-Chung; Tolchinsky, Alexander D.; Chen, Bean T.; Sigaev, Vladimir I.; Cheng, Yung Sung

    2012-01-01

    The need to determine occupational exposure to bioaerosols has notably increased in the past decade, especially for microbiology-related workplaces and laboratories. Recently, two new cyclone-based personal bioaerosol samplers were developed by the National Institute for Occupational Safety and Health (NIOSH) in the USA and the Research Center for Toxicology and Hygienic Regulation of Biopreparations (RCT & HRB) in Russia to monitor bioaerosol exposure in the workplace. Here, a series of wind...

  10. Autonomous spatially adaptive sampling in experiments based on curvature, statistical error and sample spacing with applications in LDA measurements

    Science.gov (United States)

    Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.

    2015-06-01

    Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest or insufficient sample density at important features, or both. A new adaptive sampling technique is presented directing sample collection in proportion to local information content, capturing adequately the short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy on the basis of signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined uncertainty level overall while improving local accuracy for important features. The potential of the proposed method has been further demonstrated on the basis of Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary layer characterisation of a flat plate.
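
    A simplified one-dimensional sketch of curvature-driven adaptive sampling (spacing and curvature only; the statistical-error weighting of the full method is omitted, and the test function is an assumption).

    ```python
    # Sketch: iteratively place new samples where curvature x spacing is largest.
    import numpy as np

    f = lambda x: np.tanh(20 * (x - 0.5))       # sharp feature near x = 0.5
    x = np.linspace(0.0, 1.0, 5)                # coarse initial samples

    for _ in range(15):
        y = f(x)
        curv = np.zeros_like(x)
        curv[1:-1] = np.abs(np.diff(y, 2))      # second-difference curvature
        gaps = np.diff(x)
        score = gaps * (curv[:-1] + curv[1:] + 1e-6)  # per-gap refinement score
        i = int(np.argmax(score))
        x = np.sort(np.append(x, 0.5 * (x[i] + x[i + 1])))  # bisect best gap

    print(np.round(x, 3))   # samples cluster around the tanh transition
    ```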

  11. Remote Sensing Based Two-Stage Sampling for Accuracy Assessment and Area Estimation of Land Cover Changes

    Directory of Open Access Journals (Sweden)

    Heinz Gallaun

    2015-09-01

    Full Text Available Land cover change processes are accelerating at the regional to global level. The remote sensing community has developed reliable and robust methods for wall-to-wall mapping of land cover changes; however, land cover changes often occur at rates below the mapping errors. In the current publication, we propose a cost-effective approach to complement wall-to-wall land cover change maps with a sampling approach, which is used for accuracy assessment and accurate estimation of areas undergoing land cover changes, including provision of confidence intervals. We propose a two-stage sampling approach in order to keep accuracy, efficiency, and effort of the estimations in balance. Stratification is applied in both stages in order to gain control over the sample size allocated to rare land cover change classes on the one hand and the cost constraints for very high resolution reference imagery on the other. Bootstrapping is used to complement the accuracy measures and the area estimates with confidence intervals. The area estimates and verification estimations rely on a high quality visual interpretation of the sampling units based on time series of satellite imagery. To demonstrate the cost-effective operational applicability of the approach we applied it for assessment of deforestation in an area characterized by frequent cloud cover and very low change rate in the Republic of Congo, which makes accurate deforestation monitoring particularly challenging.

  12. Event-sequence time series analysis in ground-based gamma-ray astronomy

    International Nuclear Information System (INIS)

    Barres de Almeida, U.; Chadwick, P.; Daniel, M.; Nolan, S.; McComb, L.

    2008-01-01

    The recent, extreme episodes of variability detected from blazars by the leading atmospheric Cerenkov experiments motivate the development and application of specialized statistical techniques that enable the study of this rich data set to its furthest extent. The identification of the shortest variability timescales supported by the data and the actual variability structure observed in the light curves of these sources are some of the fundamental aspects being studied, whose answers can bring new developments in the understanding of the physics of these objects and of the mechanisms of production of VHE gamma-rays in the Universe. Some of our efforts in studying the time variability of VHE sources involve the application of dynamic programming algorithms to the problem of detecting change-points in a Poisson sequence. In this particular paper we concentrate on the more primary issue of the applicability of counting statistics to the analysis of time series in VHE gamma-ray astronomy.
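
    As a minimal illustration of change-point detection in a Poisson sequence, the sketch below runs a single-change-point likelihood scan, the simplest relative of the dynamic programming algorithms mentioned above (the simulated count series is an assumption).

    ```python
    # Sketch: maximum-likelihood scan for one change-point in Poisson counts.
    import numpy as np

    rng = np.random.default_rng(6)
    counts = np.r_[rng.poisson(2.0, 150), rng.poisson(5.0, 50)]  # flare at t = 150

    def poisson_loglik(x):
        lam = x.mean()                 # log(x!) terms cancel in comparisons
        return x.sum() * np.log(lam) - lam * x.size if lam > 0 else 0.0

    full = poisson_loglik(counts)
    best_t, best_gain = None, -np.inf
    for t in range(5, len(counts) - 5):
        gain = poisson_loglik(counts[:t]) + poisson_loglik(counts[t:]) - full
        if gain > best_gain:
            best_t, best_gain = t, gain

    print(f"estimated change-point: t = {best_t} (gain {best_gain:.1f})")
    ```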

  13. Investigation on Law and Economics Based on Complex Network and Time Series Analysis

    Science.gov (United States)

    Yang, Jian; Qu, Zhao; Chang, Hui

    2015-01-01

    The research focuses on the cooperative relationship and the strategy tendencies among three mutually interacting parties in financing: small enterprises, commercial banks and micro-credit companies. Complex network theory and time series analysis were applied to obtain quantitative evidence. Moreover, this paper built up a fundamental model describing the particular interaction among them through an evolutionary game. Combining the results of the data analysis and the current situation, it is justifiable to put forward reasonable legislative recommendations for regulations on lending activities among small enterprises, commercial banks and micro-credit companies. The approach in this research provides a framework for constructing mathematical models and applying econometrics and evolutionary games to the issue of corporate financing. PMID:26076460

  14. Current features of primary tuberculosis on medical imaging based on a series of fourteen cases

    International Nuclear Information System (INIS)

    Ouzidane, L.; Adamsbaum, C.; Cohen, P.A.; Kalifa, G.; Gendrel, D.

    1995-01-01

    Active pulmonary tuberculosis, a source of contamination, is currently undergoing a recrudescence in developed countries, particularly in clinical contexts of immuno-depression. The authors report a retrospective series of 14 cases of primary tuberculosis in a paediatric population (7 girls and 7 boys) with a mean age of 3.5 years (range: 4 months - 16 years) observed over a 3-year period. After reviewing the current radiological features of patent primary tuberculosis, the authors emphasize the contribution of chest CT scan in latent forms with a normal chest x-ray and a difficult bacteriological diagnosis. Imaging remains an essential tool in early diagnosis, therapeutic management and active surveillance of this form. The authors propose a decisional flow-chart in the case of suspected primary tuberculosis in children. (authors). 20 refs., 8 figs

  15. Analysis of the development trend of China’s business administration based on time series

    Directory of Open Access Journals (Sweden)

    Jiang Rui

    2016-01-01

    Full Text Available In the overall direction of its economic system, China is in a crucial period of establishing a modern enterprise system and reforming the macroeconomic system, and many high-quality business administration talents are required for the stable development of China's economy. This paper carries out a time series analysis of the development of China's business administration major: on the whole, society currently shows an upward trend in the demand for business administration talents. With this gradually increasing demand, colleges and universities have also set up business administration majors to train large numbers of administration talents, leading to an upward trend in the academic focus on business administration.

  16. Soil-based uranium disequilibrium and mixed uranium-thorium series radionuclide reference materials

    International Nuclear Information System (INIS)

    Donivan, S.; Chessmore, R.

    1988-12-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology has assigned the Technical Measurements Center (TMC), located at the DOE Grand Junction, Colorado, Projects Office and operated by UNC Geotech (UNC), the task of supporting ongoing remedial action programs by providing both technical guidance and assistance in making the various measurements required in all phases of remedial action work. Pursuant to this task, the Technical Measurements Center prepared two sets of radionuclide reference materials for use by remedial action contractors and cognizant federal and state agencies. A total of six reference materials, two sets comprising three reference materials each, were prepared with varying concentrations of radionuclides, using mill tailings materials, ores, and a river-bottom soil diluent. One set (the disequilibrium set) contains varying amounts of uranium with nominal amounts of radium-226. The other set (the mixed-nuclide set) contains varying amounts of uranium-238 and thorium-232 decay series nuclides. 14 refs., 10 tabs

  17. Subsidence Evaluation of High-Speed Railway in Shenyang Based on Time-Series Insar

    Science.gov (United States)

    Zhang, Yun; Wei, Lianhuan; Li, Jiayu; Liu, Shanjun; Mao, Yachun; Wu, Lixin

    2018-04-01

    More and more high-speed railways are under construction in China. Slow settlement along high-speed railway tracks and at newly-built stations can lead to inhomogeneous deformation of the local area, and its accumulation may threaten the safe operation of the high-speed rail system. In this paper, the surface deformation of a newly-built high-speed railway station as well as of the railway lines in the Shenyang region is retrieved by time series InSAR analysis using multi-orbit COSMO-SkyMed images. This paper focuses on the non-uniform subsidence caused by changes in the local environment along the railway. The accuracy of the settlement results is verified by cross-validation of the results obtained from two different orbits over the same period.

  18. Monitoring rubber plantation expansion using Landsat data time series and a Shapelet-based approach

    Science.gov (United States)

    Ye, Su; Rogan, John; Sangermano, Florencia

    2018-02-01

    The expansion of tree plantations in tropical forests for commercial rubber cultivation threatens biodiversity, which may affect ecosystem services, and hinders ecosystem productivity, causing net carbon emission. Numerous studies refer to the challenge of reliably distinguishing rubber plantations from natural forest using satellite data, due to their similar spectral signatures, even when phenology is incorporated into an analysis. This study presents a novel approach for monitoring the establishment and expansion of rubber plantations in Seima Protection Forest (SPF), Cambodia (1995-2015), by detecting and analyzing the 'shapelet' structure in a Landsat-NDVI time series. This paper introduces a new classification procedure consisting of two steps: (1) an exhaustive-searching algorithm to detect shapelets that represent a period of relatively low NDVI values within an image time series; and (2) a t-test used to determine whether the NDVI values of detected shapelets are significantly different from their non-shapelet trend, thereby indicating the presence of rubber plantations. Using this approach, historical rubber plantation events were mapped over the twenty-year timespan. The shapelet algorithm produced two types of information: (1) year of rubber plantation establishment; and (2) pre-conversion land-cover type (i.e., agriculture or natural forest). The overall accuracy of the rubber plantation map for the year 2015 was 89%. The multi-temporal map products reveal that more than half of the rubber planting activity (57%) took place in 2010 and 2011, following the granting of numerous rubber concessions two years prior. Seventy-three percent of the rubber plantations were converted from natural forest and twenty-three percent were established on non-forest land cover. The shapelet approach developed here can be used reliably to improve our understanding of the expansion of rubber production beyond the Seima Protection Forest of Cambodia, and likely elsewhere in the tropics.
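
    The two-step procedure lends itself to a compact implementation. The sketch below is a minimal illustration of the idea, assuming a per-pixel annual NDVI series stored as a NumPy array; the minimum shapelet length and significance level are illustrative choices, not the authors' published settings.

    ```python
    import numpy as np
    from scipy import stats

    def detect_shapelet(ndvi, min_len=3, alpha=0.01):
        """Step 1: exhaustively search for the lowest-mean contiguous segment
        (the 'shapelet'); step 2: t-test it against the non-shapelet values."""
        n = len(ndvi)
        best = None  # (segment mean, start, end)
        for start in range(n - min_len + 1):
            for end in range(start + min_len, n + 1):
                seg_mean = ndvi[start:end].mean()
                if best is None or seg_mean < best[0]:
                    best = (seg_mean, start, end)
        _, s, e = best
        inside = ndvi[s:e]
        outside = np.concatenate([ndvi[:s], ndvi[e:]])
        t, p = stats.ttest_ind(inside, outside, equal_var=False)
        is_rubber = (t < 0) and (p < alpha)
        return is_rubber, s, e  # s ~ index of the establishment year

    # Example: a 20-year series with a persistent NDVI drop after year 12
    rng = np.random.default_rng(0)
    series = np.r_[rng.normal(0.8, 0.03, 12), rng.normal(0.45, 0.05, 8)]
    print(detect_shapelet(series))
    ```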

  19. Sleep, School Performance, and a School-Based Intervention among School-Aged Children: A Sleep Series Study in China

    Science.gov (United States)

    Li, Shenghui; Arguelles, Lester; Jiang, Fan; Chen, Wenjuan; Jin, Xingming; Yan, Chonghuai; Tian, Ying; Hong, Xiumei; Qian, Ceng; Zhang, Jun; Wang, Xiaobin; Shen, Xiaoming

    2013-01-01

    Background Sufficient sleep during childhood is essential to ensure a transition into a healthy adulthood. However, chronic sleep loss continues to increase worldwide. In this context, it is imperative to make sleep a high priority and take action to promote sleep health among children. The present series of studies aimed to shed light on sleep patterns, on the longitudinal association of sleep with school performance, and on a practical intervention strategy for Chinese school-aged children. Methods and Findings A series of sleep studies, including a national cross-sectional survey, a prospective cohort study, and a school-based sleep intervention, was conducted in China from November 2005 through December 2009. The national cross-sectional survey was conducted in 8 cities, and a random sample of 20,778 children aged 9.0±1.61 years participated in the survey. The five-year prospective cohort study included 612 children aged 6.8±0.31 years. The comparative cross-sectional study (baseline: n = 525, aged 10.80±0.41; post-intervention follow-up: n = 553, aged 10.81±0.33) was undertaken in 6 primary schools in Shanghai. A battery of parent- and teacher-reported questionnaires was used to collect information on children’s sleep behaviors, school performance, and sociodemographic characteristics. The mean sleep duration was 9.35±0.77 hours. The prevalence of daytime sleepiness was 64.4% (sometimes: 37.50%; frequently: 26.94%). Daytime sleepiness was significantly associated with impaired attention, learning motivation, and, particularly, academic achievement. By contrast, short sleep duration was related only to impaired academic achievement. After delaying school start time by 30 minutes and 60 minutes, sleep duration correspondingly increased by 15.6 minutes and 22.8 minutes, respectively. Moreover, the intervention significantly improved sleep duration and daytime sleepiness. Conclusions Insufficient sleep and daytime sleepiness were common and were positively associated with impaired school performance.

  20. Sleep, school performance, and a school-based intervention among school-aged children: a sleep series study in China.

    Science.gov (United States)

    Li, Shenghui; Arguelles, Lester; Jiang, Fan; Chen, Wenjuan; Jin, Xingming; Yan, Chonghuai; Tian, Ying; Hong, Xiumei; Qian, Ceng; Zhang, Jun; Wang, Xiaobin; Shen, Xiaoming

    2013-01-01

    Sufficient sleep during childhood is essential to ensure a transition into a healthy adulthood. However, chronic sleep loss continues to increase worldwide. In this context, it is imperative to make sleep a high priority and take action to promote sleep health among children. The present series of studies aimed to shed light on sleep patterns, on the longitudinal association of sleep with school performance, and on a practical intervention strategy for Chinese school-aged children. A series of sleep studies, including a national cross-sectional survey, a prospective cohort study, and a school-based sleep intervention, was conducted in China from November 2005 through December 2009. The national cross-sectional survey was conducted in 8 cities, and a random sample of 20,778 children aged 9.0±1.61 years participated in the survey. The five-year prospective cohort study included 612 children aged 6.8±0.31 years. The comparative cross-sectional study (baseline: n = 525, aged 10.80±0.41; post-intervention follow-up: n = 553, aged 10.81±0.33) was undertaken in 6 primary schools in Shanghai. A battery of parent- and teacher-reported questionnaires was used to collect information on children's sleep behaviors, school performance, and sociodemographic characteristics. The mean sleep duration was 9.35±0.77 hours. The prevalence of daytime sleepiness was 64.4% (sometimes: 37.50%; frequently: 26.94%). Daytime sleepiness was significantly associated with impaired attention, learning motivation, and, particularly, academic achievement. By contrast, short sleep duration was related only to impaired academic achievement. After delaying school start time by 30 minutes and 60 minutes, sleep duration correspondingly increased by 15.6 minutes and 22.8 minutes, respectively. Moreover, the intervention significantly improved sleep duration and daytime sleepiness. Insufficient sleep and daytime sleepiness were common and were positively associated with impaired school performance.

  1. Sleep, school performance, and a school-based intervention among school-aged children: a sleep series study in China.

    Directory of Open Access Journals (Sweden)

    Shenghui Li

    Full Text Available BACKGROUND: Sufficient sleep during childhood is essential to ensure a transition into a healthy adulthood. However, chronic sleep loss continues to increase worldwide. In this context, it is imperative to make sleep a high priority and take action to promote sleep health among children. The present series of studies aimed to shed light on sleep patterns, on the longitudinal association of sleep with school performance, and on a practical intervention strategy for Chinese school-aged children. METHODS AND FINDINGS: A series of sleep studies, including a national cross-sectional survey, a prospective cohort study, and a school-based sleep intervention, was conducted in China from November 2005 through December 2009. The national cross-sectional survey was conducted in 8 cities, and a random sample of 20,778 children aged 9.0±1.61 years participated in the survey. The five-year prospective cohort study included 612 children aged 6.8±0.31 years. The comparative cross-sectional study (baseline: n = 525, aged 10.80±0.41; post-intervention follow-up: n = 553, aged 10.81±0.33) was undertaken in 6 primary schools in Shanghai. A battery of parent- and teacher-reported questionnaires was used to collect information on children's sleep behaviors, school performance, and sociodemographic characteristics. The mean sleep duration was 9.35±0.77 hours. The prevalence of daytime sleepiness was 64.4% (sometimes: 37.50%; frequently: 26.94%). Daytime sleepiness was significantly associated with impaired attention, learning motivation, and, particularly, academic achievement. By contrast, short sleep duration was related only to impaired academic achievement. After delaying school start time by 30 minutes and 60 minutes, sleep duration correspondingly increased by 15.6 minutes and 22.8 minutes, respectively. Moreover, the intervention significantly improved sleep duration and daytime sleepiness. CONCLUSIONS: Insufficient sleep and daytime sleepiness were common and were positively associated with impaired school performance.

  2. Evidence of Increased Anthropogenic Emissions of Platinum in Coastal Systems from Time-Series Analysis of Mussels Samples (1991-2011

    Directory of Open Access Journals (Sweden)

    Patricia Neira Del Río

    2014-06-01

    […] soil and marine ecosystems (Ravindra et al., 2004). As a consequence of the growing use of PGEs, concentrations of these noble metals are clearly rising in different environmental matrices, mainly in road dusts, soils along heavily frequented roads, sewage sludge, and sediments of urban rivers and harbour basins (Zimmermann and Sures, 2004). Currently, these emissions continue to grow. From places near the emission points, such as roads and other traffic routes, PGEs are introduced into aquatic habitats, where they accumulate in sediments and marine organisms. Despite this growing interest in PGEs as emerging contaminants in recent decades, few studies in this field have been reported, owing to the demanding analytical methods required for the determination of PGEs at the ultra-trace level (Alsenz et al., 2009). The work presented here is focused on the assessment of changes in the concentration of PGEs, especially platinum, introduced by humans, and the impact of their use in catalytic converters. To this aim, time-series analyses (1991-2011) of Pt were performed in mussel samples (Mytilus galloprovincialis) collected in the Vigo Ria (NW Iberian Peninsula) within the Instituto Español de Oceanografía (IEO) Marine Pollution Monitoring Program. Mussels are ideal organisms for use as bio-indicators of water pollution because, as filter feeders, they tend to accumulate substances dissolved in the environment. Mussels are abundant in the study environment and may constitute an important food source for fish and other predators, allowing platinum to enter food chains. The determination of platinum was carried out by catalytic adsorptive cathodic stripping voltammetry (Cobelo-García et al., 2014). The effects of PGEs on marine organisms have been investigated in several laboratory experiments, but very limited field studies have been carried out. The uptake rate of the noble metals depends on various factors such as exposure concentration

  3. Observer-Based Stabilization of Spacecraft Rendezvous with Variable Sampling and Sensor Nonlinearity

    Directory of Open Access Journals (Sweden)

    Zhuoshi Li

    2013-01-01

    Full Text Available This paper addresses the observer-based control problem of spacecraft rendezvous with a nonuniform sampling period. The relative dynamic model is based on the classical Clohessy-Wiltshire equations, and sensor nonlinearity and sampling are considered together in a unified framework. The purpose of this paper is to perform an observer-based controller synthesis using sampled and saturated output measurements, such that the resulting closed-loop system is exponentially stable. A time-dependent Lyapunov functional is developed which depends on the upper bound of the sampling period and does not grow along the input update times. The controller design problem is solved in terms of the linear matrix inequality method, and the obtained results are less conservative than those using traditional Lyapunov functionals. Finally, a numerical simulation example is built to show the validity of the developed sampled-data control strategy.

  4. A Remote Sensing Approach for Regional-Scale Mapping of Agricultural Land-Use Systems Based on NDVI Time Series

    Directory of Open Access Journals (Sweden)

    Beatriz Bellón

    2017-06-01

    Full Text Available In response to the need for generic remote sensing tools to support large-scale agricultural monitoring, we present a new approach for regional-scale mapping of agricultural land-use systems (ALUS based on object-based Normalized Difference Vegetation Index (NDVI time series analysis. The approach consists of two main steps. First, to obtain relatively homogeneous land units in terms of phenological patterns, a principal component analysis (PCA is applied to an annual MODIS NDVI time series, and an automatic segmentation is performed on the resulting high-order principal component images. Second, the resulting land units are classified into the crop agriculture domain or the livestock domain based on their land-cover characteristics. The crop agriculture domain land units are further classified into different cropping systems based on the correspondence of their NDVI temporal profiles with the phenological patterns associated with the cropping systems of the study area. A map of the main ALUS of the Brazilian state of Tocantins was produced for the 2013–2014 growing season with the new approach, and a significant coherence was observed between the spatial distribution of the cropping systems in the final ALUS map and in a reference map extracted from the official agricultural statistics of the Brazilian Institute of Geography and Statistics (IBGE. This study shows the potential of remote sensing techniques to provide valuable baseline spatial information for supporting agricultural monitoring and for large-scale land-use systems analysis.
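
    The first step, a PCA over the per-pixel NDVI time series, reduces the temporal stack to a few component images that a segmentation algorithm can then partition. A minimal sketch under assumed inputs (a synthetic stack standing in for the annual MODIS NDVI series; scikit-learn for the PCA):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Toy stand-in for one year of MODIS NDVI composites: (rows, cols, dates)
    rows, cols, dates = 120, 120, 23
    rng = np.random.default_rng(5)
    ndvi_stack = rng.random((rows, cols, dates)).astype(np.float32)

    # PCA across the temporal dimension, treating each pixel as one sample
    X = ndvi_stack.reshape(-1, dates)
    pca = PCA(n_components=4)
    pcs = pca.fit_transform(X).reshape(rows, cols, 4)

    # The leading principal-component images summarize the dominant
    # phenological patterns and would feed the segmentation step that
    # delineates the land units.
    print(pca.explained_variance_ratio_.round(3))
    ```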

  5. Modeling of Engine Parameters for Condition-Based Maintenance of the MTU Series 2000 Diesel Engine

    Science.gov (United States)

    2016-09-01

    […] particles in the analysis of engine oil samples (Jiang and Yan 2008). Lee monitors the exhaust gas temperature of the diesel engine of a roll-on/roll-off passenger commercial vessel (Lee 2013). Jardine, Lin and Banjevic note other monitoring parameters, such as acoustic emission, moisture, and humidity. […] The regression model is expressed in terms of a constant y-intercept, a disturbance term, an independent variable, and its past values.

  6. Use of Language Sample Analysis by School-Based SLPs: Results of a Nationwide Survey

    Science.gov (United States)

    Pavelko, Stacey L.; Owens, Robert E., Jr.; Ireland, Marie; Hahs-Vaughn, Debbie L.

    2016-01-01

    Purpose: This article examines use of language sample analysis (LSA) by school-based speech-language pathologists (SLPs), including characteristics of language samples, methods of transcription and analysis, barriers to LSA use, and factors affecting LSA use, such as American Speech-Language-Hearing Association certification, number of years'…

  7. Adaptive list sequential sampling method for population-based observational studies

    NARCIS (Netherlands)

    Hof, Michel H.; Ravelli, Anita C. J.; Zwinderman, Aeilko H.

    2014-01-01

    In population-based observational studies, non-participation and delayed response to the invitation to participate are complications that often arise during the recruitment of a sample. When both are not properly dealt with, the composition of the sample can differ from the desired composition.

  8. Eating Disorders among a Community-Based Sample of Chilean Female Adolescents

    Science.gov (United States)

    Granillo, M. Teresa; Grogan-Kaylor, Andrew; Delva, Jorge; Castillo, Marcela

    2011-01-01

    The purpose of this study was to explore the prevalence and correlates of eating disorders among a community-based sample of female Chilean adolescents. Data were collected through structured interviews with 420 female adolescents residing in Santiago, Chile. Approximately 4% of the sample reported ever being diagnosed with an eating disorder.…

  9. Urbanization and Income Inequality in Post-Reform China: A Causal Analysis Based on Time Series Data.

    Science.gov (United States)

    Chen, Guo; Glasmeier, Amy K; Zhang, Min; Shao, Yang

    2016-01-01

    This paper investigates the potential causal relationship(s) between China's urbanization and income inequality since the start of the economic reform. Based on the economic theory of urbanization and income distribution, we analyze the annual time series of China's urbanization rate and Gini index from 1978 to 2014. The results show that urbanization has an immediate alleviating effect on income inequality, as indicated by the negative relationship between the two time series in the same year (lag = 0). However, urbanization also seems to have a lagged aggravating effect on income inequality, as indicated by a positive relationship between urbanization and the Gini index series at lag 1. Although the link between urbanization and income inequality is not surprising, the lagged aggravating effect of urbanization on the Gini index challenges the popular belief that urbanization in post-reform China generally helps reduce income inequality. At deeper levels, our results suggest an urgent need to focus on the social dimension of urbanization as China transitions to the next stage of modernization. Comprehensive social reforms must be prioritized to avoid a long-term economic dichotomy and permanent social segregation.
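
    The lag-0 and lag-1 relationships described above can be checked with a simple lagged-correlation computation. A minimal sketch with synthetic stand-ins for the two series (the study uses China's annual urbanization rate and Gini index, 1978-2014); first-differencing is an assumed detrending choice, not necessarily the authors':

    ```python
    import numpy as np

    # Synthetic stand-ins for the two annual series; illustrative only
    rng = np.random.default_rng(6)
    n = 37
    urb = np.linspace(0.18, 0.55, n) + rng.normal(0.0, 0.01, n)
    gini = np.linspace(0.30, 0.47, n) + rng.normal(0.0, 0.01, n)

    def lagged_corr(x, y, lag):
        """Correlate x_t with y_{t+lag}; first-differenced to tame the trend."""
        dx, dy = np.diff(x), np.diff(y)
        if lag == 0:
            return np.corrcoef(dx, dy)[0, 1]
        return np.corrcoef(dx[:-lag], dy[lag:])[0, 1]

    print("lag 0:", lagged_corr(urb, gini, 0))  # paper reports negative at lag 0
    print("lag 1:", lagged_corr(urb, gini, 1))  # paper reports positive at lag 1
    ```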

  10. Urbanization and Income Inequality in Post-Reform China: A Causal Analysis Based on Time Series Data.

    Directory of Open Access Journals (Sweden)

    Guo Chen

    Full Text Available This paper investigates the potential causal relationship(s) between China's urbanization and income inequality since the start of the economic reform. Based on the economic theory of urbanization and income distribution, we analyze the annual time series of China's urbanization rate and Gini index from 1978 to 2014. The results show that urbanization has an immediate alleviating effect on income inequality, as indicated by the negative relationship between the two time series in the same year (lag = 0). However, urbanization also seems to have a lagged aggravating effect on income inequality, as indicated by a positive relationship between urbanization and the Gini index series at lag 1. Although the link between urbanization and income inequality is not surprising, the lagged aggravating effect of urbanization on the Gini index challenges the popular belief that urbanization in post-reform China generally helps reduce income inequality. At deeper levels, our results suggest an urgent need to focus on the social dimension of urbanization as China transitions to the next stage of modernization. Comprehensive social reforms must be prioritized to avoid a long-term economic dichotomy and permanent social segregation.

  11. Object-Based Classification of Grasslands from High Resolution Satellite Image Time Series Using Gaussian Mean Map Kernels

    Directory of Open Access Journals (Sweden)

    Mailys Lopes

    2017-07-01

    Full Text Available This paper deals with the classification of grasslands using high resolution satellite image time series. The grasslands considered in this work are semi-natural elements in fragmented landscapes, i.e., they are heterogeneous and small elements. The first contribution of this study is to account for grassland heterogeneity while working at the object level by modeling the distribution of each grassland's pixels by a Gaussian distribution. To measure the similarity between two grasslands, a new kernel is proposed as a second contribution: the α-Gaussian mean kernel. It allows one to weight the influence of the covariance matrix when comparing two Gaussian distributions. This kernel is introduced in support vector machines for the supervised classification of grasslands from southwest France. A dense intra-annual multispectral time series of the Formosat-2 satellite is used for the classification of grasslands’ management practices, while an inter-annual NDVI time series of Formosat-2 is used for the discrimination of old and young grasslands. Results are compared to other existing pixel- and object-based approaches in terms of classification accuracy and processing time. The proposed method is shown to be a good compromise between processing speed and classification accuracy. It can adapt to the classification constraints, and it encompasses several similarity measures known in the literature. It is appropriate for the classification of small and heterogeneous objects such as grasslands.
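
    For two grassland objects summarised by the mean and covariance of their pixel values, a closed-form Gaussian similarity can be computed directly. The sketch below uses the standard inner product of two Gaussian densities; the α interpolation toward the identity is only an assumption standing in for the paper's α-weighting of the covariance matrices, which is defined in the original text:

    ```python
    import numpy as np

    def gaussian_mean_kernel(mu1, cov1, mu2, cov2, alpha=1.0):
        """Closed-form inner product of two Gaussian densities; the alpha
        interpolation toward the identity is an assumed stand-in for the
        paper's alpha-weighting of the covariance matrices."""
        d = len(mu1)
        pooled = alpha * (cov1 + cov2) + (1.0 - alpha) * np.eye(d)
        diff = mu1 - mu2
        quad = diff @ np.linalg.solve(pooled, diff)
        norm = np.sqrt(((2.0 * np.pi) ** d) * np.linalg.det(pooled))
        return np.exp(-0.5 * quad) / norm

    # Each grassland object is summarized by its pixels' mean and covariance
    rng = np.random.default_rng(7)
    a = rng.normal(size=(40, 5))            # pixels of grassland A (5 features)
    b = rng.normal(0.3, 1.0, size=(55, 5))  # pixels of grassland B
    k = gaussian_mean_kernel(a.mean(0), np.cov(a.T), b.mean(0), np.cov(b.T), 0.5)
    print(k)  # entries like this fill the precomputed Gram matrix for the SVM
    ```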

  12. Case Series.

    Science.gov (United States)

    Vetrayan, Jayachandran; Othman, Suhana; Victor Paulraj, Smily Jesu Priya

    2017-01-01

    To assess the effectiveness and feasibility of behavioral sleep intervention for medicated children with ADHD. Six medicated children (five boys, one girl; aged 6-12 years) with ADHD participated in a 4-week sleep intervention program. The main behavioral strategies used were Faded Bedtime With Response Cost (FBRC) and positive reinforcement. Within a case-series design, an objective measure (the Sleep Disturbance Scale for Children [SDSC]) and a subjective measure (sleep diaries) were used to record changes in children's sleep. For all six children, a significant decrease was found in the severity of the children's sleep problems (based on SDSC data). Bedtime resistance and mean sleep onset latency were reduced following the 4-week intervention program, according to sleep diary data. Gains were generally maintained at follow-up. Parents perceived the intervention as being helpful. Based on the initial data, this intervention shows promise as an effective and feasible treatment.

  13. Robotic, MEMS-based Multi Utility Sample Preparation Instrument for ISS Biological Workstation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project will develop a multi-functional, automated sample preparation instrument for biological wet-lab workstations on the ISS. The instrument is based on a...

  14. China’s Carbon Footprint Based on Input-Output Table Series: 1992–2020

    Directory of Open Access Journals (Sweden)

    Haitao Zheng

    2017-03-01

    Full Text Available Reducing carbon emissions is a major concern for China’s future. This paper explores the embodied carbon footprint of Chinese final demand from the point of view of industries. It uses the Matrix Transformation Technique (MTT) to update the input-output table series from 1992 to 2020 in China. Then, we measure the embodied carbon emissions for the period 1992–2020 from 29 industry producers to the final demand, covering urban and rural residential consumption, government consumption, fixed capital formation, and net exports. The results show that construction, other services, wholesale, retail trade, accommodation and catering, industrial machinery and equipment, transport, storage and postal services, and manufacture of foods and tobacco are the industries with the greatest carbon emissions from producers, while fixed capital formation and urban consumption are the largest emitters from the perspective of final demand. The embodied carbon emission multipliers for most of the industries are decreasing, while the total carbon emissions are increasing each year. The ratio of emissions from residential consumption in terms of total emissions is decreasing. Each industry has a different main final demand-driven influencing factor on emission and, for each type of final demand, there are different industries with higher emissions.

  15. A Fourier-series-based kernel-independent fast multipole method

    International Nuclear Information System (INIS)

    Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai

    2011-01-01

    We present in this paper a new kernel-independent fast multipole method (FMM), named FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It also provides economical operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in FMM algorithms. The multipole-to-local translation operator, in particular, is readily diagonal and does not dominate the arithmetic operations. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components such as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the FKI-FMM performance in accuracy and efficiency.
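
    The core idea, representing a translation-invariant kernel by a truncated Fourier series, can be illustrated in one dimension. This is only a toy sketch (a single interval, an arbitrary smooth kernel, FFT-derived coefficients), not the authors' multi-scale construction or translation operators:

    ```python
    import numpy as np

    # Approximate a translation-invariant kernel k(r) on [-L, L] by a truncated
    # Fourier series; kernel, interval, and term count are illustrative only.
    L, terms = 4.0, 25
    kernel = lambda r: 1.0 / (1.0 + r ** 2)

    r = np.linspace(-L, L, 2048, endpoint=False)
    coeff = np.fft.rfft(kernel(r)) / r.size   # Fourier coefficients of k
    coeff[terms:] = 0.0                       # truncate the series

    approx = np.fft.irfft(coeff * r.size, n=r.size)
    print("max abs error:", np.max(np.abs(approx - kernel(r))))
    ```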

  16. Comparison on the Analysis on PM10 Data based on Average and Extreme Series

    Directory of Open Access Journals (Sweden)

    Mohd Amin Nor Azrita

    2018-01-01

    Full Text Available The main concern in environmental issues is extreme (catastrophic) phenomena rather than common events. However, most statistical approaches are concerned primarily with the centre of a distribution, or the average value, rather than the tail of the distribution, which contains the extreme observations. Extreme value theory directs attention to the tails of a distribution, where standard models prove unreliable for analysing extreme series. A high level of particulate matter (PM10) is a common environmental problem that causes various impacts on human health and material damage. If the main concern is extreme events, then extreme value analysis provides the best result with significant evidence. The monthly average and monthly maxima PM10 data for Perlis from 2003 to 2014 were analysed. Forecasting for the average data is made by the Holt-Winters method, while the return level determines the predicted value of extreme events that occur on average once in a certain period. Forecasting from January 2015 to December 2016 for the average data found that the highest forecasted value is 58.18 (standard deviation 18.45) in February 2016, while the return level reached 253.76 units for a 24-month (2015-2016) return period.
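
    A return level of the kind quoted above is read off a generalized extreme value (GEV) fit to the block maxima. A minimal sketch with synthetic monthly maxima standing in for the Perlis PM10 data (SciPy's genextreme):

    ```python
    import numpy as np
    from scipy.stats import genextreme

    # Synthetic monthly maxima standing in for the Perlis PM10 series
    monthly_max = genextreme.rvs(c=-0.1, loc=80, scale=20, size=144,
                                 random_state=42)

    # Fit the GEV to the block maxima, then read off the m-month return level:
    # the value exceeded on average once every m months.
    c, loc, scale = genextreme.fit(monthly_max)
    m = 24
    return_level = genextreme.ppf(1.0 - 1.0 / m, c, loc=loc, scale=scale)
    print(f"{m}-month return level: {return_level:.2f}")
    ```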

  17. Modeling Financial Time Series Based on a Market Microstructure Model with Leverage Effect

    Directory of Open Access Journals (Sweden)

    Yanhui Xi

    2016-01-01

    Full Text Available The basic market microstructure model specifies that the price/return innovation and the volatility innovation are independent Gaussian white noise processes. However, the financial leverage effect has been found to be statistically significant in many financial time series. In this paper, a novel market microstructure model with leverage effects is proposed. The model specification assumes a negative correlation in the errors between the price/return innovation and the volatility innovation. With the new representation, a theoretical explanation of the leverage effect is provided. Simulated data and daily stock market indices (the Shanghai Composite Index, the Shenzhen Component Index, and the Standard and Poor’s 500 Composite Index) are used, via the Bayesian Markov chain Monte Carlo (MCMC) method, to estimate the leverage market microstructure model. The results verify the effectiveness of the model and of the estimation approach proposed in the paper, and also indicate that the stock markets have strong leverage effects. Compared with the classical leverage stochastic volatility (SV) model in terms of the Deviance Information Criterion (DIC), the leverage market microstructure model fits the data better.

  18. Biologically based treatment of immature permanent teeth with pulpal necrosis: a case series.

    Science.gov (United States)

    Jung, Il-Young; Lee, Seung-Jong; Hargreaves, Kenneth M

    2012-06-01

    This case series reports the outcomes of 8 patients (ages 9-14 years) who presented with 9 immature permanent teeth with pulpal necrosis and apical periodontitis. During treatment, 5 of the teeth were found to have at least some residual vital tissue remaining in the root canal systems. After NaOCl irrigation and medication with ciprofloxacin, metronidazole, and minocycline, these teeth were sealed with mineral trioxide aggregate and restored. The other group of 4 teeth had no evidence of any residual vital pulp tissue. This second group of teeth was treated with NaOCl irrigation and medicated with ciprofloxacin, metronidazole, and minocycline, followed by a revascularization procedure adopted from the trauma literature (bleeding evoked to form an intracanal blood clot). In both groups of patients, there was evidence of satisfactory postoperative clinical outcomes (1-5 years); the patients were asymptomatic, no sinus tracts were evident, apical periodontitis was resolved, and there was radiographic evidence of continued thickening of the dentinal walls, apical closure, or increased root length.

  19. Quantitative evaluation of time-series GHG emissions by sector and region using consumption-based accounting

    International Nuclear Information System (INIS)

    Homma, Takashi; Akimoto, Keigo; Tomoda, Toshimasa

    2012-01-01

    This study estimates global time-series consumption-based GHG emissions by region from 1990 to 2005, including both CO2 and non-CO2 GHG emissions. Estimations are conducted for the whole economy and for two specific sectors: manufacturing and agriculture. Especially in the agricultural sector, it is important to include non-CO2 GHG emissions because these are the major emissions present. In most of the regions examined, the improvements in GHG intensities achieved in the manufacturing sector are larger than those in the agricultural sector. Compared with developing regions, most developed regions have consistently larger per-capita consumption-based GHG emissions over the whole economy, as well as higher production-based emissions. In the manufacturing sector, differences calculated by subtracting production-based emissions from consumption-based GHG emissions are determined by the regional economic level while, in the agricultural sector, they are dependent on regional production structures that are determined by international trade competitiveness. In the manufacturing sector, these differences are consistently and increasingly positive for the U.S., EU15 and Japan but negative for developing regions. In the agricultural sector, the differences calculated for the major agricultural importers like Japan and the EU15 are consistently positive while those of exporters like the U.S., Australia and New Zealand are consistently negative. - Highlights: ► We evaluate global time-series production-based and consumption-based GHG emissions. ► We focus on both CO2 and non-CO2 GHG emissions, broken down by region and by sector. ► Including non-CO2 GHG emissions is important in the agricultural sector. ► In agriculture, differences in accountings are dependent on production structures. ► In the manufacturing sector, differences in accountings are determined by economic level.

  20. Data splitting for artificial neural networks using SOM-based stratified sampling.

    Science.gov (United States)

    May, R J; Maier, H R; Dandy, G C

    2010-03-01

    Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
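
    A minimal sketch of the SOM-based splitting idea, assuming the third-party minisom package and synthetic data. Unlike the paper, which allocates samples to SOM cells via Neyman sampling, this sketch draws a fixed fraction per cell, so sizes and parameters are illustrative only:

    ```python
    import numpy as np
    from minisom import MiniSom  # assumed third-party dependency (MiniSom)

    rng = np.random.default_rng(0)
    data = rng.normal(size=(1000, 6))   # stand-in for the ANN modelling data

    # Train a small SOM so that each cell collects similar samples (strata)
    som = MiniSom(5, 5, data.shape[1], sigma=1.0, learning_rate=0.5,
                  random_seed=0)
    som.train_random(data, 2000)

    cells = {}
    for idx, row in enumerate(data):
        cells.setdefault(som.winner(row), []).append(idx)

    # Draw a fixed fraction from every cell so the subsets share the SOM's
    # coverage of the input space (the paper allocates via Neyman sampling).
    train, test = [], []
    for members in cells.values():
        members = rng.permutation(members)
        k = int(round(0.8 * len(members)))
        train.extend(members[:k])
        test.extend(members[k:])
    print(len(train), len(test))
    ```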

  1. Occupational Home Economics Education Series. Fabrics and Textiles Merchandising. Competency Based Teaching Module.

    Science.gov (United States)

    Martin, Ruth

    This module, one of ten competency based modules developed for vocational home economics teachers, is based on a job cluster in fabric and textiles merchandising. It is designed for use with a variety of groups including grades 9-14 and adults. Focusing on the specific job title fabric and textiles salesperson, ten competencies and the student…

  2. Occupational Home Economics Education Series. Catering Services. Competency Based Teaching Module.

    Science.gov (United States)

    Lowe, Phyllis; And Others

    This module, one of ten competency based modules developed for vocational home economics teachers, is based on a job cluster in the catering industry. It is designed for use with a variety of levels of learners (secondary, postsecondary, adult) in both school and non-school educational settings. Focusing on two levels of employment, food caterer…

  3. Effect of an evidence-based website on healthcare usage: an interrupted time-series study.

    NARCIS (Netherlands)

    Spoelman, W.A.; Bonten, T.N.; Waal, M.W.M. de; Drenthen, T.; Smeele, I.J.M.; Nielen, M.M.; Chavannes, N.

    2016-01-01

    Objectives: Healthcare costs and usage are rising. Evidence-based online health information may reduce healthcare usage, but the evidence is scarce. The objective of this study was to determine whether the release of a nationwide evidence-based health website was associated with a reduction in healthcare usage.

  4. Ripe for Change: Garden-Based Learning in Schools. Harvard Education Letter Impact Series

    Science.gov (United States)

    Hirschi, Jane S.

    2015-01-01

    "Ripe for Change: Garden-Based Learning in Schools" takes a big-picture view of the school garden movement and the state of garden-based learning in public K--8 education. The book frames the garden movement for educators and shows how school gardens have the potential to be a significant resource for teaching and learning. In this…

  5. A series of fluorene-based two-photon absorbing molecules: synthesis, linear and nonlinear characterization, and bioimaging

    Science.gov (United States)

    Andrade, Carolina D.; Yanez, Ciceron O.; Rodriguez, Luis; Belfield, Kevin D.

    2010-01-01

    The synthesis, structural, and photophysical characterization of a series of new fluorescent donor–acceptor and acceptor–acceptor molecules, based on the fluorenyl ring system, with two-photon absorbing properties is described. These new compounds exhibited large Stokes shifts, high fluorescence quantum yields, and, significantly, high two-photon absorption cross sections, making them well suited for two-photon fluorescence microscopy (2PFM) imaging. Confocal and two-photon fluorescence microscopy imaging of COS-7 and HCT 116 cells incubated with probe I showed endosomal selectivity, demonstrating the potential of this class of fluorescent probes in multiphoton fluorescence microscopy. PMID:20481596

  6. Magnitude of 14C/12C variations based on archaeological samples

    International Nuclear Information System (INIS)

    Kusumgar, S.; Agrawal, D.P.

    1977-01-01

    The magnitude of 14C/12C variations in the periods A.D. 500 to 200 B.C. and 370 B.C. to 2900 B.C. is discussed. The 14C dates of well-dated archaeological samples from India and Egypt do not show any significant divergence from the historical ages. On the other hand, the corrections based on dendrochronological samples show marked deviations for the same time period. A plea is, therefore, made to study old tree samples from Anatolia and Irish bogs and archaeological samples from west Asia to arrive at a more realistic calibration curve. (author)

  7. A Method for Microalgae Proteomics Analysis Based on Modified Filter-Aided Sample Preparation.

    Science.gov (United States)

    Li, Song; Cao, Xupeng; Wang, Yan; Zhu, Zhen; Zhang, Haowei; Xue, Song; Tian, Jing

    2017-11-01

    With the fast development of microalgal biofuel research, proteomics studies of microalgae have increased quickly. The filter-aided sample preparation (FASP) method has been a widely used proteomics sample preparation method since 2009. Here, a method for microalgae proteomics analysis based on modified filter-aided sample preparation (mFASP) is described to suit the characteristics of microalgal cells and eliminate the error caused by over-alkylation. Using Chlamydomonas reinhardtii as the model, the prepared sample was tested by standard LC-MS/MS and compared with previous reports. The results showed that mFASP is suitable for most occasions of microalgae proteomics studies.

  8. ON SAMPLING BASED METHODS FOR THE DUBINS TRAVELING SALESMAN PROBLEM WITH NEIGHBORHOODS

    Directory of Open Access Journals (Sweden)

    Petr Váňa

    2015-12-01

    Full Text Available In this paper, we address the problem of planning a path for a Dubins vehicle to visit a set of regions, a problem also known as the Dubins Traveling Salesman Problem with Neighborhoods (DTSPN). We propose a modification of the existing sampling-based approach that determines an increasing number of samples per goal region and thus improves the solution quality when more computational time is available. The proposed modification of the sampling-based algorithm has been compared with the performance of existing approaches for the DTSPN, and results on the quality of the found solutions and the required computational time are presented in the paper.
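
    A toy version of the sampling idea: discretize each goal region into boundary positions and headings, and evaluate candidate tours with Dubins shortest paths. This sketch assumes the third-party 'dubins' package (pydubins) and brute-forces a tiny instance with a fixed visiting order; the actual algorithm refines the number of samples per region as time allows.

    ```python
    import itertools
    import math
    import dubins  # assumed third-party dependency ('dubins' on PyPI)

    RHO = 1.0  # minimum turning radius of the vehicle

    # Goal regions as (center_x, center_y, radius); visiting order fixed here
    regions = [(0.0, 0.0, 1.0), (8.0, 2.0, 1.0), (5.0, 9.0, 1.5)]

    def samples_on_region(cx, cy, rad, n_pos, n_head):
        """Sample boundary positions and headings for one region; raising
        n_pos/n_head when more computation time is available is the idea."""
        for i in range(n_pos):
            a = 2 * math.pi * i / n_pos
            for j in range(n_head):
                yield (cx + rad * math.cos(a), cy + rad * math.sin(a),
                       2 * math.pi * j / n_head)

    def tour_length(configs):
        legs = zip(configs, configs[1:] + configs[:1])  # closed tour
        return sum(dubins.shortest_path(a, b, RHO).path_length()
                   for a, b in legs)

    # Brute-force over one sampled configuration per region (tiny instance)
    candidates = [list(samples_on_region(*r, 4, 4)) for r in regions]
    best = min(tour_length(list(combo))
               for combo in itertools.product(*candidates))
    print("best sampled tour length:", round(best, 2))
    ```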

  9. Sample Entropy-Based Approach to Evaluate the Stability of Double-Wire Pulsed MIG Welding

    Directory of Open Access Journals (Sweden)

    Ping Yao

    2014-01-01

    Full Text Available Based on sample entropy, this paper presents a quantitative method to evaluate the current stability in double-wire pulsed MIG welding. Firstly, the sample entropy of current signals with different stability but the same parameters is calculated. The results show that the more stable the current, the smaller the value and the standard deviation of the sample entropy. Secondly, four parameters (pulse width, peak current, base current, and frequency) are selected for a four-level three-factor orthogonal experiment. The calculation and analysis of the desired signals indicate that sample entropy values are affected by the welding current parameters. Then, a quantitative method based on sample entropy is proposed. The experimental results show that the method can reliably quantify the welding current stability.
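
    Sample entropy itself is straightforward to compute. Below is a minimal sketch of the classical SampEn(m, r) (Chebyshev distance, self-matches excluded) applied to a synthetic stable versus erratic current trace; the tolerance r = 0.2·std is the usual convention, not necessarily the paper's setting:

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """Classical SampEn(m, r) = -ln(A/B) with Chebyshev distance and
        self-matches excluded; r = r_factor * std(x) is the usual convention."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()
        n_templates = len(x) - m  # same template count for lengths m and m+1

        def count(mm):
            t = np.array([x[i:i + mm] for i in range(n_templates)])
            d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
            return (np.sum(d <= r) - n_templates) / 2.0  # drop self-matches

        return -np.log(count(m + 1) / count(m))

    # A steadier current trace scores lower (more regular) than an erratic one
    rng = np.random.default_rng(8)
    t = np.linspace(0.0, 10.0, 800)
    stable = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(800)
    erratic = np.sin(2 * np.pi * t) + 0.60 * rng.standard_normal(800)
    print(sample_entropy(stable), sample_entropy(erratic))
    ```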

  10. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.

  11. A support vector density-based importance sampling for reliability assessment

    International Nuclear Information System (INIS)

    Dai, Hongzhe; Zhang, Hao; Wang, Wei

    2012-01-01

    An importance sampling method based on the adaptive Markov chain simulation and support vector density estimation is developed in this paper for efficient structural reliability assessment. The methodology involves the generation of samples that can adaptively populate the important region by the adaptive Metropolis algorithm, and the construction of importance sampling density by support vector density. The use of the adaptive Metropolis algorithm may effectively improve the convergence and stability of the classical Markov chain simulation. The support vector density can approximate the sampling density with fewer samples in comparison to the conventional kernel density estimation. The proposed importance sampling method can effectively reduce the number of structural analysis required for achieving a given accuracy. Examples involving both numerical and practical structural problems are given to illustrate the application and efficiency of the proposed methodology.
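
    The generic importance-sampling estimator at the heart of such methods is compact. In this sketch the importance density is a Gaussian centred at the most probable failure point of a toy limit state, standing in for the paper's adaptive-Metropolis exploration and support vector density estimate:

    ```python
    import numpy as np
    from scipy import stats

    # Toy limit state: failure when g(x) <= 0, inputs standard normal in 2-D
    g = lambda x: 5.0 - x.sum(axis=1)

    f = stats.multivariate_normal(np.zeros(2), np.eye(2))   # nominal density
    # Importance density centred at the most probable failure point [2.5, 2.5]
    # (a stand-in for the adaptive-Metropolis + support-vector-density step).
    h = stats.multivariate_normal([2.5, 2.5], np.eye(2))

    xs = h.rvs(size=5000, random_state=2)
    w = f.pdf(xs) / h.pdf(xs)                               # likelihood ratios
    pf = np.mean((g(xs) <= 0) * w)
    print("IS estimate:", pf, "exact:", stats.norm.sf(5 / np.sqrt(2)))
    ```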

  12. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated and of the fact that certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.

  13. Final LDRD report : development of sample preparation methods for ChIPMA-based imaging mass spectrometry of tissue samples.

    Energy Technology Data Exchange (ETDEWEB)

    Maharrey, Sean P.; Highley, Aaron M.; Behrens, Richard, Jr.; Wiese-Smith, Deneille

    2007-12-01

    The objective of this short-term LDRD project was to acquire the tools needed to use our chemical imaging precision mass analyzer (ChIPMA) instrument to analyze tissue samples. This effort was an outgrowth of discussions with oncologists on the need to find the cellular origin of signals in mass spectra of serum samples, which provide biomarkers for ovarian cancer. The ultimate goal would be to collect chemical images of biopsy samples, allowing the chemical images of diseased and nondiseased sections of a sample to be compared. The equipment needed to prepare tissue samples has been acquired and built. This equipment includes a cryo-ultramicrotome for preparing thin sections of samples and a coating unit. The coating unit uses an electrospray system to deposit small droplets of a UV-photo-absorbing compound on the surface of the tissue samples. Both units are operational. The tissue sample must be coated with the organic compound to enable matrix-assisted laser desorption/ionization (MALDI) and matrix-enhanced secondary ion mass spectrometry (ME-SIMS) measurements with the ChIPMA instrument. Initial plans to test the sample preparation using human tissue samples required development of administrative procedures beyond the scope of this LDRD. Hence, it was decided to make two types of measurements: (1) testing the spatial resolution of ME-SIMS by preparing a substrate coated with a mixture of an organic matrix and a bio standard and etching a defined pattern in the coating using a liquid metal ion beam, and (2) preparing and imaging C. elegans worms. Difficulties arose in sectioning the C. elegans for analysis, and funds and time to overcome these difficulties were not available in this project. The facilities are now available for preparing biological samples for analysis with the ChIPMA instrument. Some further investment of time and resources in sample preparation should make this a useful tool for chemical imaging applications.

  14. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    Science.gov (United States)

    Xun-Ping, W; An, Z

    2017-07-27

    Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored, and an experimental study in a 50 m×50 m plot in a marshland in the Poyang Lake region was performed. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required numbers of optimal sampling points for each layer were calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison study among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA was performed. Results The method (SOPA) proposed in this study had the minimal absolute error, of 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.

  15. Rapid space trajectory generation using a Fourier series shape-based approach

    Science.gov (United States)

    Taheri, Ehsan

    With the insatiable curiosity of human beings to explore the universe and our solar system, it is essential to benefit from larger propulsion capabilities to execute efficient transfers and carry more scientific equipment. In the field of space trajectory optimization, fundamental advances in using low-thrust propulsion and exploiting multi-body dynamics have played a pivotal role in designing efficient space mission trajectories. The former provides larger cumulative momentum change in comparison with conventional chemical propulsion, whereas the latter results in almost ballistic trajectories with a negligible amount of propellant. However, the problem of space trajectory design translates into an optimal control problem which is, in general, time-consuming and very difficult to solve. Therefore, the goal of the thesis is to address the above problem by developing a methodology to simplify and facilitate the process of finding initial low-thrust trajectories in both two-body and multi-body environments. This initial solution will not only provide mission designers with a better understanding of the problem and solution but also serve as a good initial guess for high-fidelity optimal control solvers and increase their convergence rate. Almost all high-fidelity solvers benefit from an initial guess that already satisfies the equations of motion and some of the most important constraints. Despite the nonlinear nature of the problem, we seek a robust technique for a wide range of typical low-thrust transfers with reduced computational intensity. Another important aspect of the developed methodology is the representation of low-thrust trajectories by Fourier series, with which the number of design variables is reduced significantly. Emphasis is given to simplifying the equations of motion to the extent possible and avoiding approximating the controls. These facts contribute to speeding up the solution-finding procedure. Several example

  16. Endoscopic Endonasal Approach in Skull Base Chondrosarcoma Associated with Maffucci Syndrome: Case Series and Literature Review.

    Science.gov (United States)

    Beer-Furlan, André; Balsalobre, Leonardo; Vellutini, Eduardo A S; Stamm, Aldo C

    2016-01-01

    Maffucci syndrome is a nonhereditary disorder in which patients develop multiple enchondromas and cutaneous, visceral, or soft tissue hemangiomas. The potential malignant progression of enchondroma into a secondary chondrosarcoma is a well-known fact. Nevertheless, chondrosarcoma located at the skull base in patients with Maffucci syndrome is a very rare condition, with only 18 cases reported in the literature. We report 2 other cases successfully treated through an expanded endoscopic endonasal approach and discuss the condition based on the literature review. Skull base chondrosarcoma associated with Maffucci syndrome is a rare condition. The disease cannot be cured, therefore surgical treatment should be performed in symptomatic patients, aiming for maximal tumor resection with function preservation. The endoscopic endonasal approach is a safe and reliable alternative for the management of these tumors. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. ARIMA-Based Time Series Model of Stochastic Wind Power Generation

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Pedersen, Troels; Bak-Jensen, Birgitte

    2010-01-01

    This paper proposes a stochastic wind power model based on an autoregressive integrated moving average (ARIMA) process. The model takes into account the nonstationarity and physical limits of stochastic wind power generation. The model is constructed based on one year of wind power measurements from the Nysted offshore wind farm in Denmark. The proposed limited-ARIMA (LARIMA) model introduces a limiter and characterizes the stochastic wind power generation by mean level, temporal correlation and driving noise. The model is validated against the measurements in terms of temporal correlation and probability distribution. The LARIMA model outperforms a first-order transition-matrix-based discrete Markov model in terms of temporal correlation, probability distribution and number of model parameters. The proposed LARIMA model is further extended to include the monthly variation of the stochastic wind power generation.
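
    The limiter idea is simple to sketch: drive an ARMA recursion with noise and clamp the output to the physical limits of the farm. The coefficients, mean level, and rated power below are illustrative stand-ins, not the fitted Nysted values:

    ```python
    import numpy as np

    # Illustrative coefficients and limits -- not the fitted Nysted values.
    phi, theta = 0.98, -0.30        # AR(1) and MA(1) coefficients
    mean_level = 60.0               # MW
    p_rated = 165.0                 # MW, upper physical limit of the farm

    rng = np.random.default_rng(3)
    n = 1000
    e = rng.normal(0.0, 5.0, n)     # driving noise
    y = np.empty(n)
    y[0] = mean_level
    for t in range(1, n):
        y[t] = (mean_level + phi * (y[t - 1] - mean_level)
                + e[t] + theta * e[t - 1])
        y[t] = min(max(y[t], 0.0), p_rated)   # the limiter enforces [0, rated]
    print(y[:10].round(1))
    ```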

  18. 40 CFR 761.298 - Decisions based on PCB concentration measurements resulting from sampling.

    Science.gov (United States)

    2010-07-01

    40 CFR, Protection of Environment (2010-07-01). Cleanup and On-Site Disposal of Bulk PCB Remediation Waste and Porous Surfaces in Accordance With § 761.61(a)(6). § 761.298 Decisions based on PCB concentration measurements resulting from sampling. (a) For...

  19. INQUIRY-BASED SCIENCE COMIC PHYSICS SERIES INTEGRATED WITH CHARACTER EDUCATION

    Directory of Open Access Journals (Sweden)

    D Yulianti

    2016-04-01

    Full Text Available This study aimed to test the readability and feasibility of a science comic and to examine character development through a small-scale test in several schools. The research design was Research & Development; the trials used a quasi-experimental pre-test/post-test design. The instruments were a questionnaire and an observation sheet to measure attitudes, and a test to measure comprehension of the material. The results showed that learning science with an inquiry-based science comic can improve the character and cognitive achievement of primary school students. The resulting inquiry-based science comic can be used in science learning as companion teaching material.

  20. Intelligent sizing of a series hybrid electric power-train system based on Chaos-enhanced accelerated particle swarm optimization

    International Nuclear Information System (INIS)

    Zhou, Quan; Zhang, Wei; Cash, Scott; Olatunbosun, Oluremi; Xu, Hongming; Lu, Guoxiang

    2017-01-01

    Highlights: • A novel algorithm for intelligent sizing of hybrid electric powertrains is introduced and applied. • The proposed CAPSO algorithm is capable of finding the real optimal result with much higher reputation. • Logistic mapping is the most effective strategy for building CAPSO. • The CAPSO gave more reliable results and increased the efficiency by 1.71%. - Abstract: This paper firstly proposes a novel HEV sizing method using the Chaos-enhanced Accelerated Particle Swarm Optimization (CAPSO) algorithm and secondly provides a demonstration of sizing a series hybrid electric powertrain, with investigations of chaotic mapping strategies to achieve global optimization. In this paper, the intelligent sizing of a series hybrid electric powertrain is formulated as an integer multi-objective optimization problem by modelling the powertrain system. The intelligent sizing mechanism based on APSO is then introduced, and 4 types of the most effective chaotic mapping strategies are investigated to upgrade the standard APSO into CAPSO algorithms for intelligent sizing. The evaluation of the intelligent sizing systems based on the standard APSO and CAPSOs is then performed. The Monte Carlo analysis and reputation evaluation indicate that CAPSO outperforms the standard APSO in finding the real optimal sizing result with much higher reputation, and that CAPSO with the logistic mapping strategy is the most effective algorithm for intelligent sizing of HEV powertrain components. In addition, this paper also performs a sensitivity analysis and a Pareto analysis to help engineers customize the intelligent sizing system.
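
    The chaos enhancement amounts to replacing the random draw in the accelerated PSO update with a chaotic sequence, most effectively the logistic map according to the abstract. A minimal sketch on a toy objective (the update rule follows the standard APSO form; all parameters are illustrative, and the powertrain sizing model itself is not reproduced):

    ```python
    import numpy as np

    def capso(f, dim, n_particles=30, iters=200, alpha=0.3, beta=0.5):
        """Accelerated PSO whose random draw is replaced by a logistic-map
        chaotic sequence z <- 4 z (1 - z); parameters are illustrative."""
        rng = np.random.default_rng(4)
        x = rng.uniform(-5.0, 5.0, (n_particles, dim))
        z = rng.uniform(0.01, 0.99, (n_particles, dim))  # chaotic states
        g = x[np.argmin([f(p) for p in x])].copy()       # global best
        for k in range(iters):
            z = 4.0 * z * (1.0 - z)                      # logistic map
            step = alpha * (0.97 ** k) * (z - 0.5)       # shrinking chaotic step
            x = (1.0 - beta) * x + beta * g + step
            fx = np.array([f(p) for p in x])
            if fx.min() < f(g):
                g = x[fx.argmin()].copy()
        return g

    sphere = lambda p: float(np.sum(p ** 2))             # toy objective
    print(capso(sphere, dim=4))                          # lands near the origin
    ```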

  1. A home-based body weight supported treadmill training program for children with cerebral palsy: A case series.

    Science.gov (United States)

    Kenyon, Lisa K; Westman, Marci; Hefferan, Ashley; McCrary, Peter; Baker, Barbara J

    2017-07-01

    Contemporary approaches to the treatment of cerebral palsy (CP) advocate a task-specific approach that emphasizes repetition and practice of specific tasks. Recent studies suggest that body-weight-supported treadmill training (BWSTT) programs may be beneficial in clinical settings. The purposes of this case series were to explore the outcomes and feasibility of a home-based BWSTT program for three children with CP. Three children with CP at Gross Motor Function Classification System (GMFCS) Levels III or IV participated in this case series. Examination included the Functional Assessment Questionnaire (FAQ), the 10-meter walk test, the Gross Motor Function Measure (GMFM-66), and the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test (PEDI-CAT). A harness system was used to conduct the BWSTT program over an 8-12 week period. All of the families reported enjoying the BWSTT program and found the harness easy to use. Participant 2 increased from a 2 to a 4 on the FAQ, while Participant 3 increased from a 6 to a 7. Two of the participants demonstrated post-intervention improvements in functional mobility. In addition to mobility outcomes, future research should explore the potential health benefits of a home-based BWSTT program.

  2. Infinite series

    CERN Document Server

    Hirschman, Isidore Isaac

    2014-01-01

    This text for advanced undergraduate and graduate students presents a rigorous approach that also emphasizes applications. Encompassing more than the usual amount of material on the problems of computation with series, the treatment offers many applications, including those related to the theory of special functions. Numerous problems appear throughout the book. The first chapter introduces the elementary theory of infinite series, followed by a relatively complete exposition of the basic properties of Taylor series and Fourier series. Additional subjects include series of functions and the app

  3. Sample selection based on kernel-subclustering for the signal reconstruction of multifunctional sensors

    International Nuclear Information System (INIS)

    Wang, Xin; Wei, Guo; Sun, Jinwei

    2013-01-01

    The signal reconstruction methods based on inverse modeling for the signal reconstruction of multifunctional sensors have been widely studied in recent years. To improve the accuracy, the reconstruction methods have become more and more complicated because of the increase in the model parameters and sample points. However, there is another factor that affects the reconstruction accuracy, the position of the sample points, which has not been studied. A reasonable selection of the sample points could improve the signal reconstruction quality in at least two ways: improved accuracy with the same number of sample points, or the same accuracy obtained with a smaller number of sample points. Both ways are valuable for improving the accuracy and decreasing the workload, especially for large batches of multifunctional sensors. In this paper, we propose a sample selection method based on kernel-subclustering that distills groupings of the sample data and produces a representation of the data set for inverse modeling. The method calculates the distance between two data points based on the kernel-induced distance instead of the conventional distance. The kernel function is a generalization of the distance metric obtained by mapping data that are non-separable in the original space into homogeneous groups in a high-dimensional space. The method obtained the best results compared with the other three methods in the simulation. (paper)
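
    As a rough illustration of the kernel-induced distance at the heart of the method, the sketch below computes d_K(x, y)^2 = K(x,x) - 2K(x,y) + K(y,y) with an RBF kernel and greedily picks well-spread representative sample points; the greedy farthest-point rule is a stand-in for the paper's subclustering step, and all parameters are illustrative.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_distance(x, y, gamma=0.5):
    # d_K(x, y)^2 = K(x,x) - 2 K(x,y) + K(y,y): distance in feature space
    d2 = rbf(x, x, gamma) - 2 * rbf(x, y, gamma) + rbf(y, y, gamma)
    return np.sqrt(max(d2, 0.0))

def select_samples(X, k, gamma=0.5):
    chosen = [0]                                  # start from the first point
    for _ in range(k - 1):
        # pick the point farthest (in kernel distance) from the chosen set
        d = [min(kernel_distance(x, X[j], gamma) for j in chosen) for x in X]
        chosen.append(int(np.argmax(d)))
    return X[chosen]

X = np.random.default_rng(0).normal(size=(200, 3))  # toy calibration data
reps = select_samples(X, k=20)                      # 20 representative points
print(reps.shape)
```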

  4. Asymptotic Effectiveness of the Event-Based Sampling According to the Integral Criterion

    Directory of Open Access Journals (Sweden)

    Marek Miskowicz

    2007-01-01

    Full Text Available The rapid progress in intelligent sensing technology creates new interest in the development of analysis and design of non-conventional sampling schemes. The investigation of event-based sampling according to the integral criterion is presented in this paper. The investigated sampling scheme is an extension of the pure linear send-on-delta/level-crossing algorithm utilized for reporting the state of objects monitored by intelligent sensors. The motivation for using event-based integral sampling is outlined. The related works in adaptive sampling are summarized. The analytical closed-form formulas for the evaluation of the mean rate of event-based traffic, and the asymptotic integral sampling effectiveness, are derived. The simulation results verifying the analytical formulas are reported. The effectiveness of the integral sampling is compared with the related linear send-on-delta/level-crossing scheme. The calculation of the asymptotic effectiveness for common signals, which model the state evolution of dynamic systems in time, is exemplified.
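
    As a rough sketch of the two schemes compared above, the fragment below triggers an event when the signal departs from the last reported value by delta (send-on-delta) or when the accumulated area of that departure reaches a threshold (integral criterion); the signal and thresholds are illustrative.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 10_000)
x = np.sin(t) + 0.3 * np.sin(3.1 * t)       # monitored state
dt = t[1] - t[0]

def send_on_delta(x, delta):
    events, last = [0], x[0]
    for i, v in enumerate(x):
        if abs(v - last) >= delta:          # report when level changes by delta
            events.append(i)
            last = v
    return events

def integral_criterion(x, theta):
    events, last, acc = [0], x[0], 0.0
    for i, v in enumerate(x):
        acc += abs(v - last) * dt           # accumulate error-signal area
        if acc >= theta:                    # report when the integral hits theta
            events.append(i)
            last, acc = v, 0.0
    return events

print(len(send_on_delta(x, 0.1)), len(integral_criterion(x, 0.05)))
```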

  5. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
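
    A minimal sketch of the workflow these steps describe, under the assumption of a toy three-input model: Latin hypercube samples of the uncertain inputs are propagated through the model, and Spearman rank correlations serve as one of the reviewed sensitivity measures.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

# (ii) generate samples from the uncertain inputs via Latin hypercube sampling
sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n=500)                       # samples on the unit cube
X = qmc.scale(u, l_bounds=[0.5, 10.0, 0.1], u_bounds=[1.5, 30.0, 0.9])

def model(x):                                   # stand-in analysis model
    return x[:, 0] ** 2 * x[:, 1] + np.sin(6 * x[:, 2])

# (iii) propagate the sampled inputs through the analysis
y = model(X)

# (v) rank-transformed sensitivity measure for each input
for j, name in enumerate(["k", "T", "phi"]):
    rho, _ = spearmanr(X[:, j], y)
    print(f"rank correlation of {name} with output: {rho:+.3f}")
```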

  6. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD. (.; .); Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.

  7. School-Based Management. School Management Digest, Series 1, No. 23.

    Science.gov (United States)

    Lindelow, John

    Many educators advocate school-based management, a method of decentralization wherein the school, instead of the district office, becomes the primary unit of educational decision-making. This shift is part of American education's long-term oscillation between administrative centralization and decentralization. Centralization, say its critics, has…

  8. A Humanistic Approach to Performance-Based Teacher Education. PBTE Series No. 10.

    Science.gov (United States)

    Nash, Paul

    Questions are raised in making performance-based teacher education (PBTE) a more humanistic enterprise. A definition of the term "humanistic" could include such qualities as freedom, uniqueness, creativity, productivity, wholeness, responsibility, and social humanization. As to freedom, a humanistic approach to PBTE would encourage people to act…

  9. Reliability Based Optimal Design of Vertical Breakwaters Modelled as a Series System Failure

    DEFF Research Database (Denmark)

    Christiani, E.; Burcharth, H. F.; Sørensen, John Dalsgaard

    1996-01-01

    Reliability based design of monolithic vertical breakwaters is considered. Probabilistic models of important failure modes such as sliding and rupture failure in the rubble mound and the subsoil are described. Characterisation of the relevant stochastic parameters is presented, relevant design variables are identified, and an optimal system reliability formulation is presented. An illustrative example is given.

  10. Flexible Pedagogies: Employer Engagement and Work-Based Learning. Flexible Pedagogies: Preparing for the Future Series

    Science.gov (United States)

    Kettle, Jane

    2013-01-01

    This publication focuses on national and international policy initiatives to develop a better understanding of work-based learners and the types of flexibility that may well enhance their study especially pedagogically. As part of our five-strand research project "Flexible Pedagogies: preparing for the future" it: (1) highlights the…

  11. Immunophenotype Discovery, Hierarchical Organization, and Template-based Classification of Flow Cytometry Samples

    Directory of Open Access Journals (Sweden)

    Ariful Azad

    2016-08-01

    Full Text Available We describe algorithms for discovering immunophenotypes from large collections of flow cytometry (FC) samples, and using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (a group of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is on the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust since templates describe phenotypic signatures common to cell populations in several samples, while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several data sets, including one representing a healthy immune system and one of Acute Myeloid Leukemia (AML) samples. The last task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML, and were able to distinguish Acute Promyelocytic Leukemia from other subtypes of AML.

  12. Performance of local information-based link prediction: a sampling perspective

    Science.gov (United States)

    Zhao, Jichang; Feng, Xu; Dong, Li; Liang, Xiao; Xu, Ke

    2012-08-01

    Link prediction is pervasively employed to uncover the missing links in the snapshots of real-world networks, which are usually obtained through different kinds of sampling methods. In the previous literature, in order to evaluate the performance of the prediction, known edges in the sampled snapshot are divided into the training set and the probe set randomly, without considering the underlying sampling approaches. However, different sampling methods might lead to different missing links, especially for the biased ways. For this reason, random partition-based evaluation of performance is no longer convincing if we take the sampling method into account. In this paper, we try to re-evaluate the performance of local information-based link predictions through a sampling-method-governed division of the training set and the probe set. Interestingly, we find that each prediction approach performs unevenly across different sampling methods. Moreover, most of these predictions perform weakly when the sampling method is biased, which indicates that the performance of these methods might have been overestimated in the prior works.
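
    As a small illustration of the local-information predictors being evaluated, the sketch below scores node pairs by common neighbours and by the resource-allocation index after a random training/probe split; in the spirit of the paper, the split could instead be governed by the sampling method. The graph is a toy example.

```python
import random
from collections import defaultdict

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (2, 5)]
random.seed(0)
random.shuffle(edges)
probe, train = edges[:2], edges[2:]            # hold out 25% of known edges

adj = defaultdict(set)
for u, v in train:
    adj[u].add(v)
    adj[v].add(u)

def common_neighbours(u, v):
    return len(adj[u] & adj[v])

def resource_allocation(u, v):
    # each common neighbour w contributes 1/degree(w)
    return sum(1.0 / len(adj[w]) for w in adj[u] & adj[v])

for u, v in probe:
    print((u, v), common_neighbours(u, v), round(resource_allocation(u, v), 3))
```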

  13. Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems

    KAUST Repository

    Xinyu Tang,

    2010-01-25

    Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second. © 2010 The Author(s).

  14. Revisiting random walk based sampling in networks: evasion of burn-in period and frequent regenerations.

    Science.gov (United States)

    Avrachenkov, Konstantin; Borkar, Vivek S; Kadavankandy, Arun; Sreedharan, Jithin K

    2018-01-01

    In the framework of network sampling, random walk (RW) based estimation techniques provide many pragmatic solutions while uncovering the unknown network as little as possible. Despite several theoretical advances in this area, RW based sampling techniques usually make a strong assumption that the samples are in stationary regime, and hence are impelled to leave out the samples collected during the burn-in period. This work proposes two sampling schemes without burn-in time constraint to estimate the average of an arbitrary function defined on the network nodes, for example, the average age of users in a social network. The central idea of the algorithms lies in exploiting regeneration of RWs at revisits to an aggregated super-node or to a set of nodes, and in strategies to enhance the frequency of such regenerations either by contracting the graph or by making the hitting set larger. Our first algorithm, which is based on reinforcement learning (RL), uses stochastic approximation to derive an estimator. This method can be seen as intermediate between purely stochastic Markov chain Monte Carlo iterations and deterministic relative value iterations. The second algorithm, which we call the Ratio with Tours (RT)-estimator, is a modified form of respondent-driven sampling (RDS) that accommodates the idea of regeneration. We study the methods via simulations on real networks. We observe that the trajectories of RL-estimator are much more stable than those of standard random walk based estimation procedures, and its error performance is comparable to that of respondent-driven sampling (RDS) which has a smaller asymptotic variance than many other estimators. Simulation studies also show that the mean squared error of RT-estimator decays much faster than that of RDS with time. The newly developed RW based estimators (RL- and RT-estimators) avoid the burn-in period, provide better control of stability along the sample path, and overall reduce the estimation time. Our
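
    A minimal sketch of the regeneration idea behind the tour-based estimator, assuming a known anchor node at which the walk regenerates: samples are reweighted by inverse degree to undo the degree bias of the random walk, and no burn-in samples are discarded. The graph and node function are illustrative.

```python
import random
from collections import defaultdict

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (2, 5)]
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

f = {v: float(v) for v in adj}            # toy node function (e.g. user age)
anchor, n_tours, rng = 0, 5000, random.Random(1)

num = den = 0.0
for _ in range(n_tours):
    # each renewal cycle visits the anchor exactly once: count it first
    num += f[anchor] / len(adj[anchor])
    den += 1.0 / len(adj[anchor])
    v = rng.choice(adj[anchor])           # leave the anchor: a tour begins
    while v != anchor:                    # regenerate on return to the anchor
        num += f[v] / len(adj[v])         # importance weight 1/degree
        den += 1.0 / len(adj[v])
        v = rng.choice(adj[v])

print("estimated network average of f:", num / den)
print("true network average of f:", sum(f.values()) / len(f))
```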

  15. Time series analytics using sliding window metaheuristic optimization-based machine learning system for identifying building energy consumption patterns

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Ngo, Ngoc-Tri

    2016-01-01

    Highlights: • This study develops a novel time-series sliding window forecast system. • The system integrates metaheuristics, machine learning and time-series models. • Site experiment of smart grid infrastructure is installed to retrieve real-time data. • The proposed system accurately predicts energy consumption in residential buildings. • The forecasting system can help users minimize their electricity usage. - Abstract: Smart grids are a promising solution to the rapidly growing power demand because they can considerably increase building energy efficiency. This study developed a novel time-series sliding window metaheuristic optimization-based machine learning system for predicting real-time building energy consumption data collected by a smart grid. The proposed system integrates a seasonal autoregressive integrated moving average (SARIMA) model and metaheuristic firefly algorithm-based least squares support vector regression (MetaFA-LSSVR) model. Specifically, the proposed system fits the SARIMA model to linear data components in the first stage, and the MetaFA-LSSVR model captures nonlinear data components in the second stage. Real-time data retrieved from an experimental smart grid installed in a building were used to evaluate the efficacy and effectiveness of the proposed system. A k-week sliding window approach is proposed for employing historical data as input for the novel time-series forecasting system. The prediction system yielded high and reliable accuracy rates in 1-day-ahead predictions of building energy consumption, with a total error rate of 1.181% and mean absolute error of 0.026 kW h. Notably, the system demonstrates an improved accuracy rate in the range of 36.8–113.2% relative to those of the linear forecasting model (i.e., SARIMA) and nonlinear forecasting models (i.e., LSSVR and MetaFA-LSSVR). Therefore, end users can further apply the forecasted information to enhance efficiency of energy usage in their buildings, especially
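
    A rough sketch of the two-stage structure described above, assuming synthetic data: a SARIMA model fits the linear component of a sliding window, and a support vector regressor models the residuals. The firefly-optimized LSSVR of the paper is replaced, for brevity, by an SVR with fixed hyperparameters; data, orders and window length are illustrative.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 400
y = 10 + np.sin(2 * np.pi * np.arange(n) / 24) + 0.1 * rng.standard_normal(n)

window, lags = 240, 4
train = y[-window:]                        # sliding window of recent data

# stage 1: SARIMA captures the linear / seasonal component
sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24)).fit(disp=False)
resid = sarima.resid

# stage 2: learn the nonlinear structure left in the residuals
Xr = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
yr = resid[lags:]
svr = SVR(C=10.0, gamma=0.5).fit(Xr, yr)

linear_part = sarima.forecast(steps=1)[0]
nonlinear_part = svr.predict(resid[-lags:].reshape(1, -1))[0]
print("1-step-ahead forecast:", linear_part + nonlinear_part)
```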

  16. Multimodal physiotherapy treatment based on a biobehavioral approach for patients with chronic cervico-craniofacial pain: a prospective case series.

    Science.gov (United States)

    Marcos-Martín, Fernando; González-Ferrero, Luis; Martín-Alcocer, Noelia; Paris-Alemany, Alba; La Touche, Roy

    2018-01-17

    The purpose of this prospective case series was to observe and describe changes in patients with chronic cervico-craniofacial pain of muscular origin treated with multimodal physiotherapy based on a biobehavioral approach. Nine patients diagnosed with chronic myofascial temporomandibular disorder and neck pain were treated with 6 sessions over the course of 2 weeks including: (1) orthopedic manual physiotherapy (joint mobilizations, neurodynamic mobilization, and dynamic soft tissue mobilizations); (2) therapeutic exercises (motor control and muscular endurance exercises); and (3) patient education. The outcome measures of craniofacial (CF-PDI) and neck disability (NDI), kinesiophobia (TSK-11) and catastrophizing (PCS), and range of cervical and mandibular motion (ROM) and posture were collected at baseline, and at 2 and 14 weeks post-baseline. Compared to baseline, statistically significant improvements in these outcome measures, including posture, were observed following a multimodal physiotherapy treatment based on a biobehavioral approach.

  17. Evaluation of data reduction methods for dynamic PET series based on Monte Carlo techniques and the NCAT phantom

    International Nuclear Information System (INIS)

    Thireou, Trias; Rubio Guivernau, Jose Luis; Atlamazoglou, Vassilis; Ledesma, Maria Jesus; Pavlopoulos, Sotiris; Santos, Andres; Kontaxakis, George

    2006-01-01

    A realistic dynamic positron-emission tomography (PET) thoracic study was generated, using the 4D NURBS-based (non-uniform rational B-splines) cardiac-torso (NCAT) phantom and a sophisticated model of the PET imaging process, simulating two solitary pulmonary nodules. Three data reduction and blind source separation methods were applied to the simulated data: principal component analysis, independent component analysis and similarity mapping. All methods reduced the initial amount of image data to a smaller, comprehensive and easily managed set of parametric images, where structures were separated based on their different kinetic characteristics and the lesions were readily identified. The results indicate that the above-mentioned methods can provide an accurate tool for the support of both visual inspection and subsequent detailed kinetic analysis of the dynamic series via compartmental or non-compartmental models
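
    As a small illustration of the first of these data-reduction methods, the sketch below applies PCA to a synthetic dynamic series in which each "voxel" mixes two kinetic time courses; the components act as parametric images separating structures by their kinetics. All curves and sizes are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 30)                         # 30 time frames
k_blood = np.exp(-t / 5)                           # fast kinetics
k_lesion = (1 - np.exp(-t / 10)) * np.exp(-t / 80) # slow uptake + washout

mix = rng.uniform(size=(1000, 2))                  # 1000 voxels, 2 sources
frames = mix @ np.vstack([k_blood, k_lesion])      # (voxels, frames)
frames += 0.02 * rng.standard_normal((1000, 30))   # acquisition noise

pca = PCA(n_components=2)
scores = pca.fit_transform(frames)                 # parametric "images"
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))
```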

  18. A Big Data Approach for Situation-Aware estimation, correction and prediction of aerosol effects, based on MODIS Joint Atmosphere product (collection 6) time series data

    Science.gov (United States)

    Singh, A. K.; Toshniwal, D.

    2017-12-01

    The MODIS Joint Atmosphere products MODATML2 and MYDATML2 L2/3, provided by LAADS DAAC (Level-1 and Atmosphere Archive & Distribution System Distributed Active Archive Center) and re-sampled from medium-resolution MODIS Terra/Aqua satellite data at 5 km scale, contain Cloud Reflectance, Cloud Top Temperature, Water Vapor, Aerosol Optical Depth/Thickness and Humidity data. These re-sampled data, when used for deriving climatic effects of aerosols (particularly in the case of the cooling effect), still expose limitations in the presence of uncertainty in atmospheric artifacts such as aerosol, cloud, cirrus cloud, etc. The uncertainty in these artifacts poses an important challenge for the estimation of aerosol effects and affects precise regional weather modeling and prediction: forecasting and recommendation applications largely depend on these short-term local conditions (e.g. city/locality-based recommendations to citizens/farmers built on local weather models). Our approach incorporates artificial intelligence techniques for representing heterogeneous data (satellite data along with air quality data from local weather stations, i.e. in situ data) to learn, correct and predict aerosol effects in the presence of cloud and other atmospheric artifacts, using spatio-temporal correlations and regressions. The Big Data processing pipeline, consisting of correlation and regression techniques developed on the Apache Spark platform, can easily scale to large data sets including many tiles (scenes) and over a wide time-scale. Keywords: Climatic Effects of Aerosols, Situation-Aware, Big Data, Apache Spark, MODIS Terra/Aqua, Time Series

  19. Mindfulness-based cognitive therapy in patients with late-life depression: A case series

    OpenAIRE

    Sonal Mathur; Mahendra Prakash Sharma; Srikala Bharath

    2016-01-01

    Depression is the most common mental illness in the elderly, and cost-effective treatments are required. Therefore, this study is aimed at evaluating the effectiveness of a mindfulness-based cognitive therapy (MBCT) on depressive symptoms, mindfulness skills, acceptance, and quality of life across four domains in patients with late-onset depression. A single case design with pre- and post-assessment was adopted. Five patients meeting the specified inclusion and exclusion criteria were recruit...

  20. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    Science.gov (United States)

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.

  1. A series of dithienobenzodithiophene based small molecules for highly efficient organic solar cells

    Institute of Scientific and Technical Information of China (English)

    Huanran Feng; Miaomiao Li; Wang Ni; Bin Kan; Yunchuang Wang; Yamin Zhang; Hongtao Zhang; Xiangjian Wan; Yongsheng Chen

    2017-01-01

    Three acceptor-donor-acceptor (A-D-A) small molecules, DCAODTBDT, DRDTBDT and DTBDTBDT, using dithieno[2,3-d:2’,3’-d’]benzo[1,2-b:4,5-b’]dithiophene as the central building block and octyl cyanoacetate, 3-octylrhodanine and thiobarbituric acid as the end groups, were designed and synthesized as donor materials in solution-processed photovoltaic cells (OPVs). The impacts of these different electron-withdrawing end groups on the photophysical properties, energy levels, charge carrier mobility, morphologies of blend films, and photovoltaic properties have been systematically investigated. The OPV device based on DRDTBDT gave the best power conversion efficiency (PCE) of 8.34%, which was significantly higher than that based on DCAODTBDT (4.83%) or DTBDTBDT (3.39%). These results indicate that rather dedicated and balanced consideration of absorption, energy levels, morphology, mobility, etc. for the design of small-molecule-based OPVs (SM-OPVs), together with systematic investigations, is highly needed to achieve high performance for SM-OPVs.

  2. A series of dithienobenzodithiophene based small molecules for highly efficient organic solar cells

    Institute of Scientific and Technical Information of China (English)

    Huanran Feng; Miaomiao Li; Wang Ni; Bin Kan; Yunchuang Wang; Yamin Zhang; Hongtao Zhang; Xiangjian Wan; Yongsheng Chen

    2017-01-01

    Three acceptor-donor-acceptor (A-D-A) small molecules, DCAODTBDT, DRDTBDT and DTBDTBDT, using dithieno[2,3-d:2',3'-d']benzo[1,2-b:4,5-b']dithiophene as the central building block and octyl cyanoacetate, 3-octylrhodanine and thiobarbituric acid as the end groups, were designed and synthesized as donor materials in solution-processed photovoltaic cells (OPVs). The impacts of these different electron-withdrawing end groups on the photophysical properties, energy levels, charge carrier mobility, morphologies of blend films, and photovoltaic properties have been systematically investigated. The OPV device based on DRDTBDT gave the best power conversion efficiency (PCE) of 8.34%, which was significantly higher than that based on DCAODTBDT (4.83%) or DTBDTBDT (3.39%). These results indicate that rather dedicated and balanced consideration of absorption, energy levels, morphology, mobility, etc. for the design of small-molecule-based OPVs (SM-OPVs), together with systematic investigations, is highly needed to achieve high performance for SM-OPVs.

  3. Accounting for sampling error when inferring population synchrony from time-series data: a Bayesian state-space modelling approach with applications.

    Directory of Open Access Journals (Sweden)

    Hugues Santin-Janin

    Full Text Available BACKGROUND: Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. METHODOLOGY/PRINCIPAL FINDINGS: The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we present a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compare our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging few replicates of population size estimates performs poorly at decreasing the bias of the classical estimator of the synchrony strength. CONCLUSION/SIGNIFICANCE: The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provide a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for
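
    The downward bias the authors describe is easy to reproduce in simulation: in the sketch below, two populations share a latent synchrony of 0.8, yet independent sampling errors shrink the correlation computed from the observed series. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n, rho = 500, 0.8
# latent log-abundances with true synchrony rho
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
obs = z + 0.8 * rng.standard_normal((n, 2))   # independent sampling error

true_corr = np.corrcoef(z.T)[0, 1]
obs_corr = np.corrcoef(obs.T)[0, 1]
print(f"latent synchrony: {true_corr:.2f}, observed synchrony: {obs_corr:.2f}")
# A state-space model recovers the latent value by modelling the
# sampling-error variance explicitly instead of correlating raw counts.
```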

  4. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was assumed to be an importance sampling function. A Kriging metamodel was constructed in more detail in the vicinity of a limit state. The failure probability was calculated based on importance sampling, which was performed for the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for a kernel density in the vicinity of a limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
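
    A minimal sketch of the overall estimator, assuming a Gaussian process as the Kriging metamodel and a hand-placed Gaussian proposal near the limit state; the paper's Markov-chain construction of the kernel importance density is beyond this fragment, and the limit-state function is a toy stand-in.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):                                     # toy limit-state function
    return 3.0 - x[:, 0] - x[:, 1]            # failure when g(x) < 0

rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 2)) * 1.5      # design spread around the origin
gp = GaussianProcessRegressor(kernel=RBF(1.0)).fit(X_train, g(X_train))

f = multivariate_normal(mean=[0, 0])          # nominal input density
q = multivariate_normal(mean=[1.5, 1.5])      # proposal centred near the limit state

Xs = q.rvs(size=20_000, random_state=1)
w = f.pdf(Xs) / q.pdf(Xs)                     # importance weights
fail = gp.predict(Xs) < 0.0                   # failure flagged by the metamodel
print("estimated failure probability:", np.mean(w * fail))
```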

  5. Stochastic bounded consensus tracking of leader-follower multi-agent systems with measurement noises based on sampled data with general sampling delay

    International Nuclear Information System (INIS)

    Wu Zhi-Hai; Peng Li; Xie Lin-Bo; Wen Ji-Wei

    2013-01-01

    In this paper we provide a unified framework for consensus tracking of leader-follower multi-agent systems with measurement noises based on sampled data with a general sampling delay. First, a stochastic bounded consensus tracking protocol based on sampled data with a general sampling delay is presented by employing the delay decomposition technique. Then, necessary and sufficient conditions are derived for guaranteeing leader-follower multi-agent systems with measurement noises and a time-varying reference state to achieve mean square bounded consensus tracking. The obtained results cover no sampling delay, a small sampling delay and a large sampling delay as three special cases. Last, simulations are provided to demonstrate the effectiveness of the theoretical results. (interdisciplinary physics and related areas of science and technology)

  6. A novel model for Time-Series Data Clustering Based on piecewise SVD and BIRCH for Stock Data Analysis on Hadoop Platform

    Directory of Open Access Journals (Sweden)

    Ibgtc Bowala

    2017-06-01

    Full Text Available With the rapid growth of financial markets, analyzers are paying more attention to predictions. Stock data are time series data with huge amounts. A feasible solution for handling the increasing amount of data is to use a cluster for parallel processing, and the Hadoop parallel computing platform is a typical representative. There are various statistical models for forecasting time series data, but accurate clusters are a prerequisite. Clustering analysis for time series data is one of the main methods for mining time series data for many other analysis processes. However, general clustering algorithms cannot perform clustering for time series data, because series data have a special structure and high dimensionality, with highly correlated values and a high noise level. A novel model for time series clustering is presented using BIRCH, based on piecewise SVD, leading to a novel dimension reduction approach. Highly correlated features are handled using SVD, with a novel approach for dimensionality reduction in order to keep the correlated behavior optimal, and BIRCH is then used for clustering. The algorithm is a novel model that can handle massive time series data. Finally, this new model is successfully applied to real stock time series data of Yahoo finance with satisfactory results.

  7. 350 keV accelerator based PGNAA setup to detect nitrogen in bulk samples

    Energy Technology Data Exchange (ETDEWEB)

    Naqvi, A.A., E-mail: aanaqvi@kfupm.edu.sa [Department of Physics and King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia); Al-Matouq, Faris A.; Khiari, F.Z.; Gondal, M.A.; Rehman, Khateeb-ur [Department of Physics and King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia); Isab, A.A. [Department of Chemistry, King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia); Raashid, M.; Dastageer, M.A. [Department of Physics and King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia)

    2013-11-21

    Nitrogen concentration was measured in bulk samples of explosive and narcotics proxy materials, e.g. anthranilic acid, caffeine, melamine, and urea, through the thermal neutron capture reaction using a 350 keV accelerator based prompt gamma ray neutron activation (PGNAA) setup. The intensity of the 2.52, 3.53–3.68, 4.51, 5.27–5.30 and 10.38 MeV prompt gamma rays of nitrogen from the bulk samples was measured using a cylindrical 100 mm×100 mm (diameter×height) BGO detector. In spite of the interference of nitrogen gamma rays from the bulk samples with capture prompt gamma rays from the BGO detector material, an excellent agreement between the experimental and calculated yields of nitrogen gamma rays has been obtained. This is an indication of the excellent performance of the PGNAA setup for detection of nitrogen in bulk samples.

  8. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    Science.gov (United States)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount where uncertainties for the translational and the rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, with a view to quantifying the constellations of and measuring the distances between at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.

  9. 350 keV accelerator based PGNAA setup to detect nitrogen in bulk samples

    International Nuclear Information System (INIS)

    Naqvi, A.A.; Al-Matouq, Faris A.; Khiari, F.Z.; Gondal, M.A.; Rehman, Khateeb-ur; Isab, A.A.; Raashid, M.; Dastageer, M.A.

    2013-01-01

    Nitrogen concentration was measured in bulk samples of explosive and narcotics proxy materials, e.g. anthranilic acid, caffeine, melamine, and urea, through the thermal neutron capture reaction using a 350 keV accelerator based prompt gamma ray neutron activation (PGNAA) setup. The intensity of the 2.52, 3.53–3.68, 4.51, 5.27–5.30 and 10.38 MeV prompt gamma rays of nitrogen from the bulk samples was measured using a cylindrical 100 mm×100 mm (diameter×height) BGO detector. In spite of the interference of nitrogen gamma rays from the bulk samples with capture prompt gamma rays from the BGO detector material, an excellent agreement between the experimental and calculated yields of nitrogen gamma rays has been obtained. This is an indication of the excellent performance of the PGNAA setup for detection of nitrogen in bulk samples

  10. 350 keV accelerator based PGNAA setup to detect nitrogen in bulk samples

    Science.gov (United States)

    Naqvi, A. A.; Al-Matouq, Faris A.; Khiari, F. Z.; Gondal, M. A.; Rehman, Khateeb-ur; Isab, A. A.; Raashid, M.; Dastageer, M. A.

    2013-11-01

    Nitrogen concentration was measured in bulk samples of explosive and narcotics proxy materials, e.g. anthranilic acid, caffeine, melamine, and urea, through the thermal neutron capture reaction using a 350 keV accelerator based prompt gamma ray neutron activation (PGNAA) setup. The intensity of the 2.52, 3.53-3.68, 4.51, 5.27-5.30 and 10.38 MeV prompt gamma rays of nitrogen from the bulk samples was measured using a cylindrical 100 mm×100 mm (diameter×height) BGO detector. In spite of the interference of nitrogen gamma rays from the bulk samples with capture prompt gamma rays from the BGO detector material, an excellent agreement between the experimental and calculated yields of nitrogen gamma rays has been obtained. This is an indication of the excellent performance of the PGNAA setup for detection of nitrogen in bulk samples.

  11. Influence of Freezing and Storage Procedure on Human Urine Samples in NMR-Based Metabolomics

    OpenAIRE

    Rist, Manuela; Muhle-Goll, Claudia; Görling, Benjamin; Bub, Achim; Heissler, Stefan; Watzl, Bernhard; Luy, Burkhard

    2013-01-01

    It is consensus in the metabolomics community that standardized protocols should be followed for sample handling, storage and analysis, as it is of utmost importance to maintain constant measurement conditions to identify subtle biological differences. The aim of this work, therefore, was to systematically investigate the influence of freezing procedures and storage temperatures and their effect on NMR spectra as a potentially disturbing aspect for NMR-based metabolomics studies. Urine sample...

  12. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

    Full Text Available In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.

  13. Embedded algorithms within an FPGA-based system to process nonlinear time series data

    Science.gov (United States)

    Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.

    2008-03-01

    This paper presents some preliminary results of an ongoing project. A pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core in this project. The goal is to enable and optimize the functionality of onboard data processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast growing FPGA technology offers a more powerful, efficient, and flexible hardware platform including on-site (field-programmable) reconfiguration capability of hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than coding the hardware description language (HDL) manually. The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the issues of the timing in the hardware execution cycle of the design, resource consumption, approximation accuracy, and user flexibility of input data types limited by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better
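
    As a software counterpart of the Hilbert-transform stage embedded in the FPGA, the sketch below extracts the instantaneous amplitude and frequency of a nonstationary record via the analytic signal; the chirp input is illustrative.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.cos(2 * np.pi * (5 * t + 4 * t ** 2))     # chirp: 5 Hz sweeping upward

analytic = hilbert(x)                            # analytic signal x + j*H(x)
amplitude = np.abs(analytic)                     # instantaneous envelope
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs    # instantaneous frequency (Hz)

print("frequency near start/end:", inst_freq[100], inst_freq[-100])
```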

  14. Acute hyperammonemic encephalopathy after fluoropyrimidine-based chemotherapy: A case series and review of the literature.

    Science.gov (United States)

    Mitani, Seiichiro; Kadowaki, Shigenori; Komori, Azusa; Sugiyama, Keiji; Narita, Yukiya; Taniguchi, Hiroya; Ura, Takashi; Ando, Masashi; Sato, Yozo; Yamaura, Hidekazu; Inaba, Yoshitaka; Ishihara, Makoto; Tanaka, Tsutomu; Tajika, Masahiro; Muro, Kei

    2017-06-01

    Acute hyperammonemic encephalopathy induced by fluoropyrimidines (FPs) is a rare complication. Its pathophysiology remains unclear, especially given the currently used regimens, including intermediate doses of 5-fluorouracil (5-FU) or oral FP agents. We aimed to characterize the clinical manifestations in cancer patients who developed hyperammonemic encephalopathy after receiving FP-based chemotherapy. We retrospectively reviewed 1786 patients with gastrointestinal or primary-unknown cancer who received FP-based regimens between 2007 and 2012. Eleven patients (0.6%) developed acute hyperammonemic encephalopathy. The incidence according to the administered anticancer drugs was as follows: 5-FU (8 of 1176, 0.7%), S-1 (1 of 679, 0.1%), capecitabine (2 of 225, 0.9%), and tegafur-uracil (UFT) (0 of 39, 0%). Ten patients (90.9%) had at least 1 aggravating factor, including infection, dehydration, constipation, renal dysfunction, and muscle loss. All 10 patients met the definition of sarcopenia. Median time to the onset of hyperammonemic encephalopathy in the cycle was 3 days (range: 2-21). Three patients (27.3%) developed encephalopathy during the first cycle of the regimen and the remaining 8 patients during the second or later cycles. Seven patients (63.6%) had previously received at least 1 other FP-containing regimen without episodes of encephalopathy. All patients recovered soon after immediate discontinuation of chemotherapy and supportive therapies, such as hydration, infusion of branched-chain amino acids, and oral lactulose intake, with a median time to recovery of 2 days. One patient, who had developed encephalopathy on S-1 monotherapy, later received modified FOLFOX-6 therapy without encephalopathy. FP-associated acute hyperammonemic encephalopathy is extremely rare, but a possible event at any time and even during the administration of oral FP agents. Particular attention is warranted when giving FP-based therapy to patients with aggravating factors, such as sarcopenia. This

  15. Sample preparation techniques based on combustion reactions in closed vessels - A brief overview and recent applications

    International Nuclear Information System (INIS)

    Flores, Erico M.M.; Barin, Juliano S.; Mesko, Marcia F.; Knapp, Guenter

    2007-01-01

    In this review, a general discussion of sample preparation techniques based on combustion reactions in closed vessels is presented. Applications for several kinds of samples are described, taking into account the literature data reported in the last 25 years. The operational conditions as well as the main characteristics and drawbacks are discussed for bomb combustion, oxygen flask and microwave-induced combustion (MIC) techniques. Recent applications of MIC techniques are discussed with special concern for samples not well digested by conventional microwave-assisted wet digestion as, for example, coal and also for subsequent determination of halogens

  16. Acceptance Sampling Plans Based on Truncated Life Tests for Sushila Distribution

    Directory of Open Access Journals (Sweden)

    Amer Ibrahim Al-Omari

    2018-03-01

    Full Text Available An acceptance sampling plan problem based on truncated life tests, where the lifetime follows a Sushila distribution, is considered in this paper. For various acceptance numbers, confidence levels and values of the ratio between the fixed experiment time and the specified mean lifetime, the minimum sample sizes required to ascertain a specified mean life were found. The operating characteristic function values of the suggested sampling plans and the producer’s risk are presented. Some tables are provided and the results are illustrated by an example of a real data set.
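
    A rough sketch of the minimum-sample-size search underlying such plans: for an acceptance number c, it finds the smallest n for which the probability of accepting a lot whose mean life sits at the specified limit stays below 1 - P*. The failure probability p would come from the Sushila CDF evaluated at the test-time/mean-life ratio; here it is a placeholder value.

```python
from math import comb

def accept_prob(n, c, p):
    # probability of observing at most c failures among n items on test
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(c + 1))

def min_sample_size(c, p, confidence=0.95):
    n = c + 1
    while accept_prob(n, c, p) > 1.0 - confidence:
        n += 1
    return n

p = 0.25   # placeholder for F(t0; mu0) under the Sushila distribution
for c in range(0, 5):
    print(f"acceptance number c = {c}: minimum n = {min_sample_size(c, p)}")
```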

  17. Research on time series data prediction based on clustering algorithm - A case study of Yuebao

    Science.gov (United States)

    Lu, Xu; Zhao, Tianzhong

    2017-08-01

    Forecasting is a prerequisite for making scientific decisions: based on past information about a phenomenon, combined with factors affecting it, scientific methods are used to forecast its future development trend, and this is an important way for people to know the world. It is particularly important in the prediction of financial data, because proper financial data forecasts can provide a great deal of help to financial institutions in their strategy implementation, strategic alignment and risk control. However, current forecasts of financial data generally predict the data as a whole, without considering customer behavior and other factors that are important drivers of changes in financial data. In view of this, this paper analyzed the data of Yuebao and, according to user attributes and operating characteristics, classified 567 users of Yuebao and further predicted the data of Yuebao for each class of users. The results showed that the forecasting model in this paper can meet the demand of forecasting.

  18. Synthesis, characterization, photophysics and electroluminescence based on a series of pyran-containing emitters

    Energy Technology Data Exchange (ETDEWEB)

    Yang Lifen [State Key Laboratory of Rare Earth Materials Chemistry and Applications, Peking University, Beijing 100871 (China); Guan Min [State Key Laboratory of Rare Earth Materials Chemistry and Applications, Peking University, Beijing 100871 (China); Bian Zuqiang [State Key Laboratory of Rare Earth Materials Chemistry and Applications, Peking University, Beijing 100871 (China)]. E-mail: bianzq@pku.edu.cn; Xie Junqi [State Key Laboratory of Rare Earth Materials Chemistry and Applications, Peking University, Beijing 100871 (China); Chen Tianpeng [State Key Laboratory of Rare Earth Materials Chemistry and Applications, Peking University, Beijing 100871 (China); Huang Chunhui [State Key Laboratory of Rare Earth Materials Chemistry and Applications, Peking University, Beijing 100871 (China)]. E-mail: chhuang@pku.edu.cn

    2006-04-03

    Four unsymmetric as well as symmetric carbazole or oxadiazole modified pyran-containing compounds have been synthesized and characterized. These compounds are 4-(dicyanomethylene)-2-methyl-6-(4-(carbazolo-9-yl)phenyl)-4H-pyran (10), 4-(dicyanomethylene)-2,6-bis(4-(carbazolo-9-yl)phenyl)-4H-pyran (11), 4-(dicyanomethylene)-2-methyl-6-(4-tert-phenyl)-1,3,4-oxadiazole-4-phenyl)-4H-pyran (12), and 4-(dicyanomethylene)-2,6-bis(4-tert-phenyl)-1,3,4-oxadiazole-4-phenyl-4H-pyran (13). Photoluminescent measurements indicated that their maximal emissions can be tuned from 543 to 590 nm in acetone solution. Electroluminescent studies based on these compounds as dopants resulted in greenish yellow light emission. It was found that the device based on the bis-condensed symmetric compound (11) with the configuration of indium tin oxide / Copper (II) phthalocyanine (5 nm) / N,N'-bis-(1-naphthyl)-diphenyl-1,1'-biphenyl-4,4'-diamine (40 nm) / compound (11) : tris-(8-quinolinolato)aluminium (Alq₃) (1%) (30 nm) / 2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline (5 nm) / Alq₃ (40 nm) / Mg : Ag (9 : 1) (200 nm) / Ag (80 nm) has achieved the highest luminance (6869 cd/m²) and efficiency (1.32 lm/W and 2.52 cd/A) among the four emitters.

  19. Mutual Information-Based Inputs Selection for Electric Load Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Nenad Floranović

    2013-02-01

    Full Text Available Providing accurate load forecasts to electric utility corporations is essential in order to reduce their operational costs and increase profits. Hence, training set selection is an important preprocessing step which has to be considered in practice in order to increase the accuracy of load forecasts. The usage of mutual information (MI) has been recently proposed in regression tasks, mostly for feature selection and for identifying the real instances from training sets that contain noise and outliers. This paper proposes a methodology for the training set selection in a least squares support vector machines (LS-SVMs) load forecasting model. A new application of the concept of MI is presented for the selection of a training set based on MI computation between initial training set instances and testing set instances. Accordingly, several LS-SVM models have been trained, based on the proposed methodology, for hourly prediction of electric load for one day ahead. The results obtained from a real-world data set indicate that the proposed method increases the accuracy of load forecasting as well as reduces the size of the initial training set needed for model training.
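
    As a simplified illustration of MI-guided selection (applied here to candidate lag features rather than to training instances, which is the paper's actual novelty), the sketch below ranks lagged inputs of a toy hourly load series by estimated mutual information with the target; data and lag range are illustrative.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
h = np.arange(2000)
load = 50 + 10 * np.sin(2 * np.pi * h / 24) + rng.standard_normal(2000)

max_lag = 48
# column k-1 holds the load lagged by k hours, aligned with the target y
X = np.column_stack([load[max_lag - k : len(load) - k] for k in range(1, max_lag + 1)])
y = load[max_lag:]

mi = mutual_info_regression(X, y, random_state=0)
best = np.argsort(mi)[::-1][:5] + 1              # top five lags (hours)
print("most informative lags:", best)
print("their MI estimates:", np.round(np.sort(mi)[::-1][:5], 3))
```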

  20. A study of uranium series disequilibrium in core profiles and mineral separates from the samples of Lac du Bonnet granite from the URL site, Pinawa, Manitoba, Canada

    International Nuclear Information System (INIS)

    Ivanovich, M.; Longworth, G.; Wilkins, M.A.; Hasler, S.E.

    1987-12-01

    Uranium series disequilibrium measurements of actinide activities and activity ratios have been used to study the geochemical history of Lac du Bonnet granite, from the URL site, Pinawa, Canada. Measurements on core profiles between fractured surfaces and the parent rock show that the granite underwent high temperature events several million years ago, followed by more recent low temperature events within the last million years. The main locations for the rock/water interaction and exchange of actinides are the fracture surfaces. The results of similar measurements on separated mineral phases show that the 'soft' minerals such as biotite and feldspar are more vulnerable to weathering than the 'hard' accessory minerals such as zircon. (author)

  1. A Procedure for Identification of Appropriate State Space and ARIMA Models Based on Time-Series Cross-Validation

    Directory of Open Access Journals (Sweden)

    Patrícia Ramos

    2016-11-01

    Full Text Available In this work, a cross-validation procedure is used to identify an appropriate Autoregressive Integrated Moving Average model and an appropriate state space model for a time series. A minimum size for the training set is specified. The procedure is based on one-step forecasts and uses different training sets, each containing one more observation than the previous one. All possible state space models and all ARIMA models where the orders are allowed to range reasonably are fitted considering raw data and log-transformed data with regular differencing (up to second order differences) and, if the time series is seasonal, seasonal differencing (up to first order differences). The value of root mean squared error for each model is calculated by averaging over the one-step forecasts obtained. The model which has the lowest root mean squared error value and passes the Ljung–Box test using all of the available data with a reasonable significance level is selected among all the ARIMA and state space models considered. The procedure is exemplified in this paper with a case study of retail sales of different categories of women’s footwear from a Portuguese retailer, and its accuracy is compared with three reliable forecasting approaches. The results show that our procedure consistently forecasts more accurately than the other approaches and the improvements in the accuracy are significant.
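
    A minimal sketch of the procedure with a reduced candidate set, assuming statsmodels: expanding training sets give one-step forecasts, RMSE picks the winner, and a Ljung-Box test checks its residuals. Orders, data and lag choice are illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(120)) + 50      # toy sales-like series
min_train, candidates = 80, [(1, 1, 0), (0, 1, 1), (1, 1, 1)]

scores = {}
for order in candidates:
    errors = []
    for end in range(min_train, len(y)):          # expanding training sets
        fit = ARIMA(y[:end], order=order).fit()
        errors.append(y[end] - fit.forecast(steps=1)[0])
    scores[order] = np.sqrt(np.mean(np.square(errors)))

best = min(scores, key=scores.get)
resid = ARIMA(y, order=best).fit().resid          # refit on all data
lb = acorr_ljungbox(resid, lags=[10])
print("best order:", best, "RMSE:", round(scores[best], 3))
print("Ljung-Box p-value:", float(lb["lb_pvalue"].iloc[0]))
```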

  2. The forecasting of menstruation based on a state-space modeling of basal body temperature time series.

    Science.gov (United States)

    Fukaya, Keiichi; Kawamori, Ai; Osada, Yutaka; Kitazawa, Masumi; Ishiguro, Makio

    2017-09-20

    Women's basal body temperature (BBT) shows a periodic pattern that is associated with the menstrual cycle. Although this fact suggests that daily BBT time series can be useful for estimating the underlying phase state as well as for predicting the length of the current menstrual cycle, little attention has been paid to modelling BBT time series. In this study, we propose a state-space model that involves the menstrual phase as a latent state variable to explain the daily fluctuation of BBT and the menstruation cycle length. Conditional distributions of the phase are obtained by using sequential Bayesian filtering techniques. A predictive distribution of the next menstruation day can be derived based on this conditional distribution and the model, leading to a novel statistical framework that provides a sequentially updated prediction for the upcoming menstruation day. We applied this framework to a real data set of women's BBT and menstruation days and compared the prediction accuracy of the proposed method with that of previous methods, showing that the proposed method generally provides a better prediction. Because BBT can be obtained with relatively small cost and effort, the proposed method can be useful for women's health management. Potential extensions of this framework as the basis of modeling and predicting events that are associated with the menstrual cycles are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  3. Thermal-Induced Errors Prediction and Compensation for a Coordinate Boring Machine Based on Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Jun Yang

    2014-01-01

    Full Text Available To improve CNC machine tool precision, a thermal error model for the motorized spindle was proposed based on time series analysis, considering the length of the cutting tools and the thermal declination angles, and real-time error compensation was implemented. A five-point method was applied to measure the radial thermal declinations and axial expansion of the spindle with eddy current sensors, solving the problem that a three-point measurement cannot obtain the radial thermal angle errors. Then the stationarity of the thermal error sequences was determined by the Augmented Dickey-Fuller test, and the autocorrelation/partial autocorrelation function was applied to identify the model pattern. By combining the Yule-Walker equations with information criteria, the order and parameters of the models were solved effectively, which improved the prediction accuracy and generalization ability. The results indicated that the prediction accuracy of the time series model could reach up to 90%. In addition, the axial maximum error decreased from 39.6 μm to 7 μm after error compensation, and the machining accuracy was improved by 89.7%. Moreover, the X/Y-direction accuracy can reach up to 77.4% and 86%, respectively, which demonstrated that the proposed methods of measurement, modeling, and compensation were effective.

  4. Passenger Flow Forecasting Research for Airport Terminal Based on SARIMA Time Series Model

    Science.gov (United States)

    Li, Ziyu; Bi, Jun; Li, Zhiyin

    2017-12-01

    Based on operational data from Kunming Changshui International Airport during 2016, this paper proposes a Seasonal Autoregressive Integrated Moving Average (SARIMA) model to predict passenger flow. The model considers not only the non-stationarity and autocorrelation of the sequence but also its daily periodicity. The prediction results accurately describe the trend of airport passenger flow and provide scientific decision support for the optimal allocation of airport resources and the optimization of the departure process. The results show that this model is applicable to the short-term prediction of airport terminal departure passenger traffic, with an average error ranging from 1% to 3%. The difference between the predicted and true values of passenger traffic flow is quite small, which indicates that the model has fairly good prediction ability.
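
    A minimal SARIMA sketch for an hourly series with daily periodicity, assuming statsmodels and a pandas Series flow of hourly passenger counts; the (p,d,q)(P,D,Q,s) orders are illustrative, not those identified in the paper:

        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # flow: hourly departure passenger counts for the terminal (pandas Series)
        model = SARIMAX(flow, order=(1, 1, 1), seasonal_order=(1, 1, 1, 24))
        result = model.fit(disp=False)
        forecast = result.forecast(steps=24)       # next day, hour by hour
        print(forecast.round())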

  5. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    Science.gov (United States)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    Investors in stocks are faced with the issue of risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses Mean-Variance optimization of a stock portfolio with non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization is performed using the Lagrange multiplier technique. As a numerical illustration, the method is used to analyse several Islamic stocks in Indonesia, yielding the proportion of investment in each Islamic stock analysed.
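
    A minimal sketch of the pipeline described above: ARMA forecasts for the means, GARCH(1,1) forecasts for the variances, and portfolio weights from the Lagrangian first-order conditions of the mean-variance problem. It assumes statsmodels, the arch package, and a returns DataFrame rets; the diagonal covariance and the target return are simplifying assumptions:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from arch import arch_model

        mu, var = [], []
        for col in rets:                           # rets: (T x n) returns DataFrame
            r = rets[col].dropna()
            mu.append(ARIMA(r, order=(1, 0, 1)).fit().forecast(1).iloc[0])
            g = arch_model(100 * r, vol="GARCH", p=1, q=1).fit(disp="off")
            var.append(g.forecast(horizon=1).variance.iloc[-1, 0] / 100**2)

        mu, Sigma = np.array(mu), np.diag(var)     # diagonal Sigma for simplicity
        ones, target = np.ones(len(mu)), 0.001
        # KKT system for: min w'Sigma w  s.t.  w'mu = target and w'1 = 1
        K = np.block([[2 * Sigma, mu[:, None], ones[:, None]],
                      [mu[None, :], np.zeros((1, 2))],
                      [ones[None, :], np.zeros((1, 2))]])
        rhs = np.concatenate([np.zeros(len(mu)), [target, 1.0]])
        w = np.linalg.solve(K, rhs)[:len(mu)]      # optimal portfolio weights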

  6. 20 Years of Total and Tropical Ozone Time Series Based on European Satellite Observations

    Science.gov (United States)

    Loyola, D. G.; Heue, K. P.; Coldewey-Egbers, M.

    2016-12-01

    Ozone is an important trace gas in the atmosphere: while the stratospheric ozone layer protects the Earth's surface from incident UV radiation, tropospheric ozone acts as a greenhouse gas and causes health damage as well as crop loss. The total ozone column is dominated by the stratospheric column; the tropospheric column contributes only about 10% to the total. The ozone column data from the European satellite instruments GOME, SCIAMACHY, OMI, GOME-2A and GOME-2B are available within the ESA Climate Change Initiative project with a high degree of inter-sensor consistency. The tropospheric ozone columns are based on the convective cloud differential algorithm. The datasets encompass a period of more than 20 years between 1995 and 2015; for the trend analysis, the data sets were harmonized relative to one of the instruments. For the tropics we found an increase in the tropospheric ozone column of 0.75 ± 0.12 DU decade^{-1}, with local variations between 1.8 and -0.8 DU decade^{-1}. The largest trends were observed over southern Africa and the Atlantic Ocean. A seasonal trend analysis led to the assumption that the increase is caused by additional forest fires. The trend for the total column was less certain: based on model-predicted trend data and the measurement uncertainty, we estimated that another 10 to 15 years of observations will be required to observe a statistically significant trend. In the mid-latitudes the trends are currently hidden in the large variability, and for the tropics the modelled trends are low. The possibility of diverging trends at different altitudes must also be considered; an increase in tropospheric ozone might be accompanied by decreasing stratospheric ozone. The European satellite data record will be extended over the next two decades with the atmospheric satellite missions Sentinel 5 Precursor (launch at the end of 2016), Sentinel 4 and Sentinel 5.
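
    A minimal sketch of the first step in this kind of trend analysis: a linear trend plus seasonal harmonics fitted by least squares to a monthly column series. The array ozone and the two-harmonic seasonal model are illustrative assumptions:

        import numpy as np

        t = np.arange(len(ozone)) / 12.0           # time in years, monthly data
        X = np.column_stack([np.ones_like(t), t,
                             np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
                             np.cos(4 * np.pi * t), np.sin(4 * np.pi * t)])
        beta, *_ = np.linalg.lstsq(X, ozone, rcond=None)
        resid = ozone - X @ beta
        var_trend = (np.sum(resid**2) / (len(t) - X.shape[1])
                     * np.linalg.inv(X.T @ X)[1, 1])
        print(f"trend: {10 * beta[1]:.2f} +/- {10 * 2 * np.sqrt(var_trend):.2f} DU per decade")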

  7. Rapid filtration separation-based sample preparation method for Bacillus spores in powdery and environmental matrices.

    Science.gov (United States)

    Isabel, Sandra; Boissinot, Maurice; Charlebois, Isabelle; Fauvel, Chantal M; Shi, Lu-E; Lévesque, Julie-Christine; Paquin, Amélie T; Bastien, Martine; Stewart, Gale; Leblanc, Eric; Sato, Sachiko; Bergeron, Michel G

    2012-03-01

    Authorities frequently need to analyze suspicious powders and other samples for biothreat agents in order to assess environmental safety. Numerous nucleic acid detection technologies have been developed to detect and identify biowarfare agents in a timely fashion. The extraction of microbial nucleic acids from a wide variety of powdery and environmental samples, at a quality level adequate for these technologies, still remains a technical challenge. We aimed to develop a rapid and versatile method of separating bacteria from these samples and then extracting their microbial DNA. Bacillus atrophaeus subsp. globigii was used as a simulant of Bacillus anthracis. We studied the effects of a broad variety of powdery and environmental samples on PCR detection and the steps required to alleviate their interference. With a benchmark DNA extraction procedure, 17 of the 23 samples investigated interfered with bacterial lysis and/or PCR-based detection. Therefore, we developed the dual-filter method for applied recovery of microbial particles from environmental and powdery samples (DARE). The DARE procedure allows the separation of bacteria from contaminating matrices that interfere with PCR detection. This procedure required only 2 min, while the DNA extraction process lasted 7 min, for a total of 9 min. The sample preparation procedure allowed the recovery of cleaned bacterial spores and relieved detection interference caused by a wide variety of samples. Our procedure was easily completed in a laboratory facility and is amenable to field application and automation.

  8. EMD-Based Predictive Deep Belief Network for Time Series Prediction: An Application to Drought Forecasting

    Directory of Open Access Journals (Sweden)

    Norbert A. Agana

    2018-02-01

    Full Text Available Drought is a stochastic natural feature that arises due to intense and persistent shortage of precipitation. Its impact is mostly manifested as agricultural and hydrological droughts following an initial meteorological phenomenon. Drought prediction is essential because it can aid in preparedness for and management of its impacts. This study considers the drought forecasting problem by developing a hybrid predictive model using denoised empirical mode decomposition (EMD) and a deep belief network (DBN). The proposed method first decomposes the data into several intrinsic mode functions (IMFs) using EMD, and a reconstruction of the original data is obtained by considering only the relevant IMFs. Detrended fluctuation analysis (DFA) was applied to each IMF to determine the threshold for robust denoising performance. Based on their scaling exponents, irrelevant intrinsic mode functions are identified and suppressed. The proposed method was applied to predict drought indices at different time scales across the Colorado River basin, using a standardized streamflow index (SSI) as the drought index. The results obtained using the proposed method were compared with those of standard methods such as the multilayer perceptron (MLP) and support vector regression (SVR). The proposed hybrid model showed improved prediction accuracy, especially for multi-step-ahead predictions.
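
    A minimal sketch of the EMD-plus-DFA denoising step described above, assuming the PyEMD and nolds packages and a 1-D numpy array signal (e.g., an SSI series); the 0.5 scaling-exponent cutoff for dropping noise-like IMFs is an illustrative assumption:

        import numpy as np
        from PyEMD import EMD
        import nolds

        imfs = EMD()(signal)                       # decompose into IMFs
        alphas = [nolds.dfa(imf) for imf in imfs]  # DFA scaling exponent per IMF
        keep = [imf for imf, a in zip(imfs, alphas) if a > 0.5]
        denoised = np.sum(keep, axis=0)            # reconstruct from relevant IMFs
        # `denoised` would then be windowed into input/target pairs for the DBN.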

  9. A time series based sequence prediction algorithm to detect activities of daily living in smart home.

    Science.gov (United States)

    Marufuzzaman, M; Reaz, M B I; Ali, M A M; Rahman, L F

    2015-01-01

    The goal of smart homes is to create an intelligent environment that adapts to the inhabitants' needs and assists people who need special care and safety in their daily lives. This can be achieved by collecting activities of daily living (ADL) data and analyzing them within the existing computing elements. In this research, a recent algorithm named sequence prediction via enhanced episode discovery (SPEED) is modified to include a time component in order to improve accuracy. The modified SPEED (M-SPEED) is a sequence prediction algorithm that extends the previous SPEED algorithm by using the duration of appliances' ON-OFF states to decide the next state. M-SPEED discovers periodic episodes of inhabitant behavior, trains on the learned episodes, and makes decisions based on the obtained knowledge. The results showed that M-SPEED achieves 96.8% prediction accuracy, which is better than that of other time prediction algorithms such as PUBS and ALZ with temporal rules, as well as the previous SPEED. Since human behavior shows natural temporal patterns, duration times can be used to predict future events more accurately. This inhabitant activity prediction system will certainly improve smart homes by ensuring safety and better care for elderly and handicapped people.
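
    A minimal sketch of duration-aware next-event prediction in the spirit of M-SPEED (not the authors' episode-discovery algorithm): the next appliance state is predicted from the current state and a binned duration, with events assumed to be (state, duration_seconds) pairs:

        from collections import Counter, defaultdict

        def duration_bin(seconds):                 # coarse, illustrative binning
            return 0 if seconds < 60 else 1 if seconds < 600 else 2

        # events: chronological list of (state, duration_seconds) pairs
        counts = defaultdict(Counter)
        for (s, d), (s_next, _) in zip(events, events[1:]):
            counts[(s, duration_bin(d))][s_next] += 1

        def predict(state, seconds):
            c = counts[(state, duration_bin(seconds))]
            return c.most_common(1)[0][0] if c else None

        # e.g. predict("kitchen_light_ON", 45) -> most likely following event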

  10. Mindfulness-based cognitive therapy in patients with late-life depression: A case series

    Directory of Open Access Journals (Sweden)

    Sonal Mathur

    2016-01-01

    Full Text Available Depression is the most common mental illness in the elderly, and cost-effective treatments are required. This study therefore aimed to evaluate the effectiveness of mindfulness-based cognitive therapy (MBCT) on depressive symptoms, mindfulness skills, acceptance, and quality of life across four domains in patients with late-onset depression. A single-case design with pre- and post-assessment was adopted. Five patients meeting the specified inclusion and exclusion criteria were recruited for the study and assessed on the behavioral analysis pro forma, the Geriatric Depression Scale, the Hamilton Depression Rating Scale, the Kentucky Inventory of Mindfulness Skills, the Acceptance and Action Questionnaire II, and the World Health Organization Quality of Life Assessment, brief version (WHOQOL-BREF). The therapeutic program consisted of education regarding the nature of depression, training in formal and informal mindfulness meditation, and cognitive restructuring. A total of 8 sessions over 8 weeks were conducted for each patient. The results indicate clinically significant improvement in the severity of depression, mindfulness skills, acceptance, and overall quality of life in all 5 patients. The eight-week MBCT program led to reduced depression and increased mindfulness skills, acceptance, and overall quality of life in patients with late-life depression.

  11. Risk-Based Sampling: I Don't Want to Weight in Vain.

    Science.gov (United States)

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
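
    A minimal simulation sketch of the estimation-error point made above: with exchangeable assets, equal allocation is the true optimum, and minimum-variance weights estimated from a short window only add noise. All parameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(1)
        n, window, trials = 8, 60, 2000
        Sigma = 0.04 * (0.3 * np.ones((n, n)) + 0.7 * np.eye(n))  # exchangeable assets

        var_est, var_eq = [], []
        for _ in range(trials):
            train = rng.multivariate_normal(np.zeros(n), Sigma, window)
            w = np.linalg.solve(np.cov(train.T), np.ones(n))
            w /= w.sum()                           # estimated minimum-variance weights
            var_est.append(w @ Sigma @ w)          # true out-of-sample variance
            w_eq = np.ones(n) / n
            var_eq.append(w_eq @ Sigma @ w_eq)
        print(np.mean(var_est), np.mean(var_eq))   # estimated weights lose here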

  12. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    Science.gov (United States)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    Urban catchments are typically characterised by a more flashy nature of the hydrological response compared to natural catchments. Predicting flow changes associated with urbanisation is not straightforward, as they are influenced by interactions between impervious cover, basin size, drainage connectivity and stormwater management infrastructure. In this study, we present an alternative approach to statistical analysis of hydrological response variability and basin flashiness, based on the distribution of inter-amount times. We analyse inter-amount time distributions of high-resolution streamflow time series for 17 (semi-)urbanised basins in North Carolina, USA, ranging from 13 to 238 km² in size. We show that in the inter-amount-time framework, sampling frequency is tuned to the local variability of the flow pattern, resulting in a different representation and weighting of high and low flow periods in the statistical distribution. This leads to important differences in the way the distribution quantiles, mean, coefficient of variation and skewness vary across scales and results in lower mean intermittency and improved scaling. Moreover, we show that inter-amount-time distributions can be used to detect regulation effects on flow patterns, identify critical sampling scales and characterise flashiness of hydrological response. The possibility to use both the classical approach and the inter-amount-time framework to identify minimum observable scales and analyse flow data opens up interesting areas for future research.
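
    A minimal sketch of computing inter-amount times from a flow series: the times needed to accumulate successive fixed amounts, obtained by interpolating the cumulative sum. The array flow, the time step and the target amount are illustrative assumptions:

        import numpy as np

        dt = 300.0                                  # 5-minute sampling interval [s]
        cum = np.concatenate([[0.0], np.cumsum(flow) * dt])  # cumulative volume
        t = np.arange(len(cum)) * dt
        amount = cum[-1] / 200.0                    # split record into ~200 amounts
        levels = np.arange(amount, cum[-1], amount)
        crossing_times = np.interp(levels, cum, t)  # cum is non-decreasing
        iat = np.diff(crossing_times)               # inter-amount times
        print(np.mean(iat), np.std(iat) / np.mean(iat))   # mean and CV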

  13. Women’s experience with home-based self-sampling for human papillomavirus testing

    International Nuclear Information System (INIS)

    Sultana, Farhana; Mullins, Robyn; English, Dallas R.; Simpson, Julie A.; Drennan, Kelly T.; Heley, Stella; Wrede, C. David; Brotherton, Julia M. L.; Saville, Marion; Gertig, Dorota M.

    2015-01-01

    Increasing cervical screening coverage by reaching inadequately screened groups is essential for improving the effectiveness of cervical screening programs. Offering HPV self-sampling to women who are never- or under-screened can improve screening participation; however, participation varies widely between settings. Information on women's experience with self-sampling and their preferences for future self-sampling screening is essential for programs to optimize participation. The survey was conducted as part of a larger trial ("iPap") investigating the effect of HPV self-sampling on the participation of never- and under-screened women in Victoria, Australia. Questionnaires were mailed to a) most women who participated in the self-sampling, to document their experience with and preference for self-sampling in the future, and b) a sample of the women who did not participate, asking their reasons for non-participation and suggestions for enabling participation. Reasons for not having had a previous Pap test were also explored. About half of the women who collected a self-sample for the iPap trial returned the subsequent questionnaire (746/1521). Common reasons for not having had cervical screening were that having a Pap test performed by a doctor was embarrassing (18%), not having the time (14%), or that a Pap test was painful and uncomfortable (11%). Most found the home-based self-sampling less embarrassing (94%), less uncomfortable (90%) and more convenient (98%) compared with their last Pap test experience (if they had one); however, many were unsure about the test accuracy (57%). Women who self-sampled thought the instructions were clear (98%), found it easy to use the swab (95%), and were generally confident that they did the test correctly (81%). Most preferred to take the self-sample at home in the future (88%) because it was simple and did not require a doctor's appointment. Few women who did not return a self-sample in the iPap trial returned the questionnaire (126/1946, 7%).

  14. Structural damage detection in wind turbine blades based on time series representations of dynamic responses

    Science.gov (United States)

    Hoell, Simon; Omenzetter, Piotr

    2015-03-01

    The development of large wind turbines that make it possible to harvest energy more efficiently is a consequence of the increasing demand for renewables in the world. To optimize the potential energy output, light and flexible wind turbine blades (WTBs) are designed. However, the higher flexibilities and lower buckling capacities adversely affect the long-term safety and reliability of WTBs, and the increased operation and maintenance costs reduce the expected revenue. Effective structural health monitoring techniques can help to counteract this by limiting inspection efforts and avoiding unplanned maintenance actions. Vibration-based methods deserve particular attention due to their moderate instrumentation effort and their applicability to in-service measurements. The present paper proposes the use of cross-correlations (CCs) of acceleration responses between sensors at different locations for structural damage detection in WTBs. CCs have in the past been successfully applied for damage detection in numerical and experimental beam structures, utilizing only single lags between the signals. The present approach uses vectors of CC coefficients at multiple lags between the measurements of two selected sensors, taken from multiple possible combinations of sensors. To reduce the dimensionality of the damage-sensitive feature (DSF) vectors, principal component analysis is performed. The optimal number of principal components (PCs) is chosen with respect to a statistical threshold. Finally, the detection phase uses the selected PCs of the healthy structure to calculate scores from a current DSF vector, and statistical hypothesis testing is performed to make a decision about the current structural state. The method is applied to laboratory experiments conducted on a small WTB with non-destructive damage scenarios.
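
    A minimal sketch of the damage-sensitive-feature pipeline described above: multi-lag cross-correlation vectors for a sensor pair, PCA fitted on the healthy state, and a simple threshold on the scores (a stand-in for the paper's hypothesis test). Array names and the 3-sigma bound are assumptions:

        import numpy as np
        from sklearn.decomposition import PCA

        def cc_vector(a, b, max_lag=20):            # multi-lag cross-correlation DSF
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            full = np.correlate(a, b, mode="full") / len(a)
            mid = len(full) // 2
            return full[mid - max_lag : mid + max_lag + 1]

        # healthy_runs, test_runs: lists of (sensor1, sensor2) acceleration records
        X = np.array([cc_vector(a, b) for a, b in healthy_runs])
        pca = PCA(n_components=0.95).fit(X)          # keep 95% of the variance
        limit = 3 * pca.transform(X).std(axis=0)     # illustrative 3-sigma bound

        for a, b in test_runs:
            s = pca.transform(cc_vector(a, b)[None, :])[0]
            print("damage" if np.any(np.abs(s) > limit) else "healthy")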

  15. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named the Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings from 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, including colour-difference-based metrics, gamut-based metrics, memory-based metrics and combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially under conditions where the correlated colour temperatures differed.
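
    A minimal sketch of the gamut-volume computation GVI builds on: the volume of the convex hull spanned by a set of colour samples in a colour space, here assumed to be given as CIELAB coordinates in samples_lab:

        import numpy as np
        from scipy.spatial import ConvexHull

        lab = np.asarray(samples_lab)                # shape (n_samples, 3): L*, a*, b*
        gamut_volume = ConvexHull(lab).volume        # absolute gamut volume
        print(f"gamut volume: {gamut_volume:.0f} cubic CIELAB units")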

  16. A multiple linear regression analysis of hot corrosion attack on a series of nickel base turbine alloys

    Science.gov (United States)

    Barrett, C. A.

    1985-01-01

    Multiple linear regression analysis was used to determine an equation for estimating hot corrosion attack for a series of Ni-base cast turbine alloys. The U transform, i.e., U = sin⁻¹[(%A/100)^(1/2)], was shown to give the best estimate of the dependent variable, y. A complete second-degree equation is described for the centered weight chemistries of the elements Cr, Al, Ti, Mo, W, Cb, Ta, and Co. In addition, linear terms for the minor elements C, B, and Zr were added for a basic 47-term equation. The best reduced equation was determined by the stepwise selection method, with essentially 13 terms. The Cr term was found to be the most important, accounting for 60 percent of the explained variability in hot corrosion attack.
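
    A minimal sketch of the transform and the selection idea described above; forward selection by AIC stands in for the paper's stepwise method, the column names are assumptions, and the quadratic and interaction terms are omitted:

        import numpy as np
        import statsmodels.api as sm

        # df: DataFrame with one row per alloy, composition columns and "attack_pct"
        df["U"] = np.arcsin(np.sqrt(df["attack_pct"] / 100.0))

        terms = [c for c in df.columns if c not in ("U", "attack_pct")]
        selected, best_aic = [], np.inf
        while terms:                                  # forward stepwise selection
            aics = {c: sm.OLS(df["U"], sm.add_constant(df[selected + [c]])).fit().aic
                    for c in terms}
            c, aic = min(aics.items(), key=lambda kv: kv[1])
            if aic >= best_aic:
                break
            selected.append(c)
            terms.remove(c)
            best_aic = aic
        print("selected terms:", selected)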

  17. Structure-Based Design of a Novel Series of Potent, Selective Inhibitors of the Class I Phosphatidylinositol 3-Kinases

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Adrian L.; D’Angelo, Noel D.; Bo, Yunxin Y.; Booker, Shon K.; Cee, Victor J.; Herberich, Brad; Hong, Fang-Tsao; Jackson, Claire L.M.; Lanman, Brian A.; Liu, Longbin; Nishimura, Nobuko; Pettus, Liping H.; Reed, Anthony B.; Tadesse, Seifu; Tamayo, Nuria A.; Wurz, Ryan P.; Yang, Kevin; Andrews, Kristin L.; Whittington, Douglas A.; McCarter, John D.; Miguel, Tisha San; Zalameda, Leeanne; Jiang, Jian; Subramanian, Raju; Mullady, Erin L.; Caenepeel, Sean; Freeman, Daniel J.; Wang, Ling; Zhang, Nancy; Wu, Tian; Hughes, Paul E.; Norman, Mark H. (Amgen)

    2012-09-17

    A highly selective series of inhibitors of the class I phosphatidylinositol 3-kinases (PI3Ks) has been designed and synthesized. Starting from the dual PI3K/mTOR inhibitor 5, a structure-based approach was used to improve potency and selectivity, resulting in the identification of 54 as a potent inhibitor of the class I PI3Ks with excellent selectivity over mTOR, related phosphatidylinositol kinases, and a broad panel of protein kinases. Compound 54 demonstrated a robust PD-PK relationship inhibiting the PI3K/Akt pathway in vivo in a mouse model, and it potently inhibited tumor growth in a U-87 MG xenograft model with an activated PI3K/Akt pathway.

  18. A Series of Fluorescent and Colorimetric Chemodosimeters for Selective Recognition of Cyanide Based on the FRET Mechanism.

    Science.gov (United States)

    Hua, Ying-Xi; Shao, Yongliang; Wang, Ya-Wen; Peng, Yu

    2017-06-16

    A series of fluorescence "turn-on" probes (PY, AN, NA, B1, and B2) have been developed and successfully applied to detect cyanide anions based on the Michael addition reaction and the FRET mechanism. These probes demonstrated good selectivity, high sensitivity, and very fast recognition of CN⁻. In particular, the fluorescence response of probe NA finished within 3 s. Low limits of detection (down to 63 nM) were also obtained with these probes, along with remarkable fluorescence enhancement factors. In addition, the fluorescence colors of these probes turned blue, yellow, or orange upon sensing CN⁻. In UV-vis mode, all of them showed a ratiometric response to CN⁻. ¹H NMR titration experiments and TDDFT calculations were carried out to verify the mechanism of the specific reaction and the fluorescence properties of the corresponding compounds. Moreover, silica gel plates with these probes were also fabricated and utilized to detect cyanide.

  19. Rape (Brassica napus L.) Growth Monitoring and Mapping Based on Radarsat-2 Time-Series Data

    Directory of Open Access Journals (Sweden)

    Wangfei Zhang

    2018-01-01

    Full Text Available In this study, 27 polarimetric parameters were extracted from Radarsat-2 polarimetric synthetic aperture radar (SAR) data at each growth stage of the rape crop. The sensitivity to growth parameters such as stem height, leaf area index (LAI), and biomass was investigated as a function of days after sowing. Based on the sensitivity analysis, five empirical regression models were compared to determine the best model for stem height, LAI, and biomass inversion. Of these five models, quadratic models had higher R² values than the other models in most cases of growth parameter inversion, but when the results were related to physical scattering mechanisms, the inversion results showed overestimation for some parameters. By contrast, linear and logarithmic models, which had lower R² values than the quadratic models, performed stably for growth parameter inversion, particularly at each growth stage. The best biomass inversion performance was achieved by the volume component of a quadratic model, with an R² value of 0.854 and a root mean square error (RMSE) of 109.93 g m⁻². The best LAI inversion was also achieved by a quadratic model, using the radar vegetation index (Cloude), with an R² value of 0.8706 and an RMSE of 0.56 m² m⁻². Stem height was best retrieved by the scattering angle alpha (α) using a logarithmic model, with an R² value of 0.926 and an RMSE of 11.09 cm. The performance of these models was also analysed for biomass estimation at the second (P2), third (P3), and fourth (P4) growth stages. The results showed that the models built at the P3 stage were the most interchangeable with the models built over all of the growth stages. From the mapping results, we conclude that a model built at the P3 stage can be used for rape biomass inversion, with 90% of estimation errors being less than 100 g m⁻².

  20. A case series study on the effect of Ebola on facility-based deliveries in rural Liberia.

    Science.gov (United States)

    Lori, Jody R; Rominski, Sarah Danielson; Perosky, Joseph E; Munro, Michelle L; Williams, Garfee; Bell, Sue Anne; Nyanplu, Aloysius B; Amarah, Patricia N M; Boyd, Carol J

    2015-10-12

    As communities' fears of Ebola virus disease (EVD) in West Africa intensify and their trust in healthcare providers diminishes, EVD has the potential to reverse the recent progress made in promoting facility-based delivery. Using retrospective data from a study focused on maternal and newborn health, this analysis examined the influence of EVD on the use of facility-based maternity care in Bong County, Liberia, which shares a border with Sierra Leone - near the epicenter of the outbreak. Using a case series design, retrospective data from logbooks were collected at 12 study sites in one county. These data were then analyzed to determine women's use of facility-based maternity care between January 2012 and October 2014. The primary outcome was the number of facility-based deliveries over time. The first suspected case of EVD in Bong County was reported on June 30, 2014. Heat maps were generated and the number of deliveries was normalized to the average number of deliveries during the full 12 months before the EVD outbreak (March 2013 - February 2014). Prior to the EVD outbreak, facility-based deliveries steadily increased in Bong County, reaching an all-time high of over 500 per month at study sites in the first half of 2014 - indicating that Liberia was making inroads in normalizing institutional maternal healthcare. However, as reports of EVD escalated, facility-based deliveries decreased to a low of 113 in August 2014. Ebola virus disease has negatively impacted the use of facility-based maternity services, placing childbearing women at increased risk of morbidity and death.

  1. Networked Estimation for Event-Based Sampling Systems with Packet Dropouts

    Directory of Open Access Journals (Sweden)

    Young Soo Suh

    2009-04-01

    Full Text Available This paper is concerned with a networked estimation problem in which sensor data are transmitted over the network. In the event-based sampling scheme known as level-crossing or send-on-delta (SOD), sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-based sampling has been shown to be more efficient than time-triggered sampling in some situations, especially in terms of network bandwidth. However, it cannot detect packet dropouts, because data transmission and reception do not use a periodic time-stamp mechanism as found in time-triggered sampling systems. Motivated by this issue, we propose a modified event-based sampling scheme, called modified SOD, in which sensor data are sent when either the change in sensor output exceeds a given threshold or more than a given interval has elapsed since the last transmission. Simulation results show that the proposed modified SOD sampling significantly improves estimation performance when packet dropouts happen.
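
    A minimal sketch of the modified SOD rule described above: transmit when the value changes by more than a threshold or when a maximum silence interval has elapsed; the threshold and interval values are illustrative:

        def make_modified_sod(delta=0.5, max_silence=5.0):
            last_value, last_time = None, None

            def should_send(value, t):
                nonlocal last_value, last_time
                if (last_value is None
                        or abs(value - last_value) > delta      # SOD condition
                        or t - last_time >= max_silence):       # timeout condition
                    last_value, last_time = value, t
                    return True
                return False

            return should_send

        sender = make_modified_sod()
        # The estimator can now flag a packet dropout whenever no data arrive
        # within max_silence (plus a delay margin) of the previous packet.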

  2. A novel mutual information-based Boolean network inference method from time-series gene expression data.

    Directory of Open Access Journals (Sweden)

    Shohag Barman

    Full Text Available Inferring a gene regulatory network from time-series gene expression data in systems biology is a challenging problem. Many methods have been suggested, most of which have a scalability limitation due to the combinatorial cost of searching a regulatory set of genes. In addition, they have focused on the accurate inference of a network structure only. Therefore, there is a pressing need to develop a network inference method that searches regulatory genes efficiently and predicts the network dynamics accurately. In this study, we employed a Boolean network model with a restricted update rule scheme to capture coarse-grained dynamics, and propose a novel mutual information-based Boolean network inference (MIBNI) method. Given time-series gene expression data as an input, the method first identifies a set of initial regulatory genes using mutual information-based feature selection, and then improves the dynamics prediction accuracy by iteratively swapping a pair of genes between the set of selected regulatory genes and the remaining genes. Through extensive simulations with artificial datasets, MIBNI showed consistently better performance than six well-known existing methods (REVEAL, Best-Fit, RelNet, CST, CLR, and BIBN) in terms of both structural and dynamics prediction accuracy. We further tested the proposed method with two real gene expression datasets, for an Escherichia coli gene regulatory network and a fission yeast cell cycle network, and again observed better results using MIBNI compared to the six other methods. Taken together, MIBNI is a promising tool for predicting both the structure and the dynamics of a gene regulatory network.
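
    A minimal sketch of the mutual-information step at the core of MIBNI: candidate regulators for a target gene are ranked by the mutual information between their expression at time t and the target at time t+1 on binarized data; the iterative swap refinement is omitted and the input matrix expr is assumed:

        import numpy as np
        from sklearn.metrics import mutual_info_score

        # expr: (time_points x genes) 0/1 matrix of binarized expression data
        def rank_regulators(expr, target, k=3):
            scores = [(mutual_info_score(expr[:-1, g], expr[1:, target]), g)
                      for g in range(expr.shape[1]) if g != target]
            return [g for _, g in sorted(scores, reverse=True)[:k]]

        # e.g. rank_regulators(expr, target=0) -> indices of k putative regulators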

  3. Event-based stochastic point rainfall resampling for statistical replication and climate projection of historical rainfall series

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Korup Andersen, Aske; Larsen, Anders Badsberg

    2017-01-01

    Continuous and long rainfall series are a necessity in rural and urban hydrology for analysis and design purposes. Local historical point rainfall series often cover several decades, which makes it possible to estimate rainfall means at different timescales and to assess return periods of extreme... includes climate changes projected to a specific future period. This paper presents a framework for resampling of historical point rainfall series in order to generate synthetic rainfall series which have the same statistical properties as the original series. Using a number of key target predictions... for the future climate, such as winter and summer precipitation, and representation of extreme events, the resampled historical series are projected to represent rainfall properties in a future climate. Climate-projected rainfall series are simulated by brute-force randomization of model parameters, which leads...

  4. Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF

    Energy Technology Data Exchange (ETDEWEB)

    Baz-Lomba, J.A., E-mail: jba@niva.no [Norwegian Institute for Water Research, Gaustadalléen 21, NO-0349, Oslo (Norway); Faculty of Medicine, University of Oslo, PO box 1078 Blindern, 0316, Oslo (Norway); Reid, Malcolm J.; Thomas, Kevin V. [Norwegian Institute for Water Research, Gaustadalléen 21, NO-0349, Oslo (Norway)

    2016-03-31

    The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches to monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method with Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MS^E mode. Quantification and matrix effects were corrected with the use of 25 isotopically labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4–187 ng L⁻¹). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge, since their market is highly dynamic, with new compounds continuously entering the market. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criterion of 5 ppm (for the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Screening MS^E data against a suspect database of over 1000 drugs and metabolites makes this method a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types (influent wastewater, passive sampler extracts and pooled urine samples), resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals. - Highlights: • A novel reiterative workflow

  5. Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF

    International Nuclear Information System (INIS)

    Baz-Lomba, J.A.; Reid, Malcolm J.; Thomas, Kevin V.

    2016-01-01

    The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches to monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method with Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MS^E mode. Quantification and matrix effects were corrected with the use of 25 isotopically labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4–187 ng L⁻¹). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge, since their market is highly dynamic, with new compounds continuously entering the market. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criterion of 5 ppm (for the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Screening MS^E data against a suspect database of over 1000 drugs and metabolites makes this method a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types (influent wastewater, passive sampler extracts and pooled urine samples), resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals. - Highlights: • A novel reiterative workflow based on three

  6. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Scheid Anika

    2012-07-01

    Full Text Available Abstract Background Over the past years, statistical and Bayesian approaches have become increasingly appreciated as a way to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. Results In this work, we consider the SCFG-based approach in order to analyze how the quality of generated sample sets and the corresponding prediction accuracy change when different degrees of disturbance are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then this indicates that these probabilities do not need to be computed exactly; it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst-case time complexity.

  7. Global sampling of the seasonal changes in vegetation biophysical properties and associated carbon flux dynamics: using the synergy of information captured by spectral time series

    Science.gov (United States)

    Campbell, P. K. E.; Huemmrich, K. F.; Middleton, E.; Voorhis, S.; Landis, D.

    2016-12-01

    Spatial heterogeneity and seasonal dynamics in vegetation function contribute significantly to the uncertainties in regional and global CO2 budgets. High-spectral-resolution imaging spectroscopy (~10 nm, 400-2500 nm) provides an efficient tool for synoptic evaluation of the factors significantly affecting the ability of vegetation to sequester carbon and to reflect radiation, due to changes in vegetation chemical and structural composition. EO-1 Hyperion has collected more than 15 years of repeated observations for vegetation studies, and Hyperion time series are currently available for the study of vegetation carbon dynamics at a number of FLUX sites. This study presents results from the analysis of EO-1 Hyperion and FLUX seasonal composites for a range of ecosystems across the globe. Spectral differences and seasonal trends were evaluated for each vegetation type and its specific phenology. Evaluating the relationships between CO2 flux parameters (e.g., net ecosystem production, NEP; gross ecosystem exchange, GEE; CO2 flux, μmol m⁻² s⁻¹) and spectral parameters for these very different ecosystems, high correlations were established with parameters associated with canopy water and chlorophyll content for deciduous species, and with photosynthetic function for conifers. Imaging spectrometry provided high-spatial-resolution maps of CO2 fluxes absorbed by vegetation and was efficient in tracing seasonal flux dynamics. This study presents examples for key ecosystem types to demonstrate the ability of imaging spectrometry and EO-1 Hyperion to map and compare CO2 flux dynamics across the globe.

  8. The NBOMe hallucinogenic drug series: Patterns of use, characteristics of users and self-reported effects in a large international sample.

    Science.gov (United States)

    Lawn, Will; Barratt, Monica; Williams, Martin; Horne, Abi; Winstock, Adam

    2014-08-01

    The NBOMe compounds are a novel series of hallucinogenic drugs that are potent agonists of the 5-HT2A receptor, have a short history of human consumption, and are available to buy online in most countries. In this study, we sought to investigate the patterns of use, the characteristics of users and the self-reported effects. A cross-sectional anonymous online survey exploring patterns of drug use was conducted in 2012 (n = 22,289), including questions about the use of 25B-NBOMe, 25C-NBOMe, and 25I-NBOMe and comparison drugs. We found that 2.6% of respondents (n = 582) reported having ever tried one of the three NBOMe drugs and that, at 2.0%, 25I-NBOMe was the most popular (n = 442). Almost all (93.5%) respondents whose last new drug tried was an NBOMe drug had tried it in 2012, and 81.2% of this group administered the drug orally or sublingually/buccally. Subjective effects were similar to those of comparison serotonergic hallucinogens, though higher 'negative effects while high' and greater 'value for money' were reported. The most common (41.7%) drug source was a website. The NBOMe drugs have emerged recently, are frequently bought over the internet and have effects similar to those of other hallucinogenic drugs; however, they may pose larger risks due to the limited knowledge about them, their relatively low price and their availability via the internet. © The Author(s) 2014.

  9. Sample Data Synchronization and Harmonic Analysis Algorithm Based on Radial Basis Function Interpolation

    Directory of Open Access Journals (Sweden)

    Huaiqing Zhang

    2014-01-01

    Full Text Available Spectral leakage has a harmful effect on the accuracy of harmonic analysis under asynchronous sampling. This paper proposes a time quasi-synchronous sampling algorithm based on radial basis function (RBF) interpolation. First, the fundamental period is evaluated by a zero-crossing technique with fourth-order Newton interpolation; then, the sampling sequence is reproduced by RBF interpolation. Finally, the harmonic parameters can be calculated by applying the FFT to the synchronized sampling data. Simulation results showed that the proposed algorithm has high accuracy in measuring distorted and noisy signals. Compared to local approximation schemes such as linear, quadratic, and fourth-order Newton interpolation, RBF is a global approximation method that can achieve more accurate results while requiring about the same computation time as Newton's method.
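
    A minimal sketch of the resampling idea described above, with a linear zero-crossing refinement standing in for the paper's fourth-order Newton interpolation; the arrays t and x and the Gaussian RBF kernel are assumptions:

        import numpy as np
        from scipy.interpolate import Rbf

        # t, x: asynchronous sample times and values of the distorted signal
        zc = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]          # rising zero crossings
        t_zc = t[zc] - x[zc] * (t[zc + 1] - t[zc]) / (x[zc + 1] - x[zc])
        period = np.mean(np.diff(t_zc))                        # fundamental period

        n_per, cycles = 64, len(t_zc) - 1                      # integer no. of periods
        grid = t_zc[0] + np.arange(n_per * cycles) * period / n_per
        resampled = Rbf(t, x, function="gaussian")(grid)       # synchronized samples

        spectrum = np.fft.rfft(resampled) / len(resampled)
        amplitudes = 2 * np.abs(spectrum[cycles::cycles])[:10] # harmonics 1..10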

  10. Adaptive sampling rate control for networked systems based on statistical characteristics of packet disordering.

    Science.gov (United States)

    Li, Jin-Na; Er, Meng-Joo; Tan, Yen-Kheng; Yu, Hai-Bin; Zeng, Peng

    2015-09-01

    This paper investigates an adaptive sampling rate control scheme for networked control systems (NCSs) subject to packet disordering. The main objectives of the proposed scheme are (a) to avoid heavy packet disordering existing in communication networks and (b) to stabilize NCSs with packet disordering, transmission delay and packet loss. First, a novel sampling rate control algorithm based on statistical characteristics of disordering entropy is proposed; secondly, an augmented closed-loop NCS that consists of a plant, a sampler and a state-feedback controller is transformed into an uncertain and stochastic system, which facilitates the controller design. Then, a sufficient condition for stochastic stability in terms of Linear Matrix Inequalities (LMIs) is given. Moreover, an adaptive tracking controller is designed such that the sampling period tracks a desired sampling period, which represents a significant contribution. Finally, experimental results are given to illustrate the effectiveness and advantages of the proposed scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Cassette-based in-situ TEM sample inspection in the dual-beam FIB

    International Nuclear Information System (INIS)

    Kendrick, A B; Moore, T M; Zaykova-Feldman, L; Amador, G; Hammer, M

    2008-01-01

    A novel method is presented, combining site-specific TEM sample preparation and in-situ STEM analysis in a dual-beam microscope (FIB/SEM) fitted with a chamber mounted nano-manipulator. TEM samples are prepared using a modified in-situ, lift-out method, whereby the samples are thinned and oriented for immediate in-situ STEM analysis using the tilt, translation, and rotation capabilities of a FIB/SEM sample stage, a nano-manipulator, and a novel cassette. This cassette can provide a second tilt axis, orthogonal to the stage tilt axis, so that the STEM image contrast can be optimized to reveal the structural features of the sample (true STEM imaging in the FIB/SEM). The angles necessary for stage rotation and probe shaft rotation are calculated based on the position of the nano-manipulator relative to the stage and door and the stage tilt angle. A FIB/SEM instrument, equipped with a high resolution scanning electron column, can provide sufficiently high image resolution to enable many failure analysis and process control applications to be successfully carried out without requiring the use of a separate dedicated TEM/STEM instrument. The benefits of this novel approach are increased throughput and reduced cost per sample. Comparative analysis of different sample preparation methods is provided, and the STEM images obtained are shown.

  12. Nitrogen Detection in Bulk Samples Using a D-D Reaction-Based Portable Neutron Generator

    Directory of Open Access Journals (Sweden)

    A. A. Naqvi

    2013-01-01

    Full Text Available Nitrogen concentration was measured via the 2.52 MeV nitrogen gamma ray in melamine, caffeine, urea, and disperse orange bulk samples using a newly designed D-D portable neutron generator-based prompt gamma ray setup. In spite of the low flux of thermal neutrons produced by the D-D reaction-based portable neutron generator and the interference of the 2.52 MeV gamma ray from nitrogen in bulk samples with the 2.50 MeV gamma ray from bismuth in the BGO detector material, excellent agreement between the experimental and calculated yields of nitrogen gamma rays indicates satisfactory performance of the setup for the detection of nitrogen in bulk samples.

  13. A Study of Assimilation Bias in Name-Based Sampling of Migrants

    Directory of Open Access Journals (Sweden)

    Schnell Rainer

    2014-06-01

    Full Text Available The use of personal names for screening is an increasingly popular sampling technique for migrant populations. Although this is often an effective sampling procedure, very little is known about the properties of this method. Based on a large German survey, this article compares characteristics of respondents whose names have been correctly classified as belonging to a migrant population with respondents who are migrants and whose names have not been classified as belonging to a migrant population. Although significant differences were found for some variables, even with some large effect sizes, the overall bias introduced by name-based sampling (NBS) is small as long as procedures with small false-negative rates are employed.

  14. Fabry-Pérot cavity based on chirped sampled fiber Bragg gratings.

    Science.gov (United States)

    Zheng, Jilin; Wang, Rong; Pu, Tao; Lu, Lin; Fang, Tao; Li, Weichun; Xiong, Jintian; Chen, Yingfang; Zhu, Huatao; Chen, Dalei; Chen, Xiangfei

    2014-02-10

    A novel kind of Fabry-Pérot (FP) structure based on chirped sampled fiber Bragg gratings (CSFBGs) is proposed and demonstrated. In this structure, the regular chirped FBG (CFBG) that functions as the reflecting mirror in the FP cavity is replaced by a CSFBG, which is realized by chirping the sampling periods of a sampled FBG having a uniform local grating period. The realization of such CSFBG-FPs with diverse properties requires only a single uniform-pitch phase mask and a sub-micrometer-precision moving stage. Compared with the conventional CFBG-FP, designing CSFBG-FPs with diverse functions becomes more flexible, and the fabrication process is simpler. As a demonstration, using the same experimental facilities, FPs with a uniform FSR (~73 pm) and a chirped FSR (varying from 28 pm to 405 pm) were fabricated, showing good agreement with simulation results.

  15. Inference for multivariate regression model based on multiply imputed synthetic data generated via posterior predictive sampling

    Science.gov (United States)

    Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.

    2017-06-01

    The recent popularity of synthetic data as a statistical disclosure control technique has enabled the development of several methods for generating and analyzing such data, but these almost always rely on asymptotic distributions and are consequently not adequate for small-sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model, for multiply imputed synthetic data generated via posterior predictive sampling. Since it is based on exact distributions, this procedure may be used even for small-sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with the results obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.
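
    A minimal sketch of posterior predictive sampling for synthetic data in a normal linear regression (shown for a single response under a flat prior; the paper's setting is multivariate):

        import numpy as np

        def synthesize(X, y, m=5, rng=np.random.default_rng(0)):
            n, p = X.shape
            XtX_inv = np.linalg.inv(X.T @ X)
            beta_hat = XtX_inv @ X.T @ y
            rss = np.sum((y - X @ beta_hat) ** 2)
            synthetic = []
            for _ in range(m):                     # m multiply imputed copies
                sigma2 = rss / rng.chisquare(n - p)            # posterior draw
                beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
                synthetic.append(X @ beta + rng.normal(0, np.sqrt(sigma2), n))
            return synthetic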

  16. Problems with sampling desert tortoises: A simulation analysis based on field data

    Science.gov (United States)

    Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.

    2005-01-01

    The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990 based largely on population declines inferred from mark-recapture surveys of 2.59-km² (1-mi²) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from two 1-mi² plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimate and program CAPTURE. Experimental subsampling with replacement of the 1-mi² data using 1-km² and 0.25-km² plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi² plots. We also tested distance sampling by saturating a 1-mi² site with computer-simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km² plots did not differ significantly from the estimates derived at 1-mi². The 0.25-km² subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high coefficient-of-variation-to-density ratios. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that the poor performance and bias of both sampling procedures were driven by insufficient sample size, suggesting that all efforts must be directed to increasing the numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.
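
    A minimal sketch of the Schnabel mark-recapture estimator used above; the per-occasion catch and recapture counts are illustrative:

        def schnabel(catches, recaptures):
            # N-hat = sum(C_t * M_t) / sum(R_t), with M_t the number of marked
            # animals in the population before occasion t.
            marked, num, den = 0, 0.0, 0.0
            for C, R in zip(catches, recaptures):
                num += C * marked
                den += R
                marked += C - R                    # newly marked this occasion
            return num / den

        # e.g. three occasions of 30 captures with (0, 8, 13) recaptures
        print(round(schnabel([30, 30, 30], [0, 8, 13])))       # ~117 animals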

  17. Algorithm/Architecture Co-design of the Generalized Sampling Theorem Based De-Interlacer.

    NARCIS (Netherlands)

    Beric, A.; Haan, de G.; Sethuraman, R.; Meerbergen, van J.

    2005-01-01

    De-interlacing is a major determinant of image quality in a modern display processing chain. The de-interlacing method based on the generalized sampling theorem (GST) applied to motion estimation and motion compensation provides the best de-interlacing results. With HDTV interlaced input material

  18. Sociocultural Experiences of Bulimic and Non-Bulimic Adolescents in a School-Based Chinese Sample

    Science.gov (United States)

    Jackson, Todd; Chen, Hong

    2010-01-01

    From a large school-based sample (N = 3,084), 49 Mainland Chinese adolescents (31 girls, 18 boys) who endorsed all DSM-IV criteria for bulimia nervosa (BN) or sub-threshold BN and 49 matched controls (31 girls, 18 boys) completed measures of demographics and sociocultural experiences related to body image. Compared to less symptomatic peers, those…

  19. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    KAUST Repository

    Fidel, Adam; Jacobs, Sam Ade; Sharma, Shishir; Amato, Nancy M.; Rauchwerger, Lawrence

    2014-01-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the subproblems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  20. Some advances in importance sampling of reliability models based on zero variance approximation

    NARCIS (Netherlands)

    Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Scheinhardt, Willem R.W.; Juneja, Sandeep

    We are interested in estimating, through simulation, the probability of entering a rare failure state before a regeneration state. Since this probability is typically small, we apply importance sampling. The method that we use is based on finding the most likely paths to failure. We present an
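
    A minimal importance-sampling sketch for this kind of rare event: the probability that a birth-death chain (an M/M/1 queue started after a regeneration) reaches a high level B before returning to 0, simulated under swapped arrival and service rates, a standard change of measure that favors the most likely paths to failure (not necessarily the authors' zero-variance approximation scheme). Parameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)
        lam, mu, B, runs = 0.3, 0.7, 20, 100_000   # arrival/service rates, level
        p = lam / (lam + mu)                       # original up-step probability
        q = 1 - p

        estimates = np.zeros(runs)
        for i in range(runs):
            x, L = 1, 1.0                          # state after regeneration
            while 0 < x < B:
                if rng.random() < q:               # IS: step up with probability q
                    x += 1
                    L *= p / q                     # likelihood ratio, up-step
                else:
                    x -= 1
                    L *= q / p                     # likelihood ratio, down-step
            if x == B:
                estimates[i] = L                   # unbiased under the new measure
        print(estimates.mean(), estimates.std() / np.sqrt(runs))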