WorldWideScience

Sample records for resampling-based significance testing

  1. Efficient p-value evaluation for resampling-based tests

    KAUST Repository

    Yu, K.

    2011-01-05

    The resampling-based test, which often relies on permutation or bootstrap procedures, has been widely used for statistical hypothesis testing when the asymptotic distribution of the test statistic is unavailable or unreliable. It requires repeated calculations of the test statistic on a large number of simulated data sets for its significance level assessment, and thus it could become very computationally intensive. Here, we propose an efficient p-value evaluation procedure by adapting the stochastic approximation Markov chain Monte Carlo algorithm. The new procedure can be used easily for estimating the p-value for any resampling-based test. We show through numeric simulations that the proposed procedure can be 100-500,000 times as efficient (in terms of computing time) as the standard resampling-based procedure when evaluating a test statistic with a small p-value (e.g. less than 10^(-6)). With its computational burden reduced by this proposed procedure, the versatile resampling-based test would become computationally feasible for a much wider range of applications. We demonstrate the application of the new method by applying it to a large-scale genetic association study of prostate cancer.
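
    The abstract above accelerates the standard Monte Carlo p-value estimate, which needs on the order of 1/p resamples to resolve a p-value of size p. For context, here is a minimal, hedged Python sketch of that standard permutation-based p-value; the data, statistic and add-one correction are illustrative, and the stochastic-approximation MCMC speedup itself is not implemented:

```python
# Minimal sketch of the standard resampling-based p-value that the
# stochastic-approximation method above is designed to accelerate.
# Statistic and data are illustrative, not from the paper.
import numpy as np

def permutation_pvalue(x, y, n_perm=10000, rng=None):
    """Two-sample permutation test for a difference in means."""
    rng = np.random.default_rng(rng)
    pooled = np.concatenate([x, y])
    n_x = len(x)
    observed = x.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = perm[:n_x].mean() - perm[n_x:].mean()
        if abs(stat) >= abs(observed):
            count += 1
    # add-one correction keeps the estimate away from an impossible p = 0
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(0)
x = rng.normal(0.5, 1.0, 30)
y = rng.normal(0.0, 1.0, 30)
print(permutation_pvalue(x, y, rng=1))
```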

  2. Efficient p-value evaluation for resampling-based tests

    KAUST Repository

    Yu, K.; Liang, F.; Ciampa, J.; Chatterjee, N.

    2011-01-01

    The resampling-based test, which often relies on permutation or bootstrap procedures, has been widely used for statistical hypothesis testing when the asymptotic distribution of the test statistic is unavailable or unreliable. It requires repeated

  3. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    Science.gov (United States)

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
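
    A hedged sketch of the resampling idea described above: each group is centered by its own mean and whitened by its own covariance, the standardized residuals are pooled and resampled, and a Bartlett/Box-type statistic is recomputed on each resample. The robust moment estimation and multiple-testing extension from the paper are not included, and all data are simulated:

```python
# Hedged sketch: resample standardized residuals to build a null
# distribution for a Bartlett/Box-type covariance-homogeneity statistic.
import numpy as np

def box_m(groups):
    """Box's M / Bartlett-type statistic for equality of covariance matrices."""
    k = len(groups)
    n = np.array([len(g) for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((ni - 1) * ci for ni, ci in zip(n, covs)) / (n.sum() - k)
    stat = (n.sum() - k) * np.linalg.slogdet(pooled)[1]
    for ni, ci in zip(n, covs):
        stat -= (ni - 1) * np.linalg.slogdet(ci)[1]
    return stat

def standardize(g):
    """Center by the group mean and whiten by the group covariance."""
    centered = g - g.mean(axis=0)
    L = np.linalg.cholesky(np.cov(g, rowvar=False))
    return np.linalg.solve(L, centered.T).T

def resampling_pvalue(groups, n_boot=2000, rng=None):
    rng = np.random.default_rng(rng)
    observed = box_m(groups)
    pooled_z = np.vstack([standardize(g) for g in groups])
    sizes = [len(g) for g in groups]
    count = 0
    for _ in range(n_boot):
        boot = [pooled_z[rng.integers(0, len(pooled_z), ni)] for ni in sizes]
        if box_m(boot) >= observed:
            count += 1
    return (count + 1) / (n_boot + 1)

rng = np.random.default_rng(1)
g1 = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], 40)
g2 = rng.multivariate_normal([1, 1], [[2, 0.0], [0.0, 1]], 40)
print(resampling_pvalue([g1, g2], rng=2))
```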

  4. Assessment of resampling methods for causality testing: A note on the US inflation behavior

    Science.gov (United States)

    Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As an appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However, this relationship cannot be explained on the basis of traditional cost-push mechanisms. PMID:28708870
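
    As an illustration of the time-shifted surrogate scheme mentioned above, the sketch below circularly shifts the driving series, destroying its temporal relation to the response while preserving its own dynamics. A simple lag-1 cross-correlation stands in for the partial transfer entropy, so this is a toy analogue rather than the paper's procedure:

```python
# Time-shifted surrogates for a directional coupling test (toy statistic).
import numpy as np

def coupling_stat(driver, response, lag=1):
    """Stand-in directional statistic: |corr(driver_t, response_{t+lag})|."""
    return abs(np.corrcoef(driver[:-lag], response[lag:])[0, 1])

def time_shift_pvalue(driver, response, n_surr=1000, rng=None):
    rng = np.random.default_rng(rng)
    observed = coupling_stat(driver, response)
    n = len(driver)
    count = 0
    for _ in range(n_surr):
        shift = rng.integers(1, n - 1)
        surrogate = np.roll(driver, shift)       # circular time shift of the driver
        if coupling_stat(surrogate, response) >= observed:
            count += 1
    return (count + 1) / (n_surr + 1)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.6 * np.roll(x, 1) + rng.normal(scale=0.8, size=500)  # x drives y at lag 1
y[0] = rng.normal()  # discard the wrapped-around value introduced by np.roll
print(time_shift_pvalue(x, y, rng=1))
```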

  5. Assessment of resampling methods for causality testing: A note on the US inflation behavior.

    Science.gov (United States)

    Papana, Angeliki; Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As an appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However, this relationship cannot be explained on the basis of traditional cost-push mechanisms.

  6. Accelerated spike resampling for accurate multiple testing controls.

    Science.gov (United States)

    Harrison, Matthew T

    2013-02-01

    Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.
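
    Below is a minimal, hedged Python sketch of interval-jitter resampling for pairwise synchrony: each spike is re-drawn uniformly within its own jitter window, which preserves slow rate fluctuations while destroying fine-timescale synchrony. The synchrony statistic, window widths and spike trains are illustrative, and the importance-sampling acceleration described in the abstract is not implemented:

```python
# Interval-jitter resampling for spike synchrony (illustrative settings).
import numpy as np

def sync_count(spikes_a, spikes_b, tol=0.002):
    """Number of spikes in train A with a spike in train B within +/- tol seconds."""
    spikes_b = np.sort(spikes_b)
    idx = np.searchsorted(spikes_b, spikes_a)
    idx = np.clip(idx, 1, len(spikes_b) - 1)
    nearest = np.minimum(np.abs(spikes_b[idx] - spikes_a),
                         np.abs(spikes_b[idx - 1] - spikes_a))
    return int(np.sum(nearest <= tol))

def jitter(spikes, width=0.02, rng=None):
    """Re-draw each spike uniformly within its own jitter window of `width` seconds."""
    rng = np.random.default_rng(rng)
    bins = np.floor(spikes / width)
    return bins * width + rng.uniform(0, width, size=len(spikes))

def jitter_pvalue(spikes_a, spikes_b, n_resamples=1000, rng=None):
    rng = np.random.default_rng(rng)
    observed = sync_count(spikes_a, spikes_b)
    count = 0
    for _ in range(n_resamples):
        if sync_count(jitter(spikes_a, rng=rng), spikes_b) >= observed:
            count += 1
    return (count + 1) / (n_resamples + 1)

rng = np.random.default_rng(3)
a = np.sort(rng.uniform(0, 10, 200))                       # ~20 Hz spike train
b = np.sort(np.concatenate([a[:50] + rng.normal(0, 0.001, 50),
                            rng.uniform(0, 10, 150)]))     # partly synchronous train
print(jitter_pvalue(a, b))
```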

  7. Testing for Granger Causality in the Frequency Domain: A Phase Resampling Method.

    Science.gov (United States)

    Liu, Siwei; Molenaar, Peter

    2016-01-01

    This article introduces phase resampling, an existing but rarely used surrogate data method for making statistical inferences of Granger causality in frequency domain time series analysis. Granger causality testing is essential for establishing causal relations among variables in multivariate dynamic processes. However, testing for Granger causality in the frequency domain is challenging due to the nonlinear relation between frequency domain measures (e.g., partial directed coherence, generalized partial directed coherence) and time domain data. Through a simulation study, we demonstrate that phase resampling is a general and robust method for making statistical inferences even with short time series. With Gaussian data, phase resampling yields satisfactory type I and type II error rates in all but one condition we examine: when a small effect size is combined with an insufficient number of data points. Violations of normality lead to slightly higher error rates but are mostly within acceptable ranges. We illustrate the utility of phase resampling with two empirical examples involving multivariate electroencephalography (EEG) and skin conductance data.
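
    One common construction of phase-resampled (phase-randomized) surrogates is sketched below, under the assumption that the surrogate keeps each series' Fourier amplitudes and randomizes its phases, preserving the autocovariance structure while destroying cross-series dependence; the frequency-domain causality measures themselves (partial directed coherence and variants) are not implemented here:

```python
# Phase-randomized surrogate of a single time series (common construction).
import numpy as np

def phase_surrogate(x, rng=None):
    rng = np.random.default_rng(rng)
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, size=len(spectrum))
    phases[0] = 0.0                      # keep the zero-frequency (mean) term real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist term real
    surrogate = np.abs(spectrum) * np.exp(1j * phases)
    return np.fft.irfft(surrogate, n=len(x))

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=1024))     # a strongly autocorrelated series
s = phase_surrogate(x, rng=1)
print(np.std(x), np.std(s))              # similar second-order structure
```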

  8. A resampling-based meta-analysis for detection of differential gene expression in breast cancer

    International Nuclear Information System (INIS)

    Gur-Dedeoglu, Bala; Konu, Ozlen; Kir, Serkan; Ozturk, Ahmet Rasit; Bozkurt, Betul; Ergul, Gulusan; Yulug, Isik G

    2008-01-01

    Accuracy in the diagnosis of breast cancer and classification of cancer subtypes has improved over the years with the development of well-established immunohistopathological criteria. More recently, diagnostic gene-sets at the mRNA expression level have been tested as better predictors of disease state. However, breast cancer is heterogeneous in nature; thus extraction of differentially expressed gene-sets that stably distinguish normal tissue from various pathologies poses challenges. Meta-analysis of high-throughput expression data using a collection of statistical methodologies leads to the identification of robust tumor gene expression signatures. A resampling-based meta-analysis strategy, which involves the use of resampling and application of distribution statistics in combination to assess the degree of significance in differential expression between sample classes, was developed. Two independent microarray datasets that contain normal breast, invasive ductal carcinoma (IDC), and invasive lobular carcinoma (ILC) samples were used for the meta-analysis. Expression of the genes selected from the gene list for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes was tested on 10 independent primary IDC samples and matched non-tumor controls by real-time qRT-PCR. Other existing breast cancer microarray datasets were used in support of the resampling-based meta-analysis. The two independent microarray studies were found to be comparable, although differing in their experimental methodologies (Pearson correlation coefficient, R = 0.9389 and R = 0.8465 for ductal and lobular samples, respectively). The resampling-based meta-analysis has led to the identification of a highly stable set of genes for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes. The expression results of the selected genes obtained through real-time qRT-PCR supported the meta-analysis results. The

  9. A resampling-based meta-analysis for detection of differential gene expression in breast cancer

    Directory of Open Access Journals (Sweden)

    Ergul Gulusan

    2008-12-01

    Full Text Available Abstract Background Accuracy in the diagnosis of breast cancer and classification of cancer subtypes has improved over the years with the development of well-established immunohistopathological criteria. More recently, diagnostic gene-sets at the mRNA expression level have been tested as better predictors of disease state. However, breast cancer is heterogeneous in nature; thus extraction of differentially expressed gene-sets that stably distinguish normal tissue from various pathologies poses challenges. Meta-analysis of high-throughput expression data using a collection of statistical methodologies leads to the identification of robust tumor gene expression signatures. Methods A resampling-based meta-analysis strategy, which involves the use of resampling and application of distribution statistics in combination to assess the degree of significance in differential expression between sample classes, was developed. Two independent microarray datasets that contain normal breast, invasive ductal carcinoma (IDC), and invasive lobular carcinoma (ILC) samples were used for the meta-analysis. Expression of the genes selected from the gene list for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes was tested on 10 independent primary IDC samples and matched non-tumor controls by real-time qRT-PCR. Other existing breast cancer microarray datasets were used in support of the resampling-based meta-analysis. Results The two independent microarray studies were found to be comparable, although differing in their experimental methodologies (Pearson correlation coefficient, R = 0.9389 and R = 0.8465 for ductal and lobular samples, respectively). The resampling-based meta-analysis has led to the identification of a highly stable set of genes for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes. The expression results of the selected genes obtained through real

  10. Introductory statistics and analytics a resampling perspective

    CERN Document Server

    Bruce, Peter C

    2014-01-01

    Concise, thoroughly class-tested primer that features basic statistical concepts in the context of analytics, resampling, and the bootstrap. A uniquely developed presentation of key statistical topics, Introductory Statistics and Analytics: A Resampling Perspective provides an accessible approach to statistical analytics, resampling, and the bootstrap for readers with various levels of exposure to basic probability and statistics. Originally class-tested at one of the first online learning companies in the discipline, www.statistics.com, the book primarily focuses on application

  11. PARTICLE FILTER BASED VEHICLE TRACKING APPROACH WITH IMPROVED RESAMPLING STAGE

    Directory of Open Access Journals (Sweden)

    Wei Leong Khong

    2014-02-01

    Full Text Available Optical sensor-based vehicle tracking can be widely implemented in traffic surveillance and flow control. The vast development of video surveillance infrastructure in recent years has drawn the current research focus towards vehicle tracking using high-end and low cost optical sensors. However, tracking vehicles via such sensors can be challenging due to the high probability of changing vehicle appearance and illumination, besides occlusion and overlapping incidents. The particle filter has been proven to be an approach which can overcome nonlinear and non-Gaussian situations caused by cluttered backgrounds and occlusion incidents. Unfortunately, the conventional particle filter approach encounters particle degeneracy, especially during and after occlusion. Sampling importance resampling (SIR) is an important step to overcome this drawback of the particle filter, but SIR faces the problem of sample impoverishment when heavy particles are statistically selected many times. In this work, a genetic algorithm has been proposed for the particle filter resampling stage, where the estimated position can converge faster to the real position of the target vehicle under various occlusion incidents. The experimental results show that the improved particle filter with genetic algorithm resampling increases the tracking accuracy while reducing the particle sample size in the resampling stage.
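
    For reference, a minimal sketch of the standard sampling-importance-resampling (SIR) step discussed above, using systematic resampling; the genetic-algorithm refinement proposed in the paper is not reproduced, and the particles and weights are illustrative:

```python
# Systematic resampling: the SIR step that duplicates heavy particles
# and drops light ones, returning a particle set with uniform weights.
import numpy as np

def systematic_resample(particles, weights, rng=None):
    rng = np.random.default_rng(rng)
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n     # one stratified draw per particle
    cumulative = np.cumsum(weights / weights.sum())
    indices = np.searchsorted(cumulative, positions)
    return particles[indices], np.full(n, 1.0 / n)

rng = np.random.default_rng(0)
particles = rng.normal(size=(500, 2))                  # e.g. (x, y) vehicle positions
weights = np.exp(-0.5 * np.sum(particles**2, axis=1))  # illustrative likelihoods
resampled, new_w = systematic_resample(particles, weights, rng=1)
print(resampled.shape, new_w[0])
```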

  12. Assessment of Resampling Methods for Causality Testing: A note on the US Inflation Behavior

    NARCIS (Netherlands)

    Papana, A.; Kyrtsou, C.; Kugiumtzis, D.; Diks, C.

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial

  13. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    Science.gov (United States)

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  14. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming; Cheng, Yichen; Song, Qifan; Park, Jincheol; Yang, Ping

    2013-01-01

    large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate

  15. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has small sample size limitations. We used a pooled method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test while maintaining type I error probability for any conditions except for Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives. The nonparametric bootstrap test provided benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating the one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
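
    The following is a hedged sketch of one reading of the pooled nonparametric bootstrap test described above: both groups are pooled to approximate the null of equal means, bootstrap samples of the original group sizes are drawn from the pool, and the observed Welch t statistic is referred to the resulting distribution. The exact pooling scheme in the paper may differ in detail:

```python
# Pooled bootstrap t-test sketch for two small, non-normal samples.
import numpy as np
from scipy import stats

def pooled_bootstrap_ttest(x, y, n_boot=5000, rng=None):
    rng = np.random.default_rng(rng)
    observed = stats.ttest_ind(x, y, equal_var=False).statistic
    pooled = np.concatenate([x, y])          # pooling approximates the null of equal means
    count = 0
    for _ in range(n_boot):
        bx = rng.choice(pooled, size=len(x), replace=True)
        by = rng.choice(pooled, size=len(y), replace=True)
        if abs(stats.ttest_ind(bx, by, equal_var=False).statistic) >= abs(observed):
            count += 1
    return (count + 1) / (n_boot + 1)

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=8)   # small, skewed samples
y = rng.lognormal(mean=0.8, sigma=1.0, size=8)
print(pooled_bootstrap_ttest(x, y, rng=1))
```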

  16. Fourier transform resampling: Theory and application

    International Nuclear Information System (INIS)

    Hawkins, W.G.

    1996-01-01

    One of the most challenging problems in medical imaging is the development of reconstruction algorithms for nonstandard geometries. This work focuses on the application of Fourier analysis to the problem of resampling or rebinning. Conventional resampling methods utilizing some form of interpolation almost always result in a loss of resolution in the tomographic image. Fourier Transform Resampling (FTRS) offers potential improvement because the Modulation Transfer Function (MTF) of the process behaves like an ideal low pass filter. The MTF, however, is nonstationary if the coordinate transformation is nonlinear. FTRS may be viewed as a generalization of the linear coordinate transformations of standard Fourier analysis. Simulated MTFs were obtained by projecting point sources at different transverse positions in the flat fan beam detector geometry. These MTFs were compared to the closed form expression for FTRS. Excellent agreement was obtained for frequencies at or below the estimated cutoff frequency. The resulting FTRS algorithm is applied to simulations with symmetric fan beam geometry, an elliptical orbit and uniform attenuation, with a normalized root mean square error (NRMSE) of 0.036. Also, a Tc-99m point source study (1 cm dia., placed in air 10 cm from the COR) for a circular fan beam acquisition was reconstructed with a hybrid resampling method. The FWHM of the hybrid resampling method was 11.28 mm, which compares favorably with a direct reconstruction (FWHM: 11.03 mm)

  17. Resampling Methods Improve the Predictive Power of Modeling in Class-Imbalanced Datasets

    Directory of Open Access Journals (Sweden)

    Paul H. Lee

    2014-09-01

    Full Text Available In the medical field, many outcome variables are dichotomized, and the two possible values of a dichotomized variable are referred to as classes. A dichotomized dataset is class-imbalanced if it consists mostly of one class, and performance of common classification models on this type of dataset tends to be suboptimal. To tackle such a problem, resampling methods, including oversampling and undersampling, can be used. This paper aims at illustrating the effect of resampling methods using the National Health and Nutrition Examination Survey (NHANES) wave 2009–2010 dataset. A total of 4677 participants aged ≥20 without self-reported diabetes and with valid blood test results were analyzed. The Classification and Regression Tree (CART) procedure was used to build a classification model on undiagnosed diabetes. A participant demonstrated evidence of diabetes according to WHO diabetes criteria. Exposure variables included demographics and socio-economic status. CART models were fitted using a randomly selected 70% of the data (training dataset), and area under the receiver operating characteristic curve (AUC) was computed using the remaining 30% of the sample for evaluation (testing dataset). CART models were fitted using the training dataset, the oversampled training dataset, the weighted training dataset, and the undersampled training dataset. In addition, resampling case-to-control ratios of 1:1, 1:2, and 1:4 were examined. The effect of resampling methods on the performance of other extensions of CART (random forests and generalized boosted trees) was also examined. CARTs fitted on the oversampled (AUC = 0.70) and undersampled training data (AUC = 0.74) yielded a better classification power than that on the training data (AUC = 0.65). Resampling could also improve the classification power of random forests and generalized boosted trees. To conclude, applying resampling methods in a class-imbalanced dataset improved the classification power of CART, random forests
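
    As a simple illustration of the two resampling strategies discussed above, the sketch below performs random oversampling of the minority class and random undersampling of the majority class on a synthetic imbalanced dataset; the NHANES data and the CART/random forest models are not reproduced:

```python
# Random oversampling and undersampling with plain NumPy (synthetic data).
import numpy as np

def oversample(X, y, rng=None):
    """Duplicate minority-class rows until both classes have equal size."""
    rng = np.random.default_rng(rng)
    minority = int(np.bincount(y).argmin())
    idx_min = np.where(y == minority)[0]
    idx_maj = np.where(y != minority)[0]
    extra = rng.choice(idx_min, size=len(idx_maj), replace=True)
    keep = np.concatenate([idx_maj, extra])
    return X[keep], y[keep]

def undersample(X, y, ratio=1, rng=None):
    """Keep all minority rows and `ratio` times as many majority rows."""
    rng = np.random.default_rng(rng)
    minority = int(np.bincount(y).argmin())
    idx_min = np.where(y == minority)[0]
    idx_maj = np.where(y != minority)[0]
    keep_maj = rng.choice(idx_maj, size=ratio * len(idx_min), replace=False)
    keep = np.concatenate([idx_min, keep_maj])
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.uniform(size=1000) < 0.05).astype(int)   # ~5% positive class
print(np.bincount(oversample(X, y, rng=1)[1]))    # balanced classes
print(np.bincount(undersample(X, y, ratio=1, rng=2)[1]))
```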

  18. Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods

    Science.gov (United States)

    MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason

    2010-01-01

    The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal distribution. This article uses a simulation study to demonstrate that confidence limits are imbalanced because the distribution of the indirect effect is normal only in special cases. Two alternatives for improving the performance of confidence limits for the indirect effect are evaluated: (a) a method based on the distribution of the product of two normal random variables, and (b) resampling methods. In Study 1, confidence limits based on the distribution of the product are more accurate than methods based on an assumed normal distribution but confidence limits are still imbalanced. Study 2 demonstrates that more accurate confidence limits are obtained using resampling methods, with the bias-corrected bootstrap the best method overall. PMID:20157642
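
    A minimal sketch of the percentile bootstrap for an indirect effect a*b in a simple X -> M -> Y mediation model is shown below; the bias-corrected variant favored in the article adds a correction factor that is not implemented here, and the data are simulated:

```python
# Percentile bootstrap confidence interval for an indirect effect a*b.
import numpy as np

def indirect_effect(x, m, y):
    """ab = slope of M on X times slope of Y on M adjusting for X."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05, rng=None):
    rng = np.random.default_rng(rng)
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        estimates[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.quantile(estimates, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(0)
x = rng.normal(size=200)
m = 0.5 * x + rng.normal(size=200)
y = 0.4 * m + 0.2 * x + rng.normal(size=200)
print(indirect_effect(x, m, y), bootstrap_ci(x, m, y, rng=1))
```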

  19. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.

    Science.gov (United States)

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-08-12

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based-image-analysis (OBIA) implemented for the early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high-spatial-resolution UAV-images at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights.

  20. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    Science.gov (United States)

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice, when simulating half-count (or less) images from full-count images. It simulates correctly the statistical properties, also in the case of rounding off of the images.
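
    The count-reduction idea discussed above can be illustrated by binomial thinning of a Poisson image, which to my reading captures the key property exploited by Poisson resampling: thinning a Poisson variable with keep-probability 0.5 yields a Poisson variable with half the mean. The image below is simulated, not the Co-57 flood source data:

```python
# Binomial thinning of a simulated Poisson full-count image to half counts.
import numpy as np

rng = np.random.default_rng(0)
full_count = rng.poisson(lam=50.0, size=(128, 128))   # simulated acquisition
half_count = rng.binomial(full_count, 0.5)            # per-pixel thinning, keep p = 0.5

print(full_count.mean(), half_count.mean())           # mean ratio ~ 0.5
print(full_count.var() / full_count.mean(),           # both ratios ~ 1, as for Poisson data
      half_count.var() / half_count.mean())
```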

  1. Image re-sampling detection through a novel interpolation kernel.

    Science.gov (United States)

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential element block in a typical digital image alteration. Fortunately, traces left from such processes are detectable, proving that the image has gone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that controls its amplitude, angular frequency, standard deviation, and duration. Then, we demonstrate its capacity to imitate the same behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Secondly, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The involved process includes a minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that had undergone complex transformations. Obtained results demonstrate better performance and reduced processing time when compared to a reference method validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Resampling Approach for Determination of the Method for Reference Interval Calculation in Clinical Laboratory Practice▿

    Science.gov (United States)

    Pavlov, Igor Y.; Wilson, Andrew R.; Delgado, Julio C.

    2010-01-01

    Reference intervals (RI) play a key role in clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation—parametric, transformed parametric, and quantile-based bootstrapping—were applied to multiple random samples drawn from 81 values of complement factor B observations and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could be up to 20% and even more. The transformed parametric method was found to be the best method for the calculation of RI of non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For a simulated Gaussian population, parametric calculations, as expected, were the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach could help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included. PMID:20554803
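
    A simplified, hedged sketch of the resampling comparison described above: random subsamples are repeatedly drawn from an available reference data set, and the reference intervals produced by a parametric (Gaussian) method and a nonparametric percentile method are compared. The transformed parametric and quantile-based bootstrapping methods from the paper are omitted, and a skewed synthetic sample stands in for the complement factor B data:

```python
# Resampling comparison of two reference-interval (RI) calculation methods.
import numpy as np

def parametric_ri(x):
    """Gaussian 95% reference interval: mean +/- 1.96 SD."""
    return x.mean() - 1.96 * x.std(ddof=1), x.mean() + 1.96 * x.std(ddof=1)

def percentile_ri(x):
    """Nonparametric RI from the 2.5th and 97.5th percentiles."""
    return tuple(np.quantile(x, [0.025, 0.975]))

def compare_methods(data, subsample_size=40, n_resamples=1000, rng=None):
    rng = np.random.default_rng(rng)
    par, nonpar = [], []
    for _ in range(n_resamples):
        sub = rng.choice(data, size=subsample_size, replace=True)
        par.append(parametric_ri(sub))
        nonpar.append(percentile_ri(sub))
    return np.mean(par, axis=0), np.mean(nonpar, axis=0)

rng = np.random.default_rng(0)
data = rng.lognormal(mean=1.0, sigma=0.4, size=81)   # skewed, like many analytes
print(compare_methods(data, rng=1))
```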

  3. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'

    DEFF Research Database (Denmark)

    de Nijs, Robin

    2015-01-01

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half count image had a severe impact on counting statistics...

  4. Genetic divergence among cupuaçu accessions by multiscale bootstrap resampling

    Directory of Open Access Journals (Sweden)

    Vinicius Silva dos Santos

    2015-06-01

    Full Text Available This study aimed at investigating the genetic divergence of eighteen accessions of cupuaçu trees based on fruit morphometric traits and comparing usual methods of cluster analysis with the proposed multiscale bootstrap resampling methodology. The data were obtained from an experiment conducted in Tomé-Açu city (PA, Brazil), arranged in a completely randomized design with eighteen cupuaçu accessions and 10 repetitions, from 2004 to 2011. Genetic parameters were estimated by restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) methodology. The predicted breeding values were used in the study on genetic divergence through Unweighted Pair Group Method with Arithmetic Mean (UPGMA) hierarchical clustering and Tocher's optimization method based on standardized Euclidean distance. Clustering consistency and optimal number of clusters in the UPGMA method were verified by the cophenetic correlation coefficient (CCC) and Mojena's criterion, respectively, besides the multiscale bootstrap resampling technique. The use of the UPGMA clustering method in situations with and without multiscale bootstrap resulted in four and five clusters, respectively, while Tocher's method resulted in seven clusters. The multiscale bootstrap resampling technique proves to be efficient to assess the consistency of clustering in hierarchical methods and, consequently, the optimal number of clusters.

  5. A comparison of resampling schemes for estimating model observer performance with small ensembles

    Science.gov (United States)

    Elshahaby, Fatma E. A.; Jha, Abhinav K.; Ghaly, Michael; Frey, Eric C.

    2017-09-01

    In objective assessment of image quality, an ensemble of images is used to compute the 1st and 2nd order statistics of the data. Often, only a finite number of images is available, leading to the issue of statistical variability in numerical observer performance. Resampling-based strategies can help overcome this issue. In this paper, we compared different combinations of resampling schemes (the leave-one-out (LOO) and the half-train/half-test (HT/HT)) and model observers (the conventional channelized Hotelling observer (CHO), channelized linear discriminant (CLD) and channelized quadratic discriminant). Observer performance was quantified by the area under the ROC curve (AUC). For a binary classification task and for each observer, the AUC value for an ensemble size of 2000 samples per class served as a gold standard for that observer. Results indicated that each observer yielded a different performance depending on the ensemble size and the resampling scheme. For a small ensemble size, the combination [CHO, HT/HT] had more accurate rankings than the combination [CHO, LOO]. Using the LOO scheme, the CLD and CHO had similar performance for large ensembles. However, the CLD outperformed the CHO and gave more accurate rankings for smaller ensembles. As the ensemble size decreased, the performance of the [CHO, LOO] combination seriously deteriorated as opposed to the [CLD, LOO] combination. Thus, it might be desirable to use the CLD with the LOO scheme when smaller ensemble size is available.

  6. Resampling methods in Microsoft Excel® for estimating reference intervals.

    Science.gov (United States)

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general and to using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.

  7. An add-in implementation of the RESAMPLING syntax under Microsoft EXCEL.

    Science.gov (United States)

    Meineke, I

    2000-10-01

    The RESAMPLING syntax defines a set of powerful commands, which allow the programming of probabilistic statistical models with few, easily memorized statements. This paper presents an implementation of the RESAMPLING syntax using Microsoft EXCEL with Microsoft WINDOWS(R) as a platform. Two examples are given to demonstrate typical applications of RESAMPLING in biomedicine. Details of the implementation with special emphasis on the programming environment are discussed at length. The add-in is available electronically to interested readers upon request. The use of the add-in facilitates numerical statistical analyses of data from within EXCEL in a comfortable way.

  8. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    Science.gov (United States)

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
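
    A minimal sketch of within-cluster resampling as described above: one observation is drawn at random from every cluster, an ordinary analysis is run on the reduced data set, and the procedure is repeated many times with the resulting estimates averaged. A simple mean of a binary outcome stands in for the generalized linear mixed model used in the paper, and the informative-cluster-size mechanism is invented for illustration:

```python
# Within-cluster resampling with a toy binary outcome and informative cluster sizes.
import numpy as np

def within_cluster_resampling(clusters, n_resamples=2000, rng=None):
    rng = np.random.default_rng(rng)
    estimates = np.empty(n_resamples)
    for i in range(n_resamples):
        picks = [c[rng.integers(len(c))] for c in clusters]   # one observation per cluster
        estimates[i] = np.mean(picks)                         # ordinary analysis on the reduced data
    # the WCR point estimate averages over resamples; a proper variance estimate
    # combines within- and between-resample components (not shown here)
    return estimates.mean()

rng = np.random.default_rng(0)
clusters = []
for _ in range(100):
    size = rng.integers(2, 12)            # cluster (e.g. litter) size
    p = 0.6 - 0.03 * size                 # larger clusters respond less often (informative size)
    clusters.append(rng.binomial(1, p, size=size))

naive = np.mean(np.concatenate(clusters))  # pooled mean ignores informative cluster size
print(naive, within_cluster_resampling(clusters, rng=1))
```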

  9. Optimal resampling for the noisy OneMax problem

    OpenAIRE

    Liu, Jialin; Fairbank, Michael; Pérez-Liébana, Diego; Lucas, Simon M.

    2016-01-01

    The OneMax problem is a standard benchmark optimisation problem for a binary search space. Recent work on applying a Bandit-Based Random Mutation Hill-Climbing algorithm to the noisy OneMax Problem showed that it is important to choose a good value for the resampling number to make a careful trade off between taking more samples in order to reduce noise, and taking fewer samples to reduce the total computational cost. This paper extends that observation, by deriving an analytical expression f...

  10. Conditional Monthly Weather Resampling Procedure for Operational Seasonal Water Resources Forecasting

    Science.gov (United States)

    Beckers, J.; Weerts, A.; Tijdeman, E.; Welles, E.; McManamon, A.

    2013-12-01

    To provide reliable and accurate seasonal streamflow forecasts for water resources management, several operational hydrologic agencies and hydropower companies around the world use the Extended Streamflow Prediction (ESP) procedure. The ESP in its original implementation does not accommodate any additional information that the forecaster may have about expected deviations from climatology in the near future. Several attempts have been made to improve the skill of the ESP forecast, especially for areas which are affected by teleconnections (e.g. ENSO, PDO), via selection (Hamlet and Lettenmaier, 1999) or weighting schemes (Werner et al., 2004; Wood and Lettenmaier, 2006; Najafi et al., 2012). A disadvantage of such schemes is that they lead to a reduction of the signal to noise ratio of the probabilistic forecast. To overcome this, we propose a resampling method conditional on climate indices to generate meteorological time series to be used in the ESP. The method can be used to generate a large number of meteorological ensemble members in order to improve the statistical properties of the ensemble. The effectiveness of the method was demonstrated in a real-time operational hydrologic seasonal forecast system for the Columbia River basin operated by the Bonneville Power Administration. The forecast skill of the k-nn resampler was tested against the original ESP for three basins at the long-range seasonal time scale. The BSS and CRPSS were used to compare the results to those of the original ESP method. Positive forecast skill scores were found for the resampler method conditioned on different indices for the prediction of spring peak flows in the Dworshak and Hungry Horse basins. For the Libby Dam basin, however, no improvement of skill was found. The proposed resampling method is a promising practical approach that can add skill to ESP forecasts at the seasonal time scale. Further improvement is possible by fine tuning the method and selecting the most

  11. Event-based stochastic point rainfall resampling for statistical replication and climate projection of historical rainfall series

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Korup Andersen, Aske; Larsen, Anders Badsberg

    2017-01-01

    Continuous and long rainfall series are a necessity in rural and urban hydrology for analysis and design purposes. Local historical point rainfall series often cover several decades, which makes it possible to estimate rainfall means at different timescales, and to assess return periods of extreme... includes climate changes projected to a specific future period. This paper presents a framework for resampling of historical point rainfall series in order to generate synthetic rainfall series, which has the same statistical properties as an original series. Using a number of key target predictions... for the future climate, such as winter and summer precipitation, and representation of extreme events, the resampled historical series are projected to represent rainfall properties in a future climate. Climate-projected rainfall series are simulated by brute force randomization of model parameters, which leads...

  12. NAIP Aerial Imagery (Resampled), Salton Sea - 2005 [ds425

    Data.gov (United States)

    California Natural Resource Agency — NAIP 2005 aerial imagery that has been resampled from 1-meter source resolution to approximately 30-meter resolution. This is a mosaic composed from several NAIP...

  13. Automotive FMCW Radar-Enhanced Range Estimation via a Local Resampling Fourier Transform

    Directory of Open Access Journals (Sweden)

    Cailing Wang

    2016-02-01

    Full Text Available In complex traffic scenarios, more accurate measurement and discrimination for an automotive frequency-modulated continuous-wave (FMCW) radar is required for intelligent robots, driverless cars and driver-assistant systems. A more accurate range estimation method based on a local resampling Fourier transform (LRFT) for an FMCW radar is developed in this paper. Radar signal correlation in the phase space sees a higher signal-noise-ratio (SNR) to achieve more accurate ranging, and the LRFT - which acts on a local neighbour as a refinement step - can achieve a more accurate target range. The rough range is estimated through conditional pulse compression (PC) and then, around the initial rough estimation, a refined estimation through the LRFT in the local region achieves greater precision. Furthermore, the LRFT algorithm is tested in numerous simulations and physical system experiments, which show that the LRFT algorithm achieves a more precise range estimation than traditional FFT-based algorithms, especially for lower bandwidth signals.

  14. Improved efficiency of multi-criteria IMPT treatment planning using iterative resampling of randomly placed pencil beams

    Science.gov (United States)

    van de Water, S.; Kraan, A. C.; Breedveld, S.; Schillemans, W.; Teguh, D. N.; Kooy, H. M.; Madden, T. M.; Heijmen, B. J. M.; Hoogeman, M. S.

    2013-10-01

    This study investigates whether ‘pencil beam resampling’, i.e. iterative selection and weight optimization of randomly placed pencil beams (PBs), reduces optimization time and improves plan quality for multi-criteria optimization in intensity-modulated proton therapy, compared with traditional modes in which PBs are distributed over a regular grid. Resampling consisted of repeatedly performing: (1) random selection of candidate PBs from a very fine grid, (2) inverse multi-criteria optimization, and (3) exclusion of low-weight PBs. The newly selected candidate PBs were added to the PBs in the existing solution, causing the solution to improve with each iteration. Resampling and traditional regular grid planning were implemented into our in-house developed multi-criteria treatment planning system ‘Erasmus iCycle’. The system optimizes objectives successively according to their priorities as defined in the so-called ‘wish-list’. For five head-and-neck cancer patients and two PB widths (3 and 6 mm sigma at 230 MeV), treatment plans were generated using: (1) resampling, (2) anisotropic regular grids and (3) isotropic regular grids, while using varying sample sizes (resampling) or grid spacings (regular grid). We assessed differences in optimization time (for comparable plan quality) and in plan quality parameters (for comparable optimization time). Resampling reduced optimization time by a factor of 2.8 and 5.6 on average (7.8 and 17.0 at maximum) compared with the use of anisotropic and isotropic grids, respectively. Doses to organs-at-risk were generally reduced when using resampling, with median dose reductions ranging from 0.0 to 3.0 Gy (maximum: 14.3 Gy, relative: 0%-42%) compared with anisotropic grids and from -0.3 to 2.6 Gy (maximum: 11.4 Gy, relative: -4%-19%) compared with isotropic grids. Resampling was especially effective when using thin PBs (3 mm sigma). Resampling plans contained on average fewer PBs, energy layers and protons than anisotropic

  15. VOYAGER 1 SATURN MAGNETOMETER RESAMPLED DATA 9.60 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 1 Saturn encounter magnetometer data that have been resampled at a 9.6 second sample rate. The data set is composed of 6 columns: 1)...

  16. VOYAGER 2 JUPITER MAGNETOMETER RESAMPLED DATA 48.0 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 2 Jupiter encounter magnetometer data that have been resampled at a 48.0 second sample rate. The data set is composed of 6 columns: 1)...

  17. Comparison of parametric and bootstrap method in bioequivalence test.

    Science.gov (United States)

    Ahn, Byung-Jin; Yim, Dong-Seok

    2009-10-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation on bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.

  18. Resampling to accelerate cross-correlation searches for continuous gravitational waves from binary systems

    Science.gov (United States)

    Meadors, Grant David; Krishnan, Badri; Papa, Maria Alessandra; Whelan, John T.; Zhang, Yuanhao

    2018-02-01

    Continuous-wave (CW) gravitational waves (GWs) call for computationally-intensive methods. Low signal-to-noise ratio signals need templated searches with long coherent integration times and thus fine parameter-space resolution. Longer integration increases sensitivity. Low-mass x-ray binaries (LMXBs) such as Scorpius X-1 (Sco X-1) may emit accretion-driven CWs at strains reachable by current ground-based observatories. Binary orbital parameters induce phase modulation. This paper describes how resampling corrects binary and detector motion, yielding source-frame time series used for cross-correlation. Compared to the previous, detector-frame, templated cross-correlation method, used for Sco X-1 on data from the first Advanced LIGO observing run (O1), resampling is about 20 × faster in the costliest, most-sensitive frequency bands. Speed-up factors depend on integration time and search setup. The speed could be reinvested into longer integration with a forecast sensitivity gain, 20 to 125 Hz median, of approximately 51%, or from 20 to 250 Hz, 11%, given the same per-band cost and setup. This paper's timing model enables future setup optimization. Resampling scales well with longer integration, and at 10 × unoptimized cost could reach respectively 2.83 × and 2.75 × median sensitivities, limited by spin-wandering. Then an O1 search could yield a marginalized-polarization upper limit reaching torque-balance at 100 Hz. Frequencies from 40 to 140 Hz might be probed in equal observing time with 2 × improved detectors.

  19. Comparison of standard resampling methods for performance estimation of artificial neural network ensembles

    OpenAIRE

    Green, Michael; Ohlsson, Mattias

    2007-01-01

    Estimation of the generalization performance for classification within the medical applications domain is always an important task. In this study we focus on artificial neural network ensembles as the machine learning technique. We present a numerical comparison between five common resampling techniques: k-fold cross validation (CV), holdout, using three cutoffs, and bootstrap using five different data sets. The results show that CV together with holdout 0.25 and 0.50 are the best resampl...

  20. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  1. Illustrating, Quantifying, and Correcting for Bias in Post-hoc Analysis of Gene-Based Rare Variant Tests of Association

    Science.gov (United States)

    Grinde, Kelsey E.; Arbet, Jaron; Green, Alden; O'Connell, Michael; Valcarcel, Alessandra; Westra, Jason; Tintle, Nathan

    2017-01-01

    To date, gene-based rare variant testing approaches have focused on aggregating information across sets of variants to maximize statistical power in identifying genes showing significant association with diseases. Beyond identifying genes that are associated with diseases, the identification of causal variant(s) in those genes and estimation of their effect is crucial for planning replication studies and characterizing the genetic architecture of the locus. However, we illustrate that straightforward single-marker association statistics can suffer from substantial bias introduced by conditioning on gene-based test significance, due to the phenomenon often referred to as “winner's curse.” We illustrate the ramifications of this bias on variant effect size estimation and variant prioritization/ranking approaches, outline parameters of genetic architecture that affect this bias, and propose a bootstrap resampling method to correct for this bias. We find that our correction method significantly reduces the bias due to winner's curse (average two-fold decrease in bias, p bias and improve inference in post-hoc analysis of gene-based tests under a wide variety of genetic architectures. PMID:28959274

  2. ROSETTA-ORBITER SW RPCMAG 4 CR2 RESAMPLED V3.0

    Data.gov (United States)

    National Aeronautics and Space Administration — 2010-07-30 SBN:T.Barnes Updated and DATA_SET_DESCThis dataset contains RESAMPLED DATA of the CRUISE 2 phase (CR2). (Version 3.0 is the first version archived.)

  3. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming

    2013-03-01

    The Gaussian geostatistical model has been widely used in modeling of spatial data. However, it is challenging to computationally implement this method because it requires the inversion of a large covariance matrix, particularly when there is a large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate of the parameters is updated accordingly under the framework of stochastic approximation. Since the proposed method makes use of only a small proportion of the data at each iteration, it avoids inverting large covariance matrices and thus is scalable to large datasets. The proposed method also leads to a general parameter estimation approach, maximum mean log-likelihood estimation, which includes the popular maximum (log)-likelihood estimation (MLE) approach as a special case and is expected to play an important role in analyzing large datasets. Under mild conditions, it is shown that the estimator resulting from the proposed method converges in probability to a set of parameter values of equivalent Gaussian probability measures, and that the estimator is asymptotically normally distributed. To the best of the authors' knowledge, the present study is the first one on asymptotic normality under infill asymptotics for general covariance functions. The proposed method is illustrated with large datasets, both simulated and real. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  4. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Directory of Open Access Journals (Sweden)

    Yeqing Zhang

    2018-02-01

    Full Text Available For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully.

  5. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Science.gov (United States)

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-01-01

    To substantially decrease the computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and a variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy works on top of conventional acquisition algorithms by resampling the main lobe of the received broadband signal at a much lower frequency. The variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operational flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest to the second-highest correlation result in the search space of carrier frequency and code phase. Computational complexity of signal acquisition is formulated in terms of the numbers of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can decrease computation and time cost by nearly 90–94% with only a slight loss of acquisition sensitivity. With the circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition increases by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301

  6. A steady-State Genetic Algorithm with Resampling for Noisy Inventory Control

    NARCIS (Netherlands)

    Prestwich, S.; Tarim, S.A.; Rossi, R.; Hnich, B.

    2008-01-01

    Noisy fitness functions occur in many practical applications of evolutionary computation. A standard technique for solving these problems is fitness resampling but this may be inefficient or need a large population, and combined with elitism it may overvalue chromosomes or reduce genetic diversity.

  7. RELATIVE ORIENTATION AND MODIFIED PIECEWISE EPIPOLAR RESAMPLING FOR HIGH RESOLUTION SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    K. Gong

    2017-05-01

    Full Text Available High-resolution optical satellite sensors have entered a new era in recent years, as satellite stereo images at half-metre or even 30 cm resolution have become available. High-resolution satellite image data are now commonly used for Digital Surface Model (DSM) generation and 3D reconstruction. The Rational Polynomial Coefficients (RPCs) provided by vendors typically have only rough precision, and no ground control information is available to refine them. We therefore present two relative orientation methods that use corresponding image points only: the first uses quasi ground control information, generated from the corresponding points and the rough RPCs, in a bias-compensation model; the second estimates the relative pointing errors on the matching image and removes them with an affine model. Neither method needs ground control information, and both are applied to the entire image. To obtain very dense point clouds, the Semi-Global Matching (SGM) method is an efficient tool, but epipolar constraints are required before matching can be performed. Satellite images usually have very large dimensions, whereas epipolar geometry generation and image resampling are usually carried out in small tiles. This paper therefore also presents a modified piecewise epipolar resampling method for the entire image without tiling. The quality of the proposed relative orientation and epipolar resampling methods is evaluated, and sub-pixel accuracy is achieved.

  8. On removing interpolation and resampling artifacts in rigid image registration.

    Science.gov (United States)

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

    2013-02-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.

  9. Illustrating, Quantifying, and Correcting for Bias in Post-hoc Analysis of Gene-Based Rare Variant Tests of Association

    Directory of Open Access Journals (Sweden)

    Kelsey E. Grinde

    2017-09-01

    Full Text Available To date, gene-based rare variant testing approaches have focused on aggregating information across sets of variants to maximize statistical power in identifying genes showing significant association with diseases. Beyond identifying genes that are associated with diseases, the identification of causal variant(s) in those genes and estimation of their effect is crucial for planning replication studies and characterizing the genetic architecture of the locus. However, we illustrate that straightforward single-marker association statistics can suffer from substantial bias introduced by conditioning on gene-based test significance, due to the phenomenon often referred to as “winner's curse.” We illustrate the ramifications of this bias on variant effect size estimation and variant prioritization/ranking approaches, outline parameters of genetic architecture that affect this bias, and propose a bootstrap resampling method to correct for this bias. We find that our correction method significantly reduces the bias due to winner's curse (average two-fold decrease in bias, p < 2.2 × 10−6) and, consequently, substantially improves mean squared error and variant prioritization/ranking. The method is particularly helpful in adjustment for winner's curse effects when the initial gene-based test has low power and for relatively more common, non-causal variants. Adjustment for winner's curse is recommended for all post-hoc estimation and ranking of variants after a gene-based test. Further work is necessary to continue seeking ways to reduce bias and improve inference in post-hoc analysis of gene-based tests under a wide variety of genetic architectures.

  10. A novel approach for epipolar resampling of cross-track linear pushbroom imagery using orbital parameters model

    Science.gov (United States)

    Jannati, Mojtaba; Valadan Zoej, Mohammad Javad; Mokhtarzade, Mehdi

    2018-03-01

    This paper presents a novel approach to epipolar resampling of cross-track linear pushbroom imagery using the orbital parameters model (OPM). The backbone of the proposed method relies on modifying the attitude parameters of linear array stereo imagery in such a way as to parallelize the approximate conjugate epipolar lines (ACELs) with the instantaneous base line (IBL) of the conjugate image points (CIPs). Afterward, a complementary rotation is applied in order to parallelize all the ACELs throughout the stereo imagery. The newly estimated attitude parameters are evaluated based on the direction of the IBL and the ACELs. Due to the spatial and temporal variability of the IBL (corresponding to changes in the column and row numbers of the CIPs, respectively) and the nonparallel nature of the epipolar lines in stereo linear images, polynomials in both the column and row numbers of the CIPs are used to model the new attitude parameters. As the instantaneous position of the sensors remains fixed, a digital elevation model (DEM) of the area of interest is not required in the resampling process. According to the experimental results obtained from two pairs of SPOT and RapidEye stereo imagery over high elevation relief, the average absolute values of the remaining vertical parallaxes of CIPs in the normalized images were 0.19 and 0.28 pixels respectively, which confirms the high accuracy and applicability of the proposed method.

  11. Inferring microevolution from museum collections and resampling: lessons learned from Cepaea

    Directory of Open Access Journals (Sweden)

    Małgorzata Ożgo

    2017-10-01

    Full Text Available Natural history collections are an important and largely untapped source of long-term data on evolutionary changes in wild populations. Here, we utilize three large geo-referenced sets of samples of the common European land-snail Cepaea nemoralis stored in the collection of Naturalis Biodiversity Center in Leiden, the Netherlands. Resampling of these populations allowed us to gain insight into changes occurring over 95, 69, and 50 years. Cepaea nemoralis is polymorphic for the colour and banding of the shell; the mode of inheritance of these patterns is known, and the polymorphism is under both thermal and predatory selection. At two sites the general direction of changes was towards lighter shells (yellow and less heavily banded, which is consistent with predictions based on on-going climatic change. At one site no directional changes were detected. At all sites there were significant shifts in morph frequencies between years, and our study contributes to the recognition that short-term changes in the states of populations often exceed long-term trends. Our interpretation was limited by the few time points available in the studied collections. We therefore stress the need for natural history collections to routinely collect large samples of common species, to allow much more reliable hind-casting of evolutionary responses to environmental change.

  12. Estimating significances of differences between slopes: A new methodology and software

    Directory of Open Access Journals (Sweden)

    Vasco M. N. C. S. Vieira

    2013-09-01

    Full Text Available Determining the significance of slope differences is a common requirement in studies of self-thinning, ontogeny and sexual dimorphism, among others. This has long been carried out by testing for the overlap of the bootstrapped 95% confidence intervals of the slopes. However, the numerical random re-sampling with repetition favours the occurrence of re-combinations yielding largely diverging slopes, widening the confidence intervals and thus increasing the chances of overlooking significant differences. To overcome this problem, a permutation test simulating the null hypothesis of no differences between slopes is proposed. This new methodology, when applied to both artificial and factual data, showed an enhanced ability to differentiate slopes.
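
    A minimal sketch of a permutation test of this kind is given below, assuming ordinary least-squares slopes and a shuffle of group labels as the device for simulating the null hypothesis of no slope difference; the published algorithm may differ in its details.

```python
import numpy as np

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

def permutation_slope_test(x1, y1, x2, y2, n_perm=9999, seed=0):
    """Permutation test of H0: equal slopes in two groups.  Group labels of
    the (x, y) pairs are shuffled and the absolute slope difference is
    recomputed to build the null distribution (a sketch, not the published
    implementation)."""
    rng = np.random.default_rng(seed)
    x = np.concatenate([x1, x2])
    y = np.concatenate([y1, y2])
    n1 = len(x1)
    observed = abs(slope(x1, y1) - slope(x2, y2))
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(x))
        d = abs(slope(x[idx[:n1]], y[idx[:n1]]) - slope(x[idx[n1:]], y[idx[n1:]]))
        exceed += d >= observed
    return observed, (exceed + 1) / (n_perm + 1)
```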

  13. On uniform resampling and gaze analysis of bidirectional texture functions

    Czech Academy of Sciences Publication Activity Database

    Filip, Jiří; Chantler, M.J.; Haindl, Michal

    2009-01-01

    Vol. 6, No. 3 (2009), pp. 1-15. ISSN 1544-3558. R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593. Grant (other): EC Marie Curie (BE) 41358. Institutional research plan: CEZ:AV0Z10750506. Keywords: BTF; texture; eye tracking. Subject RIV: BD - Theory of Information. Impact factor: 1.447, year: 2009. http://library.utia.cas.cz/separaty/2009/RO/haindl-on uniform resampling and gaze analysis of bidirectional texture functions.pdf

  14. The efficiency of average linkage hierarchical clustering algorithm associated multi-scale bootstrap resampling in identifying homogeneous precipitation catchments

    Science.gov (United States)

    Chuan, Zun Liang; Ismail, Noriszura; Shinyie, Wendy Ling; Lit Ken, Tan; Fam, Soo-Fen; Senawi, Azlyna; Yusoff, Wan Nur Syahidah Wan

    2018-04-01

    Due to the limited availability of historical precipitation records, agglomerative hierarchical clustering algorithms are widely used to extrapolate information from gauged to ungauged precipitation catchments, yielding more reliable projections of extreme hydro-meteorological events such as extreme precipitation. However, accurately identifying the optimum number of homogeneous precipitation catchments from the dendrogram produced by agglomerative hierarchical algorithms is very subjective. The main objective of this study is to propose an efficient regionalization algorithm to identify homogeneous precipitation catchments for non-stationary precipitation time series. The homogeneous precipitation catchments are identified using the average linkage hierarchical clustering algorithm combined with multi-scale bootstrap resampling, with the uncentered correlation coefficient as the similarity measure. The regionalized homogeneous precipitation catchments are consolidated using the K-sample Anderson-Darling non-parametric test. The analysis results show that the proposed regionalized algorithm performs better than the agglomerative hierarchical clustering algorithms proposed in previous studies.

  15. A Non-Parametric Surrogate-based Test of Significance for T-Wave Alternans Detection

    Science.gov (United States)

    Nemati, Shamim; Abdala, Omar; Bazán, Violeta; Yim-Yeh, Susie; Malhotra, Atul; Clifford, Gari

    2010-01-01

    We present a non-parametric adaptive surrogate test that allows for the differentiation of statistically significant T-Wave Alternans (TWA) from alternating patterns that can be solely explained by the statistics of noise. The proposed test is based on estimating the distribution of noise induced alternating patterns in a beat sequence from a set of surrogate data derived from repeated reshuffling of the original beat sequence. Thus, in assessing the significance of the observed alternating patterns in the data no assumptions are made about the underlying noise distribution. In addition, since the distribution of noise-induced alternans magnitudes is calculated separately for each sequence of beats within the analysis window, the method is robust to data non-stationarities in both noise and TWA. The proposed surrogate method for rejecting noise was compared to the standard noise rejection methods used with the Spectral Method (SM) and the Modified Moving Average (MMA) techniques. Using a previously described realistic multi-lead model of TWA, and real physiological noise, we demonstrate the proposed approach reduces false TWA detections, while maintaining a lower missed TWA detection compared with all the other methods tested. A simple averaging-based TWA estimation algorithm was coupled with the surrogate significance testing and was evaluated on three public databases; the Normal Sinus Rhythm Database (NRSDB), the Chronic Heart Failure Database (CHFDB) and the Sudden Cardiac Death Database (SCDDB). Differences in TWA amplitudes between each database were evaluated at matched heart rate (HR) intervals from 40 to 120 beats per minute (BPM). Using the two-sample Kolmogorov-Smirnov test, we found that significant differences in TWA levels exist between each patient group at all decades of heart rates. The most marked difference was generally found at higher heart rates, and the new technique resulted in a larger margin of separability between patient populations than
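
    The following is a toy, univariate illustration of the surrogate idea described above: the observed alternating amplitude is compared with the distribution obtained by reshuffling the beat sequence, so no noise model is assumed. The amplitude estimator and the single-lead setting are simplifying assumptions, not the published multi-lead estimator.

```python
import numpy as np

def alternans_amplitude(beats):
    """Mean absolute difference between even- and odd-indexed beats, a
    simple stand-in for a T-wave alternans magnitude estimate."""
    beats = np.asarray(beats, float)
    return abs(beats[0::2].mean() - beats[1::2].mean())

def surrogate_alternans_test(beats, n_surrogates=2000, seed=0):
    """Non-parametric surrogate test: compare the observed alternans
    magnitude with the distribution obtained from repeated reshuffling of
    the beat sequence, so no assumption is made about the noise
    distribution.  Toy univariate sketch only."""
    rng = np.random.default_rng(seed)
    observed = alternans_amplitude(beats)
    null = np.array([alternans_amplitude(rng.permutation(beats))
                     for _ in range(n_surrogates)])
    p_value = (np.sum(null >= observed) + 1) / (n_surrogates + 1)
    return observed, p_value
```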

  16. A NONPARAMETRIC HYPOTHESIS TEST VIA THE BOOTSTRAP RESAMPLING

    OpenAIRE

    Temel, Tugrul T.

    2001-01-01

    This paper adapts an existing nonparametric hypothesis test to the bootstrap framework. The test utilizes the nonparametric kernel regression method to estimate a measure of distance between the models stated under the null hypothesis. The bootstrapped version of the test makes it possible to approximate the errors involved in the asymptotic hypothesis test. The paper also develops Mathematica code for the test algorithm.

  17. Testing Significance Testing

    Directory of Open Access Journals (Sweden)

    Joachim I. Krueger

    2018-04-01

    Full Text Available The practice of Significance Testing (ST remains widespread in psychological science despite continual criticism of its flaws and abuses. Using simulation experiments, we address four concerns about ST and for two of these we compare ST’s performance with prominent alternatives. We find the following: First, the 'p' values delivered by ST predict the posterior probability of the tested hypothesis well under many research conditions. Second, low 'p' values support inductive inferences because they are most likely to occur when the tested hypothesis is false. Third, 'p' values track likelihood ratios without raising the uncertainties of relative inference. Fourth, 'p' values predict the replicability of research findings better than confidence intervals do. Given these results, we conclude that 'p' values may be used judiciously as a heuristic tool for inductive inference. Yet, 'p' values cannot bear the full burden of inference. We encourage researchers to be flexible in their selection and use of statistical methods.

  18. The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.

    Science.gov (United States)

    Rodgers, J L

    1999-10-01

    A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
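
    The taxonomy can be made concrete with the following sketch, which contrasts the three schemes along the two axes described above (with versus without replacement, whole sample versus subset); the statistic and sample sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap(sample, stat, n_rep=2000):
    """Resample WITH replacement, each replicate the size of the whole sample."""
    n = len(sample)
    return np.array([stat(rng.choice(sample, n, replace=True)) for _ in range(n_rep)])

def jackknife(sample, stat):
    """Resample WITHOUT replacement by leaving one observation out at a time."""
    n = len(sample)
    return np.array([stat(np.delete(sample, i)) for i in range(n)])

def randomization_test(x, y, stat, n_rep=2000):
    """Resample WITHOUT replacement by permuting group labels of the pooled data."""
    pooled = np.concatenate([x, y])
    n1 = len(x)
    obs = stat(x) - stat(y)
    null = np.array([stat(p[:n1]) - stat(p[n1:])
                     for p in (rng.permutation(pooled) for _ in range(n_rep))])
    p_value = (np.sum(np.abs(null) >= abs(obs)) + 1) / (n_rep + 1)
    return obs, p_value
```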

  19. Resampling nucleotide sequences with closest-neighbor trimming and its comparison to other methods.

    Directory of Open Access Journals (Sweden)

    Kouki Yonezawa

    Full Text Available A large number of nucleotide sequences of various pathogens are available in public databases. The growth of the datasets has resulted in an enormous increase in computational costs. Moreover, due to differences in surveillance activities, the number of sequences found in databases varies from one country to another and from year to year. Therefore, it is important to study resampling methods to reduce the sampling bias. A novel algorithm, called the closest-neighbor trimming method, that resamples a given number of sequences from a large nucleotide sequence dataset is proposed. The performance of the proposed algorithm was compared with other algorithms using the nucleotide sequences of human H3N2 influenza viruses. We compared the closest-neighbor trimming method with the naive hierarchical clustering algorithm and the k-medoids clustering algorithm. Genetic information accumulated in public databases contains sampling bias. The closest-neighbor trimming method can thin out densely sampled sequences from a given dataset. Since nucleotide sequences are among the most widely used materials in the life sciences, we anticipate that applying our algorithm to various datasets will help reduce sampling bias.
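
    A simplified sketch of the trimming idea is shown below, assuming a precomputed pairwise-distance matrix and a particular tie-breaking rule for which member of the closest pair to discard; the published algorithm may make these choices differently.

```python
import numpy as np

def closest_neighbor_trim(dist, n_keep):
    """Thin a dataset down to n_keep items by repeatedly discarding one
    member of the current closest pair, so that densely sampled regions are
    trimmed first.  `dist` is a symmetric pairwise-distance matrix (e.g.
    nucleotide distances).  Sketch only; O(n^3) and not optimized."""
    d = dist.astype(float)
    np.fill_diagonal(d, np.inf)
    keep = list(range(len(d)))
    while len(keep) > n_keep:
        sub = d[np.ix_(keep, keep)]
        i, j = np.unravel_index(np.argmin(sub), sub.shape)
        # drop the member of the closest pair whose second-nearest neighbour
        # is also closer, i.e. the one sitting in the denser region
        if np.partition(sub[i], 1)[1] <= np.partition(sub[j], 1)[1]:
            keep.remove(keep[i])
        else:
            keep.remove(keep[j])
    return keep   # indices of the retained sequences
```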

  20. Significance levels for studies with correlated test statistics.

    Science.gov (United States)

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.

  1. Wayside Bearing Fault Diagnosis Based on Envelope Analysis Paved with Time-Domain Interpolation Resampling and Weighted-Correlation-Coefficient-Guided Stochastic Resonance

    Directory of Open Access Journals (Sweden)

    Yongbin Liu

    2017-01-01

    Full Text Available Envelope spectrum analysis is a simple, effective, and classic method for bearing fault identification. However, in a wayside acoustic health monitoring system, owing to the high relative moving speed between the railway vehicle and the wayside-mounted microphone, the recorded signal is affected by the Doppler effect, which shifts and expands the bearing fault characteristic frequency (FCF). Moreover, the background noise is relatively heavy, which makes it difficult to identify the FCF. To solve these two problems, this study introduces solutions for the wayside acoustic fault diagnosis of train bearings based on Doppler effect reduction using an improved time-domain interpolation resampling (TIR) method and diagnosis-relevant information enhancement using Weighted-Correlation-Coefficient-Guided Stochastic Resonance (WCCSR). First, the traditional TIR method is improved by combining the original method with kinematic parameter estimation based on time-frequency analysis and curve fitting. Based on the estimated parameters, the Doppler effect is easily removed using TIR. Second, WCCSR is employed to enhance the diagnosis-relevant periodic signal component in the obtained Doppler-free signal. Finally, building on the above two procedures, the local fault is identified using envelope spectrum analysis. Simulated and experimental cases have verified the effectiveness of the proposed method.

  2. Use of a 137Cs re-sampling technique to investigate temporal changes in soil erosion and sediment mobilisation for a small forested catchment in southern Italy

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.; Alewell, Christine; Callegari, Giovanni; Mabit, Lionel; Mallimo, Nicola; Meusburger, Katrin; Zehringer, Markus

    2014-01-01

    Soil erosion and both its on-site and off-site impacts are increasingly seen as a serious environmental problem across the world. The need for an improved evidence base on soil loss and soil redistribution rates has directed attention to the use of fallout radionuclides, and particularly 137 Cs, for documenting soil redistribution rates. This approach possesses important advantages over more traditional means of documenting soil erosion and soil redistribution. However, one key limitation of the approach is the time-averaged or lumped nature of the estimated erosion rates. In nearly all cases, these will relate to the period extending from the main period of bomb fallout to the time of sampling. Increasing concern for the impact of global change, particularly that related to changing land use and climate change, has frequently directed attention to the need to document changes in soil redistribution rates within this period. Re-sampling techniques, which should be distinguished from repeat-sampling techniques, have the potential to meet this requirement. As an example, the use of a re-sampling technique to derive estimates of the mean annual net soil loss from a small (1.38 ha) forested catchment in southern Italy is reported. The catchment was originally sampled in 1998 and samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimate of mean annual erosion for the period 1954–1998 with that for the period 1999–2013. The availability of measurements of sediment yield from the catchment for parts of the overall period made it possible to compare the results provided by the 137 Cs re-sampling study with the estimates of sediment yield for the same periods. In order to compare the estimates of soil loss and sediment yield for the two different periods, it was necessary to establish the uncertainty associated with the individual estimates. In the absence of a generally accepted procedure

  3. A New Method to Implement Resampled Uniform PWM Suitable for Distributed Control of Modular Multilevel Converters

    DEFF Research Database (Denmark)

    Huang, Shaojun; Mathe, Laszlo; Teodorescu, Remus

    2013-01-01

    Two existing methods to implement the resampling modulation technique for the modular multilevel converter (MMC), in which the sampling frequency is a multiple of the carrier frequency, are the software solution (using a microcontroller) and the hardware solution (using an FPGA). The former has a certain level

  4. Groundwater-quality data in seven GAMA study units: results from initial sampling, 2004-2005, and resampling, 2007-2008, of wells: California GAMA Program Priority Basin Project

    Science.gov (United States)

    Kent, Robert; Belitz, Kenneth; Fram, Miranda S.

    2014-01-01

    The Priority Basin Project (PBP) of the Groundwater Ambient Monitoring and Assessment (GAMA) Program was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The GAMA-PBP began sampling, primarily public supply wells in May 2004. By the end of February 2006, seven (of what would eventually be 35) study units had been sampled over a wide area of the State. Selected wells in these first seven study units were resampled for water quality from August 2007 to November 2008 as part of an assessment of temporal trends in water quality by the GAMA-PBP. The initial sampling was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within the seven study units. In the 7 study units, 462 wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study area. Wells selected this way are referred to as grid wells or status wells. Approximately 3 years after the initial sampling, 55 of these previously sampled status wells (approximately 10 percent in each study unit) were randomly selected for resampling. The seven resampled study units, the total number of status wells sampled for each study unit, and the number of these wells resampled for trends are as follows, in chronological order of sampling: San Diego Drainages (53 status wells, 7 trend wells), North San Francisco Bay (84, 10), Northern San Joaquin Basin (51, 5), Southern Sacramento Valley (67, 7), San Fernando–San Gabriel (35, 6), Monterey Bay and Salinas Valley Basins (91, 11), and Southeast San Joaquin Valley (83, 9). The groundwater samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], pesticides, and pesticide degradates), constituents of special interest (perchlorate, N

  5. Cross wavelet analysis: significance testing and pitfalls

    Directory of Open Access Journals (Sweden)

    D. Maraun

    2004-01-01

    Full Text Available In this paper, we present a detailed evaluation of cross wavelet analysis of bivariate time series. We develop a statistical test for zero wavelet coherency based on Monte Carlo simulations. If at least one of the two processes considered is Gaussian white noise, an approximate formula for the critical value can be utilized. In the second part, typical pitfalls of wavelet cross spectra and wavelet coherency are discussed. The wavelet cross spectrum appears to be unsuitable for testing the significance of the interrelation between two processes; instead, one should apply wavelet coherency. Furthermore, we investigate problems due to multiple testing. Based on these results, we show that coherency between ENSO and NAO is an artefact for most of the time from 1900 to 1995. However, during a distinct period from around 1920 to 1940, significant coherency between the two phenomena occurs.

  6. A practitioners guide to resampling for data analysis, data mining, and modeling: A cookbook for starters

    NARCIS (Netherlands)

    van den Broek, Egon

    A practitioner’s guide to resampling for data analysis, data mining, and modeling provides a gentle and pragmatic introduction to the proposed topics. Its supporting Web site was offline and, hence, its potential added value could not be verified. The book refrains from using advanced mathematics

  7. Significance of the impact of motion compensation on the variability of PET image features

    Science.gov (United States)

    Carles, M.; Bach, T.; Torres-Espallardo, I.; Baltas, D.; Nestle, U.; Martí-Bonmatí, L.

    2018-03-01

    In lung cancer, quantification by positron emission tomography/computed tomography (PET/CT) imaging presents challenges due to respiratory movement. Our primary aim was to study the impact of motion compensation implied by retrospectively gated (4D)-PET/CT on the variability of PET quantitative parameters. Its significance was evaluated by comparison with the variability due to (i) the voxel size in image reconstruction and (ii) the voxel size in image post-resampling. The method employed for feature extraction was chosen based on the analysis of (i) the effect of discretization of the standardized uptake value (SUV) on complementarity between texture features (TF) and conventional indices, (ii) the impact of the segmentation method on the variability of image features, and (iii) the variability of image features across the time-frame of 4D-PET. Thirty-one PET-features were involved. Three SUV discretization methods were applied: a constant width (SUV resolution) of the resampling bin (method RW), a constant number of bins (method RN) and RN on the image obtained after histogram equalization (method EqRN). The segmentation approaches evaluated were 40% of SUVmax and the contrast oriented algorithm (COA). Parameters derived from 4D-PET images were compared with values derived from the PET image obtained for (i) the static protocol used in our clinical routine (3D) and (ii) the 3D image post-resampled to the voxel size of the 4D image and PET image derived after modifying the reconstruction of the 3D image to comprise the voxel size of the 4D image. Results showed that TF complementarity with conventional indices was sensitive to the SUV discretization method. In the comparison of COA and 40% contours, despite the values not being interchangeable, all image features showed strong linear correlations (r > 0.91, p ≪ 0.001). Across the time-frames of 4D-PET, all image features followed a normal distribution in most patients. For our patient cohort, the

  8. A PLL-based resampling technique for vibration analysis in variable-speed wind turbines with PMSG: A bearing fault case

    Science.gov (United States)

    Pezzani, Carlos M.; Bossio, José M.; Castellino, Ariel M.; Bossio, Guillermo R.; De Angelo, Cristian H.

    2017-02-01

    Condition monitoring in permanent magnet synchronous machines has gained interest due to their increasing use in applications such as electric traction and power generation. Particularly in wind power generation, non-invasive condition monitoring techniques are of great importance. Usually, in such applications the access to the generator is complex and costly, while unexpected breakdowns result in high repair costs. This paper presents a technique that allows vibration analysis to be used for bearing fault detection in permanent magnet synchronous generators used in wind turbines. Given that in wind power applications the generator rotational speed may vary during normal operation, it is necessary to use special sampling techniques to apply spectral analysis of mechanical vibrations. In this work, a resampling technique based on order tracking without measuring the rotor position is proposed. To synchronize sampling with rotor position, an estimation of the rotor position obtained from the angle of the voltage vector is proposed. This angle is obtained from a phase-locked loop synchronized with the generator voltages. The proposed strategy is validated by laboratory experimental results obtained from a permanent magnet synchronous generator. Results with single point defects in the outer race of a bearing under variable speed and load conditions are presented.
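
    The order-tracking step can be sketched as below, assuming an unwrapped rotor-angle estimate is already available (in the paper it comes from a phase-locked loop on the generator voltages); the resampled signal is then suitable for ordinary envelope or spectral analysis.

```python
import numpy as np

def angular_resample(signal, t, theta, samples_per_rev=256):
    """Order-tracking resampling: given a vibration signal sampled at times
    `t` and an estimate of the unwrapped rotor angle `theta` at those times
    (assumed monotonically increasing), interpolate the signal onto a
    uniform angular grid so that spectral analysis sees constant shaft
    orders even under variable speed.  Illustrative sketch only."""
    n_rev = int(np.floor((theta[-1] - theta[0]) / (2 * np.pi)))
    theta_uniform = theta[0] + np.arange(n_rev * samples_per_rev) * 2 * np.pi / samples_per_rev
    t_uniform = np.interp(theta_uniform, theta, t)   # time at each target angle
    return np.interp(t_uniform, t, signal)           # signal on the uniform angular grid
```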

  9. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

    This article raises concerns about the advantages of using statistical significance tests in research assessments, as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice … We argue that applying statistical significance tests and mechanically adhering to their results are highly problematic and detrimental to critical thinking. We claim that the use of such tests does not provide any advantages in relation to deciding whether differences between citation indicators …

  10. Evaluation of resampling applied to UAV imagery for weed detection using OBIA

    OpenAIRE

    Borra, I.; Peña Barragán, José Manuel; Torres Sánchez, Jorge; López Granados, Francisca

    2015-01-01

    Unmanned aerial vehicles (UAVs) are an emerging technology for the study of agricultural parameters, owing to their characteristics and their capacity to carry sensors covering different spectral ranges. In this work, weed patches were detected and mapped at an early phenological stage by means of OBIA analysis, in order to produce maps that optimise site-specific herbicide treatment. Resampling was applied to images taken in the field from a UAV (UAV-I) to create a new image with differ…

  11. Methods of soil resampling to monitor changes in the chemical concentrations of forest soils

    Science.gov (United States)

    Lawrence, Gregory B.; Fernandez, Ivan J.; Hazlett, Paul W.; Bailey, Scott W.; Ross, Donald S.; Villars, Thomas R.; Quintana, Angelica; Ouimet, Rock; McHale, Michael; Johnson, Chris E.; Briggs, Russell D.; Colter, Robert A.; Siemion, Jason; Bartlett, Olivia L.; Vargas, Olga; Antidormi, Michael; Koppers, Mary Margaret

    2016-01-01

    Recent soils research has shown that important chemical soil characteristics can change in less than a decade, often the result of broad environmental changes. Repeated sampling to monitor these changes in forest soils is a relatively new practice that is not well documented in the literature and has only recently been broadly embraced by the scientific community. The objective of this protocol is therefore to synthesize the latest information on methods of soil resampling in a format that can be used to design and implement a soil monitoring program. Successful monitoring of forest soils requires that a study unit be defined within an area of forested land that can be characterized with replicate sampling locations. A resampling interval of 5 years is recommended, but if monitoring is done to evaluate a specific environmental driver, the rate of change expected in that driver should be taken into consideration. Here, we show that the sampling of the profile can be done by horizon where boundaries can be clearly identified and horizons are sufficiently thick to remove soil without contamination from horizons above or below. Otherwise, sampling can be done by depth interval. Archiving of samples for future reanalysis is a key step in avoiding analytical bias and providing the opportunity for additional analyses as new questions arise.

  12. Methods of Soil Resampling to Monitor Changes in the Chemical Concentrations of Forest Soils.

    Science.gov (United States)

    Lawrence, Gregory B; Fernandez, Ivan J; Hazlett, Paul W; Bailey, Scott W; Ross, Donald S; Villars, Thomas R; Quintana, Angelica; Ouimet, Rock; McHale, Michael R; Johnson, Chris E; Briggs, Russell D; Colter, Robert A; Siemion, Jason; Bartlett, Olivia L; Vargas, Olga; Antidormi, Michael R; Koppers, Mary M

    2016-11-25

    Recent soils research has shown that important chemical soil characteristics can change in less than a decade, often the result of broad environmental changes. Repeated sampling to monitor these changes in forest soils is a relatively new practice that is not well documented in the literature and has only recently been broadly embraced by the scientific community. The objective of this protocol is therefore to synthesize the latest information on methods of soil resampling in a format that can be used to design and implement a soil monitoring program. Successful monitoring of forest soils requires that a study unit be defined within an area of forested land that can be characterized with replicate sampling locations. A resampling interval of 5 years is recommended, but if monitoring is done to evaluate a specific environmental driver, the rate of change expected in that driver should be taken into consideration. Here, we show that the sampling of the profile can be done by horizon where boundaries can be clearly identified and horizons are sufficiently thick to remove soil without contamination from horizons above or below. Otherwise, sampling can be done by depth interval. Archiving of samples for future reanalysis is a key step in avoiding analytical bias and providing the opportunity for additional analyses as new questions arise.

  13. A shift from significance test to hypothesis test through power analysis in medical research.

    Science.gov (United States)

    Singh, G

    2006-01-01

    Until recently, the medical research literature exhibited substantial dominance of Fisher's significance test approach to statistical inference, which concentrates on the probability of a type I error, over the Neyman-Pearson hypothesis test, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or not significant with a P value. The Neyman-Pearson approach talks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches address the same objective and conclude in their own ways. Advances in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis test procedure.

  14. Testing for marginal linear effects in quantile regression

    KAUST Repository

    Wang, Huixia Judy

    2017-10-23

    The paper develops a new marginal testing procedure to detect significant predictors that are associated with the conditional quantiles of a scalar response. The idea is to fit the marginal quantile regression on each predictor one at a time, and then to base the test on the t-statistics that are associated with the most predictive predictors. A resampling method is devised to calibrate this test statistic, which has non-regular limiting behaviour due to the selection of the most predictive variables. Asymptotic validity of the procedure is established in a general quantile regression setting in which the marginal quantile regression models can be misspecified. Even though a fixed dimension is assumed to derive the asymptotic results, the test proposed is applicable and computationally feasible for large dimensional predictors. The method is more flexible than existing marginal screening test methods based on mean regression and has the added advantage of being robust against outliers in the response. The approach is illustrated by using an application to a human immunodeficiency virus drug resistance data set.
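
    A rough sketch of the marginal screening idea is given below, assuming statsmodels' QuantReg for the per-predictor fits and a simple permutation of the response as the calibration device; the paper's resampling calibration of the non-regular maximum t-statistic is more refined.

```python
import numpy as np
import statsmodels.api as sm

def marginal_quantile_screen(X, y, tau=0.5, n_resample=200, seed=0):
    """Marginal quantile-regression screening sketch: fit a quantile
    regression of y on each predictor one at a time, take the largest
    absolute t-statistic, and calibrate it by resampling.  X is an (n, p)
    ndarray; the permutation-based calibration is a simplification of the
    published procedure."""
    rng = np.random.default_rng(seed)

    def max_abs_t(response):
        ts = []
        for j in range(X.shape[1]):
            design = sm.add_constant(X[:, j])            # intercept + one predictor
            fit = sm.QuantReg(response, design).fit(q=tau)
            ts.append(abs(fit.tvalues[1]))
        return max(ts)

    observed = max_abs_t(y)
    null = np.array([max_abs_t(rng.permutation(y)) for _ in range(n_resample)])
    p_value = (np.sum(null >= observed) + 1) / (n_resample + 1)
    return observed, p_value
```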

  15. Testing for marginal linear effects in quantile regression

    KAUST Repository

    Wang, Huixia Judy; McKeague, Ian W.; Qian, Min

    2017-01-01

    The paper develops a new marginal testing procedure to detect significant predictors that are associated with the conditional quantiles of a scalar response. The idea is to fit the marginal quantile regression on each predictor one at a time, and then to base the test on the t-statistics that are associated with the most predictive predictors. A resampling method is devised to calibrate this test statistic, which has non-regular limiting behaviour due to the selection of the most predictive variables. Asymptotic validity of the procedure is established in a general quantile regression setting in which the marginal quantile regression models can be misspecified. Even though a fixed dimension is assumed to derive the asymptotic results, the test proposed is applicable and computationally feasible for large dimensional predictors. The method is more flexible than existing marginal screening test methods based on mean regression and has the added advantage of being robust against outliers in the response. The approach is illustrated by using an application to a human immunodeficiency virus drug resistance data set.

  16. Community level patterns in diverse systems: A case study of litter fauna in a Mexican pine-oak forest using higher taxa surrogates and re-sampling methods

    Science.gov (United States)

    Moreno, Claudia E.; Guevara, Roger; Sánchez-Rojas, Gerardo; Téllez, Dianeis; Verdú, José R.

    2008-01-01

    Environmental assessment at the community level in highly diverse ecosystems is limited by taxonomic constraints and statistical methods requiring true replicates. Our objective was to show how diverse systems can be studied at the community level using higher taxa as biodiversity surrogates, and re-sampling methods to allow comparisons. To illustrate this we compared the abundance, richness, evenness and diversity of the litter fauna in a pine-oak forest in central Mexico among seasons, sites and collecting methods. We also assessed changes in the abundance of trophic guilds and evaluated the relationships between community parameters and litter attributes. With the direct search method we observed differences in the rate of taxa accumulation between sites. Bootstrap analysis showed that abundance varied significantly between seasons and sampling methods, but not between sites. In contrast, diversity and evenness were significantly higher at the managed than at the non-managed site. Tree regression models show that abundance varied mainly between seasons, whereas taxa richness was affected by litter attributes (composition and moisture content). The abundance of trophic guilds varied among methods and seasons, but overall we found that parasitoids, predators and detrivores decreased under management. Therefore, although our results suggest that management has positive effects on the richness and diversity of litter fauna, the analysis of trophic guilds revealed a contrasting story. Our results indicate that functional groups and re-sampling methods may be used as tools for describing community patterns in highly diverse systems. Also, the higher taxa surrogacy could be seen as a preliminary approach when it is not possible to identify the specimens at a low taxonomic level in a reasonable period of time and in a context of limited financial resources, but further studies are needed to test whether the results are specific to a system or whether they are general

  17. Speckle reduction in digital holography with resampling ring masks

    Science.gov (United States)

    Zhang, Wenhui; Cao, Liangcai; Jin, Guofan

    2018-01-01

    One-shot digital holographic imaging has the advantages of high stability and low temporal cost. However, the reconstruction is affected by speckle noise. A resampling ring-mask method in the spectrum domain is proposed for speckle reduction. The useful spectrum of one hologram is divided into several sub-spectra by ring masks. In the reconstruction, the angular spectrum transform, which involves no approximation, is applied to guarantee calculation accuracy. N reconstructed amplitude images are calculated from the corresponding sub-spectra. Thanks to the random distribution of speckle, superimposing these N uncorrelated amplitude images leads to a final reconstructed image with lower speckle noise. Normalized relative standard deviation values of the reconstructed image are used to evaluate the reduction of speckle. The effect of the method on the spatial resolution of the reconstructed image is also quantitatively evaluated. Experimental and simulation results prove the feasibility and effectiveness of the proposed method.
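
    The ring-mask averaging can be sketched as follows, assuming the complex reconstructed field is already available (the angular-spectrum propagation step is omitted) and that the rings partition the spectrum evenly; these are simplifications of the published pipeline.

```python
import numpy as np

def ring_mask_average(field, n_rings=4):
    """Toy spectral ring-mask averaging for speckle reduction: split the
    spectrum of a complex field into concentric rings, reconstruct an
    amplitude image from each sub-spectrum, and average the uncorrelated
    amplitudes.  Sketch only; real pipelines propagate the recorded
    hologram with the angular spectrum method first."""
    F = np.fft.fftshift(np.fft.fft2(field))
    ny, nx = F.shape
    yy, xx = np.mgrid[:ny, :nx]
    r = np.hypot(yy - ny / 2, xx - nx / 2)
    edges = np.linspace(0, r.max() + 1, n_rings + 1)
    amplitudes = []
    for k in range(n_rings):
        mask = (r >= edges[k]) & (r < edges[k + 1])
        sub = np.where(mask, F, 0)                     # keep one ring of the spectrum
        amplitudes.append(np.abs(np.fft.ifft2(np.fft.ifftshift(sub))))
    return np.mean(amplitudes, axis=0)                 # speckle-reduced amplitude image
```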

  18. A shift from significance test to hypothesis test through power analysis in medical research

    Directory of Open Access Journals (Sweden)

    Singh Girish

    2006-01-01

    Full Text Available Until recently, the medical research literature exhibited substantial dominance of Fisher's significance test approach to statistical inference, which concentrates on the probability of a type I error, over the Neyman-Pearson hypothesis test, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or not significant with a P value. The Neyman-Pearson approach talks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches address the same objective and conclude in their own ways. Advances in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis test procedure.

  19. Hardware Architecture of Polyphase Filter Banks Performing Embedded Resampling for Software-Defined Radio Front-Ends

    DEFF Research Database (Denmark)

    Awan, Mehmood-Ur-Rehman; Le Moullec, Yannick; Koch, Peter

    2012-01-01

    In this paper, we describe resource-efficient hardware architectures for software-defined radio (SDR) front-ends. These architectures are made efficient by using a polyphase channelizer that performs arbitrary sample rate changes, frequency selection, and bandwidth control. We discuss area, time, and power optimization for field programmable gate array (FPGA) based architectures in an M-path polyphase filter bank with a modified N-path polyphase filter. Such systems allow resampling by arbitrary ratios while simultaneously performing baseband aliasing from center frequencies at Nyquist zones that are not multiples of the output sample rate. A non-maximally decimated polyphase filter bank, where the number of data loads is not equal to the number of M subfilters, processes M subfilters in a time period that is either less than or greater than the M data-load’s time period. We present a load

  20. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    Science.gov (United States)

    2013-01-01

    Background: In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This process involves multiple testing problems and requires correction of the significance level. Methods: For each coding, a test of the nullity of the coefficient associated with the newly coded variable is computed. The selected coding corresponds to that associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probability Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels, which by definition involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results: The simulations we ran in this study showed good performance of the proposed methods. The methods are illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion: The algorithms were implemented in R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
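
    The resampling correction can be illustrated with the max-statistic permutation sketch below, assuming a plain linear model with a Wald statistic and dichotomous cut-point codings; the published procedures (implemented in the CPMCGLM package) cover the GLM and other transformation families.

```python
import numpy as np

def wald_stat(x_coded, y):
    """Wald-type statistic for the coded variable in a simple linear model
    (a stand-in for the GLM score test used in the paper)."""
    X = np.column_stack([np.ones(len(y)), x_coded])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] ** 2 / cov[1, 1]

def corrected_pvalue(x, y, cutpoints, n_perm=2000, seed=0):
    """Refer the observed maximum statistic over candidate codings to the
    permutation distribution of that maximum, which accounts for the
    multiple, correlated tests.  Assumes each cut-point splits the data
    non-trivially; sketch only."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    y = np.asarray(y, float)

    def max_stat(yy):
        return max(wald_stat((x > c).astype(float), yy) for c in cutpoints)

    observed = max_stat(y)
    null = np.array([max_stat(rng.permutation(y)) for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
```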

  1. OPATs: Omnibus P-value association tests.

    Science.gov (United States)

    Chen, Chia-Wei; Yang, Hsin-Chou

    2017-07-10

    Combining statistical significances (P-values) from a set of single-locus association tests in genome-wide association studies is a proof-of-principle method for identifying disease-associated genomic segments, functional genes and biological pathways. We review P-value combinations for genome-wide association studies and introduce an integrated analysis tool, Omnibus P-value Association Tests (OPATs), which provides popular analysis methods of P-value combinations. The software OPATs, programmed in R with an R graphical user interface, features a user-friendly interface. In addition to analysis modules for data quality control and single-locus association tests, OPATs provides three types of set-based association test: window-, gene- and biopathway-based association tests. P-value combinations with or without threshold and rank truncation are provided. The significance of a set-based association test is evaluated by using resampling procedures. Performance of the set-based association tests in OPATs has been evaluated by simulation studies and real data analyses. These set-based association tests help boost the statistical power, alleviate the multiple-testing problem, reduce the impact of genetic heterogeneity, increase the replication efficiency of association tests and facilitate the interpretation of association signals by streamlining the testing procedures and integrating the genetic effects of multiple variants in genomic regions of biological relevance. In summary, P-value combinations facilitate the identification of marker sets associated with disease susceptibility and uncover missing heritability in association studies, thereby establishing a foundation for the genetic dissection of complex diseases and traits. OPATs provides an easy-to-use and statistically powerful analysis tool for P-value combinations. OPATs, examples, and a user guide can be downloaded from http://www.stat.sinica.edu.tw/hsinchou/genetics/association/OPATs.htm.
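
    As an illustration of the P-value combination idea, the sketch below combines single-locus p-values with Fisher's statistic and evaluates it by resampling under an independent-uniform null; this ignores the correlation between loci that OPATs' resampling procedures are designed to handle, so it is a simplification rather than the tool's algorithm.

```python
import numpy as np

def fisher_combination(pvalues):
    """Fisher's combination statistic, -2 * sum(log p)."""
    p = np.clip(np.asarray(pvalues, float), 1e-300, 1.0)
    return -2.0 * np.log(p).sum()

def set_based_test(single_locus_p, n_resample=10000, seed=0):
    """Generic set-based test by P-value combination: combine single-locus
    p-values with Fisher's statistic and evaluate its significance by
    resampling p-values under a global null of independent uniforms.
    Sketch only; correlated loci require genotype-level resampling."""
    rng = np.random.default_rng(seed)
    observed = fisher_combination(single_locus_p)
    k = len(single_locus_p)
    null = np.array([fisher_combination(rng.uniform(size=k))
                     for _ in range(n_resample)])
    p_value = (np.sum(null >= observed) + 1) / (n_resample + 1)
    return observed, p_value
```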

  2. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  3. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Holoien, Thomas W.-S.; /Ohio State U., Dept. Astron. /Ohio State U., CCAPP /KIPAC, Menlo Park /SLAC; Marshall, Philip J.; Wechsler, Risa H.; /KIPAC, Menlo Park /SLAC

    2017-05-11

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  4. Automatic recognition of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNNs.

    Science.gov (United States)

    Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan

    2018-06-06

    Ground-glass opacity (GGO) is a common imaging sign on high-resolution CT, and indicates that the lesion is more likely to be malignant than common solid lung nodules. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. Present GGO recognition methods employ traditional low-level features, and system performance has improved slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models. Our hybrid resampling is performed on multiple views and multiple receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model. The multi-CNN model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and 0.83 F1 score. Our method is a promising approach for applying deep learning methods to the computer-aided analysis of specific CT imaging signs with insufficient labeled images.

  5. Bootstrap resampling: a powerful method of assessing confidence intervals for doses from experimental data

    International Nuclear Information System (INIS)

    Iwi, G.; Millard, R.K.; Palmer, A.M.; Preece, A.W.; Saunders, M.

    1999-01-01

    Bootstrap resampling provides a versatile and reliable statistical method for estimating the accuracy of quantities which are calculated from experimental data. It is an empirically based method, in which large numbers of simulated datasets are generated by computer from existing measurements, so that approximate confidence intervals of the derived quantities may be obtained by direct numerical evaluation. A simple introduction to the method is given via a detailed example of estimating 95% confidence intervals for cumulated activity in the thyroid following injection of 99mTc-sodium pertechnetate using activity-time data from 23 subjects. The application of the approach to estimating confidence limits for the self-dose to the kidney following injection of 99mTc-DTPA organ imaging agent based on uptake data from 19 subjects is also illustrated. Results are then given for estimates of doses to the foetus following administration of 99mTc-sodium pertechnetate for clinical reasons during pregnancy, averaged over 25 subjects. The bootstrap method is well suited for applications in radiation dosimetry including uncertainty, reliability and sensitivity analysis of dose coefficients in biokinetic models, but it can also be applied in a wide range of other biomedical situations. (author)
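
    For readers unfamiliar with the technique, the percentile bootstrap itself is only a few lines of code. The sketch below uses made-up activity-time data and a toy trapezoidal estimator of cumulated activity; it shows the mechanics of resampling subjects with replacement, not the dosimetric calculations of the paper.

```python
# Minimal percentile-bootstrap sketch for a derived dosimetric quantity.
# The per-subject "cumulated activity" estimator and the data are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def cumulated_activity(subject_curves, times):
    """Toy estimator: mean trapezoidal integral of the activity-time curves."""
    dt = np.diff(times)
    return np.mean([np.sum(0.5 * (c[1:] + c[:-1]) * dt) for c in subject_curves])

times = np.array([0.5, 1, 2, 4, 8, 24.0])               # hours (illustrative)
curves = rng.gamma(shape=5.0, scale=2.0, size=(23, 6))  # 23 subjects (illustrative)

estimate = cumulated_activity(curves, times)

# Resample subjects with replacement and re-evaluate the derived quantity.
boot = np.array([
    cumulated_activity(curves[rng.integers(0, len(curves), len(curves))], times)
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate = {estimate:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```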

  6. Random resampling masks: a non-Bayesian one-shot strategy for noise reduction in digital holography.

    Science.gov (United States)

    Bianco, V; Paturzo, M; Memmolo, P; Finizio, A; Ferraro, P; Javidi, B

    2013-03-01

    Holographic imaging may become severely degraded by a mixture of speckle and incoherent additive noise. Bayesian approaches reduce the incoherent noise, but prior information is needed on the noise statistics. With no prior knowledge, one-shot reduction of noise is a highly desirable goal, as the recording process is simplified and made faster. Indeed, neither multiple acquisitions nor a complex setup are needed. So far, this result has been achieved at the cost of a deterministic resolution loss. Here we propose a fast non-Bayesian denoising method that avoids this trade-off by means of a numerical synthesis of a moving diffuser. In this way, only one single hologram is required as multiple uncorrelated reconstructions are provided by random complementary resampling masks. Experiments show a significant incoherent noise reduction, close to the theoretical improvement bound, resulting in image-contrast improvement. At the same time, we preserve the resolution of the unprocessed image.

  7. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    KAUST Repository

    Liu, Bo

    2015-11-11

    We consider the Bayesian filtering problem for data assimilation following the kernel-based ensemble Gaussian-mixture filtering (EnGMF) approach introduced by Anderson and Anderson (1999). In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian-mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution. We then focus on two aspects: i) the efficient implementation of EnGMF with (relatively) small ensembles, where we propose a new deterministic resampling strategy preserving the first two moments of the posterior GM to limit the sampling error; and ii) the analysis of the effect of the bandwidth parameter on contributions of KF and PF updates and on the weights variance. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling suggests improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.

  8. Winter Holts Oscillatory Method: A New Method of Resampling in Time Series.

    Directory of Open Access Journals (Sweden)

    Muhammad Imtiaz Subhani

    2016-12-01

    The core proposition behind this research is to create innovative bootstrapping methods that can be applied to time series data. In the search for such methods, various existing approaches were reviewed. Data on automotive sales, market shares and net exports of the top 10 countries, which include China, Europe, the United States of America (USA), Japan, Germany, South Korea, India, Mexico, Brazil, Spain and Canada, covering 2002 to 2014, were collected from various sources including UN Comtrade, Index Mundi and the World Bank. The findings of this paper confirm that bootstrap resampling through Winters forecasting by the Oscillation and Average methods gives more robust results than Winters forecasting by general methods.

  9. Resampling: An optimization method for inverse planning in robotic radiosurgery

    International Nuclear Information System (INIS)

    Schweikard, Achim; Schlaefer, Alexander; Adler, John R. Jr.

    2006-01-01

    By design, the range of beam directions in conventional radiosurgery is constrained to an isocentric array. However, the recent introduction of robotic radiosurgery dramatically increases the flexibility of targeting, and as a consequence, beams need be neither coplanar nor isocentric. Such a nonisocentric design permits a large number of distinct beam directions to be used in one single treatment. These major technical differences provide an opportunity to improve upon the well-established principles for treatment planning used with GammaKnife or LINAC radiosurgery. With this objective in mind, our group has developed over the past decade an inverse planning tool for robotic radiosurgery. This system first computes a set of beam directions, and then during an optimization step, weights each individual beam. Optimization begins with a feasibility query, the answer to which is derived through linear programming. This approach offers the advantage of completeness and avoids local optima. Final beam selection is based on heuristics. In this report we present and evaluate a new strategy for utilizing the advantages of linear programming to improve beam selection. Starting from an initial solution, a heuristically determined set of beams is added to the optimization problem, while beams with zero weight are removed. This process is repeated to sample a set of beams much larger than in a typical optimization. Experimental results indicate that the planning approach efficiently finds acceptable plans and that resampling can further improve its efficiency.
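
    The iterative scheme described above (weight candidate beams by linear programming, discard zero-weight beams, add fresh candidates, repeat) can be sketched on a toy problem with scipy. The dose matrices, prescription and organ-at-risk limit below are random, illustrative numbers with no clinical meaning.

```python
# Toy sketch of the resampling loop described above: weight candidate beams by
# linear programming, drop zero-weight beams, add new candidates, and repeat.
# All "dose" numbers and bounds here are purely illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_tumor, n_oar = 40, 60          # voxels in target and organ at risk (toy sizes)
presc, oar_limit = 20.0, 10.0    # illustrative dose bounds

def random_beams(n):
    """Dose per unit weight delivered by n candidate beams (toy numbers)."""
    return rng.uniform(0.5, 1.5, (n_tumor, n)), rng.uniform(0.0, 0.4, (n_oar, n))

Dt, Do = random_beams(50)
for iteration in range(5):
    n = Dt.shape[1]
    # Minimise total beam-on time subject to coverage and sparing constraints:
    #   Dt @ w >= presc (target coverage), Do @ w <= oar_limit, w >= 0.
    res = linprog(c=np.ones(n),
                  A_ub=np.vstack([-Dt, Do]),
                  b_ub=np.concatenate([-presc * np.ones(n_tumor),
                                       oar_limit * np.ones(n_oar)]),
                  bounds=[(0, None)] * n, method="highs")
    assert res.success, "LP infeasible in this toy setup"
    keep = res.x > 1e-6
    print(f"iter {iteration}: {keep.sum()} of {n} beams active, objective {res.fun:.1f}")
    # Resample: keep the active beams and draw a fresh set of candidates.
    new_t, new_o = random_beams(25)
    Dt = np.hstack([Dt[:, keep], new_t])
    Do = np.hstack([Do[:, keep], new_o])
```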

  10. Testing for significance of phase synchronisation dynamics in the EEG.

    Science.gov (United States)

    Daly, Ian; Sweeney-Reed, Catherine M; Nasuto, Slawomir J

    2013-06-01

    A number of tests exist to check for statistical significance of phase synchronisation within the Electroencephalogram (EEG); however, the majority suffer from a lack of generality and applicability. They may also fail to account for temporal dynamics in the phase synchronisation, regarding synchronisation as a constant state instead of a dynamical process. Therefore, a novel test is developed for identifying the statistical significance of phase synchronisation based upon a combination of work characterising temporal dynamics of multivariate time-series and Markov modelling. We show how this method is better able to assess the significance of phase synchronisation than a range of commonly used significance tests. We also show how the method may be applied to identify and classify significantly different phase synchronisation dynamics in both univariate and multivariate datasets.

  11. Manipulating the Alpha Level Cannot Cure Significance Testing

    Directory of Open Access Journals (Sweden)

    David Trafimow

    2018-05-01

    We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does; but none of the statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else, is not acceptable.

  12. New significance test methods for Fourier analysis of geophysical time series

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2011-09-01

    When one applies the discrete Fourier transform to analyze finite-length time series, discontinuities at the data boundaries will distort its Fourier power spectrum. In this paper, based on a rigorous statistical framework, we present a new significance test method which can extract the intrinsic features of a geophysical time series very well. We show the difference in significance level compared with traditional Fourier tests by analyzing the Arctic Oscillation (AO) and the Nino3.4 time series. In the AO, we find significant peaks at periods of about 2.8, 4.3, and 5.7 yr, and in Nino3.4 a significant peak at a period of about 12 yr, in tests against red noise. These peaks are not significant in traditional tests.
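
    For context, the conventional red-noise test that such methods are compared against can be written down in a few lines: the periodogram is tested against a theoretical AR(1) spectrum scaled by chi-square quantiles. The sketch below uses a synthetic series, not the AO or Nino3.4 data, and shows the traditional baseline rather than the new method proposed in the paper.

```python
# Conventional red-noise significance test for a periodogram: compare each
# spectral estimate against the theoretical AR(1) spectrum scaled by a
# chi-square quantile. Synthetic data; about 5% of bins exceed the 95%
# threshold by chance alone.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 512
# Synthetic AR(1) series with a weak oscillation added (illustrative only).
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
x += 0.8 * np.sin(2 * np.pi * np.arange(n) / 32)
x -= x.mean()

# One-sided periodogram.
freqs = np.fft.rfftfreq(n)[1:]
power = (np.abs(np.fft.rfft(x))[1:] ** 2) * 2.0 / n

# Lag-1 autocorrelation and the corresponding theoretical red-noise spectrum.
alpha = np.corrcoef(x[:-1], x[1:])[0, 1]
red = (1 - alpha**2) / (1 - 2 * alpha * np.cos(2 * np.pi * freqs) + alpha**2)
red *= power.mean() / red.mean()          # scale to the series' mean power

# Each periodogram ordinate ~ (red/2) * chi2(2) under the red-noise null.
threshold = red * chi2.ppf(0.95, df=2) / 2
significant = freqs[power > threshold]
print("frequencies exceeding the 95% red-noise level:", np.round(significant, 4))
```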

  13. Identification of significant features by the Global Mean Rank test.

    Science.gov (United States)

    Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph

    2014-01-01

    With the introduction of omics-technologies such as transcriptomics and proteomics, numerous methods for the reliable identification of significantly regulated features (genes, proteins, etc.) have been developed. Experimental practice requires these tests to successfully deal with conditions such as small numbers of replicates, missing values, non-normally distributed expression levels, and non-identical distributions of features. With the MeanRank test we aimed at developing a test that performs robustly under these conditions, while favorably scaling with the number of replicates. The test proposed here is a global one-sample location test, which is based on the mean ranks across replicates, and internally estimates and controls the false discovery rate. Furthermore, missing data is accounted for without the need of imputation. In extensive simulations comparing MeanRank to other frequently used methods, we found that it performs well with small and large numbers of replicates, feature dependent variance between replicates, and variable regulation across features on simulation data and a recent two-color microarray spike-in dataset. The tests were then used to identify significant changes in the phosphoproteomes of cancer cells induced by the kinase inhibitors erlotinib and 3-MB-PP1 in two independently published mass spectrometry-based studies. MeanRank outperformed the other global rank-based methods applied in this study. Compared to the popular Significance Analysis of Microarrays and Linear Models for Microarray methods, MeanRank performed similar or better. Furthermore, MeanRank exhibits more consistent behavior regarding the degree of regulation and is robust against the choice of preprocessing methods. MeanRank does not require any imputation of missing values, is easy to understand, and yields results that are easy to interpret. The software implementing the algorithm is freely available for academic and commercial use.

  14. What if there were no significance tests?

    CERN Document Server

    Harlow, Lisa L; Steiger, James H

    2013-01-01

    This book is the result of a spirited debate stimulated by a recent meeting of the Society of Multivariate Experimental Psychology. Although the viewpoints span a range of perspectives, the overriding theme that emerges states that significance testing may still be useful if supplemented with some or all of the following -- Bayesian logic, caution, confidence intervals, effect sizes and power, other goodness of approximation measures, replication and meta-analysis, sound reasoning, and theory appraisal and corroboration. The book is organized into five general areas. The first presents an overview of significance testing issues that sythesizes the highlights of the remainder of the book. The next discusses the debate in which significance testing should be rejected or retained. The third outlines various methods that may supplement current significance testing procedures. The fourth discusses Bayesian approaches and methods and the use of confidence intervals versus significance tests. The last presents the p...

  15. MapReduce particle filtering with exact resampling and deterministic runtime

    Science.gov (United States)

    Thiyagalingam, Jeyarajan; Kekempanos, Lykourgos; Maskell, Simon

    2017-12-01

    Particle filtering is a numerical Bayesian technique that has great potential for solving sequential estimation problems involving non-linear and non-Gaussian models. Since the estimation accuracy achieved by particle filters improves as the number of particles increases, it is natural to consider as many particles as possible. MapReduce is a generic programming model that makes it possible to scale a wide variety of algorithms to Big data. However, despite the application of particle filters across many domains, little attention has been devoted to implementing particle filters using MapReduce. In this paper, we describe an implementation of a particle filter using MapReduce. We focus on the resampling component, which would otherwise be a bottleneck to parallel execution. We devise a new implementation of this component, which requires no approximations, has O(N) spatial complexity and deterministic O((log N)^2) time complexity. Results demonstrate the utility of this new component and culminate in consideration of a particle filter with 2^24 particles being distributed across 512 processor cores.
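
    To make the role of the resampling component concrete, the sketch below shows a standard single-machine systematic resampling step with O(N) cost; the paper's contribution is a MapReduce formulation of this step with deterministic runtime, which is not reproduced here.

```python
# A standard single-machine systematic resampling step for a particle filter,
# shown only to make the role of the resampling component concrete.
import numpy as np

def systematic_resample(weights, rng):
    """Return particle indices drawn by systematic resampling (O(N))."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                      # guard against rounding error
    return np.searchsorted(cumulative, positions)

rng = np.random.default_rng(0)
w = rng.random(10)
w /= w.sum()
print(systematic_resample(w, rng))
```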

  16. Co-integration Rank Testing under Conditional Heteroskedasticity

    DEFF Research Database (Denmark)

    Cavaliere, Guiseppe; Rahbæk, Anders; Taylor, A.M. Robert

    null distributions of the rank statistics coincide with those derived by previous authors who assume either i.i.d. or (strict and covariance) stationary martingale difference innovations. We then propose wild bootstrap implementations of the co-integrating rank tests and demonstrate that the associated...... bootstrap rank statistics replicate the first-order asymptotic null distributions of the rank statistics. We show the same is also true of the corresponding rank tests based on the i.i.d. bootstrap of Swensen (2006). The wild bootstrap, however, has the important property that, unlike the i.i.d. bootstrap......, it preserves in the re-sampled data the pattern of heteroskedasticity present in the original shocks. Consistent with this, numerical evidence suggests that, relative to tests based on the asymptotic critical values or the i.i.d. bootstrap, the wild bootstrap rank tests perform very well in small samples un...
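
    The defining feature of the wild bootstrap mentioned above, preserving the pattern of heteroskedasticity by multiplying residuals with independent random signs, is easiest to see in a plain regression setting. The sketch below is such a simplified illustration; it is not the co-integration rank test itself.

```python
# Minimal illustration of the wild bootstrap: residuals of the null-restricted
# model are multiplied by independent Rademacher draws, so every pseudo-sample
# inherits the heteroskedasticity pattern of the original shocks.
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)
e = rng.normal(size=n) * (0.5 + np.abs(x))        # heteroskedastic shocks
y = 1.0 + 0.0 * x + e                             # true slope is zero

X = np.column_stack([np.ones(n), x])
slope_obs = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Impose the null (slope = 0) and resample its residuals with random signs.
resid0 = y - y.mean()
boot = []
for _ in range(2000):
    y_star = y.mean() + resid0 * rng.choice([-1.0, 1.0], size=n)
    boot.append(np.linalg.lstsq(X, y_star, rcond=None)[0][1])

p_value = np.mean(np.abs(boot) >= np.abs(slope_obs))
print(f"slope = {slope_obs:.3f}, wild-bootstrap p-value = {p_value:.3f}")
```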

  17. A method to test the reproducibility and to improve performance of computer-aided detection schemes for digitized mammograms

    International Nuclear Information System (INIS)

    Zheng Bin; Gur, David; Good, Walter F.; Hardesty, Lara A.

    2004-01-01

    The purpose of this study is to develop a new method for assessment of the reproducibility of computer-aided detection (CAD) schemes for digitized mammograms and to evaluate the possibility of using the implemented approach for improving CAD performance. Two thousand digitized mammograms (representing 500 cases) with 300 depicted verified masses were selected in the study. Series of images were generated for each digitized image by resampling after a series of slight image rotations. A CAD scheme developed in our laboratory was applied to all images to detect suspicious mass regions. We evaluated the reproducibility of the scheme using the detection sensitivity and false-positive rates for the original and resampled images. We also explored the possibility of improving CAD performance using three methods of combining results from the original and resampled images, including simple grouping, averaging output scores, and averaging output scores after grouping. The CAD scheme generated a detection score (from 0 to 1) for each identified suspicious region. A region with a detection score >0.5 was considered as positive. The CAD scheme detected 238 masses (79.3% case-based sensitivity) and identified 1093 false-positive regions (average 0.55 per image) in the original image dataset. In eleven repeated tests using original and ten sets of rotated and resampled images, the scheme detected a maximum of 271 masses and identified as many as 2359 false-positive regions. Two hundred and eighteen masses (80.4%) and 618 false-positive regions (26.2%) were detected in all 11 sets of images. Combining detection results improved reproducibility and the overall CAD performance. In the range of an average false-positive detection rate between 0.5 and 1 per image, the sensitivity of the scheme could be increased approximately 5% after averaging the scores of the regions detected in at least four images. At low false-positive rate (e.g., ≤average 0.3 per image), the grouping method

  18. Estimating variability in functional images using a synthetic resampling approach

    International Nuclear Information System (INIS)

    Maitra, R.; O'Sullivan, F.

    1996-01-01

    Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort in simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods

  19. A multiparametric magnetic resonance imaging-based risk model to determine the risk of significant prostate cancer prior to biopsy.

    Science.gov (United States)

    van Leeuwen, Pim J; Hayen, Andrew; Thompson, James E; Moses, Daniel; Shnier, Ron; Böhm, Maret; Abuodha, Magdaline; Haynes, Anne-Maree; Ting, Francis; Barentsz, Jelle; Roobol, Monique; Vass, Justin; Rasiah, Krishan; Delprado, Warick; Stricker, Phillip D

    2017-12-01

    To develop and externally validate a predictive model for detection of significant prostate cancer. Development of the model was based on a prospective cohort including 393 men who underwent multiparametric magnetic resonance imaging (mpMRI) before biopsy. External validity of the model was then examined retrospectively in 198 men from a separate institution who underwent mpMRI followed by biopsy for abnormal prostate-specific antigen (PSA) level or digital rectal examination (DRE). A model was developed with age, PSA level, DRE, prostate volume, previous biopsy, and Prostate Imaging Reporting and Data System (PIRADS) score, as predictors for significant prostate cancer (Gleason 7 with >5% grade 4, ≥20% cores positive or ≥7 mm of cancer in any core). Probability was studied via logistic regression. Discriminatory performance was quantified by concordance statistics and internally validated with bootstrap resampling. In all, 393 men had complete data and 149 (37.9%) had significant prostate cancer. While the variable model had good accuracy in predicting significant prostate cancer, area under the curve (AUC) of 0.80, the advanced model (incorporating mpMRI) had a significantly higher AUC of 0.88 for predicting significant prostate cancer. Individualised risk assessment of significant prostate cancer using a predictive model that incorporates mpMRI PIRADS score and clinical data allows a considerable reduction in unnecessary biopsies and reduction of the risk of over-detection of insignificant prostate cancer at the cost of a very small increase in the number of significant cancers missed. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
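
    The modelling and internal-validation recipe used here (logistic regression plus bootstrap resampling to correct the apparent AUC for optimism) can be sketched generically. The code below uses synthetic stand-ins for age, PSA, prostate volume and a PIRADS-like score; none of the coefficients or results correspond to the study's data.

```python
# Sketch of a logistic risk model with bootstrap internal validation of the
# AUC (optimism correction). All predictors and numbers are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 400
X = np.column_stack([
    rng.normal(65, 7, n),          # age (illustrative)
    rng.lognormal(1.8, 0.5, n),    # PSA (illustrative)
    rng.normal(50, 15, n),         # prostate volume (illustrative)
    rng.integers(1, 6, n),         # PIRADS-like score 1-5 (illustrative)
])
logit = -5 + 0.03 * X[:, 0] + 0.08 * X[:, 1] - 0.02 * X[:, 2] + 1.1 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def fit(Xa, ya):
    return make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(Xa, ya)

model = fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism: refit on resampled data, compare AUC on the bootstrap
# sample with AUC on the original sample.
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = fit(X[idx], y[idx])
    optimism.append(roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
                    - roc_auc_score(y, m.predict_proba(X)[:, 1]))

print(f"apparent AUC {apparent_auc:.3f}, "
      f"optimism-corrected AUC {apparent_auc - np.mean(optimism):.3f}")
```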

  20. Field significance of performance measures in the context of regional climate model evaluation. Part 2: precipitation

    Science.gov (United States)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2018-04-01

    A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as 'field' or 'global' significance. The block length for the local resampling tests is precisely determined to adequately account for the time series structure. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is exemplarily applied to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Daily precipitation climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. While the downscaled precipitation distributions are statistically indistinguishable from the observed ones in most regions in summer, the biases of some distribution characteristics are significant over large areas in winter. WRF-NOAH generates appropriate stationary fine-scale climate features in the daily precipitation field over regions of complex topography in both seasons and appropriate transient fine-scale features almost everywhere in summer. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the clear additional value of dynamical downscaling over global climate simulations. The evaluation methodology has a broad spectrum of applicability as it is
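
    The multiple-comparison step that turns a grid of local tests into a 'field significance' statement can be illustrated with a false-discovery-rate procedure over simulated local p-values, as below. This is only the final aggregation step; the block-resampling local tests of the paper are not reproduced.

```python
# Sketch of the field-significance step: given local p-values at every grid
# cell, control the proportion of falsely rejected local tests with the
# Benjamini-Hochberg procedure. The local p-values here are simulated.
import numpy as np

rng = np.random.default_rng(11)
n_cells = 2500                                   # e.g. a 50 x 50 grid
p_local = rng.uniform(size=n_cells)
p_local[:200] = rng.uniform(0, 0.01, 200)        # a patch of true signal

def benjamini_hochberg(p, q=0.05):
    """Return a boolean mask of locally rejected tests at FDR level q."""
    order = np.argsort(p)
    thresh = q * np.arange(1, len(p) + 1) / len(p)
    below = p[order] <= thresh
    k = np.nonzero(below)[0].max() + 1 if below.any() else 0
    reject = np.zeros(len(p), dtype=bool)
    reject[order[:k]] = True
    return reject

reject = benjamini_hochberg(p_local, q=0.05)
print(f"{reject.sum()} of {n_cells} grid cells locally significant at FDR 5%")
print("field significant:", reject.any())
```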

  1. Theory of nonparametric tests

    CERN Document Server

    Dickhaus, Thorsten

    2018-01-01

    This textbook provides a self-contained presentation of the main concepts and methods of nonparametric statistical testing, with a particular focus on the theoretical foundations of goodness-of-fit tests, rank tests, resampling tests, and projection tests. The substitution principle is employed as a unified approach to the nonparametric test problems discussed. In addition to mathematical theory, it also includes numerous examples and computer implementations. The book is intended for advanced undergraduate, graduate, and postdoc students as well as young researchers. Readers should be familiar with the basic concepts of mathematical statistics typically covered in introductory statistics courses.

  2. A Powerful Test for Comparing Multiple Regression Functions.

    Science.gov (United States)

    Maity, Arnab

    2012-09-01

    In this article, we address the important problem of comparison of two or more population regression functions. Recently, Pardo-Fernández, Van Keilegom and González-Manteiga (2007) developed test statistics for simple nonparametric regression models: Y_ij = θ_j(Z_ij) + σ_j(Z_ij) ε_ij, based on empirical distributions of the errors in each population j = 1, … , J. In this paper, we propose a test for equality of the θ_j(·) based on the concept of generalized likelihood ratio type statistics. We also generalize our test for other nonparametric regression setups, e.g., nonparametric logistic regression, where the loglikelihood for population j is any general smooth function [Formula: see text]. We describe a resampling procedure to obtain the critical values of the test. In addition, we present a simulation study to evaluate the performance of the proposed test and compare our results to those in Pardo-Fernández et al. (2007).

  3. Can a significance test be genuinely Bayesian?

    OpenAIRE

    Pereira, Carlos A. de B.; Stern, Julio Michael; Wechsler, Sergio

    2008-01-01

    The Full Bayesian Significance Test, FBST, is extensively reviewed. Its test statistic, a genuine Bayesian measure of evidence, is discussed in detail. Its behavior in some problems of statistical inference like testing for independence in contingency tables is discussed.

  4. Characteristic function-based semiparametric inference for skew-symmetric models

    KAUST Repository

    Potgieter, Cornelis J.; Genton, Marc G.

    2012-01-01

    testing. Two tests for a hypothesis of specific parameter values are considered, as well as a test for the hypothesis that the symmetric component has a specific parametric form. A resampling algorithm is described for practical implementation

  5. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    Science.gov (United States)

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

    This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi square test, as being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau(b), a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm, and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
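
    A simplified Monte Carlo analogue of the 'Average Absolute Deviation' idea is easy to sketch: compare the observed mean absolute deviation between observed and model-predicted category proportions with its distribution under data simulated from the model. The predicted probabilities and counts below are made up, and the resampling scheme is parametric rather than the permutation schemes developed in the paper.

```python
# Simplified Monte Carlo goodness-of-fit with a mean-absolute-deviation
# statistic. Predicted probabilities and observed counts are illustrative.
import numpy as np

rng = np.random.default_rng(5)
predicted = np.array([0.55, 0.25, 0.15, 0.05])   # model-predicted proportions
observed_counts = np.array([48, 30, 14, 8])      # observed category counts
n = observed_counts.sum()

def aad(counts, predicted):
    return np.mean(np.abs(counts / counts.sum() - predicted))

stat_obs = aad(observed_counts, predicted)

# Resample datasets of the same size from the model and recompute the statistic.
stats_null = np.array([
    aad(rng.multinomial(n, predicted), predicted) for _ in range(10000)
])
p_value = np.mean(stats_null >= stat_obs)
print(f"AAD = {stat_obs:.4f}, Monte Carlo p-value = {p_value:.3f}")
```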

  6. Significance tests for the wavelet cross spectrum and wavelet linear coherence

    Directory of Open Access Journals (Sweden)

    Z. Ge

    2008-12-01

    This work attempts to develop significance tests for the wavelet cross spectrum and the wavelet linear coherence as a follow-up study on Ge (2007). Conventional approaches that are used by Torrence and Compo (1998), based on stationary background noise time series, were used here in estimating the sampling distributions of the wavelet cross spectrum and the wavelet linear coherence. The sampling distributions are then used for establishing significance levels for these two wavelet-based quantities. In addition to these two wavelet quantities, properties of the phase angle of the wavelet cross spectrum of, or the phase difference between, two Gaussian white noise series are discussed. It is found that the tangent of the principal part of the phase angle approximately has a standard Cauchy distribution and the phase angle is uniformly distributed, which makes it impossible to establish significance levels for the phase angle. The simulated signals clearly show that, when there is no linear relation between the two analysed signals, the phase angle disperses into the entire range of [−π,π] with fairly high probabilities for values close to ±π to occur. Conversely, when linear relations are present, the phase angle of the wavelet cross spectrum settles around an associated value with considerably reduced fluctuations. When two signals are linearly coupled, their wavelet linear coherence will attain values close to one. The significance test of the wavelet linear coherence can therefore be used to complement the inspection of the phase angle of the wavelet cross spectrum. The developed significance tests are also applied to actual data sets, simultaneously recorded wind speed and wave elevation series measured from a NOAA buoy on Lake Michigan. Significance levels of the wavelet cross spectrum and the wavelet linear coherence between the winds and the waves reasonably separated meaningful peaks from those generated by randomness in the data set. As

  7. The insignificance of statistical significance testing

    Science.gov (United States)

    Johnson, Douglas H.

    1999-01-01

    Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.

  8. FPGA Accelerator for Wavelet-Based Automated Global Image Registration

    Directory of Open Access Journals (Sweden)

    Baofeng Li

    2009-01-01

    Wavelet-based automated global image registration (WAGIR) is fundamental for most remote sensing image processing algorithms and extremely computation-intensive. With more and more algorithms migrating from ground computing to onboard computing, an efficient dedicated architecture for WAGIR is desired. In this paper, a BWAGIR architecture is proposed based on a block resampling scheme. BWAGIR achieves significant performance by pipelining computational logic, parallelizing the resampling process and the calculation of the correlation coefficient, and using parallel memory access. A proof-of-concept implementation of the architecture with 1 BWAGIR processing unit performs at least 7.4X faster than the CL cluster system with 1 node, and at least 3.4X faster than the MPM massively parallel machine with 1 node. Further speedup can be achieved by parallelizing multiple BWAGIR units. The architecture with 5 units achieves a speedup of about 3X against the CL with 16 nodes and a comparable speed to the MPM with 30 nodes. More importantly, the BWAGIR architecture can be deployed onboard economically.

  9. Using vis-NIR to predict soil organic carbon and clay at national scale: validation of geographically closest resampling strategy

    DEFF Research Database (Denmark)

    Peng, Yi; Knadel, Maria; Greve, Mette Balslev

    2016-01-01

    geographically closest sampling points. The SOC prediction resulted in R2: 0.76; RMSE: 4.02 %; RPD: 1.59; RPIQ: 0.35. The results for clay prediction were also successful (R2: 0.84; RMSE: 2.36 %; RPD: 2.35; RPIQ: 2.88). For SOC predictions, over 90% of soil samples were well predicted compared...... samples) for soils from each 7-km grid sampling point in the country. In the resampling and modelling process, each target sample was predicted by a specific model which was calibrated using geographically closest soil spectra. The geographically closest 20, 30, 40, and 50 sampling points (profiles) were...

  10. Significance testing in ridge regression for genetic data

    Directory of Open Access Journals (Sweden)

    De Iorio Maria

    2011-09-01

    Background Technological developments have increased the feasibility of large scale genetic association studies. Densely typed genetic markers are obtained using SNP arrays, next-generation sequencing technologies and imputation. However, SNPs typed using these methods can be highly correlated due to linkage disequilibrium among them, and standard multiple regression techniques fail with these data sets due to their high dimensionality and correlation structure. There has been increasing interest in using penalised regression in the analysis of high dimensional data. Ridge regression is one such penalised regression technique which does not perform variable selection, instead estimating a regression coefficient for each predictor variable. It is therefore desirable to obtain an estimate of the significance of each ridge regression coefficient. Results We develop and evaluate a test of significance for ridge regression coefficients. Using simulation studies, we demonstrate that the performance of the test is comparable to that of a permutation test, with the advantage of a much-reduced computational cost. We introduce the p-value trace, a plot of the negative logarithm of the p-values of ridge regression coefficients with increasing shrinkage parameter, which enables the visualisation of the change in p-value of the regression coefficients with increasing penalisation. We apply the proposed method to a lung cancer case-control data set from EPIC, the European Prospective Investigation into Cancer and Nutrition. Conclusions The proposed test is a useful alternative to a permutation test for the estimation of the significance of ridge regression coefficients, at a much-reduced computational cost. The p-value trace is an informative graphical tool for evaluating the results of a test of significance of ridge regression coefficients as the shrinkage parameter increases, and the proposed test makes its production computationally feasible.
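
    The permutation benchmark against which the proposed analytic test is compared is straightforward to sketch: permute the phenotype, refit the ridge regression, and compare each observed coefficient with its permutation null. The simulated, correlated predictors below stand in for SNPs in linkage disequilibrium; the analytic test itself is not reproduced.

```python
# Permutation test for ridge regression coefficients on simulated, correlated
# predictors (a stand-in for SNPs in linkage disequilibrium).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n, p = 300, 50
base = rng.normal(size=(n, p))
X = base + 0.8 * rng.normal(size=(n, 1))        # induce correlation among predictors
y = 0.5 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(size=n)

lam = 10.0
coef_obs = Ridge(alpha=lam).fit(X, y).coef_

n_perm = 500
null = np.empty((n_perm, p))
for b in range(n_perm):
    null[b] = Ridge(alpha=lam).fit(X, rng.permutation(y)).coef_

# Two-sided permutation p-value per coefficient.
p_values = (1 + np.sum(np.abs(null) >= np.abs(coef_obs), axis=0)) / (n_perm + 1)
print("smallest p-values:", np.sort(p_values)[:5].round(4))
```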

  11. Non-destructive testing: significant facts

    International Nuclear Information System (INIS)

    Espejo, Hector; Ruch, Marta C.

    2006-01-01

    Over the last fifty years, different organisations, both public and private, have taken on the mission of introducing into the country the most relevant aspects of the modern technological discipline of Non-Destructive Testing (NDT) through a wide range of activities, such as training and education, research, development, technical assistance and services, personnel qualification/certification and standardisation. A review is given of the significant facts in this process, in which the Argentine Atomic Energy Commission (CNEA) played a leading part; a balance of the accomplishments is drawn and a forecast of the future of the activity is sketched. (author)

  12. A rule-based software test data generator

    Science.gov (United States)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests are performed, showing that even the primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.

  13. Pearson's chi-square test and rank correlation inferences for clustered data.

    Science.gov (United States)

    Shih, Joanna H; Fay, Michael P

    2017-09-01

    Pearson's chi-square test has been widely used in testing for association between two categorical responses. Spearman rank correlation and Kendall's tau are often used for measuring and testing association between two continuous or ordered categorical responses. However, the established statistical properties of these tests are only valid when each pair of responses are independent, where each sampling unit has only one pair of responses. When each sampling unit consists of a cluster of paired responses, the assumption of independent pairs is violated. In this article, we apply the within-cluster resampling technique to U-statistics to form new tests and rank-based correlation estimators for possibly tied clustered data. We develop large sample properties of the new proposed tests and estimators and evaluate their performance by simulations. The proposed methods are applied to a data set collected from a PET/CT imaging study for illustration. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
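
    The within-cluster resampling idea is simple to sketch for a Spearman correlation: repeatedly draw one observation pair per cluster, compute the statistic on the resulting independent pairs, and average the resampled estimates. The data below are simulated, and the formal variance combination developed in the paper for inference is omitted.

```python
# Minimal sketch of within-cluster resampling for a Spearman rank correlation:
# draw one (x, y) pair per cluster, compute the correlation on the resulting
# independent pairs, and average over many resamples. Simulated data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(9)
n_clusters, max_size = 60, 5
clusters = []
for _ in range(n_clusters):
    m = rng.integers(2, max_size + 1)
    u = rng.normal()                            # shared cluster effect
    x = u + rng.normal(size=m)
    y = 0.6 * u + rng.normal(size=m)
    clusters.append(np.column_stack([x, y]))

n_resample = 1000
estimates = np.empty(n_resample)
for b in range(n_resample):
    picks = np.array([c[rng.integers(len(c))] for c in clusters])
    estimates[b] = spearmanr(picks[:, 0], picks[:, 1])[0]

print(f"within-cluster resampling estimate of Spearman rho: {estimates.mean():.3f}")
```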

  14. The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.

    Science.gov (United States)

    Lash, Timothy L

    2017-09-15

    In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    Science.gov (United States)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies showed that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state update with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequentially smoothing of particle weights for state and parameter resampling within a time window as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.

  16. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    Science.gov (United States)

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.

  17. Reducing test-data volume and test-power simultaneously in LFSR reseeding-based compression environment

    Energy Technology Data Exchange (ETDEWEB)

    Wang Weizheng; Kuang Jishun; You Zhiqiang; Liu Peng, E-mail: jshkuang@163.com [College of Information Science and Engineering, Hunan University, Changsha 410082 (China)

    2011-07-15

    This paper presents a new test scheme based on scan block encoding in a linear feedback shift register (LFSR) reseeding-based compression environment. Meanwhile, our paper also introduces a novel algorithm of scan-block clustering. The main contribution of this paper is a flexible test-application framework that achieves significant reductions in switching activity during scan shift and the number of specified bits that need to be generated via LFSR reseeding. Thus, it can significantly reduce the test power and test data volume. Experimental results using Mintest test set on the larger ISCAS'89 benchmarks show that the proposed method reduces the switching activity significantly by 72%-94% and provides a best possible test compression of 74%-94% with little hardware overhead. (semiconductor integrated circuits)

  18. Fine-mapping additive and dominant SNP effects using group-LASSO and Fractional Resample Model Averaging

    Science.gov (United States)

    Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William

    2014-01-01

    Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853

  19. Network diffusion-based analysis of high-throughput data for the detection of differentially enriched modules

    Science.gov (United States)

    Bersanelli, Matteo; Mosca, Ettore; Remondini, Daniel; Castellani, Gastone; Milanesi, Luciano

    2016-01-01

    A relation exists between network proximity of molecular entities in interaction networks, functional similarity and association with diseases. The identification of network regions associated with biological functions and pathologies is a major goal in systems biology. We describe a network diffusion-based pipeline for the interpretation of different types of omics in the context of molecular interaction networks. We introduce the network smoothing index, a network-based quantity that allows to jointly quantify the amount of omics information in genes and in their network neighbourhood, using network diffusion to define network proximity. The approach is applicable to both descriptive and inferential statistics calculated on omics data. We also show that network resampling, applied to gene lists ranked by quantities derived from the network smoothing index, indicates the presence of significantly connected genes. As a proof of principle, we identified gene modules enriched in somatic mutations and transcriptional variations observed in samples of prostate adenocarcinoma (PRAD). In line with the local hypothesis, network smoothing index and network resampling underlined the existence of a connected component of genes harbouring molecular alterations in PRAD. PMID:27731320

  1. Kolmogorov-Smirnov test for spatially correlated data

    Science.gov (United States)

    Olea, R.A.; Pawlowsky-Glahn, V.

    2009-01-01

    The Kolmogorov-Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as undistinguishable from each other or whether an underlying probability distribution differs from a hypothesized distribution. Application of the test requires that the sample be unbiased and the outcomes be independent and identically distributed, conditions that are violated in several degrees by spatially continuous attributes, such as topographical elevation. A generalized form of the bootstrap method is used here for the purpose of modeling the distribution of the statistic D of the Kolmogorov-Smirnov test. The innovation is in the resampling, which in the traditional formulation of bootstrap is done by drawing from the empirical sample with replacement presuming independence. The generalization consists of preparing resamplings with the same spatial correlation as the empirical sample. This is accomplished by reading the value of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested by two empirical samples taken from an exhaustive sample closely following a lognormal distribution. One sample was a regular, unbiased sample while the other one was a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value of the statistic in the absence of spatial correlation, which is in agreement with the fact that the information content of an uncorrelated sample is larger than the one for a spatially correlated sample of the same size. © Springer-Verlag 2008.

  2. Testing the significance of canonical axes in redundancy analysis

    NARCIS (Netherlands)

    Legendre, P.; Oksanen, J.; Braak, ter C.J.F.

    2011-01-01

    1. Tests of significance of the individual canonical axes in redundancy analysis allow researchers to determine which of the axes represent variation that can be distinguished from random. Variation along the significant axes can be mapped, used to draw biplots or interpreted through subsequent

  3. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    Science.gov (United States)

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.

  4. Your Chi-Square Test Is Statistically Significant: Now What?

    Science.gov (United States)

    Sharpe, Donald

    2015-01-01

    Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
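
    The first of the four follow-up approaches, calculating residuals, is shown below with scipy: adjusted standardized residuals locate the cells that drive a significant chi-square result. The contingency table is made up for illustration.

```python
# Follow-up of a significant chi-square test via adjusted (Haberman) residuals,
# which are approximately N(0, 1) under independence. The table is made up.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[30, 10, 20],
                     [15, 25, 10]])
chi2_stat, p, dof, expected = chi2_contingency(observed)

n = observed.sum()
row_p = observed.sum(axis=1, keepdims=True) / n
col_p = observed.sum(axis=0, keepdims=True) / n
adjusted = (observed - expected) / np.sqrt(expected * (1 - row_p) * (1 - col_p))

print(f"chi-square = {chi2_stat:.2f}, p = {p:.4f}")
print("adjusted residuals:\n", adjusted.round(2))
```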

  5. Properties of permutation-based gene tests and controlling type 1 error using a summary statistic based gene test.

    Science.gov (United States)

    Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph

    2013-11-07

    The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the
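
    The core of a summary-statistic region test can be sketched in a few lines: under the null the marker Z-statistics are multivariate normal with the marker correlation (LD) matrix as covariance, so a quadratic form in the Z-statistics is chi-square distributed. The code below simulates such Z-statistics with an exchangeable LD-like correlation matrix; the paper's modification protecting against a misspecified correlation matrix is not reproduced.

```python
# Sketch of a summary-statistic region test: under the null Z ~ N(0, R) with R
# the marker correlation (LD) matrix, so Z' R^{-1} Z is chi-square with k df.
# The correlation matrix and Z-statistics here are simulated.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
k = 10
R = 0.4 * np.ones((k, k)) + 0.6 * np.eye(k)   # exchangeable LD-like correlation

# Simulate Z-statistics with one truly associated marker.
L = np.linalg.cholesky(R)
z = L @ rng.normal(size=k)
z[0] += 3.5

stat = z @ np.linalg.solve(R, z)
p_value = chi2.sf(stat, df=k)
print(f"region statistic = {stat:.2f}, p-value = {p_value:.4g}")
```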

  6. Testing statistical hypotheses

    CERN Document Server

    Lehmann, E L

    2005-01-01

    The third edition of Testing Statistical Hypotheses updates and expands upon the classic graduate text, emphasizing optimality theory for hypothesis testing and confidence sets. The principal additions include a rigorous treatment of large sample optimality, together with the requisite tools. In addition, an introduction to the theory of resampling methods such as the bootstrap is developed. The sections on multiple testing and goodness of fit testing are expanded. The text is suitable for Ph.D. students in statistics and includes over 300 new problems out of a total of more than 760. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands and the University of Chicago. He is the author of Elements of Large-Sample Theory and (with George Casella) he is also the author of Theory of Point Estimat...

  7. Test for the statistical significance of differences between ROC curves

    International Nuclear Information System (INIS)

    Metz, C.E.; Kronman, H.B.

    1979-01-01

    A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions

  8. Significance tests to determine the direction of effects in linear regression models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Hagmann, Michael; von Eye, Alexander

    2015-02-01

    Previous studies have discussed asymmetric interpretations of the Pearson correlation coefficient and have shown that higher moments can be used to decide on the direction of dependence in the bivariate linear regression setting. The current study extends this approach by illustrating that the third moment of regression residuals may also be used to derive conclusions concerning the direction of effects. Assuming non-normally distributed variables, it is shown that the distribution of residuals of the correctly specified regression model (e.g., Y is regressed on X) is more symmetric than the distribution of residuals of the competing model (i.e., X is regressed on Y). Based on this result, 4 one-sample tests are discussed which can be used to decide which variable is more likely to be the response and which one is more likely to be the explanatory variable. A fifth significance test is proposed based on the differences of skewness estimates, which leads to a more direct test of a hypothesis that is compatible with direction of dependence. A Monte Carlo simulation study was performed to examine the behaviour of the procedures under various degrees of associations, sample sizes, and distributional properties of the underlying population. An empirical example is given which illustrates the application of the tests in practice. © 2014 The British Psychological Society.

  9. The significance test controversy revisited the fiducial Bayesian alternative

    CERN Document Server

    Lecoutre, Bruno

    2014-01-01

    The purpose of this book is not only to revisit the “significance test controversy,”but also to provide a conceptually sounder alternative. As such, it presents a Bayesian framework for a new approach to analyzing and interpreting experimental data. It also prepares students and researchers for reporting on experimental results. Normative aspects: The main views of statistical tests are revisited and the philosophies of Fisher, Neyman-Pearson and Jeffrey are discussed in detail. Descriptive aspects: The misuses of Null Hypothesis Significance Tests are reconsidered in light of Jeffreys’ Bayesian conceptions concerning the role of statistical inference in experimental investigations. Prescriptive aspects: The current effect size and confidence interval reporting practices are presented and seriously questioned. Methodological aspects are carefully discussed and fiducial Bayesian methods are proposed as a more suitable alternative for reporting on experimental results. In closing, basic routine procedures...

  10. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Science.gov (United States)

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  11. Do School-Based Tutoring Programs Significantly Improve Student Performance on Standardized Tests?

    Science.gov (United States)

    Rothman, Terri; Henderson, Mary

    2011-01-01

    This study used a pre-post, nonequivalent control group design to examine the impact of an in-district, after-school tutoring program on eighth grade students' standardized test scores in language arts and mathematics. Students who had scored in the near-passing range on either the language arts or mathematics aspect of a standardized test at the…

  12. Finding significantly connected voxels based on histograms of connection strengths

    DEFF Research Database (Denmark)

    Kasenburg, Niklas; Pedersen, Morten Vester; Darkner, Sune

    2016-01-01

    We explore a new approach for structural connectivity based segmentations of subcortical brain regions. Connectivity based segmentations are usually based on fibre connections from a seed region to predefined target regions. We present a method for finding significantly connected voxels based...... on the distribution of connection strengths. Paths from seed voxels to all voxels in a target region are obtained from a shortest-path tractography. For each seed voxel we approximate the distribution with a histogram of path scores. We hypothesise that the majority of estimated connections are false-positives...... and that their connection strength is distributed differently from true-positive connections. Therefore, an empirical null-distribution is defined for each target region as the average normalized histogram over all voxels in the seed region. Single histograms are then tested against the corresponding null...

  13. Modelling lead bioaccessibility in urban topsoils based on data from Glasgow, London, Northampton and Swansea, UK

    International Nuclear Information System (INIS)

    Appleton, J.D.; Cave, M.R.; Wragg, J.

    2012-01-01

    Predictive linear regression (LR) modelling between bioaccessible Pb and a range of total elemental compositions and soil properties was executed for the Glasgow, London, Northampton and Swansea urban areas in order to assess the potential for developing a national urban bioaccessible Pb dataset for the UK. LR indicates that total Pb is the only highly significant independent variable for estimating the bioaccessibility of Pb. Bootstrap resampling shows that the relationship between total Pb and bioaccessible Pb is broadly the same in the four urban areas. The median bioaccessible fraction ranges from 38% in Northampton to 68% in London and Swansea. Results of this study can be used as part of a lines of evidence approach to localised risk assessment but should not be used to replace bioaccessibility testing at individual sites where local conditions may vary considerably from the broad overview presented in this study. - Highlights: ► Total Pb is the only significant predictor for bioaccessible Pb in UK urban topsoils. ► Bootstrap resampling confirms relationship similar in four urban areas. ► Median bioaccessible fraction ranges from 38 to 68%. ► Results can be used for initial risk assessment in UK urban areas. - Total Pb is the only significant predictor for bioaccessible Pb in topsoils from four urban areas in the UK.

  14. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with polynomial regression method, the paper firstly demonstrated the broad usage of polynomial function and deduced its parameters with ordinary least squares estimate. Then significance test method of polynomial regression function is derived considering the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and significance test of the polynomial function are done to the decay heating power of the iso tope per kilogram in accord with the authors' real work. (authors)

  15. Testing particle filters on convective scale dynamics

    Science.gov (United States)

    Haslehner, Mylene; Craig, George. C.; Janjic, Tijana

    2014-05-01

    Particle filters have been developed in recent years to deal with highly nonlinear dynamics and non Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (P.v. Leeuwen, 2011) for convective scale data assimilation application. The method is tested in idealized setting, on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging the variables shows significant improvement compared to Ensemble Kalman Filter and Sequential Importance Resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig 2013), which contains more realistic dynamical characteristics of convective scale phenomena. Using the efficient particle filter and different combination of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc.,139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical

  16. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, M D; Cole, S; Frenk, C S; Szapudi, I

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires {approx}8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.

  17. permGPU: Using graphics processing units in RNA microarray association studies

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2010-06-01

    Full Text Available Abstract Background Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, that are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. Results We have developed a CUDA based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. Conclusions permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.

  18. Unit Roots in Economic and Financial Time Series: A Re-Evaluation at the Decision-Based Significance Levels

    Directory of Open Access Journals (Sweden)

    Jae H. Kim

    2017-09-01

    Full Text Available This paper re-evaluates key past results of unit root tests, emphasizing that the use of a conventional level of significance is not in general optimal due to the test having low power. The decision-based significance levels for popular unit root tests, chosen using the line of enlightened judgement under a symmetric loss function, are found to be much higher than conventional ones. We also propose simple calibration rules for the decision-based significance levels for a range of unit root tests. At the decision-based significance levels, many time series in Nelson and Plosser’s (1982 (extended data set are judged to be trend-stationary, including real income variables, employment variables and money stock. We also find that nearly all real exchange rates covered in Elliott and Pesavento’s (2006 study are stationary; and that most of the real interest rates covered in Rapach and Weber’s (2004 study are stationary. In addition, using a specific loss function, the U.S. nominal interest rate is found to be stationary under economically sensible values of relative loss and prior belief for the null hypothesis.

  19. A Quantitative Analysis of Evidence-Based Testing Practices in Nursing Education

    Science.gov (United States)

    Moore, Wendy

    2017-01-01

    The focus of this dissertation is evidence-based testing practices in nursing education. Specifically, this research study explored the implementation of evidence-based testing practices between nursing faculty of various experience levels. While the significance of evidence-based testing in nursing education is well documented, little is known…

  20. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Full Text Available Abstract Background In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested if this could be validated when applied to existing, evolutionary related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.

  1. Memory and Trend of Precipitation in China during 1966-2013

    Science.gov (United States)

    Du, M.; Sun, F.; Liu, W.

    2017-12-01

    As climate change has had a significant impact on water cycle, the characteristic and variation of precipitation under climate change turned into a hotspot in hydrology. This study aims to analyze the trend and memory (both short-term and long-term) of precipitation in China. To do that, we apply statistical tests (including Mann-Kendall test, Ljung-Box test and Hurst exponent) to annual precipitation (P), frequency of rainy day (λ) and mean daily rainfall in days when precipitation occurs (α) in China (1966-2013). We also use a resampling approach to determine the field significance. From there, we evaluate the spatial distribution and percentages of stations with significant memory or trend. We find that the percentages of significant downtrends for λ and significant uptrends for α are significantly larger than the critical values at 95% field significance level, probably caused by the global warming. From these results, we conclude that extra care is necessary when significant results are obtained using statistical tests. This is because the null hypothesis could be rejected by chance and this situation is more likely to occur if spatial correlation is ignored according to the results of the resampling approach.

  2. Computer-Based Testing: Test Site Security.

    Science.gov (United States)

    Rosen, Gerald A.

    Computer-based testing places great burdens on all involved parties to ensure test security. A task analysis of test site security might identify the areas of protecting the test, protecting the data, and protecting the environment as essential issues in test security. Protecting the test involves transmission of the examinations, identifying the…

  3. LandScape: a simple method to aggregate p--Values and other stochastic variables without a priori grouping

    DEFF Research Database (Denmark)

    Wiuf, Carsten; Pallesen, Jonatan; Foldager, Leslie

    2016-01-01

    variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method...... and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values, without relying on a priori criteria, are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer...

  4. Afrika Statistika ISSN 2316-090X A Bayesian significance test of ...

    African Journals Online (AJOL)

    of the generalized likelihood ratio test to detect a change in binomial ... computational simplicity to the problem of calculating posterior marginals. ... the impact of a single outlier on the performance of the Bayesian significance test of change.

  5. After statistics reform : Should we still teach significance testing?

    NARCIS (Netherlands)

    A. Hak (Tony)

    2014-01-01

    textabstractIn the longer term null hypothesis significance testing (NHST) will disappear because p- values are not informative and not replicable. Should we continue to teach in the future the procedures of then abolished routines (i.e., NHST)? Three arguments are discussed for not teaching NHST in

  6. Security Considerations and Recommendations in Computer-Based Testing

    Directory of Open Access Journals (Sweden)

    Saleh M. Al-Saleem

    2014-01-01

    Full Text Available Many organizations and institutions around the globe are moving or planning to move their paper-and-pencil based testing to computer-based testing (CBT. However, this conversion will not be the best option for all kinds of exams and it will require significant resources. These resources may include the preparation of item banks, methods for test delivery, procedures for test administration, and last but not least test security. Security aspects may include but are not limited to the identification and authentication of examinee, the risks that are associated with cheating on the exam, and the procedures related to test delivery to the examinee. This paper will mainly investigate the security considerations associated with CBT and will provide some recommendations for the security of these kinds of tests. We will also propose a palm-based biometric authentication system incorporated with basic authentication system (username/password in order to check the identity and authenticity of the examinee.

  7. Security considerations and recommendations in computer-based testing.

    Science.gov (United States)

    Al-Saleem, Saleh M; Ullah, Hanif

    2014-01-01

    Many organizations and institutions around the globe are moving or planning to move their paper-and-pencil based testing to computer-based testing (CBT). However, this conversion will not be the best option for all kinds of exams and it will require significant resources. These resources may include the preparation of item banks, methods for test delivery, procedures for test administration, and last but not least test security. Security aspects may include but are not limited to the identification and authentication of examinee, the risks that are associated with cheating on the exam, and the procedures related to test delivery to the examinee. This paper will mainly investigate the security considerations associated with CBT and will provide some recommendations for the security of these kinds of tests. We will also propose a palm-based biometric authentication system incorporated with basic authentication system (username/password) in order to check the identity and authenticity of the examinee.

  8. Significance tests for functional data with complex dependence structure

    KAUST Repository

    Staicu, Ana-Maria; Lahiri, Soumen N.; Carroll, Raymond J.

    2015-01-01

    We propose an L (2)-norm based global testing procedure for the null hypothesis that multiple group mean functions are equal, for functional data with complex dependence structure. Specifically, we consider the setting of functional data with a

  9. Safety Testing of Ammonium Nitrate Based Mixtures

    Science.gov (United States)

    Phillips, Jason; Lappo, Karmen; Phelan, James; Peterson, Nathan; Gilbert, Don

    2013-06-01

    Ammonium nitrate (AN)/ammonium nitrate based explosives have a lengthy documented history of use by adversaries in acts of terror. While historical research has been conducted on AN-based explosive mixtures, it has primarily focused on detonation performance while varying the oxygen balance between the oxidizer and fuel components. Similarly, historical safety data on these materials is often lacking in pertinent details such as specific fuel type, particle size parameters, oxidizer form, etc. A variety of AN-based fuel-oxidizer mixtures were tested for small-scale sensitivity in preparation for large-scale testing. Current efforts focus on maintaining a zero oxygen-balance (a stoichiometric ratio for active chemical participants) while varying factors such as charge geometry, oxidizer form, particle size, and inert diluent ratios. Small-scale safety testing was conducted on various mixtures and fuels. It was found that ESD sensitivity is significantly affected by particle size, while this is less so for impact and friction. Thermal testing is in progress to evaluate hazards that may be experienced during large-scale testing.

  10. Reliability analysis of a gravity-based foundation for wind turbines

    DEFF Research Database (Denmark)

    Vahdatirad, Mohammad Javad; Griffiths, D. V.; Andersen, Lars Vabbersgaard

    2014-01-01

    its bearing capacity, is used to calibrate a code-based design procedure. A probabilistic finite element model is developed to analyze the bearing capacity of a surface footing on soil with spatially variable undrained strength. Monte Carlo simulation is combined with a re-sampling simulation...

  11. Shaping Up the Practice of Null Hypothesis Significance Testing.

    Science.gov (United States)

    Wainer, Howard; Robinson, Daniel H.

    2003-01-01

    Discusses criticisms of null hypothesis significance testing (NHST), suggesting that historical use of NHST was reasonable, and current users should read Sir Ronald Fisher's applied work. Notes that modifications to NHST and interpretations of its outcomes might better suit the needs of modern science. Concludes that NHST is most often useful as…

  12. Significance of acceleration period in a dynamic strength testing study.

    Science.gov (United States)

    Chen, W L; Su, F C; Chou, Y L

    1994-06-01

    The acceleration period that occurs during isokinetic tests may provide valuable information regarding neuromuscular readiness to produce maximal contraction. The purpose of this study was to collect the normative data of acceleration time during isokinetic knee testing, to calculate the acceleration work (Wacc), and to determine the errors (ERexp, ERwork, ERpower) due to ignoring Wacc during explosiveness, total work, and average power measurements. Seven male and 13 female subjects attended the test by using the Cybex 325 system and electronic stroboscope machine for 10 testing speeds (30-300 degrees/sec). A three-way ANOVA was used to assess gender, direction, and speed factors on acceleration time, Wacc, and errors. The results indicated that acceleration time was significantly affected by speed and direction; Wacc and ERexp by speed, direction, and gender; and ERwork and ERpower by speed and gender. The errors appeared to increase when testing the female subjects, during the knee flexion test, or when speed increased. To increase validity in clinical testing, it is important to consider the acceleration phase effect, especially in higher velocity isokinetic testing or for weaker muscle groups.

  13. Significance tests for functional data with complex dependence structure.

    Science.gov (United States)

    Staicu, Ana-Maria; Lahiri, Soumen N; Carroll, Raymond J

    2015-01-01

    We propose an L 2 -norm based global testing procedure for the null hypothesis that multiple group mean functions are equal, for functional data with complex dependence structure. Specifically, we consider the setting of functional data with a multilevel structure of the form groups-clusters or subjects-units, where the unit-level profiles are spatially correlated within the cluster, and the cluster-level data are independent. Orthogonal series expansions are used to approximate the group mean functions and the test statistic is estimated using the basis coefficients. The asymptotic null distribution of the test statistic is developed, under mild regularity conditions. To our knowledge this is the first work that studies hypothesis testing, when data have such complex multilevel functional and spatial structure. Two small-sample alternatives, including a novel block bootstrap for functional data, are proposed, and their performance is examined in simulation studies. The paper concludes with an illustration of a motivating experiment.

  14. Significance tests for functional data with complex dependence structure

    KAUST Repository

    Staicu, Ana-Maria

    2015-01-01

    We propose an L (2)-norm based global testing procedure for the null hypothesis that multiple group mean functions are equal, for functional data with complex dependence structure. Specifically, we consider the setting of functional data with a multilevel structure of the form groups-clusters or subjects-units, where the unit-level profiles are spatially correlated within the cluster, and the cluster-level data are independent. Orthogonal series expansions are used to approximate the group mean functions and the test statistic is estimated using the basis coefficients. The asymptotic null distribution of the test statistic is developed, under mild regularity conditions. To our knowledge this is the first work that studies hypothesis testing, when data have such complex multilevel functional and spatial structure. Two small-sample alternatives, including a novel block bootstrap for functional data, are proposed, and their performance is examined in simulation studies. The paper concludes with an illustration of a motivating experiment.

  15. P-Value, a true test of statistical significance? a cautionary note ...

    African Journals Online (AJOL)

    While it's not the intention of the founders of significance testing and hypothesis testing to have the two ideas intertwined as if they are complementary, the inconvenient marriage of the two practices into one coherent, convenient, incontrovertible and misinterpreted practice has dotted our standard statistics textbooks and ...

  16. Hypothesis Tests for Bernoulli Experiments: Ordering the Sample Space by Bayes Factors and Using Adaptive Significance Levels for Decisions

    Directory of Open Access Journals (Sweden)

    Carlos A. de B. Pereira

    2017-12-01

    Full Text Available The main objective of this paper is to find the relation between the adaptive significance level presented here and the sample size. We statisticians know of the inconsistency, or paradox, in the current classical tests of significance that are based on p-value statistics that are compared to the canonical significance levels (10%, 5%, and 1%: “Raise the sample to reject the null hypothesis” is the recommendation of some ill-advised scientists! This paper will show that it is possible to eliminate this problem of significance tests. We present here the beginning of a larger research project. The intention is to extend its use to more complex applications such as survival analysis, reliability tests, and other areas. The main tools used here are the Bayes factor and the extended Neyman–Pearson Lemma.

  17. Efficient Test Application for Core-Based Systems Using Twisted-Ring Counters

    OpenAIRE

    Anshuman Chandra; Krishnendu Chakrabarty; Mark C. Hansen

    2001-01-01

    We present novel test set encoding and pattern decompression methods for core-based systems. These are based on the use of twisted-ring counters and offer a number of important advantages–significant test compression (over 10X in many cases), less tester memory and reduced testing time, the ability to use a slow tester without compromising test quality or testing time, and no performance degradation for the core under test. Surprisingly, the encoded test sets obtained from partially-specified...

  18. THE SMALL BUT SIGNIFICANT AND NONTRANSITORY INCREASE IN PRICES (SSNIP TEST

    Directory of Open Access Journals (Sweden)

    Liviana Niminet

    2008-12-01

    Full Text Available The Small but Significant Nontransitory Increase in Price Test was designed to define the relevant market by concepts of product, geographical area and time. This test, also called the ,,hypothetical monopolistic test” is the subject of many researches both economical and legal as it deals with economic concepts as well as with legally aspects.

  19. Significance tests in mutagen screening: another method considering historical control frequencies

    International Nuclear Information System (INIS)

    Traut, H.

    1983-01-01

    Recently a method has been devised for testing the significance of the difference between a mutation frequency observed after chemical treatment or iradiation and the historical ('stable') control frequency. Another test is proposed serving the same purpose. Both methods are applied to several examples (experimental frequency versus historical control frequency). The results (P values) obtained agree well. (author)

  20. Measures of precision for dissimilarity-based multivariate analysis of ecological communities.

    Science.gov (United States)

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. © 2014 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.

  1. Risk-based inservice testing program modifications at Palo Verde nuclear generating station

    International Nuclear Information System (INIS)

    Knauf, S.; Lindenlaub, B.; Linthicum, R.

    1996-01-01

    Arizona Public Service Company (APS) is investigating changes to the Palo Verde Inservice Testing (IST) Program that are intended to result in the reduction of the required test frequency for various valves in the American Society of Mechanical Engineers (ASME) Section XI IST program. The analytical techniques employed to select candidate valves and to demonstrate that these frequency reductions are acceptable are risk based. The results of the Palo Verde probabilistic risk assessment (PRA), updated in June 1994, and the risk significant determination performed as part of the implementation efforts for 10 CFR 50.65 (the maintenance rule) were used to select candidate valves for extended test intervals. Additional component level evaluations were conducted by an 'expert panel.' The decision to pursue these changes was facilitated by the ASME Risk-Based Inservice Testing Research Task Force for which Palo Verde is participating as a pilot plant. The NRC's increasing acceptance of cost beneficial licensing actions and risk-based submittals also provided incentive to seek these changes. Arizona Public Service is pursuing the risk-based IST program modification in order to reduce the unnecessary regulatory burden of the IST program through qualitative and quantitative analysis consistent with maintaining a high level of plant safety. The objectives of this project at Palo Verde are as follows: (1) Apply risk-based technologies to IST components to determine their risk significance (i.e., high or low). (2) Apply a combination of deterministic and risk-based methods to determine appropriate testing requirements for IST components including improvement of testing methods and frequency intervals for high-risk significant components. (3) Apply risk-based technologies to high-risk significant components identified by the open-quotes expert panelclose quotes and outside of the IST program to determine whether additional testing requirements are appropriate

  2. Risk-based inservice testing program modifications at Palo Verde nuclear generating station

    Energy Technology Data Exchange (ETDEWEB)

    Knauf, S.; Lindenlaub, B.; Linthicum, R.

    1996-12-01

    Arizona Public Service Company (APS) is investigating changes to the Palo Verde Inservice Testing (IST) Program that are intended to result in the reduction of the required test frequency for various valves in the American Society of Mechanical Engineers (ASME) Section XI IST program. The analytical techniques employed to select candidate valves and to demonstrate that these frequency reductions are acceptable are risk based. The results of the Palo Verde probabilistic risk assessment (PRA), updated in June 1994, and the risk significant determination performed as part of the implementation efforts for 10 CFR 50.65 (the maintenance rule) were used to select candidate valves for extended test intervals. Additional component level evaluations were conducted by an `expert panel.` The decision to pursue these changes was facilitated by the ASME Risk-Based Inservice Testing Research Task Force for which Palo Verde is participating as a pilot plant. The NRC`s increasing acceptance of cost beneficial licensing actions and risk-based submittals also provided incentive to seek these changes. Arizona Public Service is pursuing the risk-based IST program modification in order to reduce the unnecessary regulatory burden of the IST program through qualitative and quantitative analysis consistent with maintaining a high level of plant safety. The objectives of this project at Palo Verde are as follows: (1) Apply risk-based technologies to IST components to determine their risk significance (i.e., high or low). (2) Apply a combination of deterministic and risk-based methods to determine appropriate testing requirements for IST components including improvement of testing methods and frequency intervals for high-risk significant components. (3) Apply risk-based technologies to high-risk significant components identified by the {open_quotes}expert panel{close_quotes} and outside of the IST program to determine whether additional testing requirements are appropriate.

  3. Adult age differences in perceptually based, but not conceptually based implicit tests of memory.

    Science.gov (United States)

    Small, B J; Hultsch, D F; Masson, M E

    1995-05-01

    Implicit tests of memory assess the influence of recent experience without requiring awareness of remembering. Evidence concerning age differences on implicit tests of memory suggests small age differences in favor of younger adults. However, the majority of research examining this issue has relied upon perceptually based implicit tests. Recently, a second type of implicit test, one that relies upon conceptually based processes, has been identified. The pattern of age differences on this second type of implicit test is less clear. In the present study, we examined the pattern of age differences on one conceptually based (fact completion) and one perceptually based (stem completion) implicit test of memory, as well as two explicit tests of memory (fact and word recall). Tasks were administered to 403 adults from three age groups (19-34 years, 58-73 years, 74-89 years). Significant age differences in favor of the young were found on stem completion but not fact completion. Age differences were present for both word and fast recall. Correlational analyses examining the relationship of memory performance to other cognitive variables indicated that the implicit tests were supported by different components than the explicit tests, as well as being different from each other.

  4. DENBRAN: A basic program for a significance test for multivariate normality of clusters from branching patterns in dendrograms

    Science.gov (United States)

    Sneath, P. H. A.

    A BASIC program is presented for significance tests to determine whether a dendrogram is derived from clustering of points that belong to a single multivariate normal distribution. The significance tests are based on statistics of the Kolmogorov—Smirnov type, obtained by comparing the observed cumulative graph of branch levels with a graph for the hypothesis of multivariate normality. The program also permits testing whether the dendrogram could be from a cluster of lower dimensionality due to character correlations. The program makes provision for three similarity coefficients, (1) Euclidean distances, (2) squared Euclidean distances, and (3) Simple Matching Coefficients, and for five cluster methods (1) WPGMA, (2) UPGMA, (3) Single Linkage (or Minimum Spanning Trees), (4) Complete Linkage, and (5) Ward's Increase in Sums of Squares. The program is entitled DENBRAN.

  5. Semantics-based Automated Web Testing

    Directory of Open Access Journals (Sweden)

    Hai-Feng Guo

    2015-08-01

    Full Text Available We present TAO, a software testing tool performing automated test and oracle generation based on a semantic approach. TAO entangles grammar-based test generation with automated semantics evaluation using a denotational semantics framework. We show how TAO can be incorporated with the Selenium automation tool for automated web testing, and how TAO can be further extended to support automated delta debugging, where a failing web test script can be systematically reduced based on grammar-directed strategies. A real-life parking website is adopted throughout the paper to demonstrate the effectivity of our semantics-based web testing approach.

  6. Application of risk-based methods to inservice testing of check valves

    Energy Technology Data Exchange (ETDEWEB)

    Closky, N.B.; Balkey, K.R.; McAllister, W.J. [and others

    1996-12-01

    Research efforts have been underway in the American Society of Mechanical Engineers (ASME) and industry to define appropriate methods for the application of risk-based technology in the development of inservice testing (IST) programs for pumps and valves in nuclear steam supply systems. This paper discusses a pilot application of these methods to the inservice testing of check valves in the emergency core cooling system of Georgia Power`s Vogtle nuclear power station. The results of the probabilistic safety assessment (PSA) are used to divide the check valves into risk-significant and less-risk-significant groups. This information is reviewed by a plant expert panel along with the consideration of appropriate deterministic insights to finally categorize the check valves into more safety-significant and less safety-significant component groups. All of the more safety-significant check valves are further evaluated in detail using a failure modes and causes analysis (FMCA) to assist in defining effective IST strategies. A template has been designed to evaluate how effective current and emerging tests for check valves are in detecting failures or in finding significant conditions that are precursors to failure for the likely failure causes. This information is then used to design and evaluate appropriate IST strategies that consider both the test method and frequency. A few of the less safety-significant check valves are also evaluated using this process since differences exist in check valve design, function, and operating conditions. Appropriate test strategies are selected for each check valve that has been evaluated based on safety and cost considerations. Test strategies are inferred from this information for the other check valves based on similar check valve conditions. Sensitivity studies are performed using the PSA model to arrive at an overall IST program that maintains or enhances safety at the lowest achievable cost.

  7. Significance of high level test data in piping design

    International Nuclear Information System (INIS)

    McLean, J.L.; Bitner, J.L.

    1991-01-01

    During the 1980's the piping technical community in the U.S. initiated a series of research activities aimed at reducing the conservatism inherent in nuclear piping design. One of these activities was directed at the application of the ASME Code rules to the design of piping subjected to dynamic loads. This paper surveys the test data obtained from three groups in the U.S. and none in the U.K., and correlates the findings as they relate to the failure modes of piping subjected to seismic loads. The failure modes experienced as the result of testing at dynamic loads significantly in excess of anticipated loads specified for any of the ASME Code service levels are discussed. A recommendation is presented for modifying the Code piping rules to reduce the conservatism inherent in seismic design

  8. Adaptive Tests of Significance Using Permutations of Residuals with R and SAS

    CERN Document Server

    O'Gorman, Thomas W

    2012-01-01

    Provides the tools needed to successfully perform adaptive tests across a broad range of datasets Adaptive Tests of Significance Using Permutations of Residuals with R and SAS illustrates the power of adaptive tests and showcases their ability to adjust the testing method to suit a particular set of data. The book utilizes state-of-the-art software to demonstrate the practicality and benefits for data analysis in various fields of study. Beginning with an introduction, the book moves on to explore the underlying concepts of adaptive tests, including:Smoothing methods and normalizing transforma

  9. A Novel Bearing Fault Diagnosis Method Based on Gaussian Restricted Boltzmann Machine

    Directory of Open Access Journals (Sweden)

    Xiao-hui He

    2016-01-01

    Full Text Available To realize the fault diagnosis of bearing effectively, this paper presents a novel bearing fault diagnosis method based on Gaussian restricted Boltzmann machine (Gaussian RBM. Vibration signals are firstly resampled to the same equivalent speed. Subsequently, the envelope spectrums of the resampled data are used directly as the feature vectors to represent the fault types of bearing. Finally, in order to deal with the high-dimensional feature vectors based on envelope spectrum, a classifier model based on Gaussian RBM is applied. Gaussian RBM has the ability to provide a closed-form representation of the distribution underlying the training data, and it is very convenient for modeling high-dimensional real-valued data. Experiments on 10 different data sets verify the performance of the proposed method. The superiority of Gaussian RBM classifier is also confirmed by comparing with other classifiers, such as extreme learning machine, support vector machine, and deep belief network. The robustness of the proposed method is also studied in this paper. It can be concluded that the proposed method can realize the bearing fault diagnosis accurately and effectively.

  10. Can the Bruckner test be used as a rapid screening test to detect significant refractive errors in children?

    Directory of Open Access Journals (Sweden)

    Kothari Mihir

    2007-01-01

    Full Text Available Purpose: To assess the suitability of Brückner test as a screening test to detect significant refractive errors in children. Materials and Methods: A pediatric ophthalmologist prospectively observed the size and location of pupillary crescent on Brückner test as hyperopic, myopic or astigmatic. This was compared with the cycloplegic refraction. Detailed ophthalmic examination was done for all. Sensitivity, specificity, positive predictive value and negative predictive value of Brückner test were determined for the defined cutoff levels of ametropia. Results: Ninety-six subjects were examined. Mean age was 8.6 years (range 1 to 16 years. Brückner test could be completed for all; the time taken to complete this test was 10 seconds per subject. The ophthalmologist identified 131 eyes as ametropic, 61 as emmetropic. The Brückner test had sensitivity 91%, specificity 72.8%, positive predictive value 85.5% and negative predictive value 83.6%. Of 10 false negatives four had compound hypermetropic astigmatism and three had myopia. Conclusions: Brückner test can be used to rapidly screen the children for significant refractive errors. The potential benefits from such use may be maximized if programs use the test with lower crescent measurement cutoffs, a crescent measurement ruler and a distance fixation target.

  11. Bootstrap Determination of the Co-integration Rank in Heteroskedastic VAR Models

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A.M.Robert

    In a recent paper Cavaliere et al. (2012) develop bootstrap implementations of the (pseudo-) likelihood ratio [PLR] co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates...... of the underlying VAR model which obtain under the reduced rank null hypothesis. They propose methods based on an i.i.d. bootstrap re-sampling scheme and establish the validity of their proposed bootstrap procedures in the context of a co-integrated VAR model with i.i.d. innovations. In this paper we investigate...... the properties of their bootstrap procedures, together with analogous procedures based on a wild bootstrap re-sampling scheme, when time-varying behaviour is present in either the conditional or unconditional variance of the innovations. We show that the bootstrap PLR tests are asymptotically correctly sized and...

  12. Bootstrap Determination of the Co-Integration Rank in Heteroskedastic VAR Models

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A. M. Robert

    In a recent paper Cavaliere et al. (2012) develop bootstrap implementations of the (pseudo-) likelihood ratio [PLR] co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates...... of the underlying VAR model which obtain under the reduced rank null hypothesis. They propose methods based on an i.i.d. bootstrap re-sampling scheme and establish the validity of their proposed bootstrap procedures in the context of a co-integrated VAR model with i.i.d. innovations. In this paper we investigate...... the properties of their bootstrap procedures, together with analogous procedures based on a wild bootstrap re-sampling scheme, when time-varying behaviour is present in either the conditional or unconditional variance of the innovations. We show that the bootstrap PLR tests are asymptotically correctly sized and...

  13. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    Science.gov (United States)

    Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne

    This paper details a collaboration between Aalborg University and Novo Nordiskin developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input an UML Statemachine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test case into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.

  14. Prognostic significance of silent myocardial ischemia on a thallium stress test

    International Nuclear Information System (INIS)

    Heller, L.I.; Tresgallo, M.; Sciacca, R.R.; Blood, D.K.; Seldin, D.W.; Johnson, L.L.

    1990-01-01

    The clinical significance of silent ischemia is not fully known. The purpose of this study was to determine whether the presence or absence of angina during a thallium stress test positive for ischemia was independently predictive of an adverse outcome. Two hundred thirty-four consecutive patients with ischemia on a thallium stress test were identified. Ischemia was defined as the presence of defect(s) on the immediate postexercise scans not in the distribution of prior infarctions that redistributed on 4-hour scans. During the test 129 patients had angina, defined as characteristic neck, jaw, arm, back or chest discomfort, while the remaining 105 patients had no angina. Follow-up ranged from 2 to 8.2 years (mean 5.2 +/- 2.1) and was successfully obtained in 156 patients. Eighty-two of the 156 patients had angina (group A) and 74 had silent ischemia (group S). Group A patients were significantly older (62 +/- 8 vs 59 +/- 8 years, p less than 0.05). There was no significant difference between the 2 groups in terms of sex, history of prior infarction or presence of left main/3-vessel disease. A larger percentage of patients in group A were receiving beta blockers (60 vs 41%, p less than 0.05) and nitrates (52 vs 36%, 0.05 less than p less than 0.10). There was a large number of cardiac events (myocardial infarction, revascularization and death) in both groups (37 of 82 [45%] in group A; 28 of 72 [38%] in group S) but no statistically significant difference between the groups. Similarly, life-table analysis revealed no difference in mortality between the 2 groups

  15. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.

  16. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques are available for many years, there has been little approaches that allow for specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security testing (MBST is a relatively new field and especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2-project DIAMONDS.

  17. Communication Optimizations for a Wireless Distributed Prognostic Framework

    Science.gov (United States)

    Saha, Sankalita; Saha, Bhaskar; Goebel, Kai

    2009-01-01

    Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics - particle filters. In spite of being computation and memory intensive, particle filters lend well to distributed implementation except for one significant step - resampling. We propose new resampling scheme called parameterized resampling that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes is also presented. A battery health management system is used as a target application. A new resampling scheme for distributed implementation of particle filters has been discussed in this paper. Analysis and comparison of this new scheme with existing resampling schemes in the context for minimizing communication overhead have also been discussed. Our proposed new resampling scheme performs significantly better compared to other schemes by attempting to reduce both the communication message length as well as number total communication messages exchanged while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme in the overall computational performance of the whole system as well as full implementation of the new schemes on the Sun SPOT devices. Exploring different network architectures for efficient communication is an importance future research direction as well.

  18. Ensemble-based prediction of RNA secondary structures.

    Science.gov (United States)

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

    Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that as been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between

  19. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  20. Compare diagnostic tests using transformation-invariant smoothed ROC curves⋆

    Science.gov (United States)

    Tang, Liansheng; Du, Pang; Wu, Chengqing

    2012-01-01

    Receiver operating characteristic (ROC) curve, plotting true positive rates against false positive rates as threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of test results. And it is often a curve with certain level of smoothness when test results from the diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One of the exceptions is Du and Tang (2009) which applies certain monotone spline regression procedure to empirical ROC estimates. However, their method does not consider the inherent correlations between empirical ROC estimates. This makes the derivation of the asymptotic properties very difficult. In this paper we propose a penalized weighted least square estimation method, which incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all the aforementioned properties, and we show that it is also consistent. Then a resampling approach is used to extend our method for comparisons of two or more diagnostic tests. Our simulations show a significantly improved performance over the existing method, especially for steep ROC curves. We then apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one. PMID:22639484

  1. Transfer Entropy for Nonparametric Granger Causality Detection : An Evaluation of Different Resampling Methods

    NARCIS (Netherlands)

    Diks, C.; Fang, H.

    2017-01-01

    The information-theoretical concept transfer entropy is an ideal measure for detecting conditional independence, or Granger causality in a time series setting. The recent literature indeed witnesses an increased interest in applications of entropy-based tests in this direction. However, those tests

  2. Conducting tests for statistically significant differences using forest inventory data

    Science.gov (United States)

    James A. Westfall; Scott A. Pugh; John W. Coulston

    2013-01-01

    Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...

  3. The relationship between academic self-concept, intrinsic motivation, test anxiety, and academic achievement among nursing students: mediating and moderating effects.

    Science.gov (United States)

    Khalaila, Rabia

    2015-03-01

    The impact of cognitive factors on academic achievement is well documented. However, little is known about the mediating and moderating effects of non-cognitive, motivational and situational factors on academic achievement among nursing students. The aim of this study is to explore the direct and/or indirect effects of academic self-concept on academic achievement, and examine whether intrinsic motivation moderates the negative effect of test anxiety on academic achievement. This descriptive-correlational study was carried out on a convenience sample of 170 undergraduate nursing students, in an academic college in northern Israel. Academic motivation, academic self-concept and test anxiety scales were used as measuring instruments. Bootstrapping with resampling strategies was used for testing multiple mediators' model and examining the moderator effect. A higher self-concept was found to be directly related to greater academic achievement. Test anxiety and intrinsic motivation were found to be significant mediators in the relationship between self-concept and academic achievement. In addition, intrinsic motivation significantly moderated the negative effect of test anxiety on academic achievement. The results suggested that institutions should pay more attention to the enhancement of motivational factors (e.g., self-concept and motivation) and alleviate the negative impact of situational factors (e.g., test anxiety) when offering psycho-educational interventions designed to improve nursing students' academic achievements. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Model-based security testing

    OpenAIRE

    Schieferdecker, Ina; Großmann, Jürgen; Schneider, Martin

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques are available for many years, there has been little approaches that allow for specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security...

  5. Methodology to identify risk-significant components for inservice inspection and testing

    International Nuclear Information System (INIS)

    Anderson, M.T.; Hartley, R.S.; Jones, J.L. Jr.; Kido, C.; Phillips, J.H.

    1992-08-01

    Periodic inspection and testing of vital system components should be performed to ensure the safe and reliable operation of Department of Energy (DOE) nuclear processing facilities. Probabilistic techniques may be used to help identify and rank components by their relative risk. A risk-based ranking would allow varied DOE sites to implement inspection and testing programs in an effective and cost-efficient manner. This report describes a methodology that can be used to rank components, while addressing multiple risk issues

  6. Analysing Test-Takers’ Views on a Computer-Based Speaking Test

    Directory of Open Access Journals (Sweden)

    Marian Amengual-Pizarro

    2017-11-01

    Full Text Available This study examines test-takers’ views on a computer-delivered speaking test in order to investigate the aspects they consider most relevant in technology-based oral assessment, and to explore the main advantages and disadvantages computer-based tests may offer as compared to face-to-face speaking tests. A small-scale open questionnaire was administered to 80 test-takers who took the APTIS speaking test at the Universidad de Alcalá in April 2016. Results reveal that examinees believe computer-based tests provide a valid measure of oral competence in English and are considered to be an adequate method for the assessment of speaking. Interestingly, the data suggest that personal characteristics of test-takers seem to play a key role in deciding upon the most suitable and reliable delivery mode.

  7. Automation and Evaluation of the SOWH Test with SOWHAT.

    Science.gov (United States)

    Church, Samuel H; Ryan, Joseph F; Dunn, Casey W

    2015-11-01

    The Swofford-Olsen-Waddell-Hillis (SOWH) test evaluates statistical support for incongruent phylogenetic topologies. It is commonly applied to determine if the maximum likelihood tree in a phylogenetic analysis is significantly different than an alternative hypothesis. The SOWH test compares the observed difference in log-likelihood between two topologies to a null distribution of differences in log-likelihood generated by parametric resampling. The test is a well-established phylogenetic method for topology testing, but it is sensitive to model misspecification, it is computationally burdensome to perform, and its implementation requires the investigator to make several decisions that each have the potential to affect the outcome of the test. We analyzed the effects of multiple factors using seven data sets to which the SOWH test was previously applied. These factors include a number of sample replicates, likelihood software, the introduction of gaps to simulated data, the use of distinct models of evolution for data simulation and likelihood inference, and a suggested test correction wherein an unresolved "zero-constrained" tree is used to simulate sequence data. To facilitate these analyses and future applications of the SOWH test, we wrote SOWHAT, a program that automates the SOWH test. We find that inadequate bootstrap sampling can change the outcome of the SOWH test. The results also show that using a zero-constrained tree for data simulation can result in a wider null distribution and higher p-values, but does not change the outcome of the SOWH test for most of the data sets tested here. These results will help others implement and evaluate the SOWH test and allow us to provide recommendations for future applications of the SOWH test. SOWHAT is available for download from https://github.com/josephryan/SOWHAT. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  8. Wayside Bearing Fault Diagnosis Based on a Data-Driven Doppler Effect Eliminator and Transient Model Analysis

    Science.gov (United States)

    Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang

    2014-01-01

    A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects. PMID:24803197

  9. Bayesian target tracking based on particle filter

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    For being able to deal with the nonlinear or non-Gaussian problems, particle filters have been studied by many researchers. Based on particle filter, the extended Kalman filter (EKF) proposal function is applied to Bayesian target tracking. Markov chain Monte Carlo (MCMC) method, the resampling step, etc novel techniques are also introduced into Bayesian target tracking. And the simulation results confirm the improved particle filter with these techniques outperforms the basic one.

  10. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significant tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…

  11. IRT-based test construction

    OpenAIRE

    van der Linden, Willem J.; Theunissen, T.J.J.M.; Boekkooi-Timminga, Ellen; Kelderman, Henk

    1987-01-01

    Four discussions of test construction based on item response theory (IRT) are presented. The first discussion, "Test Design as Model Building in Mathematical Programming" (T.J.J.M. Theunissen), presents test design as a decision process under certainty. A natural way of modeling this process leads to mathematical programming. General models of test construction are discussed, with information about algorithms and heuristics; ideas about the analysis and refinement of test constraints are also...

  12. On the relation between Kaiser–Bessel blob and tube of response based modelling of the system matrix in iterative PET image reconstruction

    International Nuclear Information System (INIS)

    Lougovski, Alexandr; Hofheinz, Frank; Maus, Jens; Schramm, Georg; Van den Hoff, Jörg

    2015-01-01

    We investigate the question of how the blob approach is related to tube of response based modelling of the system matrix. In our model, the tube of response (TOR) is approximated as a cylinder with constant density (TOR-CD) and the cubic voxels are replaced by spheres. Here we investigate a modification of the TOR model that makes it effectively equivalent to the blob model, which models the intersection of lines of response (LORs) with radially variant basis functions (‘blobs’) replacing the cubic voxels. Implications of the achieved equivalence regarding the necessity of final resampling in blob-based reconstructions are considered. We extended TOR-CD to a variable density tube model (TOR-VD) that yields a weighting function (defining all system matrix elements) which is essentially identical to that of the blob model. The variable density of TOR-VD was modelled by a Gaussian and a Kaiser–Bessel function, respectively. The free parameters of both model functions were determined by fitting the corresponding weighting function to the weighting function of the blob model. TOR-CD and the best-fitting TOR-VD were compared to the blob model with a final resampling step (BLOB-RS) and without resampling (BLOB-NRS) in phantom studies. For three different contrast ratios and two different voxel sizes, resolution noise curves were generated. TOR-VD and BLOB-NRS lead to nearly identical images for all investigated contrast ratios and voxel sizes. Both models showed strong Gibbs artefacts at 4 mm voxel size, while at 2 mm voxel size there were no Gibbs artefacts visible. The spatial resolution was similar to the resolution with TOR-CD in all cases. The resampling step removed most of the Gibbs artefacts and reduced the noise level but also degraded the spatial resolution substantially. We conclude that the blob model can be considered just as a special case of a TOR-based reconstruction. The latter approach provides a more natural description of the detection process

  13. Agonist anti-GITR antibody significantly enhances the therapeutic efficacy of Listeria monocytogenes-based immunotherapy.

    Science.gov (United States)

    Shrimali, Rajeev; Ahmad, Shamim; Berrong, Zuzana; Okoev, Grigori; Matevosyan, Adelaida; Razavi, Ghazaleh Shoja E; Petit, Robert; Gupta, Seema; Mkrtichyan, Mikayel; Khleif, Samir N

    2017-08-15

    We previously demonstrated that in addition to generating an antigen-specific immune response, Listeria monocytogenes (Lm)-based immunotherapy significantly reduces the ratio of regulatory T cells (Tregs)/CD4 + and myeloid-derived suppressor cells (MDSCs) in the tumor microenvironment. Since Lm-based immunotherapy is able to inhibit the immune suppressive environment, we hypothesized that combining this treatment with agonist antibody to a co-stimulatory receptor that would further boost the effector arm of immunity will result in significant improvement of anti-tumor efficacy of treatment. Here we tested the immune and therapeutic efficacy of Listeria-based immunotherapy combination with agonist antibody to glucocorticoid-induced tumor necrosis factor receptor-related protein (GITR) in TC-1 mouse tumor model. We evaluated the potency of combination on tumor growth and survival of treated animals and profiled tumor microenvironment for effector and suppressor cell populations. We demonstrate that combination of Listeria-based immunotherapy with agonist antibody to GITR synergizes to improve immune and therapeutic efficacy of treatment in a mouse tumor model. We show that this combinational treatment leads to significant inhibition of tumor-growth, prolongs survival and leads to complete regression of established tumors in 60% of treated animals. We determined that this therapeutic benefit of combinational treatment is due to a significant increase in tumor infiltrating effector CD4 + and CD8 + T cells along with a decrease of inhibitory cells. To our knowledge, this is the first study that exploits Lm-based immunotherapy combined with agonist anti-GITR antibody as a potent treatment strategy that simultaneously targets both the effector and suppressor arms of the immune system, leading to significantly improved anti-tumor efficacy. We believe that our findings depicted in this manuscript provide a promising and translatable strategy that can enhance the overall

  14. The Need for Nuance in the Null Hypothesis Significance Testing Debate

    Science.gov (United States)

    Häggström, Olle

    2017-01-01

    Null hypothesis significance testing (NHST) provides an important statistical toolbox, but there are a number of ways in which it is often abused and misinterpreted, with bad consequences for the reliability and progress of science. Parts of contemporary NHST debate, especially in the psychological sciences, is reviewed, and a suggestion is made…

  15. Effects of computer-based immediate feedback on foreign language listening comprehension and test-associated anxiety.

    Science.gov (United States)

    Lee, Shu-Ping; Su, Hui-Kai; Lee, Shin-Da

    2012-06-01

    This study investigated the effects of immediate feedback on computer-based foreign language listening comprehension tests and on intrapersonal test-associated anxiety in 72 English major college students at a Taiwanese University. Foreign language listening comprehension of computer-based tests designed by MOODLE, a dynamic e-learning environment, with or without immediate feedback together with the state-trait anxiety inventory (STAI) were tested and repeated after one week. The analysis indicated that immediate feedback during testing caused significantly higher anxiety and resulted in significantly higher listening scores than in the control group, which had no feedback. However, repeated feedback did not affect the test anxiety and listening scores. Computer-based immediate feedback did not lower debilitating effects of anxiety but enhanced students' intrapersonal eustress-like anxiety and probably improved their attention during listening tests. Computer-based tests with immediate feedback might help foreign language learners to increase attention in foreign language listening comprehension.

  16. Cloud-based solution to identify statistically significant MS peaks differentiating sample categories.

    Science.gov (United States)

    Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B

    2013-03-23

    Mass spectrometry (MS) has evolved to become the primary high throughput tool for proteomics based biomarker discovery. Until now, multiple challenges in protein MS data analysis remain: large-scale and complex data set management; MS peak identification, indexing; and high dimensional peak differential analysis with the concurrent statistical tests based false discovery rate (FDR). "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets to identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution, which provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. Presented web application supplies large scale MS data online uploading and analysis with a simple user interface. This bioinformatic tool will facilitate the discovery of the potential protein biomarkers using MS.

  17. Comparing Science Virtual and Paper-Based Test to Measure Students’ Critical Thinking based on VAK Learning Style Model

    Science.gov (United States)

    Rosyidah, T. H.; Firman, H.; Rusyati, L.

    2017-02-01

    This research was comparing virtual and paper-based test to measure students’ critical thinking based on VAK (Visual-Auditory-Kynesthetic) learning style model. Quasi experiment method with one group post-test only design is applied in this research in order to analyze the data. There was 40 eight grade students at one of public junior high school in Bandung becoming the sample in this research. The quantitative data was obtained through 26 questions about living thing and environment sustainability which is constructed based on the eight elements of critical thinking and be provided in the form of virtual and paper-based test. Based on analysis of the result, it is shown that within visual, auditory, and kinesthetic were not significantly difference in virtual and paper-based test. Besides, all result was supported by quistionnaire about students’ respond on virtual test which shows 3.47 in the scale of 4. Means that student showed positive respond in all aspet measured, which are interest, impression, and expectation.

  18. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    OpenAIRE

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set independent p-values. However, it is very obviously that Fisher’s statistic is more sensitive to smaller p-values than to larger p-value and a small p-value may overrule the other p-values and decide the test result. This is, in some cases, viewed as a flaw. In order to overcome this flaw and improve the power of the test, the joint tail probability of a set p-values is proposed as a ...

  19. [Clinical significance of the tests used in the diagnosis of pancreatic diseases].

    Science.gov (United States)

    Lenti, G; Emanuelli, G

    1976-11-14

    Different methods available for investigating patients for pancreatic disease are discussed. They first include measurement of pancreatic enzymes in biological fluids. Basal amylase and/or lipase in blood are truly diagnostic in acute pancreatitis but their utility is low in chronic pancreatic diseases. Evocative tests have been performed to increase the sensitivity of blood enzyme measurement. The procedure is based on enzyme determination following administration of pancreozymin and secretin, and offers a valuable aid in diagnosis of chronic pancreatitis and cancer of the pancreas. They are capable of discerning pancreatic lesions but are not really discriminatory because similar changes are observed in both diseases. The measurement of urinary enzyme levels in patients with acute pancreatitis is a sensitive indicator of disease. The urinary amylase excretion rises to abnormal levels and persists at significant values for a longer period of time than the serum amylase in acute pancreatitis. The fractional urinary amylase escretion seems to be more sensitive than daily urinary measurement. The pancreatic exocrin function can be assessed by examining the duodenal contents after intravenous administration of pancreozymin and secretin. Different abnormal secretory patterns can be determinated. Total secretory deficiency is observed in patients with obstruction of excretory ducts by tumors of the head of the pancreas and in the end stage of chronic pancreatitis. Low volume with normal bicarbonate and enzyme concentration is another typical pattern seen in neoplastic obstruction of escretory ducts. In chronic pancreatitis the chief defect is the inability of the gland to secrete a juice with a high bicarbonate concentration; but in the advanced stage diminution of enzyme and volume is also evident. Diagnostic procedures for pancreatic diseases include digestion and absorption tests. The microscopic examination and chemical estimation of the fats in stool specimens in

  20. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.

  1. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    NARCIS (Netherlands)

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set independent p-values. However, it is very obviously that Fisher’s statistic is more sensitive to smaller p-values than to larger p-value and a small p-value may overrule the other p-values

  2. Multi-objective Search-based Mobile Testing

    OpenAIRE

    Mao, K.

    2017-01-01

    Despite the tremendous popularity of mobile applications, mobile testing still relies heavily on manual testing. This thesis presents mobile test automation approaches based on multi-objective search. We introduce three approaches: Sapienz (for native Android app testing), Octopuz (for hybrid/web JavaScript app testing) and Polariz (for using crowdsourcing to support search-based mobile testing). These three approaches represent the primary scientific and technical contributions of the thesis...

  3. Bootstrap Determination of the Co-Integration Rank in Heteroskedastic VAR Models

    DEFF Research Database (Denmark)

    Cavaliere, G.; Rahbek, Anders; Taylor, A.M.R.

    2014-01-01

    In a recent paper Cavaliere et al. (2012) develop bootstrap implementations of the (pseudo-) likelihood ratio (PLR) co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates...... of the underlying vector autoregressive (VAR) model which obtain under the reduced rank null hypothesis. They propose methods based on an independent and individual distributed (i.i.d.) bootstrap resampling scheme and establish the validity of their proposed bootstrap procedures in the context of a co......-integrated VAR model with i.i.d. innovations. In this paper we investigate the properties of their bootstrap procedures, together with analogous procedures based on a wild bootstrap resampling scheme, when time-varying behavior is present in either the conditional or unconditional variance of the innovations. We...

  4. On the equivalence of the Clauser–Horne and Eberhard inequality based tests

    International Nuclear Information System (INIS)

    Khrennikov, Andrei; Ramelow, Sven; Ursin, Rupert; Wittmann, Bernhard; Kofler, Johannes; Basieva, Irina

    2014-01-01

    Recently, the results of the first experimental test for entangled photons closing the detection loophole (also referred to as the fair sampling loophole) were published (Vienna, 2013). From the theoretical viewpoint the main distinguishing feature of this long-aspired to experiment was that the Eberhard inequality was used. Almost simultaneously another experiment closing this loophole was performed (Urbana-Champaign, 2013) and it was based on the Clauser–Horne inequality (for probabilities). The aim of this note is to analyze the mathematical and experimental equivalence of tests based on the Eberhard inequality and various forms of the Clauser–Horne inequality. The structure of the mathematical equivalence is nontrivial. In particular, it is necessary to distinguish between algebraic and statistical equivalence. Although the tests based on these inequalities are algebraically equivalent, they need not be equivalent statistically, i.e., theoretically the level of statistical significance can drop under transition from one test to another (at least for finite samples). Nevertheless, the data collected in the Vienna test implies not only a statistically significant violation of the Eberhard inequality, but also of the Clauser–Horne inequality (in the ratio-rate form): for both a violation >60σ. (paper)

  5. Safety Significance of the Halden IFA-650 LOCA Test Results

    International Nuclear Information System (INIS)

    Fuketa, Toyoshi; Nagase, Fumihisa; Grandjean, Claude; Petit, Marc; Hozer, Zoltan; Kelppe, Seppo; Khvostov, Grigori; Hafidi, Biya; Therache, Benjamin; Heins, Lothar; Valach, Mojmir; Voglewede, John; Wiesenack, Wolfgang

    2010-01-01

    The safety criteria for loss-of-coolant accidents were defined to ensure that the core would remain coolable. Since the time of the first LOCA experiments, which were largely conducted with fresh fuel, changes in fuel design, the introduction of new cladding materials and in particular the move to high burnup have generated a need to re-examine these criteria and to verify their continued validity. As part of international efforts to this end, the OECD Halden Reactor Project program implemented a LOCA test series. Based on recommendations of a group of experts from the US NRC, EPRI, EDF, IRSN, FRAMATOME-ANP and GNF, the primary objective of the experiments were defined as 1. Measure the extent of fuel (fragment) relocation into the ballooned region and evaluate its possible effect on cladding temperature and oxidation. 2. Investigate the extent (if any) of 'secondary transient hydriding' on the inner side of the cladding above and below the burst region. The fourth test of the series, IFA-650.4 conducted in April 2006, caused particular attention in the international nuclear community. The fuel used in the experiment had a high burnup, 92 MWd/kgU, and a low pre-test hydrogen content of about 50 ppm. The test aimed at and achieved a peak cladding temperature of 850 deg. C. The rod burst occurred at 790 deg. C. The burst caused a marked temperature increase at the lower end and a decrease at the upper end of the system, indicating that fuel relocation had occurred. Subsequent gamma scanning showed that approximately 19 cm of the fuel stack were missing from the upper part of the rod and that fuel had fallen to the bottom of the capsule. PIE at the IFE-Kjeller hot cells corroborated this evidence of substantial fuel relocation. The fact that fuel dispersal could occur upon ballooning and burst, i.e. at cladding temperatures as low as 800 deg. C and thus far lower than the temperature entailed by the current 1200 deg. C / 17% ECR limit, caused concern. The

  6. The uriscreen test to detect significant asymptomatic bacteriuria during pregnancy.

    Science.gov (United States)

    Teppa, Roberto J; Roberts, James M

    2005-01-01

    Asymptomatic bacteriuria (ASB) occurs in 2-11% of pregnancies and it is a clear predisposition to the development of acute pyelonephritis, which, in turn, poses risk to mother and fetus. Treatment of bacteriuria during pregnancy reduces the incidence of pyelonephritis. Therefore, it is recommended to screen for ASB at the first prenatal visit. The gold standard for detection of bacteriuria during pregnancy is urine culture, but this test is expensive, time-consuming, and labor-intensive. To determine the reliability of an enzymatic urine screening test (Uriscreen; Savyon Diagnostics, Ashdod, Israel) for detecting ASB in pregnancy. Catheterized urine samples were collected from 150 women who had routine prenatal screening for ASB. Patients with urinary symptoms, active vaginal bleeding, or who were previously on antibiotics therapy were excluded from the study. Sensitivity, specificity, and the positive and negative predictive values for the Uriscreen were estimated using urine culture as the criterion standard. Urine cultures were considered positive if they grew >10(5) colony-forming units of a single uropathogen. Twenty-eight women (18.7%) had urine culture results indicating significant bacteriuria, and 17 of these 28 specimens had positive enzyme activity. Of 122 samples with no growth, 109 had negative enzyme activity. Sensitivity, specificity, and positive and negative predictive values for the Uriscreen test were 60.7% (+/-18.1), 89.3% (+/-5.6), 56.6%, and 90.8%, respectively. The Uriscreen test had inadequate sensitivity for rapid screening of bacteriuria in pregnancy.

  7. Significant-Loophole-Free Test of Bell's Theorem with Entangled Photons.

    Science.gov (United States)

    Giustina, Marissa; Versteegh, Marijn A M; Wengerowsky, Sören; Handsteiner, Johannes; Hochrainer, Armin; Phelan, Kevin; Steinlechner, Fabian; Kofler, Johannes; Larsson, Jan-Åke; Abellán, Carlos; Amaya, Waldimar; Pruneri, Valerio; Mitchell, Morgan W; Beyer, Jörn; Gerrits, Thomas; Lita, Adriana E; Shalm, Lynden K; Nam, Sae Woo; Scheidl, Thomas; Ursin, Rupert; Wittmann, Bernhard; Zeilinger, Anton

    2015-12-18

    Local realism is the worldview in which physical properties of objects exist independently of measurement and where physical influences cannot travel faster than the speed of light. Bell's theorem states that this worldview is incompatible with the predictions of quantum mechanics, as is expressed in Bell's inequalities. Previous experiments convincingly supported the quantum predictions. Yet, every experiment requires assumptions that provide loopholes for a local realist explanation. Here, we report a Bell test that closes the most significant of these loopholes simultaneously. Using a well-optimized source of entangled photons, rapid setting generation, and highly efficient superconducting detectors, we observe a violation of a Bell inequality with high statistical significance. The purely statistical probability of our results to occur under local realism does not exceed 3.74×10^{-31}, corresponding to an 11.5 standard deviation effect.

  8. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.

    2017-07-06

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular data assimilation sequential techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches of Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also accounting for error distributions. Particularly, the deterministic EnKF is tested to avoid perturbing observations before assimilation (that is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering the entire Australia. To evaluate the filters performances and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), respectively improving the model groundwater estimations errors by 34% and 31% compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.

  9. Association test based on SNP set: logistic kernel machine based test vs. principal component analysis.

    Directory of Open Access Journals (Sweden)

    Yang Zhao

    Full Text Available GWAS has facilitated greatly the discovery of risk SNPs associated with complex diseases. Traditional methods analyze SNP individually and are limited by low power and reproducibility since correction for multiple comparisons is necessary. Several methods have been proposed based on grouping SNPs into SNP sets using biological knowledge and/or genomic features. In this article, we compare the linear kernel machine based test (LKM and principal components analysis based approach (PCA using simulated datasets under the scenarios of 0 to 3 causal SNPs, as well as simple and complex linkage disequilibrium (LD structures of the simulated regions. Our simulation study demonstrates that both LKM and PCA can control the type I error at the significance level of 0.05. If the causal SNP is in strong LD with the genotyped SNPs, both the PCA with a small number of principal components (PCs and the LKM with kernel of linear or identical-by-state function are valid tests. However, if the LD structure is complex, such as several LD blocks in the SNP set, or when the causal SNP is not in the LD block in which most of the genotyped SNPs reside, more PCs should be included to capture the information of the causal SNP. Simulation studies also demonstrate the ability of LKM and PCA to combine information from multiple causal SNPs and to provide increased power over individual SNP analysis. We also apply LKM and PCA to analyze two SNP sets extracted from an actual GWAS dataset on non-small cell lung cancer.

  10. A performance-oriented and risk-based regulation for containment testing

    International Nuclear Information System (INIS)

    Dey, M.

    1994-01-01

    In August 1992, the NRC initiated a major initiative to develop requirements for containment testing that are less prescriptive, and more performance-oriented and risk-based. This action was a result of public comments and several studies that concluded that the economic burden of certain, present containment testing requirements are not commensurate with their safety benefits. The rulemaking will include consideration of relaxing the allowable containment leakage rate, increasing the interval for the integrated containment test, and establishing intervals for the local containment leak rate tests based on their performance. A study has been conducted to provide technical information for establishing the performance criteria for containment tests, the allowable leakage rate, commensurate with its significance to total public risk. The study used results of a recent comprehensive study conducted by the NRC, NUREG-1150, 'Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants,' to examine the sensitivity of containment leakage to public risk. Risk was found to be insensitive to containment leakage rate up to levels of about 100 percent-volume per day for certain types of containments. PRA methods have also been developed to establish risk-based intervals for containment tests based on their past experience. Preliminary evaluations show that increasing the interval for the integrated containment leakage test from three times to once every ten years would have an insignificant impact on public risk. Preliminary analyses of operational experience data for local leak rate tests show that performance-based testing, valves and penetrations that perform well are tested less frequently, is feasible with marginal impact on safety. The above technical studies are being used to develop efficient (cost-effective) requirements for containment tests. (author). 4 refs., 2 figs

  11. A procedure for the significance testing of unmodeled errors in GNSS observations

    Science.gov (United States)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after corrected with empirical model and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and then simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled especially when they are significant. Therefore, a very first question is how to statistically validate the significance of unmodeled errors. In this research, we will propose a procedure to examine the significance of these unmodeled errors by the combined use of the hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested by using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying the time-domain Allan variance analysis and frequency-domain fast Fourier transform. In summary, the spatiotemporally correlated unmodeled errors are commonly existent in GNSS observations and mainly governed by the residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.

  12. Proceedings of the Annual Seismic Research Symposium on Monitoring a Comprehensive Test Ban Treaty (19th). Held in Orlando, Florida on 23-25 September 1997

    Science.gov (United States)

    1997-09-05

    that cross the path; no ray need ever have followed the exact path previously. P- residuals (predicted) (observed) -2S ^AA+25 - 2Sri i AAAA+25...resampling techniques, such as Monte-Carlo iterations or bootstraping . IV. Disclaimer A historical U.S. explosion has been used in this study solely...diagnostic cluster population characteristics. The method can be applied to obtain " bootstrap " ground truth explosion waveforms for testing

  13. Wavelet Co-movement Significance Testing with Respect to Gaussian White Noise Background

    Directory of Open Access Journals (Sweden)

    Poměnková Jitka

    2018-01-01

    Full Text Available The paper deals with significance testing of time series co-movement measured via wavelet analysis, namely via the wavelet cross-spectra. This technique is very popular for its better time resolution compare to other techniques. Such approach put in evidence the existence of both long-run and short-run co-movement. In order to have better predictive power it is suitable to support and validate obtained results via some testing approach. We investigate the test of wavelet power cross-spectrum with respect to the Gaussian white noise background with the use of the Bessel function. Our experiment is performed on real data, i.e. seasonally adjusted quarterly data of gross domestic product of the United Kingdom, Korea and G7 countries. To validate the test results we perform Monte Carlo simulation. We describe the advantages and disadvantages of both approaches and formulate recommendations for its using.

  14. A novel fruit shape classification method based on multi-scale analysis

    Science.gov (United States)

    Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin

    2005-11-01

    Shape is one of the major concerns and which is still a difficult problem in automated inspection and sorting of fruits. In this research, we proposed the multi-scale energy distribution (MSED) for object shape description, the relationship between objects shape and its boundary energy distribution at multi-scale was explored for shape extraction. MSED offers not only the mainly energy which represent primary shape information at the lower scales, but also subordinate energy which represent local shape information at higher differential scales. Thus, it provides a natural tool for multi resolution representation and can be used as a feature for shape classification. We addressed the three main processing steps in the MSED-based shape classification. They are namely, 1) image preprocessing and citrus shape extraction, 2) shape resample and shape feature normalization, 3) energy decomposition by wavelet and classification by BP neural network. Hereinto, shape resample is resample 256 boundary pixel from a curve which is approximated original boundary by using cubic spline in order to get uniform raw data. A probability function was defined and an effective method to select a start point was given through maximal expectation, which overcame the inconvenience of traditional methods in order to have a property of rotation invariants. The experiment result is relatively well normal citrus and serious abnormality, with a classification rate superior to 91.2%. The global correct classification rate is 89.77%, and our method is more effective than traditional method. The global result can meet the request of fruit grading.

  15. Rehearsal significantly improves immediate and delayed recall on the Rey Auditory Verbal Learning Test.

    Science.gov (United States)

    Hessen, Erik

    2011-10-01

    A repeated observation during memory assessment with the Rey Auditory Verbal Learning Test (RAVLT) is that patients who spontaneously employ a memory rehearsal strategy by repeating the word list more than once achieve better scores than patients who only repeat the word list once. This observation led to concern about the ability of the standard test procedure of RAVLT and similar tests in eliciting the best possible recall scores. The purpose of the present study was to test the hypothesis that a rehearsal recall strategy of repeating the word list more than once would result in improved scores of recall on the RAVLT. We report on differences in outcome after standard administration and after experimental administration on Immediate and Delayed Recall measures from the RAVLT of 50 patients. The experimental administration resulted in significantly improved scores for all the variables employed. Additionally, it was found that patients who failed effort screening showed significantly poorer improvement on Delayed Recall compared with those who passed the effort screening. The general clear improvement both in raw scores and T-scores demonstrates that recall performance can be significantly influenced by the strategy of the patient or by small variations in instructions by the examiner.

  16. BRCA1 and BRCA2 genetic testing-pitfalls and recommendations for managing variants of uncertain clinical significance.

    Science.gov (United States)

    Eccles, D M; Mitchell, G; Monteiro, A N A; Schmutzler, R; Couch, F J; Spurdle, A B; Gómez-García, E B

    2015-10-01

    Increasing use of BRCA1/2 testing for tailoring cancer treatment and extension of testing to tumour tissue for somatic mutation is moving BRCA1/2 mutation screening from a primarily prevention arena delivered by specialist genetic services into mainstream oncology practice. A considerable number of gene tests will identify rare variants where clinical significance cannot be inferred from sequence information alone. The proportion of variants of uncertain clinical significance (VUS) is likely to grow with lower thresholds for testing and laboratory providers with less experience of BRCA. Most VUS will not be associated with a high risk of cancer but a misinterpreted VUS has the potential to lead to mismanagement of both the patient and their relatives. Members of the Clinical Working Group of ENIGMA (Evidence-based Network for the Interpretation of Germline Mutant Alleles) global consortium (www.enigmaconsortium.org) observed wide variation in practices in reporting, disclosure and clinical management of patients with a VUS. Examples from current clinical practice are presented and discussed to illustrate potential pitfalls, explore factors contributing to misinterpretation, and propose approaches to improving clarity. Clinicians, patients and their relatives would all benefit from an improved level of genetic literacy. Genetic laboratories working with clinical geneticists need to agree on a clinically clear and uniform format for reporting BRCA test results to non-geneticists. An international consortium of experts, collecting and integrating all available lines of evidence and classifying variants according to an internationally recognized system, will facilitate reclassification of variants for clinical use. © The Author 2015. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  17. Effects of dependence in high-dimensional multiple testing problems

    Directory of Open Access Journals (Sweden)

    van de Wiel Mark A

    2008-02-01

    Full Text Available Abstract Background We consider effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular the False Discovery Rate (FDR control procedures. Recent simulation studies consider only simple correlation structures among variables, which is hardly inspired by real data features. Our aim is to systematically study effects of several network features like sparsity and correlation strength by imposing dependence structures among variables using random correlation matrices. Results We study the robustness against dependence of several FDR procedures that are popular in microarray studies, such as Benjamin-Hochberg FDR, Storey's q-value, SAM and resampling based FDR procedures. False Non-discovery Rates and estimates of the number of null hypotheses are computed from those methods and compared. Our simulation study shows that methods such as SAM and the q-value do not adequately control the FDR to the level claimed under dependence conditions. On the other hand, the adaptive Benjamini-Hochberg procedure seems to be most robust while remaining conservative. Finally, the estimates of the number of true null hypotheses under various dependence conditions are variable. Conclusion We discuss a new method for efficient guided simulation of dependent data, which satisfy imposed network constraints as conditional independence structures. Our simulation set-up allows for a structural study of the effect of dependencies on multiple testing criterions and is useful for testing a potentially new method on π0 or FDR estimation in a dependency context.

  18. Measured dose to ovaries and testes from Hodgkin's fields and determination of genetically significant dose

    International Nuclear Information System (INIS)

    Niroomand-Rad, A.; Cumberlin, R.

    1993-01-01

    The purpose of this study was to determine the genetically significant dose from therapeutic radiation exposure with Hodgkin's fields by estimating the doses to ovaries and testes. Phantom measurements were performed to verify estimated doses to ovaries and testes from Hodgkin's fields. Thermoluminescent LiF dosimeters (TLD-100) of 1 × 3 × 3 mm³ dimensions were embedded in phantoms and exposed to standard mantle and paraaortic fields using Co-60, 4 MV, 6 MV, and 10 MV photon beams. The results show that measured doses to ovaries and testes are about two to five times higher than the corresponding graphically estimated doses for Co-60 and 4 MV X-ray photon beams as depicted in ICRP publication 44. In addition, the measured doses to ovaries and testes are about 30% to 65% lower for 10 MV photon beams than for their corresponding Co-60 photon beams. The genetically significant dose from Hodgkin's treatment (less than 0.01 mSv) adds about 4% to the genetically significant dose contribution to medical procedures and adds less than 1% to the genetically significant dose from all sources. Therefore, the consequence to society is considered to be very small. The consequences for the individual patient are, likewise, small. 28 refs., 3 figs., 5 tabs

  19. Time and Power Optimizations in FPGA-Based Architectures for Polyphase Channelizers

    DEFF Research Database (Denmark)

    Awan, Mehmood-Ur-Rehman; Harris, Fred; Koch, Peter

    2012-01-01

    This paper presents the time and power optimization considerations for Field Programmable Gate Array (FPGA) based architectures for a polyphase filter bank channelizer with an embedded square root shaping filter in its polyphase engine. This configuration performs two different re-sampling tasks......% slice register resources of a Xilinx Virtex-5 FPGA, operating at 400 and 480 MHz, and consuming 1.9 and 2.6 Watts of dynamic power, respectively....

  20. Mutagenicity in drug development: interpretation and significance of test results.

    Science.gov (United States)

    Clive, D

    1985-03-01

    The use of mutagenicity data has been proposed and widely accepted as a relatively fast and inexpensive means of predicting long-term risk to man (i.e., cancer in somatic cells, heritable mutations in germ cells). This view is based on the universal nature of the genetic material, the somatic mutation model of carcinogenesis, and a number of studies showing correlations between mutagenicity and carcinogenicity. An uncritical acceptance of this approach by some regulatory and industrial concerns is over-conservative, naive, and scientifically unjustifiable on a number of grounds: Human cancers are largely life-style related (e.g., cigarettes, diet, tanning). Mutagens (both natural and man-made) are far more prevalent in the environment than was originally assumed (e.g., the natural bases and nucleosides, protein pyrolysates, fluorescent lights, typewriter ribbon, red wine, diesel fuel exhausts, viruses, our own leukocytes). "False-positive" (relative to carcinogenicity) and "false-negative" mutagenicity results occur, often with rational explanations (e.g., high threshold, inappropriate metabolism, inadequate genetic endpoint), and thereby confound any straightforward interpretation of mutagenicity test results. Test battery composition affects both the proper identification of mutagens and, in many instances, the ability to make preliminary risk assessments. In vitro mutagenicity assays ignore whole animal protective mechanisms, may provide unphysiological metabolism, and may be either too sensitive (e.g., testing at orders-of-magnitude higher doses than can be ingested) or not sensitive enough (e.g., short-term treatments inadequately model chronic exposure in bioassay). Bacterial systems, particularly the Ames assay, cannot in principle detect chromosomal events which are involved in both carcinogenesis and germ line mutations in man. Some compounds induce only chromosomal events and little or no detectable single-gene events (e.g., acyclovir, caffeine

  1. INS/GNSS Tightly-Coupled Integration Using Quaternion-Based AUPF for USV

    Directory of Open Access Journals (Sweden)

    Guoqing Xia

    2016-08-01

    Full Text Available This paper addresses the problem of integration of Inertial Navigation System (INS) and Global Navigation Satellite System (GNSS) for the purpose of developing a low-cost, robust and highly accurate navigation system for unmanned surface vehicles (USVs). A tightly-coupled integration approach is one of the most promising architectures to fuse the GNSS data with INS measurements. However, the resulting system and measurement models turn out to be nonlinear, and the sensor stochastic measurement errors are non-Gaussian distributed in a practical system. The particle filter (PF), one of the most theoretically attractive non-linear/non-Gaussian estimation methods, is becoming more and more attractive in navigation applications. However, its large computational burden limits its practical usage. For the purpose of reducing the computational burden without degrading the system estimation accuracy, a quaternion-based adaptive unscented particle filter (AUPF), which combines the adaptive unscented Kalman filter (AUKF) with the PF, has been proposed in this paper. The unscented Kalman filter (UKF) is used in the algorithm to improve the proposal distribution and generate posterior estimates, which specify the PF importance density function for generating particles more intelligently. In addition, the computational complexity of the filter is reduced with the avoidance of the re-sampling step. Furthermore, a residual-based covariance matching technique is used to adapt the measurement error covariance. A trajectory simulator based on a dynamic model of the USV is used to test the proposed algorithm. Results show that the quaternion-based AUPF can significantly improve the overall navigation accuracy and reliability.

  2. Optimized periodic verification testing blended risk and performance-based MOV inservice test program an application of ASME code case OMN-1

    Energy Technology Data Exchange (ETDEWEB)

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P. [and others

    1996-12-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.

  3. Optimized periodic verification testing blended risk and performance-based MOV inservice test program an application of ASME code case OMN-1

    International Nuclear Information System (INIS)

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.

    1996-01-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising

  4. Testlet-Based Multidimensional Adaptive Testing.

    Science.gov (United States)

    Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen

    2016-01-01

    Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.

  5. Testlet-based Multidimensional Adaptive Testing

    Directory of Open Access Journals (Sweden)

    Andreas Frey

    2016-11-01

    Full Text Available Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3 items, 6 items, 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.

  6. Assessing group differences in biodiversity by simultaneously testing a user-defined selection of diversity indices.

    Science.gov (United States)

    Pallmann, Philip; Schaarschmidt, Frank; Hothorn, Ludwig A; Fischer, Christiane; Nacke, Heiko; Priesnitz, Kai U; Schork, Nicholas J

    2012-11-01

    Comparing diversities between groups is a task biologists are frequently faced with, for example in ecological field trials or when dealing with metagenomics data. However, researchers often waver about which measure of diversity to choose as there is a multitude of approaches available. As Jost (2008, Molecular Ecology, 17, 4015) has pointed out, widely used measures such as the Shannon or Simpson index have undesirable properties which make them hard to compare and interpret. Many of the problems associated with the use of these 'raw' indices can be corrected by transforming them into 'true' diversity measures. We introduce a technique that allows the comparison of two or more groups of observations and simultaneously tests a user-defined selection of a number of 'true' diversity measures. This procedure yields multiplicity-adjusted P-values according to the method of Westfall and Young (1993, Resampling-Based Multiple Testing: Examples and Methods for p-Value Adjustment, 49, 941), which ensures that the rate of false positives (type I error) does not rise when the number of groups and/or diversity indices is extended. Software is available in the R package 'simboot'. © 2012 Blackwell Publishing Ltd.
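
    The Westfall-Young adjustment cited above is resampling-based. A minimal sketch of the single-step max-T flavour for comparing two groups on several diversity indices at once, assuming per-index two-sample t statistics (the 'simboot' package itself may combine tests differently), is:

      import numpy as np
      from scipy import stats

      def westfall_young_maxt(x, y, n_perm=2000, seed=0):
          """Single-step Westfall-Young adjusted p-values (max-T flavour).

          x, y : arrays of shape (n_x, k) and (n_y, k), where the k columns
          hold the diversity indices for each group.  The adjusted p-value
          for index j is the fraction of group-label permutations whose
          maximal |t| over all k indices reaches the observed |t_j|.
          """
          rng = np.random.default_rng(seed)
          data = np.vstack([x, y])
          n_x = x.shape[0]
          t_obs = np.abs(stats.ttest_ind(x, y, axis=0).statistic)
          exceed = np.zeros_like(t_obs)
          for _ in range(n_perm):
              perm = rng.permutation(data.shape[0])          # shuffle group labels
              px, py = data[perm[:n_x]], data[perm[n_x:]]
              t_perm = np.abs(stats.ttest_ind(px, py, axis=0).statistic)
              exceed += (t_perm.max() >= t_obs)
          return (exceed + 1) / (n_perm + 1)                 # adjusted p-values

    Because the adjustment is based on the maximum statistic over all indices, the family-wise error rate stays controlled however many diversity measures are tested jointly, which is the property the record relies on.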

  7. Simulation-based Testing of Control Software

    Energy Technology Data Exchange (ETDEWEB)

    Ozmen, Ozgur [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Olama, Mohammed M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-02-10

    It is impossible to adequately test complex software by examining its operation in a physical prototype of the system monitored. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model-based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model-based testing environment; specifically, we show that a complete software stack - including operating system and application software - can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model-based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed with the MODELICA programming language using the Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.

  8. Safety significance of ATR [Advanced Test Reactor] passive safety response attributes

    International Nuclear Information System (INIS)

    Atkinson, S.A.

    1989-01-01

    The Advanced Test Reactor (ATR) at the Idaho National Engineering Laboratory was designed with some passive safety response attributes which contribute to the safety posture of the facility. The three passive safety attributes being evaluated in the paper are: (1) In-core and in-vessel natural convection cooling, (2) a passive heat sink capability of the ATR primary coolant system (PCS) for the transfer of decay power from the uninsulated piping to the confinement, and (3) gravity feed of emergency coolant makeup. The safety significance of the ATR passive safety response attributes is that the reactor can passively respond for most transients, given a reactor scram, to provide adequate decay power removal and a significant time for operator action should the normal active heat removal systems and their backup systems both fail. The ATR Interim Level 1 Probabilistic Risk Assessment (PRA) model and results were used to evaluate the significance to ATR fuel damage frequency (or probability) of the above three passive response attributes. The results of the evaluation indicate that the first attribute is a major safety characteristic of the ATR. The second attribute has a noticeable but only minor safety significance. The third attribute has no significant influence on the ATR Level 1 PRA because of the diversity and redundancy of the ATR firewater injection system (emergency coolant system). 8 refs., 4 figs., 1 tab

  9. A Computational Tool for Testing Dose-related Trend Using an Age-adjusted Bootstrap-based Poly-k Test

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2006-08-01

    Full Text Available A computational tool for testing for a dose-related trend and/or a pairwise difference in the incidence of an occult tumor via an age-adjusted bootstrap-based poly-k test and the original poly-k test is presented in this paper. The poly-k test (Bailer and Portier 1988) is a survival-adjusted Cochran-Armitage test, which achieves robustness to effects of differential mortality across dose groups. The original poly-k test is asymptotically standard normal under the null hypothesis. However, the asymptotic normality is not valid if there is a deviation from the tumor onset distribution that is assumed in this test. Our age-adjusted bootstrap-based poly-k test assesses the significance of assumed asymptotic normal tests and investigates an empirical distribution of the original poly-k test statistic using an age-adjusted bootstrap method. A tumor of interest is an occult tumor for which the time to onset is not directly observable. Since most of the animal carcinogenicity studies are designed with a single terminal sacrifice, the present tool is applicable to rodent tumorigenicity assays that have a single terminal sacrifice. The present tool takes input information simply from a user screen and reports testing results back to the screen through a user-interface. The computational tool is implemented in C/C++ and is applied to analyze a real data set as an example. Our tool enables the FDA and the pharmaceutical industry to implement a statistical analysis of tumorigenicity data from animal bioassays via our age-adjusted bootstrap-based poly-k test and the original poly-k test which has been adopted by the National Toxicology Program as its standard statistical test.
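
    The poly-k adjustment described above down-weights animals that die early without the tumour. The following is only a rough sketch of that weighting and of a Cochran-Armitage-type trend statistic built on the adjusted group sizes; it omits the age-adjusted bootstrap step and uses a simple binomial variance, so it should not be read as the tool's exact implementation:

      import numpy as np

      def poly_k_trend(doses, tumor, death_time, group, t_max, k=3.0):
          """Poly-k adjusted trend statistic for an occult tumour (sketch).

          tumor      : 0/1 tumour indicator per animal
          death_time : time of death per animal
          group      : dose-group index per animal (0 .. G-1)
          doses      : dose value for each of the G groups
          Animals without the tumour that die before terminal sacrifice at
          t_max receive fractional weight (t / t_max)**k; tumour-bearing
          animals receive weight 1.
          """
          tumor = np.asarray(tumor, float)
          group = np.asarray(group)
          w = np.where(tumor == 1, 1.0,
                       (np.asarray(death_time, float) / t_max) ** k)
          G = len(doses)
          x = np.array([tumor[group == g].sum() for g in range(G)])  # tumour counts
          n = np.array([w[group == g].sum() for g in range(G)])      # adjusted sizes
          d = np.asarray(doses, float)
          p = x.sum() / n.sum()
          num = np.sum(d * (x - n * p))
          var = p * (1 - p) * (np.sum(n * d ** 2) - np.sum(n * d) ** 2 / n.sum())
          return num / np.sqrt(var)      # referred to N(0,1) in the original test

    The age-adjusted bootstrap in the tool above replaces that asymptotic normal reference with an empirical distribution of the statistic obtained by resampling the animals.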

  10. Pattern Recognition of Momentary Mental Workload Based on Multi-Channel Electrophysiological Data and Ensemble Convolutional Neural Networks.

    Science.gov (United States)

    Zhang, Jianhua; Li, Sunan; Wang, Rubin

    2017-01-01

    In this paper, we deal with the Mental Workload (MWL) classification problem based on the measured physiological data. First we discussed the optimal depth (i.e., the number of hidden layers) and parameter optimization algorithms for the Convolutional Neural Networks (CNN). The base CNNs designed were tested according to five classification performance indices, namely Accuracy, Precision, F-measure, G-mean, and required training time. Then we developed an Ensemble Convolutional Neural Network (ECNN) to enhance the accuracy and robustness of the individual CNN model. For the ECNN design, three model aggregation approaches (weighted averaging, majority voting and stacking) were examined and a resampling strategy was used to enhance the diversity of individual CNN models. The results of MWL classification performance comparison indicated that the proposed ECNN framework can effectively improve MWL classification performance and is featured by entirely automatic feature extraction and MWL classification, when compared with traditional machine learning methods.

  11. BEAT: A Web-Based Boolean Expression Fault-Based Test Case Generation Tool

    Science.gov (United States)

    Chen, T. Y.; Grant, D. D.; Lau, M. F.; Ng, S. P.; Vasa, V. R.

    2006-01-01

    BEAT is a Web-based system that generates fault-based test cases from Boolean expressions. It is based on the integration of our several fault-based test case selection strategies. The generated test cases are considered to be fault-based, because they are aiming at the detection of particular faults. For example, when the Boolean expression is in…

  12. On the nature of data collection for soft-tissue image-to-physical organ registration: a noise characterization study

    Science.gov (United States)

    Collins, Jarrod A.; Heiselman, Jon S.; Weis, Jared A.; Clements, Logan W.; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.

    2017-03-01

    In image-guided liver surgery (IGLS), sparse representations of the anterior organ surface may be collected intraoperatively to drive image-to-physical space registration. Soft tissue deformation represents a significant source of error for IGLS techniques. This work investigates the impact of surface data quality on current surface based IGLS registration methods. In this work, we characterize the robustness of our IGLS registration methods to noise in organ surface digitization. We study this within a novel human-to-phantom data framework that allows a rapid evaluation of clinically realistic data and noise patterns on a fully characterized hepatic deformation phantom. Additionally, we implement a surface data resampling strategy that is designed to decrease the impact of differences in surface acquisition. For this analysis, n=5 cases of clinical intraoperative data consisting of organ surface and salient feature digitizations from open liver resection were collected and analyzed within our human-to-phantom validation framework. As expected, results indicate that increasing levels of noise in surface acquisition cause registration fidelity to deteriorate. With respect to rigid registration using the raw and resampled data at clinically realistic levels of noise (i.e. a magnitude of 1.5 mm), resampling improved TRE by 21%. In terms of nonrigid registration, registrations using resampled data outperformed the raw data result by 14% at clinically realistic levels and were less susceptible to noise across the range of noise investigated. These results demonstrate the types of analyses our novel human-to-phantom validation framework can provide and indicate the considerable benefits of resampling strategies.

  13. Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.

    Science.gov (United States)

    Deegear, James

    This paper summarizes the literature regarding statistical significance testing with an emphasis on recent literature in various disciplines and literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…

  14. Pressure tests to assess the significance of defects in boiler and superheater tubing

    International Nuclear Information System (INIS)

    Guest, J.C.; Hutchings, J.A.

    1975-01-01

    Internal pressure tests on 9 per cent Cr-1 per cent Mo steel tubing containing artificial defects demonstrated that the resultant loss of strength was less than a simple calculation based on the reduced tube thickness would suggest. Bursting tests on tubes containing longitudinal defects of varying length, depth and acuity showed notch strengthening at ambient temperature and at 550 °C. A flow stress concept developed for simple bursting tests was shown to apply to creep conditions at 550 °C. Results of creep and short-term bursting tests show that the length as well as the depth of the defect is an important factor affecting the life or bursting strength of the tubes. Defects less than 10 per cent of the tube thickness were found to have an insignificant effect. (author)

  15. Performance-based alternative assessments as a means of eliminating gender achievement differences on science tests

    Science.gov (United States)

    Brown, Norman Merrill

    1998-09-01

    Historically, researchers have reported an achievement difference between females and males on standardized science tests. These differences have been reported to be based upon science knowledge, abstract reasoning skills, mathematical abilities, and cultural and social phenomena. This research was designed to determine how mastery of specific science content from public school curricula might be evaluated with performance-based assessment models, without producing gender achievement differences. The assessment instruments used were Harcourt Brace Educational Measurement's GOALS: A Performance-Based Measure of Achievement and the performance-based portion of the Stanford Achievement Test, Ninth Edition (SAT9). The identified independent variables were test, gender, ethnicity, and grade level. A 2 x 2 x 6 x 12 (test x gender x ethnicity x grade) factorial experimental design was used to organize the data. A stratified random sample (N = 2400) was selected from a national pool of norming data: N = 1200 from the GOALS group and N = 1200 from the SAT9 group. The ANOVA analysis yielded mixed results. The factors of test, gender, ethnicity by grade, gender by grade, and gender by grade by ethnicity failed to produce significant results (alpha = 0.05). The factors yielding significant results were ethnicity, grade, and ethnicity by grade. Therefore, no significant differences were found between female and male achievement on these performance-based assessments.

  16. Space Launch System Base Heating Test: Environments and Base Flow Physics

    Science.gov (United States)

    Mehta, Manish; Knox, Kyle S.; Seaford, C. Mark; Dufrene, Aaron T.

    2016-01-01

    The NASA Space Launch System (SLS) vehicle is composed of four RS-25 liquid oxygen-hydrogen rocket engines in the core-stage and two 5-segment solid rocket boosters and as a result six hot supersonic plumes interact within the aft section of the vehicle during flight. Due to the complex nature of rocket plume-induced flows within the launch vehicle base during ascent and a new vehicle configuration, sub-scale wind tunnel testing is required to reduce SLS base convective environment uncertainty and design risk levels. This hot-fire test program was conducted at the CUBRC Large Energy National Shock (LENS) II short-duration test facility to simulate flight from altitudes of 50 kft to 210 kft. The test program is a challenging and innovative effort that has not been attempted in 40+ years for a NASA vehicle. This presentation discusses the various trends of base convective heat flux and pressure as a function of altitude at various locations within the core-stage and booster base regions of the two-percent SLS wind tunnel model. In-depth understanding of the base flow physics is presented using the test data, infrared high-speed imaging and theory. The normalized test design environments are compared to various NASA semi-empirical numerical models to determine exceedance and conservatism of the flight-scaled test-derived base design environments. Brief discussion of thermal impact to the launch vehicle base components is also presented.

  17. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  18. Oscillation-based test in mixed-signal circuits

    CERN Document Server

    Sánchez, Gloria Huertas; Rueda, Adoración Rueda

    2007-01-01

    This book presents the development and experimental validation of the structural test strategy called Oscillation-Based Test - OBT in short. The results presented here assert, not only from a theoretical point of view, but also based on a wide experimental support, that OBT is an efficient defect-oriented test solution, complementing the existing functional test techniques for mixed-signal circuits.

  19. Antirandom Testing: A Distance-Based Approach

    Directory of Open Access Journals (Sweden)

    Shen Hui Wu

    2008-01-01

    Full Text Available Random testing requires each test to be selected randomly regardless of the tests previously applied. This paper introduces the concept of antirandom testing where each test applied is chosen such that its total distance from all previous tests is maximum. This spans the test vector space to the maximum extent possible for a given number of vectors. An algorithm for generating antirandom tests is presented. Compared with traditional pseudorandom testing, antirandom testing is found to be very effective when a high-fault coverage needs to be achieved with a limited number of test vectors. The superiority of the new approach is even more significant for testing bridging faults.

  20. Evaluation and significance of hyperchromatic crowded groups (HCG in liquid-based paps

    Directory of Open Access Journals (Sweden)

    Chivukula Mamatha

    2007-01-01

    Full Text Available Abstract Objective Hyperchromatic crowded groups (HCG), a term first introduced into the cytology literature by DeMay in 1995, are commonly observed in Pap tests and may rarely be associated with serious but difficult to interpret lesions. In this study, we specifically defined HCG as dark crowded cell groups with more than 15 cells which can be identified at 10× screening magnification. Methods We evaluated consecutive liquid-based (Surepath) Pap tests from 601 women (age 17–74 years, mean age 29.4 yrs) and observed HCG in 477 cases. In all 477 HCG cases, Pap tests were found to be satisfactory and to contain an endocervical sample. HCG were easily detectible at 10× screening magnification (size up to 400 µm, mean 239.5 µm) and ranged from 1 to 50 (mean 19.5) per Pap slide. Results HCG predominantly represented 3-dimensional groups of endocervical cells with some nuclear overlap (379/477, 79%), reactive endocervical cells with relatively prominent nucleoli and some nuclear crowding (29/477, 6%), clusters of inflammatory cells (25/477, 5.2%), parabasal cells (22/477, 4.6%), endometrial cells (1/477, 0.2%). Epithelial cell abnormalities (ECA) were present in only 21 of 477 cases (4.6%). 18 of 21 women with HCG-associated ECA were less than 40 years old; only 3 were ≥40 years. HCG-associated final abnormal Pap test interpretations were as follows: ASCUS (6/21, 28%), LSIL (12/21, 57%), ASC-H (2/21, 9.5%), and HSIL/CIN2-3 (3/21, 14%). The association of HCG with ECA was statistically significant (p = 0.0174, chi-square test). In patients with ECA, biopsy results were available in 10 cases, and 4 cases of biopsy-proven CIN2/3 were detected. Among these four cases, HCG in the Pap tests in retrospect represented the lesional high grade cells in three cases (one HSIL case and two ASC-H cases). Interestingly, none of the 124 cases without HCG were found to have an epithelial cell abnormality. Conclusion We conclude: a. HCG are observed

  1. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  2. The Added Value of Medical Testing in Underwriting Life Insurance.

    Directory of Open Access Journals (Sweden)

    Jan Bronsema

    Full Text Available In present-day life-insurance medical underwriting practice the risk assessment starts with a standard health declaration (SHD). Indication for additional medical screening depends predominantly on age and amount of insured capital. From a medical perspective it is questionable whether there is an association between the level of insured capital and medical risk in terms of mortality. The aim of the study is to examine the prognostic value of parameters from the health declaration and application form on extra mortality based on results from additional medical testing. A history register-based cohort study was conducted including about 15,000 application files accepted between 2007 and 2010. Blood pressure, lipids, cotinine and glucose levels were used as dependent variables in logistic regression models. Resampling validation was applied using 250 bootstrap samples to calculate areas under the curve (AUCs). The AUC was used to discriminate between persons with and without at least 25% extra mortality. BMI and the overall assessment of the health declaration by an insurance physician or medical underwriter showed the strongest discrimination in multivariable analysis. Including all variables at minimum cut-off levels resulted in an AUC of 0.710, while a model with BMI, the assessment of the health declaration and gender gave an AUC of 0.708. Including all variables at maximum cut-off levels led to an AUC of 0.743, while a model with BMI, the assessment of the health declaration and age resulted in an AUC of 0.741. The outcome of this study shows that BMI and the overall assessment of the health declaration were the dominant variables to discriminate between applicants for life-insurance with and without at least 25 percent extra mortality. The variable insured capital set by insurers as a factor for additional medical testing could not be established in this study population. The indication for additional medical testing at underwriting life

  3. On school choice and test-based accountability.

    Directory of Open Access Journals (Sweden)

    Damian W. Betebenner

    2005-10-01

    Full Text Available Among the two most prominent school reform measures currently being implemented in The United States are school choice and test-based accountability. Until recently, the two policy initiatives remained relatively distinct from one another. With the passage of the No Child Left Behind Act of 2001 (NCLB, a mutualism between choice and accountability emerged whereby school choice complements test-based accountability. In the first portion of this study we present a conceptual overview of school choice and test-based accountability and explicate connections between the two that are explicit in reform implementations like NCLB or implicit within the market-based reform literature in which school choice and test-based accountability reside. In the second portion we scrutinize the connections, in particular, between school choice and test-based accountability using a large western school district with a popular choice system in place. Data from three sources are combined to explore the ways in which school choice and test-based accountability draw on each other: state assessment data of children in the district, school choice data for every participating student in the district choice program, and a parental survey of both participants and non-participants of choice asking their attitudes concerning the use of school report cards in the district. Results suggest that choice is of benefit academically to only the lowest achieving students, choice participation is not uniform across different ethnic groups in the district, and parents' primary motivations as reported on a survey for participation in choice are not due to test scores, though this is not consistent with choice preferences among parents in the district. As such, our results generally confirm the hypotheses of choice critics more so than advocates. Keywords: school choice; accountability; student testing.

  4. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    Science.gov (United States)

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
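
    The simulation pipeline described (convolve a known object function with the measured PSF, then resample at the clinical pixel size and slice interval before placing an ROI) can be illustrated in one dimension. The Gaussian PSF and all numerical values below are hypothetical stand-ins, not the measured quantities from the study:

      import numpy as np

      dx = 0.05                                          # fine simulation grid (mm)
      x = np.arange(-40.0, 40.0, dx)
      obj = np.where(np.abs(x) <= 1.0, 100.0, -800.0)    # 2 mm "nodule" on background

      sigma = 0.8 / 2.355                                # hypothetical PSF, FWHM 0.8 mm
      psf = np.exp(-0.5 * (x / sigma) ** 2)
      psf /= psf.sum()

      blurred = np.convolve(obj, psf, mode="same")       # simulated scan profile

      # Resample at a clinical pixel size; shifting the sampling grid mimics the
      # offset between nodule centre and voxel centre discussed above.
      pixel = 0.7                                        # mm
      for offset in (0.0, 0.35):
          grid = np.arange(-30.0 + offset, 30.0 + offset, pixel)
          img = np.interp(grid, x, blurred)
          roi = img[np.abs(grid) <= 0.7]                 # central ROI
          print(f"offset {offset:.2f} mm -> mean ROI value {roi.mean():8.2f}")

    Even in this toy setting the sampled ROI mean changes with the grid offset, which is the fluctuation the study reduces by using overlapping reconstruction (smaller slice intervals).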

  5. Cross-Mode Comparability of Computer-Based Testing (CBT) versus Paper-Pencil Based Testing (PPT): An Investigation of Testing Administration Mode among Iranian Intermediate EFL Learners

    Science.gov (United States)

    Khoshsima, Hooshang; Hosseini, Monirosadat; Toroujeni, Seyyed Morteza Hashemi

    2017-01-01

    Advent of technology has caused growing interest in using computers to convert conventional paper and pencil-based testing (Henceforth PPT) into Computer-based testing (Henceforth CBT) in the field of education during last decades. This constant promulgation of computers to reshape the conventional tests into computerized format permeated the…

  6. Diagnostic significance of haematological testing in patients presenting at the Emergency Department

    Directory of Open Access Journals (Sweden)

    Giuseppe Lippi

    2012-03-01

    Full Text Available The use of simple and economic tests to rule out diseases of sufficient clinical severity is appealing in the emergency department (ED), since it would be effective for contrasting ED overcrowding and decreasing healthcare costs. The aim of this study was to assess the diagnostic performance of simple and economic haematological testing in a large sample of adult patients presenting at the ED of the Academic Hospital of Parma during the year 2010 with the five most frequent acute pathologies (i.e., acute myocardial infarction, renal colic, pneumonia, trauma and pancreatitis). Both leukocyte count and hemoglobin showed a good diagnostic performance (Area Under the Curve [AUC] of 0.85 for leukocyte count and 0.76 for hemoglobin; both p < 0.01). Although the platelet count was significantly increased in all patient groups except pancreatitis, the diagnostic performance did not achieve statistical significance (AUC 0.53; p = 0.07). We also observed an increased RDW in all groups except those with trauma, and the diagnostic performance was acceptable (AUC 0.705; p < 0.01). The mean platelet volume (MPV) was consistently lower in all patient groups and also characterized by an efficient diagnostic performance (AUC 0.76; p < 0.01). This evidence led us to design an arbitrary formula, whereby MPV and hemoglobin were multiplied, and further divided by the leukocyte count, obtaining a remarkable AUC (0.91; p < 0.01). We conclude that simple, rapid and cheap hematological tests might provide relevant clinical information for decision making to busy emergency physicians, and their combination into an arbitrary formula might further increase the specific diagnostic potential of each of them.
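
    The arbitrary composite index described (MPV multiplied by hemoglobin, divided by the leukocyte count) and its discriminative power can be evaluated with a standard ROC analysis. A sketch with purely synthetic values (not the Parma data), assuming scikit-learn is available:

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(1)
      n = 200
      disease = rng.integers(0, 2, n)                        # 1 = acute pathology present

      # synthetic haematological values, shifted in the diseased group
      wbc = rng.normal(7 + 4 * disease, 2.0, n).clip(1.0)    # leukocytes, 10^9/L
      hgb = rng.normal(14 - 1.5 * disease, 1.5, n)           # hemoglobin, g/dL
      mpv = rng.normal(10 - 0.8 * disease, 0.9, n)           # mean platelet volume, fL

      score = mpv * hgb / wbc                                # the composite index
      # lower scores indicate disease here, so negate before computing the AUC
      print("AUC of composite index:", round(roc_auc_score(disease, -score), 3))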

  7. Model-Based Software Testing for Object-Oriented Software

    Science.gov (United States)

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  8. Team-Based Testing Improves Individual Learning

    Science.gov (United States)

    Vogler, Jane S.; Robinson, Daniel H.

    2016-01-01

    In two experiments, 90 undergraduates took six tests as part of an educational psychology course. Using a crossover design, students took three tests individually without feedback and then took the same test again, following the process of team-based testing (TBT), in teams in which the members reached consensus for each question and answered…

  9. Interface-based software testing

    Directory of Open Access Journals (Sweden)

    Aziz Ahmad Rais

    2016-10-01

    Full Text Available Software quality is determined by assessing the characteristics that specify how it should work, which are verified through testing. If it were possible to touch, see, or measure software, it would be easier to analyze and prove its quality. Unfortunately, software is an intangible asset, which makes testing complex. This is especially true when software quality is not a question of particular functions that can be tested through a graphical user interface. The primary objective of software architecture is to design quality of software through modeling and visualization. There are many methods and standards that define how to control and manage quality. However, many IT software development projects still fail due to the difficulties involved in measuring, controlling, and managing software quality. Software quality failure factors are numerous. Examples include beginning to test software too late in the development process, or failing properly to understand, or design, the software architecture and the software component structure. The goal of this article is to provide an interface-based software testing technique that better measures software quality, automates software quality testing, encourages early testing, and increases the software’s overall testability

  10. Kernel-based tests for joint independence

    DEFF Research Database (Denmark)

    Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard

    2018-01-01

    We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test......
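
    For the two-variable special case, the permutation test mentioned above can be sketched with Gaussian kernels and the biased HSIC estimate; the $d$-variable dHSIC statistic generalises this by combining $d$ Gram matrices instead of two. The fixed bandwidth below is an assumption (the median heuristic is a common alternative):

      import numpy as np

      def _gram(z, bandwidth):
          d2 = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
          return np.exp(-d2 / (2.0 * bandwidth ** 2))

      def hsic_permutation_test(x, y, n_perm=1000, bandwidth=1.0, seed=0):
          """Permutation test of independence between paired samples x and y.

          Uses the biased estimate trace(K H L H) / n^2 with Gaussian kernels;
          the null distribution is built by jointly permuting the y sample.
          """
          rng = np.random.default_rng(seed)
          x = x.reshape(len(x), -1)
          y = y.reshape(len(y), -1)
          n = x.shape[0]
          H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
          K, L = _gram(x, bandwidth), _gram(y, bandwidth)
          stat = np.trace(K @ H @ L @ H) / n ** 2
          null = np.empty(n_perm)
          for b in range(n_perm):
              p = rng.permutation(n)
              null[b] = np.trace(K @ H @ L[np.ix_(p, p)] @ H) / n ** 2
          return stat, (np.sum(null >= stat) + 1) / (n_perm + 1)

      # dependent toy data: y is a noisy function of x
      rng = np.random.default_rng(42)
      x = rng.normal(size=100)
      y = np.sin(x) + 0.1 * rng.normal(size=100)
      print(hsic_permutation_test(x, y))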

  11. Validity evidence based on test content.

    Science.gov (United States)

    Sireci, Stephen; Faulkner-Bond, Molly

    2014-01-01

    Validity evidence based on test content is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the logic and theory underlying such evidence and describe traditional and modern methods for gathering and analyzing content validity data. A comprehensive review of the literature and of the aforementioned Standards is presented. For educational tests and other assessments targeting knowledge and skill possessed by examinees, validity evidence based on test content is necessary for building a validity argument to support the use of a test for a particular purpose. By following the methods described in this article, practitioners have a wide arsenal of tools available for determining how well the content of an assessment is congruent with and appropriate for the specific testing purposes.

  12. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai

    2015-09-16

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.

  13. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai; Pang, Herbert; Tong, Tiejun; Genton, Marc G.

    2015-01-01

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
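
    The diagonal Hotelling's statistic replaces the full sample covariance matrix with its diagonal, and the shrinkage version replaces the per-variable sample variances with values shrunk towards a common target. The sketch below uses a simple convex combination with the median variance as a stand-in for the estimator actually proposed in the paper:

      import numpy as np

      def diagonal_hotelling_one_sample(X, mu0, shrink=0.3):
          """One-sample diagonal Hotelling-type statistic with shrunk variances.

          X      : (n, p) data matrix, n samples and p variables (p may exceed n)
          mu0    : length-p null mean vector
          shrink : weight pulling each sample variance towards the median
                   variance (a placeholder for the paper's shrinkage estimator)
          """
          X = np.asarray(X, float)
          n, p = X.shape
          xbar = X.mean(axis=0)
          s2 = X.var(axis=0, ddof=1)
          s2_shrunk = (1.0 - shrink) * s2 + shrink * np.median(s2)
          return n * np.sum((xbar - mu0) ** 2 / s2_shrunk)

      # toy use: p = 100 genes, n = 8 samples, null mean zero
      rng = np.random.default_rng(0)
      X = rng.normal(size=(8, 100))
      print(diagonal_hotelling_one_sample(X, np.zeros(100)))

    Because the matrix being inverted is diagonal, the statistic stays well defined even when p exceeds n, which is the singularity problem noted above.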

  14. Test Review: Test of English as a Foreign Language[TM]--Internet-Based Test (TOEFL iBT[R])

    Science.gov (United States)

    Alderson, J. Charles

    2009-01-01

    In this article, the author reviews the TOEFL iBT which is the latest version of the TOEFL, whose history stretches back to 1961. The TOEFL iBT was introduced in the USA, Canada, France, Germany and Italy in late 2005. Currently the TOEFL test is offered in two testing formats: (1) Internet-based testing (iBT); and (2) paper-based testing (PBT).…

  15. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    Science.gov (United States)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with
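
    The calibration uncertainty of the single hydrological parameter is obtained by a bootstrap. A generic sketch of that step, with a hypothetical calibrate() function standing in for the SHYREG calibration (not reproduced here), is:

      import numpy as np

      def bootstrap_parameter(events, calibrate, n_boot=500, seed=0):
          """Bootstrap distribution of a calibrated model parameter.

          events    : array of calibration data (e.g. observed flood events)
          calibrate : function mapping a resampled data set to a fitted
                      parameter value (hypothetical stand-in for the
                      hydrological calibration step)
          """
          rng = np.random.default_rng(seed)
          events = np.asarray(events, float)
          n = len(events)
          return np.array([calibrate(events[rng.integers(0, n, n)])
                           for _ in range(n_boot)])

      # illustrative use with the sample mean as a stand-in calibration target
      obs = np.random.default_rng(1).gamma(2.0, 50.0, size=30)
      draws = bootstrap_parameter(obs, calibrate=np.mean)
      print(np.percentile(draws, [5, 95]))     # 90% interval on the parameter

    The resulting parameter draws are then propagated, together with the rainfall-generator uncertainties, to obtain confidence intervals on the flood quantiles.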

  16. Students Perception on the Use of Computer Based Test

    Science.gov (United States)

    Nugroho, R. A.; Kusumawati, N. S.; Ambarwati, O. C.

    2018-02-01

    Teaching nowadays may use technology to disseminate science and knowledge. As part of teaching, the way of evaluating study progress and results has also benefited from this rapid progress in IT. The computer-based test (CBT) has been introduced to replace the more conventional paper-and-pencil test (PPT). CBT is considered more advantageous than PPT: it is regarded as more efficient, more transparent, and better able to minimise fraud in cognitive evaluation. Current studies have debated CBT versus PPT usage, but most of the existing research compares the two methods without exploring students’ perception of the test. This study fills that gap in the literature by providing students’ perceptions of the two test methods. A survey approach was used to obtain the data. The sample was collected in two identical classes taking a similar subject at a public university in Indonesia. The Mann-Whitney U test was used to analyse the data. The results indicate a significant difference between the two groups of students regarding CBT usage: students given one test method preferred to take a test in the other format. Further discussion and research implications are presented in the paper.
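
    A minimal sketch of the Mann-Whitney U comparison used above, with made-up perception scores for the CBT and PPT classes (the real questionnaire data are not reproduced here):

      import numpy as np
      from scipy.stats import mannwhitneyu

      # hypothetical perception scores (e.g. summed Likert items) per class
      cbt_scores = np.array([34, 29, 41, 38, 36, 33, 40, 37, 35, 39])
      ppt_scores = np.array([28, 31, 27, 30, 26, 33, 29, 25, 32, 28])

      stat, p = mannwhitneyu(cbt_scores, ppt_scores, alternative="two-sided")
      print(f"U = {stat:.1f}, p = {p:.4f}")     # small p -> perceptions differ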

  17. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI.

    Directory of Open Access Journals (Sweden)

    Nathan W Churchill

    Full Text Available BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the "pipeline") significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard "fixed" preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets.

  18. The Improved Locating Algorithm of Particle Filter Based on ROS Robot

    Science.gov (United States)

    Fang, Xun; Fu, Xiaoyang; Sun, Ming

    2018-03-01

    This paper analyzes the basic theory and primary algorithms of real-time localization and SLAM technology for a ROS-based robot. It proposes an improved particle filter locating algorithm that effectively reduces the time needed to match the laser radar scan to the map; in addition, ultra-wideband technology directly improves the overall efficiency of the FastSLAM algorithm, which no longer needs to search over the global map. Meanwhile, re-sampling is reduced by about 5/6, which directly removes much of the matching effort of the robot's algorithm.
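
    The re-sampling step whose cost the paper reduces is commonly implemented as systematic (low-variance) resampling, triggered only when the effective sample size drops. A standard sketch of that step:

      import numpy as np

      def systematic_resample(weights, rng=None):
          """Systematic (low-variance) resampling for a particle filter.

          weights : normalized particle weights, shape (N,)
          Returns the indices of the particles kept for the next generation.
          """
          rng = np.random.default_rng() if rng is None else rng
          N = len(weights)
          positions = (rng.random() + np.arange(N)) / N     # one draw, N strata
          cumulative = np.cumsum(weights)
          cumulative[-1] = 1.0                              # guard against round-off
          return np.searchsorted(cumulative, positions)

      # resample only when the effective sample size falls below half of N
      w = np.array([0.70, 0.20, 0.06, 0.03, 0.01])
      if 1.0 / np.sum(w ** 2) < 0.5 * len(w):
          print(systematic_resample(w, np.random.default_rng(0)))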

  19. Prognostic significance of electrophysiological tests for facial nerve outcome in vestibular schwannoma surgery.

    Science.gov (United States)

    van Dinther, J J S; Van Rompaey, V; Somers, T; Zarowski, A; Offeciers, F E

    2011-01-01

    To assess the prognostic significance of pre-operative electrophysiological tests for facial nerve outcome in vestibular schwannoma surgery. Retrospective study design in a tertiary referral neurology unit. We studied a total of 123 patients with unilateral vestibular schwannoma who underwent microsurgical removal of the lesion. Nine patients were excluded because they had clinically abnormal pre-operative facial function. Pre-operative electrophysiological facial nerve function testing (EPhT) was performed. Short-term (1 month) and long-term (1 year) post-operative clinical facial nerve function were assessed. When pre-operative facial nerve function, evaluated by EPhT, was normal, the outcome from clinical follow-up at 1-month post-operatively was excellent in 78% (i.e. HB I-II) of patients, moderate in 11% (i.e. HB III-IV), and bad in 11% (i.e. HB V-VI). After 1 year, 86% had excellent outcomes, 13% had moderate outcomes, and 1% had bad outcomes. Of all patients with normal clinical facial nerve function, 22% had an abnormal EPhT result and 78% had a normal result. No statistically significant differences could be observed in short-term and long-term post-operative facial function between the groups. In this study, electrophysiological tests were not able to predict facial nerve outcome after vestibular schwannoma surgery. Tumour size remains the best pre-operative prognostic indicator of facial nerve function outcome, i.e. a better outcome in smaller lesions.

  20. Correlations between power and test reactor data bases

    International Nuclear Information System (INIS)

    Guthrie, G.L.; Simonen, E.P.

    1989-02-01

    Differences between power reactor and test reactor data bases have been evaluated. Charpy shift data has been assembled from specimens irradiated in both high-flux test reactors and low-flux power reactors. Preliminary tests for the existence of a bias between test and power reactor data bases indicate a possible bias between the weld data bases. The bias is nonconservative for power predictive purposes, using test reactor data. The lesser shift for test reactor data compared to power reactor data is interpreted primarily in terms of greater point defect recombination for test reactor fluxes compared to power reactor fluxes. The possibility of greater thermal aging effects during lower damage rates is also discussed. 15 refs., 5 figs., 2 tabs

  1. Determination of the smoking gun of intent: significance testing of forced choice results in social security claimants.

    Science.gov (United States)

    Binder, Laurence M; Chafetz, Michael D

    2018-01-01

    Significantly below-chance findings on forced choice tests have been described as revealing "the smoking gun of intent" that proved malingering. The issues of probability levels, one-tailed vs. two-tailed tests, and the combining of PVT scores on significantly below-chance findings were addressed in a previous study, with a recommendation of a probability level of .20 to test the significance of below-chance results. The purpose of the present study was to determine the rate of below-chance findings in a Social Security Disability claimant sample using the previous recommendations. We compared the frequency of below-chance results on forced choice performance validity tests (PVTs) at two levels of significance, .05 and .20, and when using significance testing on individual subtests of the PVTs compared with total scores in claimants for Social Security Disability in order to determine the rate of the expected increase. The frequency of significant results increased with the higher level of significance for each subtest of the PVT and when combining individual test sections to increase the number of test items, with up to 20% of claimants showing significantly below-chance results at the higher p-value. These findings are discussed in light of Social Security Administration policy, showing an impact on policy issues concerning child abuse and neglect, and the importance of using these techniques in evaluations for Social Security Disability.
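
    Testing whether a forced-choice score falls significantly below chance is a one-tailed binomial test against p = 0.5. A sketch with made-up counts, evaluated at the .20 criterion recommended in the earlier work cited above (assuming SciPy 1.7+ for binomtest):

      from scipy.stats import binomtest

      # e.g. 27 correct responses on 72 two-alternative forced-choice items
      result = binomtest(k=27, n=72, p=0.5, alternative="less")
      print(f"one-tailed p = {result.pvalue:.3f}")
      print("significantly below chance at the .20 criterion:",
            result.pvalue < 0.20)

    Combining sections of a PVT into a single total before testing, as discussed in the record, simply increases n and thereby the power to detect below-chance responding.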

  2. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

    Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouraging meta-analytic thinking and the use of CIs but, for full benefit, such highly desirable statistical reform also requires that researchers interpret CIs without recourse to NHST.
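
    The following sketch illustrates the paper's central point with invented numbers: two fictitious studies with nearly identical effects can fall on opposite sides of the significance threshold while their confidence intervals overlap substantially. The means and standard errors are assumptions for illustration only.

```python
import math

def summary(mean_diff, se, label, z=1.96):
    """Print the 95% CI and two-sided normal-theory p-value for a mean difference."""
    lo, hi = mean_diff - z * se, mean_diff + z * se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(mean_diff / se) / math.sqrt(2))))
    print(f"{label}: diff={mean_diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], p={p:.3f}")

# Two fictitious studies with nearly identical effects but different precision
summary(3.0, 1.4, "Study A (significant)")      # p ~ .03
summary(2.6, 1.5, "Study B (non-significant)")  # p ~ .08
# The heavily overlapping intervals suggest the two results are consistent,
# even though only one of them is 'statistically significant'.
```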

  3. Profile of Students' Creative Thinking Skills on Quantitative Project-Based Protein Testing using Local Materials

    Directory of Open Access Journals (Sweden)

    D. K. Sari

    2017-04-01

    Full Text Available The purpose of this study is to obtain a profile of students' creative thinking skills on quantitative project-based protein testing using local materials. The research used a quasi-experimental pre-test/post-test control group design with 40 students in a biochemistry laboratory course. The research instruments were a pre-test and a post-test of creative thinking skills in essay form and a student questionnaire. The analysis was performed with SPSS 22.0 to test for normality, to run the Mann-Whitney U test for nonparametric comparisons, to compute N-Gain scores, and to summarize the percentage of student responses to the practicum. The mean pre-test score in the experimental group was 8.25, while in the control group it was 6.90. After attending a project-based practicum with local materials, the experimental group obtained a mean post-test score of 37.55, while the control class obtained 11.18. The improvement in students' creative thinking skills can be seen from the average N-Gain: 0.32 (medium category) in the experimental class and 0.05 (low category) in the control class. The experimental and control classes differed significantly in creative thinking skills across fluency, flexibility, novelty, and detail. It can be concluded that quantitative project-based protein testing using local materials can improve students' creative thinking skills; 71% of the students felt that it made them more creative in doing a practicum in the laboratory.
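
    A minimal sketch of the two analysis steps named above, normalized gain (N-Gain) and the Mann-Whitney U test, on made-up pre/post scores whose means loosely match those reported; the maximum score and group sizes are assumptions.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def n_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: (post - pre) / (max - pre)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post - pre) / (max_score - pre)

rng = np.random.default_rng(0)
# Made-up pre/post scores roughly matching the reported group means
pre_exp,  post_exp  = rng.normal(8.25, 2, 20), rng.normal(37.55, 5, 20)
pre_ctrl, post_ctrl = rng.normal(6.90, 2, 20), rng.normal(11.18, 5, 20)

gain_exp, gain_ctrl = n_gain(pre_exp, post_exp), n_gain(pre_ctrl, post_ctrl)
u, p = mannwhitneyu(gain_exp, gain_ctrl, alternative="two-sided")
print(f"mean N-Gain: exp={gain_exp.mean():.2f}, ctrl={gain_ctrl.mean():.2f}, "
      f"Mann-Whitney U={u:.0f}, p={p:.4f}")
```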

  4. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
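
    A minimal sketch of the proposed bootstrap procedure using the Euclidean distance between normalized summary histograms; the "cloud object" samples here are synthetic, and the pooling details may differ from the authors' implementation.

```python
import numpy as np

def euclidean(h1, h2):
    return np.sqrt(np.sum((h1 - h2) ** 2))

def summary_hist(samples, bins):
    """Sum the per-object histograms and normalize to a frequency distribution."""
    h = sum(np.histogram(s, bins=bins)[0] for s in samples)
    return h / h.sum()

def bootstrap_pvalue(group_a, group_b, bins, n_boot=2000, rng=None):
    """Bootstrap significance of the distance between two summary histograms.
    Individual histograms (cloud objects) are resampled with replacement from
    the pooled set to build the null distribution of the distance statistic."""
    rng = rng or np.random.default_rng(0)
    observed = euclidean(summary_hist(group_a, bins), summary_hist(group_b, bins))
    pooled, n_a, count = group_a + group_b, len(group_a), 0
    for _ in range(n_boot):
        idx = rng.integers(0, len(pooled), size=len(pooled))
        resampled = [pooled[i] for i in idx]
        d = euclidean(summary_hist(resampled[:n_a], bins),
                      summary_hist(resampled[n_a:], bins))
        count += d >= observed
    return (count + 1) / (n_boot + 1)

rng = np.random.default_rng(1)
bins = np.linspace(0, 10, 21)
small = [rng.gamma(2.0, 1.0, 200) for _ in range(30)]   # synthetic cloud objects
large = [rng.gamma(2.4, 1.0, 200) for _ in range(30)]
print("bootstrap p-value:", bootstrap_pvalue(small, large, bins))
```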

  5. Design and Testing of a Flexible Inclinometer Probe for Model Tests of Landslide Deep Displacement Measurement.

    Science.gov (United States)

    Zhang, Yongquan; Tang, Huiming; Li, Changdong; Lu, Guiying; Cai, Yi; Zhang, Junrong; Tan, Fulin

    2018-01-14

    The physical model test of landslides is important for studying landslide structural damage, and parameter measurement is key in this process. To meet the measurement requirements for deep displacement in landslide physical models, an automatic flexible inclinometer probe with good coupling and large deformation capacity was designed. The flexible inclinometer probe consists of several gravity acceleration sensing units that are protected and positioned by silicone encapsulation; all the units are connected to an RS-485 communication bus. By sensing the two-axis tilt angle, the direction and magnitude of the displacement for each measurement unit can be calculated, and the overall displacement is then accumulated over all units, integrated from bottom to top in turn. In the conversion from angle to displacement, two spline interpolation methods are introduced to correct and resample the data: one interpolates the displacement after conversion, and the other interpolates the angle before conversion. Compared with the result read from checkered paper, the latter proves to have a better effect, provided that the displacement curve is shifted up by half the length of a unit. The flexible inclinometer is verified with respect to its principle and arrangement by a laboratory physical model test, and the test results are highly consistent with the actual deformation of the landslide model.
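
    A minimal sketch of the better-performing conversion described above: interpolate the per-unit tilt angles with a cubic spline first, then convert to displacement and accumulate from bottom to top. The unit length and tilt values are invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

unit_len = 0.5                                   # assumed length of one sensing unit (m)
depth = np.arange(0, 10 * unit_len, unit_len)    # unit positions, bottom to top
tilt_deg = np.array([0.0, 0.1, 0.3, 0.8, 1.5, 2.6, 3.2, 3.6, 3.8, 3.9])  # measured tilt

# Interpolate the *angles* on a finer grid before converting to displacement
fine_depth = np.linspace(depth[0], depth[-1], 200)
tilt_fine = CubicSpline(depth, np.radians(tilt_deg))(fine_depth)

# Each small segment contributes d*sin(theta); accumulate from bottom to top
seg = np.diff(fine_depth, prepend=fine_depth[0])
displacement = np.cumsum(seg * np.sin(tilt_fine))
print(f"displacement at the top of the probe: {displacement[-1] * 1000:.1f} mm")
```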

  6. Computer-Aided Test Flow in Core-Based Design

    NARCIS (Netherlands)

    Zivkovic, V.; Tangelder, R.J.W.T.; Kerkhoff, Hans G.

    2000-01-01

    This paper copes with the efficient test-pattern generation in a core-based design. A consistent Computer-Aided Test (CAT) flow is proposed based on the required core-test strategy. It generates a test-pattern set for the embedded cores with high fault coverage and low DfT area overhead. The CAT

  7. Ethernet-based test stand for a CAN network

    Science.gov (United States)

    Ziebinski, Adam; Cupek, Rafal; Drewniak, Marek

    2017-11-01

    This paper presents a test stand for the CAN-based systems that are used in automotive systems. The authors propose applying an Ethernet-based test system that supports the virtualisation of a CAN network. The proposed solution has many advantages compared to classical test beds that are based on dedicated CAN-PC interfaces: it allows the physical constraints associated with the number of interfaces that can be simultaneously connected to a tested system to be avoided, which enables the test time for parallel tests to be shortened; the high speed of Ethernet transmission allows for more frequent sampling of the messages that are transmitted by a CAN network (as the authors show in the experiment results section) and the cost of the proposed solution is much lower than the traditional lab-based dedicated CAN interfaces for PCs.

  8. An introduction to Bartlett correction and bias reduction

    CERN Document Server

    Cordeiro, Gauss M

    2014-01-01

    This book presents a concise introduction to Bartlett and Bartlett-type corrections of statistical tests and bias correction of point estimators. The underlying idea behind both groups of corrections is to obtain higher accuracy in small samples. While the main focus is on corrections that can be analytically derived, the authors also present alternative strategies for improving estimators and tests based on bootstrap, a data resampling technique, and discuss concrete applications to several important statistical models.

  9. UFC advisor: An AI-based system for the automatic test environment

    Science.gov (United States)

    Lincoln, David T.; Fink, Pamela K.

    1990-01-01

    The Air Logistics Command within the Air Force is responsible for maintaining a wide variety of aircraft fleets and weapon systems. To maintain these fleets and systems requires specialized test equipment that provides data concerning the behavior of a particular device. The test equipment is used to 'poke and prod' the device to determine its functionality. The data represent voltages, pressures, torques, temperatures, etc. and are called testpoints. These testpoints can be defined numerically as being in or out of limits/tolerance. Some test equipment is termed 'automatic' because it is computer-controlled. Due to the fact that effective maintenance in the test arena requires a significant amount of expertise, it is an ideal area for the application of knowledge-based system technology. Such a system would take testpoint data, identify values out-of-limits, and determine potential underlying problems based on what is out-of-limits and how far. This paper discusses the application of this technology to a device called the Unified Fuel Control (UFC) which is maintained in this manner.

  10. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
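
    A minimal 1-D illustration, using scikit-learn rather than the authors' pipeline, of how a Gaussian process posterior yields both an interpolated intensity and a location-dependent uncertainty for points that fall between base-grid samples.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# 1-D "image": intensities sampled on the base grid
x_grid = np.arange(0, 10, 1.0).reshape(-1, 1)
intensity = np.sin(x_grid).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(x_grid, intensity)

# Resampled points fall between base-grid locations; the posterior standard
# deviation quantifies the interpolation uncertainty at each of them.
x_new = np.array([[0.5], [1.0], [3.3], [7.9]])
mean, std = gp.predict(x_new, return_std=True)
for xi, m, s in zip(x_new.ravel(), mean, std):
    print(f"x={xi:.1f}: interpolated={m:+.3f}, uncertainty={s:.3f}")
```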

  11. Automated Search-Based Robustness Testing for Autonomous Vehicle Software

    Directory of Open Access Journals (Sweden)

    Kevin M. Betts

    2016-01-01

    Full Text Available Autonomous systems must successfully operate in complex time-varying spatial environments even when dealing with system faults that may occur during a mission. Consequently, evaluating the robustness, or ability to operate correctly under unexpected conditions, of autonomous vehicle control software is an increasingly important issue in software testing. New methods to automatically generate test cases for robustness testing of autonomous vehicle control software in closed-loop simulation are needed. Search-based testing techniques were used to automatically generate test cases, consisting of initial conditions and fault sequences, intended to challenge the control software more than test cases generated using current methods. Two different search-based testing methods, genetic algorithms and surrogate-based optimization, were used to generate test cases for a simulated unmanned aerial vehicle attempting to fly through an entryway. The effectiveness of the search-based methods in generating challenging test cases was compared to both a truth reference (full combinatorial testing) and the method most commonly used today (Monte Carlo testing). The search-based testing techniques demonstrated better performance than Monte Carlo testing for both of the test case generation performance metrics: (1) finding the single most challenging test case and (2) finding the set of fifty test cases with the highest mean degree of challenge.

  12. Computer-aided assessment of breast density: comparison of supervised deep learning and feature-based statistical learning.

    Science.gov (United States)

    Li, Songfeng; Wei, Jun; Chan, Heang-Ping; Helvie, Mark A; Roubidoux, Marilyn A; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir M; Samala, Ravi K

    2018-01-09

    Breast density is one of the most significant factors that is associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input 'for processing' DMs was first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm  ×  800 µm from 100 µm  ×  100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced mammography quality standards act radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice's coefficient (DC) of 0.79  ±  0.13 and Pearson's correlation (r) of 0.97, whereas feature-based learning obtained DC  =  0.72  ±  0.18 and r  =  0.85. For the independent test set, DCNN achieved DC  =  0.76  ±  0.09 and r  =  0.94, while feature-based learning achieved DC  =  0.62  ±  0.21 and r  =  0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as well as
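
    A small sketch of the two evaluation metrics reported above, Dice's coefficient between dense-tissue masks and Pearson's correlation between percent-density values, computed on synthetic stand-ins for the reference and automated segmentations.

```python
import numpy as np

def dice(a, b):
    """Dice's coefficient between two binary dense-tissue masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(0)
reference = rng.random((128, 128)) > 0.6                  # stand-in for radiologist mask
predicted = reference ^ (rng.random((128, 128)) > 0.95)   # perturbed automated mask

pd_ref = np.array([12.5, 30.1, 45.0, 22.3, 60.7])  # hypothetical percent-density values
pd_est = np.array([14.0, 28.5, 47.2, 20.9, 58.1])
r = np.corrcoef(pd_ref, pd_est)[0, 1]
print(f"Dice = {dice(reference, predicted):.3f}, Pearson r = {r:.3f}")
```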

  13. Computer-aided assessment of breast density: comparison of supervised deep learning and feature-based statistical learning

    Science.gov (United States)

    Li, Songfeng; Wei, Jun; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir M.; Samala, Ravi K.

    2018-01-01

    Breast density is one of the most significant factors that is associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input ‘for processing’ DMs was first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm  ×  800 µm from 100 µm  ×  100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced mammography quality standards act radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice’s coefficient (DC) of 0.79  ±  0.13 and Pearson’s correlation (r) of 0.97, whereas feature-based learning obtained DC  =  0.72  ±  0.18 and r  =  0.85. For the independent test set, DCNN achieved DC  =  0.76  ±  0.09 and r  =  0.94, while feature-based learning achieved DC  =  0.62  ±  0.21 and r  =  0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as

  14. Comparison of the Clock Test and a questionnaire-based test for ...

    African Journals Online (AJOL)

    Comparison of the Clock Test and a questionnaire-based test for screening for cognitive impairment in Nigerians. D J VanderJagt, S Ganga, M O Obadofin, P Stanley, M Zimmerman, B J Skipper, R H Glew ...

  15. Accounting for Proof Test Data in a Reliability Based Design Optimization Framework

    Science.gov (United States)

    Ventor, Gerharad; Scotti, Stephen J.

    2012-01-01

    This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.

  16. The predictive value of the sacral base pressure test in detecting specific types of sacroiliac dysfunction

    Science.gov (United States)

    Mitchell, Travis D.; Urli, Kristina E.; Breitenbach, Jacques; Yelverton, Chris

    2007-01-01

    Abstract Objective This study aimed to evaluate the validity of the sacral base pressure test in diagnosing sacroiliac joint dysfunction. It also determined the predictive powers of the test in determining which type of sacroiliac joint dysfunction was present. Methods This was a double-blind experimental study with 62 participants. The results from the sacral base pressure test were compared against a cluster of previously validated tests of sacroiliac joint dysfunction to determine its validity and predictive powers. The external rotation of the feet, occurring during the sacral base pressure test, was measured using a digital inclinometer. Results There was no statistically significant difference in the results of the sacral base pressure test between the types of sacroiliac joint dysfunction. In terms of the results of validity, the sacral base pressure test was useful in identifying positive values of sacroiliac joint dysfunction. It was fairly helpful in correctly diagnosing patients with negative test results; however, it had only a “slight” agreement with the diagnosis for κ interpretation. Conclusions In this study, the sacral base pressure test was not a valid test for determining the presence of sacroiliac joint dysfunction or the type of dysfunction present. Further research comparing the agreement of the sacral base pressure test or other sacroiliac joint dysfunction tests with a criterion standard of diagnosis is necessary. PMID:19674694

  17. Test-Access Planning and Test Scheduling for Embedded Core-Based System Chips

    OpenAIRE

    Goel, Sandeep Kumar

    2005-01-01

    Advances in semiconductor process technology enable the creation of a complete system on one single die, the so-called system chip or SOC. To reduce time-to-market for large SOCs, reuse of pre-designed and pre-verified blocks called cores is employed. Like the design style, testing of SOCs can be best approached in a core-based fashion. In order to enable core-based test development, an embedded core should be isolated from its surrounding circuitry and electrical test access from chip pins...

  18. Feature selection based on SVM significance maps for classification of dementia

    NARCIS (Netherlands)

    E.E. Bron (Esther); M. Smits (Marion); J.C. van Swieten (John); W.J. Niessen (Wiro); S. Klein (Stefan)

    2014-01-01

    Support vector machine significance maps (SVM p-maps) previously showed clusters of significantly different voxels in dementia-related brain regions. We propose a novel feature selection method for classification of dementia based on these p-maps. In our approach, the SVM p-maps are

  19. Automatic Test Pattern Generator for Fuzzing Based on Finite State Machine

    Directory of Open Access Journals (Sweden)

    Ming-Hung Wang

    2017-01-01

    Full Text Available With the rapid development of the Internet, several emerging technologies are adopted to construct fancy, interactive, and user-friendly websites. Among these technologies, HTML5 is a popular one and is widely used in establishing modern sites. However, the security issues in the new web technologies are also raised and are worthy of investigation. For vulnerability investigation, many previous studies used fuzzing and focused on generation-based approaches to produce test cases for fuzzing; however, these methods require a significant amount of knowledge and mental efforts to develop test patterns for generating test cases. To decrease the entry barrier of conducting fuzzing, in this study, we propose a test pattern generation algorithm based on the concept of finite state machines. We apply graph analysis techniques to extract paths from finite state machines and use these paths to construct test patterns automatically. According to the proposal, fuzzing can be completed through inputting a regular expression corresponding to the test target. To evaluate the performance of our proposal, we conduct an experiment in identifying vulnerabilities of the input attributes in HTML5. According to the results, our approach is not only efficient but also effective for identifying weak validators in HTML5.
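
    A minimal sketch of the core idea, enumerating paths through a finite state machine and emitting each path as a test pattern; the toy FSM and symbols below are invented and are not the HTML5 grammar used in the paper.

```python
from typing import Dict, List, Tuple

# Hypothetical FSM: state -> list of (input symbol, next state)
FSM: Dict[str, List[Tuple[str, str]]] = {
    "start": [("<video", "tag"), ("<audio", "tag")],
    "tag":   [(" src=", "attr"), (">", "end")],
    "attr":  [("'x.mp4'", "tag")],
    "end":   [],
}

def enumerate_paths(fsm, state="start", prefix=None, max_len=6):
    """Depth-first enumeration of input sequences, stopping at terminal
    states or when the path reaches max_len symbols."""
    prefix = prefix or []
    if not fsm[state] or len(prefix) >= max_len:
        yield prefix
        return
    for symbol, nxt in fsm[state]:
        yield from enumerate_paths(fsm, nxt, prefix + [symbol], max_len)

for path in enumerate_paths(FSM):
    print("test pattern:", "".join(path))
```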

  20. Wind turbine blade testing system using base excitation

    Science.gov (United States)

    Cotrell, Jason; Thresher, Robert; Lambert, Scott; Hughes, Scott; Johnson, Jay

    2014-03-25

    An apparatus (500) for fatigue testing elongate test articles (404) including wind turbine blades through forced or resonant excitation of the base (406) of the test articles (404). The apparatus (500) includes a testing platform or foundation (402). A blade support (410) is provided for retaining or supporting a base (406) of an elongate test article (404), and the blade support (410) is pivotally mounted on the testing platform (402) with at least two degrees of freedom of motion relative to the testing platform (402). An excitation input assembly (540) is interconnected with the blade support (410) and includes first and second actuators (444, 446, 541) that act to concurrently apply forces or loads to the blade support (410). The actuator forces are cyclically applied in first and second transverse directions. The test article (404) responds to shaking of its base (406) by oscillating in two, transverse directions (505, 507).

  1. Normal Threshold Size of Stimuli in Children Using a Game-Based Visual Field Test.

    Science.gov (United States)

    Wang, Yanfang; Ali, Zaria; Subramani, Siddharth; Biswas, Susmito; Fenerty, Cecilia; Henson, David B; Aslam, Tariq

    2017-06-01

    The aim of this study was to demonstrate and explore the ability of novel game-based perimetry to establish normal visual field thresholds in children. One hundred and eighteen children (aged 8.0 ± 2.8 years old) with no history of visual field loss or significant medical history were recruited. Each child had one eye tested using a game-based visual field test 'Caspar's Castle' at four retinal locations 12.7° (N = 118) from fixation. Thresholds were established repeatedly using up/down staircase algorithms with stimuli of varying diameter (luminance 20 cd/m², duration 200 ms, background luminance 10 cd/m²). Relationships between threshold and age were determined along with measures of intra- and intersubject variability. The game-based visual field test was able to establish threshold estimates in the full range of children tested. Threshold size reduced with increasing age in children. Intrasubject variability and intersubject variability were inversely related to age in children. Normal visual field thresholds were established for specific locations in children using a novel game-based visual field test. These could be used as a foundation for developing a game-based perimetry screening test for children.
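
    A minimal simulation of an up/down staircase over stimulus diameter against a simulated observer, in the spirit of the threshold procedure described above; the step size, stopping rule, and observer model are assumptions, not the Caspar's Castle parameters.

```python
import numpy as np

def staircase_threshold(true_threshold, start=2.0, step=0.1,
                        n_reversals=8, rng=None):
    """Simple 1-up/1-down staircase over stimulus diameter (degrees).
    The simulated observer sees the stimulus when diameter exceeds
    the true threshold plus a small amount of internal noise."""
    rng = rng or np.random.default_rng(0)
    size, direction, reversals = start, -1, []
    while len(reversals) < n_reversals:
        seen = size + rng.normal(0, 0.05) > true_threshold
        new_direction = -1 if seen else +1   # shrink after a hit, grow after a miss
        if new_direction != direction:
            reversals.append(size)
        direction = new_direction
        size = max(0.05, size + new_direction * step)
    return np.mean(reversals[-6:])           # average the last reversals

print(f"estimated threshold: {staircase_threshold(0.8):.2f} deg")
```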

  2. Rule-based Test Generation with Mind Maps

    Directory of Open Access Journals (Sweden)

    Dimitry Polivaev

    2012-02-01

    Full Text Available This paper introduces basic concepts of rule based test generation with mind maps, and reports experiences learned from industrial application of this technique in the domain of smart card testing by Giesecke & Devrient GmbH over the last years. It describes the formalization of test selection criteria used by our test generator, our test generation architecture and test generation framework.

  3. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    Directory of Open Access Journals (Sweden)

    Han Bossier

    2018-01-01

    Full Text Available Given the increasing number of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima with possibly the associated effect sizes to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which only uses peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height] and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results.

  4. Automatic bearing fault diagnosis of permanent magnet synchronous generators in wind turbines subjected to noise interference

    Science.gov (United States)

    Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo

    2018-02-01

    An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subjected to low rotating speeds, speed fluctuations, and electrical device noise interferences. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronous sampled vibration signal of the fault bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Two types of fault bearings with different fault sizes in a PMSG test rig are subjected to experiments to test the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.
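
    A minimal sketch of the angular-domain resampling step: given the rotating phase recovered from the current signal, the time-sampled vibration is re-interpolated at equal angle increments so that shaft-speed fluctuations no longer smear the fault orders. The signals below are synthetic.

```python
import numpy as np

fs = 10_000                                              # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)
speed_hz = 5 + 0.5 * np.sin(2 * np.pi * 0.5 * t)         # fluctuating shaft speed
phase = 2 * np.pi * np.cumsum(speed_hz) / fs             # rotating angle from current signal
vibration = np.sin(7 * phase) + 0.3 * np.random.default_rng(0).normal(size=t.size)

# Resample the vibration at uniform angular increments (order tracking)
samples_per_rev = 256
uniform_phase = np.arange(phase[0], phase[-1], 2 * np.pi / samples_per_rev)
vib_angular = np.interp(uniform_phase, phase, vibration)

# In the angular domain the simulated 7th-order component lines up at a fixed order
spectrum = np.abs(np.fft.rfft(vib_angular))
orders = np.fft.rfftfreq(vib_angular.size, d=1 / samples_per_rev)
print("dominant order:", orders[spectrum.argmax()])
```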

  5. Base Deficit as an Indicator of Significant Blunt Abdominal Trauma

    African Journals Online (AJOL)

    multiruka1

    … important cause of morbidity and mortality among trauma patients. … the use of BD as an indicator of significant BAT. Key words: Base deficit, Blunt abdominal trauma, Predictor.

  6. Who is more skilful? Doping and its implication on the validity, morality and significance of the sporting test

    DEFF Research Database (Denmark)

    Christiansen, Ask Vest; Møller, Rasmus Bysted

    2016-01-01

    In this article, we explore if and in what ways doping can be regarded as a challenge to the validity, morality and significance of the sporting test. We start out by examining Kalevi Heinilä’s analysis of the logic of elite sport, which shows how the ‘spiral of competition’ leads to the use of ‘dubious means’. As a supplement to Heinilä, we revisit American sports historian John Hoberman’s writings on sport and technology. Then we discuss what function equality and fairness have in sport and what separates legitimate from illegitimate ways of enhancing performance. We proceed by discussing the line of argumentation set forth by philosopher Torbjörn Tännsjö on how our admiration of sporting superiority based on natural talent or ‘birth luck’ is immoral. We analyse his argument in favour of eliminating the significance of meritless luck in sport by lifting the ban on doping and argue that its...

  7. Computer-Aided Test Flow in Core-Based Design

    NARCIS (Netherlands)

    Zivkovic, V.; Tangelder, R.J.W.T.; Kerkhoff, Hans G.

    2000-01-01

    This paper copes with the test-pattern generation and fault coverage determination in the core based design. The basic core-test strategy that one has to apply in the core-based design is stated in this work. A Computer-Aided Test (CAT) flow is proposed resulting in accurate fault coverage of

  8. An authoring tool for building both mobile adaptable tests and web-based adaptive or classic tests

    NARCIS (Netherlands)

    Romero, C.; Ventura, S.; Hervás, C.; De Bra, P.M.E.; Wade, V.; Ashman, H.; Smyth, B.

    2006-01-01

    This paper describes Test Editor, an authoring tool for building both mobile adaptable tests and web-based adaptive or classic tests. This tool facilitates the development and maintenance of different types of XML-based multiple- choice tests for using in web-based education systems and wireless

  9. Benford's law first significant digit and distribution distances for testing the reliability of financial reports in developing countries

    Science.gov (United States)

    Shi, Jing; Ausloos, Marcel; Zhu, Tingting

    2018-02-01

    We discuss a common suspicion about reported financial data in 10 industrial sectors of the 6 so-called "main developing countries" over the time interval [2000-2014]. These data are examined through Benford's law for the first significant digit and through distribution distance tests. It is shown that several visually anomalous data points have to be removed a priori. Thereafter, the distributions follow the first-significant-digit law much better, indicating the usefulness of a Benford's law test from the research starting line. The same holds true for the distance tests. A few outliers are pointed out.
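
    A minimal first-significant-digit check against Benford's law using a chi-square distance, on synthetic stand-ins for reported financial figures; the paper's distribution-distance tests are not reproduced here.

```python
import numpy as np
from scipy.stats import chisquare

def first_digits(values):
    """First significant digit of each nonzero value."""
    return np.array([int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0])

benford = np.log10(1 + 1 / np.arange(1, 10))        # P(d) = log10(1 + 1/d)

rng = np.random.default_rng(0)
reported = rng.lognormal(mean=10, sigma=2, size=5000)   # synthetic financial figures
digits = first_digits(reported)
observed = np.array([(digits == d).sum() for d in range(1, 10)])

stat, p = chisquare(observed, f_exp=benford * observed.sum())
print("observed digit frequencies:", np.round(observed / observed.sum(), 3))
print(f"chi-square = {stat:.1f}, p = {p:.3f}")
```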

  10. Predicting the occurrence of iron chlorosis in grapevine with tests based on soil iron forms

    Directory of Open Access Journals (Sweden)

    Isabel Díaz de la Torre

    2010-06-01

    Significance and impact of study: This study has shown the limited usefulness of tests based on the contents and reactivity of the soil carbonate to predict the occurrence of Fe chlorosis in grapevine; tests capable of estimating the contents of the labile soil Fe forms constitute the best alternative.

  11. Meal Microstructure Characterization from Sensor-Based Food Intake Detection

    Directory of Open Access Journals (Sweden)

    Abul Doulah

    2017-07-01

    Full Text Available To avoid the pitfalls of self-reported dietary intake, wearable sensors can be used. Many food ingestion sensors offer the ability to automatically detect food intake using time resolutions that range from 23 ms to 8 min. There is no defined standard time resolution to accurately measure ingestive behavior or a meal microstructure. This paper aims to estimate the time resolution needed to accurately represent the microstructure of meals such as duration of eating episode, the duration of actual ingestion, and number of eating events. Twelve participants wore the automatic ingestion monitor (AIM) and kept a standard diet diary to report their food intake in free-living conditions for 24 h. As a reference, participants were also asked to mark food intake with a push button sampled every 0.1 s. The duration of eating episodes, duration of ingestion, and number of eating events were computed from the food diary, AIM, and the push button resampled at different time resolutions (0.1–30 s). ANOVA and multiple comparison tests showed that the duration of eating episodes estimated from the diary differed significantly from that estimated by the AIM and the push button (p-value <0.001). There were no significant differences in the number of eating events for push button resolutions of 0.1, 1, and 5 s, but there were significant differences in resolutions of 10–30 s (p-value <0.05). The results suggest that the desired time resolution of sensor-based food intake detection should be ≤5 s to accurately detect meal microstructure. Furthermore, the AIM provides more accurate measurement of the eating episode duration than the diet diary.

  12. A Note on Comparing the Power of Test Statistics at Low Significance Levels.

    Science.gov (United States)

    Morris, Nathan; Elston, Robert

    2011-01-01

    It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
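
    A small numeric illustration of the point, assuming chi-square tests: a 1-df test that captures only part of the signal is compared with a 4-df test that captures all of it, at α = 0.05 and at α = 5 × 10⁻⁸. The noncentrality values are invented.

```python
from scipy.stats import chi2, ncx2

def power(alpha, df, noncentrality):
    """Power of a chi-square test with the given df and noncentrality."""
    return ncx2.sf(chi2.ppf(1 - alpha, df), df, noncentrality)

# Hypothetical comparison: a 1-df test capturing part of the signal
# (noncentrality 20) vs. a 4-df test capturing all of it (noncentrality 30).
for alpha in (0.05, 5e-8):
    p1 = power(alpha, df=1, noncentrality=20.0)
    p4 = power(alpha, df=4, noncentrality=30.0)
    print(f"alpha={alpha:g}: power 1-df = {p1:.3f}, 4-df = {p4:.3f}")
# At alpha = 0.05 the two tests look nearly equivalent; at the genome-wide
# level the extra degrees of freedom cost comparatively little.
```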

  13. How Many Subjects are Needed for a Visual Field Normative Database? A Comparison of Ground Truth and Bootstrapped Statistics.

    Science.gov (United States)

    Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K

    2018-03-01

    The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance for the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different 'x' and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different from the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
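
    A minimal sketch of the set-size bootstrap described above: resample x subjects with replacement, recompute the 5th/95th percentile limits and SD, and compare them with the full-cohort values. The sensitivities are simulated, not HFA data.

```python
import numpy as np

rng = np.random.default_rng(0)
ground_truth = rng.normal(30, 2.5, size=500)        # simulated sensitivities (dB)
true_limits = np.percentile(ground_truth, [5, 95])

def bootstrap_limits(data, set_size, n_resamples=1000, rng=rng):
    """Mean 5th/95th percentile and SD over bootstrap samples of a given set size."""
    lows, highs, sds = [], [], []
    for _ in range(n_resamples):
        sample = rng.choice(data, size=set_size, replace=True)
        lo, hi = np.percentile(sample, [5, 95])
        lows.append(lo); highs.append(hi); sds.append(sample.std(ddof=1))
    return np.mean(lows), np.mean(highs), np.mean(sds)

print(f"ground truth: 5th={true_limits[0]:.2f}, 95th={true_limits[1]:.2f}, "
      f"SD={ground_truth.std(ddof=1):.2f}")
for x in (30, 60, 150, 250):
    lo, hi, sd = bootstrap_limits(ground_truth, x)
    print(f"set size {x:>3}: 5th={lo:.2f}, 95th={hi:.2f}, SD={sd:.2f}")
```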

  14. Illegal performance enhancing drugs and doping in sport: a picture-based brief implicit association test for measuring athletes’ attitudes

    Science.gov (United States)

    2014-01-01

    Background Doping attitude is a key variable in predicting athletes’ intention to use forbidden performance enhancing drugs. Indirect reaction-time based attitude tests, such as the implicit association test, conceal the ultimate goal of measurement from the participant better than questionnaires. Indirect tests are especially useful when socially sensitive constructs such as attitudes towards doping need to be described. The present study serves the development and validation of a novel picture-based brief implicit association test (BIAT) for testing athletes’ attitudes towards doping in sport. It shall provide the basis for a transnationally compatible research instrument able to harmonize anti-doping research efforts. Method Following a known-group differences validation strategy, the doping attitudes of 43 athletes from bodybuilding (representative for a highly doping prone sport) and handball (as a contrast group) were compared using the picture-based doping-BIAT. The Performance Enhancement Attitude Scale (PEAS) was employed as a corresponding direct measure in order to additionally validate the results. Results As expected, in the group of bodybuilders, indirectly measured doping attitudes as tested with the picture-based doping-BIAT were significantly less negative (η2 = .11). The doping-BIAT and PEAS scores correlated significantly at r = .50 for bodybuilders, and not significantly at r = .36 for handball players. There was a low error rate (7%) and a satisfactory internal consistency (rtt = .66) for the picture-based doping-BIAT. Conclusions The picture-based doping-BIAT constitutes a psychometrically tested method, ready to be adopted by the international research community. The test can be administered via the internet. All test material is available “open source”. The test might be implemented, for example, as a new effect-measure in the evaluation of prevention programs. PMID:24479865

  15. Illegal performance enhancing drugs and doping in sport: a picture-based brief implicit association test for measuring athletes' attitudes.

    Science.gov (United States)

    Brand, Ralf; Heck, Philipp; Ziegler, Matthias

    2014-01-30

    Doping attitude is a key variable in predicting athletes' intention to use forbidden performance enhancing drugs. Indirect reaction-time based attitude tests, such as the implicit association test, conceal the ultimate goal of measurement from the participant better than questionnaires. Indirect tests are especially useful when socially sensitive constructs such as attitudes towards doping need to be described. The present study serves the development and validation of a novel picture-based brief implicit association test (BIAT) for testing athletes' attitudes towards doping in sport. It shall provide the basis for a transnationally compatible research instrument able to harmonize anti-doping research efforts. Following a known-group differences validation strategy, the doping attitudes of 43 athletes from bodybuilding (representative for a highly doping prone sport) and handball (as a contrast group) were compared using the picture-based doping-BIAT. The Performance Enhancement Attitude Scale (PEAS) was employed as a corresponding direct measure in order to additionally validate the results. As expected, in the group of bodybuilders, indirectly measured doping attitudes as tested with the picture-based doping-BIAT were significantly less negative (η2 = .11). The doping-BIAT and PEAS scores correlated significantly at r = .50 for bodybuilders, and not significantly at r = .36 for handball players. There was a low error rate (7%) and a satisfactory internal consistency (rtt = .66) for the picture-based doping-BIAT. The picture-based doping-BIAT constitutes a psychometrically tested method, ready to be adopted by the international research community. The test can be administered via the internet. All test material is available "open source". The test might be implemented, for example, as a new effect-measure in the evaluation of prevention programs.

  16. Vibration based condition monitoring of a multistage epicyclic gearbox in lifting cranes

    Science.gov (United States)

    Assaad, Bassel; Eltabach, Mario; Antoni, Jérôme

    2014-01-01

    This paper proposes a model-based technique for detecting wear in a multistage planetary gearbox used by lifting cranes. The proposed method establishes a vibration signal model which deals with cyclostationary and autoregressive models. First-order cyclostationarity is addressed by the analysis of the time synchronous average (TSA) of the angular resampled vibration signal. Then an autoregressive model (AR) is applied to the TSA part in order to extract a residual signal containing pertinent fault signatures. The paper also explores a number of methods commonly used in vibration monitoring of planetary gearboxes, in order to make comparisons. In the experimental part of this study, these techniques are applied to accelerated lifetime test bench data for the lifting winch. After processing raw signals recorded with an accelerometer mounted on the outside of the gearbox, a number of condition indicators (CIs) are derived from the TSA signal, the residual autoregressive signal and other signals derived using standard signal processing methods. The goal is to check the evolution of the CIs during the accelerated lifetime test (ALT). Clarity and fluctuation level of the historical trends are finally considered as a criteria for comparing between the extracted CIs.
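
    A minimal sketch of the two signal-model steps named above, a time synchronous average over revolutions followed by an autoregressive fit whose residual retains the non-deterministic content; the signal is synthetic and the AR model is fitted by plain least squares rather than the estimator used in the paper.

```python
import numpy as np

def time_synchronous_average(signal, samples_per_rev):
    """Average the (angle-resampled) signal over complete revolutions."""
    n_rev = len(signal) // samples_per_rev
    return signal[:n_rev * samples_per_rev].reshape(n_rev, samples_per_rev).mean(axis=0)

def ar_residual(x, order=8):
    """Fit an AR(order) model by least squares and return its residual."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coeffs

rng = np.random.default_rng(0)
spr = 128                                  # samples per revolution after angular resampling
angle = np.arange(40 * spr) * 2 * np.pi / spr
signal = np.sin(3 * angle) + 0.2 * np.sin(17 * angle) + 0.1 * rng.normal(size=angle.size)

tsa = time_synchronous_average(signal, spr)
residual = ar_residual(tsa)
print(f"TSA RMS = {np.std(tsa):.3f}, AR residual RMS = {np.std(residual):.3f}")
```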

  17. Computer-Aided Test Flow in Core-Based Design

    OpenAIRE

    Zivkovic, V.; Tangelder, R.J.W.T.; Kerkhoff, Hans G.

    2000-01-01

    This paper copes with the test-pattern generation and fault coverage determination in the core based design. The basic core-test strategy that one has to apply in the core-based design is stated in this work. A Computer-Aided Test (CAT) flow is proposed resulting in accurate fault coverage of embedded cores. The CAT now is applied to a few cores within the Philips Core Test Pilot IC project

  18. A Note on Testing Mediated Effects in Structural Equation Models: Reconciling Past and Current Research on the Performance of the Test of Joint Significance

    Science.gov (United States)

    Valente, Matthew J.; Gonzalez, Oscar; Miocevic, Milica; MacKinnon, David P.

    2016-01-01

    Methods to assess the significance of mediated effects in education and the social sciences are well studied and fall into two categories: single sample methods and computer-intensive methods. A popular single sample method to detect the significance of the mediated effect is the test of joint significance, and a popular computer-intensive method…

  19. Monte Carlo Simulations Comparing Fisher Exact Test and Unequal Variances t Test for Analysis of Differences Between Groups in Brief Hospital Lengths of Stay.

    Science.gov (United States)

    Dexter, Franklin; Bayman, Emine O; Dexter, Elisabeth U

    2017-12-01

    We examined type I and II error rates for analysis of (1) mean hospital length of stay (LOS) versus (2) percentage of hospital LOS that are overnight. These 2 end points are suitable for when LOS is treated as a secondary economic end point. We repeatedly resampled LOS for 5052 discharges of thoracoscopic wedge resections and lung lobectomy at 26 hospitals. Unequal variances t test (Welch method) and Fisher exact test both were conservative (ie, type I error rate less than nominal level). The Wilcoxon rank sum test was included as a comparator; the type I error rates did not differ from the nominal level of 0.05 or 0.01. Fisher exact test was more powerful than the unequal variances t test at detecting differences among hospitals; estimated odds ratio for obtaining P < .05 with Fisher exact test versus unequal variances t test = 1.94, with 95% confidence interval, 1.31-3.01. Fisher exact test and Wilcoxon-Mann-Whitney had comparable statistical power in terms of differentiating LOS between hospitals. For studies with LOS to be used as a secondary end point of economic interest, there is currently considerable interest in the planned analysis being for the percentage of patients suitable for ambulatory surgery (ie, hospital LOS equals 0 or 1 midnight). Our results show that there need not be a loss of statistical power when groups are compared using this binary end point, as compared with either Welch method or Wilcoxon rank sum test.
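
    A small resampling sketch in the spirit of the comparison above: repeatedly draw LOS samples for two hypothetical hospitals and compare how often the Welch t test on mean LOS and the Fisher exact test on the percentage of stays of at most one midnight detect the difference. The LOS distributions are invented.

```python
import numpy as np
from scipy.stats import ttest_ind, fisher_exact

rng = np.random.default_rng(0)

def simulate(n=80, shift=0.3, n_rep=500, alpha=0.05):
    welch_hits = fisher_hits = 0
    for _ in range(n_rep):
        # Skewed hospital LOS in days; hospital B discharges slightly earlier
        los_a = np.round(rng.gamma(1.2, 1.5, n))
        los_b = np.round(np.maximum(rng.gamma(1.2, 1.5, n) - shift, 0))
        _, p_welch = ttest_ind(los_a, los_b, equal_var=False)   # Welch method
        table = [[(los_a <= 1).sum(), (los_a > 1).sum()],
                 [(los_b <= 1).sum(), (los_b > 1).sum()]]        # <=1 midnight vs longer
        _, p_fisher = fisher_exact(table)
        welch_hits += p_welch < alpha
        fisher_hits += p_fisher < alpha
    return welch_hits / n_rep, fisher_hits / n_rep

w, f = simulate()
print(f"detection rate: Welch t = {w:.2f}, Fisher exact (<=1 midnight) = {f:.2f}")
```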

  20. Feasibility and willingness-to-pay for integrated community-based tuberculosis testing

    Directory of Open Access Journals (Sweden)

    Vickery Carter

    2011-11-01

    Full Text Available Abstract Background Community-based screening for TB, combined with HIV and syphilis testing, faces a number of barriers. One significant barrier is the value that target communities place on such screening. Methods Integrated testing for TB, HIV, and syphilis was performed in neighborhoods identified using geographic information systems-based disease mapping. TB testing included skin testing and interferon gamma release assays. Subjects completed a survey describing disease risk factors, healthcare access, healthcare utilization, and willingness to pay for integrated testing. Results Behavioral and social risk factors among the 113 subjects were prevalent (71% prior incarceration, 27% prior or current crack cocaine use, 35% homelessness), and only 38% had a regular healthcare provider. The initial 24 subjects reported that they would be willing to pay a median $20 (IQR: 0-100) for HIV testing and $10 (IQR: 0-100) for TB testing when the question was asked in an open-ended fashion, but when the question was changed to a multiple-choice format, the next 89 subjects reported that they would pay a median $5 for testing, and 23% reported that they would either not pay anything to get tested or would need to be paid $5 to get tested for TB, HIV, or syphilis. Among persons who received tuberculin skin testing, only 14/78 (18%) participants returned to have their skin tests read. Only 14/109 (13%) persons who underwent HIV testing returned to receive their HIV results. Conclusion The relatively high-risk persons screened in this community outreach study placed low value on testing. Reported willingness to pay for such testing, while low, likely overestimated the true willingness to pay. Successful TB, HIV, and syphilis integrated testing programs in high risk populations will likely require one-visit diagnostic testing and incentives.

  1. Visually directed vs. software-based targeted biopsy compared to transperineal template mapping biopsy in the detection of clinically significant prostate cancer.

    Science.gov (United States)

    Valerio, Massimo; McCartan, Neil; Freeman, Alex; Punwani, Shonit; Emberton, Mark; Ahmed, Hashim U

    2015-10-01

    Targeted biopsy based on cognitive or software magnetic resonance imaging (MRI) to transrectal ultrasound registration seems to increase the detection rate of clinically significant prostate cancer as compared with standard biopsy. However, these strategies have not been directly compared against an accurate test yet. The aim of this study was to obtain pilot data on the diagnostic ability of visually directed targeted biopsy vs. software-based targeted biopsy, considering transperineal template mapping (TPM) biopsy as the reference test. Prospective paired cohort study included 50 consecutive men undergoing TPM with one or more visible targets detected on preoperative multiparametric MRI. Targets were contoured on the Biojet software. Patients initially underwent software-based targeted biopsies, then visually directed targeted biopsies, and finally systematic TPM. The detection rate of clinically significant disease (Gleason score ≥3+4 and/or maximum cancer core length ≥4mm) of one strategy against another was compared by 3×3 contingency tables. Secondary analyses were performed using a less stringent threshold of significance (Gleason score ≥4+3 and/or maximum cancer core length ≥6mm). Median age was 68 (interquartile range: 63-73); median prostate-specific antigen level was 7.9ng/mL (6.4-10.2). A total of 79 targets were detected with a mean of 1.6 targets per patient. Of these, 27 (34%), 28 (35%), and 24 (31%) were scored 3, 4, and 5, respectively. At a patient level, the detection rate was 32 (64%), 34 (68%), and 38 (76%) for visually directed targeted, software-based biopsy, and TPM, respectively. Combining the 2 targeted strategies would have led to detection rate of 39 (78%). At a patient level and at a target level, software-based targeted biopsy found more clinically significant diseases than did visually directed targeted biopsy, although this was not statistically significant (22% vs. 14%, P = 0.48; 51.9% vs. 44.3%, P = 0.24). Secondary

  2. Characterization of mammographic masses based on level set segmentation with new image features and patient information

    International Nuclear Information System (INIS)

    Shi Jiazheng; Sahiner, Berkman; Chan Heangping; Ge Jun; Hadjiiski, Lubomir; Helvie, Mark A.; Nees, Alexis; Wu Yita; Wei Jun; Zhou Chuan; Zhang Yiheng; Cui Jing

    2008-01-01

    Computer-aided diagnosis (CAD) for characterization of mammographic masses as malignant or benign has the potential to assist radiologists in reducing the biopsy rate without increasing false negatives. The purpose of this study was to develop an automated method for mammographic mass segmentation and explore new image based features in combination with patient information in order to improve the performance of mass characterization. The authors' previous CAD system, which used the active contour segmentation, and morphological, textural, and spiculation features, has achieved promising results in mass characterization. The new CAD system is based on the level set method and includes two new types of image features related to the presence of microcalcifications with the mass and abruptness of the mass margin, and patient age. A linear discriminant analysis (LDA) classifier with stepwise feature selection was used to merge the extracted features into a classification score. The classification accuracy was evaluated using the area under the receiver operating characteristic curve. The authors' primary data set consisted of 427 biopsy-proven masses (200 malignant and 227 benign) in 909 regions of interest (ROIs) (451 malignant and 458 benign) from multiple mammographic views. Leave-one-case-out resampling was used for training and testing. The new CAD system based on the level set segmentation and the new mammographic feature space achieved a view-based Az value of 0.83±0.01. The improvement compared to the previous CAD system was statistically significant (p=0.02). When patient age was included in the new CAD system, view-based and case-based Az values were 0.85±0.01 and 0.87±0.02, respectively. The study also demonstrated the consistency of the newly developed CAD system by evaluating the statistics of the weights of the LDA classifiers in leave-one-case-out classification. Finally, an independent test on the publicly available digital database for screening

  3. Screening for cognitive impairment in older individuals. Validation study of a computer-based test.

    Science.gov (United States)

    Green, R C; Green, J; Harrison, J M; Kutner, M H

    1994-08-01

    This study examined the validity of a computer-based cognitive test that was recently designed to screen the elderly for cognitive impairment. Criterion-related validity was examined by comparing test scores of impaired patients and normal control subjects. Construct-related validity was computed through correlations between computer-based subtests and related conventional neuropsychological subtests. University center for memory disorders. Fifty-two patients with mild cognitive impairment by strict clinical criteria and 50 unimpaired, age- and education-matched control subjects. Control subjects were rigorously screened by neurological, neuropsychological, imaging, and electrophysiological criteria to identify and exclude individuals with occult abnormalities. Using a cut-off total score of 126, this computer-based instrument had a sensitivity of 0.83 and a specificity of 0.96. Using a prevalence estimate of 10%, predictive values, positive and negative, were 0.70 and 0.96, respectively. Computer-based subtests correlated significantly with conventional neuropsychological tests measuring similar cognitive domains. Thirteen (17.8%) of 73 volunteers with normal medical histories were excluded from the control group, with unsuspected abnormalities on standard neuropsychological tests, electroencephalograms, or magnetic resonance imaging scans. Computer-based testing is a valid screening methodology for the detection of mild cognitive impairment in the elderly, although this particular test has important limitations. Broader applications of computer-based testing will require extensive population-based validation. Future studies should recognize that normal control subjects without a history of disease who are typically used in validation studies may have a high incidence of unsuspected abnormalities on neurodiagnostic studies.
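
    The reported predictive values follow from the stated sensitivity, specificity, and assumed 10% prevalence via Bayes' rule, as the small check below shows.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive value via Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(sensitivity=0.83, specificity=0.96, prevalence=0.10)
# PPV comes out ~0.70 as reported; NPV is ~0.98 with these rounded inputs
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```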

  4. Significance of the combined tests application in serum and liquor of patients with suspected neurosyphilis

    Directory of Open Access Journals (Sweden)

    Mirković Mihailo

    2007-01-01

    Full Text Available Background. Tertiary syphilis develops in 8-40% of untreated patients. It is most commonly manifested in the form of neurosyphilis, which can be asymptomatic or can take the form of tabes dorsalis or progressive paralysis. Nowadays, in developed countries, progressive paralysis is a rather rare disease, although its incidence has been rising within the last decades. Case report. We report a 74-year-old male with the clinical picture of dementia with psychotic symptoms. Cytobiochemical examination of the cerebrospinal liquor revealed hyperproteinorrhachia of 0.70 g/l with a normal cell count. Computed tomography of the brain showed marked cortical cerebral and cerebellar reductive changes with multiple ischemic lesions. As part of the routine work-up of patients with dementia, we performed serologic reactions to syphilis, of which the Venereal Disease Research Laboratory (VDRL) test in serum and liquor was unreactive, while the Treponema pallidum hemagglutination (TPHA) test in serum and liquor was positive. Positivity in serum and liquor was additionally confirmed by the Western blot method and the fluorescent treponemal antibody (FTA) test. Treatment with benzathine penicillin 2.4 g once weekly resulted in significant improvement of the psychotic symptoms of the disease even after two weeks. Conclusion. This case report shows that in the differential diagnosis of patients with dementia or psychotic disorder it is obligatory to consider syphilis of the nervous system, as well as to apply a combination of various tests which, besides the typical liquor findings, significantly improve the accuracy of diagnosis. Such an approach is especially important given that neurosyphilis can remain clinically asymptomatic for a long period, which could delay therapy, while an adequate and timely treatment can contribute to a significant recovery of such patients.

  5. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    KAUST Repository

    Liu, Bo; Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2015-01-01

    (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence

  6. Significance of hair-dye base-induced sensory irritation.

    Science.gov (United States)

    Fujita, F; Azuma, T; Tajiri, M; Okamoto, H; Sano, M; Tominaga, M

    2010-06-01

    Oxidation hair-dyes, which are the principal hair-dyes, sometimes induce painful sensory irritation of the scalp caused by the combination of highly reactive substances, such as hydrogen peroxide and alkali agents. Although many cases of severe facial and scalp dermatitis have been reported following the use of hair-dyes, sensory irritation caused by contact of the hair-dye with the skin has not been reported clearly. In this study, we used a self-assessment questionnaire to measure the sensory irritation in various regions of the body caused by two model hair-dye bases that contained different amounts of alkali agents without dyes. Moreover, the occipital region was found as an alternative region of the scalp to test for sensory irritation of the hair-dye bases. We used this region to evaluate the relationship of sensitivity with skin properties, such as trans-epidermal water loss (TEWL), stratum corneum water content, sebum amount, surface temperature, current perception threshold (CPT), catalase activities in tape-stripped skin and sensory irritation score with the model hair-dye bases. The hair-dye sensitive group showed higher TEWL, a lower sebum amount, a lower surface temperature and higher catalase activity than the insensitive group, and was similar to that of damaged skin. These results suggest that sensory irritation caused by hair-dye could occur easily on the damaged dry scalp, as that caused by skin cosmetics reported previously.

  7. Description of test facilities bound to the research on sodium aerosols - some significant results

    Energy Technology Data Exchange (ETDEWEB)

    Dolias, M; Lafon, A; Vidard, M; Schaller, K H [DRNR/STRS - Centre de Cadarache, Saint-Paul-lez-Durance (France)

    1977-01-01

    This communication is dedicated to the description of the CEA (French Atomic Energy Authority) test facilities located at CADARACHE which are used for the study of sodium aerosol behavior. These test loops are necessary for studying the operation of equipment such as filters, sodium vapour traps, condensers and separators. It is also possible to study the effect of characteristic parameters on the formation, coagulation and carrying away of sodium aerosols in the cover gas. Sodium aerosol deposits in a vertical annular space configuration with a cold area in its upper part are also studied. Some significant results emphasize the importance of operating conditions on the formation of aerosols. (author)

  8. Orthogonal projections and bootstrap resampling procedures in the study of infraspecific variation

    Directory of Open Access Journals (Sweden)

    Luiza Carla Duarte

    1998-12-01

    Full Text Available The effect of an increase in quantitative continuous characters resulting from indeterminate growth upon the analysis of population differentiation was investigated using, as an example, a set of continuous characters measured as distance variables in 10 populations of a rodent species. The data before and after correction for allometric size effects using orthogonal projections were analyzed with a parametric bootstrap resampling procedure applied to canonical variate analysis. The variance component of the distance measures attributable to indeterminate growth within the populations was found to be substantial, although the ordination of the populations was not affected, as evidenced by the relative and absolute positions of the centroids. The covariance pattern of the distance variables used to infer the nature of the morphological differences was strongly influenced by indeterminate growth. The uncorrected data produced a misleading picture of morphological differentiation by indicating that groups of populations differed in size. However, the data corrected for allometric effects clearly demonstrated that populations differed morphologically both in size and shape. These results are discussed in terms of the analysis of morphological differentiation among populations and the definition of infraspecific geographic units.

  9. Testing Game-Based Performance in Team-Handball.

    Science.gov (United States)

    Wagner, Herbert; Orwat, Matthias; Hinz, Matthias; Pfusterschmied, Jürgen; Bacharach, David W; von Duvillard, Serge P; Müller, Erich

    2016-10-01

    Wagner, H, Orwat, M, Hinz, M, Pfusterschmied, J, Bacharach, DW, von Duvillard, SP, and Müller, E. Testing game-based performance in team-handball. J Strength Cond Res 30(10): 2794-2801, 2016-Team-handball is a fast paced game of defensive and offensive action that includes specific movements of jumping, passing, throwing, checking, and screening. To date and to the best of our knowledge, a game-based performance test (GBPT) for team-handball does not exist. Therefore, the aim of this study was to develop and validate such a test. Seventeen experienced team-handball players performed 2 GBPTs separated by 7 days between each test, an incremental treadmill running test, and a team-handball test game (TG) (2 × 20 minutes). Peak oxygen uptake (V[Combining Dot Above]O2peak), blood lactate concentration (BLC), heart rate (HR), sprinting time, time of offensive and defensive actions as well as running intensities, ball velocity, and jump height were measured in the game-based test. Reliability of the tests was calculated using an intraclass correlation coefficient (ICC). Additionally, we measured V[Combining Dot Above]O2peak in the incremental treadmill running test and BLC, HR, and running intensities in the team-handball TG to determine the validity of the GBPT. For the test-retest reliability, we found an ICC >0.70 for the peak BLC and HR, mean offense and defense time, as well as ball velocity that yielded an ICC >0.90 for the V[Combining Dot Above]O2peak in the GBPT. Percent walking and standing constituted 73% of total time. Moderate (18%) and high (9%) intensity running in the GBPT was similar to the team-handball TG. Our results indicated that the GBPT is a valid and reliable test to analyze team-handball performance (physiological and biomechanical variables) under conditions similar to competition.

  10. TREAT (TREe-based Association Test)

    Science.gov (United States)

    TREAT is an R package for detecting complex joint effects in case-control studies. The test statistic is derived from a tree-structured model by recursively partitioning the data. An ultra-fast algorithm is designed to evaluate the significance of association between a candidate gene and the disease outcome

  11. Detecting significant changes in protein abundance

    Directory of Open Access Journals (Sweden)

    Kai Kammers

    2015-06-01

    Full Text Available We review and demonstrate how an empirical Bayes method, shrinking a protein's sample variance towards a pooled estimate, leads to far more powerful and stable inference to detect significant changes in protein abundance compared to ordinary t-tests. Using examples from isobaric mass labelled proteomic experiments we show how to analyze data from multiple experiments simultaneously, and discuss the effects of missing data on the inference. We also present easy to use open source software for normalization of mass spectrometry data and inference based on moderated test statistics.
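    As a rough illustration of the variance-shrinkage idea described above (this is a minimal sketch, not the authors' software; the prior variance s0_sq and prior degrees of freedom d0 are assumed to be given, whereas in practice they are estimated from the full set of per-protein variances):

      # Moderated two-sample t-test for one protein, empirical-Bayes style.
      import numpy as np
      from scipy import stats

      def moderated_t(x, y, s0_sq=0.05, d0=4.0):
          # x, y: log-intensities in the two conditions; s0_sq, d0: assumed prior
          n1, n2 = len(x), len(y)
          d = n1 + n2 - 2                                  # residual df
          s_sq = ((n1 - 1) * np.var(x, ddof=1) + (n2 - 1) * np.var(y, ddof=1)) / d
          # shrink the per-protein variance towards the pooled prior estimate
          s_tilde_sq = (d0 * s0_sq + d * s_sq) / (d0 + d)
          t = (np.mean(x) - np.mean(y)) / np.sqrt(s_tilde_sq * (1 / n1 + 1 / n2))
          p = 2 * stats.t.sf(abs(t), df=d0 + d)            # augmented degrees of freedom
          return t, p

      rng = np.random.default_rng(0)
      print(moderated_t(rng.normal(1.0, 0.3, 3), rng.normal(0.0, 0.3, 3)))

    With only three replicates per group, the added prior degrees of freedom are what stabilizes the denominator relative to an ordinary t-test.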

  12. A LabVIEWTM-based detector testing system

    International Nuclear Information System (INIS)

    Yang Haori; Li Yuanjing; Wang Yi; Li Yulan; Li Jin

    2003-01-01

    The construction of a LabVIEW-based detector testing system is described in this paper. In this system, the signal from the detector is amplified and digitized, so that an amplitude or time spectrum can be obtained. The analog-to-digital converter is a peak-sensing ADC based on the VME bus. The virtual instrument built with LabVIEW can be used to acquire data, draw spectra and save testing results

  13. Realistic generation of natural phenomena based on video synthesis

    Science.gov (United States)

    Wang, Changbo; Quan, Hongyan; Li, Chenhui; Xiao, Zhao; Chen, Xiao; Li, Peng; Shen, Liuwei

    2009-10-01

    Research on the generation of natural phenomena has many applications in special effects of movie, battlefield simulation and virtual reality, etc. Based on video synthesis technique, a new approach is proposed for the synthesis of natural phenomena, including flowing water and fire flame. From the fire and flow video, the seamless video of arbitrary length is generated. Then, the interaction between wind and fire flame is achieved through the skeleton of flame. Later, the flow is also synthesized by extending the video textures using an edge resample method. Finally, we can integrate the synthesized natural phenomena into a virtual scene.

  14. Automation for a base station stability testing

    OpenAIRE

    Punnek, Elvis

    2016-01-01

    This Bachelor's thesis was commissioned by Oy LM Ericsson Ab Oulu. Its aim was to help investigate and create a test automation solution for the stability testing of the LTE base station. The main objective was to create test automation for a predefined test set. This test automation solution had to be created for specific environments and equipment. The work included creating the automation for the test cases and putting them into daily test automation jobs. The key factor...

  15. Prognostic significance of blood coagulation tests in carcinoma of the lung and colon.

    Science.gov (United States)

    Wojtukiewicz, M Z; Zacharski, L R; Moritz, T E; Hur, K; Edwards, R L; Rickles, F R

    1992-08-01

    Blood coagulation test results were collected prospectively in patients with previously untreated, advanced lung or colon cancer who entered into a clinical trial. In patients with colon cancer, reduced survival was associated (in univariate analysis) with higher values obtained at entry to the study for fibrinogen, fibrin(ogen) split products, antiplasmin, and fibrinopeptide A and accelerated euglobulin lysis times. In patients with non-small cell lung cancer, reduced survival was associated (in univariate analysis) with higher fibrinogen and fibrin(ogen) split products, platelet counts and activated partial thromboplastin times. In patients with small cell carcinoma of the lung, only higher activated partial thromboplastin times were associated (in univariate analysis) with reduced survival in patients with disseminated disease. In multivariate analysis, higher activated partial thromboplastin times were a significant independent predictor of survival for patients with non-small cell lung cancer limited to one hemithorax and with disseminated small cell carcinoma of the lung. Fibrin(ogen) split product levels were an independent predictor of survival for patients with disseminated non-small cell lung cancer as were both the fibrinogen and fibrinopeptide A levels for patients with disseminated colon cancer. These results suggest that certain tests of blood coagulation may be indicative of prognosis in lung and colon cancer. The heterogeneity of these results suggests that the mechanism(s), intensity, and pathophysiological significance of coagulation activation in cancer may differ between tumour types.

  16. USEFULNESS OF BOOTSTRAPPING IN PORTFOLIO MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Boris Radovanov

    2012-12-01

    Full Text Available This paper contains a comparison of in-sample and out-of-sample performance between the resampled efficiency technique, patented by Richard Michaud and Robert Michaud (1999), and traditional mean-variance portfolio selection, presented by Harry Markowitz (1952). Based on Monte Carlo simulation, the data (sample) generation process determines the algorithms, using both parametric and nonparametric bootstrap techniques. Resampled efficiency provides a solution for using uncertain information without the need for constraints in portfolio optimization. The parametric bootstrap process starts with a parametric model specification, for which we apply the Capital Asset Pricing Model. After estimation of the specified model, the series of residuals is used for the resampling process. On the other hand, the nonparametric bootstrap divides the series of price returns into new series of blocks containing a previously determined number of consecutive price returns. This procedure enables a smooth resampling process and preserves the original structure of the data series.
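    A minimal sketch of the nonparametric (block) bootstrap described above: the return series is cut into consecutive blocks and the blocks are resampled with replacement, which preserves short-range dependence. The block length and the return series are illustrative, not the paper's data.

      import numpy as np

      def block_bootstrap(returns, block_len=5, n_samples=1000, rng=None):
          rng = np.random.default_rng(rng)
          n = len(returns)
          n_blocks = int(np.ceil(n / block_len))
          starts = np.arange(0, n, block_len)          # starts of non-overlapping blocks
          samples = []
          for _ in range(n_samples):
              picked = rng.choice(starts, size=n_blocks, replace=True)
              series = np.concatenate([returns[s:s + block_len] for s in picked])[:n]
              samples.append(series)
          return np.asarray(samples)

      rng = np.random.default_rng(1)
      daily_returns = rng.normal(0.0005, 0.01, 250)    # one year of simulated returns
      boot = block_bootstrap(daily_returns, block_len=10, n_samples=500, rng=2)
      print(boot.shape, boot.mean(), daily_returns.mean())

    The parametric variant mentioned in the abstract would instead resample residuals from a fitted CAPM regression and rebuild returns from the fitted model.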

  17. Evaluation of a Secure Laptop-Based Testing Program in an Undergraduate Nursing Program: Students' Perspective.

    Science.gov (United States)

    Tao, Jinyuan; Gunter, Glenda; Tsai, Ming-Hsiu; Lim, Dan

    2016-01-01

    Recently, the many robust learning management systems, and the availability of affordable laptops, have made secure laptop-based testing a reality on many campuses. The undergraduate nursing program at the authors' university began to implement a secure laptop-based testing program in 2009, which allowed students to use their newly purchased laptops to take quizzes and tests securely in classrooms. After nearly 5 years' secure laptop-based testing program implementation, a formative evaluation, using a mixed method that has both descriptive and correlational data elements, was conducted to seek constructive feedback from students to improve the program. Evaluation data show that, overall, students (n = 166) believed the secure laptop-based testing program helps them get hands-on experience of taking examinations on the computer and gets them prepared for their computerized NCLEX-RN. Students, however, had a lot of concerns about laptop glitches and campus wireless network glitches they experienced during testing. At the same time, NCLEX-RN first-time passing rate data were analyzed using the χ2 test, and revealed no significant association between the two testing methods (paper-and-pencil testing and the secure laptop-based testing) and students' first-time NCLEX-RN passing rate. Based on the odds ratio, however, the odds of students passing NCLEX-RN the first time was 1.37 times higher if they were taught with the secure laptop-based testing method than if taught with the traditional paper-and-pencil testing method in nursing school. It was recommended to the institution that better quality of laptops needs to be provided to future students, measures needed to be taken to further stabilize the campus wireless Internet network, and there was a need to reevaluate the Laptop Initiative Program.
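    A minimal sketch of the kind of analysis reported above: a chi-square test of association between testing method and first-time NCLEX-RN pass status, plus the odds ratio from the 2x2 table. The counts below are made up for illustration; the study's actual frequencies are not given in the record.

      import numpy as np
      from scipy.stats import chi2_contingency

      #                 pass   fail
      table = np.array([[150,   16],   # secure laptop-based testing (hypothetical counts)
                        [140,   20]])  # paper-and-pencil testing     (hypothetical counts)

      chi2, p, dof, expected = chi2_contingency(table)
      odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
      print(f"chi2={chi2:.3f}, p={p:.3f}, OR={odds_ratio:.2f}")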

  18. An Examination of Sources of Variability Across the Consonant-Nucleus-Consonant Test in Cochlear Implant Listeners

    Directory of Open Access Journals (Sweden)

    Julie Arenberg Bierer

    2016-04-01

    Full Text Available The 10 consonant-nucleus-consonant (CNC) word lists are considered the gold standard in the testing of cochlear implant (CI) users. However, variance in scores across lists could degrade their sensitivity and reliability for identifying deficits in speech perception. This study examined the relationship between variability in performance among lists and the lexical characteristics of the words. Data are from 28 adult CI users. Each subject was tested on all 10 CNC word lists. Data were analyzed in terms of lexical characteristics: lexical frequency, neighborhood density, and bi- and tri-phonemic probabilities. To determine whether individual performance variability across lists can be reduced, the standard set of 10 phonetically balanced 50-word lists was redistributed into a new set of lists using two sampling strategies: (a) balancing with respect to word lexical frequency or (b) selecting words with equal probability. The mean performance on the CNC lists varied from 53.1% to 62.4% correct. The average difference between the highest and lowest scores within individuals across the lists was 20.9% (from 12% to 28%). Lexical frequency and bi-phonemic probabilities were correlated with word recognition performance. The range of scores was not significantly reduced for all individuals when responses were simulated with 1,000 sets of redistributed lists, using both types of sampling methods. These results indicate that resampling of words does not affect the test-retest reliability and diagnostic value of the CNC word test.
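    A minimal sketch of the two redistribution strategies described above, applied to a hypothetical pool of 500 words with simulated lexical-frequency values (the real CNC lists and their lexical statistics are not used here):

      import numpy as np

      rng = np.random.default_rng(0)
      n_words, n_lists, list_size = 500, 10, 50
      log_freq = rng.normal(3.0, 1.0, n_words)          # fake log lexical frequencies

      # (b) equal-probability sampling: shuffle the pool and cut it into 10 lists
      perm = rng.permutation(n_words)
      equal_prob_lists = perm.reshape(n_lists, list_size)

      # (a) frequency balancing: sort by frequency and deal words round-robin,
      #     so every list gets a similar spread of rare and common words
      order = np.argsort(log_freq)
      balanced_lists = [order[i::n_lists] for i in range(n_lists)]

      print([round(log_freq[lst].mean(), 2) for lst in balanced_lists])
      print([round(log_freq[lst].mean(), 2) for lst in equal_prob_lists])

    Repeating the random redistribution many times and re-scoring simulated responses gives the kind of 1,000-set resampling experiment the abstract refers to.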

  19. Bootstrap-Based Inference for Cube Root Consistent Estimators

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Jansson, Michael; Nagasawa, Kenichi

    This note proposes a consistent bootstrap-based distributional approximation for cube root consistent estimators such as the maximum score estimator of Manski (1975) and the isotonic density estimator of Grenander (1956). In both cases, the standard nonparametric bootstrap is known to be inconsistent. Our method restores consistency of the nonparametric bootstrap by altering the shape of the criterion function defining the estimator whose distribution we seek to approximate. This modification leads to a generic and easy-to-implement resampling method for inference that is conceptually distinct from other available distributional approximations based on some form of modified bootstrap. We offer simulation evidence showcasing the performance of our inference method in finite samples. An extension of our methodology to general M-estimation problems is also discussed.

  20. Cytogenotoxicity screening of source water, wastewater and treated water of drinking water treatment plants using two in vivo test systems: Allium cepa root based and Nile tilapia erythrocyte based tests.

    Science.gov (United States)

    Hemachandra, Chamini K; Pathiratne, Asoka

    2017-01-01

    Biological effect directed in vivo tests with model organisms are useful in assessing potential health risks associated with chemical contaminations in surface waters. This study examined the applicability of two in vivo test systems viz. plant, Allium cepa root based tests and fish, Oreochromis niloticus erythrocyte based tests for screening cytogenotoxic potential of raw source water, water treatment waste (effluents) and treated water of drinking water treatment plants (DWTPs) using two DWTPs associated with a major river in Sri Lanka. Measured physico-chemical parameters of the raw water, effluents and treated water samples complied with the respective Sri Lankan standards. In the in vivo tests, raw water induced statistically significant root growth retardation, mitodepression and chromosomal abnormalities in the root meristem of the plant and micronuclei/nuclear buds evolution and genetic damage (as reflected by comet scores) in the erythrocytes of the fish compared to the aged tap water controls signifying greater genotoxicity of the source water especially in the dry period. The effluents provoked relatively high cytogenotoxic effects on both test systems but the toxicity in most cases was considerably reduced to the raw water level with the effluent dilution (1:8). In vivo tests indicated reduction of cytogenotoxic potential in the tested drinking water samples. The results support the potential applications of practically feasible in vivo biological test systems such as A. cepa root based tests and the fish erythrocyte based tests as complementary tools for screening cytogenotoxicity potential of the source water and water treatment waste reaching downstream of aquatic ecosystems and for evaluating cytogenotoxicity eliminating efficacy of the DWTPs in different seasons in view of human and ecological safety. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Significance evaluation in factor graphs

    DEFF Research Database (Denmark)

    Madsen, Tobias; Hobolth, Asger; Jensen, Jens Ledet

    2017-01-01

    in genomics and the multiple-testing issues accompanying them, accurate significance evaluation is of great importance. We here address the problem of evaluating statistical significance of observations from factor graph models. Results: Two novel numerical approximations for evaluation of statistical significance are presented: first, a method using importance sampling; second, a saddlepoint approximation based method. We develop algorithms to efficiently compute the approximations and compare them to naive sampling and the normal approximation. The individual merits of the methods are analysed both from... Conclusions: The applicability of saddlepoint approximation and importance sampling is demonstrated on known models in the factor graph framework. Using the two methods we can substantially improve computational cost without compromising accuracy. This contribution allows analyses of large datasets

  2. Social inequality and HIV-testing: Comparing home- and clinic-based testing in rural Malawi

    Directory of Open Access Journals (Sweden)

    Alexander A. Weinreb

    2009-10-01

    Full Text Available The plan to increase HIV testing is a cornerstone of the international health strategy against the HIV/AIDS epidemic, particularly in sub-Saharan Africa. This paper highlights a problematic aspect of that plan: the reliance on clinic- rather than home-based testing. First, drawing on DHS data from across Africa, we demonstrate the substantial differences in socio-demographic and economic profiles between those who report having ever had an HIV test, and those who report never having had one. Then, using data from a random household survey in rural Malawi, we show that substituting home-based for clinic-based testing may eliminate this source of inequality between those tested and those not tested. This result, which is stable across modeling frameworks, has important implications for accurately and equitably addressing the counseling and treatment programs that comprise the international health strategy against AIDS, and that promise to shape the future trajectory of the epidemic in Africa and beyond.

  3. Understanding text-based persuasion and support tactics of concerned significant others

    Directory of Open Access Journals (Sweden)

    Katherine van Stolk-Cooke

    2015-08-01

    Full Text Available The behavior of concerned significant others (CSOs) can have a measurable impact on the health and wellness of individuals attempting to meet behavioral and health goals, and research is needed to better understand the attributes of text-based CSO language when encouraging target significant others (TSOs) to achieve those goals. In an effort to inform the development of interventions for CSOs, this study examined the language content of brief text-based messages generated by CSOs to motivate TSOs to achieve a behavioral goal. CSOs generated brief text-based messages for TSOs for three scenarios: (1) to help TSOs achieve the goal, (2) in the event that the TSO is struggling to meet the goal, and (3) in the event that the TSO has given up on meeting the goal. Results indicate that there was a significant relationship between the tone and compassion of messages generated by CSOs, the CSOs' perceptions of TSO motivation, and their expectation of a grateful or annoyed reaction by the TSO to their feedback or support. Results underscore the importance of attending to patterns in language when CSOs communicate with TSOs about goal achievement or failure, and how certain variables in the CSOs' perceptions of their TSOs affect these characteristics.

  4. Space Launch System Base Heating Test: Experimental Operations & Results

    Science.gov (United States)

    Dufrene, Aaron; Mehta, Manish; MacLean, Matthew; Seaford, Mark; Holden, Michael

    2016-01-01

    NASA's Space Launch System (SLS) uses four clustered liquid rocket engines along with two solid rocket boosters. The interaction between all six rocket exhaust plumes will produce a complex and severe thermal environment in the base of the vehicle. This work focuses on a recent 2% scale, hot-fire SLS base heating test. These base heating tests are short-duration tests executed with chamber pressures near the full-scale values with gaseous hydrogen/oxygen engines and RSRMV analogous solid propellant motors. The LENS II shock tunnel/Ludwieg tube tunnel was used at or near flight duplicated conditions up to Mach 5. Model development was based on the Space Shuttle base heating tests with several improvements including doubling of the maximum chamber pressures and duplication of freestream conditions. Test methodology and conditions are presented, and base heating results from 76 runs are reported in non-dimensional form. Regions of high heating are identified and comparisons of various configuration and conditions are highlighted. Base pressure and radiometer results are also reported.

  5. A simple bedside blood test (Fibrofast; FIB-5) is superior to FIB-4 index for the differentiation between non-significant and significant fibrosis in patients with chronic hepatitis C.

    Science.gov (United States)

    Shiha, G; Seif, S; Eldesoky, A; Elbasiony, M; Soliman, R; Metwally, A; Zalata, K; Mikhail, N

    2017-05-01

    A simple non-invasive score (Fibrofast; FIB-5) was developed using five routine laboratory tests (ALT, AST, alkaline phosphatase, albumin and platelet count) for the detection of significant hepatic fibrosis in patients with chronic hepatitis C. The FIB-4 index is a non-invasive test for the assessment of liver fibrosis, and a score of ≤1.45 enables the correct identification of patients who have non-significant fibrosis (F0-1) rather than significant fibrosis (F2-4), and could avoid liver biopsy. The aim of this study was to compare the performance characteristics of FIB-5 and FIB-4 in differentiating between non-significant and significant fibrosis. A cross-sectional study included 604 chronic HCV patients. All liver biopsies were scored using the METAVIR system. Both FIB-5 and FIB-4 scores were measured and the performance characteristics were calculated using the ROC curve. The performance characteristics of FIB-5 at ≥7.5 and FIB-4 at ≤1.45 for the differentiation between non-significant and significant fibrosis were: specificity 94.4%, PPV 85.7%, and specificity 54.9%, PPV 55.7%, respectively. The FIB-5 score at the new cutoff is superior to the FIB-4 index for the differentiation between non-significant and significant fibrosis.
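    A minimal sketch of the FIB-4 index referred to above (Sterling et al. formula), with the ≤1.45 cutoff from the record used to flag likely non-significant fibrosis. The FIB-5 score is not reproduced because its coefficients are not given here; the input values are illustrative only.

      import math

      def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
          # FIB-4 = (age [y] x AST [U/L]) / (platelets [10^9/L] x sqrt(ALT [U/L]))
          return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

      score = fib4(age_years=52, ast_u_l=48, alt_u_l=60, platelets_10e9_l=210)
      print(f"FIB-4 = {score:.2f}",
            "-> likely non-significant fibrosis (F0-1)" if score <= 1.45
            else "-> significant fibrosis cannot be excluded")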

  6. The comparison between science virtual and paper based test in measuring grade 7 students’ critical thinking

    Science.gov (United States)

    Dhitareka, P. H.; Firman, H.; Rusyati, L.

    2018-05-01

    This research compares a science virtual test and a paper-based test in measuring grade 7 students' critical thinking, considered in relation to Multiple Intelligences and gender. A quasi-experimental method with a within-subjects design was used to obtain the data. The population of this research was all seventh grade students in ten classes of one public secondary school in Bandung; 71 students in two randomly selected classes became the sample. The data were obtained through 28 questions on the topic of living things and environmental sustainability, constructed on the basis of the eight critical thinking elements proposed by Inch and administered both as a science virtual test and as a paper-based test. The data were analysed using a paired-samples t test when the data were parametric and the Wilcoxon signed ranks test when the data were non-parametric. In the overall comparison, the p-value for the difference between the science virtual and paper-based test scores was 0.506, indicating that there is no significant difference between the science virtual and paper-based tests based on the test scores. The results are further supported by the students' attitude score of 3.15 on a scale from 1 to 4, indicating that they have positive attitudes towards the Science Virtual Test.
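    A minimal sketch of the analysis strategy described above: use a paired-samples t-test when the paired differences look approximately normal, otherwise fall back to the Wilcoxon signed-rank test. The scores are simulated; the normality check via Shapiro-Wilk is an assumption, since the record does not state which test of normality was used.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      virtual_test = rng.normal(70, 10, 71)                 # simulated scores, n = 71
      paper_test = virtual_test + rng.normal(0, 5, 71)      # correlated simulated scores

      differences = virtual_test - paper_test
      if stats.shapiro(differences).pvalue > 0.05:          # differences look normal
          result = stats.ttest_rel(virtual_test, paper_test)
      else:                                                 # non-parametric fallback
          result = stats.wilcoxon(virtual_test, paper_test)
      print(result)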

  7. The GOLM-database standard- a framework for time-series data management based on free software

    Science.gov (United States)

    Eichler, M.; Francke, T.; Kneis, D.; Reusser, D.

    2009-04-01

    Monitoring and modelling projects usually involve time series data originating from different sources. Often, file formats, temporal resolution and meta-data documentation rarely adhere to a common standard. As a result, much effort is spent on converting, harmonizing, merging, checking, resampling and reformatting these data. Moreover, in work groups or during the course of time, these tasks tend to be carried out redundantly and repeatedly, especially when new data becomes available. The resulting duplication of data in various formats strains additional ressources. We propose a database structure and complementary scripts for facilitating these tasks. The GOLM- (General Observation and Location Management) framework allows for import and storage of time series data of different type while assisting in meta-data documentation, plausibility checking and harmonization. The imported data can be visually inspected and its coverage among locations and variables may be visualized. Supplementing scripts provide options for data export for selected stations and variables and resampling of the data to the desired temporal resolution. These tools can, for example, be used for generating model input files or reports. Since GOLM fully supports network access, the system can be used efficiently by distributed working groups accessing the same data over the internet. GOLM's database structure and the complementary scripts can easily be customized to specific needs. Any involved software such as MySQL, R, PHP, OpenOffice as well as the scripts for building and using the data base, including documentation, are free for download. GOLM was developed out of the practical requirements of the OPAQUE-project. It has been tested and further refined in the ERANET-CRUE and SESAM projects, all of which used GOLM to manage meteorological, hydrological and/or water quality data.
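    As a rough illustration of the harmonization and resampling step described above (a sketch in Python/pandas rather than the MySQL/R/PHP tools the GOLM framework actually uses; the station data are invented):

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(0)
      timestamps = pd.date_range("2009-04-01", periods=500, freq="7min")
      raw = pd.DataFrame({"discharge_m3s": rng.gamma(2.0, 1.5, 500)}, index=timestamps)

      hourly_mean = raw.resample("60min").mean()      # aggregate to hourly resolution
      hourly_count = raw.resample("60min").count()    # raw readings available per hour
      gaps = hourly_count[hourly_count["discharge_m3s"] == 0]   # simple plausibility check

      print(hourly_mean.head())
      print(f"hours without any raw reading: {len(gaps)}")

    The resampled series is the kind of output that would then be exported as model input or included in a report.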

  8. Diagnostic tests based on human basophils

    DEFF Research Database (Denmark)

    Kleine-Tebbe, Jörg; Erdmann, Stephan; Knol, Edward F

    2006-01-01

    -maximal responses, termed 'intrinsic sensitivity'. These variables give rise to shifts in the dose-response curves which, in a diagnostic setting where only a single antigen concentration is employed, may produce false-negative data. Thus, in order to meaningfully utilize the current basophil activation tests... Diagnostic studies using CD63 or CD203c in hymenoptera, food and drug allergy are critically discussed. Basophil-based tests are indicated for allergy testing in selected cases but should only be performed by experienced laboratories.

  9. Test-Taking Strategies and Task-based Assessment: The Case of Iranian EFL Learners

    Directory of Open Access Journals (Sweden)

    Hossein Barati

    2012-01-01

    Full Text Available The present study examined the effect of task-based assessment on the type and frequency of test-taking strategies that three proficiency groups of Iranian adult EFL learners used when completing the First Certificate in English (FCE) reading paper. A total of 70 EFL university undergraduates (53 females and 17 males) took part in the main phase of this study. They were divided into three proficiency groups: high, intermediate, and low. A set of Chi-square analyses was used to explore the type and frequency of test-taking strategies used by participants. The results suggested that the intermediate group test takers used the strategies significantly differently after completing each task (sub-test) in the FCE reading paper. However, the high and low proficient test takers' use of strategies was only significant after completing the third task of the FCE reading paper. The findings also revealed that a pattern could be drawn of the type of strategies used by the three proficiency groups who participated in this study. Nonetheless, such a pattern shifted at times depending on the ability of the test takers and/or the task under study.

  10. Finding of No Significant Impact and Environmental Assessment for Flight Test to the Edge of Space

    Science.gov (United States)

    2008-12-01

    Runway 22 or on Rogers Dry Lakebed at Edwards AFB. On the basis of the findings of the Environmental Assessment, no significant impact to human...

  11. PERFORMANCE COMPARISON OF SCENARIO-GENERATION METHODS APPLIED TO A STOCHASTIC OPTIMIZATION ASSET-LIABILITY MANAGEMENT MODEL

    Directory of Open Access Journals (Sweden)

    Alan Delgado de Oliveira

    Full Text Available In this paper, we provide an empirical discussion of the differences among some scenario tree-generation approaches for stochastic programming. We consider the classical Monte Carlo sampling and moment matching methods. Moreover, we test the resampled average approximation, which is an adaptation of Monte Carlo sampling, and Monte Carlo with a naive allocation strategy as the benchmark. We test the empirical effects of each approach on the stability of the problem objective function and initial portfolio allocation, using a multistage stochastic chance-constrained asset-liability management (ALM) model as the application. The moment matching and resampled average approximation are more stable than the other two strategies.
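    A minimal sketch of the resampled-average idea referenced above: draw several Monte Carlo scenario sets, solve a toy unconstrained mean-variance problem on each (weights proportional to inv(Sigma) @ mu), and average the resulting allocations. All return parameters are invented, and the toy portfolio rule stands in for the paper's full ALM optimization.

      import numpy as np

      rng = np.random.default_rng(0)
      mu_true = np.array([0.06, 0.08, 0.05])                 # hypothetical asset means
      cov_true = np.array([[0.04, 0.01, 0.00],
                           [0.01, 0.09, 0.02],
                           [0.00, 0.02, 0.03]])

      def mv_weights(scenarios):
          mu, cov = scenarios.mean(axis=0), np.cov(scenarios, rowvar=False)
          w = np.linalg.solve(cov, mu)                       # proportional to inv(Sigma) mu
          return w / w.sum()

      allocations = []
      for _ in range(200):                                   # 200 resampled scenario sets
          scenarios = rng.multivariate_normal(mu_true, cov_true, size=500)
          allocations.append(mv_weights(scenarios))

      print("averaged (resampled) allocation:", np.round(np.mean(allocations, axis=0), 3))
      print("single-sample allocation:       ", np.round(allocations[0], 3))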

  12. Determination of Geometrical REVs Based on Volumetric Fracture Intensity and Statistical Tests

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2018-05-01

    Full Text Available This paper presents a method to estimate a representative element volume (REV) of a fractured rock mass based on the volumetric fracture intensity P32 and statistical tests. A 150 m × 80 m × 50 m 3D fracture network model was generated based on field data collected at the Maji dam site by using the rectangular window sampling method. The volumetric fracture intensity P32 of each cube was calculated by varying the cube location in the generated 3D fracture network model and varying the cube side length from 1 to 20 m, and the distribution of the P32 values was described. The size effect and spatial effect of the fractured rock mass were studied; the P32 values from the same cube sizes and different locations were significantly different, and the fluctuation in P32 values clearly decreases as the cube side length increases. In this paper, a new method that comprehensively considers the anisotropy of rock masses, the simplicity of calculation and the differences between different methods was proposed to estimate the geometrical REV size. The geometrical REV size of the fractured rock mass was determined based on the volumetric fracture intensity P32 and two statistical test methods, namely, the likelihood ratio test and the Wald–Wolfowitz runs test. The results of the two statistical tests were substantially different; critical cube sizes of 13 m and 12 m were estimated by the Wald–Wolfowitz runs test and the likelihood ratio test, respectively. Because the different test methods emphasize different considerations and impact factors, the larger cube size accepted by both tests, 13 m, was selected as the geometrical REV size of the fractured rock mass at the Maji dam site in China.
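    A minimal sketch of the Wald–Wolfowitz runs test used above (normal approximation), applied to a sequence of P32 values dichotomized about their median. The values and the dichotomization rule are illustrative, not the paper's data or its exact test set-up.

      import numpy as np
      from scipy.stats import norm

      def runs_test(binary_seq):
          x = np.asarray(binary_seq, dtype=bool)
          n1, n2 = x.sum(), (~x).sum()
          runs = 1 + np.count_nonzero(x[1:] != x[:-1])        # number of runs observed
          mean = 2 * n1 * n2 / (n1 + n2) + 1
          var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
          z = (runs - mean) / np.sqrt(var)
          return z, 2 * norm.sf(abs(z))                       # two-sided p-value

      rng = np.random.default_rng(0)
      p32 = rng.normal(1.8, 0.3, 60)                          # simulated P32 values
      z, p = runs_test(p32 > np.median(p32))
      print(f"z = {z:.2f}, p = {p:.3f}  (randomness not rejected if p > 0.05)")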

  13. A comparative test of phylogenetic diversity indices.

    Science.gov (United States)

    Schweiger, Oliver; Klotz, Stefan; Durka, Walter; Kühn, Ingolf

    2008-09-01

    Traditional measures of biodiversity, such as species richness, usually treat species as being equal. As this is obviously not the case, measuring diversity in terms of features accumulated over evolutionary history provides additional value to theoretical and applied ecology. Several phylogenetic diversity indices exist, but their behaviour has not yet been tested in a comparative framework. We provide a test of ten commonly used phylogenetic diversity indices based on 40 simulated phylogenies of varying topology. We restrict our analysis to a topological fully resolved tree without information on branch lengths and species lists with presence-absence data. A total of 38,000 artificial communities varying in species richness covering 5-95% of the phylogenies were created by random resampling. The indices were evaluated based on their ability to meet a priori defined requirements. No index meets all requirements, but three indices turned out to be more suitable than others under particular conditions. Average taxonomic distinctness (AvTD) and intensive quadratic entropy (J) are calculated by averaging and are, therefore, unbiased by species richness while reflecting phylogeny per se well. However, averaging leads to the violation of set monotonicity, which requires that species extinction cannot increase the index. Total taxonomic distinctness (TTD) sums up distinctiveness values for particular species across the community. It is therefore strongly linked to species richness and reflects phylogeny per se weakly but satisfies set monotonicity. We suggest that AvTD and J are best applied to studies that compare spatially or temporally rather independent communities that potentially vary strongly in their phylogenetic composition-i.e. where set monotonicity is a more negligible issue, but independence of species richness is desired. In contrast, we suggest that TTD be used in studies that compare rather interdependent communities where changes occur more gradually by
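    As a rough illustration of two of the indices discussed above, here is a sketch of average taxonomic distinctness (AvTD) and total taxonomic distinctness (TTD) computed from a pairwise distance matrix and a presence-absence vector, in the Clarke and Warwick style; the distance matrix and community are invented.

      import numpy as np

      def avtd_ttd(dist, present):
          # AvTD: mean pairwise distinctness among present species
          # TTD:  sum over present species of their mean distance to the others
          idx = np.flatnonzero(present)
          sub = dist[np.ix_(idx, idx)]
          s = len(idx)
          pairsum = sub[np.triu_indices(s, k=1)].sum()
          avtd = pairsum / (s * (s - 1) / 2)
          ttd = (sub.sum(axis=1) / (s - 1)).sum()
          return avtd, ttd

      rng = np.random.default_rng(0)
      a = rng.uniform(1, 10, size=(20, 20))
      dist = np.triu(a, 1) + np.triu(a, 1).T                 # symmetric, zero diagonal
      community = rng.random(20) < 0.5                        # random presence-absence
      print(avtd_ttd(dist, community))

    Because AvTD averages over pairs while TTD sums over species, resampling communities of increasing richness shows directly why the former is richness-independent and the latter is not.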

  14. Combination of blood tests for significant fibrosis and cirrhosis improves the assessment of liver-prognosis in chronic hepatitis C.

    Science.gov (United States)

    Boursier, J; Brochard, C; Bertrais, S; Michalak, S; Gallois, Y; Fouchard-Hubert, I; Oberti, F; Rousselet, M-C; Calès, P

    2014-07-01

    Recent longitudinal studies have emphasised the prognostic value of noninvasive tests of liver fibrosis and cross-sectional studies have shown their combination significantly improves diagnostic accuracy. To compare the prognostic accuracy of six blood fibrosis tests and liver biopsy, and evaluate if test combination improves the liver-prognosis assessment in chronic hepatitis C (CHC). A total of 373 patients with compensated CHC, liver biopsy (Metavir F) and blood tests targeting fibrosis (APRI, FIB4, Fibrotest, Hepascore, FibroMeter) or cirrhosis (CirrhoMeter) were included. Significant liver-related events (SLRE) and liver-related deaths were recorded during follow-up (started the day of biopsy). During the median follow-up of 9.5 years (3508 person-years), 47 patients had a SLRE and 23 patients died from liver-related causes. For the prediction of first SLRE, most blood tests allowed higher prognostication than Metavir F [Harrell C-index: 0.811 (95% CI: 0.751-0.868)] with a significant increase for FIB4: 0.879 [0.832-0.919] (P = 0.002), FibroMeter: 0.870 [0.812-0.922] (P = 0.005) and APRI: 0.861 [0.813-0.902] (P = 0.039). Multivariate analysis identified FibroMeter, CirrhoMeter and sustained viral response as independent predictors of first SLRE. CirrhoMeter was the only independent predictor of liver-related death. The combination of FibroMeter and CirrhoMeter classifications into a new FM/CM classification improved the liver-prognosis assessment compared to Metavir F staging or single tests by identifying five subgroups of patients with significantly different prognoses. Some blood fibrosis tests are more accurate than liver biopsy for determining liver prognosis in CHC. A new combination of two complementary blood tests, one targeted for fibrosis and the other for cirrhosis, optimises assessment of liver-prognosis. © 2014 John Wiley & Sons Ltd.
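    A minimal sketch of Harrell's concordance index used above to rank the prognostic accuracy of the tests: higher risk scores should correspond to shorter times to the liver-related event. The survival data are simulated and the naive O(n^2) implementation below ignores tied event times.

      import numpy as np

      def harrell_c(time, event, risk):
          # time: follow-up times; event: 1 if event occurred, 0 if censored;
          # risk: predicted score, higher = worse prognosis
          concordant, comparable = 0.0, 0
          n = len(time)
          for i in range(n):
              for j in range(n):
                  # a pair is usable when subject i had the event before time[j]
                  if event[i] == 1 and time[i] < time[j]:
                      comparable += 1
                      if risk[i] > risk[j]:
                          concordant += 1
                      elif risk[i] == risk[j]:
                          concordant += 0.5
          return concordant / comparable

      rng = np.random.default_rng(0)
      risk = rng.normal(size=300)
      time = rng.exponential(np.exp(-risk))          # higher risk -> shorter times
      event = (rng.random(300) < 0.7).astype(int)    # roughly 30% censoring
      print(round(harrell_c(time, event, risk), 3))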

  15. Risk Based Optimal Fatigue Testing

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Faber, M.H.; Kroon, I.B.

    1992-01-01

    Optimal fatigue life testing of materials is considered. Based on minimization of the total expected costs of a mechanical component a strategy is suggested to determine the optimal stress range levels for which additional experiments are to be performed together with an optimal value...

  16. Stress test, what is the reality and significance of it?

    International Nuclear Information System (INIS)

    Sawada, Tetsuo

    2012-01-01

    Stress test was introduced in July 2011 by 'political judgment' to demonstrate the ability of nuclear power plants to withstand severe earthquake and tsunami. The stress test consisted of two stages; the first stage, using computerized simulation, required obtaining the 'cliff edge' for earthquake, tsunami, their superposition, loss of all alternating current power and loss of the final heat sink, and the effectiveness of severe accident management after emergency safety measures. Clearing the first stage of the test was a prerequisite for restarting reactors that had been suspended for regular inspections. NISA had received such test results for 14 nuclear reactors as of January 18, 2012. After passing IAEA's evaluation of the stress test review process, NISA's endorsement of the test results, NSC's confirmation of NISA's screening results and approval by the local governments, the Prime Minister and the relevant ministers concerned would decide whether reactors could be restarted as a 'political judgment'. Using a ranking list and referring to the respective experiences of the 14 reactors hit by the earthquake and tsunami of the Great East Japan earthquake might allow a better comprehensive judgment. (T. Tanaka)

  17. The diagnostic sensitivity of dengue rapid test assays is significantly enhanced by using a combined antigen and antibody testing approach.

    Directory of Open Access Journals (Sweden)

    Scott R Fry

    2011-06-01

    Full Text Available BACKGROUND: Serological tests for IgM and IgG are routinely used in clinical laboratories for the rapid diagnosis of dengue and can differentiate between primary and secondary infections. Dengue virus non-structural protein 1 (NS1) has been identified as an early marker for acute dengue, and is typically present between days 1-9 post-onset of illness but following seroconversion it can be difficult to detect in serum. AIMS: To evaluate the performance of a newly developed Panbio® Dengue Early Rapid test for NS1 and determine if it can improve diagnostic sensitivity when used in combination with a commercial IgM/IgG rapid test. METHODOLOGY: The clinical performance of the Dengue Early Rapid was evaluated in a retrospective study in Vietnam with 198 acute laboratory-confirmed positive and 100 negative samples. The performance of the Dengue Early Rapid in combination with the IgM/IgG Rapid test was also evaluated in Malaysia with 263 laboratory-confirmed positive and 30 negative samples. KEY RESULTS: In Vietnam the sensitivity and specificity of the test was 69.2% (95% CI: 62.8% to 75.6%) and 96% (95% CI: 92.2% to 99.8%), respectively. In Malaysia the performance was similar with 68.9% sensitivity (95% CI: 61.8% to 76.1%) and 96.7% specificity (95% CI: 82.8% to 99.9%) compared to RT-PCR. Importantly, when the Dengue Early Rapid test was used in combination with the IgM/IgG test the sensitivity increased to 93.0%. When the two tests were compared at each day post-onset of illness there was clear differentiation between the antigen and antibody markers. CONCLUSIONS: This study highlights that using dengue NS1 antigen detection in combination with anti-glycoprotein E IgM and IgG serology can significantly increase the sensitivity of acute dengue diagnosis and extends the possible window of detection to include very early acute samples and enhances the clinical utility of rapid immunochromatographic testing for dengue.
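    As a rough illustration of why the combined approach raises sensitivity ("positive if either marker is positive"), here is a sketch using made-up per-sample results rather than the study's data; the per-test positivity rates are invented and the two markers are treated as independent, which the real data need not satisfy.

      import numpy as np

      rng = np.random.default_rng(0)
      n_pos = 263                                       # confirmed dengue-positive samples
      ns1_pos = rng.random(n_pos) < 0.69                # NS1 alone (fake ~69% sensitivity)
      igm_igg_pos = rng.random(n_pos) < 0.60            # serology alone (fake rate)
      combined_pos = ns1_pos | igm_igg_pos              # either marker positive

      for name, calls in [("NS1", ns1_pos), ("IgM/IgG", igm_igg_pos),
                          ("combined", combined_pos)]:
          print(f"{name:9s} sensitivity = {calls.mean():.2f}")   # TP / (TP + FN)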

  18. Geometrical error calibration in reflective surface testing based on reverse Hartmann test

    Science.gov (United States)

    Gong, Zhidong; Wang, Daodang; Xu, Ping; Wang, Chao; Liang, Rongguang; Kong, Ming; Zhao, Jun; Mo, Linhai; Mo, Shuhui

    2017-08-01

    In fringe-illumination deflectometry based on a reverse-Hartmann-test configuration, ray tracing of the modeled testing system is performed to reconstruct the test surface error. Careful calibration of the system geometry is required to achieve high testing accuracy. To realize high-precision surface testing with the reverse Hartmann test, a computer-aided geometrical error calibration method is proposed. The aberrations corresponding to various geometrical errors are studied. With the aberration weights for the various geometrical errors, computer-aided optimization of the system geometry with iterative ray tracing is carried out to calibrate the geometrical errors, and accuracy on the order of subnanometers is achieved.

  19. A practical approach for implementing risk-based inservice testing of pumps at nuclear power plants

    International Nuclear Information System (INIS)

    Hartley, R.S.; Maret, D.; Seniuk, P.; Smith, L.

    1996-01-01

    The American Society of Mechanical Engineers (ASME) Center for Research and Technology Development's (CRTD) Research Task Force on Risk-Based Inservice Testing has developed guidelines for risk-based inservice testing (IST) of pumps and valves. These guidelines are intended to help the ASME Operation and Maintenance (OM) Committee to enhance plant safety while focussing appropriate testing resources on critical components. This paper describes a practical approach for implementing those guidelines for pumps at nuclear power plants. The approach, as described in this paper, relies on input, direction, and assistance from several entities such as the ASME Code Committees, United States Nuclear Regulatory Commission (NRC), and the National Laboratories, as well as industry groups and personnel with applicable expertise. Key parts of the risk-based IST process that are addressed here include: identification of important failure modes, identification of significant failure causes, assessing the effectiveness of testing and maintenance activities, development of alternative testing and maintenance strategies, and assessing the effectiveness of alternative testing strategies with present ASME Code requirements. Finally, the paper suggests a method of implementing this process into the ASME OM Code for pump testing

  20. A practical approach for implementing risk-based inservice testing of pumps at nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, R.S. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Maret, D.; Seniuk, P.; Smith, L.

    1996-12-01

    The American Society of Mechanical Engineers (ASME) Center for Research and Technology Development's (CRTD) Research Task Force on Risk-Based Inservice Testing has developed guidelines for risk-based inservice testing (IST) of pumps and valves. These guidelines are intended to help the ASME Operation and Maintenance (OM) Committee to enhance plant safety while focussing appropriate testing resources on critical components. This paper describes a practical approach for implementing those guidelines for pumps at nuclear power plants. The approach, as described in this paper, relies on input, direction, and assistance from several entities such as the ASME Code Committees, United States Nuclear Regulatory Commission (NRC), and the National Laboratories, as well as industry groups and personnel with applicable expertise. Key parts of the risk-based IST process that are addressed here include: identification of important failure modes, identification of significant failure causes, assessing the effectiveness of testing and maintenance activities, development of alternative testing and maintenance strategies, and assessing the effectiveness of alternative testing strategies with present ASME Code requirements. Finally, the paper suggests a method of implementing this process into the ASME OM Code for pump testing.

  1. Erroneous analyses of interactions in neuroscience: a problem of significance

    NARCIS (Netherlands)

    Nieuwenhuis, S.; Forstmann, B.U.; Wagenmakers, E.-J.

    2011-01-01

    In theory, a comparison of two experimental effects requires a statistical test on their difference. In practice, this comparison is often based on an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (P < 0.05) but the
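    As a rough illustration of the correct procedure the authors call for, here is a sketch that tests the difference between two independent effect estimates directly (a z-test on the difference) instead of comparing the two separate significance verdicts; the estimates and standard errors are invented.

      import numpy as np
      from scipy.stats import norm

      b1, se1 = 0.25, 0.10     # effect in group 1 (significant on its own)
      b2, se2 = 0.15, 0.11     # effect in group 2 (not significant on its own)

      p1 = 2 * norm.sf(abs(b1 / se1))
      p2 = 2 * norm.sf(abs(b2 / se2))
      z_diff = (b1 - b2) / np.sqrt(se1**2 + se2**2)     # test of the difference itself
      p_diff = 2 * norm.sf(abs(z_diff))

      print(f"effect 1: p = {p1:.3f}; effect 2: p = {p2:.3f}")
      print(f"difference: z = {z_diff:.2f}, p = {p_diff:.3f}  "
            "(the two effects do not differ significantly)")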

  2. Enhancing SAT-Based Test Pattern Generation

    Institute of Scientific and Technical Information of China (English)

    LIU Xin; XIONG You-lun

    2005-01-01

    This paper presents modeling tools based on Boolean satisfiability (SAT) to solve test generation problems for combinational circuits. It adds a layer that maintains circuit-related information and value-justification relations on top of a generic SAT algorithm. It dovetails binary decision diagram (BDD) and SAT techniques to improve the efficiency of automatic test pattern generation (ATPG). More specifically, it first exploits inexpensive reconvergent fanout analysis of the circuit to gather information on local signal correlation by using BDD learning, then uses the learned information to restrict and focus the overall search space of SAT-based ATPG. Its learning technique is effective and lightweight. The experimental results demonstrate the effectiveness of the approach.

  3. Worldwide Research, Worldwide Participation: Web-Based Test Logger

    Science.gov (United States)

    Clark, David A.

    1998-01-01

    Thanks to the World Wide Web, a new paradigm has been born. ESCORT (steady state data system) facilities can now be configured to use a Web-based test logger, enabling worldwide participation in tests. NASA Lewis Research Center's new Web-based test logger for ESCORT automatically writes selected test and facility parameters to a browser and allows researchers to insert comments. All data can be viewed in real time via Internet connections, so anyone with a Web browser and the correct URL (universal resource locator, or Web address) can interactively participate. As the test proceeds and ESCORT data are taken, Web browsers connected to the logger are updated automatically. The use of this logger has demonstrated several benefits. First, researchers are free from manual data entry and are able to focus more on the tests. Second, research logs can be printed in report format immediately after (or during) a test. And finally, all test information is readily available to an international public.

  4. Methodology for testing and validating knowledge bases

    Science.gov (United States)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment have been implemented on a SUN workstation.

  5. Does the Test Work? Evaluating a Web-Based Language Placement Test

    Science.gov (United States)

    Long, Avizia Y.; Shin, Sun-Young; Geeslin, Kimberly; Willis, Erik W.

    2018-01-01

    In response to the need for examples of test validation from which everyday language programs can benefit, this paper reports on a study that used Bachman's (2005) assessment use argument (AUA) framework to examine evidence to support claims made about the intended interpretations and uses of scores based on a new web-based Spanish language…

  6. Test-retest reliability of computer-based video analysis of general movements in healthy term-born infants.

    Science.gov (United States)

    Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars

    2015-10-01

    A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion and the variability of the spatial center of motion of the infant, respectively. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively; and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs. The study demonstrated good test-retest reliability of computer-based video analysis of GMs, and a significant association between the computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
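    A minimal sketch of the two intraclass correlation coefficients reported above, ICC(1,1) and ICC(3,1), computed from an n-subjects by k-sessions matrix using the Shrout and Fleiss mean-square formulas; the test-retest data below are simulated, not the study's recordings.

      import numpy as np

      def icc_1_1_and_3_1(Y):
          n, k = Y.shape
          grand = Y.mean()
          subj_means, sess_means = Y.mean(axis=1), Y.mean(axis=0)
          msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)          # between subjects
          msw = np.sum((Y - subj_means[:, None]) ** 2) / (n * (k - 1))   # within subjects
          ssj = n * np.sum((sess_means - grand) ** 2)                    # between sessions
          sse = np.sum((Y - grand) ** 2) - k * np.sum((subj_means - grand) ** 2) - ssj
          mse = sse / ((n - 1) * (k - 1))
          icc11 = (msb - msw) / (msb + (k - 1) * msw)   # one-way random, ICC(1,1)
          icc31 = (msb - mse) / (msb + (k - 1) * mse)   # two-way mixed,  ICC(3,1)
          return icc11, icc31

      rng = np.random.default_rng(0)
      true_score = rng.normal(50, 10, 75)                                # 75 infants
      Y = np.column_stack([true_score + rng.normal(0, 4, 75) for _ in range(2)])
      print([round(v, 2) for v in icc_1_1_and_3_1(Y)])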

  7. Top-Down and Bottom-Up Approach for Model-Based Testing of Product Lines

    Directory of Open Access Journals (Sweden)

    Stephan Weißleder

    2013-03-01

    Full Text Available Systems tend to become more and more complex. This has a direct impact on system engineering processes. Two of the most important phases in these processes are requirements engineering and quality assurance. Two significant complexity drivers located in these phases are the growing number of product variants that have to be integrated into the requirements engineering and the ever growing effort for manual test design. There are modeling techniques to deal with both complexity drivers like, e.g., feature modeling and model-based test design. Their combination, however, has been seldom the focus of investigation. In this paper, we present two approaches to combine feature modeling and model-based testing as an efficient quality assurance technique for product lines. We present the corresponding difficulties and approaches to overcome them. All explanations are supported by an example of an online shop product line.

  8. Test-Retest Intervisit Variability of Functional and Structural Parameters in X-Linked Retinoschisis.

    Science.gov (United States)

    Jeffrey, Brett G; Cukras, Catherine A; Vitale, Susan; Turriff, Amy; Bowles, Kristin; Sieving, Paul A

    2014-09-01

    To examine the variability of four outcome measures that could be used to address safety and efficacy in therapeutic trials with X-linked juvenile retinoschisis. Seven men with confirmed mutations in the RS1 gene were evaluated over four visits spanning 6 months. Assessments included visual acuity, full-field electroretinograms (ERG), microperimetric macular sensitivity, and retinal thickness measured by optical coherence tomography (OCT). Eyes were separated into Better or Worse Eye groups based on acuity at baseline. Repeatability coefficients were calculated for each parameter and jackknife resampling used to derive 95% confidence intervals (CIs). The threshold for statistically significant change in visual acuity ranged from three to eight letters. For ERG a-wave, an amplitude reduction greater than 56% would be considered significant. For other parameters, variabilities were lower in the Worse Eye group, likely a result of floor effects due to collapse of the schisis pockets and/or retinal atrophy. The criteria for significant change (Better/Worse Eye) for three important parameters were: ERG b/a-wave ratio (0.44/0.23), point wise sensitivity (10.4/7.0 dB), and central retinal thickness (31%/18%). The 95% CI range for visual acuity, ERG, retinal sensitivity, and central retinal thickness relative to baseline are described for this cohort of participants with X-linked juvenile retinoschisis (XLRS). A quantitative understanding of the variability of outcome measures is vital to establishing the safety and efficacy limits for therapeutic trials of XLRS patients.
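    A minimal sketch of the general approach described above: a repeatability coefficient (here taken as 1.96 times the SD of the visit-to-visit differences, one common definition) with a 95% CI obtained by leave-one-out jackknife resampling. The acuity-like data are simulated and the exact definitions used in the study may differ.

      import numpy as np

      def repeatability(diffs):
          return 1.96 * np.std(diffs, ddof=1)

      def jackknife_ci(diffs, estimator):
          n = len(diffs)
          theta_hat = estimator(diffs)
          loo = np.array([estimator(np.delete(diffs, i)) for i in range(n)])
          se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
          return theta_hat, (theta_hat - 1.96 * se, theta_hat + 1.96 * se)

      rng = np.random.default_rng(0)
      visit1 = rng.normal(60, 8, 14)                   # e.g. 7 subjects x 2 eyes (simulated)
      visit2 = visit1 + rng.normal(0, 3, 14)
      rc, ci = jackknife_ci(visit2 - visit1, repeatability)
      print(f"repeatability = {rc:.1f} letters, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")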

  9. Building test data from real outbreaks for evaluating detection algorithms.

    Science.gov (United States)

    Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.

  10. Building test data from real outbreaks for evaluating detection algorithms.

    Directory of Open Access Journals (Sweden)

    Gaetan Texier

    Full Text Available Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler. We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1 resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak

  11. GPS Device Testing Based on User Performance Metrics

    Science.gov (United States)

    2015-10-02

    1. Rationale for a Test Program Based on User Performance Metrics ; 2. Roberson and Associates Test Program ; 3. Status of, and Revisions to, the Roberson and Associates Test Program ; 4. Comparison of Roberson and DOT/Volpe Programs

  12. Testing the performance of technical trading rules in the Chinese markets based on superior predictive test

    Science.gov (United States)

    Wang, Shan; Jiang, Zhi-Qiang; Li, Sai-Ping; Zhou, Wei-Xing

    2015-12-01

    Technical trading rules have a long history of being used by practitioners in financial markets. The profitable ability and efficiency of technical trading rules are yet controversial. In this paper, we test the performance of more than seven thousand traditional technical trading rules on the Shanghai Securities Composite Index (SSCI) from May 21, 1992 through June 30, 2013 and China Securities Index 300 (CSI 300) from April 8, 2005 through June 30, 2013 to check whether an effective trading strategy could be found by using the performance measurements based on the return and Sharpe ratio. To correct for the influence of the data-snooping effect, we adopt the Superior Predictive Ability test to evaluate if there exists a trading rule that can significantly outperform the benchmark. The result shows that for SSCI, technical trading rules offer significant profitability, while for CSI 300, this ability is lost. We further partition the SSCI into two sub-series and find that the efficiency of technical trading in sub-series, which have exactly the same spanning period as that of CSI 300, is severely weakened. By testing the trading rules on both indexes with a five-year moving window, we find that during the financial bubble from 2005 to 2007, the effectiveness of technical trading rules is greatly improved. This is consistent with the predictive ability of technical trading rules which appears when the market is less efficient.

  13. Using the Coefficient of Determination "R"[superscript 2] to Test the Significance of Multiple Linear Regression

    Science.gov (United States)

    Quinino, Roberto C.; Reis, Edna A.; Bessegato, Lupercio F.

    2013-01-01

    This article proposes the use of the coefficient of determination as a statistic for hypothesis testing in multiple linear regression based on distributions acquired by beta sampling. (Contains 3 figures.)

  14. Testing ESL sociopragmatics development and validation of a web-based test battery

    CERN Document Server

    Roever, Carsten; Elder, Catherine

    2014-01-01

    Testing of second language pragmatics has grown as a research area but still suffers from a tension between construct coverage and practicality. In this book, the authors describe the development and validation of a web-based test of second language pragmatics for learners of English. The test has a sociopragmatic orientation and strives for a broad coverage of the construct by assessing learners'' metapragmatic judgments as well as their ability to co-construct discourse. To ensure practicality, the test is delivered online and is scored partially automatically and partially by human raters.

  15. Human papillomavirus mRNA and DNA testing in women with atypical squamous cells of undetermined significance

    DEFF Research Database (Denmark)

    Thomsen, Louise T; Dehlendorff, Christian; Junge, Jette

    2016-01-01

    In this prospective cohort study, we compared the performance of human papillomavirus (HPV) mRNA and DNA testing of women with atypical squamous cells of undetermined significance (ASC-US) during cervical cancer screening. Using a nationwide Danish pathology register, we identified women aged 30......-65 years with ASC-US during 2005-2011 who were tested for HPV16/18/31/33/45 mRNA using PreTect HPV-Proofer (n = 3,226) or for high-risk HPV (hrHPV) DNA using Hybrid Capture 2 (HC2) (n = 9,405) or Linear Array HPV-Genotyping test (LA) (n = 1,533). Women with ≥1 subsequent examination in the register (n = 13...... those testing HC2 negative (3.2% [95% CI: 2.2-4.2%] versus 0.5% [95% CI: 0.3-0.7%]). Patterns were similar after 18 months and 5 years'; follow-up; for CIN2+ and cancer as outcomes; across all age groups; and when comparing mRNA testing to hrHPV DNA testing using LA. In conclusion, the HPV16...

  16. Significance of specificity of Tinetti B-POMA test and fall risk factor in third age of life.

    Science.gov (United States)

    Avdić, Dijana; Pecar, Dzemal

    2006-02-01

    As for the third age, psychophysical abilities of humans gradually decrease, while the ability of adaptation to endogenous and exogenous burdens is going down. In 1987, "Harada" et al. (1) have found out that 9.5 million persons in USA have difficulties running daily activities, while 59% of them (which is 5.6 million) are older than 65 years in age. The study has encompassed 77 questioned persons of both sexes with their average age 71.73 +/- 5.63 (scope of 65-90 years in age), chosen by random sampling. Each patient has been questioned in his/her own home and familiar to great extent with the methodology and aims of the questionnaire. Percentage of questioned women was 64.94% (50 patients) while the percentage for men was 35.06% (27 patients). As for the value of risk factor score achieved conducting the questionnaire and B-POMA test, there are statistically significant differences between men and women, as well as between patients who fell and those who never did. As for the way of life (alone or in the community), there are no significant statistical differences. Average results gained through B-POMA test in this study are statistically significantly higher in men and patients who did not provide data about falling, while there was no statistically significant difference in the way of life. In relation to the percentage of maximum number of positive answers to particular questions, regarding gender, way of life and the data about falling, there were no statistically significant differences between the value of B-POMA test and the risk factor score (the questionnaire).

  17. Testing a computer-based ostomy care training resource for staff nurses.

    Science.gov (United States)

    Bales, Isabel

    2010-05-01

    Fragmented teaching and ostomy care provided by nonspecialized clinicians unfamiliar with state-of-the-art care and products have been identified as problems in teaching ostomy care to the new ostomate. After conducting a literature review of theories and concepts related to the impact of nurse behaviors and confidence on ostomy care, the author developed a computer-based learning resource and assessed its effect on staff nurse confidence. Of 189 staff nurses with a minimum of 1 year acute-care experience employed in the acute care, emergency, and rehabilitation departments of an acute care facility in the Midwestern US, 103 agreed to participate and returned completed pre- and post-tests, each comprising the same eight statements about providing ostomy care. F and P values were computed for differences between pre- and post test scores. Based on a scale where 1 = totally disagree and 5 = totally agree with the statement, baseline confidence and perceived mean knowledge scores averaged 3.8 and after viewing the resource program post-test mean scores averaged 4.51, a statistically significant improvement (P = 0.000). The largest difference between pre- and post test scores involved feeling confident in having the resources to learn ostomy skills independently. The availability of an electronic ostomy care resource was rated highly in both pre- and post testing. Studies to assess the effects of increased confidence and knowledge on the quality and provision of care are warranted.

  18. Do sediment type and test durations affect results of laboratory-based, accelerated testing studies of permeable pavement clogging?

    Science.gov (United States)

    Nichols, Peter W B; White, Richard; Lucke, Terry

    2015-04-01

    Previous studies have attempted to quantify the clogging processes of Permeable Interlocking Concrete Pavers (PICPs) using accelerated testing methods. However, the results have been variable. This study investigated the effects that three different sediment types (natural and silica), and different simulated rainfall intensities, and testing durations had on the observed clogging processes (and measured surface infiltration rates) of laboratory-based, accelerated PICP testing studies. Results showed that accelerated simulated laboratory testing results are highly dependent on the type, and size of sediment used in the experiments. For example, when using real stormwater sediment up to 1.18 mm in size, the results showed that neither testing duration, nor stormwater application rate had any significant effect on PICP clogging. However, the study clearly showed that shorter testing durations generally increased clogging and reduced the surface infiltration rates of the models when artificial silica sediment was used. Longer testing durations also generally increased clogging of the models when using fine sediment (<300 μm). Results from this study will help researchers and designers better anticipate when and why PICPs are susceptible to clogging, reduce maintenance and extend the useful life of these increasingly common stormwater best management practices. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. 77 FR 21065 - Certain High Production Volume Chemicals; Test Rule and Significant New Use Rule; Fourth Group of...

    Science.gov (United States)

    2012-04-09

    ... 2070-AJ66 Certain High Production Volume Chemicals; Test Rule and Significant New Use Rule; Fourth... an opportunity to comment on a proposed test rule for 23 high production volume (HPV) chemical... necessary, to prohibit or limit that activity before it occurs. The opportunity to present oral comment was...

  20. Automated model-based testing of hybrid systems

    NARCIS (Netherlands)

    Osch, van M.P.W.J.

    2009-01-01

    In automated model-based input-output conformance testing, tests are automati- cally generated from a speci¯cation and automatically executed on an implemen- tation. Input is applied to the implementation and output is observed from the implementation. If the observed output is allowed according to

  1. Defect-based testing of LTS digital circuits

    NARCIS (Netherlands)

    Arun, A.J.

    2006-01-01

    A Defect-Based Test (DBT) methodology for Superconductor Electronics (SCE) is presented in this thesis, so that commercial production and efficient testing of systems can be implemented in this technology in the future. In the first chapter, the features and prospects for SCE have been presented.

  2. Towards model-based testing of electronic funds transfer systems

    OpenAIRE

    Asaadi, H.R.; Khosravi, R.; Mousavi, M.R.; Noroozi, N.

    2010-01-01

    We report on our first experience with applying model-based testing techniques to an operational Electronic Funds Transfer (EFT) switch. The goal is to test the conformance of the EFT switch to the standard flows described by the ISO 8583 standard. To this end, we first make a formalization of the transaction flows specified in the ISO 8583 standard in terms of a Labeled Transition System (LTS). This formalization paves the way for model-based testing based on the formal notion of Input-Outpu...

  3. Risk based test interval and maintenance optimisation - Application and uses

    International Nuclear Information System (INIS)

    Sparre, E.

    1999-10-01

    The project is part of an IAEA co-ordinated Research Project (CRP) on 'Development of Methodologies for Optimisation of Surveillance Testing and Maintenance of Safety Related Equipment at NPPs'. The purpose of the project is to investigate the sensitivity of the results obtained when performing risk based optimisation of the technical specifications. Previous projects have shown that complete LPSA models can be created and that these models allow optimisation of technical specifications. However, these optimisations did not include any in depth check of the result sensitivity with regards to methods, model completeness etc. Four different test intervals have been investigated in this study. Aside from an original, nominal, optimisation a set of sensitivity analyses has been performed and the results from these analyses have been compared to the original optimisation. The analyses indicate that the result of an optimisation is rather stable. However, it is not possible to draw any certain conclusions without performing a number of sensitivity analyses. Significant differences in the optimisation result were discovered when analysing an alternative configuration. Also deterministic uncertainties seem to affect the result of an optimisation largely. The sensitivity of failure data uncertainties is important to investigate in detail since the methodology is based on the assumption that the unavailability of a component is dependent on the length of the test interval

  4. The Relative Importance of Low Significance Level and High Power in Multiple Tests of Significance.

    Science.gov (United States)

    Westermann, Rainer; Hager, Willi

    1983-01-01

    Two psychological experiments--Anderson and Shanteau (1970), Berkowitz and LePage (1967)--are reanalyzed to present the problem of the relative importance of low Type 1 error probability and high power when answering a research question by testing several statistical hypotheses. (Author/PN)

  5. A comparison of test statistics for the recovery of rapid growth-based enumeration tests

    NARCIS (Netherlands)

    van den Heuvel, Edwin R.; IJzerman-Boon, Pieta C.

    This paper considers five test statistics for comparing the recovery of a rapid growth-based enumeration test with respect to the compendial microbiological method using a specific nonserial dilution experiment. The finite sample distributions of these test statistics are unknown, because they are

  6. Design Of Computer Based Test Using The Unified Modeling Language

    Science.gov (United States)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    The Admission selection of Politeknik Negeri Bengkalis through interest and talent search (PMDK), Joint Selection of admission test for state Polytechnics (SB-UMPN) and Independent (UM-Polbeng) were conducted by using paper-based Test (PBT). Paper Based Test model has some weaknesses. They are wasting too much paper, the leaking of the questios to the public, and data manipulation of the test result. This reasearch was Aimed to create a Computer-based Test (CBT) models by using Unified Modeling Language (UML) the which consists of Use Case diagrams, Activity diagram and sequence diagrams. During the designing process of the application, it is important to pay attention on the process of giving the password for the test questions before they were shown through encryption and description process. RSA cryptography algorithm was used in this process. Then, the questions shown in the questions banks were randomized by using the Fisher-Yates Shuffle method. The network architecture used in Computer Based test application was a client-server network models and Local Area Network (LAN). The result of the design was the Computer Based Test application for admission to the selection of Politeknik Negeri Bengkalis.

  7. Using the noninformative families in family-based association tests : A powerful new testing strategy

    NARCIS (Netherlands)

    Lange, C; DeMeo, D; Silverman, EK; Weiss, ST; Laird, NM

    2003-01-01

    For genetic association studies with multiple phenotypes, we propose a new strategy for multiple testing with family-based association tests (FBATs). The strategy increases the power by both using all available family data and reducing the number of hypotheses tested while being robust against

  8. Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.

    Science.gov (United States)

    Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui

    2018-01-13

    Non-uniform rational B-spline (NURBS) surface fitting from data points is wildly used in the fields of computer aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems is neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so it is named quasi scattered data in this paper. Therefore they can be organized into rows easily but the number of points in each row is random. In order to overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. Iterative projection optimization scheme is applied in the first and third step to yield advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of nodes redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulation and practical data, and the results show that the proposed method is fast, effective and robust. What's more, by analyzing the fitting results acquired form data with different degrees of scatterness it can be demonstrated that the error introduced by resampling is negligible and therefore it is feasible.

  9. Tracing the Base: A Topographic Test for Collusive Basing-Point Pricing

    NARCIS (Netherlands)

    Bos, Iwan; Schinkel, Maarten Pieter

    2009-01-01

    Basing-point pricing is known to have been abused by geographically dispersed firms in order to eliminate competition on transportation costs. This paper develops a topographic test for collusive basing-point pricing. The method uses transaction data (prices, quantities) and customer project site

  10. Tracing the base: A topographic test for collusive basing-point pricing

    NARCIS (Netherlands)

    Bos, I.; Schinkel, M.P.

    2008-01-01

    Basing-point pricing is known to have been abused by geographically dispersed firms in order to eliminate competition on transportation costs. This paper develops a topographic test for collusive basing-point pricing. The method uses transaction data (prices, quantities) and customer project site

  11. Visualization of big SPH simulations via compressed octree grids

    KAUST Repository

    Reichl, Florian

    2013-10-01

    Interactive and high-quality visualization of spatially continuous 3D fields represented by scattered distributions of billions of particles is challenging. One common approach is to resample the quantities carried by the particles to a regular grid and to render the grid via volume ray-casting. In large-scale applications such as astrophysics, however, the required grid resolution can easily exceed 10K samples per spatial dimension, letting resampling approaches appear unfeasible. In this paper we demonstrate that even in these extreme cases such approaches perform surprisingly well, both in terms of memory requirement and rendering performance. We resample the particle data to a multiresolution multiblock grid, where the resolution of the blocks is dictated by the particle distribution. From this structure we build an octree grid, and we then compress each block in the hierarchy at no visual loss using wavelet-based compression. Since decompression can be performed on the GPU, it can be integrated effectively into GPU-based out-of-core volume ray-casting. We compare our approach to the perspective grid approach which resamples at run-time into a view-aligned grid. We demonstrate considerably faster rendering times at high quality, at only a moderate memory increase compared to the raw particle set. © 2013 IEEE.

  12. Investigating a multigene prognostic assay based on significant pathways for Luminal A breast cancer through gene expression profile analysis.

    Science.gov (United States)

    Gao, Haiyan; Yang, Mei; Zhang, Xiaolan

    2018-04-01

    The present study aimed to investigate potential recurrence-risk biomarkers based on significant pathways for Luminal A breast cancer through gene expression profile analysis. Initially, the gene expression profiles of Luminal A breast cancer patients were downloaded from The Cancer Genome Atlas database. The differentially expressed genes (DEGs) were identified using a Limma package and the hierarchical clustering analysis was conducted for the DEGs. In addition, the functional pathways were screened using Kyoto Encyclopedia of Genes and Genomes pathway enrichment analyses and rank ratio calculation. The multigene prognostic assay was exploited based on the statistically significant pathways and its prognostic function was tested using train set and verified using the gene expression data and survival data of Luminal A breast cancer patients downloaded from the Gene Expression Omnibus. A total of 300 DEGs were identified between good and poor outcome groups, including 176 upregulated genes and 124 downregulated genes. The DEGs may be used to effectively distinguish Luminal A samples with different prognoses verified by hierarchical clustering analysis. There were 9 pathways screened as significant pathways and a total of 18 DEGs involved in these 9 pathways were identified as prognostic biomarkers. According to the survival analysis and receiver operating characteristic curve, the obtained 18-gene prognostic assay exhibited good prognostic function with high sensitivity and specificity to both the train and test samples. In conclusion the 18-gene prognostic assay including the key genes, transcription factor 7-like 2, anterior parietal cortex and lymphocyte enhancer factor-1 may provide a new method for predicting outcomes and may be conducive to the promotion of precision medicine for Luminal A breast cancer.

  13. Introducing evidence based medicine to the journal club, using a structured pre and post test: a cohort study

    Directory of Open Access Journals (Sweden)

    Mahoney Martin C

    2001-11-01

    Full Text Available Abstract Background Journal Club at a University-based residency program was restructured to introduce, reinforce and evaluate residents understanding of the concepts of Evidence Based Medicine. Methods Over the course of a year structured pre and post-tests were developed for use during each Journal Club. Questions were derived from the articles being reviewed. Performance with the key concepts of Evidence Based Medicine was assessed. Study subjects were 35 PGY2 and PGY3 residents in a University based Family Practice Program. Results Performance on the pre-test demonstrated a significant improvement from a median of 54.5 % to 78.9 % over the course of the year (F 89.17, p Conclusions Following organizational revision, the introduction of a pre-test/post-test instrument supported achievement of the learning objectives with a better understanding and utilization of the concepts of Evidence Based Medicine.

  14. Social marketing campaign significantly associated with increases in syphilis testing among gay and bisexual men in San Francisco.

    Science.gov (United States)

    Montoya, Jorge A; Kent, Charlotte K; Rotblatt, Harlan; McCright, Jacque; Kerndt, Peter R; Klausner, Jeffrey D

    2005-07-01

    Between 1999 and 2002, San Francisco experienced a sharp increase in early syphilis among gay and bisexual men. In response, the San Francisco Department of Public Health launched a social marketing campaign to increase testing for syphilis, and awareness and knowledge about syphilis among gay and bisexual men. A convenience sample of 244 gay and bisexual men (18-60 years of age) were surveyed to evaluate the effectiveness of the campaign. Respondents were interviewed to elicit unaided and aided awareness about the campaign, knowledge about syphilis, recent sexual behaviors, and syphilis testing behavior. After controlling for other potential confounders, unaided campaign awareness was a significant correlate of having a syphilis test in the last 6 months (odds ratio, 3.21; 95% confidence interval, 1.30-7.97) compared with no awareness of the campaign. A comparison of respondents aware of the campaign with those not aware also revealed significant increases in awareness and knowledge about syphilis. The Healthy Penis 2002 campaign achieved its primary objective of increasing syphilis testing, and awareness and knowledge about syphilis among gay and bisexual men in San Francisco.

  15. Effects of an Inquiry-Based Short Intervention on State Test Anxiety in Comparison to Alternative Coping Strategies

    Directory of Open Access Journals (Sweden)

    Ann Krispenz

    2018-02-01

    Full Text Available Background and Objectives: Test anxiety can have undesirable consequences for learning and academic achievement. The control-value theory of achievement emotions assumes that test anxiety is experienced if a student appraises an achievement situation as important (value appraisal, but feels that the situation and its outcome are not fully under his or her control (control appraisal. Accordingly, modification of cognitive appraisals is assumed to reduce test anxiety. One method aiming at the modification of appraisals is inquiry-based stress reduction. In the present study (N = 162, we assessed the effects of an inquiry-based short intervention on test anxiety.Design: Short-term longitudinal, randomized control trial.Methods: Focusing on an individual worry thought, 53 university students received an inquiry-based short intervention. Control participants reflected on their worry thought (n = 55 or were distracted (n = 52. Thought related test anxiety was assessed before, immediately after, and 2 days after the experimental treatment.Results: After the intervention as well as 2 days later, individuals who had received the inquiry-based intervention demonstrated significantly lower test anxiety than participants from the pooled control groups. Further analyses showed that the inquiry-based short intervention was more effective than reflecting on a worry thought but had no advantage over distraction.Conclusions: Our findings provide first experimental evidence for the effectiveness of an inquiry-based short intervention in reducing students’ test anxiety.

  16. An Effective Strategy to Build Up a Balanced Test Suite for Spectrum-Based Fault Localization

    Directory of Open Access Journals (Sweden)

    Ning Li

    2016-01-01

    Full Text Available During past decades, many automated software faults diagnosis techniques including Spectrum-Based Fault Localization (SBFL have been proposed to improve the efficiency of software debugging activity. In the field of SBFL, suspiciousness calculation is closely related to the number of failed and passed test cases. Studies have shown that the ratio of the number of failed and passed test case has more significant impact on the accuracy of SBFL than the total number of test cases, and a balanced test suite is more beneficial to improving the accuracy of SBFL. Based on theoretical analysis, we proposed an PNF (Passed test cases, Not execute Faulty statement strategy to reduce test suite and build up a more balanced one for SBFL, which can be used in regression testing. We evaluated the strategy making experiments using the Siemens program and Space program. Experiments indicated that our PNF strategy can be used to construct a new test suite effectively. Compared with the original test suite, the new one has smaller size (average 90% test case was reduced in experiments and more balanced ratio of failed test cases to passed test cases, while it has the same statement coverage and fault localization accuracy.

  17. Testing the effectiveness of group-based memory rehabilitation in chronic stroke patients.

    Science.gov (United States)

    Miller, Laurie A; Radford, Kylie

    2014-01-01

    Memory complaints are common after stroke, yet there have been very few studies of the outcome of memory rehabilitation in these patients. The present study evaluated the effectiveness of a new manualised, group-based memory training programme. Forty outpatients with a single-stroke history and ongoing memory complaints were enrolled. The six-week course involved education and strategy training and was evaluated using a wait-list crossover design, with three assessments conducted 12 weeks apart. Outcome measures included: tests of anterograde memory (Rey Auditory Verbal Learning Test: RAVLT; Complex Figure Test) and prospective memory (Royal Prince Alfred Prospective Memory Test); the Comprehensive Assessment of Prospective Memory (CAPM) questionnaire and self-report of number of strategies used. Significant training-related gains were found on RAVLT learning and delayed recall and on CAPM informant report. Lower baseline scores predicted greater gains for several outcome measures. Patients with higher IQ or level of education showed more gains in number of strategies used. Shorter time since onset was related to gains in prospective memory, but no other stroke-related variables influenced outcome. Our study provides evidence that a relatively brief, group-based training intervention can improve memory functioning in chronic stroke patients and clarified some of the baseline factors that influence outcome.

  18. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary

  19. Accuracy and user-acceptability of HIV self-testing using an oral fluid-based HIV rapid test.

    Directory of Open Access Journals (Sweden)

    Oon Tek Ng

    Full Text Available BACKGROUND: The United States FDA approved an over-the-counter HIV self-test, to facilitate increased HIV testing and earlier linkage to care. We assessed the accuracy of self-testing by untrained participants compared to healthcare worker (HCW testing, participants' ability to interpret sample results and user-acceptability of self-tests in Singapore. METHODOLOGY/PRINCIPAL FINDINGS: A cross-sectional study, involving 200 known HIV-positive patients and 794 unknown HIV status at-risk participants was conducted. Participants (all without prior self-test experience performed self-testing guided solely by visual instructions, followed by HCW testing, both using the OraQuick ADVANCE Rapid HIV 1/2 Antibody Test, with both results interpreted by the HCW. To assess ability to interpret results, participants were provided 3 sample results (positive, negative, and invalid to interpret. Of 192 participants who tested positive on HCW testing, self-testing was positive in 186 (96.9%, negative in 5 (2.6%, and invalid in 1 (0.5%. Of 794 participants who tested negative on HCW testing, self-testing was negative in 791 (99.6%, positive in 1 (0.1%, and invalid in 2 (0.3%. Excluding invalid tests, self-testing had sensitivity of 97.4% (95% CI 95.1% to 99.7% and specificity of 99.9% (95% CI: 99.6% to 100%. When interpreting results, 96%, 93.1% and 95.2% correctly read the positive, negative and invalid respectively. There were no significant demographic predictors for false negative self-testing or wrongly interpreting positive or invalid sample results as negative. Eighty-seven percent would purchase the kit over-the-counter; 89% preferred to take HIV tests in private. 72.5% and 74.9% felt the need for pre- and post-test counseling respectively. Only 28% would pay at least USD15 for the test. CONCLUSIONS/SIGNIFICANCE: Self-testing was associated with high specificity, and a small but significant number of false negatives. Incorrectly identifying model results as

  20. Watermarking on 3D mesh based on spherical wavelet transform.

    Science.gov (United States)

    Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng

    2004-03-01

    In this paper we propose a robust watermarking algorithm for 3D mesh. The algorithm is based on spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales by using spherical wavelet transform; the watermark is then embedded into the different levels of details. The embedding process includes: global sphere parameterization, spherical uniform sampling, spherical wavelet forward transform, embedding watermark, spherical wavelet inverse transform, and at last resampling the mesh watermarked to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of watermarking against attacks.

  1. Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.

    Science.gov (United States)

    Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis

    2017-07-01

    T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis of the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of gold standard to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of a TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method, is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are separately analyzed. Also a nonparametric hypothesis test, based on Bootstrap resampling, is used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), being always superior than that of the conventional SM alone. Regarding the sensitivity, using the EMD method also outperforms in noisy conditions (0.57 compared to 0.46 for SNR=8 dB), while it decreases in noiseless conditions. The proposed test setting designed to analyze the performance guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environment enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Validity of selected cardiovascular field-based test among Malaysian ...

    African Journals Online (AJOL)

    Based on emerge obese problem among Malaysian, this research is formulated to validate published tests among healthy female adult. Selected test namely; 20 meter multi-stage shuttle run, 2.4km run test, 1 mile walk test and Harvard Step test were correlated with laboratory test (Bruce protocol) to find the criterion validity ...

  3. A methodological critique on using temperature-conditioned resampling for climate projections as in the paper of Gerstengarbe et al. (2013) winter storm- and summer thunderstorm-related loss events in Theoretical and Applied Climatology (TAC)

    Science.gov (United States)

    Wechsung, Frank; Wechsung, Maximilian

    2016-11-01

    The STatistical Analogue Resampling Scheme (STARS) statistical approach was recently used to project changes of climate variables in Germany corresponding to a supposed degree of warming. We show by theoretical and empirical analysis that STARS simply transforms interannual gradients between warmer and cooler seasons into climate trends. According to STARS projections, summers in Germany will inevitably become dryer and winters wetter under global warming. Due to the dominance of negative interannual correlations between precipitation and temperature during the year, STARS has a tendency to generate a net annual decrease in precipitation under mean German conditions. Furthermore, according to STARS, the annual level of global radiation would increase in Germany. STARS can be still used, e.g., for generating scenarios in vulnerability and uncertainty studies. However, it is not suitable as a climate downscaling tool to access risks following from changing climate for a finer than general circulation model (GCM) spatial scale.

  4. Moving beyond the Failure of Test-Based Accountability

    Science.gov (United States)

    Koretz, Daniel

    2018-01-01

    In "The Testing Charade: Pretending to Make Schools Better", the author's new book from which this article is drawn, the failures of test-based accountability are documented and some of the most egregious misuses and outright abuses of testing are described, along with some of the most serious negative effects. Neither good intentions…

  5. Protein-Based Urine Test Predicts Kidney Transplant Outcomes

    Science.gov (United States)

    ... News Releases News Release Thursday, August 22, 2013 Protein-based urine test predicts kidney transplant outcomes NIH- ... supporting development of noninvasive tests. Levels of a protein in the urine of kidney transplant recipients can ...

  6. Computer Based Test Untuk Seleksi Masuk Politeknik Negeri Bengkalis

    Directory of Open Access Journals (Sweden)

    Agus Tedyyana

    2017-11-01

    Full Text Available AbstrakPenyeleksian calon mahasiswa baru dapat dilakukan dengan aplikasi Computer Based Test (CBT. Metode yang digunakan meliputi teknik pengumpulan data, analisis sistem, model perancangan, implementasi dan pengujian. Penelitian ini menghasilkan aplikasi CBT dimana soal yang dimunculkan dari bank soal melalui proses pengacakan dengan tidak akan memunculkan soal yang sama dengan menggunakan metoda Fisher-Yates Shuffle. Dalam proses pengamanan informasi soal saat terhubung ke jaringan maka diperlukan teknik untuk penyandian pesan agar soal tersebut sebeum dimunculkan melewati proses enkripsi dan deskripsi data terlebih dahulu maka digunakan algoritma kriptografi  RSA. Metode perancangan perangkat lunak menggunakan model waterfall, perancangan database menggunakan entity relationship diagram, perancangan antarmuka menggunakan hypertext markup language (HTML Cascading Style Sheet (CSS dan jQuery serta diimplementasikan berbasis web dengan menggunakan bahasa pemrograman PHP dan database MySQL, Arsitektur jaringan yang digunakan aplikasi Computer Based Test adalah model jaringan client-server dengan jaringan Local Area Network (LAN. Kata kunci: Computer Based Test, Fisher-Yates Shuffle, Criptography, Local Area Network AbstractSelection of new student candidates can be done with Computer Based Test (CBT application. The methods used include data collection techniques, system analysis, design model, implementation and testing. This study produces a CBT application where the questions raised from the question bank through randomization process will not bring up the same problem using the Fisher-Yates Shuffle method. In the process of securing information about the problem when connected to the network it is necessary techniques for encoding the message so that the problem before appear through the process of encryption and description of data first then used RSA cryptography algorithm. Software design method using waterfall model, database design

  7. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    Science.gov (United States)

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as paired t-test, Wilcoxon signed rank test, and significance analysis of microarray (SAM) under certain non-normal distributions. The asymptotic distribution of the test statistic, and the p-value function are discussed. The application of proposed method is shown using a real-life data set. © The Author(s) 2011.

  8. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    Full Text Available For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles demonstrated that this method was problematic. In summary, null hypothesis testing (NHT is unfalsifiable, its results depend directly on sample size and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect and probabilistic undetermination. The demonstration includes a complete example.

  9. Lawrence Livermore National Laboratory underground coal gasification data base. [US DOE-supported field tests; data

    Energy Technology Data Exchange (ETDEWEB)

    Cena, R. J.; Thorsness, C. B.

    1981-08-21

    The Department of Energy has sponsored a number of field projects to determine the feasibility of converting the nation's vast coal reserves into a clean efficient energy source via underground coal gasification (UCG). Due to these tests, a significant data base of process information has developed covering a range of coal seams (flat subbituminous, deep flat bituminous and steeply dipping subbituminous) and processing techniques. A summary of all DOE-sponsored tests to data is shown. The development of UCG on a commercial scale requires involvement from both the public and private sectors. However, without detailed process information, accurate assessments of the commercial viability of UCG cannot be determined. To help overcome this problem the DOE has directed the Lawrence Livermore National Laboratory (LLNL) to develop a UCG data base containing raw and reduced process data from all DOE-sponsored field tests. It is our intent to make the data base available upon request to interested parties, to help them assess the true potential of UCG.

  10. Towards universal voluntary HIV testing and counselling: a systematic review and meta-analysis of community-based approaches.

    Directory of Open Access Journals (Sweden)

    Amitabh B Suthar

    2013-08-01

    Full Text Available BACKGROUND: Effective national and global HIV responses require a significant expansion of HIV testing and counselling (HTC to expand access to prevention and care. Facility-based HTC, while essential, is unlikely to meet national and global targets on its own. This article systematically reviews the evidence for community-based HTC. METHODS AND FINDINGS: PubMed was searched on 4 March 2013, clinical trial registries were searched on 3 September 2012, and Embase and the World Health Organization Global Index Medicus were searched on 10 April 2012 for studies including community-based HTC (i.e., HTC outside of health facilities. Randomised controlled trials, and observational studies were eligible if they included a community-based testing approach and reported one or more of the following outcomes: uptake, proportion receiving their first HIV test, CD4 value at diagnosis, linkage to care, HIV positivity rate, HTC coverage, HIV incidence, or cost per person tested (outcomes are defined fully in the text. The following community-based HTC approaches were reviewed: (1 door-to-door testing (systematically offering HTC to homes in a catchment area, (2 mobile testing for the general population (offering HTC via a mobile HTC service, (3 index testing (offering HTC to household members of people with HIV and persons who may have been exposed to HIV, (4 mobile testing for men who have sex with men, (5 mobile testing for people who inject drugs, (6 mobile testing for female sex workers, (7 mobile testing for adolescents, (8 self-testing, (9 workplace HTC, (10 church-based HTC, and (11 school-based HTC. The Newcastle-Ottawa Quality Assessment Scale and the Cochrane Collaboration's "risk of bias" tool were used to assess the risk of bias in studies with a comparator arm included in pooled estimates. 117 studies, including 864,651 participants completing HTC, met the inclusion criteria. The percentage of people offered community-based HTC who accepted HTC

  11. Liver stiffness measurement-based scoring system for significant inflammation related to chronic hepatitis B.

    Directory of Open Access Journals (Sweden)

    Mei-Zhu Hong

    Full Text Available Liver biopsy is indispensable because liver stiffness measurement alone cannot provide information on intrahepatic inflammation. However, the presence of fibrosis highly correlates with inflammation. We constructed a noninvasive model to determine significant inflammation in chronic hepatitis B patients by using liver stiffness measurement and serum markers.The training set included chronic hepatitis B patients (n = 327, and the validation set included 106 patients; liver biopsies were performed, liver histology was scored, and serum markers were investigated. All patients underwent liver stiffness measurement.An inflammation activity scoring system for significant inflammation was constructed. In the training set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.964, 91.9%, and 90.8% in the HBeAg(+ patients and 0.978, 85.0%, and 94.0% in the HBeAg(- patients, respectively. In the validation set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.971, 90.5%, and 92.5% in the HBeAg(+ patients and 0.977, 95.2%, and 95.8% in the HBeAg(- patients. The liver stiffness measurement-based activity score was comparable to that of the fibrosis-based activity score in both HBeAg(+ and HBeAg(- patients for recognizing significant inflammation (G ≥3.Significant inflammation can be accurately predicted by this novel method. The liver stiffness measurement-based scoring system can be used without the aid of computers and provides a noninvasive alternative for the prediction of chronic hepatitis B-related significant inflammation.

  12. Using a micro computer based test bank

    International Nuclear Information System (INIS)

    Hamel, R.T.

    1987-01-01

    Utilizing a micro computer based test bank offers a training department many advantages and can have a positive impact upon training procedures and examination standards. Prior to data entry, Training Department management must pre-review the examination questions and answers to ensure compliance with examination standards and to verify the validity of all questions. Management must adhere to the TSD format since all questions require an enabling objective numbering scheme. Each question is entered under the enabling objective upon which it is based. Then the question is selected via the enabling objective. This eliminates any instructor bias because a random number generator chooses the test question. However, the instructor may load specific questions to create an emphasis theme for any test. The examination, answer and cover sheets are produced and printed within minutes. The test bank eliminates the large amount of time that is normally required for an instructor to formulate an examination. The need for clerical support is reduced by the elimination of typing examinations and also by the software's ability to maintain and generate student/course lists, attendance sheets, and grades. Software security measures limit access to the test bank, and the impromptu method used to generate and print an examination enhance its security

  13. Improvement of testing and maintenance based on fault tree analysis

    International Nuclear Information System (INIS)

    Cepin, M.

    2000-01-01

    Testing and maintenance of safety equipment is an important issue, which significantly contributes to safe and efficient operation of a nuclear power plant. In this paper a method, which extends the classical fault tree with time, is presented. Its mathematical model is represented by a set of equations, which include time requirements defined in the house event matrix. House events matrix is a representation of house events switched on and off through the discrete points of time. It includes house events, which timely switch on and off parts of the fault tree in accordance with the status of the plant configuration. Time dependent top event probability is calculated by the fault tree evaluations. Arrangement of components outages is determined on base of minimization of mean system unavailability. The results show that application of the method may improve the time placement of testing and maintenance activities of safety equipment. (author)

  14. Frequency of Testing for Dyslipidemia: An Evidence-Based Analysis

    Science.gov (United States)

    2014-01-01

    Background Dyslipidemias include high levels of total cholesterol, low-density lipoprotein (LDL) cholesterol, and triglycerides and low levels of high-density lipoprotein (HDL) cholesterol. Dyslipidemia is a risk factor for cardiovascular disease, which is a major contributor to mortality in Canada. Approximately 23% of the 2009/11 Canadian Health Measures Survey (CHMS) participants had a high level of LDL cholesterol, with prevalence increasing with age, and approximately 15% had a total cholesterol to HDL ratio above the threshold. Objectives To evaluate the frequency of lipid testing in adults not diagnosed with dyslipidemia and in adults on treatment for dyslipidemia. Research Methods A systematic review of the literature set out to identify randomized controlled trials (RCTs), systematic reviews, health technology assessments (HTAs), and observational studies published between January 1, 2000, and November 29, 2012, that evaluated the frequency of testing for dyslipidemia in the 2 populations. Results Two observational studies assessed the frequency of lipid testing, 1 in individuals not on lipid-lowering medications and 1 in treated individuals. Both studies were based on previously collected data intended for a different objective and, therefore, no conclusions could be reached about the frequency of testing at intervals other than the ones used in the original studies. Given this limitation and generalizability issues, the quality of evidence was considered very low. No evidence for the frequency of lipid testing was identified in the 2 HTAs included. Canadian and international guidelines recommend testing for dyslipidemia in individuals at an increased risk for cardiovascular disease. The frequency of testing recommended is based on expert consensus. Conclusions Conclusions on the frequency of lipid testing could not be made based on the 2 observational studies. Current guidelines recommend lipid testing in adults with increased cardiovascular risk, with

  15. Tests of gravity with future space-based experiments

    Science.gov (United States)

    Sakstein, Jeremy

    2018-03-01

    Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.

  16. Operational Based Vision Assessment Automated Vision Test Collection User Guide

    Science.gov (United States)

    2017-05-15

    AFRL-SA-WP-SR-2017-0012. Operational Based Vision Assessment Automated Vision Test Collection User Guide. Elizabeth Shoda, Alex… June 2015 – May 2017. …automated vision tests, or AVT. Development of the AVT was required to support the threshold-level vision testing capability needed to investigate the

  17. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  18. Evolution of a Computer-Based Testing Laboratory

    Science.gov (United States)

    Moskal, Patrick; Caldwell, Richard; Ellis, Taylor

    2009-01-01

    In 2003, faced with increasing growth in technology-based and large-enrollment courses, the College of Business Administration at the University of Central Florida opened a computer-based testing lab to facilitate administration of course examinations. Patrick Moskal, Richard Caldwell, and Taylor Ellis describe the development and evolution of the…

  19. Formal Specification Based Automatic Test Generation for Embedded Network Systems

    Directory of Open Access Journals (Sweden)

    Eun Hye Choi

    2014-01-01

    Full Text Available Embedded systems have become increasingly connected and communicate with each other, forming large-scaled and complicated network systems. To make their design and testing more reliable and robust, this paper proposes a formal specification language called SENS and a SENS-based automatic test generation tool called TGSENS. Our approach is summarized as follows: (1) A user describes requirements of target embedded network systems by logical property-based constraints using SENS. (2) Given SENS specifications, test cases are automatically generated using a SAT-based solver. Filtering mechanisms to select efficient test cases are also available in our tool. (3) In addition, given a testing goal by the user, test sequences are automatically extracted from exhaustive test cases. We implemented our approach and conducted several experiments on practical case studies. Through the experiments, we confirmed the efficiency of our approach in the design and test generation of real embedded air-conditioning network systems.

  20. Vehicle Fault Diagnose Based on Smart Sensor

    Science.gov (United States)

    Zhining, Li; Peng, Wang; Jianmin, Mei; Jianwei, Li; Fei, Teng

    In a vehicle's traditional fault diagnosis system, a computer with an A/D card is connected to many sensors. The disadvantages of this arrangement are that the sensors can hardly be shared with the control system and other systems, there are too many connecting lines, and electromagnetic compatibility (EMC) suffers. In this paper, a smart speed sensor, a smart acoustic pressure sensor, a smart oil pressure sensor, a smart acceleration sensor, and a smart order tracking sensor were designed to solve this problem. Over the CAN bus, these smart sensors, the fault diagnosis computer, and other computers can be connected into a network system that monitors and controls the vehicle's diesel engine and other systems without any duplicate sensors. The hardware and software of the smart sensor system are introduced; the oil pressure, vibration, and acoustic signals are resampled at constant angle increments to eliminate the influence of rotation speed. After resampling, the signal in every working cycle can be averaged in the angle domain and subjected to further analyses such as order spectrum analysis.
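
    A minimal sketch of resampling at constant angle increments is given below. The sampling rate, speed profile, and signal are synthetic stand-ins, and interpolating the time-sampled signal onto a uniform angle grid is only one common way to implement the step described above.

    """Angular-domain resampling sketch: interpolate a time-sampled signal
    onto a grid of constant shaft-angle increments, so that cycle averages
    are insensitive to speed fluctuations.  All numbers are illustrative."""
    import numpy as np

    fs = 10_000.0                          # sampling rate, Hz
    t = np.arange(0, 2.0, 1.0 / fs)        # 2 s of data
    rpm = 1500 + 200 * np.sin(2 * np.pi * 0.5 * t)        # varying shaft speed
    phase = np.cumsum(rpm / 60.0 * 2 * np.pi) / fs        # shaft angle, rad
    signal = np.sin(4 * phase) + 0.1 * np.random.randn(t.size)   # 4th-order component + noise

    samples_per_rev = 256
    n_revs = int(phase[-1] // (2 * np.pi))
    angle_grid = np.arange(n_revs * samples_per_rev) * (2 * np.pi / samples_per_rev)

    # constant-angle-increment resampling: invert angle(t) by interpolation
    resampled = np.interp(angle_grid, phase, signal)

    # average over working cycles (here: one revolution) in the angle domain
    cycle_avg = resampled.reshape(n_revs, samples_per_rev).mean(axis=0)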

  1. Computer-Based English Language Testing in China: Present and Future

    Science.gov (United States)

    Yu, Guoxing; Zhang, Jing

    2017-01-01

    In this special issue on high-stakes English language testing in China, the two articles on computer-based testing (Jin & Yan; He & Min) highlight a number of consistent, ongoing challenges and concerns in the development and implementation of the nationwide IB-CET (Internet Based College English Test) and institutional computer-adaptive…

  2. Distance-based microfluidic quantitative detection methods for point-of-care testing.

    Science.gov (United States)

    Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James

    2016-04-07

    Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.

  3. The Local Fractional Bootstrap

    DEFF Research Database (Denmark)

    Bennedsen, Mikkel; Hounyo, Ulrich; Lunde, Asger

    We introduce a bootstrap procedure for high-frequency statistics of Brownian semistationary processes. More specifically, we focus on a hypothesis test on the roughness of sample paths of Brownian semistationary processes, which uses an estimator based on a ratio of realized power variations. Our new resampling method, the local fractional bootstrap, relies on simulating an auxiliary fractional Brownian motion that mimics the fine properties of high frequency differences of the Brownian semistationary process under the null hypothesis. We prove the first order validity of the bootstrap method, and in simulations we observe that the bootstrap-based hypothesis test provides considerable finite-sample improvements over an existing test that is based on a central limit theorem. This is important when studying the roughness properties of time series data; we illustrate this by applying the bootstrap method…

  4. Scopolamine provocation-based pharmacological MRI model for testing procognitive agents.

    Science.gov (United States)

    Hegedűs, Nikolett; Laszy, Judit; Gyertyán, István; Kocsis, Pál; Gajári, Dávid; Dávid, Szabolcs; Deli, Levente; Pozsgay, Zsófia; Tihanyi, Károly

    2015-04-01

    There is a huge unmet need to understand and treat pathological cognitive impairment. The development of disease-modifying cognitive enhancers is hindered by the lack of a correct pathomechanism and suitable animal models. Most animal models used to study cognition and pathology fulfil neither the predictive validity, face validity, nor construct validity criteria, and their outcome measures differ greatly from those of human trials. Fortunately, some pharmacological agents such as scopolamine evoke similar effects on cognition and cerebral circulation in rodents and humans, and functional MRI enables us to compare cognitive agents directly in different species. In this paper we report the validation of a scopolamine-based rodent pharmacological MRI provocation model. The effects of deemed procognitive agents (donepezil, vinpocetine, piracetam, and the alpha-7 selective cholinergic compounds EVP-6124 and PNU-120596) were compared on the blood-oxygen-level dependent responses and also linked to rodent cognitive models. All of these drugs except piracetam showed a significant effect on the scopolamine-induced blood-oxygen-level dependent change. In the water labyrinth test only PNU-120596 did not show a significant effect. This provocation model is suitable for testing procognitive compounds. These functional MR imaging experiments can be paralleled with human studies, which may help reduce the number of false cognitive clinical trials. © The Author(s) 2015.

  5. Gene-based testing of interactions in association studies of quantitative traits.

    Directory of Open Access Journals (Sweden)

    Li Ma

    Full Text Available Various methods have been developed for identifying gene-gene interactions in genome-wide association studies (GWAS). However, most methods focus on individual markers as the testing unit, and the large number of such tests drastically erodes statistical power. In this study, we propose novel interaction tests of quantitative traits that are gene-based and that confer advantage in both statistical power and biological interpretation. The framework of gene-based gene-gene interaction (GGG) tests combines marker-based interaction tests between all pairs of markers in two genes to produce a gene-level test for interaction between the two. The tests are based on an analytical formula we derive for the correlation between marker-based interaction tests due to linkage disequilibrium. We propose four GGG tests that extend the following P value combining methods: minimum P value, extended Simes procedure, truncated tail strength, and truncated P value product. Extensive simulations point to correct type I error rates of all tests and show that the two truncated tests are more powerful than the other tests in cases of markers involved in the underlying interaction not being directly genotyped and in cases of multiple underlying interactions. We applied our tests to pairs of genes that exhibit a protein-protein interaction to test for gene-level interactions underlying lipid levels using genotype data from the Atherosclerosis Risk in Communities study. We identified five novel interactions that are not evident from marker-based interaction testing and successfully replicated one of these interactions, between SMAD3 and NEDD9, in an independent sample from the Multi-Ethnic Study of Atherosclerosis. We conclude that our GGG tests show improved power to identify gene-level interactions in existing, as well as emerging, association studies.
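
    Two of the P value combining rules named above can be sketched in a few lines. The versions below are generic (Simes and a Zaykin-style truncated product), use random numbers as stand-ins for the marker-pair interaction P values, and omit the linkage-disequilibrium correlation correction that the GGG framework derives.

    """Gene-level combination of marker-pair interaction P values: minimal
    sketches of two common combining rules (not the paper's exact tests)."""
    import numpy as np

    def simes(pvals):
        """Simes combined P value for a set of (possibly dependent) tests."""
        p = np.sort(np.asarray(pvals))
        m = p.size
        return np.min(m * p / np.arange(1, m + 1))

    def truncated_product_stat(pvals, tau=0.05):
        """Truncated product statistic: product of P values <= tau, returned on
        the -log scale; its significance would normally be assessed by
        permutation or an analytic null distribution."""
        p = np.asarray(pvals)
        kept = p[p <= tau]
        if kept.size == 0:
            return 0.0
        return -np.sum(np.log(kept))

    pairwise_p = np.random.uniform(size=50)   # stand-in for marker-pair interaction tests
    print("Simes combined P:", simes(pairwise_p))
    print("Truncated product statistic:", truncated_product_stat(pairwise_p))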

  6. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Science.gov (United States)

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
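
    A much-simplified two-group sketch conveys the flavor of modeling test statistics directly: the mixture below (a standard normal null plus a single Gaussian alternative, fit by EM) and the Bayesian FDR rejection rule are illustrative assumptions, not the authors' model.

    """Simplified two-group model on test statistics:
    z_i ~ pi0 * N(0,1) + (1 - pi0) * N(mu, sigma^2).
    Posterior null probabilities are estimated by EM, and rejections are
    chosen so the estimated Bayesian FDR stays below q."""
    import numpy as np
    from scipy.stats import norm

    def fit_two_group(z, n_iter=200):
        pi0, mu, sigma = 0.9, 2.0, 1.0
        for _ in range(n_iter):
            f0 = pi0 * norm.pdf(z, 0.0, 1.0)
            f1 = (1 - pi0) * norm.pdf(z, mu, sigma)
            w = f1 / (f0 + f1)                     # P(alternative | z_i)
            pi0 = 1.0 - w.mean()
            mu = np.sum(w * z) / np.sum(w)
            sigma = np.sqrt(np.sum(w * (z - mu) ** 2) / np.sum(w))
        return 1.0 - w                             # posterior null probabilities

    def bayesian_fdr_reject(post_null, q=0.10):
        order = np.argsort(post_null)
        running_mean = np.cumsum(post_null[order]) / np.arange(1, post_null.size + 1)
        k = np.sum(running_mean <= q)              # largest set with estimated FDR <= q
        reject = np.zeros(post_null.size, dtype=bool)
        reject[order[:k]] = True
        return reject

    z = np.concatenate([np.random.randn(900), np.random.randn(100) + 3.0])
    post_null = fit_two_group(z)
    print("number of rejections:", bayesian_fdr_reject(post_null).sum())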

  7. Testing for heteroscedasticity in jumpy and noisy high-frequency data: A resampling approach

    DEFF Research Database (Denmark)

    Christensen, Kim; Hounyo, Ulrich; Podolskij, Mark

    -frequency data. We document the importance of jump-robustness, when measuring heteroscedasticity in practice. We also find that a large fraction of variation in intraday volatility is accounted for by seasonality. This suggests that, once we control for jumps and deate asset returns by a non-parametric estimate...

  8. Space Launch System Base Heating Test: Tunable Diode Laser Absorption Spectroscopy

    Science.gov (United States)

    Parker, Ron; Carr, Zak; MacLean, Mathew; Dufrene, Aaron; Mehta, Manish

    2016-01-01

    This paper describes the Tunable Diode Laser Absorption Spectroscopy (TDLAS) measurement of several water transitions that were interrogated during a hot-fire testing of the Space Launch Systems (SLS) sub-scale vehicle installed in LENS II. The temperature of the recirculating gas flow over the base plate was found to increase with altitude and is consistent with CFD results. It was also observed that the gas above the base plate has significant velocity along the optical path of the sensor at the higher altitudes. The line-by-line analysis of the H2O absorption features must include the effects of the Doppler shift phenomena particularly at high altitude. The TDLAS experimental measurements and the analysis procedure which incorporates the velocity dependent flow will be described.

  9. On detection and assessment of statistical significance of Genomic Islands

    Directory of Open Access Journals (Sweden)

    Chaudhuri Probal

    2008-04-01

    Full Text Available Abstract Background Many of the available methods for detecting Genomic Islands (GIs in prokaryotic genomes use markers such as transposons, proximal tRNAs, flanking repeats etc., or they use other supervised techniques requiring training datasets. Most of these methods are primarily based on the biases in GC content or codon and amino acid usage of the islands. However, these methods either do not use any formal statistical test of significance or use statistical tests for which the critical values and the P-values are not adequately justified. We propose a method, which is unsupervised in nature and uses Monte-Carlo statistical tests based on randomly selected segments of a chromosome. Such tests are supported by precise statistical distribution theory, and consequently, the resulting P-values are quite reliable for making the decision. Results Our algorithm (named Design-Island, an acronym for Detection of Statistically Significant Genomic Island) runs in two phases. Some 'putative GIs' are identified in the first phase, and those are refined into smaller segments containing horizontally acquired genes in the refinement phase. This method is applied to Salmonella typhi CT18 genome leading to the discovery of several new pathogenicity, antibiotic resistance and metabolic islands that were missed by earlier methods. Many of these islands contain mobile genetic elements like phage-mediated genes, transposons, integrase and IS elements confirming their horizontal acquirement. Conclusion The proposed method is based on statistical tests supported by precise distribution theory and reliable P-values along with a technique for visualizing statistically significant islands. The performance of our method is better than many other well known methods in terms of their sensitivity and accuracy, and in terms of specificity, it is comparable to other methods.
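
    The Monte Carlo idea, scoring a candidate window against randomly selected segments of the same length, can be sketched as follows. The random genome, the GC-content statistic, and the window coordinates are placeholders rather than the Design-Island implementation.

    """Monte Carlo significance sketch for a candidate genomic island: the GC
    content of a window is compared with the empirical distribution obtained
    from randomly placed segments of the same length."""
    import numpy as np

    rng = np.random.default_rng(0)
    genome = rng.choice(list("ACGT"), size=1_000_000, p=[0.3, 0.2, 0.2, 0.3])

    def gc_content(seq):
        seq = np.asarray(seq)
        return np.mean((seq == "G") | (seq == "C"))

    def monte_carlo_pvalue(start, length, n_draws=2000):
        observed = gc_content(genome[start:start + length])
        starts = rng.integers(0, genome.size - length, size=n_draws)
        null = np.array([gc_content(genome[s:s + length]) for s in starts])
        # two-sided empirical P value with the usual +1 correction
        extreme = np.abs(null - null.mean()) >= np.abs(observed - null.mean())
        return (extreme.sum() + 1) / (n_draws + 1)

    print("P value for window [5000, 15000):", monte_carlo_pvalue(5000, 10_000))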

  10. Smart device-based testing for medical students in Korea: satisfaction, convenience, and advantages

    Directory of Open Access Journals (Sweden)

    Eun Young Lim

    2017-04-01

    Full Text Available The aim of this study was to investigate respondents' satisfaction with smart device-based testing (SBT), as well as its convenience and advantages, in order to improve its implementation. The survey was conducted among 108 junior medical students at Kyungpook National University School of Medicine, Korea, who took a practice licensing examination using SBT in September 2015. The survey contained 28 items scored using a 5-point Likert scale. The items were divided into the following three categories: satisfaction with SBT administration, convenience of SBT features, and advantages of SBT compared to paper-and-pencil testing or computer-based testing. The reliability of the survey was 0.95. Of the three categories, the convenience of the SBT features received the highest mean (M) score (M = 3.75, standard deviation [SD] = 0.69), while the category of satisfaction with SBT received the lowest (M = 3.13, SD = 1.07). No statistically significant differences across these categories with respect to sex, age, or experience were observed. These results indicate that SBT was practical and effective to take and to administer.

  11. Tunable Absorption System based on magnetorheological elastomers and Halbach array: design and testing

    Energy Technology Data Exchange (ETDEWEB)

    Bocian, Mirosław; Kaleta, Jerzy; Lewandowski, Daniel, E-mail: daniel.lewandowski@pwr.edu.pl; Przybylski, Michał

    2017-08-01

    Highlights: • Construction of a Tunable Absorption System incorporating MRE has been completed. • For control of the system by a magnetic field, a double circular Halbach array has been used. • Significant changes of the TAS's natural frequency and damping have been obtained. - Abstract: In this paper, the systematic design, construction and testing of a Tunable Absorption System (TAS) incorporating a magnetorheological elastomer (MRE) has been investigated. The TAS has been designed for energy absorption and mitigation of vibratory motions from an impact excitation. The main advantage of the designed TAS is that it has the ability to change and adapt to working conditions. Tunability can be realised through a change in the magnetic field caused by a change of the internal arrangement of permanent magnets within a double dipolar circular Halbach array. To show the capabilities of the tested system, experiments based on an impulse excitation have been performed. Significant changes of the TAS's natural frequency and damping characteristics have been obtained. By incorporating magnetic tunability within the TAS, a significant qualitative and quantitative change in the device's mechanical properties and performance was obtained.

  12. Comparison of the clinical performance of an HPV mRNA test and an HPV DNA test in triage of atypical squamous cells of undetermined significance (ASC-US)

    DEFF Research Database (Denmark)

    Waldstrom, M; Ornskov, D

    2012-01-01

    The effect of triaging women with atypical squamous cells of undetermined significance (ASC-US) with human papillomavirus (HPV) DNA testing has been well documented. New tests detecting HPV E6/E7 mRNA are emerging, claiming to be more specific for detecting high-grade disease. We evaluated the cl...

  13. Statistical Significance for Hierarchical Clustering

    Science.gov (United States)

    Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.

    2017-01-01

    Summary Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990
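
    A stripped-down Monte Carlo test for a single split gives the flavor of the approach, though not the sequential, family-wise-error-controlling procedure itself. The two-cluster index, the Gaussian null fitted to the data, and the simulation settings below are illustrative choices only.

    """Monte Carlo test for the top split of a hierarchical clustering: the
    observed two-cluster index is compared with its null distribution under
    a single Gaussian fitted to the same data (a simplified stand-in)."""
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def two_cluster_index(X):
        labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
        total = np.sum((X - X.mean(axis=0)) ** 2)
        within = sum(np.sum((X[labels == k] - X[labels == k].mean(axis=0)) ** 2)
                     for k in (1, 2))
        return within / total              # small values = strong clustering

    def cluster_significance(X, n_sim=500, seed=0):
        rng = np.random.default_rng(seed)
        obs = two_cluster_index(X)
        mean, cov = X.mean(axis=0), np.cov(X, rowvar=False)
        null = [two_cluster_index(rng.multivariate_normal(mean, cov, size=X.shape[0]))
                for _ in range(n_sim)]
        return (np.sum(np.array(null) <= obs) + 1) / (n_sim + 1)

    X = np.vstack([np.random.randn(50, 5), np.random.randn(50, 5) + 2.0])
    print("Monte Carlo P value for the top split:", cluster_significance(X))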

  14. Uncertainty management in knowledge based systems for nondestructive testing-an example from ultrasonic testing

    International Nuclear Information System (INIS)

    Rajagopalan, C.; Kalyanasundaram, P.; Baldev Raj

    1996-01-01

    The use of fuzzy logic, as a framework for uncertainty management, in a knowledge-based system (KBS) for ultrasonic testing of austenitic stainless steels is described. Parameters that may contain uncertain values are identified. Methodologies to handle uncertainty in these parameters using fuzzy logic are detailed. The overall improvement in the performance of the knowledge-based system after incorporating fuzzy logic is discussed. The methodology developed being universal, its extension to other KBS for nondestructive testing and evaluation is highlighted. (author)

  15. Smartphone-based audiometric test for screening hearing loss in the elderly.

    Science.gov (United States)

    Abu-Ghanem, Sara; Handzel, Ophir; Ness, Lior; Ben-Artzi-Blima, Miri; Fait-Ghelbendorf, Karin; Himmelfarb, Mordechai

    2016-02-01

    Hearing loss is widespread among the elderly. One of the main obstacles to rehabilitation is identifying individuals with potentially correctable hearing loss. Smartphone-based hearing tests can be administered at home, thus greatly facilitating access to screening. This study evaluates the use of a smartphone application as a screening tool for hearing loss in individuals aged ≥ 65 years. Twenty-six subjects aged 84.4 ± 6.73 years (mean ± SD) were recruited. Pure-tone audiometry was administered by both a smartphone application (uHear for iPhone, v1.0 Unitron, Canada) and a standard portable audiometer by trained personnel. Participants also completed a questionnaire on their hearing. Pure-tone thresholds were compared between the two testing modalities and correlated with the questionnaire results. The cutoff point for failing screening tests was a pure tone average of 40 dB for the frequencies 250-6,000 Hz. The smartphone application's pure tone thresholds were higher (poorer hearing) than the audiometric thresholds, with a significant difference in all frequencies but 2,000 Hz. The application and the audiometric values were in agreement for 24 subjects (92 %). The application had a sensitivity of 100 % and specificity of 60 % for screening compared with the audiometer. The questionnaire was significantly less accurate, having assigned a passing score to three participants who failed both the application and audiometric tests. While a smartphone application may not be able to accurately determine the level of hearing impairment, it is useful as a highly accessible portable audiometer substitute for screening for hearing loss in elderly populations.

  16. Evaluation of a low-cost liquid-based Pap test in rural El Salvador: a split-sample study.

    Science.gov (United States)

    Guo, Jin; Cremer, Miriam; Maza, Mauricio; Alfaro, Karla; Felix, Juan C

    2014-04-01

    We sought to test the diagnostic efficacy of a low-cost, liquid-based cervical cytology that could be implemented in low-resource settings. A prospective, split-sample Pap study was performed in 595 women attending a cervical cancer screening clinic in rural El Salvador. Collected cervical samples were used to make a conventional Pap (cell sample directly to glass slide), whereas residual material was used to make the liquid-based sample using the ClearPrep method. Selected samples were tested from the residual sample of the liquid-based collection for the presence of high-risk Human papillomaviruses. Of 595 patients, 570 were interpreted with the same diagnosis between the 2 methods (95.8% agreement). There were comparable numbers of unsatisfactory cases; however, ClearPrep significantly increased detection of low-grade squamous intraepithelial lesions and decreased the diagnoses of atypical squamous cells of undetermined significance. ClearPrep identified an equivalent number of high-grade squamous intraepithelial lesion cases as the conventional Pap. High-risk human papillomavirus was identified in all cases of high-grade squamous intraepithelial lesion, adenocarcinoma in situ, and cancer as well as in 78% of low-grade squamous intraepithelial lesions out of the residual fluid of the ClearPrep vials. The low-cost ClearPrep Pap test demonstrated equivalent detection of squamous intraepithelial lesions when compared with the conventional Pap smear and demonstrated the potential for ancillary molecular testing. The test seems a viable option for implementation in low-resource settings.

  17. Exploring pharmacy and home-based sexually transmissible infection testing.

    Science.gov (United States)

    Habel, Melissa A; Scheinmann, Roberta; Verdesoto, Elizabeth; Gaydos, Charlotte; Bertisch, Maggie; Chiasson, Mary Ann

    2015-11-01

    Background This study assessed the feasibility and acceptability of pharmacy and home-based sexually transmissible infection (STI) screening as alternate testing venues among emergency contraception (EC) users. The study included two phases in February 2011-July 2012. In Phase I, customers purchasing EC from eight pharmacies in Manhattan received vouchers for free STI testing at onsite medical clinics. In Phase II, three Facebook ads targeted EC users to connect them with free home-based STI test kits ordered online. Participants completed a self-administered survey. Only 38 participants enrolled in Phase I: 90% female, ≤29 years (74%), 45% White non-Hispanic and 75% college graduates; 71% were not tested for STIs in the past year and 68% reported a new partner in the past 3 months. None tested positive for STIs. In Phase II, ads led to >45000 click-throughs, 382 completed the survey and 290 requested kits; 28% were returned. Phase II participants were younger and less educated than Phase I participants; six tested positive for STIs. Challenges included recruitment, pharmacy staff participation, advertising with discretion and cost. This study found low uptake of pharmacy and home-based testing among EC users; however, STI testing in these settings is feasible and the acceptability findings indicate an appeal among younger women for testing in non-traditional settings. Collaborating with and training pharmacy and medical staff are key elements of service provision. Future research should explore how different permutations of expanding screening in non-traditional settings could improve testing uptake and detect additional STI cases.

  18. Statistical inference, the bootstrap, and neural-network modeling with application to foreign exchange rates.

    Science.gov (United States)

    White, H; Racine, J

    2001-01-01

    We propose tests for individual and joint irrelevance of network inputs. Such tests can be used to determine whether an input or group of inputs "belong" in a particular model, thus permitting valid statistical inference based on estimated feedforward neural-network models. The approaches employ well-known statistical resampling techniques. We conduct a small Monte Carlo experiment showing that our tests have reasonable level and power behavior, and we apply our methods to examine whether there are predictable regularities in foreign exchange rates. We find that exchange rates do appear to contain information that is exploitable for enhanced point prediction, but the nature of the predictive relations evolves through time.
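
    The sketch below is not the derivative-based statistic of the paper, only a simpler resampling diagnostic in the same spirit: if an input is irrelevant, shuffling it should not degrade out-of-sample fit. The data-generating process, network size, and number of shuffles are arbitrary choices.

    """Resampling check of input relevance for a small feedforward network:
    shuffle one input at a time and look at the out-of-sample loss increase."""
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 1000
    X = rng.normal(size=(n, 3))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)   # input 2 is irrelevant

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
    base_mse = np.mean((net.predict(X_te) - y_te) ** 2)

    for j in range(3):
        increases = []
        for _ in range(200):
            Xp = X_te.copy()
            Xp[:, j] = rng.permutation(X_te[:, j])       # break any relation to y
            increases.append(np.mean((net.predict(Xp) - y_te) ** 2) - base_mse)
        print(f"input {j}: mean out-of-sample MSE increase when shuffled = {np.mean(increases):.4f}")

    A relevant input produces a clear loss increase across the shuffles, while an irrelevant one hovers near zero; a formal test would compare the statistic with a resampled null distribution, as the paper does.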

  19. Inquiry-Based Instruction and High Stakes Testing

    Science.gov (United States)

    Cothern, Rebecca L.

    Science education is a key to economic success for a country in terms of promoting advances in national industry and technology and maximizing competitive advantage in a global marketplace. The December 2010 Program for International Student Assessment (PISA) ranked the United States 23rd of 65 countries in science. That dismal standing in science proficiency impedes the ability of American school graduates to compete in the global market place. Furthermore, the implementation of high stakes testing in science mandated by the 2007 No Child Left Behind (NCLB) Act has created an additional need for educators to find effective science pedagogy. Research has shown that inquiry-based science instruction is one of the predominant science instructional methods. Inquiry-based instruction is a multifaceted teaching method with its theoretical foundation in constructivism. A correlational survey research design was used to determine the relationship between levels of inquiry-based science instruction and student performance on a standardized state science test. A self-report survey, using a Likert-type scale, was completed by 26 fifth grade teachers. Participants' responses were analyzed and grouped as high, medium, or low level inquiry instruction. The unit of analysis for the achievement variable was the student scale score average from the state science test. Spearman's Rho correlation data showed a positive relationship between the level of inquiry-based instruction and student achievement on the state assessment. The findings can assist teachers and administrators by providing additional research on the benefits of the inquiry-based instructional method. Implications for positive social change include increases in student proficiency and decision-making skills related to science policy issues which can help make them more competitive in the global marketplace.

  20. A hybrid approach to fault diagnosis of roller bearings under variable speed conditions

    Science.gov (United States)

    Wang, Yanxue; Yang, Lin; Xiang, Jiawei; Yang, Jianwei; He, Shuilong

    2017-12-01

    Rolling element bearings are one of the main elements in rotating machines, whose failure may lead to a fatal breakdown and significant economic losses. Conventional vibration-based diagnostic methods rely on the stationarity assumption and are therefore not applicable to the diagnosis of bearings working under varying speeds, a constraint that significantly limits their industrial application. A hybrid approach to fault diagnosis of roller bearings under variable speed conditions is proposed in this work, based on computed order tracking (COT) and a variational mode decomposition (VMD)-based time-frequency representation (VTFR). COT is utilized to resample the non-stationary vibration signal in the angular domain, while VMD is used to decompose the resampled signal into a number of band-limited intrinsic mode functions (BLIMFs). A VTFR is then constructed based on the estimated instantaneous frequency and instantaneous amplitude of each BLIMF. Moreover, the Gini index and time-frequency kurtosis are proposed to quantitatively measure the sparsity and the concentration of the time-frequency representation, respectively. The effectiveness of the VTFR for extracting nonlinear components has been verified with a bat signal. Results of this numerical simulation also show that the sparsity and concentration of the VTFR are better than those of the short-time Fourier transform, continuous wavelet transform, Hilbert-Huang transform and Wigner-Ville distribution techniques. Several experimental results further demonstrate that the proposed method can reliably detect bearing faults under variable speed conditions.
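
    The Gini index used as a sparsity measure can be computed from the sorted absolute coefficients of the time-frequency matrix. The sketch below uses the standard Hurley-Rickard form, which may differ in normalisation from the paper's exact definition, and the two example matrices are synthetic.

    """Gini index as a sparsity measure for a time-frequency representation."""
    import numpy as np

    def gini_index(tfr):
        c = np.sort(np.abs(np.ravel(tfr)))        # ascending
        n = c.size
        if c.sum() == 0:
            return 0.0
        k = np.arange(1, n + 1)
        return 1.0 - 2.0 * np.sum((c / c.sum()) * (n - k + 0.5) / n)

    # a concentrated "TFR" scores higher than a diffuse one
    sparse_tfr = np.zeros((64, 64))
    sparse_tfr[10, 20] = 1.0
    diffuse_tfr = np.abs(np.random.randn(64, 64))
    print(gini_index(sparse_tfr), ">", gini_index(diffuse_tfr))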

  1. A versatile electrophoresis-based self-test platform.

    Science.gov (United States)

    Staal, Steven; Ungerer, Mathijn; Floris, Arjan; Ten Brinke, Hans-Willem; Helmhout, Roy; Tellegen, Marian; Janssen, Kjeld; Karstens, Erik; van Arragon, Charlotte; Lenk, Stefan; Staijen, Erik; Bartholomew, Jody; Krabbe, Hans; Movig, Kris; Dubský, Pavel; van den Berg, Albert; Eijkel, Jan

    2015-03-01

    This paper reports on recent research creating a family of electrophoresis-based point of care devices for the determination of a wide range of ionic analytes in various sample matrices. These devices are based on a first version for the point-of-care measurement of Li(+), reported in 2010 by Floris et al. (Lab Chip 2010, 10, 1799-1806). With respect to this device, significant improvements in accuracy, precision, detection limit, and reliability have been obtained especially by the use of multiple injections of one sample on a single chip and integrated data analysis. Internal and external validation by clinical laboratories for the determination of analytes in real patients by a self-test is reported. For Li(+) in blood better precision than the standard clinical determination for Li(+) was achieved. For Na(+) in human urine the method was found to be within the clinical acceptability limits. In a veterinary application, Ca(2+) and Mg(2+) were determined in bovine blood by means of the same chip, but using a different platform. Finally, promising preliminary results are reported with the Medimate platform for the determination of creatinine in whole blood and quantification of both cations and anions through replicate measurements on the same sample with the same chip. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Testing for Turkeys Faith-Based Community HIV Testing Initiative: An Update.

    Science.gov (United States)

    DeGrezia, Mary; Baker, Dorcas; McDowell, Ingrid

    2018-06-04

    Testing for Turkeys (TFT) HIV/hepatitis C virus (HCV) and sexually transmitted infection (STI) testing initiative is a joint effort between Older Women Embracing Life (OWEL), Inc., a nonprofit faith-based community HIV support and advocacy organization; the Johns Hopkins University Regional Partner MidAtlantic AIDS Education and Training Center (MAAETC); and the University of Maryland, Baltimore JACQUES Initiative (JI), and is now in its 11th year of providing HIV outreach, testing, and linkage to care. Since 2008, the annual TFT daylong community HIV testing and linkage to care initiative has been held 2 weeks before Thanksgiving at a faith-based center in Baltimore, Maryland, in a zip code where one in 26 adults and adolescents ages 13 years and older are living with HIV (Maryland Department of Health, Center for HIV Surveillance, Epidemiology, and Evaluation, 2017). TFT includes a health fair with vendors that supply an abundance of education information (handouts, videos, one-on-one counseling) and safer sex necessities, including male and female condoms, dental dams, and lube. Nutritious boxed lunches and beverages are provided to all attendees and volunteers. Everyone tested for HIV who stays to obtain their results is given a free frozen turkey as they exit. The Baltimore City Health Department is on hand with a confidential no-test list (persons in the state already known to have HIV) to diminish retesting of individuals previously diagnosed with HIV. However, linkage to care is available to everyone: newly diagnosed individuals and those previously diagnosed and currently out of care. Copyright © 2018 Association of Nurses in AIDS Care. Published by Elsevier Inc. All rights reserved.

  3. A Window Into Clinical Next-Generation Sequencing-Based Oncology Testing Practices.

    Science.gov (United States)

    Nagarajan, Rakesh; Bartley, Angela N; Bridge, Julia A; Jennings, Lawrence J; Kamel-Reid, Suzanne; Kim, Annette; Lazar, Alexander J; Lindeman, Neal I; Moncur, Joel; Rai, Alex J; Routbort, Mark J; Vasalos, Patricia; Merker, Jason D

    2017-12-01

    - Detection of acquired variants in cancer is a paradigm of precision medicine, yet little has been reported about clinical laboratory practices across a broad range of laboratories. - To use College of American Pathologists proficiency testing survey results to report on the results from surveys on next-generation sequencing-based oncology testing practices. - College of American Pathologists proficiency testing survey results from more than 250 laboratories currently performing molecular oncology testing were used to determine laboratory trends in next-generation sequencing-based oncology testing. - These presented data provide key information about the number of laboratories that currently offer or are planning to offer next-generation sequencing-based oncology testing. Furthermore, we present data from 60 laboratories performing next-generation sequencing-based oncology testing regarding specimen requirements and assay characteristics. The findings indicate that most laboratories are performing tumor-only targeted sequencing to detect single-nucleotide variants and small insertions and deletions, using desktop sequencers and predesigned commercial kits. Despite these trends, a diversity of approaches to testing exists. - This information should be useful to further inform a variety of topics, including national discussions involving clinical laboratory quality systems, regulation and oversight of next-generation sequencing-based oncology testing, and precision oncology efforts in a data-driven manner.

  4. Explanation of Two Anomalous Results in Statistical Mediation Analysis

    Science.gov (United States)

    Fritz, Matthew S.; Taylor, Aaron B.; MacKinnon, David P.

    2012-01-01

    Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special…
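
    For concreteness, here is a generic sketch of the percentile and bias-corrected bootstrap intervals for an indirect effect a*b, the latter being the variant flagged above for elevated Type I error. The simulated data, coefficients, and sample sizes are arbitrary and this is not the simulation design of the study.

    """Percentile and bias-corrected bootstrap intervals for the mediated
    (indirect) effect a*b in a simple mediation model x -> m -> y."""
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 200
    x = rng.normal(size=n)
    m = 0.4 * x + rng.normal(size=n)                 # mediator model: a = 0.4
    y = 0.3 * m + 0.0 * x + rng.normal(size=n)       # outcome model: b = 0.3

    def indirect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                   # slope of m on x
        b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]), y, rcond=None)[0][0]
        return a * b                                 # indirect effect estimate

    est = indirect(x, m, y)
    boot = []
    for _ in range(2000):                            # nonparametric bootstrap resamples
        idx = rng.integers(0, n, n)
        boot.append(indirect(x[idx], m[idx], y[idx]))
    boot = np.array(boot)

    alpha = 0.05
    percentile_ci = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])

    z0 = norm.ppf(np.mean(boot < est))               # bias-correction constant
    lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    bc_ci = np.percentile(boot, [100 * lo, 100 * hi])
    print("percentile CI:", percentile_ci, " bias-corrected CI:", bc_ci)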

  5. On testing the significance of atmospheric response to smoke from the Kuwaiti oil fires using the Los Alamos general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Kao, C.J.; Glatzmaier, G.A.; Malone, R.C. [Los Alamos National Laboratory, Los Alamos, NM (United States)

    1994-07-01

    The response of the Los Alamos atmospheric general circulation model to the smoke from the Kuwaiti oil fires set in 1991 is examined. The model has an interactive soot transport module that uses a Lagrangian tracer particle scheme. The statistical significance of the results is evaluated using a methodology based on the classic Student's t test. Among various estimated smoke emission rates and associated visible absorption coefficients, the worst- and best-case scenarios are selected. In each of the scenarios, an ensemble of ten 30-day June simulations is conducted with the smoke and compared to the same ten June simulations without the smoke. The results of the worst-case scenario show that a statistically significant wave train pattern propagates eastward-poleward downstream from the source. The signals compare favorably with the observed climate anomalies in summer 1991, albeit some possible El Nino-Southern Oscillation effects were involved in the actual climate. The results of the best-case (i.e., least-impact) scenario show that the significance is rather small but that its general pattern is quite similar to that in the worst-case scenario.
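
    The grid-point significance assessment can be illustrated schematically with a two-sample Student's t test applied at every grid point of the two ensembles. The array shapes and the synthetic "response" below are placeholders, not model output.

    """Grid-point significance of an ensemble difference via a two-sample
    Student's t test (schematic sketch with synthetic data)."""
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    control = rng.normal(size=(10, 32, 64))          # 10 June runs without smoke
    perturbed = rng.normal(size=(10, 32, 64))        # 10 runs with smoke
    perturbed[:, :, :32] += 1.0                      # impose a response over part of the grid

    t_stat, p_val = ttest_ind(perturbed, control, axis=0)   # test at every grid point
    significant = p_val < 0.05
    print("fraction of grid points significant at 5%:", significant.mean())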

  6. On testing the significance of atmospheric response to smoke from the Kuwaiti oil fires using the Los Alamos general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Chih-Yue Jim Kao; Glatzmaier, G.A.; Malone, R.C. [Los Alamos National Lab., NM (United States)

    1994-07-20

    The response of the Los Alamos atmospheric general circulation model to the smoke from the Kuwaiti oil fires set in 1991 is examined. The model has an interactive soot transport module that uses a Lagrangian tracer particle scheme. The statistical significance of the results is evaluated using a methodology based on the classic Student's t test. Among various estimated smoke emission rates and associated visible absorption coefficients, the worst- and best-case scenarios are selected. In each of the scenarios, an ensemble of ten 30-day June simulations is conducted with the smoke, and compared to the same ten June simulations without the smoke. The results of the worst-case scenario show that a statistically significant wave train pattern propagates eastward-poleward downstream from the source. The signals compare favorably with the observed climate anomalies in summer 1991, albeit some possible El Nino-Southern Oscillation effects were involved in the actual climate. The results of the best-case (i.e., least-impact) scenario show that the significance is rather small but that its general pattern is quite similar to that in the worst-case scenario. 24 refs., 5 figs.

  7. TR-EDB: Test Reactor Embrittlement Data Base, Version 1

    Energy Technology Data Exchange (ETDEWEB)

    Stallmann, F.W.; Wang, J.A.; Kam, F.B.K. [Oak Ridge National Lab., TN (United States)

    1994-01-01

    The Test Reactor Embrittlement Data Base (TR-EDB) is a collection of results from irradiation in materials test reactors. It complements the Power Reactor Embrittlement Data Base (PR-EDB), whose data are restricted to the results from the analysis of surveillance capsules in commercial power reactors. The rationale behind their restriction was the assumption that the results of test reactor experiments may not be applicable to power reactors and could, therefore, be challenged if such data were included. For this very reason the embrittlement predictions in the Reg. Guide 1.99, Rev. 2, were based exclusively on power reactor data. However, test reactor experiments are able to cover a much wider range of materials and irradiation conditions that are needed to explore more fully a variety of models for the prediction of irradiation embrittlement. These data are also needed for the study of effects of annealing for life extension of reactor pressure vessels that are difficult to obtain from surveillance capsule results.

  8. TR-EDB: Test Reactor Embrittlement Data Base, Version 1

    International Nuclear Information System (INIS)

    Stallmann, F.W.; Wang, J.A.; Kam, F.B.K.

    1994-01-01

    The Test Reactor Embrittlement Data Base (TR-EDB) is a collection of results from irradiation in materials test reactors. It complements the Power Reactor Embrittlement Data Base (PR-EDB), whose data are restricted to the results from the analysis of surveillance capsules in commercial power reactors. The rationale behind their restriction was the assumption that the results of test reactor experiments may not be applicable to power reactors and could, therefore, be challenged if such data were included. For this very reason the embrittlement predictions in the Reg. Guide 1.99, Rev. 2, were based exclusively on power reactor data. However, test reactor experiments are able to cover a much wider range of materials and irradiation conditions that are needed to explore more fully a variety of models for the prediction of irradiation embrittlement. These data are also needed for the study of effects of annealing for life extension of reactor pressure vessels that are difficult to obtain from surveillance capsule results

  9. The Test Reactor Embrittlement Data Base (TR-EDB)

    International Nuclear Information System (INIS)

    Stallmann, F.W.; Kam, F.B.K.; Wang, J.A.

    1993-01-01

    The Test Reactor Embrittlement Data Base (TR-EDB) is part of an ongoing program to collect test data from materials irradiations to aid in the research and evaluation of embrittlement prediction models that are used to assure the safety of pressure vessels in power reactors. This program is being funded by the US Nuclear Regulatory Commission (NRC) and has resulted in the publication of the Power Reactor Embrittlement Data Base (PR-EDB) whose second version is currently being released. The TR-EDB is a compatible collection of data from experiments in materials test reactors. These data contain information that is not obtainable from surveillance results, especially, about the effects of annealing after irradiation. Other information that is only available from test reactors is the influence of fluence rates and irradiation temperatures on radiation embrittlement. The first version of the TR-EDB will be released in fall of 1993 and contains published results from laboratories in many countries. Data collection will continue and further updates will be published

  10. Detection of HCV core antigen and its diagnostic significance

    Directory of Open Access Journals (Sweden)

    YANG Jie

    2013-02-01

    Full Text Available Objective To compare the abilities of the hepatitis C virus (HCV) core antigen (cAg) test and the HCV RNA assay for confirming anti-HCV presence, in order to determine the clinical utility of HCV-cAg as an alternative or confirmatory diagnostic tool. Methods Serum samples collected from 158 patients diagnosed with HCV infection were subjected to the enzyme-linked immunosorbent assay-based HCV-cAg test. The optical density (OD) values were used to calculate the ratio of specimen absorbance to the cutoff value (S/CO). Simultaneously, the serum samples were subjected to PCR-based nucleic acid amplification quantitative fluorescence detection of HCV RNA. Results None of the serum samples had an S/CO value <1 for the HCV-cAg test (100% negative), but all of the samples had an S/CO value >5 (100% positive). The HCV-cAg test sensitivity was 87.05%, specificity was 76.67%, positive predictive value was 96.53%, and negative predictive value was 44.23%. As the S/CO value gradually increased, the significantly higher positive coincidence rate of the HCV RNA test decreased. The HCV RNA negative coincidence rate was significantly higher than that of the HCV-cAg test. HCV-cAg S/CO values between 1 and 2 corresponded to HCV RNA values between 1.0×10³ copies/ml and 1.0×10⁴ copies/ml. The highest S/CO value obtained was 1.992. Conclusion The HCV-cAg test is comparable to the HCV RNA assay for diagnosing HCV infection.

  11. Forum: Is Test-Based Accountability Dead?

    Science.gov (United States)

    Polikoff, Morgan S.; Greene, Jay P.; Huffman, Kevin

    2017-01-01

    Since the 2001 passage of the No Child Left Behind Act (NCLB), test-based accountability has been an organizing principle--perhaps "the" organizing principle--of efforts to improve American schools. But lately, accountability has been under fire from many critics, including Common Core opponents and those calling for more multifaceted…

  12. Comparing Postsecondary Marketing Student Performance on Computer-Based and Handwritten Essay Tests

    Science.gov (United States)

    Truell, Allen D.; Alexander, Melody W.; Davis, Rodney E.

    2004-01-01

    The purpose of this study was to determine if there were differences in postsecondary marketing student performance on essay tests based on test format (i.e., computer-based or handwritten). Specifically, the variables of performance, test completion time, and gender were explored for differences based on essay test format. Results of the study…

  13. Fault tolerant system based on IDDQ testing

    Science.gov (United States)

    Guibane, Badi; Hamdi, Belgacem; Mtibaa, Abdellatif; Bensalem, Brahim

    2018-06-01

    Offline testing is essential to ensure good manufacturing quality. However, for permanent or transient faults that occur during the use of an integrated circuit in an application, an online integrated test is needed as well. This procedure should ensure the detection and possibly the correction or the masking of these faults. Such self-correction is sometimes necessary, especially in critical applications that require high security, such as automotive, space or biomedical applications. We propose a fault-tolerant design for analogue and mixed-signal complementary metal-oxide-semiconductor (CMOS) circuits based on quiescent supply current (IDDQ) testing. A defect can cause an increase in current consumption. The IDDQ testing technique is based on the measurement of the power supply current to distinguish between functional and failed circuits. The technique has been an effective testing method for detecting physical defects such as gate-oxide shorts, floating gates (open) and bridging defects in CMOS integrated circuits. An architecture called BICS (Built-In Current Sensor) is used for monitoring the supply current (IDDQ) of the connected integrated circuit. If the measured current is not within the normal range, a defect is signalled and the system switches connection from the defective to a functional integrated circuit. The fault-tolerant technique consists essentially of a double-mirror built-in current sensor, which detects abnormal current consumption, and blocks that switch in redundant circuits if a defect occurs. SPICE simulations are performed to validate the proposed design.

  14. Computer-Based Readability Testing of Information Booklets for German Cancer Patients.

    Science.gov (United States)

    Keinki, Christian; Zowalla, Richard; Pobiruchin, Monika; Huebner, Jutta; Wiesner, Martin

    2018-04-12

    Understandable health information is essential for treatment adherence and improved health outcomes. For readability testing, several instruments analyze the complexity of sentence structures, e.g., Flesch-Reading Ease (FRE) or Vienna-Formula (WSTF). Moreover, the vocabulary is of high relevance for readers. The aim of this study is to investigate the agreement of sentence structure and vocabulary-based (SVM) instruments. A total of 52 freely available German patient information booklets on cancer were collected from the Internet. The mean understandability level L was computed for 51 booklets. The resulting values of FRE, WSTF, and SVM were assessed pairwise for agreement with Bland-Altman plots and two-sided, paired t tests. For the pairwise comparison, the mean L values are L(FRE) = 6.81, L(WSTF) = 7.39, and L(SVM) = 5.09. The sentence structure-based metrics gave significantly different scores (P < 0.001) for all assessed booklets, confirmed by the Bland-Altman analysis. The study findings suggest that vocabulary-based instruments cannot be interchanged with FRE/WSTF. However, both analytical aspects should be considered and checked by authors to linguistically refine texts with respect to the individual target group. Authors of health information can be supported by automated readability analysis. Health professionals can benefit by direct booklet comparisons allowing for time-effective selection of suitable booklets for patients.

  15. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors

    Directory of Open Access Journals (Sweden)

    Spiros Pagiatakis

    2009-10-01

    Full Text Available In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using an Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at −40 °C, −20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain an optimal navigation solution for MEMS-based INS/GPS integration.
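
    For a first-order Gauss-Markov process the correlation time can be read off the lag-1 autocorrelation of static data (tau = -dt / ln(phi)). The sketch below uses a synthetic AR(1) record and is a simplified, first-order stand-in for the higher-order AR-based GM modelling described above.

    """Estimate the correlation time of a first-order Gauss-Markov (AR(1))
    sensor error process from static data via the lag-1 autocorrelation."""
    import numpy as np

    dt = 0.01                      # 100 Hz sampling
    tau_true = 5.0                 # seconds
    phi_true = np.exp(-dt / tau_true)
    rng = np.random.default_rng(0)

    # simulate a first-order GM (AR(1)) bias record
    n = 200_000
    x = np.empty(n)
    x[0] = 0.0
    noise = rng.normal(scale=0.001, size=n)
    for k in range(1, n):
        x[k] = phi_true * x[k - 1] + noise[k]

    x = x - x.mean()
    phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x, x)     # lag-1 autocorrelation
    tau_hat = -dt / np.log(phi_hat)
    print(f"estimated correlation time: {tau_hat:.2f} s (true {tau_true} s)")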

  16. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    Science.gov (United States)

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain optimal navigation solution for MEMS-based INS/GPS integration.

  17. HEV Test Bench Based on CAN Bus Sensor Communication

    Directory of Open Access Journals (Sweden)

    Shupeng ZHAO

    2014-02-01

    The HEV test bench based on the Controller Area Network (CAN) bus was studied and developed. The control system of the HEV power test bench uses CAN bus technology, and applying CAN bus technology to control system development has opened up a new research direction for domestic automobile experimental platforms. The development work on the HEV power control system was completed, including the power master controller, the electric throttle controller, the driving simulation platform, formulation of the CAN 2.0B communication protocol procedures, a CAN communication monitoring system, and research on automatic code generation from MATLAB simulation models. The maximum absorption power of the test bench is 90 kW, its top speed is 6000 r/min, the CAN communication baud rate is 10~500 kbit/s, and the precision of the conventional electrical measurement parameters satisfies the requirements of HEV development. Regenerative braking experiments on the test bench show that its results are close to those obtained in outdoor road tests, and fuel consumption tests show that HEV fuel consumption and the charge-discharge characteristics are linearly related. The established test platform provides a physical simulation and test environment for evaluating the development of hybrid electric vehicles and their power systems.

  18. Implementing reduced-risk integrated pest management in fresh-market cabbage: influence of sampling parameters, and validation of binomial sequential sampling plans for the cabbage looper (Lepidoptera: Noctuidae).

    Science.gov (United States)

    Burkness, Eric C; Hutchison, W D

    2009-10-01

    Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
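
    The binomial sequential sampling logic can be sketched with Wald's sequential probability ratio test using the boundary and error-rate values quoted above (lower boundary 0.05, upper boundary 0.15, alpha = beta = 0.1). The field observations below are simulated, not data from the study.

```python
# Minimal sketch: Wald's sequential probability ratio test (SPRT) for
# presence/absence sampling of infested plants, with p0 = 0.05, p1 = 0.15
# and alpha = beta = 0.1. The plant observations are simulated.
import numpy as np

p0, p1 = 0.05, 0.15                 # lower and upper decision boundaries
alpha = beta = 0.10                 # sequential error rates
a = np.log((1 - beta) / alpha)      # upper log-threshold (decide "treat")
b = np.log(beta / (1 - alpha))      # lower log-threshold (decide "do not treat")
g1 = np.log(p1 / p0)
g2 = np.log((1 - p0) / (1 - p1))

def sprt(plants_infested):
    """Walk through a sequence of 0/1 plant observations until a decision."""
    d = 0
    for n, infested in enumerate(plants_infested, start=1):
        d += infested
        upper = (a + n * g2) / (g1 + g2)   # treat once the count reaches this line
        lower = (b + n * g2) / (g1 + g2)   # stop sampling once the count falls below this line
        if d >= upper:
            return "treat", n
        if d <= lower:
            return "do not treat", n
    return "no decision", len(plants_infested)

rng = np.random.default_rng(2)
sample = rng.random(400) < 0.12            # simulated field at 12% infestation
print(sprt(sample))
```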

  19. The PIT-trap-A "model-free" bootstrap procedure for inference about regression models with discrete, multivariate responses.

    Science.gov (United States)

    Warton, David I; Thibaut, Loïc; Wang, Yi Alice

    2017-01-01

    Bootstrap methods are widely used in statistics, and bootstrapping of residuals can be especially useful in the regression context. However, difficulties are encountered extending residual resampling to regression settings where residuals are not identically distributed (thus not amenable to bootstrapping); common examples include logistic or Poisson regression and generalizations to handle clustered or multivariate data, such as generalised estimating equations. We propose a bootstrap method based on probability integral transform (PIT-) residuals, which we call the PIT-trap, which assumes data come from some marginal distribution F of known parametric form. This method can be understood as a type of "model-free bootstrap", adapted to the problem of discrete and highly multivariate data. PIT-residuals have the key property that they are (asymptotically) pivotal. The PIT-trap thus inherits the key property, not afforded by any other residual resampling approach, that the marginal distribution of data can be preserved under PIT-trapping. This in turn enables the derivation of some standard bootstrap properties, including second-order correctness of pivotal PIT-trap test statistics. In multivariate data, bootstrapping rows of PIT-residuals affords the property that it preserves correlation in data without the need for it to be modelled, a key point of difference as compared to a parametric bootstrap. The proposed method is illustrated on an example involving multivariate abundance data in ecology, and demonstrated via simulation to have improved properties as compared to competing resampling methods.
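
    The core PIT-residual idea can be sketched for a single Poisson regression: transform each count to a randomized probability-integral-transform residual, resample rows of residuals, and push them back through the fitted inverse CDF to obtain a bootstrap data set. This is a simplified illustration of the concept, not the authors' implementation, and the data are simulated.

```python
# Minimal sketch of the PIT-residual idea for a Poisson regression.
import numpy as np
import statsmodels.api as sm
from scipy.stats import poisson

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
X = sm.add_constant(x)
y = rng.poisson(np.exp(0.5 + 0.8 * x))

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu = fit.fittedvalues

# Randomized PIT residuals: uniform between F(y-1) and F(y) under the fit.
u = poisson.cdf(y - 1, mu) + rng.uniform(size=n) * poisson.pmf(y, mu)
u = np.clip(u, 1e-12, 1 - 1e-12)          # keep strictly inside (0, 1)

# One PIT-trap resample: bootstrap rows of residuals, invert through the
# fitted marginal CDFs (means stay with their rows), then refit the model.
idx = rng.integers(0, n, size=n)
y_star = poisson.ppf(u[idx], mu).astype(int)
fit_star = sm.GLM(y_star, X, family=sm.families.Poisson()).fit()
print("original slope:", round(fit.params[1], 3),
      "bootstrap slope:", round(fit_star.params[1], 3))
```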

  20. Specification-based testing: What is it? How can it be automated?

    International Nuclear Information System (INIS)

    Poston, R.M.

    1994-01-01

    Software testing should begin with a written requirements specification. A specification states how software is expected to behave and describes operational characteristics (performance, reliability, etc.) for the software. A specification serves as a reference or base to test against, giving rise to the name, specification-based testing. Should analysts or designers fail to write a specification, then testers are obliged to write their own specification to test against. Specifications written by testers may be called test plans or test objectives

  1. Nanomaterial-Based Electrochemical Immunosensors for Clinically Significant Biomarkers

    Directory of Open Access Journals (Sweden)

    Niina J. Ronkainen

    2014-06-01

    Nanotechnology has played a crucial role in the development of biosensors over the past decade. The development, testing, optimization, and validation of new biosensors has become a highly interdisciplinary effort involving experts in chemistry, biology, physics, engineering, and medicine. The sensitivity, the specificity and the reproducibility of biosensors have improved tremendously as a result of incorporating nanomaterials in their design. In general, nanomaterials-based electrochemical immunosensors amplify the sensitivity by facilitating greater loading of the larger sensing surface with biorecognition molecules as well as improving the electrochemical properties of the transducer. The most common types of nanomaterials and their properties will be described. In addition, the utilization of nanomaterials in immunosensors for biomarker detection will be discussed since these biosensors have enormous potential for a myriad of clinical uses. Electrochemical immunosensors provide a specific and simple analytical alternative as evidenced by their brief analysis times, inexpensive instrumentation, lower assay cost as well as good portability and amenability to miniaturization. The role nanomaterials play in biosensors, their ability to improve detection capabilities in low concentration analytes yielding clinically useful data and their impact on other biosensor performance properties will be discussed. Finally, the most common types of electroanalytical detection methods will be briefly touched upon.

  2. Syndromic Panel-Based Testing in Clinical Microbiology.

    Science.gov (United States)

    Ramanan, Poornima; Bryson, Alexandra L; Binnicker, Matthew J; Pritt, Bobbi S; Patel, Robin

    2018-01-01

    The recent development of commercial panel-based molecular diagnostics for the rapid detection of pathogens in positive blood culture bottles, respiratory specimens, stool, and cerebrospinal fluid has resulted in a paradigm shift in clinical microbiology and clinical practice. This review focuses on U.S. Food and Drug Administration (FDA)-approved/cleared multiplex molecular panels with more than five targets designed to assist in the diagnosis of bloodstream, respiratory tract, gastrointestinal, or central nervous system infections. While these panel-based assays have the clear advantages of a rapid turnaround time and the detection of a large number of microorganisms and promise to improve health care, they present certain challenges, including cost and the definition of ideal test utilization strategies (i.e., optimal ordering) and test interpretation. Copyright © 2017 American Society for Microbiology.

  3. Two non-parametric methods for derivation of constraints from radiotherapy dose–histogram data

    International Nuclear Information System (INIS)

    Ebert, M A; Kennedy, A; Joseph, D J; Gulliford, S L; Buettner, F; Foo, K; Haworth, A; Denham, J W

    2014-01-01

    Dose constraints based on histograms provide a convenient and widely-used method for informing and guiding radiotherapy treatment planning. Methods of derivation of such constraints are often poorly described. Two non-parametric methods for derivation of constraints are described and investigated in the context of determination of dose-specific cut-points—values of the free parameter (e.g., percentage volume of the irradiated organ) which best reflect resulting changes in complication incidence. A method based on receiver operating characteristic (ROC) analysis and one based on a maximally-selected standardized rank sum are described and compared using rectal toxicity data from a prostate radiotherapy trial. Multiple test corrections are applied using a free step-down resampling algorithm, which accounts for the large number of tests undertaken to search for optimal cut-points and the inherent correlation between dose–histogram points. Both methods provide consistent significant cut-point values, with the rank sum method displaying some sensitivity to the underlying data. The ROC method is simple to implement and can utilize a complication atlas, though an advantage of the rank sum method is the ability to incorporate all complication grades without the need for grade dichotomization. (note)
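
    The cut-point search can be sketched as follows: scan candidate cut-points of a dose-volume metric, record an association statistic at each, and adjust for the many correlated tests with a max-statistic permutation step, which is a simplified stand-in for the free step-down resampling algorithm used in the paper. The dose metric and complication outcomes are simulated.

```python
# Minimal sketch: scan cut-points of a dose-volume metric for association with
# complication status and adjust for the correlated tests with a max-statistic
# permutation step. All data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 300
v50 = rng.uniform(0, 60, n)                       # % volume receiving 50 Gy (made-up metric)
complication = rng.random(n) < 1 / (1 + np.exp(-(v50 - 35) / 6))

cutpoints = np.arange(10, 55, 1.0)

def chi2_stats(outcome):
    """Chi-square statistic for each candidate cut-point of v50."""
    out = []
    for c in cutpoints:
        table = np.array([[np.sum((v50 > c) & outcome), np.sum((v50 > c) & ~outcome)],
                          [np.sum((v50 <= c) & outcome), np.sum((v50 <= c) & ~outcome)]])
        out.append(stats.chi2_contingency(table, correction=False)[0])
    return np.array(out)

observed = chi2_stats(complication)
best = cutpoints[np.argmax(observed)]

# Permutation null distribution of the maximum statistic over all cut-points.
max_null = np.array([chi2_stats(rng.permutation(complication)).max()
                     for _ in range(500)])
p_adj = np.mean(max_null >= observed.max())
print(f"best cut-point {best:.0f}%, max-statistic adjusted p = {p_adj:.3f}")
```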

  4. Psychomotor testing predicts rate of skill acquisition for proficiency-based laparoscopic skills training.

    Science.gov (United States)

    Stefanidis, Dimitrios; Korndorffer, James R; Black, F William; Dunne, J Bruce; Sierra, Rafael; Touchard, Cheri L; Rice, David A; Markert, Ronald J; Kastl, Peter R; Scott, Daniel J

    2006-08-01

    Proficiency-based laparoscopic simulator training provides improvement in performance and can be effectively implemented as a routine part of resident education, but may require significant resources. Although psychomotor testing may be of limited value in the prediction of baseline laparoscopic performance, its importance may lie in the prediction of the rapidity of skill acquisition. These tests may be useful in optimizing curricular design by allowing the tailoring of training to individual needs.

  5. Measuring individual significant change on the Beck Depression Inventory-II through IRT-based statistics.

    NARCIS (Netherlands)

    Brouwer, D.; Meijer, R.R.; Zevalkink, D.J.

    2013-01-01

    Several researchers have emphasized that item response theory (IRT)-based methods should be preferred over classical approaches in measuring change for individual patients. In the present study we discuss and evaluate the use of IRT-based statistics to measure statistically significant individual

  6. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    Science.gov (United States)

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd.. All rights reserved.
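
    The resampling design can be sketched by drawing bootstrap samples of several sizes from one fixed data set, re-estimating the effect (here, variance explained by lesion load in a simulated deficit score) and its p value each time, and summarizing how the estimates spread. The lesion-load and deficit values are simulated, not the study's patient data.

```python
# Minimal sketch of the resampling design: bootstrap samples of different sizes
# drawn from one data set, with the effect size (R^2) and p value re-estimated
# each time. All values are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
N = 360
lesion_load = rng.uniform(0, 1, N)
deficit = 0.3 * lesion_load + rng.normal(0, 1, N)   # small true effect

for n in (30, 60, 90, 180, 360):
    r2, pvals = [], []
    for _ in range(1000):
        idx = rng.integers(0, N, size=n)            # bootstrap resample of size n
        r, p = stats.pearsonr(lesion_load[idx], deficit[idx])
        r2.append(r ** 2)
        pvals.append(p)
    r2 = np.array(r2)
    print(f"n={n:3d}  R^2 5th-95th pct: {np.percentile(r2, 5):.3f}-"
          f"{np.percentile(r2, 95):.3f}  prop. p<0.05: {np.mean(np.array(pvals) < 0.05):.2f}")
```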

  7. Synthesis and test of sorbents based on calcium aluminates for SE-SR

    International Nuclear Information System (INIS)

    Barelli, L.; Bidini, G.; Di Michele, A.; Gallorini, F.; Petrillo, C.; Sacchetti, F.

    2014-01-01

    Highlights: • Synthesis strategy of CaO incorporation into calcium aluminates was approached. • Three innovative sorbents (M1, M2, M3) were synthesized and characterized. • Sorption capacity of developed sorbents was evaluated in multi-cycle processes. • M3 sorbent showed best performance, much higher than conventional CaO ones. • M3 sorbent functionality in SE-SR process was verified. - Abstract: Greenhouse gas emission limits for power generation plants will be continuously tightened to achieve European targets in terms of CO2 emissions. In particular, the switch to sustainable power generation using fossil fuels will be strongly encouraged in the future. In this context, sorption-enhanced steam reforming (SE-SR) is a promising process because it can be implemented as a CCS pre-combustion methodology. The purpose of this study is to develop and test innovative materials in order to overcome the main limitations of the standard CaO sorbent usually used in the SE-SR process. The investigated innovative sorbents are based on the incorporation of CaO particles into inert materials, which significantly reduces performance degradation. In particular, sorbent materials based on calcium aluminates were considered, investigating different techniques of synthesis. All synthesized materials were packed, together with the catalyst, in a fixed bed reactor and tested in sorption/regeneration cycles. Significant improvements were obtained with respect to standard CaO regarding the sorption capacity stability exhibited by the sorbent.

  8. Development and Testing of a Smartphone-Based Cognitive/Neuropsychological Evaluation System for Substance Abusers.

    Science.gov (United States)

    Pal, Reshmi; Mendelson, John; Clavier, Odile; Baggott, Mathew J; Coyle, Jeremy; Galloway, Gantt P

    2016-01-01

    In methamphetamine (MA) users, drug-induced neurocognitive deficits may help to determine treatment, monitor adherence, and predict relapse. To measure these relationships, we developed an iPhone app (Neurophone) to compare lab and field performance of N-Back, Stop Signal, and Stroop tasks that are sensitive to MA-induced deficits. Twenty healthy controls and 16 MA-dependent participants performed the tasks in-lab using a validated computerized platform and the Neurophone before taking the latter home and performing the tasks twice daily for two weeks. N-Back task: there were no clear differences in performance between computer-based vs. phone-based in-lab tests and phone-based in-lab vs. phone-based in-field tests. Stop-Signal task: differences in parameters prevented comparison of computer-based and phone-based versions. There was a significant difference in phone performance between field and lab. Stroop task: response time measured by the speech recognition engine lacked the precision to yield quantifiable results. There was no learning effect over time. On average, each participant completed 84.3% of the in-field N-Back tasks and 90.4% of the in-field Stop Signal tasks (MA-dependent participants: 74.8% and 84.3%; healthy controls: 91.4% and 95.0%, respectively). Participants rated Neurophone easy to use. Cognitive tasks performed in-field using Neurophone have the potential to yield results comparable to those obtained in a laboratory setting. Tasks need to be modified for use, as the app's voice recognition system is not yet adequate for timed tests.

  9. OCL-BASED TEST CASE GENERATION USING CATEGORY PARTITIONING METHOD

    Directory of Open Access Journals (Sweden)

    A. Jalila

    2015-10-01

    The adoption of fault detection techniques during the initial stages of the software development life cycle helps to improve the reliability of a software product. Specification-based testing is one of the major criteria to detect faults in the requirement specification or design of a software system. However, due to the non-availability of implementation details, test case generation from formal specifications becomes a challenging task. As a novel approach, the proposed work presents a methodology to generate test cases from OCL (Object Constraint Language) formal specifications using the Category Partitioning Method (CPM). The experiment results indicate that the proposed methodology is more effective in revealing specification-based faults. Furthermore, it has been observed that OCL and CPM form an excellent combination for performing functional testing at the earliest to improve software quality with reduced cost.

  10. Development and testing of an assessment instrument for the formative peer review of significant event analyses.

    Science.gov (United States)

    McKay, J; Murphy, D J; Bowie, P; Schmuck, M-L; Lough, M; Eva, K W

    2007-04-01

    To establish the content validity and specific aspects of reliability for an assessment instrument designed to provide formative feedback to general practitioners (GPs) on the quality of their written analysis of a significant event. Content validity was quantified by application of a content validity index. Reliability testing involved a nested design, with 5 cells, each containing 4 assessors, rating 20 unique significant event analysis (SEA) reports (10 each from experienced GPs and GPs in training) using the assessment instrument. The variance attributable to each identified variable in the study was established by analysis of variance. Generalisability theory was then used to investigate the instrument's ability to discriminate among SEA reports. Content validity was demonstrated with at least 8 of 10 experts endorsing all 10 items of the assessment instrument. The overall G coefficient for the instrument was moderate to good (G>0.70), indicating that the instrument can provide consistent information on the standard achieved by the SEA report. There was moderate inter-rater reliability (G>0.60) when four raters were used to judge the quality of the SEA. This study provides the first steps towards validating an instrument that can provide educational feedback to GPs on their analysis of significant events. The key area identified to improve instrument reliability is variation among peer assessors in their assessment of SEA reports. Further validity and reliability testing should be carried out to provide GPs, their appraisers and contractual bodies with a validated feedback instrument on this aspect of the general practice quality agenda.

  11. Spatial prediction models for landslide hazards: review, comparison and evaluation

    Directory of Open Access Journals (Sweden)

    A. Brenning

    2005-01-01

    The predictive power of logistic regression, support vector machines and bootstrap-aggregated classification trees (bagging, double-bagging) is compared using misclassification error rates on independent test data sets. Based on a resampling approach that takes into account spatial autocorrelation, error rates for predicting 'present' and 'future' landslides are estimated within and outside the training area. In a case study from the Ecuadorian Andes, logistic regression with stepwise backward variable selection yields lowest error rates and demonstrates the best generalization capabilities. The evaluation outside the training area reveals that tree-based methods tend to overfit the data.
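
    A minimal sketch of the model comparison is shown below: misclassification error of logistic regression and bagged classification trees estimated by repeated resampling into training and test sets. The terrain data are simulated, and the spatial-block resampling the paper uses to respect autocorrelation is not reproduced.

```python
# Minimal sketch: compare error rates of logistic regression and bagged trees
# with repeated random train/test resampling. Data are simulated stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

X, y = make_classification(n_samples=1500, n_features=8, n_informative=4,
                           random_state=0)          # stand-in terrain attributes
resampler = ShuffleSplit(n_splits=50, test_size=0.3, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("bagged trees", BaggingClassifier(n_estimators=50, random_state=0))]:
    acc = cross_val_score(model, X, y, cv=resampler)
    print(f"{name}: mean error rate = {1 - acc.mean():.3f}")
```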

  12. Effects of age, gender, education and race on two tests of language ability in community-based older adults.

    Science.gov (United States)

    Snitz, Beth E; Unverzagt, Frederick W; Chang, Chung-Chou H; Bilt, Joni Vander; Gao, Sujuan; Saxton, Judith; Hall, Kathleen S; Ganguli, Mary

    2009-12-01

    Neuropsychological tests, including tests of language ability, are frequently used to differentiate normal from pathological cognitive aging. However, language can be particularly difficult to assess in a standardized manner in cross-cultural studies and in patients from different educational and cultural backgrounds. This study examined the effects of age, gender, education and race on performance of two language tests: the animal fluency task (AFT) and the Indiana University Token Test (IUTT). We report population-based normative data on these tests from two combined ethnically divergent, cognitively normal, representative population samples of older adults. Participants aged > or =65 years from the Monongahela-Youghiogheny Healthy Aging Team (MYHAT) and from the Indianapolis Study of Health and Aging (ISHA) were selected based on (1) a Clinical Dementia Rating (CDR) score of 0; (2) non-missing baseline language test data; and (3) race self-reported as African-American or white. The combined sample (n = 1885) was 28.1% African-American. Multivariate ordinal logistic regression was used to model the effects of demographic characteristics on test scores. On both language tests, better performance was significantly associated with higher education, younger age, and white race. On the IUTT, better performance was also associated with female gender. We found no significant interactions between age and sex, and between race and education. Age and education are more potent variables than are race and gender influencing performance on these language tests. Demographically stratified normative tables for these measures can be used to guide test interpretation and aid clinical diagnosis of impaired cognition.

  13. Classification of user performance in the Ruff Figural Fluency Test based on eye-tracking features

    Directory of Open Access Journals (Sweden)

    Borys Magdalena

    2017-01-01

    Cognitive assessment in neurological diseases represents a relevant topic due to its diagnostic significance in detecting disease, but also in assessing progress of the treatment. Computer-based tests provide objective and accurate cognitive skills and capacity measures. The Ruff Figural Fluency Test (RFFT) provides information about non-verbal capacity for initiation, planning, and divergent reasoning. The traditional paper form of the test was transformed into a computer application and examined. The RFFT was applied in an experiment performed among 70 male students to assess their cognitive performance in the laboratory environment. Each student was examined in three sequential series. Besides the students' performances measured by using in-app keylogging, the eye-tracking data obtained by non-invasive video-based oculography were gathered, from which several features were extracted. Eye-tracking features combined with performance measures (a total number of designs and/or error ratio) were applied in machine learning classification. Various classification algorithms were applied, and their accuracy, specificity, sensitivity and performance were compared.

  14. Integrating Multiple On-line Knowledge Bases for Disease-Lab Test Relation Extraction.

    Science.gov (United States)

    Zhang, Yaoyun; Soysal, Ergin; Moon, Sungrim; Wang, Jingqi; Tao, Cui; Xu, Hua

    2015-01-01

    A computable knowledge base containing relations between diseases and lab tests would be a great resource for many biomedical informatics applications. This paper describes our initial step towards establishing a comprehensive knowledge base of disease and lab tests relations utilizing three public on-line resources. LabTestsOnline, MedlinePlus and Wikipedia are integrated to create a freely available, computable disease-lab test knowledgebase. Disease and lab test concepts are identified using MetaMap and relations between diseases and lab tests are determined based on source-specific rules. Experimental results demonstrate a high precision for relation extraction, with Wikipedia achieving the highest precision of 87%. Combining the three sources reached a recall of 51.40%, when compared with a subset of disease-lab test relations extracted from a reference book. Moreover, we found additional disease-lab test relations from on-line resources, indicating they are complementary to existing reference books for building a comprehensive disease and lab test relation knowledge base.

  15. T-UPPAAL: Online Model-based Testing of Real-Time Systems

    DEFF Research Database (Denmark)

    Mikucionis, Marius; Larsen, Kim Guldstrand; Nielsen, Brian

    2004-01-01

    The goal of testing is to gain confidence in a physical computer based system by means of executing it. More than one third of typical project resources is spent on testing embedded and real-time systems, but still it remains ad-hoc, based on heuristics, and error-prone. Therefore systematic...

  16. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power-analysis in estimating...... the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement to NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis...
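
    A small worked power calculation of the kind the article advocates as a complement to NHST is sketched below; the effect size, alpha and sample sizes are arbitrary illustrative choices, not values from the article.

```python
# A small illustrative power calculation to complement NHST. The effect size
# and alpha below are arbitrary choices.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Sample size per group needed to detect a medium effect (d = 0.5) with 80% power.
n_required = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required n per group: {n_required:.1f}")

# Achieved power for a small and a very large sample at the same effect size.
for n in (20, 2000):
    print(f"n = {n:5d}: power = {analysis.power(effect_size=0.5, nobs1=n, alpha=0.05):.3f}")
```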

  17. Home-based HIV testing for men preferred over clinic-based testing by pregnant women and their male partners, a nested cross-sectional study.

    Science.gov (United States)

    Osoti, Alfred Onyango; John-Stewart, Grace; Kiarie, James Njogu; Barbra, Richardson; Kinuthia, John; Krakowiak, Daisy; Farquhar, Carey

    2015-07-30

    Male partner HIV testing and counseling (HTC) is associated with enhanced uptake of prevention of mother-to-child HIV transmission (PMTCT), yet male HTC during pregnancy remains low. Identifying settings preferred by pregnant women and their male partners may improve male involvement in PMTCT. Participants in a randomized clinical trial (NCT01620073) to improve male partner HTC were interviewed to determine whether the preferred male partner HTC setting was the home, antenatal care (ANC) clinic or VCT center. In this nested cross sectional study, responses were evaluated at baseline and after 6 weeks. Differences between the two time points were compared using McNemar's test and correlates of preference were determined using logistic regression. Among 300 pregnant female participants, 54% preferred home over ANC clinic testing (34.0%) or VCT center (12.0%). Among 188 male partners, 68% preferred home-based HTC to antenatal clinic (19%) or VCT (13%). Men who desired more children and women who had less than secondary education or daily income Pregnant women and their male partners preferred home-based compared to clinic or VCT-center based male partner HTC. Home-based HTC during pregnancy appears acceptable and may improve male testing and involvement in PMTCT.

  18. Microcomputer based test system for charge coupled devices

    International Nuclear Information System (INIS)

    Sidman, S.

    1981-02-01

    A microcomputer based system for testing analog charge coupled integrated circuits has been developed. It measures device performance for three parameters: dynamic range, baseline shift due to leakage current, and transfer efficiency. A companion board tester has also been developed. The software consists of a collection of BASIC and assembly language routines developed on the test system microcomputer

  19. Qualitative tests for the determination of inorganic bases

    OpenAIRE

    Založnik, Urša

    2013-01-01

    The unit on acids, bases and salts is dealt with in primary and secondary schools and can be very interesting to students because they encounter these substances on an everyday basis. In my Diploma thesis I will focus on bases, especially on how students could determine, in the most interesting way, whether a solution is an acid or a base and which base it actually is. My goal is to develop simple qualitative tests to determine inorganic bases in primary schools. In nature, ba...

  20. Web based aphasia test using service oriented architecture (SOA)

    International Nuclear Information System (INIS)

    Voos, J A; Vigliecca, N S; Gonzalez, E A

    2007-01-01

    Based on an aphasia test for Spanish speakers which analyzes the patient's basic resources of verbal communication, a web-enabled software was developed to automate its execution. A clinical database was designed as a complement, in order to evaluate the antecedents (risk factors, pharmacological and medical backgrounds, neurological or psychiatric symptoms, brain injury (anatomical and physiological characteristics), etc.) which are necessary to carry out a multi-factor statistical analysis in different samples of patients. The automated test was developed following service oriented architecture and implemented in a web site which contains a test suite, which would allow both integrating the aphasia test with other neuropsychological instruments and increasing the available site information for scientific research. The test design, the database and the study of its psychometric properties (validity, reliability and objectivity) were made in conjunction with neuropsychological researchers, who participated actively in the software design, based on feedback from the patients or other subjects of investigation

  1. Web based aphasia test using service oriented architecture (SOA)

    Energy Technology Data Exchange (ETDEWEB)

    Voos, J A [Clinical Engineering R and D Center, Universidad Tecnologica Nacional, Facultad Regional Cordoba, Cordoba (Argentina); Vigliecca, N S [Consejo Nacional de Investigaciones Cientificas y Tecnicas, CONICET, Cordoba (Argentina); Gonzalez, E A [Clinical Engineering R and D Center, Universidad Tecnologica Nacional, Facultad Regional Cordoba, Cordoba (Argentina)

    2007-11-15

    Based on an aphasia test for Spanish speakers which analyzes the patient's basic resources of verbal communication, a web-enabled software was developed to automate its execution. A clinical database was designed as a complement, in order to evaluate the antecedents (risk factors, pharmacological and medical backgrounds, neurological or psychiatric symptoms, brain injury (anatomical and physiological characteristics), etc.) which are necessary to carry out a multi-factor statistical analysis in different samples of patients. The automated test was developed following service oriented architecture and implemented in a web site which contains a test suite, which would allow both integrating the aphasia test with other neuropsychological instruments and increasing the available site information for scientific research. The test design, the database and the study of its psychometric properties (validity, reliability and objectivity) were made in conjunction with neuropsychological researchers, who participated actively in the software design, based on feedback from the patients or other subjects of investigation.

  2. Multirobot FastSLAM Algorithm Based on Landmark Consistency Correction

    Directory of Open Access Journals (Sweden)

    Shi-Ming Chen

    2014-01-01

    Considering the influence of uncertain map information on the multirobot SLAM problem, a multirobot FastSLAM algorithm based on landmark consistency correction is proposed. Firstly, an electromagnetism-like mechanism is introduced into the resampling procedure of single-robot FastSLAM: each sampling particle is treated as a charged electron, and the attraction-repulsion mechanism of the electromagnetic field is used to simulate the interactive forces between particles and thereby improve their distribution. Secondly, when multiple robots observe the same landmarks, every robot is regarded as one node and a Kalman-Consensus Filter is proposed to update the landmark information, which further improves the accuracy of localization and mapping. Finally, the simulation results show that the algorithm is suitable and effective.
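
    The resampling step that the electromagnetism-like mechanism refines can be sketched with standard systematic (low-variance) resampling of particles by importance weight; the attraction-repulsion adjustment and the Kalman-Consensus landmark update are not reproduced, and the particle states are made-up.

```python
# Minimal sketch of weight-based particle resampling (systematic resampling)
# as used in FastSLAM. Particle states here are made-up placeholders.
import numpy as np

def systematic_resample(weights, rng):
    """Return indices of particles chosen proportionally to their weights."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n      # one random offset, even spacing
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                               # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)

rng = np.random.default_rng(6)
particles = rng.normal(size=(100, 3))                  # (x, y, heading) hypotheses
weights = rng.random(100)
weights /= weights.sum()

keep = systematic_resample(weights, rng)
particles = particles[keep]                            # resampled particle set
print("unique particles kept:", len(np.unique(keep)))
```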

  3. Tuberculosis Infection in Urban Adolescents: Results of a School-Based Testing Program.

    Science.gov (United States)

    Barry, M. Anita; And Others

    1990-01-01

    Discusses a tuberculosis skin testing program introduced for seventh and tenth grade students in Boston (Massachusetts) public schools. Positivity rate was significantly higher in tenth grade students. Among those testing positive, the majority were born outside the United States. Results suggest that testing may identify a significant number of…

  4. Good agreement of conventional and gel-based direct agglutination test in immune-mediated haemolytic anaemia

    Directory of Open Access Journals (Sweden)

    Piek Christine J

    2012-02-01

    Background The aim of this study was to compare a gel-based test with the traditional direct agglutination test (DAT) for the diagnosis of immune-mediated haemolytic anaemia (IMHA). Methods Canine (n = 247) and feline (n = 74) blood samples were submitted for DAT testing to two laboratories. A subset of canine samples was categorized as having idiopathic IMHA, secondary IMHA, or no IMHA. Results The kappa values for agreement between the tests were in one laboratory 0.86 for canine and 0.58 for feline samples, and in the other 0.48 for canine samples. The lower agreement in the second laboratory was caused by a high number of positive canine DATs for which the gel test was negative. This group included significantly more dogs with secondary IMHA. Conclusions The gel test might be used as a screening test for idiopathic IMHA and is less often positive in secondary IMHA than the DAT.
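
    The agreement computation can be sketched with Cohen's kappa between the two tests; the positive/negative labels below are simulated, not the study's samples.

```python
# Minimal sketch of the agreement analysis: Cohen's kappa between gel-based
# test and DAT results on made-up labels.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(7)
n = 247
dat = rng.random(n) < 0.4                       # hypothetical DAT positives
gel = dat.copy()
flip = rng.random(n) < 0.08                     # imperfect agreement
gel[flip] = ~gel[flip]

kappa = cohen_kappa_score(dat, gel)
print(f"kappa = {kappa:.2f}")
```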

  5. Evidence for the different physiological significance of the 6- and 2-minute walk tests in multiple sclerosis

    Directory of Open Access Journals (Sweden)

    Motl Robert W

    2012-03-01

    Background Researchers have recently advocated for the 2-minute walk (2MW) as an alternative for the 6-minute walk (6MW) to assess long distance ambulation in persons with multiple sclerosis (MS). This recommendation has not been based on physiological considerations such as the rate of oxygen consumption (V·O2) over the 6MW range. Objective This study examined the pattern of change in V·O2 over the range of the 6MW in a large sample of persons with MS who varied as a function of disability status. Method Ninety-five persons with clinically-definite MS underwent a neurological examination for generating an Expanded Disability Status Scale (EDSS) score, and then completion of the 6MW protocol while wearing a portable metabolic unit and an accelerometer. Results There was a time main effect on V·O2 during the 6MW (p = .0001) such that V·O2 increased significantly every 30 seconds over the first 3 minutes of the 6MW, and then remained stable over the second 3 minutes of the 6MW. This occurred despite no change in cadence across the 6MW (p = .84). Conclusions The pattern of change in V·O2 indicates that there are different metabolic systems providing energy for ambulation during the 6MW in MS subjects and steady state aerobic metabolism is reached during the last 3 minutes of the 6MW. By extension, the first 3 minutes would represent a test of mixed aerobic and anaerobic work, whereas the second 3 minutes would represent a test of aerobic work during walking.

  6. ASSESSING SMALL SAMPLE WAR-GAMING DATASETS

    Directory of Open Access Journals (Sweden)

    W. J. HURLEY

    2013-10-01

    One of the fundamental problems faced by military planners is the assessment of changes to force structure. An example is whether to replace an existing capability with an enhanced system. This can be done directly with a comparison of measures such as accuracy, lethality, survivability, etc. However this approach does not allow an assessment of the force multiplier effects of the proposed change. To gauge these effects, planners often turn to war-gaming. For many war-gaming experiments, it is expensive, both in terms of time and dollars, to generate a large number of sample observations. This puts a premium on the statistical methodology used to examine these small datasets. In this paper we compare the power of three tests to assess population differences: the Wald-Wolfowitz test, the Mann-Whitney U test, and resampling. We employ a series of Monte Carlo simulation experiments. Not unexpectedly, we find that the Mann-Whitney test performs better than the Wald-Wolfowitz test. Resampling is judged to perform slightly better than the Mann-Whitney test.
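
    The Monte Carlo power comparison can be sketched for two of the three tests, the Mann-Whitney U test and a simple permutation (resampling) test of the difference in means; the sample sizes, shift and distributions below are arbitrary choices, and the Wald-Wolfowitz runs test is omitted.

```python
# Minimal sketch of a Monte Carlo power comparison for small samples:
# Mann-Whitney U test versus a permutation test of the mean difference.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(8)

def perm_pvalue(x, y, n_perm=499):
    """Two-sided permutation p-value for the difference in means."""
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        count += abs(pooled[:len(x)].mean() - pooled[len(x):].mean()) >= observed
    return (count + 1) / (n_perm + 1)

n_trials, n, shift = 500, 8, 1.0                 # small war-game-sized samples
power_mw = power_perm = 0
for _ in range(n_trials):
    x = rng.normal(0, 1, n)
    y = rng.normal(shift, 1, n)
    power_mw += mannwhitneyu(x, y).pvalue < 0.05
    power_perm += perm_pvalue(x, y) < 0.05

print(f"Mann-Whitney power: {power_mw / n_trials:.2f}")
print(f"permutation-test power: {power_perm / n_trials:.2f}")
```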

  7. WEB-BASED ADAPTIVE TESTING SYSTEM (WATS) FOR CLASSIFYING STUDENTS' ACADEMIC ABILITY

    Directory of Open Access Journals (Sweden)

    Jaemu LEE,

    2012-08-01

    Computer Adaptive Testing (CAT) has been highlighted as a promising assessment method to fulfill two testing purposes: estimating student academic ability and classifying student academic level. In this paper, we introduced the Web-based Adaptive Testing System (WATS) developed to support a cost effective assessment for classifying students' ability into different academic levels. Instead of using a traditional paper and pencil test, the WATS is expected to serve as an alternate method to promptly diagnose and identify underachieving students through Web-based testing. The WATS can also help provide students with appropriate learning contents and necessary academic support in time. In this paper, theoretical background and structure of WATS, item construction process based upon item response theory, and user interfaces of WATS were discussed.

  8. A Bootstrap Neural Network Based Heterogeneous Panel Unit Root Test: Application to Exchange Rates

    OpenAIRE

    Christian de Peretti; Carole Siani; Mario Cerrato

    2010-01-01

    This paper proposes a bootstrap artificial neural network based panel unit root test in a dynamic heterogeneous panel context. An application to a panel of bilateral real exchange rate series with the US Dollar from the 20 major OECD countries is provided to investigate Purchasing Power Parity (PPP). The combination of neural network and bootstrapping significantly changes the findings of the economic study in favour of PPP.
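
    A heavily simplified sketch of the bootstrap step alone is shown below: residuals of a driftless random walk are resampled to build pseudo-series under the unit-root null and a Dickey-Fuller-type t statistic is recomputed on each. The neural-network and panel aspects of the paper's test are not reproduced, and the series is simulated rather than a real exchange rate.

```python
# Simplified residual-bootstrap unit root sketch on a single simulated series.
import numpy as np

rng = np.random.default_rng(9)

def df_tstat(y):
    """Return the Dickey-Fuller t statistic (no constant) and the residuals."""
    dy, ylag = np.diff(y), y[:-1]
    rho = np.dot(ylag, dy) / np.dot(ylag, ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid.var(ddof=1) / np.dot(ylag, ylag))
    return rho / se, resid

y = np.cumsum(rng.normal(size=200))          # simulated (log) real exchange rate
t_obs, resid = df_tstat(y)

t_boot = []
for _ in range(999):
    e_star = rng.choice(resid, size=len(resid), replace=True)
    y_star = np.concatenate([[y[0]], y[0] + np.cumsum(e_star)])   # unit-root null
    t_boot.append(df_tstat(y_star)[0])

p_boot = np.mean(np.array(t_boot) <= t_obs)  # left-tailed: reject for very negative t
print(f"observed t = {t_obs:.2f}, bootstrap p-value = {p_boot:.3f}")
```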

  9. Preoperative prediction of inpatient recovery of function after total hip arthroplasty using performance-based tests: a prospective cohort study.

    Science.gov (United States)

    Oosting, Ellen; Hoogeboom, Thomas J; Appelman-de Vries, Suzan A; Swets, Adam; Dronkers, Jaap J; van Meeteren, Nico L U

    2016-01-01

    The aim of this study was to evaluate the value of conventional factors, the Risk Assessment and Predictor Tool (RAPT) and performance-based functional tests as predictors of delayed recovery after total hip arthroplasty (THA). A prospective cohort study was conducted in a regional hospital in the Netherlands with 315 patients attending for THA in 2012. The dependent variable recovery of function was assessed with the Modified Iowa Levels of Assistance scale. Delayed recovery was defined as taking more than 3 days to walk independently. Independent variables were age, sex, BMI, Charnley score, RAPT score and scores for four performance-based tests [2-minute walk test, timed up and go test (TUG), 10-meter walking test (10 mW) and hand grip strength]. Regression analysis with all variables identified older age (>70 years), Charnley score C, slow walking speed (10 mW >10.0 s) and poor functional mobility (TUG >10.5 s) as the best predictors of delayed recovery of function. This model (AUC 0.85, 95% CI 0.79-0.91) performed better than a model with conventional factors and RAPT scores, and significantly better (p = 0.04) than a model with only conventional factors (AUC 0.81, 95% CI 0.74-0.87). The combination of performance-based tests and conventional factors predicted inpatient functional recovery after THA. Two simple functional performance-based tests have a significant added value to a more conventional screening with age and comorbidities to predict recovery of functioning immediately after total hip surgery. Patients over 70 years old, with comorbidities, with a TUG score >10.5 s and a walking speed <1.0 m/s are at risk for delayed recovery of functioning. Those high risk patients need an accurate discharge plan and could benefit from targeted pre- and postoperative therapeutic exercise programs.
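
    The model comparison can be sketched by fitting a logistic model with conventional factors only and one that adds the performance-based tests, then bootstrapping the difference in AUC. All patient variables below are simulated placeholders, not the study's cohort, and the in-sample AUCs shown are optimistic compared with a properly cross-validated estimate.

```python
# Minimal sketch: AUC of a "conventional factors" model versus a model adding
# performance-based tests, with a bootstrap interval for the AUC difference.
# All data and variable names are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(10)
n = 315
age = rng.normal(70, 8, n)
charnley_c = rng.random(n) < 0.3
tug = rng.normal(10, 3, n)                      # timed up and go [s]
walk10m = rng.normal(9, 2.5, n)                 # 10-meter walk time [s]
logit = -10 + 0.08 * age + 1.0 * charnley_c + 0.15 * tug + 0.10 * walk10m
delayed = rng.random(n) < 1 / (1 + np.exp(-logit))

X_conv = np.column_stack([age, charnley_c])
X_full = np.column_stack([age, charnley_c, tug, walk10m])

def fitted_auc(X, y):
    probs = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    return roc_auc_score(y, probs)

diffs = []
for _ in range(500):                            # bootstrap the AUC difference
    idx = rng.integers(0, n, size=n)
    diffs.append(fitted_auc(X_full[idx], delayed[idx]) -
                 fitted_auc(X_conv[idx], delayed[idx]))

print(f"conventional AUC: {fitted_auc(X_conv, delayed):.2f}")
print(f"full-model AUC:   {fitted_auc(X_full, delayed):.2f}")
print(f"bootstrap 95% CI of difference: "
      f"[{np.percentile(diffs, 2.5):.3f}, {np.percentile(diffs, 97.5):.3f}]")
```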

  10. An Improved Test Selection Optimization Model Based on Fault Ambiguity Group Isolation and Chaotic Discrete PSO

    Directory of Open Access Journals (Sweden)

    Xiaofeng Lv

    2018-01-01

    Sensor data-based test selection optimization is the basis for designing test work, which ensures that the system is tested under the constraints of conventional indexes such as the fault detection rate (FDR) and the fault isolation rate (FIR). From the perspective of equipment maintenance support, the ambiguity of fault isolation has a significant effect on the result of test selection. In this paper, an improved test selection optimization model is proposed by considering the ambiguity degree of fault isolation. In the new model, the fault-test dependency matrix is adopted to model the correlation between the system faults and the test group. The objective function of the proposed model is to minimize the test cost under the constraints of FDR and FIR. An improved chaotic discrete particle swarm optimization (PSO) algorithm is adopted to solve the improved test selection optimization model. The new test selection optimization model is more consistent with real complicated engineering systems. The experimental results verify the effectiveness of the proposed method.
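
    A simplified sketch of the optimization is shown below: a binary particle swarm minimizes total test cost with penalty terms for violating FDR and FIR constraints computed from a fault-test dependency matrix. The chaotic map and ambiguity-group refinements of the paper are not reproduced, and the dependency matrix, costs and constraint levels are made-up.

```python
# Simplified binary PSO for test selection under FDR/FIR constraints.
import numpy as np

rng = np.random.default_rng(11)
n_faults, n_tests = 20, 12
D = (rng.random((n_faults, n_tests)) < 0.3).astype(int)   # fault-test dependency matrix
cost = rng.uniform(1, 5, n_tests)
fdr_req, fir_req = 0.95, 0.85

def evaluate(mask):
    """Total cost plus a large penalty when FDR/FIR constraints are violated."""
    if mask.sum() == 0:
        return 1e6
    sig = D[:, mask.astype(bool)]
    detected = sig.any(axis=1)
    fdr = detected.mean()
    # A detected fault is isolated if its signature is unique among detected faults.
    rows = [tuple(r) for r in sig[detected]]
    fir = sum(rows.count(r) == 1 for r in rows) / max(detected.sum(), 1)
    penalty = 1e3 * (max(0, fdr_req - fdr) + max(0, fir_req - fir))
    return cost[mask.astype(bool)].sum() + penalty

n_particles, n_iter = 30, 100
x = (rng.random((n_particles, n_tests)) < 0.5).astype(int)
v = rng.normal(0, 1, (n_particles, n_tests))
pbest, pbest_val = x.copy(), np.array([evaluate(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = np.clip(0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x), -6, 6)
    x = (rng.random(x.shape) < 1 / (1 + np.exp(-v))).astype(int)   # sigmoid bit sampling
    vals = np.array([evaluate(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("selected tests:", np.flatnonzero(gbest), "objective:", round(evaluate(gbest), 2))
```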

  11. Tumor Suppressor Gene-Based Nanotherapy: From Test Tube to the Clinic

    Directory of Open Access Journals (Sweden)

    Manish Shanker

    2011-01-01

    Cancer is a major health problem in the world. Advances made in cancer therapy have improved the survival of patients in certain types of cancer. However, the overall five-year survival has not significantly improved in the majority of cancer types. Major challenges encountered in having effective cancer therapy are development of drug resistance by the tumor cells, nonspecific cytotoxicity, and inability to affect metastatic tumors by the chemodrugs. Overcoming these challenges requires development and testing of novel therapies. One attractive cancer therapeutic approach is cancer gene therapy. Several laboratories including the authors' laboratory have been investigating nonviral formulations for delivering therapeutic genes as a mode for effective cancer therapy. In this paper the authors will summarize their experience in the development and testing of a cationic lipid-based nanocarrier formulation and the results from their preclinical studies leading to a Phase I clinical trial for nonsmall cell lung cancer. Their nanocarrier formulation containing therapeutic genes such as tumor suppressor genes when administered intravenously effectively controls metastatic tumor growth. Additional Phase I clinical trials based on the results of their nanocarrier formulation have been initiated or proposed for treatment of cancer of the breast, ovary, pancreas, and metastatic melanoma, and will be discussed.

  12. Tumor suppressor gene-based nanotherapy: from test tube to the clinic.

    Science.gov (United States)

    Shanker, Manish; Jin, Jiankang; Branch, Cynthia D; Miyamoto, Shinya; Grimm, Elizabeth A; Roth, Jack A; Ramesh, Rajagopal

    2011-01-01

    Cancer is a major health problem in the world. Advances made in cancer therapy have improved the survival of patients in certain types of cancer. However, the overall five-year survival has not significantly improved in the majority of cancer types. Major challenges encountered in having effective cancer therapy are development of drug resistance by the tumor cells, nonspecific cytotoxicity, and inability to affect metastatic tumors by the chemodrugs. Overcoming these challenges requires development and testing of novel therapies. One attractive cancer therapeutic approach is cancer gene therapy. Several laboratories including the authors' laboratory have been investigating nonviral formulations for delivering therapeutic genes as a mode for effective cancer therapy. In this paper the authors will summarize their experience in the development and testing of a cationic lipid-based nanocarrier formulation and the results from their preclinical studies leading to a Phase I clinical trial for nonsmall cell lung cancer. Their nanocarrier formulation containing therapeutic genes such as tumor suppressor genes when administered intravenously effectively controls metastatic tumor growth. Additional Phase I clinical trials based on the results of their nanocarrier formulation have been initiated or proposed for treatment of cancer of the breast, ovary, pancreas, and metastatic melanoma, and will be discussed.

  13. Universal Verification Methodology Based Register Test Automation Flow.

    Science.gov (United States)

    Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu

    2016-05-01

    In today's SoC design, the number of registers has been increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in shorter time. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or integrate models in test-bench environment because it requires knowledge of SystemVerilog and UVM libraries. For the creation of register models, many commercial tools support a register model generation from register specification described in IP-XACT, but it is time-consuming to describe register specification in IP-XACT format. For easy creation of register model, we propose spreadsheet-based register template which is translated to IP-XACT description, from which register models can be easily generated using commercial tools. On the other hand, we also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use register model without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing test sequence for each type of register test-case. With the proposed flow, designers can save considerable amount of time to verify functionality of registers.

  14. The application and testing of diatom-based indices of stream water ...

    African Journals Online (AJOL)

    The application and testing of diatom-based indices of stream water quality in Chinhoyi Town, Zimbabwe. ... test the applicability of foreign diatom-based water quality assessment indices to ...

  15. A genomic biomarker signature can predict skin sensitizers using a cell-based in vitro alternative to animal tests

    Directory of Open Access Journals (Sweden)

    Albrekt Ann-Sofie

    2011-08-01

    Background Allergic contact dermatitis is an inflammatory skin disease that affects a significant proportion of the population. This disease is caused by an adverse immune response towards chemical haptens, and leads to a substantial economic burden for society. Current tests of sensitizing chemicals rely on animal experimentation. New legislation on the registration and use of chemicals within the pharmaceutical and cosmetic industries has stimulated significant research efforts to develop alternative, human cell-based assays for the prediction of sensitization. The aim is to replace animal experiments with in vitro tests displaying a higher predictive power. Results We have developed a novel cell-based assay for the prediction of sensitizing chemicals. By analyzing the transcriptome of the human cell line MUTZ-3 after 24 h stimulation, using 20 different sensitizing chemicals, 20 non-sensitizing chemicals and vehicle controls, we have identified a biomarker signature of 200 genes with potent discriminatory ability. Using a Support Vector Machine for supervised classification, the prediction performance of the assay revealed an area under the ROC curve of 0.98. In addition, categorizing the chemicals according to the LLNA assay, this gene signature could also predict sensitizing potency. The identified markers are involved in biological pathways with immunologically relevant functions, which can shed light on the process of human sensitization. Conclusions A gene signature predicting sensitization, using a human cell line in vitro, has been identified. This simple and robust cell-based assay has the potential to completely replace or drastically reduce the utilization of test systems based on experimental animals. Being based on human biology, the assay is proposed to be more accurate for predicting sensitization in humans than the traditional animal-based tests.
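
    The classification step can be sketched as a gene-ranking step that keeps a 200-gene signature followed by a linear SVM evaluated with cross-validated ROC AUC. The expression matrix below is simulated, not MUTZ-3 data, and the univariate ranking is a simplification of the signature selection.

```python
# Minimal sketch: select a 200-gene signature and estimate an SVM's
# discriminatory ability by cross-validated ROC AUC on simulated data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(12)
n_chem, n_genes = 40, 5000
y = np.repeat([1, 0], n_chem // 2)                 # 20 sensitizers, 20 non-sensitizers
X = rng.normal(size=(n_chem, n_genes))
X[y == 1, :150] += 1.0                             # 150 genes carrying a real signal

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=200),   # 200-gene signature
                      SVC(kernel="linear"))
auc = cross_val_score(model, X, y, scoring="roc_auc",
                      cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```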

  16. A genomic biomarker signature can predict skin sensitizers using a cell-based in vitro alternative to animal tests

    Science.gov (United States)

    2011-01-01

    Background Allergic contact dermatitis is an inflammatory skin disease that affects a significant proportion of the population. This disease is caused by an adverse immune response towards chemical haptens, and leads to a substantial economic burden for society. Current tests of sensitizing chemicals rely on animal experimentation. New legislation on the registration and use of chemicals within the pharmaceutical and cosmetic industries has stimulated significant research efforts to develop alternative, human cell-based assays for the prediction of sensitization. The aim is to replace animal experiments with in vitro tests displaying a higher predictive power. Results We have developed a novel cell-based assay for the prediction of sensitizing chemicals. By analyzing the transcriptome of the human cell line MUTZ-3 after 24 h stimulation, using 20 different sensitizing chemicals, 20 non-sensitizing chemicals and vehicle controls, we have identified a biomarker signature of 200 genes with potent discriminatory ability. Using a Support Vector Machine for supervised classification, the prediction performance of the assay revealed an area under the ROC curve of 0.98. In addition, categorizing the chemicals according to the LLNA assay, this gene signature could also predict sensitizing potency. The identified markers are involved in biological pathways with immunologically relevant functions, which can shed light on the process of human sensitization. Conclusions A gene signature predicting sensitization, using a human cell line in vitro, has been identified. This simple and robust cell-based assay has the potential to completely replace or drastically reduce the utilization of test systems based on experimental animals. Being based on human biology, the assay is proposed to be more accurate for predicting sensitization in humans than the traditional animal-based tests. PMID:21824406

  17. Brief communication: Is variation in the cranial capacity of the Dmanisi sample too high to be from a single species?

    Science.gov (United States)

    Lee, Sang-Hee

    2005-07-01

    This study uses data resampling to test the null hypothesis that the degree of variation in the cranial capacity of the Dmanisi hominid sample is within the range variation of a single species. The statistical significance of the variation in the Dmanisi sample is examined using simulated distributions based on comparative samples of modern humans, chimpanzees, and gorillas. Results show that it is unlikely to find the maximum difference observed in the Dmanisi sample in distributions of female-female pairs from comparative single-species samples. Given that two sexes are represented, the difference in the Dmanisi sample is not enough to reject the null hypothesis of a single species. Results of this study suggest no compelling reason to invoke multiple taxa to explain variation in the cranial capacity of the Dmanisi hominids. (c) 2004 Wiley-Liss, Inc
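
    The resampling logic can be sketched by drawing many pairs from a single-species comparative sample, building the null distribution of the within-pair ratio of cranial capacities, and locating an observed contrast in it. The comparative capacities and the observed ratio below are illustrative stand-ins, not the Dmanisi or comparative measurements.

```python
# Minimal sketch of the resampling logic: null distribution of the within-pair
# ratio of cranial capacities drawn from a single-species sample. All numbers
# here are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(13)
species_sample = rng.normal(1350, 120, 60)       # stand-in single-species capacities (cc)
observed_ratio = 780 / 600                       # illustrative "large vs small" contrast

n_resamples = 100_000
pairs = rng.choice(species_sample, size=(n_resamples, 2), replace=True)
ratios = pairs.max(axis=1) / pairs.min(axis=1)

p = np.mean(ratios >= observed_ratio)
print(f"proportion of resampled pairs at least as extreme: {p:.4f}")
```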

  18. Comparison of demons deformable registration-based methods for texture analysis of serial thoracic CT scans

    Science.gov (United States)

    Cunliffe, Alexandra R.; Al-Hallaq, Hania A.; Fei, Xianhan M.; Tuohy, Rachel E.; Armato, Samuel G.

    2013-02-01

    To determine how 19 image texture features may be altered by three image registration methods, "normal" baseline and follow-up computed tomography (CT) scans from 27 patients were analyzed. Nineteen texture feature values were calculated in over 1,000 32x32-pixel regions of interest (ROIs) randomly placed in each baseline scan. All three methods used demons registration to map baseline scan ROIs to anatomically matched locations in the corresponding transformed follow-up scan. For the first method, the follow-up scan transformation was subsampled to achieve a voxel size identical to that of the baseline scan. For the second method, the follow-up scan was transformed through affine registration to achieve global alignment with the baseline scan. For the third method, the follow-up scan was directly deformed to the baseline scan using demons deformable registration. Feature values in matched ROIs were compared using Bland-Altman 95% limits of agreement. For each feature, the range spanned by the 95% limits was normalized to the mean feature value to obtain the normalized range of agreement, nRoA. Wilcoxon signed-rank tests were used to compare nRoA values across features for the three methods. Significance for individual tests was adjusted using the Bonferroni method. nRoA was significantly smaller for affine-registered scans than for the resampled scans (p=0.003), indicating lower feature value variability between baseline and follow-up scan ROIs using this method. For both of these methods, however, nRoA was significantly higher than when feature values were calculated directly on demons-deformed follow-up scans (p<0.001). Across features and methods, nRoA values remained below 26%.
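
    The agreement metric can be sketched per feature as the width of the Bland-Altman 95% limits normalized by the mean feature value (nRoA), followed by a Wilcoxon signed-rank comparison between two methods; the feature values are simulated, and in a full analysis the p value would be compared against a Bonferroni-adjusted threshold.

```python
# Minimal sketch of the nRoA metric and a Wilcoxon signed-rank comparison
# between two registration methods, on simulated feature values.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(14)
n_features, n_rois = 19, 1000
baseline = rng.normal(100, 20, size=(n_features, n_rois))

def nroa(follow_up):
    """Normalized range of agreement for each feature."""
    diff = baseline - follow_up
    span = 2 * 1.96 * diff.std(axis=1, ddof=1)          # width of the 95% limits
    return span / baseline.mean(axis=1)

method_a = baseline + rng.normal(0, 5, baseline.shape)   # e.g. resampled follow-up
method_b = baseline + rng.normal(0, 3, baseline.shape)   # e.g. deformed follow-up

stat, p = wilcoxon(nroa(method_a), nroa(method_b))
print(f"Wilcoxon signed-rank p = {p:.4f} "
      f"(compare against a Bonferroni-adjusted threshold when testing several pairs)")
```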

  19. Development of seismic technology and reliability based on vibration tests

    International Nuclear Information System (INIS)

    Sasaki, Youichi

    1997-01-01

    This paper deals with some of the vibration tests and investigations on the seismic safety of nuclear power plants (NPPs) in Japan. To ensure the reliability of the seismic safety of nuclear power plants, nuclear power plants in Japan have been designed according to the Technical Guidelines for Aseismic Design of Nuclear Power Plants. This guideline has been developed based on a technical data base and findings which were obtained from many vibration tests and investigations. Besides the tests for the guideline, proving tests on the seismic reliability of operating nuclear power plant equipment and systems have been carried out. In this paper some vibration tests and their evaluation results are presented. They have crucially contributed to the development of the guideline. (J.P.N.)

  20. Development of seismic technology and reliability based on vibration tests

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, Youichi [Nuclear Power Engineering Corp., Tokyo (Japan)]

    1997-03-01

    This paper deals with some of the vibration tests and investigations on the seismic safety of nuclear power plants (NPPs) in Japan. To ensure the reliability of the seismic safety of nuclear power plants, nuclear power plants in Japan have been designed according to the Technical Guidelines for Aseismic Design of Nuclear Power Plants. This guideline has been developed based on a technical data base and findings obtained from many vibration tests and investigations. Besides the tests for the guideline, proving tests on the seismic reliability of operating nuclear power plant equipment and systems have been carried out. In this paper some vibration tests and their evaluation results are presented. They have contributed crucially to the development of the guideline. (J.P.N.)

  1. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    Science.gov (United States)

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.
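
    A very rough sketch of the filtering idea is given below, assuming a simple per-bin comparison of signal power against a reference noise spectrum at significance level α; the segment length, the normal-approximation threshold, and the toy signal are assumptions, not the authors' ASTF formulation.

```python
import numpy as np
from scipy import stats

def adaptive_statistic_filter(signal, noise_ref, alpha=0.01, seg_len=1024):
    """Keep only frequency components whose power significantly exceeds the
    reference (noise) spectrum; a rough stand-in for the ASTF idea.
    `noise_ref` should span several segments of length `seg_len`."""
    n_segs = len(noise_ref) // seg_len
    noise_psd = np.array([np.abs(np.fft.rfft(noise_ref[i * seg_len:(i + 1) * seg_len])) ** 2
                          for i in range(n_segs)])
    mu = noise_psd.mean(axis=0)
    sigma = noise_psd.std(axis=0, ddof=1)

    spec = np.fft.rfft(signal[:seg_len])
    power = np.abs(spec) ** 2
    keep = power > mu + stats.norm.ppf(1 - alpha) * sigma   # reject "noise-like" bins
    return np.fft.irfft(spec * keep, n=seg_len)

# Toy usage: a weak 1.2 kHz tone buried in noise (10 kHz sampling assumed)
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1e-4)
noise_only = 0.5 * rng.standard_normal(t.size)
observed = 0.5 * rng.standard_normal(t.size) + 0.2 * np.sin(2 * np.pi * 1200 * t)
filtered = adaptive_statistic_filter(observed, noise_only)
```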

  2. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    Directory of Open Access Journals (Sweden)

    Ke Li

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.

  3. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    Science.gov (United States)

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  4. Black hole based tests of general relativity

    International Nuclear Information System (INIS)

    Yagi, Kent; Stein, Leo C

    2016-01-01

    General relativity has passed all solar system experiments and neutron star based tests, such as binary pulsar observations, with flying colors. A more exotic arena for testing general relativity is in systems that contain one or more black holes. Black holes are the most compact objects in the Universe, providing probes of the strongest-possible gravitational fields. We are motivated to study strong-field gravity since many theories give large deviations from general relativity only at large field strengths, while recovering the weak-field behavior. In this article, we review how one can probe general relativity and various alternative theories of gravity by using electromagnetic waves from a black hole with an accretion disk, and gravitational waves from black hole binaries. We first review model-independent ways of testing gravity with electromagnetic/gravitational waves from a black hole system. We then focus on selected examples of theories that extend general relativity in rather simple ways. Some important characteristics of general relativity include (but are not limited to) (i) only tensor gravitational degrees of freedom, (ii) the graviton is massless, (iii) no quadratic or higher curvatures in the action, and (iv) the theory is four-dimensional. Altering a characteristic leads to a different extension of general relativity: (i) scalar–tensor theories, (ii) massive gravity theories, (iii) quadratic gravity, and (iv) theories with large extra dimensions. Within each theory, we describe black hole solutions, their properties, and current and projected constraints on each theory using black hole based tests of gravity. We close this review by listing some of the open problems in model-independent tests and within each specific theory. (paper)

  5. Pharmacists performing quality spirometry testing: an evidence based review.

    Science.gov (United States)

    Cawley, Michael J; Warning, William J

    2015-10-01

    The scope of pharmacist services for patients with pulmonary disease has primarily focused on drug-related outcomes; however, pharmacists have the ability to broaden the scope of clinical services by performing diagnostic testing, including quality spirometry testing. Studies have demonstrated that pharmacists can perform quality spirometry testing based upon international guidelines. The primary aim of this review was to assess the published evidence of pharmacists performing quality spirometry testing based upon American Thoracic Society/European Respiratory Society (ATS/ERS) guidelines. In order to accomplish this, the description of evidence and type of outcome from these services were reviewed. A literature search was conducted using five databases [PubMed (1946-January 2015), International Pharmaceutical Abstracts (1970 to January 2015), Cumulative Index of Nursing and Allied Health Literature, Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews] with search terms including pharmacy, spirometry, pulmonary function, asthma, or COPD. Searches were limited to publications in English and reported in humans. In addition, Uniform Resource Locator and Google Scholar searches were implemented to include any additional supplemental information. Eight studies (six prospective multi-center trials, two retrospective single-center studies) were included. Pharmacists in all studies received specialized training in performing spirometry testing. Of the eight studies meeting inclusion and exclusion criteria, 8 (100%) demonstrated acceptable repeatability of spirometry testing based upon standards set by the ATS/ERS guidelines. Acceptable repeatability in seven studies ranged from 70 to 99%, consistent with published data. Available evidence suggests that quality spirometry testing can be performed by pharmacists. More prospective studies are needed to add to the current evidence of quality spirometry testing performed by

  6. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    NARCIS (Netherlands)

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels

  7. Development and preliminary testing of a web-based, self-help application for disaster-affected families.

    Science.gov (United States)

    Yuen, Erica K; Gros, Kirstin; Welsh, Kyleen E; McCauley, Jenna; Resnick, Heidi S; Danielson, Carla K; Price, Matthew; Ruggiero, Kenneth J

    2016-09-01

    Technology-based self-help interventions have the potential to increase access to evidence-based mental healthcare, especially for families affected by natural disasters. However, development of these interventions is a complex process and poses unique challenges. Usability testing, which assesses the ability of individuals to use an application successfully, can have a significant impact on the quality of a self-help intervention. This article describes (a) the development of a novel web-based multi-module self-help intervention for disaster-affected adolescents and their parents and (b) a mixed-methods formal usability study to evaluate user response. A total of 24 adolescents were observed, videotaped, and interviewed as they used the depressed mood component of the self-help intervention. Quantitative results indicated an above-average user experience, and qualitative analysis identified 120 unique usability issues. We discuss the challenges of developing self-help applications, including design considerations and the value of usability testing in technology-based interventions, as well as our plan for widespread dissemination. © The Author(s) 2015.

  8. Microplastic contamination of river beds significantly reduced by catchment-wide flooding

    Science.gov (United States)

    Hurley, Rachel; Woodward, Jamie; Rothwell, James J.

    2018-04-01

    Microplastic contamination of the oceans is one of the world's most pressing environmental concerns. The terrestrial component of the global microplastic budget is not well understood because sources, stores and fluxes are poorly quantified. We report catchment-wide patterns of microplastic contamination, classified by type, size and density, in channel bed sediments at 40 sites across urban, suburban and rural river catchments in northwest England. Microplastic contamination was pervasive on all river channel beds. We found multiple urban contamination hotspots with a maximum microplastic concentration of approximately 517,000 particles m⁻². After a period of severe flooding in winter 2015/16, all sites were resampled. Microplastic concentrations had fallen at 28 sites and 18 saw a decrease of one order of magnitude. The flooding exported approximately 70% of the microplastic load stored on these river beds (equivalent to 0.85 ± 0.27 tonnes or 43 ± 14 billion particles) and eradicated microbead contamination at 7 sites. We conclude that microplastic contamination is efficiently flushed from river catchments during flooding.

  9. Tests of beam-based alignment at FACET

    CERN Document Server

    Latina, A; Schulte, D; Adli, E

    2014-01-01

    The performance of future linear colliders will depend critically on beam-based alignment (BBA) and feedback systems, which will play a crucial role in guaranteeing low-emittance transport throughout such machines. BBA algorithms designed to improve the beam transmission in a linac by simultaneously optimising the trajectory and minimising the residual dispersion have been studied thoroughly in theory over the last years, and successfully verified experimentally. One such technique is called Dispersion-Free Steering (DFS). A careful study of the DFS performance at the SLAC test facility FACET led us to design a beam-based technique specifically targeted at reducing the impact of transverse short-range wakefields rather than of the dispersion, since the wakefields are the limiting factor for the FACET performance. This technique is called Wakefield-Free Steering (WFS). The results of the first tests of WFS at FACET are presented in this paper.
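
    A schematic of the weighted least-squares formulation commonly used for dispersion-free-style steering is sketched below, in which corrector settings jointly minimize the measured orbit and a dispersion-like signal; the response matrices, weight, and truncation parameter are placeholders, not FACET values or the authors' exact algorithm.

```python
import numpy as np

def dfs_correction(R_orbit, R_disp, orbit, dispersion, weight=1.0, rcond=1e-3):
    """Corrector settings theta that jointly minimize
    ||orbit + R_orbit @ theta||^2 + weight^2 * ||dispersion + R_disp @ theta||^2
    via a truncated least-squares solve."""
    A = np.vstack([R_orbit, weight * R_disp])
    b = -np.concatenate([orbit, weight * dispersion])
    theta, *_ = np.linalg.lstsq(A, b, rcond=rcond)
    return theta

# Toy usage with random response matrices (60 BPM readings, 10 correctors)
rng = np.random.default_rng(0)
R_orbit, R_disp = rng.standard_normal((60, 10)), rng.standard_normal((60, 10))
orbit, dispersion = rng.standard_normal(60), rng.standard_normal(60)
theta = dfs_correction(R_orbit, R_disp, orbit, dispersion, weight=5.0)
```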

  10. Characteristic function-based semiparametric inference for skew-symmetric models

    KAUST Repository

    Potgieter, Cornelis J.

    2012-12-26

    Skew-symmetric models offer a very flexible class of distributions for modelling data. These distributions can also be viewed as selection models for the symmetric component of the specified skew-symmetric distribution. The estimation of the location and scale parameters corresponding to the symmetric component is considered here, with the symmetric component known. Emphasis is placed on using the empirical characteristic function to estimate these parameters. This is made possible by an invariance property of the skew-symmetric family of distributions, namely that even transformations of random variables that are skew-symmetric have a distribution only depending on the symmetric density. A distance metric between the real components of the empirical and true characteristic functions is minimized to obtain the estimators. The method is semiparametric, in that the symmetric component is specified, but the skewing function is assumed unknown. Furthermore, the methodology is extended to hypothesis testing. Two tests for a hypothesis of specific parameter values are considered, as well as a test for the hypothesis that the symmetric component has a specific parametric form. A resampling algorithm is described for practical implementation of these tests. The outcomes of various numerical experiments are presented. © 2012 Board of the Foundation of the Scandinavian Journal of Statistics.
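
    A small illustration of the characteristic-function idea follows, assuming the symmetric component is standard normal so that the real part of the empirical characteristic function of the standardized data is matched to exp(-t²/2); the t-grid, the least-squares distance, and the toy skew-normal data are assumptions, not the article's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skewnorm

def cf_distance(params, x, t_grid):
    """Squared distance between the real part of the empirical CF of the
    standardized data and the CF of a standard normal symmetric component."""
    mu, log_sigma = params
    z = (x - mu) / np.exp(log_sigma)
    ecf_real = np.cos(np.outer(t_grid, z)).mean(axis=1)  # Re of empirical CF at each t
    target = np.exp(-0.5 * t_grid ** 2)                  # Re of the N(0,1) CF
    return np.sum((ecf_real - target) ** 2)

# Toy skew-normal data: symmetric component N(loc=3, scale=2), skewing function unknown
x = skewnorm.rvs(a=4, loc=3.0, scale=2.0, size=1000, random_state=0)
t_grid = np.linspace(0.1, 2.0, 20)
fit = minimize(cf_distance, x0=[np.median(x), 0.0], args=(x, t_grid))
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)
```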

  11. USING COMPUTER-BASED TESTING AS ALTERNATIVE ASSESSMENT METHOD OF STUDENT LEARNING IN DISTANCE EDUCATION

    Directory of Open Access Journals (Sweden)

    Amalia SAPRIATI

    2010-04-01

    This paper addresses the use of computer-based testing in distance education, based on the experience of Universitas Terbuka (UT), Indonesia. Computer-based testing has been developed at UT to meet specific needs of distance students, namely: students' inability to sit for the scheduled test, conflicting test schedules, and students' need for flexibility in taking examinations to improve their grades. In 2004, UT initiated a pilot project to develop a system and program for a computer-based testing method. In 2005 and 2006, tryouts of the computer-based testing method were conducted in 7 Regional Offices that were considered to have sufficient supporting resources. The results of the tryouts revealed that students were enthusiastic about taking computer-based tests and expected that the test method would be provided by UT as an alternative to the traditional paper-and-pencil test method. UT then implemented the computer-based testing method in 6 and 12 Regional Offices in 2007 and 2008, respectively. The computer-based testing was administered in the city of the designated Regional Office and was supervised by the Regional Office staff. The development of the computer-based testing started with tests using computers in a networked configuration. The system has been continually improved, and it currently uses devices linked to the internet or the World Wide Web. The construction of the test involves the generation and selection of test items from the item bank collection of the UT Examination Center, so that the combination of the selected items comprises the test specification. Currently UT offers 250 courses involving the use of computer-based testing. Students expect that more courses will be offered with computer-based testing in Regional Offices within easy access by students.

  12. Protective Factors, Coping Appraisals, and Social Barriers Predict Mental Health Following Community Violence: A Prospective Test of Social Cognitive Theory.

    Science.gov (United States)

    Smith, Andrew J; Felix, Erika D; Benight, Charles C; Jones, Russell T

    2017-06-01

    This study tested social cognitive theory of posttraumatic adaptation in the context of mass violence, hypothesizing that pre-event protective factors (general self-efficacy and perceived social support) would reduce posttraumatic stress symptoms (PTSS) and depression severity through boosting post-event coping self-efficacy appraisals (mediator). We qualified hypotheses by predicting that post-event social support barriers would disrupt (moderate) the health-promoting indirect effects of pre-event protective factors. With a prospective longitudinal sample, we employed path models with bootstrap resampling to test hypotheses. Participants included 70 university students (71.4% female; 40.0% White; 34.3% Asian; 14.3% Hispanic) enrolled during a mass violence event who completed surveys one year pre-event and 5-6 months post-event. Results revealed significant large effects in predicting coping self-efficacy (mastery model, R² = .34; enabling model, R² = .36), PTSS (mastery model, R² = .35; enabling model, R² = .41), and depression severity (mastery model, R² = .43; enabling model, R² = .46). Overall findings supported study hypotheses, showing that at low levels of post-event social support barriers, pre-event protective factors reduced distress severity through boosting coping self-efficacy. However, as post-event social support barriers increased, the indirect, distress-reducing effects of pre-event protective factors were reduced to nonsignificance. Study implications focus on preventative and responsive intervention. Copyright © 2017 International Society for Traumatic Stress Studies.
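
    A minimal sketch of the bootstrap-resampling approach to an indirect (mediated) effect such as those in the path models above is shown below; the simple x → m → y decomposition, the OLS paths, and the toy data are assumptions, not the study's model.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b in a simple
    mediation model x -> m -> y, with each path estimated by OLS."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                # slope of m on x
        b = np.linalg.lstsq(np.column_stack([np.ones(n), xb, mb]),
                            yb, rcond=None)[0][2]   # slope of y on m, controlling for x
        estimates[i] = a * b
    return np.percentile(estimates, [2.5, 97.5])

# Toy data: pre-event support (x), coping self-efficacy (m), distress (y)
rng = np.random.default_rng(1)
x = rng.standard_normal(70)
m = 0.5 * x + rng.standard_normal(70)
y = -0.6 * m + rng.standard_normal(70)
low, high = bootstrap_indirect_effect(x, m, y)
print(f"95% CI for indirect effect: [{low:.2f}, {high:.2f}]")
```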

  13. Laboratory Survey of Significant Bacteriuria in a Family Practice Clinic

    African Journals Online (AJOL)

    This study was carried out to determine the causative agents of significant bacteriuria and their antibiotic sensitivity pattern. ... high rate of antibiotic resistance suggest that many patients in this population will probably benefit more from treatment of UTI based on routine antibiotic sensitivity testing rather than empiric therapy.

  14. Human Classification Based on Gestural Motions by Using Components of PCA

    International Nuclear Information System (INIS)

    Aziz, Azri A; Wan, Khairunizam; Za'aba, S K; Shahriman A B; Asyekin H; Zuradzman M R; Adnan, Nazrul H

    2013-01-01

    Lately, the study of human capabilities with the aim of integrating them into machines has become a popular topic of discussion. Humans are blessed with special abilities: they can hear, see, sense, speak, think and understand each other. Giving such abilities to machines to improve human life is the researchers' aim for a better quality of life in the future. This research concentrated on human gestures, specifically arm motions, for distinguishing individuals, which led to the development of a hand gesture database. We try to differentiate human physical characteristics based on hand gestures represented by arm trajectories. Subjects were selected from different body-size types, and the acquired data then underwent a resampling process. The results discuss the classification of humans based on arm trajectories using Principal Component Analysis (PCA).
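
    A sketch of the resample-then-PCA pipeline suggested above is given below, assuming trajectories are linearly resampled to a fixed length before projection; scikit-learn, the nearest-neighbour classifier, and the synthetic trajectories are assumptions rather than the authors' setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def resample_trajectory(traj, n_points=100):
    """Linearly resample a variable-length (T, 3) arm trajectory to n_points samples."""
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.column_stack([np.interp(t_new, t_old, traj[:, d])
                            for d in range(traj.shape[1])])

# Hypothetical dataset: variable-length 3D trajectories and subject labels
rng = np.random.default_rng(0)
trajectories = [rng.standard_normal((rng.integers(80, 120), 3)).cumsum(axis=0)
                for _ in range(40)]
labels = np.repeat(np.arange(4), 10)               # 4 subjects, 10 gestures each

X = np.array([resample_trajectory(t).ravel() for t in trajectories])
X_pca = PCA(n_components=10).fit_transform(X)      # project onto principal components
clf = KNeighborsClassifier(n_neighbors=3).fit(X_pca, labels)
```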

  15. Dietary Interventions in Multiple Sclerosis: Development and Pilot-Testing of an Evidence Based Patient Education Program.

    Science.gov (United States)

    Riemann-Lorenz, Karin; Eilers, Marlene; von Geldern, Gloria; Schulz, Karl-Heinz; Köpke, Sascha; Heesen, Christoph

    2016-01-01

    Dietary factors have been discussed as possible influences on the risk or disease course of multiple sclerosis (MS). Specific diets are widely used among patients with MS. To design and pilot-test an evidence-based patient education program on dietary factors in MS. We performed a systematic literature search on the effectiveness of dietary interventions in MS. A web-based survey among 337 patients with MS and 136 healthy controls assessed knowledge, dietary habits and information needs. An interactive group education program was developed and pilot-tested. Fifteen randomised controlled trials (RCTs) were included in the systematic review. Quality of evidence was low and no clear benefit could be seen. Patients with MS significantly more often adhered to a 'Mediterranean Diet' (29.7% versus 14.0%). The pilot test of our newly developed patient education program with 13 participants showed excellent comprehensibility, and the MS-specific content was judged as very important. However, the poor evidence base for dietary approaches in MS was perceived as disappointing. Development and pilot-testing of an evidence-based patient education program on nutrition and MS is feasible. Patient satisfaction with the program suffers from the lack of evidence. Further research should focus on generating evidence for the potential influence of lifestyle habits (diet, physical activity) on MS disease course, thus meeting the needs of patients with MS.

  16. On long-only information-based portfolio diversification framework

    Science.gov (United States)

    Santos, Raphael A.; Takada, Hellinton H.

    2014-12-01

    Using concepts from information theory, it is possible to improve the traditional frameworks for long-only asset allocation. In modern portfolio theory, the investor has two basic procedures: the choice of a portfolio that maximizes its risk-adjusted excess return, or the mixed allocation between the maximum Sharpe portfolio and the risk-free asset. In the literature, the first procedure was already addressed using information theory. One contribution of this paper is the consideration of the second procedure in the information theory context. The performance of these approaches was compared with three traditional asset allocation methodologies: Markowitz's mean-variance, the resampled mean-variance, and the equally weighted portfolio. Using simulated and real data, the information theory-based methodologies were verified to be more robust when dealing with estimation errors.
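
    As a point of comparison, the resampled mean-variance allocation mentioned above can be sketched as averaging optimal weights over bootstrap resamples of the return history; the long-only maximum-Sharpe optimizer and the toy returns below are assumptions, not the paper's benchmark implementation.

```python
import numpy as np
from scipy.optimize import minimize

def max_sharpe_weights(mu, cov, rf=0.0):
    """Long-only weights maximizing the Sharpe ratio (numerical SLSQP solve)."""
    n = len(mu)
    def neg_sharpe(w):
        return -(w @ mu - rf) / np.sqrt(w @ cov @ w)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(neg_sharpe, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

def resampled_weights(returns, n_resamples=200, rf=0.0, seed=0):
    """Resampled allocation: average the optimal weights over bootstrap
    resamples of the return history."""
    rng = np.random.default_rng(seed)
    T, n = returns.shape
    weights = np.zeros(n)
    for _ in range(n_resamples):
        sample = returns[rng.integers(0, T, T)]
        weights += max_sharpe_weights(sample.mean(axis=0), np.cov(sample.T), rf)
    return weights / n_resamples

# Toy monthly returns for three assets
rng = np.random.default_rng(1)
rets = rng.multivariate_normal([0.010, 0.008, 0.012],
                               [[0.0040, 0.0010, 0.0000],
                                [0.0010, 0.0030, 0.0005],
                                [0.0000, 0.0005, 0.0050]], size=120)
print(resampled_weights(rets))
```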

  17. Specific Features of Executive Dysfunction in Alzheimer-Type Mild Dementia Based on Computerized Cambridge Neuropsychological Test Automated Battery (CANTAB) Test Results.

    Science.gov (United States)

    Kuzmickienė, Jurgita; Kaubrys, Gintaras

    2016-10-08

    BACKGROUND The primary manifestation of Alzheimer's disease (AD) is decline in memory. Dysexecutive symptoms have tremendous impact on functional activities and quality of life. Data regarding frontal-executive dysfunction in mild AD are controversial. The aim of this study was to assess the presence and specific features of executive dysfunction in mild AD based on Cambridge Neuropsychological Test Automated Battery (CANTAB) results. MATERIAL AND METHODS Fifty newly diagnosed, treatment-naïve, mild, late-onset AD patients (MMSE ≥20, AD group) and 25 control subjects (CG group) were recruited in this prospective, cross-sectional study. The CANTAB tests CRT, SOC, PAL, SWM were used for in-depth cognitive assessment. Comparisons were performed using the t test or Mann-Whitney U test, as appropriate. Correlations were evaluated by Pearson r or Spearman R. Statistical significance was set at p<0.05. RESULTS AD and CG groups did not differ according to age, education, gender, or depression. Few differences were found between groups in the SOC test for performance measures: Mean moves (minimum 3 moves): AD (Rank Sum=2227), CG (Rank Sum=623), p<0.001. However, all SOC test time measures differed significantly between groups: SOC Mean subsequent thinking time (4 moves): AD (Rank Sum=2406), CG (Rank Sum=444), p<0.001. Correlations were weak between executive function (SOC) and episodic/working memory (PAL, SWM) (R=0.01-0.38) or attention/psychomotor speed (CRT) (R=0.02-0.37). CONCLUSIONS Frontal-executive functions are impaired in mild AD patients. Executive dysfunction is highly prominent in time measures, but minimal in performance measures. Executive disorders do not correlate with a decline in episodic and working memory or psychomotor speed in mild AD.
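
    For readers unfamiliar with the comparisons reported here, a minimal example of a Mann-Whitney U test and a Spearman correlation is given below; the group sizes mirror the study, but the values are synthetic placeholders, not CANTAB data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ad = rng.normal(14.0, 4.0, 50)      # hypothetical SOC thinking-time measure, AD group
cg = rng.normal(9.0, 3.0, 25)       # hypothetical control group
pal = rng.normal(10.0, 3.0, 50)     # hypothetical PAL memory score for the AD group

u, p = stats.mannwhitneyu(ad, cg, alternative="two-sided")   # group comparison
rho, p_rho = stats.spearmanr(ad, pal)                        # executive vs memory correlation
print(f"Mann-Whitney U={u:.0f}, p={p:.3g}; Spearman R={rho:.2f} (p={p_rho:.2f})")
```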

  18. NI-Based System for SEU Testing of Memory Chips for Avionics

    Directory of Open Access Journals (Sweden)

    Boruzdina Anna

    2016-01-01

    This paper presents the results of implementing a National Instruments (NI) based system for Single Event Upset (SEU) testing of memory chips at a neutron generator experimental facility used for SEU tests for avionics purposes. A basic SEU testing algorithm with error correction and constant-error detection is presented. The issues of radiation shielding for the NI-based system are discussed and solved. Examples of experimental results show the applicability of the presented system to SEU memory testing under neutron influence.
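
    The abstract does not spell out the algorithm, so the sketch below shows a generic write/read-back SEU test loop with rewrite-based correction and detection of repeatedly failing (constant-error) addresses; the memory interface and the toy faulty device are hypothetical placeholders, not the NI system described above.

```python
from collections import Counter

def seu_test(memory, pattern=0x55, n_cycles=100, stuck_threshold=3):
    """Write a known pattern, read it back over many cycles, count bit upsets,
    rewrite corrupted cells, and flag addresses that fail repeatedly as constant errors.
    `memory` is any object exposing write(addr, byte), read(addr), and size."""
    fail_counts = Counter()
    upsets = 0
    for addr in range(memory.size):
        memory.write(addr, pattern)
    for _ in range(n_cycles):
        for addr in range(memory.size):
            value = memory.read(addr)
            if value != pattern:
                upsets += bin(value ^ pattern).count("1")   # number of flipped bits
                fail_counts[addr] += 1
                memory.write(addr, pattern)                 # rewrite as "error correction"
    constant_errors = [a for a, c in fail_counts.items() if c >= stuck_threshold]
    return upsets, constant_errors

class FaultyMemory:
    """Toy stand-in for the device under test: one stuck-at-0 bit at address 7."""
    size = 64
    def __init__(self):
        self._data = [0] * self.size
    def write(self, addr, byte):
        self._data[addr] = byte & (0xFE if addr == 7 else 0xFF)
    def read(self, addr):
        return self._data[addr]

print(seu_test(FaultyMemory()))
```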

  19. Cernavoda NPP risk-based test and maintenance planning - Methodology development

    International Nuclear Information System (INIS)

    Georgescu, G.; Popa, P.; Petrescu, A.; Naum, M.; Gutu, M.

    1997-01-01

    The Cernavoda Nuclear Power Plant started commercial operation in November 1996. During operation of a nuclear power plant, several mandatory tests and maintenance activities are performed on stand-by safety system components to ensure their availability in case of accident. The basic purpose of such activities is the early detection of any failure and degradation, and the timely correction of deterioration. Because of the large number of such activities, maintaining the emphasis on plant safety and allocating resources become difficult. The probabilistic model and methodology can be used effectively to obtain the risk significance of these activities so that resources are directed to the most important areas. The proposed Research Contract activity is strongly connected with other safety-related areas under development. Since the Cernavoda Probabilistic Safety Evaluation Level 1 PSA Study (CPSE) has been performed and is now being revised to take into account as-built information, it is recommended to implement in the model the features necessary to support further PSA applications, especially those related to Test and Maintenance optimization. Methods need to be developed to apply the PSA model, including risk information together with other needed information, to Test and Maintenance optimization. Also, in parallel with the CPSE study update, the software interface for the PSA model (Risk Monitor class software) is under development; methods and models need to be developed so that it can be used for qualified monitoring of the efficiency of the Test and Maintenance strategy. Similarly, the Data Collection System needs to be appropriate for the ongoing implementation of a risk-based Test and Maintenance strategy. (author). 4 refs, 1 fig

  20. A more powerful test based on ratio distribution for retention noninferiority hypothesis.

    Science.gov (United States)

    Deng, Ling; Chen, Gang

    2013-03-11

    Rothmann et al. (2003) proposed a method for the statistical inference of the fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined in terms of the ratio of the new treatment effect versus the control effect in the context of a time-to-event endpoint. One of the major concerns in using this method in the design of an NI trial is that, with a limited sample size, the power of the study is usually very low. This can make an NI trial impractical, particularly with a time-to-event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption used in Rothmann's test, that the observed control effect is always positive, that is, that the observed hazard ratio for placebo over the control is greater than 1, is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.
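
    To see why a Cauchy-like reference distribution matters here, the short simulation below looks at the ratio of two independent normal effect estimates; the means and standard errors are purely illustrative, and this is a sketch of the ratio's heavy-tailed behaviour, not the article's test statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ratio of two independent normal effect estimates (e.g. log hazard ratios):
# its distribution is heavy-tailed ("Cauchy-like"), especially when the
# denominator's mean is small relative to its standard error.
ctrl = rng.normal(0.3, 0.15, 500_000)      # hypothetical control effect estimates
new = rng.normal(0.2, 0.15, 500_000)       # hypothetical new-treatment effect estimates
ratio = new / ctrl

print(np.quantile(ratio, [0.025, 0.5, 0.975]))   # wide, asymmetric tails
```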