WorldWideScience

Sample records for bootstrap resampling procedure

  1. Orthogonal projections and bootstrap resampling procedures in the study of infraspecific variation

    Directory of Open Access Journals (Sweden)

    Luiza Carla Duarte

    1998-12-01

    The effect of an increase in quantitative continuous characters resulting from indeterminate growth upon the analysis of population differentiation was investigated using, as an example, a set of continuous characters measured as distance variables in 10 populations of a rodent species. The data before and after correction for allometric size effects using orthogonal projections were analyzed with a parametric bootstrap resampling procedure applied to canonical variate analysis. The variance component of the distance measures attributable to indeterminate growth within the populations was found to be substantial, although the ordination of the populations was not affected, as evidenced by the relative and absolute positions of the centroids. The covariance pattern of the distance variables used to infer the nature of the morphological differences was strongly influenced by indeterminate growth. The uncorrected data produced a misleading picture of morphological differentiation by indicating that groups of populations differed only in size. However, the data corrected for allometric effects clearly demonstrated that populations differed morphologically in both size and shape. These results are discussed in terms of the analysis of morphological differentiation among populations and the definition of infraspecific geographic units.

  2. Assessment of bootstrap resampling performance for PET data.

    Science.gov (United States)

    Markiewicz, P J; Reader, A J; Matthews, J C

    2015-01-07

    Bootstrap resampling has been successfully used for estimation of statistical uncertainty of parameters such as tissue metabolism, blood flow or displacement fields for image registration. The performance of bootstrap resampling as applied to PET list-mode data of the human brain and dedicated phantoms is assessed in a novel and systematic way such that: (1) the assessment is carried out in two resampling stages: the 'real world' stage where multiple reference datasets of varying statistical level are generated and the 'bootstrap world' stage where corresponding bootstrap replicates are generated from the reference datasets. (2) All resampled datasets were reconstructed yielding images from which multiple voxel and regions of interest (ROI) values were extracted to form corresponding distributions between the two stages. (3) The difference between the distributions from both stages was quantified using the Jensen-Shannon divergence and the first four moments. It was found that the bootstrap distributions are consistently different to the real world distributions across the statistical levels. The difference was explained by a shift in the mean (up to 33% for voxels and 14% for ROIs) being proportional to the inverse square root of the statistical level (number of counts). Other moments were well replicated by the bootstrap although for very low statistical levels the estimation of the variance was poor. Therefore, the bootstrap method should be used with care when estimating systematic errors (bias) and variance when very low statistical levels are present such as in early time frames of dynamic acquisitions, when the underlying population may not be sufficiently represented.
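
    The two-stage design described above can be illustrated in miniature. The following Python sketch is entirely synthetic (a Poisson count model stands in for PET list-mode data; all rates and sizes are invented): it builds a 'real world' distribution of a statistic from many reference datasets, a 'bootstrap world' distribution from replicates of a single reference dataset, and compares the two with the Jensen-Shannon divergence.

```python
# Toy two-stage check of bootstrap fidelity, loosely following the
# 'real world' / 'bootstrap world' scheme above; a Poisson mean stands
# in for a voxel/ROI value (all numbers invented, not PET data).
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
true_rate, n_events = 5.0, 200           # n_events = 'statistical level'

# Stage 1 ('real world'): many reference datasets from the true model.
real = [rng.poisson(true_rate, n_events).mean() for _ in range(2000)]

# Stage 2 ('bootstrap world'): bootstrap replicates of ONE reference set.
ref = rng.poisson(true_rate, n_events)
boot = [rng.choice(ref, n_events, replace=True).mean() for _ in range(2000)]

# Compare the two distributions: shared histogram bins + JS divergence.
edges = np.histogram_bin_edges(np.r_[real, boot], 30)
p, _ = np.histogram(real, edges, density=True)
q, _ = np.histogram(boot, edges, density=True)
print("JS divergence:", jensenshannon(p, q) ** 2)   # distance squared
print("shift in mean:", np.mean(boot) - np.mean(real))
```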

  3. Application of a New Resampling Method to SEM: A Comparison of S-SMART with the Bootstrap

    Science.gov (United States)

    Bai, Haiyan; Sivo, Stephen A.; Pan, Wei; Fan, Xitao

    2016-01-01

    Among the commonly used resampling methods of dealing with small-sample problems, the bootstrap enjoys the widest applications because it often outperforms its counterparts. However, the bootstrap still has limitations when its operations are contemplated. Therefore, the purpose of this study is to examine an alternative, new resampling method…

  4. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. Some methodologists have questioned the validity of parametric tests in such instances and suggested nonparametric tests; others have found nonparametric tests too conservative and less powerful and thus preferred parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small samples. We used a pooled resampling method in the nonparametric bootstrap test that may overcome the problems associated with small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except Cauchy and extreme variable lognormal distributions; in such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means, and for validating one-way analysis of variance results, for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
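
    The pooled resampling idea, two samples drawn from their common pool so that the null hypothesis of equal distributions holds by construction, is easy to sketch. Below is a minimal Python version for two unpaired means; the Welch t statistic and the toy lognormal data are my own choices, not necessarily those of the paper.

```python
# Two-sample bootstrap t-test with pooled resampling: under H0 both
# groups come from one distribution, so both bootstrap samples are
# drawn from the pooled data and the t statistic is recomputed.
import numpy as np

def welch_t(a, b):
    return (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)

def pooled_bootstrap_t_test(x, y, n_boot=10_000, seed=0):
    rng = np.random.default_rng(seed)
    t_obs = welch_t(x, y)
    pooled = np.concatenate([x, y])
    t_boot = np.array([
        welch_t(rng.choice(pooled, x.size, replace=True),
                rng.choice(pooled, y.size, replace=True))
        for _ in range(n_boot)])
    return t_obs, np.mean(np.abs(t_boot) >= abs(t_obs))  # two-sided p

rng = np.random.default_rng(1)
x = rng.lognormal(0.0, 1.0, 8)       # small, skewed samples
y = rng.lognormal(0.8, 1.0, 8)
print(pooled_bootstrap_t_test(x, y))
```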

  5. Bootstrap resampling approach to disaggregate analysis of road crashes in Hong Kong.

    Science.gov (United States)

    Pei, Xin; Sze, N N; Wong, S C; Yao, Danya

    2016-10-01

    Road safety affects health and development worldwide; thus, it is essential to examine the factors that influence crashes and injuries. As the relationships between crashes, crash severity, and possible risk factors can vary depending on the type of collision, we attempt to develop separate prediction models for different crash types (i.e., single- versus multi-vehicle crashes and slight injury versus killed and serious injury crashes). Taking advantage of the availability of crash and traffic data disaggregated by time and space, it is possible to identify the factors that may contribute to crash risks in Hong Kong, including traffic flow, road design, and weather conditions. To remove the effects of excess zeros on prediction performance in a highly disaggregated crash prediction model, a bootstrap resampling method is applied. The results indicate that more accurate and reliable parameter estimates, with reduced standard errors, can be obtained with the use of a bootstrap resampling method. Results revealed that factors including rainfall, geometric design, traffic control, and temporal variations all determined the crash risk and crash severity. This helps to shed light on the development of remedial engineering and traffic management and control measures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Combining Nordtest method and bootstrap resampling for measurement uncertainty estimation of hematology analytes in a medical laboratory.

    Science.gov (United States)

    Cui, Ming; Xu, Lili; Wang, Huimin; Ju, Shaoqing; Xu, Shuizhu; Jing, Rongrong

    2017-12-01

    Measurement uncertainty (MU) is a metrological concept, which can be used for objectively estimating the quality of test results in medical laboratories. The Nordtest guide recommends an approach that uses both internal quality control (IQC) and external quality assessment (EQA) data to evaluate the MU. Bootstrap resampling is employed to simulate the unknown distribution based on the mathematical statistics method using an existing small sample of data, where the aim is to transform the small sample into a large sample. However, there have been no reports of the utilization of this method in medical laboratories. Thus, this study applied the Nordtest guide approach based on bootstrap resampling for estimating the MU. We estimated the MU for the white blood cell (WBC) count, red blood cell (RBC) count, hemoglobin (Hb), and platelets (Plt). First, we used 6 months of IQC data and 12 months of EQA data to calculate the MU according to the Nordtest method. Second, we combined the Nordtest method and bootstrap resampling with the quality control data and calculated the MU using MATLAB software. We then compared the MU results obtained using the two approaches. The expanded uncertainty results determined for WBC, RBC, Hb, and Plt using the bootstrap resampling method were 4.39%, 2.43%, 3.04%, and 5.92%, respectively, and 4.38%, 2.42%, 3.02%, and 6.00% with the existing quality control data (U, k = 2). For WBC, RBC, Hb, and Plt, the differences between the results obtained using the two methods were lower than 1.33%. The expanded uncertainty values were all less than the target uncertainties. The bootstrap resampling method allows the statistical analysis of the MU. Combining the Nordtest method and bootstrap resampling is considered a suitable alternative method for estimating the MU. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
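
    As a rough illustration of the combination being described, here is a Python sketch of the Nordtest-style formula u_c = sqrt(u_Rw² + u_bias²) with the within-laboratory component taken from bootstrap replicates of IQC results. All numbers are invented, and the bias formula used is one common Nordtest form; treat this as a schematic, not the paper's computation.

```python
# Schematic Nordtest-style MU estimate with a bootstrapped IQC component.
# All data are invented; the bias formula follows the common Nordtest
# form u_bias = sqrt(RMS_bias^2 + u_Cref^2) (an assumption here).
import numpy as np

rng = np.random.default_rng(0)
iqc = rng.normal(7.0, 0.15, 120)           # ~6 months of IQC results
eqa_bias_pct = np.array([1.1, -0.6, 0.9])  # % bias vs EQA target values
u_cref_pct = 0.5                           # uncertainty of EQA targets, %

# Bootstrap the IQC CV% rather than relying on the single observed CV.
cvs = [100 * np.std(rng.choice(iqc, iqc.size, True), ddof=1) / iqc.mean()
       for _ in range(5000)]
u_rw = float(np.mean(cvs))                 # within-lab reproducibility, %

u_bias = np.sqrt(np.mean(eqa_bias_pct ** 2) + u_cref_pct ** 2)
U = 2 * np.sqrt(u_rw ** 2 + u_bias ** 2)   # expanded uncertainty, k = 2
print(f"U (k=2) = {U:.2f}%")
```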

  7. The use of bootstrap resampling to assess the variability of Draize tissue scores.

    Science.gov (United States)

    Worth, A P; Cronin, M T

    2001-01-01

    The acute dermal and ocular effects of chemicals are generally assessed by performing the Draize skin and eye tests, respectively. Because the animal data obtained in these tests are also used for the development and validation of alternative methods for skin and eye irritation, it is important to assess the inherent variability of the animal data, since this variability places an upper limit on the predictive performance that can be expected of any alternative model. The statistical method of bootstrap resampling was used to estimate the variability arising from the use of different animals and time-points, and the estimates of variability were used to determine the maximal extent to which Draize test tissue scores can be predicted.
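
    Resampling animals with replacement and recomputing the score summary is the core of such an analysis. A minimal, hypothetical Python sketch follows (the scores are invented; the real study used full Draize scoring schemes and multiple time-points).

```python
# Percentile-bootstrap variability of a mean tissue score, resampling
# over animals (invented scores; real Draize data are more structured).
import numpy as np

rng = np.random.default_rng(0)
scores = np.array([2, 1, 2, 3, 1, 2])        # one score per animal
boot = [rng.choice(scores, scores.size, replace=True).mean()
        for _ in range(10_000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean {scores.mean():.2f}, 95% bootstrap interval ({lo:.2f}, {hi:.2f})")
```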

  8. Assessment of SPM in perfusion brain SPECT studies. A numerical simulation study using bootstrap resampling methods.

    Science.gov (United States)

    Pareto, Deborah; Aguiar, Pablo; Pavía, Javier; Gispert, Juan Domingo; Cot, Albert; Falcón, Carles; Benabarre, Antoni; Lomeña, Francisco; Vieta, Eduard; Ros, Domènec

    2008-07-01

    Statistical parametric mapping (SPM) has become the technique of choice to statistically evaluate positron emission tomography (PET), functional magnetic resonance imaging (fMRI), and single photon emission computed tomography (SPECT) functional brain studies. Nevertheless, only a few methodological studies have been carried out to assess the performance of SPM in SPECT. The aim of this paper was to study the performance of SPM in detecting changes in regional cerebral blood flow (rCBF) in hypo- and hyperperfused areas in brain SPECT studies. The paper seeks to determine the relationship between the group size and the rCBF changes, and the influence of the correction for degradations. The assessment was carried out using simulated brain SPECT studies. Projections were obtained with Monte Carlo techniques, and a fan-beam collimator was considered in the simulation process. Reconstruction was performed by using the ordered subsets expectation maximization (OSEM) algorithm with and without compensation for attenuation, scattering, and spatially variant collimator response. Significance probability maps were obtained with SPM2 by using a one-tailed two-sample t-test. A bootstrap resampling approach was used to determine the sample size for SPM to detect the between-group differences. Our findings show that the correction for degradations results in a diminution of the sample size, which is more significant for small regions and low-activation factors. Differences in sample size were found between hypo- and hyperperfusion. These differences were larger for small regions and low-activation factors, and when no corrections were included in the reconstruction algorithm.

  9. What Teachers Should Know About the Bootstrap: Resampling in the Undergraduate Statistics Curriculum

    Science.gov (United States)

    Hesterberg, Tim C.

    2015-01-01

    Bootstrapping has enormous potential in statistics education and practice, but there are subtle issues and ways to go wrong. For example, the common combination of nonparametric bootstrapping and bootstrap percentile confidence intervals is less accurate than using t-intervals for small samples, though more accurate for larger samples. My goals in this article are to provide a deeper understanding of bootstrap methods—how they work, when they work or not, and which methods work better—and to highlight pedagogical issues. Supplementary materials for this article are available online. [Received December 2014. Revised August 2015] PMID:27019512

  10. Resampling procedures to validate dendro-auxometric regression models

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Regression analysis is widely used in several sectors of forest research. The validation of a dendro-auxometric model is a basic step in the building of the model itself: the more a model resists attempts to demonstrate its groundlessness, the more its reliability increases. In recent decades many new methods that exploit the computational speed of modern computers have been formulated. Here we show the results obtained by applying a bootstrap resampling procedure as a validation tool.

  11. A resampling strategy based on bootstrap to reduce the effect of large blunders in GPS absolute positioning

    Science.gov (United States)

    Angrisano, Antonio; Maratea, Antonio; Gaglione, Salvatore

    2018-01-01

    In the absence of obstacles, a GPS device is generally able to provide continuous and accurate estimates of position, while in urban scenarios buildings can generate multipath and echo-only phenomena that severely affect the continuity and the accuracy of the provided estimates. Receiver autonomous integrity monitoring (RAIM) techniques are able to reduce the negative consequences of large blunders in urban scenarios, but require both good redundancy and low contamination to be effective. In this paper a resampling strategy based on bootstrap is proposed as an alternative to RAIM, in order to estimate position accurately in the case of low redundancy and multiple blunders: starting with the pseudorange measurement model, at each epoch the available measurements are bootstrapped—that is, randomly sampled with replacement—and the generated a posteriori empirical distribution is exploited to derive the final position. Compared to the standard bootstrap, here the sampling probabilities are not uniform but vary according to an indicator of measurement quality. The proposed method has been compared with two different RAIM techniques on a data set collected in critical conditions, resulting in a clear improvement on all considered figures of merit.
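
    The key twist, sampling probabilities tied to a quality indicator, can be shown in one dimension. The sketch below is a toy analogue only: scalar "positions" instead of pseudorange equations, and an invented quality weight in place of a real C/N0- or residual-based indicator.

```python
# Toy 1-D analogue of the quality-weighted bootstrap: draw measurements
# with probability proportional to a quality score, so blunders are
# rarely resampled; the replicate estimates form the final distribution.
import numpy as np

rng = np.random.default_rng(0)
meas = 100.0 + rng.normal(0, 1, 8)        # 8 measurements of a position
meas[:2] += 40.0                          # two multipath-like blunders
quality = np.ones(8)
quality[:2] = 0.1                         # invented quality indicator
p = quality / quality.sum()

boots = [rng.choice(meas, meas.size, replace=True, p=p).mean()
         for _ in range(5000)]
print("weighted-bootstrap estimate:", np.median(boots))
print("contaminated plain mean   :", meas.mean())
```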

  12. Bootstrap resampling to detect active zone for extreme rainfall in Indonesia

    Science.gov (United States)

    Kuswanto, Heri; Hidayati, Sri; Salamah, Mutiah; Sutijo Ulama, Brodjol

    2017-10-01

    This research aims to develop a methodology to detect active zones related to extreme rainfall events in a region of Indonesia. An active zone is defined as a region or zone at the atmospheric level whose pattern differs significantly from that of other regions prior to the occurrence of an extreme event in the study area. The detection will be useful for forecasters in predicting extreme rainfall events, and hence the risk of disasters caused by such events can be minimized. In order to identify active zones, this paper examines a statistical procedure that tests for significant differences in atmospheric conditions between the onset of an extreme event and the period prior to its occurrence.

  13. A Bootstrap Procedure of Propensity Score Estimation

    Science.gov (United States)

    Bai, Haiyan

    2013-01-01

    Propensity score estimation plays a fundamental role in propensity score matching for reducing group selection bias in observational data. To increase the accuracy of propensity score estimation, the author developed a bootstrap propensity score. The commonly used propensity score matching methods: nearest neighbor matching, caliper matching, and…

  14. Resampling methods for evaluating classification accuracy of wildlife habitat models

    Science.gov (United States)

    Verbyla, David L.; Litvaitis, John A.

    1989-11-01

    Predictive models of wildlife-habitat relationships often have been developed without being tested. The apparent classification accuracy of such models can be optimistically biased and misleading. Data resampling methods exist that yield a more realistic estimate of model classification accuracy. These methods are simple and require no new sample data. We illustrate these methods (cross-validation, jackknife resampling, and bootstrap resampling) with computer simulation to demonstrate the increase in precision of the estimate. The bootstrap method is then applied to field data as a technique for model comparison. We recommend that biologists use some resampling procedure to evaluate wildlife habitat models prior to field evaluation.
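
    The contrast between apparent and resampled accuracy is easy to reproduce. Below is a hypothetical Python sketch with a nearest-centroid "habitat model" on synthetic presence/absence data; it compares the optimistic apparent accuracy with a leave-one-out cross-validated estimate, one of the resampling procedures the paper illustrates.

```python
# Apparent vs cross-validated accuracy for a toy habitat model
# (nearest-centroid classifier; data and model are invented).
import numpy as np

rng = np.random.default_rng(0)
X = np.r_[rng.normal(0, 1, (20, 2)), rng.normal(1, 1, (20, 2))]
y = np.r_[np.zeros(20), np.ones(20)]          # 0 = absent, 1 = present

def fit_predict(Xtr, ytr, Xte):
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    return (((Xte - c1) ** 2).sum(1) < ((Xte - c0) ** 2).sum(1)).astype(float)

apparent = (fit_predict(X, y, X) == y).mean()       # optimistically biased
loo = np.mean([fit_predict(np.delete(X, i, 0), np.delete(y, i), X[i:i + 1])[0]
               == y[i] for i in range(y.size)])     # leave-one-out CV
print(f"apparent accuracy {apparent:.2f}, cross-validated {loo:.2f}")
```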

  15. On using the bootstrap for multiple comparisons.

    Science.gov (United States)

    Westfall, Peter H

    2011-11-01

    There are many ways to bootstrap data for multiple comparisons procedures. Methods described here include (i) bootstrap (parametric and nonparametric) as a generalization of classical normal-based MaxT methods, (ii) bootstrap as an approximation to exact permutation methods, (iii) bootstrap as a generator of realistic null data sets, and (iv) bootstrap as a generator of realistic non-null data sets. Resampling of MinP versus MaxT is discussed, and the use of the bootstrap for closed testing is also presented. Applications to biopharmaceutical statistics are given.
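
    Variant (i), the bootstrap MaxT procedure, can be sketched compactly: center each group so the null holds, bootstrap the maximum absolute t statistic, and read adjusted p-values off that null distribution. A minimal single-step version in Python follows (synthetic data; the centering scheme is one standard choice, not the only one).

```python
# Single-step bootstrap MaxT adjusted p-values for several treatments
# vs a control (synthetic data; groups are centered to impose H0).
import numpy as np

rng = np.random.default_rng(0)
groups = [rng.normal(m, 1, 15) for m in (0.0, 0.2, 0.8, 1.0)]  # control first

def t_stats(gs):
    c = gs[0]
    return np.array([(g.mean() - c.mean()) /
                     np.sqrt(g.var(ddof=1) / g.size + c.var(ddof=1) / c.size)
                     for g in gs[1:]])

t_obs = t_stats(groups)
centered = [g - g.mean() for g in groups]          # impose the null
max_t = np.array([np.abs(t_stats([rng.choice(g, g.size, True)
                                  for g in centered])).max()
                  for _ in range(5000)])
adj_p = [(max_t >= abs(t)).mean() for t in t_obs]  # FWER-adjusted p-values
print(np.round(adj_p, 4))
```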

  16. Conditional Monthly Weather Resampling Procedure for Operational Seasonal Water Resources Forecasting

    Science.gov (United States)

    Beckers, J.; Weerts, A.; Tijdeman, E.; Welles, E.; McManamon, A.

    2013-12-01

    To provide reliable and accurate seasonal streamflow forecasts for water resources management, several operational hydrologic agencies and hydropower companies around the world use the Extended Streamflow Prediction (ESP) procedure. The ESP in its original implementation does not accommodate any additional information that the forecaster may have about expected deviations from climatology in the near future. Several attempts have been made to improve the skill of the ESP forecast, especially for areas affected by teleconnections (e.g., ENSO, PDO), via selection (Hamlet and Lettenmaier, 1999) or weighting schemes (Werner et al., 2004; Wood and Lettenmaier, 2006; Najafi et al., 2012). A disadvantage of such schemes is that they lead to a reduction of the signal-to-noise ratio of the probabilistic forecast. To overcome this, we propose a resampling method conditional on climate indices to generate the meteorological time series used in the ESP. The method can generate a large number of meteorological ensemble members in order to improve the statistical properties of the ensemble. Its effectiveness was demonstrated in a real-time operational hydrologic seasonal forecast system for the Columbia River basin operated by the Bonneville Power Administration. The forecast skill of the k-nn resampler was tested against the original ESP for three basins at the long-range seasonal time scale, using the BSS and CRPSS to compare the results to those of the original ESP method. Positive forecast skill scores were found for the resampler method conditioned on different indices for the prediction of spring peak flows in the Dworshak and Hungry Horse basins. For the Libby Dam basin, however, no improvement of skill was found. The proposed resampling method is a promising practical approach that can add skill to ESP forecasts at the seasonal time scale. Further improvement is possible by fine-tuning the method and selecting the most suitable climate indices.
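
    A k-nn resampler of this kind is straightforward to sketch: find the k historical years whose climate index is closest to the current value and sample weather traces from them, favoring closer analogues. The Python below is a generic illustration with invented data and a common 1/rank kernel; it is not the BPA implementation.

```python
# Sketch of k-nn weather resampling conditioned on a climate index:
# sample historical 'analogue years' near the current index value
# (index values, traces, k, and the 1/rank weights are all assumptions).
import numpy as np

rng = np.random.default_rng(0)
hist_index = rng.normal(0, 1, 40)            # e.g. an ENSO index, 40 years
hist_weather = rng.normal(0, 1, (40, 90))    # a 90-day trace per year

def knn_resample(current_index, k=7, n_members=50):
    nearest = np.argsort(np.abs(hist_index - current_index))[:k]
    w = 1.0 / np.arange(1, k + 1)            # closer analogues drawn more
    years = rng.choice(nearest, n_members, replace=True, p=w / w.sum())
    return hist_weather[years]               # ensemble of weather traces

print(knn_resample(current_index=1.5).shape)   # (50, 90) ensemble
```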

  17. The PIT-trap-A "model-free" bootstrap procedure for inference about regression models with discrete, multivariate responses.

    Science.gov (United States)

    Warton, David I; Thibaut, Loïc; Wang, Yi Alice

    2017-01-01

    Bootstrap methods are widely used in statistics, and bootstrapping of residuals can be especially useful in the regression context. However, difficulties are encountered extending residual resampling to regression settings where residuals are not identically distributed (thus not amenable to bootstrapping)—common examples including logistic or Poisson regression and generalizations to handle clustered or multivariate data, such as generalised estimating equations. We propose a bootstrap method based on probability integral transform (PIT-) residuals, which we call the PIT-trap, which assumes data come from some marginal distribution F of known parametric form. This method can be understood as a type of "model-free bootstrap", adapted to the problem of discrete and highly multivariate data. PIT-residuals have the key property that they are (asymptotically) pivotal. The PIT-trap thus inherits the key property, not afforded by any other residual resampling approach, that the marginal distribution of data can be preserved under PIT-trapping. This in turn enables the derivation of some standard bootstrap properties, including second-order correctness of pivotal PIT-trap test statistics. In multivariate data, bootstrapping rows of PIT-residuals affords the property that it preserves correlation in data without the need for it to be modelled, a key point of difference as compared to a parametric bootstrap. The proposed method is illustrated on an example involving multivariate abundance data in ecology, and demonstrated via simulation to have improved properties as compared to competing resampling methods. PMID:28738071
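
    For a single-covariate Poisson regression, the PIT-trap cycle (probability integral transform of each count, resample the resulting residuals, invert through the fitted quantile function, refit) can be sketched as follows. This toy version is my own reading of the mechanism, with a hand-rolled Newton fit and invented data.

```python
# Rough PIT-trap sketch for Poisson regression: counts -> randomized PIT
# residuals -> resample rows -> back-transform via the fitted quantile
# function -> refit (toy version; data and fitting loop are invented).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
n = 150
x = rng.uniform(-1, 1, n)
y = rng.poisson(np.exp(0.5 + 1.0 * x))
X = np.c_[np.ones(n), x]

def fit_poisson(X, y, iters=30):             # Newton-Raphson, log link
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(np.clip(X @ b, -20, 20))
        b += np.linalg.solve((X.T * mu) @ X, X.T @ (y - mu))
    return b

mu_hat = np.exp(X @ fit_poisson(X, y))
slopes = np.empty(500)
for r in range(500):
    # Randomized PIT residual: uniform on [F(y-1), F(y)] under the fit.
    u = poisson.cdf(y - 1, mu_hat) + rng.uniform(size=n) * poisson.pmf(y, mu_hat)
    u_star = rng.choice(u, n, replace=True)            # resample residuals
    y_star = poisson.ppf(u_star, mu_hat)               # back-transform
    slopes[r] = fit_poisson(X, y_star)[1]
print("PIT-trap SE of slope:", slopes.std(ddof=1))
```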

  18. The wild tapered block bootstrap

    DEFF Research Database (Denmark)

    Hounyo, Ulrich

    In this paper, a new resampling procedure, called the wild tapered block bootstrap, is introduced as a means of calculating standard errors of estimators and constructing confidence regions for parameters based on dependent heterogeneous data. The method consists in tapering each overlapping block … -based method in terms of asymptotic accuracy of variance estimation and distribution approximation. For stationary time series, the asymptotic validity, and the favorable bias properties of the new bootstrap method are shown in two important cases: smooth functions of means, and M-estimators. The first-order asymptotic validity of the tapered block bootstrap as well as the wild tapered block bootstrap approximation to the actual distribution of the sample mean is also established when data are assumed to satisfy a near epoch dependent condition. The consistency of the bootstrap variance estimator for the sample …

  1. Bootstrap Determination of the Co-integration Rank in Heteroskedastic VAR Models

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A.M.Robert

    In a recent paper Cavaliere et al. (2012) develop bootstrap implementations of the (pseudo-) likelihood ratio [PLR] co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates of the underlying VAR model which obtain under the reduced rank null hypothesis. They propose methods based on an i.i.d. bootstrap re-sampling scheme and establish the validity of their proposed bootstrap procedures in the context of a co-integrated VAR model with i.i.d. innovations. In this paper we investigate the properties of their bootstrap procedures, together with analogous procedures based on a wild bootstrap re-sampling scheme, when time-varying behaviour is present in either the conditional or unconditional variance of the innovations. We show that the bootstrap PLR tests are asymptotically correctly sized and …

  2. Resampling-based multiple comparison procedure with application to point-wise testing with functional data.

    Science.gov (United States)

    Vsevolozhskaya, Olga A; Greenwood, Mark C; Powell, Scott L; Zaykin, Dmitri V

    2015-03-01

    In this paper we describe a coherent multiple testing procedure for correlated test statistics such as are encountered in functional linear models. The procedure makes use of two different p-value combination methods: the Fisher combination method and the Šidák correction-based method. P-values for Fisher's and Šidák's test statistics are estimated through resampling to cope with the correlated tests. Building upon these two existing combination methods, we propose the smallest p-value as a new test statistic for each hypothesis. The closure principle is incorporated along with the new test statistic to obtain the overall p-value and appropriately adjust the individual p-values. Furthermore, a shortcut version of the proposed procedure is detailed, so that individual adjustments can be obtained even for a large number of tests. The motivation for developing the procedure comes from a problem of point-wise inference with smooth functional data, where tests at neighboring points are related. A simulation study verifies that the methodology performs well in this setting. We illustrate the proposed method with data from a study on the aerial detection of the spectral effect of below-ground carbon dioxide leakage on vegetation stress via spectral responses.

  3. A Smooth Bootstrap Procedure towards Deriving Confidence Intervals for the Relative Risk.

    Science.gov (United States)

    Wang, Dongliang; Hutson, Alan D

    Given a pair of sample estimators of two independent proportions, bootstrap methods are a common strategy towards deriving the associated confidence interval for the relative risk. We develop a new smooth bootstrap procedure, which generates pseudo-samples from a continuous quantile function. Under a variety of settings, our simulation studies show that our method possesses a better or equal performance in comparison with asymptotic theory based and existing bootstrap methods, particularly for heavily unbalanced data in terms of coverage probability and power. We illustrate our procedure as applied to several published data sets.

  4. Bootstrap Determination of the Co-Integration Rank in Heteroskedastic VAR Models

    DEFF Research Database (Denmark)

    Cavaliere, G.; Rahbek, Anders; Taylor, A.M.R.

    2014-01-01

    In a recent paper Cavaliere et al. (2012) develop bootstrap implementations of the (pseudo-) likelihood ratio (PLR) co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates of the underlying vector autoregressive (VAR) model which obtain under the reduced rank null hypothesis. They propose methods based on an independent and identically distributed (i.i.d.) bootstrap resampling scheme and establish the validity of their proposed bootstrap procedures in the context of a co-integrated VAR model with i.i.d. innovations. In this paper we investigate the properties of their bootstrap procedures, together with analogous procedures based on a wild bootstrap resampling scheme, when time-varying behavior is present in either the conditional or unconditional variance of the innovations. We …

  5. Wild bootstrap versus moment-oriented bootstrap

    OpenAIRE

    Sommerfeld, Volker

    1997-01-01

    We investigate the relative merits of a “moment-oriented” bootstrap method of Bunke (1997) in comparison with the classical wild bootstrap of Wu (1986) in nonparametric heteroscedastic regression situations. The “moment-oriented” bootstrap is a wild bootstrap based on local estimators of higher order error moments that are smoothed by kernel smoothers. In this paper we perform an asymptotic comparison of these two different bootstrap procedures. We show that the moment-oriented bootstrap is in ...
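
    The classical wild bootstrap mentioned here admits a very short implementation: keep each point's own residual but flip its sign at random, so heteroskedasticity is preserved without being modelled. A minimal sketch with Rademacher multipliers (one of several multiplier choices) on invented data follows.

```python
# Wild bootstrap for a heteroskedastic regression: each residual is
# multiplied by an independent Rademacher draw, preserving the
# point-wise error variance (synthetic data; OLS via lstsq).
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 + x)           # variance grows with x
X = np.c_[np.ones(n), x]

beta = np.linalg.lstsq(X, y, rcond=None)[0]
fitted, resid = X @ beta, y - X @ beta

slopes = np.empty(4000)
for b in range(4000):
    v = rng.choice([-1.0, 1.0], n)                   # Rademacher multipliers
    y_star = fitted + resid * v
    slopes[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]
print("wild-bootstrap SE of slope:", slopes.std(ddof=1))
```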

  6. A bootstrap procedure to select hyperspectral wavebands related to tannin content

    NARCIS (Netherlands)

    Ferwerda, J.G.; Skidmore, A.K.; Stein, A.

    2006-01-01

    Detection of hydrocarbons in plants with hyperspectral remote sensing is hampered by overlapping absorption pits, while the `optimal' wavebands for detecting some surface characteristics (e.g. chlorophyll, lignin, tannin) may shift. We combined a phased regression with a bootstrap procedure to find

  7. Bootstrap for the case-cohort design.

    Science.gov (United States)

    Huang, Yijian

    2014-06-01

    The case-cohort design facilitates economical investigation of risk factors in a large survival study, with covariate data collected only from the cases and a simple random subset of the full cohort. Methods that accommodate the design have been developed for various semiparametric models, but most inference procedures are based on asymptotic distribution theory. Such inference can be cumbersome to derive and implement, and does not permit confidence band construction. While bootstrap is an obvious alternative, how to resample is unclear because of complications from the two-stage sampling design. We establish an equivalent sampling scheme, and propose a novel and versatile nonparametric bootstrap for robust inference with an appealingly simple single-stage resampling. Theoretical justification and numerical assessment are provided for a number of procedures under the proportional hazards model.

  8. Introductory statistics and analytics a resampling perspective

    CERN Document Server

    Bruce, Peter C

    2014-01-01

    Concise, thoroughly class-tested primer that features basic statistical concepts in the context of analytics, resampling, and the bootstrap. A uniquely developed presentation of key statistical topics, Introductory Statistics and Analytics: A Resampling Perspective provides an accessible approach to statistical analytics, resampling, and the bootstrap for readers with various levels of exposure to basic probability and statistics. Originally class-tested at one of the first online learning companies in the discipline, www.statistics.com, the book primarily focuses on application

  9. Efficient p-value evaluation for resampling-based tests

    KAUST Repository

    Yu, K.

    2011-01-05

    The resampling-based test, which often relies on permutation or bootstrap procedures, has been widely used for statistical hypothesis testing when the asymptotic distribution of the test statistic is unavailable or unreliable. It requires repeated calculations of the test statistic on a large number of simulated data sets for its significance level assessment, and thus it could become very computationally intensive. Here, we propose an efficient p-value evaluation procedure by adapting the stochastic approximation Markov chain Monte Carlo algorithm. The new procedure can be used easily for estimating the p-value for any resampling-based test. We show through numeric simulations that the proposed procedure can be 100-500 000 times as efficient (in terms of computing time) as the standard resampling-based procedure when evaluating a test statistic with a small p-value (e.g. less than 10^(-6)). With its computational burden reduced by this proposed procedure, the versatile resampling-based test would become computationally feasible for a much wider range of applications. We demonstrate the application of the new method by applying it to a large-scale genetic association study of prostate cancer.

  10. Weighted bootstrapping: a correction method for assessing the robustness of phylogenetic trees

    Directory of Open Access Journals (Sweden)

    Makarenkov Vladimir

    2010-08-01

    Background: Non-parametric bootstrapping is a widely-used statistical procedure for assessing confidence of model parameters based on the empirical distribution of the observed data [1] and, as such, it has become a common method for assessing tree confidence in phylogenetics [2]. Traditional non-parametric bootstrapping does not weigh each tree inferred from resampled (i.e., pseudo-replicated) sequences. Hence, the quality of these trees is not taken into account when computing bootstrap scores associated with the clades of the original phylogeny. As a consequence, trees with different bootstrap support, or those providing a different fit to the corresponding pseudo-replicated sequences (the fit quality can be expressed through the LS, ML or parsimony score), traditionally contribute in the same way to the computation of the bootstrap support of the original phylogeny. Results: In this article, we discuss the idea of applying weighted bootstrapping to phylogenetic reconstruction by weighting each phylogeny inferred from resampled sequences. Tree weights can be based either on the least-squares (LS) tree estimate or on the average secondary bootstrap score (SBS) associated with each resampled tree. Secondary bootstrapping consists of the estimation of bootstrap scores of the trees inferred from resampled data. The LS- and SBS-based bootstrapping procedures were designed to take into account the quality of each "pseudo-replicated" phylogeny in the final tree estimation. A simulation study was carried out to evaluate the performance of the five weighting strategies, which are as follows: LS- and SBS-based bootstrapping, LS- and SBS-based bootstrapping with data normalization, and the traditional unweighted bootstrapping. Conclusions: The simulations conducted with two real data sets and the five weighting strategies suggest that the SBS-based bootstrapping with data normalization usually exhibits larger bootstrap scores and a higher robustness

  11. The Local Fractional Bootstrap

    DEFF Research Database (Denmark)

    Bennedsen, Mikkel; Hounyo, Ulrich; Lunde, Asger

    … new resampling method, the local fractional bootstrap, relies on simulating an auxiliary fractional Brownian motion that mimics the fine properties of high frequency differences of the Brownian semistationary process under the null hypothesis. We prove the first order validity of the bootstrap method … to two empirical data sets: we assess the roughness of a time series of high-frequency asset prices and we test the validity of Kolmogorov's scaling law in atmospheric turbulence data.

  12. Bootstrap-based procedures for inference in nonparametric receiver-operating characteristic curve regression analysis.

    Science.gov (United States)

    Rodríguez-Álvarez, María Xosé; Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Tahoces, Pablo G

    2018-03-01

    Prior to using a diagnostic test in a routine clinical setting, the rigorous evaluation of its diagnostic accuracy is essential. The receiver-operating characteristic curve is the measure of accuracy most widely used for continuous diagnostic tests. However, the possible impact of extra information about the patient (or even the environment) on diagnostic accuracy also needs to be assessed. In this paper, we focus on an estimator for the covariate-specific receiver-operating characteristic curve based on direct regression modelling and nonparametric smoothing techniques. This approach defines the class of generalised additive models for the receiver-operating characteristic curve. The main aim of the paper is to offer new inferential procedures for testing the effect of covariates on the conditional receiver-operating characteristic curve within the above-mentioned class. Specifically, two different bootstrap-based tests are suggested to check (a) the possible effect of continuous covariates on the receiver-operating characteristic curve and (b) the presence of factor-by-curve interaction terms. The validity of the proposed bootstrap-based procedures is supported by simulations. To facilitate the application of these new procedures in practice, an R-package, known as npROCRegression, is provided and briefly described. Finally, data derived from a computer-aided diagnostic system for the automatic detection of tumour masses in breast cancer is analysed.

  13. A Local Stable Bootstrap for Power Variations of Pure-Jump Semimartingales and Activity Index Estimation

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Varneskov, Rasmus T.

    We provide a new resampling procedure - the local stable bootstrap - that is able to mimic the dependence properties of realized power variations for pure-jump semimartingales observed at different frequencies. This allows us to propose a bootstrap estimator and inference procedure for the activity index of the underlying process, β, as well as a bootstrap test for whether it obeys a jump-diffusion or a pure-jump process, that is, of the null hypothesis H₀: β = 2 against the alternative H₁: β < 2. … bootstrap power variations, activity index estimator, and diffusion test for H₀. Moreover, the finite sample size and power properties of the proposed diffusion test are compared to those of benchmark tests using Monte Carlo simulations. Unlike existing procedures, our bootstrap test is correctly sized in general settings. Finally, we illustrate use …

  14. Climate time series analysis classical statistical and bootstrap methods

    CERN Document Server

    Mudelsee, Manfred

    2010-01-01

    This book presents bootstrap resampling as a computationally intensive method able to meet the challenges posed by the complexities of analysing climate data. It shows how the bootstrap performs reliably in the most important statistical estimation techniques.

  15. Bootstrap, Wild Bootstrap and Generalized Bootstrap

    OpenAIRE

    Mammen, Enno

    1995-01-01

    Some modifications and generalizations of the bootstrap procedure have been proposed. In this note we will consider the wild bootstrap and the generalized bootstrap and we will give two arguments why it makes sense to use these modifications instead of the original bootstrap. The first argument is that there exist examples where generalized and wild bootstrap work, but where the original bootstrap fails and breaks down. The second argument will be based on higher order considerations. We will show...

  16. Extending Bootstrap

    CERN Document Server

    Niska, Christoffer

    2014-01-01

    Practical and instruction-based, this concise book will take you from understanding what Bootstrap is, to creating your own Bootstrap theme in no time! If you are an intermediate front-end developer or designer who wants to learn the secrets of Bootstrap, this book is perfect for you.

  17. Resampling methods in Microsoft Excel® for estimating reference intervals

    Science.gov (United States)

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating 2.5 and 97.5 percentiles.
The purpose of this paper is to introduce the reader to resampling estimation techniques in general and in using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular.
Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is of the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
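
    The computation sketched in the paper (nonparametric percentiles plus bootstrap confidence limits for each interval endpoint) ports directly to any language; a compact Python rendering on invented, skewed data follows. The interpolation rule used by np.percentile may differ slightly from the Excel functions discussed, so the numbers are illustrative only.

```python
# Nonparametric reference interval with bootstrap confidence limits
# (invented lognormal 'reference sample' of 40 subjects).
import numpy as np

rng = np.random.default_rng(0)
ref = rng.lognormal(1.0, 0.35, 40)

ri = np.percentile(ref, [2.5, 97.5])        # the reference interval
boot = np.array([np.percentile(rng.choice(ref, ref.size, True), [2.5, 97.5])
                 for _ in range(1000)])     # 500-1000 resamples, as advised
print("RI:", ri)
print("95% CI of lower limit:", np.percentile(boot[:, 0], [2.5, 97.5]))
print("95% CI of upper limit:", np.percentile(boot[:, 1], [2.5, 97.5]))
```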

  18. The bootstrap and Bayesian bootstrap method in assessing bioequivalence

    International Nuclear Information System (INIS)

    Wan Jianping; Zhang Kongsheng; Chen Hui

    2009-01-01

    Parametric methods for assessing individual bioequivalence (IBE) may rely on the assumption that the PK responses are normal. A nonparametric method for evaluating IBE is the bootstrap method. In 2001, the United States Food and Drug Administration (FDA) proposed a draft guidance. The purpose of this article is to evaluate the IBE between test drug and reference drug by the bootstrap and Bayesian bootstrap methods. We study the power of bootstrap test procedures and of the parametric test procedures in FDA (2001). We find that the Bayesian bootstrap method performs best.

  19. BOOTSTRAP-BASED INFERENCE FOR GROUPED DATA

    Directory of Open Access Journals (Sweden)

    Jorge Iván Vélez

    2015-07-01

    Grouped data refers to continuous variables that are partitioned into intervals, not necessarily of the same length, to facilitate interpretation. Unlike with ungrouped data, estimating simple summary statistics such as the mean and mode, or more complex ones such as a percentile or the coefficient of variation, is a difficult endeavour with grouped data. When the probability distribution generating the data is unknown, inference for ungrouped data is carried out using parametric or nonparametric resampling methods; however, there are no equivalent methods for grouped data. Here, a bootstrap-based procedure to estimate the parameters of an unknown distribution based on grouped data is proposed, described and illustrated.
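
    One way to realize such a procedure: resample intervals with probability proportional to their counts, then draw a value uniformly within each sampled interval. The uniform-within-bin assumption is mine; other within-interval models are possible. A short Python sketch:

```python
# Bootstrap for grouped data: draw bins ~ observed frequencies, then a
# uniform value inside each bin (uniform-within-bin is an assumption).
import numpy as np

rng = np.random.default_rng(0)
edges = np.array([0.0, 10, 20, 30, 50])    # interval bounds, unequal widths
counts = np.array([12, 30, 25, 8])         # observed frequencies
n, p = counts.sum(), counts / counts.sum()

means = np.empty(5000)
for b in range(5000):
    bins = rng.choice(counts.size, n, replace=True, p=p)
    vals = rng.uniform(edges[bins], edges[bins + 1])
    means[b] = vals.mean()
print("mean:", means.mean(), "95% CI:", np.percentile(means, [2.5, 97.5]))
```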

  1. Resampling Approach for Determination of the Method for Reference Interval Calculation in Clinical Laboratory Practice

    Science.gov (United States)

    Pavlov, Igor Y.; Wilson, Andrew R.; Delgado, Julio C.

    2010-01-01

    Reference intervals (RI) play a key role in clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation—parametric, transformed parametric, and quantile-based bootstrapping—were applied to multiple random samples drawn from 81 complement factor B observations and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could reach 20% or even more. The transformed parametric method was found to be the best method for the calculation of RI of non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For a simulated Gaussian population, parametric calculations, as expected, were the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach could help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included. PMID:20554803

  2. Moving Block Bootstrap for Analyzing Longitudinal Data.

    Science.gov (United States)

    Ju, Hyunsu

    In a longitudinal study subjects are followed over time. I focus on a case where the number of replications over time is large relative to the number of subjects in the study. I investigate the use of moving block bootstrap methods for analyzing such data. Asymptotic properties of the bootstrap methods in this setting are derived. The effectiveness of these resampling methods is also demonstrated through a simulation study.
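
    The mechanics, overlapping blocks resampled whole so that short-range dependence survives inside each block, fit in a few lines. A minimal Python sketch on a synthetic AR(1) series follows (block length and series are invented choices, not taken from the paper).

```python
# Moving block bootstrap for the SE of a mean: resample overlapping
# blocks of length l so serial dependence is preserved within blocks.
import numpy as np

rng = np.random.default_rng(0)
n, l = 500, 25
e = rng.normal(size=n)
ts = np.empty(n); ts[0] = e[0]
for t in range(1, n):
    ts[t] = 0.6 * ts[t - 1] + e[t]          # AR(1): autocorrelated data

blocks = np.lib.stride_tricks.sliding_window_view(ts, l)  # all n-l+1 blocks
k = n // l                                  # blocks per replicate series
boot_means = [np.concatenate(blocks[rng.integers(0, len(blocks), k)]).mean()
              for _ in range(4000)]
print("MBB SE of mean:", np.std(boot_means, ddof=1))
print("naive iid SE  :", ts.std(ddof=1) / np.sqrt(n))
```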

  3. Ultrafast approximation for phylogenetic bootstrap.

    Science.gov (United States)

    Minh, Bui Quang; Nguyen, Minh Anh Thi; von Haeseler, Arndt

    2013-05-01

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and the Shimodaira-Hasegawa-like approximate likelihood ratio test have been introduced to speed up the bootstrap. Here, we suggest an ultrafast bootstrap approximation approach (UFBoot) to compute the support of phylogenetic groups in maximum likelihood (ML) based trees. To achieve this, we combine the resampling estimated log-likelihood method with a simple but effective collection scheme of candidate trees. We also propose a stopping rule that assesses the convergence of branch support values to automatically determine when to stop collecting candidate trees. UFBoot achieves a median speed up of 3.1 (range: 0.66-33.3) to 10.2 (range: 1.32-41.4) compared with RAxML RBS for real DNA and amino acid alignments, respectively. Moreover, our extensive simulations show that UFBoot is robust against moderate model violations and the support values obtained appear to be relatively unbiased compared with the conservative standard bootstrap. This provides a more direct interpretation of the bootstrap support. We offer an efficient and easy-to-use software (available at http://www.cibiv.at/software/iqtree) to perform the UFBoot analysis with ML tree inference.

  4. The cluster bootstrap consistency in generalized estimating equations

    KAUST Repository

    Cheng, Guang

    2013-03-01

    The cluster bootstrap resamples clusters or subjects instead of individual observations in order to preserve the dependence within each cluster or subject. In this paper, we provide a theoretical justification of using the cluster bootstrap for the inferences of the generalized estimating equations (GEE) for clustered/longitudinal data. Under the general exchangeable bootstrap weights, we show that the cluster bootstrap yields a consistent approximation of the distribution of the regression estimate, and a consistent approximation of the confidence sets. We also show that a computationally more efficient one-step version of the cluster bootstrap provides asymptotically equivalent inference. © 2012.
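
    The cluster bootstrap itself is simple to sketch: draw subjects (not observations) with replacement and refit the marginal model each time. Below is a toy Python version with a plain OLS fit standing in for a GEE fit, an admitted simplification on invented random-intercept data.

```python
# Cluster bootstrap: resample whole subjects with replacement so the
# within-subject correlation is carried into every replicate
# (synthetic random-intercept data; OLS stands in for a GEE fit).
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_rep = 30, 5
subj = rng.normal(0, 1, n_sub)                        # random intercepts
x = rng.uniform(0, 1, (n_sub, n_rep))
y = 1.0 + 0.5 * x + subj[:, None] + rng.normal(0, 0.5, (n_sub, n_rep))

def slope(xm, ym):
    X = np.c_[np.ones(xm.size), xm.ravel()]
    return np.linalg.lstsq(X, ym.ravel(), rcond=None)[0][1]

slopes = [slope(x[idx], y[idx])
          for idx in (rng.integers(0, n_sub, n_sub) for _ in range(2000))]
print("cluster-bootstrap SE of slope:", np.std(slopes, ddof=1))
```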

  5. Bootstrapping heteroskedastic regression models: wild bootstrap vs. pairs bootstrap

    OpenAIRE

    Emmanuel Flachaire

    2005-01-01

    In regression models, appropriate bootstrap methods for inference robust to heteroskedasticity of unknown form are the wild bootstrap and the pairs bootstrap. The finite sample performance of a heteroskedastic-robust test is investigated with Monte Carlo experiments. The simulation results suggest that one specific version of the wild bootstrap outperforms the other versions of the wild bootstrap and of the pairs bootstrap. It is the only one for which the bootstrap test always gives better r...

  6. UFBoot2: Improving the Ultrafast Bootstrap Approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Chernomor, Olga; von Haeseler, Arndt; Minh, Bui Quang; Vinh, Le Sy

    2018-02-01

    The standard bootstrap (SBS), despite being computationally intensive, is widely used in maximum likelihood phylogenetic analyses. We recently proposed the ultrafast bootstrap approximation (UFBoot) to reduce computing time while achieving more unbiased branch supports than SBS under mild model violations. UFBoot has been steadily adopted as an efficient alternative to SBS and other bootstrap approaches. Here, we present UFBoot2, which substantially accelerates UFBoot and reduces the risk of overestimating branch supports due to polytomies or severe model violations. Additionally, UFBoot2 provides suitable bootstrap resampling strategies for phylogenomic data. UFBoot2 is 778 times (median) faster than SBS and 8.4 times (median) faster than RAxML rapid bootstrap on tested data sets. UFBoot2 is implemented in the IQ-TREE software package version 1.6 and freely available at http://www.iqtree.org. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  7. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    Science.gov (United States)

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…
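
    For a concrete instance of the idea, here is a percentile-bootstrap confidence interval for Cohen's d on invented two-group data (the percentile interval is one of several bootstrap CI flavors the literature discusses).

```python
# Percentile bootstrap CI for Cohen's d (synthetic two-group data).
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40)

def cohens_d(a, b):
    sp = np.sqrt(((a.size - 1) * a.var(ddof=1) + (b.size - 1) * b.var(ddof=1))
                 / (a.size + b.size - 2))            # pooled SD
    return (a.mean() - b.mean()) / sp

d_boot = [cohens_d(rng.choice(a, a.size, True), rng.choice(b, b.size, True))
          for _ in range(5000)]
print("d =", round(cohens_d(a, b), 3),
      "95% CI:", np.round(np.percentile(d_boot, [2.5, 97.5]), 3))
```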

  8. Bootstrap unloader

    Science.gov (United States)

    Pfiffner, H. J.

    1969-01-01

    Circuit can sample a number of transducers in sequence without drawing current from them. This bootstrap unloader uses a differential amplifier with one input connected to a circuit which is the equivalent of the circuit to be unloaded, and the other input delivering the proper unloading currents.

  9. Bootstrap essentials

    CERN Document Server

    Bhaumik, Snig

    2015-01-01

    If you are a web developer who designs and develops websites and pages using HTML, CSS, and JavaScript, but have very little familiarity with Bootstrap, this is the book for you. Previous experience with HTML, CSS, and JavaScript will be helpful, while knowledge of jQuery would be an extra advantage.

  10. Revealing Traces of Image Resampling and Resampling Antiforensics

    Directory of Open Access Journals (Sweden)

    Anjie Peng

    2017-01-01

    Image resampling is a common manipulation in image processing. The forensics of resampling plays an important role in image tampering detection, steganography, and steganalysis. In this paper, we proposed an effective and secure detector, which can simultaneously detect resampling and its forged resampling which is attacked by antiforensic schemes. We find that the interpolation operation used in the resampling and forged resampling makes these two kinds of image show different statistical behaviors from the unaltered images, especially in the high frequency domain. To reveal the traces left by the interpolation, we first apply multidirectional high-pass filters on an image and the residual to create multidirectional differences. Then, the difference is fit into an autoregressive (AR) model. Finally, the AR coefficients and normalized histograms of the difference are extracted as the feature. We assemble the feature extracted from each difference image to construct the comprehensive feature and feed it into support vector machines (SVM) to detect resampling and forged resampling. Experiments on a large image database show that the proposed detector is effective and secure. Compared with the state-of-the-art works, the proposed detector achieved significant improvements in the detection of downsampling or resampling under JPEG compression.

  11. Evaluation of bootstrap methods for estimating uncertainty of parameters in nonlinear mixed-effects models: a simulation study in population pharmacokinetics.

    Science.gov (United States)

    Thai, Hoai-Thu; Mentré, France; Holford, Nicholas H G; Veyrat-Follet, Christine; Comets, Emmanuelle

    2014-02-01

    Bootstrap methods are used in many disciplines to estimate the uncertainty of parameters, including multi-level or linear mixed-effects models. Residual-based bootstrap methods, which resample both random effects and residuals, are an alternative to the case bootstrap, which resamples the individuals. Most PKPD applications use the case bootstrap, for which software is available. In this study, we evaluated the performance of three bootstrap methods (case bootstrap, nonparametric residual bootstrap and parametric bootstrap) by a simulation study and compared it to that of an asymptotic method (Asym) in estimating the uncertainty of parameters in nonlinear mixed-effects models (NLMEM) with heteroscedastic error. The simulation used, as an example, the PK model for aflibercept, an anti-angiogenic drug. As expected, we found that the bootstrap methods provided better estimates of uncertainty for parameters in NLMEM with high nonlinearity and balanced designs than the Asym, as implemented in MONOLIX. Overall, the parametric bootstrap performed better than the case bootstrap, as the true model and variance distribution were used. However, the case bootstrap is faster and simpler, as it makes no assumptions on the model and preserves both between-subject and residual variability in one resampling step. The performance of the nonparametric residual bootstrap was found to be limited when applied to NLMEM, due to its failure to reflate the variance before resampling in unbalanced designs, where the Asym and the parametric bootstrap performed well and better than the case bootstrap even with stratification.
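
    A minimal sketch of the case bootstrap step described above, assuming longitudinal data in a pandas DataFrame with an "id" column; fit_fn is a placeholder for the NLMEM fit (performed in MONOLIX in the study):

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)

        def case_bootstrap(data, fit_fn, id_col="id", n_boot=1000):
            # Resample whole individuals with replacement, then refit the model;
            # fit_fn maps a DataFrame to a vector of parameter estimates
            ids = data[id_col].unique()
            est = []
            for _ in range(n_boot):
                picked = rng.choice(ids, size=len(ids), replace=True)
                boot = pd.concat(
                    [data[data[id_col] == s].assign(**{id_col: k})  # relabel duplicates
                     for k, s in enumerate(picked)],
                    ignore_index=True)
                est.append(fit_fn(boot))
            return np.asarray(est)  # column SDs give bootstrap standard errors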

  12. Seven Stages of Bootstrap

    OpenAIRE

    Beran, Rudolf

    1994-01-01

    This essay is organized around the theoretical and computational problem of constructing bootstrap confidence sets, with forays into related topics. The seven section headings are: Introduction; The Bootstrap World; Bootstrap Confidence Sets; Computing Bootstrap Confidence Sets; Quality of Bootstrap Confidence Sets; Iterated and Two-step Bootstrap; Further Resources.

  13. Internal validation of risk models in clustered data: a comparison of bootstrap schemes

    NARCIS (Netherlands)

    Bouwmeester, W.; Moons, K.G.M.; Kappen, T.H.; van Klei, W.A.; Twisk, J.W.R.; Eijkemans, M.J.C.; Vergouwe, Y.

    2013-01-01

    Internal validity of a risk model can be studied efficiently with bootstrapping to assess possible optimism in model performance. Assumptions of the regular bootstrap are violated when the development data are clustered. We compared alternative resampling schemes in clustered data for the estimation…

  14. Faster family-wise error control for neuroimaging with a parametric bootstrap.

    Science.gov (United States)

    Vandekar, Simon N; Satterthwaite, Theodore D; Rosen, Adon; Ciric, Rastko; Roalf, David R; Ruparel, Kosha; Gur, Ruben C; Gur, Raquel E; Shinohara, Russell T

    2017-10-20

    In neuroimaging, hundreds to hundreds of thousands of tests are performed across a set of brain regions or all locations in an image. Recent studies have shown that the most common family-wise error (FWE) controlling procedures in imaging, which rely on classical mathematical inequalities or Gaussian random field theory, yield FWE rates (FWER) that are far from the nominal level. Depending on the approach used, the FWER can be exceedingly small or grossly inflated. Given the widespread use of neuroimaging as a tool for understanding neurological and psychiatric disorders, it is imperative that reliable multiple testing procedures are available. To our knowledge, only permutation joint testing procedures have been shown to reliably control the FWER at the nominal level. However, these procedures are computationally intensive due to the increasingly large sample sizes and dimensionality of the images now available, and analyses can take days to complete. Here, we develop a parametric bootstrap joint testing procedure. The parametric bootstrap procedure works directly with the test statistics, which leads to much faster estimation of adjusted p-values than resampling-based procedures while reliably controlling the FWER in sample sizes available in many neuroimaging studies. We demonstrate that the procedure controls the FWER in finite samples using simulations, and present region- and voxel-wise analyses to test for sex differences in developmental trajectories of cerebral blood flow.
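
    A toy sketch of the max-statistic logic behind such a parametric bootstrap joint test, assuming z-statistics whose joint null distribution is multivariate normal with an estimated correlation matrix; this is our illustration, not the authors' code:

        import numpy as np

        rng = np.random.default_rng(0)

        def adjusted_pvalues(z_obs, null_cov, n_boot=10000):
            # FWER-adjusted p-value of test i: the bootstrap probability that
            # the maximum null |Z| exceeds the observed |z_i|
            null_z = rng.multivariate_normal(np.zeros(len(z_obs)), null_cov,
                                             size=n_boot)
            max_null = np.abs(null_z).max(axis=1)
            return np.array([(max_null >= abs(z)).mean() for z in z_obs])

        # Five correlated tests with exchangeable correlation 0.3
        cov = 0.3 + 0.7 * np.eye(5)
        print(adjusted_pvalues(np.array([0.5, 1.2, 2.4, 3.6, 1.0]), cov))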

  15. Assessment of resampling methods for causality testing: A note on the US inflation behavior.

    Science.gov (United States)

    Papana, Angeliki; Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As an appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However, this relationship cannot be explained on the basis of traditional cost-push mechanisms.
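
    For reference, one resample under the stationary bootstrap mentioned above (Politis and Romano's scheme: blocks of geometrically distributed length with wrap-around) can be sketched as follows; the mean block length is an arbitrary illustrative choice:

        import numpy as np

        rng = np.random.default_rng(0)

        def stationary_bootstrap(x, mean_block=20):
            # Concatenate blocks starting at random positions; block lengths are
            # geometric with mean mean_block, wrapping around the series end
            n = len(x)
            out = np.empty(n)
            i = 0
            while i < n:
                start = rng.integers(n)
                length = min(rng.geometric(1.0 / mean_block), n - i)
                out[i:i + length] = np.take(x, np.arange(start, start + length),
                                            mode="wrap")
                i += length
            return out

        x = rng.standard_normal(500).cumsum()   # a persistent toy series
        xb = stationary_bootstrap(x)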

  16. Mobile-first Bootstrap

    CERN Document Server

    Magno, Alexandre

    2013-01-01

    A practical, step-by-step tutorial on developing websites for mobile using Bootstrap. This book is for anyone who wants to get acquainted with the new features available in Bootstrap 3 and who wants to develop websites with the mobile-first feature of Bootstrap. The reader should have a basic knowledge of Bootstrap as a frontend framework.

  17. Investigations of dipole localization accuracy in MEG using the bootstrap.

    Science.gov (United States)

    Darvas, F; Rautiainen, M; Pantazis, D; Baillet, S; Benali, H; Mosher, J C; Garnero, L; Leahy, R M

    2005-04-01

    We describe the use of the nonparametric bootstrap to investigate the accuracy of current dipole localization from magnetoencephalography (MEG) studies of event-related neural activity. The bootstrap is well suited to the analysis of event-related MEG data since the experiments are repeated tens or even hundreds of times and averaged to achieve acceptable signal-to-noise ratios (SNRs). The set of repetitions or epochs can be viewed as a set of independent realizations of the brain's response to the experiment. Bootstrap resamples can be generated by sampling with replacement from these epochs and averaging. In this study, we applied the bootstrap resampling technique to somatotopic experimental and simulated MEG data. Four fingers of the right and left hand of a healthy subject were electrically stimulated, and about 400 trials per stimulation were recorded and averaged in order to measure the somatotopic mapping of the fingers in the S1 area of the brain. Based on single-trial recordings for each finger, we performed 5000 bootstrap resamples. We reconstructed dipoles from these resampled averages using the Recursively Applied and Projected (RAP)-MUSIC source localization algorithm. We also performed a simulation for two dipolar sources with overlapping time courses embedded in realistic background brain activity generated using the prestimulus segments of the somatotopic data. To find correspondences between multiple sources in each bootstrap sample, dipoles with similar time series and forward fields were assumed to represent the same source. These dipoles were then clustered by a Gaussian Mixture Model (GMM) clustering algorithm using their combined normalized time series and topographies as feature vectors. The mean and standard deviation of the dipole position and the dipole time series in each cluster were computed to provide estimates of the accuracy of the reconstructed source locations and time series.
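
    The epoch-resampling step lends itself to a compact sketch (the source localization itself, RAP-MUSIC in the study, is outside this snippet); the array shapes are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def bootstrap_evoked(epochs, n_boot=1000):
            # epochs: (n_trials, n_channels, n_times).  Each replicate draws
            # trials with replacement and averages them, giving one resampled
            # evoked response to feed to the source localizer.
            n_trials = epochs.shape[0]
            reps = np.empty((n_boot,) + epochs.shape[1:])
            for b in range(n_boot):
                reps[b] = epochs[rng.integers(n_trials, size=n_trials)].mean(axis=0)
            return reps

        # Toy stand-in for ~400 recorded trials
        replicates = bootstrap_evoked(rng.standard_normal((400, 64, 120)), 200)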

  18. A comparison of bootstrap approaches for estimating uncertainty of parameters in linear mixed-effects models.

    Science.gov (United States)

    Thai, Hoai-Thu; Mentré, France; Holford, Nicholas H G; Veyrat-Follet, Christine; Comets, Emmanuelle

    2013-01-01

    A version of the nonparametric bootstrap, which resamples entire subjects from the original data, called the case bootstrap, has been increasingly used for estimating uncertainty of parameters in mixed-effects models. It is usually applied to obtain more robust estimates of the parameters and more realistic confidence intervals (CIs). Alternative bootstrap methods, such as the residual bootstrap and the parametric bootstrap that resample both random effects and residuals, have been proposed to better take into account the hierarchical structure of multi-level and longitudinal data. However, few studies have been performed to compare these different approaches. In this study, we used simulation to evaluate bootstrap methods proposed for linear mixed-effects models. We also compared the results obtained by maximum likelihood (ML) and restricted maximum likelihood (REML). Our simulation studies demonstrated the good performance of the case bootstrap as well as the bootstraps of both random effects and residuals. On the other hand, the bootstrap methods that resample only the residuals and the bootstraps combining case and residuals performed poorly. REML and ML provided similar bootstrap estimates of uncertainty, but there was slightly more bias and poorer coverage rate for variance parameters with ML in the sparse design. We applied the proposed methods to a real dataset from a study investigating the natural evolution of Parkinson's disease and were able to confirm that the methods provide plausible estimates of uncertainty. Given that most real-life datasets tend to exhibit heterogeneity in sampling schedules, the residual bootstraps would be expected to perform better than the case bootstrap.

  19. Efficient bootstrap with weakly dependent processes

    NARCIS (Netherlands)

    Bravo, Francesco; Crudu, Federico

    2012-01-01

    The efficient bootstrap methodology is developed for overidentified moment conditions models with weakly dependent observations. The resulting bootstrap procedure is shown to be asymptotically valid and can be used to approximate the distributions of t-statistics, the J-statistic for overidentifying restrictions…

  1. The Finite Population Bootstrap - From the Maximum Likelihood to the Horvitz-Thompson Approach

    Directory of Open Access Journals (Sweden)

    Andreas Quatember

    2014-06-01

    Full Text Available The finite population bootstrap method is used as a computer-intensive alternative to estimate the sampling distribution of a sample statistic. The generation of a so-called “bootstrap population” is the necessary step between the original sample drawn and the resamples needed to mimic this distribution. The most important question for researchers to answer is how to create an adequate bootstrap population, which may serve as a close-to-reality basis for the resampling process. In this paper, a review of some approaches to answering this fundamental question is presented. Moreover, an approach based on the idea behind the Horvitz-Thompson estimator, allowing not only whole units in the bootstrap population but also parts of whole units, is proposed. In a simulation study, this method is compared with a more heuristic technique from the bootstrap literature.
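
    A sketch of the construction, under the assumption of a without-replacement design with inclusion probabilities pi: each unit is replicated floor(1/pi) times and receives one further copy with probability equal to the fractional remainder, which is one common way of realizing "parts of whole units" on average:

        import numpy as np

        rng = np.random.default_rng(0)

        def bootstrap_population(y, pi):
            # Horvitz-Thompson-motivated replication with randomized rounding
            w = 1.0 / np.asarray(pi)
            counts = np.floor(w).astype(int) + (rng.random(len(w)) < w - np.floor(w))
            return np.repeat(np.asarray(y), counts)

        def resample_means(y, pi, n, n_boot=2000):
            # Mimic the original design (here assumed: simple random sampling
            # without replacement of size n) on the bootstrap population
            pop = bootstrap_population(y, pi)
            return np.array([rng.choice(pop, size=n, replace=False).mean()
                             for _ in range(n_boot)])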

  2. Spurious 99% bootstrap and jackknife support for unsupported clades.

    Science.gov (United States)

    Simmons, Mark P; Freudenstein, John V

    2011-10-01

    Quantifying branch support using the bootstrap and/or jackknife is generally considered to be an essential component of rigorous parsimony and maximum likelihood phylogenetic analyses. Previous authors have described how application of the frequency-within-replicates approach to treating multiple equally optimal trees found in a given bootstrap pseudoreplicate can provide apparent support for otherwise unsupported clades. We demonstrate how a similar problem may occur when a non-representative subset of equally optimal trees is held per pseudoreplicate, which we term the undersampling-within-replicates artifact. We illustrate the frequency-within-replicates and undersampling-within-replicates bootstrap and jackknife artifacts using both contrived and empirical examples, demonstrate that the artifacts can occur in both parsimony and likelihood analyses, and show that the artifacts occur in outputs from multiple different phylogenetic-inference programs. Based on our results, we make the following five recommendations, which are particularly relevant to supermatrix analyses, but apply to all phylogenetic analyses. First, when two or more optimal trees are found in a given pseudoreplicate they should be summarized using the strict-consensus rather than the frequency-within-replicates approach. Second, jackknife resampling should be used rather than bootstrap resampling. Third, multiple tree searches while holding multiple trees per search should be conducted in each pseudoreplicate rather than conducting only a single search and holding only a single tree. Fourth, branches with a minimum possible optimized length of zero should be collapsed within each tree search rather than collapsing branches only if their maximum possible optimized length is zero. Fifth, resampling values should be mapped onto the strict consensus of all optimal trees found rather than simply presenting the ≥ 50% bootstrap or jackknife tree or mapping the resampling values onto a single optimal tree.

  3. Winter Holts Oscillatory Method: A New Method of Resampling in Time Series.

    Directory of Open Access Journals (Sweden)

    Muhammad Imtiaz Subhani

    2016-12-01

    Full Text Available The core proposition behind this research is to create innovative methods of bootstrapping that can be applied to time series data. In order to find new methods of bootstrapping, various methods were reviewed. The data on automotive sales, market shares, and net exports of the top 10 countries, which include China, Europe, the United States of America (USA), Japan, Germany, South Korea, India, Mexico, Brazil, Spain, and Canada, from 2002 to 2014 were collected through various sources, including UN Comtrade, Index Mundi, and the World Bank. The findings of this paper confirm that bootstrapping for resampling through winter forecasting by the Oscillation and Average methods gives more robust results than winter forecasting by any general method.

  4. Bootstrap testing for cross-correlation under low firing activity.

    Science.gov (United States)

    González-Montoro, Aldana M; Cao, Ricardo; Espinosa, Nelson; Cudeiro, Javier; Mariño, Jorge

    2015-06-01

    A new cross-correlation synchrony index for neural activity is proposed. The index is based on the integration of the kernel estimation of the cross-correlation function. It is used to test for the dynamic synchronization levels of spontaneous neural activity under two induced brain states: sleep-like and awake-like. Two bootstrap resampling plans are proposed to approximate the distribution of the test statistics. The results of the first bootstrap method indicate that it is useful to discern significant differences in the synchronization dynamics of brain states characterized by a neural activity with low firing rate. The second bootstrap method is useful to unveil subtle differences in the synchronization levels of the awake-like state, depending on the activation pathway.

  5. A cautionary note on the use of nonparametric bootstrap for estimating uncertainties in extreme value models

    Czech Academy of Sciences Publication Activity Database

    Kyselý, Jan

    2008-01-01

    Vol. 47, No. 12 (2008), pp. 3236-3251. ISSN 1558-8424. R&D Projects: GA ČR GA205/06/1535. Institutional research plan: CEZ:AV0Z30420517. Keywords: bootstrap * resampling * extreme value analysis * Generalized extreme value distribution * Gumbel distribution. Subject RIV: DG - Atmosphere Sciences, Meteorology. Impact factor: 1.761, year: 2008

  6. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  7. Tests for informative cluster size using a novel balanced bootstrap scheme.

    Science.gov (United States)

    Nevalainen, Jaakko; Oja, Hannu; Datta, Somnath

    2017-07-20

    Clustered data are often encountered in biomedical studies, and to date, a number of approaches have been proposed to analyze such data. However, the phenomenon of informative cluster size (ICS) is a challenging problem, and its presence has an impact on the choice of a correct analysis methodology. For example, Dutta and Datta (2015, Biometrics) presented a number of marginal distributions that could be tested. Depending on the nature and degree of informativeness of the cluster size, these marginal distributions may differ, as do the choices of the appropriate test. In particular, they applied their new test to a periodontal data set where the plausibility of the informativeness was mentioned, but no formal test for the same was conducted. We propose bootstrap tests for testing the presence of ICS. A balanced bootstrap method is developed to successfully estimate the null distribution by merging the re-sampled observations with closely matching counterparts. Relying on the assumption of exchangeability within clusters, the proposed procedure performs well in simulations even with a small number of clusters, at different distributions and against different alternative hypotheses, thus making it an omnibus test. We also explain how to extend the ICS test to a regression setting and thereby enhancing its practical utility. The methodologies are illustrated using the periodontal data set mentioned earlier.

  8. Dynamics of bootstrap percolation

    Indian Academy of Sciences (India)

    …power-law avalanches, while the continuous transition is characterized by truncated avalanches in a related sequential bootstrap process. We explain this behaviour on the basis of an analytical and numerical study of the avalanche distributions on…

  9. The effective bootstrap

    Energy Technology Data Exchange (ETDEWEB)

    Castedo Echeverri, Alejandro [SISSA, Trieste (Italy); INFN, Trieste (Italy); Harling, Benedict von [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Serone, Marco [SISSA, Trieste (Italy); INFN, Trieste (Italy); ICTP, Trieste (Italy)

    2016-06-15

    We study the numerical bounds obtained using a conformal-bootstrap method where different points in the plane of conformal cross ratios z and z̄ are sampled. In contrast to the most used method based on derivatives evaluated at the symmetric point z = z̄ = 1/2, we can consistently "integrate out" higher-dimensional operators and get a reduced, simpler, and faster-to-solve set of bootstrap equations. We test this "effective" bootstrap by studying the 3D Ising and O(n) vector models and bounds on generic 4D CFTs, for which extensive results are already available in the literature. We also determine the scaling dimensions of certain scalar operators in the O(n) vector models, with n=2,3,4, which have not yet been computed using bootstrap techniques.

  10. Bootstrapping phylogenies inferred from rearrangement data

    Directory of Open Access Journals (Sweden)

    Lin Yu

    2012-08-01

    Full Text Available Background: Large-scale sequencing of genomes has enabled the inference of phylogenies based on the evolution of genomic architecture, under such events as rearrangements, duplications, and losses. Many evolutionary models and associated algorithms have been designed over the last few years and have found use in comparative genomics and phylogenetic inference. However, the assessment of phylogenies built from such data has not been properly addressed to date. The standard method used in sequence-based phylogenetic inference is the bootstrap, but it relies on a large number of homologous characters that can be resampled; yet in the case of rearrangements, the entire genome is a single character. Alternatives such as the jackknife suffer from the same problem, while likelihood tests cannot be applied in the absence of well established probabilistic models. Results: We present a new approach to the assessment of distance-based phylogenetic inference from whole-genome data; our approach combines features of the jackknife and the bootstrap and remains nonparametric. For each feature of our method, we give an equivalent feature in the sequence-based framework; we also present the results of extensive experimental testing, in both sequence-based and genome-based frameworks. Through the feature-by-feature comparison and the experimental results, we show that our bootstrapping approach is on par with the classic phylogenetic bootstrap used in sequence-based reconstruction, and we establish the clear superiority of the classic bootstrap for sequence data and of our corresponding new approach for rearrangement data over proposed variants. Finally, we test our approach on a small dataset of mammalian genomes, verifying that the support values match current thinking about the respective branches. Conclusions: Our method is the first to provide a standard of assessment to match that of the classic phylogenetic bootstrap for aligned sequences. Its…

  11. Bootstrapping phylogenies inferred from rearrangement data.

    Science.gov (United States)

    Lin, Yu; Rajan, Vaibhav; Moret, Bernard Me

    2012-08-29

    Large-scale sequencing of genomes has enabled the inference of phylogenies based on the evolution of genomic architecture, under such events as rearrangements, duplications, and losses. Many evolutionary models and associated algorithms have been designed over the last few years and have found use in comparative genomics and phylogenetic inference. However, the assessment of phylogenies built from such data has not been properly addressed to date. The standard method used in sequence-based phylogenetic inference is the bootstrap, but it relies on a large number of homologous characters that can be resampled; yet in the case of rearrangements, the entire genome is a single character. Alternatives such as the jackknife suffer from the same problem, while likelihood tests cannot be applied in the absence of well established probabilistic models. We present a new approach to the assessment of distance-based phylogenetic inference from whole-genome data; our approach combines features of the jackknife and the bootstrap and remains nonparametric. For each feature of our method, we give an equivalent feature in the sequence-based framework; we also present the results of extensive experimental testing, in both sequence-based and genome-based frameworks. Through the feature-by-feature comparison and the experimental results, we show that our bootstrapping approach is on par with the classic phylogenetic bootstrap used in sequence-based reconstruction, and we establish the clear superiority of the classic bootstrap for sequence data and of our corresponding new approach for rearrangement data over proposed variants. Finally, we test our approach on a small dataset of mammalian genomes, verifying that the support values match current thinking about the respective branches. Our method is the first to provide a standard of assessment to match that of the classic phylogenetic bootstrap for aligned sequences. Its support values follow a similar scale and its receiver…

  12. Long multiplet bootstrap

    Energy Technology Data Exchange (ETDEWEB)

    Cornagliotto, Martina; Lemos, Madalena; Schomerus, Volker [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group

    2017-02-15

    Applications of the bootstrap program to superconformal field theories promise unique new insights into their landscape and could even lead to the discovery of new models. Most existing results of the superconformal bootstrap were obtained from correlation functions of very special fields in short (BPS) representations of the superconformal algebra. Our main goal is to initiate a superconformal bootstrap for long multiplets, one that exploits all constraints from superprimaries and their descendants. To this end, we work out the Casimir equations for four-point correlators of long multiplets of the two-dimensional global N=2 superconformal algebra. After constructing the full set of conformal blocks we discuss two different applications. The first one concerns two-dimensional (2,0) theories. The numerical bootstrap analysis we perform serves a twofold purpose, as a feasibility study of our long multiplet bootstrap and also as an exploration of (2,0) theories. A second line of applications is directed towards four-dimensional N=3 SCFTs. In this context, our results imply a new bound c ≥ 13/24 for the central charge of such models. A theory that saturates this bound is not known yet.

  13. Long multiplet bootstrap

    Science.gov (United States)

    Cornagliotto, Martina; Lemos, Madalena; Schomerus, Volker

    2017-10-01

    Applications of the bootstrap program to superconformal field theories promise unique new insights into their landscape and could even lead to the discovery of new models. Most existing results of the superconformal bootstrap were obtained from correlation functions of very special fields in short (BPS) representations of the superconformal algebra. Our main goal is to initiate a superconformal bootstrap for long multiplets, one that exploits all constraints from superprimaries and their descendants. To this end, we work out the Casimir equations for four-point correlators of long multiplets of the two-dimensional global N=2 superconformal algebra. After constructing the full set of conformal blocks we discuss two different applications. The first one concerns two-dimensional (2,0) theories. The numerical bootstrap analysis we perform serves a twofold purpose, as a feasibility study of our long multiplet bootstrap and also as an exploration of (2,0) theories. A second line of applications is directed towards four-dimensional N=3 SCFTs. In this context, our results imply a new bound c ≥ 13/24 for the central charge of such models, which we argue cannot be saturated by an interacting SCFT.

  14. A bootstrap estimation scheme for chemical compositional data with nondetects

    Science.gov (United States)

    Palarea-Albaladejo, J; Martín-Fernández, J.A; Olea, Ricardo A.

    2014-01-01

    The bootstrap method is commonly used to estimate the distribution of estimators and their associated uncertainty when explicit analytic expressions are not available or are difficult to obtain. It has been widely applied in environmental and geochemical studies, where the data generated often represent parts of a whole, typically chemical concentrations. This kind of constrained data is generically called compositional data, and they require specialised statistical methods to properly account for their particular covariance structure. On the other hand, it is not unusual in practice that those data contain labels denoting nondetects, that is, concentrations falling below detection limits. Nondetects impede the implementation of the bootstrap and represent an additional source of uncertainty that must be taken into account. In this work, a bootstrap scheme is devised that handles nondetects by adding an imputation step within the resampling process and conveniently propagates their associated uncertainty. In doing so, it considers the constrained relationships between chemical concentrations originated from their compositional nature. Bootstrap estimates using a range of imputation methods, including new stochastic proposals, are compared across scenarios of increasing difficulty. They are formulated to meet compositional principles following the log-ratio approach, and an adjustment is introduced in the multivariate case to deal with nonclosed samples. Results suggest that nondetect bootstrap based on model-based imputation is generally preferable. A robust approach based on isometric log-ratio transformations appears to be particularly suited in this context. Computer routines in the R statistical programming language are provided.
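
    The resampling-plus-imputation loop can be sketched as below; the uniform draw below the detection limit is a deliberately simple stand-in for the model-based log-ratio imputation the paper studies:

        import numpy as np

        rng = np.random.default_rng(0)

        def bootstrap_with_nondetects(x, detected, dl, stat=np.mean, n_boot=2000):
            # x: concentrations (float array; nondetect entries may hold any
            # placeholder); detected: boolean mask; dl: detection limit(s)
            n = len(x)
            dl = np.broadcast_to(np.asarray(dl, dtype=float), (n,))
            out = np.empty(n_boot)
            for b in range(n_boot):
                idx = rng.integers(n, size=n)
                xb, db, lb = x[idx].copy(), detected[idx], dl[idx]
                # Re-impute the nondetects of every resample, so their
                # uncertainty propagates into the bootstrap distribution
                xb[~db] = rng.uniform(0, lb[~db])
                out[b] = stat(xb)
            return out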

  15. Assessing statistical reliability of phylogenetic trees via a speedy double bootstrap method.

    Science.gov (United States)

    Ren, Aizhen; Ishida, Takashi; Akiyama, Yutaka

    2013-05-01

    Evaluating the reliability of estimated phylogenetic trees is of critical importance in the field of molecular phylogenetics, and for other endeavors that depend on accurate phylogenetic reconstruction. The bootstrap method is a well-known computational approach to phylogenetic tree assessment, and more generally for assessing the reliability of statistical models. However, it is known to be biased under certain circumstances, calling into question the accuracy of the method. Several advanced bootstrap methods have been developed to achieve higher accuracy, one of which is the double bootstrap approach, but the computational burden of this method has precluded its application to practical problems of phylogenetic tree selection. We address this issue by proposing a simple method called the speedy double bootstrap, which circumvents the second-tier resampling step in the regular double bootstrap approach. We also develop an implementation of the regular double bootstrap for comparison with our speedy method. The speedy double bootstrap suffers no significant loss of accuracy compared with the regular double bootstrap, while performing calculations significantly more rapidly (at minimum around 371 times faster, based on analysis of mammalian mitochondrial amino acid sequences and 12S and 16S rRNA genes). Our method thus enables, for the first time, the practical application of the double bootstrap technique in the context of molecular phylogenetics. The approach can also be used more generally for model selection problems wherever the maximum likelihood criterion is used.

  16. N=4 superconformal bootstrap.

    Science.gov (United States)

    Beem, Christopher; Rastelli, Leonardo; van Rees, Balt C

    2013-08-16

    We implement the conformal bootstrap for N=4 superconformal field theories in four dimensions. The consistency of the four-point function of the stress-energy tensor multiplet imposes significant upper bounds for the scaling dimensions of unprotected local operators as functions of the central charge of the theory. At the threshold of exclusion, a particular operator spectrum appears to be singled out by the bootstrap constraints. We conjecture that this extremal spectrum is that of N=4 supersymmetric Yang-Mills theory at an S-duality invariant value of the complexified gauge coupling.

  17. BoCluSt: Bootstrap Clustering Stability Algorithm for Community Detection.

    Science.gov (United States)

    Garcia, Carlos

    2016-01-01

    The identification of modules or communities in sets of related variables is a key step in the analysis and modeling of biological systems. Procedures for this identification are usually designed to allow fast analyses of very large datasets and may produce suboptimal results when these sets are of a small to moderate size. This article introduces BoCluSt, a new, somewhat more computationally intensive, community detection procedure that is based on combining a clustering algorithm with a measure of stability under bootstrap resampling. Both computer simulation and analyses of experimental data showed that BoCluSt can outperform current procedures in the identification of multiple modules in data sets with a moderate number of variables. In addition, the procedure provides users with a null distribution of results to evaluate the support for the existence of community structure in the data. BoCluSt takes individual measures for a set of variables as input, and may be a valuable and robust exploratory tool of network analysis, as it provides 1) an estimation of the best partition of variables into modules, 2) a measure of the support for the existence of modular structures, and 3) an overall description of the whole structure, which may reveal hierarchical modular situations, in which modules are composed of smaller sub-modules.
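
    The combination of clustering and bootstrap stability can be illustrated as follows; here variables are clustered with k-means and stability is scored with the adjusted Rand index, which is our choice of agreement measure, not necessarily BoCluSt's own:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(0)

        def module_stability(data, k, n_boot=200):
            # data: (n_observations, n_variables); cluster the variables, then
            # check how stable the partition is when observations are resampled
            base = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data.T)
            scores = []
            for _ in range(n_boot):
                idx = rng.integers(data.shape[0], size=data.shape[0])
                lab = KMeans(n_clusters=k, n_init=10,
                             random_state=0).fit_predict(data[idx].T)
                scores.append(adjusted_rand_score(base, lab))
            return float(np.mean(scores))  # compare across k, or against noise data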

  18. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    Science.gov (United States)

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
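
    Both resampling schemes reduce to a few lines; the Cox refit itself is left out (fit_cox below is hypothetical, standing for any routine that returns the coefficient vector):

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)

        def resample_clusters(data, cluster_col, two_step=False):
            # One bootstrap data set: draw clusters with replacement; the
            # two-step variant also redraws individuals within each cluster
            clusters = data[cluster_col].unique()
            picked = rng.choice(clusters, size=len(clusters), replace=True)
            parts = []
            for k, c in enumerate(picked):
                grp = data[data[cluster_col] == c]
                if two_step:
                    grp = grp.iloc[rng.integers(len(grp), size=len(grp))]
                parts.append(grp.assign(**{cluster_col: k}))  # relabel duplicates
            return pd.concat(parts, ignore_index=True)

        # Bootstrap SEs, e.g.:
        # se = np.std([fit_cox(resample_clusters(df, "clinic"))
        #              for _ in range(500)], axis=0, ddof=1)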

  19. Dynamics of bootstrap percolation

    Indian Academy of Sciences (India)

    by presenting an analytic and numerical study of the problem on a Bethe lattice. The Bethe lattice does not capture all the complexities of bootstrap dynamics on periodic lattices but it does provide useful insight into what makes the avalanche distributions different in the two cases. The following presentation is self-…

  1. Effects of parameter estimation on maximum-likelihood bootstrap analysis.

    Science.gov (United States)

    Ripplinger, Jennifer; Abdo, Zaid; Sullivan, Jack

    2010-08-01

    Bipartition support in maximum-likelihood (ML) analysis is most commonly assessed using the nonparametric bootstrap. Although bootstrap replicates should theoretically be analyzed in the same manner as the original data, model selection is almost never conducted for bootstrap replicates, substitution-model parameters are often fixed to their maximum-likelihood estimates (MLEs) for the empirical data, and bootstrap replicates may be subjected to less rigorous heuristic search strategies than the original data set. Even though this approach may increase computational tractability, it may also lead to the recovery of suboptimal tree topologies and affect bootstrap values. However, since well-supported bipartitions are often recovered regardless of method, use of a less intensive bootstrap procedure may not significantly affect the results. In this study, we investigate the impact of parameter estimation (i.e., assessment of substitution-model parameters and tree topology) on ML bootstrap analysis. We find that while forgoing model selection and/or setting substitution-model parameters to their empirical MLEs may lead to significantly different bootstrap values, it probably would not change their biological interpretation. Similarly, even though the use of reduced search methods often results in significant differences among bootstrap values, only omitting branch swapping is likely to change any biological inferences drawn from the data.

  2. Fast, Exact Bootstrap Principal Component Analysis for p > 1 Million.

    Science.gov (United States)

    Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim

    Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low-dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low-dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods.
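
    A condensed sketch of the subspace trick (ignoring the re-centering of each bootstrap sample and the sign/order alignment of components, which the full method handles):

        import numpy as np

        rng = np.random.default_rng(0)

        def fast_bootstrap_pca(X, n_boot=500, n_comp=3):
            # X: p x n (measurements x subjects), columns centered.  One
            # p-dimensional SVD up front; every bootstrap sample lies in the
            # same n-dim subspace, so each replicate needs only an n x n SVD.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            A = s[:, None] * Vt                   # n x n subject coordinates
            n = X.shape[1]
            coords, evals = [], []
            for _ in range(n_boot):
                Ub, sb, _ = np.linalg.svd(A[:, rng.integers(n, size=n)],
                                          full_matrices=False)
                coords.append(Ub[:, :n_comp])     # low-dimensional PC coordinates
                evals.append(sb[:n_comp] ** 2 / (n - 1))
            # p-dim bootstrap PCs would be U @ coords[b]; uncertainty summaries
            # can be computed from the low-dim coordinates alone.
            return U, np.stack(coords), np.asarray(evals)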

  3. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr…

  4. Bootstrap quantification of estimation uncertainties in network degree distributions.

    Science.gov (United States)

    Gel, Yulia R; Lyubchich, Vyacheslav; Ramirez Ramirez, L Leticia

    2017-07-19

    We propose a new method of nonparametric bootstrap to quantify estimation uncertainties in functions of network degree distribution in large ultra sparse networks. Both network degree distribution and network order are assumed to be unknown. The key idea is based on adaptation of the "blocking" argument, developed for bootstrapping of time series and re-tiling of spatial data, to random networks. We first sample a set of multiple ego networks of varying orders that form a patch, or a network block analogue, and then resample the data within patches. To select an optimal patch size, we develop a new computationally efficient and data-driven cross-validation algorithm. The proposed fast patchwork bootstrap (FPB) methodology further extends the ideas for a case of network mean degree, to inference on a degree distribution. In addition, the FPB is substantially less computationally expensive, requires less information on a graph, and is free from nuisance parameters. In our simulation study, we show that the new bootstrap method outperforms competing approaches by providing sharper and better-calibrated confidence intervals for functions of a network degree distribution than other available approaches, including the cases of networks in an ultra sparse regime. We illustrate the FPB in application to collaboration networks in statistics and computer science and to Wikipedia networks.

  5. Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference

    Science.gov (United States)

    Olea, R.A.; Pardo-Iguzquiza, E.

    2011-01-01

    The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first one is a synthetic, isotropic, exhaustive sample following a normal distribution, the second example is also synthetic but following a non-Gaussian random field, and a third empirical sample consists of actual raingauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for its construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless if the standard error is estimated from a parametric equation or from bootstrap.
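
    The resample-generation step can be sketched with a Cholesky factor (the symmetric special case of the LU decomposition named in the paper); the exponential covariance below is an arbitrary example:

        import numpy as np

        rng = np.random.default_rng(0)

        def correlated_resamples(cov, n_boot=1000):
            # Lower-triangular L with cov = L L'; then L z has covariance cov
            L = np.linalg.cholesky(cov)
            return (L @ rng.standard_normal((cov.shape[0], n_boot))).T

        # Exponential covariance on a 1-D transect of 50 locations
        h = np.abs(np.subtract.outer(np.arange(50.0), np.arange(50.0)))
        sims = correlated_resamples(np.exp(-h / 10.0))
        # Refit the empirical semivariogram to each row of `sims` to build
        # bootstrap intervals for semivariogram values and model parameters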

  6. Uncertainty Assessment of Hydrological Frequency Analysis Using Bootstrap Method

    Directory of Open Access Journals (Sweden)

    Yi-Ming Hu

    2013-01-01

    Full Text Available The hydrological frequency analysis (HFA) is the foundation for hydraulic engineering design and water resources management. Hydrological extreme observations or samples are the basis for HFA; the representativeness of a sample series to the population distribution is extremely important for the estimation reliability of the hydrological design value or quantile. However, for most of the hydrological extreme data obtained in practical application, the size of the samples is usually small, for example, about 40-50 years in China. Generally, samples with small size cannot completely display the statistical properties of the population distribution, thus leading to uncertainties in the estimation of hydrological design values. In this paper, a new method based on the bootstrap is put forward to analyze the impact of sampling uncertainty on the design value. By the bootstrap resampling technique, a large number of bootstrap samples are constructed from the original flood extreme observations; the corresponding design value or quantile is estimated for each bootstrap sample, so that the sampling distribution of the design value is constructed; based on the sampling distribution, the uncertainty of the quantile estimation can be quantified. Compared with the conventional approach, this method provides not only the point estimation of a design value but also a quantitative evaluation of the uncertainties of the estimation.
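
    The scheme reduces to resample, refit, re-estimate; a sketch with a GEV fit from SciPy (the paper's exact distribution and estimator may differ):

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)

        def design_value_distribution(annual_maxima, return_period=100, n_boot=500):
            # Bootstrap the T-year design value: resample the series, refit the
            # GEV, and record the fitted 1 - 1/T quantile
            x = np.asarray(annual_maxima, dtype=float)
            q = np.empty(n_boot)
            for b in range(n_boot):
                xb = rng.choice(x, size=len(x), replace=True)
                c, loc, scale = genextreme.fit(xb)
                q[b] = genextreme.ppf(1 - 1 / return_period, c, loc, scale)
            return q   # e.g. np.percentile(q, [2.5, 97.5]) for a 95% interval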

  7. Dynamics of bootstrap percolation

    Indian Academy of Sciences (India)

    transition. The first-order transition encountered in bootstrap problems often has a mixed character in the sense that the discontinuous drop in magnetization is … $P_n = p \sum_{k=0}^{z-1} \binom{z-1}{k} [P_{n+1}]^k [1 - P_{n+1}]^{z-1-k} p_{k+1}$, where $p_{k+1} = 1$ if $k+1 \ge m$ and $p_{k+1} = 0$ if $k+1 < m$ (1). The rationale for the above equation is as follows.

  8. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Science.gov (United States)

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was considerably faster for small to moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
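
    The multinomial-weighting formulation is easy to reproduce for Pearson's correlation; this is a sketch in NumPy rather than the paper's R implementation:

        import numpy as np

        rng = np.random.default_rng(0)

        def vectorized_bootstrap_corr(x, y, n_boot=10000):
            # Bootstrap counts ~ Multinomial(n, 1/n); dividing by n gives rows
            # of weights, and every replicate is a weighted-moment matrix product
            n = len(x)
            W = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot) / n
            mx, my = W @ x, W @ y
            cov = W @ (x * y) - mx * my
            vx, vy = W @ (x * x) - mx ** 2, W @ (y * y) - my ** 2
            return cov / np.sqrt(vx * vy)

        x = rng.standard_normal(50)
        y = 0.6 * x + 0.8 * rng.standard_normal(50)
        print(np.percentile(vectorized_bootstrap_corr(x, y), [2.5, 97.5]))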

  9. How Many Subjects are Needed for a Visual Field Normative Database? A Comparison of Ground Truth and Bootstrapped Statistics.

    Science.gov (United States)

    Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K

    2018-03-01

    The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance for the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different x and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different from the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
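
    The set-size experiment is straightforward to emulate; `sensitivities` below stands for the pooled normative values, and the candidate sizes follow the ones discussed above:

        import numpy as np

        rng = np.random.default_rng(0)

        def limits_by_set_size(values, set_sizes, n_resamples=1000):
            # For each candidate cohort size x, bootstrap the mean, the 5th/95th
            # percentile limits and the SD, to see where they stabilize
            v = np.asarray(values, dtype=float)
            out = {}
            for x in set_sizes:
                draws = v[rng.integers(len(v), size=(n_resamples, x))]
                out[x] = {"mean": draws.mean(axis=1).mean(),
                          "p5": np.percentile(draws, 5, axis=1).mean(),
                          "p95": np.percentile(draws, 95, axis=1).mean(),
                          "sd": draws.std(axis=1, ddof=1).mean()}
            return out

        # e.g. limits_by_set_size(sensitivities, [30, 60, 150, 250]), compared
        # against the "ground truth" statistics of the full cohort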

  10. Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R

    Science.gov (United States)

    Dogan, C. Deha

    2017-01-01

    Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…

  11. A wild bootstrap approach for the Aalen-Johansen estimator.

    Science.gov (United States)

    Bluhmki, Tobias; Schmoor, Claudia; Dobler, Dennis; Pauly, Markus; Finke, Juergen; Schumacher, Martin; Beyersmann, Jan

    2018-02-16

    We suggest a wild bootstrap resampling technique for nonparametric inference on transition probabilities in a general time-inhomogeneous Markov multistate model. We first approximate the limiting distribution of the Nelson-Aalen estimator by repeatedly generating standard normal wild bootstrap variates, while the data is kept fixed. Next, a transformation using a functional delta method argument is applied. The approach is conceptually easier than direct resampling for the transition probabilities. It is used to investigate a non-standard time-to-event outcome, currently being alive without immunosuppressive treatment, with data from a recent study of prophylactic treatment in allogeneic transplanted leukemia patients. Due to non-monotonic outcome probabilities in time, neither standard survival nor competing risks techniques apply, which highlights the need for the present methodology. Finite sample performance of time-simultaneous confidence bands for the outcome probabilities is assessed in an extensive simulation study motivated by the clinical trial data. Example code is provided in the web-based Supplementary Materials.
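
    For the Nelson-Aalen step, a simplified sketch for untied survival data (the study's multistate setting is more general): keep the observed increments dN(t)/Y(t) fixed and perturb each by an independent standard normal multiplier.

        import numpy as np

        rng = np.random.default_rng(0)

        def wild_bootstrap_nelson_aalen(times, events, n_boot=1000):
            # Sort the data; Y(t) is the at-risk count (no ties assumed)
            order = np.argsort(times)
            d = np.asarray(events, dtype=float)[order]
            at_risk = np.arange(len(d), 0, -1)
            inc = d / at_risk                     # Nelson-Aalen increments
            A_hat = inc.cumsum()
            # Each row: cumulative sum of G_i * dN_i / Y_i, approximating the
            # fluctuation A*(t) - A_hat(t) with the data kept fixed
            G = rng.standard_normal((n_boot, len(d)))
            fluct = np.cumsum(G * inc[None, :], axis=1)
            return A_hat, fluct   # quantiles of fluct give confidence bands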

  12. Bootstrapping language acquisition.

    Science.gov (United States)

    Abend, Omri; Kwiatkowski, Tom; Smith, Nathaniel J; Goldwater, Sharon; Steedman, Mark

    2017-07-01

    The semantic bootstrapping hypothesis proposes that children acquire their native language through exposure to sentences of the language paired with structured representations of their meaning, whose component substructures can be associated with words and syntactic structures used to express these concepts. The child's task is then to learn a language-specific grammar and lexicon based on (probably contextually ambiguous, possibly somewhat noisy) pairs of sentences and their meaning representations (logical forms). Starting from these assumptions, we develop a Bayesian probabilistic account of semantically bootstrapped first-language acquisition in the child, based on techniques from computational parsing and interpretation of unrestricted text. Our learner jointly models (a) word learning: the mapping between components of the given sentential meaning and lexical words (or phrases) of the language, and (b) syntax learning: the projection of lexical elements onto sentences by universal construction-free syntactic rules. Using an incremental learning algorithm, we apply the model to a dataset of real syntactically complex child-directed utterances and (pseudo) logical forms, the latter including contextually plausible but irrelevant distractors. Taking the Eve section of the CHILDES corpus as input, the model simulates several well-documented phenomena from the developmental literature. In particular, the model exhibits syntactic bootstrapping effects (in which previously learned constructions facilitate the learning of novel words), sudden jumps in learning without explicit parameter setting, acceleration of word-learning (the "vocabulary spurt"), an initial bias favoring the learning of nouns over verbs, and one-shot learning of words and their meanings. The learner thus demonstrates how statistical learning over structured representations can provide a unified account for these seemingly disparate phenomena.

  14. Bootstrap model selection had similar performance for selecting authentic and noise variables compared to backward variable elimination: a simulation study.

    Science.gov (United States)

    Austin, Peter C

    2008-10-01

    Researchers have proposed using bootstrap resampling in conjunction with automated variable selection methods to identify predictors of an outcome and to develop parsimonious regression models. Using this method, multiple bootstrap samples are drawn from the original data set. Traditional backward variable elimination is used in each bootstrap sample, and the proportion of bootstrap samples in which each candidate variable is identified as an independent predictor of the outcome is determined. The performance of this method for identifying predictor variables has not been examined. Monte Carlo simulation methods were used to determine the ability of bootstrap model selection methods to correctly identify predictors of an outcome when those variables that are selected for inclusion in at least 50% of the bootstrap samples are included in the final regression model. We compared the performance of the bootstrap model selection method to that of conventional backward variable elimination. The proportion of selected models that matched the true regression model was approximately equal for the two approaches. Bootstrap model selection thus performed comparably to backward variable elimination for identifying the true predictors of a binary outcome.
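
    The selection procedure examined in this study is straightforward to prototype. The sketch below uses ordinary least squares p-values for the elimination step (the study itself used models for a binary outcome) and a 50% inclusion threshold; all names and the statsmodels-based elimination rule are illustrative choices, not the authors' code.

    ```python
    import numpy as np
    import statsmodels.api as sm

    def backward_eliminate(X, y, cols, alpha=0.05):
        """Classic backward elimination: drop the least significant
        predictor until every remaining p-value is below alpha."""
        cols = list(cols)
        while cols:
            fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            pvals = np.asarray(fit.pvalues)[1:]      # skip the intercept
            worst = int(np.argmax(pvals))
            if pvals[worst] < alpha:
                break
            cols.pop(worst)
        return set(cols)

    def bootstrap_selection(X, y, n_boot=500, threshold=0.5, rng=None):
        """Run backward elimination in each bootstrap sample; keep the
        variables selected in at least `threshold` of the samples."""
        rng = np.random.default_rng(rng)
        n, p = X.shape
        counts = np.zeros(p)
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)              # resample rows with replacement
            for j in backward_eliminate(X[idx], y[idx], range(p)):
                counts[j] += 1
        return np.where(counts / n_boot >= threshold)[0]
    ```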

  15. Explorations in Statistics: the Bootstrap

    Science.gov (United States)

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fourth installment of Explorations in Statistics explores the bootstrap. The bootstrap gives us an empirical approach to estimate the theoretical variability among possible values of a sample statistic such as the…
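
    The core idea is compact enough to state in a few lines of code. A minimal sketch, assuming a one-dimensional sample and any statistic of interest:

    ```python
    import numpy as np

    def bootstrap(sample, statistic=np.mean, n_boot=10_000, rng=None):
        """Empirical bootstrap: resample with replacement and recompute the
        statistic to approximate its sampling variability."""
        rng = np.random.default_rng(rng)
        sample = np.asarray(sample)
        reps = np.array([statistic(rng.choice(sample, sample.size, replace=True))
                         for _ in range(n_boot)])
        # standard error and a simple 95% percentile interval
        return reps.std(ddof=1), np.quantile(reps, [0.025, 0.975])
    ```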

  16. Ultrafast Approximation for Phylogenetic Bootstrap

    NARCIS (Netherlands)

    Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and

  17. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    Science.gov (United States)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences with the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to

  18. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*; but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy to use, open source, and available at http://www.cibiv.at/software/mpboot .

  19. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...

  20. Bootstrapping quarks and gluons

    Energy Technology Data Exchange (ETDEWEB)

    Chew, G.F.

    1979-04-01

    Dual topological unitarization (DTU) - the approach to S-matrix causality and unitarity through combinatorial topology - is reviewed. Amplitudes associated with triangulated spheres are shown to constitute the core of particle physics. Each sphere is covered by triangulated disc faces corresponding to hadrons. The leading current candidate for the hadron-face triangulation pattern employs 3-triangle basic subdiscs whose orientations correspond to baryon number and topological color. Additional peripheral triangles lie along the hadron-face perimeter. Certain combinations of peripheral triangles with a basic-disc triangle can be identified as quarks, the flavor of a quark corresponding to the orientation of its edges that lie on the hadron-face perimeter. Both baryon number and flavor are additively conserved. Quark helicity, which can be associated with triangle-interior orientation, is not uniformly conserved and interacts with particle momentum, whereas flavor does not. Three different colors attach to the 3 quarks associated with a single basic subdisc, but there is no additive physical conservation law associated with color. There is interplay between color and quark helicity. In hadron faces with more than one basic subdisc, there may occur pairs of adjacent flavorless but colored triangles with net helicity ±1 that are identifiable as gluons. Broken symmetry is an automatic feature of the bootstrap. T, C and P symmetries, as well as up-down flavor symmetry, persist on all orientable surfaces.

  2. Bootstrap Dynamical Symmetry Breaking

    Directory of Open Access Journals (Sweden)

    Wei-Shu Hou

    2013-01-01

    Despite the emergence of a 125 GeV Higgs-like particle at the LHC, we explore the possibility of dynamical electroweak symmetry breaking by strong Yukawa coupling of very heavy new chiral quarks $Q$. Taking the 125 GeV object to be a dilaton with suppressed couplings, we note that the Goldstone bosons $G$ exist as longitudinal modes $V_L$ of the weak bosons and would couple to $Q$ with Yukawa coupling $\lambda_Q$. With $m_Q \gtrsim 700$ GeV from the LHC, the strong coupling $\lambda_Q \gtrsim 4$ could lead to deeply bound $Q\bar{Q}$ states. We postulate that the leading "collapsed state", the color-singlet (heavy isotriplet) pseudoscalar $Q\bar{Q}$ meson $\pi_1$, is $G$ itself, and a gap equation without a Higgs boson is constructed. Dynamical symmetry breaking is effected via strong $\lambda_Q$, generating $m_Q$ while self-consistently justifying treating $G$ as massless in the loop; hence, "bootstrap". Solving such a gap equation, we find that $m_Q$ should be several TeV, or $\lambda_Q \gtrsim 4\pi$, and would become much heavier if there is a light Higgs boson. For such heavy chiral quarks, we find an analogy with the $\pi$-$N$ system, by which we conjecture possible annihilation phenomena of $Q\bar{Q} \to nV_L$ with high multiplicity, the search for which might be aided by Yukawa-bound $Q\bar{Q}$ resonances.

  3. Can bootstrapping explain concept learning?

    Science.gov (United States)

    Beck, Jacob

    2017-01-01

    Susan Carey's account of Quinean bootstrapping has been heavily criticized. While it purports to explain how important new concepts are learned, many commentators complain that it is unclear just what bootstrapping is supposed to be or how it is supposed to work. Others allege that bootstrapping falls prey to the circularity challenge: it cannot explain how new concepts are learned without presupposing that learners already have those very concepts. Drawing on discussions of concept learning from the philosophical literature, this article develops a detailed interpretation of bootstrapping that can answer the circularity challenge. The key to this interpretation is the recognition of computational constraints, both internal and external to the mind, which can endow empty symbols with new conceptual roles and thus new contents. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Bootstrapping pronunciation dictionaries: practical issues

    CSIR Research Space (South Africa)

    Davel, MH

    2005-09-01

    entries, increasing the size of the dictionary in an incremental fashion. 2.1. The bootstrapping process: The bootstrapping system is initialised with a large word list (containing no pronunciation information), or with a pre-existing pronunciation... The rule set is extracted in a straightforward fashion: for every letter (grapheme), a default phoneme is derived as the phoneme to which the letter is most likely to map. 'Exceptional' cases - words for which the expected phoneme is not correct...
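
    The default-rule extraction quoted above reduces to a frequency count over aligned grapheme-phoneme pairs. A hypothetical sketch (the alignment itself, a separate step in bootstrapping systems, is assumed given):

    ```python
    from collections import Counter, defaultdict

    def default_g2p_rules(aligned_pairs):
        """For every grapheme, pick the phoneme it most frequently maps to.
        `aligned_pairs` is an iterable of (graphemes, phonemes) sequences of
        equal length, i.e. already aligned."""
        counts = defaultdict(Counter)
        for graphemes, phonemes in aligned_pairs:
            for g, p in zip(graphemes, phonemes):
                counts[g][p] += 1
        return {g: c.most_common(1)[0][0] for g, c in counts.items()}

    # e.g. default_g2p_rules([("cat", ["k", "ae", "t"]),
    #                         ("cot", ["k", "o", "t"])])  # -> {'c': 'k', ...}
    ```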

  5. Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods

    Science.gov (United States)

    MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason

    2010-01-01

    The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal distribution. This article uses a simulation study to demonstrate that confidence limits are imbalanced because the distribution of the indirect effect is normal only in special cases. Two alternatives for improving the performance of confidence limits for the indirect effect are evaluated: (a) a method based on the distribution of the product of two normal random variables, and (b) resampling methods. In Study 1, confidence limits based on the distribution of the product are more accurate than methods based on an assumed normal distribution but confidence limits are still imbalanced. Study 2 demonstrates that more accurate confidence limits are obtained using resampling methods, with the bias-corrected bootstrap the best method overall. PMID:20157642
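
    As a concrete illustration of the resampling alternative, the sketch below computes a percentile bootstrap interval for the indirect effect a*b from the two mediation regressions (the bias-corrected variant that performed best adds a normal-quantile adjustment to the percentile endpoints; variable names are illustrative):

    ```python
    import numpy as np

    def indirect_effect(x, m, y):
        """a*b: slope of M on X times the slope of Y on M (adjusting for X)."""
        a = np.polyfit(x, m, 1)[0]
        design = np.column_stack([np.ones_like(x), x, m])
        b = np.linalg.lstsq(design, y, rcond=None)[0][2]
        return a * b

    def percentile_boot_ci(x, m, y, n_boot=5000, alpha=0.05, rng=None):
        """Percentile bootstrap confidence limits for the indirect effect."""
        rng = np.random.default_rng(rng)
        n = len(x)
        reps = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, n)
            reps[i] = indirect_effect(x[idx], m[idx], y[idx])
        return np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    ```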

  6. Probabilistic tractography using Lasso bootstrap.

    Science.gov (United States)

    Ye, Chuyang; Prince, Jerry L

    2017-01-01

    Diffusion magnetic resonance imaging (dMRI) can be used for noninvasive imaging of white matter tracts. Using fiber tracking, which propagates fiber streamlines according to fiber orientations (FOs) computed from dMRI, white matter tracts can be reconstructed for investigation of brain diseases and the brain connectome. Because of image noise, probabilistic tractography has been proposed to characterize uncertainties in FO estimation. Bootstrap provides a nonparametric approach to the estimation of FO uncertainties and residual bootstrap has been used for developing probabilistic tractography. However, recently developed models have incorporated sparsity regularization to reduce the required number of gradient directions to resolve crossing FOs, and the residual bootstrap used in previous methods is not applicable to these models. In this work, we propose a probabilistic tractography algorithm named Lasso bootstrap tractography (LBT) for the models that incorporate sparsity. Using a fixed tensor basis and a sparsity assumption, diffusion signals are modeled using a Lasso formulation. With the residuals from the Lasso model, a distribution of diffusion signals is obtained according to a modified Lasso bootstrap strategy. FOs are then estimated from the synthesized diffusion signals by an algorithm that improves FO estimation by enforcing spatial consistency of FOs. Finally, streamlining fiber tracking is performed with the computed FOs. The LBT algorithm was evaluated on simulated and real dMRI data both qualitatively and quantitatively. Results demonstrate that LBT outperforms state-of-the-art algorithms. Copyright © 2016 Elsevier B.V. All rights reserved.
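
    The residual-resampling loop at the core of such methods can be sketched as follows. This is only the generic Lasso residual bootstrap in a generic regression setting, not the authors' modified strategy (which corrects for the shrinkage bias of Lasso residuals) or their diffusion-signal model:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def lasso_residual_bootstrap(X, y, alpha=0.1, n_boot=500, rng=None):
        """Refit the Lasso on synthetic responses built from fitted values
        plus resampled, centred residuals; returns coefficient replicates."""
        rng = np.random.default_rng(rng)
        fit = Lasso(alpha=alpha).fit(X, y)
        fitted = fit.predict(X)
        resid = y - fitted
        resid -= resid.mean()                        # centre the residuals
        coefs = np.empty((n_boot, X.shape[1]))
        for b in range(n_boot):
            y_star = fitted + rng.choice(resid, size=len(y), replace=True)
            coefs[b] = Lasso(alpha=alpha).fit(X, y_star).coef_
        return coefs
    ```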

  7. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling or whether the measurement error variance is constant or not, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.

  8. Double-bootstrap methods that use a single double-bootstrap simulation

    OpenAIRE

    Chang, Jinyuan; Hall, Peter

    2014-01-01

    We show that, when the double bootstrap is used to improve performance of bootstrap methods for bias correction, techniques based on using a single double-bootstrap sample for each single-bootstrap sample can be particularly effective. In particular, they produce third-order accuracy for much less computational expense than is required by conventional double-bootstrap methods. However, this improved level of performance is not available for the single double-bootstrap methods that have been s...

  9. The efficiency of different search strategies in estimating parsimony jackknife, bootstrap, and Bremer support

    Directory of Open Access Journals (Sweden)

    Müller Kai F

    2005-10-01

    Background: For parsimony analyses, the most common way to estimate confidence is by resampling plans (nonparametric bootstrap, jackknife) and Bremer support (decay indices). The recent literature reveals that parameter settings that are quite commonly employed are not those that are recommended by theoretical considerations and by previous empirical studies. The optimal search strategy to be applied during resampling was previously addressed solely via standard search strategies available in PAUP*. The question of a compromise between search extensiveness and improved support accuracy for Bremer support received even less attention. A set of experiments was conducted on different datasets to find an empirical cut-off point at which increased search extensiveness does not significantly change Bremer support and jackknife or bootstrap proportions any more. Results: For the number of replicates needed for accurate estimates of support in resampling plans, a diagram is provided that helps to address the question whether apparently different support values really differ significantly. It is shown that the use of random addition cycles and parsimony ratchet iterations during bootstrapping does not translate into higher support, nor does any extension of the search extensiveness beyond the rather moderate effort of TBR (tree bisection and reconnection) branch swapping plus saving one tree per replicate. Instead, in the case of very large matrices, saving more than one shortest tree per iteration and using a strict consensus tree of these yields decreased support compared to saving only one tree. This can be interpreted as a small risk of overestimating support but should be more than compensated by other factors that counteract an enhanced type I error. With regard to Bremer support, a rule of thumb can be derived stating that not much is gained relative to the surplus computational effort when searches are extended beyond 20 ratchet iterations per

  10. A Primer on Bootstrap Factor Analysis as Applied to Health Studies Research

    Science.gov (United States)

    Lu, Wenhua; Miao, Jingang; McKyer, E. Lisako J.

    2014-01-01

    Objectives: To demonstrate how the bootstrap method could be conducted in exploratory factor analysis (EFA) with a syntax written in SPSS. Methods: The data obtained from the Texas Childhood Obesity Prevention Policy Evaluation project (T-COPPE project) were used for illustration. A 5-step procedure to conduct bootstrap factor analysis (BFA) was…

  11. Sieve bootstrapping in the Lee-Carter model

    NARCIS (Netherlands)

    Heinemann, A.

    2013-01-01

    This paper studies an alternative approach to construct confidence intervals for parameter estimates of the Lee-Carter model. First, the procedure of obtaining confidence intervals using regular nonparametric i.i.d. bootstrap is specified. Empirical evidence seems to invalidate this approach as it

  12. A Bootstrap Cointegration Rank Test for Panels of VAR Models

    DEFF Research Database (Denmark)

    Callot, Laurent

    functions of the individual Cointegrated VARs (CVAR) models. A bootstrap based procedure is used to compute empirical distributions of the trace test statistics for these individual models. From these empirical distributions two panel trace test statistics are constructed. The satisfying small sample...

  13. Bootstrap confidence intervals for three-way methods

    NARCIS (Netherlands)

    Kiers, Henk A.L.

    Results from exploratory three-way analysis techniques such as CANDECOMP/PARAFAC and Tucker3 analysis are usually presented without giving insight into uncertainties due to sampling. Here a bootstrap procedure is proposed that produces percentile intervals for all output parameters. Special

  14. Beta limits of a completely bootstrapped tokamak

    International Nuclear Information System (INIS)

    Weening, R.H.; Bondeson, A.

    1992-03-01

    A beta limit is given for a completely bootstrapped tokamak. The beta limit is sensitive to the achievable Troyon factor and depends directly upon the strength of the tokamak bootstrap effect. (author) 16 refs

  15. VOYAGER 1 SATURN MAGNETOMETER RESAMPLED DATA 9.60 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 1 Saturn encounter magnetometer data that have been resampled at a 9.6 second sample rate. The data set is composed of 6 columns: 1)...

  16. VOYAGER 1 SATURN MAGNETOMETER RESAMPLED DATA 1.92 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 1 Saturn encounter magnetometer data that have been resampled at a 1.92 second sample rate. The data set is composed of 6 columns: 1)...

  17. VOYAGER 1 JUPITER MAGNETOMETER RESAMPLED DATA 1.92 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 1 Jupiter encounter magnetometer data that have been resampled at a 1.92 second sample rate. The data set is composed of 6 columns: 1)...

  18. VOYAGER 1 SATURN MAGNETOMETER RESAMPLED DATA 48.0 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 1 Saturn encounter magnetometer data that have been resampled at a 48.0 second sample rate. The data set is composed of 6 columns: 1)...

  19. VOYAGER 1 JUPITER MAGNETOMETER RESAMPLED DATA 48.0 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 1 Jupiter encounter magnetometer data that have been resampled at a 48.0 second sample rate. The data set is composed of 6 columns: 1)...

  20. VOYAGER 2 JUPITER MAGNETOMETER RESAMPLED DATA 9.60 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 2 Jupiter encounter magnetometer data that have been resampled at a 9.6 second sample rate. The data set is composed of 6 columns: 1)...

  1. VOYAGER 2 SATURN MAGNETOMETER RESAMPLED DATA 1.92 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 2 Saturn encounter magnetometer data that have been resampled at a 1.92 second sample rate. The data set is composed of 6 columns: 1)...

  2. VOYAGER 1 JUPITER MAGNETOMETER RESAMPLED DATA 9.60 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 1 Jupiter encounter magnetometer data that have been resampled at a 9.6 second sample rate. The data set is composed of 6 columns: 1)...

  3. VOYAGER 2 SATURN MAGNETOMETER RESAMPLED DATA 48.0 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 2 Saturn encounter magnetometer data that have been resampled at a 48.0 second sample rate. The data set is composed of 6 columns: 1)...

  4. VOYAGER 2 SATURN MAGNETOMETER RESAMPLED DATA 9.60 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 2 Saturn encounter magnetometer data that have been resampled at a 9.6 second sample rate. The data set is composed of 6 columns: 1)...

  5. VOYAGER 2 JUPITER MAGNETOMETER RESAMPLED DATA 1.92 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 2 Jupiter encounter magnetometer data that have been resampled at a 1.92 second sample rate. The data set is composed of 6 columns: 1)...

  6. VOYAGER 2 JUPITER MAGNETOMETER RESAMPLED DATA 48.0 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 2 Jupiter encounter magnetometer data that have been resampled at a 48.0 second sample rate. The data set is composed of 6 columns: 1)...

  7. Calculating Power by Bootstrap, with an Application to Cluster-Randomized Trials.

    Science.gov (United States)

    Kleinman, Ken; Huang, Susan S

    2016-01-01

    A key requirement for a useful power calculation is that the calculation mimic the data analysis that will be performed on the actual data, once that data is observed. Close approximations may be difficult to achieve using analytic solutions, however, and thus Monte Carlo approaches, including both simulation and bootstrap resampling, are often attractive. One setting in which this is particularly true is cluster-randomized trial designs. However, Monte Carlo approaches are useful in many additional settings as well. Calculating power for cluster-randomized trials using analytic or simulation-based methods is frequently unsatisfactory due to the complexity of the data analysis methods to be employed and to the sparseness of data to inform the choice of important parameters in these methods. We propose that among Monte Carlo methods, bootstrap approaches are most likely to generate data similar to the observed data. In bootstrap approaches, real data are resampled to build complete data sets based on real data that resemble the data for the intended analyses. In contrast, simulation methods would use the real data to estimate parameters for the data, and would then simulate data using these parameters. We describe means of implementing bootstrap power calculation. We demonstrate bootstrap power calculation for a cluster-randomized trial with a censored survival outcome and a baseline observation period. Bootstrap power calculation is a natural application of resampling methods. It provides a relatively simple solution to power calculation that is likely to be more accurate than analytic solutions or simulation-based calculations, in the sense that the bootstrap approach does not rely on the assumptions inherent in analytic calculations. This method of calculation has several important strengths. Notably, it is simple to achieve great fidelity to the proposed data analysis method and there is no requirement for parameter estimates, or estimates of their variability
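
    The mechanics are simple to illustrate. The hedged sketch below resamples two-arm pilot data to the planned sample size, injects a hypothesised effect, and runs a two-sample t-test as a stand-in for the intended analysis (the paper's application involved censored survival outcomes in cluster-randomized trials, which would replace the test inside the loop):

    ```python
    import numpy as np
    from scipy import stats

    def bootstrap_power(pilot_a, pilot_b, n_per_arm, effect=0.0,
                        n_sims=1000, alpha=0.05, rng=None):
        """Proportion of resampled trials in which the planned analysis
        rejects at level alpha, i.e. the estimated power."""
        rng = np.random.default_rng(rng)
        hits = 0
        for _ in range(n_sims):
            a = rng.choice(pilot_a, size=n_per_arm, replace=True)
            b = rng.choice(pilot_b, size=n_per_arm, replace=True) + effect
            hits += stats.ttest_ind(a, b).pvalue < alpha
        return hits / n_sims
    ```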

  8. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    Science.gov (United States)

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interests were nonnormal Likert-type and binary items.…

  9. Image re-sampling detection through a novel interpolation kernel.

    Science.gov (United States)

    Hilal, Alaa

    2018-03-27

    Image re-sampling involved in re-size and rotation transformations is an essential element block in a typical digital image alteration. Fortunately, traces left from such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. Then, we demonstrate its capacity to imitate the behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Secondly, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The involved process includes a minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that had undergone complex transformations. Obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Bootstrapping N=3 superconformal theories

    Energy Technology Data Exchange (ETDEWEB)

    Lemos, Madalena; Liendo, Pedro [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Meneghelli, Carlo [Stony Brook Univ., Stony Brook, NY (United States). Simons Center for Geometry and Physics; Mitev, Vladimir [Mainz Univ. (Germany). PRISMA Cluster of Excellence

    2016-12-15

    We initiate the bootstrap program for N=3 superconformal field theories (SCFTs) in four dimensions. The problem is considered from two fronts: the protected subsector described by a 2d chiral algebra, and crossing symmetry for half-BPS operators whose superconformal primaries parametrize the Coulomb branch of N=3 theories. With the goal of describing a protected subsector of a family of N=3 SCFTs, we propose a new 2d chiral algebra with super Virasoro symmetry that depends on an arbitrary parameter, identified with the central charge of the theory. Turning to the crossing equations, we work out the superconformal block expansion and apply standard numerical bootstrap techniques in order to constrain the CFT data. We obtain bounds valid for any theory but also, thanks to input from the chiral algebra results, we are able to exclude solutions with N=4 supersymmetry, allowing us to zoom in on a specific N=3 SCFT.

  11. Mobile first design : using Bootstrap

    OpenAIRE

    Bhusal, Bipin

    2017-01-01

    The aim of this project was to design and build a website for a company based in Australia. The business offers remedial massage therapy to its clients. It is a small business which works on the basis of calls and message reservation. The business currently has a temporary website designed with Wix, a cloud-based web development platform. The new website was built with responsive design using Bootstrap. This website was intended for the customers using mobile internet browsers. This design is...

  12. Bootstrap Sequential Determination of the Co-integration Rank in VAR Models

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A. M. Robert

    with empirical rejection frequencies often very much in excess of the nominal level. As a consequence, bootstrap versions of these tests have been developed. To be useful, however, sequential procedures for determining the co-integrating rank based on these bootstrap tests need to be consistent, in the sense ... in the literature by proposing a bootstrap sequential algorithm which we demonstrate delivers consistent cointegration rank estimation for general I(1) processes. Finite sample Monte Carlo simulations show the proposed procedure performs well in practice.

  13. More N =4 superconformal bootstrap

    Science.gov (United States)

    Beem, Christopher; Rastelli, Leonardo; van Rees, Balt C.

    2017-08-01

    In this long overdue second installment, we continue to develop the conformal bootstrap program for N =4 superconformal field theories (SCFTs) in four dimensions via an analysis of the correlation function of four stress-tensor supermultiplets. We review analytic results for this correlator and make contact with the SCFT/chiral algebra correspondence of Beem et al. [Commun. Math. Phys. 336, 1359 (2015), 10.1007/s00220-014-2272-x]. We demonstrate that the constraints of unitarity and crossing symmetry require the central charge c to be greater than or equal to 3/4 in any interacting N =4 SCFT. We apply numerical bootstrap methods to derive upper bounds on scaling dimensions and operator product expansion coefficients for several low-lying, unprotected operators as a function of the central charge. We interpret our bounds in the context of N =4 super Yang-Mills theories, formulating a series of conjectures regarding the embedding of the conformal manifold, parametrized by the complexified gauge coupling, into the space of scaling dimensions and operator product expansion coefficients. Our conjectures assign a distinguished role to points on the conformal manifold that are self-dual under a subgroup of the S-duality group. This paper contains a more detailed exposition of a number of results previously reported in Beem et al. [Phys. Rev. Lett. 111, 071601 (2013), 10.1103/PhysRevLett.111.071601] in addition to new results.

  14. A Comparison of Non-Adjusted and Bootstrapped Methods: Bootstrapped Diagnosis Might Be Worth the Trouble.

    Science.gov (United States)

    Greenblatt, Richard L.; And Others

    1992-01-01

    The diagnostic accuracy of nonadjusted and bootstrapped diagnosis was compared using a sample of 1,455 psychiatric patients who completed the Millon Clinical Multiaxial Inventory. The usefulness of bootstrapping depended on the criteria for accuracy. Conditions under which bootstrapping might increase diagnostic accuracy are detailed. (SLD)

  15. Modified bootstrap consistency rates for U-quantiles

    OpenAIRE

    JANSSEN, Paul; SWANEPOEL, Jan; VERAVERBEKE, Noel

    2001-01-01

    We show that, compared to the classical bootstrap, the modified bootstrap provides faster consistency rates for the bootstrap distribution of U-quantiles. This shows that the modified bootstrap is useful, not only in cases where the classical bootstrap fails, but also in situations where it is valid.

  16. PARTICLE FILTER BASED VEHICLE TRACKING APPROACH WITH IMPROVED RESAMPLING STAGE

    Directory of Open Access Journals (Sweden)

    Wei Leong Khong

    2014-02-01

    Optical sensor based vehicle tracking can be widely implemented in traffic surveillance and flow control. The vast development of video surveillance infrastructure in recent years has drawn the current research focus towards vehicle tracking using high-end and low cost optical sensors. However, tracking vehicles via such sensors could be challenging due to the high probability of changing vehicle appearance and illumination, besides the occlusion and overlapping incidents. The particle filter has been proven as an approach which can overcome nonlinear and non-Gaussian situations caused by cluttered backgrounds and occlusion incidents. Unfortunately, the conventional particle filter approach encounters particle degeneracy, especially during and after occlusion. Sampling importance resampling (SIR) is an important step to overcome this drawback of the particle filter, but SIR faces the problem of sample impoverishment when heavy particles are statistically selected many times. In this work, a genetic algorithm has been proposed to be implemented in the particle filter resampling stage, where the estimated position can converge faster to the real position of the target vehicle under various occlusion incidents. The experimental results show that the improved particle filter with the genetic algorithm resampling method manages to increase the tracking accuracy and meanwhile reduce the particle sample size in the resampling stage.
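
    For reference, the conventional SIR step that such improved schemes replace looks roughly as follows; duplicating heavy particles is exactly the impoverishment mechanism discussed above (names are illustrative, and the paper's genetic-algorithm variant is not shown):

    ```python
    import numpy as np

    def sir_resample(particles, weights, rng=None):
        """Multinomial SIR resampling: draw particles in proportion to their
        weights (heavy particles get duplicated), then reset the weights."""
        rng = np.random.default_rng(rng)
        n = len(particles)
        idx = rng.choice(n, size=n, p=weights / weights.sum())
        return particles[idx], np.full(n, 1.0 / n)
    ```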

  17. Introduction to Permutation and Resampling-Based Hypothesis Tests

    Science.gov (United States)

    LaFleur, Bonnie J.; Greevy, Robert A.

    2009-01-01

    A resampling-based method of inference--permutation tests--is often used when distributional assumptions are questionable or unmet. Not only are these methods useful for obvious departures from parametric assumptions (e.g., normality) and small sample sizes, but they are also more robust than their parametric counterparts in the presence of…

  18. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

    Science.gov (United States)

    Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

    2012-01-01

    Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…

  19. How to Bootstrap Anonymous Communication

    DEFF Research Database (Denmark)

    Jakobsen, Sune K.; Orlandi, Claudio

    2015-01-01

    formal study in this direction. To solve this problem, we introduce the concept of anonymous steganography: think of a leaker Lea who wants to leak a large document to Joe the journalist. Using anonymous steganography Lea can embed this document in innocent looking communication on some popular website (such as cat videos on YouTube or funny memes on 9GAG). Then Lea provides Joe with a short key k which, when applied to the entire website, recovers the document while hiding the identity of Lea among the large number of users of the website. Our contributions include: - Introducing and formally defining anonymous steganography, - A construction showing that anonymous steganography is possible (which uses recent results in circuits obfuscation), - A lower bound on the number of bits which are needed to bootstrap anonymous communication.

  1. Inverse bootstrapping conformal field theories

    Science.gov (United States)

    Li, Wenliang

    2018-01-01

    We propose a novel approach to study conformal field theories (CFTs) in general dimensions. In the conformal bootstrap program, one usually searches for consistent CFT data that satisfy crossing symmetry. In the new method, we reverse the logic and interpret manifestly crossing-symmetric functions as generating functions of conformal data. Physical CFTs can be obtained by scanning the space of crossing-symmetric functions. By truncating the fusion rules, we are able to concentrate on the low-lying operators and derive some approximate relations for their conformal data. It turns out that the free scalar theory, the 2d minimal model CFTs, the $\phi^4$ Wilson-Fisher CFT, the Lee-Yang CFTs and the Ising CFTs are consistent with the universal relations from the minimal fusion rule $\phi_1 \times \phi_1 = I + \phi_2 + T$, where $\phi_1, \phi_2$ are scalar operators, $I$ is the identity operator and $T$ is the stress tensor.

  2. Bootstrapping SCFTs with Four Supercharges

    CERN Document Server

    Bobev, Nikolay; Mazac, Dalimil; Paulos, Miguel F

    2015-01-01

    We study the constraints imposed by superconformal symmetry, crossing symmetry, and unitarity for theories with four supercharges in spacetime dimension $2\leq d\leq 4$. We show how superconformal algebras with four Poincaré supercharges can be treated in a formalism applicable to any, in principle continuous, value of $d$ and use this to construct the superconformal blocks for any $d\leq 4$. We then use numerical bootstrap techniques to derive upper bounds on the conformal dimension of the first unprotected operator appearing in the OPE of a chiral and an anti-chiral superconformal primary. We obtain an intriguing structure of three distinct kinks. We argue that one of the kinks smoothly interpolates between the $d=2$, $\mathcal N=(2,2)$ minimal model with central charge $c=1$ and the theory of a free chiral multiplet in $d=4$, passing through the critical Wess-Zumino model with cubic superpotential in intermediate dimensions.

  3. Bootstrapping SCFTs with four supercharges

    International Nuclear Information System (INIS)

    Bobev, Nikolay; El-Showk, Sheer; Mazáč, Dalimil; Paulos, Miguel F.

    2015-01-01

    We study the constraints imposed by superconformal symmetry, crossing symmetry, and unitarity for theories with four supercharges in spacetime dimension 2≤d≤4. We show how superconformal algebras with four Poincaré supercharges can be treated in a formalism applicable to any, in principle continuous, value of d and use this to construct the superconformal blocks for any d≤4. We then use numerical bootstrap techniques to derive upper bounds on the conformal dimension of the first unprotected operator appearing in the OPE of a chiral and an anti-chiral superconformal primary. We obtain an intriguing structure of three distinct kinks. We argue that one of the kinks smoothly interpolates between the d=2, N=(2,2) minimal model with central charge c=1 and the theory of a free chiral multiplet in d=4, passing through the critical Wess-Zumino model with cubic superpotential in intermediate dimensions.

  4. Statistical error estimation of the Feynman-α method using the bootstrap method

    International Nuclear Information System (INIS)

    Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho

    2016-01-01

    Applicability of the bootstrap method is investigated to estimate the statistical error of the Feynman-α method, which is one of the subcritical measurement techniques on the basis of reactor noise analysis. In the Feynman-α method, the statistical error can be simply estimated from multiple measurements of reactor noise; however, this requires additional measurement time for the repeated measurements. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error, using only a single measurement of reactor noise. In order to validate our proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in nuclear fuels, at the Kyoto University Criticality Assembly. Through the actual measurement, it is confirmed that the bootstrap method is applicable to approximately estimate the statistical error of measurement results obtained by the Feynman-α method. (author)
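
    The single-measurement idea can be sketched generically: treat the per-gate counts of one noise measurement as the empirical distribution and resample them to get replicates of the Feynman Y statistic. This is an illustrative simplification that ignores correlations between gates:

    ```python
    import numpy as np

    def feynman_y(counts):
        """Feynman Y: variance-to-mean ratio of the gate counts minus one."""
        counts = np.asarray(counts, dtype=float)
        return counts.var(ddof=1) / counts.mean() - 1.0

    def bootstrap_feynman_y(counts, n_boot=2000, rng=None):
        """Standard deviation and 95% interval of Y from a single measurement."""
        rng = np.random.default_rng(rng)
        counts = np.asarray(counts, dtype=float)
        reps = np.array([feynman_y(rng.choice(counts, counts.size, replace=True))
                         for _ in range(n_boot)])
        return reps.std(ddof=1), np.quantile(reps, [0.025, 0.975])
    ```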

  5. Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.

    Science.gov (United States)

    Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E

    2016-12-20

    Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.

  6. Comparing bootstrap and posterior probability values in the four-taxon case.

    Science.gov (United States)

    Cummings, Michael P; Handley, Scott A; Myers, Daniel S; Reed, David L; Rokas, Antonis; Winka, Katarina

    2003-08-01

    Assessment of the reliability of a given phylogenetic hypothesis is an important step in phylogenetic analysis. Historically, the nonparametric bootstrap procedure has been the most frequently used method for assessing the support for specific phylogenetic relationships. The recent employment of Bayesian methods for phylogenetic inference problems has resulted in clade support being expressed in terms of posterior probabilities. We used simulated data and the four-taxon case to explore the relationship between nonparametric bootstrap values (as inferred by maximum likelihood) and posterior probabilities (as inferred by Bayesian analysis). The results suggest a complex association between the two measures. Three general regions of tree space can be identified: (1) the neutral zone, where differences between mean bootstrap and mean posterior probability values are not significant, (2) near the two-branch corner, and (3) deep in the two-branch corner. In the last two regions, significant differences occur between mean bootstrap and mean posterior probability values. Whether bootstrap or posterior probability values are higher depends on the data in support of alternative topologies. Examination of star topologies revealed that both bootstrap and posterior probability values differ significantly from theoretical expectations; in particular, there are more posterior probability values in the range 0.85-1 than expected by theory. Therefore, our results corroborate the findings of others that posterior probability values are excessively high. Our results also suggest that extrapolations from single topology branch-length studies are unlikely to provide any general conclusions regarding the relationship between bootstrap and posterior probability values.

  7. Better Confidence Intervals: The Double Bootstrap with No Pivot

    OpenAIRE

    David Letson; B.D. McCullough

    1998-01-01

    The double bootstrap is an important advance in confidence interval generation because it converges faster than the already popular single bootstrap. Yet the usual double bootstrap requires a stable pivot that is not always available, e.g., when estimating flexibilities or substitution elasticities. A recently developed double bootstrap does not require a pivot. A Monte Carlo analysis with the Waugh data finds the double bootstrap achieves nominal coverage whereas the single bootstrap does no...

  8. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    Science.gov (United States)

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
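
    One common recipe along these lines, sketched with a k-NN smoother as a stand-in for the regression model (the paper's exact procedure and proofs are not reproduced here): refit on resampled pairs, predict at the query point, and add a resampled residual so the interval reflects observation noise, not just model variability. An observed output falling outside the interval would then be flagged as anomalous.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    def bootstrap_prediction_interval(X, y, x_new, n_boot=500,
                                      alpha=0.05, rng=None):
        """Pairs-bootstrap prediction interval at a single query point."""
        rng = np.random.default_rng(rng)
        n = len(y)
        base = KNeighborsRegressor(n_neighbors=5).fit(X, y)
        resid = y - base.predict(X)                  # noise estimate
        preds = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, n)
            model = KNeighborsRegressor(n_neighbors=5).fit(X[idx], y[idx])
            preds[b] = model.predict(x_new.reshape(1, -1))[0] + rng.choice(resid)
        return np.quantile(preds, [alpha / 2, 1 - alpha / 2])
    ```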

  9. Using re-sampling methods in mortality studies.

    Directory of Open Access Journals (Sweden)

    Igor Itskovich

    Traditional methods of computing standardized mortality ratios (SMR) in mortality studies rely upon a number of conventional statistical propositions to estimate confidence intervals for obtained values. Those propositions include a common but arbitrary choice of the confidence level and the assumption that the observed number of deaths in the test sample is a purely random quantity. The latter assumption may not be fully justified for a series of periodic "overlapping" studies. We propose a new approach to evaluating the SMR, along with its confidence interval, based on a simple re-sampling technique. The proposed method is most straightforward and requires neither the use of above assumptions nor any rigorous technique, employed by modern re-sampling theory, for selection of a sample set. Instead, we include all possible samples that correspond to the specified time window of the study in the re-sampling analysis. As a result, directly obtained confidence intervals for repeated overlapping studies may be tighter than those yielded by conventional methods. The proposed method is illustrated by evaluating mortality due to a hypothetical risk factor in a life insurance cohort. With this method used, the SMR values can be forecast more precisely than when using the traditional approach. As a result, the appropriate risk assessment would have smaller uncertainties.

  10. Gaussian process regression bootstrapping: exploring the effects of uncertainty in time course data.

    Science.gov (United States)

    Kirk, Paul D W; Stumpf, Michael P H

    2009-05-15

    Although widely accepted that high-throughput biological data are typically highly noisy, the effects that this uncertainty has upon the conclusions we draw from these data are often overlooked. However, in order to assign any degree of confidence to our conclusions, we must quantify these effects. Bootstrap resampling is one method by which this may be achieved. Here, we present a parametric bootstrapping approach for time-course data, in which Gaussian process regression (GPR) is used to fit a probabilistic model from which replicates may then be drawn. This approach implicitly allows the time dependence of the data to be taken into account, and is applicable to a wide range of problems. We apply GPR bootstrapping to two datasets from the literature. In the first example, we show how the approach may be used to investigate the effects of data uncertainty upon the estimation of parameters in an ordinary differential equations (ODE) model of a cell signalling pathway. Although we find that the parameter estimates inferred from the original dataset are relatively robust to data uncertainty, we also identify a distinct second set of estimates. In the second example, we use our method to show that the topology of networks constructed from time-course gene expression data appears to be sensitive to data uncertainty, although there may be individual edges in the network that are robust in light of present data. Matlab code for performing GPR bootstrapping is available from our web site: http://www3.imperial.ac.uk/theoreticalsystemsbiology/data-software/.
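
    The two-step scheme (fit a probabilistic model, then draw replicates from it) is easy to reproduce with standard tooling. A minimal sketch using scikit-learn rather than the authors' Matlab code; the kernel choice is an assumption:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def gpr_bootstrap_replicates(t, y, n_reps=100, seed=0):
        """Fit a GP to a time course and draw synthetic replicate series;
        each column of the result is one bootstrap replicate, ready to be
        passed to the downstream analysis (e.g. ODE parameter fitting)."""
        kernel = RBF() + WhiteKernel()               # smooth trend + i.i.d. noise
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(t.reshape(-1, 1), y)
        return gp.sample_y(t.reshape(-1, 1), n_samples=n_reps, random_state=seed)
    ```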

  11. Definition of total bootstrap current in tokamaks

    International Nuclear Information System (INIS)

    Ross, D.W.

    1995-01-01

    Alternative definitions of the total bootstrap current are compared. An analogous comparison is given for the ohmic and auxiliary currents. It is argued that different definitions than those usually employed lead to simpler analyses of tokamak operating scenarios

  12. CME Velocity and Acceleration Error Estimates Using the Bootstrap Method

    Science.gov (United States)

    Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji

    2017-08-01

    The bootstrap method is used to determine errors of basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed for many research studies. The basic attributes of CMEs (e.g. velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify. In many studies the impact of such measurement errors is overlooked. In this study we present a new possibility to estimate measurements errors in the basic attributes of CMEs. This approach is a computer-intensive method because it requires repeating the original data analysis procedure several times using replicate datasets. This is also commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are in the vast majority small and depend mostly on the number of height-time points measured for a particular event. In the case of acceleration, the errors are significant, and for more than half of all CMEs, they are larger than the acceleration itself.

  14. Generalised block bootstrap and its use in meteorology

    Directory of Open Access Journals (Sweden)

    L. Varga

    2017-06-01

    Full Text Available In an earlier paper, Rakonczai et al. (2014) emphasised the importance of investigating the effective sample size in the case of autocorrelated data. The simulations were based on the block bootstrap methodology. However, the discreteness of the usual block size did not allow for exact calculations. In this paper we propose a new generalisation of the block bootstrap methodology, which allows for any positive real number as the expected block size. We relate it to the existing optimisation procedures and apply it to a temperature data set. Our other focus is on statistical tests, where quite often the actual sample size plays an important role, even in the case of relatively large samples. This is especially the case for copulas, which are used for investigating the dependencies among data sets. As in quite a few real applications the time dependence cannot be neglected, we investigated the effect of this phenomenon on the test statistic used. The critical value can be computed by the proposed new block bootstrap simulation, where the block size is determined by fitting a VAR model to the observations. The results are illustrated for models fitted to the temperature data.
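
    One way to obtain a non-integer expected block size is to draw block lengths from a geometric distribution, as in the stationary bootstrap. The sketch below illustrates this idea in Python; it is a plausible stand-in for, not a reproduction of, the authors' generalised procedure, and the toy autocorrelated series is an assumption.

```python
# Sketch (not the authors' exact algorithm): block bootstrap with geometric
# block lengths, so the *expected* block size can be any real L > 1.
import numpy as np

def geometric_block_bootstrap(x, expected_block_size, rng):
    n = len(x)
    p = 1.0 / expected_block_size            # geometric distribution, mean 1/p
    out = np.empty(n, dtype=float)
    i = 0
    while i < n:
        start = rng.integers(n)              # random block start
        length = int(rng.geometric(p))       # random block length, mean L
        for k in range(min(length, n - i)):
            out[i + k] = x[(start + k) % n]  # wrap around the end of the series
        i += length
    return out

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=500))     # autocorrelated toy "temperature"
replicate = geometric_block_bootstrap(series, expected_block_size=7.5, rng=rng)
```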

  15. Experimental evaluation of a spatial resampling technique to improve the accuracy of pencil-beam dose calculation in proton therapy.

    Science.gov (United States)

    Egashira, Yusuke; Nishio, Teiji; Matsuura, Taeko; Kameoka, Satoru; Uesaka, Mitsuru

    2012-07-01

    Pencil beams generated from the resampling plane formed a detouring/overextending path that was different from that of elemental pencil beams. Therefore, when the spatial resampling was implemented at the surface and immediately upstream of the lateral heterogeneity, the calculation could predict these dose reduction/increment effects. Without the resampling procedure, these dose reduction/increment effects could not be predicted in either phantom owing to the blurring of the pencil beam. We found that the PBA with the spatial resampling technique predicted the dose reduction/increment at the dose profiles in both phantoms when the sampling plane was defined immediately upstream of the heterogeneous slab. We have demonstrated the implementation of a spatial resampling technique for pencil-beam calculation to address the problem of lateral density heterogeneity. While further validation is required for clinical use, this study suggests that the spatial resampling technique can make a significant contribution to proton therapy.

  16. Experimental evaluation of a spatial resampling technique to improve the accuracy of pencil-beam dose calculation in proton therapy

    Energy Technology Data Exchange (ETDEWEB)

    Egashira, Yusuke; Nishio, Teiji; Matsuura, Taeko; Kameoka, Satoru; Uesaka, Mitsuru [Department of Bioengineering, Graduate School of Engineering, University of Tokyo, 2-11-16, Yayoi, Bunkyo-ku, Tokyo 113-8656 (Japan) and Japan Society for the Promotion of Science, Ichibancho 8, Chiyoda-ku, Tokyo 102-8472 (Japan); Particle Therapy Division, Research Center for Innovative Oncology, National Cancer Center, Kashiwa, 6-5-1 Kashiwanoha, Kashiwa-shi, Chiba 277-8577 (Japan); Department of Applied Molecular-Imaging Physics, Graduate School of Medicine, Hokkaido University, Sapporo, Hokkaido 060-8638 (Japan); Particle Therapy Division, Research Center for Innovative Oncology, National Cancer Center, Kashiwa, 6-5-1 Kashiwanoha, Kashiwa-shi, Chiba 277-8577 (Japan); Department of Bioengineering, Graduate School of Engineering, University of Tokyo, 2-11-16, Yayoi, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2012-07-15

    Pencil beams, which were generated from the resampling plane, formed a detouring/overextending path that was different from that of elemental pencil beams. Therefore, when the spatial resampling was implemented at the surface and immediately upstream of the lateral heterogeneity, the calculation could predict these dose reduction/increment effects. Without the resampling procedure, these dose reduction/increment effects could not be predicted in either phantom owing to the blurring of the pencil beam. We found that the PBA with the spatial resampling technique predicted the dose reduction/increment at the dose profiles in both phantoms when the sampling plane was defined immediately upstream of the heterogeneous slab. Conclusions: We have demonstrated the implementation of a spatial resampling technique for pencil-beam calculation to address the problem of lateral density heterogeneity. While further validation is required for clinical use, this study suggests that the spatial resampling technique can make a significant contribution to proton therapy.

  17. Bootstrap-Calibrated Interval Estimates for Latent Variable Scores in Item Response Theory.

    Science.gov (United States)

    Liu, Yang; Yang, Ji Seung

    2017-09-06

    In most item response theory applications, model parameters need to be first calibrated from sample data. Latent variable (LV) scores calculated using estimated parameters are thus subject to sampling error inherited from the calibration stage. In this article, we propose a resampling-based method, namely bootstrap calibration (BC), to reduce the impact of the carryover sampling error on the interval estimates of LV scores. BC modifies the quantile of the plug-in posterior, i.e., the posterior distribution of the LV evaluated at the estimated model parameters, to better match the corresponding quantile of the true posterior, i.e., the posterior distribution evaluated at the true model parameters, over repeated sampling of calibration data. Furthermore, to achieve better coverage of the fixed true LV score, we explore the use of BC in conjunction with Jeffreys' prior. We investigate the finite-sample performance of BC via Monte Carlo simulations and apply it to two empirical data examples.

  18. Comparison of Bootstrap Confidence Intervals Using Monte Carlo Simulations

    OpenAIRE

    Roberto S. Flowers-Cano; Ruperto Ortiz-Gómez; Jesús Enrique León-Jiménez; Raúl López Rivera; Luis A. Perera Cruz

    2018-01-01

    Design of hydraulic works requires the estimation of design hydrological events by statistical inference from a probability distribution. Using Monte Carlo simulations, we compared coverage of confidence intervals constructed with four bootstrap techniques: percentile bootstrap (BP), bias-corrected bootstrap (BC), accelerated bias-corrected bootstrap (BCA) and a modified version of the standard bootstrap (MSB). Different simulation scenarios were analyzed. In some cases, the mother distributi...

  19. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    Science.gov (United States)

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performance and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate its validity in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
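
    As a concrete illustration of within-cluster resampling, the Python sketch below repeatedly draws one observation per cluster, fits an ordinary logistic regression to each resample, and averages the estimates. The simulated clustered data are an assumption for illustration, and the raw spread printed at the end is not the formal WCR variance, which requires an additional correction.

```python
# Sketch of within-cluster resampling (WCR): draw one observation per cluster,
# fit an ordinary GLM, repeat, and average. The clustered data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_clusters = 200
sizes = rng.integers(2, 12, n_clusters)             # informative cluster sizes
rows, x, y = [], [], []
for c, m in enumerate(sizes):
    xc = rng.normal()                               # cluster-specific covariate
    pc = 1.0 / (1.0 + np.exp(-(0.5 * xc - 0.2)))
    for _ in range(m):
        rows.append(c); x.append(xc); y.append(rng.random() < pc)
rows, x, y = np.array(rows), np.array(x), np.array(y, dtype=int)

betas = []
for _ in range(500):
    # one randomly chosen observation per cluster
    idx = np.array([rng.choice(np.where(rows == c)[0]) for c in range(n_clusters)])
    fit = LogisticRegression(C=1e6).fit(x[idx, None], y[idx])  # large C ~ no penalty
    betas.append(fit.coef_[0, 0])

# WCR point estimate; note that the raw spread of the resample estimates is
# not the WCR variance, which also uses the within-resample variance estimates.
print(np.mean(betas), np.std(betas))
```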

  20. On uniform resampling and gaze analysis of bidirectional texture functions

    Czech Academy of Sciences Publication Activity Database

    Filip, Jiří; Chantler, M.J.; Haindl, Michal

    2009-01-01

    Roč. 6, č. 3 (2009), s. 1-15 ISSN 1544-3558 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 Grant - others:EC Marie Curie(BE) 41358 Institutional research plan: CEZ:AV0Z10750506 Keywords : BTF * texture * eye tracking Subject RIV: BD - Theory of Information Impact factor: 1.447, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-on uniform resampling and gaze analysis of bidirectional texture functions.pdf

  1. Analyzing large datasets with bootstrap penalization.

    Science.gov (United States)

    Fang, Kuangnan; Ma, Shuangge

    2017-03-01

    Data with a large p (number of covariates) and/or a large n (sample size) are now commonly encountered. For many problems, regularization, especially penalization, is adopted for estimation and variable selection. The straightforward application of penalization to large datasets demands a "big computer" with high computational power. To improve computational feasibility, we develop bootstrap penalization, which dissects a big penalized estimation into a set of small ones that can be executed in a highly parallel manner, each demanding only a "small computer". The proposed approach takes different strategies for data with different characteristics. For data with a large p but a small to moderate n, covariates are first clustered into relatively homogeneous blocks. The proposed approach consists of two sequential steps. In each step and for each bootstrap sample, we select blocks of covariates and run penalization. The results from multiple bootstrap samples are pooled to generate the final estimate. For data with a large n but a small to moderate p, we bootstrap a small number of subjects, apply penalized estimation, and then conduct a weighted average over multiple bootstrap samples. For data with a large p and a large n, the natural marriage of the previous two methods is applied. Numerical studies, including simulations and data analysis, show that the proposed approach has computational and numerical advantages over the straightforward application of penalization. An R package has been developed to implement the proposed methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
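
    For the large-n case described above, a minimal sketch looks like this: bootstrap small subject samples, run a lasso on each, and average. The sizes, penalty level and equal weighting below are illustrative assumptions, not the settings of the authors' R package.

```python
# Sketch of the large-n strategy: bootstrap small subject samples, run a lasso
# on each "small computer", and average the resulting coefficient vectors.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n, p, m, B = 100_000, 20, 2_000, 50          # large n; B small subsets of size m
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                  # a few true signals
y = X @ beta + rng.normal(size=n)

estimates = []
for _ in range(B):                           # trivially parallelizable fits
    idx = rng.integers(0, n, size=m)         # bootstrap a small subject sample
    estimates.append(Lasso(alpha=0.05).fit(X[idx], y[idx]).coef_)

beta_hat = np.mean(estimates, axis=0)        # simple (equal-weight) average
print(np.round(beta_hat, 2))
```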

  2. Evaluation of Jackknife and Bootstrap for Defining Confidence Intervals for Pairwise Agreement Measures

    Science.gov (United States)

    Severiano, Ana; Carriço, João A.; Robinson, D. Ashley; Ramirez, Mário; Pinto, Francisco R.

    2011-01-01

    Several research fields frequently deal with the analysis of diverse classification results of the same entities, which calls for the objective detection of overlaps and divergences between the formed clusters. The congruence between classifications can be quantified by clustering agreement measures, including pairwise agreement measures. Several measures have been proposed and the importance of obtaining confidence intervals for the point estimate in the comparison of these measures has been highlighted. A broad range of methods can be used for the estimation of confidence intervals. However, evidence is lacking about which methods are appropriate for the calculation of confidence intervals for most clustering agreement measures. Here we evaluate the resampling techniques of bootstrap and jackknife for the calculation of confidence intervals for clustering agreement measures. Contrary to what has been shown for some statistics, simulations showed that the jackknife performs better than the bootstrap at accurately estimating confidence intervals for pairwise agreement measures, especially when the agreement between partitions is low. The coverage of the jackknife confidence interval is robust to changes in cluster number and cluster size distribution. PMID:21611165
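
    A delete-one jackknife confidence interval for a pairwise agreement measure can be computed directly from pseudo-values, as sketched below for the adjusted Rand index; the two partitions are simulated assumptions, and the normal-approximation interval is one common choice.

```python
# Delete-one jackknife CI for the adjusted Rand index between two partitions;
# the partitions are simulated, and a normal-approximation interval is used.
import numpy as np
from scipy.stats import norm
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(3)
a = rng.integers(0, 4, 300)                                      # classification 1
b = np.where(rng.random(300) < 0.7, a, rng.integers(0, 4, 300))  # noisy copy

n = len(a)
theta = adjusted_rand_score(a, b)
loo = np.array([adjusted_rand_score(np.delete(a, i), np.delete(b, i))
                for i in range(n)])                # leave-one-out estimates
pseudo = n * theta - (n - 1) * loo                 # jackknife pseudo-values
se = pseudo.std(ddof=1) / np.sqrt(n)
z = norm.ppf(0.975)
print(f"ARI = {theta:.3f}, 95% jackknife CI = "
      f"[{pseudo.mean() - z * se:.3f}, {pseudo.mean() + z * se:.3f}]")
```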

  3. Conference on Bootstrapping and Related Techniques

    CERN Document Server

    Rothe, Günter; Sendler, Wolfgang

    1992-01-01

    This book contains 30 selected, refereed papers from an international conference on bootstrapping and related techniques held in Trier in 1990. The purpose of the book is to inform about recent research in the areas of the bootstrap, the jackknife and Monte Carlo tests. Addressing both the novice and the expert, it covers theoretical as well as practical aspects of these statistical techniques. Potential users in disciplines such as biometry, epidemiology, computer science, economics and sociology, as well as theoretical researchers, should consult the book to be informed on the state of the art in this area.

  4. Early Stop Criterion from the Bootstrap Ensemble

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Larsen, Jan; Fog, Torben L.

    1997-01-01

    This paper addresses the problem of generalization error estimation in neural networks. A new early stop criterion based on a Bootstrap estimate of the generalization error is suggested. The estimate does not require the network to be trained to the minimum of the cost function, as required by other methods based on asymptotic theory. Moreover, in contrast to methods based on cross-validation, which require data to be left out for testing and thus bias the estimate, the Bootstrap technique does not have this disadvantage. The potential of the suggested technique is demonstrated on various time...

  5. Bayesian inference and the parametric bootstrap

    Science.gov (United States)

    Efron, Bradley

    2013-01-01

    The parametric bootstrap can be used for the efficient computation of Bayes posterior distributions. Importance sampling formulas take on an easy form relating to the deviance in exponential families, and are particularly simple starting from Jeffreys invariant prior. Because of the i.i.d. nature of bootstrap sampling, familiar formulas describe the computational accuracy of the Bayes estimates. Besides computational methods, the theory provides a connection between Bayesian and frequentist analysis. Efficient algorithms for the frequentist accuracy of Bayesian inferences are developed and demonstrated in a model selection example. PMID:23843930

  6. Bootstrap percolation: a renormalisation group approach

    International Nuclear Information System (INIS)

    Branco, N.S.; Santos, Raimundo R. dos; Queiroz, S.L.A. de.

    1984-02-01

    In bootstrap percolation, sites are occupied at random with probability p, but each site is considered active only if at least m of its neighbours are also active. Within an approximate position-space renormalization group framework on a square lattice we obtain the behaviour of the critical concentration p_c and of the critical exponents ν and β for m = 0 (ordinary percolation), 1, 2 and 3. We find that the bootstrap percolation problem can be cast into different universality classes, characterized by the values of m. (author)

  7. Uncertainty assessment of gamma-aminobutyric acid concentration of different brain regions in individual and group using residual bootstrap analysis.

    Science.gov (United States)

    Chen, Meng; Liao, Congyu; Chen, Song; Ding, Qiuping; Zhu, Darong; Liu, Hui; Yan, Xu; Zhong, Jianhui

    2017-06-01

    The aim of this work is to quantify individual and regional differences in the relative concentration of gamma-aminobutyric acid (GABA) in the human brain with in vivo magnetic resonance spectroscopy. The spectral editing Mescher-Garwood point resolved spectroscopy (MEGA-PRESS) sequence and the GABA analysis toolkit (Gannet) were used to detect and quantify GABA in the anterior cingulate cortex (ACC) and occipital cortex (OCC) of healthy volunteers. Residual bootstrap, a model-based statistical analysis technique, was applied to resample the fitting residuals of GABA from the Gaussian fitting model (referred to as GABA+ hereafter) in both individual and group data of ACC and OCC. The inter-subject coefficient of variation (CV) of GABA+ in OCC (20.66%) and ACC (12.55%) with residual bootstrap was lower than that of a standard Gaussian model analysis (21.58% and 16.73% for OCC and ACC, respectively). The intra-subject uncertainty and CV of OCC were lower than those of ACC in both analyses. The residual bootstrap analysis thus provides a more robust uncertainty estimation of individual and group GABA+ detection in different brain regions, which may be useful for our understanding of GABA biochemistry in the brain and for the diagnosis of related neuropsychiatric diseases.
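
    The residual bootstrap idea used here (resample the residuals of the Gaussian peak fit, refit, and summarize the spread) can be sketched as follows. The synthetic spectrum, model and area-based summary are assumptions standing in for MEGA-PRESS data and the Gannet pipeline.

```python
# Residual-bootstrap sketch for a Gaussian peak fit; the synthetic "spectrum"
# stands in for MEGA-PRESS data and the area summary is an assumed stand-in
# for a GABA+ concentration estimate.
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(5)
ppm = np.linspace(2.6, 3.4, 120)
spec = gauss(ppm, 1.0, 3.0, 0.08) + rng.normal(0, 0.05, ppm.size)

popt, _ = curve_fit(gauss, ppm, spec, p0=(1.0, 3.0, 0.1))
resid = spec - gauss(ppm, *popt)

areas = []
for _ in range(1000):                        # resample residuals and refit
    spec_star = gauss(ppm, *popt) + rng.choice(resid, ppm.size, replace=True)
    p_star, _ = curve_fit(gauss, ppm, spec_star, p0=popt)
    areas.append(p_star[0] * abs(p_star[2]) * np.sqrt(2 * np.pi))  # peak area

areas = np.array(areas)
print(f"GABA+ area CV = {100 * areas.std() / areas.mean():.1f}%")
```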

  8. How to Bootstrap a Human Communication System

    Science.gov (United States)

    Fay, Nicolas; Arbib, Michael; Garrod, Simon

    2013-01-01

    How might a human communication system be bootstrapped in the absence of conventional language? We argue that motivated signs play an important role (i.e., signs that are linked to meaning by structural resemblance or by natural association). An experimental study is then reported in which participants try to communicate a range of pre-specified…

  9. Pulling Econometrics Students up by Their Bootstraps

    Science.gov (United States)

    O'Hara, Michael E.

    2014-01-01

    Although the concept of the sampling distribution is at the core of much of what we do in econometrics, it is a concept that is often difficult for students to grasp. The thought process behind bootstrapping provides a way for students to conceptualize the sampling distribution in a way that is intuitive and visual. However, teaching students to…

  10. Bootstrapping Kernel-Based Semiparametric Estimators

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Jansson, Michael

    by accommodating a non-negligible bias. A noteworthy feature of the assumptions under which the result is obtained is that reliance on a commonly employed stochastic equicontinuity condition is avoided. The second main result shows that the bootstrap provides an automatic method of correcting for the bias even when it is non-negligible.

  11. Speckle reduction in digital holography with resampling ring masks

    Science.gov (United States)

    Zhang, Wenhui; Cao, Liangcai; Jin, Guofan

    2018-01-01

    One-shot digital holographic imaging has the advantages of high stability and low temporal cost. However, the reconstruction is affected by speckle noise. A resampling ring-mask method in the spectrum domain is proposed for speckle reduction. The useful spectrum of one hologram is divided into several sub-spectra by ring masks. In the reconstruction, the angular spectrum transform, which involves no approximation, is applied to guarantee calculation accuracy. N reconstructed amplitude images are calculated from the corresponding sub-spectra. Thanks to the random distribution of speckle, superimposing these N uncorrelated amplitude images leads to a final reconstructed image with lower speckle noise. Normalized relative standard deviation values of the reconstructed image are used to evaluate the speckle reduction. The effect of the method on the spatial resolution of the reconstructed image is also quantitatively evaluated. Experimental and simulation results prove the feasibility and effectiveness of the proposed method.

  12. Efficient generation of pronunciation dictionaries: human factors during bootstrapping

    CSIR Research Space (South Africa)

    Davel, MH

    2004-10-01

    Full Text Available Bootstrapping techniques have significant potential for the efficient generation of linguistic resources such as electronic pronunciation dictionaries. The authors describe a system and an approach to bootstrapping for the development...

  13. Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.

    Science.gov (United States)

    Yin, Guosheng; Ma, Yanyuan

    2013-01-01

    The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adds exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructing the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
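
    The key step of the method, using an MLE computed from a bootstrap sample while keeping the observed counts from the original data, is easy to sketch. The normal model, quantile bins and sample size below are illustrative assumptions.

```python
# Sketch of the bootstrap-MLE Pearson test: observed counts come from the
# original data, expected counts use the MLE of a *bootstrap* sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = rng.normal(0.3, 1.2, 400)                    # original data
edges = np.quantile(x, np.linspace(0, 1, 11))    # 10 bins
edges[0], edges[-1] = -np.inf, np.inf
obs, _ = np.histogram(x, edges)

xb = rng.choice(x, x.size, replace=True)         # bootstrap sample
mu_b, sd_b = xb.mean(), xb.std()                 # MLE from the bootstrap sample

expected = x.size * np.diff(stats.norm.cdf(edges, mu_b, sd_b))
chi2 = ((obs - expected) ** 2 / expected).sum()
df = len(obs) - 1                                # df recovered by the method
print(chi2, stats.chi2.sf(chi2, df))             # statistic and p-value
```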

  14. Bootstrapping Relational Affordances of Object Pairs using Transfer

    DEFF Research Database (Denmark)

    Fichtl, Severin; Kraft, Dirk; Krüger, Norbert

    2018-01-01

    leverage past knowledge to accelerate current learning (which we call bootstrapping). We learn Random Forest-based affordance predictors from visual inputs and demonstrate two approaches to knowledge transfer for bootstrapping. In the first approach (direct bootstrapping), the state-space for a new...

  15. Higher Order Bootstrap likelihood | Ogbonmwam | Journal of the ...

    African Journals Online (AJOL)

    In this work, higher order optimal window width is used to generate bootstrap kernel density likelihood. A simulated study is conducted to compare the distributions of the higher order bootstrap likelihoods with the exact (empirical) bootstrap likelihood. Our results indicate that the optimal window width of orders 2 and 4 ...

  16. The use of the bootstrap in the analysis of case-control studies with missing data

    DEFF Research Database (Denmark)

    Siersma, Volkert Dirk; Johansen, Christoffer

    2004-01-01

    nonparametric bootstrap, bootstrap confidence intervals, missing values, multiple imputation, matched case-control study

  17. Bootstrapping Q Methodology to Improve the Understanding of Human Perspectives

    Science.gov (United States)

    Zabala, Aiora; Pascual, Unai

    2016-01-01

    Q is a semi-qualitative methodology to identify typologies of perspectives. It is appropriate to address questions concerning diverse viewpoints, plurality of discourses, or participation processes across disciplines. Perspectives are interpreted based on rankings of a set of statements. These rankings are analysed using multivariate data reduction techniques in order to find similarities between respondents. Discussing the analytical process and looking for progress in Q methodology is becoming increasingly relevant. While its use is growing in social, health and environmental studies, the analytical process has received little attention in the last decades and it has not benefited from recent statistical and computational advances. Specifically, the standard procedure provides overall and arguably simplistic variability measures for perspectives and none of these measures are associated to individual statements, on which the interpretation is based. This paper presents an innovative approach of bootstrapping Q to obtain additional and more detailed measures of variability, which helps researchers understand better their data and the perspectives therein. This approach provides measures of variability that are specific to each statement and perspective, and additional measures that indicate the degree of certainty with which each respondent relates to each perspective. This supplementary information may add or subtract strength to particular arguments used to describe the perspectives. We illustrate and show the usefulness of this approach with an empirical example. The paper provides full details for other researchers to implement the bootstrap in Q studies with any data collection design. PMID:26845694

  19. Bootstrapping a change-point Cox model for survival data

    Science.gov (United States)

    Xu, Gongjun; Sen, Bodhisattva; Ying, Zhiliang

    2014-01-01

    This paper investigates the (in)consistency of various bootstrap methods for making inference on a change-point in time in the Cox model with right censored survival data. A criterion is established for the consistency of any bootstrap method. It is shown that the usual nonparametric bootstrap is inconsistent for the maximum partial likelihood estimation of the change-point. A new model-based bootstrap approach is proposed and its consistency established. Simulation studies are carried out to assess the performance of various bootstrap schemes. PMID:25400719

  20. The $(2,0)$ superconformal bootstrap

    CERN Document Server

    Beem, Christopher; Rastelli, Leonardo; van Rees, Balt C

    2016-01-01

    We develop the conformal bootstrap program for six-dimensional conformal field theories with $(2,0)$ supersymmetry, focusing on the universal four-point function of stress tensor multiplets. We review the solution of the superconformal Ward identities and describe the superconformal block decomposition of this correlator. We apply numerical bootstrap techniques to derive bounds on OPE coefficients and scaling dimensions from the constraints of crossing symmetry and unitarity. We also derive analytic results for the large spin spectrum using the lightcone expansion of the crossing equation. Our principal result is strong evidence that the $A_1$ theory realizes the minimal allowed central charge $(c=25)$ for any interacting $(2,0)$ theory. This implies that the full stress tensor four-point function of the $A_1$ theory is the unique unitary solution to the crossing symmetry equation at $c=25$. For this theory, we estimate the scaling dimensions of the lightest unprotected operators appearing in the stress tenso...

  1. Heptagons from the Steinmann cluster bootstrap

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Lance J.; McLeod, Andrew J. [Stanford Univ., CA (United States). SLAC National Accelerator Lab.; Drummond, James [Southampton Univ. (United Kingdom). School of Physics and Astronomy; Harrington, Thomas; Spradlin, Marcus [Brown Univ., Providence, RI (United States). Dept. of Physics; Papathanasiou, Georgios [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Stanford Univ., CA (United States). SLAC National Accelerator Lab.

    2016-12-15

    We reformulate the heptagon cluster bootstrap to take advantage of the Steinmann relations, which require certain double discontinuities of any amplitude to vanish. These constraints vastly reduce the number of functions needed to bootstrap seven-point amplitudes in planar N=4 supersymmetric Yang-Mills theory, making higher-loop contributions to these amplitudes more computationally accessible. In particular, dual superconformal symmetry and well-defined collinear limits suffice to determine uniquely the symbols of the three-loop NMHV and four-loop MHV seven-point amplitudes. We also show that at three loops, relaxing the dual superconformal (anti Q) relations and imposing dihedral symmetry (and for NMHV the absence of spurious poles) leaves only a single ambiguity in the heptagon amplitudes. These results point to a strong tension between the collinear properties of the amplitudes and the Steinmann relations.

  2. Heptagons from the Steinmann cluster bootstrap

    International Nuclear Information System (INIS)

    Dixon, Lance J.; McLeod, Andrew J.; Drummond, James; Harrington, Thomas; Spradlin, Marcus; Papathanasiou, Georgios; Stanford Univ., CA

    2016-12-01

    We reformulate the heptagon cluster bootstrap to take advantage of the Steinmann relations, which require certain double discontinuities of any amplitude to vanish. These constraints vastly reduce the number of functions needed to bootstrap seven-point amplitudes in planar N=4 supersymmetric Yang-Mills theory, making higher-loop contributions to these amplitudes more computationally accessible. In particular, dual superconformal symmetry and well-defined collinear limits suffice to determine uniquely the symbols of the three-loop NMHV and four-loop MHV seven-point amplitudes. We also show that at three loops, relaxing the dual superconformal (anti Q) relations and imposing dihedral symmetry (and for NMHV the absence of spurious poles) leaves only a single ambiguity in the heptagon amplitudes. These results point to a strong tension between the collinear properties of the amplitudes and the Steinmann relations.

  3. Kepler Planet Detection Metrics: Statistical Bootstrap Test

    Science.gov (United States)

    Jenkins, Jon M.; Burke, Christopher J.

    2016-01-01

    This document describes the data produced by the Statistical Bootstrap Test over the final three Threshold Crossing Event (TCE) deliveries to NExScI: SOC 9.1 (Q1-Q16) (Tenenbaum et al. 2014), SOC 9.2 (Q1-Q17), also known as DR24 (Seader et al. 2015), and SOC 9.3 (Q1-Q17), also known as DR25 (Twicken et al. 2016). The last few years have seen significant improvements in the SOC science data processing pipeline, leading to higher quality light curves and more sensitive transit searches. The statistical bootstrap analysis results presented here and the numerical results archived at NASA's Exoplanet Science Institute (NExScI) bear witness to these software improvements. This document attempts to introduce and describe the main features and differences between these three data sets as a consequence of the software changes.

  4. A Bootstrap Test for Conditional Symmetry

    OpenAIRE

    Liangjun Su; Sainan Jin

    2005-01-01

    This paper proposes a simple consistent nonparametric test of conditional symmetry based on the principle of characteristic functions. The test statistic is shown to be asymptotically normal under the null hypothesis of conditional symmetry and consistent against any conditionally asymmetric distribution. We also study the power against local alternatives, propose a bootstrap version of the test, and conduct a small Monte Carlo simulation to evaluate the finite-sample performance of the test.

  5. Estimasi Regresi Wavelet Thresholding Dengan Metode Bootstrap

    OpenAIRE

    Suparti, Suparti; Mustofa, Achmad; Rusgiyono, Agus

    2007-01-01

    A wavelet is a function with certain characteristics: for example, it oscillates about zero, is localized in the time and frequency domains, and constructs orthogonal bases in the L2(R) space. One application of wavelets is the estimation of a nonparametric regression function. There are two kinds of wavelet estimators, i.e., linear and nonlinear wavelet estimators. The nonlinear wavelet estimator is called a thresholding wavelet estimator. The application of the bootstrap method...

  6. Bootstrap inference when using multiple imputation.

    Science.gov (United States)

    Schomaker, Michael; Heumann, Christian

    2018-04-16

    Many modern estimators require bootstrapping to calculate confidence intervals because either no analytic standard error is available or the distribution of the parameter of interest is nonsymmetric. It remains however unclear how to obtain valid bootstrap inference when dealing with multiple imputation to address missing data. We present 4 methods that are intuitively appealing, easy to implement, and combine bootstrap estimation with multiple imputation. We show that 3 of the 4 approaches yield valid inference, but that the performance of the methods varies with respect to the number of imputed data sets and the extent of missingness. Simulation studies reveal the behavior of our approaches in finite samples. A topical analysis from HIV treatment research, which determines the optimal timing of antiretroviral treatment initiation in young children, demonstrates the practical implications of the 4 methods in a sophisticated and realistic setting. This analysis suffers from missing data and uses the g-formula for inference, a method for which no standard errors are available. Copyright © 2018 John Wiley & Sons, Ltd.
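
    One of the intuitive combinations discussed in this literature, bootstrapping the incomplete data and imputing each bootstrap sample several times before pooling, can be sketched as follows. The single-variable setting and the simple normal-model imputation are assumptions for illustration; proper multiple imputation would also draw the imputation-model parameters from their posterior.

```python
# "Bootstrap, then impute" sketch for the mean of one variable with values
# missing completely at random; the normal-model imputation is a simplification.
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(10, 2, 200)
y[rng.random(200) < 0.3] = np.nan               # ~30% missing

def impute_once(v, rng):
    obs = v[~np.isnan(v)]
    out = v.copy()
    # draw missing values from a normal fitted to the observed data
    out[np.isnan(v)] = rng.normal(obs.mean(), obs.std(), np.isnan(v).sum())
    return out

B, M, est = 500, 5, []
for _ in range(B):
    star = rng.choice(y, y.size, replace=True)  # bootstrap the incomplete data
    est.append(np.mean([impute_once(star, rng).mean() for _ in range(M)]))

lo, hi = np.percentile(est, [2.5, 97.5])
print(f"mean = {np.nanmean(y):.2f}, 95% bootstrap-MI CI = [{lo:.2f}, {hi:.2f}]")
```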

  7. Bootstrap Approach To Compare the Slopes of Two Calibrations When Few Standards Are Available.

    Science.gov (United States)

    Estévez-Pérez, Graciela; Andrade, Jose M; Wilcox, Rand R

    2016-02-16

    Comparing the slopes of aqueous-based and standard addition calibration procedures is almost a daily task in analytical laboratories. As the usual protocols imply very few standards, sound statistical inference and conclusions are hard to obtain with the current classical tests (e.g., the t-test), which may greatly affect decision-making. Thus, there is a need for robust statistics that are not distorted by the small samples of experimental values obtained from analytical studies. Several promising alternatives based on bootstrapping are studied in this paper under the typical constraints common in laboratory work. The impact of the number of standards, homoscedasticity or heteroscedasticity, three variance patterns, and three error distributions on least-squares fits was considered (in total, 144 simulation scenarios). The Student's t-test is the most valuable procedure when the normality assumption is true and homoscedasticity is present, although it can be highly affected by outliers. A wild bootstrap method leads to average rejection percentages that are closer to the nominal level in almost every situation, and it is recommended for laboratories working with a small number of standards. Finally, it was seen that the Theil-Sen percentile bootstrap statistic is very robust, although its rejection percentages depart from the nominal ones when bootstrap principles are applied to compare the slopes of two calibration lines.
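
    A wild bootstrap comparison of two calibration slopes can be sketched as below: residuals are perturbed by Rademacher weights and the slope difference is re-estimated on each replicate. The two synthetic calibration lines and the centring-based p-value are illustrative assumptions, not the exact protocol studied in the paper.

```python
# Wild-bootstrap sketch for comparing two calibration slopes with few standards.
import numpy as np

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

rng = np.random.default_rng(9)
x = np.array([0.0, 1.0, 2.0, 4.0, 8.0])           # very few standards
y1 = 2.0 * x + 0.5 + rng.normal(0, 0.15, x.size)  # aqueous calibration
y2 = 2.1 * x + 0.3 + rng.normal(0, 0.15, x.size)  # standard-addition calibration

d_obs = slope(x, y1) - slope(x, y2)
fit1 = np.poly1d(np.polyfit(x, y1, 1))
fit2 = np.poly1d(np.polyfit(x, y2, 1))
r1, r2 = y1 - fit1(x), y2 - fit2(x)

diffs = []
for _ in range(5000):
    w1 = rng.choice([-1.0, 1.0], x.size)          # Rademacher weights
    w2 = rng.choice([-1.0, 1.0], x.size)
    d = slope(x, fit1(x) + w1 * r1) - slope(x, fit2(x) + w2 * r2)
    diffs.append(d - d_obs)                       # centre to mimic H0: equal slopes
p = np.mean(np.abs(diffs) >= abs(d_obs))
print(f"slope difference = {d_obs:.3f}, wild-bootstrap p = {p:.3f}")
```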

  8. Estimating variability in functional images using a synthetic resampling approach

    International Nuclear Information System (INIS)

    Maitra, R.; O'Sullivan, F.

    1996-01-01

    Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort of simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods.

  9. Resampling to Speed Up Consolidation of Point Clouds

    Directory of Open Access Journals (Sweden)

    Huanyu Yang

    2015-01-01

    Full Text Available Processing of large-scale scattered point clouds has currently become a hot topic in the field of computer graphics research. The Weighted Locally Optimal Projection (WLOP) algorithm, a valid tool for producing a set of denoised, outlier-free, and evenly distributed particles over the original point clouds, has been used in the consolidation of unorganized 3D point clouds by many researchers. However, the algorithm is considered relatively ineffective, due to the large amount of point cloud data and the iterative calculation. In this paper, a resampling method is applied to the point set of the 3D model, which significantly improves the computing speed of the WLOP algorithm. In order to measure the impact of the error, which will increase with the improvement of calculation efficiency, on the accuracy of the algorithm, we define two quantitative indicators, that is, the projection error and the uniformity of distribution. The performance of our method is evaluated using both quantitative and qualitative analyses. Our experimental validation demonstrates that this method greatly improves calculating efficiency, notwithstanding the slightly reduced projection accuracy in comparison to WLOP.

  10. Lightweight CoAP-Based Bootstrapping Service for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Dan Garcia-Carrillo

    2016-03-01

    Full Text Available The Internet of Things (IoT) is becoming increasingly important in several fields of industrial and personal applications, such as medical e-health, smart cities, etc. The research into protocols and security aspects related to this area is continuously advancing in making these networks more reliable and secure, taking these aspects into account by design. Bootstrapping is a procedure by which a user obtains key material and configuration information, among other parameters, to operate as an authenticated party in a security domain. Until now solutions have focused on re-using security protocols that were not developed for IoT constraints. For this reason, in this work we propose a design and implementation of a lightweight bootstrapping service for IoT networks that leverages one of the application protocols used in IoT: the Constrained Application Protocol (CoAP). Additionally, in order to provide flexibility, scalability, support for large-scale deployment, accountability and identity federation, our design uses technologies such as the Extensible Authentication Protocol (EAP) and Authentication, Authorization and Accounting (AAA). We have named this service CoAP-EAP. First, we review the state of the art in the field of bootstrapping, specifically for IoT. Second, we detail the bootstrapping service: the architecture with entities and interfaces and the flow operation. Third, we obtain performance measurements of CoAP-EAP (bootstrapping time, memory footprint, message processing time, message length and energy consumption) and compare them with PANATIKI, the most significant and constrained representative of the bootstrapping solutions related to CoAP-EAP. As we will show, our solution provides significant improvements, mainly due to an important reduction of the message length.

  11. Lightweight CoAP-Based Bootstrapping Service for the Internet of Things.

    Science.gov (United States)

    Garcia-Carrillo, Dan; Marin-Lopez, Rafael

    2016-03-11

    The Internet of Things (IoT) is becoming increasingly important in several fields of industrial and personal applications, such as medical e-health, smart cities, etc. The research into protocols and security aspects related to this area is continuously advancing in making these networks more reliable and secure, taking these aspects into account by design. Bootstrapping is a procedure by which a user obtains key material and configuration information, among other parameters, to operate as an authenticated party in a security domain. Until now solutions have focused on re-using security protocols that were not developed for IoT constraints. For this reason, in this work we propose a design and implementation of a lightweight bootstrapping service for IoT networks that leverages one of the application protocols used in IoT: the Constrained Application Protocol (CoAP). Additionally, in order to provide flexibility, scalability, support for large-scale deployment, accountability and identity federation, our design uses technologies such as the Extensible Authentication Protocol (EAP) and Authentication, Authorization and Accounting (AAA). We have named this service CoAP-EAP. First, we review the state of the art in the field of bootstrapping, specifically for IoT. Second, we detail the bootstrapping service: the architecture with entities and interfaces and the flow operation. Third, we obtain performance measurements of CoAP-EAP (bootstrapping time, memory footprint, message processing time, message length and energy consumption) and compare them with PANATIKI, the most significant and constrained representative of the bootstrapping solutions related to CoAP-EAP. As we will show, our solution provides significant improvements, mainly due to an important reduction of the message length.

  12. arXiv Bootstrapping the QCD soft anomalous dimension

    CERN Document Server

    Almelid, Øyvind; Gardi, Einan; McLeod, Andrew; White, Chris D.

    2017-09-18

    The soft anomalous dimension governs the infrared singularities of scattering amplitudes to all orders in perturbative quantum field theory, and is a crucial ingredient in both formal and phenomenological applications of non-abelian gauge theories. It has recently been computed at three-loop order for massless partons by explicit evaluation of all relevant Feynman diagrams. In this paper, we show how the same result can be obtained, up to an overall numerical factor, using a bootstrap procedure. We first give a geometrical argument for the fact that the result can be expressed in terms of single-valued harmonic polylogarithms. We then use symmetry considerations as well as known properties of scattering amplitudes in collinear and high-energy (Regge) limits to constrain an ansatz of basis functions. This is a highly non-trivial cross-check of the result, and our methods pave the way for greatly simplified higher-order calculations.

  13. Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters

    Science.gov (United States)

    Kim, T.; Kim, Y. S.

    2017-12-01

    The frequency analysis of hydrometeorological data is one of the most important factors in responding to natural disaster damage and in setting design standards for disaster prevention facilities. Frequency analysis of hydrometeorological data assumes that the observations are statistically stationary, and a parametric method considering the parameters of a probability distribution is applied. A parametric method requires a sufficiently large sample of reliable data; in Korea, however, snowfall observations must be supplemented, because climate change is reducing the number of snowfall observation days and the mean maximum daily snowfall depth. In this study, we conducted a frequency analysis of snowfall using the Bootstrap method and the SIR algorithm, resampling methods that can overcome the problems of insufficient data. For the 58 meteorological stations distributed evenly over Korea, the probability of snowfall depth was estimated by non-parametric frequency analysis using the maximum daily snowfall depth data. The results show that the probabilistic daily snowfall depth obtained by frequency analysis decreases at most stations, and the rates of change at most stations were found to be consistent between the parametric and non-parametric frequency analyses. This study shows that resampling methods can be used for the frequency analysis of snowfall depth when only insufficient observed samples are available, and they can be applied to the interpretation of other natural disasters such as summer typhoons with seasonal characteristics. Acknowledgment: This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).

  14. ROSETTA-ORBITER 67P RPCMAG 4 ESC1 RESAMPLED V6.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset contains RESAMPLED DATA (CODMAC LEVEL 4) of the COMET ESCORT1 Phase from November 22, 2014 until March 10, 2015 of the ROSETTA orbiter magnetometer...

  15. ROSETTA-ORBITER EARTH RPCMAG 4 EAR1 RESAMPLED V3.0

    Data.gov (United States)

    National Aeronautics and Space Administration — 2010-07-30 SBN:T.Barnes Updated and DATA_SET_DESCThis dataset contains RESAMPLED DATA of the first Earth Flyby (EAR1). Included are the data of the very Flyby from...

  16. VG2 URA LECP RESAMPLED SUMMARY SCAN AVERAGED 15MIN V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This browse data consists of resampled data from the Low Energy Charged Particle (LECP) experiment on Voyager 2 while the spacecraft was in the vicinity of Uranus....

  17. VG2 URA LECP RESAMPLED RDR STEPPING SECTOR 15MIN V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set consists of resampled data from the Low Energy Charged Particle (LECP) experiment on Voyager 2 while the spacecraft was in the vicinity of Uranus. This...

  18. ROSETTA-ORBITER STEINS RPCMAG 4 AST1 RESAMPLED V3.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset contains RESAMPLED DATA of the STEINS flyby Phase from September 1 until September 10, 2008. The closest approach (CA) took place on September 5, 2008...

  19. ROSETTA-ORBITER LUTETIA RPCMAG 4 AST2 RESAMPLED V3.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset contains RESAMPLED DATA of the LUTETIA flyby Phase from July 7 until July 13, 2010. The closest approach (CA) took place on July 10, 2010 at 15:45

  20. VG2 JUP LECP CALIBRATED RESAMPLED SECTORED 15MIN V1.1

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set consists of resampled data from the Low Energy Charged Particle (LECP) experiment on Voyager 2 while the spacecraft was in the vicinity of Jupiter....

  1. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'

    DEFF Research Database (Denmark)

    de Nijs, Robin

    2015-01-01

    by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images. It simulates correctly the statistical properties...

  2. ROSETTA-ORBITER SW RPCMAG 4 CR2 RESAMPLED V3.0

    Data.gov (United States)

    National Aeronautics and Space Administration — 2010-07-30 SBN:T.Barnes Updated and DATA_SET_DESCThis dataset contains RESAMPLED DATA of the CRUISE 2 phase (CR2). (Version 3.0 is the first version archived.)

  3. Event-based stochastic point rainfall resampling for statistical replication and climate projection of historical rainfall series

    Science.gov (United States)

    Thorndahl, Søren; Korup Andersen, Aske; Badsberg Larsen, Anders

    2017-09-01

    Continuous and long rainfall series are a necessity in rural and urban hydrology for analysis and design purposes. Local historical point rainfall series often cover several decades, which makes it possible to estimate rainfall means at different timescales and to assess return periods of extreme events. Due to climate change, however, these series are most likely not representative of future rainfall. There is therefore a demand for long, climate-projected rainfall series that represent a specific region and rainfall pattern and that include the climate changes projected for a specific future period. This paper presents a framework for resampling historical point rainfall series in order to generate synthetic rainfall series with the same statistical properties as the original series. Using a number of key target predictions for the future climate, such as winter and summer precipitation and the representation of extreme events, the resampled historical series are projected to represent rainfall properties in a future climate. Climate-projected rainfall series are simulated by brute-force randomization of model parameters, which leads to a large number of projected series. An extensive evaluation procedure is developed to evaluate and select the rainfall series whose statistical properties match the key target projections.

  4. A New Method to Implement Resampled Uniform PWM Suitable for Distributed Control of Modular Multilevel Converters

    DEFF Research Database (Denmark)

    Huang, Shaojun; Mathe, Laszlo; Teodorescu, Remus

    2013-01-01

    the proper switching instances needed for the resampling modulation technique. The software implementation of the proposed phase-shifted PWM (PS-PWM) method, and its application in a distributed control system for MMC, are fully discussed in this paper. Simulation and experiment results show that the proposed solution can realize resampled uniform PWM and provide a high effective sampling frequency and low time delay, which is critical for the distributed control of MMC.

  5. Comparison of Bootstrap Confidence Intervals Using Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Roberto S. Flowers-Cano

    2018-02-01

    Full Text Available Design of hydraulic works requires the estimation of design hydrological events by statistical inference from a probability distribution. Using Monte Carlo simulations, we compared the coverage of confidence intervals constructed with four bootstrap techniques: percentile bootstrap (BP), bias-corrected bootstrap (BC), accelerated bias-corrected bootstrap (BCA) and a modified version of the standard bootstrap (MSB). Different simulation scenarios were analyzed. In some cases, the mother distribution function was fit to the random samples that were generated. In other cases, a distribution function different from the mother distribution was fit to the samples. When the fitted distribution had three parameters, and was the same as the mother distribution, the intervals constructed with the four techniques had acceptable coverage. However, the bootstrap techniques failed in several of the cases in which the fitted distribution had two parameters.
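
    The simulation logic (generate samples from a known mother distribution, build a bootstrap interval for a design quantile from each, and count how often the interval covers the truth) can be sketched for the percentile bootstrap (BP) as follows. The Gumbel mother distribution, the 0.99 quantile and the small replication counts, kept small for speed, are illustrative assumptions.

```python
# Monte Carlo check of percentile-bootstrap (BP) interval coverage for a
# design quantile of a Gumbel "mother" distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
loc, scale, q = 100.0, 30.0, 0.99
true_q = stats.gumbel_r.ppf(q, loc, scale)

trials, B, n, hits = 100, 200, 50, 0
for _ in range(trials):
    sample = stats.gumbel_r.rvs(loc, scale, size=n, random_state=rng)
    boot_q = []
    for _ in range(B):
        res = rng.choice(sample, n, replace=True)
        l, s = stats.gumbel_r.fit(res)            # refit the distribution
        boot_q.append(stats.gumbel_r.ppf(q, l, s))
    lo, hi = np.percentile(boot_q, [2.5, 97.5])   # BP interval
    hits += lo <= true_q <= hi
print(f"BP coverage ~ {hits / trials:.2f} (nominal 0.95)")
```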

  6. Towards bootstrapping QED_3

    Energy Technology Data Exchange (ETDEWEB)

    Chester, Shai M.; Pufu, Silviu S. [Joseph Henry Laboratories, Princeton University,Princeton, NJ 08544 (United States)

    2016-08-02

    We initiate the conformal bootstrap study of Quantum Electrodynamics in 2+1 space-time dimensions (QED_3) with N flavors of charged fermions by focusing on the 4-point function of four monopole operators with the lowest unit of topological charge. We obtain upper bounds on the scaling dimension of the doubly-charged monopole operator, with and without assuming other gaps in the operator spectrum. Intriguingly, we find a (gap-dependent) kink in these bounds that comes reasonably close to the large-N extrapolation of the scaling dimensions of the singly-charged and doubly-charged monopole operators down to N=4 and N=6.

  7. A tauberian theorem for the conformal bootstrap

    Science.gov (United States)

    Qiao, Jiaxin; Rychkov, Slava

    2017-12-01

    For expansions in one-dimensional conformal blocks, we provide a rigorous link between the asymptotics of the spectral density of exchanged primaries and the leading singularity in the crossed channel. Our result has a direct application to systems of SL(2, ℝ)-invariant correlators (also known as 1d CFTs). It also puts on solid ground a part of the lightcone bootstrap analysis of the spectrum of operators of high spin and bounded twist in CFTs in d > 2. In addition, a similar argument controls the spectral density asymptotics in large N gauge theories.

  8. A bootstrap approach to bump hunting

    Science.gov (United States)

    Silverman, B. W.

    1982-01-01

    An important question in cluster analysis and pattern recognition is the determination of the number of clusters into which a given population should be divided. Frequently, particularly when certain specific clustering methods are being used, the number of clusters is taken to be equal to the number of modes, or local maxima, in the probability density function underlying the given data set. The use of kernel density estimates in mode estimation is discussed. The test statistic to be used is defined and a bootstrap technique for assessing significance is given. An illustrative application is followed by an examination of the asymptotic behavior of the test statistic.

  9. General bootstrap equations in 4D CFTs

    Science.gov (United States)

    Cuomo, Gabriel Francisco; Karateev, Denis; Kravchuk, Petr

    2018-01-01

    We provide a framework for generic 4D conformal bootstrap computations. It is based on the unification of two independent approaches, the covariant (embedding) formalism and the non-covariant (conformal frame) formalism. We construct their main ingredients (tensor structures and differential operators) and establish a precise connection between them. We supplement the discussion by additional details like classification of tensor structures of n-point functions, normalization of 2-point functions and seed conformal blocks, Casimir differential operators and treatment of conserved operators and permutation symmetries. Finally, we implement our framework in a Mathematica package and make it freely available.

  10. Evidence of Bootstrap Financing among Small Start-Up Firms

    OpenAIRE

    Howard E. Van Auken; Lynn Neeley

    1996-01-01

    This study examines the use of bootstrap financing for a sample of 78 firms in a Midwestern state. The results show that traditional sources of capital accounted for 65% of the firms' start-up capital and 35% of the start-up capital was obtained from bootstrap sources. A Chi-squared analysis indicates a significant difference between the percentage of (1) sole proprietorship versus other firms and (2) construction/manufacturing versus other types of firms using bootstrap financing as compared...

  11. Confidence Intervals for the Mean: To Bootstrap or Not to Bootstrap

    Science.gov (United States)

    Calzada, Maria E.; Gardner, Holly

    2011-01-01

    The results of a simulation conducted by a research team involving undergraduate and high school students indicate that when data is symmetric the student's "t" confidence interval for a mean is superior to the studied non-parametric bootstrap confidence intervals. When data is skewed and for sample sizes n greater than or equal to 10,…

  12. Using the Bootstrap Method to Evaluate the Critical Range of Misfit for Polytomous Rasch Fit Statistics.

    Science.gov (United States)

    Seol, Hyunsoo

    2016-06-01

    The purpose of this study was to apply the bootstrap procedure to evaluate how the bootstrapped confidence intervals (CIs) for polytomous Rasch fit statistics might differ according to sample sizes and test lengths in comparison with the rule-of-thumb critical value of misfit. A total of 25 simulated data sets were generated to fit the Rasch measurement and then a total of 1,000 replications were conducted to compute the bootstrapped CIs under each of 25 testing conditions. The results showed that rule-of-thumb critical values for assessing the magnitude of misfit were not applicable because the infit and outfit mean square error statistics showed different magnitudes of variability over testing conditions and the standardized fit statistics did not exactly follow the standard normal distribution. Further, they also do not share the same critical range for the item and person misfit. Based on the results of the study, the bootstrapped CIs can be used to identify misfitting items or persons as they offer a reasonable alternative solution, especially when the distributions of the infit and outfit statistics are not well known and depend on sample size. © The Author(s) 2016.

  13. Using the bootstrap in a multivariate data problem: An example

    International Nuclear Information System (INIS)

    Glosup, J.G.; Axelrod, M.C.

    1995-01-01

    The use of the bootstrap in the multivariate version of the paired t-test is considered and demonstrated through an example. The problem of interest involves comparing two different techniques for measuring the chemical constituents of a sample item. The bootstrap is used to form an empirical significance level for Hotelling's one-sample T-squared statistic. The bootstrap was selected to determine empirical significance levels because the implicit assumption of multivariate normality in the classic Hotelling's one-sample test might not hold. The results of both the classic and bootstrap tests are presented and contrasted.
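
    A hedged sketch of such an empirical significance level for Hotelling's one-sample T², assuming the paired measurement differences sit in the rows of X (not the report's code):

```python
import numpy as np

def hotelling_t2(X, mu0):
    # One-sample Hotelling T^2 statistic for rows of X against mean mu0.
    n = len(X)
    diff = X.mean(axis=0) - mu0
    S = np.cov(X, rowvar=False)
    return n * diff @ np.linalg.solve(S, diff)

def bootstrap_t2_pvalue(X, n_boot=5000, seed=0):
    """Empirical significance level: resample rows of the data re-centered
    so that the null hypothesis (zero mean difference) holds exactly."""
    rng = np.random.default_rng(seed)
    mu0 = np.zeros(X.shape[1])
    t_obs = hotelling_t2(X, mu0)
    Xc = X - X.mean(axis=0)                      # impose the null
    t_star = [hotelling_t2(Xc[rng.integers(0, len(X), len(X))], mu0)
              for _ in range(n_boot)]
    return float(np.mean(np.array(t_star) >= t_obs))
```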

  14. Bootstrap-Based Inference for Cube Root Consistent Estimators

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Jansson, Michael; Nagasawa, Kenichi

    This note proposes a consistent bootstrap-based distributional approximation for cube root consistent estimators such as the maximum score estimator of Manski (1975) and the isotonic density estimator of Grenander (1956). In both cases, the standard nonparametric bootstrap is known to be inconsistent...

  15. Bootstrap confidence intervals for the process capability index under half-logistic distribution

    OpenAIRE

    Wararit Panichkitkosolkul

    2012-01-01

    This study concerns the construction of bootstrap confidence intervals for the process capability index in the case of the half-logistic distribution. The bootstrap confidence intervals applied consist of the standard bootstrap confidence interval, percentile bootstrap confidence interval and bias-corrected percentile bootstrap confidence interval. Using Monte Carlo simulations, the estimated coverage probabilities and average widths of bootstrap confidence intervals are compared, with results showing ...

  16. Estimates by bootstrap interval for time series forecasts obtained by theta model

    Directory of Open Access Journals (Sweden)

    Daniel Steffen

    2017-03-01

    Full Text Available In this work, an experimental computer program was developed in Matlab (version 7.1) for the univariate time series forecasting method called Theta, together with an implementation of the computer-intensive resampling technique known as the "bootstrap" to estimate confidence intervals for the point forecasts obtained by this method. To solve this problem, an algorithm was built that uses Monte Carlo simulation to obtain the interval estimates for the forecasts. The Theta model presented in this work was very efficient in the M3 Makridakis competition, where 3003 series were tested. It is based on the concept of modifying the local curvature of the time series through a coefficient theta (Θ). In its simplest approach, the time series is decomposed into two theta lines representing the long-term and short-term components. The prediction is made by combining the forecasts obtained by extrapolating the lines of the theta decomposition. The MAPE errors obtained for the estimates confirm the favorable results of the method in the M3 competition, making it a good alternative for time series forecasting.
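
    A simplified sketch of the Theta point forecast and a residual-bootstrap interval around it, using a fixed smoothing constant and an i.i.d.-residual assumption (both simplifications; the paper's Matlab implementation may differ):

```python
import numpy as np

def theta_forecast(x, h, alpha=0.5):
    """Combine the extrapolated theta=0 line (linear trend) with a simple
    exponential smoothing of the theta=2 line."""
    t = np.arange(len(x))
    b, a = np.polyfit(t, x, 1)                  # theta=0 line: a + b*t
    trend = a + b * (len(x) + np.arange(h))     # extrapolated trend
    z = 2 * x - (a + b * t)                     # theta=2 line
    level = z[0]
    for v in z[1:]:                             # SES with fixed alpha (simplification)
        level = alpha * v + (1 - alpha) * level
    return 0.5 * (trend + level)

def bootstrap_interval(x, h, n_boot=1000, seed=0):
    # Resample in-sample trend residuals around the point forecast.
    rng = np.random.default_rng(seed)
    t = np.arange(len(x))
    b, a = np.polyfit(t, x, 1)
    resid = x - (a + b * t)                     # crude i.i.d. residual pool
    point = theta_forecast(x, h)
    sims = np.array([point + rng.choice(resid, h) for _ in range(n_boot)])
    return point, np.quantile(sims, [0.05, 0.95], axis=0)
```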

  17. Bootstrap support is not first-order correct.

    Science.gov (United States)

    Susko, Edward

    2009-04-01

    The appropriate interpretation of bootstrap support for splits and the question of what constitutes large bootstrap support have received considerable attention. One desirable interpretation, indeed the interpretation that was put forward when bootstrap support for splits was first introduced, is that 1-minus bootstrap support is a P value for the hypothesis that the split is not well resolved. As a P value, bootstrap support has been argued to be first-order correct. By obtaining the limiting distribution of bootstrap support for a split when maximum likelihood estimation is conducted, it is shown that bootstrap support is not first-order correct and insight is provided into the nature of the problem. Borrowing from earlier results, it is also shown that similar results hold when the neighbor-joining algorithm is used. Examples suggest that bootstrap support is generally conservative as a P value and give insight as to why this is usually the case. The analysis indicates that the problem is largely due to the unusual nature of tree space where boundary trees always have at least 2 neighbors.

  18. Bootstrap Estimates of Standard Errors in Generalizability Theory

    Science.gov (United States)

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…

  19. Higher-order Gaussian kernel in bootstrap boosting algorithm ...

    African Journals Online (AJOL)

    The bootstrap boosting algorithm is a bias reduction scheme. The adoption of a higher-order Gaussian kernel in a bootstrap boosting algorithm for kernel density estimation was investigated. The algorithm used the higher-order Gaussian kernel instead of the regular fixed kernels. A comparison of the scheme with existing ...

  20. Learning web development with Bootstrap and AngularJS

    CERN Document Server

    Radford, Stephen

    2015-01-01

    Whether you know a little about Bootstrap or AngularJS, or you're a complete beginner, this book will enhance your capabilities in both frameworks and you'll build a fully functional web app. A working knowledge of HTML, CSS, and JavaScript is required to fully get to grips with Bootstrap and AngularJS.

  1. Bootstrapping the O(N) Archipelago

    CERN Document Server

    Kos, Filip; Simmons-Duffin, David; Vichi, Alessandro

    2015-01-01

    We study 3d CFTs with an $O(N)$ global symmetry using the conformal bootstrap for a system of mixed correlators. Specifically, we consider all nonvanishing scalar four-point functions containing the lowest dimension $O(N)$ vector $\phi_i$ and the lowest dimension $O(N)$ singlet $s$, assumed to be the only relevant operators in their symmetry representations. The constraints of crossing symmetry and unitarity for these four-point functions force the scaling dimensions $(\Delta_\phi, \Delta_s)$ to lie inside small islands. We also make rigorous determinations of current two-point functions in the $O(2)$ and $O(3)$ models, with applications to transport in condensed matter systems.

  2. The analytic bootstrap in fermionic CFTs

    Science.gov (United States)

    van Loon, Mark

    2018-01-01

    We apply the method of the large spin bootstrap to analyse fermionic conformal field theories with weakly broken higher spin symmetry. Through the study of correlators of composite operators, we find the anomalous dimensions and OPE coefficients in the Gross-Neveu model in d = 2 + ɛ dimensions and the Gross-Neveu-Yukawa model in d = 4 - ɛ dimensions, based only on crossing symmetry. Furthermore a non-trivial solution in the d = 2 + ɛ expansion is found for a fermionic theory in which the fundamental field is not part of the spectrum. The results are perturbative in ɛ and valid to all orders in the spin, reproducing known results for operator dimensions and providing some new results for operator dimensions and OPE coefficients.

  3. Simplifying large spin bootstrap in Mellin space

    Science.gov (United States)

    Dey, Parijat; Ghosh, Kausik; Sinha, Aninda

    2018-01-01

    We set up the conventional conformal bootstrap equations in Mellin space and analyse the anomalous dimensions and OPE coefficients of large spin double trace operators. By decomposing the equations in terms of continuous Hahn polynomials, we derive explicit expressions as an asymptotic expansion in inverse conformal spin to any order, reproducing the contribution of any primary operator and its descendants in the crossed channel. The expressions are in terms of known mathematical functions and involve generalized Bernoulli (Nørlund) polynomials and the Mack polynomials and enable us to derive certain universal properties. Comparing with the recently introduced reformulated equations in terms of crossing symmetric tree level exchange Witten diagrams, we show that to leading order in anomalous dimension but to all orders in inverse conformal spin, the equations are the same as in the conventional formulation. At the next order, the polynomial ambiguity in the Witten diagram basis is needed for the equivalence and we derive the necessary constraints for the same.

  4. Bootstrapping 3D fermions with global symmetries

    Science.gov (United States)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David

    2018-01-01

    We study the conformal bootstrap for 4-point functions of fermions ⟨ψ_i ψ_j ψ_k ψ_ℓ⟩ in parity-preserving 3d CFTs, where ψ_i transforms as a vector under an O(N) global symmetry. We compute bounds on scaling dimensions and central charges, finding features in our bounds that appear to coincide with the O(N) symmetric Gross-Neveu-Yukawa fixed points. Our computations are in perfect agreement with the 1/N expansion at large N and allow us to make nontrivial predictions at small N. For values of N for which the Gross-Neveu-Yukawa universality classes are relevant to condensed-matter systems, we compare our results to previous analytic and numerical results.

  5. The ${\mathcal N}=2$ superconformal bootstrap

    CERN Document Server

    Beem, Christopher; Liendo, Pedro; Rastelli, Leonardo; van Rees, Balt C

    2016-01-01

    In this work we initiate the conformal bootstrap program for ${\mathcal N}=2$ superconformal field theories in four dimensions. We promote an abstract operator-algebraic viewpoint in order to unify the description of Lagrangian and non-Lagrangian theories, and formulate various conjectures concerning the landscape of theories. We analyze in detail the four-point functions of flavor symmetry current multiplets and of ${\mathcal N}=2$ chiral operators. For both correlation functions we review the solution of the superconformal Ward identities and describe their superconformal block decompositions. This provides the foundation for an extensive numerical analysis discussed in the second half of the paper. We find a large number of constraints for operator dimensions, OPE coefficients, and central charges that must hold for any ${\mathcal N}=2$ superconformal field theory.

  6. Conformal bootstrap, universality and gravitational scattering

    Directory of Open Access Journals (Sweden)

    Steven Jackson

    2015-12-01

    Full Text Available We use the conformal bootstrap equations to study the non-perturbative gravitational scattering between infalling and outgoing particles in the vicinity of a black hole horizon in AdS. We focus on irrational 2D CFTs with large c and only Virasoro symmetry. The scattering process is described by the matrix element of two light operators (particles) between two heavy states (BTZ black holes). We find that the operator algebra in this regime is (i) universal and identical to that of Liouville CFT, and (ii) takes the form of an exchange algebra, specified by an R-matrix that exactly matches the scattering amplitude of 2+1 gravity. The R-matrix is given by a quantum 6j-symbol and the scattering phase by the volume of a hyperbolic tetrahedron. We comment on the relevance of our results to scrambling and the holographic reconstruction of the bulk physics near black hole horizons.

  7. Uncertainty estimation in diffusion MRI using the nonlocal bootstrap.

    Science.gov (United States)

    Yap, Pew-Thian; An, Hongyu; Chen, Yasheng; Shen, Dinggang

    2014-08-01

    In this paper, we propose a new bootstrap scheme, called the nonlocal bootstrap (NLB) for uncertainty estimation. In contrast to the residual bootstrap, which relies on a data model, or the repetition bootstrap, which requires repeated signal measurements, NLB is not restricted by the data structure imposed by a data model and obviates the need for time-consuming multiple acquisitions. NLB hinges on the observation that local imaging information recurs in an image. This self-similarity implies that imaging information coming from spatially distant (nonlocal) regions can be exploited for more effective estimation of statistics of interest. Evaluations using in silico data indicate that NLB produces distribution estimates that are in closer agreement with those generated using Monte Carlo simulations, compared with the conventional residual bootstrap. Evaluations using in vivo data demonstrate that NLB produces results that are in agreement with our knowledge on white matter architecture.

  8. Testing Informative Hypotheses in SEM Increases Power: An Illustration Contrasting Classical Hypothesis Testing with a Parametric Bootstrap Approach

    Science.gov (United States)

    van de Schoot, Rens; Strohmeier, Dagmar

    2011-01-01

    In the present paper, the application of a parametric bootstrap procedure, as described by van de Schoot, Hoijtink, and Dekovic (2010), will be applied to demonstrate that a direct test of an informative hypothesis offers more informative results compared to testing traditional null hypotheses against catch-all rivals. Also, more power can be…

  9. Epipolar Resampling of Cross-Track Pushbroom Satellite Imagery Using the Rigorous Sensor Model

    Directory of Open Access Journals (Sweden)

    Mojtaba Jannati

    2017-01-01

    Full Text Available Epipolar resampling aims to eliminate the vertical parallax of stereo images. Due to the dynamic nature of the exterior orientation parameters of linear pushbroom satellite imagery and the complexity of reconstructing the epipolar geometry using rigorous sensor models, no epipolar resampling approach based on these models has been proposed so far. In this paper it is shown, for the first time, that the orientation of the instantaneous baseline (IB) of conjugate image points (CIPs) in linear pushbroom satellite imagery can be modeled with high precision in terms of the row and column numbers of the CIPs. Taking advantage of this feature, a novel approach is then presented for epipolar resampling of cross-track linear pushbroom satellite imagery. The proposed method is based on the rigorous sensor model. As the instantaneous position of the sensors remains fixed, the digital elevation model of the area of interest is not required in the resampling process. Experimental results obtained from two pairs of SPOT and one pair of RapidEye stereo imagery with different terrain conditions show that the proposed epipolar resampling approach benefits from superior accuracy, as the remaining vertical parallaxes of all CIPs in the normalized images are close to zero.

  10. Multiway contingency tables: Monte Carlo resampling probability values for the chi-squared and likelihood-ratio tests.

    Science.gov (United States)

    Long, Michael A; Berry, Kenneth J; Mielke, Paul W

    2010-10-01

    Monte Carlo resampling methods to obtain probability values for chi-squared and likelihood-ratio test statistics for multiway contingency tables are presented. A resampling algorithm provides random arrangements of cell frequencies in a multiway contingency table, given fixed marginal frequency totals. Probability values are obtained from the proportion of resampled test statistic values equal to or greater than the observed test statistic value.
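
    For the two-way case, such a resampling p-value can be sketched as follows (illustrative names, not the authors' code); shuffling one reconstructed label vector keeps both margins fixed, and the paper's algorithm extends the idea to multiway tables:

```python
import numpy as np

def chi2_stat(table):
    # Pearson chi-squared statistic for a two-way table.
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def resampled_pvalue(table, n_resamples=10000, seed=0):
    """Monte Carlo p-value: random tables with the observed margins fixed,
    generated by shuffling one of the reconstructed label vectors."""
    rng = np.random.default_rng(seed)
    rows = np.repeat(np.arange(table.shape[0]), table.sum(axis=1))
    cols = np.repeat(np.arange(table.shape[1]), table.sum(axis=0))
    t_obs = chi2_stat(table)
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(cols)
        resampled = np.zeros_like(table)
        np.add.at(resampled, (rows, cols), 1)   # rebuild the shuffled table
        count += chi2_stat(resampled) >= t_obs
    return count / n_resamples
```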

  11. Bootstrap consistency for general semiparametric M-estimation

    KAUST Repository

    Cheng, Guang

    2010-10-01

    Consider M-estimation in a semiparametric model that is characterized by a Euclidean parameter of interest and an infinite-dimensional nuisance parameter. As a general purpose approach to statistical inferences, the bootstrap has found wide applications in semiparametric M-estimation and, because of its simplicity, provides an attractive alternative to the inference approach based on the asymptotic distribution theory. The purpose of this paper is to provide theoretical justifications for the use of the bootstrap as a semiparametric inferential tool. We show that, under general conditions, the bootstrap is asymptotically consistent in estimating the distribution of the M-estimate of the Euclidean parameter; that is, the bootstrap distribution asymptotically imitates the distribution of the M-estimate. We also show that the bootstrap confidence set has the asymptotically correct coverage probability. These general conclusions hold, in particular, when the nuisance parameter is not estimable at root-n rate, and apply to a broad class of bootstrap methods with exchangeable bootstrap weights. This paper provides a first general theoretical study of the bootstrap in semiparametric models. © Institute of Mathematical Statistics, 2010.

  12. Bootstrapping Relational Affordances of Object Pairs Using Transfers

    DEFF Research Database (Denmark)

    Fichtl, Severin; Kraft, Dirk; Krüger, Norbert

    2018-01-01

    a tool to retrieve a desired object. We investigate how these relational affordances could be learned by a robot from its own action experience. A major challenge in this approach is to reduce the number of training samples needed to achieve accuracy, and hence we investigate an approach which can leverage past knowledge to accelerate current learning (which we call bootstrapping). We learn random forest-based affordance predictors from visual inputs and demonstrate two approaches to knowledge transfer for bootstrapping. In the first approach [direct bootstrapping (DB)], the state-space for a new...

  13. Motion vector field phase-to-amplitude resampling for 4D motion-compensated cone-beam CT

    Science.gov (United States)

    Sauppe, Sebastian; Kuhm, Julian; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc

    2018-02-01

    We propose a phase-to-amplitude resampling (PTAR) method to reduce motion blurring in motion-compensated (MoCo) 4D cone-beam CT (CBCT) image reconstruction, without increasing the computational complexity of the motion vector field (MVF) estimation approach. PTAR is able to improve the image quality in reconstructed 4D volumes, including both regular and irregular respiration patterns. The PTAR approach starts with a robust phase-gating procedure for the initial MVF estimation and then switches to a phase-adapted amplitude gating method. The switch implies an MVF-resampling, which makes them amplitude-specific. PTAR ensures that the MVFs, which have been estimated on phase-gated reconstructions, are still valid for all amplitude-gated reconstructions. To validate the method, we use an artificially deformed clinical CT scan with a realistic breathing pattern and several patient data sets acquired with a TrueBeam™ integrated imaging system (Varian Medical Systems, Palo Alto, CA, USA). Motion blurring, which still occurs around the area of the diaphragm or at small vessels above the diaphragm in artifact-specific cyclic motion compensation (acMoCo) images based on phase-gating, is significantly reduced by PTAR. Also, small lung structures appear sharper in the images. This is demonstrated both for simulated and real patient data. A quantification of the sharpness of the diaphragm confirms these findings. PTAR improves the image quality of 4D MoCo reconstructions compared to conventional phase-gated MoCo images, in particular for irregular breathing patterns. Thus, PTAR increases the robustness of MoCo reconstructions for CBCT. Because PTAR does not require any additional steps for the MVF estimation, it is computationally efficient. Our method is not restricted to CBCT but could rather be applied to other image modalities.

  14. Conformal bootstrap in the Regge limit

    Science.gov (United States)

    Li, Daliang; Meltzer, David; Poland, David

    2017-12-01

    We analytically solve the conformal bootstrap equations in the Regge limit for large N conformal field theories. For theories with a parametrically large gap, the amplitude is dominated by spin-2 exchanges and we show how the crossing equations naturally lead to the construction of AdS exchange Witten diagrams. We also show how this is encoded in the anomalous dimensions of double-trace operators of large spin and large twist. We use the chaos bound to prove that the anomalous dimensions are negative. Extending these results to correlators containing two scalars and two conserved currents, we show how to reproduce the CEMZ constraint that the three-point function between two currents and one stress tensor only contains the structure given by Einstein-Maxwell theory in AdS, up to small corrections. Finally, we consider the case where operators of unbounded spin contribute to the Regge amplitude, whose net effect is captured by summing the leading Regge trajectory. We compute the resulting anomalous dimensions and corrections to OPE coefficients in the crossed channel and use the chaos bound to show that both are negative.

  15. Quantum bootstrapping via compressed quantum Hamiltonian learning

    International Nuclear Information System (INIS)

    Wiebe, Nathan; Granade, Christopher; Cory, D G

    2015-01-01

    A major problem facing the development of quantum computers or large scale quantum simulators is that general methods for characterizing and controlling are intractable. We provide a new approach to this problem that uses small quantum simulators to efficiently characterize and learn control models for larger devices. Our protocol achieves this by using Bayesian inference in concert with Lieb–Robinson bounds and interactive quantum learning methods to achieve compressed simulations for characterization. We also show that the Lieb–Robinson velocity is epistemic for our protocol, meaning that information propagates at a rate that depends on the uncertainty in the system Hamiltonian. We illustrate the efficiency of our bootstrapping protocol by showing numerically that an 8 qubit Ising model simulator can be used to calibrate and control a 50 qubit Ising simulator while using only about 750 kilobits of experimental data. Finally, we provide upper bounds for the Fisher information that show that the number of experiments needed to characterize a system rapidly diverges as the duration of the experiments used in the characterization shrinks, which motivates the use of methods such as ours that do not require short evolution times. (fast track communication)

  16. Prior Pronunciation Knowledge Bootstraps Word Learning

    Directory of Open Access Journals (Sweden)

    Khia Anne Johnson

    2018-02-01

    Full Text Available Learners often struggle with L2 sounds, yet little is known about the role of prior pronunciation knowledge and explicit articulatory training in language acquisition. This study asks if existing pronunciation knowledge can bootstrap word learning, and whether short-term audiovisual articulatory training for tongue position with and without a production component has an effect on lexical retention. Participants were trained and tested on stimuli with perceptually salient segments that are challenging to produce. Results indicate that pronunciation knowledge plays an important role in word learning. While much about the extent and shape of this role remains unclear, this study sheds light on three main areas. First, prior pronunciation knowledge leads to increased accuracy in word learning, as all groups trended toward lower accuracy on pseudowords with two novel segments, when compared with those with one or none. Second, all training and control conditions followed similar patterns, with training neither aiding nor inhibiting retention; this is a noteworthy result as previous work has found that the inclusion of production in training leads to decreased performance when testing for retention. Finally, higher production accuracy during practice led to higher retention after the word-learning task, indicating that individual differences and successful training are potentially important indicators of retention. This study provides support for the claim that pronunciation matters in L2 word learning.

  17. Bootstrapping: A theory for conceptual change

    Directory of Open Access Journals (Sweden)

    José Antonio Castorina

    2005-12-01

    Full Text Available This paper examines Carey's theory of conceptual change. First, it describes the problem of conceptual reorganization in cognitive psychology and the author's position. Second, the epistemic conditions that children's "theories" should fulfil to make conceptual restructuring possible, as well as the forms the latter adopts, are analyzed. Third, findings of research testing conceptual change among children's intuitive biology theories are presented. Subsequently, it discusses the difficulties other theories of conceptual change present, in order to state the features of bootstrapping as an alternative mechanism and its relevance for interpreting the results of the aforementioned research. Finally, it evaluates the originality of the bootstrapping theory in the scene of contemporary debates. It particularly outlines a possible rapprochement with Piaget's dialectic theses.

  18. Climate time series analysis classical statistical and bootstrap methods

    CERN Document Server

    Mudelsee, Manfred

    2014-01-01

    Written for climatologists and applied statisticians, this book explains the bootstrap algorithms (including novel adaptions) and methods for confidence interval construction. The accuracy of the algorithms is tested by means of Monte Carlo experiments.

  19. 'Bootstrap' Configuration for Multistage Pulse-Tube Coolers

    Science.gov (United States)

    Nguyen, Bich; Nguyen, Lauren

    2008-01-01

    A bootstrap configuration has been proposed for multistage pulse-tube coolers that, for instance, provide final-stage cooling to temperatures as low as 20 K. The bootstrap configuration supplants the conventional configuration, in which customarily the warm heat exchangers of all stages reject heat at ambient temperature. In the bootstrap configuration, the warm heat exchanger, the inertance tube, and the reservoir of each stage would be thermally anchored to the cold heat exchanger of the next warmer stage. The bootstrapped configuration is superior to the conventional setup, in some cases increasing the 20 K cooler's coefficient of performance two-fold over that of an otherwise equivalent conventional layout. The increased efficiency could translate into less power consumption, less cooler mass, and/or lower cost for a given amount of cooling.

  20. Assessment of Resampling Methods for Causality Testing: A note on the US Inflation Behavior

    NARCIS (Netherlands)

    Papana, A.; Kyrtsou, C.; Kugiumtzis, D.; Diks, C.

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As an appropriate test statistic for this setting, the partial…

  1. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Science.gov (United States)

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-01-01

    For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301

  2. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Directory of Open Access Journals (Sweden)

    Yeqing Zhang

    2018-02-01

    Full Text Available For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully.

  3. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time.

    Science.gov (United States)

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-02-24

    For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90-94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7-5.6% per millisecond, with most satellites acquired successfully.

  4. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  5. Efficient generation of pronunciation dictionaries: machine learning factors during bootstrapping

    CSIR Research Space (South Africa)

    Davel, MH

    2004-10-01

    Full Text Available Several factors affect the efficiency of bootstrapping approaches to the generation of pronunciation dictionaries. We focus on factors related to the underlying rule-extraction algorithms, and demonstrate variants of the Dynamically Expanding Context algorithm, which are beneficial...

  6. Generalized bootstrap equations and possible implications for the NLO Odderon

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2013-07-15

    We formulate and discuss generalized bootstrap equations in nonabelian gauge theories. They are shown to hold in the leading logarithmic approximation. Since their validity is related to the self-consistency of the Steinmann relations for inelastic production amplitudes they can be expected to be valid also in NLO. Specializing to the N=4 SYM, we show that the validity in NLO of these generalized bootstrap equations allows to find the NLO Odderon solution with intercept exactly at one.

  7. Bootstrap prediction bands for cervical spine intervertebral kinematics during in vivo three-dimensional head movements.

    Science.gov (United States)

    Anderst, William J

    2015-05-01

    There is substantial inter-subject variability in intervertebral range of motion (ROM) in the cervical spine. This makes it difficult to define "normal" ROM, and to assess the effects of age, injury, and surgical procedures on spine kinematics. The objective of this study was to define normal intervertebral kinematics in the cervical spine during dynamic functional loading. Twenty-nine participants performed dynamic flexion/extension, axial rotation, and lateral bending while biplane radiographs were collected at 30 images/s. Vertebral motion was tracked with sub-millimeter accuracy using a validated volumetric model-based tracking process that matched subject-specific CT-based bone models to the radiographs. Gaussian point-by-point and bootstrap techniques were used to determine 90% prediction bands for the intervertebral kinematic curves at 1% intervals of each movement cycle. Cross validation was performed to estimate the true achieved coverage for each method. For a targeted coverage of 90%, the estimated true coverage using bootstrap prediction bands averaged 86±5%, while the estimated true coverage using Gaussian point-by-point intervals averaged 56±10% over all movements and all motion segments. Bootstrap prediction bands are recommended as the standard for evaluating full ROM cervical spine kinematic curves. The data presented here can be used to identify abnormal motion in patients presenting with neck pain, to drive computational models, and to assess the biofidelity of in vitro loading paradigms. Copyright © 2015 Elsevier Ltd. All rights reserved.
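
    One simple way to build such a band (a sketch of the general idea, not necessarily the author's exact procedure) is to start from pointwise quantiles and widen them until the target fraction of bootstrap-resampled curves lies entirely inside:

```python
import numpy as np

def bootstrap_prediction_band(curves, coverage=0.90, n_boot=2000, seed=0):
    """curves: (n_subjects, n_timepoints) array, e.g. intervertebral rotation
    at 1% intervals of the movement cycle. Returns (lower, upper) band."""
    rng = np.random.default_rng(seed)
    n, m = curves.shape
    # Pool curves from bootstrap resamples of subjects.
    sample = curves[rng.integers(0, n, size=(n_boot, n))].reshape(-1, m)
    q = (1 - coverage) / 2                      # pointwise starting quantiles
    for widen in np.linspace(0.0, q, 200):
        lo = np.quantile(curves, q - widen, axis=0)
        hi = np.quantile(curves, 1 - q + widen, axis=0)
        # Fraction of resampled curves lying entirely inside the band.
        if np.mean(np.all((sample >= lo) & (sample <= hi), axis=1)) >= coverage:
            break
    return lo, hi
```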

  8. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
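
    A sketch of this kind of test for the Euclidean distance statistic, assuming each group is an array with one histogram per cloud object (names and shapes are illustrative assumptions):

```python
import numpy as np

def hist_distance(h1, h2):
    # Euclidean distance between summary histograms, normalized to frequencies.
    return np.linalg.norm(h1 / h1.sum() - h2 / h2.sum())

def summary_hist_pvalue(group_a, group_b, n_boot=5000, seed=0):
    """group_a, group_b: (n_objects, n_bins) arrays, one histogram per cloud
    object. Bootstrap under the null by pooling and resampling the objects."""
    rng = np.random.default_rng(seed)
    d_obs = hist_distance(group_a.sum(axis=0), group_b.sum(axis=0))
    pooled = np.vstack([group_a, group_b])
    na, n = len(group_a), len(pooled)
    exceed = 0
    for _ in range(n_boot):
        s = pooled[rng.integers(0, n, n)]        # resample individual objects
        exceed += hist_distance(s[:na].sum(axis=0), s[na:].sum(axis=0)) >= d_obs
    return exceed / n_boot
```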

  9. Bootstrap current calculations for TJ-II stellarator

    Science.gov (United States)

    Martinell, Julio J.; Camacho, Katia

    2016-10-01

    Bootstrap current in stellarators is usually very small since they operate solely with the magnetic confinement provided by the external currents. Since plasma pressure gradients are always present, the bootstrap current is always finite, but the magnetic design can be optimized to minimize it. In the heliac configuration there is no such optimization and therefore it is important to estimate the actual bootstrap current generated by given pressure profiles. Here, we use the configuration of the TJ-II heliac to calculate the bootstrap current for various density regimes using the kinetic code DKES. We compute the monoenergetic transport coefficients D11 and D13 to find first the thermal ambipolar diffusion coefficients and the corresponding radial electric field and then the respective bootstrap current. This is done using experimental density and electron and ion temperature profiles. In spite of the convergence problems of DKES at low collisionality, we can obtain bootstrap current values with acceptable uncertainties, without using Monte Carlo methods. The results are compared with axisymmetric neoclassical computations. The resulting rotational transform is used to obtain the rational surfaces location and predict the transport barriers observed in the experiments. Funded by projects PAPIIT IN109115 and Conacyt 152905.

  10. Technical and scale efficiency in public and private Irish nursing homes - a bootstrap DEA approach.

    Science.gov (United States)

    Ni Luasa, Shiovan; Dineen, Declan; Zieba, Marta

    2016-10-27

    This article provides methodological and empirical insights into the estimation of technical efficiency in the nursing home sector. Focusing on long-stay care and using primary data, we examine technical and scale efficiency in 39 public and 73 private Irish nursing homes by applying an input-oriented data envelopment analysis (DEA). We employ robust bootstrap methods to validate our nonparametric DEA scores and to integrate the effects of potential determinants in estimating the efficiencies. Both the homogeneous and two-stage double bootstrap procedures are used to obtain confidence intervals for the bias-corrected DEA scores. Importantly, the application of the double bootstrap approach affords true DEA technical efficiency scores after adjusting for the effects of ownership, size, case-mix, and other determinants such as location and quality. Based on our DEA results for variable returns to scale technology, the average technical efficiency score is 62%, and the mean scale efficiency is 88%, with nearly all units operating on the increasing returns to scale part of the production frontier. Moreover, based on the double bootstrap results, Irish nursing homes are less technically efficient and more scale efficient than the conventional DEA estimates suggest. Regarding the efficiency determinants, in terms of ownership, we find that private facilities are less efficient than the public units. Furthermore, the size of the nursing home has a positive effect, and this reinforces our finding that Irish homes produce at increasing returns to scale. Also, notably, we find that a tendency towards quality improvements can lead to poorer technical efficiency performance.

  11. A Bootstrap Approach to Martian Manufacturing

    Science.gov (United States)

    Dorais, Gregory A.

    2004-01-01

    In-Situ Resource Utilization (ISRU) is an essential element of any affordable strategy for a sustained human presence on Mars. Ideally, Martian habitats would be extremely massive to allow plenty of room to comfortably live and work, as well as to protect the occupants from the environment. Moreover, transportation and power generation systems would also require significant mass if affordable. For our approach to ISRU, we use the industrialization of the U.S. as a metaphor. The 19th century started with small blacksmith shops and ended with massive steel mills primarily accomplished by blacksmiths increasing their production capacity and product size to create larger shops, which produced small mills, which produced the large steel mills that industrialized the country. Most of the mass of a steel mill is comprised of steel in simple shapes, which are produced and repaired with few pieces of equipment also mostly made of steel in basic shapes. Due to this simplicity, we expect that the 19th century manufacturing growth can be repeated on Mars in the 21st century using robots as the primary labor force. We suggest a "bootstrap" approach to manufacturing on Mars that uses a "seed" manufacturing system that uses regolith to create major structural components and spare parts. The regolith would be melted, foamed, and sintered as needed to fabricate parts using casting and solid freeform fabrication techniques. Complex components, such as electronics, would be brought from Earth and integrated as needed. These parts would be assembled to create additional manufacturing systems, which can be both more capable and higher capacity. These subsequent manufacturing systems could refine vast amounts of raw materials to create large components, as well as assemble equipment, habitats, pressure vessels, cranes, pipelines, railways, trains, power generation stations, and other facilities needed to economically maintain a sustained human presence on Mars.

  12. Application of bootstrap method for assessment of linear regression models; Zastosowanie metody bootstrap do badania liniowych modeli regresyjnych

    Energy Technology Data Exchange (ETDEWEB)

    Urbanski, P.; Kowalska, E.

    1997-12-31

    The principle of the bootstrap methodology applied for the assessment of parameters and prediction ability of the linear regression models was presented. Application of this method was shown on the example of calibration of the radioisotope sulphuric acid concentration gauge. The bootstrap method allows to determine not only the numerical values of the regression coefficients, but also enables to investigate their distributions. (author). 11 refs, 12 figs, 3 tabs.

  13. Stability of response characteristics of a Delphi panel: application of bootstrap data expansion

    Directory of Open Access Journals (Sweden)

    Cole Bryan R

    2005-12-01

    Full Text Available Background: Delphi surveys with panels of experts in a particular area of interest have been widely utilized in the fields of clinical medicine, nursing practice, medical education and healthcare services. Despite this wide applicability of the Delphi methodology, there is no clear identification of what constitutes a sufficient number of Delphi survey participants to ensure stability of results. Methods: The study analyzed the response characteristics from the first round of a Delphi survey conducted with 23 experts in healthcare quality and patient safety. The panel members had similar training and subject matter understanding of the Malcolm Baldrige Criteria for Performance Excellence in Healthcare. The raw data from the first round sampling, which usually contains the largest diversity of responses, were augmented via bootstrap sampling to obtain computer-generated results for two larger samples obtained by sampling with replacement. Response characteristics (mean, trimmed mean, standard deviation and 95% confidence intervals) for 54 survey items were compared for the responses of the 23 actual study participants and two computer-generated samples of 1000 and 2000 resampling iterations. Results: The results from this study indicate that the response characteristics of a small expert panel in a well-defined knowledge area are stable in light of augmented sampling. Conclusion: Panels of similarly trained experts (who possess a general understanding in the field of interest) provide effective and reliable utilization of a small sample from a limited number of experts in a field of study to develop reliable criteria that inform judgment and support effective decision-making.

  14. Resampling method for balancing training data in video analysis

    Science.gov (United States)

    Giritharan, Balathasan; Yuan, Xiaohui

    2010-03-01

    Reviewing videos from medical procedures is tedious work that requires concentration for extended hours, usually screening thousands of frames to find only a few positive cases that indicate probable presence of disease. Computational classification algorithms are sought to automate the reviewing process. The class imbalance problem becomes challenging when the learning process is driven by relatively few minority class samples. Learning algorithms using imbalanced data sets generally result in large numbers of false negatives. In this article, we present an efficient rebalancing method for finding video frames that contain bleeding lesions. The majority class generally has clusters of data within it. Here we cluster the majority class and under-sample each cluster based on its variance so that useful examples are not lost during the under-sampling process. The balance of bleeding to non-bleeding frames is restored by the proposed cluster-based under-sampling and oversampling using the Synthetic Minority Over-sampling Technique (SMOTE). Experiments were conducted using synthetic data and videos manually annotated by medical specialists for obscure bleeding detection. Our method achieved high average sensitivity and specificity.
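
    The two ingredients can be sketched as follows: variance-proportional under-sampling of majority-class clusters plus a minimal SMOTE interpolation (parameter choices here are illustrative, and the paper's exact scheme may differ):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_undersample(X_major, n_keep, n_clusters=10, seed=0):
    """Cluster the majority class and draw from each cluster in proportion
    to its variance, so diverse examples survive the under-sampling."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_major)
    var = np.array([X_major[labels == c].var() for c in range(n_clusters)])
    quota = np.maximum(1, (n_keep * var / var.sum()).astype(int))
    keep = [rng.choice(np.flatnonzero(labels == c),
                       min(quota[c], int((labels == c).sum())), replace=False)
            for c in range(n_clusters)]
    return X_major[np.concatenate(keep)]

def smote(X_minor, n_new, k=5, seed=0):
    """Minimal SMOTE: interpolate between a minority point and one of its
    k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_minor))
        nn = np.argsort(np.linalg.norm(X_minor - X_minor[i], axis=1))[1:k + 1]
        j = rng.choice(nn)
        out.append(X_minor[i] + rng.random() * (X_minor[j] - X_minor[i]))
    return np.vstack(out)
```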

  15. Using Inverse Probability Bootstrap Sampling to Eliminate Sample Induced Bias in Model Based Analysis of Unequal Probability Samples.

    Directory of Open Access Journals (Sweden)

    Matthew Nahorniak

    Full Text Available In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as in stratified sampling. Design based statistical analysis tools are appropriate for seamless integration of sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use datasets to address questions that, in many cases, were not considered during the sampling design phase. Questions may arise requiring the use of model based statistical tools such as multiple regression, quantile regression, or regression tree analysis. However, such model based tools may require, for ensuring unbiased estimation, data from simple random samples, which can be problematic when analyzing data from unequal probability designs. Despite numerous method specific tools available to properly account for sampling design, too often in the analysis of ecological data, sample design is ignored and consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition to the set of tools available for researchers to properly account for sampling design in model based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal probability re-samples from a probability sample, from which unbiased model based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model based analysis tools--linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we…
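
    A minimal sketch of IPB resampling, assuming known sample inclusion probabilities pi_i and a NumPy array of observations: drawing with selection probability proportional to 1/pi_i approximates equal-probability samples on which standard model-based tools can be run:

```python
import numpy as np

def ipb_resamples(data, inclusion_prob, n_boot=1000, seed=0):
    """Yield bootstrap resamples drawn with probability proportional to
    1/pi_i, so each resample mimics a simple random sample."""
    rng = np.random.default_rng(seed)
    w = 1.0 / np.asarray(inclusion_prob)
    w /= w.sum()
    n = len(data)
    for _ in range(n_boot):
        yield data[rng.choice(n, size=n, replace=True, p=w)]

# Usage: fit the model of interest on each resample and pool the estimates,
# e.g. slopes = [np.polyfit(s[:, 0], s[:, 1], 1)[0] for s in ipb_resamples(xy, pi)]
```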

  16. Locality, bulk equations of motion and the conformal bootstrap

    Energy Technology Data Exchange (ETDEWEB)

    Kabat, Daniel [Department of Physics and Astronomy, Lehman College, City University of New York,250 Bedford Park Blvd. W, Bronx NY 10468 (United States); Lifschytz, Gilad [Department of Mathematics, Faculty of Natural Science, University of Haifa,199 Aba Khoushy Ave., Haifa 31905 (Israel)

    2016-10-18

    We develop an approach to construct local bulk operators in a CFT to order 1/N{sup 2}. Since 4-point functions are not fixed by conformal invariance we use the OPE to categorize possible forms for a bulk operator. Using previous results on 3-point functions we construct a local bulk operator in each OPE channel. We then impose the condition that the bulk operators constructed in different channels agree, and hence give rise to a well-defined bulk operator. We refer to this condition as the “bulk bootstrap.” We argue and explicitly show in some examples that the bulk bootstrap leads to some of the same results as the regular conformal bootstrap. In fact the bulk bootstrap provides an easier way to determine some CFT data, since it does not require knowing the form of the conformal blocks. This analysis clarifies previous results on the relation between bulk locality and the bootstrap for theories with a 1/N expansion, and it identifies a simple and direct way in which OPE coefficients and anomalous dimensions determine the bulk equations of motion to order 1/N{sup 2}.

  17. Unbiased bootstrap error estimation for linear discriminant analysis.

    Science.gov (United States)

    Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R

    2014-12-01

    Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
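
    A sketch of the convex bootstrap estimator for LDA error, using the classical fixed 0.632 weight as a stand-in for the paper's sample-size-dependent unbiased weight (all names illustrative):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def convex_bootstrap_error(X, y, n_boot=100, w=0.632, seed=0):
    """(1 - w) * resubstitution error + w * out-of-bag bootstrap error."""
    rng = np.random.default_rng(seed)
    resub = np.mean(LinearDiscriminantAnalysis().fit(X, y).predict(X) != y)
    oob_errs, n = [], len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                  # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)        # left-out points
        if oob.size == 0 or np.unique(y[idx]).size < 2:
            continue                                 # LDA needs both classes
        m = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
        oob_errs.append(np.mean(m.predict(X[oob]) != y[oob]))
    return (1 - w) * resub + w * np.mean(oob_errs)
```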

  18. Standard errors and confidence intervals for correlations corrected for indirect range restriction: A simulation study comparing analytic and bootstrap methods.

    Science.gov (United States)

    Kennet-Cohen, Tamar; Kleper, Dvir; Turvall, Elliot

    2018-02-01

    A frequent topic of psychological research is the estimation of the correlation between two variables from a sample that underwent a selection process based on a third variable. Due to indirect range restriction, the sample correlation is a biased estimator of the population correlation, and a correction formula is used. In the past, bootstrap standard error and confidence intervals for the corrected correlations were examined with normal data. The present study proposes a large-sample estimate (an analytic method) for the standard error, and a corresponding confidence interval for the corrected correlation. Monte Carlo simulation studies involving both normal and non-normal data were conducted to examine the empirical performance of the bootstrap and analytic methods. Results indicated that with both normal and non-normal data, the bootstrap standard error and confidence interval were generally accurate across simulation conditions (restricted sample size, selection ratio, and population correlations) and outperformed estimates of the analytic method. However, with certain combinations of distribution type and model conditions, the analytic method has an advantage, offering reasonable estimates of the standard error and confidence interval without resorting to the bootstrap procedure's computer-intensive approach. We provide SAS code for the simulation studies. © 2017 The British Psychological Society.
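
    A sketch of the bootstrap side of the comparison; for brevity, the simpler direct-restriction (Thorndike Case 2) correction stands in for the indirect-restriction formula the paper actually studies, and inputs are assumed to be NumPy arrays:

      import numpy as np

      def correct_direct(r, sd_pop, sd_restricted):
          # Thorndike Case 2 correction for direct range restriction.
          k = sd_pop / sd_restricted
          return (r * k) / np.sqrt(1.0 + r**2 * (k**2 - 1.0))

      def corrected_r_ci(x, y, sd_pop, n_boot=2000, alpha=0.05, rng=None):
          # Percentile bootstrap confidence interval for the corrected r.
          rng = np.random.default_rng(rng)
          n, stats = len(x), []
          for _ in range(n_boot):
              i = rng.integers(0, n, size=n)
              r = np.corrcoef(x[i], y[i])[0, 1]
              stats.append(correct_direct(r, sd_pop, np.std(x[i], ddof=1)))
          return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])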

  19. Probabilistic forecasts of near-term climate change based on a resampling ensemble technique

    OpenAIRE

    Räisänen, J.; Ruokolainen, L.

    2006-01-01

    Probabilistic forecasts of near-term climate change are derived by using a multimodel ensemble of climate change simulations and a simple resampling technique that increases the number of realizations for the possible combination of anthropogenic climate change and internal climate variability. The technique is based on the assumption that the probability distribution of local climate changes is only a function of the all-model mean global average warming. Although this is unlikely to be exac...

  20. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Science.gov (United States)

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
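
    A simplified sketch of bootstrap test-error estimation used to pick the thin-plate-spline smoothing parameter; SciPy's RBFInterpolator stands in for the paper's fitting code, and training points are deduplicated to keep the kernel matrix well posed:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def select_smoothing(points, values, candidates, n_boot=20, rng=None):
          rng = np.random.default_rng(rng)
          n = len(points)
          best, best_err = None, np.inf
          for s in candidates:
              errs = []
              for _ in range(n_boot):
                  tr = np.unique(rng.integers(0, n, size=n))   # bootstrap draw, deduped
                  te = np.setdiff1d(np.arange(n), tr)          # out-of-bag test points
                  if te.size == 0:
                      continue
                  f = RBFInterpolator(points[tr], values[tr],
                                      kernel='thin_plate_spline', smoothing=s)
                  errs.append(np.mean((f(points[te]) - values[te]) ** 2))
              if errs and np.mean(errs) < best_err:
                  best, best_err = s, np.mean(errs)
          return best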

  1. Wayside acoustic diagnosis of defective train bearings based on signal resampling and information enhancement

    Science.gov (United States)

    He, Qingbo; Wang, Jun; Hu, Fei; Kong, Fanrang

    2013-10-01

    The diagnosis of train bearing defects plays a significant role in maintaining the safety of railway transport. Among various defect detection techniques, acoustic diagnosis is capable of detecting incipient defects of a train bearing and is suitable for wayside monitoring. However, the wayside acoustic signal will be corrupted by the Doppler effect and heavy surrounding noise. This paper proposes a solution to overcome these two difficulties in wayside acoustic diagnosis. In the solution, a dynamic resampling method is first presented to reduce the Doppler effect, and then an adaptive stochastic resonance (ASR) method is proposed to enhance the defective characteristic frequency automatically with the aid of noise. The resampling method is based on a frequency variation curve extracted from the time-frequency distribution (TFD) of an acoustic signal by dynamically minimizing local cost functions. For the ASR method, the genetic algorithm is introduced to adaptively select the optimal parameter of the multiscale noise tuning (MST)-based stochastic resonance (SR) method. The proposed wayside acoustic diagnostic scheme combines signal resampling and information enhancement, and is thus expected to be effective in wayside defective bearing detection. The experimental study verifies the effectiveness of the proposed solution.

  2. RELATIVE ORIENTATION AND MODIFIED PIECEWISE EPIPOLAR RESAMPLING FOR HIGH RESOLUTION SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    K. Gong

    2017-05-01

    Full Text Available High-resolution optical satellite sensors have entered a new era in the last few years, as satellite stereo images at half-meter or even 30 cm resolution have become available. Nowadays, high resolution satellite image data are commonly used for Digital Surface Model (DSM) generation and 3D reconstruction. It is common that the Rational Polynomial Coefficients (RPCs) provided by the vendors have only rough precision and that no ground control information is available to refine the RPCs. Therefore, we present two relative orientation methods that use corresponding image points only: the first method uses quasi ground control information, generated from the corresponding points and the rough RPCs, for the bias-compensation model; the second method estimates the relative pointing errors on the matching image and removes this error by an affine model. Both methods need no ground control information and are applied to the entire image. To get very dense point clouds, the Semi-Global Matching (SGM) method is an efficient tool. However, before the matching process can be accomplished, epipolar constraints are required. In most conditions, satellite images have very large dimensions, whereas epipolar geometry generation and image resampling are usually carried out in small tiles. This paper therefore also presents a modified piecewise epipolar resampling method for the entire image without tiling. The quality of the proposed relative orientation and epipolar resampling methods is evaluated, and sub-pixel accuracy has been achieved in our work.

  3. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.

    Science.gov (United States)

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-08-12

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled images (RS-images) created from real UAV-images captured at different altitudes is examined to test the quality of the RS-image output; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra. The performance of the object-based image analysis (OBIA) implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights.

  4. Testing Process Factor Analysis Models Using the Parametric Bootstrap.

    Science.gov (United States)

    Zhang, Guangjian

    2018-01-01

    Process factor analysis (PFA) is a latent variable model for intensive longitudinal data. It combines P-technique factor analysis and time series analysis. A goodness-of-fit test for PFA is currently unavailable. In this paper, we propose a parametric bootstrap method for assessing model fit in PFA. We illustrate the test with an empirical data set in which 22 participants rated their affect every day over a period of 90 days. We also explore the Type I error and power of the parametric bootstrap test with simulated data.
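
    A generic sketch of a parametric bootstrap fit test of this kind; `fit`, `simulate`, and `statistic` are hypothetical user-supplied callables, not part of the authors' software:

      import numpy as np

      def parametric_bootstrap_pvalue(data, fit, simulate, statistic,
                                      n_boot=500, rng=None):
          rng = np.random.default_rng(rng)
          params = fit(data)                       # fit the hypothesized model
          t_obs = statistic(data, params)          # observed fit statistic
          t_sim = []
          for _ in range(n_boot):
              d = simulate(params, rng)            # data generated from the fitted model
              t_sim.append(statistic(d, fit(d)))   # refit, recompute the statistic
          t_sim = np.asarray(t_sim)
          return (1 + np.sum(t_sim >= t_obs)) / (n_boot + 1)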

  5. A critique of astrophysical applications of Hagedorn's bootstrap

    CERN Document Server

    Nahm, W

    1980-01-01

    It has been shown that Hagedorn's bootstrap should not be applied to hadronic matter at densities large compared with nuclear densities. The correct predictions of the thermodynamical model do not use any relation between the mass of the fireballs and their size, whereas the astrophysical applications depend on the unreasonable assumption that the size is independent of the mass. The most spectacular prediction of the bootstrap, namely violent black hole explosions yielding 10{sup 15} g in the last millisecond, is shown to be completely unfounded, even if such an assumption is made. (21 refs).

  6. Bootstrapped efficiency measures of oil blocks in Angola

    International Nuclear Information System (INIS)

    Barros, C.P.; Assaf, A.

    2009-01-01

    This paper investigates the technical efficiency of Angolan oil blocks over the period 2002-2007. A double bootstrap data envelopment analysis (DEA) model is adopted, composed of a DEA variable-returns-to-scale (VRS) model in the first stage, followed by a bootstrapped truncated regression in the second stage. Results showed that, on average, technical efficiency fluctuated over the period of study, but deep and ultradeep oil blocks generally maintained a consistent efficiency level. Policy implications are derived.

  7. A bootstrap method to avoid the effect of concurvity in generalised additive models in time series studies of air pollution.

    Science.gov (United States)

    Figueiras, Adolfo; Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen

    2005-10-01

    In recent years a great number of studies have applied generalised additive models (GAMs) to time series data to estimate the short-term health effects of air pollution. Lately, however, it has been found that concurvity (the non-parametric analogue of multicollinearity) might lead to underestimation of the standard errors of the effects of independent variables. Underestimation of standard errors means that, for concurvity levels commonly present in the data, the risk of committing a type I error rises more than threefold. This study developed a conditional bootstrap methodology that consists of assuming that the outcome in any observation is conditional upon the values of the set of independent variables used. It then tested this procedure by means of a simulation study using a Poisson additive model. The response variable of this model is a function of an unobserved confounding variable (which introduces trend and seasonality), real black smoke data, and temperature. Scenarios were created with different coefficients and degrees of concurvity. The conditional bootstrap provides confidence intervals with coverage close to nominal (95%), irrespective of the degree of concurvity, the number of variables in the model or the magnitude of the coefficient to be estimated (for example, for a concurvity of 0.85, bootstrap confidence interval coverage is 95% compared with 71% for the asymptotic interval obtained directly with the S-Plus gam function). The bootstrap method avoids the problem of concurvity in time series studies of air pollution, and is easily generalised to non-linear dose-risk effects. All bootstrap calculations described in this paper can be performed using the S-Plus gam.boot software.

  8. Bootstrap Approach to Comparison of Alternative Methods of ...

    African Journals Online (AJOL)

    A bootstrap simulation approach was used to generate values for the endogenous variables of a simultaneous equation model popularly known as the Keynesian Model of Income Determination. Three sample sizes (20, 30 and 40), each replicated 10, 20 and 30 times, were considered. Four different estimation techniques: Ordinary ...

  9. Properties of bootstrap tests for N-of-1 studies.

    Science.gov (United States)

    Lin, Sharon X; Morrison, Leanne; Smith, Peter W F; Hargood, Charlie; Weal, Mark; Yardley, Lucy

    2016-11-01

    N-of-1 study designs involve the collection and analysis of repeated measures data from an individual both while not using an intervention and while using an intervention. This study explores the use of semi-parametric and parametric bootstrap tests in the analysis of N-of-1 studies under a single time series framework in the presence of autocorrelation. When the Type I error rates of bootstrap tests are compared to those of Wald tests, our results show that the bootstrap tests have more desirable properties. We compare the results for normally distributed errors with those for contaminated normally distributed errors and find that, except when there is relatively large autocorrelation, there is little difference between the power of the parametric and semi-parametric bootstrap tests. We also experiment with two intervention designs, ABAB and AB, and show that the ABAB design has more power. The results provide guidelines for designing N-of-1 studies, in the sense of how many observations and how many intervention changes are needed to achieve a certain level of power, and which test should be performed. © 2016 The Authors British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of the British Psychological Society.
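
    A simplified sketch of a parametric bootstrap test for an AB design with AR(1) errors, in the spirit of the tests compared here; this is an illustration under stated assumptions, not the authors' code:

      import numpy as np

      def ab_bootstrap_test(y, phase, n_boot=1000, rng=None):
          # `phase` is 0 during baseline (A) and 1 during intervention (B).
          rng = np.random.default_rng(rng)
          y, phase = np.asarray(y, float), np.asarray(phase, float)

          def fit(y):
              X = np.column_stack([np.ones_like(phase), phase])
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)
              resid = y - X @ beta
              rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
              return beta, rho, np.std(resid, ddof=2)

          beta, rho, sigma = fit(y)
          hits = 0
          for _ in range(n_boot):
              e = np.empty(len(y))
              e[0] = rng.normal(0.0, sigma)
              for t in range(1, len(y)):                       # AR(1) errors
                  e[t] = rho * e[t - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))
              b, _, _ = fit(beta[0] + e)                       # simulate under no effect
              hits += abs(b[1]) >= abs(beta[1])
          return (1 + hits) / (n_boot + 1)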

  10. Finite-Size Effects for Some Bootstrap Percolation Models

    NARCIS (Netherlands)

    Enter, A.C.D. van; Adler, Joan; Duarte, J.A.M.S.

    The consequences of Schonmann's new proof that the critical threshold is unity for certain bootstrap percolation models are explored. It is shown that this proof provides an upper bound for the finite-size scaling in these systems. Comparison with data for one case demonstrates that this scaling

  11. Automatic shape model building based on principal geodesic analysis bootstrapping

    DEFF Research Database (Denmark)

    Dam, Erik B; Fletcher, P Thomas; Pizer, Stephen M

    2008-01-01

    shape representation is deformed into the training shapes followed by computation of the shape mean and modes of shape variation. In the first iteration, a generic shape model is used as starting point - in the following iterations in the bootstrap method, the resulting mean and modes from the previous...

  12. Metastability thresholds for anisotropic bootstrap percolation in three dimensions

    NARCIS (Netherlands)

    Van Enter, A.C.D.; Fey, A.

    2012-01-01

    In this paper we analyze several anisotropic bootstrap percolation models in three dimensions. We present the order of magnitude for the metastability thresholds for a fairly general class of models. In our proofs, we use an adaptation of the technique of dimensional reduction. We find that the

  13. Finite-size effects for anisotropic bootstrap percolation: Logarithmic corrections

    NARCIS (Netherlands)

    van Enter, Aernout C. D.; Hulshof, Tim

    In this note we analyse an anisotropic, two-dimensional bootstrap percolation model introduced by Gravner and Griffeath. We present upper and lower bounds on the finite-size effects. We discuss the similarities with the semi-oriented model introduced by Duarte.

  14. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  15. Bootstrap confidence intervals for model-based surveys | Ouma ...

    African Journals Online (AJOL)

    To deal with the problem, Chambers and Dorfman (1994) suggested an alternative method based on the bootstrap methodology. Their method is meant for model-based surveys. It starts by assuming a simple linear regression model as a working model in which the ratio estimator is optimal for estimating the population total.

  16. Bootstrapping the energy flow in the beginning of life.

    NARCIS (Netherlands)

    Hengeveld, R.; Fedonkin, M.A.

    2007-01-01

    This paper suggests that the energy flow on which all living structures depend only started up slowly, the low-energy, initial phase starting up a second, slightly more energetic phase, and so on. In this way, the build up of the energy flow follows a bootstrapping process similar to that found in

  17. Sidecoin: a snapshot mechanism for bootstrapping a blockchain

    OpenAIRE

    Krug, Joseph; Peterson, Jack

    2015-01-01

    Sidecoin is a mechanism that allows a snapshot to be taken of Bitcoin's blockchain. We compile a list of Bitcoin's unspent transaction outputs, then use these outputs and their corresponding balances to bootstrap a new blockchain. This allows the preservation of Bitcoin's economic state in the context of a new blockchain, which may provide new features and technical innovations.

  1. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of an adaptive kernel in a bootstrap boosting algorithm for kernel density estimation. The algorithm is a bias reduction scheme, like other existing schemes, but uses an adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  2. Assessing blood flow control through a bootstrap method

    OpenAIRE

    Simpson, D.M.; Panerai, R.B.; Ramos, E.G.; Lopes, J.M.A.; Villar Marinatto, M.N.; Nadal, J.; Evans, D.H.

    2004-01-01

    In order to assess blood flow control, the relationship between blood pressure and blood flow can be modeled by linear filters. We present a bootstrap method, which allows the statistical analysis of an index of blood flow control that is obtained from constrained system identification using an established set of pre-defined filters.

  3. Integrable deformations of conformal theories and bootstrap trees

    International Nuclear Information System (INIS)

    Mussardo, G.

    1991-01-01

    I present recent results in the study of massive integrable quantum field theories in (1+1) dimensions considered as perturbed conformal minimal models. The on-mass-shell properties of such theories, with a particular emphasis on the bootstrap principle, are investigated. (orig.)

  4. Statistical evaluation of single-photon emission computed tomography image using smoothed bootstrap method

    International Nuclear Information System (INIS)

    Tsukamoto, Megumi; Hatabu, Asuka; Takahashi, Yoshitake; Matsuda, Hiroshi; Okamoto, Kousuke; Yamashita, Noriyuki; Takagi, Tatsuya

    2013-01-01

    Many of the neurodegenerative diseases associated with a decrease in regional cerebral blood flow (rCBF) are untreatable, and the appropriate therapeutic strategy is to slow the progression of the disease. Therefore, it is important that a definitive diagnosis is made as soon as possible when such diseases are suspected. Diagnostic imaging methods, such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT), play an important role in such a definitive diagnosis. Since several problems arise when evaluating these images visually, a procedure to evaluate them objectively is necessary, and studies of image analyses using statistical evaluations have been suggested. However, the assumed data distribution in a statistical procedure may occasionally be inappropriate. Therefore, to evaluate the decrease of rCBF, it is important to use a statistical procedure without assumptions about the data distribution. In this study, we propose a new procedure that uses nonparametric or smoothed bootstrap methods to calculate a standardized distribution of the Z-score without assumptions about the data distribution. To test whether the judgment of the proposed procedure is equivalent to that of an evaluation based on the Z-score with a fixed threshold, the procedure was applied to a sample data set whose size was large enough to be appropriate for the assumption of the Z-score. As a result, the evaluations of the proposed procedure were equivalent to that of an evaluation based on the Z-score. (author)
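
    The core smoothed-bootstrap step is simple to sketch; this illustrates only the resampling idea, not the authors' full Z-score pipeline, and the bandwidth rule is an assumed default:

      import numpy as np

      def smoothed_bootstrap(sample, n_boot=1000, rng=None):
          # Resample with replacement, then add Gaussian kernel noise so each
          # replicate is effectively drawn from a kernel density estimate
          # rather than the raw empirical distribution.
          rng = np.random.default_rng(rng)
          x = np.asarray(sample, dtype=float)
          n = len(x)
          h = 1.06 * x.std(ddof=1) * n ** (-0.2)   # Silverman's rule-of-thumb bandwidth
          draws = rng.choice(x, size=(n_boot, n), replace=True)
          return draws + rng.normal(0.0, h, size=(n_boot, n))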

  5. Using the Descriptive Bootstrap to Evaluate Result Replicability (Because Statistical Significance Doesn't)

    Science.gov (United States)

    Spinella, Sarah

    2011-01-01

    As result replicability is essential to science and difficult to achieve through external replicability, the present paper notes the insufficiency of null hypothesis statistical significance testing (NHSST) and explains the bootstrap as a plausible alternative, with a heuristic example to illustrate the bootstrap method. The bootstrap relies on…

  6. RANDOM QUADRATIC-FORMS AND THE BOOTSTRAP FOR U-STATISTICS

    NARCIS (Netherlands)

    DEHLING, H; MIKOSCH, T

    1994-01-01

    We study the bootstrap distribution for U-statistics with special emphasis on the degenerate case. For the Efron bootstrap we give a short proof of the consistency using Mallows' metrics. We also study the i.i.d. weighted bootstrap [formula not reproduced in this record] where (X_i) and (ξ_i) are two i.i.d. sequences,

  7. Bootstrap analysis of designed experiments for reliability improvement with a non-constant scale parameter

    International Nuclear Information System (INIS)

    Wang, Guodong; He, Zhen; Xue, Li; Cui, Qingan; Lv, Shanshan; Zhou, Panpan

    2017-01-01

    Factors which significantly affect product reliability are of great interest to reliability practitioners. This paper proposes a bootstrap-based methodology for identifying significant factors when both location and scale parameters of the smallest extreme value distribution vary over experimental factors. An industrial thermostat experiment is presented, analyzed, and discussed as an illustrative example. The analysis results show that: (1) misspecification of a constant scale parameter may lead to the identification of spurious effects; (2) the important factors identified by different bootstrap methods (i.e., percentile bootstrapping, bias-corrected percentile bootstrapping, and bias-corrected and accelerated percentile bootstrapping) differ; (3) the number of factors that significantly affect the 10th lifetime percentile is smaller than the number of important factors identified at the 63.21st percentile. - Highlights: • Product reliability is improved by design of experiments when both scale and location parameters of the smallest extreme value distribution vary with experimental factors. • A bootstrap-based methodology is proposed to identify factors which significantly affect the 100pth lifetime percentile. • Bootstrap confidence intervals for experimental factors are obtained using three bootstrap methods (percentile bootstrapping, bias-corrected percentile bootstrapping, and bias-corrected and accelerated percentile bootstrapping). • The important factors identified by different bootstrap methods differ. • The number of factors that significantly affect the 10th percentile is smaller than the number of important factors identified at the 63.21st percentile.

  8. Resampling to accelerate cross-correlation searches for continuous gravitational waves from binary systems

    Science.gov (United States)

    Meadors, Grant David; Krishnan, Badri; Papa, Maria Alessandra; Whelan, John T.; Zhang, Yuanhao

    2018-02-01

    Continuous-wave (CW) gravitational waves (GWs) call for computationally-intensive methods. Low signal-to-noise ratio signals need templated searches with long coherent integration times and thus fine parameter-space resolution. Longer integration increases sensitivity. Low-mass x-ray binaries (LMXBs) such as Scorpius X-1 (Sco X-1) may emit accretion-driven CWs at strains reachable by current ground-based observatories. Binary orbital parameters induce phase modulation. This paper describes how resampling corrects binary and detector motion, yielding source-frame time series used for cross-correlation. Compared to the previous, detector-frame, templated cross-correlation method, used for Sco X-1 on data from the first Advanced LIGO observing run (O1), resampling is about 20 × faster in the costliest, most-sensitive frequency bands. Speed-up factors depend on integration time and search setup. The speed could be reinvested into longer integration with a forecast sensitivity gain, 20 to 125 Hz median, of approximately 51%, or from 20 to 250 Hz, 11%, given the same per-band cost and setup. This paper's timing model enables future setup optimization. Resampling scales well with longer integration, and at 10 × unoptimized cost could reach respectively 2.83 × and 2.75 × median sensitivities, limited by spin-wandering. Then an O1 search could yield a marginalized-polarization upper limit reaching torque-balance at 100 Hz. Frequencies from 40 to 140 Hz might be probed in equal observing time with 2 × improved detectors.
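
    A toy illustration of the resampling step (not the search pipeline): interpolate the detector-frame series at the arrival times of uniformly spaced source-frame ticks, so the demodulated series can be cross-correlated directly. Here `delay` is a hypothetical callable mapping source-frame time to detector arrival time:

      import numpy as np

      def resample_to_source_frame(t_det, x_det, delay):
          # Uniform grid in the source frame; np.interp clamps at the ends,
          # so `delay` should stay within the observed detector time span.
          t_src = np.linspace(t_det[0], t_det[-1], len(t_det))
          return t_src, np.interp(delay(t_src), t_det, x_det)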

  9. A resampling-based meta-analysis for detection of differential gene expression in breast cancer

    Directory of Open Access Journals (Sweden)

    Ergul Gulusan

    2008-12-01

    Full Text Available Background: Accuracy in the diagnosis of breast cancer and classification of cancer subtypes has improved over the years with the development of well-established immunohistopathological criteria. More recently, diagnostic gene-sets at the mRNA expression level have been tested as better predictors of disease state. However, breast cancer is heterogeneous in nature; thus extraction of differentially expressed gene-sets that stably distinguish normal tissue from various pathologies poses challenges. Meta-analysis of high-throughput expression data using a collection of statistical methodologies leads to the identification of robust tumor gene expression signatures. Methods: A resampling-based meta-analysis strategy, which involves the use of resampling and application of distribution statistics in combination to assess the degree of significance in differential expression between sample classes, was developed. Two independent microarray datasets that contain normal breast, invasive ductal carcinoma (IDC), and invasive lobular carcinoma (ILC) samples were used for the meta-analysis. Expression of the genes selected from the gene list for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes was tested on 10 independent primary IDC samples and matched non-tumor controls by real-time qRT-PCR. Other existing breast cancer microarray datasets were used in support of the resampling-based meta-analysis. Results: The two independent microarray studies were found to be comparable, although differing in their experimental methodologies (Pearson correlation coefficient, R = 0.9389 and R = 0.8465 for ductal and lobular samples, respectively). The resampling-based meta-analysis has led to the identification of a highly stable set of genes for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes. The expression results of the selected genes obtained through real-time qRT-PCR supported the meta-analysis results.

  10. A resampling-based meta-analysis for detection of differential gene expression in breast cancer

    International Nuclear Information System (INIS)

    Gur-Dedeoglu, Bala; Konu, Ozlen; Kir, Serkan; Ozturk, Ahmet Rasit; Bozkurt, Betul; Ergul, Gulusan; Yulug, Isik G

    2008-01-01

    Accuracy in the diagnosis of breast cancer and classification of cancer subtypes has improved over the years with the development of well-established immunohistopathological criteria. More recently, diagnostic gene-sets at the mRNA expression level have been tested as better predictors of disease state. However, breast cancer is heterogeneous in nature; thus extraction of differentially expressed gene-sets that stably distinguish normal tissue from various pathologies poses challenges. Meta-analysis of high-throughput expression data using a collection of statistical methodologies leads to the identification of robust tumor gene expression signatures. A resampling-based meta-analysis strategy, which involves the use of resampling and application of distribution statistics in combination to assess the degree of significance in differential expression between sample classes, was developed. Two independent microarray datasets that contain normal breast, invasive ductal carcinoma (IDC), and invasive lobular carcinoma (ILC) samples were used for the meta-analysis. Expression of the genes selected from the gene list for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes was tested on 10 independent primary IDC samples and matched non-tumor controls by real-time qRT-PCR. Other existing breast cancer microarray datasets were used in support of the resampling-based meta-analysis. The two independent microarray studies were found to be comparable, although differing in their experimental methodologies (Pearson correlation coefficient, R = 0.9389 and R = 0.8465 for ductal and lobular samples, respectively). The resampling-based meta-analysis has led to the identification of a highly stable set of genes for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes. The expression results of the selected genes obtained through real-time qRT-PCR supported the meta-analysis results.

  11. DETECTING DIGITAL IMAGE FORGERIES USING RE-SAMPLING BY AUTOMATIC REGION OF INTEREST (ROI

    Directory of Open Access Journals (Sweden)

    P. Subathra

    2012-05-01

    Full Text Available Nowadays, digital images can be easily altered using high-performance computers, sophisticated photo-editing software, computer graphics software, etc. This affects the authenticity of images in law, politics, the media, and business. In this paper, we propose a resampling technique using automatic selection of a Region of Interest (ROI) for determining the authenticity of digitally altered images. The proposed technique provides better results under scaling, rotation, skewing transformations, and any of their arbitrary combinations. It also avoids the protracted complexity of manual ROI selection.

  12. Exploration of the factor structure of the Kirton Adaption-Innovation Inventory using bootstrapping estimation.

    Science.gov (United States)

    Im, Subin; Min, Soonhong

    2013-04-01

    Exploratory factor analyses of the Kirton Adaption-Innovation Inventory (KAI), which serves to measure individual cognitive styles, generally indicate three factors: sufficiency of originality, efficiency, and rule/group conformity. In contrast, a 2005 study by Im and Hu using confirmatory factor analysis supported a four-factor structure, dividing the sufficiency of originality dimension into two subdimensions, idea generation and preference for change. This study extends Im and Hu's (2005) study of a derived version of the KAI by providing additional evidence for the four-factor structure. Specifically, the authors test the robustness of the parameter estimates to violation of the normality assumptions in the sample using bootstrap methods. A bias-corrected confidence interval bootstrapping procedure, conducted on a sample of 356 participants (members of the Arkansas Household Research Panel, with middle SES and an average age of 55.6 yr., SD = 13.9), showed that the four-factor model with two subdimensions of sufficiency of originality fits the data significantly better than the three-factor model under non-normality conditions.

  13. Inference for Optimal Dynamic Treatment Regimes using an Adaptive m-out-of-n Bootstrap Scheme

    Science.gov (United States)

    Chakraborty, Bibhas; Laber, Eric B.; Zhao, Yingqi

    2013-01-01

    A dynamic treatment regime consists of a set of decision rules that dictate how to individualize treatment to patients based on available treatment and covariate history. A common method for estimating an optimal dynamic treatment regime from data is Q-learning, which involves nonsmooth operations on the data. This nonsmoothness causes standard asymptotic approaches for inference, like the bootstrap or Taylor series arguments, to break down if applied without correction. Here, we consider the m-out-of-n bootstrap for constructing confidence intervals for the parameters indexing the optimal dynamic regime. We propose an adaptive choice of m and show that it produces asymptotically correct confidence sets under fixed alternatives. Furthermore, the proposed method has the advantage of being conceptually and computationally much simpler than competing methods possessing this same theoretical property. We provide an extensive simulation study to compare the proposed method with currently available inference procedures. The results suggest that the proposed method delivers nominal coverage while being less conservative than alternatives. The proposed methods are implemented in the qLearn R-package and have been made available on the Comprehensive R-Archive Network (http://cran.r-project.org/). Analysis of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study is used as an illustrative example. PMID:23845276
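
    A sketch of the basic m-out-of-n construction with a fixed, user-chosen m; the paper's adaptive choice of m is the part not shown here:

      import numpy as np

      def m_out_of_n_ci(data, estimator, m, n_boot=2000, alpha=0.05, rng=None):
          # Resample m < n points; sqrt(m) * (theta*_m - theta_hat) approximates
          # the law of sqrt(n) * (theta_hat - theta) even in non-smooth settings.
          rng = np.random.default_rng(rng)
          data = np.asarray(data)
          n = len(data)
          theta = estimator(data)
          devs = [np.sqrt(m) * (estimator(data[rng.integers(0, n, size=m)]) - theta)
                  for _ in range(n_boot)]
          q_lo, q_hi = np.percentile(devs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
          return theta - q_hi / np.sqrt(n), theta - q_lo / np.sqrt(n)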

  14. Improved efficiency of multi-criteria IMPT treatment planning using iterative resampling of randomly placed pencil beams

    Science.gov (United States)

    van de Water, S.; Kraan, A. C.; Breedveld, S.; Schillemans, W.; Teguh, D. N.; Kooy, H. M.; Madden, T. M.; Heijmen, B. J. M.; Hoogeman, M. S.

    2013-10-01

    This study investigates whether ‘pencil beam resampling’, i.e. iterative selection and weight optimization of randomly placed pencil beams (PBs), reduces optimization time and improves plan quality for multi-criteria optimization in intensity-modulated proton therapy, compared with traditional modes in which PBs are distributed over a regular grid. Resampling consisted of repeatedly performing: (1) random selection of candidate PBs from a very fine grid, (2) inverse multi-criteria optimization, and (3) exclusion of low-weight PBs. The newly selected candidate PBs were added to the PBs in the existing solution, causing the solution to improve with each iteration. Resampling and traditional regular grid planning were implemented into our in-house developed multi-criteria treatment planning system ‘Erasmus iCycle’. The system optimizes objectives successively according to their priorities as defined in the so-called ‘wish-list’. For five head-and-neck cancer patients and two PB widths (3 and 6 mm sigma at 230 MeV), treatment plans were generated using: (1) resampling, (2) anisotropic regular grids and (3) isotropic regular grids, while using varying sample sizes (resampling) or grid spacings (regular grid). We assessed differences in optimization time (for comparable plan quality) and in plan quality parameters (for comparable optimization time). Resampling reduced optimization time by a factor of 2.8 and 5.6 on average (7.8 and 17.0 at maximum) compared with the use of anisotropic and isotropic grids, respectively. Doses to organs-at-risk were generally reduced when using resampling, with median dose reductions ranging from 0.0 to 3.0 Gy (maximum: 14.3 Gy, relative: 0%-42%) compared with anisotropic grids and from -0.3 to 2.6 Gy (maximum: 11.4 Gy, relative: -4%-19%) compared with isotropic grids. Resampling was especially effective when using thin PBs (3 mm sigma). Resampling plans contained on average fewer PBs, energy layers and protons than anisotropic

  15. Resampling nucleotide sequences with closest-neighbor trimming and its comparison to other methods.

    Directory of Open Access Journals (Sweden)

    Kouki Yonezawa

    Full Text Available A large number of nucleotide sequences of various pathogens are available in public databases. The growth of these datasets has resulted in an enormous increase in computational costs. Moreover, due to differences in surveillance activities, the number of sequences found in databases varies from one country to another and from year to year. Therefore, it is important to study resampling methods to reduce the sampling bias. We propose a novel algorithm, called the closest-neighbor trimming method, that resamples a given number of sequences from a large nucleotide sequence dataset. The performance of the proposed algorithm was compared with that of other algorithms using the nucleotide sequences of human H3N2 influenza viruses. We compared the closest-neighbor trimming method with the naive hierarchical clustering algorithm and the k-medoids clustering algorithm. Genetic information accumulated in public databases contains sampling bias. The closest-neighbor trimming method can thin out densely sampled sequences from a given dataset. Since nucleotide sequences are among the most widely used materials in the life sciences, we anticipate that applying our algorithm to various datasets will help reduce sampling bias.
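
    A sketch of the trimming idea, assuming a precomputed pairwise distance matrix; which member of the closest pair to drop is a detail of the actual method, and the first is removed arbitrarily here:

      import numpy as np

      def closest_neighbor_trim(dist, keep):
          # Repeatedly remove one member of the current closest pair until
          # only `keep` sequences remain, thinning densely sampled regions.
          d = np.asarray(dist, dtype=float).copy()
          np.fill_diagonal(d, np.inf)
          alive = np.ones(d.shape[0], dtype=bool)
          while alive.sum() > keep:
              i, _ = np.unravel_index(np.argmin(d), d.shape)
              d[i, :] = d[:, i] = np.inf       # retire sequence i
              alive[i] = False
          return np.flatnonzero(alive)         # indices of retained sequences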

  16. A proof of fulfillment of the strong bootstrap condition

    International Nuclear Information System (INIS)

    Fadin, V.S.; Papa, A.

    2002-01-01

    It is shown that the kernel of the BFKL equation for the octet color state of two Reggeized gluons satisfies the strong bootstrap condition in the next-to-leading order. This condition is much more restrictive than the one obtained from the requirement of the Reggeized form for the elastic scattering amplitudes in the next-to-leading approximation. It is necessary, however, for self-consistency of the assumption of the Reggeized form of the production amplitudes in multi-Regge kinematics, which are used in the derivation of the BFKL equation. The fulfillment of the strong bootstrap condition for the kernel opens the way to a rigorous proof of the BFKL equation in the next-to-leading approximation. (author)

  17. A bootstrap lunar base: Preliminary design review 2

    Science.gov (United States)

    1987-01-01

    A bootstrap lunar base is the gateway to manned solar system exploration and requires new ideas and new designs on the cutting edge of technology. A preliminary design for a Bootstrap Lunar Base, the second provided by this contractor, is presented. An overview of the work completed is discussed as well as the technical, management, and cost strategies to complete the program requirements. The lunar base design stresses the transforming capabilities of its lander vehicles to aid in base construction. The design also emphasizes modularity and expandability in the base configuration to support the long-term goals of scientific research and profitable lunar resource exploitation. To successfully construct, develop, and inhabit a permanent lunar base, however, several technological advancements must first be realized. Some of these technological advancements are also discussed.

  18. On Comparison of Stochastic Reserving Methods with Bootstrapping

    Directory of Open Access Journals (Sweden)

    Liivika Tee

    2017-01-01

    Full Text Available We consider the well-known stochastic reserve estimation methods on the basis of generalized linear models, such as the (over-dispersed) Poisson model, the gamma model and the log-normal model. To assess the likely variability of the claims reserve, the bootstrap method is considered. In the bootstrapping framework, we discuss the choice of residuals, namely the Pearson residuals, the deviance residuals and the Anscombe residuals. In addition, several possible residual adjustments are discussed and compared in a case study. We carry out a practical implementation and comparison of methods using real-life insurance data to estimate reserves and their prediction errors. We propose to consider proper scoring rules for model validation, and the assessments will be drawn from an extensive case study.

  19. Conformal bootstrap: non-perturbative QFT's under siege

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Originally formulated in the 1970s, the conformal bootstrap is the ambitious idea that one can use internal consistency conditions to carve out, and eventually solve, the space of conformal field theories. In this talk I will review recent developments in the field which have boosted this program to a new level. I will present a method to extract quantitative information about strongly interacting theories, such as the 3D Ising model, the O(N) vector model, and even systems without a Lagrangian formulation. I will explain how these techniques have led to the world-record determination of several critical exponents. Finally, I will review exact analytical results obtained using bootstrap techniques.

  20. A Double Parametric Bootstrap Test for Topic Models

    OpenAIRE

    Seto, Skyler; Tan, Sarah; Hooker, Giles; Wells, Martin T.

    2017-01-01

    Non-negative matrix factorization (NMF) is a technique for finding latent representations of data. The method has been applied to corpora to construct topic models. However, NMF has likelihood assumptions which are often violated by real document corpora. We present a double parametric bootstrap test for evaluating the fit of an NMF-based topic model based on the duality of the KL divergence and Poisson maximum likelihood estimation. The test correctly identifies whether a topic model based o...

  1. Higgs Critical Exponents and Conformal Bootstrap in Four Dimensions

    DEFF Research Database (Denmark)

    Antipin, Oleg; Mølgaard, Esben; Sannino, Francesco

    2015-01-01

    We investigate relevant properties of composite operators emerging in nonsupersymmetric, four-dimensional gauge-Yukawa theories with interacting conformal fixed points within a precise framework. The theories investigated in this work are structurally similar to the standard model of particle interactions ... bootstrap results are then compared to precise four-dimensional conformal field theory results. To accomplish this, it was necessary to calculate explicitly the crossing symmetry relations for the global symmetry group SU(N) × SU(N).

  2. 'Bootstrap' charging of surfaces composed of multiple materials

    Science.gov (United States)

    Stannard, P. R.; Katz, I.; Parks, D. E.

    1981-01-01

    The paper examines the charging of a checkerboard array of two materials, only one of which tends to acquire a negative potential alone, using the NASA Charging Analyzer Program (NASCAP). The influence of the charging material's field causes the otherwise 'non-charging' material to acquire a negative potential due to the suppression of its secondary emission ('bootstrap' charging). The NASCAP predictions for the equilibrium potential difference between the two materials are compared to results based on an analytical model.

  3. TruSDN: Bootstrapping Trust in Cloud Network Infrastructure

    OpenAIRE

    Paladi, Nicolae; Gehrmann, Christian

    2017-01-01

    Software-Defined Networking (SDN) is a novel architectural model for cloud network infrastructure, improving resource utilization, scalability and administration. SDN deployments increasingly rely on virtual switches executing on commodity operating systems with large code bases, which are prime targets for adversaries attacking the network infrastructure. We describe and implement TruSDN, a framework for bootstrapping trust in SDN infrastructure using Intel Software Guard Extensions (SGX),...

  4. Uncertainty Estimation using Bootstrapped Kriging Predictions for Precipitation Isoscapes

    Science.gov (United States)

    Ma, C.; Bowen, G. J.; Vander Zanden, H.; Wunder, M.

    2017-12-01

    Isoscapes are spatial models representing the distribution of stable isotope values across landscapes. Isoscapes of hydrogen and oxygen in precipitation are now widely used in a diversity of fields, including geology, biology, hydrology, and atmospheric science. To generate isoscapes, geostatistical methods are typically applied to extend predictions from limited data measurements. Kriging is a popular method in isoscape modeling, but quantifying the uncertainty associated with the resulting isoscapes is challenging. Applications that use precipitation isoscapes to determine sample origin require estimation of uncertainty. Here we present a simple bootstrap method (SBM) to estimate the mean and uncertainty of the kriged isoscape and compare these results with a generalized bootstrap method (GBM) applied in previous studies. We used hydrogen isotopic data from IsoMAP to explore these two approaches for estimating uncertainty. We conducted 10 simulations for each bootstrap method and found that SBM yielded kriging predictions in more simulations (9/10) than GBM (4/10). Predictions from SBM were closer to the original prediction generated without bootstrapping and had less variance than those from GBM. SBM was tested on different datasets from IsoMAP with different numbers of observation sites. We determined that predictions from the datasets with fewer than 40 observation sites using SBM were more variable than the original prediction. The approaches we used for estimating uncertainty will be compiled in an R package that is under development. We expect that these robust estimates of precipitation isoscape uncertainty can be applied in diagnosing the origin of samples ranging from various types of water to migratory animals, food products, and humans.

  5. Necessary Condition for Emergent Symmetry from the Conformal Bootstrap.

    Science.gov (United States)

    Nakayama, Yu; Ohtsuki, Tomoki

    2016-09-23

    We use the conformal bootstrap program to derive the necessary conditions for emergent symmetry enhancement from discrete symmetry (e.g., Z_{n}) to continuous symmetry [e.g., U(1)] under the renormalization group flow. In three dimensions, in order for Z_{2} symmetry to be enhanced to U(1) symmetry, the conformal bootstrap program predicts that the scaling dimension of the order parameter field at the infrared conformal fixed point must satisfy Δ_{1}>1.08. We also obtain the similar necessary conditions for Z_{3} symmetry with Δ_{1}>0.580 and Z_{4} symmetry with Δ_{1}>0.504 from the simultaneous conformal bootstrap analysis of multiple four-point functions. As applications, we show that our necessary conditions impose severe constraints on the nature of the chiral phase transition in QCD, the deconfinement criticality in Néel valence bond solid transitions, and anisotropic deformations in critical O(n) models. We prove that some fixed points proposed in the literature are unstable under the perturbation that cannot be forbidden by the discrete symmetry. In these situations, the second-order phase transition with enhanced symmetry cannot happen.

  6. Bootstrap method of interior-branch test for phylogenetic trees.

    Science.gov (United States)

    Sitnikova, T

    1996-04-01

    Statistical properties of the bootstrap test of interior branch lengths of phylogenetic trees have been studied and compared with those of the standard interior-branch test in computer simulations. Examination of the properties of the tests under the null hypothesis showed that both tests for an interior branch of a predetermined topology are quite reliable when the distribution of the branch length estimate approaches a normal distribution. Unlike the standard interior-branch test, the bootstrap test appears to retain this property even when the substitution rate varies among sites. In this case, the distribution of the branch length estimate deviates from a normal distribution, and the standard interior-branch test gives conservative confidence probability values. A simple correction method was developed for both interior-branch tests to be applied for testing the reliability of tree topologies estimated from sequence data. This correction for the standard interior-branch test appears to be as effective as that obtained in our previous study, though it is much simpler. The bootstrap and standard interior-branch tests for estimated topologies become conservative as the number of sequence groups in a star-like tree increases.

  7. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occurs when working with regression models concerns the sample size: since the statistical methods used in inferential analyses are asymptotic, if the sample is small the analysis may be compromised because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not need to assume or know the probability distribution that generated the original sample. In this work we used a small set of soybean yield data together with physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points, and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties which were significant in the construction of the soybean yield regression model, construct the confidence intervals of the parameters, and identify the points that had great influence on the estimated parameters. (Author)
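
    A sketch of the case-resampling (non-parametric) bootstrap used for the coefficients' confidence intervals; variable selection and influence diagnostics can be built on the same resamples (inputs assumed to be NumPy arrays):

      import numpy as np

      def coef_percentile_ci(X, y, n_boot=2000, alpha=0.05, rng=None):
          rng = np.random.default_rng(rng)
          n = len(y)
          Xd = np.column_stack([np.ones(n), X])        # add intercept
          betas = []
          for _ in range(n_boot):
              i = rng.integers(0, n, size=n)           # resample cases, not residuals
              b, *_ = np.linalg.lstsq(Xd[i], y[i], rcond=None)
              betas.append(b)
          return np.percentile(betas, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)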

  8. Stock Price Simulation Using Bootstrap and Monte Carlo

    Directory of Open Access Journals (Sweden)

    Pažický Martin

    2017-06-01

    Full Text Available In this paper, an attempt is made to assess and compare bootstrap and Monte Carlo experiments for stock price simulation. Since the future evolution of the stock price is extremely important for investors, we attempt to find the best method to determine the future stock price of BNP Paribas' bank. The aim of the paper is to define the value of European and Asian options on BNP Paribas' stock at the maturity date. Four different simulation methods are employed. The first method is a bootstrap experiment with a homoscedastic error term, the second is a block bootstrap experiment with a heteroscedastic error term, the third is a Monte Carlo simulation with a heteroscedastic error term, and the last is a Monte Carlo simulation with a homoscedastic error term. In the last method it is necessary to model the volatility using an econometric GARCH model. The main purpose of the paper is to compare the mentioned methods and select the most reliable. The difference between the classical European option and the exotic Asian option, based on the experiment results, is a further aim of this paper.
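
    A sketch of the block-bootstrap leg of such an experiment; `hist` (historical log-returns) and the strike `K` are hypothetical inputs, and payoffs are left undiscounted for brevity:

      import numpy as np

      def block_bootstrap_path(log_returns, horizon, block=10, s0=1.0, rng=None):
          # Stitch together randomly chosen contiguous blocks of historical
          # log-returns (preserving short-range dependence such as volatility
          # clustering), then cumulate into a price path.
          rng = np.random.default_rng(rng)
          r = np.asarray(log_returns, dtype=float)
          path = []
          while len(path) < horizon:
              s = rng.integers(0, len(r) - block + 1)
              path.extend(r[s:s + block])
          return s0 * np.exp(np.cumsum(path[:horizon]))

      # paths = np.array([block_bootstrap_path(hist, 250) for _ in range(20000)])
      # european = np.mean(np.maximum(paths[:, -1] - K, 0.0))     # terminal-price payoff
      # asian = np.mean(np.maximum(paths.mean(axis=1) - K, 0.0))  # path-average payoff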

  9. Power Analysis for the Likelihood-Ratio Test in Latent Markov Models: Shortcutting the Bootstrap p-Value-Based Method.

    Science.gov (United States)

    Gudicha, Dereje W; Schmittmann, Verena D; Tekle, Fetene B; Vermunt, Jeroen K

    2016-01-01

    The latent Markov (LM) model is a popular method for identifying distinct unobserved states and transitions between these states over time in longitudinally observed responses. The bootstrap likelihood-ratio (BLR) test yields the most rigorous test for determining the number of latent states, yet little is known about power analysis for this test. Power could be computed as the proportion of the bootstrap p values (PBP) for which the null hypothesis is rejected. This requires performing the full bootstrap procedure for a large number of samples generated from the model under the alternative hypothesis, which is computationally infeasible in most situations. This article presents a computationally feasible shortcut method for power computation for the BLR test. The shortcut method involves the following simple steps: (1) obtaining the parameters of the model under the null hypothesis, (2) constructing the empirical distributions of the likelihood ratio under the null and alternative hypotheses via Monte Carlo simulations, and (3) using these empirical distributions to compute the power. We evaluate the performance of the shortcut method by comparing it to the PBP method and, moreover, show how the shortcut method can be used for sample-size determination.
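
    Once the two empirical distributions from steps (1) and (2) are in hand, step (3) reduces to a few lines; a sketch where `lr_h0` and `lr_h1` are the simulated likelihood-ratio draws:

      import numpy as np

      def shortcut_power(lr_h0, lr_h1, alpha=0.05):
          crit = np.quantile(lr_h0, 1.0 - alpha)     # critical value under H0
          return np.mean(np.asarray(lr_h1) > crit)   # share of H1 draws that reject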

  10. Fast bootstrapping and permutation testing for assessing reproducibility and interpretability of multivariate fMRI decoding models.

    Directory of Open Access Journals (Sweden)

    Bryan R Conroy

    Full Text Available Multivariate decoding models are increasingly being applied to functional magnetic imaging (fMRI data to interpret the distributed neural activity in the human brain. These models are typically formulated to optimize an objective function that maximizes decoding accuracy. For decoding models trained on full-brain data, this can result in multiple models that yield the same classification accuracy, though some may be more reproducible than others--i.e. small changes to the training set may result in very different voxels being selected. This issue of reproducibility can be partially controlled by regularizing the decoding model. Regularization, along with the cross-validation used to estimate decoding accuracy, typically requires retraining many (often on the order of thousands of related decoding models. In this paper we describe an approach that uses a combination of bootstrapping and permutation testing to construct both a measure of cross-validated prediction accuracy and model reproducibility of the learned brain maps. This requires re-training our classification method on many re-sampled versions of the fMRI data. Given the size of fMRI datasets, this is normally a time-consuming process. Our approach leverages an algorithm called fast simultaneous training of generalized linear models (FaSTGLZ to create a family of classifiers in the space of accuracy vs. reproducibility. The convex hull of this family of classifiers can be used to identify a subset of Pareto optimal classifiers, with a single-optimal classifier selectable based on the relative cost of accuracy vs. reproducibility. We demonstrate our approach using full-brain analysis of elastic-net classifiers trained to discriminate stimulus type in an auditory and visual oddball event-related fMRI design. Our approach and results argue for a computational approach to fMRI decoding models in which the value of the interpretation of the decoding model ultimately depends upon optimizing a

  11. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

    Science.gov (United States)

    Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
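
    A minimal sketch of the resampling core, assuming trials are stored as a (trials x timepoints) array and adopting one plausible SNR definition (RMS of the average waveform in a post-stimulus window over RMS in a pre-stimulus baseline); the published method defines its own SNR measure and windows:

        import numpy as np

        rng = np.random.default_rng(0)

        def snr(trials, signal_win, baseline_win):
            erp = trials.mean(axis=0)                       # average ERP waveform
            s = np.sqrt(np.mean(erp[signal_win] ** 2))      # signal-window RMS
            n = np.sqrt(np.mean(erp[baseline_win] ** 2))    # baseline RMS
            return s / n

        def snr_lb(trials, signal_win, baseline_win, n_boot=2000, alpha=0.05):
            n_trials = trials.shape[0]
            boot = np.empty(n_boot)
            for b in range(n_boot):
                idx = rng.integers(0, n_trials, n_trials)   # resample trials
                boot[b] = snr(trials[idx], signal_win, baseline_win)
            return np.quantile(boot, alpha / 2)             # lower CI bound

    A subject would then be excluded when the returned lower bound fails to exceed the chosen signal-to-noise criterion.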

  12. Two novel applications of bootstrap currents: Snakes and jitter stabilization

    Science.gov (United States)

    Thyagaraja, A.; Haas, F. A.

    1993-09-01

    Both neoclassical theory and certain turbulence theories of particle transport in tokamaks predict the existence of bootstrap (i.e., pressure-driven) currents. Two new applications of this form of noninductive current are considered in this work. In the first, an earlier model of the nonlinearly saturated m=1 tearing mode is extended to include the stabilizing effect of a bootstrap current inside the island. This is used to explain several observed features of the so-called "snake" reported in the Joint European Torus (JET) [R. D. Gill, A. W. Edwards, D. Pasini, and A. Weller, Nucl. Fusion 32, 723 (1992)]. The second application involves an alternating current (ac) form of bootstrap current, produced by pressure-gradient fluctuations. It is suggested that a time-dependent (in the plasma frame), radio-frequency (rf) power source can be used to produce localized pressure fluctuations of suitable frequency and amplitude to implement the dynamic stabilization method for suppressing gross modes in tokamaks suggested in a recent paper [A. Thyagaraja, R. D. Hazeltine, and A. Y. Aydemir, Phys. Fluids B 4, 2733 (1992)]. This method works by "detuning" the resonant layer by rapid current/shear fluctuations. Estimates made for the power source requirements both for small machines such as COMPASS and for larger machines like JET suggest that the method could be practically feasible. This "jitter" (i.e., dynamic) stabilization method could provide a useful form of active instability control to avoid both gross/disruptive and fine-scale/transportive instabilities, which may set severe operating/safety constraints in the reactor regime. The results are also capable, in principle, of throwing considerable light on the local properties of current generation and diffusion in tokamaks, which may be enhanced by turbulence, as has been suggested recently by several researchers.

  13. A Bootstrap Approach to an Affordable Exploration Program

    Science.gov (United States)

    Oeftering, Richard C.

    2011-01-01

    This paper examines the potential to build an affordable, sustainable exploration program by adopting an approach that requires investing in technologies that can be used to build a space infrastructure from very modest initial capabilities. Human exploration has had a history of flight programs with high development and operational costs. Since Apollo, human exploration budgets have been very constrained and are expected to remain constrained in the future. Due to their high operations costs, it becomes necessary to consider retiring established space facilities in order to move on to the next exploration challenge. This practice may save cost in the near term, but it does so by sacrificing part of the program's future architecture. Human exploration also has a history of sacrificing fully functional flight hardware to achieve mission objectives. An affordable exploration program cannot be built when it involves billions of dollars of discarded space flight hardware; instead, the program must emphasize preserving its high-value space assets and building a suitable permanent infrastructure. Further, this infrastructure must reduce operational and logistics cost. The paper examines the importance of achieving a high level of logistics independence by minimizing resource consumption, minimizing the dependency on external logistics, and maximizing the utility of resources available. The approach involves the development and deployment of a core suite of technologies that have minimal initial needs yet are able to expand upon initial capability in an incremental bootstrap fashion. The bootstrap approach incrementally creates an infrastructure that grows and becomes self-sustaining and eventually begins producing the energy, products and consumable propellants that support human exploration. The bootstrap technologies involve new methods of delivering and manipulating energy and materials. These technologies will exploit the space environment, minimize dependencies, and …

  14. Bootstrapping integrated covariance matrix estimators in noisy jump-diffusion models with non-synchronous trading

    DEFF Research Database (Denmark)

    Hounyo, Ulrich

    We propose a bootstrap method for estimating the distribution (and functionals of it, such as the variance) of various integrated covariance matrix estimators. In particular, we first adapt the wild blocks of blocks bootstrap method, suggested for the pre-averaged realized volatility estimator, to the pre-averaged realized covariance estimator. As an application of our results, we also consider the bootstrap for regression coefficients. We show that the wild blocks of blocks bootstrap, appropriately centered, is able to mimic both the dependence and heterogeneity of the scores, thus justifying the construction of bootstrap percentile intervals. For studentized statistics, our results justify using the bootstrap to estimate the covariance matrix of a broad class of covolatility estimators. The bootstrap variance estimator is positive semi-definite by construction, an appealing feature that is not always shared by existing variance estimators of the integrated covariance matrix.

  15. The S-matrix bootstrap II: two dimensional amplitudes

    Science.gov (United States)

    Paulos, Miguel F.; Penedones, Joao; Toledo, Jonathan; van Rees, Balt C.; Vieira, Pedro

    2017-11-01

    We consider constraints on the S-matrix of any gapped, Lorentz invariant quantum field theory in 1 + 1 dimensions due to crossing symmetry and unitarity. In this way we establish rigorous bounds on the cubic couplings of a given theory with a fixed mass spectrum. In special cases we identify interesting integrable theories saturating these bounds. Our analytic bounds match precisely with numerical bounds obtained in a companion paper where we consider massive QFT in an AdS box and study boundary correlators using the technology of the conformal bootstrap.

  16. Check of the bootstrap conditions for the gluon Reggeization

    International Nuclear Information System (INIS)

    Papa, A.

    2000-01-01

    The property of gluon Reggeization plays an essential role in the derivation of the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation for the cross sections at high energy √s in perturbative QCD. This property has been proved to all orders of perturbation theory in the leading logarithmic approximation and it is assumed to be valid also in the next-to-leading logarithmic approximation, where it has been checked only to the first three orders of perturbation theory. From s-channel unitarity, however, very stringent 'bootstrap' conditions can be derived which, if fulfilled, leave no doubts that gluon Reggeization holds.

  17. Comparing groups: randomization and bootstrap methods using R

    CERN Document Server

    Zieffler, Andrew S; Long, Jeffrey D

    2011-01-01

    A hands-on guide to using R to carry out key statistical practices in educational and behavioral sciences research. Computing has become an essential part of the day-to-day practice of statistical work, broadening the types of questions that can now be addressed by research scientists applying newly derived data analytic techniques. Comparing Groups: Randomization and Bootstrap Methods Using R emphasizes the direct link between scientific research questions and data analysis. Rather than relying on mathematical calculations, this book focuses on conceptual explanations and …

  18. Dimensional Reduction via Noncommutative Spacetime: Bootstrap and Holography

    Science.gov (United States)

    Li, Miao

    2002-05-01

    Unlike noncommutative space, when space and time are noncommutative, it seems necessary to modify the usual scheme of quantum mechanics. We propose in this paper a simple generalization of the time evolution equation in quantum mechanics to incorporate the feature of a noncommutative spacetime. This equation is much more constraining than the usual Schrödinger equation in that the spatial dimension noncommuting with time is effectively reduced to a point in low energy. We thus call the new evolution equation the spacetime bootstrap equation, the dimensional reduction called for by this evolution seems close to what is required by the holographic principle. We will discuss several examples to demonstrate this point.

  19. DMSP SSM/I Daily and Monthly Polar Gridded Bootstrap Sea Ice Concentrations

    Data.gov (United States)

    National Aeronautics and Space Administration — DMSP SSM/I Daily and Monthly Polar Gridded Bootstrap Sea Ice Concentrations in polar stereographic projection currently include Defense Meteorological Satellite...

  20. Optical Flow of Small Objects Using Wavelets, Bootstrap Methods, and Synthetic Discriminant Filters

    National Research Council Canada - National Science Library

    Hewer, Gary

    1997-01-01

    ...) targets in highly cluttered and noisy environments. In this paper, we present a novel wavelet detection algorithm which incorporates adaptive CFAR detection statistics using the bootstrap method...

  1. Energy confinement of tokamak plasma with consideration of bootstrap current effect

    International Nuclear Information System (INIS)

    Yuan Ying; Gao Qingdi

    1992-01-01

    Based on the η_i-mode induced anomalous transport model of Lee et al., the energy confinement of tokamak plasmas with auxiliary heating is investigated with consideration of the bootstrap current effect. The results indicate that energy confinement time increases with plasma current and tokamak major radius, and decreases with heating power, toroidal field and minor radius. This is in reasonable agreement with the Kaye-Goldston empirical scaling law. Bootstrap current always leads to an improvement of energy confinement and the contraction of the inversion radius. When γ, the ratio between bootstrap current and total plasma current, is small, the part of the energy confinement time contributed by the bootstrap current will be about γ/2

  2. Interaction of bootstrap-current-driven magnetic islands

    International Nuclear Information System (INIS)

    Hegna, C.C.; Callen, J.D.

    1991-10-01

    The formation and interaction of fluctuating neoclassical pressure gradient driven magnetic islands is examined. The interaction of magnetic islands produces a stochastic region around the separatrices of the islands. This interaction causes the island pressure profile to be broadened, reducing the island bootstrap current and drive for the magnetic island. A model is presented that describes the magnetic topology as a bath of interacting magnetic islands with low to medium poloidal mode number (m ≅ 3-30). The islands grow by the bootstrap current effect and damp due to the flattening of the pressure profile near the island separatrix caused by the interaction of the magnetic islands. The effect of this sporadic growth and decay of the islands ("magnetic bubbling") is not normally addressed in theories of plasma transport due to magnetic fluctuations. The nature of the transport differs from statistical approaches to magnetic turbulence since the radial step size of the plasma transport is now given by the characteristic island width. This model suggests that tokamak experiments have relatively short-lived, coherent, long wavelength magnetic oscillations present in the steep pressure-gradient regions of the plasma. 42 refs

  3. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

    Science.gov (United States)

    Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

    2017-01-01

    Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allowing temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95 % confidence interval of (4331, 12 267) in 2010 and a load of 5543 Mg year-1 an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
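
    The idea can be sketched with a simple percentile bootstrap that propagates concentration-model uncertainty into the load (hypothetical inputs and names; the published method additionally handles temporal correlation, discharge uncertainty and data transformations via the linear mixed model):

        import numpy as np

        rng = np.random.default_rng(1)

        def load(conc, q, dt):
            # time-integrated flux: concentration x discharge x time step
            # (unit conversions omitted in this sketch)
            return np.sum(conc * q * dt)

        def load_ci(conc_pred, resid, q, dt, n_boot=5000, alpha=0.05):
            out = np.empty(n_boot)
            for b in range(n_boot):
                e = rng.choice(resid, size=conc_pred.size, replace=True)
                out[b] = load(conc_pred + e, q, dt)     # perturbed load
            return np.quantile(out, [alpha / 2, 0.5, 1 - alpha / 2])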

  4. Bootstrap Enhanced Penalized Regression for Variable Selection with Neuroimaging Data.

    Science.gov (United States)

    Abram, Samantha V; Helwig, Nathaniel E; Moodie, Craig A; DeYoung, Colin G; MacDonald, Angus W; Waller, Niels G

    2016-01-01

    Recent advances in fMRI research highlight the use of multivariate methods for examining whole-brain connectivity. Complementary data-driven methods are needed for determining the subset of predictors related to individual differences. Although commonly used for this purpose, ordinary least squares (OLS) regression may not be ideal due to multi-collinearity and over-fitting issues. Penalized regression is a promising and underutilized alternative to OLS regression. In this paper, we propose a nonparametric bootstrap quantile (QNT) approach for variable selection with neuroimaging data. We use real and simulated data, as well as annotated R code, to demonstrate the benefits of our proposed method. Our results illustrate the practical potential of our proposed bootstrap QNT approach. Our real data example demonstrates how our method can be used to relate individual differences in neural network connectivity with an externalizing personality measure. Also, our simulation results reveal that the QNT method is effective under a variety of data conditions. Penalized regression yields more stable estimates and sparser models than OLS regression in situations with large numbers of highly correlated neural predictors. Our results demonstrate that penalized regression is a promising method for examining associations between neural predictors and clinically relevant traits or behaviors. These findings have important implications for the growing field of functional connectivity research, where multivariate methods produce numerous, highly correlated brain networks.
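
    A minimal sketch of a bootstrap quantile selection rule with an elastic net, assuming rows of X are subjects and columns are neural predictors (illustrative only; the authors provide their own annotated R code):

        import numpy as np
        from sklearn.linear_model import ElasticNet

        rng = np.random.default_rng(2)

        def bootstrap_qnt(X, y, n_boot=1000, alpha=0.05, **enet_args):
            n, p = X.shape
            coefs = np.empty((n_boot, p))
            for b in range(n_boot):
                idx = rng.integers(0, n, n)             # resample subjects
                coefs[b] = ElasticNet(**enet_args).fit(X[idx], y[idx]).coef_
            lo, hi = np.quantile(coefs, [alpha / 2, 1 - alpha / 2], axis=0)
            return (lo > 0) | (hi < 0)                  # interval excludes zero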

  5. Bootstrap equations for N=4 SYM with defects

    Energy Technology Data Exchange (ETDEWEB)

    Liendo, Pedro [IMIP, Humboldt-Universität zu Berlin, IRIS Adlershof, Zum Großen Windkanal 6, 12489 Berlin (Germany); Meneghelli, Carlo [Simons Center for Geometry and Physics, Stony Brook University, Stony Brook, NY 11794-3636 (United States)

    2017-01-27

    This paper focuses on the analysis of 4d N=4 superconformal theories in the presence of a defect from the point of view of the conformal bootstrap. We will concentrate first on the case of codimension one, where the defect is a boundary that preserves half of the supersymmetry. After studying the constraints imposed by supersymmetry, we will obtain the Ward identities associated to two-point functions of (1/2)-BPS operators and write their solution as a superconformal block expansion. Due to a surprising connection between spacetime and R-symmetry conformal blocks, our results not only apply to 4d N=4 superconformal theories with a boundary, but also to three more systems that have the same symmetry algebra: 4d N=4 superconformal theories with a line defect, 3d N=4 superconformal theories with no defect, and OSP(4*|4) superconformal quantum mechanics. The superconformal algebra implies that all these systems possess a closed subsector of operators in which the bootstrap equations become polynomial constraints on the CFT data. We derive these truncated equations and initiate the study of their solutions.

  6. A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series

    Directory of Open Access Journals (Sweden)

    Fernando Luiz Cyrino Oliveira

    2014-01-01

    Full Text Available The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics. As such, it can be considered unique in the world. It is a high-dimension hydrothermal system with a huge participation of hydro plants. Such strong dependency on hydrological regimes implies uncertainties related to energetic planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulations of monthly inflow series using the family of Periodic Autoregressive models, PAR(p), one for each period (month) of the year. This paper shows the problems in fitting these models in the current system, particularly the identification of the autoregressive order "p" and the corresponding parameter estimation. It then proposes a new approach to set both the model order and the parameter estimation of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The results obtained using the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, lead to a better use of water resources in the energy operation planning.

  7. Bootstrap equations for N=4 SYM with defects

    International Nuclear Information System (INIS)

    Liendo, Pedro; Meneghelli, Carlo

    2017-01-01

    This paper focuses on the analysis of 4d N=4 superconformal theories in the presence of a defect from the point of view of the conformal bootstrap. We will concentrate first on the case of codimension one, where the defect is a boundary that preserves half of the supersymmetry. After studying the constraints imposed by supersymmetry, we will obtain the Ward identities associated to two-point functions of (1/2)-BPS operators and write their solution as a superconformal block expansion. Due to a surprising connection between spacetime and R-symmetry conformal blocks, our results not only apply to 4d N=4 superconformal theories with a boundary, but also to three more systems that have the same symmetry algebra: 4d N=4 superconformal theories with a line defect, 3d N=4 superconformal theories with no defect, and OSP(4*|4) superconformal quantum mechanics. The superconformal algebra implies that all these systems possess a closed subsector of operators in which the bootstrap equations become polynomial constraints on the CFT data. We derive these truncated equations and initiate the study of their solutions.

  8. A likelihood and resampling based approach to dichotomizing a continuous biomarker in medical research.

    Science.gov (United States)

    Su, Min; Fang, Liang; Su, Zheng

    2013-05-01

    Dichotomizing a continuous biomarker is a common practice in medical research. Various methods exist in the literature for dichotomizing continuous biomarkers. The most widely adopted minimum p-value approach uses a sequence of test statistics for all possible dichotomizations of a continuous biomarker, and it chooses the cutpoint that is associated with the maximum test statistic, or equivalently, the minimum p-value of the test. We herein propose a likelihood and resampling-based approach to dichotomizing a continuous biomarker. In this approach, the cutpoint is considered as an unknown variable in addition to the unknown outcome variables, and the likelihood function is maximized with respect to the cutpoint variable as well as the outcome variables to obtain the optimal cutpoint for the continuous biomarker. The significance level of the test for whether a cutpoint exists is assessed via a permutation test using the maximum likelihood values calculated based on the original as well as the permutated data sets. Numerical comparisons of the proposed approach and the minimum p-value approach showed that the proposed approach was not only more powerful in detecting the cutpoint but also provided markedly more accurate estimates of the cutpoint than the minimum p-value approach in all the simulation scenarios considered.
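
    For contrast, the minimum p-value comparator with a permutation reference distribution can be sketched as follows (hypothetical helper names; the proposed likelihood-based method instead maximizes the likelihood jointly over the cutpoint and outcome parameters):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        def min_p_cut(x, y):
            # scan candidate cutpoints; keep the smallest two-sample p-value
            best = (None, 1.0)
            for c in np.quantile(x, np.linspace(0.1, 0.9, 41)):
                p = stats.ttest_ind(y[x <= c], y[x > c]).pvalue
                if p < best[1]:
                    best = (c, p)
            return best

        def permutation_test(x, y, n_perm=1000):
            obs = min_p_cut(x, y)[1]
            perm = [min_p_cut(x, rng.permutation(y))[1] for _ in range(n_perm)]
            return np.mean(np.array(perm) <= obs)   # calibrated p-value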

  9. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming

    2013-03-01

    The Gaussian geostatistical model has been widely used in modeling of spatial data. However, it is challenging to computationally implement this method because it requires the inversion of a large covariance matrix, particularly when there is a large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate of the parameters is updated accordingly under the framework of stochastic approximation. Since the proposed method makes use of only a small proportion of the data at each iteration, it avoids inverting large covariance matrices and thus is scalable to large datasets. The proposed method also leads to a general parameter estimation approach, maximum mean log-likelihood estimation, which includes the popular maximum (log)-likelihood estimation (MLE) approach as a special case and is expected to play an important role in analyzing large datasets. Under mild conditions, it is shown that the estimator resulting from the proposed method converges in probability to a set of parameter values of equivalent Gaussian probability measures, and that the estimator is asymptotically normally distributed. To the best of the authors' knowledge, the present study is the first one on asymptotic normality under infill asymptotics for general covariance functions. The proposed method is illustrated with large datasets, both simulated and real. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  10. A Poisson resampling method for simulating reduced counts in nuclear medicine images

    International Nuclear Information System (INIS)

    White, Duncan; Lawson, Richard S

    2015-01-01

    Nuclear medicine computers now commonly offer resolution recovery and other software techniques which have been developed to improve image quality for images with low counts. These techniques potentially mean that these images can give equivalent clinical information to a full-count image. Reducing the number of counts in nuclear medicine images has the benefits of either allowing reduced activity to be administered or reducing acquisition times. However, because acquisition and processing parameters vary, each user should ideally evaluate the use of images with reduced counts within their own department, and this is best done by simulating reduced-count images from the original data. Reducing the counts in an image by division and rounding off to the nearest integer value, even if additional Poisson noise is added, is inadequate because it gives incorrect counting statistics. This technical note describes how, by applying Poisson resampling to the original raw data, simulated reduced-count images can be obtained while maintaining appropriate counting statistics. The authors have developed manufacturer independent software that can retrospectively generate simulated data with reduced counts from any acquired nuclear medicine image. (note)
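
    A minimal sketch of the idea, assuming raw pixel (or projection-bin) values are independent Poisson counts; binomial thinning is one standard way to realize Poisson resampling:

        import numpy as np

        rng = np.random.default_rng(4)

        def poisson_resample(image, fraction):
            # Binomial thinning: if a pixel count n ~ Poisson(lam), then
            # Binomial(n, fraction) ~ Poisson(fraction * lam), so the
            # simulated low-count image keeps valid counting statistics
            # (unlike dividing and rounding, even with added noise).
            counts = np.asarray(image, dtype=np.int64)
            return rng.binomial(counts, fraction)

        # e.g. simulate a quarter-count acquisition: poisson_resample(img, 0.25)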

  11. MapReduce particle filtering with exact resampling and deterministic runtime

    Science.gov (United States)

    Thiyagalingam, Jeyarajan; Kekempanos, Lykourgos; Maskell, Simon

    2017-12-01

    Particle filtering is a numerical Bayesian technique that has great potential for solving sequential estimation problems involving non-linear and non-Gaussian models. Since the estimation accuracy achieved by particle filters improves as the number of particles increases, it is natural to consider as many particles as possible. MapReduce is a generic programming model that makes it possible to scale a wide variety of algorithms to Big data. However, despite the application of particle filters across many domains, little attention has been devoted to implementing particle filters using MapReduce. In this paper, we describe an implementation of a particle filter using MapReduce. We focus on a component that would otherwise be a bottleneck to parallel execution, the resampling component. We devise a new implementation of this component, which requires no approximations, has O(N) spatial complexity and deterministic O((log N)^2) time complexity. Results demonstrate the utility of this new component and culminate in consideration of a particle filter with 2^24 particles being distributed across 512 processor cores.
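
    For reference, the sequential O(N) systematic resampler that such work reorganizes for parallel execution can be sketched as follows (a generic textbook version, not the paper's MapReduce implementation):

        import numpy as np

        rng = np.random.default_rng(5)

        def systematic_resample(weights):
            # one stratified uniform draw per particle, O(N) sequential time
            n = len(weights)
            u = (rng.random() + np.arange(n)) / n
            cdf = np.cumsum(weights)
            cdf[-1] = 1.0                      # guard against rounding error
            return np.searchsorted(cdf, u)     # ancestor indices to copy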

  12. A Poisson resampling method for simulating reduced counts in nuclear medicine images.

    Science.gov (United States)

    White, Duncan; Lawson, Richard S

    2015-05-07

    Nuclear medicine computers now commonly offer resolution recovery and other software techniques which have been developed to improve image quality for images with low counts. These techniques potentially mean that these images can give equivalent clinical information to a full-count image. Reducing the number of counts in nuclear medicine images has the benefits of either allowing reduced activity to be administered or reducing acquisition times. However, because acquisition and processing parameters vary, each user should ideally evaluate the use of images with reduced counts within their own department, and this is best done by simulating reduced-count images from the original data. Reducing the counts in an image by division and rounding off to the nearest integer value, even if additional Poisson noise is added, is inadequate because it gives incorrect counting statistics. This technical note describes how, by applying Poisson resampling to the original raw data, simulated reduced-count images can be obtained while maintaining appropriate counting statistics. The authors have developed manufacturer independent software that can retrospectively generate simulated data with reduced counts from any acquired nuclear medicine image.

  13. Replication of Major Profile Patterns in Structural Equation Modeling: Effect of Bootstrapping in a Small Sample.

    Science.gov (United States)

    Kim, Se-Kang

    The effect of bootstrapping was studied by examining whether major profile patterns were replicated when sample sizes were reduced. Profile patterns estimated from the original sample (n=645) of the Wechsler Preschool and Primary Scale of Intelligence–Third Edition (WPPSI-III) Standardization Data were considered major profiles. For bootstrapping,…

  14. The Success of Linear Bootstrapping Models: Decision Domain-, Expertise-, and Criterion-Specific Meta-Analysis

    Science.gov (United States)

    Kaufmann, Esther; Wittmann, Werner W.

    2016-01-01

    The success of bootstrapping or replacing a human judge with a model (e.g., an equation) has been demonstrated in Paul Meehl’s (1954) seminal work and bolstered by the results of several meta-analyses. To date, however, analyses considering different types of meta-analyses as well as the potential dependence of bootstrapping success on the decision domain, the level of expertise of the human judge, and the criterion for what constitutes an accurate decision have been missing from the literature. In this study, we addressed these research gaps by conducting a meta-analysis of lens model studies. We compared the results of a traditional (bare-bones) meta-analysis with findings of a meta-analysis of the success of bootstrap models corrected for various methodological artifacts. In line with previous studies, we found that bootstrapping was more successful than human judgment. Furthermore, bootstrapping was more successful in studies with an objective decision criterion than in studies with subjective or test score criteria. We did not find clear evidence that the success of bootstrapping depended on the decision domain (e.g., education or medicine) or on the judge’s level of expertise (novice or expert). Correction of methodological artifacts increased the estimated success of bootstrapping, suggesting that previous analyses without artifact correction (i.e., traditional meta-analyses) may have underestimated the value of bootstrapping models. PMID:27327085

  15. On the consistency of bootstrap testing for a parameter on the boundary of the parameter space

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Nielsen, Heino Bohn; Rahbek, Anders

    2017-01-01

    It is well known that with a parameter on the boundary of the parameter space, such as in the classic cases of testing for a zero location parameter or no autoregressive conditional heteroskedasticity (ARCH) effects, the classic nonparametric bootstrap – based on unrestricted parameter estimates – … the standard and bootstrap Lagrange multiplier tests as well as the asymptotic quasi-likelihood ratio test.

  16. A bootstrap method for estimating uncertainty of water quality trends

    Science.gov (United States)

    Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura

    2015-01-01

    Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R-package.
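
    The flavor of the approach can be conveyed with a toy block bootstrap of a linear trend slope (illustrative only; WBT itself resamples within the full WRTDS model, available in the EGRETci R package):

        import numpy as np

        rng = np.random.default_rng(6)

        def block_bootstrap_slope(t, y, block=12, n_boot=1000):
            # t, y: numpy arrays; resample contiguous (t, y) blocks to
            # respect serial correlation, then refit a straight-line trend
            n = len(y)
            slopes = np.empty(n_boot)
            for b in range(n_boot):
                starts = rng.integers(0, n - block + 1, int(np.ceil(n / block)))
                idx = np.concatenate([np.arange(s, s + block)
                                      for s in starts])[:n]
                slopes[b] = np.polyfit(t[idx], y[idx], 1)[0]
            return np.quantile(slopes, [0.025, 0.975])   # slope uncertainty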

  17. Integral equations of hadronic correlation functions: a functional-bootstrap approach

    CERN Document Server

    Manesis, E K

    1974-01-01

    A reasonable 'microscopic' foundation of the Feynman hadron-liquid analogy is offered, based on a class of models for hadron production. In an external field formalism, the equivalence (complementarity) of the exclusive and inclusive descriptions of hadronic reactions is specifically expressed in a functional-bootstrap form, and integral equations between inclusive and exclusive correlation functions are derived. Using the latest CERN-ISR data on the two-pion inclusive correlation function, and assuming rapidity translational invariance for the exclusive one, the simplest integral equation is solved in the 'central region' and an exclusive correlation length in rapidity predicted. An explanation is also offered for the unexpected similarity observed between π+π− and π−π− inclusive correlations. (31 refs).

  18. Performance of Bootstrap MCEWMA: Study case of Sukuk Musyarakah data

    Science.gov (United States)

    Safiih, L. Muhamad; Hila, Z. Nurul

    2014-07-01

    Sukuk Musyarakah is one of several instruments of Islamic bond investment in Malaysia; this sukuk is in effect a restructuring of a conventional bond into a Syariah-compliant bond. Syariah compliance rests on the prohibition of any influence of usury, i.e. of a benefit or fixed return. Consistent with this prohibition, daily sukuk returns are non-fixed, and statistically the returns form a time series that is dependent and autocorrelated. Data of this kind pose a crucial problem in both statistics and finance. Sukuk returns can be viewed statistically through their volatility: high volatility describes dramatic price changes and marks the bond as risky. This problem has, however, received far less attention for sukuk than for conventional bonds. In this study, the MCEWMA chart of Statistical Process Control (SPC), mainly used to monitor autocorrelated data, is applied; its use on daily returns of securities investment data has gained widespread attention among statisticians. However, this chart has often been affected by inaccurate estimation, whether of the base model or of its limits, which produces large errors and a high probability of signalling an out-of-control process, i.e. false alarms. To overcome this problem, a bootstrap approach is used in this study: it is hybridized with the MCEWMA base model to construct a new chart, the Bootstrap MCEWMA (BMCEWMA) chart. The hybrid BMCEWMA model is applied to the daily returns of the sukuk Musyarakah of Rantau Abang Capital Bhd. The BMCEWMA base model proved more effective than the original MCEWMA model, as shown by smaller estimation error, shorter confidence intervals and fewer false alarms. In other words, the hybrid chart reduces variability. It is concluded that the application of BMCEWMA is better than MCEWMA.

  19. Bootstrap-based confidence estimation in PCA and multivariate statistical process control

    DEFF Research Database (Denmark)

    Babamoradi, Hamid

    The goal was to improve process monitoring by improving the quality of MSPC charts and contribution plots. The bootstrapping algorithm used to build confidence limits was illustrated in a case-study format (Paper I). The main steps in the algorithm were discussed, where a set of sensible choices (plus …) … be used to detect outliers in the data, since the outliers can distort the bootstrap estimates. Bootstrap-based confidence limits were suggested as an alternative to the asymptotic limits for control charts and contribution plots in MSPC (Paper II). The results showed that in case of the Q-statistic … the bootstrap could also offer confidence limits for contribution plots with acceptable fault diagnostic power. The performance of bootstrap-based and asymptotic confidence limits was compared in batch MSPC (Paper III). Real and simulated batch process datasets were used to build the limits for five PCA …

  20. Bootstrapping realized volatility and realized beta under a local Gaussianity assumption

    DEFF Research Database (Denmark)

    Hounyo, Ulrich

    The main contribution of this paper is to propose a new bootstrap method for statistics based on high-frequency returns. The new method exploits the local Gaussianity and the local constancy of volatility of high-frequency returns, two assumptions that can simplify inference in the high-frequency context, as recently explained by Mykland and Zhang (2009). Our main contributions are as follows. First, we show that the local Gaussian bootstrap is first-order consistent when used to estimate the distributions of realized volatility and realized betas. Second, we show that the local Gaussian bootstrap matches accurately the first four cumulants of realized volatility, implying that this method provides third-order refinements. This is in contrast with the wild bootstrap of Gonçalves and Meddahi (2009), which is only second-order correct. Third, we show that the local Gaussian bootstrap is able …

  1. EBW-Bootstrap Current Synergy in the National Spherical Torus Experiment (NSTX)

    International Nuclear Information System (INIS)

    Harvey, R.W.; Taylor, G.

    2005-01-01

    Currents driven by electron Bernstein waves (EBW) and by the electron bootstrap effect are calculated separately and concurrently with a kinetic code, to determine the degree of synergy between them. A target β = 40% NSTX plasma is examined. A simple bootstrap model in the CQL3D Fokker-Planck code is used in these studies: the transiting electron distributions are connected in velocity-space at the trapped-passing boundary to trapped-electron distributions which are displaced radially by a half-banana width outwards/inwards for the co-/counter-passing regions. This model agrees well with standard bootstrap current calculations over the outer 60% of the plasma radius. A relatively small synergy in the net bootstrap current is obtained for EBW power up to 4 MW. Locally, the bootstrap current density increases in proportion to increased plasma pressure, and this effect can significantly affect the radial profile of the driven current

  2. Efficient Implementation of Filtering and Resampling Operations on Field Programmable Gate Arrays (FPGAs) for Software Defined Radio (SDR)

    National Research Council Canada - National Science Library

    Giannoulis, Georgios

    2008-01-01

    ...). A set of filtering and resampling operations is developed in the Simulink environment through Xilinx/Simulink blocksets, where all the included subsystems of the design are fully accessible by the designer in any stage of operation. The key ingredient is the use of a Multiplier and Accumulator (MAC) architecture, which can be either time multiplexed for maximum hardware efficiency, or run on a parallel structure for maximum time efficiency.

  3. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, M D; Cole, S; Frenk, C S; Szapudi, I

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires ≈8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.

  4. cloncase: Estimation of sex frequency and effective population size by clonemate resampling in partially clonal organisms.

    Science.gov (United States)

    Ali, Sajid; Soubeyrand, Samuel; Gladieux, Pierre; Giraud, Tatiana; Leconte, Marc; Gautier, Angélique; Mboup, Mamadou; Chen, Wanquan; de Vallavieille-Pope, Claude; Enjalbert, Jérôme

    2016-07-01

    Inferring reproductive and demographic parameters of populations is crucial to our understanding of species ecology and evolutionary potential but can be challenging, especially in partially clonal organisms. Here, we describe a new and accurate method, cloncase, for estimating both the rate of sexual vs. asexual reproduction and the effective population size, based on the frequency of clonemate resampling across generations. Simulations showed that our method provides reliable estimates of sex frequency and effective population size for a wide range of parameters. The cloncase method was applied to Puccinia striiformis f.sp. tritici, a fungal pathogen causing stripe/yellow rust, an important wheat disease. This fungus is highly clonal in Europe but has been suggested to recombine in Asia. Using two temporally spaced samples of P. striiformis f.sp. tritici in China, the estimated sex frequency was 75% (i.e. three-quarters of individuals being sexually derived during the yearly sexual cycle), indicating strong contribution of sexual reproduction to the life cycle of the pathogen in this area. The inferred effective population size of this partially clonal organism (Nc = 998) was in good agreement with estimates obtained using methods based on temporal variations in allelic frequencies. The cloncase estimator presented herein is the first method allowing accurate inference of both sex frequency and effective population size from population data without knowledge of recombination or mutation rates. cloncase can be applied to population genetic data from any organism with cyclical parthenogenesis and should in particular be very useful for improving our understanding of pest and microbial population biology. © 2016 John Wiley & Sons Ltd.

  5. Resampling method for applying density-dependent habitat selection theory to wildlife surveys.

    Science.gov (United States)

    Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel

    2015-01-01

    Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists in randomly placing blocks over the survey area and dividing those blocks in two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is done 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large …
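
    A minimal sketch of the block-placement step, assuming animal locations are given as coordinate arrays inside a square survey area (hypothetical names; the published method then relates these paired counts to habitat differences via isodar regression):

        import numpy as np

        rng = np.random.default_rng(7)

        def subblock_counts(xs, ys, extent, block, n_iter=100):
            # place a square block at random, split it into two adjacent
            # halves, and count animal locations falling in each half
            pairs = np.empty((n_iter, 2), dtype=int)
            for i in range(n_iter):
                x0 = rng.uniform(0, extent - block)
                y0 = rng.uniform(0, extent - block)
                inside = ((xs >= x0) & (xs < x0 + block) &
                          (ys >= y0) & (ys < y0 + block))
                left = inside & (xs < x0 + block / 2)
                pairs[i] = left.sum(), inside.sum() - left.sum()
            return pairs   # paired abundances for the isodar regression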

  6. Wayside Bearing Fault Diagnosis Based on Envelope Analysis Paved with Time-Domain Interpolation Resampling and Weighted-Correlation-Coefficient-Guided Stochastic Resonance

    Directory of Open Access Journals (Sweden)

    Yongbin Liu

    2017-01-01

    Full Text Available Envelope spectrum analysis is a simple, effective, and classic method for bearing fault identification. However, in the wayside acoustic health monitoring system, owing to the high relative moving speed between the railway vehicle and the wayside mounted microphone, the recorded signal is embedded with Doppler effect, which brings in shift and expansion of the bearing fault characteristic frequency (FCF. What is more, the background noise is relatively heavy, which makes it difficult to identify the FCF. To solve the two problems, this study introduces solutions for the wayside acoustic fault diagnosis of train bearing based on Doppler effect reduction using the improved time-domain interpolation resampling (TIR method and diagnosis-relevant information enhancement using Weighted-Correlation-Coefficient-Guided Stochastic Resonance (WCCSR method. First, the traditional TIR method is improved by incorporating the original method with kinematic parameter estimation based on time-frequency analysis and curve fitting. Based on the estimated parameters, the Doppler effect is removed using the TIR easily. Second, WCCSR is employed to enhance the diagnosis-relevant period signal component in the obtained Doppler-free signal. Finally, paved with the above two procedures, the local fault is identified using envelope spectrum analysis. Simulated and experimental cases have verified the effectiveness of the proposed method.

  7. Determination of confidence intervals in non-normal data: application of the bootstrap to cocaine concentration in femoral blood.

    Science.gov (United States)

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2015-03-01

    Calculating the confidence interval is a common procedure in data analysis and is readily obtained from normally distributed populations with the familiar $\bar{x} \pm t_{\alpha/2,\,n-1}\, s/\sqrt{n}$ formula. However, when working with non-normally distributed data, determining the confidence interval is not as obvious. For this type of data, there are fewer references in the literature, and they are much less accessible. We describe, in simple language, the percentile and bias-corrected and accelerated variations of the bootstrap method to calculate confidence intervals. This method can be applied to a wide variety of parameters (mean, median, slope of a calibration curve, etc.) and is appropriate for normal and non-normal data sets. As a worked example, the confidence interval around the median concentration of cocaine in femoral blood is calculated using bootstrap techniques. The median of the non-toxic concentrations was 46.7 ng/mL with a 95% confidence interval of 23.9-85.8 ng/mL in the non-normally distributed set of 45 postmortem cases. This method should be used to lead to more statistically sound and accurate confidence intervals for non-normally distributed populations, such as reference values of therapeutic and toxic drug concentrations, as well as situations of truncated concentration values near the limit of quantification or cutoff of a method. © The Author 2014. Published by Oxford University Press. All rights reserved.
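
    With modern libraries the worked example can be reproduced in outline; the sketch below uses scipy.stats.bootstrap with made-up concentration values (the paper's femoral-blood data are not reproduced here):

        import numpy as np
        from scipy.stats import bootstrap

        # hypothetical femoral-blood cocaine concentrations (ng/mL)
        conc = np.array([12.0, 18.5, 23.9, 31.2, 40.7, 46.7,
                         55.0, 61.3, 72.8, 85.8, 99.4, 120.6])

        res = bootstrap((conc,), np.median, confidence_level=0.95,
                        n_resamples=9999, method='BCa',
                        random_state=np.random.default_rng(8))
        print(res.confidence_interval)   # BCa interval around the median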

  8. Bootstrapped neural nets versus regression kriging in the digital mapping of pedological attributes: the automatic and time-consuming perspectives

    Science.gov (United States)

    Langella, Giuliano; Basile, Angelo; Bonfante, Antonello; Manna, Piero; Terribile, Fabio

    2013-04-01

    Digital soil mapping procedures are widely used to build two-dimensional continuous maps of several pedological attributes. Our work addressed a regression kriging (RK) technique and a bootstrapped artificial neural network approach in order to evaluate and compare (i) the accuracy of prediction, (ii) the suitability for inclusion in automatic engines (e.g. to constitute web processing services), and (iii) the time cost of calibrating models and making predictions. Regression kriging is perhaps the most widely used geostatistical technique in the digital soil mapping literature. Here we applied EBLUP regression kriging, as it is deemed by pedometricians to be the most statistically sound RK flavor. An unusual multi-parametric and nonlinear machine learning approach was also employed, called BAGAP (Bootstrap aggregating Artificial neural networks with Genetic Algorithms and Principal component regression). BAGAP combines a selected set of weighted neural nets having specified characteristics to yield an ensemble response. The purpose of applying these two particular models is to ascertain whether, and by how much, a more cumbersome machine learning method can make more accurate/precise predictions. Aware of the difficulty of handling objects based on EBLUP-RK as well as BAGAP when they are embedded in environmental applications, we explore their suitability for being wrapped within Web Processing Services. Two further aspects are considered for an exhaustive evaluation and comparison: automaticity, and time of calculation with/without high-performance computing leverage.

  9. JuliBootS: a hands-on guide to the conformal bootstrap

    CERN Document Server

    Paulos, Miguel F

    2014-01-01

    We introduce JuliBootS, a package for numerical conformal bootstrap computations coded in Julia. The centre-piece of JuliBootS is an implementation of Dantzig's simplex method capable of handling arbitrary-precision linear programming problems with continuous search spaces. Currently supported features include conformal dimension bounds, OPE bounds, and bootstrap with or without global symmetries. The code is trivially parallelizable on one or multiple machines. We exemplify usage extensively with several real-world applications. In passing we give a pedagogical introduction to the numerical bootstrap methods.

  10. Validation of Nonparametric Two-Sample Bootstrap in ROC Analysis on Large Datasets.

    Science.gov (United States)

    Wu, Jin Chu; Martin, Alvin F; Kacker, Raghu N

    The nonparametric two-sample bootstrap is applied to computing uncertainties of measures in ROC analysis on large datasets in areas such as biometrics, speaker recognition, etc., when the analytical method cannot be used. Its validation was studied by computing the SE of the area under ROC curve using the well-established analytical Mann-Whitney-statistic method and also using the bootstrap. The analytical result is unique. The bootstrap results are expressed as a probability distribution due to its stochastic nature. The comparisons were carried out using relative errors and hypothesis testing. They match very well. This validation provides a sound foundation for such computations.
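
    A minimal sketch of the comparison, assuming genuine (positive) and impostor (negative) score arrays; the analytic AUC comes from the Mann-Whitney U statistic, and the two-sample bootstrap resamples each class independently:

        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(9)

        def auc(pos, neg):
            u = mannwhitneyu(pos, neg, alternative='greater').statistic
            return u / (len(pos) * len(neg))   # AUC from the U statistic

        def bootstrap_auc_se(pos, neg, n_boot=2000):
            # two-sample bootstrap: resample each class independently
            a = np.empty(n_boot)
            for b in range(n_boot):
                a[b] = auc(rng.choice(pos, len(pos)),
                           rng.choice(neg, len(neg)))
            return a.std(ddof=1)               # compare with the analytic SE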

  11. A Bootstrap Approach to Principal Components Analysis

    OpenAIRE

    AKTÜKÜN, Dr. Aylin

    2011-01-01

    In this study, we present the process of applying bootstrap methods to principal components analysis. Using a hypothetical dataset, we show how some of the confidence intervals used in principal components analysis can be obtained with bootstrap methods. All bootstrap procedures in the article were carried out with a program we wrote in the Mathematica language.

  12. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    Science.gov (United States)

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…

  13. Bootstrapping de-shadowing and self-calibration for scanning electron microscope photometric stereo

    International Nuclear Information System (INIS)

    Miyamoto, Atsushi; Chen, Deshan; Kaneko, Shun’ichi

    2014-01-01

    In this paper, we present a novel approach that addresses the blind reconstruction problem in scanning electron microscope (SEM) photometric stereo. Using only two observed images that suffer from shadowing effects, our method automatically calibrates the parameter and resolves shadowing errors for estimating an accurate three-dimensional (3D) shape and underlying shadowless images. We introduce a novel shadowing compensation model using image intensities for both cases of presence and absence of shadowing. With this model, the proposed de-shadowing algorithm iteratively compensates for image intensities and modifies the corresponding 3D surface. Besides de-shadowing, we introduce a practically useful self-calibration criterion by enforcing a good reconstruction. We show that incorrect parameters will engender significant distortions of 3D reconstructions in shadowed regions during the de-shadowing procedure. This motivated us to design the self-calibration criterion by utilizing shadowing to pursue the proper parameter that produces the best reconstruction with least distortions. As a result, we develop a bootstrapping approach for simultaneous de-shadowing and self-calibration in SEM photometric stereo. Extensive experiments on real image data demonstrate the effectiveness of our method. (paper)

  14. A voltage biased superconducting quantum interference device bootstrap circuit

    International Nuclear Information System (INIS)

    Xie Xiaoming; Wang Huiwu; Wang Yongliang; Dong Hui; Jiang Mianheng; Zhang Yi; Krause, Hans-Joachim; Braginski, Alex I; Offenhaeusser, Andreas; Mueck, Michael

    2010-01-01

    We present a dc superconducting quantum interference device (SQUID) readout circuit operating in the voltage bias mode and called a SQUID bootstrap circuit (SBC). The SBC is an alternative implementation of two existing methods for suppression of room-temperature amplifier noise: additional voltage feedback and current feedback. Two circuit branches are connected in parallel. In the dc SQUID branch, an inductively coupled coil connected in series provides the bias current feedback for enhancing the flux-to-current coefficient. The circuit branch parallel to the dc SQUID branch contains an inductively coupled voltage feedback coil with a shunt resistor in series for suppressing the preamplifier noise current by increasing the dynamic resistance. We show that the SBC effectively reduces the preamplifier noise to below the SQUID intrinsic noise. For a helium-cooled planar SQUID magnetometer with a SQUID inductance of 350 pH, a flux noise of about 3 μΦ_0 Hz^(-1/2) and a magnetic field resolution of less than 3 fT Hz^(-1/2) were obtained. The SBC leads to a convenient direct readout electronics for a dc SQUID with a wider adjustment tolerance than other feedback schemes.

  15. High efficiency fusion reactor based on bootstrap current

    International Nuclear Information System (INIS)

    Kikuchi, Mitsuru

    1990-01-01

    Recent research has largely solved the problem of steady-state operation, which has been the greatest outstanding issue for fusion reactors based on the tokamak confinement principle, and has established the concept of a power reactor that can operate steadily with high efficiency. The key is to make positive use of the bootstrap current that flows naturally in a tokamak plasma. Such a power reactor can be realized with near-future technologies, and it may become a clear target for developing fusion reactors as electric power generation plants. This report centers on the high-efficiency fusion reactor SSTR, for which the Japan Atomic Energy Research Institute is advancing a conceptual design as a prototype power reactor. Fusion energy generates no CO₂, is essentially free of the risk of nuclear runaway, and can produce comparatively little radioactive waste; it is therefore expected to become a powerful substitute energy source. The main parameters and features of the steady-state tokamak fusion power reactor (SSTR) are reported. (K.I.)

  16. N = 4 superconformal bootstrap of the K3 CFT

    Science.gov (United States)

    Lin, Ying-Hsuan; Shao, Shu-Heng; Simmons-Duffin, David; Wang, Yifan; Yin, Xi

    2017-05-01

    We study two-dimensional (4, 4) superconformal field theories of central charge c = 6, corresponding to nonlinear sigma models on K3 surfaces, using the superconformal bootstrap. This is made possible through a surprising relation between the BPS N = 4 superconformal blocks with c = 6 and bosonic Virasoro conformal blocks with c = 28, and an exact result on the moduli dependence of a certain integrated BPS 4-point function. Nontrivial bounds on the non-BPS spectrum in the K3 CFT are obtained as functions of the CFT moduli, that interpolate between the free orbifold points and singular CFT points. We observe directly from the CFT perspective the signature of a continuous spectrum above a gap at the singular moduli, and find numerically an upper bound on this gap that is saturated by the A₁ N = 4 cigar CFT. We also derive an analytic upper bound on the first nonzero eigenvalue of the scalar Laplacian on K3 in the large volume regime, that depends on the K3 moduli data. As two byproducts, we find an exact equivalence between a class of BPS N = 2 superconformal blocks and Virasoro conformal blocks in two dimensions, and an upper bound on the four-point functions of operators of sufficiently low scaling dimension in three and four dimensional CFTs.

  17. N=4 Superconformal Bootstrap of the K3 CFT

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    We study two-dimensional (4,4) superconformal field theories of central charge c=6, corresponding to nonlinear σ models on K3 surfaces, using the superconformal bootstrap. This is made possible through a surprising relation between the BPS N=4 superconformal blocks with c=6 and bosonic Virasoro conformal blocks with c=28, and an exact result on the moduli dependence of a certain integrated BPS 4-point function. Nontrivial bounds on the non-BPS spectrum in the K3 CFT are obtained as functions of the CFT moduli, that interpolate between the free orbifold points and singular CFT points. We observe directly from the CFT perspective the signature of a continuous spectrum above a gap at the singular moduli, and find numerically an upper bound on this gap that is saturated by the A1 N=4 cigar CFT. We also derive an analytic upper bound on the first nonzero eigenvalue of the scalar Laplacian on K3 in the large volume regime, that depends on the K3 moduli data. As two byproducts, we find an exact equivalence...

  18. Bootstrap-based Support of HGT Inferred by Maximum Parsimony

    Directory of Open Access Journals (Sweden)

    Nakhleh Luay

    2010-05-01

    Background: Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. Results: In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. Conclusions: We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.

  19. Bootstrap-based support of HGT inferred by maximum parsimony.

    Science.gov (United States)

    Park, Hyun Jung; Jin, Guohua; Nakhleh, Luay

    2010-05-05

    Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.
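
    A schematic of the bootstrap-support idea described above might look as follows in Python; `infer_events` is a hypothetical stand-in for the parsimony-based network inference, and the column resampling is the standard phylogenetic bootstrap.

        import numpy as np
        from collections import Counter

        def bootstrap_support(alignment, infer_events, n_boot=100, seed=1):
            """alignment: (taxa x sites) array; infer_events: callable that
            returns a set of hashable event descriptors for one alignment."""
            rng = np.random.default_rng(seed)
            n_sites = alignment.shape[1]
            counts = Counter()
            for _ in range(n_boot):
                cols = rng.integers(0, n_sites, size=n_sites)  # resample sites
                counts.update(infer_events(alignment[:, cols]))
            # Support of an event = fraction of bootstrap samples inferring it.
            return {event: c / n_boot for event, c in counts.items()}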

  20. Bias Reversal Technique in SQUID Bootstrap Circuit (SBC) Scheme

    Science.gov (United States)

    Rong, Liangliang; Zhang, Yi; Zhang, Guofeng; Wu, Jun; Dong, Hui; Qiu, Longqing; Xie, Xiaoming; Offenhäusser, Andreas

    Recently, a SQUID direct readout scheme called the voltage-biased SQUID bootstrap circuit (SBC) was introduced to reduce the preamplifier noise contribution. In this paper, we describe a concept for the SBC with a bias reversal technique that can suppress the SQUID intrinsic 1/f noise. When a symmetric rectangular voltage is applied across the SBC, two I-Φ characteristics appear at the amplifier output. In order to return to one I-Φ curve, a demodulation technique is required. Because of the asymmetry of the typical SBC I-Φ curve, the demodulation is realized using a flux compensation with a half-Φ0 flux shift. The output signal is then filtered and returned to one I-Φ curve for ordinary FLL readout. It was found that the reversal frequency fR can be dramatically increased by using a preamplifier consisting of two operational amplifiers. A planar Nb SQUID magnetometer with a loop inductance of 350 pH, fR = 50 kHz, and a second-order low-pass filter with a 10 kHz cutoff frequency were employed in our experiment. The results prove the feasibility of the SBC bias reversal method. Comparative experiments on noise performance will be carried out in further studies.

  1. Using the Bootstrap Concept to Build an Adaptable and Compact Subversion Artifice

    National Research Council Canada - National Science Library

    Lack, Lindsey

    2003-01-01

    .... Early tiger teams recognized the possibility of this design and compared it to the two-card bootstrap loader used in mainframes since both exhibit the characteristics of compactness and adaptability...

  2. Systematic evaluation of sequential geostatistical resampling within MCMC for posterior sampling of near-surface geophysical inverse problems

    Science.gov (United States)

    Ruggeri, Paolo; Irving, James; Holliger, Klaus

    2015-08-01

    We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, the nature of the SGR proposal is found to strongly influence MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate will allow for the most efficient implementation. We conclude that although the SGR methodology is highly attractive as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.

  3. Delta method and bootstrap in linear mixed models to estimate a proportion when no event is observed: application to intralesional resection in bone tumor surgery.

    Science.gov (United States)

    Francq, Bernard G; Cartiaux, Olivier

    2016-09-10

    Resecting bone tumors requires good cutting accuracy to reduce the occurrence of local recurrence. This issue is considerably reduced with a navigated technology. The estimation of extreme proportions is challenging, especially with small or moderate sample sizes. When no success is observed, the commonly used binomial proportion confidence interval is not suitable, while the rule of three provides a simple solution. Unfortunately, these approaches are unable to differentiate between different unobserved events. Different delta methods and bootstrap procedures are compared in univariate and linear mixed models with simulations and real data by assuming normality. The delta method on the z-score and the parametric bootstrap provide similar results, but the delta method requires the estimation of the covariance matrix of the estimates. In mixed models, the observed Fisher information matrix with unbounded variance components should be preferred. The parametric bootstrap, easier to apply, outperforms the delta method for larger sample sizes but may be computationally costly. Copyright © 2016 John Wiley & Sons, Ltd.
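
    For flavor, here is a minimal sketch (assuming a normal error model and synthetic data, not the authors' setup) contrasting the rule of three, which bounds an unobserved proportion by roughly 3/n, with a parametric bootstrap of the exceedance probability:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        errors = rng.normal(0.0, 0.8, size=30)   # simulated cutting errors (mm)
        tol = 3.0                                 # tolerance; no |error| > tol observed

        p_rule3 = 3 / errors.size                 # rule of three: ~95% upper bound

        # Parametric bootstrap under the assumed normal model.
        mu, sd = errors.mean(), errors.std(ddof=1)
        p_boot = np.empty(5000)
        for b in range(p_boot.size):
            sim = rng.normal(mu, sd, size=errors.size)
            m, s = sim.mean(), sim.std(ddof=1)
            p_boot[b] = stats.norm.sf(tol, m, s) + stats.norm.cdf(-tol, m, s)

        print(f"rule of three: {p_rule3:.4f}; "
              f"bootstrap 95th percentile: {np.percentile(p_boot, 95):.2e}")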

  4. Parametric bootstrap for testing model fitting in the proportional hazards framework: an application to the survival analysis of Bruna dels Pirineus beef calves.

    Science.gov (United States)

    Casellas, J; Tarrés, J; Piedrafita, J; Varona, L

    2006-10-01

    Given that correct assumptions on the baseline survival function are determinant for the validity of further inferences, specific tools to test the fit of a model to real data become essential in proportional hazards models. In this sense, we have proposed a parametric bootstrap to test the fit of survival models. Monte Carlo simulations are used to generate new data sets from the estimates obtained through the assumed models, and then bootstrap intervals can be established for the survival function along the time space studied. Significant fitting deficiencies are revealed when the real survival function is not included within the bootstrap interval. We tested this procedure in a survival data set of Bruna dels Pirineus beef calves, assuming 4 parametric models (exponential, Weibull, exponential time-dependent, Weibull time-dependent) and Cox's semiparametric model. Fitting deficiencies were not observed for Cox's model and the exponential time-dependent model, whereas the Weibull time-dependent model suffered from moderate overestimation at different ages. Thus, the exponential time-dependent model appears to be preferable because of its correct fit for survival data of beef calves and its smaller computational and time requirements. Exponential and Weibull models were completely rejected due to the continuous over- and underestimation of the survival probability reported. Results here highlighted the flexibility of parametric models with time-dependent effects, achieving a fit comparable to nonparametric models.
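
    The test logic can be sketched for the simplest case, an exponential model on synthetic data (our illustration, not the authors' code): simulate datasets from the fitted model, form pointwise bootstrap intervals for the survival function, and flag ages where the empirical curve falls outside.

        import numpy as np

        rng = np.random.default_rng(3)
        t_obs = rng.exponential(100.0, size=300)   # observed survival times (days)
        lam_hat = 1.0 / t_obs.mean()               # MLE of the exponential rate

        grid = np.linspace(0, 300, 61)
        S_emp = np.array([(t_obs > t).mean() for t in grid])   # empirical survival

        # Simulate new datasets from the fitted model; collect survival curves.
        S_boot = np.empty((1000, grid.size))
        for b in range(S_boot.shape[0]):
            sim = rng.exponential(1.0 / lam_hat, size=t_obs.size)
            S_boot[b] = [(sim > t).mean() for t in grid]

        lo, hi = np.percentile(S_boot, [2.5, 97.5], axis=0)
        misfit = (S_emp < lo) | (S_emp > hi)       # ages where the model fails
        print("fitting deficiency at ages:", grid[misfit])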

  5. Using vis-NIR to predict soil organic carbon and clay at national scale: validation of geographically closest resampling strategy

    DEFF Research Database (Denmark)

    Peng, Yi; Knadel, Maria; Greve, Mette Balslev

    2016-01-01

    The Danish soil visible-near infrared (vis-NIR) spectral library has proved capable of predicting soil properties in Denmark such as soil organic carbon (SOC) at field scale using the geographically closest resampling strategy. However, this strategy has only been tested on one Danish local field...... with the uncertainties of traditional laboratory wet chemistry analysis. However, for organic soils (48 samples SOC >7%) originating from wetland or forested areas the SOC predictions were generally under-estimated and not satisfactory. For prediction of clay content, only 12 out of 442 predictions were unsatisfactory...... model was strongly affected by soil parent material and landscape....

  6. Event-based stochastic point rainfall resampling for statistical replication and climate projection of historical rainfall series

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Korup Andersen, Aske; Larsen, Anders Badsberg

    2017-01-01

    events. Due to climate change, however, these series are most likely not representative of future rainfall. There is therefore a demand for climate-projected long rainfall series, which can represent a specific region and rainfall pattern as well as fulfil requirements of long rainfall series which...... for the future climate, such as winter and summer precipitation, and representation of extreme events, the resampled historical series are projected to represent rainfall properties in a future climate. Climate-projected rainfall series are simulated by brute force randomization of model parameters, which leads...

  7. Quality assessment of high angular resolution diffusion imaging data using bootstrap on Q-ball reconstruction.

    Science.gov (United States)

    Cohen-Adad, Julien; Descoteaux, Maxime; Wald, Lawrence L

    2011-05-01

    To develop a bootstrap method to assess the quality of High Angular Resolution Diffusion Imaging (HARDI) data using Q-Ball imaging (QBI) reconstruction. HARDI data were re-shuffled using regular bootstrap with jackknife sampling. For each bootstrap dataset, the diffusion orientation distribution function (ODF) was estimated voxel-wise using QBI reconstruction based on spherical harmonics functions. The reproducibility of the ODF was assessed using the Jensen-Shannon divergence (JSD) and the angular confidence interval was derived for the first and the second ODF maxima. The sensitivity of the bootstrap method was evaluated on a human subject by adding synthetic noise to the data, by acquiring a map of image signal-to-noise ratio (SNR) and by varying the echo time and the b-value. The JSD was directly linked to the image SNR. The impact of echo times and b-values was reflected by both the JSD and the angular confidence interval, proving the usefulness of the bootstrap method to evaluate specific features of HARDI data. The bootstrap method can effectively assess the quality of HARDI data and can be used to evaluate new hardware and pulse sequences, perform multifiber probabilistic tractography, and provide reliability metrics to support clinical studies. Copyright © 2011 Wiley-Liss, Inc.
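
    The reproducibility metric itself is easy to reproduce in a toy setting; the sketch below (synthetic ODFs, assumed setup) computes the Jensen-Shannon divergence between each bootstrap ODF and the mean ODF, noting that SciPy's jensenshannon returns the square root of the divergence.

        import numpy as np
        from scipy.spatial.distance import jensenshannon

        rng = np.random.default_rng(4)

        # Pretend ODFs: discrete distributions over 64 sphere directions,
        # one per bootstrap replicate.
        odfs = rng.dirichlet(np.full(64, 5.0), size=200)
        mean_odf = odfs.mean(axis=0)
        mean_odf /= mean_odf.sum()

        # Square the JS distance to obtain the divergence.
        jsd = np.array([jensenshannon(o, mean_odf) ** 2 for o in odfs])
        print(f"mean JSD across replicates: {jsd.mean():.5f}")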

  8. The non-local bootstrap--estimation of uncertainty in diffusion MRI.

    Science.gov (United States)

    Yap, Pew-Thian; An, Hongyu; Chen, Yasheng; Shen, Dinggang

    2013-01-01

    Diffusion MRI is a noninvasive imaging modality that allows for the estimation and visualization of white matter connectivity patterns in the human brain. However, due to the low signal-to-noise ratio (SNR) nature of diffusion data, deriving useful statistics from the data is adversely affected by different sources of measurement noise. This is aggravated by the fact that the sampling distribution of the statistic of interest is often complex and unknown. In situations as such, the bootstrap, due to its distribution-independent nature, is an appealing tool for the estimation of the variability of almost any statistic, without relying on complicated theoretical calculations, but purely on computer simulation. In this work, we present new bootstrap strategies for variability estimation of diffusion statistics in association with noise. In contrast to the residual bootstrap, which relies on a predetermined data model, or the repetition bootstrap, which requires repeated signal measurements, our approach, called the non-local bootstrap (NLB), is non-parametric and obviates the need for time-consuming multiple acquisitions. The key assumption of NLB is that local image structures recur in the image. We exploit this self-similarity via a multivariate non-parametric kernel regression framework for bootstrap estimation of uncertainty. Evaluation of NLB using a set of high-resolution diffusion-weighted images, with lower than usual SNR due to the small voxel size, indicates that NLB is markedly more robust to noise and results in more accurate inferences.

  9. In vivo precision of bootstrap algorithms applied to diffusion tensor imaging data.

    Science.gov (United States)

    Vorburger, Robert S; Reischauer, Carolin; Dikaiou, Katerina; Boesiger, Peter

    2012-10-01

    To determine the precision for in vivo applications of model and non-model-based bootstrap algorithms for estimating the measurement uncertainty of diffusion parameters derived from diffusion tensor imaging data. Four different bootstrap methods were applied to diffusion datasets acquired during 10 repeated imaging sessions. Measurement uncertainty was derived in eight manually selected regions of interest and in the entire brain white matter and gray matter. The precision of the bootstrap methods was analyzed using coefficients of variation and intra-class correlation coefficients. Comprehensive simulations were performed to validate the results. All bootstrap algorithms showed similar precision, which varied slightly depending on the selected region of interest. The averaged coefficient of variation in the selected regions of interest was 13.81%, 12.35%, and 17.93% with respect to the apparent diffusion coefficient, the fractional anisotropy value, and the cone of uncertainty, respectively. The repeated measurements showed a very high similarity, with intra-class correlation coefficients larger than 0.96. The simulations confirmed most of the in vivo findings. All investigated bootstrap methods perform with a similar, high precision in deriving the measurement uncertainty of diffusion parameters. Thus, the time-efficient model-based bootstrap approaches should be the method of choice in clinical practice. Copyright © 2012 Wiley Periodicals, Inc.

  10. A bootstrap based space-time surveillance model with an application to crime occurrences

    Science.gov (United States)

    Kim, Youngho; O'Kelly, Morton

    2008-06-01

    This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap-based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, which use population-at-risk data to generate expected values, produce hotspots bounded by administrative area units and are of limited use for near-real-time applications because of the population data needed. However, this study generates expected values for local hotspots from past occurrences rather than population at risk. Also, bootstrap permutations of previous occurrences are used for significance tests. Consequently, the bootstrap-based model, without the requirement of population-at-risk data, (1) is free from administrative area restriction, (2) enables more frequent surveillance for a continuously updated registry database, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic. This is shown by means of simulations and an application to residential crime occurrences in Columbus, OH, in the year 2000.
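
    A highly condensed sketch of the testing idea, reduced to a single location (toy data; the actual model works over space-time windows): the null distribution for the current count is bootstrapped from past occurrences.

        import numpy as np

        rng = np.random.default_rng(5)
        past_weeks = rng.poisson(4.0, size=104)   # two years of weekly counts
        current = 11                              # count in the current week

        # Bootstrap the null: resample past weekly counts with replacement.
        null = rng.choice(past_weeks, size=9999, replace=True)
        p_value = (np.sum(null >= current) + 1) / (null.size + 1)
        print(f"emerging hotspot? p = {p_value:.4f}")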

  11. Visuospatial bootstrapping: implicit binding of verbal working memory to visuospatial representations in children and adults.

    Science.gov (United States)

    Darling, Stephen; Parker, Mary-Jane; Goodall, Karen E; Havelka, Jelena; Allen, Richard J

    2014-03-01

    When participants carry out visually presented digit serial recall, their performance is better if they are given the opportunity to encode extra visuospatial information at encoding, a phenomenon that has been termed visuospatial bootstrapping. This bootstrapping is the result of integration of information from different modality-specific short-term memory systems and visuospatial knowledge in long-term memory, and it can be understood in the context of recent models of working memory that address multimodal binding (e.g., models incorporating an episodic buffer). Here we report a cross-sectional developmental study that demonstrated visuospatial bootstrapping in adults (n=18) and 9-year-old children (n=15) but not in 6-year-old children (n=18). This is the first developmental study addressing visuospatial bootstrapping, and the results demonstrate that the developmental trajectory of bootstrapping is different from that of basic verbal and visuospatial working memory. This pattern suggests that bootstrapping (and hence integrative functions such as those associated with the episodic buffer) emerges independently of the development of the basic working memory slave systems during childhood. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Feature selection of gas chromatography/mass spectrometry chemical profiles of basil plants using a bootstrapped fuzzy rule-building expert system.

    Science.gov (United States)

    Wang, Zhengfang; Harrington, Peter de B

    2013-11-01

    A bootstrapped fuzzy rule-building expert system (FuRES) and a bootstrapped t-statistical weight feature selection method were individually used to select informative features from gas chromatography/mass spectrometry (GC/MS) chemical profiles of basil plants cultivated by organic and conventional farming practices. Feature subsets were selected from two-way GC/MS data objects, total ion chromatograms, and total mass spectra, separately. Four economic classifiers based on the bootstrapped FuRES approach, i.e., fuzzy optimal associative memory (e-FOAM), e-FuRES, partial least-squares-discriminant analysis (e-PLS-DA), and soft independent modeling by class analogy (e-SIMCA), and four economic classifiers based on the bootstrapped t-weight approach, i.e., e-PLS-DA-t, e-FOAM-t, e-FuRES-t, and e-SIMCA-t, were constructed thereafter to be compared with full-size classifiers obtained from the entire GC/MS data objects (i.e., FOAM, FuRES, PLS-DA, and SIMCA). By using three features selected from two-way data objects, the average classification rates with e-FOAM, e-FuRES, e-PLS-DA, and e-SIMCA were 95.3 ± 0.5%, 100%, 100%, and 91.8 ± 0.2%, respectively. The established economic classifiers were used to classify a new validation set collected 2.5 months later with no parametric change to experimental procedure. Classification rates with e-FOAM, e-FuRES, e-PLS-DA, and e-SIMCA were 96.7%, 100%, 100%, and 96.7%, respectively. Characteristic components in basil extracts corresponding to highest-ranked useful features were putatively identified. The feature subset may prove valuable as a rapid approach for organic basil authentication.

  13. Bootstrap finance: the art of start-ups.

    Science.gov (United States)

    Bhide, A

    1992-01-01

    Entrepreneurship is more popular than ever: courses are full, policymakers emphasize new ventures, managers yearn to go off on their own. Would-be founders often misplace their energies, however. Believing in a "big money" model of entrepreneurship, they spend a lot of time trying to attract investors instead of using wits and hustle to get their ideas off the ground. A study of 100 of the 1989 Inc. "500" list of fastest growing U.S. start-ups attests to the value of bootstrapping. In fact, what it takes to start a business often conflicts with what venture capitalists require. Investors prefer solid plans, well-defined markets, and track records. Entrepreneurs are heavy on energy and enthusiasm but may be short on credentials. They thrive in rapidly changing environments where uncertain prospects may scare off established companies. Rolling with the punches is often more important than formal plans. Striving to adhere to investors' criteria can diminish the flexibility (the try-it, fix-it approach) an entrepreneur needs to make a new venture work. Seven principles are basic for successful start-ups: get operational fast; look for quick break-even, cash-generating projects; offer high-value products or services that can sustain direct personal selling; don't try to hire the crack team; keep growth in check; focus on cash; and cultivate banks early. Growth and change are the start-up's natural environment. But change is also the reward for success: just as ventures grow, their founders usually have to take a fresh look at everything again: roles, organization, even the very policies that got the business up and running.

  14. Bootstrap methods for estimating PET image noise: experimental validation and an application to evaluation of image reconstruction algorithms.

    Science.gov (United States)

    Ibaraki, Masanobu; Matsubara, Keisuke; Nakamura, Kazuhiro; Yamaguchi, Hiroshi; Kinoshita, Toshibumi

    2014-02-01

    Accurate and validated methods for estimating regional PET image noise are helpful for optimizing image processing. The bootstrap is a data-based simulation method for statistical inference, which can be used to estimate the PET image noise without repeated measurements. The aim of this study was to experimentally validate bootstrap-based methods as a tool for estimating PET image noise and to demonstrate their usefulness for evaluating image reconstruction algorithms. Two bootstrap-based methods, the list-mode data bootstrap (LMBS) and the sinogram bootstrap (SNBS), were implemented on a clinical PET scanner. A uniform cylindrical phantom filled with ¹⁸F solution was scanned using list-mode acquisition. A reference standard deviation (SD) map was calculated from 60 statistically independent measured list-mode data sets. Using one of the 60 list-mode data sets, 60 bootstrap replicates were generated and used to calculate bootstrap SD maps. Brain ¹⁸F-FDG data from a healthy volunteer were also processed as an example of the bootstrap application. Three reconstruction algorithms, FBP 2D and both 2D and 3D versions of the dynamic row-action maximum likelihood algorithm (DRAMA), were assessed. For all the reconstruction algorithms used, the bootstrap SD maps agreed well with the reference SD map, confirming the validity of the bootstrap methods for assessing image noise. The two bootstrap methods were equivalent with respect to the performance of image noise estimation. The bootstrap analysis of the FDG data showed a better contrast-noise relation for DRAMA 3D than for DRAMA 2D and FBP 2D. The bootstrap methods provide estimates of image noise for various reconstruction algorithms with reasonable accuracy, require only a single measurement rather than repeated measures, and are therefore applicable to human PET studies.
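
    The sinogram bootstrap can be caricatured in a few lines; here `reconstruct` is a hypothetical placeholder for the scanner's reconstruction, and Poisson resampling of the measured bins is used as one common approximation to redrawing the counts.

        import numpy as np

        rng = np.random.default_rng(6)

        def bootstrap_sd_map(sinogram, reconstruct, n_boot=60):
            """sinogram: array of measured counts per bin (Poisson-like)."""
            imgs = []
            for _ in range(n_boot):
                # Resample counts bin-wise from a Poisson with the observed
                # mean, mimicking a redraw of the acquired events.
                resampled = rng.poisson(sinogram)
                imgs.append(reconstruct(resampled))
            return np.std(imgs, axis=0, ddof=1)   # voxelwise bootstrap SD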

  15. Error-rate estimation in discriminant analysis of non-linear longitudinal data: A comparison of resampling methods.

    Science.gov (United States)

    de la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Núñez-Antón, Vicente

    2018-04-01

    Consider longitudinal observations across different subjects such that the underlying distribution is determined by a non-linear mixed-effects model. In this context, we look at the misclassification error rate for allocating future subjects using cross-validation, bootstrap algorithms (parametric bootstrap, leave-one-out, .632, and .632+), and bootstrap cross-validation (which combines the first two approaches), and conduct a numerical study to compare the performance of the different methods. The simulation and comparisons in this study are motivated by real observations from a pregnancy study in which one of the main objectives is to predict normal versus abnormal pregnancy outcomes based on information gathered at early stages. Since in this type of study it is not uncommon to have insufficient data to simultaneously solve the classification problem and estimate the misclassification error rate, we pay special attention to situations where only a small sample size is available. We discuss how the misclassification error rate estimates may be affected by the sample size in terms of variability and bias, and examine conditions under which the misclassification error rate estimates perform reasonably well.
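
    Of the resampling estimators compared above, the .632 estimator is easy to state in code; the generic sketch below (plain classifier interface, not the paper's mixed-model setting) combines the resubstitution error with the out-of-bootstrap error.

        import numpy as np

        def err632(X, y, fit, predict, n_boot=200, rng=None):
            """err_.632 = 0.368 * resubstitution error + 0.632 * OOB error."""
            rng = rng or np.random.default_rng(7)
            n = len(y)
            model = fit(X, y)
            err_train = np.mean(predict(model, X) != y)
            oob_errs = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, size=n)           # bootstrap sample
                oob = np.setdiff1d(np.arange(n), idx)      # left-out subjects
                if oob.size == 0:
                    continue
                m = fit(X[idx], y[idx])
                oob_errs.append(np.mean(predict(m, X[oob]) != y[oob]))
            return 0.368 * err_train + 0.632 * np.mean(oob_errs)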

  16. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    Science.gov (United States)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with
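
    As a much simplified stand-in for the calibration uncertainty estimated here, the following sketch bootstraps a 100-year flood quantile from a synthetic record of annual maxima under an assumed Gumbel model:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        annual_max = rng.gumbel(loc=120.0, scale=40.0, size=50)   # m3/s, synthetic

        def q100(sample):
            loc, scale = stats.gumbel_r.fit(sample)
            return stats.gumbel_r.ppf(1 - 1 / 100, loc, scale)    # 100-year flood

        boot = np.array([q100(rng.choice(annual_max, annual_max.size))
                         for _ in range(1000)])
        print(f"Q100 = {q100(annual_max):.1f}, 95% CI = "
              f"[{np.percentile(boot, 2.5):.1f}, {np.percentile(boot, 97.5):.1f}]")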

  17. On-line crack prognosis in attachment lug using Lamb wave-deterministic resampling particle filter-based method

    Science.gov (United States)

    Yuan, Shenfang; Chen, Jian; Yang, Weibo; Qiu, Lei

    2017-08-01

    Fatigue crack growth prognosis is important for prolonging service time, improving safety, and reducing maintenance cost in many safety-critical systems, such as aircraft, wind turbines, bridges, and nuclear plants. Combining fatigue crack growth models with the particle filter (PF) method has proved promising for dealing with the uncertainties during fatigue crack growth and reaching a more accurate prognosis. However, research on prognosis methods integrating on-line crack monitoring with the PF method is still lacking, as are experimental verifications. Besides, the PF methods adopted so far are almost all sequential importance resampling-based PFs, which usually encounter sample impoverishment problems and hence perform poorly. To solve these problems, in this paper, the piezoelectric transducer (PZT)-based active Lamb wave method is adopted for on-line crack monitoring. The deterministic resampling PF (DRPF), which can overcome the sample impoverishment problem, is proposed for fatigue crack growth prognosis. The proposed method is verified through fatigue tests of attachment lugs, an important kind of joint component in aerospace systems.
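
    For context, the resampling step at the heart of a particle filter can be written compactly; the sketch below shows standard systematic (low-variance) resampling, which is deterministic given a single random offset; the paper's DRPF differs in its details.

        import numpy as np

        def systematic_resample(weights, rng):
            """Return particle indices drawn by systematic resampling."""
            n = weights.size
            positions = (rng.random() + np.arange(n)) / n   # one shared offset
            cumsum = np.cumsum(weights / weights.sum())
            idx = np.searchsorted(cumsum, positions)
            return np.minimum(idx, n - 1)                   # guard round-off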

  18. A novel approach for epipolar resampling of cross-track linear pushbroom imagery using orbital parameters model

    Science.gov (United States)

    Jannati, Mojtaba; Valadan Zoej, Mohammad Javad; Mokhtarzade, Mehdi

    2018-03-01

    This paper presents a novel approach to epipolar resampling of cross-track linear pushbroom imagery using the orbital parameters model (OPM). The backbone of the proposed method is the modification of the attitude parameters of linear array stereo imagery so as to parallelize the approximate conjugate epipolar lines (ACELs) with the instantaneous baseline (IBL) of the conjugate image points (CIPs). Afterward, a complementary rotation is applied in order to parallelize all the ACELs throughout the stereo imagery. The new estimated attitude parameters are evaluated based on the direction of the IBL and the ACELs. Due to the spatial and temporal variability of the IBL (changes in the column and row numbers of the CIPs, respectively) and the nonparallel nature of the epipolar lines in stereo linear images, polynomials in both the column and row numbers of the CIPs are used to model the new attitude parameters. As the instantaneous positions of the sensors remain fixed, a digital elevation model (DEM) of the area of interest is not required in the resampling process. According to the experimental results obtained from two pairs of SPOT and RapidEye stereo imagery with high elevation relief, the average absolute values of the remaining vertical parallaxes of the CIPs in the normalized images were 0.19 and 0.28 pixels, respectively, which confirms the high accuracy and applicability of the proposed method.

  19. Study of the separate exposure method for bootstrap sensitometry on X-ray cine film

    International Nuclear Information System (INIS)

    Matsuda, Eiji; Sanada, Taizo; Hitomi, Go; Kakuba, Koki; Kangai, Yoshiharu; Ishii, Koushi

    1997-01-01

    We developed a new method for bootstrap sensitometry that obtains the characteristic curve over a wide density range with a smaller number of aluminum steps than the conventional bootstrap method. In this method, the density-density curve was obtained from standard and multiplied exposures of the aluminum step wedge and used for the bootstrap manipulation. The curve was acquired from two separate regions and added together, e.g., the lower and higher photographic density regions. In this study, we evaluated the usefulness of the new method for cinefluorography in comparison with N.D. filter sensitometry. The shapes of the characteristic curve and the gradient curve obtained with the new method were highly similar to those obtained with N.D. filter sensitometry. Also, the average gradient obtained with the new bootstrap sensitometry method was not significantly different from that obtained by the N.D. filter method. The study revealed that the reliability of the characteristic curve was improved by increasing the number of measured values used to calculate the density-density curve. The new method was useful for obtaining a characteristic curve with a sufficient density range, and the results suggested that it could be applied to specific systems to which the conventional bootstrap method is not applicable. (author)

  20. Insight from uncertainty: bootstrap-derived diffusion metrics differentially predict memory function among older adults.

    Science.gov (United States)

    Vorburger, Robert S; Habeck, Christian G; Narkhede, Atul; Guzman, Vanessa A; Manly, Jennifer J; Brickman, Adam M

    2016-01-01

    Diffusion tensor imaging suffers from an intrinsic low signal-to-noise ratio. Bootstrap algorithms have been introduced to provide a non-parametric method to estimate the uncertainty of the measured diffusion parameters. To quantify the variability of the principal diffusion direction, bootstrap-derived metrics such as the cone of uncertainty have been proposed. However, bootstrap-derived metrics are not independent of the underlying diffusion profile. A higher mean diffusivity causes a smaller signal-to-noise ratio and, thus, increases the measurement uncertainty. Moreover, the goodness of the tensor model, which relies strongly on the complexity of the underlying diffusion profile, influences bootstrap-derived metrics as well. The presented simulations clearly depict the cone of uncertainty as a function of the underlying diffusion profile. Since the relationship between the cone of uncertainty and common diffusion parameters, such as the mean diffusivity and the fractional anisotropy, is not linear, the cone of uncertainty has a different sensitivity. In vivo analysis of the fornix reveals the cone of uncertainty to be a predictor of memory function among older adults. No significant correlation occurs with the common diffusion parameters. The present work not only demonstrates the cone of uncertainty as a function of the actual diffusion profile, but also discloses the cone of uncertainty as a sensitive predictor of memory function. Future studies should incorporate bootstrap-derived metrics to provide more comprehensive analysis.
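
    A toy computation of the cone of uncertainty under an assumed definition (the 95th-percentile angle between bootstrap principal eigenvectors and their mean direction, with the eigenvector sign ambiguity handled by the absolute dot product):

        import numpy as np

        rng = np.random.default_rng(15)
        # Bootstrap principal directions scattered around the z-axis (synthetic).
        v = rng.normal([0, 0, 1], 0.08, size=(500, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)

        mean_dir = v.mean(axis=0)
        mean_dir /= np.linalg.norm(mean_dir)
        angles = np.degrees(np.arccos(np.clip(np.abs(v @ mean_dir), -1, 1)))
        print(f"cone of uncertainty ~ {np.percentile(angles, 95):.2f} deg")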

  1. Bootstrap Restricted Likelihood Ratio Test for the Detection of Rare Variants

    Science.gov (United States)

    Zeng, Ping; Wang, Ting

    2015-01-01

    In this paper the detection of rare-variant associations with continuous phenotypes of interest is investigated via the likelihood-ratio-based variance component test under the framework of linear mixed models. The hypothesis testing is challenging and nonstandard, since under the null the variance component lies on the boundary of its parameter space. In this situation the usual asymptotic chi-square distribution of the likelihood ratio statistic does not necessarily hold. To circumvent the derivation of the null distribution we resort to the bootstrap method due to its generic applicability and ease of implementation. Both parametric and nonparametric bootstrap likelihood ratio tests are studied. Numerical studies are implemented to evaluate the performance of the proposed bootstrap likelihood ratio test and compare it to some existing methods for the identification of rare variants. To reduce the computational time of the bootstrap likelihood ratio test we propose an effective mixture approximation for the bootstrap null distribution. The GAW17 data are used to illustrate the proposed test. PMID:26069459
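
    The parametric bootstrap null distribution for such a boundary-constrained LRT can be outlined generically; `fit_null`, `fit_alt`, and `simulate_null` below are hypothetical model-fitting helpers, not functions from the paper.

        import numpy as np

        def bootstrap_lrt_pvalue(y, X, fit_null, fit_alt, simulate_null,
                                 n_boot=500, rng=None):
            """fit_* return objects with a .loglik attribute (assumed API);
            simulate_null(X, rng) draws a response vector under H0."""
            rng = rng or np.random.default_rng(9)
            lrt_obs = 2 * (fit_alt(y, X).loglik - fit_null(y, X).loglik)
            null_lrt = np.empty(n_boot)
            for b in range(n_boot):
                y_sim = simulate_null(X, rng)        # data generated under H0
                null_lrt[b] = 2 * (fit_alt(y_sim, X).loglik
                                   - fit_null(y_sim, X).loglik)
            return (np.sum(null_lrt >= lrt_obs) + 1) / (n_boot + 1)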

  2. Electron Bernstein wave-bootstrap current synergy in the National Spherical Torus Experiment

    International Nuclear Information System (INIS)

    Harvey, R.W.; Taylor, G.

    2005-01-01

    The currents driven by electron Bernstein waves (EBW) and by the electron bootstrap effect are calculated separately and concurrently with a kinetic code to determine the degree of synergy between them. A target β=40% NSTX [M. Ono, S. Kaye, M. Peng et al., Proceedings of the 17th IAEA Fusion Energy Conference, edited by M. Spak (IAEA, Vienna, Austria, 1999), Vol. 3, p. 1135] plasma is examined. A simple bootstrap model in the collisional-quasilinear CQL3D Fokker-Planck code (National Technical Information Service document No. DE93002962) is used in these studies: the transiting electron distributions are connected in velocity space at the trapped-passing boundary to trapped-electron distributions that are displaced radially by a half-banana-width outwards/inwards for the co-passing/counter-passing regions. This model agrees well with standard bootstrap current calculations over the outer 60% of the plasma radius. A relatively small synergistic net bootstrap current is obtained for EBW power up to 4 MW. Locally, the bootstrap current density increases in proportion to the increased plasma pressure, and this effect can significantly affect the radial profile of the driven current.

  3. Application of bootstrap method for evaluation of the average concentration of Radium-226 in forage palm (Opuntia spp)

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Cleomacio Miguel da; Amaral, Romilton dos Santos; Santos Junior, Jose Araujo dos; Vieira, Jose Wilson; Leoterio, Dilmo Marques da Silva [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear. Grupo de Radioecologia (RAE)], E-mail: cleomaciomiguel@yahoo.com.br; Amaral, Ademir [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear. Grupo de Estudos em Radioprotecao e Radioecologia

    2007-07-01

    The distribution of natural radionuclides in samples from typically anomalous environments generally shows great asymmetry as a result of outliers. To diminish statistical fluctuation, researchers in radioecology commonly use the geometric mean or the median, since the arithmetic mean is not stable under the effect of outliers. As the median is not affected by anomalous values, this parameter of central tendency is the one most frequently employed for the evaluation of a set of data containing discrepant values. On the other hand, Efron presented a non-parametric method, the so-called bootstrap, that can be used to decrease the dispersion around the central-tendency value. Generally, in radioecology, statistical procedures are used to reduce the effect of anomalous values on averages. In this context, the present study aimed to evaluate the application of the non-parametric bootstrap method (BM) for determining the average concentration of ²²⁶Ra in forage palms (Opuntia spp.) cultivated in soils with a uranium anomaly on dairy milk farms located in the municipalities of Pedra and Venturosa, Pernambuco, Brazil, as well as to discuss the utilization of this method in radioecology. The results for ²²⁶Ra in samples of forage palm varied from 1,300 to 25,000 mBq·kg⁻¹ (dry matter), with an arithmetic mean of 5,965.86 ± 5,903.05 mBq·kg⁻¹. The result obtained for this mean using the BM was 5,963.82 ± 1,202.96 mBq·kg⁻¹ (dry matter). The use of the BM allowed an automatic filtering of the experimental data, without the elimination of outliers, leading to a reduction of the dispersion around the mean. As a result, the BM permitted reaching an arithmetic mean that is stable against the effects of the outliers. (author)
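
    The basic manipulation, bootstrapping the arithmetic mean of a right-skewed sample, is a few lines of NumPy (illustrative lognormal data, not the authors' measurements):

        import numpy as np

        rng = np.random.default_rng(10)
        ra226 = rng.lognormal(mean=8.3, sigma=0.9, size=40)   # mBq/kg, skewed

        boot_means = np.array([rng.choice(ra226, ra226.size).mean()
                               for _ in range(5000)])
        print(f"mean = {ra226.mean():.0f} mBq/kg, "
              f"bootstrap SE = {boot_means.std(ddof=1):.0f}")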

  4. Introduction of Bootstrap Current Reduction in the Stellarator Optimization Using the Algorithm DAB

    International Nuclear Information System (INIS)

    Castejón, F.; Gómez-Iglesias, A.; Velasco, J. L.

    2015-01-01

    This work is devoted to introducing a new optimization criterion in the DAB (Distributed Asynchronous Bees) code. With this new criterion, DAB now includes the equilibrium and Mercier stability criteria, the minimization of the B×∇B criterion, which ensures the reduction of neoclassical transport and the improvement of the confinement of fast particles, and the reduction of the bootstrap current. We have started from a neoclassically optimized configuration of the helias type and imposed the reduction of the bootstrap current. The obtained configuration presents only a modest reduction of the total bootstrap current, but the local current density is reduced along the minor radius. Further investigations are being carried out to understand the reason for this modest improvement.

  5. Bootstrap-based confidence estimation in PCA and multivariate statistical process control

    DEFF Research Database (Denmark)

    Babamoradi, Hamid

    Traditional/Asymptotic confidence estimation has limited applicability since it needs statistical theories to estimate the confidences, which are not available for all indicators/parameters. Furthermore, in case the theories are available for a specific indicator/parameter, the theories are based...... the recommended decisions) to build rational confidence limits were given. Two NIR datasets were used to study the effect of outliers and bimodal distributions on the bootstrap-based limits. The results showed that bootstrapping can give reasonable estimate of distributions for scores and loadings. It can also...... on assumptions that do not always hold in practice. The aim of this thesis was to illustrate the concept of bootstrap-based confidence estimation in PCA and MSPC. It particularly shows how to build bootstrapbased confidence limits in these areas to be used as alternative to the traditional/asymptotic limits...
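
    A sketch of the assumed workflow for bootstrap-based limits on PCA loadings: resample observations, refit the PCA, align the arbitrary sign of each component against a reference fit, and take percentiles.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(11)
        X = rng.normal(size=(80, 10))                       # samples x variables
        ref = PCA(n_components=2).fit(X).components_        # reference loadings

        boot = np.empty((500,) + ref.shape)
        for b in range(boot.shape[0]):
            Xb = X[rng.integers(0, X.shape[0], size=X.shape[0])]
            comp = PCA(n_components=2).fit(Xb).components_
            for k in range(comp.shape[0]):                  # fix sign flips
                if np.dot(comp[k], ref[k]) < 0:
                    comp[k] = -comp[k]
            boot[b] = comp

        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)   # limits per loading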

  6. The S-matrix bootstrap. Part I: QFT in AdS

    Science.gov (United States)

    Paulos, Miguel F.; Penedones, Joao; Toledo, Jonathan; van Rees, Balt C.; Vieira, Pedro

    2017-11-01

    We propose a strategy to study massive Quantum Field Theory (QFT) using conformal bootstrap methods. The idea is to consider QFT in hyperbolic space and study correlation functions of its boundary operators. We show that these are solutions of the crossing equations in one lower dimension. By sending the curvature radius of the background hyperbolic space to infinity we expect to recover flat-space physics. We explain that this regime corresponds to large scaling dimensions of the boundary operators, and discuss how to obtain the flat-space scattering amplitudes from the corresponding limit of the boundary correlators. We implement this strategy to obtain universal bounds on the strength of cubic couplings in 2D flat-space QFTs using 1D conformal bootstrap techniques. Our numerical results match precisely the analytic bounds obtained in our companion paper using S-matrix bootstrap techniques.

  7. Constructing Optimal Prediction Intervals by Using Neural Networks and Bootstrap Method.

    Science.gov (United States)

    Khosravi, Abbas; Nahavandi, Saeid; Srinivasan, Dipti; Khosravi, Rihanna

    2015-08-01

    This brief proposes an efficient technique for the construction of optimized prediction intervals (PIs) by using the bootstrap technique. The method employs an innovative PI-based cost function in the training of neural networks (NNs) used for estimation of the target variance in the bootstrap method. An optimization algorithm is developed for minimization of the cost function and adjustment of NN parameters. The performance of the optimized bootstrap method is examined for seven synthetic and real-world case studies. It is shown that application of the proposed method improves the quality of constructed PIs by more than 28% over the existing technique, leading to narrower PIs with a coverage probability greater than the nominal confidence level.

  8. Aspect Ratio Scaling of Ideal No-wall Stability Limits in High Bootstrap Fraction Tokamak Plasmas

    International Nuclear Information System (INIS)

    Menard, J.E.; Bell, M.G.; Bell, R.E.; Gates, D.A.; Kaye, S.M.; LeBlanc, B.P.; Maingi, R.; Sabbagh, S.A.; Soukhanovskii, V.; Stutman, D.

    2003-01-01

    Recent experiments in the low aspect ratio National Spherical Torus Experiment (NSTX) [M. Ono et al., Nucl. Fusion 40 (2000) 557] have achieved normalized beta values twice the conventional tokamak limit at low internal inductance and with significant bootstrap current. These experimental results have motivated a computational re-examination of the plasma aspect ratio dependence of ideal no-wall magnetohydrodynamic stability limits. These calculations find that the profile-optimized no-wall stability limit in high bootstrap fraction regimes is well described by a nearly aspect ratio invariant normalized beta parameter utilizing the total magnetic field energy density inside the plasma. However, the scaling of normalized beta with internal inductance is found to be strongly aspect ratio dependent at sufficiently low aspect ratio. These calculations and detailed stability analyses of experimental equilibria indicate that the nonrotating plasma no-wall stability limit has been exceeded by as much as 30% in NSTX in a high bootstrap fraction regime

  9. arXiv The S-matrix Bootstrap I: QFT in AdS

    CERN Document Server

    Paulos, Miguel F.; Toledo, Jonathan; van Rees, Balt C.; Vieira, Pedro

    2017-11-21

    We propose a strategy to study massive Quantum Field Theory (QFT) using conformal bootstrap methods. The idea is to consider QFT in hyperbolic space and study correlation functions of its boundary operators. We show that these are solutions of the crossing equations in one lower dimension. By sending the curvature radius of the background hyperbolic space to infinity we expect to recover flat-space physics. We explain that this regime corresponds to large scaling dimensions of the boundary operators, and discuss how to obtain the flat-space scattering amplitudes from the corresponding limit of the boundary correlators. We implement this strategy to obtain universal bounds on the strength of cubic couplings in 2D flat-space QFTs using 1D conformal bootstrap techniques. Our numerical results match precisely the analytic bounds obtained in our companion paper using S-matrix bootstrap techniques.

  10. Closure of the operator product expansion in the non-unitary bootstrap

    Energy Technology Data Exchange (ETDEWEB)

    Esterlis, Ilya [Stanford Institute for Theoretical Physics, Stanford University,Via Pueblo, Stanford, CA 94305 (United States); Fitzpatrick, A. Liam [Department of Physics, Boston University,Commonwealth Ave, Boston, MA, 02215 (United States); Ramirez, David M. [Stanford Institute for Theoretical Physics, Stanford University,Via Pueblo, Stanford, CA 94305 (United States)

    2016-11-07

    We use the numerical conformal bootstrap in two dimensions to search for finite, closed sub-algebras of the operator product expansion (OPE), without assuming unitarity. We find the minimal models as special cases, as well as additional lines of solutions that can be understood in the Coulomb gas formalism. All the solutions we find that contain the vacuum in the operator algebra are cases where the external operators of the bootstrap equation are degenerate operators, and we argue that this follows analytically from the expressions in http://arxiv.org/abs/1202.4698 for the crossing matrices of Virasoro conformal blocks. Our numerical analysis is a special case of the “Gliozzi” bootstrap method, and provides a simpler setting in which to study technical challenges with the method. In the supplementary material, we provide a Mathematica notebook that automates the calculation of the crossing matrices and OPE coefficients for degenerate operators using the formulae of Dotsenko and Fateev.

  11. Random resampling masks: a non-Bayesian one-shot strategy for noise reduction in digital holography.

    Science.gov (United States)

    Bianco, V; Paturzo, M; Memmolo, P; Finizio, A; Ferraro, P; Javidi, B

    2013-03-01

    Holographic imaging may become severely degraded by a mixture of speckle and incoherent additive noise. Bayesian approaches reduce the incoherent noise, but prior information is needed on the noise statistics. With no prior knowledge, one-shot reduction of noise is a highly desirable goal, as the recording process is simplified and made faster. Indeed, neither multiple acquisitions nor a complex setup are needed. So far, this result has been achieved at the cost of a deterministic resolution loss. Here we propose a fast non-Bayesian denoising method that avoids this trade-off by means of a numerical synthesis of a moving diffuser. In this way, only one single hologram is required as multiple uncorrelated reconstructions are provided by random complementary resampling masks. Experiments show a significant incoherent noise reduction, close to the theoretical improvement bound, resulting in image-contrast improvement. At the same time, we preserve the resolution of the unprocessed image.

  12. A bootstrap test for comparing two variances: simulation of size and power in small samples.

    Science.gov (United States)

    Sun, Jiajing; Chernick, Michael R; LaBudde, Robert A

    2011-11-01

    An F statistic was proposed by Good and Chernick (1993) in an unpublished paper, to test the hypothesis of the equality of variances from two independent groups using the bootstrap; see Hall and Padmanabhan (1997) for a published reference where Good and Chernick (1993) is discussed. We look at various forms of bootstrap tests that use the F statistic to see whether any or all of them maintain the nominal size of the test over a variety of population distributions when the sample size is small. Chernick and LaBudde (2010) and Schenker (1985) showed that bootstrap confidence intervals for variances tend to provide considerably less coverage than their theoretical asymptotic coverage for skewed population distributions such as a chi-squared with 10 degrees of freedom or less or a log-normal distribution. The same difficulties may also be expected when looking at the ratio of two variances. Since bootstrap tests are related to constructing confidence intervals for the ratio of variances, we simulated the performance of these tests when the population distributions are gamma(2,3), uniform(0,1), Student's t with 10 degrees of freedom (df), normal(0,1), and log-normal(0,1), similar to those used in Chernick and LaBudde (2010). We find, surprisingly, that the results for the size of the tests are valid (reasonably close to the asymptotic value) for all the various bootstrap tests. Hence we also conducted a power comparison, and we find that bootstrap tests appear to have reasonable power for testing equivalence of variances.
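
    The test can be sketched as follows (our reading of the general recipe, not the authors' exact algorithm): center each sample to impose the null hypothesis of equal variances, pool, resample, and compare the observed variance ratio with its bootstrap null distribution.

        import numpy as np

        rng = np.random.default_rng(13)
        x = rng.gamma(2.0, 3.0, size=25)
        y = rng.gamma(2.0, 3.0, size=25)

        f_obs = x.var(ddof=1) / y.var(ddof=1)
        pooled = np.concatenate([x - x.mean(), y - y.mean()])  # impose the null

        f_null = np.empty(9999)
        for b in range(f_null.size):
            xb = rng.choice(pooled, size=x.size)
            yb = rng.choice(pooled, size=y.size)
            f_null[b] = xb.var(ddof=1) / yb.var(ddof=1)

        f_big = max(f_obs, 1.0 / f_obs)                        # two-sided ratio
        p = (np.sum((f_null >= f_big) | (f_null <= 1.0 / f_big)) + 1) \
            / (f_null.size + 1)
        print(f"F = {f_obs:.2f}, bootstrap p = {p:.4f}")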

  13. Neoclassical tearing dynamo and self-sustainment of a bootstrapped tokamak

    International Nuclear Information System (INIS)

    Bhattacharjee, A.; Yuan, Y.

    1993-01-01

    It has been suggested by Boozer that a completely bootstrapped tokamak which requires no seed current is possible due to the "dynamo effect" caused by tearing modes. Numerical calculations have been carried out by Weening and Boozer confirming the feasibility of a completely bootstrapped tokamak. These calculations use the resistive MHD model, with the pressure profile held arbitrarily fixed. Several questions naturally arise. Is resistive MHD a good model in the low-collisionality regime of present-day tokamaks in which large bootstrap currents have been observed? Is it consistent to rely on pressure gradients to provide the bootstrap current, but then omit pressure gradients in investigating the tearing instabilities that provide the dynamo effect? And how realistic is it to assume that a strong pressure gradient is sustainable in the central region where current relaxation is expected to produce a dynamo effect? In this paper, the authors investigate the dynamo effect in a bootstrapped tokamak within the framework of the neoclassical MHD model, which is more realistic than resistive MHD for the regime in question. Since neoclassical MHD includes trapped-particle effects, it can, in principle, provide an additional mechanism for exciting tearing modes which are known to be stabilized by temperature gradients. The authors investigate the properties of the dynamo field ε, and find that the original definition ε = ⟨v₁ × b₁⟩ used in incompressible resistive MHD is no longer adequate; neoclassical MHD forces a redefinition of ε due to the requirements imposed by the helicity conservation constraint. Thus a completely steady-state bootstrapped tokamak sustained by a neoclassical tearing dynamo is realizable. However, the authors are pessimistic that such a tokamak, even if it were resistively stable, would be stable to ideal kink modes.

  14. Determining the significance of associations between two series of discrete events: bootstrap methods

    Energy Technology Data Exchange (ETDEWEB)

    Niehof, Jonathan T.; Morley, Steven K.

    2012-01-01

    We review and develop techniques to determine associations between series of discrete events. The bootstrap, a nonparametric statistical method, allows the determination of the significance of associations with minimal assumptions about the underlying processes. We find the key requirement for this method: one of the series must be widely spaced in time to guarantee the theoretical applicability of the bootstrap. If this condition is met, the calculated significance passes a reasonableness test. We conclude with some potential future extensions and caveats on the applicability of these methods. The techniques presented have been implemented in a Python-based software toolkit.
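
    The statistic and surrogate scheme below are hypothetical stand-ins (the abstract specifies neither), but they illustrate the bootstrap logic: count events of one series falling near events of the other, then build the null distribution by resampling the inter-event gaps of the widely spaced series with replacement:

    import numpy as np

    def association_pvalue(a, b, window=1.0, n_boot=2000, seed=0):
        # a, b: sorted 1-D arrays of event times (hypothetical input format).
        rng = np.random.default_rng(seed)
        a, b = np.asarray(a, float), np.asarray(b, float)
        def stat(a_times):
            # Events of b lying within +/- window of some event of a_times.
            return np.sum(np.min(np.abs(b[:, None] - a_times[None, :]), axis=1) <= window)
        obs = stat(a)
        gaps = np.diff(a)
        null = np.empty(n_boot)
        for i in range(n_boot):
            g = rng.choice(gaps, size=gaps.size, replace=True)
            null[i] = stat(a[0] + np.concatenate(([0.0], np.cumsum(g))))
        return obs, np.mean(null >= obs)   # one-sided bootstrap p-value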

  15. Measurement Uncertainty Evaluation in Dimensional X-ray Computed Tomography Using the Bootstrap Method

    DEFF Research Database (Denmark)

    Hiller, Jochen; Genta, Gianfranco; Barbato, Giulio

    2014-01-01

    measurement processes, e.g., with tactile systems, also due to factors related to systematic errors, mainly caused by specific CT image characteristics. In this paper we propose a simulation-based framework for measurement uncertainty evaluation in dimensional CT using the bootstrap method. In a case study...... the problem concerning measurement uncertainties was addressed with bootstrap and successfully applied to ball-bar CT measurements. Results obtained enabled extension to more complex shapes such as actual industrial components as we show by tests on a hollow cylinder workpiece....

  16. Improving Web Learning through model Optimization using Bootstrap for a Tour-Guide Robot

    Directory of Open Access Journals (Sweden)

    Rafael León

    2012-09-01

    We review Web mining techniques and describe a bootstrap statistics methodology applied to pattern-model classifier optimization and verification for supervised learning in a tour-guide robot's knowledge repository management. It is virtually impossible to thoroughly test Web page classifiers and many other Internet applications with purely empirical data, owing to the need for human intervention to generate training and test sets. We propose using the computer-based bootstrap paradigm to design a test environment in which they can be checked with better reliability.

  17. A model for bootstrap current calculations with bounce averaged Fokker-Planck codes

    NARCIS (Netherlands)

    Westerhof, E.; Peeters, A.G.

    1996-01-01

    A model is presented that allows the calculation of the neoclassical bootstrap current originating from the radial electron density and pressure gradients in standard (2+1)D bounce averaged Fokker-Planck codes. The model leads to an electron momentum source located almost exclusively at the

  18. Bootstrap regularity for integro-differential operators and its application to nonlocal minimal surfaces

    OpenAIRE

    Barrera, Begoña Barrios; Figalli, Alessio; Valdinoci, Enrico

    2012-01-01

    We prove that $C^{1,\\alpha}$ $s$-minimal surfaces are automatically $C^\\infty$. For this, we develop a new bootstrap regularity theory for solutions of integro-differential equations of very general type, which we believe is of independent interest.

  19. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…

  20. Computing Robust, Bootstrap-Adjusted Fit Indices for Use with Nonnormal Data

    Science.gov (United States)

    Walker, David A.; Smith, Thomas J.

    2017-01-01

    Nonnormality of data presents unique challenges for researchers who wish to carry out structural equation modeling. The subsequent SPSS syntax program computes bootstrap-adjusted fit indices (comparative fit index, Tucker-Lewis index, incremental fit index, and root mean square error of approximation) that adjust for nonnormality, along with the…

  1. Bootstrapping Malmquist indices for Danish seiners in the North Sea and Skagerrak

    DEFF Research Database (Denmark)

    Hoff, Ayoe

    2006-01-01

    DEA scores or related parameters. The bootstrap method for estimating confidence intervals of deterministic parameters can however be applied to estimate confidence intervals for DEA scores. This method is applied in the present paper for assessing TFP changes between 1987 and 1999 for the fleet...

  2. Bootstrapping Multifractals: Surrogate Data from Random Cascades on Wavelet Dyadic Trees

    Czech Academy of Sciences Publication Activity Database

    Paluš, Milan

    2008-01-01

    Vol. 101, No. 13 (2008), 134101-1-134101-4 ISSN 0031-9007 EU Projects: European Commission(XE) 517133 - BRACCIA Grant - others: GA AV ČR(CZ) 1ET110190504 Institutional research plan: CEZ:AV0Z10300504 Keywords: multifractal * bootstrap * hypothesis testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 7.180, year: 2008

  3. Bootsie: estimation of coefficient of variation of AFLP data by bootstrap analysis

    Science.gov (United States)

    Bootsie is an English-native replacement for ASG Coelho’s “DBOOT” utility for estimating coefficient of variation of a population of AFLP marker data using bootstrapping. Bootsie improves on DBOOT by supporting batch processing, time-to-completion estimation, built-in graphs, and a suite of export t...

  4. Establishing Keypoint Matches on Multimodal Images with Bootstrap Strategy and Global Information.

    Science.gov (United States)

    Li, Yong; Jin, Hongbin; Wu, Jiatao; Liu, Jie

    2017-04-19

    This paper proposes an algorithm for building keypoint matches on multimodal images by combining a bootstrap process and global information. The correct ratio of keypoint matches built with descriptors is typically very low on multimodal images of large spectral difference. To identify correct matches, global information is utilized for evaluating keypoint matches and a bootstrap technique is employed to reduce the computational cost. A keypoint match determines a transformation T and a similarity metric between the reference image and the test image transformed by T. The similarity metric encodes global information over the entire images, and hence a higher similarity indicates that the match brings more image content into alignment, implying it tends to be correct. Unfortunately, exhausting triplets/quadruples of matches for affine/projective transformation is computationally intractable when the number of keypoints is large. To reduce the computational cost, a bootstrap technique is employed that starts from single matches for a translation and rotation model, and builds up incrementally to quadruples of matches for a projective model. The global information screens for "good" matches at each stage, and the bootstrap strategy makes the screening process computationally feasible. Experimental results show that the proposed method can establish reliable keypoint matches on challenging multimodal images of strong multimodality.

  5. A common-gate bootstrapped CMOS rectifier for VHF isolated DC-DC converter

    Science.gov (United States)

    Pan, Dongfang; Zhang, Feng; Huang, Lu; Li, Jinliang

    2017-06-01

    A common-gate bootstrapped CMOS rectifier dedicated to VHF (very high frequency) isolated DC-DC converters is proposed. It uses a common-gate bootstrap technique to compensate for the power loss due to the threshold voltage and to solve the reflux problem in the conventional rectifier circuit. As a result, it improves the power conversion efficiency (PCE) and voltage conversion ratio (VCR). The design saves almost 90% of the area compared to a previously reported double-capacitor structure. In addition, we compare the previous rectifier with the proposed common-gate bootstrapped rectifier at the same area; simulation results show that the PCE and VCR of the proposed structure are superior to other structures. The proposed common-gate bootstrapped rectifier was fabricated using the CSMC 0.5 μm BCD process. The measured maximum PCE is 86% and the VCR achieves 77% at an operating frequency of 20 MHz. The average PCE is about 79% and the average VCR achieves 71% in the frequency range of 30-70 MHz. Measured PCE and VCR are improved compared to previous results.

  6. Forward Kinematic Analysis of Tip-Tilt-Piston Parallel Manipulator using Secant-Bootstrap Method

    NARCIS (Netherlands)

    Majidian, A.; Amani, A.; Golipour, M.; Amraei, A.

    2014-01-01

    This paper deals with the application of the Secant-Bootstrap Method (SBM) to solve the closed-form forward kinematics of a new three degree-of-freedom (DOF) parallel manipulator with inextensible limbs and base-mounted actuators. The manipulator has higher resolution and precision than the existing

  7. Maximum non-extensive entropy block bootstrap for non-stationary processes

    Czech Academy of Sciences Publication Activity Database

    Bergamelli, M.; Novotný, Jan; Urga, G.

    2015-01-01

    Vol. 91, No. 1/2 (2015), pp. 115-139 ISSN 0001-771X R&D Projects: GA ČR(CZ) GA14-27047S Institutional support: RVO:67985998 Keywords: maximum entropy * bootstrap * Monte Carlo simulations Subject RIV: AH - Economics

  8. Abrupt change in mean using block bootstrap and avoiding variance estimation

    Czech Academy of Sciences Publication Activity Database

    Peštová, Barbora; Pešta, M.

    2018-01-01

    Vol. 33, No. 1 (2018), pp. 413-441 ISSN 0943-4062 Grant - others: GA ČR(CZ) GJ15-04774Y Institutional support: RVO:67985807 Keywords: Block bootstrap * Change in mean * Change point * Hypothesis testing * Ratio type statistics * Robustness Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.434, year: 2016

  9. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives: Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods: The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in the StatXact, SAS, and R PropCI package), and (4) the bootstrapping technique. Results: The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease, and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0,0.073), StatXact (0,0.064), SAS Score method (0,0.057), R PropCI (0,0.062), and bootstrap (0,0.048). Similar trends were observed for other sample sizes. Conclusions: When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and accompanying free open-source R package were developed to yield realistic estimates easily. This
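
    A simplified sketch of the two-step logic (not the full bootLR algorithm): the lowest population sensitivity whose median sample sensitivity is 100% solves p^n >= 0.5, i.e. p = 0.5^(1/n) (0.9931 for n = 100, matching the 99.31% quoted above), and binomial samples drawn at that sensitivity feed the LR bootstrap:

    import numpy as np

    def neg_lr_ci(n_dis=100, n_nodis=100, spec_hat=0.60, n_boot=5000, seed=0):
        # Bootstrap 95% CI for the negative likelihood ratio (1 - sens) / spec
        # when the observed sample sensitivity is 100%.
        rng = np.random.default_rng(seed)
        sens_pop = 0.5 ** (1.0 / n_dis)   # lowest p whose median sample sensitivity is 100%
        nlr = np.empty(n_boot)
        for i in range(n_boot):
            sens = rng.binomial(n_dis, sens_pop) / n_dis
            spec = max(rng.binomial(n_nodis, spec_hat) / n_nodis, 1e-9)
            nlr[i] = (1.0 - sens) / spec
        return np.quantile(nlr, [0.025, 0.975])

    print(neg_lr_ci())   # compare with the bootstrap interval (0, 0.048) reported above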

  10. Estimation of the limit of detection with a bootstrap-derived standard error by a partly non-parametric approach. Application to HPLC drug assays

    DEFF Research Database (Denmark)

    Linnet, Kristian

    2005-01-01

    Keywords: Bootstrap, HPLC, limit of blank, limit of detection, non-parametric statistics, type I and II errors.

  11. First-order correct bootstrap support adjustments for splits that allow hypothesis testing when using maximum likelihood estimation.

    Science.gov (United States)

    Susko, Edward

    2010-07-01

    The most frequent measure of phylogenetic uncertainty for splits is bootstrap support. Although large bootstrap support intuitively suggests that a split in a tree is well supported, it has not been clear how large bootstrap support needs to be to conclude that there is significant evidence that a hypothesized split is present. Indeed, recent work has shown that bootstrap support is not first-order correct and thus cannot be directly used for hypothesis testing. We present methods that adjust bootstrap support values in a maximum likelihood (ML) setting so that they have an interpretation corresponding to P values in conventional hypothesis testing; for instance, adjusted bootstrap support larger than 95% occurs only 5% of the time if the split is not present. Through examples and simulation settings, it is found that adjustments always increase the level of support. We also find that the nature of the adjustment is fairly constant across parameter settings. Finally, we consider adjustments that take into account the data-dependent nature of many hypotheses about splits: the hypothesis that they are present is being tested because they are in the tree estimated through ML. Here, in contrast, we find that bootstrap probability often needs to be adjusted downwards.

  12. Point of data saturation was assessed using resampling methods in a survey with open-ended questions.

    Science.gov (United States)

    Tran, Viet-Thi; Porcher, Raphael; Falissard, Bruno; Ravaud, Philippe

    2016-12-01

    To describe methods to determine sample sizes in surveys using open-ended questions and to assess how resampling methods can be used to determine data saturation in these surveys. We searched the literature for surveys with open-ended questions and assessed the methods used to determine sample size in 100 studies selected at random. Then, we used Monte Carlo simulations on data from a previous study on the burden of treatment to assess the probability of identifying new themes as a function of the number of patients recruited. In the literature, 85% of researchers used a convenience sample, with a median size of 167 participants (interquartile range [IQR] = 69-406). In our simulation study, the probability of identifying at least one new theme for the next included subject was 32%, 24%, and 12% after the inclusion of 30, 50, and 100 subjects, respectively. The inclusion of 150 participants at random resulted in the identification of 92% themes (IQR = 91-93%) identified in the original study. In our study, data saturation was most certainly reached for samples >150 participants. Our method may be used to determine when to continue the study to find new themes or stop because of futility. Copyright © 2016 Elsevier Inc. All rights reserved.
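
    A sketch of the resampling idea, assuming each participant's responses have already been coded into a set of theme labels (the input format and function are illustrative):

    import numpy as np

    def prob_new_theme(theme_sets, n_included, n_sim=2000, seed=0):
        # theme_sets: one set of theme labels per participant.
        # Estimates P(the next participant adds at least one unseen theme)
        # by repeatedly re-drawing the inclusion order at random.
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_sim):
            order = rng.permutation(len(theme_sets))
            seen = set().union(*(theme_sets[i] for i in order[:n_included]))
            hits += bool(theme_sets[order[n_included]] - seen)
        return hits / n_sim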

  13. Merging parallel tempering with sequential geostatistical resampling for improved posterior exploration of high-dimensional subsurface categorical fields

    Science.gov (United States)

    Laloy, Eric; Linde, Niklas; Jacques, Diederik; Mariethoz, Grégoire

    2016-04-01

    The sequential geostatistical resampling (SGR) algorithm is a Markov chain Monte Carlo (MCMC) scheme for sampling from possibly non-Gaussian, complex spatially-distributed prior models such as geologic facies or categorical fields. In this work, we highlight the limits of standard SGR for posterior inference of high-dimensional categorical fields with realistically complex likelihood landscapes and benchmark a parallel tempering implementation (PT-SGR). Our proposed PT-SGR approach is demonstrated using synthetic (error-corrupted) data from steady-state flow and transport experiments in categorical 7,575- and 10,000-dimensional 2D conductivity fields. In both case studies, every SGR trial gets trapped in local optima while PT-SGR maintains a higher diversity in the sampled model states. The advantage of PT-SGR is most apparent in an inverse transport problem where the posterior distribution is made bimodal by construction. PT-SGR then converges towards the appropriate data misfit much faster than SGR and partly recovers the two modes. In contrast, for the same computational resources SGR does not fit the data to the appropriate error level and hardly produces a locally optimal solution that looks visually similar to one of the two reference modes. Although PT-SGR clearly surpasses SGR in performance, our results also indicate that using a small number (16-24) of temperatures (and thus parallel cores) may not permit complete sampling of the posterior distribution by PT-SGR within a reasonable computational time (less than 1-2 weeks).

  14. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
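
    The conditioning step is standard Gaussian mixture algebra rather than anything specific to the XDGMM interface; a sketch of the underlying computation (the function and input layout here are illustrative, not EmpiriciSN's actual API):

    import numpy as np
    from scipy.stats import multivariate_normal

    def condition_gmm(weights, means, covs, known_idx, x_known):
        # Condition a Gaussian mixture on dimensions known_idx taking values x_known.
        # weights: 1-D array; means: list of 1-D arrays; covs: list of 2-D arrays.
        d = len(means[0])
        free_idx = [i for i in range(d) if i not in known_idx]
        x_known = np.asarray(x_known, float)
        w_new, m_new, c_new = [], [], []
        for w, m, S in zip(weights, means, covs):
            S_kk = S[np.ix_(known_idx, known_idx)]
            S_fk = S[np.ix_(free_idx, known_idx)]
            gain = S_fk @ np.linalg.inv(S_kk)
            m_new.append(m[free_idx] + gain @ (x_known - m[known_idx]))
            c_new.append(S[np.ix_(free_idx, free_idx)] - gain @ S_fk.T)
            # Reweight each component by its likelihood of the observed values.
            w_new.append(w * multivariate_normal.pdf(x_known, m[known_idx], S_kk))
        w_new = np.array(w_new)
        return w_new / w_new.sum(), m_new, c_new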

  15. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Holoien, Thomas W.-S.; /Ohio State U., Dept. Astron. /Ohio State U., CCAPP /KIPAC, Menlo Park /SLAC; Marshall, Philip J.; Wechsler, Risa H.; /KIPAC, Menlo Park /SLAC

    2017-05-11

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  16. A Markov chain Monte Carlo (MCMC) methodology with bootstrap percentile estimates for predicting presidential election results in Ghana.

    Science.gov (United States)

    Nortey, Ezekiel N N; Ansah-Narh, Theophilus; Asah-Asante, Richard; Minkah, Richard

    2015-01-01

    Although there is a large literature on procedures for forecasting or predicting election results, in Ghana only opinion-poll strategies have been used. To fill this gap, the paper develops Markov chain models for forecasting the 2016 presidential election results at the Regional, Zonal (i.e. Savannah, Coastal and Forest) and National levels using past presidential election results of Ghana. The model predicts the 2016 presidential election results using the Markov chain Monte Carlo (MCMC) methodology with bootstrap estimates. The results indicated that the ruling NDC might marginally win the 2016 presidential election but would not obtain the more than 50% of votes required to be declared an outright winner. This implies a run-off election between the two largest political parties: the ruling NDC and the major opposition party, the NPP. The prediction for the 2016 presidential run-off election between the NDC and the NPP was rather in favour of the major opposition party, the NPP, with a little over 50% of the votes.

  17. Estimating Parameter Uncertainty in Binding-Energy Models by the Frequency-Domain Bootstrap

    Science.gov (United States)

    Bertsch, G. F.; Bingham, Derek

    2017-12-01

    We propose using the frequency-domain bootstrap (FDB) to estimate errors of modeling parameters when the modeling error is itself a major source of uncertainty. Unlike the usual bootstrap or the simple χ2 analysis, the FDB can take into account correlations between errors. It is also very fast compared to the Gaussian process Bayesian estimate as often implemented for computer model calibration. The method is illustrated with a simple example, the liquid drop model of nuclear binding energies. We find that the FDB gives a more conservative estimate of the uncertainty in liquid drop parameters than the χ2 method, and is in fair accord with more empirical estimates. For the nuclear physics application, there are no apparent obstacles to apply the method to the more accurate and detailed models based on density-functional theory.

  18. Bootstrapping six-gluon scattering in planar ${\\cal N}=4$ super-Yang-Mills theory

    CERN Document Server

    Dixon, Lance J; Duhr, Claude; von Hippel, Matt; Pennington, Jeffrey

    2014-01-01

    We describe the hexagon function bootstrap for solving for six-gluon scattering amplitudes in the large $N_c$ limit of ${\\cal N}=4$ super-Yang-Mills theory. In this method, an ansatz for the finite part of these amplitudes is constrained at the level of amplitudes, not integrands, using boundary information. In the near-collinear limit, the dual picture of the amplitudes as Wilson loops leads to an operator product expansion which has been solved using integrability by Basso, Sever and Vieira. Factorization of the amplitudes in the multi-Regge limit provides additional boundary data. This bootstrap has been applied successfully through four loops for the maximally helicity violating (MHV) configuration of gluon helicities, and through three loops for the non-MHV case.

  19. Tolerance limits and tolerance intervals for ratios of normal random variables using a bootstrap calibration.

    Science.gov (United States)

    Flouri, Marilena; Zhai, Shuyan; Mathew, Thomas; Bebu, Ionut

    2017-05-01

    This paper addresses the problem of deriving one-sided tolerance limits and two-sided tolerance intervals for a ratio of two random variables that follow a bivariate normal distribution, or a lognormal/normal distribution. The methodology that is developed uses nonparametric tolerance limits based on a parametric bootstrap sample, coupled with a bootstrap calibration in order to improve accuracy. The methodology is also adopted for computing confidence limits for the median of the ratio random variable. Numerical results are reported to demonstrate the accuracy of the proposed approach. The methodology is illustrated using examples where ratio random variables are of interest: an example on the radioactivity count in reverse transcriptase assays and an example from the area of cost-effectiveness analysis in health economics. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Benchmarking the efficiency of the Chilean water and sewerage companies: a double-bootstrap approach.

    Science.gov (United States)

    Molinos-Senante, María; Donoso, Guillermo; Sala-Garrido, Ramon; Villegas, Andrés

    2018-03-01

    Benchmarking the efficiency of water companies is essential to set water tariffs and to promote their sustainability. In doing so, most previous studies have applied conventional data envelopment analysis (DEA) models. However, DEA is a deterministic method that does not allow identification of the environmental factors influencing efficiency scores. To overcome this limitation, this paper evaluates the efficiency of a sample of Chilean water and sewerage companies by applying a double-bootstrap DEA model. Results showed that the ranking of water and sewerage companies changes notably depending on whether efficiency scores are computed with conventional or double-bootstrap DEA models. Moreover, it was found that the percentage of non-revenue water and customer density are factors influencing the efficiency of Chilean water and sewerage companies. This paper illustrates the importance of using a robust and reliable method to increase the relevance of benchmarking tools.

  1. A bootstrap test for instrument validity in heterogeneous treatment effect models

    OpenAIRE

    Kitagawa, Toru

    2013-01-01

    This paper develops a specification test for the instrument validity conditions in the heterogeneous treatment effect model with a binary treatment and a discrete instrument. A necessary testable implication for the joint restriction of instrument exogeneity and instrument monotonicity is given by nonnegativity of point-identifiable complier's outcome densities. Our specification test infers this testable implication using a Kolmogorov-Smirnov type test statistic. We provide a bootstrap algor...

  2. Asymptotic Expansions and Bootstrapping Distributions for Dependent Variables: A Martingale Approach

    OpenAIRE

    Mykland, Per Aslak

    1992-01-01

    The paper develops a one-step triangular array asymptotic expansion for continuous martingales which are asymptotically normal. Mixing conditions are not required, but the quadratic variations of the martingales must satisfy a law of large numbers and a central limit type condition. From this result we derive expansions for the distributions of estimators in asymptotically ergodic differential equation models, and also for the bootstrapping estimators of these distributions.

  3. Parametric bootstrap methods for testing multiplicative terms in GGE and AMMI models.

    Science.gov (United States)

    Forkman, Johannes; Piepho, Hans-Peter

    2014-09-01

    The genotype main effects and genotype-by-environment interaction effects (GGE) model and the additive main effects and multiplicative interaction (AMMI) model are two common models for analysis of genotype-by-environment data. These models are frequently used by agronomists, plant breeders, geneticists and statisticians for analysis of multi-environment trials. In such trials, a set of genotypes, for example, crop cultivars, are compared across a range of environments, for example, locations. The GGE and AMMI models use singular value decomposition to partition genotype-by-environment interaction into an ordered sum of multiplicative terms. This article deals with the problem of testing the significance of these multiplicative terms in order to decide how many terms to retain in the final model. We propose parametric bootstrap methods for this problem. Models with fixed main effects, fixed multiplicative terms and random normally distributed errors are considered. Two methods are derived: a full and a simple parametric bootstrap method. These are compared with the alternatives of using approximate F-tests and cross-validation. In a simulation study based on four multi-environment trials, both bootstrap methods performed well with regard to Type I error rate and power. The simple parametric bootstrap method is particularly easy to use, since it only involves repeated sampling of standard normally distributed values. This method is recommended for selecting the number of multiplicative terms in GGE and AMMI models. The proposed methods can also be used for testing components in principal component analysis. © 2014, The International Biometric Society.
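
    A sketch of the simple method as described above: the share of the remaining interaction captured by the k-th squared singular value is compared with the same statistic computed on pure standard-normal matrices (the reduced reference dimensions below are our assumption; the paper specifies the exact sizes):

    import numpy as np

    def simple_parametric_bootstrap(E, k=1, n_boot=10000, seed=0):
        # E: double-centred genotype-by-environment interaction matrix.
        rng = np.random.default_rng(seed)
        lam = np.linalg.svd(E, compute_uv=False) ** 2
        t_obs = lam[k - 1] / lam[k - 1:].sum()
        g, e = E.shape
        m, n = g - k, e - k   # assumed reference dimensions after centring
        t_ref = np.empty(n_boot)
        for i in range(n_boot):
            z = np.linalg.svd(rng.standard_normal((m, n)), compute_uv=False) ** 2
            t_ref[i] = z[0] / z.sum()
        return np.mean(t_ref >= t_obs)   # parametric-bootstrap p-value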

  4. Forecasting Model for IPTV Service in Korea Using Bootstrap Ridge Regression Analysis

    Science.gov (United States)

    Lee, Byoung Chul; Kee, Seho; Kim, Jae Bum; Kim, Yun Bae

    The telecom firms in Korea are taking new steps to prepare for the next generation of convergence services, IPTV. In this paper we describe our analysis of an effective method for demand forecasting for IPTV broadcasting. We tested three scenarios based on aspects of the potential IPTV market and compared the results. The forecasting method used in this paper is the multi-generation substitution model with bootstrap ridge regression analysis.

  5. Syntactic bootstrapping in children with Down syndrome: the impact of bilingualism.

    Science.gov (United States)

    Cleave, Patricia L; Kay-Raining Bird, Elizabeth; Trudeau, Natacha; Sutton, Ann

    2014-01-01

    The purpose of the study was to add to our knowledge of bilingual learning in children with Down syndrome (DS) using a syntactic bootstrapping task. Four groups of children and youth matched on non-verbal mental age participated. There were 14 bilingual participants with DS (DS-B, mean age 12;5), 12 monolingual participants with DS (DS-M, mean age 10;10), 9 bilingual typically developing children (TD-B; mean age 4;1) and 11 monolingual typically developing children (TD-M; mean age 4;1). The participants completed a computerized syntactic bootstrapping task involving unfamiliar nouns and verbs. The syntactic cues employed were 'a' for the nouns and '-ing' for the verbs. Performance was better on nouns than verbs. There was also a main effect for group. Follow-up t-tests revealed that there were no significant differences between the TD-M and TD-B or between the DS-M and DS-B groups. However, the DS-M group performed more poorly than the TD-M group with a large effect size. Analyses at the individual level revealed a similar pattern of results. There was evidence that Down syndrome impacted performance; there was no evidence that bilingualism negatively affected the syntactic bootstrapping skills of individuals with DS. These results from a dynamic language task are consistent with those of previous studies that used static or product measures. Thus, the results are consistent with the position that parents should be supported in their decision to provide bilingual input to their children with DS. Readers of this article will identify (1) research evidence regarding bilingual development in children with Down syndrome and (2) syntactic bootstrapping skills in monolingual and bilingual children who are typically developing or who have Down syndrome. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Depth-Aware Salient Object Detection and Segmentation via Multiscale Discriminative Saliency Fusion and Bootstrap Learning.

    Science.gov (United States)

    Song, Hangke; Liu, Zhi; Du, Huan; Sun, Guangling; Le Meur, Olivier; Ren, Tongwei

    2017-09-01

    This paper proposes a novel depth-aware salient object detection and segmentation framework via multiscale discriminative saliency fusion (MDSF) and bootstrap learning for RGBD images (RGB color images with corresponding Depth maps) and stereoscopic images. By exploiting low-level feature contrasts, mid-level feature weighted factors and high-level location priors, various saliency measures on four classes of features are calculated based on multiscale region segmentation. A random forest regressor is learned to perform the discriminative saliency fusion (DSF) and generate the DSF saliency map at each scale, and DSF saliency maps across multiple scales are combined to produce the MDSF saliency map. Furthermore, we propose an effective bootstrap learning-based salient object segmentation method, which is bootstrapped with samples based on the MDSF saliency map and learns multiple kernel support vector machines. Experimental results on two large datasets show how various categories of features contribute to the saliency detection performance and demonstrate that the proposed framework achieves the better performance on both saliency detection and salient object segmentation.

  7. A wild bootstrap approach for the selection of biomarkers in early diagnostic trials.

    Science.gov (United States)

    Zapf, Antonia; Brunner, Edgar; Konietschke, Frank

    2015-05-01

    In early diagnostic trials, particularly in biomarker studies, the aim is often to select diagnostic tests among several methods. In case of metric, discrete, or even ordered categorical data, the area under the receiver operating characteristic (ROC) curve (denoted by AUC) is an appropriate overall accuracy measure for the selection, because the AUC is independent of cut-off points. For the selection of biomarkers, the individual AUCs are compared with a pre-defined threshold. To keep the overall coverage probability or the multiple type-I error rate, simultaneous confidence intervals and multiple contrast tests are considered. We propose a purely nonparametric approach for the estimation of the AUCs with the corresponding confidence intervals and statistical tests. This approach uses the correlation among the statistics to account for multiplicity. For small sample sizes, a wild bootstrap approach is presented. It is shown that the corresponding intervals and tests are asymptotically exact. Extensive simulation studies indicate that the derived wild bootstrap approach controls and exploits the nominal type-I error well, even for high accuracies and small sample sizes. The strength of the correlation, the type of covariance structure, a skewed distribution, and a moderately imbalanced case-control ratio do not have any impact on the behavior of the approach. A real data set illustrates the application of the proposed methods. We recommend the new wild bootstrap approach for the selection of biomarkers in early diagnostic trials, especially for high accuracies and small sample sizes.
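
    A generic way to wild-bootstrap an AUC is sketched below via the Hajek projection of the two-sample U-statistic with Rademacher multipliers; this is an illustrative stand-in, not necessarily the paper's exact multiple-contrast construction:

    import numpy as np

    def multiplier_bootstrap_auc(cases, controls, n_boot=5000, seed=0):
        rng = np.random.default_rng(seed)
        cases, controls = np.asarray(cases, float), np.asarray(controls, float)
        # Mann-Whitney kernel: 1 if case > control, 0.5 on ties.
        h = (cases[:, None] > controls[None, :]) + 0.5 * (cases[:, None] == controls[None, :])
        auc = h.mean()
        g1 = h.mean(axis=1) - auc   # per-case projection terms
        g2 = h.mean(axis=0) - auc   # per-control projection terms
        reps = np.empty(n_boot)
        for i in range(n_boot):
            w = rng.choice([-1.0, 1.0], size=cases.size)
            v = rng.choice([-1.0, 1.0], size=controls.size)
            reps[i] = auc + (w * g1).mean() + (v * g2).mean()
        return auc, np.quantile(reps, [0.025, 0.975])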

  8. Bootstrap imputation with a disease probability model minimized bias from misclassification due to administrative database codes.

    Science.gov (United States)

    van Walraven, Carl

    2017-04-01

    Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures including the following: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap methods imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates were minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
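
    A sketch of the imputation step for the prevalence estimate, assuming per-patient model probabilities are available (the function and inputs are illustrative):

    import numpy as np

    def impute_prevalence(p_model, n_boot=1000, seed=0):
        # p_model: model-derived probability of the condition for each patient.
        # Each replicate imputes disease status with a Bernoulli draw, keeping
        # the uncertainty that hard categorization of probabilities discards.
        rng = np.random.default_rng(seed)
        p_model = np.asarray(p_model, float)
        status = rng.random((n_boot, p_model.size)) < p_model
        prev = status.mean(axis=1)
        return prev.mean(), np.quantile(prev, [0.025, 0.975])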

  9. Neural Network Computed Bootstrap Current for Real Time Control in DIII-D

    Science.gov (United States)

    Tema Biwole, Arsene; Smith, Sterling P.; Meneghini, Orso; Belli, Emily; Candy, Jeff

    2017-10-01

    In an effort to provide a fast and accurate calculation of the bootstrap current density for use as a constraint in real-time equilibrium reconstructions, we have developed a neural network (NN) non-linear regression of the bootstrap current jBS calculated by the NEO code. A new formulation for jBS in NEO allows for a determination of the coefficients on the density and temperature scale lengths. The new formulation reduces the number of inputs to the NN, and the number of output coefficients is 2 times the number of species (including electrons). The NN can reproduce the NEO and Sauter coefficients to a high degree of accuracy. The bootstrap current density calculated in NEO has been used as a constraint in an offline equilibrium reconstruction for comparison to the NN calculation. The computational time of this method (μs) makes it ideal for real-time calculation in DIII-D. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656, DE-FC02-06ER54873.

  10. Nonparametric bootstrap technique for calibrating surgical SmartForceps: theory and application.

    Science.gov (United States)

    Azimaee, Parisa; Jafari Jozani, Mohammad; Maddahi, Yaser; Zareinia, Kourosh; Sutherland, Garnette

    2017-10-01

    Knowledge of the forces exerted on brain tissue during the performance of neurosurgical tasks is critical for quality assurance, case rehearsal, and training purposes. Quantifying the interaction forces has been made possible by developing SmartForceps, a bipolar forceps retrofitted with a set of strain gauges. The forces are estimated using voltages read from the strain gauges. We therefore need to quantify the force-voltage relationship to estimate the interaction forces during microsurgery. This problem has been addressed in the literature by following the physical and deterministic properties of the force-sensing strain gauges, without obtaining the precision associated with each estimate. In this paper, we employ a probabilistic methodology, using a nonparametric bootstrap approach to obtain both point and interval estimates of the applied forces at the tool tips, with the precision associated with each estimate provided. To show proof of concept, the bootstrap technique is employed to estimate unknown forces, and to construct the necessary confidence intervals, using voltages observed in data sets measured during the performance of surgical tasks on a cadaveric brain. Results indicate that the bootstrap technique is capable of estimating tool-tissue interaction forces with an acceptable level of accuracy compared to the linear regression technique under the normality assumption.
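
    The pairs-bootstrap logic for a point estimate with its precision is easy to sketch, assuming for illustration a straight-line force-voltage relation (the actual SmartForceps calibration model is not given in this abstract):

    import numpy as np

    def force_ci(volts, forces, v_new, n_boot=2000, seed=0):
        # Resample (voltage, force) pairs, refit the line, and collect the
        # predicted force at the new voltage reading v_new.
        rng = np.random.default_rng(seed)
        volts, forces = np.asarray(volts, float), np.asarray(forces, float)
        pred = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, volts.size, volts.size)
            slope, intercept = np.polyfit(volts[idx], forces[idx], 1)
            pred[i] = slope * v_new + intercept
        return pred.mean(), np.quantile(pred, [0.025, 0.975])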

  11. Y-90 PET imaging for radiation theragnosis using bootstrap event resampling

    International Nuclear Information System (INIS)

    Nam, Taewon; Woo, Sangkeun; Min, Gyungju; Kim, Jimin; Kang, Joohyun; Lim, Sangmoo; Kim, Kyeongmin

    2013-01-01

    Surgical resection is the most effective way to preserve liver function, but most treatment of hepatocellular carcinoma (HCC) at an unresectable stage is palliative. Yttrium-90 (Y-90) has therefore attracted much recent interest as a treatment, because it can be delivered directly to tumors and results in greater radiation exposure to the tumors than external radiation. Y-90 imaging has most commonly been performed with a gamma camera, but its low sensitivity and resolution make PET imaging necessary. The purpose of this study was to assess the statistical characteristics of Y-90 PET data and to improve the image count rate, and hence image quality, using a non-parametric bootstrap method. The PET data were improved by the non-parametric bootstrap method, as verified by improved uniformity and SNR. Uniformity showed more improvement at low count rates, as with Y-90, in the phantom study, and uniformity and SNR improved by 15.6% and 33.8%, respectively, in the mouse study. The bootstrap method increased the count rate of the PET images, so acquisition time can be reduced; this is expected to improve diagnostic performance.
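
    The core resampling step is simple to sketch; the downstream reconstruction and the uniformity and SNR measurements are omitted, and the event-array format is a placeholder:

    import numpy as np

    def bootstrap_listmode(events, factor=2.0, seed=0):
        # events: array of recorded coincidence events (one row per event).
        # Returns a bootstrap replicate with `factor` times the original
        # number of events, to be reconstructed like ordinary list-mode data.
        rng = np.random.default_rng(seed)
        events = np.asarray(events)
        idx = rng.integers(0, len(events), int(factor * len(events)))
        return events[idx]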

  12. The sound symbolism bootstrapping hypothesis for language acquisition and language evolution.

    Science.gov (United States)

    Imai, Mutsumi; Kita, Sotaro

    2014-09-19

    Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  13. Assessing the uncertainty in QUANTEC's dose-response relation of lung and spinal cord with a bootstrap analysis.

    Science.gov (United States)

    Wedenberg, Minna

    2013-11-15

    To apply a statistical bootstrap analysis to assess the uncertainty in the dose-response relation for the endpoints pneumonitis and myelopathy reported in the QUANTEC review. The bootstrap method assesses the uncertainty of the estimated population-based dose-response relation due to sample variability, which reflects the uncertainty due to limited numbers of patients in the studies. A large number of bootstrap replicates of the original incidence data were produced by random sampling with replacement. The analysis requires only the dose, the number of patients, and the number of occurrences of the studied endpoint, for each study. Two dose-response models, a Poisson-based model and the Lyman model, were fitted to each bootstrap replicate using maximum likelihood. The bootstrap analysis generates a family of curves representing the range of plausible dose-response relations, and the 95% bootstrap confidence intervals give an estimated upper and lower toxicity risk. The curve families for the 2 dose-response models overlap for doses included in the studies at hand but diverge beyond that, with the Lyman model suggesting a steeper slope. The resulting distributions of the model parameters indicate correlation and non-Gaussian distribution. For both data sets, the likelihood of the observed data was higher for the Lyman model in >90% of the bootstrap replicates. The bootstrap method provides a statistical analysis of the uncertainty in the estimated dose-response relation for myelopathy and pneumonitis. It suggests likely values of model parameter values, their confidence intervals, and how they interrelate for each model. Finally, it can be used to evaluate to what extent data supports one model over another. For both data sets considered here, the Lyman model was preferred over the Poisson-based model. Copyright © 2013 Elsevier Inc. All rights reserved.
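
    A sketch with invented incidence numbers: for grouped data, resampling patients with replacement within each study is equivalent to redrawing each study's incidence from a binomial, and each replicate is refit by maximum likelihood (the complementary log-log form below is our assumption for the "Poisson-based" model):

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical grouped data: dose (Gy), patients per study, complications.
    dose = np.array([10.0, 15.0, 20.0, 25.0])
    n = np.array([40, 55, 30, 20])
    k = np.array([1, 4, 6, 9])

    def nll(theta, k_obs):
        # Negative log-likelihood of a complementary log-log dose-response model.
        a, b = theta
        p = np.clip(1.0 - np.exp(-np.exp(a + b * dose)), 1e-9, 1 - 1e-9)
        return -np.sum(k_obs * np.log(p) + (n - k_obs) * np.log(1 - p))

    rng = np.random.default_rng(0)
    fits = []
    for _ in range(1000):
        k_boot = rng.binomial(n, k / n)   # resample incidences per study
        fits.append(minimize(nll, x0=[-4.0, 0.1], args=(k_boot,), method="Nelder-Mead").x)
    fits = np.array(fits)   # a family of plausible (a, b) dose-response curves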

  14. BootGraph: probabilistic fiber tractography using bootstrap algorithms and graph theory.

    Science.gov (United States)

    Vorburger, Robert S; Reischauer, Carolin; Boesiger, Peter

    2013-02-01

    Bootstrap methods have recently been introduced to diffusion-weighted magnetic resonance imaging to estimate the measurement uncertainty of ensuing diffusion parameters directly from the acquired data, without the necessity of assuming a noise model. These methods have previously been combined with deterministic streamline tractography algorithms to allow for the assessment of connection probabilities in the human brain. In such approaches, the local noise-induced disturbance in the diffusion data accumulates additively due to the incremental progression of streamline tractography algorithms. Graph-based approaches have been proposed to overcome this drawback of streamline techniques. For this reason, the bootstrap method is in the present work incorporated into a graph setup to derive a new probabilistic fiber tractography method, called BootGraph. The acquired data set is thereby converted into a weighted, undirected graph by defining a vertex in each voxel and edges between adjacent vertices. By means of the cone of uncertainty, which is derived using the wild bootstrap, a weight is thereafter assigned to each edge. Two path-finding algorithms are subsequently applied to derive connection probabilities. While the first algorithm is based on the shortest-path approach, the second algorithm takes all existing paths between two vertices into consideration. Tracking results are compared to an established algorithm based on the bootstrap method in combination with streamline fiber tractography and to another graph-based algorithm. The BootGraph shows a very good performance in crossing situations with respect to false negatives and permits incorporating additional constraints, such as a curvature threshold. By inheriting the advantages of the bootstrap method and graph theory, the BootGraph method provides a computationally efficient and flexible probabilistic tractography setup to compute connection probability maps and virtual fiber pathways without the drawbacks of

  15. Split-specific bootstrap measures for quantifying phylogenetic stability and the influence of taxon selection.

    Science.gov (United States)

    Wang, Huai-Chun; Susko, Edward; Roger, Andrew J

    2016-12-01

    Assessing the robustness of an inferred phylogeny is an important element of phylogenetics. This is typically done with measures of stabilities at the internal branches and the variation of the positions of the leaf nodes. The bootstrap support for branches in maximum parsimony, distance and maximum likelihood estimation, or posterior probabilities in Bayesian inference, measure the uncertainty about a branch due to the sampling of the sites from genes or sampling genes from genomes. However, these measures do not reveal how taxon sampling affects branch support and the effects of taxon sampling on the estimated phylogeny. An internal branch in a phylogenetic tree can be viewed as a split that separates the taxa into two nonempty complementary subsets. We develop several split-specific measures of stability determined from bootstrap support for quartets. These include BPtaxon_split (average bootstrap percentage [BP] for all quartets involving a taxon within a split), BPsplit (BPtaxon_split averaged over taxa), BPtaxon (BPtaxon_split averaged over splits) and RBIC-taxon (average BP over all splits after removing a taxon). We also develop a pruned-tree distance metric. Application of our measures to empirical and simulated data illustrate that existing measures of overall stability can fail to detect taxa that are the primary source of a split-specific instability. Moreover, we show that the use of many reduced sets of quartets is important in being able to detect the influence of joint sets of taxa rather than individual taxa. These new measures are valuable diagnostic tools to guide taxon sampling in phylogenetic experimental design. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. arXiv The S-matrix Bootstrap II: Two Dimensional Amplitudes

    CERN Document Server

    Paulos, Miguel F.; Toledo, Jonathan; van Rees, Balt C.; Vieira, Pedro

    2017-11-22

    We consider constraints on the S-matrix of any gapped, Lorentz invariant quantum field theory in 1 + 1 dimensions due to crossing symmetry and unitarity. In this way we establish rigorous bounds on the cubic couplings of a given theory with a fixed mass spectrum. In special cases we identify interesting integrable theories saturating these bounds. Our analytic bounds match precisely with numerical bounds obtained in a companion paper where we consider massive QFT in an AdS box and study boundary correlators using the technology of the conformal bootstrap.

  17. Construction of prediction intervals for Palmer Drought Severity Index using bootstrap

    Science.gov (United States)

    Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan

    2018-04-01

    In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-months) and mid-term (six-months) drought observations. The effects of North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. Performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from Konya closed basin located in Central Anatolia, Turkey. The finite sample properties of the proposed method are further illustrated by an extensive simulation study. Our results revealed that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
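
    A minimal sketch of a residual-based bootstrap prediction interval, with an AR(1) model standing in for the paper's (unspecified) forecasting model:

    import numpy as np

    def prediction_interval(x, h=3, n_boot=2000, alpha=0.05, seed=0):
        # x: observed index series; h: forecast horizon in months.
        rng = np.random.default_rng(seed)
        x = np.asarray(x, float)
        mu = x.mean()
        xc = x - mu
        phi = np.dot(xc[1:], xc[:-1]) / np.dot(xc[:-1], xc[:-1])   # AR(1) fit
        resid = xc[1:] - phi * xc[:-1]
        resid -= resid.mean()
        paths = np.empty((n_boot, h))
        for i in range(n_boot):
            last = xc[-1]
            for t in range(h):
                last = phi * last + rng.choice(resid)   # resample residuals
                paths[i, t] = last
        return mu + np.quantile(paths, [alpha / 2, 1 - alpha / 2], axis=0)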

  18. The use of GLIM and the bootstrap in assessing a clinical trial of two drugs.

    Science.gov (United States)

    Mapleson, W W

    1986-01-01

    An approach is described for estimating the dose of a new drug which is equipotent to an established dose of an old drug. The approach is basically that of the parallel-line assay but it can allow for concomitant variables and, by exploiting the facilities available in the statistical computer package GLIM (generalized linear interactive modelling), the approach can be applied when the residuals conform to one of a number of distributions and, with suitable safeguards, to continuous, discrete and even 'scored' responses. In some circumstances, it is necessary to obtain confidence limits by Efron's 'bootstrap' technique. The method is illustrated with results from a trial of two premedicant drugs in children.

  19. Dynamic Mode Decomposition based on Bootstrapping Extended Kalman Filter Application to Noisy data

    Science.gov (United States)

    Nonomura, Taku; Shibata, Hisaichi; Takaki, Ryoji

    2017-11-01

    In this study, a dynamic mode decomposition (DMD) based on a bootstrapping extended Kalman filter is proposed for time-series data. In this framework, the state variables (x and y) are filtered as well as the parameters (a_ij), whose estimation is what is conducted in the conventional DMD and the standard Kalman-filter-based DMD. The filtering of the state variables enables us to obtain highly accurate eigenvalues of the system under strong noise. In the presentation, the formulation, advantages and disadvantages are discussed. This research is partially supported by Presto, JST (JPMJPR1678).

  20. Numerical evaluation of shielding analysing procedures for the upgraded JRR-3 research reactors

    International Nuclear Information System (INIS)

    Ise, Takeharu; Maruo, Takeshi; Umeda, Kentaro.

    1986-11-01

    This report presents a numerical evaluation of the analytical parameters in the shielding analysis procedures for the upgraded JRR-3 research reactor: the number of energy groups, the number of spatial meshes, the number of angular quadratures and the bootstrap widths used in S_N calculations; dose rate attenuation curves; and the radiation source. (author)

  1. Exploring the potential for using 210Pbex measurements within a re-sampling approach to document recent changes in soil redistribution rates within a small catchment in southern Italy.

    Science.gov (United States)

    Porto, Paolo; Walling, Desmond E; Cogliandro, Vanessa; Callegari, Giovanni

    2016-11-01

    In recent years, the fallout radionuclides caesium-137 (137Cs) and unsupported lead-210 (210Pbex) have been successfully used to document rates of soil erosion in many areas of the world, as an alternative to conventional measurements. By virtue of their different half-lives, these two radionuclides are capable of providing information related to different time windows. 137Cs measurements are commonly used to generate information on mean annual erosion rates over the past ca. 50-60 years, whereas 210Pbex measurements are able to provide information relating to a longer period of up to ca. 100 years. However, the time-integrated nature of the estimates of soil redistribution provided by 137Cs and 210Pbex measurements can be seen as a limitation, particularly when viewed in the context of global change and interest in the response of soil redistribution rates to contemporary climate change and land use change. Re-sampling techniques used with these two fallout radionuclides potentially provide a basis for providing information on recent changes in soil redistribution rates. By virtue of the effectively continuous fallout input of 210Pb, the response of the 210Pbex inventory of a soil profile to changing soil redistribution rates, and thus its potential for use with the re-sampling approach, differs from that of 137Cs. Its greater sensitivity to recent changes in soil redistribution rates suggests that 210Pbex may have advantages over 137Cs for use in the re-sampling approach. The potential for using 210Pbex measurements in re-sampling studies is explored further in this contribution. Attention focuses on a small (1.38 ha) forested catchment in southern Italy. The catchment was originally sampled for 210Pbex measurements in 2001 and equivalent samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimates of mean annual erosion related to two different time windows. This

  2. Detecting seasonal and cyclical trends in agricultural runoff water quality-hypothesis tests and block bootstrap power analysis.

    Science.gov (United States)

    Uddameri, Venkatesh; Singaraju, Sreeram; Hernandez, E Annette

    2018-02-21

    Seasonal and cyclic trends in nutrient concentrations at four agricultural drainage ditches were assessed using a dataset generated from a multivariate, multiscale, multiyear water quality monitoring effort in the agriculturally dominant Lower Rio Grande Valley (LRGV) River Watershed in South Texas. An innovative bootstrap sampling-based power analysis procedure was developed to evaluate the ability of Mann-Whitney and Noether tests to discern trends and to guide future monitoring efforts. The Mann-Whitney U test was able to detect significant changes between summer and winter nutrient concentrations at sites with lower depths and unimpeded flows. Pollutant dilution, non-agricultural loadings, and in-channel flow structures (weirs) masked the effects of seasonality. The detection of cyclical trends using the Noether test was highest in the presence of vegetation mainly for total phosphorus and oxidized nitrogen (nitrite + nitrate) compared to dissolved phosphorus and reduced nitrogen (total Kjeldahl nitrogen-TKN). Prospective power analysis indicated that while increased monitoring can lead to higher statistical power, the effect size (i.e., the total number of trend sequences within a time-series) had a greater influence on the Noether test. Both Mann-Whitney and Noether tests provide complementary information on seasonal and cyclic behavior of pollutant concentrations and are affected by different processes. The results from these statistical tests when evaluated in the context of flow, vegetation, and in-channel hydraulic alterations can help guide future data collection and monitoring efforts. The study highlights the need for long-term monitoring of agricultural drainage ditches to properly discern seasonal and cyclical trends.
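
    The bootstrap power step for the seasonal Mann-Whitney comparison can be sketched as follows: resample each season's record in contiguous blocks to preserve autocorrelation, inject a hypothesized seasonal shift, and count rejections (the block length and shift are user choices; the paper's exact procedure may differ):

    import numpy as np
    from scipy.stats import mannwhitneyu

    def block_bootstrap_power(summer, winter, shift, block=6, n_boot=1000,
                              alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        summer, winter = np.asarray(summer, float), np.asarray(winter, float)
        def resample(x):
            # Moving-blocks bootstrap: concatenate randomly chosen blocks.
            n_blocks = int(np.ceil(x.size / block))
            starts = rng.integers(0, x.size - block + 1, size=n_blocks)
            return np.concatenate([x[s:s + block] for s in starts])[:x.size]
        hits = 0
        for _ in range(n_boot):
            if mannwhitneyu(resample(summer), resample(winter) + shift).pvalue < alpha:
                hits += 1
        return hits / n_boot   # estimated power at significance level alpha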

  3. Bootstrapping of gene-expression data improves and controls the false discovery rate of differentially expressed genes

    Directory of Open Access Journals (Sweden)

    Goddard Mike E

    2004-03-01

The ordinary, penalized, and bootstrap t-tests, least squares and best linear unbiased prediction were compared for their false discovery rates (FDR), i.e. the fraction of falsely discovered genes, which was empirically estimated in a duplicate of the data set. The bootstrap t-test yielded up to 80% lower FDRs than the alternative statistics, and its FDR was always as good as or better than any of the alternatives. Generally, the predicted FDR from the bootstrapped P-values agreed well with the empirical estimates, except when the number of mRNA samples was smaller than 16. In a cancer data set, the bootstrap t-test discovered 200 differentially regulated genes at an FDR of 2.6%, and in a knock-out gene expression experiment 10 genes were discovered at an FDR of 3.2%. It is argued that, in the case of microarray data, control of the FDR takes sufficient account of the multiple testing, whilst being less stringent than Bonferroni-type multiple testing corrections. Extensions of the bootstrap simulations to more complicated test statistics are discussed.
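
    A minimal sketch of a two-sample bootstrap t-test in the spirit of the record above: the groups are centred on a common mean so that the null hypothesis holds, then resampled with replacement. The expression values and group sizes are simulated stand-ins, and the FDR machinery of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_t_pvalue(x, y, n_boot=2000):
    """Two-sample bootstrap t-test: resample from the groups after
    centring them on a common mean, so the null hypothesis holds."""
    def t_stat(a, b):
        return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a)
                                               + b.var(ddof=1) / len(b))
    t_obs = t_stat(x, y)
    pooled = np.concatenate([x, y]).mean()
    x0, y0 = x - x.mean() + pooled, y - y.mean() + pooled  # impose H0
    count = 0
    for _ in range(n_boot):
        xb = rng.choice(x0, size=len(x0), replace=True)
        yb = rng.choice(y0, size=len(y0), replace=True)
        if abs(t_stat(xb, yb)) >= abs(t_obs):
            count += 1
    return (count + 1) / (n_boot + 1)

# Hypothetical log-expression values for one gene in two conditions
control = rng.normal(5.0, 0.5, size=8)
treated = rng.normal(5.6, 0.5, size=8)
print(f"bootstrap p-value: {bootstrap_t_pvalue(control, treated):.4f}")
```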

  4. Using the bootstrap to establish statistical significance for relative validity comparisons among patient-reported outcome measures.

    Science.gov (United States)

    Deng, Nina; Allison, Jeroan J; Fang, Hua Julia; Ash, Arlene S; Ware, John E

    2013-05-31

Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of the RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. The statistical significance of the RV increased as the magnitude of the denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. A larger sample size with a fixed denominator F-statistic, or more bootstrap replicates (beyond 500), had minimal impact. The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.9).
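
    The sketch below illustrates how a bootstrap confidence interval for the RV statistic can be obtained by resampling patients. It assumes hypothetical scores and severity groups and is not the authors' implementation.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

def relative_validity(comparator, reference, groups):
    """RV = F(comparator) / F(reference) across clinical groups."""
    labels = np.unique(groups)
    f_comp = f_oneway(*[comparator[groups == g] for g in labels]).statistic
    f_ref = f_oneway(*[reference[groups == g] for g in labels]).statistic
    return f_comp / f_ref

def bootstrap_rv_ci(comparator, reference, groups, n_boot=500, level=0.95):
    n = len(groups)
    rvs = []
    for _ in range(n_boot):
        idx = rng.choice(n, size=n, replace=True)  # resample patients
        rvs.append(relative_validity(comparator[idx], reference[idx], groups[idx]))
    lo, hi = np.percentile(rvs, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

# Hypothetical scores for 150 patients in 3 severity groups
groups = rng.integers(0, 3, size=150)
reference = groups + rng.normal(0, 1, size=150)        # discriminates well
comparator = 0.7 * groups + rng.normal(0, 1, size=150)
print(bootstrap_rv_ci(comparator, reference, groups))
```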

  5. Comparison of bootstrap current and plasma conductivity models applied in a self-consistent equilibrium calculation for Tokamak plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Maria Celia Ramos; Ludwig, Gerson Otto [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Lab. Associado de Plasma]. E-mail: mcr@plasma.inpe.br

    2004-07-01

Different bootstrap current formulations are implemented in a self-consistent equilibrium calculation obtained from a direct variational technique in fixed-boundary tokamak plasmas. The total plasma current profile is assumed to have contributions from the diamagnetic, Pfirsch-Schlueter, and the neoclassical Ohmic and bootstrap currents. The Ohmic component is calculated in terms of the neoclassical conductivity, compared here among different expressions, and the loop voltage is determined consistently in order to give the prescribed value of the total plasma current. A comparison among several bootstrap current models for different viscosity coefficient calculations and distinct forms of the Coulomb collision operator is performed for a variety of plasma parameters of the small aspect ratio tokamak ETE (Experimento Tokamak Esferico) at the Associated Plasma Laboratory of INPE, in Brazil. We have performed this comparison for the ETE tokamak so that the differences among all the models reported here, mainly regarding plasma collisionality, can be better illustrated. The dependence of the bootstrap current ratio upon some plasma parameters within the framework of the self-consistent calculation is also analysed. We emphasize in this paper what we call the Hirshman-Sigmar/Shaing model, valid for all collisionality regimes and aspect ratios, and a fitted formulation proposed by Sauter, which has the same range of validity but is faster to compute than the previous one. The advantages and possible limitations of all these different formulations for the bootstrap current estimate are analysed throughout this work. (author)

  6. Introduction of Bootstrap Current Reduction in the Stellarator Optimization Using the Algorithm DAB

    Energy Technology Data Exchange (ETDEWEB)

    Castejón, F.; Gómez-Iglesias, A.; Velasco, J. L.

    2015-07-01

This work is devoted to introducing a new optimization criterion in the DAB (Distributed Asynchronous Bees) code. With this new criterion, DAB now includes the equilibrium and Mercier stability criteria, the minimization of the Bxgrad(B) criterion, which ensures the reduction of neoclassical transport and the improvement of the confinement of fast particles, and the reduction of the bootstrap current. We have started from a neoclassically optimised configuration of the helias type and imposed the reduction of the bootstrap current. The resulting configuration presents only a modest reduction of the total bootstrap current, but the local current density is reduced along the minor radius. Further investigations are being carried out to understand the reason for this modest improvement.

  7. Impurities in a non-axisymmetric plasma: Transport and effect on bootstrap current

    Energy Technology Data Exchange (ETDEWEB)

    Mollén, A., E-mail: albertm@chalmers.se [Department of Applied Physics, Chalmers University of Technology, Göteborg (Sweden); Max-Planck-Institut für Plasmaphysik, 17491 Greifswald (Germany); Landreman, M. [Institute for Research in Electronics and Applied Physics, University of Maryland, College Park, Maryland 20742 (United States); Smith, H. M.; Helander, P. [Max-Planck-Institut für Plasmaphysik, 17491 Greifswald (Germany); Braun, S. [Max-Planck-Institut für Plasmaphysik, 17491 Greifswald (Germany); German Aerospace Center, Institute of Engineering Thermodynamics, Pfaffenwaldring 38-40, D-70569 Stuttgart (Germany)

    2015-11-15

    Impurities cause radiation losses and plasma dilution, and in stellarator plasmas the neoclassical ambipolar radial electric field is often unfavorable for avoiding strong impurity peaking. In this work we use a new continuum drift-kinetic solver, the SFINCS code (the Stellarator Fokker-Planck Iterative Neoclassical Conservative Solver) [M. Landreman et al., Phys. Plasmas 21, 042503 (2014)] which employs the full linearized Fokker-Planck-Landau operator, to calculate neoclassical impurity transport coefficients for a Wendelstein 7-X (W7-X) magnetic configuration. We compare SFINCS calculations with theoretical asymptotes in the high collisionality limit. We observe and explain a 1/ν-scaling of the inter-species radial transport coefficient at low collisionality, arising due to the field term in the inter-species collision operator, and which is not found with simplified collision models even when momentum correction is applied. However, this type of scaling disappears if a radial electric field is present. We also use SFINCS to analyze how the impurity content affects the neoclassical impurity dynamics and the bootstrap current. We show that a change in plasma effective charge Z{sub eff} of order unity can affect the bootstrap current enough to cause a deviation in the divertor strike point locations.

  8. The Index of Biological Integrity and the bootstrap revisited: an example from Minnesota streams

    Science.gov (United States)

    Dolph, Christine L.; Sheshukov, Aleksey Y.; Chizinski, Christopher J.; Vondracek, Bruce C.; Wilson, Bruce

    2010-01-01

    Multimetric indices, such as the Index of Biological Integrity (IBI), are increasingly used by management agencies to determine whether surface water quality is impaired. However, important questions about the variability of these indices have not been thoroughly addressed in the scientific literature. In this study, we used a bootstrap approach to quantify variability associated with fish IBIs developed for streams in two Minnesota river basins. We further placed this variability into a management context by comparing it to impairment thresholds currently used in water quality determinations for Minnesota streams. We found that 95% confidence intervals ranged as high as 40 points for IBIs scored on a 0–100 point scale. However, on average, 90% of IBI scores calculated from bootstrap replicate samples for a given stream site yielded the same impairment status as the original IBI score. We suggest that sampling variability in IBI scores is related to both the number of fish and the number of rare taxa in a field collection. A comparison of the effects of different scoring methods on IBI variability indicates that a continuous scoring method may reduce the amount of bias in IBI scores.

  9. Combining Bootstrap Aggregation with Support Vector Regression for Small Blood Pressure Measurement.

    Science.gov (United States)

    Lee, Soojeong; Ahmad, Awais; Jeon, Gwanggil

    2018-02-28

Blood pressure measurement based on oscillometry is one of the most popular techniques for checking the health condition of individual subjects. This paper proposes a support vector regression (SVR) based fusion estimator with a bootstrap technique for oscillometric blood pressure (BP) estimation. However, some inherent problems exist with this approach. First, it is not simple to identify the best SVR estimator, and worthy information might be omitted when selecting one SVR estimator and discarding others. Additionally, our input feature data, acquired from only five BP measurements per subject, represent a very small sample size. This constitutes a critical limitation when utilizing the SVR technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To overcome these challenges, a fusion with an asymptotic approach (based on combining the bootstrap with the SVR technique) is utilized to generate the pseudo features needed to predict the BP values. This ensemble estimator using the SVR technique can learn to effectively mimic the non-linear relations between the input data acquired from the oscillometry and the nurse-measured BP values.
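
    Bootstrap aggregation around SVR base learners can be sketched with scikit-learn's BaggingRegressor, as below. The oscillometric features and BP targets are simulated placeholders, the fusion/pseudo-feature scheme of the paper is not reproduced, and the `estimator` keyword assumes scikit-learn 1.2 or later.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(7)

# Hypothetical stand-in for oscillometric features: a small feature
# vector per subject, with systolic BP as the regression target.
X = rng.normal(size=(60, 5))
y = 120 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 3, size=60)

# Bagging draws bootstrap samples of the training set, fits one SVR
# per sample, and averages the ensemble predictions.
model = BaggingRegressor(estimator=SVR(kernel="rbf", C=10.0),
                         n_estimators=50, bootstrap=True, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))
```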

  10. A bootstrap based analysis pipeline for efficient classification of phylogenetically related animal miRNAs

    Directory of Open Access Journals (Sweden)

    Gu Xun

    2007-03-01

Background: Phylogenetically related miRNAs (miRNA families) convey important information about the function and evolution of miRNAs. Due to the special sequence features of miRNAs, pair-wise sequence identity between miRNA precursors alone is often inadequate for unequivocally judging the phylogenetic relationships between miRNAs. Most of the current methods for miRNA classification rely heavily on manual inspection and lack measurements of the reliability of the results. Results: In this study, we designed an analysis pipeline (the Phylogeny-Bootstrap-Cluster (PBC) pipeline) to identify miRNA families based on branch stability in the bootstrap trees derived from overlapping genome-wide miRNA sequence sets. We tested the PBC analysis pipeline with the miRNAs from six animal species, H. sapiens, M. musculus, G. gallus, D. rerio, D. melanogaster, and C. elegans. The resulting classification was compared with the miRNA families defined in miRBase. The two classifications were largely consistent. Conclusion: The PBC analysis pipeline is an efficient method for classifying large numbers of heterogeneous miRNA sequences. It requires minimal human involvement and provides measurements of the reliability of the classification results.

  11. Recovery of signal loss adopting the residual bootstrap method in fetal heart rate dynamics.

    Science.gov (United States)

    Lee, Sun-Kyung; Park, Young-Sun; Cha, Kyung-Joon

    2018-03-19

Fetal heart rate (FHR) data obtained from a non-stress test (NST) can be represented as a time series, which is often accompanied by signal loss due to physical and biological causes. To recover or estimate FHR data subject to a high rate of signal loss, time series models [second-order autoregressive (AR(2)), first-order autoregressive conditional heteroscedasticity (ARCH(1)) and empirical mode decomposition with vector autoregression (EMD-VAR)] and the residual bootstrap method were applied. The ARCH(1) model with the residual bootstrap technique was the most accurate [root mean square error (RMSE), 2.065], as it reflects the nonlinearity of the FHR data [mean absolute error (MAE) for approximate entropy (ApEn), 0.081]. As a result, the goal of predicting fetal health and identifying a high-risk pregnancy could be achieved. These trials may be effectively used to save the time and cost of repeating the NST when fetal diagnosis is impossible owing to a large amount of signal loss.
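
    A generic residual-bootstrap forecast for an AR(2) fit, sketched below, conveys the core mechanics: fit the model, centre its residuals, and propagate resampled residuals through the recursion. The FHR trace is a toy random walk, not clinical data, and the authors' ARCH(1) and EMD-VAR variants are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_ar2(x):
    """Least-squares fit of x[t] = c + a1*x[t-1] + a2*x[t-2] + e[t]."""
    X = np.column_stack([np.ones(len(x) - 2), x[1:-1], x[:-2]])
    coef, *_ = np.linalg.lstsq(X, x[2:], rcond=None)
    resid = x[2:] - X @ coef
    return coef, resid

def residual_bootstrap_forecast(x, horizon, n_boot=500):
    """Forecast the next `horizon` points; uncertainty comes from
    resampling the centred AR(2) residuals with replacement."""
    coef, resid = fit_ar2(x)
    resid = resid - resid.mean()
    paths = np.empty((n_boot, horizon))
    for b in range(n_boot):
        h = list(x[-2:])
        for _ in range(horizon):
            e = rng.choice(resid)
            h.append(coef[0] + coef[1] * h[-1] + coef[2] * h[-2] + e)
        paths[b] = h[2:]
    return paths.mean(axis=0), np.percentile(paths, [2.5, 97.5], axis=0)

fhr = 140 + np.cumsum(rng.normal(0, 0.5, size=300))  # toy FHR trace (bpm)
mean_path, (lo, hi) = residual_bootstrap_forecast(fhr, horizon=10)
print(mean_path.round(1))
```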

  12. A Bootstrap Metropolis-Hastings Algorithm for Bayesian Analysis of Big Data.

    Science.gov (United States)

    Liang, Faming; Kim, Jinsu; Song, Qifan

    2016-01-01

Markov chain Monte Carlo (MCMC) methods have proven to be a very powerful tool for analyzing data of complex structure. However, their computer-intensive nature, which typically requires a large number of iterations and a complete scan of the full dataset for each iteration, precludes their use for big data analysis. In this paper, we propose the so-called bootstrap Metropolis-Hastings (BMH) algorithm, which provides a general framework for taming powerful MCMC methods for big data analysis; that is, the full-data log-likelihood is replaced by a Monte Carlo average of the log-likelihoods calculated in parallel from multiple bootstrap samples. The BMH algorithm possesses an embarrassingly parallel structure and avoids repeated scans of the full dataset in iterations, and is thus feasible for big data problems. Compared to the popular divide-and-combine method, BMH can be generally more efficient as it can asymptotically integrate the whole data information into a single simulation run. The BMH algorithm is very flexible. Like the Metropolis-Hastings algorithm, it can serve as a basic building block for developing advanced MCMC algorithms that are feasible for big data problems. This is illustrated in the paper by the tempering BMH algorithm, which can be viewed as a combination of parallel tempering and the BMH algorithm. BMH can also be used for model selection and optimization by combining with reversible jump MCMC and simulated annealing, respectively.
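
    The heart of the idea, replacing the full-data log-likelihood in the Metropolis-Hastings acceptance ratio with a rescaled Monte Carlo average of bootstrap-subsample log-likelihoods, can be sketched for a Gaussian mean as below. This simplified version fixes the bootstrap subsamples in advance and drops likelihood constants; it illustrates the principle, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(11)

data = rng.normal(2.0, 1.0, size=100_000)   # "big" dataset, known sigma=1
n_sub, k = 1_000, 20                        # subsample size, number of samples
subsamples = [rng.choice(data, size=n_sub, replace=True) for _ in range(k)]

def approx_loglik(theta):
    """Monte Carlo average of rescaled bootstrap log-likelihoods,
    standing in for the full-data log-likelihood (parallelizable)."""
    scale = len(data) / n_sub
    return np.mean([scale * np.sum(-0.5 * (s - theta) ** 2) for s in subsamples])

# Random-walk Metropolis-Hastings on the approximate target
theta, ll = 0.0, approx_loglik(0.0)
samples = []
for _ in range(2_000):
    prop = theta + rng.normal(0, 0.05)
    ll_prop = approx_loglik(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    samples.append(theta)
print(f"posterior mean estimate: {np.mean(samples[500:]):.3f}")
```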

  13. How efficient are Greek hospitals? A case study using a double bootstrap DEA approach.

    Science.gov (United States)

    Kounetas, Kostas; Papathanassopoulos, Fotis

    2013-12-01

The purpose of this study was to measure Greek hospital performance using different input-output combinations, and to identify the factors that influence their efficiency, thus providing policy makers with valuable input for the decision-making process. Using a unique dataset, we estimated the productive efficiency of each hospital through a bootstrapped data envelopment analysis (DEA) approach. In a second stage, we explored, using a bootstrapped truncated regression, the impact of environmental factors on hospitals' technical and scale efficiency. Our results reveal that over 80% of the examined hospitals appear to have a technical efficiency lower than 0.8, while the majority appear to be scale efficient. Moreover, efficiency performance differed with the inclusion of medical examinations as an additional variable. On the other hand, the bed occupancy ratio appeared to affect both technical and scale efficiency in a rather interesting way, while the adoption of advanced medical equipment and the type of hospital improve scale and technical efficiency, respectively. The findings of this study on Greek hospitals' performance are not encouraging. Furthermore, our results raise questions regarding the number of hospitals that should operate, and which type of hospital is more efficient. Finally, the results indicate the role of medical equipment in performance, confirming its misallocation in healthcare expenditure.

  14. Nuclear energy consumption, oil consumption and economic growth in G-6 countries: Bootstrap panel causality test

    International Nuclear Information System (INIS)

    Chu, Hsiao-Ping; Chang Tsangyao

    2012-01-01

    This study applies bootstrap panel Granger causality to test whether energy consumption promotes economic growth using data from G-6 countries over the period of 1971–2010. Both nuclear and oil consumption data are used in this study. Regarding the nuclear consumption-economic growth nexus, nuclear consumption causes economic growth in Japan, the UK, and the US; economic growth causes nuclear consumption in the US; nuclear consumption and economic growth show no causal relation in Canada, France and Germany. Regarding oil consumption-economic growth nexus, we find that there is one-way causality from economic growth to oil consumption only in the US, and that oil consumption does not Granger cause economic growth in G-6 countries except Germany and Japan. Our results have important policy implications for the G-6 countries within the context of economic development. - Highlights: ► Bootstrap panel Granger causality test whether energy consumption promotes economic growth. ► Data from G-6 countries for both nuclear and oil consumption data are used. ► Results have important policy implications within the context of economic development.

  15. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerance method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. First, the estimation indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. GBM is then applied to estimating RVS from a single flight test of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM). The results show that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is shown to be 100% at the given confidence level.

  16. Off-critical statistical models: factorized scattering theories and bootstrap program

    International Nuclear Information System (INIS)

    Mussardo, G.

    1992-01-01

We analyze those integrable statistical systems which originate from relevant perturbations of the minimal models of conformal field theories. When only massive excitations are present, the systems can be efficiently characterized in terms of the relativistic scattering data. We review the general properties of the factorizable S-matrix in two dimensions with particular emphasis on the bootstrap principle. The classification program of the allowed spins of conserved currents and of the non-degenerate S-matrices is discussed and illustrated by means of some significant examples. The scattering theories of several massive perturbations of the minimal models are fully discussed. Among them are the Ising model, the tricritical Ising model, the Potts models, the series of the non-unitary minimal models M_{2,2n+3}, the non-unitary model M_{3,5} and the scaling limit of the polymer system. The ultraviolet limit of these massive integrable theories can be explored by the thermodynamic Bethe ansatz; in particular, the central charge of the original conformal theories can be recovered from the scattering data. We also consider the numerical method based on the so-called truncated conformal space approach, which confirms the theoretical results and allows a direct measurement of the scattering data, i.e. the masses and the S-matrix of the particles in bootstrap interaction. The problem of computing the off-critical correlation functions is discussed in terms of the form-factor approach.

  17. Analytic bounds and emergence of AdS2 physics from the conformal bootstrap

    Science.gov (United States)

    Mazáč, Dalimil

    2017-04-01

We study analytically the constraints of the conformal bootstrap on the low-lying spectrum of operators in field theories with global conformal symmetry in one and two spacetime dimensions. We introduce a new class of linear functionals acting on the conformal bootstrap equation. In 1D, we use the new basis to construct extremal functionals leading to the optimal upper bound on the gap above identity in the OPE of two identical primary operators of integer or half-integer scaling dimension. We also prove an upper bound on the twist gap in 2D theories with global conformal symmetry. When the external scaling dimensions are large, our functionals provide a direct point of contact between crossing in a 1D CFT and scattering of massive particles in large AdS2. In particular, CFT crossing can be shown to imply that appropriate OPE coefficients exhibit an exponential suppression characteristic of massive bound states, and that the 2D flat-space S-matrix should be analytic away from the real axis.

  18. Semantic Drift in Espresso-style Bootstrapping: Graph-theoretic Analysis and Evaluation in Word Sense Disambiguation

    Science.gov (United States)

    Komachi, Mamoru; Kudo, Taku; Shimbo, Masashi; Matsumoto, Yuji

Bootstrapping has a tendency, called semantic drift, to select instances unrelated to the seed instances as the iteration proceeds. We demonstrate that the semantic drift of Espresso-style bootstrapping has the same root as the topic drift of Kleinberg's HITS, using a simplified graph-based reformulation of bootstrapping. We confirm that two graph-based algorithms, the von Neumann kernels and the regularized Laplacian, can reduce the effect of semantic drift in the task of word sense disambiguation (WSD) on the Senseval-3 English Lexical Sample Task. The proposed algorithms achieve performance superior to Espresso and previous graph-based WSD methods, even though they have fewer parameters and are easy to calibrate.

  19. Application of bootstrap sampling in gamma-ray astronomy: Time variability in pulsed emission from the Crab pulsar

    International Nuclear Information System (INIS)

    Ozel, M.E.; Mayer-Hasselwander, H.

    1985-01-01

This paper discusses the bootstrap scheme, which is well suited to many astronomical applications. It is based on the well-known sampling plan called "sampling with replacement". Digital computers make the method very practical for the investigation of various trends present in a limited set of data, which is usually a small fraction of the total population. The authors apply the method and demonstrate its feasibility. The study indicates that the discrete nature of high energy gamma-ray data makes the bootstrap method especially attractive for gamma-ray astronomy. The present analysis shows that the ratio of pulse strengths is variable at the 99.8% confidence level.
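
    Sampling with replacement from event lists is straightforward to demonstrate. The sketch below forms a percentile interval for a ratio of pulse strengths from two hypothetical phase samples; the phase window, event counts and distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def pulse_strength(phases, window=(0.0, 0.1)):
    """Fraction of events whose phase falls in the main-pulse window."""
    return np.mean((phases >= window[0]) & (phases < window[1]))

# Hypothetical event phases (0..1) from two observation epochs
epoch1 = rng.uniform(size=800) ** 2     # toy: concentrated near phase 0
epoch2 = rng.uniform(size=600)          # toy: nearly uniform

ratios = []
for _ in range(5000):
    b1 = rng.choice(epoch1, size=len(epoch1), replace=True)
    b2 = rng.choice(epoch2, size=len(epoch2), replace=True)
    ratios.append(pulse_strength(b1) / pulse_strength(b2))

lo, hi = np.percentile(ratios, [0.1, 99.9])
print(f"99.8% bootstrap interval for the ratio: [{lo:.2f}, {hi:.2f}]")
```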

  1. A Phylogeny of the Monocots, as Inferred from rbcL and atpA Sequence Variation, and a Comparison of Methods for Calculating Jackknife and Bootstrap Values

    DEFF Research Database (Denmark)

    Davis, Jerrold I.; Stevenson, Dennis W.; Petersen, Gitte

    2004-01-01

    elements of Xyridaceae. A comparison was conducted of jackknife and bootstrap values, as computed using strict-consensus (SC) and frequency-within-replicates (FWR) approaches. Jackknife values tend to be higher than bootstrap values, and for each of these methods support values obtained with the FWR...

  2. Prediction of Repair Work Duration for Gas Transport Systems Based on Small Data Samples

    DEFF Research Database (Denmark)

    Lesnykh, Valery; Litvin, Yuri; Kozin, Igor

    2016-01-01

    . To address the issue of the scarcity of observed data, we suggest using a bootstrap resampling procedure. Gram-Charlier functions and order statistics are employed to approximate the distributions. It is demonstrated how to derive them for a separate repair project and a larger project consisting of a number...

  3. An Efficient Time-Varying Filter for Detrending and Bandwidth Limiting the Heart Rate Variability Tachogram without Resampling: MATLAB Open-Source Code and Internet Web-Based Implementation

    OpenAIRE

    Eleuteri, A.; Fisher, A. C.; Groves, D.; Dewhurst, C. J.

    2012-01-01

    The heart rate variability (HRV) signal derived from the ECG is a beat-to-beat record of RR intervals and is, as a time series, irregularly sampled. It is common engineering practice to resample this record, typically at 4 Hz, onto a regular time axis for analysis in advance of time domain filtering and spectral analysis based on the DFT. However, it is recognised that resampling introduces noise and frequency bias. The present work describes the implementation of a time-varying filter using ...

  4. Bootstrapping in a language of thought: a formal model of numerical concept learning.

    Science.gov (United States)

    Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D

    2012-05-01

In acquiring number words, children exhibit a qualitative leap in which they transition from understanding a few number words to possessing a rich system of interrelated numerical concepts. We present a computational framework for understanding this inductive leap as the consequence of statistical inference over a sufficiently powerful representational system. We provide an implemented model that is powerful enough to learn number word meanings and other related conceptual systems from naturalistic data. The model shows that bootstrapping can be made computationally and philosophically well-founded as a theory of number learning. Our approach demonstrates how learners may combine core cognitive operations to build sophisticated representations during the course of development, and how this process explains observed developmental patterns in number word learning.

  5. Two Bootstrap Strategies for a k-Problem up to Location-Scale with Dependent Samples

    Directory of Open Access Journals (Sweden)

    Jean-François Quessy

    2014-01-01

This paper extends the work of Quessy and Éthier (2012), who considered tests for the k-sample problem with dependent samples. Here, the marginal distributions are allowed, under H0, to differ according to their mean and their variance; in other words, one focuses on the shape of the distributions. Although easily stated, this problem nevertheless requires careful treatment for the computation of valid P values. To this end, two bootstrap strategies based on the multiplier central limit theorem are proposed, both exploiting a representation of the test statistics in terms of a Hadamard differentiable functional. This accounts for the fact that one works with empirically standardized data instead of the original observations. The simulations reported show the good sampling properties of the methods based on Cramér-von Mises and characteristic-function-type statistics. The newly introduced tests are illustrated on the marginal distributions of the eight-dimensional Oil currency data set.

  6. A Bootstrapping Based Approach for Open Geo-entity Relation Extraction

    Directory of Open Access Journals (Sweden)

    YU Li

    2016-05-01

Extracting spatial and semantic relations between two geo-entities from Web texts calls for robust and effective solutions. This paper puts forward a novel approach: firstly, the characteristics of terms (part-of-speech, position and distance) are analyzed by means of bootstrapping. Secondly, the weight of each term is calculated and the keyword is picked out as the clue to the geo-entity relation. Thirdly, the geo-entity pairs and their keywords are organized into structured information. Finally, an experiment is conducted with Baidubaike and Stanford CoreNLP. The study shows that the presented method can automatically explore part of the lexical features and find additional relational terms, requiring neither domain expert knowledge nor large-scale corpora. Moreover, compared with three classical frequency statistics methods, namely Frequency, TF-IDF and PPMI, precision and recall are improved by about 5% and 23%, respectively.

  7. A bootstrapped PMHT with feature measurements and a new way to derive its information matrix

    Science.gov (United States)

    Lu, Qin; Domrese, Katherine; Willett, Peter; Bar-Shalom, Yaakov; Pattipati, Krishna

    2017-05-01

The probabilistic multiple-hypothesis tracker (PMHT), a tracking algorithm of considerable theoretical elegance based on the expectation-maximization (EM) algorithm, is considered for the problem of multiple target tracking (MTT) with multiple sensors in clutter. Aside from position observations, continuous measurements associated with the unique and constant feature of each target are incorporated to jointly estimate the states and features of the targets for the sake of tracking and classification, leading to a bootstrapped implementation of the PMHT. In addition, we rederive the information matrix for the stacked state vector comprising the states of all targets at all time steps within the observation window. Simulations have been conducted for both closely spaced and well separated scenarios, with and without feature measurements. The normalized estimation error squared (NEES) values calculated using the information matrix for both scenarios, with and without feature measurements, lie within the 95% probability region. In other words, the estimates are consistent with the corresponding covariances.

  8. Beyond Crossing Fibers: Bootstrap Probabilistic Tractography Using Complex Subvoxel Fiber Geometries

    Science.gov (United States)

    Campbell, Jennifer S. W.; MomayyezSiahkal, Parya; Savadjiev, Peter; Leppert, Ilana R.; Siddiqi, Kaleem; Pike, G. Bruce

    2014-01-01

Diffusion magnetic resonance imaging fiber tractography is a powerful tool for investigating human white matter connectivity in vivo. However, it is prone to false positive and false negative results, making interpretation of the tractography result difficult. Optimal tractography must begin with an accurate description of the subvoxel white matter fiber structure, include quantification of the uncertainty in the fiber directions obtained, and quantify the confidence in each reconstructed fiber tract. This paper presents a novel and comprehensive pipeline for fiber tractography that meets the above requirements. The subvoxel fiber geometry is described in detail using a technique that allows not only for straight crossing fibers but also for fibers that curve and splay. This technique is repeatedly performed within a residual bootstrap statistical process in order to efficiently quantify the uncertainty in the subvoxel geometries obtained. A robust connectivity index is defined to quantify the confidence in the reconstructed connections. The tractography pipeline is demonstrated in the human brain.

  9. Hypothesis Testing of Population Percentiles via the Wald Test with Bootstrap Variance Estimates

    Science.gov (United States)

    Johnson, William D.; Romer, Jacob E.

    2016-01-01

Testing the equality of percentiles (quantiles) between populations is an effective method for robust, nonparametric comparison, especially when the distributions are asymmetric or irregularly shaped. Unlike global nonparametric tests for homogeneity such as the Kolmogorov-Smirnov test, testing the equality of a set of percentiles (i.e., a percentile profile) yields an estimate of the location and extent of the differences between the populations along the entire domain. The Wald test using bootstrap estimates of variance of the order statistics provides a unified method for hypothesis testing of functions of the population percentiles. Simulation studies are conducted to show the performance of the method under various scenarios and to give suggestions on its use. Several examples are given to illustrate some useful applications to real data.
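
    A minimal version of the procedure, testing equality of a single percentile with a bootstrap variance estimate plugged into a Wald statistic, might look as follows; the exponential samples are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(9)

def percentile_wald_test(x, y, q=50, n_boot=2000):
    """Wald test of H0: the q-th percentiles of X and Y are equal,
    with the variance of each sample percentile estimated by bootstrap."""
    def boot_var(z):
        stats = [np.percentile(rng.choice(z, size=len(z), replace=True), q)
                 for _ in range(n_boot)]
        return np.var(stats, ddof=1)

    diff = np.percentile(x, q) - np.percentile(y, q)
    w = diff ** 2 / (boot_var(x) + boot_var(y))  # ~ chi-square(1) under H0
    return w, chi2.sf(w, df=1)

# Hypothetical skewed samples where a median comparison is natural
x = rng.exponential(scale=1.0, size=200)
y = rng.exponential(scale=1.3, size=200)
w, p = percentile_wald_test(x, y, q=50)
print(f"W = {w:.2f}, p = {p:.4f}")
```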

  10. Bootstrapping hypercubic and hypertetrahedral theories in three dimensions arXiv

    CERN Document Server

    Stergiou, Andreas

There are three generalizations of the Platonic solids that exist in all dimensions, namely the hypertetrahedron, the hypercube, and the hyperoctahedron, with the latter two being dual. Conformal field theories with the associated symmetry groups as global symmetries can be argued to exist in $d=3$ spacetime dimensions if the $\varepsilon=4-d$ expansion is valid when $\varepsilon\to1$. In this paper hypercubic and hypertetrahedral theories are studied with the non-perturbative numerical conformal bootstrap. In the $N=3$ cubic case it is found that a bound with a kink is saturated by a solution with properties that cannot be reconciled with the $\varepsilon$ expansion of the cubic theory. Possible implications for cubic magnets and structural phase transitions are discussed. For the hypertetrahedral theory evidence is found that the non-conformal window that is seen with the $\varepsilon$ expansion exists in $d=3$ as well, and a rough estimate of its extent is given.

  11. Solid oxide fuel cell power plant having a bootstrap start-up system

    Energy Technology Data Exchange (ETDEWEB)

    Lines, Michael T

    2016-10-04

    The bootstrap start-up system (42) achieves an efficient start-up of the power plant (10) that minimizes formation of soot within a reformed hydrogen rich fuel. A burner (48) receives un-reformed fuel directly from the fuel supply (30) and combusts the fuel to heat cathode air which then heats an electrolyte (24) within the fuel cell (12). A dilute hydrogen forming gas (68) cycles through a sealed heat-cycling loop (66) to transfer heat and generated steam from an anode side (32) of the electrolyte (24) through fuel processing system (36) components (38, 40) and back to an anode flow field (26) until fuel processing system components (38, 40) achieve predetermined optimal temperatures and steam content. Then, the heat-cycling loop (66) is unsealed and the un-reformed fuel is admitted into the fuel processing system (36) and anode flow (26) field to commence ordinary operation of the power plant (10).

  12. A bootstrapped, low-noise, and high-gain photodetector for shot noise measurement

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Haijun; Yang, Wenhai; Li, Zhixiu; Li, Xuefeng; Zheng, Yaohui, E-mail: yhzheng@sxu.edu.cn [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Opto-Electronics, Shanxi University, Taiyuan 030006 (China)

    2014-01-15

We present a low-noise, high-gain photodetector based on the bootstrap structure and an L-C (inductance and capacitance) combination. The electronic characteristics of the photodetector, including electronic noise, gain, frequency response and dynamic range, were verified with a single-frequency Nd:YVO{sub 4} laser at 1064 nm with coherent output. The measured shot noise of a 50 μW laser beam was 13 dB above the electronic noise at an analysis frequency of 2 MHz, and 10 dB at 3 MHz. A maximum clearance of 28 dB at 2 MHz was achieved with 1.52 mW of incident laser power. In addition, the photodetector showed excellent linearity for both DC and AC amplification over the laser power range between 12.5 μW and 1.52 mW.

  13. Representative Day Selection Using Statistical Bootstrapping for Accelerating Annual Distribution Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bugbee, Bruce [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gotseff, Peter [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-03

    Capturing technical and economic impacts of solar photovoltaics (PV) and other distributed energy resources (DERs) on electric distribution systems can require high-time resolution (e.g. 1 minute), long-duration (e.g. 1 year) simulations. However, such simulations can be computationally prohibitive, particularly when including complex control schemes in quasi-steady-state time series (QSTS) simulation. Various approaches have been used in the literature to down select representative time segments (e.g. days), but typically these are best suited for lower time resolutions or consider only a single data stream (e.g. PV production) for selection. We present a statistical approach that combines stratified sampling and bootstrapping to select representative days while also providing a simple method to reassemble annual results. We describe the approach in the context of a recent study with a utility partner. This approach enables much faster QSTS analysis by simulating only a subset of days, while maintaining accurate annual estimates.
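
    A toy version of the stratify-then-bootstrap idea is sketched below: days are binned into strata on a single selection variable, a few representative days are drawn per stratum, and annual totals are reassembled with stratum weights. The daily PV series, stratum counts and weighting are assumptions, not the NREL implementation.

```python
import numpy as np

rng = np.random.default_rng(13)

# Hypothetical daily PV energy (kWh) for one year; strata are formed
# by binning days on this single selection variable.
daily_pv = np.clip(rng.normal(30, 12, size=365), 0, None)

n_strata, n_per_stratum = 4, 3
edges = np.quantile(daily_pv, np.linspace(0, 1, n_strata + 1))
strata = np.clip(np.digitize(daily_pv, edges[1:-1]), 0, n_strata - 1)

# Stratified sample of representative days to simulate in detail
rep_days = np.concatenate([
    rng.choice(np.flatnonzero(strata == s), size=n_per_stratum, replace=False)
    for s in range(n_strata)])

# Bootstrap reassembly of an annual estimate: each stratum contributes
# as many resampled days as it actually contains.
estimates = []
for _ in range(1000):
    total = 0.0
    for s in range(n_strata):
        days = rep_days[strata[rep_days] == s]
        draw = rng.choice(daily_pv[days], size=(strata == s).sum(), replace=True)
        total += draw.sum()
    estimates.append(total)
print(f"annual PV estimate: {np.mean(estimates):.0f} +/- {np.std(estimates):.0f} kWh")
```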

  14. Application of Robust Regression and Bootstrap in Productivity Analysis of GERD Variable in EU27

    Directory of Open Access Journals (Sweden)

    Dagmar Blatná

    2014-06-01

The GERD is one of the Europe 2020 headline indicators tracked within the Europe 2020 strategy. The headline indicator is the 3% target for the GERD to be reached within the EU by 2020. Eurostat defines "GERD" as total gross domestic expenditure on research and experimental development as a percentage of GDP. GERD depends on numerous factors of the general economic background, namely employment, innovation and research, and science and technology. The values of these indicators vary among the European countries, and consequently the occurrence of outliers can be anticipated in the corresponding analyses. In such a case, a classical statistical approach, the least squares method, can be highly unreliable, with robust regression methods representing an acceptable and useful tool. The aim of the present paper is to demonstrate the advantages of robust regression and the applicability of the bootstrap approach in regression based on both classical and robust methods.

  15. Fitting statistical distributions the generalized lambda distribution and generalized bootstrap methods

    CERN Document Server

    Karian, Zaven A

    2000-01-01

Throughout the physical and social sciences, researchers face the challenge of fitting statistical distributions to their data. Although the study of statistical modelling has made great strides in recent years, the number and variety of distributions to choose from, all with their own formulas, tables, diagrams, and general properties, continue to create problems. For a specific application, which of the dozens of distributions should one use? What if none of them fit well? Fitting Statistical Distributions helps answer those questions. Focusing on techniques used successfully across many fields, the authors present all of the relevant results related to the Generalized Lambda Distribution (GLD), the Generalized Bootstrap (GB), and Monte Carlo simulation (MC). They provide the tables, algorithms, and computer programs needed for fitting continuous probability distributions to data in a wide variety of circumstances, covering bivariate as well as univariate distributions, and including situations where moments do...

  16. A weighted bootstrap method for the determination of probability density functions of freshwater distribution coefficients (Kds) of Co, Cs, Sr and I radioisotopes.

    Science.gov (United States)

    Durrieu, G; Ciffroy, P; Garnier, J-M

    2006-11-01

The objective of the study was to provide global probability density functions (PDFs) representing the uncertainty of distribution coefficients (Kds) in freshwater for radioisotopes of Co, Cs, Sr and I. A comprehensive database containing Kd values referenced in 61 articles was first built, and quality scores were assigned to each data point according to various criteria (e.g. presentation of data, contact times, pH, solid-to-liquid ratio, expert judgement). A weighted bootstrapping procedure was then set up in order to build PDFs, in such a way that more importance is given to the most relevant data points (i.e. those corresponding to typical natural environments). However, it was also assessed that the relevance and the robustness of the PDFs determined by our procedure depended on the number of Kd values in the database. Owing to the large database, conditional PDFs were also proposed for site studies where some parametric information is known (e.g. pH, contact time between radionuclides and particles, solid-to-liquid ratio). Such conditional PDFs reduce the uncertainty in the Kd values. These global and conditional PDFs are useful for end-users of dose models because the uncertainty and sensitivity of Kd values are taken into account.
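
    Weighted bootstrapping of this kind reduces to resampling with probabilities proportional to the quality scores. The sketch below uses invented Kd values and scores; the authors' scoring criteria and conditional PDFs are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(17)

# Hypothetical Kd values (L/kg) and data-quality scores in (0, 1]
kd = np.array([120., 310., 95., 880., 240., 60., 410., 150.])
quality = np.array([0.9, 0.7, 0.4, 0.2, 0.8, 0.5, 0.6, 0.9])

# Weighted bootstrap: resampling probabilities proportional to quality
weights = quality / quality.sum()
samples = rng.choice(kd, size=100_000, replace=True, p=weights)

geo_mean = 10 ** np.log10(samples).mean()  # Kd spans orders of magnitude
lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"weighted geometric mean Kd: {geo_mean:.0f} L/kg, "
      f"95% interval [{lo:.0f}, {hi:.0f}]")
```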

  17. The Bootstrap Discovery Behaviour (BDB): a new outlook on usability evaluation.

    Science.gov (United States)

    Borsci, Simone; Londei, Alessandro; Federici, Stefano

    2011-02-01

The value of λ is one of the main issues debated in international usability studies. The debate is centred on the deficiencies of the mathematical return-on-investment model (ROI model) of Nielsen and Landauer (1993). The ROI model is discussed in order to identify the basis of another model that, while respecting that of Nielsen and Landauer, tries to consider a larger number of variables for estimating the number of evaluators needed for an interface. Using the bootstrap model (Efron 1979), we can take into account: (a) the interface properties, i.e., the properties at the zero condition of evaluation, and (b) the probability that the population discovery behaviour is represented by all the possible discovery behaviours of a sample. Our alternative model, named Bootstrap Discovery Behaviour (BDB), provides an alternative estimate of the number of experts and users needed for a usability evaluation. Two experimental groups of users and experts were involved in the evaluation of a website (http://www.serviziocivile.it). Applying the BDB model to the problems identified by the two groups, we found that 13 experts and 20 users are needed to identify 80% of usability problems, instead of the 6 experts and 7 users required according to the estimate of the discovery likelihood provided by the ROI model. The consequence of the difference between the results of these models is that following the BDB increases the cost of usability evaluation, although this is justified considering that the results obtained have a better probability of representing the entire population of experts and users.

  18. Smoothed Bootstrap Aggregation for Assessing Selection Pressure at Amino Acid Sites.

    Science.gov (United States)

    Mingrone, Joseph; Susko, Edward; Bielawski, Joseph

    2016-11-01

To detect positive selection at individual amino acid sites, most methods use an empirical Bayes approach. After parameters of a Markov process of codon evolution are estimated via maximum likelihood, they are passed to Bayes' formula to compute the posterior probability that a site evolved under positive selection. A difficulty with this approach is that parameter estimates with large errors can negatively impact Bayesian classification. By assigning priors to some parameters, Bayes Empirical Bayes (BEB) mitigates this problem. However, as implemented, it imposes uniform priors, which causes it to be overly conservative in some cases. When standard regularity conditions are not met and parameter estimates are unstable, inference, even under BEB, can be negatively impacted. We present an alternative to BEB called smoothed bootstrap aggregation (SBA), which bootstraps site patterns from an alignment of protein-coding DNA sequences to accommodate the uncertainty in the parameter estimates. We show that deriving the correction for parameter uncertainty from the data in hand, in combination with kernel smoothing techniques, improves site-specific inference of positive selection. We compare BEB to SBA by simulation and real data analysis. Simulation results show that SBA balances accuracy and power at least as well as BEB, and when parameter estimates are unstable, the performance gap between BEB and SBA can widen in favor of SBA. SBA is applicable to a wide variety of other inference problems in molecular evolution.
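
    The smoothing component of such an approach can be illustrated with a generic smoothed bootstrap, which perturbs each resampled value with Gaussian kernel noise (Silverman's rule-of-thumb bandwidth). This is only the smoothing ingredient; SBA itself bootstraps alignment site patterns and refits the codon model, which is not attempted here, and the parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(19)

def smoothed_bootstrap(sample, n_boot, bandwidth=None):
    """Smoothed bootstrap: resample with replacement, then perturb each
    draw with Gaussian kernel noise (Silverman's rule-of-thumb bandwidth)."""
    n = len(sample)
    if bandwidth is None:
        bandwidth = 1.06 * np.std(sample, ddof=1) * n ** (-1 / 5)
    idx = rng.integers(0, n, size=(n_boot, n))
    return sample[idx] + rng.normal(0, bandwidth, size=(n_boot, n))

# Hypothetical estimates of a selection parameter (e.g. omega = dN/dS)
omega_hat = rng.lognormal(mean=0.0, sigma=0.3, size=50)
reps = smoothed_bootstrap(omega_hat, n_boot=1000)
print(f"fraction of bootstrap means > 1: {(reps.mean(axis=1) > 1).mean():.3f}")
```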

  19. Bootstrapping mixed correlators in the five dimensional critical O(N) models

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhijin; Su, Ning [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy,Texas A& M University, College Station, TX 77843 (United States)

    2017-04-18

We use the conformal bootstrap approach to explore 5D CFTs with O(N) global symmetry, which contain N scalars ϕ{sub i} transforming as an O(N) vector. Specifically, we study multiple four-point correlators of the leading O(N) vector ϕ{sub i} and the O(N) singlet σ. The crossing symmetry of the four-point functions and the unitarity condition provide nontrivial constraints on the scaling dimensions (Δ{sub ϕ}, Δ{sub σ}) of ϕ{sub i} and σ. With reasonable assumptions on the gaps between the scaling dimensions of ϕ{sub i} (σ) and the next O(N) vector ϕ{sub i}′ (singlet σ′) scalar, we are able to isolate the scaling dimensions (Δ{sub ϕ}, Δ{sub σ}) in small islands. In particular, for large N=500, the isolated region is highly consistent with the result obtained from the large-N expansion. We also study the interacting O(N) CFTs for 1≤N≤100. Isolated regions in the (Δ{sub ϕ}, Δ{sub σ}) plane are obtained using the conformal bootstrap program with a lower order of derivatives Λ; however, they disappear after increasing Λ. For N=100, no solution can be found with Λ=25 under the assumptions on the scaling dimensions of the next O(N) vector, Δ{sub ϕ′}≥5.0, and singlet, Δ{sub σ′}≥3.3. These islands are expected to correspond to interacting but nonunitary O(N) CFTs. Our results suggest a lower bound on the critical value N{sub c}>100, below which the interacting O(N) CFTs become nonunitary.

  20. BOBA FRET: bootstrap-based analysis of single-molecule FRET data.

    Directory of Open Access Journals (Sweden)

    Sebastian L B König

Time-binned single-molecule Förster resonance energy transfer (smFRET) experiments with surface-tethered nucleic acids or proteins make it possible to follow folding and catalysis of single molecules in real time. Due to the intrinsically low signal-to-noise ratio (SNR) in smFRET time traces, research over the past years has focused on the development of new methods to extract discrete states (conformations) from noisy data. However, limited observation time typically leads to pronounced cross-sample variability, i.e., single molecules display differences in the relative population of states and the corresponding conversion rates. Quantification of cross-sample variability is necessary to perform statistical testing in order to assess whether changes observed in response to an experimental parameter (metal ion concentration, the presence of a ligand, etc.) are significant. However, such hypothesis testing has been disregarded to date, precluding robust biological interpretation. Here, we address this problem with a bootstrap-based approach to estimate the experimental variability. Simulated time traces are presented to assess the robustness of the algorithm in conjunction with approaches commonly used in thermodynamic and kinetic analysis of time-binned smFRET data. Furthermore, a pair of functionally important sequences derived from the self-cleaving group II intron Sc.ai5γ (d3'EBS1/IBS1) is used as a model system. Through statistical hypothesis testing, divalent metal ions are shown to have a statistically significant effect on both thermodynamic and kinetic aspects of their interaction. The Matlab source code used for the analysis (bootstrap-based analysis of smFRET data, BOBA FRET), as well as a graphical user interface, is available via http://www.aci.uzh.ch/rna/.

  1. Detecting transitions in protein dynamics using a recurrence quantification analysis based bootstrap method.

    Science.gov (United States)

    Karain, Wael I

    2017-11-28

Proteins undergo conformational transitions over different time scales. These transitions are closely intertwined with the protein's function. Numerous standard techniques, such as principal component analysis, are used to detect these transitions in molecular dynamics simulations. In this work, we add a new method that detects transitions in dynamics based on the recurrences in the dynamical system. It combines bootstrapping and recurrence quantification analysis. We start from the assumption that a protein has a "baseline" recurrence structure over a given period of time. Any statistically significant deviation from this recurrence structure, as inferred from complexity measures provided by recurrence quantification analysis, is considered a transition in the dynamics of the protein. We apply this technique to a 132 ns long molecular dynamics simulation of the β-Lactamase Inhibitory Protein BLIP. We are able to detect conformational transitions in the nanosecond range in the recurrence dynamics of the BLIP protein during the simulation. The results compare favorably to those extracted using the principal component analysis technique. The recurrence quantification analysis based bootstrap technique is able to detect transitions between different dynamic states of a protein over different time scales. It is not limited to linear dynamics regimes, and can be generalized to any time scale. It also has the potential to be used to cluster frames in molecular dynamics trajectories according to the nature of their recurrence dynamics. One shortcoming of this method is the need for large enough time windows to ensure good statistical quality of the recurrence complexity measures needed to detect the transitions.

  2. Efficiency and productivity measurement of rural township hospitals in China: a bootstrapping data envelopment analysis

    Science.gov (United States)

    Cheng, Zhaohui; Cai, Miao; Tao, Hongbing; He, Zhifei; Lin, Xiaojun; Lin, Haifeng; Zuo, Yuling

    2016-01-01

Objective: Township hospitals (THs) are important components of the three-tier rural healthcare system of China. However, the efficiency and productivity of THs have been questioned since the healthcare reform was implemented in 2009. The objective of this study is to analyse the efficiency and productivity changes in THs before and after the reform process. Setting and participants: A total of 48 sample THs were selected from the Xiaogan Prefecture in Hubei Province from 2008 to 2014. Outcome measures: First, bootstrapping data envelopment analysis (DEA) was performed to estimate the technical efficiency (TE), pure technical efficiency (PTE) and scale efficiency (SE) of the sample THs during the period. Second, the bootstrapping Malmquist productivity index was used to calculate the productivity changes over time. Results: The average TE, PTE and SE of the sample THs over the 7-year period were 0.5147, 0.6373 and 0.7080, respectively. The average TE and PTE increased from 2008 to 2012 but declined considerably after 2012. In general, the sample THs experienced a negative shift in productivity from 2008 to 2014. The negative change was 2.14%, which was attributed to a 23.89% decrease in technological change (TC). The sample THs experienced a positive productivity shift from 2008 to 2012 but deterioration from 2012 to 2014. Conclusions: There was considerable room for TE improvement in the sample THs, since the average TE was relatively low. From 2008 to 2014, the sample THs experienced a decrease in productivity, and the adverse change in TC should be emphasised. In the context of healthcare reform, the factors that influence the TE and productivity of THs are complex. The results suggest that numerous quantitative and qualitative studies are necessary to explore the reasons for the changes in TE and productivity.

  3. Quantization Procedures

    International Nuclear Information System (INIS)

    Cabrera, J. A.; Martin, R.

    1976-01-01

We present in this work a review of the conventional quantization procedure, the one proposed by I.E. Segal, and a new quantization procedure similar to the latter for use in nonlinear problems. We apply these quantization procedures to different potentials and obtain the appropriate equations of motion. It is shown that for the linear case the three procedures presented are equivalent, but for the nonlinear cases we obtain different equations of motion and different energy spectra. (Author) 16 refs

  4. Reliability of confidence intervals calculated by bootstrap and classical methods using the FIA 1-ha plot design

    Science.gov (United States)

    H. T. Schreuder; M. S. Williams

    2000-01-01

In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots, respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with classical t confidence intervals, for mapped populations and subdomains within those populations. A 68.1 ha mapped...

  5. Using a Nonparametric Bootstrap to Obtain a Confidence Interval for Pearson's "r" with Cluster Randomized Data: A Case Study

    Science.gov (United States)

    Wagstaff, David A.; Elek, Elvira; Kulis, Stephen; Marsiglia, Flavio

    2009-01-01

    A nonparametric bootstrap was used to obtain an interval estimate of Pearson's "r," and test the null hypothesis that there was no association between 5th grade students' positive substance use expectancies and their intentions to not use substances. The students were participating in a substance use prevention program in which the unit of…
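
    The key point of the case study is that, with cluster randomized data, whole clusters rather than individual students must be resampled so that each bootstrap replicate preserves the within-cluster dependence. A minimal sketch with synthetic data (cluster sizes, counts and variable names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical clustered data: per classroom, paired (expectancy, intention) scores
clusters = {c: (rng.normal(size=25), rng.normal(size=25)) for c in range(30)}

def pearson_r(pairs):
    x = np.concatenate([p[0] for p in pairs])
    y = np.concatenate([p[1] for p in pairs])
    return np.corrcoef(x, y)[0, 1]

ids = list(clusters)
r_hat = pearson_r([clusters[c] for c in ids])

# cluster bootstrap: resample whole classrooms, not students, so the
# within-cluster dependence is preserved in every replicate
B = 5000
boot_r = np.array([
    pearson_r([clusters[c] for c in rng.choice(ids, len(ids), replace=True)])
    for _ in range(B)
])
lo, hi = np.percentile(boot_r, [2.5, 97.5])
print(f"r = {r_hat:.3f}, 95% cluster-bootstrap CI = ({lo:.3f}, {hi:.3f})")
```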

  6. Coverage probability of bootstrap confidence intervals in heavy-tailed frequency models, with application to precipitation data

    Czech Academy of Sciences Publication Activity Database

    Kyselý, Jan

    2010-01-01

    Vol. 101, No. 3-4 (2010), pp. 345-361. ISSN 0177-798X. R&D Projects: GA AV ČR KJB300420801. Institutional research plan: CEZ:AV0Z30420517. Keywords: bootstrap * extreme value analysis * confidence intervals * heavy-tailed distributions * precipitation amounts. Subject RIV: DG - Atmosphere Sciences, Meteorology. Impact factor: 1.684, year: 2010

  7. Measuring Efficiency of Tunisian Schools in the Presence of Quasi-Fixed Inputs: A Bootstrap Data Envelopment Analysis Approach

    Science.gov (United States)

    Essid, Hedi; Ouellette, Pierre; Vigeant, Stephane

    2010-01-01

    The objective of this paper is to measure the efficiency of high schools in Tunisia. We use a statistical data envelopment analysis (DEA)-bootstrap approach with quasi-fixed inputs to estimate the precision of our measure. To do so, we developed a statistical model serving as the foundation of the data generation process (DGP). The DGP is…

  8. Prediction bands and intervals for the scapulo-humeral coordination based on the Bootstrap and two Gaussian methods.

    Science.gov (United States)

    Cutti, A G; Parel, I; Raggi, M; Petracci, E; Pellegrini, A; Accardo, A P; Sacchetti, R; Porcellini, G

    2014-03-21

    Quantitative motion analysis protocols have been developed to assess the coordination between scapula and humerus. However, the application of these protocols to test whether a subject's scapula resting position or pattern of coordination is "normal" is precluded by the unavailability of reference prediction intervals and bands, respectively. The aim of this study was to present such references for the "ISEO" protocol, using the non-parametric Bootstrap approach and two parametric Gaussian methods (based on Student's T and Normal distributions). One hundred and eleven asymptomatic subjects were divided into three groups based on their age (18-30, 31-50, and 51-70). For each group, "monolateral" prediction bands and intervals were computed for the scapulo-humeral patterns and the scapula resting orientation, respectively. A fourth group included the 36 subjects (42 ± 13 years old) for whom the scapulo-humeral coordination was measured bilaterally, and "differential" prediction bands and intervals were computed, which describe right-to-left side differences. Bootstrap and Gaussian methods were compared using cross-validation analyses, evaluating the coverage probability against a 90% target. Results showed a mean coverage of 86-90% for Bootstrap, compared to 67-70% for parametric bands and 87-88% for parametric intervals. Bootstrap prediction bands showed a distinctive change in amplitude and mean pattern related to age, with an increase toward scapula retraction, lateral rotation and posterior tilt. In conclusion, Bootstrap ensures optimal coverage and should be preferred over parametric methods. Moreover, the stratification of "monolateral" prediction bands and intervals by age appears relevant for the correct classification of patients.
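
    A simplified, single-angle illustration of the two families of methods compared: Gaussian prediction intervals (Normal and Student's T based) versus a nonparametric bootstrap interval built from resampled empirical percentiles. The numbers are invented, and the paper's actual references are prediction bands, i.e. angle-by-angle versions of the same construction:

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(4)
# hypothetical resting scapular rotation angles (degrees) for one age group
angles = rng.normal(loc=30.0, scale=5.0, size=40)
n = len(angles)
m, s = angles.mean(), angles.std(ddof=1)

# Gaussian prediction intervals for a new subject (90% coverage target)
pi_norm = (m - norm.ppf(0.95) * s, m + norm.ppf(0.95) * s)
half = t.ppf(0.95, n - 1) * s * np.sqrt(1 + 1 / n)   # Student-T, accounts for estimation error
pi_t = (m - half, m + half)

# nonparametric bootstrap prediction interval: resample subjects and
# average the empirical 5th/95th percentiles across replicates
B = 5000
q = np.array([
    np.percentile(rng.choice(angles, n, replace=True), [5, 95]) for _ in range(B)
])
pi_boot = tuple(q.mean(axis=0))
```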

  9. Re-sampling of carbon stocks in forest soils and afforestation areas after 18 years – results from the 7x7 km Kvadratnet in Denmark

    DEFF Research Database (Denmark)

    Callesen, Ingeborg; Vesterdal, Lars; Stupak, Inge

    Forest soil plots (N=112) of size 50x50 m were sampled in 1989-90 (C1) and re-sampled in 2007-9 (C2) by soil auger, producing composite samples from the depths 0-25, 25-50, 50-75 and 75-100 cm. The soils were classified according to the carbon concentration in the uppermost mineral soil horizon (0-25 cm) at C1. Soils with less than 1.8% carbon gained carbon during the 18-year period, while initially very carbon-rich (4<C%<12) mineral soils and organic soils (C%>12) lost carbon. We hypothesize that the carbon losses reflect a very slow process of adaptation to the current more aerobic...

  10. Analysis of recurrent gap time data using the weighted risk-set method and the modified within-cluster resampling method.

    Science.gov (United States)

    Luo, Xianghua; Huang, Chiung-Yu

    2011-02-20

    The gap times between recurrent events are often of primary interest in medical and epidemiology studies. The observed gap times cannot be naively treated as clustered survival data in analysis because of the sequential structure of recurrent events. This paper introduces two important building blocks, the averaged counting process and the averaged at-risk process, for the development of the weighted risk-set (WRS) estimation methods. We demonstrate that with the use of these two empirical processes, existing risk-set based methods for univariate survival time data can be easily extended to analyze recurrent gap times. Additionally, we propose a modified within-cluster resampling (MWCR) method that can be easily implemented in standard software. We show that the MWCR estimators are asymptotically equivalent to the WRS estimators. An analysis of hospitalization data from the Danish Psychiatric Central Register is presented to illustrate the proposed methods.
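
    The within-cluster resampling idea can be illustrated with a short sketch: repeatedly draw one gap time per subject (so each resampled data set is free of within-subject dependence), apply a standard univariate estimator, and average the results. The data generation, the Kaplan-Meier median as the estimator, and the censoring scheme are all assumptions made for the example, not the estimators studied in the paper:

```python
import numpy as np

def km_median(times, events):
    """Kaplan-Meier median survival time from (time, event) arrays."""
    order = np.argsort(times)
    t, e = times[order], events[order]
    at_risk, s, surv = len(t), 1.0, []
    for ei in e:
        if ei:
            s *= 1 - 1 / at_risk     # event: step the survival curve down
        surv.append(s)
        at_risk -= 1                 # censored or not, one fewer at risk
    below = np.where(np.array(surv) <= 0.5)[0]
    return t[below[0]] if len(below) else np.inf

rng = np.random.default_rng(5)
# hypothetical recurrent-event data: each subject has 1-4 complete gaps
# plus a final censored gap (event indicator 0)
subjects = []
for _ in range(200):
    k = rng.integers(2, 6)
    subjects.append((rng.exponential(12.0, size=k), np.r_[np.ones(k - 1), 0]))

# within-cluster resampling: sample ONE gap per subject, estimate, average
M = 500
est = np.empty(M)
for m in range(M):
    pick = [rng.integers(0, len(g)) for g, _ in subjects]
    tt = np.array([g[i] for (g, _), i in zip(subjects, pick)])
    ee = np.array([e[i] for (_, e), i in zip(subjects, pick)])
    est[m] = km_median(tt, ee)
print("MWCR-style estimate of median gap time:", est.mean())
```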

  11. Multinomial Logistic Regression & Bootstrapping for Bayesian Estimation of Vertical Facies Prediction in Heterogeneous Sandstone Reservoirs

    Science.gov (United States)

    Al-Mudhafar, W. J.

    2013-12-01

    Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate the properties in non-cored intervals. It also helps to accurately identify the spatial facies distribution, in order to build an accurate reservoir model for optimal future reservoir performance. In this paper, facies estimation has been done through multinomial logistic regression (MLR) with respect to the well logs and core data in a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. First, the Robust Sequential Imputation Algorithm was used to impute the missing data. This algorithm starts from a complete subset of the dataset and estimates sequentially the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix. Then, the observation is added to the complete data matrix and the algorithm continues with the next observation with missing values. The MLR was chosen to estimate the maximum likelihood and minimize the standard error for the nonlinear relationships between facies and the core and log data. The MLR predicts the probabilities of the different possible facies given each independent variable by constructing a linear predictor function with a set of weights that are linearly combined with the independent variables using a dot product. A Beta distribution of facies was considered as prior knowledge, and the resulting predicted probability (posterior) was estimated from the MLR based on Bayes' theorem, which relates the predicted probability (posterior) to the conditional probability and the prior knowledge. To assess the statistical accuracy of the model, the bootstrap should be carried out to estimate extra-sample prediction error by randomly…
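
    A compressed sketch of the modelling loop described above, using scikit-learn's multinomial logistic regression on synthetic stand-ins for the log and core variables, with a bootstrap estimate of extra-sample prediction error (here combined into the standard .632 estimator; the imputation and Bayesian prior steps are omitted):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 300
# hypothetical standardized predictors: GR, RHOB, Sw, Vsh, porosity
X = rng.normal(size=(n, 5))
facies = rng.integers(0, 3, size=n)          # hypothetical 3 facies classes

model = LogisticRegression(max_iter=1000)    # softmax (multinomial) for >2 classes
model.fit(X, facies)
apparent_err = (model.predict(X) != facies).mean()

# bootstrap estimate of extra-sample prediction error: train on a bootstrap
# sample, test on the observations left out of that sample
B = 200
oob_errs = []
for _ in range(B):
    idx = rng.integers(0, n, n)
    oob = np.setdiff1d(np.arange(n), idx)
    if len(oob) == 0:
        continue
    m = LogisticRegression(max_iter=1000).fit(X[idx], facies[idx])
    oob_errs.append((m.predict(X[oob]) != facies[oob]).mean())
err_boot = np.mean(oob_errs)
# the .632 estimator blends the two to offset their opposite biases
err_632 = 0.368 * apparent_err + 0.632 * err_boot
```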

  12. Civil Procedure.

    Science.gov (United States)

    Byer, Robert

    1997-01-01

    Briefly reviews the historical development of civil procedure (the rules that dictate how a civil case can proceed through the courts) and identifies some of its main components. Discusses procedures such as subject matter jurisdiction, personal jurisdiction, venue, discovery, motions practice, pleadings, pretrial conference, and trials. (MJP)

  13. The determinants of technical efficiency of a large scale HIV prevention project: application of the DEA double bootstrap using panel data from the Indian Avahan.

    Science.gov (United States)

    Lépine, Aurélia; Vassall, Anna; Chandrashekar, Sudhashree

    2015-01-01

    In 2004, Avahan, the largest HIV prevention project conducted globally, was implemented in India. Avahan was implemented by NGOs supported by state lead partners in order to provide HIV prevention services to high-risk population groups. In 2007, most of the NGOs reached full coverage. Using a panel data set of the NGOs that implemented Avahan, we investigate the level of technical efficiency as well as the drivers of technical inefficiency by using the double bootstrap procedure developed by Simar & Wilson (2007). Unlike the traditional two-stage method, this method allows valid inference in the presence of measurement error and serial correlation. We find that over the 4 years, Avahan NGOs could have reduced the level of inputs by 43% given the level of outputs reached. We find that the efficiency of the project has increased over time. Results indicate that the main drivers of inefficiency come from the characteristics of the state lead partner, the NGOs and the catchment area. These organisational factors are important to explicitly consider and assess when designing and implementing HIV prevention programmes and in setting benchmarks in order to optimise the use and allocation of resources. JEL classification: C14, I1.

  14. A Bootstrapping Model of Frequency and Context Effects in Word Learning.

    Science.gov (United States)

    Kachergis, George; Yu, Chen; Shiffrin, Richard M

    2017-04-01

    Prior research has shown that people can learn many nouns (i.e., word-object mappings) from a short series of ambiguous situations containing multiple words and objects. For successful cross-situational learning, people must approximately track which words and referents co-occur most frequently. This study investigates the effects of allowing some word-referent pairs to appear more frequently than others, as is true in real-world learning environments. Surprisingly, high-frequency pairs are not always learned better, but can also boost learning of other pairs. Using a recent associative model (Kachergis, Yu, & Shiffrin, 2012), we explain how mixing pairs of different frequencies can bootstrap late learning of the low-frequency pairs based on early learning of higher frequency pairs. We also manipulate contextual diversity, the number of pairs a given pair appears with across training, since it is naturalistically confounded with frequency. The associative model has competing familiarity and uncertainty biases, and their interaction is able to capture the individual and combined effects of frequency and contextual diversity on human learning. Two other recent word-learning models do not account for the behavioral findings.

  15. Causal nexus between energy consumption and carbon dioxide emission for Malaysia using maximum entropy bootstrap approach.

    Science.gov (United States)

    Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid

    2015-12-01

    This study investigates the relationship between energy consumption and carbon dioxide emission in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employed the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emission in bivariate as well as multivariate frameworks for Malaysia, over the period 1975-2013. This is a unified approach that does not require conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. Thus, it provides more reliable and robust inferences which are insensitive to the time span as well as the lag length used. The empirical results show that there is a unidirectional causality running from energy consumption to carbon emission both in the bivariate model and in the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence energy is a stimulus to carbon emissions.
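
    A simplified sketch of how a maximum entropy bootstrap replicate can be generated: sort the series, build a piecewise-linear maximum-entropy quantile function over intervals around the order statistics, sample from it, and restore the original time ordering by rank. This follows the spirit of Vinod's meboot algorithm but omits its mean-preserving adjustments and exact tail treatment, so treat it as illustrative only:

```python
import numpy as np

def meboot_replicate(x, rng):
    """One (simplified) maximum entropy bootstrap replicate of series x."""
    n = len(x)
    order = np.argsort(x)
    xs = x[order]                              # order statistics
    z = (xs[:-1] + xs[1:]) / 2                 # intermediate points between them
    # interval limits, extending the tails by the mean absolute spacing
    spread = np.abs(np.diff(xs)).mean()
    limits = np.r_[xs[0] - spread, z, xs[-1] + spread]
    # draw n uniforms and map them through the piecewise-linear quantile function
    u = np.sort(rng.uniform(size=n))
    grid = np.linspace(0, 1, n + 1)
    y = np.interp(u, grid, limits)
    # restore the original time ordering via the ranks of x
    out = np.empty(n)
    out[order] = y
    return out

rng = np.random.default_rng(7)
energy = np.cumsum(rng.normal(0.5, 1.0, 39))   # hypothetical non-stationary series, 1975-2013
replicates = np.array([meboot_replicate(energy, rng) for _ in range(999)])
# each replicate preserves the shape and time profile of the original series,
# so causality statistics can be recomputed on them to build a null distribution
```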

  16. Using i2b2 to Bootstrap Rural Health Analytics and Learning Networks.

    Science.gov (United States)

    Harris, Daniel R; Baus, Adam D; Harper, Tamela J; Jarrett, Traci D; Pollard, Cecil R; Talbert, Jeffery C

    2016-08-01

    We demonstrate that the open-source i2b2 (Informatics for Integrating Biology and the Bedside) data model can be used to bootstrap rural health analytics and learning networks. These networks promote communication and research initiatives by providing the infrastructure necessary for sharing data and insights across a group of healthcare and research partners. Data integration remains a crucial challenge in connecting rural healthcare sites with a common data sharing and learning network due to the lack of interoperability and standards within electronic health records. The i2b2 data model acts as a point of convergence for disparate data from multiple healthcare sites. A consistent and natural data model for healthcare data is essential for overcoming integration issues, but challenges such as those caused by weak data standardization must still be addressed. We describe our experience in the context of building the West Virginia/Kentucky Health Analytics and Learning Network, a collaborative, multi-state effort connecting rural healthcare sites.

  17. Oscillometric blood pressure estimation by combining nonparametric bootstrap with Gaussian mixture model.

    Science.gov (United States)

    Lee, Soojeong; Rajan, Sreeraman; Jeon, Gwanggil; Chang, Joon-Hyuk; Dajani, Hilmi R; Groza, Voicu Z

    2017-06-01

    Blood pressure (BP) is one of the most important vital indicators and plays a key role in determining the cardiovascular activity of patients. This paper proposes a hybrid approach consisting of nonparametric bootstrap (NPB) and machine learning techniques to obtain the characteristic ratios (CR) used in the blood pressure estimation algorithm, to improve the accuracy of systolic blood pressure (SBP) and diastolic blood pressure (DBP) estimates and to obtain confidence intervals (CI). The NPB technique is used to circumvent the requirement for a large sample set for obtaining the CI. A mixture of Gaussian densities is assumed for the CRs and a Gaussian mixture model (GMM) is chosen to estimate the SBP and DBP ratios. The K-means clustering technique is used to obtain the mixture order of the Gaussian densities. The proposed approach achieves grade "A" under the British Hypertension Society testing protocol and is superior to the conventional approach based on the maximum amplitude algorithm (MAA) that uses fixed CR ratios. The proposed approach also yields a lower mean error (ME) and standard deviation of the error (SDE) in the estimates when compared to the conventional MAA method. In addition, CIs obtained through the proposed hybrid approach are narrower, with a lower SDE. The proposed approach, combining the NPB technique with the GMM, provides a methodology to derive individualized characteristic ratios. The results show that the proposed approach enhances the accuracy of SBP and DBP estimation and provides narrower confidence intervals for the estimates.
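
    A rough sketch of the hybrid estimator's shape: bootstrap the small sample of characteristic ratios, refit a Gaussian mixture on each replicate, and read off the ratio estimate and its confidence interval. The mixture order is chosen here by BIC rather than the paper's K-means based rule, and all numbers are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
# hypothetical systolic characteristic ratios from a small patient sample
cr = rng.normal(0.55, 0.08, size=25).reshape(-1, 1)

# choose the mixture order by BIC (a substitute for the paper's K-means rule)
orders = range(1, 4)
bics = [GaussianMixture(n_components=k, random_state=0).fit(cr).bic(cr) for k in orders]
k_best = orders[int(np.argmin(bics))]

# nonparametric bootstrap: resample the ratios, refit the GMM, and record
# the overall mixture mean used as the characteristic-ratio estimate
B = 1000
est = np.empty(B)
for b in range(B):
    s = rng.choice(cr.ravel(), size=len(cr), replace=True).reshape(-1, 1)
    g = GaussianMixture(n_components=k_best, random_state=0).fit(s)
    est[b] = np.sum(g.weights_ * g.means_.ravel())
cr_hat = est.mean()
ci = np.percentile(est, [2.5, 97.5])   # bootstrap CI for the characteristic ratio
```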

  18. Effect of voltage source internal resistance on the SQUID bootstrap circuit

    Science.gov (United States)

    Dong, Hui; Zhang, Guofeng; Wang, Yongliang; Zhang, Yi; Xie, Xiaoming; Krause, Hans-Joachim; Braginski, Alex I.; Offenhäusser, Andreas

    2012-01-01

    The voltage-biased SQUID bootstrap circuit (SBC) is suitable for achieving simple and low-noise direct readout of dc SQUIDs. In practice, an ideal voltage bias is difficult to realize because of the non-zero internal resistance R_in of the bias voltage source. In order to clearly observe the influence of R_in on the SBC parameters (namely the flux-to-current transfer coefficient (∂I/∂Φ)_SBC and the dynamic resistance R_d(SBC)) and on the noise performance, we introduced an additional adjustable resistor R_ad at room temperature to simulate a variable R_in between the SQUID and the preamplifier. We found that the measured SQUID flux noise does not rise, even though R_ad increases significantly. This result demonstrates that a highly resistive connection can be inserted between the liquid-helium-cooled SQUID and the room-temperature readout electronics in the SBC scheme, thus reducing the conductive heat loss of the system. This work will be significant for developing multichannel SBC readout systems, e.g. for biomagnetism, and systems using SQUIDs as amplifiers, for example, in TES-array readout.

  19. Bootstrapped Discovery and Ranking of Relevant Services and Information in Context-aware Systems

    Directory of Open Access Journals (Sweden)

    Preeti Bhargava

    2015-08-01

    A context-aware system uses context to provide relevant information and services to the user, where relevancy depends on the user's situation. This relevant information could include a wide range of heterogeneous content. Many existing context-aware systems determine this information based on pre-defined ontologies or rules. In addition, they rely on users' context history to filter it. Moreover, they often provide domain-specific information. Such systems are not applicable to a large and varied set of user situations and information needs, and may suffer from cold start for new users. In this paper, we address these limitations and propose a novel, general and flexible approach for bootstrapped discovery and ranking of heterogeneous relevant services and information in context-aware systems. We design and implement four variations of a base algorithm that ranks candidate relevant services, and the information to be retrieved from them, based on the semantic relatedness between the information provided by the services and the user's situation description. We conduct a live deployment with 14 subjects to evaluate the efficacy of our algorithms. We demonstrate that they have strong positive correlation with human supplied relevance rankings and can be used as an effective means to discover and rank relevant services and information. We also show that our approach is applicable to a wide set of users' situations and to new users without requiring any user interaction history.

  20. The lightcone bootstrap and the spectrum of the 3d Ising CFT

    Energy Technology Data Exchange (ETDEWEB)

    Simmons-Duffin, David [School of Natural Sciences, Institute for Advanced Study, Princeton, New Jersey 08540 (United States); Walter Burke Institute for Theoretical Physics, Caltech, Pasadena, California 91125 (United States)

    2017-03-15

    We compute numerically the dimensions and OPE coefficients of several operators in the 3d Ising CFT, and then try to reverse-engineer the solution to crossing symmetry analytically. Our key tool is a set of new techniques for computing infinite sums of SL(2,ℝ) conformal blocks. Using these techniques, we solve the lightcone bootstrap to all orders in an asymptotic expansion in large spin, and suggest a strategy for going beyond the large-spin limit. We carry out the first steps of this strategy for the 3d Ising CFT, deriving analytic approximations for the dimensions and OPE coefficients of several infinite families of operators in terms of the initial data {Δ_σ, Δ_ϵ, f_σσϵ, f_ϵϵϵ, c_T}. The analytic results agree with numerics to high precision for about 100 low-twist operators (correctly accounting for O(1) mixing effects between large-spin families). Plugging these results back into the crossing equations, we obtain approximate analytic constraints on the initial data.