WorldWideScience

Sample records for methods resampling-type improvements

  1. Resampling Methods Improve the Predictive Power of Modeling in Class-Imbalanced Datasets

    Directory of Open Access Journals (Sweden)

    Paul H. Lee

    2014-09-01

    Full Text Available In the medical field, many outcome variables are dichotomized, and the two possible values of a dichotomized variable are referred to as classes. A dichotomized dataset is class-imbalanced if it consists mostly of one class, and the performance of common classification models on this type of dataset tends to be suboptimal. To tackle such a problem, resampling methods, including oversampling and undersampling, can be used. This paper aims at illustrating the effect of resampling methods using the National Health and Nutrition Examination Survey (NHANES) wave 2009–2010 dataset. A total of 4677 participants aged ≥20 without self-reported diabetes and with valid blood test results were analyzed. The Classification and Regression Tree (CART) procedure was used to build a classification model for undiagnosed diabetes; a participant was classified as having undiagnosed diabetes if he or she demonstrated evidence of diabetes according to the WHO diabetes criteria. Exposure variables included demographics and socio-economic status. CART models were fitted using a randomly selected 70% of the data (training dataset), and the area under the receiver operating characteristic curve (AUC) was computed using the remaining 30% of the sample for evaluation (testing dataset). CART models were fitted using the training dataset, the oversampled training dataset, the weighted training dataset, and the undersampled training dataset. In addition, resampling case-to-control ratios of 1:1, 1:2, and 1:4 were examined. The effects of resampling on the performance of two extensions of CART (random forests and generalized boosted trees) were also examined. CARTs fitted on the oversampled (AUC = 0.70) and undersampled (AUC = 0.74) training data yielded better classification power than the CART fitted on the original training data (AUC = 0.65). Resampling also improved the classification power of random forests and generalized boosted trees. To conclude, applying resampling methods to a class-imbalanced dataset improved the classification power of CART, random forests, and generalized boosted trees.
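
    As an illustration of the oversampling step described above, the following Python sketch (synthetic stand-in data and a plain scikit-learn decision tree, not the paper's NHANES/CART pipeline) balances the training set to a 1:1 case-to-control ratio and compares test-set AUC:

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score
      from sklearn.utils import resample

      rng = np.random.default_rng(0)
      X = rng.normal(size=(4677, 6))   # synthetic "exposure variables"
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=4677) > 3.3).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

      # Oversample cases to a 1:1 case-to-control ratio in the training set only.
      X_pos, X_neg = X_tr[y_tr == 1], X_tr[y_tr == 0]
      X_pos_os = resample(X_pos, replace=True, n_samples=len(X_neg), random_state=0)
      X_bal = np.vstack([X_neg, X_pos_os])
      y_bal = np.r_[np.zeros(len(X_neg)), np.ones(len(X_pos_os))]

      for name, (Xf, yf) in {"original": (X_tr, y_tr), "oversampled": (X_bal, y_bal)}.items():
          tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xf, yf)
          print(name, roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]))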

  2. Testing for Granger Causality in the Frequency Domain: A Phase Resampling Method.

    Science.gov (United States)

    Liu, Siwei; Molenaar, Peter

    2016-01-01

    This article introduces phase resampling, an existing but rarely used surrogate data method for making statistical inferences of Granger causality in frequency domain time series analysis. Granger causality testing is essential for establishing causal relations among variables in multivariate dynamic processes. However, testing for Granger causality in the frequency domain is challenging due to the nonlinear relation between frequency domain measures (e.g., partial directed coherence, generalized partial directed coherence) and time domain data. Through a simulation study, we demonstrate that phase resampling is a general and robust method for making statistical inferences even with short time series. With Gaussian data, phase resampling yields satisfactory type I and type II error rates in all but one condition we examine: when a small effect size is combined with an insufficient number of data points. Violations of normality lead to slightly higher error rates but are mostly within acceptable ranges. We illustrate the utility of phase resampling with two empirical examples involving multivariate electroencephalography (EEG) and skin conductance data.
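
    The core of a phase-resampled surrogate can be sketched in a few lines of NumPy. The function name and settings below are illustrative; a full Granger-causality test would randomize phases so as to destroy the cross-coupling between series while preserving each power spectrum:

      import numpy as np

      def phase_surrogate(x, rng):
          # Surrogate with the same power spectrum but randomized Fourier phases.
          n = len(x)
          spec = np.fft.rfft(x)
          phases = rng.uniform(0, 2 * np.pi, size=len(spec))
          phases[0] = 0.0                      # keep the mean (DC component)
          if n % 2 == 0:
              phases[-1] = 0.0                 # keep the Nyquist component real
          return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

      rng = np.random.default_rng(1)
      x = np.sin(np.linspace(0, 20 * np.pi, 512)) + rng.normal(scale=0.3, size=512)
      surrogates = [phase_surrogate(x, rng) for _ in range(200)]
      # A causality statistic computed on the data and on each surrogate
      # then yields an empirical null distribution for inference.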

  3. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    Science.gov (United States)

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
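
    A minimal sketch of the resampling scheme described above, assuming synthetic Gaussian groups and a plain (non-robust) Box/Bartlett-type statistic rather than the authors' robust estimators:

      import numpy as np
      from numpy.linalg import slogdet, cholesky, inv

      def bartlett_stat(groups):
          # Box/Bartlett-type statistic: group log-determinants vs. the pooled one.
          ns = np.array([len(g) - 1.0 for g in groups])
          covs = [np.cov(g, rowvar=False) for g in groups]
          pooled = sum(n * c for n, c in zip(ns, covs)) / ns.sum()
          return ns.sum() * slogdet(pooled)[1] - sum(n * slogdet(c)[1] for n, c in zip(ns, covs))

      def standardized_residuals(g):
          # Center by the group mean, whiten by the group sample covariance.
          L = cholesky(np.cov(g, rowvar=False))
          return (g - g.mean(axis=0)) @ inv(L).T

      rng = np.random.default_rng(2)
      groups = [rng.normal(size=(40, 3)), rng.normal(size=(60, 3))]
      t_obs = bartlett_stat(groups)
      z = np.vstack([standardized_residuals(g) for g in groups])  # shared second moments under H0
      null = [bartlett_stat([zz[:40], zz[40:]])
              for zz in (z[rng.integers(0, len(z), len(z))] for _ in range(999))]
      print("bootstrap p =", (1 + sum(t >= t_obs for t in null)) / 1000)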

  4. PARTICLE FILTER BASED VEHICLE TRACKING APPROACH WITH IMPROVED RESAMPLING STAGE

    Directory of Open Access Journals (Sweden)

    Wei Leong Khong

    2014-02-01

    Full Text Available Optical sensor-based vehicle tracking can be widely implemented in traffic surveillance and flow control. The vast development of video surveillance infrastructure in recent years has drawn the current research focus towards vehicle tracking using high-end, low-cost optical sensors. However, tracking vehicles with such sensors can be challenging due to the high probability of changes in vehicle appearance and illumination, besides occlusion and overlapping incidents. The particle filter has been proven to overcome the nonlinear and non-Gaussian situations caused by cluttered backgrounds and occlusion incidents. Unfortunately, the conventional particle filter encounters particle degeneracy, especially during and after occlusion. Sampling importance resampling (SIR) is an important step for overcoming this drawback, but SIR faces the problem of sample impoverishment when heavy particles are statistically selected many times. In this work, a genetic algorithm is proposed for the particle filter resampling stage, so that the estimated position converges faster to the real position of the target vehicle under various occlusion incidents. The experimental results show that the improved particle filter with genetic algorithm resampling increases tracking accuracy while reducing the particle sample size in the resampling stage.
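
    For context, the standard SIR resampling stage that the genetic algorithm replaces can be sketched as follows (systematic resampling on a toy one-dimensional tracking example; all quantities are illustrative):

      import numpy as np

      def systematic_resample(weights, rng):
          # Systematic resampling: returns indices of the surviving particles.
          n = len(weights)
          positions = (rng.random() + np.arange(n)) / n
          return np.searchsorted(np.cumsum(weights), positions)

      rng = np.random.default_rng(3)
      particles = rng.normal(loc=5.0, scale=2.0, size=500)   # candidate vehicle positions
      obs = 5.3                                              # measured position
      w = np.exp(-0.5 * ((particles - obs) / 0.5) ** 2)
      w /= w.sum()
      particles = particles[systematic_resample(w, rng)]
      # The paper replaces this stage with a genetic algorithm (selection,
      # crossover, mutation of particles) to fight sample impoverishment.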

  5. Resampling methods in Microsoft Excel® for estimating reference intervals.

    Science.gov (United States)

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions which lend themselves well to this purpose, including the recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles.
    The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to the use of Microsoft Excel® 2010 for estimating reference intervals in particular.
    Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500–1000 random samples with replacement should be taken from the results of measurement of the reference samples.
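
    A Python equivalent of the Excel recipe might look as follows (NumPy's default linear percentile interpolation is broadly comparable to Excel's PERCENTILE.INC; the data here are synthetic):

      import numpy as np

      rng = np.random.default_rng(4)
      ref = rng.lognormal(mean=1.0, sigma=0.4, size=40)    # 40 non-Gaussian reference values

      boots = rng.choice(ref, size=(1000, ref.size), replace=True)  # 1000 bootstrap resamples
      lower = np.percentile(boots, 2.5, axis=1).mean()
      upper = np.percentile(boots, 97.5, axis=1).mean()
      print(f"bootstrap reference interval: ({lower:.2f}, {upper:.2f})")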

  6. Fourier transform resampling: Theory and application

    International Nuclear Information System (INIS)

    Hawkins, W.G.

    1996-01-01

    One of the most challenging problems in medical imaging is the development of reconstruction algorithms for nonstandard geometries. This work focuses on the application of Fourier analysis to the problem of resampling or rebinning. Conventional resampling methods utilizing some form of interpolation almost always result in a loss of resolution in the tomographic image. Fourier Transform Resampling (FTRS) offers potential improvement because the Modulation Transfer Function (MTF) of the process behaves like an ideal low-pass filter. The MTF, however, is nonstationary if the coordinate transformation is nonlinear. FTRS may be viewed as a generalization of the linear coordinate transformations of standard Fourier analysis. Simulated MTFs were obtained by projecting point sources at different transverse positions in the flat fan beam detector geometry. These MTFs were compared to the closed form expression for FTRS. Excellent agreement was obtained for frequencies at or below the estimated cutoff frequency. The resulting FTRS algorithm is applied to simulations with symmetric fan beam geometry, an elliptical orbit and uniform attenuation, with a normalized root mean square error (NRMSE) of 0.036. Also, a Tc-99m point source study (1 cm dia., placed in air 10 cm from the COR) for a circular fan beam acquisition was reconstructed with a hybrid resampling method. The FWHM of the hybrid resampling method was 11.28 mm and compares favorably with a direct reconstruction (FWHM: 11.03 mm).
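
    On a uniform one-dimensional grid, the ideal low-pass behavior of Fourier-domain resampling reduces to spectrum zero-padding; the sketch below illustrates only this simple case, not the nonlinear coordinate transformations handled by FTRS:

      import numpy as np

      def fourier_resample(x, m):
          # Resample x to m points by zero-padding/truncating its spectrum
          # (band-limited, ideal low-pass interpolation).
          X = np.fft.rfft(x)
          Y = np.zeros(m // 2 + 1, dtype=complex)
          k = min(len(X), len(Y))
          Y[:k] = X[:k]
          return np.fft.irfft(Y, n=m) * (m / len(x))

      x = np.sin(2 * np.pi * 3 * np.arange(64) / 64)
      y = fourier_resample(x, 256)   # band-limited interpolation onto a finer grid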

  7. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    Science.gov (United States)

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
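
    Assuming that "Poisson resampling" denotes binomial selection (thinning) of the recorded counts, as the comment's description suggests, the three schemes can be compared in a few lines:

      import numpy as np

      rng = np.random.default_rng(5)
      full = rng.poisson(lam=50.0, size=(128, 128))          # full-count image

      half_resampled = rng.binomial(full, 0.5)               # Poisson resampling (thinning)
      half_poisson = rng.poisson(full * 0.5)                 # Poisson redrawing
      half_gauss = np.round(rng.normal(full * 0.5, np.sqrt(full * 0.5)))  # Gaussian redrawing

      for name, img in [("thinning", half_resampled), ("Poisson redraw", half_poisson),
                        ("Gaussian redraw", half_gauss)]:
          print(f"{name}: mean ratio {img.mean() / full.mean():.3f}, std {img.std():.2f}")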

  8. Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods

    Science.gov (United States)

    MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason

    2010-01-01

    The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal distribution. This article uses a simulation study to demonstrate that confidence limits are imbalanced because the distribution of the indirect effect is normal only in special cases. Two alternatives for improving the performance of confidence limits for the indirect effect are evaluated: (a) a method based on the distribution of the product of two normal random variables, and (b) resampling methods. In Study 1, confidence limits based on the distribution of the product are more accurate than methods based on an assumed normal distribution but confidence limits are still imbalanced. Study 2 demonstrates that more accurate confidence limits are obtained using resampling methods, with the bias-corrected bootstrap the best method overall. PMID:20157642
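
    A bias-corrected (BC, without acceleration) bootstrap for the indirect effect a*b can be sketched as follows; the regressions and data are deliberately simplified relative to the mediation models studied in the article:

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(6)
      n = 200
      x = rng.normal(size=n)
      m = 0.4 * x + rng.normal(size=n)                       # mediator
      y = 0.3 * m + rng.normal(size=n)                       # outcome

      def indirect(x, m, y):
          a = np.linalg.lstsq(np.c_[x, np.ones(len(x))], m, rcond=None)[0][0]
          b = np.linalg.lstsq(np.c_[m, x, np.ones(len(x))], y, rcond=None)[0][0]
          return a * b                                       # indirect effect a*b

      est = indirect(x, m, y)
      boot = np.empty(2000)
      for i in range(2000):
          idx = rng.integers(0, n, n)                        # same resample for all variables
          boot[i] = indirect(x[idx], m[idx], y[idx])

      z0 = norm.ppf((boot < est).mean())                     # bias-correction term
      lo, hi = norm.cdf(2 * z0 + norm.ppf([0.025, 0.975]))
      print("BC bootstrap 95% CI:", np.percentile(boot, [100 * lo, 100 * hi]))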

  10. Assessment of resampling methods for causality testing: A note on the US inflation behavior.

    Science.gov (United States)

    Papana, Angeliki; Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As an appropriate test statistic for this setting, the partial transfer entropy (PTE), an information-theoretic and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined, along with the size and power of the respective tests. Additionally, we consider a seventh resampling method that contemporaneously resamples the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no-causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic, since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application: we investigate the causal links among the growth rates of the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditional on money supply in the post-1986 period. However, this relationship cannot be explained on the basis of traditional cost-push mechanisms.
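
    The time-shifted surrogate scheme can be sketched as below; a lagged correlation stands in for the PTE, which would require an entropy estimator:

      import numpy as np

      def time_shift_surrogate(x, rng, min_shift=20):
          # Circularly shift the driver to destroy its coupling with the response.
          s = rng.integers(min_shift, len(x) - min_shift)
          return np.roll(x, s)

      rng = np.random.default_rng(7)
      n = 1000
      driver = rng.normal(size=n)
      response = np.r_[0.0, 0.8 * driver[:-1]] + 0.3 * rng.normal(size=n)

      def stat(d, r):
          return abs(np.corrcoef(d[:-1], r[1:])[0, 1])   # stand-in for the PTE

      t_obs = stat(driver, response)
      null = [stat(time_shift_surrogate(driver, rng), response) for _ in range(500)]
      print("p =", (1 + sum(t >= t_obs for t in null)) / 501)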

  11. Improved efficiency of multi-criteria IMPT treatment planning using iterative resampling of randomly placed pencil beams

    Science.gov (United States)

    van de Water, S.; Kraan, A. C.; Breedveld, S.; Schillemans, W.; Teguh, D. N.; Kooy, H. M.; Madden, T. M.; Heijmen, B. J. M.; Hoogeman, M. S.

    2013-10-01

    This study investigates whether ‘pencil beam resampling’, i.e. iterative selection and weight optimization of randomly placed pencil beams (PBs), reduces optimization time and improves plan quality for multi-criteria optimization in intensity-modulated proton therapy, compared with traditional modes in which PBs are distributed over a regular grid. Resampling consisted of repeatedly performing: (1) random selection of candidate PBs from a very fine grid, (2) inverse multi-criteria optimization, and (3) exclusion of low-weight PBs. The newly selected candidate PBs were added to the PBs in the existing solution, causing the solution to improve with each iteration. Resampling and traditional regular grid planning were implemented in our in-house developed multi-criteria treatment planning system ‘Erasmus iCycle’. The system optimizes objectives successively according to their priorities as defined in the so-called ‘wish-list’. For five head-and-neck cancer patients and two PB widths (3 and 6 mm sigma at 230 MeV), treatment plans were generated using: (1) resampling, (2) anisotropic regular grids and (3) isotropic regular grids, while using varying sample sizes (resampling) or grid spacings (regular grid). We assessed differences in optimization time (for comparable plan quality) and in plan quality parameters (for comparable optimization time). Resampling reduced optimization time by factors of 2.8 and 5.6 on average (7.8 and 17.0 at maximum) compared with the use of anisotropic and isotropic grids, respectively. Doses to organs-at-risk were generally reduced when using resampling, with median dose reductions ranging from 0.0 to 3.0 Gy (maximum: 14.3 Gy, relative: 0%-42%) compared with anisotropic grids and from -0.3 to 2.6 Gy (maximum: 11.4 Gy, relative: -4%-19%) compared with isotropic grids. Resampling was especially effective when using thin PBs (3 mm sigma). Resampling plans contained on average fewer PBs, energy layers and protons than anisotropic and isotropic grid plans.
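
    The select/optimize/prune loop can be caricatured with non-negative least squares standing in for the prioritized multi-criteria ('wish-list') optimization; all matrices below are random stand-ins:

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(8)
      n_vox, n_cand = 60, 400
      D = rng.random((n_vox, n_cand))          # dose-influence matrix of candidate PBs
      target = np.ones(n_vox)                  # prescribed dose per voxel

      selected = np.empty(0, dtype=int)
      for it in range(5):
          new = rng.choice(n_cand, size=50, replace=False)   # (1) sample new candidates
          pool = np.union1d(selected, new)
          w, _ = nnls(D[:, pool], target)                    # (2) optimize weights
          selected = pool[w > 1e-3 * max(w.max(), 1e-12)]    # (3) drop low-weight PBs
          print(f"iteration {it}: {len(selected)} PBs kept")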

  12. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'

    DEFF Research Database (Denmark)

    de Nijs, Robin

    2015-01-01

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed...... by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all...... methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half count image had a severe impact on counting statistics...

  13. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming; Cheng, Yichen; Song, Qifan; Park, Jincheol; Yang, Ping

    2013-01-01

    large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate

  14. Conditional Monthly Weather Resampling Procedure for Operational Seasonal Water Resources Forecasting

    Science.gov (United States)

    Beckers, J.; Weerts, A.; Tijdeman, E.; Welles, E.; McManamon, A.

    2013-12-01

    To provide reliable and accurate seasonal streamflow forecasts for water resources management, several operational hydrologic agencies and hydropower companies around the world use the Extended Streamflow Prediction (ESP) procedure. The ESP in its original implementation does not accommodate any additional information that the forecaster may have about expected deviations from climatology in the near future. Several attempts have been made to improve the skill of the ESP forecast, especially for areas which are affected by teleconnections (e.g., ENSO, PDO), via selection (Hamlet and Lettenmaier, 1999) or weighting schemes (Werner et al., 2004; Wood and Lettenmaier, 2006; Najafi et al., 2012). A disadvantage of such schemes is that they lead to a reduction of the signal to noise ratio of the probabilistic forecast. To overcome this, we propose a resampling method conditional on climate indices to generate meteorological time series to be used in the ESP. The method can be used to generate a large number of meteorological ensemble members in order to improve the statistical properties of the ensemble. The effectiveness of the method was demonstrated in a real-time operational hydrologic seasonal forecast system for the Columbia River basin operated by the Bonneville Power Administration. The forecast skill of the k-nn resampler was tested against the original ESP for three basins at the long-range seasonal time scale. The BSS and CRPSS were used to compare the results to those of the original ESP method. Positive forecast skill scores were found for the resampler method conditioned on different indices for the prediction of spring peak flows in the Dworshak and Hungry Horse basins. For the Libby Dam basin, however, no improvement of skill was found. The proposed resampling method is a promising practical approach that can add skill to ESP forecasts at the seasonal time scale. Further improvement is possible by fine tuning the method and selecting the most informative climate indices.
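
    A toy version of a k-nn resampler conditioned on a climate index (all data are synthetic; an operational scheme would work on historical weather traces for the basin of interest):

      import numpy as np

      rng = np.random.default_rng(9)
      years = np.arange(1980, 2010)
      index = rng.normal(size=len(years))                # historical climate index (e.g. ENSO)
      weather = rng.normal(size=(len(years), 365))       # daily weather trace of each year

      def knn_resample(index_now, k=5, n_members=100):
          # Draw ensemble members from the k historical years closest in index space.
          nearest = np.argsort(np.abs(index - index_now))[:k]
          return weather[rng.choice(nearest, size=n_members, replace=True)]

      ensemble = knn_resample(index_now=1.2)             # e.g. a strong El Nino signal
      print(ensemble.shape)                              # (100, 365) members to drive the ESP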

  15. Image re-sampling detection through a novel interpolation kernel.

    Science.gov (United States)

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential building block in typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. This process involves minimizing an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Winter Holts Oscillatory Method: A New Method of Resampling in Time Series.

    Directory of Open Access Journals (Sweden)

    Muhammad Imtiaz Subhani

    2016-12-01

    Full Text Available The core proposition behind this research is to create innovative methods of bootstrapping that can be applied to time series data. In order to find new methods of bootstrapping, various methods were reviewed. Data on automotive sales, market shares and net exports of the top 10 countries, which include China, Europe, the United States of America (USA), Japan, Germany, South Korea, India, Mexico, Brazil, Spain and Canada, from 2002 to 2014 were collected through various sources, including UN Comtrade, Index Mundi and the World Bank. The findings of this paper confirmed that bootstrapping for resampling through winter forecasting by the oscillation and average methods gives more robust results than winter forecasting by general methods.

  17. Resampling Approach for Determination of the Method for Reference Interval Calculation in Clinical Laboratory Practice

    Science.gov (United States)

    Pavlov, Igor Y.; Wilson, Andrew R.; Delgado, Julio C.

    2010-01-01

    Reference intervals (RI) play a key role in clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation (parametric, transformed parametric, and quantile-based bootstrapping) were applied to multiple random samples drawn from 81 values of complement factor B observations and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could reach 20% or even more. The transformed parametric method was found to be the best for calculating the RI of non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For a simulated Gaussian population, parametric calculations, as expected, were the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach can help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included. PMID:20554803

  18. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled resampling method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining type I error probability for all conditions except Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating the one way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
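
    The pooled-resampling idea can be sketched as follows: both groups are pooled to approximate the null distribution of the t statistic, then resampled at the original group sizes. This is a minimal illustration, not the authors' exact algorithm:

      import numpy as np
      from scipy.stats import ttest_ind

      rng = np.random.default_rng(10)
      a = rng.exponential(scale=1.0, size=8)         # small, non-normal samples
      b = rng.exponential(scale=1.8, size=9)

      t_obs = ttest_ind(a, b, equal_var=False).statistic
      pooled = np.concatenate([a, b])                # pool under H0: equal means

      null = np.empty(5000)
      for i in range(5000):
          ra = rng.choice(pooled, size=len(a), replace=True)
          rb = rng.choice(pooled, size=len(b), replace=True)
          null[i] = ttest_ind(ra, rb, equal_var=False).statistic
      p = (1 + np.sum(np.abs(null) >= abs(t_obs))) / (len(null) + 1)
      print(f"pooled bootstrap p = {p:.3f}")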

  19. Resampling nucleotide sequences with closest-neighbor trimming and its comparison to other methods.

    Directory of Open Access Journals (Sweden)

    Kouki Yonezawa

    Full Text Available A large number of nucleotide sequences of various pathogens are available in public databases. The growth of these datasets has resulted in an enormous increase in computational costs. Moreover, due to differences in surveillance activities, the number of sequences found in databases varies from one country to another and from year to year. Therefore, it is important to study resampling methods to reduce the sampling bias. A novel algorithm, called the closest-neighbor trimming method, that resamples a given number of sequences from a large nucleotide sequence dataset was proposed. The performance of the proposed algorithm was compared with other algorithms by using the nucleotide sequences of human H3N2 influenza viruses. We compared the closest-neighbor trimming method with the naive hierarchical clustering algorithm and the k-medoids clustering algorithm. Genetic information accumulated in public databases contains sampling bias. The closest-neighbor trimming method can thin out densely sampled sequences from a given dataset. Since nucleotide sequences are among the most widely used materials in the life sciences, we anticipate that applying our algorithm to various datasets will reduce sampling bias.
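
    A small sketch of closest-neighbor trimming on points in the plane (Euclidean distance stands in for a genetic distance between sequences):

      import numpy as np

      def closest_neighbor_trim(X, n_keep, rng):
          # Repeatedly locate the closest pair and randomly drop one of its members.
          idx = list(range(len(X)))
          D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
          np.fill_diagonal(D, np.inf)
          while len(idx) > n_keep:
              sub = D[np.ix_(idx, idx)]
              i, j = np.unravel_index(np.argmin(sub), sub.shape)
              idx.pop(int(rng.choice([i, j])))
          return np.array(idx)

      rng = np.random.default_rng(11)
      X = np.vstack([rng.normal(0, 0.2, size=(80, 2)),   # densely sampled region
                     rng.normal(3, 1.0, size=(20, 2))])  # sparsely sampled region
      kept = closest_neighbor_trim(X, n_keep=30, rng=rng)
      print((kept < 80).sum(), "kept from the dense region,",
            (kept >= 80).sum(), "from the sparse one")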

  20. Assessment of Resampling Methods for Causality Testing: A note on the US Inflation Behavior

    NARCIS (Netherlands)

    Papana, A.; Kyrtsou, C.; Kugiumtzis, D.; Diks, C.

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial

  1. Comparison of standard resampling methods for performance estimation of artificial neural network ensembles

    OpenAIRE

    Green, Michael; Ohlsson, Mattias

    2007-01-01

    Estimation of the generalization performance for classification within the medical applications domain is always an important task. In this study we focus on artificial neural network ensembles as the machine learning technique. We present a numerical comparison between five common resampling techniques: k-fold cross validation (CV), holdout using three cutoffs, and bootstrap, using five different data sets. The results show that CV together with holdout 0.25 and 0.50 are the best resampling techniques.
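
    The three estimators can be compared on a toy problem as follows (a single small network rather than the ensembles studied in the paper; all settings are illustrative):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score, train_test_split
      from sklearn.metrics import accuracy_score

      X, y = make_classification(n_samples=300, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)

      # k-fold cross validation
      cv = cross_val_score(clf, X, y, cv=5).mean()

      # holdout with cutoff 0.25 (25% of the data held out for testing)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      hold = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))

      # bootstrap: train on a resample, test on the out-of-bag points
      rng = np.random.default_rng(0)
      idx = rng.integers(0, len(X), len(X))
      oob = np.setdiff1d(np.arange(len(X)), idx)
      boot = accuracy_score(y[oob], clf.fit(X[idx], y[idx]).predict(X[oob]))

      print(f"CV {cv:.2f}  holdout {hold:.2f}  bootstrap(OOB) {boot:.2f}")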

  2. A New Method to Implement Resampled Uniform PWM Suitable for Distributed Control of Modular Multilevel Converters

    DEFF Research Database (Denmark)

    Huang, Shaojun; Mathe, Laszlo; Teodorescu, Remus

    2013-01-01

    Two existing methods to implement the resampling modulation technique for the modular multilevel converter (MMC) (where the sampling frequency is a multiple of the carrier frequency) are the software solution (using a microcontroller) and the hardware solution (using an FPGA). The former has a certain level...

  3. Efficient p-value evaluation for resampling-based tests

    KAUST Repository

    Yu, K.

    2011-01-05

    The resampling-based test, which often relies on permutation or bootstrap procedures, has been widely used for statistical hypothesis testing when the asymptotic distribution of the test statistic is unavailable or unreliable. It requires repeated calculations of the test statistic on a large number of simulated data sets for its significance level assessment, and thus it can become very computationally intensive. Here, we propose an efficient p-value evaluation procedure by adapting the stochastic approximation Markov chain Monte Carlo algorithm. The new procedure can be used easily for estimating the p-value for any resampling-based test. We show through numeric simulations that the proposed procedure can be 100–500,000 times as efficient (in terms of computing time) as the standard resampling-based procedure when evaluating a test statistic with a small p-value (e.g. less than 10^(-6)). With its computational burden reduced by this proposed procedure, the versatile resampling-based test would become computationally feasible for a much wider range of applications. We demonstrate the application of the new method by applying it to a large-scale genetic association study of prostate cancer.

  4. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    Science.gov (United States)

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performance and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate its validity in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
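
    A sketch of within-cluster resampling under an informative-cluster-size mechanism (synthetic data; real WCR inference also combines within- and between-replicate variances):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(12)
      clusters = []
      for _ in range(150):
          s = int(rng.integers(2, 12))           # informative cluster size
          u = rng.normal()                       # cluster-level random effect
          x = rng.normal(size=s)                 # cluster-varying covariate
          p = 1.0 / (1.0 + np.exp(-(-0.1 * s + 0.8 * x + u)))
          clusters.append((x, rng.binomial(1, p)))

      betas = []
      for _ in range(200):
          # Within-cluster resampling: one randomly chosen observation per cluster.
          pick = [(x[i], y[i]) for x, y in clusters for i in (rng.integers(len(x)),)]
          xs, ys = map(np.asarray, zip(*pick))
          fit = sm.GLM(ys, sm.add_constant(xs), family=sm.families.Binomial()).fit()
          betas.append(fit.params[1])
      print(f"WCR estimate of the covariate effect: {np.mean(betas):.2f}")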

  5. Introductory statistics and analytics a resampling perspective

    CERN Document Server

    Bruce, Peter C

    2014-01-01

    Concise, thoroughly class-tested primer that features basic statistical concepts in the context of analytics, resampling, and the bootstrap. A uniquely developed presentation of key statistical topics, Introductory Statistics and Analytics: A Resampling Perspective provides an accessible approach to statistical analytics, resampling, and the bootstrap for readers with various levels of exposure to basic probability and statistics. Originally class-tested at one of the first online learning companies in the discipline, www.statistics.com, the book primarily focuses on applications.

  6. On removing interpolation and resampling artifacts in rigid image registration.

    Science.gov (United States)

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

    2013-02-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.
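
    The artifact is easy to demonstrate: rotating an image forward and back should be lossless, yet interpolation leaves a residue that varies with angle, and this residue is what perturbs registration cost functions. A minimal sketch with SciPy (all parameters illustrative):

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(13)
      img = ndimage.gaussian_filter(rng.normal(size=(64, 64)), sigma=2)

      # Sum-of-squared-differences between the image and itself rotated by theta
      # and back: ideally zero, but resampling leaves angle-dependent residue.
      for order, name in [(1, "linear"), (3, "cubic")]:
          ssd = []
          for theta in np.linspace(1, 10, 10):
              fwd = ndimage.rotate(img, theta, reshape=False, order=order)
              back = ndimage.rotate(fwd, -theta, reshape=False, order=order)
              ssd.append(((img - back) ** 2).sum())
          print(f"{name}: SSD residue ranges {min(ssd):.3f} to {max(ssd):.3f}")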

  7. Genetic divergence among cupuaçu accessions by multiscale bootstrap resampling

    Directory of Open Access Journals (Sweden)

    Vinicius Silva dos Santos

    2015-06-01

    Full Text Available This study aimed at investigating the genetic divergence of eighteen accessions of cupuaçu trees based on fruit morphometric traits, and at comparing usual methods of cluster analysis with the proposed multiscale bootstrap resampling methodology. The data were obtained from an experiment conducted in Tomé-Açu city (PA, Brazil), arranged in a completely randomized design with eighteen cupuaçu accessions and 10 repetitions, from 2004 to 2011. Genetic parameters were estimated by the restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) methodology. The predicted breeding values were used in the study of genetic divergence through Unweighted Pair Group Method with Arithmetic Mean (UPGMA) hierarchical clustering and Tocher's optimization method based on standardized Euclidean distance. Clustering consistency and the optimal number of clusters in the UPGMA method were verified by the cophenetic correlation coefficient (CCC) and Mojena's criterion, respectively, besides the multiscale bootstrap resampling technique. The UPGMA clustering method in situations with and without multiscale bootstrap resulted in four and five clusters, respectively, while Tocher's method resulted in seven clusters. The multiscale bootstrap resampling technique proved efficient for assessing the consistency of clustering in hierarchical methods and, consequently, the optimal number of clusters.

  8. Resampling to accelerate cross-correlation searches for continuous gravitational waves from binary systems

    Science.gov (United States)

    Meadors, Grant David; Krishnan, Badri; Papa, Maria Alessandra; Whelan, John T.; Zhang, Yuanhao

    2018-02-01

    Continuous-wave (CW) gravitational waves (GWs) call for computationally-intensive methods. Low signal-to-noise ratio signals need templated searches with long coherent integration times and thus fine parameter-space resolution. Longer integration increases sensitivity. Low-mass x-ray binaries (LMXBs) such as Scorpius X-1 (Sco X-1) may emit accretion-driven CWs at strains reachable by current ground-based observatories. Binary orbital parameters induce phase modulation. This paper describes how resampling corrects binary and detector motion, yielding source-frame time series used for cross-correlation. Compared to the previous, detector-frame, templated cross-correlation method, used for Sco X-1 on data from the first Advanced LIGO observing run (O1), resampling is about 20 × faster in the costliest, most-sensitive frequency bands. Speed-up factors depend on integration time and search setup. The speed could be reinvested into longer integration with a forecast sensitivity gain, 20 to 125 Hz median, of approximately 51%, or from 20 to 250 Hz, 11%, given the same per-band cost and setup. This paper's timing model enables future setup optimization. Resampling scales well with longer integration, and at 10 × unoptimized cost could reach respectively 2.83 × and 2.75 × median sensitivities, limited by spin-wandering. Then an O1 search could yield a marginalized-polarization upper limit reaching torque-balance at 100 Hz. Frequencies from 40 to 140 Hz might be probed in equal observing time with 2 × improved detectors.

  10. Methods of Soil Resampling to Monitor Changes in the Chemical Concentrations of Forest Soils.

    Science.gov (United States)

    Lawrence, Gregory B; Fernandez, Ivan J; Hazlett, Paul W; Bailey, Scott W; Ross, Donald S; Villars, Thomas R; Quintana, Angelica; Ouimet, Rock; McHale, Michael R; Johnson, Chris E; Briggs, Russell D; Colter, Robert A; Siemion, Jason; Bartlett, Olivia L; Vargas, Olga; Antidormi, Michael R; Koppers, Mary M

    2016-11-25

    Recent soils research has shown that important chemical soil characteristics can change in less than a decade, often the result of broad environmental changes. Repeated sampling to monitor these changes in forest soils is a relatively new practice that is not well documented in the literature and has only recently been broadly embraced by the scientific community. The objective of this protocol is therefore to synthesize the latest information on methods of soil resampling in a format that can be used to design and implement a soil monitoring program. Successful monitoring of forest soils requires that a study unit be defined within an area of forested land that can be characterized with replicate sampling locations. A resampling interval of 5 years is recommended, but if monitoring is done to evaluate a specific environmental driver, the rate of change expected in that driver should be taken into consideration. Here, we show that the sampling of the profile can be done by horizon where boundaries can be clearly identified and horizons are sufficiently thick to remove soil without contamination from horizons above or below. Otherwise, sampling can be done by depth interval. Archiving of sample for future reanalysis is a key step in avoiding analytical bias and providing the opportunity for additional analyses as new questions arise.

  11. Resampling: An optimization method for inverse planning in robotic radiosurgery

    International Nuclear Information System (INIS)

    Schweikard, Achim; Schlaefer, Alexander; Adler, John R. Jr.

    2006-01-01

    By design, the range of beam directions in conventional radiosurgery is constrained to an isocentric array. However, the recent introduction of robotic radiosurgery dramatically increases the flexibility of targeting, and as a consequence, beams need be neither coplanar nor isocentric. Such a nonisocentric design permits a large number of distinct beam directions to be used in one single treatment. These major technical differences provide an opportunity to improve upon the well-established principles for treatment planning used with GammaKnife or LINAC radiosurgery. With this objective in mind, our group has developed over the past decade an inverse planning tool for robotic radiosurgery. This system first computes a set of beam directions, and then, during an optimization step, weights each individual beam. Optimization begins with a feasibility query, the answer to which is derived through linear programming. This approach offers the advantage of completeness and avoids local optima. Final beam selection is based on heuristics. In this report we present and evaluate a new strategy for utilizing the advantages of linear programming to improve beam selection. Starting from an initial solution, a heuristically determined set of beams is added to the optimization problem, while beams with zero weight are removed. This process is repeated to sample a set of beams much larger than in typical optimization. Experimental results indicate that the planning approach efficiently finds acceptable plans and that resampling can further improve its efficiency.

  12. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  13. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.

    Science.gov (United States)

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-08-12

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are spectrally similar and similar in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled images (RS-images) created from real UAV-images (captured by UAVs equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based image analysis (OBIA) implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high-spatial-resolution UAV-images at an altitude of 30 m, and that the RS-image data simulating altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights.

  14. A Study on the Improvement of Digital Periapical Images using Image Interpolation Methods

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    Image resampling is of particular interest in digital radiology. When an image is resampled to a new set of coordinates, blocking artifacts and image changes appear. To enhance image quality, interpolation algorithms have been used. Resampling is used to increase the number of points in an image to improve its appearance for display. The process of interpolation is fitting a continuous function to the discrete points in the digital image. The purpose of this study was to determine the effects of seven interpolation functions when resampling digital periapical images. The images were obtained by Digora, CDR and scanning of Ektaspeed plus periapical radiograms on a dry skull and a human subject. The subjects were exposed with an intraoral X-ray machine at 60 kVp and 70 kVp, with exposure times varying between 0.01 and 0.50 second. To determine which interpolation method would provide the better image, seven functions were compared: (1) nearest neighbor; (2) linear; (3) non-linear; (4) facet model; (5) cubic convolution; (6) cubic spline; (7) gray segment expansion. Resampled images were compared in terms of SNR (signal-to-noise ratio) and MTF (modulation transfer function) coefficient values. The obtained results were as follows: 1. The highest SNR value (75.96 dB) was obtained with the cubic convolution method and the lowest SNR value (72.44 dB) with the facet model method among the seven interpolation methods. 2. There were significant differences in SNR values among CDR, Digora and film scan (P<0.05). 4. There were significant differences in MTF coefficient values between the linear interpolation method and the other six interpolation methods (P<0.05). 5. The speed of computation was fastest with the nearest neighbor method and slowest with the non-linear method. 6. The better image was obtained with the cubic convolution, cubic spline and gray segment methods in ROC analysis. 7. The better edge sharpness was obtained with the gray segment expansion method.
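
    The flavor of such a comparison can be reproduced with SciPy's spline interpolation orders (the SNR definition below is an illustrative choice, not the one used in the study):

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(14)
      truth = ndimage.gaussian_filter(rng.normal(size=(128, 128)), sigma=1.5)
      low = truth[::2, ::2]                      # simulated low-resolution acquisition

      # Upsample back with different interpolation orders and compare to the original.
      for order, name in [(0, "nearest neighbor"), (1, "linear"), (3, "cubic spline")]:
          up = ndimage.zoom(low, 2, order=order)
          mse = ((up - truth) ** 2).mean()
          snr = 10 * np.log10((truth ** 2).mean() / mse)
          print(f"{name:17s}: SNR = {snr:5.2f} dB")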

  15. RELATIVE ORIENTATION AND MODIFIED PIECEWISE EPIPOLAR RESAMPLING FOR HIGH RESOLUTION SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    K. Gong

    2017-05-01

    Full Text Available High-resolution optical satellite sensors have entered a new era in the last few years, as satellite stereo images at half-meter or even 30 cm resolution have become available. Nowadays, high resolution satellite image data are commonly used for Digital Surface Model (DSM) generation and 3D reconstruction. It is common that the Rational Polynomial Coefficients (RPCs) provided by the vendors have limited precision and there is no ground control information available to refine the RPCs. Therefore, we present two relative orientation methods using corresponding image points only: the first method uses quasi ground control information, which is generated from the corresponding points and rough RPCs, for the bias-compensation model; the second method estimates the relative pointing errors on the matching image and removes these errors with an affine model. Neither method needs ground control information, and both are applied to the entire image. To get very dense point clouds, the Semi-Global Matching (SGM) method is an efficient tool. However, epipolar constraints are required before the matching process. In most conditions, satellite images have very large dimensions, whereas epipolar geometry generation and image resampling are usually carried out in small tiles. This paper therefore also presents a modified piecewise epipolar resampling method for the entire image without tiling. The quality of the proposed relative orientation and epipolar resampling methods is evaluated, and sub-pixel accuracy has been achieved in our work.

  16. A resampling-based meta-analysis for detection of differential gene expression in breast cancer

    Directory of Open Access Journals (Sweden)

    Ergul Gulusan

    2008-12-01

    Full Text Available Background: Accuracy in the diagnosis of breast cancer and classification of cancer subtypes has improved over the years with the development of well-established immunohistopathological criteria. More recently, diagnostic gene-sets at the mRNA expression level have been tested as better predictors of disease state. However, breast cancer is heterogeneous in nature; thus extraction of differentially expressed gene-sets that stably distinguish normal tissue from various pathologies poses challenges. Meta-analysis of high-throughput expression data using a collection of statistical methodologies leads to the identification of robust tumor gene expression signatures. Methods: A resampling-based meta-analysis strategy, which involves the use of resampling and application of distribution statistics in combination to assess the degree of significance in differential expression between sample classes, was developed. Two independent microarray datasets that contain normal breast, invasive ductal carcinoma (IDC) and invasive lobular carcinoma (ILC) samples were used for the meta-analysis. Expression of the genes, selected from the gene list for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes, was tested on 10 independent primary IDC samples and matched non-tumor controls by real-time qRT-PCR. Other existing breast cancer microarray datasets were used in support of the resampling-based meta-analysis. Results: The two independent microarray studies were found to be comparable, although differing in their experimental methodologies (Pearson correlation coefficient, R = 0.9389 and R = 0.8465 for ductal and lobular samples, respectively). The resampling-based meta-analysis has led to the identification of a highly stable set of genes for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes. The expression results of the selected genes were obtained through real-time qRT-PCR.

  17. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming

    2013-03-01

    The Gaussian geostatistical model has been widely used in modeling of spatial data. However, it is challenging to computationally implement this method because it requires the inversion of a large covariance matrix, particularly when there is a large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate of the parameters is updated accordingly under the framework of stochastic approximation. Since the proposed method makes use of only a small proportion of the data at each iteration, it avoids inverting large covariance matrices and thus is scalable to large datasets. The proposed method also leads to a general parameter estimation approach, maximum mean log-likelihood estimation, which includes the popular maximum (log)-likelihood estimation (MLE) approach as a special case and is expected to play an important role in analyzing large datasets. Under mild conditions, it is shown that the estimator resulting from the proposed method converges in probability to a set of parameter values of equivalent Gaussian probability measures, and that the estimator is asymptotically normally distributed. To the best of the authors' knowledge, the present study is the first one on asymptotic normality under infill asymptotics for general covariance functions. The proposed method is illustrated with large datasets, both simulated and real. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  18. A method to test the reproducibility and to improve performance of computer-aided detection schemes for digitized mammograms

    International Nuclear Information System (INIS)

    Zheng Bin; Gur, David; Good, Walter F.; Hardesty, Lara A.

    2004-01-01

    The purpose of this study is to develop a new method for assessing the reproducibility of computer-aided detection (CAD) schemes for digitized mammograms and to evaluate the possibility of using the implemented approach to improve CAD performance. Two thousand digitized mammograms (representing 500 cases) with 300 depicted verified masses were selected for the study. Series of images were generated for each digitized image by resampling after a series of slight image rotations. A CAD scheme developed in our laboratory was applied to all images to detect suspicious mass regions. We evaluated the reproducibility of the scheme using the detection sensitivity and false-positive rates for the original and resampled images. We also explored the possibility of improving CAD performance using three methods of combining results from the original and resampled images: simple grouping, averaging output scores, and averaging output scores after grouping. The CAD scheme generated a detection score (from 0 to 1) for each identified suspicious region. A region with a detection score >0.5 was considered positive. The CAD scheme detected 238 masses (79.3% case-based sensitivity) and identified 1093 false-positive regions (average 0.55 per image) in the original image dataset. In eleven repeated tests using the original and ten sets of rotated and resampled images, the scheme detected a maximum of 271 masses and identified as many as 2359 false-positive regions. Two hundred and eighteen masses (80.4%) and 618 false-positive regions (26.2%) were detected in all 11 sets of images. Combining detection results improved reproducibility and the overall CAD performance. In the range of an average false-positive detection rate between 0.5 and 1 per image, the sensitivity of the scheme could be increased by approximately 5% after averaging the scores of the regions detected in at least four images. At a low false-positive rate (e.g., an average of ≤0.3 per image), the grouping method

  19. Automatic recognition of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNNs.

    Science.gov (United States)

    Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan

    2018-06-06

    Ground-glass opacity (GGO) is a common imaging sign on high-resolution CT, which indicates that the lesion is more likely to be malignant than a common solid lung nodule. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. Present GGO recognition methods employ traditional low-level features, and their performance has improved slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models in this paper. Our hybrid resampling is performed on multiple views and multiple receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model. The multi-CNN model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and 0.83 F1 score. Our method is a promising approach for applying deep learning methods to the computer-aided analysis of specific CT imaging signs with insufficient labeled images. Graphical abstract We propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models in this paper. Our hybrid resampling reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model. Our method is a promising approach for applying deep learning methods to computer-aided analysis

  20. Bootstrap resampling: a powerful method of assessing confidence intervals for doses from experimental data

    International Nuclear Information System (INIS)

    Iwi, G.; Millard, R.K.; Palmer, A.M.; Preece, A.W.; Saunders, M.

    1999-01-01

    Bootstrap resampling provides a versatile and reliable statistical method for estimating the accuracy of quantities which are calculated from experimental data. It is an empirically based method, in which large numbers of simulated datasets are generated by computer from existing measurements, so that approximate confidence intervals of the derived quantities may be obtained by direct numerical evaluation. A simple introduction to the method is given via a detailed example of estimating 95% confidence intervals for cumulated activity in the thyroid following injection of 99mTc-sodium pertechnetate using activity-time data from 23 subjects. The application of the approach to estimating confidence limits for the self-dose to the kidney following injection of 99mTc-DTPA organ imaging agent based on uptake data from 19 subjects is also illustrated. Results are then given for estimates of doses to the foetus following administration of 99mTc-sodium pertechnetate for clinical reasons during pregnancy, averaged over 25 subjects. The bootstrap method is well suited for applications in radiation dosimetry including uncertainty, reliability and sensitivity analysis of dose coefficients in biokinetic models, but it can also be applied in a wide range of other biomedical situations. (author)
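
    The percentile bootstrap described here is straightforward to sketch in Python; the gamma-distributed placeholder data below merely stand in for the 23 subjects' cumulated-activity values.

        import numpy as np

        # Placeholder data standing in for per-subject cumulated activity;
        # the real study used activity-time measurements from 23 subjects.
        rng = np.random.default_rng(42)
        cumulated_activity = rng.gamma(shape=5.0, scale=2.0, size=23)

        B = 10_000                                   # number of bootstrap replicates
        boot_means = np.array([
            rng.choice(cumulated_activity, size=cumulated_activity.size, replace=True).mean()
            for _ in range(B)
        ])
        lo, hi = np.percentile(boot_means, [2.5, 97.5])   # percentile 95% CI
        print(f"mean = {cumulated_activity.mean():.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")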

  1. An add-in implementation of the RESAMPLING syntax under Microsoft EXCEL.

    Science.gov (United States)

    Meineke, I

    2000-10-01

    The RESAMPLING syntax defines a set of powerful commands, which allow the programming of probabilistic statistical models with few, easily memorized statements. This paper presents an implementation of the RESAMPLING syntax using Microsoft EXCEL with Microsoft WINDOWS(R) as a platform. Two examples are given to demonstrate typical applications of RESAMPLING in biomedicine. Details of the implementation with special emphasis on the programming environment are discussed at length. The add-in is available electronically to interested readers upon request. The use of the add-in facilitates numerical statistical analyses of data from within EXCEL in a comfortable way.

  2. Estimating variability in functional images using a synthetic resampling approach

    International Nuclear Information System (INIS)

    Maitra, R.; O'Sullivan, F.

    1996-01-01

    Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort in simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods.
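
    A minimal sketch of the synthetic-resampling idea: treat the reconstructed image as a stationary Gaussian random field and draw realizations directly in the image domain by spectral filtering of white noise. The Gaussian covariance shape and its length scale are assumptions standing in for a covariance estimated from actual reconstructions.

        import numpy as np

        # Draw synthetic realizations of a stationary Gaussian random field by
        # spectral filtering of white noise; the Gaussian covariance shape and
        # its length scale are assumptions in place of an estimated covariance.

        def synthetic_realizations(mean_img, corr_len, sigma, n_sim, seed=0):
            rng = np.random.default_rng(seed)
            ny, nx = mean_img.shape
            fy = np.fft.fftfreq(ny)[:, None]
            fx = np.fft.fftfreq(nx)[None, :]
            spec = np.exp(-2 * (np.pi * corr_len) ** 2 * (fx**2 + fy**2))
            sims = []
            for _ in range(n_sim):
                noise = rng.standard_normal((ny, nx))
                field = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(spec)))
                field *= sigma / field.std()         # rescale to the target std
                sims.append(mean_img + field)        # no reconstruction step needed
            return np.stack(sims)

        # Pixel-wise variance estimate from the synthetic ensemble:
        sims = synthetic_realizations(np.zeros((64, 64)), corr_len=3.0, sigma=0.1, n_sim=200)
        var_map = sims.var(axis=0)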

  3. Efficient p-value evaluation for resampling-based tests

    KAUST Repository

    Yu, K.; Liang, F.; Ciampa, J.; Chatterjee, N.

    2011-01-01

    The resampling-based test, which often relies on permutation or bootstrap procedures, has been widely used for statistical hypothesis testing when the asymptotic distribution of the test statistic is unavailable or unreliable. It requires repeated

  4. A resampling-based meta-analysis for detection of differential gene expression in breast cancer

    International Nuclear Information System (INIS)

    Gur-Dedeoglu, Bala; Konu, Ozlen; Kir, Serkan; Ozturk, Ahmet Rasit; Bozkurt, Betul; Ergul, Gulusan; Yulug, Isik G

    2008-01-01

    Accuracy in the diagnosis of breast cancer and classification of cancer subtypes has improved over the years with the development of well-established immunohistopathological criteria. More recently, diagnostic gene-sets at the mRNA expression level have been tested as better predictors of disease state. However, breast cancer is heterogeneous in nature; thus extraction of differentially expressed gene-sets that stably distinguish normal tissue from various pathologies poses challenges. Meta-analysis of high-throughput expression data using a collection of statistical methodologies leads to the identification of robust tumor gene expression signatures. A resampling-based meta-analysis strategy, which involves the use of resampling and application of distribution statistics in combination to assess the degree of significance in differential expression between sample classes, was developed. Two independent microarray datasets that contain normal breast, invasive ductal carcinoma (IDC), and invasive lobular carcinoma (ILC) samples were used for the meta-analysis. Expression of the genes selected from the gene list for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes was tested on 10 independent primary IDC samples and matched non-tumor controls by real-time qRT-PCR. Other existing breast cancer microarray datasets were used in support of the resampling-based meta-analysis. The two independent microarray studies were found to be comparable, although differing in their experimental methodologies (Pearson correlation coefficient, R = 0.9389 and R = 0.8465 for ductal and lobular samples, respectively). The resampling-based meta-analysis has led to the identification of a highly stable set of genes for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes. The expression results of the selected genes obtained through real-time qRT-PCR supported the meta-analysis results.

  5. Improvement of the Owner Distinction Method for Healing-Type Pet Robots

    Science.gov (United States)

    Nambo, Hidetaka; Kimura, Haruhiko; Hara, Mirai; Abe, Koji; Tajima, Takuya

    In order to reduce human stress, Animal Assisted Therapy, in which pets are used to comfort people, has attracted attention. However, because animals raise hygiene and safety concerns, it is difficult to use animal pets in hospitals. For this reason, pet robots have attracted attention as substitutes. Since pet robots pose no problems of sanitation or safety, they can be applied as a substitute for animal pets in the therapy. In our previous study, in which pet robots distinguish their owners as an animal pet does, we used a puppet-type pet robot equipped with pressure-type touch sensors. However, the accuracy of our method was not sufficient for practical use. In this paper, we propose a method to improve the accuracy of the distinction. The proposed method can be applied to capacitive touch sensors, such as those installed in AIBO, in addition to pressure-type touch sensors. Experimental results show that the proposed method improves distinction performance over the conventional method.

  6. NAIP Aerial Imagery (Resampled), Salton Sea - 2005 [ds425

    Data.gov (United States)

    California Natural Resource Agency — NAIP 2005 aerial imagery that has been resampled from 1-meter source resolution to approximately 30-meter resolution. This is a mosaic composed from several NAIP...

  7. VOYAGER 1 SATURN MAGNETOMETER RESAMPLED DATA 9.60 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 1 Saturn encounter magnetometer data that have been resampled at a 9.6 second sample rate. The data set is composed of 6 columns: 1)...

  8. VOYAGER 2 JUPITER MAGNETOMETER RESAMPLED DATA 48.0 SEC

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes Voyager 2 Jupiter encounter magnetometer data that have been resampled at a 48.0 second sample rate. The data set is composed of 6 columns: 1)...

  9. Speckle reduction in digital holography with resampling ring masks

    Science.gov (United States)

    Zhang, Wenhui; Cao, Liangcai; Jin, Guofan

    2018-01-01

    One-shot digital holographic imaging has the advantages of high stability and low temporal cost. However, the reconstruction is affected by speckle noise. A resampling ring-mask method in the spectrum domain is proposed for speckle reduction. The useful spectrum of one hologram is divided into several sub-spectra by ring masks. In the reconstruction, the angular spectrum transform, which involves no approximation, is applied to guarantee calculation accuracy. N reconstructed amplitude images are calculated from the corresponding sub-spectra. Thanks to the random distribution of speckle, superimposing these N uncorrelated amplitude images leads to a final reconstructed image with lower speckle noise. Normalized relative standard deviation values of the reconstructed image are used to evaluate the reduction of speckle. The effect of the method on the spatial resolution of the reconstructed image is also quantitatively evaluated. Experimental and simulation results prove the feasibility and effectiveness of the proposed method.
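
    The ring-mask construction can be sketched as follows, operating on a reconstructed complex object field; propagation by the angular spectrum method is omitted for brevity, and `field` is an assumed stand-in for the field encoded in one hologram.

        import numpy as np

        # Split the hologram spectrum into annular sub-spectra, reconstruct an
        # amplitude image from each, and average the uncorrelated amplitudes.

        def ring_mask_denoise(field, n_rings=8):
            ny, nx = field.shape
            F = np.fft.fftshift(np.fft.fft2(field))
            yy, xx = np.mgrid[:ny, :nx]
            r = np.hypot(yy - ny / 2, xx - nx / 2)
            r_max = r.max()
            amps = []
            for k in range(n_rings):
                mask = (r >= k * r_max / n_rings) & (r < (k + 1) * r_max / n_rings)
                sub = np.fft.ifft2(np.fft.ifftshift(F * mask))
                amps.append(np.abs(sub))             # one amplitude per sub-spectrum
            return np.mean(amps, axis=0)             # superposition lowers speckle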

  10. Random resampling masks: a non-Bayesian one-shot strategy for noise reduction in digital holography.

    Science.gov (United States)

    Bianco, V; Paturzo, M; Memmolo, P; Finizio, A; Ferraro, P; Javidi, B

    2013-03-01

    Holographic imaging may become severely degraded by a mixture of speckle and incoherent additive noise. Bayesian approaches reduce the incoherent noise, but prior information is needed on the noise statistics. With no prior knowledge, one-shot reduction of noise is a highly desirable goal, as the recording process is simplified and made faster. Indeed, neither multiple acquisitions nor a complex setup are needed. So far, this result has been achieved at the cost of a deterministic resolution loss. Here we propose a fast non-Bayesian denoising method that avoids this trade-off by means of a numerical synthesis of a moving diffuser. In this way, only one single hologram is required as multiple uncorrelated reconstructions are provided by random complementary resampling masks. Experiments show a significant incoherent noise reduction, close to the theoretical improvement bound, resulting in image-contrast improvement. At the same time, we preserve the resolution of the unprocessed image.

  11. Accelerated spike resampling for accurate multiple testing controls.

    Science.gov (United States)

    Harrison, Matthew T

    2013-02-01

    Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.

  12. Community level patterns in diverse systems: A case study of litter fauna in a Mexican pine-oak forest using higher taxa surrogates and re-sampling methods

    Science.gov (United States)

    Moreno, Claudia E.; Guevara, Roger; Sánchez-Rojas, Gerardo; Téllez, Dianeis; Verdú, José R.

    2008-01-01

    Environmental assessment at the community level in highly diverse ecosystems is limited by taxonomic constraints and statistical methods requiring true replicates. Our objective was to show how diverse systems can be studied at the community level using higher taxa as biodiversity surrogates, and re-sampling methods to allow comparisons. To illustrate this, we compared the abundance, richness, evenness and diversity of the litter fauna in a pine-oak forest in central Mexico among seasons, sites and collecting methods. We also assessed changes in the abundance of trophic guilds and evaluated the relationships between community parameters and litter attributes. With the direct search method we observed differences in the rate of taxa accumulation between sites. Bootstrap analysis showed that abundance varied significantly between seasons and sampling methods, but not between sites. In contrast, diversity and evenness were significantly higher at the managed than at the non-managed site. Tree regression models show that abundance varied mainly between seasons, whereas taxa richness was affected by litter attributes (composition and moisture content). The abundance of trophic guilds varied among methods and seasons, but overall we found that parasitoids, predators and detritivores decreased under management. Therefore, although our results suggest that management has positive effects on the richness and diversity of litter fauna, the analysis of trophic guilds revealed a contrasting story. Our results indicate that functional groups and re-sampling methods may be used as tools for describing community patterns in highly diverse systems. Also, the higher taxa surrogacy could be seen as a preliminary approach when it is not possible to identify the specimens at a low taxonomic level in a reasonable period of time and in a context of limited financial resources, but further studies are needed to test whether the results are specific to a system or whether they are general.

  13. ROSETTA-ORBITER SW RPCMAG 4 CR2 RESAMPLED V3.0

    Data.gov (United States)

    National Aeronautics and Space Administration — 2010-07-30 SBN:T.Barnes Updated and DATA_SET_DESCThis dataset contains RESAMPLED DATA of the CRUISE 2 phase (CR2). (Version 3.0 is the first version archived.)

  14. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Directory of Open Access Journals (Sweden)

    Yeqing Zhang

    2018-02-01

    Full Text Available For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully.

  15. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Science.gov (United States)

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-01-01

    For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301
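
    The peak-ratio acquisition metric described in both versions of this record (highest over second-highest correlation in the search space) is simple to sketch; the FFT-based circular correlation and the small exclusion window around the main peak are illustrative assumptions.

        import numpy as np

        # Circular correlation over all code phases in one FFT pass, followed by
        # the highest / second-highest peak ratio used as the acquisition metric.

        def acquire(signal, code, excl_width=2):
            corr = np.abs(np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(code))))
            k1 = int(np.argmax(corr))
            dist = (np.arange(len(corr)) - k1) % len(corr)   # circular distance to peak
            masked = corr.copy()
            masked[(dist <= excl_width) | (dist >= len(corr) - excl_width)] = 0.0
            return k1, corr[k1] / masked.max()       # code phase, detection metric

        # A satellite is declared acquired when the ratio exceeds a preset threshold.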

  16. A steady-State Genetic Algorithm with Resampling for Noisy Inventory Control

    NARCIS (Netherlands)

    Prestwich, S.; Tarim, S.A.; Rossi, R.; Hnich, B.

    2008-01-01

    Noisy fitness functions occur in many practical applications of evolutionary computation. A standard technique for solving these problems is fitness resampling but this may be inefficient or need a large population, and combined with elitism it may overvalue chromosomes or reduce genetic diversity.

  17. Optimal resampling for the noisy OneMax problem

    OpenAIRE

    Liu, Jialin; Fairbank, Michael; Pérez-Liébana, Diego; Lucas, Simon M.

    2016-01-01

    The OneMax problem is a standard benchmark optimisation problem for a binary search space. Recent work on applying a Bandit-Based Random Mutation Hill-Climbing algorithm to the noisy OneMax Problem showed that it is important to choose a good value for the resampling number to make a careful trade off between taking more samples in order to reduce noise, and taking fewer samples to reduce the total computational cost. This paper extends that observation, by deriving an analytical expression f...

  18. A novel approach for epipolar resampling of cross-track linear pushbroom imagery using orbital parameters model

    Science.gov (United States)

    Jannati, Mojtaba; Valadan Zoej, Mohammad Javad; Mokhtarzade, Mehdi

    2018-03-01

    This paper presents a novel approach to epipolar resampling of cross-track linear pushbroom imagery using the orbital parameters model (OPM). The backbone of the proposed method relies on modification of the attitude parameters of linear array stereo imagery in such a way as to parallelize the approximate conjugate epipolar lines (ACELs) with the instantaneous base line (IBL) of the conjugate image points (CIPs). Afterward, a complementary rotation is applied in order to parallelize all the ACELs throughout the stereo imagery. The new estimated attitude parameters are evaluated based on the direction of the IBL and the ACELs. Due to the spatial and temporal variability of the IBL (respectively, changes in column and row numbers of the CIPs) and the nonparallel nature of the epipolar lines in stereo linear images, polynomials in both the column and row numbers of the CIPs are used to model the new attitude parameters. As the instantaneous position of the sensors remains fixed, the digital elevation model (DEM) of the area of interest is not required in the resampling process. According to the experimental results obtained from two pairs of SPOT and RapidEye stereo imagery with high elevation relief, the average absolute values of the remaining vertical parallaxes of CIPs in the normalized images were 0.19 and 0.28 pixels, respectively, which confirms the high accuracy and applicability of the proposed method.

  19. Wayside Bearing Fault Diagnosis Based on Envelope Analysis Paved with Time-Domain Interpolation Resampling and Weighted-Correlation-Coefficient-Guided Stochastic Resonance

    Directory of Open Access Journals (Sweden)

    Yongbin Liu

    2017-01-01

    Full Text Available Envelope spectrum analysis is a simple, effective, and classic method for bearing fault identification. However, in a wayside acoustic health monitoring system, owing to the high relative speed between the railway vehicle and the wayside-mounted microphone, the recorded signal is embedded with a Doppler effect, which shifts and expands the bearing fault characteristic frequency (FCF). What is more, the background noise is relatively heavy, which makes it difficult to identify the FCF. To solve these two problems, this study introduces solutions for the wayside acoustic fault diagnosis of train bearings based on Doppler effect reduction using an improved time-domain interpolation resampling (TIR) method and diagnosis-relevant information enhancement using Weighted-Correlation-Coefficient-Guided Stochastic Resonance (WCCSR). First, the traditional TIR method is improved by combining it with kinematic parameter estimation based on time-frequency analysis and curve fitting. Based on the estimated parameters, the Doppler effect is easily removed using TIR. Second, WCCSR is employed to enhance the diagnosis-relevant periodic signal component in the obtained Doppler-free signal. Finally, building on the above two procedures, the local fault is identified using envelope spectrum analysis. Simulated and experimental cases have verified the effectiveness of the proposed method.
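
    A minimal sketch of time-domain interpolation resampling under an assumed straight-pass geometry: given estimated kinematic parameters (speed v, closest-approach distance d, sound speed c), the Doppler-distorted record is interpolated back onto a uniform emission-time grid.

        import numpy as np

        # The geometry and the delay approximation below are assumed
        # simplifications, not the paper's full estimation procedure.

        def tir_resample(x, fs, v, d, c=340.0, t0=0.0):
            t_r = np.arange(len(x)) / fs                     # reception times
            dist = np.hypot(d, v * (t_r - t0))               # source-microphone range
            t_e = t_r - dist / c                             # approximate emission times
            t_uniform = np.linspace(t_e[0], t_e[-1], len(x)) # uniform emission grid
            return np.interp(t_uniform, t_e, x)              # Doppler-reduced signal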

  20. A comparison of resampling schemes for estimating model observer performance with small ensembles

    Science.gov (United States)

    Elshahaby, Fatma E. A.; Jha, Abhinav K.; Ghaly, Michael; Frey, Eric C.

    2017-09-01

    In objective assessment of image quality, an ensemble of images is used to compute the 1st and 2nd order statistics of the data. Often, only a finite number of images is available, leading to the issue of statistical variability in numerical observer performance. Resampling-based strategies can help overcome this issue. In this paper, we compared different combinations of resampling schemes (the leave-one-out (LOO) and the half-train/half-test (HT/HT)) and model observers (the conventional channelized Hotelling observer (CHO), channelized linear discriminant (CLD) and channelized quadratic discriminant). Observer performance was quantified by the area under the ROC curve (AUC). For a binary classification task and for each observer, the AUC value for an ensemble size of 2000 samples per class served as a gold standard for that observer. Results indicated that each observer yielded a different performance depending on the ensemble size and the resampling scheme. For a small ensemble size, the combination [CHO, HT/HT] had more accurate rankings than the combination [CHO, LOO]. Using the LOO scheme, the CLD and CHO had similar performance for large ensembles. However, the CLD outperformed the CHO and gave more accurate rankings for smaller ensembles. As the ensemble size decreased, the performance of the [CHO, LOO] combination seriously deteriorated as opposed to the [CLD, LOO] combination. Thus, it might be desirable to use the CLD with the LOO scheme when smaller ensemble size is available.

  1. MapReduce particle filtering with exact resampling and deterministic runtime

    Science.gov (United States)

    Thiyagalingam, Jeyarajan; Kekempanos, Lykourgos; Maskell, Simon

    2017-12-01

    Particle filtering is a numerical Bayesian technique that has great potential for solving sequential estimation problems involving non-linear and non-Gaussian models. Since the estimation accuracy achieved by particle filters improves as the number of particles increases, it is natural to consider as many particles as possible. MapReduce is a generic programming model that makes it possible to scale a wide variety of algorithms to Big data. However, despite the application of particle filters across many domains, little attention has been devoted to implementing particle filters using MapReduce. In this paper, we describe an implementation of a particle filter using MapReduce. We focus on a component that would otherwise be a bottleneck to parallel execution, the resampling component. We devise a new implementation of this component, which requires no approximations, has O(N) spatial complexity and deterministic O((log N)^2) time complexity. Results demonstrate the utility of this new component and culminate in consideration of a particle filter with 2^24 particles being distributed across 512 processor cores.
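
    As a single-node point of reference for the resampling component, the textbook systematic resampling step runs in O(N); this baseline is not the paper's MapReduce formulation, only the component it parallelises.

        import numpy as np

        # Systematic resampling: one uniform draw stratified into N positions,
        # then a single pass over the cumulative weights.

        def systematic_resample(weights, rng=None):
            rng = rng or np.random.default_rng()
            n = len(weights)
            positions = (rng.random() + np.arange(n)) / n
            cumsum = np.cumsum(weights)
            cumsum[-1] = 1.0                         # guard against round-off
            return np.searchsorted(cumsum, positions)   # surviving particle indices

        print(systematic_resample(np.array([0.1, 0.2, 0.05, 0.4, 0.25])))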

  2. On uniform resampling and gaze analysis of bidirectional texture functions

    Czech Academy of Sciences Publication Activity Database

    Filip, Jiří; Chantler, M.J.; Haindl, Michal

    2009-01-01

    Roč. 6, č. 3 (2009), s. 1-15 ISSN 1544-3558 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 Grant - others:EC Marie Curie(BE) 41358 Institutional research plan: CEZ:AV0Z10750506 Keywords : BTF * texture * eye tracking Subject RIV: BD - Theory of Information Impact factor: 1.447, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-on uniform resampling and gaze analysis of bidirectional texture functions.pdf

  3. Fine-mapping additive and dominant SNP effects using group-LASSO and Fractional Resample Model Averaging

    Science.gov (United States)

    Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William

    2014-01-01

    Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853
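
    The resampling-with-fractional-weights idea can be sketched with an ordinary LASSO: refit under random Dirichlet observation weights and record how often each predictor survives. The Dirichlet weighting and fixed penalty below are illustrative assumptions; LLARRMA-dawg itself uses a group LASSO over additive and dominance codings with calibrated randomized penalization.

        import numpy as np
        from sklearn.linear_model import Lasso

        # Per-predictor selection frequency under fractional observation weights.

        def selection_frequencies(X, y, n_resamples=200, alpha=0.05, seed=0):
            rng = np.random.default_rng(seed)
            n, p = X.shape
            hits = np.zeros(p)
            for _ in range(n_resamples):
                w = rng.dirichlet(np.ones(n)) * n    # fractional weights, mean 1
                model = Lasso(alpha=alpha).fit(X, y, sample_weight=w)
                hits += model.coef_ != 0             # did each predictor survive?
            return hits / n_resamples                # per-SNP selection frequency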

  4. Event-based stochastic point rainfall resampling for statistical replication and climate projection of historical rainfall series

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Korup Andersen, Aske; Larsen, Anders Badsberg

    2017-01-01

    Continuous and long rainfall series are a necessity in rural and urban hydrology for analysis and design purposes. Local historical point rainfall series often cover several decades, which makes it possible to estimate rainfall means at different timescales, and to assess return periods of extreme...... includes climate changes projected to a specific future period. This paper presents a framework for resampling of historical point rainfall series in order to generate synthetic rainfall series, which has the same statistical properties as an original series. Using a number of key target predictions...... for the future climate, such as winter and summer precipitation, and representation of extreme events, the resampled historical series are projected to represent rainfall properties in a future climate. Climate-projected rainfall series are simulated by brute force randomization of model parameters, which leads...

  5. Automotive FMCW Radar-Enhanced Range Estimation via a Local Resampling Fourier Transform

    Directory of Open Access Journals (Sweden)

    Cailing Wang

    2016-02-01

    Full Text Available In complex traffic scenarios, more accurate measurement and discrimination for an automotive frequency-modulated continuous-wave (FMCW) radar is required for intelligent robots, driverless cars and driver-assistant systems. A more accurate range estimation method based on a local resampling Fourier transform (LRFT) for a FMCW radar is developed in this paper. Radar signal correlation in the phase space sees a higher signal-to-noise ratio (SNR) to achieve more accurate ranging, and the LRFT - which acts on a local neighbourhood as a refinement step - can achieve a more accurate target range. The rough range is estimated through conditional pulse compression (PC) and then, around the initial rough estimation, a refined estimation through the LRFT in the local region achieves greater precision. Furthermore, the LRFT algorithm is tested in numerous simulations and physical system experiments, which show that the LRFT algorithm achieves a more precise range estimation than traditional FFT-based algorithms, especially for lower bandwidth signals.

  6. A practitioners guide to resampling for data analysis, data mining, and modeling: A cookbook for starters

    NARCIS (Netherlands)

    van den Broek, Egon

    A practitioner’s guide to resampling for data analysis, data mining, and modeling provides a gentle and pragmatic introduction to the proposed topics. Its supporting Web site was offline and, hence, its potential added value could not be verified. The book refrains from using advanced mathematics

  7. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    Science.gov (United States)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies showed that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state update with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequentially smoothing of particle weights for state and parameter resampling within a time window as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.

  8. Active surface model improvement by energy function optimization for 3D segmentation.

    Science.gov (United States)

    Azimifar, Zohreh; Mohaddesi, Mahsa

    2015-04-01

    This paper proposes an optimized and efficient active surface model by improving the energy functions, searching method, neighborhood definition and resampling criterion. Extracting an accurate surface of the desired object from a number of 3D images using active surface and deformable models plays an important role in computer vision, especially medical image processing. Different powerful segmentation algorithms have been suggested to address the limitations associated with model initialization, poor convergence to surface concavities and slow convergence rate. This paper proposes a method to improve one of the strongest recent segmentation algorithms, namely the Decoupled Active Surface (DAS) method. We consider the gradient of a wavelet edge-extracted image and local phase coherence as external energy to extract more information from images, and we use a curvature integral as internal energy to focus on high-curvature region extraction. Similarly, we use resampling of points and a line search for point selection to improve the accuracy of the algorithm. We further employ an estimation of the desired object as an initialization for the active surface model. A number of tests and experiments have been done, and the results show improvements in the extracted surface accuracy and computational time of the presented algorithm compared with the best recent active surface models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements for the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period through updating the iterative period after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates in the assigned iterative counts. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and has higher robustness. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
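
    The period-estimation step can be sketched as follows: take the envelope via the analytic signal and read the fault period off the dominant autocorrelation peak. The minimum-lag guard is an assumed detail, not a parameter named in the paper.

        import numpy as np
        from scipy.signal import hilbert

        # Estimate the fault period as the dominant peak of the envelope
        # autocorrelation, excluding the trivial zero-lag peak.

        def estimate_period(x, min_lag=10):
            env = np.abs(hilbert(x))                 # envelope via the analytic signal
            env = env - env.mean()
            ac = np.correlate(env, env, mode="full")[len(env) - 1:]
            return min_lag + int(np.argmax(ac[min_lag:]))   # lag of dominant peak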

  10. Evaluation of resampling applied to UAV imagery for weed detection using OBIA

    OpenAIRE

    Borra, I.; Peña Barragán, José Manuel; Torres Sánchez, Jorge; López Granados, Francisca

    2015-01-01

    Unmanned aerial vehicles (UAVs) are an emerging technology for the study of agricultural parameters, owing to their characteristics and their ability to carry sensors covering different spectral ranges. In this work, early-phase weed patches were detected and mapped by means of OBIA analysis in order to produce maps that optimize site-specific herbicide treatment. Resampling was applied to images taken in the field from a UAV (UAV-I) to create a new image with …

  11. TU-CD-BRA-01: A Novel 3D Registration Method for Multiparametric Radiological Images

    International Nuclear Information System (INIS)

    Akhbardeh, A; Parekth, VS; Jacobs, MA

    2015-01-01

    Purpose: Multiparametric and multimodality radiological imaging methods, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), provide multiple types of tissue contrast and anatomical information for clinical diagnosis. However, these radiological modalities are acquired using very different technical parameters, e.g., field of view (FOV), matrix size, and scan planes, which can lead to challenges in registering the different data sets. Therefore, we developed a hybrid registration method based on 3D wavelet transformation and 3D interpolations that performs 3D resampling and rotation of the target radiological images without loss of information. Methods: T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), dynamic contrast-enhanced (DCE) MRI and PET/CT were used in the registration algorithm from breast and prostate data at 3T MRI and multimodality (PET/CT) cases. The hybrid registration scheme consists of several steps to reslice and match each modality using a combination of 3D wavelets, interpolations, and affine registration steps. First, orthogonal reslicing is performed to equalize FOV, matrix sizes and the number of slices using wavelet transformation. Second, angular resampling of the target data is performed to match the reference data. Finally, using the optimized angles from resampling, 3D registration using a similarity transformation (scaling and translation) between the reference and the resliced target volume is performed. After registration, the mean-square-error (MSE) and Dice similarity (DS) between the reference and registered target volumes were calculated. Results: The 3D registration method registered synthetic and clinical data with significant improvement (p<0.05) of overlap between anatomical structures. After transforming and deforming the synthetic data, the MSE and Dice similarity were 0.12 and 0.99. The average improvement of the MSE in breast was 62% (0.27 to 0.10) and prostate was

  12. An improved early detection method of type-2 diabetes mellitus using multiple classifier system

    KAUST Repository

    Zhu, Jia

    2015-01-01

    The specific causes of complex diseases such as Type-2 Diabetes Mellitus (T2DM) have not yet been identified. Nevertheless, many medical science researchers believe that complex diseases are caused by a combination of genetic, environmental, and lifestyle factors. Detection of such diseases becomes an issue because it is not free from false presumptions and is accompanied by unpredictable effects. Given the greatly increased amount of data gathered in medical databases, data mining has been used widely in recent years to detect and improve the diagnosis of complex diseases. However, past research showed that no single classifier can be considered optimal for all problems. Therefore, in this paper, we focus on employing multiple classifier systems to improve the accuracy of detection for complex diseases, such as T2DM. We proposed a dynamic weighted voting scheme called multiple factors weighted combination for classifiers' decision combination. This method considers not only the local and global accuracy but also the diversity among classifiers and localized generalization error of each classifier. We evaluated our method on two real T2DM data sets and other medical data sets. The favorable results indicated that our proposed method significantly outperforms individual classifiers and other fusion methods.
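
    A generic sketch of decision combination by weighted voting is given below; it weights by validation accuracy only, whereas the proposed scheme additionally folds in local accuracy, diversity, and localized generalization error.

        import numpy as np

        # Combine class-probability outputs of several classifiers with weights
        # derived from validation accuracy alone (a simplification of the
        # multiple-factors scheme described above).

        def weighted_vote(probas, val_accuracies):
            """probas: list of (n_samples, n_classes) arrays, one per classifier."""
            w = np.asarray(val_accuracies, dtype=float)
            w = w / w.sum()                          # normalise weights to sum to 1
            combined = sum(wi * p for wi, p in zip(w, probas))
            return combined.argmax(axis=1)           # class with highest weighted score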

  13. An Improved MLVF Method and Its Comparison with Traditional MLVF, spa Typing, MLST/SCCmec and PFGE for the Typing of Methicillin-Resistant Staphylococcus aureus

    Science.gov (United States)

    Du, Xue-Fei; Xiao, Meng; Liang, Hong-Yan; Sun, Zhe; Jiang, Yue-Hong; Chen, Guo-Yu; Meng, Xiao-Yu; Zou, Gui-Ling; Zhang, Li; Liu, Ya-Li; Zhang, Hui; Sun, Hong-Li; Jiang, Xiao-Feng; Xu, Ying-Chun

    2014-01-01

    Methicillin-resistant Staphylococcus aureus (MRSA) has become an important nosocomial pathogen, causing considerable morbidity and mortality. During the last 20 years, a variety of genotyping methods have been introduced for screening the prevalence of MRSA. In this study, we developed and evaluated an improved approach, capillary gel electrophoresis-based multilocus variable-number tandem-repeat fingerprinting (CGE/MLVF), for rapid MRSA typing. A total of 42 well-characterized strains and 116 non-repetitive clinical MRSA isolates collected from six hospitals in northeast China between 2009 and 2010 were tested. The results obtained by CGE/MLVF against clinical isolates were compared with traditional MLVF, spa typing, multilocus sequence typing/staphylococcal cassette chromosome mec (MLST/SCCmec) and pulsed-field gel electrophoresis (PFGE). The discriminatory power estimated by Simpson’s index of diversity was 0.855 (28 types), 0.855 (28 patterns), 0.623 (11 types), 0.517 (8 types) and 0.854 (28 patterns) for CGE/MLVF, traditional MLVF, spa typing, MLST/SCCmec and PFGE, respectively. All methods tested showed satisfactory concordance at the clonal complex level, as calculated by the adjusted Rand coefficient. CGE/MLVF showed better reproducibility and accuracy than traditional MLVF and PFGE methods. In addition, CGE/MLVF has the potential to produce portable results. In conclusion, CGE/MLVF is a rapid and easy-to-use MRSA typing method with lower cost, good reproducibility and high discriminatory power for monitoring outbreaks and the clonal spread of MRSA isolates. PMID:24406728
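
    The discriminatory power quoted above is Simpson's index of diversity, D = 1 - Σ_j n_j(n_j - 1) / (N(N - 1)), where n_j is the number of isolates of type j among N isolates; a few lines of Python suffice to compute it.

        from collections import Counter

        # Simpson's index of diversity: D = 1 - sum_j n_j(n_j - 1) / (N(N - 1)).

        def simpson_diversity(type_labels):
            counts = Counter(type_labels).values()
            n = sum(counts)
            return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

        # Toy check: 5 isolates split into types A, A, B, B, C gives D = 0.8.
        print(round(simpson_diversity(list("AABBC")), 3))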

  14. Groundwater-quality data in seven GAMA study units: results from initial sampling, 2004-2005, and resampling, 2007-2008, of wells: California GAMA Program Priority Basin Project

    Science.gov (United States)

    Kent, Robert; Belitz, Kenneth; Fram, Miranda S.

    2014-01-01

    The Priority Basin Project (PBP) of the Groundwater Ambient Monitoring and Assessment (GAMA) Program was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The GAMA-PBP began sampling, primarily of public-supply wells, in May 2004. By the end of February 2006, seven (of what would eventually be 35) study units had been sampled over a wide area of the State. Selected wells in these first seven study units were resampled for water quality from August 2007 to November 2008 as part of an assessment of temporal trends in water quality by the GAMA-PBP. The initial sampling was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within the seven study units. In the 7 study units, 462 wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study area. Wells selected this way are referred to as grid wells or status wells. Approximately 3 years after the initial sampling, 55 of these previously sampled status wells (approximately 10 percent in each study unit) were randomly selected for resampling. The seven resampled study units, the total number of status wells sampled for each study unit, and the number of these wells resampled for trends are as follows, in chronological order of sampling: San Diego Drainages (53 status wells, 7 trend wells), North San Francisco Bay (84, 10), Northern San Joaquin Basin (51, 5), Southern Sacramento Valley (67, 7), San Fernando–San Gabriel (35, 6), Monterey Bay and Salinas Valley Basins (91, 11), and Southeast San Joaquin Valley (83, 9). The groundwater samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], pesticides, and pesticide degradates), constituents of special interest (perchlorate, N

  15. A Novel Bearing Fault Diagnosis Method Based on Gaussian Restricted Boltzmann Machine

    Directory of Open Access Journals (Sweden)

    Xiao-hui He

    2016-01-01

    Full Text Available To realize effective fault diagnosis of bearings, this paper presents a novel bearing fault diagnosis method based on the Gaussian restricted Boltzmann machine (Gaussian RBM). Vibration signals are first resampled to the same equivalent speed. Subsequently, the envelope spectra of the resampled data are used directly as feature vectors to represent the fault types of the bearing. Finally, in order to deal with the high-dimensional feature vectors based on the envelope spectrum, a classifier model based on the Gaussian RBM is applied. The Gaussian RBM has the ability to provide a closed-form representation of the distribution underlying the training data, and it is very convenient for modeling high-dimensional real-valued data. Experiments on 10 different data sets verify the performance of the proposed method. The superiority of the Gaussian RBM classifier is also confirmed by comparison with other classifiers, such as the extreme learning machine, support vector machine, and deep belief network. The robustness of the proposed method is also studied in this paper. It can be concluded that the proposed method can realize bearing fault diagnosis accurately and effectively.

  16. Use of a 137Cs re-sampling technique to investigate temporal changes in soil erosion and sediment mobilisation for a small forested catchment in southern Italy

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.; Alewell, Christine; Callegari, Giovanni; Mabit, Lionel; Mallimo, Nicola; Meusburger, Katrin; Zehringer, Markus

    2014-01-01

    Soil erosion and both its on-site and off-site impacts are increasingly seen as a serious environmental problem across the world. The need for an improved evidence base on soil loss and soil redistribution rates has directed attention to the use of fallout radionuclides, and particularly 137Cs, for documenting soil redistribution rates. This approach possesses important advantages over more traditional means of documenting soil erosion and soil redistribution. However, one key limitation of the approach is the time-averaged or lumped nature of the estimated erosion rates. In nearly all cases, these will relate to the period extending from the main period of bomb fallout to the time of sampling. Increasing concern for the impact of global change, particularly that related to changing land use and climate change, has frequently directed attention to the need to document changes in soil redistribution rates within this period. Re-sampling techniques, which should be distinguished from repeat-sampling techniques, have the potential to meet this requirement. As an example, the use of a re-sampling technique to derive estimates of the mean annual net soil loss from a small (1.38 ha) forested catchment in southern Italy is reported. The catchment was originally sampled in 1998 and samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimate of mean annual erosion for the period 1954–1998 with that for the period 1999–2013. The availability of measurements of sediment yield from the catchment for parts of the overall period made it possible to compare the results provided by the 137Cs re-sampling study with the estimates of sediment yield for the same periods. In order to compare the estimates of soil loss and sediment yield for the two different periods, it was necessary to establish the uncertainty associated with the individual estimates. In the absence of a generally accepted procedure

  17. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    Science.gov (United States)

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.

  18. Improving the quality of extracting dynamics from interspike intervals via a resampling approach

    Science.gov (United States)

    Pavlova, O. N.; Pavlov, A. N.

    2018-04-01

    We address the problem of improving the quality of characterizing chaotic dynamics based on point processes produced by different types of neuron models. Despite the presence of embedding theorems for non-uniformly sampled dynamical systems, the case of short data analysis requires additional attention because the selection of algorithmic parameters may have an essential influence on estimated measures. We consider how the preliminary processing of interspike intervals (ISIs) can increase the precision of computing the largest Lyapunov exponent (LE). We report general features of characterizing chaotic dynamics from point processes and show that independently of the selected mechanism for spike generation, the performed preprocessing reduces computation errors when dealing with a limited amount of data.

  19. Methods to Improve Osseointegration of Dental Implants in Low Quality (Type-IV) Bone: An Overview

    Directory of Open Access Journals (Sweden)

    Hamdan S. Alghamdi

    2018-01-01

    Full Text Available Nowadays, dental implants have become a more common treatment for replacing missing teeth and aim to improve chewing efficiency, physical health, and esthetics. The favorable clinical performance of dental implants has been attributed to their firm osseointegration, as introduced by Brånemark in 1965. Although the survival rate of dental implants over a 10-year observation has been reported to be higher than 90% in totally edentulous jaws, the clinical outcome of implant treatment is challenged in compromised (bone) conditions, as are frequently present in elderly people. The biomechanical characteristics of bone in aged patients do not offer proper stability to implants, being similar to type-IV bone (Lekholm & Zarb classification), in which a decreased clinical fixation of implants has been clearly demonstrated. However, the search for improved osseointegration has continued forward for the new evolution of modern dental implants. This represents a continuum of developments spanning more than 20 years of research on implant related-factors including surgical techniques, implant design, and surface properties. The methods to enhance osseointegration of dental implants in low quality (type-IV) bone are described in a general manner in this review.

  20. Methods to Improve Osseointegration of Dental Implants in Low Quality (Type-IV) Bone: An Overview.

    Science.gov (United States)

    Alghamdi, Hamdan S

    2018-01-13

    Nowadays, dental implants have become a more common treatment for replacing missing teeth and aim to improve chewing efficiency, physical health, and esthetics. The favorable clinical performance of dental implants has been attributed to their firm osseointegration, as introduced by Brånemark in 1965. Although the survival rate of dental implants over a 10-year observation has been reported to be higher than 90% in totally edentulous jaws, the clinical outcome of implant treatment is challenged in compromised (bone) conditions, as are frequently present in elderly people. The biomechanical characteristics of bone in aged patients do not offer proper stability to implants, being similar to type-IV bone (Lekholm & Zarb classification), in which a decreased clinical fixation of implants has been clearly demonstrated. However, the search for improved osseointegration has continued forward for the new evolution of modern dental implants. This represents a continuum of developments spanning more than 20 years of research on implant related-factors including surgical techniques, implant design, and surface properties. The methods to enhance osseointegration of dental implants in low quality (type-IV) bone are described in a general manner in this review.

  1. Improving settlement type classification of aerial images

    CSIR Research Space (South Africa)

    Mdakane, L

    2014-10-01

    …an automated method can be used to help identify human settlements in a fixed, repeatable and timely manner. The main contribution of this work is to improve generalisation in settlement-type classification of aerial imagery. Images acquired at different dates...

  2. Designing multiplex PCR system of Campylobacter jejuni for efficient typing by improving monoplex PCR binary typing method.

    Science.gov (United States)

    Yamada, Kazuhiro; Ibata, Ami; Suzuki, Masahiro; Matsumoto, Masakado; Yamashita, Teruo; Minagawa, Hiroko; Kurane, Ryuichiro

    2015-01-01

    Campylobacter jejuni is responsible for the majority of Campylobacter infections. For molecular epidemiological studies of outbreaks, pulsed-field gel electrophoresis (PFGE) is generally performed, but PFGE has several problems. The PCR binary typing (P-BIT) method is a recently developed typing method for Campylobacter spp. that has been reported to have discriminatory power and stability similar to those of PFGE. We modified the P-BIT method from 18 monoplex PCRs to two multiplex PCR systems (mP-BIT). The same results were obtained from monoplex PCRs using the original primers and from multiplex PCR in representative isolates. The mP-BIT can analyze 48 strains at a time using 96-well PCR systems and can identify C. jejuni, because mP-BIT includes a C. jejuni marker. Typing of the isolates by mP-BIT and PFGE demonstrated generally concordant results, and the mP-BIT method (D = 0.980) has a discriminatory power similar to that of PFGE with SmaI digest (D = 0.975) or KpnI digest (D = 0.987), consistent with the original article. The mP-BIT method is quick, simple, and easy, and can be performed at low cost as a multiplex PCR system. Therefore, the mP-BIT method with two multiplex PCR systems has high potential as a rapid first-line surveillance typing assay of C. jejuni and can be used for routine surveillance and outbreak investigations of C. jejuni in the future. Copyright © 2014 Japanese Society of Chemotherapy and The Japanese Association for Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  3. Improved Particle Filter for Passive Target Tracking

    Institute of Scientific and Technical Information of China (English)

    邓小龙; 谢剑英; 杨煜普

    2005-01-01

    As a new method for dealing with any nonlinear or non-Gaussian distributions, based on the Monte Carlo methods and Bayesian filtering, particle filters (PF) are favored by researchers and widely applied in many fields. Based on particle filtering, an improved extended Kalman filter (EKF) proposal distribution is presented. Evaluation of the weights is simplified and other improved techniques including the residual resampling step and Markov Chain Monte Carlo method are introduced for target tracking. Performances of the EKF, basic PF and the improved PF are compared in target tracking examples. The simulation results confirm that the improved particle filter outperforms the others.
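
    The residual resampling step mentioned in this record is a standard particle-filter component and can be sketched compactly. The snippet below is a generic implementation, not the authors' code: it keeps the deterministic integer part of N·w copies of each particle and draws the remaining slots multinomially from the fractional weights.

    ```python
    import numpy as np

    def residual_resample(weights, rng=None):
        """Residual resampling: keep floor(N*w) deterministic copies of each
        particle, then draw the remaining slots from the fractional weights."""
        rng = rng or np.random.default_rng()
        n = len(weights)
        scaled = n * weights
        copies = np.floor(scaled).astype(int)        # guaranteed copies
        idx = np.repeat(np.arange(n), copies)
        n_rest = n - copies.sum()
        if n_rest > 0:
            residual = scaled - copies               # leftover probability mass
            residual /= residual.sum()
            idx = np.concatenate([idx, rng.choice(n, size=n_rest, p=residual)])
        return idx

    # Toy usage: indices of the particles that survive one resampling step.
    w = np.array([0.5, 0.2, 0.1, 0.05, 0.05, 0.04, 0.03, 0.02, 0.005, 0.005])
    print(residual_resample(w / w.sum()))
    ```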

  4. Charting improvements in US registry HLA typing ambiguity using a typing resolution score.

    Science.gov (United States)

    Paunić, Vanja; Gragert, Loren; Schneider, Joel; Müller, Carlheinz; Maiers, Martin

    2016-07-01

    Unrelated stem cell registries have been collecting HLA typing of volunteer bone marrow donors for over 25 years. Donor selection for hematopoietic stem cell transplantation is based primarily on matching the alleles of donors and patients at five polymorphic HLA loci. As HLA typing technologies have continually advanced since the beginnings of stem cell transplantation, registries have accrued typings of varied HLA typing ambiguity. We present a new typing resolution score (TRS), based on the likelihood of self-match, that allows the systematic comparison of HLA typings across different methods, data sets and populations. We apply the TRS to chart improvement in HLA typing within the Be The Match Registry of the United States from the initiation of DNA-based HLA typing to the current state of high-resolution typing using next-generation sequencing technologies. In addition, we present a publicly available online tool for evaluation of any given HLA typing. This TRS objectively evaluates HLA typing methods and can help define standards for acceptable recruitment HLA typing. Copyright © 2016 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.

  5. Comparison of parametric and bootstrap method in bioequivalence test.

    Science.gov (United States)

    Ahn, Byung-Jin; Yim, Dong-Seok

    2009-10-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation on bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log(AUC) datasets, the estimates of formulation effects did not follow a Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one type of nonparametric CI of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log(AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
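
    The nonparametric side of this comparison amounts to a percentile bootstrap of the per-subject formulation effect. The sketch below is a minimal illustration with simulated stand-in data (the subject count, means, and variances are hypothetical, and the original work used SAS rather than Python): resample the log(AUC) differences with replacement, take the 5th and 95th percentiles of the resampled means, and exponentiate to get a 90% CI for the ratio.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical paired log(AUC) values for test and reference formulations.
    log_auc_test = rng.normal(4.00, 0.25, size=24)
    log_auc_ref = rng.normal(3.95, 0.25, size=24)
    diff = log_auc_test - log_auc_ref              # per-subject formulation effect

    # Percentile bootstrap of the mean formulation effect.
    boot = np.array([rng.choice(diff, size=diff.size, replace=True).mean()
                     for _ in range(1000)])
    lo, hi = np.percentile(boot, [5, 95])          # nonparametric 90% CI
    print(f"ratio CI: {np.exp(lo):.3f}-{np.exp(hi):.3f} (BE window 0.80-1.25)")
    ```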

  6. Vildagliptin improves endothelium-dependent vasodilatation in type 2 diabetes

    NARCIS (Netherlands)

    van Poppel, P.C.; Netea, M.G.; Smits, P.; Tack, C.J.J.

    2011-01-01

    OBJECTIVE: To investigate whether the dipeptidyl peptidase-4 inhibitor vildagliptin improves endothelium-dependent vasodilatation in patients with type 2 diabetes. RESEARCH DESIGN AND METHODS: Sixteen subjects with type 2 diabetes (age 59.8 +/- 6.8 years, BMI 29.1 +/- 4.8 kg/m(2), HbA(1c) 6.97 +/-

  7. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    Science.gov (United States)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction approaches, and to design algorithms for the evaluation of IUE High Dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.

  8. An improved tidal method without water level

    Science.gov (United States)

    Luo, xiaowen

    2017-04-01

    At present, most tidal data are obtained using water-level and pressure-type gauges, but these are difficult to install and their readings have low accuracy. In view of these facts, and in order to improve tidal accuracy, an improved method is introduced: sea level is obtained at a given time using a high-precision GNSS buoy combined with instantaneous positions from a pressure gauge. The method has two steps: (1) the GNSS time service is used as the source of the synchronization reference in tidal measurement; (2) centimeter-level sea-surface positions are obtained in real time using differential GNSS. The improved method was used in a seafloor topography survey; at 145 cross points, 95% met the requirements of the hydrographic survey specification. It is an effective method for obtaining higher-accuracy tides.

  9. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, M D; Cole, S; Frenk, C S; Szapudi, I

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires ~8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
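
    A heavily simplified sketch of the central operation — redrawing the large-scale Gaussian Fourier modes of a periodic box while keeping their measured power — is given below. It is illustrative only: the grid size and cutoff are arbitrary, the stand-in field is white noise rather than an evolved N-body density, the published algorithm additionally propagates the coupling of new large-scale modes into the small scales, and a careful implementation must enforce Hermitian symmetry beyond what numpy's irfftn implicitly handles.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, kcut = 64, 4                        # grid cells per side, mode cutoff

    # Stand-in density field; in practice this is the evolved N-body density.
    field = rng.normal(size=(n, n, n))
    fk = np.fft.rfftn(field)

    # Integer wavenumber grid matching the rfftn layout.
    kx = np.fft.fftfreq(n) * n
    kz = np.fft.rfftfreq(n) * n
    k2 = kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2
    low = (k2 > 0) & (k2 < kcut**2)        # large-scale modes, excluding DC

    # Redraw those modes as Gaussians with the measured per-mode power, i.e.
    # a new large-scale realization with the same power spectrum on average.
    power = np.abs(fk[low])**2
    fk[low] = (rng.normal(size=low.sum()) + 1j * rng.normal(size=low.sum())) \
              * np.sqrt(power / 2)
    resampled = np.fft.irfftn(fk, s=field.shape)   # back to configuration space
    print(resampled.shape)
    ```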

  10. An efficient computational method for global sensitivity analysis and its application to tree growth modelling

    International Nuclear Information System (INIS)

    Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie

    2012-01-01

    Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). We are particularly interested in this study in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters but also to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices inspired by Homma–Saltelli, which slightly improves their use of model evaluations, and then derive for this generic type of computational method an estimator of the error of the sensitivity indices with respect to the sampling size. This allows detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis.
    Highlights:
    - We study global sensitivity analysis in the context of functional–structural plant modelling.
    - A new estimator based on the Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy.
    - The estimation accuracy of sensitivity indices for a class of Sobol estimators can be controlled by error analysis.
    - The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
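
    For readers unfamiliar with the sampling/re-sampling structure of Sobol index estimation, the snippet below computes first-order indices for the standard Ishigami test function with a Saltelli-type pick-freeze estimator. This is the generic textbook scheme, not the improved estimator proposed in the paper.

    ```python
    import numpy as np

    def ishigami(x, a=7.0, b=0.1):
        # Standard test function with known analytic Sobol indices.
        return (np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2
                + b * x[:, 2]**4 * np.sin(x[:, 0]))

    rng = np.random.default_rng(42)
    n, d = 100_000, 3
    A = rng.uniform(-np.pi, np.pi, (n, d))         # base sample
    B = rng.uniform(-np.pi, np.pi, (n, d))         # re-sample
    fA, fB = ishigami(A), ishigami(B)
    var = np.var(np.concatenate([fA, fB]))

    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                        # "freeze" all inputs but x_i
        Si = np.mean(fB * (ishigami(ABi) - fA)) / var   # first-order index
        print(f"S_{i + 1} = {Si:.3f}")
    ```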

  11.  Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased; and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer...

  12. Edge detection methods based on generalized type-2 fuzzy logic

    CERN Document Server

    Gonzalez, Claudia I; Castro, Juan R; Castillo, Oscar

    2017-01-01

    In this book four new methods are proposed. In the first method the generalized type-2 fuzzy logic is combined with the morphological gradient technique. The second method combines the general type-2 fuzzy systems (GT2 FSs) and the Sobel operator; in the third approach the methodology based on the Sobel operator and GT2 FSs is improved to be applied on color images. In the fourth approach, a novel edge detection method is proposed in which a digital image is converted to a generalized type-2 fuzzy image. The book also includes a comparative study of type-1, interval type-2 and generalized type-2 fuzzy systems as tools to enhance edge detection in digital images when used in conjunction with the morphological gradient and the Sobel operator. The proposed generalized type-2 fuzzy edge detection methods were tested with benchmark images and synthetic images, in grayscale and color formats. Another contribution in this book is that the generalized type-2 fuzzy edge detector method is applied in the preproc...

  13. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
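
    The conditioning functionality described here follows the standard conditional-Gaussian algebra applied per mixture component. The sketch below is not XDGMM itself but a minimal stand-alone illustration: each component's mean and covariance are conditioned on the known dimensions, and the component weights are rescaled by the likelihood of the known values.

    ```python
    import numpy as np

    def condition_gmm(weights, means, covs, known_idx, known_vals):
        """Condition a Gaussian mixture on known values of some dimensions."""
        known_idx = np.asarray(known_idx)
        free_idx = np.setdiff1d(np.arange(means.shape[1]), known_idx)
        new_w, new_mu, new_cov = [], [], []
        for w, mu, S in zip(weights, means, covs):
            Saa = S[np.ix_(known_idx, known_idx)]
            Sab = S[np.ix_(known_idx, free_idx)]
            Sbb = S[np.ix_(free_idx, free_idx)]
            inv = np.linalg.inv(Saa)
            dev = known_vals - mu[known_idx]
            # Conditional mean/covariance of the free dimensions per component.
            new_mu.append(mu[free_idx] + Sab.T @ inv @ dev)
            new_cov.append(Sbb - Sab.T @ inv @ Sab)
            # Reweight each component by its likelihood of the known values.
            lik = np.exp(-0.5 * dev @ inv @ dev) / np.sqrt(
                (2 * np.pi)**len(known_idx) * np.linalg.det(Saa))
            new_w.append(w * lik)
        new_w = np.array(new_w)
        return new_w / new_w.sum(), np.array(new_mu), np.array(new_cov)

    # Toy 2D mixture: predict dimension 1 given dimension 0 = 2.0.
    w = np.array([0.6, 0.4])
    mu = np.array([[0.0, 0.0], [3.0, 2.0]])
    cov = np.array([[[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.5], [0.5, 1.0]]])
    print(condition_gmm(w, mu, cov, [0], np.array([2.0])))
    ```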

  14. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Holoien, Thomas W.-S.; /Ohio State U., Dept. Astron. /Ohio State U., CCAPP /KIPAC, Menlo Park /SLAC; Marshall, Philip J.; Wechsler, Risa H.; /KIPAC, Menlo Park /SLAC

    2017-05-11

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  15. Improving Precision of Types

    DEFF Research Database (Denmark)

    Winther, Johnni

    Types in programming languages provide a powerful tool for the programmer to document the code so that a large aspect of the intent can not only be presented to fellow programmers but also be checked automatically by compilers. The precision with which types model the behavior of programs is crucial to the quality of these automated checks, and in this thesis we present three different improvements to the precision of types in three different aspects of the Java programming language. First we show how to extend the type system in Java with a new type which enables the detection of unintended...

  16. Distributed SLAM Using Improved Particle Filter for Mobile Robot Localization

    Directory of Open Access Journals (Sweden)

    Fujun Pei

    2014-01-01

    The distributed SLAM system has a similar estimation performance and requires only one-fifth of the computation time compared with the centralized particle filter. However, particle impoverishment is inevitable because of the random particle prediction and resampling applied in the generic particle filter, especially in the SLAM problem, which involves a large number of dimensions. In this paper, the particle filter used in distributed SLAM was improved in two aspects. First, we improved the importance function of the local filters in the particle filter. Adaptive values were used to replace a set of constants in the computation of the importance function, which improved the robustness of the particle filter. Second, an information fusion method was proposed that mixes the innovation method and the number-of-effective-particles method, combining the advantages of both. This paper also extends the previously known convergence results for the particle filter to prove that the improved particle filter converges to the optimal filter in mean square as the number of particles goes to infinity. The experimental results show that the proposed algorithm improves the ability of the DPF-SLAM system to isolate faults and enables the system to have better tolerance and robustness.

  17. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    Science.gov (United States)

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
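
    A randomization test — resampling without replacement, as this record puts it — can be illustrated in a few lines. The sketch below runs a generic two-sample permutation test on hypothetical group measurements; the group sizes and effect size are made up for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical summary statistics from two groups of point patterns.
    group_a = rng.normal(1.0, 0.3, size=15)
    group_b = rng.normal(1.2, 0.3, size=15)
    observed = group_b.mean() - group_a.mean()

    pooled = np.concatenate([group_a, group_b])
    n_perm, exceed = 10_000, 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                    # resampling without replacement
        stat = pooled[15:].mean() - pooled[:15].mean()
        exceed += abs(stat) >= abs(observed)
    print(f"permutation p-value: {(exceed + 1) / (n_perm + 1):.4f}")
    ```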

  18. On the nature of data collection for soft-tissue image-to-physical organ registration: a noise characterization study

    Science.gov (United States)

    Collins, Jarrod A.; Heiselman, Jon S.; Weis, Jared A.; Clements, Logan W.; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.

    2017-03-01

    In image-guided liver surgery (IGLS), sparse representations of the anterior organ surface may be collected intraoperatively to drive image-to-physical space registration. Soft tissue deformation represents a significant source of error for IGLS techniques. This work investigates the impact of surface data quality on current surface based IGLS registration methods. In this work, we characterize the robustness of our IGLS registration methods to noise in organ surface digitization. We study this within a novel human-to-phantom data framework that allows a rapid evaluation of clinically realistic data and noise patterns on a fully characterized hepatic deformation phantom. Additionally, we implement a surface data resampling strategy that is designed to decrease the impact of differences in surface acquisition. For this analysis, n=5 cases of clinical intraoperative data consisting of organ surface and salient feature digitizations from open liver resection were collected and analyzed within our human-to-phantom validation framework. As expected, results indicate that increasing levels of noise in surface acquisition cause registration fidelity to deteriorate. With respect to rigid registration using the raw and resampled data at clinically realistic levels of noise (i.e. a magnitude of 1.5 mm), resampling improved TRE by 21%. In terms of nonrigid registration, registrations using resampled data outperformed the raw data result by 14% at clinically realistic levels and were less susceptible to noise across the range of noise investigated. These results demonstrate the types of analyses our novel human-to-phantom validation framework can provide and indicate the considerable benefits of resampling strategies.

  19. Systematization of types and methods of radiation therapy methods and techniques of irradiation of patients

    International Nuclear Information System (INIS)

    Vajnberg, M.Sh.

    1991-01-01

    The paper is concerned with the principles of systematization and classification of types and methods of radiation therapy, and approaches to the regulation of its terminology. They are based on the distinction between the concepts of radiation therapy and irradiation of patients. The author gives a concise historical review of the improvement of radiation therapy methodology in the course of the development of its methods and facilities. Problems of terminology are discussed. There is a table of types and methods of radiation therapy, and of methods and techniques of irradiation. In the appendices one can find a table of typical legends and examples of graphic signs used to denote methods of irradiation. Possibilities for practical use of the system are described.

  20. Improvements of characteristics of open cycle Faraday type MHD power generator

    International Nuclear Information System (INIS)

    Yoshida, Masaharu; Umoto, Juro; Aoki, Sigeo

    1982-01-01

    MHD power generators are classified into two types: the Faraday type and the diagonal type (including the Hall type). In Faraday type generators as well, it is considered that the characteristics can be improved further by selecting the aspect ratio appropriately and by employing cap electrodes, which approach the diagonal conducting side-wall type from parallel plate electrodes. First, a three-dimensional analysis using a new equivalent circuit is introduced, in which finite electrode division and the working gas boundary layer are considered using the generalized Ohm's law, Maxwell's electromagnetic equations and others. The above-described improvement of characteristics is investigated numerically by fully applying this method of analysis. If the wall temperature is low, an increase in the aspect ratio of the generating duct cross-section considerably improves the characteristics because plasma non-uniformity decreases. If cap electrodes having an optimum side-wall length are used, the output increases considerably because the load current is given and received through the side-wall electrodes. Efficiency is a little lower than in the case using parallel plate electrodes. Therefore, if the aspect ratio is taken sufficiently large and cap electrodes with optimum side-wall electrode length are used, the generator characteristics are greatly improved, since the above-mentioned effects are multiplied. (Wakatsuki, Y.)

  1. Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models

    Science.gov (United States)

    Marquette, Michele L.; Sognier, Marguerite A.

    2013-01-01

    An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.

  2. Parametric and non-parametric masking of randomness in sequence alignments can be improved and leads to better resolved trees

    Directory of Open Access Journals (Sweden)

    von Reumont Björn M

    2010-03-01

    Background: Methods of alignment masking, which refers to the technique of excluding alignment blocks prior to tree reconstructions, have been successful in improving the signal-to-noise ratio in sequence alignments. However, the lack of formally well defined methods to identify randomness in sequence alignments has prevented a routine application of alignment masking. In this study, we compared the effects on tree reconstructions of the most commonly used profiling method (GBLOCKS), which uses a predefined set of rules in combination with alignment masking, with a new profiling approach (ALISCORE) based on Monte Carlo resampling within a sliding window, using different data sets and alignment methods. While the GBLOCKS approach excludes variable sections above a certain threshold whose choice is left arbitrary, the ALISCORE algorithm is free of a priori rating of parameter space and therefore more objective. Results: ALISCORE was successfully extended to amino acids using a proportional model and empirical substitution matrices to score randomness in multiple sequence alignments. A complex bootstrap resampling leads to an even distribution of scores of randomly similar sequences to assess randomness of the observed sequence similarity. Testing performance on real data, both masking methods, GBLOCKS and ALISCORE, helped to improve tree resolution. The sliding window approach was less sensitive to different alignments of identical data sets and performed equally well on all data sets. Concurrently, ALISCORE is capable of dealing with different substitution patterns and heterogeneous base composition. ALISCORE and the most relaxed GBLOCKS gap parameter setting performed best on all data sets. Correspondingly, Neighbor-Net analyses showed the greatest decrease in conflict. Conclusions: Alignment masking improves the signal-to-noise ratio in multiple sequence alignments prior to phylogenetic reconstruction. Given the robust performance of alignment…

  3. ECG-derived respiration methods: adapted ICA and PCA.

    Science.gov (United States)

    Tiinanen, Suvi; Noponen, Kai; Tulppo, Mikko; Kiviniemi, Antti; Seppänen, Tapio

    2015-05-01

    Respiration is an important signal in early diagnostics, prediction, and treatment of several diseases. Moreover, a growing trend toward ambulatory measurements outside laboratory environments encourages developing indirect measurement methods such as ECG derived respiration (EDR). Recently, decomposition techniques like principal component analysis (PCA), and its nonlinear version, kernel PCA (KPCA), have been used to derive a surrogate respiration signal from single-channel ECG. In this paper, we propose an adapted independent component analysis (AICA) algorithm to obtain EDR signal, and extend the normal linear PCA technique based on the best principal component (PC) selection (APCA, adapted PCA) to improve its performance further. We also demonstrate that the usage of smoothing spline resampling and bandpass-filtering improve the performance of all EDR methods. Compared with other recent EDR methods using correlation coefficient and magnitude squared coherence, the proposed AICA and APCA yield a statistically significant improvement with correlations 0.84, 0.82, 0.76 and coherences 0.90, 0.91, 0.85 between reference respiration and AICA, APCA and KPCA, respectively. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
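
    The PCA-style family of EDR methods, together with the spline resampling step the authors recommend, can be sketched as below. This is a toy illustration, not the adapted algorithms of the paper: the "ECG" is a synthetic spike train whose R-wave amplitude is modulated by a known respiration signal, and the window lengths, rates, and peak-detection settings are arbitrary choices.

    ```python
    import numpy as np
    from scipy.signal import find_peaks
    from scipy.interpolate import CubicSpline

    fs = 250                                    # sampling rate, Hz
    t = np.arange(0, 60, 1 / fs)
    resp = 0.1 * np.sin(2 * np.pi * 0.25 * t)   # 15 breaths/min reference

    # Synthetic "ECG": R spikes every 0.8 s, amplitude modulated by respiration.
    ecg = np.zeros_like(t)
    for bt in np.arange(0.5, 60, 0.8):
        i = int(bt * fs)
        ecg[i] = 1.0 + resp[i]

    peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
    half = int(0.05 * fs)                       # 50 ms window around each R peak
    valid = [p for p in peaks if half <= p <= len(ecg) - half]
    beats = np.array([ecg[p - half:p + half] for p in valid])

    # PCA over the beat matrix; first-PC scores per beat track respiration.
    beats = beats - beats.mean(axis=0)
    _, _, vt = np.linalg.svd(beats, full_matrices=False)
    scores = beats @ vt[0]

    # Spline-resample the beat-by-beat series onto a uniform 4 Hz grid.
    beat_t = np.array(valid) / fs
    edr = CubicSpline(beat_t, scores)(np.arange(beat_t[0], beat_t[-1], 0.25))
    print(edr.shape)
    ```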

  4. Shared or Integrated: Which Type of Integration More Effectively Improves Students' Creativity?

    Science.gov (United States)

    Mariyam, M.; Kaniawati, I.; Sriyati, S.

    2017-09-01

    Integrated science learning has various types of integration. This study aims to apply the shared and integrated types of integration with the project-based learning (PjBL) model to improve students' creativity on a waste-recycling theme. The research method used is a quasi-experiment with a matching-only pretest-posttest design. The sample consists of 108 students: 36 students (experimental class 1), 35 students (experimental class 2) and 37 students (control class) at a junior high school in Tanggamus, Lampung. The results show differences in creativity improvement between the classes taught with the PjBL model and the shared type of integration, the integrated type of integration, and without any integration on the waste-recycling theme. The class taught with the PjBL model and the shared type of integration showed higher creativity improvement than the classes with the integrated type of integration and without integration. Integrated science learning using the shared type combines only two lessons, so an intact concept results. Thus, the PjBL model with the shared type of integration improves students' creativity more effectively than the integrated type.

  5. Improved GLR method for instrument failure detection

    International Nuclear Information System (INIS)

    Jeong, Hak Yeoung; Chang, Soon Heung

    1985-01-01

    The generalized likelihood ratio (GLR) method performs statistical tests on the innovations sequence of a Kalman-Bucy filter state estimator for system failure detection and identification. However, the major drawback of the conventional GLR is that it must hypothesize a particular failure type in each case. In this paper, a method to overcome this drawback is proposed. The improved GLR method is applied to a PWR pressurizer and gives successful results in the detection and identification of any failure. Furthermore, some benefit in the processing time per cycle of failure detection and identification is obtained. (Author)
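
    For a feel of how a GLR test operates on Kalman filter innovations, the sketch below computes the classical windowed GLR statistic for a sustained jump in the mean of Gaussian innovations (the closed form follows from maximizing the likelihood over the jump size). It illustrates only the detection step of the conventional scheme, not the paper's improvement; the window length, threshold, and simulated bias are arbitrary.

    ```python
    import numpy as np

    def glr_mean_jump(innov, sigma, window=20):
        """Windowed GLR statistic for a sustained jump in the innovation mean.
        Maximizing the Gaussian likelihood over the jump size theta gives
        GLR(k) = S_k**2 / (2 * sigma**2 * window), S_k = innovation sum."""
        stats = np.zeros(len(innov))
        for k in range(window, len(innov)):
            s = innov[k - window:k].sum()
            stats[k] = s**2 / (2 * sigma**2 * window)
        return stats

    rng = np.random.default_rng(3)
    innov = rng.normal(0.0, 1.0, 300)          # innovations of a healthy filter
    innov[200:] += 0.8                         # simulated sensor bias at t = 200
    g = glr_mean_jump(innov, sigma=1.0)
    print("alarm at t =", int(np.argmax(g > 10)))   # ad hoc threshold
    ```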

  6. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    Science.gov (United States)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce any aliasing and interpolation errors, as is done by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among different methods such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to
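
    The Zernike part of the described re-sampling can be illustrated as follows: fit a small set of Zernike terms to the measured grid by least squares, then evaluate the analytic expansion on a grid of any density. The sketch below uses only six low-order terms and synthetic data, and it omits the power-spectral-density model that the actual software uses for the residual, mid-spatial-frequency content.

    ```python
    import numpy as np

    # A few low-order Zernike polynomials on the unit disk (rho, theta).
    ZERNIKES = [
        lambda r, t: np.ones_like(r),           # piston
        lambda r, t: r * np.cos(t),             # tilt x
        lambda r, t: r * np.sin(t),             # tilt y
        lambda r, t: 2 * r**2 - 1,              # defocus
        lambda r, t: r**2 * np.cos(2 * t),      # astigmatism 0/90
        lambda r, t: r**2 * np.sin(2 * t),      # astigmatism 45
    ]

    def disk_grid(n):
        x = np.linspace(-1, 1, n)
        xx, yy = np.meshgrid(x, x)
        return np.hypot(xx, yy), np.arctan2(yy, xx)

    # "Measured" 64x64 map: smooth figure terms plus measurement noise.
    r, t = disk_grid(64)
    mask = r <= 1.0
    truth = 0.3 * (2 * r**2 - 1) + 0.1 * r**2 * np.cos(2 * t)
    noisy = truth + 0.05 * np.random.default_rng(0).normal(size=r.shape)

    # Least-squares fit of the Zernike coefficients on the measured grid ...
    A = np.column_stack([z(r[mask], t[mask]) for z in ZERNIKES])
    coef, *_ = np.linalg.lstsq(A, noisy[mask], rcond=None)

    # ... then evaluate the analytic expansion on a denser 256x256 grid; the
    # up-sampled map is noise-free because only fitted smooth terms survive.
    r2, t2 = disk_grid(256)
    mask2 = r2 <= 1.0
    resampled = np.zeros_like(r2)
    resampled[mask2] = sum(c * z(r2[mask2], t2[mask2])
                           for c, z in zip(coef, ZERNIKES))
    print(coef.round(3))
    ```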

  7. Pyrosequencing™ : A one-step method for high resolution HLA typing

    Directory of Open Access Journals (Sweden)

    Marincola Francesco M

    2003-11-01

    While the use of high-resolution molecular typing in routine matching of human leukocyte antigens (HLA) is expected to improve unrelated donor selection and transplant outcome, the genetic complexity of HLA still makes the current methodology limited and laborious. Pyrosequencing™ is a gel-free, sequencing-by-synthesis method. In a Pyrosequencing reaction, nucleotide incorporation proceeds sequentially along each DNA template at a given nucleotide dispensation order (NDO) that is programmed into a pyrosequencer. Here we describe the design of an NDO that generates a pyrogram unique for any given allele or combination of alleles. We present examples of unique pyrograms generated from each of two heterozygous HLA templates, which would otherwise remain cis/trans ambiguous using the standard sequencing-based typing (SBT) method. In addition, we display representative data that demonstrate long read and linear signal generation. These features are prerequisites for high-resolution typing and automated data analysis. In conclusion, Pyrosequencing is a one-step method for high-resolution DNA typing.

  8. Inferring microevolution from museum collections and resampling: lessons learned from Cepaea

    Directory of Open Access Journals (Sweden)

    Małgorzata Ożgo

    2017-10-01

    Natural history collections are an important and largely untapped source of long-term data on evolutionary changes in wild populations. Here, we utilize three large geo-referenced sets of samples of the common European land-snail Cepaea nemoralis stored in the collection of Naturalis Biodiversity Center in Leiden, the Netherlands. Resampling of these populations allowed us to gain insight into changes occurring over 95, 69, and 50 years. Cepaea nemoralis is polymorphic for the colour and banding of the shell; the mode of inheritance of these patterns is known, and the polymorphism is under both thermal and predatory selection. At two sites the general direction of changes was towards lighter shells (yellow and less heavily banded, which is consistent with predictions based on on-going climatic change. At one site no directional changes were detected. At all sites there were significant shifts in morph frequencies between years, and our study contributes to the recognition that short-term changes in the states of populations often exceed long-term trends. Our interpretation was limited by the few time points available in the studied collections. We therefore stress the need for natural history collections to routinely collect large samples of common species, to allow much more reliable hind-casting of evolutionary responses to environmental change.

  9. An assessment of particle filtering methods and nudging for climate state reconstructions

    NARCIS (Netherlands)

    S. Dubinkina (Svetlana); H. Goosse

    2013-01-01

    Using the climate model of intermediate complexity LOVECLIM in an idealized framework, we assess three data-assimilation methods for reconstructing the climate state. The methods are a nudging, a particle filter with sequential importance resampling, and a nudging proposal particle

  10. Hardware Architecture of Polyphase Filter Banks Performing Embedded Resampling for Software-Defined Radio Front-Ends

    DEFF Research Database (Denmark)

    Awan, Mehmood-Ur-Rehman; Le Moullec, Yannick; Koch, Peter

    2012-01-01

    In this paper, we describe resource-efficient hardware architectures for software-defined radio (SDR) front-ends. These architectures are made efficient by using a polyphase channelizer that performs arbitrary sample rate changes, frequency selection, and bandwidth control. We discuss area, time, and power optimization for field programmable gate array (FPGA) based architectures in an M-path polyphase filter bank with a modified N-path polyphase filter. Such systems allow resampling by arbitrary ratios while simultaneously performing baseband aliasing from center frequencies at Nyquist zones that are not multiples of the output sample rate. A non-maximally decimated polyphase filter bank, where the number of data loads is not equal to the number of M subfilters, processes M subfilters in a time period that is either less than or greater than the M data-load's time period. We present a load…
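
    Arbitrary rational rate changes of the kind such channelizers perform can be prototyped in software with a polyphase resampler. The snippet below uses scipy.signal.resample_poly (an off-the-shelf polyphase FIR implementation, not the FPGA architecture of the paper) for a 48 kHz to 44.1 kHz conversion.

    ```python
    import numpy as np
    from scipy.signal import resample_poly

    fs_in = 48_000
    t = np.arange(0, 0.02, 1 / fs_in)
    x = np.sin(2 * np.pi * 1_000 * t)           # 1 kHz test tone

    # Rational rate change 48 kHz -> 44.1 kHz: an efficient polyphase FIR
    # filter is applied at the low rate, with up = 147 and down = 160.
    y = resample_poly(x, up=147, down=160)
    print(len(x), "->", len(y))
    ```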

  11. Using vis-NIR to predict soil organic carbon and clay at national scale: validation of geographically closest resampling strategy

    DEFF Research Database (Denmark)

    Peng, Yi; Knadel, Maria; Greve, Mette Balslev

    2016-01-01

    … samples) for soils from each 7-km grid sampling point in the country. In the resampling and modelling process, each target sample was predicted by a specific model which was calibrated using geographically closest soil spectra. The geographically closest 20, 30, 40, and 50 sampling points (profiles) were … The SOC prediction resulted in R2: 0.76; RMSE: 4.02%; RPD: 1.59; RPIQ: 0.35. The results for clay prediction were also successful (R2: 0.84; RMSE: 2.36%; RPD: 2.35; RPIQ: 2.88). For SOC predictions, over 90% of soil samples were well predicted compared …

  12. New or improved computational methods and advanced reactor design

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Takeda, Toshikazu; Ushio, Tadashi

    1997-01-01

    Nuclear computational methods have been studied continuously to date as a fundamental technology supporting nuclear development. At present, research on computational methods based on new theory, and on calculation methods previously thought impractical, also continues actively, finding new developments thanks to the remarkable improvement in computer capabilities. In Japan, many light water reactors are now in operation, new computational methods are being introduced for nuclear design, and much effort is concentrated on further improving economics and safety. In this paper, some new research results on nuclear computational methods and their application to reactor nuclear design are described, to introduce recent trends in reactor nuclear design: 1) advancement of computational methods, 2) reactor core design and management of light water reactors, and 3) nuclear design of fast reactors. (G.K.)

  13. The PIT-trap-A "model-free" bootstrap procedure for inference about regression models with discrete, multivariate responses.

    Science.gov (United States)

    Warton, David I; Thibaut, Loïc; Wang, Yi Alice

    2017-01-01

    Bootstrap methods are widely used in statistics, and bootstrapping of residuals can be especially useful in the regression context. However, difficulties are encountered extending residual resampling to regression settings where residuals are not identically distributed (thus not amenable to bootstrapping)-common examples including logistic or Poisson regression and generalizations to handle clustered or multivariate data, such as generalised estimating equations. We propose a bootstrap method based on probability integral transform (PIT-) residuals, which we call the PIT-trap, which assumes data come from some marginal distribution F of known parametric form. This method can be understood as a type of "model-free bootstrap", adapted to the problem of discrete and highly multivariate data. PIT-residuals have the key property that they are (asymptotically) pivotal. The PIT-trap thus inherits the key property, not afforded by any other residual resampling approach, that the marginal distribution of data can be preserved under PIT-trapping. This in turn enables the derivation of some standard bootstrap properties, including second-order correctness of pivotal PIT-trap test statistics. In multivariate data, bootstrapping rows of PIT-residuals affords the property that it preserves correlation in data without the need for it to be modelled, a key point of difference as compared to a parametric bootstrap. The proposed method is illustrated on an example involving multivariate abundance data in ecology, and demonstrated via simulation to have improved properties as compared to competing resampling methods.
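
    The core idea — transform discrete responses to (approximately) pivotal uniform residuals, resample those, then map them back through the fitted marginals — can be sketched for a Poisson model as below. This is an illustrative reading of the approach with made-up fitted means, not the authors' implementation; the randomized PIT construction is the standard Dunn-Smyth form.

    ```python
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(11)

    # Hypothetical Poisson counts with fitted means mu (e.g. from a GLM).
    mu = rng.uniform(1.0, 6.0, size=200)
    y = rng.poisson(mu)

    # Randomized PIT residuals: uniform on [F(y-1), F(y)] if the model is true.
    u = poisson.cdf(y - 1, mu) + rng.uniform(size=y.size) * poisson.pmf(y, mu)

    # One resample: bootstrap the (pivotal) PIT values, then map them back
    # through the fitted marginal distributions to obtain new counts.
    u_star = np.clip(rng.choice(u, size=u.size, replace=True), 1e-12, 1 - 1e-12)
    y_star = poisson.ppf(u_star, mu).astype(int)
    print(y[:10], y_star[:10])
    ```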

  14. Setting the top 10 research priorities to improve the health of people with Type 2 diabetes: a Diabetes UK-James Lind Alliance Priority Setting Partnership.

    Science.gov (United States)

    Finer, S; Robb, P; Cowan, K; Daly, A; Shah, K; Farmer, A

    2018-07-01

    To describe processes and outcomes of a priority setting partnership to identify the 'top 10 research priorities' in Type 2 diabetes, involving people living with the condition, their carers, and healthcare professionals. We followed the four-step James Lind Alliance Priority Setting Partnership process which involved: gathering uncertainties using a questionnaire survey distributed to 70 000 people living with Type 2 diabetes and their carers, and healthcare professionals; organizing the uncertainties; interim priority setting by resampling of participants with a second survey; and final priority setting in an independent group of participants, using the nominal group technique. At each step the steering group closely monitored and guided the process. In the first survey, 8227 uncertainties were proposed by 2587 participants, of whom 18% were from black, Asian and minority ethnic groups. Uncertainties were formatted and collated into 114 indicative questions. A total of 1506 people contributed to a second survey, generating a shortlist of 24 questions equally weighted to the contributions of people living with diabetes and their carers and those of healthcare professionals. In the final step the 'top 10 research priorities' were selected, including questions on cure and reversal, risk identification and prevention, and self-management approaches in Type 2 diabetes. Systematic and transparent methodology was used to identify research priorities in a large and genuine partnership of people with lived and professional experience of Type 2 diabetes. The top 10 questions represent consensus areas of research priority to guide future research, deliver responsive and strategic allocation of research resources, and improve the future health and well-being of people living with, and at risk of, Type 2 diabetes. © 2018 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK.

  15. Improved radioanalytical methods

    International Nuclear Information System (INIS)

    Erickson, M.D.; Aldstadt, J.H.; Alvarado, J.S.; Crain, J.S.; Orlandini, K.A.; Smith, L.L.

    1995-01-01

    Methods for the chemical characterization of the environment are being developed under a multitask project for the Analytical Services Division (EM-263) within the US Department of Energy (DOE) Office of Environmental Management. This project focuses on improvement of radioanalytical methods with an emphasis on faster and cheaper routine methods. We have developed improved methods for separation of environmental levels of technetium-99 and strontium-89/90, radium, and actinides from soil and water, and for separation of actinides from soil and water matrix interferences. Among the novel separation techniques being used are element- and class-specific resins and membranes. (The 3M Corporation is commercializing Empore™ membranes under a cooperative research and development agreement [CRADA] initiated under this project.) We have also developed methods for simultaneous detection of multiple isotopes using inductively coupled plasma-mass spectrometry (ICP-MS). The ICP-MS method requires less rigorous chemical separations than traditional radiochemical analyses because of its mass-selective mode of detection. Actinides and their progeny have been isolated and concentrated from a variety of natural water matrices by using automated batch separation incorporating selective resins prior to ICP-MS analyses. In addition, improvements in detection limits, sample volume, and time of analysis were obtained by using other sample introduction techniques, such as ultrasonic nebulization and electrothermal vaporization. Integration and automation of the separation methods with the ICP-MS methodology by using flow injection analysis is underway, with an objective of automating methods to achieve more reproducible results, reduce labor costs, cut analysis time, and minimize secondary waste generation through miniaturization of the process.

  16. Improving Type Error Messages in OCaml

    OpenAIRE

    Charguéraud , Arthur

    2015-01-01

    Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently-precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise ...

  17. A new method to obtain ground control points based on SRTM data

    Science.gov (United States)

    Wang, Pu; An, Wei; Deng, Xin-pu; Zhang, Xi

    2013-09-01

    Ground control points (GCPs) are widely used in remote-sensing image registration and geometric correction. Normally, DRG and DOM products are the major data sources from which GCPs are extracted, but high-accuracy DRG and DOM products are usually costly to obtain, and some free products come without any accuracy guarantee. To balance cost and accuracy, this paper proposes a method of extracting GCPs from SRTM data. The method consists of manual assistance, binarization, data resampling, and reshaping. Manual assistance identifies which parts of the SRTM data can be used as GCPs, such as islands or sharp coastlines. A binarization algorithm extracts the shape information of the region while excluding other information. The binary data are then resampled to the resolution required by the specific application. Finally, the data are reshaped according to the satellite imaging type to obtain usable GCPs. The method has three advantages. First, it is easy to implement: unlike DRG or DOM data, which are expensive, SRTM data are freely accessible without restrictions. Second, SRTM data have an accuracy of about 90 m guaranteed by their producer, so GCPs derived from them are also of high quality. Third, since SRTM data cover nearly all of the land surface of the earth between latitude -60° and latitude +60°, GCPs produced by this method can cover most important regions of the world. The method can be used for meteorological satellite images or similar situations with relatively low accuracy requirements. Extensive simulation tests show the method to be convenient and effective.
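
    The binarize-then-resample core of the pipeline is straightforward to prototype. The sketch below uses a synthetic stand-in for an SRTM tile (real tiles need void filling and a sensible sea-level mask) and scipy.ndimage.zoom for the resampling step; the threshold and zoom factor are arbitrary.

    ```python
    import numpy as np
    from scipy.ndimage import zoom

    rng = np.random.default_rng(5)

    # Stand-in SRTM tile: heights in metres, sea encoded as 0.
    dem = rng.normal(200.0, 80.0, size=(120, 120))
    dem[:, :40] = 0.0                      # a coastline on the left third

    land = (dem > 0).astype(float)         # binarization keeps shape only

    # Resample the binary mask to the resolution the application requires,
    # then re-threshold so the result stays binary.
    coarse = zoom(land, 0.25, order=1) > 0.5
    print(coarse.shape)                    # (30, 30)
    ```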

  18. Clostridium difficile infection: Early history, diagnosis and molecular strain typing methods.

    Science.gov (United States)

    Rodriguez, C; Van Broeck, J; Taminiau, B; Delmée, M; Daube, G

    2016-08-01

    Recognised as the leading cause of nosocomial antibiotic-associated diarrhoea, the incidence of Clostridium difficile infection (CDI) remains high despite efforts to improve prevention and reduce the spread of the bacterium in healthcare settings. In the last decade, many studies have focused on the epidemiology and rapid diagnosis of CDI. In addition, different typing methods have been developed for epidemiological studies. This review explores the history of C. difficile and the current scope of the infection. The variety of available laboratory tests for CDI diagnosis and strain typing methods are also examined. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. A novel fruit shape classification method based on multi-scale analysis

    Science.gov (United States)

    Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin

    2005-11-01

    Shape is one of the major concerns, and it remains a difficult problem, in automated inspection and sorting of fruits. In this research, we propose the multi-scale energy distribution (MSED) for object shape description; the relationship between object shape and boundary energy distribution at multiple scales was explored for shape extraction. MSED captures not only the main energy, which represents primary shape information at lower scales, but also subordinate energy, which represents local shape information at higher differential scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification, namely: 1) image preprocessing and citrus shape extraction, 2) shape resampling and shape feature normalization, and 3) energy decomposition by wavelet and classification by a BP neural network. In shape resampling, 256 boundary pixels are resampled from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function was defined and an effective method to select a start point was given through maximal expectation, which overcame the inconvenience of traditional methods and provides rotation invariance. The experimental results separate normal citrus from serious abnormality relatively well, with a classification rate above 91.2%. The global correct classification rate is 89.77%, and our method is more effective than traditional methods. The global result can meet the requirements of fruit grading.
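
    The resampling step — 256 boundary points taken from a periodic cubic-spline approximation of the contour, parameterized by arc length — and a wavelet energy decomposition can be sketched as below. The contour, wavelet family, and decomposition depth are illustrative assumptions (PyWavelets stands in for whatever toolchain the authors used), and the start-point selection and classifier stages are omitted.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    import pywt

    # Hypothetical closed fruit boundary: an ellipse with a small surface bump.
    s = np.linspace(0, 2 * np.pi, 73)[:-1]
    pts = np.column_stack([1.2 * np.cos(s) + 0.05 * np.cos(9 * s), np.sin(s)])
    pts = np.vstack([pts, pts[:1]])                      # close the curve

    # Resample 256 boundary points, uniform in arc length, from a periodic
    # cubic-spline approximation of the original boundary.
    d = np.concatenate([[0.0], np.cumsum(
        np.linalg.norm(np.diff(pts, axis=0), axis=1))])
    spline = CubicSpline(d, pts, bc_type='periodic')
    uniform = spline(np.linspace(0.0, d[-1], 257)[:-1])  # 256 points

    # Multi-scale energy of the radial profile via wavelet decomposition.
    radius = np.linalg.norm(uniform - uniform.mean(axis=0), axis=1)
    coeffs = pywt.wavedec(radius, 'db4', level=4)
    print([round(float(np.sum(c**2)), 3) for c in coeffs])
    ```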

  20. Hybrid Lanczos-type product methods

    Energy Technology Data Exchange (ETDEWEB)

    Ressel, K.J. [Swiss Center for Scientific Computing, Zuerich (Switzerland)

    1996-12-31

    A general framework is proposed to construct hybrid iterative methods for the solution of large nonsymmetric systems of linear equations. This framework is based on Lanczos-type product methods, whose iteration polynomial consists of the Lanczos polynomial multiplied by some other arbitrary, "shadow" polynomial. By using for the shadow polynomial Chebyshev (more generally, Faber) polynomials or L²-optimal polynomials, hybrid (Chebyshev-like) methods are incorporated into Lanczos-type product methods. In addition, to acquire spectral information on the system matrix, which is required for such a choice of shadow polynomials, the Lanczos process can be employed either directly or in a QMR-like approach. The QMR-like approach allows the cheap computation of the roots of the B-orthogonal polynomials and the residual polynomials associated with the QMR iteration. These roots can be used as a good approximation for the spectrum of the system matrix. Different choices for the shadow polynomials and their construction are analyzed. The resulting hybrid methods are compared with standard Lanczos-type product methods, like BiOStab, BiOStab(ℓ) and BiOS.

  1. Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.

    Science.gov (United States)

    Echinaka, Yuki; Ozeki, Yukiyasu

    2016-10-01

    The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, the bootstrap method is introduced and a numerical discrimination for the transition type is proposed.

  2. An improved particle filtering algorithm for aircraft engine gas-path fault diagnosis

    Directory of Open Access Journals (Sweden)

    Qihang Wang

    2016-07-01

    In this article, an improved particle filter with an electromagnetism-like mechanism algorithm is proposed for aircraft engine gas-path component abrupt fault diagnosis. In order to avoid the particle degeneracy and sample impoverishment of the normal particle filter, the electromagnetism-like mechanism optimization algorithm is introduced into the resampling procedure, which adjusts the positions of the particles by simulating the attraction-repulsion mechanism between charged particles in electromagnetism theory. The improved particle filter can solve the particle degradation problem and ensure the diversity of the particle set. Meanwhile, it enhances the ability to track abrupt faults by taking the latest measurement information into account. A comparison of the proposed method with three different filter algorithms is carried out on a univariate nonstationary growth model. Simulations on a turbofan engine model indicate that, compared to the normal particle filter, the improved particle filter can complete the fault diagnosis within fewer sampling periods, and the root mean square error of parameter estimation is reduced.

  3. Improved methods for binding AcmA-type protein anchor fusions to cell-wall material of micro-organisms

    NARCIS (Netherlands)

    Leenhouts, Cornelis; Ramasamy, R.; Steen, Anton; Kok, Jan; Buist, Girbe; Kuipers, Oscar

    2002-01-01

    The invention provides a method for improving binding of a proteinaceous substance to cell-wall material of a Gram-positive bacterium, said substance comprising an AcmA cell wall binding domain or homolog or functional derivative thereof, said method comprising treating said cell-wall material with

  4. [Molecular typing methods for Pasteurella multocida-A review].

    Science.gov (United States)

    Peng, Zhong; Liang, Wan; Wu, Bin

    2016-10-04

    Pasteurella multocida is an important gram-negative pathogenic bacterium that can infect a wide range of animals. Humans can also be infected by P. multocida via animal bites or scratches. Current typing methods for P. multocida include serological typing methods and molecular typing methods. Of these, serological typing methods are based on immunological assays, which are too complicated for clinical bacteriological studies. The molecular methods, including multiple PCR assays and multilocus sequence typing (MLST), are more suitable for clinical bacteriological studies of P. multocida; with their simple operation, high efficiency, and accurate detection compared with the traditional serological typing methods, they are widely used. In this review, we briefly describe the molecular typing methods for P. multocida. Our aim is to provide a knowledge foundation for clinical bacteriological investigation, especially molecular investigation, of P. multocida.

  5. Current Methods in the Molecular Typing of Mycobacterium tuberculosis and Other Mycobacteria

    Science.gov (United States)

    van Ingen, Jakko; Dziadek, Jarosław; Mazur, Paweł K.; Bielecki, Jacek

    2014-01-01

    In the epidemiology of tuberculosis (TB) and nontuberculous mycobacterial (NTM) diseases, as in all infectious diseases, the key issue is to define the source of infection and to disclose its routes of transmission and dissemination in the environment. For this to be accomplished, the ability of discerning and tracking individual Mycobacterium strains is of critical importance. Molecular typing methods have greatly improved our understanding of the biology of mycobacteria and provide powerful tools to combat the diseases caused by these pathogens. The utility of various typing methods depends on the Mycobacterium species under investigation as well as on the research question. For tuberculosis, different methods have different roles in phylogenetic analyses and person-to-person transmission studies. In NTM diseases, most investigations involve the search for environmental sources or phylogenetic relationships. Here, too, the type of setting determines which methodology is most suitable. Within this review, we summarize currently available molecular methods for strain typing of M. tuberculosis and some NTM species, most commonly associated with human disease. For the various methods, technical practicalities as well as discriminatory power and accomplishments are reviewed. PMID:24527454

  6. Fourier Descriptor Analysis and Unification of Voice Range Profile Contours: Method and Applications

    Science.gov (United States)

    Pabon, Peter; Ternstrom, Sten; Lamarche, Anick

    2011-01-01

    Purpose: To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. Method: A morphologic modeling technique, which is based on Fourier descriptors (FDs), is applied to the VRP contour. The technique, which essentially involves resampling of the curve of the…
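    The Fourier-descriptor machinery referenced above is standard; a minimal sketch (in Python) of FDs for a resampled closed contour, with translation and scale normalizations chosen by us for illustration rather than taken from the paper:

      import numpy as np

      def fourier_descriptors(x, y, n_keep=10):
          z = np.asarray(x) + 1j * np.asarray(y)  # contour as a complex signal
          Z = np.fft.fft(z - z.mean())            # subtract centroid: translation invariance
          Z = Z / np.abs(Z[1])                    # divide by first harmonic: scale invariance
          return Z[1:n_keep + 1]                  # low-order coefficients capture gross shape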

  7. PERFORMANCE COMPARISON OF SCENARIO-GENERATION METHODS APPLIED TO A STOCHASTIC OPTIMIZATION ASSET-LIABILITY MANAGEMENT MODEL

    Directory of Open Access Journals (Sweden)

    Alan Delgado de Oliveira

    Full Text Available ABSTRACT In this paper, we provide an empirical discussion of the differences among some scenario tree-generation approaches for stochastic programming. We consider the classical Monte Carlo sampling and moment matching methods. Moreover, we test the resampled average approximation, which is an adaptation of Monte Carlo sampling, using Monte Carlo with a naive allocation strategy as the benchmark. We test the empirical effects of each approach on the stability of the problem objective function and initial portfolio allocation, using a multistage stochastic chance-constrained asset-liability management (ALM) model as the application. The moment matching and resampled average approximation methods are more stable than the other two strategies.

  8. On the Dynamic RSS Feedbacks of Indoor Fingerprinting Databases for Localization Reliability Improvement.

    Science.gov (United States)

    Wen, Xiaoyang; Tao, Wenyuan; Own, Chung-Ming; Pan, Zhenjiang

    2016-08-15

    Location data is one of the most widely used context data types in context-aware and ubiquitous computing applications. To support locating applications in indoor environments, numerous systems with different deployment costs and positioning accuracies have been developed over the past decade. One useful method, based on received signal strength (RSS), provides a set of signal transmission access points. However, compiling a remeasurement RSS database involves a high cost, which is impractical in dynamically changing environments, particularly in highly crowded areas. In this study, we propose a dynamic estimation resampling method for certain locations chosen from a set of remeasurement fingerprinting databases. Our proposed method adaptively applies different, newly updated and offline fingerprinting points according to the temporal and spatial strength of the location. In a simulated area, the proposed method requires only approximately 3% of the feedback to attain double the correctness probability of comparable methods; in a real environment, it achieves excellent positioning accuracy, with errors within 1 m.

  9. On the Dynamic RSS Feedbacks of Indoor Fingerprinting Databases for Localization Reliability Improvement

    Directory of Open Access Journals (Sweden)

    Xiaoyang Wen

    2016-08-01

    Full Text Available Location data is one of the most widely used context data types in context-aware and ubiquitous computing applications. To support locating applications in indoor environments, numerous systems with different deployment costs and positioning accuracies have been developed over the past decade. One useful method, based on received signal strength (RSS), provides a set of signal transmission access points. However, compiling a remeasurement RSS database involves a high cost, which is impractical in dynamically changing environments, particularly in highly crowded areas. In this study, we propose a dynamic estimation resampling method for certain locations chosen from a set of remeasurement fingerprinting databases. Our proposed method adaptively applies different, newly updated and offline fingerprinting points according to the temporal and spatial strength of the location. In a simulated area, the proposed method requires only approximately 3% of the feedback to attain double the correctness probability of comparable methods; in a real environment, it achieves excellent positioning accuracy, with errors within 1 m.

  10. Different types of anastomotic methods: a review of literature

    Directory of Open Access Journals (Sweden)

    Shadi Mooloughi

    2015-09-01

    Full Text Available Constructing a successful anastomosis is an important concept in gastrointestinal tract surgeries; it can be affected by various factors such as the preoperative bowel condition, intra- and postoperative complications, bleeding, and the device characteristics. Suturing, stapling and compression anastomosis are the principal techniques. Despite the invention of compression anastomosis, which goes back almost two centuries, this method has not attained the popularity of sutured and stapled anastomoses, and further studies are required. Designing methods and devices without the current drawbacks might reduce the complications associated with anastomosis and provide alternatives to sutured and stapled anastomoses. Several materials can be used as reinforcement materials, which can improve the outcomes of stapled anastomoses. In addition to reinforcement materials, other forms of support have been proposed that might be capable of reducing the postoperative complications of anastomosis. In this study, we briefly review various anastomotic techniques and the associated complications in different types of gastrointestinal surgeries.

  11. Vildagliptin Improves Endothelium-Dependent Vasodilatation in Type 2 Diabetes

    Science.gov (United States)

    van Poppel, Pleun C.M.; Netea, Mihai G.; Smits, Paul; Tack, Cees J.

    2011-01-01

    OBJECTIVE To investigate whether the dipeptidyl peptidase-4 inhibitor vildagliptin improves endothelium-dependent vasodilatation in patients with type 2 diabetes. RESEARCH DESIGN AND METHODS Sixteen subjects with type 2 diabetes (age 59.8 ± 6.8 years, BMI 29.1 ± 4.8 kg/m2, HbA1c 6.97 ± 0.61) on oral blood glucose–lowering treatment were included. Participants received vildagliptin 50 mg b.i.d. or acarbose 100 mg t.i.d. for four consecutive weeks in a randomized, double-blind, cross-over design. At the end of each treatment period, we measured forearm vasodilator responses to intra-arterially administered acetylcholine (endothelium-dependent vasodilator) and sodium nitroprusside (endothelium-independent vasodilator). RESULTS Infusion of acetylcholine induced a dose-dependent increase in forearm blood flow in the experimental arm, which was higher during vildagliptin (3.1 ± 0.7, 7.9 ± 1.1, and 12.6 ± 1.4 mL ⋅ dL−1 ⋅ min−1 in response to three increasing dosages of acetylcholine) than during acarbose (2.0 ± 0.7, 5.0 ± 1.2, and 11.7 ± 1.6 mL ⋅ dL−1 ⋅ min−1, respectively; P = 0.01 by two-way ANOVA). Treatment with vildagliptin did not significantly change the vascular responses to sodium nitroprusside. CONCLUSIONS Four weeks’ treatment with vildagliptin improves endothelium-dependent vasodilatation in subjects with type 2 diabetes. This observation might have favorable cardiovascular implications. PMID:21788633

  12. Automatic bearing fault diagnosis of permanent magnet synchronous generators in wind turbines subjected to noise interference

    Science.gov (United States)

    Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo

    2018-02-01

    An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subjected to low rotating speeds, speed fluctuations, and electrical device noise interference. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronously sampled vibration signal of the faulty bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Two types of faulty bearings with different fault sizes in a PMSG test rig are subjected to experiments to test the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.
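    A minimal illustration (in Python) of the angular-domain resampling step: given the rotating phase theta(t) recovered from the phase current, the vibration record is interpolated onto uniform shaft-angle increments (order tracking). Names and the samples-per-revolution value are illustrative assumptions:

      import numpy as np

      def angular_resample(t, vib, theta, samples_per_rev=256):
          # theta must be a monotonically increasing, unwrapped rotating phase.
          n_rev = theta[-1] / (2.0 * np.pi)
          theta_u = np.linspace(0.0, theta[-1], int(n_rev * samples_per_rev))
          t_u = np.interp(theta_u, theta, t)   # invert theta(t) by interpolation
          return np.interp(t_u, t, vib)        # vibration at uniform angle steps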

  13. Measuring Ambiguity in HLA Typing Methods

    Science.gov (United States)

    Madbouly, Abeer; Freeman, John; Maiers, Martin

    2012-01-01

    In hematopoietic stem cell transplantation, donor selection is based primarily on matching donor and patient HLA genes. These genes are highly polymorphic and their typing can result in exact allele assignment at each gene (the resolution at which patients and donors are matched), but it can also result in a set of ambiguous assignments, depending on the typing methodology used. To facilitate rapid identification of matched donors, registries employ statistical algorithms to infer HLA alleles from ambiguous genotypes. Linkage disequilibrium information encapsulated in haplotype frequencies is used to facilitate prediction of the most likely haplotype assignment. An HLA typing with less ambiguity produces fewer high-probability haplotypes and a more reliable prediction. We estimated ambiguity for several HLA typing methods across four continental populations using an information theory-based measure, Shannon's entropy. We used allele and haplotype frequencies to calculate entropy for different sets of 1,000 subjects with simulated HLA typing. Using allele frequencies we calculated an average entropy in Caucasians of 1.65 for serology, 1.06 for allele family level, 0.49 for a 2002-era SSO kit, and 0.076 for single-pass SBT. When using haplotype frequencies in entropy calculations, we found average entropies of 0.72 for serology, 0.73 for allele family level, 0.05 for SSO, and 0.002 for single-pass SBT. Application of haplotype frequencies further reduces HLA typing ambiguity. We also estimated expected confirmatory typing mismatch rates for simulated subjects. In a hypothetical registry with all donors typed using the same method, the entropy values based on haplotype frequencies correspond to confirmatory typing mismatch rates of 1.31% for SSO versus only 0.08% for SBT. Intermediate-resolution single-pass SBT contains the least ambiguity of the methods we evaluated and therefore the most certainty in allele prediction. The presented measure objectively evaluates HLA
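    The entropy measure itself is straightforward to reproduce; a minimal sketch (in Python) over the assignment probabilities implied by one ambiguous typing, with invented probabilities for illustration:

      import numpy as np

      def shannon_entropy(p):
          p = np.asarray(p, dtype=float)
          p = p[p > 0] / p.sum()
          return float(-(p * np.log2(p)).sum())  # ambiguity in bits

      # A low-resolution typing compatible with four candidate assignments...
      print(shannon_entropy([0.4, 0.3, 0.2, 0.1]))  # ~1.85 bits
      # ...versus a nearly unambiguous high-resolution typing.
      print(shannon_entropy([0.99, 0.01]))          # ~0.08 bits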

  14. Measuring ambiguity in HLA typing methods.

    Directory of Open Access Journals (Sweden)

    Vanja Paunić

    Full Text Available In hematopoietic stem cell transplantation, donor selection is based primarily on matching donor and patient HLA genes. These genes are highly polymorphic and their typing can result in exact allele assignment at each gene (the resolution at which patients and donors are matched, but it can also result in a set of ambiguous assignments, depending on the typing methodology used. To facilitate rapid identification of matched donors, registries employ statistical algorithms to infer HLA alleles from ambiguous genotypes. Linkage disequilibrium information encapsulated in haplotype frequencies is used to facilitate prediction of the most likely haplotype assignment. An HLA typing with less ambiguity produces fewer high-probability haplotypes and a more reliable prediction. We estimated ambiguity for several HLA typing methods across four continental populations using an information theory-based measure, Shannon's entropy. We used allele and haplotype frequencies to calculate entropy for different sets of 1,000 subjects with simulated HLA typing. Using allele frequencies we calculated an average entropy in Caucasians of 1.65 for serology, 1.06 for allele family level, 0.49 for a 2002-era SSO kit, and 0.076 for single-pass SBT. When using haplotype frequencies in entropy calculations, we found average entropies of 0.72 for serology, 0.73 for allele family level, 0.05 for SSO, and 0.002 for single-pass SBT. Application of haplotype frequencies further reduces HLA typing ambiguity. We also estimated expected confirmatory typing mismatch rates for simulated subjects. In a hypothetical registry with all donors typed using the same method, the entropy values based on haplotype frequencies correspond to confirmatory typing mismatch rates of 1.31% for SSO versus only 0.08% for SBT. Intermediate-resolution single-pass SBT contains the least ambiguity of the methods we evaluated and therefore the most certainty in allele prediction. The presented measure

  15. Changing practice: red blood cell typing by molecular methods for patients with sickle cell disease.

    Science.gov (United States)

    Casas, Jessica; Friedman, David F; Jackson, Tannoa; Vege, Sunitha; Westhoff, Connie M; Chou, Stella T

    2015-06-01

    Extended red blood cell (RBC) antigen matching is recommended to limit alloimmunization in patients with sickle cell disease (SCD). DNA-based testing to predict blood group phenotypes has enhanced availability of antigen-negative donor units and improved typing of transfused patients, but replacement of routine serologic typing for non-ABO antigens with molecular typing for patients has not been reported. This study compared the historical RBC antigen phenotypes obtained by hemagglutination methods with genotype predictions in 494 patients with SCD. For discrepant results, repeat serologic testing was performed and/or investigated by gene sequencing for silent or variant alleles. Seventy-one typing discrepancies were identified among 6360 antigen comparisons (1.1%). New specimens for repeat serologic testing were obtained for 66 discrepancies and retyping agreed with the genotype in 64 cases. One repeat Jk(b-) serologic phenotype, predicted Jk(b+) by genotype, was found by direct sequencing of JK to be a silenced allele, and one N typing discrepancy remains under investigation. Fifteen false-negative serologic results were associated with alleles encoding weak antigens or single-dose Fy(b) expression. DNA-based RBC typing provided improved accuracy and expanded information on RBC antigens compared to hemagglutination methods, leading to its implementation as the primary method for extended RBC typing for patients with SCD at our institution. © 2015 AABB.

  16. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
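    RCIC itself avoids actual resampling, but the within-cluster resampling idea it builds on (Hoffman, Sen, and Weinberg, 2001) is easy to sketch: draw one observation per cluster, so that informative cluster sizes cannot bias a marginal estimate, and average over draws. The data layout below is an invented illustration:

      import numpy as np
      import pandas as pd

      def wcr_mean(df, cluster_col, value_col, draws=200, seed=42):
          # One random row per cluster, repeated over many independent draws.
          means = [df.groupby(cluster_col).sample(n=1, random_state=seed + b)[value_col].mean()
                   for b in range(draws)]
          return float(np.mean(means))

      df = pd.DataFrame({'subject': [1, 1, 1, 2, 2, 3],
                         'frailty': [0.2, 0.4, 0.3, 0.9, 0.8, 0.1]})
      print(wcr_mean(df, 'subject', 'frailty'))  # WCR estimate of the marginal mean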

  17. Improved SAR Image Coregistration Using Pixel-Offset Series

    KAUST Repository

    Wang, Teng

    2014-03-14

    Synthetic aperture radar (SAR) image coregistration is a key procedure before interferometric SAR (InSAR) time-series analysis can be started. However, many geophysical data sets suffer from severe decorrelation problems due to a variety of reasons, making precise coregistration a nontrivial task. Here, we present a new strategy that uses a pixel-offset series of detected subimage patches dominated by point-like targets (PTs) to improve SAR image coregistrations. First, all potentially coherent image pairs are coregistered in a conventional way. In this step, we propose a coregistration quality index for each image to rank its relative “significance” within the data set and to select a reference image for the SAR data set. Then, a pixel-offset series of detected PTs is made from amplitude maps to improve the geometrical mapping functions. Finally, all images are resampled depending on the pixel offsets calculated from the updated geometrical mapping functions. We used images from a rural region near the North Anatolian Fault in eastern Turkey to test the proposed method, and clear coregistration improvements were found based on amplitude stability. This improvement indicates that the proposed coregistration strategy should lead to better InSAR time-series analysis results.

  18. Improved SAR Image Coregistration Using Pixel-Offset Series

    KAUST Repository

    Wang, Teng; Jonsson, Sigurjon; Hanssen, Ramon F.

    2014-01-01

    Synthetic aperture radar (SAR) image coregistration is a key procedure before interferometric SAR (InSAR) time-series analysis can be started. However, many geophysical data sets suffer from severe decorrelation problems due to a variety of reasons, making precise coregistration a nontrivial task. Here, we present a new strategy that uses a pixel-offset series of detected subimage patches dominated by point-like targets (PTs) to improve SAR image coregistrations. First, all potentially coherent image pairs are coregistered in a conventional way. In this step, we propose a coregistration quality index for each image to rank its relative “significance” within the data set and to select a reference image for the SAR data set. Then, a pixel-offset series of detected PTs is made from amplitude maps to improve the geometrical mapping functions. Finally, all images are resampled depending on the pixel offsets calculated from the updated geometrical mapping functions. We used images from a rural region near the North Anatolian Fault in eastern Turkey to test the proposed method, and clear coregistration improvements were found based on amplitude stability. This improvement indicates that the proposed coregistration strategy should lead to better InSAR time-series analysis results.

  19. Boomerang: A method for recursive reclassification.

    Science.gov (United States)

    Devlin, Sean M; Ostrovnaya, Irina; Gönen, Mithat

    2016-09-01

    While there are many validated prognostic classifiers used in practice, often their accuracy is modest and heterogeneity in clinical outcomes exists in one or more risk subgroups. Newly available markers, such as genomic mutations, may be used to improve the accuracy of an existing classifier by reclassifying patients from a heterogenous group into a higher or lower risk category. The statistical tools typically applied to develop the initial classifiers are not easily adapted toward this reclassification goal. In this article, we develop a new method designed to refine an existing prognostic classifier by incorporating new markers. The two-stage algorithm called Boomerang first searches for modifications of the existing classifier that increase the overall predictive accuracy and then merges to a prespecified number of risk groups. Resampling techniques are proposed to assess the improvement in predictive accuracy when an independent validation data set is not available. The performance of the algorithm is assessed under various simulation scenarios where the marker frequency, degree of censoring, and total sample size are varied. The results suggest that the method selects few false positive markers and is able to improve the predictive accuracy of the classifier in many settings. Lastly, the method is illustrated on an acute myeloid leukemia data set where a new refined classifier incorporates four new mutations into the existing three category classifier and is validated on an independent data set. © 2016, The International Biometric Society.

  20. Method of operating BWR type power plants

    International Nuclear Information System (INIS)

    Koyama, Kazuaki.

    1981-01-01

    Purpose: To improve the operation efficiency of BWR type reactors by reducing the time from the start-up of the reactor to the start-up of the turbine and electrical generator, as well as to decrease the pressure difference in each of the sections of the pressure vessel, thereby extending its life span. Method: The operation comprises switching the nuclear reactor from the shutdown mode to the start-up mode, increasing the reactor power to a predetermined level lower than the rated power while maintaining the reactor pressure at a predetermined level lower than the rated pressure, starting up a turbine and an electrical generator at the predetermined reactor pressure and reactor power to connect the electrical generator to the power transmission system and, thereafter, increasing the reactor pressure and the reactor power to the rated pressure and rated power, respectively. This shortens the time from the start-up of the reactor to the start of power transmission, whereby the operation efficiency of the power plant is improved. (Moriyama, K.)

  1. Improvement of ozone yield by a multi-discharge type ozonizer using superposition of silent discharge plasma

    International Nuclear Information System (INIS)

    Song, Hyun-Jig; Chun, Byung-Joon; Lee, Kwang-Sik

    2004-01-01

    In order to improve ozone generation, we experimentally investigated the silent discharge plasma and ozone generation characteristics of a multi-discharge type ozonizer. Ozone in a multi-discharge type ozonizer is generated by superposition of silent discharge plasmas, which are generated simultaneously in separate discharge spaces. The multi-discharge type ozonizer is composed of three different kinds of superposed silent-discharge type ozonizers, depending on the method of applying power to each electrode. We observed that the discharge period of the current pulse for a multi-discharge type ozonizer can be longer than that of a silent-discharge type ozonizer with two electrodes and one gap. Hence, ozone generation is improved, up to 17185 ppm and 783 g/kWh, in the case of the superposed silent-discharge type ozonizer in which AC high voltages with a 180° phase difference were applied to the internal electrode and the external electrode, respectively, with the central electrode grounded.

  2. Optimization of the ship type using waveform by means of Rankine source method; Rankine source ho ni yoru hakei wo mochiita funagata saitekika ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, A; Eguchi, T [Mitsui Engineering and Shipbuilding Co. Ltd., Tokyo (Japan)

    1996-04-10

    Among the numerical calculation methods for steady-state wave-making problems, the panel shift Rankine source (PSRS) method has the advantages of rather precise determination of the wave patterns of practical ship types and a short calculation period. The wave pattern around the hull was calculated by means of the PSRS method, and waveform analysis was carried out on the resulting wave to obtain an amplitude function of the original ship type. Based on the amplitude function, a ship type improvement method aiming at the optimization of the hull form was developed using a conditional calculus of variations. A Series 60 (Cb=0.6) ship type was selected for the ship type improvement, to which this technique was applied. The results suggest that an optimum design for reducing the wave-making resistance can be obtained by this method. For the improvement of the Series 60 ship type using this method, a large reduction in the wave-making resistance was confirmed by the numerical waveform analysis. It was also suggested that ship type improvement aimed at reducing wave-making resistance can be achieved in a shorter period and with less labor than methods based on the waveform analysis of tank tests. 5 refs., 9 figs.

  3. The efficiency of average linkage hierarchical clustering algorithm associated multi-scale bootstrap resampling in identifying homogeneous precipitation catchments

    Science.gov (United States)

    Chuan, Zun Liang; Ismail, Noriszura; Shinyie, Wendy Ling; Lit Ken, Tan; Fam, Soo-Fen; Senawi, Azlyna; Yusoff, Wan Nur Syahidah Wan

    2018-04-01

    Due to the limited historical precipitation records, agglomerative hierarchical clustering algorithms are widely used to extrapolate information from gauged to ungauged precipitation catchments, yielding more reliable projections of extreme hydro-meteorological events such as extreme precipitation. However, accurately identifying the optimum number of homogeneous precipitation catchments from the dendrograms produced by agglomerative hierarchical algorithms is very subjective. The main objective of this study is to propose an efficient regionalization algorithm to identify homogeneous precipitation catchments for non-stationary precipitation time series. The homogeneous precipitation catchments are identified using the average-linkage hierarchical clustering algorithm associated with multi-scale bootstrap resampling, with the uncentered correlation coefficient as the similarity measure. The regionalized homogeneous precipitation is consolidated using the K-sample Anderson-Darling non-parametric test. The analysis results show that the proposed regionalization algorithm performs better than the agglomerative hierarchical clustering algorithms proposed in previous studies.
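    A minimal version (in Python) of the clustering core: average-linkage hierarchical clustering with distance d = 1 - (uncentered correlation). The multi-scale bootstrap p-values of the full method (as implemented, e.g., in pvclust) are not reproduced here, and the gauge data are invented:

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      def uncentered_corr_dist(series):
          # Pairwise distances between rows: d = 1 - uncentered correlation.
          x = np.asarray(series, dtype=float)
          norm = np.sqrt((x ** 2).sum(axis=1))
          return 1.0 - (x @ x.T) / np.outer(norm, norm)

      rng = np.random.default_rng(1)
      gauges = rng.gamma(2.0, 10.0, size=(8, 120))     # 8 gauges, 120 months
      d = uncentered_corr_dist(gauges)
      z = linkage(d[np.triu_indices(8, k=1)], method='average')  # condensed form
      labels = fcluster(z, t=2, criterion='maxclust')  # e.g., two homogeneous regions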

  4. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of the relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of each variable. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination reached a maximum of 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
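    A sketch (in Python) of the range-aware idea described above: each variable contributes a similarity scaled by its own range r, and the per-variable kernels are averaged so that no variable dominates. The formula follows the abstract's description and should be treated as an approximation of the paper's kernel, not its exact definition:

      import numpy as np

      def clinical_kernel(X, Y, ranges):
          # K[i, j] = mean over variables v of (r_v - |x_v - y_v|) / r_v.
          K = np.zeros((len(X), len(Y)))
          for v, r in enumerate(ranges):
              diff = np.abs(X[:, v][:, None] - Y[None, :, v])
              K += (r - diff) / r
          return K / len(ranges)

      X = np.array([[34.0, 1.0], [71.0, 0.0]])  # e.g., age and gender
      ranges = [80.0, 1.0]                      # observed range of each variable
      print(clinical_kernel(X, X, ranges))      # diagonal entries equal 1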

  5. Studies on improvements in the control methods of boiling water reactor plant

    International Nuclear Information System (INIS)

    Mankin, Shuichi

    1982-08-01

    In order to improve the performance of regulation and load-following control of a boiling water reactor plant, optimal control theory is applied and new types of control methods are developed. The Case-α controller is first formulated on the basis of optimal linear regulator theory applied to the linearized model of the system; it is then modified by adding an integration-type action in a feedback loop and by using a variable gain and reference for adapting to the requested power level. The Case-β controller consists of a hierarchical control scheme which has classical P.I.-type sub-loop controllers at the first level and a linear optimal regulator at the second level. The controller is designed on the basis of optimal regulator theory applied to the multivariate autoregressive system model obtained from identification experiments, where the system model is determined with the conventional sub-loop controllers included. The results of the simulation experiments show that these proposed control methods perform fairly well and will be useful for improving the performance of nuclear power plant control. In addition, it is suggested that these control methods will also be attractive for the control of other production plants, because they were developed in an attempt to bridge the so-called 'gap between optimal control theory and actual systems.' (author)

  6. A Proposed Method for Improving the Performance of P-Type GaAs IMPATTs

    Directory of Open Access Journals (Sweden)

    H. A. El-Motaafy

    2012-07-01

    Full Text Available A special waveform is proposed and assumed to be the optimum waveform for p-type GaAs IMPATTs. This waveform is deduced after a careful and extensive study of the performance of these devices. The results presented here indicate the superiority of the performance of the IMPATTs driven by the proposed waveform over that obtained when the same IMPATTs are driven by the conventional sinusoidal waveform. These results are obtained using a full-scale computer simulation program that takes fully into account all the physical effects pertinent to IMPATT operation. In this paper, it is indicated that the superiority of the proposed waveform is attributed to its ability to reduce the effects that usually degrade IMPATT performance, such as the space-charge effect and the drift velocity dropping below saturation. The superiority is also attributed to the ability of the proposed waveform to improve the phase relationship between the terminal voltage and the induced current. Key words: Computer-Aided Design, GaAs IMPATT, Microwave Engineering

  7. An improved wavelet-Galerkin method for dynamic response reconstruction and parameter identification of shear-type frames

    Science.gov (United States)

    Bu, Haifeng; Wang, Dansheng; Zhou, Pin; Zhu, Hongping

    2018-04-01

    An improved wavelet-Galerkin (IWG) method based on the Daubechies wavelet is proposed for reconstructing the dynamic responses of shear structures. The proposed method flexibly manages the wavelet resolution level according to the excitation, thereby avoiding the weakness of the wavelet-Galerkin multiresolution analysis (WGMA) method with respect to resolution and the requirement of external excitation. IWG is applied here to several case studies involving single- and n-degree-of-freedom frame structures subjected to a determined discrete excitation. Results demonstrate that IWG performs better than WGMA in terms of accuracy and computational efficiency. Furthermore, a new method for parameter identification based on IWG and an optimization algorithm is also developed for shear frame structures, and simultaneous identification of structural parameters and excitation is implemented. Numerical results demonstrate that the proposed identification method is effective for shear frame structures.

  8. Statistical methods for quality improvement

    National Research Council Canada - National Science Library

    Ryan, Thomas P

    2011-01-01

    ...."-TechnometricsThis new edition continues to provide the most current, proven statistical methods for quality control and quality improvementThe use of quantitative methods offers numerous benefits...

  9. Pyrolysis-gas chromatographic method for kerogen typing

    Energy Technology Data Exchange (ETDEWEB)

    Larter, S.R.; Douglas, A.G.

    1980-01-01

    The classification of kerogens according to their type and rank is important for the definition of any kerogen assemblage. Whereas optical methods of rank determination are well known, with vitrinite reflectance and spore coloration being the most widely accepted, chemical methods for typing kerogens are less developed. In this work we show that pyrograms, produced by pyrolyzing microgram quantities of solvent-extracted kerogens, enable not only their characterization in terms of a chromatographic fingerprint but also the production of a numerical type index, determined as the ratio of m(+p)-xylene to n-octene (oct-1-ene) in the pyrogram. This index appears to be a close function of kerogen type. Type 3 kerogens (Tissot et al., 1974), including vitrinite, give a high type index and have pyrolysates dominated by aromatic and phenolic compounds, whereas type 1 kerogens give an aliphatic-rich pyrolysate and consequently a low type index. The type index described here correlates well with microscopic and elemental analysis data, and the pyrogram fingerprint provides an additional level of characterization not attainable with other current typing techniques.

  10. Method for controlling FBR type reactor

    International Nuclear Information System (INIS)

    Tamano, Toyomi; Iwashita, Tsuyoshi; Sakuragi, Masanori

    1991-01-01

    The present invention provides a controlling method for moderating the thermal transient upon trip in an FBR type reactor. A flow channel bypassing the intermediate heat exchanger is disposed in the secondary Na system. The bypass flow rate is then controlled so as to suppress fluctuations of the temperature at the primary exit of the intermediate heat exchanger. The bypassing operation using the bypass flow channel is started at the same time as the plant trip, to reduce the flow rate of secondary Na flowing to the intermediate heat exchanger, so that the imbalance between the primary and secondary Na flow rates is reduced. Accordingly, fluctuations of the temperature at the primary exit of the intermediate heat exchanger upon trip are suppressed. In view of the above, the thermal transient applied to the reactor container upon plant trip can be moderated. As a result, the working life of the reactor can be extended, improving plant integrity and safety. (I.S.)

  11. Improved Control Strategy for T-type Isolated DC/DC Converters

    DEFF Research Database (Denmark)

    Liu, Dong; Deng, Fujin; Wang, Yanbo

    2017-01-01

    T-type isolated DC/DC converters have recently attracted attention due to their numerous advantages, including few components, low cost, and symmetrical operation of transformers. This study proposes an improved control strategy for increasing the efficiency of T-type isolated DC/DC converters. Under the proposed strategy, the primary circulating current flows through the auxiliary switches (metal-oxide-semiconductor field-effect transistors) instead of their body diodes in free-wheeling periods. This feature reduces conduction losses, thereby improving the efficiency of T-type isolated DC...

  12. INTENSITY, DURATION AND TYPE OF PHYSICAL ACTIVITY REQUIRED TO IMPROVE FUNCTION IN KNEE OSTEOARTHRITIS

    Science.gov (United States)

    KIRIHARA, RICARDO AKIHIRO; CATELAN, FELLIPE BRAVIM; FARIAS, FABIANE ELIZE SABINO DE; SILVA, CLEIDNÉIA APARECIDA CLEMENTE DA; CERNIGOY, CLAUDIA HELENA DE AZEVEDO; REZENDE, MÁRCIA UCHOA DE

    2017-01-01

    ABSTRACT Objective: To evaluate the effects of physical activity intensity, type and duration in patients with knee osteoarthritis (KOA). Methods: A retrospective study of 195 KOA patients who were followed for two years after receiving educational material about KOA, with or without attending classes. The patients were evaluated at baseline and 24 months. At the evaluations, the patients answered questionnaires pertaining to pain and function (WOMAC, Lequesne, VAS and SF-36); reported the intensity, duration and type of exercise performed per week; and performed the Timed Up & Go (TUG) and Five Times Sit-to-Stand (FTSST) tests. Results: Increased age affected improvements in the TUG results (p=0.017). The type, intensity and duration of physical activity did not correlate with pain, function or quality of life improvements (p>0.05), but the TUG results were on average 4 seconds faster among the patients who practiced intense physical activity and/or exercised for more than 180 minutes per week and/or performed isolated weight training or swam, compared with those who remained sedentary after 2 years (p=0.01). The findings suggest that patients with KOA require intense and/or prolonged physical activity, including weight training (bodybuilding), for relevant pain reduction and functional improvement. Level of Evidence II, Retrospective Study. PMID:28642646

  13. Review and International Recommendation of Methods for Typing Neisseria gonorrhoeae Isolates and Their Implications for Improved Knowledge of Gonococcal Epidemiology, Treatment, and Biology

    Science.gov (United States)

    Unemo, Magnus; Dillon, Jo-Anne R.

    2011-01-01

    Summary: Gonorrhea, which may become untreatable due to multiple resistance to available antibiotics, remains a public health problem worldwide. Precise methods for typing Neisseria gonorrhoeae, together with epidemiological information, are crucial for an enhanced understanding regarding issues involving epidemiology, test of cure and contact tracing, identifying core groups and risk behaviors, and recommending effective antimicrobial treatment, control, and preventive measures. This review evaluates methods for typing N. gonorrhoeae isolates and recommends various methods for different situations. Phenotypic typing methods, as well as some now-outdated DNA-based methods, have limited usefulness in differentiating between strains of N. gonorrhoeae. Genotypic methods based on DNA sequencing are preferred, and the selection of the appropriate genotypic method should be guided by its performance characteristics and whether short-term epidemiology (microepidemiology) or long-term and/or global epidemiology (macroepidemiology) matters are being investigated. Currently, for microepidemiological questions, the best methods for fast, objective, portable, highly discriminatory, reproducible, typeable, and high-throughput characterization are N. gonorrhoeae multiantigen sequence typing (NG-MAST) or full- or extended-length porB gene sequencing. However, pulsed-field gel electrophoresis (PFGE) and Opa typing can be valuable in specific situations, i.e., extreme microepidemiology, despite their limitations. For macroepidemiological studies and phylogenetic studies, DNA sequencing of chromosomal housekeeping genes, such as multilocus sequence typing (MLST), provides a more nuanced understanding. PMID:21734242

  14. Understanding the relationship between vegetation phenology and productivity across key dryland ecosystem types through the integration of PhenoCam, satellite, and eddy covariance data

    Science.gov (United States)

    Yan, D.; Scott, R. L.; Moore, D. J.; Biederman, J. A.; Smith, W. K.

    2017-12-01

    Land surface phenology (LSP) - defined as remotely sensed seasonal variations in vegetation greenness - is intrinsically linked to seasonal carbon uptake, and is thus commonly used as a proxy for vegetation productivity (gross primary productivity; GPP). Yet, the relationship between LSP and GPP remains uncertain, particularly for understudied dryland ecosystems characterized by relatively large spatial and temporal variability. Here, we explored the relationship between LSP and the phenology of GPP for three dominant dryland ecosystem types, and we evaluated how these relationships change as a function of spatial and temporal scale. We focused on three long-term dryland eddy covariance flux tower sites: Walnut Gulch Lucky Hills Shrubland (WHS), Walnut Gulch Kendall Grassland (WKG), and Santa Rita Mesquite (SRM). We analyzed daily canopy-level, 16-day 30m, and 8-day 500m time series of greenness indices from PhenoCam, Landsat 7 ETM+/Landsat 8 OLI, and MODIS, respectively. We first quantified the impact of spatial scale by temporally resampling canopy-level PhenoCam, 30m Landsat, and 500m MODIS to 16-day intervals and then comparing against flux tower GPP estimates. We next quantified the impact of temporal scale by spatially resampling daily PhenoCam, 16-day Landsat, and 8-day MODIS to 500m time series and then comparing against flux tower GPP estimates. We find evidence of critical periods of decoupling between LSP and the phenology of GPP that vary according to the spatial and temporal scale, and as a function of ecosystem type. Our results provide key insight into dryland LSP and GPP dynamics that can be used in future efforts to improve ecosystem process models and satellite-based vegetation productivity algorithms.

  15. The Improved Locating Algorithm of Particle Filter Based on ROS Robot

    Science.gov (United States)

    Fang, Xun; Fu, Xiaoyang; Sun, Ming

    2018-03-01

    This paper analyzes the basic theory and primary algorithms of real-time localization and SLAM technology for robots based on the ROS system. It proposes an improved particle-filter localization algorithm that effectively reduces the time spent matching the laser radar scan against the map; in addition, ultra-wideband technology directly accelerates the overall efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling has been reduced by about 5/6, which largely eliminates the corresponding matching operations in the robot's algorithm.

  16. Continuous improvement methods in the nuclear industry

    International Nuclear Information System (INIS)

    Heising, Carolyn D.

    1995-01-01

    The purpose of this paper is to investigate management methods for improved safety in the nuclear power industry. Process improvement management, methods of business process reengineering, total quality management, and continuous process improvement (KAIZEN) are explored. The anticipated advantages of extensive use of improved process-oriented management methods in the nuclear industry are increased effectiveness and efficiency in virtually all tasks of plant operation and maintenance. Important spin-offs include increased plant safety and economy. (author). 6 refs., 1 fig

  17. AN OBJECT-BASED METHOD FOR CHINESE LANDFORM TYPES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Ding

    2016-06-01

    Full Text Available Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forests and the gray-level co-occurrence matrix (GLCM). In this research, based on the 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. The GLCM is then computed to build the knowledge base for classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as a reference. The overall classification accuracy of the proposed method is 5.7% higher than that of ISODATA unsupervised classification, and 15.7% higher than that of the traditional object-based classification method.
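    A sketch (in Python, with scikit-learn) of the factor-importance step: rank DEM-derived terrain factors with a random forest and keep the most important ones as segmentation inputs. The factor names and data are invented for illustration:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      factors = ['slope', 'relief', 'roughness', 'curvature']
      X = rng.normal(size=(500, len(factors)))
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy landform label

      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
      for name, imp in sorted(zip(factors, rf.feature_importances_), key=lambda t: -t[1]):
          print(f'{name}: {imp:.3f}')   # higher importance -> stronger factor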

  18. Method Improving Reading Comprehension In Primary Education Program Students

    Science.gov (United States)

    Rohana

    2018-01-01

    This study aims to determine the influence of the SQ3R learning method on the English reading comprehension skills of PGSD (primary education program) students. This research is pre-experimental, since it is not yet a true experiment: external variables may influence the formation of the dependent variable, because there is no control variable and the sample is not chosen randomly. The research design used is a one-group pretest-posttest design involving a single experimental group. In this design, observations are made twice, before and after the experiment. The observation made before the experiment (O1) is called the pretest, and the post-experimental observation (O2) is called the posttest. The difference between O1 and O2, i.e., O2 - O1, is the effect of the treatment. The results showed an improvement in the reading comprehension skills of PGSD students in class M.4.3 using the SQ3R method, indicating that SQ3R can improve English comprehension skills.

  19. Workshop on Analytical Methods in Statistics

    CERN Document Server

    Jurečková, Jana; Maciak, Matúš; Pešta, Michal

    2017-01-01

    This volume collects authoritative contributions on analytical methods and mathematical statistics. The methods presented include resampling techniques; the minimization of divergence; estimation theory and regression, possibly under shape or other constraints or long memory; and iterative approximations when the optimal solution is difficult to achieve. It also investigates probability distributions with respect to their stability, heavy-tailedness, Fisher information and other aspects, both asymptotically and non-asymptotically. The book not only presents the latest mathematical and statistical methods and their extensions, but also offers solutions to real-world problems including option pricing. The selected, peer-reviewed contributions were originally presented at the workshop on Analytical Methods in Statistics, AMISTAT 2015, held in Prague, Czech Republic, November 10-13, 2015.

  20. An Improved Iterative Fitting Method to Estimate Nocturnal Residual Layer Height

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-08-01

    Full Text Available The planetary boundary layer (PBL) is an atmospheric region near the Earth's surface. It is significant for weather forecasting and for the study of air quality and climate. In this study, the tops of nocturnal residual layers, which are what remain of the daytime mixing layer, are estimated by an elastic backscatter Lidar in Wuhan (30.5°N, 114.4°E), a city in Central China. The ideal profile fitting method is widely applied to determine the nocturnal residual layer height (RLH) from Lidar data. However, the method is seriously affected by optically thick layers. We therefore propose an improved iterative fitting method to eliminate the effect of optically thick layers on RLH detection using Lidar. Two typical case studies observed by elastic Lidar are presented to demonstrate the theory and advantages of the proposed method. The results of the case analyses indicate that the improved method is more practical and precise than the profile-fitting, gradient, and wavelet covariance transform methods for nocturnal RLH evaluation under low-cloud conditions. Long-term observations of RLH performed with the ideal profile fitting and the improved methods were carried out in Wuhan from 28 May 2011 to 17 June 2016. Comparisons of Lidar-derived RLHs from the two types of methods verify that the improved solution is practical. Statistical analysis of six years of Lidar signals was conducted to reveal the monthly average values of nocturnal RLH in Wuhan. A clear monthly RLH cycle, with a maximum mean height of about 1.8 km above ground level observed in August and a minimum of about 0.7 km observed in January, was found. The variation in monthly mean RLH displays an obvious quarterly dependence, which coincides with the annual variation in local surface temperature.
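    For orientation, the ideal-profile step that the improved method iterates on can be sketched (in Python) as an erf-shaped fit whose inflection height is taken as the layer top (after Steyn et al.); the synthetic profile below is an invented illustration, not Lidar data:

      import numpy as np
      from scipy.special import erf
      from scipy.optimize import curve_fit

      def ideal_profile(z, bm, bu, zm, s):
          # In-layer mean bm, above-layer mean bu, layer top zm, transition width s.
          return (bm + bu) / 2.0 - (bm - bu) / 2.0 * erf((z - zm) / s)

      z = np.linspace(0.2, 3.0, 280)                 # height, km AGL
      truth = ideal_profile(z, 1.0, 0.2, 1.4, 0.15)
      obs = truth + np.random.default_rng(3).normal(0.0, 0.02, z.size)
      popt, _ = curve_fit(ideal_profile, z, obs, p0=(1.0, 0.1, 1.0, 0.2))
      print(f'estimated layer top: {popt[2]:.2f} km')  # ~1.4 km here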

  1. On a linear method in bootstrap confidence intervals

    Directory of Open Access Journals (Sweden)

    Andrea Pallini

    2007-10-01

    Full Text Available A linear method for the construction of asymptotic bootstrap confidence intervals is proposed. We approximate asymptotically pivotal and non-pivotal quantities, which are smooth functions of means of n independent and identically distributed random variables, by using a sum of n independent smooth functions of the same analytical form. Errors are of order Op(n^(-3/2)) and Op(n^(-2)), respectively. The linear method allows a straightforward approximation of bootstrap cumulants, by considering the set of n independent smooth functions as an original random sample to be resampled with replacement.
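    For contrast, the plain resampling version that the linear method approximates: a percentile bootstrap interval (in Python) for a smooth function of a mean. The linear method reaches comparable accuracy without the resampling loop below:

      import numpy as np

      rng = np.random.default_rng(7)
      x = rng.exponential(2.0, size=200)
      stat = lambda s: np.log(s.mean())       # a smooth function of a mean
      boot = np.array([stat(rng.choice(x, x.size, replace=True))
                       for _ in range(2000)])
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f'95% percentile interval for log(mean): [{lo:.3f}, {hi:.3f}]')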

  2. Improved image alignment method in application to X-ray images and biological images.

    Science.gov (United States)

    Wang, Ching-Wei; Chen, Hsiang-Chou

    2013-08-01

    Alignment of medical images is a vital component of a large number of applications throughout the clinical track of events; not only within clinical diagnostic settings, but prominently so in the area of planning, consummation and evaluation of surgical and radiotherapeutical procedures. However, registration of medical images is challenging because of variations in data appearance, imaging artifacts and complex data deformation problems. Hence, the aim of this study is to develop a robust image alignment method for medical images. An improved image registration method is proposed and evaluated with two types of medical data, including biological microscopic tissue images and dental X-ray images, and compared with five state-of-the-art image registration techniques. The experimental results show that the presented method consistently performs well on both types of medical images, achieving 88.44 and 88.93% averaged registration accuracies for biological tissue images and X-ray images, respectively, and outperforms the benchmark methods. Based on Tukey's honestly significant difference test and Fisher's least significant difference test, the presented method performs significantly better than all existing methods (P ≤ 0.001) for tissue image alignment; for the X-ray image registration, the proposed method performs significantly better than the two benchmark b-spline approaches (P < 0.001). The software implementation of the presented method and the data used in this study are made publicly available for scientific communities to use (http://www-o.ntust.edu.tw/∼cweiwang/ImprovedImageRegistration/). cweiwang@mail.ntust.edu.tw.

  3. Various Newton-type iterative methods for solving nonlinear equations

    Directory of Open Access Journals (Sweden)

    Manoj Kumar

    2013-10-01

    Full Text Available The aim of the present paper is to introduce and investigate new ninth- and seventh-order convergent Newton-type iterative methods for solving nonlinear equations. The ninth-order convergent Newton-type iterative method is made derivative-free to obtain a seventh-order convergent Newton-type iterative method. The new methods, with and without derivatives, have efficiency indices of 1.5518 and 1.6266, respectively. Error equations are used to establish the order of convergence of the proposed iterative methods. Finally, various numerical comparisons are carried out in MATLAB to demonstrate the performance of the developed methods.
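    The abstract does not give the ninth- and seventh-order schemes themselves; for orientation, here (in Python rather than the paper's MATLAB) are the two classical building blocks such methods compose: the Newton step, which needs a derivative, and the derivative-free secant step:

      def newton(f, df, x0, tol=1e-12, itmax=50):
          # Classical Newton iteration: x <- x - f(x)/f'(x).
          x = x0
          for _ in range(itmax):
              step = f(x) / df(x)
              x -= step
              if abs(step) < tol:
                  break
          return x

      def secant(f, x0, x1, tol=1e-12, itmax=50):
          # Derivative-free: the derivative is replaced by a finite difference.
          for _ in range(itmax):
              if abs(x1 - x0) < tol:
                  break
              x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
          return x1

      f = lambda x: x ** 3 - 2.0
      print(newton(f, lambda x: 3.0 * x ** 2, 1.0))  # cube root of 2
      print(secant(f, 1.0, 2.0))                     # same root, no derivative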

  4. Improving Type Error Messages in OCaml

    Directory of Open Access Journals (Sweden)

    Arthur Charguéraud

    2015-12-01

    Full Text Available Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise and systematically useful to the programmer, but also to handle a full-blown programming language and to cope with large-sized programs efficiently. In this work, we present a modification to the traditional ML type inference algorithm implemented in OCaml that, by significantly reducing the left-to-right bias, allows us to report error messages that are more helpful to the programmer. Our algorithm remains fully predictable and continues to produce fairly concise error messages that always help making some progress towards fixing the code. We implemented our approach as a patch to the OCaml compiler in just a few hundred lines of code. We believe that this patch should benefit not just beginners, but also experienced programmers developing large-scale OCaml programs.

  5. Combining gene prediction methods to improve metagenomic gene annotation

    Directory of Open Access Journals (Sweden)

    Rosen Gail L

    2011-01-01

    Full Text Available Abstract Background Traditional gene annotation methods rely on characteristics that may not be available in short reads generated from next generation technology, resulting in suboptimal performance for metagenomic (environmental) samples. Therefore, in recent years, new programs have been developed that optimize performance on short reads. In this work, we benchmark three metagenomic gene prediction programs and combine their predictions to improve metagenomic read gene annotation. Results We not only analyze the programs' performance at different read lengths, as in similar studies, but also separate different types of reads, including intra- and intergenic regions, for analysis. The main deficiencies are in the algorithms' ability to predict non-coding regions and gene edges, resulting in more false positives and false negatives than desired. In fact, the specificities of the algorithms are notably worse than the sensitivities. By combining the programs' predictions, we show significant improvement in specificity at minimal cost to sensitivity, resulting in a 4% improvement in accuracy for 100 bp reads and a ~1% improvement in accuracy for 200 bp reads and above. To correctly annotate the start and stop of the genes, we find that a consensus of all the predictors performs best for shorter read lengths while unanimous agreement is better for longer read lengths, boosting annotation accuracy by 1-8%. We also demonstrate use of the classifier combinations on a real dataset. Conclusions To optimize the performance for both prediction and annotation accuracy, we conclude that the consensus of all methods (or a majority vote) is best for reads 400 bp and shorter, while the intersection of GeneMark and Orphelia predictions is best for reads 500 bp and longer. We demonstrate that most methods predict over 80% coding (including partially coding) reads on a real human gut sample sequenced by Illumina technology.
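
    As a rough illustration of the combination strategies compared above (the input lists and the third predictor are hypothetical placeholders; the study names only GeneMark and Orphelia explicitly), majority-vote consensus and intersection over per-read coding calls can be written as:

        # Sketch: combining per-read coding (1) / non-coding (0) calls from
        # several gene predictors. "consensus" = majority vote;
        # "intersection" = unanimous agreement. Toy inputs only.
        def majority_vote(predictions):
            n = len(predictions)
            return [1 if 2 * sum(calls) >= n else 0
                    for calls in zip(*predictions)]

        def intersection(predictions):
            return [1 if all(calls) else 0 for calls in zip(*predictions)]

        genemark = [1, 0, 1, 1]
        orphelia = [1, 1, 0, 1]
        third    = [0, 1, 1, 1]   # hypothetical third predictor
        print(majority_vote([genemark, orphelia, third]))  # -> [1, 1, 1, 1]
        print(intersection([genemark, orphelia]))          # -> [1, 0, 0, 1]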

  6. IMPROVEMENT OF RECOGNITION QUALITY IN DEEP LEARNING NETWORKS BY SIMULATED ANNEALING METHOD

    Directory of Open Access Journals (Sweden)

    A. S. Potapov

    2014-09-01

    Full Text Available The subject of this research is deep learning methods, in which feature transforms are constructed automatically for pattern recognition tasks. Multilayer autoencoders are the type of deep learning network considered here; they perform a nonlinear feature transform with logistic regression as an upper classification layer. To verify the hypothesis that the recognition rate of deep learning networks, which are traditionally trained layer-by-layer by gradient descent, can be improved by global optimization of their parameters, a new method has been designed and implemented. The method applies simulated annealing to tune the connection weights of the autoencoders while the regression layer is simultaneously trained by stochastic gradient descent. Experiments on the standard MNIST handwritten digit database show that the modified method decreases the recognition error rate by a factor of 1.1 to 1.5 compared to the traditional method, which is based on local optimization. Thus, no overfitting effect appears, and the possibility of improving deep learning networks by global optimization methods (in terms of increased recognition probability) is confirmed. The results can be applied to improve the probability of pattern recognition in fields that require automatic construction of nonlinear feature transforms, in particular image recognition. Keywords: pattern recognition, deep learning, autoencoder, logistic regression, simulated annealing.
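
    The core of the approach is the Metropolis acceptance rule of simulated annealing applied to connection weights. A generic sketch follows; the proposal scale, cooling schedule, and toy loss function are illustrative assumptions, not the paper's autoencoder setup.

        import math, random

        # Generic simulated annealing sketch: always accept improvements,
        # accept degradations with probability exp(-delta/T), cool T
        # geometrically. Hyperparameters are illustrative only.
        def anneal(loss, w, t0=1.0, t_min=1e-3, cooling=0.95, steps_per_t=20):
            best, best_loss = list(w), loss(w)
            t = t0
            while t > t_min:
                for _ in range(steps_per_t):
                    cand = [wi + random.gauss(0.0, t) for wi in w]  # scale ~ T
                    delta = loss(cand) - loss(w)
                    if delta < 0 or random.random() < math.exp(-delta / t):
                        w = cand
                        if loss(w) < best_loss:
                            best, best_loss = list(w), loss(w)
                t *= cooling
            return best

        # Toy example: minimize a quadratic bowl in two weights.
        print(anneal(lambda w: sum(x * x for x in w), [3.0, -2.0]))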

  7. Improvement of human reliability analysis method for PRA

    International Nuclear Information System (INIS)

    Tanji, Junichi; Fujimoto, Haruo

    2013-09-01

    It is required to refine the human reliability analysis (HRA) method by, for example, incorporating consideration of the operator's cognitive process into the evaluation of diagnosis and decision-making errors, as part of the development and improvement of methods used in probabilistic risk assessments (PRAs). JNES has developed an HRA method based on ATHENA which is suitable for handling the structured relationship among diagnosis errors, decision-making errors and the operator cognition process. This report summarizes outcomes obtained from the improvement of the HRA method, which enhances the evaluation of how degraded plant conditions affect the operator's cognitive process and of human error probabilities (HEPs) corresponding to the contents of operator tasks. In addition, this report describes the results of case studies on representative accident sequences to investigate the applicability of the developed HRA method. HEPs of the same accident sequences are also estimated using the THERP method, the most widely used HRA method, and the results obtained with the two methods are compared to depict their differences and the issues to be solved. Important conclusions obtained are as follows: (1) Improvement of the HRA method using an operator cognitive action model. To improve the HRA method, which integrates the operator cognitive action model into the ATHENA method, the factors to be considered in the evaluation of human errors were clarified, degraded plant safety conditions were incorporated into the HRA, and HEPs affected by the contents of operator tasks were investigated. In addition, the detailed procedures of the improved method were delineated in the form of a flowchart. (2) Case studies and comparison with the results evaluated by the THERP method. Four operator actions modeled in the PRAs of representative BWR5 and 4-loop PWR plants were selected and evaluated as case studies. These cases were also evaluated using

  8. Improvement of nuclear ship engineering simulation system. Hardware renewal and interface improvement of the integral type reactor

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Hiroki; Kyoya, Masahiko; Shimazaki, Junya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]; Kano, Tadashi [KCS, Co., Mito, Ibaraki (Japan)]; Takahashi, Teruo [Energis, Co., Kobe, Hyogo (Japan)]

    2001-10-01

    JAERI carried out a design study of a lightweight and compact integral type reactor (an advanced marine reactor) with passive safety equipment as a power source for future nuclear ships, and completed an engineering design. We have developed a simulator for the integral type reactor to confirm the design and operating performance and to support studies on automation of reactor operation. The simulator can also be used for future research and development of compact reactors. However, improvements in hardware performance and in the human-machine interface of the simulator software were needed for future research and development. Therefore, the hardware has been renewed and the software improved, and the operability of the integral-reactor simulator has been enhanced. Furthermore, carrying out this improvement with commercially available hardware and software brought about better versatility, maintainability, extendibility and transferability of the system. This report mainly focuses on the enhancement of the human-machine interface, and describes the hardware renewal and interface improvement of the integral type reactor simulator. (author)

  9. Empirical evaluation of data normalization methods for molecular classification.

    Science.gov (United States)

    Huang, Huei-Chung; Qin, Li-Xuan

    2018-01-01

    Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated on independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.

  10. Application of improved AHP method to radiation protection optimization

    International Nuclear Information System (INIS)

    Wang Chuan; Zhang Jianguo; Yu Lei

    2014-01-01

    To address the deficiencies of the traditional AHP method, a hierarchy model for optimal project selection in radiation protection was established with an improved AHP method. A comparison between the improved and the traditional AHP method shows that the improved method reduces the subjectivity of personal judgment and that its calculation process is compact and reasonable. The improved AHP method can provide a scientific basis for radiation protection optimization. (authors)
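
    The abstract does not detail the improvement, but the classical AHP computation it builds on, deriving priority weights from a pairwise comparison matrix and checking its consistency, can be sketched as follows (the judgment matrix is a made-up example):

        import numpy as np

        # Classical AHP step: weights = normalized principal eigenvector of
        # the pairwise comparison matrix A; CI = (lambda_max - n)/(n - 1)
        # flags inconsistent judgments. The matrix is a made-up example.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)            # principal eigenvalue index
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()               # priority weights, sum to 1

        n = A.shape[0]
        CI = (eigvals.real[k] - n) / (n - 1)   # consistency index
        print(weights, CI)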

  11. Increase in speed of Wilkinson-type ADC and improvement of differential non-linearity

    Energy Technology Data Exchange (ETDEWEB)

    Kinbara, S [Japan Atomic Energy Research Inst., Tokai, Ibaraki. Tokai Research Establishment]

    1977-06-01

    It is shown that the differential non-linearity of a Wilkinson-type analog-to-digital converter (ADC) is dominated by the unbalance of even-numbered periods caused by interference resulting from operation of a channel scaler. To improve this situation, new methods were tested which allow such interference to be dispersed. Measurements show that a differential non-linearity value of ±0.043% is attainable for a clock rate of 300 MHz.

  12. Diabetes education improves depressive state in newly diagnosed patients with type 2 diabetes

    OpenAIRE

    Chen, Bin; Zhang, Xiyao; Xu, Xiuping; Lv, Xiaofeng; Yao, Lu; Huang, Xu; Guo, Xueying; Liu, Baozhu; Li, Qiang; Cui, Can

    2013-01-01

    Objectives: The prevalence of depression is relatively high in individuals with diabetes. However, screening and monitoring of depressive state in patients with diabetes is still neglected in developing countries and the treatment of diabetes-related depression is rarely performed in these countries. In this study, our aim was to study the role of diabetes education in the improvement of depressive state in newly diagnosed patients with type 2 diabetes. Methods: The Dutch version of the cente...

  13. Plasma proteomics classifiers improve risk prediction for renal disease in patients with hypertension or type 2 diabetes

    DEFF Research Database (Denmark)

    Pena, Michelle J; Jankowski, Joachim; Heinze, Georg

    2015-01-01

    OBJECTIVE: Micro and macroalbuminuria are strong risk factors for progression of nephropathy in patients with hypertension or type 2 diabetes. Early detection of progression to micro and macroalbuminuria may facilitate prevention and treatment of renal diseases. We aimed to develop plasma...... proteomics classifiers to predict the development of micro or macroalbuminuria in hypertension or type 2 diabetes. METHODS: Patients with hypertension (n = 125) and type 2 diabetes (n = 82) were selected for this case-control study from the Prevention of REnal and Vascular ENd-stage Disease cohort....... RESULTS: In hypertensive patients, the classifier improved risk prediction for transition in albuminuria stage on top of the reference model (C-index from 0.69 to 0.78; P < ...). In type 2 diabetes, the classifier improved risk prediction for transition from micro to macroalbuminuria (C-index from 0...

  14. Cutibacterium acnes molecular typing: time to standardize the method.

    Science.gov (United States)

    Dagnelie, M-A; Khammari, A; Dréno, B; Corvec, S

    2018-03-12

    The Gram-positive, anaerobic/aerotolerant bacterium Cutibacterium acnes is a commensal of healthy human skin; it is subdivided into six main phylogenetic groups or phylotypes: IA1, IA2, IB, IC, II and III. To decipher how far specific subgroups of C. acnes are involved in disease physiopathology, different molecular typing methods have been developed to identify these subgroups: i.e. phylotypes, clonal complexes, and types defined by single-locus sequence typing (SLST). However, as several molecular typing methods have been developed over the last decade, it has become a difficult task to compare the results from one article to another. Based on the scientific literature, the aim of this narrative review is to propose a standardized method to perform molecular typing of C. acnes, according to the degree of resolution needed (phylotypes, clonal complexes, or SLST types). We discuss the existing different typing methods from a critical point of view, emphasizing their advantages and drawbacks, and we identify the most frequently used methods. We propose a consensus algorithm according to the needed phylogeny resolution level. We first propose to use multiplex PCR for phylotype identification, MLST9 for clonal complex determination, and SLST for phylogeny investigation including numerous isolates. There is an obvious need to create a consensus about molecular typing methods for C. acnes. This standardization will facilitate the comparison of results between one article and another, and also the interpretation of clinical data. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  15. Improving the Service with the Servqual Method

    Science.gov (United States)

    Midor, Katarzyna; Kučera, Marian

    2018-03-01

    At a time when the economy is growing, competition in the market is strong and customers have increasingly high expectations regarding the quality of services and products. Under such conditions, organizations need to improve, and one area of improvement for an organization is researching the level of customer satisfaction. The article presents results of customer satisfaction surveys conducted with the Servqual method in a pharmaceutical service company. Use of this method allowed the pharmaceutical wholesaler to improve the services it provides and to identify the areas that need to be improved most urgently in order to raise the level of service provided.
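
    For context (not the paper's data), Servqual quantifies service quality as the gap between customer perceptions and expectations on questionnaire items grouped into dimensions; a minimal gap-score computation looks like this, with hypothetical 7-point Likert responses:

        # Sketch: Servqual gap score = perception - expectation, averaged
        # per dimension; a negative gap means expectations are not met.
        # All scores below are hypothetical 7-point Likert responses.
        expectations = {"reliability": [6.5, 6.8], "responsiveness": [6.2, 6.0]}
        perceptions = {"reliability": [5.9, 6.1], "responsiveness": [6.3, 5.7]}

        for dim, exp_scores in expectations.items():
            gaps = [p - e for p, e in zip(perceptions[dim], exp_scores)]
            print(dim, round(sum(gaps) / len(gaps), 2))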

  16. Sparsity-weighted outlier FLOODing (OFLOOD) method: Efficient rare event sampling method using sparsity of distribution.

    Science.gov (United States)

    Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru

    2016-03-30

    As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to the sparsity of the distribution. In this study, we define lower-rank (first-ranked), medium-rank (second-ranked), and highest-rank (third-ranked) outliers. For instance, the first-ranked outliers are located in regions of conformational space away from the clusters (highly sparse distribution), whereas the third-ranked outliers are near the clusters (moderately sparse distribution). To perform the conformational search efficiently, resampling from the outliers of a given rank is performed. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD strongly accelerated the exploration of conformational space by expanding its edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed in combination with umbrella sampling, providing rigorous free energy landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.

  17. Measuring environmental change in forest ecosystems by repeated soil sampling: a North American perspective

    Science.gov (United States)

    Lawrence, Gregory B.; Fernandez, Ivan J.; Richter, Daniel D.; Ross, Donald S.; Hazlett, Paul W.; Bailey, Scott W.; Ouimet, Rock; Warby, Richard A.F.; Johnson, Arthur H.; Lin, Henry; Kaste, James M.; Lapenis, Andrew G.; Sullivan, Timothy J.

    2013-01-01

    Environmental change is monitored in North America through repeated measurements of weather, stream and river flow, air and water quality, and most recently, soil properties. Some skepticism remains, however, about whether repeated soil sampling can effectively distinguish between temporal and spatial variability, and efforts to document soil change in forest ecosystems through repeated measurements are largely nascent and uncoordinated. In eastern North America, repeated soil sampling has begun to provide valuable information on environmental problems such as air pollution. This review synthesizes the current state of the science to further the development and use of soil resampling as an integral method for recording and understanding environmental change in forested settings. The origins of soil resampling reach back to the 19th century in England and Russia. The concepts and methodologies involved in forest soil resampling are reviewed and evaluated through a discussion of how temporal and spatial variability can be addressed with a variety of sampling approaches. Key resampling studies demonstrate the type of results that can be obtained through differing approaches. Ongoing, large-scale issues such as recovery from acidification, long-term N deposition, C sequestration, effects of climate change, impacts from invasive species, and the increasing intensification of soil management all warrant the use of soil resampling as an essential tool for environmental monitoring and assessment. Furthermore, with better awareness of the value of soil resampling, studies can be designed with a long-term perspective so that information can be efficiently obtained well into the future to address problems that have not yet surfaced.

  18. A low-fat Diet improves insulin sensitivity in patients with type 1 diabetes

    DEFF Research Database (Denmark)

    Rosenfalck, AM; Almdal, Thomas Peter; Viggers, Lone

    2006-01-01

    AIMS: To compare the effects on insulin sensitivity, body composition and glycaemic control of the recommended standard weight-maintaining diabetes diet and an isocaloric low-fat diabetes diet during two 3-month periods in patients with Type 1 diabetes. METHODS: Thirteen Type 1 patients were...... by the insulin clamp technique at baseline and after each of the diet intervention periods. RESULTS: On an isocaloric low-fat diet, Type 1 diabetic patients significantly reduced the proportion of fat in the total daily energy intake by 12.1% (or -3.6% of total energy) as compared with a conventional diabetes diet (P = 0.039). The daily protein and carbohydrate intake increased (+4.4% of total energy intake, P = 0.0049 and +2.5%, P = 0.34, respectively), while alcohol intake decreased (-3.2% of total energy intake, P = 0.02). There was a significant improvement in insulin sensitivity on the isocaloric, low-fat......

  19. Improved power performance assessment methods

    Energy Technology Data Exchange (ETDEWEB)

    Frandsen, S; Antoniou, I; Dahlberg, J A [and others]

    1999-03-01

    The uncertainty of presently-used methods for retrospective assessment of the productive capacity of wind farms is unacceptably large. The possibilities of improving the accuracy have been investigated and are reported. A method is presented that includes an extended power curve and site calibration. In addition, blockage effects with respect to reference wind speed measurements are analysed. It is found that significant accuracy improvements are possible by the introduction of more input variables such as turbulence and wind shear, in addition to mean wind speed and air density. Also, the testing of several or all machines in the wind farm - instead of only one or two - may provide a better estimate of the average performance. (au)

  20. Improved numerical algorithm and experimental validation of a system thermal-hydraulic/CFD coupling method for multi-scale transient simulations of pool-type reactors

    International Nuclear Information System (INIS)

    Toti, A.; Vierendeels, J.; Belloni, F.

    2017-01-01

    Highlights: • A system thermal-hydraulic/CFD coupling methodology is proposed for high-fidelity transient flow analyses. • The method is based on domain decomposition and an implicit numerical scheme. • A novel interface Quasi-Newton algorithm is implemented to improve stability and convergence rate. • Preliminary validation analyses on the TALL-3D experiment. - Abstract: The paper describes the development and validation of a coupling methodology between the best-estimate system thermal-hydraulic code RELAP5-3D and the CFD code FLUENT, conceived for high-fidelity plant-scale safety analyses of pool-type reactors. The computational tool is developed to assess the impact of three-dimensional phenomena occurring in accidental transients such as loss of flow (LOF) in the research reactor MYRRHA, currently in the design phase at the Belgian Nuclear Research Centre, SCK•CEN. A partitioned, implicit domain decomposition coupling algorithm is implemented, in which the coupled domains exchange thermal-hydraulic variables at the coupling boundary interfaces. Numerical stability and interface convergence rates are improved by a novel interface Quasi-Newton algorithm, which is compared in this paper with previously tested numerical schemes. The developed computational method has been assessed for validation purposes against an experiment performed at the test facility TALL-3D, operated by the Royal Institute of Technology (KTH) in Sweden. This paper details the results of the simulation of a loss of forced convection test, showing the capability of the developed methodology to predict transients influenced by local three-dimensional phenomena.

  1. An improved EMD method for modal identification and a combined static-dynamic method for damage detection

    Science.gov (United States)

    Yang, Jinping; Li, Peizhen; Yang, Youfa; Xu, Dian

    2018-04-01

    Empirical mode decomposition (EMD) is a highly adaptable signal processing method. However, the EMD approach has certain drawbacks, including distortions from end effects and mode mixing. In the present study, these two problems are addressed using an end extension method based on the support vector regression machine (SVRM) and a modal decomposition method based on the characteristics of the Hilbert transform. The algorithm includes two steps: using the SVRM, the time series data are extended at both endpoints to reduce the end effects; then, a modified EMD method using the characteristics of the Hilbert transform is performed on the resulting signal to reduce mode mixing. A new combined static-dynamic method for identifying structural damage is also presented. This method combines the static and dynamic information in an equilibrium equation that can be solved using the Moore-Penrose generalized matrix inverse. The combination method uses the differences in displacements of the structure with and without damage and variations in the modal force vector. Tests on a four-story, steel-frame structure were conducted to obtain static and dynamic responses of the structure. The modal parameters are identified using data from the dynamic tests and the improved EMD method, which is shown to be more accurate and effective than the traditional EMD method. Through tests with a shear-type test frame, the higher performance of the proposed static-dynamic damage detection approach, which can detect both single and multiple damage locations and the degree of the damage, is demonstrated. For structures with multiple damage locations, the combined approach is more effective than either the static or the dynamic method. The proposed EMD method and static-dynamic damage detection method offer improved modal identification and damage detection, respectively, in structures.
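
    A rough sketch of the first step, extending the series at an endpoint with a support vector regression fit before applying EMD, might look like the following (window length, kernel, and hyperparameters are assumptions; the modified EMD itself is not reproduced):

        import numpy as np
        from sklearn.svm import SVR

        # Sketch: mitigate EMD end effects by fitting an SVR to the last
        # `window` samples and extrapolating `n_ext` points past the end.
        # Kernel and hyperparameters are illustrative assumptions.
        def extend_end(signal, window=50, n_ext=10):
            t = np.arange(window).reshape(-1, 1)
            model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(t, signal[-window:])
            t_future = np.arange(window, window + n_ext).reshape(-1, 1)
            return np.concatenate([signal, model.predict(t_future)])

        x = np.sin(0.2 * np.arange(200)) + 0.05 * np.random.randn(200)
        x_ext = extend_end(x)        # EMD would then be applied to x_ext
        print(len(x), len(x_ext))    # 200 -> 210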

  2. Comparison of a newly developed binary typing with ribotyping and multilocus sequence typing methods for Clostridium difficile.

    Science.gov (United States)

    Li, Zhirong; Liu, Xiaolei; Zhao, Jianhong; Xu, Kaiyue; Tian, Tiantian; Yang, Jing; Qiang, Cuixin; Shi, Dongyan; Wei, Honglian; Sun, Suju; Cui, Qingqing; Li, Ruxin; Niu, Yanan; Huang, Bixing

    2018-04-01

    Clostridium difficile is the causative pathogen of antibiotic-related nosocomial diarrhea. For epidemiological study and identification of virulent clones, a new binary typing method was developed for C. difficile in this study. The usefulness of this newly developed optimized 10-loci binary typing method was compared with two widely used methods, ribotyping and multilocus sequence typing (MLST), in 189 C. difficile samples. The binary typing, ribotyping and MLST typed the samples into 53 binary types (BTs), 26 ribotypes (RTs), and 33 MLST sequence types (STs), respectively. The typing ability of the binary method, expressed as the Simpson Index (SI), was better than that of either ribotyping or MLST (0.937, 0.892 and 0.859, respectively). The ease of testing, portability and cost-effectiveness of the new binary typing make it a useful alternative for outbreak investigations within healthcare facilities and for epidemiological research. Copyright © 2018 Elsevier B.V. All rights reserved.
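
    The Simpson Index used here to compare typing ability has a simple closed form, SID = 1 - Σ n_j(n_j - 1) / (N(N - 1)), where n_j is the number of isolates of type j and N the total; a direct computation (with toy labels, not the study's isolates) is:

        from collections import Counter

        # Simpson's index of diversity for a typing result:
        # SID = 1 - sum(n_j * (n_j - 1)) / (N * (N - 1)).
        def simpson_diversity(assignments):
            n = len(assignments)
            counts = Counter(assignments).values()
            return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

        # Toy data only, not the study's isolates:
        print(simpson_diversity(["BT1", "BT1", "BT2", "BT3", "BT3", "BT3"]))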

  3. Improved radionuclide bone imaging agent injection needle withdrawal method can improve image quality

    International Nuclear Information System (INIS)

    Qin Yongmei; Wang Laihao; Zhao Lihua; Guo Xiaogang; Kong Qingfeng

    2009-01-01

    Objective: To investigate whether an improved needle withdrawal method for radionuclide bone imaging agent injection improves whole body bone scan image quality. Methods: In the routine group (117 cases), the bone imaging agent was injected directly via an elbow vein; the needle was pulled out rapidly and the puncture point was pressed only briefly with a cotton swab. In the improved group (117 cases), two cotton swabs were used to press both the skin entry point and the vessel entry point while the needle was withdrawn, and pressure was maintained for 5 min or more. Whole body planar bone SPECT imaging was performed 2 hours after injection. Results: The uptake rate of the imaging agent at the injection site was 16.24% in the conventional group and 2.56% in the improved group. Conclusion: With the modified needle withdrawal method, injection-site uptake of the imaging agent was significantly decreased, which can improve whole body bone image quality. (authors)

  4. An improved method for calculating force distributions in moment-stiff timber connections

    DEFF Research Database (Denmark)

    Ormarsson, Sigurdur; Blond, Mette

    2012-01-01

    An improved method for calculating force distributions in moment-stiff metal dowel-type timber connections is presented, a method based on the use of three-dimensional finite element simulations of timber connections subjected to moment action. The study that was carried out aimed at determining how...... the slip modulus varies with the angle between the direction of the dowel forces and the fibres in question, as well as how the orthotropic stiffness behaviour of the wood material affects the direction and the size of the forces. It was assumed that the force distribution generated by the moment action

  5. Comparing and improving reconstruction methods for proxies based on compositional data

    Science.gov (United States)

    Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.

    2017-12-01

    Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods. Existing methods tend to relate the compositional data and the reconstruction target in very simple ways. Additionally, the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500 year long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their means and uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data.

  6. Improved quasi-static nodal Green's function method

    International Nuclear Information System (INIS)

    Li Junli; Jing Xingqing; Hu Dapu

    1997-01-01

    The Improved Quasi-Static Nodal Green's Function Method (IQS/NGFM) is presented as a new kinetic method. To solve three-dimensional transient problems, the Improved Quasi-Static method is adopted for the temporal problem; it increases the time step as much as possible so as to decrease the number of spatial calculations. The time step of the IQS/NGFM can be made 5∼10 times longer than that of the Fully Implicit Difference Method. For the spatial calculation, the NGFM is used to obtain the distribution of the shape function, and its spatial mesh can be nearly 20 times larger than that of the Finite Difference Method. The IQS/NGFM is therefore considered an efficient kinetic method

  7. An improved front tracking method for the Euler equations

    NARCIS (Netherlands)

    Witteveen, J.A.S.; Koren, B.; Bakker, P.G.

    2007-01-01

    An improved front tracking method for hyperbolic conservation laws is presented. The improved method accurately resolves discontinuities as well as continuous phenomena. The method is based on an improved front interaction model for a physically more accurate modeling of the Euler equations, as

  8. COMPARISON OF CRM PROGRAMS BASED ON IMPROVING CUSTOMER PROFITABILITY: USING THE AHP METHOD

    Directory of Open Access Journals (Sweden)

    Dong-Fei Xue

    2014-04-01

    Full Text Available In this paper, we generalize the cause-related marketing (CRM) methods used by most current enterprises. We then probe into the differences in the effects of different types of CRM programs aimed at improving customer profitability, using the analytic hierarchy process (AHP). Consequently, we obtain the ranking results and provide a reference for enterprises performing CRM programs.

  9. Computation of saddle-type slow manifolds using iterative methods

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall

    2015-01-01

    This paper presents an alternative approach for the computation of trajectory segments on slow manifolds of saddle type. This approach is based on iterative methods rather than collocation-type methods. Compared to collocation methods, which require mesh refinements to ensure uniform convergence...... with respect to ..., appropriate estimates are directly attainable using the method of this paper. The method is applied to several examples, including a model for a pair of neurons coupled by reciprocal inhibition with two slow and two fast variables, and the computation of homoclinic connections in the Fitz......

  10. Improvement of high-frequency characteristics of Z-type hexaferrite by dysprosium doping

    International Nuclear Information System (INIS)

    Mu Chunhong; Liu Yingli; Song Yuanqiang; Wang Liguo; Zhang Huaiwu

    2011-01-01

    Z-type hexaferrite has great potential for applications as an anti-EMI material for magnetic devices in the GHz region. In this work, Dy-doped Z-type hexaferrites with nominal stoichiometry Ba3Co2DyxFe24-xO41 (x = 0.0, 0.05, 0.5, 1.0) were prepared by an improved solid-state reaction method. The effects of rare earth oxide (Dy2O3) addition on the phase composition, microstructure and electromagnetic properties of the ceramics were investigated. Structure and micromorphology characterizations indicate that a certain content of Dy doping causes the emergence of the second phase Dy3Fe5O12 at the grain boundaries of the majority Z-type hexaferrite phase, the direct result of which is grain refinement during the subsequent sintering process. Permeability spectra measurements show that the initial permeability reaches its maximum of 17 at 300 MHz with x = 0.5, while the cutoff frequency remains above 800 MHz. The apparent specific anisotropy field H_K of the Dy-doped Z-type hexaferrites decreases with increasing x. The relationships among phase composition, grain size, permeability spectra, and anisotropy are theoretically investigated, and according to this analysis, the effects of Dy doping on the magnetic properties can be well explained and understood.

  11. Statistical error estimation of the Feynman-α method using the bootstrap method

    International Nuclear Information System (INIS)

    Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho

    2016-01-01

    The applicability of the bootstrap method is investigated to estimate the statistical error of the Feynman-α method, which is one of the subcritical measurement techniques based on reactor noise analysis. In the Feynman-α method, the statistical error can be simply estimated from multiple measurements of reactor noise; however, this requires additional measurement time for the repeated measurements. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error using only a single measurement of reactor noise. In order to validate our proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in the nuclear fuels, at the Kyoto University Criticality Assembly. Through the actual measurement, it is confirmed that the bootstrap method is applicable for approximately estimating the statistical error of measurement results obtained by the Feynman-α method. (author)
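
    The essence of the proposed step, resampling a single noise measurement to obtain a standard deviation without repeating the experiment, can be sketched generically as follows; this is not the authors' code, and the Feynman Y statistic is reduced here to a variance-to-mean ratio of gated counts on placeholder data.

        import random, statistics

        # Generic bootstrap sketch: estimate the statistical error of the
        # Feynman Y statistic (variance-to-mean ratio minus 1) from a single
        # sequence of gated counts by resampling gates with replacement.
        def feynman_y(counts):
            return statistics.pvariance(counts) / statistics.mean(counts) - 1.0

        def bootstrap_std(counts, n_boot=1000, seed=0):
            rng = random.Random(seed)
            reps = [feynman_y([rng.choice(counts) for _ in counts])
                    for _ in range(n_boot)]
            return statistics.stdev(reps)

        counts = [random.randint(5, 15) for _ in range(500)]  # placeholder
        print(feynman_y(counts), "+/-", bootstrap_std(counts))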

  12. Statistical and numerical methods to improve the transient divided bar method

    DEFF Research Database (Denmark)

    Bording, Thue Sylvester; Nielsen, S.B.; Balling, N.

    The divided bar method is commonly used to measure the thermal conductivity of rock samples in the laboratory. We present improvements to this method that allow for simultaneous measurement of both thermal conductivity and thermal diffusivity. The divided bar setup is run in a transient mode

  13. Efficiency Improvement of HIT Solar Cells on p-Type Si Wafers.

    Science.gov (United States)

    Wei, Chun-You; Lin, Chu-Hsuan; Hsiao, Hao-Tse; Yang, Po-Chuan; Wang, Chih-Ming; Pan, Yen-Chih

    2013-11-22

    Single crystal silicon solar cells still predominate in the market due to the abundance of silicon on earth and their acceptable efficiency. Different solar-cell structures of single crystalline Si have been investigated to boost efficiency; the heterojunction with intrinsic thin layer (HIT) structure is currently the leading technology. The record efficiency values of state-of-the-art HIT solar cells have always been based on n-type single-crystalline Si wafers. Improving the efficiency of cells based on p-type single-crystalline Si wafers could provide broader options for the development of HIT solar cells. In this study, we varied the thickness of the intrinsic hydrogenated amorphous Si layer to improve the efficiency of HIT solar cells on p-type Si wafers.

  14. An improved method for the analysis of alpha spectra

    International Nuclear Information System (INIS)

    Equillor, Hugo E.

    2004-01-01

    In this work we describe a methodology, developed over the last years, for the analysis of alpha emitter spectra obtained with ion-implanted detectors, which tends to solve some of the problems that this type of spectra presents. The methodology is improved with respect to that described in a previous publication. The method is based on the application of a mathematical function that models the tail of an alpha peak, in order to evaluate the part of a peak that is not seen in cases of partial superposition with another peak. A calculation program that works semiautomatically, with the possibility of interactive intervention by the analyst, has been developed simultaneously and is described in detail. (author)

  15. Method of determining the composition of fuels for FBR type reactors

    International Nuclear Information System (INIS)

    Tsutsumi, Kiyoshi.

    1981-01-01

    Purpose: To improve the core safety of FBR type reactors by determining the composition of fuels composed of a plutonium-uranium oxide mixture, using a relation between a specific plutonium species and the plutonium enrichment degree. Method: A relation is determined between the ratio of a specific plutonium species constituting the plutonium oxide, for example the 239Pu ratio, and the plutonium enrichment degree required to keep the assembly power at a constant level. The 239Pu ratio in plutonium having a given isotopic composition is also determined; its accuracy can be improved by a correction using the density coefficient. The plutonium enrichment degree is then determined from the relation above, based on the 239Pu ratio thus obtained, and the composition of the mixed plutonium-uranium oxide fuel is determined by utilizing the resulting plutonium enrichment degree. (Moriyama, K.)

  16. An improved method of inverse kinematics calculation for a six-link manipulator

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1987-07-01

    As one method of solving the inverse problem for a six-link manipulator, an improvement was made to a previously proposed calculation algorithm based on the solution of an algebraic equation of the 24th order. In this paper, the same type of polynomial was derived in the form of an equation of the 16th order, i.e., the order reduced by 8 compared to the previous algorithm. The accuracy of the solutions was found to be much refined. (author)

  17. Strengths and limitations of period estimation methods for circadian data.

    Directory of Open Access Journals (Sweden)

    Tomasz Zielinski

    Full Text Available A key step in the analysis of circadian data is to make an accurate estimate of the underlying period. There are many different techniques and algorithms for determining period, all with different assumptions and with differing levels of complexity. Choosing which algorithm, which implementation and which measures of accuracy to use can offer many pitfalls, especially for the non-expert. We have developed the BioDare system, an online service allowing data-sharing (including public dissemination), data-processing and analysis. Circadian experiments are the main focus of BioDare, hence performing period analysis is a major feature of the system. Six methods have been incorporated into BioDare: Enright and Lomb-Scargle periodograms, FFT-NLLS, mFourfit, MESA and Spectrum Resampling. Here we review those six techniques, explain the principles behind each algorithm and evaluate their performance. In order to quantify the methods' accuracy, we examine the algorithms against artificial mathematical test signals and model-generated mRNA data. Our re-implementation of each method in Java allows meaningful comparisons of the computational complexity and computing time associated with each algorithm. Finally, we provide guidelines on which algorithms are most appropriate for which data types, and recommendations on experimental design to extract optimal data for analysis.
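
    Of the six methods, the Lomb-Scargle periodogram is the most readily available in standard libraries; a minimal period estimate for an unevenly sampled rhythm (synthetic 24 h signal, illustrative parameters) could look like this:

        import numpy as np
        from scipy.signal import lombscargle

        # Sketch: dominant-period estimate for unevenly sampled circadian
        # data via the Lomb-Scargle periodogram. Synthetic signal only.
        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0, 120, 200))      # sampling times (hours)
        y = np.sin(2 * np.pi * t / 24.0) + 0.3 * rng.standard_normal(t.size)

        periods = np.linspace(18, 32, 500)         # candidate periods (hours)
        power = lombscargle(t, y - y.mean(), 2 * np.pi / periods)
        print("estimated period (h):", periods[np.argmax(power)])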

  18. Improved methods for high resolution electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.

    1987-04-01

    Existing methods of making support films for high resolution transmission electron microscopy are investigated and novel methods are developed. Existing methods of fabricating fenestrated, metal reinforced specimen supports (microgrids) are evaluated for their potential to reduce beam induced movement of monolamellar crystals of C44H90 paraffin supported on thin carbon films. Improved methods of producing hydrophobic carbon films by vacuum evaporation, and improved methods of depositing well ordered monolamellar paraffin crystals on carbon films are developed. A novel technique for vacuum evaporation of metals is described which is used to reinforce microgrids. A technique is also developed to bond thin carbon films to microgrids with a polymer bonding agent. Unique biochemical methods are described to accomplish site specific covalent modification of membrane proteins. Protocols are given which covalently convert the carboxy terminus of papain cleaved bacteriorhodopsin to a free thiol. 53 refs., 19 figs., 1 tab.

  19. A molecular method for typing Herpes simplex virus isolates as an alternative to immunofluorescence methods

    Directory of Open Access Journals (Sweden)

    Abraham A

    2009-01-01

    Full Text Available Background: Typing of Herpes simplex virus (HSV) isolates is required to identify the virus isolated in culture. The methods available for this include antigen detection by immunofluorescence (IF) assays and polymerase chain reaction (PCR). This study was undertaken to standardize a molecular method for typing of HSV and compare it with a commercial IF reagent for typing. Objectives: To compare a molecular method for typing HSV isolates with a monoclonal antibody (MAb) based IF test. Study design: This cross-sectional study utilized four reference strains and 42 HSV isolates obtained from patients between September 1998 and September 2004. These were subjected to testing using an MAb-based IF test and a PCR that detects the polymerase (pol) gene of HSV isolates. Results: The observed agreement of the MAb IF assay with the pol PCR was 95.7%. Fifty-four point eight percent (23/42) of isolates tested by IF typing were found to be HSV-1, 40.5% (17/42) were HSV-2, and two (4.8%) were untypable using the MAb IF assay. The two untypable isolates were found to be HSV-2 using the pol PCR. In addition, the cost per PCR test for typing is estimated to be around Rs 1,300 (USD 30), whereas the cost per MAb IF test is about Rs 1,500 (USD 35) including all overheads (reagents, instruments, personnel time, and consumables). Conclusion: The pol PCR is a cheaper and more easily reproducible method for typing HSV isolates as compared to the IF test. It could replace the IF-based method for routine typing of HSV isolates, as availability of PCR machines (thermal cyclers) is now more widespread than that of fluorescence microscopes in a country like India.

  20. Passive Methods as a Solution for Improving Indoor Environments

    CERN Document Server

    Orosa, José A

    2012-01-01

    There are many aspects to consider when evaluating or improving an indoor environment; thermal comfort, energy saving, preservation of materials, hygiene and health are all key aspects which can be improved by passive methods of environmental control. Passive Methods as a Solution for Improving Indoor Environments endeavours to fill the lack of analysis in this area by using over ten years of research to illustrate the effects of methods such as thermal inertia and permeable coverings; for example, the use of permeable coverings is a well known passive method, but its effects and ways to improve indoor environments have been rarely analyzed.   Passive Methods as a Solution for Improving Indoor Environments  includes both software simulations and laboratory and field studies. Through these, the main parameters that characterize the behavior of internal coverings are defined. Furthermore, a new procedure is explained in depth which can be used to identify the real expected effects of permeable coverings such ...

  1. Fine typing of methicillin-resistant Staphylococcus aureus isolates using direct repeat unit and staphylococcal interspersed repeat unit typing methods.

    Science.gov (United States)

    Ho, Cheng-Mao; Ho, Mao-Wang; Li, Chi-Yuan; Lu, Jang-Jih

    2015-08-01

    Methicillin-resistant Staphylococcus aureus (MRSA) typing is an important epidemiologic tool for monitoring trends and preventing outbreaks. However, the efficiency of various MRSA typing methods for each SCCmec MRSA isolate is rarely evaluated. A total of 157 MRSA isolates from four different regions in Taiwan were typed with five different molecular methods, including SCCmec typing, multilocus sequence typing (MLST), spa typing, mec-associated direct repeat unit (dru) copy number determination, and staphylococcal interspersed repeat unit (SIRU) profiling. There were four SCCmec types, eight MLST types, 15 spa types, 11 dru types, and 31 SIRU profiles. The most common type determined by each molecular typing method was SCCmec III (115 isolates, 73.2%), ST239 (99 isolates, 63.1%), t037 (107 isolates, 68.2%), 14 dru copies (76 isolates, 48.4%), and SIRU profile 3013722 (102 isolates, 65%), respectively. When using the combination of MLST, spa typing, and dru copy number, ST5-t002-4 (n = 8), ST239-t037-14 (n = 68), ST59-t437-9 (n = 9), and ST59-t437-11 (n = 6) were found to be the most common types of SCCmec types II (n = 9), III (n = 115), IV (n = 21), and VT (n = 11) isolates, respectively. SCCmec type III isolates were further classified into 11 dru types. Of the 21 SCCmec type IV isolates, 14 SIRU profiles were found. Seven SIRU patterns were observed in the 11 SCCmec type VT isolates. Different typing methods showed a similar Hunter-Gaston discrimination index among the 157 MRSA isolates. However, dru and SIRU typing methods had a better discriminatory power for SCCmec type III and SCCmec types IV and VT isolates, respectively, suggesting that dru and SIRU can be used to further type these isolates. Copyright © 2013. Published by Elsevier B.V.

  2. Simulation of optical configurations and signal processing methods in Anger-type neutron-position scintillation detector

    International Nuclear Information System (INIS)

    Roche, C.T.; Strauss, M.G.; Brenner, R.

    1984-01-01

    The spatial linearity and resolution of Anger-type neutron-position scintillation detectors are studied using a semi-empirical model. Detector optics with either an air gap or optical grease between the scintillator and the dispersive light guide are considered. Three signal processing methods which truncate signals from PMTs distant from the scintillation are compared with the linear resistive weighting method. Air gap optics yields a 15% improvement in spatial resolution and a 50% reduction in differential and integral nonlinearity relative to grease-coupled optics, using linear processing. Using signal truncation instead of linear processing improves the resolution 15-20% for the air gap and 20-30% for the grease coupling case. Thus, the initial discrepancy in resolution between the two optics nearly vanishes; however, the linearity of the grease-coupled system is still significantly poorer.

  3. Intelligent Photovoltaic Systems by Combining the Improved Perturbation Method of Observation and Sun Location Tracking

    Science.gov (United States)

    Wang, Yajie; Shi, Yunbo; Yu, Xiaoyu; Liu, Yongjie

    2016-01-01

    Currently, tracking in photovoltaic (PV) systems suffers from some problems such as high energy consumption, poor anti-interference performance, and large tracking errors. This paper presents a solar PV tracking system on the basis of an improved perturbation and observation method, which maximizes photoelectric conversion efficiency. According to the projection principle, we design a sensor module with a light-intensity-detection module for environmental light-intensity measurement. The effect of environmental factors on the system operation is reduced, and intelligent identification of the weather is realized. This system adopts the discrete-type tracking method to reduce power consumption. A mechanical structure with a level-pitch double-degree-of-freedom is designed, and attitude correction is performed by closed-loop control. A worm-and-gear mechanism is added, and the reliability, stability, and precision of the system are improved. Finally, the perturbation and observation method designed and improved by this study was tested by simulated experiments. The experiments verified that the photoelectric sensor resolution can reach 0.344°, the tracking error is less than 2.5°, the largest improvement in the charge efficiency can reach 44.5%, and the system steadily and reliably works. PMID:27327657
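
    The basic perturbation and observation rule that the paper improves on is a simple decision: keep perturbing the operating voltage in the same direction while the measured power rises, and reverse otherwise. A generic sketch follows (not the improved variant, which adds weather identification and discrete tracking; the PV curve is a toy placeholder):

        # Basic perturb-and-observe (P&O) maximum-power-point tracking step.
        def po_step(v, p, v_prev, p_prev, dv=0.1):
            direction = 1.0 if v >= v_prev else -1.0
            if (p - p_prev) * (v - v_prev) < 0:
                direction = -direction   # power fell: reverse perturbation
            return v + dv * direction

        def pv_power(v):                 # toy PV curve, peak near v = 17
            return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

        v_prev, v = 10.0, 10.1
        for _ in range(100):
            v_prev, v = v, po_step(v, pv_power(v), v_prev, pv_power(v_prev))
        print(round(v, 2))               # settles into oscillation near 17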

  4. Intelligent Photovoltaic Systems by Combining the Improved Perturbation Method of Observation and Sun Location Tracking.

    Directory of Open Access Journals (Sweden)

    Yajie Wang

    Full Text Available Currently, tracking in photovoltaic (PV) systems suffers from some problems such as high energy consumption, poor anti-interference performance, and large tracking errors. This paper presents a solar PV tracking system on the basis of an improved perturbation and observation method, which maximizes photoelectric conversion efficiency. According to the projection principle, we design a sensor module with a light-intensity-detection module for environmental light-intensity measurement. The effect of environmental factors on the system operation is reduced, and intelligent identification of the weather is realized. This system adopts the discrete-type tracking method to reduce power consumption. A mechanical structure with a level-pitch double-degree-of-freedom is designed, and attitude correction is performed by closed-loop control. A worm-and-gear mechanism is added, and the reliability, stability, and precision of the system are improved. Finally, the perturbation and observation method designed and improved by this study was tested by simulated experiments. The experiments verified that the photoelectric sensor resolution can reach 0.344°, the tracking error is less than 2.5°, the largest improvement in the charge efficiency can reach 44.5%, and the system steadily and reliably works.

  5. Discriminative power of Campylobacter phenotypic and genotypic typing methods.

    Science.gov (United States)

    Duarte, Alexandra; Seliwiorstow, Tomasz; Miller, William G; De Zutter, Lieven; Uyttendaele, Mieke; Dierick, Katelijne; Botteldoorn, Nadine

    2016-06-01

    The aim of this study was to compare different typing methods, individually and combined, for use in the monitoring of Campylobacter in food. Campylobacter jejuni (n=94) and Campylobacter coli (n=52) isolated from different broiler meat carcasses were characterized using multilocus sequence typing (MLST), flagellin gene A restriction fragment length polymorphism typing (flaA-RFLP), antimicrobial resistance profiling (AMRp), the presence/absence of 5 putative virulence genes, and, exclusively for C. jejuni, the determination of lipooligosaccharide (LOS) class. Discriminatory power was calculated by Simpson's index of diversity (SID), and congruence was measured by the adjusted Rand index and adjusted Wallace coefficient. MLST was individually the most discriminative typing method for both C. jejuni (SID=0.981) and C. coli (SID=0.957). The most discriminative combination, with a SID of 0.992 for both C. jejuni and C. coli, was obtained by combining MLST with flaA-RFLP. The combination of MLST with flaA-RFLP is an easy and feasible typing approach for short-term monitoring of Campylobacter in broiler meat carcasses. Copyright © 2016 Elsevier B.V. All rights reserved.
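
    The congruence measures mentioned here are available off the shelf; for instance, the adjusted Rand index between two typing partitions of the same isolates (toy labels below, not the study's data) can be computed with scikit-learn:

        from sklearn.metrics import adjusted_rand_score

        # Congruence between two typing methods applied to the same
        # isolates: adjusted Rand index of 1.0 = identical partitions,
        # ~0.0 = chance-level agreement. Labels below are toy examples.
        mlst_types = ["ST21", "ST21", "ST45", "ST45", "ST50", "ST50"]
        flaa_types = ["f1", "f1", "f2", "f3", "f4", "f4"]
        print(adjusted_rand_score(mlst_types, flaa_types))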

  6. Motivational Interview Method Based on Transtheoretical Model of Health Behaviour Change in Type 2 Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Alime Selcuk Tosun

    2016-03-01

    Full Text Available Precautions taken in the early stages of diabetes mellitus are more beneficial in terms of quality of life. Different studies have shown that healthy lifestyle changes can reduce the risk of Type 2 diabetes mellitus by up to 58% or delay its emergence. In studies conducted with individuals with Type 2 diabetes mellitus, the transtheoretical model and the motivational interview method are used in particular to increase individuals' adaptation to disease management and to change behaviours related to diabetes, in order to decrease or prevent its harmful effects. Interventions using the motivational interview method based on the transtheoretical model have demonstrated that a general improvement in glycaemic control and in physical activity level can be achieved, and that significant progress is made through the stages of change. The motivational interview method based on the transtheoretical model is an easy and efficient counselling method for achieving behavioural change. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2016; 8(1): 32-41]

  8. Performance of Firth-and logF-type penalized methods in risk prediction for small or sparse binary data.

    Science.gov (United States)

    Rahman, M Shafiqur; Sultana, Mahbuba

    2017-02-23

    When developing risk models for binary data with small or sparse data sets, standard maximum likelihood estimation (MLE) based logistic regression faces several problems, including biased or infinite estimates of the regression coefficients and frequent convergence failure of the likelihood due to separation. Separation occurs commonly even when the sample size is large, provided there is a sufficient number of strong predictors. In the presence of separation, even if one can fit the model, it is overfitted and has poor predictive performance. Firth- and logF-type penalized regression methods are popular alternatives to MLE, particularly for solving the separation problem. Despite these attractive advantages, their use in risk prediction is very limited. This paper evaluated these methods for risk prediction in comparison with MLE and other commonly used penalized methods such as ridge. The predictive performance of the methods was evaluated by assessing calibration, discrimination, and overall predictive performance in an extensive simulation study. Further, an illustration of the methods is provided using a real data example with low prevalence of the outcome. The MLE showed poor performance in risk prediction in small or sparse data sets. All penalized methods offered some improvements in calibration, discrimination, and overall predictive performance. Although the Firth- and logF-type methods showed almost equal amounts of improvement, Firth-type penalization produces some bias in the average predicted probability, and the amount of bias is even larger than that produced by MLE. Of the logF(1,1) and logF(2,2) penalizations, logF(2,2) gives a slight bias in the estimate of the regression coefficient of a binary predictor, and logF(1,1) performed better in all respects. Similarly, ridge performed well in discrimination and overall predictive performance, but it often produces an underfitted model and has a high rate of convergence failure (even the rate is higher than that
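
    Firth-type penalization maximizes the likelihood augmented by the Jeffreys prior, which is commonly implemented by adding a hat-matrix leverage correction to the Newton-Raphson score. A minimal numpy sketch of that standard formulation (our illustration, not the authors' code):

        import numpy as np

        def firth_logistic(X, y, n_iter=100, tol=1e-8):
            """Firth-penalized logistic regression via a modified Newton-Raphson.
            X: (n, p) design matrix including an intercept column; y: 0/1 outcomes."""
            beta = np.zeros(X.shape[1])
            for _ in range(n_iter):
                p = 1.0 / (1.0 + np.exp(-X @ beta))
                W = p * (1.0 - p)                      # diagonal of the weight matrix
                XtWX = X.T @ (W[:, None] * X)
                # leverages h_i of the weighted hat matrix W^(1/2) X (X'WX)^-1 X' W^(1/2)
                h = W * np.einsum("ij,jk,ik->i", X, np.linalg.inv(XtWX), X)
                # Firth-adjusted score: the residual receives an h_i * (1/2 - p_i) term
                step = np.linalg.solve(XtWX, X.T @ (y - p + h * (0.5 - p)))
                beta += step
                if np.max(np.abs(step)) < tol:
                    break
            return beta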

  9. Robust and efficient method for matching features in omnidirectional images

    Science.gov (United States)

    Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan

    2018-04-01

    Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints to a unit sphere and applies the fixed test set of the BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then backprojected onto the original distorted images to construct a distortion-invariant descriptor. TPBRIEF enables keypoint detection and feature description directly on the original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match numbers in the grid pair where the two matched points are located. Experiments show that TPBRIEF greatly improves the feature matching accuracy and that ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.
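
    The first step of a TPBRIEF-style descriptor, projecting keypoints of an omnidirectional image onto the unit sphere, can be sketched as follows for the common equirectangular format (an illustrative helper, not the authors' implementation):

        import numpy as np

        def equirect_to_sphere(u, v, width, height):
            """Map pixel (u, v) of an equirectangular image to a unit-sphere point."""
            lon = (u / width - 0.5) * 2.0 * np.pi      # longitude in [-pi, pi]
            lat = (0.5 - v / height) * np.pi           # latitude in [-pi/2, pi/2]
            return np.array([np.cos(lat) * np.cos(lon),
                             np.cos(lat) * np.sin(lon),
                             np.sin(lat)])

    The BRIEF intensity tests are then laid out on the plane tangent to the sphere at this point and backprojected into the image, so the sampling pattern deforms with the distortion instead of being applied on the distorted pixel grid directly.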

  10. Implicit high-order discontinuous Galerkin method with HWENO type limiters for steady viscous flow simulations

    Science.gov (United States)

    Jiang, Zhen-Hua; Yan, Chao; Yu, Jian

    2013-08-01

    Two types of implicit algorithms have been improved for the high-order discontinuous Galerkin (DG) method to solve compressible Navier-Stokes (NS) equations on triangular grids. A block lower-upper symmetric Gauss-Seidel (BLU-SGS) approach is implemented as a nonlinear iterative scheme, and a modified LU-SGS (LLU-SGS) approach is suggested to reduce the memory requirements while retaining the good convergence performance of the original LU-SGS approach. Both implicit schemes have the significant advantage that only the diagonal block matrix is stored. The resulting implicit high-order DG methods are applied, in combination with Hermite weighted essentially non-oscillatory (HWENO) limiters, to solve viscous flow problems. Numerical results demonstrate that the present implicit methods are able to achieve significant efficiency improvements over their explicit counterparts, and that for viscous flows with shocks the HWENO limiters can be used to achieve the desired essentially non-oscillatory shock transition and the designed high-order accuracy simultaneously.

  11. Explanation of Two Anomalous Results in Statistical Mediation Analysis

    Science.gov (United States)

    Fritz, Matthew S.; Taylor, Aaron B.; MacKinnon, David P.

    2012-01-01

    Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special…

  12. AIR Tools II: algebraic iterative reconstruction methods, improved implementation

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jørgensen, Jakob Sauer

    2017-01-01

    with algebraic iterative methods and their convergence properties. The present software is a much expanded and improved version of the package AIR Tools from 2012, based on a new modular design. In addition to improved performance and memory use, we provide more flexible iterative methods, a column-action method...

  13. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    Full Text Available The work highlights the most important principles of software reliability management (SRM). The SRM concept forms a basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. The method applies a new metric to evaluate requirements complexity and a double-sorting technique to evaluate the priority and complexity of each particular requirement. The method improves requirements correctness by enabling the identification of a higher number of defects with restricted resources. Practical application of the proposed method in the course of requirements review yielded a tangible technical and economic effect.

  14. Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters

    Science.gov (United States)

    Kim, T.; Kim, Y. S.

    2017-12-01

    The frequency analysis of hydrometeorological data is one of the most important factors in responding to natural disaster damage and in setting design standards for disaster prevention facilities. Frequency analysis of hydrometeorological data typically assumes that the observations are statistically stationary, and a parametric method based on the parameters of a fitted probability distribution is applied. A parametric method requires a sufficient collection of reliable data; in Korea, however, snowfall records must be compensated for insufficient data, because the number of days with snowfall observations and the mean maximum daily snowfall depth are decreasing due to climate change. In this study, we conducted frequency analysis for snowfall using the Bootstrap method and the SIR algorithm, resampling methods that can overcome the problem of insufficient data. For the 58 meteorological stations distributed evenly over Korea, probable snowfall depths were estimated by non-parametric frequency analysis using the maximum daily snowfall depth data. The results show that the probable daily snowfall depth obtained by frequency analysis decreases at most stations, and the rates of change at most stations were found to be consistent between the parametric and non-parametric frequency analyses. This study shows that resampling methods can perform frequency analysis of snowfall depth from insufficient observed samples, and that the approach can be applied to the interpretation of other natural disasters such as summer typhoons with seasonal characteristics. Acknowledgment. This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).
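
    The bootstrap portion of such a non-parametric frequency analysis can be sketched by resampling annual maxima with replacement and reading off the empirical quantile for a given return period (illustrative code, not the authors' implementation):

        import numpy as np

        def bootstrap_return_level(annual_maxima, return_period=50, n_boot=10000, seed=0):
            """Non-parametric T-year return level with a bootstrap 95% interval."""
            rng = np.random.default_rng(seed)
            q = 1.0 - 1.0 / return_period              # non-exceedance probability
            x = np.asarray(annual_maxima, dtype=float)
            boots = [np.quantile(rng.choice(x, size=x.size, replace=True), q)
                     for _ in range(n_boot)]
            return np.quantile(x, q), np.percentile(boots, [2.5, 97.5])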

  15. Bootstrapping the economy -- a non-parametric method of generating consistent future scenarios

    OpenAIRE

    Müller, Ulrich A; Bürgi, Roland; Dacorogna, Michel M

    2004-01-01

    The fortune and the risk of a business venture depend on the future course of the economy. There is a strong demand for economic forecasts and scenarios that can be applied to planning and modeling. While there is an ongoing debate on modeling economic scenarios, the bootstrapping (or resampling) approach presented here has several advantages. As a non-parametric method, it directly relies on past market behaviors rather than debatable assumptions on models and parameters. Simultaneous dep...

  16. Improvements of the Profil Cultural Method for a better Low-tech Field Assessment of Soil Structure under no-till

    Science.gov (United States)

    Roger-Estrade, Jean; Boizard, Hubert; Peigné, Josephine; Sasal, Maria Carolina; Guimaraes, Rachel; Piron, Denis; Tomis, Vincent; Vian, Jean-François; Cadoux, Stephane; Ralisch, Ricardo; Filho, Tavares; Heddadj, Djilali; de Battista, Juan; Duparque, Annie

    2016-04-01

    In France, agronomists have studied the effects of cropping systems on soil structure using a field method based on a visual description of soil structure. The "profil cultural" method (Manichon and Gautronneau, 1987) was designed to provide a field diagnosis of the effects of tillage and compaction on soil structure dynamics. This method is of great use to agronomists improving crop management for better preservation of soil structure. However, it was developed and mainly used in conventional tillage systems with ploughing. As several forms of reduced, minimum and no-tillage systems are expanding in many parts of the world, it is necessary to re-evaluate the ability of this method to describe and interpret soil macrostructure in unploughed situations. In unploughed fields, the soil structure dynamics of untilled layers are mainly driven by compaction and by regeneration through natural agents (climatic conditions, root growth and macrofauna), and it is of major importance to evaluate the contribution of these natural processes to soil structure regeneration. These concerns have led us to adapt the standard method and to propose amendments based on a series of field observations and experimental work in different cropping systems, soil types and climatic conditions. We improved the description of crack types and introduced an index of biological activity based on the visual examination of clods. To test the improved method, a comparison with the reference method was carried out, and the ability of the "profil cultural" method to make a diagnosis was tested in five experiments in France, Brazil and Argentina. Using the improved method, the impact of cropping systems on soil functioning was better assessed when natural processes were integrated into the description.

  17. Overview of molecular typing methods for outbreak detection and epidemiological surveillance

    OpenAIRE

    Sabat, A. J.; Budimir, A.; Nashev, D.; Sa-Leao, R.; van Dijl, J. M.; Laurent, F.; Grundmann, H.; Friedrich, A. W.

    2013-01-01

    Typing methods for discriminating different bacterial isolates of the same species are essential epidemiological tools in infection prevention and control. Traditional typing systems based on phenotypes, such as serotype, biotype, phage-type, or antibiogram, have been used for many years. However, more recent methods that examine the relatedness of isolates at a molecular level have revolutionised our ability to differentiate among bacterial types and subtypes. Importantly, the development of...

  18. Fluctuation Flooding Method (FFM) for accelerating conformational transitions of proteins

    Science.gov (United States)

    Harada, Ryuhei; Takano, Yu; Shigeta, Yasuteru

    2014-03-01

    A powerful conformational sampling method for accelerating the structural transitions of proteins, the "Fluctuation Flooding Method" (FFM), is proposed. In FFM, cycles of the following steps enhance the transitions: (i) extraction of largely fluctuating snapshots along anisotropic modes obtained from trajectories of multiple independent molecular dynamics (MD) simulations and (ii) conformational re-sampling of the snapshots via re-generation of initial velocities when re-starting the MD simulations. In an application to bacteriophage T4 lysozyme, FFM successfully accelerated the open-closed transition with a 6 ns simulation starting solely from the open state, whereas a 1-μs canonical MD simulation failed to sample such a rare event.

  19. Improving Reference Service: The Case for Using a Continuous Quality Improvement Method.

    Science.gov (United States)

    Aluri, Rao

    1993-01-01

    Discusses the evaluation of library reference service; examines problems with past evaluations, including the lack of long-term planning and a systems perspective; and suggests a method for continuously monitoring and improving reference service using quality improvement tools such as checklists, cause and effect diagrams, Pareto charts, and…

  20. Methods of exploitation of different types of uranium deposits

    International Nuclear Information System (INIS)

    2000-09-01

    Deposits are mined using three broad types of mining methods: open pit, underground and in situ leaching. This publication addresses all aspects of mining and milling methods for several types of deposits and provides information to assist in the selection process of methods; it also considers what actions must be taken into account for obtaining regulatory approvals for a project and for the final decommissioning and reclamation of a project. The objective of this publication is to provide a process for the selection of methods for mining engineers and managers involved in modernising ongoing operations or considering opening new operations. Several practical examples are given. These guidelines can be consulted and used in many countries involved in uranium mining and milling operations. The examples where costs are given can also be adjusted to the specific economic conditions of various countries. The authors are from four uranium producing countries. They bring diversified experience for all types of mining and milling operations, from the opening of a mine to the decommissioning of the complete operation

  2. Using the Direct Sampling Multiple-Point Geostatistical Method for Filling Gaps in Landsat 7 ETM+ SLC-off Imagery

    KAUST Repository

    Yin, Gaohong

    2016-05-01

    Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, observable gaps occur in the acquired Landsat 7 imagery, impacting the spatial continuity of the observed imagery. Due to the high geometric and radiometric accuracy provided by Landsat 7, a number of approaches have been proposed to fill the gaps. However, all proposed approaches have evident constraints for universal application. The main issues in gap-filling are an inability to describe continuity features such as meandering streams or roads, and failure to maintain the shape of small objects when filling gaps in heterogeneous areas. The aim of this study is to validate the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses a conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The method was examined across a range of land cover types, including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers and coastal areas, to demonstrate its capacity to recover gaps accurately for various land cover types. The prediction accuracy of the Direct Sampling method was also compared with that of other gap-filling approaches previously demonstrated to offer satisfactory results, under both homogeneous-area and heterogeneous-area situations. The results show that the Direct Sampling method provides sufficiently accurate predictions for a variety of land cover types, from homogeneous to heterogeneous areas. Likewise, compared with other gap-filling approaches, it exhibits superior performance when filling gaps in heterogeneous land cover types without an input image, or with an input image that is temporally far from the target image.
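
    The core idea of Direct Sampling, a conditional stochastic resampling of known locations, can be illustrated with a toy single-band gap filler that scans randomly chosen known pixels and copies the one whose neighbourhood best matches the gap's neighbourhood (a drastic simplification of the actual multiple-point method):

        import numpy as np

        def toy_direct_sampling_fill(img, mask, patch=3, n_scan=500, seed=0):
            """Toy gap filler: mask is True at gap pixels of the 2D array img."""
            rng = np.random.default_rng(seed)
            out = img.astype(float).copy()
            out[mask] = np.nan                         # unknowns are ignored in distances
            known = np.argwhere(~mask)
            fallback = float(img[~mask].mean())
            r = patch // 2
            for i, j in np.argwhere(mask):
                target = out[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
                best, best_d = fallback, np.inf
                for ki, kj in known[rng.integers(0, len(known), n_scan)]:
                    cand = out[max(ki - r, 0):ki + r + 1, max(kj - r, 0):kj + r + 1]
                    if cand.shape != target.shape:     # patch clipped at the border
                        continue
                    d = np.nanmean((cand - target) ** 2)
                    if d < best_d:
                        best, best_d = out[ki, kj], d
                out[i, j] = best
            return out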

  3. LandScape: a simple method to aggregate p--Values and other stochastic variables without a priori grouping

    DEFF Research Database (Denmark)

    Wiuf, Carsten; Pallesen, Jonatan; Foldager, Leslie

    2016-01-01

    variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method...... and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values, without relying on a priori criteria, are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer...

  4. TYPE II-P SUPERNOVAE FROM THE SDSS-II SUPERNOVA SURVEY AND THE STANDARDIZED CANDLE METHOD

    International Nuclear Information System (INIS)

    D'Andrea, Chris B.; Sako, Masao; Dilday, Benjamin; Jha, Saurabh; Frieman, Joshua A.; Kessler, Richard; Holtzman, Jon; Konishi, Kohki; Yasuda, Naoki; Schneider, D. P.; Sollerman, Jesper; Wheeler, J. Craig; Cinabro, David; Nichol, Robert C.; Lampeitl, Hubert; Smith, Mathew; Atlee, David W.; Bassett, Bruce; Castander, Francisco J.; Goobar, Ariel

    2010-01-01

    We apply the Standardized Candle Method (SCM) for Type II Plateau supernovae (SNe II-P), which relates the velocity of the ejecta of a SN to its luminosity during the plateau, to 15 SNe II-P discovered over the three-season run of the Sloan Digital Sky Survey-II Supernova Survey. The redshifts of these SNe (0.027 < z < 0.144) cover a range hitherto sparsely sampled in the literature; in particular, our sample contains nearly as many SNe in the Hubble flow (z > 0.01) as all of the current literature on the SCM combined. We find that the SDSS SNe have a very small intrinsic I-band dispersion (0.22 mag), which can be attributed to selection effects. When the SCM is applied to the combined SDSS-plus-literature set of SNe II-P, the dispersion increases to 0.29 mag, larger than the scatter for either set of SNe separately. We show that the standardization cannot be further improved by eliminating SNe with positive plateau decline rates, as proposed in Poznanski et al. We thoroughly examine all potential systematic effects and conclude that for the SCM to be useful for cosmology, the methods currently used to determine the Fe II velocity at day 50 must be improved, and spectral templates able to encompass the intrinsic variations of Type II-P SNe will be needed.

  5. Synthesis of research on Biogrout soil improvement method

    Directory of Open Access Journals (Sweden)

    Zsolt KALTENBACHER

    2014-12-01

    Full Text Available Because of the rapid pace of urban development, there is a great need for new cost-effective methods of ground improvement. In this paper, a few chemical improvement technologies and a new biological ground improvement method called Biogrout are discussed. The method, applied in the paper to a Sarmatian sand from Transylvania (Feleac locality), uses microorganisms as catalysts to induce microbial carbonate precipitation (MICP), which increases the strength and stiffness of cohesionless soils. For this calcium-based procedure, the bacterium Sporosarcina pasteurii (DSMZ 33) is used, while the treatment solution consists of urea (CO(NH2)2) and calcium chloride (CaCl2). The study presents the triaxial testing of sand probes treated with Biogrout and the comparison of the results with those obtained for untreated sand probes.

  6. Linear–Quadratic Mean-Field-Type Games: A Direct Method

    Directory of Open Access Journals (Sweden)

    Tyrone E. Duncan

    2018-02-01

    Full Text Available In this work, a multi-person mean-field-type game is formulated and solved that is described by a linear jump-diffusion system of mean-field type and a quadratic cost functional involving the second moments, the square of the expected value of the state, and the control actions of all decision-makers. We propose a direct method to solve the game, team, and bargaining problems. This solution approach does not require solving the Bellman–Kolmogorov equations or backward–forward stochastic differential equations of Pontryagin’s type. The proposed method can be easily implemented by beginners and engineers who are new to the emerging field of mean-field-type game theory. The optimal strategies for decision-makers are shown to be in a state-and-mean-field feedback form. The optimal strategies are given explicitly as a sum of the well-known linear state-feedback strategy for the associated deterministic linear–quadratic game problem and a mean-field feedback term. The equilibrium costs of the decision-makers are explicitly derived using a simple direct method. Moreover, the equilibrium cost is a weighted sum of the initial variance and an integral of a weighted variance of the diffusion and the jump process. Finally, the method is used to compute global optimum strategies as well as saddle-point strategies and the Nash bargaining solution in state-and-mean-field feedback form.

  7. The Prediction Methods for Potential Suspended Solids Clogging Types during Managed Aquifer Recharge

    Directory of Open Access Journals (Sweden)

    Xinqiang Du

    2014-04-01

    Full Text Available The implementation and development of managed aquifer recharge (MAR) have been limited by clogging attributed to physical, chemical, and biological reactions. In field applications of MAR, physical clogging is usually the dominant type. Although numerous studies on the physical clogging mechanism during MAR are available, studies on the more detailed suspended-solids clogging types and their prediction methods remain few. In this study, a series of column experiments was conducted to examine the suspended-solids clogging process. Suspended-solids clogging was divided into three types, surface clogging, inner clogging and mixed clogging, based on their different characteristics. Surface clogging means that the suspended solids are intercepted at the medium surface when the suspended-solids grain diameter is larger than the pore diameter of the infiltration medium. Inner clogging means that the suspended-solids particles can transport through the infiltration medium. Mixed clogging refers to the combined occurrence of surface clogging and inner clogging. Each suspended-solids clogging type has a different clogging position, a different evolution of hydraulic conductivity and a different deposition profile of suspended solids. Based on the experimental data, the ratio of effective medium pore diameter (Dp) to median grain size of the suspended solids (d50) was proposed as the judgment index for the suspended-solids clogging types. Surface clogging occurred when Dp/d50 was less than 5.5, inner clogging occurred when Dp/d50 was greater than 180, and mixed clogging occurred when Dp/d50 was between 5.5 and 180. To improve the judgment accuracy and applicability, a Bayesian method that considers additional ratios of medium pore diameter (Dp) to different levels of suspended-solids grain diameter (di) was developed to predict the potential suspended-solids clogging types.
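
    The stated judgment rule maps directly onto a small function (thresholds taken from the abstract; the Bayesian refinement is not shown):

        def clogging_type(dp, d50):
            """Classify suspended-solids clogging from the ratio of effective medium
            pore diameter Dp to the median grain size d50 of the suspended solids."""
            ratio = dp / d50
            if ratio < 5.5:
                return "surface clogging"
            if ratio > 180:
                return "inner clogging"
            return "mixed clogging"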

  8. Improving ASTER GDEM Accuracy Using Land Use-Based Linear Regression Methods: A Case Study of Lianyungang, East China

    Directory of Open Access Journals (Sweden)

    Xiaoyan Yang

    2018-04-01

    Full Text Available The Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors based on individual land-use types and proposed two linear regression calibration methods, one considering only land use-specific errors and the other considering the impact of both land use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and improve the impact of land use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.
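
    The simple (land use-only) calibration amounts to fitting one linear error model per land-use class against field-measured reference elevations and subtracting the predicted error; a sketch with hypothetical array layouts (not the study's data or code):

        import numpy as np

        def calibrate_by_land_use(dem, land_use, ref_points):
            """Fit error = a*elevation + b per land-use class and correct the DEM.
            ref_points: iterable of (row, col, true_elevation) field measurements."""
            rows, cols, z_true = (np.array(t) for t in zip(*ref_points))
            corrected = dem.astype(float).copy()
            for lu in np.unique(land_use):
                sel = land_use[rows, cols] == lu
                if sel.sum() < 2:                     # too few points to fit this class
                    continue
                z_dem = dem[rows[sel], cols[sel]]
                a, b = np.polyfit(z_dem, z_dem - z_true[sel], 1)
                cls = land_use == lu
                corrected[cls] -= a * dem[cls] + b    # remove the modeled error
            return corrected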

  9. An overview of various typing methods for clinical epidemiology of the emerging pathogen Stenotrophomonas maltophilia.

    Science.gov (United States)

    Gherardi, Giovanni; Creti, Roberta; Pompilio, Arianna; Di Bonaventura, Giovanni

    2015-03-01

    Typing of bacterial isolates has been used for decades to study local outbreaks as well as in national and international surveillance for monitoring newly emerging resistant clones. Despite its recognition as a nosocomial pathogen, the precise modes of transmission of Stenotrophomonas maltophilia in health care settings are unknown. Due to the high genetic diversity observed among S. maltophilia clinical isolates, the typing results might be better interpreted if environmental strains were also included. This could help to identify preventative measures to be designed and implemented for decreasing the possibility of outbreaks and nosocomial infections. In this review, we attempt to provide an overview of the most common typing methods used for the clinical epidemiology of S. maltophilia strains, such as PCR-based fingerprinting analyses, pulsed-field gel electrophoresis, multilocus variable number tandem repeat analysis, and multilocus sequence typing. Application of proteomic-based mass spectrometry by matrix-assisted laser desorption ionization-time of flight is also described. Improvements of the typing methods already in use have to be achieved to facilitate S. maltophilia infection control at any level. In the near future, when novel Web-based platforms for rapid data processing and analysis become available, whole genome sequencing technologies will likely become a highly powerful tool for outbreak investigations and surveillance studies in routine clinical practice. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    Science.gov (United States)

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
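
    The resampling step at the heart of SISR is compact: weight each particle by the likelihood of the new observation, then resample particles in proportion to those weights. A generic numpy sketch (the state and observation models of the actual study are not shown):

        import numpy as np

        def sir_step(particles, log_lik, rng):
            """One sequential importance sampling/resampling step.
            particles: (N, d) array of states; log_lik: (N,) log-likelihoods."""
            w = np.exp(log_lik - log_lik.max())        # stabilized importance weights
            w /= w.sum()
            idx = rng.choice(len(particles), size=len(particles), p=w)
            return particles[idx]                      # equally weighted after resampling

    In practice the resampled particles are then jittered (the kernel smoothing mentioned above) to counter particle depletion when static parameters are carried in the state.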

  11. Partition dataset according to amino acid type improves the prediction of deleterious non-synonymous SNPs

    International Nuclear Information System (INIS)

    Yang, Jing; Li, Yuan-Yuan; Li, Yi-Xue; Ye, Zhi-Qiang

    2012-01-01

    Highlights: Proper dataset partition can improve the prediction of deleterious nsSNPs. Partition according to the original residue type at the nsSNP site is a good criterion. A similar strategy is expected to be promising in other machine learning problems. Abstract: Many non-synonymous SNPs (nsSNPs) are associated with diseases, and numerous machine learning methods have been applied to train classifiers for sorting disease-associated nsSNPs from neutral ones. The continuously accumulating nsSNP data allow us to further explore better prediction approaches. In this work, we partitioned the training data into 20 subsets according to either the original or the substituted amino acid type at the nsSNP site. Using a support vector machine (SVM), training classification models on each subset resulted in an overall accuracy of 76.3% or 74.9%, depending on which of the two partition criteria was used, while training on the whole dataset obtained an accuracy of only 72.6%. Moreover, when the dataset was instead randomly divided into 20 subsets, the corresponding accuracy was only 73.2%. Our results demonstrate that properly partitioning the whole training dataset into subsets, i.e., according to the residue type at the nsSNP site, significantly improves the performance of the trained classifiers, which should be valuable in developing better tools for predicting the disease association of nsSNPs.
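
    The partition-then-train strategy is straightforward with scikit-learn: group training examples by the original residue at the nsSNP site and fit one SVM per group (a schematic sketch; feature construction is omitted and all names are illustrative):

        import numpy as np
        from sklearn.svm import SVC

        def train_partitioned_svms(X, y, residues):
            """Train one SVM per original amino-acid type at the nsSNP site.
            residues: array of one-letter codes, one per training example."""
            residues = np.asarray(residues)
            models = {}
            for aa in np.unique(residues):
                idx = np.where(residues == aa)[0]
                models[aa] = SVC(kernel="rbf").fit(X[idx], y[idx])
            return models

        def predict_one(models, x, residue):
            return models[residue].predict(x.reshape(1, -1))[0]   # route by residue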

  12. A REVIEW OF ORDER PICKING IMPROVEMENT METHODS

    Directory of Open Access Journals (Sweden)

    Johan Oscar Ong

    2014-09-01

    Full Text Available As a crucial and one of the most important parts of warehousing, order picking often raises discussion among warehousing professionals, resulting in various studies aiming to analyze how order picking activity can be improved from various perspectives. This paper reviews past research on order picking improvement and the various methods those studies analyzed or developed. The literature review is based on twenty research articles on order picking improvement viewed from four different perspectives: automation (specifically, stock-to-picker systems), storage assignment policy, order batching, and order picking sequencing. By reviewing these studies, we try to identify the most prevalent approaches to order picking improvement. Keywords: warehousing; stock-to-picker; storage assignment; order batching; order picking sequencing; improvement

  13. An improved calcium chloride method preparation and ...

    African Journals Online (AJOL)

    Transformation is one of the fundamental and essential molecular cloning techniques. In this paper, we have reported a modified method for preparation and transformation of competent cells. This modified method, improved from a classical protocol, has made some modifications on the concentration of calcium chloride ...

  14. Work system innovation: Designing improvement methods for generative capability

    DEFF Research Database (Denmark)

    Hansen, David; Møller, Niels

    2013-01-01

    This paper explores how a work system’s capability for improvement is influenced by its improvement methods. Based on explorative case study at a Lean manufacturing facility, the methods problem solving and Appreciative Inquiry were compared through in-depth qualitative studies over a 12-month...

  15. A PLL-based resampling technique for vibration analysis in variable-speed wind turbines with PMSG: A bearing fault case

    Science.gov (United States)

    Pezzani, Carlos M.; Bossio, José M.; Castellino, Ariel M.; Bossio, Guillermo R.; De Angelo, Cristian H.

    2017-02-01

    Condition monitoring of permanent magnet synchronous machines has gained interest due to their increasing use in applications such as electric traction and power generation. Particularly in wind power generation, non-invasive condition monitoring techniques are of great importance. Usually, in such applications the access to the generator is complex and costly, while unexpected breakdowns result in high repair costs. This paper presents a technique that allows using vibration analysis for bearing fault detection in permanent magnet synchronous generators used in wind turbines. Given that in wind power applications the generator rotational speed may vary during normal operation, special sampling techniques are necessary to apply spectral analysis to the mechanical vibrations. In this work, a resampling technique based on order tracking without measuring the rotor position is proposed. To synchronize sampling with the rotor position, the rotor position is estimated from the angle of the voltage vector, which is obtained from a phase-locked loop synchronized with the generator voltages. The proposed strategy is validated by laboratory experimental results obtained from a permanent magnet synchronous generator. Results with single-point defects in the outer race of a bearing under variable speed and load conditions are presented.
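
    Once a rotor-angle estimate is available (here obtained from the PLL), the order-tracking resampling reduces to interpolating the vibration signal onto a grid that is uniform in shaft angle rather than in time; a minimal sketch:

        import numpy as np

        def angular_resample(vib, theta, samples_per_rev=256):
            """Resample a vibration signal uniformly in shaft angle (order tracking).
            vib: time-sampled vibration; theta: unwrapped rotor angle (rad) at each
            sample, e.g. estimated from a PLL locked to the generator voltages."""
            n_rev = (theta[-1] - theta[0]) / (2.0 * np.pi)
            grid = np.linspace(theta[0], theta[-1], int(n_rev * samples_per_rev))
            return np.interp(grid, theta, vib)         # uniform-in-angle signal

    An FFT of the resampled signal then yields an order spectrum in which bearing fault signatures stay at fixed orders despite speed variation.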

  16. A pragmatic comparison of two diabetes education programs in improving type 2 diabetes mellitus outcomes.

    Science.gov (United States)

    Dorland, Katherine; Liddy, Clare

    2014-03-28

    Although it is clear that education programs constitute key elements of improved diabetes management, uncertainty exists regarding the optimal method of delivering that education. In addition to the lack of consensus regarding the most appropriate delivery methods for these programs, there is a paucity of research evaluating these methods in terms of specific clinical outcomes. This pragmatic study compares the effectiveness of two distinct diabetes education programs in improving clinical outcomes in patients with type 2 diabetes mellitus in a primary care setting. The two diabetes education classes retrospectively evaluated (n = 80 enrolled) were 'The ABC's of Diabetes' (one 2-hour didactic teaching session) and 'Conversation Maps' (3 highly interactive weekly classes, 6 hours in total). Eligible participants (n = 32) had their charts reviewed and outcome measures (i.e., glycosylated hemoglobin levels (HbA1c), low density lipoprotein (LDL), systolic blood pressure (SBP), diastolic blood pressure (DBP), and weight) recorded 1 year prior to and 6 months following the class. Pre- and post-class outcome measures were compared. A trend towards lower HbA1c was observed after completion of both classes, with an average reduction after 6 months of 0.2% in the 'ABC's of Diabetes' class and 0.6% in the 'Conversation Maps' class. A significant decrease in weight was observed 6 months after the 'ABC's of Diabetes' class (p = 0.028), and in LDL after the 'Conversation Maps' class (p = 0.049). Patients with HbA1c ≥ 8% showed a drop of 1.1% in HbA1c 3 months after either class (p = 0.004). No significant difference in outcomes was found between the two diabetes education classes assessed. There was a trend towards improved glycemic control after both classes, and patients with high HbA1c levels demonstrated statistically significant improvements. This indicates that shorter sessions using didactic teaching methods may be equally

  17. An improved ontological representation of dendritic cells as a paradigm for all cell types

    Directory of Open Access Journals (Sweden)

    Mungall Chris

    2009-02-01

    Full Text Available Abstract. Background: Recent increases in the volume and diversity of life science data and information, and an increasing emphasis on data sharing and interoperability, have resulted in the creation of a large number of biological ontologies, including the Cell Ontology (CL), designed to provide a standardized representation of cell types for data annotation. Ontologies have been shown to have significant benefits for computational analyses of large data sets and for automated reasoning applications, leading to organized attempts to improve the structure and formal rigor of ontologies to better support computation. Currently, the CL employs multiple is_a relations, defining cell types in terms of histological, functional, and lineage properties, and the majority of definitions are written with sufficient generality to hold across multiple species. This approach limits the CL's utility for computation and for cross-species data integration. Results: To enhance the CL's utility for computational analyses, we developed a method for the ontological representation of cells and applied this method to develop a dendritic cell ontology (DC-CL). DC-CL subtypes are delineated on the basis of surface protein expression, systematically including both species-general and species-specific types and optimizing DC-CL for the analysis of flow cytometry data. We avoid multiple uses of is_a by linking DC-CL terms to terms in other ontologies via additional, formally defined relations such as has_function. Conclusion: This approach brings benefits in the form of increased accuracy, support for reasoning, and interoperability with other ontology resources. Accordingly, we propose our method as a general strategy for the ontological representation of cells. DC-CL is available from http://www.obofoundry.org.

  18. Method to improve commercial bonded SOI material

    Science.gov (United States)

    Maris, Humphrey John; Sadana, Devendra Kumar

    2000-07-11

    A method of improving the bonding characteristics of a previously bonded silicon-on-insulator (SOI) structure is provided. The improvement in the bonding characteristics is achieved in the present invention by, optionally, forming an oxide cap layer on the silicon surface of the bonded SOI structure and then annealing either the uncapped or oxide-capped structure in a slightly oxidizing ambient at temperatures greater than 1200°C. Also provided herein is a method for detecting the bonding characteristics of previously bonded SOI structures. According to this aspect of the present invention, a picosecond laser pulse technique is employed to determine the bonding imperfections of previously bonded SOI structures.

  19. Individuals with Type 1 and Type 2 Diabetes Mellitus Trade Increased Hyperglycemia for Decreased Hypoglycemia When Glycemic Variability is not Improved.

    Science.gov (United States)

    Jangam, Sujit R; Hayter, Gary; Dunn, Timothy C

    2018-02-01

    Glycemic variability refers to oscillations in blood glucose within a day and differences in blood glucose at the same time on different days. Glycemic variability is linked to hypoglycemia and hyperglycemia. The relationship among these three important metrics is examined here, specifically to show how reduction in both hypo- and hyperglycemia risk depends on changes in variability. To understand the importance of glycemic variability in the simultaneous reduction of hypoglycemia and hyperglycemia risk, we introduce the glycemic risk plot: estimated HbA1c % (eA1c) vs. minutes below 70 mg/dl (MB70), with constant-variability contours for predicting post-intervention risks in the absence of a change in glycemic variability. The glycemic risk plot illustrates that individuals who do not reduce glycemic variability improve one of the two metrics (hypoglycemia risk or hyperglycemia risk) at the cost of the other; it is important to reduce variability to improve both risks. These results were confirmed by data collected in a randomized controlled trial of individuals with type 1 and type 2 diabetes on insulin therapy. For type 1, a total of 28 individuals out of 35 (80%) showed improvement in at least one of the risks (hypo and/or hyper) during the 100-day course of the study, and seven individuals (20%) showed improvement in both. Similar data were observed for type 2, where a total of 36 individuals out of 43 (84%) showed improvement in at least one risk and 8 individuals (19%) showed improvement in both. All individuals in the study who showed improvement in both hypoglycemia and hyperglycemia risk also showed a reduction in variability. Therapy changes intended to improve an individual's hypoglycemia or hyperglycemia risk often result in the reduction of one risk at the expense of the other. It is important to improve glucose variability to reduce both risks, or at least maintain one risk while reducing the other. Abbott Diabetes Care.
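
    The two axes of the glycemic risk plot can be computed directly from continuous glucose readings; a sketch assuming 5-minute sampling and the widely used ADAG linear relation between mean glucose and HbA1c (the paper's exact estimator may differ):

        import numpy as np

        def glycemic_risk_point(glucose_mgdl, minutes_per_sample=5):
            """Return (eA1c in %, minutes below 70 mg/dl) for a glucose series.
            Uses the ADAG relation: eA1c = (mean glucose + 46.7) / 28.7."""
            g = np.asarray(glucose_mgdl, dtype=float)
            ea1c = (g.mean() + 46.7) / 28.7
            mb70 = int(np.sum(g < 70)) * minutes_per_sample
            return ea1c, mb70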

  20. A Lean Six Sigma approach to the improvement of the selenium analysis method.

    Science.gov (United States)

    Cloete, Bronwyn C; Bester, André

    2012-11-02

    Reliable results represent the pinnacle assessment of quality of an analytical laboratory, and therefore variability is considered a critical quality problem associated with the selenium analysis method executed at Western Cape Provincial Veterinary Laboratory (WCPVL). The elimination and control of variability is undoubtedly of significant importance because of the narrow margin of safety between toxic and deficient doses of the trace element for good animal health. A quality methodology known as Lean Six Sigma was believed to present the most feasible solution for overcoming the adverse effect of variation, through steps towards analytical process improvement. Lean Six Sigma represents a form of scientific method that is empirical, inductive, deductive, and systematic, and that relies on data and is fact-based. The Lean Six Sigma methodology comprises five macro-phases, namely Define, Measure, Analyse, Improve and Control (DMAIC). Both qualitative and quantitative laboratory data were collected in terms of these phases. Qualitative data were collected using quality tools, namely an Ishikawa diagram, a Pareto chart, Kaizen analysis and a Failure Mode Effect analysis tool. Quantitative laboratory data, based on the analytical chemistry test method, were collected through a controlled experiment. The controlled experiment entailed 13 replicated runs of the selenium test method, in which 11 samples were repetitively analysed and Certified Reference Material (CRM) was included in 6 of the runs. Laboratory results obtained from the controlled experiment were analysed using statistical methods commonly associated with quality validation of chemistry procedures. Analysis of both sets of data yielded an improved selenium analysis method, believed to provide greater reliability of results, in addition to a greatly reduced cycle time and superior control features. Lean Six Sigma may therefore be regarded as a valuable tool in any laboratory, and

  3. Improved parallel solution techniques for the integral transport matrix method

    Energy Technology Data Exchange (ETDEWEB)

    Zerr, R. Joseph, E-mail: rjz116@psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA (United States); Azmy, Yousry Y., E-mail: yyazmy@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Burlington Engineering Laboratories, Raleigh, NC (United States)

    2011-07-01

    Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because the sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner. (author)
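
    The red-black coloring that parallelizes Gauss-Seidel can be illustrated on a simple 2D Laplace problem: points of one color depend only on neighbours of the other color, so each color set can be updated concurrently (a generic sketch, not the ITMM operators):

        import numpy as np

        def red_black_gauss_seidel(u, n_sweeps=100):
            """Red-black Gauss-Seidel for the 2D Laplace equation; boundary values
            of the float array u are held fixed (Dirichlet), interior points relax."""
            i, j = np.meshgrid(np.arange(u.shape[0]), np.arange(u.shape[1]),
                               indexing="ij")
            for _ in range(n_sweeps):
                for parity in (0, 1):                  # 0 = red points, 1 = black
                    m = (i + j) % 2 == parity
                    m[0, :] = m[-1, :] = m[:, 0] = m[:, -1] = False
                    u[m] = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                                   + np.roll(u, 1, 1) + np.roll(u, -1, 1))[m]
            return u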

  5. An improved partial least-squares regression method for Raman spectroscopy

    Science.gov (United States)

    Momenpour Tehran Monfared, Ali; Anis, Hanan

    2017-10-01

    It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve BVSPLS based on a novel selection mechanism. The proposed method is based on sorting the weighted regression coefficients, after which the importance of each variable in the sorted list is evaluated using the root mean square error of prediction (RMSEP) criterion at each iteration step. Our improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to improvements in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and genetic algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed either similar or better performance compared to the genetic algorithm.
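
    The backward selection loop can be sketched with scikit-learn's PLSRegression: repeatedly drop the variable with the smallest-magnitude regression coefficient and keep the subset with the lowest validation RMSEP (an illustrative sketch of BVSPLS-style selection, not the authors' exact algorithm):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error

        def backward_pls(X_tr, y_tr, X_val, y_val, n_comp=5, min_vars=5):
            """Backward variable selection for PLS, scored by validation RMSEP."""
            keep = list(range(X_tr.shape[1]))
            best_rmsep, best_keep = np.inf, keep[:]
            while len(keep) >= min_vars:
                pls = PLSRegression(n_components=min(n_comp, len(keep)))
                pls.fit(X_tr[:, keep], y_tr)
                rmsep = mean_squared_error(y_val, pls.predict(X_val[:, keep])) ** 0.5
                if rmsep < best_rmsep:
                    best_rmsep, best_keep = rmsep, keep[:]
                if len(keep) == min_vars:
                    break
                weakest = int(np.argmin(np.abs(pls.coef_).ravel()))
                keep.pop(weakest)                     # drop the least informative variable
            return best_keep, best_rmsep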

  6. A new method for identifying the types of organic matter

    International Nuclear Information System (INIS)

    Tong Chunhan; Li Guodong

    1991-01-01

    A new method for classifying the types of organic matter according to the V and Ni contents in soluble organic matter determined by NAA is introduced. The research site was an oil-gas field in northeastern China. The type of organic matter is an important parameter in evaluating an oil or a gas field. Conventional organic geochemistry methods meet insurmountable difficulties when the maturity of the organic matter is high. The method described in this paper can solve this problem. (author) 4 refs.; 1 fig.; 2 tabs

  7. Improvement of numerical analysis method for FBR core characteristics. 3

    International Nuclear Information System (INIS)

    Takeda, Toshikazu; Yamamoto, Toshihisa; Kitada, Takanori; Katagi, Yousuke

    1998-03-01

    As part of the improvement of numerical analysis methods for FBR core characteristics, studies on several topics have been conducted: the multiband method, Monte Carlo perturbation, and the nodal transport method. This report is composed of the following three parts. Part 1: Improvement of the Reaction Rate Calculation Method in the Blanket Region Based on the Multiband Method. A method was developed for precise evaluation of the reaction rate distribution in the blanket region using the multiband method. With the 3-band parameters obtained from the ordinary fitting method, major reaction rates such as the U-238 capture, U-235 fission, Pu-239 fission and U-238 fission rate distributions were analyzed. Part 2: Improvement of the Estimation Method for Reactivity Based on Monte Carlo Perturbation Theory. Perturbation methods based on Monte Carlo perturbation theory have been investigated and introduced into the calculational code. The Monte Carlo perturbation code was applied to the MONJU core and the calculational results were compared to the reference. Part 3: Improvement of the Nodal Transport Calculation for Hexagonal Geometry. A method to evaluate the intra-subassembly power distribution from the nodal averaged neutron flux and the surface fluxes at the node boundaries was developed based on transport theory. (J.P.N.)

  8. Projection preconditioning for Lanczos-type methods

    Energy Technology Data Exchange (ETDEWEB)

    Bielawski, S.S.; Mulyarchik, S.G.; Popov, A.V. [Belarusian State Univ., Minsk (Belarus)

    1996-12-31

    We show how auxiliary subspaces and related projectors may be used for preconditioning nonsymmetric systems of linear equations. It is shown that a system preconditioned in this way (or projected) is better conditioned than the original system (at least if the coefficient matrix of the system to be solved is symmetrizable). Two approaches for solving the projected system are outlined. The first implies straightforward computation of the projected matrix and subsequent use of some direct or iterative method. The second approach is projection preconditioning of a conjugate gradient-type solver. The latter approach is developed here in the context of biconjugate gradient iteration and some related Lanczos-type algorithms. Some possible particular choices of auxiliary subspaces are discussed. It is shown that one of them is equivalent to using colorings. Some results of numerical experiments are reported.
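
    The abstract leaves the projector unspecified; one common deflation-style construction, assuming an auxiliary subspace spanned by the columns of a full-rank matrix W and writing E = WᵀAW, is:

```latex
% One common deflation-style construction (an assumed form, not necessarily
% the authors' exact choice), with E = W^T A W:
P = I - A W E^{-1} W^{T}, \qquad P A \tilde{x} = P b, \qquad
x = W E^{-1} W^{T} b + \left( I - W E^{-1} W^{T} A \right) \tilde{x}.
```

    Solving the projected system for x̃ and adding the coarse correction recovers the solution of the original system.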

  9. Improvement of Tone's method with two-term rational approximation

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Endo, Tomohiro; Chiba, Go

    2011-01-01

    An improvement of Tone's method, a resonance calculation method based on the equivalence theory, is proposed. In order to increase calculation accuracy, the two-term rational approximation is incorporated for the representation of the neutron flux. Furthermore, some theoretical aspects of Tone's method, i.e., its inherent approximation and the choice of an adequate multigroup cross section for collision probability estimation, are also discussed. The validity of the improved Tone's method is confirmed through a verification calculation in an irregular lattice geometry representing part of an LWR fuel assembly. The calculation result clarifies the validity of the present method. (author)

  10. Discriminatory Indices of Typing Methods for Epidemiologic Analysis of Contemporary Staphylococcus aureus Strains.

    Science.gov (United States)

    Rodriguez, Marcela; Hogan, Patrick G; Satola, Sarah W; Crispell, Emily; Wylie, Todd; Gao, Hongyu; Sodergren, Erica; Weinstock, George M; Burnham, Carey-Ann D; Fritz, Stephanie A

    2015-09-01

    Historically, a number of typing methods have been evaluated for Staphylococcus aureus strain characterization. The emergence of contemporary strains of community-associated S. aureus, and the ensuing epidemic with a predominant strain type (USA300), necessitates re-evaluation of the discriminatory power of these typing methods for discerning molecular epidemiology and transmission dynamics, essential to investigations of hospital and community outbreaks. We compared the discriminatory index of 5 typing methods for contemporary S. aureus strain characterization. Children presenting to St. Louis Children's Hospital and community pediatric practices in St. Louis, Missouri (MO), with community-associated S. aureus infections were enrolled. Repetitive sequence-based PCR (repPCR), pulsed-field gel electrophoresis (PFGE), multilocus sequence typing (MLST), staphylococcal protein A (spa), and staphylococcal cassette chromosome (SCC) mec typing were performed on 200 S. aureus isolates. The discriminatory index of each method was calculated using the standard formula for this metric, where a value of 1 is highly discriminatory and a value of 0 is not discriminatory. Overall, we identified 26 distinct strain types by repPCR, 17 strain types by PFGE, 30 strain types by MLST, 68 strain types by spa typing, and 5 strain types by SCCmec typing. RepPCR had the highest discriminatory index (D) of all methods (D = 0.88), followed by spa typing (D = 0.87), MLST (D = 0.84), PFGE (D = 0.76), and SCCmec typing (D = 0.60). The method with the highest D among MRSA isolates was repPCR (D = 0.64) followed by spa typing (D = 0.45) and MLST (D = 0.44). The method with the highest D among MSSA isolates was spa typing (D = 0.98), followed by MLST (D = 0.93), repPCR (D = 0.92), and PFGE (D = 0.89). Among isolates designated USA300 by PFGE, repPCR was most discriminatory, with 10 distinct strain types identified (D = 0.63). We identified 45
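
    The "standard formula" referred to is commonly Simpson's index of diversity in the Hunter-Gaston form, where N is the number of isolates and n_j the size of the j-th of s distinct strain types:

```latex
D = 1 - \frac{1}{N(N-1)} \sum_{j=1}^{s} n_j \,(n_j - 1)
```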

  11. Room for improvement? Leadership, innovation culture and uptake of quality improvement methods in general practice.

    Science.gov (United States)

    Apekey, Tanefa A; McSorley, Gerry; Tilling, Michelle; Siriwardena, A Niroshan

    2011-04-01

    Leadership and innovation are currently seen as essential elements for the development and maintenance of high-quality care. Little is known about the relationship between leadership and culture of innovation and the extent to which quality improvement methods are used in general practice. This study aimed to assess the relationship between leadership behaviour, culture of innovation and adoption of quality improvement methods in general practice. Self-administered postal questionnaires were sent to general practitioner quality improvement leads in one county in the UK between June and December 2007. The questionnaire consisted of background information, a 12-item scale to assess leadership behaviour, a seven-dimension self-rating scale for culture of innovation and questions on current use of quality improvement tools and techniques. Sixty-three completed questionnaires (62%) were returned. Leadership behaviours were not commonly reported. Most practices reported a positive culture of innovation, featuring relationship most strongly, followed by targets and information, but rated lower on the other dimensions of rewards, risk and resources. There was a significant positive correlation between leadership behaviour and the culture of innovation (r = 0.57). Quality improvement methods were not adopted by most participating practices. Leadership behaviours were infrequently reported, and this was associated with a limited culture of innovation in participating general practices. There was little use of quality improvement methods beyond clinical and significant event audit. Practices need support to enhance leadership skills, encourage innovation and develop quality improvement skills if improvements in health care are to accelerate. © 2010 Blackwell Publishing Ltd.

  12. Do physicians understand type 2 diabetes patients' perceptions of seriousness; the emotional impact and needs for care improvement? A cross-national survey

    NARCIS (Netherlands)

    Hajós, T.R.S.; Polonsky, W.H.; Twisk, J.W.; Dain, M.P.; Snoek, F.J.

    2011-01-01

    Objective: To explore across countries the extent to which physicians understand Type 2 diabetes patients' perceptions of seriousness, worries about complications, emotional distress, and needs for care improvement. Methods: Cross-sectional data were collected in a multinational survey (SHARED).

  13. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    Science.gov (United States)

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology of recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolation tensor and can preserve tensor anisotropy, but the method does not revise the size of tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional spectral quaternion interpolation. Firstly, we decomposed diffusion tensors, with the direction of tensors being represented by quaternions. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of tensors, but also preserve tensor anisotropy at the same time. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
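
    For orientation, a sketch of the Log-Euclidean baseline the authors compare against (not their improved spectral-quaternion method): tensors are averaged in the matrix-logarithm domain, which keeps the interpolant symmetric positive definite.

```python
# Sketch of the Log-Euclidean baseline the paper compares against:
# a weighted average of diffusion tensors in the matrix-log domain.
import numpy as np
from scipy.linalg import logm, expm

def log_euclidean_interp(T1, T2, w):
    """Interpolate SPD tensors T1, T2 at weight w in [0, 1]."""
    L = (1.0 - w) * logm(T1) + w * logm(T2)
    return expm(L).real

T1 = np.diag([3.0, 1.0, 1.0])          # prolate tensor (toy example)
T2 = np.diag([1.0, 1.0, 3.0])
Tm = log_euclidean_interp(T1, T2, 0.5)
print(np.linalg.eigvalsh(Tm))          # eigenvalues stay positive
```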

  14. Mutagenesis applied to improve fruit trees. Techniques, methods and evaluation of radiation-induced mutations

    International Nuclear Information System (INIS)

    Donini, B.

    1982-01-01

    Improvement of fruit tree cultivars is an urgent need for a modern and industrialized horticulture on which is based the economic importance of many countries. Both the cross breeding and the mutation breeding are regarded as the methods to be used for creating new varieties. Research carried out at the CNEN Agriculture Laboratory on mutagenesis to improve vegetatively propagated plants, under the FAO-IAEA Co-ordinated Research Programme, has dealt with methods of exposure, types of radiations, conditions during and after the irradiation, mechanisms of mutation induction, methodology of isolation of somatic mutations and evaluation of radiation-induced mutations in fruit trees. Problems associated with these aspects have been evaluated, which is very important for the more efficient use of radiation in the mutation breeding. Mutants of agronomical importance (plant size reduction, early ripening, fruit colour change, nectarine fruit, self-thinning fruit) have been isolated in cherry, grape, apple, olive and peach and they are ready to be released. (author)

  15. Analyzing and improving a chaotic encryption method

    International Nuclear Information System (INIS)

    Wu Xiaogang; Hu Hanping; Zhang Baoliang

    2004-01-01

    To resist the return map attack [Phys. Rev. Lett. 74 (1995) 1970] presented by Perez and Cerdeira, Shouliang Bu and Bing-Hong Wang proposed a simple method to improve the security of the chaotic encryption by modulating the chaotic carrier with an appropriately chosen scalar signal in [Chaos, Solitons and Fractals 19 (2004) 919]. They maintained that this modulating strategy not only preserved all appropriate information required for synchronizing chaotic systems but also destroyed the possibility of the phase space reconstruction of the sender dynamics such as a return map. However, a critical defect does exist in this scheme. This paper gives a zero-point autocorrelation method, which can recover the parameters of the scalar signal from the modulated signal. Consequently, the messages will be extracted from the demodulated chaotic carrier by using return map. Based on such a fact, an improved scheme is presented to obtain higher security, and the numerical simulation indicates the improvement of the synchronizing performance as well
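
    A generic NumPy sketch of the autocorrelation estimate underlying such an analysis; the sinusoid standing in for the modulating scalar signal, and its frequency, are made up for illustration.

```python
# Generic autocorrelation estimate -- the basic operation behind the
# zero-point autocorrelation analysis described above. The sinusoid is a
# made-up stand-in for the modulating scalar signal.
import numpy as np

fs, f0 = 1000.0, 37.0                      # sample rate, hidden frequency (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
s = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.normal(size=t.size)

x = s - s.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]                              # normalize so acf[0] == 1

# the first local maximum above 0.5 marks one full period of the hidden signal
for k in range(1, 200):
    if acf[k] > 0.5 and acf[k] >= acf[k - 1] and acf[k] >= acf[k + 1]:
        break
print(f"estimated period {k / fs:.4f} s vs true {1 / f0:.4f} s")
```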

  16. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    Science.gov (United States)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with
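
    A generic sketch of the bootstrap step described for the single hydrological parameter: resample the calibration data with replacement, refit, and read the uncertainty off the distribution of refitted values. The one-parameter runoff model and the data here are invented for illustration.

```python
# Generic bootstrap sketch for calibration uncertainty of a single model
# parameter, in the spirit of the SHYREG step described above. The
# one-parameter "model" and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
rain = rng.gamma(2.0, 10.0, size=300)           # synthetic rainfall forcing
flow = 0.6 * rain + rng.normal(0, 3, size=300)  # synthetic observed flow

def calibrate(rain, flow):
    # least-squares fit of the single runoff coefficient k in flow = k * rain
    return float(np.dot(rain, flow) / np.dot(rain, rain))

boot = []
for _ in range(2000):
    idx = rng.integers(0, rain.size, rain.size)  # resample with replacement
    boot.append(calibrate(rain[idx], flow[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"k = {calibrate(rain, flow):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```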

  17. Collateral Information for Equating in Small Samples: A Preliminary Investigation

    Science.gov (United States)

    Kim, Sooyeon; Livingston, Samuel A.; Lewis, Charles

    2011-01-01

    This article describes a preliminary investigation of an empirical Bayes (EB) procedure for using collateral information to improve equating of scores on test forms taken by small numbers of examinees. Resampling studies were done on two different forms of the same test. In each study, EB and non-EB versions of two equating methods--chained linear…

  18. Large-scale systematic analysis of 2D fingerprint methods and parameters to improve virtual screening enrichments.

    Science.gov (United States)

    Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody

    2010-05-24

    A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
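
    For concreteness, one cell of such a parameter grid, sketched with RDKit as a stand-in for Canvas (which is Schrödinger software): radial (Morgan/ECFP-like) fingerprints in a 2048-bit space scored by Tanimoto similarity.

```python
# One cell of such a parameter grid, sketched with RDKit standing in for
# Canvas: radial (Morgan/ECFP-like) fingerprints scored by Tanimoto similarity.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

smiles = ["CCO", "CCN", "c1ccccc1O"]                 # toy molecules
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048)
       for m in mols]                                # radius 2, 2048-bit space

for i in range(len(fps)):
    for j in range(i + 1, len(fps)):
        sim = DataStructs.TanimotoSimilarity(fps[i], fps[j])
        print(smiles[i], smiles[j], round(sim, 3))
```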

  19. Improving flow patterns and spillage characteristics of a box-type commercial kitchen hood.

    Science.gov (United States)

    Huang, Rong Fung; Chen, Jia-Kun; Han, Meng-Ji; Priyambodo, Yusuf

    2014-01-01

    A conventional box-type commercial kitchen hood and its improved version (termed the "IQV commercial kitchen hood") were studied using the laser-assisted smoke flow visualization technique and tracer-gas (sulfur hexafluoride) detection methods. The laser-assisted smoke flow visualization technique qualitatively revealed the flow field of the hood and the areas apt for leakages of hood containment. The tracer-gas concentration detection method measured the quantitative leakage levels of the hood containment. The oil mists that were generated in the conventional box-type commercial kitchen hood leaked significantly into the environment from the areas near the front edges of ceiling and side walls. Around these areas, the boundary-layer separation occurred, inducing highly unsteady and turbulent recirculating flow, and leading to spillages of hood containment due to inappropriate aerodynamic design at the front edges of the ceiling and side walls. The tracer-gas concentration measurements on the conventional box-type commercial kitchen hood showed that the sulfur hexafluoride concentrations detected at the hood face attained very large values, on the order of 10^3-10^4 ppb. By combining the backward-offset narrow suction slot, deflection plates, and quarter-circular arcs at the hood entrance, the IQV commercial kitchen hood presented a flow field containing four backward-inclined cyclone flow structures. The oil mists generated by cooking were coherently confined in these upward-rising cyclone flow structures and finally exhausted through the narrow suction slot. The tracer-gas concentration measurements on the IQV commercial kitchen hood showed that the sulfur hexafluoride concentrations detected at the hood face were negligibly small--only on the order of 10^0 ppb across the whole hood face.

  20. Improvement of biomolecular methods for the identification and typing of Escherichia coli O157:H7 isolated from raw meat.

    Science.gov (United States)

    Paris, A; Bonardi, S; Bacci, C; Boni, E; Salmi, F; Bassi, L; Brindani, F

    2010-06-01

    The aim of the study was to evaluate the sensitivity of two m-PCR methods for the quantitative determination of E. coli O157:H7 in foodstuffs. Genomic serotyping was carried out on bacterial cultures, and the necessary time was optimized to increase the resolution of the method. Subsequently, artificial contamination trials using meat were conducted to assess method accuracy in foodstuffs and pursue the genetic typing of pathogens. Measurement thresholds were shown to range between 10^5 and 10^6 CFU/mL, but were reduced by four logarithmic cycles in 80% of samples. Relative to the meat contamination trials, serotypes were identified after 24 hours, corresponding to a 10 CFU/mL inoculum, with higher rates seen when m-TSB was used for enrichment. Inoculated samples were found to contain three virulence factors (hlyA, eaeA, and stx1).

  1. A New Processing Method Combined with BP Neural Network for Francis Turbine Synthetic Characteristic Curve Research

    Directory of Open Access Journals (Sweden)

    Junyi Li

    2017-01-01

    Full Text Available A BP (backpropagation) neural network method is employed to address a problem in current processing of hydroturbine synthetic characteristic curves: most studies are concerned only with data in the high-efficiency, large guide-vane-opening area, which can hardly meet the requirements of transition-process research, especially in large-fluctuation situations. The principle of the proposed method is to convert the nonlinear characteristics of the turbine to torque and flow characteristics, which can be used directly for real-time simulation based on the neural network. Results show that the obtained sample data can be extended successfully to cover wider working areas under different operation conditions. Another major contribution of this paper is the resampling technique proposed to overcome the limitation on the simulated sampling period. In addition, a detailed analysis of improvements to the iteration convergence of the pressure loop is presented, leading to better iterative convergence during the head pressure calculation. Actual applications verify that the methods proposed in this paper give better simulation results, closer to the field, and provide a new perspective for hydroturbine synthetic characteristic curve fitting and modeling.
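
    A minimal sketch of the BP-network regression idea, assuming the usual setup of mapping guide-vane opening and unit speed to torque and flow characteristics; all data below are synthetic placeholders, and scikit-learn's MLP stands in for the paper's network.

```python
# Minimal sketch of the BP-network idea: regress torque and flow
# characteristics on guide-vane opening and unit speed. Data are synthetic
# placeholders, not turbine measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
opening = rng.uniform(10, 100, 1000)          # guide vane opening (%)
speed = rng.uniform(50, 90, 1000)             # unit speed
X = np.column_stack([opening, speed])
# invented smooth surrogate for the torque/flow surfaces
y = np.column_stack([0.02 * opening * (100 - 0.5 * speed),
                     0.5 * opening + 0.1 * speed])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0))
model.fit(X, y)
print(model.predict([[60.0, 70.0]]))          # predicted torque, flow at one point
```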

  2. When daily planning improves employee performance: The importance of planning type, engagement, and interruptions.

    Science.gov (United States)

    Parke, Michael R; Weinhardt, Justin M; Brodsky, Andrew; Tangirala, Subrahmaniam; DeVoe, Sanford E

    2018-03-01

    Does planning for a particular workday help employees perform better than on other days they fail to plan? We investigate this question by identifying 2 distinct types of daily work planning to explain why and when planning improves employees' daily performance. The first type is time management planning (TMP)-creating task lists, prioritizing tasks, and determining how and when to perform them. We propose that TMP enhances employees' performance by increasing their work engagement, but that these positive effects are weakened when employees face many interruptions in their day. The second type is contingent planning (CP) in which employees anticipate possible interruptions in their work and plan for them. We propose that CP helps employees stay engaged and perform well despite frequent interruptions. We investigate these hypotheses using a 2-week experience-sampling study. Our findings indicate that TMP's positive effects are conditioned upon the amount of interruptions, but CP has positive effects that are not influenced by the level of interruptions. Through this study, we help inform workers of the different planning methods they can use to increase their daily motivation and performance in dynamic work environments. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Solving the interval type-2 fuzzy polynomial equation using the ranking method

    Science.gov (United States)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim

    2014-07-01

    Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and social sciences. There are some methods that have been developed in order to solve these equations. In this study we are interested in introducing the interval type-2 fuzzy polynomial equation and solving it using the ranking method of fuzzy numbers. The ranking method concept was firstly proposed to find real roots of fuzzy polynomial equation. Therefore, the ranking method is applied to find real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation to a system of crisp interval type-2 fuzzy polynomial equation. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach by numerical example.

  4. Training Methods to Improve Evidence-Based Medicine Skills

    Directory of Open Access Journals (Sweden)

    Filiz Ozyigit

    2010-06-01

    Full Text Available Evidence-based medicine (EBM) is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. It is estimated that only 15% of medical interventions are evidence-based. Increasing demand, new technological developments, malpractice legislation, and a very rapid increase in knowledge and knowledge sources push physicians toward EBM, but at the same time increase their load by giving them the responsibility to improve their skills. Clinical maneuvers are needed more as the number of clinical trials and observational studies increases. However, many of the physicians who are in the front row of patient care do not use this increasing evidence. There are several examples of different training methods intended to improve the skills of physicians for evidence-based practice. These trainings might be given during medical school, during residency, or as continuous training to practitioners in the field. It is important to discuss these different training methods in our country as well and to encourage dissemination of feasible and effective methods. [TAF Prev Med Bull 2010; 9(3): 245-254]

  5. An Improved Method of Training Overcomplete Dictionary Pair

    Directory of Open Access Journals (Sweden)

    Zhuozheng Wang

    2014-01-01

    Full Text Available Training an overcomplete dictionary pair is a critical step of the mainstream super-resolution methods. Because dictionary training has high time complexity and is susceptible to corruption, an improved method based on the lifting wavelet transform and robust principal component analysis is reported. The high-frequency components of example images are estimated through the wavelet coefficients of a 3-tier lifting wavelet transform decomposition. Sparse coefficients are similar in multiframe images. Accordingly, the inexact augmented Lagrange multiplier method is employed to achieve robust principal component analysis in the process of imposing global constraints. Experiments reveal that the new algorithm not only reduces the time complexity while preserving clarity but also improves robustness for corrupted example images.
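
    A compact sketch of the standard inexact augmented Lagrange multiplier formulation of robust PCA that the abstract invokes, splitting a matrix M into a low-rank part L and a sparse part S; the parameter choices follow the common defaults, not necessarily the paper's.

```python
# Compact sketch of RPCA by the inexact augmented Lagrange multiplier method:
# split M into low-rank L plus sparse S. Parameter choices follow the usual
# defaults (lambda = 1/sqrt(max(m, n))), not necessarily the paper's.
import numpy as np

def rpca_ialm(M, iters=200, tol=1e-7):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(M, 2)
    Y = M / max(norm2, np.abs(M).max() / lam)   # dual variable initialization
    mu, rho = 1.25 / norm2, 1.5
    S = np.zeros_like(M)
    for _ in range(iters):
        # singular-value shrinkage for the low-rank part
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # soft-thresholding for the sparse part
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Y += mu * (M - L - S)
        mu *= rho
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S

rng = np.random.default_rng(0)
low = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 100))   # rank-5 part
spikes = np.where(rng.random((100, 100)) < 0.05, 10.0, 0.0)   # sparse corruption
L, S = rpca_ialm(low + spikes)
print(np.linalg.matrix_rank(L), np.count_nonzero(np.abs(S) > 1e-6))
```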

  6. Onabotulinumtoxin type A improves lower urinary tract symptoms and quality of life in patients with human T cell lymphotropic virus type 1 associated overactive bladder

    Directory of Open Access Journals (Sweden)

    Jose Abraão Carneiro Neto

    2018-03-01

    Full Text Available Aim: To evaluate the efficacy of onabotulinum toxin type A in the treatment of HTLV-1-associated overactive bladder and its impact on quality of life (QoL). Methods: Case series of 10 patients with overactive bladder refractory to conservative treatment with anticholinergics or physical therapy. They received 200 IU of onabotulinumtoxin type A intravesically and were evaluated by the overactive bladder symptoms score (OABSS) and King's Health Questionnaire. Results: The mean (SD) age was 52 (14.5) years and 60% were female. All of them had confirmed detrusor overactivity on urodynamic study. Seven patients had HAM/TSP. The median (range) OABSS was 13 (12-15) before therapy and decreased to 1.0 (0-12) on day 30 and to 3 (0-14) on day 90 (p < 0.0001). There was a significant improvement in 8 of the 9 domains of the King's Health Questionnaire after the intervention. Hematuria, urinary retention and urinary infection were the complications observed in 3 out of 10 patients. The mean time to request retreatment was 465 days. Conclusion: Intravesical onabotulinum toxin type A reduced the OABSS with a long-lasting effect and improved the quality of life of HTLV-1-infected patients with severe overactive bladder. Keywords: Overactive bladder, Onabotulinum toxin, HTLV-1

  7. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until the present. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid

  8. Use of the Diabetes Prevention Trial-Type 1 Risk Score (DPTRS) for improving the accuracy of the risk classification of type 1 diabetes.

    Science.gov (United States)

    Sosenko, Jay M; Skyler, Jay S; Mahon, Jeffrey; Krischer, Jeffrey P; Greenbaum, Carla J; Rafkin, Lisa E; Beam, Craig A; Boulware, David C; Matheson, Della; Cuthbertson, David; Herold, Kevan C; Eisenbarth, George; Palmer, Jerry P

    2014-04-01

    OBJECTIVE We studied the utility of the Diabetes Prevention Trial-Type 1 Risk Score (DPTRS) for improving the accuracy of type 1 diabetes (T1D) risk classification in TrialNet Natural History Study (TNNHS) participants. RESEARCH DESIGN AND METHODS The cumulative incidence of T1D was compared between normoglycemic individuals with DPTRS values >7.00 and dysglycemic individuals in the TNNHS (n = 991). It was also compared according to DPTRS values above or below 7.00 among those with dysglycemia and those with multiple autoantibodies in the TNNHS. DPTRS values >7.00 were compared with dysglycemia for characterizing risk in Diabetes Prevention Trial-Type 1 (DPT-1) (n = 670) and TNNHS participants. The reliability of DPTRS values >7.00 was compared with dysglycemia in the TNNHS. RESULTS The cumulative incidence of T1D for normoglycemic TNNHS participants with DPTRS values >7.00 was comparable to that of those with dysglycemia. Among those with dysglycemia, the cumulative incidence was much higher for those with DPTRS values >7.00 than for those with values <7.00. Dysglycemic individuals in DPT-1 were at much higher risk for T1D than those with dysglycemia in the TNNHS, whereas risk did not differ appreciably between the studies for those with DPTRS values >7.00. The proportion in the TNNHS reverting from dysglycemia to normoglycemia at the next visit was higher than the proportion reverting from DPTRS values >7.00 to values <7.00 (36 vs. 23%). CONCLUSIONS DPTRS thresholds can improve T1D risk classification accuracy by identifying high-risk normoglycemic and low-risk dysglycemic individuals. The 7.00 DPTRS threshold characterizes risk more consistently between populations and has greater reliability than dysglycemia.

  9. Are students' impressions of improved learning through active learning methods reflected by improved test scores?

    Science.gov (United States)

    Everly, Marcee C

    2013-02-01

    To report the transformation from lecture to more active learning methods in a maternity nursing course and to evaluate whether student perception of improved learning through active-learning methods is supported by improved test scores. The process of transforming a course into an active-learning model of teaching is described. A voluntary mid-semester survey for student acceptance of the new teaching method was conducted. Course examination results, from both a standardized exam and a cumulative final exam, among students who received lecture in the classroom and students who had active learning activities in the classroom were compared. Active learning activities were very acceptable to students. The majority of students reported learning more from having active-learning activities in the classroom rather than lecture-only and this belief was supported by improved test scores. Students who had active learning activities in the classroom scored significantly higher on a standardized assessment test than students who received lecture only. The findings support the use of student reflection to evaluate the effectiveness of active-learning methods and help validate the use of student reflection of improved learning in other research projects. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Effect of quality of life improvement on type 2 diabetes patients' self-esteem.

    Science.gov (United States)

    Safavi, Mahboubeh; Samadi, Nasrin; Mahmoodi, Mahmood

    2011-09-01

    To study the effects of a quality of life (QoL) improvement programme on patients' QoL and self-esteem. This was a randomized controlled clinical trial of 123 type 2 diabetes patients admitted to the Diabetes Clinic in Imam Khomeini Hospital at Ardebil, Iran, from April 2009 to June 2010. The participants, aged 30-70 years and afflicted with type 2 diabetes, were randomly divided into 2 groups (experimental group n=61, and control group n=62). The questionnaires comprised sociodemographic status, the Farrel & Grant questionnaire, and Rosenberg's self-esteem questionnaire, and a QoL improvement plan was codified to educate and evaluate the participants. The experimental group had low self-esteem (13%) before the QoL training and moderate self-esteem after the intervention (39%), whereas the control group had moderate self-esteem (62.5%) in the pre-test and low self-esteem (12.9%) in the post-test; the difference between the pre- and post-intervention measures was significant. Improving QoL may promote self-esteem and help to reduce the side effects of the type 2 diabetes process.

  11. Research and development of improved type radioactive waste volume reduction system

    International Nuclear Information System (INIS)

    Okamoto, Masahiro; Watanabe, Yoshifumi; Yamaoka, Katsuaki; Masaki, Tetsuo; Akagawa, Yoshihiro; Murakami, Tadashi; Miyake, Takashi.

    1985-01-01

    Development and research had been conducted since 1978 on an improved type radioactive waste volume reduction system incorporating calcining and incinerating fluidized bed type furnaces. This system can dispose of concentrated liquid wastes, combustible solid wastes, spent ion exchange resins and so forth by calcination or incineration, turning them into reduced-volume products. Recently a pilot test facility has been constructed and tests have been conducted to demonstrate actual performance. Representative results of the pilot tests are reported in this paper. (author)

  12. Overview of molecular typing methods for outbreak detection and epidemiological surveillance

    NARCIS (Netherlands)

    Sabat, A. J.; Budimir, A.; Nashev, D.; Sa-Leao, R.; van Dijl, J. M.; Laurent, F.; Grundmann, H.; Friedrich, A. W.

    2013-01-01

    Typing methods for discriminating different bacterial isolates of the same species are essential epidemiological tools in infection prevention and control. Traditional typing systems based on phenotypes, such as serotype, biotype, phage-type, or antibiogram, have been used for many years. However,

  13. An Improved Pansharpening Method for Misaligned Panchromatic and Multispectral Data.

    Science.gov (United States)

    Li, Hui; Jing, Linhai; Tang, Yunwei; Ding, Haifeng

    2018-02-11

    Numerous pansharpening methods have been proposed in recent decades for fusing low-spatial-resolution multispectral (MS) images with high-spatial-resolution (HSR) panchromatic (PAN) bands to produce fused HSR MS images, which are widely used in various remote sensing tasks. The effect of misregistration between MS and PAN bands on the quality of fused products has gained much attention in recent years. An improved method for misaligned MS and PAN imagery is proposed, through two improvements made to a previously published method named RMI (reduce misalignment impact). The performance of the proposed method was assessed by comparison with some outstanding fusion methods, such as adaptive Gram-Schmidt and the generalized Laplacian pyramid. Experimental results show that the improved version can reduce spectral distortions of fused dark pixels and sharpen boundaries between different image objects, while obtaining quality indexes similar to those of the original RMI method. In addition, the proposed method was evaluated with respect to its sensitivity to misalignments between MS and PAN bands. It is certified that the proposed method is more robust to misalignments between MS and PAN bands than the other methods.

  14. Study on erbium loading method to improve reactivity coefficients for low radiotoxic spent fuel HTGR

    Energy Technology Data Exchange (ETDEWEB)

    Fukaya, Y., E-mail: fukaya.yuji@jaea.go.jp; Goto, M.; Nishihara, T.

    2015-11-15

    Highlights: • We attempted and optimized erbium loading methods to improve reactivity coefficients for the LRSF-HTGR. • We elucidated the mechanism of the improvements for each erbium loading method by using the Bondarenko approach. • We concluded that the erbium loading method of embedding into the graphite shaft is preferable. - Abstract: Erbium loading methods are investigated to improve the reactivity coefficients of the Low Radiotoxic Spent Fuel High Temperature Gas-cooled Reactor (LRSF-HTGR). Highly enriched uranium is used for fuel to reduce the generation of toxicity from uranium-238. The power coefficients are positive without the use of any additive. The erbium is therefore loaded into the core to obtain negative reactivity coefficients, owing to the large resonance peak of the neutron capture reaction of erbium-167. Several loading methods are examined to find the one suitable for the LRSF-HTGR: the erbium is mixed into a CPF fuel kernel, loaded by binary packing with fuel particles and erbium particles, and embedded into the graphite shaft deployed in the center of the fuel compact. It is found that erbium loading causes negative reactivity as moderator temperature reactivity, and, from the viewpoint of heat transfer, it should be loaded into fuel pin elements for pin-in-block type fuel. Moreover, the erbium should be incinerated slowly to obtain negative reactivity coefficients even at the End Of Cycle (EOC). A loading method that effectively causes self-shielding should be selected to avoid incineration with burn-up. The incineration mechanism is elucidated using the Bondarenko approach. As a result, it is concluded that erbium embedded into the graphite shaft is preferable for the LRSF-HTGR to ensure that the reactivity coefficients remain negative at EOC.

  15. Development and Validation of Improved Method for Fingerprint ...

    African Journals Online (AJOL)

    Purpose: To develop and validate an improved method by capillary zone electrophoresis with photodiode array detection for the fingerprint analysis of Ligusticum chuanxiong Hort. (Rhizoma Chuanxiong). Methods: The optimum high performance capillary electrophoresis (HPCE) conditions were 30 mM borax containing 5 ...

  16. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary
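
    A short sketch of the Latin Hypercube step as SciPy exposes it; the dimension and parameter bounds are placeholders for whatever ranges a GRAPE-style estimator would track.

```python
# Sketch of the Latin Hypercube Sampling step using SciPy's QMC module.
# The parameter ranges below are placeholders, not GRAPE's actual ones.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)    # 3 uncertain parameters
unit = sampler.random(n=16)                  # 16 samples in [0, 1)^3
lo, hi = [0.1, 1.0, -5.0], [2.0, 10.0, 5.0]  # assumed parameter bounds
params = qmc.scale(unit, lo, hi)             # one filter model per row
print(params.shape)                          # (16, 3)
```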

  17. An effective method to improve the robustness of small-world networks under attack

    International Nuclear Information System (INIS)

    Zhang Zheng-Zhen; Xu Wen-Jun; Lin Jia-Ru; Zeng Shang-You

    2014-01-01

    In this study, the robustness of small-world networks to three types of attack is investigated. Global efficiency is introduced as the network coefficient to measure the robustness of a small-world network. The simulation results prove that an increase in rewiring probability or average degree can enhance the robustness of the small-world network under all three types of attack. The effectiveness of simultaneously increasing both rewiring probability and average degree is also studied, and the combined increase is found to significantly improve the robustness of the small-world network. Furthermore, the combined effect of rewiring probability and average degree on network robustness is shown to be several times greater than that of rewiring probability or average degree individually. This means that small-world networks with a relatively high rewiring probability and average degree have advantages both in network communications and in good robustness to attacks. Therefore, simultaneously increasing rewiring probability and average degree is an effective method of constructing realistic networks. Consequently, the proposed method is useful to construct efficient and robust networks in a realistic scenario. (interdisciplinary physics and related areas of science and technology)
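
    A small networkx sketch of the experiment's core loop: build a Watts-Strogatz small-world graph and track global efficiency while removing the highest-degree nodes, one of the attack types such studies consider. Sizes and probabilities are illustrative.

```python
# Sketch of the core experiment: global efficiency of a Watts-Strogatz
# small-world network under a highest-degree (hub) attack.
import networkx as nx

G = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)  # rewiring probability 0.1
print("intact:", round(nx.global_efficiency(G), 3))

for step in range(1, 11):
    hub = max(G.degree, key=lambda kv: kv[1])[0]        # current highest-degree node
    G.remove_node(hub)
    print(f"after removing {step} hubs:", round(nx.global_efficiency(G), 3))
```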

  18. Do Electrochemiluminescence Assays Improve Prediction of Time to Type 1 Diabetes in Autoantibody-Positive TrialNet Subjects?

    OpenAIRE

    Fouts, Alexandra; Pyle, Laura; Yu, Liping; Miao, Dongmei; Michels, Aaron; Krischer, Jeffrey; Sosenko, Jay; Gottlieb, Peter; Steck, Andrea K.

    2016-01-01

    OBJECTIVE To explore whether electrochemiluminescence (ECL) assays can help improve prediction of time to type 1 diabetes in the TrialNet autoantibody-positive population. RESEARCH DESIGN AND METHODS TrialNet subjects who were positive for one or more autoantibodies (microinsulin autoantibody, GAD65 autoantibody [GADA], IA-2A, and ZnT8A) with available ECL-insulin autoantibody (IAA) and ECL-GADA data at their initial visit were analyzed; after a median follow-up of 24 months, 177 of these 1,2...

  19. Inverse airfoil design method for low-speed straight-bladed Darrieus-type VAWT applications

    Energy Technology Data Exchange (ETDEWEB)

    Saeed, F. [King Fahd Univ. of Petroleum and Minerals, Dhahran (Saudi Arabia); Paraschivoiu, I.; Trifu, O. [Ecole Polytechnique, Montreal, PQ (Canada); Hess, M.; Gabrys, C. [Mariah Power Inc., Reno, NV (United States)

    2008-07-01

    Inverse airfoil design of a low-speed straight-bladed Darrieus-type vertical axis wind turbine (VAWT) can help improve aerodynamic performance and power output by eliminating undesirable flow field characteristics at very low Reynolds number. This study used an interactive inverse airfoil design method (PROFOIL) that allows specification of velocity and boundary-layer characteristics over different segments of the airfoil subject to constraints on the geometry (closure) and the flow field (far field boundary). Additional constraints were also considered to address pitching moment coefficient, thickness and the power output for a given tip-speed ratio. Performance analyses of the airfoil and the VAWT were carried out using state-of-the-art analyses codes XFOIL and CARDAAV, respectively. XFOIL is a panel method with a coupled boundary-layer scheme and is used to obtain the aerodynamic characteristics of resulting airfoil shapes. The final airfoil geometry is obtained through a multi-dimensional Newton iteration. The study showed that the strength of the method lies in the inverse design methodology whereas its weaknesses is in reliably predicting aerodynamic characteristics of airfoils at low Reynolds numbers and high angles of attack. A 10-15 per cent increase in the relative performance of the VAWT was achieved with this method. Although the results of the study showed that the method has great application potential for VAWTs in general, there is much room for improvement in flow analysis capabilities for low Re flows in reliably predicting post-stall aerodynamic characteristics. In the absence of such analysis capabilities, the authors suggested that the results should be viewed qualitatively and not quantitatively. 36 refs., 1 tab., 4 figs.

  20. Improvement of open-type magnetically shielded room composed of magnetic square cylinders by controlling flux path

    International Nuclear Information System (INIS)

    Hirosato, S.; Yamazaki, K.; Tsuruta, T.; Haraguchi, Y.; Kosaka, M.; Gao, Y.; Muramatsu, K.; Kobayashi, K.

    2011-01-01

    We have developed an open-type magnetically shielded room composed of magnetic square cylinders that has been used for an actual MRI in a hospital. To improve shielding performance, we propose here a method to control the path of the magnetic flux in the wall composed of the magnetic square cylinders by spatially changing the magnetic permeability in each direction of the square cylinders. First, we discuss a method to control the magnetic permeability in each direction of the square cylinders independently by inserting slits, without changing the outside dimensions of the square cylinders, using 3-D magnetic field analysis. Then, the effectiveness of the design controlling the flux path was shown by magnetic field analysis and experiments. (author)

  1. Diabetes technology: improving care, improving patient-reported outcomes and preventing complications in young people with Type 1 diabetes.

    Science.gov (United States)

    Prahalad, P; Tanenbaum, M; Hood, K; Maahs, D M

    2018-04-01

    With the evolution of diabetes technology, those living with Type 1 diabetes are given a wider arsenal of tools with which to achieve glycaemic control and improve patient-reported outcomes. Furthermore, the use of these technologies may help reduce the risk of acute complications, such as severe hypoglycaemia and diabetic ketoacidosis, as well as long-term macro- and microvascular complications. In addition, diabetes technology can have a beneficial impact on psychosocial health by reducing the burden of diabetes. Unfortunately, diabetes goals are often unmet and people with Type 1 diabetes too frequently experience acute and long-term complications of this condition, in addition to often having less than ideal psychosocial outcomes. Increasing realization of the importance of patient-reported outcomes is leading to diabetes care delivery becoming more patient-centred. Diabetes technology in the form of medical devices, digital health and big data analytics have the potential to improve clinical care and psychosocial support, resulting in lower rates of acute and chronic complications, decreased burden of diabetes care, and improved quality of life. © 2018 Diabetes UK.

  2. Improved methods for operating public transportation services.

    Science.gov (United States)

    2013-03-01

    In this joint project, West Virginia University and the University of Maryland collaborated in developing improved methods for analyzing and managing public transportation services. Transit travel time data were collected using GPS tracking services ...

  3. STUDY ON THE CLASSIFICATION OF GAOFEN-3 POLARIMETRIC SAR IMAGES USING DEEP NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    J. Zhang

    2018-04-01

    Full Text Available The Polarimetric Synthetic Aperture Radar (POLSAR) imaging principle determines that image quality will be affected by speckle noise, so the recognition accuracy of traditional image classification methods is reduced by this interference. Since their emergence, deep convolutional neural networks have reshaped traditional image processing methods and brought the field of computer vision to a new stage, with a strong ability to learn deep features and an excellent ability to fit large datasets. Based on the basic characteristics of polarimetric SAR images, the paper studied surface cover types using deep learning. We fused fully polarimetric SAR features of different scales into RGB images, iteratively trained the GoogLeNet convolutional-neural-network model, and then used the trained model to classify the validation data. First of all, referring to the optical image, we marked the surface coverage type of the 8-m-resolution GF-3 POLSAR image, and then collected samples according to the different categories. To meet the GoogLeNet model requirement of 256 × 256 pixel image input, and taking into account the limited SAR resolution, the original image was resampled during pre-processing. In this paper, POLSAR image slice samples of different scales, with sampling intervals of 2 m and 1 m, were trained separately and validated on the verification dataset. The training accuracy of the GoogLeNet model trained with the resampled 2-m polarimetric SAR images is 94.89 %, and that with the resampled 1-m images is 92.65 %.

  4. Study on the Classification of GAOFEN-3 Polarimetric SAR Images Using Deep Neural Network

    Science.gov (United States)

    Zhang, J.; Zhang, J.; Zhao, Z.

    2018-04-01

    The Polarimetric Synthetic Aperture Radar (POLSAR) imaging principle determines that image quality will be affected by speckle noise, so the recognition accuracy of traditional image classification methods is reduced by this interference. Since their emergence, deep convolutional neural networks have reshaped traditional image processing methods and brought the field of computer vision to a new stage, with a strong ability to learn deep features and an excellent ability to fit large datasets. Based on the basic characteristics of polarimetric SAR images, the paper studied surface cover types using deep learning. We fused fully polarimetric SAR features of different scales into RGB images, iteratively trained the GoogLeNet convolutional-neural-network model, and then used the trained model to classify the validation data. First of all, referring to the optical image, we marked the surface coverage type of the 8-m-resolution GF-3 POLSAR image, and then collected samples according to the different categories. To meet the GoogLeNet model requirement of 256 × 256 pixel image input, and taking into account the limited SAR resolution, the original image was resampled during pre-processing. In this paper, POLSAR image slice samples of different scales, with sampling intervals of 2 m and 1 m, were trained separately and validated on the verification dataset. The training accuracy of the GoogLeNet model trained with the resampled 2-m polarimetric SAR images is 94.89 %, and that with the resampled 1-m images is 92.65 %.
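
    A hedged PyTorch sketch of this kind of fine-tuning setup; the class count is a placeholder, and torchvision's GoogLeNet expects 224 × 224 inputs, so the 256 × 256 slices would be resized or cropped.

```python
# Sketch of fine-tuning a GoogLeNet on RGB-composited POLSAR slices with
# torchvision. The class count is a placeholder; torchvision's GoogLeNet
# takes 224x224 inputs, so 256x256 slices would be resized or cropped.
import torch
import torch.nn as nn
from torchvision import models, transforms

num_classes = 6                                  # assumed number of cover types
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.aux_logits = False                         # ignore auxiliary heads in this sketch
model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace the classifier

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# training loop omitted: iterate batches of (slice, label), forward,
# compute criterion, backward, optimizer.step()
```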

  5. Improving glycaemic control and life skills in adolescents with type 1 diabetes: A randomised, controlled intervention study using the Guided Self-Determination-Young method in triads of adolescents, parents and health care providers integrated into routine paediatric outpatient clinics

    DEFF Research Database (Denmark)

    Husted, Gitte; Thorsteinsson, Birger; Esbensen, Bente Appel

    2011-01-01

    ABSTRACT: BACKGROUND: Adolescents with type 1 diabetes face demanding challenges due to conflicting priorities between psychosocial needs and diabetes management. This conflict often results in poor glycaemic control and discord between adolescents and parents. Adolescent-parent conflicts are thus...... are lacking. The Guided Self-Determination method is proven effective in adult care and has been adapted to adolescents and parents (Guided Self-Determination-Young (GSD-Y)) for use in paediatric diabetes outpatient clinics. Our objective is to test whether GSD-Y used in routine paediatric outpatient clinic visits will reduce haemoglobin A1c (HbA1c) concentrations and improve adolescents' life skills compared with a control group. METHODS: Using a mixed methods design comprising a randomised controlled trial and a nested qualitative evaluation, we will recruit 68 adolescents age 13 - 18 years with type 1......

  6. Topiramate improves neurovascular function, epidermal nerve fiber morphology, and metabolism in patients with type 2 diabetes mellitus

    Directory of Open Access Journals (Sweden)

    Boyd A

    2010-12-01

    Full Text Available Amanda L Boyd, Patricia M Barlow, Gary L Pittenger, Kathryn F Simmons, Aaron I Vinik; Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA, USA. Purpose: To assess the effects of topiramate on C-fiber function, nerve fiber morphology, and metabolism (including insulin sensitivity, obesity, and dyslipidemia) in type 2 diabetes. Patients and methods: We conducted an 18-week, open-label trial treating patients with topiramate. Twenty subjects with type 2 diabetes and neuropathy (61.5 ± 1.29 years; 15 male, 5 female) were enrolled and completed the trial. Neuropathy was evaluated by total neuropathy scores, nerve conduction studies, quantitative sensory tests, laser Doppler skin blood flow, and intraepidermal nerve fibers in skin biopsies. Results: Topiramate treatment improved symptoms compatible with C-fiber dysfunction. Weight, blood pressure, and hemoglobin A1c also improved. Laser Doppler skin blood flow improved significantly after 12 weeks of treatment, but returned to baseline at 18 weeks. After 18 weeks of treatment there was a significant increase in intraepidermal nerve fiber length at the forearm, thigh, and proximal leg. Intraepidermal nerve fiber density was significantly increased by topiramate in the proximal leg. Conclusion: This study is the first to demonstrate that it is possible to induce skin intraepidermal nerve fiber regeneration accompanied by enhancement of neurovascular function, translating into improved symptoms as well as sensory nerve function. The simultaneous improvement of selective metabolic indices may play a role in this effect, but this remains to be determined. Keywords: diabetic neuropathy, skin blood flow, skin biopsy, diabetes

  7. A chronicle of permutation statistical methods 1920–2000, and beyond

    CERN Document Server

    Berry, Kenneth J; Mielke Jr , Paul W

    2014-01-01

    The focus of this book is on the birth and historical development of permutation statistical methods from the early 1920s to the near present. Beginning with the seminal contributions of R.A. Fisher, E.J.G. Pitman, and others in the 1920s and 1930s, permutation statistical methods were initially introduced to validate the assumptions of classical statistical methods. Permutation methods have advantages over classical methods in that they are optimal for small data sets and non-random samples, are data-dependent, and are free of distributional assumptions. Permutation probability values may be exact, or estimated via moment- or resampling-approximation procedures. Because permutation methods are inherently computationally-intensive, the evolution of computers and computing technology that made modern permutation methods possible accompanies the historical narrative. Permutation analogs of many well-known statistical tests are presented in a historical context, including multiple correlation and regression, ana...

  8. Using Different Types of Dictionaries for Improving EFL Reading Comprehension and Vocabulary Learning

    Science.gov (United States)

    Alharbi, Majed A.

    2016-01-01

    This study investigated the effects of monolingual book dictionaries, popup dictionaries, and type-in dictionaries on improving reading comprehension and vocabulary learning in an EFL program. An experimental design involving four groups and a post-test was chosen for the experiment: (1) pop-up dictionary (experimental group 1); (2) type-in…

  9. Hypoplastic thumb type IIIB: An alternative method for surgical repair

    Directory of Open Access Journals (Sweden)

    Salih Onur Basat

    2014-08-01

    Full Text Available Hypoplastic thumb is the second most common congenital deformity of the thumb. Thumb hypoplasia is characterized by diminished thumb size, metacarpal adduction, metacarpophalangeal joint instability, and thenar muscle hypoplasia. In the literature, different classification types of hypoplastic thumb have been used and different treatment methods described. In this case we presented an alternative palliative treatment method for a ten-year-old patient with modified Blauth's classification type IIIB hypoplastic thumb and one-year follow-up results. [Hand Microsurg 2014; 3(2): 59-61]

  10. Implementation of Health Action Process Approach to Improve Dietary Adherence in Type 2 Diabetic Patient

    Directory of Open Access Journals (Sweden)

    Kusnanto Kusnanto

    2016-03-01

    Full Text Available Introduction: Type 2 diabetic patients are often unsuccessful in following dietary recommendations due to lack of motivation, memory, and intention. This study attempted to increase motivation and intention regarding dietary adherence through implementation of the Health Action Process Approach (HAPA). Method: This study was a quasi-experiment. The population comprised type 2 diabetic patients in Puskesmas Krian Sidoarjo in March-April 2015. Sixteen respondents were recruited and divided into an experiment group and a control group. The independent variable was the implementation of HAPA. The dependent variables were self-efficacy, dietary adherence, and blood sugar levels. The instruments were questionnaires and blood sugar monitoring devices. Data were analyzed using the Wilcoxon signed-rank test and the Mann-Whitney U test with significance level α ≤ 0.05. Result: The Wilcoxon signed-rank test showed significant differences between pre- and post-test on self-efficacy (p=0.014), dietary adherence (p=0.025), and blood sugar levels (p=0.009) in the experiment group, while there were no significant differences in the control group. The Mann-Whitney U test showed a significant difference in dietary adherence (p=0.002) between the two groups. Discussion: In conclusion, the implementation of HAPA can improve dietary adherence in type 2 diabetic patients. Further studies with larger numbers of respondents, covering all variables in the HAPA theory, are recommended. Keywords: Health Action Process Approach (HAPA), self-efficacy, dietary adherence, blood glucose, Diabetes Mellitus (DM)

  11. Operating method of molten carbonate type fuel cell

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, Tsuneo

    1988-12-06

    Molten carbonate type fuel cells involve a problem of anode oxidation while the unit is stopped. Although a method has been proposed wherein an inactive gas is supplied to the anode during the stoppage, market-available inactive gas contains a slight amount of oxygen, which makes it difficult to prevent deterioration of the anode. In this invention, at start-up and shutdown, outside normal operation, a protective gas mixture of an inactive gas with a small amount of hydrogen is supplied to the anode. The inactive gas is commercial-grade nitrogen, argon, or helium; hydrogen is mixed in an amount of 0.5-2.0% of the inactive gas. By this method, oxygen in the air that enters from the gas-sealed portion of the cell is reduced by the hydrogen in the protective gas and is discharged in the form of water. 2 figs.

  12. An improved dynamic test method for solar collectors

    DEFF Research Database (Denmark)

    Kong, Weiqiang; Wang, Zhifeng; Fan, Jianhua

    2012-01-01

    A comprehensive improvement of the mathematical model for the so-called transfer function method is presented in this study. The improved transfer function method can estimate the traditional solar collector parameters such as the zero-loss coefficient and the heat loss coefficient. Two new collector parameters, t and mfCf, are obtained: t is a time-scale parameter which can indicate the heat transfer ability of the solar collector, and mfCf can be used to calculate the fluid volume content of the solar collector, or to validate the regression process by comparing it to the physical fluid volume content… for the second-order differential term, with 6-9 min as the best averaging time interval. The measured and predicted collector power outputs are compared over a continuous 13-day test, both for the ITF method and the QDT method. The maximum and average errors are 53.87 W/m2 and 5.22 W/m2…

  13. [An improved low spectral distortion PCA fusion method].

    Science.gov (United States)

    Peng, Shi; Zhang, Ai-Wu; Li, Han-Lun; Hu, Shao-Xing; Meng, Xian-Gang; Sun, Wei-Dong

    2013-10-01

    Aiming at the spectral distortion produced in the PCA fusion process, the present paper proposes an improved low-spectral-distortion PCA fusion method. This method uses the NCUT (normalized cut) image segmentation algorithm to partition a complex hyperspectral remote sensing image into multiple sub-images, increasing the separability of samples and thereby weakening the spectral distortions of traditional PCA fusion. A pixel-similarity weighting matrix and masks are produced using graph theory and clustering theory. These masks are used to cut the hyperspectral image and the high-resolution image into sub-region objects. All corresponding sub-region objects of the hyperspectral image and the high-resolution image are fused using the PCA method, and all sub-regional fusion results are spliced together to produce a new image. In the experiment, Hyperion hyperspectral data and RapidEye data were used. The experimental results show that the proposed method has the same ability to enhance spatial resolution and a greater ability to improve spectral fidelity.
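
    For context, the classical PCA component-substitution step that the paper refines is sketched below: the first principal component of the low-resolution bands is replaced by the statistics-matched high-resolution image before inverting the transform. The paper applies this per sub-region object produced by the NCUT masks; this sketch applies it globally, and all names are illustrative.

        import numpy as np

        def pca_substitution_fuse(ms, pan):
            # ms: (rows, cols, bands) multispectral/hyperspectral image already
            # resampled to the panchromatic grid; pan: (rows, cols) image.
            r, c, b = ms.shape
            X = ms.reshape(-1, b).astype(float)
            mean = X.mean(axis=0)
            Xc = X - mean
            vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
            vecs = vecs[:, np.argsort(vals)[::-1]]        # PCs, descending variance
            pcs = Xc @ vecs
            # Match the pan band to the first PC's statistics, then substitute.
            p = pan.reshape(-1).astype(float)
            p = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
            pcs[:, 0] = p
            return (pcs @ vecs.T + mean).reshape(r, c, b)  # inverse PCA

        # Tiny synthetic check: a 3-band 8x8 image fused with a random pan band.
        rng = np.random.default_rng(0)
        print(pca_substitution_fuse(rng.random((8, 8, 3)), rng.random((8, 8))).shape)

    Substituting PC1 globally is what introduces the spectral distortion that the authors reduce by fusing homogeneous sub-regions separately.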

  14. An improved early detection method of type-2 diabetes mellitus using multiple classifier system

    KAUST Repository

    Zhu, Jia; Xie, Qing; Zheng, Kai

    2015-01-01

    The specific causes of complex diseases such as Type-2 Diabetes Mellitus (T2DM) have not yet been identified. Nevertheless, many medical science researchers believe that complex diseases are caused by a combination of genetic, environmental

  15. Method of operating FBR type reactors

    International Nuclear Information System (INIS)

    Arie, Kazuo.

    1984-01-01

    Purpose: To secure the controlling performance and the safety of FBR type reactors by decreasing the amount of deformation due to differential thermal expansion of a control rod guide tube. Method: The reactor is operated with reactor core fuel assemblies of the same power disposed at point-symmetrical positions relative to the axial center of the control rod assembly. This eliminates the temperature difference between opposing surfaces of the control rod guide tube, and hence the difference in thermal expansion. (Yoshino, Y.)

  16. Integration of rock typing methods for carbonate reservoir characterization

    International Nuclear Information System (INIS)

    Aliakbardoust, E; Rahimpour-Bonab, H

    2013-01-01

    Reservoir rock typing is the most important part of all reservoir modelling. For integrated reservoir rock typing, static and dynamic properties need to be combined, but sometimes these two are incompatible. The failure is due to misunderstanding of the crucial parameters that control the dynamic behaviour of the reservoir rock, and thus selecting inappropriate methods for defining static rock types. In this study, rock types were defined by combining the SCAL data with the rock properties, particularly rock fabric and pore types. First, air-displacing-water capillary pressure curves were classified because they are representative of fluid saturation and behaviour under capillary forces. Next, the most important rock properties which control the fluid flow and saturation behaviour (rock fabric and pore types) were combined with the defined classes. Corresponding petrophysical properties were also attributed to the reservoir rock types, and eventually the defined rock types were compared with relative permeability curves. This study focused on demonstrating the importance of the pore system, specifically pore types, in fluid saturation and entrapment in the reservoir rock. The most common tests in static rock typing, such as electrofacies analysis and porosity-permeability correlation, were carried out, and the results indicate that these are not appropriate approaches for reservoir rock typing in carbonate reservoirs with a complicated pore system. (paper)

  17. An introduction to Bartlett correction and bias reduction

    CERN Document Server

    Cordeiro, Gauss M

    2014-01-01

    This book presents a concise introduction to Bartlett and Bartlett-type corrections of statistical tests and bias correction of point estimators. The underlying idea behind both groups of corrections is to obtain higher accuracy in small samples. While the main focus is on corrections that can be analytically derived, the authors also present alternative strategies for improving estimators and tests based on bootstrap, a data resampling technique, and discuss concrete applications to several important statistical models.
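
    As a concrete illustration of the bootstrap-based strategy mentioned above, the sketch below bias-corrects a point estimator by subtracting the bias estimated from resamples; the estimator and data are illustrative, not taken from the book.

        import numpy as np

        def bootstrap_bias_corrected(sample, estimator, n_boot=2000, seed=0):
            # Bias-corrected estimate: theta_hat minus the bootstrap bias
            # estimate mean(theta*_b) - theta_hat.
            rng = np.random.default_rng(seed)
            theta_hat = estimator(sample)
            boots = [estimator(rng.choice(sample, size=len(sample), replace=True))
                     for _ in range(n_boot)]
            return theta_hat - (np.mean(boots) - theta_hat)

        # The maximum-likelihood variance (ddof=0) is biased downward in small
        # samples; the corrected value typically sits closer to the true variance.
        data = np.random.default_rng(1).normal(size=15)
        print(np.var(data), bootstrap_bias_corrected(data, np.var))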

  18. Improved method for calculating neoclassical transport coefficients in the banana regime

    Energy Technology Data Exchange (ETDEWEB)

    Taguchi, M., E-mail: taguchi.masayoshi@nihon-u.ac.jp [College of Industrial Technology, Nihon University, Narashino 275-8576 (Japan)

    2014-05-15

    The conventional neoclassical moment method in the banana regime is improved by increasing the accuracy of the approximation to the linearized Fokker-Planck collision operator. The improved method is formulated for a multiple-ion plasma in general tokamak equilibria. Explicit computation in a model magnetic field shows that the neoclassical transport coefficients can be accurately calculated over the full range of aspect ratio by the improved method. Some of the neoclassical transport coefficients for intermediate aspect ratios are found to deviate appreciably from those obtained by the conventional moment method; the differences between the transport coefficients from the two methods are up to about 20%.

  19. Method for identifying type I diabetes mellitus in humans

    Science.gov (United States)

    Metz, Thomas O [Kennewick, WA; Qian, Weijun [Richland, WA; Jacobs, Jon M [Pasco, WA; Smith, Richard D [Richland, WA

    2011-04-12

    A method and system for classifying subject populations utilizing predictive and diagnostic biomarkers for type I diabetes mellitus. The method includes determining the levels of a variety of markers within the serum or plasma of a target organism and correlating these levels to those of general populations, as a screen for predisposition or for progressive monitoring of disease presence.

  20. Pyroprinting: a rapid and flexible genotypic fingerprinting method for typing bacterial strains.

    Science.gov (United States)

    Black, Michael W; VanderKelen, Jennifer; Montana, Aldrin; Dekhtyar, Alexander; Neal, Emily; Goodman, Anya; Kitts, Christopher L

    2014-10-01

    Bacterial strain typing is commonly employed in studies involving epidemiology, population ecology, and microbial source tracking to identify sources of fecal contamination. Methods for differentiating strains generally use either a collection of phenotypic traits or rely on some interrogation of the bacterial genotype. This report introduces pyroprinting, a novel genotypic strain typing method that is rapid, inexpensive, and discriminating compared to the most sensitive methods already in use. Pyroprinting relies on the simultaneous pyrosequencing of polymorphic multicopy loci, such as the intergenic transcribed spacer regions of rRNA operons in bacterial genomes. Data generated by sequencing combinations of variable templates are reproducible and intrinsically digitized. The theory and development of pyroprinting in Escherichia coli, including the selection of similarity thresholds to define matches between isolates, are presented. The pyroprint-based strain differentiation limits and phylogenetic relevance compared to other typing methods are also explored. Pyroprinting is unique in its simplicity and, paradoxically, in its intrinsic complexity. This new approach serves as an excellent alternative to more cumbersome or less phylogenetically relevant strain typing methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. The gridding method for image reconstruction by Fourier transformation

    International Nuclear Information System (INIS)

    Schomberg, H.; Timmer, J.

    1995-01-01

    This paper explores a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform F. The method involves a window function w (with Fourier transform W) and proceeds in three steps. First, the convolution G = W * F is computed numerically on a Cartesian grid, using the available samples of F. Then, g = wf is computed via the inverse discrete Fourier transform, and finally f is obtained as g/w. Due to the smoothing effect of the convolution, evaluating W * F is much less error prone than merely interpolating F. The method was originally devised for image reconstruction in radio astronomy, but is actually applicable to a broad range of reconstructive imaging methods, including magnetic resonance imaging and computed tomography. In particular, it provides a fast and accurate alternative to the filtered backprojection. The basic method has several variants with other applications, such as the equidistant resampling of arbitrarily sampled signals or the fast computation of the Radon (Hough) transform
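
    The three steps translate into code almost directly. Below is a minimal one-dimensional NumPy sketch assuming uniformly weighted samples, a truncated Gaussian window, and no grid oversampling; production implementations add density-compensation weights and an oversampled grid to keep the final division well conditioned. All names and parameter values are illustrative.

        import numpy as np

        def gridding_reconstruct(freqs, samples, M, L=4, sigma=1.0):
            # Reconstruct a length-M signal from Fourier samples taken at
            # arbitrary (possibly non-integer) frequencies in [0, M).
            w_hat = lambda t: np.exp(-0.5 * (t / sigma) ** 2)   # window transform
            g_hat = np.zeros(M, dtype=complex)
            for kj, Fj in zip(freqs, samples):
                base = int(np.round(kj))
                for k in range(base - L, base + L + 1):         # step 1: convolve
                    g_hat[k % M] += Fj * w_hat(k - kj)          # onto the grid
            g = np.fft.ifft(g_hat)                              # step 2: g = w*f
            k = np.arange(M)
            k[k > M // 2] -= M
            w = np.fft.ifft(w_hat(k.astype(float))) * M         # window in signal space
            return g / w                                        # step 3: deapodization

        # Sanity check in the uniformly sampled case, where the method should
        # agree with the plain inverse DFT up to small window-truncation error.
        M = 64
        f = np.random.default_rng(0).normal(size=M)
        f_rec = gridding_reconstruct(np.arange(M).astype(float), np.fft.fft(f), M)
        print(np.max(np.abs(f_rec.real - f)))                   # small residual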

  2. Starter culture development for improving the flavour of Proosdij-type cheese

    NARCIS (Netherlands)

    Ayad, E.H.E.; Verheul, A.; Bruinenberg, P.; Wouters, J.T.M.; Smit, G.

    2003-01-01

    The use of the additional mesophilic strain B851, which has specific flavour forming abilities, was tested for improving the flavour development of a Proosdij-type cheese made with a combination of an acidifying mesophilic and an adjunct thermophilic culture. This strain was selected because of its

  3. Improved Riccati Transfer Matrix Method for Free Vibration of Non-Cylindrical Helical Springs Including Warping

    Directory of Open Access Journals (Sweden)

    A.M. Yu

    2012-01-01

    Full Text Available Free vibration equations for non-cylindrical (conical, barrel, and hyperboloidal) helical springs with noncircular cross-sections, which consist of 14 first-order ordinary differential equations with variable coefficients, are theoretically derived using spatially curved beam theory. In the formulation, the effect of warping upon natural frequencies and vibrating mode shapes is studied for the first time, in addition to the influences of rotary inertia and of shear and axial deformation. The natural frequencies of the springs are determined by the improved Riccati transfer matrix method. The element transfer matrix used in the solution is calculated using the scaling and squaring method with Padé approximations. Three examples are presented for three types of springs with different cross-sectional shapes under clamped-clamped boundary conditions. The accuracy of the proposed method has been verified against FEM results using three-dimensional solid elements (Solid 45) in the ANSYS code. Numerical results reveal that the warping effect is more pronounced for non-cylindrical helical springs than for cylindrical helical springs, and should be taken into consideration in the free vibration analysis of such springs.
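
    The element transfer matrix above is a matrix exponential, and scaling and squaring with a diagonal Padé approximant is the standard way to evaluate it (library routines such as scipy.linalg.expm use a refined version of the same scheme). A self-contained NumPy sketch with a fixed degree-6 approximant, checked against an analytically known exponential:

        import numpy as np

        def expm_pade(A):
            # exp(A) via scaling and squaring with the degree-6 diagonal
            # Pade approximant R(x) = p(x)/p(-x).
            A = np.asarray(A, dtype=float)
            n = A.shape[0]
            norm = np.linalg.norm(A, 1)
            s = max(0, int(np.ceil(np.log2(norm))) + 1) if norm > 0 else 0
            As = A / 2.0 ** s                        # scaling: make ||As|| small
            c, ck = [1.0], 1.0
            for k in range(1, 7):                    # Pade coefficients c_k
                ck *= (7.0 - k) / (k * (13.0 - k))
                c.append(ck)
            P = np.zeros((n, n)); Q = np.zeros((n, n)); Ak = np.eye(n)
            for k in range(7):
                P += c[k] * Ak
                Q += c[k] * (-1) ** k * Ak
                Ak = Ak @ As
            E = np.linalg.solve(Q, P)                # rational approximant of exp(As)
            for _ in range(s):
                E = E @ E                            # undo scaling by squaring
            return E

        A = np.array([[0.0, 1.0], [-4.0, 0.0]])      # A @ A = -4 I
        expected = np.array([[np.cos(2.0), np.sin(2.0) / 2.0],
                             [-2.0 * np.sin(2.0), np.cos(2.0)]])
        print(np.allclose(expm_pade(A), expected))   # True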

  4. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    Science.gov (United States)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving linear systems with Z-matrices. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
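
    A minimal sketch of the iteration family being compared (omega = 1 gives the Gauss-Seidel-type method); the preconditioning studied in the paper is omitted, and the small diagonally dominant Z-matrix system is illustrative.

        import numpy as np

        def sor_solve(A, b, omega=1.0, tol=1e-10, max_iter=10_000):
            # SOR sweep; omega = 1 reduces to Gauss-Seidel.
            n = len(b)
            x = np.zeros(n)
            for it in range(max_iter):
                x_old = x.copy()
                for i in range(n):
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
                if np.linalg.norm(x - x_old, np.inf) < tol:
                    return x, it + 1
            return x, max_iter

        # A small Z-matrix (non-positive off-diagonal entries), diagonally dominant.
        A = np.array([[4.0, -1.0, -1.0],
                      [-1.0, 4.0, -1.0],
                      [-1.0, -1.0, 4.0]])
        b = np.array([2.0, 6.0, 2.0])
        for omega in (1.0, 0.9):
            x, its = sor_solve(A, b, omega)
            print(omega, its, x)                     # iteration counts differ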

  5. Improving efficiency of two-type maximum power point tracking methods of tip-speed ratio and optimum torque in wind turbine system using a quantum neural network

    International Nuclear Information System (INIS)

    Ganjefar, Soheil; Ghassemi, Ali Akbar; Ahmadi, Mohamad Mehdi

    2014-01-01

    In this paper, a quantum neural network (QNN) is used as the controller in adaptive control structures to improve the efficiency of maximum power point tracking (MPPT) methods in a wind turbine system. For this purpose, direct and indirect adaptive control structures equipped with a QNN are applied to the tip-speed ratio (TSR) and optimum torque (OT) MPPT methods. The proposed control schemes are evaluated on a battery-charging windmill system equipped with a PMSG (permanent magnet synchronous generator) at random wind speeds, to demonstrate their effectiveness compared with a PID controller and a conventional neural network controller (CNNC). - Highlights: • Using a new control method to harvest the maximum power from a wind energy system. • Using an adaptive control scheme based on a quantum neural network (QNN). • Improving the MPPT-TSR method by a direct adaptive control scheme based on a QNN. • Improving the MPPT-OT method by an indirect adaptive control scheme based on a QNN. • Using a windmill system based on a PMSG to evaluate the proposed control schemes
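
    For reference, the TSR method that the QNN controllers improve keeps the tip-speed ratio at its optimum by tracking the rotor-speed reference omega_ref = lambda_opt * v / R. Below is a minimal simulation sketch with a plain PI loop standing in for the paper's adaptive controllers; all plant parameters are hypothetical.

        import numpy as np

        # Tip-speed-ratio MPPT: hold lambda = omega * R / v at lambda_opt by
        # tracking omega_ref = lambda_opt * v / R (all values hypothetical).
        lambda_opt, R, J = 8.1, 1.2, 0.5      # optimal TSR, blade radius, inertia
        kp, ki, dt = 4.0, 2.0, 1e-3
        omega, integ = 2.0, 0.0
        for step in range(5000):
            v = 7.0 + 0.5 * np.sin(2 * np.pi * step * dt / 5.0)  # wind speed (m/s)
            omega_ref = lambda_opt * v / R
            err = omega_ref - omega
            integ += err * dt
            torque = kp * err + ki * integ     # PI law (the QNN's role in the paper)
            omega += dt * torque / J           # simplified drivetrain dynamics
        print(round(omega, 2), round(lambda_opt * v / R, 2))     # tracks the reference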

  6. Improved Stereo Matching With Boosting Method

    Directory of Open Access Journals (Sweden)

    Shiny B

    2015-06-01

    Full Text Available This paper presents an approach based on classification for improving the accuracy of stereo matching methods, proposed for occlusion handling. The work employs classification of pixels for finding erroneous disparity values. Due to the wide applications of disparity maps in 3D television, medical imaging, etc., the accuracy of the disparity map is of high significance. An initial disparity map is obtained using local or global stereo matching methods from the input stereo image pair. Various features for classification are computed from the input stereo image pair and the obtained disparity map. The computed feature vector is then used for classification of pixels, with GentleBoost as the classification method. The erroneous disparity values in the disparity map found by classification are corrected through a completion or filling stage. A performance evaluation of stereo matching using AdaBoostM1, RUSBoost, neural networks, and GentleBoost is performed.

  7. Improved verification methods for OVI security ink

    Science.gov (United States)

    Coombs, Paul G.; Markantes, Tom

    2000-04-01

    Together, OVP Security Pigment and OVI Security Ink provide an excellent method of overt banknote protection. The effective use of an overt security feature requires an educated public. The rapid rise in computer-generated counterfeits indicates that consumers are not as educated as to banknote security features as they should be. To counter the education issue, new methodologies have been developed to improve the validation of banknotes using the OVI ink feature itself. One of the new methods takes advantage of the overt nature of the product's optically variable effect. Another method utilizes the unique optical interference characteristics provided by the OVP platelets.

  8. Methods of improvement in hardness of composite surface layer on cast steel

    Directory of Open Access Journals (Sweden)

    J. Szajnar

    2008-08-01

    Full Text Available The paper presents a method of improving the usable properties of the surface layer of cast carbon steel 200-450 by applying, directly in the founding process, a composite surface layer based on a Fe-Cr-C alloy and then remelting it with the TIG (Tungsten Inert Gas) welding technology. The composite surface layer technology mainly guarantees an increase in the hardness and abrasive wear resistance of cast steel castings used for machine elements. This technology can compete with commonly applied welding technologies (surfacing by welding and thermal spraying). Moreover, the results of the studies show that it is possible to combine both methods, founding and welding, for surface hardening of cast steel castings. Within the experimental plan, test castings with a composite surface layer were made and then remelted with energies of 0.8 and 1.6 kJ/cm. The usability of the test castings for industrial applications was estimated by the criteria of hardness and abrasive wear resistance of the metal-mineral type.

  9. Improved stochastic approximation methods for discretized parabolic partial differential equations

    Science.gov (United States)

    Guiaş, Flavius

    2016-12-01

    We present improvements of the stochastic direct simulation method, a known numerical scheme based on Markov jump processes which is used for approximating solutions of ordinary differential equations. This scheme is suited especially for spatial discretizations of evolution partial differential equations (PDEs). By exploiting the full path simulation of the stochastic method, we use this first approximation as a predictor and construct improved approximations by Picard iterations, Runge-Kutta steps, or a combination of both. As a consequence, the order of convergence is increased. We illustrate the features of the improved method on a standard benchmark problem, a reaction-diffusion equation modeling a combustion process in one space dimension (1D) and two space dimensions (2D).
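
    The predictor-corrector idea is easiest to see in a deterministic miniature. In the sketch below, a plain Euler predictor stands in for the stochastic direct-simulation path, and one trapezoidal (Heun-type Runge-Kutta) correction re-uses the predicted values, raising the order of convergence from one to two in the same spirit as the improvements described above. All names and the test equation are illustrative.

        import numpy as np

        def euler_and_heun(f, x0, t):
            # Predictor: explicit Euler (playing the role the stochastic path
            # plays in the paper). Corrector: trapezoidal (Heun) step that
            # re-uses the predicted value.
            h = np.diff(t)
            eul = np.zeros_like(t); heu = np.zeros_like(t)
            eul[0] = heu[0] = x0
            for k, hk in enumerate(h):
                eul[k + 1] = eul[k] + hk * f(eul[k])
                pred = heu[k] + hk * f(heu[k])                        # predictor
                heu[k + 1] = heu[k] + 0.5 * hk * (f(heu[k]) + f(pred))  # corrector
            return eul, heu

        f = lambda x: -x                    # dx/dt = -x, exact solution exp(-t)
        t = np.linspace(0.0, 2.0, 21)
        eul, heu = euler_and_heun(f, 1.0, t)
        exact = np.exp(-t)
        print(np.abs(eul - exact).max(), np.abs(heu - exact).max())  # corrector wins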

  10. Collagen Type I Improves the Differentiation of Human Embryonic Stem Cells towards Definitive Endoderm

    DEFF Research Database (Denmark)

    Rasmussen, Camilla Holzmann; Petersen, Dorthe Roenn; Møller, Jonas Bech

    2015-01-01

    Human embryonic stem cells have the ability to generate all cell types in the body and can potentially provide an unlimited source of cells for cell replacement therapy to treat degenerative diseases such as diabetes. Current differentiation protocols of human embryonic stem cells towards insulin… and consistent differentiation of stem cells to definitive endoderm. The results shed light on the importance of extracellular matrix proteins for differentiation, and also point to a cost-effective and easy method to improve differentiation… embryonic stem cells to the definitive endoderm lineage. The percentage of definitive endoderm cells after differentiation on collagen I and fibronectin was >85% and 65%, respectively. The cells on collagen I substrates displayed different morphology and gene expression during differentiation as assessed…

  11. A study for estimate of contamination source with numerical simulation method in the turbulent type clean room

    International Nuclear Information System (INIS)

    Han, Sang Mok; Hwang, Young Kyu; Kim, Dong Kwon

    2015-01-01

    Contamination in a clean room may appear even more complicated owing to the effects of complicated manufacturing processes and indoor equipment. For this reason, detailed information about the concentration of pollutant particles in the clean room is needed to control the level of contamination economically and efficiently, without disturbing the manufacturing process. The allocation method has been developed as one of the main ideas for fulfilling this contamination-control function. Using this method, weighting factors can be predicted based on the cleanliness at sampling spots and on values obtained from numerical analysis; each weighting factor indicates how strongly a given contaminant source influences the concentration of pollutant in the clean room. In this paper, we propose a zoning method to accelerate the calculation when the allocation method is applied, and apply it to the actual improvement of cleanliness in a turbulent-type clean room. As a result, we could quantitatively estimate the amount of contamination generated from the pollution sources, and experiments proved that it is possible to improve the level of cleanliness of clean rooms by using these results.

  12. Advances in Blood Typing.

    Science.gov (United States)

    Quraishy, N; Sapatnekar, S

    The clinical importance of blood group antigens relates to their ability to evoke immune antibodies that are capable of causing hemolysis. The most important antigens for safe transfusion are ABO and D (Rh), and typing for these antigens is routinely performed for patients awaiting transfusion, prenatal patients, and blood donors. Typing for other blood group antigens, typically of the Kell, Duffy, Kidd, and MNS blood groups, is sometimes necessary, for patients who have, or are likely to develop antibodies to these antigens. The most commonly used typing method is serological typing, based on hemagglutination reactions against specific antisera. This method is generally reliable and practical for routine use, but it has certain drawbacks. In recent years, molecular typing has emerged as an alternative or supplemental typing method. It is based on detecting the polymorphisms and mutations that control the expression of blood group antigens, and using this information to predict the probable antigen type. Molecular typing methods are useful when traditional serological typing methods cannot be used, as when a patient has been transfused and the sample is contaminated with red blood cells from the transfused blood component. Moreover, molecular typing methods can precisely identify clinically significant variant antigens that cannot be distinguished by serological typing; this capability has been exploited for the resolution of typing discrepancies and shows promise for the improved transfusion management of patients with sickle cell anemia. Despite its advantages, molecular typing has certain limitations, and it should be used in conjunction with serological methods. © 2016 Elsevier Inc. All rights reserved.

  13. Analytical methods applied to diverse types of Brazilian propolis

    Directory of Open Access Journals (Sweden)

    Marcucci Maria

    2011-06-01

    Full Text Available Propolis is a bee product, composed mainly of plant resins and beeswax; therefore its chemical composition varies with the geographic and plant origins of these resins, as well as with the species of bee. Brazil is an important supplier of propolis on the world market and, although green colored propolis from the southeast is the most known and studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extractive procedures employed further affect its composition. Methods used for extraction; analysis of the percentages of resins, wax, and insoluble material in crude propolis; and determination of phenolic, flavonoid, amino acid, and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification, and quantification of Brazilian propolis components, and their relative strengths, are discussed, as well as direct insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune stimulating, healing, anti-tumor, anti-inflammatory, antioxidant, and analgesic activities of diverse types of Brazilian propolis have been evaluated. The most common methods employed, and overviews of their relative results, are presented.

  14. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

    Full Text Available Source number estimation methods for single channel signals are investigated and improvements for each method are suggested in this work. Firstly, the single channel data is converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), obtains superior performance to GDE at low SNR, but it cannot handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. In order to resolve these problems and contradictions, this work makes notable improvements to the two methods. A diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is used to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of the original methods is greatly improved.
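
    For orientation, below is a sketch of the baseline MDL estimator applied to delay-embedded single-channel data, with the diagonal-loading idea included as a simple trace-scaled identity term; the jackknife covariance refinement is omitted and all signal parameters are illustrative.

        import numpy as np

        def mdl_source_number(X, loading=0.0):
            # X: (p channels, N snapshots), e.g. delay embeddings of a single
            # channel signal. Returns the MDL estimate of the signal subspace
            # dimension (argmin over candidate source counts k).
            p, N = X.shape
            R = X @ X.conj().T / N
            R += loading * np.trace(R).real / p * np.eye(p)   # diagonal loading
            ev = np.sort(np.linalg.eigvalsh(R))[::-1]          # descending
            mdl = []
            for k in range(p):
                tail = ev[k:]
                geo = np.exp(np.mean(np.log(tail)))
                arith = np.mean(tail)
                mdl.append(-N * (p - k) * np.log(geo / arith)
                           + 0.5 * k * (2 * p - k) * np.log(N))
            return int(np.argmin(mdl))

        # Delay-embed a single-channel mixture of two sinusoids in white noise.
        rng = np.random.default_rng(0)
        n = 2000; t = np.arange(n)
        s = np.sin(0.2 * t) + np.sin(0.57 * t + 1.0) + 0.1 * rng.standard_normal(n)
        p = 8
        X = np.stack([s[i:n - p + i] for i in range(p)])       # (p, N) delay matrix
        # Each real tone spans two dimensions, so the expected estimate is 4.
        print(mdl_source_number(X, loading=1e-3))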

  15. Application of improved topsis method to comprehensive assessment of radiological environmental quality

    International Nuclear Information System (INIS)

    Shi Dongsheng; Di Yuming; Zhou Chunlin

    2007-01-01

    TOPSIS is a method for multiobjective decision-making which can be applied to comprehensive assessment of radiological environmental quality. This paper introduces the principle of the TOPSIS method, sets up a model of an improved TOPSIS method, and discusses the application of the improved TOPSIS method to comprehensive assessment of radiological environmental quality. The method makes full use of the information in the optimal matrix. Analysis of practical examples using a MATLAB program shows that comprehensively assessing radiological environmental quality by the improved TOPSIS method is objectively reasonable and feasible. The paper also provides the resulting optimum number of sites and compares it with the optimal index method based on the TOPSIS method and with the traditional method. (authors)
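
    The classical TOPSIS computation underlying the assessment fits in a few lines: normalize the decision matrix, weight it, and rank alternatives by their relative closeness to the ideal and anti-ideal solutions. The sketch below uses hypothetical monitoring-site data; the paper's improvement based on the optimal matrix is not reproduced.

        import numpy as np

        def topsis(X, weights, benefit):
            # X: (alternatives, criteria) decision matrix; benefit[j] is True
            # if criterion j is better when larger. Returns closeness scores.
            V = X / np.linalg.norm(X, axis=0) * weights    # normalize and weight
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - worst, axis=1)
            return d_neg / (d_pos + d_neg)                 # larger = closer to ideal

        # Hypothetical sites scored on three radiological indicators
        # (dose rate and two activity concentrations; all lower-is-better).
        X = np.array([[0.11, 3.2, 1.8],
                      [0.09, 4.1, 2.2],
                      [0.15, 2.7, 1.5]])
        scores = topsis(X, weights=np.array([0.5, 0.3, 0.2]),
                        benefit=np.array([False, False, False]))
        print(scores, scores.argsort()[::-1])              # ranking, best first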

  16. Solutions of interval type-2 fuzzy polynomials using a new ranking method

    Science.gov (United States)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani

    2015-10-01

    A few years ago, a ranking method was introduced for fuzzy polynomial equations. The idea of the ranking method is to find the actual roots of fuzzy polynomials (if they exist): fuzzy polynomials are transformed into systems of crisp polynomials by a ranking method based on three parameters, namely Value, Ambiguity, and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering interval type-2 fuzzy polynomial equations, dual fuzzy polynomial equations, and systems of fuzzy polynomials. The efficiency of the new ranking method is then numerically examined for triangular and trapezoidal fuzzy numbers. The approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots of interval type-2 fuzzy polynomials.

  17. Is type-D personality trait(s) or state? An examination of type-D temporal stability in older Israeli adults in the community

    Directory of Open Access Journals (Sweden)

    Ada H. Zohar

    2016-02-01

    Full Text Available Background. Type D personality has been suggested as a marker of poorer prognosis for patients with cardiovascular disease. It is defined by a score of 10 or more on both sub-scales of the DS14 questionnaire, Social Inhibition (SI) and Negative Affectivity (NA). As Type D was designed to predict risk, its temporal stability is of prime importance. Methods. Participants in the current study were 285 community volunteers who completed the DS14, and other personality scales, at a mean interval of six years. Results. The prevalence of Type D did not change. The component traits of Type D showed rank-order stability. The temporal stability of Type D caseness was improved by using the sub-scale product as a criterion. Logistic hierarchical regression predicting Type D classification from Time 1 demonstrated that the best predictors were Time 1 scores on NA and SI, with the character trait of Cooperation and the alexithymia score adding some predictive power. Conclusions. The temporal stability of the component traits and of the prevalence of Type D was excellent. The temporal stability of Type D caseness may be improved by using a product threshold rather than the current rule. Research is required to formulate the optimal timing of Type D measurement for predictive purposes.

  18. Iterative Runge–Kutta-type methods for nonlinear ill-posed problems

    International Nuclear Information System (INIS)

    Böckmann, C; Pornsawad, P

    2008-01-01

    We present a regularization method for solving nonlinear ill-posed problems by applying the family of Runge–Kutta methods to an initial value problem, in particular, to the asymptotical regularization method. We prove that the developed iterative regularization method converges to a solution under certain conditions and with a general stopping rule. Some particular iterative regularization methods are numerically implemented. Numerical results of the examples show that the developed Runge–Kutta-type regularization methods yield stable solutions and that particular implicit methods are very efficient in saving iteration steps

  19. The improved quasi-static method vs the direct method: a case study for CANDU reactor transients

    International Nuclear Information System (INIS)

    Kaveh, S.; Koclas, J.; Roy, R.

    1999-01-01

    Among the large number of methods for the transient analysis of nuclear reactors, the improved quasi-static procedure is one of the most widely used. In recent years, substantial increases in both computer speed and memory have motivated a rethinking of the limitations of this method. The overall goal of the present work is a systematic comparison between the improved quasi-static method and the direct method (mesh-centered finite differences) for realistic CANDU transient simulations. The emphasis is on the accuracy of the solutions as opposed to the computational speed. Using the computer code NDF, a typical realistic transient of a CANDU reactor has been analyzed: the response of the reactor regulating system to a substantial local perturbation (sudden extraction of the five adjuster rods) has been simulated. It is shown that when updating the detector responses is of major importance, it is better to use a well-optimized direct method rather than the improved quasi-static method. (author)

  20. Improvement of gas entrainment prediction method. Introduction of surface tension effect

    International Nuclear Information System (INIS)

    Ito, Kei; Sakai, Takaaki; Ohshima, Hiroyuki; Uchibori, Akihiro; Eguchi, Yuzuru; Monji, Hideaki; Xu, Yongze

    2010-01-01

    A gas entrainment (GE) prediction method has been developed to establish design criteria for the large-scale sodium-cooled fast reactor (JSFR) systems. The prototype of the GE prediction method was already confirmed to give reasonable gas core lengths by simple calculation procedures. However, for simplification, the surface tension effects were neglected. In this paper, the evaluation accuracy of gas core lengths is improved by introducing the surface tension effects into the prototype GE prediction method. First, the mechanical balance between gravitational, centrifugal, and surface tension forces is considered. Then, the shape of a gas core tip is approximated by a quadratic function. Finally, using the approximated gas core shape, the authors determine the gas core length satisfying the mechanical balance. This improved GE prediction method is validated by analyzing the gas core lengths observed in simple experiments. Results show that the analytical gas core lengths calculated by the improved GE prediction method become shorter in comparison to the prototype GE prediction method, and are in good agreement with the experimental data. In addition, the experimental data under different temperature and surfactant concentration conditions are reproduced by the improved GE prediction method. (author)

  1. Seismic verification methods for structures and equipment of VVER-type and RBMK-type NPPs (summary of experiences)

    International Nuclear Information System (INIS)

    Masopust, R.

    2003-01-01

    The main verification methods for structures and equipment of already existing VVER-type and RBMK-type NPPs are briefly described. The following aspects are discussed: fundamental seismic safety assessment principles for VVER/RBMK-type NPPs (seismic safety assessment procedure, typical work plan for seismic safety assessment of existing NPPs, SMA (HCLPF) calculations, the modified GIP (GIP-VVER) procedure, similarity of VVER/RBMK equipment to that included in the SQUG databases, and seismic interactions).

  2. Improved Method for PD-Quantification in Power Cables

    DEFF Research Database (Denmark)

    Holbøll, Joachim T.; Villefrance, Rasmus; Henriksen, Mogens

    1999-01-01

    In this paper, a method is described for improved quantification of partial discharges (PD) in power cables. The method is suitable for PD detection and location systems in the MHz range, where pulse attenuation and distortion along the cable cannot be neglected. The system transfer function was calculated and measured in order to form the basis for magnitude calculation after each measurement. --- Limitations and capabilities of the method are discussed and related to relevant field applications of high-frequency PD measurements. --- Methods for increased signal-to-noise ratio are easily implemented…

  3. Improvement of organic compounds labelling method with the use of thermally activated tritium gas

    International Nuclear Information System (INIS)

    Nejman, L.A.; Smolyakov, V.S.; Antropova, L.P.

    1982-01-01

    Use of a support (various types of paper) is recommended for labelling organic compounds with tritium gas activated at a hot tungsten filament. This improvement increases chemical and radiochemical yields and makes the experiment simpler and faster. Generally labelled triethyloxonium tetrafluoroborate, ethyl p-aminobenzoate, p-aminobenzoic acid (Na salt), A-factor (a natural regulator of streptomycin biosynthesis), the decapeptide angiotensin I, the phospholipid 1,2-dimyristoyl-sn-glycero-3-phosphocholine, and E. coli tRNAs have been prepared by this method. The molar radioactivity of the labelled compounds is in the range of 1-200 GBq/mmol

  4. Design and implementation of new methods for constructing designs of numerical experiments for learning non-linear models

    Energy Technology Data Exchange (ETDEWEB)

    Gazut, St

    2007-03-15

    This thesis addresses the problem of the construction of surrogate models in numerical simulation. Whenever numerical experiments are costly, the simulation model is complex and difficult to use. It is then important to select the numerical experiments as efficiently as possible in order to minimize their number. In statistics, the selection of experiments is known as optimal experimental design. In the context of numerical simulation, where no measurement uncertainty is present, we describe an alternative approach based on statistical learning theory and resampling techniques. The surrogate models are constructed using neural networks, and the generalization error is estimated by leave-one-out, cross-validation, and bootstrap. It is shown that the bootstrap can control over-fitting and extends the concept of leverage to surrogate models that are non-linear in their parameters. The thesis describes an iterative method called LDR, for Learner Disagreement from experiment Re-sampling, based on active learning using several surrogate models constructed on bootstrap samples; the method consists in adding new experiments where the predictors constructed from the bootstrap samples disagree most. (author)
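
    The selection rule at the heart of LDR can be sketched compactly: fit a committee of surrogate models on bootstrap resamples of the experiments run so far, then place the next numerical experiment where the committee disagrees most. Here a polynomial surrogate and a mocked simulator stand in for the neural networks and the costly simulation code used in the thesis; all names and values are illustrative.

        import numpy as np

        def ldr_select(candidates, X, y, n_models=20, degree=3, rng=None):
            # Fit surrogates on bootstrap resamples of the runs so far and
            # return the candidate input with the largest committee variance.
            rng = np.random.default_rng(rng)
            preds = []
            for _ in range(n_models):
                idx = rng.integers(0, len(X), len(X))       # bootstrap resample
                coef = np.polyfit(X[idx], y[idx], degree)   # stand-in surrogate
                preds.append(np.polyval(coef, candidates))
            disagreement = np.var(preds, axis=0)
            return candidates[np.argmax(disagreement)]

        simulator = lambda x: np.sin(3 * x) / (1 + x ** 2)  # costly code, mocked
        rng = np.random.default_rng(42)
        X = rng.uniform(-2, 2, 5); y = simulator(X)
        grid = np.linspace(-2, 2, 401)
        for _ in range(5):                                   # sequential design
            x_new = ldr_select(grid, X, y, rng=rng)
            X, y = np.append(X, x_new), np.append(y, simulator(x_new))
        print(np.sort(X))                                    # chosen design points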

  5. Evaluation of simplified DNA extraction methods for emm typing of group A streptococci

    Directory of Open Access Journals (Sweden)

    Jose JJM

    2006-01-01

    Full Text Available Simplified methods of DNA extraction for amplification and sequencing for emm typing of group A streptococci (GAS) can save valuable time and cost in resource-limited settings. To evaluate this, we compared two methods of DNA extraction directly from colonies with the standard CDC cell lysate method for emm typing of 50 GAS strains isolated from children with pharyngitis and impetigo. For this, GAS colonies were transferred into two sets of PCR tubes. One set was preheated at 94°C for two minutes in the thermal cycler and cooled, while the other set was frozen overnight at -20°C and then thawed before adding the PCR mix. For the cell lysate method, cells were treated with mutanolysin and hyaluronidase before heating at 100°C for 10 minutes and cooling immediately, as recommended in the CDC method. All 50 strains could be typed by sequencing the hypervariable region of the emm gene after amplification. The quality of the sequences and the emm types identified were also identical. Our study shows that the two simplified DNA extraction methods directly from colonies can conveniently be used for typing a large number of GAS strains easily in a relatively short time.

  6. Recent improvements in check valve monitoring methods

    International Nuclear Information System (INIS)

    Haynes, H.D.

    1991-01-01

    In support of the NRC Nuclear Plant Aging Research (NPAR) program, ORNL has carried out an evaluation of three check valve monitoring methods: acoustic emission, ultrasonic inspection, and magnetic flux signature analysis (MFSA). This work has focussed on determining the capabilities of each method to provide diagnostic information useful in determining check valve aging and service wear effects (degradation) and undesirable operating modes. In addition, as part of the ORNL Advanced Diagnostic Engineering Research and Development Center (ADEC), two novel nonintrusive monitoring methods were developed (external ac- and dc-magnetic monitoring) that provide several improvements over the other methods. None of the examined methods could, by themselves, monitor the instantaneous position and motion of check valve internals and valve leakage; however, the combination of acoustic emission monitoring with one of the other methods provides the means to determine vital check valve operational information. This paper describes the benefits and limitations associated with each method and includes recent laboratory and field test data to illustrate the capabilities of these methods to detect simulated check valve degradation. 3 refs., 22 figs., 4 tabs

  7. Recent improvements in check valve monitoring methods

    International Nuclear Information System (INIS)

    Haynes, H.D.

    1990-01-01

    In support of the NRC Nuclear Plant Aging Research (NPAR) program, ORNL has carried out an evaluation of three check valve monitoring methods: acoustic emission, ultrasonic inspection, and magnetic flux signature analysis (MFSA). This work has focused on determining the capabilities of each method to provide diagnostic information useful in determining check valve aging and service wear effects (degradation) and undesirable operating modes. In addition, as part of the ORNL Advanced Diagnostic Engineering Research and Development Center (ADEC), two novel nonintrusive monitoring methods were developed (external ac- and dc-magnetic monitoring) that provide several improvements over the other methods. None of the examined methods could, by themselves, monitor the instantaneous position and motion of check valve internals and valve leakage; however, the combination of acoustic emission monitoring with one of the other methods provides the means to determine vital check valve operational information. This paper describes the benefits and limitations associated with each method and includes recent laboratory and field test data to illustrate the capabilities of these methods to detect simulated check valve degradation. 3 refs., 22 figs., 4 tabs

  8. Deep Learning Methods for Improved Decoding of Linear Codes

    Science.gov (United States)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly less parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.

  9. Fast and robust methods for full genome sequencing of Porcine Reproductive and Respiratory Syndrome Virus (PRRSV) Type 1 and Type 2

    DEFF Research Database (Denmark)

    Kvisgaard, Lise Kirstine; Hjulsager, Charlotte Kristiane; Fahnøe, Ulrik

    In the present study, fast and robust methods for long-range RT-PCR amplification and subsequent next generation sequencing (NGS) of PRRSV Type 1 and Type 2 viruses were developed and validated on nine Type 1 and nine Type 2 PRRSV viruses. The methods were shown to generate robust and reliable sequences, both on primary material and on cell-culture-adapted viruses, and the protocols were shown to perform well on all three NGS platforms tested (Roche 454 FLX, Illumina HiSeq 2000, and Ion Torrent PGM™ Sequencer). To complete the sequences at the 5' end, 5' Rapid Amplification of cDNA Ends (5' RACE) was conducted, followed by cycle sequencing of clones. The genome lengths were determined to be 14,876-15,098 and 15,342-15,408 nucleotides for the Type 1 and Type 2 strains, respectively. These methods will greatly facilitate the generation of more complete PRRSV genome sequences globally, which in turn may lead

  10. Assimilation of Ocean-Color Plankton Functional Types to Improve Marine Ecosystem Simulations

    Science.gov (United States)

    Ciavatta, S.; Brewin, R. J. W.; Skákala, J.; Polimene, L.; de Mora, L.; Artioli, Y.; Allen, J. I.

    2018-02-01

    We assimilated phytoplankton functional types (PFTs) derived from ocean color into a marine ecosystem model, to improve the simulation of biogeochemical indicators and emerging properties in a shelf sea. Error-characterized chlorophyll concentrations of four PFTs (diatoms, dinoflagellates, nanoplankton, and picoplankton), as well as total chlorophyll for comparison, were assimilated into a physical-biogeochemical model of the North East Atlantic, applying a localized Ensemble Kalman filter. The reanalysis simulations spanned the years 1998-2003. The skill of the reference and reanalysis simulations in estimating ocean color and in situ biogeochemical data were compared by using robust statistics. The reanalysis outperformed both the reference and the assimilation of total chlorophyll in estimating the ocean-color PFTs (except nanoplankton), as well as the not-assimilated total chlorophyll, leading the model to simulate better the plankton community structure. Crucially, the reanalysis improved the estimates of not-assimilated in situ data of PFTs, as well as of phosphate and pCO2, impacting the simulation of the air-sea carbon flux. However, the reanalysis increased further the model overestimation of nitrate, in spite of increases in plankton nitrate uptake. The method proposed here is easily adaptable for use with other ecosystem models that simulate PFTs, for, e.g., reanalysis of carbon fluxes in the global ocean and for operational forecasts of biogeochemical indicators in shelf-sea ecosystems.
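
    For orientation, below is a minimal stochastic Ensemble Kalman Filter analysis step of the kind applied here, without the localization used in the paper and at toy dimensions; all numbers and names are hypothetical.

        import numpy as np

        def enkf_update(ensemble, H, y, obs_var, seed=0):
            # Stochastic EnKF analysis step. ensemble: (n_state, n_members)
            # forecast; H: (n_obs, n_state) observation operator; y: (n_obs,)
            # observed PFT chlorophyll; obs_var: observation error variances.
            rng = np.random.default_rng(seed)
            n_obs, n_members = len(y), ensemble.shape[1]
            Xf = ensemble
            A = Xf - Xf.mean(axis=1, keepdims=True)          # state anomalies
            HX = H @ Xf
            HA = HX - HX.mean(axis=1, keepdims=True)
            R = np.diag(obs_var)
            # Kalman gain from ensemble covariances: K = P H^T (H P H^T + R)^-1
            K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (n_members - 1) * R)
            # Perturb observations so the analysis spread stays consistent.
            Y = y[:, None] + rng.normal(0.0, np.sqrt(obs_var)[:, None],
                                        (n_obs, n_members))
            return Xf + K @ (Y - HX)

        rng = np.random.default_rng(3)
        Xf = rng.normal(1.0, 0.3, size=(4, 50))   # e.g. 4 PFT states, 50 members
        H = np.eye(2, 4)                          # observe the first two PFTs
        Xa = enkf_update(Xf, H, y=np.array([1.2, 0.8]),
                         obs_var=np.array([0.02, 0.02]))
        print(Xf.mean(axis=1).round(2), Xa.mean(axis=1).round(2))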

  11. Bitshuffle: Filter for improving compression of typed binary data

    Science.gov (United States)

    Masui, Kiyoshi

    2017-12-01

    Bitshuffle rearranges typed, binary data for improving compression; the algorithm is implemented in a python/C package within the Numpy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter except it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, such that all the least-significant bits are in a row, etc. This transposition is performed within blocks of data roughly 8 kB long; this does not in itself compress data, but rearranges it for more efficient compression. A compression library is necessary to perform the actual compression. This scheme has been used for compression of radio data in high performance computing.
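
    The transposition described above is easy to express with NumPy, which also makes the round-trip property simple to verify. This sketch mirrors the algorithm, not the optimized C implementation (no 8 kB blocking or vectorization), and the demo data are illustrative.

        import numpy as np, zlib

        def bitshuffle(block):
            # Transpose the bit matrix of a typed 1-D array: all first bits of
            # every element together, then all second bits, and so on.
            n, width = block.size, block.itemsize
            as_bytes = np.ascontiguousarray(block).view(np.uint8).reshape(n, width)
            return np.packbits(np.unpackbits(as_bytes, axis=1).T)

        def bitunshuffle(buf, dtype, n):
            width = np.dtype(dtype).itemsize
            planes = np.unpackbits(buf).reshape(8 * width, n)
            return np.packbits(planes.T, axis=1).reshape(-1).view(dtype)

        # Slowly varying values leave the high bit planes almost constant, so a
        # general-purpose compressor typically does better after shuffling.
        data = np.arange(10_000, dtype=np.uint32) // 7
        assert np.array_equal(bitunshuffle(bitshuffle(data), np.uint32, data.size),
                              data)
        print(len(zlib.compress(data.tobytes())),
              len(zlib.compress(bitshuffle(data).tobytes())))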

  12. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    Science.gov (United States)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving machining accuracy on CNC machines by applying innovative methods to the modelling and design of machining systems, drives, and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory with the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the resilience of linear algebra as part of the matrix-based analysis. The focus of the study is on the design of automated machine workstations, systems, machines, and units, which can be broken into interrelated parts and presented as algebraic, topological, and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. The paper analyses the dynamic parameters of the 1716PF4 machine at the design and exploitation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which have made it possible to reduce considerably the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min-1, and improve machining accuracy.

  13. Method of removing deterioration product in hydrocarbon type solvent

    International Nuclear Information System (INIS)

    Ito, Yoshifumi; Takashina, Toru; Murasawa, Kenji.

    1988-01-01

    Purpose: To remarkably reduce radioactive wastes by bringing adsorbents comprising titanium oxide and/or zirconium oxide into contact with hydrocarbon type solvents. Method: In a nuclear fuel reprocessing step, an appropriate processing is applied to extraction solvents suffering from radioactive degradation, to separate the hydrocarbon solvents and store them in a solvent tank. Then, titanium oxide and/or zirconium oxide adsorbents are continuously mixed and agitated therewith to adsorb the degradation products on the adsorbents. The mixture is then introduced into adsorbent separators to recover the purified hydrocarbon type solvents, while the separated adsorbents are discharged through pipeways. This enables regeneration of the hydrocarbon type solvents for reuse, as well as a remarkable reduction of the radioactive wastes. (Takahashi, M.)

  14. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    International Nuclear Information System (INIS)

    Levine, S.H.; Ivanov, K.; Feltus, M.

    1996-01-01

    The objective of using automatic optimized reloading methods is to provide the Nuclear Engineer with an efficient method for reloading a nuclear reactor which results in superior core configurations that minimize fuel costs. Previous methods developed by Levine et al required a large effort to develop the initial core loading using a priority loading scheme. Subsequent modifications to this core configuration were made using expert rules to produce the final core design. Improvements in this technique have been made by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently (authors)

  15. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Levine, S H; Ivanov, K; Feltus, M [Pennsylvania State Univ., University Park, PA (United States)

    1996-12-01

    The objective of using automatic optimized reloading methods is to provide the Nuclear Engineer with an efficient method for reloading a nuclear reactor which results in superior core configurations that minimize fuel costs. Previous methods developed by Levine et al required a large effort to develop the initial core loading using a priority loading scheme. Subsequent modifications to this core configuration were made using expert rules to produce the final core design. Improvements in this technique have been made by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently (authors).
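
    For a flavor of the approach, below is a toy permutation genetic algorithm of the kind used for reload design: tournament selection, order crossover, and swap mutation evolve loading patterns against a stand-in objective. The 12-assembly "core", the reactivity values, and the flatness objective are all hypothetical; a real application would call a core physics code in place of cost.

        import numpy as np

        def ga_permutation(fitness, n_items, pop=60, gens=200, p_mut=0.3, seed=0):
            # Minimal permutation GA: elitism, binary tournament selection,
            # order crossover (OX), and swap mutation. Lower fitness is better.
            rng = np.random.default_rng(seed)
            P = [rng.permutation(n_items) for _ in range(pop)]
            for _ in range(gens):
                scores = np.array([fitness(p) for p in P])
                new = [P[int(scores.argmin())].copy()]        # keep the best
                while len(new) < pop:
                    parents = []
                    for _ in range(2):                        # binary tournament
                        i, j = rng.integers(0, pop, 2)
                        parents.append(P[i] if scores[i] < scores[j] else P[j])
                    a, b = parents
                    c1, c2 = sorted(rng.integers(0, n_items, 2))
                    child = -np.ones(n_items, dtype=int)
                    child[c1:c2] = a[c1:c2]                   # OX: keep a slice of a,
                    child[child < 0] = [g for g in b if g not in a[c1:c2]]
                    if rng.random() < p_mut:                  # swap mutation
                        i, j = rng.integers(0, n_items, 2)
                        child[i], child[j] = child[j], child[i]
                    new.append(child)
                P = new
            scores = np.array([fitness(p) for p in P])
            return P[int(scores.argmin())]

        # Stand-in objective: place 12 assemblies of different reactivity so
        # that the importance-weighted power shape is as flat as possible.
        react = np.linspace(0.8, 1.3, 12)
        importance = 1.0 + np.exp(-((np.arange(12) - 5.5) ** 2) / 8.0)
        cost = lambda perm: np.ptp(react[perm] * importance)
        best = ga_permutation(cost, 12)
        print(best, cost(best))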

  16. Security analysis and improvements to the PsychoPass method.

    Science.gov (United States)

    Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko

    2013-08-13

    In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Our objective was to perform a security analysis of the PsychoPass method and outline its limitations and possible improvements. We used brute-force analysis and dictionary-attack analysis of the PsychoPass method to outline its weaknesses. The first issue with the PsychoPass method is that it requires password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on this weakness and propose a solution that produces strong passwords. The proposed version requires, first, the use of the SHIFT and ALT-GR keys in combination with other keys and, second, that the keys be 1-2 distances apart. The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years with current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.

  17. Responsive parenting is associated with improved type 1 diabetes-related quality of life.

    Science.gov (United States)

    Botello-Harbaum, M; Nansel, T; Haynie, D L; Iannotti, R J; Simons-Morton, B

    2008-09-01

    Improved quality of life is an important treatment goal for children and adolescents with type 1 diabetes. While previous research supports a relationship between family environment and quality of life, little research has addressed the relationship of parenting style constructs to quality of life in children with chronic disease. The present investigation assesses the relationship of parent responsiveness and demandingness with diabetes-related quality of life among children and adolescents with type 1 diabetes. Baseline and 12-month follow-up self-report assessments were collected on a sample of 81 children with type 1 diabetes participating in an efficacy trial of a behavioural intervention to enhance adherence. The sample had a mean age of 13.3 years (SD=1.7) and duration of diabetes of 7.7 years (SD=3.7). Multiple regression analyses were conducted to determine the relationship of parent responsiveness and demandingness to diabetes-related quality of life at each time point. After adjusting for demographic and diabetes characteristics, as well as diabetes-specific parent-child behaviours, parent responsiveness was significantly associated with baseline diabetes-related quality of life (beta=0.23; P=0.04). This relationship was sustained at 12-month follow-up (beta=0.22; P=0.04) after adjusting for baseline quality of life and treatment group assignment, suggesting that parent responsiveness is associated with improved quality of life. Findings indicate the importance of a supportive and emotionally warm parenting style in promoting improved quality of life for children with type 1 diabetes. Appropriate parenting skills should be an element of diabetes family management health care.

  18. Robust Grid-Current-Feedback Resonance Suppression Method for LCL-Type Grid-Connected Inverter Connected to Weak Grid

    DEFF Research Database (Denmark)

    Zhou, Xiaoping; Zhou, Leming; Chen, Yandong

    2018-01-01

    In this paper, a robust grid-current-feedback resonance suppression (GCFRS) method for LCL-type grid-connected inverter is proposed to enhance the system damping without introducing the switching noise and eliminate the impact of control delay on system robustness against grid-impedance variation.... It is composed of GCFRS method, the full duty-ratio and zero-beat-lag PWM method, and the lead-grid-current-feedback-resonance-suppression (LGCFRS) method. Firstly, the GCFRS is used to suppress the LCL-resonant peak well and avoid introducing the switching noise. Secondly, the proposed full duty-ratio and zero-beat-lag PWM method is used to eliminate the one-beat-lag computation delay without introducing duty cycle limitations. Moreover, it can also realize the smooth switching from positive to negative half-wave of the grid current and improve the waveform quality. Thirdly, the proposed LGCFRS is used to further...

  19. Quantity-quality measuring method possibilities in improving operator's learning quality

    International Nuclear Information System (INIS)

    Zvonarev, V.P.

    1984-01-01

    Possibilities of obtaining qualitative-quantitative estimates of different aspects of the learning process are considered, along with their application in defining learning objectives, substantiating the training program, and choosing the types and forms of studies aimed at improving the quality of operator learning

  20. An improved ghost-cell immersed boundary method for compressible flow simulations

    KAUST Repository

    Chi, Cheng

    2016-05-20

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary described using a level-set method to farther image points, incorporating a higher-order extra/interpolation scheme for the ghost cell values. A sensor is introduced to deal with image points near the discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently in the Cartesian grid system. The improved ghost-cell method is validated against four test cases: (a) double Mach reflections on a ramp, (b) smooth Prandtl-Meyer expansion flows, (c) supersonic flows in a wind tunnel with a forward-facing step, and (d) supersonic flows over a circular cylinder. It is demonstrated that the improved ghost-cell method can reach the accuracy of second order in L1 norm and higher than first order in L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate with better efficiency for boundary representation in high-fidelity compressible flow simulations. Copyright © 2016 John Wiley & Sons, Ltd.
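
    A minimal sketch of the mirroring step described above, assuming a signed-distance level-set function (negative inside the solid). The classical reflection uses a factor of 2; the paper's use of farther image points is represented here only by an assumed `depth` parameter, and the extra/interpolation of ghost-cell values is not reproduced.

```python
import numpy as np

def image_point(x_ghost, phi, grad_phi, depth=2.0):
    """Mirror a ghost point across the boundary {phi = 0}.
    depth=2 is the classical reflection; larger values push the image
    point farther from the wall, as the abstract describes."""
    d = phi(x_ghost)                    # signed distance, < 0 inside the body
    n = grad_phi(x_ghost)
    n = n / np.linalg.norm(n)           # outward unit normal
    return x_ghost - depth * d * n

# Circle of radius 1 centered at the origin as the solid body.
phi = lambda x: np.linalg.norm(x) - 1.0
grad_phi = lambda x: x / np.linalg.norm(x)

x_ghost = np.array([0.6, 0.6])               # a ghost cell inside the solid
print(image_point(x_ghost, phi, grad_phi))   # lies outside, along the normal
```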

  1. Immersive volume rendering of blood vessels

    Science.gov (United States)

    Long, Gregory; Kim, Han Suk; Marsden, Alison; Bazilevs, Yuri; Schulze, Jürgen P.

    2012-03-01

    In this paper, we present a novel method of visualizing flow in blood vessels. Our approach reads unstructured tetrahedral data, resamples it, and uses slice-based 3D texture volume rendering. Due to the sparse structure of blood vessels, we utilize an octree to efficiently store the resampled data by discarding empty regions of the volume. We use animation to convey time series data, wireframe surfaces to give structure, and utilize the StarCAVE, a 3D virtual reality environment, to add a fully immersive element to the visualization. Our tool has great value in interdisciplinary work, helping scientists collaborate with clinicians by improving the understanding of blood flow simulations. Full immersion in the flow field allows for a more intuitive understanding of the flow phenomena, and can be a great help to medical experts for treatment planning.

  2. An improved loopless mounting method for cryocrystallography

    International Nuclear Information System (INIS)

    Jian-Xun, Qi; Fan, Jiang

    2010-01-01

    Based on a recent loopless mounting method, a simplified loopless and bufferless crystal mounting method is developed for macromolecular crystallography. This simplified crystal mounting system is composed of the following components: a home-made glass capillary, a brass seat for holding the glass capillary, a flow regulator, and a vacuum pump for evacuation. Compared with the currently prevalent loop mounting method, this simplified method has almost the same mounting procedure and thus is compatible with the current automated crystal mounting system. The advantages of this method include higher signal-to-noise ratio, more accurate measurement, more rapid flash cooling, less x-ray absorption and thus less radiation damage to the crystal. This method can be extended to the flash-freezing of a crystal with or without soaking it in a lower concentration of cryoprotectant, thus it may be the best option for data collection in the absence of a suitable cryoprotectant. Therefore, it is suggested that this mounting method should be further improved and extensively applied to cryocrystallographic experiments. (general)

  3. An improved loopless mounting method for cryocrystallography

    Science.gov (United States)

    Qi, Jian-Xun; Jiang, Fan

    2010-01-01

    Based on a recent loopless mounting method, a simplified loopless and bufferless crystal mounting method is developed for macromolecular crystallography. This simplified crystal mounting system is composed of the following components: a home-made glass capillary, a brass seat for holding the glass capillary, a flow regulator, and a vacuum pump for evacuation. Compared with the currently prevalent loop mounting method, this simplified method has almost the same mounting procedure and thus is compatible with the current automated crystal mounting system. The advantages of this method include higher signal-to-noise ratio, more accurate measurement, more rapid flash cooling, less x-ray absorption and thus less radiation damage to the crystal. This method can be extended to the flash-freezing of a crystal with or without soaking it in a lower concentration of cryoprotectant, thus it may be the best option for data collection in the absence of a suitable cryoprotectant. Therefore, it is suggested that this mounting method should be further improved and extensively applied to cryocrystallographic experiments.

  4. A Novel Mesh Quality Improvement Method for Boundary Elements

    Directory of Open Access Journals (Sweden)

    Hou-lin Liu

    2012-01-01

    Full Text Available In order to improve the boundary mesh quality while maintaining the essential characteristics of discrete surfaces, a new approach combining optimization-based smoothing and topology optimization is developed. The smoothing objective function is modified, in which two functions denoting boundary and interior quality, respectively, and a weight coefficient controlling boundary quality are taken into account. In addition, the existing smoothing algorithm can improve the mesh quality only by repositioning vertices of the interior mesh. Without destroying boundary conformity, bad elements with all their vertices on the boundary cannot be eliminated. Then, topology optimization is employed, and those elements are converted into other types of elements whose quality can be improved by smoothing. The practical application shows that the worst elements can be eliminated and, with the increase of the weight coefficient, the average quality of the boundary mesh can also be improved. Results obtained with the combined approach are compared with those of a commonly used approach. It is clearly shown that it performs better than the existing approach.

  5. A method for removing adobe-type manure from hides using an oxidizing agent

    Science.gov (United States)

    Adobe-type (hardened) manure attached to bovine hair is a major source of meat contamination, hide quality deterioration, and devalued leather products. Therefore, it is important to develop cleaning solutions that can rapidly remove adobe-type manure to improve the quality of hides delivered to tan...

  6. Typing methods for the plague pathogen, Yersinia pestis.

    Science.gov (United States)

    Lindler, Luther E

    2009-01-01

    Phenotypic and genotypic methodologies have been used to differentiate the etiological agent of plague, Yersinia pestis. Historically, phenotypic methods were used to place isolates into one of three biovars based on nitrate reduction and glycerol fermentation. Classification of Y. pestis into genetic subtypes is problematic due to the relatively monomorphic nature of the pathogen. Resolution into groups is dependent on the number and types of loci used in the analysis. The last 5-10 years of research and analysis in the field of Y. pestis genotyping have resulted in a recognition by Western scientists that two basic types of Y. pestis exist. One type, considered to comprise classic strains that are able to cause human plague transmitted by the normal flea vector, is termed epidemic strains. The other type does not typically cause human infections by normal routes of infection, but is virulent for rodents and is termed endemic strains. Previous classification schemes used outside the Western hemisphere referred to these latter strains as Pestoides varieties of Y. pestis. Recent molecular analysis has definitively shown that both endemic and epidemic strains arose independently from a common Yersinia pseudotuberculosis ancestor. Currently, 11 major groups of Y. pestis are defined globally.

  7. Two Dimensional Array of Piezoresistive Nanomechanical Membrane-Type Surface Stress Sensor (MSS) with Improved Sensitivity

    Directory of Open Access Journals (Sweden)

    Nico F. de Rooij

    2012-11-01

    Full Text Available We present a new generation of piezoresistive nanomechanical Membrane-type Surface stress Sensor (MSS) chips, which consist of a two dimensional array of MSS on a single chip. The implementation of several optimization techniques in the design and microfabrication improved the piezoresistive sensitivity by 3~4 times compared to the first generation MSS chip, resulting in a sensitivity about ~100 times better than a standard cantilever-type sensor and a few times better than optical read-out methods in terms of experimental signal-to-noise ratio. Since the integrated piezoresistive read-out of the MSS can meet practical requirements, such as compactness and not requiring bulky and expensive peripheral devices, the MSS is a promising transducer for nanomechanical sensing in the rapidly growing application fields in medicine, biology, security, and the environment. Specifically, its system compactness due to the integrated piezoresistive sensing makes the MSS concept attractive for the instruments used in mobile applications. In addition, the MSS can operate in opaque liquids, such as blood, where optical read-out techniques cannot be applied.

  8. Cooking Schools Improve Nutrient Intake Patterns of People with Type 2 Diabetes

    Science.gov (United States)

    Archuleta, Martha; VanLeeuwen, Dawn; Halderson, Karen; Jackson, K'Dawn; Bock, Margaret Ann; Eastman, Wanda; Powell, Jennifer; Titone, Michelle; Marr, Carol; Wells, Linda

    2012-01-01

    Objective: To determine whether cooking classes offered by the Cooperative Extension Service improved nutrient intake patterns in people with type 2 diabetes. Design: Quasi-experimental using pretest, posttest comparisons. Setting: Community locations including schools, churches, and senior centers. Participants: One hundred seventeen people with…

  9. Typing of Y chromosome SNPs with multiplex PCR methods

    DEFF Research Database (Denmark)

    Sanchez Sanchez, Juan Jose; Børsting, Claus; Morling, Niels

    2005-01-01

    We describe a method for the simultaneous typing of Y-chromosome single nucleotide polymorphism (SNP) markers by means of multiplex polymerase chain reaction (PCR) strategies that allow the detection of 35 Y chromosome SNPs on 25 amplicons from 100 to 200 pg of chromosomal deoxyribonucleic acid...... factors for the creation of larger SNP typing PCR multiplexes include careful selection of primers for the primary amplification and the SBE reaction, use of DNA primers with homogeneous composition, and balancing the primer concentrations for both the amplification and the SBE reactions....

  10. An improved intermediate resonance method for heterogeneous media

    International Nuclear Information System (INIS)

    Chiovato, O.; Corno, S.; Pasquantonio, F.Di.

    1977-01-01

    A new formulation of the Intermediate Resonance method is described, which incorporates the previous developments, suitably modified and improved, together with some new contributions. The 'intermediate' character is directly introduced in the integral operator K, allowing a more rigorous derivation of the equations for evaluating the intermediate parameters related to the nuclides involved in the system. There is no limit to the number of internal (admixed in the fuel) and external moderators. The capability to take into account interference scattering has been extended to heterogeneous systems. The Doppler broadening is described by means of new accurate rational approximations to the broadened line shape psi. Finally, the use of suitably defined energy mean values refines the values of the resonance integrals and resonance absorption cross sections. The Intermediate Resonance method, so extended and improved, has been coded in a group of FORTRAN routines, which have been inserted as a calculation option in the fast section of the GGC code for the evaluation of multigroup cross sections. A series of calculations has been carried out using these routines, and comparisons have been made with Monte Carlo and Nordheim's methods. The results obtained show that the Intermediate Resonance method developed in the present work offers considerable advantages over Nordheim's method: better accuracy in evaluating resonance absorption cross sections, and much smaller computing times. (author)

  11. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    Science.gov (United States)

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data is a technically challenging task, and it becomes more difficult when the data is simultaneously high-dimensional. Skewed data often appear in the biomedical field. In this study, we try to deal with this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedicine data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the Accuracy, F-measure, G-mean and AUC evaluation criteria, thus it can be regarded as an effective and efficient tool to deal with high-dimensional and imbalanced biomedical data.
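
    A minimal sketch of the ensemble described above, under these assumptions: each bag keeps all minority cases and undersamples the majority class to the same size (asymmetric bagging), and each SVM is trained on a random feature subspace. The subspace fraction, label coding and demo data are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fit_asbagging_fss(X, y, n_estimators=25, subspace_frac=0.5):
    """Each bag: all minority cases + an equal-size majority undersample,
    with an SVM trained on a random feature subspace (labels assumed 0/1)."""
    idx_min = np.flatnonzero(y == 1)
    idx_maj = np.flatnonzero(y == 0)
    k = max(1, int(subspace_frac * X.shape[1]))
    ensemble = []
    for _ in range(n_estimators):
        maj = rng.choice(idx_maj, size=idx_min.size, replace=False)
        rows = np.concatenate([idx_min, maj])
        feats = rng.choice(X.shape[1], size=k, replace=False)
        clf = SVC(kernel="rbf").fit(X[np.ix_(rows, feats)], y[rows])
        ensemble.append((clf, feats))
    return ensemble

def predict_asbagging(ensemble, X):
    votes = np.mean([clf.predict(X[:, feats]) for clf, feats in ensemble], axis=0)
    return (votes >= 0.5).astype(int)    # majority vote of the base SVMs

# Toy imbalanced demo data (about 10% minority class).
X = rng.normal(size=(300, 40))
y = (rng.random(300) < 0.1).astype(int)
print(predict_asbagging(fit_asbagging_fss(X, y), X[:5]).tolist())
```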

  12. Persufflation Improves Pancreas Preservation When Compared With the Two-Layer Method

    Science.gov (United States)

    Scott, W.E.; O'Brien, T.D.; Ferrer-Fabrega, J.; Avgoustiniatos, E.S.; Weegman, B.P.; Anazawa, T.; Matsumoto, S.; Kirchner, V.A.; Rizzari, M.D.; Murtaugh, M.P.; Suszynski, T.M.; Aasheim, T.; Kidder, L.S.; Hammer, B.E.; Stone, S.G.; Tempelman, L.; Sutherland, D.E.R.; Hering, B.J.; Papas, K.K.

    2010-01-01

    Islet transplantation is emerging as a promising treatment for patients with type 1 diabetes. It is important to maximize viable islet yield for each organ due to scarcity of suitable human donor pancreata, high cost, and the high dose of islets required for insulin independence. However, organ transport for 8 hours using the two-layer method (TLM) frequently results in lower islet yields. Since efficient oxygenation of the core of larger organs (eg, pig, human) in TLM has recently come under question, we investigated oxygen persufflation as an alternative way to supply the pancreas with oxygen during preservation. Porcine pancreata were procured from non–heart-beating donors and preserved by either TLM or persufflation for 24 hours and fixed. Biopsies were collected from several regions of the pancreas, sectioned, stained with hematoxylin and eosin, and evaluated by a histologist. Persufflated tissues exhibited distended capillaries due to gas perfusion and significantly less autolysis/cell death than regions not exposed to persufflation or tissues exposed to TLM. The histology presented here suggests that after 24 hours of preservation, persufflation dramatically improves tissue health when compared with TLM. These results indicate the potential for persufflation to improve viable islet yields and extend the duration of preservation, allowing more donor organs to be utilized. PMID:20692396

  13. Cardiac lipid content is unresponsive to a physical activity training intervention in type 2 diabetic patients, despite improved ejection fraction

    Directory of Open Access Journals (Sweden)

    Leiner Tim

    2011-05-01

    Full Text Available Abstract Background Increased cardiac lipid content has been associated with diabetic cardiomyopathy. We recently showed that cardiac lipid content is reduced after 12 weeks of physical activity training in healthy overweight subjects. The beneficial effect of exercise training on cardiovascular risk is well established, and the decrease in cardiac lipid content with exercise training in healthy overweight subjects was accompanied by improved ejection fraction. It is yet unclear whether diabetic patients respond similarly to physical activity training and whether a lowered lipid content in the heart is necessary for improvements in cardiac function. Here, we investigated whether exercise training is able to lower cardiac lipid content and improve cardiac function in type 2 diabetic patients. Methods Eleven overweight-to-obese male patients with type 2 diabetes mellitus (age: 58.4 ± 0.9 years, BMI: 29.9 ± 0.01 kg/m2) followed a 12-week training program (combination endurance/strength training, three sessions/week). Before and after training, maximal whole body oxygen uptake (VO2max) and insulin sensitivity (by hyperinsulinemic, euglycemic clamp) were determined. Systolic function was determined under resting conditions by CINE-MRI and cardiac lipid content in the septum of the heart by Proton Magnetic Resonance Spectroscopy. Results VO2max increased (from 27.1 ± 1.5 to 30.1 ± 1.6 ml/min/kg, p = 0.001) and insulin sensitivity improved upon training (insulin stimulated glucose disposal (delta Rd of glucose) improved from 5.8 ± 1.9 to 10.3 ± 2.0 μmol/kg/min, p = 0.02). Left-ventricular ejection fraction improved after training (from 50.5 ± 2.0 to 55.6 ± 1.5%, p = 0.01), as did cardiac index and cardiac output. Unexpectedly, cardiac lipid content in the septum remained unchanged (from 0.80 ± 0.22% to 0.95 ± 0.21%, p = 0.15). Conclusions Twelve weeks of progressive endurance/strength training was effective in improving VO2max, insulin sensitivity

  14. An Improved Ghost-cell Immersed Boundary Method for Compressible Inviscid Flow Simulations

    KAUST Repository

    Chi, Cheng

    2015-05-01

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary described using a level-set method to farther image points, incorporating a higher-order extra/interpolation scheme for the ghost cell values. In addition, a shock sensor is in- troduced to deal with image points near the discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently. The improved ghost-cell method is validated against five test cases: (a) double Mach reflections on a ramp, (b) supersonic flows in a wind tunnel with a forward- facing step, (c) supersonic flows over a circular cylinder, (d) smooth Prandtl-Meyer expansion flows, and (e) steady shock-induced combustion over a wedge. It is demonstrated that the improved ghost-cell method can reach the accuracy of second order in L1 norm and higher than first order in L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate with better efficiency for boundary representation in high-fidelity compressible flow simulations. Implementation of the improved ghost-cell method in reacting Euler flows further validates its general applicability for compressible flow simulations.

  15. Qualitative Improvement Methods Through Analysis of Inquiry Contents for Cancer Registration

    Science.gov (United States)

    Boo, Yoo-Kyung; Lim, Hyun-Sook; Kim, Jung-Eun; Kim, Kyoung-Beom; Won, Young-Joo

    2017-06-25

    Background: In Korea, the national cancer database was constructed after the initiation of the national cancer registration project in 1980, and the annual national cancer registration report has been published every year since 2005. Consequently, data management must begin even at the stage of data collection in order to ensure quality. Objectives: To determine the suitability of cancer registries’ inquiry tools through the inquiry analysis of the Korea Central Cancer Registry (KCCR), and identify the needs to improve the quality of cancer registration. Methods: Results of 721 inquiries to the KCCR from 2000 to 2014 were analyzed by inquiry year, question type, and medical institution characteristics. Using Stata version 14.1, descriptive analysis was performed to identify general participant characteristics, and chi-square analysis was applied to investigate significant differences in distribution characteristics by factors affecting the quality of cancer registration data. Results: The number of inquiries increased in 2005–2009. During this period, there were various changes, including the addition of cancer registration items such as brain tumors and guideline updates. Of the inquirers, 65.3% worked at hospitals in metropolitan cities and 60.89% of hospitals had 601–1000 beds. Tertiary hospitals had the highest number of inquiries (64.91%), and the highest numbers of questions by type were 353 (48.96%) for histological codes, 92 (12.76%) for primary sites, and 76 (10.54%) for reportable cases. Conclusions: A cancer registration inquiry system is an effective method when registrars are not confident about codes during cancer registration, or when confronting cancer cases for which previous clinical knowledge or information on the cancer registration guidelines is insufficient.

  16. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    KAUST Repository

    Liu, Bo

    2015-11-11

    We consider the Bayesian filtering problem for data assimilation following the kernel-based ensemble Gaussian-mixture filtering (EnGMF) approach introduced by Anderson and Anderson (1999). In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian-mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution. We then focus on two aspects: i) the efficient implementation of EnGMF with (relatively) small ensembles, where we propose a new deterministic resampling strategy preserving the first two moments of the posterior GM to limit the sampling error; and ii) the analysis of the effect of the bandwidth parameter on contributions of KF and PF updates and on the weights variance. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling suggests improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.
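
    The deterministic, moment-preserving resampling idea can be sketched as follows. This is one way to produce an equally weighted ensemble with the same first two moments as the weighted posterior, assuming a whiten-then-recolor construction; it is not necessarily the authors' exact scheme.

```python
import numpy as np

def deterministic_resample(X, w):
    """X: (N, d) weighted members; w: (N,) weights summing to 1.
    Returns N equally weighted members with the same mean and covariance."""
    N, d = X.shape
    m = w @ X                               # posterior (weighted) mean
    D = (X - m) * np.sqrt(w)[:, None]
    P = D.T @ D                             # posterior (weighted) covariance
    # Whiten the equally weighted anomalies, then recolor with chol(P).
    A = X - X.mean(axis=0)
    S = A.T @ A / (N - 1)                   # sample covariance of anomalies
    T = np.linalg.cholesky(P) @ np.linalg.inv(np.linalg.cholesky(S))
    return m + A @ T.T                      # exact mean m and covariance P

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
w = rng.random(50); w /= w.sum()
Xr = deterministic_resample(X, w)
print(np.allclose(Xr.mean(axis=0), w @ X))  # True: first moment preserved
```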

  17. Behavioral interventions for improving dual-method contraceptive use.

    Science.gov (United States)

    Lopez, Laureen M; Stockton, Laurie L; Chen, Mario; Steiner, Markus J; Gallo, Maria F

    2014-03-30

    Dual-method contraception refers to using condoms as well as another modern method of contraception. The latter (usually non-barrier) method is commonly hormonal (e.g., oral contraceptives) or a non-hormonal intrauterine device. Use of two methods can better prevent pregnancy and the transmission of HIV and other sexually transmitted infections (STIs) compared to single-method use. Unprotected sex increases risk for disease, disability, and mortality in many areas due to the prevalence and incidence of HIV/STI. Millions of women, especially in lower-resource areas, also have an unmet need for protection against unintended pregnancy. We examined comparative studies of behavioral interventions for improving use of dual methods of contraception. Dual-method use refers to using condoms as well as another modern contraceptive method. Our intent was to identify effective interventions for preventing pregnancy as well as HIV/STI transmission. Through January 2014, we searched MEDLINE, CENTRAL, POPLINE, EMBASE, COPAC, and Open Grey. In addition, we searched ClinicalTrials.gov and ICTRP for current trials and trials with relevant data or reports. We examined reference lists of pertinent papers, including review articles, for additional reports. Studies could be either randomized or non-randomized. They examined a behavioral intervention with an educational or counseling component to encourage or improve the use of dual methods, i.e., condoms and another modern contraceptive. The intervention had to address preventing pregnancy as well as the transmission of HIV/STI. The program or service could be targeted to individuals, couples, or communities. The comparison condition could be another behavioral intervention to improve contraceptive use, usual care, other health education, or no intervention.Studies had to report use of dual methods, i.e., condoms plus another modern contraceptive method. We focused on the investigator's assessment of consistent dual-method use or use at

  18. Technical and metrological service improvement of measurement channels with flow-type transducers of ionic impurities for water chemical control in nuclear reactors

    International Nuclear Information System (INIS)

    Vilkov, Nicolay Ya.; Voronina, N.V.; Matveyev, V.N.; Sorokin, N.M.; Sidorchuk, A.N.

    2012-09-01

    Improvement of sampling process, including sample taking, transport, and preparation, and optimization of on-line metrological maintenance on measuring chains containing flow-type sensors is very important for obtaining high quality information about NPP coolant water composition. Sample preparation and measurement errors almost cannot be eliminated by data processing in top level computers. For on-line measurements of the coolant water ion composition, nuclear plants commonly use sampling lines with gage pressure regulators provided at inlets of flow type sensors. The major part of sample fluid is drained via bypass outside the flow path through the sensors. A better alternative is to form flows at the inlets of flow type sensors using outlet pressure feedback devices. This sampling scheme ensures fully representative samples that are transported to the sensor inlets with a given time delay. In such a scheme, the sample fluid returns into the coolant system without change in composition. The paper presents test results for the prototype model of the pressure and flow control device. Alexandrov NITI has patented a method and apparatus for comprehensively calibrating measuring chains with flow type ion analyzers which are used in nuclear power plants to measure on line the ion composition of high-purity and other water streams. The patented dynamical method generates calibration solutions as binary electrolytes with a given analyte concentration. The method is easy to implement and requires no dosing equipment. Calibration solutions are generated directly in the water flow through the sampling line connected to the coolant line or high-purity water feed line. Unlike the concentration of buffer solutions used in pH measurements, the total ion concentration in generated electrolyte solutions is close to that in actual water streams at nuclear plants. With the proposed method and equipment, a reference pH value can be obtained with accuracy which is close to the

  19. Brewer?s Yeast Improves Blood Pressure in Type 2 Diabetes Mellitus

    OpenAIRE

    HOSSEINZADEH, Payam; DJAZAYERY, Abolghassem; MOSTAFAVI, Seyed-Ali; JAVANBAKHT, Mohammad Hassan; DERAKHSHANIAN, Hoda; RAHIMIFOROUSHANI, Abbas; DJALALI, Mahmoud

    2013-01-01

    Background This study was conducted to investigate the effects of Brewer?s yeast supplementation on serum lipoproteins and blood pressure in patients with Type 2 diabetes mellitus. Methods: In a randomized double blind clinical trial, 90 adults with type 2 diabetes mellitus were recruited, and divided randomly into 2 groups, trial group received brewer?s yeast (1800 mg/day) and control group received placebo for 12 weeks. Weight, BMI, food consumption (based on 24 hour food recall), fasting s...

  20. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology

    International Nuclear Information System (INIS)

    Han, Jubong; Lee, K.B.; Lee, Jong-Man; Park, Tae Soon; Oh, J.S.; Oh, Pil-Jei

    2016-01-01

    We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (Probability Density Functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, called nuisance parameters. We use the extended likelihood function to make point and interval estimations of parameters in essentially the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate such nuisance parameters by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained from using our procedure with those from conventional methods. - Highlights: • A new method proposed to incorporate Type B uncertainty into least-squares method. • The method constructed from the likelihood function and PDFs of Type B uncertainty. • A case study performed to compare results from the new and the conventional method. • Fitted parameters are consistent but with larger uncertainties in the new method.
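
    A minimal sketch of the extended-likelihood idea for a straight-line fit with one common additive correction factor treated as a Gaussian nuisance parameter. The data and the Type B standard uncertainty u_B are invented for illustration; the paper's exact formulation and its profiling procedure are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sigma = 0.2 * np.ones_like(y)        # Type A (statistical) uncertainties
u_B = 0.3                            # assumed common Type B uncertainty

def neg_log_like(p):
    """Extended likelihood: least-squares term plus the PDF of the common
    correction factor theta (the nuisance parameter)."""
    a, b, theta = p
    resid = (y - theta - (a + b * x)) / sigma
    return 0.5 * np.sum(resid**2) + 0.5 * (theta / u_B) ** 2

# Minimizing over theta jointly with (a, b) profiles the nuisance out.
fit = minimize(neg_log_like, x0=[0.0, 1.0, 0.0])
a, b, theta = fit.x
print(f"intercept={a:.3f} slope={b:.3f} nuisance theta={theta:.3f}")
```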

  1. Improvement Method of Gene Transfer in Kappaphycus Alvarezii

    OpenAIRE

    Triana, St. Hidayah; Alimuddin,; Widyastuti, Utut; Suharsono,; Suryati, Emma; Parenrengi, Andi

    2016-01-01

    Method of foreign gene transfer in red seaweed Kappaphycus alvarezii has been reported, however, limited number of transgenic F0 (broodstock) was obtained. This study was conducted to improve the method of gene transfer mediated by Agrobacterium tumefaciens in order to obtain high percentage of K. alvarezii transgenic. Superoxide dismutase gene from Melastoma malabatrichum (MmCu/Zn-SOD) was used as model towards increasing adaptability of K. alvarezii to environmental stress. The treatment...

  2. Forms and Methods of Agricultural Sector Innovative Activity Improvement

    Directory of Open Access Journals (Sweden)

    Aisha S. Ablyaeva

    2013-01-01

    Full Text Available The article is focused on basic forms and methods to improve the efficiency of innovative activity in the agricultural sector of Ukraine. It was determined that the development of agriculture in Ukraine is affected by a number of factors that must be considered to design innovative models of entrepreneurship development and ways to improve the efficiency of innovative entrepreneurship activity.

  3. TRIZ method application for improving the special vehicles maintenance

    OpenAIRE

    Petrović Saša; Lozanović-Šajić Jasmina; Knežević Tijana; Pavlović Jovan; Ivanov Goran

    2014-01-01

    TRIZ methodology provides an opportunity for improving the classical engineering approach based on personal knowledge and experience. This paper presents the application of TRIZ methods for improving the maintenance of vehicles in which special equipment is installed. A specific problem is the maintenance of periscopes with a heating system. Protective glass panels with a heating system are rectangular glass elements. Their purpose is to perform mechanical protection ...

  4. Improved method of measurement for outer leak

    International Nuclear Information System (INIS)

    Xu Guang

    2012-01-01

    A pneumatic pipeline is installed for airborne radioactivity measurement equipment, and air tightness and the outer leak rate are essential characteristics to be tested, both in the national criteria and in ISO standards. Based on engineering experience with equipment acceptance and testing procedures, an improved practical method is available for measuring the outer air leak rate. (authors)

  5. CRISPR typing and subtyping for improved laboratory surveillance of Salmonella infections.

    Directory of Open Access Journals (Sweden)

    Laëtitia Fabre

    Full Text Available Laboratory surveillance systems for salmonellosis should ideally be based on the rapid serotyping and subtyping of isolates. However, current typing methods are limited in both speed and precision. Using 783 strains and isolates belonging to 130 serotypes, we show here that a new family of DNA repeats named CRISPR (clustered regularly interspaced short palindromic repeats) is highly polymorphic in Salmonella. We found that CRISPR polymorphism was strongly correlated with both serotype and multilocus sequence type. Furthermore, spacer microevolution discriminated between subtypes within prevalent serotypes, making it possible to carry out typing and subtyping in a single step. We developed a high-throughput subtyping assay for the most prevalent serotype, Typhimurium. An open web-accessible database was set up, providing a serotype/spacer dictionary and an international tool for strain tracking based on this innovative, powerful typing and subtyping tool.

  6. Assessment guidance of carbohydrate counting method in patients with type 2 diabetes mellitus.

    Science.gov (United States)

    Martins, Michelle R; Ambrosio, Ana Cristina T; Nery, Marcia; Aquino, Rita de Cássia; Queiroz, Marcia S

    2014-04-01

    We evaluated the application of the carbohydrate counting method by 21 patients with type 2 diabetes, 1 year after they attended a guidance course. Participants answered a questionnaire to assess their adherence to carbohydrate counting as well as to identify habit changes and the method's applicability, and values of glycated hemoglobin were also analyzed. Most participants (76%) were female, and 25% of them had obesity degree III. There was a statistically significant decrease in glycated hemoglobin, from 8.42±0.02% to 7.66±0.01%, comparing values before and after counseling. We observed that although patients stated that the method was difficult, they understood that carbohydrate counting could allow them to make choices and have more freedom in their meals; we also verified whether they accurately understood how to replace some foods used regularly in their diets, and most patients correctly chose replacements for the groups of bread (76%), beans (67%) and noodles (67%). We concluded that participation in the course led to improved blood glucose control with a significant reduction of glycated hemoglobin, a better understanding of food groups and the adoption of healthier eating habits. Copyright © 2013 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.

  7. Standard CMMIsm Appraisal Method for Process Improvement (SCAMPIsm), Version 1.1: Method Definition Document

    National Research Council Canada - National Science Library

    2001-01-01

    The Standard CMMI Appraisal Method for Process Improvement (SCAMPI(Service Mark)) is designed to provide benchmark quality ratings relative to Capability Maturity Model(registered) Integration (CMMI(Service Mark)) models...

  8. Improvement of Power Flow Calculation with Optimization Factor Based on Current Injection Method

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2014-01-01

    Full Text Available This paper presents an improvement in power flow calculation based on the current injection method, obtained by introducing an optimization factor. In the method proposed in this paper, the PQ buses are represented by current mismatches while the PV buses are represented by power mismatches. This differs from the representations in conventional current injection power flow equations. By using the combined power and current injection mismatches method, the number of equations required can be decreased to only one for each PV bus. The optimization factor is used to improve the iteration process and to ensure the effectiveness of the proposed method when the system is ill-conditioned. To verify the effectiveness of the method, the IEEE test systems are solved by the conventional current injection method and the improved method separately, and the results are compared. The comparisons show that the optimization factor improves the convergence characteristics effectively; in particular, when the system is at a high loading level and R/X ratio, the iteration number is one or two times less than with the conventional current injection method. When the overloading condition of the system is serious, the iteration number is four times less than with the conventional current injection method.
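
    The role of an optimization factor can be sketched with a generic damped Newton iteration in which a scalar multiplier is chosen to minimize the squared mismatch along the Newton step. The toy equations below are purely illustrative and are not a power flow model; the grid line search stands in for whatever optimal-multiplier rule the paper actually uses.

```python
import numpy as np

def damped_newton(F, J, x, tol=1e-10, max_iter=50):
    """Newton-Raphson with a scalar 'optimization factor' mu on each step."""
    for it in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            return x, it
        dx = np.linalg.solve(J(x), -f)
        # Optimization factor: 1-D search on g(mu) = ||F(x + mu*dx)||^2.
        mus = np.linspace(0.1, 1.5, 15)
        mu = mus[np.argmin([np.linalg.norm(F(x + m * dx))**2 for m in mus])]
        x = x + mu * dx
    return x, max_iter

# Toy nonlinear mismatch equations (not a power system).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, np.exp(v[0]) + v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [np.exp(v[0]), 1.0]])
sol, iters = damped_newton(F, J, np.array([1.0, 1.0]))
print(sol, iters)
```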

  9. Chosen interval methods for solving linear interval systems with special type of matrix

    Science.gov (United States)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix. This kind of matrix, a band matrix with a parameter, is obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by using the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore, the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they have no errors of method. All calculations were performed in floating-point interval arithmetic.

  10. Soil map disaggregation improved by soil-landscape relationships, area-proportional sampling and random forest implementation

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan

    implementation generally improved the algorithm’s ability to predict the correct soil class. The implementation of soil-landscape relationships and area-proportional sampling generally increased the calculation time, while the random forest implementation reduced the calculation time. In the most successful......Detailed soil information is often needed to support agricultural practices, environmental protection and policy decisions. Several digital approaches can be used to map soil properties based on field observations. When soil observations are sparse or missing, an alternative approach...... is to disaggregate existing conventional soil maps. At present, the DSMART algorithm represents the most sophisticated approach for disaggregating conventional soil maps (Odgers et al., 2014). The algorithm relies on classification trees trained from resampled points, which are assigned classes according...

  11. Effectiveness of a regional prepregnancy care program in women with type 1 and type 2 diabetes

    DEFF Research Database (Denmark)

    Murphy, Helen R.; Roland, Jonathan M.; Skinner, Timothy C.

    2010-01-01

    of 680 pregnancies in women with type 1 and type 2 diabetes was performed. Primary outcomes were adverse pregnancy outcome (congenital malformation, stillbirth, or neonatal death), congenital malformation, and indicators of pregnancy preparation (5 mg folic acid, gestational age, and A1C). Comparisons...... with improved pregnancy preparation and reduced risk of adverse pregnancy outcome in type 1 and type 2 diabetes. Prepregnancy care had benefits beyond improved glycemic control and was a stronger predictor of pregnancy outcome than maternal obesity, ethnicity, or social disadvantage.......OBJECTIVE - To implement and evaluate a regional prepregnancy care program in women with type 1 and type 2 diabetes. RESEARCH DESIGN AND METHODS - Prepregnancy care was promoted among patients and health professionals and delivered across 10 regional maternity units. A prospective cohort study...

  12. New application of Exp-function method for improved Boussinesq equation

    Energy Technology Data Exchange (ETDEWEB)

    Abdou, M.A. [Theoretical Research Group, Physics Department, Faculty of Science, Mansoura University, 35516 Mansoura (Egypt); Department of Physics, Faculty of Education for Girls, Science Departments, King Khalid University, Bisha (Saudi Arabia)], E-mail: m_abdou_eg@yahoo.com; Soliman, A.A. [Department of Mathematics, Faculty of Education (AL-Arish) Suez Canal University, AL-Arish 45111 (Egypt); Department of Mathematics, Teacher's College (Bisha), King Khalid University, Bisha, PO Box 551 (Saudi Arabia)], E-mail: asoliman_99@yahoo.com; El-Basyony, S.T. [Theoretical Research Group, Physics Department, Faculty of Science, Mansoura University, 35516 Mansoura (Egypt)

    2007-10-01

    The Exp-function method is used, with the aid of symbolic computation, to obtain generalized solitary solutions and periodic solutions for nonlinear evolution equations arising in mathematical physics, namely the improved Boussinesq equation. The method is straightforward and concise, and its application is promising for other nonlinear evolution equations in mathematical physics.

  13. Integration of equations of parabolic type by the method of nets

    CERN Document Server

    Saul'Yev, V K; Stark, M; Ulam, S

    1964-01-01

    International Series of Monographs in Pure and Applied Mathematics, Volume 54: Integration of Equations of Parabolic Type by the Method of Nets deals with solving parabolic partial differential equations using the method of nets. The first part of this volume focuses on the construction of net equations, with emphasis on the stability and accuracy of the approximating net equations. The method of nets or method of finite differences (used to define the corresponding numerical method in ordinary differential equations) is one of many different approximate methods of integration of partial diff

  14. Using Economic Evaluation to Illustrate Value of Care for Improving Patient Safety and Quality: Choosing the Right Method.

    Science.gov (United States)

    Padula, William V; Lee, Ken K H; Pronovost, Peter J

    2017-08-03

    To scale and sustain successful quality improvement (QI) interventions, it is recommended for health system leaders to calculate the economic and financial sustainability of the intervention. Many methods of economic evaluation exist, and the type of method depends on the audience: providers, researchers, and hospital executives. This is a primer to introduce cost-effectiveness analysis, budget impact analysis, and return on investment calculation as 3 distinct methods for each stakeholder needing a measurement of the value of QI at the health system level. Using cases for the QI of hospital-acquired condition rates (e.g., pressure injuries), this primer proceeds stepwise through each method beginning from the same starting point of constructing a model so that the repetition of steps is minimized and thereby capturing the attention of all intended audiences.

  15. Analyzing Repeated Measures Marginal Models on Sample Surveys with Resampling Methods

    Directory of Open Access Journals (Sweden)

    James D. Knoke

    2005-12-01

    Full Text Available Packaged statistical software for analyzing categorical, repeated measures marginal models on sample survey data with binary covariates does not appear to be available. Consequently, this report describes a customized SAS program which accomplishes such an analysis on survey data with jackknifed replicate weights for which the primary sampling unit information has been suppressed for respondent confidentiality. First, the program employs the Macro Language and the Output Delivery System (ODS) to estimate the means and covariances of indicator variables for the response variables, taking the design into account. Then, it uses PROC CATMOD and ODS, ignoring the survey design, to obtain the design matrix and hypothesis test specifications. Finally, it enters these results into another run of CATMOD, which performs automated direct input of the survey design specifications and accomplishes the appropriate analysis. This customized SAS program can be employed, with minor editing, to analyze general categorical, repeated measures marginal models on sample surveys with replicate weights. Finally, the results of our analysis accounting for the survey design are compared to the results of two alternate analyses of the same data. This comparison confirms that such alternate analyses, which do not properly account for the design, do not produce useful results.
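
    The replicate-weight variance computation at the core of such an analysis can be sketched as follows. The JK1-style scaling factor (R-1)/R and the toy replicate construction are assumptions for illustration; real surveys document their own replicate factors.

```python
import numpy as np

def replicate_variance(theta_full, theta_reps):
    """Jackknife replicate-weight variance with a JK1-style factor."""
    theta_reps = np.asarray(theta_reps)
    R = theta_reps.size
    return (R - 1) / R * np.sum((theta_reps - theta_full) ** 2)

rng = np.random.default_rng(2)
y = rng.binomial(1, 0.3, size=200).astype(float)   # a binary outcome
w_full = rng.uniform(0.5, 2.0, size=200)           # full-sample weights
theta_full = np.sum(w_full * y) / np.sum(w_full)   # weighted prevalence

# Toy replicates: drop one block of 5 respondents per replicate; a ratio
# estimator self-normalizes, so no reweighting is needed in this sketch.
theta_reps = []
for r in range(40):
    w = w_full.copy()
    w[5 * r: 5 * r + 5] = 0.0
    theta_reps.append(np.sum(w * y) / np.sum(w))

se = replicate_variance(theta_full, theta_reps) ** 0.5
print(f"estimate {theta_full:.3f}, jackknife SE {se:.3f}")
```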

  16. Improvement of CPU time of Linear Discriminant Function based on MNM criterion by IP

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2014-05-01

    Full Text Available Revised IP-OLDF (optimal linear discriminant function by integer programming) is a linear discriminant function that minimizes the number of misclassifications (NM) of training samples by integer programming (IP). However, IP requires a large computation (CPU) time. In this paper, it is proposed how to reduce CPU time by using linear programming (LP). In the first phase, Revised LP-OLDF is applied to all cases, and all cases are categorized into two groups: those that are classified correctly and those that are not classified by support vectors (SVs). In the second phase, Revised IP-OLDF is applied to the cases misclassified by SVs. This method is called Revised IPLP-OLDF. In this research, it is evaluated whether the NM of Revised IPLP-OLDF is a good estimate of the minimum number of misclassifications (MNM) by Revised IP-OLDF. Four kinds of real data (Iris data, Swiss bank note data, student data, and CPD data) are used as training samples. Four kinds of 20,000 resampling cases generated from these data are used as the evaluation samples. There are a total of 149 models of all combinations of independent variables from these data. The NMs and CPU times of the 149 models are compared between Revised IPLP-OLDF and Revised IP-OLDF. The following results are obtained: (1) Revised IPLP-OLDF significantly improves CPU time. (2) In the case of training samples, all 149 NMs of Revised IPLP-OLDF are equal to the MNM of Revised IP-OLDF. (3) In the case of evaluation samples, most NMs of Revised IPLP-OLDF are equal to the NM of Revised IP-OLDF. (4) The generalization abilities of both discriminant functions are concluded to be high, because the differences between the error rates of the training and evaluation samples are almost within 2%. Therefore, Revised IPLP-OLDF is recommended for the analysis of big data instead of Revised IP-OLDF. Next, Revised IPLP-OLDF is compared with LDF and logistic regression by 100-fold cross validation using 100 re-sampling samples. Means of error rates of
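
    A minimal sketch of the MNM criterion itself, written as a mixed-integer program using the PuLP modeling library (assumed to be installed, with its bundled CBC solver). The big-M constant and variable bounds are assumptions, and the LP pre-pass of Revised IPLP-OLDF is not reproduced here.

```python
import numpy as np
import pulp

# Toy two-class data: 20 cases per class in 2 dimensions.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(2.5, 1.0, (20, 2))])
y = [-1] * 20 + [+1] * 20
n, d, M = X.shape[0], X.shape[1], 100.0    # M: assumed big-M constant

prob = pulp.LpProblem("MNM", pulp.LpMinimize)
w = [pulp.LpVariable(f"w{j}", -10, 10) for j in range(d)]
b = pulp.LpVariable("b", -10, 10)
z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(n)]
prob += pulp.lpSum(z)                      # minimize the misclassification count
for i in range(n):
    score = pulp.lpSum(float(X[i, j]) * w[j] for j in range(d)) + b
    prob += y[i] * score >= 1 - M * z[i]   # case i is correct unless z_i = 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("minimum number of misclassifications:", int(pulp.value(prob.objective)))
```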

  17. Improved methods for the mathematically controlled comparison of biochemical systems

    Directory of Open Access Journals (Sweden)

    Schwacke John H

    2004-06-01

    Full Text Available Abstract The method of mathematically controlled comparison provides a structured approach for the comparison of alternative biochemical pathways with respect to selected functional effectiveness measures. Under this approach, alternative implementations of a biochemical pathway are modeled mathematically, forced to be equivalent through the application of selected constraints, and compared with respect to selected functional effectiveness measures. While the method has been applied successfully in a variety of studies, we offer recommendations for improvements to the method that (1) relax requirements for definition of constraints sufficient to remove all degrees of freedom in forming the equivalent alternative, (2) facilitate generalization of the results thus avoiding the need to condition those findings on the selected constraints, and (3) provide additional insights into the effect of selected constraints on the functional effectiveness measures. We present improvements to the method and related statistical models, apply the method to a previously conducted comparison of network regulation in the immune system, and compare our results to those previously reported.

  18. Improved Real-time Denoising Method Based on Lifting Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Liu Zhaohua

    2014-06-01

    Full Text Available Signal denoising can not only enhance the signal to noise ratio (SNR) but also reduce the effect of noise. In order to satisfy the requirements of real-time signal denoising, an improved semisoft shrinkage real-time denoising method based on the lifting wavelet transform was proposed. The moving data window technology realizes the real-time wavelet denoising, which employs a wavelet transform based on the lifting scheme to reduce computational complexity. Also, the hyperbolic threshold function and recursive threshold computing can ensure the dynamic characteristics of the system; in addition, they can improve the real-time calculating efficiency as well. The simulation results show that the semisoft shrinkage real-time denoising method performs quite well in comparison to the traditional methods, namely soft-thresholding and hard-thresholding. Therefore, this method can solve more practical engineering problems.
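
    A minimal sketch of one-level lifting-scheme Haar analysis with a semisoft (firm) shrinkage rule. The fixed thresholds t1 < t2 stand in for the paper's hyperbolic threshold function and recursive threshold computation, which are not reproduced here.

```python
import numpy as np

def haar_lift(x):
    """One level of the lifting-scheme Haar transform."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even            # predict step: detail coefficients
    s = even + d / 2          # update step: preserves the running mean
    return s, d

def haar_unlift(s, d):
    even = s - d / 2
    odd = d + even
    out = np.empty(2 * s.size)
    out[0::2], out[1::2] = even, odd
    return out

def semisoft(d, t1, t2):
    """Semisoft (firm) shrinkage: zero below t1, identity above t2."""
    a = np.abs(d)
    shrunk = np.sign(d) * t2 * (a - t1) / (t2 - t1)
    return np.where(a <= t1, 0.0, np.where(a >= t2, d, shrunk))

t = np.linspace(0, 1, 256)
noisy = np.sin(2 * np.pi * 4 * t) + 0.2 * np.random.default_rng(4).normal(size=256)
s, d = haar_lift(noisy)
denoised = haar_unlift(s, semisoft(d, t1=0.2, t2=0.5))
print(np.std(noisy - denoised))   # magnitude of the removed component
```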

  19. An Improved Local Gradient Method for Sea Surface Wind Direction Retrieval from SAR Imagery

    Directory of Open Access Journals (Sweden)

    Lizhang Zhou

    2017-06-01

    Full Text Available Sea surface wind affects the fluxes of energy, mass and momentum between the atmosphere and ocean, and therefore regional and global weather and climate. With various satellite microwave sensors, sea surface wind can be measured with large spatial coverage in almost all-weather conditions, day or night. Like any other remote sensing measurement, sea surface wind measurement is also indirect. Therefore, it is important to develop appropriate wind speed and direction retrieval models for different types of microwave instruments. In this paper, a new sea surface wind direction retrieval method from synthetic aperture radar (SAR) imagery is developed. In the method, local gradients are computed in the frequency domain by combining the operations of smoothing and computing local gradients in one step, to simplify the process and avoid the difference approximation. This improved local gradients (ILG) method is compared with the traditional two-dimensional fast Fourier transform (2D FFT) method and the local gradients (LG) method, using interpolated wind directions from the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data and the Cross-Calibrated Multi-Platform (CCMP) wind vector product. The sensitivities to salt-and-pepper noise, additive noise and multiplicative noise are analyzed. The ILG method shows a better performance in retrieving wind directions than the other two methods.
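
    The one-step idea of combining smoothing and differentiation in the frequency domain can be sketched as below: the image spectrum is multiplied by a Gaussian transfer function and by ik at once, so no finite-difference approximation is needed. The Gaussian width, the test image and the structure-tensor orientation estimate are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def smoothed_gradients(img, sigma=2.0):
    """Smoothed local gradients computed entirely in the frequency domain."""
    ny, nx = img.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    G = np.exp(-0.5 * sigma**2 * (KX**2 + KY**2))  # Gaussian transfer function
    F = np.fft.fft2(img)
    gx = np.real(np.fft.ifft2(1j * KX * G * F))    # d/dx of the smoothed image
    gy = np.real(np.fft.ifft2(1j * KY * G * F))    # d/dy of the smoothed image
    return gx, gy

# Synthetic streak pattern; gradients are perpendicular to the streaks,
# with the usual 180-degree ambiguity resolved by external data in practice.
img = np.sin(np.add.outer(np.arange(128) * 0.3, np.arange(128) * 0.1))
gx, gy = smoothed_gradients(img)
theta = 0.5 * np.arctan2(2 * np.mean(gx * gy), np.mean(gx**2 - gy**2))
print(np.rad2deg(theta))   # dominant gradient orientation, modulo 180 degrees
```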

  20. Suggestions for an improved HRA method for use in Probabilistic Safety Assessment

    International Nuclear Information System (INIS)

    Parry, Gareth W.

    1995-01-01

    This paper discusses why an improved Human Reliability Analysis (HRA) approach for use in Probabilistic Safety Assessments (PSAs) is needed, and proposes a set of requirements on the improved HRA method. The constraints imposed by the need to embed the approach into the PSA methodology are discussed. One approach to laying the foundation for an improved method, using models from the cognitive psychology and behavioral science disciplines, is outlined

  1. Cantor-type cylindrical-coordinate method for differential equations with local fractional derivatives

    International Nuclear Information System (INIS)

    Yang, Xiao-Jun; Srivastava, H.M.; He, Ji-Huan; Baleanu, Dumitru

    2013-01-01

    In this Letter, we propose to use the Cantor-type cylindrical-coordinate method in order to investigate a family of local fractional differential operators on Cantor sets. Some testing examples are given to illustrate the capability of the proposed method for the heat-conduction equation on a Cantor set and the damped wave equation in fractal strings. It is seen to be a powerful tool to convert differential equations on Cantor sets from Cantorian-coordinate systems to Cantor-type cylindrical-coordinate systems.

  2. Topology optimization using the improved element-free Galerkin method for elasticity*

    International Nuclear Information System (INIS)

    Wu Yi; Ma Yong-Qi; Feng Wei; Cheng Yu-Min

    2017-01-01

    The improved element-free Galerkin (IEFG) method of elasticity is used to solve the topology optimization problems. In this method, the improved moving least-squares approximation is used to form the shape function. In a topology optimization process, the entire structure volume is considered as the constraint. From the solid isotropic microstructures with penalization, we select relative node density as a design variable. Then we choose the minimization of compliance to be an objective function, and compute its sensitivity with the adjoint method. The IEFG method in this paper can overcome the disadvantages of the singular matrices that sometimes appear in conventional element-free Galerkin (EFG) method. The central processing unit (CPU) time of each example is given to show that the IEFG method is more efficient than the EFG method under the same precision, and the advantage that the IEFG method does not form singular matrices is also shown. (paper)

  3. Two Modified Three-Term Type Conjugate Gradient Methods and Their Global Convergence for Unconstrained Optimization

    Directory of Open Access Journals (Sweden)

    Zhongbo Sun

    2014-01-01

    Full Text Available Two modified three-term type conjugate gradient algorithms which satisfy both the descent condition and the Dai-Liao type conjugacy condition are presented for unconstrained optimization. The first algorithm is a modification of the Hager and Zhang type algorithm in such a way that the search direction is descent and satisfies Dai-Liao's type conjugacy condition. The second simple three-term type conjugate gradient method can generate sufficient descent directions at every iteration; moreover, this property is independent of the steplength line search. Also, the algorithms could be considered as a modification of the MBFGS method, but with different zk. Under some mild conditions, the given methods are globally convergent, independently of the Wolfe line search, for general functions. The numerical experiments show that the proposed methods are very robust and efficient.

  4. Real-Time Pore Pressure Detection: Indicators and Improved Methods

    Directory of Open Access Journals (Sweden)

    Jincai Zhang

    2017-01-01

    Full Text Available High uncertainties may exist in the predrill pore pressure prediction in new prospects and deepwater subsalt wells; therefore, real-time pore pressure detection is highly needed to reduce drilling risks. The methods for pore pressure detection (the resistivity, sonic, and corrected d-exponent methods) are improved using depth-dependent normal compaction equations to adapt to the requirements of real-time monitoring. A new method is proposed to calculate pore pressure from the connection gas or elevated background gas, which can be used for real-time pore pressure detection. Pore pressure detection using logging-while-drilling, measurement-while-drilling, and mud logging data is also implemented and evaluated. Abnormal pore pressure indicators from well logs, mud logs, and wellbore instability events are identified and analyzed to interpret abnormal pore pressures for guiding real-time drilling decisions. Principles for identifying abnormal pressure indicators are proposed to improve real-time pore pressure monitoring.
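
    A minimal sketch of the depth-dependent detection idea described above, using an Eaton-style resistivity relation with an exponential normal-compaction trend; the coefficients r0 and b and the Eaton exponent are illustrative assumptions, not values from the paper:

        import numpy as np

        def pore_pressure_from_resistivity(depth_m, resistivity, obg_grad, pn_grad,
                                           r0=0.8, b=0.0004, eaton_exp=1.2):
            """Eaton-style pore pressure gradient from shale resistivity.

            Assumed model: normal-compaction trend Rn = r0 * exp(b * depth),
            Pp = OBG - (OBG - Pn) * (R / Rn)**eaton_exp, with all gradients
            expressed in the same units (e.g. specific gravity)."""
            rn = r0 * np.exp(b * depth_m)  # depth-dependent normal trend
            return obg_grad - (obg_grad - pn_grad) * (resistivity / rn) ** eaton_exp

        # Example: a lower-than-normal resistivity at 3000 m flags overpressure.
        print(pore_pressure_from_resistivity(3000.0, 0.9, obg_grad=2.3, pn_grad=1.03))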

  5. Characterization of YBaCuO films deposited by the sputtering method using a temple-bell-type substrate holder

    International Nuclear Information System (INIS)

    Kajikawa, H.; Fukumoto, Y.; Shibutani, K.; Hayashi, S.; Ishibashi, K.; Inoue, K.

    1992-01-01

    The as-grown YBaCuO films deposited by the off-axis sputtering method using a temple-bell type substrate holder showed good crystalline quality, with a minimum yield value x_min of 3.8% for 0.9 MeV He ions. Post-annealing degraded the crystalline quality, increasing x_min up to 11.8%, though it improved both T_c and J_c. It was supposed that the degradation was caused by the re-arrangement of oxygen atoms. (orig.)

  6. Bayesian target tracking based on particle filter

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Because they can deal with nonlinear or non-Gaussian problems, particle filters have been studied by many researchers. Based on the particle filter, the extended Kalman filter (EKF) proposal function is applied to Bayesian target tracking. Novel techniques such as the Markov chain Monte Carlo (MCMC) method and the resampling step are also introduced into Bayesian target tracking, and the simulation results confirm that the particle filter improved with these techniques outperforms the basic one.
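
    As a concrete illustration of the resampling step mentioned above, a minimal systematic-resampling routine (a common generic choice in particle filtering, not necessarily the scheme used in the cited work):

        import numpy as np

        def systematic_resample(particles, weights, rng=None):
            """Resample particles in proportion to their weights (systematic scheme)."""
            if rng is None:
                rng = np.random.default_rng()
            n = len(weights)
            positions = (rng.random() + np.arange(n)) / n   # n evenly spaced pointers
            cumulative = np.cumsum(weights)
            cumulative[-1] = 1.0                            # guard against round-off
            indexes = np.searchsorted(cumulative, positions)
            return particles[indexes], np.full(n, 1.0 / n)  # uniform weights afterwards

    After each measurement update, resampling replaces low-weight particles with copies of high-weight ones, which counters the weight degeneracy that otherwise cripples the basic filter.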

  7. A systematic review of interventions to improve outcomes for young adults with Type 1 diabetes.

    LENUS (Irish Health Repository)

    O'Hara, M C

    2016-10-20

    Many young adults with Type 1 diabetes experience poor outcomes. The aim of this systematic review was to synthesize the evidence regarding the effectiveness of interventions aimed at improving clinical, behavioural or psychosocial outcomes for young adults with Type 1 diabetes.

  8. Probiotic yogurt improves antioxidant status in type 2 diabetic patients.

    Science.gov (United States)

    Ejtahed, Hanie S; Mohtadi-Nia, Javad; Homayouni-Rad, Aziz; Niafar, Mitra; Asghari-Jafarabadi, Mohammad; Mofid, Vahid

    2012-05-01

    Oxidative stress plays a major role in the pathogenesis and progression of diabetes. Among various functional foods with an antioxidant effect, probiotic foods have been reported to repress oxidative stress. The objective of this clinical trial was to assess the effects of probiotic and conventional yogurt on blood glucose and antioxidant status in type 2 diabetic patients. Sixty-four patients with type 2 diabetes mellitus, 30 to 60 y old, were assigned to two groups in this randomized, double-blind, controlled clinical trial. The patients in the intervention group consumed 300 g/d of probiotic yogurt containing Lactobacillus acidophilus La5 and Bifidobacterium lactis Bb12 and those in the control group consumed 300 g/d of conventional yogurt for 6 wk. Fasting blood samples, 24-h dietary recalls, and anthropometric measurements were collected at the baseline and at the end of the trial. Probiotic yogurt significantly decreased fasting blood glucose and increased antioxidant enzyme activities and total antioxidant status, whereas one measured activity showed no significant change within either group (P > 0.05). The consumption of probiotic yogurt improved fasting blood glucose and antioxidant status in type 2 diabetic patients. These results suggest that probiotic yogurt is a promising agent for diabetes management. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Development of an ELA-DRA gene typing method based on pyrosequencing technology.

    Science.gov (United States)

    Díaz, S; Echeverría, M G; It, V; Posik, D M; Rogberg-Muñoz, A; Pena, N L; Peral-García, P; Vega-Pla, J L; Giovambattista, G

    2008-11-01

    The polymorphism of the equine lymphocyte antigen (ELA) class II DRA gene had been detected by polymerase chain reaction-single-strand conformational polymorphism (PCR-SSCP) and reference strand-mediated conformation analysis. These methodologies allowed the identification of 11 ELA-DRA exon 2 sequences, three of which are widely distributed among domestic horse breeds. Herein, we describe the development of a pyrosequencing-based method applicable to ELA-DRA typing, by screening samples from eight different horse breeds previously typed by PCR-SSCP. This sequence-based method would be useful in high-throughput genotyping of major histocompatibility complex genes in horses and other animal species, making this system interesting as a rapid screening method for animal genotyping of immune-related genes.

  10. Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data

    KAUST Repository

    Qahtan, Abdulhakim

    2016-05-11

    Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredicted scenarios. To reduce the computational cost, data streams are often studied in the form of condensed representations, e.g., the Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling, where more/fewer resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime-available PDF estimated by KDE-Track can be applied for visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York City. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead and provides insightful analysis of the pick-up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data.
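
    A toy sketch of the resampling-point idea behind KDE-Track: estimate the kernel density only at a grid of resampling points, then interpolate between them. The fixed grid and bandwidth here are placeholders for KDE-Track's adaptive scheme and online updates:

        import numpy as np

        def kde_at_points(data, points, h=0.3):
            """Gaussian KDE evaluated only at a small set of resampling points."""
            z = (points[:, None] - data[None, :]) / h
            return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

        points = np.linspace(-4.0, 4.0, 33)              # resampling points
        pdf = kde_at_points(np.random.randn(1000), points)
        density_anywhere = np.interp(0.37, points, pdf)  # interpolated estimate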

  11. Method of renormalization potential for one model of Hartree-Fock-Slater type

    CERN Document Server

    Zasorin, Y V

    2002-01-01

    A new method of potential renormalization for the quasiclassical model of the Hartree-Fock-Slater real potential is proposed. The method makes it possible to easily construct the wave functions and, contrary to the majority of similar methods, it does not require knowledge of the real-type potential.

  12. Improvement of type 2 diabetes mellitus in obese and non-obese patients after the duodenal switch operation.

    Science.gov (United States)

    Frenken, M; Cho, E Y; Karcz, W K; Grueneberger, J; Kuesters, S

    2011-01-01

    Introduction. Type 2 diabetes mellitus (T2DM) is one of the most important obesity-related comorbidities. This study was undertaken to characterise the effect of the biliopancreatic diversion with duodenal switch (BPD-DS) in morbidly obese and nonmorbidly obese diabetic patients. Methods. Outcome of 74 obese diabetic patients after BPD-DS and 16 non-obese diabetic patients after BPD or gastric bypass surgery was evaluated. Insulin usage, HbA(1c)-levels, and index of HOMA-IR (homeostasis model assessment of insulin resistance) were measured. Results. A substantial fraction of patients is free of insulin and shows an improved insulin sensitivity early after the operation, another fraction gets free of insulin in a 12-month period after the operation, and a small fraction of long-term insulin users will not get free of insulin but nevertheless shows an improved metabolic status (less insulin needed, normal HbA(1c)-levels). Conclusion. BPD-DS leads to an improvement of T2DM in obese and non-obese patients. Nevertheless, more data is needed to clarify indications and mechanisms of action and to adjust our operation techniques to the needs of non-obese diabetic patients.

  13. Improvement of Type 2 Diabetes Mellitus in Obese and Non-Obese Patients after the Duodenal Switch Operation

    Directory of Open Access Journals (Sweden)

    M. Frenken

    2011-01-01

    Full Text Available Introduction. Type 2 diabetes mellitus (T2DM) is one of the most important obesity-related comorbidities. This study was undertaken to characterise the effect of the biliopancreatic diversion with duodenal switch (BPD-DS) in morbidly obese and nonmorbidly obese diabetic patients. Methods. Outcome of 74 obese diabetic patients after BPD-DS and 16 non-obese diabetic patients after BPD or gastric bypass surgery was evaluated. Insulin usage, HbA1c-levels, and index of HOMA-IR (homeostasis model assessment of insulin resistance) were measured. Results. A substantial fraction of patients is free of insulin and shows an improved insulin sensitivity early after the operation, another fraction gets free of insulin in a 12-month period after the operation, and a small fraction of long-term insulin users will not get free of insulin but nevertheless shows an improved metabolic status (less insulin needed, normal HbA1c-levels). Conclusion. BPD-DS leads to an improvement of T2DM in obese and non-obese patients. Nevertheless, more data is needed to clarify indications and mechanisms of action and to adjust our operation techniques to the needs of non-obese diabetic patients.

  14. Improvement of spatial discretization error on the semi-analytic nodal method using the scattered source subtraction method

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Tatsumi, Masahiro

    2006-01-01

    In this paper, the scattered source subtraction (SSS) method is newly proposed to improve the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation to make the spatial variation of the source term small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used in evaluating the response of partial currents), its implementation is easy. The validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly improve the spatial discretization error. Since the SSS method does not have any negative impact on execution time, convergence behavior or memory requirements, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)
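
    In one-group diffusion form, the manipulation described above is simply (a schematic sketch, not the authors' multigroup nodal equations):

        -D\nabla^2\phi + \Sigma_t\phi = \Sigma_s\phi + q \quad\longrightarrow\quad -D\nabla^2\phi + (\Sigma_t - \Sigma_s)\phi = q,

    so the flat-source approximation is applied only to the spatially smoother external source q rather than to the full right-hand side including the scattered source \Sigma_s\phi.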

  15. Perovskite type nanopowders and thin films obtained by chemical methods

    Directory of Open Access Journals (Sweden)

    Viktor Fruth

    2010-09-01

    Full Text Available The review presents the authors' contributions to the preparation of two types of perovskites, namely BiFeO3 and LaCoO3, by innovative methods. The studied perovskites were obtained as powders, films and sintered bodies. Their complex structural and morphological characterization is also presented. The obtained results have underlined the important influence of the method of preparation on the properties of the synthesized perovskites.

  16. An improved electrokinetic method to consolidate porous materials

    DEFF Research Database (Denmark)

    Feijoo, Jorge; Ottosen, Lisbeth M.; Nóvoa, X. R.

    2017-01-01

    In recent years, a new consolidation method based on electrokinetic techniques was developed. Consolidation using commercial products has some limitations, such as: (1) low penetrability; (2) no chemical and mineralogical affinity with the material to be treated; and (3) release of toxic compounds (VOCs) during solvent evaporation. The electrokinetic method allows filling some pores by the precipitation of an inorganic compound and thereby increases the penetration depth of current consolidation treatments. However, this method needs to be improved, since: (1) no special care is taken in controlling the pH of the solutions in contact with the porous material, which can damage it; and (2) it is difficult to determine in which area the consolidation takes place. In this study, an electrokinetic consolidation method which has two steps, between which the current is reversed, is proposed to solve these limitations.

  17. Optimal plot size in the evaluation of papaya scions: proposal and comparison of methods

    Directory of Open Access Journals (Sweden)

    Humberto Felipe Celanti

    Full Text Available Evaluating the quality of scions is extremely important, and it can be done through characteristics of the shoots and roots. This experiment evaluated the height of the aerial part, stem diameter, number of leaves, petiole length and length of roots of papaya seedlings. Analyses were performed on a blank trial with 240 seedlings of "Golden Pecíolo Curto". The optimum plot size was determined by applying the method of maximum curvature, the method of maximum curvature of the coefficient of variation, and a newly proposed method which incorporates bootstrap resampling simulation into the maximum curvature method. According to the results obtained, five is the optimal number of seedlings of papaya "Golden Pecíolo Curto" per plot. The proposed method of bootstrap simulation with replacement provides optimal plot sizes equal to or higher than those of the maximum curvature method, and the same plot size as the maximum curvature method of the coefficient of variation.
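
    A schematic of the proposed combination of bootstrap resampling and the maximum-curvature criterion; grouping consecutive resampled seedlings into plots of size k is an illustrative simplification, not the paper's exact procedure:

        import numpy as np

        def bootstrap_cv_curve(values, plot_sizes, n_boot=1000, seed=1):
            """Mean CV (%) versus plot size over bootstrap resamples (with replacement)."""
            rng = np.random.default_rng(seed)
            curves = np.empty((n_boot, len(plot_sizes)))
            for b in range(n_boot):
                sample = rng.choice(values, size=len(values), replace=True)
                for j, k in enumerate(plot_sizes):
                    n_plots = len(sample) // k
                    plots = sample[: n_plots * k].reshape(n_plots, k).mean(axis=1)
                    curves[b, j] = 100.0 * plots.std(ddof=1) / plots.mean()
            return curves.mean(axis=0)  # feed this curve to a maximum-curvature rule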

  18. An improved method for chromosome counting in maize.

    Science.gov (United States)

    Kato, A

    1997-09-01

    An improved method for counting chromosomes in maize (Zea mays L.) is presented. Application of cold treatment (5 °C, 24 hr), heat treatment (42 °C, 5 min) and a second cold treatment (5 °C, 24 hr) to root tips before fixation increased the number of condensed and dispersed countable metaphase chromosome figures. Fixed root tips were prepared by the enzymatic maceration-air drying method and preparations were stained with acetic orcein. Under favorable conditions, one preparation with 50-100 countable chromosome figures could be obtained in diploid maize using this method. Conditions affecting the dispersion of the chromosomes are described. This technique is especially useful for determining the somatic chromosome number in triploid and tetraploid maize lines.

  19. Molecular typing of Legionella pneumophila from air-conditioning cooling waters using mip gene, SBT, and FAFLP methods.

    Science.gov (United States)

    Gong, Xiangli; Li, Juntao; Zhang, Ying; Hou, Shuiping; Qu, Pinghua; Yang, Zhicong; Chen, Shouyi

    2017-08-01

    Legionella spp. are important waterborne pathogens. Molecular typing has become an important method for outbreak investigations and source tracking of Legionnaires' disease. In a survey program conducted by the Guangzhou Center for Disease Control and Prevention, multiple serotypes of Legionella pneumophila (L. pneumophila) were isolated from waters in air-conditioning cooling towers in the urban Guangzhou region, China, between 2008 and 2011. Three genotyping methods, mip (macrophage infectivity potentiator) genotyping, SBT (sequence-based typing) and FAFLP (fluorescent amplified fragment length polymorphism) analysis, were used to type these waterborne L. pneumophila isolates. The three methods were capable of typing all 134 isolates and a reference strain of L. pneumophila (ATCC33153), with discriminatory indices of 0.7034, 0.9218 and 0.9376 for the mip, SBT and FAFLP methods, respectively. Among the 9 serotypes of the 134 isolates, 10, 50 and 34 molecular types were detected by the mip, SBT and FAFLP methods, respectively. The mip genotyping and SBT typing are more feasible for inter-laboratory result sharing and comparison of different types of L. pneumophila. The SBT and FAFLP typing methods were rapid, with higher discriminatory abilities. Combining two or more of the typing methods enables more accurate typing of Legionella isolates for outbreak investigations and source tracking of Legionnaires' disease. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Method of mechanical quadratures for solving singular integral equations of various types

    Science.gov (United States)

    Sahakyan, A. V.; Amirjanyan, H. A.

    2018-04-01

    The method of mechanical quadratures is proposed as a common approach intended for solving the integral equations defined on finite intervals and containing Cauchy-type singular integrals. This method can be used to solve singular integral equations of the first and second kind, equations with generalized kernel, weakly singular equations, and integro-differential equations. The quadrature rules for several different integrals represented through the same coefficients are presented. This allows one to reduce the integral equations containing integrals of different types to a system of linear algebraic equations.
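
    One classical rule of the kind described, here for a Cauchy principal-value integral with the Chebyshev weight (the Gauss-Chebyshev/Erdogan-Gupta rule is a standard instance; this sketch is illustrative, not the authors' specific quadrature coefficients):

        import numpy as np

        def cauchy_pv_chebyshev(g, x, n=64):
            """PV integral of g(t) / (sqrt(1 - t^2) * (t - x)) over [-1, 1].

            Uses Gauss-Chebyshev nodes t_k; the rule is exact for polynomial g
            at the collocation points x_m = cos(m*pi/n), m = 1..n-1."""
            k = np.arange(1, n + 1)
            t = np.cos((2 * k - 1) * np.pi / (2 * n))
            return (np.pi / n) * np.sum(g(t) / (t - x))

        x_m = np.cos(3 * np.pi / 64)                  # a collocation point
        print(cauchy_pv_chebyshev(lambda t: t, x_m))  # analytic value: pi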

  1. Improvement of diabetes, obesity and hypertension in type 2 diabetic KKAy mice by bis(allixinato)oxovanadium(IV) complex

    International Nuclear Information System (INIS)

    Adachi, Yusuke; Yoshikawa, Yutaka; Yoshida, Jiro; Kodera, Yukihiro; Katoh, Akira; Takada, Jitsuya; Sakurai, Hiromu

    2006-01-01

    Previously, we found that bis(allixinato)oxovanadium(IV) (VO(alx)2) exhibits a potent hypoglycemic activity in type 1-like diabetic mice. Since the enhancement of insulin sensitivity is involved in one of the mechanisms by which vanadium exerts its anti-diabetic effects, VO(alx)2 was further tested in type 2 diabetes with low insulin sensitivity. The effect of oral administration of VO(alx)2 was examined in obesity-linked type 2 diabetic KKAy mice. Treatment of VO(alx)2 for 4 weeks normalized hyperglycemia, glucose intolerance, hyperinsulinemia, hypercholesterolemia and hypertension in KKAy mice; however, it had no effect on hypoadiponectinemia. VO(alx)2 also improved hyperleptinemia, following attenuation of obesity in KKAy mice. This is the first example in which a vanadium compound improved leptin resistance in type 2 diabetes by oral administration. On the basis of these results, VO(alx)2 is proposed to enhance not only insulin sensitivity but also leptin sensitivity, which in turn improves diabetes, obesity and hypertension in an obesity-linked type 2 diabetic animal.

  2. Effective lifestyle interventions to improve type II diabetes self-management for those with schizophrenia or schizoaffective disorder: a systematic review

    Directory of Open Access Journals (Sweden)

    Cimo Adriana

    2012-03-01

    Full Text Available Abstract Background The prevalence of type II diabetes among individuals suffering from schizophrenia or schizoaffective disorders is more than double that of the general population. By 2005, North American professional medical associations of Psychiatry, Diabetes, and Endocrinology responded by recommending continuous metabolic monitoring for this population to control complications from obesity and diabetes. However, these recommendations do not identify the types of effective treatment for people with schizophrenia who have type II diabetes. To fill this gap, this systematic evidence review identifies effective lifestyle interventions that enhance quality care in individuals who are suffering from type II diabetes and schizophrenia or other schizoaffective disorders. Methods A systematic search from Medline, CINAHL, PsycINFO, and ISI Web of Science was conducted. Of the 1810 unique papers that were retrieved, four met the inclusion/exclusion criteria and were analyzed. Results The results indicate that diabetes education is effective when it incorporates diet and exercise components, while using a design that addresses challenges such as cognition, motivation, and weight gain that may result from antipsychotics. Conclusions This paper begins to point to effective interventions that will improve type II diabetes management for people with schizophrenia or other schizoaffective disorders.

  3. Can long-term thiamine treatment improve the clinical outcomes of myotonic dystrophy type 1?

    Directory of Open Access Journals (Sweden)

    Antonio Costantini

    2016-01-01

    Full Text Available Myotonic dystrophy type 1, also known as Steinert's disease, is an autosomal dominant disorder with multisystemic clinical features affecting the skeletal and cardiac muscles, the eyes, and the endocrine system. Thiamine (vitamin B1) is a cofactor of fundamental enzymes involved in the energetic cell metabolism; recent studies described its role in oxidative stress, protein processing, peroxisomal function, and gene expression. Thiamine deficiency is critical mainly in the central and peripheral nervous system, as well as in the muscular cells. Our aim was to investigate the potential therapeutical effects of long-term treatment with thiamine in myotonic dystrophy type 1 in an observational open-label pilot study. We described two patients with myotonic dystrophy type 1 treated with intramuscular thiamine 100 mg twice a week for 12 or 11 months. We evaluated the patients using the grading of muscle strength according to the Medical Research Council (MRC), the Muscular Impairment Rating Scale (MIRS), and the Modified Barthel Index. High-dose thiamine treatment was well tolerated and effective in improving the motor symptomatology, particularly the muscle strength evaluated with the MRC scale, and the patients' activities of daily living as measured with the Modified Barthel Index. At the end of treatment, the MRC score was 5 in the proximal muscles and 2-4 in the distal muscles (the MRC score before the treatment was 3-4 and 1-3, respectively). The MIRS grade improved by 25% compared to baseline for both patients. In patient #1, the Modified Barthel Index improved by 44%, and in patient #2 by 29%. These findings suggest that clinical outcomes are improved by long-term thiamine treatment.

  4. Can long-term thiamine treatment improve the clinical outcomes of myotonic dystrophy type 1?

    Science.gov (United States)

    Costantini, Antonio; Trevi, Erika; Pala, Maria Immacolata; Fancellu, Roberto

    2016-09-01

    Myotonic dystrophy type 1, also known as Steinert's disease, is an autosomal dominant disorder with multisystemic clinical features affecting the skeletal and cardiac muscles, the eyes, and the endocrine system. Thiamine (vitamin B1) is a cofactor of fundamental enzymes involved in the energetic cell metabolism; recent studies described its role in oxidative stress, protein processing, peroxisomal function, and gene expression. Thiamine deficiency is critical mainly in the central and peripheral nervous system, as well as in the muscular cells. Our aim was to investigate the potential therapeutical effects of long-term treatment with thiamine in myotonic dystrophy type 1 in an observational open-label pilot study. We described two patients with myotonic dystrophy type 1 treated with intramuscular thiamine 100 mg twice a week for 12 or 11 months. We evaluated the patients using the grading of muscle strength according to Medical Research Council (MRC), the Muscular Impairment Rating Scale (MIRS), and the Modified Barthel index. High-dose thiamine treatment was well tolerated and effective in improving the motor symptomatology, particularly the muscle strength evaluated with the MRC scale, and the patients' activities of daily living using the Modified Barthel Index. At the end of treatment, the MRC score was 5 in the proximal muscles and 2-4 in the distal muscles (the MRC score before the treatment was 3-4 and 1-3, respectively). The MIRS grade improved by 25% compared to baseline for both patients. In patient #1, the Modified Barthel Index improved by 44%, and in patient #2 by 29%. These findings suggest that clinical outcomes are improved by long-term thiamine treatment.

  5. A study for the improvement on knife-edge-type metal-seal flange

    International Nuclear Information System (INIS)

    Obara, Kenjiro; Nakamura, Kazuyuki; Murakami, Yoshio; Naganuma, Masamitsu; Kitamura, Kazunori; Uchida, Takao; Kondo, Mitsunori.

    1989-01-01

    This paper describes the performance characteristics of the knife-edge-type metal-seal flange. The aim of the study is to make the coupling function of the flange more efficient. Parameters of the improved flange are smaller than those of the conventional flange as follows: number of bolts, 1/2-1/3; tightening torque, 3/5; flange thickness, 7/10. (author)

  6. Methods to improve patient recruitment and retention in stroke trials

    DEFF Research Database (Denmark)

    Berge, Eivind; Stapf, Christian; Al-Shahi Salman, Rustam

    2016-01-01

    Background: The success of randomized-controlled stroke trials is dependent on the recruitment and retention of a sufficient number of patients, but fewer than half of all trials meet their target number of patients. Methods: We performed a search and review of the literature, and conducted a survey and workshop among 56 European stroke trialists, to identify barriers, suggest methods to improve recruitment and retention, and make a priority list of interventions that merit further evaluation. Results: The survey and workshop identified a number of barriers to patient recruitment and retention, from patients' incapacity to consent, to handicaps that prevent patients from participation in trial-specific follow-up. Methods to improve recruitment and retention may include simple interventions with individual participants, funding of research networks, and reimbursement of new treatments...

  7. Effects of different pretreatment methods on fermentation types and dominant bacteria for hydrogen production

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Nan-Qi; Guo, Wan-Qian; Liu, Bing-Feng; Wang, Xing-Zu; Ding, Jie; Chen, Zhao-Bo [State Key Laboratory of Urban Water Resource and Environment, Harbin Institute of Technology, Harbin 150090, Heilongjiang (China); Wang, Xiang-Jing; Xiang, Wen-Sheng [Research Center of Life Science and Biotechnology, Northeast Agricultural University, Harbin 150030 (China)

    2008-08-15

    In order to enrich hydrogen-producing bacteria and to establish high-efficiency communities of mixed microbial cultures, the inoculum needs to be pretreated before cultivation. Four pretreatment methods, including heat-shock pretreatment, acid pretreatment, alkaline pretreatment and repeated-aeration pretreatment, were performed on seed sludge collected from a secondary settling tank of a municipal wastewater treatment plant. In contrast to the control test without any pretreatment, the heat-shock, acid and repeated-aeration pretreatments completely suppressed the methanogenic activity of the seed sludge, but the alkaline pretreatment did not. Employing different pretreatment methods resulted in different fermentation types: butyric-acid-type fermentation was achieved by the heat-shock and alkaline pretreatments, mixed-acid-type fermentation by acid pretreatment and the control, and ethanol-type fermentation by repeated-aeration pretreatment. Denaturing gradient gel electrophoresis (DGGE) profiles revealed that the pretreatment method substantially affected the species composition of the microbial communities. The highest hydrogen yield of 1.96 mol/mol-glucose was observed with the repeated-aeration pretreatment method, while the lowest was obtained when the seed sludge was acidified. It is concluded that the pretreatment methods led to differences in the initial microbial communities, which might be directly responsible for the different fermentation types and hydrogen yields. (author)

  8. Update and Improve Subsection NH - Alternative Simplified Creep-Fatigue Design Methods

    International Nuclear Information System (INIS)

    Asayama, Tai

    2009-01-01

    This report describes the results of the investigation of Task 10 of the DOE/ASME Materials NGNP/Generation IV Project, based on a contract between ASME Standards Technology, LLC (ASME ST-LLC) and the Japan Atomic Energy Agency (JAEA). Task 10 is to update and improve Subsection NH -- alternative simplified creep-fatigue design methods. Five newly proposed, promising creep-fatigue evaluation methods were investigated: (1) the modified ductility exhaustion method, (2) the strain range separation method, (3) an approach for pressure vessel application, (4) a hybrid method of time fraction and ductility exhaustion, and (5) a simplified model test approach. The outlines of these methods are presented first, and their predictability is demonstrated using the creep-fatigue data collected in previous Tasks 3 and 5. All the methods (except the simplified model test approach, which is not ready for application) predicted the experimental results fairly accurately. On the other hand, the predicted creep-fatigue life in long-term regions showed considerable differences among the methodologies; these differences stem from the concepts each method is based on. All the new methods investigated in this report have advantages over the currently employed time fraction rule and offer technical insights that merit serious consideration in the improvement of creep-fatigue evaluation procedures. The main points of the modified ductility exhaustion method, the strain range separation method, the approach for pressure vessel application and the hybrid method can be reflected in the improvement of the current time fraction rule. The simplified model test approach would offer whole new advantages, including robustness and simplicity, which are definitely attractive, but this approach is yet to be validated for implementation at this point. Therefore, this report recommends the following two steps as a course of improvement of NH based on the newly proposed creep-fatigue evaluation
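
    For orientation, the currently employed time fraction rule that these proposals would refine can be written, in its generic textbook form (not the exact Subsection NH formulation), as a linear creep-fatigue damage sum

        D = \sum_i \frac{n_i}{N_{f,i}} + \sum_j \frac{\Delta t_j}{t_{r,j}} \le D_{\mathrm{allow}},

    where n_i/N_{f,i} is the fatigue damage fraction and \Delta t_j/t_{r,j} the creep (time fraction) damage; ductility exhaustion variants replace the time fraction term with an integral of creep strain rate over creep ductility, \int (\dot{\varepsilon}_c/\varepsilon_f)\,dt.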

  9. Blood group typing based on recording the elastic scattering of laser radiation using the method of digital imaging

    Energy Technology Data Exchange (ETDEWEB)

    Dolmashkin, A A; Dubrovskii, V A; Zabenkov, I V [V.I.Razumovsky Saratov State Medical University, Saratov (Russian Federation)

    2012-05-31

    The possibility of determining the human blood group by recording the scattering of laser radiation with the help of the digital imaging method is demonstrated. It is experimentally shown that the action of a standing ultrasound wave leads to acceleration of the agglutination reaction of red blood cells, to the formation of larger immune complexes of red blood cells and, as a consequence, to acceleration of their sedimentation. In the absence of agglutination of red blood cells the ultrasound does not enhance the relevant processes. This difference in the results of ultrasound action on the mixture of blood and serum allows a method of blood typing to be proposed. Theoretical modelling of the technique of practical blood typing, carried out on the basis of the elastic light scattering theory, agrees well with the experimental results, which made it possible to plan further improvement of the proposed method. The studies of specific features of sedimentation of red blood cells and their immune complexes were aimed at the optimisation of the sample preparation, i.e., at the search for such experimental conditions that provide the maximal resolution of the method and the device for registering the red blood cell agglutination reaction. The results of the study may be used in designing the instrumentation for blood group assessment in humans.

  10. Blood group typing based on recording the elastic scattering of laser radiation using the method of digital imaging

    International Nuclear Information System (INIS)

    Dolmashkin, A A; Dubrovskii, V A; Zabenkov, I V

    2012-01-01

    The possibility of determining the human blood group by recording the scattering of laser radiation with the help of the digital imaging method is demonstrated. It is experimentally shown that the action of a standing ultrasound wave leads to acceleration of the agglutination reaction of red blood cells, to the formation of larger immune complexes of red blood cells and, as a consequence, to acceleration of their sedimentation. In the absence of agglutination of red blood cells the ultrasound does not enhance the relevant processes. This difference in the results of ultrasound action on the mixture of blood and serum allows a method of blood typing to be proposed. Theoretical modelling of the technique of practical blood typing, carried out on the basis of the elastic light scattering theory, agrees well with the experimental results, which made it possible to plan further improvement of the proposed method. The studies of specific features of sedimentation of red blood cells and their immune complexes were aimed at the optimisation of the sample preparation, i.e., at the search for such experimental conditions that provide the maximal resolution of the method and the device for registering the red blood cell agglutination reaction. The results of the study may be used in designing the instrumentation for blood group assessment in humans.

  11. Through Their Eyes: Lessons Learned Using Participatory Methods in Health Care Quality Improvement Projects.

    Science.gov (United States)

    Balbale, Salva N; Locatelli, Sara M; LaVela, Sherri L

    2016-08-01

    In this methodological article, we examine participatory methods in depth to demonstrate how these methods can be adopted for quality improvement (QI) projects in health care. We draw on existing literature and our QI initiatives in the Department of Veterans Affairs to discuss the application of photovoice and guided tours in QI efforts. We highlight lessons learned and several benefits of using participatory methods in this area. Using participatory methods, evaluators can engage patients, providers, and other stakeholders as partners to enhance care. Participant involvement helps yield actionable data that can be translated into improved care practices. Use of these methods also helps generate key insights to inform improvements that truly resonate with stakeholders. Using participatory methods is a valuable strategy to harness participant engagement and drive improvements that address individual needs. In applying these innovative methodologies, evaluators can transcend traditional approaches to uniquely support evaluations and improvements in health care. © The Author(s) 2015.

  12. Prototyping method for Bragg-type atom interferometers

    Energy Technology Data Exchange (ETDEWEB)

    Benton, Brandon; Krygier, Michael; Heward, Jeffrey; Edwards, Mark [Department of Physics, Georgia Southern University, Statesboro, Georgia 30460-8031 (United States); Clark, Charles W. [Joint Quantum Institute, National Institute of Standards and Technology and the University of Maryland, Gaithersburg, Maryland 20899 (United States)

    2011-10-15

    We present a method for rapid modeling of new Bragg ultracold atom-interferometer (AI) designs useful for assessing the performance of such interferometers. The method simulates the overall effect on the condensate wave function in a given AI design using two separate elements. These are (1) modeling the effect of a Bragg pulse on the wave function and (2) approximating the evolution of the wave function during the intervals between the pulses. The actual sequence of these pulses and intervals is then followed to determine the approximate final wave function from which the interference pattern can be calculated. The exact evolution between pulses is assumed to be governed by the Gross-Pitaevskii (GP) equation, whose solution is approximated using a Lagrangian variational method to facilitate rapid estimation of performance. The method presented here is an extension of an earlier one that was used to analyze the results of an experiment [J. E. Simsarian et al., Phys. Rev. Lett. 85, 2040 (2000)], where the phase of a Bose-Einstein condensate was measured using a Mach-Zehnder-type Bragg AI. We have developed both 1D and 3D versions of this method and we have determined their validity by comparing their predicted interference patterns with those obtained by numerical integration of the 1D GP equation and with the results of the above experiment. We find excellent agreement between the 1D interference patterns predicted by this method and those found by the GP equation. We show that we can reproduce all of the results of that experiment without recourse to an ad hoc velocity-kick correction needed by the earlier method, including some experimental results that the earlier model did not predict. We also found that this method provides estimates of 1D interference patterns at least four orders of magnitude faster than direct numerical solution of the 1D GP equation.

  13. TESTING METHODS FOR MECHANICALLY IMPROVED SOILS: RELIABILITY AND VALIDITY

    Directory of Open Access Journals (Sweden)

    Ana Petkovšek

    2017-10-01

    Full Text Available The possibility of in-situ mechanical improvement for reducing the liquefaction potential of silty sands was investigated using three different techniques: Vibratory Roller Compaction, Rapid Impact Compaction (RIC) and Soil Mixing. Material properties at all test sites were investigated before and after improvement with laboratory and in situ tests (CPT, SDMT, DPSH B, static and dynamic load plate tests, geohydraulic tests). Correlation between the results obtained by the different test methods gave inconclusive answers.

  14. A pulse-shape discrimination method for improving Gamma-ray spectrometry based on a new digital shaping filter

    Science.gov (United States)

    Qin, Zhang-jian; Chen, Chuan; Luo, Jun-song; Xie, Xing-hong; Ge, Liang-quan; Wu, Qi-fan

    2018-04-01

    A usual practice for improving spectrum quality in the development of nuclear spectroscopy is to design a good shaping filter that improves the signal-to-noise ratio. Another method is proposed in this paper, based on discriminating the pulse shape and discarding bad pulses whose shapes are distorted as a result of abnormal noise, unusual ballistic deficit or severe pulse pile-up. An Exponentially Decaying Pulse (EDP) generated in nuclear particle detectors can be transformed into a Mexican Hat Wavelet Pulse (MHWP), and the derivation of this transform is given. After the transform is performed, the baseline drift is removed in the new MHWP. Moreover, the MHWP shape can be discriminated with three parameters: the time difference between the two minima of the MHWP, and the two ratios of the amplitudes of the two minima to the amplitude of the maximum in the MHWP. A new type of nuclear spectroscopy system was implemented based on the new digital shaping filter, and Gamma-ray spectra were acquired with a variety of pulse-shape discrimination levels. The results showed that the energy resolution and the peak-to-Compton ratio were both improved after the pulse-shape discrimination method was applied.
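
    A compact sketch of the three shape parameters described above, computed from a sampled MHWP; the peak finding here is deliberately naive and assumes a clean pulse with one maximum flanked by two minima:

        import numpy as np

        def mhwp_shape_parameters(pulse, dt=1.0):
            """Return (time between the two minima, amplitude ratios of each
            minimum to the maximum) for a Mexican-hat-shaped pulse."""
            i_max = int(np.argmax(pulse))
            i_min_left = int(np.argmin(pulse[:i_max]))           # leading lobe minimum
            i_min_right = i_max + int(np.argmin(pulse[i_max:]))  # trailing lobe minimum
            dt_minima = (i_min_right - i_min_left) * dt
            return (dt_minima,
                    pulse[i_min_left] / pulse[i_max],
                    pulse[i_min_right] / pulse[i_max])

        # Pulses whose three parameters fall outside preset windows are discarded.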

  15. Molecular methods for typing of Helicobacter pylori and their applications

    DEFF Research Database (Denmark)

    Colding, H; Hartzen, S H; Roshanisefat, H

    1999-01-01

    ...e.g. the urease genes. Furthermore, the reproducibility, discriminatory power, ease of performance and interpretation, cost and toxic procedures of each method are assessed. To date, no direct comparison of all the molecular typing methods described has been performed in the same study with the same H. pylori strains. However, PCR analysis of the urease gene directly on suspensions of H. pylori or gastric biopsy material seems to be useful for routine use and applicable in specific epidemiological situations.

  16. Rebound effect of improved energy efficiency for different energy types: A general equilibrium analysis for China

    International Nuclear Information System (INIS)

    Lu, Yingying; Liu, Yu; Zhou, Meifang

    2017-01-01

    This paper explores the rebound effect of different energy types in China based on a static computable general equilibrium model. A one-off 5% energy efficiency improvement is imposed on five different types of energy, respectively, in all of the 135 production sectors in China. The rebound effect is measured both at the production level and at the economy-wide level for each type of energy. The results show that improving the efficiency of electricity use has the largest positive impact on GDP among the five energy types. Inter-fuel substitutability does not affect the macroeconomic results significantly, but the long-run impact is usually greater than the short-run impact. Among the export-oriented sectors, those that are capital-intensive receive a large negative shock in the short run, while those that are labour-intensive are hurt in the long run. There is no "backfire" effect; however, improving the efficiency of electricity use can cause negative rebound, which implies that improving the energy efficiency of electricity use might be a good policy choice under China's current energy structure. In general, the macro-level rebound is larger than the production-level rebound, and primary energy goods show a larger rebound effect than secondary energy goods. In addition, the paper points out that policy makers in China should look at the rebound effect in the long term rather than in the short term. An energy efficiency policy would be a good and effective choice for energy conservation in China while inter-fuel substitution remains small. - Highlights: • Primary energy goods show larger rebound effect than secondary energy goods. • Improving efficiency of using electricity can cause negative rebound. • The energy efficiency policy would be an effective policy choice for China. • Policy-makers should consider the rebound effect in the longer term.
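
    The rebound measure used in such CGE studies is commonly defined (generic form; the paper's exact operationalization may differ) as

        R = \frac{\Delta E_{\text{expected}} - \Delta E_{\text{actual}}}{\Delta E_{\text{expected}}},

    so that R = 0 means the full engineering saving is realized, 0 < R < 1 is partial rebound, R > 1 is "backfire", and R < 0 is the negative rebound the authors report for electricity.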

  17. Comparison effectiveness of cooperative learning type STAD with cooperative learning type TPS in terms of mathematical method of Junior High School students

    Science.gov (United States)

    Wahyuni, A.

    2018-05-01

    This research aims to find out whether the cooperative learning model of type Student Team Achievement Division (STAD) is more effective than cooperative learning of type Think-Pair-Share (TPS) in SMP Negeri 7 Yogyakarta. This was a quasi-experimental study using two experimental groups. The population comprised all students of the 7th class in SMP Negeri 7 Yogyakarta, which consists of 5 classes; 2 classes were taken randomly from the population and used as the sample. The instrument for collecting data was a description test. Instrument validity was assessed through content validity and construct validity, while instrument reliability was measured using the Cronbach alpha formula. To investigate the effectiveness of cooperative learning type STAD and cooperative learning type TPS with respect to students' mathematical method, the data were analyzed by a one-sample test. The effectiveness of cooperative learning types STAD and TPS in terms of mathematical communication skills was compared using a t-test. A normality test was not conducted because the research sample comprised more than 30 students, while homogeneity was tested using the Kolmogorov-Smirnov test. The analysis was performed at the 5% confidence level. The results show the following: 1) The cooperative learning models of types STAD and TPS are effective in terms of the mathematical method of junior high school students. 2) The STAD-type cooperative learning model is more effective than the TPS-type cooperative learning model in terms of the mathematical methods of junior high school students.

  18. An Improved in Vivo Deuterium Labeling Method for Measuring the Biosynthetic Rate of Cytokinins

    Directory of Open Access Journals (Sweden)

    Petr Tarkowski

    2010-12-01

    Full Text Available An improved method for determining the relative biosynthetic rate of isoprenoid cytokinins has been developed. A set of 11 relevant isoprenoid cytokinins, including zeatin isomers, was separated by ultra-performance liquid chromatography in less than 6 min. The iP-type cytokinins were observed to give rise to a previously unknown fragment at m/z 69; we suggest that the diagnostic (204-69) transition can be used to monitor the biosynthetic rate of isopentenyladenine. Furthermore, we found that by treating the cytokinin nucleotides with alkaline phosphatase prior to analysis, the sensitivity of the detection process could be increased. In addition, derivatization (propionylation) improved the ESI-MS response by increasing the analytes' hydrophobicity. Indeed, the ESI-MS response of propionylated isopentenyladenosine was about 34% higher than that of its underivatized counterpart. Moreover, the response of the derivatized zeatin ribosides was about 75% higher than that of the underivatized zeatin ribosides. Finally, we created a web-based calculator (IZOTOP) that facilitates MS/MS data processing and offer it freely to the research community.

  19. Eicosapentaenoic acid improves glycemic control in elderly bedridden patients with type 2 diabetes.

    Science.gov (United States)

    Ogawa, Susumu; Abe, Takaaki; Nako, Kazuhiro; Okamura, Masashi; Senda, Miho; Sakamoto, Takuya; Ito, Sadayoshi

    2013-01-01

    Eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) are ω3-polyunsaturated fatty acids mainly contained in blue-backed fish oil, and they are effective in improving lipid disorders and decreasing cardiovascular incidence among diabetic patients. Moreover, it has been suggested that EPA and DHA may improve insulin resistance and glucose metabolism. However, the clinical effects of EPA and DHA on glucose metabolism remain unclear. We aimed to clarify the effects of EPA/DHA treatment on glycemic control in type 2 diabetes mellitus. This study was a multicenter prospective randomized controlled trial involving 30 elderly type 2 diabetic patients on a liquid diet. The patients performed almost no exercise, and the content of their meals was strictly managed and well characterized; differences due to individual lifestyle were therefore minimal. The subjects were divided into two groups: those receiving an EPA/DHA-rich liquid diet [EPA/DHA (+)] or a liquid diet lacking EPA/DHA [EPA/DHA (-)]. Changes in factors related to glucose and lipid metabolism were assessed after the three-month study. Serum concentrations of EPA rose in EPA/DHA (+), although the levels of DHA and fasting C-peptide remained unchanged. In addition, there was a significant decline in fasting plasma glucose (FPG), hemoglobin A1c (HbA1c), fasting remnant-like particles and apolipoprotein (apo) B in EPA/DHA (+) compared with the values in EPA/DHA (-). An EPA/DHA-rich diet might improve glucose metabolism in elderly type 2 diabetic patients on a liquid diet. This phenomenon may be due to improved insulin resistance mediated by the rise in serum EPA concentrations.

  20. P-type sp3-bonded BN/n-type Si heterodiode solar cell fabricated by laser-plasma synchronous CVD method

    International Nuclear Information System (INIS)

    Komatsu, Shojiro; Nagata, Takahiro; Chikyo, Toyohiro; Sato, Yuhei; Watanabe, Takayuki; Hirano, Daisuke; Takizawa, Takeo; Nakamura, Katsumitsu; Hashimoto, Takuya; Nakamura, Takuya; Koga, Kazunori; Shiratani, Masaharu; Yamamoto, Atsushi

    2009-01-01

    A heterojunction of p-type sp3-bonded boron nitride (BN) and n-type Si fabricated by laser-plasma synchronous chemical vapour deposition (CVD) showed excellent rectifying properties and proved to work as a solar cell with a photovoltaic conversion efficiency of 1.76%. The BN film was deposited on an n-type Si (1 0 0) substrate by plasma CVD from B2H6 + NH3 + Ar, while doping of Si into the BN film was induced by the simultaneous irradiation of an intense excimer laser with a pulse power of 490 mJ cm^-2, at a wavelength of 193 nm and at a repetition rate of 20 Hz. The source of the dopant Si was supposed to be the Si substrate ablated at the initial stage of the film growth. The laser enhanced the doping (and/or diffusion) of Si into BN as well as the growth of sp3-bonded BN simultaneously in this method. P-type conduction of the BN films was determined by the hot (thermoelectric) probe method. The BN/Si heterodiode with an essentially transparent p-type BN as a front layer is supposed to efficiently absorb light reaching the active region so as to potentially result in high efficiency.

  1. Calculation method of CGH for Binocular Eyepiece-Type Electro Holography

    International Nuclear Information System (INIS)

    Yang, Chanyoung; Yoneyama, Takuo; Sakamoto, Yuji; Okuyama, Fumio

    2013-01-01

    We have previously studied eyepiece-type electro-holography to display 3-D images of larger objects at wider viewing angles, enlarging the visual field while accounting for the depth of the object with a Fourier optical system using two lenses. In this paper, we extend our system to binocular viewing. In the binocular system, we use two different holograms, one for each eye: the 3-D image for the left eye should be observed as the real object would be with the left eye, and likewise for the right eye. We therefore propose a method of calculating the computer-generated hologram (CGH) that transforms the coordinate system of the model data to make the two holograms for binocular eyepiece-type electro-holography. The coordinate system of the original model data is called the world coordinate system; the left and right coordinate systems are transformed from the world coordinate system. We also propose a method for correcting the installation error that occurs when placing the electronic and optical devices: the installation error is calculated, and the model data are corrected, using the distance between the measured position and the setup position of the reconstructed image. Optical reconstruction experiments were carried out to verify the proposed method.
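
    A minimal sketch of the per-eye coordinate transform alluded to above, modeled here as a rigid shift of the world coordinates by half the interpupillary distance; the transform in the paper also involves the optics and the measured installation error, which this toy version ignores:

        import numpy as np

        def eye_coordinates(world_pts, ipd=0.065):
            """Shift world-coordinate points (N x 3) into left/right-eye frames."""
            offset = np.array([ipd / 2.0, 0.0, 0.0])
            return world_pts + offset, world_pts - offset  # (left, right)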

  2. The Architecture Improvement Method: cost management and systematic learning about strategic product architectures

    NARCIS (Netherlands)

    de Weerd-Nederhof, Petronella C.; Wouters, Marc; Teuns, Steven J.A.; Hissel, Paul H.

    2007-01-01

    The architecture improvement method (AIM) is a method for multidisciplinary product architecture improvement, addressing uncertainty and complexity and incorporating feedback loops, facilitating trade-off decision making during the architecture creation process. The research reported in this paper

  3. Orthogonal projections and bootstrap resampling procedures in the study of infraspecific variation

    Directory of Open Access Journals (Sweden)

    Luiza Carla Duarte

    1998-12-01

    Full Text Available The effect of an increase in quantitative continuous characters resulting from indeterminate growth upon the analysis of population differentiation was investigated using, as an example, a set of continuous characters measured as distance variables in 10 populations of a rodent species. The data before and after correction for allometric size effects using orthogonal projections were analyzed with a parametric bootstrap resampling procedure applied to canonical variate analysis. The variance component of the distance measures attributable to indeterminate growth within the populations was found to be substantial, although the ordination of the populations was not affected, as evidenced by the relative and absolute positions of the centroids. The covariance pattern of the distance variables used to infer the nature of the morphological differences was strongly influenced by indeterminate growth. The uncorrected data produced a misleading picture of morphological differentiation by indicating that groups of populations differed in size. However, the data corrected for allometric effects clearly demonstrated that populations differed morphologically both in size and shape. These results are discussed in terms of the analysis of morphological differentiation among populations and the definition of infraspecific geographic units.
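
    One standard way to remove allometric size before such an ordination is Burnaby's orthogonal projection onto the complement of an assumed isometric size vector; a sketch (the paper's exact growth-vector estimate may differ):

        import numpy as np

        def remove_size(X):
            """Project log-transformed distance measures (specimens x p) onto
            the orthogonal complement of the isometric size vector."""
            p = X.shape[1]
            beta = np.ones((p, 1)) / np.sqrt(p)    # isometric size direction
            projector = np.eye(p) - beta @ beta.T  # I - b b^T removes size
            return X @ projector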

  4. Development of a method of continuous improvement of services using the Business Intelligence tools

    Directory of Open Access Journals (Sweden)

    Svetlana V. Kulikova

    2018-01-01

    Full Text Available The purpose of the study was to develop a method of continuous improvement of services using Business Intelligence tools. Materials and methods: the method builds on the concept of the Deming Cycle, Business Intelligence methods and technologies, the Agile methodology and SCRUM. Results: the article considers the problem of continuous improvement of services and offers solutions using Business Intelligence methods and technologies. Here, the purpose of the technology is to support the final decision regarding what needs to be improved in the current organization of services; in other words, Business Intelligence helps the product manager to see what is hidden from the human eye on the basis of received and processed data. The method was developed on the basis of the Deming Cycle, Agile methodologies and SCRUM. The article describes the main stages of the development of the method based on the activity of the enterprise. A complete Business Intelligence system must be built in the enterprise to identify bottlenecks, justify the need for their elimination and, in general, support continuous improvement of the services. This process is represented in DFD notation. The article also presents a scheme for the selection of suitable Agile methodologies, the proposed concept for solving the stated objectives (including methods of identifying problems through Business Intelligence technology, the development of a system for troubleshooting, and the analysis of the results of the introduced changes), and a technical description of the project. Conclusion: following this work, the authors formed the concept of a method for the continuous improvement of services using Business Intelligence technology, tailored to enterprises offering SaaS solutions. It was also found that when using this method, the recommended development methodology is SCRUM.

  5. AN ENCODING METHOD FOR COMPRESSING GEOGRAPHICAL COORDINATES IN 3D SPACE

    Directory of Open Access Journals (Sweden)

    C. Qian

    2017-09-01

    Full Text Available This paper proposed an encoding method for compressing geographical coordinates in 3D space. By reducing the length of geographical coordinates, it helps to lessen the storage size of geometry information. In addition, the encoding algorithm subdivides the whole space according to octree rules, which enables progressive transmission and loading. Three main steps are included in this method: (1) subdividing the whole 3D geographic space based on an octree structure, (2) resampling all the vertices in the 3D models, (3) encoding the coordinates of vertices with a combination of Cube Index Code (CIC) and Geometry Code. A series of geographical 3D models were applied to evaluate the encoding method. The results showed that this method reduced the storage size of most test data by 90 % or even more while maintaining acceptable encoding and decoding speed. In conclusion, this method achieved a remarkable compression rate in vertex bit size with a steerable precision loss. It should be of positive value to web 3D map storage and transmission.
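
    The Cube Index Code idea can be sketched in a few lines: each octree level contributes a 3-bit index of the child cube containing the vertex, so a vertex costs 3 bits per level instead of three full floating-point coordinates. The sketch below is illustrative only (function names and the fixed unit bounding box are assumptions, not the paper's implementation):

```python
import numpy as np

def encode_octree(point, bbox_min, bbox_max, levels):
    """Encode a 3D point as one 3-bit octant index per subdivision level.

    Each level halves the current cube along x, y and z; the octant index
    (0-7) records which child cube contains the point (a CIC-like code)."""
    lo = np.asarray(bbox_min, dtype=float)
    hi = np.asarray(bbox_max, dtype=float)
    codes = []
    for _ in range(levels):
        mid = (lo + hi) / 2.0
        octant = 0
        for axis in range(3):
            if point[axis] >= mid[axis]:
                octant |= 1 << axis      # set the bit for this axis
                lo[axis] = mid[axis]     # descend into the upper half
            else:
                hi[axis] = mid[axis]     # descend into the lower half
        codes.append(octant)
    return codes

# 10 levels ~ 30 bits per vertex, with precision steered by the level count
print(encode_octree([0.3, 0.7, 0.1], [0, 0, 0], [1, 1, 1], levels=10))
```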

  6. Improvement Methods in NPP's Radiation Emergency Plan: An Administrative Approach

    International Nuclear Information System (INIS)

    Lee, Yoon Wook; Yang, He Sun

    2009-01-01

    The Radiation Emergency Plan (REP) can be divided into a technical and an administrative response. The domestic NPPs' REPs are reviewed from the viewpoint of the administrative response, and improvement methods are also suggested in this treatise. The fields of review are the composition of the emergency response organizations, the activation criteria of the organizations, the selection of staffing and the reasonableness of the REP's volume. In addition, the limitations of the current radiation exercises are reviewed and an improvement method for the exercises is presented. It is expected that the suggested recommendations will be helpful in establishing useful REPs and conducting practical radiation exercises in Korea

  7. A fast and robust method for full genome sequencing of Porcine Reproductive and Respiratory Syndrome Virus (PRRSV) Type 1 and Type 2

    DEFF Research Database (Denmark)

    Kvisgaard, Lise Kirstine; Hjulsager, Charlotte Kristiane; Fahnøe, Ulrik

    2013-01-01

    In the present study, fast and robust methods for long range RT-PCR amplification and subsequent next generation sequencing (NGS) were developed and validated on nine Type 1 and nine Type 2 PRRSV viruses. The methods generated robust and reliable sequences both on primary material and cell culture adapted viruses, and the protocols performed well on all three NGS platforms tested (Roche 454 FLX, Illumina HiSeq2000, and Ion Torrent PGM™ Sequencer). These methods will greatly facilitate the generation of more full genome PRRSV sequences globally.

  8. Numerical study of water diffusion in biological tissues using an improved finite difference method

    International Nuclear Information System (INIS)

    Xu Junzhong; Does, Mark D; Gore, John C

    2007-01-01

    An improved finite difference (FD) method has been developed in order to calculate the behaviour of the nuclear magnetic resonance signal variations caused by water diffusion in biological tissues more accurately and efficiently. The algorithm converts the conventional image-based finite difference method into a convenient matrix-based approach and includes a revised periodic boundary condition which eliminates the edge effects caused by artificial boundaries in conventional FD methods. Simulated results for some modelled tissues are consistent with analytical solutions for commonly used diffusion-weighted pulse sequences, while the improved FD method shows better efficiency and accuracy. A tightly coupled parallel computing approach was also developed to implement the FD methods to enable large-scale simulations of realistic biological tissues. The potential applications of the improved FD method for understanding diffusion in tissues are also discussed. (note)
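
    As a toy illustration of the two ingredients named above (an array-based update and periodic boundaries), the following sketch advances the 2D diffusion equation by one explicit step; it is a generic finite-difference scheme, not the authors' MRI-signal simulator:

```python
import numpy as np

def diffusion_step_periodic(u, D, dx, dt):
    """One explicit finite-difference step of the 2D diffusion equation
    with periodic boundaries (np.roll wraps the edges, avoiding the
    artificial-boundary edge effects mentioned in the abstract)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    return u + dt * D * lap

# toy usage: free diffusion of a point source; stability needs dt <= dx^2/(4D)
u = np.zeros((64, 64)); u[32, 32] = 1.0
D, dx = 2.0e-9, 1.0e-6          # m^2/s, m (typical tissue scales, illustrative)
dt = 0.2 * dx**2 / (4 * D)
for _ in range(100):
    u = diffusion_step_periodic(u, D, dx, dt)
```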

  9. Concepts of Scenario Methods in Improvement of an Enterprise

    Directory of Open Access Journals (Sweden)

    Edyta Bielinska-Dusza

    2013-06-01

    Full Text Available Purpose of the study, principal objectives, scope of the investigation, methods employed, results and principal conclusions. Uncertainty makes both theoreticians and practitioners face new tasks. Enterprises, in order to win the competitive struggle, must constantly improve their processes and structures. On the other hand, thinking in categories of the future has become genuinely difficult nowadays. This creates particularly convenient conditions for applying scenario methods. In connection with the above, the purpose of this study is to characterize the essence of scenario methods employed in enterprise development. The article addresses the factors conditioning proper selection of methods in the enterprise development process, the principles of scenario planning and the opportunities to apply other techniques and methods in scenario planning.

  10. Analysis of cost data in a cluster-randomized, controlled trial: comparison of methods

    DEFF Research Database (Denmark)

    Sokolowski, Ineta; Ørnbøl, Eva; Rosendal, Marianne

    We consider health care data from a cluster-randomized intervention study in primary care to test whether the average health care costs among study patients differ between the two groups. The problem in analysing cost data is that most data are severely skewed (median instead of mean ...), and studies have used non-valid analysis of skewed data. We propose two different methods to compare mean cost in two groups. Firstly, we use a non-parametric bootstrap method where the re-sampling takes place on two levels in order to take into account the cluster effect. Secondly, we proceed with a log-transformation of the cost data and apply the normal theory on these data, again trying to account for the cluster effect. The performance of these two methods is investigated in a simulation study. The advantages and disadvantages of the different approaches are discussed.
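
    A minimal sketch of the first proposal, a non-parametric bootstrap that resamples on two levels (clusters first, then patients within each drawn cluster); the toy data and the percentile interval are illustrative:

```python
import numpy as np
rng = np.random.default_rng(1)

def cluster_bootstrap_diff(costs_by_cluster_a, costs_by_cluster_b, n_boot=2000):
    """Bootstrap the difference in mean cost between two arms, resampling on
    two levels: clusters first, then patients within each drawn cluster."""
    def resample_arm(clusters):
        drawn = rng.choice(len(clusters), size=len(clusters), replace=True)
        patients = [rng.choice(clusters[i], size=len(clusters[i]), replace=True)
                    for i in drawn]
        return np.concatenate(patients).mean()
    diffs = np.array([resample_arm(costs_by_cluster_a) - resample_arm(costs_by_cluster_b)
                      for _ in range(n_boot)])
    return np.percentile(diffs, [2.5, 97.5])   # percentile CI for the mean difference

# toy data: skewed (log-normal) costs in a few practices per arm
arm_a = [rng.lognormal(6.0, 1.0, size=30) for _ in range(8)]
arm_b = [rng.lognormal(6.2, 1.0, size=30) for _ in range(8)]
print(cluster_bootstrap_diff(arm_a, arm_b))
```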

  11. Improving self-management of type 1 and type 2 diabetes.

    Science.gov (United States)

    Phillips, Anne

    2016-01-06

    Diabetes is an increasingly common life-long condition, which has significant physical, psychological and behavioural implications for individuals. Self-management of type 1 and type 2 diabetes can be complex and challenging. A collaborative approach to care, between healthcare professionals and patients, is essential to promote self-management skills and knowledge to help patients engage in shared decision making and manage any difficulties associated with a diagnosis of diabetes.

  12. Improved Cole parameter extraction based on the least absolute deviation method

    International Nuclear Information System (INIS)

    Yang, Yuxiang; Ni, Wenwen; Sun, Qiang; Wen, He; Teng, Zhaosheng

    2013-01-01

    The Cole function is widely used in bioimpedance spectroscopy (BIS) applications. Fitting the measured BIS data onto the model and then extracting the Cole parameters (R₀, R∞, α and τ) is a common practice. Accurate extraction of the Cole parameters from the measured BIS data has great significance for evaluating the physiological or pathological status of biological tissue. The traditional least-squares (LS)-based curve fitting method for Cole parameter extraction is often sensitive to noise or outliers and becomes non-robust. This paper proposes an improved Cole parameter extraction based on the least absolute deviation (LAD) method. Comprehensive simulation experiments are carried out and the performances of the LAD method are compared with those of the LS method under the conditions of outliers, random noises and both disturbances. The proposed LAD method exhibits much better robustness under all circumstances, which demonstrates that the LAD method deserves consideration as an improved alternative to the LS method for Cole parameter extraction, for its robustness to outliers and noise. (paper)
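
    The LS-versus-LAD comparison can be sketched directly: the same Cole model is fitted by minimizing either squared or absolute deviations. The sketch below uses a derivative-free optimizer on the complex residuals; starting values, frequencies and the outlier are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def cole_impedance(params, omega):
    """Cole model Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)**alpha)."""
    R0, Rinf, alpha, tau = params
    return Rinf + (R0 - Rinf) / (1.0 + (1j * omega * tau) ** alpha)

def fit_cole(omega, z_meas, x0, lad=True):
    """Extract Cole parameters by minimizing either the sum of absolute
    deviations (LAD, robust to outliers) or squared deviations (LS)."""
    def cost(p):
        r = cole_impedance(p, omega) - z_meas
        dev = np.abs(r.real) + np.abs(r.imag)
        return np.sum(dev if lad else dev ** 2)
    return minimize(cost, x0, method="Nelder-Mead").x

# toy example with one outlier-contaminated sample
omega = 2 * np.pi * np.logspace(3, 6, 40)
true = (600.0, 200.0, 0.7, 1e-5)
z = cole_impedance(true, omega)
z[10] += 80.0                      # outlier
print(fit_cole(omega, z, x0=(500, 150, 0.6, 5e-6)))
```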

  13. Expert System for Determination of Type Lenses Glasses Using Forward Chaining Method

    Directory of Open Access Journals (Sweden)

    Atikah Ari Pramesti

    2016-11-01

    Full Text Available One branch of computer science that is widely used to support human work is the expert system. In this study we design an expert system for determining the type of spectacle lenses using the forward chaining method. In forward chaining, inference starts with the initial information (early symptoms) and moves forward, matching further information against the rules of the knowledge base and its productions; the conclusion takes the form of a diagnosis of the type of eye disorder together with a recommended solution in the form of spectacle lenses. The result of this study is that the matched calculations of the forward chaining algorithm, between the system and manual calculation, produce the same output.
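
    Forward chaining itself is easy to sketch: rules fire whenever all their conditions are in the working memory, and newly derived facts may enable further rules. The rule base below is invented for illustration and is not the paper's knowledge base:

```python
def forward_chain(facts, rules):
    """Forward chaining: repeatedly fire rules whose conditions are all
    satisfied by the current facts, until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# illustrative rule base (symptoms -> disorder -> lens type), not the paper's
rules = [
    ({"blurry_near_vision", "age_over_40"}, "presbyopia"),
    ({"blurry_distance_vision"}, "myopia"),
    ({"presbyopia", "myopia"}, "bifocal_lenses"),
    ({"myopia"}, "concave_lenses"),
]
print(forward_chain({"blurry_near_vision", "age_over_40",
                     "blurry_distance_vision"}, rules))
```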

  14. Post-disposal safety assessment of toxic and radioactive waste: waste types, disposal practices, disposal criteria, assessment methods and post-disposal impacts

    International Nuclear Information System (INIS)

    Torres, C.; Simon, I.; Little, R.H.; Charles, D.; Grogan, H.A.; Smith, G.M.; Sumerling, T.J.; Watkins, B.M.

    1993-01-01

    The need for safety assessments of waste disposal stems not only from the implementation of regulations requiring the assessment of environmental effects, but also from the more general need to justify decisions on protection requirements. As waste-disposal methods have become more technologically based, through the application of more highly engineered design concepts and through more rigorous and specific limitations on the types and quantities of the waste disposed, it follows that assessment procedures also must become more sophisticated. It is the overall aim of this study to improve the predictive modelling capacity for post-disposal safety assessments of land-based disposal facilities through the development and testing of a comprehensive, yet practicable, assessment framework. This report records all the work which has been undertaken during Phase 1 of the study. Waste types, disposal practices, disposal criteria and assessment methods for both toxic and radioactive waste are reviewed with the purpose of identifying those features relevant to assessment methodology development. Differences and similarities in waste types, disposal practices, criteria and assessment methods between countries, and between toxic and radioactive wastes, are highlighted and discussed. Finally, an approach to identify post-disposal impacts, how they arise and their effects on humans and the environment is described

  15. Investigation on Carbohydrate Counting Method in Type 1 Diabetic Patients

    Directory of Open Access Journals (Sweden)

    Osman Son

    2014-01-01

    Full Text Available Objective. The results of the Diabetes Control and Complications Trial (DCCT) have underlined the importance of medical nutrition therapy in the treatment of diabetes mellitus (DM). In this study, we examined the carbohydrate (Kh) counting method's positive effects on the success of type 1 DM treatment as well as on patients' quality of life. Methods. Of 37 type 1 DM patients who presented to Eskişehir Osmangazi University, Faculty of Medicine Hospital, Department of Endocrinology and Metabolism, 22 were treated with the Kh counting method and 15, as a control group, were treated with multiple-dose intensive insulin treatment and a standard diabetic diet; both groups were under close follow-up for 6 months. The required approval was obtained from the Ethical Committee of Eskişehir Osmangazi University, Medical Faculty, as well as informed consent from the patients. Body weight, body mass index and body composition of the patients treated with the carbohydrate counting method and with multiple-dose intensive insulin treatment were analyzed at the beginning of the study and after the 6-month term. A short quality-of-life and medical research survey was applied. For statistical analysis, the t-test, chi-squared test, and Mann-Whitney U test were used. Results. No significant difference in glycemic control indicators was found between the Kh counting group and the standard diabetic diet with multiple-dose insulin treatment group in our study. Conclusion. In conclusion, the Kh counting method, which offers a flexible nutrition plan to diabetic individuals, is a functional method.

  16. An approximation method for nonlinear integral equations of Hammerstein type

    International Nuclear Information System (INIS)

    Chidume, C.E.; Moore, C.

    1989-05-01

    The solution of a nonlinear integral equation of Hammerstein type in Hilbert spaces is approximated by means of a fixed point iteration method. Explicit error estimates are given and, in some cases, convergence is shown to be at least as fast as a geometric progression. (author). 25 refs
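
    The abstract does not spell out the scheme, but a standard reading is a Picard-type fixed point iteration for the Hammerstein equation; under a contraction assumption this yields the geometric convergence mentioned (notation here is generic, not the authors'):

```latex
% Hammerstein equation: u = g + K F(u), where (Kv)(x) = \int_\Omega k(x,y) v(y)\, dy
% and (Fu)(y) = f(y, u(y)). Picard-type fixed point iteration:
u_{n+1} \;=\; g + K F(u_n),
% and if u \mapsto g + K F(u) is a contraction with constant q < 1, then
\lVert u_n - u^{*} \rVert \;\le\; q^{\,n}\, \lVert u_0 - u^{*} \rVert ,
% i.e. convergence at least as fast as a geometric progression.
```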

  17. A Nested-Splicing by Overlap Extension PCR Improves Specificity of this Standard Method.

    Science.gov (United States)

    Karkhane, Ali Asghar; Yakhchali, Bagher; Rastgar Jazii, Ferdous; Bambai, Bijan; Aminzadeh, Saeed; Rahimi, Fatemeh

    2015-06-01

    Splicing by overlap extension (SOE) PCR is used to create mutation in the coding sequence of an enzyme in order to study the role of specific residues in protein's structure and function. We introduced a nested-SOE-PCR (N -SOE-PCR) in order to increase the specificity and generating mutations in a gene by SOE-PCR. Genomic DNA from Bacillus thermocatenulatus was extracted. Nested PCR was used to amplify B. thermocatenulatus lipase gene variants, namely wild type and mutant, using gene specific and mutagenic specific primers, followed by cloning in a suitable vector. Briefly in N-SOE-PCR method, instead of two pairs of primers, three pairs of primers are used to amplify a mutagenic fragment. Moreover, the first and second PCR products are slightly longer than PCR products in a conventional SOE. PCR products obtained from the first round of PCR are used for the second PCR by applying the nested and mutated primers. Following to the purification of the amplified fragments, they will be subject of the further purification and will be used as template to perform the third round of PCR using gene specific primers. In the end, the products will be cloned into a suitable vector for subsequent application. In comparison to the conventional SOE-PCR, the improved method (i.e. N-SOE-PCR) increases the yield and specificity of the products. In addition, the proposed method shows a large reduction in the non-specific products. By applying two more primers in the conventional SOE, the specificity of the method will be improved. This would be in part due to annealing of the primers further inside the amplicon that increases both the efficiency and a better attachment of the primers. Positioning of the primer far from both ends of an amplicon leads to an enhanced binding as well as increased affinity in the third round of amplification in SOE.

  18. Improvement in the independence of relaxation method-based particle tracking velocimetry

    International Nuclear Information System (INIS)

    Jia, P; Wang, Y; Zhang, Y

    2013-01-01

    New techniques are developed to improve the independence of relaxation method-based particle tracking velocimetry (RM-PTV). Firstly, Delaunay tessellation (DT) is employed to form clusters of neighboring particles with similar motion in the same frame; and then a bidirectional calculation concept is adopted to improve the way of particle pairing. These new techniques are tested with both self-defined particle images and the particle image velocimetry standard synthetic particle images. The results indicate that the DT method performs well and efficiently in determining the particle clusters, and the particle pairing process is well optimized by the bidirectional calculation concept. With these methods, three computation parameters are eliminated, which makes RM-PTV more autonomous in applications. (paper)
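
    The first ingredient, using Delaunay tessellation to define clusters of neighboring particles, can be sketched with scipy (2D case; defining the cluster as particles sharing a simplex is an assumption for illustration, not the paper's exact rule):

```python
import numpy as np
from scipy.spatial import Delaunay

def neighbor_clusters(points):
    """Use Delaunay tessellation to find each particle's natural neighbors
    (particles sharing a triangle), the kind of cluster used in RM-PTV."""
    tri = Delaunay(points)
    neighbors = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:          # each simplex is a triangle in 2D
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[a].add(b)
    return neighbors

pts = np.random.default_rng(0).random((20, 2))
print(neighbor_clusters(pts)[0])           # indices of particle 0's neighbors
```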

  19. Improvement of dose evaluation method for employees at severe accident

    International Nuclear Information System (INIS)

    Onda, Takashi; Yoshida, Yoshitaka; Kudo, Seiichi; Nishimura, Kazuya

    2003-01-01

    It is expected that the selection of access routes for employees who engage in emergency work during a severe accident in a nuclear power plant makes a difference in their radiation dose. In order to examine how much difference in dose arises from the selection of access routes, in the case of a severe accident in a pressurized water reactor plant, we improved the method for obtaining employee doses and expanded the analysis system. The expansion of the system and the improvement of the method realized the following: (1) dose evaluation is possible over the whole plant area, (2) the efficiency of calculation is increased by reducing the number of radiation sources, etc., and (3) functionality is improved by introducing the sky-shine calculation for the highest floor, etc. The improved system clarifies the following: (1) the doses change with the selected access routes, and this system can quantify the difference in doses, and (2) in order to suppress the dose, it is effective to choose the most adequate access route for the employees. (author)

  20. Improvement of vector compensation method for vehicle magnetic distortion field

    Energy Technology Data Exchange (ETDEWEB)

    Pang, Hongfeng, E-mail: panghongfeng@126.com; Zhang, Qi; Li, Ji; Luo, Shitu; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2014-03-15

    Magnetic distortions such as eddy-current field and low frequency magnetic field have not been considered in vector compensation methods. A new compensation method is proposed to suppress these magnetic distortions and improve compensation performance, in which the magnetic distortions related to measurement vectors and time are considered. The experimental system mainly consists of a three-axis fluxgate magnetometer (DM-050), an underwater vehicle and a proton magnetometer, in which the scalar value of the magnetic field is obtained with the proton magnetometer and considered to be the true value. Compared with traditional compensation methods, experimental results show that the magnetic distortions can be further reduced by a factor of two. After compensation, error intensity and RMS error are reduced from 11684.013 nT and 7794.604 nT to 16.219 nT and 5.907 nT, respectively. It suggests an effective way to improve the compensation performance for magnetic distortions. - Highlights: • A new vector compensation method is proposed for vehicle magnetic distortion. • The proposed model not only includes magnetometer error but also considers magnetic distortion. • Compensation parameters are computed directly by solving nonlinear equations. • Compared with traditional methods, the proposed method does not depend on the rotation angle rate. • Error intensity and RMS error can be reduced to 1/2 of the error with traditional methods.

  2. THE FLUORBOARD: A STATISTICALLY BASED DASHBOARD METHOD FOR IMPROVING SAFETY

    International Nuclear Information System (INIS)

    PREVETTE, S.S.

    2005-01-01

    The FluorBoard is a statistically based dashboard method for improving safety. Fluor Hanford has achieved significant safety improvements--including more than an 80% reduction in OSHA cases per 200,000 hours--during its work at the US Department of Energy's Hanford Site in Washington state. The massive project on the former nuclear materials production site is considered one of the largest environmental cleanup projects in the world. Fluor Hanford's safety improvements were achieved by a committed partnering of workers, managers, and statistical methodology. Safety achievements at the site have been due to a systematic approach to safety. This includes excellent cooperation between the field workers, the safety professionals, and management through OSHA Voluntary Protection Program principles. Fluor corporate values are centered around safety, and safety excellence is important for every manager in every project. In addition, Fluor Hanford has utilized a rigorous approach to using its safety statistics, based upon Dr. Shewhart's control charts, and Dr. Deming's management and quality methods

  3. An improved partial bundle method for linearly constrained minimax problems

    Directory of Open Access Journals (Sweden)

    Chunming Tang

    2016-02-01

    Full Text Available In this paper, we propose an improved partial bundle method for solving linearly constrained minimax problems. In order to reduce the number of component function evaluations, we utilize a partial cutting-planes model to substitute for the traditional one. At each iteration, only one quadratic programming subproblem needs to be solved to obtain a new trial point. An improved descent test criterion is introduced to simplify the algorithm. The method produces a sequence of feasible trial points, and ensures that the objective function is monotonically decreasing on the sequence of stability centers. Global convergence of the algorithm is established. Moreover, we utilize the subgradient aggregation strategy to control the size of the bundle and therefore overcome the difficulty of computation and storage. Finally, some preliminary numerical results show that the proposed method is effective.
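
    In generic bundle-method notation (not the paper's), the problem and the cutting-planes model look as follows; the "partial" aspect is that only a subset B_k of the component linearizations is kept, which is how component-function evaluations are saved:

```latex
% Linearly constrained minimax problem:
%   min_x  F(x) := max_{1<=i<=m} f_i(x)   s.t.  A x <= b.
% Partial cutting-planes model (only a subset B_k of linearizations kept):
\check{F}_k(x) \;=\; \max_{(i,j)\in B_k}
  \bigl\{\, f_i(x_j) + g_{ij}^{\top}(x - x_j) \,\bigr\},
\qquad g_{ij} \in \partial f_i(x_j),
% and the single QP subproblem per iteration for the trial point:
x_{k+1} \;\in\; \arg\min_{Ax \le b}\;
  \check{F}_k(x) + \tfrac{\mu_k}{2}\,\lVert x - \hat{x}_k \rVert^2 .
```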

  4. Comparison and Evaluation of the Molecular Typing Methods for Toxigenic Vibrio cholerae in Southwest China.

    Science.gov (United States)

    Liao, Feng; Mo, Zhishuo; Chen, Meiling; Pang, Bo; Fu, Xiaoqing; Xu, Wen; Jing, Huaiqi; Kan, Biao; Gu, Wenpeng

    2018-01-01

    Vibrio cholerae O1 strains taken from the repository of Yunnan province, southwest China, were abundant and special. We selected 70 typical toxigenic V. cholerae strains (69 O1 and one O139 serogroup strains) isolated from Yunnan province, performed pulsed field gel electrophoresis (PFGE), multilocus sequence typing (MLST), and MLST of virulence genes (V-MLST), and evaluated the resolution abilities of the typing methods. The ctxB subunit sequence analysis of all strains showed that cholera between 1986 and 1995 was associated with mixed infections with El Tor and El Tor variants, while infections after 1996 were all caused by El Tor variant strains. The 70 V. cholerae strains yielded 50 PFGE patterns, a high resolution. The strains could be divided into three groups with a predominance of strains isolated during the 1980s, 1990s, and 2000s, respectively, showing good consistency with the epidemiological investigation. We also evaluated two MLST methods for V. cholerae: one used seven housekeeping genes (adk, gyrB, metE, pntA, mdh, purM, and pyrC), and all the isolates belonged to ST69; the other used nine housekeeping genes (cat, chi, dnaE, gyrB, lap, pgm, recA, rstA, and gmd). A total of seven sequence types (STs) were found using this method for all the strains; among them, the rstA gene had five alleles, recA and gmd had two alleles, and the others had only one allele. The virulence gene sequence typing method (ctxAB, tcpA, and toxR) showed that the 70 strains were divided into nine STs; among them, the tcpA gene had six alleles, toxR had five alleles, while ctxAB was identical for all the strains. The latter two sequence-based typing methods were also consistent with the epidemiology of the strains. PFGE had a higher resolution than the sequence-based typing methods, and MLST with seven housekeeping genes showed lower resolution than the nine-housekeeping-gene and virulence-gene methods. These two sequence typing methods

  6. An entropy-based improved k-top scoring pairs (TSP) method for ...

    African Journals Online (AJOL)

    An entropy-based improved k-top scoring pairs (TSP) (Ik-TSP) method was presented in this study for the classification and prediction of human cancers based on gene-expression data. We compared Ik-TSP classifiers with 5 different machine learning methods and the k-TSP method based on 3 different feature selection ...
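
    The building block of any k-TSP variant is the rank-based pair score; the entropy-based refinement of Ik-TSP is not reproduced here. A sketch of the basic score (all names and data illustrative):

```python
import numpy as np

def tsp_scores(X, y):
    """k-TSP building block: score each gene pair (i, j) by
    |P(X_i < X_j | class 0) - P(X_i < X_j | class 1)| (rank-based, so it
    needs no normalization). X is samples x genes, y holds 0/1 labels."""
    X0, X1 = X[y == 0], X[y == 1]
    p0 = (X0[:, :, None] < X0[:, None, :]).mean(axis=0)
    p1 = (X1[:, :, None] < X1[:, None, :]).mean(axis=0)
    return np.abs(p0 - p1)          # genes x genes matrix of pair scores

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5)); y = rng.integers(0, 2, size=40)
X[y == 1, 0] += 2.0                 # make gene 0 informative
scores = tsp_scores(X, y)
print(np.unravel_index(scores.argmax(), scores.shape))  # best-scoring pair
```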

  7. Use of net reclassification improvement (NRI) method confirms the utility of combined genetic risk score to predict type 2 diabetes.

    Directory of Open Access Journals (Sweden)

    Claudia H T Tam

    Full Text Available BACKGROUND: Recent genome-wide association studies (GWAS) identified more than 70 novel loci for type 2 diabetes (T2D), some of which have been widely replicated in Asian populations. In this study, we investigated their individual and combined effects on T2D in a Chinese population. METHODOLOGY: We selected 14 single nucleotide polymorphisms (SNPs) in T2D genes relating to beta-cell function validated in Asian populations and genotyped them in 5882 Chinese T2D patients and 2569 healthy controls. A combined genetic score (CGS) was calculated by summing up the number of risk alleles or weighting by the effect size for each SNP under an additive genetic model. We tested for associations by either logistic or linear regression analysis for T2D and quantitative traits, respectively. The contribution of the CGS to predicting T2D risk was evaluated by receiver operating characteristic (ROC) analysis and net reclassification improvement (NRI). RESULTS: We observed consistent and significant associations of IGF2BP2, WFS1, CDKAL1, SLC30A8, CDKN2A/B, HHEX, TCF7L2 and KCNQ1 (8.5×10⁻¹⁸ ...). The CGS improved the predictive ability for T2D risk by 11.2% and 11.3% for the unweighted and weighted CGS, respectively, using the NRI approach (P<0.001). CONCLUSION: In a Chinese population, the use of a CGS of 8 SNPs modestly but significantly improved its discriminative ability to predict T2D above and beyond that attributed to clinical risk factors (sex, age and BMI).
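
    The NRI idea is simple to sketch: among events, risk estimates should move up under the richer model; among non-events, down. Below is the category-free (continuous) variant on toy data; the study's categorical details are not reproduced:

```python
import numpy as np

def net_reclassification_improvement(p_old, p_new, event):
    """Category-free (continuous) NRI: among events, credit risk moving up
    under the new model; among non-events, credit risk moving down."""
    event = np.asarray(event, dtype=bool)
    up = p_new > p_old
    down = p_new < p_old
    nri_events = up[event].mean() - down[event].mean()
    nri_nonevents = down[~event].mean() - up[~event].mean()
    return nri_events + nri_nonevents

# toy example: the "new" model adds a genetic score to clinical predictors
rng = np.random.default_rng(0)
event = rng.integers(0, 2, 500).astype(bool)
p_old = np.clip(0.3 + 0.1 * rng.normal(size=500), 0, 1)
p_new = np.clip(p_old + 0.05 * np.where(event, 1, -1)
                + 0.05 * rng.normal(size=500), 0, 1)
print(net_reclassification_improvement(p_old, p_new, event))
```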

  8. Improved verification methods for safeguards verifications at enrichment plants

    International Nuclear Information System (INIS)

    Lebrun, A.; Kane, S. C.; Bourva, L.; Poirier, S.; Loghin, N. E.; Langlands, D.

    2009-01-01

    The International Atomic Energy Agency (IAEA) has initiated a coordinated research and development programme to improve its verification methods and equipment applicable to enrichment plants. The programme entails several individual projects to meet the objectives of the IAEA Safeguards Model Approach for Gas Centrifuge Enrichment Plants updated in 2006. Upgrades of verification methods to confirm the absence of HEU (highly enriched uranium) production have been initiated and, in particular, the Cascade Header Enrichment Monitor (CHEM) has been redesigned to reduce its weight and incorporate an electrically cooled germanium detector. Such detectors are also introduced to improve the attended verification of UF₆ cylinders for the verification of the material balance. Data sharing of authenticated operator weighing systems such as accountancy scales and process load cells is also investigated as a cost-efficient and effective safeguards measure combined with unannounced inspections, surveillance and non-destructive assay (NDA) measurement. (authors)

  10. Higher Order Numerical Methods and Use of Estimation Techniques to Improve Modeling of Two-Phase Flow in Pipelines and Wells

    Energy Technology Data Exchange (ETDEWEB)

    Lorentzen, Rolf Johan

    2002-04-01

    The main objective of this thesis is to develop methods which can be used to improve predictions of two-phase flow (liquid and gas) in pipelines and wells. More reliable predictions are accomplished by improvements of numerical methods, and by using measured data to tune the mathematical model which describes the two-phase flow. We present a way to extend simple numerical methods to second order spatial accuracy. These methods are implemented, tested and compared with a second order Godunov-type scheme. In addition, a new (and faster) version of the Godunov-type scheme utilizing primitive (observable) variables is presented. We introduce a least squares method which is used to tune parameters embedded in the two-phase flow model. This method is tested using synthetic generated measurements. We also present an ensemble Kalman filter which is used to tune physical state variables and model parameters. This technique is tested on synthetic generated measurements, but also on several sets of full-scale experimental measurements. The thesis is divided into an introductory part, and a part consisting of four papers. The introduction serves both as a summary of the material treated in the papers, and as supplementary background material. It contains five sections, where the first gives an overview of the main topics which are addressed in the thesis. Section 2 contains a description and discussion of mathematical models for two-phase flow in pipelines. Section 3 deals with the numerical methods which are used to solve the equations arising from the two-phase flow model. The numerical scheme described in Section 3.5 is not included in the papers. This section includes results in addition to an outline of the numerical approach. Section 4 gives an introduction to estimation theory, and leads towards application of the two-phase flow model. The material in Sections 4.6 and 4.7 is not discussed in the papers, but is included in the thesis as it gives an important validation
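
    Of the estimation techniques mentioned, the ensemble Kalman filter is the most self-contained to sketch. The analysis step below uses the stochastic (perturbed-observation) form with a linear observation operator; appending model parameters to the state vector is the usual way such filters tune parameters. Everything here is generic, not the thesis code:

```python
import numpy as np

def enkf_update(X, H, y, R, rng):
    """Stochastic ensemble Kalman filter analysis step.

    X : (n, N) forecast ensemble of states (model parameters may be
        appended to the state, which is how parameters get tuned).
    H : (m, n) observation operator, y : (m,) observations, R : (m, m).
    """
    n, N = X.shape
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                   # ensemble anomalies
    PHt = A @ (H @ A).T / (N - 1)                # P H^T from the ensemble
    S = H @ PHt + R                              # innovation covariance
    K = PHt @ np.linalg.inv(S)                   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)                   # perturbed-observation update

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 50))       # e.g. 3 state variables + 1 model parameter
H = np.eye(2, 4); y = np.array([0.5, -0.2]); R = 0.1 * np.eye(2)
Xa = enkf_update(X, H, y, R, rng)
```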

  11. OUTCOME OF GARTLAND TYPE-II SUPRACONDYLAR FRACTURES OF HUMERUS TREATED BY CONSERVATIVE METHOD

    Directory of Open Access Journals (Sweden)

    Dinesh Mitra

    2015-08-01

    Full Text Available BACKGROUND: The current literature recommends the operative method (closed reduction and pinning) for type II supracondylar fractures of the humerus, but some surgeons still prefer the conservative method. We present the results of 14 cases of type II supracondylar fractures treated with CR and AE POP immobilization. The purpose of this study is to evaluate the outcome of conservative treatment in the management of type II supracondylar fractures of the humerus. MATERIALS AND METHODS: Fourteen children treated by conservative methods (CR & AE POP) between January 2013 and December 2014 are included in this study. The mean age is 6.8 years (3 years - 11 years). Patient follow-up was done for a minimum of 10-12 weeks. Treatment outcome is based on final clinical and radiological assessments, and grading of results was done using Flynn's criteria. RESULTS: Gartland type II fractures gave 82% excellent results and 28% good results as per Flynn's criteria. Of the 14 patients, only two cases required re-manipulation. Surgical intervention was not needed for any of the patients. No patients in this study developed compartment syndrome or cubitus varus deformity. CONCLUSION: Satisfactory results can be obtained with conservative treatment (closed reduction and above-elbow POP) with proper selection of the patient and careful clinical and radiological follow-up

  12. A new method for estimating UV fluxes at ground level in cloud-free conditions

    Science.gov (United States)

    Wandji Nyamsi, William; Pitkänen, Mikko R. A.; Aoun, Youva; Blanc, Philippe; Heikkilä, Anu; Lakkala, Kaisa; Bernhard, Germar; Koskela, Tapani; Lindfors, Anders V.; Arola, Antti; Wald, Lucien

    2017-12-01

    A new method has been developed to estimate the global and direct solar irradiance in the UV-A and UV-B at ground level in cloud-free conditions. It is based on a resampling technique applied to the results of the k-distribution method and the correlated-k approximation of Kato et al. (1999) over the UV band. Its inputs are the aerosol properties and total column ozone that are produced by the Copernicus Atmosphere Monitoring Service (CAMS). The estimates from this new method have been compared to instantaneous measurements of global UV irradiances made in cloud-free conditions at five stations at high latitudes in various climates. For the UV-A irradiance, the bias ranges between -0.8 W m-2 (-3 % of the mean of all data) and -0.2 W m-2 (-1 %). The root mean square error (RMSE) ranges from 1.1 W m-2 (6 %) to 1.9 W m-2 (9 %). The coefficient of determination R2 is greater than 0.98. The bias for UV-B is between -0.04 W m-2 (-4 %) and 0.08 W m-2 (+13 %) and the RMSE is 0.1 W m-2 (between 12 and 18 %). R2 ranges between 0.97 and 0.99. This work demonstrates the quality of the proposed method combined with the CAMS products. Improvements, especially in the modeling of the reflectivity of the Earth's surface in the UV region, are necessary prior to its inclusion into an operational tool.

  13. An adjoint sensitivity-based data assimilation method and its comparison with existing variational methods

    Directory of Open Access Journals (Sweden)

    Yonghan Choi

    2014-01-01

    Full Text Available An adjoint sensitivity-based data assimilation (ASDA) method is proposed and applied to a heavy rainfall case over the Korean Peninsula. The heavy rainfall case, which occurred on 26 July 2006, caused torrential rainfall over the central part of the Korean Peninsula. The mesoscale convective system (MCS) related to the heavy rainfall was classified as training line/adjoining stratiform (TL/AS) type for the earlier period, and back building (BB) type for the later period. In the ASDA method, an adjoint model is run backwards with the forecast-error gradient as input, and the adjoint sensitivity of the forecast error to the initial condition is scaled by an optimal scaling factor. The optimal scaling factor is determined by minimising the observational cost function of the four-dimensional variational (4D-Var) method, and the scaled sensitivity is added to the original first guess. Finally, the observations at the analysis time are assimilated using a 3D-Var method with the improved first guess. The simulated rainfall distribution is shifted northeastward compared to the observations when no radar data are assimilated or when radar data are assimilated using the 3D-Var method. The rainfall forecasts are improved when radar data are assimilated using the 4D-Var or ASDA method. Simulated atmospheric fields such as horizontal winds, temperature, and water vapour mixing ratio are also improved via the 4D-Var or ASDA method. Due to the improvement in the analysis, subsequent forecasts appropriately simulate the observed features of the TL/AS- and BB-type MCSs and the corresponding heavy rainfall. The computational cost associated with the ASDA method is significantly lower than that of the 4D-Var method.
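
    The core of the ASDA step, scaling the adjoint sensitivity by the factor that minimizes the observational cost and adding it to the first guess, can be sketched in miniature with a linear observation operator (a simplifying assumption; all names and values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def improved_first_guess(x_b, sensitivity, H, y, R_inv):
    """ASDA idea in miniature: find the scalar that minimizes the
    observational cost along the adjoint-sensitivity direction, then add
    the scaled sensitivity to the first guess."""
    def obs_cost(a):
        d = H @ (x_b + a * sensitivity) - y
        return 0.5 * d @ R_inv @ d
    a_opt = minimize_scalar(obs_cost).x
    return x_b + a_opt * sensitivity

x_b = np.array([1.0, 2.0, 0.5])                  # first guess
g = np.array([0.2, -0.1, 0.05])                  # adjoint sensitivity of forecast error
H = np.eye(2, 3); y = np.array([1.3, 1.8]); R_inv = np.eye(2) / 0.01
print(improved_first_guess(x_b, g, H, y, R_inv))
```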

  14. Effect of p-type multi-walled carbon nanotubes for improving hydrogen storage behaviors

    International Nuclear Information System (INIS)

    Lee, Seul-Yi; Yop Rhee, Kyong; Nahm, Seung-Hoon; Park, Soo-Jin

    2014-01-01

    In this study, the hydrogen storage behaviors of p-type multi-walled carbon nanotubes (MWNTs) were investigated through surface modification of MWNTs by immersing them in sulfuric acid (H₂SO₄) and hydrogen peroxide (H₂O₂) at various ratios. The presence of acceptor functional groups on the p-type MWNT surfaces was confirmed by X-ray photoelectron spectroscopy. Measurement of the zeta-potential determined the surface charge transfer and dispersion of the p-type MWNTs, and the hydrogen storage capacity was evaluated at 77 K and 1 bar. From the results obtained, it was found that acceptor functional groups were introduced onto the MWNT surfaces, and the dispersion of MWNTs could be improved depending on the acid-mixture treatment conditions. The hydrogen storage was increased by acid-mixture treatments up to 0.36 wt% in the p-type MWNTs, compared with 0.18 wt% in the as-received MWNTs. Consequently, the hydrogen storage capacities were greatly influenced by the acceptor functional groups on the p-type MWNT surfaces, resulting in increased electron acceptor-donor interaction at the interfaces. - Graphical abstract: Hydrogen storage behaviors of the p-type MWNTs with the acid-mixture treatments are described.

  15. Dai-Kou type conjugate gradient methods with a line search only using gradient.

    Science.gov (United States)

    Huang, Yuanyuan; Liu, Changhe

    2017-01-01

    In this paper, Dai-Kou type conjugate gradient methods are developed to solve the optimality condition of an unconstrained optimization problem; they utilize only gradient information and have a broader application scope. Under suitable conditions, the developed methods are globally convergent. Numerical tests and comparisons with the PRP+ conjugate gradient method, which likewise uses only the gradient, show that the methods are efficient.
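
    Since the abstract names PRP+ as the gradient-only baseline, a sketch of that method is given below; for simplicity it uses scipy's standard Wolfe line search rather than the paper's gradient-only search, so it is illustrative only:

```python
import numpy as np
from scipy.optimize import line_search

def prp_plus_cg(f, grad, x0, tol=1e-8, max_iter=500):
    """Nonlinear conjugate gradient with the PRP+ beta (the baseline named
    in the abstract). A standard Wolfe line search stands in for the
    paper's gradient-only search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:            # line search failed: restart with a tiny
            d, alpha = -g, 1e-4      # steepest-descent step
        x = x + alpha * d
        g_new = grad(x)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # PRP+ truncation
        d = -g_new + beta * d
        g = g_new
    return x

# check on the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(prp_plus_cg(f, grad, np.array([-1.2, 1.0])))
```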

  16. Acute hyperglycemia produces transient improvement in glucose transporter type 1 deficiency.

    Science.gov (United States)

    Akman, Cigdem I; Engelstad, Kristin; Hinton, Veronica J; Ullner, Paivi; Koenigsberger, Dorcas; Leary, Linda; Wang, Dong; De Vivo, Darryl C

    2010-01-01

    Glucose transporter type 1 deficiency syndrome (Glut1-DS) is characterized clinically by acquired microcephaly, infantile-onset seizures, psychomotor retardation, choreoathetosis, dystonia, and ataxia. The laboratory signature is hypoglycorrhachia. The 5-hour oral glucose tolerance test (OGTT) was performed to assess cerebral function and systemic carbohydrate homeostasis during acute hyperglycemia, in the knowledge that GLUT1 is constitutively expressed ubiquitously and upregulated in the brain. Thirteen Glut1-DS patients completed a 5-hour OGTT. Six patients had prolonged electroencephalographic (EEG)/video monitoring, 10 patients had plasma glucose and serum insulin measurements, and 5 patients had repeated measures of attention, memory, fine motor coordination, and well-being. All patients had a full neuropsychological battery prior to OGTT. The glycemic profile and insulin response during the OGTT were normal. Following the glucose load, transient improvement of clinical seizures and EEG findings were observed, with the most significant improvement beginning within the first 30 minutes and continuing for 180 minutes. Thereafter, clinical seizures returned, and EEG findings worsened. Additionally, transient improvement in attention, fine motor coordination, and reported well-being were observed without any change in memory performance. This study documents transient neurological improvement in Glut1-DS patients following acute hyperglycemia, associated with improved fine motor coordination and attention. Also, systemic carbohydrate homeostasis was normal, despite GLUT1 haploinsufficiency, confirming the specific role of GLUT1 as the transporter of metabolic fuel across the blood-brain barrier. The transient improvement in brain function underscores the rate-limiting role of glucose transport and the critical minute-to-minute dependence of cerebral function on fuel availability for energy metabolism.

  17. Approximate median regression for complex survey data with skewed response.

    Science.gov (United States)

    Fraser, Raphael André; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett M; Pan, Yi

    2016-12-01

    The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this article, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and the regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood approach based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution and has a much smaller mean square error than the MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. © 2016, The International Biometric Society.
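
    A rough sketch of the transform-both-sides idea, heavily simplified: the same Box-Cox transform is applied twice to a positive outcome and to a positive regression function, and weighted least squares is run on the transformed scale (monotone transforms preserve the median). The shift device, the exponential median function and the toy weights are assumptions for illustration, not the authors' estimating equations:

```python
import numpy as np
from scipy.optimize import least_squares

def boxcox(u, lam):
    """Box-Cox transform; monotone, so medians map through it."""
    return np.log(u) if lam == 0 else (u ** lam - 1.0) / lam

def fit_dtbs(x, y, w, lam, shift=5.0):
    """Fit beta so that exp(x @ beta) tracks the median of y, by applying
    the same Box-Cox transform twice to both sides and running weighted
    least squares on the transformed scale. The shift keeps the inner
    transform's output positive (a toy device, not part of the method)."""
    def resid(beta):
        mu = np.exp(x @ beta)
        ty = boxcox(boxcox(y, lam) + shift, lam)
        tm = boxcox(boxcox(mu, lam) + shift, lam)
        return np.sqrt(w) * (ty - tm)
    return least_squares(resid, x0=np.zeros(x.shape[1])).x

rng = np.random.default_rng(0)
n = 300
x = np.column_stack([np.ones(n), rng.normal(size=n)])
y = np.exp(x @ np.array([1.0, 0.5])) * rng.lognormal(0.0, 0.8, size=n)  # skewed
w = rng.uniform(0.5, 2.0, size=n)          # toy survey weights
print(fit_dtbs(x, y, w, lam=0.5))
```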

  18. Visual improvement for bad handwriting based on Monte-Carlo method

    Science.gov (United States)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2014-03-01

    A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper, in order to enhance visual effects for bad handwriting. The improvement process uses a well-designed typeface to optimize the bad-handwriting image. In this process, a series of linear operators for image transformation are defined that transform the typeface image to approach the handwriting image, and the specific parameters of these linear operators are estimated by a Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual effect of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has considerable potential for application in tablet computers and the Mobile Internet, in order to improve the user experience of handwriting.
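
    The Monte-Carlo estimation step can be sketched as a random search over transform parameters, keeping the draw whose warped typeface best matches the handwriting; the shear/scale transform and the squared-error score are illustrative stand-ins for the paper's operators:

```python
import numpy as np

def estimate_transform_params(typeface, handwriting, n_trials=2000, seed=0):
    """Monte-Carlo parameter estimation in miniature: draw random shear and
    scale for a linear transform of the typeface image and keep the draw
    whose warped image best matches the handwriting (pixelwise squared
    error, nearest-neighbour sampling)."""
    rng = np.random.default_rng(seed)
    h, w = typeface.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best, best_err = None, np.inf
    for _ in range(n_trials):
        shear = rng.uniform(-0.3, 0.3)
        scale = rng.uniform(0.8, 1.2)
        src_x = np.clip(((xs - shear * ys) / scale).astype(int), 0, w - 1)
        warped = typeface[ys, src_x]
        err = np.sum((warped - handwriting) ** 2)
        if err < best_err:
            best, best_err = (shear, scale), err
    return best

img = np.random.default_rng(1).random((32, 32))
print(estimate_transform_params(img, img))   # should recover shear~0, scale~1
```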

  19. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali; Pätzold, Matthias

    2012-01-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design as well as optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
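
    A sketch of the frequency-averaging estimator, with an optional taper standing in for the paper's kernel (the exact kernel construction is not reproduced here; the two-path transfer function is synthetic):

```python
import numpy as np

def estimate_fcf(H, max_lag, window=None):
    """Frequency-averaging estimate of the frequency correlation function:
    R[l] = mean_k H[k] conj(H[k+l]). An optional taper (the 'kernel' idea)
    damps the cross-terms at the price of some smoothing."""
    if window is not None:
        H = H * window
    return np.array([np.mean(H[:len(H) - l] * np.conj(H[l:]))
                     for l in range(max_lag)])

# synthetic two-path transfer function over a band of 256 samples
k = np.arange(256)
H = np.exp(-2j * np.pi * k * 5 / 256) + 0.6 * np.exp(-2j * np.pi * k * 40 / 256)
fcf_plain = estimate_fcf(H, 64)
fcf_tapered = estimate_fcf(H, 64, window=np.hamming(256))
```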

  1. Acute Resveratrol Consumption Improves Neurovascular Coupling Capacity in Adults with Type 2 Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Rachel H.X. Wong

    2016-07-01

    Full Text Available Background: Poor cerebral perfusion may contribute to cognitive impairment in type 2 diabetes mellitus (T2DM). We conducted a randomized controlled trial to test the hypothesis that resveratrol can enhance cerebral vasodilator function and thereby alleviate the cognitive deficits in T2DM. We have already reported that acute resveratrol consumption improved cerebrovascular responsiveness (CVR) to hypercapnia. We now report the effects of resveratrol on neurovascular coupling capacity (CVR to cognitive stimuli), cognitive performance and correlations with plasma resveratrol concentrations. Methods: Thirty-six T2DM adults aged 40–80 years were randomized to consume single doses of resveratrol (0, 75, 150 and 300 mg) at weekly intervals. Transcranial Doppler ultrasound was used to monitor changes in blood flow velocity (BFV) during a cognitive test battery. The battery consisted of dual-tasking (finger tapping with both the Trail Making task and the Serial Subtraction 3 task) and a computerized multi-tasking test that required attending to four tasks simultaneously. CVR to cognitive tasks was calculated as the per cent increase in BFV from pre-test basal to peak mean blood flow velocity and also as the area under the curve for BFV. Results: Compared to placebo, 75 mg resveratrol significantly improved neurovascular coupling capacity, which correlated with plasma total resveratrol levels. Enhanced performance on the multi-tasking test battery was also evident following 75 mg and 300 mg of resveratrol. Conclusion: A single 75 mg dose of resveratrol was able to improve neurovascular coupling and cognitive performance in T2DM. Evaluation of the benefits of chronic resveratrol supplementation is now warranted.

  2. Relative contributions of sampling effort, measuring, and weighing to precision of larval sea lamprey biomass estimates

    Science.gov (United States)

    Slade, Jeffrey W.; Adams, Jean V.; Cuddy, Douglas W.; Neave, Fraser B.; Sullivan, W. Paul; Young, Robert J.; Fodale, Michael F.; Jones, Michael L.

    2003-01-01

    We developed two weight-length models from 231 populations of larval sea lampreys (Petromyzon marinus) collected from tributaries of the Great Lakes: Lake Ontario (21), Lake Erie (6), Lake Huron (67), Lake Michigan (76), and Lake Superior (61). Both models were mixed models, which used population as a random effect and additional environmental factors as fixed effects. We resampled weights and lengths 1,000 times from data collected in each of 14 other populations not used to develop the models, obtaining a weight and length distribution from each resampling. To test model performance, we applied the two weight-length models to the resampled length distributions and calculated the predicted mean weights. We also calculated the observed mean weight for each resampling and for each of the original 14 data sets. When the average of predicted means was compared to means from the original data in each stream, inclusion of environmental factors did not consistently improve the performance of the weight-length model. We estimated the variance associated with measures of abundance and mean weight for each of the 14 selected populations and determined that a conservative estimate of the proportional contribution to variance associated with estimating abundance accounted for 32% to 95% of the variance (mean = 66%). Variability in the biomass estimate appears more affected by variability in estimating abundance than in converting length to weight. Hence, efforts to improve the precision of biomass estimates would be aided most by reducing the variability associated with estimating abundance.
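
    The resampling test described above is easy to sketch: draw lengths and weights with replacement, push the resampled lengths through a weight-length model, and compare the average predicted mean weight with the observed one. The allometric coefficients and data below are invented for illustration:

```python
import numpy as np
rng = np.random.default_rng(42)

def predicted_vs_observed(lengths, weights, wl_model, n_resample=1000, n=100):
    """Resample (length, weight) data, apply a weight-length model to the
    resampled lengths, and compare the average predicted mean weight with
    the observed mean weight."""
    pred, obs = [], []
    for _ in range(n_resample):
        idx = rng.integers(0, len(lengths), size=n)
        pred.append(wl_model(lengths[idx]).mean())
        obs.append(weights[idx].mean())
    return np.mean(pred), np.mean(obs)

wl_model = lambda L: 2.0e-6 * L ** 3.1           # toy allometric W = a L^b
L = rng.uniform(40, 120, size=500)               # larval lengths (mm), toy
W = wl_model(L) * rng.lognormal(0.0, 0.15, 500)  # weights with multiplicative noise
print(predicted_vs_observed(L, W, wl_model))
```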

  3. The GLP-1 analogue liraglutide improves first-phase insulin secretion and maximal beta-cell secretory capacity over 14 weeks of therapy in subjects with Type 2 diabetes

    DEFF Research Database (Denmark)

    Madsbad, Sten; Vilsbøll, Tina; Brock, Birgitte

    Aims: We investigated the clinical effect of liraglutide, a long-acting GLP-1 analogue, on insulin secretion in Type 2 diabetes. Methods: Thirty-nine subjects (28 completed) from a randomised trial received a hyperglycaemic clamp (20 mM) with intravenous arginine stimulation, and an insulin ... group. Conclusion: In subjects with Type 2 diabetes, 14 weeks' once-daily liraglutide (1.25 and 1.9 mg/day) markedly improves beta-cell function, significantly increases first-phase insulin secretion and maximal beta-cell secretory capacity.

  4. Look-ahead procedures for Lanczos-type product methods based on three-term recurrences

    Energy Technology Data Exchange (ETDEWEB)

    Gutknecht, M.H.; Ressel, K.J. [Swiss Center for Scientific Computing, Zuerich (Switzerland)

    1996-12-31

    Lanczos-type product methods for the solution of large sparse non-Hermitian linear systems either square the Lanczos process or combine it with a local minimization of the residual. They inherit from the underlying Lanczos process the danger of breakdown. For various Lanczos-type product methods that are based on the Lanczos three-term recurrence, look-ahead versions are presented, which avoid such breakdowns or near breakdowns with a small computational overhead. Different look-ahead strategies are discussed and their efficiency is demonstrated in several numerical examples.
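
    For orientation, the underlying three-term recurrences and the breakdown they inherit can be written as follows (generic two-sided Lanczos notation, not the authors'):

```latex
% Two-sided Lanczos three-term recurrences:
\beta_{k+1}\, v_{k+1} \;=\; A v_k - \alpha_k v_k - \gamma_k v_{k-1},
\qquad
\gamma_{k+1}\, w_{k+1} \;=\; A^{\top} w_k - \alpha_k w_k - \beta_k w_{k-1},
\qquad
\alpha_k \;=\; \frac{w_k^{\top} A v_k}{\,w_k^{\top} v_k\,}.
% Breakdown: w_k^T v_k ~ 0 makes \alpha_k undefined; look-ahead relaxes the
% biorthogonality for a few steps to step over such (near-)singular pivots.
```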

  5. Method for improved gas-solids separation

    Science.gov (United States)

    Kusik, C.L.; He, B.X.

    1990-11-13

    Methods are disclosed for the removal of particulate solids from a gas stream at high separation efficiency, including the removal of submicron size particles. The apparatus includes a cyclone separator type of device which contains an axially mounted perforated cylindrical hollow rotor. The rotor is rotated at high velocity in the same direction as the flow of an input particle-laden gas stream to thereby cause enhanced separation of particulate matter from the gas stream in the cylindrical annular space between the rotor and the sidewall of the cyclone vessel. Substantially particle-free gas passes through the perforated surface of the spinning rotor and into the hollow rotor, from where it is discharged out of the top of the apparatus. Separated particulates are removed from the bottom of the vessel. 4 figs.

  6. S-type and P-type habitability in stellar binary systems: A comprehensive approach. I. Method and applications

    Energy Technology Data Exchange (ETDEWEB)

    Cuntz, M., E-mail: cuntz@uta.edu [Department of Physics, University of Texas at Arlington, Arlington, TX 76019-0059 (United States)]

    2014-01-01

    A comprehensive approach is provided for the study of both S-type and P-type habitability in stellar binary systems, which in principle can also be expanded to systems of higher order. P-type orbits occur when the planet orbits both binary components, whereas in the case of S-type orbits, the planet orbits only one of the binary components, with the second component considered a perturber. The selected approach encapsulates a variety of different aspects, which include: (1) the consideration of a joint constraint that must be met, combining orbital stability with a habitable region for a putative system planet defined through the stellar radiative energy fluxes (the "radiative habitable zone"; RHZ); (2) the treatment of conservative, general, and extended zones of habitability for the various systems as defined for the solar system and beyond; (3) the provision of a combined formalism for the assessment of both S-type and P-type habitability; in particular, mathematical criteria are presented for the kind of system in which S-type and P-type habitability is realized; (4) applications of the attained theoretical approach to standard (theoretical) main-sequence stars. In principle, five different cases of habitability are identified: S-type and P-type habitability provided by the full extent of the RHZs; habitability where the RHZs are truncated by the additional constraint of planetary orbital stability (referred to as ST- and PT-type, respectively); and cases of no habitability at all. Regarding the treatment of planetary orbital stability, we utilize the formulae of Holman and Wiegert as also used in previous studies. In this work, we focus on binary systems in circular orbits. Future applications will also consider binary systems in elliptical orbits and provide thorough comparisons to other methods and results given in the literature.
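
    The Holman and Wiegert stability formulae mentioned above are polynomial fits for the critical planetary semi-major axis; a sketch, with the coefficients as commonly quoted from Holman & Wiegert (1999) and worth double-checking against that paper before use:

```python
def holman_wiegert_s_type(a_bin, e_bin, mu):
    """Largest stable planetary semi-major axis around one stellar
    component (S-type); mu = m2 / (m1 + m2), e_bin = binary eccentricity."""
    return a_bin * (0.464 - 0.380 * mu - 0.631 * e_bin
                    + 0.586 * mu * e_bin
                    + 0.150 * e_bin**2 - 0.198 * mu * e_bin**2)

def holman_wiegert_p_type(a_bin, e_bin, mu):
    """Smallest stable circumbinary semi-major axis (P-type)."""
    return a_bin * (1.60 + 5.10 * e_bin - 2.22 * e_bin**2
                    + 4.12 * mu - 4.27 * e_bin * mu
                    - 5.09 * mu**2 + 4.61 * e_bin**2 * mu**2)

# Circular binary (the case treated in this paper): e_bin = 0
print(holman_wiegert_s_type(1.0, 0.0, 0.3))   # ~0.35 * a_bin
print(holman_wiegert_p_type(1.0, 0.0, 0.3))   # ~2.38 * a_bin
```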

  7. Fueling method in LMFBR type reactors

    International Nuclear Information System (INIS)

    Kawashima, Katsuyuki; Inoue, Kotaro.

    1985-01-01

    Purpose: To extend the burning cycle and decrease the number of fuel exchange batches without increasing the excess reactivity at the initial stage of burning cycles upon fuel loading to an LMFBR type reactor. Method: Each of the burning cycles is divided into a plurality of burning sections. Fuels are charged at the first burning section in each of the cycles such that driver fuel assemblies and blanket assemblies or those assemblies containing neutron absorbers such as boron are distributed in mixture in the reactor core region. At the final stage of the first burning section, the blanket assemblies or neutron absorber-containing assemblies present in mixture are partially or entirely replaced with driver fuel assemblies depending on the number of burning sections such that all of them are replaced with the driver fuel assemblies till the start of the final burning section of the abovementioned cycle. The object of this invention can thus be attained. (Horiuchi, T.)

  8. Improving reading comprehension through Reciprocal Teaching Method

    Directory of Open Access Journals (Sweden)

    Endang Komariah

    2015-10-01

    This study is aimed at discovering the benefits of the Reciprocal Teaching Method (RTM) in the reading classroom, finding out the achievements of students after four comprehension training sessions using RTM, and exploring the perceptions of students on the use of RTM. This method uses four comprehension strategies: predicting, questioning, clarifying, and summarizing, to help learners monitor their development of reading comprehension by themselves. Students work in groups of four or five, and the members are divided into five roles: leader, predictor, clarifier, questioner, and summarizer. The subjects were 24 students from the twelfth grade at a high school in Banda Aceh. Observations, tests, documents and interviews were collected to get the data. The results showed that the students were more active and productive in the reading classroom after the RTM sessions and their reading proficiency improved. They learnt how to apply several of the strategies from RTM while reading. The results also showed that they preferred this method for teaching-learning reading compared to the conventional one. Therefore, teachers are suggested to consider using this method for teaching reading, as it instils in the students how to apply the four comprehension strategies used in reading.

  9. Semiconducting p-type MgNiO:Li epitaxial films fabricated by cosputtering method

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Yong Hun; Chun, Sung Hyun; Cho, Hyung Koun [School of Advanced Materials Science and Engineering, Sungkyunkwan University, 300 Cheoncheon-dong, Jangan-gu, Suwon, Gyeonggi-do 440-746 (Korea, Republic of)]

    2013-07-15

    Li-doped ternary MgₓNi₁₋ₓO thin films were deposited on (0001) Al₂O₃ substrates by a radio frequency (RF) magnetron cosputtering method with MgO and NiO:Li targets. The Mg mole fraction and Li content were controlled relative to each other by changing the RF power for the MgO target over a range of 0-300 W, while the NiO:Li target was kept at 150 W. As a result, all films were epitaxially grown on (0001) Al₂O₃ substrates, with the relationships [110]_NiO || [1110]_Al2O3 and [112]_NiO || [2110]_Al2O3 (in-plane) and [111]_NiO || [0001]_Al2O3 (out-of-plane), and showed p-type semiconducting properties. Furthermore, from X-ray diffraction patterns, the authors found that MgO was effectively mixed with NiO:Li without structural deformation due to the low lattice mismatch (0.8%) between NiO and MgO. However, excess Li content degraded the crystallinity of the MgNiO films. The band gap of the films was continuously shifted from 3.66 eV (339 nm) to 4.15 eV (299 nm) by increasing the RF power of the MgO target. A visible transmittance of more than 80% was exhibited at RF powers higher than 200 W. Ultimately, the electrical resistivity of the p-type MgNiO films was improved from 7.5 to 673.5 Ω·cm, indicating that the Li-doped MgNiO films are good candidates for transparent p-type semiconductors.

  10. Efficiency improvements by Metal Wrap Through technology for n-type Si solar cells and modules

    Energy Technology Data Exchange (ETDEWEB)

    Wenchao, Zhao; Jianming, Wang; Yanlong, Shen; Ziqian, Wang; Yingle, Chen; Shuquan, Tian; Zhiliang, Wan; Bo, Yu; Gaofei, Li; Zhiyan, Hu; Jingfeng, Xiong [Yingli Green Energy Holding Co., Ltd, 3399 North Chaoyang Avenue, Baoding (China)]; Guillevin, N.; Heurtault, B.; Aken, B.B. van; Bennett, I.J.; Geerligs, L.J.; Weeber, A.W.; Bultman, J.H. [ECN Solar Energy, Petten (Netherlands)]

    2012-09-15

    N-type Metal Wrap Through (n-MWT) is presented as an industrially promising back-contact technology for reaching high performance in silicon solar cells and modules. It combines benefits from both the n-type base and MWT metallization. In this paper, the efficiency improvements of commercial industrial n-type bifacial Si solar cells (239 cm²) and modules (60 cells) by the integration of the MWT technique are described. For the cell, after optimization of the integration, over 0.3% absolute efficiency gain was achieved over the comparable non-MWT technology, with Voc and Isc gains of up to 0.9% and 3.5%, respectively. These gains are mainly attributed to reduced shading loss and surface recombination. Besides the front-pattern optimization, a 0.1 mΩ reduction of Rs in the via part will induce a further 0.06% absolute efficiency improvement. For the module part, a power output of the n-MWT module of up to 279 W was achieved, corresponding to a module efficiency of about 17.7%.

  11. Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.

    Science.gov (United States)

    Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui

    2018-01-13

    Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems are neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so they are named quasi scattered data in this paper. They can therefore be organized into rows easily, but the number of points in each row is random. In order to overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield an advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulated and practical data, and the results show that the proposed method is fast, effective and robust. Moreover, by analyzing the fitting results acquired from data with different degrees of scatterness, it can be demonstrated that the error introduced by resampling is negligible and that the method is therefore feasible.
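
    Steps (1) and (2) can be illustrated with a cubic B-spline stand-in for the NURBS machinery (the paper's curvature-aware resampling and iterative projection optimization are not reproduced here):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def resample_row(points, n_samples=50, degree=3, smooth=0.0):
    """Fit a parametric B-spline to one measured row and resample it at
    uniform parameter values. `points` is an (N, 3) array holding one
    row of quasi scattered data, with N > degree."""
    tck, _ = splprep(points.T, k=degree, s=smooth)   # step (1): curve fit
    u_new = np.linspace(0.0, 1.0, n_samples)         # step (2): resampling
    return np.stack(splev(u_new, tck), axis=1)       # (n_samples, 3) grid row

# Resampling every row with the same n_samples turns the quasi scattered
# rows into a regular grid, ready for surface fitting (step (3)).
```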

  12. Assessment methods for Bree-type ratcheting without the necessity of linearization of stresses and strains

    International Nuclear Information System (INIS)

    Fujioka, Terutaka

    2015-01-01

    This paper proposes methods for assessing Bree-type ratcheting in a cylinder subjected to constant internal pressure and cyclic thermal loading. The proposed methods follow an elastic analysis route and an elastic–plastic analysis route. The former is based on a polynomial approximation of the elastic stress distributions for thermal stresses and the reference stress concept for estimating primary stress. The latter, elastic–plastic route method is based on the concept of relative elastic core size. The proposed methods were validated by performing elastic–plastic finite element analyses of a smooth cylinder that exhibited Bree-type ratcheting. - Highlights: • Rationalization of the ratcheting assessment has been made. • The proposed methods include both elastic and elastic–plastic routes. • The elastic route method is based on skeletal point stress by elastic FEA. • The elastic–plastic route is based on elastic core size in elastic–plastic FEA. • These have been validated by elastic–plastic FEA causing Bree-type ratcheting
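
    For reference, the classical Bree solution that such assessments build on marks the ratcheting region by a simple boundary in normalized stress space; a sketch, with the boundary quoted from the standard Bree diagram (verify against the applicable design code before relying on it):

```python
def bree_ratchets(x, y):
    """Classical Bree-diagram ratcheting check.
    x = primary membrane stress / yield stress,
    y = cyclic thermal stress range / yield stress.
    Returns True if (x, y) lies in the ratcheting region."""
    if x >= 1.0:
        return True                    # primary stress alone exceeds yield
    if x <= 0.0:
        return False
    boundary = 1.0 / x if x <= 0.5 else 4.0 * (1.0 - x)
    return y > boundary
```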

  13. Research on Monte Carlo improved quasi-static method for reactor space-time dynamics

    International Nuclear Information System (INIS)

    Xu Qi; Wang Kan; Li Shirui; Yu Ganglin

    2013-01-01

    With large time steps, improved quasi-static (IQS) method can improve the calculation speed for reactor dynamic simulations. The Monte Carlo IQS method was proposed in this paper, combining the advantages of both the IQS method and MC method. Thus, the Monte Carlo IQS method is beneficial for solving space-time dynamics problems of new concept reactors. Based on the theory of IQS, Monte Carlo algorithms for calculating adjoint neutron flux, reactor kinetic parameters and shape function were designed and realized. A simple Monte Carlo IQS code and a corresponding diffusion IQS code were developed, which were used for verification of the Monte Carlo IQS method. (authors)
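
    The IQS factorization underlying the method has a standard textbook form, which the Monte Carlo variant retains; an assumed outline (the paper's MC-specific estimators for the adjoint flux and kinetics parameters are not reproduced):

```latex
% IQS flux factorization: slowly varying shape, rapidly varying amplitude
\phi(\mathbf{r},E,t) = A(t)\,\psi(\mathbf{r},E,t),
\qquad
\frac{d}{dt}\left\langle \phi^{\dagger},\,\frac{1}{v}\,\psi \right\rangle = 0
\quad \text{(uniqueness/normalization constraint)}

% Amplitude equation: point kinetics with adjoint-weighted parameters
% (rho, beta_eff, Lambda) evaluated from the Monte Carlo shape solution
\frac{dA}{dt} = \frac{\rho(t)-\beta_{\mathrm{eff}}(t)}{\Lambda(t)}\,A(t)
              + \sum_{k}\lambda_{k}\,C_{k}(t)
```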

  14. A novel analysis strategy for HLA typing using a sequence-specific oligonucleotide probe method.

    Science.gov (United States)

    Won, D I

    2017-11-01

    The technique of reverse sequence-specific oligonucleotide probes (SSOPs) is commonly used in human leukocyte antigen (HLA) typing. In the conventional method for data analysis (exact pattern matching, EPM), the larger the number of mismatched probes, the longer the time to the final typing assignment. A novel strategy, filtering and scoring (FnS), has been developed to easily assign the best-fit allele pair. In the FnS method, candidate alleles and allele pairs are filtered based on (1) the subject's ethnicity, and (2) the measured partial reaction pattern using only the definitely negative or positive probes. Then, the complete reaction patterns for all probes (CRPoAPs) are compared between the raw sample and the expected residual allele pairs to obtain mismatch scores. To compare the FnS and EPM methods, the analysis time (minutes:seconds) for reverse SSOP HLA typing with intermediate resolution (n = 507) was measured for each. The analysis time with the FnS method was shorter than that of the EPM method [00:21 (00:08-01:47) and 01:04 (00:15-23:45), respectively]. The FnS strategy supports typing through a comprehensive and quantitative comparison between the measured and expected CRPoAPs of candidate allele pairs. Therefore, this analysis strategy might be useful in a clinical setting. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
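
    The two filters and the scoring step described above can be sketched as follows; all data structures are hypothetical stand-ins for the actual SSOP analysis software:

```python
def filter_and_score(candidates, definite_probes, raw_pattern, ethnic_pool):
    """Sketch of the FnS idea (data structures hypothetical).
    candidates: dict mapping allele pair -> expected full reaction
                pattern (tuple of 0/1 per probe, i.e. its CRPoAP).
    definite_probes: indices of probes whose raw reaction is clearly
                     negative or positive.
    raw_pattern: measured 0/1 reactions for all probes.
    ethnic_pool: allele pairs plausible for the subject's ethnicity.
    Returns candidate pairs sorted by mismatch score, best fit first."""
    scored = []
    for pair, expected in candidates.items():
        if pair not in ethnic_pool:                      # filter 1: ethnicity
            continue
        if any(expected[i] != raw_pattern[i] for i in definite_probes):
            continue                                     # filter 2: definite probes
        mismatches = sum(e != r for e, r in zip(expected, raw_pattern))
        scored.append((mismatches, pair))                # score on the full CRPoAP
    return [pair for _, pair in sorted(scored)]
```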

  15. Methods for enhancing P-type doping in III-V semiconductor films

    Science.gov (United States)

    Liu, Feng; Stringfellow, Gerald; Zhu, Junyi

    2017-08-01

    Methods of doping a semiconductor film are provided. The methods comprise epitaxially growing the III-V semiconductor film in the presence of a dopant, a surfactant capable of acting as an electron reservoir, and hydrogen, under conditions that promote the formation of a III-V semiconductor film doped with the p-type dopant. In some embodiments of the methods, the epitaxial growth of the doped III-V semiconductor film is initiated at a first hydrogen partial pressure which is increased to a second hydrogen partial pressure during the epitaxial growth process.

  16. Molecular Strain Typing of Mycobacterium tuberculosis: a Review of Frequently Used Methods

    Science.gov (United States)

    2016-01-01

    Tuberculosis, caused by the bacterium Mycobacterium tuberculosis, remains one of the most serious global health problems. Molecular typing of M. tuberculosis has been used for various epidemiologic purposes as well as for clinical management. Currently, many techniques are available to type M. tuberculosis. Choosing the most appropriate technique in accordance with the existing laboratory conditions and the specific features of the geographic region is important. Insertion sequence IS6110-based restriction fragment length polymorphism (RFLP) analysis is considered the gold standard for molecular epidemiologic investigations of tuberculosis. However, other polymerase chain reaction-based methods, such as spacer oligonucleotide typing (spoligotyping), which detects the 43 spacer sequences interspersing the direct repeats (DRs) in the genomic DR region; mycobacterial interspersed repetitive units–variable number tandem repeats (MIRU-VNTR), which determines the number and size of tandem repetitive DNA sequences; repetitive-sequence-based PCR (rep-PCR), which provides high-throughput genotypic fingerprinting of multiple Mycobacterium species; and the recently developed whole-genome sequencing methods, demonstrate similar discriminatory power and greater convenience. This review focuses on techniques frequently used for the molecular typing of M. tuberculosis and discusses their general aspects and applications. PMID:27709842

  17. Cryptanalysis of "an improvement over an image encryption method based on total shuffling"

    Science.gov (United States)

    Akhavan, A.; Samsudin, A.; Akhshani, A.

    2015-09-01

    In the past two decades, several image encryption algorithms based on chaotic systems have been proposed. Many of the proposed algorithms are meant to improve other chaos-based and conventional cryptographic algorithms. However, many of the proposed improvement methods suffer from serious security problems. In this paper, the security of the recently proposed improvement method for a chaos-based image encryption algorithm is analyzed. The results indicate the weakness of the analyzed algorithm against chosen plain-text attacks.

  18. Effect of Improving Dietary Quality on Arterial Stiffness in Subjects with Type 1 and Type 2 Diabetes: A 12 Months Randomised Controlled Trial

    Directory of Open Access Journals (Sweden)

    Kristina S. Petersen

    2016-06-01

    People with diabetes have accelerated arterial stiffening. The aim of this study was to determine the effect of increasing fruit, vegetable and dairy intake for 12 months on carotid-femoral pulse wave velocity (cfPWV), augmentation index (AIx), and central blood pressure (cBP), compared to a usual-diet control, in people with type 1 and type 2 diabetes. In a 12-month randomised controlled trial, cfPWV, AIx and cBP were measured every 3 months. The intervention group received dietary counselling to increase consumption of fruit (+1 serving/day; 150 g/day), vegetables (+2 servings/day; 150 g/day) and dairy (+1 serving/day; 200–250 g/day) at baseline, 1, 3, 6 and 9 months. The control group continued on their usual diet. One hundred and nine participants were randomised and 92 (intervention n = 45; control n = 47) completed. At 3 months, fruit (184 g/day; p = 0.001) and dairy (83 g/day; p = 0.037) intake increased in the intervention group compared with the control group, but this increase was not maintained at 12 months. After adjustment for baseline measurements there was no time-by-treatment effect for central systolic or diastolic BP, AIx or cfPWV. A time effect existed for AIx, which modestly increased over time. Peripheral diastolic BP and central pulse pressure were improved in the intervention group compared with the control group at 12 months. In this cohort with type 1 and type 2 diabetes, improving dietary quality by increasing consumption of fruit, vegetables and dairy did not improve cBP, AIx or cfPWV, compared with a control group continuing on their usual diet, after 12 months.

  19. Glycogen storage disease type III: modified Atkins diet improves myopathy.

    Science.gov (United States)

    Mayorandan, Sebene; Meyer, Uta; Hartmann, Hans; Das, Anibh Martin

    2014-11-28

    Frequent feeds with carbohydrate-rich meals or continuous enteral feeding has been the therapy of choice in glycogen storage disease (Glycogenosis) type III. Recent guidelines on diagnosis and management recommend frequent feedings with high complex carbohydrates or cornstarch avoiding fasting in children, while in adults a low-carb-high-protein-diet is recommended. While this regimen can prevent hypoglycaemia in children it does not improve skeletal and heart muscle function, which are compromised in patients with glycogenosis IIIa. Administration of carbohydrates may elicit reactive hyperinsulinism, resulting in suppression of lipolysis, ketogenesis, gluconeogenesis, and activation of glycogen synthesis. Thus, heart and skeletal muscle are depleted of energy substrates. Modified Atkins diet leads to increased blood levels of ketone bodies and fatty acids. We hypothesize that this health care intervention improves the energetic balance of muscles. We treated 2 boys with glycogenosis IIIa aged 9 and 11 years with a modified Atkins diet (10 g carbohydrate per day, protein and fatty acids ad libitum) over a period of 32 and 26 months, respectively. In both patients, creatine kinase levels in blood dropped in response to Atkins diet. When diet was withdrawn in one of the patients he complained of chest pain, reduced physical strength and creatine kinase levels rapidly increased. This was reversed when Atkins diet was reintroduced. One patient suffered from severe cardiomyopathy which significantly improved under diet. Patients with glycogenosis IIIa benefit from an improved energetic state of heart and skeletal muscle by introduction of Atkins diet both on a biochemical and clinical level. Apart from transient hypoglycaemia no serious adverse effects were observed.

  20. An Improved Algebraic Method for Transit Signal Priority Scheme and Its Impact on Traffic Emission

    OpenAIRE

    Ji, Yanjie; Hu, Bo; Han, Jing; Tang, Dounan

    2014-01-01

    Transit signal priority has a positive effect on relieving traffic congestion and reducing transit delay, and it also influences traffic emissions. In this paper, an optimal transit signal priority scheme based on an improved algebraic method was developed, and its impact on vehicle emissions was evaluated as well. The improved algebraic method was proposed on the basis of the classical algebraic method and has improvements in three aspects. First, the calculation rules of split loss are more r...

  1. Method for improving solution flow in solution mining of a mineral

    International Nuclear Information System (INIS)

    Moore, T.

    1980-01-01

    An improved method for the solution mining of a mineral from a subterranean formation containing the same, in which an injection well and a production well are drilled and completed within said formation, leach solution and an oxidant are injected through said injection well into said formation to dissolve said mineral, and said dissolved mineral is recovered via said production well, wherein the improvement comprises pretreating said formation with an acid gas to improve the permeability thereof.

  2. Demonstration of the improved PID method for the accurate temperature control of ADRs

    International Nuclear Information System (INIS)

    Shinozaki, K.; Hoshino, A.; Ishisaki, Y.; Mihara, T.

    2006-01-01

    Microcalorimeters require extreme stability (∼10 μK) of the thermal bath at low temperature (∼100 mK). We have developed a portable adiabatic demagnetization refrigerator (ADR) system for ground experiments with TES microcalorimeters, in which we observed a residual temperature difference between the aimed and measured values when the magnet current was controlled with the standard Proportional, Integral, and Derivative (PID) control method. The difference increases in time as the magnet current decreases. This phenomenon can be explained by the theory of magnetic cooling, and we have introduced a new functional parameter to improve the PID method. With this improvement, long-term stability of the ADR temperature of about 10 μK rms is obtained over periods of ∼15 ks, down to almost zero magnet current. We briefly describe our ADR system and the principle of the improved PID method, showing the temperature control results. It is demonstrated that the controlled time at the aimed temperature can be extended by about 30% compared with the standard PID method in our system. The improved PID method is considered to be of great advantage especially in the range of small magnet current.
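
    A plain PID loop with a placeholder for the paper's correction is sketched below; the actual form of the "new functional parameter" is not given in the abstract, so the gain_schedule argument is purely hypothetical:

```python
class PID:
    """Textbook PID loop for the magnet-current control described above."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured, gain_schedule=1.0):
        # gain_schedule stands in for the paper's extra functional
        # parameter (hypothetical; in the ADR it would vary with the
        # remaining magnet current).
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return gain_schedule * (self.kp * err + self.ki * self.integral
                                + self.kd * deriv)
```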

  3. Development and validation of an improved method for the determination of chloropropanols in paperboard food packaging by GC-MS.

    Science.gov (United States)

    Mezouari, S; Liu, W Yun; Pace, G; Hartman, T G

    2015-01-01

    The objective of this study was to develop an improved analytical method for the determination of 3-chloro-1,2-propanediol (3-MCPD) and 1,3-dichloropropanol (1,3-DCP) in paper-type food packaging. The established method includes aqueous extraction, matrix spiking with a deuterated surrogate internal standard (3-MCPD-d₅), clean-up using Extrelut solid-phase extraction, derivatisation using a silylation reagent, and GC-MS analysis of the chloropropanols as their corresponding trimethylsilyl ethers. The new method is applicable to food-grade packaging samples using European Commission standard aqueous extraction and aqueous food simulant migration tests. In this improved method, the derivatisation procedure was optimised; the cost and time of the analysis were reduced by using 10 times less sample, solvents and reagents than in previously described methods. Overall the validation data demonstrate that the method is precise and reliable. The limit of detection (LOD) in the aqueous extract was 0.010 mg kg(-1) (w/w) for both 3-MCPD and 1,3-DCP. Analytical precision had a relative standard deviation (RSD) of 3.36% for 3-MCPD and an RSD of 7.65% for 1,3-DCP. The new method was satisfactorily applied to the analysis of over 100 commercial paperboard packaging samples. The data are being used to guide the product development of a next generation of wet-strength resins with reduced chloropropanol content, and also for risk assessments to calculate the virtual safe dose (VSD).

  4. Improvement to the PhytoDOAS method for identification of coccolithophores using hyper-spectral satellite data

    Directory of Open Access Journals (Sweden)

    A. Sadeghi

    2012-11-01

    The goal of this study was to improve PhytoDOAS, which is a new retrieval method for quantitative identification of major phytoplankton functional types (PFTs) using hyper-spectral satellite data. PhytoDOAS is an extension of Differential Optical Absorption Spectroscopy (DOAS), a method for the detection of atmospheric trace gases, developed for remote identification of oceanic phytoplankton groups. Thus far, PhytoDOAS has been successfully exploited to identify cyanobacteria and diatoms over the global ocean from SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) hyper-spectral data. This study aimed to improve PhytoDOAS for remote identification of coccolithophores, another functional group of phytoplankton. The main challenge in retrieving more PFTs with PhytoDOAS is to overcome the correlation effects between different PFT absorption spectra. Different PFTs are composed of different types and amounts of pigments, but also have pigments in common, e.g. chl a, causing correlation effects in the usual performance of the PhytoDOAS retrieval. Two ideas have been implemented to improve PhytoDOAS for the PFT retrieval of more phytoplankton groups. Firstly, using fourth-derivative spectroscopy, the peak positions of the main pigment components in each absorption spectrum have been derived. After comparing the corresponding results of major PFTs, the optimized fit-window for the PhytoDOAS retrieval of each PFT was determined. Secondly, based on the results from derivative spectroscopy, a simultaneous fit of PhytoDOAS has been proposed and tested for a selected set of PFTs (coccolithophores, diatoms and dinoflagellates) within an optimized fit-window, proven by spectral orthogonality tests. The method was then applied to the processing of SCIAMACHY data over the year 2005. Comparisons of the PhytoDOAS coccolithophore retrievals in 2005 with other coccolithophore-related data showed similar patterns in their
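
    The fourth-derivative step can be sketched with a smoothed (Savitzky-Golay) derivative; the window and polynomial order below are illustrative, and a uniform wavelength grid is assumed:

```python
import numpy as np
from scipy.signal import savgol_filter

def pigment_peak_positions(wavelength, spectrum, window=15, poly=5):
    """Locate narrow pigment absorption bands via a smoothed fourth
    derivative; local maxima of the 4th derivative mark the peak
    positions used to choose PFT fit-windows."""
    dx = wavelength[1] - wavelength[0]          # assumes a uniform grid
    d4 = savgol_filter(spectrum, window, poly, deriv=4, delta=dx)
    is_peak = (d4[1:-1] > d4[:-2]) & (d4[1:-1] > d4[2:])
    return wavelength[1:-1][is_peak]
```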

  5. Type D patients report poorer health status prior to and after cardiac rehabilitation compared to non-type D patients

    OpenAIRE

    Pelle, Aline; Erdman, Ruud; Domburg, Ron; Spiering, Marquita; Kazemier, Marten; Pedersen, Susanne

    2008-01-01

    Background: Type D personality is an emerging risk factor in coronary artery disease (CAD). Cardiac rehabilitation (CR) improves outcomes, but little is known about the effects of CR on Type D patients. Purpose: We examined (1) variability in Type D caseness following CR, (2) Type D as a determinant of health status, and (3) the clinical relevance of Type D as a determinant of health status compared to cardiac history. Methods: CAD patients (n = 368) participating in CR completed ...

  6. Improvement to defect detection by ultrasonic data processing: the DTVG method

    International Nuclear Information System (INIS)

    Francois, D.

    1995-10-01

    The cast elbows of the pipes of the main primary circuit of French PWRs, made of austenitic-ferritic stainless steel, pose inspection problems. In order to improve the ultrasonic detection of defects in coarse-grained materials, we propose a method (called DTVG) based on the statistical study of the spatial stability of events contained in temporal signals. The method was originally developed during a thesis (G. Corneloup, 1998) to improve the detection of cracks in thin austenitic welds. Here, we propose to adapt the DTVG method and assess its performance in detecting defects in thick materials representative of cast austenitic-ferritic elbow steels. The first objective of the study is to adapt the original treatment, applied to thin austenitic welds, to the detection of defects in thick austenitic-ferritic cast steels. The second objective is to improve the algorithm to take into account the difference between thin and thick materials and to assess the detection performance of the DTVG method on specimen blocks with artificial defects. This work has led to adapting the original DTVG method to the inspection of thick cast austenitic-ferritic specimens (80 mm) under normal and oblique incidence. Moreover, the study has made the treatment automatic (automatic search of parameters). The results have shown that the DTVG method is suited to detecting artificial defects in thick cast austenitic-ferritic steel samples. All the defects in the specimen block were detected without producing false indications. (author). 4 refs., 4 figs

  7. Improved Taguchi method based contract capacity optimization for industrial consumer with self-owned generating units

    International Nuclear Information System (INIS)

    Yang, Hong-Tzer; Peng, Pai-Chun

    2012-01-01

    Highlights: ► We propose an improved Taguchi method to determine the optimal contract capacities with SOGUs. ► We solve the highly discrete and nonlinear optimization problem of contract capacities with SOGUs. ► The proposed improved Taguchi method integrates PSO into the Taguchi method. ► A customer using the proposed optimization approach may save up to 12.18% of power expenses. ► The improved Taguchi method can also be applied to other similar problems. - Abstract: Contract capacity setting for an industrial consumer with self-owned generating units (SOGUs) is a highly discrete and nonlinear optimization problem considering expenditure on electricity from the utility and the operation costs of the SOGUs. This paper proposes an improved Taguchi method that combines the existing Taguchi method and the particle swarm optimization (PSO) algorithm to solve this problem. The Taguchi method provides fast-converging characteristics in searching for the optimal solution through quality analysis in orthogonal matrices. The integrated PSO algorithm generates new solutions in the orthogonal matrices based on the search experience gathered during the evolution process, to further improve the quality of the solution. To verify the feasibility of the proposed method, the paper uses real data obtained from a large optoelectronics factory in Taiwan. In comparison with existing optimization methods, the proposed improved Taguchi method shows superior performance in the numerical results, in terms of both the convergence process and the quality of the solution obtained.
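
    The PSO half of the hybrid can be sketched generically (the orthogonal-array Taguchi step and the actual tariff cost function are omitted; all bounds and parameters below are illustrative):

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimizer. bounds: list of (low, high) per
    decision variable, e.g. one contract capacity per tariff period."""
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()]
    return gbest, pbest_f.min()

# e.g. a single contract capacity in kW, with a hypothetical cost model:
# best, cost = pso(total_cost_fn, [(500, 5000)])
```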

  8. An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method

    Directory of Open Access Journals (Sweden)

    Jingyang Fu

    2018-04-01

    Different from GPS, GLONASS, GALILEO and BeiDou-3, it is confirmed that code multipath biases (CMBs), which originate at the satellite end and can exceed 1 m, are commonly found in the code observations of BeiDou-2 (BDS) IGSO and MEO satellites. In order to mitigate their adverse effects on absolute precise applications that use the code measurements, we propose in this paper an improved correction model to estimate the CMB. Different from the traditional model, which considers the correction values to be orbit-type dependent (estimating two sets of values for IGSO and MEO, respectively) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate the corrections for each BDS IGSO + MEO satellite on the one hand, and use a denser elevation node separation of 5° to model the CMB variations on the other hand. Currently, institutions such as IGS-MGEX operate over 120 stations providing daily BDS observations. These large amounts of data provide adequate support to refine the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX is used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that for satellites on the same orbit type, obvious differences can be found in the CMB at the same node and frequency. Results show that the new correction model can improve the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, shorten the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), and improve the PPP positioning accuracy. With our improved correction model, the usage of WL ambiguities is increased from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR are shortened from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model. In addition, both the traditional and improved CMB model have
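
    The piecewise-linear, per-satellite correction described above reduces to a 1-D interpolation over elevation nodes; a sketch, with the node values left as inputs since they come from each satellite's calibration:

```python
import numpy as np

def cmb_correction(elev_deg, node_values, node_step=5.0):
    """Piecewise-linear CMB correction over elevation nodes.
    node_values: one estimated bias (m) per node at 0, 5, ..., 90 degrees,
    calibrated separately for each BDS satellite and frequency."""
    nodes = np.arange(0.0, 90.0 + node_step, node_step)
    assert len(node_values) == len(nodes)
    return np.interp(elev_deg, nodes, node_values)

# Applying it to a raw pseudorange (all values hypothetical):
# P_corrected = P_raw - cmb_correction(elevation_deg, per_satellite_nodes)
```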

  9. A method of quantitative prediction for sandstone type uranium deposit in Russia and its application

    International Nuclear Information System (INIS)

    Chang Shushuai; Jiang Minzhong; Li Xiaolu

    2008-01-01

    The paper presents the foundational principles of quantitative prediction for sandstone-type uranium deposits in Russia. Some key methods, such as physical-mathematical model construction and deposit prediction, are described. The method has been applied to deposit prediction in the Dahongshan region of the Chaoshui basin. It is concluded that the technique can strengthen quantitative prediction for sandstone-type uranium deposits, and it could be used as a new technique in China. (authors)

  10. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
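
    A 1-D version of the idea, a GP posterior whose variance grows between base-grid points and thereby encodes the location-dependent interpolation uncertainty, can be sketched as follows (squared-exponential kernel assumed; the registration-specific generative model is not reproduced):

```python
import numpy as np

def gp_interpolate(x_grid, y_grid, x_query, length_scale=1.0, noise=1e-6):
    """GP posterior mean and variance with a squared-exponential kernel.
    The variance is smallest at the base-grid points and grows between
    them, mirroring the interpolation uncertainty described above."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length_scale**2)
    K = k(x_grid, x_grid) + noise * np.eye(len(x_grid))
    Ks = k(x_query, x_grid)
    mean = Ks @ np.linalg.solve(K, y_grid)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```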

  11. Refining types using type guards in TypeScript

    NARCIS (Netherlands)

    de Wolff, Ivo Gabe; Hage, J.

    2017-01-01

    We discuss two adaptations of the implementation of type guards and narrowing in the TypeScript compiler. The first is an improvement on the original syntax-directed implementation, and has now replaced the original one in the TypeScript compiler. It is specifically suited for the scenario in which

  12. An improved method of continuous LOD based on fractal theory in terrain rendering

    Science.gov (United States)

    Lin, Lan; Li, Lijun

    2007-11-01

    With the improvement of computer graphics hardware capability, algorithms for 3D terrain rendering have become a hot topic in real-time visualization. In order to resolve the conflict between rendering speed and rendering realism, this paper gives an improved terrain rendering method that extends the traditional continuous level-of-detail technique using fractal theory. In this method, the program need not operate on memory repeatedly to obtain terrain models at different resolutions; instead, it obtains the fractal characteristic parameters of different regions according to the movement of the viewpoint. Experimental results show that the method preserves the authenticity of the landscape and increases the speed of real-time 3D terrain rendering.

  13. Introducing modified TypeScript in an existing framework to improve error handling

    OpenAIRE

    Minder, Patrik

    2016-01-01

    Error messages in compilers is a topic that is often overlooked. The quality of the messages can have a big impact on development time and ease of learning. Another method used to speed up development is to build a domain-specific language (DSL). This thesis migrates an existing framework to use TypeScript in order to speed up development time with compile-time error handling. Alternative methods for implementing a DSL are evaluated based on how they affect the ability to generate good error mes...

  14. A Mixed-Methods Research Framework for Healthcare Process Improvement.

    Science.gov (United States)

    Bastian, Nathaniel D; Munoz, David; Ventura, Marta

    2016-01-01

    The healthcare system in the United States is spiraling out of control due to ever-increasing costs without significant improvements in quality, access to care, satisfaction, and efficiency. Efficient workflow is paramount to improving healthcare value while maintaining the utmost standards of patient care and provider satisfaction in high stress environments. This article provides healthcare managers and quality engineers with a practical healthcare process improvement framework to assess, measure and improve clinical workflow processes. The proposed mixed-methods research framework integrates qualitative and quantitative tools to foster the improvement of processes and workflow in a systematic way. The framework consists of three distinct phases: 1) stakeholder analysis, 2a) survey design, 2b) time-motion study, and 3) process improvement. The proposed framework is applied to the pediatric intensive care unit of the Penn State Hershey Children's Hospital. The implementation of this methodology led to identification and categorization of different workflow tasks and activities into both value-added and non-value added in an effort to provide more valuable and higher quality patient care. Based upon the lessons learned from the case study, the three-phase methodology provides a better, broader, leaner, and holistic assessment of clinical workflow. The proposed framework can be implemented in various healthcare settings to support continuous improvement efforts in which complexity is a daily element that impacts workflow. We proffer a general methodology for process improvement in a healthcare setting, providing decision makers and stakeholders with a useful framework to help their organizations improve efficiency. Published by Elsevier Inc.

  15. Improvement in children’s fine motor skills following a computerized typing intervention

    OpenAIRE

    McGlashan, Hannah L.; Blanchard, Caroline C.V.; Sycamore, Nicole J.; Lee, Rachel; French, Blandine; Holmes, Nicholas P.

    2017-01-01

    Children spend a large proportion of their school day engaged in tasks that require manual dexterity. If children experience difficulties with their manual dexterity skills it can have a consequential effect on their academic achievement. The first aim of this paper was to explore whether an online interactive typing intervention could improve children’s scores on a standardised measure of manual dexterity. The second aim was to implement a serial reaction time tapping task as an index of chi...

  16. The improved oval forceps suture-guiding method for minimally invasive Achilles tendon repair.

    Science.gov (United States)

    Liu, Yang; Lin, Lixiang; Lin, Chuanlu; Weng, Qihao; Hong, Jianjun

    2018-06-01

    To discuss the effect and advantages of the improved oval forceps suture-guiding method combined with an anchor nail in the treatment of acute Achilles tendon rupture, a retrospective study was performed on 35 cases of acute Achilles tendon rupture treated with the improved oval forceps suture-guiding method from January 2013 to October 2016. Instead of the Achillon device, we perform the Achillon technique with the use of simple oval forceps, combined with an absorbable anchor nail, to percutaneously repair the acute Achilles tendon rupture. All patients were followed up for at least 12 months (range, 12-19 months), and all patients underwent successful repair of their acute Achilles tendon rupture using the improved oval forceps suture-guiding method without any major intra- or postoperative complications. All patients returned to work at pre-injury levels of activity at a mean of 12.51 ± 0.76 weeks. Mean AOFAS ankle-hindfoot scores improved from 63.95 (range, 51-78) preoperatively to 98.59 (range, 91-100) at the last follow-up; this difference was statistically significant. Combined with the anchor nail, the improved technique has better repair capacity and expands the operative indications of the oval forceps method. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Observation method for inside of FBR type reactor

    International Nuclear Information System (INIS)

    Shimano, Kunio; Ishitori, Takashi.

    1992-01-01

    The method of the present invention enables the surface of the metal case of an ultrasonic transducer to keep good intimate contact with liquid sodium in a normal state within a short period of time while exhibiting satisfactory wettability. That is, the oxygen concentration in the liquid sodium is increased so that the inside of the reactor can be viewed. Liquid sodium in a state of high oxygen concentration has extremely satisfactory wettability with metals. Accordingly, the metal surface of the ultrasonic transducer can be put into intimate contact with the liquid sodium in a normal state. Further, a coating layer made of nickel or gold is disposed on the surface of the ultrasonic transducer. With such a constitution, the wettability with the liquid sodium can be further improved. (I.S.)

  18. A new metric method-improved structural holes researches on software networks

    Science.gov (United States)

    Li, Bo; Zhao, Hai; Cai, Wei; Li, Dazhou; Li, Hui

    2013-03-01

    The scale of software systems is increasing quickly with the rapid development of software technologies. Hence, how to understand, measure, manage and control software structure is a great challenge for software engineering. There are also many existing software network metrics, such as C&K, MOOD and McCabe; the aim of this paper is to propose a new and better method for measuring software networks. The structural holes metric is first introduced in this paper; it cannot be applied directly because of the modular characteristics of software networks. Hence, structural holes are redefined and improved in this paper, and the calculation process and results are described in detail. The results show that the new method better reflects the bridging role of vertices in a software network and that there is a significant correlation between degree and the improved structural holes measure. Finally, a hydropower simulation system is taken as an example to show the validity of the new metric method.
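
    Burt's classical structural-hole measures, the starting point of the paper's redefinition, are available in networkx; a toy software-dependency network for illustration (the paper's modified metric itself is not reproduced here):

```python
import networkx as nx

# Toy software network: nodes are modules, edges are dependencies.
G = nx.Graph([("core", "ui"), ("core", "db"), ("core", "net"),
              ("ui", "db"), ("net", "util")])

print(nx.effective_size(G))   # bridging capacity of each vertex
print(nx.constraint(G))       # Burt's constraint (low = many holes)
print(dict(G.degree()))       # for comparison with plain degree
```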

  19. Improved look-up table method of computer-generated holograms.

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU), whose reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.
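
    The LUT idea (precompute distance-dependent phase factors once, then index into them per pixel) can be sketched for a point-source Fresnel hologram; all parameters are illustrative, and the paper's specific distance-factor definition and GPU kernel are not reproduced:

```python
import numpy as np

# Point-source CGH with a radial look-up table: the factors exp(i*k*r)/r
# are precomputed on a 1-D distance grid and fetched by index instead of
# being re-evaluated per pixel. Table resolution bounds the quantization
# error introduced by the lookup.
lam = 532e-9                      # wavelength (m)
k = 2 * np.pi / lam
pitch, N = 8e-6, 512              # hologram pixel pitch and size
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)

r_table = np.linspace(0.05, 0.2, 200_000)       # distance grid (m)
factor_table = np.exp(1j * k * r_table) / r_table

def add_point(H, px, py, pz, amplitude=1.0):
    """Accumulate one object point's contribution via table lookup."""
    r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
    idx = np.searchsorted(r_table, r.ravel()).clip(0, r_table.size - 1)
    H += amplitude * factor_table[idx].reshape(H.shape)

H = np.zeros((N, N), dtype=complex)
add_point(H, 0.0, 0.0, 0.1)       # one object point at z = 10 cm
fringe = np.angle(H)              # phase hologram
```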

  20. Unscented Kalman filtering in the additive noise case

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The unscented Kalman filter (UKF) has four implementations in the additive noise case, according to whether the state is augmented with noise vectors and whether a new set of sigma points is redrawn from the predicted state (so-called resampling) for the observation prediction. This paper concerns the differences in performance of those implementations, such as accuracy, adaptability and computational complexity. The conditionally equivalent relationships between the augmented and non-augmented unscented transforms (UTs) are proved for several commonly used sampling strategies. We then find that the augmented and non-augmented UKFs produce the same filter results with additive measurement noise, but only the same state predictions with additive process noise. Some studies hold that resampling is unnecessary; however, we find that resampling can be helpful for an adaptive Kalman gain, which improves the convergence and accuracy of the filter when large state modeling bias or unknown maneuvers occur. Finally, some universal design principles for a practical UKF are given: 1) for the additive observation noise case, it is better to use the non-augmented UKF; 2) for the additive process noise case, when small state modeling bias or maneuvers are involved, the non-resampling algorithms, with the state augmented or not, are candidates for filters; 3) the resampling, non-augmented algorithm is the only choice when large state modeling bias or maneuvers are latent.
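
    A compact sketch of the non-augmented, additive-noise UKF prediction, with the resampling choice exposed as a flag (the weights use the basic kappa parameterization; this illustrates the variants compared above, not the paper's exact implementation):

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Non-augmented unscented-transform sigma points and weights."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    return pts, w

def ukf_predict(mean, cov, f, h, Q, resample=True, kappa=1.0):
    """Additive-process-noise UKF time update plus observation
    prediction. resample=True redraws sigma points from the predicted
    state before propagating through h (the 'resampling' variant);
    resample=False reuses the propagated points."""
    X, w = sigma_points(mean, cov, kappa)
    Xf = np.array([f(x) for x in X])
    m = w @ Xf
    P = (Xf - m).T @ (w[:, None] * (Xf - m)) + Q   # additive noise enters here
    Xo, wo = (sigma_points(m, P, kappa) if resample else (Xf, w))
    Z = np.array([h(x) for x in Xo])
    return m, P, wo @ Z

# e.g. a 2-state random walk observed in its first component:
# m, P, z = ukf_predict(np.zeros(2), np.eye(2), lambda x: x,
#                       lambda x: x[:1], 0.01 * np.eye(2))
```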