WorldWideScience

Sample records for based significance tests

  1. Testing Significance Testing

    Directory of Open Access Journals (Sweden)

    Joachim I. Krueger

    2018-04-01

    Full Text Available The practice of Significance Testing (ST) remains widespread in psychological science despite continual criticism of its flaws and abuses. Using simulation experiments, we address four concerns about ST, and for two of these we compare ST's performance with prominent alternatives. We find the following: First, the 'p' values delivered by ST predict the posterior probability of the tested hypothesis well under many research conditions. Second, low 'p' values support inductive inferences because they are most likely to occur when the tested hypothesis is false. Third, 'p' values track likelihood ratios without raising the uncertainties of relative inference. Fourth, 'p' values predict the replicability of research findings better than confidence intervals do. Given these results, we conclude that 'p' values may be used judiciously as a heuristic tool for inductive inference. Yet 'p' values cannot bear the full burden of inference. We encourage researchers to be flexible in their selection and use of statistical methods.
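
    The claim that low 'p' values occur mostly when the tested hypothesis is false can be illustrated with a small simulation. This is an illustrative sketch, not the authors' code; the 50% prior on a false null and the effect size of 0.5 are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_studies = 30, 20000

# In half of the simulated studies the tested (null) hypothesis is false.
h0_false = rng.random(n_studies) < 0.5
effect = np.where(h0_false, 0.5, 0.0)            # standardized mean shift
data = rng.normal(effect[:, None], 1.0, (n_studies, n))

# One-sample t-test per study.
t = data.mean(axis=1) / (data.std(axis=1, ddof=1) / np.sqrt(n))
p = 2 * stats.t.sf(np.abs(t), df=n - 1)

# Among studies reaching p < 0.05, what fraction actually had a false null?
significant = p < 0.05
frac_h0_false_given_sig = h0_false[significant].mean()
```

    Under these assumptions, the large majority of studies that reach p < 0.05 are ones in which the null really is false, which is why a low p-value carries some inductive weight.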

  2. A Non-Parametric Surrogate-based Test of Significance for T-Wave Alternans Detection

    Science.gov (United States)

    Nemati, Shamim; Abdala, Omar; Bazán, Violeta; Yim-Yeh, Susie; Malhotra, Atul; Clifford, Gari

    2010-01-01

    We present a non-parametric adaptive surrogate test that allows for the differentiation of statistically significant T-Wave Alternans (TWA) from alternating patterns that can be solely explained by the statistics of noise. The proposed test is based on estimating the distribution of noise-induced alternating patterns in a beat sequence from a set of surrogate data derived from repeated reshuffling of the original beat sequence. Thus, in assessing the significance of the observed alternating patterns in the data, no assumptions are made about the underlying noise distribution. In addition, since the distribution of noise-induced alternans magnitudes is calculated separately for each sequence of beats within the analysis window, the method is robust to data non-stationarities in both noise and TWA. The proposed surrogate method for rejecting noise was compared to the standard noise rejection methods used with the Spectral Method (SM) and the Modified Moving Average (MMA) techniques. Using a previously described realistic multi-lead model of TWA, and real physiological noise, we demonstrate that the proposed approach reduces false TWA detections while maintaining a lower missed-detection rate than all the other methods tested. A simple averaging-based TWA estimation algorithm was coupled with the surrogate significance testing and was evaluated on three public databases: the Normal Sinus Rhythm Database (NSRDB), the Chronic Heart Failure Database (CHFDB) and the Sudden Cardiac Death Database (SCDDB). Differences in TWA amplitudes between each database were evaluated at matched heart rate (HR) intervals from 40 to 120 beats per minute (BPM). Using the two-sample Kolmogorov-Smirnov test, we found that significant differences in TWA levels exist between each patient group at all decades of heart rates. The most marked difference was generally found at higher heart rates, and the new technique resulted in a larger margin of separability between patient populations than
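
    The reshuffling idea at the heart of this record can be sketched generically: estimate an alternans magnitude, rebuild its null distribution from shuffled copies of the same beats, and report the rank of the observed value. A toy sketch assuming a simple scalar amplitude statistic (the real method operates on multi-lead ECG beat matrices):

```python
import numpy as np

rng = np.random.default_rng(1)

def alternans_magnitude(beats):
    # Amplitude of an every-other-beat (ABAB) alternation.
    return abs(beats[::2].mean() - beats[1::2].mean())

def surrogate_p_value(beats, n_surrogates=2000):
    observed = alternans_magnitude(beats)
    count = 0
    for _ in range(n_surrogates):
        # Reshuffling destroys beat order while preserving noise statistics.
        count += alternans_magnitude(rng.permutation(beats)) >= observed
    return (count + 1) / (n_surrogates + 1)

# Beats with genuine alternans on top of noise -> small p-value.
alternans = 0.5 * (-1.0) ** np.arange(64)
p_twa = surrogate_p_value(alternans + rng.normal(0, 0.3, 64))

# Pure noise -> the observed pattern is typical of the surrogates.
p_noise = surrogate_p_value(rng.normal(0, 0.3, 64))
```

    No parametric form is assumed for the noise; the null distribution comes entirely from the data at hand, which is what makes the test robust to non-stationarity across analysis windows.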

  3. Do School-Based Tutoring Programs Significantly Improve Student Performance on Standardized Tests?

    Science.gov (United States)

    Rothman, Terri; Henderson, Mary

    2011-01-01

    This study used a pre-post, nonequivalent control group design to examine the impact of an in-district, after-school tutoring program on eighth grade students' standardized test scores in language arts and mathematics. Students who had scored in the near-passing range on either the language arts or mathematics aspect of a standardized test at the…

  4. Cross wavelet analysis: significance testing and pitfalls

    Directory of Open Access Journals (Sweden)

    D. Maraun

    2004-01-01

    Full Text Available In this paper, we present a detailed evaluation of cross wavelet analysis of bivariate time series. We develop a statistical test for zero wavelet coherency based on Monte Carlo simulations. If at least one of the two processes considered is Gaussian white noise, an approximate formula for the critical value can be used. In the second part, typical pitfalls of wavelet cross spectra and wavelet coherency are discussed. The wavelet cross spectrum appears unsuitable for testing the significance of the interrelation between two processes; instead, one should apply wavelet coherency. Furthermore, we investigate problems due to multiple testing. Based on these results, we show that coherency between ENSO and NAO is an artefact for most of the time from 1900 to 1995. However, during a distinct period from around 1920 to 1940, significant coherency between the two phenomena occurs.
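
    The Monte Carlo logic behind such a test — simulate the statistic under the null many times and read off the critical value — can be shown with a much simpler toy statistic (plain correlation of two independent white-noise series rather than wavelet coherency):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_sim = 256, 5000

# Null distribution of |sample correlation| between two independent
# Gaussian white-noise series, estimated by Monte Carlo simulation.
null_stats = np.empty(n_sim)
for i in range(n_sim):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    null_stats[i] = abs(np.corrcoef(x, y)[0, 1])

# 95% critical value: observed correlations above this are "significant".
crit_95 = np.quantile(null_stats, 0.95)
```

    For white noise the critical value agrees with the familiar 1.96/sqrt(n) approximation; for coherency of general processes no such closed form exists, which is why the paper relies on simulation.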

  5. Can a significance test be genuinely Bayesian?

    OpenAIRE

    Pereira, Carlos A. de B.; Stern, Julio Michael; Wechsler, Sergio

    2008-01-01

    The Full Bayesian Significance Test, FBST, is extensively reviewed. Its test statistic, a genuine Bayesian measure of evidence, is discussed in detail. Its behavior in some problems of statistical inference like testing for independence in contingency tables is discussed.

  6. What if there were no significance tests?

    CERN Document Server

    Harlow, Lisa L; Steiger, James H

    2013-01-01

    This book is the result of a spirited debate stimulated by a recent meeting of the Society of Multivariate Experimental Psychology. Although the viewpoints span a range of perspectives, the overriding theme that emerges is that significance testing may still be useful if supplemented with some or all of the following -- Bayesian logic, caution, confidence intervals, effect sizes and power, other goodness-of-approximation measures, replication and meta-analysis, sound reasoning, and theory appraisal and corroboration. The book is organized into five general areas. The first presents an overview of significance testing issues that synthesizes the highlights of the remainder of the book. The next discusses the debate over whether significance testing should be rejected or retained. The third outlines various methods that may supplement current significance testing procedures. The fourth discusses Bayesian approaches and methods and the use of confidence intervals versus significance tests. The last presents the p...

  7. The insignificance of statistical significance testing

    Science.gov (United States)

    Johnson, Douglas H.

    1999-01-01

    Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.

  8. Non-destructive testing: significant facts

    International Nuclear Information System (INIS)

    Espejo, Hector; Ruch, Marta C.

    2006-01-01

    In the last fifty years, different organisations, both public and private, have been assigned the mission of introducing into the country the most relevant aspects of the modern technological discipline 'Non-Destructive Testing' (NDT) through a manifold of activities, such as training and education, research, development, technical assistance and services, personnel qualification/certification, and standardisation. A review is given of the significant facts in this process, in which the Argentine Atomic Energy Commission, CNEA, played a leading part; a balance of the accomplishments is made; and a forecast of the future of the activity is sketched. (author)

  9. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

    This article raises concerns about the advantages of using statistical significance tests in research assessments, as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly controversial, and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice… We argue that applying statistical significance tests and mechanically adhering to their results are highly problematic and detrimental to critical thinking. We claim that the use of such tests does not provide any advantages in relation to deciding whether differences between citation indicators…

  10. Significance levels for studies with correlated test statistics.

    Science.gov (United States)

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
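
    A minimal sketch of the permutation approach described here — evaluating the significance of the largest-magnitude test statistic by permuting sample units — on assumed toy data (the conditioning step the paper proposes is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
n_per, m, n_perm = 10, 50, 1000

group_a = rng.normal(0.0, 1.0, (n_per, m))
group_b = rng.normal(0.0, 1.0, (n_per, m))
group_b[:, 0] += 3.0                    # one genuinely regulated feature

def max_abs_t(a, b):
    # Largest-magnitude two-sample t statistic across all m features.
    diff = a.mean(axis=0) - b.mean(axis=0)
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a)
                 + b.var(axis=0, ddof=1) / len(b))
    return np.abs(diff / se).max()

observed = max_abs_t(group_a, group_b)

# Global null distribution: permute sample labels and recompute the maximum.
pooled = np.vstack([group_a, group_b])
count = 0
for _ in range(n_perm):
    idx = rng.permutation(2 * n_per)
    count += max_abs_t(pooled[idx[:n_per]], pooled[idx[n_per:]]) >= observed
p_global = (count + 1) / (n_perm + 1)
```

    The paper's point is that when the m statistics are correlated, this unconditional permutation p-value can mislead, motivating conditioning on the spread of the observed histogram.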

  11. Testing for significance of phase synchronisation dynamics in the EEG.

    Science.gov (United States)

    Daly, Ian; Sweeney-Reed, Catherine M; Nasuto, Slawomir J

    2013-06-01

    A number of tests exist to check for statistical significance of phase synchronisation within the Electroencephalogram (EEG); however, the majority suffer from a lack of generality and applicability. They may also fail to account for temporal dynamics in the phase synchronisation, regarding synchronisation as a constant state instead of a dynamical process. Therefore, a novel test is developed for identifying the statistical significance of phase synchronisation based upon a combination of work characterising temporal dynamics of multivariate time-series and Markov modelling. We show how this method is better able to assess the significance of phase synchronisation than a range of commonly used significance tests. We also show how the method may be applied to identify and classify significantly different phase synchronisation dynamics in both univariate and multivariate datasets.

  12. Identification of significant features by the Global Mean Rank test.

    Science.gov (United States)

    Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph

    2014-01-01

    With the introduction of omics technologies such as transcriptomics and proteomics, numerous methods for the reliable identification of significantly regulated features (genes, proteins, etc.) have been developed. Experimental practice requires these tests to deal successfully with conditions such as small numbers of replicates, missing values, non-normally distributed expression levels, and non-identical distributions of features. With the MeanRank test we aimed at developing a test that performs robustly under these conditions, while scaling favorably with the number of replicates. The test proposed here is a global one-sample location test, which is based on the mean ranks across replicates, and internally estimates and controls the false discovery rate. Furthermore, missing data are accounted for without the need for imputation. In extensive simulations comparing MeanRank to other frequently used methods on simulated data and a recent two-color microarray spike-in dataset, we found that it performs well with small and large numbers of replicates, feature-dependent variance between replicates, and variable regulation across features. The tests were then used to identify significant changes in the phosphoproteomes of cancer cells induced by the kinase inhibitors erlotinib and 3-MB-PP1 in two independently published mass spectrometry-based studies. MeanRank outperformed the other global rank-based methods applied in this study. Compared to the popular Significance Analysis of Microarrays and Linear Models for Microarray methods, MeanRank performed similarly or better. Furthermore, MeanRank exhibits more consistent behavior regarding the degree of regulation and is robust against the choice of preprocessing methods. MeanRank does not require any imputation of missing values, is easy to understand, and yields results that are easy to interpret. The software implementing the algorithm is freely available for academic and commercial use.
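
    The central ingredient of the MeanRank test — ranking features within each replicate and averaging the ranks — can be sketched as follows. This is a simplified illustration on made-up data (no FDR estimation or missing-value handling, both of which the actual test provides):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_features, n_reps = 200, 5

# Simulated log-ratios: mostly unregulated features plus one strong hit.
logratios = rng.normal(0, 1, (n_features, n_reps))
logratios[0] += 4.0                       # strongly up-regulated feature

# Rank features within each replicate (rank 1 = largest value), then
# average the ranks across replicates -- robust to outliers and scale.
ranks = stats.rankdata(-logratios, axis=0)
mean_rank = ranks.mean(axis=1)
```

    A feature that is consistently near the top of every replicate gets a small mean rank even if its measured magnitudes vary, which is why rank averaging copes well with feature-dependent variance.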

  13. Manipulating the Alpha Level Cannot Cure Significance Testing

    Directory of Open Access Journals (Sweden)

    David Trafimow

    2018-05-01

    Full Text Available We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does; but none of the statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else, is not acceptable.

  14. A shift from significance test to hypothesis test through power analysis in medical research.

    Science.gov (United States)

    Singh, G

    2006-01-01

    Until recently, the medical research literature exhibited a substantial dominance of Fisher's significance-test approach to statistical inference, which concentrates on the probability of a type I error, over the Neyman-Pearson hypothesis-test approach, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant with a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Based on the same theory, the two approaches address the same objective and reach conclusions in their own ways. Advances in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance-test approach, when it incorporates power analysis, contains the essence of the hypothesis-test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis-test procedure.
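
    The power analysis that links the two schools can be sketched with a standard approximation for a two-sided two-sample t-test (normal approximation to the noncentral t; illustrative only):

```python
import numpy as np
from scipy import stats

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    # Approximate power of a two-sided two-sample t-test, using the
    # normal approximation: noncentrality d * sqrt(n/2).
    nc = effect_size * np.sqrt(n_per_group / 2.0)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.sf(z_crit - nc) + stats.norm.cdf(-z_crit - nc)

power = two_sample_power(0.5, 64)
```

    For a standardized effect size of 0.5 and 64 subjects per group this gives a power of roughly 0.80, the conventional Neyman-Pearson target for controlling the type II error.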

  15. A shift from significance test to hypothesis test through power analysis in medical research

    Directory of Open Access Journals (Sweden)

    Singh Girish

    2006-01-01

    Full Text Available Until recently, the medical research literature exhibited a substantial dominance of Fisher's significance-test approach to statistical inference, which concentrates on the probability of a type I error, over the Neyman-Pearson hypothesis-test approach, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant with a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Based on the same theory, the two approaches address the same objective and reach conclusions in their own ways. Advances in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance-test approach, when it incorporates power analysis, contains the essence of the hypothesis-test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis-test procedure.

  16. Your Chi-Square Test Is Statistically Significant: Now What?

    Science.gov (United States)

    Sharpe, Donald

    2015-01-01

    Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
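
    Of the four follow-up approaches, calculating residuals is the most common. A brief sketch with made-up counts: standardized Pearson residuals flag the cells that drive a significant chi-square result.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 contingency table of observed counts.
observed = np.array([[30, 10],
                     [20, 40]])

chi2, p, dof, expected = stats.chi2_contingency(observed, correction=False)

# Standardized (Pearson) residuals: cells with |residual| > ~2 are the
# main contributors to the significant overall chi-square.
residuals = (observed - expected) / np.sqrt(expected)
```

    Here the [0, 0] cell (30 observed vs. 20 expected) has a residual of about +2.24, identifying it as a source of the significant result rather than leaving the omnibus test uninterpreted.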

  17. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    Science.gov (United States)

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  18. Significance tests for functional data with complex dependence structure

    KAUST Repository

    Staicu, Ana-Maria; Lahiri, Soumen N.; Carroll, Raymond J.

    2015-01-01

    We propose an L (2)-norm based global testing procedure for the null hypothesis that multiple group mean functions are equal, for functional data with complex dependence structure. Specifically, we consider the setting of functional data with a

  19. Testing the significance of canonical axes in redundancy analysis

    NARCIS (Netherlands)

    Legendre, P.; Oksanen, J.; Braak, ter C.J.F.

    2011-01-01

    1. Tests of significance of the individual canonical axes in redundancy analysis allow researchers to determine which of the axes represent variation that can be distinguished from random. Variation along the significant axes can be mapped, used to draw biplots or interpreted through subsequent

  20. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper first demonstrates the broad usage of polynomial functions and derives their parameters with the ordinary least squares estimate. A significance test method for the polynomial regression function is then derived, exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram in accordance with the authors' real work. (authors)
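
    The procedure described — fit a polynomial by ordinary least squares, then test the regression's significance — can be sketched as follows. The data are synthetic stand-ins for the decay-heat measurements, and the test shown is the standard overall F-test for a regression:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 40)
# Hypothetical decay-heat-like data: quadratic trend plus noise.
y = 5.0 - 0.8 * t + 0.03 * t**2 + rng.normal(0, 0.1, t.size)

degree = 2
X = np.vander(t, degree + 1)               # columns: t^2, t, 1
beta, resid, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = float(resid[0])                      # residual sum of squares
tss = ((y - y.mean()) ** 2).sum()          # total sum of squares

# F-test: does the polynomial explain significantly more than the mean?
df_model, df_resid = degree, t.size - degree - 1
F = ((tss - rss) / df_model) / (rss / df_resid)
p_value = stats.f.sf(F, df_model, df_resid)
```

    A polynomial in t is linear in its coefficients, which is exactly the similarity to multivariable linear regression that the paper exploits: the same F statistic and degrees of freedom apply unchanged.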

  1. Safety Significance of the Halden IFA-650 LOCA Test Results

    International Nuclear Information System (INIS)

    Fuketa, Toyoshi; Nagase, Fumihisa; Grandjean, Claude; Petit, Marc; Hozer, Zoltan; Kelppe, Seppo; Khvostov, Grigori; Hafidi, Biya; Therache, Benjamin; Heins, Lothar; Valach, Mojmir; Voglewede, John; Wiesenack, Wolfgang

    2010-01-01

    The safety criteria for loss-of-coolant accidents were defined to ensure that the core would remain coolable. Since the time of the first LOCA experiments, which were largely conducted with fresh fuel, changes in fuel design, the introduction of new cladding materials, and in particular the move to high burnup have generated a need to re-examine these criteria and to verify their continued validity. As part of international efforts to this end, the OECD Halden Reactor Project programme implemented a LOCA test series. Based on recommendations of a group of experts from the US NRC, EPRI, EDF, IRSN, FRAMATOME-ANP and GNF, the primary objectives of the experiments were defined as: 1. Measure the extent of fuel (fragment) relocation into the ballooned region and evaluate its possible effect on cladding temperature and oxidation. 2. Investigate the extent (if any) of 'secondary transient hydriding' on the inner side of the cladding above and below the burst region. The fourth test of the series, IFA-650.4, conducted in April 2006, attracted particular attention in the international nuclear community. The fuel used in the experiment had a high burnup, 92 MWd/kgU, and a low pre-test hydrogen content of about 50 ppm. The test aimed at and achieved a peak cladding temperature of 850 deg. C. The rod burst occurred at 790 deg. C. The burst caused a marked temperature increase at the lower end and a decrease at the upper end of the system, indicating that fuel relocation had occurred. Subsequent gamma scanning showed that approximately 19 cm of the fuel stack were missing from the upper part of the rod and that fuel had fallen to the bottom of the capsule. PIE at the IFE-Kjeller hot cells corroborated this evidence of substantial fuel relocation. The fact that fuel dispersal could occur upon ballooning and burst, i.e. at cladding temperatures as low as 800 deg. C and thus far lower than the temperature entailed by the current 1200 deg. C / 17% ECR limit, caused concern. The

  2. Significance testing in ridge regression for genetic data

    Directory of Open Access Journals (Sweden)

    De Iorio Maria

    2011-09-01

    Full Text Available Abstract Background Technological developments have increased the feasibility of large scale genetic association studies. Densely typed genetic markers are obtained using SNP arrays, next-generation sequencing technologies and imputation. However, SNPs typed using these methods can be highly correlated due to linkage disequilibrium among them, and standard multiple regression techniques fail with these data sets due to their high dimensionality and correlation structure. There has been increasing interest in using penalised regression in the analysis of high dimensional data. Ridge regression is one such penalised regression technique which does not perform variable selection, instead estimating a regression coefficient for each predictor variable. It is therefore desirable to obtain an estimate of the significance of each ridge regression coefficient. Results We develop and evaluate a test of significance for ridge regression coefficients. Using simulation studies, we demonstrate that the performance of the test is comparable to that of a permutation test, with the advantage of a much-reduced computational cost. We introduce the p-value trace, a plot of the negative logarithm of the p-values of ridge regression coefficients with increasing shrinkage parameter, which enables the visualisation of the change in p-value of the regression coefficients with increasing penalisation. We apply the proposed method to a lung cancer case-control data set from EPIC, the European Prospective Investigation into Cancer and Nutrition. Conclusions The proposed test is a useful alternative to a permutation test for the estimation of the significance of ridge regression coefficients, at a much-reduced computational cost. The p-value trace is an informative graphical tool for evaluating the results of a test of significance of ridge regression coefficients as the shrinkage parameter increases, and the proposed test makes its production computationally feasible.
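
    A sketch of the permutation benchmark against which the proposed analytic test is compared: fit ridge coefficients, then refit with the response permuted to build a null distribution for one coefficient. The data are invented to mimic correlated predictors; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, lam = 100, 20, 5.0

# Correlated predictors, loosely mimicking SNPs in linkage disequilibrium:
# every column shares a component with column 0.
base = rng.standard_normal((n, p))
X = base + 0.8 * base[:, [0]]
y = 1.5 * X[:, 0] + rng.standard_normal(n)

def ridge_coef(X, y, lam):
    # Closed-form ridge solution: (X'X + lam*I)^{-1} X'y.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

observed = abs(ridge_coef(X, y, lam)[0])

# Permutation null: refit with y shuffled, destroying any real association.
n_perm = 500
null = [abs(ridge_coef(X, rng.permutation(y), lam)[0])
        for _ in range(n_perm)]
p_value = (1 + sum(s >= observed for s in null)) / (n_perm + 1)
```

    Each permutation requires a full refit, which is exactly the computational cost the paper's analytic approximation avoids; repeating the fit across a grid of lam values yields the p-value trace the authors describe.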

  3. Significance of acceleration period in a dynamic strength testing study.

    Science.gov (United States)

    Chen, W L; Su, F C; Chou, Y L

    1994-06-01

    The acceleration period that occurs during isokinetic tests may provide valuable information regarding neuromuscular readiness to produce maximal contraction. The purpose of this study was to collect normative data on acceleration time during isokinetic knee testing, to calculate the acceleration work (Wacc), and to determine the errors (ERexp, ERwork, ERpower) due to ignoring Wacc during explosiveness, total work, and average power measurements. Seven male and 13 female subjects took the test, using the Cybex 325 system and an electronic stroboscope, at 10 testing speeds (30-300 degrees/sec). A three-way ANOVA was used to assess the effects of gender, direction, and speed on acceleration time, Wacc, and errors. The results indicated that acceleration time was significantly affected by speed and direction; Wacc and ERexp by speed, direction, and gender; and ERwork and ERpower by speed and gender. The errors appeared to increase when testing the female subjects, during the knee flexion test, or when speed increased. To increase validity in clinical testing, it is important to consider the acceleration-phase effect, especially in higher-velocity isokinetic testing or for weaker muscle groups.

  4. After statistics reform : Should we still teach significance testing?

    NARCIS (Netherlands)

    A. Hak (Tony)

    2014-01-01

    In the longer term, null hypothesis significance testing (NHST) will disappear because p-values are not informative and not replicable. Should we continue to teach in the future the procedures of then-abolished routines (i.e., NHST)? Three arguments are discussed for not teaching NHST in

  5. Shaping Up the Practice of Null Hypothesis Significance Testing.

    Science.gov (United States)

    Wainer, Howard; Robinson, Daniel H.

    2003-01-01

    Discusses criticisms of null hypothesis significance testing (NHST), suggesting that historical use of NHST was reasonable, and current users should read Sir Ronald Fisher's applied work. Notes that modifications to NHST and interpretations of its outcomes might better suit the needs of modern science. Concludes that NHST is most often useful as…

  6. The significance test controversy revisited the fiducial Bayesian alternative

    CERN Document Server

    Lecoutre, Bruno

    2014-01-01

    The purpose of this book is not only to revisit the “significance test controversy,” but also to provide a conceptually sounder alternative. As such, it presents a Bayesian framework for a new approach to analyzing and interpreting experimental data. It also prepares students and researchers for reporting on experimental results. Normative aspects: The main views of statistical tests are revisited and the philosophies of Fisher, Neyman-Pearson and Jeffreys are discussed in detail. Descriptive aspects: The misuses of Null Hypothesis Significance Tests are reconsidered in light of Jeffreys’ Bayesian conceptions concerning the role of statistical inference in experimental investigations. Prescriptive aspects: The current effect size and confidence interval reporting practices are presented and seriously questioned. Methodological aspects are carefully discussed and fiducial Bayesian methods are proposed as a more suitable alternative for reporting on experimental results. In closing, basic routine procedures...

  7. Test for the statistical significance of differences between ROC curves

    International Nuclear Information System (INIS)

    Metz, C.E.; Kronman, H.B.

    1979-01-01

    A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions
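
    The statistic described can be sketched directly: with binormal parameter estimates (a, b) and their covariance matrices for two independent ROC curves, the two-degree-of-freedom chi-square is the Mahalanobis distance of the parameter difference. The numbers below are invented for illustration; they are not from the paper.

```python
import numpy as np
from scipy import stats

# Binormal ROC parameter estimates (a, b) for two independent readings,
# with their 2x2 covariance matrices (illustrative numbers, not data).
theta1 = np.array([1.20, 0.90])
cov1 = np.array([[0.040, 0.005],
                 [0.005, 0.030]])
theta2 = np.array([0.80, 1.10])
cov2 = np.array([[0.050, 0.004],
                 [0.004, 0.035]])

# Chi-square statistic with 2 degrees of freedom for the joint difference
# (independence means the covariances simply add).
d = theta1 - theta2
chi2 = float(d @ np.linalg.inv(cov1 + cov2) @ d)
p_value = stats.chi2.sf(chi2, df=2)
```

    As the abstract notes, the chi-square approximation is exact only asymptotically; with few trials the realized false-positive rate can drift from the nominal significance level.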

  8. Conducting tests for statistically significant differences using forest inventory data

    Science.gov (United States)

    James A. Westfall; Scott A. Pugh; John W. Coulston

    2013-01-01

    Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...

  9. Significance tests for functional data with complex dependence structure.

    Science.gov (United States)

    Staicu, Ana-Maria; Lahiri, Soumen N; Carroll, Raymond J

    2015-01-01

    We propose an L 2 -norm based global testing procedure for the null hypothesis that multiple group mean functions are equal, for functional data with complex dependence structure. Specifically, we consider the setting of functional data with a multilevel structure of the form groups-clusters or subjects-units, where the unit-level profiles are spatially correlated within the cluster, and the cluster-level data are independent. Orthogonal series expansions are used to approximate the group mean functions and the test statistic is estimated using the basis coefficients. The asymptotic null distribution of the test statistic is developed, under mild regularity conditions. To our knowledge this is the first work that studies hypothesis testing, when data have such complex multilevel functional and spatial structure. Two small-sample alternatives, including a novel block bootstrap for functional data, are proposed, and their performance is examined in simulation studies. The paper concludes with an illustration of a motivating experiment.

  10. Significance tests for functional data with complex dependence structure

    KAUST Repository

    Staicu, Ana-Maria

    2015-01-01

    We propose an L2-norm based global testing procedure for the null hypothesis that multiple group mean functions are equal, for functional data with complex dependence structure. Specifically, we consider the setting of functional data with a multilevel structure of the form groups-clusters or subjects-units, where the unit-level profiles are spatially correlated within the cluster, and the cluster-level data are independent. Orthogonal series expansions are used to approximate the group mean functions and the test statistic is estimated using the basis coefficients. The asymptotic null distribution of the test statistic is developed, under mild regularity conditions. To our knowledge this is the first work that studies hypothesis testing, when data have such complex multilevel functional and spatial structure. Two small-sample alternatives, including a novel block bootstrap for functional data, are proposed, and their performance is examined in simulation studies. The paper concludes with an illustration of a motivating experiment.

  11. The uriscreen test to detect significant asymptomatic bacteriuria during pregnancy.

    Science.gov (United States)

    Teppa, Roberto J; Roberts, James M

    2005-01-01

    Asymptomatic bacteriuria (ASB) occurs in 2-11% of pregnancies and it is a clear predisposition to the development of acute pyelonephritis, which, in turn, poses risk to mother and fetus. Treatment of bacteriuria during pregnancy reduces the incidence of pyelonephritis. Therefore, it is recommended to screen for ASB at the first prenatal visit. The gold standard for detection of bacteriuria during pregnancy is urine culture, but this test is expensive, time-consuming, and labor-intensive. To determine the reliability of an enzymatic urine screening test (Uriscreen; Savyon Diagnostics, Ashdod, Israel) for detecting ASB in pregnancy. Catheterized urine samples were collected from 150 women who had routine prenatal screening for ASB. Patients with urinary symptoms, active vaginal bleeding, or who were previously on antibiotics therapy were excluded from the study. Sensitivity, specificity, and the positive and negative predictive values for the Uriscreen were estimated using urine culture as the criterion standard. Urine cultures were considered positive if they grew >10(5) colony-forming units of a single uropathogen. Twenty-eight women (18.7%) had urine culture results indicating significant bacteriuria, and 17 of these 28 specimens had positive enzyme activity. Of 122 samples with no growth, 109 had negative enzyme activity. Sensitivity, specificity, and positive and negative predictive values for the Uriscreen test were 60.7% (+/-18.1), 89.3% (+/-5.6), 56.6%, and 90.8%, respectively. The Uriscreen test had inadequate sensitivity for rapid screening of bacteriuria in pregnancy.
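The figures quoted above follow directly from the 2x2 screening table implied by the abstract (17 true positives, 11 false negatives, 109 true negatives, 13 false positives); a minimal sketch reproduces them to rounding:

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 screening table."""
    sensitivity = tp / (tp + fn)   # diseased subjects correctly flagged
    specificity = tn / (tn + fp)   # healthy subjects correctly cleared
    ppv = tp / (tp + fp)           # probability a positive test is a true positive
    npv = tn / (tn + fn)           # probability a negative test is a true negative
    return sensitivity, specificity, ppv, npv

# Counts reported for the Uriscreen study: 28 culture-positive women
# (17 enzyme-positive) and 122 culture-negative women (109 enzyme-negative).
sens, spec, ppv, npv = screening_metrics(tp=17, fp=13, tn=109, fn=11)
print(f"{sens:.1%} {spec:.1%} {ppv:.1%} {npv:.1%}")  # 60.7% 89.3% 56.7% 90.8%
```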

  12. Significance of high level test data in piping design

    International Nuclear Information System (INIS)

    McLean, J.L.; Bitner, J.L.

    1991-01-01

    During the 1980's the piping technical community in the U.S. initiated a series of research activities aimed at reducing the conservatism inherent in nuclear piping design. One of these activities was directed at the application of the ASME Code rules to the design of piping subjected to dynamic loads. This paper surveys the test data obtained from three groups in the U.S. and none in the U.K., and correlates the findings as they relate to the failure modes of piping subjected to seismic loads. The failure modes experienced as the result of testing at dynamic loads significantly in excess of anticipated loads specified for any of the ASME Code service levels are discussed. A recommendation is presented for modifying the Code piping rules to reduce the conservatism inherent in seismic design

  13. Mutagenicity in drug development: interpretation and significance of test results.

    Science.gov (United States)

    Clive, D

    1985-03-01

    The use of mutagenicity data has been proposed and widely accepted as a relatively fast and inexpensive means of predicting long-term risk to man (i.e., cancer in somatic cells, heritable mutations in germ cells). This view is based on the universal nature of the genetic material, the somatic mutation model of carcinogenesis, and a number of studies showing correlations between mutagenicity and carcinogenicity. An uncritical acceptance of this approach by some regulatory and industrial concerns is over-conservative, naive, and scientifically unjustifiable on a number of grounds: Human cancers are largely life-style related (e.g., cigarettes, diet, tanning). Mutagens (both natural and man-made) are far more prevalent in the environment than was originally assumed (e.g., the natural bases and nucleosides, protein pyrolysates, fluorescent lights, typewriter ribbon, red wine, diesel fuel exhausts, viruses, our own leukocytes). "False-positive" (relative to carcinogenicity) and "false-negative" mutagenicity results occur, often with rational explanations (e.g., high threshold, inappropriate metabolism, inadequate genetic endpoint), and thereby confound any straightforward interpretation of mutagenicity test results. Test battery composition affects both the proper identification of mutagens and, in many instances, the ability to make preliminary risk assessments. In vitro mutagenicity assays ignore whole animal protective mechanisms, may provide unphysiological metabolism, and may be either too sensitive (e.g., testing at orders-of-magnitude higher doses than can be ingested) or not sensitive enough (e.g., short-term treatments inadequately model chronic exposure in bioassay). Bacterial systems, particularly the Ames assay, cannot in principle detect chromosomal events which are involved in both carcinogenesis and germ line mutations in man. Some compounds induce only chromosomal events and little or no detectable single-gene events (e.g., acyclovir, caffeine

  14. Finding significantly connected voxels based on histograms of connection strengths

    DEFF Research Database (Denmark)

    Kasenburg, Niklas; Pedersen, Morten Vester; Darkner, Sune

    2016-01-01

    We explore a new approach for structural connectivity based segmentations of subcortical brain regions. Connectivity based segmentations are usually based on fibre connections from a seed region to predefined target regions. We present a method for finding significantly connected voxels based...... on the distribution of connection strengths. Paths from seed voxels to all voxels in a target region are obtained from a shortest-path tractography. For each seed voxel we approximate the distribution with a histogram of path scores. We hypothesise that the majority of estimated connections are false-positives...... and that their connection strength is distributed differently from true-positive connections. Therefore, an empirical null-distribution is defined for each target region as the average normalized histogram over all voxels in the seed region. Single histograms are then tested against the corresponding null...

  15. The Relative Importance of Low Significance Level and High Power in Multiple Tests of Significance.

    Science.gov (United States)

    Westermann, Rainer; Hager, Willi

    1983-01-01

    Two psychological experiments--Anderson and Shanteau (1970), Berkowitz and LePage (1967)--are reanalyzed to present the problem of the relative importance of low Type 1 error probability and high power when answering a research question by testing several statistical hypotheses. (Author/PN)

  16. 40 CFR Appendix IV to Part 265 - Tests for Significance

    Science.gov (United States)

    2010-07-01

    ... introductory statistics texts. ... Student's t-test involves calculation of the value of a t-statistic for each comparison of the mean... parameter with its initial background concentration or value. The calculated value of the t-statistic must...
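The comparison the appendix describes, the mean of a monitored parameter tested against its initial background value, can be sketched as a one-sample t-statistic (the measurements below are hypothetical; the regulation prescribes its own procedure and critical values):

```python
import math
import statistics

def t_statistic(samples, background_mean):
    """One-sample t-statistic comparing the mean of monitoring measurements
    with an initial background value (hypothetical data; illustration only)."""
    n = len(samples)
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)  # sample standard deviation (n - 1 divisor)
    return (mean - background_mean) / (sd / math.sqrt(n))

# Hypothetical downgradient measurements vs. a background mean of 4.0:
t = t_statistic([4.6, 5.1, 4.8, 5.3, 4.9], 4.0)
```

The resulting value would then be compared with the tabulated critical value for the chosen significance level and degrees of freedom.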

  17. Stress test, what is the reality and significance of it?

    International Nuclear Information System (INIS)

    Sawada, Tetsuo

    2012-01-01

    The stress test was introduced in July 2011 by 'political judgment' to demonstrate the ability of nuclear power plants to withstand severe earthquake and tsunami. The stress test consisted of two stages; the first stage, using computerized simulation, required determining the 'cliff edge' for earthquake, tsunami, their superposition, loss of all alternating-current power and loss of the final heat sink, as well as the effectiveness of severe accident management following emergency safety measures. Clearing the first stage of the test was a prerequisite for restarting reactors that had been suspended for regular inspections. NISA had received such test results for 14 nuclear reactors as of January 18, 2012. After IAEA's evaluation of the stress test review process, NISA's endorsement of the test results, NSC's confirmation of NISA's screening results and the approval of local governments, the Prime Minister and the relevant ministers would decide, as a 'political judgment', whether reactors could be restarted. A more comprehensive judgment might be reached by using a ranking list and referring to the respective experiences of the 14 reactors struck by the earthquake and tsunami of the Great East Japan earthquake. (T. Tanaka)

  18. Computer-Based Testing: Test Site Security.

    Science.gov (United States)

    Rosen, Gerald A.

    Computer-based testing places great burdens on all involved parties to ensure test security. A task analysis of test site security might identify the areas of protecting the test, protecting the data, and protecting the environment as essential issues in test security. Protecting the test involves transmission of the examinations, identifying the…

  19. New significance test methods for Fourier analysis of geophysical time series

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2011-09-01

    Full Text Available When one applies the discrete Fourier transform to analyze finite-length time series, discontinuities at the data boundaries will distort its Fourier power spectrum. In this paper, based on a rigid statistics framework, we present a new significance test method which can extract the intrinsic feature of a geophysical time series very well. We show the difference in significance level compared with traditional Fourier tests by analyzing the Arctic Oscillation (AO and the Nino3.4 time series. In the AO, we find significant peaks at about 2.8, 4.3, and 5.7 yr periods and in Nino3.4 at about 12 yr period in tests against red noise. These peaks are not significant in traditional tests.
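For contrast, the traditional test the authors improve upon compares periodogram peaks against a theoretical AR(1) ("red noise") background spectrum fitted from the series' lag-1 autocorrelation. A minimal sketch of that conventional background (not the new method proposed in the paper):

```python
import numpy as np

def ar1_red_noise_spectrum(x):
    """Theoretical AR(1) ('red noise') background spectrum evaluated at the
    discrete Fourier frequencies of a series x. Peaks in the periodogram are
    declared significant only where they exceed this background scaled by the
    appropriate chi-square quantile (conventional test; sketch only)."""
    x = np.asarray(x, float) - np.mean(x)
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]  # lag-1 autocorrelation estimate
    freqs = np.fft.rfftfreq(len(x))        # cycles per time step, 0 to 0.5
    # Normalized AR(1) spectrum as a function of frequency.
    return (1 - r1**2) / (1 - 2 * r1 * np.cos(2 * np.pi * freqs) + r1**2)

rng = np.random.default_rng(0)
# Reddened noise: white noise passed through a short positive-weight filter.
series = np.convolve(rng.standard_normal(512), [1.0, 0.7], mode="same")
background = ar1_red_noise_spectrum(series)
```

With positive lag-1 autocorrelation the background is largest at the lowest frequencies, which is why low-frequency peaks need to be larger to count as significant against red noise.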

  20. Base Deficit as an Indicator of Significant Blunt Abdominal Trauma

    African Journals Online (AJOL)

    multiruka1

    important cause of morbidity and mortality among trauma patients. ... the use of BD as an indicator of significant BAT. Methods: ... Key words: Base deficit, Blunt abdominal trauma,. Predictor. ..... Delineate Risk for Torso Injury in Stable Patients.

  1. The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.

    Science.gov (United States)

    Lash, Timothy L

    2017-09-15

    In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Significance of hair-dye base-induced sensory irritation.

    Science.gov (United States)

    Fujita, F; Azuma, T; Tajiri, M; Okamoto, H; Sano, M; Tominaga, M

    2010-06-01

    Oxidation hair-dyes, which are the principal hair-dyes, sometimes induce painful sensory irritation of the scalp caused by the combination of highly reactive substances, such as hydrogen peroxide and alkali agents. Although many cases of severe facial and scalp dermatitis have been reported following the use of hair-dyes, sensory irritation caused by contact of the hair-dye with the skin has not been reported clearly. In this study, we used a self-assessment questionnaire to measure the sensory irritation in various regions of the body caused by two model hair-dye bases that contained different amounts of alkali agents without dyes. Moreover, the occipital region was found as an alternative region of the scalp to test for sensory irritation of the hair-dye bases. We used this region to evaluate the relationship of sensitivity with skin properties, such as trans-epidermal water loss (TEWL), stratum corneum water content, sebum amount, surface temperature, current perception threshold (CPT), catalase activities in tape-stripped skin and sensory irritation score with the model hair-dye bases. The hair-dye sensitive group showed higher TEWL, a lower sebum amount, a lower surface temperature and higher catalase activity than the insensitive group, and was similar to that of damaged skin. These results suggest that sensory irritation caused by hair-dye could occur easily on the damaged dry scalp, as that caused by skin cosmetics reported previously.

  3. Significance tests for the wavelet cross spectrum and wavelet linear coherence

    Directory of Open Access Journals (Sweden)

    Z. Ge

    2008-12-01

    Full Text Available This work attempts to develop significance tests for the wavelet cross spectrum and the wavelet linear coherence as a follow-up study on Ge (2007). Conventional approaches that are used by Torrence and Compo (1998) based on stationary background noise time series were used here in estimating the sampling distributions of the wavelet cross spectrum and the wavelet linear coherence. The sampling distributions are then used for establishing significance levels for these two wavelet-based quantities. In addition to these two wavelet quantities, properties of the phase angle of the wavelet cross spectrum of, or the phase difference between, two Gaussian white noise series are discussed. It is found that the tangent of the principal part of the phase angle approximately has a standard Cauchy distribution and the phase angle is uniformly distributed, which makes it impossible to establish significance levels for the phase angle. The simulated signals clearly show that, when there is no linear relation between the two analysed signals, the phase angle disperses into the entire range of [−π,π] with fairly high probabilities for values close to ±π to occur. Conversely, when linear relations are present, the phase angle of the wavelet cross spectrum settles around an associated value with considerably reduced fluctuations. When two signals are linearly coupled, their wavelet linear coherence will attain values close to one. The significance test of the wavelet linear coherence can therefore be used to complement the inspection of the phase angle of the wavelet cross spectrum. The developed significance tests are also applied to actual data sets, simultaneously recorded wind speed and wave elevation series measured from a NOAA buoy on Lake Michigan. Significance levels of the wavelet cross spectrum and the wavelet linear coherence between the winds and the waves reasonably separated meaningful peaks from those generated by randomness in the data set.

  4. IRT-based test construction

    OpenAIRE

    van der Linden, Willem J.; Theunissen, T.J.J.M.; Boekkooi-Timminga, Ellen; Kelderman, Henk

    1987-01-01

    Four discussions of test construction based on item response theory (IRT) are presented. The first discussion, "Test Design as Model Building in Mathematical Programming" (T.J.J.M. Theunissen), presents test design as a decision process under certainty. A natural way of modeling this process leads to mathematical programming. General models of test construction are discussed, with information about algorithms and heuristics; ideas about the analysis and refinement of test constraints are also...

  5. Model-based security testing

    OpenAIRE

    Schieferdecker, Ina; Großmann, Jürgen; Schneider, Martin

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security...

  6. Significance tests to determine the direction of effects in linear regression models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Hagmann, Michael; von Eye, Alexander

    2015-02-01

    Previous studies have discussed asymmetric interpretations of the Pearson correlation coefficient and have shown that higher moments can be used to decide on the direction of dependence in the bivariate linear regression setting. The current study extends this approach by illustrating that the third moment of regression residuals may also be used to derive conclusions concerning the direction of effects. Assuming non-normally distributed variables, it is shown that the distribution of residuals of the correctly specified regression model (e.g., Y is regressed on X) is more symmetric than the distribution of residuals of the competing model (i.e., X is regressed on Y). Based on this result, four one-sample tests are discussed which can be used to decide which variable is more likely to be the response and which one is more likely to be the explanatory variable. A fifth significance test is proposed based on the differences of skewness estimates, which leads to a more direct test of a hypothesis that is compatible with direction of dependence. A Monte Carlo simulation study was performed to examine the behaviour of the procedures under various degrees of associations, sample sizes, and distributional properties of the underlying population. An empirical example is given which illustrates the application of the tests in practice. © 2014 The British Psychological Society.
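The residual-skewness idea is easy to illustrate with simulated data: when the true explanatory variable is skewed and the error term is symmetric, residuals of the correctly specified model stay nearly symmetric, while residuals of the reverse model inherit skewness from the explanatory variable. A minimal sketch (variable names, sample size, and coefficients are illustrative, not taken from the paper):

```python
import numpy as np

def skewness(v):
    """Standardized third moment (sample skewness) of a vector."""
    v = np.asarray(v, float)
    z = (v - v.mean()) / v.std()
    return float(np.mean(z ** 3))

def residual_skewness(x, y):
    """Skewness of the OLS residuals when y is regressed on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return skewness(y - (intercept + slope * x))

rng = np.random.default_rng(1)
x = rng.exponential(size=5000)           # skewed explanatory variable
y = 0.8 * x + rng.normal(size=5000)      # symmetric (normal) error term

# Residuals of the true model y ~ x remain nearly symmetric; residuals of
# the mis-specified reverse model x ~ y inherit skewness from x.
true_model = abs(residual_skewness(x, y))
reverse_model = abs(residual_skewness(y, x))
```

Comparing the two skewness magnitudes (or testing their difference, as the fifth test in the abstract does) then points to `y` as the more plausible response variable.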

  7. Afrika Statistika ISSN 2316-090X A Bayesian significance test of ...

    African Journals Online (AJOL)

    of the generalized likelihood ratio test to detect a change in binomial ... computational simplicity to the problem of calculating posterior marginals. ... the impact of a single outlier on the performance of the Bayesian significance test of change.

  8. Nanomaterial-Based Electrochemical Immunosensors for Clinically Significant Biomarkers

    Directory of Open Access Journals (Sweden)

    Niina J. Ronkainen

    2014-06-01

    Full Text Available Nanotechnology has played a crucial role in the development of biosensors over the past decade. The development, testing, optimization, and validation of new biosensors has become a highly interdisciplinary effort involving experts in chemistry, biology, physics, engineering, and medicine. The sensitivity, the specificity and the reproducibility of biosensors have improved tremendously as a result of incorporating nanomaterials in their design. In general, nanomaterials-based electrochemical immunosensors amplify the sensitivity by facilitating greater loading of the larger sensing surface with biorecognition molecules as well as improving the electrochemical properties of the transducer. The most common types of nanomaterials and their properties will be described. In addition, the utilization of nanomaterials in immunosensors for biomarker detection will be discussed since these biosensors have enormous potential for a myriad of clinical uses. Electrochemical immunosensors provide a specific and simple analytical alternative as evidenced by their brief analysis times, inexpensive instrumentation, lower assay cost as well as good portability and amenability to miniaturization. The role nanomaterials play in biosensors, their ability to improve detection capabilities in low concentration analytes yielding clinically useful data and their impact on other biosensor performance properties will be discussed. Finally, the most common types of electroanalytical detection methods will be briefly touched upon.

  9. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security testing (MBST) is a relatively new field and especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2-project DIAMONDS.

  10. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Science.gov (United States)

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  11. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  12. Adaptive Tests of Significance Using Permutations of Residuals with R and SAS

    CERN Document Server

    O'Gorman, Thomas W

    2012-01-01

    Provides the tools needed to successfully perform adaptive tests across a broad range of datasets. Adaptive Tests of Significance Using Permutations of Residuals with R and SAS illustrates the power of adaptive tests and showcases their ability to adjust the testing method to suit a particular set of data. The book utilizes state-of-the-art software to demonstrate the practicality and benefits for data analysis in various fields of study. Beginning with an introduction, the book moves on to explore the underlying concepts of adaptive tests, including: smoothing methods and normalizing transformations

  13. THE SMALL BUT SIGNIFICANT AND NONTRANSITORY INCREASE IN PRICES (SSNIP TEST

    Directory of Open Access Journals (Sweden)

    Liviana Niminet

    2008-12-01

    Full Text Available The Small but Significant Nontransitory Increase in Price Test was designed to define the relevant market by the concepts of product, geographical area and time. This test, also called the "hypothetical monopolist test", is the subject of much research, both economic and legal, as it deals with economic concepts as well as with legal aspects.

  14. Significance tests in mutagen screening: another method considering historical control frequencies

    International Nuclear Information System (INIS)

    Traut, H.

    1983-01-01

    Recently a method has been devised for testing the significance of the difference between a mutation frequency observed after chemical treatment or irradiation and the historical ('stable') control frequency. Another test serving the same purpose is proposed. Both methods are applied to several examples (experimental frequency versus historical control frequency). The results (P values) obtained agree well. (author)

  15. P-Value, a true test of statistical significance? a cautionary note ...

    African Journals Online (AJOL)

    While it's not the intention of the founders of significance testing and hypothesis testing to have the two ideas intertwined as if they are complementary, the inconvenient marriage of the two practices into one coherent, convenient, incontrovertible and misinterpreted practice has dotted our standard statistics textbooks and ...

  16. [Clinical significance of the tests used in the diagnosis of pancreatic diseases].

    Science.gov (United States)

    Lenti, G; Emanuelli, G

    1976-11-14

    Different methods available for investigating patients for pancreatic disease are discussed. They first include measurement of pancreatic enzymes in biological fluids. Basal amylase and/or lipase in blood are truly diagnostic in acute pancreatitis but their utility is low in chronic pancreatic diseases. Evocative tests have been performed to increase the sensitivity of blood enzyme measurement. The procedure is based on enzyme determination following administration of pancreozymin and secretin, and offers a valuable aid in diagnosis of chronic pancreatitis and cancer of the pancreas. They are capable of discerning pancreatic lesions but are not really discriminatory because similar changes are observed in both diseases. The measurement of urinary enzyme levels in patients with acute pancreatitis is a sensitive indicator of disease. The urinary amylase excretion rises to abnormal levels and persists at significant values for a longer period of time than the serum amylase in acute pancreatitis. The fractional urinary amylase excretion seems to be more sensitive than daily urinary measurement. The pancreatic exocrine function can be assessed by examining the duodenal contents after intravenous administration of pancreozymin and secretin. Different abnormal secretory patterns can be determined. Total secretory deficiency is observed in patients with obstruction of excretory ducts by tumors of the head of the pancreas and in the end stage of chronic pancreatitis. Low volume with normal bicarbonate and enzyme concentration is another typical pattern seen in neoplastic obstruction of excretory ducts. In chronic pancreatitis the chief defect is the inability of the gland to secrete a juice with a high bicarbonate concentration; but in the advanced stage diminution of enzyme and volume is also evident. Diagnostic procedures for pancreatic diseases include digestion and absorption tests. The microscopic examination and chemical estimation of the fats in stool specimens in

  17. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Full Text Available Abstract Background In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
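A Monte-Carlo Z-score of the kind described is obtained by repeatedly shuffling one sequence, which preserves its length and residue composition, and standardizing the observed alignment score against the shuffled-score distribution. A minimal sketch (the toy scoring function below, longest common contiguous substring, merely stands in for a real Smith-Waterman implementation):

```python
import random

def alignment_zscore(score_fn, seq_a, seq_b, n_shuffles=100, seed=0):
    """Monte-Carlo Z-score for a pairwise alignment score: the observed score
    is standardized against scores obtained after repeatedly shuffling one
    sequence. score_fn is any pairwise scoring function."""
    rng = random.Random(seed)
    observed = score_fn(seq_a, seq_b)
    shuffled_scores = []
    for _ in range(n_shuffles):
        letters = list(seq_b)
        rng.shuffle(letters)              # preserves length and composition
        shuffled_scores.append(score_fn(seq_a, "".join(letters)))
    mean = sum(shuffled_scores) / n_shuffles
    var = sum((s - mean) ** 2 for s in shuffled_scores) / (n_shuffles - 1)
    return (observed - mean) / (var ** 0.5)

def lcs_len(a, b):
    """Toy score: length of the longest common contiguous substring."""
    best = 0
    for i in range(len(a)):
        for j in range(len(b)):
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1
            best = max(best, k)
    return best

# Identical toy sequences give a score far above the shuffled background:
z = alignment_zscore(lcs_len, "HEAGAWGHEE", "HEAGAWGHEE")
```

The compute-intensive part the abstract refers to is exactly this loop: every database comparison requires re-scoring many shuffled pairs.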

  18. Can the Bruckner test be used as a rapid screening test to detect significant refractive errors in children?

    Directory of Open Access Journals (Sweden)

    Kothari Mihir

    2007-01-01

    Full Text Available Purpose: To assess the suitability of the Brückner test as a screening test to detect significant refractive errors in children. Materials and Methods: A pediatric ophthalmologist prospectively observed the size and location of the pupillary crescent on the Brückner test as hyperopic, myopic or astigmatic. This was compared with the cycloplegic refraction. Detailed ophthalmic examination was done for all. Sensitivity, specificity, positive predictive value and negative predictive value of the Brückner test were determined for the defined cutoff levels of ametropia. Results: Ninety-six subjects were examined. Mean age was 8.6 years (range 1 to 16 years). The Brückner test could be completed for all; the time taken to complete this test was 10 seconds per subject. The ophthalmologist identified 131 eyes as ametropic, 61 as emmetropic. The Brückner test had sensitivity 91%, specificity 72.8%, positive predictive value 85.5% and negative predictive value 83.6%. Of 10 false negatives, four had compound hypermetropic astigmatism and three had myopia. Conclusions: The Brückner test can be used to rapidly screen children for significant refractive errors. The potential benefits from such use may be maximized if programs use the test with lower crescent measurement cutoffs, a crescent measurement ruler and a distance fixation target.

  19. Risk Based Optimal Fatigue Testing

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Faber, M.H.; Kroon, I.B.

    1992-01-01

    Optimal fatigue life testing of materials is considered. Based on minimization of the total expected costs of a mechanical component a strategy is suggested to determine the optimal stress range levels for which additional experiments are to be performed together with an optimal value...

  20. Interface-based software testing

    Directory of Open Access Journals (Sweden)

    Aziz Ahmad Rais

    2016-10-01

    Full Text Available Software quality is determined by assessing the characteristics that specify how it should work, which are verified through testing. If it were possible to touch, see, or measure software, it would be easier to analyze and prove its quality. Unfortunately, software is an intangible asset, which makes testing complex. This is especially true when software quality is not a question of particular functions that can be tested through a graphical user interface. The primary objective of software architecture is to design quality into software through modeling and visualization. There are many methods and standards that define how to control and manage quality. However, many IT software development projects still fail due to the difficulties involved in measuring, controlling, and managing software quality. Software quality failure factors are numerous. Examples include beginning to test software too late in the development process, or failing to properly understand or design the software architecture and the software component structure. The goal of this article is to provide an interface-based software testing technique that better measures software quality, automates software quality testing, encourages early testing, and increases the software's overall testability.

  1. Finding of No Significant Impact and Environmental Assessment for Flight Test to the Edge of Space

    Science.gov (United States)

    2008-12-01

    Runway 22 or on Rogers Dry Lakebed at Edwards AFB. … On the basis of the findings of the Environmental Assessment, no significant impact to human …

  2. Methodology to identify risk-significant components for inservice inspection and testing

    International Nuclear Information System (INIS)

    Anderson, M.T.; Hartley, R.S.; Jones, J.L. Jr.; Kido, C.; Phillips, J.H.

    1992-08-01

    Periodic inspection and testing of vital system components should be performed to ensure the safe and reliable operation of Department of Energy (DOE) nuclear processing facilities. Probabilistic techniques may be used to help identify and rank components by their relative risk. A risk-based ranking would allow varied DOE sites to implement inspection and testing programs in an effective and cost-efficient manner. This report describes a methodology that can be used to rank components while addressing multiple risk issues.

  3. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…

  4. Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.

    Science.gov (United States)

    Deegear, James

    This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…

  5. Measured dose to ovaries and testes from Hodgkin's fields and determination of genetically significant dose

    International Nuclear Information System (INIS)

    Niroomand-Rad, A.; Cumberlin, R.

    1993-01-01

    The purpose of this study was to determine the genetically significant dose from therapeutic radiation exposure with Hodgkin's fields by estimating the doses to ovaries and testes. Phantom measurements were performed to verify estimated doses to ovaries and testes from Hodgkin's fields. Thermoluminescent LiF dosimeters (TLD-100) of 1 × 3 × 3 mm³ dimensions were embedded in phantoms and exposed to standard mantle and para-aortic fields using Co-60, 4 MV, 6 MV, and 10 MV photon beams. The results show that measured doses to ovaries and testes are about two to five times higher than the corresponding graphically estimated doses for Co-60 and 4 MV photon beams as depicted in ICRP publication 44. In addition, the measured doses to ovaries and testes are about 30% to 65% lower for 10 MV photon beams than for the corresponding Co-60 photon beams. The genetically significant dose from Hodgkin's treatment (less than 0.01 mSv) adds about 4% to the genetically significant dose contribution from medical procedures and less than 1% to the genetically significant dose from all sources. Therefore, the consequence to society is considered to be very small. The consequences for the individual patient are, likewise, small. 28 refs., 3 figs., 5 tabs

  6. The Need for Nuance in the Null Hypothesis Significance Testing Debate

    Science.gov (United States)

    Häggström, Olle

    2017-01-01

    Null hypothesis significance testing (NHST) provides an important statistical toolbox, but there are a number of ways in which it is often abused and misinterpreted, with bad consequences for the reliability and progress of science. Parts of the contemporary NHST debate, especially in the psychological sciences, are reviewed, and a suggestion is made…

  7. Hypothesis Tests for Bernoulli Experiments: Ordering the Sample Space by Bayes Factors and Using Adaptive Significance Levels for Decisions

    Directory of Open Access Journals (Sweden)

    Carlos A. de B. Pereira

    2017-12-01

    Full Text Available The main objective of this paper is to find the relation between the adaptive significance level presented here and the sample size. We statisticians know of the inconsistency, or paradox, in the current classical tests of significance that are based on p-value statistics compared to the canonical significance levels (10%, 5%, and 1%): “Raise the sample size to reject the null hypothesis” is the recommendation of some ill-advised scientists! This paper will show that it is possible to eliminate this problem of significance tests. We present here the beginning of a larger research project. The intention is to extend its use to more complex applications such as survival analysis, reliability tests, and other areas. The main tools used here are the Bayes factor and the extended Neyman–Pearson Lemma.
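
For the Bernoulli/binomial setting discussed above, the Bayes factor and the induced ordering of the sample space can be sketched as follows. For illustration this assumes a point null p = p0 against a uniform prior on p under the alternative; the paper's exact prior choice may differ:

```python
from math import comb

def bayes_factor_01(x, n, p0=0.5):
    """Bayes factor BF01 for x successes in n Bernoulli trials:
    H0: p == p0  versus  H1: p ~ Uniform(0, 1).
    Under H1 the marginal likelihood of any x is 1/(n+1)."""
    like_h0 = comb(n, x) * p0**x * (1 - p0)**(n - x)
    like_h1 = 1.0 / (n + 1)
    return like_h0 / like_h1

def order_sample_space(n, p0=0.5):
    """Order outcomes by increasing BF01, i.e. by decreasing
    evidence against H0 (extreme counts come first)."""
    return sorted(range(n + 1), key=lambda x: bayes_factor_01(x, n, p0))
```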

  8. Wavelet Co-movement Significance Testing with Respect to Gaussian White Noise Background

    Directory of Open Access Journals (Sweden)

    Poměnková Jitka

    2018-01-01

    Full Text Available The paper deals with significance testing of time series co-movement measured via wavelet analysis, namely via the wavelet cross-spectrum. This technique is popular for its better time resolution compared with other techniques, and it reveals both long-run and short-run co-movement. To obtain better predictive power, it is advisable to support and validate the obtained results with a testing procedure. We investigate the test of the wavelet power cross-spectrum against a Gaussian white noise background with the use of the Bessel function. Our experiment is performed on real data, i.e., seasonally adjusted quarterly gross domestic product of the United Kingdom, Korea and the G7 countries. To validate the test results we perform a Monte Carlo simulation. We describe the advantages and disadvantages of both approaches and formulate recommendations for their use.
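
A significance threshold against a Gaussian white-noise background can be sketched by Monte Carlo simulation. For brevity this sketch uses the Fourier cross-spectrum rather than the wavelet cross-spectrum of the paper; the thresholding logic is the same:

```python
import numpy as np

def mc_white_noise_threshold(n, n_sim=500, q=0.95, seed=1):
    """Monte-Carlo estimate of the q-quantile of the maximum
    cross-spectrum magnitude between two independent Gaussian
    white-noise series of length n.  (Simplification: Fourier
    cross-spectrum in place of the wavelet cross-spectrum.)"""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_sim):
        x = rng.standard_normal(n)
        y = rng.standard_normal(n)
        cross = np.fft.rfft(x) * np.conj(np.fft.rfft(y))
        stats.append(np.abs(cross).max())
    return float(np.quantile(stats, q))
```

An observed cross-spectrum peak above the simulated 95% quantile is then deemed significant at roughly the 5% level.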

  9. Pressure tests to assess the significance of defects in boiler and superheater tubing

    International Nuclear Information System (INIS)

    Guest, J.C.; Hutchings, J.A.

    1975-01-01

    Internal pressure tests on 9 per cent Cr-1 per cent Mo steel tubing containing artificial defects demonstrated that the resultant loss of strength was less than a simple calculation based on the reduced tube thickness would suggest. Bursting tests on tubes containing longitudinal defects of varying length, depth and acuity showed notch strengthening at ambient temperature and at 550 °C. A flow stress concept developed for simple bursting tests was shown to apply to creep conditions at 550 °C. Results of creep and short-term bursting tests show that the length as well as the depth of the defect is an important factor affecting the life or bursting strength of the tubes. Defects less than 10 per cent of the tube thickness were found to have an insignificant effect. (author)

  10. Rehearsal significantly improves immediate and delayed recall on the Rey Auditory Verbal Learning Test.

    Science.gov (United States)

    Hessen, Erik

    2011-10-01

    A repeated observation during memory assessment with the Rey Auditory Verbal Learning Test (RAVLT) is that patients who spontaneously employ a memory rehearsal strategy by repeating the word list more than once achieve better scores than patients who only repeat the word list once. This observation led to concern about the ability of the standard test procedure of RAVLT and similar tests in eliciting the best possible recall scores. The purpose of the present study was to test the hypothesis that a rehearsal recall strategy of repeating the word list more than once would result in improved scores of recall on the RAVLT. We report on differences in outcome after standard administration and after experimental administration on Immediate and Delayed Recall measures from the RAVLT of 50 patients. The experimental administration resulted in significantly improved scores for all the variables employed. Additionally, it was found that patients who failed effort screening showed significantly poorer improvement on Delayed Recall compared with those who passed the effort screening. The general clear improvement both in raw scores and T-scores demonstrates that recall performance can be significantly influenced by the strategy of the patient or by small variations in instructions by the examiner.

  11. Description of test facilities bound to the research on sodium aerosols - some significant results

    Energy Technology Data Exchange (ETDEWEB)

    Dolias, M; Lafon, A; Vidard, M; Schaller, K H [DRNR/STRS - Centre de Cadarache, Saint-Paul-lez-Durance (France)

    1977-01-01

    This communication is dedicated to the description of the CEA (French Atomic Energy Authority) test facilities located at CADARACHE which are used for the study of sodium aerosol behavior. These test loops are necessary for studying the operation of equipment such as filters, sodium vapour traps, condensers and separators. It is also possible to study the effect of characteristic parameters on the formation, coagulation and carrying away of sodium aerosols in the cover gas. Sodium aerosol deposits in a vertical annular space configuration with a cold area in its upper part are also studied. Some significant results emphasize the importance of operating conditions on the formation of aerosols. (author)

  12. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
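
The proposed procedure can be sketched directly: compute a distance between the two normalized histograms (here the Euclidean distance found most suitable above), then bootstrap from the pooled sample to build the null distribution of that distance. This is a simplified single-histogram sketch; the paper accumulates summary histograms over many cloud objects:

```python
import numpy as np

def bootstrap_histogram_test(sample_a, sample_b, bins, n_boot=2000, seed=0):
    """Bootstrap significance test for the Euclidean distance between
    two normalized histograms, under the null hypothesis that both
    samples come from the same distribution."""
    rng = np.random.default_rng(seed)

    def hist(x):
        h, _ = np.histogram(x, bins=bins)
        return h / h.sum()

    observed = np.linalg.norm(hist(sample_a) - hist(sample_b))
    pooled = np.concatenate([sample_a, sample_b])
    na = len(sample_a)
    null = []
    for _ in range(n_boot):
        # Resample the pooled data with replacement and split it into
        # two pseudo-samples of the original sizes.
        r = rng.choice(pooled, size=len(pooled), replace=True)
        null.append(np.linalg.norm(hist(r[:na]) - hist(r[na:])))
    return float(np.mean(np.asarray(null) >= observed))  # p-value
```

A small p-value indicates the two histograms differ more than resampling noise alone can explain.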

  13. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

    Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform also requires that researchers interpret CIs without recourse to NHST.
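
The core point, that significance status can differ while the estimates remain compatible, is easy to verify numerically. A minimal sketch with hypothetical study results (the means and standard errors below are invented for illustration):

```python
import math

def ci(mean, se, z=1.96):
    """Approximate 95% confidence interval for an estimate."""
    return (mean - z * se, mean + z * se)

def difference_ci(m1, se1, m2, se2, z=1.96):
    """95% CI for the difference between two independent estimates."""
    se_diff = math.sqrt(se1**2 + se2**2)
    return ci(m1 - m2, se_diff, z)

# Hypothetical pair of studies: study 1 is significant (its CI
# excludes 0), study 2 is not, yet the two estimates do not conflict.
ci1 = ci(0.50, 0.20)                      # excludes 0 -> "significant"
ci2 = ci(0.35, 0.25)                      # includes 0 -> "non-significant"
diff = difference_ci(0.50, 0.20, 0.35, 0.25)
consistent = diff[0] <= 0 <= diff[1]      # difference CI includes 0
```

The difference CI spans zero, so concluding the two results "conflict" on significance status alone is unjustified.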

  14. Significance of the combined tests application in serum and liquor of patients with suspected neurosyphilis

    Directory of Open Access Journals (Sweden)

    Mirković Mihailo

    2007-01-01

    Full Text Available Background. Tertiary syphilis develops in 8-40% of untreated patients. It is most commonly manifested as neurosyphilis, which can be asymptomatic or take the form of tabes dorsalis or progressive paralysis. Nowadays, in the developed countries, progressive paralysis is a rather rare disease, although its incidence has been rising within the last decades. Case report. We report a 74-year-old male with the clinical picture of dementia showing psychotic symptoms. Cytobiochemical examination of cerebrospinal fluid revealed hyperproteinorrhachia of 0.70 g/l with a normal cell count. Computed tomography of the brain showed marked cortical cerebral and cerebellar reduction changes with multiple ischemic lesions. Within a routine examination of patients with dementia, we performed serologic reactions to syphilis, of which the Venereal Disease Research Laboratory (VDRL) test in serum and cerebrospinal fluid was unreactive, while the Treponema pallidum hemagglutination (TPHA) test in serum and cerebrospinal fluid was positive. Positivity in serum and cerebrospinal fluid was additionally confirmed by the Western blot method and the fluorescent treponemal antibody (FTA) test. Treatment with benzathine phenylpenicillin 2.4 g once weekly resulted in significant improvement of the psychotic symptoms of the disease after only two weeks. Conclusion. This case report shows that in the differential diagnosis of patients with dementia or psychotic disorder it is obligatory to consider syphilis of the nervous system, as well as to apply a combination of various tests which, besides the typical cerebrospinal fluid findings, significantly improve diagnostic accuracy. Such an approach is especially important given that neurosyphilis can remain clinically asymptomatic for a long period, leading to late therapy, whereas adequate and timely treatment can contribute to a significant recovery.

  15. Prognostic significance of silent myocardial ischemia on a thallium stress test

    International Nuclear Information System (INIS)

    Heller, L.I.; Tresgallo, M.; Sciacca, R.R.; Blood, D.K.; Seldin, D.W.; Johnson, L.L.

    1990-01-01

    The clinical significance of silent ischemia is not fully known. The purpose of this study was to determine whether the presence or absence of angina during a thallium stress test positive for ischemia was independently predictive of an adverse outcome. Two hundred thirty-four consecutive patients with ischemia on a thallium stress test were identified. Ischemia was defined as the presence of defect(s) on the immediate postexercise scans not in the distribution of prior infarctions that redistributed on 4-hour scans. During the test 129 patients had angina, defined as characteristic neck, jaw, arm, back or chest discomfort, while the remaining 105 patients had no angina. Follow-up ranged from 2 to 8.2 years (mean 5.2 +/- 2.1) and was successfully obtained in 156 patients. Eighty-two of the 156 patients had angina (group A) and 74 had silent ischemia (group S). Group A patients were significantly older (62 +/- 8 vs 59 +/- 8 years, p less than 0.05). There was no significant difference between the 2 groups in terms of sex, history of prior infarction or presence of left main/3-vessel disease. A larger percentage of patients in group A were receiving beta blockers (60 vs 41%, p less than 0.05) and nitrates (52 vs 36%, 0.05 less than p less than 0.10). There was a large number of cardiac events (myocardial infarction, revascularization and death) in both groups (37 of 82 [45%] in group A; 28 of 74 [38%] in group S) but no statistically significant difference between the groups. Similarly, life-table analysis revealed no difference in mortality between the 2 groups.

  16. The diagnostic sensitivity of dengue rapid test assays is significantly enhanced by using a combined antigen and antibody testing approach.

    Directory of Open Access Journals (Sweden)

    Scott R Fry

    2011-06-01

    Full Text Available BACKGROUND: Serological tests for IgM and IgG are routinely used in clinical laboratories for the rapid diagnosis of dengue and can differentiate between primary and secondary infections. Dengue virus non-structural protein 1 (NS1) has been identified as an early marker for acute dengue, and is typically present between days 1-9 post-onset of illness, but following seroconversion it can be difficult to detect in serum. AIMS: To evaluate the performance of a newly developed Panbio® Dengue Early Rapid test for NS1 and determine if it can improve diagnostic sensitivity when used in combination with a commercial IgM/IgG rapid test. METHODOLOGY: The clinical performance of the Dengue Early Rapid was evaluated in a retrospective study in Vietnam with 198 acute laboratory-confirmed positive and 100 negative samples. The performance of the Dengue Early Rapid in combination with the IgM/IgG Rapid test was also evaluated in Malaysia with 263 laboratory-confirmed positive and 30 negative samples. KEY RESULTS: In Vietnam the sensitivity and specificity of the test were 69.2% (95% CI: 62.8% to 75.6%) and 96% (95% CI: 92.2% to 99.8%), respectively. In Malaysia the performance was similar, with 68.9% sensitivity (95% CI: 61.8% to 76.1%) and 96.7% specificity (95% CI: 82.8% to 99.9%) compared to RT-PCR. Importantly, when the Dengue Early Rapid test was used in combination with the IgM/IgG test the sensitivity increased to 93.0%. When the two tests were compared at each day post-onset of illness there was clear differentiation between the antigen and antibody markers. CONCLUSIONS: This study highlights that using dengue NS1 antigen detection in combination with anti-glycoprotein E IgM and IgG serology can significantly increase the sensitivity of acute dengue diagnosis, extends the possible window of detection to include very early acute samples, and enhances the clinical utility of rapid immunochromatographic testing for dengue.
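
If the two assays missed cases independently, the sensitivity of a combined "positive on either test" rule would follow the usual complement rule. A hedged sketch: the 0.8 serology sensitivity below is a made-up illustration, and real assays are correlated, so this gives an idealized intuition rather than the study's actual calculation:

```python
def combined_sensitivity(sens_a, sens_b):
    """Sensitivity of 'positive on either test', assuming the two
    tests miss cases independently (an idealization; real assays
    are correlated in practice)."""
    return 1 - (1 - sens_a) * (1 - sens_b)

# NS1 antigen sensitivity from the study (0.689) combined with a
# hypothetical 0.8 IgM/IgG serology sensitivity:
combined = combined_sensitivity(0.689, 0.8)
```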

  17. Development and testing of an assessment instrument for the formative peer review of significant event analyses.

    Science.gov (United States)

    McKay, J; Murphy, D J; Bowie, P; Schmuck, M-L; Lough, M; Eva, K W

    2007-04-01

    To establish the content validity and specific aspects of reliability for an assessment instrument designed to provide formative feedback to general practitioners (GPs) on the quality of their written analysis of a significant event. Content validity was quantified by application of a content validity index. Reliability testing involved a nested design, with 5 cells, each containing 4 assessors, rating 20 unique significant event analysis (SEA) reports (10 each from experienced GPs and GPs in training) using the assessment instrument. The variance attributable to each identified variable in the study was established by analysis of variance. Generalisability theory was then used to investigate the instrument's ability to discriminate among SEA reports. Content validity was demonstrated with at least 8 of 10 experts endorsing all 10 items of the assessment instrument. The overall G coefficient for the instrument was moderate to good (G>0.70), indicating that the instrument can provide consistent information on the standard achieved by the SEA report. There was moderate inter-rater reliability (G>0.60) when four raters were used to judge the quality of the SEA. This study provides the first steps towards validating an instrument that can provide educational feedback to GPs on their analysis of significant events. The key area identified to improve instrument reliability is variation among peer assessors in their assessment of SEA reports. Further validity and reliability testing should be carried out to provide GPs, their appraisers and contractual bodies with a validated feedback instrument on this aspect of the general practice quality agenda.

  18. Safety significance of ATR [Advanced Test Reactor] passive safety response attributes

    International Nuclear Information System (INIS)

    Atkinson, S.A.

    1989-01-01

    The Advanced Test Reactor (ATR) at the Idaho National Engineering Laboratory was designed with some passive safety response attributes which contribute to the safety posture of the facility. The three passive safety attributes evaluated in the paper are: (1) in-core and in-vessel natural convection cooling, (2) a passive heat sink capability of the ATR primary coolant system (PCS) for the transfer of decay power from the uninsulated piping to the confinement, and (3) gravity feed of emergency coolant makeup. The safety significance of the ATR passive safety response attributes is that the reactor can passively respond for most transients, given a reactor scram, to provide adequate decay power removal and a significant time for operator action should the normal active heat removal systems and their backup systems both fail. The ATR Interim Level 1 Probabilistic Risk Assessment (PRA) model and results were used to evaluate the significance to ATR fuel damage frequency (or probability) of the above three passive response attributes. The results of the evaluation indicate that the first attribute is a major safety characteristic of the ATR. The second attribute has a noticeable but only minor safety significance. The third attribute has no significant influence on the ATR Level 1 PRA because of the diversity and redundancy of the ATR firewater injection system (emergency coolant system). 8 refs., 4 figs., 1 tab

  19. A procedure for the significance testing of unmodeled errors in GNSS observations

    Science.gov (United States)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. Therefore, the first question is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying time-domain Allan variance analysis and frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors are commonly present in GNSS observations and mainly governed by the residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.

  20. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement for NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis…
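
The advocated power analysis can be illustrated by simulation: generate many datasets under an assumed effect size and count how often the test rejects. A minimal sketch using a large-sample z criterion rather than the exact t distribution:

```python
import numpy as np

def simulated_power(effect_size, n_per_group, n_sim=2000, seed=3):
    """Monte-Carlo power of a two-sample test for a standardized mean
    difference (Cohen's d), normal data, equal group sizes.  Uses the
    1.96 large-sample criterion (two-sided alpha ~ 0.05)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        a = rng.standard_normal(n_per_group) + effect_size
        b = rng.standard_normal(n_per_group)
        se = np.sqrt(a.var(ddof=1) / n_per_group + b.var(ddof=1) / n_per_group)
        if abs(a.mean() - b.mean()) / se > 1.96:
            hits += 1
    return hits / n_sim
```

With a large effect and moderate n the test rejects most of the time; with a small effect and small n, power collapses toward the alpha level, which is exactly the small-sample error the article warns about.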

  1. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    Full Text Available For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is neither probable nor plausible. Consequently, alternatives to NHT such as confidence intervals (CIs) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic undetermination). The demonstration includes a complete example.
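
The three-value logic described above can be sketched as a simple rule on the interval endpoints. The thresholding convention below is one plausible reading, assuming effects are measured on a positive scale:

```python
def range_hypothesis_verdict(ci_low, ci_high, minimal_effect):
    """Three-value logic from a confidence interval and a minimal
    substantial effect size (assumed positive scale; one plausible
    reading of the approach, not the article's exact rule)."""
    if ci_low >= minimal_effect:
        return "probable presence of a substantial effect"
    if ci_high <= minimal_effect:
        return "probable absence of a substantial effect"
    return "probabilistic undetermination"
```

For example, a CI of (0.3, 0.6) against a minimal substantial effect of 0.2 lies entirely above the threshold, so the effect is probably present.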

  2. DENBRAN: A basic program for a significance test for multivariate normality of clusters from branching patterns in dendrograms

    Science.gov (United States)

    Sneath, P. H. A.

    A BASIC program is presented for significance tests to determine whether a dendrogram is derived from clustering of points that belong to a single multivariate normal distribution. The significance tests are based on statistics of the Kolmogorov-Smirnov type, obtained by comparing the observed cumulative graph of branch levels with a graph for the hypothesis of multivariate normality. The program also permits testing whether the dendrogram could be from a cluster of lower dimensionality due to character correlations. The program makes provision for three similarity coefficients: (1) Euclidean distances, (2) squared Euclidean distances, and (3) Simple Matching Coefficients; and for five cluster methods: (1) WPGMA, (2) UPGMA, (3) Single Linkage (or Minimum Spanning Trees), (4) Complete Linkage, and (5) Ward's Increase in Sums of Squares. The program is entitled DENBRAN.
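
The core statistic, a Kolmogorov-Smirnov-type distance between an observed cumulative graph and a reference CDF, can be sketched as follows (in Python rather than the original BASIC):

```python
def ks_statistic(sample, cdf):
    """Kolmogorov-Smirnov distance between a sample's empirical CDF
    and a reference CDF, evaluated at the jump points (analogous to
    comparing observed branch levels against the expected curve)."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # Compare the CDF against the empirical CDF just before and
        # just after the step at x.
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d
```

A large distance relative to the null distribution of the statistic argues against a single multivariate normal cluster.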

  3. Significant-Loophole-Free Test of Bell's Theorem with Entangled Photons.

    Science.gov (United States)

    Giustina, Marissa; Versteegh, Marijn A M; Wengerowsky, Sören; Handsteiner, Johannes; Hochrainer, Armin; Phelan, Kevin; Steinlechner, Fabian; Kofler, Johannes; Larsson, Jan-Åke; Abellán, Carlos; Amaya, Waldimar; Pruneri, Valerio; Mitchell, Morgan W; Beyer, Jörn; Gerrits, Thomas; Lita, Adriana E; Shalm, Lynden K; Nam, Sae Woo; Scheidl, Thomas; Ursin, Rupert; Wittmann, Bernhard; Zeilinger, Anton

    2015-12-18

    Local realism is the worldview in which physical properties of objects exist independently of measurement and where physical influences cannot travel faster than the speed of light. Bell's theorem states that this worldview is incompatible with the predictions of quantum mechanics, as is expressed in Bell's inequalities. Previous experiments convincingly supported the quantum predictions. Yet, every experiment requires assumptions that provide loopholes for a local realist explanation. Here, we report a Bell test that closes the most significant of these loopholes simultaneously. Using a well-optimized source of entangled photons, rapid setting generation, and highly efficient superconducting detectors, we observe a violation of a Bell inequality with high statistical significance. The purely statistical probability of our results to occur under local realism does not exceed 3.74×10^{-31}, corresponding to an 11.5 standard deviation effect.

  4. Prognostic significance of blood coagulation tests in carcinoma of the lung and colon.

    Science.gov (United States)

    Wojtukiewicz, M Z; Zacharski, L R; Moritz, T E; Hur, K; Edwards, R L; Rickles, F R

    1992-08-01

    Blood coagulation test results were collected prospectively in patients with previously untreated, advanced lung or colon cancer who entered into a clinical trial. In patients with colon cancer, reduced survival was associated (in univariate analysis) with higher values obtained at entry to the study for fibrinogen, fibrin(ogen) split products, antiplasmin, and fibrinopeptide A and accelerated euglobulin lysis times. In patients with non-small cell lung cancer, reduced survival was associated (in univariate analysis) with higher fibrinogen and fibrin(ogen) split products, platelet counts and activated partial thromboplastin times. In patients with small cell carcinoma of the lung, only higher activated partial thromboplastin times were associated (in univariate analysis) with reduced survival in patients with disseminated disease. In multivariate analysis, higher activated partial thromboplastin times were a significant independent predictor of survival for patients with non-small cell lung cancer limited to one hemithorax and with disseminated small cell carcinoma of the lung. Fibrin(ogen) split product levels were an independent predictor of survival for patients with disseminated non-small cell lung cancer as were both the fibrinogen and fibrinopeptide A levels for patients with disseminated colon cancer. These results suggest that certain tests of blood coagulation may be indicative of prognosis in lung and colon cancer. The heterogeneity of these results suggests that the mechanism(s), intensity, and pathophysiological significance of coagulation activation in cancer may differ between tumour types.

  5. Association between micronucleus frequency and cervical intraepithelial neoplasia grade in Thinprep cytological test and its significance.

    Science.gov (United States)

    Shi, Yong-Hua; Wang, Bo-Wei; Tuokan, Talaf; Li, Qiao-Zhi; Zhang, Ya-Jing

    2015-01-01

    A micronucleus is an additional small nucleus formed when chromosomes or chromosomal fragments fail to be incorporated into the nucleus during cell division. In this study, we assessed the utility of micronucleus counting as a screening tool for cervical precancerous lesions in Thinprep cytological test smears under oil immersion. High-risk HPV was also detected by hybrid capture-2 in Thinprep cytological test smears. Our results showed that the micronucleus count was significantly higher in high-grade squamous intraepithelial lesion (HSIL) and invasive carcinoma cases compared to low-grade squamous intraepithelial lesion (LSIL) and non-neoplastic cases. Receiver operating characteristic (ROC) curve analysis revealed that micronucleus counting possessed a high degree of sensitivity and specificity for identifying HSIL and invasive carcinoma. A cut-off of 7.5 for the MN count gave a sensitivity of 89.6% and a specificity of 66.7% (P = 0.024 and AUC = 0.892) for detecting HSIL and invasive carcinoma lesions. Multiple linear regression analysis showed that only HSIL and invasive cancer lesions, not age, duration of marital life, or number of pregnancies, were significantly associated with the MN count. The positive rate of high-risk HPV was distinctly higher in LSIL, HSIL and invasive cancer than in non-neoplastic categories. In conclusion, MN evaluation may be viewed as an effective biomarker for cervical cancer screening. The combination of the MN count with HPV DNA detection and TCT may serve as an effective means to screen precancerous cervical lesions in most developing nations.
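    The cut-off logic behind the reported sensitivity and specificity is straightforward to sketch. The micronucleus counts below are hypothetical illustrations, not the study's data:

```python
def sens_spec(cases, controls, cutoff):
    """Sensitivity and specificity of calling a sample positive when its
    micronucleus (MN) count exceeds `cutoff`."""
    tp = sum(1 for x in cases if x > cutoff)
    tn = sum(1 for x in controls if x <= cutoff)
    return tp / len(cases), tn / len(controls)

# Hypothetical MN counts per 1000 cells for two diagnostic groups.
hsil_or_cancer = [8, 9, 11, 7, 12, 10, 9, 6, 13, 8]
lsil_or_normal = [3, 5, 2, 8, 4, 6, 1, 9, 3, 5]
sens, spec = sens_spec(hsil_or_cancer, lsil_or_normal, cutoff=7.5)
print(sens, spec)
```

    Sweeping the cutoff over all observed values and plotting sensitivity against (1 - specificity) yields the ROC curve whose area the abstract reports.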

  6. A Note on Comparing the Power of Test Statistics at Low Significance Levels.

    Science.gov (United States)

    Morris, Nathan; Elston, Robert

    2011-01-01

    It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
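    The alpha-dependence of power comparisons can be illustrated with a seeded Monte Carlo sketch (illustrative statistics and effect size, not the authors' examples): a 1-df chi-square-style statistic carrying all the signal is compared with a 2-df statistic that adds a pure-noise component, with critical values taken from the empirical null distributions.

```python
import random

random.seed(0)
N = 100_000
delta = 2.0  # effect size, carried by the first component only

# Draw null and alternative values of a 1-df and a 2-df statistic.
null1, null2, alt1, alt2 = [], [], [], []
for _ in range(N):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    null1.append(z1 * z1)
    null2.append(z1 * z1 + z2 * z2)
    w1, w2 = random.gauss(delta, 1), random.gauss(0, 1)
    alt1.append(w1 * w1)
    alt2.append(w1 * w1 + w2 * w2)

sorted1, sorted2 = sorted(null1), sorted(null2)
powers = {}
for alpha in (0.05, 0.001):
    k = int(alpha * N)                 # empirical upper-alpha critical value
    crit1, crit2 = sorted1[-k], sorted2[-k]
    p1 = sum(s > crit1 for s in alt1) / N
    p2 = sum(s > crit2 for s in alt2) / N
    powers[alpha] = (p1, p2)
print(powers)
```

    Comparing the power gap between the two tests at the two alpha levels shows why a comparison made at α = 0.05 need not transfer to much smaller thresholds.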

  7. Diagnostic significance of haematological testing in patients presenting at the Emergency Department

    Directory of Open Access Journals (Sweden)

    Giuseppe Lippi

    2012-03-01

    Full Text Available The use of simple and economic tests to rule out diseases of sufficient clinical severity is appealing in the emergency department (ED), since it would be effective for contrasting ED overcrowding and decreasing healthcare costs. The aim of this study was to assess the diagnostic performance of simple and economic haematological testing in a large sample of adult patients presenting at the ED of the Academic Hospital of Parma during the year 2010 with the five most frequent acute pathologies (i.e., acute myocardial infarction, renal colic, pneumonia, trauma and pancreatitis). Both leukocyte count and hemoglobin showed good diagnostic performance (Area Under the Curve [AUC] of 0.85 for leukocyte count and 0.76 for hemoglobin; both p < 0.01). Although the platelet count was significantly increased in all patient groups except pancreatitis, its diagnostic performance did not achieve statistical significance (AUC 0.53; p = 0.07). We also observed an increased RDW in all groups except those with trauma, and the diagnostic performance was acceptable (AUC 0.705; p < 0.01). The mean platelet volume (MPV) was consistently lower in all patient groups and was also characterized by efficient diagnostic performance (AUC 0.76; p < 0.01). This evidence led us to design an arbitrary formula, whereby MPV and hemoglobin were multiplied and then divided by the leukocyte count, obtaining a remarkable AUC (0.91; p < 0.01). We conclude that simple, rapid and cheap hematological tests might provide relevant clinical information for decision making to busy emergency physicians, and their combination into an arbitrary formula might further increase the specific diagnostic potential of each of them.
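    The arbitrary formula described (MPV × hemoglobin ÷ leukocyte count) and the rank-based AUC it was evaluated with can be sketched as follows. The patient values are hypothetical; the direction (acute cases scoring lower, because MPV and hemoglobin are lower and leukocytes higher) follows the abstract's findings:

```python
def composite_index(mpv, hb, wbc):
    """The abstract's arbitrary formula: MPV times hemoglobin, divided by
    the leukocyte count (units and scaling are illustrative only)."""
    return mpv * hb / wbc

def auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of the ROC area under the curve: the chance a
    "positive" score exceeds a "negative" one (ties count half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical (MPV fL, Hb g/dL, WBC 10^9/L) triplets: acute cases tend to
# have lower MPV and Hb and higher WBC, so the index separates the groups.
cases = [composite_index(*t) for t in [(8.0, 11.0, 14.0), (7.5, 10.5, 16.0), (8.2, 12.0, 12.0)]]
controls = [composite_index(*t) for t in [(10.5, 14.5, 7.0), (10.0, 14.0, 8.0), (11.0, 15.0, 6.5)]]
print(round(auc(controls, cases), 2))
```

    The toy groups separate perfectly (AUC = 1.0); real data, like the reported AUC of 0.91, overlap more.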

  8. Prognostic significance of electrophysiological tests for facial nerve outcome in vestibular schwannoma surgery.

    Science.gov (United States)

    van Dinther, J J S; Van Rompaey, V; Somers, T; Zarowski, A; Offeciers, F E

    2011-01-01

    To assess the prognostic significance of pre-operative electrophysiological tests for facial nerve outcome in vestibular schwannoma surgery. Retrospective study design in a tertiary referral neurology unit. We studied a total of 123 patients with unilateral vestibular schwannoma who underwent microsurgical removal of the lesion. Nine patients were excluded because they had clinically abnormal pre-operative facial function. Pre-operative electrophysiological facial nerve function testing (EPhT) was performed. Short-term (1 month) and long-term (1 year) post-operative clinical facial nerve function were assessed. When pre-operative facial nerve function, evaluated by EPhT, was normal, the outcome from clinical follow-up at 1-month post-operatively was excellent in 78% (i.e. HB I-II) of patients, moderate in 11% (i.e. HB III-IV), and bad in 11% (i.e. HB V-VI). After 1 year, 86% had excellent outcomes, 13% had moderate outcomes, and 1% had bad outcomes. Of all patients with normal clinical facial nerve function, 22% had an abnormal EPhT result and 78% had a normal result. No statistically significant differences could be observed in short-term and long-term post-operative facial function between the groups. In this study, electrophysiological tests were not able to predict facial nerve outcome after vestibular schwannoma surgery. Tumour size remains the best pre-operative prognostic indicator of facial nerve function outcome, i.e. a better outcome in smaller lesions.

  9. Safety Testing of Ammonium Nitrate Based Mixtures

    Science.gov (United States)

    Phillips, Jason; Lappo, Karmen; Phelan, James; Peterson, Nathan; Gilbert, Don

    2013-06-01

    Ammonium nitrate (AN)/ammonium nitrate based explosives have a lengthy documented history of use by adversaries in acts of terror. While historical research has been conducted on AN-based explosive mixtures, it has primarily focused on detonation performance while varying the oxygen balance between the oxidizer and fuel components. Similarly, historical safety data on these materials is often lacking in pertinent details such as specific fuel type, particle size parameters, oxidizer form, etc. A variety of AN-based fuel-oxidizer mixtures were tested for small-scale sensitivity in preparation for large-scale testing. Current efforts focus on maintaining a zero oxygen-balance (a stoichiometric ratio for active chemical participants) while varying factors such as charge geometry, oxidizer form, particle size, and inert diluent ratios. Small-scale safety testing was conducted on various mixtures and fuels. It was found that ESD sensitivity is significantly affected by particle size, while this is less so for impact and friction. Thermal testing is in progress to evaluate hazards that may be experienced during large-scale testing.

  10. Agonist anti-GITR antibody significantly enhances the therapeutic efficacy of Listeria monocytogenes-based immunotherapy.

    Science.gov (United States)

    Shrimali, Rajeev; Ahmad, Shamim; Berrong, Zuzana; Okoev, Grigori; Matevosyan, Adelaida; Razavi, Ghazaleh Shoja E; Petit, Robert; Gupta, Seema; Mkrtichyan, Mikayel; Khleif, Samir N

    2017-08-15

    We previously demonstrated that in addition to generating an antigen-specific immune response, Listeria monocytogenes (Lm)-based immunotherapy significantly reduces the ratio of regulatory T cells (Tregs)/CD4+ and myeloid-derived suppressor cells (MDSCs) in the tumor microenvironment. Since Lm-based immunotherapy is able to inhibit the immune suppressive environment, we hypothesized that combining this treatment with an agonist antibody to a co-stimulatory receptor that would further boost the effector arm of immunity would result in a significant improvement in the anti-tumor efficacy of treatment. Here we tested the immune and therapeutic efficacy of Listeria-based immunotherapy in combination with an agonist antibody to glucocorticoid-induced tumor necrosis factor receptor-related protein (GITR) in the TC-1 mouse tumor model. We evaluated the potency of the combination on tumor growth and survival of treated animals and profiled the tumor microenvironment for effector and suppressor cell populations. We demonstrate that the combination of Listeria-based immunotherapy with an agonist antibody to GITR synergizes to improve the immune and therapeutic efficacy of treatment in a mouse tumor model. We show that this combinational treatment leads to significant inhibition of tumor growth, prolongs survival and leads to complete regression of established tumors in 60% of treated animals. We determined that this therapeutic benefit of combinational treatment is due to a significant increase in tumor infiltrating effector CD4+ and CD8+ T cells along with a decrease of inhibitory cells. To our knowledge, this is the first study that exploits Lm-based immunotherapy combined with agonist anti-GITR antibody as a potent treatment strategy that simultaneously targets both the effector and suppressor arms of the immune system, leading to significantly improved anti-tumor efficacy. We believe that our findings depicted in this manuscript provide a promising and translatable strategy that can enhance the overall

  11. Antirandom Testing: A Distance-Based Approach

    Directory of Open Access Journals (Sweden)

    Shen Hui Wu

    2008-01-01

    Full Text Available Random testing requires each test to be selected randomly regardless of the tests previously applied. This paper introduces the concept of antirandom testing, where each test applied is chosen such that its total distance from all previous tests is maximum. This spans the test vector space to the maximum extent possible for a given number of vectors. An algorithm for generating antirandom tests is presented. Compared with traditional pseudorandom testing, antirandom testing is found to be very effective when high fault coverage needs to be achieved with a limited number of test vectors. The superiority of the new approach is even more significant for testing bridging faults.
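    The core selection rule, greedily choosing each next test vector to maximize its total distance from all previously chosen vectors, can be sketched for small binary vector spaces (a brute-force illustration, not the paper's algorithm, which must scale to larger spaces):

```python
from itertools import product

def hamming(a, b):
    """Hamming distance between two equal-length bit tuples."""
    return sum(x != y for x, y in zip(a, b))

def antirandom_tests(n_bits, n_tests):
    """Greedily pick each next test vector to maximize its total Hamming
    distance from all previously chosen vectors (ties: first in order)."""
    space = list(product((0, 1), repeat=n_bits))
    chosen = [space[0]]                     # start from the all-zeros vector
    while len(chosen) < n_tests:
        best = max((v for v in space if v not in chosen),
                   key=lambda v: sum(hamming(v, c) for c in chosen))
        chosen.append(best)
    return chosen

tests = antirandom_tests(3, 4)
print(tests)
```

    Note how the second vector is always the complement of the first, the maximally distant choice, and later vectors keep spreading across the space.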

  12. Interface-based software testing

    OpenAIRE

    Aziz Ahmad Rais

    2016-01-01

    Software quality is determined by assessing the characteristics that specify how it should work, which are verified through testing. If it were possible to touch, see, or measure software, it would be easier to analyze and prove its quality. Unfortunately, software is an intangible asset, which makes testing complex. This is especially true when software quality is not a question of particular functions that can be tested through a graphical user interface. The primary objective of softwar...

  13. Metal allergens of growing significance: epidemiology, immunotoxicology, strategies for testing and prevention.

    Science.gov (United States)

    Forte, Giovanni; Petrucci, Francesco; Bocca, Beatrice

    2008-09-01

    Metal-induced allergic contact dermatitis (ACD) is expressed in a wide range of cutaneous reactions following dermal and systemic exposure to products such as cosmetics and tattoos, detergents, jewellery and piercing, leather tanning, articular prostheses and dental implants. Apart from the well-known significance of nickel in developing ACD, other metals such as aluminium, beryllium, chromium, cobalt, copper, gold, iridium, mercury, palladium, platinum, rhodium and titanium represent emerging causes of skin hypersensitivity. Despite the European Union directives that limit the total nickel content in jewellery alloys, the water-soluble chromium (VI) in cement, and metals banned in cosmetics, the prevalence of metal-induced ACD remains quite high. On this basis, a review of the epidemiology of metal allergens, the types of exposure, skin penetration, the immune response, and protein interaction is warranted. Moreover, in vivo and in vitro tests for the identification and potency of skin-sensitizing metals are reviewed here in a risk assessment framework for the protection of consumers' health. Avenues for ACD prevention and therapy, such as observance of maximum allowable metal levels, optimization of metallurgic characteristics, efficacy of chelating agents and personal protection, are also discussed.

  14. Cloud-based solution to identify statistically significant MS peaks differentiating sample categories.

    Science.gov (United States)

    Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B

    2013-03-23

    Mass spectrometry (MS) has evolved to become the primary high-throughput tool for proteomics-based biomarker discovery. Multiple challenges in protein MS data analysis remain: large-scale and complex data set management; MS peak identification and indexing; and high-dimensional differential peak analysis with false discovery rate (FDR) control across concurrent statistical tests. "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets and identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution, which provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. The web application supports large-scale online MS data upload and analysis through a simple user interface. This bioinformatic tool will facilitate the discovery of potential protein biomarkers using MS.
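    FDR control across many concurrent peak-wise tests is commonly done with the Benjamini-Hochberg step-up procedure; whether this portal uses exactly that procedure is not stated, so the following is a generic sketch with hypothetical p-values:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the (sorted) indices of
    tests declared significant at false discovery rate q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        # keep the largest rank whose p-value clears its step-up threshold
        if pvals[i] <= rank * q / m:
            k = rank
    return sorted(order[:k])

# Hypothetical p-values for five candidate MS peaks.
pvals = [0.001, 0.008, 0.039, 0.041, 0.20]
print(benjamini_hochberg(pvals, q=0.05))
```

    Unlike a Bonferroni cut at q/m, the step-up thresholds grow with rank, so more true signals survive when many peaks are tested at once.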

  15. Semantics-based Automated Web Testing

    Directory of Open Access Journals (Sweden)

    Hai-Feng Guo

    2015-08-01

    Full Text Available We present TAO, a software testing tool performing automated test and oracle generation based on a semantic approach. TAO entangles grammar-based test generation with automated semantics evaluation using a denotational semantics framework. We show how TAO can be integrated with the Selenium automation tool for automated web testing, and how TAO can be further extended to support automated delta debugging, where a failing web test script can be systematically reduced based on grammar-directed strategies. A real-life parking website is adopted throughout the paper to demonstrate the effectiveness of our semantics-based web testing approach.
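    Grammar-based test generation, the first half of TAO's approach, can be illustrated with a minimal sketch: random expansion of a toy context-free grammar (the grammar and notation here are hypothetical, not TAO's):

```python
import random

random.seed(1)

# A toy grammar: each nonterminal maps to a list of alternative
# right-hand sides; plain strings are terminals.
GRAMMAR = {
    "<expr>": [["<num>"], ["<expr>", "+", "<num>"], ["(", "<expr>", ")"]],
    "<num>": [["0"], ["1"], ["2"]],
}

def generate(symbol, depth=0, max_depth=6):
    """Expand a symbol by picking grammar alternatives at random; beyond
    max_depth, always take the first (terminating) alternative."""
    if symbol not in GRAMMAR:
        return symbol
    alts = GRAMMAR[symbol]
    alt = alts[0] if depth >= max_depth else random.choice(alts)
    return "".join(generate(s, depth + 1, max_depth) for s in alt)

tests = [generate("<expr>") for _ in range(5)]
print(tests)
```

    Every generated string is syntactically valid by construction; the semantic-evaluation half of the approach would then compute the expected oracle value for each test input.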

  16. A rule-based software test data generator

    Science.gov (United States)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests are performed, showing that even this primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.

  17. Significance of Charpy and COD tests in the determination of fracture toughness of welds

    International Nuclear Information System (INIS)

    Caminha Junior, H.M.; Bastian, F.L.

    1983-01-01

    A comparison is made between the Charpy and crack opening displacement (COD) tests used to assess the fracture toughness of metallic materials. The main problems inherent in these tests, such as scatter of results, are discussed, as well as their advantages and limitations. The chief experimental difficulties when they are applied to welds are indicated, and the various methods available for calculating the COD from a test graph are described. Comments are made on the use of the Charpy test and the methods of calculating the COD in determining critical defect sizes in welded structures. (Author) [pt

  18. Significance of Intratracheal Instillation Tests for the Screening of Pulmonary Toxicity of Nanomaterials.

    Science.gov (United States)

    Morimoto, Yasuo; Izumi, Hiroto; Yoshiura, Yukiko; Fujisawa, Yuri; Fujita, Katsuhide

    Inhalation tests are the gold standard for estimating the pulmonary toxicity of respirable materials. Intratracheal instillation tests have been used widely, but they yield limited evidence of the harmful effects of respirable materials. We reviewed the effectiveness of intratracheal instillation tests for estimating the hazards of nanomaterials, mainly using research papers featuring intratracheal instillation and inhalation tests centered on a Japanese national project. Compared to inhalation tests, intratracheal instillation tests induced more acute inflammatory responses in the animal lung due to a bolus effect, regardless of the toxicity of the nanomaterials. However, nanomaterials with high toxicity induced persistent inflammation in the chronic phase, while nanomaterials with low toxicity induced only transient inflammation. Therefore, in order to estimate the harmful effects of a nanomaterial, an observation period of 3 or 6 months following intratracheal instillation is necessary. Among the endpoints of pulmonary toxicity, neutrophil counts and percentages, chemokines for neutrophils and macrophages, and oxidative stress markers are considered most important. These markers show persistent and transient responses in the lung for nanomaterials with high and low toxicity, respectively. We speculate that intratracheal instillation tests can be useful for screening to identify nanomaterial hazards through pulmonary inflammation, provided that pulmonary toxicity is evaluated not only in the acute but also in the chronic phase (to avoid the bolus effect of intratracheal instillation) and that inflammation-related factors are used as endpoints.

  19. Classifying and scoring of molecules with the NGN: new datasets, significance tests, and generalization

    Directory of Open Access Journals (Sweden)

    Cameron Christopher JF

    2010-10-01

    Full Text Available This paper demonstrates how a Neural Grammar Network learns to classify and score molecules for a variety of tasks in chemistry and toxicology. In addition to a more detailed analysis on datasets previously studied, we introduce three new datasets (BBB, FXa, and toxicology) to show the generality of the approach. A new experimental methodology is developed and applied to both the new datasets as well as previously studied datasets. This methodology is rigorous and statistically grounded, and ultimately culminates in a Wilcoxon significance test that demonstrates the effectiveness of the system. We further include a complete generalization of the specific technique to arbitrary grammars and datasets using a mathematical abstraction that allows researchers in different domains to apply the method to their own work. Background Our work can be viewed as an alternative to existing methods to solve the quantitative structure-activity relationship (QSAR) problem. To this end, we review a number of approaches from both a methodological and a performance perspective. In addition to these approaches, we also examined a number of chemical properties that can be used by generic classifier systems, such as feed-forward artificial neural networks. In studying these approaches, we identified a set of interesting benchmark problem sets to which many of the above approaches had been applied. These included: ACE, AChE, AR, BBB, BZR, Cox2, DHFR, ER, FXa, GPB, Therm, and Thr. Finally, we developed our own benchmark set by collecting data on toxicology. Results Our results show that our system performs better than, or comparably to, the existing methods over a broad range of problem types. Our method does not require the expert knowledge that is necessary to apply the other methods to novel problems.
Conclusions We conclude that our success is due to the ability of our system to: (1) encode molecules losslessly before presentation to the learning system, and (2)
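    The Wilcoxon significance test used to compare methods above is, for small paired samples, exactly computable by enumerating all sign assignments. A hedged sketch (hypothetical paired scores, not the paper's data; assumes no zero or tied differences):

```python
from itertools import product

def wilcoxon_exact_p(x, y):
    """Exact two-sided Wilcoxon signed-rank p-value for small paired
    samples, by enumerating all 2^n sign assignments (assumes no zero
    differences and no tied absolute differences)."""
    diffs = [a - b for a, b in zip(x, y)]
    # rank differences by absolute magnitude, smallest = rank 1
    ranks = {d: r for r, d in enumerate(sorted(diffs, key=abs), start=1)}
    total = len(diffs) * (len(diffs) + 1) // 2
    w_pos = sum(ranks[d] for d in diffs if d > 0)
    w_obs = min(w_pos, total - w_pos)        # smaller of W+ and W-
    count = 0
    for signs in product((0, 1), repeat=len(diffs)):
        w = sum(r for s, r in zip(signs, ranks.values()) if s)
        if min(w, total - w) <= w_obs:
            count += 1
    return count / 2 ** len(diffs)

# Hypothetical paired accuracy scores of two classifiers on 8 datasets.
a = [0.91, 0.85, 0.78, 0.93, 0.88, 0.81, 0.84, 0.90]
b = [0.86, 0.79, 0.75, 0.86, 0.84, 0.80, 0.82, 0.82]
p = wilcoxon_exact_p(a, b)
print(round(p, 4))
```

    Here classifier `a` beats `b` on every dataset, so only the two extreme sign patterns are as unbalanced as the observed one, giving p = 2/256.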

  20. BRCA1 and BRCA2 genetic testing-pitfalls and recommendations for managing variants of uncertain clinical significance.

    Science.gov (United States)

    Eccles, D M; Mitchell, G; Monteiro, A N A; Schmutzler, R; Couch, F J; Spurdle, A B; Gómez-García, E B

    2015-10-01

    Increasing use of BRCA1/2 testing for tailoring cancer treatment and extension of testing to tumour tissue for somatic mutation is moving BRCA1/2 mutation screening from a primarily prevention arena delivered by specialist genetic services into mainstream oncology practice. A considerable number of gene tests will identify rare variants where clinical significance cannot be inferred from sequence information alone. The proportion of variants of uncertain clinical significance (VUS) is likely to grow with lower thresholds for testing and laboratory providers with less experience of BRCA. Most VUS will not be associated with a high risk of cancer but a misinterpreted VUS has the potential to lead to mismanagement of both the patient and their relatives. Members of the Clinical Working Group of ENIGMA (Evidence-based Network for the Interpretation of Germline Mutant Alleles) global consortium (www.enigmaconsortium.org) observed wide variation in practices in reporting, disclosure and clinical management of patients with a VUS. Examples from current clinical practice are presented and discussed to illustrate potential pitfalls, explore factors contributing to misinterpretation, and propose approaches to improving clarity. Clinicians, patients and their relatives would all benefit from an improved level of genetic literacy. Genetic laboratories working with clinical geneticists need to agree on a clinically clear and uniform format for reporting BRCA test results to non-geneticists. An international consortium of experts, collecting and integrating all available lines of evidence and classifying variants according to an internationally recognized system, will facilitate reclassification of variants for clinical use. © The Author 2015. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  1. Team-Based Testing Improves Individual Learning

    Science.gov (United States)

    Vogler, Jane S.; Robinson, Daniel H.

    2016-01-01

    In two experiments, 90 undergraduates took six tests as part of an educational psychology course. Using a crossover design, students took three tests individually without feedback and then took the same test again, following the process of team-based testing (TBT), in teams in which the members reached consensus for each question and answered…

  2. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  3. Peabody Picture Vocabulary Test (PPVT-III): Psychometric properties and significance for application

    Directory of Open Access Journals (Sweden)

    Nataša Bucik

    2003-12-01

    Full Text Available The purpose of this article is to present the content, conceptual structure and methodological steps of the latest revision of the Peabody Picture Vocabulary Test (PPVT-III), a highly functional and valuable vocabulary test that has been in use since 1959 in different language and cultural settings. Using the PPVT-III as a case study, we present the procedure for developing and standardizing such vocabulary tests, as well as for translating and adapting them from one language and cultural milieu to another. We also note the practical use of the PPVT-III for research purposes. No vocabulary tests have so far been developed for or adapted to the Slovenian language; the PPVT-III is presented in this context, too.

  4. Minute Impurities Contribute Significantly to Olfactory Receptor Ligand Studies: Tales from Testing the Vibration Theory

    OpenAIRE

    Paoli, M.; Münch, D.; Haase, A.; Skoulakis, E.; Turin, L.; Galizia, C. G.

    2017-01-01

    Several studies have attempted to test the vibrational hypothesis of odorant receptor activation in behavioral and physiological studies using deuterated compounds as odorants. The results have been mixed. Here, we attempted to test how deuterated compounds activate odorant receptors using calcium imaging of the fruit fly antennal lobe. We found specific activation of one area of the antennal lobe corresponding to inputs from a specific receptor. However, upon more detailed analysis, we disco...

  5. Calculating p-values and their significances with the Energy Test for large datasets

    Science.gov (United States)

    Barter, W.; Burr, C.; Parkes, C.

    2018-04-01

    The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
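    The flavour of the energy test can be conveyed with a one-dimensional sketch: an energy-distance two-sample statistic (the Székely form; the paper's T-value uses a closely related weighted form) with a permutation estimate of the p-value, here seeded and with hypothetical samples:

```python
import random

random.seed(2)

def energy_stat(x, y):
    """Energy-distance two-sample statistic in one dimension:
    2*E|X-Y| - E|X-X'| - E|Y-Y'| (larger means more dissimilar samples)."""
    def mean_dist(a, b):
        return sum(abs(u - v) for u in a for v in b) / (len(a) * len(b))
    return 2 * mean_dist(x, y) - mean_dist(x, x) - mean_dist(y, y)

def permutation_p(x, y, n_perm=200):
    """Estimate the p-value of the observed statistic by re-splitting the
    pooled sample at random, as in a permutation test."""
    t_obs = energy_stat(x, y)
    pooled = x + y
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if energy_stat(pooled[:len(x)], pooled[len(x):]) >= t_obs:
            count += 1
    return (count + 1) / (n_perm + 1)

# Two hypothetical samples drawn from clearly different locations.
x = [random.gauss(0.0, 1.0) for _ in range(30)]
y = [random.gauss(2.0, 1.0) for _ in range(30)]
p = permutation_p(x, y)
print(p)
```

    The paper's contribution addresses the regime where such direct permutation is too slow: for large samples it models the null T-value distribution by scaling the distribution obtained from small subsamples.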

  6. Strain measurement based battery testing

    Science.gov (United States)

    Xu, Jeff Qiang; Steiber, Joe; Wall, Craig M.; Smith, Robert; Ng, Cheuk

    2017-05-23

    A method and system for strain-based estimation of the state of health of a battery, from an initial state to an aged state, is provided. A strain gauge is applied to the battery. A first strain measurement is performed on the battery, using the strain gauge, at a selected charge capacity of the battery and at the initial state of the battery. A second strain measurement is performed on the battery, using the strain gauge, at the selected charge capacity of the battery and at the aged state of the battery. The capacity degradation of the battery is estimated as the difference between the first and second strain measurements divided by the first strain measurement.
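    The described estimate reduces to a one-line calculation. A sketch with hypothetical microstrain readings (the sign convention, aged strain minus initial strain, is an assumption; the patent text only states "the difference"):

```python
def capacity_degradation(strain_initial, strain_aged):
    """Per the described method: the difference between the strain measured
    at the aged state and at the initial state, divided by the initial
    measurement, both taken at the same selected charge capacity."""
    return (strain_aged - strain_initial) / strain_initial

# Hypothetical microstrain readings at the same selected charge capacity.
deg = capacity_degradation(500.0, 540.0)
print(f"{deg:.0%}")
```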

  7. Null hypothesis significance tests. A mix-up of two different theories

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2015-01-01

criticisms raised against NHST. As practiced, NHST has been characterized as a ‘null ritual’ that is overused and too often misapplied and misinterpreted. NHST is in fact a patchwork of two fundamentally different classical statistical testing models, often blended with some wishful quasi-Bayesian interpretations. This is undoubtedly a major reason why NHST is very often misunderstood. But NHST also has intrinsic logical problems and the epistemic range of the information provided by such tests is much more limited than most researchers recognize. In this article we introduce to the scientometric community

  8. Exploring High School Students' Beginning Reasoning about Significance Tests with Technology

    Science.gov (United States)

    García, Víctor N.; Sánchez, Ernesto

    2017-01-01

    In the present study we analyze how students reason about or make inferences given a particular hypothesis testing problem (without having studied formal methods of statistical inference) when using Fathom. They use Fathom to create an empirical sampling distribution through computer simulation. It is found that most students' reasoning relies on…

  9. Automation for a base station stability testing

    OpenAIRE

    Punnek, Elvis

    2016-01-01

    This Bachelor’s thesis was commissioned by Oy LM Ericsson Ab Oulu. Its aim was to help investigate and create a test automation solution for the stability testing of the LTE base station. The main objective was to create test automation for a predefined test set. This test automation solution had to be created for specific environments and equipment. This work included creating the automation for the test cases and putting them into daily test automation jobs. The key factor...

  10. F-35 Joint Strike Fighter: DOD Needs to Complete Developmental Testing Before Making Significant New Investments

    Science.gov (United States)

    2017-04-01

    aircraft currently in production, 82 are United States aircraft and 60 are for international partners or foreign military sales. Aircraft Manufacturing... review team. As a result, development funds increased, test aircraft were added, the schedule was extended, and the early production rate decreased... the Navy's initial operational capability, and delay the program's full rate production decision, currently planned for April 2019. Assumptions

  11. Avoidance bio-assays may help to test the ecological significance of soil pollution

    International Nuclear Information System (INIS)

    Martinez Aldaya, Maite; Lors, Christine; Salmon, Sandrine; Ponge, Jean-Francois

    2006-01-01

    We measured the short-term (100 min) avoidance of a soil heavily polluted by hydrocarbons by the soil springtail Folsomia candida, at six rates of dilution in a control, unpolluted soil. We compared the results with those of long-term (40-day) population tests. Five strains of varying geographical and ecological origin were compared. When pure, the polluted soil was lethal in the long term and avoided in the short term by all strains. Avoidance tests, but not population tests, were able to discriminate between strains. Avoidance thresholds differed among strains. Two ecological consequences of the results are discussed: (i) toxic compounds may kill soil animals or deprive them of food, resulting in the death of populations; (ii) pollution spots can be locally deprived of fauna because of the escape movements of soil animals. Advantages and limitations of the method are listed, together with proposals for its wider use in soil ecology and ecotoxicology. - Polluted soils are avoided by soil animals, a phenomenon which can be used as a cheap, sensitive tool for the early detection of environmental risk

  12. Minute Impurities Contribute Significantly to Olfactory Receptor Ligand Studies: Tales from Testing the Vibration Theory.

    Science.gov (United States)

    Paoli, M; Münch, D; Haase, A; Skoulakis, E; Turin, L; Galizia, C G

    2017-01-01

    Several studies have attempted to test the vibrational hypothesis of odorant receptor activation in behavioral and physiological studies using deuterated compounds as odorants. The results have been mixed. Here, we attempted to test how deuterated compounds activate odorant receptors using calcium imaging of the fruit fly antennal lobe. We found specific activation of one area of the antennal lobe corresponding to inputs from a specific receptor. However, upon more detailed analysis, we discovered that an impurity of 0.0006% ethyl acetate in a chemical sample of benzaldehyde-d5 was entirely responsible for a sizable odorant-evoked response in Drosophila melanogaster olfactory receptor cells expressing dOr42b. Without gas chromatographic purification within the experimental setup, this impurity would have created a difference in the responses of deuterated and nondeuterated benzaldehyde, falsely suggesting that dOr42b is a vibration-sensitive receptor, which we show here not to be the case. Our results point to a broad problem in the literature arising from the use of non-GC-pure compounds to test receptor selectivity, and we suggest how this limitation can be overcome in future studies.

  13. Analysis of pumping tests: Significance of well diameter, partial penetration, and noise

    Science.gov (United States)

    Heidari, M.; Ghiassi, K.; Mehnert, E.

    1999-01-01

    The nonlinear least squares (NLS) method was applied to pumping and recovery aquifer test data in confined and unconfined aquifers with finite-diameter, partially penetrating pumping wells, and with partially penetrating piezometers or observation wells. It was demonstrated that noiseless and moderately noisy drawdown data from observation points located less than two saturated thicknesses of the aquifer from the pumping well produced an exact or acceptable set of parameters when the diameter of the pumping well was included in the analysis. The accuracy of the estimated parameters, particularly that of specific storage, decreased with increases in the noise level in the observed drawdown data. With consideration of the well radii, the noiseless drawdown data from the pumping well in an unconfined aquifer produced good estimates of horizontal and vertical hydraulic conductivities and specific yield, but the estimated specific storage was unacceptable. When noisy data from the pumping well were used, an acceptable set of parameters was not obtained. Further experiments with noisy drawdown data in an unconfined aquifer revealed that when the well diameter was included in the analysis, hydraulic conductivity, specific yield and vertical hydraulic conductivity may be estimated rather effectively from piezometers located over a range of distances from the pumping well. Estimation of specific storage became less reliable for piezometers located at distances greater than the initial saturated thickness of the aquifer. Application of the NLS to field pumping and recovery data from a confined aquifer showed that the estimated parameters from the two tests were in good agreement only when the well diameter was included in the analysis. Without consideration of well radii, the estimated values of hydraulic conductivity from the pumping and recovery tests were off by a factor of four.
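The parameter-estimation idea can be illustrated with the Cooper-Jacob straight-line simplification of confined-aquifer drawdown, a linearized stand-in for the paper's full NLS model (all values below are illustrative, not from the study):

```python
import numpy as np

# Synthetic pumping test, Cooper-Jacob approximation:
#   s(t) = (Q / (4*pi*T)) * ln(2.25*T*t / (r**2 * S))
Q, r = 0.01, 30.0            # pumping rate (m^3/s) and observation distance (m)
T_true, S_true = 5e-3, 2e-4  # transmissivity and storativity (illustrative)

t = np.logspace(2, 5, 40)    # observation times (s)
s = Q / (4 * np.pi * T_true) * np.log(2.25 * T_true * t / (r**2 * S_true))

# Least-squares fit of the straight line s = a + b*ln(t) ...
b, a = np.polyfit(np.log(t), s, 1)
# ... then back out the aquifer parameters from the slope and intercept.
T_est = Q / (4 * np.pi * b)
S_est = 2.25 * T_est / (r**2 * np.exp(a / b))
```

With noiseless data the fit recovers T and S exactly; the study's point is precisely that noise and neglected well geometry degrade such estimates.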

  14. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    Science.gov (United States)

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  15. The ECG component of thallium-201 exercise testing significantly alters patient management

    International Nuclear Information System (INIS)

    Deague, J.; Salehi, N.; Grigg, L.; Lichtenstein, M.; Better, N.

    1998-01-01

    Full text: Thallium exercise testing (Tlex) offers superior sensitivity and specificity to exercise electrocardiography (ECG), but the value of the ECG data in Tlex remains poorly studied. While a normal Tlex is associated with an excellent prognosis, patients with a positive Tlex have a higher cardiac event rate. We aimed to see if a negative ECG component of the Tlex (ECGTl) was associated with an improved outcome compared with a positive ECGTl, in those patients with a reversible Tlex defect. We retrospectively followed 100 consecutive patients with a reversible defect on Tlex (50 with negative and 50 with positive ECGTl) for 12 months. The ECG was reviewed as positive (1 mm ST depression 0.08 seconds after the J point, or > 2 mm if on digoxin or with prior ECG changes), negative, equivocal or uninterpretable. We excluded patients with pharmacological testing and those with equivocal or uninterpretable ECGs. Over the ensuing 12 months no patient with a negative ECGTl was admitted with unstable angina or myocardial infarction, or suffered cardiac death. It is concluded that in patients with reversible defects on Tlex, a negative ECGTl is associated with a low incidence of cardiac events and a decreased incidence of cardiac intervention

  16. Evaluation and significance of hyperchromatic crowded groups (HCG) in liquid-based paps

    Directory of Open Access Journals (Sweden)

    Chivukula Mamatha

    2007-01-01

    Full Text Available Abstract Objective Hyperchromatic crowded groups (HCG), a term first introduced into the cytology literature by DeMay in 1995, are commonly observed in Pap tests and may rarely be associated with serious but difficult-to-interpret lesions. In this study, we specifically defined HCG as dark, crowded cell groups of more than 15 cells identifiable at 10× screening magnification. Methods We evaluated consecutive liquid-based (Surepath) Pap tests from 601 women (age 17–74 years, mean age 29.4 years) and observed HCG in 477 cases. In all 477 HCG cases, Pap tests were found to be satisfactory and to contain an endocervical sample. HCG were easily detectable at 10× screening magnification (size up to 400 um, mean 239.5 um) and ranged from 1 to 50 (mean 19.5) per Pap slide. Results HCG predominantly represented 3-dimensional groups of endocervical cells with some nuclear overlap (379/477, 79%), reactive endocervical cells with relatively prominent nucleoli and some nuclear crowding (29/477, 6%), clusters of inflammatory cells (25/477, 5.2%), parabasal cells (22/477, 4.6%), and endometrial cells (1/477, 0.2%). Epithelial cell abnormalities (ECA) were present in only 21 of 477 cases (4.6%). 18 of the 21 women with HCG-associated ECA were less than 40 years old; only 3 were ≥ 40 years. HCG-associated final abnormal Pap test interpretations were as follows: ASCUS (6/21, 28%), LSIL (12/21, 57%), ASC-H (2/21, 9.5%), and HSIL/CIN2-3 (3/21, 14%). The association of HCG with ECA was statistically significant (p = 0.0174, chi-square test). In patients with ECA, biopsy results were available in 10 cases, and 4 cases of biopsy-proven CIN2/3 were detected. Among these four cases, the HCG in the Pap tests in retrospect represented the lesional high-grade cells in three cases (one HSIL case and two ASC-H cases). Interestingly, none of the 124 cases without HCG were found to have an epithelial cell abnormality. Conclusion We conclude: a. HCG are observed

  17. Significance of tests and properties of concrete and concrete-making materials

    CERN Document Server

    Pielert, James H

    2006-01-01

    Reflects a decade of technological changes in the concrete industry! The newest edition of this popular ASTM publication reflects the latest technology in concrete and concrete-making materials. Six sections cover: (1) General information on the nature of concrete, sampling, variability, and testing laboratories. A new chapter deals with modeling cement and concrete properties. (2) Properties of freshly mixed concrete. (3) Properties of hardened concrete. (4) Concrete aggregates—this section has been revised and the chapters are presented in the order that most concerns concrete users: grading, density, soundness, degradation resistance, petrographic examination, reactivity, and thermal properties. (5) Materials other than aggregates—the chapter on curing materials now reflects the current technology of materials applied to new concrete surfaces. The chapter on mineral admixtures has been separated into two chapters: supplementary cementitious materials and ground slag. (6) Specialized concretes—contains a ...

  18. A significant-loophole-free test of Bell's theorem with entangled photons

    Science.gov (United States)

    Giustina, Marissa; Versteegh, Marijn A. M.; Wengerowsky, Sören; Handsteiner, Johannes; Hochrainer, Armin; Phelan, Kevin; Steinlechner, Fabian; Kofler, Johannes; Larsson, Jan-Åke; Abellán, Carlos; Amaya, Waldimar; Mitchell, Morgan W.; Beyer, Jörn; Gerrits, Thomas; Lita, Adriana E.; Shalm, Lynden K.; Nam, Sae Woo; Scheidl, Thomas; Ursin, Rupert; Wittmann, Bernhard; Zeilinger, Anton

    2017-10-01

    John Bell's theorem of 1964 states that local elements of physical reality, existing independently of measurement, are inconsistent with the predictions of quantum mechanics (Bell, J. S. (1964), Physics (College Park, Md.) 1, 195). Specifically, correlations between measurement results from distant entangled systems would be smaller than predicted by quantum physics. This is expressed in Bell's inequalities. Employing modifications of Bell's inequalities, many experiments have been performed that convincingly support the quantum predictions. Yet, all such experiments rely on assumptions, which provide loopholes for a local realist explanation of the measurements. Here we report an experiment with polarization-entangled photons that simultaneously closes the most significant of these loopholes. We used a highly efficient source of entangled photons, distributed them over a distance of 58.5 meters, and implemented rapid random setting generation and high-efficiency detection to observe a violation of a Bell inequality with high statistical significance. The probability that our results could occur under local realism is less than 3.74×10-31, corresponding to an 11.5 standard deviation effect.
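The reported significance can be sanity-checked by converting standard deviations into a one-sided normal tail probability (the paper's exact figure corresponds to a deviate slightly larger than 11.5, so the orders of magnitude agree):

```python
import math

def one_sided_p_from_sigma(z: float) -> float:
    """Tail probability P(Z > z) for a standard normal variable -- the
    conventional translation of an 'n-sigma effect' into a p-value."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# An 11.5 sigma effect corresponds to a tail probability on the order of 1e-30.
p = one_sided_p_from_sigma(11.5)
```

The same function reproduces familiar thresholds, e.g. z ≈ 1.96 gives a one-sided tail of about 0.025.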

  19. Significance of repeated exercise testing with thallium-201 scanning in asymptomatic diabetic males

    International Nuclear Information System (INIS)

    Rubler, S.; Fisher, V.J.

    1985-01-01

    This study was conducted with asymptomatic middle-aged male subjects with diabetes mellitus to detect latent cardiac disease using noninvasive techniques. One group of 38 diabetic males (mean age 50.5 +/- 10.2 years) and a group of 15 normal males (mean age 46.9 +/- 10.0 years) participated in the initial trial; 13 diabetic patients and 7 control subjects were restudied 1-2 years later. Maximal treadmill exercise with a Bruce protocol and myocardial scintigraphy with thallium-201 (201Tl) were used. Diabetic subjects on initial examination and retesting achieved a lower maximal heart rate and duration of exercise than control subjects. Abnormal electrocardiographic changes, thallium defects, or both were observed in 23/38 diabetic males (60.5%) on the first study, and only one 65-year-old control subject had such findings. On retesting, the control subjects had no abnormalities, while 76.9% of diabetic subjects had either 201Tl defects or ECG changes. We conclude that although none of the diabetic males had clinical evidence or symptoms of heart disease, this high-risk group demonstrated abnormalities on exercise testing that merit careful subsequent evaluation and follow-up; such testing could be an effective method of detecting early cardiac disease

  20. More powerful significant testing for time course gene expression data using functional principal component analysis approaches.

    Science.gov (United States)

    Wu, Shuang; Wu, Hulin

    2013-01-16

    One of the fundamental problems in time course gene expression data analysis is to identify genes associated with a biological process or a particular stimulus of interest, like a treatment or virus infection. Most of the existing methods for this problem are designed for data with longitudinal replicates. But in reality, many time course gene experiments have no replicates or only have a small number of independent replicates. We focus on the case without replicates and propose a new method for identifying differentially expressed genes by incorporating the functional principal component analysis (FPCA) into a hypothesis testing framework. The data-driven eigenfunctions allow a flexible and parsimonious representation of time course gene expression trajectories, leaving more degrees of freedom for the inference compared to that using a prespecified basis. Moreover, the information of all genes is borrowed for individual gene inferences. The proposed approach turns out to be more powerful in identifying time course differentially expressed genes compared to the existing methods. The improved performance is demonstrated through simulation studies and a real data application to the Saccharomyces cerevisiae cell cycle data.
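The core idea, representing expression trajectories in a small data-driven eigenfunction basis, can be sketched with a plain SVD on synthetic data (the paper's actual smoothing and test statistic are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_times = 200, 10
t = np.linspace(0, 1, n_times)

# Synthetic expression trajectories: most genes are flat noise; a small
# subset follows a smooth time course (stand-ins for differentially
# expressed genes).
X = rng.normal(0, 0.3, (n_genes, n_times))
X[:20] += np.sin(2 * np.pi * t)

# Data-driven eigenfunctions = principal components of centered trajectories
Xc = X - X.mean(axis=0)
U, svals, Vt = np.linalg.svd(Xc, full_matrices=False)
eigenfunctions = Vt[:2]          # first two FPCA basis functions
scores = Xc @ eigenfunctions.T   # per-gene projections used for inference
```

Because the basis is estimated from all genes jointly, each gene's trajectory is summarized by a couple of scores, which is the "borrowing information across genes" idea in the abstract.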

  1. Depressive status explains a significant amount of the variance in COPD assessment test (CAT) scores.

    Science.gov (United States)

    Miravitlles, Marc; Molina, Jesús; Quintano, José Antonio; Campuzano, Anna; Pérez, Joselín; Roncero, Carlos

    2018-01-01

    COPD assessment test (CAT) is a short, easy-to-complete health status tool that has been incorporated into the multidimensional assessment of COPD in order to guide therapy; therefore, it is important to understand the factors determining CAT scores. This is a post hoc analysis of a cross-sectional, observational study conducted in respiratory medicine departments and primary care centers in Spain with the aim of identifying the factors determining CAT scores, focusing particularly on the cognitive status measured by the Mini-Mental State Examination (MMSE) and levels of depression measured by the short Beck Depression Inventory (BDI). A total of 684 COPD patients were analyzed; 84.1% were men, the mean age of patients was 68.7 years, and the mean forced expiratory volume in 1 second (%) was 55.1%. Mean CAT score was 21.8. CAT scores correlated with the MMSE score (Pearson's coefficient r = -0.371) and the BDI (r = 0.620), both p < 0.001. The full model of factors associated with CAT scores explained 45% of the variability. However, a model including only MMSE and BDI scores explained up to 40% and BDI alone explained 38% of the CAT variance. CAT scores are associated with clinical variables of severity of COPD. However, cognitive status and, in particular, the level of depression explain a larger percentage of the variance in the CAT scores than the usual COPD clinical severity variables.

  2. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    Science.gov (United States)

    Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML Statemachine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.
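The edge-coverage generation step can be sketched for a toy state machine (the states, events, and model below are hypothetical, not the actual device model, and Uppaal's timed-automata machinery is not represented):

```python
from collections import deque

# Toy GUI state machine: (state, event) -> next_state transitions.
transitions = {
    ("idle", "power_on"): "menu",
    ("menu", "select_dose"): "dosing",
    ("dosing", "confirm"): "menu",
    ("menu", "power_off"): "idle",
}

def edge_coverage_tests(transitions, start="idle"):
    """Generate one event sequence per transition: a shortest path from
    `start` to the transition's source state, followed by the transition."""
    def shortest_path(target):
        # BFS over states, tracking the event sequence used to reach each one
        seen, queue = {start}, deque([(start, [])])
        while queue:
            state, events = queue.popleft()
            if state == target:
                return events
            for (s, e), nxt in transitions.items():
                if s == state and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, events + [e]))
        return None  # source state unreachable from start

    tests = []
    for (src, event), _ in transitions.items():
        prefix = shortest_path(src)
        if prefix is not None:
            tests.append(prefix + [event])
    return tests
```

Each generated event list corresponds to one test script; a real tool would additionally merge overlapping sequences to shrink the suite, which is the script-count reduction the abstract mentions.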

  3. Oral Challenge without Skin Testing Safely Excludes Clinically Significant Delayed-Onset Penicillin Hypersensitivity.

    Science.gov (United States)

    Confino-Cohen, Ronit; Rosman, Yossi; Meir-Shafrir, Keren; Stauber, Tali; Lachover-Roth, Idit; Hershko, Alon; Goldberg, Arnon

    Penicillins are the drug family most commonly associated with hypersensitivity reactions. Current guidelines recommend negative skin tests (ST) before re-administering penicillins to patients with previous nonimmediate reactions (NIR). The objective of this study was to examine whether ST are necessary before re-administering penicillin to patients with NIR. Patients with NIR to penicillins starting longer than 1 hour after the last dose administration, or starting any time after the first treatment day, or patients with vague recollection of their reaction, underwent penicillin ST. Disregarding ST results, patients were challenged with the relevant penicillins. One-tenth of the therapeutic dose followed by the full dose was administered at a 1-hour interval, and patients continued taking the full dose for 5 days. A total of 710 patients with alleged beta-lactam (BL) allergy were evaluated. Patients with a history of immediate reaction (52, 7.3%) or cephalosporin allergy (16, 2.2%) were excluded. Of the remaining 642 patients, 62.3% had negative ST, 5.3% positive ST, and 32.4% equivocal ST. A total of 617 (96.1%) patients were challenged. An immediate reaction was observed in 9 patients (1.5%): 1 with positive ST, 7 with negative ST, and 1 with equivocal ST (P = .7). A late reaction to the first-day challenge occurred in 24 patients (4%). An at-home challenge was continued by 491 patients. Complete 5-day and partial challenges were well tolerated by 417 (85%) and 44 patients (8.9%), respectively, disregarding ST results. Thirty patients (6.1%) developed mild reactions to the home challenge regardless of their ST results. A 5-day oral challenge without preceding ST is safe and sufficient to exclude penicillin allergy after NIR developing during penicillin treatment. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  4. Simulation-based Testing of Control Software

    Energy Technology Data Exchange (ETDEWEB)

    Ozmen, Ozgur [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Olama, Mohammed M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-02-10

    It is impossible to adequately test complex software by examining its operation in a physical prototype of the monitored system. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model-based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model-based testing environment; specifically, we show that a complete software stack, including operating system and application software, can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model-based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed with the MODELICA programming language using the Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.

  5. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  6. 77 FR 21065 - Certain High Production Volume Chemicals; Test Rule and Significant New Use Rule; Fourth Group of...

    Science.gov (United States)

    2012-04-09

    ... 2070-AJ66 Certain High Production Volume Chemicals; Test Rule and Significant New Use Rule; Fourth... an opportunity to comment on a proposed test rule for 23 high production volume (HPV) chemical... necessary, to prohibit or limit that activity before it occurs. The opportunity to present oral comment was...

  7. Genetic testing in benign familial epilepsies of the first year of life: clinical and diagnostic significance.

    Science.gov (United States)

    Zara, Federico; Specchio, Nicola; Striano, Pasquale; Robbiano, Angela; Gennaro, Elena; Paravidino, Roberta; Vanni, Nicola; Beccaria, Francesca; Capovilla, Giuseppe; Bianchi, Amedeo; Caffi, Lorella; Cardilli, Viviana; Darra, Francesca; Bernardina, Bernardo Dalla; Fusco, Lucia; Gaggero, Roberto; Giordano, Lucio; Guerrini, Renzo; Incorpora, Gemma; Mastrangelo, Massimo; Spaccini, Luigina; Laverda, Anna Maria; Vecchi, Marilena; Vanadia, Francesca; Veggiotti, Pierangelo; Viri, Maurizio; Occhi, Guya; Budetta, Mauro; Taglialatela, Maurizio; Coviello, Domenico A; Vigevano, Federico; Minetti, Carlo

    2013-03-01

    role of K-channel genes beyond the typical neonatal epilepsies. The identification of a novel SCN2A mutation in a family with infantile seizures with onset between 6 and 8 months provides further confirmation that this gene is not specifically associated with BFNIS and is also involved in families with a delayed age of onset. Our data indicate that PRRT2 mutations are clustered in families with BFIS. Paroxysmal kinesigenic dyskinesia emerges as a distinctive feature of PRRT2 families, although uncommon in our series. We showed that the age of onset of seizures is significantly correlated with underlying genetics, as about 90% of the typical BFNS families are linked to KCNQ2 compared to only 3% of the BFIS families, for which PRRT2 represents the major gene. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.

  8. Using the Coefficient of Determination R² to Test the Significance of Multiple Linear Regression

    Science.gov (United States)

    Quinino, Roberto C.; Reis, Edna A.; Bessegato, Lupercio F.

    2013-01-01

    This article proposes the use of the coefficient of determination as a statistic for hypothesis testing in multiple linear regression based on distributions acquired by beta sampling. (Contains 3 figures.)
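The link between R² and the usual overall F test can be sketched numerically; the article's beta-sampling distributions are not reproduced here, and the data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 50, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# The overall F statistic is a monotone function of R^2, which is why
# R^2 itself can serve as the test statistic once its null distribution
# is known.
F = (r2 / k) / ((1 - r2) / (n - k - 1))
```

Because F increases monotonically with R², rejecting for large F and rejecting for large R² define the same test; the article's contribution is supplying the null distribution of R² directly.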

  9. Who is more skilful? Doping and its implication on the validity, morality and significance of the sporting test

    DEFF Research Database (Denmark)

    Christiansen, Ask Vest; Møller, Rasmus Bysted

    2016-01-01

    In this article, we explore if and in what ways doping can be regarded as a challenge to the validity, morality and significance of the sporting test. We start out by examining Kalevi Heinilä's analysis of the logic of elite sport, which shows how the 'spiral of competition' leads to the use of 'dubious means'. As a supplement to Heinilä, we revisit American sports historian John Hoberman's writings on sport and technology. Then we discuss what function equality and fairness have in sport and what separates legitimate from illegitimate ways of enhancing performance. We proceed by discussing the line of argumentation set forth by philosopher Torbjörn Tännsjö on how our admiration of sporting superiority based on natural talent or 'birth luck' is immoral. We analyse his argument in favour of eliminating the significance of meritless luck in sport by lifting the ban on doping and argue that its...

  10. Feature selection based on SVM significance maps for classification of dementia

    NARCIS (Netherlands)

    E.E. Bron (Esther); M. Smits (Marion); J.C. van Swieten (John); W.J. Niessen (Wiro); S. Klein (Stefan)

    2014-01-01

    Support vector machine significance maps (SVM p-maps) previously showed clusters of significantly different voxels in dementia-related brain regions. We propose a novel feature selection method for classification of dementia based on these p-maps. In our approach, the SVM p-maps are…

  11. Benford's law first significant digit and distribution distances for testing the reliability of financial reports in developing countries

    Science.gov (United States)

    Shi, Jing; Ausloos, Marcel; Zhu, Tingting

    2018-02-01

    We discuss a common suspicion about reported financial data, in 10 industrial sectors of the 6 so-called "main developing countries" over the time interval [2000-2014]. These data are examined through Benford's law first significant digit and through distribution distance tests. It is shown that several visually anomalous data have to be removed a priori. Thereafter, the distributions much better follow the first significant digit law, indicating the usefulness of a Benford's law test from the research starting line. The same holds true for distance tests. A few outliers are pointed out.
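A first-significant-digit check of the kind used in the paper can be sketched as follows (the chi-square statistic is one of several possible distribution distances; the data below are synthetic, not the paper's financial reports):

```python
import math

def first_digit(x: float) -> int:
    """First significant digit of a nonzero number, via scientific notation."""
    return int(f"{abs(x):.6e}"[0])

def benford_chi2(values):
    """Chi-square statistic comparing observed first-digit counts with
    Benford's law P(d) = log10(1 + 1/d), d = 1..9."""
    vals = [v for v in values if v != 0]
    counts = [0] * 10
    for v in vals:
        counts[first_digit(v)] += 1
    n = len(vals)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (counts[d] - expected) ** 2 / expected
    return chi2

# A sample spread uniformly on a log scale follows Benford's law closely.
sample = [10 ** ((i * 0.618033988749895) % 1) for i in range(1, 1000)]
```

Data uniform on a linear scale (e.g. the integers 1..999) fail the check badly, which is the kind of discrepancy the paper uses to flag suspicious reports.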

  12. Kernel-based tests for joint independence

    DEFF Research Database (Denmark)

    Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard

    2018-01-01

    We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two-variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test…
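The permutation variant can be sketched in the two-variable HSIC special case (the full dHSIC handles d variables jointly; the Gaussian kernel, bandwidth, and permutation count below are illustrative choices):

```python
import numpy as np

def gaussian_gram(x, bandwidth=1.0):
    """Gram matrix of the Gaussian kernel for a 1-D sample."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * bandwidth ** 2))

def hsic(x, y, bandwidth=1.0):
    """Biased empirical HSIC estimate for two 1-D samples."""
    n = len(x)
    K, L = gaussian_gram(x, bandwidth), gaussian_gram(y, bandwidth)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(H @ K @ H @ L) / n ** 2

def hsic_permutation_test(x, y, n_perm=200, seed=0):
    """p-value from permuting y, which breaks any dependence with x."""
    rng = np.random.default_rng(seed)
    observed = hsic(x, y)
    null = [hsic(x, rng.permutation(y)) for _ in range(n_perm)]
    return (1 + sum(s >= observed for s in null)) / (1 + n_perm)
```

Strongly dependent samples yield a p-value near the 1/(n_perm+1) floor, while independent samples yield roughly uniform p-values.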

  13. Validity evidence based on test content.

    Science.gov (United States)

    Sireci, Stephen; Faulkner-Bond, Molly

    2014-01-01

    Validity evidence based on test content is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the logic and theory underlying such evidence and describe traditional and modern methods for gathering and analyzing content validity data. A comprehensive review of the literature and of the aforementioned Standards is presented. For educational tests and other assessments targeting knowledge and skill possessed by examinees, validity evidence based on test content is necessary for building a validity argument to support the use of a test for a particular purpose. By following the methods described in this article, practitioners have a wide arsenal of tools available for determining how well the content of an assessment is congruent with and appropriate for the specific testing purposes.

  14. Measuring individual significant change on the Beck Depression Inventory-II through IRT-based statistics.

    NARCIS (Netherlands)

    Brouwer, D.; Meijer, R.R.; Zevalkink, D.J.

    2013-01-01

    Several researchers have emphasized that item response theory (IRT)-based methods should be preferred over classical approaches in measuring change for individual patients. In the present study we discuss and evaluate the use of IRT-based statistics to measure statistically significant individual change…

  15. Unit Roots in Economic and Financial Time Series: A Re-Evaluation at the Decision-Based Significance Levels

    Directory of Open Access Journals (Sweden)

    Jae H. Kim

    2017-09-01

    Full Text Available This paper re-evaluates key past results of unit root tests, emphasizing that the use of a conventional level of significance is not in general optimal due to the test having low power. The decision-based significance levels for popular unit root tests, chosen using the line of enlightened judgement under a symmetric loss function, are found to be much higher than conventional ones. We also propose simple calibration rules for the decision-based significance levels for a range of unit root tests. At the decision-based significance levels, many time series in Nelson and Plosser's (1982) extended data set are judged to be trend-stationary, including real income variables, employment variables and money stock. We also find that nearly all real exchange rates covered in Elliott and Pesavento's (2006) study are stationary; and that most of the real interest rates covered in Rapach and Weber's (2004) study are stationary. In addition, using a specific loss function, the U.S. nominal interest rate is found to be stationary under economically sensible values of relative loss and prior belief for the null hypothesis.
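For illustration, a minimal Dickey-Fuller-type regression shows the statistic whose significance level is at issue (no lag augmentation, synthetic series; proper inference compares the statistic against Dickey-Fuller, not normal, critical values):

```python
import numpy as np

def df_t_stat(y):
    """t-statistic on rho in the Dickey-Fuller regression
    delta_y_t = alpha + rho * y_{t-1} + e_t.
    Large negative values are evidence against a unit root."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(0)
e = rng.normal(size=500)
random_walk = np.cumsum(e)          # unit root: t-stat typically near -1.5
ar1 = np.empty(500)
ar1[0] = 0.0
for t in range(1, 500):
    ar1[t] = 0.5 * ar1[t - 1] + e[t]  # stationary: strongly negative t-stat
```

The paper's point is about where to draw the rejection line for such a statistic: with low power, a conventional 5% cutoff can be far from the decision-theoretically optimal one.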

  16. Methodology for testing and validating knowledge bases

    Science.gov (United States)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.
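    Premises (2) and (3) above can be illustrated with a minimal sketch: a rule base represented as a dependency graph whose integrity condition (here, the absence of circular derivation chains) is checked by a standard graph algorithm. The rule names and graph encoding are hypothetical, not the toolset described in the abstract:

```python
def has_cycle(graph):
    """graph: dict mapping a rule to the rules its conclusion depends on.
    Returns True if the rule base contains a circular derivation chain."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for dep in graph.get(node, ()):
            if color.get(dep, WHITE) == GRAY:   # back edge: circular derivation
                return True
            if color.get(dep, WHITE) == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

consistent = {"r1": ["r2"], "r2": ["r3"], "r3": []}
circular = {"r1": ["r2"], "r2": ["r1"]}
```

    The same pattern (encode the knowledge base as a graph, then check a structural property by traversal) extends to other integrity conditions such as unreachable rules or undefined references.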

  17. A Note on Testing Mediated Effects in Structural Equation Models: Reconciling Past and Current Research on the Performance of the Test of Joint Significance

    Science.gov (United States)

    Valente, Matthew J.; Gonzalez, Oscar; Miocevic, Milica; MacKinnon, David P.

    2016-01-01

    Methods to assess the significance of mediated effects in education and the social sciences are well studied and fall into two categories: single sample methods and computer-intensive methods. A popular single sample method to detect the significance of the mediated effect is the test of joint significance, and a popular computer-intensive method…

  18. Security Considerations and Recommendations in Computer-Based Testing

    Directory of Open Access Journals (Sweden)

    Saleh M. Al-Saleem

    2014-01-01

    Full Text Available Many organizations and institutions around the globe are moving or planning to move their paper-and-pencil based testing to computer-based testing (CBT). However, this conversion will not be the best option for all kinds of exams and it will require significant resources. These resources may include the preparation of item banks, methods for test delivery, procedures for test administration, and last but not least test security. Security aspects may include but are not limited to the identification and authentication of examinee, the risks that are associated with cheating on the exam, and the procedures related to test delivery to the examinee. This paper will mainly investigate the security considerations associated with CBT and will provide some recommendations for the security of these kinds of tests. We will also propose a palm-based biometric authentication system incorporated with basic authentication system (username/password) in order to check the identity and authenticity of the examinee.

  19. Security considerations and recommendations in computer-based testing.

    Science.gov (United States)

    Al-Saleem, Saleh M; Ullah, Hanif

    2014-01-01

    Many organizations and institutions around the globe are moving or planning to move their paper-and-pencil based testing to computer-based testing (CBT). However, this conversion will not be the best option for all kinds of exams and it will require significant resources. These resources may include the preparation of item banks, methods for test delivery, procedures for test administration, and last but not least test security. Security aspects may include but are not limited to the identification and authentication of examinee, the risks that are associated with cheating on the exam, and the procedures related to test delivery to the examinee. This paper will mainly investigate the security considerations associated with CBT and will provide some recommendations for the security of these kinds of tests. We will also propose a palm-based biometric authentication system incorporated with basic authentication system (username/password) in order to check the identity and authenticity of the examinee.

  20. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML) for defining the relationships between models.

  1. Evidence for the different physiological significance of the 6- and 2-minute walk tests in multiple sclerosis

    Directory of Open Access Journals (Sweden)

    Motl Robert W

    2012-03-01

    Full Text Available Abstract Background Researchers have recently advocated for the 2-minute walk (2MW) as an alternative for the 6-minute walk (6MW) to assess long distance ambulation in persons with multiple sclerosis (MS). This recommendation has not been based on physiological considerations such as the rate of oxygen consumption (V·O2) over the 6MW range. Objective This study examined the pattern of change in V·O2 over the range of the 6MW in a large sample of persons with MS who varied as a function of disability status. Method Ninety-five persons with clinically-definite MS underwent a neurological examination for generating an Expanded Disability Status Scale (EDSS) score, and then completion of the 6MW protocol while wearing a portable metabolic unit and an accelerometer. Results There was a time main effect on V·O2 during the 6MW (p = .0001) such that V·O2 increased significantly every 30 seconds over the first 3 minutes of the 6MW, and then remained stable over the second 3 minutes of the 6MW. This occurred despite no change in cadence across the 6MW (p = .84). Conclusions The pattern of change in V·O2 indicates that there are different metabolic systems providing energy for ambulation during the 6MW in MS subjects and steady state aerobic metabolism is reached during the last 3 minutes of the 6MW. By extension, the first 3 minutes would represent a test of mixed aerobic and anaerobic work, whereas the second 3 minutes would represent a test of aerobic work during walking.

  2. A systematic review on diagnostic accuracy of CT-based detection of significant coronary artery disease

    International Nuclear Information System (INIS)

    Janne d'Othee, Bertrand; Siebert, Uwe; Cury, Ricardo; Jadvar, Hossein; Dunn, Edward J.; Hoffmann, Udo

    2008-01-01

    Objectives: Systematic review of diagnostic accuracy of contrast enhanced coronary computed tomography (CE-CCT). Background: Noninvasive detection of coronary artery stenosis (CAS) by CE-CCT as an alternative to catheter-based coronary angiography (CCA) may improve patient management. Methods: Forty-one articles published between 1997 and 2006 were included that evaluated native coronary arteries for significant stenosis and used CE-CCT as diagnostic test and CCA as reference standard. Study group characteristics, study methodology and diagnostic outcomes were extracted. Pooled summary sensitivity and specificity of CE-CCT were calculated using a random effects model (1) for all coronary segments, (2) assessable segments, and (3) per patient. Results: The 41 studies totaled 2515 patients (75% males; mean age: 59 years, CAS prevalence: 59%). Analysis of all coronary segments yielded a sensitivity of 95% (80%, 89%, 86%, 98% for electron beam CT, 4/8-slice, 16-slice and 64-slice MDCT, respectively) for a specificity of 85% (77%, 84%, 95%, 91%). Analysis limited to segments deemed assessable by CT showed sensitivity of 96% (86%, 85%, 98%, 97%) for a specificity of 95% (90%, 96%, 96%, 96%). Per patient, sensitivity was 99% (90%, 97%, 99%, 98%) and specificity was 76% (59%, 81%, 83%, 92%). Heterogeneity was quantitatively important but not explainable by patient group characteristics or study methodology. Conclusions: Current diagnostic accuracy of CE-CCT is high. Advances in CT technology have resulted in increases in diagnostic accuracy and proportion of assessable coronary segments. However, per patient, accuracy may be lower and CT may have more limited clinical utility in populations at high risk for CAD.

  3. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    NARCIS (Netherlands)

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is obvious that Fisher’s statistic is more sensitive to small p-values than to large ones, and a single small p-value may overrule the other p-values

  4. Testlet-Based Multidimensional Adaptive Testing.

    Science.gov (United States)

    Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen

    2016-01-01

    Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.

  5. Testlet-based Multidimensional Adaptive Testing

    Directory of Open Access Journals (Sweden)

    Andreas Frey

    2016-11-01

    Full Text Available Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, 1.5) and testlet sizes (3 items, 6 items, 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.

  6. Diagnostic tests based on human basophils

    DEFF Research Database (Denmark)

    Kleine-Tebbe, Jörg; Erdmann, Stephan; Knol, Edward F

    2006-01-01

    -maximal responses, termed 'intrinsic sensitivity'. These variables give rise to shifts in the dose-response curves which, in a diagnostic setting where only a single antigen concentration is employed, may produce false-negative data. Thus, in order to meaningfully utilize the current basophil activation tests… Diagnostic studies using CD63 or CD203c in hymenoptera, food and drug allergy are critically discussed. Basophil-based tests are indicated for allergy testing in selected cases but should only be performed by experienced laboratories.

  7. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    OpenAIRE

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is obvious that Fisher’s statistic is more sensitive to small p-values than to large ones, and a single small p-value may overrule the other p-values and decide the test result. This is, in some cases, viewed as a flaw. In order to overcome this flaw and improve the power of the test, the joint tail probability of a set of p-values is proposed as a ...
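    Fisher's combined probability test referred to above has a simple closed form: under the null, X = -2 Σ ln p_i follows a chi-square distribution with 2k degrees of freedom, and for even degrees of freedom the chi-square survival function has a finite-sum expression, so the test fits in a few stdlib-only lines. This sketch implements the standard method the abstract critiques, not the authors' proposed ordered-p-value statistic:

```python
import math

def fisher_combined(pvalues):
    """Fisher's combined probability test.

    X = -2 * sum(ln p_i) ~ chi-square with 2k degrees of freedom under H0.
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # Chi-square survival function in closed form for even df = 2k:
    #   P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))
```

    The sensitivity the abstract describes is easy to see: with k = 1 the combined p-value equals the input p-value, and one very small p-value pulls the combined statistic down sharply even when all the other p-values are large.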

  8. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    NARCIS (Netherlands)

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels

  9. Liver stiffness measurement-based scoring system for significant inflammation related to chronic hepatitis B.

    Directory of Open Access Journals (Sweden)

    Mei-Zhu Hong

    Full Text Available Liver biopsy is indispensable because liver stiffness measurement alone cannot provide information on intrahepatic inflammation. However, the presence of fibrosis highly correlates with inflammation. We constructed a noninvasive model to determine significant inflammation in chronic hepatitis B patients by using liver stiffness measurement and serum markers. The training set included chronic hepatitis B patients (n = 327), and the validation set included 106 patients; liver biopsies were performed, liver histology was scored, and serum markers were investigated. All patients underwent liver stiffness measurement. An inflammation activity scoring system for significant inflammation was constructed. In the training set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.964, 91.9%, and 90.8% in the HBeAg(+) patients and 0.978, 85.0%, and 94.0% in the HBeAg(-) patients, respectively. In the validation set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.971, 90.5%, and 92.5% in the HBeAg(+) patients and 0.977, 95.2%, and 95.8% in the HBeAg(-) patients. The liver stiffness measurement-based activity score was comparable to that of the fibrosis-based activity score in both HBeAg(+) and HBeAg(-) patients for recognizing significant inflammation (G ≥3). Significant inflammation can be accurately predicted by this novel method. The liver stiffness measurement-based scoring system can be used without the aid of computers and provides a noninvasive alternative for the prediction of chronic hepatitis B-related significant inflammation.

  10. TREAT (TREe-based Association Test)

    Science.gov (United States)

    TREAT is an R package for detecting complex joint effects in case-control studies. The test statistic is derived from a tree-structure model by recursively partitioning the data. An ultra-fast algorithm is designed to evaluate the significance of association between a candidate gene and the disease outcome.

  11. The Improvement of Screening the Significant Factors of Oil Blends as Bio lubricant Base Stock

    International Nuclear Information System (INIS)

    Noor Hajarul Ashikin Shamsuddin; Rozaini Abdullah; Zainab Hamzah; Siti Jamilah Hanim Mohd Yusof

    2015-01-01

    A new bio-lubricant base stock formulation was developed by blending waste cooking oil (WCO) with Jatropha curcas oil (JCO). The objective of this research is to evaluate significant factors contributing to the production of oil blends for bio-lubricant application. The factors studied were oil ratio (WCO:JCO), agitation time (min) and agitation speed (rpm). The blended bio-based lubricant oil was used to determine the saponification, acid, peroxide and iodine values. The experimental design used in this study was a 2-level factorial design. In this experiment, it was found that the oil ratio, and the interaction of oil ratio and agitation speed, gave the most significant effects in oil blends as bio-lubricant base stock. The highest oil blend ratio of 80%:20% WCO:JCO, with a low agitation speed of 300 rpm and a low agitation time of 30 minutes, gave the optimum results. The acid, saponification, peroxide and iodine values obtained were 0.517±0.08 mg KOH/g, 126.23±1.62 mg/g, 7.5±2.0 meq/kg and 50.42±2.85 mg/g respectively. A higher ratio of waste cooking oil in the blends was found to be favourable for a bio-lubricant base stock. (author)

  12. Simple sorting algorithm test based on CUDA

    OpenAIRE

    Meng, Hongyu; Guo, Fangjin

    2015-01-01

    With the development of computing technology, CUDA has become a very important tool. In computer programming, sorting algorithm is widely used. There are many simple sorting algorithms such as enumeration sort, bubble sort and merge sort. In this paper, we test some simple sorting algorithm based on CUDA and draw some useful conclusions.

  13. Forum: Is Test-Based Accountability Dead?

    Science.gov (United States)

    Polikoff, Morgan S.; Greene, Jay P.; Huffman, Kevin

    2017-01-01

    Since the 2001 passage of the No Child Left Behind Act (NCLB), test-based accountability has been an organizing principle--perhaps "the" organizing principle--of efforts to improve American schools. But lately, accountability has been under fire from many critics, including Common Core opponents and those calling for more multifaceted…

  14. Combination of blood tests for significant fibrosis and cirrhosis improves the assessment of liver-prognosis in chronic hepatitis C.

    Science.gov (United States)

    Boursier, J; Brochard, C; Bertrais, S; Michalak, S; Gallois, Y; Fouchard-Hubert, I; Oberti, F; Rousselet, M-C; Calès, P

    2014-07-01

    Recent longitudinal studies have emphasised the prognostic value of noninvasive tests of liver fibrosis and cross-sectional studies have shown their combination significantly improves diagnostic accuracy. To compare the prognostic accuracy of six blood fibrosis tests and liver biopsy, and evaluate if test combination improves the liver-prognosis assessment in chronic hepatitis C (CHC). A total of 373 patients with compensated CHC, liver biopsy (Metavir F) and blood tests targeting fibrosis (APRI, FIB4, Fibrotest, Hepascore, FibroMeter) or cirrhosis (CirrhoMeter) were included. Significant liver-related events (SLRE) and liver-related deaths were recorded during follow-up (started the day of biopsy). During the median follow-up of 9.5 years (3508 person-years), 47 patients had a SLRE and 23 patients died from liver-related causes. For the prediction of first SLRE, most blood tests allowed higher prognostication than Metavir F [Harrell C-index: 0.811 (95% CI: 0.751-0.868)] with a significant increase for FIB4: 0.879 [0.832-0.919] (P = 0.002), FibroMeter: 0.870 [0.812-0.922] (P = 0.005) and APRI: 0.861 [0.813-0.902] (P = 0.039). Multivariate analysis identified FibroMeter, CirrhoMeter and sustained viral response as independent predictors of first SLRE. CirrhoMeter was the only independent predictor of liver-related death. The combination of FibroMeter and CirrhoMeter classifications into a new FM/CM classification improved the liver-prognosis assessment compared to Metavir F staging or single tests by identifying five subgroups of patients with significantly different prognoses. Some blood fibrosis tests are more accurate than liver biopsy for determining liver prognosis in CHC. A new combination of two complementary blood tests, one targeted for fibrosis and the other for cirrhosis, optimises assessment of liver-prognosis. © 2014 John Wiley & Sons Ltd.
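    APRI and FIB4 named above are simple closed-form blood indices. A sketch of their commonly published formulas follows; the unit conventions and the AST upper limit of normal are assumptions for illustration, and the proprietary FibroMeter and CirrhoMeter scores from the study are not reproduced here:

```python
import math

def fib4(age_years, ast_iu_l, alt_iu_l, platelets_10e9_l):
    """FIB-4 index as commonly published:
    (age [years] x AST [IU/L]) / (platelets [10^9/L] x sqrt(ALT [IU/L]))."""
    return (age_years * ast_iu_l) / (platelets_10e9_l * math.sqrt(alt_iu_l))

def apri(ast_iu_l, ast_uln_iu_l, platelets_10e9_l):
    """AST-to-platelet ratio index:
    (AST / upper limit of normal) / platelets [10^9/L] x 100."""
    return (ast_iu_l / ast_uln_iu_l) / platelets_10e9_l * 100.0

# Hypothetical patient: age 60, AST 80 IU/L, ALT 40 IU/L, platelets 150 x 10^9/L
print(round(fib4(60, 80, 40, 150), 2))   # → 5.06
```

    Either index is a single arithmetic expression over routine laboratory values, which is why such tests are attractive as noninvasive alternatives to biopsy for repeated prognostic assessment.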

  15. Children with hemodynamically significant congenital heart disease can be identified through population-based registers

    DEFF Research Database (Denmark)

    Bergman, Gunnar; Hærskjold, Ann; Stensballe, Lone Graff

    2015-01-01

    BACKGROUND: Epidemiological research is facilitated in Sweden by a history of national health care registers, making large unselected national cohort studies possible. However, for complex clinical populations, such as children with congenital heart disease (CHD), register-based studies are challenged by registration limitations. For example, the diagnostic code system International Classification of Diseases, 10th version (ICD-10) does not indicate the clinical significance of abnormalities, and therefore may be of limited use if used as the sole parameter in epidemiological research. Palivizumab...

  16. Human papillomavirus mRNA and DNA testing in women with atypical squamous cells of undetermined significance

    DEFF Research Database (Denmark)

    Thomsen, Louise T; Dehlendorff, Christian; Junge, Jette

    2016-01-01

    In this prospective cohort study, we compared the performance of human papillomavirus (HPV) mRNA and DNA testing of women with atypical squamous cells of undetermined significance (ASC-US) during cervical cancer screening. Using a nationwide Danish pathology register, we identified women aged 30-65 years with ASC-US during 2005-2011 who were tested for HPV16/18/31/33/45 mRNA using PreTect HPV-Proofer (n = 3,226) or for high-risk HPV (hrHPV) DNA using Hybrid Capture 2 (HC2) (n = 9,405) or Linear Array HPV-Genotyping test (LA) (n = 1,533). Women with ≥1 subsequent examination in the register (n = 13... those testing HC2 negative (3.2% [95% CI: 2.2-4.2%] versus 0.5% [95% CI: 0.3-0.7%]). Patterns were similar after 18 months and 5 years' follow-up; for CIN2+ and cancer as outcomes; across all age groups; and when comparing mRNA testing to hrHPV DNA testing using LA. In conclusion, the HPV16...

  17. Understanding text-based persuasion and support tactics of concerned significant others

    Directory of Open Access Journals (Sweden)

    Katherine van Stolk-Cooke

    2015-08-01

    Full Text Available The behavior of concerned significant others (CSOs) can have a measurable impact on the health and wellness of individuals attempting to meet behavioral and health goals, and research is needed to better understand the attributes of text-based CSO language when encouraging target significant others (TSOs) to achieve those goals. In an effort to inform the development of interventions for CSOs, this study examined the language content of brief text-based messages generated by CSOs to motivate TSOs to achieve a behavioral goal. CSOs generated brief text-based messages for TSOs for three scenarios: (1) to help TSOs achieve the goal, (2) in the event that the TSO is struggling to meet the goal, and (3) in the event that the TSO has given up on meeting the goal. Results indicate that there was a significant relationship between the tone and compassion of messages generated by CSOs, the CSOs’ perceptions of TSO motivation, and their expectation of a grateful or annoyed reaction by the TSO to their feedback or support. Results underscore the importance of attending to patterns in language when CSOs communicate with TSOs about goal achievement or failure, and how certain variables in the CSOs’ perceptions of their TSOs affect these characteristics.

  18. Fault tolerant system based on IDDQ testing

    Science.gov (United States)

    Guibane, Badi; Hamdi, Belgacem; Mtibaa, Abdellatif; Bensalem, Brahim

    2018-06-01

    Offline test is essential to ensure good manufacturing quality. However, for permanent or transient faults that occur during the use of the integrated circuit in an application, an online integrated test is needed as well. This procedure should ensure the detection and possibly the correction or masking of these faults. This requirement of self-correction is sometimes necessary, especially in critical applications that require high security such as automotive, space or biomedical applications. We propose a fault-tolerant design for analogue and mixed-signal complementary metal-oxide-semiconductor (CMOS) circuits based on quiescent supply current (IDDQ) testing. A defect can cause an increase in current consumption. The IDDQ testing technique is based on the measurement of the power supply current to distinguish between functional and failed circuits. The technique has been an effective testing method for detecting physical defects such as gate-oxide shorts, floating gates (opens) and bridging defects in CMOS integrated circuits. An architecture called BICS (Built-In Current Sensor) is used for monitoring the supply current (IDDQ) of the connected integrated circuit. If the measured current is not within the normal range, a defect is signalled and the system switches connection from the defective to a functional integrated circuit. The fault-tolerant technique is composed essentially of a double-mirror built-in current sensor, allowing the detection of abnormal current consumption, and blocks allowing the connection to redundant circuits if a defect occurs. Spice simulations are performed to validate the proposed design.

  19. Using a micro computer based test bank

    International Nuclear Information System (INIS)

    Hamel, R.T.

    1987-01-01

    Utilizing a micro computer based test bank offers a training department many advantages and can have a positive impact upon training procedures and examination standards. Prior to data entry, Training Department management must pre-review the examination questions and answers to ensure compliance with examination standards and to verify the validity of all questions. Management must adhere to the TSD format since all questions require an enabling objective numbering scheme. Each question is entered under the enabling objective upon which it is based. Then the question is selected via the enabling objective. This eliminates any instructor bias because a random number generator chooses the test question. However, the instructor may load specific questions to create an emphasis theme for any test. The examination, answer and cover sheets are produced and printed within minutes. The test bank eliminates the large amount of time that is normally required for an instructor to formulate an examination. The need for clerical support is reduced by the elimination of typing examinations and also by the software's ability to maintain and generate student/course lists, attendance sheets, and grades. Software security measures limit access to the test bank, and the impromptu method used to generate and print an examination enhances its security.
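    The selection scheme described (questions stored under enabling-objective numbers, drawn by a random generator, with optional instructor-chosen emphasis questions) can be sketched as follows; the bank contents, objective numbering, and function names are hypothetical:

```python
import random

# Hypothetical item bank keyed by enabling-objective number (TSD-style numbering).
bank = {
    "1.1": ["Q1", "Q2", "Q3"],
    "1.2": ["Q4", "Q5"],
    "2.1": ["Q6", "Q7", "Q8", "Q9"],
}

def build_exam(bank, per_objective, emphasis=()):
    """Randomly draw `per_objective` questions under each enabling objective,
    then append any instructor-selected questions for an emphasis theme."""
    rng = random.Random(42)  # seeded here only to make the sketch repeatable
    exam = []
    for objective in sorted(bank):
        pool = bank[objective]
        exam.extend(rng.sample(pool, min(per_objective, len(pool))))
    exam.extend(q for q in emphasis if q not in exam)
    return exam

exam = build_exam(bank, per_objective=2, emphasis=["Q9"])
```

    Because selection is keyed to enabling objectives rather than to an instructor's memory, coverage of the objectives is enforced while individual question choice stays unbiased.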

  20. Social marketing campaign significantly associated with increases in syphilis testing among gay and bisexual men in San Francisco.

    Science.gov (United States)

    Montoya, Jorge A; Kent, Charlotte K; Rotblatt, Harlan; McCright, Jacque; Kerndt, Peter R; Klausner, Jeffrey D

    2005-07-01

    Between 1999 and 2002, San Francisco experienced a sharp increase in early syphilis among gay and bisexual men. In response, the San Francisco Department of Public Health launched a social marketing campaign to increase testing for syphilis, and awareness and knowledge about syphilis among gay and bisexual men. A convenience sample of 244 gay and bisexual men (18-60 years of age) were surveyed to evaluate the effectiveness of the campaign. Respondents were interviewed to elicit unaided and aided awareness about the campaign, knowledge about syphilis, recent sexual behaviors, and syphilis testing behavior. After controlling for other potential confounders, unaided campaign awareness was a significant correlate of having a syphilis test in the last 6 months (odds ratio, 3.21; 95% confidence interval, 1.30-7.97) compared with no awareness of the campaign. A comparison of respondents aware of the campaign with those not aware also revealed significant increases in awareness and knowledge about syphilis. The Healthy Penis 2002 campaign achieved its primary objective of increasing syphilis testing, and awareness and knowledge about syphilis among gay and bisexual men in San Francisco.

  1. Significance of specificity of Tinetti B-POMA test and fall risk factor in third age of life.

    Science.gov (United States)

    Avdić, Dijana; Pecar, Dzemal

    2006-02-01

    In the third age, psychophysical abilities gradually decline, as does the ability to adapt to endogenous and exogenous burdens. In 1987, Harada et al. (1) found that 9.5 million persons in the USA had difficulties performing daily activities, and that 59% of them (5.6 million) were older than 65 years. The study encompassed 77 respondents of both sexes with an average age of 71.73 +/- 5.63 years (range 65-90 years), chosen by random sampling. Each patient was questioned in his/her own home and was familiarized beforehand with the methodology and aims of the questionnaire. Women made up 64.94% of the respondents (50 patients) and men 35.06% (27 patients). For the risk factor score obtained from the questionnaire and for the B-POMA test, there were statistically significant differences between men and women, as well as between patients who had fallen and those who never had; with respect to way of life (living alone or in a community) there were no significant differences. Average B-POMA results in this study were statistically significantly higher in men and in patients who did not report falling, with no statistically significant difference by way of life. In relation to the percentage of the maximum number of positive answers to particular questions, by gender, way of life and falling history, there were no statistically significant differences between the B-POMA test and the risk factor score (the questionnaire).

  2. Innovations in individual feature history management - The significance of feature-based temporal model

    Science.gov (United States)

    Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.

    2008-01-01

    A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent the changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores the temporal topological relationship with the ISO's temporal primitives of a feature, in order to keep track of feature history. The explicit temporal relationship can enhance query performance on feature history by removing topological comparison during the query process. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land parcel history in Athens, Georgia. The result of temporal query on individual feature history shows the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.

  3. Enhancing SAT-Based Test Pattern Generation

    Institute of Scientific and Technical Information of China (English)

    LIU Xin; XIONG You-lun

    2005-01-01

    This paper presents modeling tools based on Boolean satisfiability (SAT) to solve problems of test generation for combinational circuits. It adds a layer that maintains circuit-related information and value-justification relations on top of a generic SAT algorithm. It dovetails binary decision diagram (BDD) and SAT techniques to improve the efficiency of automatic test pattern generation (ATPG). More specifically, it first exploits inexpensive reconvergent fanout analysis of the circuit to gather information on local signal correlation by using BDD learning, then uses the learned information to restrict and focus the overall search space of SAT-based ATPG. Its learning technique is effective and lightweight. The experimental results demonstrate the effectiveness of the approach.
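
    The core of any ATPG formulation, SAT-based or otherwise, is a miter: find an input on which the fault-free and faulty circuits disagree. A brute-force sketch on an invented two-gate circuit (a SAT solver would search the CNF encoding of the same condition instead of enumerating assignments):

    ```python
    # Minimal ATPG sketch: detect a stuck-at-0 fault on the AND-gate output
    # of y = (a AND b) OR c by finding an input where good != faulty.
    from itertools import product

    def good(a, b, c):
        return (a & b) | c

    def faulty(a, b, c):
        and_out = 0            # AND-gate output stuck at 0
        return and_out | c

    def generate_test():
        for a, b, c in product([0, 1], repeat=3):
            if good(a, b, c) != faulty(a, b, c):   # miter output is 1: fault detected
                return (a, b, c)
        return None                                 # fault is untestable

    test_vector = generate_test()
    ```

    The vector found must activate the fault (drive the AND output to 1) and propagate the difference to the output (c = 0), which is exactly the condition a SAT-based formulation encodes as clauses.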

  4. Association test based on SNP set: logistic kernel machine based test vs. principal component analysis.

    Directory of Open Access Journals (Sweden)

    Yang Zhao

    Full Text Available GWAS has greatly facilitated the discovery of risk SNPs associated with complex diseases. Traditional methods analyze SNPs individually and are limited by low power and reproducibility, since correction for multiple comparisons is necessary. Several methods have been proposed that group SNPs into SNP sets using biological knowledge and/or genomic features. In this article, we compare the linear kernel machine based test (LKM) and the principal components analysis based approach (PCA) using simulated datasets under scenarios of 0 to 3 causal SNPs, as well as simple and complex linkage disequilibrium (LD) structures of the simulated regions. Our simulation study demonstrates that both LKM and PCA can control the type I error at the significance level of 0.05. If the causal SNP is in strong LD with the genotyped SNPs, both the PCA with a small number of principal components (PCs) and the LKM with a linear or identity-by-state (IBS) kernel are valid tests. However, if the LD structure is complex, such as several LD blocks in the SNP set, or if the causal SNP is not in the LD block in which most of the genotyped SNPs reside, more PCs should be included to capture the information of the causal SNP. Simulation studies also demonstrate the ability of LKM and PCA to combine information from multiple causal SNPs and to provide increased power over individual SNP analysis. We also apply LKM and PCA to analyze two SNP sets extracted from an actual GWAS dataset on non-small cell lung cancer.
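
    The flavour of a kernel machine SNP-set test can be sketched as follows (a toy linear-kernel statistic assessed with a permutation null; the genotypes are invented, and the paper's LKM uses a score test rather than permutation):

    ```python
    # Toy SNP-set association test: Q = r'GG'r combines evidence across all
    # SNPs in the set, instead of testing each SNP separately.
    import random

    G = [[0, 1, 2], [1, 1, 2], [0, 0, 1], [2, 2, 0], [1, 0, 0], [2, 1, 0]]  # 6 subjects x 3 SNPs
    y = [1, 1, 1, 0, 0, 0]                      # 1 = case, 0 = control

    def kernel_stat(y, G):
        mean_y = sum(y) / len(y)
        r = [yi - mean_y for yi in y]           # centred phenotype residuals
        # Q = r' G G' r = || G' r ||^2: a linear-kernel set statistic
        return sum(sum(G[i][j] * r[i] for i in range(len(r))) ** 2
                   for j in range(len(G[0])))

    def permutation_p(y, G, n_perm=2000, seed=7):
        rng = random.Random(seed)
        q_obs = kernel_stat(y, G)
        hits = sum(kernel_stat(rng.sample(y, len(y)), G) >= q_obs
                   for _ in range(n_perm))
        return (hits + 1) / (n_perm + 1)        # add-one keeps the estimate off zero

    p_value = permutation_p(y, G)
    ```

    Because the statistic pools all SNPs in the set into a single test, only one p-value is produced and the multiple-comparison penalty of per-SNP testing is avoided.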

  5. Significantly High Modulation Efficiency of Compact Graphene Modulator Based on Silicon Waveguide.

    Science.gov (United States)

    Shu, Haowen; Su, Zhaotang; Huang, Le; Wu, Zhennan; Wang, Xingjun; Zhang, Zhiyong; Zhou, Zhiping

    2018-01-17

    We theoretically and experimentally demonstrate a significantly large modulation efficiency of a compact graphene modulator based on a silicon waveguide using the electro-refractive effect of graphene. The modulation modes of electro-absorption and electro-refraction can be switched with different applied voltages. A high extinction ratio of 25 dB is achieved in the electro-absorption modulation mode with a driving voltage range of 0 V to 1 V. For electro-refractive modulation, the driving voltage ranges from 1 V to 3 V with a 185-pm spectrum shift. The modulation efficiency of 1.29 V · mm with a 40-μm interaction length is two orders of magnitude higher than that of the first reported graphene phase modulator. The realisation of phase and intensity modulation with graphene based on a silicon waveguide heralds its potential application in optical communication and optical interconnection systems.
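
    Assuming the quoted 1.29 V · mm is the usual half-wave voltage-length product Vπ·L (a standard phase-modulator figure of merit; this reading is an assumption, not stated explicitly above), the drive voltage implied for a π phase shift scales inversely with interaction length:

    ```python
    # Back-of-envelope check: V_pi = (V_pi * L) / L for a phase modulator.
    v_pi_L = 1.29             # V*mm, the reported modulation efficiency
    length_mm = 40e-3         # 40 um interaction length, in mm
    v_pi = v_pi_L / length_mm # implied half-wave voltage, in volts
    ```

    A lower Vπ·L therefore means either a shorter device or a smaller drive voltage for the same phase swing.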

  6. Black hole based tests of general relativity

    International Nuclear Information System (INIS)

    Yagi, Kent; Stein, Leo C

    2016-01-01

    General relativity has passed all solar system experiments and neutron star based tests, such as binary pulsar observations, with flying colors. A more exotic arena for testing general relativity is in systems that contain one or more black holes. Black holes are the most compact objects in the Universe, providing probes of the strongest-possible gravitational fields. We are motivated to study strong-field gravity since many theories give large deviations from general relativity only at large field strengths, while recovering the weak-field behavior. In this article, we review how one can probe general relativity and various alternative theories of gravity by using electromagnetic waves from a black hole with an accretion disk, and gravitational waves from black hole binaries. We first review model-independent ways of testing gravity with electromagnetic/gravitational waves from a black hole system. We then focus on selected examples of theories that extend general relativity in rather simple ways. Some important characteristics of general relativity include (but are not limited to) (i) only tensor gravitational degrees of freedom, (ii) the graviton is massless, (iii) no quadratic or higher curvatures in the action, and (iv) the theory is four-dimensional. Altering a characteristic leads to a different extension of general relativity: (i) scalar–tensor theories, (ii) massive gravity theories, (iii) quadratic gravity, and (iv) theories with large extra dimensions. Within each theory, we describe black hole solutions, their properties, and current and projected constraints on each theory using black hole based tests of gravity. We close this review by listing some of the open problems in model-independent tests and within each specific theory. (paper)

  7. Watermarking Techniques Using Least Significant Bit Algorithm for Digital Image Security Standard Solution- Based Android

    Directory of Open Access Journals (Sweden)

    Ari Muzakir

    2017-05-01

    Full Text Available The ease of distributing digital images over the internet has both positive and negative sides, especially for owners of original digital images. The positive side is that an owner can rapidly deploy digital image files to sites around the world. The downside is that, without a copyright mark serving as protection, ownership of an image can easily be claimed by other parties. Watermarking is one solution for protecting copyright and establishing the provenance of a digital image. With digital image watermarking, the copyright of the resulting digital image is protected through the insertion of additional information, such as owner information and proof of the authenticity of the digital image. The least significant bit (LSB) algorithm is simple and easy to understand. The results of simulations carried out on an Android smartphone show that the LSB watermark cannot be seen by the naked human eye, meaning there is no perceptible difference between the original image files and the images into which a watermark has been inserted. The resulting image has dimensions of 640x480 with a bit depth of 32 bits. In addition, black box testing was used to determine the ability of the device (smartphone) to process images with this application.
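
    The bit-level idea can be sketched in a few lines (illustrative only; the application above is Android-based and works on 32-bit images, while this sketch uses bare grayscale byte values):

    ```python
    # Minimal LSB watermarking: hide watermark bits in the least significant
    # bit of each pixel byte, then read them back.

    def embed(pixels, bits):
        out = list(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & ~1) | bit     # clear the LSB, then set it to the watermark bit
        return out

    def extract(pixels, n_bits):
        return [p & 1 for p in pixels[:n_bits]]

    cover = [200, 201, 202, 203, 204, 205, 206, 207]   # 8 grayscale pixel values
    mark = [1, 0, 1, 1, 0, 0, 1, 0]                    # watermark bits
    stego = embed(cover, mark)
    ```

    Each pixel value changes by at most 1, which is why the watermark is imperceptible to the naked eye; the trade-off is fragility, since any lossy re-encoding destroys the LSB plane.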

  8. Technical bases for the DWPF testing program

    International Nuclear Information System (INIS)

    Plodinec, M.J.

    1990-01-01

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS) will be the first production facility in the United States for the immobilization of high-level nuclear waste. Production of DWPF canistered wasteforms will begin prior to repository licensing, so decisions on facility startup will have to be made before the final decisions on repository design are made. The Department of Energy's Office of Civilian Radioactive Waste Management (RW) has addressed this discrepancy by defining a Waste Acceptance Process. This process provides assurance that the borosilicate-glass wasteform, in a stainless-steel canister, produced by the DWPF will be acceptable for permanent storage in a federal repository. As part of this process, detailed technical specifications have been developed for the DWPF product. SRS has developed detailed strategies for demonstrating compliance with each of the Waste Acceptance Process specifications. An important part of the compliance is the testing which will be carried out in the DWPF. In this paper, the bases for each of the tests to be performed in the DWPF to establish compliance with the specifications are described, and the tests are detailed. The results of initial tests relating to characterization of sealed canisters are reported

  9. Tests results of skutterudite based thermoelectric unicouples

    International Nuclear Information System (INIS)

    Saber, Hamed H.; El-Genk, Mohamed S.; Caillat, Thierry

    2007-01-01

    Tests were performed of skutterudite based unicouples with (MAY-04) and without (MAR-03) metallic coating on the legs near the hot junction to quantify the effect on reducing performance degradation with operation time. The p-legs in the unicouples were made of CeFe3.5Co0.5Sb12 and the n-legs of CoSb3. The MAY-04 test was performed in vacuum (~9 x 10^-7 torr) for ~2000 h at hot and cold junction temperatures of 892.1 ± 11.9 K and 316.1 ± 5.5 K, respectively, while the MAR-03 test was performed in argon cover gas (0.051-0.068 MPa) at 972.61 ± 10.0 K and 301.1 ± 5.1 K, respectively. The argon cover gas decreased antimony loss from the legs in the MAR-03 test, but marked degradation in performance occurred over time. Conversely, the metallic coating in the MAY-04 test was very effective in reducing performance degradation of the unicouple. Because the cross sectional areas of the legs in MAY-04 were larger than those in MAR-03, the measured electrical power of the former is much higher than that of the latter, but the Beginning of Test (BOT) open circuit voltages, Voc (204.2 mV), for both unicouples were almost the same. The peak electrical power of the MAY-04 unicouple decreased 12.35% from 1.62 We at BOT to 1.42 We after ~2000 h of testing, while that of the MAR-03 unicouple decreased 25.37% from 0.67 to 0.5 We after 261 h of testing at the above temperatures. The estimated peak efficiency of the MAY-04 unicouple, shortly after BOT (10.65%), was only ~0.37% points lower than the theoretical value, calculated assuming zero side heat losses and zero contact resistance per leg.

  10. Efficient p-value evaluation for resampling-based tests

    KAUST Repository

    Yu, K.

    2011-01-05

    The resampling-based test, which often relies on permutation or bootstrap procedures, has been widely used for statistical hypothesis testing when the asymptotic distribution of the test statistic is unavailable or unreliable. It requires repeated calculations of the test statistic on a large number of simulated data sets for its significance level assessment, and thus it could become very computationally intensive. Here, we propose an efficient p-value evaluation procedure by adapting the stochastic approximation Markov chain Monte Carlo algorithm. The new procedure can be used easily for estimating the p-value for any resampling-based test. We show through numeric simulations that the proposed procedure can be 100-500,000 times as efficient (in terms of computing time) as the standard resampling-based procedure when evaluating a test statistic with a small p-value (e.g. less than 10^(-6)). With its computational burden reduced by this proposed procedure, the versatile resampling-based test would become computationally feasible for a much wider range of applications. We demonstrate the application of the new method by applying it to a large-scale genetic association study of prostate cancer.
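
    The baseline such a method accelerates looks like this (a standard permutation test on a difference in means, with made-up data; the stochastic approximation MCMC speed-up itself is not sketched here):

    ```python
    # Standard resampling-based p-value: count how often a permuted statistic
    # is at least as extreme as the observed one.
    import random

    def perm_p_value(x, y, n_perm=5000, seed=1):
        rng = random.Random(seed)
        obs = abs(sum(x) / len(x) - sum(y) / len(y))
        pooled = x + y
        hits = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            px, py = pooled[:len(x)], pooled[len(x):]
            if abs(sum(px) / len(px) - sum(py) / len(py)) >= obs:
                hits += 1
        # add-one correction keeps the estimate away from an impossible p = 0
        return (hits + 1) / (n_perm + 1)

    p = perm_p_value([2.1, 2.5, 2.3, 2.8], [1.1, 1.4, 1.0, 1.3])
    ```

    Estimating a p-value below 10^-6 this way requires millions of resamples per test, which is the computational burden the proposed procedure reduces.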

  11. Divergence-based tests for model diagnostic

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Esteban, M. D.; Morales, D.; Marhuenda, Y.

    2008-01-01

    Roč. 78, č. 13 (2008), s. 1702-1710 ISSN 0167-7152 R&D Projects: GA MŠk 1M0572 Grant - others:Instituto Nacional de Estadistica (ES) MTM2006-05693 Institutional research plan: CEZ:AV0Z10750506 Keywords: goodness of fit * divergence statistics * GLM * model checking * bootstrap Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.445, year: 2008 http://library.utia.cas.cz/separaty/2008/SI/hobza-divergence-based%20tests%20for%20model%20diagnostic.pdf

  12. Network Diffusion-Based Prioritization of Autism Risk Genes Identifies Significantly Connected Gene Modules

    Directory of Open Access Journals (Sweden)

    Ettore Mosca

    2017-09-01

    Full Text Available Autism spectrum disorder (ASD) is marked by a strong genetic heterogeneity, which is underlined by the low overlap between ASD risk gene lists proposed in different studies. In this context, molecular networks can be used to analyze the results of several genome-wide studies in order to underline those network regions harboring genetic variations associated with ASD, the so-called "disease modules." In this work, we used a recent network diffusion-based approach to jointly analyze multiple ASD risk gene lists. We defined genome-scale prioritizations of human genes in relation to ASD genes from multiple studies, found significantly connected gene modules associated with ASD and predicted genes functionally related to ASD risk genes. Most of them play a role in synapse and neuronal development and function; many are related to syndromes that can be comorbid with ASD, and the remaining are involved in epigenetics, cell cycle, cell adhesion and cancer.
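
    As a rough illustration of network diffusion (a generic random walk with restart on a toy graph, not necessarily the authors' exact diffusion method), seed scores placed on known risk genes spread to their network neighbours, so genes close to many seeds rank highly:

    ```python
    # Network propagation sketch: p <- (1 - r) * W p + r * p0, where W is the
    # column-normalised adjacency matrix and p0 concentrates mass on seed genes.

    def random_walk_with_restart(adj, seeds, restart=0.5, n_iter=200):
        n = len(adj)
        deg = [sum(row) for row in adj]
        p0 = [1.0 / len(seeds) if i in seeds else 0.0 for i in range(n)]
        p = p0[:]
        for _ in range(n_iter):
            spread = [sum(adj[i][j] * p[j] / deg[j] for j in range(n) if deg[j])
                      for i in range(n)]
            p = [(1 - restart) * spread[i] + restart * p0[i] for i in range(n)]
        return p

    # Path graph 0-1-2-3-4 with seed gene 0: scores should decay with distance.
    adj = [[0, 1, 0, 0, 0],
           [1, 0, 1, 0, 0],
           [0, 1, 0, 1, 0],
           [0, 0, 1, 0, 1],
           [0, 0, 0, 1, 0]]
    scores = random_walk_with_restart(adj, seeds={0})
    ```

    Ranking all genes by these diffusion scores is one way to define a genome-scale prioritization, and thresholding the scores delimits candidate disease modules.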

  13. Significantly enhanced robustness and electrochemical performance of flexible carbon nanotube-based supercapacitors by electrodepositing polypyrrole

    Science.gov (United States)

    Chen, Yanli; Du, Lianhuan; Yang, Peihua; Sun, Peng; Yu, Xiang; Mai, Wenjie

    2015-08-01

    Here, we report robust, flexible CNT-based supercapacitor (SC) electrodes fabricated by electrodepositing polypyrrole (PPy) on freestanding vacuum-filtered CNT film. These electrodes demonstrate significantly improved mechanical properties (with an ultimate tensile strength of 16 MPa) and greatly enhanced electrochemical performance (5.6 times larger areal capacitance). The major drawback of conductive polymer electrodes is the fast capacitance decay caused by structural breakdown, which decreases cycling stability; this, however, is not observed in our case. All-solid-state SCs assembled with the robust CNT/PPy electrodes exhibit excellent flexibility, long lifetime (95% capacitance retention after 10,000 cycles) and high electrochemical performance (a total device volumetric capacitance of 4.9 F/cm3). Moreover, a flexible SC pack is demonstrated to light up 53 LEDs or drive a digital watch, indicating the broad potential application of our SCs for portable/wearable electronics.

  14. Semifragile Speech Watermarking Based on Least Significant Bit Replacement of Line Spectral Frequencies

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Nematollahi

    2017-01-01

    Full Text Available There are various techniques for speech watermarking based on modifying the linear prediction coefficients (LPCs; however, the estimated and modified LPCs vary from each other even without attacks. Because line spectral frequency (LSF has less sensitivity to watermarking than LPC, watermark bits are embedded into the maximum number of LSFs by applying the least significant bit replacement (LSBR method. To reduce the differences between estimated and modified LPCs, a checking loop is added to minimize the watermark extraction error. Experimental results show that the proposed semifragile speech watermarking method can provide high imperceptibility and that any manipulation of the watermark signal destroys the watermark bits since manipulation changes it to a random stream of bits.

  15. Corneal topographer based on the Hartmann test.

    Science.gov (United States)

    Mejía, Yobani; Galeano, Janneth C

    2009-04-01

    The purpose of this article is to show the performance of a topographer based on the Hartmann test for convex surfaces of F/# approximately 1. This topographer, called the "Hartmann Test topographer (HT topographer)," is a prototype developed in the Physics Department of the Universidad Nacional de Colombia. From the Hartmann pattern generated by the surface under test, and by Fourier analysis and optical aberration theory, we obtain the sagitta (elevation map) of the surface. Then, taking the first and second derivatives of the sagitta in the radial direction, we obtain the meridional curvature map. The method is illustrated with an example. To check the performance of the HT topographer, a toric surface, a revolution aspherical surface, and two human corneas were measured. Our results are compared with those obtained with a Placido ring topographer (Tomey TMS-4 videokeratoscope), and we show that our curvature maps are similar to those obtained with the Placido ring topographer. The HT topographer is able to reconstruct the corneal topography while potentially eradicating the skew ray problem; therefore, corneal defects can be visualized more easily. The results are presented by elevation and meridional curvature maps.
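
    The curvature step described above can be sketched numerically (a generic finite-difference version, not the prototype's code): given the sagitta z(r) along a meridian, the meridional curvature is z'' / (1 + z'^2)^(3/2), and for a sphere of radius R the curvature map should be flat at 1/R:

    ```python
    # Sagitta-to-curvature sketch: differentiate the elevation profile twice
    # in the radial direction and apply the planar-curve curvature formula.
    import math

    R = 7.8                                        # mm, a typical corneal radius
    r = [i * 0.01 for i in range(301)]             # radial samples out to 3 mm
    z = [R - math.sqrt(R * R - ri * ri) for ri in r]   # sagitta (elevation) of a sphere

    def meridional_curvature(r, z, k):
        h = r[1] - r[0]
        dz = (z[k + 1] - z[k - 1]) / (2 * h)             # first derivative, central difference
        d2z = (z[k + 1] - 2 * z[k] + z[k - 1]) / (h * h) # second derivative
        return d2z / (1 + dz * dz) ** 1.5

    kappa = meridional_curvature(r, z, 150)        # curvature at r = 1.5 mm
    ```

    On real Hartmann data the sagitta comes from the measured spot displacements rather than an analytic formula, but the differentiation step is the same.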

  16. Efficient Test Application for Core-Based Systems Using Twisted-Ring Counters

    OpenAIRE

    Anshuman Chandra; Krishnendu Chakrabarty; Mark C. Hansen

    2001-01-01

    We present novel test set encoding and pattern decompression methods for core-based systems. These are based on the use of twisted-ring counters and offer a number of important advantages: significant test compression (over 10X in many cases), less tester memory and reduced testing time, the ability to use a slow tester without compromising test quality or testing time, and no performance degradation for the core under test. Surprisingly, the encoded test sets obtained from partially-specified...
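
    A twisted-ring (Johnson) counter itself is easy to sketch (a generic illustration of the decompression hardware idea; the paper's encoding algorithm is not shown): an n-bit shift register feeds the inverted last bit back to the front and cycles through 2n distinct states, so any test pattern matching one of those states can be stored as just a seed plus a cycle count.

    ```python
    # Johnson counter: shift right, feeding back the inverted last bit.
    # An n-bit register visits 2n distinct states before repeating.

    def johnson_states(n):
        state = [0] * n
        states = []
        for _ in range(2 * n):
            states.append(tuple(state))
            state = [1 - state[-1]] + state[:-1]
        return states

    states4 = johnson_states(4)   # 0000, 1000, 1100, 1110, 1111, 0111, 0011, 0001
    ```

    The compression comes from mapping test cubes onto this small, cheaply generated state sequence instead of storing every pattern explicitly in tester memory.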

  17. Significance of Bias Correction in Drought Frequency and Scenario Analysis Based on Climate Models

    Science.gov (United States)

    Aryal, Y.; Zhu, J.

    2015-12-01

    Assessment of future drought characteristics is difficult, as climate models usually have bias in simulating precipitation frequency and intensity. To overcome this limitation, output from climate models needs to be bias corrected based on the specific purpose of application. In this study, we examine the significance of bias correction in the context of drought frequency and scenario analysis using output from climate models. In particular, we investigate the performance of three widely used bias correction techniques: (1) monthly bias correction (MBC), (2) nested bias correction (NBC), and (3) equidistant quantile mapping (EQM). The effect of bias correction on future scenarios of drought frequency is also analyzed. The characteristics of drought are investigated in terms of frequency and severity in nine representative locations in different climatic regions across the United States using regional climate model (RCM) output from the North American Regional Climate Change Assessment Program (NARCCAP). The Standardized Precipitation Index (SPI) is used as the means to compare and forecast drought characteristics at different timescales. Systematic biases in the RCM precipitation output are corrected against the National Centers for Environmental Prediction (NCEP) North American Regional Reanalysis (NARR) data. The results demonstrate that bias correction significantly decreases the RCM errors in reproducing drought frequency derived from the NARR data. Preserving the mean and standard deviation is essential for climate models in drought frequency analysis. RCM biases have both regional and timescale dependence. Different timescales of input precipitation in the bias corrections show similar results. Drought frequency obtained from the RCM future (2040-2070) scenarios is compared with that from the historical simulations. The changes in drought characteristics occur in all climatic regions. The relative changes in drought frequency in future scenario in relation to
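
    The quantile-mapping family of corrections can be sketched as follows (a generic empirical EQM with made-up numbers, not the NARCCAP processing code): a model value is corrected by the observed-minus-modelled difference at the same quantile of the historical distributions, so a model that is uniformly 2 units too dry is shifted back up by 2:

    ```python
    # Equidistant-quantile-mapping sketch: correct x by the quantile-matched
    # difference between observed and modelled historical climates.

    def empirical_quantile(sorted_vals, q):
        # linear interpolation between order statistics
        pos = q * (len(sorted_vals) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(sorted_vals) - 1)
        return sorted_vals[lo] + (pos - lo) * (sorted_vals[hi] - sorted_vals[lo])

    def eqm_correct(x, model_hist, obs_hist):
        mh, oh = sorted(model_hist), sorted(obs_hist)
        # CDF position of x within the modelled historical climate
        q = min(sum(v <= x for v in mh) / len(mh), 1.0)
        return x + empirical_quantile(oh, q) - empirical_quantile(mh, q)

    obs = [3.0, 5.0, 7.0, 9.0, 11.0]        # "observed" precipitation
    model = [1.0, 3.0, 5.0, 7.0, 9.0]       # model biased low by 2 units
    corrected = eqm_correct(6.0, model, obs)
    ```

    Because the correction is applied quantile by quantile rather than as a single mean shift, it can also repair biases in variability, which matters for threshold-based indices such as the SPI.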

  18. Performance-based containment leakage testing

    International Nuclear Information System (INIS)

    Cybulskis, P.

    1995-01-01

    The U.S. Nuclear Regulatory Commission (NRC) is reviewing regulatory requirements in an effort to revise those that are marginal to safety but impose significant burdens on licensees. Identification of requirements marginal to safety and development and evaluation of alternatives utilize the NRC safety goals and insights from probabilistic risk assessments (PRAs). Since earlier studies found design-basis containment leakage to be a minor contributor to reactor accident risk, containment leakage testing has been selected as a candidate for change in regulations. This paper summarizes the technical analyses supporting the NRC proposal to amend Appendix J of 10 CFR Part 50 as its first effort to decrease unnecessary regulatory burdens on licensees

  19. Model-Based Software Testing for Object-Oriented Software

    Science.gov (United States)

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  20. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    Science.gov (United States)

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8 and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, correspondingly, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5 + , 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events has doubled and all of them become exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. ?? 1999 Elsevier

  1. An initiative to improve the management of clinically significant test results in a large health care network.

    Science.gov (United States)

    Roy, Christopher L; Rothschild, Jeffrey M; Dighe, Anand S; Schiff, Gordon D; Graydon-Baker, Erin; Lenoci-Edwards, Jennifer; Dwyer, Cheryl; Khorasani, Ramin; Gandhi, Tejal K

    2013-11-01

    The failure of providers to communicate and follow up clinically significant test results (CSTR) is an important threat to patient safety. The Massachusetts Coalition for the Prevention of Medical Errors has endorsed the creation of systems to ensure that results can be received and acknowledged. In 2008 a task force was convened that represented clinicians, laboratories, radiology, patient safety, risk management, and information systems in a large health care network, with the goals of providing recommendations and a road map for improvement in the management of CSTR and of implementing this improvement plan during the subsequent five years. In drafting its charter, the task force broadened the scope from "critical" results to "clinically significant" ones; clinically significant was defined as any result that requires further clinical action to avoid morbidity or mortality, regardless of the urgency of that action. The task force recommended four key areas for improvement--(1) standardization of policies and definitions, (2) robust identification of the patient's care team, (3) enhanced results management/tracking systems, and (4) centralized quality reporting and metrics. The task force faced many challenges in implementing these recommendations, including disagreements on definitions of CSTR and on who should have responsibility for CSTR, changes to established work flows, limitations of resources and of existing information systems, and definition of metrics. This large-scale effort to improve the communication and follow-up of CSTR in a health care network continues with ongoing work to address implementation challenges, refine policies, prepare for a new clinical information system platform, and identify new ways to measure the extent of this important safety problem.

  2. NEIGHBORHOOD TEST DESIGN BASED ON HISTORIC PRECEDENTS

    Directory of Open Access Journals (Sweden)

    Besim S. Hakim

    2012-07-01

    Full Text Available There have been various attempts to emulate traditional architecture and to experiment with the form and aesthetics of building design. However, learning from precedents of urban morphology is rare. This design study is a test at the neighborhood level using the pattern of traditional courtyard housing that is prevalent in the majority of historic towns and cities of North Africa and the Middle East. The study is undertaken at five levels of design enquiry: dwelling types, dwelling groups, neighborhood segment, community center, and a full prototype neighborhood into which all of these are synthesized, comprising 428 dwelling units and covering an area, including circulation and the community center, of 17.6 hectares. The test demonstrates that the traditional pattern of neighborhoods based on the typology of the courtyard dwelling as the initial generator of urban form may be used to develop a contemporary settlement pattern compatible with current necessities of lifestyle, vehicular circulation (including parking) and infrastructure, achieving an attractive, livable environment with an overall gross density, including the community center, of about 24 dwelling units per hectare.

  3. Investigating a multigene prognostic assay based on significant pathways for Luminal A breast cancer through gene expression profile analysis.

    Science.gov (United States)

    Gao, Haiyan; Yang, Mei; Zhang, Xiaolan

    2018-04-01

    The present study aimed to investigate potential recurrence-risk biomarkers based on significant pathways for Luminal A breast cancer through gene expression profile analysis. Initially, the gene expression profiles of Luminal A breast cancer patients were downloaded from The Cancer Genome Atlas database. The differentially expressed genes (DEGs) were identified using the Limma package, and hierarchical clustering analysis was conducted for the DEGs. In addition, functional pathways were screened using Kyoto Encyclopedia of Genes and Genomes pathway enrichment analyses and rank ratio calculation. The multigene prognostic assay was developed based on the statistically significant pathways; its prognostic function was tested on a training set and verified using the gene expression and survival data of Luminal A breast cancer patients downloaded from the Gene Expression Omnibus. A total of 300 DEGs were identified between good and poor outcome groups, including 176 upregulated genes and 124 downregulated genes. The DEGs could effectively distinguish Luminal A samples with different prognoses, as verified by hierarchical clustering analysis. Nine pathways were screened as significant, and a total of 18 DEGs involved in these 9 pathways were identified as prognostic biomarkers. According to the survival analysis and receiver operating characteristic curve, the obtained 18-gene prognostic assay exhibited good prognostic function, with high sensitivity and specificity on both the training and test samples. In conclusion, the 18-gene prognostic assay, including the key genes transcription factor 7-like 2, anterior parietal cortex and lymphocyte enhancer factor-1, may provide a new method for predicting outcomes and may be conducive to the promotion of precision medicine for Luminal A breast cancer.
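
    The sensitivity/specificity evaluation via a receiver operating characteristic curve reduces to a ranking question: the area under the curve (AUC) is the probability that a poor-outcome sample receives a higher risk score than a good-outcome one. A generic sketch with invented scores and labels (not the study's data or pipeline):

    ```python
    # Rank-based AUC: count pairwise "wins" of poor-outcome scores over
    # good-outcome scores, with ties counted as half a win.

    def auc(scores, labels):
        pos = [s for s, l in zip(scores, labels) if l == 1]   # poor outcome
        neg = [s for s, l in zip(scores, labels) if l == 0]   # good outcome
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    risk = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # hypothetical assay risk scores
    outcome = [1, 1, 0, 1, 0, 0]            # hypothetical true outcomes
    area = auc(risk, outcome)
    ```

    An AUC of 0.5 means the assay ranks no better than chance, while values near 1 mean high sensitivity can be achieved without sacrificing specificity.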

  4. A Quantitative Analysis of Evidence-Based Testing Practices in Nursing Education

    Science.gov (United States)

    Moore, Wendy

    2017-01-01

    The focus of this dissertation is evidence-based testing practices in nursing education. Specifically, this research study explored the implementation of evidence-based testing practices between nursing faculty of various experience levels. While the significance of evidence-based testing in nursing education is well documented, little is known…

  5. Coronary risk assessment by point-based vs. equation-based Framingham models: significant implications for clinical care.

    Science.gov (United States)

    Gordon, William J; Polansky, Jesse M; Boscardin, W John; Fung, Kathy Z; Steinman, Michael A

    2010-11-01

    US cholesterol guidelines use original and simplified versions of the Framingham model to estimate future coronary risk and thereby classify patients into risk groups with different treatment strategies. We sought to compare risk estimates and risk group classification generated by the original, complex Framingham model and the simplified, point-based version. We assessed 2,543 subjects age 20-79 from the 2001-2006 National Health and Nutrition Examination Surveys (NHANES) for whom Adult Treatment Panel III (ATP-III) guidelines recommend formal risk stratification. For each subject, we calculated the 10-year risk of major coronary events using the original and point-based Framingham models, and then compared differences in these risk estimates and whether these differences would place subjects into different ATP-III risk groups (<10%, 10-20%, or >20% risk). Using standard procedures, all analyses were adjusted for survey weights, clustering, and stratification to make our results nationally representative. Among 39 million eligible adults, the original Framingham model categorized 71% of subjects as having "moderate" (<10%) risk. Estimates of coronary risk by the original and point-based models often differed substantially. The point-based system classified 15% of adults (5.7 million) into different risk groups than the original model, with 10% (3.9 million) misclassified into higher risk groups and 5% (1.8 million) into lower risk groups, for a net impact of classifying 2.1 million adults into higher risk groups. These risk group misclassifications would impact guideline-recommended drug treatment strategies for 25-46% of affected subjects. Patterns of misclassification varied significantly by gender, age, and underlying CHD risk. Compared to the original Framingham model, the point-based version misclassifies millions of Americans into risk groups for which guidelines recommend different treatment strategies.
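
    The mechanism behind such misclassification can be illustrated with a toy comparison (the logistic coefficients, point assignments, and lookup below are invented placeholders, NOT the actual Framingham equations): rounding continuous predictors into integer points coarsens the risk estimate, which can push a patient across a treatment threshold.

    ```python
    # Toy continuous model vs. point-based approximation of the same risk,
    # classified into the <10% / 10-20% / >20% treatment groups.
    import math

    def continuous_risk(age, chol, sbp):
        # hypothetical logistic model for 10-year risk
        lp = -7.0 + 0.055 * age + 0.006 * chol + 0.010 * sbp
        return 1 / (1 + math.exp(-lp))

    def point_risk(age, chol, sbp):
        # hypothetical point system: rounded integer points per predictor,
        # then a coarse lookup from total points to a risk estimate
        points = round(age / 10) + round(chol / 40) + round(sbp / 20)
        return min(0.02 * max(points - 10, 0), 0.9)

    def group(risk):
        return "low" if risk < 0.10 else ("intermediate" if risk <= 0.20 else "high")

    patients = [(58, 205, 138), (45, 260, 120), (67, 230, 152)]
    mismatches = sum(
        group(continuous_risk(*p)) != group(point_risk(*p)) for p in patients
    )
    ```

    Even when the two scores are close on average, patients whose continuous risk sits near a 10% or 20% cut point are the ones most likely to land in a different treatment group under the rounded system.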

  6. [Generalized neonatal screening based on laboratory tests].

    Science.gov (United States)

    Ardaillou, Raymond; Le Gall, Jean-Yves

    2006-11-01

    Implementation of a generalized screening program for neonatal diseases must obey precise rules. The disease must be severe, recognizable at an early stage, amenable to an effective treatment, and detectable with an inexpensive and widely applicable test; it must also be a significant public health problem. Subjects with positive results must be offered immediate treatment or prevention. All screening programs must be regularly evaluated. In France, since 1978, a national screening program has been organized by a private association ("Association française pour le dépistage et la prévention des handicaps de l'enfant") and supervised by the "Caisse nationale d'assurance maladie" and the "Direction Générale de la Santé". Five diseases are now included in the screening program: phenylketonuria, hypothyroidism, congenital adrenal hyperplasia, cystic fibrosis and sickle cell disease (the latter only in at-risk newborns). Toxoplasmosis is a particular case because only the children of mothers who were not tested during pregnancy or who seroconverted are screened. Neonatal screening for phenylketonuria and hypothyroidism is unanimously recommended. Screening for congenital adrenal hyperplasia is approved in most countries. Sickle cell disease and cystic fibrosis are more complex cases because not all children who carry the mutations develop severe forms, there is no curative treatment, and parents may become anxious even though the phenotype is sometimes mild or even asymptomatic. Supporters of screening stress the benefits of early diagnosis (which extends the life expectancy of these children, particularly in the case of sickle cell disease), the fact that it opens up the possibility of prenatal screening of future pregnancies, and the utility of informing heterozygous carriers identified by familial screening. Neonatal screening for other diseases is under discussion. Indeed, technical advances such as tandem mass spectrometry make it possible to detect about 50

  7. Palaeoclimate significance of speleothems in crystalline rocks: a test case from the Late Glacial and early Holocene (Vinschgau, northern Italy)

    Science.gov (United States)

    Koltai, Gabriella; Cheng, Hai; Spötl, Christoph

    2018-03-01

    Partly coeval flowstones formed in fractured gneiss and schist were studied to test the palaeoclimate significance of this new type of speleothem archive on a decadal-to-millennial timescale. The samples encompass a few hundred to a few thousand years of the Late Glacial and the early Holocene. The speleothem fabric consists primarily of columnar fascicular optic calcite and acicular aragonite, both indicative of elevated Mg / Ca ratios in the groundwater. Stable isotopes suggest that aragonite is more prone than calcite to disequilibrium isotope fractionation driven by evaporation and prior calcite/aragonite precipitation. Changes in mineralogy are therefore attributed to these two internal fracture processes rather than to palaeoclimate. Flowstones formed in the same fracture show similar δ18O changes on centennial scales, which broadly correspond to regional lacustrine δ18O records, suggesting that such speleothems may provide an opportunity to investigate past climate conditions in non-karstic areas. The shortness of overlapping periods in flowstone growth and the complexity of in-aquifer processes, however, render the establishment of a robust stacked δ18O record challenging.

  8. Accelerated load testing of geosynthetic base reinforced pavement test sections.

    Science.gov (United States)

    2011-02-01

    The main objective of this research is to evaluate the benefits of geosynthetic stabilization and reinforcement of subgrade/base aggregate layers in flexible pavements built on weak subgrades and the effect of pre-rut pavement sections, prior to the ...

  9. Chinese Students' Perceptions of the Value of Test Preparation Courses for the TOEFL iBT: Merit, Worth, and Significance

    Science.gov (United States)

    Ma, Jia; Cheng, Liying

    2015-01-01

    Test preparation for high-stakes English language tests has received increasing research attention in the language assessment field; however, little is known about what aspects of test preparation students attend to and value. In this study, we considered the perspectives of 12 Chinese students who were enrolled in various academic programs in a…

  10. A Study on the Significance of the Colloidal Radiogold Disappearance Rate as a Simple Clinical Liver Function Test

    International Nuclear Information System (INIS)

    Hong, Chang Gi

    1969-01-01

    Liver function in diffuse parenchymal liver disease such as cirrhosis depends largely on the effective hepatic blood flow rather than on individual cell functions. Clinical methods of measuring the hepatic blood flow were recently developed by applying the colloid disappearance rate. In order to correlate the radiogold disappearance rate with conventional biochemical liver function tests, 21 normal subjects and 80 cases of cirrhosis of the liver were studied with both methods. The results are summarized as follows: 1) The validity of the external counting method for measuring the blood disappearance rate of colloidal radiogold was confirmed by in vitro counting of serial blood samples. 2) The blood disappearance rate of colloidal radiogold was essentially the same as the liver uptake rate of colloidal radiogold in normal and cirrhotic subjects with various degrees of functional disturbance, and there appeared to be no serious extrahepatic removal of the colloidal radiogold. 3) The disappearance rate of colloidal radiogold was not significantly changed by posture change, but was enhanced by ingestion of 500 ml of water. 4) The disappearance rate of colloidal radiogold was not influenced by a single dose of Telepaque, while BSP retention was increased after Telepaque. 5) The mean disappearance half time of colloidal radiogold in normal subjects was 2.49±0.391 (S.D.) minutes. The mean normal disappearance rate constant (K value) was 0.285±0.0428 (S.D.)/minute. 6) The colloidal radiogold disappearance half time was abnormally prolonged (over 3.2 min) in 87.7±3.68 (S.D.) % of cirrhotic subjects. 7) In patients with liver cirrhosis, the blood disappearance rate of colloidal radiogold correlated well with serum albumin and globulin levels and BSP retention, which were considered to reflect functions of hepatic parenchymal cells. There was, however, no correlation between colloidal disappearance rate and thymol turbidity test, serum glutamic pyruvic transaminase
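
    The half time and rate constant reported above are related through first-order kinetics, K = ln 2 / T½. A minimal sketch (not from the paper):

    ```python
    import math

    def rate_constant(half_time_min: float) -> float:
        """First-order disappearance rate constant K (per minute) from the half-time."""
        return math.log(2) / half_time_min

    def half_time(k_per_min: float) -> float:
        """Inverse relation: disappearance half-time (minutes) from K."""
        return math.log(2) / k_per_min
    ```

    Here `rate_constant(2.49)` gives about 0.278 per minute, the same order as the reported mean K of 0.285 (the two means were averaged over subjects independently, so they need not match exactly).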

  11. Properties of permutation-based gene tests and controlling type 1 error using a summary statistic based gene test.

    Science.gov (United States)

    Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph

    2013-11-07

    The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study of their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective, and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, neither of whose power varies as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the
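
    One common way to combine marker-level Z-statistics while accounting for LD, in the spirit of the summary-statistic test described above, is the quadratic form Q = z' R⁻¹ z, which is chi-square distributed with one degree of freedom per marker when the correlation matrix R is correctly specified. This generic sketch is an assumption for illustration, not the authors' exact procedure:

    ```python
    # Generic quadratic-form combination of marker Z-scores (illustrative only).

    def solve(a, b):
        """Solve A x = b by Gaussian elimination with partial pivoting."""
        n = len(b)
        m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix
        for col in range(n):
            pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
            m[col], m[pivot] = m[pivot], m[col]
            for r in range(col + 1, n):
                f = m[r][col] / m[col][col]
                for c in range(col, n + 1):
                    m[r][c] -= f * m[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
        return x

    def gene_statistic(z, corr):
        """Q = z' R^{-1} z for a region's marker Z-scores; compare to a
        chi-square distribution with len(z) degrees of freedom."""
        rinv_z = solve(corr, z)
        return sum(zi * wi for zi, wi in zip(z, rinv_z))
    ```

    For independent markers (R = identity) Q reduces to the sum of squared Z-scores; positive correlation between markers downweights their joint contribution.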

  12. Routine Laboratory Blood Tests May Diagnose Significant Fibrosis in Liver Transplant Recipients with Chronic Hepatitis C: A 10 Year Experience.

    Science.gov (United States)

    Sheen, Victoria; Nguyen, Heajung; Jimenez, Melissa; Agopian, Vatche; Vangala, Sitaram; Elashoff, David; Saab, Sammy

    2016-03-28

    The aims of our study were to determine whether routine blood tests, the aspartate aminotransferase (AST) to Platelet Ratio Index (APRI) and Fibrosis 4 (Fib-4) scores, were associated with advanced fibrosis and to create a novel model in liver transplant recipients with chronic hepatitis C virus (HCV). We performed a cross-sectional study of patients at The University of California at Los Angeles (UCLA) Medical Center who underwent liver transplantation for HCV. We used linear mixed effects models to analyze the association between fibrosis severity and individual biochemical markers, and mixed effects logistic regression to construct diagnostic models for advanced fibrosis (METAVIR F3-4). Cross-validation was used to estimate a receiver operating characteristic (ROC) curve for the prediction models and to estimate the area under the curve (AUC). The mean (± standard deviation [SD]) age of our cohort was 55 (±7.7) years, and almost three-quarters were male. The mean (±SD) time from transplant to liver biopsy was 19.9 (±17.1) months. The mean (±SD) APRI and Fib-4 scores were 3 (±12) and 7 (±14), respectively. Increased fibrosis was associated with lower platelet count and alanine aminotransferase (ALT) values and higher total bilirubin and Fib-4 scores. We developed a model that takes into account age, gender, platelet count, ALT, and total bilirubin; this model outperformed APRI and Fib-4 with an AUC of 0.68, identifying significant fibrosis more reliably than the APRI and Fib-4 scores. This noninvasive calculation may be used clinically to identify liver transplant recipients with HCV with significant liver damage.
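
    The two routine-lab indices compared above have simple closed forms (these are the standard published definitions, not anything specific to this study): APRI = 100 × (AST / upper limit of normal) / platelets (10⁹/L), and Fib-4 = age × AST / (platelets (10⁹/L) × √ALT).

    ```python
    import math

    def apri(ast, ast_uln, platelets_e9_per_l):
        """AST-to-Platelet Ratio Index: 100 * (AST / upper limit of normal) / platelets."""
        return 100 * (ast / ast_uln) / platelets_e9_per_l

    def fib4(age_years, ast, alt, platelets_e9_per_l):
        """Fibrosis-4 index: age * AST / (platelets * sqrt(ALT))."""
        return age_years * ast / (platelets_e9_per_l * math.sqrt(alt))
    ```

    For example, `fib4(55, 80, 64, 100)` evaluates to 5.5, and `apri(80, 40, 100)` to 2.0; illustrative inputs only.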

  13. The significant impact of education, poverty, and race on Internet-based research participant engagement.

    Science.gov (United States)

    Hartz, Sarah M; Quan, Tiffany; Ibiebele, Abiye; Fisher, Sherri L; Olfson, Emily; Salyer, Patricia; Bierut, Laura J

    2017-02-01

    Internet-based technologies are increasingly being used for research studies. However, it is not known whether Internet-based approaches will effectively engage participants from diverse racial and socioeconomic backgrounds. A total of 967 participants were recruited and offered genetic ancestry results. We evaluated viewing of Internet-based genetic ancestry results among participants who expressed high interest in obtaining them. Of the participants, 64% stated that they were very or extremely interested in their genetic ancestry results. Among interested participants, individuals with a high school diploma (n = 473) viewed their results 19% of the time, relative to 4% of the 145 participants without a diploma. Overall, engagement with Internet-based research was low despite high reported interest. This suggests that explicit strategies should be developed to increase diversity in Internet-based research. Genet Med 19(2), 240-243.

  14. Adult age differences in perceptually based, but not conceptually based implicit tests of memory.

    Science.gov (United States)

    Small, B J; Hultsch, D F; Masson, M E

    1995-05-01

    Implicit tests of memory assess the influence of recent experience without requiring awareness of remembering. Evidence concerning age differences on implicit tests of memory suggests small age differences in favor of younger adults. However, the majority of research examining this issue has relied upon perceptually based implicit tests. Recently, a second type of implicit test, one that relies upon conceptually based processes, has been identified. The pattern of age differences on this second type of implicit test is less clear. In the present study, we examined the pattern of age differences on one conceptually based (fact completion) and one perceptually based (stem completion) implicit test of memory, as well as two explicit tests of memory (fact and word recall). Tasks were administered to 403 adults from three age groups (19-34 years, 58-73 years, 74-89 years). Significant age differences in favor of the young were found on stem completion but not fact completion. Age differences were present for both word and fact recall. Correlational analyses examining the relationship of memory performance to other cognitive variables indicated that the implicit tests were supported by different components than the explicit tests, as well as being different from each other.

  15. Clinical significance of creative 3D-image fusion across multimodalities [PET + CT + MR] based on characteristic coregistration

    International Nuclear Information System (INIS)

    Peng, Matthew Jian-qiao; Ju Xiangyang; Khambay, Balvinder S.; Ayoub, Ashraf F.; Chen, Chin-Tu; Bai Bo

    2012-01-01

    Objective: To investigate a registration approach based on characteristic localization in 2-dimensional (2D) images to achieve 3-dimensional (3D) fusion of PET, CT and MR images one by one. Method: A cubic oriented scheme of "9-point and 3-plane" for co-registration design was verified to be geometrically practical. After acquiring DICOM data of PET/CT/MR (using the radiotracer 18F-FDG etc.), through 3D reconstruction and virtual dissection, internal human feature points were sorted and combined with preselected external feature points for the matching process. Following the procedure of feature extraction and image mapping, "picking points to form planes" and "picking planes for segmentation" were executed. Eventually, image fusion was implemented on the real-time workstation Mimics using the auto-fusion techniques known as "information exchange" and "signal overlay". Result: The 2D and 3D images fused across the modalities [CT + MR], [PET + MR], [PET + CT] and [PET + CT + MR] were tested on data of patients suffering from tumors. Complementary 2D/3D images simultaneously presenting metabolic activities and anatomic structures were created, with detection rates of 70%, 56%, 54% (or 98%) and 44%, respectively, with no statistically significant difference among them. Conclusion: Given that no fully integrated triple-modality [PET + CT + MR] hybrid detector currently exists, this sort of multi-modality fusion is doubtlessly an essential complement to the existing single-modality imaging functions.

  16. Rule-based Test Generation with Mind Maps

    Directory of Open Access Journals (Sweden)

    Dimitry Polivaev

    2012-02-01

    This paper introduces basic concepts of rule-based test generation with mind maps, and reports lessons learned from the industrial application of this technique in the domain of smart card testing by Giesecke & Devrient GmbH over the last years. It describes the formalization of test selection criteria used by our test generator, our test generation architecture and our test generation framework.

  17. Validity of selected cardiovascular field-based test among Malaysian ...

    African Journals Online (AJOL)

    Based on the emerging obesity problem among Malaysians, this research was formulated to validate published tests among healthy female adults. Selected tests, namely the 20-meter multi-stage shuttle run, the 2.4-km run test, the 1-mile walk test and the Harvard step test, were correlated with a laboratory test (Bruce protocol) to find the criterion validity ...

  18. Neutron Sources for Standard-Based Testing

    Energy Technology Data Exchange (ETDEWEB)

    Radev, Radoslav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McLean, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-11-10

    The DHS TC Standards and the consensus ANSI Standards use 252Cf as the neutron source for performance testing because its energy spectrum is similar to that of the 235U and 239Pu fission sources used in nuclear weapons. An emission rate of 20,000 ± 20% neutrons per second is used for testing of the radiological requirements in both the ANSI standards and the TCS. Accurate determination of the neutron emission rate of the test source is important for maintaining consistency and agreement between testing results obtained at different testing facilities. Several characteristics of the manufacture and decay of the source need to be understood and accounted for in order to make an accurate measurement of the performance of the neutron detection instrument. Additionally, the neutron response characteristics of the particular instrument need to be known and taken into account, as well as neutron scattering in the testing environment.
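
    One of the decay characteristics alluded to above: 252Cf has a half-life of about 2.645 years, so a source's emission rate must be decay-corrected to the test date to check it against the 20,000 ± 20% neutrons-per-second requirement. A minimal sketch:

    ```python
    CF252_HALF_LIFE_YEARS = 2.645  # accepted half-life of 252Cf

    def emission_rate(initial_rate_n_per_s, years_elapsed):
        """Neutron emission rate after exponential decay from the calibration date."""
        return initial_rate_n_per_s * 2 ** (-years_elapsed / CF252_HALF_LIFE_YEARS)
    ```

    One year after calibration at 20,000 n/s, the rate is about 15,400 n/s (roughly 77% of nominal), just below the ±20% tolerance band, so the elapsed time since calibration matters.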

  19. Students Perception on the Use of Computer Based Test

    Science.gov (United States)

    Nugroho, R. A.; Kusumawati, N. S.; Ambarwati, O. C.

    2018-02-01

    Teaching nowadays can use technology to disseminate science and knowledge. As part of teaching, the way of evaluating study progress and results has also benefited from this rapid progress in IT. The computer-based test (CBT) has been introduced to replace the more conventional paper-and-pencil test (PPT). CBT is considered more advantageous than PPT: it is more efficient and transparent, and can minimise fraud in cognitive evaluation. Current studies reflect an ongoing debate over CBT versus PPT usage. Most of the current research compares the two methods without exploring the students' perception of the test. This study fills that gap in the literature by providing students' perceptions of the two test methods. A survey approach was used to obtain the data. The sample was collected in two identical classes with a similar subject at a public university in Indonesia. The Mann-Whitney U test was used to analyse the data. The result indicates that there is a significant difference between the two groups of students regarding CBT usage. Students preferred to be tested with a method other than the one they were given. Further discussion and research implications are discussed in the paper.
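
    The Mann-Whitney U statistic used in the analysis above can be computed directly from ranks. A self-contained sketch (using midranks for ties; the significance lookup against the U distribution is omitted):

    ```python
    def mann_whitney_u(x, y):
        """Mann-Whitney U statistic for two independent samples (midranks for ties)."""
        combined = sorted((v, i) for i, v in enumerate(list(x) + list(y)))
        ranks = [0.0] * len(combined)
        i = 0
        while i < len(combined):
            j = i
            while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
                j += 1  # extend run of tied values
            midrank = (i + j) / 2 + 1
            for k in range(i, j + 1):
                ranks[combined[k][1]] = midrank
            i = j + 1
        r1 = sum(ranks[:len(x)])                 # rank sum of the first sample
        u1 = r1 - len(x) * (len(x) + 1) / 2
        return min(u1, len(x) * len(y) - u1)     # the smaller of U1, U2
    ```

    Fully separated samples give U = 0 (e.g. `mann_whitney_u([1, 2, 3], [4, 5, 6])`), while heavily overlapping samples give U near its maximum of n₁n₂/2.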

  20. FMIT test-end instrumentation development bases

    International Nuclear Information System (INIS)

    Fuller, J.L.

    1982-06-01

    FMIT test-end measurements proposed for deuteron beam control, target diagnostics, and irradiation sample dosimetry are listed. The test-end refers to the area inside the test cell, but the list includes measurements made both inside and outside the cell. Justification, categorization, and qualification of limits are presented for each measurement. Methods are purposefully de-emphasized in order to clarify the measurement needs, not the techniques. Some discussion of techniques currently under investigation is given in the last section of the report.

  1. Of Minima and Maxima: The Social Significance of Minimal Competency Testing and the Search for Educational Excellence.

    Science.gov (United States)

    Ericson, David P.

    1984-01-01

    Explores the many meanings of the minimal competency testing movement and the more recent mobilization for educational excellence in the schools. Argues that increasing the value of the diploma by setting performance standards on minimal competency tests and by elevating academic graduation standards may strongly conflict with policies encouraging…

  2. Significance of Iron(II,III) Hydroxycarbonate Green Rust in Arsenic Remediation Using Zerovalent Iron in Laboratory Column Tests

    Science.gov (United States)

    We examined the corrosion products of zerovalent iron used in three column tests for removing arsenic from water under dynamic flow conditions. Each column test lasted three- to four-months using columns consisting of a 10.3-cm depth of 50 : 50 (w : w, Peerless iron : sand) in t...

  3. Consensus-based evaluation of clinical significance and management of anticancer drug interactions

    NARCIS (Netherlands)

    Jansman, F.G.A.; Reyners, A.K.L.; van Roon, E.N.; Smorenburg, C.H.; Helgason, H.H.; le Comte, M.; Wensveen, B.M.; van den Tweel, A.M.A.; de Blois, M.; Kwee, W.; Kerremans, A.L.; Brouwers, J.R.B.J.

    Background: Anticancer drug interactions can affect the efficacy and toxicity of anticancer treatment and that of the interacting drugs. However, information on the significance, prevention, and management of these interactions is currently lacking. Objective: The purpose of this study was to assess

  4. GPS Device Testing Based on User Performance Metrics

    Science.gov (United States)

    2015-10-02

    1. Rationale for a Test Program Based on User Performance Metrics ; 2. Roberson and Associates Test Program ; 3. Status of, and Revisions to, the Roberson and Associates Test Program ; 4. Comparison of Roberson and DOT/Volpe Programs

  5. Efficient p-value evaluation for resampling-based tests

    KAUST Repository

    Yu, K.; Liang, F.; Ciampa, J.; Chatterjee, N.

    2011-01-01

    The resampling-based test, which often relies on permutation or bootstrap procedures, has been widely used for statistical hypothesis testing when the asymptotic distribution of the test statistic is unavailable or unreliable. It requires repeated
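
    The repeated-resampling procedure the abstract refers to is typically a permutation test; a minimal generic sketch for a two-sample difference in means (not the authors' efficient evaluation method), using the standard estimator p = (b + 1) / (B + 1) so the p-value is never exactly zero:

    ```python
    import random

    def permutation_p_value(x, y, num_resamples=10000, seed=0):
        """Two-sample permutation test for a difference in means."""
        rng = random.Random(seed)
        observed = abs(sum(x) / len(x) - sum(y) / len(y))
        pooled = list(x) + list(y)
        hits = 0  # resamples with a statistic at least as extreme as observed
        for _ in range(num_resamples):
            rng.shuffle(pooled)
            xs, ys = pooled[:len(x)], pooled[len(x):]
            if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= observed:
                hits += 1
        return (hits + 1) / (num_resamples + 1)
    ```

    Identical samples yield p = 1, while well-separated samples yield a small p; the cost of many resamples for small p-values is exactly what efficient evaluation schemes like the one above aim to reduce.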

  6. Reducing test-data volume and test-power simultaneously in LFSR reseeding-based compression environment

    Energy Technology Data Exchange (ETDEWEB)

    Wang Weizheng; Kuang Jishun; You Zhiqiang; Liu Peng, E-mail: jshkuang@163.com [College of Information Science and Engineering, Hunan University, Changsha 410082 (China)

    2011-07-15

    This paper presents a new test scheme based on scan block encoding in a linear feedback shift register (LFSR) reseeding-based compression environment. Meanwhile, our paper also introduces a novel algorithm of scan-block clustering. The main contribution of this paper is a flexible test-application framework that achieves significant reductions in switching activity during scan shift and the number of specified bits that need to be generated via LFSR reseeding. Thus, it can significantly reduce the test power and test data volume. Experimental results using Mintest test set on the larger ISCAS'89 benchmarks show that the proposed method reduces the switching activity significantly by 72%-94% and provides a best possible test compression of 74%-94% with little hardware overhead. (semiconductor integrated circuits)
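
    LFSR reseeding works because a short seed, loaded into the LFSR state, expands deterministically into a long pseudo-random pattern. A generic Fibonacci-LFSR sketch (illustrative only; the paper's encoding and clustering scheme is not reproduced here):

    ```python
    def lfsr_bits(seed, taps, width, n):
        """Fibonacci LFSR: emit n output bits from a nonzero seed.

        `taps` are 0-based state positions XORed to form the feedback bit.
        With maximal-length taps the nonzero states cycle with period
        2**width - 1, so one short seed expands into a long pseudo-random
        pattern -- the property LFSR reseeding-based compression exploits.
        """
        state = seed & ((1 << width) - 1)
        if state == 0:
            raise ValueError("seed must be nonzero")
        out = []
        for _ in range(n):
            out.append(state & 1)  # output the low bit
            feedback = 0
            for t in taps:
                feedback ^= (state >> t) & 1
            state = (state >> 1) | (feedback << (width - 1))  # shift right, feed MSB
        return out
    ```

    With width 4 and taps [0, 1] (polynomial x⁴ + x + 1) the sequence is maximal length: the output repeats with period 2⁴ - 1 = 15.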

  7. Community-based enterprises: The significance of partnerships and institutional linkages

    NARCIS (Netherlands)

    Seixas, Cristiana Simão; Berkes, Fikret

    2010-01-01

    Community-based institutions used to be driven by local needs, but in recent decades, some of them have been responding to national and global economic opportunities. These cases are of interest because they make it possible to investigate how local institutions can evolve in response to new

  8. Multi-objective Search-based Mobile Testing

    OpenAIRE

    Mao, K.

    2017-01-01

    Despite the tremendous popularity of mobile applications, mobile testing still relies heavily on manual testing. This thesis presents mobile test automation approaches based on multi-objective search. We introduce three approaches: Sapienz (for native Android app testing), Octopuz (for hybrid/web JavaScript app testing) and Polariz (for using crowdsourcing to support search-based mobile testing). These three approaches represent the primary scientific and technical contributions of the thesis...

  9. Computer-Aided Test Flow in Core-Based Design

    NARCIS (Netherlands)

    Zivkovic, V.; Tangelder, R.J.W.T.; Kerkhoff, Hans G.

    2000-01-01

    This paper copes with the efficient test-pattern generation in a core-based design. A consistent Computer-Aided Test (CAT) flow is proposed based on the required core-test strategy. It generates a test-pattern set for the embedded cores with high fault coverage and low DfT area overhead. The CAT

  10. Oscillation-based test in mixed-signal circuits

    CERN Document Server

    Sánchez, Gloria Huertas; Rueda, Adoración Rueda

    2007-01-01

    This book presents the development and experimental validation of the structural test strategy called Oscillation-Based Test - OBT in short. The results presented here assert, not only from a theoretical point of view, but also based on a wide experimental support, that OBT is an efficient defect-oriented test solution, complementing the existing functional test techniques for mixed-signal circuits.

  11. Website-based PNG image steganography using the modified Vigenere Cipher, least significant bit, and dictionary based compression methods

    Science.gov (United States)

    Rojali, Salman, Afan Galih; George

    2017-08-01

    Along with the development of information technology to meet people's needs, various adverse and hard-to-avoid actions are emerging. One such action is data theft. This study therefore discusses cryptography and steganography, which aim to overcome these problems, using the Modified Vigenere Cipher, Least Significant Bit and Dictionary Based Compression methods. To determine the performance of the study, the Peak Signal to Noise Ratio (PSNR) method is used as an objective measure and the Mean Opinion Score (MOS) method as a subjective measure; the performance is also compared to other methods such as Spread Spectrum and Pixel Value Differencing. After this comparison, it can be concluded that this study provides better performance than the other methods (Spread Spectrum and Pixel Value Differencing), with MSE values in the range 0.0191622-0.05275 and PSNR in the range 60.909 to 65.306 dB for a hidden file size of 18 kb, and with MOS values in the range 4.214 to 4.722, i.e. image quality approaching "very good".
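
    The PSNR figures quoted above follow from the usual definitions over 8-bit pixels; a compact sketch of both metrics (these are the generic definitions, not the paper's code):

    ```python
    import math

    def mse(original, modified):
        """Mean squared error between two equal-length 8-bit pixel sequences."""
        return sum((a - b) ** 2 for a, b in zip(original, modified)) / len(original)

    def psnr(original, modified, max_value=255):
        """Peak signal-to-noise ratio in dB; higher means less visible distortion."""
        err = mse(original, modified)
        if err == 0:
            return float("inf")
        return 10 * math.log10(max_value ** 2 / err)
    ```

    Flipping only least-significant bits changes each pixel by at most 1, so MSE ≤ 1 and PSNR stays above 10·log10(255²) ≈ 48.13 dB; the 60.9-65.3 dB range reported above is consistent with only a fraction of pixels changing.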

  12. Efficacy of a Word- and Text-Based Intervention for Students With Significant Reading Difficulties.

    Science.gov (United States)

    Vaughn, Sharon; Roberts, Garrett J; Miciak, Jeremy; Taylor, Pat; Fletcher, Jack M

    2018-05-01

    We examine the efficacy of an intervention to improve word reading and reading comprehension in fourth- and fifth-grade students with significant reading problems. Using a randomized control trial design, we compare the fourth- and fifth-grade reading outcomes of students with severe reading difficulties who were provided a researcher-developed treatment with reading outcomes of students in a business-as-usual (BAU) comparison condition. A total of 280 fourth- and fifth-grade students were randomly assigned within school in a 1:1 ratio to either the BAU comparison condition (n = 139) or the treatment condition (n = 141). Treatment students were provided small-group tutoring for 30 to 45 minutes for an average of 68 lessons (mean hours of instruction = 44.4, SD = 11.2). Treatment students performed statistically significantly higher than BAU students on a word reading measure (effect size [ES] = 0.58) and a measure of reading fluency (ES = 0.46). Though not statistically significant, effect sizes for students in the treatment condition were consistently higher than BAU students for decoding measures (ES = 0.06, 0.08), and mixed for comprehension (ES = -0.02, 0.14).

  13. A simple bedside blood test (Fibrofast; FIB-5) is superior to FIB-4 index for the differentiation between non-significant and significant fibrosis in patients with chronic hepatitis C.

    Science.gov (United States)

    Shiha, G; Seif, S; Eldesoky, A; Elbasiony, M; Soliman, R; Metwally, A; Zalata, K; Mikhail, N

    2017-05-01

    A simple non-invasive score (Fibrofast, FIB-5) was developed using five routine laboratory tests (ALT, AST, alkaline phosphatase, albumin and platelet count) for the detection of significant hepatic fibrosis in patients with chronic hepatitis C. The FIB-4 index is a non-invasive test for the assessment of liver fibrosis; a score of ≤1.45 enables the correct differentiation of patients with non-significant fibrosis (F0-1) from those with significant fibrosis (F2-4), and could avoid liver biopsy. The aim of this study was to compare the performance characteristics of FIB-5 and FIB-4 in differentiating between non-significant and significant fibrosis. A cross-sectional study included 604 chronic HCV patients. All liver biopsies were scored using the METAVIR system. Both FIB-5 and FIB-4 scores were measured and the performance characteristics were calculated using the ROC curve. The performance characteristics of FIB-5 at ≥7.5 and FIB-4 at ≤1.45 for the differentiation between non-significant and significant fibrosis were: specificity 94.4% and PPV 85.7%, versus specificity 54.9% and PPV 55.7%, respectively. The FIB-5 score at the new cutoff is superior to the FIB-4 index for the differentiation between non-significant and significant fibrosis.

  14. Significant Performance Enhancement in Asymmetric Supercapacitors based on Metal Oxides, Carbon nanotubes and Neutral Aqueous Electrolyte

    Science.gov (United States)

    Singh, Arvinder; Chandra, Amreesh

    2015-10-01

    Among the materials being investigated for supercapacitor electrodes, carbon-based materials are the most studied. However, pure carbon materials suffer from inherent physical processes which limit the maximum specific energy and power that can be achieved in an energy storage device. Therefore, the use of carbon-based composites with suitable nanomaterials is gaining prominence. The synergistic effect between pseudocapacitive nanomaterials (high specific energy) and carbon (high specific power) is expected to deliver the desired improvements. We report the fabrication of a high-capacitance asymmetric supercapacitor based on electrodes of composites of SnO2 and V2O5 with multiwall carbon nanotubes and a neutral 0.5 M Li2SO4 aqueous electrolyte. The advantages of the fabricated asymmetric supercapacitors are compared with results published in the literature. The widened operating voltage window is due to the higher over-potential of electrolyte decomposition and the large difference in the work functions of the metal oxides used. The charge-balanced device delivers a specific capacitance of ~198 F g-1 with a corresponding specific energy of ~89 Wh kg-1 at 1 A g-1. The proposed composite systems show great potential for fabricating high-performance supercapacitors.
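
    The reported specific energy follows from the standard relation E = ½CV²; converting from J/g to Wh/kg amounts to dividing by 3.6. The 1.8 V window below is an assumed illustrative value, inferred as consistent with the reported ~198 F g⁻¹ and ~89 Wh kg⁻¹.

    ```python
    def specific_energy_wh_per_kg(capacitance_f_per_g, voltage_window_v):
        """E = C*V^2/2 in J/g, converted to Wh/kg.

        J/g * 1000 = J/kg, then / 3600 = Wh/kg, i.e. divide C*V^2/2 by 3.6.
        """
        return capacitance_f_per_g * voltage_window_v ** 2 / 2 / 3.6
    ```

    With 198 F/g and a 1.8 V window this gives about 89.1 Wh/kg, matching the figures in the abstract.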

  15. Metabonomics-based analysis of Brachyspira pilosicoli's response to tiamulin reveals metabolic activity despite significant growth inhibition.

    Science.gov (United States)

    Le Roy, Caroline Ivanne; Passey, Jade Louise; Woodward, Martin John; La Ragione, Roberto Marcello; Claus, Sandrine Paule

    2017-06-01

    The pathogenic anaerobes Brachyspira spp. are responsible for an increasing number of intestinal spirochaetosis (IS) cases in livestock, against which few approved treatments are available. Tiamulin is used to treat swine dysentery caused by Brachyspira spp. and has recently been used to manage avian intestinal spirochaetosis (AIS). The therapeutic dose used in chickens requires further evaluation since cases of bacterial resistance to tiamulin have been reported. In this study, we evaluated the impact of tiamulin at varying concentrations on the metabolism of B. pilosicoli using a 1H-NMR-based metabonomics approach, allowing the capture of the overall bacterial metabolic response to antibiotic treatment. Based on growth curve studies, tiamulin impacted bacterial growth even at very low concentration (0.008 μg/mL), although metabolic activity was barely affected 72 h post exposure to antibiotic treatment. Only the highest dose of tiamulin tested (0.250 μg/mL) caused a major metabolic shift. Results showed that below this concentration, bacteria could maintain a normal metabolic trajectory despite significant growth inhibition by the antibiotic, which may contribute to disease reemergence after antibiotic treatment. Indeed, we confirmed that B. pilosicoli remained viable even after exposure to the highest antibiotic dose. This paper stresses the need for renewed evaluation of bacterial viability after exposure to bacteriostatic agents such as tiamulin, to guarantee treatment efficacy and decrease the development of antibiotic resistance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Reliability Estimation Based Upon Test Plan Results

    National Research Council Canada - National Science Library

    Read, Robert

    1997-01-01

    The report contains a brief summary of aspects of the Maximus reliability point and interval estimation technique as it has been applied to the reliability of a device whose surveillance tests contain...

  17. Exploration of Least Significant Bit Based Watermarking and Its Robustness against Salt and Pepper Noise

    OpenAIRE

    Kamaldeep Joshi; Rajkumar Yadav; Sachin Allwadhi

    2016-01-01

    Image steganography is the best aspect of information hiding. In this, the information is hidden within an image and the image travels openly on the Internet. The Least Significant Bit (LSB) is one of the most popular methods of image steganography. In this method, the information bit is hidden at the LSB of the image pixel. In one bit LSB steganography method, the total numbers of the pixels and the total number of message bits are equal to each other. In this paper, the LSB method of image ...
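
The one-bit LSB method described above can be sketched in a few lines; this is a minimal illustration on a list of 8-bit grayscale pixel values, not code from the paper:

```python
# One-bit LSB steganography: hide each message bit in the least
# significant bit of one pixel, so each pixel changes by at most 1.

def embed_lsb(pixels, bits):
    """Embed message bits into the LSBs of the first len(bits) pixels."""
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # clear the LSB, then set it
    return stego

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits message bits from pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [142, 97, 200, 51, 33, 180]
message = [1, 0, 1, 1]
stego = embed_lsb(cover, message)
print(extract_lsb(stego, len(message)))  # recovers the hidden bits
```

Because the pixel value changes by at most 1, the visual distortion of the cover image is negligible, which is why LSB embedding is popular despite its fragility to noise such as salt-and-pepper.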

  18. Significance of non-specific complaints in asymptomatic cerebral infarction. Approach based on the cerebral circulation

    Energy Technology Data Exchange (ETDEWEB)

    Sakayori, Osamu; Kitamura, Shin; Nagazumi, Atsushi; Terashi, Akirou [Nippon Medical School, Tokyo (Japan)

    1997-10-01

    Seventy-three cases of asymptomatic cerebral infarction detected by MR scanning and 80 past stroke patients were evaluated. Regional cerebral blood flow (CBF) was measured using SPECT with the iodine-123-IMP autoradiography (ARG) method. Twenty-two patients with non-specific complaints (dizziness, numbness of the extremities, headache, etc.) and without cerebrovascular risk factors were also examined as controls. Fifty-two percent of the asymptomatic infarction cases had non-specific complaints. Regional CBF in the cases with non-specific complaints was significantly lower than in the controls in all cerebral regions. There was no difference in CBF values between the asymptomatic infarction cases with non-specific complaints and the past stroke patients. Among the asymptomatic infarction patients, cases with both non-specific complaints and hypertension displayed significantly lower CBF values, especially in the frontal and temporal cortical regions, than cases without non-specific complaints or hypertension. These findings suggest that the patient's complaints should be taken into consideration when determining the clinical treatment of asymptomatic infarction. (author)

  19. Significantly reduced c-axis thermal diffusivity of graphene-based papers

    Science.gov (United States)

    Han, Meng; Xie, Yangsu; Liu, Jing; Zhang, Jingchao; Wang, Xinwei

    2018-06-01

    Owing to their very high thermal conductivity as well as their large surface-to-volume ratio, graphene-based films/papers have been proposed as promising candidates for lightweight thermal interface materials and lateral heat spreaders. In this work, we study the cross-plane (c-axis) thermal conductivity (k_c) and diffusivity (α_c) of two typical graphene-based papers, partially reduced graphene paper (PRGP) and graphene oxide paper (GOP), and compare their thermal properties with those of highly reduced graphene paper and graphite. The determined α_c of PRGP varies from (1.02 ± 0.09) × 10^-7 m^2 s^-1 at 295 K to (2.31 ± 0.18) × 10^-7 m^2 s^-1 at 12 K. This low α_c is mainly attributed to strong phonon scattering at grain boundaries and defect centers, due to the small grain sizes and high defect levels. For GOP, α_c varies from (1.52 ± 0.05) × 10^-7 m^2 s^-1 at 295 K to (2.28 ± 0.08) × 10^-7 m^2 s^-1 at 12.5 K. The cross-plane thermal transport of GOP is attributed to the high density of functional groups between carbon layers, which provide weak thermal transport tunnels across the layers in the absence of direct energy coupling among the layers. This work sheds light on understanding and optimizing the nanostructure of graphene-based paper-like materials for desired thermal performance.

  20. A Limited Structural Modification Results in a Significantly More Efficacious Diazachrysene-Based Filovirus Inhibitor

    Directory of Open Access Journals (Sweden)

    Rekha G. Panchal

    2012-08-01

    Ebola (EBOV) and Marburg (MARV) filoviruses are highly infectious pathogens causing deadly hemorrhagic fever in humans and non-human primates. Promising vaccine candidates providing immunity against filoviruses have been reported. However, the sporadic nature and swift progression of filovirus disease underline the need for the development of small-molecule therapeutics providing immediate antiviral effects. Herein we describe a brief structural exploration of two previously reported diazachrysene (DAAC)-based EBOV inhibitors. Specifically, three analogs were prepared to examine how slight substituent modifications would affect inhibitory efficacy and inhibitor-mediated toxicity during not only EBOV but also MARV cellular infection. Of the three analogs, one was highly efficacious, providing IC50 values of 0.696 ± 0.13 µM and 2.76 ± 0.21 µM against EBOV and MARV infection, respectively, with little or no associated cellular toxicity. Overall, the structure-activity and structure-toxicity results from this study provide a framework for the future development of DAAC-based filovirus inhibitors that are both active and non-toxic in vivo.

  1. Community-based enterprises: The significance of partnerships and institutional linkages

    Directory of Open Access Journals (Sweden)

    Cristiana Simão Seixas

    2009-10-01

    Community-based institutions used to be driven by local needs, but in recent decades some of them have been responding to national and global economic opportunities. These cases are of interest because they make it possible to investigate how local institutions can evolve in response to new challenges. A promising set of cases comes from the UNDP Equator Initiative, a program that holds biennial searches to find and reward entrepreneurship cases that seek to reduce poverty and conserve biodiversity at the same time. What can we learn from these local entrepreneurship cases that appear to be operating at the global level? Here we focus on partnerships and on horizontal and vertical linkages in a sample of ten Equator Initiative projects. We find that successful projects tend to interact with a large array of support groups, typically 10 to 15 partners. Based on information from on-site research, these partners include local and national NGOs; local, regional and (less commonly) national governments; international donor agencies and other organizations; and universities and research centers. These partners provide a range of services and support functions, including raising start-up funds, institution building, business networking and marketing, innovation and knowledge transfer, and technical training. These findings indicate that a diversity of partners is needed to satisfy a diversity of needs, and they highlight the importance of networks and support groups in the evolution of commons institutions.

  2. Clinical significance of low result of 1-h 50-g glucose-challenge test in pregnant women.

    Science.gov (United States)

    Oawada, Nozomi; Aoki, Shigeru; Sakamaki, Kentaro; Obata, Soichiro; Seki, Kazuo; Hirahara, Fumiki

    2018-01-31

    The objective of this study was to examine the effect of a low glucose value on the 1-h 50-g glucose challenge test (GCT) on neonatal body weight in low-risk Asian singleton pregnancies. We retrospectively analyzed women who delivered a singleton neonate at term at a tertiary center and underwent the GCT at 24-28 weeks of gestation between June 2001 and June 2015. The low GCT group was defined by a low glucose value on the test; outcomes included large-for-gestational-age (LGA) and small-for-gestational-age (SGA) births, low birth weight, and macrosomia. The χ² test, Fisher's exact test, and Student's t test were used. There were 313 women in the low GCT group and 4611 controls. The low GCT group was younger, with lower prepregnancy body weight, taller stature, and lower prepregnancy body mass index (BMI). After adjusting for these variables, the low GCT group had a lower rate of LGA and a higher rate of SGA. Neonatal body weight was more influenced by maternal physique than by a low GCT result (standardized coefficient (β): GCT 0.071, height 0.188, prepregnancy BMI 0.143). Neonatal body weight was only slightly influenced by a low GCT result, but markedly influenced by maternal physique, such as height and prepregnancy BMI.

  3. A new method to detect solar-like oscillations at very low S/N using statistical significance testing

    DEFF Research Database (Denmark)

    Lund, Mikkel N.; Chaplin, William J.; Kjeldsen, Hans

    2012-01-01

    hence a candidate detection). We apply the method to solar photometry data, whose quality was systematically degraded to test the performance of the MWPS at low signal-to-noise ratios. We also compare the performance of the MWPS against the frequently applied power-spectrum-of-power-spectrum (PSx...

  4. Lipidomics in translational research and the clinical significance of lipid-based biomarkers.

    Science.gov (United States)

    Stephenson, Daniel J; Hoeferlin, L Alexis; Chalfant, Charles E

    2017-11-01

    Lipidomics is a rapidly developing field of study that focuses on the identification and quantitation of the various lipid species in the lipidome. Lipidomics has now emerged at the forefront of scientific research due to the importance of lipids in metabolism, cancer, and disease. Using both targeted and untargeted mass spectrometry as tools for analysis, the field has advanced rapidly in the last decade. The ability to assess these small molecules in vivo has led to a better understanding of several lipid-driven mechanisms and to the identification of lipid-based biomarkers in neurodegenerative disease, cancer, sepsis, wound healing, and pre-eclampsia. Biomarker identification and mechanistic understanding of specific lipid pathways linked to a disease's pathology can form the foundation for the development of novel therapeutics in hopes of curing human disease. Published by Elsevier Inc.

  5. Likelihood based testing for no fractional cointegration

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    ... The standard cointegration analysis only considers the assumption that deviations from equilibrium are integrated of order zero, which is very restrictive in many cases and may imply an important loss of power in the fractional case. We consider alternative hypotheses with equilibrium deviations that can be mean reverting with order of integration possibly greater than zero. Moreover, the degree of fractional cointegration is not assumed to be known, and the asymptotic null distribution of both tests is found when considering an interval of possible values. The power of the proposed tests under...

  6. Exercise testing in patients with variant angina: results, correlation with clinical and angiographic features and prognostic significance

    International Nuclear Information System (INIS)

    Waters, D.D.; Szlachcic, J.; Bourassa, M.G.; Scholl, J.-M.; Theroux, P.

    1982-01-01

    Eighty-two patients with variant angina underwent a treadmill exercise test using 14 ECG leads, and 67 also underwent exercise thallium-201 scans. The test induced ST elevation in 25 patients (30%), ST depression in 21 (26%) and no ST-segment abnormality in 36 (44%). ST elevation during exercise occurred in the same ECG leads as during spontaneous attacks at rest, and was always associated with a large perfusion defect on the exercise thallium scan. In contrast, exercise-induced ST depression often did not occur in the leads that exhibited ST elevation during episodes at rest. The ST-segment response to exercise did not accurately predict coronary anatomy: coronary stenoses greater than or equal to 70% were present in 14 of 25 patients (56%) with ST elevation, in 13 of 21 (62%) with ST depression and in 14 of 36 (39%) with no ST-segment abnormality (NS). However, the degree of disease activity did correlate with the result of the exercise test: ST elevation occurred during exercise in 11 of 14 patients who had an average of more than two spontaneous attacks per day, in 12 of 24 who had between two attacks per day and two per week, and in only two of 31 who had fewer than two attacks per week (p<0.005). ST elevation during exercise was reproducible in five of five patients retested during an active phase of their disease, but not in three of three patients who had been angina-free for at least 1 month before the repeat test. We conclude that in variant angina patients, the results of an exercise test correlate well with the degree of disease activity but not with coronary anatomy, and do not define a high-risk subgroup.

  7. Radiobiological significance of radioactive contamination - summary assessment based on great number of measurements

    International Nuclear Information System (INIS)

    Angelov, V.; Bonchev, Ts.; Mavrodiev, V.; Kyrdzhilov, N.

    1995-01-01

    In order to facilitate quantitative and qualitative characterisation of radioactive contamination, the authors introduce a relative estimate of radionuclide activity by taking as reference the most abundant radionuclide in the case of the Kozloduy NPP, Co-60. The ratio η_i of the mean annual permissible concentration in air for each radionuclide (RPC-92) to that of Co-60 is calculated. It is found that η_i has the same or close values for groups of radionuclides, e.g. η_i = 2×10^-4 for 238Pu, 239Pu, 240Pu, 241Am, 244Cm; η_i = 5 for 89Sr, 91Y, 93Nb, 134Cs, 137Cs; η_i = 50 for 55Fe, 63Ni, 95Zr, 95Nb, 140Ba, 140La. This is then compared to the experimentally measured values of the same quantity, η_iexp, derived from surface contamination data. The ratio η_iexp/η_i is plotted against log η_i. The resulting nomograms give a graphic representation of the radiobiological significance of the various radionuclide groups. Data from different locations at the Kozloduy NPP are presented. It is found that alpha emitter contamination is highest in the Unit 1 (WWER-440) control rooms after repair. Unit 5 (WWER-1000) has lower alpha contamination than the WWER-440 units. 1 ref., 5 figs., 1 tab
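
The index described above is a simple normalization; the sketch below computes η_i for a few nuclides, using placeholder permissible-concentration values chosen only to mirror the groupings reported in the abstract, not the actual RPC-92 data:

```python
# Relative radiobiological significance index: eta_i is the ratio of a
# nuclide's permissible air concentration to that of the reference, Co-60.
# The rpc values here are illustrative placeholders, NOT RPC-92 values.

rpc = {
    "Co-60":  1.0,     # reference nuclide
    "Pu-239": 2e-4,    # alpha emitters have much lower permissible levels
    "Cs-137": 5.0,
    "Fe-55":  50.0,
}

eta = {nuc: c / rpc["Co-60"] for nuc, c in rpc.items()}

# A small eta means the nuclide is radiobiologically more significant per
# unit activity; sorting makes the grouping visible.
for nuc, e in sorted(eta.items(), key=lambda kv: kv[1]):
    print(f"{nuc}: eta = {e:g}")
```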

  8. VALORA: data base system for storage significant information used in the behavior modelling in the biosphere

    International Nuclear Information System (INIS)

    Valdes R, M.; Aguero P, A.; Perez S, D.; Cancio P, D.

    2006-01-01

    Nuclear and radioactive facilities can release effluents containing radionuclides into the environment, where they disperse and/or accumulate in the atmosphere, on the terrestrial surface and in surface waters. Radiological impact assessments require both qualitative and quantitative analyses. In many cases the real values of the parameters used in the modelling are not available and cannot be measured, so the evaluation requires an extensive search of the literature for possible values of each parameter under conditions similar to those of the case under study; this can be laborious work. This paper describes the characteristics of the VALORA database system, developed to organize and automate significant information appearing in different sources (scientific and technical literature) on the parameters used in modelling the behaviour of pollutants in the environment, and on the values assigned to these parameters in evaluations of potential radiological impact. VALORA allows the consultation and selection of parametric data characteristic of the different situations and processes required by the implemented calculation model. The VALORA software is one component of a set of computer tools whose objective is to support the solution of dispersion and pollutant transfer models. (Author)

  9. Safety significance of component ageing, exemplary for MOV, based on French and German operating experience

    Energy Technology Data Exchange (ETDEWEB)

    Morlent, O. [CEA Fontenay-aux-Roses, 92 (France). Inst. de Protection et de Surete Nucleaire; Michel, F. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Garching (Germany)

    2001-07-01

    An outline is given of how IPSN and GRS assess the effects of physical ageing on the safety of French and German nuclear power plants (NPPs) on the basis of the available knowledge, and of how the investigations are carried out. The presentation focuses, by way of example, on a preliminary study illustrating approaches for evaluating the ageing behaviour of active components, namely motor-operated valves (MOV). The results so far seem to demonstrate that the developed methodological approaches are suitable for obtaining qualitative evidence on the ageing behaviour of technical facilities such as MOV. The evaluation of operating experience with French 900 MWe plants seems to reveal, for the MOV of one system, a trend similar to some international findings of ageing-related events increasing with operating time; this trend remains to be confirmed. For the German NPPs, so far there appears to be no significant increase of ageing-related MOV events as the plants get older. Future work on ageing scheduled at IPSN and GRS includes further cooperation on this issue; a deeper analysis is necessary to explain the reasons for these apparent differences before any conclusion can be drawn. (authors)

  10. Control range: a controllability-based index for node significance in directed networks

    International Nuclear Information System (INIS)

    Wang, Bingbo; Gao, Lin; Gao, Yong

    2012-01-01

    While a large number of methods for module detection have been developed for undirected networks, it is difficult to adapt them to directed networks due to the lack of consensus criteria for measuring node significance in a directed network. In this paper, we propose a novel structural index, the control range, motivated by recent studies on the structural controllability of large-scale directed networks. The control range of a node quantifies the size of the subnetwork that the node can effectively control. A related index, the control range similarity, is also introduced to measure the structural similarity between two nodes. When applying the control range index to several real-world and synthetic directed networks, we observe that the control range of a node is mainly influenced by the network's degree distribution and that nodes with a low degree may have a high control range. We use the control range similarity to detect and analyze functional modules in glossary networks and the enzyme-centric network of Homo sapiens. Our results, compared with other approaches to module detection such as modularity optimization, dynamic algorithms and the clique percolation method, indicate that the proposed indices are effective and practical for depicting the structural and modular characteristics of sparse directed networks.

  11. Safety significance of component ageing, exemplary for MOV, based on French and German operating experience

    International Nuclear Information System (INIS)

    Morlent, O.

    2001-01-01

    An outline is given of how IPSN and GRS assess the effects of physical ageing on the safety of French and German nuclear power plants (NPPs) on the basis of the available knowledge, and of how the investigations are carried out. The presentation focuses, by way of example, on a preliminary study illustrating approaches for evaluating the ageing behaviour of active components, namely motor-operated valves (MOV). The results so far seem to demonstrate that the developed methodological approaches are suitable for obtaining qualitative evidence on the ageing behaviour of technical facilities such as MOV. The evaluation of operating experience with French 900 MWe plants seems to reveal, for the MOV of one system, a trend similar to some international findings of ageing-related events increasing with operating time; this trend remains to be confirmed. For the German NPPs, so far there appears to be no significant increase of ageing-related MOV events as the plants get older. Future work on ageing scheduled at IPSN and GRS includes further cooperation on this issue; a deeper analysis is necessary to explain the reasons for these apparent differences before any conclusion can be drawn. (authors)

  12. Significance of heme-based respiration in meat spoilage caused by Leuconostoc gasicomitatum.

    Science.gov (United States)

    Jääskeläinen, Elina; Johansson, Per; Kostiainen, Olli; Nieminen, Timo; Schmidt, Georg; Somervuo, Panu; Mohsina, Marzia; Vanninen, Paula; Auvinen, Petri; Björkroth, Johanna

    2013-02-01

    Leuconostoc gasicomitatum is a psychrotrophic lactic acid bacterium (LAB) which causes spoilage in cold-stored, modified-atmosphere-packaged (MAP) meat products. In addition to its fermentative metabolism, L. gasicomitatum is able to respire when exogenous heme and oxygen are available. In this study, we investigated the effects of respiration on growth rate, biomass, gene expression, and volatile organic compound (VOC) production in laboratory media and in pork loin. The meat samples were evaluated by a sensory panel every second or third day for 29 days. We observed that functional respiration increased the growth (rate and yield) of L. gasicomitatum in laboratory media with added heme and in situ in meat with endogenous heme. Respiration increased enormously (up to 2,600-fold) the accumulation of acetoin and diacetyl, which are buttery off-odor compounds in meat. Our transcriptome analyses showed that gene expression patterns were quite similar irrespective of whether respiration was turned off by excluding heme from the medium or by mutating the cydB gene, which is essential in the respiratory chain. The respiration-based growth of L. gasicomitatum in meat was confirmed in terms of population development and the subsequent development of sensory spoilage characteristics. Respiration is thus a key factor explaining why L. gasicomitatum is so well adapted to high-oxygen-packed meat.

  13. Radiobiological significance of radioactive contamination - summary assessment based on great number of measurements

    Energy Technology Data Exchange (ETDEWEB)

    Angelov, V [Civil Defence Administration, Sofia (Bulgaria); Bonchev, Ts; Mavrodiev, V; Kyrdzhilov, N [Sofia Univ. (Bulgaria). Fizicheski Fakultet

    1996-12-31

    In order to facilitate quantitative and qualitative characterisation of radioactive contamination, the authors introduce a relative estimate of radionuclide activity by taking as reference the most abundant radionuclide in the case of the Kozloduy NPP, Co-60. The ratio η_i of the mean annual permissible concentration in air for each radionuclide (RPC-92) to that of Co-60 is calculated. It is found that η_i has the same or close values for groups of radionuclides, e.g. η_i = 2×10^-4 for 238Pu, 239Pu, 240Pu, 241Am, 244Cm; η_i = 5 for 89Sr, 91Y, 93Nb, 134Cs, 137Cs; η_i = 50 for 55Fe, 63Ni, 95Zr, 95Nb, 140Ba, 140La. This is then compared to the experimentally measured values of the same quantity, η_iexp, derived from surface contamination data. The ratio η_iexp/η_i is plotted against log η_i. The resulting nomograms give a graphic representation of the radiobiological significance of the various radionuclide groups. Data from different locations at the Kozloduy NPP are presented. It is found that alpha emitter contamination is highest in the Unit 1 (WWER-440) control rooms after repair. Unit 5 (WWER-1000) has lower alpha contamination than the WWER-440 units. 1 ref., 5 figs., 1 tab.

  14. Routine Laboratory Blood Tests May Diagnose Significant Fibrosis in Liver Transplant Recipients with Chronic Hepatitis C: A 10 Year Experience

    OpenAIRE

    Sheen, Victoria; Nguyen, Heajung; Jimenez, Melissa; Agopian, Vatche; Vangala, Sitaram; Elashoff, David; Saab, Sammy

    2016-01-01

    Background and Aims: The aims of our study were to determine whether routine blood tests, the aspartate aminotransferase (AST) to Platelet Ratio Index (APRI) and Fibrosis 4 (Fib-4) scores, were associated with advanced fibrosis and to create a novel model in liver transplant recipients with chronic hepatitis C virus (HCV). Methods: We performed a cross sectional study of patients at The University of California at Los Angeles (UCLA) Medical Center who underwent liver transplantation for HCV. ...

  15. An algorithm for finding biologically significant features in microarray data based on a priori manifold learning.

    Directory of Open Access Journals (Sweden)

    Zena M Hira

    Microarray databases are a large source of genetic data which, upon proper analysis, could enhance our understanding of biology and medicine. Many microarray experiments have been designed to investigate the genetic mechanisms of cancer, and analytical approaches have been applied to classify different types of cancer or distinguish between cancerous and non-cancerous tissue. However, microarrays are high-dimensional datasets with high levels of noise, and this causes problems when using machine learning methods. A popular approach to this problem is to search for a set of features that will simplify the structure and to some degree remove the noise from the data. The most widely used approach to feature extraction is principal component analysis (PCA), which assumes a multivariate Gaussian model of the data. More recently, non-linear methods have been investigated. Among these, manifold learning algorithms, for example Isomap, aim to project the data from a higher-dimensional space onto a lower-dimensional one. We have proposed a priori manifold learning for finding a manifold in which a representative set of microarray data is fused with relevant data taken from the KEGG pathway database. Once the manifold has been constructed, the raw microarray data are projected onto it, and clustering and classification can take place. In contrast to earlier fusion-based methods, the prior knowledge from the KEGG database is not used in, and does not bias, the classification process; it merely acts as an aid to finding the best space in which to search the data. In our experiments we found that our new manifold method gives better classification results than either PCA or conventional Isomap.

  16. Determination of the smoking gun of intent: significance testing of forced choice results in social security claimants.

    Science.gov (United States)

    Binder, Laurence M; Chafetz, Michael D

    2018-01-01

    Significantly below-chance findings on forced choice tests have been described as revealing "the smoking gun of intent" that proved malingering. The issues of probability levels, one-tailed vs. two-tailed tests, and the combining of PVT scores on significantly below-chance findings were addressed in a previous study, with a recommendation of a probability level of .20 to test the significance of below-chance results. The purpose of the present study was to determine the rate of below-chance findings in a Social Security Disability claimant sample using the previous recommendations. We compared the frequency of below-chance results on forced choice performance validity tests (PVTs) at two levels of significance, .05 and .20, and when using significance testing on individual subtests of the PVTs compared with total scores in claimants for Social Security Disability in order to determine the rate of the expected increase. The frequency of significant results increased with the higher level of significance for each subtest of the PVT and when combining individual test sections to increase the number of test items, with up to 20% of claimants showing significantly below-chance results at the higher p-value. These findings are discussed in light of Social Security Administration policy, showing an impact on policy issues concerning child abuse and neglect, and the importance of using these techniques in evaluations for Social Security Disability.
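
The below-chance test discussed above is a one-tailed binomial test: under random responding on two-alternative items, the chance of each correct answer is 0.5, and one asks how likely a score this low or lower would be. A minimal sketch, with an illustrative score of 18 correct out of 50 items (not data from the study):

```python
# One-tailed binomial test for significantly below-chance performance on a
# two-alternative forced-choice PVT (p = 0.5 per item under guessing).
from math import comb

def below_chance_p(correct, n_items, p_chance=0.5):
    """P(X <= correct) for X ~ Binomial(n_items, p_chance)."""
    return sum(
        comb(n_items, k) * p_chance**k * (1 - p_chance)**(n_items - k)
        for k in range(correct + 1)
    )

p = below_chance_p(18, 50)          # hypothetical score: 18/50 correct
print(f"p = {p:.3f}")
significant_at_05 = p < 0.05        # conventional threshold
significant_at_20 = p < 0.20        # the more liberal threshold discussed above
```

Raising the threshold from .05 to .20 flags more scores as below chance, which is exactly the increase in detection frequency the study quantifies; combining subtests raises the item count n, which narrows the binomial distribution and makes moderately low scores detectable.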

  17. Quantitative coronary angiography in the estimation of the functional significance of coronary stenosis: correlations with dobutamine-atropine stress test

    NARCIS (Netherlands)

    J.M.P. Baptista da Silva (José); M. Arnese (Mariarosaria); J.R.T.C. Roelandt (Jos); P.M. Fioretti (Paolo); D.T.J. Keane (David); J. Escaned (Javier); C. di Mario (Carlo); P.W.J.C. Serruys (Patrick); H. Boersma (Eric)

    1994-01-01

    textabstractOBJECTIVES. The purpose of this study was to determine the predictive value of quantitative coronary angiography in the assessment of the functional significance of coronary stenosis as judged from the development of left ventricular wall motion abnormalities during dobutamine-atropine

  18. Automated Search-Based Robustness Testing for Autonomous Vehicle Software

    Directory of Open Access Journals (Sweden)

    Kevin M. Betts

    2016-01-01

    Autonomous systems must operate successfully in complex, time-varying spatial environments even when dealing with system faults that may occur during a mission. Consequently, evaluating the robustness, or ability to operate correctly under unexpected conditions, of autonomous vehicle control software is an increasingly important issue in software testing. New methods are needed to automatically generate test cases for robustness testing of autonomous vehicle control software in closed-loop simulation. Search-based testing techniques were used to automatically generate test cases, consisting of initial conditions and fault sequences, intended to challenge the control software more than test cases generated using current methods. Two different search-based testing methods, genetic algorithms and surrogate-based optimization, were used to generate test cases for a simulated unmanned aerial vehicle attempting to fly through an entryway. The effectiveness of the search-based methods in generating challenging test cases was compared to both a truth reference (full combinatorial testing) and the method most commonly used today (Monte Carlo testing). The search-based testing techniques demonstrated better performance than Monte Carlo testing for both test-case generation performance metrics: (1) finding the single most challenging test case and (2) finding the set of fifty test cases with the highest mean degree of challenge.
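
The genetic-algorithm variant of search-based test generation can be sketched as follows. This is a toy illustration, not the study's implementation: each candidate test case is a parameter vector (here a hypothetical wind speed and lateral offset), and the "challenge" score is a stand-in function where a real setup would run the closed-loop simulation and measure how close the vehicle came to failure:

```python
# Search-based test-case generation with a simple genetic algorithm:
# evolve test-case parameters toward maximal challenge to the controller.
import random

random.seed(42)  # deterministic for reproducibility

def challenge(case):
    # Placeholder difficulty metric; a real one would score a simulation run.
    wind, offset = case
    return 2 * wind + abs(offset)

def evolve(pop_size=20, generations=30):
    # Initial population: random (wind, offset) test cases within bounds.
    pop = [(random.uniform(0, 10), random.uniform(-5, 5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=challenge, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            w = (a[0] + b[0]) / 2                 # averaging crossover
            o = (a[1] + b[1]) / 2
            w = min(10, max(0, w + random.gauss(0, 0.5)))   # mutate + clamp
            o = min(5, max(-5, o + random.gauss(0, 0.5)))
            children.append((w, o))
        pop = parents + children
    return max(pop, key=challenge)

hardest = evolve()
print("most challenging case:", hardest)
```

Compared with Monte Carlo sampling, which draws test cases blindly, the selection pressure here concentrates the search in the hardest corner of the parameter space, which is why search-based methods tend to find more challenging cases with the same simulation budget.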

  19. Trialing a Tablet PC Based Language Test

    Science.gov (United States)

    Litzler, Mary Frances; Garcia Laborda, Jesus

    2015-01-01

    Designing tests is a sophisticated task due to issues such as rubrics, validation and impact. Delivery has become another key issue in recent years. Recent research projects in Spain (García Laborda et al., 2010; García Laborda, 2012; García Laborda et al., 2014; Bueno Alastuey et al., 2014) have been working mainly with technological devices as…

  20. Improvement of testing and maintenance based on fault tree analysis

    International Nuclear Information System (INIS)

    Cepin, M.

    2000-01-01

    Testing and maintenance of safety equipment is an important issue, contributing significantly to the safe and efficient operation of a nuclear power plant. In this paper a method is presented which extends the classical fault tree with time. Its mathematical model is represented by a set of equations that include the time requirements defined in the house event matrix. The house event matrix represents house events switched on and off at discrete points in time; it includes house events which switch parts of the fault tree on and off in accordance with the status of the plant configuration. The time-dependent top event probability is calculated by fault tree evaluation. The arrangement of component outages is determined by minimizing the mean system unavailability. The results show that applying the method may improve the scheduling of testing and maintenance activities of safety equipment. (author)
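
The house-event mechanism above can be illustrated with a toy two-component AND gate; all probabilities and the test schedule below are invented for illustration, not taken from the paper:

```python
# Toy fault tree extended with time via a house-event matrix: a house
# event switches train A "out for testing" at discrete time points,
# making it fully unavailable during that window.

P_A, P_B = 0.01, 0.02   # illustrative basic-event failure probabilities

# House event H_t: True while train A is under test at time t.
house_matrix = {0: False, 1: True, 2: False, 3: False}

def top_event_probability(t):
    # While under test, train A is certainly unavailable (probability 1);
    # otherwise it fails randomly with probability P_A.
    a_unavailable = 1.0 if house_matrix[t] else P_A
    return a_unavailable * P_B   # AND gate: both trains must be unavailable

profile = {t: top_event_probability(t) for t in house_matrix}
print(profile)  # unavailability peaks at t = 1, during the test window
```

Minimizing the mean of such a time-dependent profile over candidate schedules is the optimization the paper describes: staggering outages so that test windows never remove all redundancy at once.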

  1. Computer-Aided Test Flow in Core-Based Design

    NARCIS (Netherlands)

    Zivkovic, V.; Tangelder, R.J.W.T.; Kerkhoff, Hans G.

    2000-01-01

    This paper copes with test-pattern generation and fault coverage determination in core-based design. The basic core-test strategy that one has to apply in core-based design is stated in this work. A Computer-Aided Test (CAT) flow is proposed, resulting in accurate fault coverage of embedded cores.

  2. Identifying significant genetic regulatory networks in the prostate cancer from microarray data based on transcription factor analysis and conditional independency

    Directory of Open Access Journals (Sweden)

    Yeh Cheng-Yu

    2009-12-01

    Full Text Available Abstract Background Prostate cancer is a worldwide leading cancer and it is characterized by its aggressive metastasis. According to the clinical heterogeneity, prostate cancer displays different stages and grades related to the aggressive metastasis disease. Although numerous studies used microarray analysis and traditional clustering methods to identify individual genes during the disease processes, the important gene regulations remain unclear. We present a computational method for inferring genetic regulatory networks from microarray data automatically with transcription factor analysis and conditional independence testing to explore the potential significant gene regulatory networks that are correlated with cancer, tumor grade and stage in prostate cancer. Results To deal with missing values in microarray data, we used a K-nearest-neighbors (KNN) algorithm to determine the precise expression values. We applied web services technology to wrap the bioinformatics toolkits and databases to automatically extract the promoter regions of DNA sequences and predicted the transcription factors that regulate the gene expressions. We adopted a microarray dataset consisting of 62 primary tumors and 41 normal prostate tissues from the Stanford Microarray Database (SMD) as a target dataset to evaluate our method. The predicted results identified possible biomarker genes related to cancer and indicated that androgen functions and processes may be involved in the development of prostate cancer and promote cell death in the cell cycle. Our predicted results showed that sub-networks of genes SREBF1, STAT6 and PBX1 are strongly related to a high extent while ETS transcription factors ELK1, JUN and EGR2 are related to a low extent. Gene SLC22A3 may explain clinically the differentiation associated with high grade cancer compared with low grade cancer. Enhancer of Zeste Homolog 2 (EZH2) regulated by RUNX1 and STAT3 is correlated to the pathological stage

  3. Identifying significant genetic regulatory networks in the prostate cancer from microarray data based on transcription factor analysis and conditional independency.

    Science.gov (United States)

    Yeh, Hsiang-Yuan; Cheng, Shih-Wu; Lin, Yu-Chun; Yeh, Cheng-Yu; Lin, Shih-Fang; Soo, Von-Wun

    2009-12-21

    Prostate cancer is a worldwide leading cancer and it is characterized by its aggressive metastasis. According to the clinical heterogeneity, prostate cancer displays different stages and grades related to the aggressive metastasis disease. Although numerous studies used microarray analysis and traditional clustering methods to identify individual genes during the disease processes, the important gene regulations remain unclear. We present a computational method for inferring genetic regulatory networks from microarray data automatically with transcription factor analysis and conditional independence testing to explore the potential significant gene regulatory networks that are correlated with cancer, tumor grade and stage in prostate cancer. To deal with missing values in microarray data, we used a K-nearest-neighbors (KNN) algorithm to determine the precise expression values. We applied web services technology to wrap the bioinformatics toolkits and databases to automatically extract the promoter regions of DNA sequences and predicted the transcription factors that regulate the gene expressions. We adopted a microarray dataset consisting of 62 primary tumors and 41 normal prostate tissues from the Stanford Microarray Database (SMD) as a target dataset to evaluate our method. The predicted results identified possible biomarker genes related to cancer and indicated that androgen functions and processes may be involved in the development of prostate cancer and promote cell death in the cell cycle. Our predicted results showed that sub-networks of genes SREBF1, STAT6 and PBX1 are strongly related to a high extent while ETS transcription factors ELK1, JUN and EGR2 are related to a low extent. Gene SLC22A3 may explain clinically the differentiation associated with high grade cancer compared with low grade cancer. Enhancer of Zeste Homolog 2 (EZH2) regulated by RUNX1 and STAT3 is correlated to the pathological stage. We provide a computational framework to reconstruct
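
    The KNN imputation step mentioned above can be sketched as follows. The distance metric (Euclidean over columns observed in both rows) and the tiny matrix are illustrative, not the actual SMD data handling:

```python
import math

def knn_impute(matrix, k=2):
    """Fill missing entries (None) with the average of the k nearest rows,
    measured by Euclidean distance over columns observed in both rows.
    Assumes every missing column has at least one observed neighbour."""
    filled = [row[:] for row in matrix]
    for i, row in enumerate(matrix):
        for j, v in enumerate(row):
            if v is not None:
                continue
            cands = []
            for i2, other in enumerate(matrix):
                if i2 == i or other[j] is None:
                    continue
                shared = [(a, b) for a, b in zip(row, other)
                          if a is not None and b is not None]
                if shared:
                    d = math.sqrt(sum((a - b) ** 2 for a, b in shared))
                    cands.append((d, other[j]))
            cands.sort(key=lambda t: t[0])
            nearest = cands[:k]
            filled[i][j] = sum(val for _, val in nearest) / len(nearest)
    return filled
```

    With k=2, a gene profile's missing expression value is replaced by the mean of that column in the two most similar profiles, rather than a global column mean.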

  4. On testing the significance of atmospheric response to smoke from the Kuwaiti oil fires using the Los Alamos general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Kao, C.J.; Glatzmaier, G.A.; Malone, R.C. [Los Alamos National Laboratory, Los Alamos, NM (United States)

    1994-07-01

    The response of the Los Alamos atmospheric general circulation model to the smoke from the Kuwaiti oil fires set in 1991 is examined. The model has an interactive soot transport module that uses a Lagrangian tracer particle scheme. The statistical significance of the results is evaluated using a methodology based on the classic Student's t test. Among various estimated smoke emission rates and associated visible absorption coefficients, the worst- and best-case scenarios are selected. In each of the scenarios, an ensemble of 10 30-day June simulations is conducted with the smoke and compared to the same 10 June simulations without the smoke. The results of the worst-case scenario show that a statistically significant wave train pattern propagates eastward-poleward downstream from the source. The signals compare favorably with the observed climate anomalies in summer 1991, although some possible El Nino-Southern Oscillation effects were involved in the actual climate. The results of the best-case (i.e., least-impact) scenario show that the significance is rather small but that its general pattern is quite similar to that in the worst-case scenario.

  5. On testing the significance of atmospheric response to smoke from the Kuwaiti oil fires using the Los Alamos general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Chih-Yue Jim Kao; Glatzmaier, G.A.; Malone, R.C. [Los Alamos National Lab., NM (United States)

    1994-07-20

    The response of the Los Alamos atmospheric general circulation model to the smoke from the Kuwaiti oil fires set in 1991 is examined. The model has an interactive soot transport module that uses a Lagrangian tracer particle scheme. The statistical significance of the results is evaluated using a methodology based on the classic Student's t test. Among various estimated smoke emission rates and associated visible absorption coefficients, the worst- and best-case scenarios are selected. In each of the scenarios, an ensemble of 10 30-day June simulations is conducted with the smoke and compared to the same 10 June simulations without the smoke. The results of the worst-case scenario show that a statistically significant wave train pattern propagates eastward-poleward downstream from the source. The signals compare favorably with the observed climate anomalies in summer 1991, although some possible El Nino-Southern Oscillation effects were involved in the actual climate. The results of the best-case (i.e., least-impact) scenario show that the significance is rather small but that its general pattern is quite similar to that in the worst-case scenario. 24 refs., 5 figs.
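
    The ensemble comparison rests on the classic two-sample Student's t statistic. A minimal sketch with invented ensemble values (10 simulated June-mean temperatures per group; 2.101 is the standard two-sided 5% critical value for 18 degrees of freedom):

```python
import math
import random

def pooled_t(sample_a, sample_b):
    """Classic two-sample Student's t statistic (equal-variance form)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)   # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

rng = random.Random(0)
control = [rng.gauss(15.0, 0.3) for _ in range(10)]  # 10 Junes, no smoke
smoke = [rng.gauss(14.2, 0.3) for _ in range(10)]    # 10 Junes, with smoke
t = pooled_t(smoke, control)
T_CRIT = 2.101   # two-sided 5% critical value, 18 degrees of freedom
significant = abs(t) > T_CRIT
```

    With 10 members per ensemble there are 18 degrees of freedom, so a |t| above 2.101 rejects the null hypothesis of no smoke-induced change at the 5% level.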

  6. EDDY - a FORTRAN program to extract significant features from eddy-current test data - the basis of the CANSCAN system

    International Nuclear Information System (INIS)

    Jarvis, R.G.; Cranston, R.J.

    1982-09-01

    The FORTRAN program EDDY is designed to analyse data from eddy-current scans of steam generator tubes. It is written in modular form, for future development, and it uses signal-recognition techniques that the authors developed in the profilometry of irradiated fuel elements. During a scan, significant signals are detected and extracted for immediate attention or more detailed analysis later. A version of the program was used in the CANSCAN system 'for automated eddy-current in-service inspection of nuclear steam generator tubing'.

  7. Finding of No Significant Impact (FONSI) for Construction of a Base Civil Engineer Complex at Travis Air Force Base, California

    Science.gov (United States)

    2002-01-26

    …a CNPS List 1B species. This species is an annual herb in the sunflower tribe (Heliantheae) of the sunflower family (Asteraceae). Individual plants range from approximately 10 to 40 centimeters (cm) tall. Being in the sunflower family (Asteraceae), the characteristic yellow flower of this plant…

  8. Test Review: Test of English as a Foreign Language[TM]--Internet-Based Test (TOEFL iBT[R])

    Science.gov (United States)

    Alderson, J. Charles

    2009-01-01

    In this article, the author reviews the TOEFL iBT which is the latest version of the TOEFL, whose history stretches back to 1961. The TOEFL iBT was introduced in the USA, Canada, France, Germany and Italy in late 2005. Currently the TOEFL test is offered in two testing formats: (1) Internet-based testing (iBT); and (2) paper-based testing (PBT).…

  9. Testing for Statistical Discrimination based on Gender

    OpenAIRE

    Lesner, Rune Vammen

    2016-01-01

    This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure...

  10. Inversion of lithium heparin gel tubes after centrifugation is a significant source of bias in clinical chemistry testing.

    Science.gov (United States)

    Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Lima-Oliveira, Gabriel; Brocco, Giorgio; Guidi, Gian Cesare

    2014-09-25

    This study was planned to establish whether random orientation of gel tubes after centrifugation may impair sample quality. Eight gel tubes were collected from 17 volunteers: 2 Becton Dickinson (BD) serum tubes, 2 Terumo serum tubes, 2 BD lithium heparin tubes and 2 Terumo lithium heparin tubes. One patient's tube for each category was kept in a vertical, closure-up position for 90 min ("upright"), whereas paired tubes underwent bottom-up inversion every 15 min, for 90 min ("inverted"). Immediately after this period of time, 14 clinical chemistry analytes, serum indices and complete blood count were then assessed in all tubes. Significant increases were found for phosphate and lipaemic index in all inverted tubes, along with AST, calcium, cholesterol, LDH, potassium, hemolysis index, leukocytes, erythrocytes and platelets limited to lithium heparin tubes. The desirable quality specifications were exceeded for AST, LDH, and potassium in inverted lithium heparin tubes. Residual leukocytes, erythrocytes, platelets and cellular debris were also significantly increased in inverted lithium heparin tubes. Lithium heparin gel tubes should be maintained in a vertical, closure-up position after centrifugation. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Risk based test interval and maintenance optimisation - Application and uses

    International Nuclear Information System (INIS)

    Sparre, E.

    1999-10-01

    The project is part of an IAEA co-ordinated Research Project (CRP) on 'Development of Methodologies for Optimisation of Surveillance Testing and Maintenance of Safety Related Equipment at NPPs'. The purpose of the project is to investigate the sensitivity of the results obtained when performing risk-based optimisation of the technical specifications. Previous projects have shown that complete LPSA models can be created and that these models allow optimisation of technical specifications. However, these optimisations did not include any in-depth check of the result sensitivity with regard to methods, model completeness etc. Four different test intervals have been investigated in this study. Aside from an original, nominal optimisation, a set of sensitivity analyses has been performed and the results from these analyses have been compared to the original optimisation. The analyses indicate that the result of an optimisation is rather stable. However, it is not possible to draw any firm conclusions without performing a number of sensitivity analyses. Significant differences in the optimisation result were discovered when analysing an alternative configuration. Deterministic uncertainties also seem to affect the result of an optimisation to a large extent. The sensitivity to failure data uncertainties is important to investigate in detail, since the methodology is based on the assumption that the unavailability of a component depends on the length of the test interval.
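
    A common textbook form of that dependence, offered here only as an illustrative model with assumed parameter values (not the LPSA model used in the project), balances undetected standby failures against test-caused downtime:

```python
import math

def mean_unavailability(T, lam=1e-4, tau=1.5):
    """Average unavailability of a standby component tested every T hours.
    lam*T/2 models undetected standby failures accumulating between tests;
    tau/T models downtime caused by the test itself (assumed model)."""
    return lam * T / 2 + tau / T

def optimal_interval(lam=1e-4, tau=1.5):
    # d/dT (lam*T/2 + tau/T) = 0  =>  T* = sqrt(2*tau/lam)
    return math.sqrt(2 * tau / lam)
```

    Testing too often dominates the tau/T term, testing too rarely the lam*T/2 term; the optimum sits where the two contributions are equal.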

  12. Protein-Based Urine Test Predicts Kidney Transplant Outcomes

    Science.gov (United States)

    News Release, Thursday, August 22, 2013: Protein-based urine test predicts kidney transplant outcomes. NIH- … supporting development of noninvasive tests. Levels of a protein in the urine of kidney transplant recipients can …

  13. Significance test for seismicity rate changes before the 1987 Chiba-toho-oki earthquake (M 6.7) Japan

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, K.; Wiemer, S. [Meteorologial Research Institute, Tsukuba, Ibaraki (Japan). Dept. of Seismology and Volcanology Research

    1999-10-01

    The paper discusses a quantitative analysis of the seismicity rates, using two independent catalogs provided by the NIED (National Research Institute for Earth Science and Disaster Prevention) and JMA (Japan Meteorological Agency) networks, and shows that the precursory seismic quiescence is centered in the shallower part of the rupture zone of the subsequent main shock, at the depth of 20-40 km. At the hypocenter of the 1987 Chiba-toho-oki earthquake, a 50% increase in the seismicity rate was detected in the NIED data, coinciding in time with the onset of quiescence. To aid real-time monitoring of seismicity rate changes, a method to calculate the 95-percentile confidence level for significant rate changes has been introduced.
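
    A simplified stand-in for such a rate-change significance calculation, using a normal approximation for Poisson event counts (the study's actual statistic may differ):

```python
import math

def rate_change_z(n1, t1, n2, t2):
    """z statistic for a change between two Poisson rates n1/t1 and n2/t2
    (normal approximation: var of an estimated rate n/t is roughly n/t^2)."""
    r1, r2 = n1 / t1, n2 / t2
    se = math.sqrt(n1 / t1 ** 2 + n2 / t2 ** 2)
    return (r1 - r2) / se

Z_95 = 1.96  # two-sided 95% threshold for a standard normal

def significant_change(n1, t1, n2, t2):
    return abs(rate_change_z(n1, t1, n2, t2)) > Z_95
```

    The same 50% rate increase is significant only if enough events are observed: 200 vs. 300 events over equal windows clears the threshold, while 4 vs. 6 events does not.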

  14. Testing the performance of technical trading rules in the Chinese markets based on superior predictive test

    Science.gov (United States)

    Wang, Shan; Jiang, Zhi-Qiang; Li, Sai-Ping; Zhou, Wei-Xing

    2015-12-01

    Technical trading rules have a long history of being used by practitioners in financial markets. The profitability and efficiency of technical trading rules remain controversial. In this paper, we test the performance of more than seven thousand traditional technical trading rules on the Shanghai Securities Composite Index (SSCI) from May 21, 1992 through June 30, 2013 and the China Securities Index 300 (CSI 300) from April 8, 2005 through June 30, 2013 to check whether an effective trading strategy can be found using performance measurements based on the return and Sharpe ratio. To correct for the influence of the data-snooping effect, we adopt the Superior Predictive Ability test to evaluate whether there exists a trading rule that can significantly outperform the benchmark. The result shows that for SSCI, technical trading rules offer significant profitability, while for CSI 300, this ability is lost. We further partition the SSCI into two sub-series and find that the efficiency of technical trading in the sub-series, which has exactly the same spanning period as that of CSI 300, is severely weakened. By testing the trading rules on both indexes with a five-year moving window, we find that during the financial bubble from 2005 to 2007, the effectiveness of technical trading rules is greatly improved. This is consistent with the predictive ability of technical trading rules, which appears when the market is less efficient.
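
    The performance-measurement idea can be sketched with one classic rule, a moving-average crossover, evaluated by total return and Sharpe ratio. The rule and data handling are illustrative only and say nothing about the SPA correction itself:

```python
import math

def sma(prices, n):
    # Simple moving average; shorter windows at the start of the series.
    return [sum(prices[max(0, i - n + 1): i + 1]) / min(i + 1, n)
            for i in range(len(prices))]

def rule_returns(prices, short=3, long=8):
    """Daily returns of a moving-average crossover rule: hold the index
    when SMA(short) > SMA(long) on the previous day, else stay in cash."""
    s, l = sma(prices, short), sma(prices, long)
    rets = []
    for i in range(1, len(prices)):
        market = prices[i] / prices[i - 1] - 1
        rets.append(market if s[i - 1] > l[i - 1] else 0.0)
    return rets

def sharpe(rets):
    m = sum(rets) / len(rets)
    v = sum((r - m) ** 2 for r in rets) / (len(rets) - 1)
    return 0.0 if v == 0 else m / math.sqrt(v)
```

    Note the signal uses the previous day's averages, avoiding look-ahead bias; evaluating thousands of such rules on the same series is exactly what makes the data-snooping correction necessary.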

  15. Operational Based Vision Assessment Automated Vision Test Collection User Guide

    Science.gov (United States)

    2017-05-15

    AFRL-SA-WP-SR-2017-0012. Operational Based Vision Assessment Automated Vision Test Collection User Guide. Elizabeth Shoda, Alex… Reporting period: June 2015 – May 2017. … automated vision tests, or AVT. Development of the AVT was required to support threshold-level vision testing capability needed to investigate the…

  16. Computer-Aided Test Flow in Core-Based Design

    OpenAIRE

    Zivkovic, V.; Tangelder, R.J.W.T.; Kerkhoff, Hans G.

    2000-01-01

    This paper copes with test-pattern generation and fault coverage determination in core-based design. The basic core-test strategy that one has to apply in core-based design is stated in this work. A Computer-Aided Test (CAT) flow is proposed, resulting in accurate fault coverage of embedded cores. The CAT flow is applied to a few cores within the Philips Core Test Pilot IC project.

  17. Testing for Statistical Discrimination based on Gender

    DEFF Research Database (Denmark)

    Lesner, Rune Vammen

    This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure but increases in job transitions and that the fraction of women in high-ranking positions within a firm does not affect the level of statistical discrimination by gender.

  18. Comparison of the Clock Test and a questionnaire-based test for ...

    African Journals Online (AJOL)

    Comparison of the Clock Test and a questionnaire-based test for screening for cognitive impairment in Nigerians. D J VanderJagt, S Ganga, M O Obadofin, P Stanley, M Zimmerman, B J Skipper, R H Glew ...

  19. Testing of advanced chromium - iron based steel

    International Nuclear Information System (INIS)

    Simeg Veternikova, J.; Degmova, J.; Sabelova, V.; Sojak, S.; Petriska, M.; Slugen, V.; Simko, F.; Pekarcikova, M.

    2015-01-01

    Research and development of advanced nuclear reactors in Generation IV (GEN IV) is limited by the selection of proper construction materials. Suitable candidate materials are still under extensive investigation, because their properties must be excellent to achieve a high level of reactor system safety. NF 709 (Fe-20Cr-25Ni) is a new austenitic steel with improved properties compared to AISI steels; therefore it is also one of the candidate materials. Our study is focused on investigation of the radiation resistance as well as the thermal stability of this steel, NF 709. The new austenitic steel NF 709, a candidate material for construction of Generation IV reactors, was observed in terms of its stability after exposure to very high temperature and irradiation. The change of microstructure was observed by positron annihilation techniques, which demonstrated the growth of vacancy defects from di-vacancies in the as-received material to three-vacancies in the material after the thermal and implantation treatments, although the total change of structure was very small. Thus, NF 709 showed good resistance to the tested strains according to our preliminary results. Therefore, this material could be used for high-temperature applications and interchangeable components of Generation IV reactors. (authors)

  20. BEAT: A Web-Based Boolean Expression Fault-Based Test Case Generation Tool

    Science.gov (United States)

    Chen, T. Y.; Grant, D. D.; Lau, M. F.; Ng, S. P.; Vasa, V. R.

    2006-01-01

    BEAT is a Web-based system that generates fault-based test cases from Boolean expressions. It is based on the integration of our several fault-based test case selection strategies. The generated test cases are considered to be fault-based because they aim at the detection of particular faults. For example, when the Boolean expression is in…
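
    The core idea of fault-based test selection can be illustrated by finding the assignments that distinguish a Boolean expression from a mutated (faulty) version of it. The expressions below are invented for illustration and are not BEAT's strategies:

```python
from itertools import product

def truth_vector(expr, names):
    """Evaluate a Boolean function over all assignments of its variables."""
    rows = list(product([False, True], repeat=len(names)))
    return [expr(dict(zip(names, row))) for row in rows], rows

def detecting_tests(original, mutant, names):
    """Test cases (variable assignments) on which a faulty mutant of the
    Boolean expression produces a different outcome than the original."""
    orig_v, rows = truth_vector(original, names)
    mut_v, _ = truth_vector(mutant, names)
    return [dict(zip(names, row))
            for row, o, m in zip(rows, orig_v, mut_v) if o != m]

# Original: a and (b or c); mutant with an operator-reference fault:
names = ["a", "b", "c"]
orig = lambda v: v["a"] and (v["b"] or v["c"])
mut = lambda v: v["a"] or (v["b"] or v["c"])
tests = detecting_tests(orig, mut, names)
```

    Any test case in `tests` is guaranteed to expose this particular fault; a fault-based selection strategy picks a small set of assignments that together detect a whole class of such mutants.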

  1. DLP™-based dichoptic vision test system

    Science.gov (United States)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state-of-the-art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3%; the remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  2. Wind turbine blade testing system using base excitation

    Science.gov (United States)

    Cotrell, Jason; Thresher, Robert; Lambert, Scott; Hughes, Scott; Johnson, Jay

    2014-03-25

    An apparatus (500) for fatigue testing elongate test articles (404) including wind turbine blades through forced or resonant excitation of the base (406) of the test articles (404). The apparatus (500) includes a testing platform or foundation (402). A blade support (410) is provided for retaining or supporting a base (406) of an elongate test article (404), and the blade support (410) is pivotally mounted on the testing platform (402) with at least two degrees of freedom of motion relative to the testing platform (402). An excitation input assembly (540) is interconnected with the blade support (410) and includes first and second actuators (444, 446, 541) that act to concurrently apply forces or loads to the blade support (410). The actuator forces are cyclically applied in first and second transverse directions. The test article (404) responds to shaking of its base (406) by oscillating in two, transverse directions (505, 507).

  3. Formal Specification Based Automatic Test Generation for Embedded Network Systems

    Directory of Open Access Journals (Sweden)

    Eun Hye Choi

    2014-01-01

    Full Text Available Embedded systems have become increasingly connected and communicate with each other, forming large-scaled and complicated network systems. To make their design and testing more reliable and robust, this paper proposes a formal specification language called SENS and a SENS-based automatic test generation tool called TGSENS. Our approach is summarized as follows: (1) A user describes requirements of target embedded network systems by logical property-based constraints using SENS. (2) Given SENS specifications, test cases are automatically generated using a SAT-based solver. Filtering mechanisms to select efficient test cases are also available in our tool. (3) In addition, given a testing goal by the user, test sequences are automatically extracted from exhaustive test cases. We’ve implemented our approach and conducted several experiments on practical case studies. Through the experiments, we confirmed the efficiency of our approach in design and test generation of real embedded air-conditioning network systems.
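
    A brute-force sketch of step (2), generating test cases from property-style constraints. Enumeration stands in for the SAT solver, and the field names and constraints are invented for illustration, not SENS syntax:

```python
from itertools import product

# Hypothetical fields of an embedded air-conditioning network message:
FIELDS = {"mode": [0, 1, 2], "fan_on": [False, True], "temp": [15, 25, 35]}

# Hypothetical logical property-based constraints on valid test cases:
CONSTRAINTS = [
    lambda c: not (c["mode"] == 0 and c["fan_on"]),  # fan must be off in mode 0
    lambda c: c["temp"] < 35 or c["fan_on"],         # hot reading => fan on
]

def generate_tests(fields, constraints):
    """Enumerate every field combination and keep those satisfying all
    constraints -- a brute-force stand-in for the SAT-based step."""
    names = list(fields)
    cases = []
    for values in product(*(fields[n] for n in names)):
        case = dict(zip(names, values))
        if all(c(case) for c in constraints):
            cases.append(case)
    return cases
```

    A real SAT-based generator encodes the same constraints as clauses and asks the solver for satisfying assignments, which scales far beyond what enumeration can handle; a filtering step then selects an efficient subset of the exhaustive set.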

  4. Environmental Assessment and Finding of No Significant Impact: The Nevada Test Site Development Corporations's Desert Rock Sky Park at the Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    N/A

    2000-03-01

    The United States Department of Energy has prepared an Environmental Assessment (DOE/EA-1300) (EA) which analyzes the potential environmental effects of developing, operating and maintaining a commercial/industrial park in Area 22 of the Nevada Test Site, between Mercury Camp and U.S. Highway 95 and east of Desert Rock Airport. The EA evaluates the potential impacts of infrastructure improvements necessary to support full build-out of the 512-acre Desert Rock Sky Park. Two alternative actions were evaluated: (1) develop, operate and maintain a commercial/industrial park in Area 22 of the Nevada Test Site, and (2) take no action. The purpose of and need for the commercial/industrial park are addressed in Section 1.0 of the EA. A detailed description of the proposed action and alternatives is in Section 2.0. Section 3.0 describes the affected environment, and Section 4.0 the environmental consequences of the proposed action and alternatives. Cumulative effects are addressed in Section 5.0, and mitigation measures in Section 6.0. The Department of Energy determined that the proposed action of developing, operating and maintaining a commercial/industrial park in Area 22 of the Nevada Test Site would best meet the needs of the agency.

  5. Analysing Test-Takers’ Views on a Computer-Based Speaking Test

    Directory of Open Access Journals (Sweden)

    Marian Amengual-Pizarro

    2017-11-01

    Full Text Available This study examines test-takers’ views on a computer-delivered speaking test in order to investigate the aspects they consider most relevant in technology-based oral assessment, and to explore the main advantages and disadvantages computer-based tests may offer as compared to face-to-face speaking tests. A small-scale open questionnaire was administered to 80 test-takers who took the APTIS speaking test at the Universidad de Alcalá in April 2016. Results reveal that examinees believe computer-based tests provide a valid measure of oral competence in English and are considered to be an adequate method for the assessment of speaking. Interestingly, the data suggest that personal characteristics of test-takers seem to play a key role in deciding upon the most suitable and reliable delivery mode.

  6. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through…

  7. Clinicopathological significance of p16, cyclin D1, Rb and MIB-1 levels in skull base chordoma and chondrosarcoma

    Directory of Open Access Journals (Sweden)

    Jun-qi Liu

    2015-09-01

    Full Text Available Objective: To investigate the expression of p16, cyclin D1, retinoblastoma tumor suppressor protein (Rb) and MIB-1 in skull base chordoma and chondrosarcoma tissues, and to determine the clinicopathological significance of the above indexes in these diseases. Methods: A total of 100 skull base chordoma, 30 chondrosarcoma, and 20 normal cartilage tissue samples were analyzed by immunohistochemistry. The expression levels of p16, cyclin D1, Rb and MIB-1 proteins were assessed for potential correlation with the clinicopathological features. Results: As compared to normal cartilage specimens (control), there was decreased expression of p16, and increased expression of cyclin D1, Rb and MIB-1 proteins, in both skull base chordoma and chondrosarcoma specimens. MIB-1 LI levels were significantly increased in skull base chordoma specimens with negative expression of p16, and positive expression of cyclin D1 and Rb (P < 0.05). However, p16 and MIB-1 levels correlated with the intradural invasion, and expression of p16, Rb and MIB-1 correlated with the number of tumor foci (P < 0.05). Further, the expression of p16 and MIB-1 appeared to correlate with the prognosis of patients with skull base chordoma. Conclusions: The abnormal expression of p16, cyclin D1 and Rb proteins might be associated with the tumorigenesis of skull base chordoma and chondrosarcoma. Keywords: p16, Cyclin D1, Rb, MIB-1, Skull base chordoma, Skull base chondrosarcoma

  8. Addendum to the article: Misuse of null hypothesis significance testing: Would estimation of positive and negative predictive values improve certainty of chemical risk assessment?

    Science.gov (United States)

    Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf

    2015-03-01

    We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results, or more extreme results, if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
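
    Once the significance level, the Type II error rate and the prior probability R are fixed, the PPV and NPV calculations themselves are straightforward (a sketch of the standard Bayes-rule formulas, not the authors' code):

```python
def ppv_npv(alpha, beta, R):
    """Positive and negative predictive value of a significance test.
    alpha: significance level (false-positive rate); beta: Type II error
    rate (1 - power); R: a priori probability that the effect is real."""
    power = 1 - beta
    ppv = power * R / (power * R + alpha * (1 - R))
    npv = (1 - alpha) * (1 - R) / ((1 - alpha) * (1 - R) + beta * R)
    return ppv, npv
```

    With the conventional alpha = 0.05 and 80% power, a significant result is highly trustworthy when R = 0.5, but the PPV collapses when tested hypotheses are rarely true, which is why the estimate of R matters so much.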

  9. Cernavoda NPP risk - Based test and maintenance planning - Methodology development

    International Nuclear Information System (INIS)

    Georgescu, G.; Popa, P.; Petrescu, A.; Naum, M.; Gutu, M.

    1997-01-01

    The Cernavoda Power Plant started commercial operation in November 1996. During operation of a nuclear power plant, several mandatory tests and maintenance activities are performed on stand-by safety system components to ensure their availability in case of accident. The basic purpose of such activities is the early detection of any failure and degradation, and the timely correction of deteriorations. Because of the large number of such activities, maintaining emphasis on plant safety and allocating resources becomes difficult. The probabilistic model and methodology can be effectively used to obtain the risk significance of these activities so that resources are directed to the most important areas. The proposed Research Contract activity is strongly connected with other safety-related areas under development. Since the Cernavoda Probabilistic Safety Evaluation Level 1 PSA Study (CPSE) has been performed and is now being revised to take into account as-built information, it is recommended to implement in the model the features necessary to support further PSA applications, especially Test and Maintenance optimization. Methods need to be developed in order to apply the PSA model, including risk information together with other needed information, for Test and Maintenance optimization. Also, in parallel with the CPSE study update, the software interface for the PSA model is under development (Risk Monitor Software class), with methods and models to be developed for qualified monitoring of the efficiency of the Test and Maintenance strategy. Similarly, the Data Collection System needs to be appropriate for an ongoing implementation of a risk-based Test and Maintenance strategy. (author). 4 refs, 1 fig

  10. Moving beyond the Failure of Test-Based Accountability

    Science.gov (United States)

    Koretz, Daniel

    2018-01-01

    In "The Testing Charade: Pretending to Make Schools Better", the author's new book from which this article is drawn, the failures of test-based accountability are documented and some of the most egregious misuses and outright abuses of testing are described, along with some of the most serious negative effects. Neither good intentions…

  11. Ethernet-based test stand for a CAN network

    Science.gov (United States)

    Ziebinski, Adam; Cupek, Rafal; Drewniak, Marek

    2017-11-01

    This paper presents a test stand for the CAN-based systems that are used in automotive applications. The authors propose an Ethernet-based test system that supports the virtualisation of a CAN network. The proposed solution has several advantages over classical test beds based on dedicated CAN-PC interfaces: it avoids the physical constraints on the number of interfaces that can be simultaneously connected to a tested system, which shortens the test time for parallel tests; the high speed of Ethernet transmission allows more frequent sampling of the messages transmitted on a CAN network (as the authors show in the experimental results section); and the cost of the proposed solution is much lower than that of traditional lab-based dedicated CAN interfaces for PCs.

  12. Correlations between power and test reactor data bases

    International Nuclear Information System (INIS)

    Guthrie, G.L.; Simonen, E.P.

    1989-02-01

    Differences between power reactor and test reactor data bases have been evaluated. Charpy shift data have been assembled from specimens irradiated in both high-flux test reactors and low-flux power reactors. Preliminary tests for the existence of a bias between test and power reactor data bases indicate a possible bias between the weld data bases. This bias is nonconservative when test reactor data are used for predictive purposes at power reactors. The lesser shift for test reactor data compared with power reactor data is interpreted primarily in terms of greater point defect recombination at test reactor fluxes than at power reactor fluxes. The possibility of greater thermal aging effects at lower damage rates is also discussed. 15 refs., 5 figs., 2 tabs

  13. A comparison of test statistics for the recovery of rapid growth-based enumeration tests

    NARCIS (Netherlands)

    van den Heuvel, Edwin R.; IJzerman-Boon, Pieta C.

    This paper considers five test statistics for comparing the recovery of a rapid growth-based enumeration test with respect to the compendial microbiological method using a specific nonserial dilution experiment. The finite sample distributions of these test statistics are unknown, because they are

  14. Using the noninformative families in family-based association tests : A powerful new testing strategy

    NARCIS (Netherlands)

    Lange, C; DeMeo, D; Silverman, EK; Weiss, ST; Laird, NM

    2003-01-01

    For genetic association studies with multiple phenotypes, we propose a new strategy for multiple testing with family-based association tests (FBATs). The strategy increases the power by both using all available family data and reducing the number of hypotheses tested while being robust against

  15. Acid-base titrations for polyacids: Significance of the pK sub a and parameters in the Kern equation

    Science.gov (United States)

    Meites, L.

    1978-01-01

    A new method is suggested for calculating the dissociation constants of polyvalent acids, especially polymeric acids. In qualitative form the most significant characteristics of the titration curves are demonstrated and identified which are obtained when titrating the solutions of such acids with a standard base potentiometrically.

  16. Geometrical error calibration in reflective surface testing based on reverse Hartmann test

    Science.gov (United States)

    Gong, Zhidong; Wang, Daodang; Xu, Ping; Wang, Chao; Liang, Rongguang; Kong, Ming; Zhao, Jun; Mo, Linhai; Mo, Shuhui

    2017-08-01

    In fringe-illumination deflectometry based on a reverse-Hartmann-test configuration, ray tracing of the modeled testing system is performed to reconstruct the test surface error. Careful calibration of the system geometry is required to achieve high testing accuracy. To realize high-precision surface testing with the reverse Hartmann test, a computer-aided geometrical error calibration method is proposed. The aberrations corresponding to various geometrical errors are studied. With the aberration weights for the various geometrical errors, computer-aided optimization of the system geometry with iterative ray tracing is carried out to calibrate the geometrical errors, and accuracy on the order of a subnanometer is achieved.

  17. Prognostic significance of gastrointestinal symptoms and diagnosis in relation to the acute radiation syndrome. A retrospective analysis based on the data base SEARCH

    International Nuclear Information System (INIS)

    Hoebbel, Mathias Niklaus Johannes

    2016-01-01

    The following thesis explores the prognostic significance of gastrointestinal symptoms and diagnoses in relation to acute radiation syndrome. This is a retrospective analysis based on the SEARCH (System of Evaluation and Archiving of Radiation Accidents based on Case Histories) database, which was created by a team of researchers in Ulm in 1998. The SEARCH database compiled health status data of individuals involved in a total of 78 ionizing-radiation accidents between 1945 and 2003. In the past, changes in the blood-building system were considered the defining factor in determining a prognosis regarding survival time, and treatment decisions, including stem-cell transplants, were made in line with these findings. In recent history, especially after the nuclear disaster in Chernobyl in 1986, the focus shifted to other organ systems. As a result, it has been shown that significant cutaneous damage has an important influence on survival regardless of haematopoiesis. Several researchers have looked at changes in the gastrointestinal tract and possible correlations with radiation-induced multiple organ failure. In this paper, all of the data recorded in SEARCH with regard to gastrointestinal symptoms have been analyzed. These include symptoms such as nausea, vomiting and changes in bowel movement, as well as their onset and severity. Radiation-induced oral mucositis was also further investigated. Despite occasional gaps in the SEARCH data, the analysis showed that the occurrence of certain symptoms, their severity and their onset were directly correlated with life expectancy, regardless of the dose estimate and the pending blood test results. An immediate triage of these patients by skilled medical professionals is imperative for accurate categorization.

  18. Cross-Mode Comparability of Computer-Based Testing (CBT) versus Paper-Pencil Based Testing (PPT): An Investigation of Testing Administration Mode among Iranian Intermediate EFL Learners

    Science.gov (United States)

    Khoshsima, Hooshang; Hosseini, Monirosadat; Toroujeni, Seyyed Morteza Hashemi

    2017-01-01

    Advent of technology has caused growing interest in using computers to convert conventional paper and pencil-based testing (Henceforth PPT) into Computer-based testing (Henceforth CBT) in the field of education during last decades. This constant promulgation of computers to reshape the conventional tests into computerized format permeated the…

  19. Impact of age on the false negative rate of human papillomavirus DNA test in patients with atypical squamous cells of undetermined significance

    OpenAIRE

    Won, Kyu-Hee; Lee, Jae Yeon; Cho, Hye-Yon; Suh, Dong Hoon; No, Jae Hong; Kim, Yong-Beom

    2015-01-01

    Objective Human papillomavirus (HPV) test was incorporated into the triage of lesser abnormal cervical cytologies: atypical squamous cells of undetermined significance (ASCUS) or low-grade squamous intraepithelial lesion (LSIL). This study aimed to evaluate the impact of age on the efficacy of HPV testing in patients with lesser abnormal cervical cytologies. Methods A total of 439 patients with ASCUS or LSIL were included. The association between age groups and the diagnostic performances of ...

  20. The predictive value of the sacral base pressure test in detecting specific types of sacroiliac dysfunction

    Science.gov (United States)

    Mitchell, Travis D.; Urli, Kristina E.; Breitenbach, Jacques; Yelverton, Chris

    2007-01-01

    Abstract Objective This study aimed to evaluate the validity of the sacral base pressure test in diagnosing sacroiliac joint dysfunction. It also determined the predictive powers of the test in determining which type of sacroiliac joint dysfunction was present. Methods This was a double-blind experimental study with 62 participants. The results from the sacral base pressure test were compared against a cluster of previously validated tests of sacroiliac joint dysfunction to determine its validity and predictive powers. The external rotation of the feet, occurring during the sacral base pressure test, was measured using a digital inclinometer. Results There was no statistically significant difference in the results of the sacral base pressure test between the types of sacroiliac joint dysfunction. In terms of the results of validity, the sacral base pressure test was useful in identifying positive values of sacroiliac joint dysfunction. It was fairly helpful in correctly diagnosing patients with negative test results; however, it had only a “slight” agreement with the diagnosis for κ interpretation. Conclusions In this study, the sacral base pressure test was not a valid test for determining the presence of sacroiliac joint dysfunction or the type of dysfunction present. Further research comparing the agreement of the sacral base pressure test or other sacroiliac joint dysfunction tests with a criterion standard of diagnosis is necessary. PMID:19674694

  1. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  2. Towards model-based testing of electronic funds transfer systems

    OpenAIRE

    Asaadi, H.R.; Khosravi, R.; Mousavi, M.R.; Noroozi, N.

    2010-01-01

    We report on our first experience with applying model-based testing techniques to an operational Electronic Funds Transfer (EFT) switch. The goal is to test the conformance of the EFT switch to the standard flows described by the ISO 8583 standard. To this end, we first make a formalization of the transaction flows specified in the ISO 8583 standard in terms of a Labeled Transition System (LTS). This formalization paves the way for model-based testing based on the formal notion of Input-Outpu...

  3. Design Of Computer Based Test Using The Unified Modeling Language

    Science.gov (United States)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    Admission selection at Politeknik Negeri Bengkalis through the interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN) and the independent route (UM-Polbeng) was conducted using paper-based testing (PBT). The paper-based test model has some weaknesses: it wastes too much paper, questions may leak to the public, and test result data can be manipulated. This research aimed to create a computer-based test (CBT) model using the Unified Modeling Language (UML), which consists of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, it is important to pay attention to how the test questions are protected before they are shown, through an encryption and decryption process; the RSA cryptography algorithm was used for this. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle. The network architecture used for the computer-based test application was a client-server model on a Local Area Network (LAN). The result of the design was the computer-based test application for admission selection at Politeknik Negeri Bengkalis.
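The Fisher-Yates shuffle named in this abstract is simple to sketch. The paper gives no implementation details, so this minimal Python version (function and variable names are illustrative) shows the standard algorithm: walk backwards through the list, swapping each slot with a uniformly chosen earlier slot.

```python
import random

def fisher_yates(items, rng=None):
    """In-place Fisher-Yates shuffle; returns the shuffled list.

    rng is any random.Random instance (seedable for reproducible draws).
    """
    rng = rng or random.Random()
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)  # pick from the not-yet-fixed prefix [0, i]
        items[i], items[j] = items[j], items[i]
    return items

# e.g. draw a randomized question order for one candidate:
order = fisher_yates(list(range(1, 21)))
```

Every permutation is equally likely, which is exactly the property wanted when serving each candidate a differently ordered question bank.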

  4. PLC based control system for RAM assembly test facility

    International Nuclear Information System (INIS)

    Kulkarni, S.S.; Kumar, Vinaya; Chandra, Umesh

    1994-01-01

    The flexibility, expandability, ease of programming and diagnostic features makes the programmable logic controller (PLC) suitable for a variety of control applications in engineering system test facilities. A PLC based control system for RAM assembly test facility (RATF) and for testing the related hydraulic components is being developed and installed at BARC. This paper describes the approach taken for meeting the control requirements and illustrates the PLC software that has been developed. (author). 1 fig

  5. Functions and Design Scheme of Tibet High Altitude Test Base

    Institute of Scientific and Technical Information of China (English)

    Yu Yongqing; Guo Jian; Yin Yu; Mao Yan; Li Guangfan; Fan Jianbin; Lu Jiayu; Su Zhiyi; Li Peng; Li Qingfeng; Liao Weiming; Zhou Jun

    2010-01-01

    The functional orientation of the Tibet High Altitude Test Base, subordinated to the State Grid Corporation of China (SGCC), is to serve power transmission projects in high altitude areas, especially to provide technical support for southwestern hydropower delivery projects by UHVDC transmission and the Qinghai-Tibet grid interconnection project. This paper presents the matters considered during siting and planning, the functions, design scheme, main performances and parameters of the test facilities, as well as the test and research tasks already carried out.

  6. A LabVIEWTM-based detector testing system

    International Nuclear Information System (INIS)

    Yang Haori; Li Yuanjing; Wang Yi; Li Yulan; Li Jin

    2003-01-01

    The construction of a LabVIEW-based detector testing system is described in this paper. In this system, the detector signal is amplified and digitized, so an amplitude or time spectrum can be obtained. The analog-to-digital converter is a peak-sensing ADC based on the VME bus. The virtual instrument built with LabVIEW can be used to acquire data, draw spectra and save test results

  7. Home-based HIV counselling and testing in Western Kenya ...

    African Journals Online (AJOL)

    Home-based HIV counselling and testing was feasible among this rural population in western Kenya, with a majority of the population accepting to get tested. These data suggest that scaling-up of HBCT is possible and may enable large numbers of individuals to know their HIV serostatus in sub-Saharan Africa.

  8. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    DEFF Research Database (Denmark)

    H. Hjort, Ulrik; Rasmussen, Jacob Illum; Larsen, Kim Guldstrand

    2009-01-01

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML Statemachine model and generates…

  9. Automated model-based testing of hybrid systems

    NARCIS (Netherlands)

    Osch, van M.P.W.J.

    2009-01-01

    In automated model-based input-output conformance testing, tests are automatically generated from a specification and automatically executed on an implementation. Input is applied to the implementation and output is observed from the implementation. If the observed output is allowed according to

  10. Towards model-based testing of electronic funds transfer systems

    NARCIS (Netherlands)

    Asaadi, H.R.; Khosravi, R.; Mousavi, M.R.; Noroozi, N.; Arbab, F.; Sirjani, M.

    2012-01-01

    We report on our first experience with applying model-based testing techniques to an operational Electronic Funds Transfer (EFT) switch. The goal is to test the conformance of the EFT switch to the standard flows described by the ISO 8583 standard. To this end, we first make a formalization of the

  11. Towards model-based testing of electronic funds transfer systems

    NARCIS (Netherlands)

    Asaadi, H.R.; Khosravi, R.; Mousavi, M.R.; Noroozi, N.

    2010-01-01

    We report on our first experience with applying model-based testing techniques to an operational Electronic Funds Transfer (EFT) switch. The goal is to test the conformance of the EFT switch to the standard flows described by the ISO 8583 standard. To this end, we first make a formalization of the

  12. Conceptualizing Teaching to the Test under Standards-Based Reform

    Science.gov (United States)

    Welsh, Megan E.; Eastwood, Melissa; D'Agostino, Jerome V.

    2014-01-01

    Teacher and school accountability systems based on high-stakes tests are ubiquitous throughout the United States and appear to be growing as a catalyst for reform. As a result, educators have increased the proportion of instructional time devoted to test preparation. Although guidelines for what constitutes appropriate and inappropriate test…

  13. Microcomputer based test system for charge coupled devices

    International Nuclear Information System (INIS)

    Sidman, S.

    1981-02-01

    A microcomputer based system for testing analog charge coupled integrated circuits has been developed. It measures device performance for three parameters: dynamic range, baseline shift due to leakage current, and transfer efficiency. A companion board tester has also been developed. The software consists of a collection of BASIC and assembly language routines developed on the test system microcomputer

  14. Defect-based testing of LTS digital circuits

    NARCIS (Netherlands)

    Arun, A.J.

    2006-01-01

    A Defect-Based Test (DBT) methodology for Superconductor Electronics (SCE) is presented in this thesis, so that commercial production and efficient testing of systems can be implemented in this technology in the future. In the first chapter, the features and prospects for SCE have been presented.

  15. A versatile electrophoresis-based self-test platform.

    Science.gov (United States)

    Staal, Steven; Ungerer, Mathijn; Floris, Arjan; Ten Brinke, Hans-Willem; Helmhout, Roy; Tellegen, Marian; Janssen, Kjeld; Karstens, Erik; van Arragon, Charlotte; Lenk, Stefan; Staijen, Erik; Bartholomew, Jody; Krabbe, Hans; Movig, Kris; Dubský, Pavel; van den Berg, Albert; Eijkel, Jan

    2015-03-01

    This paper reports on recent research creating a family of electrophoresis-based point of care devices for the determination of a wide range of ionic analytes in various sample matrices. These devices are based on a first version for the point-of-care measurement of Li(+), reported in 2010 by Floris et al. (Lab Chip 2010, 10, 1799-1806). With respect to this device, significant improvements in accuracy, precision, detection limit, and reliability have been obtained especially by the use of multiple injections of one sample on a single chip and integrated data analysis. Internal and external validation by clinical laboratories for the determination of analytes in real patients by a self-test is reported. For Li(+) in blood better precision than the standard clinical determination for Li(+) was achieved. For Na(+) in human urine the method was found to be within the clinical acceptability limits. In a veterinary application, Ca(2+) and Mg(2+) were determined in bovine blood by means of the same chip, but using a different platform. Finally, promising preliminary results are reported with the Medimate platform for the determination of creatinine in whole blood and quantification of both cations and anions through replicate measurements on the same sample with the same chip. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Visually directed vs. software-based targeted biopsy compared to transperineal template mapping biopsy in the detection of clinically significant prostate cancer.

    Science.gov (United States)

    Valerio, Massimo; McCartan, Neil; Freeman, Alex; Punwani, Shonit; Emberton, Mark; Ahmed, Hashim U

    2015-10-01

    Targeted biopsy based on cognitive or software magnetic resonance imaging (MRI) to transrectal ultrasound registration seems to increase the detection rate of clinically significant prostate cancer as compared with standard biopsy. However, these strategies have not yet been directly compared against an accurate reference test. The aim of this study was to obtain pilot data on the diagnostic ability of visually directed targeted biopsy vs. software-based targeted biopsy, considering transperineal template mapping (TPM) biopsy as the reference test. This prospective paired cohort study included 50 consecutive men undergoing TPM with one or more visible targets detected on preoperative multiparametric MRI. Targets were contoured on the Biojet software. Patients initially underwent software-based targeted biopsies, then visually directed targeted biopsies, and finally systematic TPM. The detection rate of clinically significant disease (Gleason score ≥3+4 and/or maximum cancer core length ≥4 mm) of one strategy against another was compared by 3×3 contingency tables. Secondary analyses were performed using a less stringent threshold of significance (Gleason score ≥4+3 and/or maximum cancer core length ≥6 mm). Median age was 68 years (interquartile range: 63-73); median prostate-specific antigen level was 7.9 ng/mL (6.4-10.2). A total of 79 targets were detected, with a mean of 1.6 targets per patient. Of these, 27 (34%), 28 (35%), and 24 (31%) were scored 3, 4, and 5, respectively. At a patient level, the detection rate was 32 (64%), 34 (68%), and 38 (76%) for visually directed targeted biopsy, software-based biopsy, and TPM, respectively. Combining the 2 targeted strategies would have led to a detection rate of 39 (78%). At both a patient level and a target level, software-based targeted biopsy found more clinically significant disease than did visually directed targeted biopsy, although this was not statistically significant (22% vs. 14%, P = 0.48; 51.9% vs. 44.3%, P = 0.24). Secondary

  17. Intensity-Modulated Radiotherapy Results in Significant Decrease in Clinical Toxicities Compared With Conventional Wedge-Based Breast Radiotherapy

    International Nuclear Information System (INIS)

    Harsolia, Asif; Kestin, Larry; Grills, Inga; Wallace, Michelle; Jolly, Shruti; Jones, Cortney; Lala, Moinaktar; Martinez, Alvaro; Schell, Scott; Vicini, Frank A.

    2007-01-01

    Purpose: We have previously demonstrated that intensity-modulated radiotherapy (IMRT) with a static multileaf collimator process results in a more homogeneous dose distribution compared with conventional wedge-based whole breast irradiation (WBI). In the present analysis, we reviewed the acute and chronic toxicity of this IMRT approach compared with conventional wedge-based treatment. Methods and Materials: A total of 172 patients with Stage 0-IIB breast cancer were treated with lumpectomy followed by WBI. All patients underwent treatment planning computed tomography and received WBI (median dose, 45 Gy) followed by a boost to 61 Gy. Of the 172 patients, 93 (54%) were treated with IMRT, and the 79 patients (46%) treated with wedge-based RT in a consecutive fashion immediately before this cohort served as the control group. The median follow-up was 4.7 years. Results: A significant reduction in acute Grade 2 or worse dermatitis, edema, and hyperpigmentation was seen with IMRT compared with wedges. A trend was found toward reduced acute Grade 3 or greater dermatitis (6% vs. 1%, p = 0.09) in favor of IMRT. Chronic Grade 2 or worse breast edema was significantly reduced with IMRT compared with conventional wedges. No difference was found in cosmesis scores between the two groups. In patients with larger breasts (≥1,600 cm³, n = 64), IMRT resulted in reduced acute (Grade 2 or greater) breast edema (0% vs. 36%, p < 0.001) and hyperpigmentation (3% vs. 41%, p < 0.001) and chronic (Grade 2 or greater) long-term edema (3% vs. 30%, p = 0.007). Conclusion: The use of IMRT in the treatment of the whole breast results in a significant decrease in acute dermatitis, edema, and hyperpigmentation and a reduction in the development of chronic breast edema compared with conventional wedge-based RT

  18. A multiparametric magnetic resonance imaging-based risk model to determine the risk of significant prostate cancer prior to biopsy.

    Science.gov (United States)

    van Leeuwen, Pim J; Hayen, Andrew; Thompson, James E; Moses, Daniel; Shnier, Ron; Böhm, Maret; Abuodha, Magdaline; Haynes, Anne-Maree; Ting, Francis; Barentsz, Jelle; Roobol, Monique; Vass, Justin; Rasiah, Krishan; Delprado, Warick; Stricker, Phillip D

    2017-12-01

    To develop and externally validate a predictive model for detection of significant prostate cancer. Development of the model was based on a prospective cohort of 393 men who underwent multiparametric magnetic resonance imaging (mpMRI) before biopsy. External validity of the model was then examined retrospectively in 198 men from a separate institution who underwent mpMRI followed by biopsy for an abnormal prostate-specific antigen (PSA) level or digital rectal examination (DRE). A model was developed with age, PSA level, DRE, prostate volume, previous biopsy, and Prostate Imaging Reporting and Data System (PIRADS) score as predictors for significant prostate cancer (Gleason 7 with >5% grade 4, ≥20% cores positive or ≥7 mm of cancer in any core). Probability was studied via logistic regression. Discriminatory performance was quantified by concordance statistics and internally validated with bootstrap resampling. In all, 393 men had complete data and 149 (37.9%) had significant prostate cancer. While the variable model had good accuracy in predicting significant prostate cancer, with an area under the curve (AUC) of 0.80, the advanced model (incorporating mpMRI) had a significantly higher AUC of 0.88 (P prostate cancer. Individualised risk assessment of significant prostate cancer using a predictive model that incorporates the mpMRI PIRADS score and clinical data allows a considerable reduction in unnecessary biopsies and a reduction of the risk of over-detection of insignificant prostate cancer at the cost of a very small increase in the number of significant cancers missed. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
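A logistic-regression risk model of the kind this abstract describes maps a patient's predictor values to a probability through the logistic function. A minimal sketch, where the function name and all coefficients are placeholders rather than the published model:

```python
import math

def logistic_risk(intercept, coefs, values):
    """Predicted probability from a fitted logistic regression model:
    p = 1 / (1 + exp(-(intercept + sum_i coef_i * x_i)))."""
    z = intercept + sum(c * v for c, v in zip(coefs, values))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for [age, log(PSA), PIRADS score] -- illustrative only:
p = logistic_risk(-7.0, [0.03, 0.9, 1.1], [68, math.log(7.9), 4])
```

In practice a biopsy threshold is chosen on p (e.g. biopsy only when p exceeds some cut-off), which is how such a model trades fewer unnecessary biopsies against the occasional missed cancer.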

  19. Uncertainty management in knowledge based systems for nondestructive testing-an example from ultrasonic testing

    International Nuclear Information System (INIS)

    Rajagopalan, C.; Kalyanasundaram, P.; Baldev Raj

    1996-01-01

    The use of fuzzy logic, as a framework for uncertainty management, in a knowledge-based system (KBS) for ultrasonic testing of austenitic stainless steels is described. Parameters that may contain uncertain values are identified. Methodologies to handle uncertainty in these parameters using fuzzy logic are detailed. The overall improvement in the performance of the knowledge-based system after incorporating fuzzy logic is discussed. The methodology developed being universal, its extension to other KBS for nondestructive testing and evaluation is highlighted. (author)

  20. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    Science.gov (United States)

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.
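For orientation, the Wilcoxon signed rank statistic that the authors benchmark against can be computed in a few lines. This is a sketch of the classical W+ statistic, not the authors' distance-based statistic, and it skips the midrank correction for tied absolute differences:

```python
def wilcoxon_w_plus(x, y):
    """W+ = sum of the ranks of the positive paired differences.

    Zero differences are dropped; ties among |d| are not midranked here.
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    # Indices of the differences ordered by magnitude (rank 1 = smallest |d|).
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    return sum(rank + 1 for rank, i in enumerate(order) if diffs[i] > 0)

# W+ for a small paired sample:
w = wilcoxon_w_plus([1, 2, 3, 4, 5], [0, 3, 1, 2, 4])  # → 13
```

Because W+ depends only on the signs and rank order of the differences, not their magnitudes, it shares the robustness to extreme values that motivates rank-based statistics in general.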

  1. HEV Test Bench Based on CAN Bus Sensor Communication

    Directory of Open Access Journals (Sweden)

    Shupeng ZHAO

    2014-02-01

    The HEV test bench based on the Controller Area Network (CAN) bus was studied and developed. The control system of the HEV power test bench used CAN bus technology; the application of CAN bus technology to control system development has opened up a new research direction for domestic automobile experimental platforms. The HEV power control system development work was completed, including the power master controller, electric throttle controller, driving simulation platform, formulation of CAN 2.0B communication protocol procedures, a CAN communication monitoring system, and research on automatic code generation from MATLAB simulation models. The maximum absorption power of the test bench is 90 kW, its top speed is 6000 r/min, and the CAN communication baud rate is 10-500 k; the precision of the conventional electrical measurement parameters satisfies the requirements of HEV development. Regenerative braking experiments on the HEV test bench show that the results obtained on the bench are close to those obtained in outdoor road tests, and fuel consumption experiments show that HEV fuel consumption and the charge-discharge characteristics are in a linear relationship. The establishment of this test platform provides a physical simulation and test platform for the evaluation and development of hybrid electric vehicles and their power systems.

  2. Space Launch System Base Heating Test: Experimental Operations & Results

    Science.gov (United States)

    Dufrene, Aaron; Mehta, Manish; MacLean, Matthew; Seaford, Mark; Holden, Michael

    2016-01-01

    NASA's Space Launch System (SLS) uses four clustered liquid rocket engines along with two solid rocket boosters. The interaction between all six rocket exhaust plumes will produce a complex and severe thermal environment in the base of the vehicle. This work focuses on a recent 2% scale, hot-fire SLS base heating test. These base heating tests are short-duration tests executed with chamber pressures near the full-scale values with gaseous hydrogen/oxygen engines and RSRMV analogous solid propellant motors. The LENS II shock tunnel/Ludwieg tube tunnel was used at or near flight duplicated conditions up to Mach 5. Model development was based on the Space Shuttle base heating tests with several improvements including doubling of the maximum chamber pressures and duplication of freestream conditions. Test methodology and conditions are presented, and base heating results from 76 runs are reported in non-dimensional form. Regions of high heating are identified and comparisons of various configuration and conditions are highlighted. Base pressure and radiometer results are also reported.

  3. On school choice and test-based accountability.

    Directory of Open Access Journals (Sweden)

    Damian W. Betebenner

    2005-10-01

    Full Text Available Two of the most prominent school reform measures currently being implemented in the United States are school choice and test-based accountability. Until recently, the two policy initiatives remained relatively distinct from one another. With the passage of the No Child Left Behind Act of 2001 (NCLB), a mutualism between choice and accountability emerged whereby school choice complements test-based accountability. In the first portion of this study we present a conceptual overview of school choice and test-based accountability and explicate connections between the two that are explicit in reform implementations like NCLB or implicit within the market-based reform literature in which school choice and test-based accountability reside. In the second portion we scrutinize these connections using a large western school district with a popular choice system in place. Data from three sources are combined to explore the ways in which school choice and test-based accountability draw on each other: state assessment data for children in the district, school choice data for every participating student in the district choice program, and a parental survey of both participants and non-participants in choice asking their attitudes concerning the use of school report cards in the district. Results suggest that choice is of academic benefit only to the lowest-achieving students, that choice participation is not uniform across different ethnic groups in the district, and that parents' primary motivations for participating in choice, as reported on a survey, are not test scores, though this is not consistent with choice preferences among parents in the district. As such, our results generally confirm the hypotheses of choice critics more than those of advocates. Keywords: school choice; accountability; student testing.

  4. Predicting the occurrence of iron chlorosis in grapevine with tests based on soil iron forms

    Directory of Open Access Journals (Sweden)

    Isabel Díaz de la Torre

    2010-06-01

    Significance and impact of study: This study has shown the limited usefulness of tests based on the contents and reactivity of the soil carbonate to predict the occurrence of Fe chlorosis in grapevine; tests capable of estimating the contents of the labile soil Fe forms constitute the best alternative.

  5. Optimization of organic contaminant and toxicity testing analytical procedures for estimating the characteristics and environmental significance of natural gas processing plant waste sludges

    International Nuclear Information System (INIS)

    Novak, N.

    1990-10-01

    The Gas Plant Sludge Characterization Phase IIB program is a continuation of the Canadian Petroleum Association's (CPA) initiatives to characterize sludge generated at gas processing plants. The objectives of the Phase IIB project were to develop an effective procedure for screening waste sludges or centrifuge/leachate generated from sludge samples for volatile, solvent-soluble and water-soluble organics; verify the reproducibility of the three aquatic toxicity tests recommended as the battery of tests for determining the environmental significance of sludge centrifugates or leachates; assess the performance of two terrestrial toxicity tests in determining the environmental significance of whole sludge samples applied to soil; and to assess and discuss the reproducibility and cost-effectiveness of the sampling and analytical techniques proposed for the overall sludge characterization procedure. Conclusions and recommendations are provided for sludge collection, preparation and distribution, organic analyses, toxicity testing, project management, and procedure standardization. The three aquatic and two terrestrial toxicity tests proved effective in indicating the toxicity of complex mixtures. 27 refs., 3 figs., 59 tabs

  6. Development of a new test method for Mineral Based Composites

    DEFF Research Database (Denmark)

    Täljsten, Björn; Orosz, Katalin

    2008-01-01

    The well-known wedge splitting test, often used for characterizing brittle materials, has been modified and adapted to testing MBC-reinforced concrete under splitting load. MBC (Mineral Based Composites) is a newly developed strengthening system for existing concrete structures in which FRPs, mainly CFRP grids, are externally bonded to the concrete surface by means of cementitious bonding agents. Crack development, crack patterns, crack opening displacement (COD) versus splitting load and fracture energy are investigated and evaluated. Development of a suitable test specimen and test setup has been…

  7. Development of seismic technology and reliability based on vibration tests

    International Nuclear Information System (INIS)

    Sasaki, Youichi

    1997-01-01

    This paper deals with some of the vibration tests and investigations on the seismic safety of nuclear power plants (NPPs) in Japan. To ensure the reliability of the seismic safety of nuclear power plants, nuclear power plants in Japan have been designed according to the Technical Guidelines for Aseismic Design of Nuclear Power Plants. This guideline has been developed based on a technical data base and findings obtained from many vibration tests and investigations. Besides the tests for the guideline, proving tests on the seismic reliability of operating nuclear power plant equipment and systems have been carried out. In this paper some vibration tests and their evaluation results are presented; they have contributed crucially to the development of the guideline. (J.P.N.)

  8. Web based aphasia test using service oriented architecture (SOA)

    International Nuclear Information System (INIS)

    Voos, J A; Vigliecca, N S; Gonzalez, E A

    2007-01-01

    Based on an aphasia test for Spanish speakers which analyzes the patient's basic resources of verbal communication, web-enabled software was developed to automate its execution. A clinical database was designed as a complement, in order to evaluate the antecedents (risk factors, pharmacological and medical backgrounds, neurological or psychiatric symptoms, brain injury anatomical and physiological characteristics, etc.) which are necessary to carry out a multi-factor statistical analysis in different samples of patients. The automated test was developed following service oriented architecture and implemented in a web site which contains a test suite, which would allow both integrating the aphasia test with other neuropsychological instruments and increasing the available site information for scientific research. The test design, the database and the study of its psychometric properties (validity, reliability and objectivity) were made in conjunction with neuropsychological researchers, who participated actively in the software design, drawing on feedback from the patients and other research subjects.

  9. Development of seismic technology and reliability based on vibration tests

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, Youichi [Nuclear Power Engineering Corp., Tokyo (Japan)

    1997-03-01

    This paper deals with some of the vibration tests and investigations on the seismic safety of nuclear power plants (NPPs) in Japan. To ensure the reliability of the seismic safety of nuclear power plants, nuclear power plants in Japan have been designed according to the Technical Guidelines for Aseismic Design of Nuclear Power Plants. This guideline has been developed based on a technical data base and findings obtained from many vibration tests and investigations. Besides the tests for the guideline, proving tests on the seismic reliability of operating nuclear power plant equipment and systems have been carried out. In this paper some vibration tests and their evaluation results are presented; they have contributed crucially to the development of the guideline. (J.P.N.)

  10. Web based aphasia test using service oriented architecture (SOA)

    Energy Technology Data Exchange (ETDEWEB)

    Voos, J A [Clinical Engineering R and D Center, Universidad Tecnologica Nacional, Facultad Regional Cordoba, Cordoba (Argentina); Vigliecca, N S [Consejo Nacional de Investigaciones Cientificas y Tecnicas, CONICET, Cordoba (Argentina); Gonzalez, E A [Clinical Engineering R and D Center, Universidad Tecnologica Nacional, Facultad Regional Cordoba, Cordoba (Argentina)

    2007-11-15

    Based on an aphasia test for Spanish speakers which analyzes the patient's basic resources of verbal communication, web-enabled software was developed to automate its execution. A clinical database was designed as a complement, in order to evaluate the antecedents (risk factors, pharmacological and medical backgrounds, neurological or psychiatric symptoms, brain injury anatomical and physiological characteristics, etc.) which are necessary to carry out a multi-factor statistical analysis in different samples of patients. The automated test was developed following service oriented architecture and implemented in a web site which contains a test suite, which would allow both integrating the aphasia test with other neuropsychological instruments and increasing the available site information for scientific research. The test design, the database and the study of its psychometric properties (validity, reliability and objectivity) were made in conjunction with neuropsychological researchers, who participated actively in the software design, drawing on feedback from the patients and other research subjects.

  11. MATT: Multi Agents Testing Tool Based Nets within Nets

    Directory of Open Access Journals (Sweden)

    Sara Kerraoui

    2016-12-01

    As part of this effort, we propose a model-based testing approach for multi-agent systems built on a model called Reference nets, and we develop a tool that aims to provide a uniform and automated approach. The feasibility and advantages of the proposed approach are shown through a short case study.

  12. Evolution of a Computer-Based Testing Laboratory

    Science.gov (United States)

    Moskal, Patrick; Caldwell, Richard; Ellis, Taylor

    2009-01-01

    In 2003, faced with increasing growth in technology-based and large-enrollment courses, the College of Business Administration at the University of Central Florida opened a computer-based testing lab to facilitate administration of course examinations. Patrick Moskal, Richard Caldwell, and Taylor Ellis describe the development and evolution of the…

  13. Test-Access Planning and Test Scheduling for Embedded Core-Based System Chips

    OpenAIRE

    Goel, Sandeep Kumar

    2005-01-01

    Advances in semiconductor process technology enable the creation of a complete system on a single die, the so-called system chip or SOC. To reduce time-to-market for large SOCs, reuse of pre-designed and pre-verified blocks called cores is employed. Like the design style, testing of SOCs can be best approached in a core-based fashion. In order to enable core-based test development, an embedded core should be isolated from its surrounding circuitry and electrical test access from chip pins...
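Core-based SOC test planning typically assigns each core's test to limited test-access-mechanism (TAM) resources so that overall test time is minimized. As a hedged sketch only, this generic longest-processing-time heuristic (not the thesis's actual algorithm) models TAM buses as parallel machines and core test lengths as jobs; all values are invented.

```python
import heapq

def schedule_core_tests(test_lengths, num_tam_buses):
    """Greedily assign core test lengths (e.g. clock cycles) to TAM buses,
    longest test first, always onto the least-loaded bus (LPT heuristic).
    Returns (makespan, assignment), where assignment[b] lists the tests on bus b."""
    buses = [(0, b) for b in range(num_tam_buses)]  # (current load, bus index)
    heapq.heapify(buses)
    assignment = [[] for _ in range(num_tam_buses)]
    for length in sorted(test_lengths, reverse=True):
        load, b = heapq.heappop(buses)   # least-loaded bus so far
        assignment[b].append(length)
        heapq.heappush(buses, (load + length, b))
    makespan = max(load for load, _ in buses)
    return makespan, assignment
```

LPT is a classic 4/3-approximation for this scheduling formulation; published SOC test-scheduling work uses more elaborate models (wrapper design, TAM width partitioning), which this sketch deliberately omits.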

  14. Penetration of gas into concrete during a leakage rate test of reactor containments and its significance for the drop in pressure

    Directory of Open Access Journals (Sweden)

    Nilsson L.-O.

    2011-04-01

    Full Text Available The objective of the project described in the paper was to develop a simulation model that describes transient air pressure distribution in concrete in order to see if the leakage rates obtained from the Containment Integrated Leakage Rate Tests can be explained by the transient air pressurization of concrete pores inside the steel liner. A partial differential equation was derived which describes transient air pressure distribution in concrete pores. The model was validated against experimental results. The simulation model shows that there are significant air fluxes into the concrete structures that can explain the pressure drop during a leakage test.
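The abstract derives a partial differential equation for transient air pressure in concrete pores. As an illustration under strong assumptions (a linearized diffusion equation ∂p/∂t = α ∂²p/∂x², a 1-D wall, and invented coefficient values, none taken from the paper), a minimal explicit finite-difference sketch:

```python
def pressurize_concrete(p0, p_boundary, alpha, dx, dt, steps):
    """Explicit finite-difference sketch of a linearized pressure-diffusion
    equation dp/dt = alpha * d2p/dx2 in a 1-D concrete wall.

    p0: initial pore-pressure profile (list of Pa). The x=0 face is held at
    p_boundary (the containment test pressure); the far face is zero-flux.
    All parameter values used with this sketch are illustrative."""
    assert alpha * dt / dx**2 <= 0.5, "explicit scheme stability limit"
    p = list(p0)
    n = len(p)
    for _ in range(steps):
        p[0] = p_boundary               # pressurized face
        new = p[:]
        for i in range(1, n - 1):
            new[i] = p[i] + alpha * dt / dx**2 * (p[i+1] - 2*p[i] + p[i-1])
        new[-1] = new[-2]               # zero-flux far boundary
        p = new
    return p
```

Run forward in time, the profile shows air steadily flowing into the wall, which is the qualitative mechanism the paper offers for the measured pressure drop.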

  15. Evaluation of the aspartate aminotransferase/platelet ratio index and enhanced liver fibrosis tests to detect significant fibrosis due to chronic hepatitis C.

    Science.gov (United States)

    Petersen, John R; Stevenson, Heather L; Kasturi, Krishna S; Naniwadekar, Ashutosh; Parkes, Julie; Cross, Richard; Rosenberg, William M; Xiao, Shu-Yuan; Snyder, Ned

    2014-04-01

    The assessment of liver fibrosis in chronic hepatitis C patients is important for prognosis and for making decisions regarding antiviral treatment. Although liver biopsy is considered the reference standard for assessing hepatic fibrosis in patients with chronic hepatitis C, it is invasive and associated with sampling and interobserver variability. Serum fibrosis markers have been utilized as surrogates for a liver biopsy. We completed a prospective study of 191 patients in which blood draws and liver biopsies were performed at the same visit. Using the liver biopsies, the sensitivity, specificity, and negative and positive predictive values for both the aspartate aminotransferase/platelet ratio index (APRI) and the enhanced liver fibrosis (ELF) test were determined. The patients were divided into training and validation sets to develop and validate a clinically useful algorithm for differentiating mild and significant fibrosis. The area under the ROC curve for the APRI and ELF tests for the training set was 0.865 and 0.880, respectively. The clinical sensitivity in separating mild (F0-F1) from significant fibrosis (F2-F4) was 80% and 86.0%, with a clinical specificity of 86.7% and 77.8%, respectively. For the validation set the area under the ROC curve for the APRI and ELF tests was 0.855 and 0.780, respectively. The clinical sensitivity of the APRI and ELF tests in separating mild (F0-F1) from significant (F2-F4) fibrosis for the validation set was 90.0% and 70.0%, with a clinical specificity of 73.3% and 86.7%, respectively. There were no differences between the APRI and ELF tests in distinguishing mild from significant fibrosis for either the training or validation sets (P=0.61 and 0.20, respectively). Using APRI as the primary test, followed by ELF for patients in the intermediate zone, would have decreased the number of liver biopsies needed by 40% for the validation set. Overall, use of our algorithm would have decreased the number of patients who needed a liver biopsy.
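The APRI has a standard published formula: APRI = (AST / upper limit of normal) × 100 / platelet count (10⁹/L). The two-stage triage below mirrors the shape of the algorithm the abstract describes (APRI first, ELF only for the intermediate zone), but the cut-off values are commonly cited ones, not those derived in this study:

```python
def apri(ast_u_l, ast_uln_u_l, platelets_10e9_l):
    """Aspartate aminotransferase-to-platelet ratio index (standard formula):
    APRI = (AST / upper limit of normal) * 100 / platelet count (10^9/L)."""
    return (ast_u_l / ast_uln_u_l) * 100.0 / platelets_10e9_l

def triage(apri_value, elf_value=None, lo=0.5, hi=1.5, elf_cut=9.8):
    """Two-stage triage: APRI alone when decisive, ELF for the grey zone.
    lo/hi/elf_cut are commonly cited cut-offs, NOT the study's own values."""
    if apri_value < lo:
        return "mild (F0-F1)"
    if apri_value > hi:
        return "significant (F2-F4)"
    if elf_value is None:
        return "indeterminate: order ELF"
    return "significant (F2-F4)" if elf_value >= elf_cut else "mild (F0-F1)"
```

The point of such a cascade, as the abstract notes, is that the cheap first-line test resolves most patients and the second test is only purchased for the indeterminate minority.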

  16. Frequency of Testing for Dyslipidemia: An Evidence-Based Analysis

    Science.gov (United States)

    2014-01-01

    Background Dyslipidemias include high levels of total cholesterol, low-density lipoprotein (LDL) cholesterol, and triglycerides and low levels of high-density lipoprotein (HDL) cholesterol. Dyslipidemia is a risk factor for cardiovascular disease, which is a major contributor to mortality in Canada. Approximately 23% of the 2009/11 Canadian Health Measures Survey (CHMS) participants had a high level of LDL cholesterol, with prevalence increasing with age, and approximately 15% had a total cholesterol to HDL ratio above the threshold. Objectives To evaluate the frequency of lipid testing in adults not diagnosed with dyslipidemia and in adults on treatment for dyslipidemia. Research Methods A systematic review of the literature set out to identify randomized controlled trials (RCTs), systematic reviews, health technology assessments (HTAs), and observational studies published between January 1, 2000, and November 29, 2012, that evaluated the frequency of testing for dyslipidemia in the 2 populations. Results Two observational studies assessed the frequency of lipid testing, 1 in individuals not on lipid-lowering medications and 1 in treated individuals. Both studies were based on previously collected data intended for a different objective and, therefore, no conclusions could be reached about the frequency of testing at intervals other than the ones used in the original studies. Given this limitation and generalizability issues, the quality of evidence was considered very low. No evidence for the frequency of lipid testing was identified in the 2 HTAs included. Canadian and international guidelines recommend testing for dyslipidemia in individuals at an increased risk for cardiovascular disease. The frequency of testing recommended is based on expert consensus. Conclusions Conclusions on the frequency of lipid testing could not be made based on the 2 observational studies. Current guidelines recommend lipid testing in adults with increased cardiovascular risk, with…

  17. Test of cold asphalt storability based on alternative approaches

    Science.gov (United States)

    Abaffyová, Zora; Komačka, Jozef

    2017-09-01

    Cold asphalt products for pothole repairs should remain workable (soft enough) for a long time to ensure their applicability. Storability is assessed indirectly using various tests of workability. Therefore, simple test methods (a self-compaction test and a disintegration test) were developed and verified to investigate changes in the storability of this group of cold asphalts. In the first stage, self-compaction of the tested mixture was assessed in the upturned Abram’s cone used for the cement concrete slump test and in the mould for the California Bearing Ratio (CBR) test. After that, a video record of the disintegration test was taken. During this test, the mould was lifted up and the mixture fell out of the mould (Abram’s cone) or disintegrated (CBR mould). The drop of the surface after 10 min of self-compaction and the net time related to the falling out or disintegration of the mixture were used to evaluate the mixture from the storability point of view. It was found that the self-compaction test does not have the potential to reveal and prove changes in mixture properties. Based on the disintegration test results, it can be stated that this test at 5 °C using the upturned Abram’s cone could be a suitable approach to determine qualitative changes of a cold mixture from the storability point of view.

  18. Moisture distribution in sludges based on different testing methods

    Institute of Scientific and Technical Information of China (English)

    Wenyi Deng; Xiaodong Li; Jianhua Yan; Fei Wang; Yong Chi; Kefa Cen

    2011-01-01

    Moisture distributions in municipal sewage sludge, printing and dyeing sludge and paper mill sludge were experimentally studied based on four different methods, i.e., a drying test, a thermogravimetric-differential thermal analysis (TG-DTA) test, a thermogravimetric-differential scanning calorimetry (TG-DSC) test and a water activity test. The results indicated that the moisture in the mechanically dewatered sludges comprised interstitial water, surface water and bound water. The interstitial water accounted for more than 50% wet basis (wb) of the total moisture content. The bond strength of sludge moisture increased with decreasing moisture content, especially when the moisture content was lower than 50% wb. Furthermore, a comparison among the four testing methods was presented. The advantage of the drying test is its ability to quantify free water, interstitial water, surface water and bound water, while the TG-DSC, TG-DTA and water activity tests are capable of determining the bond strength of moisture in sludge. It was found that the results from the TG-DSC and TG-DTA tests are more persuasive than those from the water activity test.
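In a drying test, free water is conventionally identified with the mass lost during the initial constant-rate drying period, before the falling-rate periods associated with more tightly bound water begin. As an illustrative sketch only (synthetic masses, an invented tolerance, and not the authors' actual procedure):

```python
def drying_rate(masses, dt):
    """Drying rate (mass loss per time step) between successive weighings."""
    return [(masses[i] - masses[i + 1]) / dt for i in range(len(masses) - 1)]

def free_water_mass(masses, dt, tol=0.05):
    """Estimate free-water mass as the mass lost during the initial
    constant-rate drying period: rates within tol (fractional) of the
    initial rate. The tolerance is an arbitrary illustrative choice."""
    rates = drying_rate(masses, dt)
    r0 = rates[0]
    lost = 0.0
    for i, r in enumerate(rates):
        if abs(r - r0) > tol * r0:   # falling-rate period begins
            break
        lost += masses[i] - masses[i + 1]
    return lost
```

The remaining water categories (interstitial, surface, bound) would be split analogously at later inflection points of the drying-rate curve, which this sketch does not attempt.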

  19. Testing ESL sociopragmatics development and validation of a web-based test battery

    CERN Document Server

    Roever, Carsten; Elder, Catherine

    2014-01-01

    Testing of second language pragmatics has grown as a research area but still suffers from a tension between construct coverage and practicality. In this book, the authors describe the development and validation of a web-based test of second language pragmatics for learners of English. The test has a sociopragmatic orientation and strives for a broad coverage of the construct by assessing learners' metapragmatic judgments as well as their ability to co-construct discourse. To ensure practicality, the test is delivered online and is scored partially automatically and partially by human raters.

  20. Comparison of the clinical performance of an HPV mRNA test and an HPV DNA test in triage of atypical squamous cells of undetermined significance (ASC-US)

    DEFF Research Database (Denmark)

    Waldstrom, M; Ornskov, D

    2012-01-01

    The effect of triaging women with atypical squamous cells of undetermined significance (ASC-US) with human papillomavirus (HPV) DNA testing has been well documented. New tests detecting HPV E6/E7 mRNA are emerging, claiming to be more specific for detecting high-grade disease. We evaluated the cl...

  1. Testing Game-Based Performance in Team-Handball.

    Science.gov (United States)

    Wagner, Herbert; Orwat, Matthias; Hinz, Matthias; Pfusterschmied, Jürgen; Bacharach, David W; von Duvillard, Serge P; Müller, Erich

    2016-10-01

    Wagner, H, Orwat, M, Hinz, M, Pfusterschmied, J, Bacharach, DW, von Duvillard, SP, and Müller, E. Testing game-based performance in team-handball. J Strength Cond Res 30(10): 2794-2801, 2016. Team-handball is a fast-paced game of defensive and offensive action that includes specific movements of jumping, passing, throwing, checking, and screening. To date and to the best of our knowledge, a game-based performance test (GBPT) for team-handball does not exist. Therefore, the aim of this study was to develop and validate such a test. Seventeen experienced team-handball players performed 2 GBPTs separated by 7 days between each test, an incremental treadmill running test, and a team-handball test game (TG) (2 × 20 minutes). Peak oxygen uptake (V̇O2peak), blood lactate concentration (BLC), heart rate (HR), sprinting time, time of offensive and defensive actions as well as running intensities, ball velocity, and jump height were measured in the game-based test. Reliability of the tests was calculated using an intraclass correlation coefficient (ICC). Additionally, we measured V̇O2peak in the incremental treadmill running test and BLC, HR, and running intensities in the team-handball TG to determine the validity of the GBPT. For the test-retest reliability, we found an ICC >0.70 for the peak BLC and HR, mean offense and defense time, as well as ball velocity, and an ICC >0.90 for the V̇O2peak in the GBPT. Percent walking and standing constituted 73% of total time. Moderate (18%) and high (9%) intensity running in the GBPT was similar to the team-handball TG. Our results indicated that the GBPT is a valid and reliable test to analyze team-handball performance (physiological and biomechanical variables) under conditions similar to competition.
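Test-retest reliability in the abstract is quantified with an intraclass correlation coefficient. A minimal pure-Python sketch of ICC(3,1) (two-way mixed, consistency, single measure) is shown below; the abstract does not state which ICC form the authors used, so this particular form is an assumption.

```python
def icc_3_1(data):
    """ICC(3,1): two-way mixed, single-measure, consistency ICC,
    commonly used for test-retest reliability.

    data: list of subjects, each a list of k repeated measurements."""
    n = len(data)          # subjects
    k = len(data[0])       # sessions (e.g. test and retest)
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    sess_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_sess = n * sum((m - grand) ** 2 for m in sess_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_subj - ss_sess
    msr = ss_subj / (n - 1)               # between-subjects mean square
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse)
```

Values near 1 indicate that between-subject differences dominate session-to-session noise, which is why the study's thresholds of 0.70 and 0.90 read as acceptable and excellent reliability respectively.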

  2. Does the Test Work? Evaluating a Web-Based Language Placement Test

    Science.gov (United States)

    Long, Avizia Y.; Shin, Sun-Young; Geeslin, Kimberly; Willis, Erik W.

    2018-01-01

    In response to the need for examples of test validation from which everyday language programs can benefit, this paper reports on a study that used Bachman's (2005) assessment use argument (AUA) framework to examine evidence to support claims made about the intended interpretations and uses of scores based on a new web-based Spanish language…

  3. Qualitative tests for the determination of inorganic bases

    OpenAIRE

    Založnik, Urša

    2013-01-01

    The unit on acids, bases and salts is dealt with in primary and secondary schools and can be very interesting to students because they encounter these substances on an everyday basis. In my Diploma thesis I focus on bases, especially on how students could determine, in the most engaging way, whether a solution is acidic or basic and which base it actually is. My goal is to develop simple qualitative tests for identifying inorganic bases in primary schools. In nature, ba...

  4. [Prevention and treatment of the complications of polycystic ovarian syndrome--the significance of evidence-based, interdisciplinary management].

    Science.gov (United States)

    Gődény, Sándor; Csenteri, Orsolya Karola

    2015-12-13

    Polycystic ovary syndrome is the most common hormonal and metabolic disorder affecting women. The syndrome is often associated with obesity and hyperinsulinemia and adversely affects endocrine, metabolic, and cardiovascular health. The complex features of the syndrome require an interdisciplinary approach to treatment, in which the cooperation of the paediatrician, internist, gynaecologist, endocrinologist, dermatologist, psychologist and oncologist is essential. Prevention and treatment should be based on the best available evidence, including physical examination and laboratory tests for hormones, serum insulin, glucose and lipids; in addition, patients' preferences should be considered. To maximise the health gain in polycystic ovarian syndrome, adequate, effective, efficient and safe treatment is necessary. This article summarises the highest available evidence, provided by meta-analyses and systematic reviews, on the prevention of metabolic and cardiovascular complications of the syndrome, and discusses the relevant evidence published in the literature.

  5. Automated Functional Testing based on the Navigation of Web Applications

    Directory of Open Access Journals (Sweden)

    Boni García

    2011-08-01

    Full Text Available Web applications are becoming more and more complex. Testing such applications is an intricate, hard and time-consuming activity, and is therefore often poorly performed or skipped by practitioners. Test automation can help to avoid this situation. Hence, this paper presents a novel approach to performing automated software testing for web applications based on their navigation. On the one hand, web navigation is the process of traversing a web application using a browser. On the other hand, functional requirements are actions that an application must perform. Therefore, the evaluation of the correct navigation of a web application amounts to the assessment of the specified functional requirements. The proposed method performs the automation at four levels: test case generation, test data derivation, test case execution, and test case reporting. This method is driven by three kinds of inputs: (i) UML models; (ii) Selenium scripts; (iii) XML files. We have implemented our approach in an open-source testing framework named Automatic Testing Platform. The validation of this work has been carried out by means of a case study, in which the target is a real invoice management system developed using a model-driven approach.
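The four levels of the method (test case generation, test data derivation, test case execution, test case reporting) can be sketched, purely illustratively and without UML models or Selenium, as path enumeration over a toy navigation graph. All page names and the pass criterion below are invented, not taken from the Automatic Testing Platform.

```python
# Hypothetical sketch of navigation-driven testing: level 1 generates test
# cases as navigation paths; the remaining levels execute and report them.

def generate_paths(nav, start, max_len):
    """Level 1: derive test cases as navigation paths through the web app.
    nav maps each page to the pages reachable from it."""
    paths = []
    def walk(path):
        paths.append(path)
        if len(path) < max_len:
            for nxt in nav.get(path[-1], []):
                walk(path + [nxt])
    walk([start])
    return paths

def run_suite(nav, start, expected_reachable, max_len=3):
    """Levels 2-4 collapsed: 'execute' each path (here, a trivial check that
    every visited page is an expected one) and build a pass/fail report."""
    report = []
    for path in generate_paths(nav, start, max_len):
        ok = all(page in expected_reachable for page in path)
        report.append((" -> ".join(path), "PASS" if ok else "FAIL"))
    return report
```

In the real approach, execution would drive a browser via Selenium and the expected behaviour would come from the UML and XML inputs; the control flow, however, has this generate-execute-report shape.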

  6. Exploring pharmacy and home-based sexually transmissible infection testing.

    Science.gov (United States)

    Habel, Melissa A; Scheinmann, Roberta; Verdesoto, Elizabeth; Gaydos, Charlotte; Bertisch, Maggie; Chiasson, Mary Ann

    2015-11-01

    Background This study assessed the feasibility and acceptability of pharmacy and home-based sexually transmissible infection (STI) screening as alternate testing venues among emergency contraception (EC) users. The study included two phases in February 2011-July 2012. In Phase I, customers purchasing EC from eight pharmacies in Manhattan received vouchers for free STI testing at onsite medical clinics. In Phase II, three Facebook ads targeted EC users to connect them with free home-based STI test kits ordered online. Participants completed a self-administered survey. Only 38 participants enrolled in Phase I: 90% female, ≤29 years (74%), 45% White non-Hispanic and 75% college graduates; 71% were not tested for STIs in the past year and 68% reported a new partner in the past 3 months. None tested positive for STIs. In Phase II, ads led to >45000 click-throughs, 382 completed the survey and 290 requested kits; 28% were returned. Phase II participants were younger and less educated than Phase I participants; six tested positive for STIs. Challenges included recruitment, pharmacy staff participation, advertising with discretion and cost. This study found low uptake of pharmacy and home-based testing among EC users; however, STI testing in these settings is feasible and the acceptability findings indicate an appeal among younger women for testing in non-traditional settings. Collaborating with and training pharmacy and medical staff are key elements of service provision. Future research should explore how different permutations of expanding screening in non-traditional settings could improve testing uptake and detect additional STI cases.

  7. Worldwide Research, Worldwide Participation: Web-Based Test Logger

    Science.gov (United States)

    Clark, David A.

    1998-01-01

    Thanks to the World Wide Web, a new paradigm has been born. ESCORT (steady state data system) facilities can now be configured to use a Web-based test logger, enabling worldwide participation in tests. NASA Lewis Research Center's new Web-based test logger for ESCORT automatically writes selected test and facility parameters to a browser and allows researchers to insert comments. All data can be viewed in real time via Internet connections, so anyone with a Web browser and the correct URL (universal resource locator, or Web address) can interactively participate. As the test proceeds and ESCORT data are taken, Web browsers connected to the logger are updated automatically. The use of this logger has demonstrated several benefits. First, researchers are free from manual data entry and are able to focus more on the tests. Second, research logs can be printed in report format immediately after (or during) a test. And finally, all test information is readily available to an international public.

  8. OCL-BASED TEST CASE GENERATION USING CATEGORY PARTITIONING METHOD

    Directory of Open Access Journals (Sweden)

    A. Jalila

    2015-10-01

    Full Text Available The adoption of fault detection techniques during the initial stages of the software development life cycle helps improve the reliability of a software product. Specification-based testing is one of the major approaches to detecting faults in the requirement specification or design of a software system. However, because implementation details are not available, test case generation from formal specifications becomes a challenging task. As a novel approach, the proposed work presents a methodology to generate test cases from OCL (Object Constraint Language) formal specifications using the Category Partitioning Method (CPM). The experimental results indicate that the proposed methodology is more effective in revealing specification-based faults. Furthermore, it has been observed that OCL and CPM form an excellent combination for performing functional testing at the earliest stage, improving software quality at reduced cost.
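Category Partitioning builds test frames by combining one choice from each category and discarding combinations ruled out by constraints. A hedged sketch with invented categories for a hypothetical "withdrawal" operation (the paper's OCL-derived categories are not reproduced here):

```python
from itertools import product

# Invented categories/choices for a hypothetical account-withdrawal
# operation, illustrating the Category Partitioning Method.
categories = {
    "balance": ["zero", "positive"],
    "amount": ["negative", "zero", "less_than_balance", "more_than_balance"],
    "account_state": ["active", "frozen"],
}

def infeasible(frame):
    # Example constraint: with a zero balance there is no amount
    # strictly less than the balance.
    return frame["balance"] == "zero" and frame["amount"] == "less_than_balance"

def generate_frames(categories, infeasible):
    """Cross-product of one choice per category, minus infeasible frames."""
    names = list(categories)
    frames = []
    for combo in product(*(categories[n] for n in names)):
        frame = dict(zip(names, combo))
        if not infeasible(frame):
            frames.append(frame)
    return frames
```

In the paper's setting, the categories and constraints would be derived from OCL pre- and postconditions rather than written by hand; the combinatorial core is the same.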

  9. A clip-based protocol for breast boost radiotherapy provides clear target visualisation and demonstrates significant volume reduction over time

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Lorraine [Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales (Australia); Cox, Jennifer [Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales (Australia); Faculty of Health Sciences, University of Sydney, Sydney, New South Wales (Australia); Morgia, Marita [Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales (Australia); Atyeo, John [Faculty of Health Sciences, University of Sydney, Sydney, New South Wales (Australia); Lamoury, Gillian [Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales (Australia)

    2015-09-15

The clinical target volume (CTV) for early stage breast cancer is difficult to clearly identify on planning computed tomography (CT) scans. Surgical clips inserted around the tumour bed should help to identify the CTV, particularly if the seroma has been reabsorbed, and enable tracking of CTV changes over time. A surgical clip-based CTV delineation protocol was introduced. CTV visibility and its post-operative shrinkage pattern were assessed. The subjects were 27 early stage breast cancer patients receiving post-operative radiotherapy alone and 15 receiving post-operative chemotherapy followed by radiotherapy. The radiotherapy alone (RT/alone) group received a CT scan at median 25 days post-operatively (CT1rt) and another at 40 Gy, median 68 days (CT2rt). The chemotherapy/RT group (chemo/RT) received a CT scan at median 18 days post-operatively (CT1ch), a planning CT scan at median 126 days (CT2ch), and another at 40 Gy (CT3ch). There was no significant difference (P = 0.08) between the initial mean CTV for each cohort. The RT/alone cohort showed significant CTV volume reduction of 38.4% (P = 0.01) at 40 Gy. The Chemo/RT cohort had significantly reduced volumes between CT1ch: median 54 cm³ (4–118) and CT2ch: median 16 cm³ (2–99) (P = 0.01), but no significant volume reduction thereafter. Surgical clips enable localisation of the post-surgical seroma for radiotherapy targeting. Most seroma shrinkage occurs early, enabling CT treatment planning to take place at 7 weeks, which is within the 9 weeks recommended to limit disease recurrence.

  10. A clip-based protocol for breast boost radiotherapy provides clear target visualisation and demonstrates significant volume reduction over time

    International Nuclear Information System (INIS)

    Lewis, Lorraine; Cox, Jennifer; Morgia, Marita; Atyeo, John; Lamoury, Gillian

    2015-01-01

The clinical target volume (CTV) for early stage breast cancer is difficult to clearly identify on planning computed tomography (CT) scans. Surgical clips inserted around the tumour bed should help to identify the CTV, particularly if the seroma has been reabsorbed, and enable tracking of CTV changes over time. A surgical clip-based CTV delineation protocol was introduced. CTV visibility and its post-operative shrinkage pattern were assessed. The subjects were 27 early stage breast cancer patients receiving post-operative radiotherapy alone and 15 receiving post-operative chemotherapy followed by radiotherapy. The radiotherapy alone (RT/alone) group received a CT scan at median 25 days post-operatively (CT1rt) and another at 40 Gy, median 68 days (CT2rt). The chemotherapy/RT group (chemo/RT) received a CT scan at median 18 days post-operatively (CT1ch), a planning CT scan at median 126 days (CT2ch), and another at 40 Gy (CT3ch). There was no significant difference (P = 0.08) between the initial mean CTV for each cohort. The RT/alone cohort showed significant CTV volume reduction of 38.4% (P = 0.01) at 40 Gy. The Chemo/RT cohort had significantly reduced volumes between CT1ch: median 54 cm³ (4–118) and CT2ch: median 16 cm³ (2–99) (P = 0.01), but no significant volume reduction thereafter. Surgical clips enable localisation of the post-surgical seroma for radiotherapy targeting. Most seroma shrinkage occurs early, enabling CT treatment planning to take place at 7 weeks, which is within the 9 weeks recommended to limit disease recurrence.

  11. Overheating Anomalies during Flight Test Due to the Base Bleeding

    Science.gov (United States)

    Luchinsky, Dmitry; Hafiychuck, Halyna; Osipov, Slava; Ponizhovskaya, Ekaterina; Smelyanskiy, Vadim; Dagostino, Mark; Canabal, Francisco; Mobley, Brandon L.

    2012-01-01

    In this paper we present the results of the analytical and numerical studies of the plume interaction with the base flow in the presence of base out-gassing. The physics-based analysis and CFD modeling of the base heating for single solid rocket motor performed in this research addressed the following questions: what are the key factors making base flow so different from that in the Shuttle [1]; why CFD analysis of this problem reveals small plume recirculation; what major factors influence base temperature; and why overheating was initiated at a given time in the flight. To answer these questions topological analysis of the base flow was performed and Korst theory was used to estimate relative contributions of radiation, plume recirculation, and chemically reactive out-gassing to the base heating. It was shown that base bleeding and small base volume are the key factors contributing to the overheating, while plume recirculation is effectively suppressed by asymmetric configuration of the flow formed earlier in the flight. These findings are further verified using CFD simulations that include multi-species gas environment both in the plume and in the base. Solid particles in the exhaust plume (Al2O3) and char particles in the base bleeding were also included into the simulations and their relative contributions into the base temperature rise were estimated. The results of simulations are in good agreement with the temperature and pressure in the base measured during the test.

  12. The significance and robustness of a plasma free amino acid (PFAA) profile-based multiplex function for detecting lung cancer

    International Nuclear Information System (INIS)

    Shingyoji, Masato; Mitsushima, Toru; Yamakado, Minoru; Kimura, Hideki; Iizasa, Toshihiko; Higashiyama, Masahiko; Imamura, Fumio; Saruki, Nobuhiro; Imaizumi, Akira; Yamamoto, Hiroshi; Daimon, Takashi; Tochikubo, Osamu

    2013-01-01

We have recently reported on the changes in plasma free amino acid (PFAA) profiles in lung cancer patients and the efficacy of a PFAA-based, multivariate discrimination index for the early detection of lung cancer. In this study, we aimed to verify the usefulness and robustness of PFAA profiling for detecting lung cancer using new test samples. Plasma samples were collected from 171 lung cancer patients and 3849 controls without apparent cancer. PFAA levels were measured by high-performance liquid chromatography (HPLC)–electrospray ionization (ESI)–mass spectrometry (MS). High reproducibility was observed for both the change in the PFAA profiles in the lung cancer patients and the discriminating performance for lung cancer patients compared to previously reported results. Furthermore, multivariate discriminating functions obtained in previous studies clearly distinguished the lung cancer patients from the controls based on the area under the receiver-operator characteristic curve (AUC of ROC = 0.731–0.806), strongly suggesting the robustness of the methodology for clinical use. Moreover, the results suggested that the combinatorial use of this classifier and tumor markers improves the clinical performance of tumor markers. These findings suggest that PFAA profiling, which involves a relatively simple plasma assay and imposes a low physical burden on subjects, has great potential for improving the early detection of lung cancer.
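A multivariate discriminant index of the kind described above is typically scored by the area under the ROC curve (AUC ≈ 0.73–0.81 in the abstract). A minimal sketch of the rank-based (Mann-Whitney) AUROC estimate, with toy scores standing in for the study's PFAA index values:

```python
def auc(pos_scores, neg_scores):
    """AUROC as the probability that a random positive outranks a random
    negative (Mann-Whitney U / (n_pos * n_neg)); ties count one half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Toy discriminant-index values (illustrative, not the study's data)
cancer  = [2.1, 1.8, 1.4, 0.9]
control = [0.2, 0.7, 1.0, 0.3]
print(round(auc(cancer, control), 3))  # 15 of 16 pairs correctly ordered -> 0.938
```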

  13. Diagnosis of Food Allergy Based on Oral Food Challenge Test

    OpenAIRE

    Komei Ito; Atsuo Urisu

    2009-01-01

    Diagnosis of food allergy should be based on the observation of allergic symptoms after intake of the suspected food. The oral food challenge test (OFC) is the most reliable clinical procedure for diagnosing food allergy. The OFC is also applied for the diagnosis of tolerance of food allergy. The Japanese Society of Pediatric Allergy and Clinical Immunology issued the 'Japanese Pediatric Guideline for Oral Food Challenge Test in Food Allergy 2009' in April 2009, to provide information on a sa...

  14. Towards Automatic Testing of Reference Point Based Interactive Methods

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2016-01-01

    In order to understand strengths and weaknesses of optimization algorithms, it is important to have access to different types of test problems, well defined performance indicators and analysis tools. Such tools are widely available for testing evolutionary multiobjective optimization algorithms. To our knowledge, there do not exist tools for analyzing the performance of interactive multiobjective optimization methods based on the reference point approach to communicating ...

  15. A model based security testing method for protocol implementation.

    Science.gov (United States)

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

The security of a protocol implementation is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of the protocol implementation.

  16. A simple slide test to assess erythrocyte aggregation in acute ST-elevated myocardial infarction and acute ischemic stroke: Its prognostic significance

    Directory of Open Access Journals (Sweden)

    Atla Bhagya Lakshmi

    2011-01-01

Full Text Available A simple slide test and image analysis were used to reveal the presence of an acute-phase response and to determine its intensity in subjects with acute myocardial infarction and acute ischemic stroke. Erythrocytes tend to aggregate during an inflammatory process. Evaluation of erythrocyte adhesiveness/aggregation is currently available to clinicians indirectly through the erythrocyte sedimentation rate (ESR), but ESR correlates poorly with erythrocyte aggregation; hence a simple slide technique using citrated blood was used to evaluate erythrocyte aggregation microscopically and also by image analysis. Aims: (1) To study erythrocyte aggregation/adhesiveness by a simple slide test in subjects with acute ST-elevated myocardial infarction (STEMI), acute ischemic stroke, and healthy controls. (2) To study the prognostic significance of ESR and the erythrocyte aggregation/adhesiveness test (EAAT) in predicting the outcome after 1 week in subjects with acute myocardial infarction and acute ischemic stroke. Patients and Methods: Three groups of subjects were included in the study: 30 patients with acute STEMI, 30 patients with acute ischemic stroke, and 30 age- and gender-matched healthy controls. Citrated blood was subjected to the simple slide test and ESR estimation by Westergren's method. Stained smears were examined under 400× magnification and graded into four grades. Images were taken from nine fields; three each from the head, body, and tail of the smear. The degree of erythrocyte aggregation was quantified using a variable called erythrocyte percentage (EP), using the software MATLAB Version 7.5. A simple program was used to count the number of black and white pixels in the image by selecting a threshold level. Results: The mean ESR of the subjects with acute myocardial infarction (29 ± 17.34) was significantly higher (P = 0.001) than the mean ESR of the control group (15.5 ± 12.37). The mean EP of the subjects with acute myocardial infarction (69.91 ± 13.25) was
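The erythrocyte percentage (EP) described above reduces to counting pixels on either side of a brightness threshold. A minimal re-implementation of that count (the original program used MATLAB 7.5; the toy image and threshold here are assumptions for illustration):

```python
def erythrocyte_percentage(image, threshold):
    """Percentage of pixels at or below the threshold (dark pixels taken as
    aggregated cells), mirroring the black/white pixel count in the abstract."""
    dark = sum(1 for row in image for px in row if px <= threshold)
    total = sum(len(row) for row in image)
    return 100.0 * dark / total

# 4x4 toy grayscale field (0 = black, 255 = white); threshold is an assumption.
field = [
    [10, 240, 30, 250],
    [20, 250, 40, 240],
    [15, 245, 35, 255],
    [25, 235, 45, 230],
]
print(erythrocyte_percentage(field, 128))  # half the pixels are dark -> 50.0
```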

  17. Validation test case generation based on safety analysis ontology

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Wang, Wen-Shing

    2012-01-01

    Highlights: ► Current practice in validation test case generation for nuclear system is mainly ad hoc. ► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report. ► It is based on a domain-specific ontology. ► Test coverage criteria have been defined and satisfied. ► A computerized toolset has been implemented to assist the proposed approach. - Abstract: Validation tests in the current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria; namely, single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.

  18. Significance of a negative exercise thallium test in the presence of a critical residual stenosis after thrombolysis for acute myocardial infarction

    International Nuclear Information System (INIS)

    Sutton, J.M.; Topol, E.J.

    1991-01-01

    After thrombolytic therapy for acute myocardial infarction, increasing emphasis is placed on early submaximal exercise testing, with further intervention advocated only for demonstrable ischemia. Although significant residual coronary artery lesions after successful thrombolysis are common, many patients paradoxically have no corresponding provokable ischemia. The relation between significant postthrombolytic residual coronary artery disease and a negative early, submaximal exercise thallium-201 tomogram was studied among 101 consecutive patients with uncomplicated myocardial infarction and at least 70% residual stenosis of the infarct artery. A negative test occurred in 49 (48.5%) patients with a mean 88% residual infarct artery stenosis. Further characteristics of the group were as follows: mean time to treatment was 3.1 hours; mean age was 54 +/- 10 years; 80% were male; 47% had anterior infarction; 39% had multivessel disease; mean left ventricular ejection fraction was 53 +/- 14%; and mean peak creatine kinase level was 3,820 +/- 3,123 IU/ml. A similar group of 52 (51.5%) patients, treated within 3.3 hours from symptom onset, with a mean postthrombolysis stenosis of 90%, had a positive exercise test. Characteristics of this group were as follows: age was 58 +/- 10 years; 92% were male; 56% had anterior infarction; 40% had multivessel disease; and mean left ventricular ejection fraction was 54 +/- 15%. The peak creatine kinase level associated with the infarction, however, was lower: 2,605 +/- 1,805 IU/ml (p = 0.04). There was no difference in performance at exercise testing with respect to peak systolic pressure, peak heart rate, or time tolerated on the treadmill between the two groups. By multivariate logistic regression, only peak creatine kinase level predicted a negative stress result in the presence of a significant residual stenosis

  19. Realistic evaluation of tester exposure based on Florida testing experience

    International Nuclear Information System (INIS)

    Schreiber, R.A.

    1990-01-01

This paper reports on a radon decay product exposure model for Florida Certified Radon Measurement Technicians that has been formulated based on the guidance of 10CFR20. This model was used to estimate the exposure of 44 Florida measurement technicians from January through November of 1989. Comparing estimated testing and home exposure shows that 100% of the technicians observed received more exposure in the home than during testing activities. Exposure during normal office hours also exceeded testing exposure for 86% of the technicians observed. Health and safety exposure data for radon measurement technicians do not follow the standard concepts of occupational radiation exposure normally accepted in 10CFR20.

  20. FARO base case post-test analysis by COMETA code

    Energy Technology Data Exchange (ETDEWEB)

    Annunziato, A.; Addabbo, C. [Joint Research Centre, Ispra (Italy)

    1995-09-01

The paper analyzes the COMETA (Core Melt Thermal-Hydraulic Analysis) post-test calculations of FARO Test L-11, the so-called Base Case Test. The FARO Facility, located at JRC Ispra, is used to simulate the consequences of Severe Accidents in Nuclear Power Plants under a variety of conditions. The COMETA Code has a six-equation two-phase flow field and a three-phase corium field: the jet, the droplets and the fused-debris bed. The analysis showed that the code is able to pick up all the major phenomena occurring during the fuel-coolant interaction pre-mixing phase.

  1. Induction Based Training leads to Highly Significant Improvements of Objective and Subjective Suturing Ability in Junior Doctors

    Directory of Open Access Journals (Sweden)

    Kevin Garry

    2018-03-01

Full Text Available Background: Simulation-based training has been shown to benefit the education of medical students. However, the impact of induction-based clinical simulation on the surgical ability of qualified doctors remains unclear. The aim of this study was to establish whether a 60-minute teaching session integrated into an Emergency Medicine speciality induction program produces statistically significant improvements in the objective and subjective suturing abilities of junior doctors commencing an Emergency Medicine rotation. Methods: The objective suturing abilities of 16 Foundation Year Two doctors were analysed using a validated OSATs scale prior to a novel teaching intervention. The doctors then undertook an intensive hour-long workshop receiving one-to-one feedback before undergoing repeat OSATs assessment. Subjective ability was measured using a 5-point Likert scale and self-assessed competency reporting for interrupted suturing before and after the intervention. Photographs of wound closure before and after the intervention were recorded for further blinded assessment of the impact of the intervention. A survey regarding continued ability was repeated four months after the intervention. The study took place on 7/12/16 during the Belfast Health and Social Care Trust Emergency Medicine induction in the Royal Victoria Hospital Belfast. The hospital is a regional level 1 trauma centre with annual departmental attendances in excess of 200,000. All new junior doctors commencing the Emergency Medicine rotation were invited to partake in the study. All 16 agreed. The group consisted of a mixture of undergraduate and postgraduate medical doctors who all had 16 months' experience working in a variety of medical or surgical jobs previously. Results: Following the teaching intervention, objective and subjective abilities in interrupted suturing showed statistically significant improvement (P < 0.005). Self-reporting of competency in independently suturing wounds improved from 50
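The pre/post OSAT comparison above is a paired design, so a paired test on the score differences is the natural analysis. A sketch with hypothetical scores (the abstract does not report the raw data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic on pre/post score differences (post - pre)."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical OSAT scores for 8 doctors (illustrative only)
pre  = [10, 12, 9, 11, 13, 10, 8, 12]
post = [16, 17, 15, 16, 18, 15, 14, 17]
print(round(paired_t(pre, post), 2))
```

A large positive t here corresponds to a uniform improvement; the p-value would come from the t distribution with n-1 degrees of freedom.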

  2. Large-area photogrammetry based testing of wind turbine blades

    Science.gov (United States)

    Poozesh, Peyman; Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter; Harvey, Eric; Yarala, Rahul

    2017-03-01

    An optically based sensing system that can measure the displacement and strain over essentially the entire area of a utility-scale blade leads to a measurement system that can significantly reduce the time and cost associated with traditional instrumentation. This paper evaluates the performance of conventional three dimensional digital image correlation (3D DIC) and three dimensional point tracking (3DPT) approaches over the surface of wind turbine blades and proposes a multi-camera measurement system using dynamic spatial data stitching. The potential advantages for the proposed approach include: (1) full-field measurement distributed over a very large area, (2) the elimination of time-consuming wiring and expensive sensors, and (3) the need for large-channel data acquisition systems. There are several challenges associated with extending the capability of a standard 3D DIC system to measure entire surface of utility scale blades to extract distributed strain, deflection, and modal parameters. This paper only tries to address some of the difficulties including: (1) assessing the accuracy of the 3D DIC system to measure full-field distributed strain and displacement over the large area, (2) understanding the geometrical constraints associated with a wind turbine testing facility (e.g. lighting, working distance, and speckle pattern size), (3) evaluating the performance of the dynamic stitching method to combine two different fields of view by extracting modal parameters from aligned point clouds, and (4) determining the feasibility of employing an output-only system identification to estimate modal parameters of a utility scale wind turbine blade from optically measured data. Within the current work, the results of an optical measurement (one stereo-vision system) performed on a large area over a 50-m utility-scale blade subjected to quasi-static and cyclic loading are presented. The blade certification and testing is typically performed using International

  3. Design of Test Wrapper Scan Chain Based on Differential Evolution

    Directory of Open Access Journals (Sweden)

    Aijun Zhu

    2013-08-01

Full Text Available Integrated circuit design has entered the era of IP-based SoC (System on Chip) design, which makes IP core reuse a key issue. SoC test wrapper design for scan chains is an NP-hard problem; we propose an algorithm based on Differential Evolution (DE) to design wrapper scan chains. Through the population's mutation, crossover, and selection operations, the design of the test wrapper scan chain is achieved. Experimental verification is carried out on the international standard benchmark ITC'02. The results show that the algorithm obtains shorter longest wrapper scan chains compared with other algorithms.
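A minimal sketch of Differential Evolution applied to the wrapper scan-chain objective (minimise the longest chain), using random-key decoding to map a continuous DE vector onto a partition of scan cells; the cell lengths and DE settings are illustrative assumptions, not the paper's:

```python
import random

def decode(keys, lengths, n_chains):
    """Random-key decoding: cell i goes to chain int(keys[i] * n_chains);
    fitness is the longest chain, the test-time bottleneck."""
    chains = [0] * n_chains
    for k, ln in zip(keys, lengths):
        chains[min(int(k * n_chains), n_chains - 1)] += ln
    return max(chains)

def de(lengths, n_chains, pop=20, gens=200, F=0.5, CR=0.9, seed=1):
    rng = random.Random(seed)
    d = len(lengths)
    P = [[rng.random() for _ in range(d)] for _ in range(pop)]
    fit = [decode(x, lengths, n_chains) for x in P]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            # DE/rand/1 mutation + binomial crossover, clipped to [0, 1]
            trial = [
                min(max(P[a][j] + F * (P[b][j] - P[c][j]), 0.0), 1.0)
                if rng.random() < CR else P[i][j]
                for j in range(d)
            ]
            f = decode(trial, lengths, n_chains)
            if f <= fit[i]:  # greedy selection
                P[i], fit[i] = trial, f
    return min(fit)

cells = [8, 7, 6, 5, 5, 4, 3, 2]  # toy scan-cell lengths, split over 2 chains
print(de(cells, 2))
```

With a total length of 40 over two chains, the best achievable longest chain is 20 (e.g. 8+7+5 vs. 6+5+4+3+2); the DE search should approach that bound.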

  4. Diagnosis of Food Allergy Based on Oral Food Challenge Test

    Directory of Open Access Journals (Sweden)

    Komei Ito

    2009-01-01

Full Text Available Diagnosis of food allergy should be based on the observation of allergic symptoms after intake of the suspected food. The oral food challenge test (OFC) is the most reliable clinical procedure for diagnosing food allergy. The OFC is also applied for the diagnosis of tolerance of food allergy. The Japanese Society of Pediatric Allergy and Clinical Immunology issued the 'Japanese Pediatric Guideline for Oral Food Challenge Test in Food Allergy 2009' in April 2009, to provide information on a safe and standardized method for administering the OFC. This review focuses on the clinical applications and procedure for the OFC, based on the Japanese OFC guideline.

  5. Social inequality and HIV-testing: Comparing home- and clinic-based testing in rural Malawi

    Directory of Open Access Journals (Sweden)

    Alexander A. Weinreb

    2009-10-01

    Full Text Available The plan to increase HIV testing is a cornerstone of the international health strategy against the HIV/AIDS epidemic, particularly in sub-Saharan Africa. This paper highlights a problematic aspect of that plan: the reliance on clinic- rather than home-based testing. First, drawing on DHS data from across Africa, we demonstrate the substantial differences in socio-demographic and economic profiles between those who report having ever had an HIV test, and those who report never having had one. Then, using data from a random household survey in rural Malawi, we show that substituting home-based for clinic-based testing may eliminate this source of inequality between those tested and those not tested. This result, which is stable across modeling frameworks, has important implications for accurately and equitably addressing the counseling and treatment programs that comprise the international health strategy against AIDS, and that promise to shape the future trajectory of the epidemic in Africa and beyond.

  6. Pharmacists performing quality spirometry testing: an evidence based review.

    Science.gov (United States)

    Cawley, Michael J; Warning, William J

    2015-10-01

The scope of pharmacist services for patients with pulmonary disease has primarily focused on drug-related outcomes; however, pharmacists have the ability to broaden the scope of clinical services by performing diagnostic testing, including quality spirometry testing. Studies have demonstrated that pharmacists can perform quality spirometry testing based upon international guidelines. The primary aim of this review was to assess the published evidence of pharmacists performing quality spirometry testing based upon American Thoracic Society/European Respiratory Society (ATS/ERS) guidelines. In order to accomplish this, the description of evidence and type of outcome from these services were reviewed. A literature search was conducted using five databases [PubMed (1946-January 2015), International Pharmaceutical Abstracts (1970 to January 2015), Cumulative Index of Nursing and Allied Health Literature, Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews] with search terms including pharmacy, spirometry, pulmonary function, asthma, or COPD. Searches were limited to publications in English and reported in humans. In addition, Uniform Resource Locator and Google Scholar searches were implemented to include any additional supplemental information. Eight studies (six prospective multi-center trials, two retrospective single-center studies) were included. Pharmacists in all studies received specialized training in performing spirometry testing. Of the eight studies meeting inclusion and exclusion criteria, 8 (100%) demonstrated acceptable repeatability of spirometry testing based upon standards set by the ATS/ERS guidelines. Acceptable repeatability of seven studies ranged from 70 to 99%, consistent with published data. Available evidence suggests that quality spirometry testing can be performed by pharmacists. More prospective studies are needed to add to the current evidence of quality spirometry testing performed by

  7. Impact of age on the false negative rate of human papillomavirus DNA test in patients with atypical squamous cells of undetermined significance.

    Science.gov (United States)

    Won, Kyu-Hee; Lee, Jae Yeon; Cho, Hye-Yon; Suh, Dong Hoon; No, Jae Hong; Kim, Yong-Beom

    2015-03-01

Human papillomavirus (HPV) testing was incorporated into the triage of lesser abnormal cervical cytologies: atypical squamous cells of undetermined significance (ASCUS) or low-grade squamous intraepithelial lesion (LSIL). This study aimed to evaluate the impact of age on the efficacy of HPV testing in patients with lesser abnormal cervical cytologies. A total of 439 patients with ASCUS or LSIL were included. The association between age groups and the diagnostic performance of the HPV test for high-grade cervical intraepithelial neoplasia (CIN2+) was evaluated. Median age was 44 years (range, 17 to 75 years). ASCUS was more frequently observed in older patients, while LSIL was more common in younger patients (P=0.002). CIN2+ was found in 11.3% (32/284) of the ASCUS patients and 12.9% (20/155) of patients with LSIL. Older patients with ASCUS showed lower HPV infection rates (P=0.025), but not those with LSIL (P=0.114). However, the prevalence of CIN2+ was similar between the age groups with ASCUS or LSIL. In patients with ASCUS, the false negative rate of the HPV test for CIN2+ was 6.2%. The false negative rate of the HPV test became higher with increasing age after the age of 50 (P=0.034). Our findings suggest that the false negative rate of the HPV test for CIN2+ in ASCUS patients older than 50 years might become higher with increasing age. Negative HPV results in patients older than 50 years with ASCUS should be carefully interpreted.
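The false negative rate quoted above (6.2% in ASCUS patients) is the fraction of CIN2+ cases missed by a negative HPV test. A toy computation of that definition (the cohort numbers below are invented to illustrate it, not the study's data):

```python
def false_negative_rate(results):
    """FNR = CIN2+ cases with a negative HPV test / all CIN2+ cases."""
    fn = sum(1 for hpv_pos, cin2 in results if cin2 and not hpv_pos)
    positives = sum(1 for _, cin2 in results if cin2)
    return fn / positives

# (hpv_positive, cin2_plus) pairs -- illustrative triage cohort
cohort = [(True, True)] * 30 + [(False, True)] * 2 + [(False, False)] * 68
print(round(false_negative_rate(cohort), 3))  # 2 missed of 32 cases -> 0.062
```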

  8. Diagnostic accuracy and prognostic significance of blood fibrosis tests and liver stiffness measurement by FibroScan in non-alcoholic fatty liver disease.

    Science.gov (United States)

    Boursier, Jérôme; Vergniol, Julien; Guillet, Anne; Hiriart, Jean-Baptiste; Lannes, Adrien; Le Bail, Brigitte; Michalak, Sophie; Chermak, Faiza; Bertrais, Sandrine; Foucher, Juliette; Oberti, Frédéric; Charbonnier, Maude; Fouchard-Hubert, Isabelle; Rousselet, Marie-Christine; Calès, Paul; de Lédinghen, Victor

    2016-09-01

NAFLD is highly prevalent but only a small subset of patients develop advanced liver fibrosis with impaired liver-related prognosis. We aimed to compare blood fibrosis tests and liver stiffness measurement (LSM) by FibroScan for the diagnosis of liver fibrosis and the evaluation of prognosis in NAFLD. Diagnostic accuracy was evaluated in a cross-sectional study including 452 NAFLD patients with liver biopsy (NASH-CRN fibrosis stage), LSM, and eight blood fibrosis tests (BARD, NAFLD fibrosis score, FibroMeter(NAFLD), aspartate aminotransferase to platelet ratio index (APRI), FIB4, FibroTest, Hepascore, FibroMeter(V2G)). Prognostic accuracy was evaluated in a longitudinal study including 360 NAFLD patients. LSM and FibroMeter(V2G) were the two best-performing tests in the cross-sectional study: AUROCs for advanced fibrosis (F3/4) were, respectively, 0.831±0.019 and 0.817±0.020 (p⩽0.041 vs. other tests); rates of patients with ⩾90% negative/positive predictive values for F3/4 were 56.4% and 46.7% (p⩽… vs. other tests); Obuchowski indexes were 0.834±0.014 and 0.798±0.016 (p⩽0.036 vs. other tests). Two fibrosis classifications were developed to precisely estimate the histological fibrosis stage from LSM or FibroMeter(V2G) results without liver biopsy (diagnostic accuracy, respectively: 80.8% vs. 77.4%, p=0.190). Kaplan-Meier curves in the longitudinal study showed that both classifications categorised NAFLD patients into subgroups with significantly different prognoses: the worse the fibrosis classification, the worse the prognosis. LSM and FibroMeter(V2G) were the most accurate of the nine evaluated tests for the non-invasive diagnosis of liver fibrosis in NAFLD. LSM and FibroMeter(V2G) fibrosis classifications help physicians estimate both fibrosis stage and patient prognosis in clinical practice. The amount of liver fibrosis is the main determinant of the liver-related prognosis in patients with non-alcoholic fatty liver disease (NAFLD). We evaluated eight blood tests and Fibro

  9. An authoring tool for building both mobile adaptable tests and web-based adaptive or classic tests

    NARCIS (Netherlands)

    Romero, C.; Ventura, S.; Hervás, C.; De Bra, P.M.E.; Wade, V.; Ashman, H.; Smyth, B.

    2006-01-01

This paper describes Test Editor, an authoring tool for building both mobile adaptable tests and web-based adaptive or classic tests. This tool facilitates the development and maintenance of different types of XML-based multiple-choice tests for use in web-based education systems and wireless

  10. A puzzle form of a non-verbal intelligence test gives significantly higher performance measures in children with severe intellectual disability.

    Science.gov (United States)

    Bello, Katrina D; Goharpey, Nahal; Crewther, Sheila G; Crewther, David P

    2008-08-01

    Assessment of 'potential intellectual ability' of children with severe intellectual disability (ID) is limited, as current tests designed for normal children do not maintain their interest. Thus a manual puzzle version of the Raven's Coloured Progressive Matrices (RCPM) was devised to appeal to the attentional and sensory preferences and language limitations of children with ID. It was hypothesized that performance on the book and manual puzzle forms would not differ for typically developing children but that children with ID would perform better on the puzzle form. The first study assessed the validity of this puzzle form of the RCPM for 76 typically developing children in a test-retest crossover design, with a 3 week interval between tests. A second study tested performance and completion rate for the puzzle form compared to the book form in a sample of 164 children with ID. In the first study, no significant difference was found between performance on the puzzle and book forms in typically developing children, irrespective of the order of completion. The second study demonstrated a significantly higher performance and completion rate for the puzzle form compared to the book form in the ID population. Similar performance on book and puzzle forms of the RCPM by typically developing children suggests that both forms measure the same construct. These findings suggest that the puzzle form does not require greater cognitive ability but demands sensory-motor attention and limits distraction in children with severe ID. Thus, we suggest the puzzle form of the RCPM is a more reliable measure of the non-verbal mentation of children with severe ID than the book form.

  11. High Frequency Vibration Based Fatigue Testing of Developmental Alloys

    Science.gov (United States)

    Holycross, Casey M.; Srinivasan, Raghavan; George, Tommy J.; Tamirisakandala, Seshacharyulu; Russ, Stephan M.

    Many fatigue test methods have been previously developed to rapidly evaluate fatigue behavior. This increased test speed can come at some expense, since these methods may require non-standard specimen geometry or increased facility and equipment capability. One such method, developed by George et al., involves a base-excited plate specimen driven into a high frequency bending resonant mode. This resonant mode is of sufficient frequency (typically 1200 to 1700 Hertz) to accumulate 10⁷ cycles in a few hours. One of the main limitations of this test method is that fatigue cracking is almost guaranteed to initiate at the surface in regions of high stress. This brings into question the validity of the fatigue test results, as compared to more traditional uniaxial, smooth-bar testing, since the high stresses subject only a small volume to fatigue damage. This limitation also brings into question the suitability of this method for screening developmental alloys, should their initiation life be governed by subsurface flaws. However, if applicable, the rapid generation of fatigue data using this method would facilitate faster design iterations, identifying material and manufacturing process deficiencies more quickly. The developmental alloy used in this study was a powder metallurgy boron-modified Ti-6Al-4V, a new alloy currently being considered for gas turbine engine fan blades. Plate specimens were subjected to fully reversed bending fatigue. Results are compared with existing data from commercially available Ti-6Al-4V using both vibration-based and more traditional fatigue test methods.

  12. A sequential hypothesis test based on a generalized Azuma inequality

    NARCIS (Netherlands)

    Reijsbergen, D.P.; Scheinhardt, Willem R.W.; de Boer, Pieter-Tjerk

    We present a new power-one sequential hypothesis test based on a bound for the probability that a bounded zero-mean martingale ever crosses a curve of the form $a(n+k)^b$. The proof of the bound is of independent interest.
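    The power-one idea can be sketched with a toy simulation: keep sampling, and reject the zero-mean hypothesis as soon as the partial sum crosses a curve of the form a(n+k)^b. The constants below are hypothetical illustration values, not the bound derived from the generalized Azuma inequality in the paper.

```python
import random

def sequential_test(increments, a=3.0, k=10, b=0.6):
    """Toy power-one sequential test: reject H0 (zero mean) as soon as the
    partial sum S_n crosses the curve a*(n+k)**b. Illustrative only; the
    constants a, k, b are hypothetical, not those derived in the paper."""
    s = 0.0
    for n, x in enumerate(increments, start=1):
        s += x
        if s >= a * (n + k) ** b:
            return n  # stopping time: H0 rejected after n samples
    return None  # boundary never crossed within the sample: no decision yet

random.seed(1)
# Under the alternative (positive drift), the linear growth of S_n
# overtakes the sublinear boundary and the test stops.
drifted = [random.uniform(-1, 1) + 0.5 for _ in range(2000)]
stop = sequential_test(drifted)
```

Under a positive drift the boundary, growing only like n^0.6, is crossed quickly, while under the null the bounded-increment partial sum stays below it, which is the intuition behind a power-one test.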

  13. Invariant-Based Automatic Testing of Modern Web Applications

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.; Roest, D.

    2011-01-01

    AJAX-based Web 2.0 applications rely on stateful asynchronous client/server communication, and client-side run-time manipulation of the DOM tree. This not only makes them fundamentally different from traditional web applications, but also more error-prone and harder to test. We propose a method for

  14. Heart rate-based lactate minimum test: a reproducible method.

    NARCIS (Netherlands)

    Strupler, M.; Muller, G.; Perret, C.

    2009-01-01

    OBJECTIVE: To find the individual intensity for aerobic endurance training, the lactate minimum test (LMT) seems to be a promising method. LMTs described in the literature consist of speed or work rate-based protocols, but for training prescription in daily practice mostly heart rate is used. The

  15. Applications of decision theory to test-based decision making

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1987-01-01

    The use of Bayesian decision theory to solve problems in test-based decision making is discussed. Four basic decision problems are distinguished: (1) selection; (2) mastery; (3) placement; and (4) classification, the situation where each treatment has its own criterion. Each type of decision can be

  16. Correlation Based Testing for Passive Sonar Picture Rationalization

    National Research Council Canada - National Science Library

    Mellema, Garfield R

    2007-01-01

    The sample correlation coefficient is a statistical measure of relatedness. This paper describes the application of a test based on that measure to compare tracks produced by a probabilistic data association filter from a set of towed array sonar data.
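    The measure in question is the ordinary Pearson sample correlation coefficient, which is simple to compute directly. A minimal sketch; the track data here are made-up bearing sequences, not the towed-array data from the report:

```python
from math import sqrt

def sample_correlation(x, y):
    """Pearson sample correlation coefficient between two equal-length
    sequences, e.g. bearing-vs-time tracks from two trackers."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two hypothetical bearing tracks (degrees): nearly parallel, so r is close to 1.
track_a = [30.0, 30.4, 31.1, 31.9, 32.5, 33.2]
track_b = [29.8, 30.5, 31.0, 32.0, 32.4, 33.3]
r = sample_correlation(track_a, track_b)
```

Tracks that follow the same contact produce r near 1; unrelated tracks hover near 0, which is what makes the coefficient usable for rationalizing a sonar picture.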

  17. A Valuation-Based Test of Market Timing

    NARCIS (Netherlands)

    Koeter-Kant, J.; Elliott, W.B.; Warr, R.S.

    2007-01-01

    We implement an earnings-based fundamental valuation model to test the impact of market timing on the firm's method of funding the financing deficit. We argue that our valuation metric provides a superior measure of equity misvaluation because it avoids multiple interpretation problems faced by the

  18. Laboratory test of an APS-based sun sensor prototype

    Science.gov (United States)

    Rufino, Giancarlo; Perrotta, Alessandro; Grassi, Michele

    2017-11-01

    This paper deals with the design and prototype development of an Active Pixel Sensor-based miniature sun sensor and a laboratory facility for its indoor test and calibration. The miniature sun sensor is described and the laboratory test facility is presented in detail. The major focus of the paper is on tests and calibration of the sensor. Two different calibration functions have been adopted. They are based, respectively, on a geometrical model, which has required least-squares optimisation of the estimates of the system's physical parameters, and on neural networks. Calibration results are presented for the above solutions, showing that accuracy in the order of 0.01° has been achieved. The neural calibration functions attained better performance thanks to their intrinsic auto-adaptive structure.
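    The geometrical calibration path can be sketched as an ordinary least-squares fit. Assuming an idealized one-axis pinhole model, centroid = x0 + f·tan(θ), the effective focal length f and boresight offset x0 are fit from known calibration angles; all numbers below are hypothetical:

```python
from math import tan, atan, radians, degrees

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration run: known sun angles (deg) vs measured centroids
# (pixels), generated here from an ideal pinhole with f = 800 px, x0 = 512 px.
angles = [-20.0, -10.0, 0.0, 10.0, 20.0]
centroids = [512 + 800 * tan(radians(t)) for t in angles]

f_est, x0_est = fit_linear([tan(radians(t)) for t in angles], centroids)

def centroid_to_angle(x):
    """Inverse calibration function: pixel centroid -> sun angle (degrees)."""
    return degrees(atan((x - x0_est) / f_est))
```

A real calibration would fit noisy centroids over a two-axis grid; the closed-form fit above is the one-dimensional core of that procedure.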

  19. The significance for epidemiological studies of anti-measles antibody detection examined by enzyme immunoassay (EIA) and plaque reduction neutralization test (PRNT).

    Science.gov (United States)

    Siennicka, Joanna; Częścik, Agnieszka; Trzcińska, Agnieszka

    2014-01-01

    The paper discusses the role of anti-measles antibodies in protection and the significance for epidemiological studies of determining antibodies by different serological methods. A comparison of anti-measles virus antibody levels measured by enzyme immunoassay (EIA) and the Plaque Reduction Neutralization Test (PRNT) is described. It was found that 200 mIU/ml of anti-measles activity measured by PRNT (the level of protection against symptomatic disease) is equivalent to 636 mIU/ml measured by EIA (Enzygnost® Anti-Measles Virus/IgG, Siemens).

  20. Computer Based Test for Entrance Selection at Politeknik Negeri Bengkalis

    Directory of Open Access Journals (Sweden)

    Agus Tedyyana

    2017-11-01

    Full Text Available Selection of new student candidates can be carried out with a Computer Based Test (CBT) application. The methods used include data collection techniques, system analysis, design modelling, implementation and testing. This study produced a CBT application in which the questions presented are drawn from a question bank through a randomization process that never presents the same question twice, using the Fisher-Yates Shuffle method. To secure the question data while connected to the network, a message-encoding technique is required so that each question passes through encryption and decryption before being displayed; the RSA cryptographic algorithm is used for this purpose. The software was designed using the waterfall model, the database using an entity relationship diagram, and the interface using Hypertext Markup Language (HTML), Cascading Style Sheets (CSS) and jQuery; the system was implemented as a web application using the PHP programming language and a MySQL database. The network architecture used by the Computer Based Test application is a client-server model on a Local Area Network (LAN). Keywords: Computer Based Test, Fisher-Yates Shuffle, Cryptography, Local Area Network
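    The shuffling step named in the abstract is standard. A minimal sketch of a Fisher-Yates draw that serves each candidate a random, repetition-free set of questions (the bank contents and sizes here are made up):

```python
import random

def fisher_yates_draw(question_bank, n, rng=random):
    """Draw n distinct questions by running a Fisher-Yates shuffle on a copy
    of the bank, so no question can appear twice in one generated test."""
    pool = list(question_bank)
    for i in range(len(pool) - 1, 0, -1):
        j = rng.randrange(i + 1)          # uniform index in [0, i]
        pool[i], pool[j] = pool[j], pool[i]
    return pool[:n]

random.seed(42)
bank = [f"Q{i}" for i in range(1, 51)]    # hypothetical 50-question bank
paper = fisher_yates_draw(bank, 10)       # one candidate's 10-question test
```

Because the shuffle is a uniform permutation and questions are taken from a copy of the bank without replacement, duplicates within one generated test are impossible by construction.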

  1. astrophysical significance

    Directory of Open Access Journals (Sweden)

    Dartois E.

    2014-02-01

    Full Text Available Clathrate hydrates, ice inclusion compounds, are of major importance for the Earth's permafrost regions and may control the stability of gases in many astrophysical bodies such as planets, comets and possibly interstellar grains. Their physical behavior may provide a trapping mechanism that modifies the absolute and relative composition of icy bodies, and could be the source of late-time injection of gaseous species into planetary atmospheres or hot cores. In this study, we provide and discuss laboratory-recorded infrared signatures of clathrate hydrates in the near to mid-infrared and the implications for space-based astrophysical remote detection, in order to constrain their possible presence.

  2. Contribution of the ELFG test in algorithms of non-invasive markers towards the diagnosis of significant fibrosis in chronic hepatitis C.

    Directory of Open Access Journals (Sweden)

    Jean-Pierre Zarski

    Full Text Available BACKGROUND AND AIMS: We aimed to determine the best algorithms for the diagnosis of significant fibrosis in chronic hepatitis C (CHC patients using all available parameters and tests. PATIENTS AND METHODS: We used the database from our study of 507 patients with histologically proven CHC in which fibrosis was evaluated by liver biopsy (Metavir and tests: Fibrometer®, Fibrotest®, Hepascore®, Apri, ELFG, MP3, Forn's, hyaluronic acid, tissue inhibitor of metalloproteinase-1 (TIMP1, MMP1, collagen IV and when possible Fibroscan™. For the first test we used 90% negative predictive value to exclude patients with F≤1, next an induction algorithm was applied giving the best tests with at least 80% positive predictive value for the diagnosis of F≥2. The algorithms were computed using the R Software C4.5 program to select the best tests and cut-offs. The algorithm was automatically induced without premises on the part of the investigators. We also examined the inter-observer variations after independent review of liver biopsies by two pathologists. A medico-economic analysis compared the screening strategies with liver biopsy. RESULTS: In "intention to diagnose" the best algorithms for F≥2 were Fibrometer ®, Fibrotest®, or Hepascore® in first intention with the ELFG score in second intention for indeterminate cases. The percentage of avoided biopsies varied between 50% (Fibrotest® or Fibrometer®+ELFG and 51% (Hepascore®+ELFG. In "per-analysis" Fibroscan™+ELFG avoided liver biopsy in 55% of cases. The diagnostic performance of these screening strategies was statistically superior to the usual combinations (Fibrometer® or Fibrotest®+Fibroscan™ and was cost effective. We note that the consensual review of liver biopsies between the two pathologists was mainly in favor of F1 (64-69%. CONCLUSION: The ELFG test could replace Fibroscan in most currently used algorithms for the diagnosis of significant fibrosis including for those patients

  3. Pile Design Based on Cone Penetration Test Results

    OpenAIRE

    Salgado, Rodrigo; Lee, Junhwan

    1999-01-01

    The bearing capacity of piles consists of both base resistance and side resistance. The side resistance of piles is in most cases fully mobilized well before the maximum base resistance is reached. As the side resistance is mobilized early in the loading process, the determination of pile base resistance is a key element of pile design. Static cone penetration is well related to the pile loading process, since it is performed quasi-statically and resembles a scaled-down pile load test. In ord...

  4. Evaluation of a Secure Laptop-Based Testing Program in an Undergraduate Nursing Program: Students' Perspective.

    Science.gov (United States)

    Tao, Jinyuan; Gunter, Glenda; Tsai, Ming-Hsiu; Lim, Dan

    2016-01-01

    Recently, the availability of robust learning management systems and affordable laptops has made secure laptop-based testing a reality on many campuses. The undergraduate nursing program at the authors' university began implementing a secure laptop-based testing program in 2009, which allowed students to use their newly purchased laptops to take quizzes and tests securely in classrooms. After nearly 5 years of implementation, a formative evaluation, using a mixed method with both descriptive and correlational data elements, was conducted to seek constructive feedback from students to improve the program. Evaluation data show that, overall, students (n = 166) believed the secure laptop-based testing program gives them hands-on experience of taking examinations on the computer and prepares them for the computerized NCLEX-RN. Students, however, had many concerns about the laptop glitches and campus wireless network glitches they experienced during testing. NCLEX-RN first-time passing rate data were analyzed using the χ2 test, which revealed no significant association between the two testing methods (paper-and-pencil testing and secure laptop-based testing) and students' first-time NCLEX-RN passing rate. Based on the odds ratio, however, the odds of students passing the NCLEX-RN on the first attempt were 1.37 times higher if they were taught with the secure laptop-based testing method rather than the traditional paper-and-pencil testing method in nursing school. It was recommended to the institution that better-quality laptops be provided to future students, that measures be taken to further stabilize the campus wireless Internet network, and that the Laptop Initiative Program be reevaluated.
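    The χ² test and odds ratio for a 2×2 pass/fail table are straightforward to compute. A sketch with hypothetical counts (not the study's data), illustrating how an odds ratio near 1.4 can coexist with a non-significant χ²:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table laid out as
    [[pass, fail] for method 1, [pass, fail] for method 2]."""
    return (a / b) / (c / d)

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table, no continuity
    correction: n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts, NOT the study's data:
# rows = teaching method, columns = first-time NCLEX-RN pass / fail.
laptop_pass, laptop_fail = 82, 6
paper_pass, paper_fail = 78, 8

or_value = odds_ratio(laptop_pass, laptop_fail, paper_pass, paper_fail)
chi2 = chi_square_2x2(laptop_pass, laptop_fail, paper_pass, paper_fail)
```

With these counts the odds ratio is about 1.4 while the χ² statistic stays below the 3.84 critical value (df = 1, α = 0.05), mirroring the pattern the abstract reports: a favorable odds ratio without a statistically significant association.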

  5. Risk-based inservice testing program modifications at Palo Verde nuclear generating station

    Energy Technology Data Exchange (ETDEWEB)

    Knauf, S.; Lindenlaub, B.; Linthicum, R.

    1996-12-01

    Arizona Public Service Company (APS) is investigating changes to the Palo Verde Inservice Testing (IST) Program that are intended to result in the reduction of the required test frequency for various valves in the American Society of Mechanical Engineers (ASME) Section XI IST program. The analytical techniques employed to select candidate valves and to demonstrate that these frequency reductions are acceptable are risk based. The results of the Palo Verde probabilistic risk assessment (PRA), updated in June 1994, and the risk significant determination performed as part of the implementation efforts for 10 CFR 50.65 (the maintenance rule) were used to select candidate valves for extended test intervals. Additional component level evaluations were conducted by an "expert panel." The decision to pursue these changes was facilitated by the ASME Risk-Based Inservice Testing Research Task Force for which Palo Verde is participating as a pilot plant. The NRC's increasing acceptance of cost beneficial licensing actions and risk-based submittals also provided incentive to seek these changes. Arizona Public Service is pursuing the risk-based IST program modification in order to reduce the unnecessary regulatory burden of the IST program through qualitative and quantitative analysis consistent with maintaining a high level of plant safety. The objectives of this project at Palo Verde are as follows: (1) Apply risk-based technologies to IST components to determine their risk significance (i.e., high or low). (2) Apply a combination of deterministic and risk-based methods to determine appropriate testing requirements for IST components including improvement of testing methods and frequency intervals for high-risk significant components. (3) Apply risk-based technologies to high-risk significant components identified by the "expert panel" and outside of the IST program to determine whether additional testing requirements are appropriate.

  6. Risk-based inservice testing program modifications at Palo Verde nuclear generating station

    International Nuclear Information System (INIS)

    Knauf, S.; Lindenlaub, B.; Linthicum, R.

    1996-01-01

    Arizona Public Service Company (APS) is investigating changes to the Palo Verde Inservice Testing (IST) Program that are intended to result in the reduction of the required test frequency for various valves in the American Society of Mechanical Engineers (ASME) Section XI IST program. The analytical techniques employed to select candidate valves and to demonstrate that these frequency reductions are acceptable are risk based. The results of the Palo Verde probabilistic risk assessment (PRA), updated in June 1994, and the risk significant determination performed as part of the implementation efforts for 10 CFR 50.65 (the maintenance rule) were used to select candidate valves for extended test intervals. Additional component level evaluations were conducted by an 'expert panel.' The decision to pursue these changes was facilitated by the ASME Risk-Based Inservice Testing Research Task Force for which Palo Verde is participating as a pilot plant. The NRC's increasing acceptance of cost beneficial licensing actions and risk-based submittals also provided incentive to seek these changes. Arizona Public Service is pursuing the risk-based IST program modification in order to reduce the unnecessary regulatory burden of the IST program through qualitative and quantitative analysis consistent with maintaining a high level of plant safety. The objectives of this project at Palo Verde are as follows: (1) Apply risk-based technologies to IST components to determine their risk significance (i.e., high or low). (2) Apply a combination of deterministic and risk-based methods to determine appropriate testing requirements for IST components including improvement of testing methods and frequency intervals for high-risk significant components. (3) Apply risk-based technologies to high-risk significant components identified by the 'expert panel' and outside of the IST program to determine whether additional testing requirements are appropriate.

  7. Communicative Language Testing: Implications for Computer Based Language Testing in French for Specific Purposes

    Science.gov (United States)

    García Laborda, Jesús; López Santiago, Mercedes; Otero de Juan, Nuria; Álvarez Álvarez, Alfredo

    2014-01-01

    Current evolutions of language testing have led to integrating computers in FSP assessments both in oral and written communicative tasks. This paper deals with two main issues: learners' expectations about the types of questions in FSP computer based assessments and the relation with their own experience. This paper describes the experience of 23…

  8. Failing Tests: Commentary on "Adapting Educational Measurement to the Demands of Test-Based Accountability"

    Science.gov (United States)

    Thissen, David

    2015-01-01

    In "Adapting Educational Measurement to the Demands of Test-Based Accountability" Koretz takes the time-honored engineering approach to educational measurement, identifying specific problems with current practice and proposing minimal modifications of the system to alleviate those problems. In response to that article, David Thissen…

  9. Computer-based tests: The impact of test design and problem of equivalency

    Czech Academy of Sciences Publication Activity Database

    Květon, Petr; Jelínek, Martin; Vobořil, Dalibor; Klimusová, H.

    -, č. 23 (2007), s. 32-51 ISSN 0747-5632 R&D Projects: GA ČR(CZ) GA406/99/1052; GA AV ČR(CZ) KSK9058117 Institutional research plan: CEZ:AV0Z7025918 Keywords : Computer-based assessment * speeded test * equivalency Subject RIV: AN - Psychology Impact factor: 1.344, year: 2007

  10. Examining the Use of Web-Based Tests for Testing Academic Vocabulary in EAP Instruction

    Science.gov (United States)

    Dashtestani, Reza

    2015-01-01

    Interest in Web-based and computer-assisted language testing is growing in the field of English for academic purposes (EAP). In this study, four groups of undergraduate EAP students (n = 120), each group consisted of 30 students, were randomly selected from four different disciplines, i.e. biology, political sciences, psychology, and law. The four…

  11. A novel PMT test system based on waveform sampling

    Science.gov (United States)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with a traditional test system based on a QDC, a TDC and a scaler, a test system based on waveform sampling was constructed for signal sampling of the 8" R5912 and the 20" R12860 Hamamatsu PMTs at light levels from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, the data acquisition software, based on LabVIEW, was developed to run with a parallel mechanism. The analysis algorithm is realized in LabVIEW, and the spectra of charge, amplitude, signal width and rise time are analyzed offline. The results from the Charge-to-Digital Converter, Time-to-Digital Converter and waveform sampling are discussed in a detailed comparison.
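    The per-pulse quantities named above (charge, amplitude, signal width, rise time) can all be extracted from a digitized waveform with a few lines of code. A simplified offline sketch on a synthetic pulse, not the authors' LabVIEW implementation:

```python
def pulse_parameters(samples, dt_ns=1.0, baseline_n=20):
    """Extract charge, amplitude, width and rise time from one digitized
    negative-going PMT pulse. A simplified offline sketch."""
    base = sum(samples[:baseline_n]) / baseline_n   # pre-pulse baseline
    pulse = [base - s for s in samples]             # flip polarity, remove baseline
    amp = max(pulse)
    peak = pulse.index(amp)
    charge = sum(v for v in pulse if v > 0) * dt_ns  # integral (a.u. * ns)
    half = [i for i, v in enumerate(pulse) if v >= amp / 2]
    width = (half[-1] - half[0]) * dt_ns             # full width at half maximum
    # 10%-90% rise time on the leading edge
    t10 = next(i for i in range(peak + 1) if pulse[i] >= 0.1 * amp)
    t90 = next(i for i in range(peak + 1) if pulse[i] >= 0.9 * amp)
    return {"amplitude": amp, "charge": charge, "fwhm_ns": width,
            "rise_ns": (t90 - t10) * dt_ns}

# Synthetic triangular pulse on a flat baseline of 100 ADC counts.
wave = [100.0] * 30 + [100 - 2 * k for k in range(1, 11)] \
     + [80 + 2 * k for k in range(1, 11)] + [100.0] * 30
params = pulse_parameters(wave)
```

Histogramming these per-pulse values over many triggers yields the charge, amplitude, width and rise-time spectra the paper analyzes offline.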

  12. Prognostic significance of immunohistochemistry-based markers and algorithms in immunochemotherapy-treated diffuse large B cell lymphoma patients.

    Science.gov (United States)

    Culpin, Rachel E; Sieniawski, Michal; Angus, Brian; Menon, Geetha K; Proctor, Stephen J; Milne, Paul; McCabe, Kate; Mainou-Fowler, Tryfonia

    2013-12-01

    To reassess the prognostic validity of immunohistochemical markers and algorithms identified in the CHOP era in immunochemotherapy-treated diffuse large B cell lymphoma patients. The prognostic significance of immunohistochemical markers (CD10, Bcl-6, Bcl-2, MUM1, Ki-67, CD5, GCET1, FoxP1, LMO2) and algorithms (Hans, Hans*, Muris, Choi, Choi*, Nyman, Visco-Young, Tally) was assessed using clinical diagnostic blocks taken from an unselected, population-based cohort of 190 patients treated with R-CHOP. Dichotomizing expression, low CD10 (<10%), low LMO2 (<70%) or high Bcl-2 (≥80%) predicted shorter overall survival (OS; P = 0.033, P = 0.010 and P = 0.008, respectively). High Bcl-2 (≥80%), low Bcl-6 (<60%), low GCET1 (<20%) or low LMO2 (<70%) predicted shorter progression-free survival (PFS; P = 0.001, P = 0.048, P = 0.045 and P = 0.002, respectively). The Hans, Hans* and Muris classifiers predicted OS (P = 0.022, P = 0.037 and P = 0.011) and PFS (P = 0.021, P = 0.020 and P = 0.004). The Choi, Choi* and Tally were associated with PFS (P = 0.049, P = 0.009 and P = 0.023). In multivariate analysis, the International Prognostic Index (IPI) was the only independent predictor of outcome (OS; HR: 2.60, P < 0.001 and PFS; HR: 2.91, P < 0.001). Results highlight the controversy surrounding immunohistochemistry-based algorithms in the R-CHOP era. The need for more robust markers, applicable to the clinic, for incorporation into improved prognostic systems is emphasized. © 2013 John Wiley & Sons Ltd.

  13. Nonapnea Sleep Disorders in Patients Younger than 65 Years Are Significantly Associated with CKD: A Nationwide Population-Based Study.

    Directory of Open Access Journals (Sweden)

    Hugo You-Hsien Lin

    Full Text Available Nonapnea sleep disorders (NASD) and sleep-related problems are associated with poor health outcomes. However, the association between NASD and the development and prognosis of chronic kidney disease (CKD) has not been investigated thoroughly. We explored the association between CKD and NASD in Taiwan. We conducted a population-based study using the Taiwan National Health Insurance database, with 1,000,000 representative records for the period from January 1, 2000 to December 31, 2009. We investigated the incidence and risk of CKD in 7,006 newly diagnosed NASD cases compared with 21,018 people without NASD, matched according to age, sex, index year, urbanization, region, and monthly income at a 1:3 ratio. The subsequent risk of CKD was 1.48-fold higher in the NASD cohort than in the control cohort (95% confidence interval [CI] = 1.26-1.73, p < 0.001). Male sex, older age, type 2 diabetes mellitus, and gout were significant factors associated with the increased risk of CKD (p < 0.001). Among the different types of NASD, patients with insomnia had a 52% increased risk of developing CKD (95% CI = 1.23-1.84; p < 0.01), whereas patients with sleep disturbance had a 49% increased risk of subsequent CKD (95% CI = 1.19-1.87; p < 0.001). Younger women (aged < 65 years) were at a high risk of CKD with NASD (adjusted hazard ratio [HR] = 1.81; 95% CI = 1.35-2.40, p < 0.001). In this nationwide population-based cohort study, patients with NASD, particularly men of all ages and women aged younger than 65 years, were at high risk of CKD.

  14. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Science.gov (United States)

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
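    The core idea, modeling the test statistics themselves as a null/alternative mixture and controlling a Bayesian FDR over the rejected set, can be sketched in the two-groups style. Here the mixture parameters (pi0 and the alternative mean) are assumed known for illustration, whereas the paper estimates posterior model probabilities from the data:

```python
from math import exp, pi, sqrt

def normal_pdf(z, mu=0.0, sigma=1.0):
    return exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def bayesian_fdr_reject(z_scores, pi0=0.8, mu_alt=3.0, alpha=0.10):
    """Two-groups sketch: model the test statistics as a mixture
    pi0*N(0,1) + (1-pi0)*N(mu_alt,1), compute each statistic's posterior
    null probability, then reject the largest set whose average posterior
    null probability (the Bayesian FDR) stays below alpha. Mixture
    parameters are assumed known here; in practice they are estimated."""
    post_null = {z: pi0 * normal_pdf(z) /
                    (pi0 * normal_pdf(z) + (1 - pi0) * normal_pdf(z, mu_alt))
                 for z in z_scores}
    rejected, running = [], 0.0
    for z in sorted(z_scores, key=lambda v: post_null[v]):
        if (running + post_null[z]) / (len(rejected) + 1) > alpha:
            break
        rejected.append(z)
        running += post_null[z]
    return rejected

hits = bayesian_fdr_reject([0.2, -0.5, 1.1, 2.8, 3.5, 4.2, 0.9])
```

Working on the statistics instead of the full data is what keeps the posterior computation cheap: only one-dimensional densities enter the posterior null probabilities.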

  15. Psychometric properties of a test in evidence based practice: the Spanish version of the Fresno test

    Directory of Open Access Journals (Sweden)

    Jiménez-Villa Josep

    2010-06-01

    Full Text Available Abstract Background Validated instruments are needed to evaluate the programmatic impact of Evidence Based Practice (EBP training and to document the competence of individual trainees. This study aimed to translate the Fresno test into Spanish and subsequently validate it, in order to ensure the equivalence of the Spanish version against the original English version. Methods Before and after study performed between October 2007 and June 2008. Three groups of participants: (a Mentors of family medicine residents (expert group (n = 56; (b Family medicine physicians (intermediate experience group (n = 17; (c Family medicine residents (novice group (n = 202; Medical residents attended an EBP course, and two sets of the test were administered before and after the course. The Fresno test is a performance based measure for use in medical education that assesses EBP skills. The outcome measures were: inter-rater and intra-rater reliability, internal consistency, item analyses, construct validity, feasibility of administration, and responsiveness. Results Inter-rater correlations were 0.95 and 0.85 in the pre-test and the post-test respectively. The overall intra-rater reliability was 0.71 and 0.81 in the pre-test and post-test questionnaire, respectively. Cronbach's alpha was 0.88 and 0.77, respectively. 152 residents (75.2% returned both sets of the questionnaire. The observed effect size for the residents was 1.77 (CI 95%: 1.57-1.95, the standardised response mean was 1.65 (CI 95%:1.47-1.82. Conclusions The Spanish version of the Fresno test is a useful tool in assessing the knowledge and skills of EBP in Spanish-speaking residents of Family Medicine.

  16. TR-EDB: Test Reactor Embrittlement Data Base, Version 1

    Energy Technology Data Exchange (ETDEWEB)

    Stallmann, F.W.; Wang, J.A.; Kam, F.B.K. [Oak Ridge National Lab., TN (United States)

    1994-01-01

    The Test Reactor Embrittlement Data Base (TR-EDB) is a collection of results from irradiation in materials test reactors. It complements the Power Reactor Embrittlement Data Base (PR-EDB), whose data are restricted to the results from the analysis of surveillance capsules in commercial power reactors. The rationale behind their restriction was the assumption that the results of test reactor experiments may not be applicable to power reactors and could, therefore, be challenged if such data were included. For this very reason the embrittlement predictions in the Reg. Guide 1.99, Rev. 2, were based exclusively on power reactor data. However, test reactor experiments are able to cover a much wider range of materials and irradiation conditions that are needed to explore more fully a variety of models for the prediction of irradiation embrittlement. These data are also needed for the study of effects of annealing for life extension of reactor pressure vessels that are difficult to obtain from surveillance capsule results.

  17. TR-EDB: Test Reactor Embrittlement Data Base, Version 1

    International Nuclear Information System (INIS)

    Stallmann, F.W.; Wang, J.A.; Kam, F.B.K.

    1994-01-01

    The Test Reactor Embrittlement Data Base (TR-EDB) is a collection of results from irradiation in materials test reactors. It complements the Power Reactor Embrittlement Data Base (PR-EDB), whose data are restricted to the results from the analysis of surveillance capsules in commercial power reactors. The rationale behind their restriction was the assumption that the results of test reactor experiments may not be applicable to power reactors and could, therefore, be challenged if such data were included. For this very reason the embrittlement predictions in the Reg. Guide 1.99, Rev. 2, were based exclusively on power reactor data. However, test reactor experiments are able to cover a much wider range of materials and irradiation conditions that are needed to explore more fully a variety of models for the prediction of irradiation embrittlement. These data are also needed for the study of effects of annealing for life extension of reactor pressure vessels that are difficult to obtain from surveillance capsule results

  18. Tests of gravity with future space-based experiments

    Science.gov (United States)

    Sakstein, Jeremy

    2018-03-01

    Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.
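    As a concrete example of the kind of quantity such missions constrain: in the parametrized post-Newtonian framework, the deflection of light grazing a mass depends linearly on γ, so a deflection measurement pins down γ. A short numerical check of the standard formula, with physical constants rounded to four figures:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
ARCSEC_PER_RAD = 206264.8

def light_deflection_arcsec(gamma, M=M_SUN, b=R_SUN):
    """PPN deflection of a light ray grazing a mass M at impact parameter b:
    delta = (1 + gamma)/2 * 4GM/(c^2 b). gamma = 1 in general relativity."""
    return (1 + gamma) / 2 * 4 * G * M / (C ** 2 * b) * ARCSEC_PER_RAD

# General relativity (gamma = 1) gives the classic ~1.75 arcsec at the solar limb.
delta_gr = light_deflection_arcsec(1.0)
```

Screened theories predict tiny departures of γ from 1, so the experiments discussed in the paper aim at measuring such deflections (and the related Shapiro time delay) to far better than current precision.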

  19. Titan TTCN-3 Based Test Framework for Resource Constrained Systems

    Directory of Open Access Journals (Sweden)

    Yushev Artem

    2016-01-01

    Full Text Available Wireless communication systems are becoming more and more part of our daily lives. Especially with the Internet of Things (IoT), overall connectivity is increasing rapidly as everyday objects become part of the global network. For this purpose several new wireless protocols have arisen, of which 6LoWPAN (IPv6 over Low power Wireless Personal Area Networks) can be seen as one of the most important within this sector. Originally designed on top of the IEEE 802.15.4 standard, it is the subject of various adaptations that allow 6LoWPAN to be used over different technologies, e.g. DECT Ultra Low Energy (ULE). Although this high connectivity offers a lot of new possibilities, several requirements and pitfalls come along with such new systems. With an increasing number of connected devices, interoperability between different providers is one of the biggest challenges, which makes it necessary to verify the functionality and stability of the devices and the network. Testing therefore becomes one of the key components that decides on the success or failure of such a system. Although several protocol implementations are commonly available, e.g. for IoT-based systems, there is still a lack of corresponding tools and environments for functional and conformance testing. This article describes the architecture and functioning of the proposed test framework based on Testing and Test Control Notation Version 3 (TTCN-3) for 6LoWPAN over ULE networks.

  20. A Fuzzy Logic Based Method for Analysing Test Results

    Directory of Open Access Journals (Sweden)

    Le Xuan Vinh

    2017-11-01

    Full Text Available Network operators must perform many tasks to ensure smooth operation of the network, such as planning, monitoring, etc. Among those tasks, regular testing of network performance and network errors, together with troubleshooting, is very important. Meaningful test results allow operators to evaluate network performance, identify any shortcomings, and better plan for network upgrades. Due to the diverse and mainly unquantifiable nature of network test results, there is a need for a method to analyse these results systematically and rigorously. In this paper, we present STAM (System Test-result Analysis Method), which employs a bottom-up hierarchical processing approach using fuzzy logic. STAM is capable of combining all test results into a quantitative description of the network performance in terms of network stability, the significance of various network errors, and the performance of each functional block within the network. The validity of this method has been successfully demonstrated in assisting the testing of a VoIP system at the Research Institute of Post and Telecoms in Vietnam. The paper is organized as follows. The first section gives an overview of fuzzy logic theory, the concepts of which will be used in the development of STAM. The next section describes STAM. The last section, demonstrating STAM's capability, presents a success story in which STAM is successfully applied.
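The bottom-up fuzzy processing the abstract describes can be pictured with a small sketch: one raw test result is fuzzified with triangular membership functions and the memberships are defuzzified into a single quantitative score. The breakpoints, set labels, and weights below are invented for illustration; the paper's actual rule base is not reproduced here.

```python
# A minimal sketch (invented breakpoints and weights) of fuzzifying one raw
# test result and defuzzifying the memberships into a single score.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_latency(ms):
    """Membership of a latency result in the sets 'good', 'fair', 'poor'."""
    return {
        "good": tri(ms, -1, 0, 100),
        "fair": tri(ms, 50, 150, 250),
        "poor": tri(ms, 200, 400, 10000),
    }

def aggregate(memberships, scores={"good": 1.0, "fair": 0.5, "poor": 0.0}):
    """Weighted-average defuzzification into one quantitative score."""
    num = sum(mu * scores[label] for label, mu in memberships.items())
    den = sum(memberships.values())
    return num / den if den else 0.0

print(round(aggregate(fuzzify_latency(80)), 3))  # -> 0.7
```

A full STAM-style hierarchy would repeat this step for each functional block and then aggregate the block scores into the overall network-stability figure.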

  1. Do sediment type and test durations affect results of laboratory-based, accelerated testing studies of permeable pavement clogging?

    Science.gov (United States)

    Nichols, Peter W B; White, Richard; Lucke, Terry

    2015-04-01

    Previous studies have attempted to quantify the clogging processes of Permeable Interlocking Concrete Pavers (PICPs) using accelerated testing methods. However, the results have been variable. This study investigated the effects that three different sediment types (natural and silica), different simulated rainfall intensities, and different testing durations had on the observed clogging processes (and measured surface infiltration rates) in laboratory-based, accelerated PICP testing studies. Results showed that accelerated simulated laboratory testing results are highly dependent on the type and size of sediment used in the experiments. For example, when using real stormwater sediment up to 1.18 mm in size, neither testing duration nor stormwater application rate had any significant effect on PICP clogging. However, the study clearly showed that shorter testing durations generally increased clogging and reduced the surface infiltration rates of the models when artificial silica sediment was used. Longer testing durations also generally increased clogging of the models when fine sediment (<300 μm) was used. Results from this study will help researchers and designers better anticipate when and why PICPs are susceptible to clogging, reduce maintenance, and extend the useful life of these increasingly common stormwater best management practices. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. A test beam upgrade based on the BEPC-LINAC

    International Nuclear Information System (INIS)

    Li Jiacai; Wu Yuanming; Cui Xiangzong; Zhang Liangsheng; Zhou Baoqing; Liu Zhengquan; Zhang Shaoping; Sun Changchun; Zhang Zhuxiang; Zhang Caidi; Zheng Linsheng; Liu Shixing; Shen Ji; Yin Zejie; Zhang Yongming; Chen Ziyu

    2004-01-01

    A total of three beam lines, E1, E2 and E3, have been built based on the LINAC of BEPC. The E1 beam is to be used for an intense slow-positron facility. The E2 is a primary positron or electron beam with an energy of 1.3-1.5 GeV. The E3 is a secondary electron or pion test beam whose momentum can be adjusted continuously. The position accuracy of a detected particle is 0.2-0.4 mm with an event rate of 3-4 Hz. This beam has been successfully used for beam tests of several detectors. (author)

  3. Model-based automated testing of critical PLC programs.

    CERN Document Server

    Fernández Adiego, B; Tournier, J-C; González Suárez, V M; Bliudze, S

    2014-01-01

    Testing of critical PLC (Programmable Logic Controller) programs remains a challenging task for control system engineers as it can rarely be automated. This paper proposes a model-based approach which uses the BIP (Behavior, Interactions and Priorities) framework to perform automated testing of PLC programs developed with the UNICOS (UNified Industrial COntrol System) framework. The paper defines the translation procedure and rules from UNICOS to BIP, which can be fully automated in order to hide the complexity of the underlying model from the control engineers. The approach is illustrated and validated through the study of a water treatment process.

  4. Timer-based data acquisitioning of creep testing machines

    International Nuclear Information System (INIS)

    Rana, M.A.; Farooq, M.A.; Ali, L.

    1998-01-01

    The duration of a creep test may be short or long term, extending over several years, so continuous operation of a computer for automatic data acquisition from creep testing machines is wasteful. Timer-based data acquisition for machines already interfaced with IBM PC/AT and compatibles has therefore been streamlined for economical use of the computer. A locally designed and fabricated timer has been introduced into the system for this purpose. The timer switches on the computer according to a pre-scheduled interval of time to capture creep data in real time. The periodically captured data are logged on the hard disk for analysis and report generation. (author)

  5. Universal Verification Methodology Based Register Test Automation Flow.

    Science.gov (United States)

    Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu

    2016-05-01

    In today's SoC design, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in shorter time. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or to integrate the models in a test-bench environment, because it requires knowledge of SystemVerilog and the UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe register specifications in IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template which is translated to an IP-XACT description, from which register models can be easily generated using commercial tools. On the other hand, we also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use register models without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing a test sequence for each type of register test-case. With the proposed flow, designers can save a considerable amount of time in verifying the functionality of registers.
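The spreadsheet-to-IP-XACT translation step can be pictured with a sketch: rows of a register spreadsheet (inlined here as CSV) are emitted as a heavily simplified IP-XACT-style register list. This is only the shape of the translation, not the paper's tool: a schema-valid IEEE 1685 (IP-XACT) document requires proper namespaces and many more mandatory elements.

```python
# Rows of a register spreadsheet (inlined here as CSV) translated into a
# heavily simplified IP-XACT-style register list. Tag names follow the IP-XACT
# flavour loosely; a schema-valid IEEE 1685 document needs namespaces and
# further mandatory elements.
import csv
import io
import xml.etree.ElementTree as ET

rows = io.StringIO(
    "name,offset,size,access\n"
    "CTRL,0x0,32,read-write\n"
    "STATUS,0x4,32,read-only\n"
)

block = ET.Element("ipxact:addressBlock")
for row in csv.DictReader(rows):
    reg = ET.SubElement(block, "ipxact:register")
    for tag, key in (("name", "name"), ("addressOffset", "offset"),
                     ("size", "size"), ("access", "access")):
        ET.SubElement(reg, "ipxact:" + tag).text = row[key]

print(ET.tostring(block, encoding="unicode"))
```

From such a description, a commercial register-model generator (or a further script) could emit the UVM register model classes, which is the step the paper automates.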

  6. An Effective Strategy to Build Up a Balanced Test Suite for Spectrum-Based Fault Localization

    Directory of Open Access Journals (Sweden)

    Ning Li

    2016-01-01

    Full Text Available During past decades, many automated software fault diagnosis techniques, including Spectrum-Based Fault Localization (SBFL), have been proposed to improve the efficiency of software debugging. In SBFL, suspiciousness calculation is closely related to the number of failed and passed test cases. Studies have shown that the ratio of failed to passed test cases has a more significant impact on the accuracy of SBFL than the total number of test cases, and that a balanced test suite is more beneficial to improving the accuracy of SBFL. Based on theoretical analysis, we propose a PNF (Passed test cases, Not execute Faulty statement) strategy to reduce a test suite and build up a more balanced one for SBFL, which can be used in regression testing. We evaluated the strategy with experiments on the Siemens and Space programs. Experiments indicated that our PNF strategy can be used to construct a new test suite effectively. Compared with the original test suite, the new one is smaller (on average, 90% of test cases were removed in the experiments) and has a more balanced ratio of failed to passed test cases, while retaining the same statement coverage and fault localization accuracy.
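The suspiciousness calculation that the failed-to-passed ratio feeds into is computed from per-statement execution spectra. The abstract does not commit to one formula, so as an assumed example the sketch below uses the common Ochiai metric, susp(s) = ef / sqrt((ef + nf)(ef + ep)), where ef and ep count failed and passed tests that execute statement s, and nf counts failed tests that do not.

```python
# Ochiai suspiciousness from execution spectra (the specific formula is an
# assumption; the paper's argument applies to SBFL metrics of this family).
import math

def ochiai(coverage, failed):
    """coverage[t][s] = 1 if test t executes statement s; failed[t] marks failing tests."""
    n_stmts = len(coverage[0])
    total_failed = sum(failed)
    scores = []
    for s in range(n_stmts):
        ef = sum(1 for t, row in enumerate(coverage) if row[s] and failed[t])
        ep = sum(1 for t, row in enumerate(coverage) if row[s] and not failed[t])
        nf = total_failed - ef
        denom = math.sqrt((ef + nf) * (ef + ep))
        scores.append(ef / denom if denom else 0.0)
    return scores

# Four tests over three statements; only the last test fails.
cov = [[1, 1, 0],
       [1, 0, 0],
       [1, 1, 1],
       [0, 1, 1]]
fail = [False, False, False, True]
print(ochiai(cov, fail))  # statement 2 ranks as most suspicious
```

Removing passed tests that do not execute the suspected faulty statement (the PNF idea) raises the failed-to-passed ratio among the tests that matter for ef and ep, which is why the reduced suite can keep the same localization accuracy.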

  7. Targeted Resequencing and Functional Testing Identifies Low-Frequency Missense Variants in the Gene Encoding GARP as Significant Contributors to Atopic Dermatitis Risk.

    Science.gov (United States)

    Manz, Judith; Rodríguez, Elke; ElSharawy, Abdou; Oesau, Eva-Maria; Petersen, Britt-Sabina; Baurecht, Hansjörg; Mayr, Gabriele; Weber, Susanne; Harder, Jürgen; Reischl, Eva; Schwarz, Agatha; Novak, Natalija; Franke, Andre; Weidinger, Stephan

    2016-12-01

    Gene-mapping studies have consistently identified a susceptibility locus for atopic dermatitis and other inflammatory diseases on chromosome band 11q13.5, with the strongest association observed for a common variant located in an intergenic region between the two annotated genes C11orf30 and LRRC32. Using a targeted resequencing approach we identified low-frequency and rare missense mutations within the LRRC32 gene encoding the protein GARP, a receptor on activated regulatory T cells that binds latent transforming growth factor-β. Subsequent association testing in more than 2,000 atopic dermatitis patients and 2,000 control subjects showed a significant excess of these LRRC32 variants in individuals with atopic dermatitis. Structural protein modeling and bioinformatic analysis predicted a disruption of protein transport upon these variants, and overexpression assays in CD4 + CD25 - T cells showed a significant reduction in surface expression of the mutated protein. Consistently, flow cytometric (FACS) analyses of different T-cell subtypes obtained from atopic dermatitis patients showed a significantly reduced surface expression of GARP and a reduced conversion of CD4 + CD25 - T cells into regulatory T cells, along with lower expression of latency-associated protein upon stimulation in carriers of the LRRC32 A407T variant. These results link inherited disturbances of transforming growth factor-β signaling with atopic dermatitis risk. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Effect modification of air pollution on Urinary 8-Hydroxy-2'-Deoxyguanosine by genotypes: an application of the multiple testing procedure to identify significant SNP interactions

    Directory of Open Access Journals (Sweden)

    Christiani David C

    2010-12-01

    Full Text Available Abstract Background Air pollution is associated with adverse human health, but the mechanisms through which pollution exerts effects remain to be clarified. One suggested pathway is that pollution causes oxidative stress. If so, oxidative stress-related genotypes may modify the oxidative response defenses to pollution exposure. Methods We explored this potential pathway by examining whether an array of oxidative stress-related genes (twenty single nucleotide polymorphisms (SNPs) in nine genes) modified associations of pollutants (organic carbon (OC), ozone and sulfate) with urinary 8-hydroxy-2'-deoxyguanosine (8-OHdG), a biomarker of oxidative stress, among 320 aging men. We used a Multiple Testing Procedure in R, modified by our team, to identify the significance of the candidate genes adjusting for a priori covariates. Results We found that glutathione S-transferase P1 (GSTP1; rs1799811), GSTM1, catalase (rs2284367) and group-specific component (GC; rs2282679, rs1155563) significantly or marginally significantly modified effects of OC and/or sulfate, with larger effects among those carrying the wild type of GSTP1 and catalase, the non-wild type of GC, and the non-null genotype of GSTM1. Conclusions Polymorphisms of oxidative stress-related genes modified effects of OC and/or sulfate on 8-OHdG, suggesting that effects of OC or sulfate on 8-OHdG and other endpoints may act through the oxidative stress pathway.
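The multiple-testing step can be illustrated generically. The authors used a modified Multiple Testing Procedure in R; as an assumed stand-in, the sketch below applies the standard Benjamini-Hochberg step-up rule to a set of hypothetical interaction p-values to control the false discovery rate (both the procedure choice and the p-values are illustrative, not the paper's).

```python
# Benjamini-Hochberg step-up procedure over hypothetical interaction p-values.

def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected while controlling FDR at alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k_max = rank          # largest rank meeting the step-up criterion
    return sorted(order[:k_max])  # reject the k_max smallest p-values

p = [0.001, 0.008, 0.039, 0.041, 0.30, 0.74]
print(benjamini_hochberg(p))  # -> [0, 1]
```

With twenty SNPs times several pollutants, some correction of this kind is what separates "significant" from "marginally significant" modifiers in the Results.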

  9. Accounting for Proof Test Data in a Reliability Based Design Optimization Framework

    Science.gov (United States)

    Ventor, Gerharad; Scotti, Stephen J.

    2012-01-01

    This paper investigates the use of proof (or acceptance) test data during the reliability-based design optimization of structural components. It is assumed that every component will be proof tested and that a component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems, and the results indicate that significant weight savings are possible when the proof test results are exploited during the design process.
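The mechanism can be sketched with a toy Monte Carlo (all distributions and numbers are invented): components whose strength falls below the proof load are screened out before service, so the in-service failure probability among survivors is lower than the unconditional one. The paper's actual procedure additionally treats the proof load as a design variable inside the optimization.

```python
# Toy Monte Carlo (all numbers invented): passing a proof test truncates the
# low tail of the strength distribution, so in-service reliability improves.
import random

random.seed(7)
N = 100_000
proof_load = 95.0                                   # applied once, pre-service
strength = [random.gauss(100.0, 5.0) for _ in range(N)]
load = [random.gauss(80.0, 8.0) for _ in range(N)]  # random service load

# Unconditional failure probability (no proof test).
pf_all = sum(s < l for s, l in zip(strength, load)) / N

# Failure probability conditioned on having passed the proof test.
survivors = [(s, l) for s, l in zip(strength, load) if s >= proof_load]
pf_cond = sum(s < l for s, l in survivors) / len(survivors)

print(f"P(fail) = {pf_all:.4f} unconditional, {pf_cond:.4f} after proof test")
```

Because the conditional reliability is higher, the designer can shave weight (lowering the mean strength) until the conditional failure probability returns to the target, which is where the weight savings come from.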

  10. Salt Fog Testing Iron-Based Amorphous Alloys

    International Nuclear Information System (INIS)

    Rebak, Raul B.; Aprigliano, Louis F.; Day, S. Daniel; Farmer, Joseph C.

    2007-01-01

    Iron-based amorphous alloys are hard and highly corrosion resistant, which makes them desirable for salt water and other applications. These alloys can be produced as powder and deposited as coatings on any surface that needs to be protected from the environment. It was of interest to examine the behavior of these amorphous alloys in the standard ASTM B 117 salt-fog test. Three different amorphous coating compositions were deposited on 316L SS coupons and exposed for many cycles of the salt fog test. Other common engineering alloys such as 1018 carbon steel, 316L SS and Hastelloy C-22 were tested together with the amorphous coatings. Results show that the amorphous coatings are resistant to rusting in salt fog. Partial devitrification may be responsible for isolated rust spots in one of the coatings. (authors)

  11. A significant carbon sink in temperate forests in Beijing: based on 20-year field measurements in three stands.

    Science.gov (United States)

    Zhu, JianXiao; Hu, XueYang; Yao, Hui; Liu, GuoHua; Ji, ChenJun; Fang, JingYun

    2015-11-01

    Numerous efforts have been made to characterize forest carbon (C) cycles and stocks in various ecosystems. However, long-term observation of each component of the forest C cycle is still lacking. We measured C stocks and fluxes in three permanent temperate forest plots (birch, oak and pine forest) during 2011–2014, and calculated the changes in the components of the C cycle relative to the measurements made during 1992–1994 at Mt. Dongling, Beijing, China. Forest net primary production in the birch, oak, and pine plots was 5.32, 4.53, and 6.73 Mg C ha⁻¹ a⁻¹, respectively. Corresponding net ecosystem production was 0.12, 0.43, and 3.53 Mg C ha⁻¹ a⁻¹. The C stocks and fluxes in 2011–2014 were significantly larger than those in 1992–1994: the biomass C densities in the birch, oak, and pine plots increased from 50.0, 37.7, and 54.0 Mg C ha⁻¹ in 1994 to 101.5, 77.3, and 110.9 Mg C ha⁻¹ in 2014; soil organic C densities increased from 207.0, 239.1, and 231.7 Mg C ha⁻¹ to 214.8, 241.7, and 238.4 Mg C ha⁻¹; and soil heterotrophic respiration increased from 2.78, 3.49, and 1.81 Mg C ha⁻¹ a⁻¹ to 5.20, 4.10, and 3.20 Mg C ha⁻¹ a⁻¹. These results suggest that the mountainous temperate forest ecosystems in Beijing have served as a carbon sink over the last two decades. These observations of C stocks and fluxes provide field-based data for the long-term study of C cycling in temperate forest ecosystems.
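The reported fluxes are internally consistent: net ecosystem production equals net primary production minus soil heterotrophic respiration (NEP = NPP - Rh), which a two-line check confirms for the 2011-2014 values quoted above.

```python
# Check that the reported fluxes satisfy NEP = NPP - Rh for 2011-2014.
npp = {"birch": 5.32, "oak": 4.53, "pine": 6.73}  # net primary production, Mg C / ha / yr
rh = {"birch": 5.20, "oak": 4.10, "pine": 3.20}   # soil heterotrophic respiration
nep = {k: round(npp[k] - rh[k], 2) for k in npp}
print(nep)  # -> {'birch': 0.12, 'oak': 0.43, 'pine': 3.53}, as reported
```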

  12. Immunogenic Cell Death Induced by Ginsenoside Rg3: Significance in Dendritic Cell-based Anti-tumor Immunotherapy.

    Science.gov (United States)

    Son, Keum-Joo; Choi, Ki Ryung; Lee, Seog Jae; Lee, Hyunah

    2016-02-01

    Cancer is one of the leading causes of morbidity and mortality worldwide; therefore there is a need to discover new therapeutic modalities with improved efficacy and safety. Immune (cell) therapy is a promising therapeutic strategy for the treatment of intractable cancers. The effectiveness of certain chemotherapeutics in inducing immunogenic tumor cell death, thus promoting cancer eradication, has been reported. Ginsenoside Rg3 is a ginseng saponin that has antitumor and immunomodulatory activity. In this study, we treated tumor cells with Rg3 to verify the significance of inducing immunogenic tumor cell death in antitumor therapy, especially in DC-based immunotherapy. Rg3 killed both immunogenic (B16F10 melanoma cells) and non-immunogenic (LLC: Lewis Lung Carcinoma cells) tumor cells by inducing apoptosis. Surface expression of immunogenic death markers, including calreticulin and heat shock proteins, and the transcription of the relevant genes were increased in the Rg3-dying tumor. Increased calreticulin expression was directly related to the uptake of dying tumor cells by dendritic cells (DCs): the proportion of CRT(+) CD11c(+) cells was increased in the Rg3-treated group. Interestingly, tumor cells dying by immunogenic cell death secreted IFN-γ, an effector molecule for antitumor activity in T cells. Along with the Rg3-induced suppression of pro-angiogenic (TNF-α) and immunosuppressive cytokine (TGF-β) secretion, IFN-γ production from the Rg3-treated tumor cells may also indicate Rg3 as an effective anticancer immunotherapeutic strategy. The data clearly suggest that Rg3 induces immunogenic tumor cell death through its cytotoxic effect and its ability to induce DC function, indicating that Rg3 may be an effective immunotherapeutic strategy.

  13. Effects of computer-based immediate feedback on foreign language listening comprehension and test-associated anxiety.

    Science.gov (United States)

    Lee, Shu-Ping; Su, Hui-Kai; Lee, Shin-Da

    2012-06-01

    This study investigated the effects of immediate feedback on computer-based foreign language listening comprehension tests and on intrapersonal test-associated anxiety in 72 English major college students at a Taiwanese University. Foreign language listening comprehension of computer-based tests designed by MOODLE, a dynamic e-learning environment, with or without immediate feedback together with the state-trait anxiety inventory (STAI) were tested and repeated after one week. The analysis indicated that immediate feedback during testing caused significantly higher anxiety and resulted in significantly higher listening scores than in the control group, which had no feedback. However, repeated feedback did not affect the test anxiety and listening scores. Computer-based immediate feedback did not lower debilitating effects of anxiety but enhanced students' intrapersonal eustress-like anxiety and probably improved their attention during listening tests. Computer-based tests with immediate feedback might help foreign language learners to increase attention in foreign language listening comprehension.

  14. Tests of beam-based alignment at FACET

    CERN Document Server

    Latina, A; Schulte, D; Adli, E

    2014-01-01

    The performance of future linear colliders will depend critically on beam-based alignment (BBA) and feedback systems, which will play a crucial role in guaranteeing low-emittance transport throughout such machines. BBA algorithms designed to improve the beam transmission in a linac by simultaneously optimising the trajectory and minimising the residual dispersion have been thoroughly studied in theory over the last years, and successfully verified experimentally. One such technique is called Dispersion-Free Steering (DFS). A careful study of the DFS performance at the SLAC test facility FACET led us to design a beam-based technique specifically targeted at reducing the impact of transverse short-range wakefields, rather than of the dispersion, since the wakefields are the limiting factor for the FACET performance. This technique is called Wakefield-Free Steering (WFS). The results of the first tests of WFS at FACET are presented in this paper.
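Both DFS and WFS reduce to the same kind of weighted least-squares problem: find corrector settings that simultaneously flatten the measured orbit and an auxiliary signal (the dispersion for DFS, the wakefield-difference orbit for WFS). The sketch below solves a toy two-corrector instance; the response matrices and readings are invented, and a real machine would use measured responses over hundreds of BPMs.

```python
# Toy DFS-style correction: corrector settings theta minimize the stacked
# residual |R.theta - b|^2 + w^2 |D.theta - eta|^2 (all matrices invented).

def lstsq2(A, y):
    """Solve min |A x - y| for x in R^2 via the 2x2 normal equations."""
    ata = [[sum(row[i] * row[j] for row in A) for j in range(2)] for i in range(2)]
    aty = [sum(row[i] * yk for row, yk in zip(A, y)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    return [(ata[1][1] * aty[0] - ata[0][1] * aty[1]) / det,
            (ata[0][0] * aty[1] - ata[1][0] * aty[0]) / det]

R = [[1.0, 0.5], [0.3, 1.0]]   # orbit response, BPM x corrector
D = [[0.8, 0.1], [0.2, 0.9]]   # dispersion (or wakefield-signal) response
b = [2.0, 1.0]                 # measured orbit at the BPMs
eta = [0.5, 0.4]               # measured dispersion
w = 1.0                        # relative weight of the auxiliary term

A = R + [[w * d for d in row] for row in D]   # stacked least-squares system
y = b + [w * e for e in eta]
theta = lstsq2(A, y)           # the applied correction is -theta
print([round(v, 3) for v in theta])
```

WFS keeps exactly this structure but replaces the dispersion rows with the difference orbit produced by a change in bunch charge, so the fit targets the wakefield contribution instead.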

  15. Syndromic Panel-Based Testing in Clinical Microbiology.

    Science.gov (United States)

    Ramanan, Poornima; Bryson, Alexandra L; Binnicker, Matthew J; Pritt, Bobbi S; Patel, Robin

    2018-01-01

    The recent development of commercial panel-based molecular diagnostics for the rapid detection of pathogens in positive blood culture bottles, respiratory specimens, stool, and cerebrospinal fluid has resulted in a paradigm shift in clinical microbiology and clinical practice. This review focuses on U.S. Food and Drug Administration (FDA)-approved/cleared multiplex molecular panels with more than five targets designed to assist in the diagnosis of bloodstream, respiratory tract, gastrointestinal, or central nervous system infections. While these panel-based assays have the clear advantages of a rapid turnaround time and the detection of a large number of microorganisms and promise to improve health care, they present certain challenges, including cost and the definition of ideal test utilization strategies (i.e., optimal ordering) and test interpretation. Copyright © 2017 American Society for Microbiology.

  16. Testing R&D-Based Endogenous Growth Models

    DEFF Research Database (Denmark)

    Kruse-Andersen, Peter Kjær

    2017-01-01

    R&D-based growth models are tested using US data for the period 1953-2014. A general growth model is developed which nests the model varieties of interest. The model implies a cointegrating relationship between multifactor productivity, research intensity, and employment. This relationship is estimated using cointegrated VAR models. The results provide evidence against the widely used fully endogenous variety and in favor of the semi-endogenous variety. Forecasts based on the empirical estimates suggest that the slowdown in US productivity growth will continue. Particularly, the annual long...

  17. Feasibility and willingness-to-pay for integrated community-based tuberculosis testing

    Directory of Open Access Journals (Sweden)

    Vickery Carter

    2011-11-01

    Full Text Available Abstract Background Community-based screening for TB, combined with HIV and syphilis testing, faces a number of barriers. One significant barrier is the value that target communities place on such screening. Methods Integrated testing for TB, HIV, and syphilis was performed in neighborhoods identified using geographic information systems-based disease mapping. TB testing included skin testing and interferon gamma release assays. Subjects completed a survey describing disease risk factors, healthcare access, healthcare utilization, and willingness to pay for integrated testing. Results Behavioral and social risk factors among the 113 subjects were prevalent (71% prior incarceration, 27% prior or current crack cocaine use, 35% homelessness), and only 38% had a regular healthcare provider. The initial 24 subjects reported that they would be willing to pay a median $20 (IQR: 0-100) for HIV testing and $10 (IQR: 0-100) for TB testing when the question was asked in an open-ended fashion, but when the question was changed to a multiple-choice format, the next 89 subjects reported that they would pay a median $5 for testing, and 23% reported that they would either not pay anything to get tested or would need to be paid $5 to get tested for TB, HIV, or syphilis. Among persons who received tuberculin skin testing, only 14/78 (18%) participants returned to have their skin tests read. Only 14/109 (13%) persons who underwent HIV testing returned to receive their HIV results. Conclusion The relatively high-risk persons screened in this community outreach study placed low value on testing. Reported willingness to pay for such testing, while low, likely overestimated the true willingness to pay. Successful integrated TB, HIV, and syphilis testing programs in high-risk populations will likely require one-visit diagnostic testing and incentives.

  18. The comparison between science virtual and paper based test in measuring grade 7 students’ critical thinking

    Science.gov (United States)

    Dhitareka, P. H.; Firman, H.; Rusyati, L.

    2018-05-01

    This research compares science virtual and paper-based tests in measuring grade 7 students' critical thinking, considered across Multiple Intelligences and gender. A quasi-experimental method with a within-subjects design was conducted in order to obtain the data. The population of this research was all seventh grade students in ten classes of one public secondary school in Bandung. There were 71 students within two classes, taken randomly, who became the sample in this research. The data were obtained through 28 questions on the topic of living things and environmental sustainability, constructed based on the eight critical thinking elements proposed by Inch, with the questions provided in both science virtual and paper-based tests. The data were analysed using a paired-samples t test when the data were parametric and a Wilcoxon signed-ranks test when the data were non-parametric. In the general comparison, the p-value of the comparison between the science virtual and paper-based tests' scores is 0.506, indicating that there is no significant difference between the science virtual and paper-based tests based on the tests' scores. The results are furthermore supported by the students' attitude score of 3.15 on a scale from 1 to 4, indicating that they have positive attitudes towards the Science Virtual Test.
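The paired analysis described above can be sketched as follows (all scores invented): each student contributes one score per test form, the t statistic is computed on the per-student differences, and its magnitude is compared against the two-sided 5% critical value for n - 1 degrees of freedom.

```python
# Paired-samples t test on per-student differences (scores are invented).
import math

def paired_t(x, y):
    """Paired-samples t statistic on per-subject differences."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Eight hypothetical students, one score per test form.
virtual = [22, 18, 25, 20, 24, 19, 23, 21]
paper   = [21, 19, 24, 22, 23, 18, 24, 20]

t = paired_t(virtual, paper)
critical = 2.365   # two-sided 5% critical value of the t distribution, df = 7
print(round(t, 3), "significant" if abs(t) > critical else "not significant")
```

The study's p-value of 0.506 on 71 students corresponds to the "not significant" branch here; the Wilcoxon signed-ranks variant used for non-parametric data ranks the absolute differences instead of averaging them.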

  19. Space Launch System Base Heating Test: Environments and Base Flow Physics

    Science.gov (United States)

    Mehta, Manish; Knox, Kyle S.; Seaford, C. Mark; Dufrene, Aaron T.

    2016-01-01

    The NASA Space Launch System (SLS) vehicle is composed of four RS-25 liquid oxygen-hydrogen rocket engines in the core stage and two 5-segment solid rocket boosters, and as a result six hot supersonic plumes interact within the aft section of the vehicle during flight. Due to the complex nature of rocket plume-induced flows within the launch vehicle base during ascent and a new vehicle configuration, sub-scale wind tunnel testing is required to reduce SLS base convective environment uncertainty and design risk levels. This hot-fire test program was conducted at the CUBRC Large Energy National Shock (LENS) II short-duration test facility to simulate flight from altitudes of 50 kft to 210 kft. The test program is a challenging and innovative effort that has not been attempted in 40+ years for a NASA vehicle. This presentation discusses the various trends of base convective heat flux and pressure as a function of altitude at various locations within the core-stage and booster base regions of the two-percent SLS wind tunnel model. In-depth understanding of the base flow physics is presented using the test data, infrared high-speed imaging and theory. The normalized test design environments are compared to various NASA semi-empirical numerical models to determine exceedance and conservatism of the flight-scaled test-derived base design environments. A brief discussion of the thermal impact to the launch vehicle base components is also presented.

  20. SABATPG-A Structural Analysis Based Automatic Test Generation System

    Institute of Scientific and Technical Information of China (English)

    李忠诚; 潘榆奇; 闵应骅

    1994-01-01

    A TPG system, SABATPG, is given based on a generic structural model of large circuits. Three techniques, namely partial implication, aftereffect of identified undetectable faults, and shared sensitization with the new concepts of localization and aftereffect, are employed in the system to improve the FAN algorithm. Experiments on the 10 ISCAS benchmark circuits show that the computing time of SABATPG for test generation is 19.42% less than that of the FAN algorithm.

  1. Investigation of Concrete Electrical Resistivity As a Performance Based Test

    OpenAIRE

    Malakooti, Amir

    2017-01-01

    The purpose of this research project was to identify the extent that concrete resistivity measurements (bulk and/or surface) can be used as a performance based lab test to improve the quality of concrete in Utah bridge decks. By allowing UDOT to specify a required resistivity, concrete bridge deck quality will increase and future maintenance costs will decrease. This research consisted of two phases: the field phase and the lab phase. In the field phase, concrete samples were gathered from...

  2. Self-testing protocols based on the chained Bell inequalities

    International Nuclear Information System (INIS)

    Šupić, I; Augusiak, R; Salavrakos, A; Acín, A

    2016-01-01

    Self-testing is a device-independent technique based on non-local correlations whose aim is to certify the effective uniqueness of the quantum state and measurements needed to produce these correlations. It is known that the maximal violation of some Bell inequalities suffices for this purpose. However, most of the existing self-testing protocols for two devices exploit the well-known Clauser–Horne–Shimony–Holt Bell inequality or modifications of it, and always with two measurements per party. Here, we generalize the previous results by demonstrating that one can construct self-testing protocols based on the chained Bell inequalities, defined for two devices implementing an arbitrary number of two-output measurements. On the one hand, this proves that the quantum state and measurements leading to the maximal violation of the chained Bell inequality are unique. On the other hand, in the limit of a large number of measurements, our approach allows one to self-test the entire plane of measurements spanned by the Pauli matrices X and Z. Our results also imply that the chained Bell inequalities can be used to certify two bits of perfect randomness. (paper)
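The bounds behind this construction are easy to state: for the chained Bell inequality with N two-output measurements per device, local-hidden-variable models reach at most 2N - 2, while the maximal quantum value, the point that self-tests the state and measurements, is 2N cos(pi / 2N). For N = 2 this reduces to the familiar CHSH pair 2 and 2 sqrt(2).

```python
# Local vs maximal quantum values of the N-setting chained Bell inequality.
import math

def local_bound(n):
    """Classical (local-hidden-variable) bound: 2N - 2."""
    return 2 * n - 2

def quantum_max(n):
    """Maximal quantum violation, attained by the self-tested realisation."""
    return 2 * n * math.cos(math.pi / (2 * n))

for n in (2, 3, 10):
    print(n, local_bound(n), round(quantum_max(n), 4))
```

The quantum value always lies strictly between the local bound and the local bound plus 2, with the gap 2 - 2N(1 - cos(pi/2N)) approaching 2 from below as N grows, even though the fractional size of the violation shrinks.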

  3. The Test Reactor Embrittlement Data Base (TR-EDB)

    International Nuclear Information System (INIS)

    Stallmann, F.W.; Kam, F.B.K.; Wang, J.A.

    1993-01-01

    The Test Reactor Embrittlement Data Base (TR-EDB) is part of an ongoing program to collect test data from materials irradiations to aid in the research and evaluation of embrittlement prediction models that are used to assure the safety of pressure vessels in power reactors. This program is being funded by the US Nuclear Regulatory Commission (NRC) and has resulted in the publication of the Power Reactor Embrittlement Data Base (PR-EDB), whose second version is currently being released. The TR-EDB is a compatible collection of data from experiments in materials test reactors. These data contain information that is not obtainable from surveillance results, especially about the effects of annealing after irradiation. Other information that is only available from test reactors is the influence of fluence rates and irradiation temperatures on radiation embrittlement. The first version of the TR-EDB will be released in fall of 1993 and contains published results from laboratories in many countries. Data collection will continue and further updates will be published.

  4. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    Science.gov (United States)

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity, agent-based computational modeling, that may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs (not without its own issues) may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.

  5. Testing for Turkeys: Faith-Based Community HIV Testing Initiative: An Update.

    Science.gov (United States)

    DeGrezia, Mary; Baker, Dorcas; McDowell, Ingrid

    2018-06-04

    Testing for Turkeys (TFT) HIV/hepatitis C virus (HCV) and sexually transmitted infection (STI) testing initiative is a joint effort between Older Women Embracing Life (OWEL), Inc., a nonprofit faith-based community HIV support and advocacy organization; the Johns Hopkins University Regional Partner MidAtlantic AIDS Education and Training Center (MAAETC); and the University of Maryland, Baltimore JACQUES Initiative (JI), and is now in its 11th year of providing HIV outreach, testing, and linkage to care. Since 2008, the annual TFT daylong community HIV testing and linkage to care initiative has been held 2 weeks before Thanksgiving at a faith-based center in Baltimore, Maryland, in a zip code where one in 26 adults and adolescents ages 13 years and older is living with HIV (Maryland Department of Health, Center for HIV Surveillance, Epidemiology, and Evaluation, 2017). TFT includes a health fair with vendors that supply an abundance of education information (handouts, videos, one-on-one counseling) and safer sex necessities, including male and female condoms, dental dams, and lube. Nutritious boxed lunches and beverages are provided to all attendees and volunteers. Everyone tested for HIV who stays to obtain their results is given a free frozen turkey as they exit. The Baltimore City Health Department is on hand with a confidential no-test list (persons in the state already known to have HIV) to diminish retesting of individuals previously diagnosed with HIV. However, linkage to care is available to everyone: newly diagnosed individuals and those previously diagnosed and currently out of care. Copyright © 2018 Association of Nurses in AIDS Care. Published by Elsevier Inc. All rights reserved.

  6. Evaluation of the routine antimicrobial susceptibility testing results of clinically significant anaerobic bacteria in a Slovenian tertiary-care hospital in 2015.

    Science.gov (United States)

    Jeverica, Samo; Kolenc, Urša; Mueller-Premru, Manica; Papst, Lea

    2017-10-01

    The aim of our study was to determine the antimicrobial susceptibility profiles of 2673 clinically significant anaerobic bacteria belonging to the major genera, isolated in 2015 in a large tertiary-care hospital in Slovenia. The species identification was performed by MALDI-TOF mass spectrometry. Antimicrobial susceptibility was determined immediately at the isolation of the strains against: penicillin, co-amoxiclav, imipenem, clindamycin and metronidazole, using gradient diffusion methodology and EUCAST breakpoints. The most frequent anaerobes were Bacteroides fragilis group with 31% (n = 817), Gram-positive anaerobic cocci (GPACs) with 22% (n = 589), Prevotella with 14% (n = 313) and Propionibacterium with 8% (n = 225). Metronidazole has retained full activity (100%) against all groups of anaerobic bacteria intrinsically susceptible to it. Co-amoxiclav and imipenem were active against most tested anaerobes with zero or low resistance rates. However, the observed resistance to co-amoxiclav (8%) and imipenem (1%) is worrying, especially among B. fragilis group isolates. High overall resistance (23%) to clindamycin was detected in our study and was highest among the genera Prevotella, Bacteroides, Parabacteroides, GPACs and Clostridium. Routine testing of antimicrobial susceptibility of clinically relevant anaerobic bacteria is feasible and provides good surveillance data. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Standard Test Method for Preparing Aircraft Cleaning Compounds, Liquid Type, Water Base, for Storage Stability Testing

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 This test method covers the determination of the stability in storage, of liquid, water-base chemical cleaning compounds, used to clean the exterior surfaces of aircraft. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  8. Inquiry-Based Instruction and High Stakes Testing

    Science.gov (United States)

    Cothern, Rebecca L.

    Science education is a key to economic success for a country in terms of promoting advances in national industry and technology and maximizing competitive advantage in a global marketplace. The December 2010 Program for International Student Assessment (PISA) ranked the United States 23rd of 65 countries in science. That dismal standing in science proficiency impedes the ability of American school graduates to compete in the global marketplace. Furthermore, the implementation of high stakes testing in science mandated by the 2007 No Child Left Behind (NCLB) Act has created an additional need for educators to find effective science pedagogy. Research has shown that inquiry-based science instruction is one of the predominant science instructional methods. Inquiry-based instruction is a multifaceted teaching method with its theoretical foundation in constructivism. A correlational survey research design was used to determine the relationship between levels of inquiry-based science instruction and student performance on a standardized state science test. A self-report survey, using a Likert-type scale, was completed by 26 fifth grade teachers. Participants' responses were analyzed and grouped as high, medium, or low level inquiry instruction. The unit of analysis for the achievement variable was the student scale score average from the state science test. Spearman's Rho correlation data showed a positive relationship between the level of inquiry-based instruction and student achievement on the state assessment. The findings can assist teachers and administrators by providing additional research on the benefits of the inquiry-based instructional method. Implications for positive social change include increases in student proficiency and decision-making skills related to science policy issues which can help make them more competitive in the global marketplace.
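    The correlational design described, a Spearman's rho between reported inquiry level and class achievement, can be sketched as follows; the numbers are hypothetical illustrations, not the study's data.

```python
from scipy.stats import spearmanr

# Hypothetical data: inquiry level reported by each teacher (1 = low,
# 2 = medium, 3 = high) and the mean state science scale score of that
# teacher's class. Values are illustrative only.
inquiry_level = [1, 1, 2, 2, 2, 3, 3, 3, 1, 2, 3, 3]
class_score   = [610, 598, 622, 630, 615, 641, 655, 648, 602, 625, 650, 644]

rho, p = spearmanr(inquiry_level, class_score)
print(f"Spearman's rho = {rho:.2f}, p = {p:.4f}")
```

    Spearman's rho is used rather than Pearson's r because the inquiry grouping is ordinal, not interval-scaled.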

  9. Life estimation of I and C cable insulation materials based on accelerated life testing

    International Nuclear Information System (INIS)

    Santhosh, T.V.; Ramteke, P.K.; Shrestha, N.B.; Ahirwar, A.K.; Gopika, V.

    2016-01-01

    Accelerated life tests are becoming increasingly popular in today's industry due to the need for obtaining life data quickly and reliably. Life testing of products under higher stress levels, without introducing additional failure modes, can provide significant savings of both time and money. Correct analysis of data gathered via such accelerated life testing will yield parameters and other information for the product's life under use stress conditions. To be of practical use in assessing the operational behaviour of cables in NPPs, laboratory ageing aims to mimic the type of degradation observed under operational conditions. Conditions of testing therefore need to be carefully chosen to ensure that the degradation mechanisms occurring in the accelerated tests are similar to those which occur in service. This paper presents the results of an investigation in which elongation-at-break (EAB) measurements were carried out on a typical control cable to predict the mean life at service conditions. A low-voltage polyvinyl chloride (PVC) insulated and PVC sheathed control cable, used in NPP instrumentation and control (I and C) applications, was subjected to thermal ageing at three elevated temperatures.
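    Extrapolating elevated-temperature ageing data to service temperature is conventionally done with an Arrhenius model. The sketch below fits hypothetical times-to-endpoint at three ageing temperatures, not the paper's EAB measurements.

```python
import math

# Hypothetical times (hours) for EAB to fall to a 50% endpoint at three
# elevated ageing temperatures (kelvin). Illustrative values only.
data = [(393.0, 500.0), (383.0, 1200.0), (373.0, 3000.0)]

k_B = 8.617e-5  # Boltzmann constant, eV/K

# Arrhenius model: ln(t) = ln(A) + Ea / (k_B * T).
# Fit by least squares with x = 1 / (k_B * T), y = ln(t).
xs = [1.0 / (k_B * T) for T, _ in data]
ys = [math.log(t) for _, t in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
Ea = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
     sum((x - xbar) ** 2 for x in xs)          # activation energy, eV
lnA = ybar - Ea * xbar

# Extrapolate to a 60 C (333 K) service temperature.
T_service = 333.0
t_service = math.exp(lnA + Ea / (k_B * T_service))
print(f"Ea = {Ea:.2f} eV, predicted life at 60 C = {t_service:.0f} h")
```

    The extrapolation is only valid if the same degradation mechanism operates at all temperatures, which is exactly the caveat the abstract raises about choosing test conditions.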

  10. Comparing Science Virtual and Paper-Based Test to Measure Students’ Critical Thinking based on VAK Learning Style Model

    Science.gov (United States)

    Rosyidah, T. H.; Firman, H.; Rusyati, L.

    2017-02-01

    This research compared virtual and paper-based tests measuring students’ critical thinking based on the VAK (Visual-Auditory-Kinesthetic) learning style model. A quasi-experimental method with a one-group post-test-only design was applied in order to analyze the data. Forty eighth-grade students at a public junior high school in Bandung constituted the sample. The quantitative data were obtained through 26 questions about living things and environmental sustainability, constructed based on the eight elements of critical thinking and provided in both virtual and paper-based form. Analysis of the results shows that scores within the visual, auditory, and kinesthetic groups were not significantly different between the virtual and paper-based tests. In addition, the results were supported by a questionnaire on students’ responses to the virtual test, which scored 3.47 on a scale of 4, meaning that students responded positively on all aspects measured: interest, impression, and expectation.
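    A two-form comparison like the one described can be sketched with a non-parametric test; the scores below are hypothetical, and the specific test choice is an assumption, not the paper's stated analysis.

```python
from scipy.stats import mannwhitneyu

# Hypothetical critical-thinking scores (out of 26) for one learning-style
# group on the virtual and the paper-based form. Illustrative values only.
virtual = [18, 20, 17, 21, 19, 16, 22, 18, 20, 19]
paper   = [19, 18, 17, 20, 18, 17, 21, 19, 18, 20]

u, p = mannwhitneyu(virtual, paper, alternative='two-sided')
print(f"U = {u}, p = {p:.3f}")  # a large p means no significant difference
```

    A non-parametric test is a reasonable default here because test scores from small groups are often not normally distributed.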

  11. Application of risk-based methods to inservice testing of check valves

    Energy Technology Data Exchange (ETDEWEB)

    Closky, N.B.; Balkey, K.R.; McAllister, W.J. [and others]

    1996-12-01

    Research efforts have been underway in the American Society of Mechanical Engineers (ASME) and industry to define appropriate methods for the application of risk-based technology in the development of inservice testing (IST) programs for pumps and valves in nuclear steam supply systems. This paper discusses a pilot application of these methods to the inservice testing of check valves in the emergency core cooling system of Georgia Power's Vogtle nuclear power station. The results of the probabilistic safety assessment (PSA) are used to divide the check valves into risk-significant and less-risk-significant groups. This information is reviewed by a plant expert panel along with the consideration of appropriate deterministic insights to finally categorize the check valves into more safety-significant and less safety-significant component groups. All of the more safety-significant check valves are further evaluated in detail using a failure modes and causes analysis (FMCA) to assist in defining effective IST strategies. A template has been designed to evaluate how effective current and emerging tests for check valves are in detecting failures or in finding significant conditions that are precursors to failure for the likely failure causes. This information is then used to design and evaluate appropriate IST strategies that consider both the test method and frequency. A few of the less safety-significant check valves are also evaluated using this process since differences exist in check valve design, function, and operating conditions. Appropriate test strategies are selected for each check valve that has been evaluated based on safety and cost considerations. Test strategies are inferred from this information for the other check valves based on similar check valve conditions. Sensitivity studies are performed using the PSA model to arrive at an overall IST program that maintains or enhances safety at the lowest achievable cost.

  12. Performance-based alternative assessments as a means of eliminating gender achievement differences on science tests

    Science.gov (United States)

    Brown, Norman Merrill

    1998-09-01

    Historically, researchers have reported an achievement difference between females and males on standardized science tests. These differences have been reported to be based upon science knowledge, abstract reasoning skills, mathematical abilities, and cultural and social phenomena. This research was designed to determine how mastery of specific science content from public school curricula might be evaluated with performance-based assessment models, without producing gender achievement differences. The assessment instruments used were Harcourt Brace Educational Measurement's GOALS: A Performance-Based Measure of Achievement and the performance-based portion of the Stanford Achievement Test, Ninth Edition (SAT9). The identified independent variables were test, gender, ethnicity, and grade level. A 2 x 2 x 6 x 12 (test x gender x ethnicity x grade) factorial experimental design was used to organize the data. A stratified random sample (N = 2400) was selected from a national pool of norming data: N = 1200 from the GOALS group and N = 1200 from the SAT9 group. The ANOVA analysis yielded mixed results. The factors of test, gender, ethnicity by grade, gender by grade, and gender by grade by ethnicity failed to produce significant results (alpha = 0.05). The factors yielding significant results were ethnicity, grade, and ethnicity by grade. Therefore, no significant differences were found between female and male achievement on these performance-based assessments.
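    A one-factor slice of such a factorial comparison (the gender factor alone) can be sketched with SciPy; a full four-factor ANOVA would normally use a dedicated statistics package. The scores below are hypothetical, not the norming data.

```python
from scipy.stats import f_oneway

# Hypothetical scale scores for female and male examinees on one
# performance-based form. Illustrative values only.
female = [612, 630, 598, 645, 620, 633, 607, 625]
male   = [608, 628, 601, 640, 615, 636, 610, 622]

f_stat, p = f_oneway(female, male)
print(f"F = {f_stat:.2f}, p = {p:.3f}")  # a large p: no gender difference
```

    With two groups, this one-way ANOVA is equivalent to a two-sample t-test; the factorial design in the study additionally tests interactions such as ethnicity by grade.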

  13. A performance-oriented and risk-based regulation for containment testing

    International Nuclear Information System (INIS)

    Dey, M.

    1994-01-01

    In August 1992, the NRC initiated a major initiative to develop requirements for containment testing that are less prescriptive, and more performance-oriented and risk-based. This action was a result of public comments and several studies that concluded that the economic burden of certain present containment testing requirements is not commensurate with their safety benefits. The rulemaking will include consideration of relaxing the allowable containment leakage rate, increasing the interval for the integrated containment test, and establishing intervals for the local containment leak rate tests based on their performance. A study has been conducted to provide technical information for establishing the performance criterion for containment tests (the allowable leakage rate) commensurate with its significance to total public risk. The study used results of a recent comprehensive study conducted by the NRC, NUREG-1150, 'Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants,' to examine the sensitivity of public risk to containment leakage. Risk was found to be insensitive to containment leakage rates up to levels of about 100 percent-volume per day for certain types of containments. PRA methods have also been developed to establish risk-based intervals for containment tests based on past experience. Preliminary evaluations show that increasing the interval for the integrated containment leakage test from three times every ten years to once every ten years would have an insignificant impact on public risk. Preliminary analyses of operational experience data for local leak rate tests show that performance-based testing (in which valves and penetrations that perform well are tested less frequently) is feasible with marginal impact on safety. The above technical studies are being used to develop efficient (cost-effective) requirements for containment tests. (author). 4 refs., 2 figs

  14. Improved Accelerated Stress Tests Based on Fuel Cell Vehicle Data

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, Timothy [Research Engineer]; Motupally, Sathya [Research Engineer]

    2012-06-01

    UTC led a top-tier team of industry and national laboratory participants to update and improve DOE’s Accelerated Stress Tests (ASTs) for hydrogen fuel cells. This in-depth investigation focused on critical fuel cell components (e.g. membrane electrode assemblies, MEAs) whose durability represents a barrier to widespread commercialization of hydrogen fuel cell technology. UTC had access to MEA materials that had accrued significant load time under real-world conditions in the PureMotion® 120 power plant used in transit buses. These materials are referred to as end-of-life (EOL) components in the rest of this document. Advanced characterization techniques were used to evaluate degradation mode progress in these critical cell components extracted from both bus power plants and corresponding materials tested using the DOE ASTs. These techniques were applied to samples at beginning-of-life (BOL) to serve as a baseline. These comparisons indicated the progress of the various failure modes that these critical components were subject to, such as membrane degradation, catalyst support corrosion, platinum group metal dissolution, and others. Gaps in how well the existing ASTs predicted the degradation observed in the field in terms of these modes were outlined. Using these gaps, new ASTs were recommended and tested to better reflect the degradation modes seen in field operation. Also, BOL components were degraded in a test vehicle at UTC designed to accelerate bus field operation.

  15. A computer vision based candidate for functional balance test.

    Science.gov (United States)

    Nalci, Alican; Khodamoradi, Alireza; Balkan, Ozgur; Nahab, Fatta; Garudadri, Harinath

    2015-08-01

    Balance in humans is a motor skill based on complex multimodal sensing, processing and control. Ability to maintain balance in activities of daily living (ADL) is compromised due to aging, diseases, injuries and environmental factors. The Centers for Disease Control and Prevention (CDC) estimate of the costs of falls among older adults was $34 billion in 2013 and is expected to reach $54.9 billion in 2020. In this paper, we present a brief review of balance impairments followed by subjective and objective tools currently used in clinical settings for human balance assessment. We propose a novel computer vision (CV) based approach as a candidate for functional balance test. The test will take less than a minute to administer and is expected to be objective, repeatable and highly discriminative in quantifying ability to maintain posture and balance. We present an informal study with preliminary data from 10 healthy volunteers, and compare performance with a balance assessment system called BTrackS Balance Assessment Board. Our results show a high degree of correlation with BTrackS. The proposed system promises to be a good candidate for objective functional balance tests and warrants further investigation to assess validity in clinical settings, including acute care, long term care and assisted living care facilities. Our long term goals include non-intrusive approaches to assess balance competence during ADL in independent living environments.

  16. Vision-based Ground Test for Active Debris Removal

    Directory of Open Access Journals (Sweden)

    Seong-Min Lim

    2013-12-01

    Due to continuous space development by mankind, the number of space objects, including space debris, in orbits around the Earth has increased, and accordingly difficulties for space development and activities are expected in the near future. In this study, among the stages of space debris removal, the implementation of a vision-based technique for approaching space debris from a far-range rendezvous state to a proximity state is described, together with the ground test performance results. For vision-based object tracking, the CAM-shift algorithm, with its high speed and strong performance, was combined with a Kalman filter. For measuring the distance to the tracked object, a stereo camera was used. For the construction of a low-cost space environment simulation test bed, a sun simulator was used, and as the approaching platform, a two-dimensional mobile robot was used. The tracking status was examined while changing the position of the sun simulator, and the results indicated that CAM-shift showed a tracking rate of about 87% and the relative distance could be measured down to 0.9 m. In addition, considerations for future space environment simulation tests were proposed.
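    The tracking pipeline above pairs CAM-shift (available in OpenCV as cv2.CamShift) with a Kalman filter. The sketch below shows only the Kalman half: a minimal constant-velocity filter in NumPy that smooths noisy 2-D centroid measurements. The trajectory and noise levels are made up for illustration.

```python
import numpy as np

# Constant-velocity Kalman filter of the kind commonly paired with
# CAM-shift: state [x, y, vx, vy], measurements are noisy (x, y) centroids.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # measurement model
Q = 0.01 * np.eye(4)   # process noise covariance
R = 4.0 * np.eye(2)    # measurement noise covariance
x = np.zeros(4)
P = np.eye(4) * 100.0  # large initial uncertainty

rng = np.random.default_rng(0)
truth = [(t * 2.0, t * 1.0) for t in range(30)]      # target moves linearly
meas = [(px + rng.normal(0, 2), py + rng.normal(0, 2)) for px, py in truth]

for z in meas:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured centroid
    z = np.asarray(z)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print("final estimate:", x[:2], "true:", truth[-1])
```

    In a full tracker, the CAM-shift window center supplies the measurement z at each frame, and the filter's prediction can re-seed the search window when the target is briefly lost.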

  17. Prevalence of Patch Test Positivity with Some Bases

    Directory of Open Access Journals (Sweden)

    J S Pasricha

    1987-01-01

    To evaluate the suitability of some chemicals to act as bases for antigens for patch tests, patch tests were performed with these agents in patients having contact dermatitis. Propylene glycol used as such produced positive reactions in 25 (50%) patients, of which 12 were 2+ or more; polyethylene glycol 200 produced positive reactions in 9 (18%) cases, of which 4 were 2+ or more; a mixture of liquid paraffin and hard paraffin gave rise to positive reactions in 10 (10%) cases, 3 of these being 2+; a mixture of liquid paraffin and bees wax was positive in 14 (14%) cases, 3 of these being 2+; yellow petrolatum was positive in 4 (8%) cases, one of which was 2+; white petrolatum was positive in 3 (6%) cases, all of these being 1+ reactions only; and glycerol gave rise to a 1+ reaction in only one (2%) case. In tropical countries, water should be used as the base for as many antigens as possible; for the others, a control test with the base must be included.

  18. A Universal Motor Performance Test System Based on Virtual Instrument

    Directory of Open Access Journals (Sweden)

    Wei Li

    2014-09-01

    With the development of technology, universal motors play an increasingly important role in daily life and production; they are used in an increasingly wide range of fields and the requirements on them gradually increase. How to control the speed and monitor the real-time temperature of motors are key issues. The cost of motor testing systems based on traditional technology platforms is very high for many reasons. In this paper a universal motor performance test system based on virtual instruments is presented. The system achieves precise control of the motor speed and measures the real-time temperature of the motor bearing support in order to test general-purpose motor properties. Experimental results show that the system works stably in controlling the speed and monitoring the real-time temperature. It has advantages in speed, stability, cost and accuracy that the traditional use of an SCM cannot match. Besides, it is easy to expand and reconfigure.
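    Closed-loop motor speed control of the kind the system performs is commonly implemented as a PID loop. The sketch below is a generic discrete PID driving a toy first-order motor model; the gains and plant response are illustrative assumptions, not the paper's controller.

```python
class PID:
    """Minimal discrete PID controller; the gains used below are illustrative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy first-order motor model: speed relaxes toward the applied drive.
pid = PID(kp=0.8, ki=0.5, kd=0.0, dt=0.01)
speed = 0.0
for _ in range(2000):
    drive = pid.step(1500.0, speed)       # target 1500 rpm
    speed += (drive - speed) * 0.05       # crude plant response per step
print(f"final speed = {speed:.0f} rpm")
```

    The integral term is what removes the steady-state error; with a purely proportional controller the toy plant would settle below the 1500 rpm setpoint.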

  19. Optimized periodic verification testing: blended risk and performance-based MOV inservice test program, an application of ASME Code Case OMN-1

    Energy Technology Data Exchange (ETDEWEB)

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P. [and others]

    1996-12-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.

  1. The Art Gallery Test: A Preliminary Comparison between Traditional Neuropsychological and Ecological VR-Based Tests

    Directory of Open Access Journals (Sweden)

    Pedro Gamito

    2017-11-01

    Ecological validity should be the cornerstone of any assessment of cognitive functioning. For this purpose, we have developed a preliminary study to test the Art Gallery Test (AGT) as an alternative to traditional neuropsychological testing. The AGT involves three visual search subtests displayed in a virtual reality (VR) art gallery, designed to assess visual attention within an ecologically valid setting. To evaluate the relation between the AGT and standard neuropsychological assessment scales, data were collected on a normative sample of healthy adults (n = 30). The measures consisted of concurrent paper-and-pencil neuropsychological measures [Montreal Cognitive Assessment (MoCA), Frontal Assessment Battery (FAB), and Color Trails Test (CTT)] along with the outcomes from the three subtests of the AGT. The results showed significant correlations between the AGT subtests, which describe different visual search strategies, and global and specific cognitive measures. Comparative visual search was associated with attention and cognitive flexibility (CTT), whereas visual searches involving pictograms correlated with global cognitive function (MoCA).

  2. Test-Driven, Model-Based Systems Engineering

    DEFF Research Database (Denmark)

    Munck, Allan

    Hearing systems have evolved over many years from simple mechanical devices (horns) to electronic units consisting of microphones, amplifiers, analog filters, loudspeakers, batteries, etc. Digital signal processors replaced analog filters to provide better performance and new features. Central.... This thesis concerns methods for identifying, selecting and implementing tools for various aspects of model-based systems engineering. A comprehensive method was proposed that includes several novel steps such as techniques for analyzing the gap between requirements and tool capabilities. The method was verified with good results in two case studies for selection of a traceability tool (single-tool scenario) and a set of modeling tools (multi-tool scenarios). Models must be subjected to testing to allow engineers to predict functionality and performance of systems. Test-first strategies are known...

  3. Test of a PCIe based readout option for PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Reiter, Simon; Lange, Soeren; Kuehn, Wolfgang [Justus-Liebig-Universitaet Giessen (Germany); Engel, Heiko [Goethe-Universitaet Frankfurt (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    The future PANDA detector will achieve an event rate of about 20 MHz, resulting in a high data load of up to 200 GB/s. The data acquisition system will be based on a triggerless readout concept, leading to the requirement of large data bandwidths. Data reduction will be guaranteed on the first level by an array of FPGAs running a full on-line reconstruction, followed by a second level of a CPU/GPU cluster, to achieve a reduction factor of more than 1000. The C-RORC (Common Readout Receiver Card), originally developed for ALICE, provides on the one hand 12 optical links with 6.25 Gbps each, and on the other hand a PCIe interface with up to 40 Gbps. The receiver card has been installed and tested, and the firmware has been adjusted for the PANDA data format. Test results are presented.
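    The quoted rates imply the average event size and the post-reduction bandwidth; a quick back-of-envelope check:

```python
# Back-of-envelope check of the data rates quoted above.
event_rate = 20e6          # events per second (20 MHz)
raw_rate   = 200e9         # bytes per second (200 GB/s)
reduction  = 1000          # minimum reduction factor

bytes_per_event = raw_rate / event_rate    # average raw event size
output_rate = raw_rate / reduction         # bandwidth after reduction

print(f"{bytes_per_event:.0f} B/event")            # 10000 B = 10 kB per event
print(f"{output_rate / 1e6:.0f} MB/s to storage")  # 200 MB/s
```

    At 10 kB per raw event, a single C-RORC PCIe interface at 40 Gbps (5 GB/s) could carry only a fraction of the raw load, which is why the reduction must happen upstream in the FPGA and compute-cluster levels.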

  4. Development of training simulator based on critical assemblies test bench

    International Nuclear Information System (INIS)

    Narozhnyi, A.T.; Vorontsov, S.V.; Golubeva, O.A.; Dyudyaev, A.M.; Il'in, V.I.; Kuvshinov, M.I.; Panin, A.V.; Peshekhonov, D.P.

    2007-01-01

    When preparing a critical mass experiment, multiplying system (MS) parts are assembled manually. This work involves maximum professional risk to personnel. Personnel training and maintaining the skills of working experts are important factors in nuclear safety. For this purpose the authors are developing a training simulator based on a functioning critical assemblies test bench (CATB), allowing simulation of MS assembly using training mockups made of inert materials. The control program traces the current status of the MS under simulation. A change in the assembly's neutron-physical parameters is mapped onto the readings of the regular devices. The simulator's information support is provided by a computer database on the physical characteristics of typical MS components. Work in the training mode ensures complete simulation of real MS assembly on the critical test bench. It makes it possible to practice the procedures related to CATB operation in standard mode safely and effectively and to simulate possible abnormal situations. (author)

  5. Bond strength test of acrylic artificial teeth with prosthetic base

    Directory of Open Access Journals (Sweden)

    Erna Kurnikasari

    2008-07-01

    Full Text Available A denture consists of acrylic artificial teeth and an acrylic prosthesis base bonded chemically, with a bond strength of 315 kgF/cm2. Most commercial acrylic artificial teeth do not state their specifications, and none include mechanical data (bond strength). The aim of this study was to discover which acrylic artificial teeth meet ADA specification no. 15. This descriptive analytic study was performed on 5 brands of posterior acrylic artificial teeth commonly used by dentists and technicians, with 3 sample teeth taken from each brand. The acrylic artificial teeth were prepared into a rectangular shape and attached between a simulated acrylic prosthesis base and jigs. Each sample was given a tensile load using a Universal Testing Machine; the force at which the tooth fractured was recorded, and the bond strength was calculated. The average values for the five brands were as follows: brand A, 125.993 kgF/cm2; B, 188.457 kgF/cm2; C, 175.880 kgF/cm2; D, 153.373 kgF/cm2; E, 82.839 kgF/cm2. The data were tested statistically using a one-way ANOVA test and a Dunnett test (alpha = 0.05). The study concludes that all five brands of acrylic artificial teeth have a bond strength below ADA specification no. 15.
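The one-way ANOVA mentioned above can be sketched in pure Python. The per-specimen values below are hypothetical, chosen only to roughly match the reported brand means (the paper's raw data are not given), and the critical value F(4, 10) ≈ 3.48 at alpha = 0.05 is quoted from standard tables.

```python
# One-way ANOVA sketch: five brands, three specimens each.
# NOTE: per-specimen values are hypothetical; only brand means are reported.
groups = {
    "A": [121.0, 126.0, 131.0],
    "B": [183.0, 188.0, 193.0],
    "C": [171.0, 176.0, 181.0],
    "D": [148.0, 153.0, 158.0],
    "E": [78.0, 83.0, 88.0],
}

all_values = [x for g in groups.values() for x in g]
grand_mean = sum(all_values) / len(all_values)

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())
ss_within = sum((x - sum(g) / len(g)) ** 2
                for g in groups.values() for x in g)

df_between = len(groups) - 1               # 4
df_within = len(all_values) - len(groups)  # 10
F = (ss_between / df_between) / (ss_within / df_within)

F_CRIT = 3.48  # tabulated F(4, 10) critical value at alpha = 0.05
significant = F > F_CRIT
```

With group means this far apart relative to the within-group spread, the F statistic is large and the brand effect is clearly significant.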

  6. Development Testing of 1-Newton ADN-Based Rocket Engines

    Science.gov (United States)

    Anflo, K.; Gronland, T.-A.; Bergman, G.; Nedar, R.; Thormählen, P.

    2004-10-01

    With the objective of reducing operational hazards and improving specific and density impulse compared with hydrazine, Research and Development (R&D) of a new monopropellant for space applications based on Ammonium DiNitramide (ADN) was first proposed in 1997. This pioneering work has been described in previous papers [1-4]. This paper describes the ongoing effort to develop a storable liquid monopropellant blend based on ADN, together with its dedicated rocket engines. After building and testing more than 20 experimental rocket engines, the first Engineering Model (EM-1) has now accumulated more than 1 hour of firing time, and the results from test firings have validated the design. Specific impulse, combustion stability, blow-down capability and short-pulse capability are among the requirements that have been demonstrated. The LMP-103x propellant candidate has been stored for more than 1 year, and initial material compatibility screening and testing has started. From the discussion above, it is clear that cost savings as well as risk reduction are the main drivers for developing a new generation of reduced-hazard propellants. However, this alone is not enough to convince a spacecraft builder to choose a new technology: cost, risk and schedule reduction are good incentives, but a spacecraft supplier will ask for evidence that the new propulsion system meets a number of requirements in the following areas: 1. performance and life; 2. impact on spacecraft design and operation; 3. flight heritage. The essential requirements for some of these areas are outlined here and are discussed in detail in a previous paper [1]. The use of "Commercial Off-The-Shelf" (COTS) propulsion system components as far as possible is essential to minimise overall cost, risk and schedule. This leads to the conclusion that Technology Readiness Level (TRL) 5 has been reached for the thruster and propellant, and that the concept of ADN-based propulsion is feasible.

  7. Powder-based 3D printing application for geomechanical testing

    Science.gov (United States)

    Williams, M.; Yoon, H.; Choens, R. C., II; Martinez, M. J.; Dewers, T. A.; Lee, M.

    2017-12-01

    3D printing of fractured and porous analog geomaterials has the potential to enhance hydrogeological and mechanical interpretations by generating engineered samples in testable configurations with reproducible microstructures and tunable surface and mechanical properties. For geoscience applications, 3D printing technology can be co-opted to print reproducible structures derived from CT imaging of actual rocks and from theoretical algorithms. In particular, 3D printed samples allow us to overcome the sample-to-sample heterogeneity that plagues rock physics testing and to test material response independently of material variability. In this work, gypsum powder-based 3D printing was used to print cylindrical core samples and block samples with a pre-existing flaw geometry. All samples were printed in three different directions to evaluate the impact of printing direction on mechanical properties. Unconfined compression testing was performed on the cylindrical samples: in compressive strength, samples printed perpendicular to the loading direction were stronger than those printed parallel to the loading direction or at 45 degrees. Micro-CT images of the printed samples reveal uneven spreading of the binder, resulting in a soft inner core surrounded by a stronger outer shell; in particular, the layered binder structure causes strongly anisotropic properties, which was also confirmed by wave velocity measurements. For the small block samples (6.1 cm wide, 10 cm high, and 1.25 cm thick) with an inclined flaw, uniaxial tests coupled with an array of acoustic emission sensors and digital image correlation revealed that cracks developed at or near the tip of the flaw, as expected. Although acoustic events were detected, localization was not possible, mainly because of strong attenuation. Advantages and disadvantages of powder-based 3D printing for mechanical testing are discussed, and several attempts to improve the applicability of the powder-based printing technique are presented.

  8. CAMAC based Test Signal Generator using Re-configurable device

    International Nuclear Information System (INIS)

    Sharma, Atish; Raval, Tushar; Srivastava, Amit K; Reddy, D Chenna

    2010-01-01

    There are many different types of signal generators, with different purposes, applications and costs; in general, no single device is suitable for all applications, so the signal generator is selected to match the requirements. For the SST-1 Data Acquisition System, we have developed a CAMAC-based Test Signal Generator module using a re-configurable device (CPLD). The module uses a CAMAC interface but can be used for testing both CAMAC and PXI data acquisition systems in the SST-1 tokamak, as well as for other similar applications. Unlike traditional signal generators, which are embedded hardware, it is a flexible hardware unit, programmable through a Graphical User Interface (GUI) developed in the LabVIEW application development tool. The main aim of this work is to develop a signal generator for testing our data acquisition interface on a large number of channels simultaneously. The module front panel provides LEMO and D-type connectors for signal interfacing. The module can be operated either in continuous signal generation mode or in triggered mode, depending on the application, selected either by a front panel switch or through CAMAC software commands (for remote operation); module reset and trigger generation can likewise be performed either through a front panel push-button switch or through software CAMAC commands. The module can accept an external TTL-level trigger and clock through LEMO connectors, and can also generate trigger and clock signals for delivery to other devices through LEMO connectors. The module generates two types of signals: analog and digital (TTL level). The analog output (single channel) is generated by a digital-to-analog converter driven by the CPLD and supports sine, square, triangular and other waveforms that can be varied in both amplitude and frequency. The module can be used to test up to 32 channels.

  9. A self-calibrating led-based solar test platform

    DEFF Research Database (Denmark)

    Krebs, Frederik C; Sylvester-Hvid, Kristian O.; Jørgensen, Mikkel

    2011-01-01

    A compact platform for testing solar cells is presented. The light source comprises a multi-wavelength high-power LED (light emitting diode) array allowing the homogenous illumination of small laboratory solar cell devices (substrate size 50 × 25 mm) within the 390–940 nm wavelength range......, it is possible to perform all the commonly employed measurements on the solar cell at very high speed without moving the sample. In particular, the LED-based illumination system provides an alternative to light-biased incident photon-to-current efficiency measurement to be performed which we demonstrate. Both...

  10. Optimisation of test and maintenance based on probabilistic methods

    International Nuclear Information System (INIS)

    Cepin, M.

    2001-01-01

    This paper presents a method which, based on the models and results of probabilistic safety assessment, minimises nuclear power plant risk by optimising the arrangement of safety equipment outages. Because the test and maintenance activities of the safety equipment are arranged in time, the classical static fault tree models are extended with time requirements so that they can model real plant states. A house event matrix is used, which enables modelling of the equipment arrangements through discrete points of time. The result of the method is the configuration of equipment outages that yields the minimal risk, where risk is represented by system unavailability. (authors)
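The core idea, searching discrete outage arrangements for the one with minimal time-averaged unavailability, can be sketched with a toy two-train parallel system. All numbers, the 12-slot horizon, and the closed-form unavailability product are illustrative assumptions; the paper works with full PSA fault tree models switched by house events, not this simplification.

```python
from itertools import product

# Toy model: two redundant trains over 12 weekly time slots.
# Setting a train's unavailability to 1.0 in its outage week plays the role
# of a house event switching the fault tree to the "equipment out" state.
Q_BASE = (0.01, 0.02)   # hypothetical baseline train unavailabilities
WEEKS = range(12)

def mean_unavailability(outage_a, outage_b):
    total = 0.0
    for week in WEEKS:
        q_a = 1.0 if week == outage_a else Q_BASE[0]
        q_b = 1.0 if week == outage_b else Q_BASE[1]
        total += q_a * q_b   # parallel trains: system fails only if both fail
    return total / len(WEEKS)

# Exhaustive search over outage arrangements for the minimal-risk schedule.
best = min(product(WEEKS, WEEKS), key=lambda s: mean_unavailability(*s))
```

As expected, the optimum never schedules both trains out in the same week, since overlapping outages leave the system fully unavailable for that slot.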

  11. Water Pollution Detection Based on Hypothesis Testing in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xu Luo

    2017-01-01

    Full Text Available Water pollution detection is of great importance in water conservation. In this paper, water pollution detection problems for both the network and the individual node in sensor networks are discussed, covering cases in which the distribution of the monitoring noise is normal and cases in which it is non-normal. The pollution detection problems are first analysed on the basis of hypothesis testing theory; then specific detection algorithms are given. Finally, two implementation examples illustrate how the proposed detection methods are used for water pollution detection in sensor networks and demonstrate their effectiveness.
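For the normal-noise case, detecting a pollution-induced shift in the mean of sensor readings reduces to a classical one-sided test. The sketch below is a generic z-test illustration; the function name, thresholds, and readings are our assumptions, not the paper's algorithm.

```python
from statistics import NormalDist

def detect_pollution(readings, mu0, sigma, alpha=0.05):
    """One-sided z-test of H0: mean == mu0 against H1: mean > mu0,
    assuming i.i.d. Gaussian monitoring noise with known sigma."""
    n = len(readings)
    z = (sum(readings) / n - mu0) / (sigma / n ** 0.5)
    p_value = 1.0 - NormalDist().cdf(z)   # one-sided p-value
    return p_value < alpha, p_value

# Clean water: readings fluctuate around the background level mu0 = 5.0.
clean, _ = detect_pollution([5.1, 4.9, 5.0, 5.2, 4.8], mu0=5.0, sigma=0.2)
# Polluted water: readings shifted well above background.
polluted, _ = detect_pollution([5.5, 5.6, 5.4, 5.5, 5.6], mu0=5.0, sigma=0.2)
```

The clean sample yields a p-value near 0.5 (no detection), while the shifted sample yields a vanishingly small p-value and triggers a detection.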

  12. Ultrasound-based testing of tendon mechanical properties

    DEFF Research Database (Denmark)

    Seynnes, O R; Bojsen-Møller, J.; Albracht, K

    2015-01-01

    In the past 20 years, the use of ultrasound-based methods has become a standard approach to measure tendon mechanical properties in vivo. Yet the multitude of methodological approaches adopted by various research groups probably contribute to the large variability of reported values. The technique......, or signal synchronization; and 2) in physiological considerations related to the viscoelastic behavior or length measurements of tendons. Hence, the purpose of the present review is to assess and discuss the physiological and technical aspects connected to in vivo testing of tendon mechanical properties...

  13. Normal Threshold Size of Stimuli in Children Using a Game-Based Visual Field Test.

    Science.gov (United States)

    Wang, Yanfang; Ali, Zaria; Subramani, Siddharth; Biswas, Susmito; Fenerty, Cecilia; Henson, David B; Aslam, Tariq

    2017-06-01

    The aim of this study was to demonstrate and explore the ability of a novel game-based perimetry to establish normal visual field thresholds in children. One hundred and eighteen children (aged 8.0 ± 2.8 years) with no history of visual field loss or significant medical history were recruited. Each child had one eye tested using the game-based visual field test 'Caspar's Castle' at four retinal locations 12.7° from fixation (N = 118). Thresholds were established repeatedly using up/down staircase algorithms with stimuli of varying diameter (luminance 20 cd/m², duration 200 ms, background luminance 10 cd/m²). Relationships between threshold and age were determined, along with measures of intra- and intersubject variability. The game-based visual field test was able to establish threshold estimates across the full range of children tested. Threshold size decreased with increasing age, and both intrasubject and intersubject variability were inversely related to age. Normal visual field thresholds were thus established for specific locations in children using a novel game-based visual field test. These could serve as a foundation for developing a game-based perimetry screening test for children.
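An up/down staircase of the kind mentioned can be sketched as follows. The step size, reversal count, and the deterministic simulated observer are illustrative assumptions, not the parameters of the 'Caspar's Castle' test.

```python
# 1-up/1-down staircase: shrink the stimulus after a "seen" response,
# grow it after a "not seen" one, and stop after a fixed number of reversals.
def staircase_threshold(respond, start=2.0, step=0.2, n_reversals=8):
    size, direction, reversals = start, -1, []
    while len(reversals) < n_reversals:
        new_dir = -1 if respond(size) else +1
        if new_dir != direction:
            reversals.append(size)   # response flipped: record a reversal
            direction = new_dir
        size = max(0.05, size + new_dir * step)
    return sum(reversals[-4:]) / 4   # threshold: mean of the last reversals

# Hypothetical noiseless observer who sees any stimulus larger than 0.9 units.
threshold = staircase_threshold(lambda s: s > 0.9)
```

With this observer, the staircase descends to the true threshold and then oscillates around it, so the reversal average lands close to 0.9.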

  14. ETMB-RBF: discrimination of metal-binding sites in electron transporters based on RBF networks with PSSM profiles and significant amino acid pairs.

    Science.gov (United States)

    Ou, Yu-Yen; Chen, Shu-An; Wu, Sheng-Cheng

    2013-01-01

    Cellular respiration, the process by which cells obtain energy from glucose, is a very important biological process in living cells. To carry it out, cells need a pathway to store and transport electrons: the electron transport chain, whose function is to produce a trans-membrane proton electrochemical gradient as a result of oxidation-reduction reactions. In these reactions, metal ions play a very important role as electron donors and acceptors; for example, Fe ions act in complexes I and II, and Cu ions in complex IV. Identifying metal-binding sites in electron transporters is therefore an important step in helping biologists better understand the workings of the electron transport chain. We propose a method based on Position Specific Scoring Matrix (PSSM) profiles and significant amino acid pairs to identify metal-binding residues in electron transport proteins, using a non-redundant set of 55 metal-binding electron transport proteins as our dataset. The proposed method predicts metal-binding sites in electron transport proteins with average 10-fold cross-validation accuracies of 93.2% and 93.1% for metal-binding cysteine and histidine, respectively. Compared with the general metal-binding predictor of A. Passerini et al., the proposed method improves sensitivity by over 9% and specificity by 14% on the independent dataset for metal-binding cysteines. For metal-binding histidines, it improves sensitivity by almost 76% at the same specificity, and MCC is improved from 0.28 to 0.88. We have thus developed a novel approach based on PSSM profiles and significant amino acid pairs for identifying metal-binding sites in electron transport proteins, achieving a significant improvement on an independent test set of metal-binding electron transport proteins.
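The MCC figure quoted above is the standard Matthews correlation coefficient, computed from a binary confusion matrix as sketched below; the counts in the test example are made up for illustration.

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient: +1 perfect, 0 random, -1 inverted.
    Well suited to imbalanced problems such as residue-level binding-site
    prediction, where non-binding residues vastly outnumber binding ones."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0.0 else (tp * tn - fp * fn) / denom
```

Unlike raw accuracy, MCC uses all four confusion-matrix cells, which is why it separates the two predictors here even when accuracies look similar.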

  15. Final Finding of No Significant Impact: Maintenance, Repair, and Overhaul Technology Center Acquisition Tinker Air Force Base Oklahoma City, Oklahoma

    Science.gov (United States)

    2013-04-18

    result in short-term health problems. Airborne Lead. Airborne lead can be inhaled directly or ingested indirectly by consuming lead-contaminated ... the reduction of lead in gasoline and paint, and the elimination of lead from soldered cans. 3.1.1.2 Greenhouse Gases. GHGs are measured by the global ... at Tinker AFB: stationary combustion sources (e.g., boilers, water heaters, furnaces, gasoline and diesel-fuel generators, engine test cells)

  16. A PC-based expert system for nondestructive testing

    International Nuclear Information System (INIS)

    Shankar, R.; Williams, R.; Smith, C.; Selby, G.

    1991-01-01

    Rule-based decision logic that can emulate the problem-solving expertise of humans is being explored for power plant nondestructive evaluation (NDE) applications. This paper describes an effort underway at the EPRI NDE Center to assist in the interpretation of NDE data acquired by automatic systems during ultrasonic weld examination of boiling-water reactors (BWRs). A personal computer (PC)-based expert system shell was used to encode rules and assemble knowledge for discriminating intergranular stress corrosion cracking (IGSCC) from benign reflectors in the inspection of pipe-to-component welds. The rules attempt to factor in plant inspection history, ultrasonic examination data and, if available, radiographic testing data; the majority of them deal with specific temporal and spatial behavior of the ultrasonic signal during automatic scanning. The paper describes the development of the expert system.

  17. Detailed field test of yaw-based wake steering

    DEFF Research Database (Denmark)

    Fleming, P.; Churchfield, M.; Scholbrock, A.

    2016-01-01

    This paper describes a detailed field-test campaign to investigate yaw-based wake steering. In yaw-based wake steering, an upstream turbine intentionally misaligns its yaw with respect to the inflow to deflect its wake away from a downstream turbine, with the goal of increasing total power production. In the first phase, a nacelle-mounted scanning lidar was used to verify wake deflection of a misaligned turbine and calibrate wake deflection models. In the second phase, these models were used within a yaw controller to achieve a desired wake deflection. This paper details the experimental design and setup. All data collected as part of this field experiment will be archived and made available to the public via the U.S. Department of Energy's Atmosphere to Electrons Data Archive and Portal.

  18. Human papillomavirus testing for triage of women with cytologic evidence of low-grade squamous intraepithelial lesions: baseline data from a randomized trial. The Atypical Squamous Cells of Undetermined Significance/Low-Grade Squamous Intraepithelial Lesions Triage Study (ALTS) Group.

    Science.gov (United States)

    2000-03-01

    Human papillomavirus (HPV) infections appear to be central to the development of cervical cancer. This study addresses the question of whether testing women who have low-grade squamous intraepithelial lesions (LSILs) of the uterine cervix for HPV DNA is useful as a triage strategy. Four clinical centers in different areas of the United States participated in a randomized clinical trial of the use of HPV DNA testing in women with cytologic evidence of atypical squamous cells of undetermined significance (ASCUS) or LSIL. The study sample in this article consists only of women who had LSIL at enrollment. Within 6 months of an LSIL diagnosis (based on a Pap smear read by a community-based cytopathologist), women who were 18 years of age or older completed a standardized questionnaire and underwent a pelvic examination that included collection of cervical specimens for HPV DNA testing by the Hybrid Capture II (HCII®) assay. Among the 642 women referred with LSIL who had analyzable test results, the mean chronologic age and age at first coitus were similar among the four clinical centers, despite the centers' ethnic and geographic diversity. Overall, HPV DNA was detected in cervical samples from 532 (82.9%) of the 642 women (95% confidence interval = 79.7%-85.7%). This high frequency of HPV positivity was confirmed by polymerase chain reaction (PCR) assays in a subset of 210 paired specimens tested by HCII and PCR (81.4% were positive by both methods). Because a very high percentage of women with an LSIL diagnosis from Pap smears are positive for HPV DNA by HCII testing, there is limited potential for this assay to direct decisions about the clinical management of women with LSIL. The role of HPV testing in the management of women with ASCUS is still under study.
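The reported 95% confidence interval (79.7%-85.7%) for 532/642 HPV-positive results can be reproduced closely with a standard binomial-proportion interval. The Wilson score interval below is our choice of method, since the abstract does not state which formula was used.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 gives the usual 95% level)."""
    p = successes / n
    adjust = z * z / n
    center = (p + adjust / 2) / (1 + adjust)
    half = z * sqrt(p * (1 - p) / n + adjust / (4 * n)) / (1 + adjust)
    return center - half, center + half

low, high = wilson_ci(532, 642)   # roughly (0.798, 0.856)
```

The result, about 79.8% to 85.6%, matches the paper's 79.7%-85.7% interval to within rounding and method differences.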

  19. Comparison of Glycomacropeptide with Phenylalanine Free-Synthetic Amino Acids in Test Meals to PKU Patients: No Significant Differences in Biomarkers, Including Plasma Phe Levels

    Directory of Open Access Journals (Sweden)

    Kirsten K. Ahring

    2018-01-01

    Full Text Available Introduction. Management of phenylketonuria (PKU) is achieved through a low-phenylalanine (Phe) diet, supplemented with low-protein food and a mixture of free-synthetic (FS) amino acids (AA). Casein glycomacropeptide (CGMP) is a natural peptide released in whey during cheese-making and does not contain Phe. The Lacprodan® CGMP-20 used in this study contained a small amount of Phe due to the minor presence of other proteins/peptides. Objective. The purpose of this study was to compare absorption of CGMP-20 to FSAA, with the aim of evaluating short-term effects on plasma AAs as well as biomarkers related to food intake. Methods. This study included 8 patients, who made four visits and tested four drink mixtures (DM1-4) consisting of CGMP, FSAA, or a combination. Plasma blood samples were collected at baseline and at 15, 30, 60, 120, and 240 minutes (min) after the meal. AA profiles and ghrelin were determined 6 times, while surrogate biomarkers were determined at baseline and 240 min. A visual analogue scale (VAS) was used for evaluation of taste and satiety. Results. The surrogate biomarker concentrations and the VAS scores for satiety and taste did not differ significantly between the four DMs, and there were only a few significant results for AA profiles (not Phe). Conclusion. CGMP and FSAA had the same overall nonsignificant short-term effect on biomarkers, including Phe. This combination of FSAA and CGMP is a suitable supplement for PKU patients.

  20. Tracing the Base: A Topographic Test for Collusive Basing-Point Pricing

    NARCIS (Netherlands)

    Bos, Iwan; Schinkel, Maarten Pieter

    2009-01-01

    Basing-point pricing is known to have been abused by geographically dispersed firms in order to eliminate competition on transportation costs. This paper develops a topographic test for collusive basing-point pricing. The method uses transaction data (prices, quantities) and customer project site

  1. Tracing the base: A topographic test for collusive basing-point pricing

    NARCIS (Netherlands)

    Bos, I.; Schinkel, M.P.

    2008-01-01

    Basing-point pricing is known to have been abused by geographically dispersed firms in order to eliminate competition on transportation costs. This paper develops a topographic test for collusive basing-point pricing. The method uses transaction data (prices, quantities) and customer project site

  2. Team-Based Learning, Faculty Research, and Grant Writing Bring Significant Learning Experiences to an Undergraduate Biochemistry Laboratory Course

    Science.gov (United States)

    Evans, Hedeel Guy; Heyl, Deborah L.; Liggit, Peggy

    2016-01-01

    This biochemistry laboratory course was designed to provide significant learning experiences to expose students to different ways of succeeding as scientists in academia and foster development and improvement of their potential and competency as the next generation of investigators. To meet these goals, the laboratory course employs three…

  3. Hydrothermal Fe cycling and deep ocean organic carbon scavenging: Model-based evidence for significant POC supply to seafloor sediments

    Digital Repository Service at National Institute of Oceanography (India)

    German, C.R.; Legendre, L.L.; Sander, S.G.;; Niquil, N.; Luther-III, G.W.; LokaBharathi, P.A.; Han, X.; LeBris, N.

    by more than ~10% over background values, what the model does indicate is that scavenging of carbon in association with Fe-rich hydrothermal plume particles should play a significant role in the delivery of particulate organic carbon to deep ocean...

  4. Bridge Testing With Ground-Based Interferometric Radar: Experimental Results

    International Nuclear Information System (INIS)

    Chiara, P.; Morelli, A.

    2010-01-01

    Research into innovative non-contact techniques for the vibration measurement of civil engineering structures (including damage detection and structural health monitoring) is continually directed toward optimising measures and methods. Ground-Based Radar Interferometry (GBRI) is the most recent technique available for static and dynamic monitoring of structures and ground movements. Dynamic testing of bridges and buildings in operational conditions is currently performed: (a) to assess the conformity of the structure to the project design at the end of construction; (b) to identify the modal parameters (i.e. natural frequencies, mode shapes and damping ratios) and to check the variation of any modal parameter over the years; (c) to evaluate the amplitude of the structural response to special load conditions (e.g. strong winds, earthquakes, heavy railway or roadway loads). If such tests are carried out using a non-contact technique (like GBRI), the classical issues of contact sensors (like accelerometers) are easily avoided. This paper presents and discusses the results of various tests carried out on full-scale bridges using a Stepped Frequency-Continuous Wave radar system.

  5. Bridge Testing With Ground-Based Interferometric Radar: Experimental Results

    Science.gov (United States)

    Chiara, P.; Morelli, A.

    2010-05-01

    Research into innovative non-contact techniques for the vibration measurement of civil engineering structures (including damage detection and structural health monitoring) is continually directed toward optimising measures and methods. Ground-Based Radar Interferometry (GBRI) is the most recent technique available for static and dynamic monitoring of structures and ground movements. Dynamic testing of bridges and buildings in operational conditions is currently performed: (a) to assess the conformity of the structure to the project design at the end of construction; (b) to identify the modal parameters (i.e. natural frequencies, mode shapes and damping ratios) and to check the variation of any modal parameter over the years; (c) to evaluate the amplitude of the structural response to special load conditions (e.g. strong winds, earthquakes, heavy railway or roadway loads). If such tests are carried out using a non-contact technique (like GBRI), the classical issues of contact sensors (like accelerometers) are easily avoided. This paper presents and discusses the results of various tests carried out on full-scale bridges using a Stepped Frequency-Continuous Wave radar system.

  6. Effects of an Employer-Based Intervention on Employment Outcomes for Youth with Significant Support Needs Due to Autism

    Science.gov (United States)

    Wehman, Paul; Schall, Carol M.; McDonough, Jennifer; Graham, Carolyn; Brooke, Valerie; Riehle, J. Erin; Brooke, Alissa; Ham, Whitney; Lau, Stephanie; Allen, Jaclyn; Avellone, Lauren

    2017-01-01

    The purpose of this study was to develop and investigate an employer-based 9-month intervention for high school youth with autism spectrum disorder to learn job skills and acquire employment. The intervention modified a program titled Project SEARCH and incorporated the use of applied behavior analysis to develop Project SEARCH plus Autism…

  7. Finding of No Significant Impact Construction of a New Water Pipeline, Travis Air Force Base, Solano County, California

    Science.gov (United States)

    2003-09-03

    Source: California State Automobile Association, Bay and Mountain Section, 1999. North Gate Road Pipeline Project, Travis Air Force Base, California.

  8. Assessment of robustness and significance of climate change signals for an ensemble of distribution-based scaled climate projections

    DEFF Research Database (Denmark)

    Seaby, Lauren Paige; Refsgaard, J.C.; Sonnenborg, T.O.

    2013-01-01

    An ensemble of 11 regional climate model (RCM) projections are analysed for Denmark from a hydrological modelling inputs perspective. Two bias correction approaches are applied: a relatively simple monthly delta change (DC) method and a more complex daily distribution-based scaling (DBS) method...

  9. Blind Test of Physics-Based Prediction of Protein Structures

    Science.gov (United States)

    Shell, M. Scott; Ozkan, S. Banu; Voelz, Vincent; Wu, Guohong Albert; Dill, Ken A.

    2009-01-01

    We report here a multiprotein blind test of a computer method to predict native protein structures based solely on an all-atom physics-based force field. We use the AMBER 96 potential function with an implicit (GB/SA) model of solvation, combined with replica-exchange molecular-dynamics simulations. Coarse conformational sampling is performed using the zipping and assembly method (ZAM), an approach that is designed to mimic the putative physical routes of protein folding. ZAM was applied to the folding of six proteins, from 76 to 112 monomers in length, in CASP7, a community-wide blind test of protein structure prediction. Because these predictions have about the same level of accuracy as typical bioinformatics methods, and do not utilize information from databases of known native structures, this work opens up the possibility of predicting the structures of membrane proteins, synthetic peptides, or other foldable polymers, for which there is little prior knowledge of native structures. This approach may also be useful for predicting physical protein folding routes, non-native conformations, and other physical properties from amino acid sequences. PMID:19186130

  10. Ageing tests study on wood-based sandwich panels

    Directory of Open Access Journals (Sweden)

    Mateo, Raquel

    2011-12-01

    Full Text Available Composite lightweight wood panels are increasingly used in construction in Spain, and their growing use should be accompanied by the necessary guarantees based on studies of their properties. As prescribed, and in addition to other tests, the present work examines the durability of these panels when exposed to climatic conditions, a characteristic of great importance for wood products, according to Guide ETAG 016, the current standard defining the ageing tests to be used. However, given the use class of this material, there are indications that the testing outlined in this Guide is inappropriate for assessing the ageing of wood-based sandwich panels. Alternative tests are therefore proposed that better recreate the real conditions under which these products are used. Covering the samples with a waterproof sheet permeable to the outward movement of water vapour, as is in fact used during installation, provided the best procedure for testing these panels.

    Wooden sandwich panels are a product of growing application in building construction in our country. This increasing use of the material must be accompanied by the necessary guarantees, backed by a prior study of its performance. As prescribed, and among other properties, durability against climatic conditions is evaluated, a key property for wood-derived products, in accordance with the current standard defined for that purpose, Guide ETAG 016. However, owing to the use class of the material, it has been found that this standard, as conceived, cannot adequately assess its ageing. In this work, following exhaustive analyses, alternative tests to the established one are proposed that recreate the real conditions of use and are better suited to wood products. It is concluded that the incorporation of a sheet impermeable to water but permeable to water vapour toward the exterior, like those used during installation, provides the best

  11. Drop-in capsule testing of plutonium-based fuels in the Advanced Test Reactor

    International Nuclear Information System (INIS)

    Chang, G.S.; Ryskamp, J.M.; Terry, W.K.; Ambrosek, R.G.; Palmer, A.J.; Roesener, R.A.

    1996-09-01

    The most attractive way to dispose of weapons-grade plutonium (WGPu) is to use it as fuel in existing light water reactors (LWRs) in the form of mixed oxide (MOX) fuel - i.e., plutonia (PuO2) mixed with urania (UO2). Before U.S. reactors could be used for this purpose, their operating licenses would have to be amended. Numerous technical issues must be resolved before LWR operating licenses can be amended to allow the use of MOX fuel. The proposed weapons-grade MOX fuel is unusual, even relative to ongoing foreign experience with reactor-grade MOX power reactor fuel. Some demonstration of the in-reactor thermal, mechanical, and fission gas release behavior of the prototype fuel will most likely be required in a limited number of test reactor irradiations. The application to license operation with MOX fuel must be amply supported by experimental data. The Advanced Test Reactor (ATR) at the Idaho National Engineering Laboratory (INEL) is capable of playing a key role in the irradiation, development, and licensing of these new fuel types. The ATR is a 250-MW (thermal) LWR designed to study the effects of intense radiation on reactor fuels and materials. For 25 years, the primary role of the ATR has been to serve in experimental investigations for the development of advanced nuclear fuels. Both large- and small-volume test positions in the ATR could be used for MOX fuel irradiation. The ATR would be a nearly ideal test bed for developing data needed to support applications to license LWRs for operation with MOX fuel made from weapons-grade plutonium. Furthermore, these data can be obtained more quickly by using ATR instead of testing in a commercial LWR. Our previous work in this area has demonstrated that it is technically feasible to perform MOX fuel testing in the ATR. This report documents our analyses of sealed drop-in capsules containing plutonium-based test specimens placed in various ATR positions.

  12. Web-based thyroid imaging reporting and data system: Malignancy risk of atypia of undetermined significance or follicular lesion of undetermined significance thyroid nodules calculated by a combination of ultrasonography features and biopsy results.

    Science.gov (United States)

    Choi, Young Jun; Baek, Jung Hwan; Shin, Jung Hee; Shim, Woo Hyun; Kim, Seon-Ok; Lee, Won-Hong; Song, Dong Eun; Kim, Tae Yong; Chung, Ki-Wook; Lee, Jeong Hyun

    2018-05-13

    The purpose of this study was to construct a web-based predictive model using ultrasound characteristics and subcategorized biopsy results for thyroid nodules of atypia of undetermined significance/follicular lesion of undetermined significance (AUS/FLUS) to stratify the risk of malignancy. Data included 672 thyroid nodules from 656 patients from a historical cohort. We analyzed ultrasound images of thyroid nodules and biopsy results according to nuclear atypia and architectural atypia. Multivariate logistic regression analysis was performed to predict whether nodules were diagnosed as malignant or benign. Spiculated margin, marked hypoechogenicity, and calcifications on ultrasound, as well as biopsy results and cytologic atypia, showed significant differences between groups. A 13-point risk scoring system was developed, and the areas under the receiver operating characteristic (ROC) curve (AUC) for the development and validation sets were 0.837 and 0.830, respectively (http://www.gap.kr/thyroidnodule_b3.php). We devised a web-based predictive model combining ultrasound characteristics and biopsy results for AUS/FLUS thyroid nodules to stratify the risk of malignancy. © 2018 Wiley Periodicals, Inc.
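The 13-point score above is evaluated by the area under the ROC curve. As an illustration of how such an AUC can be computed, a minimal Python sketch using the rank-sum (Mann-Whitney) identity: the AUC is the probability that a randomly chosen malignant nodule scores higher than a benign one. The example scores are made up; only the idea of a discrete risk score is taken from the record.

```python
# AUC via the rank-sum identity: count score comparisons won by the
# positive (malignant) class, with ties counted as half a win.
# Example scores below are hypothetical, not data from the study.
def auc(scores_pos, scores_neg):
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

example_auc = auc([9, 11, 7], [3, 7, 5])  # 8.5 / 9
```

This identity is why the AUC of a risk score can be read as a ranking statistic rather than requiring an explicit ROC curve.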

  13. Clinicopathological significance of psychotic experiences in non-psychotic young people: evidence from four population-based studies.

    LENUS (Irish Health Repository)

    Kelleher, Ian

    2012-07-01

    Epidemiological research has shown that hallucinations and delusions, the classic symptoms of psychosis, are far more prevalent in the population than actual psychotic disorder. These symptoms are especially prevalent in childhood and adolescence. Longitudinal research has demonstrated that psychotic symptoms in adolescence increase the risk of psychotic disorder in adulthood. There has been a lack of research, however, on the immediate clinicopathological significance of psychotic symptoms in adolescence.

  14. Quantum spacetime operationally based on propagators for extended test particles

    International Nuclear Information System (INIS)

    Prugovecki, E.

    1981-01-01

    By taking into account the quantum aspects intrinsic to any operational definition of spatio-temporal relationships, a stochastic concept of spacetime emerges. Its classical counterpart is realized as a stochastic mean around which quantum fluctuations become negligible only in the limit of macroscopic spacetime intervals. The test-particle propagators used in the proposed quantum concept of spacetime are derived by solving the localizability problem for relativistic particles in a consistent manner. This is achieved in the framework of the stochastic phase space formulation of quantum mechanics, which in the nonrelativistic context is shown to result from systems of imprimitivity related to phase-space conserved probability currents derivable from bona fide covariant probability densities in stochastic phase spaces of one-particle systems, which can be interpreted as due to measurements performed with extended rather than pointlike test particles. The associated particle propagators can therefore be consistently related to coordinate probability densities measurable by the exchange of photons between test particles from a chosen standard. Quantum spacetime is defined as the family of propagators corresponding to all conceivable coherent flows of test particles. This family of free-fall propagators has to satisfy certain self-consistency conditions as well as consistent laws of motion which implicitly determine the stochastic geometro-dynamics of quantum spacetime. Field theory on quantum spacetime retains many of the formal features of conventional quantum field theory. On a fundamental epistemological level, stochastic geometries emerge as essential prerequisites in the construction of spacetime models that are operationally based and yet consistent with the relativity principle as well as with the uncertainty principle.

  15. Profile of Students' Creative Thinking Skills on Quantitative Project-Based Protein Testing using Local Materials

    Directory of Open Access Journals (Sweden)

    D. K. Sari

    2017-04-01

    Full Text Available The purpose of this study is to obtain a profile of students’ creative thinking skills on quantitative project-based protein testing using local materials. The research uses a quasi-experimental pre-test/post-test control group design with 40 students enrolled in a Biochemistry lab. The research instruments are pre- and post-tests of creative thinking skills in essay form and a student questionnaire. The analysis was performed with SPSS 22.0 to check normality, with the Mann-Whitney U test for nonparametric statistics, the N-Gain score, and the percentage of student responses to the practicum. The results show that the mean pretest score in the experimental group is 8.25, while in the control group it is 6.90. After attending a project-based practicum with local materials, the experimental group obtained a mean posttest score of 37.55, against 11.18 in the control class. The improvement in students’ creative thinking skills can be seen from the average N-Gain of 0.32 (medium category) in the experimental class versus 0.05 (low category) in the control class. The experimental and control classes differ significantly in fluency, flexibility, novelty, and detail. It can be concluded that quantitative project-based protein testing using local materials can improve students’ creative thinking skills; 71% of the students feel that it makes them more creative in doing a practicum in the laboratory.
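The N-Gain scores reported above follow Hake's normalized gain, the fraction of the possible improvement actually achieved. A minimal sketch, assuming a maximum score of 100 (an assumption, but one that reproduces the reported gains of 0.32 and 0.05 from the quoted group means):

```python
# Hake's normalized gain; max_score=100 is an assumption not stated
# in the record, chosen because it reproduces the reported N-Gains.
def n_gain(pre, post, max_score=100.0):
    return (post - pre) / (max_score - pre)

def category(g):
    # Conventional Hake bands: high >= 0.7, 0.3 <= medium < 0.7, low < 0.3
    return "high" if g >= 0.7 else "medium" if g >= 0.3 else "low"

g_exp = n_gain(8.25, 37.55)   # ~0.32, "medium"
g_ctrl = n_gain(6.90, 11.18)  # ~0.05, "low"
```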

  16. Electrochemical evidences and consequences of significant differences in ions diffusion rate in polyacrylate-based ion-selective membranes.

    Science.gov (United States)

    Woźnica, Emilia; Mieczkowski, Józef; Michalska, Agata

    2011-11-21

    The origin and effect of surface accumulation of primary ions within the ion-selective poly(n-butyl acrylate)-based membrane, obtained by thermal polymerization, is discussed. Using a new method based on the relation between the shape of a potentiometric plot and the preconditioning time, the diffusion of copper ions in the membrane was found to be slow (diffusion coefficient estimated to be close to 10^-11 cm^2 s^-1), especially when compared to the diffusion of the ion-exchanger counter ions, sodium cations (diffusion coefficient above 10^-9 cm^2 s^-1). The higher mobility of sodium ions relative to the copper-ionophore complex results in a pronounced ion-exchanger role, leading to undesirable sensitivity to sodium or potassium ions.
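To illustrate the practical consequence of the two diffusion coefficients quoted above, a short sketch computing the characteristic diffusion length sqrt(D*t). The 24 h conditioning time is an illustrative assumption, not a figure from the record; the two orders of magnitude in D translate into one order of magnitude in how far each ion penetrates the membrane.

```python
import math

# Characteristic 1D diffusion length sqrt(D * t), converted to micrometres.
def diffusion_length_um(D_cm2_per_s, t_s):
    return math.sqrt(D_cm2_per_s * t_s) * 1e4  # cm -> um

t = 24 * 3600  # assumed 24 h conditioning time
l_cu = diffusion_length_um(1e-11, t)  # copper-ionophore complex: ~9 um
l_na = diffusion_length_um(1e-9, t)   # sodium counter ions: ~93 um
```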

  17. Sustainable Corporate Social Media Marketing Based on Message Structural Features: Firm Size Plays a Significant Role as a Moderator

    OpenAIRE

    Moon Young Kang; Byungho Park

    2018-01-01

    Social media has been receiving attention as a cost-effective tool to build corporate brand image and to enrich customer relationships. This phenomenon calls for more attention to developing a model that measures the impact of structural features, used in corporate social media messages. Based on communication science, this study proposes a model to measure the impact of three essential message structural features (interactivity, formality, and immediacy) in corporate social media on customer...

  18. Multimeric recombinant M2e protein-based ELISA: a significant improvement in differentiating avian influenza infected chickens from vaccinated ones.

    Directory of Open Access Journals (Sweden)

    Farshid Hadifar

    Full Text Available Killed avian influenza virus (AIV) vaccines have been used to control H5N1 infections in countries where the virus is endemic. Distinguishing vaccinated from naturally infected birds (DIVA) in such situations, however, has become a major challenge. Recently, we introduced the recombinant ectodomain of the M2 protein (M2e) of the H5N1 subtype as a novel tool for an ELISA-based DIVA test. Despite being antigenic in natural infection, the monomer form of the M2e used in ELISA had limited antigenicity and consequently poor diagnostic capability. To address this shortcoming, we evaluated the use of four tandem copies of M2e (tM2e) for increased efficiency of M2e antibody detection. The tM2e gene of the H5N1 strain from Indonesia (A/Indonesia/CDC540/2006) was cloned into a pMAL-p4x expression vector and expressed in E. coli as recombinant tM2e-MBP or M2e-MBP proteins. Both the M2e and tM2e antigens reacted with sera obtained from chickens following live H5N1 infection but not with sera from vaccinated birds. A significantly stronger M2e antibody reaction was observed with the tM2e than with the M2e antigen. Western blotting also supported the superiority of tM2e over M2e in the detection of specific M2e antibodies against live H5N1 infection. Results from this study demonstrate that the M2e tetramer is a better antigen than single M2e and could be more suitable for an ELISA-based DIVA test.

  19. Autonomous Cryogenic Load Operations: Knowledge-Based Autonomous Test Engineer

    Science.gov (United States)

    Schrading, J. Nicolas

    2013-01-01

    The Knowledge-Based Autonomous Test Engineer (KATE) program has a long history at KSC. Now a part of the Autonomous Cryogenic Load Operations (ACLO) mission, this software system has been sporadically developed over the past 20 years. Originally designed to provide health and status monitoring for a simple water-based fluid system, it was proven to be a capable autonomous test engineer for determining sources of failure in the system. As part of a new goal to provide this same anomaly-detection capability for a complicated cryogenic fluid system, software engineers, physicists, interns and KATE experts are working to upgrade the software capabilities and graphical user interface. Much progress was made during this effort to improve KATE. A display of the entire cryogenic system's graph, with nodes for components and edges for their connections, was added to the KATE software. A searching functionality was added to the new graph display, so that users could easily center their screen on specific components. The GUI was also modified so that it displayed information relevant to the new project goals. In addition, work began on adding new pneumatic and electronic subsystems into the KATE knowledge base, so that it could provide health and status monitoring for those systems. Finally, many fixes for bugs, memory leaks, and memory errors were implemented and the system was moved into a state in which it could be presented to stakeholders. Overall, the KATE system was improved and necessary additional features were added so that a presentation of the program and its functionality in the next few months would be a success.

  20. [The significance of the results of crash-tests with the use of the models of the pedestrians' lower extremities for the prevention of the traffic road accidents].

    Science.gov (United States)

    Smirenin, S A; Fetisov, V A; Grigoryan, V G; Gusarov, A A; Kucheryavets, Yu O

    The disabling injuries inflicted during road traffic accidents (RTA) create a serious challenge for the public health services and are at the same time a major socio-economic problem in the majority of the countries throughout the world. The injuries to the lower extremities of the pedestrians make up the largest fraction of the total number of the non-lethal RTA injuries. Most of them are responsible for the considerable deterioration of the quality of life for the participants in the accidents during the subsequent period. The objective of the present study was to summarize the currently available results of experimental testing of the biomechanical models of the pedestrians' lower extremities in the framework of the program for the prevention of the road traffic accidents as proposed by the World Health Organization (WHO, 2004). The European Enhanced Safety Vehicle Committee (EEVC) has developed a series of crash-tests with the use of the models of the pedestrians' lower extremities simulating the vehicle bumper-pedestrian impact. The models are intended for the assessment of the risk of the tibia fractures and the injuries to the knee joint ligaments. The experts of EEVC proposed the biomechanical criteria for the acceleration of the knee and talocrural parts of the lower limbs as well as for the shear displacement of the knee and knee-bending angle. The engineering solution of this problem is based on numerous innovation proposals being implemented in the machine-building industry with the purpose of reducing the stiffness of structural elements of the bumper and other front components of a modern vehicle designed to protect the pedestrians from severe injuries that can be inflicted in the road traffic accidents. 
The activities of the public health authorities (in the first place, bureaus of forensic medical expertise and analogous facilities) have a direct bearing on the solution of the problem of control of road traffic injuries because they are possessed of

  1. Screening for cognitive impairment in older individuals. Validation study of a computer-based test.

    Science.gov (United States)

    Green, R C; Green, J; Harrison, J M; Kutner, M H

    1994-08-01

    This study examined the validity of a computer-based cognitive test that was recently designed to screen the elderly for cognitive impairment. Criterion-related validity was examined by comparing test scores of impaired patients and normal control subjects. Construct-related validity was computed through correlations between computer-based subtests and related conventional neuropsychological subtests. University center for memory disorders. Fifty-two patients with mild cognitive impairment by strict clinical criteria and 50 unimpaired, age- and education-matched control subjects. Control subjects were rigorously screened by neurological, neuropsychological, imaging, and electrophysiological criteria to identify and exclude individuals with occult abnormalities. Using a cut-off total score of 126, this computer-based instrument had a sensitivity of 0.83 and a specificity of 0.96. Using a prevalence estimate of 10%, the predictive values, positive and negative, were 0.70 and 0.96, respectively. Computer-based subtests correlated significantly with conventional neuropsychological tests measuring similar cognitive domains. Thirteen (17.8%) of 73 volunteers with normal medical histories were excluded from the control group because of unsuspected abnormalities on standard neuropsychological tests, electroencephalograms, or magnetic resonance imaging scans. Computer-based testing is a valid screening methodology for the detection of mild cognitive impairment in the elderly, although this particular test has important limitations. Broader applications of computer-based testing will require extensive population-based validation. Future studies should recognize that normal control subjects without a history of disease, who are typically used in validation studies, may have a high incidence of unsuspected abnormalities on neurodiagnostic studies.
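The predictive values quoted above follow from Bayes' theorem applied to the reported sensitivity (0.83), specificity (0.96), and the 10% prevalence estimate; a minimal sketch, which reproduces the reported positive predictive value of 0.70:

```python
# Predictive values from Bayes' theorem, using the record's reported
# sensitivity, specificity, and assumed 10% prevalence.
def predictive_values(sens, spec, prev):
    tp, fp = sens * prev, (1 - spec) * (1 - prev)
    tn, fn = spec * (1 - prev), (1 - sens) * prev
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(0.83, 0.96, 0.10)  # ppv ~ 0.70
```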

  2. A practical approach for implementing risk-based inservice testing of pumps at nuclear power plants

    International Nuclear Information System (INIS)

    Hartley, R.S.; Maret, D.; Seniuk, P.; Smith, L.

    1996-01-01

    The American Society of Mechanical Engineers (ASME) Center for Research and Technology Development's (CRTD) Research Task Force on Risk-Based Inservice Testing has developed guidelines for risk-based inservice testing (IST) of pumps and valves. These guidelines are intended to help the ASME Operation and Maintenance (OM) Committee to enhance plant safety while focussing appropriate testing resources on critical components. This paper describes a practical approach for implementing those guidelines for pumps at nuclear power plants. The approach, as described in this paper, relies on input, direction, and assistance from several entities such as the ASME Code Committees, United States Nuclear Regulatory Commission (NRC), and the National Laboratories, as well as industry groups and personnel with applicable expertise. Key parts of the risk-based IST process that are addressed here include: identification of important failure modes, identification of significant failure causes, assessing the effectiveness of testing and maintenance activities, development of alternative testing and maintenance strategies, and assessing the effectiveness of alternative testing strategies with present ASME Code requirements. Finally, the paper suggests a method of implementing this process into the ASME OM Code for pump testing

  3. A practical approach for implementing risk-based inservice testing of pumps at nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, R.S. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Maret, D.; Seniuk, P.; Smith, L.

    1996-12-01

    The American Society of Mechanical Engineers (ASME) Center for Research and Technology Development's (CRTD) Research Task Force on Risk-Based Inservice Testing has developed guidelines for risk-based inservice testing (IST) of pumps and valves. These guidelines are intended to help the ASME Operation and Maintenance (OM) Committee to enhance plant safety while focussing appropriate testing resources on critical components. This paper describes a practical approach for implementing those guidelines for pumps at nuclear power plants. The approach, as described in this paper, relies on input, direction, and assistance from several entities such as the ASME Code Committees, United States Nuclear Regulatory Commission (NRC), and the National Laboratories, as well as industry groups and personnel with applicable expertise. Key parts of the risk-based IST process that are addressed here include: identification of important failure modes, identification of significant failure causes, assessing the effectiveness of testing and maintenance activities, development of alternative testing and maintenance strategies, and assessing the effectiveness of alternative testing strategies with present ASME Code requirements. Finally, the paper suggests a method of implementing this process into the ASME OM Code for pump testing.

  4. Significant Factors Related to Failed Pediatric Dental General Anesthesia Appointments at a Hospital-based Residency Program.

    Science.gov (United States)

    Emhardt, John R; Yepes, Juan F; Vinson, LaQuia A; Jones, James E; Emhardt, John D; Kozlowski, Diana C; Eckert, George J; Maupome, Gerardo

    2017-05-15

    The purposes of this study were to: (1) evaluate the relationship between appointment failure and the factors of age, gender, race, insurance type, day of week, scheduled time of surgery, distance traveled, and weather; (2) investigate reasons for failure; and (3) explore the relationships between the factors and reasons for failure. Electronic medical records were accessed to obtain data for patients scheduled for dental care under general anesthesia from May 2012 to May 2015. Factors were analyzed for relation to appointment failure. Data from 3,513 appointments for 2,874 children were analyzed. Bivariate associations showed statistically significant (P<.05) relationships between appointment failure and several of the factors studied. Patients scheduled for dental care under general anesthesia face specific barriers to care.

  5. The Development of an Internet-Based Treatment for Problem Gamblers and Concerned Significant Others: A Pilot Randomized Controlled Trial.

    Science.gov (United States)

    Nilsson, Anders; Magnusson, Kristoffer; Carlbring, Per; Andersson, Gerhard; Gumpert, Clara Hellner

    2018-06-01

    Problem gambling creates significant harm for the gambler and for concerned significant others (CSOs). While several studies have investigated the effects of individual cognitive behavioral therapy (CBT) for problem gambling, less is known about the effects of involving CSOs in treatment. Behavioral couples therapy (BCT) has shown promising results when working with substance use disorders by involving both the user and a CSO. This pilot study investigated BCT for problem gambling, as well as the feasibility of performing a larger scale randomized controlled trial. 36 participants, 18 gamblers and 18 CSOs, were randomized to either BCT or individual CBT for the gambler. Both interventions were Internet-delivered self-help interventions with therapist support. Both groups of gamblers improved on all outcome measures, but there were no differences between the groups. The CSOs in the BCT group lowered their scores on anxiety and depression more than the CSOs of those randomized to the individual CBT group did. The implications of the results and the feasibility of the trial are discussed.

  6. An Improved Test Selection Optimization Model Based on Fault Ambiguity Group Isolation and Chaotic Discrete PSO

    Directory of Open Access Journals (Sweden)

    Xiaofeng Lv

    2018-01-01

    Full Text Available Sensor data-based test selection optimization is the basis for designing test schemes, which ensure that the system is tested under the constraints of conventional indexes such as fault detection rate (FDR) and fault isolation rate (FIR). From the perspective of equipment maintenance support, ambiguity in fault isolation has a significant effect on the result of test selection. In this paper, an improved test selection optimization model is proposed that takes the ambiguity degree of fault isolation into account. In the new model, a fault-test dependency matrix is adopted to model the correlation between system faults and the available tests. The objective function of the proposed model minimizes the test cost under the FDR and FIR constraints. An improved chaotic discrete particle swarm optimization (PSO) algorithm is adopted to solve the model. The new test selection optimization model is more consistent with real, complicated engineering systems. The experimental results verify the effectiveness of the proposed method.
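The model above selects tests over a fault-test dependency matrix subject to FDR/FIR constraints. The sketch below illustrates that formulation with a simple greedy cost-per-coverage baseline on made-up data; it is not the chaotic discrete PSO the paper actually uses, and the matrix and costs are hypothetical.

```python
# Greedy baseline for test selection on a fault-test dependency matrix D,
# where D[i][j] = 1 means test j can detect fault i. Illustrative only:
# the record's method is a chaotic discrete PSO, not this greedy rule.
def greedy_select(D, costs, fdr_target):
    n_faults, n_tests = len(D), len(D[0])
    selected, detected = [], set()
    while len(detected) / n_faults < fdr_target:
        best, best_score = None, 0.0
        for j in range(n_tests):
            if j in selected:
                continue
            new = {i for i in range(n_faults) if D[i][j] and i not in detected}
            score = len(new) / costs[j]  # newly detected faults per unit cost
            if score > best_score:
                best, best_score = j, score
        if best is None:  # FDR target unreachable with remaining tests
            break
        selected.append(best)
        detected |= {i for i in range(n_faults) if D[i][best]}
    return selected, len(detected) / n_faults

# Hypothetical system: 4 faults, 3 tests, made-up test costs.
D = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
costs = [2.0, 1.0, 1.0]
selected, fdr = greedy_select(D, costs, fdr_target=1.0)
```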

  7. Sustainable Corporate Social Media Marketing Based on Message Structural Features: Firm Size Plays a Significant Role as a Moderator

    Directory of Open Access Journals (Sweden)

    Moon Young Kang

    2018-04-01

    Full Text Available Social media has been receiving attention as a cost-effective tool to build corporate brand image and to enrich customer relationships. This phenomenon calls for more attention to developing a model that measures the impact of structural features, used in corporate social media messages. Based on communication science, this study proposes a model to measure the impact of three essential message structural features (interactivity, formality, and immediacy in corporate social media on customers’ purchase intentions, mediated by brand attitude and corporate trust. Especially, social media platforms are believed to provide a good marketing platform for small and medium enterprises (SMEs by providing access to huge audiences at a very low cost. The findings from this study based on a structural equation model suggest that brand attitude and corporate trust have larger impacts on purchase intention for SMEs than large firms. This implies that SMEs with little to no presence in the market should pay more attention to building corporate trust and brand attitude for their sustainable growth.

  8. Simulation-Based Testing of Pager Interruptions During Laparoscopic Cholecystectomy.

    Science.gov (United States)

    Sujka, Joseph A; Safcsak, Karen; Bhullar, Indermeet S; Havron, William S

    2018-01-30

    To determine if pager interruptions affect operative time, safety, or complications, and the management of pager issues, during a simulated laparoscopic cholecystectomy. Twelve surgery resident volunteers were tested on a Simbionix Lap Mentor II simulator. Each resident performed 6 randomized simulated laparoscopic cholecystectomies; 3 with pager interruptions (INT) and 3 without (NO-INT). The pager interruptions were sent in the form of standardized patient vignettes and timed to distract the resident during dissection of the critical view of safety and clipping of the cystic duct. The residents were graded on a pass/fail scale for eliciting an appropriate patient history and managing the pager issue. Data were extracted from the simulator for the following endpoints: operative time, safety metrics, and incidence of operative complications. The Mann-Whitney U test and contingency table analysis were used to compare the 2 groups (INT vs. NO-INT). Level I trauma center; simulation laboratory. Twelve general surgery residents. There was no significant difference between the 2 groups in any of the operative endpoints as measured by the simulator. However, in the INT group, the surgery residents both adequately addressed the issue and provided effective patient management in response to the pager interruption only 25% of the time. Pager interruptions did not affect operative time, safety, or complications during the simulated procedure. However, there were significant failures in the appropriate evaluation and management of pager issues. Diverting patient care issues to fellow residents who are not operating, in order to improve the quality and safety of patient care outside the operating room, requires further study. Copyright © 2018. Published by Elsevier Inc.

  9. On the equivalence of the Clauser–Horne and Eberhard inequality based tests

    International Nuclear Information System (INIS)

    Khrennikov, Andrei; Ramelow, Sven; Ursin, Rupert; Wittmann, Bernhard; Kofler, Johannes; Basieva, Irina

    2014-01-01

    Recently, the results of the first experimental test for entangled photons closing the detection loophole (also referred to as the fair-sampling loophole) were published (Vienna, 2013). From the theoretical viewpoint, the main distinguishing feature of this long-aspired-to experiment was that the Eberhard inequality was used. Almost simultaneously another experiment closing this loophole was performed (Urbana-Champaign, 2013), based on the Clauser–Horne inequality (for probabilities). The aim of this note is to analyze the mathematical and experimental equivalence of tests based on the Eberhard inequality and various forms of the Clauser–Horne inequality. The structure of the mathematical equivalence is nontrivial. In particular, it is necessary to distinguish between algebraic and statistical equivalence. Although the tests based on these inequalities are algebraically equivalent, they need not be equivalent statistically; i.e., theoretically the level of statistical significance can drop in the transition from one test to another (at least for finite samples). Nevertheless, the data collected in the Vienna test imply a statistically significant violation not only of the Eberhard inequality but also of the Clauser–Horne inequality (in the ratio-rate form): in both cases a violation >60σ. (paper)
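For reference, the probability form of the Clauser–Horne inequality discussed in this record is the standard CH74 statement (a textbook form, not quoted from the paper itself):

```latex
% Clauser-Horne (CH74) inequality in its probability form, for settings
% A_1, A_2 on one side and B_1, B_2 on the other:
-1 \;\le\; P(A_1 B_1) + P(A_1 B_2) + P(A_2 B_1)
          - P(A_2 B_2) - P(A_1) - P(B_1) \;\le\; 0
```

The Eberhard inequality is instead formulated directly in terms of detected and undetected photon counts, which is why it needs no fair-sampling assumption; the record's point is that the two forms are algebraically, but not necessarily statistically, equivalent.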

  10. Mechanical Testing of Carbon Based Woven Thermal Protection Materials

    Science.gov (United States)

    Pham, John; Agrawal, Parul; Arnold, James O.; Peterson, Keith; Venkatapathy, Ethiraj

    2013-01-01

    Three-dimensional woven thermal protection system (TPS) materials are one of the enabling technologies for mechanically deployable hypersonic decelerator systems. These materials have been shown capable of serving a dual purpose, as TPS and as structural load-bearing members, during entry and descent operations. In order to ensure successful structural performance, it is important to characterize the mechanical properties of these materials prior to and after exposure to entry-like heating conditions. This research focuses on the changes in load-bearing capacity of woven TPS materials after being subjected to arcjet simulations of entry heating. Preliminary testing of arcjet-tested materials [1] has shown mechanical degradation; however, their residual strength is significantly greater than the requirements for a mission to Venus [2]. A systematic investigation at the macro- and microstructural scales is reported here to explore the potential causes of this degradation. The effect of heating on the sizing (an epoxy resin coating used to reduce friction and wear during fiber handling) is discussed as one of the possible causes of the decrease in mechanical properties. This investigation also provides valuable guidelines for margin policies for future mechanically deployable entry systems.

  11. Cationic lipid-based nanoparticles mediate functional delivery of acetate to tumor cells in vivo leading to significant anticancer effects

    Directory of Open Access Journals (Sweden)

    Brody LP

    2017-09-01

    Full Text Available Leigh P Brody,1,* Meliz Sahuri-Arisoylu,1,* James R Parkinson,1 Harry G Parkes,2 Po Wah So,3 Nabil Hajji,4 E Louise Thomas,1 Gary S Frost,5 Andrew D Miller,6,* Jimmy D Bell1,* 1Department of Life Sciences, Faculty of Science and Technology, University of Westminster, 2CR-UK Clinical MR Research Group, Institute of Cancer Research, Sutton, Surrey, 3Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, 4Department of Medicine, Division of Experimental Medicine, Centre for Pharmacology & Therapeutics, Toxicology Unit, Imperial College London, 5Faculty of Medicine, Nutrition and Dietetic Research Group, Division of Diabetes, Endocrinology and Metabolism, Department of Investigative Medicine, Imperial College London, Hammersmith Hospital, 6Institute of Pharmaceutical Science, King’s College London, London, UK *These authors contributed equally to this work Abstract: Metabolic reengineering using nanoparticle delivery represents an innovative therapeutic approach to normalizing the deregulation of cellular metabolism underlying many diseases, including cancer. Here, we demonstrated a unique and novel application to the treatment of malignancy using a short-chain fatty acid (SCFA)-encapsulated lipid-based delivery system – liposome-encapsulated acetate nanoparticles for cancer applications (LITA-CAN). We assessed chronic in vivo administration of our nanoparticle in three separate murine models of colorectal cancer. We demonstrated a substantial reduction in tumor growth in the xenograft model of colorectal cancer cell lines HT-29, HCT-116 p53+/+ and HCT-116 p53-/-. Nanoparticle-induced reductions in histone deacetylase gene expression indicated a potential mechanism for these anti-proliferative effects. Together, these results indicated that LITA-CAN could be used as an effective direct or adjunct therapy to treat malignant transformation in vivo. Keywords: lipid-based nanoparticles, liposomes

  12. Beam based alignment at the KEK accelerator test facility

    International Nuclear Information System (INIS)

    Ross, M.; Nelson, J.; Woodley, M.; Wolski, A.

    2002-01-01

    The KEK Accelerator Test Facility (ATF) damping ring is a prototype low emittance source for the NLC/JLC linear collider. To achieve the goal normalized vertical emittance γε_y = 20 nm-rad, magnet placement accuracy better than 30 µm must be achieved. Accurate beam-based alignment (BBA) is required. The ATF arc optics uses a FOBO cell with two horizontally focusing quadrupoles, two sextupoles and a horizontally defocusing gradient dipole, all of which must be aligned with BBA. BBA at ATF uses the quadrupole and sextupole trim windings to find the trajectory through the center of each magnet. The results can be interpreted to assess the accuracy of the mechanical alignment and the beam position monitor offsets.

  13. Versatile electrophoresis-based self-test platform.

    Science.gov (United States)

    Guijt, Rosanne M

    2015-03-01

    Lab on a Chip technology offers the possibility to extract chemical information from a complex sample in a simple, automated way without the need for a laboratory setting. In the health care sector, this chemical information could be used as a diagnostic tool, for example to inform dosing. In this issue, the research underpinning a family of electrophoresis-based point-of-care devices for self-testing of ionic analytes in various sample matrices is described [Electrophoresis 2015, 36, 712-721.]. Hardware, software, and methodological changes made to improve the overall analytical performance in terms of accuracy, precision, detection limit, and reliability are discussed. In addition to the main focus of lithium monitoring, new applications, including the use of the platform for veterinary purposes and for sodium and creatinine measurements, are included. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Digital reflection holography based systems development for MEMS testing

    Science.gov (United States)

    Singh, Vijay Raj; Liansheng, Sui; Asundi, Anand

    2010-05-01

    MEMS are tiny mechanical devices that are built onto semiconductor chips and are measured in micrometers and nanometers. Testing of MEMS devices is an important part of carrying out their functional assessment and reliability analysis. Development of systems based on digital holography (DH) for MEMS inspection and characterization is presented in this paper. Two DH reflection systems, table-top and handheld types, are developed depending on the MEMS measurement requirements, and their capabilities are presented. The methodologies for the systems are developed for 3D profile inspection and static & dynamic measurements, which are further integrated with in-house developed software that provides the measurement results in near real time. The applications of the developed systems are demonstrated for different MEMS devices for 3D profile inspection, static deformation/deflection measurements and vibration analysis. The developed systems are well suited for the testing of MEMS and Microsystems samples, with full-field, static & dynamic inspection as well as the ability to monitor the micro-fabrication process.

  15. SU-E-T-580: On the Significance of Model Based Dosimetry for Breast and Head and Neck 192Ir HDR Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Peppa, V; Pappas, E; Pantelis, E; Papagiannis, P [Medical Physics Laboratory, Medical School, University of Athens, Athens (Greece); Major, T; Polgar, C [National Institute of Oncology, Budapest (Hungary)

    2015-06-15

    Purpose: To assess the dosimetric and radiobiological differences between TG43-based and model-based dosimetry in the treatment planning of {sup 192}Ir HDR brachytherapy for breast and head and neck cancer. Methods: Two cohorts of 57 Accelerated Partial Breast Irradiation (APBI) and 22 head and neck (H&N) patients with oral cavity carcinoma were studied. Dosimetry for the treatment plans was performed using the TG43 algorithm of the Oncentra Brachy v4.4 treatment planning system (TPS). Corresponding Monte Carlo (MC) simulations were performed using MCNP6 with input files automatically prepared by the BrachyGuide software tool from DICOM RT plan data. TG43 and MC data were compared in terms of % dose differences, Dose Volume Histograms (DVHs) and related indices of clinical interest for the Planning Target Volume (PTV) and the Organs-At-Risk (OARs). A radiobiological analysis was also performed using the Equivalent Uniform Dose (EUD), mean survival fraction (S) and Tumor Control Probability (TCP) for the PTV, and the Normal Tissue Complication Probability (NTCP) and the generalized EUD (gEUD) for the OARs. Significance testing of the observed differences was performed using the Wilcoxon paired sample test. Results: Differences between TG43 and MC DVH indices, associated with the increased corresponding local % dose differences observed, were statistically significant. The statistical significance is mainly attributed to the consistency of the differences, however, since TG43 agrees closely with MC for the majority of DVH and radiobiological parameters in both patient cohorts. Differences varied considerably among patients only for the ipsilateral lung and ribs in the APBI cohort, with a strong correlation to target location. Conclusion: While the consistency and magnitude of differences in the majority of clinically relevant DVH indices imply that no change is needed in the treatment planning practice, individualized dosimetry improves accuracy and addresses instances of inter-patient variability observed.
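
    The Wilcoxon paired-sample (signed-rank) test used for the significance testing above can be sketched in pure Python. This is an illustrative implementation using the normal approximation for the null distribution, not the authors' analysis code; the function name and toy data are ours:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank test with the normal approximation.

    Returns (W+, two-sided p). Zero differences are discarded and tied
    absolute differences receive average ranks, as in the standard procedure.
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                      # assign average ranks to tied groups
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1         # mean of rank positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return w_plus, p
```

    For small samples an exact permutation null is preferable to the normal approximation; SciPy's `scipy.stats.wilcoxon` offers both modes.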

  16. SU-E-T-580: On the Significance of Model Based Dosimetry for Breast and Head and Neck 192Ir HDR Brachytherapy

    International Nuclear Information System (INIS)

    Peppa, V; Pappas, E; Pantelis, E; Papagiannis, P; Major, T; Polgar, C

    2015-01-01

    Purpose: To assess the dosimetric and radiobiological differences between TG43-based and model-based dosimetry in the treatment planning of 192Ir HDR brachytherapy for breast and head and neck cancer. Methods: Two cohorts of 57 Accelerated Partial Breast Irradiation (APBI) and 22 head and neck (H&N) patients with oral cavity carcinoma were studied. Dosimetry for the treatment plans was performed using the TG43 algorithm of the Oncentra Brachy v4.4 treatment planning system (TPS). Corresponding Monte Carlo (MC) simulations were performed using MCNP6 with input files automatically prepared by the BrachyGuide software tool from DICOM RT plan data. TG43 and MC data were compared in terms of % dose differences, Dose Volume Histograms (DVHs) and related indices of clinical interest for the Planning Target Volume (PTV) and the Organs-At-Risk (OARs). A radiobiological analysis was also performed using the Equivalent Uniform Dose (EUD), mean survival fraction (S) and Tumor Control Probability (TCP) for the PTV, and the Normal Tissue Complication Probability (NTCP) and the generalized EUD (gEUD) for the OARs. Significance testing of the observed differences was performed using the Wilcoxon paired sample test. Results: Differences between TG43 and MC DVH indices, associated with the increased corresponding local % dose differences observed, were statistically significant. The statistical significance is mainly attributed to the consistency of the differences, however, since TG43 agrees closely with MC for the majority of DVH and radiobiological parameters in both patient cohorts. Differences varied considerably among patients only for the ipsilateral lung and ribs in the APBI cohort, with a strong correlation to target location. Conclusion: While the consistency and magnitude of differences in the majority of clinically relevant DVH indices imply that no change is needed in the treatment planning practice, individualized dosimetry improves accuracy and addresses instances of inter-patient variability observed.

  17. Food Classification Systems Based on Food Processing: Significance and Implications for Policies and Actions: A Systematic Literature Review and Assessment.

    Science.gov (United States)

    Moubarac, Jean-Claude; Parra, Diana C; Cannon, Geoffrey; Monteiro, Carlos A

    2014-06-01

    This paper is the first to make a systematic review and assessment of the literature that attempts methodically to incorporate food processing into classification of diets. The review identified 1276 papers, of which 110 were screened and 21 studied, derived from five classification systems. This paper analyses and assesses the five systems, one of which has been devised and developed by a research team that includes co-authors of this paper. The quality of the five systems is assessed and scored according to how specific, coherent, clear, comprehensive and workable they are. Their relevance to food, nutrition and health, and their use in various settings, is described. The paper shows that the significance of industrial food processing in shaping global food systems and supplies and thus dietary patterns worldwide, and its role in the pandemic of overweight and obesity, remains overlooked and underestimated. Once food processing is systematically incorporated into food classifications, they will be more useful in assessing and monitoring dietary patterns. Food classification systems that emphasize industrial food processing, and that define and distinguish relevant different types of processing, will improve understanding of how to prevent and control overweight, obesity and related chronic non-communicable diseases, and also malnutrition. They will also be a firmer basis for rational policies and effective actions designed to protect and improve public health at all levels from global to local.

  18. CT-image-based conformal brachytherapy of breast cancer. The significance of semi-3-D and 3-D treatment planning.

    Science.gov (United States)

    Polgár, C; Major, T; Somogyi, A; Takácsi-Nagy, Z; Mangel, L C; Forrai, G; Sulyok, Z; Fodor, J; Németh, G

    2000-03-01

    To compare the conventional 2-D, the simulator-guided semi-3-D and the recently developed CT-guided 3-D brachytherapy treatment planning in the interstitial radiotherapy of breast cancer. In 103 patients with T1-2, N0-1 breast cancer the tumor bed was clipped during breast conserving surgery. Fifty-two of them received boost brachytherapy after 46 to 50 Gy teletherapy and 51 patients were treated with brachytherapy alone via flexible implant tubes. Single, double and triple plane implants were used in 6, 89 and 8 cases, respectively. The dose of boost brachytherapy and sole brachytherapy prescribed to dose reference points was 3 times 4.75 Gy and 7 times 5.2 Gy, respectively. The positions of dose reference points varied according to the level (2-D, semi-3-D and 3-D) of treatment planning performed. The treatment planning was based on the 3-D reconstruction of the surgical clips, implant tubes and skin points. In all cases the implantations were planned with a semi-3-D technique aided by simulator. In 10 cases a recently developed CT-guided 3-D planning system was used. The semi-3-D and 3-D treatment plans were compared to hypothetical 2-D plans using dose-volume histograms and dose non-uniformity ratios. The values of mean central dose, mean skin dose, minimal clip dose, proportion of underdosaged clips and mean target surface dose were evaluated. The accuracy of tumor bed localization and the conformity of planning target volume and treated volume were also analyzed in each technique. With the help of conformal semi-3-D and 3-D brachytherapy planning we could define reference dose points, active source positions and dwell times individually. This technique decreased the mean skin dose by 22.2% and reduced the possibility of geographical miss. We could achieve the best conformity between the planning target volume and the treated volume with the CT-image based 3-D treatment planning, at the cost of worse dose homogeneity.
The mean treated volume was reduced by 25.1% with semi-3-D planning; however, it was increased by 16.2% with 3-D planning, compared to the 2-D planning.

  19. CT-image based conformal brachytherapy of breast cancer. The significance of semi-3-D and 3-D treatment planning

    International Nuclear Information System (INIS)

    Polgar, C.; Major, T.; Somogyi, A.; Takacsi-Nagy, Z.; Mangel, L.C.; Fodor, J.; Nemeth, G.; Forrai, G.; Sulyok, Z.

    2000-01-01

    In 103 patients with T1-2, N0-1 breast cancer the tumor bed was clipped during breast conserving surgery. Fifty-two of them received boost brachytherapy after 46 to 50 Gy teletherapy and 51 patients were treated with brachytherapy alone via flexible implant tubes. Single, double and triple plane implants were used in 6, 89 and 8 cases, respectively. The dose of boost brachytherapy and sole brachytherapy prescribed to dose reference points was 3 times 4.75 Gy and 7 times 5.2 Gy, respectively. The positions of dose reference points varied according to the level (2-D, semi-3-D and 3-D) of treatment planning performed. The treatment planning was based on the 3-D reconstruction of the surgical clips, implant tubes and skin points. In all cases the implantations were planned with a semi-3-D technique aided by simulator. In 10 cases a recently developed CT-guided 3-D planning system was used. The semi-3-D and 3-D treatment plans were compared to hypothetical 2-D plans using dose-volume histograms and dose non-uniformity ratios. The values of mean central dose, mean skin dose, minimal clip dose, proportion of underdosaged clips and mean target surface dose were evaluated. The accuracy of tumor bed localization and the conformity of planning target volume and treated volume were also analyzed in each technique. Results: With the help of conformal semi-3-D and 3-D brachytherapy planning we could define reference dose points, active source positions and dwell times individually. This technique decreased the mean skin dose by 22.2% and reduced the possibility of geographical miss. We could achieve the best conformity between the planning target volume and the treated volume with the CT-image based 3-D treatment planning, at the cost of worse dose homogeneity. The mean treated volume was reduced by 25.1% with semi-3-D planning; however, it was increased by 16.2% with 3-D planning, compared to the 2-D planning. (orig.)

  20. GIS based 3D visualization of subsurface and surface lineaments / faults and their geological significance, northern tamil nadu, India

    Science.gov (United States)

    Saravanavel, J.; Ramasamy, S. M.

    2014-11-01

    The study area falls in the southern part of the Indian Peninsular comprising hard crystalline rocks of Archaeozoic and Proterozoic Era. In the present study, the GIS based 3D visualizations of gravity, magnetic, resistivity and topographic datasets were made and therefrom the basement lineaments, shallow subsurface lineaments and surface lineaments/faults were interpreted. These lineaments were classified as category-1 i.e. exclusively surface lineaments, category-2 i.e. surface lineaments having connectivity with shallow subsurface lineaments and category-3 i.e. surface lineaments having connectivity with shallow subsurface lineaments and basement lineaments. These three classified lineaments were analyzed in conjunction with known mineral occurrences and historical seismicity of the study area in GIS environment. The study revealed that the category-3 NNE-SSW to NE-SW lineaments have greater control over the mineral occurrences and the N-S, NNE-SSW and NE-SW, faults/lineaments control the seismicities in the study area.

  1. Fast Lemons and Sour Boulders: Testing Crossmodal Correspondences Using an Internet-Based Testing Methodology

    Directory of Open Access Journals (Sweden)

    Andy T. Woods

    2013-09-01

    Full Text Available According to a popular family of hypotheses, crossmodal matches between distinct features hold because they correspond to the same polarity on several conceptual dimensions (such as active–passive, good–bad, etc.) that can be identified using the semantic differential technique. The main problem here resides in turning this hypothesis into testable empirical predictions. In the present study, we outline a series of plausible consequences of the hypothesis and test a variety of well-established and previously untested crossmodal correspondences by means of a novel internet-based testing methodology. The results highlight that the semantic hypothesis cannot easily explain differences in the prevalence of crossmodal associations built on the same semantic pattern (fast lemons, slow prunes, sour boulders, heavy red); furthermore, the semantic hypothesis only minimally predicts what happens when the semantic dimensions and polarities that are supposed to drive such crossmodal associations are made more salient (e.g., by adding emotional cues that ought to make the good/bad dimension more salient); finally, the semantic hypothesis does not explain why reliable matches are no longer observed once intramodal dimensions with congruent connotations are presented (e.g., visually presented shapes and colour do not appear to correspond).

  2. Clinical significance of cerebrospinal fluid tap test and magnetic resonance imaging/computed tomography findings of tight high convexity in patients with possible idiopathic normal pressure hydrocephalus

    International Nuclear Information System (INIS)

    Ishikawa, Masatsune; Furuse, Motomasa; Nishida, Namiko; Oowaki, Hisayuki; Matsumoto, Atsuhito; Suzuki, Takayuki

    2010-01-01

    Idiopathic normal pressure hydrocephalus (iNPH) is a treatable syndrome with a classical triad of symptoms. The Japanese iNPH guidelines indicate that the cerebrospinal fluid (CSF) tap test and tight high-convexity on magnetic resonance (MR) imaging are important for the diagnosis. The relationships between the effectiveness of CSF shunt surgery in possible iNPH patients, the tap test result, and the MR imaging/computed tomography (CT) findings of tight high-convexity were evaluated in 88 possible iNPH patients (mean age 75 years) with one or more of the classical triad of symptoms, and mild to moderate ventricular dilation. All patients underwent the tap test in the outpatient clinic, and patients and caregivers assessed the clinical changes during one week. The tap test was positive in 47 patients and negative in 41 patients. Surgery was performed in 19 patients with positive tap test, and was effective in 17 patients. Although the findings were inconsistent in some patients, the result of the tap test was found to be highly correlated with the MR imaging/CT finding of tight high-convexity (p<0.0001), confirming that both these diagnostic tests are promising predictors of shunt effectiveness. (author)

  3. Diagnostic reliability of MMPI-2 computer-based test interpretations.

    Science.gov (United States)

    Pant, Hina; McCabe, Brian J; Deskovitz, Mark A; Weed, Nathan C; Williams, John E

    2014-09-01

    Reflecting the common use of the MMPI-2 to provide diagnostic considerations, computer-based test interpretations (CBTIs) also typically offer diagnostic suggestions. However, these diagnostic suggestions can sometimes be shown to vary widely across different CBTI programs even for identical MMPI-2 profiles. The present study evaluated the diagnostic reliability of 6 commercially available CBTIs using a 20-item Q-sort task developed for this study. Four raters each sorted diagnostic classifications based on these 6 CBTI reports for 20 MMPI-2 profiles. Two questions were addressed. First, do users of CBTIs understand the diagnostic information contained within the reports similarly? Overall, diagnostic sorts of the CBTIs showed moderate inter-interpreter diagnostic reliability (mean r = .56), with sorts for the 1/2/3 profile showing the highest inter-interpreter diagnostic reliability (mean r = .67). Second, do different CBTIs programs vary with respect to diagnostic suggestions? It was found that diagnostic sorts of the CBTIs had a mean inter-CBTI diagnostic reliability of r = .56, indicating moderate but not strong agreement across CBTIs in terms of diagnostic suggestions. The strongest inter-CBTI diagnostic agreement was found for sorts of the 1/2/3 profile CBTIs (mean r = .71). Limitations and future directions are discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.
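
    The inter-interpreter reliability figures reported above are means of pairwise Pearson correlations across raters' sorts. A minimal sketch of that computation follows; the function names and toy score vectors are ours, since the record does not give the actual Q-sort scoring:

```python
from itertools import combinations
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length score vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def mean_pairwise_r(sorts):
    """Mean Pearson r over all pairs of raters' sort scores."""
    rs = [pearson_r(a, b) for a, b in combinations(sorts, 2)]
    return sum(rs) / len(rs)
```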

  4. The Significance and Impact of Innovation Networks of Academia and Business with a Special Emphasis on Work-Based Learning

    Directory of Open Access Journals (Sweden)

    Hogeforster Max A.

    2014-10-01

    countries. The introduction of work-based education plays a crucial role in narrowing this divide.

  5. Theme Enrichment Analysis: A Statistical Test for Identifying Significantly Enriched Themes in a List of Stories with an Application to the Star Trek Television Franchise

    OpenAIRE

    Onsjö, Mikael; Sheridan, Paul

    2017-01-01

    In this paper, we describe how the hypergeometric test can be used to determine whether a given theme of interest occurs in a storyset at a frequency more than would be expected by chance. By a storyset we mean simply a list of stories defined according to a common attribute (e.g. author, movement, period). The test works roughly as follows: Given a background storyset (e.g. 19th century adventure novels), and a sub-storyset of interest (e.g. Jules Verne novels), the test determines whether a...
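
    The hypergeometric test described above has a compact closed form: the enrichment p-value is the upper tail P(X ≥ k) of a hypergeometric distribution. A minimal sketch, with parameter names of our choosing:

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """Upper-tail hypergeometric p-value P(X >= k).

    N: size of the background storyset, K: background stories featuring
    the theme, n: size of the sub-storyset of interest, k: sub-storyset
    stories featuring the theme.
    """
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total
```

    The same quantity is available as `scipy.stats.hypergeom.sf(k - 1, N, K, n)`; the pure-Python form above needs only Python 3.8+ for `math.comb`.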

  6. Distinct distribution and prognostic significance of molecular subtypes of breast cancer in Chinese women: a population-based cohort study

    Directory of Open Access Journals (Sweden)

    Cai Qiuyin

    2011-07-01

    Full Text Available Abstract Background Molecular classification of breast cancer is an important prognostic factor. The distribution of molecular subtypes of breast cancer and their prognostic value has not been well documented in Asians. Methods A total of 2,791 breast cancer patients recruited for a population-based cohort study were evaluated for molecular subtypes of breast cancer by immunohistochemical assays. Data on clinicopathological characteristics were confirmed by centralized pathology review. The average follow-up of the patients was 53.4 months. Overall and disease-free survival by molecular subtypes of breast cancer were evaluated. Results The prevalence of the luminal A, luminal B, human epidermal growth factor receptor 2 (HER2), and triple-negative subtypes were 48.6%, 16.7%, 13.7%, and 12.9%, respectively. The luminal A subtype was more likely to be diagnosed in older women (P = 0.03) and had a stronger correlation with favorable clinicopathological factors (smaller tumor size, lower histologic grade, and earlier TNM stage) than the triple-negative or HER2 subtypes. Women with triple-negative breast cancer had a higher frequency of family history of breast cancer than women with other subtypes (P = 0.048). The 5-year overall/disease-free survival percentages for the luminal A, luminal B, HER2, and triple-negative subtypes were 92.9%/88.6%, 88.6%/85.1%, 83.2%/79.1%, and 80.7%/76.0%, respectively. A similar pattern was observed in multivariate analyses. Immunotherapy was associated with improved overall and disease-free survival for luminal A breast cancer, but reduced disease-free survival (HR = 2.21, 95% CI 1.09-4.48) for the HER2 subtype of breast cancer. Conclusions The triple-negative and HER2 subtypes were associated with poorer outcomes compared with the luminal A subtype among these Chinese women. The HER2 subtype was more prevalent in this Chinese population compared with Western populations, suggesting the importance of standardized HER2

  7. Change-based test selection : An empirical evaluation

    NARCIS (Netherlands)

    Soetens, Quinten; Demeyer, Serge; Zaidman, A.E.; Perez, Javier

    2015-01-01

    Regression test selection (i.e., selecting a subset of a given regression test suite) is a problem that has been studied intensely over the last decade. However, with the increasing popularity of developer tests as the driver of the test process, more fine-grained solutions that work well within the

  8. CT-image based conformal brachytherapy of breast cancer. The significance of semi-3-D and 3-D treatment planning

    Energy Technology Data Exchange (ETDEWEB)

    Polgar, C.; Major, T.; Somogyi, A.; Takacsi-Nagy, Z.; Mangel, L.C.; Fodor, J.; Nemeth, G. [Orszagos Onkologiai Intezet, Budapest (Hungary). Dept. of Radiotherapy; Forrai, G. [Haynal Imre Univ. of Health Sciences, Budapest (Hungary). Dept. of Radiology; Sulyok, Z. [Orszagos Onkologiai Intezet, Budapest (Hungary). Dept. of Surgery

    2000-03-01

    In 103 patients with T1-2, N0-1 breast cancer the tumor bed was clipped during breast conserving surgery. Fifty-two of them received boost brachytherapy after 46 to 50 Gy teletherapy and 51 patients were treated with brachytherapy alone via flexible implant tubes. Single, double and triple plane implants were used in 6, 89 and 8 cases, respectively. The dose of boost brachytherapy and sole brachytherapy prescribed to dose reference points was 3 times 4.75 Gy and 7 times 5.2 Gy, respectively. The positions of dose reference points varied according to the level (2-D, semi-3-D and 3-D) of treatment planning performed. The treatment planning was based on the 3-D reconstruction of the surgical clips, implant tubes and skin points. In all cases the implantations were planned with a semi-3-D technique aided by simulator. In 10 cases a recently developed CT-guided 3-D planning system was used. The semi-3-D and 3-D treatment plans were compared to hypothetical 2-D plans using dose-volume histograms and dose non-uniformity ratios. The values of mean central dose, mean skin dose, minimal clip dose, proportion of underdosaged clips and mean target surface dose were evaluated. The accuracy of tumor bed localization and the conformity of planning target volume and treated volume were also analyzed in each technique. Results: With the help of conformal semi-3-D and 3-D brachytherapy planning we could define reference dose points, active source positions and dwell times individually. This technique decreased the mean skin dose by 22.2% and reduced the possibility of geographical miss. We could achieve the best conformity between the planning target volume and the treated volume with the CT-image based 3-D treatment planning, at the cost of worse dose homogeneity. The mean treated volume was reduced by 25.1% with semi-3-D planning; however, it was increased by 16.2% with 3-D planning, compared to the 2-D planning. (orig.)

  9. Top-Down and Bottom-Up Approach for Model-Based Testing of Product Lines

    Directory of Open Access Journals (Sweden)

    Stephan Weißleder

    2013-03-01

    Full Text Available Systems tend to become more and more complex. This has a direct impact on system engineering processes. Two of the most important phases in these processes are requirements engineering and quality assurance. Two significant complexity drivers located in these phases are the growing number of product variants that have to be integrated into the requirements engineering and the ever growing effort for manual test design. There are modeling techniques to deal with both complexity drivers like, e.g., feature modeling and model-based test design. Their combination, however, has seldom been the focus of investigation. In this paper, we present two approaches to combine feature modeling and model-based testing as an efficient quality assurance technique for product lines. We present the corresponding difficulties and approaches to overcome them. All explanations are supported by an example of an online shop product line.
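
    A feature model with constraints of the kind described can be made concrete with a toy example: enumerating the valid product variants that model-based test design would then have to cover. The miniature online-shop feature model below is hypothetical, invented here for illustration:

```python
from itertools import product

# Hypothetical miniature feature model for an online-shop product line.
# Constraints: 'catalog' is mandatory; 'payment' requires 'cart';
# 'express' and 'basic_ui' are mutually exclusive.
FEATURES = ["catalog", "cart", "payment", "express", "basic_ui"]

def is_valid(cfg):
    """Check one feature configuration against the model's constraints."""
    return (cfg["catalog"]
            and (not cfg["payment"] or cfg["cart"])
            and not (cfg["express"] and cfg["basic_ui"]))

def enumerate_products():
    """All feature configurations satisfying the model's constraints."""
    return [cfg
            for bits in product([False, True], repeat=len(FEATURES))
            for cfg in [dict(zip(FEATURES, bits))]
            if is_valid(cfg)]
```

    Exhaustive enumeration is only feasible for small models; real product lines pair a SAT solver or combinatorial (e.g. pairwise) sampling with the model-based test generator.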

  10. A Bootstrap Neural Network Based Heterogeneous Panel Unit Root Test: Application to Exchange Rates

    OpenAIRE

    Christian de Peretti; Carole Siani; Mario Cerrato

    2010-01-01

    This paper proposes a bootstrap artificial neural network based panel unit root test in a dynamic heterogeneous panel context. An application to a panel of bilateral real exchange rate series with the US Dollar from the 20 major OECD countries is provided to investigate the Purchasing Power Parity (PPP). The combination of neural network and bootstrapping significantly changes the findings of the economic study in favour of PPP.
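
    The record combines a neural-network test statistic with bootstrapping; the sketch below illustrates only the bootstrap ingredient, using a plain AR(1) coefficient as the statistic and resampling first differences under the unit-root null. It is a simplified illustration of the resampling idea under stated assumptions, not the authors' procedure:

```python
import random

def ar1_coef(y):
    """OLS slope of y[t] on y[t-1] (no intercept)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def bootstrap_unit_root_p(y, n_boot=999, seed=0):
    """Bootstrap p-value for H0: y has a unit root.

    Under H0 the first differences are treated as exchangeable: we
    resample them with replacement, rebuild random walks, and count how
    often the rebuilt series gives an AR(1) coefficient at least as far
    below 1 as the observed one.
    """
    rng = random.Random(seed)
    stat = ar1_coef(y)
    d = [y[t] - y[t - 1] for t in range(1, len(y))]
    count = 0
    for _ in range(n_boot):
        steps = [rng.choice(d) for _ in d]   # i.i.d. resample of increments
        yb = [y[0]]
        for s in steps:
            yb.append(yb[-1] + s)            # rebuild a random walk
        if ar1_coef(yb) <= stat:
            count += 1
    return (count + 1) / (n_boot + 1)
```

    A stationary series yields a coefficient well below 1 and hence a small p-value, while a genuine random walk does not; serially dependent increments would call for a block or sieve bootstrap instead of the i.i.d. resampling used here.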

  11. Quantification of Plasmodiophora brassicae Using a DNA-Based Soil Test Facilitates Sustainable Oilseed Rape Production

    OpenAIRE

    Ann-Charlotte Wallenhammar; Albin Gunnarson; Fredrik Hansson; Anders Jonsson

    2016-01-01

    Outbreaks of clubroot disease caused by the soil-borne obligate parasite Plasmodiophora brassicae are common in oilseed rape (OSR) in Sweden. A DNA-based soil testing service that identifies fields where P. brassicae poses a significant risk of clubroot infection is now commercially available. It was applied here in field surveys to monitor the prevalence of P. brassicae DNA in field soils intended for winter OSR production and winter OSR field experiments. In 2013 in Scania, prior to plantin...

  12. Determination of Geometrical REVs Based on Volumetric Fracture Intensity and Statistical Tests

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2018-05-01

    Full Text Available This paper presents a method to estimate a representative element volume (REV) of a fractured rock mass based on the volumetric fracture intensity P32 and statistical tests. A 150 m × 80 m × 50 m 3D fracture network model was generated based on field data collected at the Maji dam site by using the rectangular window sampling method. The volumetric fracture intensity P32 of each cube was calculated by varying the cube location in the generated 3D fracture network model and varying the cube side length from 1 to 20 m, and the distribution of the P32 values was described. The size effect and spatial effect of the fractured rock mass were studied; the P32 values from the same cube sizes and different locations were significantly different, and the fluctuation in P32 values clearly decreases as the cube side length increases. In this paper, a new method that comprehensively considers the anisotropy of rock masses, simplicity of calculation and differences between different methods was proposed to estimate the geometrical REV size. The geometrical REV size of the fractured rock mass was determined based on the volumetric fracture intensity P32 and two statistical test methods, namely, the likelihood ratio test and the Wald–Wolfowitz runs test. The results of the two statistical tests were substantially different; critical cube sizes of 13 m and 12 m were estimated by the Wald–Wolfowitz runs test and the likelihood ratio test, respectively. Because the different test methods emphasize different considerations and impact factors, the larger cube size, 13 m, which both tests accept, was selected as the geometrical REV size of the fractured rock mass at the Maji dam site in China.
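
    The Wald–Wolfowitz runs test used above checks whether a two-valued sequence alternates more or less often than expected under randomness. A minimal pure-Python sketch using the standard normal approximation (an illustration, not tied to the paper's P32 data):

```python
import math

def runs_test(seq):
    """Wald–Wolfowitz runs test on a two-valued sequence.

    Returns (runs, z, two-sided p) under the null hypothesis that the
    ordering of the two values is random, via the normal approximation.
    """
    vals = sorted(set(seq))
    if len(vals) != 2:
        raise ValueError("sequence must contain exactly two distinct values")
    n1 = sum(1 for s in seq if s == vals[0])
    n2 = len(seq) - n1
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    z = (runs - mu) / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return runs, z, p
```

    To apply the test to a continuous quantity such as P32, the values are first dichotomized, conventionally about their median.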

  13. Surface Charges and Shell Crosslinks Each Play Significant Roles in Mediating Degradation, Biofouling, Cytotoxicity and Immunotoxicity for Polyphosphoester-based Nanoparticles

    Science.gov (United States)

    Elsabahy, Mahmoud; Zhang, Shiyi; Zhang, Fuwu; Deng, Zhou J.; Lim, Young H.; Wang, Hai; Parsamian, Perouza; Hammond, Paula T.; Wooley, Karen L.

    2013-11-01

    The construction of nanostructures from biodegradable precursors and shell/core crosslinking have been pursued as strategies to solve the problems of toxicity and limited stability, respectively. Polyphosphoester (PPE)-based micelles and crosslinked nanoparticles with non-ionic, anionic, cationic, and zwitterionic surface characteristics for potential packaging and delivery of therapeutic and diagnostic agents, were constructed using a quick and efficient synthetic strategy, and importantly, demonstrated remarkable differences in terms of cytotoxicity, immunotoxicity, and biofouling properties, as a function of their surface characteristics and also with dependence on crosslinking throughout the shell layers. For instance, crosslinking of zwitterionic micelles significantly reduced the immunotoxicity, as evidenced from the absence of secretions of any of the 23 measured cytokines from RAW 264.7 mouse macrophages treated with the nanoparticles. The micelles and their crosslinked analogs demonstrated lower cytotoxicity than several commercially-available vehicles, and their degradation products were not cytotoxic to cells at the range of the tested concentrations. PPE-nanoparticles are expected to have broad implications in clinical nanomedicine as alternative vehicles to those involved in several of the currently available medications.

  14. Strategies for Ground Based Testing of Manned Lunar Surface Systems

    Science.gov (United States)

    Beyer, Jeff; Peacock, Mike; Gill, Tracy

    2009-01-01

    Integrated testing (such as the Multi-Element Integrated Test (MEIT)) is critical to reducing risks and minimizing problems encountered during assembly, activation, and on-orbit operation of large, complex manned spacecraft, and it provides the best implementation of "Test Like You Fly." Planning for integrated testing needs to begin at the earliest stages of Program definition, and Program leadership needs to fully understand and buy in to what integrated testing is and why it needs to be performed. As the Program evolves and its design and schedules mature, teams should continually look for suitable opportunities to perform testing whenever enough components are together in one place at one time. The benefits to be gained are well worth the costs.

  15. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    Science.gov (United States)

    2012-10-01

    A handout with tables presenting the material requirements, test methods, responsibilities, and minimum classification levels for the mixture-based specification for flexible base, and details on the aggregate and test methods employed, along with agency and co...

  16. Comparing Postsecondary Marketing Student Performance on Computer-Based and Handwritten Essay Tests

    Science.gov (United States)

    Truell, Allen D.; Alexander, Melody W.; Davis, Rodney E.

    2004-01-01

    The purpose of this study was to determine if there were differences in postsecondary marketing student performance on essay tests based on test format (i.e., computer-based or handwritten). Specifically, the variables of performance, test completion time, and gender were explored for differences based on essay test format. Results of the study…

  17. Cloud-Based Electronic Test Procedures, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Procedures are critical to experimental tests as they describe the specific steps necessary to efficiently and safely carry out a test in a repeatable fashion. The...

  18. Test-Taking Strategies and Task-based Assessment: The Case of Iranian EFL Learners

    Directory of Open Access Journals (Sweden)

    Hossein Barati

    2012-01-01

    Full Text Available The present study examined the effect of task-based assessment on the type and frequency of test-taking strategies that three proficiency groups of Iranian adult EFL learners used when completing the First Certificate in English (FCE) reading paper. A total of 70 EFL university undergraduates (53 females and 17 males) took part in the main phase of this study. They were divided into three proficiency groups: high, intermediate, and low. A set of Chi-square analyses was used to explore the type and frequency of test-taking strategies used by the participants. The results suggested that the intermediate-group test takers used the strategies significantly differently after completing each task (sub-test) in the FCE reading paper, whereas the high- and low-proficiency test takers' use of strategies differed significantly only after completing the third task of the FCE reading paper. The findings also revealed that a pattern could be drawn of the types of strategies used by the three proficiency groups who participated in this study. Nonetheless, such a pattern shifted at times depending on the ability of the test takers and/or the task under study.
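    A Chi-square analysis of this kind can be sketched with a contingency table of strategy counts per proficiency group. The counts below are hypothetical placeholders, not the study's actual frequencies; `scipy.stats.chi2_contingency` then tests independence between group and strategy type:

    ```python
    from scipy.stats import chi2_contingency

    # Hypothetical strategy-frequency counts (rows: proficiency group,
    # columns: strategy type); the real study's counts are not reproduced here.
    observed = [
        [40, 25, 15],  # high
        [30, 35, 20],  # intermediate
        [20, 30, 35],  # low
    ]
    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
    ```

    For a 3 × 3 table the test has (3 − 1)(3 − 1) = 4 degrees of freedom; with these made-up counts the association between group and strategy type comes out significant at conventional levels.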

  19. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai; Pang, Herbert; Tong, Tiejun; Genton, Marc G.

    2015-09-16

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T^2 test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
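    A minimal one-sample sketch of a shrinkage-based diagonal Hotelling-type statistic is given below. The fixed shrinkage weight and the chi-square null approximation are simplifications assumed for illustration; the paper derives a data-driven shrinkage estimator and several refined null distributions, so this is not the authors' exact procedure:

    ```python
    import numpy as np
    from scipy import stats

    def diagonal_hotelling_one_sample(X, mu0, shrink=0.5):
        """One-sample diagonal Hotelling-type statistic with variance shrinkage.

        X: (n, p) data matrix with n samples and p features (p may exceed n).
        Feature-wise variances are shrunk toward their grand mean with a fixed
        weight `shrink` (a simplification of a data-driven shrinkage weight).
        Returns the statistic and a crude chi-square(p) approximate p-value.
        """
        X = np.asarray(X, dtype=float)
        n, p = X.shape
        xbar = X.mean(axis=0)
        s2 = X.var(axis=0, ddof=1)                          # feature-wise variances
        s2_shrunk = shrink * s2 + (1 - shrink) * s2.mean()  # shrink toward grand mean
        t2 = n * np.sum((xbar - mu0) ** 2 / s2_shrunk)      # diagonal T^2 statistic
        pval = stats.chi2.sf(t2, df=p)                      # large-n null approximation
        return t2, pval

    # "Large p, small n" example: 8 samples, 50 features, null hypothesis mu0 = 0.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(8, 50))
    t2, pval = diagonal_hotelling_one_sample(X, mu0=np.zeros(50))
    print(f"T^2 = {t2:.2f}, approx. p = {pval:.3f}")
    ```

    Because only the diagonal of the covariance matrix is estimated, the statistic stays well defined even when p exceeds n, which is exactly where the full-covariance T^2 becomes singular.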
