WorldWideScience

Sample records for based significance tests

  1. Significance of Test-based Ratings for Metropolitan Boston Schools

    Directory of Open Access Journals (Sweden)

    Craig Bolon

    2001-10-01

    Full Text Available In 1998 Massachusetts began state-sponsored, annual achievement testing of all students in three public school grades. It has created a school and district rating system for which scores on these tests are the sole factor. It proposes to use tenth-grade test scores as a sole criterion for high school graduation, beginning with the class of 2003. The state is treating scores and ratings as though they were precise educational measures of high significance. A review of tenth-grade mathematics test scores from academic high schools in metropolitan Boston showed that statistically they are not. Community income is strongly correlated with test scores and accounted for more than 80 percent of the variance in average scores for a sample of Boston-area communities. Once community income was included in models, other factors--including percentages of students in disadvantaged populations, percentages receiving special education, percentages eligible for free or reduced price lunch, percentages with limited English proficiency, school sizes, school spending levels, and property values--all failed to account for substantial additional variance. Large uncertainties in residuals of school-averaged scores, after subtracting predictions based on community income, tend to make the scores ineffective for rating performance of schools. Large uncertainties in year-to-year score changes tend to make the score changes ineffective for measuring performance trends.
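
The relationship described above can be sketched with synthetic data (the numbers are illustrative, not the study's): an ordinary least-squares fit of school-average scores on community income, with R² measuring the variance accounted for.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: community income explains most of the
# variance in school-average test scores (assumed toy numbers).
income = rng.uniform(30, 120, size=40)            # median income, $1000s
scores = 200 + 0.8 * income + rng.normal(0, 4, 40)

# Ordinary least squares fit of scores on income.
slope, intercept = np.polyfit(income, scores, 1)
predicted = intercept + slope * income
residuals = scores - predicted

# R^2: fraction of score variance accounted for by income.
r_squared = 1 - residuals.var() / scores.var()
print(round(r_squared, 3))
```

With income dominating the signal as in the study, R² lands well above 0.8, and the residuals are what would remain for rating individual schools.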

  2. Significance Test of Reliability Evaluation with Three-parameter Weibull Distribution Based on Grey Relational Analysis

    OpenAIRE

    Xintao Xia; Yantao Shang; Yinping Jin; Long Chen

    2013-01-01

    With the aid of the grey system theory, the grey relational analysis of the reliability with the three-parameter Weibull distribution is made for the Weibull parameter evaluation and its significance test. Via the theoretical value set and the experimental value set of the reliability relied on the lifetime data of a product, the model of the constrained optimization of the Weibull parameter evaluation is established based on the maximum grey relational grade. The grey significance of the reliability functi...

  3. A Non-Parametric Surrogate-based Test of Significance for T-Wave Alternans Detection

    Science.gov (United States)

    Nemati, Shamim; Abdala, Omar; Bazán, Violeta; Yim-Yeh, Susie; Malhotra, Atul; Clifford, Gari

    2010-01-01

    We present a non-parametric adaptive surrogate test that allows for the differentiation of statistically significant T-Wave Alternans (TWA) from alternating patterns that can be solely explained by the statistics of noise. The proposed test is based on estimating the distribution of noise-induced alternating patterns in a beat sequence from a set of surrogate data derived from repeated reshuffling of the original beat sequence. Thus, in assessing the significance of the observed alternating patterns in the data, no assumptions are made about the underlying noise distribution. In addition, since the distribution of noise-induced alternans magnitudes is calculated separately for each sequence of beats within the analysis window, the method is robust to data non-stationarities in both noise and TWA. The proposed surrogate method for rejecting noise was compared to the standard noise rejection methods used with the Spectral Method (SM) and the Modified Moving Average (MMA) techniques. Using a previously described realistic multi-lead model of TWA, and real physiological noise, we demonstrate that the proposed approach reduces false TWA detections while maintaining a lower missed-detection rate than all the other methods tested. A simple averaging-based TWA estimation algorithm was coupled with the surrogate significance testing and was evaluated on three public databases: the Normal Sinus Rhythm Database (NSRDB), the Chronic Heart Failure Database (CHFDB) and the Sudden Cardiac Death Database (SCDDB). Differences in TWA amplitudes between each database were evaluated at matched heart rate (HR) intervals from 40 to 120 beats per minute (BPM). Using the two-sample Kolmogorov-Smirnov test, we found that significant differences in TWA levels exist between each patient group at all decades of heart rates. The most marked difference was generally found at higher heart rates, and the new technique resulted in a larger margin of separability between patient populations than
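
The core idea of the surrogate test can be sketched in a few lines, assuming a toy alternans estimator (the difference between mean even-beat and mean odd-beat amplitudes stands in for the paper's averaging-based TWA estimator, and the beat amplitudes are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

def alternans_magnitude(beats):
    # Toy alternans estimate: mean even-beat minus mean odd-beat amplitude.
    return abs(beats[::2].mean() - beats[1::2].mean())

# Synthetic T-wave amplitudes with a true ABAB alternating pattern plus noise.
n = 128
beats = 500 + 25 * (-1) ** np.arange(n) + rng.normal(0, 10, n)

observed = alternans_magnitude(beats)

# Surrogate distribution: reshuffling destroys the alternation but
# preserves the amplitude statistics of the noise.
n_surr = 1000
surrogate = np.array([
    alternans_magnitude(rng.permutation(beats)) for _ in range(n_surr)
])

# Empirical p-value: how often noise alone matches the observed pattern.
p_value = (surrogate >= observed).mean()
print(p_value)
```

Because the null distribution is built from the data itself, no parametric assumption about the noise is needed, which is the property the abstract emphasizes.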

  4. Linearity of network proximity measures: implications for set-based queries and significance testing.

    Science.gov (United States)

    Maxwell, Sean; Chance, Mark R; Koyutürk, Mehmet

    2017-05-01

    In recent years, various network proximity measures have been proposed to facilitate the use of biomolecular interaction data in a broad range of applications. These applications include functional annotation, disease gene prioritization, comparative analysis of biological systems and prediction of new interactions. In such applications, a major task is the scoring or ranking of the nodes in the network in terms of their proximity to a given set of 'seed' nodes (e.g. a group of proteins that are identified to be associated with a disease, or are differentially expressed in a certain condition). Many different network proximity measures are utilized for this purpose, and these measures are quite diverse in terms of the benefits they offer. We propose a unifying framework for characterizing network proximity measures for set-based queries. We observe that many existing measures are linear, in that the proximity of a node to a set of nodes can be represented as an aggregation of its proximity to the individual nodes in the set. Based on this observation, we propose methods for processing of set-based proximity queries that take advantage of sparse local proximity information. In addition, we provide an analytical framework for characterizing the distribution of proximity scores based on reference models that accurately capture the characteristics of the seed set (e.g. degree distribution and biological function). The resulting framework facilitates computation of exact figures for the statistical significance of network proximity scores, enabling assessment of the accuracy of Monte Carlo simulation based estimation methods. Implementations of the methods in this paper are available at https://bioengine.case.edu/crosstalker which includes a robust visualization for results viewing. stm@case.edu or mxk331@case.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved.
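
The linearity observation can be demonstrated concretely for one common proximity measure, random walk with restart, on a toy graph (the graph and parameters are illustrative): the result of a set query equals the average of single-seed queries.

```python
import numpy as np

# Random-walk-with-restart proximity on a toy 4-node graph. The measure
# is linear: proximity to a seed set equals the average of proximities
# to each seed, which is the structure the paper exploits.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0)          # column-stochastic transition matrix
alpha = 0.85                   # walk-continuation probability
n = A.shape[0]

def rwr(restart):
    # Solve p = alpha * W @ p + (1 - alpha) * restart exactly.
    return np.linalg.solve(np.eye(n) - alpha * W, (1 - alpha) * restart)

def seed_vector(seeds):
    r = np.zeros(n)
    r[seeds] = 1.0 / len(seeds)
    return r

seeds = [0, 3]
set_query = rwr(seed_vector(seeds))
aggregated = np.mean([rwr(seed_vector([s])) for s in seeds], axis=0)

# Linearity: the set query decomposes into single-seed queries.
print(np.allclose(set_query, aggregated))
```

The decomposition is what lets precomputed (sparse, local) single-node proximities answer arbitrary set-based queries without re-running the walk.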

  5. Cross wavelet analysis: significance testing and pitfalls

    Directory of Open Access Journals (Sweden)

    D. Maraun

    2004-01-01

    Full Text Available In this paper, we present a detailed evaluation of cross wavelet analysis of bivariate time series. We develop a statistical test for zero wavelet coherency based on Monte Carlo simulations. If at least one of the two processes considered is Gaussian white noise, an approximate formula for the critical value can be utilized. In a second part, typical pitfalls of wavelet cross spectra and wavelet coherency are discussed. The wavelet cross spectrum appears to be unsuitable for testing the significance of the interrelation between two processes. Instead, one should rather apply wavelet coherency. Furthermore we investigate problems due to multiple testing. Based on these results, we show that coherency between ENSO and NAO is an artefact for most of the time from 1900 to 1995. However, during a distinct period from around 1920 to 1940, significant coherency between the two phenomena occurs.
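
The Monte Carlo strategy for a zero-coherency null can be sketched with ordinary (Welch) spectral coherence standing in for wavelet coherency, which keeps the example short; the simulation sizes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)

# Monte Carlo estimate of the null distribution of squared coherence
# between two independent white-noise series -- a simplified analogue
# of the paper's test for zero wavelet coherency.
n, nperseg, n_sim = 512, 128, 300
null_peaks = np.empty(n_sim)
for i in range(n_sim):
    x, y = rng.normal(size=n), rng.normal(size=n)
    _, cxy = coherence(x, y, nperseg=nperseg)
    null_peaks[i] = cxy.max()        # largest coherence across frequencies

# 95% Monte Carlo critical value: only coherence peaks above this
# should be called significant.
critical = np.quantile(null_peaks, 0.95)
print(round(critical, 2))
```

Taking the maximum over frequencies inside each simulation also illustrates the multiple-testing point: the critical value for the peak is much higher than the pointwise one.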

  6. Clinical experience with a single-nucleotide polymorphism-based non-invasive prenatal test for five clinically significant microdeletions.

    Science.gov (United States)

    Martin, K; Iyengar, S; Kalyan, A; Lan, C; Simon, A L; Stosic, M; Kobara, K; Ravi, H; Truong, T; Ryan, A; Demko, Z P; Benn, P

    2018-02-01

    Single-nucleotide polymorphism (SNP)-based non-invasive prenatal testing (NIPT) can currently predict a subset of submicroscopic abnormalities associated with severe clinical manifestations. We retrospectively analyzed the performance of SNP-based NIPT in 80 449 referrals for 22q11.2 deletion syndrome and 42 326 referrals for 1p36, cri-du-chat, Prader-Willi, and Angelman microdeletion syndromes over a 1-year period, and compared the original screening protocol with a revision that reflexively sequenced high-risk calls at a higher depth of read. The prevalence of these microdeletion syndromes was also estimated in the referral population. The positive predictive value of the original test was 15.7% for 22q11.2 deletion syndrome, and 5.2% for the other 4 disorders combined. With the revised protocol, these values increased to 44.2% for 22q11.2 and 31.7% for the others. The 0.33% false-positive rate (FPR) for 22q11.2 deletion syndrome decreased to 0.07% with the revised protocol. Similarly, the FPR for the other 4 disorders combined decreased from 0.56% to 0.07%. Minimal prevalences were estimated to be 1 in 1255 for 22q11.2 deletion syndrome and 1 in 1464 for 1p36, cri-du-chat, and Angelman syndromes combined. Our results show that these microdeletions are relatively common in the referral population, and that the performance of SNP-based NIPT is improved with high-depth resequencing. © 2017 The Authors. Clinical Genetics published by John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
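
The way positive predictive value depends on prevalence and false-positive rate follows directly from Bayes' rule; a small sketch using the prevalence and FPR figures from the abstract, with an assumed illustrative sensitivity:

```python
# PPV from prevalence, sensitivity and false-positive rate (Bayes' rule).
# Sensitivity 0.9 is an assumed illustrative value; the prevalence and
# the two FPRs are taken from the abstract.
def ppv(sensitivity, fpr, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = fpr * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prev_22q = 1 / 1255   # reported minimal prevalence of 22q11.2 deletion

# Original protocol: FPR 0.33%; revised high-depth protocol: FPR 0.07%.
print(round(ppv(0.9, 0.0033, prev_22q), 3))
print(round(ppv(0.9, 0.0007, prev_22q), 3))
```

Because the condition is rare, even a modest reduction in the FPR produces a large gain in PPV, which is the pattern the revised protocol shows.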

  7. The insignificance of statistical significance testing

    Science.gov (United States)

    Johnson, Douglas H.

    1999-01-01

    Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.

  8. Non-destructive testing: significant facts

    International Nuclear Information System (INIS)

    Espejo, Hector; Ruch, Marta C.

    2006-01-01

    In the last fifty years different organisations, both public and private, have been charged with introducing into the country the most relevant aspects of the modern technological discipline 'Non-Destructive Testing' (NDT) through a wide range of activities, such as training and education, research, development, technical assistance and services, personnel qualification/certification, and standardisation. A review is given of the significant facts in this process, in which the Argentine Atomic Energy Commission, CNEA, played a leading part; the accomplishments are assessed and the future of the activity is forecast. (author)

  9. Advances in Significance Testing for Cluster Detection

    Science.gov (United States)

    Coleman, Deidra Andrea

    surveillance data while controlling the Bayesian False Discovery Rate (BFDR). The procedure entails choosing an appropriate Bayesian model that captures the spatial dependency inherent in epidemiological data and considers all days of interest, selecting a test statistic based on a chosen measure that provides the magnitude of the maximal spatial cluster for each day, and identifying a cutoff value that controls the BFDR for rejecting the collective null hypothesis of no outbreak over a collection of days for a specified region. We use our procedure to analyze botulism-like syndrome data collected by the North Carolina Disease Event Tracking and Epidemiologic Collection Tool (NC DETECT).

  10. Consomic mouse strain selection based on effect size measurement, statistical significance testing and integrated behavioral z-scoring: focus on anxiety-related behavior and locomotion.

    Science.gov (United States)

    Labots, M; Laarakker, M C; Ohl, F; van Lith, H A

    2016-06-29

    Selecting chromosome substitution strains (CSSs, also called consomic strains/lines) used in the search for quantitative trait loci (QTLs) consistently requires the identification of the respective phenotypic trait of interest and is simply based on a significant difference between a consomic and host strain. However, statistical significance as represented by P values does not necessarily predicate practical importance. We therefore propose a method that pays attention to both the statistical significance and the actual size of the observed effect. The present paper extends this approach and describes in more detail the use of effect size measures (Cohen's d and partial eta squared, ηp²) together with the P value as statistical selection parameters for the chromosomal assignment of QTLs influencing anxiety-related behavior and locomotion in laboratory mice. The effect size measures were based on integrated behavioral z-scoring and were calculated in three experiments: (A) a complete consomic male mouse panel with A/J as the donor strain and C57BL/6J as the host strain. This panel, including host and donor strains, was analyzed in the modified Hole Board (mHB). The consomic line with chromosome 19 from A/J (CSS-19A) was selected since it showed increased anxiety-related behavior, but similar locomotion compared to its host. (B) Following experiment A, female CSS-19A mice were compared with their C57BL/6J counterparts; however, no significant differences were found and effect sizes were close to zero. (C) A different consomic mouse strain (CSS-19PWD), with chromosome 19 from PWD/PhJ transferred on the genetic background of C57BL/6J, was compared with its host strain. Here, in contrast with CSS-19A, CSS-19PWD males showed decreased overall anxiety compared to C57BL/6J, but similar locomotion. This new method shows an improved way to identify CSSs for QTL analysis for anxiety-related behavior using a combination of statistical significance testing and effect
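
The two effect-size measures named in the abstract are straightforward to compute for a two-group comparison; the data here are synthetic stand-ins for host versus consomic behavioral z-scores.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for host vs consomic integrated behavioral z-scores.
host = rng.normal(0.0, 1.0, 30)      # e.g. C57BL/6J
consomic = rng.normal(1.0, 1.0, 30)  # e.g. a consomic line with a true effect

def cohens_d(a, b):
    # Standardized mean difference with pooled standard deviation.
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1)
                  + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

def partial_eta_squared(a, b):
    # For a two-group design: SS_effect / (SS_effect + SS_error).
    grand = np.concatenate([a, b]).mean()
    ss_effect = (len(a) * (a.mean() - grand) ** 2
                 + len(b) * (b.mean() - grand) ** 2)
    ss_error = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    return ss_effect / (ss_effect + ss_error)

d = cohens_d(consomic, host)
eta = partial_eta_squared(consomic, host)
print(round(d, 2), round(eta, 2))
```

Reporting d and ηp² alongside the P value separates "detectable" from "large," which is the distinction the selection method relies on.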

  11. Identification of significant features by the Global Mean Rank test.

    Directory of Open Access Journals (Sweden)

    Martin Klammer

    Full Text Available With the introduction of omics-technologies such as transcriptomics and proteomics, numerous methods for the reliable identification of significantly regulated features (genes, proteins, etc.) have been developed. Experimental practice requires these tests to successfully deal with conditions such as small numbers of replicates, missing values, non-normally distributed expression levels, and non-identical distributions of features. With the MeanRank test we aimed at developing a test that performs robustly under these conditions, while favorably scaling with the number of replicates. The test proposed here is a global one-sample location test, which is based on the mean ranks across replicates, and internally estimates and controls the false discovery rate. Furthermore, missing data is accounted for without the need of imputation. In extensive simulations comparing MeanRank to other frequently used methods, we found that it performs well with small and large numbers of replicates, feature dependent variance between replicates, and variable regulation across features on simulation data and a recent two-color microarray spike-in dataset. The tests were then used to identify significant changes in the phosphoproteomes of cancer cells induced by the kinase inhibitors erlotinib and 3-MB-PP1 in two independently published mass spectrometry-based studies. MeanRank outperformed the other global rank-based methods applied in this study. Compared to the popular Significance Analysis of Microarrays and Linear Models for Microarray methods, MeanRank performed similar or better. Furthermore, MeanRank exhibits more consistent behavior regarding the degree of regulation and is robust against the choice of preprocessing methods. MeanRank does not require any imputation of missing values, is easy to understand, and yields results that are easy to interpret. The software implementing the algorithm is freely available for academic and commercial use.
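
The mean-rank idea can be sketched in simplified form (this omits the published test's FDR estimation and is only an illustration of ranking within replicates and averaging across them; the spike-in data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simplified sketch of a rank-based location test: features are ranked
# within each replicate, and a feature's evidence for up-regulation is
# its mean rank across replicates.
n_features, n_reps = 200, 4
logfc = rng.normal(0, 1, (n_features, n_reps))
logfc[:5] += 3.0                       # five truly up-regulated features

# Rank each replicate independently (rank 1 = lowest). Missing values
# could simply be left out of a replicate's ranking, no imputation needed.
ranks = logfc.argsort(axis=0).argsort(axis=0) + 1
mean_ranks = ranks.mean(axis=1)

# The truly regulated features should float to the top of the ranking.
print(mean_ranks[:5].mean() > mean_ranks[5:].mean())
```

Because only within-replicate ranks are used, the procedure is insensitive to non-normal expression distributions and replicate-specific scales.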

  12. Bayesian Test of Significance for Conditional Independence: The Multinomial Model

    Directory of Open Access Journals (Sweden)

    Pablo de Morais Andrade

    2014-03-01

    Full Text Available Conditional independence tests have received special attention lately in machine learning and computational intelligence related literature as an important indicator of the relationship among the variables used by their models. In the field of probabilistic graphical models, which includes Bayesian network models, conditional independence tests are especially important for the task of learning the probabilistic graphical model structure from data. In this paper, we propose the full Bayesian significance test for conditional independence in discrete datasets. The full Bayesian significance test is a powerful Bayesian test for precise hypotheses, offered as an alternative to frequentist significance tests (characterized by the calculation of the p-value.

  13. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

    This article raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly ...... are important or not. On the contrary, their use may be harmful. Like many other critics, we generally believe that statistical significance tests are over- and misused in the empirical sciences including scientometrics and we encourage a reform on these matters....

  14. A shift from significance test to hypothesis test through power analysis in medical research

    Directory of Open Access Journals (Sweden)

    Singh Girish

    2006-01-01

    Full Text Available Until recently, medical research literature exhibited substantial dominance of Fisher's significance-test approach to statistical inference, which concentrates on the probability of type I error, over the Neyman-Pearson hypothesis test, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant based on a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Based on the same theory, the two approaches address the same objective and reach conclusions in their own ways. The advancement in computing techniques and availability of statistical software have resulted in increasing application of power calculations in medical research and thereby reporting the result of significance tests in the light of the power of the test as well. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis test procedure.
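
Power, the quantity that distinguishes the Neyman-Pearson framing, can be estimated by simulation: the fraction of repeated experiments in which a two-sample t-test rejects at a given alpha when a true effect exists. The effect size and sample size below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulation-based power: probability that a two-sample t-test at
# alpha = 0.05 rejects when the true effect is 0.5 SD, n = 64 per group.
def simulated_power(effect, n, alpha=0.05, n_sim=2000):
    rejections = 0
    for _ in range(n_sim):
        a = rng.normal(0, 1, n)
        b = rng.normal(effect, 1, n)
        _, p = stats.ttest_ind(a, b)
        rejections += p < alpha
    return rejections / n_sim

power = simulated_power(effect=0.5, n=64)
print(round(power, 2))
```

Reporting this number alongside the P value is exactly the "significance test plus power analysis" practice the abstract describes as a bridge between the two schools.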

  15. Misconceptions, Misuses, and Misinterpretations of P Values and Significance Testing.

    Science.gov (United States)

    Gagnier, Joel J; Morgenstern, Hal

    2017-09-20

    The interpretation and reporting of p values and significance testing in biomedical research are fraught with misconceptions and inaccuracies. Publications of peer-reviewed research in orthopaedics are not immune to such problems. The American Statistical Association (ASA) recently published an official statement on the use, misuse, and misinterpretation of statistical testing and p values in applied research. The ASA statement discussed 6 principles: (1) "P-values can indicate how incompatible the data are with a specified statistical model." (2) "P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone." (3) "Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold." (4) "Proper inference requires full reporting and transparency." (5) "A p-value, or statistical significance, does not measure the size of an effect or the importance of a result." (6) "By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis." The purpose of this article was to discuss these principles. We make several recommendations for moving forward: (1) Authors should avoid statements such as "statistically significant" or "statistically nonsignificant." (2) Investigators should report the magnitude of effect of all outcomes together with the appropriate measure of precision or variation. (3) Orthopaedic residents and surgeons must be educated in biostatistics, the ASA principles, and clinical epidemiology. (4) Journal editors and reviewers need to be familiar with and enforce the ASA principles.

  16. Testing the significance of canonical axes in redundancy analysis

    NARCIS (Netherlands)

    Legendre, P.; Oksanen, J.; Braak, ter C.J.F.

    2011-01-01

    1. Tests of significance of the individual canonical axes in redundancy analysis allow researchers to determine which of the axes represent variation that can be distinguished from random. Variation along the significant axes can be mapped, used to draw biplots or interpreted through subsequent

  17. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper first demonstrates the broad usage of polynomial functions and deduces their parameters with the ordinary least squares estimate. Then a significance test method for the polynomial regression function is derived, exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram in accord with the authors' real work. (authors)
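
A minimal sketch of the procedure, with synthetic decay-heat-like data (not the paper's measurements): fit a polynomial by least squares, then test the significance of the regression with the standard F statistic from the linear-regression analogy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic decay-heat-like data: a quadratic trend plus noise.
t = np.linspace(0, 10, 40)
power_w = 5.0 - 0.8 * t + 0.04 * t**2 + rng.normal(0, 0.1, t.size)

degree = 2
coeffs = np.polyfit(t, power_w, degree)     # ordinary least squares
fitted = np.polyval(coeffs, t)

# F test for the regression: (explained SS / p) / (residual SS / (n-p-1)),
# treating the polynomial terms as p predictors of a linear model.
n, p = t.size, degree
ss_reg = ((fitted - power_w.mean()) ** 2).sum()
ss_res = ((power_w - fitted) ** 2).sum()
F = (ss_reg / p) / (ss_res / (n - p - 1))
p_value = stats.f.sf(F, p, n - p - 1)
print(p_value < 0.01)
```

The F test is valid here precisely because a degree-d polynomial in t is a linear model in the predictors t, t², ..., which is the similarity the abstract invokes.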

  18. Significance testing in ridge regression for genetic data

    Directory of Open Access Journals (Sweden)

    De Iorio Maria

    2011-09-01

    Full Text Available Abstract Background Technological developments have increased the feasibility of large scale genetic association studies. Densely typed genetic markers are obtained using SNP arrays, next-generation sequencing technologies and imputation. However, SNPs typed using these methods can be highly correlated due to linkage disequilibrium among them, and standard multiple regression techniques fail with these data sets due to their high dimensionality and correlation structure. There has been increasing interest in using penalised regression in the analysis of high dimensional data. Ridge regression is one such penalised regression technique which does not perform variable selection, instead estimating a regression coefficient for each predictor variable. It is therefore desirable to obtain an estimate of the significance of each ridge regression coefficient. Results We develop and evaluate a test of significance for ridge regression coefficients. Using simulation studies, we demonstrate that the performance of the test is comparable to that of a permutation test, with the advantage of a much-reduced computational cost. We introduce the p-value trace, a plot of the negative logarithm of the p-values of ridge regression coefficients with increasing shrinkage parameter, which enables the visualisation of the change in p-value of the regression coefficients with increasing penalisation. We apply the proposed method to a lung cancer case-control data set from EPIC, the European Prospective Investigation into Cancer and Nutrition. Conclusions The proposed test is a useful alternative to a permutation test for the estimation of the significance of ridge regression coefficients, at a much-reduced computational cost. The p-value trace is an informative graphical tool for evaluating the results of a test of significance of ridge regression coefficients as the shrinkage parameter increases, and the proposed test makes its production computationally feasible.
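
The permutation baseline that the proposed analytic test is compared against can be sketched directly; the correlated predictors below loosely mimic linkage disequilibrium, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Permutation test for a single ridge regression coefficient -- the
# computationally expensive baseline the paper's test replaces.
n, p, lam = 100, 10, 5.0
base = rng.normal(size=(n, 1))
X = 0.5 * base + 0.5 * rng.normal(size=(n, p))   # correlated columns
y = 1.5 * X[:, 0] + rng.normal(size=n)           # only predictor 0 is causal

def ridge_coefs(X, y, lam):
    # Closed-form ridge solution: (X'X + lam*I)^-1 X'y.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

observed = ridge_coefs(X, y, lam)[0]

# Null distribution: permuting y destroys any X-y association while
# preserving the correlation structure among predictors.
n_perm = 500
null = np.array([ridge_coefs(X, rng.permutation(y), lam)[0]
                 for _ in range(n_perm)])
p_value = (np.abs(null) >= abs(observed)).mean()
print(p_value)
```

Each permutation requires a full ridge fit, so for genome-scale data the cost multiplies quickly; the paper's analytic test avoids this, and the p-value trace then tracks how such p-values move as the shrinkage parameter grows.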

  20. Surface characterization based upon significant topographic features

    Energy Technology Data Exchange (ETDEWEB)

    Blanc, J; Grime, D; Blateyron, F, E-mail: fblateyron@digitalsurf.fr [Digital Surf, 16 rue Lavoisier, F-25000 Besancon (France)

    2011-08-19

    Watershed segmentation and Wolf pruning, as defined in ISO 25178-2, allow the detection of significant features on surfaces and their characterization in terms of dimension, area, volume, curvature, shape or morphology. These new tools provide a robust way to specify functional surfaces.

  2. The Use of Meta-Analytic Statistical Significance Testing

    Science.gov (United States)

    Polanin, Joshua R.; Pigott, Terri D.

    2015-01-01

    Meta-analysis multiplicity, the practice of conducting multiple tests of statistical significance within one review, remains underdeveloped in the literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…
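
The trade-off between Type I error control and power under a multiplicity correction can be seen with a toy set of p-values (the numbers are illustrative): a Bonferroni adjustment sharply reduces how many tests clear the threshold.

```python
# Effect of a multiplicity correction on a set of significance tests:
# Bonferroni tests each p-value against alpha / m instead of alpha.
p_values = [0.001, 0.012, 0.030, 0.048]   # illustrative results
alpha, m = 0.05, len(p_values)

unadjusted = [p < alpha for p in p_values]
bonferroni = [p < alpha / m for p in p_values]

print(sum(unadjusted), sum(bonferroni))   # rejections before and after
```

All four tests are "significant" unadjusted, but only two survive the correction, which is exactly the power cost the abstract flags.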

  3. Conducting tests for statistically significant differences using forest inventory data

    Science.gov (United States)

    James A. Westfall; Scott A. Pugh; John W. Coulston

    2013-01-01

    Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...

  4. Significance tests for functional data with complex dependence structure

    KAUST Repository

    Staicu, Ana-Maria

    2015-01-01

    We propose an L²-norm-based global testing procedure for the null hypothesis that multiple group mean functions are equal, for functional data with complex dependence structure. Specifically, we consider the setting of functional data with a multilevel structure of the form groups-clusters or subjects-units, where the unit-level profiles are spatially correlated within the cluster, and the cluster-level data are independent. Orthogonal series expansions are used to approximate the group mean functions and the test statistic is estimated using the basis coefficients. The asymptotic null distribution of the test statistic is developed, under mild regularity conditions. To our knowledge this is the first work that studies hypothesis testing, when data have such complex multilevel functional and spatial structure. Two small-sample alternatives, including a novel block bootstrap for functional data, are proposed, and their performance is examined in simulation studies. The paper concludes with an illustration of a motivating experiment.

  5. Evaluating clinical significance: incorporating robust statistics with normative comparison tests.

    Science.gov (United States)

    van Wieringen, Katrina; Cribbie, Robert A

    2014-05-01

    The purpose of this study was to evaluate a modified test of equivalence for conducting normative comparisons when distribution shapes are non-normal and variances are unequal. A Monte Carlo study was used to compare the empirical Type I error rates and power of the proposed Schuirmann-Yuen test of equivalence, which utilizes trimmed means, with that of the previously recommended Schuirmann and Schuirmann-Welch tests of equivalence when the assumptions of normality and variance homogeneity are satisfied, as well as when they are not satisfied. The empirical Type I error rates of the Schuirmann-Yuen were much closer to the nominal α level than those of the Schuirmann or Schuirmann-Welch tests, and the power of the Schuirmann-Yuen was substantially greater than that of the Schuirmann or Schuirmann-Welch tests when distributions were skewed or outliers were present. The Schuirmann-Yuen test is recommended for assessing clinical significance with normative comparisons. © 2013 The British Psychological Society.
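
The equivalence-test logic underlying the normative comparison can be sketched with Schuirmann's two one-sided tests (TOST) on untrimmed means; the Schuirmann-Yuen variant studied in the paper would substitute trimmed means and Yuen's standard error. The groups, sample sizes, and equivalence margin below are assumed for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# TOST for a normative comparison: the client group is "clinically
# equivalent" to the norms if its mean lies within +/- delta of the
# normative mean, i.e. if BOTH one-sided tests reject.
clients = rng.normal(0.1, 1.0, 100)   # treated group, near the norm
norms = rng.normal(0.0, 1.0, 400)     # normative sample
delta = 0.5                           # equivalence margin (assumed)

# Test 1, H0: mean difference >= +delta (shift clients down by delta).
t1, p1 = stats.ttest_ind(clients - delta, norms)
p_lower = p1 / 2 if t1 < 0 else 1 - p1 / 2

# Test 2, H0: mean difference <= -delta (shift clients up by delta).
t2, p2 = stats.ttest_ind(clients + delta, norms)
p_upper = p2 / 2 if t2 > 0 else 1 - p2 / 2

# Equivalence is declared only if both one-sided p-values are < alpha.
print(round(p_lower, 4), round(p_upper, 4))
```

Replacing the t statistics above with Yuen's trimmed-mean statistics is what gives the Schuirmann-Yuen test its robustness to skew and outliers.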

  6. Coagulation tests show significant differences in patients with breast cancer.

    Science.gov (United States)

    Tas, Faruk; Kilic, Leyla; Duranyildiz, Derya

    2014-06-01

    Activated coagulation and fibrinolytic systems in cancer patients are associated with tumor stroma formation and metastasis in different cancer types. The aim of this study is to explore the correlation of blood coagulation assays with various clinicopathologic factors in breast cancer patients. A total of 123 female breast cancer patients were enrolled into the study. All the patients were treatment naïve. Pretreatment blood coagulation tests including PT, APTT, PTA, INR, D-dimer, fibrinogen levels, and platelet counts were evaluated. Median age at diagnosis was 51 years (range 26-82). Twenty-two percent of the group consisted of metastatic breast cancer patients. The plasma levels of all coagulation tests revealed statistically significant differences between the patient and control groups except for PT. Older age (>50 years) was associated with higher D-dimer levels (p=0.003). Metastatic patients exhibited significantly higher D-dimer values when compared with early breast cancer patients (p=0.049). Advanced tumor stage (T3 and T4) was associated with higher INR (p=0.05) and lower PTA (p=0.025). In conclusion, coagulation tests show significant differences in patients with breast cancer.

  7. Optimizing significance testing of astronomical forcing in cyclostratigraphy

    Science.gov (United States)

    Kemp, David B.

    2016-12-01

    The recognition of astronomically forced (Milankovitch) climate cycles in geological archives marked a major advance in Earth science, revealing a heartbeat within the climate system of general importance and key utility. Power spectral analysis is the primary tool used to facilitate identification of astronomical cycles in stratigraphic data, but commonly employed methods for testing the statistical significance of relatively high narrow-band variance of potential astronomical origin in spectra have been criticized for inadequately balancing the respective probabilities of type I (false positive) and type II (false negative) errors. This has led to suggestions that the importance of astronomical forcing in Earth history is overstated. It can be readily demonstrated, however, that the imperfect nature of the stratigraphic record and the quasiperiodicity of astronomical cycles set an upper limit on the attainable significance of astronomical signals. Optimized significance testing is that which minimizes the combined probability of type I and type II errors. Numerical simulations of stratigraphically preserved astronomical signals suggest that optimum significance levels at which to reject a null hypothesis of no astronomical forcing are between 0.01 and 0.001 (i.e., 99-99.9% confidence level). This is more stringent than the 90-99% confidence levels commonly employed in the literature. Nevertheless, in consonance with the emergent view from other scientific disciplines, fixed-value null hypothesis significance testing of power spectra is implicitly ill suited to demonstrating astronomical forcing, and the use of spectral analysis remains a difficult and subjective endeavor in the absence of additional supporting evidence.

  8. Significance testing of orbital forcing in deep time

    Science.gov (United States)

    Kemp, David

    2016-04-01

    The recognition of orbitally forced (Milankovitch) climate cycles in geological archives marked a paradigm shift in Earth science, revealing a heartbeat within the climate system of general importance and key utility. Resolving orbital cycles in stratigraphic data is, however, problematic owing to the imperfect stratigraphic preservation of climate signals, the masking effects of non-periodic variance, and uncertainties in the expected responses of proxy records to climate change. Power spectral analysis is the primary tool used to facilitate identification of orbital cycles in stratigraphic data, but commonly employed methods for testing the significance of relatively high narrow-band variance of potential orbital origin in spectra have been criticised for inadequately balancing the respective probabilities of type I (false positive) and type II (false negative) errors. This has led to suggestions that the importance of orbital forcing in deep time is overstated [1]. It can be readily demonstrated, however, that the imperfect nature of the stratigraphic record sets an upper limit on the attainable significance of orbital signals. Optimised significance testing is that which minimises the sum of type I and type II errors [2]. Numerical simulations of stratigraphically preserved orbital signals suggest that optimum significance levels at which to reject a null hypothesis of no orbital forcing cluster between 99% and 99.9%. This is lower than recently advocated [1], but higher than the 90-99% levels most commonly employed in the literature. Nevertheless, in consonance with the emergent view from other scientific disciplines, fixed-value null hypothesis significance testing of power spectra is likely ill suited to verifying orbital forcing. In part, this is because the experiments also indicate that the combined probability of making an error in the acceptance or rejection of an orbital hypothesis may be rather high for typical stratigraphic data.
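
    The idea of minimising the sum of type I and type II errors can be made concrete with a toy detection model (an illustration, not the simulations used in the study): assume a test statistic that is standard normal under the null and shifted by a signal-to-noise parameter `mu` when an orbital signal is present, then scan candidate significance levels for the one with the smallest combined error.

```python
import numpy as np
from scipy import stats

# Toy model: choose the significance level alpha that minimises
# alpha + beta, where beta is the miss probability for a one-sided
# detection statistic that is N(0,1) under H0 and N(mu,1) under H1.
def optimal_alpha(mu, alphas=np.logspace(-4, -0.3, 400)):
    thresholds = stats.norm.isf(alphas)     # one-sided rejection thresholds
    beta = stats.norm.cdf(thresholds - mu)  # type II error under H1
    return alphas[np.argmin(alphas + beta)]
```

    Under this model the optimum threshold sits where the two error densities cross (at `mu/2`), so stronger signals justify stricter significance levels, echoing the trade-off discussed above.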

  9. Finite Sample Corrections for Parameters Estimation and Significance Testing

    Directory of Open Access Journals (Sweden)

    Boon Kin Teh

    2018-01-01

    Full Text Available An increasingly important problem in the era of Big Data is fitting data to distributions. However, many stop at visually inspecting the fits or use the coefficient of determination as a measure of the goodness of fit. In general, goodness-of-fit measures do not allow us to tell which of several distributions fits the data best. Also, the likelihood of drawing the data from a distribution can be low even when the fit is good. To overcome these limitations, Clauset et al. advocated a three-step procedure for fitting any distribution: (i) estimate the parameter(s) accurately, (ii) choose and calculate an appropriate goodness of fit, (iii) test its significance to determine how likely this goodness of fit is to appear in samples drawn from the distribution. When we perform this significance testing on exponential distributions, we often obtain low significance values despite the fits being visually good. This led to our realization that most fitting methods do not account for effects due to the finite number of elements and the finite largest element. The former produces sample-size dependence in the goodness of fit and the latter introduces a bias in the estimated parameter and the goodness of fit. We propose modifications to account for both and show that these corrections improve the significance of the fits of both real and simulated data. In addition, we used simulations and analytical approximations to verify that the convergence rate of the estimated parameters toward their true values depends on how fast the largest element converges to infinity, and we provide fast inversion formulas to obtain p-values directly from the adjusted test statistics, in place of doing more Monte Carlo simulations.
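
    The three-step recipe, specialized to the exponential distribution, might look like the following sketch (a plain semi-parametric Monte Carlo test without the finite-size corrections the paper proposes): the rate is estimated by maximum likelihood, the Kolmogorov-Smirnov distance serves as the goodness of fit, and the p-value is the fraction of same-size synthetic samples that fit their own MLE worse.

```python
import numpy as np

# (i) MLE of the exponential rate; (ii) KS distance to the fitted CDF.
def ks_exponential(data):
    lam = 1.0 / np.mean(data)                  # MLE for the rate parameter
    xs = np.sort(data)
    cdf = 1.0 - np.exp(-lam * xs)
    n = len(xs)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    d = max(np.max(np.abs(ecdf_hi - cdf)), np.max(np.abs(ecdf_lo - cdf)))
    return lam, d

# (iii) Monte Carlo significance: refit each synthetic sample, as in
# the Clauset et al. procedure.
def mc_pvalue(data, n_sims=200, seed=1):
    rng = np.random.default_rng(seed)
    lam, d_obs = ks_exponential(data)
    worse = sum(ks_exponential(rng.exponential(1.0 / lam, len(data)))[1] >= d_obs
                for _ in range(n_sims))
    return worse / n_sims
```

    A very small p-value means samples genuinely drawn from the fitted exponential almost never fit as badly as the data did, so the exponential hypothesis is rejected.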

  10. Significance of high level test data in piping design

    International Nuclear Information System (INIS)

    McLean, J.L.; Bitner, J.L.

    1991-01-01

    During the 1980s the piping technical community in the U.S. initiated a series of research activities aimed at reducing the conservatism inherent in nuclear piping design. One of these activities was directed at the application of the ASME Code rules to the design of piping subjected to dynamic loads. This paper surveys the test data obtained from three groups in the U.S. and one in the U.K., and correlates the findings as they relate to the failure modes of piping subjected to seismic loads. The failure modes experienced as the result of testing at dynamic loads significantly in excess of the anticipated loads specified for any of the ASME Code service levels are discussed. A recommendation is presented for modifying the Code piping rules to reduce the conservatism inherent in seismic design.

  11. Effect of normalization on significance testing for oligonucleotide microarrays.

    Science.gov (United States)

    Parrish, Rudolph S; Spencer, Horace J

    2004-08-01

    Normalization techniques are used to reduce variation among gene expression measurements in oligonucleotide microarrays in an effort to improve the quality of the data and the power of significance tests for detecting differential expression. Of several such proposed methods, two that have commonly been employed are median-interquartile range (median-IQR) normalization and quantile normalization. The median-IQR method applied directly to fold-changes for paired data was also considered. Two methods for calculating gene expression values include the MAS 5.0 algorithm [Affymetrix. (2002). Statistical Algorithms Description Document. Santa Clara, CA: Affymetrix, Inc. http://www.affymetrix.com/support/technical/whitepapers/sadd-whitepaper.pdf] and the RMA method [Irizarry, R. A., Bolstad, B. M., Collin, F., Cope, L. M., Hobbs, B., Speed, T. P. (2003a). Summaries of Affymetrix GeneChip probe level data. Nucleic Acids Res. 31(4,e15); Irizarry, R. A., Hobbs, B., Collin, F., Beazer-Barclay, Y. D., Antonellis, K. J., Scherf, U., Speed, T. P. (2003b). Exploration, normalization, and summaries of high density oligonucleotide array probe-level data. Biostatistics 4(2):249-264; Irizarry, R. A., Gautier, L., Cope, L. (2003c). An R package for analysis of Affymetrix oligonucleotide arrays. In: Parmigiani, R. I. G., Garrett, E. S., Ziegler, S., eds. The Analysis of Gene Expression Data: Methods and Software. Berlin: Springer, pp. 102-119]. In considering these methods applied to a prostate cancer data set derived from paired samples on normal and tumor tissue, it is shown that normalization methods may lead to substantial inflation of the number of genes identified by paired-t significance tests even after adjustment for multiple testing. This is shown to be due primarily to an unintended effect that normalization has on the experimental error variance. The impact appears to be greater in the RMA method compared to the MAS 5.0 algorithm, and for quantile normalization compared to median-IQR normalization.
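
    Quantile normalization, one of the two methods discussed, forces every array onto a common empirical distribution. A minimal sketch (ties handled naively) for an `n_genes x n_arrays` matrix:

```python
import numpy as np

# Quantile normalization: each array's sorted values are replaced by the
# mean of the sorted values across arrays, so that after the transform
# all columns share exactly the same empirical distribution.
def quantile_normalize(x):
    order = np.argsort(x, axis=0)              # per-array ranks
    ref = np.sort(x, axis=0).mean(axis=1)      # mean quantile profile
    out = np.empty_like(x, dtype=float)
    for j in range(x.shape[1]):
        out[order[:, j], j] = ref              # write reference back by rank
    return out
```

    Because every column ends up with identical marginal distributions, any between-array scale differences are removed, which is also how the method can perturb the error variance that the paired-t tests rely on.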

  12. Failure to notify reportable test results: significance in medical malpractice.

    Science.gov (United States)

    Gale, Brian D; Bissett-Siegel, Dana P; Davidson, Steven J; Juran, David C

    2011-11-01

    Diagnostic physicians generally acknowledge their responsibility to notify referring clinicians whenever examinations demonstrate urgent or unexpected findings. During the past decade, clinicians have ordered dramatically greater numbers of diagnostic examinations. One study demonstrated that between 1996 and 2003, malpractice payments related to diagnosis increased by approximately 40%. Communication failures are a prominent cause of action in medical malpractice litigation. The aims of this study were to (1) define the magnitude of malpractice costs related to communication failures in test result notification and (2) determine if these costs are increasing significantly. Linear regression analysis of National Practitioner Data Bank claims data from 1991 to 2009 suggested that claims payments increased at the national level by an average of $4.7 million annually (95% confidence interval, $2.98 million to $6.37 million). Controlled Risk Insurance Company/Risk Management Foundation claims data for 2004 to 2008 indicate that communication failures played a role, accounting for 4% of cases by volume and 7% of the total cost. Failed communication of clinical data constitutes an increasing proportion of medical malpractice payments. The increase in cases may reflect expectations of more reliable notification of medical data. Another explanation may be that the remarkable growth in diagnostic test volume has led to a corresponding increase in reportable results. If notification reliability remained unchanged, this increased volume would predict more failed notifications. There is increased risk for malpractice litigation resulting from failed diagnostic test result notification. The advent of semiautomated critical test result management systems may improve notification reliability, improve workflow and patient safety, and, when necessary, provide legal documentation. Copyright © 2011 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  13. Significant Inter-Test Reliability Across Approximate Number System Assessments

    Directory of Open Access Journals (Sweden)

    Nicholas Kurshan Dewind

    2016-03-01

    Full Text Available The approximate number system (ANS) is the hypothesized cognitive mechanism that allows adults, infants, and animals to enumerate large sets of items approximately. Researchers usually assess the ANS by having subjects compare two sets and indicate which is larger. Accuracy or Weber fraction is taken as an index of the acuity of the system. However, as Clayton et al. (2015) have highlighted, the stimulus parameters used when assessing the ANS vary widely. In particular, the numerical ratio between the pairs, and the way in which non-numerical features are varied often differ radically between studies. Recently, Clayton et al. (2015) found that accuracy measures derived from two commonly used stimulus sets are not significantly correlated. They argue that a lack of inter-test reliability threatens the validity of the ANS construct. Here we apply a recently developed modeling technique to the same data set. The model, by explicitly accounting for the effect of numerical ratio and non-numerical features, produces dependent measures that are less perturbed by stimulus protocol. Contrary to their conclusion we find a significant correlation in Weber fraction across the two stimulus sets. Nevertheless, in agreement with Clayton et al., we find that different protocols do indeed induce differences in numerical acuity and the degree of influence of non-numerical stimulus features. These findings highlight the need for a systematic investigation of how protocol idiosyncrasies affect ANS assessments.

  14. Significant Inter-Test Reliability across Approximate Number System Assessments.

    Science.gov (United States)

    DeWind, Nicholas K; Brannon, Elizabeth M

    2016-01-01

    The approximate number system (ANS) is the hypothesized cognitive mechanism that allows adults, infants, and animals to enumerate large sets of items approximately. Researchers usually assess the ANS by having subjects compare two sets and indicate which is larger. Accuracy or Weber fraction is taken as an index of the acuity of the system. However, as Clayton et al. (2015) have highlighted, the stimulus parameters used when assessing the ANS vary widely. In particular, the numerical ratio between the pairs, and the way in which non-numerical features are varied often differ radically between studies. Recently, Clayton et al. (2015) found that accuracy measures derived from two commonly used stimulus sets are not significantly correlated. They argue that a lack of inter-test reliability threatens the validity of the ANS construct. Here we apply a recently developed modeling technique to the same data set. The model, by explicitly accounting for the effect of numerical ratio and non-numerical features, produces dependent measures that are less perturbed by stimulus protocol. Contrary to their conclusion we find a significant correlation in Weber fraction across the two stimulus sets. Nevertheless, in agreement with Clayton et al. (2015) we find that different protocols do indeed induce differences in numerical acuity and the degree of influence of non-numerical stimulus features. These findings highlight the need for a systematic investigation of how protocol idiosyncrasies affect ANS assessments.
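
    The kind of ratio-dependent model alluded to here can be sketched as follows (an assumed functional form, not the authors' exact model): accuracy depends only on the log numerosity ratio scaled by a Weber fraction `w`, which is then recovered by maximum likelihood from trial-level responses.

```python
import numpy as np
from scipy import stats, optimize

# Assumed psychophysical model: P(correct) for comparing n1 vs n2 items
# is Phi(|log(n1/n2)| / (sqrt(2) * w)), with w the Weber fraction.
def p_correct(ratio, w):
    return stats.norm.cdf(np.abs(np.log(ratio)) / (np.sqrt(2.0) * w))

def fit_weber(ratios, correct):
    """Maximum-likelihood estimate of w from per-trial 0/1 accuracy."""
    def nll(w):
        p = np.clip(p_correct(ratios, w), 1e-9, 1 - 1e-9)  # guard log(0)
        return -np.sum(stats.bernoulli.logpmf(correct, p))
    return optimize.minimize_scalar(nll, bounds=(0.01, 2.0),
                                    method="bounded").x
```

    Fitting the whole psychometric function, rather than averaging raw accuracy, is what makes the recovered `w` comparable across protocols that use different ratio distributions.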

  15. Laboratory testing in monoclonal gammopathy of renal significance (MGRS).

    Science.gov (United States)

    Leung, Nelson; Barnidge, David R; Hutchison, Colin A

    2016-06-01

    Recently, monoclonal gammopathy of renal significance (MGRS) reclassified all monoclonal (M) gammopathies that are associated with the development of a kidney disease but do not meet the definition of symptomatic multiple myeloma (MM) or malignant lymphoma. The purpose was to distinguish the M gammopathy as the nephrotoxic agent, independent of the clonal mass. The diagnosis of MGRS obviously depends on the detection of the M-protein. More importantly, the success of treatment is correlated with the reduction of the M-protein. Therefore, familiarity with the M-protein tests is a must. Protein electrophoresis performed in serum or urine is inexpensive and rapid due to automation. However, poor sensitivity, especially in urine, is an issue, particularly with the low-level M gammopathy often encountered in MGRS. Immunofixation adds to the sensitivity and specificity but also to the cost. Serum free light chain (sFLC) assays have significantly increased the sensitivity of M-protein detection and are relatively inexpensive. It is important to recognize that there is more than one assay on the market and their results are not interchangeable. In addition, in certain diseases, immunofixation is more sensitive than sFLC. Finally, novel techniques with promising results are adding to the ability to identify M-proteins. Using the time-of-flight method, mass spectrometry of serum samples has been shown to dramatically increase the sensitivity of M-protein detection. In another technique, oligomeric LCs are identified on urinary exosomes, amplifying the specificity for the nephrotoxic M-protein.

  16. Stress test, what is the reality and significance of it?

    International Nuclear Information System (INIS)

    Sawada, Tetsuo

    2012-01-01

    The stress test was introduced in July 2011 by 'political judgment' to demonstrate the ability of nuclear power plants to withstand severe earthquake and tsunami. The stress test consisted of two stages; the first stage used computerized simulation to obtain the 'cliff edge' for earthquake, tsunami, their superposition, loss of all alternating current power and loss of final heat sink, and to assess the effectiveness of severe accident management after emergency safety measures. Clearing the first stage of the test was a prerequisite for restarting reactors that had been suspended for regular inspections. NISA had received such test results for 14 nuclear reactors as of January 18, 2012. After passing IAEA's evaluation of the stress test review process, NISA's endorsement of test results, NSC's confirmation of NISA's screening results and approval of local government, the Prime Minister and relevant ministers concerned would decide whether reactors could be restarted as 'political judgment'. Using a ranking list and referring to the respective experiences of the 14 reactors hit by the earthquake and tsunami of the Great East Japan earthquake might allow a more comprehensive judgment. (T. Tanaka)

  17. Testing emotional memories: does negative emotional significance influence the benefit received from testing?

    Science.gov (United States)

    Emmerdinger, Kathrin J; Kuhbandner, Christof; Berchtold, Franziska

    2017-07-31

    A large body of research shows that emotionally significant stimuli are better stored in memory. One question that has received much less attention is how emotional memories are influenced by factors that act after the initial encoding of stimuli. Intriguingly, several recent studies suggest that post-encoding factors do not differ in their effects on emotional and neutral memories. However, to date, only detrimental factors have been addressed. In the present study, we examined whether emotionally negative memories are differentially influenced by a well-known beneficial factor: the testing of memories. We employed a standard cued recall testing-effect paradigm where participants studied cue-target pairs for negative and neutral target pictures. In a subsequent post-encoding phase, one third of the cue-target pairs were tested and one third restudied; the remaining third served as control pairs. After one week, memory for all cue-target pairs was tested. Although both the testing effect and the emotional enhancement effect were replicated, no differences between negative and neutral memories in the benefits received from testing and restudying were observed. Thus, post-encoding factors appear to influence emotional memories no differently than neutral memories, even when they are beneficial.

  18. Compositional based testing with ioco

    NARCIS (Netherlands)

    van der Bijl, H.M.; Rensink, Arend; Tretmans, G.J.; Petrenko, A.; Ulrich, A.

    2004-01-01

    Compositional testing concerns the testing of systems that consist of communicating components which can also be tested in isolation. Examples are component based testing and interoperability testing. We show that, with certain restrictions, the ioco-test theory for conformance testing is suitable for compositional testing.

  19. Finding significantly connected voxels based on histograms of connection strengths

    DEFF Research Database (Denmark)

    Kasenburg, Niklas; Pedersen, Morten Vester; Darkner, Sune

    2016-01-01

    -distribution and significance is determined using the false discovery rate (FDR). Segmentations are based on significantly connected voxels and their FDR. In this work we focus on the thalamus, and the target regions were chosen by dividing the cortex into a prefrontal/temporal zone, motor zone, somatosensory zone and a parieto-occipital zone.
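
    Selecting significantly connected voxels by FDR presumably follows a step-up rule; the classic Benjamini-Hochberg procedure (assumed here, since the abstract does not name the exact variant) is:

```python
import numpy as np

# Benjamini-Hochberg step-up rule: reject all hypotheses up to the
# largest rank i with p_(i) <= (i/m) * q, controlling the FDR at q.
def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    keep = np.zeros(m, dtype=bool)
    if below.any():
        keep[order[: np.nonzero(below)[0].max() + 1]] = True
    return keep
```

    Note that everything ranked below the largest qualifying index is kept, even individual p-values above their own per-rank threshold; that step-up behavior is what distinguishes FDR control from a simple per-test cutoff.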

  20. Component Based Testing with ioco

    NARCIS (Netherlands)

    van der Bijl, H.M.; Rensink, Arend; Tretmans, G.J.

    Component based testing concerns the integration of components which have already been tested separately. We show that, with certain restrictions, the ioco-test theory for conformance testing is suitable for component based testing, in the sense that the integration of fully conformant components is again conformant.

  1. Cell-Based Genotoxicity Testing

    Science.gov (United States)

    Reifferscheid, Georg; Buchinger, Sebastian

    Genotoxicity test systems that are based on bacteria play an important role in the detection and assessment of DNA-damaging chemicals. They belong to the basic line of test systems due to their easy realization, rapidness, broad applicability, high sensitivity and good reproducibility. Since the development of the Salmonella microsomal mutagenicity assay by Ames and coworkers in the early 1970s, significant development in bacterial genotoxicity assays has been achieved and is still a subject of research. The basic principle of the mutagenicity assay is a reversion of a growth-inhibited bacterial strain, e.g., due to auxotrophy, back to a fast-growing phenotype (regain of prototrophy). Deeper knowledge of the mutation events allows a mechanistic understanding of the induced DNA damage by the utilization of base-specific tester strains. Collections of such specific tester strains were extended by genetic engineering. Besides the reversion assays, test systems utilizing the bacterial SOS response were invented. These methods are based on the fusion of various SOS-responsive promoters with a broad variety of reporter genes facilitating numerous methods of signal detection. A very important aspect of genotoxicity testing is the bioactivation of xenobiotics to DNA-damaging compounds. Most widely used is the extracellular metabolic activation by making use of rodent liver homogenates. Again, genetic engineering allows the construction of highly sophisticated bacterial tester strains with significantly enhanced sensitivity due to overexpression of enzymes that are involved in the metabolism of xenobiotics. This provides mechanistic insights into the toxification and detoxification pathways of xenobiotics and helps explain the chemical nature of hazardous substances in unknown mixtures. In summary, beginning with "natural" tester strains, the rational design of bacteria has led to highly specific and sensitive tools for rapid, reliable and cost-effective genotoxicity testing.

  2. The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.

    Science.gov (United States)

    Lash, Timothy L

    2017-09-15

    In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
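
    The claim that selection by null hypothesis significance testing overestimates effects can be illustrated with a toy simulation (a hypothetical z-test setting, not drawn from the paper): many underpowered studies estimate the same small true effect, and the subset passing p < 0.05 is biased upward.

```python
import numpy as np
from scipy import stats

# Winner's curse sketch: effect estimates are N(true_effect, 1/sqrt(n));
# conditioning on two-sided p < 0.05 (positive side) truncates the
# distribution and inflates the mean of the "significant" estimates.
def significant_estimates(true_effect=0.2, n=50, n_studies=20000, seed=3):
    rng = np.random.default_rng(seed)
    est = rng.normal(true_effect, 1.0 / np.sqrt(n), n_studies)
    z = est * np.sqrt(n)
    selected = est[stats.norm.sf(z) < 0.025]   # significant, positive side
    return est.mean(), selected.mean()
```

    The unconditional mean recovers the true effect, while the mean over significant studies sits well above it, which is exactly the reproducibility problem described above.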

  3. Testing the spatial significance of weed patterns in arable land using Mead's test

    NARCIS (Netherlands)

    Heijting, S.; Werf, van der W.; Kruijer, W.T.; Stein, A.

    2007-01-01

    There is a need in weed science for statistical tests for patchiness and spatial pattern. The objective of this study was to investigate the performance of Mead's test for detecting patterns in synthetic data and in real weed counts made in maize, and to make a first assessment of its applicability.

  4. Testing the spatial significance of weed patterns in arable land using Mead's test

    NARCIS (Netherlands)

    Heijting, S.; van der Werf, W.; Kruijer, W.T.; Stein, A.

    2007-01-01

    There is a need in weed science for statistical tests for patchiness and spatial pattern. The objective of this study was to investigate the performance of Mead's test for detecting patterns in synthetic data and in real weed counts made in maize, and to make a first assessment of its applicability.

  5. Significance of hair-dye base-induced sensory irritation.

    Science.gov (United States)

    Fujita, F; Azuma, T; Tajiri, M; Okamoto, H; Sano, M; Tominaga, M

    2010-06-01

    Oxidation hair-dyes, which are the principal hair-dyes, sometimes induce painful sensory irritation of the scalp caused by the combination of highly reactive substances, such as hydrogen peroxide and alkali agents. Although many cases of severe facial and scalp dermatitis have been reported following the use of hair-dyes, sensory irritation caused by contact of the hair-dye with the skin has not been clearly reported. In this study, we used a self-assessment questionnaire to measure the sensory irritation in various regions of the body caused by two model hair-dye bases that contained different amounts of alkali agents without dyes. Moreover, the occipital region was identified as an alternative region of the scalp to test for sensory irritation by the hair-dye bases. We used this region to evaluate the relationship of sensitivity with skin properties, such as trans-epidermal water loss (TEWL), stratum corneum water content, sebum amount, surface temperature, current perception threshold (CPT), catalase activities in tape-stripped skin and the sensory irritation score with the model hair-dye bases. The hair-dye-sensitive group showed higher TEWL, a lower sebum amount, a lower surface temperature and higher catalase activity than the insensitive group, a profile similar to that of damaged skin. These results suggest that sensory irritation caused by hair-dye could occur easily on a damaged, dry scalp, as previously reported for irritation caused by skin cosmetics.

  6. Significance tests for the wavelet cross spectrum and wavelet linear coherence

    Directory of Open Access Journals (Sweden)

    Z. Ge

    2008-12-01

    Full Text Available This work attempts to develop significance tests for the wavelet cross spectrum and the wavelet linear coherence as a follow-up study on Ge (2007). Conventional approaches that are used by Torrence and Compo (1998) based on stationary background noise time series were used here in estimating the sampling distributions of the wavelet cross spectrum and the wavelet linear coherence. The sampling distributions are then used for establishing significance levels for these two wavelet-based quantities. In addition to these two wavelet quantities, properties of the phase angle of the wavelet cross spectrum of, or the phase difference between, two Gaussian white noise series are discussed. It is found that the tangent of the principal part of the phase angle approximately has a standard Cauchy distribution and the phase angle is uniformly distributed, which makes it impossible to establish significance levels for the phase angle. The simulated signals clearly show that, when there is no linear relation between the two analysed signals, the phase angle disperses into the entire range of [−π,π] with fairly high probabilities for values close to ±π to occur. Conversely, when linear relations are present, the phase angle of the wavelet cross spectrum settles around an associated value with considerably reduced fluctuations. When two signals are linearly coupled, their wavelet linear coherence will attain values close to one. The significance test of the wavelet linear coherence can therefore be used to complement the inspection of the phase angle of the wavelet cross spectrum. The developed significance tests are also applied to actual data sets, simultaneously recorded wind speed and wave elevation series measured from a NOAA buoy on Lake Michigan. Significance levels of the wavelet cross spectrum and the wavelet linear coherence between the winds and the waves reasonably separated meaningful peaks from those generated by randomness in the data set.
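
    The Torrence-and-Compo-style approach of calibrating significance against stationary background noise can be mimicked in a few lines. As a simplified stand-in for the wavelet case (assumption: ordinary Welch magnitude-squared coherence instead of wavelet linear coherence), simulate the null distribution of coherence between independent white-noise series and read off its 95th percentile:

```python
import numpy as np
from scipy import signal

# Monte Carlo null distribution of Welch coherence between independent
# white-noise series; its upper percentile serves as a significance
# level for coherence measured on real data.
def coherence_threshold(n=1024, nperseg=128, n_sims=300, q=95, seed=7):
    rng = np.random.default_rng(seed)
    null_values = []
    for _ in range(n_sims):
        x = rng.standard_normal(n)
        y = rng.standard_normal(n)
        _, cxy = signal.coherence(x, y, nperseg=nperseg)
        null_values.append(cxy)
    return np.percentile(np.concatenate(null_values), q)
```

    Coherence values above this threshold are unlikely to arise from two unrelated noise series of the same length and segmentation; linearly coupled series comfortably exceed it.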

  7. IRT-based test construction

    NARCIS (Netherlands)

    van der Linden, Willem J.; Theunissen, T.J.J.M.; Boekkooi-Timminga, Ellen; Kelderman, Henk

    1987-01-01

    Four discussions of test construction based on item response theory (IRT) are presented. The first discussion, "Test Design as Model Building in Mathematical Programming" (T.J.J.M. Theunissen), presents test design as a decision process under certainty. A natural way of modeling this process leads to mathematical programming models.

  8. Significance tests to determine the direction of effects in linear regression models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Hagmann, Michael; von Eye, Alexander

    2015-02-01

    Previous studies have discussed asymmetric interpretations of the Pearson correlation coefficient and have shown that higher moments can be used to decide on the direction of dependence in the bivariate linear regression setting. The current study extends this approach by illustrating that the third moment of regression residuals may also be used to derive conclusions concerning the direction of effects. Assuming non-normally distributed variables, it is shown that the distribution of residuals of the correctly specified regression model (e.g., Y is regressed on X) is more symmetric than the distribution of residuals of the competing model (i.e., X is regressed on Y). Based on this result, 4 one-sample tests are discussed which can be used to decide which variable is more likely to be the response and which one is more likely to be the explanatory variable. A fifth significance test is proposed based on the differences of skewness estimates, which leads to a more direct test of a hypothesis that is compatible with direction of dependence. A Monte Carlo simulation study was performed to examine the behaviour of the procedures under various degrees of associations, sample sizes, and distributional properties of the underlying population. An empirical example is given which illustrates the application of the tests in practice. © 2014 The British Psychological Society.
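
    A minimal numerical illustration of the residual-symmetry idea (a sketch under an assumed skewed-predictor setup, not the paper's formal significance tests):

```python
import numpy as np
from scipy import stats

# With a skewed explanatory variable, residuals of the correctly
# specified model (y regressed on x) stay symmetric, while residuals
# of the reversed model (x regressed on y) inherit the skewness of x.
def residual_skewness(x, y):
    bx = np.polyfit(x, y, 1)             # y = b1*x + b0 + e
    by = np.polyfit(y, x, 1)             # reversed model
    res_yx = y - np.polyval(bx, x)
    res_xy = x - np.polyval(by, y)
    return stats.skew(res_yx), stats.skew(res_xy)
```

    Comparing the two skewness magnitudes (or testing their difference, as the fifth test in the abstract does) then points toward the more plausible causal direction under non-normality.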

  9. Performance on the Farnsworth-Munsell 100-Hue Test Is Significantly Related to Nonverbal IQ.

    Science.gov (United States)

    Cranwell, Matthew B; Pearce, Bradley; Loveridge, Camilla; Hurlbert, Anya C

    2015-05-01

    The Farnsworth-Munsell 100-Hue test (FM100) is a standardized measure of chromatic discrimination, based on colored cap-sorting, which has been widely used in both adults and children. Its dependence on seriation ability raises questions as to its universal suitability and accuracy in assessing purely sensory discrimination. This study investigates how general intellectual ability relates to performance on both the FM100 and a new computer-based chromatic discrimination threshold test, across different age groups in both typical and atypical development. Participants were divided into two main age groups, children (6-15 years) and young adults (16-25 years), with each group further subdivided into typically developing (TD; three groups; TD 6-7 years, TD 8-9 years, TD Adult) individuals and atypically developing individuals, all but one carrying a diagnosis of Autism Spectrum Disorders (ASD; two groups; atypically developing [ATY] child 7-15 years, ASD Adult). General intelligence was measured using the Wechsler Abbreviated Intelligence Scale and Wechsler Intelligence Scale for Children. All participants completed the FM100. Both child groups also completed a computer-based chromatic discrimination threshold test, which assessed discrimination along cone-opponent ("red-green," "blue-yellow") and luminance cardinal axes using a controlled staircase procedure. Farnsworth-Munsell 100-Hue test performance was better in adults than in children. Furthermore, performance significantly positively correlated with nonverbal intelligence quotient (NVIQ) for all child groups and the young adult ASD group. The slope of this relationship was steeper for the ASD than TD groups. Performance on the chromatic discrimination threshold test was not significantly related to any IQ measure. Regression models reveal that chromatic discrimination threshold, although a significant predictor of FM100 performance when used alone, is a weaker predictor than NVIQ used alone or in combination

  10. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security testing (MBST) is a relatively new field and especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2-project DIAMONDS.

  11. Examining the significance of fingerprint-based classifiers

    Directory of Open Access Journals (Sweden)

    Collins Jack R

    2008-12-01

    Full Text Available Abstract Background Experimental examinations of biofluids to measure concentrations of proteins or their fragments or metabolites are being explored as a means of early disease detection, distinguishing diseases with similar symptoms, and drug treatment efficacy. Many studies have produced classifiers with a high sensitivity and specificity, and it has been argued that accurate results necessarily imply some underlying biology-based features in the classifier. The simplest test of this conjecture is to examine datasets designed to contain no information with classifiers used in many published studies. Results The classification accuracy of two fingerprint-based classifiers, a decision tree (DT algorithm and a medoid classification algorithm (MCA, are examined. These methods are used to examine 30 artificial datasets that contain random concentration levels for 300 biomolecules. Each dataset contains between 30 and 300 Cases and Controls, and since the 300 observed concentrations are randomly generated, these datasets are constructed to contain no biological information. A modest search of decision trees containing at most seven decision nodes finds a large number of unique decision trees with an average sensitivity and specificity above 85% for datasets containing 60 Cases and 60 Controls or less, and for datasets with 90 Cases and 90 Controls many DTs have an average sensitivity and specificity above 80%. For even the largest dataset (300 Cases and 300 Controls the MCA procedure finds several unique classifiers that have an average sensitivity and specificity above 88% using only six or seven features. Conclusion While it has been argued that accurate classification results must imply some biological basis for the separation of Cases from Controls, our results show that this is not necessarily true. The DT and MCA classifiers are sufficiently flexible and can produce good results from datasets that are specifically constructed to contain no
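The record's central caution, that flexible classifiers can look accurate on pure noise, is easy to reproduce with something far simpler than the DT and MCA classifiers studied: a single decision stump chosen from many random features. The sizes below (30 Cases, 30 Controls, 300 features) match the smallest datasets described, but the code is only an illustrative sketch.

```python
import random

random.seed(0)
n_cases, n_controls, n_features = 30, 30, 300
labels = [1] * n_cases + [0] * n_controls
# Purely random "concentrations": no feature carries any real signal.
data = [[random.random() for _ in range(n_features)] for _ in labels]

def best_stump_accuracy(data, labels, j):
    """Best training accuracy of a one-node decision stump on feature j."""
    best = 0.0
    for t in sorted(set(row[j] for row in data)):
        correct = sum((row[j] <= t) == (lab == 1)
                      for row, lab in zip(data, labels))
        # Allow the stump to flip its predicted classes.
        best = max(best, max(correct, len(labels) - correct) / len(labels))
    return best

# Searching 300 noise features for the best stump already looks "predictive".
best = max(best_stump_accuracy(data, labels, j) for j in range(n_features))
print(f"best single-feature training accuracy on pure noise: {best:.2f}")
```

With only 60 samples, the maximum over 300 random features comfortably exceeds chance, which is the paper's point in miniature.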

  12. Nanomaterial-Based Electrochemical Immunosensors for Clinically Significant Biomarkers

    Science.gov (United States)

    Ronkainen, Niina J.; Okon, Stanley L.

    2014-01-01

    Nanotechnology has played a crucial role in the development of biosensors over the past decade. The development, testing, optimization, and validation of new biosensors has become a highly interdisciplinary effort involving experts in chemistry, biology, physics, engineering, and medicine. The sensitivity, the specificity and the reproducibility of biosensors have improved tremendously as a result of incorporating nanomaterials in their design. In general, nanomaterials-based electrochemical immunosensors amplify the sensitivity by facilitating greater loading of the larger sensing surface with biorecognition molecules as well as improving the electrochemical properties of the transducer. The most common types of nanomaterials and their properties will be described. In addition, the utilization of nanomaterials in immunosensors for biomarker detection will be discussed since these biosensors have enormous potential for a myriad of clinical uses. Electrochemical immunosensors provide a specific and simple analytical alternative as evidenced by their brief analysis times, inexpensive instrumentation, lower assay cost as well as good portability and amenability to miniaturization. The role nanomaterials play in biosensors, their ability to improve detection capabilities in low concentration analytes yielding clinically useful data and their impact on other biosensor performance properties will be discussed. Finally, the most common types of electroanalytical detection methods will be briefly touched upon. PMID:28788700

  13. Nanomaterial-Based Electrochemical Immunosensors for Clinically Significant Biomarkers

    Directory of Open Access Journals (Sweden)

    Niina J. Ronkainen

    2014-06-01

    Full Text Available Nanotechnology has played a crucial role in the development of biosensors over the past decade. The development, testing, optimization, and validation of new biosensors has become a highly interdisciplinary effort involving experts in chemistry, biology, physics, engineering, and medicine. The sensitivity, the specificity and the reproducibility of biosensors have improved tremendously as a result of incorporating nanomaterials in their design. In general, nanomaterials-based electrochemical immunosensors amplify the sensitivity by facilitating greater loading of the larger sensing surface with biorecognition molecules as well as improving the electrochemical properties of the transducer. The most common types of nanomaterials and their properties will be described. In addition, the utilization of nanomaterials in immunosensors for biomarker detection will be discussed since these biosensors have enormous potential for a myriad of clinical uses. Electrochemical immunosensors provide a specific and simple analytical alternative as evidenced by their brief analysis times, inexpensive instrumentation, lower assay cost as well as good portability and amenability to miniaturization. The role nanomaterials play in biosensors, their ability to improve detection capabilities in low concentration analytes yielding clinically useful data and their impact on other biosensor performance properties will be discussed. Finally, the most common types of electroanalytical detection methods will be briefly touched upon.

  14. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Science.gov (United States)

    Breunig, Nancy A.

Despite the increasing criticism of statistical significance testing by researchers, particularly since the publication of the 1994 American Psychological Association style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…
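The sampling distribution at the heart of this logic can also be built empirically. The hedged sketch below (not from the paper) draws repeated samples from a skewed population and checks that the distribution of sample means behaves as theory predicts.

```python
import random
import statistics

random.seed(42)
# Empirical sampling distribution of the mean: draw many samples of size n
# from a skewed population (exponential with mean 1, sd 1) and record
# each sample mean.
n, reps = 30, 20000
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(reps)]

print(statistics.fmean(means))   # close to the population mean, 1.0
print(statistics.stdev(means))   # close to sd/sqrt(n) = 1/sqrt(30) ≈ 0.183
```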

  15. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
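A "what if" analysis of this kind can also be sketched in plain Python. The one-sample z test and the effect size d = 0.2 below are illustrative assumptions, not the paper's setup.

```python
from math import sqrt
from statistics import NormalDist

def z_test_p(d, n):
    """Two-sided p-value of a one-sample z test for a fixed effect size d."""
    z = d * sqrt(n)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Same small effect (d = 0.2); only the sample size changes.
for n in (25, 100, 400, 1000):
    print(n, round(z_test_p(0.2, n), 4))
```

Holding the effect fixed, the p-value crosses the 0.05 threshold purely because n grows, which is exactly what such analyses are meant to make visible.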

  16. Technical note: A significance test for data-sparse zones in scatter plots

    Directory of Open Access Journals (Sweden)

    V. V. Vetrova

    2012-04-01

    Full Text Available Data-sparse zones in scatter plots of hydrological variables can be of interest in various contexts. For example, a well-defined data-sparse zone may indicate inhibition of one variable by another. It is of interest therefore to determine whether data-sparse regions in scatter plots are of sufficient extent to be beyond random chance. We consider the specific situation of data-sparse regions defined by a linear internal boundary within a scatter plot defined over a rectangular region. An Excel VBA macro is provided for carrying out a randomisation-based significance test of the data-sparse region, taking into account both the within-region number of data points and the extent of the region. Example applications are given with respect to a rainfall time series from Israel and also to validation scatter plots from a seasonal forecasting model for lake inflows in New Zealand.
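The randomisation test described here (distributed by the authors as an Excel VBA macro) can be sketched in outline as follows. The unit square, the particular linear boundary, and all sample sizes are invented for illustration.

```python
import random

random.seed(7)

def in_sparse_zone(x, y):
    """Hypothesised data-sparse zone: below the line y = x - 0.5
    (a triangle of area 1/8 inside the unit square)."""
    return y < x - 0.5

# Toy data: one variable inhibits the other, so the zone is empty.
data = [(random.random(), random.random()) for _ in range(200)]
data = [(x, y) for x, y in data if not in_sparse_zone(x, y)][:150]

k_obs = sum(in_sparse_zone(x, y) for x, y in data)
n = len(data)

# Randomisation test: how often do n uniform points put <= k_obs in the zone?
reps = 5000
p = sum(
    sum(in_sparse_zone(random.random(), random.random()) for _ in range(n)) <= k_obs
    for _ in range(reps)
) / reps
print(f"observed {k_obs} of {n} points in zone, randomisation p = {p:.4f}")
```

A small p indicates the zone is emptier than random scatter over the rectangle would produce, given both the zone's extent and the sample size.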

  17. Technical note: A significance test for data-sparse zones in scatter plots

    Science.gov (United States)

    Vetrova, V. V.; Bardsley, W. E.

    2012-04-01

    Data-sparse zones in scatter plots of hydrological variables can be of interest in various contexts. For example, a well-defined data-sparse zone may indicate inhibition of one variable by another. It is of interest therefore to determine whether data-sparse regions in scatter plots are of sufficient extent to be beyond random chance. We consider the specific situation of data-sparse regions defined by a linear internal boundary within a scatter plot defined over a rectangular region. An Excel VBA macro is provided for carrying out a randomisation-based significance test of the data-sparse region, taking into account both the within-region number of data points and the extent of the region. Example applications are given with respect to a rainfall time series from Israel and also to validation scatter plots from a seasonal forecasting model for lake inflows in New Zealand.

  18. Risk Based Optimal Fatigue Testing

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Faber, M.H.; Kroon, I.B.

    1992-01-01

Optimal fatigue life testing of materials is considered. Based on minimization of the total expected costs of a mechanical component a strategy is suggested to determine the optimal stress range levels for which additional experiments are to be performed together with an optimal value of the maxi...

  19. Adaptive Tests of Significance Using Permutations of Residuals with R and SAS

    CERN Document Server

    O'Gorman, Thomas W

    2012-01-01

Provides the tools needed to successfully perform adaptive tests across a broad range of datasets Adaptive Tests of Significance Using Permutations of Residuals with R and SAS illustrates the power of adaptive tests and showcases their ability to adjust the testing method to suit a particular set of data. The book utilizes state-of-the-art software to demonstrate the practicality and benefits for data analysis in various fields of study. Beginning with an introduction, the book moves on to explore the underlying concepts of adaptive tests, including: smoothing methods and normalizing transforma
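A bare-bones permutation test using residuals of a reduced (here, intercept-only) model might look like the sketch below. It is a simplification for illustration, not the adaptive procedure the book develops, and all data are simulated.

```python
import random
import statistics


def slope(x, y):
    """Least-squares slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum(
        (a - mx) ** 2 for a in x
    )


random.seed(3)
n = 50
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.7 * xi + random.gauss(0, 1) for xi in x]

b_obs = abs(slope(x, y))

# Reduced (intercept-only) model: its residuals are exchangeable under H0,
# so shuffle them, rebuild y, and refit to approximate the null distribution.
y_bar = statistics.fmean(y)
resid = [yi - y_bar for yi in y]
reps, hits = 2000, 0
for _ in range(reps):
    random.shuffle(resid)
    y_star = [y_bar + r for r in resid]
    hits += abs(slope(x, y_star)) >= b_obs
p = hits / reps
print(f"observed |slope| = {b_obs:.3f}, permutation p = {p:.4f}")
```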

  20. Tests of English Language as Significant Thresholds for College-Bound Chinese and the Washback of Test-Preparation

    Science.gov (United States)

    Matoush, Marylou M.; Fu, Danling

    2012-01-01

    Tests of English language mark significantly high thresholds for all college-bound students in the People's Republic of China. Many Chinese students hope to seek their fortunes at universities in the United States, or other English speaking countries. These students spend long hours, year after year, in test-preparation centres in order to develop…

  1. Base Deficit as an Indicator of Significant Blunt Abdominal Trauma

    African Journals Online (AJOL)

    multiruka1

    Abstract. Background: Blunt abdominal trauma (BAT) is an important cause of morbidity and mortality among trauma patients. Base deficit (BD) has been proposed as an early available tool alongside focused assessment with sonography for trauma (FAST) in the screening of patients suspected to have BAT and also to help ...

  2. P-Value, a true test of statistical significance? a cautionary note ...

    African Journals Online (AJOL)

While it was not the intention of the founders of significance testing and hypothesis testing to have the two ideas intertwined as if they were complementary, the inconvenient marriage of the two practices into one coherent, convenient, incontrovertible and misinterpreted practice has dotted our standard statistics textbooks and ...

  3. Competency-based curricular design to encourage significant learning.

    Science.gov (United States)

    Hurtubise, Larry; Roman, Brenda

    2014-07-01

Most significant learning (SL) experiences produce long-lasting learning experiences that meaningfully change the learner's thinking, feeling, and/or behavior. Most significant teaching experiences involve strong connections with the learner and recognition that the learner felt changed by the teaching effort. L. Dee Fink, in Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses, defines six kinds of learning goals: Foundational Knowledge, Application, Integration, Human Dimension, Caring, and Learning to Learn. SL occurs when learning experiences promote interaction between the different kinds of goals; for example, acquiring knowledge alone is not enough, but when paired with a learning experience, such as an effective patient experience as in Caring, then significant (and lasting) learning occurs. To promote SL, backward design principles that start with clearly defined learning goals and the context of the situation of the learner are particularly effective. Emphasis on defining assessment methods prior to developing teaching/learning activities is the key: this ensures that assessment (where the learner should be at the end of the educational activity/process) drives instruction and that assessment and learning/instruction are tightly linked so that assessment measures a defined outcome (competency) of the learner. Employing backward design and the AAMC's MedBiquitous standard vocabulary for medical education can help to ensure that curricular design and redesign efforts effectively enhance educational program quality and efficacy, leading to improved patient care. Such methods can promote successful careers in health care for learners through development of self-directed learning skills and active learning, in ways that help learners become fully committed to lifelong learning and continuous professional development. Copyright © 2014 Mosby, Inc. All rights reserved.

  4. Risk Based Optimal Fatigue Testing

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Faber, M.H.; Kroon, I.B.

    1992-01-01

    Optimal fatigue life testing of materials is considered. Based on minimization of the total expected costs of a mechanical component a strategy is suggested to determine the optimal stress range levels for which additional experiments are to be performed together with an optimal value...

  5. [Clinical significance of the tests used in the diagnosis of pancreatic diseases].

    Science.gov (United States)

    Lenti, G; Emanuelli, G

    1976-11-14

Different methods available for investigating patients for pancreatic disease are discussed. They first include measurement of pancreatic enzymes in biological fluids. Basal amylase and/or lipase in blood are truly diagnostic in acute pancreatitis but their utility is low in chronic pancreatic diseases. Evocative tests have been performed to increase the sensitivity of blood enzyme measurement. The procedure is based on enzyme determination following administration of pancreozymin and secretin, and offers a valuable aid in diagnosis of chronic pancreatitis and cancer of the pancreas. They are capable of discerning pancreatic lesions but are not really discriminatory because similar changes are observed in both diseases. The measurement of urinary enzyme levels in patients with acute pancreatitis is a sensitive indicator of disease. The urinary amylase excretion rises to abnormal levels and persists at significant values for a longer period of time than the serum amylase in acute pancreatitis. The fractional urinary amylase excretion seems to be more sensitive than daily urinary measurement. The pancreatic exocrine function can be assessed by examining the duodenal contents after intravenous administration of pancreozymin and secretin. Different abnormal secretory patterns can be determined. Total secretory deficiency is observed in patients with obstruction of excretory ducts by tumors of the head of the pancreas and in the end stage of chronic pancreatitis. Low volume with normal bicarbonate and enzyme concentration is another typical pattern seen in neoplastic obstruction of excretory ducts. In chronic pancreatitis the chief defect is the inability of the gland to secrete a juice with a high bicarbonate concentration; but in the advanced stage diminution of enzyme and volume is also evident. Diagnostic procedures for pancreatic diseases include digestion and absorption tests. The microscopic examination and chemical estimation of the fats in stool specimens in

  6. Null hypothesis significance tests. A mix-up of two different theories

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2015-01-01

    Null hypothesis statistical significance tests (NHST) are widely used in quantitative research in the empirical sciences including scientometrics. Nevertheless, since their introduction nearly a century ago significance tests have been controversial. Many researchers are not aware of the numerous......-Bayesian interpretations. This is undoubtedly a major reason why NHST is very often misunderstood. But NHST also has intrinsic logical problems and the epistemic range of the information provided by such tests is much more limited than most researchers recognize. In this article we introduce to the scientometric community...... of the misunderstandings with examples from the scientometric literature and bring forward some modest recommendations for a more sound practice in quantitative data analysis....

  7. Interface-based software testing

    Directory of Open Access Journals (Sweden)

    Aziz Ahmad Rais

    2016-10-01

Full Text Available Software quality is determined by assessing the characteristics that specify how it should work, which are verified through testing. If it were possible to touch, see, or measure software, it would be easier to analyze and prove its quality. Unfortunately, software is an intangible asset, which makes testing complex. This is especially true when software quality is not a question of particular functions that can be tested through a graphical user interface. The primary objective of software architecture is to design quality of software through modeling and visualization. There are many methods and standards that define how to control and manage quality. However, many IT software development projects still fail due to the difficulties involved in measuring, controlling, and managing software quality. Software quality failure factors are numerous. Examples include beginning to test software too late in the development process, or failing to properly understand or design the software architecture and the software component structure. The goal of this article is to provide an interface-based software testing technique that better measures software quality, automates software quality testing, encourages early testing, and increases the software's overall testability.
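One way to picture interface-based testing is a contract test bound to an interface rather than to a GUI or a concrete class. The KeyValueStore interface and all names below are hypothetical, not from the article.

```python
from abc import ABC, abstractmethod
from typing import Optional

class KeyValueStore(ABC):
    """Hypothetical interface: the contract every backend must honour."""
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> Optional[str]: ...

class InMemoryStore(KeyValueStore):
    """One possible implementation; a database-backed store would also qualify."""
    def __init__(self) -> None:
        self._data = {}
    def put(self, key: str, value: str) -> None:
        self._data[key] = value
    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

def check_contract(store: KeyValueStore) -> None:
    """Contract test written against the interface, not any one class,
    so the same suite exercises every implementation."""
    store.put("k", "v")
    assert store.get("k") == "v"
    assert store.get("missing") is None

check_contract(InMemoryStore())
print("contract holds for InMemoryStore")
```

Because the test depends only on the interface, it can run early, before any concrete implementation is finished, and be reused for each one.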

  8. RDX-based nanocomposite microparticles for significantly reduced shock sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Qiu Hongwei, E-mail: hqiu@stevens.edu [Department of Chemical Engineering and Materials Science, Stevens Institute of Technology, Hoboken, NJ 07030 (United States); Stepanov, Victor; Di Stasio, Anthony R. [U.S. Army - Armament Research, Development, and Engineering Center, Picatinny, NJ 07806 (United States); Chou, Tsengming; Lee, Woo Y. [Department of Chemical Engineering and Materials Science, Stevens Institute of Technology, Hoboken, NJ 07030 (United States)

    2011-01-15

Cyclotrimethylenetrinitramine (RDX)-based nanocomposite microparticles were produced by a simple, yet novel spray drying method. The microparticles were characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD) and high performance liquid chromatography (HPLC), which shows that they consist of small RDX crystals (∼0.1-1 μm) uniformly and discretely dispersed in a binder. The microparticles were subsequently pressed to produce dense energetic materials which exhibited a markedly lower shock sensitivity. The low sensitivity was attributed to small crystal size as well as small void size (∼250 nm). The method developed in this work may be suitable for the preparation of a wide range of insensitive explosive compositions.

  9. RDX-based nanocomposite microparticles for significantly reduced shock sensitivity

    International Nuclear Information System (INIS)

    Qiu Hongwei; Stepanov, Victor; Di Stasio, Anthony R.; Chou, Tsengming; Lee, Woo Y.

    2011-01-01

    Cyclotrimethylenetrinitramine (RDX)-based nanocomposite microparticles were produced by a simple, yet novel spray drying method. The microparticles were characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD) and high performance liquid chromatography (HPLC), which shows that they consist of small RDX crystals (∼0.1-1 μm) uniformly and discretely dispersed in a binder. The microparticles were subsequently pressed to produce dense energetic materials which exhibited a markedly lower shock sensitivity. The low sensitivity was attributed to small crystal size as well as small void size (∼250 nm). The method developed in this work may be suitable for the preparation of a wide range of insensitive explosive compositions.

  10. RDX-based nanocomposite microparticles for significantly reduced shock sensitivity.

    Science.gov (United States)

    Qiu, Hongwei; Stepanov, Victor; Di Stasio, Anthony R; Chou, Tsengming; Lee, Woo Y

    2011-01-15

    Cyclotrimethylenetrinitramine (RDX)-based nanocomposite microparticles were produced by a simple, yet novel spray drying method. The microparticles were characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD) and high performance liquid chromatography (HPLC), which shows that they consist of small RDX crystals (∼0.1-1 μm) uniformly and discretely dispersed in a binder. The microparticles were subsequently pressed to produce dense energetic materials which exhibited a markedly lower shock sensitivity. The low sensitivity was attributed to small crystal size as well as small void size (∼250 nm). The method developed in this work may be suitable for the preparation of a wide range of insensitive explosive compositions. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. The Significance of Temperature Based Approach Over the Energy Based Approaches in the Buildings Thermal Assessment

    Science.gov (United States)

    Albatayneh, Aiman; Alterman, Dariusz; Page, Adrian; Moghtaderi, Behdad

    2017-05-01

The design of low energy buildings requires accurate thermal simulation software to assess the heating and cooling loads. Such designs should sustain thermal comfort for occupants and promote less energy usage over the lifetime of any building. One of the house energy rating tools used in Australia is AccuRate, a star rating tool used to assess and compare the thermal performance of various buildings, where the heating and cooling loads are calculated based on fixed operational temperatures between 20 °C and 25 °C to sustain thermal comfort for the occupants. However, these fixed settings for the time and temperatures considerably increase the heating and cooling loads. On the other hand, the adaptive thermal model applies a broader range of weather conditions, interacts with the occupants and promotes low energy solutions to maintain thermal comfort. This can be achieved by natural ventilation (opening windows/doors), suitable clothing, shading and low energy heating/cooling solutions for the occupied spaces (rooms). These activities can save a significant amount of operating energy, which should be taken into account when predicting the energy consumption of a building. Most building thermal assessment tools depend on energy-based approaches to predict the thermal performance of any building, e.g. AccuRate in Australia. This approach encourages the use of energy to maintain thermal comfort. This paper describes the advantages of a temperature-based approach to assess a building's thermal performance (using an adaptive thermal comfort model) over an energy-based approach (the AccuRate software used in Australia). The temperature-based approach was validated and compared with the energy-based approach using four full scale housing test modules located in Newcastle, Australia (Cavity Brick (CB), Insulated Cavity Brick (InsCB), Insulated Brick Veneer (InsBV) and Insulated Reverse Brick Veneer (InsRBV)) subjected to a range of seasonal conditions in a moderate climate. The time required for
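As one concrete temperature-based formulation (the record does not name a specific model), the ASHRAE 55 adaptive comfort model ties the acceptable indoor temperature band to the prevailing outdoor temperature; the sketch below is an illustration of the approach, not the paper's method.

```python
def adaptive_comfort_band(t_rm):
    """ASHRAE 55 adaptive model: neutral indoor operative temperature and
    the 80% acceptability band, from the prevailing mean outdoor
    temperature t_rm in degrees Celsius."""
    t_comf = 0.31 * t_rm + 17.8
    return t_comf - 3.5, t_comf, t_comf + 3.5

# In a warm month (mean outdoor 25 degC) the acceptable band extends well
# above a fixed 20-25 degC setting, so less cooling energy is "required".
low, neutral, high = adaptive_comfort_band(25.0)
print(f"neutral {neutral:.2f} degC, 80% band {low:.2f}-{high:.2f} degC")
```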

  12. F 35 JOINT STRIKE FIGHTER DOD Needs to Complete Developmental Testing Before Making Significant New Investments

    Science.gov (United States)

    2017-04-01

support over its lifetime. We have reported on F-35 issues for many years. Over time, we have reported significant cost, schedule, and performance...program test pilots. We analyzed historical test point execution rates and cost performance reports to determine the program's average monthly...historical data. Using historical contractor cost data from April 2016 to September 2016, we calculated the average monthly cost associated with the

  13. Methodology to identify risk-significant components for inservice inspection and testing

    International Nuclear Information System (INIS)

    Anderson, M.T.; Hartley, R.S.; Jones, J.L. Jr.; Kido, C.; Phillips, J.H.

    1992-08-01

    Periodic inspection and testing of vital system components should be performed to ensure the safe and reliable operation of Department of Energy (DOE) nuclear processing facilities. Probabilistic techniques may be used to help identify and rank components by their relative risk. A risk-based ranking would allow varied DOE sites to implement inspection and testing programs in an effective and cost-efficient manner. This report describes a methodology that can be used to rank components, while addressing multiple risk issues
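In its simplest form, a probabilistic risk ranking of this kind orders components by failure probability times consequence. The components and numbers below are entirely hypothetical, a sketch of the idea rather than the report's methodology.

```python
# Hypothetical components scored by relative risk = P(failure) x consequence.
components = {
    "primary_pump": {"p_fail": 1e-3, "consequence": 9.0},
    "relief_valve": {"p_fail": 5e-4, "consequence": 10.0},
    "drain_line":   {"p_fail": 2e-3, "consequence": 1.0},
    "level_sensor": {"p_fail": 1e-2, "consequence": 0.4},
}

# Rank high-risk components first; inspection/testing effort follows the ranking.
ranked = sorted(components.items(),
                key=lambda kv: kv[1]["p_fail"] * kv[1]["consequence"],
                reverse=True)
for name, c in ranked:
    print(f"{name:14s} risk = {c['p_fail'] * c['consequence']:.2e}")
```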

  14. Fitting and testing the significance of linear trends in Gumbel-distributed data

    Directory of Open Access Journals (Sweden)

    R. T. Clarke

    2002-01-01

Full Text Available The widely-used hydrological procedures for calculating events with T-year return periods from data that follow a Gumbel distribution assume that the data sequence from which the Gumbel distribution is fitted remains stationary in time. If non-stationarity is suspected, whether as a consequence of changes in land-use practices or climate, it is common practice to test the significance of trend by either of two methods: linear regression, which assumes that data in the record have a Normal distribution with mean value that possibly varies with time; or a non-parametric test such as that of Mann-Kendall, which makes no assumption about the distribution of the data. Thus, the hypothesis that the data are Gumbel-distributed is temporarily abandoned while testing for trend, but is re-adopted if the trend proves to be not significant, when events with T-year return periods are then calculated. This is illogical. The paper describes an alternative model in which the Gumbel distribution has a (possibly time-variant) mean, the time-trend in mean value being determined, for the present purpose, by a single parameter β estimated by Maximum Likelihood (ML). The large-sample variance of the ML estimate β̂_ML is compared with the variance of the trend estimate β̂_LR calculated by linear regression; the latter is found to be 64% greater. Simulated samples from a standard Gumbel distribution were given superimposed linear trends of different magnitudes, and the power of each of three trend-testing procedures (Maximum Likelihood, Linear Regression, and the non-parametric Mann-Kendall test) was compared. The ML test was always more powerful than either the Linear Regression or Mann-Kendall test, whatever the (positive) value of the trend β; the power of the MK test was always least, for all values of β. Keywords: Extreme value probability distribution, Gumbel distribution, statistical stationarity, trend-testing procedures
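The ML estimation of the trend parameter β can be sketched with a crude grid search. For brevity the scale parameter is treated as known here, an assumption the paper does not make; the data are simulated and all numbers are illustrative.

```python
import math
import random

random.seed(11)

# Simulate 60 "annual maxima" from a Gumbel distribution whose location
# drifts linearly in time: mu_t = mu0 + beta * t, with scale sigma known.
mu0, beta_true, sigma, n = 10.0, 0.05, 2.0, 60
t = list(range(n))
x = [mu0 + beta_true * ti - sigma * math.log(-math.log(random.random()))
     for ti in t]

def profile_loglik(beta):
    """Gumbel log-likelihood at trend beta, with the location mu profiled
    out in closed form (sigma treated as known for this sketch)."""
    y = [xi - beta * ti for xi, ti in zip(x, t)]
    mu_hat = -sigma * math.log(sum(math.exp(-yi / sigma) for yi in y) / n)
    z = [(yi - mu_hat) / sigma for yi in y]
    return sum(-math.log(sigma) - zi - math.exp(-zi) for zi in z)

# Crude grid search for the ML trend estimate over beta in [-0.1, 0.2].
grid = [i / 1000 for i in range(-100, 201)]
beta_hat = max(grid, key=profile_loglik)
print(f"true trend {beta_true}, ML estimate {beta_hat:.3f}")
```

A significance test then compares the maximized log-likelihood against the β = 0 fit (e.g. via a likelihood-ratio statistic), which is the step the paper studies.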

  15. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power-analysis in estimating...

  16. Measured dose to ovaries and testes from Hodgkin's fields and determination of genetically significant dose

    International Nuclear Information System (INIS)

    Niroomand-Rad, A.; Cumberlin, R.

    1993-01-01

    The purpose of this study was to determine the genetically significant dose from therapeutic radiation exposure with Hodgkin's fields by estimating the doses to ovaries and testes. Phantom measurements were performed to verify estimated doses to ovaries and testes from Hodgkin's fields. Thermoluminescent LiF dosimeters (TLD-100) of 1 × 3 × 3 mm³ dimensions were embedded in phantoms and exposed to standard mantle and paraaortic fields using Co-60, 4 MV, 6 MV, and 10 MV photon beams. The results show that measured doses to ovaries and testes are about two to five times higher than the corresponding graphically estimated doses for Co-60 and 4 MV photon beams as depicted in ICRP publication 44. In addition, the measured doses to ovaries and testes are about 30% to 65% lower for 10 MV photon beams than for their corresponding Co-60 photon beams. The genetically significant dose from Hodgkin's treatment (less than 0.01 mSv) adds about 4% to the genetically significant dose contribution from medical procedures and adds less than 1% to the genetically significant dose from all sources. Therefore, the consequence to society is considered to be very small. The consequences for the individual patient are, likewise, small. 28 refs., 3 figs., 5 tabs.

  17. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…

  18. Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.

    Science.gov (United States)

    Deegear, James

    This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and on literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…

  19. Wavelet Co-movement Significance Testing with Respect to Gaussian White Noise Background

    Directory of Open Access Journals (Sweden)

    Poměnková Jitka

    2018-01-01

    Full Text Available The paper deals with significance testing of time series co-movement measured via wavelet analysis, namely via the wavelet cross-spectrum. This technique is popular for its better time resolution compared to other techniques, and it puts in evidence the existence of both long-run and short-run co-movement. To obtain better predictive power, it is advisable to support and validate the obtained results with a testing approach. We investigate the test of the wavelet power cross-spectrum against a Gaussian white noise background with the use of the Bessel function. Our experiment is performed on real data, i.e. seasonally adjusted quarterly data of the gross domestic product of the United Kingdom, Korea and the G7 countries. To validate the test results we perform a Monte Carlo simulation. We describe the advantages and disadvantages of both approaches and formulate recommendations for their use.

  20. Significance of hydrogen breath tests in children with suspected carbohydrate malabsorption.

    Science.gov (United States)

    Däbritz, Jan; Mühlbauer, Michael; Domagk, Dirk; Voos, Nicole; Henneböhl, Geraldine; Siemer, Maria L; Foell, Dirk

    2014-02-27

    Hydrogen breath tests are noninvasive procedures frequently applied in the diagnostic workup of functional gastrointestinal disorders. Here, we review hydrogen breath test results and the occurrence of lactose, fructose and sorbitol malabsorption in pediatric patients; and determine the significance of the findings and the outcome of patients with carbohydrate malabsorption. We included 206 children (88 male, 118 female, median age 10.7 years, range 3-18 years) with a total of 449 hydrogen breath tests (lactose, n = 161; fructose, n = 142; sorbitol, n = 146) in a retrospective analysis. Apart from test results, we documented symptoms, the therapeutic consequences of the test, the outcome and the overall satisfaction of the patients and families. In total, 204 (46%) of all breath tests were positive. Long-term follow-up data could be collected from 118 patients. Of 79 patients (67%) who were put on a diet reduced in lactose, fructose and/or sorbitol, the majority (92%, n = 73) reported the diet to be strict and only 13% (n = 10) had no response to diet. Most families (96%, n = 113) were satisfied by the test and the therapy. There were only 21 tests (5%) with a borderline result because the criteria for a positive result were only partially met. Hydrogen breath tests can be helpful in the evaluation of children with gastrointestinal symptoms including functional intestinal disorders. If applied for a variety of carbohydrates but only where indicated, around two-thirds of all children have positive results. The resulting therapeutic measures successfully relieve symptoms in the vast majority of patients.

  1. Description of test facilities bound to the research on sodium aerosols - some significant results

    International Nuclear Information System (INIS)

    Dolias, M.; Lafon, A.; Vidard, M.; Schaller, K.H.

    1977-01-01

    This communication is dedicated to the description of the CEA (French Atomic Energy Authority) test facilities located at CADARACHE, which are used to study the behavior of sodium aerosols. These test loops are necessary for studying the operation of equipment such as filters, sodium vapour traps, condensers and separators. It is also possible to study the effect of characteristic parameters on the formation, coagulation and carry-over of sodium aerosols in the cover gas. Sodium aerosol deposits in a vertical annular space configuration with a cold area in its upper part are also studied. Some significant results emphasize the importance of operating conditions on the formation of aerosols. (author)

  2. Aluminum sulfate significantly reduces the skin test response to common allergens in sensitized patients

    Directory of Open Access Journals (Sweden)

    Grier Thomas J

    2006-02-01

    Full Text Available Abstract Background Avoidance of allergens is still recommended as the first and best way to prevent allergic illnesses and their comorbid diseases. Despite a variety of attempts there has been very limited success in the area of environmental control of allergic disease. Our objective was to identify a non-invasive, non-pharmacological method to reduce indoor allergen loads in atopic persons' homes and public environments. We employed a novel in vivo approach to examine the possibility of using aluminum sulfate to control environmental allergens. Methods Fifty skin test reactive patients were simultaneously skin tested with conventional test materials in order to examine the actions of the protein/glycoprotein modifier, aluminum sulfate. Common allergens, dog, cat, dust mite, Alternaria, and cockroach were used in the study. Results Skin test reactivity was significantly reduced by the modifier aluminum sulfate. Our studies demonstrate that the effects of histamine were not affected by the presence of aluminum sulfate. In fact, skin test reactivity was reduced independent of whether aluminum sulfate was present in the allergen test material or removed prior to testing, indicating that the allergens had in some way been inactivated. Conclusion Aluminum sulfate was found to reduce the in vivo allergic reaction cascade induced by skin testing with common allergens. The exact mechanism is not clear but appears to involve the alteration of IgE-binding epitopes on the allergen. Our results indicate that it may be possible to diminish the allergenicity of an environment by application of the active agent aluminum sulfate, thus producing environmental control without complete removal of the allergen.

  3. When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment

    Science.gov (United States)

    Szucs, Denes; Ioannidis, John P. A.

    2017-01-01

    Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all or nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and, optimally, their raw data published. The current statistics lite educational approach for students that has sustained the widespread, spurious use of NHST should be phased out. PMID:28824397

  4. ‘SGoFicance Trace’: Assessing Significance in High Dimensional Testing Problems

    Science.gov (United States)

    de Uña-Alvarez, Jacobo; Carvajal-Rodriguez, Antonio

    2010-01-01

    Recently, an exact binomial test called SGoF (Sequential Goodness-of-Fit) has been introduced as a new method for handling high dimensional testing problems. SGoF looks for statistical significance when comparing the amount of null hypotheses individually rejected at level γ = 0.05 with the expected amount under the intersection null, and then proceeds to declare a number of effects accordingly. SGoF detects an increasing proportion of true effects with the number of tests, unlike other methods for which the opposite is true. It is worth mentioning that the choice γ = 0.05 is not essential to the SGoF procedure, and more power may be reached at other values of γ depending on the situation. In this paper we enhance the possibilities of SGoF by letting γ vary over the whole interval (0,1). In this way, we introduce the ‘SGoFicance Trace’ (from SGoF's significance trace), a graphical complement to SGoF which can help to make decisions in multiple-testing problems. A script has been written for the computation in R of the SGoFicance Trace. This script is available from the web site http://webs.uvigo.es/acraaj/SGoFicance.htm. PMID:21209966
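The core SGoF comparison — the number of p-values rejected at level γ versus the binomial expectation under the intersection null — can be sketched as follows. This is a deliberately simplified illustration, not the exact published algorithm; the function name and the excess-count rule for the number of declared effects are our assumptions:

```python
import math

def sgof_effects(pvalues, gamma=0.05, alpha=0.05):
    """Simplified SGoF-style procedure.

    Counts how many p-values fall at or below gamma, tests that count
    against Bin(n, gamma) with a one-sided exact binomial test, and,
    if significant, declares the excess over the null expectation
    as the number of effects.
    """
    n = len(pvalues)
    r = sum(p <= gamma for p in pvalues)
    # one-sided binomial tail P(X >= r) under X ~ Bin(n, gamma)
    tail = sum(
        math.comb(n, k) * gamma**k * (1 - gamma) ** (n - k)
        for k in range(r, n + 1)
    )
    if tail > alpha:
        return 0  # observed rejections consistent with the global null
    # declare the excess over the expected count as effects
    return max(0, r - math.ceil(n * gamma))

# 30 small p-values among 100 tests: far more than the 5 expected at γ = 0.05
effects = sgof_effects([0.001] * 30 + [0.5] * 70)
print(effects)
```

Sweeping `gamma` over (0,1) and plotting the declared effects against γ gives exactly the kind of "significance trace" the paper proposes.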

  5. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
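The proposed procedure can be illustrated with a toy resampling sketch: pool the individual histograms, resample under the null that the two ensembles are exchangeable, and compare the Euclidean distance between resampled summary histograms with the observed distance. Function names, the normalization, and the resampling scheme are illustrative assumptions, not the authors' code:

```python
import random

def euclidean(h1, h2):
    """Euclidean distance between two histograms (bin-wise)."""
    return sum((a - b) ** 2 for a, b in zip(h1, h2)) ** 0.5

def bootstrap_pvalue(hists_a, hists_b, n_boot=2000, rng=None):
    """Bootstrap significance test for two summary histograms.

    hists_a / hists_b are lists of individual normalized histograms;
    each summary histogram is their bin-wise mean. Under the null the
    group labels are exchangeable, so resampling draws from the pool.
    """
    rng = rng or random.Random()

    def summary(hists):
        n = len(hists)
        return [sum(col) / n for col in zip(*hists)]

    observed = euclidean(summary(hists_a), summary(hists_b))
    pooled = hists_a + hists_b
    na = len(hists_a)
    count = 0
    for _ in range(n_boot):
        sample = [rng.choice(pooled) for _ in range(len(pooled))]
        d = euclidean(summary(sample[:na]), summary(sample[na:]))
        if d >= observed:
            count += 1
    return count / n_boot  # fraction of resamples at least as extreme

# Two clearly different ensembles of 3-bin histograms
a = [[0.8, 0.1, 0.1] for _ in range(40)]
b = [[0.1, 0.1, 0.8] for _ in range(40)]
print(bootstrap_pvalue(a, b, rng=random.Random(42)))
```

Swapping `euclidean` for a Jeffries-Matusita or Kuiper distance changes only the test statistic, mirroring the comparison made in the paper.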

  6. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

    Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform also requires that researchers interpret CIs without recourse to NHST.

  7. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis.

    Science.gov (United States)

    Eekhout, Iris; van de Wiel, Mark A; Heymans, Martijn W

    2017-08-22

    Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin's Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels significantly contributes to the model, different methods are available. For example pooling chi-square tests with multiple degrees of freedom, pooling likelihood ratio test statistics, and pooling based on the covariance matrix of the regression model. These methods are more complex than RR and are not available in all mainstream statistical software packages. In addition, they do not always obtain optimal power levels. We argue that the median of the p-values from the overall significance tests from the analyses on the imputed datasets can be used as an alternative pooling rule for categorical variables. The aim of the current study is to compare different methods to test a categorical variable for significance after multiple imputation on applicability and power. In a large simulation study, we demonstrated the control of the type I error and power levels of different pooling methods for categorical variables. This simulation study showed that for non-significant categorical covariates the type I error is controlled and the statistical power of the median pooling rule was at least equal to current multiple parameter tests. An empirical data example showed similar results. It can therefore be concluded that using the median of the p-values from the imputed data analyses is an attractive and easy to use alternative method for significance testing of categorical variables.
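The median pooling rule advocated here is deliberately simple to apply; a minimal sketch (the function name and the example p-values are assumptions for illustration):

```python
import statistics

def pool_pvalues_median(pvalues_per_imputation):
    """Median-p pooling rule: after fitting the model on each imputed
    dataset, take the median of the overall significance-test p-values
    for the categorical covariate as the pooled p-value.
    """
    return statistics.median(pvalues_per_imputation)

# p-values for a multi-level categorical covariate from m = 5 imputed datasets
pooled = pool_pvalues_median([0.012, 0.034, 0.021, 0.047, 0.009])
print(pooled)  # → 0.021
```

Compared with pooling chi-square or likelihood ratio statistics across imputations, this rule needs no covariance matrices and works in any software that can fit the model once per imputed dataset, which is precisely the applicability argument the authors make.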

  8. The diagnostic sensitivity of dengue rapid test assays is significantly enhanced by using a combined antigen and antibody testing approach.

    Directory of Open Access Journals (Sweden)

    Scott R Fry

    2011-06-01

    Full Text Available BACKGROUND: Serological tests for IgM and IgG are routinely used in clinical laboratories for the rapid diagnosis of dengue and can differentiate between primary and secondary infections. Dengue virus non-structural protein 1 (NS1) has been identified as an early marker for acute dengue, and is typically present between days 1-9 post-onset of illness, but following seroconversion it can be difficult to detect in serum. AIMS: To evaluate the performance of a newly developed Panbio® Dengue Early Rapid test for NS1 and determine if it can improve diagnostic sensitivity when used in combination with a commercial IgM/IgG rapid test. METHODOLOGY: The clinical performance of the Dengue Early Rapid was evaluated in a retrospective study in Vietnam with 198 acute laboratory-confirmed positive and 100 negative samples. The performance of the Dengue Early Rapid in combination with the IgM/IgG Rapid test was also evaluated in Malaysia with 263 laboratory-confirmed positive and 30 negative samples. KEY RESULTS: In Vietnam the sensitivity and specificity of the test were 69.2% (95% CI: 62.8% to 75.6%) and 96% (95% CI: 92.2% to 99.8%), respectively. In Malaysia the performance was similar, with 68.9% sensitivity (95% CI: 61.8% to 76.1%) and 96.7% specificity (95% CI: 82.8% to 99.9%) compared to RT-PCR. Importantly, when the Dengue Early Rapid test was used in combination with the IgM/IgG test the sensitivity increased to 93.0%. When the two tests were compared at each day post-onset of illness there was clear differentiation between the antigen and antibody markers. CONCLUSIONS: This study highlights that using dengue NS1 antigen detection in combination with anti-glycoprotein E IgM and IgG serology can significantly increase the sensitivity of acute dengue diagnosis, extends the possible window of detection to include very early acute samples, and enhances the clinical utility of rapid immunochromatographic testing for dengue.

  9. Laboratory testing for monoclonal gammopathies: Focus on monoclonal gammopathy of undetermined significance and smoldering multiple myeloma.

    Science.gov (United States)

    Willrich, Maria A V; Murray, David L; Kyle, Robert A

    2018-01-01

    Monoclonal gammopathies (MG) are defined by increased proliferation of clonal plasma cells, resulting in a detectable abnormality called monoclonal component or M-protein. Detection of the M-protein as either narrow peaks on protein electrophoresis and discrete bands on immunofixation is the defining feature of MG. MG are classified as low-tumor burden disorders, pre-malignancies and malignancies. Since significant disease can be present at any level, several different tests are employed in order to encompass the inherent diverse nature of the M-proteins. In this review, we discuss the main characteristics and limitations of clinical assays to detect M-proteins: protein electrophoresis, immunofixation, immunoglobulin quantitation, serum free light chains and heavy-light chain assays, as well as the newly developed MALDI-TOF mass spectrometric methods. In addition, the definitions of the pre-malignancies monoclonal gammopathy of undetermined significance (MGUS) and smoldering multiple myeloma (SMM), as well as monoclonal gammopathy of renal significance (MGRS) are presented in the context of the 2014 international guidelines for definition of myeloma requiring treatment, and the role of the laboratory in test selection for screening and monitoring these conditions is highlighted. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  10. A procedure for the significance testing of unmodeled errors in GNSS observations

    Science.gov (United States)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after corrected with empirical model and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and then simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled especially when they are significant. Therefore, a very first question is how to statistically validate the significance of unmodeled errors. In this research, we will propose a procedure to examine the significance of these unmodeled errors by the combined use of the hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested by using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying the time-domain Allan variance analysis and frequency-domain fast Fourier transform. In summary, the spatiotemporally correlated unmodeled errors are commonly existent in GNSS observations and mainly governed by the residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.
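As a crude illustration of the first step of such a procedure — checking whether positioning residuals are consistent with white noise — one might test the lag-1 autocorrelation, since a significant value hints at remaining correlated (unmodeled) errors such as residual atmospheric bias or multipath. The published procedure combines several hypothesis tests and is not reproduced here; the function below is an assumption-laden sketch:

```python
import math
import random

def lag1_whiteness_test(residuals):
    """Test residuals for whiteness via the lag-1 autocorrelation r1.

    Under the white-noise null, r1 * sqrt(n) is approximately standard
    normal, so |r1 * sqrt(n)| > 1.96 rejects whiteness at the 5% level.
    Returns (reject, r1).
    """
    n = len(residuals)
    mean = sum(residuals) / n
    c0 = sum((x - mean) ** 2 for x in residuals) / n          # variance
    c1 = sum(
        (residuals[i] - mean) * (residuals[i + 1] - mean)
        for i in range(n - 1)
    ) / n                                                      # lag-1 autocovariance
    r1 = c1 / c0
    z = r1 * math.sqrt(n)
    return abs(z) > 1.96, r1

rng = random.Random(1)
white = [rng.gauss(0, 1) for _ in range(500)]
# Add a slowly varying signal to mimic a residual atmospheric bias
biased = [w + math.sin(i / 20.0) for i, w in enumerate(white)]
print(lag1_whiteness_test(white), lag1_whiteness_test(biased))
```

Rejection here only flags *that* something correlated remains; separating it into nonstationary signal, stationary signal, and white noise, as the paper does, requires the further tests and the Allan-variance/FFT cross-checks described in the abstract.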

  11. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    Full Text Available For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic undetermination). The demonstration includes a complete example.
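The three-value logic described in this abstract can be sketched directly: compare the confidence interval against a pre-specified minimal substantial effect instead of a point null. The sketch assumes a one-directional effect; the function name and thresholds are illustrative:

```python
def three_value_inference(ci_low, ci_high, minimal_effect):
    """Classify a confidence interval against the minimal value of a
    substantial effect, yielding the article's three-value logic.
    Assumes the effect of interest is in the positive direction.
    """
    if ci_low >= minimal_effect:
        # the whole interval lies at or above the substantial threshold
        return "probable presence of a substantial effect"
    if ci_high < minimal_effect:
        # the whole interval lies below the substantial threshold
        return "probable absence of a substantial effect"
    # the interval straddles the threshold: no decision either way
    return "probabilistic undetermination"

# Suppose effects below 0.30 are considered too small to matter
print(three_value_inference(0.42, 0.91, 0.30))
print(three_value_inference(-0.05, 0.18, 0.30))
print(three_value_inference(0.10, 0.55, 0.30))
```

Note how the middle case would be "non-significant" under NHT against a zero null yet here yields a positive conclusion (probable absence of a substantial effect), which is exactly the interpretive gain the article argues for.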

  12. Diagnostic Significance of Laboratory Tests in Diseases of the Biliary System

    Directory of Open Access Journals (Sweden)

    K.V. Paramonovа

    2013-09-01

    Laboratory diagnosis, which is an integral part of modern medicine, does not lose its significance for the management of patients with cholelithiasis. Hematological studies are used to establish the diagnosis and etiology, to stratify the risk of a severe clinical course, to detect organ failure, and to determine the adequacy of therapy. Changes in laboratory parameters make it possible to suspect the development of complications in a timely manner and to monitor the patient's state. Laboratory tests are usually more sensitive indicators of the patient's state than his well-being. A combination of clinical manifestations, laboratory data and imaging techniques improves the results of cholelithiasis diagnosis.

  13. [Significance of the urine strip test in case of stunted growth].

    Science.gov (United States)

    Bertholet-Thomas, A; Llanas, B; Servais, A; Bendelac, N; Goizet, C; Choukroun, G; Novo, R; Decramer, S

    2015-07-01

    Observation of stunted growth in children usually leads the general practitioner to refer the patient to endocrinologists or gastroenterologists. In most cases, after a complementary check-up, the diagnosis is made and treatment is initiated. However, certain cases remain undiagnosed, particularly renal etiologies, such as proximal tubulopathy. The urine strip test at the initial check-up would be an easy and inexpensive test to avoid delayed diagnosis. The aim of the present paper is to increase general physicians' and pediatricians' awareness of the significance of questioning the parents and using the urine strip test for any child presenting with stunted growth. We report the case of a 20-month-old child admitted to the emergency department for severe dehydration. He had displayed stunted growth since the age of 5 months and showed a negative etiologic check-up at 9 months of age. Clinical examination at admission confirmed stunted growth with loss of 2 standard deviations and signs of dehydration with persistent diuresis. Skin paleness, ash-blond hair, and signs of rickets were also observed and the urine strip test showed positive pads for glycosuria and proteinuria. Polyuria and polydipsia were also revealed following parents' questioning, suggesting proximal tubulopathy (Fanconi syndrome). Association of stunted growth, rickets, polyuria and polydipsia, glycosuria (without ketonuria and normal glycemia), and proteinuria suggest nephropathic cystinosis. Ophthalmic examination showed cystine deposits in the cornea. The semiotic diagnosis of nephropathic cystinosis was confirmed by leukocyte cystine concentrations and genetic investigations. This case report clearly illustrates the significance of the urine strip test to easily and quickly focus the diagnosis of stunted growth on a renal etiology (glycosuria, proteinuria), especially on proximal tubulopathy for which the most frequent cause is nephropathic cystinosis. Specificity of nephropathic

  14. Tests of Statistical Significance and Background Estimation in Gamma-Ray Air Shower Experiments

    Science.gov (United States)

    Fleysher, R.; Fleysher, L.; Nemethy, P.; Mincer, A. I.; Haines, T. J.

    2004-03-01

    In this paper we discuss established methods of significance calculation for testing the existence of a signal in the presence of unknown background and point out the limits of their applicability. We then introduce a new self-consistent scheme for source detection and discuss some of its properties. The method overcomes weaknesses of those used previously and allows incorporating background anisotropies by vetoing existing localized sources and sinks on the sky and compensating for known large-scale anisotropies. By giving an example using the Milagro gamma-ray observatory data, we demonstrate how the method can be employed to relax the detector stability assumption. The new method is universal and can be used with any large field-of-view detector, in which the object of investigation, steady or transient, point or extended, traverses its field of view.

  15. School-Based Educational Interventions Can Significantly Improve Health Outcomes in Children with Asthma.

    Science.gov (United States)

    Suwannakeeree, Pussayaban; Deerojanawong, Jitladda; Prapphal, Nuanchan

    2016-02-01

    Lack of asthma knowledge among pediatric patients and their caregivers contributes to poor asthma control in children. There are no data from Thailand on the health outcomes of school-based educational interventions for asthmatic children. To assess the effectiveness of school-based asthma educational interventions on health outcomes, asthma control, and management in asthmatic children. Forty-seven asthmatic students (6-15 years old), 14 caregivers, and five teachers from the Homkred School participated in the study. Asthma knowledge, workshops on pMDI (pressurized metered dose inhaler) techniques, use of asthma diaries, and self-management plans were provided. Pre- and post-tests were administered to assess the asthma knowledge of the asthmatic students, their caregivers, and teachers. Pulmonary function tests (PFT) were used to assess the health outcomes. Control of asthma and self-management behaviors were assessed at three and six months post-intervention. There were significant improvements of asthma knowledge in all groups, and self-management behaviors in the asthmatic children improved. The teachers' management of asthma attacks during classes also improved. As a result, there were fewer emergency room (ER) visits. School-based educational interventions can significantly improve asthma outcomes in children with asthma. Therefore, the authors highly recommend the use of this intervention.

  16. Significance of T wave normalization in the electrocardiogram during exercise stress test

    International Nuclear Information System (INIS)

    Marin, J.J.; Heng, M.K.; Sevrin, R.; Udhoji, V.N.

    1987-01-01

    Although normalization of previously inverted T waves in the ECG is not uncommon during exercise treadmill testing, the clinical significance of this finding is still unclear. This was investigated in 45 patients during thallium-201 exercise testing. Patients with secondary T wave abnormalities on the resting ECG and ischemic exercise ST segment depression were excluded. On the thallium-201 scans, the left ventricle was divided into anterior-septal and inferior-posterior segments; these were considered equivalent to T wave changes in leads V1 and V5, and aVF, respectively. A positive thallium-201 scan was found in 43 of 45 (95%) patients and in 49 of 52 (94%) cardiac segments that showed T wave normalization. When thallium scans and T wave changes were matched to sites of involvement, 76% of T wave normalization in lead aVF was associated with positive thallium scans in the inferior-posterior segments, and 77% of T wave normalization in V1 and V5 was associated with positive thallium scans in the anterior-septal segments. These site correlations were similar for reversible and fixed thallium defects, and for patients not on digoxin therapy. Similar correlations were noted for the sites of T wave changes and coronary artery lesions in 12 patients who had angiography. In patients with a high prevalence of coronary artery disease, exercise T wave normalization is highly specific for the presence of the disease. In addition, it represents predominantly either previous injury or exercise-induced ischemic changes over the site of ECG involvement, rather than reciprocal changes of the opposite ventricular wall.

  17. A rule-based software test data generator

    Science.gov (United States)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests show that even the primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.

  18. [Clinical significance of secondary results from non-invasive prenatal testing].

    Science.gov (United States)

    Ke, Weilin; Zhao, Weihua; Jie, Shenqiu; Chen, Qingqing; Li, Qing

    2017-06-10

    To assess the accuracy of copy number variations (CNVs) detection by non-invasive prenatal testing (NIPT) in addition to its routine targets, and the clinical significance of such CNVs for the reduction of fetuses born with chromosomal microdeletion/duplication syndromes. From October 2014 to October 2015, 14 235 pregnant women volunteered to participate in the study. Fifteen cases in which chromosomal CNVs were detected by NIPT proceeded to prenatal diagnostic procedures including amniocentesis, G-banded karyotyping and chromosomal microarray analysis (CMA). All such cases were routinely followed up after birth. Among the 14 235 subjects who underwent NIPT, 18 cases were detected with Down syndrome, 4 with trisomy 18, and 2 with trisomy 13, along with 24 cases of CNVs. For the latter, 15 (including 11 cases with microdeletions and 4 cases with microduplications) participated in further prenatal diagnosis. In 13 cases (86.7%), the results of CMA were consistent with those of NIPT. On the other hand, only 7 out of the 15 cases showed a positive result with karyotyping, suggesting a rather high rate of missed diagnosis (46.2%). Of note, karyotyping identified partial inversion of chromosome 9 in one case. As a screening tool, NIPT has a high accuracy for the detection of CNVs. However, as this method is still under improvement, it is more of a reminder than a diagnostic tool with full capability.

  19. Metal allergens of growing significance: epidemiology, immunotoxicology, strategies for testing and prevention.

    Science.gov (United States)

    Forte, Giovanni; Petrucci, Francesco; Bocca, Beatrice

    2008-09-01

    Metal-induced allergic contact dermatitis (ACD) is expressed in a wide range of cutaneous reactions following dermal and systemic exposure to products such as cosmetics and tattoos, detergents, jewellery and piercing, leather tanning, articular prostheses and dental implants. Apart from the well-known significance of nickel in developing ACD, other metals such as aluminium, beryllium, chromium, cobalt, copper, gold, iridium, mercury, palladium, platinum, rhodium and titanium represent emerging causes of skin hypersensitivity. Despite the European Union directives that limit the total nickel content in jewellery alloys, the water-soluble chromium (VI) in cement, and metals banned in cosmetics, the prevalence of metal-induced ACD remains quite high. On this basis, a review of the epidemiology of metal allergens, the types of exposure, the skin penetration, the immune response, and the protein interaction is motivated. Moreover, in vivo and in vitro tests for the identification and potency of skin-sensitizing metals are here reviewed in a risk assessment framework for the protection of consumers' health. Avenues for ACD prevention and therapy such as observance of maximum allowable metal levels, optimization of metallurgic characteristics, efficacy of chelating agents and personal protection are also discussed.

  20. Atomic Action Refinement in Model Based Testing

    NARCIS (Netherlands)

    van der Bijl, H.M.; Rensink, Arend; Tretmans, G.J.

    2007-01-01

    In model based testing (MBT) test cases are derived from a specification of the system that we want to test. In general the specification is more abstract than the implementation. This may result in 1) test cases that are not executable, because their actions are too abstract (the implementation

  1. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  2. Unit root tests based on M estimators

    NARCIS (Netherlands)

    Lucas, André

    1995-01-01

    This paper considers unit root tests based on M estimators. The asymptotic theory for these tests is developed. It is shown how the asymptotic distributions of the tests depend on nuisance parameters and how tests can be constructed that are invariant to these parameters. It is also shown that a

  3. Efficient Regression Testing Based on Test History: An Industrial Evaluation

    OpenAIRE

    Ekelund, Edward Dunn; Engström, Emelie

    2015-01-01

    Due to changes in the development practices at Axis Communications, towards continuous integration, faster regression testing feedback is needed. The current automated regression test suite takes approximately seven hours to run which prevents developers from integrating code changes several times a day as preferred. Therefore we want to implement a highly selective yet accurate regression testing strategy. Traditional code coverage based techniques are not applicable due to the size and comp...

  4. Significance of Intratracheal Instillation Tests for the Screening of Pulmonary Toxicity of Nanomaterials.

    Science.gov (United States)

    Morimoto, Yasuo; Izumi, Hiroto; Yoshiura, Yukiko; Fujisawa, Yuri; Fujita, Katsuhide

    Inhalation tests are the gold standard for estimating the pulmonary toxicity of respirable materials. Intratracheal instillation tests have been used widely, but they yield limited evidence of the harmful effects of respirable materials. We reviewed the effectiveness of intratracheal instillation tests for estimating the hazards of nanomaterials, mainly using research papers featuring intratracheal instillation and inhalation tests centered on a Japanese national project. Compared to inhalation tests, intratracheal instillation tests induced more acute inflammatory responses in the animal lung due to a bolus effect, regardless of the toxicity of the nanomaterials. However, nanomaterials with high toxicity induced persistent inflammation in the chronic phase, whereas nanomaterials with low toxicity induced only transient inflammation. Therefore, in order to estimate the harmful effects of a nanomaterial, an observation period of 3 or 6 months following intratracheal instillation is necessary. Among the endpoints of pulmonary toxicity, neutrophil count and percentage, chemokines for neutrophils and macrophages, and oxidative stress markers are considered most important. These markers show persistent responses in the lung for nanomaterials with high toxicity and transient responses for those with low toxicity. Provided that pulmonary toxicity is evaluated not only in the acute but also in the chronic phase, to avoid the bolus effect of intratracheal instillation, and that inflammation-related factors are used as endpoints, we speculate that intratracheal instillation tests can be useful for screening to identify the hazards of nanomaterials through pulmonary inflammation.

  5. Strain measurement based battery testing

    Science.gov (United States)

    Xu, Jeff Qiang; Steiber, Joe; Wall, Craig M.; Smith, Robert; Ng, Cheuk

    2017-05-23

    A method and system for strain-based estimation of the state of health of a battery, from an initial state to an aged state, is provided. A strain gauge is applied to the battery. A first strain measurement is performed on the battery, using the strain gauge, at a selected charge capacity of the battery and at the initial state of the battery. A second strain measurement is performed on the battery, using the strain gauge, at the selected charge capacity of the battery and at the aged state of the battery. The capacity degradation of the battery is estimated as the difference between the first and second strain measurements divided by the first strain measurement.
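The degradation estimate in the abstract is simple enough to sketch directly. The function name and the example strain readings below are illustrative, not taken from the patent; only the formula (difference between the first and second strain measurements divided by the first) follows the text.

```python
# A sketch of the strain-based capacity-degradation estimate described above.
# The function name and example readings are illustrative, not from the patent.

def capacity_degradation(strain_initial: float, strain_aged: float) -> float:
    """Fractional capacity degradation from two strain readings taken at the
    same selected charge capacity, at the initial and aged battery states."""
    if strain_initial == 0:
        raise ValueError("initial strain reading must be nonzero")
    return (strain_initial - strain_aged) / strain_initial

# Example: the gauge reads 0.0050 at the initial state and 0.0041 when aged.
deg = capacity_degradation(0.0050, 0.0041)
print(f"estimated capacity degradation: {deg:.1%}")  # 18.0%
```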

  6. Peabody Picture Vocabulary Test (PPVT-III): Psychometric properties and significance for application

    Directory of Open Access Journals (Sweden)

    Nataša Bucik

    2003-12-01

    The purpose of this article is to present the content, conceptual structure and methodological steps of the latest revision of the Peabody Picture Vocabulary Test (PPVT-III), which is a highly functional and valuable vocabulary test that has been in use since 1959 in different language and cultural surroundings. Using the PPVT-III as a case, we present the procedure for developing and standardizing such vocabulary tests, as well as for translating and adapting them from one language and cultural milieu to another. We also note the practical use of the PPVT-III for research purposes. No vocabulary tests have so far been developed or adapted for the Slovenian language; the PPVT-III is presented in this context, too.

  7. Minute Impurities Contribute Significantly to Olfactory Receptor Ligand Studies: Tales from Testing the Vibration Theory

    OpenAIRE

    Paoli, M.; Münch, D.; Haase, A.; Skoulakis, E.; Turin, L.; Galizia, C. G.

    2017-01-01

    Several studies have attempted to test the vibrational hypothesis of odorant receptor activation in behavioral and physiological studies using deuterated compounds as odorants. The results have been mixed. Here, we attempted to test how deuterated compounds activate odorant receptors using calcium imaging of the fruit fly antennal lobe. We found specific activation of one area of the antennal lobe corresponding to inputs from a specific receptor. However, upon more detailed analysis, we disco...

  8. The Significance of Aptitude Tests in the Selection of Prospective Music Teachers

    Directory of Open Access Journals (Sweden)

    Erkan Sülün

    2017-06-01

    In Turkey and North Cyprus, music education departments are usually situated in faculties of education. The eligibility requirements are in two stages: applicants who attain the required minimum grade from the Transition to Higher Education Examination can then take an aptitude test. These aptitude tests are designed by each university department to assess applicants' qualifications. The main objective of this study is to examine the content and the structure of the aptitude test, and accordingly to shed light on the correlation between students' aptitude test scores and music and pedagogy course grades. For this purpose, the scores of 29 students who successfully passed the aptitude test in the Music Education Programme at Eastern Mediterranean University are compared with their course grades. This study employs quantitative descriptive research with an inter-relational sub-model. The Pearson correlation coefficient is used in analysing the relationship, and purposive sampling was employed. This study determines both the content and predictive validity of the Music Education Department's aptitude test.

  9. Automation for a base station stability testing

    OpenAIRE

    Punnek, Elvis

    2016-01-01

    This Bachelor's thesis was commissioned by Oy LM Ericsson Ab Oulu. Its aim was to help investigate and create a test automation solution for the stability testing of the LTE base station. The main objective was to create test automation for a predefined test set. This test automation solution had to be created for specific environments and equipment. The work included creating the automation for the test cases and adding them to daily test automation jobs. The key factor...

  10. Significance of a Positive Toxoplasma Immunoglobulin M Test Result in the United States.

    Science.gov (United States)

    Dhakal, Reshika; Gajurel, Kiran; Pomares, Christelle; Talucod, Jeanne; Press, Cynthia J; Montoya, Jose G

    2015-11-01

    A positive Toxoplasma immunoglobulin M (IgM) result is often interpreted as a marker of an acute infection. However, IgM can persist for several years, and Toxoplasma commercial IgM diagnostic test kits can yield a number of false-positive results. For these reasons, a chronic Toxoplasma infection can be erroneously classified as an acute infection, resulting in serious adverse consequences, especially in pregnant women, leading to emotional distress and unnecessary interventions, including termination of pregnancy. Interpretation of Toxoplasma serology at a reference laboratory can help differentiate a recently acquired infection from a chronic infection. Serological test results for 451 patients with positive Toxoplasma IgM and IgG test results obtained at nonreference laboratories (NRLs) that were referred to Palo Alto Medical Foundation Toxoplasma Serology Laboratory (PAMF-TSL) to determine whether the patient was acutely or chronically infected were retrospectively reviewed. PAMF-TSL results established that of the 451 patients, 335 (74%) had a chronic infection, 100 (22%) had an acute infection, and 7 (2%) were not infected, and for 9 (2%), results were indeterminate. Positive Toxoplasma IgM and IgG test results obtained at NRLs cannot accurately distinguish between acute and chronic infections. To do so, testing at reference laboratories is required, as mandated in 1997 in a letter from the Food and Drug Administration (FDA) to clinicians and laboratories in the United States. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  11. Statistically significant contrasts between EMG waveforms revealed using wavelet-based functional ANOVA

    Science.gov (United States)

    McKay, J. Lucas; Welch, Torrence D. J.; Vidakovic, Brani

    2013-01-01

    We developed wavelet-based functional ANOVA (wfANOVA) as a novel approach for comparing neurophysiological signals that are functions of time. Temporal resolution is often sacrificed by analyzing such data in large time bins, increasing statistical power by reducing the number of comparisons. We performed ANOVA in the wavelet domain because differences between curves tend to be represented by a few temporally localized wavelets, which we transformed back to the time domain for visualization. We compared wfANOVA and ANOVA performed in the time domain (tANOVA) on both experimental electromyographic (EMG) signals from responses to perturbation during standing balance across changes in peak perturbation acceleration (3 levels) and velocity (4 levels) and on simulated data with known contrasts. In experimental EMG data, wfANOVA revealed the continuous shape and magnitude of significant differences over time without a priori selection of time bins. However, tANOVA revealed only the largest differences at discontinuous time points, resulting in features with later onsets and shorter durations than those identified using wfANOVA (P < 0.02). Furthermore, wfANOVA required significantly fewer (∼¼×; P < 0.015) significant F tests than tANOVA, resulting in post hoc tests with increased power. In simulated EMG data, wfANOVA identified known contrast curves with a high level of precision (r2 = 0.94 ± 0.08) and performed better than tANOVA across noise levels (P < 0.01). Therefore, wfANOVA may be useful for revealing differences in the shape and magnitude of neurophysiological signals (e.g., EMG, firing rates) across multiple conditions with both high temporal resolution and high statistical power. PMID:23100136
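The wfANOVA pipeline described above (transform each trial to the wavelet domain, test per coefficient, keep significant coefficients, invert for visualization) can be sketched in a few lines, assuming PyWavelets and SciPy are available. The wavelet family (db4), the Bonferroni correction, and the synthetic signals are illustrative choices, not details from the paper.

```python
# A minimal sketch of the wfANOVA idea on synthetic data, assuming PyWavelets
# (pywt) and SciPy. Wavelet family, correction, and signals are illustrative.
import numpy as np
import pywt
from scipy import stats

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
# Two conditions, 20 trials each; condition B adds a localized bump at t=0.5.
A = rng.normal(0, 0.3, (20, 64))
B = rng.normal(0, 0.3, (20, 64)) + np.exp(-((t - 0.5) ** 2) / 0.002)

def dwt_coeffs(x, wavelet="db4"):
    # Full decomposition flattened to one vector, remembering the layout.
    return pywt.coeffs_to_array(pywt.wavedec(x, wavelet, level=3))

cA = np.array([dwt_coeffs(x)[0] for x in A])
cB = np.array([dwt_coeffs(x)[0] for x in B])
_, slices = dwt_coeffs(A[0])

# One F test per wavelet coefficient instead of one per time sample.
F, p = stats.f_oneway(cA, cB)

# Zero non-significant coefficients of the mean contrast (Bonferroni here),
# then transform back to the time domain for visualization.
diff = cB.mean(axis=0) - cA.mean(axis=0)
diff[p > 0.05 / diff.size] = 0.0
contrast = pywt.waverec(
    pywt.array_to_coeffs(diff, slices, output_format="wavedec"), "db4")
print("significant contrast peaks near t =", t[np.argmax(contrast[:64])])
```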

  12. [Changes in the structure and clinical significance of the positive results of pretransfusion testing during the switching from tube test agglutination to gel microcolumn technique].

    Science.gov (United States)

    Petrusić-Kafedzić, Alma; Ivanković, Zdravko; Ekinović, Sabahudin; Hutinović, Aida; Ibrahimagić-Seper, Lejla

    2010-08-01

    To investigate the changes in pretransfusion testing during the switch from the agglutination tube test to the gel test. Clinical significance of positive results has been analyzed in 7667 pretransfusion tests (with 16610 cross-matches) performed by the tube test in 2005-2006, and in 7372 pretransfusion tests (with 17294 cross-matches) performed in 2007-2008 by the gel test. In both analyzed periods antibody detection was positive in 1.3% and cross-matching in 0.3% cases. At least one test was positive in 1.4% of pretransfusions tested by the tube test and in 1.3% by the gel test, with >75% positive results in women. Analyzing cases with positive cross-matching but negative antibody detection, eight of ten such cases found by the tube test were caused by 'cold antibodies' whereas 'warm non-specific antibodies' caused all three cases found by the gel test. The gel test detected a higher proportion of immune antibodies than the tube test (69.8% vs 41.3%, p antibodies. The tube test detected 24 cases of clinically non-significant antibodies, as compared with no cases found by the gel test (p antibodies' more often caused positive cross-matches than antibody detection (42.6% vs. 29.9% by the tube test, 28.9% vs. 18.3% by the gel test). Despite being close in the detection of irregular antibodies (p=0.062), the difference between the tube and gel tests was not significant. 'Non-specific antibodies' were found by both tests more often in women, while clinical departments were of no significance. The gel test proved to be the more suitable technique for pretransfusion testing. The detection of irregular antibodies is recommended as an obligatory part of pretransfusion testing.

  13. The Ideal Base For Patch Testing

    Directory of Open Access Journals (Sweden)

    Ashok Kumar Bajaj

    1984-01-01

    A study was conducted to find a suitable vehicle for patch testing in India. The bases tested were petrolatum, propylene glycol, polyethylene glycol 400, lanolin, olive oil and plastobase. The observations suggest that polyethylene glycol 400 is the most suitable vehicle for patch testing.

  14. The Poisson Margin Test for Normalisation Free Significance Analysis of NGS Data

    Science.gov (United States)

    Kowalczyk, Adam; Bedo, Justin; Conway, Thomas; Beresford-Smith, Bryan

    Motivation: The current methods for the determination of the statistical significance of peaks and regions in NGS data require an explicit normalisation step to compensate for (global or local) imbalances in the sizes of sequenced and mapped libraries. There are no canonical methods for performing such compensations, hence a number of different procedures serving this goal in different ways can be found in the literature. Unfortunately, the normalisation has a significant impact on the final results. Different methods yield very different numbers of detected "significant peaks" even in the simplest scenario of ChIP-Seq experiments which compare the enrichment in a single sample relative to a matching control. This becomes an even more acute issue in the more general case of the comparison of multiple samples, where a number of arbitrary design choices will be required in the data analysis stage, each option resulting in possibly (significantly) different outcomes.
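The abstract does not reproduce the Poisson Margin Test itself, so the sketch below shows a standard normalisation-free device in the same spirit: conditioning on the total count turns a two-Poisson comparison into a binomial test, so the sequenced library sizes enter only through the null proportion rather than through an explicit normalisation step. The function name and the counts are illustrative assumptions.

```python
# Hedged sketch of a normalisation-free enrichment test in the spirit of the
# abstract (not the paper's exact Poisson Margin Test). Conditioning on the
# total count turns the two-Poisson comparison into a binomial test.
from scipy.stats import binomtest

def conditional_poisson_test(k_sample, k_control, lib_sample, lib_control):
    """One-sided p-value for enrichment of k_sample over k_control.
    Library sizes enter only via the null proportion, so no explicit
    normalisation of the counts is needed."""
    p0 = lib_sample / (lib_sample + lib_control)
    return binomtest(k_sample, k_sample + k_control, p0,
                     alternative="greater").pvalue

# Example: 50 reads at a peak in the ChIP sample vs 12 in a control library
# of half the sequencing depth.
pval = conditional_poisson_test(50, 12, 2_000_000, 1_000_000)
print(f"p = {pval:.3g}")
```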

  15. 77 FR 19861 - Certain Polybrominated Diphenylethers; Significant New Use Rule and Test Rule

    Science.gov (United States)

    2012-04-02

    ... article processors' exemption from the requirement to submit a SNUN, as described in this proposed rule... importers' and article processors' exemption from the requirement to conduct testing of c-PBDEs, as... include, among others, importers and processors of a chemical substance or mixture as part of an article...

  16. Afrika Statistika ISSN 2316-090X A Bayesian significance test of ...

    African Journals Online (AJOL)

    Department of Probability and statistics, USTHB. PO Box 32 EL Alia 16111 Bab Ezzouar, ... of the generalized likelihood ratio test to detect a change in binomial probability and in location of an exponential family of ..... Inference about the change-point in a sequence of random variables. Biometrika. 57, 1-17. Page, E.S. ...

  17. Frequency and Prognostic Significance of Abnormal Liver Function Tests in Patients With Cardiogenic Shock

    DEFF Research Database (Denmark)

    Jäntti, Toni; Tarvasmäki, Tuukka; Harjola, Veli Pekka

    2017-01-01

    Cardiogenic shock (CS) is a cardiac emergency often leading to multiple organ failure and death. Assessing organ dysfunction and appropriate risk stratification are central for the optimal management of these patients. The purpose of this study was to assess the prevalence of abnormal liver function tests (LFTs), as well as early changes of LFTs and their impact on outcome in CS. We measured LFTs in 178 patients in CS from serial blood samples taken at 0 hours, 12 hours, and 24 hours. The associations of LFT abnormalities and their early changes with all-cause 90-day mortality were estimated using Fisher's exact test and Cox proportional hazards regression analysis. Baseline alanine aminotransferase (ALT) was abnormal in 58% of the patients, more frequently in nonsurvivors. Abnormalities in other LFTs analyzed (alkaline phosphatase, gamma-glutamyl transferase, and total bilirubin) were
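As an illustration of the first analysis the abstract names, a Fisher's exact test on a 2x2 table of baseline ALT status against 90-day mortality might look as follows. The counts are invented for the example and are not the study's data.

```python
# Sketch of the Fisher's exact test step described above; the 2x2 counts are
# illustrative, not taken from the study.
from scipy.stats import fisher_exact

#                 died   survived
table = [[40, 63],   # abnormal baseline ALT
         [20, 55]]   # normal baseline ALT
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p:.3f}")
```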

  18. The diagnostic significance of the direct antiglobulin test (DAT) in anemic dogs

    NARCIS (Netherlands)

    Slappendel, R.J.

    1979-01-01

    The direct antiglobulin test (DAT) was positive in 134 (36.1%) of 371 anemic dogs with internal diseases. Four principal types of reaction were recognized: IgG alone in 15 (11.2%), IgG + C′ in 41 (30.6%), C′ alone in 74 (55.2%) and IgM + C′ in 2 (1.5%). Rarely, IgM and/or IgA reactions occurred in

  19. Minute Impurities Contribute Significantly to Olfactory Receptor Ligand Studies: Tales from Testing the Vibration Theory.

    Science.gov (United States)

    Paoli, M; Münch, D; Haase, A; Skoulakis, E; Turin, L; Galizia, C G

    2017-01-01

    Several studies have attempted to test the vibrational hypothesis of odorant receptor activation in behavioral and physiological studies using deuterated compounds as odorants. The results have been mixed. Here, we attempted to test how deuterated compounds activate odorant receptors using calcium imaging of the fruit fly antennal lobe. We found specific activation of one area of the antennal lobe corresponding to inputs from a specific receptor. However, upon more detailed analysis, we discovered that an impurity of 0.0006% ethyl acetate in a chemical sample of benzaldehyde-d5 was entirely responsible for a sizable odorant-evoked response in Drosophila melanogaster olfactory receptor cells expressing dOr42b. Without gas chromatographic purification within the experimental setup, this impurity would have created a difference in the responses of deuterated and nondeuterated benzaldehyde, suggesting that dOr42b is a vibration-sensitive receptor, which we show here not to be the case. Our results point to a broad problem in the literature on the use of non-GC-pure compounds to test receptor selectivity, and we suggest how the limitations can be overcome in future studies.

  20. Avoidance bio-assays may help to test the ecological significance of soil pollution

    Energy Technology Data Exchange (ETDEWEB)

    Martinez Aldaya, Maite [Universidad de Navarra, Facultad de Ciencias, Departamento de Zoologia y Ecologia, 31080 Pamplona (Spain); Lors, Christine [Centre National de Recherche sur les Sites et Sols Pollues, 930 Boulevard Lahure, BP 537, 59505 Douai Cedex (France); Salmon, Sandrine [Museum National d' Histoire Naturelle, CNRS UMR 5176, 4 avenue du Petit-Chateau, 91800 Brunoy (France); Ponge, Jean-Francois [Museum National d' Histoire Naturelle, CNRS UMR 5176, 4 avenue du Petit-Chateau, 91800 Brunoy (France)]. E-mail: jean-francois.ponge@wanadoo.fr

    2006-03-15

    We measured the short-term (100 min) avoidance of a soil heavily polluted by hydrocarbons by the soil springtail Folsomia candida, at six rates of dilution in a control, unpolluted soil. We compared the results with those of long-term (40-day) population tests. Five strains were compared, of varying geographical and ecological origin. When pure, the polluted soil was lethal in the long term and avoided in the short term by all strains. Avoidance tests, but not population tests, were able to discriminate between strains. Avoidance thresholds differed among strains. Two ecological consequences of the results were discussed: (i) toxic compounds may kill soil animals or deprive them of food, resulting in death of populations; (ii) pollution spots can be locally deprived of fauna because of escape movements of soil animals. Advantages and limitations of the method are listed, together with proposals for their wider use in soil ecology and ecotoxicology. - Polluted soils are avoided by soil animals, a phenomenon which can be used as a cheap, sensitive tool for the early detection of environmental risk.

  1. Analysis of pumping tests: Significance of well diameter, partial penetration, and noise

    Science.gov (United States)

    Heidari, M.; Ghiassi, K.; Mehnert, E.

    1999-01-01

    The nonlinear least squares (NLS) method was applied to pumping and recovery aquifer test data in confined and unconfined aquifers with finite-diameter and partially penetrating pumping wells, and with partially penetrating piezometers or observation wells. It was demonstrated that noiseless and moderately noisy drawdown data from observation points located less than two saturated thicknesses of the aquifer from the pumping well produced an exact or acceptable set of parameters when the diameter of the pumping well was included in the analysis. The accuracy of the estimated parameters, particularly that of specific storage, decreased with increases in the noise level in the observed drawdown data. With consideration of the well radii, the noiseless drawdown data from the pumping well in an unconfined aquifer produced good estimates of horizontal and vertical hydraulic conductivities and specific yield, but the estimated specific storage was unacceptable. When noisy data from the pumping well were used, an acceptable set of parameters was not obtained. Further experiments with noisy drawdown data in an unconfined aquifer revealed that when the well diameter was included in the analysis, hydraulic conductivity, specific yield and vertical hydraulic conductivity may be estimated rather effectively from piezometers located over a range of distances from the pumping well. Estimation of specific storage became less reliable for piezometers located at distances greater than the initial saturated thickness of the aquifer. Application of the NLS method to field pumping and recovery data from a confined aquifer showed that the estimated parameters from the two tests were in good agreement only when the well diameter was included in the analysis. Without consideration of well radii, the estimated values of hydraulic conductivity from the pumping and recovery tests were off by a factor of four.
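A minimal sketch of the NLS idea is to fit the classical Theis solution (line-source, fully penetrating well) to drawdown data with SciPy. The paper's models additionally account for finite well radius and partial penetration, which this sketch omits, and all parameter values below are illustrative.

```python
# Sketch of nonlinear least squares fitting of pumping-test drawdowns,
# using the simpler Theis model rather than the paper's finite-radius,
# partially penetrating models. All numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1  # exponential integral E1 = Theis well function

Q, r = 0.01, 30.0  # pumping rate (m^3/s) and observation distance (m)

def theis(t, T, S):
    """Drawdown s(t) at distance r for transmissivity T and storativity S."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Synthetic "observed" drawdowns from known parameters, with mild noise.
rng = np.random.default_rng(1)
t_obs = np.logspace(2, 5, 30)                  # 100 s to about 1 day
s_obs = theis(t_obs, 1e-3, 2e-4) + rng.normal(0, 0.005, t_obs.size)

# Nonlinear least squares; bounds keep the solver at physical values.
(T_hat, S_hat), _ = curve_fit(theis, t_obs, s_obs, p0=(1e-4, 1e-5),
                              bounds=([1e-8, 1e-8], [1.0, 1.0]))
print(f"T = {T_hat:.2e} m^2/s, S = {S_hat:.2e}")
```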

  2. An improved method to set significance thresholds for β diversity testing in microbial community comparisons

    DEFF Research Database (Denmark)

    Gülay, Arda; Smets, Barth F.

    2015-01-01

    , including those associated with the process of subsampling. These components exist for any proposed β diversity measurement procedure. Further, we introduce a strategy to set significance thresholds for β diversity of any group of microbial samples using rarefaction, invoking the notion of a meta-community...

  3. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    Science.gov (United States)

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  4. Cryptococcal Antigen Test Revisited: Significance for Cryptococcal Meningitis Therapy Monitoring in a Tertiary Chinese Hospital

    Science.gov (United States)

    Lu, Hongzhou; Zhou, Yingjie; Yin, Youkuan; Pan, Xiaozhang; Weng, Xinhua

    2005-01-01

    For a total of 29 non-human immunodeficiency virus 1 cryptococcal meningitis cases, titer changes in the latex agglutination test before and after therapy were reviewed along with clinical manifestations, laboratory findings, and therapy regimens. The cryptococcal antigen titer decreased for every case after therapy and was correlated to fungal clearance as defined by fungus smear and/or culture. However, cryptococcal antigen can remain at low titers for long periods of time after therapy, even when fungus smears and/or cultures become negative. PMID:15956440

  5. The ECG component of thallium-201 exercise testing significantly alters patient management

    Energy Technology Data Exchange (ETDEWEB)

    Deague, J.; Salehi, N.; Grigg, L.; Lichtenstein, M.; Better, N. [Royal Melbourne Hospital, Parkville, VIC (Australia). Departments Nuclear Medicine and Cardiology

    1998-02-01

    Thallium exercise testing (Tlex) offers superior sensitivity and specificity to exercise electrocardiography (ECG), but the value of the ECG data in Tlex remains poorly studied. While a normal Tlex is associated with an excellent prognosis, patients with a positive Tlex have a higher cardiac event rate. We aimed to see if a negative ECG component of the Tlex (ECGTl) was associated with an improved outcome compared with a positive ECGTl, in those patients with a reversible Tlex defect. We followed 100 consecutive patients retrospectively with a reversible defect on Tlex (50 with negative and 50 with positive ECGTl) for 12 months. The ECG was reviewed as positive (1 mm ST depression 0.08 seconds after the J point, or > 2 mm if on digoxin or with prior ECG changes), negative, equivocal or uninterpretable. We excluded patients with pharmacological testing, and those with equivocal or uninterpretable ECGs. Over the ensuing 12 months, no patient with a negative ECGTl was admitted with unstable angina or myocardial infarction, or suffered cardiac death. It is concluded that in patients with reversible defects on Tlex, a negative ECGTl is associated with a low incidence of cardiac events and a decreased incidence of cardiac intervention.

  6. Combination antimicrobial susceptibility testing of Burkholderia cepacia complex: significance of species.

    Science.gov (United States)

    Abbott, Felicity K; Milne, Kathleen E N; Stead, David A; Gould, Ian M

    2016-11-01

    The Burkholderia cepacia complex (Bcc) is notorious for the life-threatening pulmonary infections it causes in patients with cystic fibrosis. The multidrug-resistant nature of Bcc and differing infective Bcc species make the design of appropriate treatment regimens challenging. Previous synergy studies have failed to take account of the species of Bcc isolates. Etest methodology was used to facilitate minimum inhibitory concentration (MIC) and antimicrobial combination testing on 258 isolates of Bcc, identified to species level by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF/MS). The most active antimicrobials were trimethoprim/sulphamethoxazole, doxycycline and minocycline (52.5%, 46.4% and 45.9% of isolates susceptible, respectively). Synergy was observed in 9.2% of the 1799 combinations tested; the most common synergistic combinations were tobramycin + ceftazidime, meropenem + tobramycin and levofloxacin + piperacillin/tazobactam (35.4%, 32.3% and 22.2% synergy, respectively). Antimicrobial susceptibility analysis revealed differences between Burkholderia cenocepacia and Burkholderia multivorans. Disparity in clinical outcome during infection with these two micro-organisms necessitates further investigation into the clinical outcomes of treatment regimens in light of species identification and in vitro antimicrobial susceptibility studies. Copyright © 2016 Elsevier B.V. and International Society of Chemotherapy. All rights reserved.
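The abstract does not state the synergy criterion it applied; a widely used index for MIC combination data of this kind is the fractional inhibitory concentration index (FICI), sketched here with illustrative numbers (values ≤ 0.5 are conventionally read as synergy, > 4 as antagonism).

```python
# Sketch of the FICI, a common criterion for two-drug MIC combination data.
# The criterion and the example MICs are illustrative, not from the study.

def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration index for a two-drug combination."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(index):
    if index <= 0.5:
        return "synergy"
    if index > 4.0:
        return "antagonism"
    return "no interaction"

# Example: tobramycin MIC falls from 16 to 2 mg/L and ceftazidime from
# 32 to 4 mg/L when the two are combined.
idx = fici(16, 32, 2, 4)
print(idx, interpret(idx))  # 0.25 synergy
```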

  7. Significant effects of mild endogenous hormonal changes in humans: considerations for low-dose testing.

    OpenAIRE

    Brucker-Davis, F; Thayer, K; Colborn, T

    2001-01-01

    We review the significant and adverse health effects that can occur with relatively small endogenous hormonal changes in pubertal and adult humans. We discuss the effects of hormonal changes that occur within normal physiologic ranges--such as the rising levels of estrogen in peripuberty, which cause growth spurts at low levels and then the fusion of epiphyses at higher levels--and the hormonal variations during the menstrual cycle and their relation to genital phenotypic changes and intercur...

  8. Simulation-based Testing of Control Software

    Energy Technology Data Exchange (ETDEWEB)

    Ozmen, Ozgur [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Olama, Mohammed M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-02-10

    It is impossible to adequately test complex software by examining its operation in a physical prototype of the monitored system. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model-based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model-based testing environment; specifically, we show that a complete software stack - including operating system and application software - can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model-based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed in the MODELICA programming language using the Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.

  9. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  10. Significance of tests and properties of concrete and concrete-making materials

    CERN Document Server

    Pielert, James H

    2006-01-01

    Reflects a decade of technological changes in concrete industry! The newest edition of this popular ASTM publication reflects the latest technology in concrete and concrete-making materials. Six sections cover: (1) General information on the nature of concrete, sampling, variability, and testing laboratories. A new chapter deals with modeling cement and concrete properties. (2) Properties of freshly mixed concrete. (3) Properties of hardened concrete. (4) Concrete aggregates—this section has been revised and the chapters are presented in the order that most concerns concrete users: grading, density, soundness, degradation resistance, petrographic examination, reactivity, and thermal properties. (5) Materials other than aggregates—the chapter on curing materials now reflects the current technology of materials applied to new concrete surfaces. The chapter on mineral admixtures has been separated into two chapters: supplementary cementitious materials and ground slag. (6) Specialized concretes—contains a ...

  11. A significant-loophole-free test of Bell's theorem with entangled photons

    Science.gov (United States)

    Giustina, Marissa; Versteegh, Marijn A. M.; Wengerowsky, Sören; Handsteiner, Johannes; Hochrainer, Armin; Phelan, Kevin; Steinlechner, Fabian; Kofler, Johannes; Larsson, Jan-Åke; Abellán, Carlos; Amaya, Waldimar; Mitchell, Morgan W.; Beyer, Jörn; Gerrits, Thomas; Lita, Adriana E.; Shalm, Lynden K.; Nam, Sae Woo; Scheidl, Thomas; Ursin, Rupert; Wittmann, Bernhard; Zeilinger, Anton

    2017-10-01

    John Bell's theorem of 1964 states that local elements of physical reality, existing independent of measurement, are inconsistent with the predictions of quantum mechanics (Bell, J. S. (1964), Physics (College Park, Md)). Specifically, correlations between measurement results from distant entangled systems would be smaller than predicted by quantum physics. This is expressed in Bell's inequalities. Employing modifications of Bell's inequalities, many experiments have been performed that convincingly support the quantum predictions. Yet, all experiments rely on assumptions, which provide loopholes for a local realist explanation of the measurement. Here we report an experiment with polarization-entangled photons that simultaneously closes the most significant of these loopholes. We used a highly efficient source of entangled photons, distributed the photons over a distance of 58.5 meters, and implemented rapid random setting generation and high-efficiency detection to observe a violation of a Bell inequality with high statistical significance. The probability of our results occurring by chance under local realism is less than 3.74×10^-31, corresponding to an 11.5 standard deviation effect.

  12. The Sensitivity of Adolescent School-Based Hearing Screens Is Significantly Improved by Adding High Frequencies

    Science.gov (United States)

    Sekhar, Deepa L.; Zalewski, Thomas R.; Beiler, Jessica S.; Czarnecki, Beth; Barr, Ashley L.; King, Tonya S.; Paul, Ian M.

    2016-01-01

    High frequency hearing loss (HFHL), often related to hazardous noise, affects one in six U.S. adolescents. Yet, only 20 states include school-based hearing screens for adolescents. Only six states test multiple high frequencies. Study objectives were to (1) compare the sensitivity of state school-based hearing screens for adolescents to gold…

  13. More powerful significant testing for time course gene expression data using functional principal component analysis approaches.

    Science.gov (United States)

    Wu, Shuang; Wu, Hulin

    2013-01-16

    One of the fundamental problems in time course gene expression data analysis is to identify genes associated with a biological process or a particular stimulus of interest, like a treatment or virus infection. Most of the existing methods for this problem are designed for data with longitudinal replicates. But in reality, many time course gene experiments have no replicates or only have a small number of independent replicates. We focus on the case without replicates and propose a new method for identifying differentially expressed genes by incorporating the functional principal component analysis (FPCA) into a hypothesis testing framework. The data-driven eigenfunctions allow a flexible and parsimonious representation of time course gene expression trajectories, leaving more degrees of freedom for the inference compared to that using a prespecified basis. Moreover, the information of all genes is borrowed for individual gene inferences. The proposed approach turns out to be more powerful in identifying time course differentially expressed genes compared to the existing methods. The improved performance is demonstrated through simulation studies and a real data application to the Saccharomyces cerevisiae cell cycle data.
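The framework above can be caricatured in a few lines. The sketch below is ours, not the authors' FPCA method: it uses the leading right singular vectors of the gene-by-time matrix as data-driven basis functions (a discrete PCA stand-in for functional PCA) and applies a per-gene F-test of a flat trajectory against the basis fit. The function name `timecourse_f_tests` is illustrative.

```python
import numpy as np
from scipy import stats

def timecourse_f_tests(expr, n_components=3):
    """Per-gene F-test: flat trajectory (null) vs. a trajectory spanned by
    data-driven eigenfunctions estimated jointly from all genes.

    expr: array of shape (genes, time points), one expression value per time.
    """
    G, T = expr.shape
    centered = expr - expr.mean(axis=1, keepdims=True)
    # "Eigenfunctions" here are the leading right singular vectors of the
    # gene-by-time matrix -- a simple discrete stand-in for FPCA.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components].T                # shape (T, n_components)
    df1, df2 = n_components, T - n_components - 1
    pvals = np.empty(G)
    for g in range(G):
        yg = centered[g]
        coef, *_ = np.linalg.lstsq(basis, yg, rcond=None)
        rss1 = np.sum((yg - basis @ coef) ** 2)  # fit with eigenbasis
        rss0 = np.sum(yg ** 2)                   # null: gene mean only
        f = ((rss0 - rss1) / df1) / (rss1 / df2)
        pvals[g] = stats.f.sf(f, df1, df2)
    return pvals
```

Genes whose trajectories load on the shared eigenfunctions get small p-values, while flat noisy genes do not; the real method additionally handles smoothing and borrowing of information across genes, which this sketch omits.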

  14. Significance of repeated exercise testing with thallium-201 scanning in asymptomatic diabetic males

    International Nuclear Information System (INIS)

    Rubler, S.; Fisher, V.J.

    1985-01-01

    This study was conducted in asymptomatic middle-aged male subjects with diabetes mellitus to detect latent cardiac disease using noninvasive techniques. One group of 38 diabetic males (mean age 50.5 +/- 10.2 years) and a group of 15 normal males (mean age 46.9 +/- 10.0 years) participated in the initial trial; 13 diabetic patients and 7 control subjects were restudied 1-2 years later. Maximal treadmill exercise with a Bruce protocol and myocardial scintigraphy with thallium-201 (201Tl) were used. On both the initial examination and retesting, diabetic subjects achieved a lower maximal heart rate and a shorter duration of exercise than control subjects. Abnormal electrocardiographic changes, thallium defects, or both were observed in 23/38 diabetic males (60.5%) in the first study, whereas only one 65-year-old control subject had such findings. On retesting, the control subjects had no abnormalities, while 76.9% of diabetic subjects had either 201Tl defects or ECG changes. We conclude that although none of the diabetic males had any clinical evidence or symptoms of heart disease, this high-risk group demonstrated abnormalities on exercise testing that merit careful subsequent evaluation and follow-up; such testing could be an effective method of detecting early cardiac disease.

  15. Depressive status explains a significant amount of the variance in COPD assessment test (CAT) scores.

    Science.gov (United States)

    Miravitlles, Marc; Molina, Jesús; Quintano, José Antonio; Campuzano, Anna; Pérez, Joselín; Roncero, Carlos

    2018-01-01

    COPD assessment test (CAT) is a short, easy-to-complete health status tool that has been incorporated into the multidimensional assessment of COPD in order to guide therapy; therefore, it is important to understand the factors determining CAT scores. This is a post hoc analysis of a cross-sectional, observational study conducted in respiratory medicine departments and primary care centers in Spain with the aim of identifying the factors determining CAT scores, focusing particularly on cognitive status measured by the Mini-Mental State Examination (MMSE) and levels of depression measured by the short Beck Depression Inventory (BDI). A total of 684 COPD patients were analyzed; 84.1% were men, the mean age of patients was 68.7 years, and the mean forced expiratory volume in 1 second (%) was 55.1%. Mean CAT score was 21.8. CAT scores correlated with the MMSE score (Pearson's coefficient r = -0.371) and the BDI (r = 0.620), both p < 0.001. A multivariate model including the clinical variables was significantly associated with CAT scores and explained 45% of the variability. However, a model including only MMSE and BDI scores explained up to 40%, and BDI alone explained 38% of the CAT variance. CAT scores are associated with clinical variables of severity of COPD. However, cognitive status and, in particular, the level of depression explain a larger percentage of the variance in the CAT scores than the usual COPD clinical severity variables.

  16. Kernel-based tests for joint independence

    DEFF Research Database (Denmark)

    Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard

    2018-01-01

    We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test...
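A two-variable special case of this idea is easy to sketch. The code below is an illustrative HSIC permutation test with a Gaussian kernel, not the authors' dHSIC implementation (it handles only d = 2, and the kernel bandwidth is fixed rather than tuned); all function names are ours.

```python
import numpy as np

def gaussian_gram(x, sigma=1.0):
    """Gram matrix of the Gaussian (RBF) kernel, a characteristic kernel."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(K, L):
    """Biased empirical HSIC: (1/n^2) * trace(K H L H), H = centering matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2

def hsic_permutation_test(x, y, n_perm=200, seed=0):
    """Permutation test of independence between 1-D samples x and y.

    Permuting y breaks any dependence on x, which simulates the null
    distribution of the HSIC statistic.
    """
    rng = np.random.default_rng(seed)
    K, L = gaussian_gram(x), gaussian_gram(y)
    stat = hsic(K, L)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(len(y))
        if hsic(K, L[np.ix_(p, p)]) >= stat:
            count += 1
    return stat, (1 + count) / (1 + n_perm)
```

For dependent data the observed statistic lands in the upper tail of the permutation distribution and the p-value is small; the paper's Gamma approximation replaces the permutation loop with a fitted null distribution.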

  17. Methodology for testing and validating knowledge bases

    Science.gov (United States)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  18. Designing a VOIP Based Language Test

    Science.gov (United States)

    Garcia Laborda, Jesus; Magal Royo, Teresa; Otero de Juan, Nuria; Gimenez Lopez, Jose L.

    2015-01-01

    Assessing speaking is one of the most difficult tasks in computer based language testing. Many countries all over the world face the need to implement standardized language tests where speaking tasks are commonly included. However, a number of problems make them rather impractical such as the costs, the personnel involved, the length of time for…

  19. A Block Compressive Sensing Based Scalable Encryption Framework for Protecting Significant Image Regions

    Science.gov (United States)

    Zhang, Yushu; Zhou, Jiantao; Chen, Fei; Zhang, Leo Yu; Xiao, Di; Chen, Bin; Liao, Xiaofeng

    The existing Block Compressive Sensing (BCS) based image ciphers adopted the same sampling rate for all blocks, which may lead to the undesirable result that, after subsampling, significant blocks lose some more-useful information while insignificant blocks still retain some less-useful information. Motivated by this observation, we propose a scalable encryption framework (SEF) based on BCS together with a Sobel Edge Detector and Cascade Chaotic Maps. Our work is firstly dedicated to the design of two new fusion techniques, chaos-based structurally random matrices and chaos-based random convolution and subsampling. The basic idea is to divide an image into blocks of equal size and then diagnose their respective significance with the help of the Sobel Edge Detector. For significant block encryption, the chaos-based structurally random matrix is applied to significant blocks, whereas chaos-based random convolution and subsampling are responsible for the remaining insignificant ones. In comparison with the BCS based image ciphers, the SEF applies lightweight subsampling and severe-sensitivity encryption to the significant blocks and severe subsampling and lightweight robustness-oriented encryption to the insignificant ones in parallel, thus better protecting significant image regions.
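The block-significance diagnosis step can be illustrated independently of the encryption machinery. The sketch below, with helper names of our own invention, ranks image blocks by Sobel gradient energy, assuming (as an illustration only) that the top fraction of blocks would receive the "significant block" treatment described above.

```python
import numpy as np

# Standard 3x3 Sobel kernel for horizontal gradients; its transpose
# gives the vertical-gradient kernel.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def conv2_valid(img, k):
    """Minimal 2-D 'valid' correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:h - 2 + i, j:w - 2 + j]
    return out

def significant_blocks(img, block=8, frac=0.5):
    """Return the set of top-left block coordinates whose Sobel gradient
    energy is in the top `frac` of all blocks ('significant' blocks)."""
    gx = conv2_valid(img, SOBEL_X)
    gy = conv2_valid(img, SOBEL_X.T)
    energy = np.pad(np.hypot(gx, gy), 1)      # pad back to img shape
    h, w = img.shape
    scores = {(r, c): energy[r:r + block, c:c + block].sum()
              for r in range(0, h, block) for c in range(0, w, block)}
    cutoff = np.quantile(list(scores.values()), 1 - frac)
    return {b for b, s in scores.items() if s >= cutoff}
```

Blocks containing edges accumulate large gradient energy and are flagged as significant; smooth background blocks are not. The real scheme then assigns the two chaos-based sampling operators accordingly.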

  20. Precursor Analysis for Flight- and Ground-Based Anomaly Risk Significance Determination

    Science.gov (United States)

    Groen, Frank

    2010-01-01

    This slide presentation reviews the precursor analysis for flight and ground based anomaly risk significance. It includes information on accident precursor analysis, real models vs. models, and probabilistic analysis.

  1. Using the Coefficient of Determination "R"[superscript 2] to Test the Significance of Multiple Linear Regression

    Science.gov (United States)

    Quinino, Roberto C.; Reis, Edna A.; Bessegato, Lupercio F.

    2013-01-01

    This article proposes the use of the coefficient of determination as a statistic for hypothesis testing in multiple linear regression based on distributions acquired by beta sampling. (Contains 3 figures.)
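The connection between R^2 and overall significance can be sketched numerically. The following is a minimal illustration, not the beta-sampling approach the article proposes: it uses the classical identity F = (R^2/k) / ((1-R^2)/(n-k-1)) between the coefficient of determination and the overall F-statistic. The function name `r2_f_test` is ours.

```python
import numpy as np
from scipy import stats

def r2_f_test(y, X):
    """Overall F-test for multiple linear regression via R^2.

    Tests H0: all slope coefficients are zero, using
    F = (R^2 / k) / ((1 - R^2) / (n - k - 1)).
    """
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    r2 = 1.0 - (resid @ resid) / tss
    f = (r2 / k) / ((1.0 - r2) / (n - k - 1))
    p = stats.f.sf(f, k, n - k - 1)
    return r2, f, p

# Illustrative data: only the first of three predictors matters.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = 2.0 * X[:, 0] + rng.normal(size=50)
r2, f, p = r2_f_test(y, X)
```

Under the null, R^2 follows a Beta(k/2, (n-k-1)/2) distribution, which is the distributional fact the article's test builds on; the F form above is an equivalent restatement.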

  2. Risk based surveillance test interval optimization

    International Nuclear Information System (INIS)

    Cepin, M.; Mavko, B.

    1995-01-01

    The first step towards risk-based regulation is to determine the optimal surveillance test intervals for the safety equipment that is tested during nuclear power plant operation. In this paper we present the process of surveillance test interval optimization from our perspective. It consists of three levels: component level, system level and plant level. It is based on the results of the Probabilistic Safety Assessment and is focused on minimizing risk. At the component and system levels the risk measure is the component or system mean unavailability, respectively. At the plant level the risk measure is the core damage frequency. (author)

  3. Genetic testing in benign familial epilepsies of the first year of life: clinical and diagnostic significance.

    Science.gov (United States)

    Zara, Federico; Specchio, Nicola; Striano, Pasquale; Robbiano, Angela; Gennaro, Elena; Paravidino, Roberta; Vanni, Nicola; Beccaria, Francesca; Capovilla, Giuseppe; Bianchi, Amedeo; Caffi, Lorella; Cardilli, Viviana; Darra, Francesca; Bernardina, Bernardo Dalla; Fusco, Lucia; Gaggero, Roberto; Giordano, Lucio; Guerrini, Renzo; Incorpora, Gemma; Mastrangelo, Massimo; Spaccini, Luigina; Laverda, Anna Maria; Vecchi, Marilena; Vanadia, Francesca; Veggiotti, Pierangelo; Viri, Maurizio; Occhi, Guya; Budetta, Mauro; Taglialatela, Maurizio; Coviello, Domenico A; Vigevano, Federico; Minetti, Carlo

    2013-03-01

    role of K-channel genes beyond the typical neonatal epilepsies. The identification of a novel SCN2A mutation in a family with infantile seizures with onset between 6 and 8 months provides further confirmation that this gene is not specifically associated with BFNIS and is also involved in families with a delayed age of onset. Our data indicate that PRRT2 mutations are clustered in families with BFIS. Paroxysmal kinesigenic dyskinesia emerges as a distinctive feature of PRRT2 families, although uncommon in our series. We showed that the age of onset of seizures is significantly correlated with underlying genetics, as about 90% of the typical BFNS families are linked to KCNQ2 compared to only 3% of the BFIS families, for which PRRT2 represents the major gene. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.

  4. Who is more skilful? Doping and its implication on the validity, morality and significance of the sporting test

    DEFF Research Database (Denmark)

    Christiansen, Ask Vest; Møller, Rasmus Bysted

    2016-01-01

    In this article, we explore if and in what ways doping can be regarded as a challenge to the validity, morality and significance of the sporting test. We start out by examining Kalevi Heinilä’s analysis of the logic of elite sport, which shows how the ‘spiral of competition’ leads to the use of ‘dubious means’. As a supplement to Heinilä, we revisit American sports historian John Hoberman’s writings on sport and technology. Then we discuss what function equality and fairness have in sport and what separates legitimate from illegitimate ways of enhancing performance. We proceed by discussing the line of argumentation set forth by philosopher Torbjörn Tännsjö on how our admiration of sporting superiority based on natural talent or ‘birth luck’ is immoral. We analyse his argument in favour of eliminating the significance of meritless luck in sport by lifting the ban on doping and argue that its...

  5. Changes in the structure and clinical significance of the positive results of pretransfusion testing during the switching from tube test agglutination to gel microcolumn technique

    Directory of Open Access Journals (Sweden)

    Petrušić-Kafedžić Alma

    2010-08-01

    Full Text Available Aim To investigate the changes in pretransfusion testing during the switch from the agglutination tube test to the gel microcolumn test. Methods The clinical significance of positive results was analyzed in 7667 pretransfusion tests (with 16610 cross-matches) performed by the tube test in 2005-2006, and in 7372 pretransfusion tests (with 17294 cross-matches) performed by the gel test in 2007-2008. Results In both analyzed periods antibody detection was positive in 1.3% of cases and cross-matching in 0.3%. At least one test was positive in 1.4% of pretransfusion tests performed by the tube test and in 1.3% by the gel test, with >75% of positive results occurring in women. Among cases with positive cross-matching but negative antibody detection, eight of ten such cases found by the tube test were caused by ‘cold antibodies’, whereas ‘warm non-specific antibodies’ caused all three cases found by the gel test. The gel test detected a higher proportion of immune antibodies than the tube test (69.8% vs 41.3%, p<0.001), with a two-fold increase in anti-K and Rh antibodies. The tube test detected 24 cases of clinically non-significant antibodies, compared with no cases found by the gel test (p<0.001). ‘Non-specific antibodies’ caused positive cross-matches more often than positive antibody detection (42.6% vs. 29.9% by the tube test, 28.9% vs. 18.3% by the gel test). Although the two techniques were close in the detection of irregular antibodies, the difference between the tube and gel test was not significant (p=0.062). ‘Non-specific antibodies’ were found more often in women by both tests, while clinical department was of no significance. Conclusion The gel test has proved to be the more suitable technique for pretransfusion testing. The detection of irregular antibodies is recommended as an obligatory part of pretransfusion testing.

  6. Benford's law first significant digit and distribution distances for testing the reliability of financial reports in developing countries

    Science.gov (United States)

    Shi, Jing; Ausloos, Marcel; Zhu, Tingting

    2018-02-01

    We discuss a common suspicion about reported financial data in 10 industrial sectors of the 6 so-called "main developing countries" over the time interval [2000-2014]. These data are examined through Benford's law first significant digit test and through distribution distance tests. It is shown that several visually anomalous data have to be removed a priori. Thereafter, the distributions follow the first significant digit law much better, indicating the usefulness of a Benford's law test from the research starting line. The same holds true for the distance tests. A few outliers are pointed out.
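A first-digit Benford check of this kind is straightforward to reproduce. The sketch below (function names ours) compares observed first-significant-digit counts against the Benford frequencies log10(1 + 1/d) with a chi-square goodness-of-fit test; the paper additionally uses distribution distance measures, which are omitted here.

```python
import math
from collections import Counter
from scipy import stats

def first_digit(x):
    """First significant digit of a nonzero number, via scientific notation."""
    return int(f"{abs(x):e}"[0])

def benford_chi2(values):
    """Chi-square goodness-of-fit of first digits against Benford's law.

    Returns (chi2 statistic, p-value); small p suggests the data deviate
    from the Benford distribution.
    """
    digits = [first_digit(v) for v in values if v != 0]
    n = len(digits)
    counts = Counter(digits)
    observed = [counts.get(d, 0) for d in range(1, 10)]
    # Benford expected frequency of digit d is log10(1 + 1/d).
    expected = [n * math.log10(1 + 1 / d) for d in range(1, 10)]
    chi2, p = stats.chisquare(observed, f_exp=expected)
    return chi2, p
```

Data spanning several orders of magnitude (e.g. log-uniform) conform to Benford's law, while data confined to a narrow range (all first digits equal to 1, say) are strongly rejected.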

  7. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexity of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand an increase in both the productivity and the quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models help to navigate from one model to another, and to trace back to the respective requirements and the design model when a test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose a relation definition markup language (RDML) for defining the relationships between models.

  8. Security considerations and recommendations in computer-based testing.

    Science.gov (United States)

    Al-Saleem, Saleh M; Ullah, Hanif

    2014-01-01

    Many organizations and institutions around the globe are moving or planning to move their paper-and-pencil based testing to computer-based testing (CBT). However, this conversion will not be the best option for all kinds of exams and it will require significant resources. These resources may include the preparation of item banks, methods for test delivery, procedures for test administration, and last but not least test security. Security aspects may include but are not limited to the identification and authentication of examinee, the risks that are associated with cheating on the exam, and the procedures related to test delivery to the examinee. This paper will mainly investigate the security considerations associated with CBT and will provide some recommendations for the security of these kinds of tests. We will also propose a palm-based biometric authentication system incorporated with basic authentication system (username/password) in order to check the identity and authenticity of the examinee.

  9. Security Considerations and Recommendations in Computer-Based Testing

    Directory of Open Access Journals (Sweden)

    Saleh M. Al-Saleem

    2014-01-01

    Full Text Available Many organizations and institutions around the globe are moving or planning to move their paper-and-pencil based testing to computer-based testing (CBT). However, this conversion will not be the best option for all kinds of exams and it will require significant resources. These resources may include the preparation of item banks, methods for test delivery, procedures for test administration, and last but not least test security. Security aspects may include but are not limited to the identification and authentication of examinee, the risks that are associated with cheating on the exam, and the procedures related to test delivery to the examinee. This paper will mainly investigate the security considerations associated with CBT and will provide some recommendations for the security of these kinds of tests. We will also propose a palm-based biometric authentication system incorporated with a basic authentication system (username/password) in order to check the identity and authenticity of the examinee.

  10. Measuring individual significant change on the Beck Depression Inventory-II through IRT-based statistics.

    NARCIS (Netherlands)

    Brouwer, D.; Meijer, R.R.; Zevalkink, D.J.

    2013-01-01

    Several researchers have emphasized that item response theory (IRT)-based methods should be preferred over classical approaches in measuring change for individual patients. In the present study we discuss and evaluate the use of IRT-based statistics to measure statistically significant individual

  11. Measuring individual significant change on the Beck Depression Inventory-II through IRT-based statistics

    NARCIS (Netherlands)

    Brouwer, Danny; Meijer, Rob; Zevalkink, J.

    2013-01-01

    Several researchers have emphasized that item response theory (IRT)-based methods should be preferred over classical approaches in measuring change for individual patients. In the present study we discuss and evaluate the use of IRT-based statistics to measure statistically significant individual
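The general idea of testing individual change with IRT can be sketched as a z-test on two latent trait (theta) estimates and their standard errors. This is a generic illustration under that assumption, not the specific statistics evaluated in the study; the function name is ours.

```python
from math import sqrt
from scipy.stats import norm

def significant_individual_change(theta_pre, se_pre, theta_post, se_post,
                                  alpha=0.05):
    """Two-sided z-test of an individual's change between two IRT latent
    trait estimates, using the standard errors of both estimates.

    Returns (z, p, significant). A key advantage over classical reliable
    change indices is that the standard errors can differ per person and
    per measurement occasion.
    """
    z = (theta_post - theta_pre) / sqrt(se_pre ** 2 + se_post ** 2)
    p = 2.0 * norm.sf(abs(z))
    return z, p, p < alpha
```

For example, a drop in estimated depression severity from theta = -1.0 (SE 0.30) to theta = -2.2 (SE 0.32) yields |z| > 1.96 and is flagged as significant change, whereas a small shift well within the measurement error is not.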

  12. A Note on Testing Mediated Effects in Structural Equation Models: Reconciling Past and Current Research on the Performance of the Test of Joint Significance

    Science.gov (United States)

    Valente, Matthew J.; Gonzalez, Oscar; Miocevic, Milica; MacKinnon, David P.

    2016-01-01

    Methods to assess the significance of mediated effects in education and the social sciences are well studied and fall into two categories: single sample methods and computer-intensive methods. A popular single sample method to detect the significance of the mediated effect is the test of joint significance, and a popular computer-intensive method…
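The test of joint significance itself is simple to sketch: the mediated effect is declared significant when the a-path (X → M) and the b-path (M → Y controlling for X) are both individually significant. The minimal illustration below uses ordinary least squares; the function names are ours.

```python
import numpy as np
from scipy import stats

def ols_slope_p(x, y, covar=None):
    """Two-sided p-value for the slope of x in an OLS regression of y
    on x (plus an optional covariate)."""
    n = len(y)
    cols = [np.ones(n), x] + ([covar] if covar is not None else [])
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    df = n - X.shape[1]
    sigma2 = resid @ resid / df
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    t = beta[1] / se
    return 2.0 * stats.t.sf(abs(t), df)

def joint_significance(x, m, y, alpha=0.05):
    """Test of joint significance for the mediated effect of x on y
    through m: both the a-path and the b-path must be significant."""
    p_a = ols_slope_p(x, m)             # a-path: x -> m
    p_b = ols_slope_p(m, y, covar=x)    # b-path: m -> y, controlling for x
    return (p_a < alpha and p_b < alpha), (p_a, p_b)
```

Computer-intensive alternatives (e.g. bootstrapping the product of coefficients) replace the two separate significance checks with a confidence interval for the product a*b, which is the comparison the article takes up.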

  13. Significance of atypia in conventional Papanicolaou smears and liquid-based cytology: a follow-up study

    DEFF Research Database (Denmark)

    Schledermann, D; Ejersbo, D; Hoelund, B

    2004-01-01

    Papanicolaou smears (CP) and liquid-based samples by the ThinPrep Pap Test (TP) were compared. A total of 1607 CP smears from 1 January 2000 to 31 December 2000 and 798 TP samples from 1 January 2002 to 31 December 2002 diagnosed as atypia were included. The results show that the detection rate of atypia in cervical samples differed between the methods... A follow-up diagnosis of mild dysplasia was seen more than twice as often in TP than in CP (12.8% versus 5.0%, P ...). The ThinPrep Pap Test yielded a significant decrease in atypia rates compared with the conventional Papanicolaou test. In subsequent follow-up the percentage of neoplastic lesions was significantly increased in the ThinPrep Pap Test samples.

  14. Testlet-based Multidimensional Adaptive Testing

    Directory of Open Access Journals (Sweden)

    Andreas Frey

    2016-11-01

    Full Text Available Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, 1.5) and testlet sizes (3 items, 6 items, 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.

  15. Testlet-Based Multidimensional Adaptive Testing.

    Science.gov (United States)

    Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen

    2016-01-01

    Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.

  16. Evaluation of the enzyme test for the detection of clinically significant red blood cell antibodies during pregnancy.

    Science.gov (United States)

    Hundrić-Haspl, Z; Juraković-Loncar, N; Grgicević, D

    1999-01-01

    In Croatian transfusion medicine, no general agreement has yet been achieved on whether red blood cell (RBC) Rhesus (Rh) antibodies detected during pregnancy only by enzyme tests can cause hemolytic disease of the newborn (HDN). Results of the detection of clinically significant RBC antibodies by the low-ionic-strength additive solution antiglobulin test (LISS-IAT) and the trypsin enzyme test in 22,947 pregnant women are presented. All pregnant women in whom clinically significant RBC antibodies (RBC-CSA) were detected by LISS-IAT and/or enzyme tests were followed and observed during pregnancy. The women who had enzyme-only anti-D antibodies in their serum were followed up during subsequent pregnancies. Out of 302 positive results obtained by both techniques, irregular clinically significant enzyme-only antibodies (anti-RhD and anti-RhE specificity) were detected in 14 (4.6%) pregnant women. None of the 11 RhD positive newborns whose mothers had enzyme-only anti-D antibodies had signs of HDN after delivery. In these 11 women, anti-D antibodies were detected by LISS-IAT in the first trimester of the subsequent pregnancy. Nine infants born from subsequent pregnancies to women who had previously had enzyme-only anti-D antibodies had clinical signs of HDN. The authors concluded that there is no need for enzyme tests in prenatal testing because enzyme tests are not reliable in the prediction of HDN.

  17. Evidence for the different physiological significance of the 6- and 2-minute walk tests in multiple sclerosis

    Directory of Open Access Journals (Sweden)

    Motl Robert W

    2012-03-01

    Background: Researchers have recently advocated the 2-minute walk (2MW) as an alternative to the 6-minute walk (6MW) for assessing long-distance ambulation in persons with multiple sclerosis (MS). This recommendation has not been based on physiological considerations such as the rate of oxygen consumption (V·O2) over the 6MW range. Objective: This study examined the pattern of change in V·O2 over the range of the 6MW in a large sample of persons with MS who varied in disability status. Method: Ninety-five persons with clinically definite MS underwent a neurological examination to generate an Expanded Disability Status Scale (EDSS) score, and then completed the 6MW protocol while wearing a portable metabolic unit and an accelerometer. Results: There was a time main effect on V·O2 during the 6MW (p = .0001) such that V·O2 increased significantly every 30 seconds over the first 3 minutes of the 6MW, and then remained stable over the second 3 minutes. This occurred despite no change in cadence across the 6MW (p = .84). Conclusions: The pattern of change in V·O2 indicates that different metabolic systems provide energy for ambulation during the 6MW in MS subjects, and that steady-state aerobic metabolism is reached during the last 3 minutes of the 6MW. By extension, the first 3 minutes represent a test of mixed aerobic and anaerobic work, whereas the second 3 minutes represent a test of aerobic work during walking.

  18. Diagnostic tests based on human basophils

    DEFF Research Database (Denmark)

    Kleine-Tebbe, Jörg; Erdmann, Stephan; Knol, Edward F

    2006-01-01

    -maximal responses, termed 'intrinsic sensitivity'. These variables give rise to shifts in the dose-response curves which, in a diagnostic setting where only a single antigen concentration is employed, may produce false-negative data. Thus, in order to meaningfully utilize the current basophil activation tests....... Diagnostic studies using CD63 or CD203c in hymenoptera, food and drug allergy are critically discussed. Basophil-based tests are indicated for allergy testing in selected cases but should only be performed by experienced laboratories....

  19. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    NARCIS (Netherlands)

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is evident that Fisher’s statistic is more sensitive to smaller p-values than to larger ones, and a single small p-value may overrule the other p-values.
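As a concrete illustration of the sensitivity described above, here is a minimal Python sketch of Fisher's method (not code from the paper). Because the null distribution is chi-square with 2k degrees of freedom, and 2k is always even, the survival function has a closed form and no statistics library is needed:

```python
import math

def fisher_combined(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) ~ chi-square with 2k df under H0."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # Chi-square survival function in closed form for even df = 2k:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# One extreme p-value among unremarkable ones:
p1 = fisher_combined([1e-6, 0.5, 0.6, 0.7])
# Four moderately small p-values:
p2 = fisher_combined([0.05, 0.05, 0.05, 0.05])
print(p1, p2)
```

The first call yields a smaller combined p-value than the second even though three of its four inputs are unremarkable, which is exactly the dominance of small p-values the abstract criticizes.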

  20. A systematic review on diagnostic accuracy of CT-based detection of significant coronary artery disease

    International Nuclear Information System (INIS)

    Janne d'Othee, Bertrand; Siebert, Uwe; Cury, Ricardo; Jadvar, Hossein; Dunn, Edward J.; Hoffmann, Udo

    2008-01-01

    Objectives: Systematic review of diagnostic accuracy of contrast enhanced coronary computed tomography (CE-CCT). Background: Noninvasive detection of coronary artery stenosis (CAS) by CE-CCT as an alternative to catheter-based coronary angiography (CCA) may improve patient management. Methods: Forty-one articles published between 1997 and 2006 were included that evaluated native coronary arteries for significant stenosis and used CE-CCT as the diagnostic test and CCA as the reference standard. Study group characteristics, study methodology and diagnostic outcomes were extracted. Pooled summary sensitivity and specificity of CE-CCT were calculated using a random effects model (1) for all coronary segments, (2) assessable segments, and (3) per patient. Results: The 41 studies totaled 2515 patients (75% males; mean age: 59 years, CAS prevalence: 59%). Analysis of all coronary segments yielded a sensitivity of 95% (80%, 89%, 86%, 98% for electron beam CT, 4/8-slice, 16-slice and 64-slice MDCT, respectively) for a specificity of 85% (77%, 84%, 95%, 91%). Analysis limited to segments deemed assessable by CT showed sensitivity of 96% (86%, 85%, 98%, 97%) for a specificity of 95% (90%, 96%, 96%, 96%). Per patient, sensitivity was 99% (90%, 97%, 99%, 98%) and specificity was 76% (59%, 81%, 83%, 92%). Heterogeneity was quantitatively important but not explainable by patient group characteristics or study methodology. Conclusions: Current diagnostic accuracy of CE-CCT is high. Advances in CT technology have resulted in increases in diagnostic accuracy and in the proportion of assessable coronary segments. However, per patient, accuracy may be lower and CT may have more limited clinical utility in populations at high risk for CAD.
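For reference, the sensitivity and specificity pooled above come from 2 × 2 tables of test result versus reference standard; a minimal sketch with made-up counts (not data from the review):

```python
# Hypothetical 2x2 table for a diagnostic test against the reference standard
# (counts are illustrative only, not taken from the review above).
tp, fn = 95, 5    # diseased patients: test positive / test negative
tn, fp = 76, 24   # disease-free patients: test negative / test positive

sensitivity = tp / (tp + fn)  # P(test positive | disease present)
specificity = tn / (tn + fp)  # P(test negative | disease absent)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
# prints: sensitivity=0.95 specificity=0.76
```

The per-patient pattern reported above (high sensitivity, lower specificity) corresponds to few false negatives but a substantial false-positive count in such a table.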

  1. TREAT (TREe-based Association Test)

    Science.gov (United States)

    TREAT is an R package for detecting complex joint effects in case-control studies. The test statistic is derived from a tree-structure model by recursively partitioning the data. An ultra-fast algorithm is designed to evaluate the significance of association between a candidate gene and disease outcome.

  2. A health policy course based on Fink's Taxonomy of Significant Learning.

    Science.gov (United States)

    Krueger, Kem P; Russell, Mark A; Bischoff, Jason

    2011-02-10

    To incorporate Fink's Taxonomy of Significant Learning into a course and determine whether doing so increased students' knowledge of and interest in healthcare policy. A healthcare policy course for second-year doctor of pharmacy (PharmD) students was redesigned to incorporate activities reflecting Fink's Taxonomy including completing a required reading, outlining the required reading, presenting the outline to a small group of peers, attending lectures, and completing a final policy project and simulation activity. The effectiveness of the course was assessed using a pre-post non-randomized control design, with nursing and social work students serving as the control group. Interest and knowledge scores increased significantly among students in the intervention group. Differences between the low-interest students and the rest of the class identified on the precourse tests were not apparent on the postcourse test. Applying Fink's Taxonomy to course activities increased students' interest in and importance placed on learning health policy.

  3. Forum: Is Test-Based Accountability Dead?

    Science.gov (United States)

    Polikoff, Morgan S.; Greene, Jay P.; Huffman, Kevin

    2017-01-01

    Since the 2001 passage of the No Child Left Behind Act (NCLB), test-based accountability has been an organizing principle--perhaps "the" organizing principle--of efforts to improve American schools. But lately, accountability has been under fire from many critics, including Common Core opponents and those calling for more multifaceted…

  4. Liver stiffness measurement-based scoring system for significant inflammation related to chronic hepatitis B.

    Directory of Open Access Journals (Sweden)

    Mei-Zhu Hong

    Liver biopsy is indispensable because liver stiffness measurement alone cannot provide information on intrahepatic inflammation. However, the presence of fibrosis highly correlates with inflammation. We constructed a noninvasive model to determine significant inflammation in chronic hepatitis B patients by using liver stiffness measurement and serum markers. The training set included chronic hepatitis B patients (n = 327), and the validation set included 106 patients; liver biopsies were performed, liver histology was scored, and serum markers were investigated. All patients underwent liver stiffness measurement. An inflammation activity scoring system for significant inflammation was constructed. In the training set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.964, 91.9%, and 90.8% in the HBeAg(+) patients and 0.978, 85.0%, and 94.0% in the HBeAg(-) patients, respectively. In the validation set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.971, 90.5%, and 92.5% in the HBeAg(+) patients and 0.977, 95.2%, and 95.8% in the HBeAg(-) patients. The liver stiffness measurement-based activity score was comparable to the fibrosis-based activity score in both HBeAg(+) and HBeAg(-) patients for recognizing significant inflammation (G ≥ 3). Significant inflammation can be accurately predicted by this novel method. The liver stiffness measurement-based scoring system can be used without the aid of computers and provides a noninvasive alternative for the prediction of chronic hepatitis B-related significant inflammation.

  5. Children with hemodynamically significant congenital heart disease can be identified through population-based registers

    DEFF Research Database (Denmark)

    Bergman, Gunnar; Hærskjold, Ann; Stensballe, Lone Graff

    2015-01-01

    BACKGROUND: Epidemiological research is facilitated in Sweden by a history of national health care registers, making large unselected national cohort studies possible. However, for complex clinical populations, such as children with congenital heart disease (CHD), register-based studies… Palivizumab is indicated as a prophylactic treatment against respiratory syncytial virus infections in children with hemodynamically significant CHD. AIM: The aim of the study reported here was to develop and validate an algorithm to identify children with hemodynamically significant CHD according to recommendations… as having hemodynamically significant CHD according to the recommendations for treatment with palivizumab. CONCLUSION: It was possible to identify a subgroup of children with hemodynamically significant CHD using an epidemiological approach and an algorithm with high validity. Our results will enable well-powered…

  6. Using a micro computer based test bank

    International Nuclear Information System (INIS)

    Hamel, R.T.

    1987-01-01

    Utilizing a micro computer based test bank offers a training department many advantages and can have a positive impact upon training procedures and examination standards. Prior to data entry, Training Department management must pre-review the examination questions and answers to ensure compliance with examination standards and to verify the validity of all questions. Management must adhere to the TSD format since all questions require an enabling objective numbering scheme. Each question is entered under the enabling objective upon which it is based. Then the question is selected via the enabling objective. This eliminates any instructor bias because a random number generator chooses the test question. However, the instructor may load specific questions to create an emphasis theme for any test. The examination, answer and cover sheets are produced and printed within minutes. The test bank eliminates the large amount of time that is normally required for an instructor to formulate an examination. The need for clerical support is reduced by the elimination of typing examinations and also by the software's ability to maintain and generate student/course lists, attendance sheets, and grades. Software security measures limit access to the test bank, and the impromptu method used to generate and print an examination enhances its security.
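The selection scheme described above can be sketched as follows; the bank structure, objective IDs, and question labels are invented for illustration, since the article does not publish its software:

```python
import random

# Hypothetical test bank: each question is stored under the enabling
# objective it tests, mirroring the numbering scheme described above.
bank = {
    "EO-1.1": ["Q1", "Q2", "Q3"],
    "EO-1.2": ["Q4", "Q5"],
    "EO-2.1": ["Q6", "Q7", "Q8", "Q9"],
}

def build_exam(objectives, seed=None):
    """Draw one question per enabling objective with a random number
    generator, which removes instructor bias from question selection."""
    rng = random.Random(seed)
    return [rng.choice(bank[eo]) for eo in objectives]

exam = build_exam(["EO-1.1", "EO-1.2", "EO-2.1"])
print(exam)
```

An instructor wanting an emphasis theme could still pre-load specific questions, but the default path is the unbiased random draw shown here.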

  7. The Guide-based Automatic Creation of Verified Test Scenarios

    Directory of Open Access Journals (Sweden)

    P. D. Drobintsev

    2013-01-01

    This paper presents an overview of a technology for the automated generation of test scenarios based on guides. The usage of this technology can significantly improve the quality of the developed software products. To motivate the creation of the technology, the main problems that occur during the development and testing of large industrial systems are described, as well as methodologies for verifying software conformity to product requirements. The capabilities of tools for automatic and semi-automatic generation of a test suite from a formal model in UCM notation are demonstrated, as well as tools for verification and test automation.

  8. Human papillomavirus mRNA and DNA testing in women with atypical squamous cells of undetermined significance

    DEFF Research Database (Denmark)

    Thomsen, Louise T; Dehlendorff, Christian; Junge, Jette

    2016-01-01

    In this prospective cohort study, we compared the performance of human papillomavirus (HPV) mRNA and DNA testing of women with atypical squamous cells of undetermined significance (ASC-US) during cervical cancer screening. Using a nationwide Danish pathology register, we identified women aged 30-65 years with ASC-US during 2005-2011 who were tested for HPV16/18/31/33/45 mRNA using PreTect HPV-Proofer (n = 3,226) or for high-risk HPV (hrHPV) DNA using Hybrid Capture 2 (HC2) (n = 9,405) or the Linear Array HPV-Genotyping test (LA) (n = 1,533). Women with ≥1 subsequent examination in the register (n = 13…) … those testing HC2 negative (3.2% [95% CI: 2.2-4.2%] versus 0.5% [95% CI: 0.3-0.7%]). Patterns were similar after 18 months and 5 years' follow-up; for CIN2+ and cancer as outcomes; across all age groups; and when comparing mRNA testing to hrHPV DNA testing using LA. In conclusion, the HPV16…

  9. Academic integrity "captured" by a personality-based test

    Directory of Open Access Journals (Sweden)

    Okanović Predrag

    2013-01-01

    The main goal of this study was to develop and validate a personality-based academic integrity test which could serve as a predictor of students’ academic dishonesty. A new Academic Integrity Test (AIT), based on methodological principles accepted in the field of work integrity, was created during this study. The test was developed on one student sample (N=350), and then validated on another (N=471). Validation of the AIT confirmed its relations with three dimensions previously found to be consistent correlates of work integrity measures - Conscientiousness, Aggressiveness and Neuroticism - with the addition of Negative Valence. The correlation between the AIT and a cognitive ability measure was not significant, which is in accordance with previous research. The test retained significant relations with the aforementioned personality measures in the simulated applicant condition (except with Neuroticism), leading to the conclusion that the AIT maintains construct validity in situations susceptible to self-presentation.

  10. Understanding text-based persuasion and support tactics of concerned significant others

    Directory of Open Access Journals (Sweden)

    Katherine van Stolk-Cooke

    2015-08-01

    The behavior of concerned significant others (CSOs) can have a measurable impact on the health and wellness of individuals attempting to meet behavioral and health goals, and research is needed to better understand the attributes of text-based CSO language when encouraging target significant others (TSOs) to achieve those goals. In an effort to inform the development of interventions for CSOs, this study examined the language content of brief text-based messages generated by CSOs to motivate TSOs to achieve a behavioral goal. CSOs generated brief text-based messages for TSOs for three scenarios: (1) to help TSOs achieve the goal, (2) in the event that the TSO is struggling to meet the goal, and (3) in the event that the TSO has given up on meeting the goal. Results indicate that there was a significant relationship between the tone and compassion of messages generated by CSOs, the CSOs’ perceptions of TSO motivation, and their expectation of a grateful or annoyed reaction by the TSO to their feedback or support. Results underscore the importance of attending to patterns in language when CSOs communicate with TSOs about goal achievement or failure, and how certain variables in the CSOs’ perceptions of their TSOs affect these characteristics.

  11. Understanding text-based persuasion and support tactics of concerned significant others

    Science.gov (United States)

    van Stolk-Cooke, Katherine; Hayes, Marie; Baumel, Amit

    2015-01-01

    The behavior of concerned significant others (CSOs) can have a measurable impact on the health and wellness of individuals attempting to meet behavioral and health goals, and research is needed to better understand the attributes of text-based CSO language when encouraging target significant others (TSOs) to achieve those goals. In an effort to inform the development of interventions for CSOs, this study examined the language content of brief text-based messages generated by CSOs to motivate TSOs to achieve a behavioral goal. CSOs generated brief text-based messages for TSOs for three scenarios: (1) to help TSOs achieve the goal, (2) in the event that the TSO is struggling to meet the goal, and (3) in the event that the TSO has given up on meeting the goal. Results indicate that there was a significant relationship between the tone and compassion of messages generated by CSOs, the CSOs’ perceptions of TSO motivation, and their expectation of a grateful or annoyed reaction by the TSO to their feedback or support. Results underscore the importance of attending to patterns in language when CSOs communicate with TSOs about goal achievement or failure, and how certain variables in the CSOs’ perceptions of their TSOs affect these characteristics. PMID:26312172

  12. New Classification Method Based on Support-Significant Association Rules Algorithm

    Science.gov (United States)

    Li, Guoxin; Shi, Wen

    One of the most well-studied problems in data mining is mining for association rules. Research has also introduced association rule mining methods to conduct classification tasks, and these classification methods can be applied to customer segmentation. Currently, most association rule mining methods are based on a support-confidence structure, where rules satisfying both minimum support and minimum confidence are returned to the analyzer as strong association rules. However, this type of association rule mining method lacks a rigorous statistical guarantee and can even be misleading. A new classification model for customer segmentation, based on an association rule mining algorithm, is proposed in this paper. The new model is based on the support-significant association rule mining method, where the confidence measure for an association rule is replaced by the significance of the rule, a better evaluation standard for association rules. A data experiment on customer segmentation using UCI data indicated the effectiveness of the new model.
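To see why a support-confidence rule can mislead without a significance check, here is a small sketch (invented transactions, not the paper's algorithm) in which a rule has solid support and 90% confidence yet a chi-square independence test finds no association at all, because the consequent is just as common without the antecedent:

```python
import math

def rule_stats(transactions, antecedent, consequent):
    """Support, confidence, and a chi-square independence p-value for A -> B."""
    n = len(transactions)
    a = sum(antecedent <= t for t in transactions)                   # count(A)
    b = sum(consequent <= t for t in transactions)                   # count(B)
    ab = sum((antecedent | consequent) <= t for t in transactions)   # count(A and B)
    support = ab / n
    confidence = ab / a
    # 2x2 contingency table: observed vs expected under independence of A and B.
    chi2 = 0.0
    for row, col, obs in [(a, b, ab), (a, n - b, a - ab),
                          (n - a, b, b - ab), (n - a, n - b, n - a - b + ab)]:
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    pvalue = math.erfc(math.sqrt(chi2 / 2.0))  # chi-square sf with 1 df
    return support, confidence, pvalue

# 100 baskets: item "a" in the first 20, item "b" in 90 of them (18 of the
# "a" baskets and 72 of the rest), so P(b | a) = P(b) = 0.9.
baskets = []
for i in range(100):
    t = set()
    if i < 20:
        t.add("a")
        if i < 18:
            t.add("b")
    elif i < 92:  # 72 of the 80 remaining baskets
        t.add("b")
    baskets.append(t)

sup, conf, p = rule_stats(baskets, {"a"}, {"b"})
print(f"support={sup:.2f} confidence={conf:.2f} p={p:.2f}")
# prints: support=0.18 confidence=0.90 p=1.00
```

A support-confidence miner would report "a -> b" as a strong rule; the significance measure correctly rejects it, which is the substitution the abstract advocates.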

  13. Candidate reservoirs for enhanced oil recovery, guidelines for their selection and appraisal of significant tests to date. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Cronquist, C.; Portugal, D.

    1977-03-01

    Elements which are keys to the effective management of the ERDA Enhanced Oil Recovery Programs are described. Results of efforts to develop screening criteria to identify reservoir rock-fluid characteristics considered suitable for application of EOR processes which appear to have the best chance for commercial recovery of significant additional volumes of oil are summarized. A review of significant field tests made to develop the management matrices is included. Over a hundred published references describing completed and ongoing tests of EOR processes were reviewed to compile the data reported in the four matrices. The knowledge derived from these and subsequent field tests, plus that derived from laboratory research, serves to continually improve the state-of-the-art, thereby reducing the risk in tests yet to come. As the state-of-the-art improves, and as more data comes into the public domain, the matrices should be updated, and the screening criteria should be revised accordingly. An extensive set of data representing 2,420 reservoirs in the United States which have been identified as potential candidates for enhanced oil recovery is included.

  14. Preoperative Panel Testing for Hereditary Cancer Syndromes Does Not Significantly Impact Time to Surgery for Newly Diagnosed Breast Cancer Patients Compared with BRCA1/2 Testing.

    Science.gov (United States)

    Murphy, Amy E; Hussain, Lala; Ho, Ching; Dunki-Jacobs, Erik; Lee, David; Tameron, Ashley; Huelsman, Karen; Rice, Courtney; Wexelman, Barbara A

    2017-10-01

    This study seeks to determine whether there is a delay in time to surgery for breast cancer patients undergoing panel testing compared with traditional BRCA testing. This study was a retrospective review of women diagnosed with breast cancer who underwent genetic evaluation from our institution's Genetic Counselor Database from January 2013 to August 2015. Patients were excluded if they were male, clinical information was unavailable, the patient underwent neoadjuvant chemotherapy, had a diagnosis of recurrent breast cancer during the time of the study, or had postoperative genetics evaluation. Included in the study were 138 patients. The time from diagnosis to surgery for BRCA1/2-tested patients was 43.5 days compared with 51.0 days in the panel group (p = 0.186). Turnaround time for genetic testing decreased during the period studied and was approximately 6 days longer for panel testing than BRCA testing. It took 12.2 days for BRCA results and 18.9 days for the panel results (p …). Turnaround time for BRCA testing in 2014 and 2015 was 12.4 and 10.5 days respectively, whereas panel testing was 20.5 and 18.2 days (p ≤ 0.001). Of the variables included in multivariable linear regression, only mastectomy significantly contributed to time to surgery (p …). Panel testing did not delay time to surgery compared with BRCA testing alone. The use of panel testing has increased over time, and lab turnaround time has decreased. Mastectomy was the only clinical variable contributing to longer time to surgery.

  15. Black hole based tests of general relativity

    International Nuclear Information System (INIS)

    Yagi, Kent; Stein, Leo C

    2016-01-01

    General relativity has passed all solar system experiments and neutron star based tests, such as binary pulsar observations, with flying colors. A more exotic arena for testing general relativity is in systems that contain one or more black holes. Black holes are the most compact objects in the Universe, providing probes of the strongest-possible gravitational fields. We are motivated to study strong-field gravity since many theories give large deviations from general relativity only at large field strengths, while recovering the weak-field behavior. In this article, we review how one can probe general relativity and various alternative theories of gravity by using electromagnetic waves from a black hole with an accretion disk, and gravitational waves from black hole binaries. We first review model-independent ways of testing gravity with electromagnetic/gravitational waves from a black hole system. We then focus on selected examples of theories that extend general relativity in rather simple ways. Some important characteristics of general relativity include (but are not limited to) (i) only tensor gravitational degrees of freedom, (ii) the graviton is massless, (iii) no quadratic or higher curvatures in the action, and (iv) the theory is four-dimensional. Altering a characteristic leads to a different extension of general relativity: (i) scalar–tensor theories, (ii) massive gravity theories, (iii) quadratic gravity, and (iv) theories with large extra dimensions. Within each theory, we describe black hole solutions, their properties, and current and projected constraints on each theory using black hole based tests of gravity. We close this review by listing some of the open problems in model-independent tests and within each specific theory. (paper)

  16. Resetting the bar: Statistical significance in whole-genome sequencing-based association studies of global populations.

    Science.gov (United States)

    Pulit, Sara L; de With, Sera A J; de Bakker, Paul I W

    2017-02-01

    Genome-wide association studies (GWAS) of common disease have been hugely successful in implicating loci that modify disease risk. The bulk of these associations have proven robust and reproducible, in part due to community adoption of statistical criteria for claiming significant genotype-phenotype associations. As the cost of sequencing continues to drop, assembling large samples in global populations is becoming increasingly feasible. Sequencing studies interrogate not only common variants, as was true for genotyping-based GWAS, but variation across the full allele frequency spectrum, yielding many more (independent) statistical tests. We sought to empirically determine genome-wide significance thresholds for various analysis scenarios. Using whole-genome sequence data, we simulated sequencing-based disease studies of varying sample size and ancestry. We determined that future sequencing efforts in >2,000 samples of European, Asian, or admixed ancestry should set genome-wide significance at approximately P = 5 × 10⁻⁹, and studies of African samples should apply a more stringent genome-wide significance threshold of P = 1 × 10⁻⁹. Adoption of a revised multiple test correction will be crucial in avoiding irreproducible claims of association. © 2016 WILEY PERIODICALS, INC.
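The recommended thresholds are consistent with a Bonferroni-style correction, alpha divided by the effective number of independent tests; the counts below are illustrative back-calculations, not figures from the study's simulations:

```python
# Back-calculated effective numbers of independent tests (illustrative only):
# alpha / m reproduces the thresholds recommended above.
alpha = 0.05
effective_tests = {
    "European, Asian, or admixed ancestry": 10_000_000,  # -> 5e-9
    "African ancestry": 50_000_000,                      # -> 1e-9
}
for population, m in effective_tests.items():
    print(f"{population}: genome-wide significance ~ {alpha / m:.0e}")
```

The larger implied test count for African-ancestry samples reflects their greater genetic diversity, hence more independent variants per genome.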

  17. Evaluating research for clinical significance: using critically appraised topics to enhance evidence-based neuropsychology.

    Science.gov (United States)

    Bowden, Stephen C; Harrison, Elise J; Loring, David W

    2014-01-01

    Meehl's (1973, Psychodiagnosis: Selected papers. Minneapolis: University of Minnesota Press) distinction between statistical and clinical significance holds special relevance for evidence-based neuropsychological practice. Meehl argued that despite attaining statistical significance, many published findings have limited practical value since they do not inform clinical care. In the context of an ever-expanding clinical research literature, accessible methods to evaluate clinical impact are needed. The method of Critically Appraised Topics (Straus, Richardson, Glasziou, & Haynes, 2011, Evidence-based medicine: How to practice and teach EBM (4th ed.). Edinburgh: Elsevier Churchill-Livingstone) was developed to provide clinicians with a "toolkit" to facilitate implementation of evidence-based practice. We illustrate the Critically Appraised Topics method using a dementia screening example. We argue that the skills practiced through critical appraisal provide clinicians with methods to: (1) evaluate the clinical relevance of new or unfamiliar research findings with a focus on patient benefit, (2) help focus appraisal on research quality, and (3) incorporate evaluation of clinical impact into educational and professional development activities.

  18. Technical bases for the DWPF testing program

    International Nuclear Information System (INIS)

    Plodinec, M.J.

    1990-01-01

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS) will be the first production facility in the United States for the immobilization of high-level nuclear waste. Production of DWPF canistered wasteforms will begin prior to repository licensing, so decisions on facility startup will have to be made before the final decisions on repository design are made. The Department of Energy's Office of Civilian Radioactive Waste Management (RW) has addressed this discrepancy by defining a Waste Acceptance Process. This process provides assurance that the borosilicate-glass wasteform, in a stainless-steel canister, produced by the DWPF will be acceptable for permanent storage in a federal repository. As part of this process, detailed technical specifications have been developed for the DWPF product. SRS has developed detailed strategies for demonstrating compliance with each of the Waste Acceptance Process specifications. An important part of the compliance is the testing which will be carried out in the DWPF. In this paper, the bases for each of the tests to be performed in the DWPF to establish compliance with the specifications are described, and the tests are detailed. The results of initial tests relating to characterization of sealed canisters are reported

  19. Significantly High Modulation Efficiency of Compact Graphene Modulator Based on Silicon Waveguide.

    Science.gov (United States)

    Shu, Haowen; Su, Zhaotang; Huang, Le; Wu, Zhennan; Wang, Xingjun; Zhang, Zhiyong; Zhou, Zhiping

    2018-01-17

    We theoretically and experimentally demonstrate a significantly high modulation efficiency in a compact graphene modulator based on a silicon waveguide, using the electro-refractive effect of graphene. The modulator can be switched between electro-absorption and electro-refractive modulation modes by the applied voltage. A high extinction ratio of 25 dB is achieved in the electro-absorption modulation mode with a driving voltage range of 0 V to 1 V. For electro-refractive modulation, the driving voltage ranges from 1 V to 3 V with a 185-pm spectrum shift. The modulation efficiency of 1.29 V · mm with a 40-μm interaction length is two orders of magnitude higher than that of the first reported graphene phase modulator. The realisation of phase and intensity modulation with graphene on a silicon waveguide heralds its potential application in optical communication and optical interconnection systems.

  20. Efficient p-value evaluation for resampling-based tests

    KAUST Repository

    Yu, K.

    2011-01-05

    The resampling-based test, which often relies on permutation or bootstrap procedures, has been widely used for statistical hypothesis testing when the asymptotic distribution of the test statistic is unavailable or unreliable. It requires repeated calculation of the test statistic on a large number of simulated data sets to assess the significance level, and thus it can become very computationally intensive. Here, we propose an efficient p-value evaluation procedure by adapting the stochastic approximation Markov chain Monte Carlo algorithm. The new procedure can be used easily to estimate the p-value for any resampling-based test. We show through numeric simulations that the proposed procedure can be 100 to 500,000 times as efficient (in terms of computing time) as the standard resampling-based procedure when evaluating a test statistic with a small p-value (e.g. less than 10⁻⁶). With its computational burden reduced by this proposed procedure, the versatile resampling-based test becomes computationally feasible for a much wider range of applications. We demonstrate the application of the new method by applying it to a large-scale genetic association study of prostate cancer.
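The computational burden being addressed is easy to see in the standard procedure. The sketch below is a plain permutation test for a difference in means (not the stochastic-approximation MCMC method the paper proposes): estimating a p-value near 10⁻⁶ this way would require millions of iterations of the inner loop.

```python
import random

def permutation_pvalue(x, y, n_perm=10_000, seed=1):
    """Standard resampling-based p-value for a two-sample difference in
    means: recompute the statistic on randomly relabeled data n_perm times."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = abs(sum(pooled[:len(x)]) / len(x) -
                   sum(pooled[len(x):]) / len(y))
        if stat >= observed:
            hits += 1
    # Add-one correction keeps the estimate away from an impossible p = 0;
    # the smallest reportable p-value is therefore 1 / (n_perm + 1).
    return (hits + 1) / (n_perm + 1)

p = permutation_pvalue([5.1, 4.9, 5.3, 5.2, 5.0, 4.8],
                       [1.2, 0.9, 1.1, 1.0, 1.3, 0.8])
print(p)
```

Because the smallest achievable estimate is 1/(n_perm + 1), resolving a p-value of 10⁻⁶ requires over a million permutations, which is the cost the proposed procedure avoids.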

  1. Divergence-based tests for model diagnostic

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Esteban, M. D.; Morales, D.; Marhuenda, Y.

    2008-01-01

    Roč. 78, č. 13 (2008), s. 1702-1710 ISSN 0167-7152 R&D Projects: GA MŠk 1M0572 Grant - others:Instituto Nacional de Estadistica (ES) MTM2006-05693 Institutional research plan: CEZ:AV0Z10750506 Keywords : goodness of fit * divergence statistics * GLM * model checking * bootstrap Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.445, year: 2008 http://library.utia.cas.cz/separaty/2008/SI/hobza-divergence-based%20tests%20for%20model%20diagnostic.pdf

  2. Molecular-based classification algorithm for endometrial carcinoma categorizes ovarian endometrioid carcinoma into prognostically significant groups.

    Science.gov (United States)

    Parra-Herran, Carlos; Lerner-Ellis, Jordan; Xu, Bin; Khalouei, Sam; Bassiouny, Dina; Cesari, Matthew; Ismiil, Nadia; Nofech-Mozes, Sharon

    2017-12-01

    The Cancer Genome Atlas classification divides endometrial carcinoma into biologically distinct groups, and testing for p53, mismatch repair proteins (MMR), and polymerase ɛ (POLE) exonuclease domain mutations has been shown to predict the molecular subgroup and clinical outcome. While abnormalities in these markers have been described in ovarian endometrioid carcinoma, their role in predicting its molecular profile and prognosis is still not fully explored. Patients with ovarian endometrioid carcinomas treated surgically over a 14-year period were selected. Only tumors with confirmation of endometrioid histology and negative WT1 and Napsin-A were included. POLE mutational analysis and immunohistochemistry for p53, MLH1, MSH2, MSH6, and PMS2 were performed on formalin-fixed, paraffin-embedded tissue. Following the molecular classifier proposed for endometrial carcinoma (Br J Cancer 2015;113:299-310), cases were classified as POLE mutated, MMR abnormal, p53 abnormal, or p53 wild type. Clinicopathologic information was recorded, including patient outcome. In all, 72 cases were included, distributed as follows: 7 (10%) POLE mutated; 6 (8%) MMR abnormal; 17 (24%) p53 abnormal; and 42 (58%) p53 wild type. The molecular classification correlated with disease-free survival in multivariate analysis (P=0.003), independently of tumor grade and stage. Correlation with overall survival approached statistical significance (P=0.051). POLE-mutated and MMR-abnormal tumors had excellent survival, whereas p53-abnormal tumors had significantly higher rates of recurrence and death. Ovarian endometrioid carcinoma can be classified into clinically meaningful subgroups by testing for molecular surrogates, akin to endometrial cancer. MMR and POLE alterations seem to identify a subset of ovarian endometrioid carcinomas with excellent outcome; conversely, abnormal p53 carries a worse prognosis. In the era of personalized medicine, the use of these markers in the routine evaluation of ovarian

  3. Women who know their place : sex-based differences in spatial abilities and their evolutionary significance.

    Science.gov (United States)

    Burke, Ariane; Kandler, Anne; Good, David

    2012-06-01

    Differences between men and women in the performance of tests designed to measure spatial abilities are explained by evolutionary psychologists in terms of adaptive design. The Hunter-Gatherer Theory of Spatial Ability suggests that the adoption of a hunter-gatherer lifestyle (assuming a sexual division of labor) created differential selective pressure on the development of spatial skills in men and women and, therefore, cognitive differences between the sexes. Here, we examine a basic spatial skill, wayfinding (the ability to plan routes and navigate a landscape), in men and women in a natural, real-world setting as a means of testing the proposition that sex-based differences in spatial ability exist outside the laboratory. Our results indicate that when physical differences are accounted for, men and women with equivalent experience perform equally well at complex navigation tasks in a real-world setting. We conclude that experience, gendered patterns of activity, and self-assessment are contributing factors in producing previously reported differences in spatial ability.

  4. Watermarking Techniques Using Least Significant Bit Algorithm for Digital Image Security Standard Solution- Based Android

    Directory of Open Access Journals (Sweden)

    Ari Muzakir

    2017-05-01

    The ease of deploying digital images through the internet has positive and negative sides, especially for owners of original digital images. The positive side is that an owner can rapidly distribute digital image files to sites all over the world. The downside is that, without a copyright mark serving to protect the image, its ownership is easily claimed by other parties. Watermarking is one solution for protecting copyright and verifying the provenance of a digital image. With digital image watermarking, the copyright of the resulting digital image is protected through the insertion of additional information, such as owner information and proof of the image's authenticity. The least significant bit (LSB) method is a simple and easily understood watermarking algorithm. The results of simulations carried out on an Android smartphone show that LSB watermarking cannot be perceived by the naked human eye: there is no visible difference between the original image files and the images into which a watermark has been inserted. The resulting image has dimensions of 640x480 with a bit depth of 32 bits. In addition, black box testing was used to assess the ability of the device (smartphone) to process images with this application.
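
The LSB idea the abstract describes can be illustrated in a few lines: each watermark bit replaces the lowest-order bit of a pixel byte, so any channel value changes by at most 1, which is why the change is invisible to the naked eye. A minimal sketch over a flat list of channel bytes (real implementations operate on decoded image buffers; the names are illustrative):

```python
def embed_lsb(pixels, bits):
    """Replace the least significant bit of each leading pixel byte
    with one watermark bit."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_lsb(pixels, n_bits):
    """Read the watermark back from the least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]
```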

  5. Corneal topographer based on the Hartmann test.

    Science.gov (United States)

    Mejía, Yobani; Galeano, Janneth C

    2009-04-01

    The purpose of this article is to show the performance of a topographer based on the Hartmann test for convex surfaces of F/# approximately 1. This topographer, called the "Hartmann Test topographer" (HT topographer), is a prototype developed in the Physics Department of the Universidad Nacional de Colombia. From the Hartmann pattern generated by the surface under test, and by means of Fourier analysis and optical aberration theory, we obtain the sagitta (elevation map) of the surface. Then, taking the first and second derivatives of the sagitta in the radial direction, we obtain the meridional curvature map. The method is illustrated with an example. To check the performance of the HT topographer, a toric surface, a revolution aspherical surface, and two human corneas were measured. Our results are compared with those obtained with a Placido ring topographer (Tomey TMS-4 videokeratoscope), and we show that our curvature maps are similar to those obtained with the Placido ring topographer. The HT topographer is able to reconstruct the corneal topography while potentially eliminating the skew ray problem; therefore, corneal defects can be visualized more clearly. The results are presented as elevation and meridional curvature maps.
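
The derivative step of the reconstruction can be sketched numerically: given sampled sagitta values z(r) along one meridian, the meridional curvature follows from the first and second radial derivatives, kappa = z'' / (1 + z'^2)^(3/2). A minimal finite-difference sketch (our own illustration under that standard formula, not the authors' Fourier-based pipeline), with a sanity check on a sphere whose curvature is known:

```python
import math

def meridional_curvature(r, z):
    """Meridional curvature from sagitta samples z(r) on a uniform radial
    grid, via central differences: kappa = z'' / (1 + z'^2)**1.5."""
    h = r[1] - r[0]
    kappa = []
    for i in range(1, len(r) - 1):
        dz = (z[i + 1] - z[i - 1]) / (2 * h)
        d2z = (z[i + 1] - 2 * z[i] + z[i - 1]) / h ** 2
        kappa.append(d2z / (1 + dz * dz) ** 1.5)
    return kappa

# Sanity check on a sphere of radius R: sagitta z = R - sqrt(R^2 - r^2),
# whose meridional curvature is 1/R everywhere.
R = 7.8  # mm, a typical corneal radius of curvature
r = [i * 0.01 for i in range(300)]
z = [R - math.sqrt(R * R - ri * ri) for ri in r]
k = meridional_curvature(r, z)
```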

  6. Network Diffusion-Based Prioritization of Autism Risk Genes Identifies Significantly Connected Gene Modules

    Directory of Open Access Journals (Sweden)

    Ettore Mosca

    2017-09-01

    Autism spectrum disorder (ASD) is marked by strong genetic heterogeneity, which is underlined by the low overlap between ASD risk gene lists proposed in different studies. In this context, molecular networks can be used to analyze the results of several genome-wide studies in order to highlight those network regions harboring genetic variations associated with ASD, the so-called "disease modules." In this work, we used a recent network diffusion-based approach to jointly analyze multiple ASD risk gene lists. We defined genome-scale prioritizations of human genes in relation to ASD genes from multiple studies, found significantly connected gene modules associated with ASD, and predicted genes functionally related to ASD risk genes. Most of them play a role in synapses and in neuronal development and function; many are related to syndromes that can be comorbid with ASD, and the remaining are involved in epigenetics, cell cycle, cell adhesion, and cancer.
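
The diffusion step behind such prioritizations is commonly a random walk with restart from the seed (risk) genes, so that scores decay with network distance from the seeds. A minimal dense-matrix sketch (illustrative only; the paper's actual method, normalization, and parameters differ):

```python
def network_diffusion(adj, seeds, restart=0.5, iters=200):
    """Random walk with restart over a symmetric adjacency matrix given
    as a list of rows; `seeds` is a set of seed node indices. Returns a
    score per node that decays with distance from the seeds."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    p0 = [1.0 / len(seeds) if i in seeds else 0.0 for i in range(n)]
    p = p0[:]
    for _ in range(iters):
        nxt = []
        for i in range(n):
            # Column-normalized flow into node i from its neighbors.
            flow = sum(adj[i][j] * p[j] / deg[j] for j in range(n) if deg[j])
            nxt.append((1 - restart) * flow + restart * p0[i])
        p = nxt
    return p
```

Because the walk matrix is column-stochastic, the scores stay a probability distribution; ranking nodes by score prioritizes candidates close to the seed set.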

  7. Semifragile Speech Watermarking Based on Least Significant Bit Replacement of Line Spectral Frequencies

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Nematollahi

    2017-01-01

    There are various techniques for speech watermarking based on modifying the linear prediction coefficients (LPCs); however, the estimated and modified LPCs vary from each other even without attacks. Because line spectral frequency (LSF) has less sensitivity to watermarking than LPC, watermark bits are embedded into the maximum number of LSFs by applying the least significant bit replacement (LSBR) method. To reduce the differences between estimated and modified LPCs, a checking loop is added to minimize the watermark extraction error. Experimental results show that the proposed semifragile speech watermarking method can provide high imperceptibility and that any manipulation of the watermark signal destroys the watermark bits since manipulation changes it to a random stream of bits.

  8. Model-Based Software Testing for Object-Oriented Software

    Science.gov (United States)

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  9. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    Science.gov (United States)

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc correctly identified the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% for both M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8. Phys. Earth Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction. J. Geophys. Res. 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
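
The quoted confidence levels can be checked against the usual null model for alarm-based prediction: if alarms occupy a fraction tau of the space-time volume, the number of events caught by random alarms is Binomial(n, tau). A sketch of that standard calculation (our reading, not the authors' code):

```python
from math import comb

def alarm_p_value(n_events, n_predicted, alarm_fraction):
    """One-sided binomial p-value: the chance that random alarms covering
    `alarm_fraction` of the space-time volume would catch at least
    `n_predicted` of `n_events` earthquakes by luck alone."""
    tau = alarm_fraction
    return sum(comb(n_events, j) * tau ** j * (1 - tau) ** (n_events - j)
               for j in range(n_predicted, n_events + 1))
```

For M8, 5 of 5 events with a 36% alarm volume gives p = 0.36^5, about 0.006, i.e. confidence above 99% as the abstract states; 4 of 5 at 18% for MSc likewise gives p below 0.01.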

  10. Investigating a multigene prognostic assay based on significant pathways for Luminal A breast cancer through gene expression profile analysis.

    Science.gov (United States)

    Gao, Haiyan; Yang, Mei; Zhang, Xiaolan

    2018-04-01

    The present study aimed to investigate potential recurrence-risk biomarkers based on significant pathways for Luminal A breast cancer through gene expression profile analysis. Initially, the gene expression profiles of Luminal A breast cancer patients were downloaded from The Cancer Genome Atlas database. The differentially expressed genes (DEGs) were identified using the Limma package, and hierarchical clustering analysis was conducted for the DEGs. In addition, functional pathways were screened using Kyoto Encyclopedia of Genes and Genomes pathway enrichment analyses and rank ratio calculation. The multigene prognostic assay was developed based on the statistically significant pathways; its prognostic function was fitted on a training set and verified using the gene expression and survival data of Luminal A breast cancer patients downloaded from the Gene Expression Omnibus. A total of 300 DEGs were identified between the good and poor outcome groups, including 176 upregulated genes and 124 downregulated genes. Hierarchical clustering analysis verified that the DEGs may be used to effectively distinguish Luminal A samples with different prognoses. Nine pathways were screened as significant, and the 18 DEGs involved in these 9 pathways were identified as prognostic biomarkers. According to the survival analysis and receiver operating characteristic curve, the resulting 18-gene prognostic assay exhibited good prognostic function, with high sensitivity and specificity on both the training and test samples. In conclusion, the 18-gene prognostic assay, including the key genes transcription factor 7-like 2, anterior parietal cortex and lymphocyte enhancer factor-1, may provide a new method for predicting outcomes and may be conducive to the promotion of precision medicine for Luminal A breast cancer.
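
The sensitivity/specificity evaluation rests on the receiver operating characteristic curve; the area under it can be computed directly from the rank-sum identity AUC = P(score of a random positive > score of a random negative). A minimal sketch (illustrative, not the study's code):

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney identity: the fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```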

  11. Analyzing metabolomics-based challenge test

    NARCIS (Netherlands)

    Vis, D.J.; Westerhuis, J.A.; Jacobs, D.M.; van Duynhoven, J.P.M.; Wopereis, S.; van Ommen, B.; Hendriks, M.M.W.B.; Smilde, A.K.

    2015-01-01

    Challenge tests are used to assess the resilience of human beings to perturbations by analyzing responses to detect functional abnormalities. Well known examples are allergy tests and glucose tolerance tests. Increasingly, metabolomics analysis of blood or serum samples is used to analyze the

  12. Web-Based Testing: Exploring the Relationship between Hardware Usability and Test Performance

    Science.gov (United States)

    Huff, Kyle; Cline, Melinda; Guynes, Carl S.

    2012-01-01

    Web-based testing has recently become common in both academic and professional settings. A web-based test is administered through a web browser. Individuals may complete a web-based test at nearly any time and at any place. In addition, almost any computer lab can become a testing center. It is important to understand the environmental issues that…

  13. Surface and superficial surgical anatomy of the posterolateral cranial base: significance for surgical planning and approach.

    Science.gov (United States)

    Day, J D; Kellogg, J X; Tschabitscher, M; Fukushima, T

    1996-06-01

    We have performed an anatomic study, using 15 fixed cadaveric preparations, with the goal of identifying surface landmarks that will reliably locate the underlying transverse and sigmoid sinus complex. Simple morphometric relationships were first determined on both sides of each specimen, yielding 30 measured sides. The following relationships were determined: 1) zygoma root-asterion, 2) asterion-mastoid tip, 3) zygoma root-suprameatal spine (Henle's spine), 4) asterion-suprameatal spine, 5) mastoid tip-suprameatal spine. The relationship of the asterion to the transverse-sigmoid junction was determined by bone removal. Also, the distances from the asterion to the sigmoid sinus-superior petrosal sinus junction and to the superior margin of the transverse sinus were studied. Surface landmarks were found to have definitive relationships to underlying anatomic substrates in all specimens studied. The critical relationships concluded from this study can be described in terms of two easily identified lines between bony surface structures. A line drawn from the zygoma root to the inion, i.e., the superior nuchal line, reliably located the rostrocaudal level of the transverse sinus in all specimens. Although the asterion did not consistently fall on this line, the transverse-sigmoid junction could reliably be placed at the anteroposterior level of the asterion. Further, a line drawn from the squamosal-parietomastoid suture junction to the mastoid tip reliably defined the axis of the sigmoid sinus through the mastoid. We also found that the junction of the squamosal and parietomastoid sutures lay over the anterior border of the upper curve of the sigmoid sinus. The anterior portion of the supramastoid crest correlated with the level of the middle fossa. These surface relationships all have significance for posterolateral approaches to the cranial base. Since performing this study, these relationships have been found reliable for operative planning in our clinical

  14. Palaeoclimate significance of speleothems in crystalline rocks: a test case from the Late Glacial and early Holocene (Vinschgau, northern Italy)

    Science.gov (United States)

    Koltai, Gabriella; Cheng, Hai; Spötl, Christoph

    2018-03-01

    Partly coeval flowstones formed in fractured gneiss and schist were studied to test the palaeoclimate significance of this new type of speleothem archive on a decadal-to-millennial timescale. The samples encompass a few hundred to a few thousand years of the Late Glacial and the early Holocene. The speleothem fabric is composed primarily of columnar fascicular optic calcite and acicular aragonite, both indicative of elevated Mg/Ca ratios in the groundwater. Stable isotopes suggest that aragonite is more prone to disequilibrium isotope fractionation driven by evaporation and prior calcite/aragonite precipitation than calcite. Changes in mineralogy are therefore attributed to these two internal fracture processes rather than to palaeoclimate. Flowstones formed in the same fracture show similar δ18O changes on centennial scales, which broadly correspond to regional lacustrine δ18O records, suggesting that such speleothems may provide an opportunity to investigate past climate conditions in non-karstic areas. The shortness of overlapping periods in flowstone growth and the complexity of in-aquifer processes, however, render the establishment of a robust stacked δ18O record challenging.

  15. Rule-based Test Generation with Mind Maps

    Directory of Open Access Journals (Sweden)

    Dimitry Polivaev

    2012-02-01

    This paper introduces the basic concepts of rule-based test generation with mind maps, and reports experience gained from industrial application of this technique in the domain of smart card testing by Giesecke & Devrient GmbH over recent years. It describes the formalization of the test selection criteria used by our test generator, our test generation architecture, and our test generation framework.

  16. Validity of selected cardiovascular field-based test among Malaysian ...

    African Journals Online (AJOL)

    Motivated by the emerging obesity problem among Malaysians, this research was formulated to validate published tests among healthy female adults. The selected tests, namely the 20-meter multi-stage shuttle run, the 2.4 km run test, the 1-mile walk test and the Harvard step test, were correlated with a laboratory test (Bruce protocol) to establish criterion validity ...

  17. Chinese Students' Perceptions of the Value of Test Preparation Courses for the TOEFL iBT: Merit, Worth, and Significance

    Science.gov (United States)

    Ma, Jia; Cheng, Liying

    2015-01-01

    Test preparation for high-stakes English language tests has received increasing research attention in the language assessment field; however, little is known about what aspects of test preparation students attend to and value. In this study, we considered the perspectives of 12 Chinese students who were enrolled in various academic programs in a…

  18. Neutron Sources for Standard-Based Testing

    Energy Technology Data Exchange (ETDEWEB)

    Radev, Radoslav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McLean, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-11-10

    The DHS TC Standards and the consensus ANSI Standards use 252Cf as the neutron source for performance testing because its energy spectrum is similar to the 235U and 239Pu fission sources used in nuclear weapons. An emission rate of 20,000 ± 20% neutrons per second is used for testing of the radiological requirements both in the ANSI standards and the TCS. Determination of the accurate neutron emission rate of the test source is important for maintaining consistency and agreement between testing results obtained at different testing facilities. Several characteristics in the manufacture and the decay of the source need to be understood and accounted for in order to make an accurate measurement of the performance of the neutron detection instrument. Additionally, neutron response characteristics of the particular instrument need to be known and taken into account as well as neutron scattering in the testing environment.

  19. Assessment of robustness and significance of climate change signals for an ensemble of distribution-based scaled climate projections

    DEFF Research Database (Denmark)

    Seaby, Lauren Paige; Refsgaard, J.C.; Sonnenborg, T.O.

    2013-01-01

    An ensemble of 11 regional climate model (RCM) projections is analysed for Denmark from the perspective of hydrological modelling inputs. Two bias correction approaches are applied: a relatively simple monthly delta change (DC) method and a more complex daily distribution-based scaling (DBS) method. Differences in the strength and direction of climate change signals are compared across models and between bias correction methods, the statistical significance of climate change is tested as it evolves over the 21st century, and the impact of the choice of reference and change period lengths is analysed as it relates to assumptions of stationarity in the current climate and in the change signals. Both DC and DBS methods are able to capture mean monthly and seasonal climate characteristics in temperature (T), precipitation (P), and potential evapotranspiration (ETpot). For P, which is comparatively more variable by day...
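
The simpler of the two bias-correction approaches, monthly delta change, amounts to perturbing an observed daily series with the model's scenario-minus-control change signal for each month. A minimal sketch in the additive form commonly used for temperature (precipitation normally uses a multiplicative factor instead; names and data layout are illustrative, not from the paper):

```python
def monthly_delta_change(obs_daily, ctrl_mean, scen_mean):
    """Shift each observed daily value by the RCM's monthly change
    signal (scenario mean minus control mean for that month).
    `obs_daily` maps month -> list of daily observations."""
    return {month: [v + (scen_mean[month] - ctrl_mean[month]) for v in values]
            for month, values in obs_daily.items()}
```

The DBS method differs in that it rescales the full daily distribution, not just the monthly mean, which is why it can alter variability as well.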

  20. A Study on the Significance of the Colloidal Radiogold Disappearance Rate as a Simple Clinical Liver Function Test

    International Nuclear Information System (INIS)

    Hong, Chang Gi

    1969-01-01

    Liver function in diffuse parenchymal liver disease, such as cirrhosis of the liver, depends largely on the effective hepatic blood flow rather than on individual cell functions. Clinical methods of measuring the hepatic blood flow were developed recently by applying the colloidal disappearance rate. In order to correlate the radiogold disappearance rate with conventional biochemical liver function tests, 21 normal subjects and 80 cases of cirrhosis of the liver were studied with both methods. The results are summarized as follows: 1) The validity of the external counting method for measuring the blood disappearance rate of colloidal radiogold was confirmed by in vitro counting of serial blood samples. 2) The blood disappearance rate of colloidal radiogold was essentially the same as the liver uptake rate of colloidal radiogold in normal and cirrhotic subjects with various degrees of functional disturbance, and there appeared to be no serious extrahepatic removal of the colloidal radiogold. 3) The disappearance rate of colloidal radiogold was not significantly changed by posture change, but was enhanced by ingestion of 500 ml of water. 4) The disappearance rate of colloidal radiogold was not influenced by a single dose of Telepaque, while BSP retention was increased after Telepaque. 5) The mean disappearance half time of colloidal radiogold in normal subjects was 2.49±0.391 (S.D.) minutes. The mean normal disappearance rate constant (K value) was 0.285±0.0428 (S.D.)/minute. 6) The colloidal radiogold disappearance half time was abnormally prolonged (over 3.2 min) in 87.7±3.68 (S.D.)% of cirrhotic subjects. 7) In patients with liver cirrhosis the blood disappearance rate of colloidal radiogold correlated well with serum albumin and globulin levels and with BSP retention, which are considered to reflect the function of hepatic parenchymal cells. There was, however, no correlation between the colloidal disappearance rate and the thymol turbidity test, serum glutamic pyruvic transaminase

  1. Students Perception on the Use of Computer Based Test

    Science.gov (United States)

    Nugroho, R. A.; Kusumawati, N. S.; Ambarwati, O. C.

    2018-02-01

    Teaching nowadays often relies on technology to disseminate science and knowledge. As part of teaching, the way study progress and results are evaluated has also benefited from rapid progress in IT. The computer-based test (CBT) has been introduced to replace the more conventional paper-and-pencil test (PPT). CBT is considered more advantageous than PPT: it is more efficient, more transparent, and better able to minimise fraud in cognitive evaluation. Current studies reflect an ongoing debate over CBT versus PPT usage, but most compare the two methods without exploring students' perceptions of the tests. This study fills that gap in the literature by providing students' perceptions of the two test methods. A survey approach was used to obtain the data. The sample was collected in two identical classes taking the same subject at a public university in Indonesia. A Mann-Whitney U test was used to analyse the data. The result indicates a significant difference between the two groups of students regarding CBT usage: students tended to prefer a test method other than the one they were currently taking. Further discussion and research implications are presented in the paper.
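
The Mann-Whitney U test used in the study can be sketched with the standard large-sample normal approximation (ignoring tie corrections; a minimal illustration, not the authors' analysis code):

```python
import math

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic with a two-sided p-value from the normal
    approximation; ties between samples count as half a win."""
    n1, n2 = len(a), len(b)
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return u, p
```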

  2. Routine Laboratory Blood Tests May Diagnose Significant Fibrosis in Liver Transplant Recipients with Chronic Hepatitis C: A 10 Year Experience.

    Science.gov (United States)

    Sheen, Victoria; Nguyen, Heajung; Jimenez, Melissa; Agopian, Vatche; Vangala, Sitaram; Elashoff, David; Saab, Sammy

    2016-03-28

    The aims of our study were to determine whether routine blood tests, the aspartate aminotransferase (AST) to Platelet Ratio Index (APRI) and Fibrosis 4 (Fib-4) scores, were associated with advanced fibrosis, and to create a novel model in liver transplant recipients with chronic hepatitis C virus (HCV). We performed a cross-sectional study of patients at The University of California at Los Angeles (UCLA) Medical Center who underwent liver transplantation for HCV. We used linear mixed effects models to analyze the association between fibrosis severity and individual biochemical markers, and mixed effects logistic regression to construct diagnostic models for advanced fibrosis (METAVIR F3-4). Cross-validation was used to estimate a receiver operating characteristic (ROC) curve for the prediction models and to estimate the area under the curve (AUC). The mean (± standard deviation [SD]) age of our cohort was 55 (±7.7) years, and almost three quarters were male. The mean (±SD) time from transplant to liver biopsy was 19.9 (±17.1) months. The mean (±SD) APRI and Fib-4 scores were 3 (±12) and 7 (±14), respectively. Increased fibrosis was associated with lower platelet count and alanine aminotransferase (ALT) values and with higher total bilirubin and Fib-4 scores. We developed a model that takes into account age, gender, platelet count, ALT, and total bilirubin, and this model outperformed APRI and Fib-4 with an AUC of 0.68 (p < 0.001). Our novel prediction model diagnosed the presence of advanced fibrosis more reliably than the APRI and Fib-4 scores. This noninvasive calculation may be used clinically to identify liver transplant recipients with HCV with significant liver damage.
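
Both scores named in the abstract have standard published formulas and can be computed directly from routine labs; a quick sketch (AST and ALT in IU/L, platelets in 10^9/L):

```python
import math

def apri(ast, ast_uln, platelets):
    """AST-to-Platelet Ratio Index:
    (AST / upper limit of normal for AST) / platelet count * 100."""
    return (ast / ast_uln) / platelets * 100

def fib4(age, ast, alt, platelets):
    """Fibrosis-4 index: age (years) * AST / (platelets * sqrt(ALT))."""
    return age * ast / (platelets * math.sqrt(alt))
```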

  3. GPS Device Testing Based on User Performance Metrics

    Science.gov (United States)

    2015-10-02

    1. Rationale for a Test Program Based on User Performance Metrics; 2. Roberson and Associates Test Program; 3. Status of, and Revisions to, the Roberson and Associates Test Program; 4. Comparison of Roberson and DOT/Volpe Programs

  4. Biglan Model Test Based on Institutional Diversity.

    Science.gov (United States)

    Roskens, Ronald W.; Creswell, John W.

    The Biglan model, a theoretical framework for empirically examining the differences among subject areas, classifies them according to three dimensions: adherence to a common set of paradigms (hard or soft), application orientation (pure or applied), and emphasis on living systems (life or nonlife). Tests of the model are reviewed, and a further test is…

  5. Evaluation of the Appendix Base Location in Acute Appendicitis Using Sonography and its Clinical Significance

    International Nuclear Information System (INIS)

    Lee, Kwan Seop; Kim, Min Jeong; Ko, Eun Young; Hong, Myung Sun; Jeon, Eui Yong; Hwang, Hee Sung; Lee, In Jae; Yang, Ik; Lee, Eil Seong; Lee, Bong Hwa

    2005-01-01

    The purpose of this study is to investigate the location of the appendiceal base using sonography in acute appendicitis, and the usefulness of marking the appendiceal base when deciding the incision site for appendectomy. We performed appendix sonography in 813 patients, and 381 patients were diagnosed with acute appendicitis. During sonography, we marked the base of the appendix on the skin of the patient's abdomen. After marking the appendiceal base, we measured the distance from McBurney's point to the appendiceal base. The marking was used by the surgeon as a guide for the appendectomy incision site. Among the 381 patients, we excluded 78 due to non-visualization of the cecoappendiceal junction (n = 6), appendicitis in pregnancy (n = 2), false positive appendicitis (n = 3), and no reply from the surgeon (n = 67). We thus investigated 303 patients prospectively. After the operation, we asked the surgeon whether the appendiceal base marking had been helpful for appendectomy. The base of the appendix was located at McBurney's point in 31% of cases, within 2 cm of McBurney's point in 20%, within 5 cm in 28%, and more than 5 cm away in 21%. As for the usefulness of appendiceal base marking, 95% of cases showed good correlation between the marking and the surgical incision, and 5% showed poor correlation. The base of the appendix was located in diverse areas of the abdomen, although most frequently at McBurney's point or within 2 cm of it. Marking the appendiceal base on the skin of the abdomen after diagnosis of acute appendicitis can be a useful method to guide the surgeon in deciding the surgical incision site

  6. Clinical significance of creative 3D-image fusion across multimodalities [PET + CT + MR] based on characteristic coregistration

    International Nuclear Information System (INIS)

    Peng, Matthew Jian-qiao; Ju Xiangyang; Khambay, Balvinder S.; Ayoub, Ashraf F.; Chen, Chin-Tu; Bai Bo

    2012-01-01

    Objective: To investigate a registration approach based on characteristic 2-dimensional (2D) localization to achieve 3-dimensional (3D) fusion from PET, CT and MR images one by one. Method: A cube-oriented "9-point and 3-plane" co-registration scheme was verified to be geometrically practical. After acquiring DICOM data from PET/CT/MR (using the radiotracer 18F-FDG, etc.), and following 3D reconstruction and virtual dissection, human internal feature points were sorted and combined with preselected external feature points for the matching process. Following the procedure of feature extraction and image mapping, "picking points to form planes" and "picking planes for segmentation" were executed. Eventually, image fusion was implemented on the real-time workstation Mimics using auto-fusion techniques, the so-called "information exchange" and "signal overlay". Result: The 2D and 3D images fused across the modality combinations [CT + MR], [PET + MR], [PET + CT] and [PET + CT + MR] were tested on data from patients suffering from tumors. Complementary 2D/3D images simultaneously presenting metabolic activities and anatomic structures were created, with detection rates of 70%, 56%, 54% (or 98%) and 44%, respectively, with no statistically significant difference among them. Conclusion: Given that no fully integrated triple-modality [PET + CT + MR] hybrid detector currently exists internationally, this sort of multi-modality fusion is doubtlessly an essential complement to the existing single-modality imaging functions.

  7. Multi-objective Search-based Mobile Testing

    OpenAIRE

    Mao, K.

    2017-01-01

    Despite the tremendous popularity of mobile applications, mobile testing still relies heavily on manual testing. This thesis presents mobile test automation approaches based on multi-objective search. We introduce three approaches: Sapienz (for native Android app testing), Octopuz (for hybrid/web JavaScript app testing) and Polariz (for using crowdsourcing to support search-based mobile testing). These three approaches represent the primary scientific and technical contributions of the thesis...

  8. Finding of No Significant Impact: Military Family Housing Revitalization Project Nellis Air Force Base, Nevada

    Science.gov (United States)

    2005-02-01

    ...potable water supply from two main sources: nine base wells and the Southern Nevada Water Authority (SNWA). A small amount of water is also purchased... chloride (PVC) (Headquarters Air Combat Command, 2001). There are eight potable water storage tanks on the base with a total capacity of 5 million...

  9. Evaluating the significance of protein functional similarity based on gene ontology.

    Science.gov (United States)

    Konopka, Bogumil M; Golda, Tomasz; Kotulska, Malgorzata

    2014-11-01

    Gene ontology is among the most successful ontologies in the biomedical domain. It is used to describe, unambiguously, protein molecular functions, cellular localizations, and processes in which proteins participate. The hierarchical structure of gene ontology allows protein functional similarity to be quantified by algorithms that calculate semantic similarities. The scores, however, are meaningless without a given context. Here, we propose how to evaluate the significance of protein function semantic similarity scores by comparing them to reference distributions calculated for randomly chosen proteins. In the study, thresholds for significant functional semantic similarity were estimated in four representative annotation corpora. We also show that score significance is influenced by the number and specificity of the gene ontology terms annotated to the compared proteins. While proteins with a greater number of terms tend to yield higher similarity scores, proteins with more specific terms produce lower scores. The estimated significance thresholds were validated using protein sequence-function and structure-function relationships. Taking the term number and term specificity into account improves the distinction between significant and insignificant semantic similarity comparisons.
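
    The evaluation idea — judging an observed semantic-similarity score against a reference distribution computed for randomly chosen protein pairs — can be sketched generically. The "similarity scores" below are synthetic placeholders drawn from a Beta distribution, not a real GO semantic-similarity measure.

```python
import random

def empirical_p_value(observed, null_scores):
    """Fraction of randomly paired proteins scoring at least as high as
    the observed pair: a one-sided empirical p-value, with the usual +1
    correction so p is never exactly zero."""
    ge = sum(1 for s in null_scores if s >= observed)
    return (ge + 1) / (len(null_scores) + 1)

random.seed(0)
# placeholder null distribution: similarity scores of 10,000 random protein pairs
null_scores = [random.betavariate(2, 5) for _ in range(10_000)]

p = empirical_p_value(0.85, null_scores)
print(p < 0.05)   # a score of 0.85 stands well above the random background
```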

  10. Significance of Iron(II,III) Hydroxycarbonate Green Rust in Arsenic Remediation Using Zerovalent Iron in Laboratory Column Tests

    Science.gov (United States)

    We examined the corrosion products of zerovalent iron used in three column tests for removing arsenic from water under dynamic flow conditions. Each column test lasted three to four months, using columns consisting of a 10.3-cm depth of 50 : 50 (w : w, Peerless iron : sand) in t...

  11. Analysing Test-Takers' Views on a Computer-Based Speaking Test

    Science.gov (United States)

    Amengual-Pizarro, Marian; García-Laborda, Jesús

    2017-01-01

    This study examines test-takers' views on a computer-delivered speaking test in order to investigate the aspects they consider most relevant in technology-based oral assessment, and to explore the main advantages and disadvantages computer-based tests may offer as compared to face-to-face speaking tests. A small-scale open questionnaire was…

  12. Significance of life table estimates for small populations: Simulation-based study of estimation errors

    Directory of Open Access Journals (Sweden)

    Sergei Scherbov

    2011-03-01

    Full Text Available We study bias, standard errors, and distributions of characteristics of life tables for small populations. Theoretical considerations and simulations show that the statistical efficiency of different methods is, above all, affected by population size, yet it is also significantly affected by the life table construction method and by a population's age composition. Study results are presented in the form of ready-to-use tables and relations, which may be useful in assessing the significance of estimates and of differences in life expectancy across time and space for territories with small populations, where the standard errors of life expectancy estimates may be high.
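
    The simulation approach can be sketched as follows: age-specific death probabilities are re-estimated from binomially simulated death counts, and the spread of the resulting life expectancies measures the estimation error. The mortality schedule, population sizes, and the crude mid-interval death assumption below are invented for illustration, not the paper's life table construction methods.

```python
import numpy as np

def life_expectancy(qx):
    """Life expectancy at birth from a vector of age-specific death
    probabilities qx, with deaths assumed (crudely) at mid-interval.
    The table is truncated at the last age; survivors beyond it are ignored."""
    lx, e0 = 1.0, 0.0
    for q in qx:
        dx = lx * q
        e0 += lx - dx / 2          # person-years lived in the interval
        lx -= dx
    return e0

def simulate_e0(qx_true, pop_per_age, n_sim=2000, seed=1):
    """Sampling distribution of estimated e0 when each qx is re-estimated
    from binomially simulated death counts in a population of the given size."""
    rng = np.random.default_rng(seed)
    qx_true = np.asarray(qx_true)
    deaths = rng.binomial(pop_per_age, qx_true, size=(n_sim, len(qx_true)))
    return np.array([life_expectancy(q) for q in deaths / pop_per_age])

# toy mortality schedule rising with age (ages 0-89)
qx_true = [0.002 + 0.0001 * 1.1**a for a in range(90)]
small = simulate_e0(qx_true, pop_per_age=50)
large = simulate_e0(qx_true, pop_per_age=5000)
print(small.std() > large.std())   # smaller population, larger standard error
```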

  13. A new direction for prenatal chromosome microarray testing: software-targeting for detection of clinically significant chromosome imbalance without equivocal findings.

    Science.gov (United States)

    Ahn, Joo Wook; Bint, Susan; Irving, Melita D; Kyle, Phillipa M; Akolekar, Ranjit; Mohammed, Shehla N; Mackie Ogilvie, Caroline

    2014-01-01

    Purpose. To design and validate a prenatal chromosomal microarray testing strategy that moves away from size-based detection thresholds towards a more clinically relevant analysis, providing higher resolution than G-banded chromosomes but avoiding the detection of copy number variants (CNVs) of unclear prognosis that cause parental anxiety. Methods. All prenatal samples fulfilling our criteria for karyotype analysis (n = 342) were tested by chromosomal microarray; only CNVs in established deletion/duplication syndrome regions, and any other CNV >3 Mb, were detected and reported. A retrospective full-resolution analysis of 249 of these samples was carried out to ascertain the performance of this testing strategy. Results. Using our prenatal analysis, 23/342 (6.7%) samples were found to be abnormal. Of the remaining samples, 249 were anonymized and reanalyzed at full resolution; a further 46 CNVs were detected in 44 of these cases (17.7%). None of these additional CNVs was of clear clinical significance. Conclusion. This prenatal chromosomal microarray strategy detected all CNVs of clear prognostic value and missed none of clear clinical significance, avoiding both the problems associated with interpreting CNVs of uncertain prognosis and the parental anxiety that such findings cause.
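
    The reporting rule described — flag a CNV only if it overlaps an established deletion/duplication syndrome region or exceeds 3 Mb — can be sketched as a simple filter. The region coordinates below are illustrative placeholders, not the authors' curated list.

```python
def report_cnv(cnv, syndrome_regions):
    """Targeted reporting rule: a CNV (chrom, start, end) is reported only
    if it overlaps an established syndrome region or is larger than 3 Mb."""
    chrom, start, end = cnv
    if end - start > 3_000_000:
        return True
    return any(c == chrom and start < e and end > s
               for c, s, e in syndrome_regions)

# hypothetical syndrome interval on chromosome 22 (coordinates illustrative)
regions = [("22", 18_900_000, 21_500_000)]
print(report_cnv(("22", 19_000_000, 20_500_000), regions))  # inside a syndrome region
print(report_cnv(("5", 1_000_000, 1_400_000), regions))     # small CNV elsewhere: not reported
```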

  14. Specification-Based Testing Via Domain Specific Language

    Science.gov (United States)

    Sroka, Michal; Nagy, Roman; Fisch, Dominik

    2014-12-01

    The article presents tCF (testCaseFramework), a domain-specific language with a corresponding toolchain for specification-based testing of embedded software. tCF is designed for the efficient preparation of maintainable and intelligible test cases and for test-process automation, as it allows platform-specific test cases to be generated for various testing levels. The article describes the essential parts of the tCF meta-model and the applied concept of platform-specific test case generators.

  15. Accuracy of multidetector spiral computed tomography in detecting significant coronary stenosis in patient populations with differing pre-test probabilities of disease

    International Nuclear Information System (INIS)

    Pontone, G.; Andreini, D.; Quaglia, C.; Ballerini, G.; Nobili, E.; Pepi, M.

    2007-01-01

    Aim: To investigate the clinical impact of multidetector computed tomography (MDCT) in patients with a low versus a high pre-test likelihood of coronary artery disease (CAD). Materials and methods: A cohort of 120 patients with suspected CAD, scheduled for conventional coronary angiography, underwent MDCT. Using the American Heart Association (AHA)/American College of Cardiology (ACC) guidelines, the population was divided into two groups: patients with a low (group 1) and a high (group 2) likelihood of CAD. Results: Analysis of all segments showed high feasibility (92%), and a patient-based model showed excellent sensitivity and negative predictive value (NPV; both 100%) and acceptable specificity and positive predictive value (PPV; 86 and 90%, respectively), with an accuracy of 94%. Using MDCT in patients with a lower pre-test likelihood of CAD, according to the ACC/AHA guidelines, accuracy remained high (93%); conversely, in the patient group with a high prevalence of CAD, a non-significant reduction in accuracy (85%) occurred. In particular, MDCT can be used effectively to exclude a diagnosis of CAD because of its high sensitivity and NPV (100%), but it shows a significant reduction in specificity (58%). This reduction was due to an increase in the false-positive:true-negative ratio, caused by the higher percentage of calcified plaque (a relative but non-significant increase in false positives) and the high prevalence of CAD (a significant reduction in true negatives). No differences were found between MDCT and quantitative coronary angiography (QCA) in the number of narrowed vessels. Conclusion: Because of its excellent sensitivity and specificity in patients with a low pre-test likelihood of CAD, MDCT could be helpful in clinical decision-making in this population.
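
    The abstract's central statistical point — that predictive values shift with pre-test likelihood even when sensitivity and specificity are fixed — follows directly from Bayes' theorem. The sketch below uses sensitivity and specificity figures of the order reported for MDCT purely as illustrative inputs.

```python
def predictive_values(sens, spec, prevalence):
    """PPV and NPV from sensitivity, specificity and pre-test probability
    (Bayes' theorem applied to a notional cohort of size 1)."""
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    tn = spec * (1 - prevalence)
    fn = (1 - sens) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

sens, spec = 1.00, 0.86    # illustrative, in the ballpark reported for MDCT
for prev in (0.10, 0.50, 0.90):
    ppv, npv = predictive_values(sens, spec, prev)
    print(f"pre-test probability {prev:.0%}: PPV {ppv:.2f}, NPV {npv:.2f}")
```

    With perfect sensitivity, NPV stays at 1.00 regardless of prevalence, while PPV climbs steeply as pre-test probability rises — the same asymmetry the study observed between its low- and high-likelihood groups.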

  16. The Welsh study of mothers and babies: protocol for a population-based cohort study to investigate the clinical significance of defined ultrasound findings of uncertain significance.

    Science.gov (United States)

    Hurt, Lisa; Wright, Melissa; Brook, Fiona; Thomas, Susan; Dunstan, Frank; Fone, David; John, Gareth; Morris, Sue; Tucker, David; Wills, Marilyn Ann; Chitty, Lyn; Davies, Colin; Paranjothy, Shantini

    2014-05-08

    Improvement in ultrasound imaging has led to the identification of subtle non-structural markers during the 18-20 week fetal anomaly scan, such as echogenic bowel, mild cerebral ventriculomegaly, renal pelvicalyceal dilatation, and nuchal thickening. These markers are estimated to occur in 0.6% to 4.3% of pregnancies. Their clinical significance, for pregnancy outcomes or childhood morbidity, is largely unknown. The aim of this study is to estimate the prevalence of seven markers in the general obstetric population and to establish a cohort of children for longer-term follow-up to assess the clinical significance of these markers. All women receiving antenatal care within six of seven Welsh Health Boards who had an 18-20 week ultrasound scan in Welsh NHS Trusts between July 2008 and March 2011 were eligible for inclusion. Data were collected on seven markers (echogenic bowel, cerebral ventriculomegaly, renal pelvicalyceal dilatation, nuchal thickening, cardiac echogenic foci, choroid plexus cysts, and short femur) at the time of the 18-20 week fetal anomaly scan. Ultrasound records were linked to routinely collected data on pregnancy outcomes (work completed during 2012 and 2013). Images were stored and reviewed by an expert panel. The prevalence of each marker (reported and validated) will be estimated. A projected sample size of 23,000 will allow the prevalence of each marker to be estimated with the following precision: a marker with 0.50% prevalence to within 0.10%; a marker with 1.00% prevalence to within 0.13%; and a marker with 4.50% prevalence to within 0.27%. The relative risk of major congenital abnormalities, stillbirths, pre-term birth and small for gestational age, given the presence of a validated marker, will be reported. This is a large, prospective study designed to estimate the prevalence of markers in a population-based cohort of pregnant women and to investigate associations with adverse pregnancy outcomes. The study will also ...
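
    The quoted precision figures follow from the normal-approximation 95% confidence half-width for a proportion, z·sqrt(p(1-p)/n) with z = 1.96. The sketch below rechecks the three quoted cases against n = 23,000; agreement is to within rounding.

```python
from math import sqrt

def ci_half_width(p, n, z=1.96):
    """Half-width of the normal-approximation 95% confidence interval
    for a prevalence p estimated from a sample of size n."""
    return z * sqrt(p * (1 - p) / n)

n = 23_000
for p, quoted in [(0.0050, 0.0010), (0.0100, 0.0013), (0.0450, 0.0027)]:
    hw = ci_half_width(p, n)
    print(f"prevalence {p:.2%}: half-width {hw:.4%} (quoted: within {quoted:.2%})")
```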

  17. Evidence-Based Clinical Significance in Health Care: Toward an Inferential Analysis of Clinical Relevance

    Directory of Open Access Journals (Sweden)

    Mahsa Dousti

    2011-09-01

    Full Text Available Evidence-based dental practice requires the development and evaluation of protocols that ensure translational effectiveness: that is, the efficient incorporation of the best available efficacy and effectiveness findings into specific clinical dentistry settings and environments. Evidence-based dentistry predicates the synthesis of research for obtaining the best available evidence in a validated, stringent, systematic and unbiased fashion. Research synthesis is now established as a science in its own right, precisely because it adheres to the scientific process: it is driven by a research question and a hypothesis, follows a clearly defined methodology and design, and yields quantifiable data that are analyzed statistically and from which stringent statistical inferences are drawn. The conclusions of the research-synthesis protocol define the best available evidence, which is used in the process of evidence-based revision of clinical practice guidelines. One important hurdle in applying research synthesis to evidence-based dentistry lies in the fact that the statistical inferences produced by research must be translated into clinical relevance. Here, we present a model to circumvent this limitation by means of text analysis/mining protocols, which could lead toward a novel, valid and reliable approach for the inferential analysis of clinical relevance.

  18. Community-based enterprises: The significance of partnerships and institutional linkages

    NARCIS (Netherlands)

    Seixas, Cristiana Simão; Berkes, Fikret

    2010-01-01

    Community-based institutions used to be driven by local needs, but in recent decades, some of them have been responding to national and global economic opportunities. These cases are of interest because they make it possible to investigate how local institutions can evolve in response to new

  19. Website-based PNG image steganography using the modified Vigenere Cipher, least significant bit, and dictionary based compression methods

    Science.gov (United States)

    Rojali, Salman, Afan Galih; George

    2017-08-01

    Along with the development of information technology, various adverse and hard-to-avoid actions are emerging; one such action is data theft. This study therefore discusses cryptography and steganography techniques that aim to overcome this problem, using the modified Vigenère Cipher, Least Significant Bit and Dictionary Based Compression methods. To evaluate performance, the Peak Signal to Noise Ratio (PSNR) is used as an objective measure and the Mean Opinion Score (MOS) as a subjective one, and the results are compared to other methods such as Spread Spectrum and Pixel Value Differencing. After comparison, it can be concluded that this study provides better performance than the other methods (Spread Spectrum and Pixel Value Differencing), with a range of MSE values (0.0191622-0.05275) and PSNR (60.909 to 65.306) for a hidden file size of 18 kb, and a MOS range (4.214 to 4.722), i.e. image quality approaching "very good".
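
    The least-significant-bit step of such a scheme can be sketched in a few lines: each payload bit replaces the lowest bit of one pixel byte, so the per-sample change is at most 1 and PSNR stays high. This is a generic LSB illustration on a synthetic byte array, not the paper's combined Vigenère/LSB/compression pipeline for PNG images.

```python
import math

def lsb_embed(pixels, payload):
    """Hide payload bytes in the least significant bits of pixel bytes."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "cover too small for payload"
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit          # overwrite only the lowest bit
    return stego

def lsb_extract(pixels, n_bytes):
    """Read the payload back from the low bits, MSB-first per byte."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )

def psnr(orig, stego):
    """Peak signal-to-noise ratio in dB for 8-bit samples."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, stego)) / len(orig)
    return float("inf") if mse == 0 else 10 * math.log10(255**2 / mse)

cover = [(i * 37) % 251 for i in range(4096)]     # synthetic 8-bit "image"
secret = b"hidden"
stego = lsb_embed(cover, secret)
print(lsb_extract(stego, len(secret)))            # recovers the payload
print(psnr(cover, stego) > 48)                    # LSB noise is tiny
```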

  20. Significantly enhanced base activation of peroxymonosulfate by polyphosphates: Kinetics and mechanism.

    Science.gov (United States)

    Lou, Xiaoyi; Fang, Changling; Geng, Zhuning; Jin, Yuming; Xiao, Dongxue; Wang, Zhaohui; Liu, Jianshe; Guo, Yaoguang

    2017-04-01

    Base activation of peroxydisulfate (PDS) is a common process for water treatment, but it requires high doses of PDS and strongly basic solutions. Peroxymonosulfate (PMS), another peroxysulfate related to PDS, may also be activated in a less basic solution; however, enhancing base-PMS reactivity remains challenging. Here it is reported that pyrophosphate (PA) and tripolyphosphate (PB) can efficiently enhance PMS activation under weakly alkaline conditions (pH 9.5) via the formation of the superoxide anion radical (O2•-) and singlet oxygen (1O2). The rate constant of Acid Orange 7 (AO7) degradation in the PA/PMS system (k(PA/PMS)) was nearly 4.4-15.9-fold higher than that in the PMS/base system without any polyphosphates (k(PMS/base)). Increases in PA (or PB) concentration, PMS dose and pH favored rapid dye degradation. Gas chromatography-mass spectrometry (GC-MS) data confirmed that AO7 and 2,4,6-trichlorophenol (2,4,6-TCP) were decomposed into a series of organic intermediates. Radical quenching and probe oxidation experiments indicate that the degradation of organic compounds in the PA/PMS and PB/PMS processes relied not on the sulfate radical (SO4•-) or hydroxyl radical (•OH) but on the O2•- and 1O2 reactive species. Comparison experiments show that the polyphosphate/PMS process was much more favorable than the PDS/base process. The present work provides a novel way to activate PMS for contaminant removal using industrial polyphosphate wastewaters. Copyright © 2017 Elsevier Ltd. All rights reserved.
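
    Rate constants such as k(PA/PMS) are typically obtained by fitting dye decay to pseudo-first-order kinetics, ln(C0/Ct) = k·t. The sketch below fits such a constant by least squares on synthetic, noise-free concentration data — not the paper's measurements.

```python
import math

def first_order_k(times, concentrations):
    """Least-squares slope of ln(C0/Ct) versus t: the pseudo-first-order
    rate constant, assuming C(t) = C0 * exp(-k t)."""
    c0 = concentrations[0]
    y = [math.log(c0 / c) for c in concentrations]
    n = len(times)
    tbar, ybar = sum(times) / n, sum(y) / n
    num = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

t = [0, 5, 10, 15, 20, 30]                       # minutes (synthetic)
k_true = 0.12                                    # per minute (assumed)
c = [1.0 * math.exp(-k_true * ti) for ti in t]   # noise-free dye decay
k_fit = first_order_k(t, c)
print(abs(k_fit - k_true) < 1e-9)                # exact data recovers the exact slope
```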

  1. Significance of pregnancy test false negative results due to elevated levels of β-core fragment hCG.

    Science.gov (United States)

    Johnson, Sarah; Eapen, Saji; Smith, Peter; Warren, Graham; Zinaman, Michael

    2017-01-01

    Very high levels of β-core fragment human chorionic gonadotrophin (βcf-hCG) are reported to potentially cause false negative results in point-of-care (POC)/over-the-counter (OTC) pregnancy tests. To investigate this further, women's daily early morning urine samples, collected prior to conception and during pregnancy, were analysed for intact, free β-, and βcf-hCG. The proportion of βcf-hCG was found to be related to that of hCG produced and in circulation. Therefore, best practice for accuracy testing of POC/OTC pregnancy tests would be to test devices against clinical samples containing high levels of βcf-hCG as well as standards spiked with biologically relevant ratios.

  2. Likelihood based testing for no fractional cointegration

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    ... The standard cointegration analysis only considers the assumption that deviations from equilibrium are integrated of order zero, which is very restrictive in many cases and may imply an important loss of power in the fractional case. We consider alternative hypotheses with equilibrium deviations ... that can be mean reverting with an order of integration possibly greater than zero. Moreover, the degree of fractional cointegration is not assumed to be known, and the asymptotic null distribution of both tests is found when considering an interval of possible values. The power of the proposed tests under ...

  3. Validation of computational fluid dynamics-based analysis to evaluate hemodynamic significance of access stenosis.

    Science.gov (United States)

    Hoganson, David M; Hinkel, Cameron J; Chen, Xiaomin; Agarwal, Ramesh K; Shenoy, Surendra

    2014-01-01

    Stenosis in a vascular access circuit is the predominant cause of access dysfunction, yet the hemodynamic significance of a stenosis identified by angiography in an access circuit is uncertain. This study uses computational fluid dynamics (CFD) to model flow through arteriovenous fistulas and predict the functional significance of stenoses in vascular access circuits. Three-dimensional models of fistulas were created in SolidWorks with a range of clinically relevant stenoses: diameters from 1.0 to 3.0 mm and lengths from 5 to 60 mm within a fistula diameter of 7 mm. CFD analyses were performed using a blood model over a range of blood pressures. Eight patient-specific stenoses were also modeled and analyzed with CFD, and the resulting blood flow calculations were validated by comparison with brachial artery flow measured by duplex ultrasound. For five of the eight patients, the CFD-calculated flow rate was within 20% of the ultrasound-measured flow; the mean difference was 17.2% (range 1.3% to 30.1%). Flow rate tables generated by CFD analysis provide valuable information for assessing the functional significance of stenoses detected during imaging studies. The CFD study can help determine the clinical relevance of a stenosis in access dysfunction and guide the need for intervention.
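
    Although the study used full 3-D CFD, the dominant effect of stenosis diameter on flow can be illustrated with the much cruder Hagen-Poiseuille estimate, in which a tube segment's resistance scales as length/radius⁴. All numbers below (viscosity, pressure drop, lengths) are illustrative assumptions, not the paper's model or patient data.

```python
import math

def poiseuille_resistance(length_m, radius_m, mu=3.5e-3):
    """Laminar (Hagen-Poiseuille) flow resistance of a straight tube;
    mu is an assumed blood viscosity in Pa*s."""
    return 8 * mu * length_m / (math.pi * radius_m**4)

def fistula_flow(stenosis_d_mm, stenosis_len_mm, dp_pa=8000,
                 total_len_mm=100, vessel_d_mm=7.0):
    """Flow (m^3/s) through a 7 mm vessel containing one stenotic segment,
    treated as two laminar resistances in series."""
    r_sten = poiseuille_resistance(stenosis_len_mm * 1e-3,
                                   stenosis_d_mm / 2 * 1e-3)
    r_rest = poiseuille_resistance((total_len_mm - stenosis_len_mm) * 1e-3,
                                   vessel_d_mm / 2 * 1e-3)
    return dp_pa / (r_sten + r_rest)

q_mild = fistula_flow(stenosis_d_mm=3.0, stenosis_len_mm=10)
q_tight = fistula_flow(stenosis_d_mm=1.0, stenosis_len_mm=10)
print(q_mild / q_tight > 10)  # r^4 dependence: a 1 mm stenosis throttles flow hard
```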

  4. [Different significance in normal subjects and in glaucoma patients tested with Optopol PTS-910, in the glaucoma program].

    Science.gov (United States)

    Dascalu, Ana Maria; Cherecheanu, Alina Popa; Stana, Daniela; Serban, Dragoş

    2013-01-01

    To quantify the inter-test variability (dB) of the Optopol PTS automated perimeter, Glaucoma Fast threshold program. A prospective study was performed on 166 glaucomatous patients and a control group of 30 normal subjects, examined by complete ophthalmological exam and automated perimetry (Optopol PTS-910). The visual field was tested weekly for 4 consecutive weeks, and visual field defects were classified according to the Aulhorn-Karmeyer descriptive scale. For the control group, the mean inter-test variability was 1.57 +/- 0.24 dB, lower next to fixation and increasing towards the 50-degree isopter. The mean inter-test variability increased with the perimetric stage: 1.57 +/- 0.66 dB for pre-perimetric glaucoma, 2.13 +/- 1.04 dB for the non-specific defects group, 3.23 +/- 1.01 dB for stage 1, 3.52 +/- 2.61 dB for stage 2, 3.65 +/- 1.19 dB for stage 3, and 5.82 +/- 1.67 dB for stage 4. In cases of pre-perimetric glaucoma and non-specific defects, a variability profile similar to that of normal subjects was observed. For stages 2-4, the areas of maximum inter-test variability shifted towards the relative scotoma and the surrounding area. A better description of inter-test variability, and of how this intricate parameter of retinal light sensitivity evolves, is useful for the differential diagnosis between real change and "background noise" in the early detection of functional progression in glaucoma.

  5. Transfer path based tyre absorption tests

    NARCIS (Netherlands)

    Tijs, E.; Makwana, B.K.; Peksel, O.; Amarnath, S.K.P.; Bekke, Dirk; Krishnan, K.S.

    2013-01-01

    The development process of a tyre usually involves a combination of simulation and testing techniques focused on characterizing acoustic/aerodynamic and vibrational phenomena. One acoustic phenomenon of interest is the absorption of the tyre, which affects the radiated sound. This property...

  6. Assessment of human contributions to significant trends based on LWR past operating experience

    International Nuclear Information System (INIS)

    Sabri, Z.A.; Husseiny, A.A.; Casto, W.

    1982-01-01

    Significance criteria for human-initiated events and for events affected by human deficiencies are developed. Inadequacies in human performance of mental and motor tasks are used to categorize the nature of human deficiencies into perception, cognition, decision, response and behavior. These criteria and categories are incorporated in a classification scheme, TRACE, which presents information extracted from LERs in matrix form to systematically relate event consequences to causes, identify the systems/components affected by or involved in an event, and indicate possible impacts on component redundancy

  7. 76 FR 65579 - Certain High Production Volume Chemicals; Test Rule and Significant New Use Rule; Fourth Group of...

    Science.gov (United States)

    2011-10-21

    ... exemption applications, and includes information about the submission of study plans and how to modify test...)? 3. Computational toxicology. The U.S. National Academy of Sciences National Research Council in... Coefficient: Method A (40 CFR 799.6755--shake flask); Method B (ASTM E 1147-92(2005)--liquid chromatography...

  8. Significance and application of microbial toxicity tests in assessing ecotoxicological risks of contaminants in soil and sediment

    NARCIS (Netherlands)

    Beelen P van; Doelman P; ECO

    1996-01-01

    Micro-organisms are vital for soil fertility and for the degradation of organic matter and pollutants in soils and sediments. Owing to their function and ubiquitous presence, the microflora can act as an environmentally highly relevant indicator of pollution. Microbial tests should be used in a discriminating manner...

  9. Architecturally Significant Requirements Identification, Classification and Change Management for Multi-tenant Cloud-Based Systems

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Probst, Christian W.

    2017-01-01

    ... identification, classification, and change management challenges. We have explored findings from systematic as well as structured reviews of the literature on quality requirements of cloud-based systems, including but not limited to security, availability, scalability, privacy, and multi-tenancy. We have presented a framework for requirements classification and change management focusing on distributed Platform as a Service (PaaS) and Software as a Service (SaaS) systems, as well as complex software ecosystems that are built using PaaS and SaaS, such as Tools as a Service (TaaS). We have demonstrated ... the real-time status of the tenant-specific architecture quality requirements can be monitored and system configurations can be adjusted accordingly. For systems that can be used by multiple tenants, the requirements change management framework should consider whether the addition or modification (triggered ...

  10. Lipidomics in translational research and the clinical significance of lipid-based biomarkers.

    Science.gov (United States)

    Stephenson, Daniel J; Hoeferlin, L Alexis; Chalfant, Charles E

    2017-11-01

    Lipidomics is a rapidly developing field of study that focuses on the identification and quantitation of the various lipid species in the lipidome. Lipidomics has emerged at the forefront of scientific research due to the importance of lipids in metabolism, cancer, and disease. Using both targeted and untargeted mass spectrometry as analytical tools, the field has progressed rapidly in the last decade. The ability to assess these small molecules in vivo has led to a better understanding of several lipid-driven mechanisms and to the identification of lipid-based biomarkers in neurodegenerative disease, cancer, sepsis, wound healing, and pre-eclampsia. Biomarker identification and mechanistic understanding of specific lipid pathways linked to a disease's pathology can form the foundation for the development of novel therapeutics in hopes of curing human disease. Published by Elsevier Inc.

  11. Thymoma size significantly affects the survival, metastasis and effectiveness of adjuvant therapies: a population based study

    Science.gov (United States)

    Bian, Dongliang; Zhou, Feng; Yang, Weiguang; Zhang, Kaixuan; Chen, Linsong; Jiang, Gening; Zhang, Peng; Wu, Chunyan; Fei, Ke; Zhang, Lei

    2018-01-01

    Background: Thymoma, though a rare tumor, is the most common tumor of the anterior mediastinum; tumor size, a critical factor, has been underestimated. Results: Age, advanced tumor stage, and preoperative radiotherapy were poor prognostic factors for overall survival (OS) and disease-specific survival (DSS) (P ...). ... thymoma patients were enrolled from the Surveillance, Epidemiology, and End Results (SEER) database. Survival by thymoma size and other tumor characteristics was analyzed by univariate and multivariate analysis, the correlation between thymoma size and metastatic status was assessed by logistic regression analysis, and the efficiency of adjuvant therapy was assessed by stratification analysis. Conclusions: Thymoma size could predict postoperative survival and guide the chemotherapeutic regimens of patients. Larger tumor size indicated worse survival and a higher metastatic rate. If the thymoma is smaller than 90 mm, traditional chemotherapy should be avoided, while chemotherapy could be performed, with moderation, when the thymoma is larger than 90 mm. PMID:29552309

  12. Improvement of testing and maintenance based on fault tree analysis

    International Nuclear Information System (INIS)

    Cepin, M.

    2000-01-01

    Testing and maintenance of safety equipment is an important issue that contributes significantly to the safe and efficient operation of a nuclear power plant. This paper presents a method that extends the classical fault tree with time. Its mathematical model is a set of equations that include time requirements defined in the house event matrix: a representation of house events switched on and off at discrete points in time, which switches parts of the fault tree on and off in accordance with the status of the plant configuration. The time-dependent top event probability is calculated by fault tree evaluation, and the arrangement of component outages is determined by minimizing the mean system unavailability. The results show that applying the method may improve the scheduling of testing and maintenance activities for safety equipment. (author)
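
    The method's core quantity — a time-dependent top event probability driven by which components are switched out at each time step — can be sketched for a toy two-train system. The failure data, test schedule, and the simple two-input AND gate below are invented for illustration; they are not the paper's fault tree model.

```python
def train_unavailability(t, offset, interval, test_len, lam):
    """q(t) for a standby train tested at offset, offset+interval, ...:
    q = 1 while the train is out for its test, otherwise the standby
    failure model lambda * (time since last test)."""
    if t < offset:
        return lam * t                       # not yet tested since startup
    since = (t - offset) % interval
    return 1.0 if since < test_len else lam * since

def mean_top_unavailability(offset_b, horizon=1000, interval=100,
                            test_len=2, lam=1e-3):
    """Top event = both redundant trains failed (a 2-input AND gate).
    The test schedule plays the role of a house event matrix: at each
    discrete time it determines which train the tree sees as switched out."""
    total = 0.0
    for t in range(horizon):
        qa = train_unavailability(t, 0, interval, test_len, lam)
        qb = train_unavailability(t, offset_b, interval, test_len, lam)
        total += qa * qb                     # independent trains
    return total / horizon

simultaneous = mean_top_unavailability(offset_b=0)
staggered = mean_top_unavailability(offset_b=50)
print(staggered < simultaneous)   # staggering the outages lowers mean unavailability
```

    Minimizing mean top-event unavailability over the outage offsets, as in the sketch, is the same optimization the paper applies to arrange component outages.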

  13. Significance of the 'bow and lean test' for the diagnosis of benign horizontal semicircular canal paroxysmal positional vertigo

    Directory of Open Access Journals (Sweden)

    Ying CHEN

    2012-10-01

    Full Text Available Objective: To assess the positive rate and accuracy of the "bow and lean test" in horizontal semicircular canal benign paroxysmal positional vertigo (HSC-BPPV). Methods: Ninety-two HSC-BPPV patients diagnosed by the head roll test (HRT) between Oct 1, 2010 and Sep 30, 2011 were enrolled and further examined with the "bow and lean test" (BLT). They were treated with the Barbecue maneuver or the Brandt-Daroff exercise on the basis of the HRT and BLT results. The positive rate of the BLT was analyzed, and its diagnostic accuracy and treatment success rate for HSC-BPPV were compared with those of the HRT. Results: Among the 92 patients, 83 (90.2%) showed BLT nystagmus: 57 of 83 (68.7%) showed both bowing and leaning nystagmus, while 18 (21.7%) showed bowing nystagmus alone and 8 (9.6%) leaning nystagmus alone. In 74 of the 92 patients (80.4%), the affected side could be determined by HRT; 69 of these were BLT-positive and 5 BLT-negative. Among the 69 BLT-positive patients, 60 showed the same result as the HRT, and manipulation was successful. In 9 patients the BLT and HRT results differed; in these, manipulation guided by the HRT failed but succeeded when performed according to the BLT. In 18 patients (19.6%) the affected side could not be determined by HRT, but in 14 of them manipulation based on the BLT result was successful. In 4 patients the BLT failed to evoke nystagmus, but after practicing the Brandt-Daroff exercise, vertigo and HRT nystagmus disappeared 3 days later. Overall, 65 of the 92 patients (70.7%) were cured when treatment was guided by the HRT, versus 83 (90.2%) when guided by the BLT (P < 0.05). Conclusion: The positive rate and accuracy of the BLT for HSC-BPPV are high. It is a useful method for determining the affected side in HSC-BPPV and provides a basis for selecting effective repositioning treatment.

  14. Radiobiological significance of radioactive contamination - summary assessment based on great number of measurements

    International Nuclear Information System (INIS)

    Angelov, V.; Bonchev, Ts.; Mavrodiev, V.; Kyrdzhilov, N.

    1995-01-01

    In order to facilitate quantitative and qualitative characterisation of radioactive contamination, the authors introduce a relative estimate of radionuclide activity by taking the most abundant element, Co-60 in the case of the Kozloduy NPP, as a reference. The ratio η_i of the mean annual permissible concentration in air for each radionuclide (RPC-92) to that of Co-60 is calculated. It is found that η_i takes the same or close values for groups of radionuclides, e.g. η_i = 2×10⁻⁴ for Pu-238, Pu-239, Pu-240, Am-241, Cm-244; η_i = 5 for Sr-89, Y-91, Nb-93, Cs-134, Cs-137; η_i = 50 for Fe-55, Ni-63, Zr-95, Nb-95, Ba-140, La-140. It is then compared to the experimentally measured values of the same quantity, η_i,exp, derived from surface contamination data, and the ratio η_i,exp/η_i is plotted against log η_i. The resulting nomograms give a graphic representation of the radiobiological significance of the various radionuclide groups. Data from different locations at the Kozloduy NPP are presented. Alpha-emitter contamination is found to be highest in the Unit 1 (WWER-440) control rooms after repair, while Unit 5 (WWER-1000) has lower alpha contamination than the WWER-440 units. 1 ref., 5 figs., 1 tab

  15. Safety significance of component ageing, exemplary for MOV, based on French and German operating experience

    International Nuclear Information System (INIS)

    Morlent, O.

    2001-01-01

    An outline is given of how IPSN and GRS assess the effects of physical ageing on the safety of French and German Nuclear Power Plants (NPPs) on the basis of the available knowledge, and how investigations are carried out. The presentation focuses, by way of example, on a preliminary study illustrating approaches for evaluating the ageing behaviour of active components, namely motor-operated valves (MOV). The results so far seem to demonstrate that the developed methodological approaches are suitable for obtaining qualitative evidence on the ageing behaviour of technical facilities such as MOV. The evaluation of operating experience with French 900 MWe plants seems to reveal, for MOV of one system, a trend similar to some international findings of ageing-related events increasing with operating time; this trend remains to be confirmed. For the German NPPs, so far there appears to be no significant increase in ageing-related events concerning MOV as the plants get older. Future work on ageing scheduled at IPSN and GRS includes further cooperation on this issue; a deeper analysis is necessary to explain the reasons for these apparent differences before any conclusion can be drawn. (authors)

  16. Safety significance of component ageing, exemplary for MOV, based on French and German operating experience

    Energy Technology Data Exchange (ETDEWEB)

    Morlent, O. [CEA Fontenay-aux-Roses, 92 (France). Inst. de Protection et de Surete Nucleaire]; Michel, F. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Garching (Germany)]

    2001-07-01

    An outline is given of how IPSN and GRS assess the effects of physical ageing on the safety of French and German Nuclear Power Plants (NPPs) on the basis of the available knowledge, and how investigations are carried out. The presentation focuses, by way of example, on a preliminary study illustrating approaches for evaluating the ageing behaviour of active components, namely motor-operated valves (MOV). The results so far seem to demonstrate that the developed methodological approaches are suitable for obtaining qualitative evidence on the ageing behaviour of technical facilities such as MOV. The evaluation of operating experience with French 900 MWe plants seems to reveal, for MOV of one system, a trend similar to some international findings of ageing-related events increasing with operating time; this trend remains to be confirmed. For the German NPPs, so far there appears to be no significant increase in ageing-related events concerning MOV as the plants get older. Future work on ageing scheduled at IPSN and GRS includes further cooperation on this issue; a deeper analysis is necessary to explain the reasons for these apparent differences before any conclusion can be drawn. (authors)

  17. VALORA: data base system for storage significant information used in the behavior modelling in the biosphere

    International Nuclear Information System (INIS)

    Valdes R, M.; Aguero P, A.; Perez S, D.; Cancio P, D.

    2006-01-01

    Nuclear and radioactive facilities can release effluents containing radionuclides to the environment, where they disperse and/or accumulate in the atmosphere, on the terrestrial surface and in surface waters. Radiological impact assessments require both qualitative and quantitative analyses. In many cases the real values of the parameters used in the modelling are not available, nor can they be measured, so carrying out the assessment requires an extensive search of the literature for the possible values of each parameter under conditions similar to those of the case under study; this can be a lengthy task. This work describes the VALORA database system, developed to organise and automate significant information, drawn from different sources (the scientific and technical literature), on the parameters used in modelling the behaviour of pollutants in the environment and on the values assigned to those parameters in assessments of potential radiological impact. VALORA allows the consultation and selection of parametric data characteristic of the different situations and processes required by the implemented calculation model. The VALORA software is one component of a set of computer tools intended to support the solution of pollutant dispersion and transfer models. (Author)

  18. Test-Based Admission to Selective Universities:

    DEFF Research Database (Denmark)

    Thomsen, Jens-Peter

    2016-01-01

    in the social gradient in the primary admission system, admitting students on the basis of their high school grade point average, and in the secondary admission system, admitting university students based on more qualitative assessments. I find that the secondary higher education admission system does...

  19. Invasive endocervical adenocarcinoma: a new pattern-based classification system with important clinical significance.

    Science.gov (United States)

    Roma, Andres A; Diaz De Vivar, Andrea; Park, Kay J; Alvarado-Cabrero, Isabel; Rasty, Golnar; Chanona-Vilchis, Jose G; Mikami, Yoshiki; Hong, Sung R; Teramoto, Norihiro; Ali-Fehmi, Rouba; Rutgers, Joanne K L; Barbuto, Denise; Silva, Elvio G

    2015-05-01

    A new 3-tier pattern-based system to classify endocervical adenocarcinoma was recently presented. In short, pattern A tumors were characterized by well-demarcated glands frequently forming clusters or groups with relative lobular architecture. Pattern B tumors demonstrated localized destructive invasion defined as desmoplastic stroma surrounding glands with irregular and/or ill-defined borders or incomplete glands and associated tumor cells (individual or small clusters) within the stroma. Tumors with pattern C showed diffusely infiltrative glands with associated extensive desmoplastic response. In total, 352 cases (all FIGO stages) from 12 institutions were identified. Mean patient age was 45 years (range, 20 to 83 y). Forty-nine (13.9%) cases demonstrated lymph nodes (LNs) with metastatic endocervical carcinoma. Using this new system, 73 patients (20.7%) were identified with pattern A tumors (all stage I); none had LN metastases and/or recurrences. Ninety patients (25.6%) were identified with pattern B tumors (all stage I); only 4 (4.4%) had LN metastases; 1 had vaginal recurrence. The 189 (53.7%) remaining patients had pattern C tumors; 45 (23.8%) of them had LN metastases. This new classification system demonstrated 20.7% of patients (pattern A) with negative LNs, and patients with pattern A tumors can be spared of lymphadenectomy. Patients with pattern B tumors rarely presented with metastatic LNs, and sentinel LN examination could potentially identify these patients. Aggressive treatment is justified in patients with pattern C tumors.

  20. Interactions Between Students and Tutor in Problem-Based Learning: The Significance of Deep Learning

    Directory of Open Access Journals (Sweden)

    Samy A. Azer

    2009-05-01

    Full Text Available Problem-based learning (PBL is an excellent opportunity for students to take responsibility for their learning and to develop a number of cognitive skills. These include identifying problems in the trigger, generating hypotheses, constructing mechanisms, developing an enquiry plan, ranking their hypotheses on the basis of available evidence, interpreting clinical and laboratory findings, identifying their learning needs, and dealing with uncertainty. Students also need to work collaboratively in their group, communicate effectively, and take active roles in the tutorials. Therefore, interaction in the group between students and their tutor is vital to ensure deep learning and successful outcomes. The aims of this paper are to discuss the key principles for successful interaction in PBL tutorials and to highlight the major symptoms of superficial learning and poor interactions. This comprises a wide range of symptoms for different group problems, including superficial learning. By early detection of such problems, tutors will be able to explore actions with the group and negotiate changes that can foster group dynamics and enforce deep learning.

  1. Dynamic test of radio altimeter based on IQ modulation

    Science.gov (United States)

    Pan, Hongfei; Tian, Yu; Li, Miao

    2010-08-01

    Based on analysis and research of the radio altimeter and its basic principles, this paper introduces a design for a radio altimeter testing system based on an I/Q modulator. The data obtained from testing are then analyzed and, combined with the altimeter test data, the I/Q-modulator-based radio altimeter testing system is constructed.

  2. An algorithm for finding biologically significant features in microarray data based on a priori manifold learning.

    Directory of Open Access Journals (Sweden)

    Zena M Hira

    Full Text Available Microarray databases are a large source of genetic data, which, upon proper analysis, could enhance our understanding of biology and medicine. Many microarray experiments have been designed to investigate the genetic mechanisms of cancer, and analytical approaches have been applied in order to classify different types of cancer or distinguish between cancerous and non-cancerous tissue. However, microarrays are high-dimensional datasets with high levels of noise and this causes problems when using machine learning methods. A popular approach to this problem is to search for a set of features that will simplify the structure and to some degree remove the noise from the data. The most widely used approach to feature extraction is principal component analysis (PCA), which assumes a multivariate Gaussian model of the data. More recently, non-linear methods have been investigated. Among these, manifold learning algorithms, for example Isomap, aim to project the data from a higher-dimensional space onto a lower-dimensional one. We have proposed a priori manifold learning for finding a manifold in which a representative set of microarray data is fused with relevant data taken from the KEGG pathway database. Once the manifold has been constructed, the raw microarray data is projected onto it and clustering and classification can take place. In contrast to earlier fusion-based methods, the prior knowledge from the KEGG databases is not used in, and does not bias, the classification process--it merely acts as an aid to find the best space in which to search the data. In our experiments we have found that using our new manifold method gives better classification results than using either PCA or conventional Isomap.
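
    The feature-extraction baseline that the abstract contrasts with manifold learning, PCA, can be sketched in a few lines. This is a generic illustration on synthetic data (array shapes and variable names are ours, not the study's), not the authors' a priori manifold method:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for a microarray matrix: 60 samples x 200 genes.
X = rng.normal(size=(60, 200))

# PCA via SVD: centre the data, then project onto the top k
# right singular vectors (principal components).
k = 5
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:k].T        # 60 x 5 low-dimensional embedding

# Fraction of total variance retained by the k components.
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
```

    Manifold methods such as Isomap replace the linear projection above with one that follows the data's intrinsic geometry, which is the gap the a priori manifold approach targets.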

  3. Numerical simulations of laminated rubber bearing tests and shaking table tests of base-isolated structures

    International Nuclear Information System (INIS)

    1998-01-01

    This report includes the following numerical simulations of rubber bearing tests: simulation of an NRB test (data provided by CRIEPI, Japan); simulation of an LRB test (data provided by CRIEPI, Japan); simulation of an HDR test (data provided by KAERI, Korea); and simulation of an HDR test (data provided by ENEL, Italy). Numerical simulation of a shaking table test for a base-isolated steel frame was conducted by ENEL/ISMES/ENEA/EC, and numerical simulation of a shaking table test for a base-isolated rigid mass was conducted by CRIEPI/MITI.

  4. Determination of the smoking gun of intent: significance testing of forced choice results in social security claimants.

    Science.gov (United States)

    Binder, Laurence M; Chafetz, Michael D

    2018-01-01

    Significantly below-chance findings on forced choice tests have been described as revealing "the smoking gun of intent" that proved malingering. The issues of probability levels, one-tailed vs. two-tailed tests, and the combining of PVT scores on significantly below-chance findings were addressed in a previous study, with a recommendation of a probability level of .20 to test the significance of below-chance results. The purpose of the present study was to determine the rate of below-chance findings in a Social Security Disability claimant sample using the previous recommendations. We compared the frequency of below-chance results on forced choice performance validity tests (PVTs) at two levels of significance, .05 and .20, and when using significance testing on individual subtests of the PVTs compared with total scores in claimants for Social Security Disability in order to determine the rate of the expected increase. The frequency of significant results increased with the higher level of significance for each subtest of the PVT and when combining individual test sections to increase the number of test items, with up to 20% of claimants showing significantly below-chance results at the higher p-value. These findings are discussed in light of Social Security Administration policy, showing an impact on policy issues concerning child abuse and neglect, and the importance of using these techniques in evaluations for Social Security Disability.
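
    The below-chance logic described above is a one-tailed binomial test: how likely is a score this far below chance if the claimant were merely guessing? A minimal sketch, using a hypothetical 50-item two-alternative forced-choice subtest (the counts are illustrative, not data from the study):

```python
from math import comb

def below_chance_p(k_correct, n_items, p_chance=0.5):
    """One-tailed binomial probability of k_correct or fewer successes
    out of n_items when responding purely at chance."""
    return sum(comb(n_items, k) * p_chance ** k * (1 - p_chance) ** (n_items - k)
               for k in range(k_correct + 1))

# Hypothetical example: 17 of 50 two-alternative items correct.
p = below_chance_p(17, 50)
significant_at_05 = p < 0.05   # conventional threshold
significant_at_20 = p < 0.20   # threshold recommended in the study
```

    Raising the threshold from .05 to .20, or pooling items across subtests to increase n, flags more protocols as significantly below chance, which is the increase in frequency the study quantifies.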

  5. Test Review: Test of English as a Foreign Language[TM]--Internet-Based Test (TOEFL iBT[R])

    Science.gov (United States)

    Alderson, J. Charles

    2009-01-01

    In this article, the author reviews the TOEFL iBT which is the latest version of the TOEFL, whose history stretches back to 1961. The TOEFL iBT was introduced in the USA, Canada, France, Germany and Italy in late 2005. Currently the TOEFL test is offered in two testing formats: (1) Internet-based testing (iBT); and (2) paper-based testing (PBT).…

  6. Quantitative coronary angiography in the estimation of the functional significance of coronary stenosis: correlations with dobutamine-atropine stress test

    NARCIS (Netherlands)

    J.M.P. Baptista da Silva (José); M. Arnese (Mariarosaria); J.R.T.C. Roelandt (Jos); P.M. Fioretti (Paolo); D.T.J. Keane (David); J. Escaned (Javier); C. di Mario (Carlo); P.W.J.C. Serruys (Patrick); H. Boersma (Eric)

    1994-01-01

    textabstractOBJECTIVES. The purpose of this study was to determine the predictive value of quantitative coronary angiography in the assessment of the functional significance of coronary stenosis as judged from the development of left ventricular wall motion abnormalities during dobutamine-atropine

  7. Protein-Based Urine Test Predicts Kidney Transplant Outcomes

    Science.gov (United States)

    NIH News Release, Thursday, August 22, 2013: Protein-based urine test predicts kidney transplant outcomes. ... supporting development of noninvasive tests. Levels of a protein in the urine of kidney transplant recipients can ...

  8. Risk based test interval and maintenance optimisation - Application and uses

    International Nuclear Information System (INIS)

    Sparre, E.

    1999-10-01

    The project is part of an IAEA Co-ordinated Research Project (CRP) on 'Development of Methodologies for Optimisation of Surveillance Testing and Maintenance of Safety Related Equipment at NPPs'. The purpose of the project is to investigate the sensitivity of the results obtained when performing risk-based optimisation of the technical specifications. Previous projects have shown that complete LPSA models can be created and that these models allow optimisation of technical specifications. However, those optimisations did not include any in-depth check of the sensitivity of the results with regard to methods, model completeness, etc. Four different test intervals have been investigated in this study. Aside from the original, nominal optimisation, a set of sensitivity analyses has been performed and their results have been compared with the original optimisation. The analyses indicate that the result of an optimisation is rather stable. However, it is not possible to draw any firm conclusions without performing a number of sensitivity analyses. Significant differences in the optimisation result were discovered when analysing an alternative configuration, and deterministic uncertainties also appear to affect the result of an optimisation considerably. The sensitivity to failure data uncertainties is important to investigate in detail, since the methodology is based on the assumption that the unavailability of a component depends on the length of the test interval.
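
    The study's LPSA models are far richer than any closed form, but the core dependence that makes the optimisation possible — component unavailability growing with the test interval — is often introduced through a first-order standby-failure approximation. A sketch under that textbook assumption (not the study's actual model; all numbers are hypothetical):

```python
# Mean unavailability of a standby component with constant failure
# rate lam (per hour), tested every T hours, with tau hours of
# downtime per test: q(T) ~ lam*T/2 + tau/T. Longer intervals raise
# the chance of an undetected failure; shorter ones cost test downtime.
def mean_unavailability(lam, T, tau):
    return lam * T / 2 + tau / T

lam, tau = 1e-5, 4.0                         # hypothetical failure data
candidates = [730, 1460, 2190, 4380, 8760]   # candidate intervals (hours)
best_T = min(candidates, key=lambda T: mean_unavailability(lam, T, tau))
```

    The trade-off has an interior minimum, which is why optimisation of test intervals is meaningful at all; sensitivity analyses then ask how much best_T moves as lam and tau vary within their uncertainties.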

  9. Medical students’ attitudes and perspectives regarding novel computer-based practical spot tests compared to traditional practical spot tests

    Science.gov (United States)

    Wijerathne, Buddhika; Rathnayake, Geetha

    2013-01-01

    Background Most universities currently practice traditional practical spot tests to evaluate students. However, traditional methods have several disadvantages. Computer-based examination techniques are becoming more popular among medical educators worldwide. Therefore incorporating the computer interface in practical spot testing is a novel concept that may minimize the shortcomings of traditional methods. Assessing students’ attitudes and perspectives is vital in understanding how students perceive the novel method. Methods One hundred and sixty medical students were randomly allocated to either a computer-based spot test (n=80) or a traditional spot test (n=80). The students rated their attitudes and perspectives regarding the spot test method soon after the test. The results were described comparatively. Results Students had higher positive attitudes towards the computer-based practical spot test compared to the traditional spot test. Their recommendations to introduce the novel practical spot test method for future exams and to other universities were statistically significantly higher. Conclusions The computer-based practical spot test is viewed as more acceptable to students than the traditional spot test. PMID:26451213

  10. Testing for Statistical Discrimination based on Gender

    DEFF Research Database (Denmark)

    Lesner, Rune Vammen

    This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure but increases in job transitions and that the fraction of women in high-ranking positions within a firm does not affect the level of statistical discrimination by gender.

  11. Large-Scale Multiobjective Static Test Generation for Web-Based Testing with Integer Programming

    Science.gov (United States)

    Nguyen, M. L.; Hui, Siu Cheung; Fong, A. C. M.

    2013-01-01

    Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based…

  12. Testing the performance of technical trading rules in the Chinese markets based on superior predictive test

    Science.gov (United States)

    Wang, Shan; Jiang, Zhi-Qiang; Li, Sai-Ping; Zhou, Wei-Xing

    2015-12-01

    Technical trading rules have a long history of being used by practitioners in financial markets. The profitability and efficiency of technical trading rules remain controversial. In this paper, we test the performance of more than seven thousand traditional technical trading rules on the Shanghai Securities Composite Index (SSCI) from May 21, 1992 through June 30, 2013 and the China Securities Index 300 (CSI 300) from April 8, 2005 through June 30, 2013 to check whether an effective trading strategy can be found, using performance measurements based on the return and the Sharpe ratio. To correct for the influence of the data-snooping effect, we adopt the Superior Predictive Ability test to evaluate whether there exists a trading rule that can significantly outperform the benchmark. The result shows that for the SSCI, technical trading rules offer significant profitability, while for the CSI 300, this ability is lost. We further partition the SSCI into two sub-series and find that the efficiency of technical trading in the sub-series, which has exactly the same spanning period as that of the CSI 300, is severely weakened. By testing the trading rules on both indexes with a five-year moving window, we find that during the financial bubble from 2005 to 2007, the effectiveness of technical trading rules is greatly improved. This is consistent with the predictive ability of technical trading rules, which appears when the market is less efficient.
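
    As an illustration of the kind of rule being evaluated (not the paper's actual rule universe, and on synthetic prices rather than SSCI data), here is a moving-average crossover strategy with the two performance measures mentioned, return and Sharpe ratio:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic daily log-returns standing in for an index series.
r = rng.normal(loc=0.0002, scale=0.01, size=2000)
price = 100.0 * np.exp(np.cumsum(r))

def sma(x, n):
    """Simple moving average of window n (valid part only)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

# Crossover rule: long when the 5-day mean exceeds the 20-day mean.
fast, slow = sma(price, 5), sma(price, 20)
signal = (fast[-len(slow):] > slow).astype(float)   # align window ends

# Apply each day's signal to the following day's return.
strat_returns = signal[:-1] * r[-len(slow) + 1:]
total_return = strat_returns.sum()
sharpe = np.sqrt(252) * strat_returns.mean() / strat_returns.std()
```

    Evaluating thousands of such rules on one series and reporting the best-looking one invites exactly the data-snooping bias that the Superior Predictive Ability test is designed to correct for.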

  13. Partner Accommodation in Posttraumatic Stress Disorder: Initial Testing of the Significant Others' Responses to Trauma Scale (SORTS)

    OpenAIRE

    Fredman, Steffany J.; Vorstenbosch, Valerie; Wagner, Anne C.; Macdonald, Alexandra; Monson, Candice M.

    2014-01-01

    Posttraumatic stress disorder (PTSD) is associated with myriad relationship problems and psychological distress in partners of individuals with PTSD. This study sought to develop a self-report measure of partner accommodation to PTSD (i.e., ways in which partners alter their behavior in response to patient PTSD symptoms), the Significant Others' Responses to Trauma Scale (SORTS), and to investigate its reliability and construct validity in 46 treatment-seeking couples. The SORTS demonstrated ...

  14. Development, evaluation and application of performance-based brake testing technologies field test : executive summary

    Science.gov (United States)

    1999-09-01

    This report presents the results of the field test portion of the Development, Evaluation, and Application of Performance-Based Brake Testing Technologies project, sponsored by the Federal Highway Administration's (FHWA) Office of Motor Carriers.

  15. Comparison of the Clock Test and a questionnaire-based test for ...

    African Journals Online (AJOL)

    Comparison of the Clock Test and a questionnaire-based test for screening for cognitive impairment in Nigerians. D J VanderJagt, S Ganga, M O Obadofin, P Stanley, M Zimmerman, B J Skipper, R H Glew ...

  16. Identifying significant genetic regulatory networks in the prostate cancer from microarray data based on transcription factor analysis and conditional independency

    Directory of Open Access Journals (Sweden)

    Yeh Cheng-Yu

    2009-12-01

    Full Text Available Abstract Background Prostate cancer is a leading cancer worldwide, characterized by its aggressive metastasis. Reflecting its clinical heterogeneity, prostate cancer displays different stages and grades related to aggressive metastatic disease. Although numerous studies have used microarray analysis and traditional clustering methods to identify individual genes during the disease process, the important gene regulations remain unclear. We present a computational method for inferring genetic regulatory networks from microarray data automatically, with transcription factor analysis and conditional independence testing, to explore the potentially significant gene regulatory networks correlated with cancer, tumor grade and stage in prostate cancer. Results To deal with missing values in the microarray data, we used a K-nearest-neighbors (KNN) algorithm to estimate the missing expression values. We applied web services technology to wrap the bioinformatics toolkits and databases, automatically extract the promoter regions of DNA sequences, and predict the transcription factors that regulate the gene expressions. We adopted a microarray dataset consisting of 62 primary tumors and 41 normal prostate tissues from the Stanford Microarray Database (SMD) as the target dataset to evaluate our method. The predicted results identified possible biomarker genes related to cancer and indicated that androgen-related functions and processes may be involved in the development of prostate cancer and promote cell death in the cell cycle. Our predicted results showed that sub-networks of the genes SREBF1, STAT6 and PBX1 are related to a high extent, while the ETS transcription factors ELK1, JUN and EGR2 are related to a low extent. The gene SLC22A3 may explain clinically the differentiation associated with high-grade compared with low-grade cancer. Enhancer of Zeste Homolog 2 (EZH2), regulated by RUNX1 and STAT3, is correlated to the pathological stage
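
    The KNN imputation step mentioned in the Results can be sketched directly. This is a generic illustration (the function and toy matrix are ours, not the paper's code): for each row with missing entries, average the corresponding entries of its k nearest rows, with distances computed over the columns both rows observe:

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaNs row by row from the k nearest neighbouring rows."""
    X = X.astype(float).copy()
    for i in range(X.shape[0]):
        missing = np.isnan(X[i])
        if not missing.any():
            continue
        dists = []
        for j in range(X.shape[0]):
            # Candidate neighbours must observe the entries row i lacks.
            if j == i or np.isnan(X[j, missing]).any():
                continue
            shared = ~missing & ~np.isnan(X[j])
            if shared.any():
                d = np.sqrt(np.mean((X[i, shared] - X[j, shared]) ** 2))
                dists.append((d, j))
        neighbours = [j for _, j in sorted(dists)[:k]]
        X[i, missing] = X[neighbours][:, missing].mean(axis=0)
    return X

# Toy expression matrix with one missing value.
X = np.array([[1.0, 2.0, 3.0],
              [1.0, np.nan, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 2.0, 3.0]])
X_filled = knn_impute(X, k=2)
```

    Real microarray pipelines scale this with vectorised distances and handle rows with no eligible neighbours; the sketch only shows the principle.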

  17. Inversion of lithium heparin gel tubes after centrifugation is a significant source of bias in clinical chemistry testing.

    Science.gov (United States)

    Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Lima-Oliveira, Gabriel; Brocco, Giorgio; Guidi, Gian Cesare

    2014-09-25

    This study was planned to establish whether random orientation of gel tubes after centrifugation may impair sample quality. Eight gel tubes were collected from 17 volunteers: 2 Becton Dickinson (BD) serum tubes, 2 Terumo serum tubes, 2 BD lithium heparin tubes and 2 Terumo lithium heparin tubes. One patient's tube for each category was kept in a vertical, closure-up position for 90 min ("upright"), whereas paired tubes underwent bottom-up inversion every 15 min, for 90 min ("inverted"). Immediately after this period of time, 14 clinical chemistry analytes, serum indices and complete blood count were then assessed in all tubes. Significant increases were found for phosphate and lipaemic index in all inverted tubes, along with AST, calcium, cholesterol, LDH, potassium, hemolysis index, leukocytes, erythrocytes and platelets limited to lithium heparin tubes. The desirable quality specifications were exceeded for AST, LDH, and potassium in inverted lithium heparin tubes. Residual leukocytes, erythrocytes, platelets and cellular debris were also significantly increased in inverted lithium heparin tubes. Lithium heparin gel tubes should be maintained in a vertical, closure-up position after centrifugation. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. DLP™-based dichoptic vision test system

    Science.gov (United States)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  19. Formal Specification Based Automatic Test Generation for Embedded Network Systems

    Directory of Open Access Journals (Sweden)

    Eun Hye Choi

    2014-01-01

    Full Text Available Embedded systems have become increasingly connected and communicate with each other, forming large-scaled and complicated network systems. To make their design and testing more reliable and robust, this paper proposes a formal specification language called SENS and a SENS-based automatic test generation tool called TGSENS. Our approach is summarized as follows: (1) A user describes requirements of target embedded network systems by logical property-based constraints using SENS. (2) Given SENS specifications, test cases are automatically generated using a SAT-based solver. Filtering mechanisms to select efficient test cases are also available in our tool. (3) In addition, given a testing goal by the user, test sequences are automatically extracted from exhaustive test cases. We’ve implemented our approach and conducted several experiments on practical case studies. Through the experiments, we confirmed the efficiency of our approach in the design and test generation of real embedded air-conditioning network systems.
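
    In miniature, the constraint-to-test-case step can be imitated by brute-force enumeration of Boolean assignments (a stand-in for the SAT solver; the constraint and variable names are invented for illustration, not taken from SENS):

```python
from itertools import product

# Hypothetical constraint over three Boolean inputs of a device under
# test: heater and cooler must never be on together, and the fan must
# run whenever the heater is on.
def constraint(heater, cooler, fan):
    return not (heater and cooler) and (fan or not heater)

variables = ("heater", "cooler", "fan")

# Keep every satisfying assignment as a test case.
test_cases = [dict(zip(variables, bits))
              for bits in product([False, True], repeat=3)
              if constraint(*bits)]

# A filtering step in the spirit of the tool: keep tests that
# actually exercise the heater.
heater_tests = [t for t in test_cases if t["heater"]]
```

    A real SAT solver finds satisfying assignments without enumerating all 2^n candidates, which is what makes the approach viable for large specifications.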

  20. Wind turbine blade testing system using base excitation

    Science.gov (United States)

    Cotrell, Jason; Thresher, Robert; Lambert, Scott; Hughes, Scott; Johnson, Jay

    2014-03-25

    An apparatus (500) for fatigue testing elongate test articles (404) including wind turbine blades through forced or resonant excitation of the base (406) of the test articles (404). The apparatus (500) includes a testing platform or foundation (402). A blade support (410) is provided for retaining or supporting a base (406) of an elongate test article (404), and the blade support (410) is pivotally mounted on the testing platform (402) with at least two degrees of freedom of motion relative to the testing platform (402). An excitation input assembly (540) is interconnected with the blade support (410) and includes first and second actuators (444, 446, 541) that act to concurrently apply forces or loads to the blade support (410). The actuator forces are cyclically applied in first and second transverse directions. The test article (404) responds to shaking of its base (406) by oscillating in two, transverse directions (505, 507).

  1. Specification-Based Testing Via Domain Specific Language

    Directory of Open Access Journals (Sweden)

    Sroka Michal

    2014-12-01

    Full Text Available The article presents tCF (testCaseFramework), a domain-specific language with a corresponding toolchain for specification-based testing of embedded software. tCF is designed for efficient preparation of maintainable and intelligible test cases and for testing process automation, as it allows platform-specific test cases to be generated for various testing levels. The article describes the essential parts of the tCF meta-model and the applied concept of platform-specific test case generators.

  2. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and modelbased testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  3. The Use of Isoproterenol in Electrophysiologic Drug Testing in Patients with Sustained Ventricular Tachycardia : The Mechanism and Clinical Significance of Isoproterenol

    OpenAIRE

    Satoh, Masahito

    1992-01-01

    Isoproterenol has been used in electrophysiologic studies to facilitate the induction of ventricular tachycardia (VT) as well as in drug testing. However, the mechanism of the induction of VT and the clinical significance of the VT induced with isoproterenol have yet to be determined. The present study assessed the effects of isoproterenol in the induction of VT during drug testing in 23 patients (34 drug testings), and analyzed the patients' characteristics and electrophysiologic parameters....

  4. Moving beyond the Failure of Test-Based Accountability

    Science.gov (United States)

    Koretz, Daniel

    2018-01-01

    In "The Testing Charade: Pretending to Make Schools Better", the author's new book from which this article is drawn, the failures of test-based accountability are documented and some of the most egregious misuses and outright abuses of testing are described, along with some of the most serious negative effects. Neither good intentions…

  5. Environmental Assessment and Finding of No Significant Impact: The Nevada Test Site Development Corporations's Desert Rock Sky Park at the Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    N/A

    2000-03-01

    The United States Department of Energy has prepared an Environmental Assessment (DOE/EA-1300) (EA) which analyzes the potential environmental effects of developing, operating, and maintaining a commercial/industrial park in Area 22 of the Nevada Test Site, between Mercury Camp and U.S. Highway 95 and east of Desert Rock Airport. The EA evaluates the potential impacts of the infrastructure improvements necessary to support full build-out of the 512-acre Desert Rock Sky Park. Two alternative actions were evaluated: (1) develop, operate, and maintain a commercial/industrial park in Area 22 of the Nevada Test Site, and (2) take no action. The purpose of and need for the commercial/industrial park are addressed in Section 1.0 of the EA. A detailed description of the proposed action and alternatives is in Section 2.0. Section 3.0 describes the affected environment, and Section 4.0 the environmental consequences of the proposed action and alternatives. Cumulative effects are addressed in Section 5.0, and mitigation measures in Section 6.0. The Department of Energy determined that the proposed action of developing, operating, and maintaining a commercial/industrial park in Area 22 of the Nevada Test Site would best meet the needs of the agency.

  6. Using the noninformative families in family-based association tests : A powerful new testing strategy

    NARCIS (Netherlands)

    Lange, C; DeMeo, D; Silverman, EK; Weiss, ST; Laird, NM

    2003-01-01

    For genetic association studies with multiple phenotypes, we propose a new strategy for multiple testing with family-based association tests (FBATs). The strategy increases the power by both using all available family data and reducing the number of hypotheses tested while being robust against

  7. Development of Power Electronics Based Test Platform for Characterization and Testing of Magnetocaloric Materials

    Directory of Open Access Journals (Sweden)

    Deepak Elamalayil Soman

    2015-01-01

    Full Text Available The magnetocaloric effects of various materials are becoming more and more interesting for the future, as they can significantly improve the efficiency of many energy-intensive applications such as refrigeration, heating, and air conditioning. Accurate characterization of the magnetocaloric effects exhibited by various materials is an important step toward further studies and the development of suitable magnetocaloric heating and cooling solutions. Conventional test facilities have several limitations, as they focus only on the thermodynamic side and use magnetic machines with a moving bed of magnetocaloric material or a moving magnet. In this work an entirely new approach to the characterization of magnetocaloric materials is presented, with the main focus on flexible and efficient power-electronics-based excitation and a completely static test platform. It can generate a periodically varying magnetic field through the superposition of an ac and a dc magnetic field. The scaled-down prototype uses a customized single-phase H-bridge inverter with essential protections and an electromagnet load as the actuator. Preliminary simulation and experimental results show good agreement and support the use of the power-electronics test platform for characterizing magnetocaloric materials.

  8. Clinicopathological significance of p16, cyclin D1, Rb and MIB-1 levels in skull base chordoma and chondrosarcoma

    Directory of Open Access Journals (Sweden)

    Jun-qi Liu

    2015-09-01

    Full Text Available Objective: To investigate the expression of p16, cyclin D1, retinoblastoma tumor suppressor protein (Rb) and MIB-1 in skull base chordoma and chondrosarcoma tissues, and to determine the clinicopathological significance of these indexes in the two diseases. Methods: A total of 100 skull base chordoma, 30 chondrosarcoma, and 20 normal cartilage tissue samples were analyzed by immunohistochemistry. The expression levels of p16, cyclin D1, Rb and MIB-1 proteins were assessed for potential correlation with the clinicopathological features. Results: As compared to normal cartilage specimens (control), there was decreased expression of p16, and increased expression of cyclin D1, Rb and MIB-1 proteins, in both skull base chordoma and chondrosarcoma specimens. MIB-1 LI levels were significantly increased in skull base chordoma specimens with negative expression of p16, and positive expression of cyclin D1 and Rb (P < 0.05). However, p16 and MIB-1 levels correlated with intradural invasion, and expression of p16, Rb and MIB-1 correlated with the number of tumor foci (P < 0.05). Further, the expression of p16 and MIB-1 appeared to correlate with the prognosis of patients with skull base chordoma. Conclusions: The abnormal expression of p16, cyclin D1 and Rb proteins might be associated with the tumorigenesis of skull base chordoma and chondrosarcoma. Keywords: p16, Cyclin D1, Rb, MIB-1, Skull base chordoma, Skull base chondrosarcoma

  9. Quantitative renal perfusion measurements in a rat model of acute kidney injury at 3T: testing inter- and intramethodical significance of ASL and DCE-MRI.

    Directory of Open Access Journals (Sweden)

    Fabian Zimmer

    Full Text Available OBJECTIVES: To establish arterial spin labelling (ASL) for quantitative renal perfusion measurements in a rat model at 3 Tesla and to test the diagnostic significance of ASL and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in a model of acute kidney injury (AKI). MATERIAL AND METHODS: ASL and DCE-MRI were consecutively employed on six Lewis rats, five of which had a unilateral ischaemic AKI. All measurements in this study were performed on a 3 Tesla MR scanner using a FAIR True-FISP approach and a TWIST sequence for ASL and DCE-MRI, respectively. Perfusion maps were calculated for both methods and the cortical perfusion of healthy and diseased kidneys was inter- and intramethodically compared using a region-of-interest based analysis. RESULTS/SIGNIFICANCE: Both methods produce significantly different values for the healthy and the diseased kidneys (P<0.01). The mean difference was 147±47 ml/100 g/min and 141±46 ml/100 g/min for ASL and DCE-MRI, respectively. ASL measurements yielded a mean cortical perfusion of 416±124 ml/100 g/min for the healthy and 316±102 ml/100 g/min for the diseased kidneys. The DCE-MRI values were systematically higher and the mean cortical renal blood flow (RBF) was found to be 542±85 ml/100 g/min (healthy) and 407±119 ml/100 g/min (AKI). CONCLUSION: Both methods are equally able to detect abnormal perfusion in diseased (AKI) kidneys. This shows that ASL is a capable alternative to DCE-MRI regarding the detection of abnormal renal blood flow. Regarding absolute perfusion values, nontrivial differences and variations remain when comparing the two methods.
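    The inter-method comparison above is, at its core, a paired significance test on per-kidney perfusion values. As an illustration only, with invented perfusion numbers (not the study's data) and a simple sign-flip permutation test standing in for the study's actual statistical procedure, such a paired comparison can be sketched as:

    ```python
    import random

    def paired_permutation_test(a, b, n_perm=10000, seed=42):
        """Two-sided paired permutation (sign-flip) test.

        For paired measurements a[i], b[i] (e.g. perfusion of the healthy vs.
        the diseased kidney of the same rat), randomly flip the sign of each
        paired difference and count how often the permuted mean difference is
        at least as extreme as the observed one.
        """
        rng = random.Random(seed)
        diffs = [x - y for x, y in zip(a, b)]
        observed = abs(sum(diffs) / len(diffs))
        hits = 0
        for _ in range(n_perm):
            permuted = [d if rng.random() < 0.5 else -d for d in diffs]
            if abs(sum(permuted) / len(permuted)) >= observed:
                hits += 1
        return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

    # Hypothetical cortical perfusion values in ml/100 g/min (illustrative only)
    healthy = [410, 455, 390, 520, 430]
    diseased = [300, 350, 290, 380, 320]
    p = paired_permutation_test(healthy, diseased)
    print(f"p = {p:.4f}")
    ```

    With only five pairs there are just 32 sign patterns, so the smallest attainable two-sided p-value is about 0.06; larger samples are needed to reach thresholds like P<0.01.
    
    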

  10. Cross-Mode Comparability of Computer-Based Testing (CBT) versus Paper-Pencil Based Testing (PPT): An Investigation of Testing Administration Mode among Iranian Intermediate EFL Learners

    Science.gov (United States)

    Khoshsima, Hooshang; Hosseini, Monirosadat; Toroujeni, Seyyed Morteza Hashemi

    2017-01-01

    The advent of technology has caused growing interest in using computers to convert conventional paper-and-pencil-based testing (henceforth PPT) into computer-based testing (henceforth CBT) in the field of education during the last decades. This constant promulgation of computers to reshape conventional tests into computerized format permeated the…

  11. Acid-base titrations for polyacids: Significance of the pK sub a and parameters in the Kern equation

    Science.gov (United States)

    Meites, L.

    1978-01-01

    A new method is suggested for calculating the dissociation constants of polyvalent acids, especially polymeric acids. The most significant characteristics of the titration curves obtained when titrating solutions of such acids potentiometrically with a standard base are demonstrated and identified in qualitative form.
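    For the simple monoprotic case (not the polyacid Kern-equation treatment the article develops), the buffer region of such a potentiometric titration curve follows the Henderson-Hasselbalch relation, which a short sketch can illustrate; the pKa value used is illustrative only:

    ```python
    import math

    def titration_ph(pka, fraction_titrated):
        """pH along the buffer region of a monoprotic weak-acid titration,
        via Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])."""
        if not 0.0 < fraction_titrated < 1.0:
            raise ValueError("valid only strictly between start and equivalence point")
        return pka + math.log10(fraction_titrated / (1.0 - fraction_titrated))

    # At the half-equivalence point the pH equals the pKa, which is why the
    # inflection-free midpoint of the curve identifies the dissociation constant.
    print(titration_ph(4.76, 0.5))   # pH = pKa at half-equivalence
    print(titration_ph(4.76, 0.9))   # pH rises as the acid is consumed
    ```

    For a polymeric acid the apparent pKa drifts as the chain ionizes, which is exactly why the titration curve's shape, rather than a single midpoint, carries the significant information.
    
    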

  12. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  13. Design Of Computer Based Test Using The Unified Modeling Language

    Science.gov (United States)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    The admission selection of Politeknik Negeri Bengkalis through interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN) and the independent route (UM-Polbeng) was conducted using paper-based testing (PBT). The paper-based test model has some weaknesses: it wastes paper, the questions can leak to the public, and the test results can be manipulated. This research aimed to create a computer-based test (CBT) model by using the Unified Modeling Language (UML), which consists of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, it is important to protect the test questions before they are shown, through an encryption and decryption process; the RSA cryptography algorithm was used for this. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle method. The network architecture used in the computer-based test application was a client-server model over a Local Area Network (LAN). The result of the design was the computer-based test application for the admission selection of Politeknik Negeri Bengkalis.
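    The Fisher-Yates shuffle mentioned above is the standard algorithm for unbiased randomization of a question bank; a minimal sketch (the question IDs below are invented for illustration):

    ```python
    import random

    def fisher_yates_shuffle(items):
        """Return a shuffled copy of `items` using the Fisher-Yates algorithm.

        Walk from the last index down to 1, swapping each element with one
        chosen uniformly at random from the not-yet-fixed prefix. Every
        permutation is produced with equal probability.
        """
        shuffled = list(items)
        for i in range(len(shuffled) - 1, 0, -1):
            j = random.randint(0, i)  # 0 <= j <= i, inclusive
            shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
        return shuffled

    # Example: draw a randomized ordering of 10 hypothetical question IDs
    question_ids = list(range(1, 11))
    print(fisher_yates_shuffle(question_ids))
    ```

    Drawing the swap index inclusively from `0..i` (rather than `0..i-1`) is what makes the distribution uniform; the common off-by-one variant biases certain permutations.
    
    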

  14. Field-based physiological testing of wheelchair athletes.

    Science.gov (United States)

    Goosey-Tolfrey, Victoria L; Leicht, Christof A

    2013-02-01

    The volume of literature on field-based physiological testing of wheelchair sports, such as basketball, rugby and tennis, is considerably smaller when compared with that available for individuals and team athletes in able-bodied (AB) sports. In analogy to the AB literature, it is recognized that performance in wheelchair sports not only relies on fitness, but also sport-specific skills, experience and technical proficiency. However, in contrast to AB sports, two major components contribute towards 'wheeled sports' performance, which are the athlete and the wheelchair. It is the interaction of these two that enable wheelchair propulsion and the sporting movements required within a given sport. Like any other athlete, participants of wheelchair sports are looking for efficient ways to train and/or analyse their technique and fitness to improve their performance. Consequently, laboratory and/or field-based physiological monitoring tools used at regular intervals at key time points throughout the year must be considered to help with training evaluation. The present review examines methods available in the literature to assess wheelchair sports fitness in a field-based environment, with special attention on outcome variables, validity and reliability issues, and non-physiological influences on performance. It also lays out the context of field-based testing by providing details about the Paralympic court sports and the impacts of a disability on sporting performance. Due to the limited availability of specialized equipment for testing wheelchair-dependent participants in the laboratory, the adoption of field-based testing has become the preferred option by team coaches of wheelchair athletes. An obvious advantage of field-based testing is that large groups of athletes can be tested in less time. Furthermore, athletes are tested in their natural environment (using their normal sports wheelchair set-up and floor surface), potentially making the results of such testing

  15. Power amplifier automatic test system based on LXI bus technology

    Science.gov (United States)

    Li, Yushuang; Chen, Libing; Men, Tao; Yang, Qingfeng; Li, Ning; Nie, Tao

    2017-10-01

    The power amplifier is an important part of the high-power digital transceiver module. Because of the large demand for power amplifiers and their diverse measurement indicators, manual testing cannot deliver the required consistency, so an automatic test system was designed to meet production requirements. This paper puts forward a plan for an automatic test system based on LXI bus technology and introduces the hardware and software architecture of the system. The test system has been used for debugging and testing power amplifiers stably and efficiently, which greatly saves labor and effectively improves productivity.

  16. Integrating software into PRA: a test-based approach.

    Science.gov (United States)

    Li, Bin; Li, Ming; Smidts, Carol

    2005-08-01

    Probabilistic risk assessment (PRA) is a methodology to assess the probability of failure or success of a system's operation. PRA has proved to be a systematic, logical, and comprehensive technique for risk assessment. Software plays an increasing role in modern safety-critical systems, and a significant number of failures can be attributed to software. Unfortunately, current probabilistic risk assessment concentrates on representing the behavior of hardware systems, humans, and their contributions (to a limited extent) to risk, but neglects the contributions of software due to a lack of understanding of software failure phenomena. It is thus imperative to consider and model the impact of software to reflect the risk in current and future systems. The objective of our research is to develop a methodology to account for the impact of software on system failure that can be used in the classical PRA analysis process. A test-based approach for integrating software into PRA is discussed in this article. The approach includes identifying the software functions to be modeled in the PRA and modeling the software contributions in the event sequence diagram (ESD) and fault tree. The approach also introduces the concepts of an input tree and an output tree, and proposes a quantification strategy that uses a software safety testing technique. The method is applied to an example system, PACS.
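    The fault-tree quantification step described above ultimately combines basic-event probabilities through AND/OR gates. As an illustration only, with invented event names and probabilities that are not from the article, the standard gate formulas for independent events can be sketched as:

    ```python
    def or_gate(probs):
        """P(at least one event occurs) for independent events: 1 - prod(1 - p_i)."""
        result = 1.0
        for p in probs:
            result *= (1.0 - p)
        return 1.0 - result

    def and_gate(probs):
        """P(all events occur) for independent events: prod(p_i)."""
        result = 1.0
        for p in probs:
            result *= p
        return result

    # Hypothetical system: failure if the hardware channel fails OR if both
    # the software function and its watchdog fail together. The probabilities
    # are invented; in the article's approach the software value would come
    # from safety testing.
    p_hardware = 1e-4
    p_software = 2e-3
    p_watchdog = 5e-2
    p_system = or_gate([p_hardware, and_gate([p_software, p_watchdog])])
    print(f"P(system failure) = {p_system:.3e}")
    ```

    These closed-form gate combinations only hold for independent basic events; common-cause failures between software and hardware require the richer ESD/fault-tree modeling the article proposes.
    
    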

  17. Taking HIV Testing to Families: Designing a Family-Based Intervention to Facilitate HIV Testing, Disclosure and Intergenerational Communication

    Directory of Open Access Journals (Sweden)

    Heidi van Rooyen

    2016-08-01

    maps the process for adapting a novel and largely successful home-based counseling and testing intervention for use with families. Expanding the successful home-based counseling and testing model to capture children, adolescents and men could have significant impact if the pilot is successful and scaled-up.

  18. Multidimensional test assembly based on Lagrangian relaxation techniques

    NARCIS (Netherlands)

    Veldkamp, Bernard P.

    1998-01-01

    In this paper, a mathematical programming approach is presented for the assembly of ability tests measuring multiple traits. The values of the variance functions of the estimators of the traits are minimized, while test specifications are met. The approach is based on Lagrangian relaxation

  19. Outcome of Presumptive Versus Rapid Diagnostic Tests-Based ...

    African Journals Online (AJOL)

    First, 50 children with malaria-pneumonia symptom overlap were consecutively enrolled and treated presumptively with antibiotics and antimalarials irrespective of the malaria test result (control arm). Then, another 50 eligible children were enrolled and treated with antibiotics, with or without antimalarials, based on rapid diagnostic test ...

  20. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    DEFF Research Database (Denmark)

    H. Hjort, Ulrik; Rasmussen, Jacob Illum; Larsen, Kim Guldstrand

    2009-01-01

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML Statemachine model and generates...

  1. Home-based HIV counselling and testing in Western Kenya ...

    African Journals Online (AJOL)

    Home-based HIV counselling and testing was feasible among this rural population in western Kenya, with a majority of the population accepting to get tested. These data suggest that scaling-up of HBCT is possible and may enable large numbers of individuals to know their HIV serostatus in sub-Saharan Africa.

  2. Microcomputer based test system for charge coupled devices

    International Nuclear Information System (INIS)

    Sidman, S.

    1981-02-01

    A microcomputer based system for testing analog charge coupled integrated circuits has been developed. It measures device performance for three parameters: dynamic range, baseline shift due to leakage current, and transfer efficiency. A companion board tester has also been developed. The software consists of a collection of BASIC and assembly language routines developed on the test system microcomputer

  3. A Test That Isn't Torture: A Field-Tested Performance-Based Assessment

    Science.gov (United States)

    Eastburn, Mark

    2006-01-01

    This article discusses the author's use of a performance-based evaluation in his fifth grade Spanish class in a K-5 public elementary school located in Princeton, New Jersey. The author realized the need to break the old testing paradigm and discover a new way of demonstrating student language acquisition since the traditional tests did not seem…

  4. Space Launch System Base Heating Test: Experimental Operations & Results

    Science.gov (United States)

    Dufrene, Aaron; Mehta, Manish; MacLean, Matthew; Seaford, Mark; Holden, Michael

    2016-01-01

    NASA's Space Launch System (SLS) uses four clustered liquid rocket engines along with two solid rocket boosters. The interaction between all six rocket exhaust plumes will produce a complex and severe thermal environment in the base of the vehicle. This work focuses on a recent 2% scale, hot-fire SLS base heating test. These base heating tests are short-duration tests executed with chamber pressures near the full-scale values with gaseous hydrogen/oxygen engines and RSRMV analogous solid propellant motors. The LENS II shock tunnel/Ludwieg tube tunnel was used at or near flight duplicated conditions up to Mach 5. Model development was based on the Space Shuttle base heating tests with several improvements including doubling of the maximum chamber pressures and duplication of freestream conditions. Test methodology and conditions are presented, and base heating results from 76 runs are reported in non-dimensional form. Regions of high heating are identified and comparisons of various configuration and conditions are highlighted. Base pressure and radiometer results are also reported.

  5. HEV Test Bench Based on CAN Bus Sensor Communication

    Directory of Open Access Journals (Sweden)

    Shupeng ZHAO

    2014-02-01

    Full Text Available An HEV test bench based on the Controller Area Network (CAN) bus was studied and developed. The control system of the HEV power test bench uses CAN bus technology, and the application of CAN bus technology to control system development has opened up a new research direction for domestic automobile experimental platforms. The development work on the HEV power control system was completed, including the power master controller, the electric throttle controller, the driving simulation platform, formulation of the CAN 2.0B communication protocol procedures, a CAN communication monitoring system, and research on automatic code generation from MATLAB simulation models. The maximum absorption power of the test bench is 90 kW, its top speed is 6000 r/min, and the CAN communication baud rate is 10~500 k; the precision of the conventional electrical measurements satisfies the requirements of HEV development. A regenerative braking experiment on the HEV test bench shows that the results obtained on the bench are close to those obtained in outdoor road tests, and fuel consumption experiments show that HEV fuel consumption and the charge-discharge characteristics are in a linear relationship. The test platform thus provides physical simulation and testing capabilities for evaluating hybrid electric vehicle and powertrain development.

  6. On school choice and test-based accountability.

    Directory of Open Access Journals (Sweden)

    Damian W. Betebenner

    2005-10-01

    Full Text Available Two of the most prominent school reform measures currently being implemented in the United States are school choice and test-based accountability. Until recently, the two policy initiatives remained relatively distinct from one another. With the passage of the No Child Left Behind Act of 2001 (NCLB), a mutualism between choice and accountability emerged whereby school choice complements test-based accountability. In the first portion of this study we present a conceptual overview of school choice and test-based accountability and explicate connections between the two that are explicit in reform implementations like NCLB or implicit within the market-based reform literature in which school choice and test-based accountability reside. In the second portion we scrutinize the connections between school choice and test-based accountability using a large western school district with a popular choice system in place. Data from three sources are combined to explore the ways in which school choice and test-based accountability draw on each other: state assessment data for children in the district, school choice data for every participating student in the district choice program, and a parental survey of both participants and non-participants in choice asking their attitudes concerning the use of school report cards in the district. Results suggest that choice is of academic benefit only to the lowest-achieving students, that choice participation is not uniform across different ethnic groups in the district, and that parents' primary motivations for participation in choice, as reported on the survey, are not test scores, though this is inconsistent with the choice preferences parents reveal in the district. As such, our results generally confirm the hypotheses of choice critics more so than advocates. Keywords: school choice; accountability; student testing.

  7. Design, construction and testing of a base driven static inverter ...

    African Journals Online (AJOL)

    Based on the active circuit of a 50 Hz astable multivibrator, a base-driven static inverter has been designed, constructed and tested. The design is able to convert small amounts of dc current to their amplified ac equivalents. A conversion of a 12 V dc input to the usual domestic range of 220-240 V ac is also derivable from the ...

  8. Evolution of a Computer-Based Testing Laboratory

    Science.gov (United States)

    Moskal, Patrick; Caldwell, Richard; Ellis, Taylor

    2009-01-01

    In 2003, faced with increasing growth in technology-based and large-enrollment courses, the College of Business Administration at the University of Central Florida opened a computer-based testing lab to facilitate administration of course examinations. Patrick Moskal, Richard Caldwell, and Taylor Ellis describe the development and evolution of the…

  9. Detailed field test of yaw-based wake steering

    DEFF Research Database (Denmark)

    Fleming, P.; Churchfield, M.; Scholbrock, A.

    2016-01-01

    This paper describes a detailed field-test campaign to investigate yaw-based wake steering. In yaw-based wake steering, an upstream turbine intentionally misaligns its yaw with respect to the inflow to deflect its wake away from a downstream turbine, with the goal of increasing total power produc...

  10. Invariant-Based Automatic Testing of AJAX User Interfaces

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.

    2009-01-01

    This paper is a pre-print of: Ali Mesbah and Arie van Deursen. Invariant-Based Automatic Testing of AJAX User Interfaces. In Proceedings of the 31st International Conference on Software Engineering (ICSE’09), Research Papers, Vancouver, Canada, IEEE Computer Society, 2009. AJAX-based Web 2.0

  11. Web based aphasia test using service oriented architecture (SOA)

    Energy Technology Data Exchange (ETDEWEB)

    Voos, J A [Clinical Engineering R and D Center, Universidad Tecnologica Nacional, Facultad Regional Cordoba, Cordoba (Argentina); Vigliecca, N S [Consejo Nacional de Investigaciones Cientificas y Tecnicas, CONICET, Cordoba (Argentina); Gonzalez, E A [Clinical Engineering R and D Center, Universidad Tecnologica Nacional, Facultad Regional Cordoba, Cordoba (Argentina)

    2007-11-15

    Based on an aphasia test for Spanish speakers which analyzes the patient's basic resources of verbal communication, web-enabled software was developed to automate its execution. A clinical database was designed as a complement, in order to evaluate the antecedents (risk factors, pharmacological and medical backgrounds, neurological or psychiatric symptoms, brain injury anatomical and physiological characteristics, etc.) which are necessary to carry out a multi-factor statistical analysis in different samples of patients. The automated test was developed following a service-oriented architecture and implemented in a web site which contains a test suite, allowing both integration of the aphasia test with other neuropsychological instruments and growth of the information available on the site for scientific research. The test design, the database and the study of its psychometric properties (validity, reliability and objectivity) were made in conjunction with neuropsychological researchers, who participated actively in the software design, based on feedback from the patients or other research subjects.

  12. Web based aphasia test using service oriented architecture (SOA)

    International Nuclear Information System (INIS)

    Voos, J A; Vigliecca, N S; Gonzalez, E A

    2007-01-01

    Based on an aphasia test for Spanish speakers which analyzes the patient's basic resources of verbal communication, web-enabled software was developed to automate its execution. A clinical database was designed as a complement, in order to evaluate the antecedents (risk factors, pharmacological and medical backgrounds, neurological or psychiatric symptoms, brain injury anatomical and physiological characteristics, etc.) which are necessary to carry out a multi-factor statistical analysis in different samples of patients. The automated test was developed following a service-oriented architecture and implemented in a web site which contains a test suite, allowing both integration of the aphasia test with other neuropsychological instruments and growth of the information available on the site for scientific research. The test design, the database and the study of its psychometric properties (validity, reliability and objectivity) were made in conjunction with neuropsychological researchers, who participated actively in the software design, based on feedback from the patients or other research subjects.

  13. Development of seismic technology and reliability based on vibration tests

    International Nuclear Information System (INIS)

    Sasaki, Youichi

    1997-01-01

    This paper deals with some of the vibration tests and investigations on the seismic safety of nuclear power plants (NPPs) in Japan. To ensure the reliability of the seismic safety of nuclear power plants, nuclear power plants in Japan have been designed according to the Technical Guidelines for Aseismic Design of Nuclear Power Plants. This guideline has been developed based on a technical data base and findings obtained from many vibration tests and investigations. Besides the tests for the guideline, proving tests on the seismic reliability of operating nuclear power plant equipment and systems have been carried out. In this paper some vibration tests and their evaluation results are presented; they contributed crucially to developing the guideline. (J.P.N.)

  14. Development of seismic technology and reliability based on vibration tests

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, Youichi [Nuclear Power Engineering Corp., Tokyo (Japan)

    1997-03-01

    This paper deals with some of the vibration tests and investigations on the seismic safety of nuclear power plants (NPPs) in Japan. To ensure the reliability of the seismic safety of nuclear power plants, nuclear power plants in Japan have been designed according to the Technical Guidelines for Aseismic Design of Nuclear Power Plants. This guideline has been developed based on a technical data base and findings obtained from many vibration tests and investigations. Besides the tests for the guideline, proving tests on the seismic reliability of operating nuclear power plant equipment and systems have been carried out. In this paper some vibration tests and their evaluation results are presented; they contributed crucially to developing the guideline. (J.P.N.)

  15. Frequency of Testing for Dyslipidemia: An Evidence-Based Analysis

    Science.gov (United States)

    2014-01-01

    Background Dyslipidemias include high levels of total cholesterol, low-density lipoprotein (LDL) cholesterol, and triglycerides and low levels of high-density lipoprotein (HDL) cholesterol. Dyslipidemia is a risk factor for cardiovascular disease, which is a major contributor to mortality in Canada. Approximately 23% of the 2009/11 Canadian Health Measures Survey (CHMS) participants had a high level of LDL cholesterol, with prevalence increasing with age, and approximately 15% had a total cholesterol to HDL ratio above the threshold. Objectives To evaluate the frequency of lipid testing in adults not diagnosed with dyslipidemia and in adults on treatment for dyslipidemia. Research Methods A systematic review of the literature set out to identify randomized controlled trials (RCTs), systematic reviews, health technology assessments (HTAs), and observational studies published between January 1, 2000, and November 29, 2012, that evaluated the frequency of testing for dyslipidemia in the 2 populations. Results Two observational studies assessed the frequency of lipid testing, 1 in individuals not on lipid-lowering medications and 1 in treated individuals. Both studies were based on previously collected data intended for a different objective and, therefore, no conclusions could be reached about the frequency of testing at intervals other than the ones used in the original studies. Given this limitation and generalizability issues, the quality of evidence was considered very low. No evidence for the frequency of lipid testing was identified in the 2 HTAs included. Canadian and international guidelines recommend testing for dyslipidemia in individuals at an increased risk for cardiovascular disease. The frequency of testing recommended is based on expert consensus. Conclusions Conclusions on the frequency of lipid testing could not be made based on the 2 observational studies. 
Current guidelines recommend lipid testing in adults with increased cardiovascular risk, with

  16. Visually directed vs. software-based targeted biopsy compared to transperineal template mapping biopsy in the detection of clinically significant prostate cancer.

    Science.gov (United States)

    Valerio, Massimo; McCartan, Neil; Freeman, Alex; Punwani, Shonit; Emberton, Mark; Ahmed, Hashim U

    2015-10-01

    Targeted biopsy based on cognitive or software magnetic resonance imaging (MRI) to transrectal ultrasound registration seems to increase the detection rate of clinically significant prostate cancer as compared with standard biopsy. However, these strategies have not been directly compared against an accurate test yet. The aim of this study was to obtain pilot data on the diagnostic ability of visually directed targeted biopsy vs. software-based targeted biopsy, considering transperineal template mapping (TPM) biopsy as the reference test. Prospective paired cohort study included 50 consecutive men undergoing TPM with one or more visible targets detected on preoperative multiparametric MRI. Targets were contoured on the Biojet software. Patients initially underwent software-based targeted biopsies, then visually directed targeted biopsies, and finally systematic TPM. The detection rate of clinically significant disease (Gleason score ≥3+4 and/or maximum cancer core length ≥4mm) of one strategy against another was compared by 3×3 contingency tables. Secondary analyses were performed using a less stringent threshold of significance (Gleason score ≥4+3 and/or maximum cancer core length ≥6mm). Median age was 68 (interquartile range: 63-73); median prostate-specific antigen level was 7.9ng/mL (6.4-10.2). A total of 79 targets were detected with a mean of 1.6 targets per patient. Of these, 27 (34%), 28 (35%), and 24 (31%) were scored 3, 4, and 5, respectively. At a patient level, the detection rate was 32 (64%), 34 (68%), and 38 (76%) for visually directed targeted, software-based biopsy, and TPM, respectively. Combining the 2 targeted strategies would have led to detection rate of 39 (78%). At a patient level and at a target level, software-based targeted biopsy found more clinically significant diseases than did visually directed targeted biopsy, although this was not statistically significant (22% vs. 14%, P = 0.48; 51.9% vs. 44.3%, P = 0.24). Secondary

  17. Clinical Significance of Two Real-Time PCR Assays for Chronic Hepatitis C Patients Receiving Protease Inhibitor-Based Therapy.

    Science.gov (United States)

    Inoue, Takako; Hmwe, Su Su; Shimada, Noritomo; Kato, Keizo; Ide, Tatsuya; Torimura, Takuji; Kumada, Takashi; Toyoda, Hidenori; Tsubota, Akihito; Takaguchi, Koichi; Wakita, Takaji; Tanaka, Yasuhito

    2017-01-01

    The aim of this study was to determine the efficacy of two hepatitis C virus (HCV) real-time PCR assays, the COBAS AmpliPrep/COBAS TaqMan HCV test (CAP/CTM) and the Abbott RealTime HCV test (ART), for predicting the clinical outcomes of patients infected with HCV who received telaprevir (TVR)-based triple therapy or daclatasvir/asunaprevir (DCV/ASV) dual therapy. The rapid virological response rates in patients receiving TVR-based triple therapy were 92% (23/25) and 40% (10/25) for CAP/CTM and ART, respectively. The false omission rate (FOR) of ART was 93.3% (14/15), indicating that CAP/CTM could accurately predict clinical outcome in the early phase. In an independent examination of 20 patients receiving TVR-based triple therapy who developed viral breakthrough or relapse, the times to HCV disappearance by ART were longer than by CAP/CTM, whereas the times to HCV reappearance were similar. In an independent experiment of WHO standard HCV RNA serially diluted in serum containing TVR, the analytical sensitivities of CAP/CTM and ART were similar. However, cell cultures transfected with HCV and grown in medium containing TVR demonstrated that ART detected HCV RNA for a longer time than CAP/CTM. Similar results were found for 42 patients receiving DCV/ASV dual therapy. The FOR of ART was 73.3% (11/15) at week 8 after initiation of therapy, indicating that ART at week 8 could not accurately predict the clinical outcome. In conclusion, although CAP/CTM and ART detected HCV RNA with comparable analytical sensitivity, CAP/CTM might be preferable for predicting the clinical outcomes of patients receiving protease inhibitor-based therapy.
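The false omission rate (FOR) quoted above is a standard confusion-matrix quantity. As a minimal illustration (the 14/15 figure is taken from the abstract; the function name is ours):

```python
def false_omission_rate(fn, tn):
    """FOR = FN / (FN + TN): the fraction of negative test results that are wrong."""
    return fn / (fn + tn)

# From the abstract: 14 of 15 ART-negative patients had an unfavorable outcome
print(round(100 * false_omission_rate(14, 1), 1))  # → 93.3
```

A high FOR means a negative result from the assay cannot be trusted to predict a good clinical outcome, which is exactly the study's argument against using ART alone for early prediction.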

  18. Intensity-Modulated Radiotherapy Results in Significant Decrease in Clinical Toxicities Compared With Conventional Wedge-Based Breast Radiotherapy

    International Nuclear Information System (INIS)

    Harsolia, Asif; Kestin, Larry; Grills, Inga; Wallace, Michelle; Jolly, Shruti; Jones, Cortney; Lala, Moinaktar; Martinez, Alvaro; Schell, Scott; Vicini, Frank A.

    2007-01-01

Purpose: We have previously demonstrated that intensity-modulated radiotherapy (IMRT) with a static multileaf collimator process results in a more homogeneous dose distribution compared with conventional wedge-based whole breast irradiation (WBI). In the present analysis, we reviewed the acute and chronic toxicity of this IMRT approach compared with conventional wedge-based treatment. Methods and Materials: A total of 172 patients with Stage 0-IIB breast cancer were treated with lumpectomy followed by WBI. All patients underwent treatment planning computed tomography and received WBI (median dose, 45 Gy) followed by a boost to 61 Gy. Of the 172 patients, 93 (54%) were treated with IMRT, and the 79 patients (46%) treated with wedge-based RT in a consecutive fashion immediately before this cohort served as the control group. The median follow-up was 4.7 years. Results: A significant reduction in acute Grade 2 or worse dermatitis, edema, and hyperpigmentation was seen with IMRT compared with wedges. A trend was found toward reduced acute Grade 3 or greater dermatitis (6% vs. 1%, p = 0.09) in favor of IMRT. Chronic Grade 2 or worse breast edema was significantly reduced with IMRT compared with conventional wedges. No difference was found in cosmesis scores between the two groups. In patients with larger breasts (≥1,600 cm³, n = 64), IMRT resulted in reduced acute (Grade 2 or greater) breast edema (0% vs. 36%, p < 0.001) and hyperpigmentation (3% vs. 41%, p = 0.001) and chronic (Grade 2 or greater) long-term edema (3% vs. 30%, p = 0.007). Conclusion: The use of IMRT in the treatment of the whole breast results in a significant decrease in acute dermatitis, edema, and hyperpigmentation and a reduction in the development of chronic breast edema compared with conventional wedge-based RT.

  19. Testing ESL sociopragmatics development and validation of a web-based test battery

    CERN Document Server

    Roever, Carsten; Elder, Catherine

    2014-01-01

Testing of second language pragmatics has grown as a research area but still suffers from a tension between construct coverage and practicality. In this book, the authors describe the development and validation of a web-based test of second language pragmatics for learners of English. The test has a sociopragmatic orientation and strives for a broad coverage of the construct by assessing learners' metapragmatic judgments as well as their ability to co-construct discourse. To ensure practicality, the test is delivered online and is scored partially automatically and partially by human raters.

  20. A multiparametric magnetic resonance imaging-based risk model to determine the risk of significant prostate cancer prior to biopsy.

    Science.gov (United States)

    van Leeuwen, Pim J; Hayen, Andrew; Thompson, James E; Moses, Daniel; Shnier, Ron; Böhm, Maret; Abuodha, Magdaline; Haynes, Anne-Maree; Ting, Francis; Barentsz, Jelle; Roobol, Monique; Vass, Justin; Rasiah, Krishan; Delprado, Warick; Stricker, Phillip D

    2017-12-01

To develop and externally validate a predictive model for detection of significant prostate cancer. Development of the model was based on a prospective cohort including 393 men who underwent multiparametric magnetic resonance imaging (mpMRI) before biopsy. External validity of the model was then examined retrospectively in 198 men from a separate institution who underwent mpMRI followed by biopsy for an abnormal prostate-specific antigen (PSA) level or digital rectal examination (DRE). A model was developed with age, PSA level, DRE, prostate volume, previous biopsy, and Prostate Imaging Reporting and Data System (PIRADS) score as predictors for significant prostate cancer (Gleason 7 with >5% grade 4, ≥20% cores positive or ≥7 mm of cancer in any core). Probability was studied via logistic regression. Discriminatory performance was quantified by concordance statistics and internally validated with bootstrap resampling. In all, 393 men had complete data and 149 (37.9%) had significant prostate cancer. While the variable model had good accuracy in predicting significant prostate cancer, area under the curve (AUC) of 0.80, the advanced model (incorporating mpMRI) had a significantly higher AUC of 0.88 (P prostate cancer. Individualised risk assessment of significant prostate cancer using a predictive model that incorporates mpMRI PIRADS score and clinical data allows a considerable reduction in unnecessary biopsies and a reduction of the risk of over-detection of insignificant prostate cancer, at the cost of a very small increase in the number of significant cancers missed. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
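The concordance statistic (AUC) and bootstrap internal validation described in this record can be sketched in plain Python. The risk scores and labels below are toy values, not the study's data, and this only illustrates the validation machinery, not the published model:

```python
import random

def auc(scores, labels):
    """Concordance statistic: probability a case outranks a control; ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(scores, labels, n_boot=200, seed=1):
    """Bootstrap resampling of the concordance statistic (internal validation)."""
    rng = random.Random(seed)
    idx = list(range(len(scores)))
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(idx) for _ in idx]
        s = [scores[i] for i in sample]
        y = [labels[i] for i in sample]
        if 0 < sum(y) < len(y):  # a resample needs both classes to define an AUC
            stats.append(auc(s, y))
    stats.sort()
    return stats[int(0.025 * len(stats))], stats[int(0.975 * len(stats))]

# toy risk scores: higher score should indicate significant cancer (label 1)
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
print(round(auc(scores, labels), 3))
print(bootstrap_auc_ci(scores, labels))
```

The bootstrap percentile interval is one common way to quantify the optimism of an apparent AUC; the study's exact resampling scheme is not specified in the abstract.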

  1. Qualitative tests for the determination of inorganic bases

    OpenAIRE

    Založnik, Urša

    2013-01-01

The unit on acids, bases and salts is dealt with in primary and secondary schools and can be very interesting to students because they encounter these substances on an everyday basis. In my Diploma thesis I will focus on bases, especially on how students could determine, in the most interesting way, whether a solution is acidic or basic and which base it actually is. My goal is to develop simple qualitative tests for identifying inorganic bases in primary schools. In nature, ba...

  2. Omp16-based vaccine encapsulated by alginate-chitosan microspheres provides significant protection against Haemophilus parasuis in mice.

    Science.gov (United States)

    Zheng, Xintian; Yang, Xiaoyan; Li, Xiaohua; Qiu, Guo-Hua; Dai, Ailing; Huang, Qichun; Huang, Cuiqin; Guo, Xiaofeng

    2017-03-07

Haemophilus parasuis (H. parasuis) is the etiological agent of swine Glässer's disease, which leads to significant economic losses in the swine industry worldwide. Subunit vaccines based on outer membrane proteins are among the promising options for protecting pigs against H. parasuis infection, despite their low immune efficiency. In this paper, outer membrane protein 16 (Omp16) of H. parasuis encapsulated in alginate-chitosan microspheres as antigen carriers was explored for the first time in a mouse model. Our results showed that the microspheres with Omp16 induced significantly higher H. parasuis-specific antibodies, and higher titers of IL-2, IL-4, and IFN-γ, than Omp16-FIA in treated mice (pmice immunized with microspheres containing Omp16 was significantly decreased (pmice treated with Omp16 and 70% of mice with Omp16-FIA survived after challenge with H. parasuis virulent strain LY02 (serovar 5). Therefore, the Omp16-based microsphere vaccine induces both humoral and cellular immune responses and provides promising protection against H. parasuis infection in mice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Automated Functional Testing based on the Navigation of Web Applications

    Directory of Open Access Journals (Sweden)

    Boni García

    2011-08-01

Full Text Available Web applications are becoming more and more complex. Testing such applications is an intricate, hard, and time-consuming activity. Therefore, testing is often poorly performed or skipped by practitioners. Test automation can help to avoid this situation. Hence, this paper presents a novel approach to perform automated software testing for web applications based on their navigation. On the one hand, web navigation is the process of traversing a web application using a browser. On the other hand, functional requirements are actions that an application must do. Therefore, the evaluation of the correct navigation of web applications results in the assessment of the specified functional requirements. The proposed method to perform the automation is done at four levels: test case generation, test data derivation, test case execution, and test case reporting. This method is driven by three kinds of inputs: (i) UML models; (ii) Selenium scripts; (iii) XML files. We have implemented our approach in an open-source testing framework named Automatic Testing Platform. The validation of this work has been carried out by means of a case study, in which the target is a real invoice management system developed using a model-driven approach.

  4. Development of Taiwan Smell Identification Test: a quick office-based smell screening test for Taiwanese.

    Science.gov (United States)

    Hsu, Ning-I; Lai, Jen-Tsung; Shen, Ping-Hung

    2015-01-01

Objective smell tests not only identify levels of smelling ability but also provide information on changes in olfaction after treatment. Odor identification is strongly socially and culturally dependent; therefore, the odorants used in a smell identification test should be familiar to the test population. We developed this smell test for Taiwanese populations with two aims: the test odors should be familiar to Taiwanese and the test should be easily and quickly administered in a busy clinic. Additives that are familiar to Taiwanese people were selected for this smell identification test. Subsequently, the test was validated against the traditional Chinese version of the University of Pennsylvania Smell Identification Test (TC-UPSIT). Finally, this Taiwan Smell Identification Test (TWSIT) was implemented in daily clinical use, and cut-off points for "normosmia," "hyposmia," and "anosmia" were established. A total of 1000 subjects were included in the market survey to identify commonly recognized odors. Eight odorants with an identification rate greater than 95% were selected. The TWSIT is an array of multiple-choice questions for selecting the odor. In addition, patients also reported the strength of the odor. The full score was 48. Thirty-seven patients simultaneously received both the TWSIT and TC-UPSIT, and the correlation was high (r = 0.874). Based on the testing results of an additional 187 subjects, we concluded that scores of 47-48, 15-44, and 2-12 corresponded to normosmia, hyposmia, and anosmia, respectively. Patients with scores falling in the gaps require retesting at a later time. The TWSIT is a quick, office-based, and useful odor identification tool for Taiwanese. The experience of developing a culturally specific olfaction test like the TWSIT can be applied in different countries and cultures.
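The reported cut-off scores translate directly into a small classifier. A minimal sketch (treating the 0-1 range below the anosmia band as a "retest" gap is our assumption; the abstract only defines the 2-12, 15-44, and 47-48 bands):

```python
def classify_twsit(score):
    """Map a TWSIT total (0-48) to a diagnostic category per the reported cut-offs."""
    if 47 <= score <= 48:
        return "normosmia"
    if 15 <= score <= 44:
        return "hyposmia"
    if 2 <= score <= 12:
        return "anosmia"
    # scores in the gaps (13-14, 45-46, and by assumption 0-1) need retesting
    return "retest"

print(classify_twsit(48), classify_twsit(30), classify_twsit(5), classify_twsit(45))
# → normosmia hyposmia anosmia retest
```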

  5. Clinical significance of Smac and survivin expression in breast cancer patients treated with anthracycline‑based neoadjuvant chemotherapy.

    Science.gov (United States)

    Zhao, Ying-Chun; Wang, Yan; Ni, Xiao-Jian; Li, Yong; Wang, Xiu-Ming; Zhu, Yong-Yun; Luo, Chuan-Yu

    2014-02-01

    The second mitochondria‑derived activator of caspases (Smac), an antagonist of the inhibitor of apoptosis protein (IAP), increases chemosensitivity in vitro. Survivin, an IAP family member, mediates cancer cell survival and chemoresistance. The present study investigated the correlation between Smac and survivin expression in primary breast cancer, and the sensitivity to anthracycline during neoadjuvant chemotherapy (NAC). Pre‑treatment biopsies and post‑anthracycline treatment tumor sections were analyzed from 98 cases. Biomarker expression was evaluated by immunohistochemistry in tumor samples from clinical stage II and III anthracycline‑based NAC‑treated breast cancer. A univariate analysis indicated that the estrogen receptor (ER), Smac and survivin were significantly predictive of a pathological complete response (pCR) (P=0.004, 0.001 and 0.037, respectively) in pre‑chemotherapy samples. ER, Smac and survivin expression was also significant for pCR on the multivariate analysis (P=0.001, 0.031 and 0.012, respectively). An inverse association was identified between survivin and Smac expression (r=‑0.217, P=0.032; and r=‑0.335, P=0.003, respectively) prior to and following NAC. The patients with low survivin expression or high Smac expression had significantly longer disease‑free survival (DFS; P=0.012 and P=0.020, respectively) and overall survival (OS; P=0.01 and P=0.033, respectively) compared with the patients with high survivin or low Smac expression. Cox regression analyses demonstrated that survivin, Smac and clinical stage were independent predictors for DFS and OS. The present study indicated the significance of Smac and survivin in determining the breast cancer response to anthracycline‑based chemotherapy, and may permit further stratifying of pre‑chemotherapy patients to undertake more tailored treatments.

  6. Regulation of Internet-based Genetic Testing: Challenges for Australia and Other Jurisdictions

    OpenAIRE

    Jane Tiller; Paul Lacaze

    2018-01-01

    The Internet currently enables unprecedented ease of access for direct-to-consumer (DTC) genetic testing, with saliva collection kits posted directly to consumer homes from anywhere in the world. This poses new challenges for local jurisdictions in regulating genetic testing, traditionally a tightly-regulated industry. Some Internet-based genetic tests have the capacity to cause significant confusion or harm to consumers who are unaware of the risks or potential variability in quality. The em...

  7. Significant life experience: Exploring the lifelong influence of place-based environmental and science education on program participants

    Science.gov (United States)

    Colvin, Corrie Ruth

    Current research provides a limited understanding of the life long influence of nonformal place-based environmental and science education programs on past participants. This study looks to address this gap, exploring the ways in which these learning environments have contributed to environmental identity and stewardship. Using Dorothy Holland's approach to social practice theory's understanding of identity formation, this study employed narrative interviews and a close-ended survey to understand past participants' experience over time. Participants from two place-based environmental education programs and one science-inquiry program were asked to share their reflections on their program experience and the influence they attribute to that experience. Among all participants, the element of hands-on learning, supportive instructors, and engaging learning environments remained salient over time. Participants of nature-based programs demonstrated that these programs in particular were formative in contributing to an environmental stewardship identity. Social practice theory can serve as a helpful theoretical framework for significant life experience research, which has largely been missing from this body of research. This study also holds implications for the fields of place-based environmental education, conservation psychology, and sustainability planning, all of which look to understand and increase environmentally sustainable practices.

  8. Overheating Anomalies during Flight Test Due to the Base Bleeding

    Science.gov (United States)

    Luchinsky, Dmitry; Hafiychuck, Halyna; Osipov, Slava; Ponizhovskaya, Ekaterina; Smelyanskiy, Vadim; Dagostino, Mark; Canabal, Francisco; Mobley, Brandon L.

    2012-01-01

    In this paper we present the results of the analytical and numerical studies of the plume interaction with the base flow in the presence of base out-gassing. The physics-based analysis and CFD modeling of the base heating for single solid rocket motor performed in this research addressed the following questions: what are the key factors making base flow so different from that in the Shuttle [1]; why CFD analysis of this problem reveals small plume recirculation; what major factors influence base temperature; and why overheating was initiated at a given time in the flight. To answer these questions topological analysis of the base flow was performed and Korst theory was used to estimate relative contributions of radiation, plume recirculation, and chemically reactive out-gassing to the base heating. It was shown that base bleeding and small base volume are the key factors contributing to the overheating, while plume recirculation is effectively suppressed by asymmetric configuration of the flow formed earlier in the flight. These findings are further verified using CFD simulations that include multi-species gas environment both in the plume and in the base. Solid particles in the exhaust plume (Al2O3) and char particles in the base bleeding were also included into the simulations and their relative contributions into the base temperature rise were estimated. The results of simulations are in good agreement with the temperature and pressure in the base measured during the test.

  9. Gold Standard Testing of Motion Based Tracking Systems

    Science.gov (United States)

    2017-03-15

AFRL-RH-WP-TR-2017-0032, Gold Standard Testing of Motion Based Tracking Systems. Joshua Hagen, Human Signatures Branch, Human-Centered ISR Division. Interim report, June 2016 - March 2017. The report evaluates motion-based tracking systems against a 'Gold Standard' on-field measurement system for human physiological performance monitoring. Data shows that the accuracy of the

  10. A Model Based Security Testing Method for Protocol Implementation

    Directory of Open Access Journals (Sweden)

    Yu Long Fu

    2014-01-01

    Full Text Available The security of protocol implementation is important and hard to be verified. Since the penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them together to generate the suitable test cases to verify the security of protocol implementation.

  11. Small scale motor tests of ADN/GAP based propellants

    OpenAIRE

    Gettwert, Volker; Fischer, Sebastian

    2015-01-01

Different ADN/GAP-based propellants were evaluated as potential replacements for the smoky AP-based composite propellants and low-signature double-base propellants. The paper focuses on burning tests of propellants in a combustion chamber. The experimental results of an ADN/GAP, ADN/FOX12/GAP, and Al/ADN/GAP propellant were compared with a standard Al/AP/HTPB propellant. In all cases, the obtained experimental gravimetric specific impulse of the ADN/GAP-based propellants was higher compared to ...

  12. Validation test case generation based on safety analysis ontology

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Wang, Wen-Shing

    2012-01-01

    Highlights: ► Current practice in validation test case generation for nuclear system is mainly ad hoc. ► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report. ► It is based on a domain-specific ontology. ► Test coverage criteria have been defined and satisfied. ► A computerized toolset has been implemented to assist the proposed approach. - Abstract: Validation tests in the current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria; namely, single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.
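Single parameter coverage, the simplest of the criteria listed above, can be sketched as generating just enough complete test cases that every value of every parameter appears at least once. The parameter names and domains below are hypothetical, not taken from an actual Safety Analysis Report:

```python
# Hypothetical plant parameters and their discretized domains
PARAMS = {
    "pressure": ["low", "nominal", "high"],
    "temperature": ["cold", "nominal", "hot"],
    "valve": ["open", "closed"],
}

def single_parameter_cases(params):
    """Cover every value of every parameter with as few complete cases as possible:
    cycle shorter domains so each generated case is a full parameter assignment."""
    names = list(params)
    width = max(len(values) for values in params.values())
    return [{n: params[n][i % len(params[n])] for n in names} for i in range(width)]

cases = single_parameter_cases(PARAMS)
print(len(cases))  # 3 cases cover all 8 parameter values
```

The larger criteria in the paper (use case, abnormal condition, and scenario coverage) would layer additional constraints on top of such a generator; this sketch shows only the value-coverage idea.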

  13. Development of computer based ultrasonic flaw detector for nondestructive testing

    International Nuclear Information System (INIS)

    Lee, Weon Heum; Kim, Jin Koo; Kim, Yang Rae; Choi, Kwan Sun; Kim, Sun Hyung; Lee, Sun Heum

    1996-01-01

Ultrasonic testing is one of the most widely used nondestructive testing methods for Pre-Service Inspection (PSI) and In-Service Inspection (ISI) of structures in bridges, power plants, chemical plants, and heavy industrial fields. It is very important for estimating the safety, remaining life, and quality of a structure, and much research on quantitative evaluation and analysis of inspection data is in progress. However, traditional portable ultrasonic flaw detectors have the following disadvantages: 1) analog ultrasonic flaw detectors decrease the credibility of ultrasonic testing because they cannot save data or perform digital signal processing; 2) stand-alone digital ultrasonic flaw detectors cannot effectively evaluate received signals because of their limited storage memory. To overcome these shortcomings, we developed a computer-based ultrasonic flaw detector for nondestructive testing. It can store the received signal and effectively evaluate it, thereby enhancing the reliability of the testing results.

  14. Testing of a nuclear-reactor-based positron beam

    International Nuclear Information System (INIS)

    Van Veen, A.; Labohm, F.; Schut, H.; De Roode, J.; Heijenga, T.; Mijnarends, P.E.

    1997-01-01

This paper describes the testing of a positron beam which is primarily based on copper activation near the core of a nuclear reactor and extraction of the positrons through a beam guide tube. An out-of-core test with a ²²Na source and an in-core test with the reactor at reduced power have been performed. Both tests indicated a high reflectivity of moderated positrons at the tungsten surfaces of the moderation discs, which enhanced the expected yield. Secondary electrons generated in the source materials during the in-core test caused electrical field distortions in the electrode system by charging of the insulators. During one hour at 100 kW reactor power, positrons were observed with an intensity of 4.4×10⁴ e⁺ s⁻¹, of which 90% was due to positrons created by pair formation and 10% by copper activation.

  15. Empirical likelihood-based tests for stochastic ordering

    Science.gov (United States)

    BARMI, HAMMOU EL; MCKEAGUE, IAN W.

    2013-01-01

    This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142

  16. Asymptotic formulae for likelihood-based tests of new physics

    Science.gov (United States)

    Cowan, Glen; Cranmer, Kyle; Gross, Eilam; Vitells, Ofer

    2011-02-01

    We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.
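For a simple counting experiment, the Asimov data set reduces to evaluating the discovery test statistic at the expected event counts. The sketch below uses the well-known closed-form median significance from this line of work, Z = sqrt(2((s+b)ln(1+s/b) − s)), alongside the naive s/√b approximation it refines:

```python
import math

def asimov_significance(s, b):
    """Median discovery significance Z = sqrt(q0_A) for a counting experiment
    with expected signal s on top of background b (Asimov-data-set evaluation)."""
    q0 = 2.0 * ((s + b) * math.log(1.0 + s / b) - s)
    return math.sqrt(q0)

print(round(asimov_significance(10, 100), 3))  # → 0.984
print(round(10 / math.sqrt(100), 3))           # naive s/sqrt(b) → 1.0
```

The two agree in the s ≪ b limit and diverge as the signal becomes comparable to the background, which is why the Asimov formula is preferred for sensitivity estimates.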

  17. Asymptotic formulae for likelihood-based tests of new physics

    Energy Technology Data Exchange (ETDEWEB)

    Cowan, Glen [Royal Holloway, University of London, Physics Department, Egham (United Kingdom); Cranmer, Kyle [New York University, Physics Department, New York, NY (United States); Gross, Eilam; Vitells, Ofer [Weizmann Institute of Science, Rehovot (Israel)

    2011-02-15

We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation. (orig.)

  18. Image Encryption Performance Evaluation Based on Poker Test

    Directory of Open Access Journals (Sweden)

    Shanshan Li

    2016-01-01

Full Text Available The fast development of image encryption requires performance evaluation metrics. Traditional metrics like entropy do not consider the correlation between a local pixel and its neighborhood, so they cannot evaluate encryption schemes based on image pixel coordinate permutation. A novel effectiveness evaluation metric is proposed in this paper to address the issue. The cipher-text image is transformed into a bit stream, and the Poker Test is then applied. The proposed metric accounts for the neighbor correlations of the image through neighborhood selection and clip scan. The randomness of the cipher-text image is tested by calculating the chi-square test value. Experiment results verify the efficiency of the proposed metric.
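A poker-style chi-square statistic over m-bit blocks can be sketched as follows. This follows the classic FIPS 140-1 poker test on a flat bit stream; the paper's neighborhood-selection and clip-scan ordering is an additional step not reproduced here:

```python
from collections import Counter

def poker_test(bits, m=4):
    """Chi-square poker statistic over non-overlapping m-bit blocks of a bit string.
    Lower values indicate a block-frequency distribution closer to uniform."""
    k = len(bits) // m
    blocks = [bits[i * m:(i + 1) * m] for i in range(k)]
    freq = Counter(blocks)
    return (2 ** m / k) * sum(f * f for f in freq.values()) - k

# a constant stream is maximally non-random under this statistic
print(poker_test("0000" * 16))  # → 240.0
# a stream containing each 4-bit pattern exactly once scores 0
print(poker_test("".join(format(i, "04b") for i in range(16))))  # → 0.0
```

Comparing the statistic against a chi-square threshold (with 2^m − 1 degrees of freedom) then gives a pass/fail randomness decision for the cipher-text bit stream.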

  19. FARO base case post-test analysis by COMETA code

    Energy Technology Data Exchange (ETDEWEB)

    Annunziato, A.; Addabbo, C. [Joint Research Centre, Ispra (Italy)

    1995-09-01

The paper analyzes the COMETA (Core Melt Thermal-Hydraulic Analysis) post-test calculations of FARO Test L-11, the so-called Base Case Test. The FARO facility, located at JRC Ispra, is used to simulate the consequences of severe accidents in nuclear power plants under a variety of conditions. The COMETA code has a six-equation two-phase flow field and a three-phase corium field: the jet, the droplets, and the fused-debris bed. The analysis showed that the code is able to pick up all the major phenomena occurring during the pre-mixing phase of the fuel-coolant interaction.

  20. Design of Test Wrapper Scan Chain Based on Differential Evolution

    Directory of Open Access Journals (Sweden)

    Aijun Zhu

    2013-08-01

Full Text Available Integrated circuit design has entered the era of the IP-based SoC (System on Chip), which makes IP core reuse a key issue. SoC test wrapper design for scan chains is an NP-hard problem; we propose an algorithm based on Differential Evolution (DE) to design wrapper scan chains. Through the mutation, crossover, and selection operations of the population, the design of the test wrapper scan chain is achieved. Experimental verification is carried out on the international standard benchmark ITC'02. The results show that the algorithm can obtain a shorter longest wrapper scan chain compared with other algorithms.
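The optimization target, minimizing the longest wrapper scan chain when partitioning scan cells across W chains, can be sketched with a random-key DE. The scan-cell lengths and DE settings below are hypothetical, and decoding continuous genes to chain indices is one common encoding choice, not necessarily the paper's:

```python
import random

def longest_chain(assign, lengths, W):
    """Length of the longest wrapper scan chain under a cell-to-chain assignment."""
    totals = [0] * W
    for cell, chain in enumerate(assign):
        totals[chain] += lengths[cell]
    return max(totals)

def de_wrapper_chains(lengths, W, pop=20, gens=60, F=0.5, CR=0.9, seed=3):
    """Random-key DE (rand/1/bin): each scan cell gets a gene in [0,1);
    gene -> chain index; fitness is the longest chain, to be minimized."""
    rng = random.Random(seed)
    n = len(lengths)
    decode = lambda x: [min(int(g * W), W - 1) for g in x]
    fitness = lambda x: longest_chain(decode(x), lengths, W)
    P = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([p for j, p in enumerate(P) if j != i], 3)
            trial = [
                min(max(a[g] + F * (b[g] - c[g]), 0.0), 0.999) if rng.random() < CR
                else P[i][g]
                for g in range(n)
            ]
            if fitness(trial) <= fitness(P[i]):  # greedy one-to-one selection
                P[i] = trial
    best = min(P, key=fitness)
    return fitness(best), decode(best)

lengths = [8, 12, 6, 10, 4, 14, 2, 9]  # hypothetical scan-cell lengths
best_len, assignment = de_wrapper_chains(lengths, W=3)
print(best_len)
```

With total length 65 over 3 chains, the theoretical lower bound on the longest chain is 22; the DE search typically closes in on it for an instance this small.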

  1. Social inequality and HIV-testing: Comparing home- and clinic-based testing in rural Malawi

    Directory of Open Access Journals (Sweden)

    Alexander A. Weinreb

    2009-10-01

    Full Text Available The plan to increase HIV testing is a cornerstone of the international health strategy against the HIV/AIDS epidemic, particularly in sub-Saharan Africa. This paper highlights a problematic aspect of that plan: the reliance on clinic- rather than home-based testing. First, drawing on DHS data from across Africa, we demonstrate the substantial differences in socio-demographic and economic profiles between those who report having ever had an HIV test, and those who report never having had one. Then, using data from a random household survey in rural Malawi, we show that substituting home-based for clinic-based testing may eliminate this source of inequality between those tested and those not tested. This result, which is stable across modeling frameworks, has important implications for accurately and equitably addressing the counseling and treatment programs that comprise the international health strategy against AIDS, and that promise to shape the future trajectory of the epidemic in Africa and beyond.

  2. Significant increase of IGF-I concentration and of IGF-I/IGFBP-3 molar ratio in generation test predicts the good response to growth hormone (GH) therapy in children with short stature and normal results of GH stimulating tests.

    Science.gov (United States)

    Smyczynska, Joanna; Hilczer, Maciej; Stawerska, Renata; Lewinski, Andrzej

    2013-01-01

    The insulin-like growth factor-I (IGF-I) generation test has been introduced for the assessment of growth hormone (GH) sensitivity; however, its significance in predicting the growth response to GH therapy has also been brought up. The molar ratio of IGF-I to its binding protein-3 (IGFBP-3) determines IGF-I bioavailability. The aim was evaluation of the usefulness of the IGF-I and IGFBP-3 generation test in predicting the effectiveness of rhGH therapy in children with short stature. The analysis comprised 60 children with short stature, normal results of GH stimulating tests but decreased IGF-I secretion. In all the patients, GH insensitivity was excluded on the basis of the IGF-I and IGFBP-3 generation test. Next, GH therapy was administered and height velocity (HV), together with IGF-I and IGFBP-3 secretion, was assessed every year for 3 years. The comparative group consisted of 30 children with partial GH deficiency (pGHD). Both IGF-I secretion and the IGF-I/IGFBP-3 molar ratio increased significantly during the generation test (pGH therapy (however insignificantly), together with at least doubling of pretreatment HV. There was no significant difference between the studied group of patients and children with pGHD. A significant increase of IGF-I in the generation test speaks for GH therapy effectiveness in short children, despite normal results of GH stimulating tests.

  3. Pharmacists performing quality spirometry testing: an evidence based review.

    Science.gov (United States)

    Cawley, Michael J; Warning, William J

    2015-10-01

    The scope of pharmacist services for patients with pulmonary disease has primarily focused on drug-related outcomes; however, pharmacists can broaden the scope of clinical services by performing diagnostic testing, including quality spirometry testing. Studies have demonstrated that pharmacists can perform quality spirometry testing based upon international guidelines. The primary aim of this review was to assess the published evidence of pharmacists performing quality spirometry testing based upon American Thoracic Society/European Respiratory Society (ATS/ERS) guidelines. To accomplish this, the description of evidence and type of outcome from these services were reviewed. A literature search was conducted using five databases [PubMed (1946-January 2015), International Pharmaceutical Abstracts (1970-January 2015), Cumulative Index of Nursing and Allied Health Literature, Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews] with search terms including pharmacy, spirometry, pulmonary function, asthma or COPD. Searches were limited to publications in English and reported in humans. In addition, Uniform Resource Locator and Google Scholar searches were run to capture any supplemental information. Eight studies (six prospective multi-center trials, two retrospective single-center studies) were included. Pharmacists in all studies received specialized training in performing spirometry testing. Of the eight studies meeting inclusion and exclusion criteria, 8 (100%) demonstrated acceptable repeatability of spirometry testing based upon standards set by the ATS/ERS guidelines. Acceptable repeatability in seven studies ranged from 70 to 99%, consistent with published data. Available evidence suggests that quality spirometry testing can be performed by pharmacists. More prospective studies are needed to add to the current evidence of quality spirometry testing performed by pharmacists.
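To make the repeatability standard concrete, the sketch below checks the commonly cited ATS/ERS 2005 criterion that the two largest FEV1 values and the two largest FVC values from acceptable manoeuvres agree within 0.150 L. The function name, tolerance handling and example values are illustrative assumptions, not taken from the review.

```python
def meets_repeatability(fev1_liters, fvc_liters, tol=0.150):
    """Repeatability check in the style of the ATS/ERS 2005 criteria
    (tolerance stated here as an assumption): the two largest FEV1 values
    and the two largest FVC values from acceptable manoeuvres should
    each agree within 0.150 L."""
    top2 = lambda xs: sorted(xs, reverse=True)[:2]
    within = lambda xs: top2(xs)[0] - top2(xs)[1] <= tol
    return within(fev1_liters) and within(fvc_liters)

# Three manoeuvres whose best two efforts agree closely pass the check.
repeatable = meets_repeatability([3.10, 3.02, 2.95], [4.20, 4.11, 4.05])
```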

  4. Comparison of the clinical performance of an HPV mRNA test and an HPV DNA test in triage of atypical squamous cells of undetermined significance (ASC-US)

    DEFF Research Database (Denmark)

    Waldstrom, M; Ornskov, D

    2012-01-01

    The effect of triaging women with atypical squamous cells of undetermined significance (ASC-US) with human papillomavirus (HPV) DNA testing has been well documented. New tests detecting HPV E6/E7 mRNA are emerging, claiming to be more specific for detecting high-grade disease. We evaluated the clinical performance of two HPV tests: the Linear Array HPV genotyping test (LA), detecting HPV DNA from 37 oncogenic and non-oncogenic HPV types, and the Aptima HPV assay, detecting E6/E7 mRNA from 14 oncogenic HPV types.

  5. Test results judgment method based on BIT faults

    Directory of Open Access Journals (Sweden)

    Wang Gang

    2015-12-01

    Full Text Available Built-in test (BIT) is responsible for equipment fault detection, so the correctness of test data directly influences diagnosis results. Equipment is exposed to all kinds of environmental stresses, such as temperature, vibration, and electromagnetic stress. As an embedded test facility, BIT suffers the same stresses; the resulting interference and faults disturb the test process and produce unreliable results. It is therefore necessary to monitor test data and judge test failures. Stress monitoring and BIT self-diagnosis would improve BIT reliability, but existing anti-jamming research mainly addresses safeguard design and signal processing. This paper focuses on monitoring test results and judging BIT equipment (BITE) failures, and a series of improved approaches is proposed. Firstly, the stress influences on components are illustrated and their effects on diagnosis results are summarized. Secondly, a composite BIT program with information integration is proposed, and a stress monitoring program is given. Thirdly, based on a detailed analysis of system faults and of the forms BIT results take, a test sequence control method is proposed; it assists BITE failure judgment and reduces error probability. Finally, validation cases prove that these approaches enhance credibility.

  6. The significance and robustness of a plasma free amino acid (PFAA) profile-based multiplex function for detecting lung cancer

    International Nuclear Information System (INIS)

    Shingyoji, Masato; Mitsushima, Toru; Yamakado, Minoru; Kimura, Hideki; Iizasa, Toshihiko; Higashiyama, Masahiko; Imamura, Fumio; Saruki, Nobuhiro; Imaizumi, Akira; Yamamoto, Hiroshi; Daimon, Takashi; Tochikubo, Osamu

    2013-01-01

    We have recently reported on the changes in plasma free amino acid (PFAA) profiles in lung cancer patients and the efficacy of a PFAA-based multivariate discrimination index for the early detection of lung cancer. In this study, we aimed to verify the usefulness and robustness of PFAA profiling for detecting lung cancer using new test samples. Plasma samples were collected from 171 lung cancer patients and 3849 controls without apparent cancer. PFAA levels were measured by high-performance liquid chromatography (HPLC)-electrospray ionization (ESI)-mass spectrometry (MS). High reproducibility was observed both for the change in the PFAA profiles of the lung cancer patients and for the discriminating performance for lung cancer patients compared to previously reported results. Furthermore, multivariate discriminating functions obtained in previous studies clearly distinguished the lung cancer patients from the controls based on the area under the receiver-operator characteristic curve (AUC = 0.731-0.806), strongly suggesting the robustness of the methodology for clinical use. Moreover, the results suggested that the combinatorial use of this classifier and tumor markers improves the clinical performance of tumor markers. These findings suggest that PFAA profiling, which involves a relatively simple plasma assay and imposes a low physical burden on subjects, has great potential for improving the early detection of lung cancer.
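The reported AUC values can be read through the rank-based (Mann-Whitney) formulation of the area under the ROC curve: the probability that a randomly chosen case scores above a randomly chosen control. The sketch below computes it for invented toy discriminant scores, not the study's data.

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen case scores above a randomly
    chosen control, counting ties as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy discriminant scores: cases mostly, but not always, above controls.
auc = roc_auc([0.9, 0.8, 0.6, 0.55], [0.7, 0.5, 0.4, 0.3])   # 0.875
```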

  7. A clip-based protocol for breast boost radiotherapy provides clear target visualisation and demonstrates significant volume reduction over time

    International Nuclear Information System (INIS)

    Lewis, Lorraine; Cox, Jennifer; Morgia, Marita; Atyeo, John; Lamoury, Gillian

    2015-01-01

    The clinical target volume (CTV) for early stage breast cancer is difficult to clearly identify on planning computed tomography (CT) scans. Surgical clips inserted around the tumour bed should help to identify the CTV, particularly if the seroma has been reabsorbed, and enable tracking of CTV changes over time. A surgical clip-based CTV delineation protocol was introduced. CTV visibility and its post-operative shrinkage pattern were assessed. The subjects were 27 early stage breast cancer patients receiving post-operative radiotherapy alone and 15 receiving post-operative chemotherapy followed by radiotherapy. The radiotherapy alone (RT/alone) group received a CT scan at median 25 days post-operatively (CT1rt) and another at 40 Gy, median 68 days (CT2rt). The chemotherapy/RT group (chemo/RT) received a CT scan at median 18 days post-operatively (CT1ch), a planning CT scan at median 126 days (CT2ch), and another at 40 Gy (CT3ch). There was no significant difference (P = 0.08) between the initial mean CTV for each cohort. The RT/alone cohort showed significant CTV volume reduction of 38.4% (P = 0.01) at 40 Gy. The chemo/RT cohort had significantly reduced volumes between CT1ch (median 54 cm³, range 4–118) and CT2ch (median 16 cm³, range 2–99) (P = 0.01), but no significant volume reduction thereafter. Surgical clips enable localisation of the post-surgical seroma for radiotherapy targeting. Most seroma shrinkage occurs early, enabling CT treatment planning to take place at 7 weeks, which is within the 9 weeks recommended to limit disease recurrence.

  8. A clip-based protocol for breast boost radiotherapy provides clear target visualisation and demonstrates significant volume reduction over time

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Lorraine [Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales (Australia); Cox, Jennifer [Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales (Australia); Faculty of Health Sciences, University of Sydney, Sydney, New South Wales (Australia); Morgia, Marita [Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales (Australia); Atyeo, John [Faculty of Health Sciences, University of Sydney, Sydney, New South Wales (Australia); Lamoury, Gillian [Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales (Australia)

    2015-09-15

    The clinical target volume (CTV) for early stage breast cancer is difficult to clearly identify on planning computed tomography (CT) scans. Surgical clips inserted around the tumour bed should help to identify the CTV, particularly if the seroma has been reabsorbed, and enable tracking of CTV changes over time. A surgical clip-based CTV delineation protocol was introduced. CTV visibility and its post-operative shrinkage pattern were assessed. The subjects were 27 early stage breast cancer patients receiving post-operative radiotherapy alone and 15 receiving post-operative chemotherapy followed by radiotherapy. The radiotherapy alone (RT/alone) group received a CT scan at median 25 days post-operatively (CT1rt) and another at 40 Gy, median 68 days (CT2rt). The chemotherapy/RT group (chemo/RT) received a CT scan at median 18 days post-operatively (CT1ch), a planning CT scan at median 126 days (CT2ch), and another at 40 Gy (CT3ch). There was no significant difference (P = 0.08) between the initial mean CTV for each cohort. The RT/alone cohort showed significant CTV volume reduction of 38.4% (P = 0.01) at 40 Gy. The chemo/RT cohort had significantly reduced volumes between CT1ch (median 54 cm³, range 4–118) and CT2ch (median 16 cm³, range 2–99) (P = 0.01), but no significant volume reduction thereafter. Surgical clips enable localisation of the post-surgical seroma for radiotherapy targeting. Most seroma shrinkage occurs early, enabling CT treatment planning to take place at 7 weeks, which is within the 9 weeks recommended to limit disease recurrence.

  9. Heart rate-based lactate minimum test: a reproducible method.

    NARCIS (Netherlands)

    Strupler, M.; Muller, G.; Perret, C.

    2009-01-01

    OBJECTIVE: To find the individual intensity for aerobic endurance training, the lactate minimum test (LMT) seems to be a promising method. LMTs described in the literature consist of speed- or work rate-based protocols, but for training prescription in daily practice mostly heart rate is used.

  10. Introduction to Permutation and Resampling-Based Hypothesis Tests

    Science.gov (United States)

    LaFleur, Bonnie J.; Greevy, Robert A.

    2009-01-01

    A resampling-based method of inference--permutation tests--is often used when distributional assumptions are questionable or unmet. Not only are these methods useful for obvious departures from parametric assumptions (e.g., normality) and small sample sizes, but they are also more robust than their parametric counterparts in the presence of…
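A minimal two-sample permutation test in the spirit described above (data, function name and the Monte Carlo approximation are illustrative choices): the group labels are repeatedly shuffled and the observed mean difference is compared against the resulting permutation distribution, with no normality assumption.

```python
import random

def perm_test_mean_diff(x, y, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in group means: pool the
    samples, reshuffle the group labels n_perm times, and return the
    fraction of shuffles at least as extreme as the observed difference
    (with the conventional +1 smoothing)."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Clearly separated groups yield a small p-value even with n = 4 per group.
p = perm_test_mean_diff([12.1, 11.8, 12.4, 12.0], [9.9, 10.3, 10.1, 9.8])
```

With samples this small, all label assignments could be enumerated exactly instead of sampled; the Monte Carlo version above generalizes to larger samples.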

  11. Invariant-Based Automatic Testing of Modern Web Applications

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.; Roest, D.

    2011-01-01

    AJAX-based Web 2.0 applications rely on stateful asynchronous client/server communication, and client-side run-time manipulation of the DOM tree. This not only makes them fundamentally different from traditional web applications, but also more error-prone and harder to test. We propose a method for

  12. Functional13C-urea and glucose hydrogen/methane breath tests reveal significant association of small intestinal bacterial overgrowth in individuals with active Helicobacter pylori infection.

    Science.gov (United States)

    Enko, Dietmar; Kriegshäuser, Gernot

    2017-01-01

    Helicobacter pylori infection is considered to alter the bacterial flora in the upper gastrointestinal tract. This study aimed at investigating the presence of small intestinal bacterial overgrowth (SIBO) in patients with active H. pylori infection assessed by functional breath testing. A total of 109 outpatients, who were referred for the H. pylori ¹³C-urea breath test (¹³C-UBT) by general practitioners and specialists, were also tested for the presence of SIBO by the glucose hydrogen (H₂)/methane (CH₄) breath test (HMBT). A detailed anamnesis was carried out about the history of H. pylori infection, eradication therapies, proton pump inhibitor intake, and comorbidities. In total, 36/109 (33.0%) patients had a positive H. pylori ¹³C-UBT, and 35/109 (32.1%) patients had a positive glucose HMBT, the latter being indicative of SIBO. Interestingly, individuals with a positive H. pylori ¹³C-UBT were significantly more often associated with a positive glucose HMBT (p=0.002). Cohen's κ measuring agreement between the ¹³C-UBT and the glucose HMBT was 0.31 (confidence interval: 0.12-0.50) (p=0.001). Altogether, 19 of 54 (35.2%) patients, who had completed up to four eradication therapies, were diagnosed with SIBO by HMBT. H. pylori infection was found to be significantly associated with the presence of SIBO as determined by functional breath testing. In addition, SIBO rates appeared to have increased after completed eradication therapies. However, further longitudinal studies are warranted to fully elucidate the relationship and treatment modalities of coincident H. pylori infection and SIBO. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
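The reported Cohen's κ can be reproduced from a 2x2 agreement table. The cell counts below are illustrative assumptions, chosen only to be consistent with the reported marginals (36/109 UBT-positive, 35/109 HMBT-positive) and κ ≈ 0.31; the study does not publish the full table in this abstract.

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both tests positive, d = both negative, b/c = discordant pairs."""
    n = a + b + c + d
    p_o = (a + d) / n                        # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)    # chance agreement on "positive"
    p_no = ((c + d) / n) * ((b + d) / n)     # chance agreement on "negative"
    p_e = p_yes + p_no
    return (p_o - p_e) / (1 - p_e)

# Hypothetical cell counts matching the reported marginals and kappa ~ 0.31.
kappa = cohens_kappa(a=19, b=17, c=16, d=57)
```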

  13. Optimization of Deep Drilling Performance--Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    Energy Technology Data Exchange (ETDEWEB)

    Alan Black; Arnis Judzis

    2003-10-01

    This document details progress on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2002 and ending September 2003. The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit-fluid system technologies. The overall objectives are as follows: Phase 1--Benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. Accomplishments to date include the following: 4Q 2002--Project started; the industry team was assembled; a kick-off meeting was held at DOE Morgantown. 1Q 2003--An engineering meeting was held at Hughes Christensen, The Woodlands, Texas, to prepare preliminary plans for development and testing and to review equipment needs; operators started sending information regarding their needs for deep drilling challenges and priorities for the large-scale testing experimental matrix; Aramco joined the industry team as DEA 148 objectives paralleled the DOE project. 2Q 2003--Engineering and planning for high pressure drilling at TerraTek commenced. 3Q 2003--Continuation of engineering and design work for high pressure drilling at TerraTek; Baker Hughes INTEQ Drilling Fluids and Hughes Christensen commenced planning for Phase 1 testing--recommendations for bits and fluids.

  14. Robust Automated Test Assembly for Testlet-Based Tests: An Illustration with Analytical Reasoning Items

    Directory of Open Access Journals (Sweden)

    Bernard P. Veldkamp

    2017-12-01

    Full Text Available In many high-stakes testing programs, testlets are used to increase efficiency. Since responses to items belonging to the same testlet not only depend on the latent ability but also on correct reading, understanding, and interpretation of the stimulus, the assumption of local independence does not hold. Testlet response theory (TRT models have been developed to deal with this dependency. For both logit and probit testlet models, a random testlet effect is added to the standard logit and probit item response theory (IRT models. Even though this testlet effect might make the IRT models more realistic, application of these models in practice leads to new questions, for example, in automated test assembly (ATA. In many test assembly models, goals have been formulated for the amount of information the test should provide about the candidates. The amount of Fisher Information is often maximized or it has to meet a prespecified target. Since TRT models have a random testlet effect, Fisher Information contains a random effect as well. The question arises as to how this random effect in ATA should be dealt with. A method based on robust optimization techniques for dealing with uncertainty in test assembly due to random testlet effects is presented. The method is applied in the context of a high-stakes testing program, and the impact of this robust test assembly method is studied. Results are discussed, advantages of the use of robust test assembly are mentioned, and recommendations about the use of the new method are given.
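The idea of guarding the information target against random testlet effects can be caricatured with a pessimistic "mean minus margin times standard deviation" score. The greedy sketch below is purely illustrative and far simpler than the robust-optimization model in the article; every name and number is an assumption.

```python
def robust_assemble(testlets, length, margin=1.0):
    """Greedy sketch of robust test assembly: each testlet contributes
    Fisher information 'info' with uncertainty 'sd' arising from its
    random testlet effect; testlets are scored by the pessimistic value
    info - margin*sd and picked until the test length is reached."""
    scored = sorted(testlets, key=lambda t: t["info"] - margin * t["sd"],
                    reverse=True)
    picked, used = [], 0
    for t in scored:
        if used + t["items"] <= length:
            picked.append(t["id"])
            used += t["items"]
    return picked

pool = [
    {"id": "T1", "items": 4, "info": 6.0, "sd": 2.5},  # high mean, volatile
    {"id": "T2", "items": 4, "info": 5.0, "sd": 0.3},
    {"id": "T3", "items": 4, "info": 4.8, "sd": 0.2},
    {"id": "T4", "items": 4, "info": 5.5, "sd": 2.0},
]
chosen = robust_assemble(pool, length=8)   # prefers the stable testlets
```

The point mirrors the article's: a nominally more informative but volatile testlet can be dominated, in the worst case, by a slightly less informative but stable one.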

  15. Laboratory test of an APS-based sun sensor prototype

    Science.gov (United States)

    Rufino, Giancarlo; Perrotta, Alessandro; Grassi, Michele

    2017-11-01

    This paper deals with the design and prototype development of an Active Pixel Sensor (APS)-based miniature sun sensor and a laboratory facility for its indoor test and calibration. The miniature sun sensor is described and the laboratory test facility is presented in detail. The major focus of the paper is on tests and calibration of the sensor. Two different calibration functions have been adopted. They are based, respectively, on a geometrical model, which required least-squares optimisation of estimates of the system's physical parameters, and on neural networks. Calibration results are presented for both solutions, showing that accuracy on the order of 0.01° has been achieved. The neural calibration functions attained better performance thanks to their intrinsic auto-adaptive structure.
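The least-squares step of the geometric calibration can be illustrated with a simple pinhole-style model, x = f·tan(α) + x0: it is linear in the parameters (f, x0), so the normal equations solve it directly. The model form, names and numbers here are assumptions for illustration, not the authors' actual sensor model.

```python
import math

def fit_sun_sensor(angles_deg, centroids_px):
    """Least-squares fit of x = f*tan(alpha) + x0: estimates the focal
    length f (in pixels) and the boresight offset x0 from calibration
    pairs of known sun angles and measured spot centroids."""
    t = [math.tan(math.radians(a)) for a in angles_deg]
    n = len(t)
    st, sx = sum(t), sum(centroids_px)
    stt = sum(v * v for v in t)
    stx = sum(v * x for v, x in zip(t, centroids_px))
    f = (n * stx - st * sx) / (n * stt - st * st)   # slope of the fit
    x0 = (sx - f * st) / n                          # intercept
    return f, x0

# Synthetic calibration data generated with f = 200 px, x0 = 3 px.
angles = [-30, -20, -10, 0, 10, 20, 30]
pixels = [200 * math.tan(math.radians(a)) + 3 for a in angles]
f, x0 = fit_sun_sensor(angles, pixels)              # recovers ~ (200.0, 3.0)
```

In practice the measured centroids carry noise, so the recovered parameters approximate rather than reproduce the true ones; the exact recovery here only reflects the noiseless synthetic data.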

  16. Clinical testing with a panel of 25 genes associated with increased cancer risk results in a significant increase in clinically significant findings across a broad range of cancer histories.

    Science.gov (United States)

    Rosenthal, Eric T; Bernhisel, Ryan; Brown, Krystal; Kidd, John; Manley, Susan

    2017-12-01

    Genetic testing for inherited cancer risk is now widely used to target individuals for screening and prevention. However, there is limited evidence available to evaluate the clinical utility of various testing strategies, such as single-syndrome, single-cancer, or pan-cancer gene panels. Here we report on the outcomes of testing with a 25-gene pan-cancer panel in a consecutive series of 252,223 individuals between September 2013 and July 2016. The majority of individuals (92.8%) met testing criteria for Hereditary Breast and Ovarian Cancer (HBOC) and/or Lynch syndrome (LS). Overall, 17,340 pathogenic variants (PVs) were identified in 17,000 (6.7%) of the tested individuals. The PV positive rate was 9.8% among individuals with a personal cancer history, compared to 4.7% in unaffected individuals. PVs were most common in BRCA1/2 (42.2%), other breast cancer (BR) genes (32.9%), and the LS genes (13.2%). Half the PVs identified among individuals who met only HBOC testing criteria were in genes other than BRCA1/2. Similarly, half of the PVs identified in individuals who met only LS testing criteria were in non-LS genes. These findings suggest that genetic testing with a pan-cancer panel in this cohort provides improved clinical utility over traditional single-gene or single-syndrome testing. Copyright © 2017 Myriad Genetics, Inc. Published by Elsevier Inc. All rights reserved.

  17. Computer Based Test Untuk Seleksi Masuk Politeknik Negeri Bengkalis

    Directory of Open Access Journals (Sweden)

    Agus Tedyyana

    2017-11-01

    Full Text Available Selection of prospective new students can be performed with a Computer Based Test (CBT) application. The methods used include data collection techniques, system analysis, a design model, implementation and testing. This study produced a CBT application in which questions drawn from the question bank are randomized with the Fisher-Yates Shuffle method so that the same question is never presented twice. To secure the question data while it travels over the network, the questions pass through encryption and decryption using the RSA cryptographic algorithm before being displayed. The software was designed with the waterfall model, the database with an entity-relationship diagram, and the interface with Hypertext Markup Language (HTML), Cascading Style Sheets (CSS) and jQuery; the system was implemented as a web application using the PHP programming language and a MySQL database. The network architecture used by the Computer Based Test application is a client-server model on a Local Area Network (LAN). Keywords: Computer Based Test, Fisher-Yates Shuffle, Cryptography, Local Area Network
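The Fisher-Yates Shuffle used for question randomization can be sketched as follows; this is the standard formulation of the algorithm (variable names are illustrative), which makes every ordering of the question bank equally likely.

```python
import random

def fisher_yates(items, rng=random):
    """In-place Fisher-Yates shuffle: every one of the n! orderings is
    equally likely, so repeated test sessions see the questions in
    independent random orders."""
    for i in range(len(items) - 1, 0, -1):
        j = rng.randrange(i + 1)             # uniform index in 0..i
        items[i], items[j] = items[j], items[i]
    return items

deck = fisher_yates(list(range(20)))         # a uniform random permutation of 0..19
```

Drawing without replacement from the shuffled list is what prevents the application from presenting the same question twice within a session.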

  18. A novel classification of frontal bone fractures: The prognostic significance of vertical fracture trajectory and skull base extension.

    Science.gov (United States)

    Garg, Ravi K; Afifi, Ahmed M; Gassner, Jennifer; Hartman, Michael J; Leverson, Glen; King, Timothy W; Bentz, Michael L; Gentry, Lindell R

    2015-05-01

    The broad spectrum of frontal bone fractures, including those with orbital and skull base extension, is poorly understood. We propose a novel classification scheme for frontal bone fractures. Maxillofacial CT scans of trauma patients were reviewed over a five-year period, and frontal bone fractures were classified as follows. Type 1: Frontal sinus fracture without vertical extension. Type 2: Vertical fracture through the orbit without frontal sinus involvement. Type 3: Vertical fracture through the frontal sinus without orbit involvement. Type 4: Vertical fracture through the frontal sinus and ipsilateral orbit. Type 5: Vertical fracture through the frontal sinus and contralateral or bilateral orbits. We also identified the depth of skull base extension, and performed a chart review to identify associated complications. 149 frontal bone fractures, including 51 non-vertical frontal sinus (Type 1, 34.2%) and 98 vertical (Types 2-5, 65.8%) fractures were identified. Vertical fractures penetrated the middle or posterior cranial fossa significantly more often than non-vertical fractures (62.2 vs. 15.7%, p = 0.0001) and had a significantly higher mortality rate (18.4 vs. 0%, p fractures with frontal sinus and orbital extension, and fractures that penetrated the middle or posterior cranial fossa had the strongest association with intracranial injuries, optic neuropathy, disability, and death (p fractures carry a worse prognosis than frontal bone fractures without a vertical pattern. In addition, vertical fractures with extension into the frontal sinus and orbit, or with extension into the middle or posterior cranial fossa, have the highest complication rate and mortality. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. Computer-based tests: The impact of test design and problem of equivalency

    Czech Academy of Sciences Publication Activity Database

    Květon, Petr; Jelínek, Martin; Vobořil, Dalibor; Klimusová, H.

    -, č. 23 (2007), s. 32-51 ISSN 0747-5632 R&D Projects: GA ČR(CZ) GA406/99/1052; GA AV ČR(CZ) KSK9058117 Institutional research plan: CEZ:AV0Z7025918 Keywords : Computer-based assessment * speeded test * equivalency Subject RIV: AN - Psychology Impact factor: 1.344, year: 2007

  20. Failing Tests: Commentary on "Adapting Educational Measurement to the Demands of Test-Based Accountability"

    Science.gov (United States)

    Thissen, David

    2015-01-01

    In "Adapting Educational Measurement to the Demands of Test-Based Accountability" Koretz takes the time-honored engineering approach to educational measurement, identifying specific problems with current practice and proposing minimal modifications of the system to alleviate those problems. In response to that article, David Thissen…

  1. Communicative Language Testing: Implications for Computer Based Language Testing in French for Specific Purposes

    Science.gov (United States)

    García Laborda, Jesús; López Santiago, Mercedes; Otero de Juan, Nuria; Álvarez Álvarez, Alfredo

    2014-01-01

    Current evolutions of language testing have led to integrating computers in FSP assessments both in oral and written communicative tasks. This paper deals with two main issues: learners' expectations about the types of questions in FSP computer based assessments and the relation with their own experience. This paper describes the experience of 23…

  2. [Multiparameter analysis of the ergometric test. Significance of the failure of systolic blood pressure to decrease during recovery phase as an index of coronary disease].

    Science.gov (United States)

    Doria, G; Scaccianoce, G; Artale, S; Francaviglia, B; Platania, F; Circo, A

    1990-10-01

    Ergometric tests were performed in 27 patients who had previously undergone coronarography following instrumental findings and/or symptoms highly indicative of ischemic cardiopathy. The aim of the study was to assess the diagnostic importance of the failure of systolic blood pressure to decrease during the third minute of the recovery phase of the test as an index of coronary disease. In particular, as reported by other studies, the ratio between systolic blood pressure at the third minute of recovery and maximum systolic blood pressure during the test was also assessed; values above 0.7 were considered pathological. Sixteen of the 27 patients examined showed hemodynamically significant lesions, whereas 11 patients were free of lesions; 9 had previous myocardial necrosis. The level of the above ratio in subjects without significant coronary lesions was 0.66 +/- 0.05, whereas it was 0.85 +/- 0.04 (p less than 0.01) in patients with coronary disease. Sensitivity, specificity, and positive and negative predictive values were, respectively, 91.6%, 62%, 64.7% and 90.9%. In patients with lesions of the three main arteries, both the sensitivity and the specificity were 100%. In the same patients, the ST criteria gave 85.7%, 50%, 81.8% and 74.3%.
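The reported sensitivity, specificity and predictive values follow from a 2x2 confusion table built with the 0.7 cutoff. The sketch below shows the computation on invented toy data; the per-patient ratios in the study are not given in the abstract.

```python
def diagnostic_performance(ratios, has_cad, cutoff=0.7):
    """Classify each patient as test-positive when the recovery/maximum
    systolic pressure ratio exceeds the cutoff, then derive sensitivity,
    specificity and predictive values from the 2x2 confusion table."""
    tp = sum(r > cutoff and d for r, d in zip(ratios, has_cad))
    fn = sum(r <= cutoff and d for r, d in zip(ratios, has_cad))
    fp = sum(r > cutoff and not d for r, d in zip(ratios, has_cad))
    tn = sum(r <= cutoff and not d for r, d in zip(ratios, has_cad))
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

# Toy illustration: 4 diseased and 4 disease-free patients.
stats = diagnostic_performance(
    ratios=[0.85, 0.90, 0.80, 0.65, 0.66, 0.72, 0.60, 0.68],
    has_cad=[True, True, True, True, False, False, False, False])
```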

  3. Risk-based inservice testing program modifications at Palo Verde nuclear generating station

    Energy Technology Data Exchange (ETDEWEB)

    Knauf, S.; Lindenlaub, B.; Linthicum, R.

    1996-12-01

    Arizona Public Service Company (APS) is investigating changes to the Palo Verde Inservice Testing (IST) Program that are intended to result in the reduction of the required test frequency for various valves in the American Society of Mechanical Engineers (ASME) Section XI IST program. The analytical techniques employed to select candidate valves and to demonstrate that these frequency reductions are acceptable are risk based. The results of the Palo Verde probabilistic risk assessment (PRA), updated in June 1994, and the risk-significance determination performed as part of the implementation efforts for 10 CFR 50.65 (the maintenance rule) were used to select candidate valves for extended test intervals. Additional component-level evaluations were conducted by an "expert panel." The decision to pursue these changes was facilitated by the ASME Risk-Based Inservice Testing Research Task Force, in which Palo Verde is participating as a pilot plant. The NRC's increasing acceptance of cost-beneficial licensing actions and risk-based submittals also provided incentive to seek these changes. Arizona Public Service is pursuing the risk-based IST program modification in order to reduce the unnecessary regulatory burden of the IST program through qualitative and quantitative analysis consistent with maintaining a high level of plant safety. The objectives of this project at Palo Verde are as follows: (1) Apply risk-based technologies to IST components to determine their risk significance (i.e., high or low). (2) Apply a combination of deterministic and risk-based methods to determine appropriate testing requirements for IST components, including improvement of testing methods and frequency intervals for high-risk-significant components. (3) Apply risk-based technologies to high-risk-significant components identified by the "expert panel" and outside of the IST program to determine whether additional testing requirements are appropriate.

  4. A novel PMT test system based on waveform sampling

    Science.gov (United States)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with a traditional test system based on a QDC, TDC and scaler, a test system based on waveform sampling was constructed to sample signals from the 8" R5912 and 20" R12860 Hamamatsu PMTs at light levels ranging from single to multiple photoelectrons. To achieve high throughput and reduce dead time in data processing, data acquisition software based on LabVIEW was developed that runs with a parallel mechanism. The analysis algorithm is implemented in LabVIEW, and the spectra of charge, amplitude, signal width and rise time are analyzed offline. The results from the charge-to-digital converter, the time-to-digital converter and waveform sampling are compared in detail.

  5. An Approach to Model Based Testing of Multiagent Systems

    Directory of Open Access Journals (Sweden)

    Shafiq Ur Rehman

    2015-01-01

    Autonomous agents act on behalf of the user to achieve defined goals or objectives. They are situated in dynamic environments and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents; however, testing is required to build confidence in the working of a multiagent system. The Prometheus methodology is a commonly used approach to designing multiagent systems, and systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agents and actors are considered to test the multiagent system. These interactions include percepts and actions, along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion.
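
    As an illustrative sketch (not the paper's prototype tool), one simple edge-coverage criterion over a protocol graph can be realized by deriving, for each transition, a shortest path from the start state and extending it by that transition. The graph and state names below are hypothetical:

```python
from collections import deque

def edge_coverage_paths(graph, start):
    """Generate test paths that together cover every edge of a protocol graph.

    For each uncovered transition (u, v), take a shortest path from `start`
    to u (BFS over the whole graph) and extend it with v. Transitions
    unreachable from `start` are skipped.
    """
    def shortest_path(dst):
        prev, seen, q = {}, {start}, deque([start])
        while q:
            n = q.popleft()
            if n == dst:
                path = [n]
                while path[-1] != start:
                    path.append(prev[path[-1]])
                return path[::-1]
            for v in graph.get(n, []):
                if v not in seen:
                    seen.add(v)
                    prev[v] = n
                    q.append(v)
        return None

    covered, paths = set(), []
    for u, vs in graph.items():
        for v in vs:
            if (u, v) in covered:
                continue
            p = shortest_path(u)
            if p is None:
                continue  # transition unreachable from the start state
            p = p + [v]
            covered.update(zip(p, p[1:]))  # mark every edge on the path
            paths.append(p)
    return paths

# Hypothetical protocol graph: nodes are interaction states, edges are
# percept/action/message transitions.
protocol = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
paths = edge_coverage_paths(protocol, 'A')
```

    Each returned path starts at the initial state, so every path is directly executable as a test scenario against the system under test.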

  6. What if the Electrical Conductivity of Graphene Is Significantly Deteriorated for the Graphene-Semiconductor Composite-Based Photocatalysis?

    Science.gov (United States)

    Weng, Bo; Xu, Yi-Jun

    2015-12-23

    The extraordinary electrical conductivity of graphene is widely invoked in the literature to explain the activity enhancement of graphene-semiconductor composite photocatalysts. However, from the viewpoint of an entire composite-based artificial photosynthetic system, the photocatalytic performance of a graphene-semiconductor composite is not simply a matter of the excellent electrical conductivity of graphene. Herein, the intentional design of melamine resin monomer-functionalized three-dimensional (3D) graphene (denoted as MRGO) with significantly deteriorated electrical conductivity enables us to focus independently on the geometry effect of MRGO on the photocatalytic performance of the graphene-semiconductor composite. By coupling the semiconductor CdS with graphene, including MRGO and reduced graphene oxide (RGO), it was found that CdS-MRGO composites exhibit much higher visible-light photoactivity than CdS-RGO composites, although the electrical conductivity of MRGO is remarkably lower than that of RGO. Comparative characterizations show that this photoactivity enhancement is predominantly attributable to the restacking-inhibited 3D architectural morphology of MRGO, by which the synergistic effects of boosted separation and transport of photogenerated charge carriers and increased adsorption capacity are achieved. Our work highlights that what matters for the photocatalytic performance of a graphene-semiconductor composite is not simply how to harness the electrical conductivity of graphene but the rational ensemble design of the composite, which includes the integrative optimization of the geometrical and electrical factors of each individual component and of the interface composition.

  7. OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS & HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION

    Energy Technology Data Exchange (ETDEWEB)

    Alan Black; Arnis Judzis

    2004-10-01

    This industry cost-shared program aims to benchmark drilling rates of penetration (ROP) in selected simulated deep formations and to significantly improve ROP through team development of aggressive diamond-product drill bit and fluid system technologies. The overall objectives are as follows: Phase 1--Benchmark "best in class" diamond and other drill bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test them at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. As of the report date, TerraTek has concluded all major preparations for the high pressure drilling campaign. Baker Hughes encountered difficulties in providing additional pumping capacity before TerraTek's scheduled relocation to another facility, so the program was delayed further to accommodate the full testing program.

  8. TR-EDB: Test Reactor Embrittlement Data Base, Version 1

    International Nuclear Information System (INIS)

    Stallmann, F.W.; Wang, J.A.; Kam, F.B.K.

    1994-01-01

    The Test Reactor Embrittlement Data Base (TR-EDB) is a collection of results from irradiations in materials test reactors. It complements the Power Reactor Embrittlement Data Base (PR-EDB), whose data are restricted to results from the analysis of surveillance capsules in commercial power reactors. The rationale behind this restriction was the assumption that the results of test reactor experiments may not be applicable to power reactors and could, therefore, be challenged if such data were included. For this very reason, the embrittlement predictions in Reg. Guide 1.99, Rev. 2, were based exclusively on power reactor data. However, test reactor experiments are able to cover the much wider range of materials and irradiation conditions that is needed to explore more fully a variety of models for the prediction of irradiation embrittlement. These data are also needed for the study of annealing effects for life extension of reactor pressure vessels, which are difficult to obtain from surveillance capsule results.

  10. Lateral flow-based antibody testing for Chlamydia trachomatis.

    Science.gov (United States)

    Gwyn, Sarah; Mitchell, Alexandria; Dean, Deborah; Mkocha, Harran; Handali, Sukwan; Martin, Diana L

    2016-08-01

    We describe here a lateral flow-based assay (LFA) for the detection of antibodies against immunodominant antigen Pgp3 from Chlamydia trachomatis, the causative agent of urogenital chlamydia infection and ocular trachoma. Optimal signal detection was achieved when the gold-conjugate and test line contained Pgp3, creating a dual sandwich capture assay. The LFA yielded positive signals with serum and whole blood but not with eluted dried blood spots. For serum, the agreement of the LFA with the non-reference multiplex assay was 96%, the specificity using nonendemic pediatric sera was 100%, and the inter-rater agreement was κ=0.961. For whole blood, the agreement of LFA with multiplex was 81.5%, the specificity was 100%, and the inter-rater agreement was κ=0.940. The LFA was tested in a field environment and yielded similar results to those from laboratory-based testing. These data show the successful development of a lateral flow assay for detection of antibodies against Pgp3 with reliable use in field settings, which would make antibody-based testing for trachoma surveillance highly practical, especially after cessation of trachoma elimination programs. Published by Elsevier B.V.

  11. Titan TTCN-3 Based Test Framework for Resource Constrained Systems

    Directory of Open Access Journals (Sweden)

    Yushev Artem

    2016-01-01

    Wireless communication systems are increasingly becoming part of our daily lives. Especially with the Internet of Things (IoT), overall connectivity is increasing rapidly, since everyday objects are becoming part of the global network. For this purpose several new wireless protocols have arisen, of which 6LoWPAN (IPv6 over Low-power Wireless Personal Area Networks) can be seen as one of the most important in this sector. Originally designed on top of the IEEE 802.15.4 standard, it has been the subject of various adaptations that allow 6LoWPAN to be used over different technologies, e.g. DECT Ultra Low Energy (ULE). Although this high connectivity offers many new possibilities, several requirements and pitfalls come along with such new systems. With an increasing number of connected devices, interoperability between different providers is one of the biggest challenges, which makes it necessary to verify the functionality and stability of the devices and the network. Testing therefore becomes one of the key components that decides the success or failure of such a system. Although several protocol implementations are commonly available, e.g. for IoT-based systems, there is still a lack of corresponding tools and environments for functional and conformance testing. This article describes the architecture and functioning of the proposed test framework, based on Testing and Test Control Notation Version 3 (TTCN-3), for 6LoWPAN over ULE networks.

  12. Tests of gravity with future space-based experiments

    Science.gov (United States)

    Sakstein, Jeremy

    2018-03-01

    Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.
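
    For orientation (a textbook post-Newtonian result, not a derivation from this abstract), the parametrized post-Newtonian parameter γ mentioned above controls the deflection of light passing a mass M at impact parameter b, so measurements of the deflection angle directly constrain the deviation |γ − 1| from general relativity:

```latex
\delta\theta \;=\; \frac{1+\gamma}{2}\,\frac{4GM}{c^{2}b},
\qquad \gamma_{\mathrm{GR}} = 1 .
```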

  13. A puzzle form of a non-verbal intelligence test gives significantly higher performance measures in children with severe intellectual disability

    Directory of Open Access Journals (Sweden)

    Crewther Sheila G

    2008-08-01

    Abstract Background Assessment of the 'potential intellectual ability' of children with severe intellectual disability (ID) is limited, as current tests designed for typically developing children do not maintain their interest. Thus a manual puzzle version of the Raven's Coloured Progressive Matrices (RCPM) was devised to appeal to the attentional and sensory preferences and language limitations of children with ID. It was hypothesized that performance on the book and manual puzzle forms would not differ for typically developing children, but that children with ID would perform better on the puzzle form. Methods The first study assessed the validity of the puzzle form of the RCPM for 76 typically developing children in a test-retest crossover design, with a 3-week interval between tests. A second study compared performance and completion rate on the puzzle form with the book form in a sample of 164 children with ID. Results In the first study, no significant difference was found between performance on the puzzle and book forms in typically developing children, irrespective of the order of completion. The second study demonstrated a significantly higher performance and completion rate for the puzzle form compared to the book form in the ID population. Conclusion Similar performance on the book and puzzle forms of the RCPM by typically developing children suggests that both forms measure the same construct. These findings suggest that the puzzle form does not require greater cognitive ability but demands sensory-motor attention and limits distraction in children with severe ID. Thus, we suggest the puzzle form of the RCPM is a more reliable measure of the non-verbal mentation of children with severe ID than the book form.

  14. Tumor-selective replication herpes simplex virus-based technology significantly improves clinical detection and prognostication of viable circulating tumor cells

    DEFF Research Database (Denmark)

    Zhang, Wen; Bao, Li; Yang, Shaoxing

    2016-01-01

    Detection of circulating tumor cells (CTCs) remains a significant challenge due to their vast physical and biological heterogeneity. We developed a cell-surface-marker-independent technology based on a telomerase-specific, replication-selective oncolytic herpes-simplex-virus-1 that targets telomerase-reverse-transcriptase-positive cancer cells and expresses green fluorescent protein, which identifies viable CTCs from a broad spectrum of malignancies. Our method recovered 75.5-87.2% of tumor cells spiked into healthy donor blood, as validated by different methods, including single-cell sequencing. CTCs were detected in 59-100% of 326 blood samples from patients with 6 different solid organ carcinomas and lymphomas. Significantly, CTC-positive rates increased remarkably with tumor progression from N0M0, N+M0 to M1 in each of 5 tested cancers (lung, colon, liver, gastric and pancreatic cancer, and glioma). Among 21 non-small cell lung...

  15. SIGNIFICANCE OF CREATING A CUSTOM DIAGNOSTIC ENGLISH LANGUAGE TEST IN ENGLISH FOR SPECIFIC PURPOSES COURSE AT THE UNIVERSITY OF NIŠ MEDICAL SCHOOL

    Directory of Open Access Journals (Sweden)

    Nataša Bakić-Mirić

    2012-03-01

    The purpose of diagnostic English language testing is to help students assess their level of English language skills and to provide them with effective training tools to improve those skills. Such a test is usually designed for students who have reached a lower-intermediate level of communicative competence in reading, listening, speaking and writing, and it is based on a comprehensive needs analysis. However, it can also be adjusted for students who have reached intermediate, upper-intermediate or advanced knowledge of the English language. This paper also examines the use of qualitative data analysis to improve students' English language performance while taking an ESP course, which gives an insight into what a teacher needs for successful English language course and lesson planning, namely an overview of the gray areas of the English language that most often cause problems for students.

  16. Model-based automated testing of critical PLC programs.

    CERN Document Server

    Fernández Adiego, B; Tournier, J-C; González Suárez, V M; Bliudze, S

    2014-01-01

    Testing of critical PLC (Programmable Logic Controller) programs remains a challenging task for control system engineers as it can rarely be automated. This paper proposes a model-based approach which uses the BIP (Behavior, Interactions and Priorities) framework to perform automated testing of PLC programs developed with the UNICOS (UNified Industrial COntrol System) framework. It defines the translation procedure and rules from UNICOS to BIP, which can be fully automated in order to hide the complexity of the underlying model from the control engineers. The approach is illustrated and validated through the study of a water treatment process.

  17. A test beam upgrade based on the BEPC-LINAC

    International Nuclear Information System (INIS)

    Li Jiacai; Wu Yuanming; Cui Xiangzong; Zhang Liangsheng; Zhou Baoqing; Liu Zhengquan; Zhang Shaoping; Sun Changchun; Zhang Zhuxiang; Zhang Caidi; Zheng Linsheng; Liu Shixing; Shen Ji; Yin Zejie; Zhang Yongming; Chen Ziyu

    2004-01-01

    A total of three beam lines, E1, E2 and E3, have been built based on the LINAC of BEPC. The E1 beam is to be used for an intense slow-positron facility. E2 is a primary positron or electron beam with an energy of 1.3-1.5 GeV. E3 is a secondary electron or pion test beam whose momentum can be adjusted continuously. The position accuracy for a detected particle is 0.2-0.4 mm, with an event rate of 3-4 Hz. This beam has been successfully used for beam tests of several detectors. (author)

  18. Timer-based data acquisitioning of creep testing machines

    International Nuclear Information System (INIS)

    Rana, M.A.; Farooq, M.A.; Ali, L.

    1998-01-01

    The duration of a creep test may be short or long term, extending over several years, so continuous operation of a computer solely for automatic data acquisition from creep testing machines is wasteful. Timer-based data acquisition for machines already interfaced with IBM-PC/AT and compatibles has been streamlined for economical use of the computer. A locally designed and fabricated timer has been introduced into the system to meet its requirements. The timer switches on the computer according to a pre-scheduled time interval to capture creep data in real time. The periodically captured data are logged on the hard disk for analysis and report generation. (author)

  19. Self-testing of binary observables based on commutation

    DEFF Research Database (Denmark)

    Kaniewski, Jędrzej

    2017-01-01

    We consider the problem of certifying binary observables based on a Bell inequality violation alone, a task known as self-testing of measurements. We introduce a family of commutation-based measures, which encode all the distinct arrangements of two projective observables on a qubit. These quantities by construction take into account the usual limitations of self-testing, and since they are "weighted" by the (reduced) state, they automatically deal with rank-deficient reduced density matrices. We show that these measures can be estimated from the observed Bell violation in several scenarios, and the proofs rely only on standard linear algebra. The trade-offs turn out to be tight, and in particular, they give nontrivial statements for arbitrarily small violations. On the other extreme, observing the maximal violation allows us to deduce precisely the form of the observables, which immediately leads...

  20. Do sediment type and test durations affect results of laboratory-based, accelerated testing studies of permeable pavement clogging?

    Science.gov (United States)

    Nichols, Peter W B; White, Richard; Lucke, Terry

    2015-04-01

    Previous studies have attempted to quantify the clogging processes of Permeable Interlocking Concrete Pavers (PICPs) using accelerated testing methods; however, the results have been variable. This study investigated the effects that different sediment types (natural and silica), different simulated rainfall intensities, and different testing durations had on the observed clogging processes (and measured surface infiltration rates) in laboratory-based, accelerated PICP testing. Results showed that accelerated simulated laboratory testing results are highly dependent on the type and size of sediment used in the experiments. For example, when real stormwater sediment up to 1.18 mm in size was used, neither testing duration nor stormwater application rate had any significant effect on PICP clogging. However, the study clearly showed that shorter testing durations generally increased clogging and reduced the surface infiltration rates of the models when artificial silica sediment was used. Longer testing durations also generally increased clogging of the models when fine sediment (<300 μm) was used. Results from this study will help researchers and designers better anticipate when and why PICPs are susceptible to clogging, reduce maintenance and extend the useful life of these increasingly common stormwater best management practices.

  1. An Effective Strategy to Build Up a Balanced Test Suite for Spectrum-Based Fault Localization

    Directory of Open Access Journals (Sweden)

    Ning Li

    2016-01-01

    During past decades, many automated software fault diagnosis techniques, including Spectrum-Based Fault Localization (SBFL), have been proposed to improve the efficiency of software debugging. In SBFL, suspiciousness calculation is closely related to the numbers of failed and passed test cases. Studies have shown that the ratio of failed to passed test cases has a more significant impact on the accuracy of SBFL than the total number of test cases, and that a balanced test suite is more beneficial to the accuracy of SBFL. Based on theoretical analysis, we propose a PNF (Passed test cases, Not execute Faulty statement) strategy to reduce a test suite and build a more balanced one for SBFL, which can be used in regression testing. We evaluated the strategy in experiments on the Siemens and Space programs. The experiments indicated that our PNF strategy can be used to construct a new test suite effectively. Compared with the original test suite, the new one is smaller (on average, 90% of test cases were removed in the experiments) and has a more balanced ratio of failed to passed test cases, while retaining the same statement coverage and fault localization accuracy.
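
    For context, a minimal sketch of one widely used SBFL suspiciousness metric (Tarantula; the abstract's analysis applies to spectrum-based formulas of this kind generally) shows how the failed-to-passed ratio enters the score:

```python
def tarantula(failed_s, passed_s, total_failed, total_passed):
    """Tarantula suspiciousness of a statement s from spectrum counts.

    failed_s / passed_s: failing / passing test cases that execute s.
    total_failed / total_passed: totals over the whole test suite.
    """
    if failed_s == 0:
        return 0.0  # never executed by a failing test: not suspicious
    f = failed_s / total_failed
    p = passed_s / total_passed if total_passed else 0.0
    return f / (f + p)

# A statement executed by all failing tests and a few passing ones, scored
# under a heavily imbalanced suite (2 failed vs 98 passed tests) ...
susp_imbalanced = tarantula(2, 10, 2, 98)
# ... versus under a balanced suite (10 failed vs 10 passed tests).
susp_balanced = tarantula(10, 10, 10, 10)
```

    Because the per-statement counts are normalized by the suite totals, changing the failed-to-passed balance of the suite shifts every suspiciousness score, which is why a balanced suite matters for ranking accuracy.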

  2. Application of Panel-Based Tests for Inherited Risk of Cancer.

    Science.gov (United States)

    Shah, Payal D; Nathanson, Katherine L

    2017-08-31

    Next-generation or massively parallel sequencing has transformed the landscape of genetic testing for cancer susceptibility. Panel-based genetic tests evaluate multiple genes simultaneously and rapidly. Because these tests are frequently offered in clinical settings, understanding their clinical validity and utility is critical. When evaluating the inherited risk of breast and ovarian cancers, panel-based tests provide incremental benefit compared with BRCA1/2 genetic testing. For inherited risk of other cancers, such as colon cancer and pheochromocytoma-paraganglioma, the clinical utility and yield of panel-based testing are higher; in fact, simultaneous evaluation of multiple genes has been the historical standard for these diseases. Evaluating inherited risk with panel-based testing has recently entered clinical practice for prostate and pancreatic cancers, with potential therapeutic implications. The resulting variants of uncertain significance and mutations with unclear actionability pose challenges to service providers and patients, underscoring the importance of genetic counseling and data-sharing initiatives. This review explores the evolving merits, challenges, and nuances of panel-based testing for cancer susceptibility.

  3. astrophysical significance

    Directory of Open Access Journals (Sweden)

    Dartois E.

    2014-02-01

    Clathrate hydrates, ice inclusion compounds, are of major importance for the Earth's permafrost regions and may control the stability of gases in many astrophysical bodies such as planets, comets and possibly interstellar grains. Their physical behavior may provide a trapping mechanism that modifies the absolute and relative composition of icy bodies and could be the source of late-time injection of gaseous species into planetary atmospheres or hot cores. In this study, we provide and discuss laboratory-recorded infrared signatures of clathrate hydrates in the near- to mid-infrared and the implications for space-based astrophysical remote detection, in order to constrain their possible presence.

  4. Operational Testing of Satellite based Hydrological Model (SHM)

    Science.gov (United States)

    Gaur, Srishti; Paul, Pranesh Kumar; Singh, Rajendra; Mishra, Ashok; Gupta, Praveen Kumar; Singh, Raghavendra P.

    2017-04-01

    Incorporating the concept of transposability into model testing is one of the prominent ways to check the credibility of a hydrological model. Successful testing ensures the ability of the model to deal with changing conditions, along with its extrapolation capacity. For a newly developed model, a number of questions arise regarding its applicability, so testing the credibility of the model is essential to assess its strengths and limitations proficiently. This motivates a 'Hierarchical Operational Testing' of the Satellite based Hydrological Model (SHM), a newly developed coupled surface water-groundwater model, under the PRACRITI-2 program initiated by the Space Application Centre (SAC), Ahmedabad. SHM aims at sustainable water resources management using remote sensing data from Indian satellites. It consists of grid cells of 5 km x 5 km resolution and comprises five modules: Surface Water (SW), Forest (F), Snow (S), Groundwater (GW) and Routing (ROU). The SW module (which operates in grid cells with land cover other than forest and snow) estimates surface runoff, soil moisture and evapotranspiration using the NRCS-CN method, a water balance, and the Hargreaves method, respectively. The hydrology of the F module depends entirely on sub-surface processes, and its water balance is calculated accordingly. The GW module generates baseflow (depending on water table variation relative to the level of water in streams) using the Boussinesq equation. The ROU module is based on a cell-to-cell routing technique built on the principle of the Time Variant Spatially Distributed Direct Runoff Hydrograph (SDDH) to route the runoff and baseflow generated by the different modules to the outlet. For this study the Subarnarekha river basin, a flood-prone zone of eastern India, has been chosen for the hierarchical operational testing scheme, which includes tests under stationary as well as transitory conditions. For this the basin has been divided into three sub-basins using three flow
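
    As a hedged sketch of the runoff component mentioned above, the standard NRCS-CN (SCS Curve Number) relation can be computed as follows; the curve number and rainfall depth here are illustrative values, not SHM's calibrated parameters:

```python
def scs_runoff_mm(rainfall_mm, curve_number, ia_ratio=0.2):
    """Direct runoff depth by the SCS / NRCS Curve Number method (mm).

    S  = potential maximum retention, derived from the curve number CN.
    Ia = initial abstraction, conventionally 0.2 * S.
    Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0.
    """
    s = 25400.0 / curve_number - 254.0  # retention S in mm
    ia = ia_ratio * s                   # initial abstraction Ia
    if rainfall_mm <= ia:
        return 0.0                      # all rainfall abstracted, no runoff
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Illustrative storm: 100 mm of rain on a moderately impervious cell (CN = 80).
q = scs_runoff_mm(100.0, 80.0)
```

    In a gridded model such as SHM, a relation of this form would be evaluated per cell with a land-use- and soil-dependent curve number.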

  5. Nonapnea Sleep Disorders in Patients Younger than 65 Years Are Significantly Associated with CKD: A Nationwide Population-Based Study.

    Directory of Open Access Journals (Sweden)

    Hugo You-Hsien Lin

    Nonapnea sleep disorders (NASD) and sleep-related problems are associated with poor health outcomes. However, the association between NASD and the development and prognosis of chronic kidney disease (CKD) has not been investigated thoroughly. We explored the association between CKD and NASD in Taiwan. We conducted a population-based study using the Taiwan National Health Insurance database, with 1,000,000 representative records for the period from January 1, 2000 to December 31, 2009. We investigated the incidence and risk of CKD in 7,006 newly diagnosed NASD cases compared with 21,018 people without NASD, matched according to age, sex, index year, urbanization, region, and monthly income at a 1:3 ratio. The subsequent risk of CKD was 1.48-fold higher in the NASD cohort than in the control cohort (95% confidence interval [CI] = 1.26-1.73, p < 0.001). Male sex, older age, type 2 diabetes mellitus, and gout were significant factors associated with the increased risk of CKD (p < 0.001). Among the different types of NASD, patients with insomnia had a 52% increased risk of developing CKD (95% CI = 1.23-1.84; p < 0.01), whereas patients with sleep disturbance had a 49% increased risk of subsequent CKD (95% CI = 1.19-1.87; p < 0.001). Younger women (aged < 65 years) were at high risk of CKD with NASD (adjusted hazard ratio [HR] = 1.81; 95% CI = 1.35-2.40, p < 0.001). In this nationwide population-based cohort study, patients with NASD, particularly men of all ages and women aged younger than 65 years, were at high risk of CKD.

  6. Salt Fog Testing Iron-Based Amorphous Alloys

    International Nuclear Information System (INIS)

    Rebak, Raul B.; Aprigliano, Louis F.; Day, S. Daniel; Farmer, Joseph C.

    2007-01-01

    Iron-based amorphous alloys are hard and highly corrosion resistant, which makes them desirable for salt water and other applications. These alloys can be produced as powder and can be deposited as coatings on any surface that needs to be protected from the environment. It was of interest to examine the behavior of these amorphous alloys in the standard ASTM B 117 salt-fog test. Three different amorphous coating compositions were deposited on 316L SS coupons and exposed for many cycles of the salt fog test. Other common engineering alloys, such as 1018 carbon steel, 316L SS and Hastelloy C-22, were also tested together with the amorphous coatings. Results show that the amorphous coatings are resistant to rusting in salt fog. Partial devitrification may be responsible for isolated rust spots in one of the coatings. (authors)

  7. Acoustic displacement triangle based on the individual element test

    Directory of Open Access Journals (Sweden)

    S. Correa

    A three-node, displacement-based acoustic element is developed. In order to avoid spurious rotational modes, a higher-order stiffness is introduced. This higher-order stiffness is developed from an incompatible strain field which computes element volume changes under nodal rotational displacement fields. The higher-order strain resulting from the incompatible strain field satisfies the Individual Element Test (IET) requirements without affecting convergence. The higher-order stiffness is modulated, element by element, by a factor. As a result, the displacement-based formulation presented in this paper is capable of placing the spurious rotational modes above the range of the physical compressional modes that can be accurately calculated by the mesh.

  8. A Muskingum-based methodology for river discharge estimation and rating curve development under significant lateral inflow conditions

    Science.gov (United States)

    Barbetta, Silvia; Moramarco, Tommaso; Perumal, Muthiah

    2017-11-01

    Quite often the discharge at a site is estimated using the rating curve developed for that site, and its development requires river flow measurements, which are costly, tedious, and dangerous during severe floods. To circumvent the conventional rating curve development approach, Perumal et al. in 2007 and 2010 applied the Variable Parameter Muskingum Stage-hydrograph (VPMS) routing method for developing stage-discharge relationships, especially at those ungauged river sites where stage measurements and details of section geometry are available but discharge measurements are not made. The VPMS method enables the estimation of rating curves at ungauged river sites with acceptable accuracy, but its application is subject to the limitation of negligible lateral flow within the routing reach. To overcome this limitation, this study proposes an extension of the VPMS method, henceforth known as the VPMS-Lin method, enabling streamflow assessment even when significant lateral inflow occurs along the river reach considered for routing. The lateral inflow is estimated through the continuity equation expressed in the characteristic form, as advocated by Barbetta et al. in 2012. The VPMS-Lin method is tested on two rivers characterized by different geometric and hydraulic properties: 1) a 50 km reach of the Tiber River in central Italy and 2) a 73 km reach of the Godavari River in peninsular India. The study demonstrates that both the upstream and downstream discharge hydrographs are well reproduced, with a root mean square error equal on average to about 35 and 1700 m3 s-1 for the Tiber River and the Godavari River case studies, respectively. Moreover, simulation studies carried out on a stretch of the Tiber River using the one-dimensional hydraulic model MIKE11 and the VPMS-Lin model demonstrate the accuracy of the VPMS-Lin model, which besides enabling the estimation of streamflow, also enables the estimation of reach averaged
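
    For orientation, the classic constant-parameter Muskingum recurrence that the VPMS family generalizes can be sketched in a few lines. This is a generic illustration with invented parameters, not the VPMS-Lin scheme itself (which lets the parameters vary with flow and adds a lateral-inflow term):

```python
def muskingum_route(inflow, k, x, dt):
    """Route an inflow hydrograph through a reach with the classic
    constant-parameter Muskingum method:
        O[t+1] = c0*I[t+1] + c1*I[t] + c2*O[t],
    where k is the storage constant, x the weighting factor (0..0.5),
    and dt the routing time step (same time units as k)."""
    d = k * (1.0 - x) + dt / 2.0
    c0 = (dt / 2.0 - k * x) / d
    c1 = (dt / 2.0 + k * x) / d
    c2 = (k * (1.0 - x) - dt / 2.0) / d   # note: c0 + c1 + c2 == 1
    outflow = [inflow[0]]                 # assume an initial steady state
    for t in range(len(inflow) - 1):
        outflow.append(c0 * inflow[t + 1] + c1 * inflow[t] + c2 * outflow[t])
    return outflow
```

    Routing a triangular flood wave with these coefficients attenuates and delays the peak, which is the reach behavior that the stage-routing methods above exploit.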

  9. Accounting for Proof Test Data in a Reliability Based Design Optimization Framework

    Science.gov (United States)

    Ventor, Gerharad; Scotti, Stephen J.

    2012-01-01

    This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.
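
    The way a passed proof test truncates the strength population, and hence raises in-service reliability, can be illustrated with a small Monte Carlo sketch. The normal distributions, loads, and units below are invented for illustration; this is not the authors' optimization procedure:

```python
import random

def failure_rates(n=200_000, proof_load=95.0, seed=42):
    """Monte Carlo estimate of in-service failure probability with and
    without a proof-test screen. Strengths ~ N(100, 10) and service
    loads ~ N(80, 8) in notional units; a unit enters service only if
    its strength is at least the proof load."""
    rng = random.Random(seed)
    strengths = [rng.gauss(100.0, 10.0) for _ in range(n)]
    loads = [rng.gauss(80.0, 8.0) for _ in range(n)]

    # No screening: every unit sees its service load.
    p_all = sum(s < l for s, l in zip(strengths, loads)) / n

    # Screening: only proof-test survivors enter service.
    survivors = [(s, l) for s, l in zip(strengths, loads) if s >= proof_load]
    p_passed = sum(s < l for s, l in survivors) / len(survivors)
    return p_all, p_passed
```

    Because the screened population is more reliable at the same nominal design, the designer can afford a lighter component at the same target reliability, which is the weight-saving lever the abstract describes.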

  10. Space Launch System Base Heating Test: Environments and Base Flow Physics

    Science.gov (United States)

    Mehta, Manish; Knox, Kyle S.; Seaford, C. Mark; Dufrene, Aaron T.

    2016-01-01

    The NASA Space Launch System (SLS) vehicle is composed of four RS-25 liquid oxygen-hydrogen rocket engines in the core-stage and two 5-segment solid rocket boosters, and as a result six hot supersonic plumes interact within the aft section of the vehicle during flight. Due to the complex nature of rocket plume-induced flows within the launch vehicle base during ascent and a new vehicle configuration, sub-scale wind tunnel testing is required to reduce SLS base convective environment uncertainty and design risk levels. This hot-fire test program was conducted at the CUBRC Large Energy National Shock (LENS) II short-duration test facility to simulate flight from altitudes of 50 kft to 210 kft. The test program is a challenging and innovative effort that has not been attempted in 40+ years for a NASA vehicle. This presentation discusses the various trends of base convective heat flux and pressure as a function of altitude at various locations within the core-stage and booster base regions of the two-percent SLS wind tunnel model. In-depth understanding of the base flow physics is presented using the test data, infrared high-speed imaging and theory. The normalized test design environments are compared to various NASA semi-empirical numerical models to determine exceedance and conservatism of the flight-scaled test-derived base design environments. Brief discussion of thermal impact to the launch vehicle base components is also presented.

  11. Pathway-based network analysis of myeloma tumors: monoclonal gammopathy of unknown significance, smoldering multiple myeloma, and multiple myeloma.

    Science.gov (United States)

    Dong, L; Chen, C Y; Ning, B; Xu, D L; Gao, J H; Wang, L L; Yan, S Y; Cheng, S

    2015-08-14

    Although many studies have been carried out on monoclonal gammopathy of unknown significance (MGUS), smoldering multiple myeloma (SMM), and multiple myeloma (MM), their classification and underlying pathogenesis are far from elucidated. To discover the relationships among MGUS, SMM, and MM at the transcriptome level, differentially expressed genes in MGUS, SMM, and MM were identified by the rank product method, and then co-expression networks were constructed by integrating the data. Finally, a pathway-network was constructed based on Kyoto Encyclopedia of Genes and Genomes pathway enrichment analysis, and the relationships between the pathways were identified. The results indicated that there were 55, 78, and 138 pathways involved in the myeloma tumor developmental stages of MGUS, SMM, and MM, respectively. The biological processes identified therein were found to have a close relationship with the immune system. Processes and pathways related to the abnormal activity of DNA and RNA were also present in SMM and MM. Six common pathways were found in the whole process of myeloma tumor development. Nine pathways were shown to participate in the progression of MGUS to SMM, and prostate cancer was the sole pathway that was involved only in MGUS and MM. Pathway-network analysis might provide a new indicator for the developmental stage diagnosis of myeloma tumors.
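
    Pathway enrichment analysis of the kind used here typically reduces to an over-representation test. A minimal hypergeometric sketch (a generic formulation, not the authors' pipeline) asks how surprising it is to see a given number of pathway genes among the differentially expressed genes:

```python
from math import comb

def enrichment_p(universe, pathway, de_genes, overlap):
    """One-sided hypergeometric P(X >= overlap): the probability of drawing
    at least `overlap` pathway genes when `de_genes` genes are sampled
    without replacement from a universe of `universe` genes, `pathway` of
    which belong to the pathway."""
    upper = min(pathway, de_genes)
    total = comb(universe, de_genes)
    return sum(
        comb(pathway, i) * comb(universe - pathway, de_genes - i)
        for i in range(overlap, upper + 1)
    ) / total
```

    A pathway would then be reported as involved in MGUS, SMM, or MM when this p-value, after multiple-testing correction, falls below the chosen threshold.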

  12. Computer Based Test Untuk Seleksi Masuk Politeknik Negeri Bengkalis

    OpenAIRE

    Agus Tedyyana; Danuri Danuri

    2017-01-01

    Abstract: The selection of prospective new students can be carried out with a Computer Based Test (CBT) application. The methods used include data collection techniques, system analysis, design modeling, implementation, and testing. This research produced a CBT application in which the questions presented are drawn from a question bank through a randomization process that never presents the same question twice, using the Fisher-Yates Shuffle method. To secure the question information while connected to the network, the ...
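
    The Fisher-Yates Shuffle mentioned above is the standard O(n) uniform shuffle. A minimal sketch of using it to draw a non-repeating question set (illustrative only, not the application's code) looks like:

```python
import random

def draw_questions(bank, k, seed=None):
    """Shuffle a copy of the question bank in place with Fisher-Yates and
    return the first k questions; no question can appear twice."""
    rng = random.Random(seed)
    items = list(bank)
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)              # pick from the not-yet-fixed prefix
        items[i], items[j] = items[j], items[i]
    return items[:k]
```

    Taking the first k items of a uniformly shuffled bank guarantees distinct questions within one test session, which is the duplicate-avoidance property the abstract describes.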

  13. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling

    OpenAIRE

    Johnson, S. D.; Groff, E.

    2014-01-01

    Objectives: The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity—agent-based computational modeling—that may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interes...

  14. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    Science.gov (United States)

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs, not without its own issues, may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.
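
    To give a flavor of the approach, here is a deliberately tiny routine-activity-style ABM sketch. It is entirely hypothetical, not a model from either review: offenders and guardians wander a toroidal grid, and a "crime" is counted whenever an offender occupies a target cell with no guardian present.

```python
import random

def run_abm(n_offenders=20, n_targets=20, n_guardians=10,
            size=10, steps=200, seed=7):
    """Count offender-target convergences without a guardian on a torus grid."""
    rng = random.Random(seed)

    def place():
        return (rng.randrange(size), rng.randrange(size))

    def step(pos):
        x, y = pos
        return ((x + rng.choice([-1, 0, 1])) % size,
                (y + rng.choice([-1, 0, 1])) % size)

    offenders = [place() for _ in range(n_offenders)]
    targets = {place() for _ in range(n_targets)}        # targets stay put
    guardians = [place() for _ in range(n_guardians)]

    crimes = 0
    for _ in range(steps):
        offenders = [step(p) for p in offenders]
        guardians = [step(p) for p in guardians]
        guarded = set(guardians)
        crimes += sum(1 for p in offenders if p in targets and p not in guarded)
    return crimes
```

    Sweeping a parameter such as n_guardians while holding everything else fixed is the kind of controlled "experiment" the article argues ABMs make possible for theory testing.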

  15. Self-testing protocols based on the chained Bell inequalities

    International Nuclear Information System (INIS)

    Šupić, I; Augusiak, R; Salavrakos, A; Acín, A

    2016-01-01

    Self-testing is a device-independent technique based on non-local correlations whose aim is to certify the effective uniqueness of the quantum state and measurements needed to produce these correlations. It is known that the maximal violation of some Bell inequalities suffices for this purpose. However, most of the existing self-testing protocols for two devices exploit the well-known Clauser–Horne–Shimony–Holt Bell inequality or modifications of it, and always with two measurements per party. Here, we generalize the previous results by demonstrating that one can construct self-testing protocols based on the chained Bell inequalities, defined for two devices implementing an arbitrary number of two-output measurements. On the one hand, this proves that the quantum state and measurements leading to the maximal violation of the chained Bell inequality are unique. On the other hand, in the limit of a large number of measurements, our approach allows one to self-test the entire plane of measurements spanned by the Pauli matrices X and Z. Our results also imply that the chained Bell inequalities can be used to certify two bits of perfect randomness. (paper)
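
    The gap between classical and quantum values of the chained Bell expressions can be checked numerically. The sketch below uses the standard textbook construction (not the paper's proof): for a maximally entangled state with real-plane measurements, correlations take the form E(a, b) = cos(a - b); the classical bound is 2n - 2, while equally spaced angles attain the quantum value 2n cos(pi/2n), recovering 2*sqrt(2) for the CHSH case n = 2.

```python
import math

def chained_bell_quantum(n):
    """Quantum value of the chained Bell expression with n two-outcome
    measurements per party, using E(a, b) = cos(a - b) for a maximally
    entangled state and equally spaced measurement angles."""
    a = [k * math.pi / n for k in range(n)]            # Alice's angles
    b = [(k + 0.5) * math.pi / n for k in range(n)]    # Bob's angles
    E = lambda x, y: math.cos(x - y)
    s = sum(E(a[k], b[k]) for k in range(n))
    s += sum(E(a[k + 1], b[k]) for k in range(n - 1))
    s -= E(a[0], b[n - 1])                             # the sign-flipped term
    return s

def chained_bell_classical(n):
    """Local-hidden-variable bound of the same expression."""
    return 2 * n - 2
```

    As n grows, the quantum value 2n cos(pi/2n) approaches the algebraic maximum 2n, which underlies the randomness-certification property mentioned at the end of the abstract.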

  16. The Test Reactor Embrittlement Data Base (TR-EDB)

    International Nuclear Information System (INIS)

    Stallmann, F.W.; Kam, F.B.K.; Wang, J.A.

    1993-01-01

    The Test Reactor Embrittlement Data Base (TR-EDB) is part of an ongoing program to collect test data from materials irradiations to aid in the research and evaluation of embrittlement prediction models that are used to assure the safety of pressure vessels in power reactors. This program is being funded by the US Nuclear Regulatory Commission (NRC) and has resulted in the publication of the Power Reactor Embrittlement Data Base (PR-EDB), whose second version is currently being released. The TR-EDB is a compatible collection of data from experiments in materials test reactors. These data contain information that is not obtainable from surveillance results, especially about the effects of annealing after irradiation. Other information that is only available from test reactors is the influence of fluence rates and irradiation temperatures on radiation embrittlement. The first version of the TR-EDB will be released in the fall of 1993 and contains published results from laboratories in many countries. Data collection will continue and further updates will be published.

  17. Feasibility and willingness-to-pay for integrated community-based tuberculosis testing

    Directory of Open Access Journals (Sweden)

    Vickery Carter

    2011-11-01

    Full Text Available Abstract Background Community-based screening for TB, combined with HIV and syphilis testing, faces a number of barriers. One significant barrier is the value that target communities place on such screening. Methods Integrated testing for TB, HIV, and syphilis was performed in neighborhoods identified using geographic information systems-based disease mapping. TB testing included skin testing and interferon gamma release assays. Subjects completed a survey describing disease risk factors, healthcare access, healthcare utilization, and willingness to pay for integrated testing. Results Behavioral and social risk factors among the 113 subjects were prevalent (71% prior incarceration, 27% prior or current crack cocaine use, 35% homelessness), and only 38% had a regular healthcare provider. The initial 24 subjects reported that they would be willing to pay a median $20 (IQR: 0-100) for HIV testing and $10 (IQR: 0-100) for TB testing when the question was asked in an open-ended fashion, but when the question was changed to a multiple-choice format, the next 89 subjects reported that they would pay a median $5 for testing, and 23% reported that they would either not pay anything to get tested or would need to be paid $5 to get tested for TB, HIV, or syphilis. Among persons who received tuberculin skin testing, only 14/78 (18%) participants returned to have their skin tests read. Only 14/109 (13%) persons who underwent HIV testing returned to receive their HIV results. Conclusion The relatively high-risk persons screened in this community outreach study placed low value on testing. Reported willingness to pay for such testing, while low, likely overestimated the true willingness to pay. Successful TB, HIV, and syphilis integrated testing programs in high risk populations will likely require one-visit diagnostic testing and incentives.

  18. Effects of computer-based immediate feedback on foreign language listening comprehension and test-associated anxiety.

    Science.gov (United States)

    Lee, Shu-Ping; Su, Hui-Kai; Lee, Shin-Da

    2012-06-01

    This study investigated the effects of immediate feedback on computer-based foreign language listening comprehension tests and on intrapersonal test-associated anxiety in 72 English major college students at a Taiwanese University. Foreign language listening comprehension of computer-based tests designed by MOODLE, a dynamic e-learning environment, with or without immediate feedback together with the state-trait anxiety inventory (STAI) were tested and repeated after one week. The analysis indicated that immediate feedback during testing caused significantly higher anxiety and resulted in significantly higher listening scores than in the control group, which had no feedback. However, repeated feedback did not affect the test anxiety and listening scores. Computer-based immediate feedback did not lower debilitating effects of anxiety but enhanced students' intrapersonal eustress-like anxiety and probably improved their attention during listening tests. Computer-based tests with immediate feedback might help foreign language learners to increase attention in foreign language listening comprehension.

  19. Comparison of Resistance-Based Walking Cardiorespiratory Test to The Bruce Protocol.

    Science.gov (United States)

    Hurt, Christopher P; Bamman, Marcas; Naidu, Avantika; Brown, David A

    2017-12-11

    Cardiorespiratory fitness is assessed through graded exercise tests that determine the maximum amount of sustained mechanical work that an individual can perform while also providing health- and fitness-related information. This manuscript describes a novel method to perform graded exercise tests that uses posteriorly directed resistive forces. The purpose of this investigation is to validate a novel resistance-based test against a traditional speed- and incline-based test in a cohort of non-impaired individuals. Twenty non-impaired individuals participated: 8 males, 12 females, mean age 28.4 ± 9.6, range 20-54 years old. Participants performed two maximal exercise tests. The speed- and incline-based test used the Bruce protocol and increased treadmill incline and speed every three minutes. The resistance-based test used a robotic device interfaced with the treadmill that provided specified horizontal resistive forces at the center of mass, calculated to match each Bruce Protocol stage, while individuals walked at 1.1 m/s. Participants obtained ~3% higher maximum V̇O2 using the speed- and incline-based method (dependent t-test, p = 0.08). V̇O2 peaks were strongly correlated between tests (r = 0.93). We found a significant linear relationship between mass-specific work rate and measured V̇O2 stage-by-stage for both tests, but no significant difference between each linear fit (p = 0.84). These data suggest horizontal resistive forces while walking on a treadmill can be used to increase aerobic effort in a way that closely simulates work rates of the Bruce Protocol.
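
    The stage-by-stage comparison of fitted lines can be reproduced with an ordinary least-squares sketch. The work-rate/V̇O2 pairs below are invented for illustration, not the study's data; the point being mimicked is that the two protocols' fitted slopes come out close:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y ~ slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical stage-wise mass-specific work rates (W/kg) and VO2 (mL/kg/min)
work_speed  = [1.0, 2.0, 3.0, 4.0, 5.0]
vo2_speed   = [14.0, 21.0, 28.5, 35.5, 42.5]
work_resist = [1.1, 2.1, 2.9, 4.2, 5.1]
vo2_resist  = [15.0, 22.0, 27.5, 37.0, 43.0]
```

    A formal test would compare the two regressions' slopes (as the authors did, finding p = 0.84); the sketch only shows the per-protocol fits.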

  20. Inquiry-Based Instruction and High Stakes Testing

    Science.gov (United States)

    Cothern, Rebecca L.

    Science education is a key to economic success for a country in terms of promoting advances in national industry and technology and maximizing competitive advantage in a global marketplace. The December 2010 Program for International Student Assessment (PISA) ranked the United States 23rd of 65 countries in science. That dismal standing in science proficiency impedes the ability of American school graduates to compete in the global market place. Furthermore, the implementation of high stakes testing in science mandated by the 2007 No Child Left Behind (NCLB) Act has created an additional need for educators to find effective science pedagogy. Research has shown that inquiry-based science instruction is one of the predominant science instructional methods. Inquiry-based instruction is a multifaceted teaching method with its theoretical foundation in constructivism. A correlational survey research design was used to determine the relationship between levels of inquiry-based science instruction and student performance on a standardized state science test. A self-report survey, using a Likert-type scale, was completed by 26 fifth grade teachers. Participants' responses were analyzed and grouped as high, medium, or low level inquiry instruction. The unit of analysis for the achievement variable was the student scale score average from the state science test. Spearman's Rho correlation data showed a positive relationship between the level of inquiry-based instruction and student achievement on the state assessment. The findings can assist teachers and administrators by providing additional research on the benefits of the inquiry-based instructional method. Implications for positive social change include increases in student proficiency and decision-making skills related to science policy issues which can help make them more competitive in the global marketplace.
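
    The Spearman's Rho statistic used in this study is Pearson correlation computed on ranks. A stdlib sketch with average ranks for ties (the data in the test are invented, not the survey's) looks like:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation, assigning average ranks to tied values."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        result = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1      # average rank for the tie group
            for k in range(i, j + 1):
                result[order[k]] = avg_rank
            i = j + 1
        return result

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

    A positive rho between inquiry level and scale-score average is the monotone association the study reports; significance would additionally require a p-value for the observed rho at n = 26.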

  1. Standard Test Method for Preparing Aircraft Cleaning Compounds, Liquid Type, Water Base, for Storage Stability Testing

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 This test method covers the determination of the stability in storage, of liquid, water-base chemical cleaning compounds, used to clean the exterior surfaces of aircraft. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  2. Life estimation of I and C cable insulation materials based on accelerated life testing

    International Nuclear Information System (INIS)

    Santhosh, T.V.; Ramteke, P.K.; Shrestha, N.B.; Ahirwar, A.K.; Gopika, V.

    2016-01-01

    Accelerated life tests are becoming increasingly popular in today's industry due to the need for obtaining life data quickly and reliably. Life testing of products under higher stress levels without introducing additional failure modes can provide significant savings of both time and money. Correct analysis of data gathered via such accelerated life testing will yield parameters and other information for the product's life under use stress conditions. To be of practical use in assessing the operational behaviour of cables in NPPs, laboratory ageing aims to mimic the type of degradation observed under operational conditions. Conditions of testing therefore need to be carefully chosen to ensure that the degradation mechanisms occurring in the accelerated tests are similar to those which occur in service. This paper presents the results of an investigation in which elongation-at-break (EAB) measurements were carried out on a typical control cable to predict the mean life at service conditions. A low voltage polyvinyl chloride (PVC) insulated and PVC sheathed control cable, used in NPP instrumentation and control (I and C) applications, was subjected to thermal ageing at three elevated temperatures.
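
    Extrapolating from elevated-temperature ageing to service temperature is commonly done with an Arrhenius model: the time to reach an endpoint (e.g. a threshold EAB) satisfies ln(t) = ln(A) + Ea/(kB*T), so a line fitted through the accelerated points in (1/T, ln t) space can be read off at the service temperature. The sketch below uses synthetic times-to-endpoint and an assumed activation energy, not the paper's data:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def fit_arrhenius(temps_c, lives_h):
    """Least-squares fit of ln(life) = ln(A) + Ea/(kB*T).
    Returns (ea_ev, ln_a): activation energy in eV and intercept ln(A)."""
    xs = [1.0 / (K_B * (t + 273.15)) for t in temps_c]
    ys = [math.log(l) for l in lives_h]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    ea = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
          sum((x - mx) ** 2 for x in xs))
    return ea, my - ea * mx

def predict_life(ea_ev, ln_a, temp_c):
    """Extrapolate life (hours) to another temperature via the fitted model."""
    return math.exp(ln_a + ea_ev / (K_B * (temp_c + 273.15)))
```

    With ageing at, say, three elevated temperatures as in the paper, the fitted Ea and intercept give the predicted mean life at the (much lower) service temperature; the validity of the extrapolation rests on the degradation mechanism staying the same, as the abstract stresses.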

  3. Effect modification of air pollution on Urinary 8-Hydroxy-2'-Deoxyguanosine by genotypes: an application of the multiple testing procedure to identify significant SNP interactions

    Directory of Open Access Journals (Sweden)

    Christiani David C

    2010-12-01

    Full Text Available Abstract Background Air pollution is associated with adverse human health, but the mechanisms through which pollution exerts effects remain to be clarified. One suggested pathway is that pollution causes oxidative stress. If so, oxidative stress-related genotypes may modify the oxidative response defenses to pollution exposure. Methods We explored the potential pathway by examining whether an array of oxidative stress-related genes (twenty single nucleotide polymorphisms, SNPs, in nine genes) modified associations of pollutants (organic carbon (OC), ozone, and sulfate) with urinary 8-hydroxy-2'-deoxyguanosine (8-OHdG), a biomarker of oxidative stress, among 320 aging men. We used a Multiple Testing Procedure in R, modified by our team, to identify the significance of the candidate genes adjusting for a priori covariates. Results We found that glutathione S-transferase P1 (GSTP1, rs1799811), GSTM1, catalase (rs2284367), and group-specific component (GC, rs2282679, rs1155563) significantly or marginally significantly modified effects of OC and/or sulfate, with larger effects among those carrying the wild type of GSTP1 and catalase, the non-wild type of GC, and the non-null GSTM1. Conclusions Polymorphisms of oxidative stress-related genes modified effects of OC and/or sulfate on 8-OHdG, suggesting that effects of OC or sulfate on 8-OHdG and other endpoints may be through the oxidative stress pathway.
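
    The multiple-testing adjustment at the heart of the Methods can be illustrated with a standard Benjamini-Hochberg step-up sketch. This is a generic FDR-controlling procedure, not the team's modified R routine:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return the (sorted) indices of hypotheses rejected at FDR level alpha
    using the Benjamini-Hochberg step-up procedure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    n_reject = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * alpha:
            n_reject = rank          # step-up: keep the largest passing rank
    return sorted(order[:n_reject])
```

    All SNP-by-pollutant interaction p-values would be fed in together, so that the expected share of false positives among the reported gene modifiers stays below alpha.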

  4. Application of risk-based methods to inservice testing of check valves

    Energy Technology Data Exchange (ETDEWEB)

    Closky, N.B.; Balkey, K.R.; McAllister, W.J. [and others]

    1996-12-01

    Research efforts have been underway in the American Society of Mechanical Engineers (ASME) and industry to define appropriate methods for the application of risk-based technology in the development of inservice testing (IST) programs for pumps and valves in nuclear steam supply systems. This paper discusses a pilot application of these methods to the inservice testing of check valves in the emergency core cooling system of Georgia Power's Vogtle nuclear power station. The results of the probabilistic safety assessment (PSA) are used to divide the check valves into risk-significant and less-risk-significant groups. This information is reviewed by a plant expert panel along with the consideration of appropriate deterministic insights to finally categorize the check valves into more safety-significant and less safety-significant component groups. All of the more safety-significant check valves are further evaluated in detail using a failure modes and causes analysis (FMCA) to assist in defining effective IST strategies. A template has been designed to evaluate how effective current and emerging tests for check valves are in detecting failures or in finding significant conditions that are precursors to failure for the likely failure causes. This information is then used to design and evaluate appropriate IST strategies that consider both the test method and frequency. A few of the less safety-significant check valves are also evaluated using this process since differences exist in check valve design, function, and operating conditions. Appropriate test strategies are selected for each check valve that has been evaluated based on safety and cost considerations. Test strategies are inferred from this information for the other check valves based on similar check valve conditions. Sensitivity studies are performed using the PSA model to arrive at an overall IST program that maintains or enhances safety at the lowest achievable cost.

  5. Integrating knowledge-based techniques into well-test interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, I.W.; Fraser, J.L. [Artificial Intelligence Applications Inst., Edinburgh (United Kingdom)]

    1995-04-01

    The goal of the Spirit Project was to develop a prototype of next-generation well-test-interpretation (WTI) software that would include knowledge-based decision support for the WTI model selection task. This paper describes how Spirit makes use of several different types of information (pressure, seismic, petrophysical, geological, and engineering) to support the user in identifying the most appropriate WTI model. Spirit's knowledge-based approach to type-curve matching is to generate several different feasible interpretations by making assumptions about the possible presence of both wellbore storage and late-time boundary effects. Spirit fuses information from type-curve matching and other data sources by use of a knowledge-based decision model developed in collaboration with a WTI expert. The sponsors of the work have judged the resulting prototype system a success.

  6. The significance of screening for microvascular diseases in Chinese community-based subjects with various metabolic abnormalities.

    Directory of Open Access Journals (Sweden)

    Can Pang

    Full Text Available BACKGROUND: To assess the association of albuminuria and retinopathy with metabolic syndrome (MetS) and the related metabolic components defined by various criteria in Chinese community-based subjects. METHODS: A total of 3240 Chinese subjects were recruited from urban communities and classified into subgroups with isolated or concomitant states of the two microvascular diseases. MetS was defined according to the standards of the International Diabetes Federation, the National Cholesterol Education Program's Adult Treatment Panel III, and the Chinese Diabetes Society (CDS), separately. Albuminuria was defined as an elevated morning urine albumin-to-creatinine ratio. Retinopathy was identified with nonmydriatic retinal photographs according to the Diabetic Retinopathy Disease Severity Scale. Logistic regression was performed to analyze the contributive risk factors. RESULTS: The subgroup with isolated retinopathy was the oldest (P<0.05), with higher blood pressure (P<0.001) and larger waist circumference (P<0.05). After adjusting for age, sex and other metabolic components, individuals with blood pressure over 130/85 mmHg were prone to have isolated albuminuria (OR: 1.51, P = 0.0001); while individuals with fasting plasma glucose over 5.6 mmol/L were at high risk of retinopathy concomitant with albuminuria (OR: 3.04, P = 0.006). Larger waist circumference was a potential risk factor for isolated albuminuria and isolated retinopathy, though not significant after further adjustment for other metabolic components. The risk for albuminuria and retinopathy increased with the aggregation of three or more metabolic components. However, MetS per se did not have a synergistic effect, and only the MetS defined by CDS remained as a risk factor. CONCLUSIONS: Albuminuria and retinopathy were highly associated with accumulated metabolic abnormalities, including sub-clinically elevated blood pressure and elevated fasting plasma glucose.

  7. Immunogenic Cell Death Induced by Ginsenoside Rg3: Significance in Dendritic Cell-based Anti-tumor Immunotherapy.

    Science.gov (United States)

    Son, Keum-Joo; Choi, Ki Ryung; Lee, Seog Jae; Lee, Hyunah

    2016-02-01

    Cancer is one of the leading causes of morbidity and mortality worldwide; therefore there is a need to discover new therapeutic modules with improved efficacy and safety. Immune (cell) therapy is a promising therapeutic strategy for the treatment of intractable cancers. The effectiveness of certain chemotherapeutics in inducing immunogenic tumor cell death, thus promoting cancer eradication, has been reported. Ginsenoside Rg3 is a ginseng saponin that has antitumor and immunomodulatory activity. In this study, we treated tumor cells with Rg3 to verify the significance of inducing immunogenic tumor cell death in antitumor therapy, especially in DC-based immunotherapy. Rg3 killed both immunogenic (B16F10 melanoma cells) and non-immunogenic (LLC: Lewis Lung Carcinoma cells) tumor cells by inducing apoptosis. Surface expression of immunogenic death markers, including calreticulin and heat shock proteins, and the transcription of relevant genes were increased in tumor cells dying after Rg3 treatment. Increased calreticulin expression was directly related to the uptake of dying tumor cells by dendritic cells (DCs): the proportion of CRT(+) CD11c(+) cells was increased in the Rg3-treated group. Interestingly, tumor cells dying by immunogenic cell death secreted IFN-γ, an effector molecule for antitumor activity in T cells. Along with the Rg3-induced suppression of pro-angiogenic (TNF-α) and immunosuppressive cytokine (TGF-β) secretion, IFN-γ production from the Rg3-treated tumor cells may also indicate Rg3 as an effective anticancer immunotherapeutic strategy. The data clearly suggest that Rg3 induced immunogenic tumor cell death through its cytotoxic effect and its ability to induce DC function. This indicates that Rg3 may be an effective immunotherapeutic strategy.

  8. Significance and suppression of redundant IL17 responses in acute allograft rejection by bioinformatics based drug repositioning of fenofibrate.

    Directory of Open Access Journals (Sweden)

    Silke Roedder

    Full Text Available Despite advanced immunosuppression, redundancy in the molecular diversity of acute rejection (AR) often results in incomplete resolution of the injury response. We present a bioinformatics-based approach for identification of these redundant molecular pathways in AR and a drug repositioning approach to suppress these using FDA-approved drugs currently available for non-transplant indications. Two independent microarray data-sets from human renal allograft biopsies (n = 101) from patients predominantly on Th1/IFN-γ immune response targeted immunosuppression, with and without AR, were profiled. Using gene-set analysis across 3305 biological pathways, significant enrichment was found for the IL17 pathway in AR in both data-sets. Recent evidence suggests the IL17 pathway as an important escape mechanism when Th1/IFN-γ mediated responses are suppressed. As current immunosuppressions do not specifically target the IL17 axis, 7200 molecular compounds were interrogated for FDA-approved drugs with specific inhibition of this axis. A combined IL17/IFN-γ suppressive role was predicted for the antilipidemic drug Fenofibrate. To assess the immunoregulatory action of Fenofibrate, we conducted in-vitro treatment of anti-CD3/CD28 stimulated human peripheral blood cells (PBMC), and, as predicted, Fenofibrate reduced IL17 and IFN-γ gene expression in stimulated PBMC. In-vivo Fenofibrate treatment of an experimental rodent model of cardiac AR reduced infiltration of total leukocytes, reduced expression of IL17/IFN-γ and their pathway related genes in allografts and recipients' spleens, and extended graft survival by 21 days (p < 0.007). In conclusion, this study provides important proof of concept that meta-analyses of genomic data and drug databases can provide new insights into the redundancy of the rejection response and presents an economic methodology to reposition FDA-approved drugs in organ transplantation.

  9. Performance-based alternative assessments as a means of eliminating gender achievement differences on science tests

    Science.gov (United States)

    Brown, Norman Merrill

    1998-09-01

    Historically, researchers have reported an achievement difference between females and males on standardized science tests. These differences have been attributed to science knowledge, abstract reasoning skills, mathematical abilities, and cultural and social phenomena. This research was designed to determine how mastery of specific science content from public school curricula might be evaluated with performance-based assessment models without producing gender achievement differences. The assessment instruments used were Harcourt Brace Educational Measurement's GOALS: A Performance-Based Measure of Achievement and the performance-based portion of the Stanford Achievement Test, Ninth Edition (SAT9). The identified independent variables were test, gender, ethnicity, and grade level. A 2 x 2 x 6 x 12 (test x gender x ethnicity x grade) factorial experimental design was used to organize the data. A stratified random sample (N = 2400) was selected from a national pool of norming data: N = 1200 from the GOALS group and N = 1200 from the SAT9 group. The ANOVA yielded mixed results. The factors of test, gender, ethnicity by grade, gender by grade, and gender by grade by ethnicity failed to produce significant results (alpha = 0.05). The factors yielding significant results were ethnicity, grade, and ethnicity by grade. Therefore, no significant differences were found between female and male achievement on these performance-based assessments.
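
    The factorial ANOVA at the heart of this analysis reduces, for a single factor, to the familiar F ratio of between-group to within-group variance. A minimal one-way sketch with hypothetical score data (not the study's norming sample):

    ```python
    def one_way_anova_F(groups):
        """F statistic for a one-way ANOVA over lists of group scores:
        F = MS_between / MS_within."""
        k = len(groups)
        n = sum(len(g) for g in groups)
        grand = sum(sum(g) for g in groups) / n
        means = [sum(g) / len(g) for g in groups]
        ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
        ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
        return (ss_between / (k - 1)) / (ss_within / (n - k))

    # Hypothetical mini-example: two groups of test scores.
    F = one_way_anova_F([[1, 2, 3], [2, 3, 4]])
    print(F)  # 1.5 on this toy data
    ```

    The F statistic is then compared against the F distribution with (k - 1, N - k) degrees of freedom at the chosen alpha level.
    
    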

  10. Artefacts of questionnaire-based psychological testing of drivers

    Directory of Open Access Journals (Sweden)

    Anna Łuczak

    2014-06-01

    Full Text Available Background: The purpose of this article is to draw attention to the significant role of the social approval variable in the questionnaire-based diagnosis of drivers' psychological aptitude. Material and Methods: Three questionnaires were used: the Formal Characteristics of Behavior - Temperament Inventory (FCB-TI), the Eysenck Personality Questionnaire (EPQ-R(S)) and the Impulsiveness Questionnaire (Impulsiveness, Venturesomeness, Empathy - IVE). Three groups of drivers were analyzed: professional "without crashes" (N = 46), nonprofessional "without crashes" (N = 75), and nonprofessional "with crashes" (N = 75). Results: Nonprofessional drivers "without crashes" stood apart significantly from the other drivers. Their personality profile, marked by the highest perseveration, emotional reactivity, neuroticism and impulsiveness and the lowest endurance, did not fit the requirements to be met by drivers. The safe-driver profile was characteristic of professional drivers (the lowest levels of perseveration, impulsiveness and neuroticism and the highest level of endurance). A similar profile occurred among nonprofessional drivers who were offenders in road crashes. Compared to the nonprofessional "without crashes" group, professional drivers and offenders in road crashes were also characterized by significantly higher scores on the Lie scale, which measures the need for social approval. This likely results from the study procedure, according to which the result of professional drivers' testing affected the possible continuity of their job, and that of nonprofessional drivers "with crashes" decided the possible recovery of their driving license. Conclusions: The social approval variable can be a significant artefact in psychological testing of drivers and can reduce the reliability of the results of questionnaire methods. Med Pr 2014;65(3):373–385

  11. Improved Accelerated Stress Tests Based on Fuel Cell Vehicle Data

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, Timothy [Research Engineer; Motupally, Sathya [Research Engineer

    2012-06-01

    UTC led a top-tier team of industry and national laboratory participants to update and improve DOE's Accelerated Stress Tests (ASTs) for hydrogen fuel cells. This in-depth investigation focused on critical fuel cell components (e.g., membrane electrode assemblies, MEAs) whose durability represented barriers to widespread commercialization of hydrogen fuel cell technology. UTC had access to MEA materials that had accrued significant load time under real-world conditions in PureMotion® 120 power plants used in transit buses. These materials are referred to as end-of-life (EOL) components in the rest of this document. Advanced characterization techniques were used to evaluate degradation mode progress in these critical cell components extracted from both bus power plants and corresponding materials tested using the DOE ASTs. The same techniques were applied to samples at beginning-of-life (BOL) to serve as a baseline. These comparisons tracked the progress of the various failure modes to which these critical components were subjected, such as membrane degradation, catalyst support corrosion, platinum group metal dissolution, and others. Gaps in how well the existing ASTs predicted the degradation observed in the field, in terms of these modes, were outlined. Based on these gaps, new ASTs were recommended and tested to better reflect the degradation modes seen in field operation. Also, BOL components were degraded in a test vehicle at UTC designed to accelerate the bus field operation.

  12. Quality Testing of Artemisinin-Based Antimalarial Drugs in Myanmar.

    Science.gov (United States)

    Guo, Suqin; Kyaw, Myat Phone; He, Lishan; Min, Myo; Ning, Xiangxue; Zhang, Wei; Wang, Baomin; Cui, Liwang

    2017-10-01

    Artemisinin-based combination therapies are the frontline treatment of Plasmodium falciparum malaria. The circulation of falsified and substandard artemisinin-based antimalarials in Southeast Asia has been a major predicament for the malaria elimination campaign. To provide an update on this situation, we purchased 153 artemisinin-containing antimalarials, as convenience samples, in private drug stores from different regions of Myanmar. The artemisinin derivative content of these drugs was tested using derivative-specific dipsticks as point-of-care devices. A subset of the samples was further tested by high-performance liquid chromatography (HPLC). This survey identified that > 35% of the collected drugs were oral artesunate and artemether monotherapies. When tested with the dipsticks, all but one sample passed the assays, indicating that the detected artemisinin derivative content corresponded approximately to the labeled content. However, one artesunate injection sample was found by the dipstick assay and subsequent HPLC analysis to contain no active ingredient at all. The continued circulation of oral monotherapies and the description, for the first time, of falsified parenteral artesunate provide a worrisome picture of antimalarial drug quality in Myanmar during the malaria elimination phase, a situation that deserves more oversight from regulatory authorities.

  13. A Universal Motor Performance Test System Based on Virtual Instrument

    Directory of Open Access Journals (Sweden)

    Wei Li

    2014-09-01

    Full Text Available With the development of technology, universal motors play an increasingly important role in daily life and production; they are used in an ever wider range of fields and the requirements on them grow steadily. How to control the speed and monitor the real-time temperature of motors are key issues. The cost of a motor testing system based on a traditional technology platform is very high for several reasons. In this paper a universal motor performance test system based on virtual instrumentation is presented. The system achieves precise control of the motor speed and measures the real-time temperature of the motor bearing support in order to test general-purpose motor properties. Experimental results show that the system works stably in controlling the speed and monitoring the real-time temperature. It has advantages in speed, stability, cost and accuracy that traditional single-chip microcomputer (SCM) solutions cannot match. Besides, it is easy to expand and reconfigure.
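
    The closed-loop speed control described above can be sketched with a toy proportional-integral controller acting on a hypothetical first-order motor model. The gains and motor constants below are illustrative assumptions, not the paper's LabVIEW implementation:

    ```python
    def simulate_speed_control(setpoint_rpm, steps=2000, dt=0.001):
        """Discrete PI control of a toy first-order motor model:
        d(speed)/dt = (K * voltage - speed) / tau."""
        K, tau = 100.0, 0.05   # hypothetical motor gain and time constant
        kp, ki = 0.05, 2.0     # hypothetical controller gains
        speed, integral = 0.0, 0.0
        for _ in range(steps):
            error = setpoint_rpm - speed
            integral += error * dt           # integral action removes offset
            voltage = kp * error + ki * integral
            speed += dt * (K * voltage - speed) / tau  # Euler step of the plant
        return speed

    final = simulate_speed_control(1500.0)
    print(f"speed after 2 s: {final:.2f} rpm")
    ```

    After the transient dies out, the integral term drives the steady-state error to zero, which is the basic property a precise speed controller needs.
    
    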

  14. Prevailence of Patch Test Positivity with Some Bases

    Directory of Open Access Journals (Sweden)

    J S Pasricha

    1987-01-01

    Full Text Available To evaluate the suitability of some chemicals to act as bases for antigens in patch tests, patch tests were performed with these agents in patients having contact dermatitis. Propylene glycol used as such produced positive reactions in 25 (50%) patients, of which 12 were 2+ or more; polyethylene glycol 200 produced positive reactions in 9 (18%) cases, of which 4 were 2+ or more; a mixture of liquid paraffin and hard paraffin gave rise to positive reactions in 10 (10%) cases, 3 of these being 2+; a mixture of liquid paraffin and bees wax was positive in 14 (14%) cases, 3 of these being 2+; yellow petrolatum was positive in 4 (8%) cases, one of which was 2+; white petrolatum was positive in 5 (6%) cases, all of these being + reactions only; and glycerol gave rise to a 1+ reaction in only one (2%) case. In tropical countries, water should be used as the base for as many antigens as possible; for others, a control test with the base must be included.

  15. A computer vision based candidate for functional balance test.

    Science.gov (United States)

    Nalci, Alican; Khodamoradi, Alireza; Balkan, Ozgur; Nahab, Fatta; Garudadri, Harinath

    2015-08-01

    Balance in humans is a motor skill based on complex multimodal sensing, processing and control. The ability to maintain balance in activities of daily living (ADL) is compromised by aging, diseases, injuries and environmental factors. The Centers for Disease Control and Prevention (CDC) estimated the costs of falls among older adults at $34 billion in 2013, a figure expected to reach $54.9 billion in 2020. In this paper, we present a brief review of balance impairments followed by subjective and objective tools currently used in clinical settings for human balance assessment. We propose a novel computer vision (CV) based approach as a candidate for a functional balance test. The test takes less than a minute to administer and is expected to be objective, repeatable and highly discriminative in quantifying the ability to maintain posture and balance. We present an informal study with preliminary data from 10 healthy volunteers, and compare performance with a balance assessment system called the BTrackS Balance Assessment Board. Our results show a high degree of correlation with BTrackS. The proposed system promises to be a good candidate for objective functional balance tests and warrants further investigation to assess validity in clinical settings, including acute care, long-term care and assisted living care facilities. Our long-term goals include non-intrusive approaches to assess balance competence during ADL in independent living environments.
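
    One common way balance systems summarize posture is the total path length of a tracked trajectory (center of pressure on a force plate, or a body keypoint in a CV system). A minimal sketch with a hypothetical 4-sample trace; this is not the authors' algorithm:

    ```python
    from math import hypot

    def sway_path_length(trajectory):
        """Total path length of an (x, y) trajectory, a common
        postural-sway summary: sum of distances between successive samples."""
        return sum(hypot(x2 - x1, y2 - y1)
                   for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:]))

    # Hypothetical 4-sample trace in centimetres.
    print(sway_path_length([(0, 0), (3, 4), (3, 4), (6, 8)]))  # 10.0
    ```

    Larger path length over a fixed test duration indicates more sway, i.e. poorer postural control.
    
    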

  16. Evaluation of the routine antimicrobial susceptibility testing results of clinically significant anaerobic bacteria in a Slovenian tertiary-care hospital in 2015.

    Science.gov (United States)

    Jeverica, Samo; Kolenc, Urša; Mueller-Premru, Manica; Papst, Lea

    2017-10-01

    The aim of our study was to determine the antimicrobial susceptibility profiles of 2673 clinically significant anaerobic bacteria belonging to the major genera, isolated in 2015 in a large tertiary-care hospital in Slovenia. Species identification was performed by MALDI-TOF mass spectrometry. Antimicrobial susceptibility was determined immediately upon isolation of the strains against penicillin, co-amoxiclav, imipenem, clindamycin and metronidazole, using gradient diffusion methodology and EUCAST breakpoints. The most frequent anaerobes were the Bacteroides fragilis group with 31% (n = 817), Gram-positive anaerobic cocci (GPACs) with 22% (n = 589), Prevotella with 14% (n = 313) and Propionibacterium with 8% (n = 225). Metronidazole retained full activity (100%) against all groups of anaerobic bacteria intrinsically susceptible to it. Co-amoxiclav and imipenem were active against most tested anaerobes with zero or low resistance rates. However, the observed resistance to co-amoxiclav (8%) and imipenem (1%) is worrying, especially among B. fragilis group isolates. High overall resistance (23%) to clindamycin was detected in our study and was highest among the genera Prevotella, Bacteroides, Parabacteroides, GPACs and Clostridium. Routine testing of antimicrobial susceptibility of clinically relevant anaerobic bacteria is feasible and provides good surveillance data. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
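
    Gradient-diffusion results are interpreted by comparing the measured MIC against breakpoints. A minimal sketch of EUCAST-style S/I/R classification; the numeric breakpoints used in the example are hypothetical placeholders, and current EUCAST tables must be consulted for real interpretation:

    ```python
    def classify_mic(mic, s_breakpoint, r_breakpoint):
        """Classify an MIC (mg/L) against EUCAST-style breakpoints:
        susceptible if MIC <= S, resistant if MIC > R, else intermediate."""
        if mic <= s_breakpoint:
            return "S"
        if mic > r_breakpoint:
            return "R"
        return "I"

    # Hypothetical breakpoints for illustration only (S <= 4, R > 8 mg/L).
    print(classify_mic(2, 4, 8), classify_mic(6, 4, 8), classify_mic(16, 4, 8))
    ```

    Resistance rates such as the 8% to co-amoxiclav reported above are then simply the fraction of isolates classified "R" for that agent.
    
    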

  17. Optimized periodic verification testing blended risk and performance-based MOV inservice test program an application of ASME code case OMN-1

    Energy Technology Data Exchange (ETDEWEB)

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P. [and others

    1996-12-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.

  18. Optimized periodic verification testing blended risk and performance-based MOV inservice test program an application of ASME code case OMN-1

    International Nuclear Information System (INIS)

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.

    1996-01-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising

  19. The Art Gallery Test: A Preliminary Comparison between Traditional Neuropsychological and Ecological VR-Based Tests

    Directory of Open Access Journals (Sweden)

    Pedro Gamito

    2017-11-01

    Full Text Available Ecological validity should be the cornerstone of any assessment of cognitive functioning. For this purpose, we have developed a preliminary study to test the Art Gallery Test (AGT) as an alternative to traditional neuropsychological testing. The AGT involves three visual search subtests displayed in a virtual reality (VR) art gallery, designed to assess visual attention within an ecologically valid setting. To evaluate the relation between the AGT and standard neuropsychological assessment scales, data were collected on a normative sample of healthy adults (n = 30). The measures consisted of concurrent paper-and-pencil neuropsychological measures [Montreal Cognitive Assessment (MoCA), Frontal Assessment Battery (FAB), and Color Trails Test (CTT)] along with the outcomes from the three subtests of the AGT. The results showed significant correlations between the AGT subtests, which exercise different visual search strategies, and global and specific cognitive measures. Comparative visual search was associated with attention and cognitive flexibility (CTT), whereas visual searches involving pictograms correlated with global cognitive function (MoCA).

  20. Advances in p-Value Based Multiple Test Procedures.

    Science.gov (United States)

    Tamhane, Ajit C; Gou, Jiangtao

    2018-01-01

    In this article we review recent advances in p-value-based multiple test procedures (MTPs). We begin with a brief review of the basic tests of Bonferroni and Simes. Standard stepwise MTPs derived from them using the closure method of Marcus et al. (1976) are discussed next. They include the well-known MTPs of Holm (1979), Hochberg (1988) and Hommel (1988), and their extensions and improvements. This is followed by stepwise MTPs for a priori ordered hypotheses. Next we present gatekeeping MTPs (Dmitrienko and Tamhane, 2007) for hierarchically ordered families of hypotheses with logical relations among them. Finally, we give a brief review of the graphical approach (Bretz et al., 2009) to constructing and visualizing gatekeeping and other MTPs. Simple numerical examples are given to illustrate the various procedures.
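
    The Holm (1979) step-down procedure mentioned above can be sketched in a few lines; this is a minimal illustration, not an excerpt from the review:

    ```python
    def holm_rejections(pvalues, alpha=0.05):
        """Holm (1979) step-down test: sort p-values ascending, compare the
        (i+1)-th smallest to alpha/(m - i), and stop at the first failure.
        Returns the set of rejected hypothesis indices."""
        m = len(pvalues)
        order = sorted(range(m), key=lambda i: pvalues[i])
        rejected = set()
        for step, idx in enumerate(order):
            if pvalues[idx] <= alpha / (m - step):
                rejected.add(idx)
            else:
                break  # step-down: once one test fails, all later ones fail
        return rejected

    print(holm_rejections([0.01, 0.04, 0.03, 0.2]))  # {0}
    ```

    Holm uniformly improves on Bonferroni (which would use alpha/m throughout) while still controlling the familywise error rate.
    
    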

  1. Test-Driven, Model-Based Systems Engineering

    DEFF Research Database (Denmark)

    Munck, Allan

    Hearing systems have evolved over many years from simple mechanical devices (horns) to electronic units consisting of microphones, amplifiers, analog filters, loudspeakers, batteries, etc. Digital signal processors replaced analog filters to provide better performance end new features. Central....... This thesis concerns methods for identifying, selecting and implementing tools for various aspects of model-based systems engineering. A comprehensive method was proposed that include several novel steps such as techniques for analyzing the gap between requirements and tool capabilities. The method...... was verified with good results in two case studies for selection of a traceability tool (single-tool scenario) and a set of modeling tools (multi-tool scenarios). Models must be subjected to testing to allow engineers to predict functionality and performance of systems. Test-first strategies are known...

  2. Bond strength test of acrylic artificial teeth with prosthetic base

    Directory of Open Access Journals (Sweden)

    Erna Kurnikasari

    2008-07-01

    Full Text Available A denture consists of acrylic artificial teeth and an acrylic prosthesis base bonded chemically, with a bond strength of 315 kgF/cm2. Most commercial acrylic artificial teeth do not state their specifications, and none of them include mechanical data (bond strength). The aim of this study was to discover which acrylic artificial teeth meet ADA specification no. 15. This is a descriptive analytic study performed on 5 brands of posterior acrylic artificial teeth commonly used by dentists and technicians. From each brand, 3 sample teeth were taken. The acrylic artificial teeth were prepared into a rectangular shape and attached between an acrylic prosthesis base simulation and jigs. The samples were subjected to tensile load using a Universal Testing Machine. The force that caused each tooth to fracture was recorded and the bond strength was calculated. The results show the following average values for the five brands: Brand A, 125.993 kgF/cm2; B, 188.457 kgF/cm2; C, 175.880 kgF/cm2; D, 153.373 kgF/cm2; E, 82.839 kgF/cm2. The data were tested statistically using a one-way ANOVA test and a Dunnett test (alpha = 0.05). From the study, it is concluded that the five acrylic artificial teeth have a bond strength below ADA specification no. 15.

  3. Development Testing of 1-Newton ADN-Based Rocket Engines

    Science.gov (United States)

    Anflo, K.; Gronland, T.-A.; Bergman, G.; Nedar, R.; Thormählen, P.

    2004-10-01

    With the objective to reduce operational hazards and improve specific and density impulse as compared with hydrazine, the research and development (R&D) of a new monopropellant for space applications based on Ammonium DiNitramide (ADN) was first proposed in 1997. This pioneering work has been described in previous papers1,2,3,4. From the discussion above, it is clear that cost savings as well as risk reduction are the main drivers to develop a new generation of reduced-hazard propellants. However, this alone is not enough to convince a spacecraft builder to choose a new technology. Cost, risk and schedule reduction are good incentives, but a spacecraft supplier will ask for evidence that this new propulsion system meets a number of requirements within the following areas: 1. Performance & life; 2. Impact on spacecraft design & operation; 3. Flight heritage. Hereafter, the essential requirements for some of these areas are outlined. These issues are discussed in detail in a previous paper1. The use of "Commercial Off The Shelf" (COTS) propulsion system components as much as possible is essential to minimize the overall cost, risk and schedule. This paper describes the ongoing effort to develop a storable liquid monopropellant blend, based on ADN, and its specific rocket engines. After building and testing more than 20 experimental rocket engines, the first Engineering Model (EM-1) has now accumulated more than 1 hour of firing time. The results from test firings have validated the design. Specific impulse, combustion stability, blow-down capability and short pulse capability are amongst the requirements that have been demonstrated. The LMP-103x propellant candidate has been stored for more than 1 year and initial material compatibility screening and testing has started. This leads to the conclusion that Technology Readiness Level (TRL) 5 has been reached for the thruster and propellant, and that the concept of ADN-based propulsion is feasible.
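
    Specific impulse, one of the demonstrated requirements, follows from Isp = F / (mdot * g0). A minimal sketch with a hypothetical operating point for a 1 N class thruster; the numbers are illustrative, not measured engine data:

    ```python
    G0 = 9.80665  # standard gravity, m/s^2

    def specific_impulse(thrust_n, mass_flow_kg_s):
        """Specific impulse in seconds: Isp = F / (mdot * g0)."""
        return thrust_n / (mass_flow_kg_s * G0)

    # Hypothetical operating point: 1 N thrust at 0.5 g/s propellant flow.
    isp = specific_impulse(1.0, 0.5e-3)
    print(f"Isp = {isp:.1f} s")
    ```

    Higher Isp at a given thrust means less propellant mass for the same total impulse, which is why it is compared directly against hydrazine.
    
    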

  4. Powder-based 3D printing application for geomechanical testing

    Science.gov (United States)

    Williams, M.; Yoon, H.; Choens, R. C., II; Martinez, M. J.; Dewers, T. A.; Lee, M.

    2017-12-01

    3D printing of fractured and porous analog geomaterials has the potential to enhance hydrogeological and mechanical interpretations by generating engineered samples in testable configurations with reproducible microstructures and tunable surface and mechanical properties. For geoscience applications, 3D printing technology can be co-opted to print reproducible structures derived from CT imaging of actual rocks and from theoretical algorithms. In particular, the use of 3D-printed samples allows us to overcome the sample-to-sample heterogeneity that plagues rock physics testing and to test material response independently of material variability. In this work, gypsum powder-based 3D printing was used to print cylindrical core samples and block samples with a pre-existing flaw geometry. All samples were printed in three different directions to evaluate the impact of printing direction on mechanical properties. For the cylindrical samples, unconfined compression testing was performed. In compressive strength, the samples printed perpendicular to the loading direction were stronger than those printed parallel to the loading direction and at 45 degrees. Micro-CT images of the printed samples reveal uneven spreading of the binder, resulting in a soft inner core surrounded by a stronger outer shell. In particular, the layered feature of the binder causes strongly anisotropic properties. This was also confirmed by wave velocity measurements. For the small block samples (6.1 cm wide, 10 cm high, and 1.25 cm thick) with an inclined flaw, uniaxial tests coupled with an array of acoustic emission sensors and digital image correlation revealed that cracks developed at or near the tip of the flaw, as expected. Although acoustic events were detected, localization was not possible, mainly due to strong attenuation. The advantages and disadvantages of powder-based 3D printing for mechanical testing will be discussed, and a few attempts to improve the applicability of the powder-based printing technique will be presented.

  5. CAMAC based Test Signal Generator using Re-configurable device

    Science.gov (United States)

    Sharma, Atish; Raval, Tushar; Srivastava, Amit K.; Reddy, D. Chenna

    2010-02-01

    There are many different types of signal generators, with different purposes and applications (and at varying levels of expense). In general, no device is suitable for all possible applications. Hence the selection of signal generator is as per requirements. For SST-1 Data Acquisition System requirements, we have developed a CAMAC based Test Signal Generator module using Re-configurable device (CPLD). This module is based on CAMAC interface but can be used for testing both CAMAC and PXI Data Acquisition Systems in SST-1 tokamak. It can also be used for other similar applications. Unlike traditional signal generators, which are embedded hardware, it is a flexible hardware unit, programmable through Graphical User Interface (GUI) developed in LabVIEW application development tool. The main aim of this work is to develop a signal generator for testing our data acquisition interface for a large number of channels simultaneously. The module front panel has various connectors like LEMO and D type connectors for signal interface. The module can be operated either in continuous signal generation mode or in triggered mode depending upon application. This can be done either by front panel switch or through CAMAC software commands (for remote operation). Similarly module reset and trigger generation operation can be performed either through front panel push button switch or through software CAMAC commands. The module has the facility to accept external TTL level trigger and clock through LEMO connectors. The module can also generate trigger and the clock signal, which can be delivered to other devices through LEMO connectors. The module generates two types of signals: Analog and digital (TTL level). The analog output (single channel) is generated from Digital to Analog Converter through CPLD for various types of waveforms like Sine, Square, Triangular and other wave shape that can vary in amplitude as well as in frequency. The module is quite useful to test up to 32 channels.
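
    The waveform generation described above (sine, square, triangular, variable in amplitude) can be illustrated with a software lookup-table sketch of the kind a DAC would play back. This is a Python illustration, not the module's CPLD implementation:

    ```python
    from math import pi, sin

    def waveform_samples(shape, n=8, amplitude=1.0):
        """One period of a test waveform as n evenly spaced samples,
        in the spirit of a DAC lookup table (sine, square or triangular)."""
        samples = []
        for i in range(n):
            t = i / n  # phase within the period, in [0, 1)
            if shape == "sine":
                samples.append(amplitude * sin(2 * pi * t))
            elif shape == "square":
                samples.append(amplitude if t < 0.5 else -amplitude)
            elif shape == "triangular":
                samples.append(amplitude * (4 * t - 1) if t < 0.5
                               else amplitude * (3 - 4 * t))
            else:
                raise ValueError(f"unknown shape: {shape}")
        return samples

    print(waveform_samples("square", n=4))  # [1.0, 1.0, -1.0, -1.0]
    ```

    Varying the playback clock changes the output frequency, and scaling the amplitude parameter changes the peak level, mirroring the variable-amplitude, variable-frequency outputs described above.
    
    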

  6. CAMAC based Test Signal Generator using Re-configurable device

    International Nuclear Information System (INIS)

    Sharma, Atish; Raval, Tushar; Srivastava, Amit K; Reddy, D Chenna

    2010-01-01

    There are many different types of signal generators, with different purposes and applications (and at varying levels of expense). In general, no device is suitable for all possible applications. Hence the selection of signal generator is as per requirements. For SST-1 Data Acquisition System requirements, we have developed a CAMAC based Test Signal Generator module using Re-configurable device (CPLD). This module is based on CAMAC interface but can be used for testing both CAMAC and PXI Data Acquisition Systems in SST-1 tokamak. It can also be used for other similar applications. Unlike traditional signal generators, which are embedded hardware, it is a flexible hardware unit, programmable through Graphical User Interface (GUI) developed in LabVIEW application development tool. The main aim of this work is to develop a signal generator for testing our data acquisition interface for a large number of channels simultaneously. The module front panel has various connectors like LEMO and D type connectors for signal interface. The module can be operated either in continuous signal generation mode or in triggered mode depending upon application. This can be done either by front panel switch or through CAMAC software commands (for remote operation). Similarly module reset and trigger generation operation can be performed either through front panel push button switch or through software CAMAC commands. The module has the facility to accept external TTL level trigger and clock through LEMO connectors. The module can also generate trigger and the clock signal, which can be delivered to other devices through LEMO connectors. The module generates two types of signals: Analog and digital (TTL level). The analog output (single channel) is generated from Digital to Analog Converter through CPLD for various types of waveforms like Sine, Square, Triangular and other wave shape that can vary in amplitude as well as in frequency. The module is quite useful to test up to 32 channels.

  7. Water Pollution Detection Based on Hypothesis Testing in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xu Luo

    2017-01-01

    Full Text Available Water pollution detection is of great importance in water conservation. In this paper, the water pollution detection problems of the network and of the node in sensor networks are discussed. The detection problems are considered for both normally and nonnormally distributed monitoring noise. The pollution detection problems are first analyzed based on hypothesis testing theory; then, specific detection algorithms are given. Finally, two implementation examples are given to illustrate how the proposed detection methods are used for water pollution detection in sensor networks and to demonstrate their effectiveness.
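
    The hypothesis-testing formulation for the normal-noise case can be illustrated with a one-sided z-test sketch: H0 is "clean water" (readings at the baseline mean), H1 is "polluted" (elevated mean). The baseline statistics and readings below are hypothetical, not from the paper:

    ```python
    from math import erf, sqrt

    def pollution_p_value(readings, baseline_mean, baseline_std):
        """One-sided z-test of H0: mean == baseline against H1: mean > baseline,
        assuming normally distributed monitoring noise with known std."""
        n = len(readings)
        sample_mean = sum(readings) / n
        z = (sample_mean - baseline_mean) / (baseline_std / sqrt(n))
        # Upper-tail probability of a standard normal via the error function
        return 0.5 * (1 - erf(z / sqrt(2)))

    # Hypothetical turbidity readings against a baseline of mean 1.0, sd 0.2.
    p = pollution_p_value([1.4, 1.5, 1.3, 1.6], 1.0, 0.2)
    print(f"p = {p:.2e}")  # small p -> flag pollution at the node
    ```

    A node flags pollution when p falls below a chosen significance level; nonnormal noise requires a different (e.g. nonparametric) test statistic, as the paper discusses.
    
    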

  8. Region-based association tests for sequencing data on survival traits.

    Science.gov (United States)

    Chien, Li-Chu; Bowden, Donald W; Chiu, Yen-Feng

    2017-09-01

    Family-based designs enriched with affected subjects and disease-associated variants can increase statistical power for identifying functional rare variants. However, few rare variant analysis approaches are available for time-to-event traits in family designs, and none of them is applicable to the X chromosome. We developed novel pedigree-based burden and kernel association tests for time-to-event outcomes with right censoring in pedigree data, referred to as FamRATS (family-based rare variant association tests for survival traits). Cox proportional hazards models were employed to relate a time-to-event trait to rare variants, with the flexibility to encompass a range of weighting and collapsing schemes for multiple variants. In addition, robustness to violations of the proportional hazards assumption was investigated for the proposed tests and four existing ones, including the conventional population-based Cox proportional hazards model and the burden, kernel, and sum of squares statistic (SSQ) tests for family data. The proposed tests can be applied to large-scale whole-genome sequencing data. They are appropriate for practical use under a wide range of misspecified Cox models, as well as for population-based, pedigree-based, or hybrid designs. In our extensive simulation study and data example, we showed that the proposed kernel test is the most powerful and robust choice among the proposed burden test and the four existing rare variant survival association tests. When applied to the Diabetes Heart Study, the proposed tests found that exome variants of the JAK1 gene on chromosome 1 showed the most significant association with age at onset of type 2 diabetes in the exome-wide analysis. © 2017 WILEY PERIODICALS, INC.
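
    The burden idea (collapsing rare variants before testing) can be illustrated at the population level with carrier-status collapsing followed by a two-group log-rank test. This is a simplified sketch, not the FamRATS Cox-model machinery (no pedigree structure, no kernel, distinct event times assumed), and the mini-cohort is hypothetical:

    ```python
    def burden_carriers(genotypes):
        """Collapse a subject-by-variant matrix of rare-allele counts
        into carrier status (1 if the subject carries any rare allele)."""
        return [1 if sum(row) > 0 else 0 for row in genotypes]

    def logrank_chi2(times, events, groups):
        """Two-group log-rank chi-square statistic; for simplicity this
        sketch assumes distinct event times (no tie correction)."""
        n = len(times)
        o_minus_e, var = 0.0, 0.0
        for j in range(n):
            if not events[j]:      # censored observation: contributes no event term
                continue
            at_risk = [i for i in range(n) if times[i] >= times[j]]
            n_total = len(at_risk)
            n1 = sum(1 for i in at_risk if groups[i] == 1)
            o_minus_e += (1.0 if groups[j] == 1 else 0.0) - n1 / n_total
            if n_total > 1:
                var += (n1 / n_total) * (1 - n1 / n_total)
        return o_minus_e ** 2 / var

    # Hypothetical mini-cohort: carriers fail at t = 1 and 3, non-carriers at 2 and 4.
    groups = burden_carriers([[1, 0], [0, 1], [0, 0], [0, 0]])
    chi2 = logrank_chi2([1, 3, 2, 4], [1, 1, 1, 1], groups)
    print(round(chi2, 4))
    ```

    The statistic is referred to a chi-square with one degree of freedom; the family-based tests replace this with Cox score statistics that account for relatedness.
    
    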

  9. Obsessiveness and a thematic apperception test-based measure of aggression.

    Science.gov (United States)

    Cogan, Rosemary; Ashford, Daniel; Chaney, Brett; Embry, Sheena; Emory, Lindsay; Goebel, Holly; Holstrom, Nicole; Keithley, David; Lawson, Miranda; Mcpherson, Joe; Scott, Brandon; Tebbets, Jomie

    2004-12-01

    Freud (1909/1955) hypothesized a conflict between love and hate in obsessive neurosis. To test this hypothesis, we compared a Thematic Apperception Test-based measure of aggressive fantasies between college men who scored either high or low on the Obsessive-Compulsive Inventory-Revised. 64 undergraduate men from beginning classes in psychology participated; their mean age was 19.4 yr. (SD = 1.7). The 16 men with high scores had significantly higher scores on the TAT-based measure of aggressive fantasies toward parents, partners, and others than the 15 men with low scores, which is consistent with Freud's hypothesis.

  10. Probabilistic liquefaction triggering based on the cone penetration test

    Science.gov (United States)

    Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Tokimatsu, K.

    2005-01-01

    Performance-based earthquake engineering requires a probabilistic treatment of potential failure modes in order to accurately quantify the overall stability of the system. This paper is a summary of the application portions of the probabilistic liquefaction triggering correlations recently proposed by Moss and co-workers. To enable probabilistic treatment of liquefaction triggering, the variables comprising the seismic load and the liquefaction resistance were treated as inherently uncertain. Supporting data from an extensive Cone Penetration Test (CPT)-based liquefaction case history database were used to develop a probabilistic correlation. The methods used to measure the uncertainty of the load and resistance variables, how the interactions of these variables were treated using Bayesian updating, and how reliability analysis was applied to produce curves of equal probability of liquefaction are presented. The normalization for effective overburden stress, the magnitude-correlated duration weighting factor, and the non-linear shear mass participation factor used are also discussed.
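The abstract describes treating seismic load and liquefaction resistance as uncertain variables and deriving probability-of-liquefaction contours from reliability analysis. The sketch below is not the Moss et al. correlation; it is a generic Monte Carlo reliability calculation with hypothetical lognormal parameters for the load (cyclic stress ratio, CSR) and resistance (cyclic resistance ratio, CRR), showing how a probability of liquefaction falls out of a limit-state comparison.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Hypothetical lognormal load and resistance; medians and log-standard
# deviations are illustrative only, not values from Moss et al.
csr = rng.lognormal(mean=np.log(0.20), sigma=0.30, size=N)   # seismic load
crr = rng.lognormal(mean=np.log(0.25), sigma=0.35, size=N)   # soil resistance

# Limit state g = ln(CRR) - ln(CSR); liquefaction is predicted when g < 0.
p_liq = np.mean(np.log(crr) < np.log(csr))
print(f"P(liquefaction) ~ {p_liq:.3f}")
```

Repeating this calculation over a grid of median CSR and CRR values yields the curves of equal probability of liquefaction that the paper presents.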

  11. 5-fluorouracil-based therapy induces endovascular injury having potential significance to development of clinically overt cardiotoxicity

    DEFF Research Database (Denmark)

    Jensen, Søren Astrup; Sørensen, Jens Benn

    2012-01-01

    This study aimed to elucidate the influence of 5-fluorouracil (5-FU)-based therapy on the vascular endothelium and its association with 5-FU-induced heart ischemia.

  12. Tracing the base: A topographic test for collusive basing-point pricing

    NARCIS (Netherlands)

    Bos, I.; Schinkel, M.P.

    2008-01-01

    Basing-point pricing is known to have been abused by geographically dispersed firms in order to eliminate competition on transportation costs. This paper develops a topographic test for collusive basing-point pricing. The method uses transaction data (prices, quantities) and customer project site

  13. Tracing the Base: A Topographic Test for Collusive Basing-Point Pricing

    NARCIS (Netherlands)

    Bos, Iwan; Schinkel, Maarten Pieter

    2009-01-01

    Basing-point pricing is known to have been abused by geographically dispersed firms in order to eliminate competition on transportation costs. This paper develops a topographic test for collusive basing-point pricing. The method uses transaction data (prices, quantities) and customer project site

  14. Gas identification field test based on FTIR imaging spectrometer

    Science.gov (United States)

    Wang, Chensheng; Liu, Xingchao; Zhang, Zhijie; Yu, Hui

    2017-10-01

    Gas detection and identification are based on spectral absorption peak features acquired by a spectrometer. An FTIR imaging spectrometer has the advantages of high spectral resolution and good sensitivity, both suitable for identifying unknown gases or gas mixtures in applications such as plume pollution monitoring, chemical agent detection, and leakage detection. According to the application requirements, a dual-band FTIR imaging spectrometer has been developed and verified. This FTIR imaging spectrometer combines an infrared thermal imaging sensor and a Michelson interferometer to form a three-dimensional data cube. Based on this instrument, the theoretical analysis and algorithm are introduced, and the numerical method is explained to illustrate the basic idea of gas identification based on spectral features. After that, the field verification test is set up and completed. First, the FTIR imaging spectrometer is used to detect SF6, NH3, and their mixture while the gas is exhausted from a storage vessel at a specific speed. Second, the instrument is delivered to an industrial area to monitor plume emission and analyze the components of the plume. Finally, the instrument is used to monitor an oil spill at sea in a practical maritime trial. Further, the gas concentration evaluation method is discussed; quantification is an important topic in gas identification. The test results show that, based on the gas identification method introduced in this paper, an FTIR imaging spectrometer can identify unknown or mixed gases in real time. The instrument will play a key role in environmental emergency and monitoring applications.
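Identification from absorption-peak features, as described above, can be illustrated by correlating a measured spectrum against a small library of reference spectra. The band centers, widths, and gas names below are hypothetical stand-ins, not real SF6/NH3 line data:

```python
import numpy as np

wn = np.linspace(700, 1400, 701)                # wavenumber axis (cm^-1)

def band(center, width, depth):
    """Gaussian absorption band (illustrative shape, not real line data)."""
    return depth * np.exp(-0.5 * ((wn - center) / width) ** 2)

# Hypothetical reference absorption spectra for two gases.
refs = {
    "gas_A": band(948, 8, 1.0),
    "gas_B": band(1065, 12, 0.8) + band(930, 10, 0.5),
}

# Simulated measurement: a weaker gas_A band plus detector noise.
rng = np.random.default_rng(2)
measured = band(948, 8, 0.6) + rng.normal(0, 0.02, wn.size)

def identify(spec):
    """Rank library entries by correlation with the measured spectrum."""
    scores = {name: np.corrcoef(spec, ref)[0, 1] for name, ref in refs.items()}
    return max(scores, key=scores.get), scores

best, scores = identify(measured)
print(best, scores)
```

Real instruments work with calibrated transmittance and mixture unmixing rather than a bare correlation, but the peak-matching idea is the same.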

  15. Testing spatial symmetry using contingency tables based on nearest neighbor relations.

    Science.gov (United States)

    Ceyhan, Elvan

    2014-01-01

    We consider two types of spatial symmetry, namely, symmetry in the mixed or shared nearest neighbor (NN) structures. We use Pielou's and Dixon's symmetry tests, which are defined using contingency tables based on the NN relationships between the data points. We generalize these tests to multiple classes and demonstrate that both the asymptotic and exact versions of Pielou's first type of symmetry test are extremely conservative in rejecting symmetry in the mixed NN structure and hence should be avoided, or only the Monte Carlo randomized version should be used. Under random labelling (RL), we derive the asymptotic distribution of Dixon's symmetry test and also observe that the usual independence test seems to be appropriate for Pielou's second type of test. Moreover, we apply variants of Fisher's exact test on the shared NN contingency table for Pielou's second test and determine the most appropriate version for our setting. We also consider pairwise and one-versus-rest type tests in post hoc analysis after a significant overall symmetry test. We investigate the asymptotic properties of the tests, prove their consistency under appropriate null hypotheses, and investigate their finite-sample performance by extensive Monte Carlo simulations. The methods are illustrated on a real-life ecological data set.
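A minimal sketch of the ingredients named above: an NN contingency table built from nearest-neighbor class pairs, with a naive McNemar-style statistic for symmetry of the mixed (off-diagonal) cells. As the abstract stresses, such naive statistics ignore the dependence among NN pairs, which is exactly why the exact/asymptotic versions can behave badly; the code is illustrative only, with simulated two-class data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
pts = rng.random((n, 2))                        # random point pattern
labels = rng.integers(0, 2, n)                  # two classes, random labelling

# Nearest neighbour of each point (Euclidean, excluding self).
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)
nn = d.argmin(axis=1)

# NN contingency table: N[i, j] = points of class i whose NN has class j.
N = np.zeros((2, 2))
for i, j in zip(labels, labels[nn]):
    N[i, j] += 1

# Naive McNemar-style statistic: under symmetry in the mixed NN structure,
# the off-diagonal counts N[0,1] and N[1,0] have equal expectation.
chi2 = (N[0, 1] - N[1, 0]) ** 2 / (N[0, 1] + N[1, 0])
print(N, chi2)
```

Dixon's and Pielou's tests refine this by deriving the correct (dependence-aware) null distribution, which is the technical contribution of the paper.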

  16. Performance of the cobas HPV Test for the Triage of Atypical Squamous Cells of Undetermined Significance Cytology in Cervical Specimens Collected in SurePath.

    Science.gov (United States)

    Tewari, Devansu; Novak-Weekley, Susan; Hong, Christina; Aslam, Shagufta; Behrens, Catherine M

    2017-11-02

    Determine performance of the cobas human papillomavirus (HPV) test for triage of atypical squamous cells of undetermined significance (ASC-US) in SurePath. Women presenting for routine screening had cervical specimens collected in SurePath and specimen transport medium (STM); those with ASC-US cytology underwent colposcopy. Performance of cobas HPV in SurePath specimens that had undergone a preanalytic procedure to reverse possible cross-linking of HPV DNA was compared with Hybrid Capture 2 (hc2) specimens in STM. Among 856 women, HPV prevalence was 45.8%; HPV 16 and HPV 18 prevalences were lower than expected in the 21- to 29-year-old group in this highly vaccinated population. cobas HPV performance in SurePath was comparable to hc2 in STM. Sensitivity and specificity for detection of cervical intraepithelial neoplasia grade 3 or worse were 87.5% (95% confidence interval [CI], 71.9%-95.2%) and 55.5% (95% CI, 52.1%-58.9%) for cobas and 85.3% (95% CI, 69.9%-93.6%) and 54.7% (95% CI, 51.4%-57.9%) for hc2. Sensitivity was negatively affected by random biopsies performed at colposcopy; comparable sensitivities were achieved in the nonvaccinated and vaccinated populations with disease determined by directed biopsy only. Performance of cobas HPV for ASC-US triage in pretreated SurePath specimens meets criteria for validation. Preliminary data indicate reliable performance of HPV testing in a highly vaccinated population. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  17. Bridge Testing With Ground-Based Interferometric Radar: Experimental Results

    Science.gov (United States)

    Chiara, P.; Morelli, A.

    2010-05-01

    Research into innovative non-contact techniques for vibration measurement of civil engineering structures (including damage detection and structural health monitoring) is continually directed toward optimizing measurements and methods. Ground-Based Radar Interferometry (GBRI) is the most recent technique available for static and dynamic monitoring of structures and ground movements. Dynamic tests of bridges and buildings in operational conditions are currently performed: (a) to assess the conformity of the structure to the design at the end of construction; (b) to identify the modal parameters (i.e., natural frequencies, mode shapes, and damping ratios) and to check the variation of these parameters over the years; (c) to evaluate the amplitude of the structural response to special load conditions (e.g., strong winds, earthquakes, heavy railway or roadway loads). If such tests are carried out with a non-contact technique like GBRI, the classical issues of contact sensors (such as accelerometers) are easily overcome. This paper presents and discusses the results of various tests carried out on full-scale bridges using a Stepped-Frequency Continuous-Wave radar system.
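Modal-parameter identification from displacement measurements (contact or non-contact) often starts with simple peak-picking on the amplitude spectrum. A hedged sketch on a synthetic two-mode "bridge" response, with all frequencies, damping values, and noise levels invented:

```python
import numpy as np

fs = 200.0                                      # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)

# Synthetic response: two lightly damped modes plus measurement noise.
rng = np.random.default_rng(4)
y = (np.exp(-0.02 * t) * np.sin(2 * np.pi * 2.5 * t)
     + 0.4 * np.exp(-0.03 * t) * np.sin(2 * np.pi * 7.1 * t)
     + 0.05 * rng.standard_normal(t.size))

# Peak-picking on the amplitude spectrum estimates the dominant
# natural frequency; longer records give finer frequency resolution.
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(y.size, 1 / fs)
f1 = freqs[spec.argmax()]
print(f"dominant natural frequency ~ {f1:.2f} Hz")
```

Operational modal analysis on real radar data uses more robust estimators (e.g., frequency-domain decomposition), but the peak-picking step conveys the idea.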

  18. Blind Test of Physics-Based Prediction of Protein Structures

    Science.gov (United States)

    Shell, M. Scott; Ozkan, S. Banu; Voelz, Vincent; Wu, Guohong Albert; Dill, Ken A.

    2009-01-01

    We report here a multiprotein blind test of a computer method to predict native protein structures based solely on an all-atom physics-based force field. We use the AMBER 96 potential function with an implicit (GB/SA) model of solvation, combined with replica-exchange molecular-dynamics simulations. Coarse conformational sampling is performed using the zipping and assembly method (ZAM), an approach that is designed to mimic the putative physical routes of protein folding. ZAM was applied to the folding of six proteins, from 76 to 112 monomers in length, in CASP7, a community-wide blind test of protein structure prediction. Because these predictions have about the same level of accuracy as typical bioinformatics methods, and do not utilize information from databases of known native structures, this work opens up the possibility of predicting the structures of membrane proteins, synthetic peptides, or other foldable polymers, for which there is little prior knowledge of native structures. This approach may also be useful for predicting physical protein folding routes, non-native conformations, and other physical properties from amino acid sequences. PMID:19186130

  19. Ageing tests study on wood-based sandwich panels

    Directory of Open Access Journals (Sweden)

    Mateo, Raquel

    2011-12-01

    Full Text Available Composite lightweight wood panels are increasingly used in construction in Spain. Their growing use should be accompanied by the necessary guarantees, based on studies of their properties. As is prescriptive, and in addition to other tests, the present work examines the durability of these panels when exposed to climatic conditions, a characteristic of great importance for wood products, according to Guide ETAG 016, the current standard defining the ageing tests to be used. However, given the use class of this material, there are indications that the testing outlined in this Guide is inappropriate for assessing the ageing of wood-based sandwich panels. Alternative tests are proposed here that better recreate the real conditions under which these products are used. Covering the samples with a waterproof sheeting permeable to the outward movement of water vapour, as is in fact done during installation, provided the best procedure for testing these panels.

    Wood sandwich panels are a product of growing application in building construction in our country. This increasing use of the material should be accompanied by the necessary guarantees, supported by a prior study of its performance. As is prescriptive, and among other properties, its durability against climatic conditions is evaluated, a key issue for wood-derived products, in accordance with the current standard defined for that purpose, Guide ETAG 016. However, owing to the use class of the material, it has been found that this standard, as conceived, cannot adequately assess its ageing. In this work, alternative tests to the established one are proposed, after exhaustive analyses, that recreate the real conditions of use and are better suited to wood products. It is concluded that incorporating a sheet that is waterproof yet permeable to water vapour toward the outside, such as those used during assembly, provides the best procedure for testing these panels.

  20. Drop-in capsule testing of plutonium-based fuels in the Advanced Test Reactor

    International Nuclear Information System (INIS)

    Chang, G.S.; Ryskamp, J.M.; Terry, W.K.; Ambrosek, R.G.; Palmer, A.J.; Roesener, R.A.

    1996-09-01

    The most attractive way to dispose of weapons-grade plutonium (WGPu) is to use it as fuel in existing light water reactors (LWRs) in the form of mixed oxide (MOX) fuel - i.e., plutonia (PuO₂) mixed with urania (UO₂). Before U.S. reactors could be used for this purpose, their operating licenses would have to be amended. Numerous technical issues must be resolved before LWR operating licenses can be amended to allow the use of MOX fuel. The proposed weapons-grade MOX fuel is unusual, even relative to ongoing foreign experience with reactor-grade MOX power reactor fuel. Some demonstration of the in-reactor thermal, mechanical, and fission gas release behavior of the prototype fuel will most likely be required in a limited number of test reactor irradiations. The application to license operation with MOX fuel must be amply supported by experimental data. The Advanced Test Reactor (ATR) at the Idaho National Engineering Laboratory (INEL) is capable of playing a key role in the irradiation, development, and licensing of these new fuel types. The ATR is a 250-MW (thermal) LWR designed to study the effects of intense radiation on reactor fuels and materials. For 25 years, the primary role of the ATR has been to serve in experimental investigations for the development of advanced nuclear fuels. Both large- and small-volume test positions in the ATR could be used for MOX fuel irradiation. The ATR would be a nearly ideal test bed for developing data needed to support applications to license LWRs for operation with MOX fuel made from weapons-grade plutonium. Furthermore, these data can be obtained more quickly by using the ATR than by testing in a commercial LWR. Our previous work in this area has demonstrated that it is technically feasible to perform MOX fuel testing in the ATR. This report documents our analyses of sealed drop-in capsules containing plutonium-based test specimens placed in various ATR positions.

  1. Quantum spacetime operationally based on propagators for extended test particles

    International Nuclear Information System (INIS)

    Prugovecki, E.

    1981-01-01

    By taking into account the quantum aspects intrinsic to any operational definition of spatio-temporal relationships, a stochastic concept of spacetime emerges. Its classical counterpart is realized as a stochastic mean around which quantum fluctuations become negligible only in the limit of macroscopic spacetime intervals. The test-particle propagators used in the proposed quantum concept of spacetime are derived by solving in a consistent manner the localizability problem for relativistic particles. This is achieved in the framework of the stochastic phase space formulation of quantum mechanics, which in the nonrelativistic context is shown to result from systems of imprimitivity related to phase space conserved probability currents derivable from bona fide covariant probability densities in stochastic phase spaces of one-particle systems, which can be interpreted as due to measurements performed with extended rather than pointlike test particles. The associated particle propagators can therefore be consistently related to coordinate probability densities measurable by the exchange of photons between test particles from a chosen standard. Quantum spacetime is defined as the family of propagators corresponding to all conceivable coherent flows of test particles. This family of free-fall propagators has to satisfy certain self-consistency conditions as well as consistent laws of motion which implicitly determine the stochastic geometro-dynamics of quantum spacetime. Field theory on quantum spacetime retains many of the formal features of conventional quantum field theory. On a fundamental epistemological level, stochastic geometries emerge as essential prerequisites in the construction of spacetime models that are operationally based and yet consistent with the relativity principle as well as with the uncertainty principle.

  2. Autonomous Cryogenic Load Operations: Knowledge-Based Autonomous Test Engineer

    Science.gov (United States)

    Schrading, J. Nicolas

    2013-01-01

    The Knowledge-Based Autonomous Test Engineer (KATE) program has a long history at KSC. Now a part of the Autonomous Cryogenic Load Operations (ACLO) mission, this software system has been sporadically developed over the past 20 years. Originally designed to provide health and status monitoring for a simple water-based fluid system, it was proven to be a capable autonomous test engineer for determining sources of failure in the system. As part of a new goal to provide this same anomaly-detection capability for a complicated cryogenic fluid system, software engineers, physicists, interns and KATE experts are working to upgrade the software capabilities and graphical user interface. Much progress was made during this effort to improve KATE. A display of the entire cryogenic system's graph, with nodes for components and edges for their connections, was added to the KATE software. A searching functionality was added to the new graph display, so that users could easily center their screen on specific components. The GUI was also modified so that it displayed information relevant to the new project goals. In addition, work began on adding new pneumatic and electronic subsystems into the KATE knowledge base, so that it could provide health and status monitoring for those systems. Finally, many fixes for bugs, memory leaks, and memory errors were implemented and the system was moved into a state in which it could be presented to stakeholders. Overall, the KATE system was improved and necessary additional features were added so that a presentation of the program and its functionality in the next few months would be a success.

  3. Comparison of Glycomacropeptide with Phenylalanine Free-Synthetic Amino Acids in Test Meals to PKU Patients: No Significant Differences in Biomarkers, Including Plasma Phe Levels.

    Science.gov (United States)

    Ahring, Kirsten K; Lund, Allan M; Jensen, Erik; Jensen, Thomas G; Brøndum-Nielsen, Karen; Pedersen, Michael; Bardow, Allan; Holst, Jens Juul; Rehfeld, Jens F; Møller, Lisbeth B

    2018-01-01

    Management of phenylketonuria (PKU) is achieved through low-phenylalanine (Phe) diet, supplemented with low-protein food and mixture of free-synthetic (FS) amino acid (AA). Casein glycomacropeptide (CGMP) is a natural peptide released in whey during cheese-making and does not contain Phe. Lacprodan® CGMP-20 used in this study contained a small amount of Phe due to minor presence of other proteins/peptides. The purpose of this study was to compare absorption of CGMP-20 to FSAA with the aim of evaluating short-term effects on plasma AAs as well as biomarkers related to food intake. This study included 8 patients, who had four visits and tested four drink mixtures (DM1-4), consisting of CGMP, FSAA, or a combination. Plasma blood samples were collected at baseline, 15, 30, 60, 120, and 240 minutes (min) after the meal. AA profiles and ghrelin were determined 6 times, while surrogate biomarkers were determined at baseline and 240 min. A visual analogue scale (VAS) was used for evaluation of taste and satiety. The surrogate biomarker concentrations and VAS scores for satiety and taste were nonsignificant between the four DMs, and there were only few significant results for AA profiles (not Phe). CGMP and FSAA had the overall same nonsignificant short-term effect on biomarkers, including Phe. This combination of FSAA and CGMP is a suitable supplement for PKU patients.

  4. Comparison of Glycomacropeptide with Phenylalanine Free-Synthetic Amino Acids in Test Meals to PKU Patients: No Significant Differences in Biomarkers, Including Plasma Phe Levels

    Directory of Open Access Journals (Sweden)

    Kirsten K. Ahring

    2018-01-01

    Full Text Available Introduction. Management of phenylketonuria (PKU) is achieved through low-phenylalanine (Phe) diet, supplemented with low-protein food and mixture of free-synthetic (FS) amino acid (AA). Casein glycomacropeptide (CGMP) is a natural peptide released in whey during cheese-making and does not contain Phe. Lacprodan® CGMP-20 used in this study contained a small amount of Phe due to minor presence of other proteins/peptides. Objective. The purpose of this study was to compare absorption of CGMP-20 to FSAA with the aim of evaluating short-term effects on plasma AAs as well as biomarkers related to food intake. Methods. This study included 8 patients, who had four visits and tested four drink mixtures (DM1–4), consisting of CGMP, FSAA, or a combination. Plasma blood samples were collected at baseline, 15, 30, 60, 120, and 240 minutes (min) after the meal. AA profiles and ghrelin were determined 6 times, while surrogate biomarkers were determined at baseline and 240 min. A visual analogue scale (VAS) was used for evaluation of taste and satiety. Results. The surrogate biomarker concentrations and VAS scores for satiety and taste were nonsignificant between the four DMs, and there were only few significant results for AA profiles (not Phe). Conclusion. CGMP and FSAA had the overall same nonsignificant short-term effect on biomarkers, including Phe. This combination of FSAA and CGMP is a suitable supplement for PKU patients.

  5. A practical approach for implementing risk-based inservice testing of pumps at nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, R.S. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Maret, D.; Seniuk, P.; Smith, L.

    1996-12-01

    The American Society of Mechanical Engineers (ASME) Center for Research and Technology Development's (CRTD) Research Task Force on Risk-Based Inservice Testing has developed guidelines for risk-based inservice testing (IST) of pumps and valves. These guidelines are intended to help the ASME Operation and Maintenance (OM) Committee to enhance plant safety while focusing appropriate testing resources on critical components. This paper describes a practical approach for implementing those guidelines for pumps at nuclear power plants. The approach, as described in this paper, relies on input, direction, and assistance from several entities such as the ASME Code Committees, the United States Nuclear Regulatory Commission (NRC), and the National Laboratories, as well as industry groups and personnel with applicable expertise. Key parts of the risk-based IST process that are addressed here include: identification of important failure modes, identification of significant failure causes, assessment of the effectiveness of testing and maintenance activities, development of alternative testing and maintenance strategies, and comparison of the effectiveness of alternative testing strategies with present ASME Code requirements. Finally, the paper suggests a method of implementing this process into the ASME OM Code for pump testing.

  6. Regulation of Internet-based Genetic Testing: Challenges for Australia and Other Jurisdictions.

    Science.gov (United States)

    Tiller, Jane; Lacaze, Paul

    2018-01-01

    The Internet currently enables unprecedented ease of access for direct-to-consumer (DTC) genetic testing, with saliva collection kits posted directly to consumer homes from anywhere in the world. This poses new challenges for local jurisdictions in regulating genetic testing, traditionally a tightly-regulated industry. Some Internet-based genetic tests have the capacity to cause significant confusion or harm to consumers who are unaware of the risks or potential variability in quality. The emergence of some online products of questionable content, unsupported by adequate scientific evidence, is a cause for concern. Proliferation of such products in the absence of regulation has the potential to damage public trust in accredited and established clinical genetic testing during a critical period of evidence generation for genomics. Here, we explore the challenges arising from the emergence of Internet-based DTC genetic testing. In particular, there are challenges in regulating unaccredited or potentially harmful Internet-based DTC genetic testing products. In Australia, challenges exist for the Therapeutic Goods Administration, which oversees regulation of the genetic testing sector. Concerns and challenges faced in Australia are likely to reflect those of other comparable non-US jurisdictions. Here, we summarize current Australian regulation, highlight concerns, and offer recommendations on how Australia and other comparable jurisdictions might be more proactive in addressing this emerging public health issue.

  7. Hydrothermal Fe cycling and deep ocean organic carbon scavenging: Model-based evidence for significant POC supply to seafloor sediments

    Digital Repository Service at National Institute of Oceanography (India)

    German, C.R.; Legendre, L.L.; Sander, S.G.;; Niquil, N.; Luther-III, G.W.; LokaBharathi, P.A.; Han, X.; LeBris, N.

    by more than ~10% over background values, what the model does indicate is that scavenging of carbon in association with Fe-rich hydrothermal plume particles should play a significant role in the delivery of particulate organic carbon to deep ocean...

  8. Team-Based Learning, Faculty Research, and Grant Writing Bring Significant Learning Experiences to an Undergraduate Biochemistry Laboratory Course

    Science.gov (United States)

    Evans, Hedeel Guy; Heyl, Deborah L.; Liggit, Peggy

    2016-01-01

    This biochemistry laboratory course was designed to provide significant learning experiences to expose students to different ways of succeeding as scientists in academia and foster development and improvement of their potential and competency as the next generation of investigators. To meet these goals, the laboratory course employs three…

  9. VEGAS2: Software for More Flexible Gene-Based Testing.

    Science.gov (United States)

    Mishra, Aniket; Macgregor, Stuart

    2015-02-01

    Gene-based tests such as the versatile gene-based association study (VEGAS) are commonly used following per-single nucleotide polymorphism (SNP) genome-wide association study (GWAS) analysis. Two limitations of VEGAS were that the HapMap2 reference set was used to model the correlation between SNPs and that only autosomal genes were considered. HapMap2 has now been superseded by the 1,000 Genomes reference set, and whereas early GWASs frequently ignored the X chromosome, it is now commonly included. Here we have developed VEGAS2, an extension that uses 1,000 Genomes data to model SNP correlations across the autosomes and chromosome X. VEGAS2 allows greater flexibility when defining gene boundaries. VEGAS2 offers both a user-friendly, web-based front end and a command line Linux version. The online version of VEGAS2 can be accessed through https://vegas2.qimrberghofer.edu.au/. The command line version can be downloaded from https://vegas2.qimrberghofer.edu.au/zVEGAS2offline.tgz. The command line version is developed in Perl, R and shell scripting languages; source code is available for further development.
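VEGAS-style gene-based testing sums the per-SNP 1-df chi-squared statistics within a gene and evaluates that sum against a null distribution simulated from a multivariate normal with the local LD correlation matrix (which is what the reference panels above supply). A simplified sketch with a hypothetical 4-SNP gene and an illustrative compound-symmetry LD matrix:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical LD (correlation) matrix for 4 SNPs in one gene:
# compound symmetry with rho = 0.5, purely illustrative.
m, rho = 4, 0.5
R = np.full((m, m), rho)
np.fill_diagonal(R, 1.0)

# Illustrative per-SNP association z-scores from a GWAS.
z_obs = np.array([2.1, 1.8, 0.9, 2.5])
T_obs = (z_obs ** 2).sum()                      # gene-based sum statistic

# Null distribution: sums of squared correlated N(0,1) draws.
L = np.linalg.cholesky(R)
z_null = rng.standard_normal((100_000, m)) @ L.T
T_null = (z_null ** 2).sum(axis=1)

# Empirical p-value with the usual +1 correction.
p_gene = (1 + (T_null >= T_obs).sum()) / (1 + T_null.size)
print(f"gene-based p ~ {p_gene:.4f}")
```

The real software estimates R from a reference panel (HapMap2 originally, 1,000 Genomes in VEGAS2) and supports top-k and windowed variants of the sum.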

  10. An Improved Test Selection Optimization Model Based on Fault Ambiguity Group Isolation and Chaotic Discrete PSO

    Directory of Open Access Journals (Sweden)

    Xiaofeng Lv

    2018-01-01

    Full Text Available Sensor data-based test selection optimization is the basis for designing test work, ensuring that the system is tested under the constraint of conventional indexes such as the fault detection rate (FDR) and the fault isolation rate (FIR). From the perspective of equipment maintenance support, isolation ambiguity has a significant effect on the result of test selection. In this paper, an improved test selection optimization model is proposed that considers the ambiguity degree of fault isolation. In the new model, a fault-test dependency matrix is adopted to model the correlation between system faults and test groups. The objective function of the proposed model minimizes the test cost under FDR and FIR constraints. An improved chaotic discrete particle swarm optimization (PSO) algorithm is adopted to solve the improved test selection optimization model. The new test selection optimization model is more consistent with real, complicated engineering systems. Experimental results verify the effectiveness of the proposed method.
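The fault-test dependency matrix mentioned above makes the selection problem concrete: choose the cheapest set of tests that still satisfies the detection constraints. The toy sketch below uses a hypothetical 5-fault, 4-test matrix and exhaustive search in place of the chaotic discrete PSO (which matters only when the test set is too large to enumerate):

```python
import numpy as np
from itertools import combinations

# Hypothetical fault-test dependency matrix: D[f, t] = 1 if test t
# can detect fault f (5 faults, 4 tests, invented for illustration).
D = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1],
              [1, 0, 0, 1]])
cost = np.array([3.0, 2.0, 4.0, 1.0])           # cost of running each test

def best_subset(D, cost):
    """Cheapest test set that detects every fault (exhaustive search)."""
    best, best_cost = None, np.inf
    for r in range(1, D.shape[1] + 1):
        for subset in combinations(range(D.shape[1]), r):
            covered = D[:, list(subset)].max(axis=1)
            if covered.min() == 1:              # 100% fault detection
                c = cost[list(subset)].sum()
                if c < best_cost:
                    best, best_cost = subset, c
    return best, best_cost

subset, total = best_subset(D, cost)
print(f"selected tests {subset} at cost {total}")
```

The paper's model additionally penalizes ambiguity groups (faults that the selected tests cannot tell apart), which would add an FIR term to the feasibility check.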

  11. Effects of an Employer-Based Intervention on Employment Outcomes for Youth with Significant Support Needs Due to Autism

    Science.gov (United States)

    Wehman, Paul; Schall, Carol M.; McDonough, Jennifer; Graham, Carolyn; Brooke, Valerie; Riehle, J. Erin; Brooke, Alissa; Ham, Whitney; Lau, Stephanie; Allen, Jaclyn; Avellone, Lauren

    2017-01-01

    The purpose of this study was to develop and investigate an employer-based 9-month intervention for high school youth with autism spectrum disorder to learn job skills and acquire employment. The intervention modified a program titled Project SEARCH and incorporated the use of applied behavior analysis to develop Project SEARCH plus Autism…

  12. Simulation-Based Testing of Pager Interruptions During Laparoscopic Cholecystectomy.

    Science.gov (United States)

    Sujka, Joseph A; Safcsak, Karen; Bhullar, Indermeet S; Havron, William S

    2018-01-30

    To determine if pager interruptions affect operative time, safety, or complications and management of pager issues during a simulated laparoscopic cholecystectomy. Twelve surgery resident volunteers were tested on a Simbionix Lap Mentor II simulator. Each resident performed 6 randomized simulated laparoscopic cholecystectomies; 3 with pager interruptions (INT) and 3 without pager interruptions (NO-INT). The pager interruptions were sent in the form of standardized patient vignettes and timed to distract the resident during dissection of the critical view of safety and clipping of the cystic duct. The residents were graded on a pass/fail scale for eliciting appropriate patient history and management of the pager issue. Data was extracted from the simulator for the following endpoints: operative time, safety metrics, and incidence of operative complications. The Mann-Whitney U test and contingency table analysis were used to compare the 2 groups (INT vs. NO-INT). Level I trauma center; Simulation laboratory. Twelve general surgery residents. There was no significant difference between the 2 groups in any of the operative endpoints as measured by the simulator. However, in the INT group, only 25% of the time did the surgery residents both adequately address the issue and provide effective patient management in response to the pager interruption. Pager interruptions did not affect operative time, safety, or complications during the simulated procedure. However, there were significant failures in the appropriate evaluations and management of pager issues. Consideration for diversion of patient care issues to fellow residents not operating to improve quality and safety of patient care outside the operating room requires further study. Copyright © 2018. Published by Elsevier Inc.

  13. Sequence risk analysis: A method for the evaluation of event significance based on potential core damage frequency

    International Nuclear Information System (INIS)

    Fader, G.B.; Jones, M.A.; Zebroski, E.L.

    1984-01-01

This chapter describes a quantitative evaluation method that can be used in lieu of a Probabilistic Risk Assessment (PRA) to estimate the event-related risk of core damage; it is intended to handle unusual sequences and plant-unique system unavailability and operator behavior. Core damage is defined as damage sufficient to cause a prolonged outage for replacement of a deformed core and plant decontamination. The event severity evaluation procedure is as follows: assemble plant information, develop plant-specific event tree headings, identify the event initiator, develop the event-specific event tree, and evaluate the event tree for event severity. The event significance evaluation procedure involves evaluating the event tree for core damage frequency, determining the relevance of the event to other plants or units, and determining event significance. Each step is given a detailed explanation.
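The event tree evaluation step can be illustrated with a toy calculation. This is a sketch only, not the chapter's actual method or data: each core-damage sequence frequency is the initiating-event frequency times the conditional branch probabilities along that sequence, and the event's potential core damage frequency is their sum. All numbers below are invented.

```python
import math

initiator_freq = 1e-2  # hypothetical initiating-event frequency (per year)

# Each sequence: conditional probabilities for the plant-specific event
# tree headings along one path to core damage (values invented).
core_damage_sequences = [
    [0.1, 0.05],         # mitigating system fails, operator action fails
    [0.1, 0.95, 0.01],   # system fails, action succeeds, backup fails
]

# Sum of (initiator frequency x product of branch probabilities)
cdf = sum(initiator_freq * math.prod(branches)
          for branches in core_damage_sequences)
print(f"Estimated core damage frequency: {cdf:.2e} per year")
```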

  14. On the equivalence of the Clauser–Horne and Eberhard inequality based tests

    International Nuclear Information System (INIS)

    Khrennikov, Andrei; Ramelow, Sven; Ursin, Rupert; Wittmann, Bernhard; Kofler, Johannes; Basieva, Irina

    2014-01-01

Recently, the results of the first experimental test for entangled photons closing the detection loophole (also referred to as the fair sampling loophole) were published (Vienna, 2013). From the theoretical viewpoint, the main distinguishing feature of this long-aspired-to experiment was that the Eberhard inequality was used. Almost simultaneously another experiment closing this loophole was performed (Urbana-Champaign, 2013), and it was based on the Clauser–Horne inequality (for probabilities). The aim of this note is to analyze the mathematical and experimental equivalence of tests based on the Eberhard inequality and various forms of the Clauser–Horne inequality. The structure of the mathematical equivalence is nontrivial. In particular, it is necessary to distinguish between algebraic and statistical equivalence. Although the tests based on these inequalities are algebraically equivalent, they need not be equivalent statistically, i.e., theoretically the level of statistical significance can drop under transition from one test to another (at least for finite samples). Nevertheless, the data collected in the Vienna test imply a statistically significant violation not only of the Eberhard inequality, but also of the Clauser–Horne inequality (in the ratio-rate form): for both, a violation >60σ. (paper)

  15. Mechanical Testing of Carbon Based Woven Thermal Protection Materials

    Science.gov (United States)

    Pham, John; Agrawal, Parul; Arnold, James O.; Peterson, Keith; Venkatapathy, Ethiraj

    2013-01-01

Three-dimensional woven thermal protection system (TPS) materials are one of the enabling technologies for mechanically deployable hypersonic decelerator systems. These materials have been shown capable of serving a dual purpose as TPS and as structural load-bearing members during entry and descent operations. In order to ensure successful structural performance, it is important to characterize the mechanical properties of these materials prior to and after exposure to entry-like heating conditions. This research focuses on the changes in load-bearing capacity of woven TPS materials after being subjected to arcjet simulations of entry heating. Preliminary testing of arcjet-tested materials [1] has shown mechanical degradation. However, their residual strength is significantly greater than the requirements for a mission to Venus [2]. A systematic investigation at the macro- and microstructural scales is reported here to explore the potential causes of this degradation. The effects of heating on the sizing (an epoxy resin coating used to reduce friction and wear during fiber handling) are discussed as one of the possible causes of the decrease in mechanical properties. This investigation also provides valuable guidelines for margin policies for future mechanically deployable entry systems.

  16. Finding of No Significant Impact: Environmental Assessment for Military Family Housing Privatization Initiative, Vandenberg Air Force Base, California

    Science.gov (United States)

    2007-08-01

Pinus radiata), and eucalyptus trees (Eucalyptus sp.) are present. In areas with mature trees, a non-native grassland understory is present and in...during UXO vegetation clearing. If feasible, the Air Force would conduct UXO vegetative clearing after seed set and before germination of Gaviota...from this designation under section 4(b)(2) of the Act. Gaviota tarplant seeds germinate in response to significant rainfall. Seedlings have been

  17. Clinicopathological significance of psychotic experiences in non-psychotic young people: evidence from four population-based studies.

    LENUS (Irish Health Repository)

    Kelleher, Ian

    2012-07-01

    Epidemiological research has shown that hallucinations and delusions, the classic symptoms of psychosis, are far more prevalent in the population than actual psychotic disorder. These symptoms are especially prevalent in childhood and adolescence. Longitudinal research has demonstrated that psychotic symptoms in adolescence increase the risk of psychotic disorder in adulthood. There has been a lack of research, however, on the immediate clinicopathological significance of psychotic symptoms in adolescence.

  18. A self-calibrating LED-based solar test platform

    DEFF Research Database (Denmark)

    Krebs, Frederik C; Sylvester-Hvid, Kristian O.; Jørgensen, Mikkel

    2011-01-01

    A compact platform for testing solar cells is presented. The light source comprises a multi-wavelength high-power LED (light emitting diode) array allowing the homogenous illumination of small laboratory solar cell devices (substrate size 50 × 25 mm) within the 390–940 nm wavelength range......, it is possible to perform all the commonly employed measurements on the solar cell at very high speed without moving the sample. In particular, the LED-based illumination system provides an alternative to light-biased incident photon-to-current efficiency measurement to be performed which we demonstrate. Both...... top and bottom contact is possible and the atmosphere can be controlled around the sample during measurements. The setup was developed for the field of polymer and organic solar cells with particular emphasis on enabling different laboratories to perform measurements in the same manner and obtain...

  19. [Photoelectric parametric test system of LED based on virtual instrument].

    Science.gov (United States)

    Yang, Song-Tao; Wang, Jing; Wang, Li-Dong; Liang, Pei; Dong, Qian-Min

    2014-11-01

The standardized measuring principles, requirements, and implementations for the principal photoelectric parameters of LEDs were researched and analyzed in the present paper. A comprehensive test system involving optics, machinery, and computing was then designed to accomplish data acquisition, algorithm design, and interface design as a virtual instrument using an NI USB6210 data acquisition card. Convincing results for the LEDs' parameters, including peak wavelength, half-peak spectral width, centroid wavelength, chromaticity coordinates, purity, correlated color temperature, and forward voltage/current, were achieved with good consistency based on the measured spectrum. The system has a simple interface, reliable algorithms, and stable results. Measurements on LEDs of five colors yielded an average error of less than 3%, showing that the system performs well.
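Two of the spectral parameters named above can be sketched from a sampled spectrum: the peak wavelength (the sample with maximum intensity) and the centroid wavelength (the intensity-weighted mean wavelength). The Gaussian spectrum below is synthetic; a real system would use the spectrum acquired through the data acquisition card.

```python
import math

wavelengths = list(range(600, 661, 5))  # sampled wavelengths (nm)
# Synthetic Gaussian emission line centered at 630 nm
intensities = [math.exp(-((w - 630.0) / 10.0) ** 2) for w in wavelengths]

# Peak wavelength: wavelength of the maximum-intensity sample
peak_wl = wavelengths[intensities.index(max(intensities))]
# Centroid wavelength: intensity-weighted mean wavelength
centroid_wl = (sum(w + 0 for w in [])  # placeholder removed below
               if False else
               sum(w * i for w, i in zip(wavelengths, intensities))
               / sum(intensities))
print(peak_wl, round(centroid_wl, 1))  # 630 630.0 for this symmetric spectrum
```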

  20. Ultrasound-based testing of tendon mechanical properties

    DEFF Research Database (Denmark)

    Seynnes, O R; Bojsen-Møller, J.; Albracht, K

    2015-01-01

    In the past 20 years, the use of ultrasound-based methods has become a standard approach to measure tendon mechanical properties in vivo. Yet the multitude of methodological approaches adopted by various research groups probably contribute to the large variability of reported values. The technique...... of obtaining and relating tendon deformation to tensile force in vivo has been applied differently, depending on practical constraints or scientific points of view. Divergence can be seen in 1) methodological considerations, such as the choice of anatomical features to scan and to track, force measurements......, or signal synchronization; and 2) in physiological considerations related to the viscoelastic behavior or length measurements of tendons. Hence, the purpose of the present review is to assess and discuss the physiological and technical aspects connected to in vivo testing of tendon mechanical properties...

  1. An ADC test system based on the Macintosh computer

    International Nuclear Information System (INIS)

    Gao, Z.; Spiriti, E.; Tortora, L.

    1994-01-01

DAΦNE is an approved INFN Φ-factory project to be realized at the Frascati National Laboratories of Italy. DAΦNE consists of the construction of a two-ring colliding-beam Φ factory and a 510 MeV e⁺/e⁻ injector for topping-up. KLOE is a general-purpose detector to be used at DAΦNE which is optimized for the study of CP violation in K⁰ decays, with the aim of achieving a statistical accuracy of ∼10⁻⁴ for ℜ(ε′/ε) in a one-year run at the DAΦNE target luminosity of 10³³ cm⁻² s⁻¹. In the electromagnetic calorimeter front-end electronics readout system, the ADC modules are used to provide energy-deposit information. A prototype 12-bit/1-μs ADC CAMAC module has been developed and tested with a system based on the Macintosh computer. Characteristics and performance of the system are described

  2. In vitro digestion testing of lipid-based delivery systems

    DEFF Research Database (Denmark)

    Devraj, Ravi; Williams, Hywel D; Warren, Dallas B

    2012-01-01

    In vitro digestion testing is of practical importance to predict the fate of drugs administered in lipid-based delivery systems. Calcium ions are often added to digestion media to increase the extent of digestion of long-chain triglycerides (LCTs), but the effects they have on phase behaviour...... of the products of digestion, and consequent drug solubilization, are not well understood. This study investigates the effect of calcium and bile salt concentrations on the rate and extent of in vitro digestion of soybean oil, as well as the solubilizing capacity of the digestion products for two poorly water......-soluble drugs, fenofibrate and danazol. In the presence of higher concentrations of calcium ions, the solubilization capacities of the digests were reduced for both drugs. This effect is attributed to the formation of insoluble calcium soaps, visible as precipitates during the digestions. This reduces...

  3. Multimeric recombinant M2e protein-based ELISA: a significant improvement in differentiating avian influenza infected chickens from vaccinated ones.

    Directory of Open Access Journals (Sweden)

    Farshid Hadifar

Full Text Available Killed avian influenza virus (AIV) vaccines have been used to control H5N1 infections in countries where the virus is endemic. Distinguishing vaccinated from naturally infected birds (DIVA) in such situations, however, has become a major challenge. Recently, we introduced the recombinant ectodomain of the M2 protein (M2e) of the H5N1 subtype as a novel tool for an ELISA-based DIVA test. Despite being antigenic in natural infection, the monomeric form of M2e used in the ELISA had limited antigenicity and consequently poor diagnostic capability. To address this shortcoming, we evaluated the use of four tandem copies of M2e (tM2e) for increased efficiency of M2e antibody detection. The tM2e gene of an H5N1 strain from Indonesia (A/Indonesia/CDC540/2006) was cloned into a pMAL-p4x expression vector and expressed in E. coli as recombinant tM2e-MBP or M2e-MBP proteins. Both the M2e and tM2e antigens reacted with sera obtained from chickens following live H5N1 infection but not with sera from vaccinated birds. A significantly stronger M2e antibody reaction was observed with the tM2e than with the M2e antigen. Western blotting also supported the superiority of tM2e over M2e in the detection of specific M2e antibodies against live H5N1 infection. Results from this study demonstrate that the M2e tetramer is a better antigen than single M2e and could be more suitable for an ELISA-based DIVA test.

  4. [The significance of the results of crash-tests with the use of the models of the pedestrians' lower extremities for the prevention of the traffic road accidents].

    Science.gov (United States)

    Smirenin, S A; Fetisov, V A; Grigoryan, V G; Gusarov, A A; Kucheryavets, Yu O

    The disabling injuries inflicted during road traffic accidents (RTA) create a serious challenge for the public health services and are at the same time a major socio-economic problem in the majority of the countries throughout the world. The injuries to the lower extremities of the pedestrians make up the largest fraction of the total number of the non-lethal RTA injuries. Most of them are responsible for the considerable deterioration of the quality of life for the participants in the accidents during the subsequent period. The objective of the present study was to summarize the currently available results of experimental testing of the biomechanical models of the pedestrians' lower extremities in the framework of the program for the prevention of the road traffic accidents as proposed by the World Health Organization (WHO, 2004). The European Enhanced Safety Vehicle Committee (EEVC) has developed a series of crash-tests with the use of the models of the pedestrians' lower extremities simulating the vehicle bumper-pedestrian impact. The models are intended for the assessment of the risk of the tibia fractures and the injuries to the knee joint ligaments. The experts of EEVC proposed the biomechanical criteria for the acceleration of the knee and talocrural parts of the lower limbs as well as for the shear displacement of the knee and knee-bending angle. The engineering solution of this problem is based on numerous innovation proposals being implemented in the machine-building industry with the purpose of reducing the stiffness of structural elements of the bumper and other front components of a modern vehicle designed to protect the pedestrians from severe injuries that can be inflicted in the road traffic accidents. 
The activities of the public health authorities (in the first place, bureaus of forensic medical expertise and analogous facilities) have a direct bearing on the solution of the problem of control of road traffic injuries because they are possessed of

  5. Chemically modified carbon-based electrodes for the detection of some substances of environmental and biomedical significance

    OpenAIRE

    Hutton, Emily Anne

    2003-01-01

    The thesis opens with an introduction to some basic electroanalytical principles, and a brief description of carbon-based electrodes, with a particular focus on glassy carbon and carbon fibre microelectrodes. Also included is a review of the most commonly employed electrode modification procedures, with some examples of the analytical applications of modified electrodes. The second chapter describes the use of the bismuth film electrode (BiFE), which consists of a bismuth film electrochem...

  6. Digital reflection holography based systems development for MEMS testing

    Science.gov (United States)

    Singh, Vijay Raj; Liansheng, Sui; Asundi, Anand

    2010-05-01

    MEMS are tiny mechanical devices that are built onto semiconductor chips and are measured in micrometers and nanometers. Testing of MEMS device is an important part in carrying out their functional assessment and reliability analysis. Development of systems based on digital holography (DH) for MEMS inspection and characterization is presented in this paper. Two DH reflection systems, table-top and handheld types, are developed depending on the MEMS measurement requirements and their capabilities are presented. The methodologies for the systems are developed for 3D profile inspection and static & dynamic measurements, which is further integrated with in-house developed software that provides the measurement results in near real time. The applications of the developed systems are demonstrated for different MEMS devices for 3D profile inspection, static deformation/deflection measurements and vibration analysis. The developed systems are well suitable for the testing of MEMS and Microsystems samples, with full-field, static & dynamic inspection as well as to monitor micro-fabrication process.

  7. Significant variation between SNP-based HLA imputations in diverse populations: the last mile is the hardest.

    Science.gov (United States)

    Pappas, D J; Lizee, A; Paunic, V; Beutner, K R; Motyer, A; Vukcevic, D; Leslie, S; Biesiada, J; Meller, J; Taylor, K D; Zheng, X; Zhao, L P; Gourraud, P-A; Hollenbach, J A; Mack, S J; Maiers, M

    2017-04-25

Four single nucleotide polymorphism (SNP)-based human leukocyte antigen (HLA) imputation methods (e-HLA, HIBAG, HLA*IMP:02 and MAGPrediction) were trained using 1000 Genomes SNP and HLA genotypes and assessed for their ability to accurately impute molecular HLA-A, -B, -C and -DRB1 genotypes in the Human Genome Diversity Project cell panel. Imputation concordance was high (>89%) across all methods for both HLA-A and HLA-C, but HLA-B and HLA-DRB1 proved generally difficult to impute. Overall, <27.8% of subjects were correctly imputed for all HLA loci by any method. Concordance across all loci was not enhanced via the application of confidence thresholds; reliance on confidence scores across methods only led to noticeable improvement (+3.2%) for HLA-DRB1. As the HLA complex is highly relevant to the study of human health and disease, a standardized assessment of SNP-based HLA imputation methods is crucial for advancing genomic research. Considerable room remains for the improvement of HLA-B and especially HLA-DRB1 imputation methods, and no imputation method is as accurate as molecular genotyping. The application of large, ancestrally diverse HLA and SNP reference data sets and multiple imputation methods has the potential to make SNP-based HLA imputation methods a tractable option for determining HLA genotypes. The Pharmacogenomics Journal advance online publication, 25 April 2017; doi:10.1038/tpj.2017.7.
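The per-locus concordance measure described above can be sketched as the fraction of subjects whose imputed genotype matches the molecular genotype, treating each genotype as an unordered pair of alleles. The allele names below are invented examples, not 1000 Genomes data.

```python
def concordance(imputed, truth):
    """Fraction of subjects with identical unordered genotype calls."""
    matches = sum(sorted(i) == sorted(t) for i, t in zip(imputed, truth))
    return matches / len(truth)

# Hypothetical HLA-A calls for two subjects (allele order is irrelevant)
imputed = [("A*01:01", "A*02:01"), ("A*03:01", "A*03:01")]
truth   = [("A*02:01", "A*01:01"), ("A*03:01", "A*24:02")]
print(concordance(imputed, truth))  # 0.5: first subject matches, second does not
```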

  8. Paper-Based Electrochemical Biosensors: From Test Strips to Paper-Based Microfluidics

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Bingwen; Du, Dan; Hua, Xin; Yu, Xiao-Ying; Lin, Yuehe

    2014-05-08

Paper-based biosensors such as lateral flow test strips and paper-based microfluidic devices (or paperfluidics) are inexpensive, rapid, flexible, and easy-to-use analytical tools. An apparent trend is to move their readout from qualitative assessment to quantitative determination, and electrochemical detection plays an important role in such quantification. This review focuses on biosensors enabled by electrochemical (EC) detection. The first part provides detailed examples of paper test strips; the second part gives an overview of paperfluidics employing EC detection. The outlook and recommended future directions for EC-enabled biosensors are discussed at the end.

  9. Overview on PANDA test facility and ISP-42 PANDA tests data base

    International Nuclear Information System (INIS)

    Aksan, N.

    2005-01-01

The PANDA facility is an example of a test facility in which passive decay heat removal systems are tested; this paper provides an overview of the facility and of the experimental validation database from the ISP-42 PANDA tests. A short overview of the test programs performed in this facility is also given. (author)

  10. Fast Lemons and Sour Boulders: Testing Crossmodal Correspondences Using an Internet-Based Testing Methodology

    Directory of Open Access Journals (Sweden)

    Andy T. Woods

    2013-09-01

Full Text Available According to a popular family of hypotheses, crossmodal matches between distinct features hold because they correspond to the same polarity on several conceptual dimensions (such as active–passive, good–bad, etc.) that can be identified using the semantic differential technique. The main problem here resides in turning this hypothesis into testable empirical predictions. In the present study, we outline a series of plausible consequences of the hypothesis and test a variety of well-established and previously untested crossmodal correspondences by means of a novel internet-based testing methodology. The results highlight that the semantic hypothesis cannot easily explain differences in the prevalence of crossmodal associations built on the same semantic pattern (fast lemons, slow prunes, sour boulders, heavy red); furthermore, the semantic hypothesis only minimally predicts what happens when the semantic dimensions and polarities that are supposed to drive such crossmodal associations are made more salient (e.g., by adding emotional cues that ought to make the good/bad dimension more salient); finally, the semantic hypothesis does not explain why reliable matches are no longer observed once intramodal dimensions with congruent connotations are presented (e.g., visually presented shapes and colour do not appear to correspond).

  11. Annonaceous acetogenins (ACGs) nanosuspensions based on a self-assembly stabilizer and the significantly improved anti-tumor efficacy.

    Science.gov (United States)

    Hong, Jingyi; Li, Yanhong; Xiao, Yao; Li, Yijing; Guo, Yifei; Kuang, Haixue; Wang, Xiangtao

    2016-09-01

Annonaceous acetogenins (ACGs) have exhibited antitumor activity against various cancers; however, their poor solubility has limited clinical applications. In this study, hydroxypropyl-beta-cyclodextrin (HP-β-CD) and soybean lecithin (SPC) were self-assembled into an amphiphilic complex. ACGs nanosuspensions (ACGs-NSps) were prepared with a mean particle size of 144.4 nm, a zeta potential of −22.9 mV, and a high drug payload of 46.17% using this complex as a stabilizer. The ACGs-NSps demonstrated sustained release in vitro and good stability in plasma as well as in simulated gastrointestinal fluid, meeting the demands of both intravenous injection and oral administration. In an in vitro cytotoxicity assay, the ACGs-NSps demonstrated significantly increased cytotoxicity against HeLa and HepG2 cancer cell lines compared to ACGs in solution. An in vivo study with H22-tumor-bearing mice demonstrated that the nanosuspensions significantly improved the ACGs' antitumor activity. When orally administered, ACGs-NSps achieved a similar tumor inhibition rate at 1/10th the dose of ACGs in an oil solution (47.94% vs. 49.74%, p>0.05). Improved therapeutic efficacy was further achieved when the ACGs-NSps were intravenously injected into mice (70.31%). With the help of nanosuspension technology, ACGs may become an effective antitumor drug for clinical use. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Sustainable Corporate Social Media Marketing Based on Message Structural Features: Firm Size Plays a Significant Role as a Moderator

    Directory of Open Access Journals (Sweden)

    Moon Young Kang

    2018-04-01

Full Text Available Social media has been receiving attention as a cost-effective tool to build corporate brand image and to enrich customer relationships. This phenomenon calls for more attention to developing a model that measures the impact of the structural features used in corporate social media messages. Based on communication science, this study proposes a model to measure the impact of three essential message structural features (interactivity, formality, and immediacy) in corporate social media on customers' purchase intentions, mediated by brand attitude and corporate trust. In particular, social media platforms are believed to provide a good marketing platform for small and medium enterprises (SMEs) by providing access to huge audiences at very low cost. The findings from this study, based on a structural equation model, suggest that brand attitude and corporate trust have larger impacts on purchase intention for SMEs than for large firms. This implies that SMEs with little to no presence in the market should pay more attention to building corporate trust and brand attitude for their sustainable growth.

  13. Elastic Modulus of Osteoporotic Mouse Femur Based on Femoral Head Compression Test.

    Science.gov (United States)

    Chon, Chang-Soo; Yun, Hui-Suk; Kim, Han Sung; Ko, Cheolwoong

    2017-01-01

A biomechanical test is a good evaluation method for describing structural, functional, and pathological differences in bone, such as osteoporosis and fracture. Tensile, compression, and bending tests are generally performed to evaluate the elastic modulus of bone in mice. In particular, the femoral head compression test is mainly used to assess osteoporotic change of the femoral neck. This study conducted bone mineral density analysis using in vivo micro-computed tomography (micro-CT) to observe changes in osteoporosis over time. It proposes a method of identifying the elastic modulus of the femur in the normal group (CON group) and the osteoporotic group (OVX group) through finite element analysis based on the femoral head compression test, and it presents a comparative analysis of the results. Through the femoral head compression test, it was verified that the CON group's ultimate and yield loads were significantly higher than those of the OVX group. This result was attributed to the fact that osteoporotic bone mineral density change occurred more in the proximal end than in the femoral diaphysis. However, the elastic modulus derived from the finite element analysis showed no significant difference between the two groups.
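The kind of calculation behind an elastic modulus estimate from a compression test can be sketched as follows: stiffness from the linear region of a load-displacement curve, converted to an apparent modulus via E = (F/A) / (d/L). The geometry and loads below are invented for illustration; the study itself derived the modulus via finite element analysis rather than this closed-form shortcut.

```python
force_n = [0.0, 2.0, 4.0, 6.0, 8.0]      # applied load (N), invented
disp_mm = [0.0, 0.01, 0.02, 0.03, 0.04]  # measured displacement (mm), invented
area_mm2 = 1.0    # assumed load-bearing cross-section (mm^2)
length_mm = 5.0   # assumed effective specimen length (mm)

# Stiffness: slope of the (here perfectly linear) load-displacement curve
stiffness = (force_n[-1] - force_n[0]) / (disp_mm[-1] - disp_mm[0])  # N/mm
# Apparent elastic modulus: E = stress/strain = (F/A) / (d/L)
modulus_mpa = stiffness * length_mm / area_mm2  # N/mm^2, i.e. MPa
print(stiffness, modulus_mpa)
```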

  14. Change-based test selection : An empirical evaluation

    NARCIS (Netherlands)

    Soetens, Quinten; Demeyer, Serge; Zaidman, A.E.; Perez, Javier

    2015-01-01

    Regression test selection (i.e., selecting a subset of a given regression test suite) is a problem that has been studied intensely over the last decade. However, with the increasing popularity of developer tests as the driver of the test process, more fine-grained solutions that work well within the

  15. Significantly Increased Odds of Reporting Previous Shoulder Injuries in Female Marines Based on Larger Magnitude Shoulder Rotator Bilateral Strength Differences.

    Science.gov (United States)

    Eagle, Shawn R; Connaboy, Chris; Nindl, Bradley C; Allison, Katelyn F

    2018-02-01

Musculoskeletal injuries to the extremities are a primary concern for the United States (US) military. One possible injury risk factor in this population is side-to-side strength imbalance. To examine the odds of reporting a previous shoulder injury in US Marine Corps Ground Combat Element Integrated Task Force volunteers based on side-to-side differences in isokinetic shoulder strength. Cohort study; Level of evidence, 3. Male (n = 219) and female (n = 91) Marines were included in this analysis. Peak torque values from 5 shoulder internal/external rotation repetitions were averaged and normalized to body weight. The side-to-side strength difference was calculated as the absolute value of the limb difference divided by the mean peak torque of the dominant limb. Participants were placed into groups based on the magnitude of these differences: <10%, 10%-20%, and >20%. Odds ratios (ORs) and 95% CIs were calculated. When separated by sex, 13.2% of men reported an injury, while 5.5% of women reported an injury. Female Marines with >20% internal rotation side-to-side strength differences demonstrated increased odds of reporting a previous shoulder injury compared with those with lesser magnitude differences. Additionally, female sex appears to drastically affect the increased odds of reporting shoulder injuries (OR, 13.9-15.4) with larger magnitude differences (ie, >20%) compared with those with lesser magnitude differences (ie, <10% and 10%-20%). The retrospective cohort design of this study cannot delineate cause and effect, but it establishes a relationship between female Marines and greater odds of larger magnitude strength differences after returning from an injury.
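The two quantities described above can be sketched with invented numbers: (1) the normalized side-to-side strength difference, and (2) an odds ratio with a Woolf (log-method) 95% confidence interval from a 2x2 table of group membership versus reported injury. The counts are hypothetical, not the study's data.

```python
import math

def strength_difference(dominant, nondominant):
    """Absolute limb difference divided by dominant-limb peak torque."""
    return abs(dominant - nondominant) / dominant

# Hypothetical 2x2 table: rows = >20% difference vs lesser difference,
# columns = previous injury reported vs not reported.
a, b, c, d = 8, 12, 5, 66
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR), Woolf method
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI ({ci_low:.1f}, {ci_high:.1f})")
```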

  16. Cationic lipid-based nanoparticles mediate functional delivery of acetate to tumor cells in vivo leading to significant anticancer effects

    Directory of Open Access Journals (Sweden)

    Brody LP

    2017-09-01

    Full Text Available Leigh P Brody,1,* Meliz Sahuri-Arisoylu,1,* James R Parkinson,1 Harry G Parkes,2 Po Wah So,3 Nabil Hajji,4 E Louise Thomas,1 Gary S Frost,5 Andrew D Miller,6,* Jimmy D Bell1,* 1Department of Life Sciences, Faculty of Science and Technology, University of Westminster, 2CR-UK Clinical MR Research Group, Institute of Cancer Research, Sutton, Surrey, 3Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, 4Department of Medicine, Division of Experimental Medicine, Centre for Pharmacology & Therapeutics, Toxicology Unit, Imperial College London, 5Faculty of Medicine, Nutrition and Dietetic Research Group, Division of Diabetes, Endocrinology and Metabolism, Department of Investigative Medicine, Imperial College London, Hammersmith Hospital, 6Institute of Pharmaceutical Science, King’s College London, London, UK *These authors contributed equally to this work Abstract: Metabolic reengineering using nanoparticle delivery represents an innovative therapeutic approach to normalizing the deregulation of cellular metabolism underlying many diseases, including cancer. Here, we demonstrated a unique and novel application to the treatment of malignancy using a short-chain fatty acid (SCFA-encapsulated lipid-based delivery system – liposome-encapsulated acetate nanoparticles for cancer applications (LITA-CAN. We assessed chronic in vivo administration of our nanoparticle in three separate murine models of colorectal cancer. We demonstrated a substantial reduction in tumor growth in the xenograft model of colorectal cancer cell lines HT-29, HCT-116 p53+/+ and HCT-116 p53-/-. Nanoparticle-induced reductions in histone deacetylase gene expression indicated a potential mechanism for these anti-proliferative effects. Together, these results indicated that LITA-CAN could be used as an effective direct or adjunct therapy to treat malignant transformation in vivo. Keywords: lipid-based nanoparticles, liposomes

  17. Development and Testing of a Sorbent-Based Atmosphere Revitalization System 2010/2011

    Science.gov (United States)

    Miller, Lee A.; Knox, James C.

    2012-01-01

Spacecraft being developed for future exploration missions incorporate Environmental Control and Life Support Systems (ECLSS) that limit weight, power, and volume, thus requiring systems with higher levels of efficiency while maintaining high dependability and robustness. For air revitalization, an approach that meets those goals utilizes a regenerative Vacuum-Swing Adsorption (VSA) system that removes 100% of the CO2 from the cabin atmosphere as well as 100% of the water. A Sorbent-Based Atmosphere Revitalization (SBAR) system is a VSA system that utilizes standard commercial adsorbents that have been proven effective and safe in spacecraft including Skylab and the International Space Station. The SBAR system is the subject of a development, test, and evaluation program being conducted at NASA's Marshall Space Flight Center. While previous testing had validated that the technology is a viable option, potential improvements to system design and operation were identified. Modifications of the full-scale SBAR test articles and adsorption cycles have been implemented and have shown significant performance gains, resulting in a decrease in the consumables required for a mission as well as improved mission safety. Previous testing had utilized single-bed test articles; during this period the test facility was enhanced to allow testing of the full 2-bed SBAR system. The test facility simulates a spacecraft ECLSS and allows testing of the SBAR system over the full range of operational conditions using mission simulations that assess the real-time performance of the SBAR system during scenarios that include the metabolic transients associated with extravehicular activity.
Although future manned missions are currently being redefined, the atmosphere revitalization requirements for the spacecraft are expected to be quite similar to the Orion and the Altair vehicles and the SBAR test program addressed validation to the defined mission requirements as well as operation

  18. SU-E-T-580: On the Significance of Model Based Dosimetry for Breast and Head and Neck 192Ir HDR Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Peppa, V; Pappas, E; Pantelis, E; Papagiannis, P [Medical Physics Laboratory, Medical School, University of Athens, Athens (Greece); Major, T; Polgar, C [National Institute of Oncology, Budapest (Hungary)

    2015-06-15

    Purpose: To assess the dosimetric and radiobiological differences between TG43-based and model-based dosimetry in the treatment planning of {sup 192}Ir HDR brachytherapy for breast and head and neck cancer. Methods: Two cohorts of 57 Accelerated Partial Breast Irradiation (APBI) and 22 head and neck (H&N) patients with oral cavity carcinoma were studied. Dosimetry for the treatment plans was performed using the TG43 algorithm of the Oncentra Brachy v4.4 treatment planning system (TPS). Corresponding Monte Carlo (MC) simulations were performed using MCNP6, with input files automatically prepared by the BrachyGuide software tool from DICOM RT plan data. TG43 and MC data were compared in terms of % dose differences, Dose Volume Histograms (DVHs) and related indices of clinical interest for the Planning Target Volume (PTV) and the Organs-At-Risk (OARs). A radiobiological analysis was also performed using the Equivalent Uniform Dose (EUD), mean survival fraction (S) and Tumor Control Probability (TCP) for the PTV, and the Normal Tissue Complication Probability (NTCP) and the generalized EUD (gEUD) for the OARs. Significance testing of the observed differences was performed using the Wilcoxon paired-sample test. Results: Differences between TG43 and MC DVH indices, associated with the increased corresponding local % dose differences observed, were statistically significant. This is mainly attributed to their consistency, however, since TG43 agrees closely with MC for the majority of DVH and radiobiological parameters in both patient cohorts. Differences varied considerably among patients only for the ipsilateral lung and ribs in the APBI cohort, with a strong correlation to target location. Conclusion: While the consistency and magnitude of differences in the majority of clinically relevant DVH indices imply that no change is needed in the treatment planning practice, individualized dosimetry improves accuracy and addresses instances of the inter-patient variability observed. 
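The significance testing described above uses the Wilcoxon paired-sample (signed-rank) test. A minimal stdlib sketch of the statistic it computes, assuming average ranks for tied magnitudes and zero differences dropped (the study itself would have used standard statistical software, not this code):

```python
def wilcoxon_signed_rank(x, y):
    """Wilcoxon paired-sample (signed-rank) statistic W: rank the nonzero
    |x_i - y_i|, then W = min(sum of ranks of positive differences,
    sum of ranks of negative differences)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    ordered = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ordered):
        j = i
        while (j + 1 < len(ordered)
               and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]])):
            j += 1
        avg = (i + j) / 2 + 1          # average rank for tied magnitudes
        for idx in ordered[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Toy paired data (illustrative only): W == 5.0 here.
W = wilcoxon_signed_rank([1, 2, 3, 4, 5], [2, 1, 5, 2, 8])
```

The p-value is then read from the exact null distribution of W (or a normal approximation for larger samples).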

  19. SU-E-T-580: On the Significance of Model Based Dosimetry for Breast and Head and Neck 192Ir HDR Brachytherapy

    International Nuclear Information System (INIS)

    Peppa, V; Pappas, E; Pantelis, E; Papagiannis, P; Major, T; Polgar, C

    2015-01-01

    Purpose: To assess the dosimetric and radiobiological differences between TG43-based and model-based dosimetry in the treatment planning of 192Ir HDR brachytherapy for breast and head and neck cancer. Methods: Two cohorts of 57 Accelerated Partial Breast Irradiation (APBI) and 22 head and neck (H&N) patients with oral cavity carcinoma were studied. Dosimetry for the treatment plans was performed using the TG43 algorithm of the Oncentra Brachy v4.4 treatment planning system (TPS). Corresponding Monte Carlo (MC) simulations were performed using MCNP6, with input files automatically prepared by the BrachyGuide software tool from DICOM RT plan data. TG43 and MC data were compared in terms of % dose differences, Dose Volume Histograms (DVHs) and related indices of clinical interest for the Planning Target Volume (PTV) and the Organs-At-Risk (OARs). A radiobiological analysis was also performed using the Equivalent Uniform Dose (EUD), mean survival fraction (S) and Tumor Control Probability (TCP) for the PTV, and the Normal Tissue Complication Probability (NTCP) and the generalized EUD (gEUD) for the OARs. Significance testing of the observed differences was performed using the Wilcoxon paired-sample test. Results: Differences between TG43 and MC DVH indices, associated with the increased corresponding local % dose differences observed, were statistically significant. This is mainly attributed to their consistency, however, since TG43 agrees closely with MC for the majority of DVH and radiobiological parameters in both patient cohorts. Differences varied considerably among patients only for the ipsilateral lung and ribs in the APBI cohort, with a strong correlation to target location. Conclusion: While the consistency and magnitude of differences in the majority of clinically relevant DVH indices imply that no change is needed in the treatment planning practice, individualized dosimetry improves accuracy and addresses instances of the inter-patient variability observed. 

  20. Quantification of Plasmodiophora brassicae Using a DNA-Based Soil Test Facilitates Sustainable Oilseed Rape Production

    OpenAIRE

    Ann-Charlotte Wallenhammar; Albin Gunnarson; Fredrik Hansson; Anders Jonsson

    2016-01-01

    Outbreaks of clubroot disease caused by the soil-borne obligate parasite Plasmodiophora brassicae are common in oilseed rape (OSR) in Sweden. A DNA-based soil testing service that identifies fields where P. brassicae poses a significant risk of clubroot infection is now commercially available. It was applied here in field surveys to monitor the prevalence of P. brassicae DNA in field soils intended for winter OSR production and winter OSR field experiments. In 2013 in Scania, prior to plantin...

  1. Effects of an employer-based intervention on employment outcomes for youth with significant support needs due to autism.

    Science.gov (United States)

    Wehman, Paul; Schall, Carol M; McDonough, Jennifer; Graham, Carolyn; Brooke, Valerie; Riehle, J Erin; Brooke, Alissa; Ham, Whitney; Lau, Stephanie; Allen, Jaclyn; Avellone, Lauren

    2017-04-01

    The purpose of this study was to develop and investigate an employer-based 9-month intervention for high school youth with autism spectrum disorder to learn job skills and acquire employment. The intervention modified a program titled Project SEARCH and incorporated the use of applied behavior analysis to develop Project SEARCH plus Autism Spectrum Disorder Supports. A randomized clinical trial compared the implementation of Project SEARCH plus Autism Spectrum Disorder Supports with high school special education services as usual. Participants were 49 high-school-aged individuals between the ages of 18 and 21 years diagnosed with an autism spectrum disorder and eligible for supported employment. Students also had to demonstrate independent self-care. At 3 months post-graduation, 90% of the treatment group acquired competitive, part-time employment earning US$9.53-US$10.66 per hour. Furthermore, 87% of those individuals maintained employment at 12 months post-graduation. The control group's employment outcomes were 6% acquiring employment by 3 months post-graduation and 12% acquiring employment by 12 months post-graduation. The positive employment outcomes generated by the treatment group provide evidence that youth with autism spectrum disorder can gain and maintain competitive employment. Additionally, there is evidence that they are able to advance within that time toward more weekly hours worked, while they also displayed increasing independence in the work setting.

  2. Comparison of the clinical significance of single cuff-based arterial stiffness parameters with that of the commonly used parameters.

    Science.gov (United States)

    Komatsu, Shunsuke; Tomiyama, Hirofumi; Kimura, Kazutaka; Matsumoto, Chisa; Shiina, Kazuki; Yamashina, Akira

    2017-04-01

    We examined the following: (1) whether the new simple markers related to arterial stiffness/central hemodynamics [i.e., the arterial pressure-volume index (API) and arterial velocity pulse index (AVI)] are clinically interchangeable with the commonly used markers [brachial-ankle pulse wave velocity (baPWV) and radial augmentation index (rAI)]; (2) whether the new simple markers reflect vascular damage as reliably as the commonly used markers; (3) which cardiovascular risk factors are reflected by these new simple markers. API, AVI, baPWV, and rAI were measured simultaneously in consecutive patients admitted for the management of cardiovascular disease and/or cardiovascular risk factors (n=322). The API was correlated with the baPWV (R=0.492). API, AVI, baPWV, and rAI were higher in the patients admitted for coronary angiography (CAG group: n=152) than in those admitted for reasons other than coronary angiography (nonCAG group: n=170). After adjustments for confounding factors, only the AVI was found to be higher in the CAG group than in the nonCAG group. Multivariate linear regression analysis revealed that age and the systolic blood pressure were independently associated with the API and AVI after adjustments. In patients with cardiovascular diseases or cardiovascular risk factors, the new simple markers and the commonly used markers are not interchangeable for assessing vascular damage and/or cardiovascular risk. Further study is proposed to examine whether the AVI is higher in subjects with cardiovascular disease than in those without a history of cardiovascular disease. As with the commonly used markers, age and blood pressure significantly influenced both new markers; therefore, age and blood pressure need to be taken into account when interpreting changes in these new simple markers. Copyright © 2016 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  3. Vaccination with a Paramyosin-Based Multi-Epitope Vaccine Elicits Significant Protective Immunity against Trichinella spiralis Infection in Mice

    Directory of Open Access Journals (Sweden)

    Yuan Gu

    2017-08-01

    Full Text Available Trichinellosis is a worldwide zoonosis and remains a serious public health problem. Interrupting parasite transmission via vaccination of livestock with a potent vaccine is a practical approach to preventing human trichinellosis. Our previous studies identified the paramyosin of Trichinella spiralis (Ts-Pmy) as a good vaccine candidate against trichinellosis. In this study, a novel multi-epitope vaccine (MEP) was constructed by using four CD4+ T cell epitopes (P2, P3, P4, and P5) and one B cell epitope (YX1) from Ts-Pmy and expressed as a soluble recombinant protein (rMEP) in Escherichia coli. Mice immunized with the rMEP vaccine produced a significantly higher muscle larval reduction (55.4%) against T. spiralis infection than that induced by immunization with the parental rTs-Pmy (34.4%). The better protection was associated with rMEP-induced high levels of anti-rMEP specific IgG and the subclasses IgG1/IgG2a, elevated T cell proliferation of splenocytes, and secretion of IFN-γ, IL-4 and IL-5. The cellular response to individual T cell epitopes also showed that splenocytes from mice immunized with rMEP responded strongly to stimulation with the synthetic epitope peptides P2, P3, and P4, but not to P5, suggesting that most of the T cell epitopes are exposed and processed well during immunization, which may contribute to the high protection induced by immunization with rMEP. This study implies that epitope vaccines are a promising approach for the development of vaccines against trichinellosis.

  4. Theme Enrichment Analysis: A Statistical Test for Identifying Significantly Enriched Themes in a List of Stories with an Application to the Star Trek Television Franchise

    OpenAIRE

    Onsjö, Mikael; Sheridan, Paul

    2017-01-01

    In this paper, we describe how the hypergeometric test can be used to determine whether a given theme of interest occurs in a storyset at a frequency more than would be expected by chance. By a storyset we mean simply a list of stories defined according to a common attribute (e.g. author, movement, period). The test works roughly as follows: Given a background storyset (e.g. 19th century adventure novels), and a sub-storyset of interest (e.g. Jules Verne novels), the test determines whether a...
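The hypergeometric test described in the abstract can be sketched directly with the stdlib; the storyset counts below are illustrative only, not figures from the paper:

```python
from math import comb

def theme_enrichment_pvalue(N, K, n, k):
    """One-sided hypergeometric p-value P(X >= k), where X counts
    theme-bearing stories in a sub-storyset of size n drawn without
    replacement from a background of N stories, K of which bear the theme."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Illustrative numbers: a theme present in 40 of 500 background stories
# that appears in 8 of the 20 stories of a sub-storyset of interest.
p = theme_enrichment_pvalue(500, 40, 20, 8)
```

A small p indicates the theme occurs in the sub-storyset more often than chance sampling from the background would predict.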

  5. Investigation of the chromosome regions with significant affinity for the nuclear envelope in fruit fly--a model based approach.

    Directory of Open Access Journals (Sweden)

    Nicholas Allen Kinney

    Full Text Available Three dimensional nuclear architecture is important for genome function, but is still poorly understood. In particular, little is known about the role of the "boundary conditions"--points of attachment between chromosomes and the nuclear envelope. We describe a method for modeling the 3D organization of the interphase nucleus, and its application to analysis of chromosome-nuclear envelope (Chr-NE) attachments of polytene (giant) chromosomes in Drosophila melanogaster salivary glands. The model represents chromosomes as self-avoiding polymer chains confined within the nucleus; parameters of the model are taken directly from experiment, and no fitting parameters are introduced. Methods are developed to objectively quantify chromosome territories and intertwining, which are discussed in the context of corresponding experimental observations. In particular, a mathematically rigorous definition of a territory based on the convex hull is proposed. The self-avoiding polymer model is used to re-analyze previous experimental data; the analysis suggests 33 new Chr-NE attachments in addition to the 15 already explored. Most of these new Chr-NE attachments correspond to intercalary heterochromatin--gene poor, dark staining, late replicating regions of the genome; however, three correspond to euchromatin--gene rich, light staining, early replicating regions of the genome. The analysis also suggests 5 regions of anti-contact, characterized by aversion for the NE; only two of these correspond to euchromatin. This composition of chromatin suggests that heterochromatin may not be necessary or sufficient for the formation of a Chr-NE attachment. To the extent that the proposed model represents reality, the confinement of the polytene chromosomes in a spherical nucleus alone does not favor the positioning of specific chromosome regions at the NE as seen in experiment; consequently, the 15 experimentally known Chr-NE attachment positions do not

  6. The Significance and Impact of Innovation Networks of Academia and Business with a Special Emphasis on Work-Based Learning

    Directory of Open Access Journals (Sweden)

    Hogeforster Max A.

    2014-10-01

    countries. The introduction of work-based education plays a crucial role in narrowing this divide.

  7. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    Science.gov (United States)

    2012-10-01

    A handout with tables representing the material requirements, test methods, responsibilities, and minimum classification levels for the mixture-based specification for flexible base and details on aggregate and test methods employed, along with agency and co...

  8. A significantly lower potency observed for the 3rd WHO International Standard for Parvovirus B19V DNA with the cobas TaqScreen DPX test.

    Science.gov (United States)

    Pisani, G; Cristiano, K; Fabi, S; Simeoni, M; Marino, F; Gaggioli, A

    2016-08-01

    In the context of the Official Medicines Control Laboratories plasma pool testing for Parvovirus B19 DNA, we use the cobas TaqScreen DPX test. When we re-evaluated this method using the 3rd B19V DNA WHO IS at the final concentration of 4 log IU/mL, we observed a titre lower than expected, i.e. 3.79 log IU/mL. Therefore, we further investigated the accuracy of the DPX test. The following B19V DNA materials were tested using both the DPX test and an in-house real-time PCR: the 1st, 2nd and 3rd WHO ISs for B19V DNA; the Non-WHO B19V DNA Reference Material for NAT; and the Biological Reference Preparation B19 virus DNA for NAT testing, batch 1. The DPX test showed good accuracy for all B19V DNA materials with the exception of the 3rd WHO IS for B19V DNA. In fact, an underestimation of about 38% was observed for all dilutions of this standard with respect to the nominal titre. With the B19V in-house real-time PCR, all four materials proved to be well calibrated against the 1st WHO IS for B19V DNA, used as the external standard curve. In this study, we demonstrated that the DPX test underestimates the B19V DNA content of the 3rd WHO IS for B19V DNA and that this is not due to an incorrect potency assigned to the standard but, most probably, to a mismatch between the primers/probe and the sequence of the target region in the 3rd WHO IS for B19V DNA. © 2016 International Society of Blood Transfusion.

  9. CT-image based conformal brachytherapy of breast cancer. The significance of semi-3-D and 3-D treatment planning

    Energy Technology Data Exchange (ETDEWEB)

    Polgar, C.; Major, T.; Somogyi, A.; Takacsi-Nagy, Z.; Mangel, L.C.; Fodor, J.; Nemeth, G. [Orszagos Onkologiai Intezet, Budapest (Hungary). Dept. of Radiotherapy; Forrai, G. [Haynal Imre Univ. of Health Sciences, Budapest (Hungary). Dept. of Radiology; Sulyok, Z. [Orszagos Onkologiai Intezet, Budapest (Hungary). Dept. of Surgery

    2000-03-01

    In 103 patients with T1-2, N0-1 breast cancer the tumor bed was clipped during breast conserving surgery. Fifty-two of them received boost brachytherapy after 46 to 50 Gy teletherapy and 51 patients were treated with brachytherapy alone via flexible implant tubes. Single, double and triple plane implants were used in 6, 89 and 8 cases, respectively. The dose of boost brachytherapy and sole brachytherapy prescribed to dose reference points was 3 times 4.75 Gy and 7 times 5.2 Gy, respectively. The positions of dose reference points varied according to the level (2-D, semi-3-D and 3-D) of treatment planning performed. The treatment planning was based on the 3-D reconstruction of the surgical clips, implant tubes and skin points. In all cases the implantations were planned with a semi-3-D technique aided by a simulator. In 10 cases a recently developed CT-guided 3-D planning system was used. The semi-3-D and 3-D treatment plans were compared to hypothetical 2-D plans using dose-volume histograms and dose non-uniformity ratios. The values of mean central dose, mean skin dose, minimal clip dose, proportion of underdosed clips and mean target surface dose were evaluated. The accuracy of tumor bed localization and the conformity of planning target volume and treated volume were also analyzed for each technique. Results: With the help of conformal semi-3-D and 3-D brachytherapy planning we could define reference dose points, active source positions and dwell times individually. This technique decreased the mean skin dose by 22.2% and reduced the possibility of geographical miss. We could achieve the best conformity between the planning target volume and the treated volume with the CT-image based 3-D treatment planning, at the cost of worse dose homogeneity. The mean treated volume was reduced by 25.1% with semi-3-D planning; however, it was increased by 16.2% with 3-D planning, compared to the 2-D planning. (orig.) 

  10. Cloud-Based Electronic Test Procedures, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Procedures are critical to experimental tests as they describe the specific steps necessary to efficiently and safely carry out a test in a repeatable fashion. The...

  11. Tree-Based Global Model Tests for Polytomous Rasch Models

    Science.gov (United States)

    Komboz, Basil; Strobl, Carolin; Zeileis, Achim

    2018-01-01

    Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…

  12. Simulated time for host-based testing with TTCN-3

    NARCIS (Netherlands)

    Blom, Stefan; Deiß, Thomas; Ioustinova, Natalia; Kontio, Ari; van de Pol, Jan Cornelis; Rennoch, Axel; Sidorova, Natalia

    2008-01-01

    Prior to testing embedded software in a target environment, it is usually tested in a host environment used for developing the software. When a system is tested in a host environment, its real-time behaviour is affected by the use of simulators, emulation and monitoring. In this paper, the authors

  13. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai

    2015-09-16

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T2 test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
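The diagonal variant of Hotelling's T2 drops the off-diagonal covariances, which is what keeps it defined when p exceeds n. A minimal one-sample sketch, assuming a simple fixed-weight shrinkage of the per-feature variances toward their average (the paper estimates the shrinkage from the data and derives approximate null distributions, which this sketch does not attempt):

```python
from statistics import fmean, variance

def diag_hotelling_shrink(X, mu, lam=0.5):
    """One-sample diagonal Hotelling-type statistic with shrunken
    variances. X is a list of n samples, each a length-p feature list;
    mu is the hypothesized mean vector; lam is an assumed fixed
    shrinkage weight toward the average variance."""
    n = len(X)
    cols = list(zip(*X))
    means = [fmean(c) for c in cols]
    s2 = [variance(c) for c in cols]     # per-feature sample variances
    target = fmean(s2)                   # common shrinkage target
    s2_shrunk = [lam * target + (1 - lam) * v for v in s2]
    # Diagonal T^2: no covariance matrix to invert, so it stays defined
    # even when the feature count p far exceeds the sample size n.
    return n * sum((m - m0) ** 2 / v
                   for m, m0, v in zip(means, mu, s2_shrunk))
```

With equal per-feature variances the shrinkage is a no-op, which makes the statistic easy to check by hand on toy data.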

  14. Surface Charges and Shell Crosslinks Each Play Significant Roles in Mediating Degradation, Biofouling, Cytotoxicity and Immunotoxicity for Polyphosphoester-based Nanoparticles

    Science.gov (United States)

    Elsabahy, Mahmoud; Zhang, Shiyi; Zhang, Fuwu; Deng, Zhou J.; Lim, Young H.; Wang, Hai; Parsamian, Perouza; Hammond, Paula T.; Wooley, Karen L.

    2013-11-01

    The construction of nanostructures from biodegradable precursors and shell/core crosslinking have been pursued as strategies to solve the problems of toxicity and limited stability, respectively. Polyphosphoester (PPE)-based micelles and crosslinked nanoparticles with non-ionic, anionic, cationic, and zwitterionic surface characteristics for potential packaging and delivery of therapeutic and diagnostic agents, were constructed using a quick and efficient synthetic strategy, and importantly, demonstrated remarkable differences in terms of cytotoxicity, immunotoxicity, and biofouling properties, as a function of their surface characteristics and also with dependence on crosslinking throughout the shell layers. For instance, crosslinking of zwitterionic micelles significantly reduced the immunotoxicity, as evidenced from the absence of secretions of any of the 23 measured cytokines from RAW 264.7 mouse macrophages treated with the nanoparticles. The micelles and their crosslinked analogs demonstrated lower cytotoxicity than several commercially-available vehicles, and their degradation products were not cytotoxic to cells at the range of the tested concentrations. PPE-nanoparticles are expected to have broad implications in clinical nanomedicine as alternative vehicles to those involved in several of the currently available medications.

  15. Automatic Test Pattern Generator for Fuzzing Based on Finite State Machine

    Directory of Open Access Journals (Sweden)

    Ming-Hung Wang

    2017-01-01

    Full Text Available With the rapid development of the Internet, several emerging technologies are adopted to construct fancy, interactive, and user-friendly websites. Among these technologies, HTML5 is a popular one and is widely used in establishing modern sites. However, these new web technologies also raise security issues that are worthy of investigation. For vulnerability investigation, many previous studies used fuzzing and focused on generation-based approaches to produce test cases; however, these methods require a significant amount of knowledge and mental effort to develop test patterns for generating test cases. To lower the entry barrier of conducting fuzzing, in this study, we propose a test pattern generation algorithm based on the concept of finite state machines. We apply graph analysis techniques to extract paths from finite state machines and use these paths to construct test patterns automatically. With our proposal, fuzzing can be completed by inputting a regular expression corresponding to the test target. To evaluate the performance of our proposal, we conduct an experiment on identifying vulnerabilities of the input attributes in HTML5. According to the results, our approach is not only efficient but also effective at identifying weak validators in HTML5.
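The path-extraction idea can be sketched as a depth-bounded walk over a finite state machine, emitting one test pattern per accepting path. The FSM below is a hypothetical toy (the paper derives its machines from regular expressions describing the test target), so the states and labels are illustrative only:

```python
def extract_patterns(transitions, start, accept, max_len):
    """Enumerate label sequences along paths (up to max_len labels)
    from the start state to an accepting state.

    transitions: {state: [(label, next_state), ...]}
    """
    patterns = []

    def walk(state, acc):
        if state in accept and acc:
            patterns.append("".join(acc))
        if len(acc) >= max_len:
            return
        for label, nxt in transitions.get(state, []):
            walk(nxt, acc + [label])

    walk(start, [])
    return patterns

# Toy FSM for the pattern a(b|c)d: two accepting paths, two patterns.
fsm = {0: [("a", 1)], 1: [("b", 2), ("c", 2)], 2: [("d", 3)]}
pats = extract_patterns(fsm, 0, {3}, 3)   # pats == ["abd", "acd"]
```

Each extracted pattern then seeds the fuzzer's test-case generation; cyclic machines need the depth bound to keep the enumeration finite.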

  16. Testing digital safety system software with a testability measure based on a software fault tree

    International Nuclear Information System (INIS)

    Sohn, Se Do; Hyun Seong, Poong

    2006-01-01

    Using predeveloped software, a digital safety system is designed that meets the quality standards of a safety system. To demonstrate the quality, the design process and operating history of the product are reviewed along with configuration management practices. The application software of the safety system is developed in accordance with the planned life cycle. Testing, which is a major phase that takes significant time in the overall life cycle, can be optimized if the testability of the software can be evaluated. The proposed testability measure of the software is based on the entropy of the importance of basic statements and the failure probability from a software fault tree. To calculate testability, a fault tree is used in the analysis of the source code. With a quantitative measure of testability, testing can be optimized. The proposed testability measure can also be used to demonstrate whether test cases based on uniform partitions, such as branch coverage criteria, result in homogeneous partitions, which are known to be more effective than random testing. In this paper, the testability measure is calculated for the modules of a nuclear power plant's safety software. Module testing with branch coverage criteria required fewer test cases if the module had higher testability. The result shows that the testability measure can be used to evaluate whether partitions have homogeneous characteristics.
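The entropy component of the measure can be illustrated with a short sketch. This is a hypothetical simplification: the importance weights here are arbitrary inputs, whereas the paper derives statement importances and failure probabilities from a software fault tree:

```python
from math import log2

def importance_entropy(importances):
    """Shannon entropy of normalized statement importances. When
    importance is spread evenly over statements the entropy is maximal
    (log2 of the statement count), and uniform partitions such as
    branch-coverage classes are closer to homogeneous."""
    total = sum(importances)
    probs = [w / total for w in importances if w > 0]
    return -sum(p * log2(p) for p in probs)
```

For four equally important statements the entropy is log2(4) = 2 bits; if a single statement carries all the importance, the entropy drops to zero.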

  17. DIRECTED–PROJECT BASED LEARNING AS LANGUAGE LEARNING MODEL: DESIGNING, DEVELOPING AND FIELD TESTING

    Directory of Open Access Journals (Sweden)

    Tri Pratiwi

    2018-02-01

    Full Text Available The aim of the present study is to develop an English learning model to increase students' language skills in the English subject for grade VIII students of SMP N 1 Uram Jaya through Directed-Project Based Learning (DPjBL) implementation. This study is designed as Research and Development (R&D) using the ADDIE model of development. The researcher collected the data through tests, questionnaires, observation, and interviews, which were then analyzed qualitatively and quantitatively. The study revealed that Directed-Project Based Learning (DPjBL) implementation is a learning model that can significantly increase students' language skills.

  18. A Computational Tool for Testing Dose-related Trend Using an Age-adjusted Bootstrap-based Poly-k Test

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2006-08-01

    Full Text Available A computational tool for testing for a dose-related trend and/or a pairwise difference in the incidence of an occult tumor via an age-adjusted bootstrap-based poly-k test and the original poly-k test is presented in this paper. The poly-k test (Bailer and Portier, 1988) is a survival-adjusted Cochran-Armitage test, which achieves robustness to the effects of differential mortality across dose groups. The original poly-k test is asymptotically standard normal under the null hypothesis. However, the asymptotic normality is not valid if there is a deviation from the tumor onset distribution that is assumed in this test. Our age-adjusted bootstrap-based poly-k test assesses the significance of the assumed asymptotically normal test and investigates the empirical distribution of the original poly-k test statistic using an age-adjusted bootstrap method. A tumor of interest is an occult tumor, for which the time to onset is not directly observable. Since most animal carcinogenicity studies are designed with a single terminal sacrifice, the present tool is applicable to rodent tumorigenicity assays that have a single terminal sacrifice. The tool takes input information simply from a user screen and reports testing results back to the screen through a user interface. The computational tool is implemented in C/C++ and is applied to analyze a real data set as an example. Our tool enables the FDA and the pharmaceutical industry to implement a statistical analysis of tumorigenicity data from animal bioassays via our age-adjusted bootstrap-based poly-k test and the original poly-k test, which has been adopted by the National Toxicology Program as its standard statistical test.
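The survival adjustment at the heart of the poly-k test is a simple weighting. A minimal sketch in the spirit of Bailer and Portier (1988), with a hypothetical group of three animals; the tool's C/C++ implementation and the trend statistic itself are not reproduced here:

```python
def poly_k_weights(times, tumors, t_max, k=3):
    """Poly-k survival adjustment: an animal dying tumor-free at time t
    contributes the fractional weight (t / t_max)**k, while tumor-bearing
    animals and terminal-sacrifice survivors contribute a full unit."""
    return [1.0 if tumor or t >= t_max else (t / t_max) ** k
            for t, tumor in zip(times, tumors)]

def poly_k_adjusted_rate(times, tumors, t_max, k=3):
    """Survival-adjusted tumor incidence: tumor count divided by the
    effective (weighted) number of animals at risk."""
    return sum(tumors) / sum(poly_k_weights(times, tumors, t_max, k))

# Hypothetical group: an early tumor-free death at week 50, a
# terminal-sacrifice survivor, and a tumor-bearing animal.
w = poly_k_weights([50, 104, 80], [False, False, True], t_max=104)
```

The adjusted group sizes then feed a Cochran-Armitage-style trend statistic across dose groups, which the age-adjusted bootstrap re-evaluates without relying on asymptotic normality.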

  19. Comparison of cytotoxicity test models for evaluating resin-based composites.

    Science.gov (United States)

    Lim, S M; Yap, Auj; Loo, Csl; Ng, J; Goh, C Y; Hong, Chl; Toh, W S

    2017-04-01

    This study compared different cytotoxicity test models for evaluating resin-based composites (RBCs) and assessed the biocompatibility of standard and bulk-fill RBCs. A standard (Spectrum TPH) and a bulk-fill (smart dentin replacement (SDR)) RBC were selected. Disc-shaped specimens (7 mm diameter) of 2 and 4 mm thickness were polymerized for 20 s with a LED curing light of 700 mW/cm2 irradiance. The specimens (n = 5) were subjected to micro-hardness testing and three cytotoxicity test models (direct contact, indirect contact and extract tests) with the established L-929 cell line. Hardness ratios of the top and bottom surfaces of specimens were computed to assess the effectiveness of cure. For the direct and indirect contact tests, the cells were stained and zones of inhibition were analyzed after material contact for 24 h. For the extract test, cells were exposed to extracts for 24 h, and cell viability was measured. Data were analyzed using analysis of variance/Scheffe's post hoc test and Pearson's correlation. Signs of cytotoxicity were observed for TPH at 4 mm. At 4-mm thickness, SDR was found to be biocompatible with all three models. Correlations between hardness ratio and cell viability ranged from r = 0.89-0.96 for the various tests. A significant correlation (r = 0.97) was also observed between the three test models. Our data indicated consistency between the direct contact, indirect contact and extract test models for cytotoxicity testing of RBCs. Bulk placement and curing at 4 mm for the bulk-fill RBC evaluated did not result in undue cytotoxicity.

  20. [Validation of an observer-based rating set compared to a standardized written psychological test for the diagnosis of depression and anxiety in a university preadmission test center].

    Science.gov (United States)

    Schulz-Stübner, S; de Bruin, J; Neuser, J; Rossaint, R

    2001-06-01

    Depression and anxiety can be a major factor of perioperative stress and might contribute to patients' dissatisfaction with medical care if they remain unrecognized. There are several methods to diagnose depression and anxiety, such as standardized written psychological tests or self-report scales. Because these tests are not always suitable for routine use in a busy preadmission test center, we evaluated an observer-based rating set for the diagnosis of depression and anxiety. Seventy patients of a university hospital preadmission test center were tested with the HADS-D test and the observer-based rating set after approval of the institutional review board and written informed consent. Test data were compared using a logistic regression model, and demographic variables were analyzed using the t-test, ANOVA and Pearson correlation. The prevalence of depression in our study population was 11.11% (14.75% in males, 9.76% in females) and the prevalence of anxiety was 7.14% (6.9% in males and 7.32% in females). The correlation between the observer-based rating items and the HADS-D diagnosis was statistically highly significant. The observer-based items "unsteady eye movements" and "general worrisome mood" proved to be especially sensitive for anxiety, and the items "sorrowful mood" and "impression of resignation" were sensitive for depression, without any influence of the experience of the anesthesiologist. A higher prevalence of depression and anxiety was found in patients with ASA class III compared to those with ASA classes I and II, while age and type of surgery had no significant influence. Based on our observations, depression and anxiety are a relevant factor in preoperative morbidity assessment. Observer-based items are a reliable tool to detect those patients who might need special assistance and therapy in the perioperative period to reduce stress associated with high preexisting levels of depression and anxiety.

  1. Exploring Differential Effects across Two Decoding Treatments on Item-Level Transfer in Children with Significant Word Reading Difficulties: A New Approach for Testing Intervention Elements

    Science.gov (United States)

    Steacy, Laura M.; Elleman, Amy M.; Lovett, Maureen W.; Compton, Donald L.

    2016-01-01

    In English, gains in decoding skill do not map directly onto increases in word reading. However, beyond the Self-Teaching Hypothesis, little is known about the transfer of decoding skills to word reading. In this study, we offer a new approach to testing specific decoding elements on transfer to word reading. To illustrate, we modeled word-reading…

  2. New Statistical Randomness Tests Based on Length of Runs

    Directory of Open Access Journals (Sweden)

    Ali Doğanaksoy

    2015-01-01

Full Text Available Random sequences and random numbers constitute a necessary part of cryptography. Many cryptographic protocols depend on random values. Randomness is measured by statistical tests, and hence the security evaluation of a cryptographic algorithm depends deeply on statistical randomness tests. In this work we focus on the statistical distributions of runs of lengths one, two, and three. Using these distributions we state three new statistical randomness tests. The new tests use the χ² distribution and, therefore, exact values of the probabilities are needed. The probabilities associated with runs of lengths one, two, and three are stated, and the corresponding probabilities are divided into five subintervals of equal probability. Accordingly, three new statistical tests are defined, and pseudocodes for these new statistical tests are given. The new statistical tests are designed to detect deviations in the number of runs of various lengths from those expected of a random sequence. Together with some other statistical tests, we analyse our tests' results on the outputs of well-known encryption algorithms and on the binary expansions of e, π, and √2. Experimental results show the performance and sensitivity of our tests.
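
The construction described above, counting maximal runs of a given length and comparing the observed counts to their expectations with a χ² statistic, can be sketched as follows. This is a simplified illustration, not the paper's exact tests: it uses the rough approximation E[# runs of length k] ≈ n/2^(k+1) for a random n-bit sequence instead of the exact probabilities and the five equiprobable subintervals the authors derive.

```python
def run_length_counts(bits, max_len=3):
    """Count maximal runs (of either symbol) of each exact length 1..max_len."""
    counts = {k: 0 for k in range(1, max_len + 1)}
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1  # extend the current run
        length = j - i
        if length <= max_len:
            counts[length] += 1
        i = j
    return counts

def runs_chi_square(bits, max_len=3):
    """Chi-square statistic comparing observed run counts with the
    approximate expectation E_k ~ n / 2**(k+1) for a random sequence."""
    n = len(bits)
    observed = run_length_counts(bits, max_len)
    stat = 0.0
    for k in range(1, max_len + 1):
        expected = n / 2 ** (k + 1)
        stat += (observed[k] - expected) ** 2 / expected
    return stat
```

A large statistic, relative to the χ² critical value for the chosen degrees of freedom, indicates a deviation from randomness in the run-length profile.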

  3. Implementation and performance test of cloud platform based on Hadoop

    Science.gov (United States)

    Xu, Jingxian; Guo, Jianhong; Ren, Chunlan

    2018-01-01

Hadoop, an open source project of the Apache Foundation, is a distributed computing framework for processing large amounts of data, and it has been widely used in the Internet industry. It is therefore meaningful to study how to build a Hadoop platform and how to test its performance. This paper presents a method for implementing a Hadoop platform together with a method for testing the platform's performance. Experimental results show that the proposed performance-testing method is effective and can assess the performance of a Hadoop platform.

  4. CUSUM-based person-fit statistics for adaptive testing

    NARCIS (Netherlands)

    van Krimpen-Stoop, Edith; Meijer, R.R.

    2001-01-01

    Item scores that do not fit an assumed item response theory model may cause the latent trait value to be inaccurately estimated. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT),
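The CUSUM idea behind such person-fit statistics can be illustrated with a minimal sketch: accumulate the residuals between observed item scores and model-predicted success probabilities in an upper and a lower cumulative sum, and flag the examinee when either sum drifts past a threshold. This is an illustrative generic CUSUM, not the exact statistic of the paper; the reference value `k` and threshold `h` below are arbitrary choices.

```python
def cusum_person_fit(responses, probs, k=0.1, h=1.0):
    """Track upper (C+) and lower (C-) CUSUMs of the residuals x_i - P_i,
    where P_i is the model probability of a correct response to item i.
    Returns True as soon as either CUSUM crosses its threshold (possible misfit)."""
    c_plus, c_minus = 0.0, 0.0
    for x, p in zip(responses, probs):
        resid = x - p
        c_plus = max(0.0, c_plus + resid - k)   # drift toward unexpected successes
        c_minus = min(0.0, c_minus + resid + k) # drift toward unexpected failures
        if c_plus > h or c_minus < -h:
            return True
    return False
```

In a CAT, the sums are updated after each administered item, so aberrant response behaviour can in principle be flagged while the test is still running.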

  5. Toxicogenomics-based in vitro alternatives for estrogenicity testing

    NARCIS (Netherlands)

    Wang, S.

    2013-01-01

    Testing chemicals for their endocrine-disrupting potential, including interference with estrogen receptor signaling, is an important aspect to assess the safety of currently used and newly developed chemicals. The standard test for disruption of normal estrogen function is the in vivo uterotrophic

  6. Training Senior Teachers in Compulsory Computer Based Language Tests

    Science.gov (United States)

    Laborda, Jesus Garcia; Royo, Teresa Magal

    2009-01-01

    The IBT TOEFL has become the principal example of online high stakes language testing since 2005. Most instructors who do the preparation for IBT TOEFL face two main realities: first, students are eager and highly motivated to take the test because of the prospective implications; and, second, specific studies would be necessary to see if…

  7. Heritability in Cognitive Performance: Evidence Using Computer-Based Testing

    Science.gov (United States)

    Hervey, Aaron S.; Greenfield, Kathryn; Gualtieri, C. Thomas

    2012-01-01

    There is overwhelming evidence of genetic influence on cognition. The effect is seen in general cognitive ability, as well as in specific cognitive domains. A conventional assessment approach using face-to-face paper and pencil testing is difficult for large-scale studies. Computerized neurocognitive testing is a suitable alternative. A total of…

  8. Côte de Resyste -- Automated Model Based Testing

    NARCIS (Netherlands)

    Tretmans, G.J.; Brinksma, Hendrik; Schweizer, M.

    2002-01-01

Systematic testing is very important for assessing and improving the quality of embedded software. Yet, testing turns out to be expensive, laborious, time-consuming and error-prone. The project Côte de Resyste has been working since 1998 on methods, techniques and tools for automating specification

  9. TorX: Automated Model-Based Testing

    NARCIS (Netherlands)

    Tretmans, G.J.; Brinksma, Hendrik; Hartman, A.; Dussa-Ziegler, K.

    2003-01-01

    Systematic testing is very important for assessing and improving the quality of software systems. Yet, testing turns out to be expensive, laborious, time-consuming and error-prone. The Dutch research and development project Côte de Resyste worked on methods, techniques and tools for automating

  10. Specification based formal testing: the EasyLink case study

    NARCIS (Netherlands)

    Belinfante, Axel; Veen, J.P.; Feenstra, J.; Heerink, A.W.; de Vries, R.G.

    2001-01-01

Testing is, in most cases, a manual activity that is time consuming and error prone. Automation, however, can severely reduce the associated costs. In the project Côte de Resyste (COnformance TEsting of REactive SYSTEms) theory is being developed and a prototype tool is being built to support the

  11. CUSUM-based person-fit statistics for adaptive testing

    NARCIS (Netherlands)

    van Krimpen-Stoop, Edith; Meijer, R.R.

    1999-01-01

    Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT),

  12. Improving sensitivity of linear regression-based cell type-specific differential expression deconvolution with per-gene vs. global significance threshold.

    Science.gov (United States)

    Glass, Edmund R; Dozmorov, Mikhail G

    2016-10-06

The goal of many human disease-oriented studies is to detect molecular mechanisms different between healthy controls and patients. Yet, commonly used gene expression measurements from blood samples suffer from variability of cell composition. This variability hinders the detection of differentially expressed genes and is often ignored. Combined with cell counts, heterogeneous gene expression may provide deeper insights into the gene expression differences on the cell type-specific level. Published computational methods use linear regression to estimate cell type-specific differential expression, and a global cutoff to judge significance, such as False Discovery Rate (FDR). Yet, they do not consider many artifacts hidden in high-dimensional gene expression data that may negatively affect linear regression. In this paper we quantify the parameter space affecting the performance of linear regression (sensitivity of cell type-specific differential expression detection) on a per-gene basis. We evaluated the effect of sample sizes, cell type-specific proportion variability, and mean squared error on sensitivity of cell type-specific differential expression detection using linear regression. Each parameter affected variability of cell type-specific expression estimates and, subsequently, the sensitivity of differential expression detection. We provide the R package, LRCDE, which performs linear regression-based cell type-specific differential expression (deconvolution) detection on a gene-by-gene basis. Accounting for variability around cell type-specific gene expression estimates, it computes per-gene t-statistics of differential detection, p-values, t-statistic-based sensitivity, group-specific mean squared error, and several gene-specific diagnostic metrics. The sensitivity of linear regression-based cell type-specific differential expression detection differed for each gene as a function of mean squared error, per group sample sizes, and variability of the proportions
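
The per-gene approach described above can be sketched with a toy two-group deconvolution: regress bulk expression on cell-type proportions within each group (no intercept, since proportions sum to one), then form a t-statistic for the difference of each cell-type coefficient between groups. This is an illustrative reconstruction under simplified assumptions, not the LRCDE implementation.

```python
import numpy as np

def group_fit(X, y):
    """OLS fit of bulk expression y on cell proportions X (no intercept).
    Returns coefficients (cell type-specific expression) and their variances."""
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = X.shape[0] - X.shape[1]
    mse = resid @ resid / dof
    var_beta = mse * np.diag(np.linalg.inv(X.T @ X))
    return beta, var_beta

def celltype_t_stats(X_case, y_case, X_ctrl, y_ctrl):
    """Per-cell-type t-statistics for case-vs-control expression differences,
    combining the two groups' coefficient variances in the denominator."""
    b1, v1 = group_fit(X_case, y_case)
    b0, v0 = group_fit(X_ctrl, y_ctrl)
    return (b1 - b0) / np.sqrt(v1 + v0)
```

On simulated data where only the first cell type is differentially expressed, the first t-statistic should dominate; its magnitude shrinks as the residual mean squared error grows or the proportion variability falls, which is the sensitivity behaviour the abstract describes.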

  13. Nuclear counting filter based on a centered Skellam test and a double exponential smoothing

    Energy Technology Data Exchange (ETDEWEB)

    Coulon, Romain; Kondrasovs, Vladimir; Dumazert, Jonathan; Rohee, Emmanuel; Normand, Stephane [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette, (France)

    2015-07-01

Online nuclear counting represents a challenge due to the stochastic nature of radioactivity. The count data have to be filtered in order to provide a precise and accurate estimation of the count rate, with a response time compatible with the application in view. An innovative filter addressing this issue is presented in this paper. It is a nonlinear filter based on a Centered Skellam Test (CST) giving a local maximum-likelihood estimation of the signal under a Poisson distribution assumption. This nonlinear approach allows the counting signal to be smoothed while maintaining a fast response when abrupt activity changes occur. The filter has been improved by the implementation of a Brown's double Exponential Smoothing (BES). The filter has been validated and compared to other state-of-the-art smoothing filters. The CST-BES filter shows a significant improvement over all tested smoothing filters. (authors)
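
The filter's structure can be sketched as follows. This is an illustrative reconstruction, not the CEA implementation: the change test here uses a Gaussian approximation to the Skellam variance (flag when |x − estimate| > c·√(x + estimate)), and the smoother is Brown's double exponential smoothing with an arbitrary α.

```python
import math

def cst_bes_filter(counts, alpha=0.3, c=3.0):
    """Smooth a sequence of Poisson counts; reset the smoother whenever the
    centered change test flags an abrupt activity change."""
    estimates = []
    s1 = s2 = float(counts[0])
    for x in counts:
        est = 2 * s1 - s2  # Brown's DES level estimate
        # Skellam-style test: is the new count consistent with the estimate?
        if abs(x - est) > c * math.sqrt(max(x + est, 1.0)):
            s1 = s2 = float(x)  # abrupt change: restart smoothing at x
        else:
            s1 = alpha * x + (1 - alpha) * s1
            s2 = alpha * s1 + (1 - alpha) * s2
        estimates.append(2 * s1 - s2)
    return estimates
```

During stable activity the output converges to the mean count (low statistical noise); at a step change the test rejects the old estimate and the filter jumps to the new level in one sample, which is the fast-response behaviour the abstract claims.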

  14. Developing Reading and Listening Comprehension Tests Based on the Sentence Verification Technique (SVT).

    Science.gov (United States)

    Royer, James M.

    2001-01-01

    Describes a team-based approach for creating Sentence Verification Technique (SVT) tests, a development procedure that allows teachers and other school personnel to develop comprehension tests from curriculum materials in use in their schools. Finds that if tests are based on materials that are appropriate for the population to be tested, the…

  15. Test Setup for Anechoic Room based MIMO OTA Testing of LTE Terminals

    DEFF Research Database (Denmark)

    Carreño, Xavier; Fan, Wei; Nielsen, Jesper Ødum

    2013-01-01

    introduced into, for example, LTE andWiMAX systems. The main purpose of this testing is to validate that the user equipment will have a good performance in real use. CTIA, 3GPP and COST are spending a big effort in standardizing the OTA testing procedure which is much more complex than similar SISO OTA...

  16. Operational Based Vision Assessment Automated Vision Test Collection User Guide

    Science.gov (United States)

    2017-05-15

    The U.S. Air Force School of Aerospace Medicine Operational Based Vision Assessment (OBVA) Laboratory has developed a set of computer-based, automated vision tests. Calibration settings are stored under the user's "App Data" → "Roaming" → "Automated Vision Test" → "Settings" → "Calibration" folder.

  17. Colorimetric evaluation of iPhone apps for colour vision tests based on the Ishihara test.

    Science.gov (United States)

    Dain, Stephen J; AlMerdef, Ali

    2016-05-01

Given the versatility of smart phone displays, it was inevitable that applications (apps) providing colour vision testing would appear as an option. In this study, the colorimetric characteristics of five available iPhone apps for colour vision testing are assessed as a prequel to possible clinical evaluation. The colours of the displays produced by the apps are assessed with reference to the colours of a printed Ishihara test. The visual task is assessed on the basis of the colour differences and their alignment with the dichromatic confusion lines. The apps vary in quality, and while some are colorimetrically acceptable, there are also problems with their construction that make them curiosity-driven self-testing tools rather than clinically useful apps. There is no reason why, in principle, a suitable test cannot be designed for smart phones. © 2016 Optometry Australia.

  18. 49 CFR Appendix C to Part 173 - Procedure for Base-level Vibration Testing

    Science.gov (United States)

    2010-10-01

    Base-level vibration testing shall be conducted as follows: 1. Three... platform. 4. Immediately following the period of vibration, each package shall be removed from the platform...

  19. Development, construct validity and test-retest reliability of a field-based wheelchair mobility performance test for wheelchair basketball

    NARCIS (Netherlands)

    de Witte, Annemarie M. H.; Hoozemans, Marco J. M.; Berger, Monique A. M.; van der Slikke, Rienk M. A.; van der Woude, Lucas H. V.; Veeger, Dirkjan (H. E. J)

    2018-01-01

    The aim of this study was to develop and describe a wheelchair mobility performance test in wheelchair basketball and to assess its construct validity and reliability. To mimic mobility performance of wheelchair basketball matches in a standardised manner, a test was designed based on observation of

  20. Significance Testing Needs a Taxonomy: Or How the Fisher, Neyman-Pearson Controversy Resulted in the Inferential Tail Wagging the Measurement Dog.

    Science.gov (United States)

    Bradley, Michael T; Brand, Andrew

    2016-10-01

Accurate measurement and a cutoff probability with inferential statistics are not wholly compatible. Fisher understood this when he developed the F test to deal with measurement variability and to make judgments on manipulations that may be worth further study. Neyman and Pearson focused on modeled distributions whose parameters were highly determined and concluded that inferential judgments following an F test could be made with accuracy because the distribution parameters were determined. Neyman and Pearson's approach to the application of statistical analyses using alpha and beta error rates has played a dominant role in guiding inferential judgments, appropriately in highly determined situations and inappropriately in scientific exploration. Fisher tried to explain the different situations but, in part due to some obscure wording, generated a long-standing dispute that has left the importance of Fisher's p unresolved. To clarify when each approach should be used, a dimension reflecting varying levels of certainty or knowledge of population distributions is presented. The dimension provides a taxonomy of statistical situations and appropriate approaches by delineating four zones that represent how well the underlying population of interest is defined, ranging from exploratory situations to highly determined populations. © The Author(s) 2016.