WorldWideScience

Sample records for discovery rate method

  1. Early detection of pharmacovigilance signals with automated methods based on false discovery rates: a comparative study.

    Science.gov (United States)

    Ahmed, Ismaïl; Thiessard, Frantz; Miremont-Salamé, Ghada; Haramburu, Françoise; Kreft-Jais, Carmen; Bégaud, Bernard; Tubert-Bitter, Pascale

    2012-06-01

    Improving the detection of drug safety signals has led several pharmacovigilance regulatory agencies to incorporate automated quantitative methods into their spontaneous reporting management systems. The three largest worldwide pharmacovigilance databases are routinely screened by the lower bound of the 95% confidence interval of the proportional reporting ratio (PRR₀₂.₅), the 2.5% quantile of the Information Component (IC₀₂.₅) or the 5% quantile of the Gamma Poisson Shrinker (GPS₀₅). More recently, Bayesian and non-Bayesian False Discovery Rate (FDR)-based methods were proposed that address the arbitrariness of thresholds and allow for a built-in estimate of the FDR. These methods were also shown through simulation studies to be interesting alternatives to the currently used methods. The objective of this work was twofold. First, based on an extensive retrospective study, we compared PRR₀₂.₅, GPS₀₅ and IC₀₂.₅ with two FDR-based methods derived from Fisher's exact test and the GPS model (GPS(pH0) [posterior probability of the null hypothesis H₀ calculated from the Gamma Poisson Shrinker model]). Secondly, restricting the analysis to GPS(pH0), we aimed to evaluate the added value of using automated signal detection tools compared with 'traditional' methods, i.e. non-automated surveillance operated by pharmacovigilance experts. The analysis was performed sequentially, i.e. every month, and retrospectively on the whole French pharmacovigilance database over the period 1 January 1996-1 July 2002. Evaluation was based on a list of 243 reference signals (RSs) corresponding to investigations launched by the French Pharmacovigilance Technical Committee (PhVTC) during the same period. The comparison of detection methods was made on the basis of the number of RSs detected as well as the time to detection. Results comparing the five automated quantitative methods were in favour of GPS(pH0) in terms of both the number of true signals detected and time to detection.
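
    The PRR₀₂.₅ screening criterion above reduces to a simple computation on the 2×2 drug-event contingency table. A minimal Python sketch, with invented counts and the usual normal approximation on the log scale (not taken from the record):

      import math

      def prr_lower_bound(a, b, c, d, z=1.96):
          """Lower bound of the 95% CI of the proportional reporting ratio.

          a: reports of the drug with the event; b: the drug, other events;
          c: other drugs with the event;         d: other drugs, other events.
          """
          prr = (a / (a + b)) / (c / (c + d))
          se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
          return prr * math.exp(-z * se_log)

      # A drug-event pair is flagged when the lower bound exceeds 1.
      print(prr_lower_bound(a=25, b=1975, c=300, d=97700) > 1)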

  2. A broken promise: microbiome differential abundance methods do not control the false discovery rate.

    Science.gov (United States)

    Hawinkel, Stijn; Mattiello, Federico; Bijnens, Luc; Thas, Olivier

    2017-08-22

    High-throughput sequencing technologies allow easy characterization of the human microbiome, but the statistical methods to analyze microbiome data are still in their infancy. Differential abundance methods aim at detecting associations between the abundances of bacterial species and subject grouping factors. The results of such methods are important for identifying the microbiome as a prognostic or diagnostic biomarker and for demonstrating the efficacy of probiotic or antibiotic drugs. Because of a lack of benchmarking studies in the microbiome field, no consensus exists on the performance of the statistical methods. We have compared a large number of popular methods through extensive parametric and nonparametric simulation as well as real data shuffling algorithms. The results are consistent over the different approaches and all point to an alarming excess of false discoveries. This raises great doubts about the reliability of discoveries in past studies and imperils the reproducibility of microbiome experiments. To further improve method benchmarking, we introduce a new simulation tool that can generate correlated count data following any univariate count distribution; the correlation structure may be inferred from real data. Most simulation studies discard the correlation between species, but our results indicate that this correlation can negatively affect the performance of statistical methods.
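
    The real-data shuffling idea used in the benchmark is easy to illustrate: permuting group labels destroys any true association, so every rejection on shuffled data is a false discovery. A minimal Python sketch with a hypothetical count matrix and a Wilcoxon-plus-BH pipeline (one of many possible differential abundance tests, not the authors' full benchmark):

      import numpy as np
      from scipy.stats import mannwhitneyu
      from statsmodels.stats.multitest import multipletests

      rng = np.random.default_rng(0)
      counts = rng.negative_binomial(5, 0.3, size=(200, 40))  # 200 taxa, 40 samples
      labels = np.array([0] * 20 + [1] * 20)

      def n_discoveries(counts, labels, alpha=0.05):
          pvals = [mannwhitneyu(row[labels == 0], row[labels == 1]).pvalue
                   for row in counts]
          return multipletests(pvals, alpha=alpha, method="fdr_bh")[0].sum()

      # Under shuffled labels every discovery is false; a well-behaved
      # procedure should return ~0 here on most shuffles.
      print([n_discoveries(counts, rng.permutation(labels)) for _ in range(20)])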

  3. Comparison of seven methods for producing Affymetrix expression scores based on False Discovery Rates in disease profiling data

    Directory of Open Access Journals (Sweden)

    Gruber Stephen B

    2005-02-01

    Full Text Available Abstract Background A critical step in processing oligonucleotide microarray data is combining the information in multiple probes to produce a single number that best captures the expression level of an RNA transcript. Several systematic studies comparing multiple methods for array processing have used tightly controlled calibration data sets as the basis for comparison. Here we compare performances for seven processing methods using two data sets originally collected for disease profiling studies. An emphasis is placed on understanding sensitivity for detecting differentially expressed genes in terms of two key statistical determinants: test statistic variability for non-differentially expressed genes, and test statistic size for truly differentially expressed genes. Results In the two data sets considered here, up to seven-fold variation across the processing methods was found in the number of genes detected at a given false discovery rate (FDR). The best performing methods called up to 90% of the same genes differentially expressed, had less variable test statistics under randomization, and had a greater number of large test statistics in the experimental data. Poor performance of one method was directly tied to a tendency to produce highly variable test statistic values under randomization. Based on an overall measure of performance, two of the seven methods (Dchip and a trimmed mean approach) are superior in the two data sets considered here. Two other methods (MAS5 and GCRMA-EB) are inferior, while results for the other three methods are mixed. Conclusions Choice of processing method has a major impact on differential expression analysis of microarray data. Previously reported performance analyses using tightly controlled calibration data sets are not highly consistent with results reported here using data from human tissue samples. Performance of array processing methods in disease profiling and other realistic biological studies should be

  4. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

    Full Text Available The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, helping to expedite this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power, have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  5. Assessment of Metabolome Annotation Quality: A Method for Evaluating the False Discovery Rate of Elemental Composition Searches

    Science.gov (United States)

    Matsuda, Fumio; Shinbo, Yoko; Oikawa, Akira; Hirai, Masami Yokota; Fiehn, Oliver; Kanaya, Shigehiko; Saito, Kazuki

    2009-01-01

    Background In metabolomics research using mass spectrometry (MS), systematic searching of high-resolution mass data against compound databases is often the first step of metabolite annotation, used to determine elemental compositions possessing similar theoretical mass numbers. However, incorrect hits derived from errors in mass analyses will be included in the results of elemental composition searches. To assess the quality of peak annotation information, a novel methodology for false discovery rate (FDR) evaluation is presented in this study. Based on the FDR analyses, several aspects of an elemental composition search, including setting a threshold, estimating the FDR, and the types of elemental composition databases most reliable for searching, are discussed. Methodology/Principal Findings The FDR can be determined from one measured value (i.e., the hit rate for search queries) and four parameters determined by Monte Carlo simulation. The results indicate that relatively high FDR values (30–50%) were obtained when searching time-of-flight (TOF)/MS data using the KNApSAcK and KEGG databases. In addition, searches against large all-in-one databases (e.g., PubChem) always produced unacceptable results (FDR >70%). The estimated FDRs suggest that the quality of search results can be improved not only by performing more accurate mass analysis but also by modifying the properties of the compound database. A theoretical analysis indicates that the FDR could be improved by using a smaller compound database with higher completeness. Conclusions/Significance High accuracy mass analysis, such as Fourier transform (FT)-MS, is needed for reliable, low-FDR annotation of metabolome data. PMID:19847304
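
    The paper's Monte Carlo estimator is not reproduced here, but the decoy logic behind FDR evaluation for mass searches can be sketched: query the database with random masses to measure the chance hit rate, then compare it with the hit rate of the real queries. All masses, tolerances and database sizes below are invented:

      import numpy as np

      rng = np.random.default_rng(1)
      db = np.sort(rng.uniform(50, 1000, 20000))      # theoretical compound masses (Da)

      def hit_rate(queries, db, tol=0.005):
          idx = np.clip(np.searchsorted(db, queries), 1, len(db) - 1)
          nearest = np.minimum(np.abs(db[idx] - queries), np.abs(db[idx - 1] - queries))
          return float(np.mean(nearest <= tol))

      # Real queries: a mix of true database masses (with measurement noise) and junk.
      real = np.concatenate([rng.choice(db, 250) + rng.normal(0, 0.002, 250),
                             rng.uniform(50, 1000, 250)])
      decoy = rng.uniform(50, 1000, 50000)            # random masses: chance hits only

      # Chance hits per query divided by observed hits per query ≈ FDR of the search.
      print(f"estimated FDR ≈ {hit_rate(decoy, db) / hit_rate(real, db):.2f}")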

  6. 43 CFR 4.1130 - Discovery methods.

    Science.gov (United States)

    2010-10-01

    43 CFR (Public Lands: Interior), 2010-10-01 edition. Special Rules Applicable to Surface Coal Mining Hearings and Appeals, Discovery, § 4.1130 Discovery methods. Parties may obtain discovery by one or more of the following methods— (a) Depositions upon oral...

  7. 29 CFR 18.13 - Discovery methods.

    Science.gov (United States)

    2010-07-01

    29 CFR (Labor), 2010-07-01 edition. Office of the Administrative Law Judges, General, § 18.13 Discovery methods. Parties may obtain discovery by one or more of the following methods: depositions upon oral examination or written questions; written interrogatories...

  8. Computational methods in drug discovery

    OpenAIRE

    Sumudu P. Leelananda; Steffen Lindert

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery project...

  9. Controlling the Rate of GWAS False Discoveries.

    Science.gov (United States)

    Brzyski, Damian; Peterson, Christine B; Sobczyk, Piotr; Candès, Emmanuel J; Bogdan, Malgorzata; Sabatti, Chiara

    2017-01-01

    With the rise of both the number and the complexity of traits of interest, control of the false discovery rate (FDR) in genetic association studies has become an increasingly appealing and accepted target for multiple comparison adjustment. While a number of robust FDR-controlling strategies exist, the nature of this error rate is intimately tied to the precise way in which discoveries are counted, and the performance of FDR-controlling procedures is satisfactory only if there is a one-to-one correspondence between what scientists describe as unique discoveries and the number of rejected hypotheses. The presence of linkage disequilibrium between markers in genome-wide association studies (GWAS) often leads researchers to consider the signal associated with multiple neighboring SNPs as indicating the existence of a single genomic locus with possible influence on the phenotype. This a posteriori aggregation of rejected hypotheses results in inflation of the relevant FDR. We propose a novel approach to FDR control that is based on prescreening to identify the level of resolution of distinct hypotheses. We show how FDR-controlling strategies can be adapted to account for this initial selection, both with theoretical results and with simulations that mimic the dependence structure to be expected in GWAS. We demonstrate that our approach is versatile and useful when the data are analyzed using both tests based on single markers and multiple regression. We provide an R package that allows practitioners to apply our procedure on standard GWAS format data, and illustrate its performance on lipid traits in the North Finland Birth Cohort 66 study.
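
    A minimal Python sketch of the counting problem this record addresses: group SNPs into loci before testing, so that BH rejections correspond to distinct discoveries. The 100 kb grouping rule and the within-locus Bonferroni min-p statistic are illustrative choices, not the authors' procedure:

      import numpy as np
      from statsmodels.stats.multitest import multipletests

      rng = np.random.default_rng(2)
      pos = np.sort(rng.integers(0, 5_000_000, 2000))   # simulated SNP positions
      pvals = rng.uniform(size=2000)                    # simulated per-SNP p-values

      # Prescreen: SNPs closer than 100 kb form one locus; test each locus via
      # its smallest p-value, Bonferroni-corrected within the locus.
      loci = np.cumsum(np.r_[True, np.diff(pos) > 100_000])
      locus_p = [min(1.0, pvals[loci == k].min() * (loci == k).sum())
                 for k in np.unique(loci)]
      reject = multipletests(locus_p, alpha=0.05, method="fdr_bh")[0]
      print(f"{reject.sum()} loci rejected out of {len(locus_p)}")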

  10. Testing jumps via false discovery rate control.

    Science.gov (United States)

    Yen, Yu-Min

    2013-01-01

    Many recently developed nonparametric jump tests can be viewed as multiple hypothesis testing problems. For such multiple hypothesis tests, it is well known that controlling the type I error of each individual test can produce a large proportion of erroneous rejections, and the situation becomes even worse when jump occurrence is a rare event. To obtain more reliable results, we aim to control the false discovery rate (FDR), an efficient compound error measure for erroneous rejections in multiple testing problems. We perform the test via the Barndorff-Nielsen and Shephard (BNS) test statistic, and control the FDR with the Benjamini and Hochberg (BH) procedure. We provide asymptotic results for the FDR control. Through simulations, we examine relevant theoretical results and demonstrate the advantages of controlling the FDR. The hybrid approach is then applied to an empirical analysis of two benchmark stock indices with high frequency data.
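
    The BH step-up procedure at the core of this approach is short enough to state directly; a minimal Python sketch (the p-values fed in at the end are invented stand-ins for per-day jump-test results):

      import numpy as np

      def benjamini_hochberg(pvals, q=0.05):
          """Return a boolean mask of rejected hypotheses (BH step-up at level q)."""
          p = np.asarray(pvals)
          m = p.size
          order = np.argsort(p)
          below = p[order] <= q * (np.arange(1, m + 1) / m)
          k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
          reject = np.zeros(m, dtype=bool)
          reject[order[:k]] = True   # reject the k smallest p-values
          return reject

      # e.g. per-day p-values from a BNS-type jump statistic (values invented):
      print(benjamini_hochberg([0.001, 0.008, 0.04, 0.2, 0.9], q=0.05))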

  11. Testing jumps via false discovery rate control.

    Directory of Open Access Journals (Sweden)

    Yu-Min Yen

    Full Text Available Many recently developed nonparametric jump tests can be viewed as multiple hypothesis testing problems. For such multiple hypothesis tests, it is well known that controlling the type I error of each individual test can produce a large proportion of erroneous rejections, and the situation becomes even worse when jump occurrence is a rare event. To obtain more reliable results, we aim to control the false discovery rate (FDR), an efficient compound error measure for erroneous rejections in multiple testing problems. We perform the test via the Barndorff-Nielsen and Shephard (BNS) test statistic, and control the FDR with the Benjamini and Hochberg (BH) procedure. We provide asymptotic results for the FDR control. Through simulations, we examine relevant theoretical results and demonstrate the advantages of controlling the FDR. The hybrid approach is then applied to an empirical analysis of two benchmark stock indices with high frequency data.

  12. Search strategy has influenced the discovery rate of human viruses.

    Science.gov (United States)

    Rosenberg, Ronald; Johansson, Michael A; Powers, Ann M; Miller, Barry R

    2013-08-20

    A widely held concern is that the pace of infectious disease emergence has been increasing. We have analyzed the rate of discovery of pathogenic viruses, the preeminent source of newly discovered causes of human disease, from 1897 through 2010. The rate was highest during 1950-1969, after which it moderated. This general picture masks two distinct trends: for arthropod-borne viruses, which comprised 39% of pathogenic viruses, the discovery rate peaked at three per year during 1960-1969, but subsequently fell nearly to zero by 1980; however, the rate of discovery of nonarboviruses remained stable at about two per year from 1950 through 2010. The period of highest arbovirus discovery coincided with a comprehensive program supported by The Rockefeller Foundation of isolating viruses from humans, animals, and arthropod vectors at field stations in Latin America, Africa, and India. The productivity of this strategy illustrates the importance of location, approach, long-term commitment, and sponsorship in the discovery of emerging pathogens.

  13. NEW COMPLETENESS METHODS FOR ESTIMATING EXOPLANET DISCOVERIES BY DIRECT DETECTION

    International Nuclear Information System (INIS)

    Brown, Robert A.; Soummer, Remi

    2010-01-01

    We report on new methods for evaluating realistic observing programs that search stars for planets by direct imaging, where observations are selected from an optimized star list and stars can be observed multiple times. We show how these methods bring critical insight into the design of the mission and its instruments. These methods provide an estimate of the outcome of the observing program: the probability distribution of discoveries (detection and/or characterization) and an estimate of the occurrence rate of planets (η). We show that these parameters can be accurately estimated from a single mission simulation, without the need for a complete Monte Carlo mission simulation, and we prove the accuracy of this new approach. Our methods provide tools to define a mission for a particular science goal; for example, a mission can be defined by the expected number of discoveries and its confidence level. We detail how an optimized star list can be built and how successive observations can be selected. Our approach also provides other critical mission attributes, such as the number of stars expected to be searched and the probability of zero discoveries. Because these attributes depend strongly on the mission scale (telescope diameter, observing capabilities and constraints, mission lifetime, etc.), our methods are directly applicable to the design of such future missions and provide guidance to the mission and instrument design based on scientific performance. We illustrate our new methods with practical calculations and exploratory design reference missions for the James Webb Space Telescope (JWST) operating with a distant starshade to reduce scattered and diffracted starlight on the focal plane. We estimate that five habitable Earth-mass planets would be discovered and characterized with spectroscopy, with a probability of zero discoveries of 0.004, assuming a small fraction of JWST observing time (7%), η = 0.3, and 70 observing visits, limited by starshade fuel.
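
    The "probability of zero discoveries" attribute follows from per-visit completeness values; a minimal Python sketch that treats visits as independent (a simplification that ignores repeat visits to the same star; all numbers are invented):

      import numpy as np

      eta = 0.3                                  # assumed occurrence rate of target planets
      rng = np.random.default_rng(3)
      completeness = rng.uniform(0.1, 0.6, 70)   # c_i: detection completeness per visit

      p_detect = eta * completeness              # detection probability per visit
      expected = p_detect.sum()                  # expected number of discoveries
      p_zero = np.prod(1.0 - p_detect)           # probability the program finds nothing

      print(f"E[discoveries] ≈ {expected:.1f}, P(zero) ≈ {p_zero:.1e}")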

  14. Discovery of IPV6 Router Interface Addresses via Heuristic Methods

    Science.gov (United States)

    2015-09-01

    Naval Postgraduate School, Monterey, California. Thesis: Discovery of IPv6 Router Interface Addresses via Heuristic Methods, by Matthew D. Gray, September 2015; funding number CNS-1111445. With the exhaustion of the IPv4 address pool at the Internet Assigned Numbers Authority, there is continued pressure for widespread IPv6 adoption. Because the IPv6 address space is orders of magnitude larger than that of IPv4, exhaustive scanning is infeasible, motivating heuristic methods for discovering router interface addresses.

  15. Applying Hierarchical Task Analysis Method to Discovery Layer Evaluation

    Directory of Open Access Journals (Sweden)

    Marlen Promann

    2015-03-01

    Full Text Available Libraries are implementing discovery layers to offer better user experiences. While usability tests have been helpful in evaluating the success or failure of implementing discovery layers in the library context, the focus has remained on their relative interface benefits over the traditional federated search. The informal site- and context-specific usability tests have offered little to test the rigor of the discovery layers against the user goals, motivations and workflows they have been designed to support. This study proposes hierarchical task analysis (HTA) as an important complementary evaluation method to usability testing of discovery layers. Relevant literature is reviewed for the discovery layers and the HTA method. As no previous application of HTA to the evaluation of discovery layers was found, this paper presents the application of HTA as an expert-based and workflow-centered (e.g., retrieving a relevant book or a journal article) method for evaluating discovery layers. Purdue University's Primo by Ex Libris was used to map eleven use cases as HTA charts. Nielsen's Goal Composition theory was used as an analytical framework to evaluate the goal charts from two perspectives: (a) users' physical interactions (i.e., clicks), and (b) users' cognitive steps (i.e., decision points for what to do next). A brief comparison of HTA and usability test findings is offered by way of conclusion.

  16. Improved detection of common variants associated with schizophrenia and bipolar disorder using pleiotropy-informed conditional false discovery rate

    DEFF Research Database (Denmark)

    Andreassen, Ole A; Thompson, Wesley K; Schork, Andrew J

    2013-01-01

    are currently lacking. Here, we use a genetic pleiotropy-informed conditional false discovery rate (FDR) method on GWAS summary statistics data to identify new loci associated with schizophrenia (SCZ) and bipolar disorders (BD), two highly heritable disorders with significant missing heritability...... associated with both SCZ and BD (conjunction FDR). Together, these findings show the feasibility of genetic pleiotropy-informed methods to improve gene discovery in SCZ and BD and indicate overlapping genetic mechanisms between these two disorders....

  17. Controlling the local false discovery rate in the adaptive Lasso

    KAUST Repository

    Sampson, J. N.

    2013-04-09

    The Lasso shrinkage procedure achieved its popularity, in part, by its tendency to shrink estimated coefficients to zero, and its ability to serve as a variable selection procedure. Using data-adaptive weights, the adaptive Lasso modified the original procedure to increase the penalty terms for those variables estimated to be less important by ordinary least squares. Although this modified procedure attained the oracle properties, the resulting models tend to include a large number of "false positives" in practice. Here, we adapt the concept of local false discovery rates (lFDRs) so that it applies to the sequence, λn, of smoothing parameters for the adaptive Lasso. We define the lFDR for a given λn to be the probability that the variable added to the model by decreasing λn to λn − δ is not associated with the outcome, where δ is a small value. We derive the relationship between the lFDR and λn, show that lFDR = 1 for traditional smoothing parameters, and show how to select λn so as to achieve a desired lFDR. We compare the smoothing parameters chosen to achieve a specified lFDR with those chosen to achieve the oracle properties, as well as their resulting estimates for model coefficients, with both simulation and an example from a genetic study of prostate-specific antigen.

  18. Estimation of uranium resources by life-cycle or discovery-rate models: a critique

    International Nuclear Information System (INIS)

    Harris, D.P.

    1976-10-01

    This report was motivated primarily by M. A. Lieberman's "United States Uranium Resources: An Analysis of Historical Data" (Science, April 30). His conclusion that only 87,000 tons of U₃O₈ resources recoverable at a forward cost of $8/lb remain to be discovered is criticized. It is shown that there is no theoretical basis for selecting the exponential or any other function for the discovery rate. Some of the economic (productivity, inflation) and data issues involved in basing the analysis of undiscovered, recoverable U₃O₈ resources on discovery rates of $8 reserves are discussed. The problem of the ratio of undiscovered $30 resources to undiscovered $8 resources is considered. It is concluded that all methods for the estimation of unknown resources must employ a model of some form of the endowment-exploration-production complex, but every model is a simplification of the real world, and every estimate is intrinsically uncertain. The life-cycle model is useless for the appraisal of undiscovered, recoverable U₃O₈, and the discovery rate model underestimates these resources.

  19. Data Mining and Knowledge Discovery via Logic-Based Methods

    CERN Document Server

    Triantaphyllou, Evangelos

    2010-01-01

    There are many approaches to data mining and knowledge discovery (DM&KD), including neural networks, closest neighbor methods, and various statistical methods. This monograph, however, focuses on the development and use of a novel approach, based on mathematical logic, that the author and his research associates have worked on over the last 20 years. The methods presented in the book deal with key DM&KD issues in an intuitive manner and in a natural sequence. Compared to other DM&KD methods, those based on mathematical logic offer a direct and often intuitive approach for extracting easily interpretable knowledge.

  20. A comparative review of estimates of the proportion unchanged genes and the false discovery rate

    Directory of Open Access Journals (Sweden)

    Broberg Per

    2005-08-01

    Full Text Available Abstract Background In the analysis of microarray data one generally produces a vector of p-values that for each gene gives the likelihood of obtaining equally strong evidence of change by pure chance. The distribution of these p-values is a mixture of two components corresponding to the changed genes and the unchanged ones. The focus of this article is how to estimate the proportion unchanged and the false discovery rate (FDR), and how to make inferences based on these concepts. Six published methods for estimating the proportion of unchanged genes are reviewed, two alternatives are presented, and all are tested on both simulated and real data. All estimates but one make do without any parametric assumptions concerning the distributions of the p-values. Furthermore, the estimation and use of the FDR and the closely related q-value are illustrated with examples. Five published estimates of the FDR and one new one are presented and tested. Implementations in R code are available. Results A simulation model based on the distribution of real microarray data plus two real data sets were used to assess the methods. The proposed alternative methods for estimating the proportion unchanged fared very well, and gave evidence of low bias and very low variance. Different methods perform well depending upon whether there are few or many regulated genes. Furthermore, the methods for estimating FDR showed varying performance, and were sometimes misleading. The new method had a very low error. Conclusion The concept of the q-value or false discovery rate is useful in practical research, despite some theoretical and practical shortcomings. However, it seems possible to challenge the performance of the published methods, and there is likely scope for further developing the estimates of the FDR. The new methods provide the scientist with more options to choose a suitable method for any particular experiment. The article advocates the use of the conjoint information
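
    One family of reviewed estimators is easy to state: under the null, p-values are uniform, so the right tail of the p-value distribution estimates the proportion unchanged. A minimal Python sketch of a generic Storey-type estimate and the implied FDR at a threshold (not any specific method from the article):

      import numpy as np

      def pi0_estimate(pvals, lam=0.5):
          # Under the null, p-values are uniform, so the mass above `lam`
          # estimates the null fraction: pi0 ≈ #{p > lam} / (m * (1 - lam)).
          p = np.asarray(pvals)
          return min(1.0, float((p > lam).mean()) / (1.0 - lam))

      def fdr_at_threshold(pvals, t):
          # Estimated FDR when rejecting all p-values <= t.
          p = np.asarray(pvals)
          n_reject = max(int((p <= t).sum()), 1)
          return pi0_estimate(p) * len(p) * t / n_reject

      rng = np.random.default_rng(4)
      pvals = np.concatenate([rng.uniform(size=900),         # unchanged genes
                              rng.beta(0.5, 10, size=100)])  # changed genes
      print(pi0_estimate(pvals), fdr_at_threshold(pvals, 0.01))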

  1. How can attrition rates be reduced in cancer drug discovery?

    Science.gov (United States)

    Moreno, Lucas; Pearson, Andrew D J

    2013-04-01

    Attrition is a major issue in anticancer drug development, with up to 95% of drugs tested in Phase I trials not reaching a marketing authorisation, making the drug development process enormously costly and inefficient. It is essential that this problem is addressed throughout the whole drug development process to improve efficiency, which will ultimately result in increased patient benefit and more profitable drugs. The approach to reducing cancer drug attrition rates must be based on three pillars. The first of these is a need for new pre-clinical models which can act as better predictors of success in clinical trials. Furthermore, clinical trials driven by tumour biology, with the incorporation of predictive and pharmacodynamic biomarkers, would be beneficial in drug development. Finally, there is a need for increased collaboration to combine the unique strengths of industry, academia and regulators to ensure that the needs of all stakeholders are met.

  2. Specificity control for read alignments using an artificial reference genome-guided false discovery rate.

    Science.gov (United States)

    Giese, Sven H; Zickmann, Franziska; Renard, Bernhard Y

    2014-01-01

    Accurate estimation, comparison and evaluation of read mapping error rates is a crucial step in the processing of next-generation sequencing data, as further analysis steps and interpretation assume the correctness of the mapping results. Current approaches are either focused on sensitivity estimation and thereby disregard specificity or are based on read simulations. Although continuously improving, read simulations are still prone to introduce a bias into the mapping error quantitation and cannot capture all characteristics of an individual dataset. We introduce ARDEN (artificial reference driven estimation of false positives in next-generation sequencing data), a novel benchmark method that estimates error rates of read mappers based on real experimental reads, using an additionally generated artificial reference genome. It allows a dataset-specific computation of error rates and the construction of a receiver operating characteristic curve. Thereby, it can be used for optimization of parameters for read mappers, selection of read mappers for a specific problem or for filtering alignments based on quality estimation. The use of ARDEN is demonstrated in a general read mapper comparison, a parameter optimization for one read mapper and an application example in single-nucleotide polymorphism discovery with a significant reduction in the number of false positive identifications. The ARDEN source code is freely available at http://sourceforge.net/projects/arden/.

  3. Emerging Computational Methods for the Rational Discovery of Allosteric Drugs.

    Science.gov (United States)

    Wagner, Jeffrey R; Lee, Christopher T; Durrant, Jacob D; Malmstrom, Robert D; Feher, Victoria A; Amaro, Rommie E

    2016-06-08

    Allosteric drug development holds promise for delivering medicines that are more selective and less toxic than those that target orthosteric sites. To date, the discovery of allosteric binding sites and lead compounds has been mostly serendipitous, achieved through high-throughput screening. Over the past decade, structural data has become more readily available for larger protein systems and more membrane protein classes (e.g., GPCRs and ion channels), which are common allosteric drug targets. In parallel, improved simulation methods now provide better atomistic understanding of the protein dynamics and cooperative motions that are critical to allosteric mechanisms. As a result of these advances, the field of predictive allosteric drug development is now on the cusp of a new era of rational structure-based computational methods. Here, we review algorithms that predict allosteric sites based on sequence data and molecular dynamics simulations, describe tools that assess the druggability of these pockets, and discuss how Markov state models and topology analyses provide insight into the relationship between protein dynamics and allosteric drug binding. In each section, we first provide an overview of the various method classes before describing relevant algorithms and software packages.

  4. Use of the false discovery rate for evaluating clinical safety data.

    Science.gov (United States)

    Mehrotra, Devan V; Heyse, Joseph F

    2004-06-01

    Clinical adverse experience (AE) data are routinely evaluated using between-group P values for every AE encountered within each of several body systems. If the P values are reported and interpreted without multiplicity considerations, there is a potential for an excess of false positive findings. Procedures based on confidence interval estimates of treatment effects have the same potential for false positive findings as P value methods. Excess false positive findings can needlessly complicate the safety profile of a safe drug or vaccine. Accordingly, we propose a novel method for addressing multiplicity in the evaluation of adverse experience data arising in clinical trial settings. The method involves a two-step application of adjusted P values based on the Benjamini and Hochberg false discovery rate (FDR). Data from three moderate to large vaccine trials are used to illustrate our proposed 'Double FDR' approach, and to reinforce the potential impact of failing to account for multiplicity. This work was in collaboration with the late Professor John W. Tukey, who coined the term 'Double FDR'.
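
    A minimal Python sketch in the spirit of the two-step "Double FDR" idea: BH across body systems first, then BH within the flagged systems. The grouping, p-values and the min-p representation of each body system are invented, and the published procedure differs in detail:

      from statsmodels.stats.multitest import multipletests

      # AE p-values grouped by body system (toy numbers).
      body_systems = {
          "gastrointestinal": [0.002, 0.03, 0.40, 0.77],
          "respiratory":      [0.15, 0.52, 0.90],
          "dermatologic":     [0.001, 0.24],
      }

      # Step 1: FDR across body systems, each represented by its minimum p-value.
      names = list(body_systems)
      rep_p = [min(ps) for ps in body_systems.values()]
      flagged = multipletests(rep_p, alpha=0.10, method="fdr_bh")[0]

      # Step 2: FDR within each flagged body system only.
      for name, keep in zip(names, flagged):
          if keep:
              rej = multipletests(body_systems[name], alpha=0.10, method="fdr_bh")[0]
              print(name, rej)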

  5. A projection and density estimation method for knowledge discovery.

    Directory of Open Access Journals (Sweden)

    Adam Stanski

    Full Text Available A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state of the art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.

  6. Variation in coral growth rates with depth at Discovery Bay, Jamaica

    Energy Technology Data Exchange (ETDEWEB)

    Huston, M

    1985-01-01

    Growth rates, determined by X-radiographic measurement of skeletal extension, decreased with depth for four of six species of coral examined at Discovery Bay, Jamaica. Growth of Porites astreoides, Montastrea annularis, Colpophyllia natans, and Siderastrea siderea decreased significantly with depth over a 1- to 30-m depth range. In Montastrea cavernosa, the highest growth rate occurred in the middle of the sampled depth range. Agaricia agaricites had no measurable change in growth rate with depth. A compilation of available growth data for Atlantic and Pacific corals shows a strong pattern of highest growth rates a short distance below the surface and a decrease with depth.

  7. False discovery rate control incorporating phylogenetic tree increases detection power in microbiome-wide multiple testing.

    Science.gov (United States)

    Xiao, Jian; Cao, Hongyuan; Chen, Jun

    2017-09-15

    Next generation sequencing technologies have enabled the study of the human microbiome through direct sequencing of microbial DNA, resulting in an enormous amount of microbiome sequencing data. One unique characteristic of microbiome data is the phylogenetic tree that relates all the bacterial species. Closely related bacterial species have a tendency to exhibit a similar relationship with the environment or disease. Thus, incorporating the phylogenetic tree information can potentially improve the detection power for microbiome-wide association studies, where hundreds or thousands of tests are conducted simultaneously to identify bacterial species associated with a phenotype of interest. Despite much progress in multiple testing procedures such as false discovery rate (FDR) control, methods that take into account the phylogenetic tree are largely limited. We propose a new FDR control procedure that incorporates the prior structure information and apply it to microbiome data. The proposed procedure is based on a hierarchical model, where a structure-based prior distribution is designed to utilize the phylogenetic tree. By borrowing information from neighboring bacterial species, we are able to improve the statistical power of detecting associated bacterial species while controlling the FDR at desired levels. When the phylogenetic tree is mis-specified or non-informative, our procedure achieves a similar power as traditional procedures that do not take into account the tree structure. We demonstrate the performance of our method through extensive simulations and real microbiome datasets. We identified far more alcohol-drinking associated bacterial species than traditional methods. R package StructFDR is available from CRAN. chen.jun2@mayo.edu. Supplementary data are available at Bioinformatics online.
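
    The following is not the StructFDR algorithm, only a minimal Python sketch of the intuition it builds on: shrink per-taxon statistics toward their phylogenetic neighbors before a standard FDR step. The distance matrix, weights and recalibration are all illustrative stand-ins:

      import numpy as np
      from scipy.stats import norm
      from statsmodels.stats.multitest import multipletests

      rng = np.random.default_rng(6)
      n = 100
      z = rng.normal(size=n)         # per-taxon association z-scores
      z[:10] += 3.0                  # ten truly associated, phylogenetically close taxa
      dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # stand-in tree distance

      # Shrink each statistic toward a distance-weighted neighborhood average,
      # then re-standardize (a crude stand-in; the published method models the
      # null distribution properly).
      W = np.exp(-dist / 2.0)
      W /= W.sum(axis=1, keepdims=True)
      z_s = 0.5 * z + 0.5 * (W @ z)
      z_s = (z_s - z_s.mean()) / z_s.std()

      pvals = 2 * norm.sf(np.abs(z_s))
      print(multipletests(pvals, alpha=0.05, method="fdr_bh")[0].sum(), "rejections")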

  8. Bypass flow rate control method

    International Nuclear Information System (INIS)

    Kiyama, Yoichi.

    1997-01-01

    In a PWR-type reactor, the bypass flow rate is controlled by exchanging the existing jetting-hole plugs of a plurality of nozzles disposed at the upper end of the in-core structures, which route a portion of the primary coolant to the upper portion of the pressure vessel as a bypass flow. Two kinds of exchange plugs are used, a first plug and a second plug, each with a jetting hole of a different diameter: the first plug has the same hole diameter as an existing plug, while the second plug has a jetting hole of larger diameter. Existing plugs that have seized to their nozzles are left in place; the remaining plugs are exchanged for a combination of first and second plugs, with the number and hole diameter of the second plugs determined in advance from the predetermined total bypass flow rate to be jetted from all plugs after the exchange. (N.H.)

  9. A petroleum discovery-rate forecast revisited-The problem of field growth

    Science.gov (United States)

    Drew, L.J.; Schuenemeyer, J.H.

    1992-01-01

    A forecast of the future rates of discovery of crude oil and natural gas for the 123,027-km2 Miocene/Pliocene trend in the Gulf of Mexico was made in 1980. This forecast was evaluated in 1988 by comparing two sets of data: (1) the actual versus the forecasted number of fields discovered, and (2) the actual versus the forecasted volumes of crude oil and natural gas discovered with the drilling of 1,820 wildcat wells along the trend between January 1, 1977, and December 31, 1985. The forecast specified that this level of drilling would result in the discovery of 217 fields containing 1.78 billion barrels of oil equivalent; however, 238 fields containing 3.57 billion barrels of oil equivalent were actually discovered. This underestimation is attributed to biases introduced by field growth and, to a lesser degree, the artificially low, pre-1970's price of natural gas that prevented many smaller gas fields from being brought into production at the time of their discovery; most of these fields contained less than 50 billion cubic feet of producible natural gas.

  10. SemaTyP: a knowledge graph based literature mining method for drug discovery.

    Science.gov (United States)

    Sang, Shengtian; Yang, Zhihao; Wang, Lei; Liu, Xiaoxia; Lin, Hongfei; Wang, Jian

    2018-05-30

    Drug discovery is the process through which potential new medicines are identified. High-throughput screening and computer-aided drug discovery/design are currently the two main drug discovery methods, and they have successfully discovered a series of drugs. However, the development of new drugs is still an extremely time-consuming and expensive process. Biomedical literature contains important clues for the identification of potential treatments and could support experts in biomedicine on their way towards new discoveries. Here, we propose a biomedical knowledge graph-based drug discovery method called SemaTyP, which discovers candidate drugs for diseases by mining published biomedical literature. We first construct a biomedical knowledge graph from relations extracted from biomedical abstracts; a logistic regression model is then trained on the semantic types of the paths that known drug therapies trace through the knowledge graph; finally, the learned model is used to discover drug therapies for new diseases. The experimental results show that our method can not only effectively discover new drug therapies for new diseases, but also provide the potential mechanism of action of the candidate drugs. In this paper we propose a novel knowledge graph-based literature mining method for drug discovery. It could be a supplementary method for current drug discovery methods.
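
    A minimal Python sketch of the final two steps as described above: featurize the semantic types along each knowledge-graph path and train a logistic regression on paths from known therapies. The feature names mimic SemRep predicates/semantic types, but the tiny dataset is invented:

      from sklearn.feature_extraction import DictVectorizer
      from sklearn.linear_model import LogisticRegression

      # Each drug -> ... -> disease path is reduced to counts of the semantic
      # types/predicates along it (features invented for illustration).
      paths = [
          ({"TREATS": 1, "gngm": 1, "dsyn": 1}, 1),   # path from a known therapy
          ({"INTERACTS_WITH": 2, "aapp": 1}, 1),
          ({"COEXISTS_WITH": 1, "dsyn": 2}, 0),       # path between unrelated pairs
          ({"LOCATION_OF": 1, "anatomic": 1}, 0),
      ]

      X_dicts, y = zip(*paths)
      vec = DictVectorizer()
      X = vec.fit_transform(X_dicts)

      clf = LogisticRegression().fit(X, y)
      # Score a new candidate path:
      print(clf.predict_proba(vec.transform([{"TREATS": 1, "aapp": 1}]))[0, 1])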

  11. Improved Detection of Common Variants Associated with Schizophrenia and Bipolar Disorder Using Pleiotropy-Informed Conditional False Discovery Rate

    Science.gov (United States)

    Andreassen, Ole A.; Thompson, Wesley K.; Schork, Andrew J.; Ripke, Stephan; Mattingsdal, Morten; Kelsoe, John R.; Kendler, Kenneth S.; O'Donovan, Michael C.; Rujescu, Dan; Werge, Thomas; Sklar, Pamela; Roddey, J. Cooper; Chen, Chi-Hua; McEvoy, Linda; Desikan, Rahul S.; Djurovic, Srdjan; Dale, Anders M.

    2013-01-01

    Several lines of evidence suggest that genome-wide association studies (GWAS) have the potential to explain more of the “missing heritability” of common complex phenotypes. However, reliable methods to identify a larger proportion of single nucleotide polymorphisms (SNPs) that impact disease risk are currently lacking. Here, we use a genetic pleiotropy-informed conditional false discovery rate (FDR) method on GWAS summary statistics data to identify new loci associated with schizophrenia (SCZ) and bipolar disorders (BD), two highly heritable disorders with significant missing heritability. Epidemiological and clinical evidence suggest similar disease characteristics and overlapping genes between SCZ and BD. Here, we computed conditional Q–Q curves of data from the Psychiatric Genome Consortium (SCZ: n = 9,379 cases and n = 7,736 controls; BD: n = 6,990 cases and n = 4,820 controls) to show enrichment of SNPs associated with SCZ as a function of association with BD and vice versa with a corresponding reduction in FDR. Applying the conditional FDR method, we identified 58 loci associated with SCZ and 35 loci associated with BD below the conditional FDR level of 0.05. Of these, 14 loci were associated with both SCZ and BD (conjunction FDR). Together, these findings show the feasibility of genetic pleiotropy-informed methods to improve gene discovery in SCZ and BD and indicate overlapping genetic mechanisms between these two disorders. PMID:23637625

  12. Graph-Based Methods for Discovery Browsing with Semantic Predications

    DEFF Research Database (Denmark)

    Wilkowski, Bartlomiej; Fiszman, Marcelo; Miller, Christopher M

    2011-01-01

    Poorly understood relationships may be explored through novel points of view, and potentially interesting relationships need not be known ahead of time. In a process of "cooperative reciprocity" the user iteratively focuses system output, thus controlling the large number of relationships often generated...... in literature-based discovery systems. The underlying technology exploits SemRep semantic predications represented as a graph of interconnected nodes (predication arguments) and edges (predicates). The system suggests paths in this graph, which represent chains of relationships. The methodology is illustrated......

  13. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.

    Science.gov (United States)

    Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried

    2014-01-01

    Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two-sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a higher entropy value cannot simply be equated with higher irregularity; rather, it indicates differences in regularity. N should be at least 200 data points for r = 0.2σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical
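
    Sample entropy with the threshold choice discussed above (r as a multiple of the sample standard deviation) can be sketched compactly; a minimal Python implementation of SampEn(m, r, N) on a synthetic RR-interval series:

      import numpy as np

      def sample_entropy(x, m=2, r_factor=0.2):
          """SampEn(m, r, N) with tolerance r = r_factor * std(x)."""
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std()

          def n_pairs(templates):
              # pairs of templates within Chebyshev distance r, self-matches excluded
              d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
              return np.triu(d <= r, k=1).sum()

          emb_m = np.lib.stride_tricks.sliding_window_view(x, m)[:-1]    # N-m templates
          emb_m1 = np.lib.stride_tricks.sliding_window_view(x, m + 1)    # N-m templates
          b, a = n_pairs(emb_m), n_pairs(emb_m1)
          return -np.log(a / b) if a and b else np.inf

      rr = np.random.default_rng(7).normal(0.8, 0.05, 300)   # synthetic RR intervals (s)
      print(sample_entropy(rr, m=2, r_factor=0.2))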

  14. A constrained polynomial regression procedure for estimating the local False Discovery Rate

    Directory of Open Access Journals (Sweden)

    Broët Philippe

    2007-06-01

    Full Text Available Abstract Background In the context of genomic association studies, for which a large number of statistical tests are performed simultaneously, the local False Discovery Rate (lFDR), which quantifies the evidence of a specific gene association with a clinical or biological variable of interest, is a relevant criterion for taking into account the multiple testing problem. The lFDR not only allows an inference to be made for each gene through its specific value, but also an estimate of Benjamini-Hochberg's False Discovery Rate (FDR) for subsets of genes. Results In the framework of estimating procedures without any distributional assumption under the alternative hypothesis, a new and efficient procedure for estimating the lFDR is described. The results of a simulation study indicated good performance for the proposed estimator in comparison to four published ones. The five different procedures were applied to real datasets. Conclusion A novel and efficient procedure for estimating the lFDR was developed and evaluated.

  15. Evaluation of gene association methods for coexpression network construction and biological knowledge discovery.

    Directory of Open Access Journals (Sweden)

    Sapna Kumari

    Full Text Available BACKGROUND: Constructing coexpression networks and performing network analysis using large-scale gene expression data sets is an effective way to uncover new biological knowledge; however, the methods used for gene association in constructing these coexpression networks have not been thoroughly evaluated. Since different methods lead to structurally different coexpression networks and provide different information, selecting the optimal gene association method is critical. METHODS AND RESULTS: In this study, we compared eight gene association methods - Spearman rank correlation, Weighted Rank Correlation, Kendall, Hoeffding's D measure, Theil-Sen, Rank Theil-Sen, Distance Covariance, and Pearson - and focused on their true knowledge discovery rates in associating pathway genes and constructing coordination networks of regulatory genes. We also examined the behaviors of the different methods on microarray data with different properties, and whether the biological processes affect the efficiency of the different methods. CONCLUSIONS: We found that the Spearman, Hoeffding and Kendall methods are effective in identifying coexpressed pathway genes, whereas the Theil-Sen, Rank Theil-Sen, Spearman, and Weighted Rank methods perform well in identifying coordinated transcription factors that control the same biological processes and traits. Surprisingly, the widely used Pearson method is generally less efficient, and so is the Distance Covariance method that can find gene pairs of multiple relationships. Some of our analyses clearly show that the Pearson and Distance Covariance methods behave distinctly as compared to the other six methods. The efficiencies of the different methods vary with the data properties to some degree and are largely contingent upon the biological processes, which necessitates pre-analysis to identify the best performing method for gene association and coexpression network construction.
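
    Three of the eight compared measures are directly available in SciPy; a minimal Python sketch on a monotone but nonlinear toy relationship, where the rank-based measures behave differently from Pearson (Hoeffding's D, the Theil-Sen variants and distance covariance are omitted here):

      import numpy as np
      from scipy.stats import kendalltau, pearsonr, spearmanr

      rng = np.random.default_rng(8)
      x = rng.normal(size=200)                          # expression of gene A
      y = np.exp(x) + rng.normal(scale=0.5, size=200)   # monotone, nonlinear partner

      # Rank-based measures track the monotone link; Pearson is attenuated by
      # the nonlinearity.
      print("pearson :", round(pearsonr(x, y)[0], 3))
      print("spearman:", round(spearmanr(x, y)[0], 3))
      print("kendall :", round(kendalltau(x, y)[0], 3))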

  16. Characterization and correction of the false-discovery rates in resting state connectivity using functional near-infrared spectroscopy

    Science.gov (United States)

    Santosa, Hendrik; Aarabi, Ardalan; Perlman, Susan B.; Huppert, Theodore J.

    2017-05-01

    Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of red to near-infrared light to measure changes in cerebral blood oxygenation. Spontaneous (resting state) functional connectivity (sFC) has become a critical tool for cognitive neuroscience for understanding task-independent neural networks, revealing pertinent details differentiating healthy from disordered brain function, and discovering fluctuations in the synchronization of interacting individuals during hyperscanning paradigms. Two of the main challenges to sFC-NIRS analysis are (i) the slow temporal structure of both systemic physiology and the response of blood vessels, which introduces false spurious correlations, and (ii) motion-related artifacts that result from movement of the fNIRS sensors on the participants' head and can introduce non-normal and heavy-tailed noise structures. In this work, we systematically examine the false-discovery rates of several time- and frequency-domain metrics of functional connectivity for characterizing sFC-NIRS. Specifically, we detail the modifications to the statistical models of these methods needed to avoid high levels of false-discovery related to these two sources of noise in fNIRS. We compare these analysis procedures using both simulated and experimental resting-state fNIRS data. Our proposed robust correlation method has better performance in terms of being more reliable to the noise outliers due to the motion artifacts.

  17. Discovery of pyridine-based agrochemicals by using Intermediate Derivatization Methods.

    Science.gov (United States)

    Guan, Ai-Ying; Liu, Chang-Ling; Sun, Xu-Feng; Xie, Yong; Wang, Ming-An

    2016-02-01

    Pyridine-based compounds have been playing a crucial role as agrochemicals or pesticides, including fungicides, insecticides/acaricides and herbicides. Since most of the agrochemicals listed in the Pesticide Manual were discovered through screening programs that relied on trial-and-error testing, and new agrochemical discovery is not benefiting as much from the in silico compound identification/discovery techniques used in pharmaceutical research, it has become more important to find new methods to enhance the efficiency of discovering novel lead compounds in the agrochemical field and to shorten the research phases in order to meet changing market requirements. In this review, we selected 18 representative known agrochemicals containing a pyridine moiety and extrapolate their discovery from the perspective of Intermediate Derivatization Methods, in the hope that this approach will have greater appeal to researchers engaged in the discovery of agrochemicals and/or pharmaceuticals.

  18. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool for determining the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.
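
    A minimal Python sketch of an interim re-estimation rule of the kind described: estimate the proportion of true nulls and a typical effect size from stage-1 p-values, then solve the usual normal-approximation sample size formula. This is a crude illustration, not the authors' procedure:

      import numpy as np
      from scipy.stats import norm

      def stage2_n(p1, n1, alpha=0.05, power=0.9, lam=0.5):
          p1 = np.asarray(p1)
          pi0 = min(1.0, float((p1 > lam).mean()) / (1 - lam))  # Storey-type estimate
          z = norm.isf(p1)                          # stage-1 z-scores (one-sided)
          delta = z[p1 < 0.01].mean() / np.sqrt(n1)  # crude per-sample effect estimate
          m_alt = max(1, round(len(p1) * (1 - pi0)))
          alpha_star = alpha * m_alt / len(p1)      # rough BH-style per-test level
          n_total = ((norm.isf(alpha_star) + norm.isf(1 - power)) / delta) ** 2
          return max(0, int(np.ceil(n_total - n1)))  # additional samples for stage 2

      p_stage1 = np.random.default_rng(9).uniform(size=1000) ** 2  # toy enriched p-values
      print(stage2_n(p_stage1, n1=10))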

  19. Methodologies of Knowledge Discovery from Data and Data Mining Methods in Mechanical Engineering

    Directory of Open Access Journals (Sweden)

    Rogalewicz Michał

    2016-12-01

    Full Text Available The paper contains a review of methodologies for the process of knowledge discovery from data and of the methods of data exploration (Data Mining) that are most frequently used in mechanical engineering. The methodologies describe various scenarios of data exploration, within which DM methods are applied. The paper shows the premises for the use of DM methods in industry, as well as their advantages and disadvantages. The development of methodologies of knowledge discovery from data is also presented, along with a classification of the most widespread Data Mining methods, divided by the type of task they perform. The paper is summarized by a presentation of selected Data Mining applications in mechanical engineering.

  20. Non-Adiabatic Molecular Dynamics Methods for Materials Discovery

    Energy Technology Data Exchange (ETDEWEB)

    Furche, Filipp [Univ. of California, Irvine, CA (United States); Parker, Shane M. [Univ. of California, Irvine, CA (United States); Muuronen, Mikko J. [Univ. of California, Irvine, CA (United States); Roy, Saswata [Univ. of California, Irvine, CA (United States)

    2017-04-04

    The flow of radiative energy in light-driven materials such as photosensitizer dyes or photocatalysts is governed by non-adiabatic transitions between electronic states and cannot be described within the Born-Oppenheimer approximation commonly used in electronic structure theory. The non-adiabatic molecular dynamics (NAMD) methods based on Tully surface hopping and time-dependent density functional theory developed in this project have greatly extended the range of molecular materials that can be tackled by NAMD simulations. New algorithms to compute molecular excited state and response properties efficiently were developed. Fundamental limitations of common non-linear response methods were discovered and characterized. Methods for accurate computations of vibronic spectra of materials such as black absorbers were developed and applied. It was shown that open-shell TDDFT methods capture bond breaking in NAMD simulations, a longstanding challenge for single-reference molecular dynamics simulations. The methods developed in this project were applied to study the photodissociation of acetaldehyde and revealed that non-adiabatic effects are experimentally observable in fragment kinetic energy distributions. Finally, the project enabled the first detailed NAMD simulations of photocatalytic water oxidation by titania nanoclusters, uncovering the mechanism of this fundamentally important reaction for fuel generation and storage.

  1. The Discovery of Processing Stages: Extension of Sternberg's Method

    NARCIS (Netherlands)

    Anderson, John R; Zhang, Qiong; Borst, Jelmer P; Walsh, Matthew M

    2016-01-01

    We introduce a method for measuring the number and durations of processing stages from the electroencephalographic signal and apply it to the study of associative recognition. Using an extension of past research that combines multivariate pattern analysis with hidden semi-Markov models, the approach…

  2. The Effect of Discovery Learning Method Application on Increasing Students' Listening Outcome and Social Attitude

    Science.gov (United States)

    Hanafi

    2016-01-01

    The 2013 Curriculum has been introduced in the schools appointed as its implementers. For the English subject, this curriculum demands that students improve their skills. One of the methods suggested for reaching this goal is discovery learning, since this method is considered appropriate for increasing students' ability, especially to fulfill minimum…

  3. Comparison of sequencing based CNV discovery methods using monozygotic twin quartets.

    Directory of Open Access Journals (Sweden)

    Marc-André Legault

    Full Text Available The advent of high-throughput sequencing methods brings with it a number of technical challenges. Among them is the discovery of copy-number variations (CNVs) using whole-genome sequencing data. CNVs are genomic structural variations defined as a variation in the number of copies of a large genomic fragment, usually more than one kilobase. Here, we compare different CNV calling methods in order to assess their ability to consistently identify CNVs, by comparing the calls in 9 quartets of identical twin pairs. The use of monozygotic twins provides a means of estimating the error rate of each algorithm by observing CNVs that are inconsistently called under the rules of Mendelian inheritance and the assumption of an identical genome between twins. The similarity between the calls from the different tools and the advantage of combining call sets were also considered. ERDS and CNVnator obtained the best performance with respect to the inherited CNV rate, with means of 0.74 and 0.70, respectively. Venn diagrams were generated to show the agreement between the different algorithms, before and after filtering out familial inconsistencies. This filtering revealed a high number of false positives for CNVer and Breakdancer. A low overall agreement between the methods suggested a high complementarity of the different tools when calling CNVs. The breakpoint sensitivity analysis indicated that CNVnator and ERDS achieved better resolution of CNV borders than the other tools. The highest inherited CNV rate was achieved through the intersection of these two tools (81%). This study showed that ERDS and CNVnator provide good performance on whole-genome sequencing data with respect to CNV consistency across families, CNV breakpoint resolution and CNV call specificity. The intersection of the calls from the two tools would be valuable for CNV genotyping pipelines.
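
    The consistency metric used above can be illustrated with a small interval computation. The sketch below assumes a 50% reciprocal-overlap rule and tuple-encoded calls; both are assumptions for illustration, not the paper's exact definitions.

        # Illustrative "inherited CNV rate": the fraction of a child's CNV calls
        # that reciprocally overlap (>= 50%) a call in either parent.
        def reciprocal_overlap(a, b, frac=0.5):
            """a, b are (chrom, start, end) tuples."""
            if a[0] != b[0]:
                return False
            inter = min(a[2], b[2]) - max(a[1], b[1])
            return inter >= frac * (a[2] - a[1]) and inter >= frac * (b[2] - b[1])

        def inherited_rate(child_calls, parent_calls, frac=0.5):
            if not child_calls:
                return float('nan')
            hits = sum(any(reciprocal_overlap(c, p, frac) for p in parent_calls)
                       for c in child_calls)
            return hits / len(child_calls)

        # Example: both child calls overlap a parental call -> rate 1.0
        child = [("chr1", 1000, 5000), ("chr2", 100, 900)]
        parents = [("chr1", 1200, 5200), ("chr2", 50, 950)]
        print(inherited_rate(child, parents))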

  4. Polyphony: superposition independent methods for ensemble-based drug discovery.

    Science.gov (United States)

    Pitt, William R; Montalvão, Rinaldo W; Blundell, Tom L

    2014-09-30

    Structure-based drug design is an iterative process, following cycles of structural biology, computer-aided design, synthetic chemistry and bioassay. In favorable circumstances, this process can yield hundreds of protein-ligand crystal structures. In addition, molecular dynamics simulations are increasingly being used to further explore the conformational landscape of these complexes. Currently, methods capable of analyzing ensembles of crystal structures and MD trajectories are limited and usually rely upon least-squares superposition of coordinates. Novel methodologies are described for the analysis of multiple structures of a protein. Statistical approaches that rely upon residue equivalence, but not superposition, are developed. Tasks that can be performed include the identification of hinge regions, allosteric conformational changes and transient binding sites. The approaches are tested on crystal structures of CDK2 and other CMGC protein kinases and on a simulation of p38α. Known relationships between interactions and conformational changes are highlighted, and new ones are revealed. A transient but druggable allosteric pocket in CDK2 is predicted to occur under the CMGC insert. Furthermore, an evolutionarily conserved conformational link from the location of this pocket, via the αEF-αF loop, to phosphorylation sites on the activation loop is discovered. New methodologies are described and validated for the superposition-independent conformational analysis of large collections of structures or simulation snapshots of the same protein. The methodologies are encoded in a Python package called Polyphony, which is released as open source to accompany this paper [http://wrpitt.bitbucket.org/polyphony/].

  5. Improving sensitivity in proteome studies by analysis of false discovery rates for multiple search engines.

    Science.gov (United States)

    Jones, Andrew R; Siepen, Jennifer A; Hubbard, Simon J; Paton, Norman W

    2009-03-01

    LC-MS experiments can generate large quantities of data, for which a variety of database search engines are available to make peptide and protein identifications. Decoy databases are becoming widely used to place statistical confidence in result sets, allowing the false discovery rate (FDR) to be estimated. Different search engines produce different identification sets, so employing more than one search engine could result in an increased number of peptides (and proteins) being identified, if an appropriate mechanism for combining data can be defined. We have developed a search-engine-independent score based on FDR, called the FDR Score, which allows peptide identifications from different search engines to be combined. The results demonstrate that the observed FDR is significantly different when analysing the set of identifications made by all three search engines, by each pair of search engines, or by a single search engine. Our algorithm assigns identifications to groups according to the set of search engines that have made the identification, and re-assigns the score (combined FDR Score). The combined FDR Score can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine.
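
    The decoy-based FDR estimate underlying the FDR Score can be sketched as follows; this is a simplified illustration of target-decoy q-value computation, not the paper's exact interpolation or engine-grouping algorithm.

        def q_values(psms):
            """psms: list of (score, is_decoy); higher score = better match.
            Returns a q-value for each PSM, in input order."""
            order = sorted(range(len(psms)), key=lambda i: -psms[i][0])
            decoys = targets = 0
            fdr = [0.0] * len(psms)
            for i in order:                      # walk from best to worst score
                if psms[i][1]:
                    decoys += 1
                else:
                    targets += 1
                fdr[i] = decoys / max(targets, 1)
            running = 1.0                        # q-value = min FDR at or below
            for i in reversed(order):            # this PSM's score threshold
                running = min(running, fdr[i])
                fdr[i] = running
            return fdr

        print(q_values([(10, False), (9, True), (8, False)]))  # [0.0, 0.5, 0.5]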

  6. KNODWAT: a scientific framework application for testing knowledge discovery methods for the biomedical domain.

    Science.gov (United States)

    Holzinger, Andreas; Zupan, Mario

    2013-06-13

    Professionals in the biomedical domain are confronted with an increasing mass of data. Developing methods to assist professional end users in the field of Knowledge Discovery to identify, extract, visualize and understand useful information from these huge amounts of data is a major challenge. However, so many diverse methods and methodologies are available that, for biomedical researchers who are inexperienced in the use of even relatively popular knowledge discovery methods, it can be very difficult to select the most appropriate method for a particular research problem. A web application, called KNODWAT (KNOwledge Discovery With Advanced Techniques), has been developed using Java on the Spring framework 3.1 and following a user-centered approach. The software runs on Java 1.6 and above and requires a web server such as Apache Tomcat and a database server such as MySQL Server. For frontend functionality and styling, Twitter Bootstrap was used, as well as jQuery for interactive user interface operations. The framework presented is user-centric, highly extensible and flexible. Since it enables methods to be tested on existing data to assess their suitability and performance, it is especially suitable for inexperienced biomedical researchers who are new to the field of knowledge discovery and data mining. For testing purposes two algorithms, CART and C4.5, were implemented using the WEKA data mining framework.

  7. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have become widely used in transcriptome studies in recent years. Such experiments are still relatively costly, so RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the commonly applied tests for differential expression analysis. In addition, the false discovery rate (FDR), instead of the family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method, which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that, for RNA-seq data with sample size calculated by our method, the actual power of several commonly applied tests for differential expression is close to the desired power. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package, ssizeRNA, that implements the proposed method and can be downloaded from the Comprehensive R Archive Network (http://cran.r-project.org).
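
    The search over sample sizes described above can be roughed out as below, assuming normally distributed log fold changes, a z-test power approximation, and the usual BH-based choice of per-test significance level; the paper's voom-based calculation in the ssizeRNA R package is more refined.

        import numpy as np
        from scipy import stats

        def avg_power(n, lfc, sd, alpha):
            """Average two-sided z-test power across genes (normal approximation)."""
            se = sd * np.sqrt(2.0 / n)
            z = stats.norm.isf(alpha / 2.0)
            shift = np.abs(lfc) / se
            return np.mean(stats.norm.sf(z - shift) + stats.norm.cdf(-z - shift))

        def sample_size(m=10000, pi0=0.8, fdr=0.05, power=0.8, sd=0.5):
            rng = np.random.default_rng(0)
            m1 = int((1 - pi0) * m)
            lfc = rng.normal(0.0, 1.0, m1)        # illustrative effect sizes
            alpha = fdr * m1 * power / ((m - m1) * (1 - fdr))  # BH approximation
            for n in range(2, 500):               # n = replicates per group
                if avg_power(n, lfc, sd, alpha) >= power:
                    return n
            return None

        print(sample_size())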

  8. Improving Junior High School Students' Mathematical Analogical Ability Using Discovery Learning Method

    Science.gov (United States)

    Maarif, Samsul

    2016-01-01

    The aim of this study was to identify the influence of the discovery learning method on the mathematical analogical ability of junior high school students. The research used a 2x2 factorial design with two-way ANOVA. The population comprised the entire student body of SMPN 13 Jakarta (State Junior High School 13 of Jakarta)…

  9. Rating Methods for Proactive Recommendation on Smartwatches

    OpenAIRE

    Janosch Maier, Wolfgang Wörndl

    2015-01-01

    This paper analyzes possible interaction methods for using a recommender system on a smartwatch. As a prerequisite, we describe interaction patterns currently used by Android Wear applications. Based on a prototype implementation, the interaction methods action buttons, two-button card and swipes are compared against each other. In a user study, 31 participating students were asked to rate restaurant recommendations offered in the setting of a context-aware, proactive recommender system. For eac...

  10. Sterilization: new method options, failure rate info.

    Science.gov (United States)

    1998-01-01

    This article discusses new sterilization methods for tubal ligation, failure rates, and risks for ectopic pregnancy in the US. The Filshie clip, which was developed by Femcare, Ltd. in Nottingham, England, and is distributed by Avalon Medical Corp of Vermont, was approved by the US Food and Drug Administration in September 1996. The company has provided training sessions at major universities nationwide and exhibited at national professional association meetings. A training video and two more films will be available in 1998. The new clip is considered a more modern approach to tubal occlusion, relying on newer materials and solving prior problems. Physicians usually used Falope rings, which had lower failure rates than the Hulka clip and bipolar coagulation methods. There is a need for more long-term and large-scale information exchange about the new Filshie clip. Some physicians still use the Falope ring because it is cost-effective and well studied. Physicians are warned to continue to advise women about the potential failure rates up to 10 years after sterilization and the roughly 1-in-3 risk that a post-sterilization pregnancy will be ectopic. Counseling about failure rates and the risk of ectopic pregnancy should target women under 30 years old, who have the highest failure rates, and women 30-34 years old. All sterilized women should be advised to seek a provider immediately if they have pregnancy symptoms following sterilization. Counseling should include the issue of "regrets," since it is a permanent method. All women should know about nonpermanent methods and experience a basic informed consent process. Young women and newly divorced women are particularly vulnerable to the "regrets" syndrome.

  11. Bioanalytical methods for food allergy diagnosis, allergen detection and new allergen discovery

    OpenAIRE

    Gasilova, Natalia; Girault, Hubert H

    2015-01-01

    For effective monitoring and prevention of the food allergy, one of the emerging health problems nowadays, existing diagnostic procedures and allergen detection techniques are constantly improved. Meanwhile, new methods are also developed, and more and more putative allergens are discovered. This review describes traditional methods and summarizes recent advances in the fast evolving field of the in vitro food allergy diagnosis, allergen detection in food products and discovery of the new all...

  12. Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method

    Directory of Open Access Journals (Sweden)

    Irene Niks

    2018-02-01

    Full Text Available Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care.

  13. Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method

    Science.gov (United States)

    Niks, Irene; Gevers, Josette

    2018-01-01

    Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care. PMID:29438350

  14. Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method.

    Science.gov (United States)

    Niks, Irene; de Jonge, Jan; Gevers, Josette; Houtman, Irene

    2018-02-13

    Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care.

  15. Application of false discovery rate control in the assessment of decrease of FDG uptake in early Alzheimer dementia

    International Nuclear Information System (INIS)

    Lee, Dong Soo; Kang, Hye Jin; Jang, Myung Jin; Kang, Won Jun; Lee, Jae Sung; Kang, Eun Joo; Lee, Kang Uk; Woo, Jong In; Lee, Myung Chul; Cho, Sang Soo

    2003-01-01

    Determining an appropriate threshold is crucial for FDG PET analysis, since strong control of the Type I error could fail to find pathological differences between early Alzheimer's disease (AD) patients and healthy normal controls. We compared the SPM results on FDG PET imaging of early AD using uncorrected p-values, random-field-based corrected p-values and false discovery rate (FDR) control. Twenty-eight patients (66±7 years old) with early AD and 18 age-matched normal controls (68±6 years old) underwent FDG brain PET. To identify brain regions with hypo-metabolism in the group or in individual patients compared to normal controls, the group images or each patient's image was compared with normal controls at the same fixed p-value of 0.001 under uncorrected thresholding, random-field-based corrected thresholding and FDR control. In the group analysis, the number of hypo-metabolic voxels was smallest with the corrected p-value method, largest with the uncorrected p-value method and intermediate with FDR thresholding. Three types of result pattern were found. The first was that the corrected p-value method did not yield any positive voxels but FDR gave a few significantly hypo-metabolic voxels (8/28, 29%). The second was that neither the corrected p-value method nor FDR yielded any positive region, but numerous positive voxels were found with the threshold of uncorrected p-values (6/28, 21%). The last was that FDR detected as many positive voxels as the uncorrected p-value method (14/28, 50%). Conclusions: FDR control could identify hypo-metabolic areas in the group or in individual patients with early AD. We recommend FDR control instead of uncorrected or random-field-corrected thresholding to find areas showing hypo-metabolism, especially in small-group or individual analyses of FDG PET
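
    The FDR control referred to here is the Benjamini-Hochberg step-up procedure applied to voxel-wise p-values; a minimal sketch (not the SPM implementation) follows.

        import numpy as np

        def bh_mask(pvals, q=0.05):
            """Benjamini-Hochberg: boolean mask of p-values significant at FDR q."""
            p = np.asarray(pvals, dtype=float).ravel()
            m = p.size
            order = np.argsort(p)
            below = p[order] <= q * np.arange(1, m + 1) / m
            mask = np.zeros(m, dtype=bool)
            if below.any():
                k = np.nonzero(below)[0].max()   # largest i with p_(i) <= q*i/m
                mask[order[:k + 1]] = True
            return mask.reshape(np.shape(pvals))

        # Example: 10 voxel p-values, the two small ones survive at q = 0.05
        print(bh_mask([0.001, 0.004, 0.3, 0.5, 0.2, 0.8, 0.6, 0.9, 0.7, 0.4]))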

  16. Three-dimensional compound comparison methods and their application in drug discovery.

    Science.gov (United States)

    Shin, Woong-Hee; Zhu, Xiaolei; Bures, Mark Gregory; Kihara, Daisuke

    2015-07-16

    Virtual screening has been widely used in the drug discovery process. Ligand-based virtual screening (LBVS) methods compare a library of compounds with a known active ligand. Two notable advantages of LBVS methods are that they do not require structural information of a target receptor and that they are faster than structure-based methods. LBVS methods can be classified based on the complexity of ligand structure information utilized: one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D). Unlike 1D and 2D methods, 3D methods can have enhanced performance since they treat the conformational flexibility of compounds. In this paper, a number of 3D methods will be reviewed. In addition, four representative 3D methods were benchmarked to understand their performance in virtual screening. Specifically, we tested overall performance in key aspects including the ability to find dissimilar active compounds, and computational speed.
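
    The ranking idea common to all LBVS methods can be shown in its simplest form. The toy sketch below uses bit-set Tanimoto similarity as a stand-in for the 3D shape and pharmacophore scores the review covers; the fingerprints are invented.

        def tanimoto(a: set, b: set) -> float:
            """Tanimoto coefficient on feature-bit sets."""
            return len(a & b) / len(a | b) if (a | b) else 0.0

        # Toy fingerprints: integers stand in for "on" bits of a real encoding.
        active = {1, 4, 7, 9, 23}
        library = {
            "cmpd_A": {1, 4, 7, 10, 23},
            "cmpd_B": {2, 5, 8},
            "cmpd_C": {1, 4, 9, 23, 40, 41},
        }
        # Rank the library by similarity to the known active, best first.
        for name in sorted(library, key=lambda k: tanimoto(active, library[k]),
                           reverse=True):
            print(name, round(tanimoto(active, library[name]), 3))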

  17. Three-Dimensional Compound Comparison Methods and Their Application in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Woong-Hee Shin

    2015-07-01

    Full Text Available Virtual screening has been widely used in the drug discovery process. Ligand-based virtual screening (LBVS) methods compare a library of compounds with a known active ligand. Two notable advantages of LBVS methods are that they do not require structural information of a target receptor and that they are faster than structure-based methods. LBVS methods can be classified based on the complexity of ligand structure information utilized: one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D). Unlike 1D and 2D methods, 3D methods can have enhanced performance since they treat the conformational flexibility of compounds. In this paper, a number of 3D methods will be reviewed. In addition, four representative 3D methods were benchmarked to understand their performance in virtual screening. Specifically, we tested overall performance in key aspects including the ability to find dissimilar active compounds, and computational speed.

  18. Fastest Rates for Stochastic Mirror Descent Methods

    KAUST Repository

    Hanzely, Filip

    2018-03-20

    Relative smoothness - a notion introduced by Birnbaum et al. (2011) and rediscovered by Bauschke et al. (2016) and Lu et al. (2016) - generalizes the standard notion of smoothness typically used in the analysis of gradient-type methods. In this work we take ideas from the well-studied field of stochastic convex optimization and use them to obtain faster algorithms for minimizing relatively smooth functions. We propose and analyze two new algorithms: Relative Randomized Coordinate Descent (relRCD) and Relative Stochastic Gradient Descent (relSGD), both generalizing famous algorithms in the standard smooth setting. The methods we propose can in fact be seen as particular instances of stochastic mirror descent algorithms. One of them, relRCD, is the first stochastic variant of the mirror descent algorithm with a linear convergence rate.
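
    For reference, the central definition behind these results can be stated compactly. The LaTeX sketch below gives the standard notion of relative smoothness (following Bauschke et al. 2016 and Lu et al. 2016) and the mirror-descent step built on it; the notation is ours, not quoted from the paper.

        % f is L-smooth relative to a reference function h if, for all x and y,
        \[
          f(x) \;\le\; f(y) + \langle \nabla f(y),\, x - y \rangle + L\, D_h(x, y),
          \qquad
          D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle,
        \]
        % where D_h is the Bregman divergence of h (taking h(x) = ||x||^2 / 2
        % recovers ordinary L-smoothness). The deterministic mirror-descent
        % step is then
        \[
          x_{k+1} = \operatorname*{arg\,min}_{x}\;
            \Big\{ \langle \nabla f(x_k),\, x \rangle + L\, D_h(x, x_k) \Big\},
        \]
        % of which relRCD and relSGD are stochastic variants.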

  19. Fastest Rates for Stochastic Mirror Descent Methods

    KAUST Repository

    Hanzely, Filip; Richtarik, Peter

    2018-01-01

    Relative smoothness - a notion introduced by Birnbaum et al. (2011) and rediscovered by Bauschke et al. (2016) and Lu et al. (2016) - generalizes the standard notion of smoothness typically used in the analysis of gradient-type methods. In this work we take ideas from the well-studied field of stochastic convex optimization and use them to obtain faster algorithms for minimizing relatively smooth functions. We propose and analyze two new algorithms: Relative Randomized Coordinate Descent (relRCD) and Relative Stochastic Gradient Descent (relSGD), both generalizing famous algorithms in the standard smooth setting. The methods we propose can in fact be seen as particular instances of stochastic mirror descent algorithms. One of them, relRCD, is the first stochastic variant of the mirror descent algorithm with a linear convergence rate.

  20. The self-organizing fractal theory as a universal discovery method: the phenomenon of life

    Directory of Open Access Journals (Sweden)

    Kurakin Alexei

    2011-03-01

    Full Text Available Abstract A universal discovery method potentially applicable to all disciplines studying organizational phenomena has been developed. This method takes advantage of a new form of global symmetry, namely, scale-invariance of self-organizational dynamics of energy/matter at all levels of organizational hierarchy, from elementary particles through cells and organisms to the Universe as a whole. The method is based on an alternative conceptualization of physical reality postulating that the energy/matter comprising the Universe is far from equilibrium, that it exists as a flow, and that it develops via self-organization in accordance with the empirical laws of nonequilibrium thermodynamics. It is postulated that the energy/matter flowing through and comprising the Universe evolves as a multiscale, self-similar structure-process, i.e., as a self-organizing fractal. This means that certain organizational structures and processes are scale-invariant and are reproduced at all levels of the organizational hierarchy. Being a form of symmetry, scale-invariance naturally lends itself to a new discovery method that allows for the deduction of missing information by comparing scale-invariant organizational patterns across different levels of the organizational hierarchy. An application of the new discovery method to life sciences reveals that moving electrons represent a keystone physical force (flux) that powers, animates, informs, and binds all living structures-processes into a planetary-wide, multiscale system of electron flow/circulation, and that all living organisms and their larger-scale organizations emerge to function as electron transport networks that are supported by and, at the same time, support the flow of electrons down the Earth's redox gradient maintained along the core-mantle-crust-ocean-atmosphere axis of the planet. The presented findings lead to a radically new perspective on the nature and origin of life, suggesting that living matter…

  1. The self-organizing fractal theory as a universal discovery method: the phenomenon of life.

    Science.gov (United States)

    Kurakin, Alexei

    2011-03-29

    A universal discovery method potentially applicable to all disciplines studying organizational phenomena has been developed. This method takes advantage of a new form of global symmetry, namely, scale-invariance of self-organizational dynamics of energy/matter at all levels of organizational hierarchy, from elementary particles through cells and organisms to the Universe as a whole. The method is based on an alternative conceptualization of physical reality postulating that the energy/matter comprising the Universe is far from equilibrium, that it exists as a flow, and that it develops via self-organization in accordance with the empirical laws of nonequilibrium thermodynamics. It is postulated that the energy/matter flowing through and comprising the Universe evolves as a multiscale, self-similar structure-process, i.e., as a self-organizing fractal. This means that certain organizational structures and processes are scale-invariant and are reproduced at all levels of the organizational hierarchy. Being a form of symmetry, scale-invariance naturally lends itself to a new discovery method that allows for the deduction of missing information by comparing scale-invariant organizational patterns across different levels of the organizational hierarchy. An application of the new discovery method to life sciences reveals that moving electrons represent a keystone physical force (flux) that powers, animates, informs, and binds all living structures-processes into a planetary-wide, multiscale system of electron flow/circulation, and that all living organisms and their larger-scale organizations emerge to function as electron transport networks that are supported by and, at the same time, support the flow of electrons down the Earth's redox gradient maintained along the core-mantle-crust-ocean-atmosphere axis of the planet. The presented findings lead to a radically new perspective on the nature and origin of life, suggesting that living matter is an organizational state…

  2. False-Positive Rate Determination of Protein Target Discovery using a Covalent Modification- and Mass Spectrometry-Based Proteomics Platform

    Science.gov (United States)

    Strickland, Erin C.; Geer, M. Ariel; Hong, Jiyong; Fitzgerald, Michael C.

    2014-01-01

    Detection and quantitation of protein-ligand binding interactions is important in many areas of biological research. Stability of proteins from rates of oxidation (SPROX) is an energetics-based technique for identifying the protein targets of ligands in complex biological mixtures. Knowing the false-positive rate of protein target discovery in proteome-wide SPROX experiments is important for the correct interpretation of results. Reported here are the results of a control SPROX experiment in which chemical denaturation data is obtained on the proteins in two samples that originated from the same yeast lysate, as would be done in a typical SPROX experiment except that one sample would be spiked with the test ligand. False-positive rates of 1.2-2.2% were observed and were attributed largely to random error in the analysis of the isobaric mass tag (e.g., iTRAQ®) reporter ions used for peptide quantitation. Our results also suggest that technical replicates can be used to effectively eliminate such false positives that result from this random error, as is demonstrated in a SPROX experiment to identify yeast protein targets of the drug, manassantin A. The impact of ion purity in the tandem mass spectral analyses and of background oxidation on the false-positive rate of protein target discovery using SPROX is also discussed.

  3. SNP discovery in nonmodel organisms: strand bias and base-substitution errors reduce conversion rates.

    Science.gov (United States)

    Gonçalves da Silva, Anders; Barendse, William; Kijas, James W; Barris, Wes C; McWilliam, Sean; Bunch, Rowan J; McCullough, Russell; Harrison, Blair; Hoelzel, A Rus; England, Phillip R

    2015-07-01

    Single nucleotide polymorphisms (SNPs) have become the marker of choice for genetic studies in organisms of conservation, commercial or biological interest. Most SNP discovery projects in nonmodel organisms apply a strategy for identifying putative SNPs based on filtering rules that account for random sequencing errors. Here, we analyse data used to develop 4723 novel SNPs for the commercially important deep-sea fish, orange roughy (Hoplostethus atlanticus), to assess the impact of not accounting for systematic sequencing errors when filtering identified polymorphisms during SNP discovery. We used SAMtools to identify polymorphisms in a velvet assembly of genomic DNA sequence data from seven individuals. The resulting set of polymorphisms was filtered to minimize 'bycatch', i.e. polymorphisms caused by sequencing or assembly error. An Illumina Infinium SNP chip was used to genotype a final set of 7714 polymorphisms across 1734 individuals. Five predictors were examined for their effect on the probability of obtaining an assayable SNP: depth of coverage, number of reads that support a variant, polymorphism type (e.g. A/C), strand bias and Illumina SNP probe design score. Our results indicate that filtering out systematic sequencing errors could substantially improve the efficiency of SNP discovery. We show that BLASTX can be used as an efficient tool to identify single-copy genomic regions in the absence of a reference genome. The results have implications for research aiming to identify assayable SNPs and build SNP genotyping assays for nonmodel organisms. © 2014 John Wiley & Sons Ltd.
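
    A predictor analysis of the kind described can be mocked up as below; the simulated data, coefficients and column names are assumptions for illustration, not the study's fields or estimates.

        # Hypothetical logistic model of P(assayable SNP) given coverage depth,
        # supporting reads, strand bias and probe design score. Data simulated.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 2000
        depth = rng.poisson(20, n)
        support = rng.binomial(depth, 0.4)
        strand_bias = rng.beta(2, 2, n)            # 0.5 = balanced strands
        design_score = rng.uniform(0, 1, n)
        logit = -1 + 0.05 * depth - 4 * np.abs(strand_bias - 0.5) + 2 * design_score
        assayable = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        X = sm.add_constant(np.column_stack([depth, support,
                                             np.abs(strand_bias - 0.5),
                                             design_score]))
        fit = sm.Logit(assayable, X).fit(disp=0)
        print(fit.params.round(2))                 # effect of each predictor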

  4. Computational methods for 2D materials: discovery, property characterization, and application design.

    Science.gov (United States)

    Paul, J T; Singh, A K; Dong, Z; Zhuang, H; Revard, B C; Rijal, B; Ashton, M; Linscheid, A; Blonsky, M; Gluhovic, D; Guo, J; Hennig, R G

    2017-11-29

    The discovery of two-dimensional (2D) materials comes at a time when computational methods are mature and can predict novel 2D materials, characterize their properties, and guide the design of 2D materials for applications. This article reviews the recent progress in computational approaches for 2D materials research. We discuss the computational techniques and provide an overview of the ongoing research in the field. We begin with an overview of known 2D materials, common computational methods, and available cyber-infrastructures. We then move on to the discovery of novel 2D materials, discussing the stability criteria for 2D materials, computational methods for structure prediction, and interactions of monolayers with electrochemical and gaseous environments. Next, we describe the computational characterization of the 2D materials' electronic, optical, magnetic, and superconducting properties and the response of these properties to applied mechanical strain and electric fields. From there, we move on to discuss the structure and properties of defects in 2D materials, and describe methods for 2D materials device simulations. We conclude by providing an outlook on the needs and challenges for future developments in the field of computational research for 2D materials.

  5. Methods for Discovery and Surveillance of Pathogens in Hotspots of Emerging Infectious Diseases

    DEFF Research Database (Denmark)

    Jensen, Randi Holm

    Viruses are everywhere, and can infect all living things. They are constantly evolving, and new diseases are emerging as a result. Consequently, they have always been of interest to scientists and people in general. Several outbreaks of emerging infectious diseases transmitting from animals...... to virion enrichment compared to samples with no enrichment. We have used these methods to perform pathogen discovery in faecal samples collected from small mammals in Sierra Leone, to describe the presence of pathogenic viruses and bacteria in this area. From these data we were furthermore able to acquire...

  6. Systems-based biological concordance and predictive reproducibility of gene set discovery methods in cardiovascular disease.

    Science.gov (United States)

    Azuaje, Francisco; Zheng, Huiru; Camargo, Anyela; Wang, Haiying

    2011-08-01

    The discovery of novel disease biomarkers is a crucial challenge for translational bioinformatics. Demonstrating both their classification power and their reproducibility across independent datasets is an essential requirement for assessing their potential clinical relevance. Small datasets and the multiplicity of putative biomarker sets may explain the lack of predictive reproducibility. Studies based on pathway-driven discovery approaches have suggested that, despite such discrepancies, the resulting putative biomarkers tend to be implicated in common biological processes. Investigations of this problem have mainly focused on datasets derived from cancer research. We investigated the predictive and functional concordance of five methods for discovering putative biomarkers in four independently generated datasets from the cardiovascular disease domain. A diversity of biosignatures was identified by the different methods. However, we found strong biological process concordance between them, especially in the case of methods based on gene set analysis. With a few exceptions, we observed a lack of classification reproducibility using independent datasets. Partial overlaps between our putative sets of biomarkers and those of the primary studies exist. Despite the observed limitations, pathway-driven or gene set analysis can predict potentially novel biomarkers and can jointly point to biomedically relevant underlying molecular mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Bioanalytical methods for food allergy diagnosis, allergen detection and new allergen discovery.

    Science.gov (United States)

    Gasilova, Natalia; Girault, Hubert H

    2015-01-01

    For effective monitoring and prevention of the food allergy, one of the emerging health problems nowadays, existing diagnostic procedures and allergen detection techniques are constantly improved. Meanwhile, new methods are also developed, and more and more putative allergens are discovered. This review describes traditional methods and summarizes recent advances in the fast-evolving field of in vitro food allergy diagnosis, allergen detection in food products and discovery of new allergenic molecules. Special attention is paid to the new diagnostic methods under laboratory development, such as various immuno- and aptamer-based assays, including immunoaffinity capillary electrophoresis. The latter technique shows the importance of MS application not only for allergen detection but also for allergy diagnosis.

  8. Application of Combination High-Throughput Phenotypic Screening and Target Identification Methods for the Discovery of Natural Product-Based Combination Drugs.

    Science.gov (United States)

    Isgut, Monica; Rao, Mukkavilli; Yang, Chunhua; Subrahmanyam, Vangala; Rida, Padmashree C G; Aneja, Ritu

    2018-03-01

    Modern drug discovery efforts have had mediocre success rates with increasing developmental costs, and this has encouraged pharmaceutical scientists to seek innovative approaches. Recently with the rise of the fields of systems biology and metabolomics, network pharmacology (NP) has begun to emerge as a new paradigm in drug discovery, with a focus on multiple targets and drug combinations for treating disease. Studies on the benefits of drug combinations lay the groundwork for a renewed focus on natural products in drug discovery. Natural products consist of a multitude of constituents that can act on a variety of targets in the body to induce pharmacodynamic responses that may together culminate in an additive or synergistic therapeutic effect. Although natural products cannot be patented, they can be used as starting points in the discovery of potent combination therapeutics. The optimal mix of bioactive ingredients in natural products can be determined via phenotypic screening. The targets and molecular mechanisms of action of these active ingredients can then be determined using chemical proteomics, and by implementing a reverse pharmacokinetics approach. This review article provides evidence supporting the potential benefits of natural product-based combination drugs, and summarizes drug discovery methods that can be applied to this class of drugs. © 2017 Wiley Periodicals, Inc.

  9. NMR and pattern recognition methods in metabolomics: From data acquisition to biomarker discovery: A review

    International Nuclear Information System (INIS)

    Smolinska, Agnieszka; Blanchet, Lionel; Buydens, Lutgarde M.C.; Wijmenga, Sybren S.

    2012-01-01

    Highlights: ► Procedures for acquisition of different biofluids by NMR. ► Recent developments in metabolic profiling of different biofluids by NMR are presented. ► The crucial steps involved in data preprocessing and multivariate chemometric analysis are reviewed. ► Emphasis is given to recent findings on Multiple Sclerosis via NMR and pattern recognition methods. - Abstract: Metabolomics is the discipline where endogenous and exogenous metabolites are assessed, identified and quantified in different biological samples. Metabolites are crucial components of a biological system and highly informative about its functional state, due to their closeness to functional endpoints and to the organism's phenotypes. Nuclear Magnetic Resonance (NMR) spectroscopy, next to Mass Spectrometry (MS), is one of the main metabolomics analytical platforms. The technological developments in the field of NMR spectroscopy have enabled the identification and quantitative measurement of many metabolites in a single biofluid sample in a non-targeted and non-destructive manner. The combination of NMR spectra of biofluids with pattern recognition methods has driven forward the application of metabolomics in the field of biomarker discovery. The importance of metabolomics in diagnostics, e.g. in identifying biomarkers or defining pathological status, has been growing exponentially, as evidenced by the number of published papers. In this review, we describe the developments in data acquisition and multivariate analysis of NMR-based metabolomics data, with particular emphasis on the metabolomics of Cerebrospinal Fluid (CSF) and biomarker discovery in Multiple Sclerosis (MScl).

  10. Discovery of novel heart rate-associated loci using the Exome Chip

    DEFF Research Database (Denmark)

    van den Berg, Marten E; Warren, Helen R; Cabrera, Claudia P

    2017-01-01

    Resting heart rate is a heritable trait, and an increase in heart rate is associated with increased mortality risk. Genome-wide association study analyses have found loci associated with resting heart rate; at the time of our study these loci explained 0.9% of the variation. This study aims to di......) and fetal muscle samples by including our novel variants. Our findings advance the knowledge of the genetic architecture of heart rate, and indicate new candidate genes for follow-up functional studies....

  11. Macro cell assisted cell discovery method for 5G mobile networks

    DEFF Research Database (Denmark)

    Marcano, Andrea; Christiansen, Henrik Lehrmann

    2016-01-01

    , and requires a new system design. The aspects concerning the impact of using mmWave frequencies on the medium access (MAC) layer are one of the topics that need to be further analyzed. In this article we focus on the cell discovery process of the MAC layer for mmWave communications. A new approach assuming...... a joint search of the user equipment (UE) between the mmWave small cell (SC) and the macro cell (MC) is proposed. The performance of this method is analyzed and compared with existing methods. The results show that using the MC as an aid during the search process can allow for up to 99% improvement in terms…

  12. cn.FARMS: a latent variable model to detect copy number variations in microarray data with a low false discovery rate.

    Science.gov (United States)

    Clevert, Djork-Arné; Mitterecker, Andreas; Mayr, Andreas; Klambauer, Günter; Tuefferd, Marianne; De Bondt, An; Talloen, Willem; Göhlmann, Hinrich; Hochreiter, Sepp

    2011-07-01

    Cost-effective oligonucleotide genotyping arrays like the Affymetrix SNP 6.0 are still the predominant technique to measure DNA copy number variations (CNVs). However, CNV detection methods for microarrays overestimate both the number and the size of CNV regions and, consequently, suffer from a high false discovery rate (FDR). A high FDR means that many CNVs are wrongly detected and therefore not associated with a disease in a clinical study, though correction for multiple testing takes them into account and thereby decreases the study's discovery power. For controlling the FDR, we propose a probabilistic latent variable model, 'cn.FARMS', which is optimized by a Bayesian maximum a posteriori approach. cn.FARMS controls the FDR through the information gain of the posterior over the prior. The prior represents the null hypothesis of copy number 2 for all samples from which the posterior can only deviate by strong and consistent signals in the data. On HapMap data, cn.FARMS clearly outperformed the two most prevalent methods with respect to sensitivity and FDR. The software cn.FARMS is publicly available as a R package at http://www.bioinf.jku.at/software/cnfarms/cnfarms.html.

  13. NMR and pattern recognition methods in metabolomics: From data acquisition to biomarker discovery: A review

    Energy Technology Data Exchange (ETDEWEB)

    Smolinska, Agnieszka, E-mail: A.Smolinska@science.ru.nl [Institute for Molecules and Materials, Radboud University Nijmegen, Nijmegen (Netherlands); Blanchet, Lionel [Institute for Molecules and Materials, Radboud University Nijmegen, Nijmegen (Netherlands); Department of Biochemistry, Nijmegen Centre for Molecular Life Sciences, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands); Buydens, Lutgarde M.C.; Wijmenga, Sybren S. [Institute for Molecules and Materials, Radboud University Nijmegen, Nijmegen (Netherlands)

    2012-10-31

    Highlights: ► Procedures for acquisition of different biofluids by NMR. ► Recent developments in metabolic profiling of different biofluids by NMR are presented. ► The crucial steps involved in data preprocessing and multivariate chemometric analysis are reviewed. ► Emphasis is given to recent findings on Multiple Sclerosis via NMR and pattern recognition methods. - Abstract: Metabolomics is the discipline where endogenous and exogenous metabolites are assessed, identified and quantified in different biological samples. Metabolites are crucial components of a biological system and highly informative about its functional state, due to their closeness to functional endpoints and to the organism's phenotypes. Nuclear Magnetic Resonance (NMR) spectroscopy, next to Mass Spectrometry (MS), is one of the main metabolomics analytical platforms. The technological developments in the field of NMR spectroscopy have enabled the identification and quantitative measurement of many metabolites in a single biofluid sample in a non-targeted and non-destructive manner. The combination of NMR spectra of biofluids with pattern recognition methods has driven forward the application of metabolomics in the field of biomarker discovery. The importance of metabolomics in diagnostics, e.g. in identifying biomarkers or defining pathological status, has been growing exponentially, as evidenced by the number of published papers. In this review, we describe the developments in data acquisition and multivariate analysis of NMR-based metabolomics data, with particular emphasis on the metabolomics of Cerebrospinal Fluid (CSF) and biomarker discovery in Multiple Sclerosis (MScl).

  14. An Evaluation of Active Learning Causal Discovery Methods for Reverse-Engineering Local Causal Pathways of Gene Regulation

    Science.gov (United States)

    Ma, Sisi; Kemmeren, Patrick; Aliferis, Constantin F.; Statnikov, Alexander

    2016-01-01

    Reverse-engineering of causal pathways that implicate diseases and vital cellular functions is a fundamental problem in biomedicine. Discovery of the local causal pathway of a target variable (that consists of its direct causes and direct effects) is essential for effective intervention and can facilitate accurate diagnosis and prognosis. Recent research has provided several active learning methods that can leverage passively observed high-throughput data to draft causal pathways and then refine the inferred relations with a limited number of experiments. The current study provides a comprehensive evaluation of the performance of active learning methods for local causal pathway discovery in real biological data. Specifically, 54 active learning methods/variants from 3 families of algorithms were applied for local causal pathways reconstruction of gene regulation for 5 transcription factors in S. cerevisiae. Four aspects of the methods’ performance were assessed, including adjacency discovery quality, edge orientation accuracy, complete pathway discovery quality, and experimental cost. The results of this study show that some methods provide significant performance benefits over others and therefore should be routinely used for local causal pathway discovery tasks. This study also demonstrates the feasibility of local causal pathway reconstruction in real biological systems with significant quality and low experimental cost. PMID:26939894

  15. Discovery of rapid whistlers close to Jupiter implying lightning rates similar to those on Earth

    Science.gov (United States)

    Kolmašová, Ivana; Imai, Masafumi; Santolík, Ondřej; Kurth, William S.; Hospodarsky, George B.; Gurnett, Donald A.; Connerney, John E. P.; Bolton, Scott J.

    2018-06-01

    Electrical currents in atmospheric lightning strokes generate impulsive radio waves in a broad range of frequencies, called atmospherics. These waves can be modified by their passage through the plasma environment of a planet into the form of dispersed whistlers [1]. In the Io plasma torus around Jupiter, Voyager 1 detected whistlers as several-seconds-long slowly falling tones at audible frequencies [2]. These measurements were the first evidence of lightning at Jupiter. Subsequently, Jovian lightning was observed by optical cameras on board several spacecraft in the form of localized flashes of light [3-7]. Here, we show measurements by the Waves instrument [8] on board the Juno spacecraft [9-11] that indicate observations of Jovian rapid whistlers: a form of dispersed atmospherics at extremely short timescales of several milliseconds to several tens of milliseconds. On the basis of these measurements, we report over 1,600 lightning detections, the largest set obtained to date. The data were acquired during close approaches to Jupiter between August 2016 and September 2017, at radial distances below 5 Jovian radii. We detected up to four lightning strokes per second, similar to rates in thunderstorms on Earth [12] and six times the peak rates from the Voyager 1 observations [13].

  16. High Repetition Rate Thermometry System And Method

    KAUST Repository

    Chrystie, Robin

    2015-05-14

    A system and method for rapid thermometry using intrapulse spectroscopy can include a laser for propagating pulses of electromagnetic radiation to a region. Each of the pulses can be chirped. The pulses from the region can be detected. An intrapulse absorbance spectrum can be determined from the pulses. An instantaneous temperature of the region based on the intrapulse absorbance spectrum can be determined.

  17. High Repetition Rate Thermometry System And Method

    KAUST Repository

    Chrystie, Robin; Farooq, Aamir

    2015-01-01

    A system and method for rapid thermometry using intrapulse spectroscopy can include a laser for propagating pulses of electromagnetic radiation to a region. Each of the pulses can be chirped. The pulses from the region can be detected. An intrapulse absorbance spectrum can be determined from the pulses. An instantaneous temperature of the region based on the intrapulse absorbance spectrum can be determined.

  18. NetiNeti: discovery of scientific names from text using machine learning methods

    Directory of Open Access Journals (Sweden)

    Akella Lakshmi

    2012-08-01

    Full Text Available Abstract Background A scientific name for an organism can be associated with almost all biological data. Name identification is an important step in many text mining tasks aiming to extract useful information from biological, biomedical and biodiversity text sources. A scientific name acts as an important metadata element to link biological information. Results We present NetiNeti (Name Extraction from Textual Information-Name Extraction for Taxonomic Indexing), a machine learning based approach for the recognition of scientific names, including the discovery of new species names from text, that also handles misspellings, OCR errors and other variations in names. The system generates candidate names using rules for scientific names and applies probabilistic machine learning methods to classify names based on structural features of candidate names and features derived from their contexts. NetiNeti can also disambiguate scientific names from other names using the contextual information. We evaluated NetiNeti on legacy biodiversity texts and biomedical literature (MEDLINE). NetiNeti performs better (precision = 98.9% and recall = 70.5%) compared to a popular dictionary-based approach (precision = 97.5% and recall = 54.3%) on a 600-page biodiversity book that was manually marked by an annotator. On a small set of PubMed Central's full-text articles annotated with scientific names, the precision and recall values are 98.5% and 96.2%, respectively. NetiNeti found more than 190,000 unique binomial and trinomial names in more than 1,880,000 PubMed records when used on the full MEDLINE database. NetiNeti also successfully identifies almost all of the new species names mentioned within web pages. Conclusions We present NetiNeti, a machine learning based approach for the identification and discovery of scientific names. The system implementing the approach can be accessed at http://namefinding.ubio.org.
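
    The candidate-generation step can be caricatured with a single rule; the sketch below uses a toy binomial-name regex and shows why a downstream classifier is needed (the rule over-generates). The pattern is far simpler than NetiNeti's actual rules.

        import re

        # Toy rule: a capitalized genus-like word followed by a lowercase epithet.
        BINOMIAL = re.compile(r'\b([A-Z][a-z]+)\s([a-z]{3,})\b')

        def candidate_names(text):
            return [m.group(0) for m in BINOMIAL.finditer(text)]

        text = ("The deep-sea fish Hoplostethus atlanticus and the yeast "
                "Saccharomyces cerevisiae were sampled.")
        # Note the false positive "The deep" in the output: this over-generation
        # is exactly what the probabilistic classifier is there to filter out.
        print(candidate_names(text))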

  19. Impact of the Choice of Normalization Method on Molecular Cancer Class Discovery Using Nonnegative Matrix Factorization.

    Science.gov (United States)

    Yang, Haixuan; Seoighe, Cathal

    2016-01-01

    Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. By the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms in the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery, under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
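
    The max-norm normalization the authors recommend is easy to state in code. The sketch below, assuming sklearn's NMF as a stand-in for the authors' Matlab implementation and random non-negative data, normalizes each metagene to unit maximum and clusters samples by their dominant metagene.

        import numpy as np
        from sklearn.decomposition import NMF

        X = np.abs(np.random.default_rng(0).normal(size=(500, 40)))  # genes x samples
        k = 3
        model = NMF(n_components=k, init="nndsvd", max_iter=500, random_state=0)
        W = model.fit_transform(X)          # genes x k   (metagenes)
        H = model.components_               # k x samples (metagene expression)

        # Max-norm normalization: scale each metagene so its largest H entry is 1,
        # compensating in W so that the product W @ H is unchanged.
        scale = H.max(axis=1)
        H_norm = H / scale[:, None]
        W_norm = W * scale[None, :]

        labels = H_norm.argmax(axis=0)      # cluster samples by dominant metagene
        print(labels)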

  20. Discovery of temporal and disease association patterns in condition-specific hospital utilization rates.

    Directory of Open Access Journals (Sweden)

    Julian S Haimovich

    Full Text Available Identifying temporal variation in hospitalization rates may provide insights about disease patterns and thereby inform research, policy, and clinical care. However, the majority of medical conditions have not been studied for their potential seasonal variation. The objective of this study was to apply a data-driven approach to characterize temporal variation in condition-specific hospitalizations. Using a dataset of 34 million inpatient discharges gathered from hospitals in New York State from 2008-2011, we grouped all discharges into 263 clinical conditions based on the principal discharge diagnosis using Clinical Classification Software in order to mitigate the limitation that administrative claims data reflect clinical conditions to varying specificity. After applying Seasonal-Trend Decomposition by LOESS, we estimated the periodicity of the seasonal component using spectral analysis and applied harmonic regression to calculate the amplitude and phase of the condition's seasonal utilization pattern. We also introduced four new indices of temporal variation: mean oscillation width, seasonal coefficient, trend coefficient, and linearity of the trend. Finally, K-means clustering was used to group conditions across these four indices to identify common temporal variation patterns. Of all 263 clinical conditions considered, 164 demonstrated statistically significant seasonality. Notably, we identified conditions for which seasonal variation has not been previously described such as ovarian cancer, tuberculosis, and schizophrenia. Clustering analysis yielded three distinct groups of conditions based on multiple measures of seasonal variation. Our study was limited to New York State and results may not directly apply to other regions with distinct climates and health burden. A substantial proportion of medical conditions, larger than previously described, exhibit seasonal variation in hospital utilization. Moreover, the application of clustering…
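
    The decomposition-plus-harmonic-regression pipeline can be sketched on a synthetic monthly series; statsmodels' STL plays the role of Seasonal-Trend Decomposition by LOESS, and the least-squares harmonic fit recovers the amplitude and phase indices. All data and settings below are illustrative.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.seasonal import STL

        t = np.arange(48)                                     # 4 years, monthly
        y = 100 + 0.2 * t + 10 * np.sin(2 * np.pi * t / 12) \
            + np.random.default_rng(1).normal(0, 2, 48)
        series = pd.Series(y, index=pd.date_range("2008-01", periods=48, freq="MS"))

        seasonal = STL(series, period=12).fit().seasonal

        # Harmonic regression on the seasonal component: y_s = a*cos + b*sin
        X = np.column_stack([np.cos(2 * np.pi * t / 12), np.sin(2 * np.pi * t / 12)])
        a, b = np.linalg.lstsq(X, seasonal.values, rcond=None)[0]
        amplitude = np.hypot(a, b)
        phase = np.arctan2(b, a)      # radians at which the seasonal peak occurs
        print(round(amplitude, 2), round(phase, 2))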

  1. False Discovery Rates in PET and CT Studies with Texture Features: A Systematic Review.

    Directory of Open Access Journals (Sweden)

    Anastasia Chalkidou

    Full Text Available A number of recent publications have proposed that a family of image-derived indices, called texture features, can predict clinical outcome in patients with cancer. However, the investigation of multiple indices on a single data set can lead to significant inflation of type-I errors. We report a systematic review of the type-I error inflation in such studies and review the evidence regarding associations between patient outcome and texture features derived from positron emission tomography (PET) or computed tomography (CT) images. For study identification PubMed and Scopus were searched (1/2000-9/2013) using combinations of the keywords texture, prognostic, predictive and cancer. Studies were divided into three categories according to the sources of the type-I error inflation and the use or not of an independent validation dataset. For each study, the true type-I error probability and the adjusted level of significance were estimated using the optimum cut-off approach correction, and the Benjamini-Hochberg method. To demonstrate explicitly the variable selection bias in these studies, we re-analyzed data from one of the published studies, but using 100 random variables substituted for the original image-derived indices. The significance of the random variables as potential predictors of outcome was examined using the analysis methods used in the identified studies. Fifteen studies were identified. After applying appropriate statistical corrections, an average type-I error probability of 76% (range: 34-99%) was estimated, with the majority of published results not reaching statistical significance. Only 3/15 studies used a validation dataset. For the 100 random variables examined, 10% proved to be significant predictors of survival when subjected to ROC and multiple hypothesis testing analysis. We found insufficient evidence to support a relationship between PET or CT texture features and patient survival. Further fit for purpose validation of these

  2. False Discovery Rates in PET and CT Studies with Texture Features: A Systematic Review.

    Science.gov (United States)

    Chalkidou, Anastasia; O'Doherty, Michael J; Marsden, Paul K

    2015-01-01

    A number of recent publications have proposed that a family of image-derived indices, called texture features, can predict clinical outcome in patients with cancer. However, the investigation of multiple indices on a single data set can lead to significant inflation of type-I errors. We report a systematic review of the type-I error inflation in such studies and review the evidence regarding associations between patient outcome and texture features derived from positron emission tomography (PET) or computed tomography (CT) images. For study identification PubMed and Scopus were searched (1/2000-9/2013) using combinations of the keywords texture, prognostic, predictive and cancer. Studies were divided into three categories according to the sources of the type-I error inflation and the use or not of an independent validation dataset. For each study, the true type-I error probability and the adjusted level of significance were estimated using the optimum cut-off approach correction, and the Benjamini-Hochberg method. To demonstrate explicitly the variable selection bias in these studies, we re-analyzed data from one of the published studies, but using 100 random variables substituted for the original image-derived indices. The significance of the random variables as potential predictors of outcome was examined using the analysis methods used in the identified studies. Fifteen studies were identified. After applying appropriate statistical corrections, an average type-I error probability of 76% (range: 34-99%) was estimated with the majority of published results not reaching statistical significance. Only 3/15 studies used a validation dataset. For the 100 random variables examined, 10% proved to be significant predictors of survival when subjected to ROC and multiple hypothesis testing analysis. We found insufficient evidence to support a relationship between PET or CT texture features and patient survival. Further fit for purpose validation of these image
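
    For reference, the Benjamini-Hochberg step-up procedure used for the corrections above is short enough to implement directly. This minimal sketch returns the hypotheses rejected at FDR level q; the example p-values are invented.

        # Sketch: Benjamini-Hochberg step-up procedure for FDR control.
        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            """Boolean mask of hypotheses rejected at FDR level q."""
            p = np.asarray(pvals, dtype=float)
            m = p.size
            order = np.argsort(p)
            below = p[order] <= q * np.arange(1, m + 1) / m
            reject = np.zeros(m, dtype=bool)
            if below.any():
                k = below.nonzero()[0].max()      # largest i with p_(i) <= q*i/m
                reject[order[:k + 1]] = True
            return reject

        print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]))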

  3. High School Graduation Rates: Alternative Methods and Implications

    Directory of Open Access Journals (Sweden)

    Jing Miao

    2004-10-01

    Full Text Available The No Child Left Behind Act has brought great attention to the high school graduation rate as one of the mandatory accountability measures for public school systems. However, there is no consensus on how to calculate the high school graduation rate given the lack of longitudinal databases that track individual students. This study reviews literature on and practices in reporting high school graduation rates, compares graduation rate estimates yielded by alternative methods, and estimates discrepancies between alternative results at national, state, and state ethnic group levels. Regardless of the graduation rate method used, results indicate that high school graduation rates in the U.S. have been declining in recent years and that graduation rates for black and Hispanic students lag substantially behind those of white students. As to the preferred method, this study found no evidence that the conceptually more complex methods yield more accurate or valid graduation rate estimates than the simpler methods.

  4. How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project

    Science.gov (United States)

    Butler, Ricky W.; Hagen, George; Maddalon, Jeffrey M.; Munoz, Cesar A.; Narkawicz, Anthony; Dowek, Gilles

    2010-01-01

    In this paper we describe a process of algorithmic discovery that was driven by our goal of achieving complete, mechanically verified algorithms that compute conflict prevention bands for use in en route air traffic management. The algorithms were originally defined in the PVS specification language and have subsequently been implemented in Java and C++. We do not present the proofs in this paper: instead, we describe the process of discovery and the key ideas that enabled the final formal proof of correctness.

  5. Computational methods for a three-dimensional model of the petroleum-discovery process

    Science.gov (United States)

    Schuenemeyer, J.H.; Bawiec, W.J.; Drew, L.J.

    1980-01-01

    A discovery-process model devised by Drew, Schuenemeyer, and Root can be used to predict the amount of petroleum to be discovered in a basin from some future level of exploratory effort: the predictions are based on historical drilling and discovery data. Because marginal costs of discovery and production are a function of field size, the model can be used to make estimates of future discoveries within deposit size classes. The modeling approach is a geometric one in which the area searched is a function of the size and shape of the targets being sought. A high correlation is assumed between the surface-projection area of the fields and the volume of petroleum. To predict how much oil remains to be found, the area searched must be computed, and the basin size and discovery efficiency must be estimated. The basin is assumed to be explored randomly rather than by pattern drilling. The model may be used to compute independent estimates of future oil at different depth intervals for a play involving multiple producing horizons. We have written FORTRAN computer programs that are used with Drew, Schuenemeyer, and Root's model to merge the discovery and drilling information and perform the necessary computations to estimate undiscovered petroleum. These programs may be modified easily for the estimation of remaining quantities of commodities other than petroleum. © 1980.

  6. cn.MOPS: mixture of Poissons for discovering copy number variations in next-generation sequencing data with a low false discovery rate.

    Science.gov (United States)

    Klambauer, Günter; Schwarzbauer, Karin; Mayr, Andreas; Clevert, Djork-Arné; Mitterecker, Andreas; Bodenhofer, Ulrich; Hochreiter, Sepp

    2012-05-01

    Quantitative analyses of next-generation sequencing (NGS) data, such as the detection of copy number variations (CNVs), remain challenging. Current methods detect CNVs as changes in the depth of coverage along chromosomes. Technological or genomic variations in the depth of coverage thus lead to a high false discovery rate (FDR), even upon correction for GC content. In the context of association studies between CNVs and disease, a high FDR means many false CNVs, thereby decreasing the discovery power of the study after correction for multiple testing. We propose 'Copy Number estimation by a Mixture Of PoissonS' (cn.MOPS), a data processing pipeline for CNV detection in NGS data. In contrast to previous approaches, cn.MOPS incorporates modeling of depths of coverage across samples at each genomic position. Therefore, cn.MOPS is not affected by read count variations along chromosomes. Using a Bayesian approach, cn.MOPS decomposes variations in the depth of coverage across samples into integer copy numbers and noise by means of its mixture components and Poisson distributions, respectively. The noise estimate allows for reducing the FDR by filtering out detections having high noise that are likely to be false detections. We compared cn.MOPS with the five most popular methods for CNV detection in NGS data using four benchmark datasets: (i) simulated data, (ii) NGS data from a male HapMap individual with implanted CNVs from the X chromosome, (iii) data from HapMap individuals with known CNVs, (iv) high coverage data from the 1000 Genomes Project. cn.MOPS outperformed its five competitors in terms of precision (1-FDR) and recall for both gains and losses in all benchmark data sets. The software cn.MOPS is publicly available as an R package at http://www.bioinf.jku.at/software/cnmops/ and at Bioconductor.
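
    The central modeling idea, a mixture of Poissons across samples whose means are integer multiples of half a baseline rate, can be sketched at a single genomic segment as follows. This is a toy illustration with a fixed baseline and prior, not the EM-fitted cn.MOPS model.

        # Sketch: posterior over integer copy numbers per sample at one segment,
        # assuming Poisson read counts with mean proportional to copy number / 2.
        import numpy as np
        from scipy.stats import poisson

        counts = np.array([48, 52, 99, 51, 26, 50])    # reads per sample at one segment
        lam = np.median(counts)                        # baseline = copy number 2
        copy_numbers = np.arange(0, 9)
        rates = np.maximum(copy_numbers / 2.0, 0.01) * lam   # cn=0 gets a small floor
        prior = np.where(copy_numbers == 2, 0.9, 0.1 / 8)    # favour the null cn=2

        loglik = poisson.logpmf(counts[:, None], rates[None, :])
        post = np.exp(loglik) * prior
        post /= post.sum(axis=1, keepdims=True)
        print(post.argmax(axis=1))   # expected: [2 2 4 2 1 2]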

  7. An SEU rate prediction method for microprocessors of space applications

    International Nuclear Information System (INIS)

    Gao Jie; Li Qiang

    2012-01-01

    In this article, the relationship between the static SEU (Single Event Upset) rate and the dynamic SEU rate in microprocessors for satellites is studied using the process duty cycle concept and a fault injection technique. The results are compared to in-orbit flight monitoring data. The results show that the dynamic SEU rate obtained using the process duty cycle can reasonably estimate the in-orbit SEU rate of a microprocessor, and that fault injection is a workable technique for estimating the SEU rate. (authors)
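
    A minimal sketch of the duty-cycle estimate, under the assumption that the dynamic SEU rate is the static per-bit upset rate scaled by the bit count and the fraction of time the bits' contents matter; all numbers are invented placeholders.

        # Sketch: dynamic SEU rate from a static rate and a process duty cycle.
        static_rate = 1.2e-7     # upsets / bit / day from static testing (assumed)
        n_bits = 4096            # architecturally visible bits (assumed)
        duty_cycle = 0.35        # fraction of time a bit's content is live (assumed)

        dynamic_rate = static_rate * n_bits * duty_cycle
        print(f"predicted in-orbit SEU rate: {dynamic_rate:.3e} upsets/day")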

  8. A New Method to Calculate Internal Rate of Return

    Directory of Open Access Journals (Sweden)

    Azadeh Zandi

    2015-09-01

    Full Text Available A number of methods have been developed to choose the best capital investment projects, such as net present value and internal rate of return. The internal rate of return method is probably the most popular among managers and investors, but despite this popularity it has serious drawbacks and limitations. After decades of efforts by economists and experts to improve the method and its shortcomings, Magni (2010) revealed a new approach that solves most of the problems of the internal rate of return method. This paper presents a new method that originates from Magni's approach but requires much simpler calculations and resolves the drawbacks of the internal rate of return method.
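
    For context, the classical internal rate of return that the paper builds on is the root of the net present value function; a minimal bisection sketch follows. This illustrates plain IRR only, not Magni's approach or the new method the paper proposes.

        # Sketch: IRR as the root of NPV(r) = sum_t CF_t / (1+r)^t, found by bisection.
        def npv(rate, cashflows):
            return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

        def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
            f_lo = npv(lo, cashflows)
            for _ in range(200):
                mid = (lo + hi) / 2.0
                f_mid = npv(mid, cashflows)
                if abs(f_mid) < tol:
                    return mid
                if (f_lo < 0) == (f_mid < 0):   # keep the sign change bracketed
                    lo, f_lo = mid, f_mid
                else:
                    hi = mid
            return (lo + hi) / 2.0

        print(f"IRR = {irr([-1000, 300, 400, 500]):.4%}")   # one sign change -> unique IRR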

  9. Proposed test method for determining discharge rates from water closets

    DEFF Research Database (Denmark)

    Nielsen, V.; Fjord Jensen, T.

    At present the rates at which discharge takes place from sanitary appliances are mostly known only in the form of estimated average values. SBI has developed a measuring method enabling determination of the exact rate of discharge from a sanitary appliance as a function of time. The method depends...

  10. BICLUSTERING METHODS FOR RE-ORDERING DATA MATRICES IN SYSTEMS BIOLOGY, DRUG DISCOVERY AND TOXICOLOGY

    Directory of Open Access Journals (Sweden)

    Christodoulos A. Floudas

    2010-12-01

    Full Text Available Biclustering has emerged as an important problem in the analysis of gene expression data since genes may only jointly respond over a subset of conditions. Many of the methods for biclustering, and clustering algorithms in general, utilize simplified models or heuristic strategies for identifying the "best" grouping of elements according to some metric and cluster definition and thus result in suboptimal clusters. In the first part of the presentation, we present a rigorous approach to biclustering, OREO, which is based on the Optimal RE-Ordering of the rows and columns of a data matrix so as to globally minimize the dissimilarity metric [1,2]. The physical permutations of the rows and columns of the data matrix can be modeled as either a network flow problem or a traveling salesman problem. The performance of OREO is tested on several important data matrices arising in systems biology to validate the ability of the proposed method and compare it to existing biclustering and clustering methods. In the second part of the talk, we will focus on novel methods for clustering of data matrices that are very sparse [3]. These types of data matrices arise in drug discovery where the x- and y-axis of a data matrix can correspond to different functional groups for two distinct substituent sites on a molecular scaffold. Each possible x and y pair corresponds to a single molecule which can be synthesized and tested for a certain property, such as percent inhibition of a protein function. For even moderate size matrices, synthesizing and testing a small fraction of the molecules is labor intensive and not economically feasible. Thus, it is of paramount importance to have a reliable method for guiding the synthesis process to select molecules that have a high probability of success. In the second part of the presentation, we introduce a new strategy to enable efficient substituent reordering and descriptor-free property estimation. Our approach casts

  11. Integrated Proteomic Pipeline Using Multiple Search Engines for a Proteogenomic Study with a Controlled Protein False Discovery Rate.

    Science.gov (United States)

    Park, Gun Wook; Hwang, Heeyoun; Kim, Kwang Hoe; Lee, Ju Yeon; Lee, Hyun Kyoung; Park, Ji Yeong; Ji, Eun Sun; Park, Sung-Kyu Robin; Yates, John R; Kwon, Kyung-Hoon; Park, Young Mok; Lee, Hyoung-Joo; Paik, Young-Ki; Kim, Jin Young; Yoo, Jong Shin

    2016-11-04

    In the Chromosome-Centric Human Proteome Project (C-HPP), false-positive identification by peptide spectrum matches (PSMs) after database searches is a major issue for proteogenomic studies using liquid-chromatography and mass-spectrometry-based large proteomic profiling. Here we developed a simple strategy for protein identification, with a controlled false discovery rate (FDR) at the protein level, using an integrated proteomic pipeline (IPP) that consists of the following four sequential steps. First, using three different search engines, SEQUEST, MASCOT, and MS-GF+, individual proteomic searches were performed against the neXtProt database. Second, the search results from the PSMs were combined using statistical evaluation tools including DTASelect and Percolator. Third, the peptide search scores were converted into E-scores normalized using an in-house program. Last, ProteinInferencer was used to filter the proteins containing two or more peptides with a controlled FDR of 1.0% at the protein level. Finally, we compared the performance of the IPP to a conventional proteomic pipeline (CPP) for protein identification using a controlled FDR of <1% at the protein level. Using the IPP, a total of 5756 proteins (vs 4453 using the CPP) including 477 alternative splicing variants (vs 182 using the CPP) were identified from human hippocampal tissue. In addition, a total of 10 missing proteins (vs 7 using the CPP) were identified with two or more unique peptides, and their tryptic peptides were validated using MS/MS spectral patterns from a repository database or their corresponding synthetic peptides. This study shows that the IPP effectively improved the identification of proteins, including alternative splicing variants and missing proteins, in human hippocampal tissues for the C-HPP. All RAW files used in this study were deposited in ProteomeXchange (PXD000395).
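
    In the spirit of the final filtering step, the sketch below keeps proteins supported by at least two peptides and applies a target-decoy score cutoff so the estimated protein-level FDR stays below 1%. It is a toy stand-in for DTASelect/Percolator/ProteinInferencer, with invented record fields.

        # Sketch: protein-level FDR filter with a two-peptide rule and target-decoy cutoff.
        def filter_proteins(records, fdr=0.01):
            """records: (protein_id, is_decoy, score, n_peptides); higher score = better."""
            pool = sorted((r for r in records if r[3] >= 2), key=lambda r: -r[2])
            kept, n_target, n_decoy = [], 0, 0
            for pid, is_decoy, score, npep in pool:
                n_decoy += is_decoy
                n_target += not is_decoy
                if n_target and n_decoy / n_target > fdr:
                    break                      # estimated FDR would exceed the cap
                if not is_decoy:
                    kept.append(pid)
            return kept

        demo = [("P1", False, 9.1, 4), ("P2", False, 8.7, 2), ("DECOY_1", True, 5.0, 2),
                ("P3", False, 4.2, 3), ("P4", False, 3.0, 1)]
        print(filter_proteins(demo))   # -> ['P1', 'P2']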

  12. High School Graduation Rates: Alternative Methods and Implications

    OpenAIRE

    Jing Miao; Walt Haney

    2004-01-01

    The No Child Left Behind Act has brought great attention to the high school graduation rate as one of the mandatory accountability measures for public school systems. However, there is no consensus on how to calculate the high school graduation rate given the lack of longitudinal databases that track individual students. This study reviews literature on and practices in reporting high school graduation rates, compares graduation rate estimates yielded from alternative methods, and estimates d...

  13. Evaluation Tool for the Application of Discovery Teaching Method in the Greek Environmental School Projects

    Science.gov (United States)

    Kalathaki, Maria

    2015-01-01

    Greek school community emphasizes on the discovery direction of teaching methodology in the school Environmental Education (EE) in order to promote Education for the Sustainable Development (ESD). In ESD school projects the used methodology is experiential teamwork for inquiry based learning. The proposed tool checks whether and how a school…

  14. Trend analysis of time-series data: A novel method for untargeted metabolite discovery

    NARCIS (Netherlands)

    Peters, S.; Janssen, H.-G.; Vivó-Truyols, G.

    2010-01-01

    A new strategy for biomarker discovery is presented that uses time-series metabolomics data. Data sets from samples analysed at different time points after an intervention are searched for compounds that show a meaningful trend following the intervention. Obviously, this requires new data-analytical

  15. Accidental Discovery of Information on the User-Defined Social Web: A Mixed-Method Study

    Science.gov (United States)

    Lu, Chi-Jung

    2012-01-01

    Frequently interacting with other people or working in an information-rich environment can foster the "accidental discovery of information" (ADI) (Erdelez, 2000; McCay-Peet & Toms, 2010). With the increasing adoption of social web technologies, online user-participation communities and user-generated content have provided users the…

  16. Machine Learning Methods for Knowledge Discovery in Medical Data on Atherosclerosis

    Czech Academy of Sciences Publication Activity Database

    Serrano, J.I.; Tomečková, Marie; Zvárová, Jana

    2006-01-01

    Vol. 1 (2006), pp. 6-33. ISSN 1801-5603. Institutional research plan: CEZ:AV0Z10300504. Keywords: knowledge discovery; supervised machine learning; biomedical data mining; risk factors of atherosclerosis. Subject RIV: BB - Applied Statistics, Operational Research

  17. Digital One Disc One Compound Method for High Throughput Discovery of Prostate Cancer Targeting Ligands

    Science.gov (United States)

    2016-12-01

    …efficiency of drug discovery and make a potential impact on modern pharmaceutical industries. SUBJECT TERMS: ODOC carriers, barcode, split-mix … Array technologies can construct a high density of molecules in an array format on a solid substrate (microchip), from which the chemical … a plug-and-play microfluidic packaging scheme, known as Microflego (3D Microfluidic Assembly), to facilely establish complex 3D microfluidic networks …

  18. Sodium flow rate measurement method of annular linear induction pump

    International Nuclear Information System (INIS)

    Araseki, Hideo

    2011-01-01

    This report describes a method for measuring the sodium flow rate of annular linear induction pumps arranged in parallel, and its verification through an experiment and a numerical analysis. In the method, the leaked magnetic field is measured with measuring coils at the stator end on the outlet side and is correlated with the sodium flow rate. The experimental data and the numerical result indicate that the leaked magnetic field at the stator edge remains almost constant when the sodium flow rate changes, and that the change in the leaked magnetic field arising from a flow rate change is small compared with the overall leaked magnetic field. It is shown that the correlation between the leaked magnetic field and the sodium flow rate is almost linear owing to this feature of the leaked magnetic field, which indicates the applicability of the method to small-scale annular linear induction pumps. (author)
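
    Because the reported correlation is almost linear, a two-coefficient least-squares calibration suffices to convert the measured leaked-field signal into a flow estimate; the sketch below uses invented loop data.

        # Sketch: linear calibration from leaked-field signal to sodium flow rate.
        import numpy as np

        flow = np.array([0.0, 5.0, 10.0, 15.0, 20.0])     # m^3/min on a test loop (invented)
        coil = np.array([2.01, 2.20, 2.41, 2.58, 2.79])   # leaked-field signal, mV (invented)

        slope, intercept = np.polyfit(coil, flow, 1)       # invert: flow from signal
        print(f"flow ~= {slope:.1f} * signal + {intercept:.1f}")
        print("estimate at 2.5 mV:", slope * 2.5 + intercept)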

  19. Review of methods for assessing the discount rate in investment analysis

    Directory of Open Access Journals (Sweden)

    Yamaletdinova Guzel Hamidullovna

    2011-08-01

    Full Text Available The article examines current methods of calculating the discount rate in investment analysis and business valuation, and analyzes the key problems of applying the various techniques in the context of the Russian economy.

  20. The Implementation of Discovery Learning Method to Increase Learning Outcomes and Motivation of Student in Senior High School

    Directory of Open Access Journals (Sweden)

    Nanda Saridewi

    2017-11-01

    Full Text Available Based on observations of grade XI senior high school students, daily student test scores were low owing to the limited role of students in the learning process. This classroom action research aims to improve learning outcomes and student motivation through the discovery learning method in colloidal material. The study uses the approach developed by Lewin, consisting of planning, action, observation, and reflection. Data collection techniques used questionnaires and end-of-cycle ability tests. Based on the research results, discovery learning had a positive influence on student learning: the students' average score increased from 74 in the first cycle to 90.3 in the second cycle, and student motivation improved across the basic competence (KD) indicators from the first cycle to the second. Thus the results of this study can be used to improve learning outcomes and student motivation

  1. Use of the local false discovery rate for identification of metabolic biomarkers in rat urine following Genkwa Flos-induced hepatotoxicity.

    Directory of Open Access Journals (Sweden)

    Zuojing Li

    Full Text Available Metabolomics is concerned with characterizing the large number of metabolites present in a biological system using nuclear magnetic resonance (NMR) and HPLC/MS (high-performance liquid chromatography with mass spectrometry). Multivariate analysis is one of the most important tools for metabolic biomarker identification in metabolomic studies. However, analyzing the large-scale data sets acquired during metabolic fingerprinting is a major challenge. As a posterior probability that the features of interest are not affected, the local false discovery rate (LFDR) is a readily interpretable measure. However, it is rarely used when interrogating metabolic data to identify biomarkers. In this study, we employed the LFDR method to analyze HPLC/MS data acquired from a metabolomic study of metabolic changes in rat urine during hepatotoxicity induced by Genkwa flos (GF) treatment. The LFDR approach was successfully used to identify important rat urine metabolites altered by GF-stimulated hepatotoxicity. Compared with principal component analysis (PCA), LFDR is an interpretable measure and discovers more important metabolites in an HPLC/MS-based metabolomic study.
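
    A minimal sketch of a local false discovery rate in Efron's form, fdr(z) = pi0*f0(z)/f(z), with the mixture density f estimated by a kernel density estimate and pi0 set conservatively to 1; the z-values are simulated stand-ins for test statistics derived from HPLC/MS features.

        # Sketch: local FDR as the ratio of a null density to the mixture density.
        import numpy as np
        from scipy.stats import norm, gaussian_kde

        rng = np.random.default_rng(7)
        z = np.concatenate([rng.normal(0, 1, 900),      # null features
                            rng.normal(3, 1, 100)])     # shifted (affected) features

        f = gaussian_kde(z)                  # estimate of the mixture density
        pi0 = 1.0                            # conservative null proportion
        lfdr = np.clip(pi0 * norm.pdf(z) / f(z), 0, 1)

        print("features with lfdr < 0.2:", int((lfdr < 0.2).sum()))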

  2. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.

  3. Heart rate-based lactate minimum test: a reproducible method.

    NARCIS (Netherlands)

    Strupler, M.; Muller, G.; Perret, C.

    2009-01-01

    OBJECTIVE: To find the individual intensity for aerobic endurance training, the lactate minimum test (LMT) seems to be a promising method. LMTs described in the literature consist of speed or work rate-based protocols, but for training prescription in daily practice mostly heart rate is used. The

  4. 14 CFR 406.143 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Discovery. 406.143 Section 406.143... Transportation Adjudications § 406.143 Discovery. (a) Initiation of discovery. Any party may initiate discovery... after a complaint has been filed. (b) Methods of discovery. The following methods of discovery are...

  5. Sodium flow rate measurement method of annular linear induction pumps

    International Nuclear Information System (INIS)

    Araseki, Hideo; Kirillov, Igor R.; Preslitsky, Gennady V.

    2012-01-01

    Highlights: ► We found a new method of flow rate monitoring for electromagnetic pumps. ► The method is very simple and does not require a large space. ► The method was verified with an experiment and a numerical analysis. ► The experimental data and the numerical results are in good agreement. - Abstract: The present paper proposes a method for measuring the sodium flow rate of annular linear induction pumps. The feature of the method lies in measuring the leaked magnetic field with measuring coils near the stator end on the outlet side and in correlating it with the sodium flow rate. This method is verified through an experiment and a numerical analysis. The data obtained in the experiment reveal that the correlation between the leaked magnetic field and the sodium flow rate is almost linear. The result of the numerical analysis agrees with the experimental data. The present method will be particularly effective for sodium flow rate monitoring of each of several annular linear induction pumps arranged in parallel in a vessel forming a large-scale pump unit.

  6. Re-assessing copepod growth using the Moult Rate method

    DEFF Research Database (Denmark)

    Hirst, Andrew G.; Keister, J. E.; Richardson, A. J.

    2014-01-01

    Estimating growth and production rates of mesozooplankton, and copepods in particular, is important in describing flows of material and energy through pelagic systems. Over the past 30 years, the Moult Rate (MR) method has been used to estimate juvenile copepod growth rates in ∼40 papers. Yet the MR method cannot be applied to a non-moulting stage, e.g. copepodite stage 5 to adult. We performed experiments with Calanus pacificus to estimate growth of stage C5 using an alternative method. We found that the error size and sign varied between mass types (i.e. DW, C and N). Recommendations for practical future assessments of growth in copepods

  7. Advanced evaluation method of SG TSP BEC hole blockage rate

    International Nuclear Information System (INIS)

    Izumida, Hiroyuki; Nagata, Yasuyuki; Harada, Yutaka; Murakami, Ryuji

    2003-01-01

    In spite of the control of the water chemistry of the SG secondary feed-water in PWR SGs, SG TSP BEC holes, which are the flow paths of the secondary water, are often clogged. In the past, trending of the BEC hole blockage rate has been conducted by evaluating original ECT signals and visual inspections. However, because the original ECT signals of deposits are diverse, it has become difficult to analyze them with the existing evaluation method based on the original ECT signals. In this regard, we have developed a secondary-side visual inspection system, which enables high-accuracy evaluation of the BEC hole blockage rate, and a new ECT signal evaluation method. (author)

  8. A method to assign failure rates for piping reliability assessments

    International Nuclear Information System (INIS)

    Gamble, R.M.; Tagart, S.W. Jr.

    1991-01-01

    This paper reports on a simplified method that has been developed to assign failure rates for use in reliability and risk studies of piping. The method can be applied on a line-by-line basis by identifying line- and location-specific attributes that can lead to piping unreliability from in-service degradation mechanisms and random events. A survey of service experience for nuclear piping reliability was also performed. The data from this survey provide a basis for identifying in-service failure attributes and assigning failure rates for risk and reliability studies.

  9. Novel Method For Low-Rate Ddos Attack Detection

    Science.gov (United States)

    Chistokhodova, A. A.; Sidorov, I. D.

    2018-05-01

    The relevance of this work stems from the increasing number of advanced types of DDoS attacks, in particular low-rate HTTP flood. Last year, the power and complexity of such attacks increased significantly. The article is devoted to the analysis of DDoS attack detection methods and their modifications, with the purpose of increasing the accuracy of DDoS attack detection. The article details the features of low-rate attacks in comparison with conventional DDoS attacks. During the analysis, significant shortcomings of the available methods for detecting low-rate DDoS attacks were found. The result of the study is therefore an informal description of a new method for detecting low-rate denial-of-service attacks. A testbed architecture for evaluating the method has been developed. At the current stage of the study, it is possible to improve the efficiency of an existing method by using a classifier with memory, as well as additional information.

  10. Measurement of the Leak Rate Enhanced by an Improved Method

    International Nuclear Information System (INIS)

    Bae, Sang-Hoon; Choi, Young-San; Kim, Young-Ki; Lee, Yong-Sub; Jung, Hoan-Sung

    2007-01-01

    Measurement of the leak rate of HANARO, a research reactor that adopts a confinement concept for the reactor hall, is very important during periodic inspection. This test verifies whether the reactor building can maintain negative pressure when radiation is detected following abnormal accidents. This may not cause a problem for reactor operation as long as the design requirement is satisfied, but some margin below the limiting value is necessary because the reactor hall should be managed more conservatively than the design reference. To meet this strict design condition, the previous method was replaced with a new type of test based on a more stable and robust measuring method. The new leak rate measurement method is briefly introduced, and the merits of the proposed method are shown through data analyzed over the last three years.

  11. Benchmark calculations for evaluation methods of gas volumetric leakage rate

    International Nuclear Information System (INIS)

    Asano, R.; Aritomi, M.; Matsuzaki, M.

    1998-01-01

    A containment function of radioactive materials transport casks is essential for safe transportation, to prevent the radioactive materials from being released into the environment. Regulations such as the IAEA standard determine the limit of radioactivity to be released. Since it is not practical for leakage tests to measure directly the radioactivity released from a package, gas volumetric leakage rates are proposed in the ANSI N14.5 and ISO standards. In our previous works, gas volumetric leakage rates for several kinds of gas from various leaks were measured, and two evaluation methods, 'a simple evaluation method' and 'a strict evaluation method', were proposed based on the results. The simple evaluation method considers the friction loss of laminar flow with expansion effect. The strict evaluation method considers an exit loss in addition to the friction loss. In this study, four worked examples were completed for an assumed large spent fuel transport cask (Type B package) with wet or dry capacity and at three transport conditions: normal transport with intact fuels or failed fuels, and an accident in transport. The standard leakage rates and criteria for two kinds of leak test were calculated for each example by each evaluation method. The following observations are made based upon the calculations and evaluations: the choked flow model of the ANSI method greatly overestimates the criteria for tests; the laminar flow models of both the ANSI and ISO methods slightly overestimate the criteria for tests; the above two results are within the design margin for ordinary transport conditions, and all methods are useful for the evaluation; for severe conditions such as failed fuel transportation, attention should be paid when applying the choked flow model of the ANSI method. (authors)

  12. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  13. Simple method for the estimation of glomerular filtration rate

    Energy Technology Data Exchange (ETDEWEB)

    Groth, T. (Group for Biomedical Informatics, Uppsala University Data Center, Uppsala, Sweden); Tengstroem, B. (District General Hospital, Skoevde, Sweden)

    1977-02-01

    A simple method is presented for indirect estimation of the glomerular filtration rate from two venous blood samples, drawn after a single injection of a small dose of (¹²⁵I)sodium iothalamate (10 μCi). The method does not require exact dosage, as the first sample, taken a few minutes (t = 5 min) after injection, is used to normalize the value of the second sample, which should be taken between 2 and 4 h after injection. The glomerular filtration rate, as measured by standard inulin clearance, may then be predicted from the logarithm of the normalized value and linear regression formulas, with a standard error of estimate of the order of 1 to 2 ml/min/1.73 m². The slope-intercept method for direct estimation of the glomerular filtration rate is also evaluated and found to significantly underestimate standard inulin clearance. The normalized 'single-point' method is concluded to be superior to the slope-intercept method and to more sophisticated methods using curve-fitting techniques, with regard to predictive force and clinical applicability.
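
    The normalized single-point idea reduces to one line of arithmetic: divide the late sample by the early one to cancel dose, then predict GFR from the logarithm of that ratio with a linear regression. The coefficients below are invented placeholders, not the ones fitted in the paper.

        # Sketch: normalized single-point GFR estimate (hypothetical coefficients).
        import math

        def gfr_single_point(c5min, c3h, a=-60.0, b=-5.0):
            """GFR (ml/min/1.73 m^2) from plasma activities at ~5 min and ~3 h."""
            ratio = c3h / c5min                 # dose-normalized retention
            return a * math.log(ratio) + b      # assumed regression line

        print(f"{gfr_single_point(1000.0, 180.0):.0f} ml/min/1.73 m^2")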

  14. Parallel efficient rate control methods for JPEG 2000

    Science.gov (United States)

    Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko

    2017-09-01

    Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image split into code blocks and subsequently truncate the set of generated bit streams optimally, according to the maximum target bit rate constraint. The literature proposes various strategies for estimating ahead of time where a block will get truncated, in order to stop the execution prematurely and save time. However, none of them has been defined with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed on GPUs. To do so, the design of our GPU-based codec is extended to allow stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to a speedup of up to 40% with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in situations where it was actually employed.
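
    A compact sketch of the PCRD-Opt selection rule referenced above: for a Lagrange multiplier lam, each code block is truncated at the last pass whose rate-distortion slope exceeds lam, and lam is bisected until the total rate fits the budget. Convex-hull pruning of the passes is assumed already done, and the pass tables are invented.

        # Sketch: Lagrangian truncation-point selection in the style of PCRD-Opt.
        def truncate(blocks, lam):
            """blocks: per code block, cumulative (rate, distortion) points per
            coding pass, starting at (0, D0), with distortion decreasing."""
            total, picks = 0, []
            for passes in blocks:
                k = 0
                for i in range(1, len(passes)):
                    slope = ((passes[i - 1][1] - passes[i][1])
                             / (passes[i][0] - passes[i - 1][0]))
                    if slope >= lam:
                        k = i               # keep every pass down to slope lam
                picks.append(k)
                total += passes[k][0]
            return picks, total

        def pcrd_opt(blocks, budget, iters=60):
            lo, hi = 0.0, 1e12              # bisect: larger lam -> lower total rate
            for _ in range(iters):
                lam = (lo + hi) / 2.0
                if truncate(blocks, lam)[1] > budget:
                    lo = lam                # over budget: cut more aggressively
                else:
                    hi = lam
            return truncate(blocks, hi)     # hi stays rate-feasible throughout

        blocks = [[(0, 100.0), (10, 40.0), (20, 25.0), (30, 20.0)],
                  [(0, 80.0), (8, 50.0), (18, 30.0), (28, 24.0)]]
        print(pcrd_opt(blocks, budget=40))  # -> ([2, 2], 38)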

  15. Effect of different rates and methods of benomyl and mancozeb ...

    African Journals Online (AJOL)

    2006-07-22

    Field assessment of different rates and methods of three fungicide applications on delay in senescence (DS) and grain … Out of 170 species in the genus, V. unguiculata is … The experiment was conducted at the teaching and research farm of the Faculty of Agricultural Sciences, Ladoke Akintola University of …

  16. Seeding method and rate influence on weed suppression in aerobic ...

    African Journals Online (AJOL)

    High weed pressure is among the major constraints to the extensive adoption of the aerobic rice system as a water-wise technique. Towards developing a sustainable weed management strategy, seeding method and rate may substantially contribute to weed suppression and reduce herbicide use and weeding cost. A trough ...

  17. Estimating Maternal Mortality Rate Using Sisterhood Methods in ...

    African Journals Online (AJOL)

    ... maternal and child morbidity and mortality, which could serve as a surveillance strategy to identify the magnitude of the problem and to mobilize resources to areas where the problems are most prominent for adequate control. KEY WORDS: Maternal Mortality Rate, Sisterhood Method. Highland Medical Research Journal ...

  18. Lidar method to estimate emission rates from extended sources

    Science.gov (United States)

    Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...

  19. Comparison between calculation methods of dose rates in gynecologic brachytherapy

    International Nuclear Information System (INIS)

    Vianello, E.A.; Biaggio, M.F.; D R, M.F.; Almeida, C.E. de

    1998-01-01

    In radiation treatments for gynecologic tumors it is necessary to evaluate the quality of the results obtained by different methods of calculating the dose rates at the points of clinical interest (point A, rectum, bladder). The present work compares the results obtained by two methods: the three-dimensional Manual Calibration Method (MCM) (Vianello E., et al. 1998), which uses orthogonal radiographs for each patient in treatment, and the Theraplan/TP-11 planning system (Theratronics International Limited 1990), the latter verified experimentally (Vianello et al. 1996). The results show that MCM can be used in physical-clinical practice with a percentage difference comparable to that of the computerized programs. (Author)

  20. Prospects for DNA methods to measure human heritable mutation rates

    International Nuclear Information System (INIS)

    Mendelsohn, M.L.

    1985-01-01

    A workshop cosponsored by ICPEMC and the US Department of Energy was held in Alta, Utah, December 9-13, 1984 to examine the extent to which DNA-oriented methods might provide new approaches to the important but intractable problem of measuring mutation rates in control and exposed human populations. The workshop identified and analyzed six DNA methods for detection of human heritable mutation, including several created at the meeting, and concluded that none of the methods combine sufficient feasibility and efficiency to be recommended for general application. 8 refs

  1. Comparison of Deep Learning With Multiple Machine Learning Methods and Metrics Using Diverse Drug Discovery Data Sets.

    Science.gov (United States)

    Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean

    2017-12-04

    Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint-type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole-cell screens, individual proteins, physicochemical properties, as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient, and others. Based on ranked normalized scores for the metrics or data sets, Deep Neural Networks (DNN) ranked higher than SVM, which in turn ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained. These results also suggest the need for assessing deep learning further
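
    Scoring a model with the panel of metrics named above is straightforward with scikit-learn; in this sketch a toy SVM on random data stands in for the fingerprint-based models.

        # Sketch: evaluating one classifier with AUC, F1, Cohen's kappa, and MCC.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import (roc_auc_score, f1_score, cohen_kappa_score,
                                     matthews_corrcoef)

        rng = np.random.default_rng(3)
        X = rng.normal(size=(400, 16))                  # stand-in for FCFP6 features
        y = (X[:, 0] + X[:, 1] > 0).astype(int)         # synthetic labels

        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        clf = SVC(probability=True, random_state=0).fit(Xtr, ytr)
        pred = clf.predict(Xte)
        prob = clf.predict_proba(Xte)[:, 1]

        print("AUC  :", round(roc_auc_score(yte, prob), 3))
        print("F1   :", round(f1_score(yte, pred), 3))
        print("kappa:", round(cohen_kappa_score(yte, pred), 3))
        print("MCC  :", round(matthews_corrcoef(yte, pred), 3))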

  2. 76 FR 38281 - Federal Employees Health Benefits Program: New Premium Rating Method for Most Community Rated...

    Science.gov (United States)

    2011-06-29

    ... CFR Parts 1602, 1615, et al. Federal Employees Health Benefits Program: New Premium Rating Method for... Part 890; 48 CFR Parts 1602, 1615, 1632, and 1652 RIN 3206-AM39 Federal Employees Health Benefits..., 2011 (76 FR 36857). The document amends the Federal Employees Health Benefits (FEHB) regulations at 5...

  3. Methods for estimating disease transmission rates: Evaluating the precision of Poisson regression and two novel methods

    DEFF Research Database (Denmark)

    Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin

    2017-01-01

    the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine the transmission rate, and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most tested scenarios these new methods perform similarly to or better than Poisson regression, especially in the case of long sampling intervals. We conclude that transmission rate estimates are easily biased, which is important to take into account when using these rates in simulation models.
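
    The frequently used Poisson regression estimator mentioned above can be sketched as a GLM with log link, intercept only, and offset log(S*I/N), so the fitted intercept is log(beta); the epidemic data here are simulated under a simplified no-recovery model.

        # Sketch: estimating a transmission rate beta by Poisson regression.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        N, beta_true = 1000, 0.3
        S, I = 990, 10
        cases, offsets = [], []
        for _ in range(30):                       # one observation per interval
            if S == 0:
                break
            lam = beta_true * S * I / N
            c = min(rng.poisson(lam), S)
            cases.append(c)
            offsets.append(np.log(S * I / N))     # offset computed before updating
            S, I = S - c, I + c                   # no recovery, for simplicity

        model = sm.GLM(np.array(cases), np.ones((len(cases), 1)),
                       family=sm.families.Poisson(), offset=np.array(offsets))
        print("beta_hat =", float(np.exp(model.fit().params[0])))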

  4. SWATHtoMRM: Development of High-Coverage Targeted Metabolomics Method Using SWATH Technology for Biomarker Discovery.

    Science.gov (United States)

    Zha, Haihong; Cai, Yuping; Yin, Yandong; Wang, Zhuozhong; Li, Kang; Zhu, Zheng-Jiang

    2018-03-20

    The complexity of the metabolome presents a great analytical challenge for quantitative metabolite profiling and restricts the application of metabolomics in biomarker discovery. Targeted metabolomics using the multiple-reaction monitoring (MRM) technique has excellent capability for quantitative analysis but suffers from limited metabolite coverage. To address this challenge, we developed a new strategy, namely SWATHtoMRM, which utilizes the broad coverage of SWATH-MS technology to develop a high-coverage targeted metabolomics method. Specifically, the SWATH-MS technique was first utilized to profile one pooled biological sample in an untargeted manner and to acquire the MS2 spectra for all metabolites. Then, SWATHtoMRM was used to extract the large-scale MRM transitions for targeted analysis, with coverage as high as 1000-2000 metabolites. We then demonstrated the advantages of the SWATHtoMRM method in quantitative analysis, such as coverage, reproducibility, sensitivity, and dynamic range. Finally, we applied our SWATHtoMRM approach to discover potential metabolite biomarkers for colorectal cancer (CRC) diagnosis. A high-coverage targeted metabolomics method covering 1303 metabolites in one injection was developed to profile colorectal cancer tissues from CRC patients. A total of 20 potential metabolite biomarkers were discovered and validated for CRC diagnosis. In plasma samples from CRC patients, 17 out of 20 potential biomarkers were further validated to be associated with tumor resection, which may have great potential in assessing the prognosis of CRC patients after tumor resection. Together, the SWATHtoMRM strategy provides a new way to develop high-coverage targeted metabolomics methods and facilitates the application of targeted metabolomics in disease biomarker discovery. The SWATHtoMRM program is freely available on the Internet ( http://www.zhulab.cn/software.php ).

  5. A comparison of surveillance methods for small incidence rates

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Woodall, William H.; Reynolds, Marion R.

    2008-05-15

    A number of methods have been proposed to detect an increasing shift in the incidence rate of a rare health event, such as a congenital malformation. Among these are the Sets method, two modifications of the Sets method, and the CUSUM method based on the Poisson distribution. We consider the situation where data are observed as a sequence of Bernoulli trials and propose the Bernoulli CUSUM chart as a desirable method for the surveillance of rare health events. We compare the performance of the Sets method and its modifications to the Bernoulli CUSUM chart under a wide variety of circumstances. Chart design parameters were chosen to satisfy a minimax criterion. We used the steady-state average run length to measure chart performance instead of the average run length, which was used in nearly all previous comparisons involving the Sets method or its modifications. Except in a very few instances, we found that the Bernoulli CUSUM chart has better steady-state average run length performance than the Sets method and its modifications for the extensive number of cases considered.
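
    A minimal sketch of the Bernoulli CUSUM chart recommended above: each trial updates the statistic by the log-likelihood ratio of an out-of-control rate p1 against an in-control rate p0, and the chart signals when the statistic crosses a threshold h. The rates and h below are illustrative, not a chart designed to the paper's minimax criterion.

        # Sketch: Bernoulli CUSUM for a rare-event rate increase.
        import math

        def bernoulli_cusum(trials, p0=0.001, p1=0.002, h=5.0):
            up = math.log(p1 / p0)                     # increment for an event
            down = math.log((1 - p1) / (1 - p0))       # increment for a non-event
            s = 0.0
            for t, x in enumerate(trials, start=1):
                s = max(0.0, s + (up if x else down))
                if s >= h:
                    return t                           # first out-of-control signal
            return None

        stream = [0] * 500 + [1, 0, 0, 1, 0, 1, 0, 1, 1, 0] * 30   # rate rises at 501
        print("signal at trial:", bernoulli_cusum(stream))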

  6. Method for enhancing microbial utilization rates of gases using perfluorocarbons

    Science.gov (United States)

    Turick, C.E.

    1997-06-10

    A method of enhancing the bacterial reduction of industrial gases using perfluorocarbons (PFCs) is disclosed. Because PFCs allow for a much greater solubility of gases than water does, they have the potential to deliver gases in higher concentrations to microorganisms when used as an additive to microbial growth media, thereby increasing the rate of conversion of the industrial gas to economically viable chemicals and gases. 3 figs.

  7. Large deviations and queueing networks: Methods for rate function identification

    OpenAIRE

    Atar, Rami; Dupuis, Paul

    1999-01-01

    This paper considers the problem of rate function identification for multidimensional queueing models with feedback. A set of techniques are introduced which allow this identification when the model possesses certain structural properties. The main tools used are representation formulas for exponential integrals, weak convergence methods, and the regularity properties of associated Skorokhod Problems. Two examples are treated as special cases of the general theory: the classical Jackson netwo...

  8. Semi-Automatic Rating Method for Neutrophil Alkaline Phosphatase Activity.

    Science.gov (United States)

    Sugano, Kanae; Hashi, Kotomi; Goto, Misaki; Nishi, Kiyotaka; Maeda, Rie; Kono, Keigo; Yamamoto, Mai; Okada, Kazunori; Kaga, Sanae; Miwa, Keiko; Mikami, Taisei; Masauzi, Nobuo

    2017-01-01

    The neutrophil alkaline phosphatase (NAP) score is a valuable test for the diagnosis of myeloproliferative neoplasms, but it is still rated manually. We therefore developed a semi-automatic rating method using Photoshop® and Image-J, called NAP-PS-IJ. Neutrophil alkaline phosphatase staining was conducted with Tomonaga's method on films of peripheral blood taken from three healthy volunteers. At least 30 neutrophils with NAP scores from 0 to 5+ were observed and imaged, and the area outside each neutrophil was removed with Image-J. The images were binarized with two different procedures (P1 and P2) using Photoshop®. The NAP-positive area (NAP-PA) and the NAP-positive granule count (NAP-PGC) were measured with Image-J. The NAP-PA in images binarized with P1 differed significantly (P < 0.05) between images with NAP scores from 0 to 3+ (group 1) and those from 4+ to 5+ (group 2). The original images in group 1 were binarized with P2; their NAP-PGC differed significantly (P < 0.05) among all four NAP score groups. The mean NAP-PGC with NAP-PS-IJ showed a good correlation (r = 0.92, P < 0.001) with results from human examiners. The sensitivity and specificity of NAP-PS-IJ were 60% and 92%, so it might be considered a prototype for a fully automatic NAP scoring method. © 2016 Wiley Periodicals, Inc.

  9. A new essential protein discovery method based on the integration of protein-protein interaction and gene expression data

    Directory of Open Access Journals (Sweden)

    Li Min

    2012-03-01

    Full Text Available Background: Identification of essential proteins is always a challenging task since it requires experimental approaches that are time-consuming and laborious. With the advances in high-throughput technologies, a large number of protein-protein interactions are available, which have produced unprecedented opportunities for detecting proteins' essentialities at the network level. A series of computational approaches have been proposed for predicting essential proteins based on network topologies. However, network topology-based centrality measures are very sensitive to the robustness of the network. Therefore, a new robust essential protein discovery method would be of great value. Results: In this paper, we propose a new centrality measure, named PeC, based on the integration of protein-protein interaction and gene expression data. The performance of PeC is validated based on the protein-protein interaction network of Saccharomyces cerevisiae. The experimental results show that the predicted precision of PeC clearly exceeds that of the other fifteen previously proposed centrality measures: Degree Centrality (DC), Betweenness Centrality (BC), Closeness Centrality (CC), Subgraph Centrality (SC), Eigenvector Centrality (EC), Information Centrality (IC), Bottle Neck (BN), Density of Maximum Neighborhood Component (DMNC), Local Average Connectivity-based method (LAC), Sum of ECC (SoECC), Range-Limited Centrality (RL), L-index (LI), Leader Rank (LR), Normalized α-Centrality (NC), and Moduland-Centrality (MC). In particular, the improvement of PeC over the classic centrality measures (BC, CC, SC, EC, and BN) is more than 50% when predicting no more than 500 proteins. Conclusions: We demonstrate that the integration of the protein-protein interaction network and gene expression data can help improve the precision of predicting essential proteins. The new centrality measure, PeC, is an effective essential protein discovery method.
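
    One simplified reading of such an integrated measure: weight each edge by the product of its edge clustering coefficient (topology) and the expression correlation of its endpoints (co-expression), and score a protein by the sum over its incident edges. The sketch below uses a toy graph and random expression profiles and is not the published PeC implementation.

        # Sketch: a PeC-style centrality combining topology and co-expression.
        import networkx as nx
        import numpy as np

        G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])
        rng = np.random.default_rng(2)
        expr = {n: rng.normal(size=10) for n in G}      # toy expression profiles

        def ecc(g, u, v):
            """Edge clustering coefficient: shared neighbors / max possible."""
            common = len(list(nx.common_neighbors(g, u, v)))
            denom = min(g.degree(u), g.degree(v)) - 1
            return common / denom if denom > 0 else 0.0

        def pec_score(g):
            return {n: sum(ecc(g, n, u) * abs(np.corrcoef(expr[n], expr[u])[0, 1])
                           for u in g[n])
                    for n in g}

        print(sorted(pec_score(G).items(), key=lambda kv: -kv[1]))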

  10. Virtual screening methods as tools for drug lead discovery from large chemical libraries.

    Science.gov (United States)

    Ma, X H; Zhu, F; Liu, X; Shi, Z; Zhang, J X; Yang, S Y; Wei, Y Q; Chen, Y Z

    2012-01-01

    Virtual screening methods have been developed and explored as useful tools for searching drug lead compounds from chemical libraries, including large libraries that have become publically available. In this review, we discussed the new developments in exploring virtual screening methods for enhanced performance in searching large chemical libraries, their applications in screening libraries of ~ 1 million or more compounds in the last five years, the difficulties in their applications, and the strategies for further improving these methods.

  11. RCS Leak Rate Calculation with High Order Least Squares Method

    International Nuclear Information System (INIS)

    Lee, Jeong Hun; Kang, Young Kyu; Kim, Yang Ki

    2010-01-01

    As part of the action items for application of Leak Before Break (LBB), the RCS Leak Rate Calculation Program was upgraded at Kori units 3 and 4. For real-time monitoring by operators, periodic calculation is needed, and a corresponding noise-reduction scheme is used. Similar programs have been upgraded and used for real-time RCS leak rate calculation at UCN units 3 and 4 and YGN units 1 and 2. For noise reduction, those programs used the linear regression method, which is powerful for this purpose. However, the system is not static, with some alternative flow paths, and this produces mixed trend patterns in the input signal values. Under these conditions, the signal trend and the linear regression average do not follow entirely the same pattern. In this study, a high-order least squares method is used to follow the signal trend, and the order of the calculation is rearranged. The calculated result follows a reasonable trend, and the procedure is physically consistent.

  12. 76 FR 21673 - Alternative Efficiency Determination Methods and Alternate Rating Methods

    Science.gov (United States)

    2011-04-18

    ... EERE-2011-BP-TP-00024] RIN 1904-AC46 Alternative Efficiency Determination Methods and Alternate Rating Methods AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of... and data related to the use of computer simulations, mathematical methods, and other alternative...

  13. Apparatus and method for determining solids circulation rate

    Science.gov (United States)

    Ludlow, J Christopher [Morgantown, WV; Spenik, James L [Morgantown, WV

    2012-02-14

    The invention relates to a method of determining bed velocity and solids circulation rate in a standpipe experiencing a moving packed bed flow, such as in the standpipe section of a circulating fluidized bed reactor. The method utilizes in-situ measurement of differential pressure over known axial lengths of the standpipe, in conjunction with in-situ gas velocity measurement, in a novel application of the Ergun equations allowing determination of standpipe void fraction and moving packed bed velocity. The method takes advantage of the moving packed bed property of constant void fraction in order to integrate the measured parameters into a simultaneous solution of Ergun-based equations and conservation-of-mass equations across multiple sections of the standpipe.
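
    The inversion step can be sketched in a few lines: given a measured pressure gradient and gas slip velocity, solve the Ergun equation numerically for the void fraction. All physical constants and measured values below are placeholders rather than values from the patent.

```python
# Sketch: invert the Ergun equation for void fraction from a measured
# pressure gradient and gas slip velocity. Every number is a placeholder.
from scipy.optimize import brentq

def ergun_gradient(eps, u, d_p=4e-4, mu=1.8e-5, rho=1.2):
    """Pressure gradient (Pa/m) predicted by the Ergun equation."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * d_p)
    return viscous + inertial

measured_gradient = 3000.0   # Pa/m from differential-pressure taps (assumed)
slip_velocity = 0.02         # m/s from in-situ gas velocity (assumed)

void_fraction = brentq(
    lambda e: ergun_gradient(e, slip_velocity) - measured_gradient, 0.3, 0.7)
print(f"inferred void fraction: {void_fraction:.3f}")
```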

  14. Waste tank ventilation rates measured with a tracer gas method

    International Nuclear Information System (INIS)

    Huckaby, J.L.; Evans, J.C.; Sklarew, D.S.; Mitroshkov, A.V.

    1998-08-01

    Passive ventilation with the atmosphere is used to prevent accumulation of waste gases and vapors in the headspaces of 132 of the 177 high-level radioactive waste tanks at the Hanford Site in southeastern Washington State. Measurements of the passive ventilation rates are needed for the resolution of two key safety issues associated with the rates of flammable gas production and accumulation and the rates at which organic salt-nitrate salt mixtures dry out. Direct measurement of passive ventilation rates using mass flow meters is not feasible because ventilation occurs via multiple pathways to the atmosphere (i.e., via the filtered breather riser and unsealed tank risers and pits), as well as via underground connections to other tanks, junction boxes, and inactive ventilation systems. The tracer gas method discussed in this report provides a direct measurement of the rate at which gases are removed by ventilation and an indirect measurement of the ventilation rate. The tracer gas behaves as a surrogate for the waste-generated gases, but it is diminished only via ventilation, whereas the waste gases are continuously released by the waste and may be subject to depletion mechanisms other than ventilation. The fiscal year 1998 tracer studies provide new evidence that significant exchange of air occurs between tanks via the underground cascade pipes. Most of the single-shell waste tanks are connected via 7.6-cm diameter cascade pipes to one or two adjacent tanks. Tracer gas studies of the Tank U-102/U-103 system indicated that the ventilation occurring via the cascade line could be a significant fraction of the total ventilation. In this two-tank cascade, air evidently flowed from Tank U-103 to Tank U-102 for a time and then was observed to flow from Tank U-102 to Tank U-103.
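
    For a well-mixed headspace, the tracer calculation reduces to fitting an exponential decay: C(t) = C0·exp(-Qt/V), so a linear fit of ln C versus t yields the ventilation rate Q. The tank volume and concentration series below are invented for illustration.

```python
# Sketch of the tracer-decay calculation: for a well-mixed headspace of
# volume V, C(t) = C0*exp(-Q*t/V); the slope of ln(C) vs t gives -Q/V.
# Volume and concentrations below are invented.
import numpy as np

headspace_volume = 2000.0                       # m^3, assumed
t = np.array([0.0, 24.0, 48.0, 72.0, 96.0])     # hours after injection
conc = np.array([10.0, 8.1, 6.6, 5.3, 4.4])     # ppm tracer (e.g., SF6)

slope, _ = np.polyfit(t, np.log(conc), 1)       # slope = -Q/V
ventilation_rate = -slope * headspace_volume    # m^3/h
print(f"passive ventilation rate ~ {ventilation_rate:.1f} m^3/h")
```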

  15. Interactive knowledge discovery from marketing questionnarie using simulated breeding and inductive learning methods

    Energy Technology Data Exchange (ETDEWEB)

    Terano, Takao [Univ. of Tsukuba, Tokyo (Japan); Ishino, Yoko [Univ. of Tokyo (Japan)

    1996-12-31

    This paper describes a novel method to acquire efficient decision rules from questionnaire data using both simulated breeding and inductive learning techniques. The basic ideas of the method are that simulated breeding is used to extract the effective features from the questionnaire data and that inductive learning is used to acquire simple decision rules from the data. Simulated breeding is a Genetic Algorithm (GA)-based technique for subjectively or interactively evaluating the qualities of offspring generated by genetic operations. In this paper, we show a basic interactive version of the method and two variations: one with semi-automated GA phases and one with a relative evaluation phase via the Analytic Hierarchy Process (AHP). The proposed method has been qualitatively and quantitatively validated by a case study on consumer product questionnaire data.

  16. Volatility Discovery

    DEFF Research Database (Denmark)

    Dias, Gustavo Fruet; Scherrer, Cristina; Papailias, Fotis

    The price discovery literature investigates how homogenous securities traded on different markets incorporate information into prices. We take this literature one step further and investigate how these markets contribute to stochastic volatility (volatility discovery). We formally show that the realized measures from homogenous securities share a fractional stochastic trend, which is a combination of the price and volatility discovery measures. Furthermore, we show that volatility discovery is associated with the way that market participants process information arrival (market sensitivity). Finally, we compute volatility discovery for 30 actively traded stocks in the U.S. and report that NYSE and Arca dominate Nasdaq.

  17. Scale factor measure method without turntable for angular rate gyroscope

    Science.gov (United States)

    Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua

    2018-03-01

    In this paper, a scale factor test method without a turntable is designed for the angular rate gyroscope. A test system consisting of a test device, a data acquisition circuit, and data processing software based on the LabVIEW platform is presented. Taking advantage of the gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as the standard gyroscope. The standard gyroscope is installed on the test device together with the gyroscope under test. By shaking the test device around the edge that is parallel to the input axis of the gyroscopes, the scale factor of the gyroscope under test can be obtained in real time by the data processing software. This test method is fast and allows the test system to be miniaturized and easy to carry or move. Measuring a quartz MEMS gyroscope's scale factor multiple times with this method, the spread is less than 0.2%; compared with turntable testing, the scale factor difference is less than 1%. The accuracy and repeatability of the test system are good.
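
    The comparison principle condenses to a least-squares ratio of the two gyroscope outputs over the same shaking motion; the signals, noise levels, and scale factors below are simulated stand-ins.

```python
# Sketch: both gyroscopes sense the same shaking rate, so the unknown
# scale factor follows from the output ratio against the reference.
import numpy as np

rng = np.random.default_rng(3)
rate = 5.0 * np.sin(2 * np.pi * np.linspace(0, 2, 400))   # deg/s shaking
sf_ref = 10.0                                  # mV/(deg/s), known reference
sf_true = 12.3                                 # mV/(deg/s), to be estimated

v_ref = sf_ref * rate + rng.normal(0, 0.5, rate.size)
v_test = sf_true * rate + rng.normal(0, 0.5, rate.size)

# least-squares ratio of the two outputs over the whole record
sf_est = sf_ref * np.dot(v_ref, v_test) / np.dot(v_ref, v_ref)
print(f"estimated scale factor: {sf_est:.2f} mV/(deg/s)")
```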

  18. A Galvanic Coupling Method for Assessing Hydration Rates

    Directory of Open Access Journals (Sweden)

    Clement Ogugua Asogwa

    2016-07-01

    Full Text Available Recent advances in biomedical sensors, data acquisition techniques, microelectronics and wireless communication systems have opened up the use of wearable technology for e-health monitoring. We introduce a galvanic-coupled intrabody communication technique for monitoring human body hydration. Studies in hydration provide the information necessary for understanding the desired fluid levels for optimal performance of the body's physiological and metabolic processes during exercise and activities of daily living. Current measurement techniques are mostly suitable for laboratory purposes due to their complexity and technical requirements. Less technical methods, such as urine color observation and skin turgor testing, are subjective and cannot be integrated into a wearable device. Bioelectrical impedance methods are popular but mostly used for estimating total body water with limited accuracy, being sensitive only to 800-1000 mL changes in body fluid levels. We introduce a non-intrusive and simple method of tracking hydration rates that can detect up to 1.30 dB reduction in attenuation when as little as 100 mL of water is consumed. Our results show that galvanic-coupled intrabody signal propagation can provide qualitative hydration and dehydration rates in line with changes in an individual's urine specific gravity and body mass. The real-time changes in galvanic-coupled intrabody signal attenuation can be integrated into wearable electronic devices to evaluate body fluid levels in a particular area of interest and can aid diagnosis and treatment of fluid disorders such as lymphoedema.

  19. Thermodynamic equilibrium solubility measurements in simulated fluids by 96-well plate method in early drug discovery.

    Science.gov (United States)

    Bharate, Sonali S; Vishwakarma, Ram A

    2015-04-01

    An early prediction of solubility in physiological media (PBS, SGF and SIF) is useful for qualitatively predicting the bioavailability and absorption of lead candidates. Despite the availability of multiple solubility estimation methods, none of the reported methods involves a simplified, fixed protocol for diverse sets of compounds. Therefore, a simple, medium-throughput solubility estimation protocol is highly desirable during the lead optimization stage. The present work introduces a rapid method for assessment of the thermodynamic equilibrium solubility of compounds in aqueous media using a 96-well microplate. The developed protocol is straightforward to set up and takes advantage of the sensitivity of UV spectroscopy. The compound, in stock solution in methanol, is introduced in microgram quantities into microplate wells, followed by drying at ambient temperature. Microplates were shaken upon addition of the test media, and the supernatant was analyzed by the UV method. A plot of absorbance versus concentration of a sample yields the saturation point, which is the thermodynamic equilibrium solubility of the sample. The established protocol was validated using a large panel of commercially available drugs and against the conventional miniaturized shake-flask method (r² > 0.84). Additionally, statistically significant QSPR models were established using experimental solubility values of 52 compounds. Copyright © 2015 Elsevier Ltd. All rights reserved.
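
    One way to read the saturation point off such an absorbance-versus-concentration plot is a two-segment (line plus plateau) fit; the breakpoint rule, concentrations, and absorbances below are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: absorbance rises linearly with loaded concentration until the
# well saturates; the kink of a two-segment fit estimates the solubility.
import numpy as np

conc = np.array([5, 10, 20, 40, 60, 80, 100], dtype=float)   # ug/mL loaded
absorb = np.array([0.05, 0.10, 0.20, 0.40, 0.52, 0.53, 0.53])

def sse_for_breakpoint(k):
    """Fit a line below index k and a flat plateau above; return total SSE."""
    a, b = np.polyfit(conc[:k], absorb[:k], 1)
    plateau = absorb[k:].mean()
    return (np.sum((absorb[:k] - (a * conc[:k] + b)) ** 2)
            + np.sum((absorb[k:] - plateau) ** 2))

k_best = min(range(3, len(conc) - 1), key=sse_for_breakpoint)
print(f"estimated solubility ~ {conc[k_best]:g} ug/mL (first saturated point)")
```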

  20. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the famous Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed using Mutual Information Theory. The analytical expression of the BER is then simply given by the estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
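
    A compact sketch of the GM estimator under stated assumptions: a toy two-component channel for transmitted '+1' symbols, a decision threshold at 0, and scikit-learn's EM implementation standing in for the authors' fitting code.

```python
# Fit a Gaussian mixture to soft outputs observed for "+1" symbols, then
# evaluate the analytical probability mass below the decision threshold 0.
# The channel model and all parameters are invented for illustration.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
soft = np.concatenate([rng.normal(1.0, 0.6, 5000),     # clean component
                       rng.normal(0.4, 0.5, 1000)])    # interference
soft = soft.reshape(-1, 1)

gm = GaussianMixture(n_components=2).fit(soft)         # EM estimation
weights = gm.weights_
means = gm.means_.ravel()
sigmas = np.sqrt(gm.covariances_).ravel()

# P(soft < 0 | "+1" sent): weighted sum of component Gaussian CDFs at 0
ber = float(np.sum(weights * norm.cdf(0.0, loc=means, scale=sigmas)))
print(f"analytical BER estimate: {ber:.2e}")
```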

  1. A hybrid computational method for the discovery of novel reproduction-related genes.

    Science.gov (United States)

    Chen, Lei; Chu, Chen; Kong, Xiangyin; Huang, Guohua; Huang, Tao; Cai, Yu-Dong

    2015-01-01

    Uncovering the molecular mechanisms underlying reproduction is of great importance to infertility treatment and to the generation of healthy offspring. In this study, we discovered novel reproduction-related genes with a hybrid computational method integrating three different types of methods, which offered new clues for further reproduction research. The method was first executed on a weighted graph, constructed from known protein-protein interactions, to search for the shortest paths connecting any two known reproduction-related genes. Genes occurring in these paths were deemed to have a special relationship with reproduction. These newly discovered genes were filtered with a randomization test. Then, the remaining genes were further selected according to their associations with known reproduction-related genes, measured by protein-protein interaction score and by alignment score obtained with BLAST. In-depth analysis of the high-confidence novel reproduction genes revealed hidden mechanisms of reproduction and provided guidelines for further experimental validation.
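
    The first two stages (shortest paths between seed genes, then a randomization filter) might look like the sketch below; the toy graph, edge weights, seed names, and permutation count are illustrative assumptions.

```python
# Collect genes on weighted shortest paths between seed genes, then test
# each collected gene against a permutation null. Everything here is toy.
import itertools
import random
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([("s1", "a", 1), ("a", "s2", 1), ("s1", "b", 3),
                           ("b", "s2", 3), ("a", "c", 1), ("c", "s3", 1)])
seeds = ["s1", "s2", "s3"]

def path_counts(seed_set):
    """Count how often each intermediate node lies on a seed-pair path."""
    counts = {}
    for u, v in itertools.combinations(seed_set, 2):
        for node in nx.shortest_path(g, u, v, weight="weight")[1:-1]:
            counts[node] = counts.get(node, 0) + 1
    return counts

observed = path_counts(seeds)
null = [path_counts(random.sample(list(g), len(seeds))) for _ in range(200)]
for gene, c in observed.items():
    p = sum(run.get(gene, 0) >= c for run in null) / len(null)
    print(gene, "count:", c, "permutation p:", p)
```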

  2. The method of treatment cessation and recurrence rate of amblyopia.

    Science.gov (United States)

    Walsh, Leah A; Hahn, Erik K; LaRoche, G Robert

    2009-09-01

    To date, much of the research regarding amblyopia has been focused on which therapeutic modality is the most efficacious in amblyopia management. Unfortunately, there is a lack of research into which method of treatment cessation is the most appropriate once therapy has been completed. The purpose of this study is to investigate if the cessation method affects the recurrence rate of amblyopia. This study was a prospective randomized clinical trial of 20 subjects who were wearing full-time occlusion and were at the end point of their therapy. The subjects were randomized into one of two groups: abrupt cessation or therapy tapering. All subjects were followed for 3 consecutive 4-week intervals, for a total of 12 weeks, to assess the short-term recurrence rate of amblyopia. Subjects who were in the tapered group had their occlusion reduced from full-time occlusion (all waking hours minus one) to 50% of waking hours at study enrollment (i.e., from 12 hours/day to 6 hours per day); occlusion was reduced by an additional 50% at the first 4-week study visit (i.e., from 6 hours/day to 3 hours), with occlusion being discontinued completely at the week 8 visit. All subjects who were in the abrupt cessation group had their full-time occlusion discontinued completely at the start of the study (i.e., from 12 hours/day to none). Additional assessments were also conducted at week 26 and week 52 post-therapy cessation to determine the longer term amblyopia regression rate. For the purposes of this study, recurrence was defined as a 0.2 (10 letters) or more logarithm of the minimum angle of resolution (logMAR) loss of visual acuity. A recurrence of amblyopia occurred in 4 of 17 (24%; CI 9%-47%) participants completing the study by the week 52 study end point. There were 2 subjects from each treatment group who demonstrated a study protocol-defined recurrence. There was a 24% risk of amblyopia recurrence if therapy was discontinued abruptly or tapered in 8 weeks. In this small

  3. Work stress interventions in hospital care : Effectiveness of the DISCovery method

    NARCIS (Netherlands)

    Niks, I.M.W.; de Jonge, J.; Gevers, J.M.P.; Houtman, I.L.D.

    2018-01-01

    Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and

  4. Robust statistical methods for significance evaluation and applications in cancer driver detection and biomarker discovery

    DEFF Research Database (Denmark)

    Madsen, Tobias

    2017-01-01

    In the present thesis I develop, implement and apply statistical methods for detecting genomic elements implicated in cancer development and progression. This is done in two separate bodies of work. The first uses the somatic mutation burden to distinguish cancer driver mutations from passenger m...

  5. Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method

    NARCIS (Netherlands)

    Niks, I.M.W.; Gevers, J.M.P.; Jonge, J. de; Houtman, I.L.D.

    2018-01-01

    Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and

  6. Topology Discovery Using Cisco Discovery Protocol

    OpenAIRE

    Rodriguez, Sergio R.

    2009-01-01

    In this paper we address the problem of discovering network topology in proprietary networks. Namely, we investigate topology discovery in Cisco-based networks. Cisco devices run Cisco Discovery Protocol (CDP) which holds information about these devices. We first compare properties of topologies that can be obtained from networks deploying CDP versus Spanning Tree Protocol (STP) and Management Information Base (MIB) Forwarding Database (FDB). Then we describe a method of discovering topology ...

  7. AFLP fragment isolation technique as a method to produce random sequences for single nucleotide polymorphism discovery in the green turtle, Chelonia mydas.

    Science.gov (United States)

    Roden, Suzanne E; Dutton, Peter H; Morin, Phillip A

    2009-01-01

    The green sea turtle, Chelonia mydas, was used as a case study for single nucleotide polymorphism (SNP) discovery in a species that has little genetic sequence information available. As green turtles have a complex population structure, additional nuclear markers other than microsatellites could add to our understanding of their complex life history. The amplified fragment length polymorphism (AFLP) technique was used to generate sets of random fragments of genomic DNA, which were then electrophoretically separated with precast gels, stained with SYBR green, excised, and directly sequenced. It was possible to perform this method without the use of polyacrylamide gels, radioactively or fluorescently labeled primers, or hybridization methods, reducing the time, expense, and safety hazards of SNP discovery. Across 13 loci, 2547 base pairs were screened, resulting in the discovery of 35 SNPs. Using this method, it was possible to yield a sufficient number of loci to screen for SNP markers without the availability of prior sequence information.

  8. Calculation method for gamma-dose rates from spherical puffs

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.; Deme, S.; Lang, E.

    1993-05-01

    The Lagrangian puff-models are widely used for calculation of the dispersion of atmospheric releases. Basic outputs from such models are concentrations of material in the air and on the ground. The simplest method for calculation of the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for points far away from the release point. The exact calculation of the cloud dose using the volume integral requires significant computer time. The volume integral for the gamma dose can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but the accuracy is usually poor because the same correction factors are used for all isotopes. The authors describe a more elaborate correction method. This method uses precalculated values of the gamma-dose rate as a function of the puff dispersion parameter (δp) and the distance from the puff centre, for four energy groups. The release of energy for each radionuclide in each energy group has been calculated and tabulated. Based on these tables and a suitable interpolation procedure, the calculation of gamma doses takes very little time and is almost independent of the number of radionuclides. (au) (7 tabs., 7 ills., 12 refs.)

  9. Standard test method for measurement of fatigue crack growth rates

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2015-01-01

    1.1 This test method covers the determination of fatigue crack growth rates from near-threshold to Kmax controlled instability. Results are expressed in terms of the crack-tip stress-intensity factor range (ΔK), defined by the theory of linear elasticity. 1.2 Several different test procedures are provided, the optimum test procedure being primarily dependent on the magnitude of the fatigue crack growth rate to be measured. 1.3 Materials that can be tested by this test method are not limited by thickness or by strength so long as specimens are of sufficient thickness to preclude buckling and of sufficient planar size to remain predominantly elastic during testing. 1.4 A range of specimen sizes with proportional planar dimensions is provided, but size is variable to be adjusted for yield strength and applied force. Specimen thickness may be varied independent of planar size. 1.5 The details of the various specimens and test configurations are shown in Annex A1-Annex A3. Specimen configurations other than t...

  10. 29 CFR 2700.56 - Discovery; general.

    Science.gov (United States)

    2010-07-01

    ...(c) or 111 of the Act has been filed. 30 U.S.C. 815(c) and 821. (e) Completion of discovery... § 2700.56 Discovery; general. (a) Discovery methods. Parties may obtain discovery by one or more...

  11. 19 CFR 207.109 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... § 207.109 Discovery. (a) Discovery methods. All parties may obtain discovery under such terms and limitations as the administrative law judge may order. Discovery may be by one or...

  12. Dose rate reduction method for NMCA applied BWR plants

    International Nuclear Information System (INIS)

    Nagase, Makoto; Aizawa, Motohiro; Ito, Tsuyoshi; Hosokawa, Hideyuki; Varela, Juan; Caine, Thomas

    2012-09-01

    BRAC (BWR Radiation Assessment and Control) dose rate is used as an indicator of the incorporation of activated corrosion by-products into BWR recirculation piping, which is known to be a significant contributor to the dose rate received by workers during refueling outages. In order to reduce the radiation exposure of workers during an outage, it is desirable to keep BRAC dose rates as low as possible. After HWC was adopted to reduce IGSCC, a BRAC dose rate increase was observed in many plants. As a countermeasure to these rapid dose rate increases under HWC conditions, Zn injection was widely adopted in the United States and Europe, resulting in a reduction of BRAC dose rates. However, BRAC dose rates in several plants remain high, prompting the industry to continue to investigate methods to achieve further reductions. In recent years a large portion of the BWR fleet has adopted NMCA (NobleChem™) to enhance the hydrogen injection effect and suppress SCC. After NMCA, and especially OLNC (On-Line NobleChem™), BRAC dose rates were observed to decrease. In some OLNC-applied BWR plants this reduction was observed year after year, reaching a new, lower equilibrium level. These dose rate reduction trends suggest that further dose reduction might be obtained by combining Pt and Zn injection. Laboratory experiments and in-plant tests were therefore carried out to evaluate the effect of Pt and Zn on Co-60 deposition behaviour. First, laboratory experiments were conducted to study the effect of noble metal deposition on Co deposition on stainless steel surfaces. Polished type 316 stainless steel coupons were prepared, and some of them were OLNC-treated in the test loop before the Co deposition test. Water chemistry conditions to simulate HWC were as follows: dissolved oxygen, hydrogen and hydrogen peroxide were below 5 ppb, 100 ppb and 0 ppb (no addition), respectively. Zn was injected to target a concentration of 5 ppb. The test was conducted up to 1500 hours at 553 K. Test

  13. A novel bioinformatics method for efficient knowledge discovery by BLSOM from big genomic sequence data.

    Science.gov (United States)

    Bai, Yu; Iwasaki, Yuki; Kanaya, Shigehiko; Zhao, Yue; Ikemura, Toshimichi

    2014-01-01

    With the remarkable increase of genomic sequence data for a wide range of species, novel tools are needed for comprehensive analyses of the big sequence data. Self-Organizing Map (SOM) is an effective tool for clustering and visualizing high-dimensional data, such as oligonucleotide composition, on one map. By modifying the conventional SOM, we have previously developed Batch-Learning SOM (BLSOM), which allows classification of sequence fragments according to species, depending solely on the oligonucleotide composition. In the present study, we introduce the oligonucleotide BLSOM used for characterization of vertebrate genome sequences. We first analyzed pentanucleotide compositions in 100 kb sequences derived from a wide range of vertebrate genomes and then the compositions in the human and mouse genomes, in order to investigate an efficient method for detecting differences between closely related genomes. BLSOM can recognize the species-specific key combination of oligonucleotide frequencies in each genome, which is called a "genome signature," as well as regions specifically enriched in transcription-factor-binding sequences. Because its classification and visualization power is very high, BLSOM is an efficient and powerful tool for extracting a wide range of information from massive amounts of genomic sequences (i.e., big sequence data).
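
    The feature-extraction step that feeds BLSOM can be sketched directly: slice a sequence into 100 kb fragments and compute pentanucleotide frequency vectors. The random stand-in 'genome' below is an assumption, and the BLSOM training itself is omitted.

```python
# Build pentanucleotide frequency vectors for 100 kb fragments, the
# high-dimensional inputs that BLSOM clusters. Sequence is random toy data.
from itertools import product
import numpy as np

rng = np.random.default_rng(4)
genome = "".join(rng.choice(list("ACGT"), 300_000))
kmers = ["".join(p) for p in product("ACGT", repeat=5)]   # 1024 pentamers
index = {k: i for i, k in enumerate(kmers)}

def pentamer_vector(fragment):
    counts = np.zeros(len(kmers))
    for i in range(len(fragment) - 4):
        counts[index[fragment[i:i + 5]]] += 1
    return counts / counts.sum()                          # frequencies

vectors = [pentamer_vector(genome[s:s + 100_000])
           for s in range(0, len(genome) - 100_000 + 1, 100_000)]
print(len(vectors), "fragments x", len(vectors[0]), "pentamer features")
```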

  14. Final Report Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    Energy Technology Data Exchange (ETDEWEB)

    O' Leary, Patrick [Kitware, Inc., Clifton Park, NY (United States)

    2017-09-13

    The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.

  15. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-07-24

    The primary challenge motivating this team's work is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who are able to perform analysis only on a small fraction of the data they compute, resulting in the very real likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, an approach that is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by DOE science projects. Broadly, our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE HPC facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve that objective, we assembled a unique team of researchers consisting of representatives from DOE national laboratories, academia, and industry, and engaged in software technology R&D and in close partnerships with DOE science code teams, to produce software technologies that were shown to run effectively at scale on DOE HPC platforms.

  16. Calculation method for gamma dose rates from Gaussian puffs

    Energy Technology Data Exchange (ETDEWEB)

    Thykier-Nielsen, S; Deme, S; Lang, E

    1995-06-01

    The Lagrangian puff models are widely used for calculation of the dispersion of releases to the atmosphere. Basic output from such models is the concentration of material in the air and on the ground. The simplest method for calculation of the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for puffs with large dispersion parameters, i.e. for receptors far away from the release point. The exact calculation of the cloud dose using the volume integral requires large computer time, usually exceeding what is available for real-time calculations. The volume integral for gamma doses can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but the accuracy is usually poor because only a few of the relevant parameters are considered. A multi-parameter method for calculation of gamma doses is described here. This method uses precalculated values of the gamma dose rates as a function of Eγ, σy, the asymmetry factor σy/σz, the height of the puff centre H, and the distance from the puff centre Rxy. To accelerate the calculations, the release energy for each significant radionuclide in each energy group has been calculated and tabulated. Based on the precalculated values and a suitable interpolation procedure, the calculation of gamma doses needs only a short computing time and is almost independent of the number of radionuclides considered. (au) 2 tabs., 15 ills., 12 refs.
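
    A toy version of the table-plus-interpolation scheme, assuming an invented dose-rate kernel, a coarse grid, and made-up per-nuclide energy releases (the real tables are per energy group and far denser).

```python
# Dose-rate kernels precalculated on a (sigma_y, distance) grid are
# interpolated at run time and weighted by per-nuclide energy release.
# Kernel shape, grid, and energies are invented placeholders.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

sigma_y_grid = np.array([10.0, 30.0, 100.0, 300.0])     # m
distance_grid = np.array([0.0, 50.0, 200.0, 1000.0])    # m from puff centre
kernel = np.exp(-distance_grid / 400.0)[None, :] / sigma_y_grid[:, None]
interp = RegularGridInterpolator((sigma_y_grid, distance_grid), kernel)

energy_release = {"I-131": 0.4, "Cs-137": 0.6}          # MeV/decay, assumed
point = np.array([[75.0, 120.0]])                       # sigma_y=75 m, R=120 m
dose_rate = sum(e * interp(point).item() for e in energy_release.values())
print(f"interpolated relative dose rate: {dose_rate:.4f}")
```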

  17. Calculation method for gamma dose rates from Gaussian puffs

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.; Deme, S.; Lang, E.

    1995-06-01

    The Lagrangian puff models are widely used for calculation of the dispersion of releases to the atmosphere. Basic output from such models is the concentration of material in the air and on the ground. The simplest method for calculation of the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for puffs with large dispersion parameters, i.e. for receptors far away from the release point. The exact calculation of the cloud dose using the volume integral requires large computer time, usually exceeding what is available for real-time calculations. The volume integral for gamma doses can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but the accuracy is usually poor because only a few of the relevant parameters are considered. A multi-parameter method for calculation of gamma doses is described here. This method uses precalculated values of the gamma dose rates as a function of Eγ, σy, the asymmetry factor σy/σz, the height of the puff centre H, and the distance from the puff centre Rxy. To accelerate the calculations, the release energy for each significant radionuclide in each energy group has been calculated and tabulated. Based on the precalculated values and a suitable interpolation procedure, the calculation of gamma doses needs only a short computing time and is almost independent of the number of radionuclides considered. (au) 2 tabs., 15 ills., 12 refs

  18. Validation method training: nurses' experiences and ratings of work climate.

    Science.gov (United States)

    Söderlund, Mona; Norberg, Astrid; Hansebo, Görel

    2014-03-01

    Training nursing staff in communication skills can impact on the quality of care for residents with dementia and contributes to nurses' job satisfaction. Changing attitudes and practices takes time and energy and can affect the entire nursing staff, not just the nurses directly involved in a training programme. Therefore, it seems important to study nurses' experiences of a training programme and any influence of the programme on work climate among the entire nursing staff. To explore nurses' experiences of a 1-year validation method training programme conducted in a nursing home for residents with dementia and to describe ratings of work climate before and after the programme. A mixed-methods approach. Twelve nurses participated in the training and were interviewed afterwards. These individual interviews were tape-recorded and transcribed, then analysed using qualitative content analysis. The Creative Climate Questionnaire was administered before (n = 53) and after (n = 56) the programme to the entire nursing staff in the participating nursing home wards and analysed with descriptive statistics. Analysis of the interviews resulted in four categories: being under extra strain, sharing experiences, improving confidence in care situations and feeling uncertain about continuing the validation method. The results of the questionnaire on work climate showed higher mean values in the assessment after the programme had ended. The training strengthened the participating nurses in caring for residents with dementia, but posed an extra strain on them. These nurses also described an extra strain on the entire nursing staff that was not reflected in the results from the questionnaire. The work climate at the nursing home wards might have made it easier to conduct this extensive training programme. Training in the validation method could develop nurses' communication skills and improve their handling of complex care situations. © 2013 Blackwell Publishing Ltd.

  19. Measurement of gastric emptying rate in humans. Simplified scanning method

    Energy Technology Data Exchange (ETDEWEB)

    Holt, S.; Colliver, J.; Guram, M.; Neal, C.; Verhulst, S.J.; Taylor, T.V. (Univ. of South Carolina School of Medicine, Columbia (USA))

    1990-11-01

    Simultaneous measurements of the gastric emptying rate of the solid and liquid phase of a dual-isotope-labeled test meal were made using a gamma camera and a simple scintillation detector, similar to that used in a hand-held probe. A simple scanning apparatus, similar to that used in a hand-held scintillation probe, was compared with simultaneous measurements made by a gamma camera in 16 healthy males. A dual-labeled test meal was utilized to measure liquid and solid emptying simultaneously. Anterior and posterior scans were taken at intervals up to 120 min using both a gamma camera and the scintillation probe. Good relative agreement between the methods was obtained both for solid-phase (correlation range 0.92-0.99, mean 0.97) and for liquid-phase data (correlation range 0.93-0.99, mean 0.97). For solid emptying data regression line slopes varied from 0.75 to 1.03 (mean 0.84). Liquid emptying data indicated that slopes ranged from 0.71 to 1.06 (mean 0.87). These results suggested that an estimate of the gamma measurement could be obtained by multiplying the scintillation measurement by a factor of 0.84 for the solid phase and 0.87 for the liquid phase. Correlation between repeat studies was 0.97 and 0.96 for solids and liquids, respectively. The application of a hand-held probe technique provides a noninvasive and inexpensive method for accurately assessing solid- and liquid-phase gastric emptying from the human stomach that correlates well with the use of a gamma camera, within the range of gastric emptying rate in the normal individuals in this study.

  20. Measurement of gastric emptying rate in humans. Simplified scanning method

    International Nuclear Information System (INIS)

    Holt, S.; Colliver, J.; Guram, M.; Neal, C.; Verhulst, S.J.; Taylor, T.V.

    1990-01-01

    Simultaneous measurements of the gastric emptying rate of the solid and liquid phase of a dual-isotope-labeled test meal were made using a gamma camera and a simple scintillation detector, similar to that used in a hand-held probe. A simple scanning apparatus, similar to that used in a hand-held scintillation probe, was compared with simultaneous measurements made by a gamma camera in 16 healthy males. A dual-labeled test meal was utilized to measure liquid and solid emptying simultaneously. Anterior and posterior scans were taken at intervals up to 120 min using both a gamma camera and the scintillation probe. Good relative agreement between the methods was obtained both for solid-phase (correlation range 0.92-0.99, mean 0.97) and for liquid-phase data (correlation range 0.93-0.99, mean 0.97). For solid emptying data regression line slopes varied from 0.75 to 1.03 (mean 0.84). Liquid emptying data indicated that slopes ranged from 0.71 to 1.06 (mean 0.87). These results suggested that an estimate of the gamma measurement could be obtained by multiplying the scintillation measurement by a factor of 0.84 for the solid phase and 0.87 for the liquid phase. Correlation between repeat studies was 0.97 and 0.96 for solids and liquids, respectively. The application of a hand-held probe technique provides a noninvasive and inexpensive method for accurately assessing solid- and liquid-phase gastric emptying from the human stomach that correlates well with the use of a gamma camera, within the range of gastric emptying rate in the normal individuals in this study

  1. Discovery of Novel Complex Metal Hydrides for Hydrogen Storage through Molecular Modeling and Combinatorial Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lesch, David A; Adriaan Sachtler, J.W. J.; Low, John J; Jensen, Craig M; Ozolins, Vidvuds; Siegel, Don; Harmon, Laurel

    2011-02-14

    UOP LLC, a Honeywell Company, Ford Motor Company, and Striatus, Inc., collaborated with Professor Craig Jensen of the University of Hawaii and Professor Vidvuds Ozolins of the University of California, Los Angeles on a multi-year cost-shared program to discover novel complex metal hydrides for hydrogen storage. This innovative program combined sophisticated molecular modeling with high throughput combinatorial experiments to maximize the probability of identifying commercially relevant, economical hydrogen storage materials with broad application. A set of tools was developed to pursue the medium throughput (MT) and high throughput (HT) combinatorial exploratory investigation of novel complex metal hydrides for hydrogen storage. The assay programs consisted of monitoring hydrogen evolution as a function of temperature. This project also incorporated theoretical methods to help select candidate material families for testing. The Virtual High Throughput Screening served as a virtual laboratory, calculating structures and their properties. First Principles calculations were applied to various systems to examine hydrogen storage reaction pathways and the associated thermodynamics. The experimental program began with the validation of the MT assay tool with NaAlH4/0.02 mole Ti, the state-of-the-art hydrogen storage system given by decomposition of sodium alanate to sodium hydride, aluminum metal, and hydrogen. Once certified, a combinatorial 21-point study of the NaAlH4–LiAlH4–Mg(AlH4)2 phase diagram was investigated with the MT assay. Stability proved to be a problem, as many of the materials decomposed during synthesis, altering the expected assay results. This resulted in repeating the entire experiment with a mild milling approach, which only temporarily increased capacity. NaAlH4 was the best performer in both studies and no new mixed alanates were observed, a result consistent with the VHTS. Powder XRD suggested that the reverse reaction, the regeneration of the

  2. Assessment method to predict the rate of unresolved false alarms

    International Nuclear Information System (INIS)

    Reardon, P.T.; Eggers, R.F.; Heaberlin, S.W.

    1982-06-01

    A method has been developed to predict the rate of unresolved false alarms of material loss in a nuclear facility, implemented in the computer program DETRES-1. The program first assigns the true values of the control unit's components: receipts, shipments, and beginning and ending inventories. A normal random number generator is used to generate measured values of each component. A loss estimator is calculated from the control unit's measured values. If the loss estimator triggers a detection alarm, a response is simulated. The response simulation is divided into two phases. The first phase simulates remeasurement of the components of the detection loss estimator, using the same or better measurement methods or inferences from surrounding control units. If this phase of the response continues to indicate a material loss, a second phase, simulating a production shutdown and comprehensive cleanout, is initiated. A new loss estimator is found and tested against the alarm thresholds. If the estimator value is below the threshold, the original detection alarm is considered resolved; if above the threshold, an unresolved alarm has occurred. A tally is kept of valid alarms, unresolved false alarms, and failures to alarm upon a true loss.
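
    In the same spirit, a toy Monte Carlo of the alarm-and-remeasure logic, with invented measurement errors and thresholds.

```python
# Simulate measurement noise on a control unit's balance, alarm when the
# loss estimator exceeds a threshold, "remeasure" with smaller error, and
# tally unresolved false alarms. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
true_balance = 0.0            # kg of material; no actual loss
threshold = 2.0               # kg alarm threshold
n_trials, sigma, sigma_remeasure = 100_000, 1.0, 0.3

first = true_balance + rng.normal(0.0, sigma, n_trials)      # loss estimator
alarmed = np.abs(first) > threshold
second = true_balance + rng.normal(0.0, sigma_remeasure, alarmed.sum())
unresolved = np.abs(second) > threshold

print(f"detection alarms: {alarmed.mean():.4%}")
print(f"unresolved false alarms: {unresolved.sum() / n_trials:.6%}")
```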

  3. Author Correction: Discovery of rapid whistlers close to Jupiter implying lightning rates similar to those on Earth

    Science.gov (United States)

    Kolmašová, Ivana; Imai, Masafumi; Santolík, Ondřej; Kurth, William S.; Hospodarsky, George B.; Gurnett, Donald A.; Connerney, John E. P.; Bolton, Scott J.

    2018-06-01

    In the version of this Letter originally published, in the second sentence of the last paragraph before the Methods section the word `altitudes' was mistakenly used in place of the word `latitudes'. The sentence has now been corrected accordingly to: `Low-dispersion class 1 events indicate that low-density ionospheric regions predominantly occur in the northern hemisphere at latitudes between 20° and 70°.'

  4. On reliable discovery of molecular signatures

    Directory of Open Access Journals (Sweden)

    Björkegren Johan

    2009-01-01

    Full Text Available Abstract Background Molecular signatures are sets of genes, proteins, genetic variants or other variables that can be used as markers for a particular phenotype. Reliable signature discovery methods could yield valuable insight into cell biology and mechanisms of human disease. However, it is currently not clear how to control error rates such as the false discovery rate (FDR) in signature discovery. Moreover, signatures for cancer gene expression have been shown to be unstable, that is, difficult to replicate in independent studies, casting doubts on their reliability. Results We demonstrate that with modern prediction methods, signatures that yield accurate predictions may still have a high FDR. Further, we show that even signatures with low FDR may fail to replicate in independent studies due to limited statistical power. Thus, neither stability nor predictive accuracy are relevant when FDR control is the primary goal. We therefore develop a general statistical hypothesis testing framework that for the first time provides FDR control for signature discovery. Our method is demonstrated to be correct in simulation studies. When applied to five cancer data sets, the method was able to discover molecular signatures with 5% FDR in three cases, while two data sets yielded no significant findings. Conclusion Our approach enables reliable discovery of molecular signatures from genome-wide data with current sample sizes. The statistical framework developed herein is potentially applicable to a wide range of prediction problems in bioinformatics.
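
    As a concrete instance of p-value-based FDR control, the sketch below applies the Benjamini-Hochberg step-up rule (standing in for the paper's more general framework) to simulated two-group expression data.

```python
# Benjamini-Hochberg FDR control for a toy signature-discovery screen;
# data, effect sizes, and the number of true signals are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_genes, n_per_group = 1000, 20
x = rng.normal(0.0, 1.0, (n_genes, n_per_group))
y = rng.normal(0.0, 1.0, (n_genes, n_per_group))
y[:50] += 1.5                                  # 50 genuinely shifted genes

pvals = stats.ttest_ind(x, y, axis=1).pvalue   # one test per gene
order = np.argsort(pvals)
q = 0.05                                       # target FDR
bh_line = q * np.arange(1, n_genes + 1) / n_genes
passed = pvals[order] <= bh_line
k = int(np.max(np.nonzero(passed)[0]) + 1) if passed.any() else 0
print(f"{k} genes selected at FDR {q}")
```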

  5. Beyond Discovery

    DEFF Research Database (Denmark)

    Korsgaard, Steffen; Sassmannshausen, Sean Patrick

    2017-01-01

    In this chapter we explore four alternatives to the dominant discovery view of entrepreneurship: the development view, the construction view, the evolutionary view, and the Neo-Austrian view. We outline the main critique points of the discovery view presented in these four alternatives, as well...

  6. Chemical Discovery

    Science.gov (United States)

    Brown, Herbert C.

    1974-01-01

    The role of discovery in the advance of the science of chemistry and the factors that are currently operating to handicap that function are considered. Examples are drawn from the author's work with boranes. The thesis that exploratory research and discovery should be encouraged is stressed. (DT)

  7. Development of a method for rating climate seat comfort

    Science.gov (United States)

    Scheffelmeier, M.; Classen, E.

    2017-10-01

    The comfort aspect of the vehicle interior is becoming increasingly important. A high comfort level offers the driver a good and secure feeling and has a strong influence on passive traffic safety. One important part of comfort is the climate aspect, especially the microclimate that develops between passenger and seat. In this research, different combinations of typical seat materials were used: fourteen woven and knitted fabrics and eight leathers and leather substitutes for the face fabric layer; one foam, one non-woven and one 3D spacer for the plus pad layer; and, for the support layer, three foam types with variations in structure and raw material as well as one rubber hair structure. To characterise this sample set in terms of thermo-physiological aspects (e.g. water vapour resistance Ret, thermal resistance Rct, buffering capacity of water vapour Fd), regular and modified sweating guarded hotplates were used according to DIN EN ISO 11092. The results of the material characterisation confirm the common knowledge that seat covers made of textiles have better water vapour resistance values than leathers and their substitutes. Subject trials in a driving simulator were executed to rate the subjective sensation while driving in a vehicle seat. With a thermal sweating manikin (Newton type, Thermetrics), objective product measurements were carried out on the same seat. Indeed, the subject trials show that every test subject has his or her own subjective perception concerning climate comfort. The results of the subject trials provided the parameters for the Newton measuring method: the sweating rate, sit-in procedure, ambient conditions and sensor positions on and between the seat layers must be comparable with the subject trials. By taking care of all these parameters it is possible to get repeatable and reliable results with the Newton manikin. The subjective feelings of the test subjects, concerning the microclimate between seat and passenger, provide

  8. Fast and Accurate Protein False Discovery Rates on Large-Scale Proteomics Data Sets with Percolator 3.0

    Science.gov (United States)

    The, Matthew; MacCoss, Michael J.; Noble, William S.; Käll, Lukas

    2016-11-01

    Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness for the need for reliable statistics on the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method—grouping proteins by their corresponding sets of theoretical peptides and then considering only the best-scoring peptide for each protein—in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.
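
    The best-performing inference rule described here is simple enough to sketch: merge proteins with identical theoretical peptide sets and score each group by its single best peptide. The toy protein-peptide mappings and scores are assumptions.

```python
# Group proteins by identical theoretical peptide sets, then score each
# group by its best-scoring identified peptide. Data below is toy.
from collections import defaultdict

protein_peptides = {"P1": {"pepA", "pepB"}, "P2": {"pepA", "pepB"},
                    "P3": {"pepC"}}
peptide_scores = {"pepA": 0.91, "pepB": 0.55, "pepC": 0.78}

groups = defaultdict(list)
for protein, peps in protein_peptides.items():
    groups[frozenset(peps)].append(protein)          # merge identical sets

for peps, members in groups.items():
    best = max(peptide_scores.get(p, 0.0) for p in peps)
    print(",".join(sorted(members)), "score:", best)
```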

  9. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, it has been used not only as a physical law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  10. DISCOVERY OF NINE GAMMA-RAY PULSARS IN FERMI LARGE AREA TELESCOPE DATA USING A NEW BLIND SEARCH METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Pletsch, H. J.; Allen, B.; Aulbert, C.; Fehrmann, H. [Albert-Einstein-Institut, Max-Planck-Institut fuer Gravitationsphysik, D-30167 Hannover (Germany); Guillemot, L.; Kramer, M.; Barr, E. D.; Champion, D. J.; Eatough, R. P.; Freire, P. C. C. [Max-Planck-Institut fuer Radioastronomie, Auf dem Huegel 69, D-53121 Bonn (Germany); Ray, P. S. [Space Science Division, Naval Research Laboratory, Washington, DC 20375-5352 (United States); Belfiore, A.; Dormody, M. [Santa Cruz Institute for Particle Physics, Department of Physics and Department of Astronomy and Astrophysics, University of California at Santa Cruz, Santa Cruz, CA 95064 (United States); Camilo, F. [Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027 (United States); Caraveo, P. A. [INAF-Istituto di Astrofisica Spaziale e Fisica Cosmica, I-20133 Milano (Italy); Celik, Oe.; Ferrara, E. C. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Hessels, J. W. T. [Astronomical Institute 'Anton Pannekoek', University of Amsterdam, Postbus 94249, 1090 GE Amsterdam (Netherlands); Keith, M. [CSIRO Astronomy and Space Science, Australia Telescope National Facility, Epping NSW 1710 (Australia); Kerr, M., E-mail: holger.pletsch@aei.mpg.de, E-mail: guillemo@mpifr-bonn.mpg.de [W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); and others

    2012-01-10

    We report the discovery of nine previously unknown gamma-ray pulsars in a blind search of data from the Fermi Large Area Telescope (LAT). The pulsars were found with a novel hierarchical search method originally developed for detecting continuous gravitational waves from rapidly rotating neutron stars. Designed to find isolated pulsars spinning at up to kHz frequencies, the new method is computationally efficient and incorporates several advances, including a metric-based gridding of the search parameter space (frequency, frequency derivative, and sky location) and the use of photon probability weights. The nine pulsars have spin frequencies between 3 and 12 Hz, and characteristic ages ranging from 17 kyr to 3 Myr. Two of them, PSRs J1803-2149 and J2111+4606, are young and energetic Galactic-plane pulsars (spin-down power above 6 × 10^35 erg s^-1 and ages below 100 kyr). The seven remaining pulsars, PSRs J0106+4855, J0622+3749, J1620-4927, J1746-3239, J2028+3332, J2030+4415, and J2139+4716, are older and less energetic; two of them are located at higher Galactic latitudes (|b| > 10°). PSR J0106+4855 has the largest characteristic age (3 Myr) and the smallest surface magnetic field (2 × 10^11 G) of all LAT blind-search pulsars. PSR J2139+4716 has the lowest spin-down power (3 × 10^33 erg s^-1) among all non-recycled gamma-ray pulsars ever found. Despite extensive multi-frequency observations, only PSR J0106+4855 has detectable pulsations in the radio band. The other eight pulsars belong to the increasing population of radio-quiet gamma-ray pulsars.

  11. Application of a high throughput method of biomarker discovery to improvement of the EarlyCDT(®-Lung Test.

    Directory of Open Access Journals (Sweden)

    Isabel K Macdonald

    Full Text Available BACKGROUND: The National Lung Screening Trial showed that CT screening for lung cancer led to a 20% reduction in mortality. However, CT screening has a number of disadvantages including low specificity. A validated autoantibody assay is available commercially (EarlyCDT®-Lung to aid in the early detection of lung cancer and risk stratification in patients with pulmonary nodules detected by CT. Recent advances in high throughput (HTP cloning and expression methods have been developed into a discovery pipeline to identify biomarkers that detect autoantibodies. The aim of this study was to demonstrate the successful clinical application of this strategy to add to the EarlyCDT-Lung panel in order to improve its sensitivity and specificity (and hence positive predictive value, (PPV. METHODS AND FINDINGS: Serum from two matched independent cohorts of lung cancer patients were used (n = 100 and n = 165. Sixty nine proteins were initially screened on an abridged HTP version of the autoantibody ELISA using protein prepared on small scale by a HTP expression and purification screen. Promising leads were produced in shake flask culture and tested on the full assay. These results were analyzed in combination with those from the EarlyCDT-Lung panel in order to provide a set of re-optimized cut-offs. Five proteins that still displayed cancer/normal differentiation were tested for reproducibility and validation on a second batch of protein and a separate patient cohort. Addition of these proteins resulted in an improvement in the sensitivity and specificity of the test from 38% and 86% to 49% and 93% respectively (PPV improvement from 1 in 16 to 1 in 7. CONCLUSION: This is a practical example of the value of investing resources to develop a HTP technology. Such technology may lead to improvement in the clinical utility of the EarlyCDT--Lung test, and so further aid the early detection of lung cancer.

  12. Discovery of Nine Gamma-Ray Pulsars in Fermi-Lat Data Using a New Blind Search Method

    Science.gov (United States)

    Celik-Tinmaz, Ozlem; Ferrara, E. C.; Pletsch, H. J.; Allen, B.; Aulbert, C.; Fehrmann, H.; Kramer, M.; Barr, E. D.; Champion, D. J.; Eatough, R. P.; hide

    2011-01-01

    We report the discovery of nine previously unknown gamma-ray pulsars in a blind search of data from the Fermi Large Area Telescope (LAT). The pulsars were found with a novel hierarchical search method originally developed for detecting continuous gravitational waves from rapidly rotating neutron stars. Designed to find isolated pulsars spinning at up to kHz frequencies, the new method is computationally efficient, and incorporates several advances, including a metric-based gridding of the search parameter space (frequency, frequency derivative and sky location) and the use of photon probability weights. The nine pulsars have spin frequencies between 3 and 12 Hz, and characteristic ages ranging from 17 kyr to 3 Myr. Two of them, PSRs J1803-2149 and J2111+4606, are young and energetic Galactic-plane pulsars (spin-down power above 6 × 10^35 erg s^-1 and ages below 100 kyr). The seven remaining pulsars, PSRs J0106+4855, J0622+3749, J1620-4927, J1746-3239, J2028+3332, J2030+4415, and J2139+4716, are older and less energetic; two of them are located at higher Galactic latitudes (|b| > 10°). PSR J0106+4855 has the largest characteristic age (3 Myr) and the smallest surface magnetic field (2 × 10^11 G) of all LAT blind-search pulsars. PSR J2139+4716 has the lowest spin-down power (3 × 10^33 erg s^-1) among all non-recycled gamma-ray pulsars ever found. Despite extensive multi-frequency observations, only PSR J0106+4855 has detectable pulsations in the radio band. The other eight pulsars belong to the increasing population of radio-quiet gamma-ray pulsars.

  13. Methods of achieving and maintaining an appropriate caesarean section rate.

    LENUS (Irish Health Repository)

    Robson, Michael

    2013-04-01

    Caesarean section rates continue to increase worldwide. The appropriate caesarean section rate remains a topic of debate among women and professionals. Evidence-based medicine has not provided an answer and depends on interpretation of the literature. Overall caesarean section rates are unhelpful, and caesarean section rates should not be judged in isolation from other outcomes and epidemiological characteristics. Better understanding of caesarean section rates, their consequences and their benefits will improve care, and enable learning between delivery units nationally and internationally. To achieve and maintain an appropriate caesarean section rate requires a Multidisciplinary Quality Assurance Programme in each delivery unit, recognising caesarean section rates as one of many factors that determine quality. Women will always choose the type of delivery that seems safest to them and their babies. Professionals need to monitor the quality of their practice continuously in a standardised way to ensure that women can make the right choice.

  14. Comparison of the Effect of Teaching by Group Guided Discovery Learning, Questions & Answers and Lecturing Methods on the Level of Learning and Information Durability of Students

    Directory of Open Access Journals (Sweden)

    Mardanparvar H.

    2016-02-01

    Full Text Available Aims: The need to revise traditional education methods and to adopt new, active, student-oriented learning methods has long been recognized by educational systems, and such methods are becoming popular in different sciences, including the medical sciences. The aim of this study was to compare the effectiveness of teaching through three methods (group guided discovery, questions and answers, and lecture) on the learning level and information durability of nursing students. Instrument & Methods: In this semi-experimental study, 62 fourth-semester nursing students of the Nursing and Midwifery Faculty of Isfahan University of Medical Sciences, who were taking the infectious diseases course for the first time in the first semester of the academic year 2015-16, were studied. The subjects were selected via the census method and randomly divided into three groups: group guided discovery, questions and answers, and lecture. The test was conducted before, immediately after, and one month after the training program using a researcher-made questionnaire. Data were analyzed by SPSS 19 software using the Chi-square test, one-way ANOVA, ANOVA with repeated observations, and the LSD post-hoc test. Findings: The mean score of the test conducted immediately after the training program in the lecture group was significantly lower than in the guided discovery and question and answer groups (p<0.001). In addition, the mean score of the test conducted one month after the training program in the guided discovery group was significantly higher than in both the question and answer (p=0.004) and lecture (p=0.001) groups. Conclusion: Active educational methods lead to a higher level of student participation in educational issues and provide a basis for enhanced learning and better information durability.

  15. Experiences with leak rate calculations methods for LBB application

    International Nuclear Information System (INIS)

    Grebner, H.; Kastner, W.; Hoefler, A.; Maussner, G.

    1997-01-01

    In this paper, three leak rate computer programs for the application of leak before break analysis are described and compared. The programs are compared to each other and to results of an HDR Reactor experiment and two real crack cases. The programs analyzed are PIPELEAK, FLORA, and PICEP. Generally, the different leak rate models are in agreement. To obtain reasonable agreement between measured and calculated leak rates, it was necessary to also use data from detailed crack investigations.

  16. Experiences with leak rate calculations methods for LBB application

    Energy Technology Data Exchange (ETDEWEB)

    Grebner, H.; Kastner, W.; Hoefler, A.; Maussner, G. [and others

    1997-04-01

    In this paper, three leak rate computer programs for the application of leak before break analysis are described and compared. The programs are compared to each other and to results of an HDR Reactor experiment and two real crack cases. The programs analyzed are PIPELEAK, FLORA, and PICEP. Generally, the different leak rate models are in agreement. To obtain reasonable agreement between measured and calculated leak rates, it was necessary to also use data from detailed crack investigations.

  17. Methods of analysis of speech rate: a pilot study.

    Science.gov (United States)

    Costa, Luanna Maria Oliveira; Martins-Reis, Vanessa de Oliveira; Celeste, Letícia Côrrea

    2016-01-01

    To describe the performance of fluent adults in different measures of speech rate. The study included 24 fluent adults, of both genders, speakers of Brazilian Portuguese, who were born and still living in the metropolitan region of Belo Horizonte, state of Minas Gerais, aged between 18 and 59 years. Participants were grouped by age: G1 (18-29 years), G2 (30-39 years), G3 (40-49 years), and G4 (50-59 years). The speech samples were obtained following the methodology of the Speech Fluency Assessment Protocol. In addition to the measures of speech rate proposed by the protocol (speech rate in words and syllables per minute), the speech rate in phonemes per second and the articulation rate with and without disfluencies were calculated. We used the nonparametric Friedman test and the Wilcoxon test for multiple comparisons. Groups were compared using the nonparametric Kruskal-Wallis test. The significance level was set at 5%. There were significant differences between measures of speech rate involving syllables. The multiple comparisons showed that all three measures were different. There was no effect of age on the studied measures. These findings corroborate previous studies. The inclusion of temporal acoustic measures such as speech rate in phonemes per second and articulation rate with and without disfluencies can be a complementary approach in the evaluation of speech rate.

  18. Determining Sorption Rate by a Continuous Gravimetric Method

    National Research Council Canada - National Science Library

    Hall, Monicia R; Procell, Lawrence R; Bartram, Philip W; Shuely, Wendel J

    2003-01-01

    ... were automatically recorded in an Excel file while CARC coupons were submerged in solvent. Initial sorption rates were determined for butyl acetate, butyl ether, cyclohexane and propylene carbonate...

  19. Higgs Discovery

    DEFF Research Database (Denmark)

    Sannino, Francesco

    2013-01-01

    I discuss the impact of the discovery of a Higgs-like state on composite dynamics, starting by critically examining the reasons in favour of either an elementary or composite nature of this state. Accepting the standard model interpretation, I re-address the standard model vacuum stability within... ...has been challenged by the discovery of a not-so-heavy Higgs-like state. I will therefore review the recent discovery \cite{Foadi:2012bb} that the standard model top-induced radiative corrections naturally reduce the intrinsic non-perturbative mass of the composite Higgs state towards the desired... via first principle lattice simulations with encouraging results. The new findings show that the recent naive claims made about new strong dynamics at the electroweak scale being disfavoured by the discovery of a not-so-heavy composite Higgs are unwarranted. I will then introduce the more speculative...

  20. Effect of weed management methods and nitrogen fertilizer rates on ...

    African Journals Online (AJOL)

    Inefficient weed management practices and the use of inappropriate nitrogen fertilizer rates are the major causes of low yield of wheat in Ethiopia. Therefore, field experiments were conducted at Bobicho and Faate in southern Ethiopia to determine the effect of weed management practices and N fertilizer rates on grain yield ...

  1. Success rate of two different methods of ilioinguinal-iliohypogastric ...

    African Journals Online (AJOL)

    Background: The ilioinguinal-iliohypogastric (ILIH) nerve block is a safe, effective, and easy to perform in order to provide analgesia for a variety of inguinal surgical procedures in pediatric patients. A relatively high failure rate of 10%-25% has been reported, even in experienced hands. It is assumed that this high failure rate ...

  2. Computational Methods Used in Hit-to-Lead and Lead Optimization Stages of Structure-Based Drug Discovery.

    Science.gov (United States)

    Heifetz, Alexander; Southey, Michelle; Morao, Inaki; Townsend-Nicholson, Andrea; Bodkin, Mike J

    2018-01-01

    GPCR modeling approaches are widely used in the hit-to-lead (H2L) and lead optimization (LO) stages of drug discovery. The aims of these modeling approaches are to predict the 3D structures of the receptor-ligand complexes, to explore the key interactions between the receptor and the ligand, and to utilize these insights in the design of new molecules with improved binding, selectivity or other pharmacological properties. In this book chapter, we present a brief survey of key computational approaches integrated with the hierarchical GPCR modeling protocol (HGMP) used in the H2L and LO stages of structure-based drug discovery (SBDD). We outline the differences in modeling strategies used in H2L and LO of SBDD and illustrate how these tools have been applied in three drug discovery projects.

  3. How Well Can We Detect Lineage-Specific Diversification-Rate Shifts? A Simulation Study of Sequential AIC Methods.

    Science.gov (United States)

    May, Michael R; Moore, Brian R

    2016-11-01

    Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenetic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models; and (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified [Formula: see text] of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers, in order to clarify whether these methods can make reliable inferences from empirical datasets, and to theoretical biologists, in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.] © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
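
    Since the record above turns on AIC-based model choice, a minimal sketch may help fix ideas. The snippet below compares two hypothetical diversification models by their AIC scores; the log-likelihoods and parameter counts are made-up placeholders, and this shows only the bare criterion, not the MEDUSA search algorithm itself.

```python
def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike information criterion: AIC = 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits: a single-rate model vs. a model allowing one
# diversification-rate shift (two extra parameters).
models = {
    "single_rate": {"logL": -142.7, "k": 2},
    "one_shift":   {"logL": -139.9, "k": 4},
}

scores = {name: aic(m["logL"], m["k"]) for name, m in models.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: AIC = {score:.1f}")
```

    As the simulation study warns, a lower AIC for the shift model is not by itself strong evidence of a real rate shift.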

  4. On the antiproton discovery

    International Nuclear Information System (INIS)

    Piccioni, O.

    1989-01-01

    The author of this article describes his own role in the discovery of the antiproton. Although Segre and Chamberlain received the Nobel Prize in 1959 for its discovery, the author claims that their experimental method was his idea which he communicated to them informally in December 1954. He describes how his application for citizenship (he was Italian), and other scientists' manipulation, prevented him from being at Berkeley to work on the experiment himself. (UK)

  5. Characterizing the Joint Effect of Diverse Test-Statistic Correlation Structures and Effect Size on False Discovery Rates in a Multiple-Comparison Study of Many Outcome Measures

    Science.gov (United States)

    Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James

    2011-01-01

    In their 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report the results of a simulation assessing the robustness of their adaptive step-down procedure (GBS) for controlling the false discovery rate (FDR) when normally distributed test statistics are serially correlated. In this study we extend the investigation to the case of multiple comparisons involving correlated non-central t-statistics, in particular when several treatments or time periods are being compared to a control in a repeated-measures design with many dependent outcome measures. In addition, we consider several dependence structures other than serial correlation and illustrate how the FDR depends on the interaction between effect size and the type of correlation structure as indexed by Foerstner's distance metric from an identity matrix. The relationship between the correlation matrix R of the original dependent variables and the correlation matrix of the associated t-statistics is also studied. In general, the latter depends not only on R, but also on sample size and the signed effect sizes for the multiple comparisons.
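
    For readers unfamiliar with FDR control, the classical Benjamini-Hochberg step-up procedure is the usual reference point; the sketch below implements it on a toy p-value list. Note this is the baseline procedure, not the adaptive GBS step-down procedure studied above, and its FDR guarantee assumes independent (or positively dependent) test statistics, which is exactly the assumption the record probes.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up: boolean rejection mask controlling FDR at q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest i with p_(i) <= q*i/m
        reject[order[:k + 1]] = True     # reject the k+1 smallest p-values
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.27, 0.60]))
```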

  6. Optical Methods For Automatic Rating Of Engine Test Components

    Science.gov (United States)

    Pritchard, James R.; Moss, Brian C.

    1989-03-01

    In recent years, increasing commercial and legislative pressure on automotive engine manufacturers, including increased oil drain intervals, cleaner exhaust emissions and high specific power outputs, has led to increasing demands on lubricating oil performance. Lubricant performance is defined by bench engine tests run under closely controlled conditions. After test, engines are dismantled and the parts rated for wear and accumulation of deposit. This rating must be carried out consistently in laboratories throughout the world in order to ensure lubricant quality meets the specified standards. To this end, rating technicians evaluate components following closely defined procedures. This process is time consuming, inaccurate and subject to drift, requiring regular recalibration of raters by means of international rating workshops. This paper describes two instruments for automatic rating of engine parts. The first uses a laser to determine the degree of polishing of the engine cylinder bore caused by the reciprocating action of the piston. This instrument has been developed to the prototype stage by the NDT Centre at Harwell under contract to Exxon Chemical, and is planned for production within the next twelve months. The second instrument uses red and green filtered light to determine the type, quality and position of deposit formed on the piston surfaces. The latter device has undergone a feasibility study, but no prototype exists.

  7. Custom database development and biomarker discovery methods for MALDI-TOF mass spectrometry-based identification of high-consequence bacterial pathogens.

    Science.gov (United States)

    Tracz, Dobryan M; Tyler, Andrea D; Cunningham, Ian; Antonation, Kym S; Corbett, Cindi R

    2017-03-01

    A high-quality custom database of MALDI-TOF mass spectral profiles was developed with the goal of improving clinical diagnostic identification of high-consequence bacterial pathogens. A biomarker discovery method is presented for identifying and evaluating MALDI-TOF MS spectra to potentially differentiate biothreat bacteria from less-pathogenic near-neighbour species. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  8. What Predicts Method Effects in Child Behavior Ratings

    Science.gov (United States)

    Low, Justin A.; Keith, Timothy Z.; Jensen, Megan

    2015-01-01

    The purpose of this research was to determine whether child, parent, and teacher characteristics such as sex, socioeconomic status (SES), parental depressive symptoms, the number of years of teaching experience, number of children in the classroom, and teachers' disciplinary self-efficacy predict deviations from maternal ratings in a…

  9. Dose rate measuring device and dose rate measuring method using the same

    International Nuclear Information System (INIS)

    Urata, Megumu; Matsushita, Takashi; Hanazawa, Sadao; Konno, Takahiro; Chiba, Yoshinori; Yumitate, Tadahiro

    1998-01-01

    The device of the present invention comprises a scintillation fiber scope, elongated in the direction of the height of a pressure vessel, which emits light upon the incidence of radiation in order to detect it; a radioactivity measuring device for measuring a dose rate based on the detection of the fiber scope; and a reel means for dispensing and taking up the fiber scope. It is constituted such that the dose rate of the pressure vessel and that of a shroud are determined independently. Then, when the removed shroud is contained in a container, excessive shielding is not necessary; in addition, this device can reliably be inserted into and withdrawn from complicated places between the pressure vessel and the shroud; and further, the dose rate of the pressure vessel and that of the shroud can be measured fairly accurately even when their thicknesses differ greatly. (N.H.)

  10. Dose rate measuring device and dose rate measuring method using the same

    Energy Technology Data Exchange (ETDEWEB)

    Urata, Megumu; Matsushita, Takashi; Hanazawa, Sadao; Konno, Takahiro; Chiba, Yoshinori; Yumitate, Tadahiro

    1998-11-13

    The device of the present invention comprises a scintillation fiber scope, elongated in the direction of the height of a pressure vessel, which emits light upon the incidence of radiation in order to detect it; a radioactivity measuring device for measuring a dose rate based on the detection of the fiber scope; and a reel means for dispensing and taking up the fiber scope. It is constituted such that the dose rate of the pressure vessel and that of a shroud are determined independently. Then, when the removed shroud is contained in a container, excessive shielding is not necessary; in addition, this device can reliably be inserted into and withdrawn from complicated places between the pressure vessel and the shroud; and further, the dose rate of the pressure vessel and that of the shroud can be measured fairly accurately even when their thicknesses differ greatly. (N.H.)

  11. Study of brittle crack jump rate using acoustic emission method

    International Nuclear Information System (INIS)

    Yasnij, P.V.; Pokrovskij, V.V.; Strizhalo, V.A.; Dobrovol'skij, Yu.V.

    1987-01-01

    A new procedure is elaborated to detect brittle crack jumps of small length (0.1-5 mm) occurring both inside the specimen and along the crack front under static and cyclic loading, using the phenomenon of acoustic emission (AE). Recording of the crack start and stop moments with an AE sensor, together with evaluation of the brittle crack jump length from the post-failure specimen fracture, makes it possible to find the mean crack propagation rate. Experimental dependences are obtained for the crack propagation rate during a brittle crack jump in steel 15Kh2MFA (σ_B = 1157 MPa, σ_0.2 = 100 MPa) at 293 K and under cyclic loading, as a function of the jump length and also as a function of the critical stress intensity factor K_jc^i corresponding to the crack jump.

  12. Financing drug discovery for orphan diseases

    OpenAIRE

    Fagnan, David Erik; Gromatzky, Austin A.; Stein, Roger Mark; Fernandez, Jose-Maria; Lo, Andrew W.

    2014-01-01

    Recently proposed ‘megafund’ financing methods for funding translational medicine and drug development require billions of dollars in capital per megafund to de-risk the drug discovery process enough to issue long-term bonds. Here, we demonstrate that the same financing methods can be applied to orphan drug development but, because of the unique nature of orphan diseases and therapeutics (lower development costs, faster FDA approval times, lower failure rates and lower correlation of failures...

  13. Methods of forecasting crack growth rate under creep conditions

    International Nuclear Information System (INIS)

    Ol'kin, S.I.

    1979-01-01

    The applicability of linear fracture mechanics to the quantitative description of the crack development process under creep conditions is investigated using a structural aluminium alloy. It is shown that the power dependence between the stress intensity coefficient and the crack growth rate holds only for certain combinations of specimen geometry and creep parameters; consequently, its applicability in every given case must be tested experimentally.

  14. Comparison of two methods forecasting binding rate of plasma protein.

    Science.gov (United States)

    Hongjiu, Liu; Yanrong, Hu

    2014-01-01

    By introducing descriptors calculated from the molecular structure, the binding rates of plasma protein (BRPP) for seventy diverse drugs are modeled by a quantitative structure-activity relationship (QSAR) technique. Two algorithms, a heuristic algorithm (HA) and a support vector machine (SVM), are used to establish linear and nonlinear models to forecast BRPP. Empirical analysis shows good performance for both HA and SVM, with cross-validation correlation coefficients R²cv of 0.80 and 0.83, respectively. Comparing HA with SVM, SVM was found to be more stable and more robust in forecasting BRPP.
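
    The record reports cross-validated correlation coefficients for a heuristic linear model and an SVM. A hedged sketch of the SVM side is shown below using scikit-learn; the descriptors and binding rates are synthetic stand-ins, since the actual drug set and descriptors are not given in the abstract.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 5))   # 70 drugs x 5 structural descriptors (synthetic)
y = X @ np.array([0.5, -0.3, 0.2, 0.0, 0.1]) + 0.1 * rng.normal(size=70)  # stand-in BRPP

# Scale descriptors, then fit an RBF support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```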

  15. Transcriptomic SNP discovery for custom genotyping arrays: impacts of sequence data, SNP calling method and genotyping technology on the probability of validation success.

    Science.gov (United States)

    Humble, Emily; Thorne, Michael A S; Forcada, Jaume; Hoffman, Joseph I

    2016-08-26

    Single nucleotide polymorphism (SNP) discovery is an important goal of many studies. However, the number of 'putative' SNPs discovered from a sequence resource may not provide a reliable indication of the number that will successfully validate with a given genotyping technology. For this it may be necessary to account for factors such as the method used for SNP discovery and the type of sequence data from which it originates, suitability of the SNP flanking sequences for probe design, and genomic context. To explore the relative importance of these and other factors, we used Illumina sequencing to augment an existing Roche 454 transcriptome assembly for the Antarctic fur seal (Arctocephalus gazella). We then mapped the raw Illumina reads to the new hybrid transcriptome using BWA and BOWTIE2 before calling SNPs with GATK. The resulting markers were pooled with two existing sets of SNPs called from the original 454 assembly using NEWBLER and SWAP454. Finally, we explored the extent to which SNPs discovered using these four methods overlapped and predicted the corresponding validation outcomes for both Illumina Infinium iSelect HD and Affymetrix Axiom arrays. Collating markers across all discovery methods resulted in a global list of 34,718 SNPs. However, concordance between the methods was surprisingly poor, with only 51.0 % of SNPs being discovered by more than one method and 13.5 % being called from both the 454 and Illumina datasets. Using a predictive modeling approach, we could also show that SNPs called from the Illumina data were on average more likely to successfully validate, as were SNPs called by more than one method. Above and beyond this pattern, predicted validation outcomes were also consistently better for Affymetrix Axiom arrays. Our results suggest that focusing on SNPs called by more than one method could potentially improve validation outcomes. They also highlight possible differences between alternative genotyping technologies that could be

  16. A rapid method to estimate Westergren sedimentation rates.

    Science.gov (United States)

    Alexy, Tamas; Pais, Eszter; Meiselman, Herbert J

    2009-09-01

    The erythrocyte sedimentation rate (ESR) is a nonspecific but simple and inexpensive test that was introduced into medical practice in 1897. Although it is commonly utilized in the diagnosis and follow-up of various clinical conditions, ESR has several limitations, including the required 60 min settling time for the test. Herein we introduce a novel use for a commercially available computerized tube viscometer that allows the accurate prediction of human Westergren ESR rates in as little as 4 min. Owing to an initial pressure gradient, blood moves between two vertical tubes through a horizontal small-bore tube, and the top of the red blood cell (RBC) column in each vertical tube is monitored continuously with an accuracy of 0.083 mm. Using data from the final minute of a blood viscosity measurement, a sedimentation index (SI) was calculated and correlated with results from the conventional Westergren ESR test. To date, samples from 119 human subjects have been studied and our results indicate a strong correlation between SI and ESR values (R² = 0.92). In addition, we found a close association between SI and RBC aggregation indices as determined by an automated RBC aggregometer (R² = 0.71). Determining SI on human blood is rapid, requires no special training and has minimal biohazard risk, thus allowing physicians to rapidly screen for individuals with elevated ESR and to monitor therapeutic responses.
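
    The method rests on calibrating a viscometer-derived sedimentation index (SI) against conventional Westergren readings. A minimal sketch of such a calibration is given below; the paired SI/ESR values are illustrative inventions, not data from the study.

```python
from scipy import stats

# Hypothetical paired measurements: 4-min sedimentation index vs. 60-min ESR (mm/h).
si = [0.8, 1.1, 1.9, 2.4, 3.0, 3.8, 4.5, 5.1]
esr = [5, 9, 18, 24, 31, 42, 50, 58]

fit = stats.linregress(si, esr)
print(f"ESR ~ {fit.slope:.1f}*SI + {fit.intercept:.1f}, R^2 = {fit.rvalue**2:.2f}")

# A new rapid SI reading then maps to a predicted Westergren value:
print(f"predicted ESR at SI=2.0: {fit.slope * 2.0 + fit.intercept:.0f} mm/h")
```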

  17. The rate of convergence in the method of alternating projections

    Czech Academy of Sciences Publication Activity Database

    Badea, C.; Grivaux, S.; Müller, Vladimír

    2012-01-01

    Roč. 23, č. 3 (2012), s. 413-434 ISSN 1061-0022 R&D Projects: GA ČR GA201/09/0473; GA AV ČR IAA100190903 Institutional support: RVO:67985840 Keywords : Friedrichs angle * method of alternating projections * arbitrarily slow convergence Subject RIV: BA - General Mathematics Impact factor: 0.460, year: 2012 http://www.ams.org/journals/spmj/2012-23-03/S1061-0022-2012-01202-1/home.html

  18. A New Method for Unconstrained Heart Rate Monitoring

    Science.gov (United States)

    2001-10-25

    members. However, care of bedridden elderly persons is not an easy task, and this causes severe psychological and financial problems for other family... physical and mental conditions of bedridden elderly people at home and patients at hospitals, and to contribute to the labor saving of the care and the... not suitable for home care of bedridden elderly people. Our method provides a very small, simple and mechanically rugged device suitable for home

  19. Optical sensing method to analyze germination rate of Capsicum annum seeds treated with growth-promoting chemical compounds using optical coherence tomography

    Science.gov (United States)

    Wijesinghe, Ruchire Eranga; Lee, Seung-Yeol; Kim, Pilun; Jung, Hee-Young; Jeon, Mansik; Kim, Jeehyun

    2017-09-01

    Seed germination rate differs based on chemical treatments, and nondestructive measurement of germination rate has become an essential requirement in the field of agriculture. Seed scientists and other biologists are interested in optical sensing technologies for biological discovery because of their nondestructive detection capability. Optical coherence tomography (OCT) has recently emerged as a powerful method for biological and plant material discoveries. We report an extended application of OCT by monitoring the germination rate acceleration of chemically primed seeds. To validate the versatility of the method, Capsicum annum seeds were primed using three chemical compounds: sterile distilled water (SDW), butandiol, and 1-hexadecene. Monitoring was performed using a 1310-nm swept-source OCT system. The results confirmed more rapid morphological variations in the seeds treated with the 1-hexadecene medium than in the seeds treated with SDW and butandiol within 8 consecutive days. In addition, fresh weight measurements (the gold standard) of seeds were monitored for 15 days, and the obtained results correlated with the OCT results. Thus, such a method can be used in various agricultural fields, and OCT shows potential as a rigorous sensing method for rapidly selecting the optimal plant growth-promoting chemical compounds, when compared with the gold standard methods.

  20. Application Methods Guided Discovery in the Effort Improving Skills Observing Student Learning IPA in the Fourth Grades in Primary School

    OpenAIRE

    Septikasari, Zela

    2015-01-01

    The purpose of this research was to improve the skills of observing in science learning by using guided discovery. This type of research is a collaborative classroom action research with teachers, and the research subjects were fourth grade students of SD Lempuyangan 1, Yogyakarta. The results showed that the percentage of students who achieved score B in the pre-action stage was 23.53%; in the first cycle it increased to 38.24%; and to 91.18% in the second cycle. Thus in the first cycle an increa...

  1. Development of a universal metabolome-standard method for long-term LC-MS metabolome profiling and its application for bladder cancer urine-metabolite-biomarker discovery.

    Science.gov (United States)

    Peng, Jun; Chen, Yi-Ting; Chen, Chien-Lun; Li, Liang

    2014-07-01

    Large-scale metabolomics studies require a quantitative method to generate metabolome data over an extended period with high technical reproducibility. We report a universal metabolome-standard (UMS) method, in conjunction with chemical isotope labeling liquid chromatography-mass spectrometry (LC-MS), to provide long-term analytical reproducibility and facilitate metabolome comparison among different data sets. In this method, UMS of a specific type of sample labeled by an isotope reagent is prepared a priori. The UMS is spiked into any individual sample labeled by another form of the isotope reagent in a metabolomics study. The resultant mixture is analyzed by LC-MS to provide relative quantification of the individual sample metabolome against the UMS. UMS is independent of the study undertaken as well as the time of analysis, and is useful for profiling the same type of samples in multiple studies. In this work, the UMS method was developed and applied to a urine metabolomics study of bladder cancer. UMS of human urine was prepared by (13)C2-dansyl labeling of a pooled sample from 20 healthy individuals. This method was first used to profile the discovery samples to generate a list of putative biomarkers potentially useful for bladder cancer detection and then used to analyze the verification samples about one year later. Within the discovery sample set, three-month technical reproducibility was examined using a quality control sample, which showed a mean CV of 13.9% and a median CV of 9.4% for all the quantified metabolites. Statistical analysis of the urine metabolome data showed a clear separation between the bladder cancer group and the control group in the discovery samples, which was confirmed by the verification samples. The receiver operating characteristic (ROC) test showed that the area under the curve (AUC) was 0.956 in the discovery data set and 0.935 in the verification data set. These results demonstrated the utility of the UMS method for long-term metabolomics and
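
    The reproducibility figures quoted above (mean CV 13.9%, median CV 9.4%) are per-metabolite coefficients of variation across repeated quality-control injections. The sketch below computes that summary for a synthetic intensity table; the real data would be dansyl-labeled peak-ratio measurements, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
# rows = repeated QC injections over three months, columns = quantified metabolites
qc = rng.lognormal(mean=10.0, sigma=0.1, size=(24, 500))   # synthetic intensities

cv = qc.std(axis=0, ddof=1) / qc.mean(axis=0) * 100        # percent CV per metabolite
print(f"mean CV: {cv.mean():.1f}%   median CV: {np.median(cv):.1f}%")
```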

  2. 77 FR 31756 - Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating...

    Science.gov (United States)

    2012-05-30

    ...-AC46 Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating... regulations authorizing the use of alternative methods of determining energy efficiency or energy consumption... alternative methods of determining energy efficiency or energy consumption of various consumer products and...

  3. A method of estimating the knock rating of hydrocarbon fuel blend

    Science.gov (United States)

    Sanders, Newell D

    1943-01-01

    The usefulness of the knock ratings of pure hydrocarbon compounds would be increased if some reliable method of calculating the knock ratings of fuel blends were known. The purpose of this study was to investigate the possibility of developing a method of predicting the knock ratings of fuel blends.

  4. New method for the discovery of adulterated cognacs and brandies based on solid-phase microextraction and gas chromatography - mass spectrometry

    Directory of Open Access Journals (Sweden)

    Darya Mozhayeva

    2014-10-01

    Full Text Available The article presents a new method for the discovery of adulterated cognacs and brandies based on solid-phase microextraction (SPME) in combination with gas chromatography - mass spectrometry (GC-MS). The work comprised optimization of SPME parameters (extraction temperature and time, concentration of added salt) with subsequent analysis of authentic samples and comparison of the obtained chromatograms using principal component analysis (PCA). According to the obtained results, an increase of the extraction temperature resulted in an increased response of the most volatile brandy constituents. To avoid chemical transformations and/or degradation of the samples, the extraction temperature must be limited to 30 °C. Increasing the extraction time led to a higher total peak area, but longer extraction times (>10 min for 100 µm polydimethylsiloxane and >2 min for divinylbenzene/Carboxen/polydimethylsiloxane fibers) caused displacement of analytes. Salt addition increased the total response of analytes, but caused problems with reproducibility. The developed method was successfully applied to the discovery of adulterated samples of brandy, cognac, whisky and whiskey sold in Kazakhstan. The obtained data were analyzed applying principal component analysis (PCA). Five adulterated brandy and whisky samples were discovered and confirmed. The developed method is recommended for application in forensic laboratories.

  5. Discovery radiomics via evolutionary deep radiomic sequencer discovery for pathologically proven lung cancer detection.

    Science.gov (United States)

    Shafiee, Mohammad Javad; Chung, Audrey G; Khalvati, Farzad; Haider, Masoom A; Wong, Alexander

    2017-10-01

    While lung cancer is the second most diagnosed form of cancer in men and women, a sufficiently early diagnosis can be pivotal in patient survival rates. Imaging-based, or radiomics-driven, detection methods have been developed to aid diagnosticians, but largely rely on hand-crafted features that may not fully encapsulate the differences between cancerous and healthy tissue. Recently, the concept of discovery radiomics was introduced, where custom abstract features are discovered from readily available imaging data. We propose an evolutionary deep radiomic sequencer discovery approach based on evolutionary deep intelligence. Motivated by patient privacy concerns and the idea of operational artificial intelligence, the evolutionary deep radiomic sequencer discovery approach organically evolves increasingly more efficient deep radiomic sequencers that produce significantly more compact yet similarly descriptive radiomic sequences over multiple generations. As a result, this framework improves operational efficiency and enables diagnosis to be run locally at the radiologist's computer while maintaining detection accuracy. We evaluated the evolved deep radiomic sequencer (EDRS) discovered via the proposed evolutionary deep radiomic sequencer discovery framework against state-of-the-art radiomics-driven and discovery radiomics methods using clinical lung CT data with pathologically proven diagnostic data from the LIDC-IDRI dataset. The EDRS shows improved sensitivity (93.42%), specificity (82.39%), and diagnostic accuracy (88.78%) relative to previous radiomics approaches.

  6. Discovery Mondays

    CERN Multimedia

    2003-01-01

    Many people don't realise quite how much is going on at CERN. Would you like to gain first-hand knowledge of CERN's scientific and technological activities and their many applications? Try out some experiments for yourself, or pick the brains of the people in charge? If so, then the «Lundis Découverte» or Discovery Mondays, will be right up your street. Starting on May 5th, on every first Monday of the month you will be introduced to a different facet of the Laboratory. CERN staff, non-scientists, and members of the general public, everyone is welcome. So tell your friends and neighbours and make sure you don't miss this opportunity to satisfy your curiosity and enjoy yourself at the same time. You won't have to listen to a lecture, as the idea is to have open exchange with the expert in question and for each subject to be illustrated with experiments and demonstrations. There's no need to book, as Microcosm, CERN's interactive museum, will be open non-stop from 7.30 p.m. to 9 p.m. On the first Discovery M...

  7. PEDF as an anticancer drug and new treatment methods following the discovery of its receptors: A patent perspective

    Science.gov (United States)

    Manalo, Katrina B.; Choong, Peter F.M.; Becerra, S. Patricia; Dass, Crispin R.

    2014-01-01

    Background Traditional forms of cancer therapy, which includes chemotherapy, have largely been overhauled due to the significant degree of toxicity they pose to normal, otherwise healthy tissue. It is hoped that use of biological agents, most of which are endogenously present in the body, will lead to safer treatment outcomes, without sacrificing efficacy. Objective The finding that PEDF, a naturally-occurring protein, was a potent angiogenesis inhibitor became the basis for studying the role of PEDF in tumours that are highly resistant to chemotherapy. The determination of the direct role of PEDF against cancer paved the way for understanding and developing PEDF as a novel drug. This review focuses on the patent applications behind testing the anticancer therapeutic effect of PEDF via its receptors as an antiangiogenic agent and as a direct anticancer agent. Conclusions The majority of the PEDF patents describe its and/or its fragments’ antiangiogenic ability and the usage of recombinant vectors as the mode of treatment delivery. PEDF’s therapeutic potential against different diseases and the discovery of its receptors opens possibilities for improving PEDF-based peptide design and drug delivery modes. PMID:21204726

  8. Estimation in adults of the glomerular filtration rate in [99mTc] DTPA renography - the rate constant method

    International Nuclear Information System (INIS)

    Carlsen, Ove

    2004-01-01

    The purpose of this study was to design an alternative and robust method for estimation of the glomerular filtration rate (GFR) in [99mTc]-diethylenetriaminepentaacetic acid ([99mTc]-DTPA) renography, with a reliability not significantly lower than that of the conventional Gates' method. Methods: The method is based on renographies lasting 40 min in which regions of interest (ROIs) are manually created over selected parts of certain blood pools (e.g. heart, lungs, spleen, and liver). For each ROI the corresponding time-activity curve (TAC) was generated, decay corrected and exposed to a monoexponential fit in the time interval 10 to 40 min post-injection. The rate constant (in min^-1) of the monoexponential fit was denoted BETA. Following an iterative procedure comprising usually 5-10 manually created ROIs, the monoexponential fit with the maximum rate constant (BETA_max) was used for estimation of GFR. Results: In a patient material of 54 adult subjects in whom GFR was determined with multiple- or one-sample techniques with [51Cr]-ethylenediaminetetraacetic acid ([51Cr]-EDTA), the regression curve of standard GFR (GFR_std, i.e. GFR adjusted to 1.73 m² body surface area) showed a close, non-linear relationship with BETA_max, with a correlation coefficient of 95%. The standard errors of estimate (SEE) were 6.6, 10.6 and 16.8 for GFR_std equal to 30, 60, and 120 ml/(min·1.73 m²), respectively. The corresponding SEE values for almost the same patient material using Gates' method were 8.4, 11.9, and 16.8 ml/(min·1.73 m²). Conclusions: The alternative rate constant method yields estimates of GFR_std with SEE values equal to or slightly smaller than those of Gates' method. The two methods provide statistically uncorrelated estimates of GFR_std. Therefore, pooled estimates of GFR_std can be calculated with SEE values approximately 1.41 times smaller than those mentioned above. The reliabilities of the pooled estimate of GFR_std separately and of the multiple samples method
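
    The core computation of the rate constant method, fitting a monoexponential to each blood-pool TAC over 10-40 min and keeping the largest rate constant, can be sketched as below. The TACs here are synthetic, and the final empirical regression from BETA_max to GFR_std reported in the study is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a, beta):
    """Monoexponential model a * exp(-beta * t); beta is the rate constant in 1/min."""
    return a * np.exp(-beta * t)

t = np.linspace(10, 40, 31)                 # minutes post-injection
rng = np.random.default_rng(2)
# Decay-corrected time-activity curves for several blood-pool ROIs (synthetic).
rois = {name: mono_exp(t, 1000.0, b) * rng.normal(1.0, 0.02, t.size)
        for name, b in [("heart", 0.018), ("lung", 0.012), ("spleen", 0.025)]}

betas = {}
for name, tac in rois.items():
    (_, beta), _ = curve_fit(mono_exp, t, tac, p0=(tac[0], 0.01))
    betas[name] = beta

print({k: round(v, 4) for k, v in betas.items()})
print("BETA_max =", round(max(betas.values()), 4))   # feeds the GFR regression
```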

  9. A review on measuring methods of gas-liquid flow rates

    International Nuclear Information System (INIS)

    Minemura, Kiyoshi; Yamashita, Masato

    2000-01-01

    This paper presents a review of the state of current measuring techniques for gas-liquid multiphase flow rates. After briefly discussing the basic ideas behind measuring methods for single-phase and two-phase flows, existing methods for two-phase flow rates are classified into several types, that is, with or without a homogenizing device, single or combined methods of several techniques, with intrusive or non-intrusive sensors, and physical or software methods. Each method is comparatively reviewed in view of measuring accuracy and manageability. The scope also covers techniques developed for petroleum-gas-water flow rates. (author)

  10. Trends in Suicide Methods and Rates among Older Adults in South Korea: A Comparison with Japan

    OpenAIRE

    Park, Subin; Lee, Hochang Benjamin; Lee, Su Yeon; Lee, Go Eun; Ahn, Myung Hee; Yi, Ki Kyoung; Hong, Jin Pyo

    2016-01-01

    Objective Lethality of the chosen method during a suicide attempt is a strong risk factor for completion of suicide. We examined whether annual changes in the pattern of suicide methods is related to annual changes in suicide rates among older adults in South Korea and Japan. Methods We analyzed annual the World Health Organization data on rates and methods of suicide from 2000 to 2011 in South Korea and Japan. Results For Korean older adults, there was a significant positive correlation betw...

  11. Test Method for High β Particle Emission Rate of 63Ni Source Plate

    OpenAIRE

    ZHANG Li-feng

    2015-01-01

    To address the difficulty of measuring the β particle emission rate of the 63Ni source plate used in 63Ni betavoltaic batteries, a relative test method based on the scintillation current was established according to the measurement principle of the scintillation detector. The β particle emission rate of a homemade 63Ni source plate was tested by the method, and the test results were analysed and evaluated; it was initially concluded that the scintillation current method is a feasible way of testing β particle emi...

  12. Measuring Protein Synthesis Rate In Living Object Using Flooding Dose And Constant Infusion Methods

    OpenAIRE

    Ulyarti, Ulyarti

    2018-01-01

    Constant infusion is a method used for measuring the protein synthesis rate in living objects which uses low concentrations of amino acid tracers. The flooding dose method is another technique used to measure the rate of protein synthesis, which uses labelled amino acid together with a large amount of unlabelled amino acid. The latter method was first developed to solve the problem in determining the precursor pool that arises with the constant infusion method. The objective of this writing is to com...

  13. “Time for Some Traffic Problems": Enhancing E-Discovery and Big Data Processing Tools with Linguistic Methods for Deception Detection

    Directory of Open Access Journals (Sweden)

    Erin Smith Crabb

    2014-09-01

    Full Text Available Linguistic deception theory provides methods to discover potentially deceptive texts to make them accessible to clerical review. This paper proposes the integration of these linguistic methods with traditional e-discovery techniques to identify deceptive texts within a given author's larger body of written work, such as their sent email box. First, a set of linguistic features associated with deception is identified and a prototype classifier is constructed to analyze texts and describe the features' distributions, while avoiding topic-specific features to improve recall of relevant documents. The tool is then applied to a portion of the Enron Email Dataset to illustrate how these strategies identify records, providing an example of its advantages and capability to stratify the large data set at hand.

  14. Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP for bioinformatics resource discovery and disparate data and service integration

    Directory of Open Access Journals (Sweden)

    Nelson Rex T

    2010-06-01

    Full Text Available Abstract Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap"), offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded

  15. Trends in Suicide Methods and Rates among Older Adults in South Korea: A Comparison with Japan.

    Science.gov (United States)

    Park, Subin; Lee, Hochang Benjamin; Lee, Su Yeon; Lee, Go Eun; Ahn, Myung Hee; Yi, Ki Kyoung; Hong, Jin Pyo

    2016-03-01

    Lethality of the chosen method during a suicide attempt is a strong risk factor for completion of suicide. We examined whether annual changes in the pattern of suicide methods are related to annual changes in suicide rates among older adults in South Korea and Japan. We analyzed annual World Health Organization data on rates and methods of suicide from 2000 to 2011 in South Korea and Japan. For Korean older adults, there was a significant positive correlation between the suicide rate and the rate of hanging or the rate of jumping, and a significant negative correlation between the suicide rate and the rate of poisoning. Among older adults in Japan, annual changes in the suicide rate and the pattern of suicide methods were less conspicuous, and no correlation was found between them. The results of the present study suggest that the increasing use of lethal suicide methods has contributed to the rise in suicide rates among older adults in South Korea. Targeted efforts to reduce the social acceptability and accessibility of lethal suicide methods might lead to a lower suicide rate among older adults in South Korea.

  16. A parametric method for assessing diversification-rate variation in phylogenetic trees.

    Science.gov (United States)

    Shah, Premal; Fitzpatrick, Benjamin M; Fordyce, James A

    2013-02-01

    Phylogenetic hypotheses are frequently used to examine variation in rates of diversification across the history of a group. Patterns of diversification-rate variation can be used to infer underlying ecological and evolutionary processes responsible for patterns of cladogenesis. Most existing methods examine rate variation through time. Methods for examining differences in diversification among groups are more limited. Here, we present a new method, parametric rate comparison (PRC), that explicitly compares diversification rates among lineages in a tree using a variety of standard statistical distributions. PRC can identify subclades of the tree where diversification rates are at variance with the remainder of the tree. A randomization test can be used to evaluate how often such variance would appear by chance alone. The method also allows for comparison of diversification rate among a priori defined groups. Further, the application of the PRC method is not restricted to monophyletic groups. We examined the performance of PRC using simulated data, which showed that PRC has acceptable false-positive rates and statistical power to detect rate variation. We apply the PRC method to the well-studied radiation of North American Plethodon salamanders, and support the inference that the large-bodied Plethodon glutinosus clade has a higher historical rate of diversification compared to other Plethodon salamanders. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.
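
    The PRC method's randomization test can be caricatured with a toy example: compare the maximum-likelihood exponential rate of branching waiting times in a focal clade against the rest of the tree, and ask how often a rate difference this large arises under permutation. This is a deliberately simplified stand-in, not the full PRC machinery, and the waiting times are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
focal = rng.exponential(scale=1 / 2.0, size=15)        # faster-diversifying clade
background = rng.exponential(scale=1 / 0.8, size=40)   # rest of the tree

def rate_diff(a, b):
    # The MLE of an exponential rate is 1 / mean waiting time.
    return 1 / a.mean() - 1 / b.mean()

observed = rate_diff(focal, background)
pooled = np.concatenate([focal, background])

n_perm, exceed = 5000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)   # reassign waiting times to the two groups at random
    if abs(rate_diff(pooled[:focal.size], pooled[focal.size:])) >= abs(observed):
        exceed += 1

print(f"observed rate difference: {observed:.2f}, permutation p = {exceed / n_perm:.3f}")
```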

  17. Discovery of the iron isotopes

    International Nuclear Information System (INIS)

    Schuh, A.; Fritsch, A.; Heim, M.; Shore, A.; Thoennessen, M.

    2010-01-01

    Twenty-eight iron isotopes have been observed so far and the discovery of these isotopes is discussed here. For each isotope a brief summary of the first refereed publication, including the production and identification method, is presented.

  18. Discovery of the silver isotopes

    International Nuclear Information System (INIS)

    Schuh, A.; Fritsch, A.; Ginepro, J.Q.; Heim, M.; Shore, A.; Thoennessen, M.

    2010-01-01

    Thirty-eight silver isotopes have been observed so far and the discovery of these isotopes is discussed here. For each isotope a brief summary of the first refereed publication, including the production and identification method, is presented.

  19. Discovery of the cadmium isotopes

    International Nuclear Information System (INIS)

    Amos, S.; Thoennessen, M.

    2010-01-01

    Thirty-seven cadmium isotopes have been observed so far and the discovery of these isotopes is discussed here. For each isotope a brief summary of the first refereed publication, including the production and identification method, is presented.

  20. In situ feeding rates of plantonic copepods: A comparison of four methods

    DEFF Research Database (Denmark)

    Kiørboe, Thomas; Møhlenberg, Flemming; Riisgård, Hans Ulrik

    1985-01-01

    into estimates of in situ algal grazing rates by means of independently estimated gut turnover times, and were compared with chlorophyll and particle-volume grazing rates of animals sampled simultaneously and incubated in water from the collection depth. In addition, egg-production rates of adult females were...... problems of the different methods are discussed, and it is concluded that they all approach representative (although minimum) estimates of in situ feeding rates....

  1. Reexamining the Dissolution of Spent Fuel: A Comparison of Different Methods for Calculating Rates

    International Nuclear Information System (INIS)

    Hanson, Brady D.; Stout, Ray B.

    2004-01-01

    Dissolution rates for spent fuel have typically been reported in terms of a rate normalized to the surface area of the specimen. Recent evidence has shown that neither the geometric surface area nor that measured with BET accurately predicts the effective surface area of spent fuel. Dissolution rates calculated from results obtained by flowthrough tests were reexamined comparing the cumulative releases and surface area normalized rates. While initial surface area is important for comparison of different rates, it appears that normalizing to the surface area introduces unnecessary uncertainty compared to using cumulative or fractional release rates. Discrepancies in past data analyses are mitigated using this alternative method
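
    The distinction drawn above, between a surface-area-normalized rate and a cumulative or fractional release rate, is simple arithmetic, sketched below for hypothetical flow-through data. The point is that the uncertain surface-area estimate enters only the normalized rate.

```python
# Hypothetical flow-through test data for one specimen (all values invented).
flow_rate_mL_d = 20.0
conc_ug_mL = [0.80, 0.70, 0.65, 0.60]   # effluent concentration per 7-day step
days = 7.0 * len(conc_ug_mL)
inventory_ug = 5.0e4                    # specimen inventory of the tracked element
surface_area_cm2 = 12.0                 # geometric estimate -- the uncertain input

cumulative_ug = sum(c * flow_rate_mL_d * 7.0 for c in conc_ug_mL)
fractional_rate = cumulative_ug / inventory_ug / days          # fraction per day
normalized_rate = cumulative_ug / surface_area_cm2 / days      # ug / cm^2 / day

print(f"cumulative release: {cumulative_ug:.0f} ug")
print(f"fractional release rate: {fractional_rate:.2e} /day")
print(f"surface-normalized rate: {normalized_rate:.3f} ug/cm2/day")
```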

  2. A method for projecting age-specific mortality rates for certain causes of death

    International Nuclear Information System (INIS)

    Leggett, R.W.; Crawford, D.J.

    1981-01-01

    A method is presented for projecting mortality rates for certain causes on the basis of observed rates during past years. This method arose from a study of trends in age-specific mortality rates for respiratory cancers, and for heuristic purposes it is shown how the method can be developed from certain theories of cancer induction. However, the method is applicable in the more common situation in which the underlying physical processes cannot be modeled with any confidence but the mortality rates are approximable over short time intervals by functions of the form a exp(bt), where b may vary in a continuous, predictable fashion as the time interval is varied. It appears from applications to historical data that this projection method is in some cases a substantial improvement over conventional curve-fitting methods and often uncovers trends which are not apparent from the observed data.
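
    Since the method rests on locally approximating rates by a exp(bt), the simplest concrete version is a log-linear fit over a recent window, extrapolated forward; refitting as the window moves lets b drift as the abstract describes. The data below are invented for illustration.

```python
import numpy as np

years = np.arange(1960, 1981)
rng = np.random.default_rng(4)
# Hypothetical age-specific mortality rates (per 100,000), roughly exponential in time.
rates = 40.0 * np.exp(0.03 * (years - 1960)) * rng.normal(1.0, 0.02, years.size)

# Fit ln(rate) = ln(a) + b*t on the most recent 10-year window, then extrapolate.
b, ln_a = np.polyfit(years[-10:], np.log(rates[-10:]), deg=1)
for y in (1985, 1990):
    print(f"{y}: projected rate = {np.exp(ln_a + b * y):.1f} per 100,000")
```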

  3. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods to minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocation of blank and sample counting times. Correct uncertainty propagation showed that the time allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: •The paper demonstrated a proper method of propagating uncertainty of count rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presented the correct form of the count-rate detection limit. •The paper discussed the confusion between count-rate uncertainty and count uncertainty
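
    The propagation at issue is the standard Poisson one for a background-subtracted rate: with gross counts Ng in time tg and blank counts Nb in time tb, the net rate is r = Ng/tg - Nb/tb with standard uncertainty sigma_r = sqrt(Ng/tg² + Nb/tb²). A minimal sketch of this propagation follows (numbers invented); it illustrates the general counting-statistics formula, not the corrected detection-limit expressions derived in the paper.

```python
import math

def net_rate(gross_counts, t_gross, blank_counts, t_blank):
    """Net count rate and its Poisson standard uncertainty:
    r = Ng/tg - Nb/tb,  sigma_r = sqrt(Ng/tg**2 + Nb/tb**2)."""
    r = gross_counts / t_gross - blank_counts / t_blank
    sigma = math.sqrt(gross_counts / t_gross**2 + blank_counts / t_blank**2)
    return r, sigma

r, s = net_rate(gross_counts=480, t_gross=600.0, blank_counts=300, t_blank=600.0)
print(f"net rate = {r:.4f} +/- {s:.4f} counts/s")
```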

  4. Standard Test Method for Determining Thermal Neutron Reaction Rates and Thermal Neutron Fluence Rates by Radioactivation Techniques

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 The purpose of this test method is to define a general procedure for determining an unknown thermal-neutron fluence rate by neutron activation techniques. It is not practicable to describe completely a technique applicable to the large number of experimental situations that require the measurement of a thermal-neutron fluence rate. Therefore, this method is presented so that the user may adapt to his particular situation the fundamental procedures of the following techniques. 1.1.1 Radiometric counting technique using pure cobalt, pure gold, pure indium, cobalt-aluminum alloy, gold-aluminum alloy, or indium-aluminum alloy. 1.1.2 Standard comparison technique using pure gold, or gold-aluminum alloy, and 1.1.3 Secondary standard comparison techniques using pure indium, indium-aluminum alloy, pure dysprosium, or dysprosium-aluminum alloy. 1.2 The techniques presented are limited to measurements at room temperature. However, special problems when making thermal-neutron fluence rate measurements in high-...

  5. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    International Nuclear Information System (INIS)

    Zhang, Xiaole; Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu

    2017-01-01

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurements. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. The negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. An accurate a priori ratio accelerates the analysis process, which obtains satisfactory results with only a limited number of measurements; otherwise, more measurements are needed to generate reasonable estimations. The suppression of negative estimations effectively improves the performance, especially for situations with poor a priori information, which are more prone to the generation of negative values.

  7. Quantifying the Ease of Scientific Discovery.

    Science.gov (United States)

    Arbesman, Samuel

    2011-02-01

    It has long been known that scientific output proceeds on an exponential increase, or more properly, a logistic growth curve. The interplay between effort and discovery is clear, and the nature of the functional form has been thought to be due to many changes in the scientific process over time. Here I show a quantitative method for examining the ease of scientific progress, another necessary component in understanding scientific discovery. Using examples from three different scientific disciplines - mammalian species, chemical elements, and minor planets - I find the ease of discovery to conform to an exponential decay. In addition, I show how the pace of scientific discovery can be best understood as the outcome of both scientific output and ease of discovery. A quantitative study of the ease of scientific discovery in the aggregate, such as done here, has the potential to provide a great deal of insight into both the nature of future discoveries and the technical processes behind discoveries in science.
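
    A toy illustration of the record's central claim: if "ease" is taken as discoveries per unit effort, an exponential decay constant can be recovered by a log-linear fit. The data below are entirely synthetic; the definitions are an assumption for illustration, not the paper's exact formulation.

    ```python
    import numpy as np

    # Synthetic illustration: annual discoveries and annual effort; "ease"
    # is taken as discoveries per unit effort.
    rng = np.random.default_rng(1)
    t = np.arange(50)
    effort = np.exp(0.08 * t)              # exponentially growing capacity
    ease = 2.0 * np.exp(-0.05 * t)         # hypothesized exponential decay
    discoveries = rng.poisson(effort * ease)

    observed_ease = np.maximum(discoveries, 1) / effort
    decay, log_e0 = np.polyfit(t, np.log(observed_ease), 1)
    print(f"estimated decay constant = {-decay:.3f} (true 0.05)")
    ```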

  8. Discovery as a process

    Energy Technology Data Exchange (ETDEWEB)

    Loehle, C.

    1994-05-01

    The three great myths, which form a sort of triumvirate of misunderstanding, are the Eureka! myth, the hypothesis myth, and the measurement myth. These myths are prevalent among scientists as well as among observers of science. The Eureka! myth asserts that discovery occurs as a flash of insight, and as such is not subject to investigation. This leads to the perception that discovery or deriving a hypothesis is a moment or event rather than a process. Events are singular and not subject to description. The hypothesis myth asserts that proper science is motivated by testing hypotheses, and that if something is not experimentally testable then it is not scientific. This myth leads to absurd posturing by some workers conducting empirical descriptive studies, who dress up their study with a "hypothesis" to obtain funding or get it published. Methods papers are often rejected because they do not address a specific scientific problem. The fact is that many of the great breakthroughs in science involve methods and not hypotheses, or arise from largely descriptive studies. Those captured by this myth also try to block funding for those developing methods. The third myth is the measurement myth, which holds that determining what to measure is straightforward, so one doesn't need a lot of introspection to do science. As one ecologist put it to me, "Don't give me any of that philosophy junk, just let me out in the field. I know what to measure." These myths lead to difficulties for scientists who must face peer review to obtain funding and to get published. These myths also inhibit the study of science as a process. Finally, these myths inhibit creativity and suppress innovation. In this paper I first explore these myths in more detail and then propose a new model of discovery that opens the supposedly miraculous process of discovery to closer scrutiny.

  9. Study on the evaluation method of radiation dose rate around spent fuel shipping casks

    International Nuclear Information System (INIS)

    Yamakoshi, Hisao

    1986-01-01

    This study aims at developing a simple calculation method which can evaluate the radiation dose rate around casks with high accuracy in a short time. The method is based on a concept of the radiation shielding characteristics of cask walls. The concept was introduced to replace the ordinary radiation shielding calculation, which requires a long calculation time and a large computer memory capacity for the matrix calculation. For the purpose of verifying the accuracy and reliability of the new method, it was applied to the analysis of the measured dose rate distribution around actual casks. The results of the analysis revealed that the newly proposed method was excellent for forecasting the radiation dose rate distribution around casks in view of accuracy and calculation time. The short calculation time and high accuracy of the proposed method were attained by dividing the whole procedure of an ordinary fine radiation shielding calculation into two steps: calculation of the radiation dose rate on the cask surface by the matrix expression of the characteristic function, and calculation of the dose rate distribution around the cask using a simple analytical expression. The effect of a heterogeneous array of spent fuel in different burnup states on the dose rate distribution around casks was evaluated by this method. (Kako, I.)

  10. N- versus O-alkylation: utilizing NMR methods to establish reliable primary structure determinations for drug discovery.

    Science.gov (United States)

    LaPlante, Steven R; Bilodeau, François; Aubry, Norman; Gillard, James R; O'Meara, Jeff; Coulombe, René

    2013-08-15

    A classic synthetic issue that remains unresolved is the reaction that involves the control of N- versus O-alkylation of ambident anions. This common chemical transformation is important for medicinal chemists, who require predictable and reliable protocols for the rapid synthesis of inhibitors. The uncertainty of whether the product(s) are N- and/or O-alkylated is common and can be costly if undetermined. Herein, we report an NMR-based strategy that focuses on distinguishing inhibitors and intermediates that are N- or O-alkylated. The NMR strategy involves three independent and complementary methods. However, any combination of two of the methods can be reliable if the third were compromised due to resonance overlap or other issues. The timely nature of these methods (HSQC/HMQC, HMBC, ROESY, and (13)C shift predictions) allows for contemporaneous determination of regioselective alkylation as needed during the optimization of synthetic routes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Tensile strength of concrete under static and intermediate strain rates: Correlated results from different testing methods

    International Nuclear Information System (INIS)

    Wu Shengxing; Chen Xudong; Zhou Jikai

    2012-01-01

    Highlights: ► Tensile strength of concrete increases with increase in strain rate. ► Strain rate sensitivity of tensile strength of concrete depends on test method. ► High stressed volume method can correlate results from various test methods. - Abstract: This paper presents a comparative experiment and analysis of three different methods (direct tension, splitting tension and four-point loading flexural tests) for determination of the tensile strength of concrete under low and intermediate strain rates. In addition, the objective of this investigation is to analyze the suitability of the high stressed volume approach and Weibull effective volume method to the correlation of the results of different tensile tests of concrete. The test results show that the strain rate sensitivity of tensile strength depends on the type of test, splitting tensile strength of concrete is more sensitive to an increase in the strain rate than flexural and direct tensile strength. The high stressed volume method could be used to obtain a tensile strength value of concrete, free from the influence of the characteristics of tests and specimens. However, the Weibull effective volume method is an inadequate method for describing failure of concrete specimens determined by different testing methods.

  12. [Analysis on traditional Chinese medicine prescriptions treating cancer-related anorexia syndrome based on grey system theory combined with multivariate analysis method and discovery of new prescriptions].

    Science.gov (United States)

    Chen, Song-Lin; Chen, Cong; Zhu, Hui; Li, Jing; Pang, Yan

    2016-01-01

    Cancer-related anorexia syndrome (CACS) is one of the main causes of death at present, as well as a syndrome that seriously harms patients' quality of life, treatment effect and survival time. In current clinical research, there are few reports on empirical traditional Chinese medicine (TCM) prescriptions and patent prescriptions for treating CACS, and prescription rules are rarely analyzed in a systematic manner. As the hidden rules are not excavated, it is hard to reach an innovative discovery and knowledge of clinical medication. In this paper, the grey screening method combined with the multivariate statistical method was used to build the "CACS prescriptions database". Based on the database, totally 359 prescriptions were selected, the frequency of herbs in prescriptions was determined, and commonly combined drugs were evolved into 4 new prescriptions for different syndromes. Prescriptions of TCM in the treatment of CACS gave priority to benefiting qi for strengthening the spleen, and also laid emphasis on replenishing kidney essence, dispersing stagnated liver-qi and dispersing lung-qi. Moreover, the interdependence and mutual promotion of yin and yang should be taken into account to reflect TCM's holism and theory of treatment based on syndrome differentiation. The grey screening method, as a valuable traditional Chinese medicine research-supporting method, can be used to subjectively and objectively analyze prescription rules; and the new prescriptions can provide reference for the clinical use of TCM for treating CACS and for drug development. Copyright© by the Chinese Pharmaceutical Association.

  13. A comparative study of different methods for calculating electronic transition rates

    Science.gov (United States)

    Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan

    2018-03-01

    We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) nonequilibrium Fermi's golden rule, (2) mixed quantum-classical Liouville method, (3) mean-field (Ehrenfest) mixed quantum-classical method, and (4) fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method that coincides with the fully quantum-mechanically exact results for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to Ehrenfest and the fewest switches surface-hopping methods.

  14. Biasing transition rate method based on direct MC simulation for probabilistic safety assessment

    Institute of Scientific and Technical Information of China (English)

    Xiao-Lei Pan; Jia-Qun Wang; Run Yuan; Fang Wang; Han-Qing Lin; Li-Qin Hu; Jin Wang

    2017-01-01

    Direct Monte Carlo (MC) simulation is a powerful probabilistic safety assessment method for accounting for the dynamics of the system. But it is not efficient at simulating rare events. A biasing transition rate method based on direct MC simulation is proposed in this paper to solve this problem. This method biases the transition rates of the components by adding virtual components to them in series to increase the occurrence probability of the rare event, and hence decrease the variance of the MC estimator. Several cases are used to benchmark this method. The results show that the method is effective at modeling system failure and is more efficient at collecting evidence of rare events than direct MC simulation. The performance is greatly improved by the biasing transition rate method.
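
    The biasing idea can be sketched as importance sampling on an exponential failure time: sample with an inflated rate, then weight each sample by the likelihood ratio so the estimator stays unbiased. This is a generic rare-event sketch under assumed rates, not the paper's specific virtual-component construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    lam, lam_b, T, n = 1e-4, 1e-2, 10.0, 100_000  # true rate, biased rate, mission time

    # Direct MC: failure is rare, so most samples carry no information.
    direct = rng.exponential(1 / lam, n) < T

    # Biased MC: sample failure times with the inflated rate, then weight
    # failures by the exponential likelihood ratio f(t)/g(t).
    t_fail = rng.exponential(1 / lam_b, n)
    weights = np.where(t_fail < T, (lam / lam_b) * np.exp(-(lam - lam_b) * t_fail), 0.0)

    print(f"analytic  : {1 - np.exp(-lam * T):.3e}")
    print(f"direct MC : {direct.mean():.3e}")
    print(f"biased MC : {weights.mean():.3e} +/- {weights.std(ddof=1) / np.sqrt(n):.1e}")
    ```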

  15. Glycoblotting method allows for rapid and efficient glycome profiling of human Alzheimer's disease brain, serum and cerebrospinal fluid towards potential biomarker discovery.

    Science.gov (United States)

    Gizaw, Solomon T; Ohashi, Tetsu; Tanaka, Masakazu; Hinou, Hiroshi; Nishimura, Shin-Ichiro

    2016-08-01

    Understanding of the significance of posttranslational glycosylation in Alzheimer's disease (AD) is of growing importance for the investigation of the pathogenesis of AD as well as discovery research of the disease-specific serum biomarkers. We designed a standard protocol for the glycoblotting combined with MALDI-TOFMS to perform rapid and quantitative profiling of the glycan parts of glycoproteins (N-glycans) and glycosphingolipids (GSLs) using human AD's post-mortem samples such as brain tissues (dissected cerebral cortices such as frontal, parietal, occipital, and temporal domains), serum and cerebrospinal fluid (CSF). The structural profiles of the major N-glycans released from glycoproteins and the total expression levels of the glycans were found to be mostly similar between the brain tissues of the AD patients and those of the normal control group. In contrast, the expression levels of the serum and CSF protein N-glycans such as bisect-type and multiply branched glycoforms were increased significantly in AD patient group. In addition, the levels of some gangliosides such as GM1, GM2 and GM3 appeared to alter in the AD patient brain and serum samples when compared with the normal control groups. Alteration of the expression levels of major N- and GSL-glycans in human brain tissues, serum and CSF of AD patients can be monitored quantitatively by means of the glycoblotting-based standard protocols. The changes in the expression levels of the glycans derived from the human post-mortem samples uncovered by the standardized glycoblotting method provides potential serum biomarkers in central nervous system disorders and can contribute to the insight into the molecular mechanisms in the pathogenesis of neurodegenerative diseases and future drug discovery. Most importantly, the present preliminary trials using human post-mortem samples of AD patients suggest that large-scale serum glycomics cohort by means of various-types of human AD patients as well as the normal

  16. Flow Rate Measurement Using 99mTc Radiotracer Method in a Pipe Installation

    International Nuclear Information System (INIS)

    Sipaun, S. M.; Bakar, A. Q. Abu; Othman, N.; Shaari, M. R.; Adnan, M. A. K.; Yusof, J. Mohd; Demanah, R.

    2010-01-01

    Flow rate is a significant parameter for managing processes in chemical processing plants and water processing facilities. Accurate measurement of the flow rate allows engineers to monitor the delivery of process material, which in turn impacts a plant's capacity to produce its products. One of the available methods for determining the flow rate of a process material is to introduce a radiotracer that mimics the material's flow pattern into the system. In this study, a low-activity Technetium-99m radioisotope was injected into a water piping setup, and the 2'' x 2'' NaI(Tl) detectors were calibrated to detect spectrum peaks at specific points of the pipe installation. Using the pulse velocity method, the water flow rate was determined to be 11.3 litres per minute. For the sampling method, at a different pump capacity, the flow rate was 15.0 litres per minute.
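
    For plug flow, the pulse (transit-time) velocity calculation reduces to Q = A·L/Δt between two detectors. A minimal arithmetic sketch with hypothetical geometry; the actual setup's dimensions are not given in the record.

    ```python
    import math

    # Hypothetical geometry: 50 mm inner-diameter pipe, detectors 1.2 m apart.
    diameter, spacing = 0.050, 1.2          # metres
    area = math.pi * (diameter / 2) ** 2    # pipe cross-section, m^2

    # Transit time of the tracer cloud between the two detector peaks.
    dt = 19.0                               # seconds, e.g. peak-to-peak delay
    velocity = spacing / dt                 # mean plug-flow velocity, m/s
    q = area * velocity * 1000 * 60         # flow rate in litres per minute
    print(f"flow rate ≈ {q:.1f} L/min")
    ```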

  17. Method of measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1980-01-01

    A novel liquid scintillation counting method of measuring the disintegration rate of a beta-emitting radionuclide is described which involves counting the sample at at least two different quench levels. (UK)

  18. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed that automation promises greater efficiency, lower workloads, and fewer operator errors by enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to the side effects of automation, which are referred to as Out-of-the-Loop (OOTL), and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation introduction that assures the best human operator performance, a quantitative method of optimizing the automation is proposed in this paper. In order to propose the optimization method for determining appropriate automation levels that enable the best human performance, the automation rate and ostracism rate, which are estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration was conducted in order to derive the shortest working time through considering the concept of situation awareness recovery (SAR), which states that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed through redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  19. Novel Fingertip Image-Based Heart Rate Detection Methods for a Smartphone

    Directory of Open Access Journals (Sweden)

    Rifat Zaman

    2017-02-01

    We hypothesize that our fingertip image-based heart rate detection methods using smartphone reliably detect the heart rhythm and rate of subjects. We propose fingertip curve line movement-based and fingertip image intensity-based detection methods, which both use the movement of successive fingertip images obtained from smartphone cameras. To investigate the performance of the proposed methods, heart rhythm and rate of the proposed methods are compared to those of the conventional method, which is based on average image pixel intensity. Using a smartphone, we collected 120 s pulsatile time series data from each recruited subject. The results show that the proposed fingertip curve line movement-based method detects heart rate with a maximum deviation of 0.0832 Hz and 0.124 Hz using time- and frequency-domain based estimation, respectively, compared to the conventional method. Moreover, another proposed fingertip image intensity-based method detects heart rate with a maximum deviation of 0.125 Hz and 0.03 Hz using time- and frequency-based estimation, respectively.
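
    A minimal sketch of the conventional intensity-based approach the proposed methods are compared against: take the mean frame intensity over time and read the heart rate off the dominant spectral peak. The signal, frame rate and band limits here are assumptions for illustration.

    ```python
    import numpy as np

    def heart_rate_from_intensity(intensity, fs):
        """Estimate heart rate (Hz) from a fingertip-video intensity trace.

        `intensity` is the mean pixel intensity of each frame; `fs` is the
        frame rate. The dominant spectral peak in an assumed physiological
        band (0.7-3.5 Hz, i.e. 42-210 bpm) is taken as the heart rate.
        """
        x = intensity - np.mean(intensity)
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(x.size, d=1 / fs)
        band = (freqs > 0.7) & (freqs < 3.5)
        return freqs[band][np.argmax(spectrum[band])]

    # Synthetic 120 s trace at 30 fps with a 1.2 Hz (72 bpm) pulse plus noise.
    fs = 30.0
    t = np.arange(0, 120, 1 / fs)
    trace = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.default_rng(3).standard_normal(t.size)
    print(f"estimated heart rate: {heart_rate_from_intensity(trace, fs) * 60:.1f} bpm")
    ```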

  20. Assessing Teachers' Judgements of Students' Academic Motivation and Emotions across Two Rating Methods

    Science.gov (United States)

    Zhu, Mingjing; Urhahne, Detlef

    2014-01-01

    The present study examines the accuracy of teachers' judgements about students' motivation and emotions in English learning with two different rating methods. A sample of 480 sixth-grade Chinese students reported their academic self-concept, learning effort, enjoyment, and test anxiety via a questionnaire and were rated on these dimensions by…

  1. Comparing methods for measuring the rate of spread of invading populations

    Science.gov (United States)

    Marius Gilbert; Andrew. Liebhold

    2010-01-01

    Measuring rates of spread during biological invasions is important for predicting where and when invading organisms will spread in the future as well as for quantifying the influence of environmental conditions on invasion speed. While several methods have been proposed in the literature to measure spread rates, a comprehensive comparison of their accuracy when applied...

  2. A photocurrent compensation method of bipolar transistors under high dose rate radiation and its experimental research

    International Nuclear Information System (INIS)

    Yin Xuesong; Liu Zhongli; Li Chunji; Yu Fang

    2005-01-01

    An experiment using discrete bipolar transistors has been performed to verify the effect of the photocurrent compensation method. The theory of the dose rate effects of bipolar transistors and the photocurrent compensation method are introduced. The comparison between the response of hardened and unhardened circuits under high dose rate radiation is discussed. The experimental results provide guidance for the hardening of bipolar integrated circuits against transient radiation. (authors)

  3. Double digest RADseq: an inexpensive method for de novo SNP discovery and genotyping in model and non-model species.

    Directory of Open Access Journals (Sweden)

    Brant K Peterson

    The ability to efficiently and accurately determine genotypes is a keystone technology in modern genetics, crucial to studies ranging from clinical diagnostics, to genotype-phenotype association, to reconstruction of ancestry and the detection of selection. To date, high capacity, low cost genotyping has been largely achieved via "SNP chip" microarray-based platforms which require substantial prior knowledge of both genome sequence and variability, and once designed are suitable only for those targeted variable nucleotide sites. This method introduces substantial ascertainment bias and inherently precludes detection of rare or population-specific variants, a major source of information for both population history and genotype-phenotype association. Recent developments in reduced-representation genome sequencing experiments on massively parallel sequencers (commonly referred to as RAD-tag or RADseq) have brought direct sequencing to the problem of population genotyping, but increased cost and procedural and analytical complexity have limited their widespread adoption. Here, we describe a complete laboratory protocol, including a custom combinatorial indexing method, and accompanying software tools to facilitate genotyping across large numbers (hundreds or more) of individuals for a range of markers (hundreds to hundreds of thousands). Our method requires no prior genomic knowledge and achieves per-site and per-individual costs below that of current SNP chip technology, while requiring similar hands-on time investment, comparable amounts of input DNA, and downstream analysis times on the order of hours. Finally, we provide empirical results from the application of this method to both genotyping in a laboratory cross and in wild populations. Because of its flexibility, this modified RADseq approach promises to be applicable to a diversity of biological questions in a wide range of organisms.

  4. Estimating evolutionary rates using time-structured data: a general comparison of phylogenetic methods.

    Science.gov (United States)

    Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W

    2016-11-15

    In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing. Contact: sduchene@unimelb.edu.au or garzonsebastian@hotmail.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
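
    Root-to-tip regression, the simplest of the three compared approaches, is just a least-squares line of root-to-tip distance against sampling time: the slope is the substitution rate and the x-intercept estimates the root age. The distances and dates below are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical time-structured data: sampling year of each sequence and
    # its root-to-tip distance (substitutions/site) on a rooted phylogeny.
    years = np.array([1995, 1998, 2001, 2004, 2007, 2010, 2013])
    root_to_tip = np.array([0.021, 0.026, 0.031, 0.035, 0.041, 0.044, 0.050])

    # Slope = substitution rate (subs/site/year); x-intercept = root age.
    rate, intercept = np.polyfit(years, root_to_tip, 1)
    print(f"rate ≈ {rate:.2e} subs/site/year, root age ≈ {-intercept / rate:.0f}")
    ```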

  5. New methods for estimating follow-up rates in cohort studies

    Directory of Open Access Journals (Sweden)

    Xiaonan Xue

    2017-12-01

    Background: The follow-up rate, a standard index of the completeness of follow-up, is important for assessing the validity of a cohort study. A common method for estimating the follow-up rate, the "Percentage Method", defined as the fraction of all enrollees who developed the event of interest or had complete follow-up, can severely underestimate the degree of follow-up. Alternatively, the median follow-up time does not indicate the completeness of follow-up, and the reverse Kaplan-Meier based method and Clark's Completeness Index (CCI) also have limitations. Methods: We propose a new definition for the follow-up rate, the Person-Time Follow-up Rate (PTFR), which is the observed person-time divided by the total person-time assuming no dropouts. The PTFR cannot be calculated directly since the event times for dropouts are not observed. Therefore, two estimation methods are proposed: a formal person-time method (FPT) in which the expected total follow-up time is calculated using the event rate estimated from the observed data, and a simplified person-time method (SPT) that avoids estimation of the event rate by assigning full follow-up time to all events. Simulations were conducted to measure the accuracy of each method, and each method was applied to a prostate cancer recurrence study dataset. Results: Simulation results showed that the FPT has the highest accuracy overall. In most situations, the computationally simpler SPT and CCI methods are only slightly biased. When applied to a retrospective cohort study of cancer recurrence, the FPT, CCI and SPT showed substantially greater 5-year follow-up than the Percentage Method (92%, 92% and 93% vs 68%). Conclusions: The person-time methods correct a systematic error in the standard Percentage Method for calculating follow-up rates. The easy-to-use SPT and CCI methods can be used in tandem to obtain an accurate and tight interval for PTFR. However, the FPT is recommended when event rates and
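
    A loose sketch contrasting the Percentage Method with one simple reading of the SPT estimator on simulated cohort data. The event, dropout and horizon settings are arbitrary, and the SPT formula here is an assumption based on the abstract's description ("assign full follow-up time to all events"), not the authors' exact formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, horizon = 1000, 5.0                      # cohort size, planned follow-up (years)
    event_time = rng.exponential(20.0, n)       # hypothetical event times
    dropout_time = np.where(rng.random(n) < 0.3, rng.uniform(0, horizon, n), np.inf)

    observed = np.minimum.reduce([event_time, dropout_time, np.full(n, horizon)])
    had_event = event_time <= np.minimum(dropout_time, horizon)
    complete = had_event | (dropout_time > horizon)

    # Percentage Method: fraction with an event or full follow-up.
    percentage = complete.mean()

    # Simplified person-time (SPT), as read from the abstract: events get
    # full follow-up time credited; the denominator assumes every subject
    # could have been followed to the horizon.
    spt = np.where(had_event, horizon, observed).sum() / (n * horizon)
    print(f"Percentage Method: {percentage:.2%}   SPT: {spt:.2%}")
    ```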

  6. Real-time Continuous Assessment Method for Mental and Physiological Condition using Heart Rate Variability

    Science.gov (United States)

    Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro

    It is necessary to monitor the daily health condition for preventing stress syndrome. In this study, a method was proposed for assessing the mental and physiological condition, such as work stress or relaxation, using heart rate variability in real time and continuously. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated for assessing the mental and physiological condition. In this method, 20 heart beats were used to calculate these indexes, and they were calculated at one-beat intervals. Three conditions, namely sitting at rest, performing mental arithmetic and watching a relaxation movie, were assessed using our proposed algorithm. The assessment accuracies were 71.9% and 55.8% when performing mental arithmetic and watching the relaxation movie, respectively. In this method, the mental and physiological condition is assessed using only the 20 preceding heart beats, so it can be considered a real-time assessment method.

  7. Standard test method for determining atmospheric chloride deposition rate by wet candle method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 This test method covers a wet candle device and its use in measuring atmospheric chloride deposition (amount of chloride salts deposited from the atmosphere on a given area per unit time). 1.2 Data on atmospheric chloride deposition can be useful in classifying the corrosivity of a specific area, such as an atmospheric test site. Caution must be exercised, however, to take into consideration the season because airborne chlorides vary widely between seasons. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  8. Paraquat prohibition and change in the suicide rate and methods in South Korea.

    Science.gov (United States)

    Myung, Woojae; Lee, Geung-Hee; Won, Hong-Hee; Fava, Maurizio; Mischoulon, David; Nyer, Maren; Kim, Doh Kwan; Heo, Jung-Yoon; Jeon, Hong Jin

    2015-01-01

    The annual suicide rate in South Korea is the highest among the developed countries. Paraquat is a highly lethal herbicide, commonly used in South Korea as a means for suicide. We have studied the effect of the 2011 paraquat prohibition on the national suicide rate and method of suicide in South Korea. We obtained the monthly suicide rate from 2005 to 2013 in South Korea. In our analyses, we adjusted for the effects of celebrity suicides, and economic, meteorological, and seasonal factors on suicide rate. We employed change point analysis to determine the effect of paraquat prohibition on suicide rate over time, and the results were verified by structural change analysis, an alternative statistical method. After the paraquat prohibition period in South Korea, there was a significant reduction in the total suicide rate and suicide rate by poisoning with herbicides or fungicides in all age groups and in both genders. The estimated suicide rates during this period decreased by 10.0% and 46.1% for total suicides and suicides by poisoning of herbicides or fungicides, respectively. In addition, method substitution effect of paraquat prohibition was found in suicide by poisoning by carbon monoxide, which did not exceed the reduction in the suicide rate of poisoning with herbicides or fungicides. In South Korea, paraquat prohibition led to a lower rate of suicide by paraquat poisoning, as well as a reduction in the overall suicide rate. Paraquat prohibition should be considered as a national suicide prevention strategy in developing and developed countries alongside careful observation for method substitution effects.

  10. Leak Rate Quantification Method for Gas Pressure Seals with Controlled Pressure Differential

    Science.gov (United States)

    Daniels, Christopher C.; Braun, Minel J.; Oravec, Heather A.; Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    An enhancement to the pressure decay leak rate method with mass point analysis solved deficiencies in the standard method. By adding a control system, a constant gas pressure differential across the test article was maintained. As a result, the desired pressure condition was met at the onset of the test, and the mass leak rate and measurement uncertainty were computed in real-time. The data acquisition and control system were programmed to automatically stop when specified criteria were met. Typically, the test was stopped when a specified level of measurement uncertainty was attained. Using silicone O-ring test articles, the new method was compared with the standard method that permitted the downstream pressure to be non-constant atmospheric pressure. The two methods recorded comparable leak rates, but the new method recorded leak rates with significantly lower measurement uncertainty, statistical variance, and test duration. Utilizing this new method in leak rate quantification, projects will reduce cost and schedule, improve test results, and ease interpretation between data sets.
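
    Mass point analysis rests on the ideal gas law: m(t) = p(t)V/(R·T) with the specific gas constant, so the leak rate is the slope of the mass-versus-time line. A sketch with an assumed volume, temperature and pressure trace; none of these numbers come from the record.

    ```python
    import numpy as np

    # Pressure-decay data from a hypothetical test volume (SI units).
    V, T, R = 0.002, 293.15, 287.05          # volume m^3, temperature K, R for air J/(kg K)
    t = np.linspace(0, 600, 61)              # 10 min of samples
    p = 201_000 - 0.8 * t                    # measured absolute pressure, Pa

    # Mass-point analysis: m(t) = p(t) V / (R T); the leak rate is the slope
    # of the least-squares line through the mass points.
    m = p * V / (R * T)
    dm_dt, _ = np.polyfit(t, m, 1)
    print(f"leak rate ≈ {-dm_dt:.3e} kg/s")
    ```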

  11. A new method to determine reflex latency induced by high rate stimulation of the nervous system

    Directory of Open Access Journals (Sweden)

    Ilhan eKaracan

    2014-07-01

    High rate stimulations of the neuromuscular system, such as continuous whole body vibration, tonic vibration reflex and high frequency electrical stimulation, are used in physiological research with increasing interest. In these studies, the neuronal circuitries underlying the reflex responses remain unclear due to the problem of determining the exact reflex latencies. We present a novel cumulated average method to determine the reflex latency during high rate stimulation of the nervous system which was proven to be significantly more accurate than the classical method. The classical method, cumulant density analysis, reveals the relationship between the two synchronously recorded signals as a function of the lag between the signals. The comparison of the new method with the classical technique and their relative accuracy was tested using a computer simulation. In the simulated signals the EMG response latency was constructed to be exactly 40 ms. The new method accurately indicated the value of the simulated reflex latency (40 ms). However, the classical method showed that the lag time between the simulated triggers and the simulated signals was 49 ms. Simulation results illustrated that the cumulated average method is a reliable and more accurate method compared with the classical method. We therefore suggest that the new cumulated average method is able to determine the high rate stimulation induced reflex latencies more accurately than the classical method.
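
    Trigger-locked averaging of rectified EMG is the common core of the methods compared in this record. The sketch below averages epochs around each stimulus and reads the latency off the first threshold crossing; the onset rule (baseline mean + 3 SD) is a generic choice, not the authors' cumulated-average or cumulant-density algorithm.

    ```python
    import numpy as np

    def reflex_latency(emg, triggers, fs, window=0.1):
        """Trigger-locked average of rectified EMG; latency where the average
        first exceeds baseline mean + 3 SD. A generic sketch only."""
        n = int(window * fs)
        epochs = np.array([np.abs(emg[i:i + n]) for i in triggers if i + n <= emg.size])
        avg = epochs.mean(axis=0)
        base = avg[: n // 5]                     # first 20% of window as baseline
        onset = np.argmax(avg > base.mean() + 3 * base.std())
        return onset / fs

    # Synthetic test: 40 ms latency response embedded in noise, 1 kHz sampling.
    fs, rng = 1000, np.random.default_rng(5)
    emg = 0.05 * rng.standard_normal(100_000)
    triggers = np.arange(500, 99_000, 500)
    for i in triggers:
        emg[i + 40:i + 60] += 0.3                # response 40 ms after each trigger
    print(f"estimated latency: {reflex_latency(emg, triggers, fs) * 1000:.0f} ms")
    ```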

  12. Experiments to Evaluate and Implement Passive Tracer Gas Methods to Measure Ventilation Rates in Homes

    Energy Technology Data Exchange (ETDEWEB)

    Lunden, Melissa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Faulkner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Heredia, Elizabeth [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cohn, Sebastian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dickerhoff, Darryl [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Noris, Federico [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Logue, Jennifer [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hotchi, Toshifumi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Singer, Brett [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sherman, Max H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-10-01

    This report documents experiments performed in three homes to assess the methodology used to determine air exchange rates using passive tracer techniques. The experiments used four different tracer gases emitted simultaneously but implemented with different spatial coverage in the home. Two different tracer gas sampling methods were used. The results characterize the factors of the execution and analysis of the passive tracer technique that affect the uncertainty in the calculated air exchange rates. These factors include uncertainties in tracer gas emission rates, differences in measured concentrations for different tracer gases, temporal and spatial variability of the concentrations, the comparison between different gas sampling methods, and the effect of different ventilation conditions.

  13. Measurement of fatigue crack growth rate of reactor structural material in air based on DCPD method

    International Nuclear Information System (INIS)

    Du Donghai; Chen Kai; Yu Lun; Zhang Lefu; Shi Xiuqiang; Xu Xuelian

    2014-01-01

    The principles and details of the direct current potential drop (DCPD) method for monitoring the crack growth of reactor structural materials were introduced in this paper. Based on this method, the fatigue crack growth rate (CGR) of typical structural materials in nuclear power systems was measured. The effects of applied load, load ratio and loading frequency on the fatigue crack growth rate of reactor structural materials were discussed. The results show that the fatigue crack growth rate of reactor structural materials depends on the hardness of the material: the harder the material, the higher the crack growth rate. (authors)

  14. Method of Euthanasia Influences the Oocyte Fertilization Rate with Fresh Mouse Sperm

    Science.gov (United States)

    Hazzard, Karen C; Watkins-Chow, Dawn E; Garrett, Lisa J

    2014-01-01

    In vitro fertilization (IVF) is used to produce mouse embryos for a variety of reasons. We evaluated the effect of the method of euthanasia on the fertilization rate in 2 different IVF protocols. Oocytes collected from C57BL/6J female mice euthanized by CO2 inhalation or cervical dislocation were used in IVF with fresh sperm from either wild-type or genetically engineered C57BL/6J. Compared with CO2 inhalation, cervical dislocation improved the resulting rate of fertilization by 18% in an IVF method using Cook media and by 13% in an IVF method using methyl-β-cyclodextrin and reduced glutathione. The lower fertilization rate due to euthanasia by CO2 inhalation was accompanied by changes in blood pH and body temperature despite efforts to minimize temperature drops. In our hands, euthanasia by cervical dislocation improved fertilization rates and consequently reduced the number of egg-donor mice required. PMID:25650969

  15. New method of analyzing well tests in fractured wells using sandface pressure and rate data

    Energy Technology Data Exchange (ETDEWEB)

    Osman, M.; Almehaideb, R.; Abou-Kassem, J. [U.A.E. University, Al-Ain (United Arab Emirates)

    1998-05-01

    Analysis of variable flow rate tests has been of special interest recently because in many cases it is impractical to keep a flow rate constant long enough to perform a drawdown test. Further, in many other drawdown and buildup tests, the early data were influenced by wellbore storage effects, and the duration of these effects could be quite long for low-permeability reservoirs. This paper presents a mathematical model which describes drawdown and buildup tests in hydraulically fractured wells. This new method uses a specialized plot approach to analyze the linear flow data and combines it with the superposition of constant-rate solution method for the analysis of pseudoradial flow data. It does not require prior knowledge of the fracture type (uniform-flux or infinite-conductivity); in fact it predicts the fracture type. This method is useful for the analysis of simultaneously measured downhole pressure and sandface rate data. 12 refs., 11 figs., 3 tabs.

  17. Establishing Upper Limits for Item Ratings for the Angoff Method: Are Resulting Standards More 'Realistic'?

    Science.gov (United States)

    Reid, Jerry B.

    This report investigates an area of uncertainty in using the Angoff method for setting standards, namely whether or not a judge's conceptualizations of borderline group performance are realistic. Ratings are usually made with reference to the performance of this hypothetical group; therefore, the Angoff method's success depends on this point.…

  18. 'Continuation rate', 'use-effectiveness' and their assessment for the diaphragm and jelly method.

    Science.gov (United States)

    Chandrasekaran, C; Karkal, M

    1972-11-01

    The application of the life-table technique in the calculation of use-effectiveness of a contraceptive was proposed by Potter in 1963.(1) The technique was also found to be useful in assessing the duration for which the use of a contraceptive was continued. The keen interest that existed in the use of IUD in the mid-1960's was reflected in the terminology developed for assessment of the continuity of use. 'Retention rate' was a frequently used index.(2) Because of the development of the concept of segments whose end-period determined either termination of the use of a method or its continuance on a cut-off date, 'closure rate' and 'termination rate' have been used as measures of the discontinuance of the use of methods primarily of the IUD.(3) While discussing concepts relating to acceptance, use and effectiveness of family planning methods, more generally, an expert group suggested that 'continuation' should be used to denote that a client (or a couple) had begun to practise a method and that the method was still being practised.(4) Since this group defined 'an acceptor' as a person taking service and/or advice, i.e. having an IUD insertion or a sterilization operation or receiving supplies (or advice on methods such as 'rhythm' or coitus-interruptus with the intent of using the method), the base for the assessment of continuation rates, according to this group, would be only those acceptors who had begun using the method. The life-table method has also been used for the study of the continuation rate for pill acceptors.(5) Balakrishnan, et al., made a study of continuation rates of oral contraceptives using the multiple decrement life-table technique.(6).

  19. Implementation of Online Promethee Method for Poor Family Change Rate Calculation

    Science.gov (United States)

    Aji, Dhady Lukito; Suryono; Widodo, Catur Edi

    2018-02-01

    This research implements an online calculation of the poor-family change rate using the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE). This system is very useful for monitoring poverty in a region as well as for administrative services related to the poverty rate. The system consists of client computers and a server connected via the internet. Poor-family residence data are obtained from the government. In addition, survey data are entered through the client computer in each administrative village, covering the 23 criteria established by the government. The PROMETHEE method is used to evaluate the poverty value, and its weights are used to determine poverty status. The PROMETHEE output can also be used to rank the poverty of the population registered on the server based on the net flow value. The change rate is calculated by comparing the current poverty rate with the previous poverty rate. The results can be viewed online and in real time on the server as numbers and graphs. The test results show that the system can classify poverty status, calculate the rate of change in the poverty rate, and determine the poverty value and ranking of each resident.
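
    A minimal PROMETHEE II sketch: pairwise preferences under the "usual criterion" (preference 1 when one alternative strictly beats another, 0 otherwise), criterion weights, and net outranking flows. The three families, three criteria and weights below are invented; the system described above uses the 23 government criteria.

    ```python
    import numpy as np

    def promethee_net_flows(X, weights, maximize):
        """Net outranking flows (PROMETHEE II) with the 'usual criterion'."""
        n = X.shape[0]
        sign = np.where(maximize, 1.0, -1.0)
        pi = np.zeros((n, n))
        for k in range(X.shape[1]):
            diff = sign[k] * (X[:, k][:, None] - X[:, k][None, :])
            pi += weights[k] * (diff > 0)       # aggregated preference index
        return (pi.sum(axis=1) - pi.sum(axis=0)) / (n - 1)

    # Three hypothetical families scored on three illustrative criteria
    # (income, floor area, education years), each to be maximized.
    X = np.array([[1.2, 20.0, 6], [0.8, 12.0, 9], [2.0, 35.0, 3]])
    flows = promethee_net_flows(X, np.array([0.5, 0.3, 0.2]), np.array([True, True, True]))
    print(np.argsort(-flows))    # ranking, best first
    ```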

  1. Ballistic deficit correction methods for large Ge detectors-high counting rate study

    International Nuclear Information System (INIS)

    Duchene, G.; Moszynski, M.

    1995-01-01

    This study presents different ballistic deficit correction methods versus input count rate (from 3 to 50 kcounts/s) using four large Ge detectors of about 70% relative efficiency. It turns out that the Tennelec TC245 linear amplifier in the BDC mode (Hinshaw method) is the best compromise for energy resolution throughout. All correction methods lead to narrow sum-peaks indistinguishable from single γ lines. The full energy peak throughput is found representative of the pile-up inspection dead time of the corrector circuits. This work also presents a new and simple representation, plotting simultaneously energy resolution and throughput versus input count rate. (TEC). 12 refs., 11 figs.

  2. A novel analytical method for pharmaceutical polymorphs by terahertz spectroscopy and the optimization of crystal form at the discovery stage.

    Science.gov (United States)

    Ikeda, Yukihiro; Ishihara, Yoko; Moriwaki, Toshiya; Kato, Eiji; Terada, Katsuhide

    2010-01-01

    A novel analytical method for the determination of pharmaceutical polymorphs was developed using terahertz spectroscopy. It was found that each polymorph of a substance shows a specific terahertz absorption spectrum. In particular, analysis of the second derivative spectrum was enormously beneficial in the discrimination of closely related polymorphs that were difficult to discern by powder X-ray diffractometry. Crystal forms that were obtained by crystallization from various solvents and stored under various conditions were specifically characterized by the second derivative of each terahertz spectrum. Fractional polymorphic transformation for substances stored under stressed conditions was also identified by terahertz spectroscopy during the solid-state stability test, but could not be detected by powder X-ray diffractometry. Since polymorphs could be characterized clearly by terahertz spectroscopy, further physicochemical studies could be conducted in a timely manner. The development form of the compound examined was determined by the results of comprehensive physicochemical studies that included thermodynamic relationships, as well as chemical and physicochemical stability. In conclusion, terahertz spectroscopy, which has unique power in the elucidation of molecular interactions within a crystal lattice, can play a more important role in physicochemical research. Terahertz spectroscopy has great potential as a tool for polymorphic determination, particularly since the second derivative of the terahertz spectrum possesses high sensitivity for pharmaceutical polymorphs.

  3. Fill rate estimation in periodic review policies with lost sales using simple methods

    Energy Technology Data Exchange (ETDEWEB)

    Cardós, M.; Guijarro Tarradellas, E.; Babiloni Griñón, E.

    2016-07-01

    Purpose: The exact estimation of the fill rate in the lost sales case is complex and time consuming. However, simple and suitable methods are needed for its estimation so that inventory managers can use them. Design/methodology/approach: Instead of trying to compute the fill rate in one step, this paper focuses first on estimating the probabilities of different on-hand stock levels so that the fill rate can be computed afterwards. Findings: As a result, the novel proposed method outperforms the other methods and is relatively simple to compute. Originality/value: Existing methods for estimating stock levels are examined, new procedures are proposed and their performance is assessed.
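
    For context, the quantity being estimated can be pinned down by brute-force simulation of a periodic-review, order-up-to policy with lost sales; this is the expensive baseline that simple analytical estimators try to avoid. All parameters below are assumptions, and the policy details are a generic (R, S) reading, not the paper's exact setting.

    ```python
    import numpy as np

    def simulate_fill_rate(base_stock, demand_mean, review_period, lead_time, cycles, seed=6):
        """Crude simulation of an (R, S) periodic-review policy with lost
        sales: every R periods stock is ordered up to S, arriving after the
        lead time; unmet demand is lost. Returns the fraction of demand
        served from stock (fill rate)."""
        rng = np.random.default_rng(seed)
        on_hand, served, total = float(base_stock), 0.0, 0.0
        pipeline = []                                   # (arrival_time, quantity)
        for t in range(cycles * review_period):
            on_hand += sum(q for a, q in pipeline if a == t)
            pipeline = [(a, q) for a, q in pipeline if a > t]
            d = rng.poisson(demand_mean)
            served += min(d, on_hand)
            total += d
            on_hand = max(on_hand - d, 0.0)
            if t % review_period == 0:                  # place an order up to S
                outstanding = sum(q for _, q in pipeline)
                pipeline.append((t + lead_time, base_stock - on_hand - outstanding))
        return served / total

    print(f"fill rate ≈ {simulate_fill_rate(25, 4.0, 5, 2, 20_000):.3f}")
    ```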

  4. Identifying the plant-associated microbiome across aquatic and terrestrial environments: the effects of amplification method on taxa discovery

    Energy Technology Data Exchange (ETDEWEB)

    Jackrel, Sara L. [Department of Ecology and Evolution, The University of Chicago, 1101 E 57th Street Chicago IL 60637 USA; Owens, Sarah M. [Biosciences Division, Argonne National Laboratory, 9700 S. Cass Avenue Lemont IL 60439 USA; Gilbert, Jack A. [Biosciences Division, Argonne National Laboratory, 9700 S. Cass Avenue Lemont IL 60439 USA; The Microbiome Center, Department of Surgery, The University of Chicago, 5841 S Maryland Ave Chicago IL 60637 USA; Pfister, Catherine A. [Department of Ecology and Evolution, The University of Chicago, 1101 E 57th Street Chicago IL 60637 USA

    2017-01-25

    Plants in terrestrial and aquatic environments contain a diverse microbiome. Yet, the chloroplast and mitochondria organelles of the plant eukaryotic cell originate from free-living cyanobacteria and Rickettsiales. This represents a challenge for sequencing the plant microbiome with universal primers, as ~99% of 16S rRNA sequences may consist of chloroplast and mitochondrial sequences. Peptide nucleic acid clamps offer a potential solution by blocking amplification of host-associated sequences. We assessed the efficacy of chloroplast and mitochondria-blocking clamps against a range of microbial taxa from soil, freshwater and marine environments. While we found that the mitochondrial blocking clamps appear to be a robust method for assessing animal-associated microbiota, Proteobacterial 16S rRNA binds to the chloroplast-blocking clamp, resulting in a strong sequencing bias against this group. We attribute this bias to a conserved 14-bp sequence in the Proteobacteria that matches the 17-bp chloroplast-blocking clamp sequence. By scanning the Greengenes database, we provide a reference list of nearly 1500 taxa that contain this 14-bp sequence, including 48 families such as the Rhodobacteraceae, Phyllobacteriaceae, Rhizobiaceae, Kiloniellaceae and Caulobacteraceae. To determine where these taxa are found in nature, we mapped this taxa reference list against the Earth Microbiome Project database. These taxa are abundant in a variety of environments, particularly aquatic and semiaquatic freshwater and marine habitats. To facilitate informed decisions on effective use of organelle-blocking clamps, we provide a searchable database of microbial taxa in the Greengenes and Silva databases matching various n-mer oligonucleotides of each PNA sequence.

  5. Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar

    Science.gov (United States)

    Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates because of attenuation effects and at light rain rates because of receiver sensitivity. For the estimation of time-averaged rain characteristics over an area both errors are relevant. By enlarging the space-time region over which the data are collected, the sampling error can be reduced. However, the bias and distortion of the estimated rain distribution generally will remain if estimates at the high and low rain rates are not corrected. In this paper we use the TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point. The high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of the area-time-integral (ATI) methods, where the fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method a correlation is established between these quantities. However, if a particular form of the rain rate distribution is assumed and if the ratio of the mean to the standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of the fractional area above a threshold. The second method is an extension of this idea, where the distribution is estimated from data over a range of rain rates chosen in an intermediate range where the effects of attenuation and poor sensitivity can be neglected. The advantage of estimating the distribution itself rather than the mean value is that it yields the fraction of rain contributed by
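
    The distribution-recovery step of the ATI method can be sketched numerically. The lognormal form, the mean-to-standard-deviation ratio and the measured fractional area below are all assumed values for illustration:

```python
import numpy as np
from scipy.stats import norm

ratio = 1.0          # assumed known mean/std of the conditional rain rate
tau = 5.0            # threshold rain rate (mm/h)
frac_above = 0.10    # measured fractional area exceeding tau

# For a lognormal, mean/std = 1/sqrt(exp(sigma^2) - 1)  =>  solve for sigma.
sigma = np.sqrt(np.log(1.0 + 1.0 / ratio**2))
# P(R > tau) = 1 - Phi((ln tau - mu)/sigma)  =>  solve for mu.
mu = np.log(tau) - sigma * norm.ppf(1.0 - frac_above)

mean_rain = np.exp(mu + 0.5 * sigma**2)   # full distribution now known
print(f"sigma={sigma:.3f}, mu={mu:.3f}, conditional mean={mean_rain:.2f} mm/h")
```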

  6. Charged-particle thermonuclear reaction rates: I. Monte Carlo method and statistical distributions

    International Nuclear Information System (INIS)

    Longland, R.; Iliadis, C.; Champagne, A.E.; Newton, J.R.; Ugalde, C.; Coc, A.; Fitzgerald, R.

    2010-01-01

    A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering in the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended 'classical' rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless 'minimum' (or 'lower limit') and 'maximum' (or 'upper limit') reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters μ and σ. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this issue (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
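
    A minimal sketch of the Monte Carlo procedure described above, with a stand-in rate model built from lognormal and Gaussian input factors rather than real nuclear physics input:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample each input quantity from its probability density, recompute the
# rate many times, and report the 0.16/0.50/0.84 quantiles of the result.
n = 100_000
resonance_strength = rng.lognormal(mean=np.log(2.0e-3), sigma=0.3, size=n)
stopping_factor = rng.normal(loc=1.00, scale=0.05, size=n)
rate_samples = 1.0e6 * resonance_strength * np.clip(stopping_factor, 0.0, None)

low, median, high = np.quantile(rate_samples, [0.16, 0.50, 0.84])
print(f"low={low:.3e}  median={median:.3e}  high={high:.3e}")

# Lognormal approximation of the output rate: mu and sigma from ln(rate),
# the quantities tabulated for later nucleosynthesis studies.
lnr = np.log(rate_samples)
print(f"lognormal parameters: mu={lnr.mean():.3f}, sigma={lnr.std():.3f}")
```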

  7. Systematic method for resource rating with two applications to potential wilderness areas

    International Nuclear Information System (INIS)

    Voelker, A.H.; Wedow, H.; Oakes, E.; Scheffler, P.K.

    1979-09-01

    A versatile method was developed to rate the energy- and mineral-resource potentials of areas in which land-management and resource-development decisions must be reached with a minimum expenditure of money and time. The method surveys published and personal information on resources in the region being assessed, selects the most appropriate information, synthesizes the information into map overlays and tract descriptions, rates the potential of tracts for particular resources, rates the overall importance of each tract for resource development, and documents the ratings and their significance. Basic criteria considered by the assessment team include the favorability and certainty ratings, the overall availability of each rated resource within this country, the size of a given tract, economic factors, and the number of resources in a tract. The method was applied to two separate but roughly similar geologic regions, the Idaho-Wyoming-Utah thrust belt and the central Appalachians. Undeveloped tracts of national forestland in these regions that are being considered for possible designation under the Roadless Area Review and Evaluation (RARE II) planning process were rated for their resource value. Results support earlier indications that the 63 tracts comprising the western thrust belt possess a high potential for future resource development. Nearly one-half of these tracts were rated either 3 or 4. However, the wide spread of the importance ratings between 1 and 4 suggests that some tracts or portions of tracts can be added to the National Wilderness System without compromising resource development. The 72 eastern thrust belt tracts were given lower ratings, which indicates the reduced significance of the few remaining roadless areas in this region in satisfying the nation's near-term resource needs.

  8. A Bayes linear Bayes method for estimation of correlated event rates.

    Science.gov (United States)

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
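
    The following is not the Bayes linear Bayes method itself; it is only the single-rate conjugate gamma-Poisson update that underlies the full Bayesian comparison model, with invented prior parameters and counts:

```python
from scipy import stats

# Gamma prior on a Poisson event rate, updated with observed counts.
# (The paper's multivariate gamma prior and homogenization factors are
# not reproduced here; this is the uncorrelated single-rate special case.)
alpha0, beta0 = 2.0, 4.0       # prior: mean rate alpha/beta = 0.5 events/yr
events, exposure = 7, 10.0     # observed 7 events over 10 unit-years

alpha1, beta1 = alpha0 + events, beta0 + exposure
posterior = stats.gamma(a=alpha1, scale=1.0 / beta1)
print(f"posterior mean rate: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```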

  9. Suicide epidemics: the impact of newly emerging methods on overall suicide rates - a time trends study

    Directory of Open Access Journals (Sweden)

    Chang Shu-Sen

    2011-05-01

    Full Text Available Abstract Background The impact of newly emerging, popular suicide methods on overall rates of suicide has not previously been investigated systematically. Understanding these effects may have important implications for public health surveillance. We examine the emergence of three novel methods of suicide by gassing in the 20th and 21st centuries and determine the impact of emerging methods on overall suicide rates. Methods We studied the epidemic rises in domestic coal gas (1919-1935, England and Wales, motor vehicle exhaust gas (1975-1992, England and Wales and barbecue charcoal gas (1999-2006, Taiwan suicide using Poisson and joinpoint regression models. Joinpoint regression uses contiguous linear segments and join points (points at which trends change to describe trends in incidence. Results Epidemic increases in the use of new methods of suicide were generally associated with rises in overall suicide rates of between 23% and 71%. The recent epidemic of barbecue charcoal suicides in Taiwan was associated with the largest rise in overall rates (40-50% annual rise, whereas the smallest rise was seen for car exhaust gassing in England and Wales (7% annual rise. Joinpoint analyses were only feasible for car exhaust and charcoal burning suicides; these suggested an impact of the emergence of car exhaust suicides on overall suicide rates in both sexes in England and Wales. However there was no statistical evidence of a change in the already increasing overall suicide trends when charcoal burning suicides emerged in Taiwan, possibly due to the concurrent economic recession. Conclusions Rapid rises in the use of new sources of gas for suicide were generally associated with increases in overall suicide rates. Suicide prevention strategies should include strengthening local and national surveillance for early detection of novel suicide methods and implementation of effective media guidelines and other appropriate interventions to limit the spread of

  10. Research on criticality analysis method of CNC machine tools components under fault rate correlation

    Science.gov (United States)

    Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han

    2018-02-01

    In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined and described with an adjacency matrix. The fault structure relations are then arranged hierarchically using the interpretive structural model (ISM). Assuming that fault propagation follows a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combined with the component fault rate under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of components under fault rate correlation is determined, and the key components are identified to provide a correct basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
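
    The influence-ranking step can be sketched with a plain power-iteration PageRank over a fault adjacency matrix; the 4-component system and the damping factor are illustrative, not taken from the paper:

```python
import numpy as np

# Adjacency matrix: A[i, j] = 1 if a fault in component i propagates to j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

# Row-normalize into a transition matrix (dangling rows become uniform).
out_deg = A.sum(axis=1, keepdims=True)
P = np.divide(A, out_deg, out=np.full_like(A, 1.0 / len(A)), where=out_deg > 0)

d, n = 0.85, len(A)          # standard damping factor, number of components
r = np.full(n, 1.0 / n)
for _ in range(100):         # power iteration
    r = (1 - d) / n + d * (P.T @ r)
print("relative influence values:", np.round(r / r.sum(), 3))
```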

  11. Measurement of disintegration rates of 60Co volume sources by the sum-peak method

    International Nuclear Information System (INIS)

    Kawano, Takao; Ebihara, Hiroshi

    1991-01-01

    The sum-peak method has been applied to the determination of the disintegration rates of 60Co volume sources (1.05 x 10^4 Bq, 1.05 x 10^3 Bq and 1.05 x 10^2 Bq, in 100-ml polyethylene bottles) using a NaI(Tl) detector 50 mm in diameter and 50 mm in height. The experimental results showed that decreasing disintegration rates led to growing underestimation relative to the true disintegration rates. The underestimation of the disintegration rates determined by the sum-peak method was presumed to result from overestimation of the areas under the sum peaks, caused by the overlap of the Compton continuum of the γ-ray (2614 keV) emitted from the naturally occurring radionuclide 208Tl with the sum peaks. (author)
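
    For reference, the sum-peak relation commonly used for two-gamma emitters such as 60Co is N0 = T + A1*A2/A12 (Brinkman's formula); the sketch below uses invented count rates to show how an inflated sum-peak area drives the estimate low, as reported above:

```python
# A1, A2: net count rates in the two full-energy peaks; A12: rate in the
# sum peak; T: total count rate of the spectrum. All numbers illustrative.
A1, A2, A12, T = 950.0, 900.0, 85.0, 9500.0   # counts per second

N0 = T + A1 * A2 / A12
print(f"estimated disintegration rate: {N0:.0f} Bq")

# If A12 is overestimated by 20% because of Compton continuum (e.g. from
# 208Tl) lying under the sum peak, N0 is pushed below the true rate:
N0_biased = T + A1 * A2 / (A12 * 1.2)
print(f"with inflated sum-peak area:   {N0_biased:.0f} Bq (underestimate)")
```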

  12. Neutron Scattering in Hydrogenous Moderators, Studied by Time Dependent Reaction Rate Method

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, L G; Moeller, E; Purohit, S N

    1966-03-15

    The moderation and absorption of a neutron burst in water, poisoned with the non-1/v absorbers cadmium and gadolinium, has been followed on the time scale by multigroup calculations, using scattering kernels for the proton gas and the Nelkin model. The time dependent reaction rate curves for each absorber display clear differences for the two models, and the separation between the curves does not depend much on the absorber concentration. An experimental method for the measurement of infinite medium reaction rate curves in a limited geometry has been investigated. This method makes the measurement of the time dependent reaction rate generally useful for thermalization studies in a small geometry of a liquid hydrogenous moderator, provided that the experiment is coupled to programs for the calculation of scattering kernels and time dependent neutron spectra. Good agreement has been found between the reaction rate curve, measured with cadmium in water, and a calculated curve, where the Haywood kernel has been used.

  13. System and Method for Determining Rate of Rotation Using Brushless DC Motor

    Science.gov (United States)

    Howard, David E. (Inventor); Smith, Dennis A. (Inventor)

    2000-01-01

    A system and method are provided for measuring rate of rotation. A brushless DC motor is rotated and produces a back electromagnetic force (emf) on each winding thereof. Each winding's back-emf is squared. The squared outputs associated with each winding are combined, with the square root being taken of such combination, to produce a DC output proportional only to the rate of rotation of the motor's shaft.
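
    A quick numerical check of the idea, assuming ideal sinusoidal back-emfs 120° apart (the back-emf constant and speed are arbitrary):

```python
import numpy as np

k_e = 0.05                                # back-emf constant, V/(rad/s)
omega = 300.0                             # shaft speed, rad/s
theta = np.linspace(0, 4 * np.pi, 1000)   # electrical angle, two revolutions

# Three-phase back-emfs, squared and summed: the sin^2 terms 120 degrees
# apart add to the constant 3/2, so the square root is a DC value that
# depends only on speed, not on rotor angle.
emfs = [k_e * omega * np.sin(theta + p * 2 * np.pi / 3) for p in range(3)]
combined = np.sqrt(sum(e ** 2 for e in emfs))

print(np.allclose(combined, k_e * omega * np.sqrt(1.5)))   # True
```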

  14. Investigation of the existing methodology of value estimation and methods of discount rate estimation

    OpenAIRE

    Plikus, Iryna

    2017-01-01

    The subject of research is the current practice of determining the fair value of assets and liabilities at the present (discounted) cost. One of the most problematic places is the determination of the discount rate, which belongs to the jurisdiction of a professional accountant judgment.The methods of formalization, hypothetical assumption, system approach and scientific abstraction in substantiating the formation of accounting policy with respect to the choice of the discount rate are used i...

  15. Intracavitary after loading techniques, advantages and disadvantages with high and low dose-rate methods

    International Nuclear Information System (INIS)

    Walstam, Rune

    1980-01-01

    Although suggested as early as 1903, afterloading methods could be developed for interstitial as well as intracavitary work only when suitable sealed gamma sources became available. Manual afterloading can be used only for low dose-rate irradiation, while remote controlled afterloading can be used for both low and high dose-rate irradiation. The afterloading units used at the Karolinska Institute, Stockholm, are described, and experience of their use is narrated briefly. (M.G.B.)

  16. The log mean heat transfer rate method of heat exchanger considering the influence of heat radiation

    International Nuclear Information System (INIS)

    Wong, K.-L.; Ke, M.-T.; Ku, S.-S.

    2009-01-01

    The log mean temperature difference (LMTD) method is conventionally used to calculate the total heat transfer rate of heat exchangers. Because the heat radiation equation contains the fourth power of temperature, which makes calculations very complicated, the LMTD method neglects the influence of heat radiation. A recent investigation of a circular duct shows, however, that even when the temperature difference between the outer duct surface and the surroundings is as low as 1 deg C, the heat radiation effect cannot be ignored in situations of low ambient convective heat transfer coefficient and high surface emissivity. In this investigation, the log mean heat transfer rate (LMHTR) method, which considers the influence of heat radiation, is developed to calculate the total heat transfer rate of heat exchangers.
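
    An order-of-magnitude check of the claim that radiation matters even at a 1 °C difference when the convective coefficient is low and the emissivity is high; all values are assumed:

```python
SIGMA = 5.67e-8                 # Stefan-Boltzmann constant, W/(m^2 K^4)
T_s, T_amb = 301.15, 300.15     # surface and surroundings, K (1 degC apart)
h_conv = 5.0                    # low natural-convection coefficient, W/(m^2 K)
eps = 0.9                       # high surface emissivity

q_conv = h_conv * (T_s - T_amb)                 # convective flux, W/m^2
q_rad = eps * SIGMA * (T_s**4 - T_amb**4)       # radiative flux, W/m^2
print(f"convection: {q_conv:.2f} W/m^2, radiation: {q_rad:.2f} W/m^2")
# The two fluxes are comparable, so neglecting radiation biases the total.
```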

  17. Nutrition and biomarkers in psychiatry : research on micronutrient deficiencies in schizophrenia, the role of the intestine in the hyperserotonemia of autism, and a method for non-hypothesis driven discovery of biomarkers in urine

    NARCIS (Netherlands)

    Kemperman, Ramses Franciscus Jacobus

    2007-01-01

    This thesis describes the study of markers of nutrition and intestinal motility in mental disorders with a focus on schizophrenia and autism, and the development, evaluation and application of a biomarker discovery method for urine. The aim of the thesis is to investigate the role of long-chain

  18. Engineering Mathematical Analysis Method for Productivity Rate in Linear Arrangement Serial Structure Automated Flow Assembly Line

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2015-01-01

    Productivity rate (Q), or production rate, is one of the important indicator criteria for industrial engineers seeking to improve a production or assembly line and its finished-goods output. Mathematical and statistical analysis is required to give a clear overview of the failure factors and of possible improvements within the production line, especially for an automated flow line, since it is complicated. A mathematical model of the productivity rate of a linear-arrangement, serial-structure automated flow line with different failure rates and bottleneck machining time parameters is the basic model for this productivity analysis. This paper presents the engineering mathematical analysis method as applied in an automotive company in Malaysia that operates an automated flow assembly line in final assembly to produce motorcycles. The DCAS engineering and mathematical analysis method, which consists of four stages (data collection, calculation and comparison, analysis, and sustainable improvement), is used to analyze productivity in the automated flow assembly line based on the particular mathematical model. The variety of failure rates causing loss of productivity, together with the bottleneck machining time, is shown explicitly in mathematical form, and a sustainable solution for productivity improvement of this final assembly automated flow line is presented.

  19. Method and system of simulating nuclear power plant count rate for training purposes

    International Nuclear Information System (INIS)

    Alliston, W.H.; Koenig, R.H.

    1975-01-01

    A method and system are described for the real-time simulation of the dynamic operation of a nuclear power plant in which nuclear flux rate counters are provided for monitoring the rate of nuclear fission of the reactor. The system utilizes apparatus that includes digital computer means for calculating data relating to the rate of nuclear fission of a simulated reactor model, which rate is controlled in accordance with the operation of control panel devices. A digital number from the computer corresponding to the flux rate controls an oscillator driven counter means to produce a pulse after a predetermined count. This pulse controls an oscillator driven polynomial counter to count a random number that controls a third counter in accordance with pulse from the first counter to produce a random fission count for operating the meters. (U.S.)

  20. Survey of methods for the rating of psychiatric impairment in Australia.

    Science.gov (United States)

    Mendelson, George

    2004-05-01

    One of the enduring clinical issues in the assessment of plaintiffs in personal injury and workers' compensation claims, as well as applicants for social security and disablement benefits, is that of the evaluation of impairment and work incapacity. Many writers on this topic confuse the concepts of impairment and disability, and similar confusion is reflected in a number of the rating methods that purport to evaluate impairment but in reality assess disability. In Australia there are 20 distinct statutory schemes for workers' compensation, motor accident compensation, and social security and other benefits, which utilise a variety of methods for the rating of psychiatric impairment. Recent legislative changes designed to restrict access to personal injury compensation at common law, which in two Australian State jurisdictions require the use of impairment rating scales, also specify the rating methods to be used in the assessment of psychiatric impairment. This article discusses the concepts of impairment and disability as defined by the World Health Organisation, and reviews the various methods for the rating of psychiatric impairment that are specified by statute in the federal and State jurisdictions in Australia.

  1. Study (Prediction of Main Pipes Break Rates in Water Distribution Systems Using Intelligent and Regression Methods)

    Directory of Open Access Journals (Sweden)

    Massoud Tabesh

    2011-07-01

    Optimum operation of water distribution networks is one of the priorities of sustainable development of water resources, considering the need to increase efficiency and decrease water losses. One of the key subjects in the optimum operational management of water distribution systems is preparing rehabilitation and replacement schemes, predicting pipe break rates and evaluating their reliability. Several approaches have been presented in recent years for predicting pipe failure rates, each requiring particular data sets. Deterministic models based on age, deterministic multivariate models and stochastic group models are examples of solutions that relate pipe break rates to parameters like age, material and diameter. In this paper, besides the mentioned parameters, further factors such as pipe depth and hydraulic pressure are considered as well. Then, using the multivariate regression method, intelligent approaches (artificial neural network and neuro-fuzzy models) and the evolutionary polynomial regression (EPR) method, pipe burst rates are predicted. To evaluate the results of the different approaches, a case study is carried out on a part of the Mashhad water distribution network. The results show the capability and advantages of the ANN and EPR methods to predict pipe break rates, in comparison with the neuro-fuzzy and multivariate regression methods.
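
    A sketch of the kind of comparison described above, fitting a linear regression and a small neural network to pipe attributes (age, diameter, depth, pressure); the data are synthetic, not the Mashhad records:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for pipe records: age (yr), diameter (mm), depth (m),
# hydraulic pressure (m head), with a made-up break-rate relationship.
n = 400
X = np.column_stack([
    rng.uniform(1, 50, n),
    rng.uniform(80, 600, n),
    rng.uniform(0.6, 2.5, n),
    rng.uniform(20, 80, n),
])
y = 0.03 * X[:, 0] - 0.002 * X[:, 1] + 0.05 * X[:, 3] + rng.normal(0, 0.5, n)

lin = LinearRegression().fit(X, y)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                 random_state=0)).fit(X, y)
print("multivariate regression R^2:", round(lin.score(X, y), 3))
print("small ANN R^2:              ", round(ann.score(X, y), 3))
```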

  2. Applications and Methods Utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for Bioinformatics Resource Discovery and Disparate Data and Service Integration

    Science.gov (United States)

    Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...

  3. Neutron spectra determination methods using the measured reaction rates in SAIPS

    International Nuclear Information System (INIS)

    Bondars, Kh.Ya.; Lapenas, A.A.

    1980-01-01

    The mathematical basis of the algorithms is given for the neutron spectrum determination methods, based on measured activation-detector reaction rates, that are included in the information-determination system SAIPS. The aim is to generalize the most popular domestic and foreign neutron spectra determination methods and to establish their mutual relations. The following neutron spectra determination methods are described: SAND-II, CRYSTAL BALL, WINDOWS, SPECTRA, RESP, JUL, and the polynomial and directed divergence methods. The algorithms have been realized on the ES computer.

  4. THE EFFECT OF EXCHANGE RATE ON THE CONSTRUCTION PROJECTS AND PROTECTION METHODS

    Directory of Open Access Journals (Sweden)

    Handan AKSUYEK

    2017-02-01

    As with all sectors, the extreme changes that have recently occurred in exchange rates have substantially affected construction operations. A rise in foreign exchange rates harms companies carrying foreign-currency debt, while it benefits construction companies holding foreign-currency-indexed investments. Such sudden changes in exchange rates cannot be predicted beforehand and often emerge from speculative events. As with all businesses carrying out foreign-currency work, these fluctuations are foremost among the factors that determine whether a construction company meets or misses its predetermined cost and profit targets. Therefore, companies whose construction contracts denominate costs and profits in different currencies should apply hedging methods to protect themselves against exchange rate risk. The main hedging tools are derivative products such as forward, futures, swap and option contracts. In this study, the effect of exchange rate fluctuations on the completion costs of construction projects is scrutinized. Moreover, the tools that construction companies may employ to protect themselves against adverse exchange rate movements are comparatively evaluated.
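
    A toy illustration of the forward-contract hedge mentioned above; the currency pair, amounts and rates are invented for the example:

```python
# A contractor owes a USD-denominated cost while its revenue is fixed in
# local currency (TRY assumed here). Locking the rate with a forward
# removes the exposure to the spot move at payment time.
cost_usd = 2_000_000
forward_rate = 3.60          # TRY per USD agreed today (assumed)
spot_at_payment = 3.90       # spot rate when payment falls due (assumed)

unhedged = cost_usd * spot_at_payment
hedged = cost_usd * forward_rate
print(f"unhedged cost: {unhedged:,.0f} TRY")
print(f"hedged cost:   {hedged:,.0f} TRY (locked in advance)")
```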

  5. Comparison on different repetition rate locking methods in Er-doped fiber laser

    Science.gov (United States)

    Yang, Kangwen; Zhao, Peng; Luo, Jiang; Huang, Kun; Hao, Qiang; Zeng, Heping

    2018-05-01

    We demonstrate a systematic comparative research on the all-optical, mechanical and opto-mechanical repetition rate control methods in an Er-doped fiber laser. A piece of Yb-doped fiber, a piezoelectric transducer and an electronic polarization controller are simultaneously added in the laser cavity as different cavity length modulators. By measuring the cavity length tuning ranges, the output power fluctuations, the temporal and frequency repetition rate stability, we show that all-optical method introduces the minimal disturbances under current experimental condition.

  6. Method of measuring the mass flow rate of a substance entering a cocurrent fluid stream

    International Nuclear Information System (INIS)

    Cochran, H.D. Jr.

    1978-01-01

    An improved method of monitoring the mass flow rate of a substance entering a cocurrent fluid stream is described. The method basically consists of heating equal sections of the fluid stream above and below the point of entry of the substance to be monitored, and measuring and comparing the resulting temperature changes of the two sections. Advantage is taken of the difference in thermal characteristics between the fluid and the substance to be measured: correlating the temperature differences of the sections above and below the substance feed point provides an indication of the mass flow rate of the substance.
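
    The energy balance behind the method can be sketched directly; the heater power, specific heat and temperature rises below are assumed values:

```python
# Equal heater power P is applied to matched sections upstream and
# downstream of the feed point:
#   upstream:   dT1 = P / (m_f * c_f)
#   downstream: dT2 = P / (m_f * c_f + m_s * c_s)
# so the substance mass flow m_s follows from the two temperature rises.
P = 500.0             # heater power per section, W
c_s = 800.0           # specific heat of the entrained substance, J/(kg K)
dT1, dT2 = 4.0, 3.2   # measured temperature rises, K

m_s = (P / c_s) * (1.0 / dT2 - 1.0 / dT1)
print(f"substance mass flow rate: {m_s * 1000:.1f} g/s")
```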

  7. A Globally Convergent Matrix-Free Method for Constrained Equations and Its Linear Convergence Rate

    Directory of Open Access Journals (Sweden)

    Min Sun

    2014-01-01

    A matrix-free method for constrained equations is proposed, which is a combination of the well-known PRP (Polak-Ribière-Polyak) conjugate gradient method and the famous hyperplane projection method. The new method is not only derivative-free but also completely matrix-free, and consequently it can be applied to solve large-scale constrained equations. We obtain global convergence of the new method without any differentiability requirement on the constrained equations. Compared with the existing gradient methods for solving such problems, the new method possesses a linear convergence rate under standard conditions, and a relaxation factor γ is included in the update step to accelerate convergence. Preliminary numerical results show that it is promising in practice.
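
    A compact sketch of this class of method: a PRP-type derivative-free direction combined with a hyperplane-projection step for monotone equations F(x) = 0 over a box. The parameters, safeguards and test problem are illustrative simplifications, not the paper's exact scheme:

```python
import numpy as np

def solve(F, x, lo, hi, sigma=1e-4, rho=0.5, tol=1e-8, max_iter=500):
    """Derivative-free PRP direction + hyperplane projection, box constraint."""
    proj = lambda y: np.clip(y, lo, hi)
    Fx = F(x)
    d = -Fx
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        if -(Fx @ d) <= 0:                 # safeguard: restart on non-descent
            d = -Fx
        t = 1.0                            # backtracking line search for z
        while True:
            z = x + t * d
            Fz = F(z)
            if -(Fz @ d) >= sigma * t * (d @ d) or t < 1e-12:
                break
            t *= rho
        if np.linalg.norm(Fz) < tol:       # z itself solves the system
            return proj(z)
        # project x onto the hyperplane {y : F(z).(y - z) = 0}, then the box
        x = proj(x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz)
        F_old = Fx
        Fx = F(x)
        beta = (Fx @ (Fx - F_old)) / (F_old @ F_old)   # PRP coefficient
        d = -Fx + beta * d
    return x

F = lambda x: 2 * x + np.sin(x) - 1        # monotone test system, root ~0.335
x = solve(F, np.full(5, 2.0), lo=0.0, hi=5.0)
print(np.round(x, 4), "residual:", np.linalg.norm(F(x)))
```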

  8. Fluorination methods in drug discovery

    OpenAIRE

    Yerien, Damián Emilio; Bonesi, Sergio Mauricio; Postigo, Jose Alberto

    2017-01-01

    Fluorination reactions of medicinal and biologically-active compounds will be discussed. Late stage fluorination strategies of medicinal targets have recently attracted considerable attention on account of the influence that the fluorine atom can impart to targets of medicinal importance, such as a modulation of lipophilicity, electronegativity, basicity and bioavailability, this latter as a consequence of membrane permeability. Therefore, the recourse to late-stage fluorine substitution on c...

  9. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased
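
    A simplified version of the curve-fitting idea, assuming the true rate of a decaying reference source is known (e.g., from its half-life) so that the paralyzable model m = n·exp(-nτ) can be fitted with τ free; the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

tau_true = 4e-6                           # s, the "unknown" resolving time
n_true = np.logspace(3, 5.5, 25)          # true rates over the measurement
m_obs = n_true * np.exp(-n_true * tau_true) * rng.normal(1, 0.01, n_true.size)

# Fit the paralyzable model with tau as the free parameter.
model = lambda n, tau: n * np.exp(-n * tau)
popt, _ = curve_fit(model, n_true, m_obs, p0=[1e-6])

# Correction factor at a working point = true rate / observed (fitted) rate.
n_work = 2e5
corr = n_work / model(n_work, popt[0])
print(f"fitted tau = {popt[0]:.2e} s, correction at 2e5 cps = {corr:.2f}")
```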

  10. 76 FR 38282 - Federal Employees Health Benefits Program: New Premium Rating Method for Most Community Rated Plans

    Science.gov (United States)

    2011-06-29

    ... set but prior to January 1 of the plan year, including book rates filed with the state. Once SSSGs... after rates were set but before January 1 of the plan year, such as new book rates filed in the state in.... OPM's intention is to keep FEHB premiums stable and sustainable using this more transparent...

  11. Estimation of turbulence dissipation rate by Large eddy PIV method in an agitated vessel

    Directory of Open Access Journals (Sweden)

    Kysela Bohuš

    2015-01-01

    The distribution of the turbulent kinetic energy dissipation rate is important for the design of mixing apparatuses in the chemical industry. The velocity measurement methods generally used in the complex geometry of an agitated vessel cannot resolve the small scales close to the turbulence dissipation scales. Therefore, a particle image velocimetry (PIV) measurement method improved by the large eddy PIV approach was used. The large eddy PIV method models the smallest eddies with a sub-grid scale (SGS) model, similarly to numerical calculations using large eddy simulation (LES), and the same SGS models are used. In this work the basic Smagorinsky model was employed and compared with a power-law approximation. Time-resolved PIV data were processed by the large eddy PIV approach, and the obtained turbulent kinetic energy dissipation rates were compared at selected points for several operating conditions (impeller speed, operating liquid viscosity).
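
    A minimal large eddy PIV-style estimate on a 2D field: resolved strain rates from velocity gradients and a Smagorinsky eddy viscosity for the sub-grid dissipation. The synthetic data, the 2D truncation (out-of-plane gradients neglected) and the constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
dx = 1e-3                                   # PIV grid spacing, m
u = rng.normal(0, 0.1, (64, 64))            # in-plane velocity components, m/s
v = rng.normal(0, 0.1, (64, 64))

# Resolved strain-rate tensor from the measured gradients (2D truncation).
dudy, dudx = np.gradient(u, dx)
dvdy, dvdx = np.gradient(v, dx)
S11, S22 = dudx, dvdy
S12 = 0.5 * (dudy + dvdx)
SS = S11**2 + S22**2 + 2 * S12**2           # Sij Sij

# Smagorinsky closure: eps_sgs = (Cs*Delta)^2 * (2 Sij Sij)^(3/2).
Cs, Delta = 0.17, dx
eps_sgs = (Cs * Delta) ** 2 * (2 * SS) ** 1.5
print(f"mean SGS dissipation estimate: {eps_sgs.mean():.3e} W/kg")
```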

  12. Simplified method of ''push-pull'' test data analysis for determining in situ reaction rate coefficients

    International Nuclear Information System (INIS)

    Haggerty, R.; Schroth, M.H.; Istok, J.D.

    1998-01-01

    The single-well, "push-pull" test method is useful for obtaining information on a wide variety of aquifer physical, chemical, and microbiological characteristics. A push-pull test consists of the pulse-type injection of a prepared test solution into a single monitoring well followed by the extraction of the test solution/ground water mixture from the same well. The test solution contains a conservative tracer and one or more reactants selected to investigate a particular process. During the extraction phase, the concentrations of tracer, reactants, and possible reaction products are measured to obtain breakthrough curves for all solutes. This paper presents a simplified method of data analysis that can be used to estimate a first-order reaction rate coefficient from these breakthrough curves. Rate coefficients are obtained by fitting a regression line to a plot of normalized concentrations versus elapsed time, requiring no knowledge of aquifer porosity, dispersivity, or hydraulic conductivity. A semi-analytical solution to the advective-dispersion equation is derived and used in a sensitivity analysis to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a homogeneous, confined aquifer with a fully-penetrating injection/extraction well and varying porosity, dispersivity, test duration, and reaction rate. A numerical flow and transport code (SUTRA) is used to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a heterogeneous, unconfined aquifer with a partially penetrating well. In all cases the simplified method provides accurate estimates of reaction rate coefficients; estimation errors ranged from 0.1 to 8.9% with most errors less than 5%.
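
    The regression step of the simplified method can be sketched as follows, assuming a first-order reaction so that the reactant-to-tracer ratio decays as exp(-kt); the data are synthetic:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)

# Dilution affects tracer and reactant alike, so the extracted
# reactant/tracer ratio (normalized to the injected ratio) isolates the
# reaction; ln(ratio) vs elapsed time then yields k as the negative slope.
k_true = 0.05                             # 1/h
t = np.linspace(1, 24, 12)                # elapsed time since injection, h
ratio = np.exp(-k_true * t) * rng.normal(1, 0.02, t.size)

fit = linregress(t, np.log(ratio))
print(f"estimated k = {-fit.slope:.4f} 1/h (true {k_true})")
```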

  13. Determination of the Absolute Disintegration Rate of Cs-137 sources by the Tracer Method

    Energy Technology Data Exchange (ETDEWEB)

    Hellstroem, S; Brune, D

    1963-07-15

    137Cs sources were absolutely measured by the 'tracer method', with 82Br as a tracer nuclide and with application of the 4π β-γ coincidence technique. A self-absorption of 6% was found in sources obtained from a solution with a carrier content of 400 μg/ml. The precision of the method for the determination of the β-emission rate was estimated to ±1%. The results were compared with those of other works.

  14. Time-trends in method-specific suicide rates compared with the availability of specific compounds

    DEFF Research Database (Denmark)

    Nordentoft, Merete; Qin, Ping; Helweg-Larsen, Karin

    2006-01-01

    Restriction of means for suicide is an important part of suicide prevention strategies in different countries. All suicides in Denmark between 1970 and 2000 were examined with regard to the method used. Overall suicide mortality and method-specific suicide mortality were compared ... in the number of suicides by self-poisoning with these compounds. Restricted access occurred concomitantly with a 55% decrease in the suicide rate.

  15. Determination of the Absolute Disintegration Rate of Cs-137 sources by the Tracer Method

    International Nuclear Information System (INIS)

    Hellstroem, S.; Brune, D.

    1963-07-01

    137Cs sources were absolutely measured by the 'tracer method', with 82Br as a tracer nuclide and with application of the 4π β-γ coincidence technique. A self-absorption of 6% was found in sources obtained from a solution with a carrier content of 400 μg/ml. The precision of the method for the determination of the β-emission rate was estimated to ±1%. The results were compared with those of other works.

  16. Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.

    Science.gov (United States)

    Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam

    2015-01-01

    Hepatitis B (HB) is a major cause of global mortality. Accurately predicting the trend of the disease can inform health policy for disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The Weighted Markov Chain (WMC) method, based on Markov chain theory, and two time series models, Holt Exponential Smoothing (HES) and SARIMA, were applied to the data. The results of the different methods were compared by the percentage of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters serving as the states of the Markov chain. The percentages of correctly predicted rates in the first and second clusters were (100, 0) for WMC, (84, 67) for HES and (79, 47) for SARIMA. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the three models indicated that, given the seasonality and non-stationarity of the data, HES gave the most accurate predictions of the incidence rates.
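
    A minimal SARIMA fit of the kind compared above, run on synthetic monthly rates (the Hamadan data and the study's model orders are not reproduced):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)

# Synthetic monthly incidence rates with seasonality and a mild decline.
months = pd.date_range("2004-01", periods=108, freq="MS")
season = 1 + 0.3 * np.sin(2 * np.pi * months.month / 12)
rates = pd.Series(5 * season - 0.02 * np.arange(108) + rng.normal(0, 0.3, 108),
                  index=months)

# Illustrative model orders; a real analysis would select them from the data.
model = SARIMAX(rates, order=(1, 0, 0), seasonal_order=(1, 0, 0, 12)).fit(disp=False)
forecast = model.forecast(steps=12)
print(forecast.round(2).head())
```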

  17. Variation of strain rate sensitivity index of a superplastic aluminum alloy in different testing methods

    Science.gov (United States)

    Majidi, Omid; Jahazi, Mohammad; Bombardier, Nicolas; Samuel, Ehab

    2017-10-01

    The strain rate sensitivity index, m-value, is commonly applied to evaluate the impact of strain rate on the viscoplastic behaviour of materials. The m-value, as a constant number, has frequently been used for modeling material behaviour in the numerical simulation of superplastic forming processes. However, the impact of the testing variables on the measured m-values has not been investigated comprehensively. In this study, the m-value for a superplastic grade of aluminum alloy (AA5083) has been investigated. The conditions and parameters that influence the strain rate sensitivity of the material are compared across three different testing methods: the monotonic uniaxial tension test, the strain rate jump test and the stress relaxation test. All tests were conducted at elevated temperature (470°C) and at strain rates up to 0.1 s^-1. The results show that the m-value is not constant and is highly dependent on the applied strain rate, strain level and testing method.
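
    The usual two-point working definition of the index, m = Δln σ / Δln ε̇, with invented flow-stress values:

```python
import numpy as np

# Flow stresses measured at the same strain and temperature but at two
# strain rates (all numbers illustrative, not the paper's data).
rate1, rate2 = 1e-3, 1e-2          # strain rates, 1/s
sigma1, sigma2 = 10.0, 17.0        # flow stresses, MPa

m = np.log(sigma2 / sigma1) / np.log(rate2 / rate1)
print(f"m = {m:.3f}")
```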

  18. Determination of Glucose Utilization Rates in Cultured Astrocytes and Neurons with [14C]deoxyglucose: Progress, Pitfalls, and Discovery of Intracellular Glucose Compartmentation.

    Science.gov (United States)

    Dienel, Gerald A; Cruz, Nancy F; Sokoloff, Louis; Driscoll, Bernard F

    2017-01-01

    2-Deoxy-D-[14C]glucose ([14C]DG) is commonly used to determine local glucose utilization rates (CMRglc) in living brain and to estimate CMRglc in cultured brain cells as rates of [14C]DG phosphorylation. Phosphorylation rates of [14C]DG and its metabolizable fluorescent analog, 2-(N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl)amino)-2-deoxyglucose (2-NBDG), however, do not take into account differences in the kinetics of transport and metabolism of [14C]DG or 2-NBDG and glucose in neuronal and astrocytic cells in cultures or in single cells in brain tissue, and conclusions drawn from these data may therefore not be correct. As a first step toward the goal of quantitative determination of CMRglc in astrocytes and neurons in cultures, the steady-state intracellular-to-extracellular concentration ratios (distribution spaces) for glucose and [14C]DG were determined in cultured striatal neurons and astrocytes as functions of extracellular glucose concentration. Unexpectedly, the glucose distribution spaces rose during extreme hypoglycemia, exceeding 1.0 in astrocytes, whereas the [14C]DG distribution space fell at the lowest glucose levels. Calculated CMRglc was greatly overestimated in hypoglycemic and normoglycemic cells because the intracellular glucose concentrations were too high. Determination of the distribution space for [14C]glucose revealed compartmentation of intracellular glucose in astrocytes, and probably also in neurons. A smaller metabolic pool is readily accessible to hexokinase and communicates with extracellular glucose, whereas the larger pool is sequestered from hexokinase activity. A new experimental approach using double-labeled assays with DG and glucose is suggested to avoid the limitations imposed by glucose compartmentation on metabolic assays.

  19. Universal Rate Model Selector: A Method to Quickly Find the Best-Fit Kinetic Rate Model for an Experimental Rate Profile

    Science.gov (United States)

    2017-08-01

    Kinetic rate models range from pure chemical reactions to mass transfer-controlled processes, including first-order, homogeneous catalytic, Avrami, and intraparticle diffusion rate equations, to name a few. A single fitting algorithm (kinetic rate model) for a reaction does not

  20. Water vapor mass balance method for determining air infiltration rates in houses

    Science.gov (United States)

    David R. DeWalle; Gordon M. Heisler

    1980-01-01

    A water vapor mass balance technique that includes the use of common humidity-control equipment can be used to determine average air infiltration rates in buildings. Only measurements of the humidity inside and outside the home, the mass of vapor exchanged by a humidifier/dehumidifier, and the volume of interior air space are needed. This method gives results that...
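
    A steady-state vapor mass balance version of the method; every number below is assumed for illustration:

```python
# Vapor added by the humidifier leaves with infiltrating air, so at steady
# state the air exchange rate follows from the indoor/outdoor humidity
# difference and the vapor mass exchanged.
rho_air = 1.2                    # kg dry air per m^3
V = 350.0                        # interior air volume, m^3
w_in, w_out = 0.0060, 0.0030     # humidity ratios, kg water / kg dry air
m_vapor = 0.80                   # humidifier output to hold w_in, kg/h

Q = m_vapor / (rho_air * (w_in - w_out))   # infiltration airflow, m^3/h
ach = Q / V                                # air changes per hour
print(f"infiltration: {Q:.0f} m^3/h = {ach:.2f} air changes per hour")
```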

  1. A method to increase optical timing spectra measurement rates using a multi-hit TDC

    International Nuclear Information System (INIS)

    Moses, W.W.

    1993-01-01

    A method is presented for using a modern time to digital converter (TDC) to increase the data collection rate for optical timing measurements such as scintillator decay times. It extends the conventional delayed coincidence method, where a synchronization signal "starts" a TDC and a photomultiplier tube (PMT) sampling the optical signal "stops" the TDC. Data acquisition rates are low with the conventional method because ε, the light collection efficiency of the "stop" PMT, is artificially limited to ε ∼ 0.01 photons per "start" signal to reduce the probability of detecting more than one photon during the sampling period. With conventional TDCs, these multiple photon events bias the time spectrum since only the first "stop" pulse is digitized. The new method uses a modern TDC to detect whether additional "stop" signals occur during the sampling period, and actively rejects these multiple photon events. This allows ε to be increased to almost 1 photon per "start" signal, which maximizes the data acquisition rate at a value nearly 20 times higher. Multi-hit TDCs can digitize the arrival times of n "stop" signals per "start" signal, which allows ε to be increased to ∼3n/4. While overlap of the "stop" signals prevents the full gain in data collection rate from being realized, significant improvements are possible for most applications. (orig.)
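
    The Poisson bookkeeping behind the rate gain: the usable single-photon fraction per start is P(1) = ε·e^(-ε), maximized at ε = 1 (the ideal gain over ε = 0.01 is ~37×; the abstract's ~20× reflects the stop-signal overlap it mentions):

```python
import numpy as np

# Photon counts per "start" are Poisson with mean eps; only events with
# exactly one detected photon yield an unbiased, accepted timing sample.
for eps in (0.01, 1.0):
    p_single = eps * np.exp(-eps)
    p_multi = 1 - np.exp(-eps) - p_single
    print(f"eps={eps:5}: singles/start={p_single:.4f}, "
          f"multi-photon/start={p_multi:.4f}")
```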

  2. A method to quench and recharge avalanche photo diodes for use in high rate situations

    International Nuclear Information System (INIS)

    Regan, T.O.; Fenker, H.C.; Thomas, J.; Oliver, J.

    1992-06-01

    We present a new method of using avalanche photodiodes (APDs) for low-level light detection in Geiger mode in high-rate situations such as those encountered at the Superconducting Super Collider (SSC). The new technique is readily adaptable to implementation in CMOS VLSI.

  3. Effects of nitrogen rate and application method on early production and fruit quality in highbush blueberry

    Science.gov (United States)

    A field study was conducted to examine the effects of nitrogen (N) rate and method of N fertilizer application on growth, yield, and fruit quality in highbush blueberry (Vaccinium corymbosum L.) during the first 4 years after planting in south-coastal BC. Nitrogen was applied at 0-150% of current pr...

  4. Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers

    Energy Technology Data Exchange (ETDEWEB)

    Cole, P. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Halverson, M. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-09-01

    This guidance document was prepared using the input from the meeting summarized in the draft CSI Roadmap to provide Building America research teams and partners with specific information and approaches to identifying and overcoming potential barriers to Building America innovations arising in and/or stemming from codes, standards, and rating methods.

  5. Machine cost analysis using the traditional machine-rate method and ChargeOut!

    Science.gov (United States)

    E. M. (Ted) Bilek

    2009-01-01

    Forestry operations require ever greater use of expensive capital equipment. Mechanization is frequently necessary to perform cost-effective and safe operations. Increased capital should mean more sophisticated capital costing methodologies. However, the machine rate method, which is the costing methodology most frequently used, dates back to 1942. ChargeOut!, a recently...

  6. A method for studying post-fledging survival rates using data from ringing recoveries

    NARCIS (Netherlands)

    Thomson, D.L.; Baillie, S.R.; Peach, W.J.

    1999-01-01

    We present a method for studying post-fledging survival rates from data on national ringing recoveries. The approach extends the classical two-age-class models of Brownie et al. (1985) to include a third age-class of birds ringed as nestlings. The models can incorporate age-class-specific and

  7. A modified Gaussian integration method for thermal reaction rate calculation in U- and Pu-isotopes

    International Nuclear Information System (INIS)

    Bosevski, T.; Fredin, B.

    1966-01-01

    In advanced multi-group cell calculations a large amount of data is often necessary; hence the data administration becomes elaborate and the spectrum calculation time consuming. We think it is possible to reduce the necessary data by using an effective reaction rate integration method well suited for U- and Pu-absorption (author)

  8. A method of inferring k-infinity from reaction rate measurements in thermal reactor systems

    International Nuclear Information System (INIS)

    Newmarch, D.A.

    1967-05-01

    A scheme is described for inferring a value of k-infinity from reaction rate measurements. The method is devised with the METHUSELAH group structure in mind and was developed for the analysis of S.G.H.W. reactor experiments; the underlying principles, however, are general. (author)

  9. The Relationship Between Method of Viewing Lectures, Course Ratings, and Course Timing.

    Science.gov (United States)

    Burton, William B; Ma, Terence P; Grayson, Martha S

    2017-01-01

    In recent years, medical schools have provided students access to video recordings of course lectures, but few studies have investigated the impact of this on ratings of courses and teachers. This study investigated whether the method of viewing lectures was related to student ratings of the course and its components and whether the method used changed over time. Preclinical medical students indicated whether ratings of course lectures were based primarily on lecture attendance, video capture, or both. Students were categorized into Lecture, Video, or Both groups based on their responses to this question. The data consisted of 7584 student evaluations collected over 2 years. Students who attended live lectures rated the course and its components higher than students who only viewed the video or used both methods, although these differences were very small. Students increasingly watched lectures exclusively by video over time: in comparison with first-year students, second-year students were more likely to watch lectures exclusively by video; in comparison with students in the first half of the academic year, students in the second half of the academic year were more likely to watch lectures exclusively by video. With the increase in use of lecture video recordings across medical schools, attention must be paid to student attitudes regarding these methods.

  10. Concurrent Driving Method with Fast Scan Rate for Large Mutual Capacitance Touch Screens

    Directory of Open Access Journals (Sweden)

    Mohamed Gamal Ahmed Mohamed

    2015-01-01

    A novel touch screen control technique is introduced, which scans each frame in two steps of concurrent multichannel driving and differential sensing. The proposed technique substantially increases the scan rate and effectively reduces ambient noise. It is also extended to a multichip architecture to support excessively large touch screens with great scan rate improvement. The proposed method has been implemented in a 0.18 μm CMOS TowerJazz process and tested with an FPGA and AFE board connected to a 23-inch touch screen. Experimental results show a scan rate improvement of up to 23.8 times and an SNR improvement of 24.6 dB over the conventional method.

  11. Identification of strain-rate and thermal sensitive material model with an inverse method

    CERN Document Server

    Peroni, L; Peroni, M

    2010-01-01

    This paper describes a numerical inverse method to extract material strength parameters from the experimental data obtained via mechanical tests at different strain-rates and temperatures. It will be shown that this procedure is particularly useful to analyse experimental results when the stress-strain fields in the specimen cannot be correctly described via analytical models. This commonly happens in specimens with no regular shape, in specimens with a regular shape when some instability phenomena occur (for example the necking phenomena in tensile tests that create a strongly heterogeneous stress-strain fields) or in dynamic tests (where the strain-rate field is not constant due to wave propagation phenomena). Furthermore the developed procedure is useful to take into account thermal phenomena generally affecting high strain-rate tests due to the adiabatic overheating related to the conversion of plastic work. The method presented requires strong effort both from experimental and numerical point of view, an...

  12. Measurements of μ capture rates in liquid hydrogen by the lifetime method

    International Nuclear Information System (INIS)

    Martino, Jacques.

    1982-04-01

    The μ capture reaction is a weak interaction process. It can be observed as a result of the formation of muonic atoms, for which the overlapping of the wave functions of the muon and the nucleus is a maximum in the 1s state. The production of this (μp) bound state leads to a capture rate in relatively favorable competition with the disintegration rate. The capture rate was measured in liquid hydrogen by the lifetime method, using a pulsed muon beam from the Saclay linear accelerator. The method and experimental equipment used for the lifetime measurements are described, together with the different sources of systematic error and the results obtained. The interpretation of these results is discussed [fr]

  13. Open charcoal chamber method for mass measurements of radon exhalation rate from soil surface

    International Nuclear Information System (INIS)

    Tsapalov, Andrey; Kovler, Konstantin; Miklyaev, Peter

    2016-01-01

    Radon exhalation rate from the soil surface can serve as an important criterion in the evaluation of the radon hazard of land. The recently published international standard ISO 11665-7 (2012) is based on the accumulation of radon gas in a closed container. At the same time, since 1998 in Russia, as a part of engineering and environmental studies for construction, radon flux measurements have been made using an open charcoal chamber with a sampling duration of 3-5 h. This method has a well-defined metrological justification and was tested in both favorable and unfavorable conditions. The article describes the characteristics of the method, as well as the means of sampling and measurement of the activity of the radon absorbed. The results of the metrological study suggest that regardless of the sampling conditions (weather, the mechanism and rate of radon transport in the soil, soil properties and conditions), the uncertainty of the method does not exceed 20%, while the combined standard uncertainty of the radon exhalation rate measured from the soil surface does not exceed 30%. The results of daily measurements of the radon exhalation rate from the soil surface at the experimental site during one year are reported. - Highlights: • Radon exhalation rate from a soil surface area of 32 cm^2 can be measured at a level of 10 mBq/(m^2·s) with an uncertainty ≤30%. • The method has a metrological justification. • There is no need to consider climate conditions, soil properties and conditions, or the mechanism and rate of radon transport in the soil.

  14. Method and apparatus for obtaining enhanced production rate of thermal chemical reactions

    Science.gov (United States)

    Tonkovich, Anna Lee Y [Pasco, WA; Wang, Yong [Richland, WA; Wegeng, Robert S [Richland, WA; Gao, Yufei [Kennewick, WA

    2003-04-01

    The present invention is a method and apparatus (vessel) for providing a heat transfer rate from a reaction chamber through a wall to a heat transfer chamber substantially matching a local heat transfer rate of a catalytic thermal chemical reaction. The key to the invention is a thermal distance defined on a cross sectional plane through the vessel inclusive of a heat transfer chamber, reaction chamber and a wall between the chambers. The cross sectional plane is perpendicular to a bulk flow direction of the reactant stream, and the thermal distance is a distance between a coolest position and a hottest position on the cross sectional plane. The thermal distance is of a length wherein the heat transfer rate from the reaction chamber to the heat transfer chamber substantially matches the local heat transfer rate.

  15. A finite element modeling method for predicting long term corrosion rates

    International Nuclear Information System (INIS)

    Fu, J.W.; Chan, S.

    1984-01-01

    For the analyses of galvanic corrosion, pitting and crevice corrosion, which have been identified as possible corrosion processes for nuclear waste isolation, a finite element method has been developed for the prediction of corrosion rates. The method uses a finite element mesh to model the corrosive environment and the polarization curves of metals are assigned as the boundary conditions to calculate the corrosion cell current distribution. A subroutine is used to calculate the chemical change with time in the crevice or the pit environments. In this paper, the finite element method is described along with experimental confirmation

  16. Comparison of survival rates among different treatment methods of transcatheter hepatic arterial chemoembolization for hepatocellular carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Yong Woon; Lee, Jong Tae; Yoo, Hyung Sik; Lee, Do Yun; Jun, Pyoung Jun; Chang, So Yong [Yonsei Univ. College of Medicine, Seoul (Korea, Republic of)

    1996-06-01

    To compare the survival rates of patients with hepatoma treated with different methods of transcatheter arterial chemoembolization (THAE). Four hundred and eighty-three patients with hepatoma diagnosed by biopsy, serum alpha-fetoprotein, abdominal CT scan, abdominal ultrasonography or hepatic angiography were included, but not all had received surgical treatment. They were divided into two groups according to Child's classification and into subgroups according to the different methods of THAE. Five-year survival rates among these groups were retrospectively compared. The patients were aged between 24 and 85 (mean, 58); the male to female ratio was 324:61 for those who received THAE (369:87 when only hepatic angiography was considered). In the group with more than a single episode of chemoembolization, regardless of Child's classification, a better survival rate was noted compared to the other groups with or without concomitant radiotherapy or without chemoembolization. There was no difference in the survival rate of patients with multiple chemoembolizations; moreover, no difference in this rate was observed no matter which chemotherapeutic agents, including Adriamycin, Cis-Diaminedichloroplatinum or I-131-Lipiodol, were used. Embolization with gelfoam in conjunction with Adriamycin resulted in no difference in survival rate regardless of the frequency of chemoembolization. An improved survival rate was seen when multiple episodes of chemoembolization were applied, but no difference was seen with the concomitant application of either gelfoam or radiotherapy. Two different chemotherapeutic agents, Adriamycin and Cis-Diaminedichloroplatinum, were used, but there was no difference between them in their effect on survival rates.

  17. Discovery of a general method of solving the Schrödinger and Dirac equations that opens a way to accurately predictive quantum chemistry.

    Science.gov (United States)

    Nakatsuji, Hiroshi

    2012-09-18

    Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve the SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter, simpler idea led to immediate and surprisingly accurate solutions for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement functions.

  18. Usability of Discovery Portals

    OpenAIRE

    Bulens, J.D.; Vullings, L.A.E.; Houtkamp, J.M.; Vanmeulebrouk, B.

    2013-01-01

    As INSPIRE progresses to be implemented in the EU, many new discovery portals are built to facilitate finding spatial data. Currently the structure of the discovery portals is determined by the way spatial data experts like to work. However, we argue that the main target group for discovery portals are not spatial data experts but professionals with limited spatial knowledge, and a focus outside the spatial domain. An exploratory usability experiment was carried out in which three discovery p...

  19. Pleural liquid clearance rate measured in awake sheep by the volume of dilution method

    International Nuclear Information System (INIS)

    Broaddus, V.C.; Wiener-Kronish, J.P.; Berthiaume, Y.; Staub, N.C.

    1986-01-01

    The authors reported 24 h clearance of mock pleural effusions measured terminally in sheep. To measure effusion volume at different times in the same sheep, they injected 111In-transferrin and measured its dilution. In 5 sheep with effusions of known sizes, the method was accurate to ±10%. In 5 awake sheep, the authors injected 10 ml/kg of a 1% protein solution via a non-penetrating rib capsule. At 6 h, the authors measured the volume by the dilution method and at 24 h by direct recovery. The clearance rate in each animal was constant at 2.9-6.0%/h (average 4.8 ± 1.3%/h). This new method gives a reliable two-point clearance rate and requires fewer animals

  20. NDT method in determining the rate of corrosion applicable to risk based inspection

    International Nuclear Information System (INIS)

    Mohamed Hairul Hasmoni; Mohamad Pauzi Ismail; Ab Razak Hamzah

    2004-01-01

    Corrosion is a major problem in oil and gas industries, refineries and chemical process plants, as the equipment is often exposed to corrosive environments or elevated temperatures. Important equipment needs to operate safely and reliably to avoid injuries to personnel and the public, and to prevent lost time and costs incurred due to loss of production and shutdowns. The paper assesses the approach of evaluating the technique of non-destructive testing (NDT) using Ultrasonic Testing (UT) in determining the rate of corrosion and remaining life of equipment applicable to Risk Based Inspection (RBI). Analytical methods for determining the corrosion rate are presented. Examples and data from the MINT chiller water pipeline are presented to illustrate the application of these methods. (Author)

  1. Financing drug discovery for orphan diseases.

    Science.gov (United States)

    Fagnan, David E; Gromatzky, Austin A; Stein, Roger M; Fernandez, Jose-Maria; Lo, Andrew W

    2014-05-01

    Recently proposed 'megafund' financing methods for funding translational medicine and drug development require billions of dollars in capital per megafund to de-risk the drug discovery process enough to issue long-term bonds. Here, we demonstrate that the same financing methods can be applied to orphan drug development but, because of the unique nature of orphan diseases and therapeutics (lower development costs, faster FDA approval times, lower failure rates and lower correlation of failures among disease targets), the amount of capital needed to de-risk such portfolios is much lower in this field. Numerical simulations suggest that an orphan disease megafund of only US$575 million can yield double-digit expected rates of return with only 10-20 projects in the portfolio. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
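
    A Monte Carlo sketch of the kind of portfolio simulation the abstract reports. Every number below (project count, cost, payoff, success probability, horizon) is an invented placeholder rather than the paper's calibration, and project outcomes are treated as independent, in the spirit of the low correlation among orphan disease targets the authors emphasize.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative placeholders, not the paper's calibrated inputs.
    n_projects = 15        # projects held by the megafund
    cost = 30.0            # $M invested per project
    payoff = 250.0         # $M realized per successful project
    p_success = 0.15       # per-project probability of success
    horizon = 8.0          # years until outcomes are realized
    n_trials = 100_000

    successes = rng.binomial(n_projects, p_success, size=n_trials)
    capital = n_projects * cost
    annualized = (successes * payoff / capital) ** (1.0 / horizon) - 1.0

    print(f"mean annualized return: {annualized.mean():.1%}")
    print(f"P(total loss of capital): {np.mean(successes == 0):.2%}")
    ```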

  2. Usability of Discovery Portals

    NARCIS (Netherlands)

    Bulens, J.D.; Vullings, L.A.E.; Houtkamp, J.M.; Vanmeulebrouk, B.

    2013-01-01

    As INSPIRE progresses to be implemented in the EU, many new discovery portals are built to facilitate finding spatial data. Currently the structure of the discovery portals is determined by the way spatial data experts like to work. However, we argue that the main target group for discovery portals

  3. Discovery and the atom

    International Nuclear Information System (INIS)

    1989-01-01

    "Discovery and the Atom" tells the story of the founding of nuclear physics. This programme looks at nuclear physics up to the discovery of the neutron in 1932. Animation explains the science of the classic experiments, such as the scattering of alpha particles by Rutherford and the discovery of the nucleus. Archive film shows the people: Lord Rutherford, James Chadwick, Marie Curie. (author)

  4. Development of An Optimization Method for Determining Automation Rate in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun

    2014-01-01

    Since automation was introduced in various industrial fields, it has been known that automation provides positive effects, such as greater efficiency and fewer human errors, and a negative effect defined as out-of-the-loop (OOTL). Thus, before introducing automation in the nuclear field, the positive and negative effects of automation on human operators should be estimated. In this paper, by focusing on CPS, an optimization method to find an appropriate proportion of automation is suggested by integrating the suggested cognitive automation rate and the concept of the level of ostracism. The cognitive automation rate estimation method was suggested to express the reduced amount of human cognitive load, and the level of ostracism was suggested to express the difficulty in obtaining information from the automation system and the increased uncertainty of human operators' diagnoses. The maximum proportion of automation that maintains a high level of attention for monitoring the situation is derived by an experiment, and the automation rate is estimated by the suggested automation rate estimation method. This approach is expected to yield an appropriate proportion of automation that avoids the OOTL problem while retaining maximum efficacy

  5. A collaborative filtering-based approach to biomedical knowledge discovery.

    Science.gov (United States)

    Lever, Jake; Gakkhar, Sitanshu; Gottlieb, Michael; Rashnavadi, Tahereh; Lin, Santina; Siu, Celia; Smith, Maia; Jones, Martin R; Krzywinski, Martin; Jones, Steven J M; Wren, Jonathan

    2018-02-15

    The increase in publication rates makes it challenging for an individual researcher to stay abreast of all relevant research in order to find novel research hypotheses. Literature-based discovery methods make use of knowledge graphs built using text mining and can infer future associations between biomedical concepts that will likely occur in new publications. These predictions are a valuable resource for researchers to explore a research topic. Current methods for prediction are based on the local structure of the knowledge graph. A method that uses global knowledge from across the knowledge graph needs to be developed in order to make knowledge discovery a frequently used tool by researchers. We propose an approach based on the singular value decomposition (SVD) that is able to combine data from across the knowledge graph through a reduced representation. Using cooccurrence data extracted from published literature, we show that SVD performs better than the leading methods for scoring discoveries. We also show the diminishing predictive power of knowledge discovery as we compare our predictions with real associations that appear further into the future. Finally, we examine the strengths and weaknesses of the SVD approach against another well-performing system using several predicted associations. All code and results files for this analysis can be accessed at https://github.com/jakelever/knowledgediscovery. sjones@bcgsc.ca. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
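
    The scoring step is easy to sketch: compress the concept-concept cooccurrence matrix with a truncated SVD and read candidate discoveries off the low-rank reconstruction. The toy matrix below is hypothetical; the actual pipeline operates on large sparse matrices mined from the literature and validates against associations appearing in later publications.

    ```python
    import numpy as np

    # Toy symmetric cooccurrence matrix: entry (i, j) = 1 if concepts i and j
    # have already appeared together in a publication (illustrative only).
    A = np.array([
        [0, 1, 1, 0, 0],
        [1, 0, 1, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 1, 1, 0, 1],
        [0, 0, 0, 1, 0],
    ], dtype=float)

    k = 2                                          # rank of the reduced representation
    U, s, Vt = np.linalg.svd(A)
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-k reconstruction

    # Unobserved pairs with high reconstructed scores are predicted to
    # cooccur in future publications.
    pairs = [(i, j) for i in range(5) for j in range(i + 1, 5) if A[i, j] == 0]
    for i, j in sorted(pairs, key=lambda p: -A_k[p]):
        print(f"concepts {i}-{j}: score {A_k[i, j]:+.3f}")
    ```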

  6. Model for determining vapor equilibrium rates in the hanging drop method for protein crystal growth

    Science.gov (United States)

    Baird, James K.; Frieden, Richard W.; Meehan, E. J., Jr.; Twigg, Pamela J.; Howard, Sandra B.; Fowlis, William A.

    1987-01-01

    An engineering analysis of the rate of evaporation of solvent in the hanging drop method of protein crystal growth is presented. Results are applied to 18 drop and well arrangements commonly encountered in the laboratory. The chemical nature of the salt, drop size and shape, drop concentration, well size, well concentration, and temperature are taken into account. The rate of evaporation increases with temperature, drop size, and the salt concentration difference between the drop and the well. The evaporation in this model possesses no unique half-life. Once the salt in the drop achieves 80 percent of its final concentration, further evaporation suffers from the law of diminishing returns.

  7. Rapid and accurate processing method for amide proton exchange rate measurement in proteins

    International Nuclear Information System (INIS)

    Koskela, Harri; Heikkinen, Outi; Kilpelaeinen, Ilkka; Heikkinen, Sami

    2007-01-01

    Exchange between protein backbone amide hydrogen and water gives relevant information about solvent accessibility and protein secondary structure stability. NMR spectroscopy provides a convenient tool to study these dynamic processes with saturation transfer experiments. Processing of this type of NMR spectra has traditionally required peak integration followed by exponential fitting, which can be tedious with large data sets. We propose here a computer-aided method that applies the inverse Laplace transform in the exchange rate measurement. With this approach, the determination of exchange rates can be automated, and reliable results can be acquired rapidly without a need for manual processing
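
    A sketch of the core idea on synthetic data: instead of fitting a single exponential, expand the decay on a grid of candidate rates and solve a non-negative least-squares problem, which acts as a simple regularized inverse Laplace transform; the dominant grid rate then estimates the exchange rate. The paper's exact numerical scheme may differ; this is a generic stand-in with invented values.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(5)

    k_true = 3.0                                  # exchange rate, 1/s (invented)
    t = np.linspace(0.05, 2.0, 40)                # saturation delays, s
    y = np.exp(-k_true * t) + rng.normal(0.0, 0.01, t.size)

    k_grid = np.logspace(-1, 2, 200)              # candidate rates, 1/s
    K = np.exp(-np.outer(t, k_grid))              # Laplace kernel matrix
    amp, _ = nnls(K, y)                           # non-negative amplitude spectrum

    k_est = k_grid[np.argmax(amp)]
    print(f"estimated exchange rate: {k_est:.2f} 1/s (true {k_true})")
    ```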

  8. Explaining transgression in respiratory rate observation methods in the emergency department: A classic grounded theory analysis.

    Science.gov (United States)

    Flenady, Tracy; Dwyer, Trudy; Applegarth, Judith

    2017-09-01

    Abnormal respiratory rates are one of the first indicators of clinical deterioration in emergency department (ED) patients. Despite the importance of respiratory rate observations, this vital sign is often inaccurately recorded on ED observation charts, compromising patient safety. Concurrently, there is a paucity of research reporting why this phenomenon occurs. To develop a substantive theory explaining ED registered nurses' reasoning when they miss or misreport respiratory rate observations. This research project employed a classic grounded theory analysis of qualitative data. Seventy-nine registered nurses currently working in EDs within Australia. Data collected included detailed responses from individual interviews and open-ended responses from an online questionnaire. Classic grounded theory (CGT) research methods were utilised; therefore, coding was central to the abstraction of data and its reintegration as theory. Constant comparison, synonymous with CGT methods, was employed to code data. This approach facilitated the identification of the main concern of the participants and aided in the generation of theory explaining how the participants processed this issue. The main concern identified is that ED registered nurses do not believe that collecting an accurate respiratory rate for ALL patients at EVERY round of observations is a requirement, and yet organizational requirements often dictate that a value for the respiratory rate be included each time vital signs are collected. The theory, 'Rationalising Transgression', explains how participants continually resolve this problem. The study found that despite feeling professionally conflicted, nurses often erroneously record respiratory rate observations, and then rationalise this behaviour by employing strategies that adjust the significance of the organisational requirement. These strategies include: Compensating, when nurses believe they are compensating for errant behaviour by enhancing the patient's outcome

  9. Wavelet denoising method; application to the flow rate estimation for water level control

    International Nuclear Information System (INIS)

    Park, Gee Young; Park, Jin Ho; Lee, Jung Han; Kim, Bong Soo; Seong, Poong Hyun

    2003-01-01

    The wavelet transform decomposes a signal into time- and frequency-domain signals, and it is well known that a noise-corrupted signal can be reconstructed or estimated when a proper denoising method is incorporated in the wavelet transform. Among the wavelet denoising methods proposed to date, the wavelets of Mallat and Zhong can best reconstruct the pure transient signal from a highly corrupted signal. However, there has been no systematic way of discriminating the original signal from the noise in a dyadic wavelet transform. In this paper, a systematic method for noise discrimination is proposed, which can be implemented easily in a digital system. To demonstrate the potential role of the wavelet denoising method in the nuclear field, the method is applied to the steam or feedwater flow rate estimation of the secondary loop, and a configuration of the S/G water level control system is proposed that incorporates the wavelet denoising method in estimating the flow rate value at low operating powers
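
    A minimal sketch of wavelet-threshold denoising applied to a synthetic noisy flow-rate signal. The paper uses the Mallat-Zhong dyadic wavelet transform together with its own systematic noise-discrimination rule; the generic universal soft threshold below (via PyWavelets) is only a stand-in for that step.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 1.0, 1024)
    flow = 100.0 + 20.0 * (t > 0.5)              # toy transient: step in flow rate
    noisy = flow + rng.normal(0.0, 4.0, t.size)

    coeffs = pywt.wavedec(noisy, "db4", level=5)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale from finest level
    thr = sigma * np.sqrt(2.0 * np.log(noisy.size))    # universal threshold
    den = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(den, "db4")

    print(f"rms error, noisy:    {np.sqrt(np.mean((noisy - flow) ** 2)):.2f}")
    print(f"rms error, denoised: {np.sqrt(np.mean((denoised - flow) ** 2)):.2f}")
    ```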

  10. Advanced methods comparisons of reaction rates in the Purdue Fast Breeder Blanket Facility

    International Nuclear Information System (INIS)

    Hill, R.N.; Ott, K.O.

    1988-01-01

    A review of worldwide results revealed that reaction rates in the blanket region are generally underpredicted with the discrepancy increasing with penetration; however, these results vary widely. Experiments in the large uniform Purdue Fast Breeder Blanket Facility (FBBF) blanket yield an accurate quantification of this discrepancy. Using standard production code methods (diffusion theory with 50 group cross sections), a consistent Calculated/Experimental (C/E) drop-off was observed for various reaction rates. A 50% increase in the calculated results at the outer edge of the blanket is necessary for agreement with experiments. The usefulness of refined group constant generation utilizing specialized weighting spectra and transport theory methods in correcting this discrepancy was analyzed. Refined group constants reduce the discrepancy to half that observed using the standard method. The surprising result was that transport methods had no effect on the blanket deviations; thus, transport theory considerations do not constitute or even contribute to an explanation of the blanket discrepancies. The residual blanket C/E drop-off (about half the standard drop-off) using advanced methods must be caused by some approximations which are applied in all current methods. 27 refs., 3 figs., 1 tab

  11. The assessment of cognitive errors using an observer-rated method.

    Science.gov (United States)

    Drapeau, Martin

    2014-01-01

    Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.

  12. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Kugelman

    Full Text Available Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4-5) of all compared methods.

  13. NNAlign: A Web-Based Prediction Method Allowing Non-Expert End-User Discovery of Sequence Motifs in Quantitative Peptide Data

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Schafer-Nielsen, Claus; Lund, Ole

    2011-01-01

    Recent advances in high-throughput technologies have made it possible to generate both gene and protein sequence data at an unprecedented rate and scale thereby enabling entirely new "omics"-based approaches towards the analysis of complex biological processes. However, the amount and complexity...... to interpret large data sets. We have recently developed a method, NNAlign, which is generally applicable to any biological problem where quantitative peptide data is available. This method efficiently identifies underlying sequence patterns by simultaneously aligning peptide sequences and identifying motifs...... associated with quantitative readouts. Here, we provide a web-based implementation of NNAlign allowing non-expert end-users to submit their data (optionally adjusting method parameters), and in return receive a trained method (including a visual representation of the identified motif) that subsequently can...

  14. Method for measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample

    International Nuclear Information System (INIS)

    1977-01-01

    A method of measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample by counting at least two differently quenched versions of the sample. In each counting operation the sample is counted in the presence of and in the absence of a standard radioactive source. A pulse height (PH) corresponding to a unique point on the pulse height spectrum generated in the presence of the standard is determined. A zero threshold sample count rate (CPM) is derived by counting the sample once in a counting window having a zero threshold lower limit. Normalized values of the measured pulse heights (PH) are developed and correlated with the corresponding counts (CPM) to determine the pulse count for a normalized pulse height value of zero and hence the sample disintegration rate

  15. Count rate balance method of measuring sediment transport of sand beds by radioactive tracers

    International Nuclear Information System (INIS)

    Sauzay, G.

    1968-01-01

    Radioactive tracers are applied to the direct measurement of the sediment transport rate of sand beds. The theoretical measurement formula is derived: the variation of the count rate balance is the inverse of that of the transport thickness. At the same time, the representativeness of the tracer is critically studied. The minimum quantity of tracer that has to be injected in order to obtain a correct statistical definition of the count rate given by a low number of grains 'seen' by the detector is then studied. A field experiment was carried out and led to a study of the technological conditions for applying this method: only the treatment of results is new; the experiment itself was carried out with conventional techniques applied with great care. (author) [fr

  16. A study on measurement on artificial radiation dose rate using the response matrix method

    International Nuclear Information System (INIS)

    Kidachi, Hiroshi; Ishikawa, Yoichi; Konno, Tatsuya

    2004-01-01

    We examined the accuracy and stability of the estimated artificial dose contribution, which is distinguished from the natural background gamma-ray dose rate using the Response Matrix method. Irradiation experiments using artificial gamma-ray sources indicated that there was a linear relationship between the observed dose rate and the estimated artificial dose contribution when the irradiated artificial gamma-ray dose rate was higher than about 2 nGy/h. Statistical and time-series analyses of long-term data made it clear that the estimated artificial contribution showed almost constant values under no artificial influence from the nuclear power plants. However, variations in the estimated artificial dose contribution were infrequently observed due to rainfall, detector maintenance operations and the occurrence of calibration errors. Some considerations on the factors contributing to these variations were made. (author)

  17. A method for estimating failure rates for low probability events arising in PSA

    International Nuclear Information System (INIS)

    Thorne, M.C.; Williams, M.M.R.

    1995-01-01

    The authors develop a method for predicting failure rates and failure probabilities per event when, over a given test period or number of demands, no failures have occurred. A Bayesian approach is adopted to calculate a posterior probability distribution for the failure rate or failure probability per event subsequent to the test period. This posterior is then used to estimate effective failure rates or probabilities over a subsequent period of time or number of demands. In special circumstances, the authors' results reduce to the well-known rules of thumb, viz: 1/N and 1/T, where N is the number of demands during the test period for no failures and T is the test period for no failures. However, the authors are able to give strict conditions on the validity of these rules of thumb and to improve on them when necessary
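
    The machinery behind such estimates is compact enough to sketch. Assuming a Poisson failure model with a Jeffreys-type gamma prior (an assumption of this sketch, not the authors' derivation), zero failures over a test period T give a gamma posterior whose mean has the same order as the 1/T rule of thumb.

    ```python
    from scipy.stats import gamma

    T = 5000.0         # cumulative test time with zero failures, hours (illustrative)
    a0, b0 = 0.5, 0.0  # Gamma(shape, rate) Jeffreys-type prior for a Poisson rate

    # Poisson-gamma conjugacy: shape += failures (zero here), rate += exposure.
    a_post, b_post = a0, b0 + T
    mean_rate = a_post / b_post
    upper95 = gamma.ppf(0.95, a_post, scale=1.0 / b_post)

    print(f"posterior mean rate: {mean_rate:.2e} /h  (1/T rule: {1.0 / T:.2e} /h)")
    print(f"95% upper credible bound: {upper95:.2e} /h")
    ```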

  18. Efficient quantum-classical method for computing thermal rate constant of recombination: application to ozone formation.

    Science.gov (United States)

    Ivanov, Mikhail V; Babikov, Dmitri

    2012-05-14

    An efficient method is proposed for computing the thermal rate constant of a recombination reaction that proceeds according to the energy transfer mechanism, in which an energized molecule is formed from the reactants first and is stabilized later by collision with a quencher. The mixed quantum-classical theory for the collisional energy transfer and the ro-vibrational energy flow [M. Ivanov and D. Babikov, J. Chem. Phys. 134, 144107 (2011)] is employed to treat the dynamics of the molecule + quencher collision. Efficiency is achieved by sampling simultaneously (i) the thermal collision energy, (ii) the impact parameter, and (iii) the incident direction of the quencher, as well as (iv) the rotational state of the energized molecule. This approach is applied to calculate the third-order rate constant of the recombination reaction that forms the (16)O(18)O(16)O isotopomer of ozone. A comparison of the predicted rate vs. the experimental result is presented.

  19. Method of measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1977-01-01

    A method of measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample by counting at least two differently quenched versions of the sample is described. In each counting operation the sample is counted in the presence of and in the absence of a standard radioactive source. A pulse height (PH) corresponding to a unique point on the pulse height spectrum generated in the presence of the standard is determined. A zero threshold sample count rate (CPM) is derived by counting the sample once in a counting window having a zero threshold lower limit. Normalized values of the measured pulse heights (PH) are developed and correlated with the corresponding pulse counts (CPM) to determine the pulse count for a normalized pulse height value of zero and hence the sample disintegration rate

  20. Evaluation and comparison of estimation methods for failure rates and probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)

    2006-02-01

    An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with gamma priors are used for failure rate estimation, and Binomial data with beta priors are used for estimation of the failure probability per demand. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and show that PREB works as well as, if not better than, the alternative, more complex methods, especially in demanding problems of small samples, identical data and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.
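
    A minimal sketch of the prior-moment-matching idea: estimate a gamma prior from the spread of raw unit-level failure rates, then shrink each unit's estimate toward the pooled mean. The robustification and the failure-/time-truncation treatment of the actual PREB method are omitted, and the data are invented.

    ```python
    import numpy as np

    # Failure counts and exposure times for several similar units (invented).
    n = np.array([0, 1, 2, 0, 3])
    T = np.array([4000.0, 5200.0, 6100.0, 3500.0, 7000.0])

    raw = n / T
    m = raw.mean()
    # Between-unit variance: sample variance minus the average Poisson noise term.
    v = max(raw.var(ddof=1) - np.mean(raw / T), 1e-12)

    alpha, beta = m ** 2 / v, m / v       # moment-matched Gamma(shape, rate) prior
    shrunken = (alpha + n) / (beta + T)   # per-unit posterior mean rates

    for ni, Ti, r, p in zip(n, T, raw, shrunken):
        print(f"n={ni}  T={Ti:6.0f} h  raw={r:.2e}  shrunken={p:.2e}")
    ```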

  1. Meta-analytic methods for pooling rates when follow-up duration varies: a case study

    Directory of Open Access Journals (Sweden)

    Wolf Fredric M

    2004-07-01

    Full Text Available Abstract Background Meta-analysis can be used to pool rate measures across studies, but challenges arise when follow-up duration varies. Our objective was to compare different statistical approaches for pooling count data of varying follow-up times in terms of estimates of effect, precision, and clinical interpretability. Methods We examined data from a published Cochrane Review of asthma self-management education in children. We selected two rate measures with the largest number of contributing studies: school absences and emergency room (ER) visits. We estimated fixed- and random-effects standardized weighted mean differences (SMD), stratified incidence rate differences (IRD), and stratified incidence rate ratios (IRR). We also fit Poisson regression models, which allowed for further adjustment for clustering by study. Results For both outcomes, all methods gave qualitatively similar estimates of effect in favor of the intervention. For school absences, SMD showed modest results in favor of the intervention (SMD -0.14, 95% CI -0.23 to -0.04). IRD implied that the intervention reduced school absences by 1.8 days per year (IRD -0.15 days/child-month, 95% CI -0.19 to -0.11), while IRR suggested a 14% reduction in absences (IRR 0.86, 95% CI 0.83 to 0.90). For ER visits, SMD showed a modest benefit in favor of the intervention (SMD -0.27, 95% CI -0.45 to -0.09). IRD implied that the intervention reduced ER visits by 1 visit every 2 years (IRD -0.04 visits/child-month, 95% CI -0.05 to -0.03), while IRR suggested a 34% reduction in ER visits (IRR 0.66, 95% CI 0.59 to 0.74). In Poisson models, adjustment for clustering lowered the precision of the estimates relative to stratified IRR results. For ER visits but not school absences, failure to incorporate study indicators resulted in a different estimate of effect (unadjusted IRR 0.77, 95% CI 0.59 to 0.99). Conclusions Choice of method among the ones presented had little effect on inference but affected the
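
    For count outcomes with person-time denominators, the stratified IRR and IRD compared above can be computed directly from per-study events and follow-up, as in this sketch on invented data: the Mantel-Haenszel pooled incidence rate ratio, plus a simple person-time-weighted pooled rate difference.

    ```python
    import numpy as np

    # Per-study data: (events, person-months) for intervention and control (invented).
    studies = [
        # (e1, pt1, e0, pt0)
        (30, 1200.0, 45, 1150.0),
        (12,  640.0, 20,  600.0),
        (25,  980.0, 32,  940.0),
    ]

    # Mantel-Haenszel pooled incidence rate ratio for person-time data.
    num = sum(e1 * pt0 / (pt1 + pt0) for e1, pt1, e0, pt0 in studies)
    den = sum(e0 * pt1 / (pt1 + pt0) for e1, pt1, e0, pt0 in studies)
    irr_mh = num / den

    # Pooled incidence rate difference, weighted by total person-time.
    w = np.array([pt1 + pt0 for _, pt1, _, pt0 in studies])
    ird = np.array([e1 / pt1 - e0 / pt0 for e1, pt1, e0, pt0 in studies])
    ird_pooled = float(np.average(ird, weights=w))

    print(f"pooled IRR (Mantel-Haenszel): {irr_mh:.2f}")
    print(f"pooled IRD: {ird_pooled:+.4f} events/person-month")
    ```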

  2. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR

  3. An experimental comparison of three wire beam electrode based methods for determining corrosion rates and patterns

    International Nuclear Information System (INIS)

    Tan, Y.-J.

    2005-01-01

    Laboratory experiments have been carried out to examine the advantages and limitations of three wire beam electrode (WBE) based techniques, namely the noise resistance Rn-WBE method, the overpotential-galvanic current method, and the galvanic current method, in determining corrosion rates and patterns. These techniques were applied simultaneously to several selected corrosion systems with different characteristics. It was found that the Rn-WBE method has advantages over the other WBE-based methods when applied to WBE surfaces under uniform corrosion. However, the Rn-WBE method was found to be unsuitable for low-noise-level corrosion systems. It was also found that both the Rn-WBE and overpotential-galvanic current methods are similarly applicable to WBE surfaces under nonuniform corrosion, whereas the galvanic current method was found to be suitable only for WBE surfaces under highly localised corrosion. Some related issues regarding Rn calculation, such as trend removal and its effects on corrosion mapping, are also discussed

  4. Ultrasonic 3-D Vector Flow Method for Quantitative In Vivo Peak Velocity and Flow Rate Estimation

    DEFF Research Database (Denmark)

    Holbek, Simon; Ewertsen, Caroline; Bouzari, Hamed

    2017-01-01

    Current clinical ultrasound (US) systems are limited to show blood flow movement in either 1-D or 2-D. In this paper, a method for estimating 3-D vector velocities in a plane using the transverse oscillation method, a 32×32 element matrix array, and the experimental US scanner SARUS is presented...... is validated in two phantom studies, where flow rates are measured in a flow-rig, providing a constant parabolic flow, and in a straight-vessel phantom ( ∅=8 mm) connected to a flow pump capable of generating time varying waveforms. Flow rates are estimated to be 82.1 ± 2.8 L/min in the flow-rig compared...

  5. Efficient kinetic Monte Carlo method for reaction-diffusion problems with spatially varying annihilation rates

    Science.gov (United States)

    Schwarz, Karsten; Rieger, Heiko

    2013-03-01

    We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
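
    One standard, statistically exact way to handle a spatially varying annihilation rate is rejection sampling ("thinning") against a constant upper-bound rate. This is a simpler device than the protective-domain propagators developed in the paper, but it illustrates the core difficulty, as in this single-particle sketch on a 1-D lattice with an assumed rate profile.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    L = 100             # lattice sites, reflecting walls
    D_hop = 1.0         # hop rate to each neighbor
    # Position-dependent annihilation rate (illustrative profile).
    a = 0.002 * (1.0 + np.sin(np.linspace(0.0, 2.0 * np.pi, L)) ** 2)
    a_max = a.max()

    def lifetime(x0):
        """Propagate one particle until annihilation; return its lifetime."""
        t, x = 0.0, x0
        total = 2.0 * D_hop + a_max          # constant upper-bound total rate
        while True:
            t += rng.exponential(1.0 / total)
            u = rng.uniform(0.0, total)
            if u < D_hop:                    # hop left
                x = max(x - 1, 0)
            elif u < 2.0 * D_hop:            # hop right
                x = min(x + 1, L - 1)
            elif u < 2.0 * D_hop + a[x]:     # accepted annihilation event
                return t
            # otherwise: rejected "phantom" event; thinning keeps statistics exact

    mean_life = np.mean([lifetime(L // 2) for _ in range(2000)])
    print(f"mean particle lifetime: {mean_life:.1f}")
    ```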

  6. Increasing recruitment rates in an inpatient clinical research study using quality improvement methods.

    Science.gov (United States)

    Sauers, Hadley S; Beck, Andrew F; Kahn, Robert S; Simmons, Jeffrey M

    2014-11-01

    One important benefit of successful patient recruitment is increased generalizability of findings. We sought to optimize enrollment of children admitted with asthma as part of a population-based, prospective, observational cohort study with the goal of enrolling at least 60% of all eligible and staffed patients. Quality improvement methods were used to improve cohort recruitment. Weekly meetings with study staff and study leadership were held to plan and discuss how to maximize recruitment rates. Significant initial variability in recruitment success prompted the team to use small-scale tests of change to increase recruitment numbers. A number of tests were trialed, focusing primarily on reducing patient refusals and improving recruitment process efficiency. Recruitment rates were calculated by dividing eligible by enrolled patients and displayed using annotated Shewhart control charts. Control charts were used to illustrate week-to-week variability while also enabling differentiation of common-cause and special-cause variation. The study enrolled 774 patients, representing 54% of all eligible and 59% of those eligible for whom staff were available to enroll. Our mean weekly recruitment rate increased from 55% during the first 3 months of the study to a statistically significant sustained rate of 61%. This was sustained given numerous obstacles, such as departing and hiring of staff and adding a second recruitment location. Implementing quality improvement methods within a larger research study led to an increase in the rate of recruitment as well as the stability in recruitment rates from week-to-week. Copyright © 2014 by the American Academy of Pediatrics.

  7. Classical Wigner method with an effective quantum force: application to reaction rates.

    Science.gov (United States)

    Poulsen, Jens Aage; Li, Huaqing; Nyman, Gunnar

    2009-07-14

    We construct an effective "quantum force" to be used in the classical molecular dynamics part of the classical Wigner method when determining correlation functions. The quantum force is obtained by estimating the most important short time separation of the Feynman paths that enter into the expression for the correlation function. The evaluation of the force is then as easy as classical potential energy evaluations. The ideas are tested on three reaction rate problems. The resulting transmission coefficients are in much better agreement with accurate results than transmission coefficients from the ordinary classical Wigner method.

  8. Testing an Adapted Modified Delphi Method: Synthesizing Multiple Stakeholder Ratings of Health Care Service Effectiveness.

    Science.gov (United States)

    Escaron, Anne L; Chang Weir, Rosy; Stanton, Petra; Vangala, Sitaram; Grogan, Tristan R; Clarke, Robin M

    2016-03-01

    The Affordable Care Act incentivizes health systems for better meeting patient needs, but often guidance about patient preferences for particular health services is limited. All too often vulnerable patient populations are excluded from these decision-making settings. A community-based participatory approach harnesses the in-depth knowledge of those experiencing barriers to health care. We made three modifications to the RAND-UCLA appropriateness method, a modified Delphi approach: involving patients, adding an advisory council group to characterize existing knowledge in this little-studied area, and using effectiveness rather than "appropriateness" as the basis for rating. As a proof of concept, we tested this method by examining the broadly delivered but understudied nonmedical services that community health centers provide. This method created discrete, new knowledge about these services by defining 6 categories and 112 unique services and by prioritizing among these services based on effectiveness using a 9-point scale. Consistent with the appropriateness method, we found statistical convergence of ratings among the panelists. Challenges include time commitment and adherence to a clear definition of effectiveness of services. This diverse stakeholder engagement method efficiently addresses gaps in knowledge about the effectiveness of health care services to inform population health management. © 2015 Society for Public Health Education.

  9. A GPS Satellite Clock Offset Prediction Method Based on Fitting Clock Offset Rates Data

    Directory of Open Access Journals (Sweden)

    WANG Fuhong

    2016-12-01

    Full Text Available A satellite atomic clock offset prediction method based on fitting and modeling clock offset rate data is proposed. This method builds a quadratic or linear model combined with periodic terms to fit the time series of clock offset rates, and computes the trend coefficients of the model by best estimation. The clock offset precisely estimated at the initial prediction epoch is directly adopted to fix the constant term of the model. The clock offsets in the rapid ephemeris (IGR) provided by IGS are used as modeling data sets to perform experiments for different types of GPS satellite clocks. The results show that the clock prediction accuracies of the proposed method for 3, 6, 12 and 24 h reach 0.43, 0.58, 0.90 and 1.47 ns respectively, outperforming the traditional prediction method based on fitting original clock offsets by 69.3%, 61.8%, 50.5% and 37.2%. Compared with the IGU real-time clock products provided by IGS, the prediction accuracies of the new method improve by about 15.7%, 23.7%, 27.4% and 34.4% respectively.
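
    A minimal sketch of the fitting step on synthetic data: the clock offset rate series is fit with a model that is linear in its coefficients (trend plus periodic terms, with the period taken as known here), and the offset is then predicted by integrating the fitted rate from a precisely estimated offset at the prediction epoch.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic offset-rate series (ns/s) sampled every 15 min over 48 h;
    # the 12 h periodic term mimics an orbital signature. Values are invented.
    dt = 900.0
    t = np.arange(0.0, 48 * 3600.0, dt)
    P = 12 * 3600.0
    rates = (2e-4 + 1e-9 * t + 5e-5 * np.sin(2 * np.pi * t / P)
             + rng.normal(0.0, 1e-5, t.size))

    def design(tt):
        return np.column_stack([np.ones_like(tt), tt,
                                np.sin(2 * np.pi * tt / P),
                                np.cos(2 * np.pi * tt / P)])

    coef, *_ = np.linalg.lstsq(design(t), rates, rcond=None)

    # Predict the offset 6 h ahead by integrating the fitted rate, anchored
    # at an (assumed) precisely estimated offset at the prediction epoch.
    offset0 = 12345.0                      # ns, known at t[-1] by assumption
    tp = np.arange(t[-1], t[-1] + 6 * 3600.0, dt)
    offset_pred = offset0 + np.cumsum(design(tp) @ coef) * dt
    print(f"predicted clock offset after 6 h: {offset_pred[-1]:.1f} ns")
    ```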

  10. A prediction method for the wax deposition rate based on a radial basis function neural network

    Directory of Open Access Journals (Sweden)

    Ying Xie

    2017-06-01

    Full Text Available The radial basis function neural network is a popular supervised learning tool based on machine learning technology. Its high precision having been proven, the radial basis function neural network has been applied in many areas. The accumulation of deposited materials in a pipeline may lead to the need for increased pumping power, a decreased flow rate or even total blockage of the line, with losses of production and capital investment, so research on predicting the wax deposition rate is significant for the safe and economical operation of an oil pipeline. This paper adopts the radial basis function neural network to predict the wax deposition rate from four main influencing factors selected by the gray correlational analysis method: the pipe wall temperature gradient, pipe wall wax crystal solubility coefficient, pipe wall shear stress and crude oil viscosity. MATLAB software is employed to establish the RBF neural network. Compared with the previous literature, favorable consistency exists between the predicted outcomes and the experimental results, with a relative error of 1.5%. It can be concluded that the prediction method for the wax deposition rate based on the RBF neural network is feasible.
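
    A sketch of RBF regression on the four influencing factors. The paper builds its network in MATLAB; the stand-in below uses SciPy's RBFInterpolator, and both the training data and the underlying relation are invented for illustration.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(4)

    # Synthetic training set: [wall temperature gradient, wax solubility
    # coefficient, wall shear stress, crude viscosity] -> deposition rate.
    X = rng.uniform([5.0, 0.1, 1.0, 10.0], [50.0, 1.0, 20.0, 200.0], size=(60, 4))
    y = 0.8 * X[:, 0] * X[:, 1] / (1.0 + 0.05 * X[:, 2]) + 0.01 * X[:, 3]
    y += rng.normal(0.0, 0.2, y.size)                 # measurement noise

    # Standardize inputs so a single kernel width suits all four features.
    mu, sd = X.mean(axis=0), X.std(axis=0)
    model = RBFInterpolator((X - mu) / sd, y, kernel="gaussian",
                            epsilon=1.0, smoothing=1e-3)

    x_new = np.array([[30.0, 0.5, 8.0, 120.0]])
    print(f"predicted wax deposition rate: {model((x_new - mu) / sd)[0]:.2f}")
    ```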

  11. A Comparison of Affect Ratings Obtained with Ecological Momentary Assessment and the Day Reconstruction Method

    Science.gov (United States)

    Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane

    2010-01-01

    Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues (Science, 2004, 306, 1776–1780) to assess affect, activities and time use in everyday life. We sought to validate DRM affect ratings by comparison with contemporaneous EMA ratings in a sample of 94 working women monitored over work and leisure days. Six EMA ratings of happiness, tiredness, stress, and anger/frustration were obtained over each 24 h period, and were compared with DRM ratings for the same hour, recorded retrospectively at the end of the day. Similar profiles of affect intensity were recorded with the two techniques. The between-person correlations adjusted for attenuation ranged from 0.58 (stress, working day) to 0.90 (happiness, leisure day). The strength of associations was not related to age, educational attainment, or depressed mood. We conclude that the DRM provides reasonably reliable estimates both of the intensity of affect and variations in affect over the day, so is a valuable instrument for the measurement of everyday experience in health and social research. PMID:21113328

  12. Combining Unsupervised and Supervised Statistical Learning Methods for Currency Exchange Rate Forecasting

    OpenAIRE

    Vasiljeva, Polina

    2016-01-01

    In this thesis we revisit the challenging problem of forecasting currency exchange rate. We combine machine learning methods such as agglomerative hierarchical clustering and random forest to construct a two-step approach for predicting movements in currency exchange prices of the Swedish krona and the US dollar. We use a data set with over 200 predictors comprised of different financial and macro-economic time series and their transformations. We perform forecasting for one week ahead with d...

  13. Taguchi Method for Development of Mass Flow Rate Correlation Using Hydrocarbon Refrigerant Mixture in Capillary Tube

    OpenAIRE

    Sulaimon, Shodiya; Nasution, Henry; Aziz, Azhar Abdul; Abdul-Rahman, Abdul-Halim; Darus, Amer N

    2014-01-01

    The capillary tube is an important control device used in small vapor compression refrigeration systems such as window air-conditioners, household refrigerators and freezers. This paper develops a non-dimensional correlation based on the test results of the adiabatic capillary tube for the mass flow rate through the tube using a hydrocarbon refrigerant mixture of 89.3% propane and 10.7% butane (HCM). The Taguchi method, a statistical experimental design approach, was employed. This approach e...

  14. Accounting for discovery bias in genomic prediction

    Science.gov (United States)

    Our objective was to evaluate an approach to mitigating discovery bias in genomic prediction. Accuracy may be improved by placing greater emphasis on regions of the genome expected to be more influential on a trait. Methods emphasizing regions result in a phenomenon known as “discovery bias” if info...

  15. Methods of Data Collection, Sample Processing, and Data Analysis for Edge-of-Field, Streamgaging, Subsurface-Tile, and Meteorological Stations at Discovery Farms and Pioneer Farm in Wisconsin, 2001-7

    Science.gov (United States)

    Stuntebeck, Todd D.; Komiskey, Matthew J.; Owens, David W.; Hall, David W.

    2008-01-01

    The University of Wisconsin (UW)-Madison Discovery Farms (Discovery Farms) and UW-Platteville Pioneer Farm (Pioneer Farm) programs were created in 2000 to help Wisconsin farmers meet environmental and economic challenges. As a partner with each program, and in cooperation with the Wisconsin Department of Natural Resources and the Sand County Foundation, the U.S. Geological Survey (USGS) Wisconsin Water Science Center (WWSC) installed, maintained, and operated equipment to collect water-quantity and water-quality data from 25 edge-of-field, 6 streamgaging, and 5 subsurface-tile stations at 7 Discovery Farms and Pioneer Farm. The farms are located in the southern half of Wisconsin and represent a variety of landscape settings and crop- and animal-production enterprises common to Wisconsin agriculture. Meteorological stations were established at most farms to measure precipitation, wind speed and direction, air and soil temperature (in profile), relative humidity, solar radiation, and soil moisture (in profile). Data collection began in September 2001 and is continuing through the present (2008). This report describes methods used by USGS WWSC personnel to collect, process, and analyze water-quantity, water-quality, and meteorological data for edge-of-field, streamgaging, subsurface-tile, and meteorological stations at Discovery Farms and Pioneer Farm from September 2001 through October 2007. Information presented includes equipment used; event-monitoring and sample-collection procedures; station maintenance; sample handling and processing procedures; water-quantity, water-quality, and precipitation data analyses; and procedures for determining estimated constituent concentrations for unsampled runoff events.

  16. Deep Learning in Drug Discovery.

    Science.gov (United States)

    Gawehn, Erik; Hiss, Jan A; Schneider, Gisbert

    2016-01-01

    Artificial neural networks had their first heyday in molecular informatics and drug discovery approximately two decades ago. Currently, we are witnessing renewed interest in adapting advanced neural network architectures for pharmaceutical research by borrowing from the field of "deep learning". Compared with some of the other life sciences, their application in drug discovery is still limited. Here, we provide an overview of this emerging field of molecular informatics, present the basic concepts of prominent deep learning methods and offer motivation to explore these techniques for their usefulness in computer-assisted drug discovery and design. We specifically emphasize deep neural networks, restricted Boltzmann machine networks and convolutional networks. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Bioinformatics in translational drug discovery.

    Science.gov (United States)

    Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G

    2017-08-31

    Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).

  18. Covariance methodology applied to 35S disintegration rate measurements by the CIEMAT/NIST method

    International Nuclear Information System (INIS)

    Koskinas, M.F.; Nascimento, T.S.; Yamazaki, I.M.; Dias, M.S.

    2014-01-01

    The Nuclear Metrology Laboratory (LMN) at IPEN is carrying out measurements in an LSC (Liquid Scintillation Counting) system, applying the CIEMAT/NIST method. In this context 35S is an important radionuclide for medical applications, and it is difficult to standardize by other primary methods due to its low beta-ray energy. The CIEMAT/NIST method is a standard technique used by most metrology laboratories in order to improve accuracy and speed up beta emitter standardization. The focus of the present work was to apply the covariance methodology for determining the overall uncertainty in the 35S disintegration rate. All partial uncertainties involved in the measurements were considered, taking into account all possible correlations between each pair of them. - Highlights: ► 35S disintegration rate measured in a Liquid Scintillation system using the CIEMAT/NIST method. ► Covariance methodology applied to the overall uncertainty in the 35S disintegration rate. ► Monte Carlo simulation was applied to determine 35S activity in the 4πβ(PC)-γ coincidence system
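
    The covariance methodology amounts to propagating a full covariance matrix rather than independent variances: the combined variance is J V Jᵀ, where J holds the sensitivity coefficients. A minimal sketch with invented partial uncertainties and an assumed correlation between two of the components:

    ```python
    import numpy as np

    # Partial standard uncertainties (%) of quantities entering the
    # disintegration-rate calculation (invented values).
    u = np.array([0.15, 0.20, 0.10])
    corr = np.array([[1.0, 0.6, 0.0],   # assumed correlation between the
                     [0.6, 1.0, 0.0],   # first two components
                     [0.0, 0.0, 1.0]])
    V = np.outer(u, u) * corr           # covariance matrix

    J = np.array([1.0, 1.0, 1.0])       # sensitivity coefficients (unity here)
    u_combined = np.sqrt(J @ V @ J)
    print(f"combined standard uncertainty: {u_combined:.2f} %")
    ```

    Ignoring the assumed 0.6 correlation here would understate the combined uncertainty, which is the point of carrying the full covariance matrix.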

  19. Identification of strain-rate and thermal sensitive material model with an inverse method

    Directory of Open Access Journals (Sweden)

    Peroni M.

    2010-06-01

    Full Text Available This paper describes a numerical inverse method to extract material strength parameters from experimental data obtained via mechanical tests at different strain rates and temperatures. It is shown that this procedure is particularly useful for analysing experimental results when the stress-strain fields in the specimen cannot be correctly described via analytical models. This commonly happens in specimens with no regular shape, in specimens with a regular shape when instability phenomena occur (for example, the necking phenomenon in tensile tests, which creates strongly heterogeneous stress-strain fields), or in dynamic tests (where the strain-rate field is not constant due to wave propagation phenomena). Furthermore, the developed procedure is useful for taking into account the thermal phenomena that generally affect high strain-rate tests due to the adiabatic overheating related to the conversion of plastic work. The method presented requires strong effort from both the experimental and numerical points of view, but it allows precise identification of the parameters of different material models. This can provide great advantages when high reliability of the material behaviour is necessary. Applicability of this method is particularly indicated for special applications in the fields of aerospace engineering, ballistics, crashworthiness studies or particle accelerator technologies, where materials can be subjected to strong plastic deformation at high strain rates over a wide range of temperatures. The thermal softening effect has been investigated in a temperature range between 20°C and 1000°C.
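
    As a much-simplified stand-in for the FE-based inverse procedure (which compares full simulations against the measured response), the sketch below identifies the parameters of a Cowper-Symonds strain-rate scaling law from flow-stress data by nonlinear least squares. Both the data and the choice of model are illustrative, not the paper's.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Flow stress at several strain rates (invented data, MPa).
    rates = np.array([1e-3, 1e-1, 1e1, 1e3])       # 1/s
    stress = np.array([305.0, 317.0, 353.0, 469.0])

    def cowper_symonds(edot, sigma0, D, q):
        """Quasi-static strength scaled by a Cowper-Symonds rate factor."""
        return sigma0 * (1.0 + (edot / D) ** (1.0 / q))

    popt, pcov = curve_fit(cowper_symonds, rates, stress, p0=[300.0, 1e3, 3.0],
                           bounds=([100.0, 1e-3, 0.5], [600.0, 1e9, 20.0]))
    sigma0, D, q = popt
    print(f"sigma0 = {sigma0:.1f} MPa, D = {D:.3g} 1/s, q = {q:.2f}")
    ```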

  20. Inverse method for determining radon diffusion coefficient and free radon production rate of fragmented uranium ore

    International Nuclear Information System (INIS)

    Ye, Yong-jun; Wang, Li-heng; Ding, De-xin; Zhao, Ya-li; Fan, Nan-bin

    2014-01-01

    The radon diffusion coefficient and the free radon production rate are important parameters for describing radon migration in fragmented uranium ore. In order to determine the two parameters, the pure diffusion migration equation for radon was first established and its analytic solution with the two parameters to be determined was derived. Then, a self-manufactured experimental column was used to simulate the pure diffusion of the radon, the improved scintillation cell method was used to measure the pore radon concentrations at different depths of the column loaded with the fragmented uranium ore, and the nonlinear least squares algorithm was used to inversely determine the radon diffusion coefficient and the free radon production rate. Finally, the solution with the two inversely determined parameters was used to predict the pore radon concentrations at some depths of the column, and the predicted results were compared with the measured results. The results show that the predicted results are in good agreement with the measured results and the numerical inverse method is applicable to the determination of the radon diffusion coefficient and the free radon production rate for fragmented uranium ore. - Highlights: • Inverse method for determining two transport parameters of radon is proposed. • A self-made experimental apparatus is used to simulate radon diffusion process. • Sampling volume and position for measuring radon concentration are optimized. • The inverse results of an experimental sample are verified
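
    The inverse step can be sketched for a steady-state version of the problem: with the radon decay constant known, an analytic profile for a column with an open top and sealed bottom is fit to measured pore concentrations by nonlinear least squares, returning the diffusion coefficient and the free radon production rate. The boundary conditions and the data below are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    LAMBDA = 2.1e-6   # Rn-222 decay constant, 1/s
    H = 1.0           # column height, m (open top at z = 0, sealed bottom at z = H)

    def radon_profile(z, D, Q):
        """Steady 1-D diffusion with decay: D*C'' - LAMBDA*C + Q = 0,
        with C(0) = 0 and zero flux at z = H."""
        l = np.sqrt(D / LAMBDA)                       # diffusion length
        return (Q / LAMBDA) * (1.0 - np.cosh((H - z) / l) / np.cosh(H / l))

    # Pore radon concentrations at several depths (invented data, Bq/m^3).
    z_obs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
    c_obs = np.array([2500.0, 6130.0, 8420.0, 9760.0, 10380.0])

    popt, pcov = curve_fit(radon_profile, z_obs, c_obs, p0=[1e-6, 0.02],
                           bounds=([1e-9, 1e-4], [1e-4, 1.0]))
    D_fit, Q_fit = popt
    print(f"D = {D_fit:.2e} m^2/s, Q = {Q_fit:.3f} Bq/(m^3 s)")
    ```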

  1. Novel driver method to improve ordinary CCD frame rate for high-speed imaging diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Tong-Ding, E-mail: snuohui@126.com; Li, Bin-Kang; Yang, Shao-Hua; Guo, Ming-An; Yan, Ming

    2016-06-21

    The use of ordinary Charge-Coupled Device (CCD) imagers for the analysis of fast physical phenomena is restricted because of the low-speed performance resulting from their long output times. Even though the Intensified CCD (ICCD), which couples a CCD with a gated image intensifier, has extended their use to high-speed imaging, the remaining deficiency is that an ICCD can record only one image in a single shot. This paper presents a novel driver method designed to significantly improve the burst frame rate of an ordinary interline CCD for high-speed photography. The method is based on the use of the vertical registers as storage, so that a small number of additional frames comprised of reduced-spatial-resolution images obtained via a specific sampling operation can be buffered. Hence, the interval time of the received series of images is related to the exposure and vertical transfer times only, and thus the burst frame rate can be increased significantly. A prototype camera based on this method was designed as part of this study, exhibiting a burst rate of up to 250,000 frames per second (fps) and a capacity to record three continuous images. This device exhibits a speed enhancement of approximately 16,000 times compared with the conventional speed, with a spatial resolution reduction of only 1/4.

  2. Effect of the irradiation of bacteria upon their survival rate during conventional methods of meat preservation

    International Nuclear Information System (INIS)

    Szczawinska, M.

    1981-01-01

    The purpose of this paper is to define the effect of irradiation upon the survival rate of non-sporing bacteria (Staphylococcus aureus, Salmonella typhimurium, Escherichia coli, Pseudomonas fluorescens) during basic methods of meat preservation. The bacteria were irradiated in broth by X-rays at a dose that destroyed about 90% of the bacteria (D10). The survival rate of unirradiated and irradiated bacteria during cooling and freezing, and in solutions of sodium chloride, nitrates and liquid smoke, was determined. The number of microorganisms was determined directly after irradiation as well as 1, 3, 7, 14, 21 and 28 days after irradiation. The effect of irradiation upon the heat resistance of the examined species of bacteria was also determined. The microorganisms were heated in broth at 70°C for 1, 2 and 5 minutes. The results obtained were subjected to statistical analysis. On the basis of the research results, a faster dying rate of the irradiated populations of S. aureus and E. coli during any type of preservation treatment, the lack of any reaction to irradiation regarding the survival rate of S. typhimurium, and the lack of any effect of irradiation upon the rate of deterioration of P. fluorescens during freezing and storage in a solution with a 10% addition of NaCl, were observed. On the other hand, a pronounced effect of irradiation in lowering the heat resistance of the bacteria, as well as delayed growth in other variants of the experiment, was determined. (author)
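
    For reference, the D10 notation used above denotes the dose that reduces a population tenfold, so under first-order (log-linear) inactivation kinetics the surviving fraction is N/N0 = 10^(−D/D10). A minimal sketch with invented dose values:

        # Illustrative only: log-linear inactivation kinetics, not data
        # from the study above.
        def surviving_fraction(dose_kGy, d10_kGy):
            return 10.0 ** (-dose_kGy / d10_kGy)

        print(surviving_fraction(0.4, 0.4))   # one D10 of dose leaves ~10%
        print(surviving_fraction(0.8, 0.4))   # two D10 doses leave ~1%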

  3. Standard Test Method for Measuring Fast-Neutron Reaction Rates by Radioactivation of Niobium

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method describes procedures for measuring reaction rates by the activation reaction 93Nb(n,n′)93mNb. 1.2 This activation reaction is useful for monitoring neutrons with energies above approximately 0.5 MeV and for irradiation times up to about 30 years. 1.3 With suitable techniques, fast-neutron reaction rates for neutrons with an energy distribution similar to fission neutrons can be determined at fast-neutron fluences above about 10¹⁶ cm⁻². In the presence of high thermal-neutron fluence rates (>10¹² cm⁻²·s⁻¹), the transmutation of 93mNb due to neutron capture should be investigated. In the presence of high-energy neutron spectra such as are associated with fusion and spallation sources, the transmutation of 93mNb by reactions such as (n,2n) may occur and should be investigated. 1.4 Procedures for other fast-neutron monitors are referenced in Practice E 261. 1.5 Fast-neutron fluence rates can be determined from the reaction rates provided that the appropriate cross section information ...

  4. Semiempirical method of determining flow coefficients for pitot rake mass flow rate measurements

    Science.gov (United States)

    Trefny, C. J.

    1985-01-01

    Flow coefficients applicable to area-weighted pitot rake mass flow rate measurements are presented for fully developed, turbulent flow in an annulus. A turbulent velocity profile is generated semiempirically for a given annulus hub-to-tip radius ratio and integrated numerically to determine the ideal mass flow rate. The calculated velocities at each probe location are then summed, and the flow rate as indicated by the rake is obtained. The flow coefficient to be used with the particular rake geometry is subsequently obtained by dividing the ideal flow rate by the rake-indicated flow rate. Flow coefficients ranged from 0.903 for one probe placed at a radius dividing two equal areas to 0.984 for a 10-probe area-weighted rake. Flow coefficients were not a strong function of annulus hub-to-tip radius ratio for rakes with three or more probes. The semiempirical method used to generate the turbulent velocity profiles is described in detail.
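
    The flow-coefficient calculation described above is easy to reproduce in outline. The sketch below is a hedged illustration: the paper's semiempirical profile is not reproduced here, so a 1/7-power-law profile scaled by distance to the nearest wall is substituted, and the annulus radii and probe count are invented.

        import numpy as np

        def velocity(r, r_hub, r_tip, n=7.0):
            # turbulent power-law profile based on distance to nearest wall
            y = np.minimum(r - r_hub, r_tip - r)
            return (y / ((r_tip - r_hub) / 2.0)) ** (1.0 / n)

        r_hub, r_tip = 0.05, 0.10                      # m (illustrative)
        r = np.linspace(r_hub, r_tip, 20001)
        dr = r[1] - r[0]
        ideal = np.sum(velocity(r, r_hub, r_tip) * 2 * np.pi * r) * dr

        # area-weighted rake: probes at centroids of ten equal-area rings
        areas = np.linspace(r_hub**2, r_tip**2, 11)
        r_probe = np.sqrt(0.5 * (areas[:-1] + areas[1:]))
        indicated = np.sum(velocity(r_probe, r_hub, r_tip)
                           * np.pi * (areas[1:] - areas[:-1]))

        print("flow coefficient =", ideal / indicated)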

  5. Method and apparatus for simultaneous determination of fluid mass flow rate, mean velocity and density

    International Nuclear Information System (INIS)

    Hamel, W.R.

    1984-01-01

    This invention relates to a new method and a new apparatus for determining fluid mass flow rate and density. In one aspect of the invention, the fluid is passed through a straight cantilevered tube in which transient oscillation has been induced, thus generating Coriolis damping forces on the tube. The decay rate and frequency of the resulting damped oscillation are measured, and the fluid mass flow rate and density are determined therefrom. In another aspect of the invention, the fluid is passed through the cantilevered tube while an electrically powered device imparts steady-state harmonic excitation to the tube. This generates Coriolis tube-damping forces which are dependent on the mass flow rate of the fluid. Means are provided to respond to incipient flow-induced changes in the amplitude of vibration by changing the power input to the excitation device as required to sustain the original amplitude of vibration. The fluid mass flow rate and density are determined from the required change in power input. The invention provides stable, rapid, and accurate measurements. It does not require bending of the fluid flow.

  6. Environmental dose rate assessment of ITER using the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Karimian Alireza

    2014-01-01

    Exposure to radiation is one of the main sources of risk to staff employed in reactor facilities. The staff of a tokamak are exposed to a wide range of neutrons and photons around the tokamak hall. The International Thermonuclear Experimental Reactor (ITER) is a nuclear fusion engineering project and the most advanced experimental tokamak in the world. From the radiobiological point of view, assessment of ITER dose rates is particularly important. The aim of this study is to assess the amount of radiation in ITER during its normal operation in a radial direction from the plasma chamber to the tokamak hall. To achieve this goal, the ITER system and its components were simulated by the Monte Carlo method using the MCNPX 2.6.0 code. Furthermore, the equivalent dose rates of some radiosensitive organs of the human body were calculated by using the medical internal radiation dose phantom. Our study is based on deuterium-tritium plasma burning with 14.1 MeV neutron production and also photon radiation due to neutron activation. As our results show, the total equivalent dose rate outside the bioshield wall of the tokamak hall is about 1 mSv per year, which is less than the annual occupational dose rate limit during the normal operation of ITER. The equivalent dose rates of the radiosensitive organs show that the maximum dose rate belongs to the kidney. The data may help calculate how long staff can stay in such an environment before the equivalent dose rates reach the whole-body dose limits.

  7. Meta-analytic methods for pooling rates when follow-up duration varies: a case study.

    Science.gov (United States)

    Guevara, James P; Berlin, Jesse A; Wolf, Fredric M

    2004-07-12

    Meta-analysis can be used to pool rate measures across studies, but challenges arise when follow-up duration varies. Our objective was to compare different statistical approaches for pooling count data of varying follow-up times in terms of estimates of effect, precision, and clinical interpretability. We examined data from a published Cochrane Review of asthma self-management education in children. We selected two rate measures with the largest number of contributing studies: school absences and emergency room (ER) visits. We estimated fixed- and random-effects standardized weighted mean differences (SMD), stratified incidence rate differences (IRD), and stratified incidence rate ratios (IRR). We also fit Poisson regression models, which allowed for further adjustment for clustering by study. For both outcomes, all methods gave qualitatively similar estimates of effect in favor of the intervention. For school absences, SMD showed modest results in favor of the intervention (SMD -0.14, 95% CI -0.23 to -0.04). IRD implied that the intervention reduced school absences by 1.8 days per year (IRD -0.15 days/child-month, 95% CI -0.19 to -0.11), while IRR suggested a 14% reduction in absences (IRR 0.86, 95% CI 0.83 to 0.90). For ER visits, SMD showed a modest benefit in favor of the intervention (SMD -0.27, 95% CI: -0.45 to -0.09). IRD implied that the intervention reduced ER visits by 1 visit every 2 years (IRD -0.04 visits/child-month, 95% CI: -0.05 to -0.03), while IRR suggested a 34% reduction in ER visits (IRR 0.66, 95% CI 0.59 to 0.74). In Poisson models, adjustment for clustering lowered the precision of the estimates relative to stratified IRR results. For ER visits but not school absences, failure to incorporate study indicators resulted in a different estimate of effect (unadjusted IRR 0.77, 95% CI 0.59 to 0.99). Choice of method among the ones presented had little effect on inference but affected the clinical interpretability of the findings. Incidence rate
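
    The single-stratum IRD and IRR computations behind such pooling follow standard person-time formulas. A hedged sketch with invented counts (this is not the review's analysis code):

        import math

        def ird_irr(e1, t1, e0, t0, z=1.96):
            """IRD and IRR with 95% CIs from events e and person-time t."""
            r1, r0 = e1 / t1, e0 / t0
            ird, se_ird = r1 - r0, math.sqrt(e1 / t1**2 + e0 / t0**2)
            irr, se_log = r1 / r0, math.sqrt(1 / e1 + 1 / e0)
            return ((ird - z * se_ird, ird + z * se_ird),
                    (irr * math.exp(-z * se_log), irr * math.exp(z * se_log)))

        # e.g. 120 ER visits over 3000 child-months vs 180 over 2900
        print(ird_irr(120, 3000, 180, 2900))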

  8. Reproducibility of CSF quantitative culture methods for estimating rate of clearance in cryptococcal meningitis.

    Science.gov (United States)

    Dyal, Jonathan; Akampurira, Andrew; Rhein, Joshua; Morawski, Bozena M; Kiggundu, Reuben; Nabeta, Henry W; Musubire, Abdu K; Bahr, Nathan C; Williams, Darlisha A; Bicanic, Tihana; Larsen, Robert A; Meya, David B; Boulware, David R

    2016-05-01

    Quantitative cerebrospinal fluid (CSF) cultures provide a measure of disease severity in cryptococcal meningitis. The fungal clearance rate by quantitative cultures has become a primary endpoint for phase II clinical trials. This study determined the inter-assay accuracy of three different quantitative culture methodologies. Among 91 participants with meningitis symptoms in Kampala, Uganda, during August-November 2013, 305 CSF samples were prospectively collected from patients at multiple time points during treatment. Samples were simultaneously cultured by three methods: (1) the St. George's method, using a 100 mcl input volume of CSF with five 1:10 serial dilutions; (2) the AIDS Clinical Trials Group (ACTG) method, using 1000, 100 and 10 mcl input volumes and two 1:100 dilutions with 100 and 10 mcl input volume per dilution on seven agar plates; and (3) a 10 mcl calibrated loop of undiluted and 1:100 diluted CSF (loop). Quantitative culture values did not statistically differ between the St. George and ACTG methods (P = .09) but did between the St. George method and the 10 mcl loop. Correlation between methods was high (r ≥ 0.88). For detecting sterility, the ACTG method had the highest negative predictive value of 97% (91% St. George, 60% loop), but the ACTG method had occasional (~10%) difficulties in quantification due to colony clumping. For CSF clearance rate, the St. George and ACTG methods did not differ overall (mean -0.05 ± 0.07 log10 CFU/ml/day; P = .14) on a group level; however, individual-level clearance varied. The St. George and ACTG quantitative CSF culture methods produced comparable but not identical results. Quantitative cultures can inform treatment management strategies. © The Author 2016. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Improving the singles rate method for modeling accidental coincidences in high-resolution PET

    International Nuclear Information System (INIS)

    Oliver, Josep F; Rafecas, Magdalena

    2010-01-01

    Random coincidences ('randoms') are one of the main sources of image degradation in PET imaging. In order to correct for this effect, an accurate method to estimate the contribution of random events is necessary. This aspect becomes especially relevant for high-resolution PET scanners, where the highest image quality is sought and accurate quantitative analysis is undertaken. One common approach to estimating randoms is the so-called singles rate method (SR), widely used because of its good statistical properties. SR is based on the measurement of the singles rate in each detector element. However, recent studies suggest that SR systematically overestimates the correct random rate. This overestimation can be particularly marked for low energy thresholds (below 250 keV) used in some applications, and could entail a significant image degradation. In this work, we investigate the performance of SR as a function of the activity, the geometry of the source and the energy acceptance window used. We also investigate the performance of an alternative method, which we call 'singles trues' (ST), that improves SR by properly modeling the presence of true coincidences in the sample. Nevertheless, in any real data acquisition the knowledge of which singles are members of a true coincidence is lost. Therefore, we propose an iterative method, STi, that provides an estimation based on ST but which only requires the knowledge of measurable quantities: prompts and singles. Due to inter-crystal scatter, for wide energy windows ST only partially corrects SR overestimations. While SR deviations are in the range 86-300% (depending on the source geometry), the ST deviations are systematically smaller and contained in the range 4-60%. STi fails to reproduce the ST results, although for not too high activities the deviation with respect to ST is only a few percent. For conventional energy windows, i.e. those without inter-crystal scatter, the ST method corrects the SR overestimations, and deviations from
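
    The SR estimate itself is a one-line formula: with coincidence window τ and singles rates S_i, the accidental-coincidence rate for detector pair (i, j) is R_ij = 2τ·S_i·S_j. A minimal sketch (window and rates are invented):

        import numpy as np

        def randoms_sr(singles, tau):
            # SR estimate R_ij = 2 * tau * S_i * S_j for all detector pairs
            s = np.asarray(singles, dtype=float)
            return 2.0 * tau * np.outer(s, s)

        print(randoms_sr([12e3, 9e3, 15e3], tau=6e-9))  # 6 ns window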

  10. Application of accelerated evaluation method of alteration temperature and constant dose rate irradiation on bipolar linear regulator LM317

    International Nuclear Information System (INIS)

    Deng Wei; Wu Xue; Wang Xin; Zhang Jinxin; Zhang Xiaofu; Zheng Qiwen; Ma Wuying; Lu Wu; Guo Qi; He Chengfa

    2014-01-01

    With different irradiation methods, including high dose rate irradiation, low dose rate irradiation, alteration temperature with constant dose rate irradiation, and the US military standard constant high temperature with constant dose rate irradiation, the ionizing radiation responses of the bipolar linear regulator LM317 from three different companies were investigated under operating and zero biases. The results show that, compared with the constant high temperature and constant dose rate irradiation method, the alteration temperature and constant dose rate irradiation method can not only rapidly and accurately evaluate the dose rate effect of the three bipolar linear regulators, but can also well simulate the damage of low dose rate irradiation. These experimental results make the alteration temperature and constant dose rate irradiation method applicable to bipolar linear regulators. (authors)

  11. 76 FR 36857 - Federal Employees Health Benefits Program: New Premium Rating Method for Most Community Rated Plans

    Science.gov (United States)

    2011-06-23

    ... contingency reserve accounts or factored into reduced premiums for enrollees in the following plan year. Under...-TCR community rated plans' contingency reserves. Issuers failing to meet the FEHB-specific MLR... definition of medical loss ratio by HHS in December 2010, upon which this rule relies. Further, plans have...

  12. 77 FR 19522 - Federal Employees Health Benefits Program: New Premium Rating Method for Most Community Rated Plans

    Science.gov (United States)

    2012-04-02

    ... reconciliation process itself once applied. Additionally, the commenter would like to understand the roles of OPM... methodology unless doing so conflicts with the FEHB contract. Changes Made Since the Interim Final Rule Was... Accounting and price adjustment. * * * * * (b) * * * (4) If rates are determined by comparison with the FEHB...

  13. Methods for monitoring corals and crustose coralline algae to quantify in-situ calcification rates

    Science.gov (United States)

    Morrison, Jennifer M.; Kuffner, Ilsa B.; Hickey, T. Don

    2013-01-01

    The potential effect of global climate change on calcifying marine organisms, such as scleractinian (reef-building) corals, is becoming increasingly evident. Understanding the process of coral calcification and establishing baseline calcification rates are necessary to detect future changes in growth resulting from climate change or other stressors. Here we describe the methods used to establish a network of calcification-monitoring stations along the outer Florida Keys Reef Tract in 2009. In addition to detailing the initial setup and periodic monitoring of the calcification stations, we discuss the utility and success of our design and offer suggestions for future deployments. Stations were designed such that whole coral colonies were securely attached to fixed apparatuses (n = 10 at each site) on the seafloor but could also be easily removed and reattached as needed for periodic weighing. Corals were weighed every 6 months, using the buoyant weight technique, to determine calcification rates in situ. Sites were visited in May and November to obtain winter and summer rates, respectively, and to identify seasonal patterns in calcification. Calcification rates of the crustose coralline algal community were also measured by affixing commercially available plastic tiles, deployed vertically, at each station. Colonization by invertebrates and fleshy algae on the tiles was low, indicating relative specificity for the crustose coralline algal community. We also describe a new, nonlethal technique for sampling the corals, used following the completion of the monitoring period, in which two slabs were obtained from the center of each colony. Sampled corals were reattached to the seafloor, and most had completely recovered within 6 months. The station design and sampling methods described herein provide an effective approach to assessing coral and crustose coralline algal calcification rates across time and space, offering the ability to quantify the potential effects of

  14. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models.

    Science.gov (United States)

    Nicholl, Jon; Jacques, Richard M; Campbell, Michael J

    2013-10-29

    Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
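
    In outline, DRS scores every admission with the casemix model, bins admissions by predicted risk, and takes a weighted sum of each centre's observed event rates per bin. A hedged sketch with simulated data (the bin edges and weights are invented, not those of the paper):

        import numpy as np

        def drs_rate(risk, event, bins, weights):
            # weighted sum of risk-category-specific event rates
            idx = np.digitize(risk, bins) - 1
            rates = np.array([event[idx == k].mean() if np.any(idx == k) else 0.0
                              for k in range(len(bins) - 1)])
            return float(np.sum(weights * rates))

        rng = np.random.default_rng(0)
        risk = rng.uniform(0, 0.3, 5000)          # model-predicted risks
        event = rng.random(5000) < risk           # simulated outcomes
        bins = np.array([0.0, 0.05, 0.1, 0.2, 0.3])
        weights = np.array([0.4, 0.3, 0.2, 0.1])  # reference population mix
        print(drs_rate(risk, event, bins, weights))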

  15. Methods for measuring specific rates of mercury methylation and degradation and their use in determining factors controlling net rates of mercury methylation

    International Nuclear Information System (INIS)

    Ramlal, P.S.; Rudd, J.W.M.; Hecky, R.E.

    1986-01-01

    A method was developed to estimate specific rates of demethylation of methyl mercury in aquatic samples by measuring the volatile ¹⁴C end products of ¹⁴CH₃HgI demethylation. This method was used in conjunction with a ²⁰³Hg²⁺ radiochemical method which determines specific rates of mercury methylation. Together, these methods enabled us to examine some factors controlling the net rate of mercury methylation. The methodologies were field tested using lake sediment samples from a recently flooded reservoir in the Southern Indian Lake system, which had developed a mercury contamination problem in fish. Ratios of the specific rates of methylation/demethylation were calculated. The highest ratios of methylation/demethylation occurred in the flooded shorelines of Southern Indian Lake. These results provide an explanation for the observed increases in methyl mercury concentrations in fish after flooding

  16. Input data preprocessing method for exchange rate forecasting via neural network

    Directory of Open Access Journals (Sweden)

    Antić Dragan S.

    2014-01-01

    The aim of this paper is to present a method for neural network input parameter selection and preprocessing. The purpose of the network is to forecast foreign exchange rates using artificial intelligence. Two data sets were formed for two different economic systems. Each system is represented by six categories with 70 economic parameters, which are used in the analysis. Reduction of these parameters within each category was performed using the principal component analysis method. Component interdependencies were established and relations between them were formed. The newly formed relations were used to create the input vectors of a neural network. A multilayer feed-forward neural network was formed and trained using batch training. Finally, simulation results are presented and it is concluded that the input data preparation method is an effective way of preprocessing neural network data. [Project of the Ministry of Science of the Republic of Serbia, no. TR 35005, no. III 43007 and no. III 44006]
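
    As a hedged sketch of the preprocessing idea (the paper's 70 indicators and category structure are replaced by random data here), each category can be compressed with PCA before forming the network's input vectors:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        category = rng.normal(size=(500, 12))   # 500 periods x 12 indicators

        X = StandardScaler().fit_transform(category)
        pca = PCA(n_components=0.95)            # keep 95% of the variance
        inputs = pca.fit_transform(X)           # neural-network input vectors
        print(inputs.shape, pca.explained_variance_ratio_.round(3))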

  17. Primal Decomposition-Based Method for Weighted Sum-Rate Maximization in Downlink OFDMA Systems

    Directory of Open Access Journals (Sweden)

    Weeraddana Chathuranga

    2010-01-01

    We consider the weighted sum-rate maximization problem in downlink Orthogonal Frequency Division Multiple Access (OFDMA) systems. Motivated by the increasing popularity of OFDMA in future wireless technologies, a low-complexity suboptimal resource allocation algorithm is obtained for the joint optimization of multiuser subcarrier assignment and power allocation. The algorithm is based on an approximated primal decomposition method, inspired by exact primal decomposition techniques. The original nonconvex optimization problem is divided into two subproblems which can be solved independently. Numerical results are provided to compare the performance of the proposed algorithm to Lagrange-relaxation-based suboptimal methods as well as to the optimal exhaustive-search-based method. Despite its reduced computational complexity, the proposed algorithm provides close-to-optimal performance.

  18. Application to Determination of Scholarship Worthiness Using Simple Multi Attribute Rating Technique and Merkle Hellman Method

    Directory of Open Access Journals (Sweden)

    Dicky Nofriansyah

    2017-10-01

    This research focused on explaining how the concept of the simple multi-attribute rating technique (SMART) method can be used in a desktop-based decision support system to solve multi-criteria selection problems, especially scholarships. The Merkle-Hellman method is used to secure the results of the choices made by the SMART process. The determination of PPA and BBP-PPA scholarship recipients at STMIK Triguna Dharma is a problem because it takes a long time to reach a decision. By adopting the SMART method, the application can make decisions quickly and precisely. The expected result of this research is that the application can help overcome the problems concerning the determination of PPA and BBP-PPA scholarship recipients, as well as assist the Student Affairs office of STMIK Triguna Dharma in making decisions quickly and accurately
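
    The SMART scoring step reduces to a weighted sum of normalized criterion utilities. A minimal sketch with invented weights and applicant scores (the Merkle-Hellman encryption layer is omitted):

        import numpy as np

        weights = np.array([40.0, 30.0, 20.0, 10.0])
        weights /= weights.sum()                     # normalize to sum to 1
        # rows: applicants; columns: criterion utilities on a 0-100 scale
        scores = np.array([[80, 60, 90, 70],
                           [70, 90, 60, 80],
                           [90, 70, 70, 60]], dtype=float)
        total = scores @ weights
        print("best applicant:", int(np.argmax(total)), total.round(1))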

  19. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

    The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs for controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.
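
    To make the FDP/FDR distinction concrete, the sketch below simulates one multiple-testing experiment, applies the Benjamini-Hochberg procedure, and reports the realized FDP; averaging the FDP over many runs estimates the FDR. All parameters are invented, and this is not the paper's design procedure.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        m, m1, alpha = 1000, 100, 0.05            # tests, true signals, level
        z = rng.normal(size=m)
        z[:m1] += 3.0                             # shift the true signals
        p = 2 * norm.sf(np.abs(z))

        order = np.argsort(p)
        thresh = alpha * np.arange(1, m + 1) / m  # BH step-up thresholds
        hits = np.nonzero(p[order] <= thresh)[0]
        k = int(hits.max()) + 1 if hits.size else 0
        rejected = order[:k]
        fdp = float(np.mean(rejected >= m1)) if k else 0.0
        print(f"rejections: {k}, realized FDP: {fdp:.3f}")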

  20. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    Science.gov (United States)

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717

  1. Dose Rate Experiment at JET for Benchmarking the Calculation Direct One Step Method

    International Nuclear Information System (INIS)

    Angelone, M.; Petrizzi, L.; Pillon, M.; Villari, R.; Popovichev, S.

    2006-01-01

    Neutrons produced by D-D and D-T plasmas induce the activation of tokamak materials and components. The development of reliable methods to assess dose rates is a key issue for maintaining and operating nuclear machines, in normal and off-normal conditions. In the frame of the EFDA Fusion Technology work programme, a computational tool based upon the MCNP Monte Carlo code has been developed to predict the dose rate after shutdown: it is called the Direct One Step Method (D1S). The D1S is an innovative approach in which the decay gammas are coupled to the neutrons as in the prompt case and are transported in one single step in the same run. Benchmarking this new tool with experimental data taken in a complex geometry like that of a tokamak is a fundamental step in testing the reliability of the D1S method. A dedicated benchmark experiment was proposed for the 2005-2006 experimental campaign of JET. Two irradiation positions were selected for the benchmark: one inner position inside the vessel, not far from the plasma, called the 2 upper irradiation end (IE2), where the neutron fluence is relatively high; and a second position just outside a vertical port in an external position (EX), where the neutron flux is lower and the dose rate to be measured is not very far from the residual background. Passive detectors are used for the in-vessel measurements: high-sensitivity thermoluminescent dosimeters (TLDs), GR-200A (natural LiF), which ensure measurements down to environmental dose levels. An active detector of Geiger-Muller (GM) type is used for the out-of-vessel dose rate measurement. Before their use, the detectors were calibrated in a secondary gamma-ray standard (Cs-137 and Co-60) facility in terms of air kerma. The background measurement was carried out in the period July-September 2005 in the outside position EX using the GM tube, and in September 2005 inside the vacuum vessel using TLD detectors located in the 2 upper irradiation end IE2. In the present work

  2. Assessment of Estimation Methods for Stage-Discharge Rating Curve in Rippled Bed Rivers

    Directory of Open Access Journals (Sweden)

    P. Maleki

    2016-02-01

    in a flume located at the hydraulic laboratory of Shahrekord University, Iran. Bass (1993) [reported in Joep (1999)] determined an empirical relation between median grain size, D50, and equilibrium ripple length, l: l = 75.4 (log D50) + 197 (Eq. 1), where l and D50 are both given in millimeters. Raudkivi (1997) [reported in Joep (1999)] proposed another empirical relation to estimate the ripple length, with D50 given in millimeters: l = 245 (D50)^0.35 (Eq. 2). Flemming (1988) [reported in Joep (1999)] derived an empirical relation between mean ripple length and ripple height based on a large dataset: hm = 0.0677 l^0.8098 (Eq. 3), where hm is the mean ripple height (m) and l is the mean ripple length (m). Ikeda and Asaeda (1983) investigated the characteristics of flow over ripples. They found that there are separation areas and vortices in the lee of ripples and that maximum turbulent diffusion occurs in these areas. Materials and Methods: In this research, the effects of two different types of ripples on the hydraulic characteristics of flow were studied experimentally in a flume located at the hydraulic laboratory of Shahrekord University, Iran. The flume is 0.4 m in width and depth and 12 m long. In total, 48 tests with slopes varying from 0.0005 to 0.003 and discharges of 10 to 40 l/s were conducted. Velocity and shear stress were measured using an Acoustic Doppler Velocimeter (ADV). Two different types of ripples (parallel and flake ripples) were used. The stage-discharge rating curve was then estimated in different ways, such as Einstein-Barbarossa, Shen and White et al. Results and Discussion: Statistical methods were used to analyse the test results. The White method had the maximum value of α, RMSE and average absolute error compared with the other methods. The Einstein method underestimated the fitted discharge. The evaluation of stage-discharge rating curve methods based on the results obtained from this research showed that the Shen method had the highest accuracy for developing the
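
    The three empirical ripple relations quoted above translate directly into code; note that for fine sand (D50 ≈ 0.25 mm) Eqs. (1) and (2) give mutually consistent lengths of about 150 mm, which supports the reconstruction of the garbled equations. The example values are invented.

        import math

        def ripple_length_bass(d50_mm):       # Eq. (1), result in mm
            return 75.4 * math.log10(d50_mm) + 197.0

        def ripple_length_raudkivi(d50_mm):   # Eq. (2), result in mm
            return 245.0 * d50_mm ** 0.35

        def ripple_height_flemming(l_m):      # Eq. (3), lengths in m
            return 0.0677 * l_m ** 0.8098

        print(ripple_length_bass(0.25), ripple_length_raudkivi(0.25))
        print(ripple_height_flemming(0.15))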

  3. Life cycle and population growth rate of Caenorhabditis elegans studied by a new method.

    Science.gov (United States)

    Muschiol, Daniel; Schroeder, Fabian; Traunspurger, Walter

    2009-05-16

    The free-living nematode Caenorhabditis elegans is the predominant model organism in biological research, being used by a huge number of laboratories worldwide. Many researchers have evaluated life-history traits of C. elegans in investigations covering quite different aspects such as ecotoxicology, inbreeding depression and heterosis, dietary restriction/supplementation, mutations, and ageing. Such traits include juvenile growth rates, age at sexual maturity, adult body size, age-specific fecundity/mortality, total reproduction, mean and maximum lifespan, and intrinsic population growth rates. However, we found that in life-cycle experiments care is needed regarding protocol design. Here, we test a recently developed method that overcomes some problems associated with traditional cultivation techniques. In this fast and yet precise approach, single individuals are maintained within hanging drops of semi-fluid culture medium, allowing the simultaneous investigation of various life-history traits at any desired degree of accuracy. Here, the life cycles of wild-type C. elegans strains N2 (Bristol, UK) and MY6 (Münster, Germany) were compared at 20°C with 5 × 10⁹ Escherichia coli ml⁻¹ as food source. High-resolution life tables and fecundity schedules of the two strains are presented. Though isolated 700 km and 60 years apart from each other, the two strains barely differed in life-cycle parameters. For strain N2 (n = 69), the intrinsic rate of natural increase (rm, d⁻¹), calculated according to the Lotka equation, was 1.375, the net reproductive rate (R0) 291, the mean generation time (T) 90 h, and the minimum generation time (Tmin) 73.0 h. The corresponding values for strain MY6 (n = 72) were rm = 1.460, R0 = 289, T = 84 h, and Tmin = 67.3 h. Peak egg-laying rates in both strains exceeded 140 eggs d⁻¹. Juvenile and early-adulthood mortality was negligible. Strain N2 lived, on average, for 16.7 d, while strain MY6 died 2 days earlier; however
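
    The intrinsic rate of increase reported above comes from the Euler-Lotka equation, 1 = Σ exp(−r·x)·l(x)·m(x). A hedged sketch with an invented life table (not the study's data) shows the numerical solution:

        import numpy as np
        from scipy.optimize import brentq

        age = np.array([3.0, 3.5, 4.0, 4.5, 5.0, 5.5])        # days
        lx = np.array([0.99, 0.99, 0.98, 0.98, 0.97, 0.96])   # survivorship
        mx = np.array([40.0, 70.0, 70.0, 60.0, 35.0, 15.0])   # eggs per bin

        def euler_lotka(r):
            return np.sum(np.exp(-r * age) * lx * mx) - 1.0

        r_m = brentq(euler_lotka, 0.01, 5.0)   # root of the Lotka equation
        print(f"r_m = {r_m:.3f} per day")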

  4. Life cycle and population growth rate of Caenorhabditis elegans studied by a new method

    Directory of Open Access Journals (Sweden)

    Schroeder Fabian

    2009-05-01

    Background: The free-living nematode Caenorhabditis elegans is the predominant model organism in biological research, being used by a huge number of laboratories worldwide. Many researchers have evaluated life-history traits of C. elegans in investigations covering quite different aspects such as ecotoxicology, inbreeding depression and heterosis, dietary restriction/supplementation, mutations, and ageing. Such traits include juvenile growth rates, age at sexual maturity, adult body size, age-specific fecundity/mortality, total reproduction, mean and maximum lifespan, and intrinsic population growth rates. However, we found that in life-cycle experiments care is needed regarding protocol design. Here, we test a recently developed method that overcomes some problems associated with traditional cultivation techniques. In this fast and yet precise approach, single individuals are maintained within hanging drops of semi-fluid culture medium, allowing the simultaneous investigation of various life-history traits at any desired degree of accuracy. Here, the life cycles of wild-type C. elegans strains N2 (Bristol, UK) and MY6 (Münster, Germany) were compared at 20°C with 5 × 10⁹ Escherichia coli ml⁻¹ as food source. Results: High-resolution life tables and fecundity schedules of the two strains are presented. Though isolated 700 km and 60 years apart from each other, the two strains barely differed in life-cycle parameters. For strain N2 (n = 69), the intrinsic rate of natural increase (rm, d⁻¹), calculated according to the Lotka equation, was 1.375, the net reproductive rate (R0) 291, the mean generation time (T) 90 h, and the minimum generation time (Tmin) 73.0 h. The corresponding values for strain MY6 (n = 72) were rm = 1.460, R0 = 289, T = 84 h, and Tmin = 67.3 h. Peak egg-laying rates in both strains exceeded 140 eggs d⁻¹. Juvenile and early-adulthood mortality was negligible. Strain N2 lived, on average, for 16.7 d, while strain MY6 died 2 days

  5. Standard Test Method for Measuring Fast-Neutron Reaction Rates by Radioactivation of Titanium

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method covers procedures for measuring reaction rates by the activation reactions 46Ti(n,p)46Sc + 47Ti(n,np)46Sc. Note 1—Since the cross section for the (n,np) reaction is relatively small for energies less than 12 MeV and is not easily distinguished from that of the (n,p) reaction, this test method will refer to the (n,p) reaction only. 1.2 The reaction is useful for measuring neutrons with energies above approximately 4.4 MeV and for irradiation times up to about 250 days (for longer irradiations, see Practice E 261). 1.3 With suitable techniques, fission-neutron fluence rates above 10⁹ cm⁻²·s⁻¹ can be determined. However, in the presence of a high thermal-neutron fluence rate, 46Sc depletion should be investigated. 1.4 Detailed procedures for other fast-neutron detectors are referenced in Practice E 261. 1.5 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.6 This standard does not purport to address all...

  6. ANSI/ASHRAE/IES Standard 90.1-2010 Performance Rating Method Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    Goel, Supriya [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-05-01

    This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2010 (Standard 90.1-2010). The PRM is used for rating the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM. It should be noted that this document was created independently from ASHRAE and SSPC 90.1 and is not sanctioned nor approved by either of those entities. Potential users of this manual include energy modelers, software developers, and implementers of "beyond code" energy programs. Energy modelers using ASHRAE Standard 90.1-2010 for beyond-code programs can use this document as a reference manual for interpreting requirements of the Performance Rating Method. Software developers developing tools for automated creation of the baseline model can use this reference manual as a guideline for developing the rules for the baseline model.

  7. The Role of Noble Metal Addition Methods on BWR Shut Down Dose Rates

    International Nuclear Information System (INIS)

    Cowan, Robert L.; Garcia Susan, E.

    2012-09-01

    Noble metal addition technology was developed for the BWR as a means of establishing low electrochemical corrosion potentials (ECP) on structural materials to mitigate intergranular stress corrosion cracking (IGSCC). When the reactor water molar ratio of H₂/(O₂+H₂O₂) is > 2 on noble-metal-treated surfaces, the resulting ECP is near -500 mV (SHE), well into the mitigation range. This ratio can be achieved in most areas of the reactor with feedwater hydrogen additions in the range of 0.2 mg/kg, a condition that does not increase the radiation level in the main steam, a side effect of conventional hydrogen water chemistry (HWC). The resulting low ECP on the surface of stainless steel piping and components results in a change of the stable corrosion film to a spinel structure. Since it is the ⁶⁰Co incorporated into the corrosion film that is the primary source term of shutdown dose rates in BWRs, the structure and composition of the film can have a large influence on the resulting dose rates. The results of the first generation of noble metal technology, noble metal chemical addition (NMCA), showed that the reactor water ratio of ⁶⁰Co(s)/Zn(s) was a key parameter in determining shutdown dose rate values. This paper will review that history and provide a mechanistic understanding of how initial post-NMCA dose rates are established and change with time. On-line noble metal chemical addition (OLNC) is the second generation of noble metal technology. The method utilizes the on-line injection of dilute Na₂Pt(OH)₆ into the feedwater over a period of approximately 10 days. The first application of OLNC occurred at a European reactor in July of 2005, and to date over 20 BWRs have applied the technology, with many more applications scheduled. It is expected that OLNC will become the de facto standard because it eliminates 60 hours of outage application time and it addresses the crack flanking concerns that can arise under certain conditions. Because both

  8. Sedimentation rate estimates in Sorsogon Bay, Philippines using 210Pb method

    International Nuclear Information System (INIS)

    Madrid, Jordan F.; Sta. Maria, Efren J.; Olivares, Ryan U.; Aniago, Ryan Joseph; Asa Anie Day DC; Dayaon, Jennyvi P.; Bulos, Adelina DM; Sombrito, Elvira Z.

    2011-01-01

    Sorsogon Bay has experienced a long history of recurring harmful algal blooms over the past few years. In an attempt to establish a chronology of events in the sediment layer, the lead-210 (²¹⁰Pb) dating method was utilized to estimate sedimentation rates in three selected areas along the bay. Based on the unsupported ²¹⁰Pb data and by applying the Constant Initial Concentration (CIC) model, the calculated sedimentation rates were 0.8, 1.3 and 1.8 cm yr⁻¹ for sediment cores collected near the coastal areas of Castilla (SO-01), Sorsogon City (SO-07) and Cadacan River (SO-03), respectively. High sedimentation rates were measured in sediment cores believed to be affected by frequent volcanic ash releases and in areas near human settlements combined with intensive farming and agricultural activities. The collected sediments exhibited non-uniform down-core values of dry bulk density and moisture content. This variation in measurements may reflect the general quality and composition of the sediment samples, i.e., the amount of organic matter and the grain size. The calculated sedimentation rates provide an overview of the sedimentation processes and reflect the land use pattern around the bay, which may help in understanding the history and distribution of material and nutrient inputs relative to the occurrence of harmful algal blooms in the sediment columns. (author)
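
    Under the CIC model, unsupported ²¹⁰Pb activity decays exponentially with depth, A(z) = A(0)·exp(−λz/s), so a linear fit of ln A versus depth yields the sedimentation rate s. A hedged sketch with invented core data (not the Sorsogon Bay measurements):

        import numpy as np

        LAM = 0.03114                                    # 210Pb decay, 1/yr
        depth = np.array([1, 3, 5, 7, 9, 11], float)     # cm
        activity = np.array([95, 88, 80, 74, 68, 62], float)  # unsupported, Bq/kg

        slope, _ = np.polyfit(depth, np.log(activity), 1)
        print(f"sedimentation rate ~ {-LAM / slope:.2f} cm/yr")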

  9. Standard Test Method for Measuring Fast-Neutron Reaction Rates by Radioactivation of Aluminum

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2011-01-01

    1.1 This test method covers procedures for measuring reaction rates by the activation reaction 27Al(n,α)24Na. 1.2 This activation reaction is useful for measuring neutrons with energies above approximately 6.5 MeV and for irradiation times up to about 2 days (for longer irradiations, see Practice E261). 1.3 With suitable techniques, fission-neutron fluence rates above 10⁶ cm⁻²·s⁻¹ can be determined. 1.4 Detailed procedures for other fast neutron detectors are referenced in Practice E261. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  10. Model Reference Adaptive Control of the Air Flow Rate of Centrifugal Compressor Using State Space Method

    International Nuclear Information System (INIS)

    Han, Jaeyoung; Jung, Mooncheong; Yu, Sangseok; Yi, Sun

    2016-01-01

    In this study, a model reference adaptive controller is developed to regulate the outlet air flow rate of a centrifugal compressor for an automotive supercharger. The centrifugal compressor model is developed using an analytical method to predict the transient operating behavior, and the designed model is validated with experimental data to confirm its accuracy. The model reference adaptive control structure consists of a compressor model and an MRAC (model reference adaptive control) mechanism. Feedback control is not robust to variation of the system parameters, but the applied adaptive control is robust even if a system parameter changes. As a result, the MRAC regulated the air flow rate to the reference value. The MRAC was also found to be more robust than feedback control when a system parameter changes.

  11. Model Reference Adaptive Control of the Air Flow Rate of Centrifugal Compressor Using State Space Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jaeyoung; Jung, Mooncheong; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of); Yi, Sun [North Carolina A and T State Univ., Raleigh (United States)

    2016-08-15

    In this study, a model reference adaptive controller is developed to regulate the outlet air flow rate of a centrifugal compressor for an automotive supercharger. The centrifugal compressor model is developed using an analytical method to predict the transient operating behavior, and the designed model is validated with experimental data to confirm its accuracy. The model reference adaptive control structure consists of a compressor model and an MRAC (model reference adaptive control) mechanism. Feedback control is not robust to variation of the system parameters, but the applied adaptive control is robust even if a system parameter changes. As a result, the MRAC regulated the air flow rate to the reference value. The MRAC was also found to be more robust than feedback control when a system parameter changes.

  12. ANSI/ASHRAE/IES Standard 90.1-2016 Performance Rating Method Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    Goel, Supriya [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Eley, Charles [Eley and Associates, Hobe Sound, FL (United States)

    2017-09-29

    This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2016 (Standard 90.1-2016). The PRM can be used to demonstrate compliance with the standard and to rate the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. Use of the PRM for demonstrating compliance with Standard 90.1 is a new feature of the 2016 edition. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM.

  13. Effects of cooking methods and starch structures on starch hydrolysis rates of rice.

    Science.gov (United States)

    Reed, Michael O; Ai, Yongfeng; Leutcher, Josh L; Jane, Jay-lin

    2013-07-01

    This study aimed to understand effects of different cooking methods, including steamed, pilaf, and traditional stir-fried, on starch hydrolysis rates of rice. Rice grains of 3 varieties, japonica, indica, and waxy, were used for the study. Rice starch was isolated from the grain and characterized. Amylose contents of starches from japonica, indica, and waxy rice were 13.5%, 18.0%, and 0.9%, respectively. The onset gelatinization temperature of indica starch (71.6 °C) was higher than that of the japonica and waxy starch (56.0 and 56.8 °C, respectively). The difference was attributed to longer amylopectin branch chains of the indica starch. Starch hydrolysis rates and resistant starch (RS) contents of the rice varieties differed after they were cooked using different methods. Stir-fried rice displayed the least starch hydrolysis rate followed by pilaf rice and steamed rice for each rice variety. RS contents of freshly steamed japonica, indica, and waxy rice were 0.7%, 6.6%, and 1.3%, respectively; those of rice pilaf were 12.1%, 13.2%, and 3.4%, respectively; and the stir-fried rice displayed the largest RS contents of 15.8%, 16.6%, and 12.1%, respectively. Mechanisms of the large RS contents of the stir-fried rice were studied. With the least starch hydrolysis rate and the largest RS content, stir-fried rice would be a desirable way of preparing rice for food to reduce postprandial blood glucose and insulin responses and to improve colon health of humans. © 2013 Institute of Food Technologists®

  14. A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos

    Directory of Open Access Journals (Sweden)

    Chen Wang

    2018-05-01

    Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to human eyes but can be captured by digital cameras. Several approaches have been proposed, such as signal processing and machine learning. However, these methods have been compared on different datasets, and there is consequently no consensus on method performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 until the present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public database MAHNOB-HCI. The results found in this article are limited to the MAHNOB-HCI dataset. Results show that the extracted face skin area contains more BVP information. Blind source separation and peak detection methods are more robust to head motions for estimating HR.
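
    The final HR-computation stage common to these pipelines can be illustrated in a few lines: band-limit the extracted BVP trace and take the dominant spectral peak in the plausible heart-rate band. The synthetic trace below stands in for a real face recording, so this is a sketch of the stage, not any one surveyed method.

        import numpy as np

        fs = 30.0                                  # camera frame rate (Hz)
        t = np.arange(0, 30, 1 / fs)
        rng = np.random.default_rng(3)
        bvp = np.sin(2 * np.pi * 1.2 * t) + 0.5 * rng.standard_normal(t.size)

        freqs = np.fft.rfftfreq(bvp.size, 1 / fs)
        power = np.abs(np.fft.rfft(bvp - bvp.mean())) ** 2
        band = (freqs >= 0.7) & (freqs <= 4.0)     # 42-240 bpm
        hr_bpm = 60.0 * freqs[band][np.argmax(power[band])]
        print(f"estimated HR: {hr_bpm:.0f} bpm")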

  15. Application of the constant rate of pressure change method to improve jet pump performance

    International Nuclear Information System (INIS)

    Long, X P; Yang, X L

    2012-01-01

    This paper adopts a new method, named the constant rate of pressure change (CRPC), to improve jet pump performance. The main contribution of this method is that the diffuser generates a uniform pressure gradient. The performance of the jet pump with new diffusers designed by the CRPC method, obtained by CFD methods, was compared with that of the jet pump with traditional conical diffusers. It is found that the CRPC diffuser indeed produces a linear pressure increase. The higher friction loss and the separation decrease the CRPC diffuser efficiency and thus lower the pump efficiency. The pump with shorter throats has higher efficiency at small flow ratios, while its efficiency is lower than that of the original pump at larger flow ratios, and the peak efficiency of the pumps with a throat length of 5-6 Dt is higher than that of pumps with other throat lengths. When the throat length is less than 4 Dt, the CRPC diffuser efficiency is higher than that of the conical diffuser. The CRPC method could also be used to design the nozzle and for other situations where the pressure needs to change gradually.

  16. Gas reserves, discoveries and production

    International Nuclear Information System (INIS)

    Saniere, A.

    2006-01-01

    Between 2000 and 2004, new discoveries, located mostly in the Asia/Pacific region, permitted a replacement rate of 71% of produced reserves. The Middle East and the offshore sector represent a growing proportion of world gas production. Non-conventional gas resources are substantial but are not exploited to any significant extent, except in the United States, where they account for 30% of U.S. gas production. (author)

  17. Method and apparatus for controlling the flow rate of mercury in a flow system

    Science.gov (United States)

    Grossman, Mark W.; Speer, Richard

    1991-01-01

    A method for increasing the mercury flow rate to a photochemical mercury enrichment reactor utilizing an entrainment system comprises the steps of passing a carrier gas over a pool of mercury maintained at a first temperature T1, wherein the carrier gas entrains mercury vapor; passing said mercury-vapor-entrained carrier gas to a second temperature zone T2, having a temperature less than T1, to condense said entrained mercury vapor, thereby producing a saturated Hg condition in the carrier gas; and passing said saturated Hg carrier gas to said photochemical enrichment reactor.

  18. Supernovae Discovery Efficiency

    Science.gov (United States)

    John, Colin

    2018-01-01

    Abstract: We present supernova (SN) search efficiency measurements for recent Hubble Space Telescope (HST) surveys. Efficiency is a key component of any search and an important parameter as a correction factor for SN rates. To achieve an accurate value for efficiency, many supernovae need to be discoverable in surveys. This cannot be achieved from real SN only, due to their scarcity, so fake SN are planted. These fake supernovae, built with realism in mind, yield an understanding of efficiency as a function of position relative to other celestial objects and of brightness. To improve realism, we built a more accurate model of supernovae using a point-spread function. The next improvement to realism is planting these objects close to galaxies and with various parameters of brightness, magnitude, local galactic brightness and redshift. Once these are planted, a very accurate SN is visible and discoverable by the searcher. It is very important to find the factors that affect this discovery efficiency. Exploring the factors that affect detection yields a more accurate correction factor. Further inquiries into efficiency give us a better understanding of image processing, searching techniques and survey strategies, and result in an overall higher likelihood of finding these events in future surveys with the Hubble, James Webb, and WFIRST telescopes. After efficiency is discovered and refined with many unique surveys, it factors into measurements of SN rates versus redshift. By comparing SN rates versus redshift against the star formation rate, we can test models to determine how long star systems take from the point of inception to explosion (delay time distribution). This delay time distribution is compared to SN progenitor models to get an accurate idea of what these stars were like before their deaths.

  19. Reviewing and piloting methods for decreasing discount rates; someone, somewhere in time.

    Science.gov (United States)

    Parouty, Mehraj B Y; Krooshof, Daan G M; Westra, Tjalke A; Pechlivanoglou, Petros; Postma, Maarten J

    2013-08-01

    There has been substantial debate on the need for decreasing discount rates for monetary and health gains in economic evaluations. Next to the discussion on differential discounting, a way to identify the need for such discounting strategies is through eliciting time preferences for monetary and health outcomes. In this article, the authors investigate the perceived time preference for money and health gains through a pilot survey of Dutch university students, using previously suggested methods based on functional forms. The formal objectives of the study were to review such existing methods and to pilot them on a convenience sample using a questionnaire designed for this specific purpose. Indeed, a negative relation between the time of delay and the variance of the discounting rate was observed for all models. This study was intended as a pilot for a large-scale population-based investigation, using the findings from this pilot on the wording of the questionnaire, interpretation, scope and analytic framework.
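
    The functional forms at issue can be contrasted in a few lines; the sketch below compares constant (exponential) discounting with a one-parameter hyperbolic form, with illustrative rates that are not those elicited in the survey.

        def exponential(t, rate=0.04):
            # constant-rate discount factor for a gain t years away
            return (1.0 + rate) ** -t

        def hyperbolic(t, k=0.04):
            # Mazur-style one-parameter hyperbolic form; the implied
            # annual rate decreases with delay
            return 1.0 / (1.0 + k * t)

        for t in (1, 10, 50):    # the two factors diverge with delay
            print(t, round(exponential(t), 3), round(hyperbolic(t), 3))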

  20. A Method for Harmonic Sources Detection based on Harmonic Distortion Power Rate

    Science.gov (United States)

    Lin, Ruixing; Xu, Lin; Zheng, Xian

    2018-03-01

    Harmonic source detection at the point of common coupling is an essential step for determining harmonic contributions and for harmonic mitigation. In this paper, a harmonic distortion power rate index based on IEEE Std 1459-2010 is proposed for harmonic source location. A method based on harmonic distortion power alone is not suitable when the background harmonic is large. To solve this problem, a threshold is determined from prior information: when the harmonic distortion power is larger than the threshold, the customer side is considered the main harmonic source; otherwise, the utility side is. A simple model of a public power system was built in MATLAB/Simulink, and field test results on typical harmonic loads verified the effectiveness of the proposed method.
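
    The decision rule quoted in the abstract reduces to a single comparison. The sketch below assumes the harmonic distortion power has already been computed per IEEE Std 1459-2010 and the threshold has been chosen from prior (background-harmonic) information; both arguments are placeholders.

        def dominant_harmonic_source(distortion_power, threshold):
            """Threshold rule from the abstract: attribute the dominant harmonic
            source to the customer side when the harmonic distortion power at the
            point of common coupling exceeds the prior-derived threshold,
            otherwise to the utility side."""
            return "customer side" if distortion_power > threshold else "utility side"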

  1. Hybrid Methods and Atomistic Models to Explore Free Energies, Rates and Pathways of Protein Shape Changes

    DEFF Research Database (Denmark)

    Wang, Yong

    When I just joined the Lindor-Larsen group as a fresh PhD student, the Nobel Prize in Chemistry that year was awarded for the development of multiscale models for complex chemical systems" to prize the pioneering works of Martin Karplus, Michael Levitt and Arieh Warshel. As a computational......L), whose conformational dynamics however is still not fully understood. We found modern simulation methods and force elds are able to capture key aspects of how this protein changes its shape, paving the way for future studies for systems that are dicult to study experimentally. In Chapter 3, we...... revisited the problem of accurately quantifying the thermodynamics and kinetics, by following a novel route. In this route both of the forward and backward rates are calculated directly from MD simulations using a recently developed enhanced sampling method, called \\infrequent metadynamics...

  2. Standard Test Method for Measuring Heat-Transfer Rate Using a Thermal Capacitance (Slug) Calorimeter

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method describes the measurement of heat transfer rate using a thermal capacitance-type calorimeter which assumes one-dimensional heat conduction into a cylindrical piece of material (slug) with known physical properties. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. Note 1—For information see Test Methods E 285, E 422, E 458, E 459, and E 511.
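
    As a rough illustration of the data reduction behind such a calorimeter (a sketch of the general principle, not the standard's procedure): with one-dimensional conduction into a slug of mass m, specific heat cp and exposed area A, the absorbed heat flux follows from the slope of the temperature-time trace.

        import numpy as np

        def slug_heat_flux(t, T, mass, cp, area):
            """Thermal-capacitance (slug) calorimeter reduction:
            q = (m * cp / A) * dT/dt, with the slope taken over the
            linear portion of the temperature-time trace (SI units)."""
            slope = np.polyfit(t, T, 1)[0]      # dT/dt, K/s
            return mass * cp * slope / area     # W/m^2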

  3. Dose rate evaluation of body phantom behind ITER bio-shield wall using Monte Carlo method

    International Nuclear Information System (INIS)

    Beheshti, A.; Jabbari, I.; Karimian, A.; Abdi, M.

    2012-01-01

    One of the most critical risks to humans in a reactor environment is radiation exposure. Around the tokamak hall, personnel are exposed to a wide range of particles, including neutrons and photons. The International Thermonuclear Experimental Reactor (ITER) is a nuclear fusion research and engineering project and the most advanced experimental tokamak nuclear fusion reactor. Assessment of dose rates and of the photon radiation due to neutron activation of the solid structures in ITER is important from the radiological point of view. The dosimetry considered here is therefore based on Deuterium-Tritium (DT) plasma burning, with neutrons produced at 14.1 MeV. The aim of this study is to assess the amount of radiation a human receives behind the bio-shield wall during normal operation of ITER, considering neutron activation and delayed gammas. To achieve this aim, the ITER system and its components were simulated by the Monte Carlo method. To increase the accuracy and precision of the absorbed dose assessment, a body phantom was also included in the simulation. The results of this research showed that the total dose rate near the outside of the bio-shield wall of the tokamak hall is less than ten percent of the annual occupational dose limit during normal operation of ITER, and they indicate how long human beings can remain in that environment before the body absorbs dangerous levels of radiation. (authors)

  4. A Method for Medical Diagnosis Based on Optical Fluence Rate Distribution at Tissue Surface.

    Science.gov (United States)

    Hamdy, Omnia; El-Azab, Jala; Al-Saeed, Tarek A; Hassan, Mahmoud F; Solouma, Nahed H

    2017-09-20

    Optical differentiation is a promising tool in biomedical diagnosis, mainly because of its safety. The values of the optical parameters of biological tissues differ according to the histopathology of the tissue and hence can be used for differentiation. The optical fluence rate distribution on tissue boundaries depends on these optical parameters, so image displays of such distributions can provide a visual means of biomedical diagnosis. In this work, an experimental setup was implemented to measure the spatially resolved steady-state diffuse reflectance and transmittance of native and coagulated chicken liver and native and boiled chicken breast skin under laser irradiation at 635 and 808 nm. With the measured values, the optical parameters of the samples were calculated in vitro using a combination of the modified Kubelka-Munk model and the Bouguer-Beer-Lambert law. The estimated optical parameter values were substituted in the diffusion equation to simulate the fluence rate at the tissue surface using the finite element method. Results were verified with Monte Carlo simulation. The results obtained showed that the diffuse reflectance curves and fluence rate distribution images can provide discrimination tools between different tissue types and hence can be used for biomedical diagnosis.
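
    For orientation, the closed-form diffusion-approximation fluence rate for an isotropic point source in an infinite homogeneous medium is sketched below. The paper itself solves the same diffusion equation with the finite element method on the real sample geometry, so this is only the textbook limiting case.

        import numpy as np

        def fluence_rate_point_source(r, mu_a, mu_s_prime, power=1.0):
            """Infinite-medium diffusion approximation: fluence rate at distance
            r (cm) from an isotropic point source, given absorption mu_a and
            reduced scattering mu_s_prime (1/cm)."""
            D = 1.0 / (3.0 * (mu_a + mu_s_prime))    # diffusion coefficient, cm
            mu_eff = np.sqrt(mu_a / D)                # = sqrt(3*mu_a*(mu_a+mu_s'))
            return power * np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)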

  5. Standard Test Method for Measuring Fast-Neutron Reaction Rates by Radioactivation of Nickel

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method covers procedures for measuring reaction rates by the activation reaction 58Ni(n,p)58Co. 1.2 This activation reaction is useful for measuring neutrons with energies above approximately 2.1 MeV and for irradiation times up to about 200 days in the absence of high thermal neutron fluence rates (for longer irradiations, see Practice E 261). 1.3 With suitable techniques, fission-neutron fluence rates above 10⁷ cm⁻²·s⁻¹ can be determined. 1.4 Detailed procedures for other fast-neutron detectors are referenced in Practice E 261. 1.5 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. Note—The burnup corrections were com...
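
    A generic activation-rate reduction of the kind underlying such measurements can be sketched as follows. This is an illustrative outline with hypothetical arguments, not the procedure text of the standard; burnup and interfering-reaction corrections are omitted.

        import numpy as np

        def reaction_rate_per_atom(counts, eff, branch, n_atoms, lam,
                                   t_irr, t_wait, t_count):
            """Convert a net photopeak count to the time-averaged reaction rate
            per target atom, correcting for decay during counting, the wait
            before counting, and decay during irradiation (saturation factor).
            lam is the decay constant of the product (1/s); times in s."""
            decay_integral = (1.0 - np.exp(-lam * t_count)) / lam   # s
            activity = counts / (eff * branch * decay_integral)     # Bq at count start
            a_end_irr = activity * np.exp(lam * t_wait)             # Bq at end of irradiation
            saturation = 1.0 - np.exp(-lam * t_irr)
            return a_end_irr / (n_atoms * saturation)               # reactions/atom/s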

  6. Standard Test Method for Measuring Heat Transfer Rate Using a Thin-Skin Calorimeter

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 This test method covers the design and use of a thin metallic calorimeter for measuring heat transfer rate (also called heat flux). Thermocouples are attached to the unexposed surface of the calorimeter. A one-dimensional heat flow analysis is used for calculating the heat transfer rate from the temperature measurements. Applications include aerodynamic heating, laser and radiation power measurements, and fire safety testing. 1.2 Advantages 1.2.1 Simplicity of Construction—The calorimeter may be constructed from a number of materials. The size and shape can often be made to match the actual application. Thermocouples may be attached to the metal by spot, electron beam, or laser welding. 1.2.2 Heat transfer rate distributions may be obtained if metals with low thermal conductivity, such as some stainless steels, are used. 1.2.3 The calorimeters can be fabricated with smooth surfaces, without insulators or plugs and the attendant temperature discontinuities, to provide more realistic flow conditions for ...

  7. A Method for Medical Diagnosis Based on Optical Fluence Rate Distribution at Tissue Surface

    Directory of Open Access Journals (Sweden)

    Omnia Hamdy

    2017-09-01

    Full Text Available Optical differentiation is a promising tool in biomedical diagnosis, mainly because of its safety. The values of the optical parameters of biological tissues differ according to the histopathology of the tissue and hence can be used for differentiation. The optical fluence rate distribution on tissue boundaries depends on these optical parameters, so image displays of such distributions can provide a visual means of biomedical diagnosis. In this work, an experimental setup was implemented to measure the spatially resolved steady-state diffuse reflectance and transmittance of native and coagulated chicken liver and native and boiled chicken breast skin under laser irradiation at 635 and 808 nm. With the measured values, the optical parameters of the samples were calculated in vitro using a combination of the modified Kubelka-Munk model and the Bouguer-Beer-Lambert law. The estimated optical parameter values were substituted in the diffusion equation to simulate the fluence rate at the tissue surface using the finite element method. Results were verified with Monte Carlo simulation. The results obtained showed that the diffuse reflectance curves and fluence rate distribution images can provide discrimination tools between different tissue types and hence can be used for biomedical diagnosis.

  8. Toward Capturing Momentary Changes of Heart Rate Variability by a Dynamic Analysis Method.

    Directory of Open Access Journals (Sweden)

    Haoshi Zhang

    Full Text Available The analysis of heart rate variability (HRV) has been performed on long-term electrocardiography (ECG) recordings (12~24 hours) and short-term recordings (2~5 minutes), which may not capture momentary changes of HRV. In this study, we present a new method to analyze momentary HRV (mHRV). The ECG recordings were segmented into a series of overlapped HRV analysis windows with a window length of 5 minutes and different time increments. The performance of the proposed method in delineating the dynamics of momentary HRV measurement was evaluated with four commonly used time courses of HRV measures on both synthetic time series and real ECG recordings from human subjects and dogs. Our results showed that a smaller time increment could capture more dynamical information on transient changes. Since a too-short increment such as 10 s would cause indented time courses of the four measures, a 1-min time increment (4-min overlap) was suggested for the analysis of mHRV in the study. ECG recordings from human subjects and dogs were used to further assess the effectiveness of the proposed method. The pilot study demonstrated that the proposed analysis of mHRV could provide a more accurate assessment of dynamical changes in cardiac activity than the conventional measures of HRV (without time overlapping). The proposed method may provide an efficient means of delineating the dynamics of momentary HRV, and further investigation would be worthwhile.
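
    A minimal sketch of the overlapped-window idea, assuming a NumPy RR-interval series (ms) with corresponding beat times (s). SDNN and RMSSD are used here as stand-ins for the four unnamed HRV measures in the abstract.

        import numpy as np

        def momentary_hrv(rr_ms, rr_times_s, win_s=300, step_s=60):
            """Overlapped-window HRV time courses: 5-min windows advanced by a
            1-min increment (4-min overlap), as suggested in the abstract.
            Returns (window centre time, SDNN, RMSSD) rows."""
            out = []
            t0, t_end = rr_times_s[0], rr_times_s[-1]
            while t0 + win_s <= t_end:
                sel = (rr_times_s >= t0) & (rr_times_s < t0 + win_s)
                rr = rr_ms[sel]
                if rr.size > 2:
                    sdnn = rr.std(ddof=1)
                    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
                    out.append((t0 + win_s / 2, sdnn, rmssd))
                t0 += step_s
            return np.array(out)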

  9. Measurement of exhalation rate of radon and radon concentration in air using open vial method

    International Nuclear Information System (INIS)

    Horiuchi, Kimiko; Ishii, Tadashi.

    1991-01-01

    It is recognized that more than half of the total exposure dose to humans is caused by radon and its decay products, which originate from naturally occurring radioactive substances (1988 UNSCEAR). Since then, the exhalation of radon from the ground surface has received increasing attention. The authors previously developed a new method for the determination of radon in natural water using toluene extraction of radon and a liquid scintillation counter with an integral counting technique, which yields absolute counting of radon. During these studies, the authors found that when a counting vial containing liquid scintillator (LS)-toluene solution is exposed to the atmosphere without a lid for a while, radon clearly dissolves into the toluene layer owing to its high solubility. Extending this finding to the determination of radon in the atmosphere, the authors devised a new method in which the atmosphere containing radon is actively collected in a glass bottle by discharging a definite amount of water from it; this is named the open-vial dynamic method. The radon concentration can be easily calculated after the necessary corrections, such as for the partition coefficient. The proposed method was applied to measure the radon exhalation rate from the ground surface and the radon concentration in air in dwelling environments, a radioactive mineral spring zone, and areas of various geological formations such as granitic or sedimentary rocks. (author)

  10. In Silico Assessment of Literature Insulin Bolus Calculation Methods Accounting for Glucose Rate of Change.

    Science.gov (United States)

    Cappon, Giacomo; Marturano, Francesca; Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni

    2018-05-01

    The standard formula (SF) used in bolus calculators (BCs) determines the meal insulin bolus using a "static" measurement of blood glucose concentration (BG) obtained by a self-monitoring of blood glucose (SMBG) fingerprick device. Some methods have been proposed to improve the efficacy of SF using "dynamic" information provided by continuous glucose monitoring (CGM), in particular the glucose rate of change (ROC). This article compares, in silico and in an ideal framework limiting exposure to possibly confounding factors (such as CGM noise), the performance of three popular techniques devised for this purpose, namely the methods of Buckingham et al (BU), Scheiner (SC), and Pettus and Edelman (PE). Using the UVa/Padova Type 1 diabetes simulator, we generated data of 100 virtual subjects in noise-free, single-meal scenarios having different preprandial BG and ROC values. The meal insulin bolus was computed using SF, BU, SC, and PE. Performance was assessed with the blood glucose risk index (BGRI) over the 9 hours after the meal. On average, BU, SC, and PE improve BGRI compared to SF. When BG is rapidly decreasing, PE obtains the best performance. In the other ROC scenarios, none of the considered methods prevails in all the preprandial BG conditions tested. Our study showed that, at least in the considered ideal framework, none of the methods to correct SF according to ROC is globally better than the others. Critical analysis of the results also suggests that further investigations are needed to develop more effective formulas to account for ROC information in BCs.
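
    The standard formula and a generic ROC correction can be sketched as below. The ROC adjustment shown is an illustrative 30-minute projection, not the published BU, SC or PE coefficients; cr, cf and target are patient-specific therapy parameters.

        def standard_bolus(cho_g, bg, target, cr, cf, iob=0.0):
            """Standard formula (SF): carbohydrate dose plus correction dose
            minus insulin on board. cr = carb ratio (g/U), cf = correction
            factor (mg/dL per U), bg and target in mg/dL."""
            return max(cho_g / cr + (bg - target) / cf - iob, 0.0)

        def roc_adjusted_bolus(cho_g, bg, target, cr, cf, roc, iob=0.0):
            """Toy ROC correction in the spirit of the compared methods:
            project BG 30 min ahead at the current rate of change (mg/dL/min)
            and correct to the projected value. The actual BU/SC/PE rules use
            their own published adjustments, which differ from this sketch."""
            bg_projected = bg + 30.0 * roc
            return max(cho_g / cr + (bg_projected - target) / cf - iob, 0.0)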

  11. A spectral chart method for estimating the mean turbulent kinetic energy dissipation rate

    Science.gov (United States)

    Djenidi, L.; Antonia, R. A.

    2012-10-01

    We present an empirical but simple and practical spectral chart method for determining the mean turbulent kinetic energy dissipation rate ⟨ε⟩ in a variety of turbulent flows. The method relies on the validity of the first similarity hypothesis of Kolmogorov (or K41), which implies that spectra of velocity fluctuations scale on the kinematic viscosity ν and ⟨ε⟩ at large Reynolds numbers. However, the evidence, based on DNS spectra, points to this scaling being also valid at small Reynolds numbers, provided effects due to inhomogeneities in the flow are negligible. The method avoids the difficulty associated with estimating time or spatial derivatives of the velocity fluctuations. It also avoids using the second hypothesis of K41, which implies the existence of a -5/3 inertial subrange only when the Taylor microscale Reynolds number R_λ is sufficiently large. The method is in fact applied to the lower wavenumber end of the dissipative range, thus avoiding most of the problems due to inadequate spatial resolution of the velocity sensors and noise associated with the higher wavenumber end of this range. The use of spectral data (30 ≤ R_λ ≤ 400) in both passive and active grid turbulence, a turbulent mixing layer and the turbulent wake of a circular cylinder indicates that the method is robust and should lead to reliable estimates of ⟨ε⟩ in flows or flow regions where the first similarity hypothesis should hold; this would exclude, for example, the region near a wall.

  12. A gamma camera count rate saturation correction method for whole-body planar imaging

    Science.gov (United States)

    Hobbs, Robert F.; Baechler, Sébastien; Senthamizhchelvan, Srinivasan; Prideaux, Andrew R.; Esaias, Caroline E.; Reinhardt, Melvin; Frey, Eric C.; Loeb, David M.; Sgouros, George

    2010-02-01

    Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in pamphlet no. 16. One of the issues not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count-rate-to-activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life calculated from the ellipsoid phantom data was 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h. Accurate WB planar dosimetry of high activities relies on successfully compensating
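
    The static kernel of such a correction can be illustrated with the paralysable dead-time model, solved by Newton's method as the abstract suggests. The published algorithm extends this to a count rate that varies during the sweep; that extension is omitted here, and the iteration is valid below saturation (n·tau < 1).

        import numpy as np

        def true_rate_paralyzable(m, tau, iters=50):
            """Invert the paralysable dead-time model m = n * exp(-n * tau)
            for the true rate n (counts/s) given the observed rate m and the
            dead-time constant tau (s), starting Newton's method from n = m."""
            n = float(m)
            for _ in range(iters):
                f = n * np.exp(-n * tau) - m
                fp = (1.0 - n * tau) * np.exp(-n * tau)   # df/dn
                n -= f / fp
            return n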

  13. A Real-Time Analysis Method for Pulse Rate Variability Based on Improved Basic Scale Entropy

    Directory of Open Access Journals (Sweden)

    Yongxin Chou

    2017-01-01

    Full Text Available Base scale entropy analysis (BSEA) is a nonlinear method to analyze heart rate variability (HRV) signals. However, the time consumption of BSEA is too long, and it is unknown whether BSEA is suitable for analyzing pulse rate variability (PRV) signals. Therefore, we proposed a method named sliding window iterative base scale entropy analysis (SWIBSEA) by combining BSEA and sliding window iterative theory. The blood pressure signals of healthy young and old subjects were chosen from the authoritative international database MIT/PhysioNet/Fantasia to generate PRV signals as the experimental data. Then, BSEA and SWIBSEA were used to analyze the experimental data; the results show that SWIBSEA reduces the time consumption and the buffer cache space while yielding the same entropy as BSEA. Meanwhile, the changes of base scale entropy (BSE) for healthy young and old subjects are the same as those of the HRV signal. Therefore, SWIBSEA can be used for deriving information from long-term and short-term PRV signals in real time, which has potential for dynamic PRV signal analysis in portable and wearable medical devices.

  14. Flow-rate measurement using radioactive tracers and transit time method

    International Nuclear Information System (INIS)

    Turtiainen, Heikki

    1986-08-01

    The transit time method is a flow measurement method based on tracer techniques. Measurement is done by injecting a pulse of tracer into the flow and measuring its transit time between two detection positions. From the transit time, the mean flow velocity and, using the pipe cross-section area, the volume flow rate can be calculated. When a radioisotope tracer is used, the measurement can be done from outside the pipe and without disturbing the process (excluding the tracer injection). The use of the transit time method has been limited because of difficulties associated with the handling and availability of radioactive tracers and the lack of equipment suitable for routine use in industrial environments. The purpose of this study was to find out whether these difficulties may be overcome by using a portable isotope generator as a tracer source and automating the measurement. In the study, a test rig and measuring equipment based on the use of a 137Cs/137mBa isotope generator were constructed. They were used to study the accuracy and error sources of the method and to compare different algorithms for calculating the transit time. The usability of the method and the equipment in industrial environments was studied by carrying out over 20 flow measurements in paper and pulp mills. On the basis of the results of the study, a project for constructing a compact radiotracer flowmeter for industrial use has been started. The application range of this kind of meter is very large. The most obvious applications are in situ calibration of flowmeters, material and energy balance studies, and process equipment analyses (e.g. pump efficiency analyses). At the moment, tracer techniques are the only methods applicable to these measurements on-line and with sufficient accuracy
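
    A minimal sketch of the transit-time reduction, assuming count-rate signals from the two detectors sampled at a common rate fs; the transit time is taken from the cross-correlation peak, which is one of the algorithm choices the study compares.

        import numpy as np

        def transit_time_flow(sig1, sig2, fs, distance_m, pipe_area_m2):
            """Cross-correlate the two detector signals, take the lag of the
            correlation peak as the tracer transit time, and convert to a
            volume flow rate with the pipe cross-section area."""
            a = sig1 - sig1.mean()
            b = sig2 - sig2.mean()
            xc = np.correlate(b, a, mode="full")
            lag = np.argmax(xc) - (len(a) - 1)   # samples; > 0 if sig2 lags sig1
            t_transit = lag / fs                  # s
            velocity = distance_m / t_transit     # m/s, mean flow velocity
            return velocity * pipe_area_m2        # m^3/s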

  15. EFFORTS TO IMPROVE LEARNING ACTIVITIES IN THE CHASSIS AND POWER TRANSFER SUBJECT, STEERING SYSTEM REPAIR COMPETENCY STANDARD, THROUGH THE DISCOVERY INQUIRY LEARNING METHOD IN CLASS XIB, SMK MUHAMMADIYAH GAMPING, ACADEMIC YEAR 2013/2014

    Directory of Open Access Journals (Sweden)

    Harry Suharto

    2013-12-01

    Full Text Available The purpose of the study was to determine the increase in learners' learning activities in the chassis and power transfer subjects, competency standard steering system repair, through the implementation of discovery inquiry learning in class XI Lightweight Vehicle Technology, SMK Muhammadiyah Gamping, Sleman, academic year 2013/2014. This is classroom action research, conducted at SMK Muhammadiyah Gamping, class XIB, academic year 2013/2014, with a sample of 26 students. Data were collected using questionnaire sheets, observation sheets and documentation to determine the increase in student activity. The study instruments were validated by expert judgment. Analysis used descriptive statistics. The results showed increases in activity from the first cycle to the second cycle: visual activities by 57.7%; oral activities by 61.6%; listening activities by 23.04%; writing activities by 8.7%; mental activities by 73.1%; emotional activities by 42.3% (student enthusiasm in learning activities); and motor activities by -7.7% (a decrease in negative activity). Based on these results, most students at SMK Muhammadiyah Gamping gave a positive opinion on the use of the discovery inquiry learning method and viewed it as useful for students and the school itself. Learners with a good perception of the discovery inquiry method knew and were fully aware of the competence achievement standards of steering system repair theory. With discovery inquiry learning of the steering system repair competency standard, students were pleased with the learning process and were also able to: 1) increase learning motivation; 2) improve learning achievement; 3) enhance creativity; 4) listen to, respect, and accept the opinions of other students; 5) reduce boredom

  16. PREDICTION OF RESERVOIR FLOW RATE OF DEZ DAM BY THE PROBABILITY MATRIX METHOD

    Directory of Open Access Journals (Sweden)

    Mohammad Hashem Kanani

    2012-12-01

    Full Text Available The data collected from the operation of existing storage reservoirs can offer valuable information for better allocation and management of fresh water for future use, to mitigate drought effects. In this paper, the long-term prediction of the water rate of the Dez reservoir (Iran) is presented using the probability matrix method. Data were analyzed to find the probability matrix of water rates in the Dez reservoir based on the history of annual water inflow over the past 40 years. The algorithm developed covers both overflow and non-overflow conditions in the reservoir. Results of this study show that in non-overflow conditions the most exigent case has a probability of 75%. This means that if the reservoir is empty (stored water less than 100 MCM) this year, it will also be empty next year with 75% probability. If the reservoir is empty this year, the stored water will be less than 300 MCM next year with 85% probability; this percentage decreases to 70% if the water in the reservoir is less than 300 MCM this year, and to 5% if the reservoir is full this year. In overflow conditions the most exigent case again has a probability of 75%. The reservoir volume will be less than 150 MCM next year with 90% probability if the reservoir is empty this year; this decreases to 70% if its water volume is less than 300 MCM and to 55% if the water volume is less than 500 MCM this year. Results also show that if the probability matrix of water rates to a reservoir is multiplied by itself repeatedly, it converges to a constant probability matrix, which can be used to predict the long-term water rate of the reservoir. In other words, the probability matrix of the series of water rates converges to a steady probability matrix over time, which reflects the hydrological behavior of the watershed and can easily be used for the long-term prediction of water storage in downstream reservoirs.
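
    The convergence property noted at the end of the abstract is ordinary Markov-chain behaviour and is easy to reproduce. The 3-state transition matrix below (empty / intermediate / full storage classes) is hypothetical, not the Dez reservoir matrix.

        import numpy as np

        def steady_state(P, tol=1e-10, max_iter=10_000):
            """Repeatedly multiply the annual transition matrix by itself; the
            powers P**n converge to a constant matrix whose identical rows are
            the long-term (stationary) distribution over storage classes."""
            Pn = P.copy()
            for _ in range(max_iter):
                P_next = Pn @ P
                if np.max(np.abs(P_next - Pn)) < tol:
                    break
                Pn = P_next
            return Pn[0]   # any row of the limit matrix

        # Hypothetical example; rows sum to 1:
        P = np.array([[0.75, 0.20, 0.05],
                      [0.30, 0.50, 0.20],
                      [0.05, 0.40, 0.55]])
        print(steady_state(P))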

  17. Service Discovery At Home

    NARCIS (Netherlands)

    Sundramoorthy, V.; Scholten, Johan; Jansen, P.G.; Hartel, Pieter H.

    Service discovery is a fairly new field that kicked off since the advent of ubiquitous computing and has been found essential in the making of intelligent networks by implementing automated discovery and remote control between devices. This paper provides an overview and comparison of several prominent

  18. Academic Drug Discovery Centres

    DEFF Research Database (Denmark)

    Kirkegaard, Henriette Schultz; Valentin, Finn

    2014-01-01

    Academic drug discovery centres (ADDCs) are seen as one of the solutions to fill the innovation gap in early drug discovery, which has proven challenging for previous organisational models. Prior studies of ADDCs have identified the need to analyse them from the angle of their economic...

  19. Decades of Discovery

    Science.gov (United States)

    2011-06-01

    For the past two-and-a-half decades, the Office of Science at the U.S. Department of Energy has been at the forefront of scientific discovery. Over 100 important discoveries supported by the Office of Science are represented in this document.

  20. Service discovery at home

    NARCIS (Netherlands)

    Sundramoorthy, V.; Scholten, Johan; Jansen, P.G.; Hartel, Pieter H.

    2003-01-01

    Service discovery is a fairly new field that kicked off since the advent of ubiquitous computing and has been found essential in the making of intelligent networks by implementing automated discovery and remote control between devices. This paper provides an overview and comparison of several

  1. Proteomic and metabolomic approaches to biomarker discovery

    CERN Document Server

    Issaq, Haleem J

    2013-01-01

    Proteomic and Metabolomic Approaches to Biomarker Discovery demonstrates how to leverage biomarkers to improve accuracy and reduce errors in research. Disease biomarker discovery is one of the most vibrant and important areas of research today, as the identification of reliable biomarkers has an enormous impact on disease diagnosis, selection of treatment regimens, and therapeutic monitoring. Various techniques are used in the biomarker discovery process, including techniques used in proteomics, the study of the proteins that make up an organism, and metabolomics, the study of chemical fingerprints created from cellular processes. Proteomic and Metabolomic Approaches to Biomarker Discovery is the only publication that covers techniques from both proteomics and metabolomics and includes all steps involved in biomarker discovery, from study design to study execution.  The book describes methods, and presents a standard operating procedure for sample selection, preparation, and storage, as well as data analysis...

  2. Development of high-frame rate neutron radiography and quantitative measurement method for multiphase flow research

    International Nuclear Information System (INIS)

    Mishima, K.; Hibiki, T.

    1998-01-01

    Neutron radiography (NR) is one of the radiographic techniques which makes use of the difference in attenuation characteristics of neutrons in materials. Fluid measurement using the NR technique is a non-intrusive method which enables visualization of dynamic images of multiphase flow of opaque fluids and/or in a metallic duct. To apply the NR technique to multiphase flow research, high frame-rate NR was developed by combining up-to-date technologies for neutron sources, scintillators, high-speed video and image intensifiers. This imaging system has several advantages, such as long recording time (up to 21 minutes), high-frame-rate imaging (up to 1000 frames/s) and no need for a triggering signal. Visualization studies of air-water two-phase flow in a metallic duct and of molten metal-water interaction were performed at recording speeds of 250, 500 and 1000 frames/s. The quality of the resulting images was sufficient to observe the flow pattern and behavior. It was also demonstrated that some characteristics of two-phase flow could be measured from these images in combination with image processing techniques. By utilizing geometrical information extracted from NR images, data on flow regime, bubble rise velocity, and wave height and interfacial area in annular flow were obtained. By utilizing the attenuation characteristics of neutrons in materials, measurements of void profile and average void fraction were performed, as sketched below. It was confirmed that this new technique may have significant advantages both in visualizing and measuring high-speed fluid phenomena when other methods, such as optical methods and X-ray radiography, cannot be applied. (author)
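
    As an example of the attenuation-based measurement mentioned above, the standard Beer-Lambert reduction for line-averaged void fraction uses calibration images of the all-liquid and all-gas channel. This is a sketch of the general principle, with scattered-neutron corrections omitted.

        import numpy as np

        def void_fraction(I, I_liquid, I_gas):
            """Line-averaged void fraction from neutron transmission along each
            ray: with I = I0*exp(-(1-alpha)*Sigma_l*d - alpha*Sigma_g*d),
            alpha = ln(I / I_liquid) / ln(I_gas / I_liquid)."""
            return np.log(I / I_liquid) / np.log(I_gas / I_liquid)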

  3. Impact of seeding rate, seeding date, rate and method of phosphorus application in faba bean (Vicia faba L. minor) in the absence of moisture stress

    Directory of Open Access Journals (Sweden)

    Turk M.A.

    2002-01-01

    Full Text Available Field experiments were conducted during the winter seasons of 1998-1999, 1999-2000 and 2000-2001 at a semi-arid region in the north of Jordan to study the effect of seeding dates (14 January, 28 January and 12 February), seeding rates (50, 75 and 100 plants per square metre), phosphorus levels (0, 17.5, 35.0 and 52.5 kg P per ha) and two methods of P placement (banding and broadcast). Seeding rate, seeding date, and rate of phosphorus had a significant effect on most of the measured traits and the yield determinants. Method of phosphorus application had a significant effect only on seed yield and seed weight per plant. In general, high yields were obtained by early seeding (14 January), high seeding rate (100 plants per square metre), and P application (52.5 kg P per ha) drilled with the seed after cultivation (banded).

  4. Single- versus multiple-sample method to measure glomerular filtration rate.

    Science.gov (United States)

    Delanaye, Pierre; Flamant, Martin; Dubourg, Laurence; Vidal-Petiot, Emmanuelle; Lemoine, Sandrine; Cavalier, Etienne; Schaeffner, Elke; Ebert, Natalie; Pottel, Hans

    2018-01-08

    There are many different ways to measure glomerular filtration rate (GFR) using various exogenous filtration markers, each having its own strengths and limitations. However, not only the marker but also the methodology may vary in many ways, including the use of urinary or plasma clearance and, in the case of plasma clearance, the number of time points used to calculate the area under the concentration-time curve, ranging from only one (Jacobsson method) to eight (or more) blood samples. We collected the results obtained from 5106 plasma clearances (iohexol or 51Cr-ethylenediaminetetraacetic acid (EDTA)) using three to four time points, allowing GFR calculation using the slope-intercept method and the Bröchner-Mortensen correction. For each time point, the Jacobsson formula was applied to obtain the single-sample GFR. We used Bland-Altman plots to determine the accuracy of the Jacobsson method at each time point. The single-sample method showed within-10% concordance with the multiple-sample method of 66.4%, 83.6%, 91.4% and 96.0% at the 120, 180, 240 and ≥300 min time points, respectively. Concordance was poorer at lower GFR levels, and this trend parallels increasing age. Results were similar in males and females. Some discordance was found in obese subjects. Single-sample GFR is highly concordant with a multiple-sample strategy, except in the low GFR range (<30 mL/min). © The Author 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
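
    For concreteness, the multiple-sample reduction named above can be sketched as follows. The Bröchner-Mortensen coefficients quoted are the widely used adult values; the function is an illustration rather than the study's exact computation, and units must be consistent (e.g. min, mL).

        import numpy as np

        def slope_intercept_gfr(t_min, conc, dose):
            """Fit ln C(t) = ln C0 - k*t to the 3-4 late samples, take the
            one-compartment clearance dose*k/C0 (single-exponential AUC), then
            apply the Brochner-Mortensen correction for the missed fast
            exponential of the two-compartment disappearance curve."""
            slope, intercept = np.polyfit(t_min, np.log(conc), 1)
            k, C0 = -slope, np.exp(intercept)
            gfr_si = dose * k / C0                     # slope-intercept clearance
            return 0.990778 * gfr_si - 0.001218 * gfr_si ** 2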

  5. A spectral chart method for estimating the mean turbulent kinetic energy dissipation rate

    Energy Technology Data Exchange (ETDEWEB)

    Djenidi, L.; Antonia, R.A. [The University of Newcastle, School of Engineering, Newcastle, NSW (Australia)

    2012-10-15

    We present an empirical but simple and practical spectral chart method for determining the mean turbulent kinetic energy dissipation rate ⟨ε⟩ in a variety of turbulent flows. The method relies on the validity of the first similarity hypothesis of Kolmogorov (C R (Doklady) Acad Sci URSS, NS 30:301-305, 1941) (or K41), which implies that spectra of velocity fluctuations scale on the kinematic viscosity ν and ⟨ε⟩ at large Reynolds numbers. However, the evidence, based on the DNS spectra, points to this scaling being also valid at small Reynolds numbers, provided effects due to inhomogeneities in the flow are negligible. The method avoids the difficulty associated with estimating time or spatial derivatives of the velocity fluctuations. It also avoids using the second hypothesis of K41, which implies the existence of a -5/3 inertial subrange only when the Taylor microscale Reynolds number R_λ is sufficiently large. The method is in fact applied to the lower wavenumber end of the dissipative range, thus avoiding most of the problems due to inadequate spatial resolution of the velocity sensors and noise associated with the higher wavenumber end of this range. The use of spectral data (30 ≤ R_λ ≤ 400) in both passive and active grid turbulence, a turbulent mixing layer and the turbulent wake of a circular cylinder indicates that the method is robust and should lead to reliable estimates of ⟨ε⟩ in flows or flow regions where the first similarity hypothesis should hold; this would exclude, for example, the region near a wall. (orig.)
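
    The chart idea lends itself to a simple grid search: Kolmogorov-normalise the measured spectrum for each trial dissipation rate and keep the value that best collapses the lower end of the dissipative range onto a reference curve. The sketch below is a schematic reading of the method; the reference callable, the trial grid and the fitting window are assumptions, not the authors' procedure.

        import numpy as np

        def dissipation_from_spectrum(k, E_k, nu, reference, eps_grid):
            """For each trial eps, form phi = E(k) / (eps*nu**5)**0.25 versus
            k*eta with eta = (nu**3/eps)**0.25, and pick the eps minimising the
            log-space mismatch with reference(k_eta) over the lower end of the
            dissipative range."""
            best_eps, best_err = None, np.inf
            for eps in eps_grid:
                eta = (nu ** 3 / eps) ** 0.25
                k_eta = k * eta
                phi = E_k / (eps * nu ** 5) ** 0.25
                sel = (k_eta > 0.1) & (k_eta < 0.5)
                err = np.mean((np.log(phi[sel]) - np.log(reference(k_eta[sel]))) ** 2)
                if err < best_err:
                    best_eps, best_err = eps, err
            return best_eps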

  6. "Eureka, Eureka!" Discoveries in Science

    Science.gov (United States)

    Agarwal, Pankaj

    2011-01-01

    Accidental discoveries have been of significant value in the progress of science. Although accidental discoveries are more common in pharmacology and chemistry, other branches of science have also benefited from such discoveries. While most discoveries are the result of persistent research, famous accidental discoveries provide a fascinating…

  7. Nonlinear Methods to Assess Changes in Heart Rate Variability in Type 2 Diabetic Patients

    Energy Technology Data Exchange (ETDEWEB)

    Bhaskar, Roy, E-mail: imbhaskarall@gmail.com [Indian Institute of Technology (India); University of Connecticut, Farmington, CT (United States); Ghatak, Sobhendu [Indian Institute of Technology (India)

    2013-10-15

    Heart rate variability (HRV) is an important indicator of autonomic modulation of cardiovascular function. Diabetes can alter cardiac autonomic modulation by damaging afferent inputs, thereby increasing the risk of cardiovascular disease. We applied nonlinear analytical methods to identify parameters associated with HRV that are indicative of changes in autonomic modulation of heart function in diabetic patients. We analyzed differences in HRV patterns between diabetic and age-matched healthy control subjects using nonlinear methods. Lagged Poincaré plot, autocorrelation, and detrended fluctuation analysis were applied to analyze HRV in electrocardiography (ECG) recordings. Lagged Poincare plot analysis revealed significant changes in some parameters, suggestive of decreased parasympathetic modulation. The detrended fluctuation exponent derived from long-term fitting was higher than the short-term one in the diabetic population, which was also consistent with decreased parasympathetic input. The autocorrelation function of the deviation of inter-beat intervals exhibited a highly correlated pattern in the diabetic group compared with the control group. The HRV pattern significantly differs between diabetic patients and healthy subjects. All three statistical methods employed in the study may prove useful to detect the onset and extent of autonomic neuropathy in diabetic patients.

  8. Nonlinear Methods to Assess Changes in Heart Rate Variability in Type 2 Diabetic Patients

    International Nuclear Information System (INIS)

    Bhaskar, Roy; Ghatak, Sobhendu

    2013-01-01

    Heart rate variability (HRV) is an important indicator of autonomic modulation of cardiovascular function. Diabetes can alter cardiac autonomic modulation by damaging afferent inputs, thereby increasing the risk of cardiovascular disease. We applied nonlinear analytical methods to identify parameters associated with HRV that are indicative of changes in autonomic modulation of heart function in diabetic patients. We analyzed differences in HRV patterns between diabetic and age-matched healthy control subjects using nonlinear methods. Lagged Poincaré plot, autocorrelation, and detrended fluctuation analysis were applied to analyze HRV in electrocardiography (ECG) recordings. Lagged Poincare plot analysis revealed significant changes in some parameters, suggestive of decreased parasympathetic modulation. The detrended fluctuation exponent derived from long-term fitting was higher than the short-term one in the diabetic population, which was also consistent with decreased parasympathetic input. The autocorrelation function of the deviation of inter-beat intervals exhibited a highly correlated pattern in the diabetic group compared with the control group. The HRV pattern significantly differs between diabetic patients and healthy subjects. All three statistical methods employed in the study may prove useful to detect the onset and extent of autonomic neuropathy in diabetic patients
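
    Of the three analyses, the lagged Poincaré descriptors are the simplest to reproduce; a sketch for a NumPy RR-interval series follows, with SD1 conventionally read as short-term (largely parasympathetic) variability and SD2 as longer-term variability.

        import numpy as np

        def lagged_poincare(rr, m=1):
            """Lagged Poincare descriptors: plot RR(n+m) against RR(n); SD1 is
            the dispersion across the identity line, SD2 along it."""
            x, y = rr[:-m], rr[m:]
            sd1 = np.sqrt(np.var(y - x, ddof=1) / 2.0)
            sd2 = np.sqrt(np.var(y + x, ddof=1) / 2.0)
            return sd1, sd2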

  9. Excessive leakage measurement using pressure decay method in containment building local leakage rate test at nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Won Kyu; Kim, Chang Soo; Kim, Wang Bae [KHNP, Central Research Institute, Daejeon (Korea, Republic of)

    2016-06-15

    There are two methods for conducting the containment local leakage rate test (LLRT) in nuclear power plants: the make-up flow rate method and the pressure decay method. The make-up flow rate method is applied first in most power plants; in this method, the leakage rate is measured by checking the flow rate of the make-up flow. However, when it is difficult to maintain the test pressure because of excessive leakage, the pressure decay method can be used as a complementary method, as leakage rates at pressures lower than normal can be measured with it. We studied the measurement of excessive leakage using the pressure decay method for conducting the LLRT of the containment building at a nuclear power plant. We performed experiments under conditions similar to those of an on-site LLRT, measured the characteristics of the leakage rate under various pressure decay conditions, and calculated the compensation ratio based on these data.
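
    The ideal-gas reduction behind a pressure decay measurement can be sketched as follows. This shows the principle only, not the plant test procedure; the standard-condition constants are the usual assumptions.

        def pressure_decay_leak_rate(V, p1, T1, p2, T2, dt,
                                     p_std=101.325, T_std=288.15):
            """Leakage as the loss in p*V/T of the test volume over the decay
            interval, expressed as a volumetric flow at standard conditions:
            Q = V * T_std * (p1/T1 - p2/T2) / (p_std * dt).
            Pressures in kPa absolute, temperatures in K, dt in the desired
            time unit; returns volume (same unit as V) per time unit."""
            return V * T_std * (p1 / T1 - p2 / T2) / (p_std * dt)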

  10. A comparative analysis of signal processing methods for motion-based rate responsive pacing.

    Science.gov (United States)

    Greenhut, S E; Shreve, E A; Lau, C P

    1996-08-01

    Pacemakers that augment heart rate (HR) by sensing body motion have been the most frequently prescribed rate responsive pacemakers. Many comparisons between motion-based rate responsive pacemaker models have been published. However, conclusions regarding specific signal processing methods used for rate response (e.g., filters and algorithms) can be affected by device-specific features. To objectively compare commonly used motion sensing filters and algorithms, acceleration and ECG signals were recorded from 16 normal subjects performing exercise and daily living activities. Acceleration signals were filtered (1-4 or 15-Hz band-pass), then processed using threshold crossing (TC) or integration (IN) algorithms creating four filter/algorithm combinations. Data were converted to an acceleration indicated rate and compared to intrinsic HR using root mean square difference (RMSd) and signed RMSd. Overall, the filters and algorithms performed similarly for most activities. The only differences between filters were for walking at an increasing grade (1-4 Hz superior to 15-Hz) and for rocking in a chair (15-Hz superior to 1-4 Hz). The only differences between algorithms were for bicycling (TC superior to IN), walking at an increasing grade (IN superior to TC), and holding a drill (IN superior to TC). Performance of the four filter/algorithm combinations was also similar over most activities. The 1-4/IN (filter [Hz]/algorithm) combination performed best for walking at a grade, while the 15/TC combination was best for bicycling. However, the 15/TC combination tended to be most sensitive to higher frequency artifact, such as automobile driving, downstairs walking, and hand drilling. Chair rocking artifact was highest for 1-4/IN. The RMSd for bicycling and upstairs walking were large for all combinations, reflecting the nonphysiological nature of the sensor. The 1-4/TC combination demonstrated the least intersubject variability, was the only filter/algorithm combination
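
    In toy form, the two sensor algorithms compared in the study reduce to a crossing count and a rectified integral of the band-passed acceleration over a processing window; the device-specific mapping from either metric to a pacing rate is omitted here.

        import numpy as np

        def rate_response_metrics(accel, fs, threshold):
            """Threshold-crossing (TC) count and integration (IN) measure of a
            band-passed acceleration window sampled at fs (Hz)."""
            tc = np.sum((accel[:-1] < threshold) & (accel[1:] >= threshold))
            integ = np.trapz(np.abs(accel)) / fs   # rectified integral, unit*s
            return tc, integ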

  11. Calculation method of rate and area of sedimentation, by non-conventional mathematical process of data treatment

    International Nuclear Information System (INIS)

    Cota, P.L.

    1987-01-01

    The methods used for calculating the rate and area of sedimentation are based on graphical resolution techniques. Solving the problem by mathematical resolution, using computational methods, is faster and more accurate. A comparison between the results from these methods and the conventional method is shown. (E.G.) [pt

  12. Multi-rate cubature Kalman filter based data fusion method with residual compensation to adapt to sampling rate discrepancy in attitude measurement system.

    Science.gov (United States)

    Guo, Xiaoting; Sun, Changku; Wang, Peng

    2017-08-01

    This paper investigates the multi-rate inertial and vision data fusion problem in nonlinear attitude measurement systems, where the sampling rate of the inertial sensor is much faster than that of the vision sensor. To fully exploit the high frequency inertial data and obtain favorable fusion results, a multi-rate CKF (Cubature Kalman Filter) algorithm with estimated residual compensation is proposed in order to adapt to the problem of sampling rate discrepancy. During inter-sampling of slow observation data, observation noise can be regarded as infinite. The Kalman gain is unknown and approaches zero. The residual is also unknown. Therefore, the filter estimated state cannot be compensated. To obtain compensation at these moments, state error and residual formulas are modified when compared with the observation data available moments. Self-propagation equation of the state error is established to propagate the quantity from the moments with observation to the moments without observation. Besides, a multiplicative adjustment factor is introduced as Kalman gain, which acts on the residual. Then the filter estimated state can be compensated even when there are no visual observation data. The proposed method is tested and verified in a practical setup. Compared with multi-rate CKF without residual compensation and single-rate CKF, a significant improvement is obtained on attitude measurement by using the proposed multi-rate CKF with inter-sampling residual compensation. The experiment results with superior precision and reliability show the effectiveness of the proposed method.
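
    The rate-handling skeleton of such a filter is easy to show with a plain linear Kalman filter standing in for the cubature filter; the residual-compensation term the paper adds between vision samples is deliberately omitted, so this sketch only illustrates predicting at the fast inertial rate and updating when a vision measurement arrives.

        import numpy as np

        def multirate_kf(F, Q, H, R, x0, P0, z_seq):
            """z_seq has one entry per fast (inertial) step: an observation
            vector when a vision sample is available, else None."""
            x, P = x0.copy(), P0.copy()
            history = []
            for z in z_seq:
                x = F @ x                          # fast-rate prediction
                P = F @ P @ F.T + Q
                if z is not None:                  # slow-rate vision update
                    S = H @ P @ H.T + R
                    K = P @ H.T @ np.linalg.inv(S)
                    x = x + K @ (z - H @ x)
                    P = (np.eye(len(x)) - K @ H) @ P
                history.append(x.copy())
            return np.array(history)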

  13. Statistical evaluation of the analytical method involved in French nuclear glasses leaching rate determination

    Energy Technology Data Exchange (ETDEWEB)

    Broudic, V.; Marques, C.; Bonnal, M

    2004-07-01

    Chemical durability studies of nuclear glasses involve a large number of water leaching experiments at different temperatures and pressures, both on glasses doped with fission products and actinides and on non-radioactive surrogates. The leaching rates of these glasses are evaluated through ICPAES analysis of the leachate over time. This work presents a statistical evaluation of the analysis method used to determine the concentrations of the various vitreous matrix constituents: Si, B, Na, Al, Ca and Li as major elements, and Ba, Cr, Fe, Mn, Mo, Ni, P, Sr, Zn and Zr as minor elements. Calibration characteristics, limits of detection, limits of quantification and uncertainty quantification are illustrated with different examples of analyses performed on surrogates and on radioactive leachates in a glove box. (authors)

  14. Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Pamala C.; Halverson, Mark A.

    2013-09-01

    The U.S. Department of Energy’s (DOE) Building America program implemented a new Codes and Standards Innovation (CSI) Team in 2013. The Team’s mission is to assist Building America (BA) research teams and partners in identifying and resolving conflicts between Building America innovations and the various codes and standards that govern the construction of residences. A CSI Roadmap was completed in September, 2013. This guidance document was prepared using the information in the CSI Roadmap to provide BA research teams and partners with specific information and approaches to identifying and overcoming potential barriers to Building America (BA) innovations arising in and/or stemming from codes, standards, and rating methods. For more information on the BA CSI team, please email: CSITeam@pnnl.gov

  15. Statistical evaluation of the analytical method involved in French nuclear glasses leaching rate determination

    International Nuclear Information System (INIS)

    Broudic, V.; Marques, C.; Bonnal, M.

    2004-01-01

    Chemical durability studies of nuclear glasses involve a large number of water leaching experiments at different temperatures and pressures, both on glasses doped with fission products and actinides and on non-radioactive surrogates. The leaching rates of these glasses are evaluated through ICPAES analysis of the leachate over time. This work presents a statistical evaluation of the analysis method used to determine the concentrations of the various vitreous matrix constituents: Si, B, Na, Al, Ca and Li as major elements, and Ba, Cr, Fe, Mn, Mo, Ni, P, Sr, Zn and Zr as minor elements. Calibration characteristics, limits of detection, limits of quantification and uncertainty quantification are illustrated with different examples of analyses performed on surrogates and on radioactive leachates in a glove box. (authors)

  16. A method for radiobiological investigations in radiation fields with different LET and high dose rates

    International Nuclear Information System (INIS)

    Grundler, W.

    1976-01-01

    A test system is necessary for: 1. investigations in the field of radiobiology with radiation of different LET and a relatively high background dose rate of one component (e.g. investigations with fast and intermediate reactor neutrons); 2. radiation risk studies over a wide range; 3. irradiations covering a long time period (up to 100 days). Such a system must, on the one hand, make it possible to analyze the influence of radiation of different LET and, on the other, show relatively radiation-resistant behaviour and allow simple cell cycle regulation. A survey is given of the installed device for a simple cell observation method, the biological test system used, and the analysis of effects caused by dose, repair and LET. It is possible to analyze the behaviour of the non-surviving cells and to demonstrate different reactions of the test parameters to radiation of different LET. (author)

  17. [Heart rate variability as a method of assessing the autonomic nervous system in polycystic ovary syndrome].

    Science.gov (United States)

    de Sá, Joceline Cássia Ferezini; Costa, Eduardo Caldas; da Silva, Ester; Azevedo, George Dantas

    2013-09-01

    Polycystic ovary syndrome (PCOS) is an endocrine disorder associated with several cardiometabolic risk factors, such as central obesity, insulin resistance, type 2 diabetes, metabolic syndrome, and hypertension. These factors are associated with adrenergic overactivity, which is an important prognostic factor for the development of cardiovascular disorders. Given the common cardiometabolic disturbances occurring in women with PCOS, studies in recent years have investigated the cardiac autonomic control of these patients, mainly based on heart rate variability (HRV). Thus, in this review, we discuss the recent findings of studies that investigated the HRV of women with PCOS, as well as noninvasive methods for the analysis of autonomic control based on the indexes related to this methodology.

  18. Development and application of an on-line tritium production rate measuring method

    International Nuclear Information System (INIS)

    Yamaguchi, Seiya

    1989-06-01

    A highly sensitive on-line method for measuring the tritium production rate (TPR) of 6Li was developed using the response difference of 6Li- and 7Li-glass scintillators in a mixed neutron-gamma radiation field. A fitting method for subtracting the pulse height spectrum of 7Li-glass from that of 6Li-glass was introduced, as sketched below. The contribution of competing reactions such as 6Li(n,n'd)4He was estimated by kinematical analyses. The absolute 6Li content was determined by chemical analysis. The thermal flux perturbation due to 6Li-glass of various thicknesses and 6Li contents was evaluated by measurement in a thermal neutron field and by calculation with the modified Skyrme theory. A Monte Carlo calculation of the self-shielding effect was also made, and the dependence of the self-shielding on neutron energy was examined with this Monte Carlo code. The edge effect, i.e., distortion of the pulse height spectrum due to partial energy deposition of the alpha and/or the triton, was investigated by measurement in a thermal neutron field and by a Monte Carlo simulation that was based on the scintillation mechanism and considered Bragg absorption and the ratio of contributions to luminescence by the alpha and the triton. The dependence of the edge effect on neutron energy was examined with this code as well. The method was applied to the measurement of TPR distributions in simulated fusion blanket assemblies bombarded by D-T neutrons. Absolute values of the TPR were obtained with an experimental error of 3∼6%. The measured results were compared with those of conventional β-counting methods and good agreement was obtained. An optical fiber system using miniature lithium-glass scintillators was fabricated for the purpose of microminiaturization of the detector and adaptation to strong electromagnetic fields. The applicability of this system to a D-T neutron field was demonstrated. (author)
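
    The fitting-and-subtraction step can be sketched as a least-squares scale factor between the two pulse-height spectra; the channel selections below are measurement-specific placeholders, and this is an outline of the idea rather than the thesis procedure.

        import numpy as np

        def tritium_counts(spec_6li, spec_7li, bg_channels, peak_channels):
            """Scale the 7Li-glass spectrum to the 6Li-glass one over channels
            dominated by the common gamma/background response, subtract, and
            integrate the residual 6Li(n,alpha)t peak."""
            s6, s7 = spec_6li[bg_channels], spec_7li[bg_channels]
            scale = np.dot(s6, s7) / np.dot(s7, s7)   # least-squares scale factor
            residual = spec_6li - scale * spec_7li
            return residual[peak_channels].sum()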

  19. Standard Test Method for Measuring Fast-Neutron Reaction Rates by Radioactivation of Iron

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    Refer to Guide E 844 for guidance on the selection, irradiation, and quality control of neutron dosimeters. Refer to Practice E 261 for a general discussion of the determination of fast-neutron fluence rate with threshold detectors. Pure iron in the form of foil or wire is readily available and easily handled. Fig. 1 shows a plot of cross section as a function of neutron energy for the fast-neutron reaction 54Fe(n,p)54Mn (1). This figure is for illustrative purposes only, to indicate the range of response of the 54Fe(n,p)54Mn reaction. Refer to Guide E 1018 for descriptions of recommended tabulated dosimetry cross sections. 54Mn has a half-life of 312.13 days (3) (2) and emits a gamma ray with an energy of 834.845 keV (5). (2) Interfering activities generated by neutron activation arising from thermal or fast neutron interactions are 2.57878 (46)-h 56Mn, 44.95-d (8) 59Fe, and 5.27...

  20. Standard Test Method for Measuring Fast-Neutron Reaction Rates by Radioactivation of Copper

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2011-01-01

    1.1 This test method covers procedures for measuring reaction rates by the activation reaction 63Cu(n,α)60Co. The cross section for 60Co produced in this reaction increases rapidly with neutrons having energies greater than about 5 MeV. 60Co decays with a half-life of 1925.27 days (±0.29 days) (1) and emits two gamma rays having energies of 1.1732278 and 1.332492 MeV (1). The isotopic content of natural copper is 69.17 % 63Cu and 30.83 % 65Cu (2). The neutron reaction 63Cu(n,γ)64Cu produces a radioactive product that emits gamma rays which might interfere with the counting of the 60Co gamma rays. 1.2 With suitable techniques, fission-neutron fluence rates above 10⁹ cm⁻²·s⁻¹ can be determined. The 63Cu(n,α)60Co reaction can be used to determine fast-neutron fluences for irradiation times up to about 15 years (for longer irradiations, see Practice E261). 1.3 Detailed procedures for other fast-neutron detectors are referenced in Practice E261. 1.4 This standard does not purport to address all of the...

  1. Standard Test Method for Measuring Reaction Rates by Radioactivation of Uranium-238

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method covers procedures for measuring reaction rates by assaying a fission product (F.P.) from the fission reaction 238U(n,f)F.P. 1.2 The reaction is useful for measuring neutrons with energies from approximately 1.5 to 7 MeV and for irradiation times up to 30 to 40 years. 1.3 Equivalent fission neutron fluence rates as defined in Practice E 261 can be determined. 1.4 Detailed procedures for other fast-neutron detectors are referenced in Practice E 261. 1.5 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  2. Standard Test Method for Measuring Reaction Rates by Radioactivation of Neptunium-237

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method covers procedures for measuring reaction rates by assaying a fission product (F.P.) from the fission reaction 237Np(n,f)F.P. 1.2 The reaction is useful for measuring neutrons with energies from approximately 0.7 to 6 MeV and for irradiation times up to 30 to 40 years. 1.3 Equivalent fission neutron fluence rates as defined in Practice E 261 can be determined. 1.4 Detailed procedures for other fast-neutron detectors are referenced in Practice E 261. 1.5 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  3. Effect of Irrigation Methods, Nitrogen and Phosphorus Fertilizer Rates on Sugar Beet Yield and Quality

    International Nuclear Information System (INIS)

    Janat, M.; Abudlkareem, J.

    2007-01-01

    The experiment was conducted at a research station near Adlib. Two irrigation methods (sprinkler irrigation and drip fertigation), two phosphorus rates, and four nitrogen rates (0, 70, 140 and 210 kg N/ha) were tested. All N fertilizer was injected for the drip-irrigated treatments or broadcast for the sprinkler-irrigated treatments in six equal split applications. Soil water was monitored with a neutron probe. Results revealed that the introduction of drip fertigation did not prove to be water-saving relative to sprinkler irrigation. Dry matter production was slightly increased for the drip-fertigated treatments relative to the sprinkler-irrigated treatments. Nitrogen use efficiency was not improved under drip fertigation relative to that of sprinkler irrigation. Application of phosphorus fertilizer improved sugar beet yield as well as N uptake. No significant differences in sugar beet yield were observed due to the application of N fertilizer under drip fertigation. On the other hand, there was a trend toward increasing sugar beet yield under sprinkler irrigation. Drip fertigation had no negative effects on sugar content and other related properties; furthermore, some of those properties were enhanced by the employment of drip fertigation. Field water-use efficiency followed a similar trend and was increased under sprinkler irrigation relative to drip fertigation for the sugar beet yield parameter.

  4. Highly Controlled Codeposition Rate of Organolead Halide Perovskite by Laser Evaporation Method.

    Science.gov (United States)

    Miyadera, Tetsuhiko; Sugita, Takeshi; Tampo, Hitoshi; Matsubara, Koji; Chikamatsu, Masayuki

    2016-10-05

    Organolead-halide perovskites are promising materials for next-generation solar cells because of their high power conversion efficiency. A precise fabrication method is required because both solution-process and vacuum-process fabrication of the perovskite have problems with controllability and reproducibility. The vacuum deposition process was expected to achieve precise control; however, vaporization of the amine compound significantly degrades the controllability of the deposition rate. Here we reduced this vaporization by implementing a laser evaporation system for the codeposition of the perovskite. Continuous-wave lasers irradiated locally onto the source materials reduced the vaporization of CH3NH3I. The deposition rate was stabilized for several hours by adjusting the duty ratio of the modulated laser based on proportional-integral control. Organic-photovoltaic-type perovskite solar cells were fabricated by codeposition of PbI2 and CH3NH3I. A power-conversion efficiency of 16.0% with reduced hysteresis was achieved.

  5. An Improved in Vivo Deuterium Labeling Method for Measuring the Biosynthetic Rate of Cytokinins

    Directory of Open Access Journals (Sweden)

    Petr Tarkowski

    2010-12-01

    Full Text Available An improved method for determining the relative biosynthetic rate of isoprenoid cytokinins has been developed. A set of 11 relevant isoprenoid cytokinins, including zeatin isomers, was separated by ultra performance liquid chromatography in less than 6 min. The iP-type cytokinins were observed to give rise to a previously-unknown fragment at m/z 69; we suggest that the diagnostic 204→69 transition can be used to monitor the biosynthetic rate of isopentenyladenine. Furthermore, we found that by treating the cytokinin nucleotides with alkaline phosphatase prior to analysis, the sensitivity of the detection process could be increased. In addition, derivatization (propionylation) improved the ESI-MS response by increasing the analytes' hydrophobicity. Indeed, the ESI-MS response of propionylated isopentenyladenosine was about 34% higher than that of its underivatized counterpart. Moreover, the response of the derivatized zeatin ribosides was about 75% higher than that of underivatized zeatin ribosides. Finally, we created a web-based calculator (IZOTOP) that facilitates MS/MS data processing and offer it freely to the research community.

  6. Effect of milk sample delivery methods and arrival conditions on bacterial contamination rates.

    Science.gov (United States)

    Dinsmore, R P; English, P B; Matthews, J C; Sears, P M

    1990-07-01

    A cross-sectional study was performed of factors believed to contribute to the contamination of bovine milk sample cultures submitted to the Ithaca Regional Laboratory of the Quality Milk Promotion Services/New York State Mastitis Control. Of 871 samples entered in the study, 137 (15.7%) were contaminated. There were interactions between the sample source (veterinarian vs dairyman), delivery method, and time between sample collection and arrival at the laboratory. If only those samples collected and hand-delivered by the dairyman within 1 day of collection were compared to a like subset of samples collected and hand-delivered by veterinarians, no statistically significant differences in milk sample contamination rate (MSCR) were found. Samples were delivered to the laboratory by hand, US Postal Service, United Parcel Service, via the New York State College of Veterinary Medicine Diagnostic Laboratory, or Northeast Dairy Herd Improvement Association Courier. The MSCR was only 7.6% for hand-delivered samples, while 26% of Postal Service samples were contaminated. These rates differed significantly from other delivery methods (P less than 0.0001). The USPS samples arrived a longer time after sampling than did samples sent by other routes, and time had a significant effect on MSCR (0 to 1 day, 8.9%; greater than 1 day, 25.9%; P less than 0.01). Samples packaged with ice packs sent by routes other than the Postal Service had a lower MSCR than those not packaged with ice packs, but ice packs did not reduce the MSCR for samples sent by the Postal Service. (ABSTRACT TRUNCATED AT 250 WORDS)
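
The delivery-route comparison above is a contingency-table problem. A hedged illustration with invented counts, chosen only so the proportions match the quoted 7.6% and 26% rates:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = delivery route, columns = (contaminated, clean).
# Counts are illustrative only, not the study's raw data.
table = [[30, 365],   # hand delivered  -> 30/395  = 7.6% contaminated
         [52, 148]]   # US Postal Service -> 52/200 = 26% contaminated

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2g}")
```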

  7. Household projections by the headship rates method: The case of Serbia

    Directory of Open Access Journals (Sweden)

    Vasić Petar

    2017-01-01

    Full Text Available The headship rates method (HRM) of household projections, based on the share of household heads in the total population with the same demographic characteristics (age, sex, nationality, marital status, etc.), is the most commonly used method, especially by statistical institutes and planning institutions. The specific headship rates by age are calculated by dividing the number of household heads of a certain age by the total number of residents of the appropriate age. The future number of households is then simply projected on the basis of population projections by age and assumptions about future changes in the HRs. The HRM is based on the projection of the future age structure of the population. In that sense, the choice of the method of population projection, as well as the method of projecting the HRs, has a determining impact on the outcome of household projections. Given the methodological inconsistency typical of official population projections in Serbia, and the significant differences between deterministic and probabilistic approaches in addressing the uncertainty of future population change, the decision to use a probabilistic projection of the population of Serbia as the basis for calculating the future number of households and their structure according to the age of the household head proved to be a logical choice. However, as the basic aim of this article is to show a simple method of household projections, the above-mentioned stochastic projection is used in an utterly deterministic manner. The median of the prediction interval of the population distributed across age is interpreted as the most probable future, or as a prognosis. The HRs based on the age structure estimates and the estimated number of households by age of the household head from the Household Budget Survey (HBS) are used for projecting the HRs so that the number of observations would be large enough for calculating inclination parameters
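
Computationally, the HRM is an age-wise ratio followed by a weighted sum. A minimal sketch with hypothetical age-group arrays:

```python
import numpy as np

# Hypothetical counts by age group (same grouping in all three arrays).
heads_base = np.array([120, 950, 1400, 1100, 600])     # household heads, base year
pop_base   = np.array([2000, 2600, 3100, 2400, 1300])  # population, base year
pop_proj   = np.array([1800, 2500, 3300, 2700, 1600])  # projected population

headship_rates = heads_base / pop_base            # HR_x = heads_x / pop_x
households_proj = (headship_rates * pop_proj).sum()
print(f"Projected households: {households_proj:.0f}")
```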

  8. Interaction of α-cyperone with human serum albumin: Determination of the binding site by using Discovery Studio and via spectroscopic methods

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qing; He, Jiawei; Wu, Di; Wang, Jing; Yan, Jin; Li, Hui, E-mail: lihuilab@sina.com

    2015-08-15

    α-Cyperone, the main constituent of Cyperus rotundus, is a sesquiterpene ketone. In this work, the LigandFit and CDOCKER docking programs of Discovery Studio 3.1 were used to preliminarily estimate and further confirm the binding sites of α-cyperone. LigandFit results showed that α-cyperone is mainly bound in subdomain IIA. This finding was further confirmed by the CDOCKER results. Site marker competitive experimental results also suggested that α-cyperone shares the same binding site as warfarin. Software simulation results further revealed that α-cyperone is mainly bound in subdomain IIA, and the site marker competitive experiment results are consistent with the simulation results. 3D fluorescence and CD spectroscopy results indicated that the native conformation of the HSA molecule is affected by the presence of α-cyperone. - Highlights: • This work was carried out by adopting molecular docking and spectroscopic studies. • Discovery Studio 3.1 was used for estimating the binding sites. • The insertion of the α-cyperone molecule changed the microenvironment of HSA. • The native conformation of HSA was changed during binding with α-cyperone.

  9. Interaction of α-cyperone with human serum albumin: Determination of the binding site by using Discovery Studio and via spectroscopic methods

    International Nuclear Information System (INIS)

    Wang, Qing; He, Jiawei; Wu, Di; Wang, Jing; Yan, Jin; Li, Hui

    2015-01-01

    α-Cyperone, the main constituent of Cyperus rotundus, is a sesquiterpene ketone. In this work, the LigandFit and CDOCKER docking programs of Discovery Studio 3.1 were used to preliminarily estimate and further confirm the binding sites of α-cyperone. LigandFit results showed that α-cyperone is mainly bound in subdomain IIA. This finding was further confirmed by the CDOCKER results. Site marker competitive experimental results also suggested that α-cyperone shares the same binding site as warfarin. Software simulation results further revealed that α-cyperone is mainly bound in subdomain IIA, and the site marker competitive experiment results are consistent with the simulation results. 3D fluorescence and CD spectroscopy results indicated that the native conformation of the HSA molecule is affected by the presence of α-cyperone. - Highlights: • This work was carried out by adopting molecular docking and spectroscopic studies. • Discovery Studio 3.1 was used for estimating the binding sites. • The insertion of the α-cyperone molecule changed the microenvironment of HSA. • The native conformation of HSA was changed during binding with α-cyperone.

  10. Initial radioiodine remnant ablation success rates compared by diagnostic scan methods: I123 versus I131

    International Nuclear Information System (INIS)

    Choi, W.; Choi, E.; Yoo, I.; Kim, S.; Han, E.; Lee, S.; Lee, W.

    2015-01-01

    Full text of publication follows. Objective: to determine whether a diagnostic whole body scan (DxWBS) performed with I-131 prior to ablation diminishes the success rate of initial radioiodine remnant ablation (RRA) compared with an I-123 DxWBS in differentiated thyroid cancer patients. Material and methods: consecutive patients who received total thyroidectomy for differentiated thyroid cancer and then high-dose RRA (either 100 mCi or 150 mCi) within 6 months were included. DxWBSs were performed with I-123 or with I-131. Prior to the DxWBSs, all patients followed a strict low-iodine diet for 2 weeks and underwent hormone withdrawal to stimulate TSH above 30 mIU/l. Patients with extra-thyroidal extension of the tumor, lymph node metastasis, or distant metastasis were excluded. The initial RRA was defined as successful if the next DxWBS, done 6 months to 1 year later, was negative and the stimulated thyroglobulin level was below 2 ng/ml. Results: of 71 patients who had I-123 DxWBSs, 31 went on to receive RRA with 100 mCi and 40 received 150 mCi. Of 73 patients who had I-131 DxWBSs, 66 received 100 mCi and 7 received 150 mCi. The overall success rate was 79% for patients who had an I-123 DxWBS prior to RRA (68% for 100 mCi and 86% for 150 mCi), and 68% for patients who had I-131 DxWBSs (68% for 100 mCi and 71% for 150 mCi). Conclusion: for patients who received 100 mCi, the RRA success rate was the same for I-123 DxWBS and I-131 DxWBS. For patients treated with 150 mCi, the success rate may be lower in patients who receive RRA following a DxWBS with I-131 compared with a DxWBS with I-123. (authors)

  11. The Greatest Mathematical Discovery?

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2010-05-12

    What mathematical discovery made more than 1500 years ago: (1) Is one of the greatest, if not the greatest, single discovery in the field of mathematics? (2) Involved three subtle ideas that eluded the greatest minds of antiquity, even geniuses such as Archimedes? (3) Was fiercely resisted in Europe for hundreds of years after its discovery? (4) Even today, in historical treatments of mathematics, is often dismissed with scant mention, or else is ascribed to the wrong source? Answer: Our modern system of positional decimal notation with zero, together with the basic arithmetic computational schemes, which were discovered in India about 500 CE.

  12. Quantitative Methods for Measuring Repair Rates and Innate-Immune Cell Responses in Wounded Mouse Skin.

    Science.gov (United States)

    Li, Zhi; Gothard, Elizabeth; Coles, Mark C; Ambler, Carrie A

    2018-01-01

    In skin wounds, innate-immune cells clear up tissue debris and microbial contamination, and also secrete cytokines and other growth factors that impact repair processes such as re-epithelialization and wound closure. After injury, there is a rapid influx and efflux of immune cells at wound sites, yet the function of each innate cell population in skin repair is still under investigation. Flow cytometry is a valuable research tool for detecting and quantifying immune cells; however, in mouse back skin, the difficulty of extracting immune cells from a small area of skin due to tissue complexity has made cytometric analysis an underutilized tool. In this paper, we provide detailed methods on the digestion of lesion-specific skin without disrupting antigen expression, followed by multiplex cell staining that allows for identification of seven innate-immune populations, including rare subsets such as group-3 innate lymphoid cells (ILC3s), by flow-cytometry analysis. Furthermore, when studying the contributions of immune cells to tissue repair, an important metric to monitor is the size of the wound opening. Normal wounds close steadily, albeit at non-linear rates, while slow or stalled wound closure can indicate an underlying problem with the repair process. Calliper measurements are difficult and time-consuming to obtain and can require repeated sedation of experimental animals. We provide advanced methods for measuring wound openness: digital 3D image capture and semi-automated image processing that allow for unbiased, reliable measurements to be taken repeatedly over time.
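
The wound-openness metric discussed above ultimately reduces to counting calibrated pixels in a segmented image. A deliberately simple sketch (the segmentation itself, manual or semi-automated, is assumed to exist; all names are illustrative):

```python
import numpy as np

def wound_open_area(mask, mm_per_pixel):
    """Open-wound area from a binary mask (True = open wound), e.g. produced
    by segmenting a calibrated photograph. mm_per_pixel comes from a ruler
    or fiducial captured in the image; all names here are hypothetical."""
    return mask.sum() * mm_per_pixel ** 2   # pixel count -> mm^2

# Hypothetical usage with a segmented image:
# area_mm2 = wound_open_area(segmentation > 0.5, mm_per_pixel=0.02)
```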

  13. A method of estimating inspiratory flow rate and volume from an inhaler using acoustic measurements

    International Nuclear Information System (INIS)

    Holmes, Martin S; D'Arcy, Shona; O'Brien, Ultan; Reilly, Richard B; Seheult, Jansen N; Geraghty, Colm; Costello, Richard W; Crispino O'Connell, Gloria

    2013-01-01

    Inhalers are devices employed to deliver medication to the airways in the treatment of respiratory diseases such as asthma and chronic obstructive pulmonary disease. A dry powder inhaler (DPI) is a breath-actuated inhaler that delivers medication in dry powder form. When used correctly, DPIs improve patients' clinical outcomes. However, some patients are unable to reach the peak inspiratory flow rate (PIFR) necessary to fully extract the medication. Presently clinicians have no reliable method of objectively measuring PIFR in inhalers. In this study, we propose a novel method of estimating PIFR and also the inspiratory capacity (IC) of patients' inhalations from a commonly used DPI, using acoustic measurements. With a recording device, the acoustic signal of 15 healthy subjects using a DPI over a range of varying PIFR and IC values was obtained. Temporal and spectral signal analysis revealed that the inhalation signal contains sufficient information to estimate PIFR and IC. It was found that the average power (P_ave) in the frequency band 300–600 Hz had the strongest correlation with PIFR (R² = 0.9079), while the power in the same frequency band was also highly correlated with IC (R² = 0.9245). This study has several clinical implications as it demonstrates the feasibility of using acoustics to objectively monitor inhaler use. (paper)
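
The acoustic feature driving the reported correlations is band-limited average power. A sketch of that feature plus a linear calibration; the 300–600 Hz band is taken from the study above, everything else (sampling rate, calibration data) is hypothetical:

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, f_lo=300.0, f_hi=600.0):
    """Average spectral power of a digitized inhalation recording in a
    frequency band, estimated with Welch's method."""
    f, pxx = welch(signal, fs=fs, nperseg=2048)
    band = (f >= f_lo) & (f <= f_hi)
    return pxx[band].mean()

def fit_pifr_model(powers, pifr_values):
    """Linear calibration of known PIFR values against band power,
    mirroring the strong linear correlation (R^2 ~ 0.91) reported above."""
    slope, intercept = np.polyfit(powers, pifr_values, 1)
    return lambda p: slope * p + intercept
```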

  14. Standard Test Method for Measuring Reaction Rates by Analysis of Barium-140 From Fission Dosimeters

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method describes two procedures for the measurement of reaction rates by determining the amount of the fission product 140Ba produced by the non-threshold reactions 235U(n,f), 241Am(n,f), and 239Pu(n,f), and by the threshold reactions 238U(n,f), 237Np(n,f), and 232Th(n,f). 1.2 These reactions produce many fission products, among which is 140Ba, having a half-life of 12.752 days. 140Ba emits gamma rays of several energies; however, these are not easily detected in the presence of other fission products. Competing activity from other fission products requires that a chemical separation be employed or that the 140Ba activity be determined indirectly by counting its daughter product 140La. This test method describes both procedure (a), the nondestructive determination of 140Ba by the direct counting of 140La several days after irradiation, and procedure (b), the chemical separation of 140Ba and the subsequent counting of 140Ba or its daughter 140La. 1.3 With suitable techniques, fission neutron fl...
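
Procedure (a) rests on the in-growth of 140La from 140Ba, i.e. the two-member Bateman equation. A sketch using the 12.752-day 140Ba half-life quoted above; the 140La half-life of about 1.678 days is an assumption to be checked against current nuclear data:

```python
import math

LAM_BA = math.log(2) / 12.752   # 1/day, 140Ba (half-life quoted above)
LAM_LA = math.log(2) / 1.678    # 1/day, 140La (assumed ~1.678 d half-life)

def la140_activity(a_ba0, t_days):
    """Activity of 140La grown in from an initially pure 140Ba sample
    (Bateman equation, no 140La present at t = 0). a_ba0 is the 140Ba
    activity at separation time."""
    return a_ba0 * LAM_LA / (LAM_LA - LAM_BA) * (
        math.exp(-LAM_BA * t_days) - math.exp(-LAM_LA * t_days))
```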

  15. Associations between changes in the pattern of suicide methods and rates in Korea, the US, and Finland

    Science.gov (United States)

    2014-01-01

    Background The lethality of the suicide method employed is a strong risk factor for the completion of suicide. We examined whether annual changes in the pattern of suicide methods are related to annual changes in suicide rates in South Korea, the United States (US), and Finland. Methods We analyzed annual data from 2000–2011 for South Korea and Finland, and 2000–2010 for the US in order to examine trends in the rates and methods of suicide. Data on suicide methods were obtained from the World Health Organization (WHO) mortality database. Results Along with an annual rapid increase in suicide rates, the incidence of hanging increased steadily while suicide by self-poisoning steadily decreased in South Korea. In the US, along with an annual increase in suicide rates, the proportion of suicides committed by hanging increased while those committed with the use of firearms steadily decreased. In Finland, annual changes in the suicide rate and suicide methods were not statistically significant during the study period. Conclusions Our present findings suggest that the increased use of specific lethal methods for suicide, namely hanging, is reflected in the increased suicide rates in the Korean and US populations. The most effective approach for reducing overall suicide rates may be the implementation of population-based initiatives that reduce both the accessibility (e.g., access to firearms) and the social acceptability (e.g., effective and responsible regulations for reporting suicide) of lethal methods of suicide. PMID:24949083

  16. Automated discovery systems and the inductivist controversy

    Science.gov (United States)

    Giza, Piotr

    2017-09-01

    The paper explores possible influences that developments in the branches of AI called automated discovery and machine learning systems might have upon some aspects of the old debate between Francis Bacon's inductivism and Karl Popper's falsificationism. Donald Gillies facetiously calls this controversy 'the duel of two English knights', and claims, after some analysis of historical cases of discovery, that Baconian induction had been used in science very rarely, or not at all, although he argues that the situation has changed with the advent of machine learning systems. (Some clarification of the terms machine learning and automated discovery is required here. The key idea of machine learning is that, given data with associated outcomes, software can be trained to make those associations in future cases, which typically amounts to inducing some rules from individual cases classified by the experts. Automated discovery (also called machine discovery) deals with uncovering new knowledge that is valuable for human beings, and its key idea is that discovery is like other intellectual tasks and that the general idea of heuristic search in problem spaces applies also to discovery tasks. However, since machine learning systems discover (very low-level) regularities in data, throughout this paper I use the generic term automated discovery for both kinds of systems. I will elaborate on this later on.) Gillies's line of argument can be generalised: thanks to automated discovery systems, philosophers of science have at their disposal a new tool for empirically testing their philosophical hypotheses. Accordingly, in the paper, I will address the question of which of the two philosophical conceptions of scientific method is better vindicated in view of the successes and failures of systems developed within three major research programmes in the field: machine learning systems in the Turing tradition, the normative theory of scientific discovery formulated by Herbert Simon

  17. A test case of the deformation rate analysis (DRA) stress measurement method

    Energy Technology Data Exchange (ETDEWEB)

    Dight, P.; Hsieh, A. [Australian Centre for Geomechanics, Univ. of WA, Crawley (Australia); Johansson, E. [Saanio and Riekkola Oy, Helsinki (Finland); Hudson, J.A. [Rock Engineering Consultants (United Kingdom); Kemppainen, K.

    2012-01-15

    As part of Posiva's site and ONKALO investigations, the in situ rock stress has been measured by a variety of techniques, including hydraulic fracturing, overcoring, and convergence measurements. All these techniques involve direct measurements in a drillhole or at the rock surface. An alternative method is to test drillhole core in a way that enables estimation of the magnitudes and orientations of the in situ rock stress. The Kaiser Effect (KE) and Deformation Rate Analysis (DRA) are two ways to do this. In the work reported here, a 'blind' DRA test was conducted on core obtained from the POSE (Posiva's Olkiluoto Spalling Experiment) niche in the ONKALO. The term 'blind' means that the two first authors of this report, who conducted the tests at the Australian Centre for Geomechanics, did not know the depths below surface at which the cores had been obtained. The results of this DRA Test Case are presented, together with an explanation of the DRA procedure. Also, additional information that would help in such DRA testing and associated analysis is explained. One of the problems in comparing the DRA results with the known Olkiluoto stress field is that the latter is highly variable across the site, as experienced by the previous in situ stress measurements and as predicted by numerical analysis. The variability is mainly caused by the presence of the large brittle deformation zones which perturb the local stress state. However, this variability reduces with depth and the stress field becomes more stable at the ∼350 m at which the drillhole cores were obtained. Another compounding difficulty is that the stress quantity, being a second order tensor, requires six independent components for its specification. In other words, comparison of the DRA results and the known stress field requires comparison of six different quantities. In terms of the major principal stress orientation, the DRA results predict an orientation completely

  18. A test case of the deformation rate analysis (DRA) stress measurement method

    International Nuclear Information System (INIS)

    Dight, P.; Hsieh, A.; Johansson, E.; Hudson, J.A.; Kemppainen, K.

    2012-01-01

    As part of Posiva's site and ONKALO investigations, the in situ rock stress has been measured by a variety of techniques, including hydraulic fracturing, overcoring, and convergence measurements. All these techniques involve direct measurements in a drillhole or at the rock surface. An alternative method is to test drillhole core in a way that enables estimation of the magnitudes and orientations of the in situ rock stress. The Kaiser Effect (KE) and Deformation Rate Analysis (DRA) are two ways to do this. In the work reported here, a 'blind' DRA test was conducted on core obtained from the POSE (Posiva's Olkiluoto Spalling Experiment) niche in the ONKALO. The term 'blind' means that the two first authors of this report, who conducted the tests at the Australian Centre for Geomechanics, did not know the depths below surface at which the cores had been obtained. The results of this DRA Test Case are presented, together with an explanation of the DRA procedure. Also, additional information that would help in such DRA testing and associated analysis is explained. One of the problems in comparing the DRA results with the known Olkiluoto stress field is that the latter is highly variable across the site, as experienced by the previous in situ stress measurements and as predicted by numerical analysis. The variability is mainly caused by the presence of the large brittle deformation zones which perturb the local stress state. However, this variability reduces with depth and the stress field becomes more stable at the ∼ 350 m at which the drillhole cores were obtained. Another compounding difficulty is that the stress quantity, being a second order tensor, requires six independent components for its specification. In other words, comparison of the DRA results and the known stress field requires comparison of six different quantities. In terms of the major principal stress orientation, the DRA results predict an orientation completely different to the NW-SE regional

  19. Multidimensional process discovery

    NARCIS (Netherlands)

    Ribeiro, J.T.S.

    2013-01-01

    Typically represented in event logs, business process data describe the execution of process events over time. Business process intelligence (BPI) techniques such as process mining can be applied to get strategic insight into business processes. Process discovery, conformance checking and

  20. Fateful discovery almost forgotten

    CERN Multimedia

    1989-01-01

    "The discovery of the fission of uranium exactly half a century ago is at risk of passing unremarked because of the general ambivalence towards the consequences of this development. Can that be wise?" (4 pages)

  1. Toxins and drug discovery.

    Science.gov (United States)

    Harvey, Alan L

    2014-12-15

    Components from venoms have stimulated many drug discovery projects, with some notable successes. These are briefly reviewed, from captopril to ziconotide. However, there have been many more disappointments on the road from toxin discovery to approval of a new medicine. Drug discovery and development is an inherently risky business, and the main causes of failure during development programmes are outlined in order to highlight steps that might be taken to increase the chances of success with toxin-based drug discovery. These include having a clear focus on unmet therapeutic needs, concentrating on targets that are well-validated in terms of their relevance to the disease in question, making use of phenotypic screening rather than molecular-based assays, and working with development partners with the resources required for the long and expensive development process. Copyright © 2014 The Author. Published by Elsevier Ltd. All rights reserved.

  2. Associations between changes in the pattern of suicide methods and rates in Korea, the US, and Finland

    OpenAIRE

    Park, Subin; Ahn, Myung Hee; Lee, Ahrong; Hong, Jin Pyo

    2014-01-01

    Background The lethality of the suicide method employed is a strong risk factor for the completion of suicide. We examined whether annual changes in the pattern of suicide methods are related to annual changes in suicide rates in South Korea, the United States (US), and Finland. Methods We analyzed annual data from 2000–2011 for South Korea and Finland, and 2000–2010 for the US in order to examine trends in the rates and methods of suicide. Data on suicide methods were obtained from the World ...

  3. Defining Creativity with Discovery

    OpenAIRE

    Wilson, Nicholas Charles; Martin, Lee

    2017-01-01

    The standard definition of creativity has enabled significant empirical and theoretical advances, yet contains philosophical conundrums concerning the nature of novelty and the role of recognition and values. In this work we offer an act of conceptual valeting that addresses these issues and in doing so, argue that creativity definitions can be extended through the use of discovery. Drawing on dispositional realist philosophy we outline why adding the discovery and bringing into being of new ...

  4. Discovery Driven Growth

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj

    2009-01-01

    Review of Discovery Driven Growth: A breakthrough process to reduce risk and seize opportunity, by Rita G. McGrath & Ian C. MacMillan, Boston: Harvard Business Press. Publication date: 14 August.

  5. The π discovery

    International Nuclear Information System (INIS)

    Fowler, P.H.

    1988-01-01

    The paper traces the discovery of the π meson. The discovery was made by exposure of nuclear emulsions to cosmic radiation at high altitudes, with subsequent scanning of the emulsions for meson tracks. Disintegration of nuclei by a negative meson, and the decay of a π meson, were both observed. Further measurements revealed the mass of the meson. The studies carried out on the origin of the π-mesons, and their mode of decay, are both described. (U.K.)

  6. Socratic Questioning-Guided Discovery

    Directory of Open Access Journals (Sweden)

    M. Hakan Türkçapar

    2012-04-01

    Full Text Available The “Socratic Method” is a way of teaching philosophical thinking and knowledge by asking questions, used by the ancient Greek philosopher Socrates. Socrates taught knowledge to his followers by asking questions, and the conversations between them were named “Socratic Dialogues”. In this sense, no novel knowledge is taught to the individual; only what is already known is recalled and rediscovered. The form of Socratic questioning used during the process of cognitive behavioral therapy is known as Guided Discovery. In this method, the aim is to make the client notice, through a series of questions, a piece of knowledge that he could notice but of which he is not yet aware. The Socratic method, or guided discovery, consists of several steps: identifying the problem by listening to the client and making reflections; finding alternatives by examining and evaluating; re-identification by using the newly found information and questioning the old distorted belief; and reaching a conclusion and applying it. Question types used during these procedures are: questions for gaining information, questions revealing meanings, questions revealing beliefs, questions about behaviours during similar past experiences, analysis questions, and analytic synthesis questions. In order to make the patient feel understood, it is important to be empathetic and to summarise the problem during the interview. In this text, the steps of Socratic Questioning-Guided Discovery are reviewed with sample dialogues after each step. [JCBPR 2012; 1(1): 15-20]

  7. Spent Fuel Pool Dose Rate Calculations Using Point Kernel and Hybrid Deterministic-Stochastic Shielding Methods

    International Nuclear Information System (INIS)

    Matijevic, M.; Grgic, D.; Jecmenica, R.

    2016-01-01

    This paper presents a comparison of the Krsko Power Plant simplified Spent Fuel Pool (SFP) dose rates obtained using different computational shielding methodologies. The analysis was performed to estimate limiting gamma dose rates on wall-mounted level instrumentation in case of a significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry air as bulk region materials. The pool is divided into an old and a new section, where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling times (60 days, 1 year and 5 years). The new section represents the FAs with a cooling time of 10 years. The time-dependent fuel assembly isotopic composition was calculated using the ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the MicroShield calculation is based on the imported isotopic activities. The time-dependent photon spectra with total source intensity from MicroShield multigroup point kernel calculations were then prepared for two hybrid deterministic-stochastic sequences. One is based on the SCALE/MAVRIC (Monaco and Denovo) methodology and the other uses the Monte Carlo code MCNP6.1.1b and the ADVANTG 3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and the WWINP file) for the MCNP fixed-source calculation using continuous-energy transport. ADVANTG employs the deterministic forward-adjoint transport solver Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first
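
The point-kernel side of this comparison evaluates a closed-form attenuation kernel rather than transporting particles. A single-group sketch of that kernel; codes such as MicroShield evaluate it per energy group and integrate over distributed sources, and all numbers here are hypothetical:

```python
import math

def point_kernel_flux(source_strength, mu_t_layers, buildup, r_cm):
    """Photon flux at distance r from an isotropic point source behind
    slab shields: phi = S * B * exp(-sum(mu_i * t_i)) / (4 pi r^2).
    A dose rate follows by multiplying with a flux-to-dose conversion
    factor (omitted here). Single energy group; inputs hypothetical."""
    return (source_strength * buildup *
            math.exp(-sum(mu_t_layers)) / (4.0 * math.pi * r_cm ** 2))

# Hypothetical example: 1e12 photons/s behind 2 mean free paths of shield,
# buildup factor 3, at 200 cm:
# phi = point_kernel_flux(1e12, [2.0], 3.0, 200.0)
```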

  8. Accelerating finite-rate chemical kinetics with coprocessors: Comparing vectorization methods on GPUs, MICs, and CPUs

    Science.gov (United States)

    Stone, Christopher P.; Alferman, Andrew T.; Niemeyer, Kyle E.

    2018-05-01

    Accurate and efficient methods for solving stiff ordinary differential equations (ODEs) are a critical component of turbulent combustion simulations with finite-rate chemistry. The ODEs governing the chemical kinetics at each mesh point are decoupled by operator-splitting allowing each to be solved concurrently. An efficient ODE solver must then take into account the available thread and instruction-level parallelism of the underlying hardware, especially on many-core coprocessors, as well as the numerical efficiency. A stiff Rosenbrock and a nonstiff Runge-Kutta ODE solver are both implemented using the single instruction, multiple thread (SIMT) and single instruction, multiple data (SIMD) paradigms within OpenCL. Both methods solve multiple ODEs concurrently within the same instruction stream. The performance of these parallel implementations was measured on three chemical kinetic models of increasing size across several multicore and many-core platforms. Two separate benchmarks were conducted to clearly determine any performance advantage offered by either method. The first benchmark measured the run-time of evaluating the right-hand-side source terms in parallel and the second benchmark integrated a series of constant-pressure, homogeneous reactors using the Rosenbrock and Runge-Kutta solvers. The right-hand-side evaluations with SIMD parallelism on the host multicore Xeon CPU and many-core Xeon Phi co-processor performed approximately three times faster than the baseline multithreaded C++ code. The SIMT parallel model on the host and Phi was 13%-35% slower than the baseline while the SIMT model on the NVIDIA Kepler GPU provided approximately the same performance as the SIMD model on the Phi. The runtimes for both ODE solvers decreased significantly with the SIMD implementations on the host CPU (2.5-2.7 ×) and Xeon Phi coprocessor (4.7-4.9 ×) compared to the baseline parallel code. The SIMT implementations on the GPU ran 1.5-1.6 times faster than the baseline
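
The data-parallel idea in this record, one instruction stream advancing many independent reactors at once, can be illustrated at array level in NumPy, although the paper's own implementations are OpenCL. A batched RK4 sketch with a toy first-order-decay "chemistry" standing in for real finite-rate kinetics:

```python
import numpy as np

def rk4_batch(f, y0, t0, t1, n_steps):
    """Integrate dy/dt = f(t, y) for a whole batch of independent ODE
    systems at once: y0 has shape (n_systems, n_vars), and f must be
    written with array operations so every arithmetic step applies to
    all systems simultaneously (the SIMD/SIMT idea, expressed in NumPy)."""
    y = y0.copy()
    h = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Toy example: 1000 "reactors", each a first-order decay with its own rate.
k = np.random.uniform(0.5, 2.0, size=(1000, 1))
y0 = np.ones((1000, 1))
y1 = rk4_batch(lambda t, y: -k * y, y0, 0.0, 1.0, 100)
```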

  9. Management of marine cage aquaculture. Environmental carrying capacity method based on dry feed conversion rate.

    Science.gov (United States)

    Cai, Huiwen; Sun, Yinglan

    2007-11-01

    Marine cage aquaculture produces a large amount of waste that is released directly into the environment. To effectively manage the mariculture environment, it is important to determine the carrying capacity of an aquaculture area. In many Asian countries trash fish is predominantly used in marine cage aquaculture, and it contains more water than pellet feed. The traditional nutrient loading analysis applies to pellet feed, not to trash fish feed, so a more careful analysis is necessary in areas where trash fish feed is used. By analogy with the FCR (feed conversion rate), a dry feed conversion rate (DFCR) was used to analyze the nutrient loadings from marine cage aquaculture where trash fish is used. Based on the hydrodynamic model and the mass transport model in Xiangshan Harbor, the relationship between the water quality and the waste discharged from cage aquaculture has been determined. The environmental carrying capacity of the aquaculture sea area was calculated by applying the models noted above. Nitrogen and phosphorus are the water quality parameters considered in this study. The simulated results show that the maximum nitrogen and phosphorus concentrations were 0.216 mg/L and 0.039 mg/L, respectively. In most of the sea area, the nutrient concentrations were higher than the water quality standard. The calculated environmental carrying capacities of nitrogen and phosphorus in Xiangshan Harbor were 1,107.37 t/yr and 134.35 t/yr, respectively. The waste generated from cage culturing in 2000 had already exceeded the environmental carrying capacity. Unconsumed feed has been identified as the most important source of all pollutants in cage culturing systems. This suggests the importance of increasing feed utilization and improving feed composition on the basis of nutrient requirements. For the sustainable development of the aquaculture industry, it is an effective management measure to keep the stocking density and pollution loadings below the environmental carrying
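
The DFCR-based loading estimate is, at bottom, a simple mass balance: nitrogen entering in dry feed minus nitrogen retained in harvested fish. A sketch with hypothetical coefficients (not the study's values):

```python
def nitrogen_loading(production_t, dfcr, n_frac_feed_dry, n_frac_fish):
    """Nitrogen released to the water body (tonnes N/yr) by a cage farm,
    from a feed mass balance. DFCR = dry feed consumed / wet fish
    produced, so dry feed use follows from production. All coefficients
    here are illustrative assumptions."""
    dry_feed_t = production_t * dfcr          # dry feed consumed, t/yr
    n_in = dry_feed_t * n_frac_feed_dry       # N entering in feed
    n_retained = production_t * n_frac_fish   # N harvested in fish
    return n_in - n_retained

# Hypothetical usage:
# nitrogen_loading(production_t=600, dfcr=2.5,
#                  n_frac_feed_dry=0.07, n_frac_fish=0.03)
```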

  10. Glycoscience aids in biomarker discovery

    Directory of Open Access Journals (Sweden)

    Serenus Hua & Hyun Joo An

    2012-06-01

    Full Text Available The glycome consists of all glycans (or carbohydrates) within a biological system, and modulates a wide range of important biological activities, from protein folding to cellular communications. The mining of the glycome for disease markers represents a new paradigm for biomarker discovery; however, this effort is severely complicated by the vast complexity and structural diversity of glycans. This review summarizes recent developments in analytical technology and methodology as applied to the fields of glycomics and glycoproteomics. Mass spectrometric strategies for glycan compositional profiling are described, as are potential refinements which allow structure-specific profiling. Analytical methods that can discern protein glycosylation at a specific site of modification are also discussed in detail. Biomarker discovery applications are shown at each level of analysis, highlighting the key role that glycoscience can play in helping scientists understand disease biology.

  11. [Artificial Intelligence in Drug Discovery].

    Science.gov (United States)

    Fujiwara, Takeshi; Kamada, Mayumi; Okuno, Yasushi

    2018-04-01

    With the increase in data generated by analytical instruments, the application of artificial intelligence (AI) technology in the medical field is indispensable. In particular, the practical application of AI technology is strongly required in "genomic medicine" and "genomic drug discovery", which conduct medical practice and novel drug development based on individual genomic information. In our laboratory, we have been developing a database to integrate the genome data and clinical information obtained by clinical genome analysis, and a computational support system for the clinical interpretation of variants using AI. In addition, with the aim of creating new therapeutic targets in genomic drug discovery, we have also been working on the development of a binding affinity prediction system for mutated proteins and drugs by molecular dynamics simulation using the supercomputer "Kei". We have also tackled problems in virtual drug screening. Our AI technology has successfully generated virtual compound libraries, and deep learning methods have enabled us to predict interactions between compounds and target proteins.

  12. A quantum causal discovery algorithm

    Science.gov (United States)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  13. Dose rate estimates and spatial interpolation maps of outdoor gamma dose rate with geostatistical methods; A case study from Artvin, Turkey

    International Nuclear Information System (INIS)

    Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkin, Halim; Çevik, Uğur

    2015-01-01

    In this study, the performance of geostatistical estimation methods is compared in order to investigate and map natural background radiation using the minimum number of data. Artvin province, which has quite hilly terrain and a wide variety of soils and is located in the north-east of Turkey, was selected as the study area. The outdoor gamma dose rate (OGDR), which is an important determinant of the environmental radioactivity level, was measured at 204 stations. The spatial structure of the OGDR was determined by anisotropic, isotropic and residual variograms. Ordinary kriging (OK) and universal kriging (UK) interpolation estimates were calculated with the help of the model parameters obtained from these variograms. In OK, the calculations are based only on the positions of the points where samples are taken, whereas in the UK technique, general soil groups and altitude values that directly affect the OGDR are also included in the calculations. When the two methods are evaluated based on their performance, the UK model (r = 0.88, p < 0.001) gives considerably better results than the OK model (r = 0.64, p < 0.001). In addition, the maps created at the end of the study illustrate that local changes are better reflected by the UK method than by the OK method, and its error variance was found to be lower. - Highlights: • The spatial dispersion of gamma dose rates in Artvin, which has some of the roughest terrain in Turkey, was studied. • The performance of different geostatistical methods (OK and UK) for the dispersion of gamma dose rates was compared. • Estimates were calculated for non-sampled points by using the geostatistical models, and the results were mapped. • The general radiological structure was determined in much less time and at lower cost compared to experimental methods. • The UK method was found to give more descriptive results than OK.
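
Ordinary kriging itself is a small linear solve once a variogram model is fixed. A self-contained sketch with hypothetical OGDR samples; universal kriging would augment the same system with drift covariates such as soil group and altitude:

```python
import numpy as np

def spherical(h, nugget, sill, rng):
    """Spherical variogram model; gamma(0) = 0 for exact interpolation.
    Parameters are hypothetical, not the study's fitted values."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, np.where(h == 0, 0.0, g))

def ordinary_kriging(xy, z, xy0, gamma):
    """OK prediction at one point xy0 from samples (xy, z): solve the
    standard kriging system with a Lagrange multiplier."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z

# Hypothetical OGDR samples (coordinates in km, dose rate in nGy/h):
xy = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
z = np.array([80., 95., 70., 110.])
est = ordinary_kriging(xy, z, np.array([0.5, 0.5]),
                       lambda h: spherical(h, nugget=5, sill=40, rng=2.0))
```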

  14. Point and interval forecasts of mortality rates and life expectancy: A comparison of ten principal component methods

    Directory of Open Access Journals (Sweden)

    Han Lin Shang

    2011-07-01

    Full Text Available Using the age- and sex-specific data of 14 developed countries, we compare the point and interval forecast accuracy and bias of ten principal component methods for forecasting mortality rates and life expectancy. The ten methods are variants and extensions of the Lee-Carter method. Based on one-step forecast errors, the weighted Hyndman-Ullah method provides the most accurate point forecasts of mortality rates and the Lee-Miller method is the least biased. For the accuracy and bias of life expectancy, the weighted Hyndman-Ullah method performs the best for female mortality and the Lee-Miller method for male mortality. While all methods underestimate variability in mortality rates, the more complex Hyndman-Ullah methods are more accurate than the simpler methods. The weighted Hyndman-Ullah method provides the most accurate interval forecasts for mortality rates, while the robust Hyndman-Ullah method provides the best interval forecast accuracy for life expectancy.
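
All ten methods compared above are variants or extensions of the Lee-Carter model, which can be fitted with a rank-1 SVD plus a random walk with drift for the period index. A minimal sketch (none of the jump-off or bias adjustments that the compared variants add):

```python
import numpy as np

def lee_carter_fit(log_m):
    """Classical Lee-Carter fit on a matrix of log mortality rates
    (rows = ages, columns = years): log m(x,t) ~ a_x + b_x * k_t."""
    a = log_m.mean(axis=1)                       # age pattern a_x
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                  # age loadings, sum(b) = 1
    k = s[0] * Vt[0] * U[:, 0].sum()             # period index k_t
    return a, b, k

def forecast_k(k, horizon):
    """Random walk with drift, the usual time-series model for k_t."""
    drift = (k[-1] - k[0]) / (len(k) - 1)
    return k[-1] + drift * np.arange(1, horizon + 1)
```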

  15. Applying genetics in inflammatory disease drug discovery

    DEFF Research Database (Denmark)

    Folkersen, Lasse; Biswas, Shameek; Frederiksen, Klaus Stensgaard

    2015-01-01

    , with several notable exceptions, the journey from a small-effect genetic variant to a functional drug has proven arduous, and few examples of actual contributions to drug discovery exist. Here, we discuss novel approaches of overcoming this hurdle by using instead public genetics resources as a pragmatic guide...... alongside existing drug discovery methods. Our aim is to evaluate human genetic confidence as a rationale for drug target selection....

  16. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent, full maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
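
The singleton-exclusion idea translates directly into a Watterson-type estimator: under the infinite-sites model the expected number of singleton sites is θ, so removing them rescales the denominator. A sketch of both estimators, as read from the description above (cf. Achaz [1]):

```python
def theta_w(num_segregating, n):
    """Standard Watterson estimator: theta = S / a_n,
    with a_n = sum_{i=1}^{n-1} 1/i for n sampled sequences."""
    a_n = sum(1.0 / i for i in range(1, n))
    return num_segregating / a_n

def theta_w_no_singletons(num_segregating, num_singletons, n):
    """Singleton-excluding Watterson-type estimator: since
    E[#singletons] = theta, E[S - #singletons] = theta * (a_n - 1)."""
    a_n = sum(1.0 / i for i in range(1, n))
    return (num_segregating - num_singletons) / (a_n - 1.0)

# Hypothetical sample: 50 sequences, 120 segregating sites, 30 singletons.
# Compare theta_w(120, 50) with theta_w_no_singletons(120, 30, 50).
```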

  17. Quantitative Methods for Measuring Repair Rates and Innate-Immune Cell Responses in Wounded Mouse Skin

    Directory of Open Access Journals (Sweden)

    Zhi Li

    2018-02-01

    Full Text Available In skin wounds, innate-immune cells clear up tissue debris and microbial contamination, and also secrete cytokines and other growth factors that impact repair processes such as re-epithelialization and wound closure. After injury, there is a rapid influx and efflux of immune cells at wound sites, yet the function of each innate cell population in skin repair is still under investigation. Flow cytometry is a valuable research tool for detecting and quantifying immune cells; however, in mouse back skin, the difficulty of extracting immune cells from a small area of skin due to tissue complexity has made cytometric analysis an underutilized tool. In this paper, we provide detailed methods on the digestion of lesion-specific skin without disrupting antigen expression, followed by multiplex cell staining that allows for identification of seven innate-immune populations, including rare subsets such as group-3 innate lymphoid cells (ILC3s), by flow-cytometry analysis. Furthermore, when studying the contributions of immune cells to tissue repair, an important metric to monitor is the size of the wound opening. Normal wounds close steadily, albeit at non-linear rates, while slow or stalled wound closure can indicate an underlying problem with the repair process. Calliper measurements are difficult and time-consuming to obtain and can require repeated sedation of experimental animals. We provide advanced methods for measuring wound openness: digital 3D image capture and semi-automated image processing that allow for unbiased, reliable measurements to be taken repeatedly over time.

  18. Consumer preferences for hearing aid attributes: a comparison of rating and conjoint analysis methods.

    Science.gov (United States)

    Bridges, John F P; Lataille, Angela T; Buttorff, Christine; White, Sharon; Niparko, John K

    2012-03-01

    Low utilization of hearing aids has drawn increased attention to the study of consumer preferences using both simple ratings (e.g., Likert scales) and conjoint analyses, but these two approaches often produce inconsistent results. This study aims to directly compare Likert scales and conjoint analysis in identifying important attributes associated with hearing aids among those with hearing loss. Seven attributes of hearing aids were identified through qualitative research: performance in quiet settings, comfort, feedback, frequency of battery replacement, purchase price, water and sweat resistance, and performance in noisy settings. The preferences of 75 outpatients with hearing loss were measured both with a 5-point Likert scale and with 8 paired-comparison conjoint tasks (the latter being analyzed using OLS [ordinary least squares] and logistic regression). Results were compared by examining implied willingness-to-pay and Pearson's rho. A total of 56 respondents (75%) provided complete responses. Two thirds of respondents were male, most had sensorineural hearing loss, and most were older than 50; 44% of respondents had never used a hearing aid. Both methods identified improved performance in noisy settings as the most valued attribute. Respondents were twice as likely to buy a hearing aid with better functionality in noisy environments (p < .001), and willingness to pay for this attribute ranged from US$2674 on the Likert scale to US$9000 in the conjoint analysis. The authors find a high level of concordance between the methods, a result that is in stark contrast with previous research. The authors conclude that their result stems from constraining the levels on the Likert scale.
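
Implied willingness-to-pay from the conjoint side is typically the ratio of an attribute coefficient to the price coefficient in a choice model. A hedged sketch on synthetic paired-comparison data; it does not reproduce the study's design or estimates:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic paired-comparison data: each row is the attribute-difference
# vector between two hypothetical hearing-aid profiles; y = 1 if the
# first profile was chosen. Columns: [noise_performance, comfort, price_usd].
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 2] *= 1000                              # price differences in dollars
true_beta = np.array([1.5, 0.6, -0.0005])    # invented utility weights
y = (X @ true_beta + rng.logistic(size=200) > 0).astype(int)

# Large C approximates an unpenalized logit choice model.
beta = LogisticRegression(C=1e6).fit(X, y).coef_[0]
wtp_noise = -beta[0] / beta[2]   # WTP = -beta_attribute / beta_price
print(f"Implied WTP for noise performance: ${wtp_noise:,.0f}")
```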

  19. The discussion on a new measure method of radon chamber leak rate

    International Nuclear Information System (INIS)

    Zhang Junkui; Tang Bing

    2010-01-01

    The radon chamber is the third standard radon source. The leak rate is the key parameter for the radon chamber to operate naturally and safely. One way to measure the leak rate is introduced. Experimental results show that this way of measuring the leak rate is simple and accurate. (authors)

  20. Taguchi Method for Development of Mass Flow Rate Correlation using Hydrocarbon Refrigerant Mixture in Capillary Tube

    Directory of Open Access Journals (Sweden)

    Shodiya Sulaimon

    2014-07-01

    Full Text Available The capillary tube is an important control device used in small vapor compression refrigeration systems such as window air-conditioners, household refrigerators and freezers. This paper develops a non-dimensional correlation based on the test results of the adiabatic capillary tube for the mass flow rate through the tube using a hydrocarbon refrigerant mixture of 89.3% propane and 10.7% butane (HCM. The Taguchi method, a statistical experimental design approach, was employed. This approach explores the economic benefit that lies in studies of this nature, where only a small number of experiments are required and yet valid results are obtained. Considering the effects of the capillary tube geometry and the inlet condition of the tube, dimensionless parameters were chosen. The new correlation was also based on the Buckingham Pi theorem. This correlation predicts 86.67% of the present experimental data within a relative deviation of -10% to +10%. The predictions by this correlation were also compared with results in published literature.

  1. Comparison of two different methods for the determination of erythrocyte sedimentation rate

    Directory of Open Access Journals (Sweden)

    Gülfer Öztürk

    2014-09-01

    Full Text Available Objective: The erythrocyte sedimentation rate (ESR) can be used for the follow-up of a number of diseases. In recent years, closed automated systems that measure the ESR directly from capped EDTA and citrate blood sample tubes have been developed. In this study, we aimed to compare and evaluate the agreement of the assay results of the iSed Alcor Auto-instrument and the Berkhun SDM60 Auto-instrument. Methods: K2EDTA and citrated blood samples were taken from 149 randomly selected outpatients. The ESR of blood samples in tubes containing K2EDTA was determined with the iSed Alcor Auto-instrument, whereas the Berkhun SDM60 Auto-instrument was used to determine the ESR of blood samples in tubes containing citrate. Results: The mean ± SD ESR was 24.48 ± 23.10 mm/hr (95% CI for the mean, 20.54–28.02 mm/hr) for the iSed Alcor Auto-instrument and 23.94 ± 17.24 mm/hr (95% CI for the mean, 21.15–26.73 mm/hr) for the Berkhun Auto-instrument. We found the mean difference between the two methods to be 0.336 mm/hr (95% CI, −2.06 to 1.39 mm/hr; P = 0.701). The correlation coefficient was 0.90 (P = 0.0001). There was no evidence of systematic bias (mean difference 0.3 mm/hr; limits of agreement, −20.6 to 21.2 mm/hr). Conclusion: The iSed Alcor Auto-instrument and the Berkhun SDM60 Auto-instrument may be used as alternatives to each other. However, especially high ESR results (>50 mm/hr) should be monitored carefully and checked against the Westergren method. J Clin Exp Invest 2014; 5(3): 371-375
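
The agreement statistics quoted above (mean difference and limits of agreement) are the standard Bland-Altman quantities. A minimal sketch:

```python
import numpy as np

def bland_altman(a, b):
    """Method-comparison statistics: mean difference (bias) and 95%
    limits of agreement (bias +/- 1.96 SD of the paired differences).
    a, b are paired ESR results (mm/hr) from the two instruments."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired readings:
# bias, loa = bland_altman(ised_results, berkhun_results)
```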

  2. Corrosion potential detection method, potential characteristic simulation method for reaction rate and plant monitoring system using the same

    International Nuclear Information System (INIS)

    Sakai, Masanori; Onaka, Noriyuki; Takahashi, Tatsuya; Yamanaka, Hiroshi.

    1995-01-01

    In a calculation and control device for a plant monitoring system, the concentrations of the species taking part in a given state of a reaction process, together with an actually measured potential of a material in that state, are substituted into a reaction rate equation derived from a reaction process model. In this way, a relation between the reaction rate (current) and the potential of the material is obtained. The potential at which the rates of the anodic and cathodic reactions making up the corrosion reaction become equal is determined numerically on the basis of an electrochemical mixed-potential principle; using the reaction rate equation, the reaction rate information for the corrosion reaction of the material and the concentrations of the species involved in the corrosion reaction are likewise obtained by numerical calculation. Simulation of the corrosion potential is thus possible in a manner corresponding to the actual reaction. Furthermore, even for locations that cannot be measured directly, the corrosion potential can be obtained by simulation. (N.H.)
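
    For illustration, a minimal mixed-potential calculation: the corrosion potential is found numerically as the potential at which the anodic and cathodic currents balance. The Tafel-type rate expressions and every parameter value below are assumptions for demonstration, not the plant model of the abstract.

```python
# Toy mixed-potential solve: all kinetics and parameters are assumed.
from scipy.optimize import brentq

def i_anodic(E, i0=1e-6, E_eq=-0.44, beta=0.06):
    return i0 * 10 ** ((E - E_eq) / beta)      # metal dissolution current

def i_cathodic(E, i0=1e-7, E_eq=0.16, beta=0.12):
    return i0 * 10 ** (-(E - E_eq) / beta)     # e.g., oxygen reduction current

# Corrosion potential: potential where net current is zero
E_corr = brentq(lambda E: i_anodic(E) - i_cathodic(E), -1.0, 1.0)
print(f"E_corr = {E_corr:.3f} V, i_corr = {i_anodic(E_corr):.2e} A/cm^2")
```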

  3. A New Method for Rating Hazard from Intense Sounds: Implications for Hearing Protection, Speech Intelligibility, and Situation Awareness

    National Research Council Canada - National Science Library

    Price, G. R

    2005-01-01

    The auditory hazard assessment algorithm for the human (AHAAH), developed by the U.S. Army Research Laboratory, is theoretically based and has been demonstrated to rate hazard from intense sounds much more accurately than existing methods...

  4. Analysis of the influence of the demand rate on the accident rate of a plant equipped with a single protective channel by Generalized Perturbation Theory (GPT) methods

    International Nuclear Information System (INIS)

    Franca Walter, F.L.; Alvim, A.C.M.; Silva, F.C. da; Melo e Frutuoso, P.F.

    1995-01-01

    The application of the GPT methodology to a reliability engineering problem of great practical interest is discussed: the analysis of the influence of the demand rate on the accident rate of a process plant equipped with a single protective channel. This problem has been solved in the literature by traditional methods, that is, for each demand rate value the system of differential equations that governs the system behavior (derived from a Markovian reliability model) is solved, and the resulting points are employed to generate the desired curve. Here this sensitivity analysis is performed by means of a GPT approach in order to show how it can simplify the calculations. Although an analytical solution is available for the above equations, the GPT approach required solving the system only for a few points (reference solutions), and the results agree very well with those published. (author). 9 refs, 4 figs
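
    To make the "traditional" route concrete, the toy model below (not the authors' equations) integrates a small Markov state system for each demand rate and takes the sensitivity of the accident rate by finite differences; GPT replaces this repeated solving with a few reference solutions. States, rates, and the mission time are all invented for the example.

```python
# Toy Markov reliability model: OK -> channel failed -> accident on demand.
import numpy as np
from scipy.integrate import solve_ivp

lam_c, mu = 1e-3, 1e-1  # channel failure and repair/test rates (1/h), assumed

def accident_freq(lam_d, t_end=1e4):
    """Mean accident rate over the mission time for demand rate lam_d."""
    def rhs(t, p):
        ok, failed, acc = p
        return [-lam_c * ok + mu * failed,
                lam_c * ok - (mu + lam_d) * failed,
                lam_d * failed]                 # accidents absorb probability
    sol = solve_ivp(rhs, (0, t_end), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-12)
    return sol.y[2, -1] / t_end

for lam_d in (1e-4, 1e-3, 1e-2):
    h = 1e-2 * lam_d  # finite-difference step
    sens = (accident_freq(lam_d + h) - accident_freq(lam_d - h)) / (2 * h)
    print(f"lam_d={lam_d:.0e}: rate={accident_freq(lam_d):.3e}, sensitivity={sens:.3e}")
```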

  5. Evaluation method of gas leakage rate from transportation casks of radioactive materials (gas leakage rates from scratches on O-ring surface)

    International Nuclear Information System (INIS)

    Aritomi, Masanori; Li Ninghua; Asano, Ryoji; Kawa, Tsunemichi

    2004-01-01

    A sealing function is essential for transportation and/or storage casks of radioactive materials, under both normal and accident conditions, in order to prevent radioactive materials from being released into the environment. In the safety analysis report, the release rate of radioactive materials into the environment is evaluated using the correlations specified in ANSI N14.5 (1987). The purposes of this work are to reveal the underlying problems with the ANSI N14.5 correlations as regards gas leakage from a scratch on an O-ring surface and from multiple leak paths, to provide a database for studying the evaluation of the leakage rate, and to propose an evaluation method. In this paper, the following insights were obtained: 1. If a characteristic value of a leak path is defined as D^4/a (where D is the diameter and a is the length), a scratch on the O-ring surface can be evaluated as a circular tube. 2. It is appropriate to use the width of the O-ring groove on the flange as the leak path length for elastomer O-rings. 3. Gas leakage rates from multiple leak paths of a transportation cask can be evaluated in the same manner as a single leak path if an effective D^4/a is introduced. (author)
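
    The characteristic-value idea in point 1 can be illustrated with laminar (Hagen-Poiseuille) flow, where the volumetric leak rate is proportional to D^4/a, so several paths combine into a single effective D^4/a as in point 3. The sketch below uses plain Poiseuille flow as a generic stand-in, not the full ANSI N14.5 correlations, and all dimensions are invented.

```python
# Poiseuille-type leak rate and an effective D^4/a for multiple paths.
import math

def poiseuille_leak(D, a, dP, mu):
    """Volumetric leak rate (m^3/s) through a circular path of diameter D (m)
    and length a (m) under pressure difference dP (Pa), gas viscosity mu (Pa*s)."""
    return math.pi * D**4 * dP / (128.0 * mu * a)

paths = [(5e-6, 4e-3), (3e-6, 4e-3)]          # (D, a) for two assumed leak paths
D4a_eff = sum(D**4 / a for D, a in paths)     # effective D^4/a of the set
dP, mu = 1.0e5, 1.8e-5                        # assumed pressure drop and viscosity (air)
Q_total = math.pi * D4a_eff * dP / (128.0 * mu)
print(f"effective D^4/a = {D4a_eff:.3e} m^3, total leak = {Q_total:.3e} m^3/s")
```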

  6. Estimating Population Turnover Rates by Relative Quantification Methods Reveals Microbial Dynamics in Marine Sediment.

    Science.gov (United States)

    Kevorkian, Richard; Bird, Jordan T; Shumaker, Alexander; Lloyd, Karen G

    2018-01-01

    The difficulty involved in quantifying biogeochemically significant microbes in marine sediments limits our ability to assess interspecific interactions, population turnover times, and niches of uncultured taxa. We incubated surface sediments from Cape Lookout Bight, North Carolina, USA, anoxically at 21°C for 122 days. Sulfate decreased until day 68, after which methane increased, with hydrogen concentrations consistent with the predicted values of an electron donor exerting thermodynamic control. We measured turnover times using two relative quantification methods, quantitative PCR (qPCR) and the product of 16S gene read abundance and total cell abundance (FRAxC, which stands for "fraction of read abundance times cells"), to estimate the population turnover rates of uncultured clades. Most 16S rRNA reads were from deeply branching uncultured groups, and ∼98% of 16S rRNA genes did not abruptly shift in relative abundance when sulfate reduction gave way to methanogenesis. Uncultured Methanomicrobiales and Methanosarcinales increased at the onset of methanogenesis, with population turnover times estimated from qPCR at 9.7 ± 3.9 and 12.6 ± 4.1 days, respectively. These were consistent with FRAxC turnover times of 9.4 ± 5.8 and 9.2 ± 3.5 days, respectively. Uncultured Syntrophaceae, which are possibly fermentative syntrophs of methanogens, and uncultured Kazan-3A-21 archaea also increased at the onset of methanogenesis, with FRAxC turnover times of 14.7 ± 6.9 and 10.6 ± 3.6 days. Kazan-3A-21 may therefore either perform methanogenesis or form a fermentative syntrophy with methanogens. Three genera of sulfate-reducing bacteria, Desulfovibrio, Desulfobacter, and Desulfobacterium, increased in the first 19 days before declining rapidly during sulfate reduction. We conclude that population turnover times on the order of days can be measured robustly in organic-rich marine sediment, and the transition from sulfate-reducing to methanogenic conditions stimulates
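
    A hedged sketch of turning relative quantification into a turnover estimate: compute FRAxC as read fraction times total cell count at two time points and convert the change into an apparent doubling time (the authors' exact estimator may differ). All counts below are hypothetical.

```python
# FRAxC-style apparent turnover (doubling) time from two time points.
import math

def fraxc(read_fraction, total_cells):
    """Fraction of read abundance times cells, per the definition above."""
    return read_fraction * total_cells

# Hypothetical counts for one clade around the onset of methanogenesis
t1, t2 = 68.0, 90.0             # incubation days
n1 = fraxc(0.004, 2.0e9)        # estimated cells/cm^3 at t1
n2 = fraxc(0.020, 2.1e9)        # estimated cells/cm^3 at t2

doubling_time = (t2 - t1) * math.log(2) / math.log(n2 / n1)
print(f"apparent turnover (doubling) time: {doubling_time:.1f} days")
```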

  7. Analytical methods of leakage rate estimation from a containment under a LOCA

    International Nuclear Information System (INIS)

    Chun, M.H.

    1981-01-01

    Three of the most prominent maximum flow rate formulas are identified from among the many existing models. Outlines of the three limiting mass flow rate models are given, along with computational procedures for estimating the approximate amount of fission products released from a containment to the environment for a given characteristic hole size (containment-isolation failure) and containment pressure and temperature under a loss of coolant accident. Sample calculations are performed using the critical ideal gas flow rate model and Moody's graphs for the maximum two-phase flow rates, and the results are compared with the values obtained from the mass leakage rate formula of the CONTEMPT-LT code for a converging nozzle and sonic flow. It is shown that the critical ideal gas flow rate formula gives results almost comparable to those obtained from Moody's model. It is also found that the more conservative approach to estimating the leakage rate from a containment under a LOCA is to use the maximum ideal gas flow rate equation rather than the mass leakage rate formula of CONTEMPT-LT. (author)
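
    The critical ideal-gas flow rate used in such sample calculations follows the standard choked-flow formula; the containment conditions, gas properties, and hole size below are assumed for illustration only.

```python
# Choked (sonic) ideal-gas mass flow through an orifice; assumed conditions.
import math

def choked_mass_flow(A, P0, T0, gamma=1.33, M=0.018, Cd=1.0, R=8.314):
    """Maximum ideal-gas mass flow rate (kg/s) through area A (m^2) from
    stagnation pressure P0 (Pa) and temperature T0 (K); defaults ~ steam.
    m_dot = Cd*A*P0*sqrt(gamma*M/(R*T0)) * (2/(gamma+1))^((gamma+1)/(2(gamma-1)))"""
    crit = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return Cd * A * P0 * math.sqrt(gamma * M / (R * T0)) * crit

A = math.pi * (0.005 ** 2)   # assumed 10 mm diameter hole
print(f"m_dot = {choked_mass_flow(A, P0=4.0e5, T0=400.0):.3f} kg/s")
```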

  8. System and method for determining an ammonia generation rate in a three-way catalyst

    Science.gov (United States)

    Sun, Min; Perry, Kevin L; Kim, Chang H

    2014-12-30

    A system according to the principles of the present disclosure includes a rate determination module, a storage level determination module, and an air/fuel ratio control module. The rate determination module determines an ammonia generation rate in a three-way catalyst based on a reaction efficiency and a reactant level. The storage level determination module determines an ammonia storage level in a selective catalytic reduction (SCR) catalyst positioned downstream from the three-way catalyst based on the ammonia generation rate. The air/fuel ratio control module controls an air/fuel ratio of an engine based on the ammonia storage level.
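
    A rough sketch of the control structure described: a rate module estimates ammonia generation in the three-way catalyst, a storage module integrates it into an SCR storage level, and an air/fuel module acts on that level. All models, gains, and thresholds are invented placeholders, not the patented calibration.

```python
# Toy chain of the three modules described in the abstract; all models assumed.
def ammonia_generation_rate(reaction_efficiency, reactant_level):
    return reaction_efficiency * reactant_level           # mol/s, toy model

def update_storage(storage, rate, consumption, dt):
    return max(0.0, storage + (rate - consumption) * dt)  # mol stored in SCR

def air_fuel_command(storage, target=0.5, gain=0.2):
    # Lean out (raise ratio) when storage is ample, enrich when depleted
    return 14.7 + gain * (storage - target)

storage = 0.0
for step in range(5):
    rate = ammonia_generation_rate(0.8, 0.01)
    storage = update_storage(storage, rate, consumption=0.002, dt=1.0)
    print(f"t={step}s storage={storage:.3f} mol AFR={air_fuel_command(storage):.2f}")
```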

  9. Discovery of the rare HLA-B*39:77 allele in an unrelated Taiwanese bone marrow stem cell donor using the sequence-based typing method.

    Science.gov (United States)

    Yang, K L; Lee, S K; Lin, P Y

    2013-08-01

    We detected a rare HLA-B locus allele, B*39:77, in a Taiwanese unrelated marrow stem cell donor during our routine HLA sequence-based typing (SBT) exercise for a possible haematopoietic stem cell donation. In exons 2, 3 and 4, the DNA sequence of B*39:77 is identical to that of B*39:01:01:01 except for one nucleotide at position 733 (G->A) in exon 4. The nucleotide variation causes a single amino acid alteration at residue 221 (Gly->Ser). B*39:77 was probably derived from a nucleotide substitution event involving B*39:01:01:01. The probable HLA-A, -B, -C, -DRB1 and -DQB1 haplotype in association with B*39:77 may be deduced as A*02:01-B*39:77-C*07:02-DRB1*08:03-DQB1*06:01. Our discovery of B*39:77 adds to the known polymorphism of B*39 variants in the Taiwanese population. © 2013 John Wiley & Sons Ltd.
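
    The reported positions are self-consistent assuming the standard 24-residue HLA class I leader peptide: coding nucleotide 733 falls in codon 245, which corresponds to mature-protein residue 245 - 24 = 221, as the short check below shows.

```python
# Map a coding-sequence nucleotide position to codon and mature residue.
LEADER_RESIDUES = 24  # assumed HLA class I signal-peptide length

def codon_and_residue(cds_position, leader=LEADER_RESIDUES):
    codon = (cds_position - 1) // 3 + 1   # 1-based codon index
    return codon, codon - leader          # mature-protein residue number

codon, residue = codon_and_residue(733)
print(f"nt 733 -> codon {codon} -> mature residue {residue}")  # 245 -> 221
```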

  10. Comparison and limitations of three different bulk etch rate measurement methods used for gamma irradiated PM-355 detectors

    Energy Technology Data Exchange (ETDEWEB)

    Fazal-ur-Rehman E-mail: fazalr@kfupm.edu.sa; Abu-Jarad, F.; Al-Jarallah, M.I.; Farhat, M

    2001-06-01

    Samples of Nuclear Track Detectors (PM-355) were exposed to high gamma doses from 1×10^5 Gy (10 Mrad) up to 1.2×10^6 Gy (120 Mrad) in increments of 1×10^5 Gy (10 Mrad). The gamma source was a 9.03 PBq (244 kCi) Co-60 source used for sterilization of medical syringes. The bulk etch rate (V_b) was measured for the various high gamma doses by three different methods: (1) the thickness change method; (2) the mass change method; (3) the fission track diameter method. The study gives a comparison and the limitations of these three methods for bulk etch rate measurement in the detectors as a function of high gamma dose. The track etch rate (V_t) and the sensitivity (V) of the detector were also measured using the fission track diameter method. It was observed that V_b increases with the gamma absorbed dose at a fixed etching time for each bulk etch measuring method. The bulk etch rate decreases exponentially with the etching time at a fixed gamma absorbed dose in all three methods. The thickness change and mass change methods were successfully applied to measure V_b at gamma doses up to 1.2×10^6 Gy (120 Mrad). The bulk etch rate determined by the mass change and thickness change methods was almost the same at a given gamma dose and etching time, whereas it was considerably lower with the fission track diameter method owing to its limitations at higher doses. In this method it was also not possible to measure the fission fragment track diameters at higher doses, because of the quick disappearance of the fission tracks, and therefore V_b could not be estimated at higher gamma doses.

  11. Evaluation of trauma care using TRISS method: the role of adjusted misclassification rate and adjusted w-statistic

    Directory of Open Access Journals (Sweden)

    Bytyçi Cen I

    2009-01-01

    Background: Major trauma is a leading cause of death worldwide. Evaluation of trauma care using the Trauma and Injury Severity Score (TRISS) method focuses on trauma outcome (deaths and survivors), and the TRISS misclassification rate is used for testing the method. By calculating the w-statistic, the difference between observed and TRISS-expected survivors, we compare our trauma care results with the TRISS standard. Aim: The aim of this study is to analyze the interaction between the misclassification rate and the w-statistic and to adjust these parameters to be closer to the truth. Materials and methods: Analysis of the components of the TRISS misclassification rate and the w-statistic against actual trauma outcome. Results: The false negative (FN) component (deaths unexpected by the TRISS method) has two parts: preventable (Pd) and non-preventable (nonPd) trauma deaths. Pd reflects inappropriate trauma care at an institution, whereas non-preventable trauma deaths reflect errors in the TRISS method. Removing patients with preventable trauma deaths gives an adjusted misclassification rate: (FP + FN - Pd)/N, or (b + c - Pd)/N. Subtracting nonPd from the FN value in the w-statistic formula gives an adjusted w-statistic: [FP - (FN - nonPd)]/N, that is, (FP - Pd)/N, or (b - Pd)/N. Conclusion: Because the adjusted formulas remove inappropriate trauma care from the assessment of the method, and the method's error from the assessment of trauma care, the adjusted TRISS misclassification rate and adjusted w-statistic give more realistic results and may be used in trauma outcome research.
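
    The adjusted formulas lend themselves to direct implementation; the sketch below uses the text's notation (b = FP unexpected survivors, c = FN unexpected deaths) with illustrative counts.

```python
# Adjusted TRISS statistics exactly as defined above; counts are illustrative.
def adjusted_trauma_stats(FP, FN, Pd, N):
    """FP (b): unexpected survivors; FN (c): unexpected deaths;
    Pd: preventable deaths; nonPd = FN - Pd: non-preventable deaths."""
    nonPd = FN - Pd
    adj_misclassification = (FP + FN - Pd) / N    # (b + c - Pd)/N
    adj_w = (FP - (FN - nonPd)) / N               # reduces to (FP - Pd)/N
    return adj_misclassification, adj_w

mis, w = adjusted_trauma_stats(FP=12, FN=9, Pd=4, N=300)
print(f"adjusted misclassification rate = {mis:.3f}, adjusted w = {w:.3f}")
```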

  12. Guided Discovery with Socratic Questioning

    Directory of Open Access Journals (Sweden)

    M. Hakan Türkçapar

    2015-04-01

    “The Socratic method” is a way of teaching philosophical thinking and knowledge by asking questions. It was first used in ancient times by the Greek philosopher Socrates, who taught his followers by asking questions; these conversations between them are known as “Socratic dialogues”. In this methodology, no new knowledge is taught to the individual; rather, the individual is guided to remember and rediscover what was formerly known. The main method used in cognitive therapy is guided discovery, and there are various methods of guided discovery in cognitive therapy. The form of verbal exchange between therapist and client used during cognitive behavioral therapy is known as “Socratic questioning”. In this method the goal is to have the client rediscover, through a series of questions, a piece of knowledge which they could otherwise know but are not presently conscious of. Socratic questioning consists of several steps, including: identifying the problem by listening to the client and making reflections; finding alternatives by examining and evaluating; re-identification using the newly rediscovered information and questioning the old distorted belief; and reaching a new conclusion and applying it. Question types used during these procedures are: questions for collecting information, questions revealing meanings, questions revealing beliefs, questions about behaviours during similar past experiences, analytic questions, and analytic synthesis questions. In order to make the patient feel understood, it is important to be empathetic and to summarize the problem during the interview. In this text, the steps of Socratic questioning-guided discovery are reviewed, with sample dialogues provided for each step. [JCBPR 2015; 4(1): 47-53]

  13. Methods of neutron spectrum calculation from measured reaction rates in saips. Part 1. Review of mathematical methods

    International Nuclear Information System (INIS)

    Bondars, Kh.Ya.; Lapenas, A.A.

    1981-01-01

    The algorithms currently most common for calculating neutron spectra from measured reaction rates were adapted for, or implemented on, an ES EhVM computer operating under the control of OS ES. These programs, together with the neutron cross-section and spectrum libraries, are part of the computerized information system SAIPS. The present article describes the basic mathematical concepts used in the algorithms of the SAIPS calculation programs.
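
    As a generic illustration of the problem these programs address (not SAIPS's actual algorithms), a discretized spectrum phi can be recovered from measured reaction rates R = S @ phi, where S holds group-averaged cross sections, via non-negative least squares; all values below are synthetic.

```python
# Generic spectrum unfolding from reaction rates by non-negative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_groups, n_reactions = 20, 8
S = rng.uniform(0.0, 1.0, (n_reactions, n_groups))   # response matrix (a.u.)
phi_true = np.exp(-0.3 * np.arange(n_groups))        # assumed spectrum shape
R = S @ phi_true * (1 + rng.normal(0, 0.02, n_reactions))  # noisy measured rates

phi_est, residual = nnls(S, R)   # non-negative solution of S @ phi ~ R
print(f"residual norm = {residual:.3e}")
```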

  14. Culture-independent discovery of natural products from soil metagenomes.

    Science.gov (United States)

    Katz, Micah; Hover, Bradley M; Brady, Sean F

    2016-03-01

    Bacterial natural products have proven to be invaluable starting points in the development of many currently used therapeutic agents. Unfortunately, traditional culture-based methods for natural product discovery have been deemphasized by pharmaceutical companies due in large part to high rediscovery rates. Culture-independent, or "metagenomic," methods, which rely on the heterologous expression of DNA extracted directly from environmental samples (eDNA), have the potential to provide access to metabolites encoded by a large fraction of the earth's microbial biosynthetic diversity. As soil is both ubiquitous and rich in bacterial diversity, it is an appealing starting point for culture-independent natural product discovery efforts. This review provides an overview of the history of soil metagenome-driven natural product discovery studies and elaborates on the recent development of new tools for sequence-based, high-throughput profiling of environmental samples used in discovering novel natural product biosynthetic gene clusters. We conclude with several examples of these new tools being employed to facilitate the recovery of novel secondary metabolite encoding gene clusters from soil metagenomes and the subsequent heterologous expression of these clusters to produce bioactive small molecules.

  15. Discovery of natural resources

    Science.gov (United States)

    Guild, P.W.

    1976-01-01

    Mankind will continue to need ores of more or less the types and grades used today to supply its needs for new mineral raw materials, at least until fusion or some other relatively cheap, inexhaustible energy source is developed. Most deposits being mined today were exposed at the surface or found by relatively simple geophysical or other prospecting techniques, but many of these will be depleted in the foreseeable future. The discovery of deeper or less obvious deposits to replace them will require the conjunction of science and technology to deduce the laws that governed the concentration of elements into ores and to detect and evaluate the evidence of their whereabouts. Great theoretical advances are being made to explain the origins of ore deposits and understand the general reasons for their localization. These advances have unquestionable value for exploration. Even a large deposit is, however, very small, and, with few exceptions, it was formed under conditions that have long since ceased to exist. The explorationist must suppress a great deal of "noise" to read and interpret correctly the "signals" that can define targets and guide the drilling required to find it. Is enough being done to ensure the long-term availability of mineral raw materials? The answer is probably no, in view of the expanding consumption and the difficulty of finding new deposits, but ingenuity, persistence, and continued development of new methods and tools to add to those already at hand should put off the day of "doing without" for many years. The possibility of resource exhaustion, especially in view of the long and increasing lead time needed to carry out basic field and laboratory studies in geology, geophysics, and geochemistry and to synthesize and analyze the information gained from them counsels against any letting down of our guard, however (17). Research and exploration by government, academia, and industry must be supported and encouraged; we cannot wait until an eleventh

  16. Bioinformatics for discovery of microbiome variation

    DEFF Research Database (Denmark)

    Brejnrod, Asker Daniel

    Sequencing-based tools have revolutionized microbiology in recent years. High-throughput DNA sequencing has allowed high-resolution studies of microbial life in many different environments at unprecedentedly low cost. These culture-independent methods have aided the discovery of novel bacteria and, together with various molecular methods, the building of hypotheses about, for example, the impact of a copper-contaminated soil. The introduction is a broad introduction to the field of microbiome research, with a focus on the technologies that enable these discoveries and on how some of the broader issues relate to this thesis. Chapter 1, "Large-scale benchmarking reveals false discoveries and count transformation sensitivity in 16S rRNA gene amplicon data analysis methods used in microbiome studies", benchmarked the performance of a variety of popular statistical methods for discovering differentially abundant bacteria between groups.

  17. The Impact of Accounting Methods on Cost Reduction Rates in Defense Aerospace Weapons System Programs

    Science.gov (United States)

    1988-12-01

    Generally accepted accounting principles (GAAP), adhered to in U.S. industry, allow some flexibility in accounting. Under GAAP, accounting areas such as depreciation, inventory, and investment tax credit permit alternative treatments. The effects of these three accounting variables (depreciation, inventory and investment tax credit) in predicting cost reduction rates are studied. Of the three accounting variables, only inventory...

  18. A Tractable Method for Describing Complex Couplings between Neurons and Population Rate.

    Science.gov (United States)

    Gardella, Christophe; Marre, Olivier; Mora, Thierry

    2016-01-01

    Neurons within a population are strongly correlated, but how to simply capture these correlations is still a matter of debate. Recent studies have shown that the activity of each cell is influenced by the population rate, defined as the summed activity of all neurons in the population. However, an explicit, tractable model for these interactions is still lacking. Here we build a probabilistic model of population activity that reproduces the firing rate of each cell, the distribution of the population rate, and the linear coupling between them. This model is tractable, meaning that its parameters can be learned in a few seconds on a standard computer even for large population recordings. We inferred our model for a population of 160 neurons in the salamander retina. In this population, single-cell firing rates depended in unexpected ways on the population rate. In particular, some cells had a preferred population rate at which they were most likely to fire. These complex dependencies could not be explained by a linear coupling between the cell and the population rate. We designed a more general, still tractable model that could fully account for these nonlinear dependencies. We thus provide a simple and computationally tractable way to learn models that reproduce the dependence of each neuron on the population rate.
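
    A minimal sketch, not the authors' model, of measuring how a single cell's firing depends on the population rate: bin the spike trains and estimate P(cell fires | population rate = k) empirically. With the surrogate independent data below the profile is flat; a cell with a "preferred population rate", as reported above, would instead show a peaked profile.

```python
# Empirical cell-to-population-rate coupling from binned spikes; surrogate data.
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_bins = 160, 20000
spikes = rng.binomial(1, 0.02, (n_cells, n_bins))  # surrogate binned spike trains

cell = spikes[0]
pop_rate = spikes.sum(axis=0) - cell               # population rate, excluding the cell

for k in range(0, 12, 2):
    mask = pop_rate == k
    if mask.sum() > 100:                           # require enough bins at this rate
        print(f"P(cell fires | pop rate = {k}) = {cell[mask].mean():.4f}")
```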

  19. Constrained least squares methods for estimating reaction rate constants from spectroscopic data

    NARCIS (Netherlands)

    Bijlsma, S.; Boelens, H.F.M.; Hoefsloot, H.C.J.; Smilde, A.K.

    2002-01-01

    Model errors, experimental errors and instrumental noise influence the accuracy of reaction rate constant estimates obtained from spectral data recorded in time during a chemical reaction. In order to improve the accuracy, which can be divided into the precision and bias of reaction rate constant
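
    For a flavor of the approach in a deliberately simplified setting (a single first-order reaction rather than spectroscopic data), a rate constant can be estimated by least squares with a non-negativity bound; data and parameters below are synthetic.

```python
# Bounded least-squares estimate of a first-order rate constant; synthetic data.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 10.0, 40)
k_true, c0 = 0.35, 1.0
c_obs = c0 * np.exp(-k_true * t) + np.random.default_rng(4).normal(0, 0.01, t.size)

def residuals(params):
    k, c = params
    return c * np.exp(-k * t) - c_obs   # model minus observation

# Constrain both the rate constant and initial concentration to be non-negative
fit = least_squares(residuals, x0=[0.1, 0.8], bounds=([0.0, 0.0], [np.inf, np.inf]))
print(f"k = {fit.x[0]:.3f} (true {k_true})")
```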

  20. The sediment accumulation rates measured in Lake Poyang using 210Pb dating method

    International Nuclear Information System (INIS)

    Wang Xiuyu; Zeng Erkang; Wan Yusong; Liu Xiaosong

    1987-01-01

    The sediment accumulation rates were estimated from the vertical distribution of excess 210Pb measured in sediment cores collected from Lake Poyang, Jiangxi Province, China. The rates varied with differences in hydrology. The sedimentation rates are lowest in the middle region of the lake, where the rate could not be determined for two of the samples and was 0.14 cm/a for the third. The sedimentation rates are lower in the northeast basin section, averaging 0.19 cm/a. In the diffusion area of the lake, into which five rivers discharge, sand deposition is greater than in the other areas of the lake, the hydrological factors vary, and the rates range from 0.08 cm/a to 0.28 cm/a. Because of sand from the Yangtze River, the sedimentation rates are highest in the waterway section of the lake, ranging from 0.23 to 0.62 cm/a.
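
    Profiles of this kind are commonly converted to rates with the constant-initial-concentration (CIC) 210Pb model, in which excess activity decays with depth as A(z) = A0 * exp(-lambda*z/S), so the rate S follows from the slope of ln(A) versus depth. The paper's exact model is not stated, and the data below are invented.

```python
# CIC-style sedimentation rate from an excess 210Pb depth profile; synthetic data.
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3   # 210Pb decay constant, 1/a (half-life 22.3 a)

depth = np.array([2.0, 6.0, 10.0, 14.0, 18.0])          # cm
excess_pb210 = np.array([52.0, 31.0, 18.0, 11.0, 6.5])  # Bq/kg, invented

# ln A(z) = ln A0 - (lambda/S) * z, so the fitted slope equals -lambda/S
slope, intercept = np.polyfit(depth, np.log(excess_pb210), 1)
S = -LAMBDA_PB210 / slope
print(f"sedimentation rate = {S:.2f} cm/a")
```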