WorldWideScience

Sample records for genomic evaluation methods

  1. International genomic evaluation methods for dairy cattle

    Science.gov (United States)

    Background Genomic evaluations are rapidly replacing traditional evaluation systems used for dairy cattle selection. Economies of scale in genomics promote cooperation across country borders. Genomic information can be transferred across countries using simple conversion equations, by modifying mult...

  2. Accurate evaluation and analysis of functional genomics data and methods

    Science.gov (United States)

    Greene, Casey S.; Troyanskaya, Olga G.

    2016-01-01

    The development of technology capable of inexpensively performing large-scale measurements of biological systems has generated a wealth of data. Integrative analysis of these data holds the promise of uncovering gene function, regulation, and, in the longer run, understanding complex disease. However, their analysis has proved very challenging, as it is difficult to quickly and effectively assess the relevance and accuracy of these data for individual biological questions. Here, we identify biases that present challenges for the assessment of functional genomics data and methods. We then discuss evaluation methods that, taken together, begin to address these issues. We also argue that the funding of systematic data-driven experiments and of high-quality curation efforts will further improve evaluation metrics so that they more accurately assess functional genomics data and methods. Such metrics will allow researchers in the field of functional genomics to continue to answer important biological questions in a data-driven manner. PMID:22268703

  3. Evaluation of methods and marker Systems in Genomic Selection of oil palm (Elaeis guineensis Jacq.).

    Science.gov (United States)

    Kwong, Qi Bin; Teh, Chee Keng; Ong, Ai Ling; Chew, Fook Tim; Mayes, Sean; Kulaveerasingam, Harikrishna; Tammi, Martti; Yeoh, Suat Hui; Appleton, David Ross; Harikrishna, Jennifer Ann

    2017-12-11

    Genomic selection (GS) uses genome-wide markers in an attempt to accelerate genetic gain in breeding programs of both animals and plants. This approach is particularly useful for perennial crops such as oil palm, which have long breeding cycles, and for which the optimal method for GS is still under debate. In this study, we evaluated the effect of different marker systems and modeling methods for implementing GS in an introgressed dura family derived from a Deli dura x Nigerian dura (Deli x Nigerian) cross with 112 individuals. This family is an important breeding source for developing new mother palms for superior oil yield and bunch characters. The traits of interest selected for this study were fruit-to-bunch (F/B), shell-to-fruit (S/F), kernel-to-fruit (K/F), mesocarp-to-fruit (M/F), oil per palm (O/P) and oil-to-dry mesocarp (O/DM). The marker systems evaluated were simple sequence repeats (SSRs) and single nucleotide polymorphisms (SNPs). RR-BLUP, Bayes A, Bayes B, Bayes Cπ, LASSO, Ridge Regression and two machine learning methods (SVM and Random Forest) were used to evaluate GS accuracy for the traits. The kinship coefficient between individuals in this family ranged from 0.35 to 0.62. S/F and O/DM had the highest genomic heritability, whereas F/B and O/P had the lowest. Accuracies using 135 SSRs were low, around 0.20 across traits. The average accuracy of the machine learning methods was 0.24, compared to 0.20 for the other methods. The trait with the highest mean accuracy was F/B (0.28), while the lowest were M/F and O/P (both 0.18). Using whole-genome SNPs improved the accuracies for all traits, especially O/DM (0.43), S/F (0.39) and M/F (0.30). The average accuracy of the machine learning methods was then 0.32, compared to 0.31 for the other methods. Owing to their high genomic resolution, whole-genome SNPs dramatically improved the efficiency of GS for oil palm and are recommended for dura breeding programs. Machine learning slightly...
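
    The record above compares RR-BLUP, Bayesian and machine-learning models for genomic prediction. As a minimal sketch of the RR-BLUP idea only (all genotypes, marker effects and the shrinkage parameter below are simulated for illustration, not taken from the study), marker effects can be estimated by ridge regression on centered 0/1/2 genotype codes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 200                            # toy sizes: individuals, markers
Z = rng.integers(0, 3, size=(n, p)).astype(float)  # 0/1/2 genotype codes
Z -= Z.mean(axis=0)                       # center each marker
true_a = rng.normal(0.0, 0.1, p)          # simulated marker effects
y = Z @ true_a + rng.normal(0.0, 1.0, n)  # phenotypes = genetics + noise

# RR-BLUP: all markers share one shrinkage parameter (lambda, assumed
# here; in practice it is derived from variance components).
lam = 1.0
a_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)
gebv = Z @ a_hat                          # genomic breeding values

# Accuracy: correlation between predicted and true genetic values.
accuracy = np.corrcoef(gebv, Z @ true_a)[0, 1]
```

    Note that studies such as this one report cross-validated accuracy on held-out individuals, which is substantially lower than the in-sample correlation computed here.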

  4. Evaluation of the 2b-RAD method for genomic selection in scallop breeding.

    Science.gov (United States)

    Dou, Jinzhuang; Li, Xue; Fu, Qiang; Jiao, Wenqian; Li, Yangping; Li, Tianqi; Wang, Yangfan; Hu, Xiaoli; Wang, Shi; Bao, Zhenmin

    2016-01-12

    The recently developed 2b-restriction site-associated DNA (2b-RAD) sequencing method provides a cost-effective and flexible genotyping platform for aquaculture species lacking sufficient genomic resources. Here, we evaluated the performance of this method in the genomic selection (GS) of Yesso scallop (Patinopecten yessoensis) through simulation and real data analyses using six statistical models. Our simulation analysis revealed that the prediction accuracies obtained using the 2b-RAD markers were slightly lower than those obtained using all polymorphic loci in the genome. Furthermore, a small subset of markers obtained from a reduced tag representation (RTR) library presented comparable performance to that obtained using all markers, making RTR an attractive approach for GS purposes. The six GS models exhibited variable prediction accuracy depending on the scenario (e.g., heritability, sample size, population structure), but Bayes-alphabet and BLUP-based models generally outperformed the other models. Finally, we performed the evaluation using an empirical dataset composed of 349 Yesso scallops derived from five families. The prediction accuracy for this empirical dataset could reach 0.4 with the optimal GS models. In summary, their genotyping flexibility and cost-effectiveness make 2b-RAD an ideal genotyping platform for genomic selection in aquaculture breeding programs.

  5. Allele coding in genomic evaluation

    DEFF Research Database (Denmark)

    Strandén, Ismo; Christensen, Ole Fredslund

    2011-01-01

    Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first, and then genomic breeding values are obtained by summing marker effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous...... this centered allele coding. This study considered effects of different allele coding methods on inference. Both marker-based and equivalent models were considered, and restricted maximum likelihood and Bayesian methods were used in inference. Results: Theoretical derivations showed that parameter...

  6. Allele coding in genomic evaluation

    Directory of Open Access Journals (Sweden)

    Christensen Ole F

    2011-06-01

    Full Text Available Abstract Background Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first, and then genomic breeding values are obtained by summing marker effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous genotype of the first allele, one for the heterozygote, and two for the homozygous genotype for the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of regression coefficients is zero within each marker. We call this centered allele coding. This study considered effects of different allele coding methods on inference. Both marker-based and equivalent models were considered, and restricted maximum likelihood and Bayesian methods were used in inference. Results Theoretical derivations showed that parameter estimates and estimated marker effects in marker-based models are the same irrespective of the allele coding, provided that the model has a fixed general mean. For the equivalent models, the same results hold, even though different allele coding methods lead to different genomic relationship matrices. Calculated genomic breeding values are independent of allele coding when the estimate of the general mean is included into the values. Reliabilities of estimated genomic breeding values calculated using elements of the inverse of the coefficient matrix depend on the allele coding because different allele coding methods imply different models. Finally, allele coding affects the mixing of Markov chain Monte Carlo algorithms, with the centered coding being...
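
    The two codings described above can be checked numerically. This small sketch (genotypes and marker effects invented) shows that switching from 0/1/2 coding to centered coding shifts every animal's value by the same constant, which a fixed general mean absorbs, leaving breeding-value differences unchanged:

```python
import numpy as np

# Hypothetical genotypes for 4 individuals at 3 markers,
# coded 0/1/2 = copies of the second allele.
M = np.array([[0, 1, 2],
              [1, 1, 0],
              [2, 0, 1],
              [1, 2, 2]], dtype=float)

# Centered coding: subtract each marker's mean so the regression
# coefficients average zero within each marker.
M_centered = M - M.mean(axis=0)

a = np.array([0.2, -0.1, 0.05])   # assumed marker effects
g_012 = M @ a
g_centered = M_centered @ a

# The two codings differ by a per-marker constant, so after removing
# the general mean the breeding values coincide exactly.
print(np.allclose(g_012 - g_012.mean(), g_centered - g_centered.mean()))  # True
```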

  7. Evaluating genomic DNA extraction methods from human whole blood using endpoint and real-time PCR assays.

    Science.gov (United States)

    Koshy, Linda; Anju, A L; Harikrishnan, S; Kutty, V R; Jissa, V T; Kurikesu, Irin; Jayachandran, Parvathy; Jayakumaran Nair, A; Gangaprasad, A; Nair, G M; Sudhakaran, P R

    2017-02-01

    The extraction of genomic DNA is the crucial first step in large-scale epidemiological studies. Though there are many popular DNA isolation methods from human whole blood, only a few reports have compared their efficiencies using both end-point and real-time PCR assays. Genomic DNA was extracted from coronary artery disease patients using solution-based conventional protocols such as the phenol-chloroform/proteinase-K method and a non-phenolic non-enzymatic Rapid-Method, which were evaluated and compared vis-à-vis a commercially available silica column-based Blood DNA isolation kit. The appropriate method for efficiently extracting relatively pure DNA was assessed based on the total DNA yield, concentration, purity ratios (A260/A280 and A260/A230), spectral profile and agarose gel electrophoresis analysis. The quality of the isolated DNA was further analysed for PCR inhibition using a murine-specific ATP1A3 qPCR assay and an mtDNA/Y-chromosome ratio determination assay. The suitability of the extracted DNA for downstream applications such as end-point SNP genotyping was tested using PCR-RFLP analysis of the AGTR1-1166A>C variant, a mirSNP having pharmacogenetic relevance in cardiovascular diseases. Compared to the traditional phenol-chloroform/proteinase-K method, our results indicated the Rapid-Method to be a more suitable protocol for genomic DNA extraction from human whole blood in terms of DNA quantity, quality, safety, processing time and cost. The Rapid-Method, which is based on a simple salting-out procedure, is not only safe and cost-effective, but also has the added advantage of being scalable to process variable sample volumes, enabling its application in large-scale epidemiological studies.
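
    The purity criteria used above are easy to compute from raw absorbance readings. A small sketch (the readings are illustrative; the 50 ng/µL conversion for 1.0 A260 unit of dsDNA and the ~1.8 and ~2.0-2.2 purity guidelines are the standard spectrophotometric conventions):

```python
def dna_quality(a260, a280, a230, dilution=1.0):
    """Estimate dsDNA concentration and purity from absorbance readings.

    Uses the standard conversion 1.0 A260 unit ~ 50 ng/uL for dsDNA.
    A260/A280 near 1.8 suggests little protein contamination;
    A260/A230 near 2.0-2.2 suggests little organic-solvent carryover.
    """
    return {
        "conc_ng_per_ul": a260 * 50.0 * dilution,
        "a260_a280": a260 / a280,
        "a260_a230": a260 / a230,
    }

# Example readings for a reasonably clean extract.
q = dna_quality(a260=0.90, a280=0.50, a230=0.43)
print(q)  # ~45 ng/uL, ratios 1.8 and ~2.09
```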

  8. Evaluation of two molecular methods for the detection of Yellow fever virus genome

    OpenAIRE

    Nunes, Marcio R. T.; Palacios, Gustavo; Nunes, Keley N. B.; Casseb, Samir M. M.; Martins, Lívia C.; Quaresma, Juarez A.S.; Savji, Nazir; Lipkin, W. Ian; Vasconcelos, Pedro F. C.

    2011-01-01

    Yellow fever virus (YFV), a member of the family Flaviviridae, genus Flavivirus, is endemic to tropical areas of Africa and South America and is among the arboviruses that pose a threat to public health. Recent outbreaks in Brazil, Bolivia, and Paraguay and the observation that vectors capable of transmitting YFV are present in urban areas underscore the urgency of improving surveillance and diagnostic methods. Two novel methods (RT-hemi-nested-PCR and SYBR®Green qRT-PCR) for efficient dete...

  9. Evaluation of two molecular methods for the detection of Yellow fever virus genome.

    Science.gov (United States)

    Nunes, Marcio R T; Palacios, Gustavo; Nunes, Keley N B; Casseb, Samir M M; Martins, Lívia C; Quaresma, Juarez A S; Savji, Nazir; Lipkin, W Ian; Vasconcelos, Pedro F C

    2011-06-01

    Yellow fever virus (YFV), a member of the family Flaviviridae, genus Flavivirus, is endemic to tropical areas of Africa and South America and is among the arboviruses that pose a threat to public health. Recent outbreaks in Brazil, Bolivia, and Paraguay and the observation that vectors capable of transmitting YFV are present in urban areas underscore the urgency of improving surveillance and diagnostic methods. Two novel methods (RT-hemi-nested-PCR and SYBR® Green qRT-PCR) for efficient detection of YFV strains circulating in South America have been developed. The methods were validated using samples obtained from golden hamsters infected experimentally with wild-type YFV strains as well as human serum and tissue samples from patients with acute disease. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Evaluation of genome-enabled selection for bacterial cold water disease resistance using progeny performance data in Rainbow Trout: Insights on genotyping methods and genomic prediction models

    Science.gov (United States)

    Bacterial cold water disease (BCWD) causes significant economic losses in salmonid aquaculture, and traditional family-based breeding programs aimed at improving BCWD resistance have been limited to exploiting only between-family variation. We used genomic selection (GS) models to predict genomic br...

  11. Characterisation of the genomic architecture of human chromosome 17q and evaluation of different methods for haplotype block definition

    Directory of Open Access Journals (Sweden)

    Ollier William

    2005-04-01

    Full Text Available Abstract Background The selection of markers in association studies can be informed through the use of haplotype blocks. Recent reports have determined the genomic architecture of chromosomal segments through different haplotype block definitions based on linkage disequilibrium (LD) measures or haplotype diversity criteria. The relative applicability of distinct block definitions to association studies, however, remains unclear. We compared different block definitions in 6.1 Mb of chromosome 17q in 189 unrelated healthy individuals. Using 137 single nucleotide polymorphisms (SNPs) at a median spacing of 15.5 kb, we constructed haplotype block maps using published methods and additional methods we have developed. Haplotype tagging SNPs (htSNPs) were identified for each map. Results Blocks were found to be shorter and coverage of the region limited with methods based on LD measures, compared to the method based on haplotype diversity. Although the distribution of blocks was highly variable, the number of SNPs that needed to be typed in order to capture the maximum number of haplotypes was consistent. Conclusion For the marker spacing used in this study, choice of block definition is not important when used as an initial screen of the region to identify htSNPs. However, choice of block definition has consequences for the downstream interpretation of association study results.
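
    The LD-based block definitions mentioned above build on pairwise measures such as r². A minimal sketch of r² computed from phased haplotypes (inputs are toy data):

```python
def r_squared(hap_a, hap_b):
    """LD measure r^2 between two biallelic SNPs, given phased
    haplotypes coded 0/1 (one entry per chromosome)."""
    n = len(hap_a)
    p_a = sum(hap_a) / n                       # allele-1 frequency at SNP a
    p_b = sum(hap_b) / n                       # allele-1 frequency at SNP b
    p_ab = sum(1 for x, y in zip(hap_a, hap_b) if x == 1 and y == 1) / n
    d = p_ab - p_a * p_b                       # linkage disequilibrium D
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

print(r_squared([0, 0, 1, 1], [0, 0, 1, 1]))  # perfectly linked SNPs -> 1.0
print(r_squared([0, 1, 0, 1], [0, 0, 1, 1]))  # independent SNPs -> 0.0
```

    LD-based block definitions then group adjacent SNPs whose pairwise LD (r² or related measures such as D') exceeds a chosen threshold.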

  12. Economic evaluation of genomic breeding programs.

    Science.gov (United States)

    König, S; Simianer, H; Willam, A

    2009-01-01

    The objective of this study was to compare a conventional dairy cattle breeding program characterized by a progeny testing scheme with different scenarios of genomic breeding programs. The ultimate economic evaluation criterion was discounted profit, reflecting discounted returns minus discounted costs per cow, in a balanced breeding goal of production and functionality. A deterministic approach mainly based on the gene flow method and selection index calculations was used to model a conventional progeny testing program and different scenarios of genomic breeding programs. As a novel idea, the modeling of the genomic breeding program accounted for the proportion of farmers waiting for daughter records of genotyped young bulls before using them for artificial insemination. Technical and biological coefficients for modeling were chosen to correspond to a German breeding organization. The conventional breeding program for 50 test bulls per year within a population of 100,000 cows served as a base scenario. Scenarios of genomic breeding programs considered the variation of costs for genotyping, selection intensity of cow sires, proportion of farmers waiting for daughter records of genotyped young bulls, and different accuracies of genomic indices for bulls and cows. Given that the accuracies of genomic indices are greater than 0.70, a distinct economic advantage of up to a factor of 2.59 was found for all scenarios of genomic breeding programs, mainly due to the reduction in generation intervals. Costs for genotyping were negligible when focusing on a population-wide perspective and considering additional costs for herdbook registration, milk recording, or keeping of bulls, especially if there is no need for yearly recalculation of effects of single nucleotide polymorphisms. Genomic breeding programs generated a higher discounted profit than a conventional progeny testing program for all scenarios where at least 20% of the inseminations were done by genotyped young bulls without...

  13. A network-based method to evaluate quality of reproducibility of differential expression in cancer genomics studies.

    Science.gov (United States)

    Li, Robin; Lin, Xiao; Geng, Haijiang; Li, Zhihui; Li, Jiabing; Lu, Tao; Yan, Fangrong

    2015-12-29

    Personalized cancer treatments depend on the determination of a patient's genetic status according to known genetic profiles for which targeted treatments exist. Such genetic profiles must be scientifically validated before they are applied to the general patient population. Reproducibility of the findings that support such genetic profiles is a fundamental challenge in validation studies. The percentage of overlapping genes (POG) criterion and derivative methods produce unstable and misleading results. Furthermore, in a complex disease, comparisons between different tumor subtypes can produce high POG scores that do not capture consistency in function. We focused on the quality rather than the quantity of the overlapping genes. We defined the rank value of each gene according to importance or quality by PageRank, on the basis of a particular topological structure. Then, we used the p-value of the rank-sum of the overlapping genes (PRSOG) to evaluate the quality of reproducibility. Though the POG scores were low in different studies of the same disease, the PRSOG was statistically significant, which suggests that sets of differentially expressed genes might be highly reproducible. Evaluations of eight datasets from breast cancer, lung cancer and four other disorders indicate that the quality-based PRSOG method performs better than a quantity-based method. Our analysis of the components of the sets of overlapping genes supports the utility of the PRSOG method.
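
    The PRSOG idea above can be sketched as a permutation test. The snippet below is only illustrative (gene scores and set sizes are invented, and it sums importance scores rather than reproducing the authors' exact rank-sum statistic; the PageRank scoring of a gene network is assumed, not shown):

```python
import random

def prsog_pvalue(gene_scores, overlap_genes, n_perm=10000, seed=0):
    """Permutation p-value for the score-sum of an overlapping gene set.

    gene_scores maps gene -> importance (e.g. PageRank on a gene
    network).  A small p-value means the overlap is concentrated in
    unusually important genes, not merely that many genes overlap.
    """
    rng = random.Random(seed)
    scores = list(gene_scores.values())
    observed = sum(gene_scores[g] for g in overlap_genes)
    k = len(overlap_genes)
    hits = sum(sum(rng.sample(scores, k)) >= observed for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# Toy example: 20 genes scored 1..20; the overlap hits the top three,
# so the p-value is small even though the overlap itself is tiny.
scores = {f"g{i}": float(i) for i in range(1, 21)}
p = prsog_pvalue(scores, ["g18", "g19", "g20"])
```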

  14. Genomic methods take the plunge

    DEFF Research Database (Denmark)

    Cammen, Kristina M.; Andrews, Kimberly R.; Carroll, Emma L.

    2016-01-01

    The dramatic increase in the application of genomic techniques to non-model organisms (NMOs) over the past decade has yielded numerous valuable contributions to evolutionary biology and ecology, many of which would not have been possible with traditional genetic markers. We review this recent...... progression with a particular focus on genomic studies of marine mammals, a group of taxa that represent key macroevolutionary transitions from terrestrial to marine environments and for which available genomic resources have recently undergone notable rapid growth. Genomic studies of NMOs utilize...... an expanding range of approaches, including whole genome sequencing, restriction site-associated DNA sequencing, array-based sequencing of single nucleotide polymorphisms and target sequence probes (e.g., exomes), and transcriptome sequencing. These approaches generate different types and quantities of data...

  15. Benchmarking of methods for genomic taxonomy

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Cosentino, Salvatore; Lukjancenko, Oksana

    2014-01-01

    ; (ii) Reads2Type that searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteriaceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method that samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species......-specific functional protein domain profiles; and finally (v) KmerFinder, which examines the number of cooccurring k-mers (substrings of k nucleotides in DNA sequence data). The performances of the methods were subsequently evaluated on three data sets of short sequence reads or draft genomes from public databases....... In total, the evaluation sets constituted sequence data from more than 11,000 isolates covering 159 genera and 243 species. Our results indicate that methods that sample only chromosomal, core genes have difficulties in distinguishing closely related species that only recently diverged. The Kmer...
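
    The co-occurring k-mer comparison underlying KmerFinder can be illustrated with a toy similarity function (KmerFinder's actual scoring scheme and k-mer sampling differ; this only shows the basic idea of comparing k-mer sets):

```python
def kmer_set(seq, k):
    """All overlapping substrings of length k in a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_similarity(a, b, k):
    """Jaccard similarity of the k-mer sets of two sequences:
    shared k-mers divided by total distinct k-mers."""
    ka, kb = kmer_set(a, k), kmer_set(b, k)
    return len(ka & kb) / len(ka | kb)

print(kmer_similarity("ACGTACGT", "ACGTACGA", k=4))  # 0.8
```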

  16. Genomic evaluations with many more genotypes

    Directory of Open Access Journals (Sweden)

    Wiggans George R

    2011-03-01

    Full Text Available Abstract Background Genomic evaluations in Holstein dairy cattle have quickly become more reliable over the last two years in many countries as more animals have been genotyped for 50,000 markers. Evaluations can also include animals genotyped with more or fewer markers using new tools such as the 777,000 or 2,900 marker chips recently introduced for cattle. Gains from more markers can be predicted using simulation, whereas strategies to use fewer markers have been compared using subsets of actual genotypes. The overall cost of selection is reduced by genotyping most animals at less than the highest density and imputing their missing genotypes using haplotypes. Algorithms to combine different densities need to be efficient because numbers of genotyped animals and markers may continue to grow quickly. Methods Genotypes for 500,000 markers were simulated for the 33,414 Holsteins that had 50,000 marker genotypes in the North American database. Another 86,465 non-genotyped ancestors were included in the pedigree file, and linkage disequilibrium was generated directly in the base population. Mixed density datasets were created by keeping 50,000 markers (every tenth) for most animals. Missing genotypes were imputed using a combination of population haplotyping and pedigree haplotyping. Reliabilities of genomic evaluations using linear and nonlinear methods were compared. Results Differing marker sets for a large population were combined with just a few hours of computation. About 95% of paternal alleles were determined correctly, and > 95% of missing genotypes were called correctly. Reliability of breeding values was already high (84.4% with 50,000 simulated markers. The gain in reliability from increasing the number of markers to 500,000 was only 1.6%, but more than half of that gain resulted from genotyping just 1,406 young bulls at higher density. Linear genomic evaluations had reliabilities 1.5% lower than the nonlinear evaluations with 50...

  17. Qualitative and quantitative evaluation of the genomic DNA extracted from GMO and non-GMO foodstuffs with four different extraction methods.

    Science.gov (United States)

    Peano, Clelia; Samson, Maria Cristina; Palmieri, Luisa; Gulli, Mariolina; Marmiroli, Nelson

    2004-11-17

    The presence of DNA in foodstuffs derived from or containing genetically modified organisms (GMO) is the basic requirement for labeling of GMO foods in Council Directive 2001/18/CE (Off. J. Eur. Communities 2001, L1 06/2). In this work, four different methods for DNA extraction were evaluated and compared. To rank the different methods, the quality and quantity of DNA extracted from standards containing known percentages of GMO material and from different food products were considered. The food products analyzed derived from both soybean and maize and were chosen on the basis of the mechanical, technological, and chemical treatment they had been subjected to during processing. The degree of DNA degradation at various stages of food production was evaluated through the amplification of different DNA fragments belonging to the endogenous genes of both maize and soybean. Genomic DNA was extracted from Roundup Ready soybean and maize MON810 standard flours according to four different methods and quantified by real-time polymerase chain reaction (PCR), with the aim of determining the influence of the extraction methods on DNA quantification through real-time PCR.
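
    Real-time PCR quantification of this kind rests on a standard curve: Ct values of dilutions with known copy numbers are regressed on log10 copies, and unknowns are read off the fitted line. A sketch with invented readings (the slope-to-efficiency conversion is the standard qPCR relationship):

```python
import numpy as np

# Standard curve: Ct measured for dilutions of known log10 copy number
# (all values invented for illustration).
log10_copies = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
ct = np.array([30.1, 26.8, 23.4, 20.1, 16.7])

slope, intercept = np.polyfit(log10_copies, ct, 1)  # Ct = slope*log10(N) + b
efficiency = 10.0 ** (-1.0 / slope) - 1.0           # 1.0 means 100% efficient
                                                    # (slope ~ -3.32)

def copies_from_ct(sample_ct):
    """Invert the standard curve to estimate template copy number."""
    return 10.0 ** ((sample_ct - intercept) / slope)
```

    GMO content is then typically expressed as the ratio of transgene copies to endogenous-gene copies, each estimated from its own standard curve.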

  18. Technical note: Rapid calculation of genomic evaluations for new animals.

    Science.gov (United States)

    Wiggans, G R; VanRaden, P M; Cooper, T A

    2015-03-01

    A method was developed to calculate preliminary genomic evaluations daily or weekly before the release of official monthly evaluations by processing only newly genotyped animals using estimates of single nucleotide polymorphism effects from the previous official evaluation. To minimize computing time, reliabilities and genomic inbreeding are not calculated, and fixed weights are used to combine genomic and traditional information. Correlations of preliminary and September official monthly evaluations for animals with genotypes that became usable after the extraction of genotypes for August 2014 evaluations were >0.99 for most Holstein traits. Correlations were lower for breeds with smaller population size. Earlier access to genomic evaluations benefits producers by enabling earlier culling decisions and genotyping laboratories by making workloads more uniform across the month. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  19. Genomic evaluation of both purebred and crossbred performances

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Madsen, Per; Nielsen, Bjarne

    2014-01-01

    For a two-breed crossbreeding system, Wei and van der Werf presented a model for genetic evaluation using information from both purebred and crossbred animals. The model provides breeding values for both purebred and crossbred performances. Genomic evaluation incorporates marker genotypes...... into a genetic evaluation system. Among popular methods are the so-called single-step methods, in which marker genotypes are incorporated into a traditional animal model by using a combined relationship matrix that extends the marker-based relationship matrix to non-genotyped animals. However, a single-step method for genomic evaluation of both purebred and crossbred performances has not been developed yet. An extension of the Wei and van der Werf model that incorporates genomic information is presented. The extension consists of four steps: (1) the Wei and van der Werf model is reformulated using two partial...

  20. Detection of evaluation bias caused by genomic preselection.

    Science.gov (United States)

    Tyrisevä, A-M; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Fikse, W F; Lidauer, M H

    2018-04-01

    The aim of this simulation study was to investigate whether it is possible to detect the effect of genomic preselection on Mendelian sampling (MS) means or variances obtained by the MS validation test. Genomic preselection of bull calves is 1 additional potential source of bias in international evaluations unless adequately accounted for in national evaluations. Selection creates no bias in traditional breeding value evaluation if the data of all animals are included. However, this is not the case with genomic preselection, as it excludes culled bulls. Genomic breeding values become biased if calculated using a multistep procedure instead of, for example, a single-step method. Currently, about 60% of the countries participating in international bull evaluations have already adopted genomic selection in their breeding schemes. The data sent for multiple across-country evaluation can, therefore, be very heterogeneous, and a proper validation method is needed to ensure a fair comparison of the bulls included in international genetic evaluations. To study the effect of genomic preselection, we generated a total of 50 replicates under control and genomic preselection schemes using the structures of the real data and pedigree from a medium-size cow population. A genetic trend of 15% of the genetic standard deviation was created for both schemes. In carrying out the analyses, we used 2 different heritabilities: 0.25 and 0.10. From the start of genomic preselection, all bulls were genomically preselected. Their MS deviations were inflated with a value corresponding to selection of the best 10% of genomically tested bull calves. For cows, the MS deviations were unaltered. The results revealed a clear underestimation of bulls' breeding values after genomic preselection started, as well as a notable deviation from zero in both true and estimated MS means. The software developed recently for the MS validation test already produces yearly MS means, and they can be used to...
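
    The MS validation test builds on Mendelian sampling deviations. A minimal sketch (all EBVs invented) of the quantity whose yearly means are monitored:

```python
def mendelian_sampling(ebv, sire_ebv, dam_ebv):
    """Mendelian sampling deviation: the part of an animal's estimated
    breeding value not explained by its parent average."""
    return ebv - 0.5 * (sire_ebv + dam_ebv)

# In an unbiased evaluation, MS deviations average near zero within a
# birth-year cohort; a persistent non-zero yearly mean after the onset
# of genomic preselection is the signal discussed in the record above.
cohort = [(3.1, 2.0, 1.0), (0.9, 1.4, 0.6), (2.2, 2.6, 1.0)]  # (ebv, sire, dam)
ms_mean = sum(mendelian_sampling(*t) for t in cohort) / len(cohort)
```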

  1. Evaluation of a microarray-hybridization based method applicable for discovery of single nucleotide polymorphisms (SNPs) in the Pseudomonas aeruginosa genome

    Science.gov (United States)

    Dötsch, Andreas; Pommerenke, Claudia; Bredenbruch, Florian; Geffers, Robert; Häussler, Susanne

    2009-01-01

    Background Whole genome sequencing techniques have added a new dimension to studies on bacterial adaptation, evolution and diversity in chronic infections. By using this powerful approach it was demonstrated that Pseudomonas aeruginosa undergoes intense genetic adaptation processes, crucial in the development of persistent disease. The challenge ahead is to identify universal infection-relevant adaptive bacterial traits as potential targets for the development of alternative treatment strategies. Results We developed a microarray-based method applicable for discovery of single nucleotide polymorphisms (SNPs) in P. aeruginosa as an easy and economical alternative to whole genome sequencing. About 50% of all SNPs theoretically covered by the array could be detected in a comparative hybridization of PAO1 and PA14 genomes at high specificity (> 0.996). Variations larger than SNPs were detected at much higher sensitivities, reaching nearly 100% for genetic differences affecting multiple consecutive probe oligonucleotides. The detailed comparison of the in silico alignment with experimental hybridization data led to the identification of various factors influencing sensitivity and specificity in SNP detection and to the identification of strain-specific features such as a large deletion within the PA4684 and PA4685 genes in the Washington Genome Center PAO1. Conclusion The application of the genome array as a tool to identify adaptive mutations, to depict genome organizations, and to identify global regulons by the "ChIP-on-chip" technique will expand our knowledge on P. aeruginosa adaptation, evolution and regulatory mechanisms of persistence on a global scale and thus advance the development of effective therapies to overcome persistent disease. PMID:19152677

  2. Genomic Evaluation of Both Purebred and Crossbred Performances

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Madsen, Per; Nielsen, Bjarne

    Genomic selection has offered a new paradigm for livestock breeding within purebred populations, but it also offers greater opportunities for incorporating information from crossbred individuals and selection for crossbred performance. Wei and van der Werf (1994) presented a model for genetic evaluation using information from both purebred and crossbred animals for a two-breed crossbreeding system. The model provides breeding values for both purebred and crossbred performances. However, an extension of that model to incorporate genomic data has not been made yet. Here, a single-step method for genomic evaluation of both purebred and crossbred performances is presented. The model is on the one hand a reformulation and extension of the model by Wei and van der Werf (1994), and on the other hand it extends previously studied genomic models for crossbred animals to include phenotypes on purebred...

  4. Genomic prediction based on data from three layer lines: a comparison between linear methods

    NARCIS (Netherlands)

    Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.

    2014-01-01

    Background The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we

  5. Efficient method for the extraction of genomic DNA from wormwood ...

    African Journals Online (AJOL)

    A suitable method for the isolation of genomic DNA is very important, as the best method gives the best results for genetic studies. Five different methods were compared, among which were the Sarkosyl method, CTAB method, kit method, SDS method and phenol-chloroform method. Isolated genomic DNA showed high purity ...

  6. Accounting for genomic pre-selection in national BLUP evaluations in dairy cattle

    Directory of Open Access Journals (Sweden)

    Patry Clotilde

    2011-08-01

    Full Text Available Abstract Background In future Best Linear Unbiased Prediction (BLUP) evaluations of dairy cattle, genomic selection of young sires will cause evaluation biases and loss of accuracy once the selected ones get progeny. Methods To avoid such bias in the estimation of breeding values, we propose to include information on all genotyped bulls, including the culled ones, in BLUP evaluations. Estimated breeding values based on genomic information were converted into genomic pseudo-performances and then analyzed simultaneously with actual performances. Using simulations based on actual data from the French Holstein population, bias and accuracy of BLUP evaluations were computed for young sires undergoing progeny testing or genomic pre-selection. For bulls pre-selected based on their genomic profile, three different types of information can be included in the BLUP evaluations: (1) data from pre-selected genotyped candidate bulls with actual performances on their daughters, (2) data from bulls with both actual and genomic pseudo-performances, or (3) data from all the genotyped candidates with genomic pseudo-performances. The effects of different levels of heritability, genomic pre-selection intensity and accuracy of genomic evaluation were considered. Results Including information from all the genotyped candidates, i.e. genomic pseudo-performances for both selected and culled candidates, removed bias from genetic evaluation and increased accuracy. This approach was effective regardless of the magnitude of the initial bias and as long as the accuracy of the genomic evaluations was sufficiently high. Conclusions The proposed method can be easily and quickly implemented in BLUP evaluations at the national level, although some improvement is necessary to more accurately propagate genomic information from genotyped to non-genotyped animals. In addition, it is a convenient method to combine direct genomic, phenotypic and pedigree-based information in a multiple
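    The conversion of genomic breeding values into pseudo-performances can be sketched with the classical equivalent-records idea: a GEBV with reliability r² is treated as a pseudo-record worth w = r²/(1 − r²) effective own-performance records. The sketch below blends such a pseudo-record with daughter-based information; the variance ratio `k` and all figures are illustrative textbook conventions, not the authors' implementation.

```python
def pseudo_record_weight(reliability: float) -> float:
    """Effective-records weight of a pseudo-performance derived from a
    GEBV with the given reliability (0 < r2 < 1)."""
    if not 0.0 < reliability < 1.0:
        raise ValueError("reliability must be in (0, 1)")
    return reliability / (1.0 - reliability)

def blended_evaluation(pseudo_perf, r2, actual_perf=None, n_daughters=0, k=14.0):
    """Weighted blend of a genomic pseudo-performance and (optionally)
    a daughter-based performance; k is a hypothetical variance ratio
    giving the progeny information its effective weight."""
    w_gen = pseudo_record_weight(r2)
    num, den = w_gen * pseudo_perf, w_gen
    if actual_perf is not None and n_daughters > 0:
        w_dau = n_daughters / k   # effective weight of progeny records
        num += w_dau * actual_perf
        den += w_dau
    return num / den

# A culled candidate contributes through its pseudo-performance alone;
# a proven bull blends genomic and daughter information:
print(blended_evaluation(1.2, r2=0.6))
print(blended_evaluation(1.2, r2=0.6, actual_perf=0.8, n_daughters=70))
```

    Note how a culled bull still enters the evaluation, which is exactly what removes the pre-selection bias the abstract describes.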

  7. Whole genome amplification: Use of advanced isothermal method ...

    African Journals Online (AJOL)

    A laboratory method for amplifying genomic deoxyribonucleic acid (DNA) samples, aiming to generate larger amounts and sufficient quantities of DNA for subsequent specific analyses, is named whole genome amplification (WGA). This method is the only way to increase input material from a few cells with limited DNA content.

  8. Understanding Spatial Genome Organization: Methods and Insights

    Directory of Open Access Journals (Sweden)

    Vijay Ramani

    2016-02-01

    Full Text Available The manner by which eukaryotic genomes are packaged into nuclei while maintaining crucial nuclear functions remains one of the fundamental mysteries in biology. Over the last ten years, we have witnessed rapid advances in both microscopic and nucleic acid-based approaches to map genome architecture, and the application of these approaches to the dissection of higher-order chromosomal structures has yielded much new information. It is becoming increasingly clear, for example, that interphase chromosomes form stable, multilevel hierarchical structures. Among them, self-associating domains like so-called topologically associating domains (TADs) appear to be building blocks for large-scale genomic organization. This review describes features of these broadly defined hierarchical structures, insights into the mechanisms underlying their formation, our current understanding of how interactions in the nuclear space are linked to gene regulation, and important future directions for the field.

  9. The limits of genome-wide methods for pharmacogenomic testing.

    Science.gov (United States)

    Gamazon, Eric R; Skol, Andrew D; Perera, Minoli A

    2012-04-01

    The goal of pharmacogenomics is the translation of genomic discoveries into individualized patient care. Recent advances in the means to survey human genetic variation are fundamentally transforming our understanding of the genetic basis of interindividual variation in therapeutic response. The goal of this study was to systematically evaluate high-throughput genotyping technologies for their ability to assay variation in pharmacogenetically important genes (pharmacogenes). These platforms are either being proposed for or are already being widely used for clinical implementation; therefore, knowledge of coverage of pharmacogenes on these platforms would serve to better evaluate current or proposed pharmacogenetic association studies. Among the genes included in our study are drug-metabolizing enzymes, transporters, receptors, and drug targets, of interest to the entire pharmacogenetic community. We considered absolute and linkage disequilibrium (LD)-informed coverage, minor allele frequency spectrum, and functional annotation for a Caucasian population. We also examined the effect of LD, effect size, and cohort size on the power to detect single nucleotide polymorphism associations. In our analysis of 253 pharmacogenes, we found that no platform showed more than 85% coverage of these genes (after accounting for LD). Furthermore, the lack of coverage showed a marked increase at minor allele frequencies of less than 20%. Even after accounting for LD, only 30% of the missense polymorphisms (which are enriched for low-frequency alleles) were covered by HapMap, with still lower coverage on the other platforms. We have conducted the first systematic evaluation of the Axiom Genomic Database, Omni 2.5 M, and the Drug Metabolizing Enzymes and Transporters chip. This study is the first to utilize the 1000 Genomes Project to present a comprehensive evaluative framework. Our results provide a much-needed assessment of microarray-based genotyping and next-generation sequencing ...

  10. GenoSets: visual analytic methods for comparative genomics.

    Directory of Open Access Journals (Sweden)

    Aurora A Cain

    Full Text Available Many important questions in biology are, fundamentally, comparative, and this extends to our analysis of a growing number of sequenced genomes. Existing genomic analysis tools are often organized around literal views of genomes as linear strings. Even when information is highly condensed, these views grow cumbersome as larger numbers of genomes are added. Data aggregation and summarization methods from the field of visual analytics can provide abstracted comparative views, suitable for sifting large multi-genome datasets to identify critical similarities and differences. We introduce a software system for visual analysis of comparative genomics data. The system automates the process of data integration, and provides the analysis platform to identify and explore features of interest within these large datasets. GenoSets borrows techniques from business intelligence and visual analytics to provide a rich interface of interactive visualizations supported by a multi-dimensional data warehouse. In GenoSets, visual analytic approaches are used to enable querying based on orthology, functional assignment, and taxonomic or user-defined groupings of genomes. GenoSets links this information together with coordinated, interactive visualizations for both detailed and high-level categorical analysis of summarized data. GenoSets has been designed to simplify the exploration of multiple genome datasets and to facilitate reasoning about genomic comparisons. Case examples are included showing the use of this system in the analysis of 12 Brucella genomes. GenoSets software and the case study dataset are freely available at http://genosets.uncc.edu. We demonstrate that the integration of genomic data using a coordinated multiple view approach can simplify the exploration of large comparative genomic data sets, and facilitate reasoning about comparisons and features of interest.

  11. Genomics protocols [Methods in molecular biology, v. 175

    National Research Council Canada - National Science Library

    Starkey, Michael P; Elaswarapu, Ramnath

    2001-01-01

    ... exploiting the potential of gene therapy. Highlights include methods for the analysis of differential gene expression, SNP detection, comparative genomic hybridization, and the functional analysis of genes, as well as the use of bio...

  12. Modified risk evaluation method

    International Nuclear Information System (INIS)

    Udell, C.J.; Tilden, J.A.; Toyooka, R.T.

    1993-08-01

    The purpose of this paper is to provide a structured and cost-oriented process to determine risks associated with nuclear material and other security interests. Financial loss is a continuing concern for US Department of Energy contractors. In this paper, risk is equated with uncertainty of cost impacts to material assets or human resources. The concept provides a method for assessing the effectiveness of an integrated protection system, which includes operations, safety, emergency preparedness, and safeguards and security. The concept is suitable for application to sabotage evaluations. The protection of assets is based on risk associated with cost impacts to assets and the potential for undesirable events. This will allow managers to establish protection priorities in terms of the cost and the potential for the event, given the current level of protection.
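    The cost-oriented notion described above, combining the cost impact of an undesirable event with its potential given current protections, can be sketched as an expected-loss ranking. All asset names and figures below are hypothetical, and the multiplicative score is an illustrative simplification of the paper's concept.

```python
def risk_score(cost_impact: float, event_likelihood: float) -> float:
    """Expected loss for one asset/event pair: cost impact weighted by
    the event's likelihood given the current level of protection."""
    return cost_impact * event_likelihood

assets = [
    # (asset, cost impact in $, annual event likelihood)
    ("special nuclear material vault", 5_000_000, 0.001),
    ("emergency-preparedness systems", 750_000, 0.02),
    ("operations control room", 1_200_000, 0.005),
]

# Protection priorities: highest expected loss first.
priorities = sorted(assets, key=lambda a: risk_score(a[1], a[2]), reverse=True)
for name, cost, p in priorities:
    print(f"{name}: expected loss ${risk_score(cost, p):,.0f}/yr")
```

    The ranking makes the paper's point concrete: a lower-value asset with weak protection can outrank a high-value, well-protected one.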

  13. DNA-free genome editing methods for targeted crop improvement.

    Science.gov (United States)

    Kanchiswamy, Chidananda Nagamangala

    2016-07-01

    Evolution of the next-generation clustered, regularly interspaced, short palindromic repeat/Cas9 (CRISPR/Cas9) genome editing tools, RNA-guided endonuclease (RGEN) ribonucleoproteins (RNPs), is paving the way for developing DNA-free genetically edited crop plants. In this review, I discuss the various methods of RGEN RNP tool delivery into plant cells and the limitations of adopting this technology for numerous crop plants. Furthermore, focus is given to the importance of developing DNA-free genome-edited crop plants, including perennial crop plants. The possible regulation of DNA-free, next-generation genome-edited crop plants is also highlighted.

  14. Sybil: methods and software for multiple genome comparison and visualization.

    Science.gov (United States)

    Crabtree, Jonathan; Angiuoli, Samuel V; Wortman, Jennifer R; White, Owen R

    2007-01-01

    With the successful completion of genome sequencing projects for a variety of model organisms, the selection of candidate organisms for future sequencing efforts has been guided increasingly by a desire to enable comparative genomics. This trend has both depended on and encouraged the development of software tools that can elucidate and capitalize on the similarities and differences between genomes. "Sybil," one such tool, is a primarily web-based software package whose goal is to facilitate the analysis and visualization of comparative genome data, with a particular emphasis on protein and gene cluster data. Herein, a two-phase protein clustering algorithm, used to generate protein clusters suitable for analysis through Sybil, and a method for creating graphical displays of protein or gene clusters that span multiple genomes are described. When combined, these two relatively simple techniques provide the user of the Sybil software (The Institute for Genomic Research [TIGR] Bioinformatics Department) with a browsable graphical display of his or her "input" genomes, showing which genes are conserved based on the parameters supplied to the protein clustering algorithm. For any given protein cluster the graphical display consists of a local alignment of the genomes in which the clustered genes are located. The genomes are arranged in a vertical stack, as in a multiple alignment, and shaded areas are used to connect genes in the same cluster, thus displaying conservation at the protein level in the context of the underlying genomic sequences. The authors have found this display (and slight variants thereof) useful for a variety of annotation and comparison tasks, ranging from identifying "missed" gene models or single-exon discrepancies between orthologous genes, to finding large or small regions of conserved gene synteny, and investigating the properties of the breakpoints between such regions.

  15. Nuclear data evaluation method and evaluation system

    International Nuclear Information System (INIS)

    Liu Tingjin

    1995-01-01

    The evaluation methods and the Nuclear Data Evaluation System have been developed in China. A new version of the system has been established on a Micro-VAX2 computer, which is supported by IAEA under the technology assistance program. The flow chart of the Chinese Nuclear Data Evaluation System is shown. For the last ten years, the main efforts have been put on double differential cross sections, covariance data and evaluated data library validation. The developed evaluation methods and the Chinese Nuclear Data Evaluation System have been widely used at CNDC and in the Chinese Nuclear Data Network for CENDL. (1 tab., 15 figs.)

  16. A universal, rapid, and inexpensive method for genomic DNA ...

    Indian Academy of Sciences (India)

    MOHAMMED BAQUR SAHIB A. AL-SHUHAIB

    Abstract. There is no 'one' procedure for extracting DNA from the whole blood of both mammals and birds, since each species has unique properties that require different methods to release its DNA. Therefore, to obtain genomic DNA, a universal, rapid, and inexpensive method was developed. A very simple biological ...

  17. Comparison of three methods of parasitoid polydnavirus genomic DNA isolation to facilitate polydnavirus genomic sequencing.

    Science.gov (United States)

    Rodríguez-Pérez, Mario A; Beckage, Nancy E

    2008-04-01

    A major long-term goal of polydnavirus (PDV) genome research is to identify novel virally encoded molecules that may serve as biopesticides to target insect pests that threaten agriculture and human health. As PDV viral replication in cell culture in vitro has not yet been achieved, several thousands of wasps must be dissected to yield enough viral DNA from the adult ovaries to carry out PDV genomic sequencing. This study compares three methods of PDV genomic DNA isolation for the PDV of Cotesia flavipes, which parasitizes the sugarcane borer, Diatraea saccharalis, preparatory to sequencing the C. flavipes bracovirus genome. Two of these protocols incorporate phenol-chloroform DNA extraction steps in the procedure and the third protocol uses a modified Qiagen DNA kit method to extract viral DNA. The latter method proved significantly less time-consuming and more cost-effective. Efforts are currently underway to bioengineer insect pathogenic viruses with PDV genes, so that their gene products will enhance baculovirus virulence for agricultural insect pests, either via suppression of the immune system of the host or by PDV-mediated induction of its developmental arrest. Sequencing a growing number of complete PDV genomes will enhance those efforts, which will be facilitated by the study reported here. (c) 2008 Wiley-Liss, Inc.

  18. A novel statistical method to estimate the effective SNP size in vertebrate genomes and categorized genomic regions

    Directory of Open Access Journals (Sweden)

    Zhao Zhongming

    2006-12-01

    Full Text Available Abstract Background The local environment of single nucleotide polymorphisms (SNPs) contains abundant genetic information for the study of mechanisms of mutation, genome evolution, and causes of diseases. Recent studies revealed that neighboring-nucleotide biases on SNPs were strong and the genome-wide bias patterns could be represented by a small subset of the total SNPs. The estimation of the effective SNP size, the number of SNPs that are sufficient to represent the bias patterns observed from the whole SNP data, remains unsolved. Results To estimate the effective SNP size, we developed a novel statistical method, SNPKS, which considers both the statistical and biological significances. SNPKS consists of two major steps: to obtain an initial effective size by the Kolmogorov-Smirnov test (KS test) and to find an intermediate effective size by interval evaluation. The SNPKS algorithm was implemented in computer programs and applied to the real SNP data. The effective SNP size was estimated to be 38,200, 39,300, 38,000, and 38,700 in the human, chimpanzee, dog, and mouse genomes, respectively, and 39,100, 39,600, 39,200, and 42,200 in human intergenic, genic, intronic, and CpG island regions, respectively. Conclusion SNPKS is the first statistical method to estimate the effective SNP size. It runs efficiently and greatly outperforms the algorithm implemented in SNPNB. The application of SNPKS to the real SNP data revealed similarly small effective SNP sizes (38,000–42,200) in the human, chimpanzee, dog, and mouse genomes as well as in human genomic regions. The findings suggest strong influence of genetic factors across vertebrate genomes.
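    The first SNPKS step, finding an initial effective size via the KS test, can be sketched in pure Python: grow a random subsample until its empirical distribution matches that of the full data. The doubling schedule, threshold, and function names below are assumptions for illustration, not the published algorithm's parameters.

```python
import bisect
import random

def ks_statistic(sample, reference):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs."""
    s, r = sorted(sample), sorted(reference)
    d = 0.0
    for x in set(s) | set(r):
        fs = bisect.bisect_right(s, x) / len(s)
        fr = bisect.bisect_right(r, x) / len(r)
        d = max(d, abs(fs - fr))
    return d

def initial_effective_size(values, start=1000, factor=2, threshold=0.01, seed=1):
    """Double the subsample size until the subsample's distribution
    matches the full data (KS statistic below threshold)."""
    rng = random.Random(seed)
    n = start
    while n < len(values):
        if ks_statistic(rng.sample(values, n), values) < threshold:
            return n
        n *= factor
    return len(values)
```

    In the paper this initial size is then refined by interval evaluation between the last two candidate sizes; that second step is omitted here.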

  19. A Novel Method to Predict Genomic Islands Based on Mean Shift Clustering Algorithm

    Science.gov (United States)

    de Brito, Daniel M.; Maracaja-Coutinho, Vinicius; de Farias, Savio T.; Batista, Leonardo V.; do Rêgo, Thaís G.

    2016-01-01

    Genomic Islands (GIs) are regions of bacterial genomes that are acquired from other organisms by the phenomenon of horizontal transfer. These regions are often responsible for many important acquired adaptations of the bacteria, with great impact on their evolution and behavior. Nevertheless, these adaptations are usually associated with pathogenicity, antibiotic resistance, degradation and metabolism. Identification of such regions is of medical and industrial interest. For this reason, different approaches for genomic islands prediction have been proposed. However, none of them are capable of predicting precisely the complete repertory of GIs in a genome. The difficulties arise due to the changes in performance of different algorithms in the face of the variety of nucleotide distribution in different species. In this paper, we present a novel method to predict GIs that is built upon mean shift clustering algorithm. It does not require any information regarding the number of clusters, and the bandwidth parameter is automatically calculated based on a heuristic approach. The method was implemented in a new user-friendly tool named MSGIP—Mean Shift Genomic Island Predictor. Genomes of bacteria with GIs discussed in other papers were used to evaluate the proposed method. The application of this tool revealed the same GIs predicted by other methods and also different novel unpredicted islands. A detailed investigation of the different features related to typical GI elements inserted in these new regions confirmed its effectiveness. Stand-alone and user-friendly versions for this new methodology are available at http://msgip.integrativebioinformatics.me. PMID:26731657
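    The mean shift step at the heart of the method can be sketched in a few lines: every point climbs to the nearest density mode, and points sharing a mode form one cluster, with no need to fix the number of clusters in advance. The sketch below runs a flat-kernel 1-D mean shift on hypothetical per-window GC fractions; MSGIP's actual feature set and bandwidth heuristic differ.

```python
def mean_shift_1d(points, bandwidth, tol=1e-6, max_iter=200):
    """Flat-kernel mean shift: iteratively replace each point by the
    mean of all points within `bandwidth`; points converging to the
    same mode receive the same cluster label."""
    modes = []
    for p in points:
        x = p
        for _ in range(max_iter):
            window = [q for q in points if abs(q - x) <= bandwidth]
            new_x = sum(window) / len(window)
            if abs(new_x - x) < tol:
                break
            x = new_x
        modes.append(x)
    # Merge modes closer than bandwidth/2 into shared cluster labels.
    labels, centers = [], []
    for m in modes:
        for i, c in enumerate(centers):
            if abs(m - c) < bandwidth / 2:
                labels.append(i)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return labels, centers

# Hypothetical per-window GC fractions: host backbone near 0.50, a
# putative horizontally acquired island near 0.38.
gc = [0.50, 0.51, 0.49, 0.52, 0.50, 0.38, 0.37, 0.39, 0.38]
labels, centers = mean_shift_1d(gc, bandwidth=0.05)
print(labels)  # [0, 0, 0, 0, 0, 1, 1, 1, 1]
```

    The compositionally atypical windows fall into their own cluster, which is the signal a GI predictor of this kind exploits.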

  1. Comparative analysis of methods for genome-wide nucleosome cartography.

    Science.gov (United States)

    Quintales, Luis; Vázquez, Enrique; Antequera, Francisco

    2015-07-01

    Nucleosomes contribute to compacting the genome into the nucleus and regulate the physical access of regulatory proteins to DNA either directly or through the epigenetic modifications of the histone tails. Precise mapping of nucleosome positioning across the genome is, therefore, essential to understanding the genome regulation. In recent years, several experimental protocols have been developed for this purpose that include the enzymatic digestion, chemical cleavage or immunoprecipitation of chromatin followed by next-generation sequencing of the resulting DNA fragments. Here, we compare the performance and resolution of these methods from the initial biochemical steps through the alignment of the millions of short-sequence reads to a reference genome to the final computational analysis to generate genome-wide maps of nucleosome occupancy. Because of the lack of a unified protocol to process data sets obtained through the different approaches, we have developed a new computational tool (NUCwave), which facilitates their analysis, comparison and assessment and will enable researchers to choose the most suitable method for any particular purpose. NUCwave is freely available at http://nucleosome.usal.es/nucwave along with a step-by-step protocol for its use. © The Author 2014. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
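    The final computational step that all of these protocols share, turning aligned fragments into a genome-wide occupancy map, amounts to a per-base pileup. A toy sketch follows; the coordinates are made up and the 147-bp footprint is the canonical nucleosome size, not NUCwave's actual processing.

```python
def occupancy_profile(genome_length, fragments, half_width=73):
    """Per-base nucleosome occupancy: each sequenced fragment adds +1
    over a 147-bp window (2 * 73 + 1) centred on its midpoint, the
    canonical nucleosome footprint."""
    profile = [0] * genome_length
    for start, end in fragments:
        mid = (start + end) // 2
        lo = max(0, mid - half_width)
        hi = min(genome_length, mid + half_width + 1)
        for i in range(lo, hi):
            profile[i] += 1
    return profile

# Two hypothetical MNase-seq fragments sharing a midpoint near base 200,
# plus one fragment positioned elsewhere:
prof = occupancy_profile(500, [(130, 270), (128, 272), (320, 460)])
print(prof[200], prof[390], prof[50])  # 2 1 0
```

    Peaks in such a profile are the candidate nucleosome positions that the compared methods resolve with differing precision.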

  2. Voltammetry Method Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Hoyt, N. [Argonne National Lab. (ANL), Argonne, IL (United States); Pereira, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Willit, J. [Argonne National Lab. (ANL), Argonne, IL (United States); Williamson, M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-07-29

    The purpose of the ANL MPACT Voltammetry project is to evaluate the suitability of previously developed cyclic voltammetry techniques to provide electroanalytical measurements of actinide concentrations in realistic used fuel processing scenarios. The molten salts in these scenarios are very challenging as they include high concentrations of multiple electrochemically active species, thereby creating a variety of complications. Some of the problems that arise therein include issues related to uncompensated resistance, cylindrical diffusion, and alloying of the electrodeposited metals. Improvements to the existing voltammetry technique to account for these issues have been implemented, resulting in good measurements of actinide concentrations across a wide range of adverse conditions.
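    For context, cyclic-voltammetry concentration measurements of this kind rest on the proportionality between peak current and bulk concentration; for an ideal reversible couple at a planar electrode this is the Randles-Ševčík equation:

```latex
% Randles-Sevcik relation: peak current i_p versus concentration C
% for a reversible couple (planar, semi-infinite linear diffusion).
i_p = 0.4463\, n F A C \sqrt{\frac{n F v D}{R T}}
```

    where n is the number of electrons transferred, F the Faraday constant, A the electrode area, C the bulk concentration, v the scan rate, D the diffusion coefficient, R the gas constant, and T the temperature. The improvements described in the abstract address exactly the cases, uncompensated resistance, cylindrical diffusion, alloying, where this ideal relation's assumptions break down.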

  3. A universal, rapid, and inexpensive method for genomic DNA ...

    Indian Academy of Sciences (India)

    A very simple biological basis is followed in this procedure, in which, when the bloodis placed in water, it rapidly enters the RBCs by osmosis and causes cells to burst by hemolysis. The validity of extracting genomic DNA was confirmed by several molecular biological experiments. It was found that this method provides an ...

  4. Assessment of evaluation criteria for survival prediction from genomic data.

    Science.gov (United States)

    Bøvelstad, Hege M; Borgan, Ornulf

    2011-03-01

    Survival prediction from high-dimensional genomic data is dependent on a proper regularization method. With an increasing number of such methods proposed in the literature, comparative studies are called for and some have been performed. However, there is currently no consensus on which prediction assessment criterion should be used for time-to-event data. Without a firm knowledge about whether the choice of evaluation criterion may affect the conclusions made as to which regularization method performs best, these comparative studies may be of limited value. In this paper, four evaluation criteria are investigated: the log-rank test for two groups, the area under the time-dependent ROC curve (AUC), an R²-measure based on the Cox partial likelihood, and an R²-measure based on the Brier score. The criteria are compared according to how they rank six widely used regularization methods that are based on the Cox regression model, namely univariate selection, principal components regression (PCR), supervised PCR, partial least squares regression, ridge regression, and the lasso. Based on our application to three microarray gene expression data sets, we find that the results obtained from the widely used log-rank test deviate from the other three criteria studied. For future studies, where one also might want to include non-likelihood or non-model-based regularization methods, we argue in favor of AUC and the R²-measure based on the Brier score, as these do not suffer from the arbitrary splitting into two groups nor depend on the Cox partial likelihood. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
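    Of the four criteria, the Brier-score-based R² is the simplest to illustrate: at a fixed time t, the Brier score is the mean squared difference between the predicted survival probability and the observed 0/1 status, and R² compares it with a null model. The sketch below omits the inverse-probability-of-censoring weights used for properly censored data; names and numbers are illustrative.

```python
def brier_score(surv_probs, statuses):
    """Brier score at a fixed time t: mean squared difference between
    predicted survival probability and observed 0/1 status
    (censoring weights omitted in this sketch)."""
    n = len(surv_probs)
    return sum((p - y) ** 2 for p, y in zip(surv_probs, statuses)) / n

def r2_brier(surv_probs, statuses):
    """R2 relative to a null model predicting the overall survival
    fraction for every subject."""
    base = sum(statuses) / len(statuses)
    bs_model = brier_score(surv_probs, statuses)
    bs_null = brier_score([base] * len(statuses), statuses)
    return 1.0 - bs_model / bs_null

status = [1, 1, 0, 0]                  # alive past t?
good = [0.9, 0.8, 0.2, 0.1]            # well-calibrated predictions
uninformative = [0.5, 0.5, 0.5, 0.5]   # no discrimination
print(r2_brier(good, status))
print(r2_brier(uninformative, status))
```

    A well-calibrated model scores near 1, an uninformative one scores 0, which is why this measure avoids the arbitrary two-group split the log-rank criterion requires.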

  5. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Evaluation methods for hospital facilities

    DEFF Research Database (Denmark)

    Fronczek-Munter, Aneta

    2013-01-01

    ... according to focus areas and proposes which evaluation methods to use in different building phases of healthcare facilities. Hospital evaluations with experts and users are also considered, including their subjective views on space, function, technology, usability and aesthetics. Results & solutions: This paper presents the different methods for evaluating buildings in use in a new model, the Evaluation Focus Flower, and proposes which evaluation methods are suitable for various aims and building phases, i.e. which give the best input for the initial briefing process of new hospital facilities with the ambition of creating buildings with enhanced usability. Additionally, various evaluation methods used in hospital cases in Denmark and Norway are presented. Involvement of users is proposed, not just in defining requirements but also in co-creation/design and evaluation of solutions. The theories and preliminary ...

  7. Genome Target Evaluator (GTEvaluator): A workflow exploiting genome dataset to measure the sensitivity and specificity of genetic markers.

    Directory of Open Access Journals (Sweden)

    Arnaud Felten

    Full Text Available Most of the bacterial typing methods used to discriminate isolates in medical or food safety microbiology are based on genetic markers used as targets in PCR or hybridization experiments. These DNA typing methods are important tools for studying prevalence and epidemiology, for conducting surveillance, investigations and control of biological hazard sources. In that perspective, it is crucial to ensure that the chosen genetic markers have the greatest specificity and sensitivity. The wealth of whole-genome sequences available for many bacterial species offers the opportunity to evaluate the performance of these genetic markers. In the present study, we have developed GTEvaluator, a bioinformatics workflow which ranks genetic markers depending on their sensitivity and specificity towards groups of well-defined genomes. GTEvaluator identifies the most performant genetic markers to target individuals among a population. The individuals (i.e. a group of genomes within a collection) are defined by any kind of particular phenotypic or biological properties inside a related population (i.e. a collection of genomes). The performance of the genetic markers is computed by a distance value which takes into account both sensitivity and specificity. In this study we report two examples of GTEvaluator application. In the first example Bacillus phenotypic markers were evaluated for their capacity to distinguish B. cereus from B. thuringiensis. In the second experiment, GTEvaluator measured the performance of genetic markers dedicated to the molecular serotyping of Salmonella enterica. In one in silico experiment it was possible to test 64 markers on 134 genomes corresponding to 14 different serotypes.
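    The ranking criterion can be reproduced in miniature: sensitivity is the fraction of in-group genomes carrying the marker, specificity the fraction of out-group genomes lacking it, and the two combine into one distance to the perfect marker. The Euclidean combination and all genome/group names below are assumptions for illustration, not necessarily GTEvaluator's exact formula.

```python
import math

def marker_performance(marker_hits, in_group, all_genomes):
    """Sensitivity and specificity of one genetic marker for a target
    group of genomes, plus the distance to a perfect marker
    (sensitivity = specificity = 1).  `marker_hits` is the set of
    genomes where the marker is detected in silico."""
    out_group = all_genomes - in_group
    sens = len(marker_hits & in_group) / len(in_group)
    spec = len(out_group - marker_hits) / len(out_group) if out_group else 1.0
    dist = math.hypot(1.0 - sens, 1.0 - spec)  # smaller is better
    return sens, spec, dist

genomes = {f"g{i}" for i in range(10)}
cereus = {"g0", "g1", "g2", "g3"}          # hypothetical target group
marker = {"g0", "g1", "g2", "g3", "g9"}    # hits all targets plus one off-target
sens, spec, dist = marker_performance(marker, cereus, genomes)
print(sens, spec, dist)
```

    Ranking many candidate markers by this distance gives exactly the kind of ordered shortlist the workflow produces for typing-scheme design.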

  8. Bayesian methods for jointly estimating genomic breeding values of one continuous and one threshold trait.

    Directory of Open Access Journals (Sweden)

    Chonglong Wang

    Full Text Available Genomic selection has become a useful tool for animal and plant breeding. Currently, genomic evaluation is usually carried out using a single-trait model. However, a multi-trait model has the advantage of using information on correlated traits, leading to more accurate genomic prediction. To date, joint genomic prediction for a continuous and a threshold trait using a multi-trait model has received little attention. Based on the previously proposed methods BayesCπ for a single continuous trait and BayesTCπ for a single threshold trait, we developed a novel method based on a linear-threshold model, i.e., LT-BayesCπ, for joint genomic prediction of a continuous trait and a threshold trait. Computing procedures of LT-BayesCπ using a Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the advantages of LT-BayesCπ over BayesCπ and BayesTCπ with regard to the accuracy of genomic prediction for both traits. Factors affecting the performance of LT-BayesCπ were addressed. The results showed that, in all scenarios, the accuracy of genomic prediction obtained from LT-BayesCπ was significantly increased for the threshold trait compared to that from single-trait prediction using BayesTCπ, while the accuracy for the continuous trait was comparable with that from single-trait prediction using BayesCπ. The proposed LT-BayesCπ could be a method of choice for joint genomic prediction of one continuous and one threshold trait.

  9. Design optimization methods for genomic DNA tiling arrays.

    Science.gov (United States)

    Bertone, Paul; Trifonov, Valery; Rozowsky, Joel S; Schubert, Falk; Emanuelsson, Olof; Karro, John; Kao, Ming-Yang; Snyder, Michael; Gerstein, Mark

    2006-02-01

    A recent development in microarray research entails the unbiased coverage, or tiling, of genomic DNA for the large-scale identification of transcribed sequences and regulatory elements. A central issue in designing tiling arrays is that of arriving at a single-copy tile path, as significant sequence cross-hybridization can result from the presence of non-unique probes on the array. Due to the fragmentation of genomic DNA caused by the widespread distribution of repetitive elements, the problem of obtaining adequate sequence coverage increases with the sizes of subsequence tiles that are to be included in the design. This becomes increasingly problematic when considering complex eukaryotic genomes that contain many thousands of interspersed repeats. The general problem of sequence tiling can be framed as finding an optimal partitioning of non-repetitive subsequences over a prescribed range of tile sizes, on a DNA sequence comprising repetitive and non-repetitive regions. Exact solutions to the tiling problem become computationally infeasible when applied to large genomes, but successive optimizations are developed that allow their practical implementation. These include an efficient method for determining the degree of similarity of many oligonucleotide sequences over large genomes, and two algorithms for finding an optimal tile path composed of longer sequence tiles. The first algorithm, a dynamic programming approach, finds an optimal tiling in linear time and space; the second applies a heuristic search to reduce the space complexity to a constant requirement. A Web resource has also been developed, accessible at http://tiling.gersteinlab.org, to generate optimal tile paths from user-provided DNA sequences.
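The linear-time dynamic program mentioned above can be illustrated on a toy version of the problem: given a boolean mask of non-repetitive bases and an allowed tile-length range, choose non-overlapping tiles of purely unique sequence that maximize coverage. A simplified sketch under those assumptions (the published algorithm handles probe uniqueness scoring and tile weighting in far more detail):

```python
def optimal_tiling(unique, lo, hi):
    """unique: list of bools, True where a base is non-repetitive.
    Tiles must have length in [lo, hi] and cover only unique bases.
    Returns (max covered bases, list of (start, length) tiles)."""
    n = len(unique)
    # prefix sums of unique bases for O(1) window checks
    pre = [0] * (n + 1)
    for i, u in enumerate(unique):
        pre[i + 1] = pre[i] + (1 if u else 0)
    best = [0] * (n + 1)         # best coverage of prefix of length i
    choice = [None] * (n + 1)    # tile ending at position i, if any
    for i in range(1, n + 1):
        best[i], choice[i] = best[i - 1], None
        for L in range(lo, min(hi, i) + 1):
            # window [i-L, i) is all-unique iff it contains L unique bases
            if pre[i] - pre[i - L] == L and best[i - L] + L > best[i]:
                best[i], choice[i] = best[i - L] + L, (i - L, L)
    # traceback to recover the chosen tile path
    tiles, i = [], n
    while i > 0:
        if choice[i] is None:
            i -= 1
        else:
            tiles.append(choice[i])
            i -= choice[i][1]
    return best[n], tiles[::-1]
```

The inner loop over tile lengths keeps the run time at O(n · (hi − lo)), which is the linear-time behavior the abstract refers to for a fixed length range.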

  10. Predicting human height by Victorian and genomic methods.

    Science.gov (United States)

    Aulchenko, Yurii S; Struchalin, Maksim V; Belonogova, Nadezhda M; Axenovich, Tatiana I; Weedon, Michael N; Hofman, Albert; Uitterlinden, Andre G; Kayser, Manfred; Oostra, Ben A; van Duijn, Cornelia M; Janssens, A Cecile J W; Borodin, Pavel M

    2009-08-01

    In the Victorian era, Sir Francis Galton showed that 'when dealing with the transmission of stature from parents to children, the average height of the two parents, ... is all we need care to know about them' (1886). One hundred and twenty-two years after Galton's work was published, 54 loci showing strong statistical evidence for association to human height were described, providing us with potential genomic means of human height prediction. In a population-based study of 5748 people, we find that a 54-loci genomic profile explained 4-6% of the sex- and age-adjusted height variance, and had limited ability to discriminate tall/short people, as characterized by the area under the receiver-operating characteristic curve (AUC). In a family-based study of 550 people, with both parents having height measurements, we find that the Galtonian mid-parental prediction method explained 40% of the sex- and age-adjusted height variance, and showed high discriminative accuracy. We have also explored how much variance a genomic profile should explain to reach certain AUC values. For highly heritable traits such as height, we conclude that in applications in which parental phenotypic information is available (eg, medicine), the Victorian Galton's method will long stay unsurpassed, in terms of both discriminative accuracy and costs. For less heritable traits, and in situations in which parental information is not available (eg, forensics), genomic methods may provide an alternative, given that the variants determining an essential proportion of the trait's variation can be identified.
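Galton's mid-parental predictor is simple enough to state in a few lines. The sketch below uses an illustrative 13 cm female-to-male height conversion and a regression-toward-the-mean factor of 0.7; both constants vary by population and are assumptions here, not values taken from the study.

```python
def midparental_height(father_cm, mother_cm, sex, pop_mean=170.0, shrink=0.7):
    """Galton-style prediction: a child's height regresses toward the
    population mean from the sex-adjusted mid-parental height.
    The +13 cm adjustment and shrink factor are illustrative values."""
    # express both parents on a common (male) scale
    midparent = (father_cm + (mother_cm + 13.0)) / 2.0
    pred_male = pop_mean + shrink * (midparent - pop_mean)
    return pred_male if sex == "M" else pred_male - 13.0
```

A 54-SNP genomic predictor would instead sum per-locus effect sizes, which is exactly why it captures so much less variance when the effects are small.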

  11. Evaluation Methods for Prevention Education.

    Science.gov (United States)

    Blue, Amy V.; Barnette, J. Jackson; Ferguson, Kristi J.; Garr, David R.

    2000-01-01

    Discusses the importance of assessing medical students' competence in prevention knowledge, skills, and attitudes. Provides general guidance for programs interested in evaluating their prevention instructional efforts, and gives specific examples of possible methods for evaluating prevention education. Stresses the need to tailor assessment…

  12. Computer Architecture Performance Evaluation Methods

    CERN Document Server

    Eeckhout, Lieven

    2010-01-01

    Performance evaluation is at the foundation of computer architecture research and development. Contemporary microprocessors are so complex that architects cannot design systems based on intuition and simple models only. Adequate performance evaluation methods are absolutely crucial to steer the research and development process in the right direction. However, rigorous performance evaluation is non-trivial as there are multiple aspects to performance evaluation, such as picking workloads, selecting an appropriate modeling or simulation approach, running the model and interpreting the results usi…

  13. Methods for Optimizing CRISPR-Cas9 Genome Editing Specificity

    Science.gov (United States)

    Tycko, Josh; Myer, Vic E.; Hsu, Patrick D.

    2016-01-01

    Advances in the development of delivery, repair, and specificity strategies for the CRISPR-Cas9 genome engineering toolbox are helping researchers understand gene function with unprecedented precision and sensitivity. CRISPR-Cas9 also holds enormous therapeutic potential for the treatment of genetic disorders by directly correcting disease-causing mutations. Although the Cas9 protein has been shown to bind and cleave DNA at off-target sites, the field of Cas9 specificity is rapidly progressing with marked improvements in guide RNA selection, protein and guide engineering, novel enzymes, and off-target detection methods. We review important challenges and breakthroughs in the field as a comprehensive practical guide to interested users of genome editing technologies, highlighting key tools and strategies for optimizing specificity. The genome editing community should now strive to standardize such methods for measuring and reporting off-target activity, while keeping in mind that the goal for specificity should be continued improvement and vigilance. PMID:27494557
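As a toy illustration of what off-target enumeration means computationally, the sketch below scans a sequence for NGG-adjacent sites within a mismatch budget of the guide. This brute-force scan and its parameters are purely illustrative; real off-target detection methods are experimental or use indexed genome search with position-weighted mismatch scoring.

```python
def off_target_sites(guide, genome_seq, max_mismatches=3, pam="GG"):
    """Find candidate (off-)target sites: a protospacer matching the guide
    within max_mismatches, immediately followed by an NGG PAM.
    Brute-force sketch for illustration only."""
    hits = []
    g = len(guide)
    for i in range(len(genome_seq) - g - 2):
        # PAM check: position i+g is the 'N', then two fixed bases
        if genome_seq[i + g + 1:i + g + 3] != pam:
            continue
        mm = sum(a != b for a, b in zip(guide, genome_seq[i:i + g]))
        if mm <= max_mismatches:
            hits.append((i, mm))
    return hits
```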

  14. Performance comparison of two efficient genomic selection methods (gsbay & MixP) applied in aquacultural organisms

    Science.gov (United States)

    Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin

    2017-02-01

    Genomic selection is more and more popular in animal and plant breeding industries all around the world, as it can be applied early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two different genomic selection tools, MixP and gsbay, were applied to genomic evaluation of simulated data and Zhikong scallop (Chlamys farreri) field data. The results were compared with the genomic best linear unbiased prediction (GBLUP) method, which has been applied widely. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby could be applied for the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of GEBV ranged from 0.20 to 0.78 with MixP, from 0.21 to 0.67 with gsbay, and from 0.21 to 0.61 with GBLUP. Estimations made by MixP and gsbay were expected to be more reliable than those estimated by GBLUP. Predictions made by gsbay were more robust, while computation with MixP was much faster, especially when dealing with large-scale data. These results suggested that the algorithms implemented by both MixP and gsbay are feasible for carrying out genomic selection in scallop breeding, and that more genotype data will be necessary to produce genomic estimated breeding values with higher accuracy for the industry.
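Both tools estimate SNP marker effects and sum them into GEBVs for selection candidates. A minimal numpy sketch of that general idea, using plain ridge regression (RR-BLUP-style) rather than the actual MixP or gsbay algorithms, with an assumed variance ratio `lam`:

```python
import numpy as np

def rrblup_gebv(Z_train, y_train, Z_candidates, lam):
    """Estimate SNP effects by ridge regression (RR-BLUP-style) and return
    GEBVs for selection candidates.

    Z_*: genotype matrices (individuals x markers, coded 0/1/2).
    lam: assumed ratio of residual to marker-effect variance."""
    y = y_train - y_train.mean()
    m = Z_train.shape[1]
    # solve (Z'Z + lam*I) u = Z'y for the vector of marker effects u
    u = np.linalg.solve(Z_train.T @ Z_train + lam * np.eye(m), Z_train.T @ y)
    return Z_candidates @ u
```

Accuracy in the abstract's sense is then the correlation between these GEBVs and the (simulated) true breeding values.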

  15. Comparison of methods for genomic localization of gene trap sequences

    Directory of Open Access Journals (Sweden)

    Ferrin Thomas E

    2006-09-01

    Full Text Available Abstract Background Gene knockouts in a model organism such as mouse provide a valuable resource for the study of basic biology and human disease. Determining which gene has been inactivated by an untargeted gene trapping event poses a challenging annotation problem because gene trap sequence tags, which represent sequence near the vector insertion site of a trapped gene, are typically short and often contain unresolved residues. To understand better the localization of these sequences on the mouse genome, we compared stand-alone versions of the alignment programs BLAT, SSAHA, and MegaBLAST. A set of 3,369 sequence tags was aligned to build 34 of the mouse genome using default parameters for each algorithm. Known genome coordinates for the cognate set of full-length genes (1,659 sequences were used to evaluate localization results. Results In general, all three programs performed well in terms of localizing sequences to a general region of the genome, with only relatively subtle errors identified for a small proportion of the sequence tags. However, large differences in performance were noted with regard to correctly identifying exon boundaries. BLAT correctly identified the vast majority of exon boundaries, while SSAHA and MegaBLAST missed the majority of exon boundaries. SSAHA consistently reported the fewest false positives and is the fastest algorithm. MegaBLAST was comparable to BLAT in speed, but was the most susceptible to localizing sequence tags incorrectly to pseudogenes. Conclusion The differences in performance for sequence tags and full-length reference sequences were surprisingly small. Characteristic variations in localization results for each program were noted that affect the localization of sequence at exon boundaries, in particular.

  16. Will genomic selection be a practical method for plant breeding?

    Science.gov (United States)

    Nakaya, Akihiro; Isobe, Sachiko N

    2012-11-01

    Genomic selection or genome-wide selection (GS) has been highlighted as a new approach for marker-assisted selection (MAS) in recent years. GS is a form of MAS that selects favourable individuals based on genomic estimated breeding values. Previous studies have suggested the utility of GS, especially for capturing small-effect quantitative trait loci, but GS has not become a popular methodology in the field of plant breeding, possibly because there is insufficient information available on GS for practical use. In this review, GS is discussed from a practical breeding viewpoint. Statistical approaches employed in GS are briefly described, before the recent progress in GS studies is surveyed. GS practices in plant breeding are then reviewed before future prospects are discussed. Statistical concepts used in GS are discussed with genetic models and variance decomposition, heritability, breeding value and linear model. Recent progress in GS studies is reviewed with a focus on empirical studies. For the practice of GS in plant breeding, several specific points are discussed including linkage disequilibrium, feature of populations and genotyped markers and breeding scheme. Currently, GS is not perfect, but it is a potent, attractive and valuable approach for plant breeding. This method will be integrated into many practical breeding programmes in the near future with further advances and the maturing of its theory.

  17. Evaluative Profiling of Arsenic Sensing and Regulatory Systems in the Human Microbiome Project Genomes

    Directory of Open Access Journals (Sweden)

    Raphael D. Isokpehi

    2014-01-01

    Full Text Available The influence of environmental chemicals, including arsenic, a type 1 carcinogen, on the composition and function of the human-associated microbiota is of significance in human health and disease. We have developed a suite of bioinformatics and visual analytics methods to evaluate the availability (presence or absence) and abundance of functional annotations in a microbial genome for seven Pfam protein families: As(III)-responsive transcriptional repressor (ArsR), anion-transporting ATPase (ArsA), arsenical pump membrane protein (ArsB), arsenate reductase (ArsC), arsenical resistance operon trans-acting repressor (ArsD), water/glycerol transport protein (aquaporins), and universal stress protein (USP). These genes encode functions for sensing and/or regulating arsenic content in the bacterial cell. The evaluative profiling strategy was applied to 3,274 genomes, from which 62 genomes from 18 genera were identified to contain genes for the seven protein families. Our list included 12 genomes in the Human Microbiome Project (HMP) from the following genera: Citrobacter, Escherichia, Lactobacillus, Providencia, Rhodococcus, and Staphylococcus. Gene neighborhood analysis of the arsenic resistance operon in the genome of Bacteroides thetaiotaomicron VPI-5482, a human gut symbiont, revealed the adjacent arrangement of genes for arsenite binding/transfer (ArsD) and cytochrome c biosynthesis (DsbD_2). Visual analytics facilitated evaluation of protein annotations in 367 genomes in the phylum Bacteroidetes and identified multiple genomes in which the genes for ArsD and DsbD_2 were adjacently arranged. Cytochrome c, produced by a posttranslational process, consists of heme-containing proteins important for cellular energy production and signaling. Further research is desired to elucidate arsenic resistance and arsenic-mediated cellular energy production in the Bacteroidetes.
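The availability-and-abundance profiling step can be mimicked in a few lines over per-genome Pfam annotation counts. The data structure below is hypothetical; the study's pipeline works from real annotation files.

```python
def profile_genomes(annotations, families):
    """annotations: {genome_id: {pfam_family: count}} (hypothetical input).
    Returns (genomes carrying all families, full abundance table with zeros
    filled in for absent families)."""
    table = {g: {f: fams.get(f, 0) for f in families}
             for g, fams in annotations.items()}
    complete = sorted(g for g, row in table.items()
                      if all(row[f] > 0 for f in families))
    return complete, table
```

Filtering 3,274 genomes down to the 62 carrying all seven families is exactly this kind of all-families-present query.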

  18. Accuracy of genomic selection methods in a standard data set of loblolly pine (Pinus taeda L.).

    Science.gov (United States)

    Resende, M F R; Muñoz, P; Resende, M D V; Garrick, D J; Fernando, R L; Davis, J M; Jokela, E J; Martin, T A; Peter, G F; Kirst, M

    2012-04-01

    Genomic selection can increase genetic gain per generation through early selection. Genomic selection is expected to be particularly valuable for traits that are costly to phenotype and expressed late in the life cycle of long-lived species. Alternative approaches to genomic selection prediction models may perform differently for traits with distinct genetic properties. Here the performance of four original methods of genomic selection that differ with respect to assumptions regarding the distribution of marker effects is presented, including (i) ridge regression-best linear unbiased prediction (RR-BLUP), (ii) Bayes A, (iii) Bayes Cπ, and (iv) Bayesian LASSO. In addition, a modified RR-BLUP (RR-BLUP B) that utilizes a selected subset of markers was evaluated. The accuracy of these methods was compared across 17 traits with distinct heritabilities and genetic architectures, including growth, development, and disease-resistance properties, measured in a Pinus taeda (loblolly pine) training population of 951 individuals genotyped with 4853 SNPs. The predictive ability of the methods was evaluated using a 10-fold cross-validation approach, and differed only marginally for most method/trait combinations. Interestingly, for fusiform rust disease-resistance traits, Bayes Cπ, Bayes A, and RR-BLUP B had higher predictive ability than RR-BLUP and Bayesian LASSO. Fusiform rust is controlled by few genes of large effect. A limitation of RR-BLUP is the assumption of an equal contribution of all markers to the observed variation. However, RR-BLUP B performed equally well as the Bayesian approaches. The genotypic and phenotypic data used in this study are publicly available for comparative analysis of genomic selection prediction models.

  19. Fine population structure analysis method for genomes of many.

    Science.gov (United States)

    Pan, Xuedong; Wang, Yi; Wong, Emily H M; Telenti, Amalio; Venter, J Craig; Jin, Li

    2017-10-03

    Fine population structure can be examined through the clustering of individuals into subpopulations. The clustering of individuals in large sequence datasets into subpopulations makes the calculation of subpopulation specific allele frequency possible, which may shed light on selection of candidate variants for rare diseases. However, as the magnitude of the data increases, computational burden becomes a challenge in fine population structure analysis. To address this issue, we propose fine population structure analysis (FIPSA), which is an individual-based non-parametric method for dissecting fine population structure. FIPSA maximizes the likelihood ratio of the contingency table of the allele counts multiplied by the group. We demonstrated that its speed and accuracy were superior to existing non-parametric methods when the simulated sample size was up to 5,000 individuals. When applied to real data, the method showed high resolution on the Human Genome Diversity Project (HGDP) East Asian dataset. FIPSA was independently validated on 11,257 human genomes. The group assignment given by FIPSA was 99.1% similar to those assigned based on supervised learning. Thus, FIPSA provides high resolution and is compatible with a real dataset of more than ten thousand individuals.
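The contingency-table likelihood ratio at the heart of the method is the classical G statistic. A generic sketch for an r × c table of allele counts per candidate group (illustrative; FIPSA's actual objective and its search over group assignments are more involved):

```python
import math

def g_statistic(table):
    """G (log-likelihood ratio) statistic for an r x c contingency table,
    e.g. rows = candidate subpopulations, columns = allele counts.
    Larger G means the allele distribution separates the groups better."""
    total = sum(sum(row) for row in table)
    rows = [sum(row) for row in table]
    cols = [sum(col) for col in zip(*table)]
    g = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            if obs:  # empty cells contribute nothing
                exp = rows[i] * cols[j] / total
                g += 2.0 * obs * math.log(obs / exp)
    return g
```

Maximizing such a score over possible group assignments is one way to read the abstract's "likelihood ratio of the contingency table of the allele counts by group".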

  20. Optimized application of penalized regression methods to diverse genomic data.

    Science.gov (United States)

    Waldron, Levi; Pintilie, Melania; Tsao, Ming-Sound; Shepherd, Frances A; Huttenhower, Curtis; Jurisica, Igor

    2011-12-15

    Penalized regression methods have been adopted widely for high-dimensional feature selection and prediction in many bioinformatic and biostatistical contexts. While their theoretical properties are well-understood, specific methodology for their optimal application to genomic data has not been determined. Through simulation of contrasting scenarios of correlated high-dimensional survival data, we compared the LASSO, Ridge and Elastic Net penalties for prediction and variable selection. We found that a 2D tuning of the Elastic Net penalties was necessary to avoid mimicking the performance of LASSO or Ridge regression. Furthermore, we found that in a simulated scenario favoring the LASSO penalty, a univariate pre-filter made the Elastic Net behave more like Ridge regression, which was detrimental to prediction performance. We demonstrate the real-life application of these methods to predicting the survival of cancer patients from microarray data, and to classification of obese and lean individuals from metagenomic data. Based on these results, we provide an optimized set of guidelines for the application of penalized regression for reproducible class comparison and prediction with genomic data. A parallelized implementation of the methods presented for regression and for simulation of synthetic data is provided as the pensim R package, available at http://cran.r-project.org/web/packages/pensim/index.html. chuttenh@hsph.harvard.edu; juris@ai.utoronto.ca Supplementary data are available at Bioinformatics online.
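The paper's key recommendation — tuning the L1 and L2 penalties independently rather than through a single mixing parameter — is easy to state with a penalty-parameterized coordinate-descent sketch. This is illustrative only; the authors' pensim package wraps established R implementations.

```python
import numpy as np

def elastic_net(X, y, lam1, lam2, iters=200):
    """Coordinate-descent elastic net with separately tunable L1 (lam1) and
    L2 (lam2) penalties -- the '2D tuning' the study recommends.
    Minimizes (1/2n)||y - Xb||^2 + lam1*||b||_1 + (lam2/2)*||b||^2."""
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b                       # running residual
    z = (X ** 2).sum(axis=0) / n        # per-coordinate curvature
    for _ in range(iters):
        for j in range(p):
            # partial residual correlation for coordinate j
            rho = X[:, j] @ r / n + z[j] * b[j]
            # soft-threshold (L1), then shrink (L2)
            new = np.sign(rho) * max(abs(rho) - lam1, 0.0) / (z[j] + lam2)
            r += X[:, j] * (b[j] - new)
            b[j] = new
    return b
```

A 2D grid search over `(lam1, lam2)` then lets LASSO-like (`lam2 = 0`) and Ridge-like (`lam1 = 0`) behavior emerge where the data favor them, rather than being fixed by a single mixing ratio.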

  1. LNG Safety Assessment Evaluation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Muna, Alice Baca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); LaFleur, Angela Christine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-05-01

    Sandia National Laboratories evaluated published safety assessment methods across a variety of industries including Liquefied Natural Gas (LNG), hydrogen, land and marine transportation, as well as the US Department of Defense (DOD). All the methods were evaluated for their potential applicability for use in the LNG railroad application. After reviewing the documents included in this report, as well as others not included because of repetition, the Department of Energy (DOE) Hydrogen Safety Plan Checklist is most suitable to be adapted to the LNG railroad application. This report was developed to survey industries related to rail transportation for methodologies and tools that can be used by the FRA to review and evaluate safety assessments submitted by the railroad industry as a part of their implementation plans for liquefied or compressed natural gas storage (on-board or tender) and engine fueling delivery systems. The main sections of this report provide an overview of various methods found during this survey. In most cases, the reference document is quoted directly. The final section provides discussion and a recommendation for the most appropriate methodology that will allow efficient and consistent evaluations to be made. The DOE Hydrogen Safety Plan Checklist was then revised to adapt it as a methodology for the Federal Railroad Administration’s use in evaluating safety plans submitted by the railroad industry.

  2. Evaluation of turbulence mitigation methods

    Science.gov (United States)

    van Eekeren, Adam W. M.; Huebner, Claudia S.; Dijk, Judith; Schutte, Klamer; Schwering, Piet B. W.

    2014-05-01

    Atmospheric turbulence is a well-known phenomenon that diminishes the recognition range in visual and infrared image sequences. There exist many different methods to compensate for the effects of turbulence. This paper focuses on the performance of two software-based methods to mitigate the effects of low- and medium turbulence conditions. Both methods are capable of processing static and dynamic scenes. The first method consists of local registration, frame selection, blur estimation and deconvolution. The second method consists of local motion compensation, fore- /background segmentation and weighted iterative blind deconvolution. A comparative evaluation using quantitative measures is done on some representative sequences captured during a NATO SET 165 trial in Dayton. The amount of blurring and tilt in the imagery seem to be relevant measures for such an evaluation. It is shown that both methods improve the imagery by reducing the blurring and tilt and therefore enlarge the recognition range. Furthermore, results of a recognition experiment using simulated data are presented that show that turbulence mitigation using the first method improves the recognition range up to 25% for an operational optical system.

  3. Genome analysis methods - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Genome analysis methods - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods.

  4. Evaluation of Kjeldahl digestion method

    International Nuclear Information System (INIS)

    Amin, M.; Flowers, T.H.

    2004-01-01

    The Kjeldahl digestion method was evaluated by comparing measured values of total nitrogen, phosphorus and potassium, obtained using three salt/catalyst mixtures in the standard Kjeldahl digestion method and the salicylic acid modification, with certified values for plant material. Comparisons were also made between the steam distillation method and the Technicon Auto-Analyzer for determination of total nitrogen, and between the ascorbic acid/molybdate method and the molybdate/metavanadate method on the Technicon Auto-Analyzer for phosphorus. The 1 g salt/catalyst mixture recovered less nitrogen than the 2.5 g mixture in the standard Kjeldahl method, due to the lower temperature and incomplete digestion, in both plant and soil samples. The 2.5 g catalyst mixture partially recovered nitrate in the standard Kjeldahl method, and the salicylic acid modification failed to recover all of the nitrate in plant material. Use of the 2.5 g salt/catalyst mixture and selenium appears to promote nitrogen losses in the salicylic acid modification but not in the standard Kjeldahl digestion of soil samples. No interference of selenium or copper with the colorimetric determination of nitrogen and phosphorus was observed. The standard Kjeldahl method with 2.5 g of a 10:1 sodium sulphate:copper sulphate salt/catalyst mixture in 5 ml sulfuric acid was found suitable for determination of total nitrogen, phosphorus and potassium. The steam distillation and Technicon Auto-Analyzer techniques measured similar amounts of ammonium nitrogen; however, the Technicon Auto-Analyzer technique is easier and more rapid, with a higher degree of reproducibility, and is precise, accurate, reliable and free from human error. The amount of phosphorus measured by the ascorbic acid/molybdate method was more accurate than that measured by the molybdate/metavanadate method on the Technicon Auto-Analyzer. (author)

  5. A comparative evaluation of genome assembly reconciliation tools.

    Science.gov (United States)

    Alhakami, Hind; Mirebrahim, Hamid; Lonardi, Stefano

    2017-05-18

    The majority of eukaryotic genomes are unfinished due to the algorithmic challenges of assembling them. A variety of assembly and scaffolding tools are available, but it is not always obvious which tool or parameters to use for a specific genome size and complexity. It is, therefore, common practice to produce multiple assemblies using different assemblers and parameters, then select the best one for public release. A more compelling approach would allow one to merge multiple assemblies with the intent of producing a higher quality consensus assembly, which is the objective of assembly reconciliation. Several assembly reconciliation tools have been proposed in the literature, but their strengths and weaknesses have never been compared on a common dataset. We fill this need with this work, in which we report on an extensive comparative evaluation of several tools. Specifically, we evaluate contiguity, correctness, coverage, and the duplication ratio of the merged assembly compared to the individual assemblies provided as input. None of the tools we tested consistently improved the quality of the input GAGE and synthetic assemblies. Our experiments show an increase in contiguity in the consensus assembly when the original assemblies already have high quality. In terms of correctness, the quality of the results depends on the specific tool, as well as on the quality and the ranking of the input assemblies. In general, the number of misassemblies ranges from being comparable to the best of the input assembly to being comparable to the worst of the input assembly.
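Two of the evaluation measures named above — contiguity (typically summarized as N50) and the duplication ratio — are simple to compute from an assembly and its alignment to a reference. A sketch:

```python
def n50(contig_lengths):
    """N50: the length L such that contigs of length >= L cover at least
    half of the total assembly size -- the standard contiguity metric."""
    total = sum(contig_lengths)
    run = 0
    for length in sorted(contig_lengths, reverse=True):
        run += length
        if 2 * run >= total:
            return length
    return 0

def duplication_ratio(assembly_aligned_bases, reference_covered_bases):
    """Aligned assembly bases per covered reference base; values above 1.0
    suggest a merger kept redundant copies of the same region."""
    return assembly_aligned_bases / reference_covered_bases
```

Correctness (misassembly counts) is the harder metric and needs a whole-genome aligner; tools such as QUAST report all three together.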

  6. Genomics protocols [Methods in molecular biology, v. 175

    National Research Council Canada - National Science Library

    Starkey, Michael P; Elaswarapu, Ramnath

    2001-01-01

    ... to the larger community of researchers who have recognized the potential of genomics research and may themselves be beginning to explore the technologies involved. Some of the techniques described in Genomics Protocols are clearly not restricted to the genomics field; indeed, a prerequisite for many procedures in this discipline is that they require an extremely high throughput, beyond the scope of the average investigator. However, what we have endeavored here to achieve is both to compile a collection of...

  7. Will genomic selection be a practical method for plant breeding?

    OpenAIRE

    Nakaya, Akihiro; Isobe, Sachiko N.

    2012-01-01

    Background Genomic selection or genome-wide selection (GS) has been highlighted as a new approach for marker-assisted selection (MAS) in recent years. GS is a form of MAS that selects favourable individuals based on genomic estimated breeding values. Previous studies have suggested the utility of GS, especially for capturing small-effect quantitative trait loci, but GS has not become a popular methodology in the field of plant breeding, possibly because there is insufficient information avail...

  8. Genomics protocols [Methods in molecular biology, v. 175

    National Research Council Canada - National Science Library

    Starkey, Michael P; Elaswarapu, Ramnath

    2001-01-01

    .... Drawing on emerging technologies in the fields of bioinformatics and proteomics, these protocols cover not only those traditionally recognized as genomics, but also early therapeutic approaches...

  9. Whole genome amplification: Use of advanced isothermal method

    African Journals Online (AJOL)

    Yomi

    2010-12-29

    Dec 29, 2010 ... example, hydrodynamic shearing machine (Arneson et al., 2008b). For validation of whole genome amplification,. Tanabe et al. (2003) have used exon amplification and genotyping of 307 microsatellites, in addition to array CGH. Analyzing of 307 microsatellites distributed throughout the genome revealed ...

  10. Genomics Education in Practice: Evaluation of a Mobile Lab Design

    Science.gov (United States)

    Van Mil, Marc H. W.; Boerwinkel, Dirk Jan; Buizer-Voskamp, Jacobine E.; Speksnijder, Annelies; Waarlo, Arend Jan

    2010-01-01

    Dutch genomics research centers have developed the "DNA labs on the road" to bridge the gap between modern genomics research practice and secondary-school curriculum in the Netherlands. These mobile DNA labs offer upper-secondary students the opportunity to experience genomics research through experiments with laboratory equipment that…

  11. Comparison on genomic predictions using three GBLUP methods and two single-step blending methods in the Nordic Holstein population

    Directory of Open Access Journals (Sweden)

    Gao Hongding

    2012-07-01

    Full Text Available Abstract Background A single-step blending approach allows genomic prediction using information from genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods, with and without adjustment of the genomic relationship matrix, for genomic prediction of 16 traits in the Nordic Holstein population. Methods The data consisted of de-regressed proofs (DRP) for 5,214 genotyped and 9,374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference of scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenic effect. Results Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively. In …
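The blending and scale adjustment the study compares can be sketched for the genotyped subset: rescale the genomic matrix G so that its mean diagonal and mean off-diagonal match the pedigree matrix A22, then take a weighted average. The rescaling rule below is one common choice and an assumption here, not necessarily the exact adjustment used in the paper.

```python
import numpy as np

def blended_relationship(G, A22, w=0.20, adjust=True):
    """Combine genomic (G) and pedigree (A22) relationship matrices for the
    genotyped animals, as in single-step / polygenic-effect GBLUP models.

    adjust=True rescales G as G* = a + b*G so that its mean diagonal and mean
    off-diagonal match A22 (one common way to put the two on the same scale;
    assumes those means differ between diagonal and off-diagonal)."""
    G = G.copy()
    if adjust:
        off = ~np.eye(len(G), dtype=bool)
        b = ((A22[off].mean() - A22.diagonal().mean())
             / (G[off].mean() - G.diagonal().mean()))
        a = A22.diagonal().mean() - b * G.diagonal().mean()
        G = a + b * G
    # weighted blend: w is the relative weight on the pedigree matrix
    return (1 - w) * G + w * A22
```

With `w` swept over 0.05–0.40 this reproduces, in miniature, the weighting experiment described in the abstract.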

  12. Two efficient methods for isolation of high-quality genomic DNA from entomopathogenic fungi.

    Science.gov (United States)

    Serna-Domínguez, María G; Andrade-Michel, Gilda Y; Arredondo-Bernal, Hugo C; Gallou, Adrien

    2018-03-27

    Conventional and commercial methods for isolation of nucleic acids are available for fungal samples, including entomopathogenic fungi (EPF). However, there is no single optimal method for all organisms. The cell wall structure and the wide range of secondary metabolites of EPF can broadly interfere with the efficiency of a DNA extraction protocol. This study compares three commercial protocols: DNeasy® Plant Mini Kit (Qiagen), Wizard® Genomic DNA Purification Kit (Promega), and Axygen™ Multisource Genomic DNA Miniprep Kit (Axygen), and three conventional methods based on different buffers (SDS, CTAB/PVPP, and CTAB/β-mercaptoethanol), each combined with three cell lysis procedures: liquid nitrogen homogenization and two bead-beating materials (tungsten-carbide and stainless-steel), for four representative species of EPF (Beauveria bassiana, Hirsutella citriformis, Isaria javanica, and Metarhizium anisopliae). Liquid nitrogen homogenization combined with the DNeasy® Plant Mini Kit (QN) or SDS buffer (SN) significantly improved the yield with good purity (~1.8) and high integrity (>20,000 bp) of genomic DNA in contrast with the other methods; these results were also better than those obtained with the two bead-beating materials. The purified DNA was evaluated by PCR-based techniques: amplification of translation elongation factor 1-α (TEF) and two highly sensitive molecular markers (ISSR and AFLP), with reliable and reproducible results. Despite variation in the yield, purity, and integrity of DNA extracted from the four species of EPF with the different DNA extraction methods, the SN and QN protocols maintained the high quality of DNA that is required for downstream molecular applications.

  13. New library construction method for single-cell genomes.

    Directory of Open Access Journals (Sweden)

    Larry Xi

    Full Text Available A central challenge in sequencing single-cell genomes is the accurate determination of point mutations, phasing of these mutations, and identifying copy number variations with few assumptions. Ideally, this is accomplished under as low sequencing coverage as possible. Here we report our attempt to meet these goals with a novel library construction and library amplification methodology. In our approach, single-cell genomic DNA is first fragmented with saturated transposition to make a primary library that uniformly covers the whole genome by short fragments. The library is then amplified by a carefully optimized PCR protocol in a uniform and synchronized fashion for next-generation sequencing. Each step of the protocol can be quantitatively characterized. Our shallow sequencing data show that the library is tightly distributed and is useful for the determination of copy number variations.
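
    A common way to check that such a library is "tightly distributed" is to bin read start positions and compute the coefficient of variation of per-bin counts; lower values mean more uniform coverage. This is a generic sketch of that check, not the authors' pipeline; the genome length, read counts, and bias model are invented for illustration:

```python
import numpy as np

def coverage_cv(read_starts, genome_len, bin_size=1000):
    """Coefficient of variation of per-bin read counts: a simple proxy for
    how uniformly a library covers the genome (lower = more uniform)."""
    n_bins = -(-genome_len // bin_size)            # ceiling division
    counts = np.bincount(np.asarray(read_starts) // bin_size, minlength=n_bins)
    return counts.std() / counts.mean()

# invented example: a uniform library vs. one with strong positional bias
rng = np.random.default_rng(0)
uniform_lib = rng.integers(0, 100_000, size=20_000)
biased_lib = (rng.beta(0.5, 5.0, size=20_000) * 100_000).astype(int)
print(coverage_cv(uniform_lib, 100_000) < coverage_cv(biased_lib, 100_000))  # True
```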

  14. Methods for evaluating information sources

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2012-01-01

    The article briefly presents and discusses 12 different approaches to the evaluation of information sources (for example a Wikipedia entry or a journal article): (1) the checklist approach; (2) classical peer review; (3) modified peer review; (4) evaluation based on examining the coverage...

  15. Evaluation of Three Automated Genome Annotations for Halorhabdus utahensis

    DEFF Research Database (Denmark)

    Bakke, Peter; Carney, Nick; DeLoache, Will

    2009-01-01

    Genome annotations are accumulating rapidly and depend heavily on automated annotation systems. Many genome centers offer annotation systems but no one has compared their output in a systematic way to determine accuracy and inherent errors. Errors in the annotations are routinely deposited...... in databases such as NCBI and used to validate subsequent annotation errors. We submitted the genome sequence of halophilic archaeon Halorhabdus utahensis to be analyzed by three genome annotation services. We have examined the output from each service in a variety of ways in order to compare the methodology...... and effectiveness of the annotations, as well as to explore the genes, pathways, and physiology of the previously unannotated genome. The annotation services differ considerably in gene calls, features, and ease of use. We had to manually identify the origin of replication and the species-specific consensus...

  16. Comparative assessment of methods for estimating individual genome-wide homozygosity-by-descent from human genomic data

    Directory of Open Access Journals (Sweden)

    McQuillan Ruth

    2010-02-01

    Full Text Available Abstract Background Genome-wide homozygosity estimation from genomic data is becoming an increasingly interesting research topic. The aim of this study was to compare different methods for estimating individual homozygosity-by-descent based on the information from human genome-wide scans rather than genealogies. We considered the four most commonly used methods and investigated their applicability to single-nucleotide polymorphism (SNP) data in both a simulation study and using real human genotype data. A total of 986 inhabitants from the isolated Island of Vis, Croatia (where inbreeding is present, but no pedigree-based inbreeding was observed at the level of F > 0.0625) were included in this study. All individuals were genotyped with the Illumina HumanHap300 array with 317,503 SNP markers. Results Simulation data suggested that multi-point FEstim is the method most strongly correlated with true homozygosity-by-descent. Correlation coefficients between the homozygosity-by-descent estimates were high, but only for inbred individuals, with nearly absolute correlation between single-point measures. Conclusions Deciding who is really inbred is a methodological challenge where multi-point approaches can be very helpful once the set of SNP markers is filtered to remove linkage disequilibrium. The use of several different methodological approaches, and hence different homozygosity measures, can help to distinguish between homozygosity-by-state and homozygosity-by-descent in studies investigating the effects of genomic autozygosity on human health.
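
    One widely used genomic (rather than pedigree-based) homozygosity measure in this setting is F_ROH: the fraction of the autosomal genome covered by runs of homozygosity, which can be compared against pedigree thresholds such as the F > 0.0625 mentioned above. A minimal sketch, with invented segment coordinates:

```python
def f_roh(roh_segments, autosome_length):
    """Genomic inbreeding coefficient F_ROH: fraction of the autosomal genome
    lying inside runs of homozygosity (segments given as (start, end) in bp)."""
    return sum(end - start for start, end in roh_segments) / autosome_length

# invented individual: 120 Mb of ROH on a ~2,800 Mb autosomal genome
segments = [(10e6, 60e6), (200e6, 270e6)]
print(round(f_roh(segments, 2_800e6), 4))  # 0.0429, below the 0.0625 threshold
```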

  17. Genomes

    National Research Council Canada - National Science Library

    Brown, T. A. (Terence A.)

    2002-01-01

    ... of genome expression and replication processes, and transcriptomics and proteomics. This text is richly illustrated with clear, easy-to-follow, full color diagrams, which are downloadable from the book's website...

  18. Accuracy of multi-trait genomic selection using different methods

    NARCIS (Netherlands)

    Calus, M.P.L.; Veerkamp, R.F.

    2011-01-01

    Background Genomic selection has become a very important tool in animal genetics and is rapidly emerging in plant genetics. It holds the promise to be particularly beneficial to select for traits that are difficult or expensive to measure, such as traits that are measured in one environment and

  19. Genomic DNA extraction method from Annona senegalensis Pers ...

    African Journals Online (AJOL)

    ... with starches. The isolated DNA proved amenable to polymerase chain reaction (PCR) amplification and restriction digestion. This technique is fast, reproducible, and can be applied for simple sequence repeats (SSR)-PCR markers identification. Key words: Annona senegalensis, genomic DNA, fruits, modified, markers.

  20. Genomic and Genotypic Characterization of Cylindrospermopsis raciborskii: Toward an Intraspecific Phylogenetic Evaluation by Comparative Genomics

    Directory of Open Access Journals (Sweden)

    Vinicius A. C. Abreu

    2018-02-01

    Full Text Available Cylindrospermopsis raciborskii is a freshwater cyanobacterial species with increasing bloom reports worldwide that are likely due to factors related to climate change. In addition to the deleterious effects of blooms on aquatic ecosystems, the majority of ecotypes can synthesize toxic secondary metabolites causing public health issues. To overcome the harmful effects of C. raciborskii blooms, it is important to advance knowledge of diversity, genetic variation, and evolutionary processes within populations. An efficient approach to exploring this diversity and understanding the evolution of C. raciborskii is to use comparative genomics. Here, we report two new draft genomes of C. raciborskii (strains CENA302 and CENA303 from Brazilian isolates of different origins and explore their molecular diversity, phylogeny, and evolutionary diversification by comparing their genomes with sequences from other strains available in public databases. The results obtained by comparing seven C. raciborskii and the Raphidiopsis brookii D9 genomes revealed a set of conserved core genes and a variable set of accessory genes, such as those involved in the biosynthesis of natural products, heterocyte glycolipid formation, and nitrogen fixation. Gene cluster arrangements related to the biosynthesis of the antifungal cyclic glycosylated lipopeptide hassallidin were identified in four C. raciborskii genomes, including the non-nitrogen fixing strain CENA303. Shifts in gene clusters involved in toxin production according to geographic origins were observed, as well as a lack of nitrogen fixation (nif and heterocyte glycolipid (hgl gene clusters in some strains. Single gene phylogeny (16S rRNA sequences was congruent with phylogeny based on 31 concatenated housekeeping protein sequences, and both analyses have shown, with high support values, that the species C. raciborskii is monophyletic. This comparative genomics study allowed a species-wide view of the biological
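
    The core/accessory split described above reduces, at its simplest, to set operations over the gene families found in each genome. A toy sketch; the strain names echo the abstract, but the gene content assigned to them here is invented for illustration:

```python
def core_and_accessory(gene_sets):
    """Core genome = gene families present in every genome; accessory genome =
    families present in at least one genome but not in all of them."""
    core = set.intersection(*gene_sets)
    accessory = set.union(*gene_sets) - core
    return core, accessory

# strain names echo the abstract; the gene content below is invented
genomes = {
    "CENA302": {"nifH", "hglE", "rpoB"},
    "CENA303": {"rpoB", "cyrA"},          # non-nitrogen-fixing: no nif genes
    "CS-505":  {"nifH", "rpoB", "cyrA"},
}
core, accessory = core_and_accessory(list(genomes.values()))
print(sorted(core))       # ['rpoB']
print(sorted(accessory))  # ['cyrA', 'hglE', 'nifH']
```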

  1. Methods of Writing Instruction Evaluation.

    Science.gov (United States)

    Lamb, Bill H.

    The Writing Program Director at Johnson County Community College (Kansas) developed quantitative measures for writing instruction evaluation which can support that institution's growing interest in and support for peer collaboration as a means to improving instructional quality. The first process (Interaction Analysis) has an observer measure…

  2. Comparison of methods used to identify superior individuals in genomic selection in plant breeding.

    Science.gov (United States)

    Bhering, L L; Junqueira, V S; Peixoto, L A; Cruz, C D; Laviola, B G

    2015-09-10

    The aim of this study was to evaluate different methods used in genomic selection, and to verify which select the highest proportion of individuals with superior genotypes. Thus, F2 populations of different sizes were simulated (100, 200, 500, and 1000 individuals) with 10 replications each. The simulated genomes consisted of 10 linkage groups (LGs) of 100 cM each, containing 100 equally spaced markers per linkage group; 200 of the markers controlled the trait, defined as the first 20 markers of each LG. Genetic and phenotypic values were simulated assuming a binomial distribution of effects for each LG and the absence of dominance. For phenotypic values, heritabilities of 20, 50, and 80% were considered. To compare methodologies, the analysis processing time, coefficient of coincidence (selection of 5, 10, and 20% of superior individuals), and Spearman correlation between true genetic values and the genomic values predicted by each methodology were determined. Considering the processing time, the three methodologies were statistically different: rrBLUP was the fastest, and Bayesian LASSO was the slowest. Spearman correlation revealed that the rrBLUP and GBLUP methodologies were equivalent, and Bayesian LASSO provided the lowest correlation values. Similar results were obtained for the coincidence of selected individuals, for which Bayesian LASSO differed statistically and presented lower values than the other methodologies. Therefore, for the scenarios evaluated, rrBLUP is the best methodology for the selection of genetically superior individuals.
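
    Of the three methodologies compared, rrBLUP is the most compact to sketch: all marker effects are estimated jointly under equal shrinkage, and the predictions are ranked against true genetic values with a Spearman correlation, as in the study. The simulation below is a stand-in, not the paper's design (no linkage groups, simple Gaussian effects):

```python
import numpy as np

def rrblup(M, y, h2=0.5):
    """Ridge-regression BLUP: all marker effects estimated jointly under equal
    shrinkage, with the ridge parameter set from heritability (a common
    simplification; REML estimation is omitted)."""
    n, m = M.shape
    X = M - M.mean(axis=0)                         # centred genotypes
    lam = m * (1.0 - h2) / h2                      # variance-ratio shrinkage
    beta = np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ (y - y.mean()))
    return X @ beta                                # predicted genomic values

def spearman(a, b):
    """Spearman rank correlation, the study's comparison criterion."""
    rank = lambda v: np.argsort(np.argsort(v))
    return np.corrcoef(rank(a), rank(b))[0, 1]

# stand-in simulation: 200 individuals, 100 markers, h2 = 0.5
rng = np.random.default_rng(7)
M = rng.integers(0, 3, size=(200, 100)).astype(float)
g = (M - M.mean(axis=0)) @ rng.normal(0.0, 0.2, size=100)  # true genetic values
y = g + rng.normal(0.0, g.std(), size=200)                 # phenotypes
rho = spearman(g, rrblup(M, y))
print(rho > 0.5)  # True: predicted ranks track true genetic merit
```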

  3. Genomic evaluation of cattle in a multi-breed context

    DEFF Research Database (Denmark)

    Lund, Mogens Sandø; Su, Guosheng; Janss, Luc

    2014-01-01

    In order to obtain accurate genomic breeding values a large number of reference animals with both phenotype and genotype data are needed. This poses a challenge for breeds with small reference populations. One option to overcome this obstacle is to use a multi-breed reference population. However...... that the effect of multi-breed reference populations on the accuracy of genomic prediction is highly affected by the genetic distance between breeds. When combining populations of the same breeds from different countries, large increases in accuracy are seen, whereas for admixed populations with some exchange...... of sires, substantial but smaller gains are found. Little or no benefit is found when combining distantly related breeds such as Holstein and Jersey and using the widely used genomic BLUP model. By using more sophisticated Bayesian variable selection models that put more focus on genomic markers in strong...

  4. Accounting for linkage disequilibrium in genome-wide association studies: A penalized regression method.

    Science.gov (United States)

    Liu, Jin; Wang, Kai; Ma, Shuangge; Huang, Jian

    2013-01-01

    Penalized regression methods are becoming increasingly popular in genome-wide association studies (GWAS) for identifying genetic markers associated with disease. However, standard penalized methods such as the LASSO do not take into account the possible linkage disequilibrium between adjacent markers. We propose a novel penalized approach for GWAS using a dense set of single nucleotide polymorphisms (SNPs). The proposed method uses the minimax concave penalty (MCP) for marker selection and incorporates linkage disequilibrium (LD) information by penalizing the difference of the genetic effects at adjacent SNPs with high correlation. A coordinate descent algorithm is derived to implement the proposed method. This algorithm is efficient in dealing with a large number of SNPs. A multi-split method is used to calculate the p-values of the selected SNPs for assessing their significance. We refer to the proposed penalty function as the smoothed MCP and the proposed approach as the SMCP method. Performance of the proposed SMCP method and its comparison with the LASSO and MCP approaches are evaluated through simulation studies, which demonstrate that the proposed method is more accurate in selecting associated SNPs. Its applicability to real data is illustrated using heterogeneous stock mice data and a rheumatoid arthritis dataset.
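
    The two ingredients of the SMCP objective can be written down directly: the MCP for selection and a correlation-weighted smoothing term on adjacent SNP effects. This is a sketch of the objective only; the sign adjustment and weighting are our reading of the idea, and the coordinate descent solver is omitted:

```python
import numpy as np

def mcp(beta, lam, gamma=3.0):
    """Minimax concave penalty (MCP): LASSO-like near zero but flat beyond
    gamma*lam, so large genetic effects are not over-shrunk."""
    b = np.abs(beta)
    return np.where(b <= gamma * lam,
                    lam * b - b ** 2 / (2.0 * gamma),
                    0.5 * gamma * lam ** 2)

def smcp_objective(y, X, beta, lam1, lam2, r):
    """Sketch of the smoothed-MCP (SMCP) objective: squared-error loss, MCP on
    each SNP effect, plus a penalty on the difference between effects at
    adjacent SNPs, weighted by their LD correlation r[j] (sign-adjusted so
    negatively correlated pairs are smoothed toward opposite signs)."""
    rss = 0.5 * np.sum((y - X @ beta) ** 2)
    select = np.sum(mcp(beta, lam1))
    smooth = 0.5 * lam2 * np.sum(np.abs(r) *
                                 (beta[:-1] - np.sign(r) * beta[1:]) ** 2)
    return rss + select + smooth

# penalty at zero, a small effect, and a large effect (lam=0.5, gamma=3)
print(mcp(np.array([0.0, 0.2, 10.0]), lam=0.5))
```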

  5. AluScan: a method for genome-wide scanning of sequence and structure variations in the human genome

    Directory of Open Access Journals (Sweden)

    Mei Lingling

    2011-11-01

    Full Text Available Abstract Background To complement next-generation sequencing technologies, there is a pressing need for efficient pre-sequencing capture methods with reduced costs and DNA requirements. The Alu family of short interspersed nucleotide elements is the most abundant type of transposable element in the human genome and a recognized source of genome instability. With over one million Alu elements distributed throughout the genome, they are well positioned to facilitate genome-wide sequence amplification and capture of regions likely to harbor genetic variation hotspots of biological relevance. Results Here we report on the use of inter-Alu PCR with an enhanced range of amplicons in conjunction with next-generation sequencing to generate an Alu-anchored scan, or 'AluScan', of DNA sequences between Alu transposons, where Alu consensus sequence-based 'H-type' PCR primers that elongate outward from the head of an Alu element are combined with 'T-type' primers elongating from the poly-A-containing tail to achieve a wide amplicon range. To illustrate the method, glioma DNA was compared with white blood cell control DNA of the same patient by means of AluScan. The more than 10 Mb of sequence obtained, derived from over 8,000 genes spread across all the chromosomes, revealed a highly reproducible capture of genomic sequences enriched in genic sequences and cancer candidate gene regions. Requiring only sub-microgram quantities of sample DNA, the power of AluScan as a discovery tool for genetic variations was demonstrated by the identification of 357 instances of loss of heterozygosity, 341 somatic indels, 274 somatic SNVs, and seven potential somatic SNV hotspots between control and glioma DNA. Conclusions AluScan, implemented with just a small number of H-type and T-type inter-Alu PCR primers, provides an effective capture of a diversity of genome-wide sequences for analysis. The method, by enabling an examination of gene-enriched regions containing exons, introns, and
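
    Once Alu positions are known, candidate inter-Alu amplicons are simply the spans between neighbouring elements that fall within an amplifiable size range. The sketch below deliberately ignores primer orientation (H-type vs. T-type primer pairing) and uses invented coordinates:

```python
def inter_alu_amplicons(alus, min_len=100, max_len=6000):
    """Candidate inter-Alu amplicons: spans between consecutive Alu elements
    whose length falls inside an amplifiable range. Each Alu is (position,
    strand); primer orientation is deliberately ignored in this sketch."""
    alus = sorted(alus)
    out = []
    for (p1, s1), (p2, s2) in zip(alus, alus[1:]):
        if min_len <= p2 - p1 <= max_len:
            out.append((p1, p2, p2 - p1))
    return out

# invented Alu coordinates on a 50 kb region
alus = [(1_000, "+"), (3_500, "-"), (25_000, "+"), (26_200, "+")]
print(inter_alu_amplicons(alus))  # [(1000, 3500, 2500), (25000, 26200, 1200)]
```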

  6. Discount method for programming language evaluation

    DEFF Research Database (Denmark)

    Kurtev, Svetomir; Christensen, Tommy Aagaard; Thomsen, Bent

    2016-01-01

    This paper presents work in progress on developing a Discount Method for Programming Language Evaluation inspired by the Discount Usability Evaluation method (Benyon 2010) and the Instant Data Analysis method (Kjeldskov et al. 2004). The method is intended to bridge the gap between small scale...... internal language design evaluation methods and large scale surveys and quantitative evaluation methods. The method is designed to be applicable even before a compiler or IDE is developed for a new language. To test the method, a usability evaluation experiment was carried out on the Quorum programming...... language (Stefik et al. 2016) using programmers with experience in C and C#. When comparing our results with previous studies of Quorum, most of the data was comparable though not strictly in agreement. However, the discrepancies were mainly related to the programmers pre-existing expectations...

  7. Reliability and applications of statistical methods based on oligonucleotide frequencies in bacterial and archaeal genomes

    DEFF Research Database (Denmark)

    Bohlin, J; Skjerve, E; Ussery, David

    2008-01-01

    BACKGROUND: The increasing number of sequenced prokaryotic genomes contains a wealth of genomic data that needs to be effectively analysed. A set of statistical tools exists for such analysis, but their strengths and weaknesses have not been fully explored. The statistical methods we are concerned......, or be based on specific statistical distributions. Advantages with these statistical methods include measurements of phylogenetic relationship with relatively small pieces of DNA sampled from almost anywhere within genomes, detection of foreign/conserved DNA, and homology searches. Our aim was to explore...... of foreign/conserved DNA, and plasmid-host similarity comparisons. Additionally, the reliability of the methods was tested by comparing both real and random genomic DNA. RESULTS: Our findings show that the optimal method is context dependent. ROFs were best suited for distant homology searches, whilst......
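
    The genomic-signature idea underlying these methods can be illustrated with tetranucleotide frequencies: count all overlapping 4-mers, normalize, and correlate the resulting vectors between two sequences (e.g., a plasmid and a candidate host). A generic sketch, not the paper's specific statistics:

```python
from itertools import product
import numpy as np

def oligo_freqs(seq, k=4):
    """Relative frequencies of all overlapping k-mers: the 'genomic signature'
    these statistical methods are built on."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:                 # skips k-mers with ambiguous bases
            counts[kmer] += 1
    v = np.array(list(counts.values()), dtype=float)
    return v / v.sum()

def signature_similarity(seq_a, seq_b, k=4):
    """Pearson correlation of two signatures: related genomes, or a plasmid
    and its host, tend to score higher than unrelated pairs."""
    return np.corrcoef(oligo_freqs(seq_a, k), oligo_freqs(seq_b, k))[0, 1]

rng = np.random.default_rng(0)
seq = "".join(rng.choice(list("ACGT"), size=5000))
print(round(signature_similarity(seq, seq), 3))  # 1.0 (self-comparison)
```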

  8. Extraction of human genomic DNA from whole blood using a magnetic microsphere method.

    Science.gov (United States)

    Gong, Rui; Li, Shengying

    2014-01-01

    With the rapid development of molecular biology and the life sciences, magnetic extraction is a simple, automatic, and highly efficient method for separating biological molecules, performing immunoassays, and other applications. Human blood is an ideal source of human genomic DNA. Extracting genomic DNA by traditional methods is time-consuming, and phenol and chloroform are toxic reagents that endanger health. Therefore, it is necessary to find a more convenient and efficient method for obtaining human genomic DNA. In this study, we developed urea-formaldehyde resin magnetic microspheres and magnetic silica microspheres for extraction of human genomic DNA. First, a magnetic microsphere suspension was prepared and used to extract genomic DNA from fresh whole blood, frozen blood, dried blood, and trace blood. Second, DNA content and purity were measured by agarose electrophoresis and ultraviolet spectrophotometry. The human genomic DNA extracted from whole blood was then subjected to polymerase chain reaction analysis to further confirm its quality. The results of this study lay a good foundation for future research and development of a high-throughput and rapid extraction method for extracting genomic DNA from various types of blood samples.

  9. Comparison of variations detection between whole-genome amplification methods used in single-cell resequencing

    DEFF Research Database (Denmark)

    Hou, Yong; Wu, Kui; Shi, Xulian

    2015-01-01

    BACKGROUND: Single-cell resequencing (SCRS) provides many biomedical advances in variations detection at the single-cell level, but it currently relies on whole genome amplification (WGA). Three methods are commonly used for WGA: multiple displacement amplification (MDA), degenerate-oligonucleoti......

  10. Computational methods for data evaluation and assimilation

    CERN Document Server

    Cacuci, Dan Gabriel

    2013-01-01

    Data evaluation and data combination require the use of a wide range of probability theory concepts and tools, from deductive statistics mainly concerning frequencies and sample tallies to inductive inference for assimilating non-frequency data and a priori knowledge. Computational Methods for Data Evaluation and Assimilation presents interdisciplinary methods for integrating experimental and computational information. This self-contained book shows how the methods can be applied in many scientific and engineering areas. After presenting the fundamentals underlying the evaluation of experiment

  11. Recent progress in the methods of genome sequencing

    Directory of Open Access Journals (Sweden)

    Zhao Ning-wei

    2010-04-01

    Full Text Available Genome sequencing is a very important tool for the development of genetic diagnosis, gene-engineered drugs, pharmacogenetics, etc. As the Human Genome Project (HGP) has become widely known, there is an emerging need for genome sequencing. In recent years, two traditional strategies have been available for this purpose: shotgun sequencing and hierarchical sequencing. Beyond these, many efforts are pursuing new ideas to enable fast and cost-effective genome sequencing, including the 454 GS system, polony sequencing, single-molecule arrays, and nanopore sequencing; each has its own unique characteristics, but all remain to be fully developed.

  12. Evaluation of nitrate destruction methods

    International Nuclear Information System (INIS)

    Taylor, P.A.; Kurath, D.E.; Guenther, R.

    1993-01-01

    A wide variety of high nitrate-concentration aqueous mixed [radioactive and Resource Conservation and Recovery Act (RCRA) hazardous] wastes are stored at various US Department of Energy (DOE) facilities. These wastes will ultimately be solidified for final disposal, although the waste acceptance criteria for the final waste form is still being determined. Because the nitrates in the wastes will normally increase the volume or reduce the integrity of all of the waste forms under consideration for final disposal, nitrate destruction before solidification of the waste will generally be beneficial. This report describes and evaluates various technologies that could be used to destroy the nitrates in the stored wastes. This work was funded by the Department of Energy's Office of Technology Development, through the Chemical/Physical Technology Support Group of the Mixed Waste Integrated Program. All the nitrate destruction technologies will require further development work before a facility could be designed and built to treat the majority of the stored wastes. Several of the technologies have particularly attractive features: the nitrate to ammonia and ceramic (NAC) process produces an insoluble waste form with a significant volume reduction, electrochemical reduction destroys nitrates without any chemical addition, and the hydrothermal process can simultaneously treat nitrates and organics in both acidic and alkaline wastes. These three technologies have been tested using lab-scale equipment and surrogate solutions. At their current state of development, it is not possible to predict which process will be the most beneficial for a particular waste stream

  13. An efficient method for genomic DNA extraction from different molluscs species.

    Science.gov (United States)

    Pereira, Jorge C; Chaves, Raquel; Bastos, Estela; Leitão, Alexandra; Guedes-Pinto, Henrique

    2011-01-01

    The selection of a DNA extraction method is a critical step when subsequent analysis depends on the DNA quality and quantity. Unlike mammals, for which several capable DNA extraction methods have been developed, for molluscs the availability of optimized genomic DNA extraction protocols is clearly insufficient. Several aspects such as animal physiology, the type (e.g., adductor muscle or gills) or quantity of tissue, can explain the lack of efficiency (quality and yield) in molluscs genomic DNA extraction procedure. In an attempt to overcome these aspects, this work describes an efficient method for molluscs genomic DNA extraction that was tested in several species from different orders: Veneridae, Ostreidae, Anomiidae, Cardiidae (Bivalvia) and Muricidae (Gastropoda), with different weight sample tissues. The isolated DNA was of high molecular weight with high yield and purity, even with reduced quantities of tissue. Moreover, the genomic DNA isolated, demonstrated to be suitable for several downstream molecular techniques, such as PCR sequencing among others.

  14. Rapid and Inexpensive Screening of Genomic Copy Number Variations Using a Novel Quantitative Fluorescent PCR Method

    Directory of Open Access Journals (Sweden)

    Martin Stofanko

    2013-01-01

    Full Text Available Detection of human microdeletion and microduplication syndromes poses a significant burden on public healthcare systems in developing countries. With genome-wide diagnostic assays frequently inaccessible, targeted low-cost PCR-based approaches are preferred. However, their reproducibility depends on equally efficient amplification using a number of target and control primers. To address this, the recently described technique called Microdeletion/Microduplication Quantitative Fluorescent PCR (MQF-PCR) was shown to reliably detect four human syndromes by quantifying DNA amplification in an internally controlled PCR reaction. Here, we confirm its utility in the detection of eight human microdeletion syndromes, including the more common WAGR, Smith-Magenis, and Potocki-Lupski syndromes, with 100% sensitivity and 100% specificity. We present the selection, design, and performance evaluation of detection primers using a variety of approaches. We conclude that MQF-PCR is an easily adaptable method for the detection of human pathological chromosomal aberrations.
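
    The quantitative core of such an internally controlled assay is a simple signal ratio between a target amplicon and a control amplicon from the same reaction: a heterozygous microdeletion pulls the ratio toward ~0.5 and a microduplication toward ~1.5. The cutoffs and signal values below are illustrative assumptions, not the published calibration:

```python
def copy_number_call(target_signal, control_signal, del_cut=0.75, dup_cut=1.25):
    """Call a copy-number state from the fluorescent signal ratio of a target
    amplicon to an internal control amplicon in the same reaction. The
    cutoffs here are illustrative, not the assay's published calibration."""
    ratio = target_signal / control_signal
    if ratio < del_cut:
        return "deletion"
    if ratio > dup_cut:
        return "duplication"
    return "normal"

print(copy_number_call(480.0, 1000.0))   # deletion
print(copy_number_call(1020.0, 1000.0))  # normal
```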

  15. A novel whole genome amplification method using type IIS restriction enzymes to create overhangs with random sequences.

    Science.gov (United States)

    Pan, Xiaoming; Wan, Baihui; Li, Chunchuan; Liu, Yu; Wang, Jing; Mou, Haijin; Liang, Xingguo

    2014-08-20

    Ligation-mediated polymerase chain reaction (LM-PCR) is a whole genome amplification (WGA) method in which genomic DNA is cleaved into numerous fragments and all of the fragments are amplified by PCR after attaching a universal end sequence. However, self-ligation of these fragments can occur, which may cause biased amplification and restrict the method's application. To decrease the probability of self-ligation, here we use type IIS restriction enzymes to digest genomic DNA into fragments with 4-5 nt overhangs of random sequence. After ligating an adapter with random end sequences to these fragments, PCR is carried out and almost all of the DNA sequences present are amplified. In this study, the whole genome of Vibrio parahaemolyticus was amplified and the amplification efficiency was evaluated by quantitative PCR. The results suggested that our approach can provide sufficient genomic DNA of good quality to meet the requirements of various genetic analyses.
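
    The reason type IIS digestion yields random overhangs is that the enzyme cuts a fixed distance outside its recognition site, so the overhang bases are whatever sequence happens to sit there. A sketch for a BbsI-like enzyme (recognition site GAAGAC, cutting 2 nt downstream and leaving 4-nt 5' overhangs on the top strand); the demo sequence is invented:

```python
def type_iis_overhangs(seq, site="GAAGAC", spacer=2, overhang=4):
    """For a BbsI-like type IIS enzyme that cuts `spacer` bases downstream of
    its recognition site and leaves an `overhang`-long 5' extension, list the
    overhang sequences created. Because the cut falls outside the recognition
    site, these overhangs are effectively random, which is what makes
    self-ligation of the fragments unlikely."""
    out = []
    i = seq.find(site)
    while i != -1:
        start = i + len(site) + spacer
        out.append(seq[start:start + overhang])
        i = seq.find(site, i + 1)
    return out

demo = "TTGAAGACAAACGTGGC" + "CCGAAGACTTTGCAGTA"   # invented sequence
print(type_iis_overhangs(demo))  # ['ACGT', 'TGCA']
```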

  16. Optimizing Usability Studies by Complementary Evaluation Methods

    NARCIS (Netherlands)

    Schmettow, Martin; Bach, Cedric; Scapin, Dominique

    2014-01-01

    This paper examines combinations of complementary evaluation methods as a strategy for efficient usability problem discovery. A data set from an earlier study is re-analyzed, involving three evaluation methods applied to two virtual environment applications. Results of a mixed-effects logistic

  17. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  18. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.

    Science.gov (United States)

    Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter

    2013-12-06

    In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. 
Methods 5 and 7 were the fastest and produced the least biased, most precise and stable estimates of predictive accuracy.
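The indirect estimator this record describes (predictive ability divided by the square root of heritability) is easy to illustrate. The sketch below uses simulated data, where the true breeding values are known, so the indirect estimate can be checked against the true accuracy; the sample size, heritability, and noise level are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate breeding values and phenotypes with a chosen heritability h2
n, h2 = 500, 0.4
tbv = rng.normal(0.0, 1.0, n)                 # true breeding values, var = 1
env_sd = np.sqrt((1.0 - h2) / h2)             # residual sd so var(g)/var(y) = h2
pheno = tbv + rng.normal(0.0, env_sd, n)

# A noisy "prediction", standing in for cross-validated genomic predictions
pred = tbv + rng.normal(0.0, 0.8, n)

# Predictive ability: correlation of predictions with phenotypes
ability = np.corrcoef(pred, pheno)[0, 1]

# Indirect estimate of predictive accuracy: ability / sqrt(h2)
acc_indirect = ability / np.sqrt(h2)

# In simulation, the true accuracy is directly computable for comparison
acc_true = np.corrcoef(pred, tbv)[0, 1]
```

Because corr(prediction, phenotype) is approximately corr(prediction, breeding value) times the square root of h2 under this model, dividing predictive ability by sqrt(h2) recovers the accuracy up to sampling noise.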

  19. A systematic study of genome context methods: calibration, normalization and combination

    Directory of Open Access Journals (Sweden)

    Dale Joseph M

    2010-10-01

    Full Text Available Abstract Background Genome context methods have been introduced in the last decade as automatic methods to predict functional relatedness between genes in a target genome using the patterns of existence and relative locations of the homologs of those genes in a set of reference genomes. Much work has been done in the application of these methods to different bioinformatics tasks, but few papers present a systematic study of the methods and their combination necessary for their optimal use. Results We present a thorough study of the four main families of genome context methods found in the literature: phylogenetic profile, gene fusion, gene cluster, and gene neighbor. We find that for most organisms the gene neighbor method outperforms the phylogenetic profile method by as much as 40% in sensitivity, being competitive with the gene cluster method at low sensitivities. Gene fusion is generally the worst performing of the four methods. A thorough exploration of the parameter space for each method is performed and results across different target organisms are presented. We propose the use of normalization procedures, such as those used on microarray data, for the genome context scores. We show that substantial gains can be achieved from the use of a simple normalization technique. In particular, the sensitivity of the phylogenetic profile method is improved by around 25% after normalization, resulting, to our knowledge, in the best-performing phylogenetic profile system in the literature. Finally, we show results from combining the various genome context methods into a single score. When using a cross-validation procedure to train the combiners, with both original and normalized scores as input, a decision tree combiner results in gains of up to 20% with respect to the gene neighbor method.
Overall, this represents a gain of around 15% over what can be considered the state of the art in this area: the four original genome context methods combined using a
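The normalization idea in this record (putting heterogeneous genome-context scores on a common scale before combining them) can be sketched with the simplest microarray-style transform, a z-score; the paper evaluates more elaborate procedures, and the score distributions below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical raw scores from two genome context methods for the same
# gene pairs, on very different scales
scores_pp = rng.exponential(5.0, 1000)        # e.g. phylogenetic profile scores
scores_gn = rng.normal(100.0, 15.0, 1000)     # e.g. gene neighbor scores

def zscore(x):
    """Microarray-style standardization: zero mean, unit variance."""
    return (x - x.mean()) / x.std()

norm_pp, norm_gn = zscore(scores_pp), zscore(scores_gn)

# Normalized scores are directly comparable and can feed a combiner,
# here just an unweighted average
combined = (norm_pp + norm_gn) / 2.0
```

After standardization, neither method dominates the combined score simply because of its raw scale, which is the prerequisite for the decision-tree combination the record describes.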

  20. Whole genome sequence analysis of unidentified genetically modified papaya for development of a specific detection method.

    Science.gov (United States)

    Nakamura, Kosuke; Kondo, Kazunari; Akiyama, Hiroshi; Ishigaki, Takumi; Noguchi, Akio; Katsumata, Hiroshi; Takasaki, Kazuto; Futo, Satoshi; Sakata, Kozue; Fukuda, Nozomi; Mano, Junichi; Kitta, Kazumi; Tanaka, Hidenori; Akashi, Ryo; Nishimaki-Mogami, Tomoko

    2016-08-15

    Identification of transgenic sequences in an unknown genetically modified (GM) papaya (Carica papaya L.) by whole genome sequence analysis was demonstrated. Whole genome sequence data were generated for a GM-positive fresh papaya fruit commodity detected in monitoring using real-time polymerase chain reaction (PCR). The sequences obtained were mapped against an open database of the papaya genome sequence. Transgenic construct- and event-specific sequences were identified from a GM papaya developed to resist infection by Papaya ringspot virus. Based on the transgenic sequences, a specific real-time PCR detection method for GM papaya applicable to various food commodities was developed. Whole genome sequence analysis enabled identification of unknown transgenic construct- and event-specific sequences in GM papaya and the development of a reliable method for detecting them in papaya food commodities.

  1. Geophysical methods for evaluation of plutonic rocks

    International Nuclear Information System (INIS)

    Gibb, R.A.; Scott, J.S.

    1986-04-01

    Geophysical methods are systematically described according to the physical principle and operational mode of each method, the type of information produced, limitations of a technical and/or economic nature, and the applicability of the method to rock-mass evaluation at Research Areas of the Nuclear Fuel Waste Management Program. The geophysical methods fall into three categories: (1) airborne and other reconnaissance surveys, (2) detailed or surface (ground) surveys, and (3) borehole or subsurface surveys. The possible roles of each method in the site-screening and site-evaluation processes of disposal vault site selection are summarized

  2. Safeguards Evaluation Method for evaluating vulnerability to insider threats

    International Nuclear Information System (INIS)

    Al-Ayat, R.A.; Judd, B.R.; Renis, T.A.

    1986-01-01

    As protection of DOE facilities against outsiders increases to acceptable levels, attention is shifting toward achieving comparable protection against insiders. Since threats and protection measures for insiders are substantially different from those for outsiders, new perspectives and approaches are needed. One such approach is the Safeguards Evaluation Method. This method helps in assessing safeguards vulnerabilities to theft or diversion of special nuclear material (SNM) by insiders. The Safeguards Evaluation Method-Insider Threat is a simple model that can be used by safeguards and security planners to evaluate safeguards and proposed upgrades at their own facilities. The method is used to evaluate the effectiveness of safeguards in both timely detection (in time to prevent theft) and late detection (after-the-fact). The method considers the various types of potential insider adversaries working alone or in collusion with other insiders. The approach can be used for a wide variety of facilities with various quantities and forms of SNM. An Evaluation Workbook provides documentation of the baseline assessment; this simplifies subsequent on-site appraisals. Quantitative evaluation is facilitated by an accompanying computer program. The method significantly increases an evaluation team's on-site analytical capabilities, thereby producing a more thorough and accurate safeguards evaluation.

  3. A strategy for evaluating pathway analysis methods.

    Science.gov (United States)

    Yu, Chenggang; Woo, Hyung Jun; Yu, Xueping; Oyama, Tatsuya; Wallqvist, Anders; Reifman, Jaques

    2017-10-13

    Researchers have previously developed a multitude of methods designed to identify biological pathways associated with specific clinical or experimental conditions of interest, with the aim of facilitating biological interpretation of high-throughput data. Before practically applying such pathway analysis (PA) methods, we must first evaluate their performance and reliability, using datasets where the pathways perturbed by the conditions of interest have been well characterized in advance. However, such 'ground truths' (or gold standards) are often unavailable. Furthermore, previous evaluation strategies that have focused on defining 'true answers' are unable to systematically and objectively assess PA methods under a wide range of conditions. In this work, we propose a novel strategy for evaluating PA methods independently of any gold standard, either established or assumed. The strategy involves the use of two mutually complementary metrics, recall and discrimination. Recall measures the consistency between the perturbed pathways identified by applying a particular analysis method to an original large dataset and those identified by applying the same method to a sub-dataset of the original dataset. In contrast, discrimination measures specificity: the degree to which the perturbed pathways identified when a particular method is applied to a dataset from one experiment differ from those identified when the same method is applied to a dataset from a different experiment. We used these metrics and 24 datasets to evaluate six widely used PA methods. The results highlighted the common challenge in reliably identifying significant pathways from small datasets. Importantly, we confirmed the effectiveness of our proposed dual-metric strategy by showing that previous comparative studies corroborate the performance evaluations of the six methods obtained by our strategy.
Unlike any previously proposed strategy for evaluating the performance of PA methods, our dual-metric strategy does not rely on any ground truth
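The two metrics can be sketched as set-overlap computations on the pathway identifiers a PA method reports as perturbed; the exact definitions in the paper may differ, and the pathway names below are hypothetical:

```python
# Hedged sketch: recall and discrimination as set-overlap metrics over
# the pathway identifiers a PA method reports as perturbed
def recall(full_result, subset_result):
    """Fraction of pathways found on the full dataset that are recovered
    when the same method runs on a sub-dataset."""
    if not full_result:
        return 0.0
    return len(full_result & subset_result) / len(full_result)

def discrimination(result_a, result_b):
    """How much the results from two unrelated experiments differ
    (1.0 = disjoint, 0.0 = identical)."""
    union = result_a | result_b
    if not union:
        return 1.0
    return 1.0 - len(result_a & result_b) / len(union)

full = {"apoptosis", "cell_cycle", "p53_signaling", "dna_repair"}
sub = {"apoptosis", "cell_cycle", "p53_signaling"}
other_experiment = {"glycolysis", "cell_cycle"}

print(recall(full, sub))                   # 0.75
print(discrimination(full, other_experiment))
```

A reliable method should score high on both: high recall means its findings are stable under subsampling, while high discrimination means it is not reporting the same generic pathways regardless of the experiment.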

  4. Color image definition evaluation method based on deep learning method

    Science.gov (United States)

    Liu, Di; Li, YingChun

    2018-01-01

    To evaluate different blur levels of color images and improve image-definition assessment, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to obtain 4,096-dimensional features from the images; the extracted features and image labels are then used to train the BP neural network, which performs the final clarity evaluation. The method is tested on images from the CSIQ database, blurred at different levels to yield 4,000 images divided into three categories, each representing one blur level. Of the high-dimensional feature samples, 300 out of every 400 are used to train the VGG16 and BP neural network pipeline, and the remaining 100 are used for testing. The experimental results show that the method takes full advantage of the learning and representation capability of deep learning. In contrast to most existing image clarity evaluation methods, which rely on manually designed and extracted features, this method extracts image features automatically and achieves excellent image quality classification accuracy on the test set: 96%. Moreover, the predicted quality levels of the original color images are consistent with the perception of the human visual system.
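A minimal sketch of the pipeline's second stage: a small network trained by backpropagation to classify fixed feature vectors into blur levels. Real 4,096-dimensional VGG16 features are replaced here by low-dimensional synthetic features (an assumption made to keep the example self-contained), and the layer sizes and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-ins for CNN features of images at three blur levels: each class is
# a Gaussian cloud around its own mean (real VGG16 features are 4,096-dim)
n_per, d, k = 100, 64, 3
means = rng.normal(0.0, 1.0, (k, d))
X = np.vstack([means[c] + 0.5 * rng.normal(0.0, 1.0, (n_per, d)) for c in range(k)])
y = np.repeat(np.arange(k), n_per)
onehot = np.eye(k)[y]

# One-hidden-layer classifier trained by backpropagation (a "BP network")
h, lr = 32, 0.3
W1 = rng.normal(0.0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.1, (h, k)); b2 = np.zeros(k)

for _ in range(500):
    z1 = np.maximum(X @ W1 + b1, 0.0)                  # ReLU hidden layer
    logits = z1 @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                  # softmax probabilities
    g2 = (p - onehot) / len(X)                         # cross-entropy gradient
    g1 = (g2 @ W2.T) * (z1 > 0)                        # backpropagate through ReLU
    W2 -= lr * (z1.T @ g2); b2 -= lr * g2.sum(axis=0)
    W1 -= lr * (X.T @ g1);  b1 -= lr * g1.sum(axis=0)

train_acc = float((p.argmax(axis=1) == y).mean())
```

On such well-separated synthetic classes the classifier reaches high training accuracy quickly; the paper's contribution lies in the quality of the learned CNN features, not in this classifier head.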

  5. An overview of recent developments in genomics and associated statistical methods.

    Science.gov (United States)

    Bickel, Peter J; Brown, James B; Huang, Haiyan; Li, Qunhua

    2009-11-13

    The landscape of genomics has changed drastically in the last two decades. Increasingly inexpensive sequencing has shifted the primary focus from the acquisition of biological sequences to the study of biological function. Assays have been developed to study many intricacies of biological systems, and publicly available databases have given rise to integrative analyses that combine information from many sources to draw complex conclusions. Such research was the focus of the recent workshop at the Isaac Newton Institute, 'High dimensional statistics in biology'. Many computational methods from modern genomics and related disciplines were presented and discussed. Using, as much as possible, the material from these talks, we give an overview of modern genomics: from the essential assays that make data-generation possible, to the statistical methods that yield meaningful inference. We point to current analytical challenges, where novel methods, or novel applications of extant methods, are presently needed.

  6. Comparison of dimensionality reduction methods to predict genomic breeding values for carcass traits in pigs.

    Science.gov (United States)

    Azevedo, C F; Nascimento, M; Silva, F F; Resende, M D V; Lopes, P S; Guimarães, S E F; Glória, L S

    2015-10-09

    A significant contribution of molecular genetics is the direct use of DNA information to identify genetically superior individuals. With this approach, genome-wide selection (GWS) can be used for this purpose. GWS consists of analyzing a large number of single nucleotide polymorphism markers widely distributed in the genome; however, because the number of markers is much larger than the number of genotyped individuals, and such markers are highly correlated, specialized statistical methods are required. Among these methods, independent component regression, principal component regression, partial least squares, and partial principal components stand out. Thus, the aim of this study was to apply dimensionality reduction methods to GWS for carcass traits in an F2 (Piau x commercial line) pig population. The results show similarities between the principal component and independent component methods, which provided the most accurate genomic breeding value estimates for most carcass traits in pigs.
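Principal component regression, one of the dimensionality reduction methods compared in this record, can be sketched as follows; the marker counts, effect sizes, and component count are illustrative assumptions, not the study's F2 data:

```python
import numpy as np

rng = np.random.default_rng(2)

# p markers >> n genotyped individuals, as in genome-wide selection
n, p, k = 60, 500, 10
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))          # SNP genotypes coded 0/1/2
beta = np.zeros(p)
beta[:20] = rng.normal(0.0, 0.5, 20)                  # 20 markers carry effects
y = X @ beta + rng.normal(0.0, 1.0, n)

# Principal component regression: project the markers onto the first k
# principal components, then fit ordinary least squares in that space
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T                                # n x k component scores
coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)

# Estimated genomic breeding values for the training individuals
gebv = scores @ coef + y.mean()
```

Reducing p correlated markers to k orthogonal components sidesteps both the singular design matrix (p > n) and the collinearity the abstract highlights; independent component regression follows the same template with a different projection.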

  7. Methods of ecological capability evaluation of forest

    International Nuclear Information System (INIS)

    Hosseini, M.; Makhdoum, M.F.; Akbarnia, M.; Saghebtalebi, Kh.

    2000-01-01

    In this research, common methods of ecological capability evaluation of forests were reviewed and their performance limitations were analysed. Ecological capability of forests is an index that shows site potential across several roles: wood production, soil conservation, flood control, biodiversity conservation, and water supply. This index is related to ecological characteristics of the land, such as soil, microclimate, elevation, slope and aspect, that affect site potential. A suitable method of ecological capability evaluation must be chosen according to the objective of forestry. Common methods for ecological capability evaluation include plant and animal diversity, site index curves, soil and landform, inter branches, index plants, leaf analyses, regeneration analyses and ecological mapping.

  8. Whole-genome array CGH evaluation for replacing prenatal karyotyping in Hong Kong.

    Directory of Open Access Journals (Sweden)

    Anita S Y Kan

    Full Text Available OBJECTIVE: To evaluate the effectiveness of whole-genome array comparative genomic hybridization (aCGH) in prenatal diagnosis in Hong Kong. METHODS: Array CGH was performed on 220 samples recruited prospectively as the first-tier test study. In addition, 150 prenatal samples with abnormal fetal ultrasound findings found to have normal karyotypes were analyzed as a 'further-test' study using NimbleGen CGX-135K oligonucleotide arrays. RESULTS: Array CGH findings were concordant with conventional cytogenetic results with the exception of one case of triploidy. In the first-tier test study, aCGH detected clinically significant copy number variants (CNVs) in 20% (44/220) of samples, of which 21 were common aneuploidies and 23 had other chromosomal imbalances. In 3.2% (7/220) of samples, CNVs were detected by aCGH but not by conventional cytogenetics. In the 'further-test' study, the additional diagnostic yield of detecting chromosome imbalance was 6% (9/150). The overall detection rate for CNVs of unclear clinical significance was 2.7% (10/370), with 0.9% found to be de novo. Eleven loci of common CNVs were found in the local population. CONCLUSION: Whole-genome aCGH offered a higher resolution diagnostic capacity than conventional karyotyping for prenatal diagnosis, either as a first-tier test or as a 'further-test' for pregnancies with fetal ultrasound anomalies. We propose replacing conventional cytogenetics with aCGH for all pregnancies undergoing invasive diagnostic procedures after excluding common aneuploidies and triploidies by quantitative fluorescent PCR. Conventional cytogenetics can be reserved for visualization of clinically significant CNVs.

  9. Defining and Evaluating a Core Genome Multilocus Sequence Typing Scheme for Whole-Genome Sequence-Based Typing of Listeria monocytogenes

    OpenAIRE

    Ruppitsch, Werner; Pietzka, Ariane; Prior, Karola; Bletz, Stefan; Fernandez, Haizpea Lasa; Allerberger, Franz; Harmsen, Dag; Mellmann, Alexander

    2015-01-01

    Whole-genome sequencing (WGS) has emerged today as an ultimate typing tool to characterize Listeria monocytogenes outbreaks. However, data analysis and interlaboratory comparability of WGS data are still challenging for most public health laboratories. Therefore, we have developed and evaluated a new L. monocytogenes typing scheme based on genome-wide gene-by-gene comparisons (core genome multilocus sequence typing [cgMLST]) to allow for a unique typing nomenclature. Initially, we determi...

  10. Systematic differences in the response of genetic variation to pedigree and genome-based selection methods

    NARCIS (Netherlands)

    Heidaritabar, M.; Vereijken, A.; Muir, W.M.; Meuwissen, T.H.E.; Cheng, H.; Megens, H.J.W.C.; Groenen, M.; Bastiaansen, J.W.M.

    2014-01-01

    Genomic selection (GS) is a DNA-based method of selecting for quantitative traits in animal and plant breeding, and offers a potentially superior alternative to traditional breeding methods that rely on pedigree and phenotype information. Using a 60K SNP chip with markers spaced throughout the

  11. A comparison of multivariate genome-wide association methods

    DEFF Research Database (Denmark)

    Galesloot, Tessel E; Van Steen, Kristel; Kiemeney, Lambertus A L M

    2014-01-01

    methods that are implemented in the software packages PLINK, SNPTEST, MultiPhen, BIMBAM, PCHAT and TATES, and also compared them to standard univariate GWAS, analysis of the first principal component of the traits, and meta-analysis of univariate results. We simulated data (N = 1000) for three...... correlation. We compared the power of the methods using empirically fixed significance thresholds (α = 0.05). Our results showed that the multivariate methods implemented in PLINK, SNPTEST, MultiPhen and BIMBAM performed best for the majority of the tested scenarios, with a notable increase in power...

  12. Accuracy of direct genomic breeding values for nationally evaluated traits in US Limousin and Simmental beef cattle

    Directory of Open Access Journals (Sweden)

    Saatchi Mahdi

    2012-12-01

    Full Text Available Abstract Background In national evaluations, direct genomic breeding values can be considered as correlated traits to those for which phenotypes are available for traditional estimation of breeding values. For this purpose, estimates of the accuracy of direct genomic breeding values expressed as genetic correlations between traits and their respective direct genomic breeding values are required. Methods We derived direct genomic breeding values for 2239 registered Limousin and 2703 registered Simmental beef cattle genotyped with either the Illumina BovineSNP50 BeadChip or the Illumina BovineHD BeadChip. For the 264 Simmental animals that were genotyped with the BovineHD BeadChip, genotypes for markers present on the BovineSNP50 BeadChip were extracted. Deregressed estimated breeding values were used as observations in weighted analyses that estimated marker effects to derive direct genomic breeding values for each breed. For each breed, genotyped individuals were clustered into five groups using K-means clustering, with the aim of increasing within-group and decreasing between-group pedigree relationships. Cross-validation was performed five times for each breed, using four groups for training and the fifth group for validation. For each trait, we then applied a weighted bivariate analysis of the direct genomic breeding values of genotyped animals from all five validation sets and their corresponding deregressed estimated breeding values to estimate variance and covariance components. Results After minimizing relationships between training and validation groups, estimated genetic correlations between each trait and its direct genomic breeding values ranged from 0.39 to 0.76 in Limousin and from 0.29 to 0.65 in Simmental. The efficiency of selection based on direct genomic breeding values relative to selection based on parent average information ranged from 0.68 to 1.28 in genotyped Limousin and from 0.51 to 1.44 in genotyped Simmental animals

  13. Evidence-based design and evaluation of a whole genome sequencing clinical report for the reference microbiology laboratory.

    Science.gov (United States)

    Crisan, Anamaria; McKee, Geoffrey; Munzner, Tamara; Gardy, Jennifer L

    2018-01-01

    Microbial genome sequencing is now being routinely used in many clinical and public health laboratories. Understanding how to report complex genomic test results to stakeholders who may have varying familiarity with genomics-including clinicians, laboratorians, epidemiologists, and researchers-is critical to the successful and sustainable implementation of this new technology; however, there are no evidence-based guidelines for designing such a report in the pathogen genomics domain. Here, we describe an iterative, human-centered approach to creating a report template for communicating tuberculosis (TB) genomic test results. We used Design Study Methodology-a human centered approach drawn from the information visualization domain-to redesign an existing clinical report. We used expert consults and an online questionnaire to discover various stakeholders' needs around the types of data and tasks related to TB that they encounter in their daily workflow. We also evaluated their perceptions of and familiarity with genomic data, as well as its utility at various clinical decision points. These data shaped the design of multiple prototype reports that were compared against the existing report through a second online survey, with the resulting qualitative and quantitative data informing the final, redesigned, report. We recruited 78 participants, 65 of whom were clinicians, nurses, laboratorians, researchers, and epidemiologists involved in TB diagnosis, treatment, and/or surveillance. Our first survey indicated that participants were largely enthusiastic about genomic data, with the majority agreeing on its utility for certain TB diagnosis and treatment tasks and many reporting some confidence in their ability to interpret this type of data (between 58.8% and 94.1%, depending on the specific data type). When we compared our four prototype reports against the existing design, we found that for the majority (86.7%) of design comparisons, participants preferred the

  14. Whole-Genome Regression and Prediction Methods Applied to Plant and Animal Breeding

    Science.gov (United States)

    de los Campos, Gustavo; Hickey, John M.; Pong-Wong, Ricardo; Daetwyler, Hans D.; Calus, Mario P. L.

    2013-01-01

    Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers concurrently. Methods exist that allow implementing these large-p with small-n regressions, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade. PMID:22745228
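The computational core of many parametric WGR models is a penalized large-p, small-n regression. A ridge regression sketch, solved in its n-by-n dual form (the trick that makes p much greater than n tractable), is shown below; the data are simulated and the penalty value is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# "large p, small n": many markers, few phenotyped individuals
n, p = 80, 1000
X = rng.normal(0.0, 1.0, (n, p))               # centered marker covariates
beta = rng.normal(0.0, 0.1, p)
y = X @ beta + rng.normal(0.0, 1.0, n)

# Ridge regression, the penalized core shared by many WGR models.
# Dual form: solve an n x n system instead of p x p, cheap when p >> n.
lam = 50.0
alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
b_hat = X.T @ alpha                            # estimated marker effects
pred = X @ b_hat                               # fitted genomic values
```

The dual solution X'(XX' + lam I)^-1 y is algebraically identical to the primal (X'X + lam I)^-1 X'y, but requires factorizing an n x n rather than a p x p matrix; this is also the link between marker-effect models and kinship-based (GBLUP-style) formulations.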

  15. Creating Alternative Methods for Educational Evaluation.

    Science.gov (United States)

    Smith, Nick L.

    1981-01-01

    A project supported by the National Institute of Education is adapting evaluation procedures from areas such as philosophy, geography, operations research, journalism, and film criticism. The need for such methods is reviewed, as is the context in which they function, and their contributions to evaluation methodology. (Author/GK)

  16. Evaluation and comparison of mammalian subcellular localization prediction methods

    Directory of Open Access Journals (Sweden)

    Fink J Lynn

    2006-12-01

    Full Text Available Abstract Background Determination of the subcellular location of a protein is essential to understanding its biochemical function. This information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole genome sequencing projects have been finished and many resulting protein sequences are still lacking detailed functional information. In order to address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently based on the sequences that are presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results In order to perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER), peroxisome, and lysosome). The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE with a bias towards the subcellular localization underrepresented in SwissProt was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion No individual method had a sufficient level of sensitivity across both evaluation sets that would enable reliable application to hypothetical proteins. All methods showed lower performance on the LOCATE
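Per-location sensitivity and specificity, the two statistics this survey computes for each predictor, can be sketched as a simple confusion-matrix tally; the protein labels below are hypothetical:

```python
# Hedged sketch: per-location sensitivity and specificity from paired
# true/predicted localization labels (one label per protein)
def sensitivity_specificity(true_locs, pred_locs, location):
    tp = fp = tn = fn = 0
    for t, p in zip(true_locs, pred_locs):
        if t == location and p == location:
            tp += 1                            # correctly assigned here
        elif t != location and p == location:
            fp += 1                            # wrongly assigned here
        elif t == location and p != location:
            fn += 1                            # missed
        else:
            tn += 1                            # correctly assigned elsewhere
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec

true_l = ["nucleus", "cytosol", "nucleus", "mitochondrion", "nucleus"]
pred_l = ["nucleus", "nucleus", "nucleus", "mitochondrion", "cytosol"]
sens, spec = sensitivity_specificity(true_l, pred_l, "nucleus")
```

Comparing these per-location values against the rates expected under random assignment, as the survey does, separates genuine predictive skill from the base-rate composition of the evaluation set.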

  17. Structural materials evaluation by neutron diffraction method

    International Nuclear Information System (INIS)

    Suzuki, Hiroshi

    2010-01-01

    It is well known that the neutron diffraction method enables us to measure residual stresses inside materials. It can also evaluate deformation behaviors and phase transformations of materials under loading in various environments, such as high or low temperature, and can evaluate microstructural factors such as dislocation density, cell size and texture by analyzing diffraction profiles. This article reviews some topics in structural materials evaluation using neutron diffraction. (author)

  18. Evidence-based design and evaluation of a whole genome sequencing clinical report for the reference microbiology laboratory

    Science.gov (United States)

    Crisan, Anamaria; McKee, Geoffrey; Munzner, Tamara

    2018-01-01

    Background Microbial genome sequencing is now being routinely used in many clinical and public health laboratories. Understanding how to report complex genomic test results to stakeholders who may have varying familiarity with genomics—including clinicians, laboratorians, epidemiologists, and researchers—is critical to the successful and sustainable implementation of this new technology; however, there are no evidence-based guidelines for designing such a report in the pathogen genomics domain. Here, we describe an iterative, human-centered approach to creating a report template for communicating tuberculosis (TB) genomic test results. Methods We used Design Study Methodology—a human centered approach drawn from the information visualization domain—to redesign an existing clinical report. We used expert consults and an online questionnaire to discover various stakeholders’ needs around the types of data and tasks related to TB that they encounter in their daily workflow. We also evaluated their perceptions of and familiarity with genomic data, as well as its utility at various clinical decision points. These data shaped the design of multiple prototype reports that were compared against the existing report through a second online survey, with the resulting qualitative and quantitative data informing the final, redesigned, report. Results We recruited 78 participants, 65 of whom were clinicians, nurses, laboratorians, researchers, and epidemiologists involved in TB diagnosis, treatment, and/or surveillance. Our first survey indicated that participants were largely enthusiastic about genomic data, with the majority agreeing on its utility for certain TB diagnosis and treatment tasks and many reporting some confidence in their ability to interpret this type of data (between 58.8% and 94.1%, depending on the specific data type). When we compared our four prototype reports against the existing design, we found that for the majority (86.7%) of design

  19. A genomic background based method for association analysis in related individuals.

    Directory of Open Access Journals (Sweden)

    Najaf Amin

    Full Text Available BACKGROUND: Feasibility of genotyping of hundreds of thousands of single nucleotide polymorphisms (SNPs) in thousands of study subjects has triggered the need for fast, powerful, and reliable methods for genome-wide association analysis. Here we consider a situation when study participants are genetically related (e.g. due to systematic sampling of families or because a study was performed in a genetically isolated population). Of the available methods that account for relatedness, the Measured Genotype (MG) approach is considered the 'gold standard'. However, MG is not efficient with respect to time taken for the analysis of genome-wide data. In this context we proposed a fast two-step method called Genome-wide Association using Mixed Model and Regression (GRAMMAR) for the analysis of pedigree-based quantitative traits. This method certainly overcomes the drawback of time limitation of the measured genotype (MG) approach, but pays in power. One of the major drawbacks of both MG and GRAMMAR is that they crucially depend on the availability of complete and correct pedigree data, which is rarely available. METHODOLOGY: In this study we first explore type 1 error and relative power of MG, GRAMMAR, and Genomic Control (GC) approaches for genetic association analysis. Secondly, we propose an extension to GRAMMAR, i.e. GRAMMAR-GC. Finally, we propose application of GRAMMAR-GC using the kinship matrix estimated through genomic marker data, instead of (possibly missing and/or incorrect) genealogy. CONCLUSION: Through simulations we show that the MG approach maintains high power across a range of heritabilities and possible pedigree structures, and always outperforms other contemporary methods. We also show that the power of our proposed GRAMMAR-GC approaches that of the 'gold standard' MG for all models and pedigrees studied.
We show that this method is both feasible and powerful and has correct type 1 error in the context of genome-wide association analysis

  20. A genomic background based method for association analysis in related individuals.

    Science.gov (United States)

    Amin, Najaf; van Duijn, Cornelia M; Aulchenko, Yurii S

    2007-12-05

    Feasibility of genotyping of hundreds of thousands of single nucleotide polymorphisms (SNPs) in thousands of study subjects has triggered the need for fast, powerful, and reliable methods for genome-wide association analysis. Here we consider a situation when study participants are genetically related (e.g. due to systematic sampling of families or because a study was performed in a genetically isolated population). Of the available methods that account for relatedness, the Measured Genotype (MG) approach is considered the 'gold standard'. However, MG is not efficient with respect to time taken for the analysis of genome-wide data. In this context we proposed a fast two-step method called Genome-wide Association using Mixed Model and Regression (GRAMMAR) for the analysis of pedigree-based quantitative traits. This method certainly overcomes the drawback of time limitation of the measured genotype (MG) approach, but pays in power. One of the major drawbacks of both MG and GRAMMAR is that they crucially depend on the availability of complete and correct pedigree data, which is rarely available. In this study we first explore type 1 error and relative power of MG, GRAMMAR, and Genomic Control (GC) approaches for genetic association analysis. Secondly, we propose an extension to GRAMMAR, i.e. GRAMMAR-GC. Finally, we propose application of GRAMMAR-GC using the kinship matrix estimated through genomic marker data, instead of (possibly missing and/or incorrect) genealogy. Through simulations we show that the MG approach maintains high power across a range of heritabilities and possible pedigree structures, and always outperforms other contemporary methods. We also show that the power of our proposed GRAMMAR-GC approaches that of the 'gold standard' MG for all models and pedigrees studied. We show that this method is both feasible and powerful and has correct type 1 error in the context of genome-wide association analysis in related individuals.
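The two-step GRAMMAR idea (absorb relatedness first, then run fast per-SNP regressions on the residuals) can be sketched as below. For brevity the simulated polygenic effect is subtracted directly; the real method estimates it with a mixed model and a kinship matrix (pedigree-based, or marker-based as in GRAMMAR-GC). All sizes and effect values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: n related individuals, m SNPs, the first SNP is causal
n, m = 200, 50
snps = rng.binomial(2, 0.3, (n, m)).astype(float)
polygenic = rng.normal(0.0, 1.0, n)        # stands in for the family/relatedness effect
y = 1.0 * snps[:, 0] + polygenic + rng.normal(0.0, 1.0, n)

# Step 1: remove the polygenic component from the trait.  Here we subtract
# the simulated effect directly; in practice it is estimated once by a
# mixed model with a kinship matrix, and the residuals are carried forward.
resid = y - polygenic

# Step 2: fast simple regression of the residuals on each SNP
def snp_effect(genotypes, residuals):
    g = genotypes - genotypes.mean()
    return abs((g @ residuals) / (g @ g))  # absolute per-SNP effect estimate

scores = np.array([snp_effect(snps[:, j], resid) for j in range(m)])
top_snp = int(np.argmax(scores))           # should recover the causal SNP
```

Fitting the mixed model once and then scanning SNPs with ordinary regressions is what makes the method fast; the power loss the abstract mentions comes from treating the estimated polygenic effect as known in step 2.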

  1. New methods for selecting and evaluating probiotics.

    Science.gov (United States)

    Gueimonde, Miguel; Salminen, Seppo

    2006-12-01

    Recent studies have increased our understanding of the mechanistic basis of proposed probiotic health effects. Well-designed human studies have demonstrated that specific probiotic strains have health benefits in the human population. These findings have led to wide acceptance of the probiotic concept. However, current probiotics have not been selected for specific purposes. Novel methods to select and characterise target-specific probiotic strains are thus needed. In addition to traditional selection procedures, knowledge on intestinal microbiota, nutrition, immunity and mechanisms of action has increased dramatically in recent years and can now be combined with genomic data to allow the isolation and characterization of new target- or site-specific probiotics. We should expect to see new, third-generation probiotics emerging in the near future, along with new selection criteria further defining the targets of future probiotics.

  2. Evaluation of winter pothole patching methods.

    Science.gov (United States)

    2014-01-01

    The main objective of this study was to evaluate the performance and cost-effectiveness of the tow-behind combination infrared asphalt heater/reclaimer patching method and compare it to the throw-and-roll and spray-injection methods. To achieve t...

  3. Evaluating a method for automated rigid registration

    DEFF Research Database (Denmark)

    Darkner, Sune; Vester-Christensen, Martin; Larsen, Rasmus

    2007-01-01

    We evaluate a novel method for fully automated rigid registration of 2D manifolds in 3D space based on distance maps, the Gibbs sampler and Iterated Conditional Modes (ICM). The method is tested against ICP, considered the gold standard for automated rigid registration. Furthermore...
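
    The inner step of ICP, the baseline mentioned above, computes the optimal rigid transform for a fixed set of point correspondences. A minimal Kabsch/Procrustes sketch of that step follows; the function name and toy data are my own, not from the paper.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q,
    given known correspondences: the inner step of ICP."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

# recover a known rotation about the z-axis plus a translation
rng = np.random.default_rng(5)
P = rng.normal(size=(30, 3))
theta = 0.7
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
t0 = np.array([1.0, -2.0, 0.5])
Q = P @ R0.T + t0
R, t = kabsch(P, Q)
```

    Full ICP alternates this closed-form solve with re-estimating correspondences (nearest neighbours) until convergence.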

  4. Methods and Strategies to Impute Missing Genotypes for Improving Genomic Prediction

    DEFF Research Database (Denmark)

    Ma, Peipei

    Genomic prediction has been widely used in dairy cattle breeding. Genotype imputation is a key procedure for efficiently utilizing marker data from different chips and obtaining high-density marker data while minimizing cost. This thesis investigated methods and strategies for genotype imputation...... for improving genomic prediction. The results indicate that IMPUTE2 and Beagle are accurate imputation methods, while FImpute is a good alternative for routine imputation with large data sets. Genotypes of non-genotyped animals can be accurately imputed if they have genotyped progenies. A combined reference...

  5. Genomic DNA extraction method from pearl millet ( Pennisetum ...

    African Journals Online (AJOL)

    DNA extraction is difficult in a variety of plants because of the presence of metabolites that interfere with DNA isolation procedures and downstream applications such as DNA restriction, amplification, and cloning. Here we describe a modified procedure based on the hexadecyltrimethylammonium bromide (CTAB) method to ...

  6. A multiple regression method for Genome-wide association study

    Indian Academy of Sciences (India)

    wangzhihua

    Meanwhile, the statistical power of the new method decreased with increasing numbers of ... However, LD mapping requires a marker to associate with a QTL in LD across the entire population. This association, as a property of the population, should ... random polygenic effects of size p, the number of SNPs excluding j.

  7. Success tree method of resources evaluation

    International Nuclear Information System (INIS)

    Chen Qinglan; Sun Wenpeng

    1994-01-01

    By applying reliability theory from systems engineering, the success tree method transfers experts' recognition of metallogenetic regularities into the form of a success tree. Resources evaluation is achieved by calculating the metallogenetic probability or favorability of the top event of the success tree. This article introduces in detail the source and principle of the success tree method and three kinds of calculation methods, and expounds concretely how to establish the success tree of comprehensive uranium metallogenesis as well as the procedure by which the resources evaluation is performed. Because this method places no restrictions on the number of known deposits or the calculated area, it is applicable to resources evaluation for different mineral species, types and scales, and possesses good prospects for development.

  8. Unique opportunities for NMR methods in structural genomics.

    Science.gov (United States)

    Montelione, Gaetano T; Arrowsmith, Cheryl; Girvin, Mark E; Kennedy, Michael A; Markley, John L; Powers, Robert; Prestegard, James H; Szyperski, Thomas

    2009-04-01

    This Perspective, arising from a workshop held in July 2008 in Buffalo NY, provides an overview of the role NMR has played in the United States Protein Structure Initiative (PSI), and a vision of how NMR will contribute to the forthcoming PSI-Biology program. NMR has contributed in key ways to structure production by the PSI, and new methods have been developed which are impacting the broader protein NMR community.

  9. Novel extraction method of genomic DNA suitable for long-fragment amplification from small amounts of milk.

    Science.gov (United States)

    Liu, Y F; Gao, J L; Yang, Y F; Ku, T; Zan, L S

    2014-11-01

    Isolation of genomic DNA is a prerequisite for assessment of milk quality. As a source of genomic DNA, milk somatic cells from milking ruminants are a practical, animal-friendly, and cost-effective option. Extracting DNA from milk avoids the stress response caused by blood and tissue sampling of cows. In this study, we optimized a novel DNA extraction method for amplifying long (>1,000 bp) DNA fragments and used it to evaluate the isolation of DNA from small amounts of milk. Techniques for the separation of milk somatic cells were explored and combined with a sodium dodecyl sulfate (SDS)-phenol method to optimize DNA extraction from milk. Spectrophotometry was used to determine the concentration and purity of the extracted DNA. Gel electrophoresis and DNA amplification were used to determine DNA size and quality. DNA was obtained from the milk of 112 cows (samples of 13 ± 1 mL); the corresponding optical density ratios at 260:280 nm were between 1.65 and 1.75, concentrations were between 12 and 45 μg/μL, and DNA size and quality were acceptable. Specific PCR amplification of 1,019- and 729-bp bovine DNA fragments was successfully carried out. This novel method can be used as a practical, fast, and economical means for extracting long genomic DNA from a small amount of milk. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. Nucleic acids delivery methods for genome editing in zygotes and embryos: the old, the new, and the old-new.

    Science.gov (United States)

    Sato, Masahiro; Ohtsuka, Masato; Watanabe, Satoshi; Gurumurthy, Channabasavaiah B

    2016-03-31

    In recent years, sequence-specific nucleases such as ZFNs, TALENs, and CRISPR/Cas9 have revolutionized the fields of animal genome editing and transgenesis. However, these new techniques require microinjection to deliver nucleic acids into embryos to generate gene-modified animals. Microinjection is a delicate procedure that requires sophisticated equipment and highly trained and experienced technicians. Although over a dozen alternative approaches for nucleic acid delivery into embryos were attempted during the pre-CRISPR era, none of them became as routinely used as microinjection. The addition of CRISPR/Cas9 to the genome editing toolbox has propelled the search for novel delivery approaches that can obviate the need for microinjection. Indeed, some groups have recently developed electroporation-based methods that have the potential to radically change animal transgenesis. This review provides an overview of the old and new delivery methods, and discusses various strategies that were attempted during the last three decades. In addition, several of the methods are re-evaluated with respect to their suitability to deliver genome editing components, particularly CRISPR/Cas9, to embryos.

  11. Evaluating methods for approximating stochastic differential equations.

    Science.gov (United States)

    Brown, Scott D; Ratcliff, Roger; Smith, Philip L

    2006-08-01

    Models of decision making and response time (RT) are often formulated using stochastic differential equations (SDEs). Researchers often investigate these models using a simple Monte Carlo method based on Euler's method for solving ordinary differential equations. The accuracy of Euler's method is investigated and compared to the performance of more complex simulation methods. The more complex methods for solving SDEs yielded no improvement in accuracy over the Euler method. However, the matrix method proposed by Diederich and Busemeyer (2003) yielded significant improvements. The accuracy of all methods depended critically on the size of the approximating time step. The large (∼10 ms) step sizes often used by psychological researchers resulted in large and systematic errors in evaluating RT distributions.
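
    For readers unfamiliar with the simulation approach being evaluated, the following is a minimal, hypothetical Euler(-Maruyama) simulation of a one-boundary diffusion model of response times. Parameter names and values are illustrative only, not those of the cited models; the step-size sensitivity discussed in the abstract corresponds to the dt argument.

```python
import numpy as np

def simulate_rt(drift=1.0, threshold=1.0, noise=1.0, dt=0.005,
                t0=0.3, n_trials=500, seed=1):
    """Euler-Maruyama simulation of a one-boundary diffusion model of RT.

    Evidence follows dx = drift*dt + noise*dW; a response occurs when x
    first crosses `threshold`, and t0 is added as non-decision time.
    Coarse values of dt systematically distort the simulated RT
    distribution, which is the step-size effect noted in the abstract.
    """
    rng = np.random.default_rng(seed)
    sqdt = np.sqrt(dt)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + noise * sqdt * rng.standard_normal()
            t += dt
        rts[i] = t + t0
    return rts

rts = simulate_rt()  # mean first-passage time is roughly threshold/drift
```

    Rerunning with dt on the order of 10 ms rather than 5 ms shows the systematic bias the authors warn about.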

  12. Evaluation of High-Throughput Genomic Assays for the Fc Gamma Receptor Locus.

    Directory of Open Access Journals (Sweden)

    Chantal E Hargreaves

    Full Text Available Cancer immunotherapy has been revolutionised by the use of monoclonal antibodies (mAbs) that function through their interaction with Fc gamma receptors (FcγRs). The low-affinity FcγR genes are highly homologous, map to a complex locus at 1q23 and harbour single nucleotide polymorphisms (SNPs) and copy number variation (CNV) that can impact on receptor function and response to therapeutic mAbs. This complexity can hinder accurate characterisation of the locus. We therefore evaluated and optimised a suite of assays for the genomic analysis of the FcγR locus, amenable to peripheral blood mononuclear cell (PBMC) and formalin-fixed paraffin-embedded (FFPE) material, that can be employed in a high-throughput manner. Assessment of TaqMan genotyping for the FCGR2A-131H/R, FCGR3A-158F/V and FCGR2B-232I/T SNPs demonstrated the need for additional methods to discriminate genotypes for the FCGR3A-158F/V and FCGR2B-232I/T SNPs due to sequence homology and CNV in the region. A multiplex ligation-dependent probe amplification assay provided high-quality SNP and CNV data in PBMC cases, but there was greater data variability in FFPE material, in a manner that was predicted by the BIOMED-2 multiplex PCR protocol. In conclusion, we have evaluated a suite of assays for the genomic analysis of the FcγR locus that are scalable for application in large clinical trials of mAb therapy. These assays will ultimately help establish the importance of FcγR genetics in predicting response to antibody therapeutics.

  13. Efficient Server-Aided Secure Two-Party Function Evaluation with Applications to Genomic Computation

    Directory of Open Access Journals (Sweden)

    Blanton Marina

    2016-10-01

    Full Text Available Computation based on genomic data is becoming increasingly popular today, be it for medical or other purposes. Non-medical uses of genomic data in a computation often take place in a server-mediated setting where the server offers the ability for joint genomic testing between the users. Undeniably, genomic data is highly sensitive, which in contrast to other biometry types, discloses a plethora of information not only about the data owner, but also about his or her relatives. Thus, there is an urgent need to protect genomic data. This is particularly true when the data is used in computation for what we call recreational non-health-related purposes. Towards this goal, in this work we put forward a framework for server-aided secure two-party computation with the security model motivated by genomic applications. One particular security setting that we treat in this work provides stronger security guarantees with respect to malicious users than the traditional malicious model. In particular, we incorporate certified inputs into secure computation based on garbled circuit evaluation to guarantee that a malicious user is unable to modify her inputs in order to learn unauthorized information about the other user’s data. Our solutions are general in the sense that they can be used to securely evaluate arbitrary functions and offer attractive performance compared to the state of the art. We apply the general constructions to three specific types of genomic tests: paternity, genetic compatibility, and ancestry testing and implement the constructions. The results show that all such private tests can be executed within a matter of seconds or less despite the large size of one’s genomic data.

  14. Methods for evaluation of industry training programs

    International Nuclear Information System (INIS)

    Morisseau, D.S.; Roe, M.L.; Persensky, J.J.

    1987-01-01

    The NRC Policy Statement on Training and Qualification endorses the INPO-managed Training Accreditation Program in that it encompasses the elements of effective performance-based training: analysis of the job, performance-based learning objectives, training design and implementation, trainee evaluation, and program evaluation. As part of the NRC's independent evaluation of utilities' implementation of training improvement programs, the staff developed training review criteria and procedures that address all five elements of effective performance-based training. The staff uses these criteria to review utility training programs that have already received accreditation. Although no performance-based training program can be said to be complete unless all five elements are in place, the last two, trainee and program evaluation, are perhaps the most important because they determine how well the first three elements have been implemented and ensure the dynamic nature of training. This paper discusses the evaluation elements of the NRC training review criteria, detailing the evaluation methods and techniques that the staff expects to find as integral parts of performance-based training programs at accredited utilities. The review of the effectiveness of implementation of these evaluation methods is also discussed. The paper also addresses some of the qualitative differences between what is minimally acceptable and what is most desirable with respect to trainee and program evaluation mechanisms and their implementation.

  15. Quantitative Efficiency Evaluation Method for Transportation Networks

    Directory of Open Access Journals (Sweden)

    Jin Qin

    2014-11-01

    Full Text Available An effective evaluation of transportation network efficiency/performance is essential to the establishment of sustainable development in any transportation system. Based on a redefinition of transportation network efficiency, a quantitative efficiency evaluation method for transportation networks is proposed, which reflects the effects of network structure, traffic demands, travel choice, and travel costs on network efficiency. Furthermore, an efficiency-oriented importance measure for network components is presented, which can be used to help engineers identify the critical nodes and links in the network. The numerical examples show that, compared with existing efficiency evaluation methods, the network efficiency value calculated by the proposed method better reflects the actual operating conditions of the transportation network, as well as the effects of the main factors on network efficiency. We also find that network efficiency and the importance values of network components are both functions of demands and network structure in the transportation network.

  16. Evaluation of registration methods on thoracic CT

    DEFF Research Database (Denmark)

    Murphy, K.; van Ginneken, B.; Reinhardt, J.

    2011-01-01

    comprised the comprehensive evaluation and comparison of 20 individual algorithms from leading academic and industrial research groups. All algorithms are applied to the same set of 30 thoracic CT pairs. Algorithm settings and parameters are chosen by researchers expert in the configuration of their own....... This article details the organisation of the challenge, the data and evaluation methods and the outcome of the initial launch with 20 algorithms. The gain in knowledge and future work are discussed....

  17. A method for construction, cloning and expression of intron-less gene from unannotated genomic DNA.

    Science.gov (United States)

    Agrawal, Vineet; Gupta, Bharti; Banerjee, Uttam Chand; Roy, Nilanjan

    2008-11-01

    The present century has witnessed an unprecedented rise in genome sequences owing to various genome-sequencing programs. However, the same has not been replicated with cDNA or expressed sequence tags (ESTs). Hence, prediction of protein coding sequence of genes from this enormous collection of genomic sequences presents a significant challenge. While robust high throughput methods of cloning and expression could be used to meet protein requirements, lack of intron information creates a bottleneck. Computational programs designed for recognizing intron-exon boundaries for a particular organism or group of organisms have their own limitations. Keeping this in view, we describe here a method for construction of intron-less gene from genomic DNA in the absence of cDNA/EST information and organism-specific gene prediction program. The method outlined is a sequential application of bioinformatics to predict correct intron-exon boundaries and splicing by overlap extension PCR for spliced gene synthesis. The gene construct so obtained can then be cloned for protein expression. The method is simple and can be used for any eukaryotic gene expression.
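
    The in silico half of the described workflow, joining predicted exon intervals into an intron-less coding sequence before designing overlap-extension primers, can be sketched as follows. The sequence and exon coordinates are invented for illustration and are not from the paper.

```python
def splice_exons(genomic_seq, exons):
    """Join predicted exon intervals (0-based, half-open) of a genomic
    sequence into an intron-less coding sequence, mimicking in silico the
    spliced product later assembled by overlap-extension PCR."""
    return "".join(genomic_seq[start:end] for start, end in exons)

# hypothetical gene: three exons (upper case) separated by two introns
# (lower case, with canonical GT...AG boundaries)
seq = "ATGAAA" + "gtatgtag" + "CCCGGG" + "gtcacag" + "TAA"
cds = splice_exons(seq, [(0, 6), (14, 20), (27, 30)])
print(cds)  # ATGAAACCCGGGTAA
```

    In practice the exon coordinates would come from the bioinformatic prediction of intron-exon boundaries described in the abstract, and each junction defines an overlap-extension primer pair.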

  18. A genomic background based method for association analysis in related individuals

    NARCIS (Netherlands)

    N. Amin (Najaf); P. Tikka-Kleemola (Päivi); Y.S. Aulchenko (Yurii)

    2007-01-01

    Background: Feasibility of genotyping hundreds of thousands of single nucleotide polymorphisms (SNPs) in thousands of study subjects has triggered the need for fast, powerful, and reliable methods for genome-wide association analysis. Here we consider a situation when study

  19. A simple and efficient method for extraction of genomic DNA from ...

    African Journals Online (AJOL)

    DNA extraction in many plants is difficult because of metabolites that interfere with DNA isolation procedures and subsequent applications, such as DNA restriction, amplification and cloning. We have developed a reliable and efficient method for isolating genomic DNA free from polysaccharide, polyphenols and protein ...

  20. Serological evaluation of Mycobacterium ulcerans antigens identified by comparative genomics.

    Directory of Open Access Journals (Sweden)

    Sacha J Pidot

    Full Text Available A specific and sensitive serodiagnostic test for Mycobacterium ulcerans infection would greatly assist the diagnosis of Buruli ulcer and would also facilitate seroepidemiological surveys. By comparative genomics, we identified 45 potential M. ulcerans-specific proteins, of which we were able to express and purify 33 in E. coli. Sera from 30 confirmed Buruli ulcer patients, 24 healthy controls from the same endemic region and 30 healthy controls from a non-endemic region in Benin were screened for antibody responses to these proteins by ELISA. Serum IgG responses of Buruli ulcer patients were highly variable; however, seven proteins (MUP045, MUP057, MUL_0513, Hsp65, and the polyketide synthase domains ER, AT propionate, and KR A) showed a significant difference between patient and non-endemic control antibody responses. However, when sera from healthy control subjects living in the same Buruli ulcer endemic area as the patients were examined, none of the proteins was able to discriminate between these two groups. Nevertheless, six of the seven proteins showed an ability to distinguish people living in an endemic area from those in a non-endemic area, with an average sensitivity of 69% and specificity of 88%, suggesting exposure to M. ulcerans. Further validation of these six proteins is now underway to assess their suitability for use in Buruli ulcer seroepidemiological studies. Such studies are urgently needed to assist efforts to uncover environmental reservoirs and understand transmission pathways of M. ulcerans.

  1. Estimated allele substitution effects underlying genomic evaluation models depend on the scaling of allele counts.

    Science.gov (United States)

    Bouwman, Aniek C; Hayes, Ben J; Calus, Mario P L

    2017-10-30

    Genomic evaluation is used to predict direct genomic values (DGV) for selection candidates in breeding programs, but also to estimate allele substitution effects (ASE) of single nucleotide polymorphisms (SNPs). Scaling of allele counts influences the estimated ASE, because it results in less shrinkage towards the mean for low minor allele frequency (MAF) variants. Scaling may become relevant for estimating ASE as more low-MAF variants are used in genomic evaluations. We show the impact of scaling on estimates of ASE using real data and a theoretical framework, and in terms of power, model fit and predictive performance. In a dairy cattle dataset with 630 K SNP genotypes, the correlation between DGV for stature from a random regression model using centered allele counts (RRc) and centered and scaled allele counts (RRcs) was 0.9988, whereas the overall correlation between ASE using RRc and RRcs was 0.27. The main difference in ASE between the two methods was found for SNPs with a MAF lower than 0.01. Both the ratio (ASE from RRcs/ASE from RRc) and the regression coefficient (regression of ASE from RRcs on ASE from RRc) were much higher than 1 for low-MAF SNPs. Derived equations showed that scenarios with a high heritability, a large number of individuals and a small number of variants have lower ratios between ASE from RRc and RRcs. We also investigated the optimal scaling parameter [from -1 (RRcs) to 0 (RRc) in steps of 0.1] in the bovine stature dataset. We found that the log-likelihood was maximized with a scaling parameter of -0.8, while the mean squared error of prediction was minimized with a scaling parameter of -1, i.e., RRcs. Large differences in estimated ASE were observed for low-MAF SNPs depending on whether allele counts were scaled, because there is less shrinkage towards the mean for scaled allele counts. We derived a theoretical framework that shows that the difference in ASE due to shrinkage is heavily influenced by the
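
    The centering-versus-scaling contrast between RRc and RRcs can be made concrete with a small sketch. The coding below follows the common SNP-BLUP convention; the function name, the scale_power parameterization, and the toy data are my own, and the abstract's scaling-parameter grid may be defined somewhat differently.

```python
import numpy as np

def code_alleles(genotypes, scale_power=0.0):
    """Center allele counts and optionally scale each SNP column by a
    power of its variance 2*p*(1-p).

    scale_power =  0.0 -> centered only (RRc in the abstract's notation)
    scale_power = -1.0 -> centered and scaled (RRcs); low-MAF columns get
                          larger coded values, hence their estimated allele
                          substitution effects are shrunk less in
                          ridge-type (SNP-BLUP) models.
    """
    p = genotypes.mean(axis=0) / 2.0
    weights = (2.0 * p * (1.0 - p)) ** (scale_power / 2.0)
    return (genotypes - 2.0 * p) * weights

# five common SNPs (p = 0.5) and five rarer ones (p = 0.05)
rng = np.random.default_rng(2)
freqs = np.r_[np.full(5, 0.5), np.full(5, 0.05)]
G = rng.binomial(2, freqs, size=(200, 10)).astype(float)
Zc = code_alleles(G, 0.0)    # RRc coding
Zcs = code_alleles(G, -1.0)  # RRcs coding
```

    Comparing the low-MAF columns of Zc and Zcs shows the magnification of rare-variant codings (and hence of their estimated effects) that the abstract quantifies.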

  2. "System evaluates system": method for evaluating the efficiency of IS

    Directory of Open Access Journals (Sweden)

    Dita Blazkova

    2016-10-01

    Full Text Available In this paper I deal with a possible approach to evaluating the efficiency of information systems in companies. Many of the existing methods used to assess the efficiency of information systems depend on the subjective responses of users, which may distort the evaluation. Therefore, I propose a method that eliminates the subjective opinion of a user as the primary data source. An application, which I suggest as part of the method, collects the relevant data. In this paper I describe the application in detail. It is an add-on program that runs in parallel with any system. The program automatically collects data for evaluation, including time data, positions of the mouse cursor, printScreens, i-grams, etc. I propose a method for evaluating these data which identifies how user-friendly the information system is. Thus, the output of the method is a conclusion as to whether users can work with the information system effectively.

  3. Rapid methods for the extraction and archiving of molecular grade fungal genomic DNA.

    Science.gov (United States)

    Borman, Andrew M; Palmer, Michael; Johnson, Elizabeth M

    2013-01-01

    The rapid and inexpensive extraction of fungal genomic DNA that is of sufficient quality for molecular approaches is central to the molecular identification, epidemiological analysis, taxonomy, and strain typing of pathogenic fungi. Although many commercially available and in-house extraction procedures do eliminate the majority of contaminants that commonly inhibit molecular approaches, the inherent difficulties in breaking fungal cell walls lead to protocols that are labor intensive and that routinely take several hours to complete. Here we describe several methods that we have developed in our laboratory that allow the extremely rapid and inexpensive preparation of fungal genomic DNA.

  4. Methods to improve genomic prediction and GWAS using combined Holstein populations

    DEFF Research Database (Denmark)

    Li, Xiujin

    The thesis focuses on methods to improve GWAS and genomic prediction using combined Holstein populations, and investigates G by E interaction. The conclusions are: 1) Prediction reliabilities for Brazilian Holsteins can be increased by adding Nordic and French genotyped bulls, and a large G by E...... interaction exists between populations. 2) Combining data from Chinese and Danish Holstein populations increases the power of GWAS and detects new QTL regions for milk fatty acid traits. 3) The novel multi-trait Bayesian model efficiently estimates region-specific genomic variances, covariances...

  5. Nonparametric method for genomics-based prediction of performance of quantitative traits involving epistasis in plant breeding.

    Directory of Open Access Journals (Sweden)

    Xiaochun Sun

    Full Text Available Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis, pRKHS-NE, to high epistasis, pRKHS-E. Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers with the optimal marker set determined by cross-validation. Principal components are computed from the reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression.

  6. Nonparametric method for genomics-based prediction of performance of quantitative traits involving epistasis in plant breeding.

    Science.gov (United States)

    Sun, Xiaochun; Ma, Ping; Mumm, Rita H

    2012-01-01

    Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis, pRKHS-NE, to high epistasis, pRKHS-E. Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers with the optimal marker set determined by cross-validation. Principal components are computed from reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression.
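
    As a rough illustration of the RKHS regression component only (not the authors' pRKHS, which additionally filters markers via supervised principal components), here is a minimal Gaussian-kernel ridge regression on toy marker data whose phenotype includes a pairwise interaction; all names, values, and tuning constants are hypothetical.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def kernel_ridge(K, y, lam):
    """RKHS regression: solve (K + lam*I) alpha = y for the dual weights."""
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

# toy marker data with an epistatic-like (pairwise interaction) signal
rng = np.random.default_rng(3)
X = rng.integers(0, 3, size=(120, 30)).astype(float)
y = X[:, 0] - X[:, 1] + 0.5 * X[:, 2] * X[:, 3] + rng.normal(0.0, 0.5, 120)

Xtr, Xte, ytr, yte = X[:100], X[100:], y[:100], y[100:]
K = gaussian_kernel(Xtr, Xtr, bandwidth=5.0)
alpha = kernel_ridge(K, ytr, lam=0.5)
pred = gaussian_kernel(Xte, Xtr, bandwidth=5.0) @ alpha
```

    Because the kernel is nonlinear in the markers, no explicit interaction terms need to be specified, which is the point the abstract makes about capturing epistasis nonparametrically.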

  7. Improved DOP-PCR (iDOP-PCR: A robust and simple WGA method for efficient amplification of low copy number genomic DNA.

    Directory of Open Access Journals (Sweden)

    Konstantin A Blagodatskikh

    Full Text Available Whole-genome amplification (WGA) techniques are used for non-specific amplification of low-copy-number DNA, and especially for single-cell genome and transcriptome amplification. A number of WGA methods have been developed over the years. One example is degenerate oligonucleotide-primed PCR (DOP-PCR), which is a very simple, fast and inexpensive WGA technique. Although DOP-PCR has been regarded as one of the pioneering methods for WGA, it provides only low genome coverage and a high allele dropout rate when compared to more modern techniques. Here we describe an improved DOP-PCR (iDOP-PCR). We have modified the classic DOP-PCR by using a new thermostable DNA polymerase (SD polymerase) with strong strand-displacement activity and by adjustments to the primer design. We compared iDOP-PCR, classic DOP-PCR and the well-established PicoPlex technique for whole-genome amplification of both high- and low-copy-number human genomic DNA. The amplified DNA libraries were evaluated by analysis of short tandem repeat genotypes and NGS data. In summary, iDOP-PCR provided better-quality amplified DNA libraries compared to the other WGA methods tested, especially when low amounts of genomic DNA were used as input material.

  8. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    Science.gov (United States)

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
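
    The positive-semidefiniteness requirement the review states for kernel functions can be checked directly. Below is a hypothetical example using an identity-by-state (IBS) style similarity, one of the common genomic kernels; it is equivalent to a histogram-intersection kernel on allele counts and therefore yields a positive semidefinite matrix. The function name and toy data are my own.

```python
import numpy as np

def ibs_kernel(G):
    """Identity-by-state similarity kernel: entry (i, j) is the average
    number of alleles shared per SNP (0-2), i.e. 2 - |g_i - g_j|.
    This equals a histogram-intersection kernel on the allele counts
    (g, 2 - g), so the resulting matrix is positive semidefinite,
    as required of a valid kernel."""
    diff = np.abs(G[:, None, :] - G[None, :, :])   # (n, n, n_snps)
    return (2.0 - diff).mean(axis=2)

rng = np.random.default_rng(4)
G = rng.integers(0, 3, size=(40, 300)).astype(float)
K = ibs_kernel(G)
```

    A symmetric matrix is a valid kernel matrix exactly when its smallest eigenvalue is non-negative (up to numerical tolerance), which is what the test below verifies.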

  9. Developing a common framework for evaluating the implementation of genomic medicine interventions in clinical care: the IGNITE Network's Common Measures Working Group.

    Science.gov (United States)

    Orlando, Lori A; Sperber, Nina R; Voils, Corrine; Nichols, Marshall; Myers, Rachel A; Wu, R Ryanne; Rakhra-Burris, Tejinder; Levy, Kenneth D; Levy, Mia; Pollin, Toni I; Guan, Yue; Horowitz, Carol R; Ramos, Michelle; Kimmel, Stephen E; McDonough, Caitrin W; Madden, Ebony B; Damschroder, Laura J

    2017-09-14

    Purpose: Implementation research provides a structure for evaluating the clinical integration of genomic medicine interventions. This paper describes the Implementing Genomics in Practice (IGNITE) Network's efforts to promote (i) a broader understanding of genomic medicine implementation research and (ii) the sharing of knowledge generated in the network. Methods: To facilitate this goal, the IGNITE Network Common Measures Working Group (CMG) members adopted the Consolidated Framework for Implementation Research (CFIR) to guide its approach to identifying constructs and measures relevant to evaluating genomic medicine as a whole, standardizing data collection across projects, and combining data in a centralized resource for cross-network analyses. Results: CMG identified 10 high-priority CFIR constructs as important for genomic medicine. Of those, eight did not have standardized measurement instruments. Therefore, we developed four survey tools to address this gap. In addition, we identified seven high-priority constructs related to patients, families, and communities that did not map to CFIR constructs. Both sets of constructs were combined to create a draft genomic medicine implementation model. Conclusion: We developed processes to identify constructs deemed valuable for genomic medicine implementation and codified them in a model. These resources are freely available to facilitate knowledge generation and sharing across the field. Genetics in Medicine advance online publication, 14 September 2017; doi:10.1038/gim.2017.144.

  10. A multi-sample based method for identifying common CNVs in normal human genomic structure using high-resolution aCGH data.

    Directory of Open Access Journals (Sweden)

    Chihyun Park

    Full Text Available BACKGROUND: It is difficult to identify copy number variations (CNVs) in normal human genomic data due to noise and non-linear relationships between different genomic regions and signal intensity. A high-resolution array comparative genomic hybridization (aCGH) containing 42 million probes, which is very large compared to previous arrays, was recently published. Most existing CNV detection algorithms do not work well because of noise associated with the large amount of input data and because most of the current methods were not designed to analyze normal human samples. Normal human genome analysis often requires a joint approach across multiple samples. However, the majority of existing methods can only identify CNVs from a single sample. METHODOLOGY AND PRINCIPAL FINDINGS: We developed a multi-sample-based genomic variations detector (MGVD) that uses segmentation to identify common breakpoints across multiple samples and a k-means-based clustering strategy. Unlike previous methods, MGVD simultaneously considers multiple samples with different genomic intensities and identifies CNVs and CNV zones (CNVZs); a CNVZ is a more precise measure of the location of a genomic variant than the CNV region (CNVR). CONCLUSIONS AND SIGNIFICANCE: We designed a specialized algorithm to detect common CNVs from extremely high-resolution multi-sample aCGH data. MGVD showed high sensitivity and a low false discovery rate for a simulated data set, and outperformed most current methods when real, high-resolution HapMap datasets were analyzed. MGVD also had the fastest runtime compared to the other algorithms evaluated when actual, high-resolution aCGH data were analyzed. The CNVZs identified by MGVD can be used in association studies for revealing relationships between phenotypes and genomic aberrations. Our algorithm was developed with standard C++ and is available in Linux and MS Windows format in the STL library.
It is freely available at: http://embio.yonsei.ac.kr/~Park/mgvd.php.

  11. A multi-sample based method for identifying common CNVs in normal human genomic structure using high-resolution aCGH data.

    Science.gov (United States)

    Park, Chihyun; Ahn, Jaegyoon; Yoon, Youngmi; Park, Sanghyun

    2011-01-01

    It is difficult to identify copy number variations (CNV) in normal human genomic data due to noise and non-linear relationships between different genomic regions and signal intensity. A high-resolution array comparative genomic hybridization (aCGH) containing 42 million probes, which is very large compared to previous arrays, was recently published. Most existing CNV detection algorithms do not work well because of noise associated with the large amount of input data and because most of the current methods were not designed to analyze normal human samples. Normal human genome analysis often requires a joint approach across multiple samples. However, the majority of existing methods can only identify CNVs from a single sample. We developed a multi-sample-based genomic variations detector (MGVD) that uses segmentation to identify common breakpoints across multiple samples and a k-means-based clustering strategy. Unlike previous methods, MGVD simultaneously considers multiple samples with different genomic intensities and identifies CNVs and CNV zones (CNVZs); CNVZ is a more precise measure of the location of a genomic variant than the CNV region (CNVR). We designed a specialized algorithm to detect common CNVs from extremely high-resolution multi-sample aCGH data. MGVD showed high sensitivity and a low false discovery rate for a simulated data set, and outperformed most current methods when real, high-resolution HapMap datasets were analyzed. MGVD also had the fastest runtime compared to the other algorithms evaluated when actual, high-resolution aCGH data were analyzed. The CNVZs identified by MGVD can be used in association studies for revealing relationships between phenotypes and genomic aberrations. Our algorithm was developed with standard C++ and is available in Linux and MS Windows format in the STL library. It is freely available at: http://embio.yonsei.ac.kr/~Park/mgvd.php.
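    The k-means-based clustering strategy can be sketched minimally: cluster per-probe log2 ratios (averaged across samples) into loss/normal/gain groups with a plain 1-D k-means. This is a drastic simplification of MGVD for illustration only; the function name, initialization, and three-cluster assumption are the sketch's own, not the paper's implementation.

    ```python
    def kmeans_1d(values, k=3, iters=100):
        """Plain 1-D k-means. For aCGH log2 ratios with k=3, the three
        clusters loosely correspond to loss / normal / gain probes."""
        vs = sorted(values)
        # spread initial centers across the observed range
        centers = [vs[i * (len(vs) - 1) // (k - 1)] for i in range(k)]
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for v in values:
                idx = min(range(k), key=lambda c: abs(v - centers[c]))
                clusters[idx].append(v)
            new = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
            if new == centers:   # converged
                break
            centers = new
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        return centers, labels
    ```

    In a multi-sample setting, the values clustered would be cross-sample summaries per probe, so that only variation shared across samples drives the CNVZ calls.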

  12. Application of imputation methods to genomic selection in Chinese Holstein cattle

    Directory of Open Access Journals (Sweden)

    Weng Ziqing

    2012-02-01

    Full Text Available Abstract Missing genotypes are a common feature of high density SNP datasets obtained using SNP chip technology, and this is likely to decrease the accuracy of genomic selection. This problem can be circumvented by imputing the missing genotypes with estimated genotypes. When implementing imputation, the criteria used for SNP data quality control and whether to perform imputation before or after data quality control need to be considered. In this paper, we compared six strategies of imputation and quality control, using different imputation methods, different quality control criteria and changing the order of imputation and quality control, on a real dataset of milk production traits in Chinese Holstein cattle. The results demonstrated that, no matter what imputation method and quality control criteria were used, strategies with imputation before quality control performed better than strategies with imputation after quality control in terms of accuracy of genomic selection. The different imputation methods and quality control criteria did not significantly influence the accuracy of genomic selection. We concluded that performing imputation before quality control could increase the accuracy of genomic selection, especially when the rate of missing genotypes is high and the reference population is small.
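    As a hedged illustration of the bookkeeping involved (not the imputation algorithms actually compared in the paper, which exploit linkage disequilibrium, e.g. Beagle-style phasing), a per-SNP mode imputation might look like:

    ```python
    from collections import Counter

    def impute_mode(genotypes, missing=-1):
        """Fill missing genotype codes with the per-SNP most frequent
        observed genotype. Rows are animals, columns are SNPs (0/1/2
        coding, with `missing` marking no-calls). Returns a new matrix."""
        n_snps = len(genotypes[0])
        out = [row[:] for row in genotypes]
        for s in range(n_snps):
            observed = [row[s] for row in genotypes if row[s] != missing]
            mode = Counter(observed).most_common(1)[0][0] if observed else missing
            for row in out:
                if row[s] == missing:
                    row[s] = mode
        return out
    ```

    Running such a step before quality control, as the paper recommends, means the call-rate and allele-frequency filters then see a complete matrix rather than discarding SNPs whose missingness could have been repaired.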

  13. Optimal choice of k-mer in composition vector method for genome sequence comparison.

    Science.gov (United States)

    Das, Subhram; Deb, Tamal; Dey, Nilanjan; Ashour, Amira S; Bhattacharya, D K; Tibarewala, D N

    2017-11-24

    Several proteins and genes are members of families that share a common evolutionary history. To outline evolutionary relationships and to recognize conserved patterns, sequence comparison has become an essential process. The current work critically investigates the role of the k-mer in the composition vector method for comparing genome sequences. Composition vector methods using k-mers are generally applied with different choices of the value of k; for some values of k the results are satisfactory, but for others they are not. In the proposed work, the standard composition vector method is carried out using a 3-mer string length. In addition, a special type of information-based similarity index is used as the distance measure. The work establishes that the use of 3-mers together with the information-based similarity index provides satisfactory results in all cases, especially for the comparison of whole-genome sequences. These selections provide a sort of unified approach towards the comparison of genome sequences. Copyright © 2017 Elsevier Inc. All rights reserved.
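    A composition vector for k = 3 can be sketched as follows. Note that the Euclidean distance here stands in for the paper's information-based similarity index, which the abstract does not specify; the function names are the sketch's own.

    ```python
    from itertools import product

    def kmer_vector(seq, k=3):
        """Normalized k-mer frequency vector over all 4**k DNA words,
        in fixed lexicographic order so vectors are directly comparable."""
        words = ["".join(p) for p in product("ACGT", repeat=k)]
        counts = dict.fromkeys(words, 0)
        total = 0
        for i in range(len(seq) - k + 1):
            w = seq[i:i + k]
            if w in counts:          # skip windows with ambiguous bases
                counts[w] += 1
                total += 1
        if total == 0:
            return [0.0] * len(words)
        return [counts[w] / total for w in words]

    def euclidean(u, v):
        """Placeholder distance between two composition vectors."""
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    ```

    With k = 3 the vectors have 64 components; each increment of k multiplies the dimension by 4, which is one reason the choice of k matters for both signal and sparsity.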

  14. Research into real-option evaluation method

    International Nuclear Information System (INIS)

    Shiba, Tsuyoshi; Wakamatsu, Hitoshi

    2002-03-01

    As an evaluation method for valuing a corporation, an investment project, or a research and development program, and as a technique for evaluating enterprise strategy, real option analysis is attracting attention as an alternative to the conventional discounted cash flow (DCF) method. The reason is that it can apply option-valuation techniques from financial engineering to decision-making processes that must respond to changes in the investment environment. We surveyed related references, analysis tools, and application examples of decision-making using real option analysis, and considered how to apply it to decision-making on research and development at the Japan Nuclear Cycle Development Institute. We conclude that, because real option analysis is by nature an evaluation technique that assumes business conditions and the business itself will change, it is well suited to evaluating research and development whose business conditions are opaque and whose execution is highly flexible. It is also well suited to evaluating capital-intensive investment issues such as power plants. (author)
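    The core mechanics that distinguish a real option from a DCF value can be shown with a minimal one-period binomial sketch, assuming a given risk-neutral up-state probability. This is a generic textbook-style toy, not a method taken from the report.

    ```python
    def binomial_option_value(v_up, v_down, investment, r, p_up):
        """One-period real-option value: the firm invests only in the
        favorable state, so the payoff in each state is max(V - I, 0),
        discounted at rate r under risk-neutral probability p_up."""
        payoff_up = max(v_up - investment, 0.0)
        payoff_down = max(v_down - investment, 0.0)
        return (p_up * payoff_up + (1 - p_up) * payoff_down) / (1 + r)
    ```

    Because the downside payoff is floored at zero (the project can simply be abandoned), the option value exceeds the plain expected NPV whenever outcomes are uncertain, which is exactly why the technique suits opaque, flexible R&D programs.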

  15. Simplified methods for evaluating road prism stability

    Science.gov (United States)

    William J. Elliot; Mark Ballerini; David Hall

    2003-01-01

    Mass failure is one of the most common failures of low-volume roads in mountainous terrain. Current methods for evaluating stability of these roads require a geotechnical specialist. A stability analysis program, XSTABL, was used to estimate the stability of 3,696 combinations of road geometry, soil, and groundwater conditions. A sensitivity analysis was carried out to...

  16. Evaluation of laboratory diagnostic methods for cryptosporidiosis ...

    African Journals Online (AJOL)

    Background: The laboratory diagnosis of Cryptosporidium parvum infection involves the demonstration of the infective oocysts in stool specimens. The conventional modified Ziehl-Neelsen (MZN) method is very laborious, and stool debris can be mistaken for the parasite oocysts. Objective: This research was set to evaluate ...

  17. Methods for Evaluating Vocational Interest Structural Hypotheses.

    Science.gov (United States)

    Rounds, James; And Others

    1992-01-01

    Two forms of Holland's Hexagon Model (circular order and circumplex structure) are proposed and evaluated to demonstrate a randomization test of hypothesized order relations and confirmatory factor analysis. The models and methods are illustrated with correlation matrices based on the Unisex Edition of the ACT Interest Inventory. (Author/SK)

  18. The Operational Testing Effectiveness Evaluation Method

    Science.gov (United States)

    1988-04-01

    objective evaluation method. CHAPTER TWO HISTORICAL OT&E: A RESTLESS SEARCH We regard the creation of the testing and evaluation group as of the utmost...supervise it? Tracing the organizational development of operational testing leads through a bewildering maze of command and staff structures. This chapter

  19. Implementation and the choice of evaluation methods

    DEFF Research Database (Denmark)

    Flyvbjerg, Bent

    1984-01-01

    with an approach founded more in phenomenology and social science. The role of analytical methods is viewed very differently in the two paradigms, as in the conception of the policy process in general. Although analytical methods have come to play a prominent (and often dominant) role in transportation evaluation...... the programmed paradigm. By emphasizing the importance of the process of social interaction and subordinating analysis to this process, the adaptive paradigm reduces the likelihood of analytical methods narrowing and biasing implementation. To fulfil this subordinate role and to aid social interaction...

  20. Evaluation of Abiotic Resource LCIA Methods

    Directory of Open Access Journals (Sweden)

    Rodrigo A. F. Alvarenga

    2016-02-01

    Full Text Available In a life cycle assessment (LCA), the impacts on resources are evaluated at the area of protection (AoP) of the same name, through life cycle impact assessment (LCIA) methods. Different LCIA methods that assess abiotic resources are available in the literature, and the goal of this study was to propose recommendations for that impact category. We evaluated 19 different LCIA methods against two criteria (scientific robustness and scope), divided into three assessment levels: resource accounting methods (RAM), midpoint, and endpoint. To support the assessment, we applied some LCIA methods to a case study of ethylene production. At the RAM level, the most suitable LCIA method was CEENE (Cumulative Exergy Extraction from the Natural Environment), although SED (Solar Energy Demand) and ICEC (Industrial Cumulative Exergy Consumption)/ECEC (Ecological Cumulative Exergy Consumption) may also be recommended; at the midpoint level it was ADP (Abiotic Depletion Potential), and at the endpoint level both the ReCiPe Endpoint and EPS2000 (Environmental Priority Strategies). We noticed that assessment for the AoP Resources is not yet well established in the LCA community, since new LCIA methods (with different approaches and assessment frameworks) keep appearing, and this trend may continue in the future.

  1. Evaluation of inbreeding depression in Holstein cattle using whole-genome SNP markers and alternative measures of genomic inbreeding.

    Science.gov (United States)

    Bjelland, D W; Weigel, K A; Vukasinovic, N; Nkrumah, J D

    2013-07-01

    The effects of increased pedigree inbreeding in dairy cattle populations have been well documented and result in a negative impact on profitability. Recent advances in genotyping technology have allowed researchers to move beyond pedigree analysis and study inbreeding at a molecular level. In this study, 5,853 animals were genotyped for 54,001 single nucleotide polymorphisms (SNP); 2,913 cows had phenotypic records including a single lactation for milk yield (from either lactation 1, 2, 3, or 4), reproductive performance, and linear type conformation. After removing SNP with poor call rates, low minor allele frequencies, and departure from Hardy-Weinberg equilibrium, 33,025 SNP remained for analyses. Three measures of genomic inbreeding were evaluated: percent homozygosity (FPH), inbreeding calculated from runs of homozygosity (FROH), and inbreeding derived from a genomic relationship matrix (FGRM). Average FPH was 60.5±1.1%, average FROH was 3.8±2.1%, and average FGRM was 20.8±2.3%, where animals with larger values for each of the genomic inbreeding indices were considered more inbred. Decreases in total milk yield to 205d postpartum of 53, 20, and 47kg per 1% increase in FPH, FROH, and FGRM, respectively, were observed. Increases in days open per 1% increase in FPH (1.76 d), FROH (1.72 d), and FGRM (1.06 d) were also noted, as well as increases in maternal calving difficulty (0.09, 0.03, and 0.04 on a 5-point scale for FPH, FROH, and FGRM, respectively). Several linear type traits, such as strength (-0.40, -0.11, and -0.19), rear legs rear view (-0.35, -0.16, and -0.14), front teat placement (0.35, 0.25, 0.18), and teat length (-0.24, -0.14, and -0.13) were also affected by increases in FPH, FROH, and FGRM, respectively. Overall, increases in each measure of genomic inbreeding in this study were associated with negative effects on production and reproductive ability in dairy cows. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc
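    The two simplest of these measures, FPH and FROH, can be sketched directly from a SNP genotype vector. The 0/1/2 coding and the SNP-count run threshold below are illustrative assumptions; real studies define runs of homozygosity with Mb-scale length cutoffs and allow occasional missing calls.

    ```python
    def percent_homozygosity(genotypes):
        """FPH: percentage of SNPs with homozygous calls (0 or 2
        in 0/1/2 minor-allele-count coding)."""
        hom = sum(1 for g in genotypes if g in (0, 2))
        return 100.0 * hom / len(genotypes)

    def runs_of_homozygosity(genotypes, min_len=4):
        """Return (start, length) of homozygous runs spanning at least
        min_len consecutive SNPs."""
        runs, start = [], None
        for i, g in enumerate(genotypes + [1]):  # sentinel het ends last run
            if g in (0, 2):
                if start is None:
                    start = i
            else:
                if start is not None and i - start >= min_len:
                    runs.append((start, i - start))
                start = None
        return runs

    def f_roh(genotypes, min_len=4):
        """FROH: percentage of the genotyped SNPs lying inside runs
        of homozygosity."""
        covered = sum(length for _, length in
                      runs_of_homozygosity(genotypes, min_len))
        return 100.0 * covered / len(genotypes)
    ```

    The contrast between the two is visible even in a toy vector: scattered homozygous calls raise FPH but not FROH, whereas long uninterrupted runs, the signature of recent inbreeding, raise both.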

  2. Evaluation of Dynamic Methods for Earthwork Assessment

    Directory of Open Access Journals (Sweden)

    Vlček Jozef

    2015-05-01

    Full Text Available Rapid development of road construction imposes demands for fast, high-quality methods of earthwork evaluation. Dynamic methods are now adopted in numerous branches of civil engineering; in particular, evaluation of earthwork quality can be sped up using dynamic equipment. This paper presents the results of parallel measurements with selected devices for determining the level of compaction of soils. The measurements were used to develop correlations between the values obtained from the various apparatuses. The correlations show that the examined apparatuses are suitable for assessing the compaction level of fine-grained soils, with consideration of the boundary conditions of the equipment used. The presented methods are quick, results are available immediately after measurement, and they are thus suitable when construction works must be performed in a short period of time.

  3. Calculation of evolutionary correlation between individual genes and full-length genome: a method useful for choosing phylogenetic markers for molecular epidemiology.

    Directory of Open Access Journals (Sweden)

    Shuai Wang

    Full Text Available Individual genes or regions are still commonly used to estimate the phylogenetic relationships among viral isolates. Genomic regions that can faithfully provide assessments consistent with those predicted from full-length genome sequences would be good candidate phylogenetic markers for molecular epidemiological studies of many viruses. Here we employed a statistical method to evaluate the evolutionary relationships between individual viral genes and full-length genomes without tree construction, as a way to determine which gene can match the genome well in phylogenetic analyses. The method calculates linear correlations between the genetic distance matrices of aligned individual gene sequences and aligned genome sequences. We applied this method to the phylogenetic analyses of porcine circovirus 2 (PCV2), measles virus (MV), hepatitis E virus (HEV) and Japanese encephalitis virus (JEV). Phylogenetic trees were constructed for comparison, and the possible factors affecting the accuracy of the method are also discussed. The results revealed that this method produced conclusions consistent with previous studies about which consensus sequences can successfully be used as phylogenetic markers, and suggested that these evolutionary correlations can provide useful information for identifying genes that can be used effectively to infer genetic relationships.
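    The distance-matrix correlation at the heart of the method can be sketched as a plain Pearson correlation over the upper-triangle entries of two pairwise distance matrices (a Mantel-statistic-style calculation; function names are the sketch's own assumptions):

    ```python
    def pearson(x, y):
        """Pearson correlation coefficient of two equal-length lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    def matrix_correlation(D_gene, D_genome):
        """Correlate the upper triangles of a gene distance matrix and a
        genome distance matrix; values near 1 suggest the gene recovers
        the genome-wide relationships among isolates."""
        n = len(D_gene)
        x = [D_gene[i][j] for i in range(n) for j in range(i + 1, n)]
        y = [D_genome[i][j] for i in range(n) for j in range(i + 1, n)]
        return pearson(x, y)
    ```

    Only the upper triangle is used because a distance matrix is symmetric with a zero diagonal; including the mirrored entries would double-count every pair.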

  4. Genomic evaluation, breed identification, and population structure of North American, English and Island Guernsey dairy cattle

    Science.gov (United States)

    Genomic evaluations of dairy cattle in the United States have been available for Brown Swiss, Holsteins, and Jerseys since 2009 and for Ayrshires since 2013. As of February 2015, 2,281 Guernsey bulls and cows had genotypes from collaboration between the United States, Canada, England, and the island...

  5. Use of simulated data sets to evaluate the fidelity of Metagenomicprocessing methods

    Energy Technology Data Exchange (ETDEWEB)

    Mavromatis, Konstantinos; Ivanova, Natalia; Barry, Kerri; Shapiro, Harris; Goltsman, Eugene; McHardy, Alice C.; Rigoutsos, Isidore; Salamov, Asaf; Korzeniewski, Frank; Land, Miriam; Lapidus, Alla; Grigoriev, Igor; Richardson, Paul; Hugenholtz, Philip; Kyrpides, Nikos C.

    2006-12-01

    Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (BLAST hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.
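    The construction of such simulated data sets, sampling reads at random from isolate genomes, can be sketched as follows; uniform sampling and fixed read length are assumed, and no sequencing-error model is included, unlike a realistic simulator.

    ```python
    import random

    def simulate_metagenome(genomes, n_reads, read_len, seed=0):
        """Build a simulated read set by drawing fixed-length substrings
        uniformly from a dict of isolate genome sequences. Each read is
        returned with its source genome name, so downstream binning or
        assembly results can be scored against the known origin."""
        rng = random.Random(seed)
        names = list(genomes)
        reads = []
        for _ in range(n_reads):
            name = rng.choice(names)
            g = genomes[name]
            start = rng.randrange(len(g) - read_len + 1)
            reads.append((name, g[start:start + read_len]))
        return reads
    ```

    Keeping the true origin attached to every read is the point of the exercise: it turns assembler and binner output into something that can be scored for fidelity, exactly as the paper does against the corresponding isolate genomes.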

  6. Gene re-annotation in genome of the extremophile Pyrobaculum aerophilum by using bioinformatics methods.

    Science.gov (United States)

    Du, Meng-Ze; Guo, Feng-Biao; Chen, Yue-Yun

    2011-10-01

    In this paper, we re-annotated the genome of Pyrobaculum aerophilum str. IM2, particularly its hypothetical ORFs. The annotation process includes three parts. Firstly, and most importantly, 23 new genes that were missed in the original annotation are found by combining similarity search and ab initio gene-finding approaches. Among these new genes, five have significant similarities with function-known genes and the rest have significant similarities with hypothetical ORFs contained in other genomes. Secondly, the coding potentials of the 1645 hypothetical ORFs are re-predicted using 33 Z curve variables combined with the Fisher linear discrimination method. With an accuracy of 99.68%, 25 originally annotated hypothetical ORFs are recognized as non-coding by our method. Thirdly, 80 hypothetical ORFs are assigned potential functions using similarity search with the BLAST program. Re-annotation of the genome will benefit related research on this hyperthermophilic crenarchaeon. Also, the re-annotation procedure could serve as a reference for other archaeal genomes. Details of the revised annotation are freely available at http://cobi.uestc.edu.cn/resource/paero/
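    The Z curve underlying the coding-potential variables maps a DNA sequence to cumulative three-dimensional coordinates; a minimal sketch of that mapping follows (the 33 variables used in the paper are derived quantities not computed here):

    ```python
    def z_curve(seq):
        """Cumulative Z-curve coordinates after each base:
        x = purines (A,G) - pyrimidines (C,T),
        y = amino (A,C) - keto (G,T),
        z = weak (A,T) - strong (G,C) hydrogen bonding."""
        counts = {"A": 0, "C": 0, "G": 0, "T": 0}
        points = []
        for base in seq.upper():
            counts[base] += 1
            a, c, g, t = counts["A"], counts["C"], counts["G"], counts["T"]
            points.append(((a + g) - (c + t),
                           (a + c) - (g + t),
                           (a + t) - (c + g)))
        return points
    ```

    Because the three coordinates are linear in the base counts, features extracted from them (such as per-codon-position disparities) feed naturally into a linear classifier like the Fisher discriminant used in the paper.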

  7. Improved method for genomic DNA extraction for Opuntia Mill. (Cactaceae).

    Science.gov (United States)

    Martínez-González, César Ramiro; Ramírez-Mendoza, Rosario; Jiménez-Ramírez, Jaime; Gallegos-Vázquez, Clemente; Luna-Vega, Isolda

    2017-01-01

    Genomic DNA extracted from species of Cactaceae is often contaminated with significant amounts of mucilage and pectin. Pectin is one of the main components of cell walls, whereas mucilage is a complex polysaccharide with a ramified structure. Thus, pectin- and mucilage-free extraction of DNA is a key step for further downstream PCR-based analyses. We tested our DNA extraction method on cladode tissue (juvenile, adult, and herbarium specimens) of 17 species of Opuntia Mill., which are characterized by large quantities of pectin and mucilage. We developed a method for the extraction of gDNA free of inhibitory compounds common in species of Opuntia Mill., such as pectin and mucilage. Compared to previous extraction protocols, our method produced higher yields of high-quality genomic DNA.

  8. Comparative Analysis of the Genomic DNA Isolation Methods on Inula sp. (Asteraceae

    Directory of Open Access Journals (Sweden)

    Emre SEVİNDİK

    2016-12-01

    Full Text Available Simple, fast, low-cost and high-throughput protocols are required for DNA isolation from plant species. In this study, the phenol-chloroform-isoamyl alcohol method and a commercial (Sigma) DNA isolation kit were applied to some Inula species belonging to the family Asteraceae. Genomic DNA amounts, A260, A280, A260/A230 and purity degrees (A260/A280) obtained with both methods were measured by electrophoresis and spectrophotometry. Additionally, PCR amplification was performed with primer pairs specific to the nrDNA ITS, cpDNA ndhF (972F-1603R) and trnL-F regions. The results showed that the maximum amount of genomic DNA in nanograms was obtained with the phenol-chloroform-isoamyl alcohol method. The study also revealed that I. macrocephala had the maximum and I. heterolepis the minimum DNA amount. A260/A280 purity degrees showed that the highest and lowest purity of gDNAs obtained with the phenol-chloroform-isoamyl alcohol method were in I. aucheriana and I. salicina, respectively. The highest and lowest purity degrees of gDNAs obtained with the commercial kit were observed in I. fragilis and I. macrocephala samples, respectively. PCR amplification results showed that band profiles for all three regions (ITS, trnL-F and ndhF) did not yield positive results when DNA was extracted by the phenol-chloroform-isoamyl alcohol method, whereas PCR band profiles obtained with the commercial kit did. As a result, although the maximum amount of genomic DNA was obtained with the phenol-chloroform-isoamyl alcohol method, the genomic DNA obtained with the commercial kit proved more efficient for PCR.

  9. Measurement properties of gingival biotype evaluation methods.

    Science.gov (United States)

    Alves, Patrick Henry Machado; Alves, Thereza Cristina Lira Pacheco; Pegoraro, Thiago Amadei; Costa, Yuri Martins; Bonfante, Estevam Augusto; de Almeida, Ana Lúcia Pompéia Fraga

    2018-01-19

    There are numerous methods to measure the dimensions of the gingival tissue, but few studies have compared the effectiveness of one method against another. This study aimed to describe a new method and to estimate the validity of gingival biotype assessment with the aid of computed tomography scanning (CTS). In each patient, different methods of evaluating gingival thickness were used: transparency of the periodontal probe, the transgingival method, photography, and a new CTS-based method. Intrarater and interrater reliability for the categorical classification of gingival biotype were estimated with Cohen's kappa coefficient, the intraclass correlation coefficient (ICC), and ANOVA (P < 0.05). The validity of CTS was determined using the transgingival method as the reference standard. Sensitivity and specificity values were computed along with their 95% CIs. Twelve patients were subjected to assessment of their gingival thickness. The highest agreement was found between the transgingival method and CTS (86.1%). The comparison between the categorical classifications of CTS and the transgingival method (reference standard) showed high specificity (94.92%) and low sensitivity (53.85%) for definition of a thin biotype. The new CTS assessment method for classifying gingival tissue thickness can be considered reliable and clinically useful for diagnosing a thick biotype. © 2018 Wiley Periodicals, Inc.
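    The validity measures reported above derive from a 2x2 table against the reference standard; a minimal sketch follows. The counts in the test are hypothetical values merely consistent with the reported percentages, not the study's actual data.

    ```python
    def sens_spec(tp, fn, tn, fp):
        """Sensitivity and specificity from a 2x2 diagnostic table:
        tp/fn are reference-positive cases the index test did/did not
        flag; tn/fp are reference-negative cases it did/did not clear."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity, specificity
    ```

    High specificity with low sensitivity, the pattern reported for CTS on thin biotypes, means the test rarely mislabels thick tissue as thin but misses many genuinely thin cases.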

  10. A method for accurate detection of genomic microdeletions using real-time quantitative PCR

    Directory of Open Access Journals (Sweden)

    Bassett Anne S

    2005-12-01

    Full Text Available Abstract Background Quantitative polymerase chain reaction (qPCR) is a well-established method for quantifying levels of gene expression, but has not been routinely applied to the detection of constitutional copy number alterations of human genomic DNA. Microdeletions or microduplications of the human genome are associated with a variety of genetic disorders. Although clinical laboratories routinely use fluorescence in situ hybridization (FISH) to identify such cryptic genomic alterations, there remains a significant number of individuals in whom constitutional genomic imbalance is suspected, based on clinical parameters, but cannot be readily detected using current cytogenetic techniques. Results In this study, a novel application of real-time qPCR is presented that can be used to reproducibly detect chromosomal microdeletions and microduplications. This approach was applied to DNA from a series of patient samples and controls to validate genomic copy number alteration at cytoband 22q11. The study group comprised 12 patients with clinical symptoms of chromosome 22q11 deletion syndrome (22q11DS), 1 patient trisomic for 22q11 and 4 normal controls. Six of the patients (group 1) had known hemizygous deletions, as detected by standard diagnostic FISH, whilst the remaining 6 patients (group 2) were classified as 22q11DS negative using the clinical FISH assay. Screening of the patients and controls with a set of 10 real-time qPCR primers, spanning the 22q11.2-deleted region and flanking sequence, confirmed the FISH assay results for all patients with 100% concordance. Moreover, this qPCR enabled a refinement of the region of deletion at 22q11. Analysis of DNA from the chromosome 22 trisomic sample demonstrated genomic duplication within 22q11. Conclusion In this paper we present a qPCR approach for the detection of chromosomal microdeletions and microduplications.
The strategic use of in silico modelling for qPCR primer design to avoid regions of repetitive
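    Copy number calls from qPCR data of this kind are commonly derived with the comparative 2^-ddCt method (Livak); the sketch below is a generic illustration of that calculation, not the paper's exact analysis pipeline.

    ```python
    def relative_copy_number(ct_target, ct_ref, cal_ct_target, cal_ct_ref):
        """Relative quantity by the 2^-ddCt method: a target locus vs a
        reference locus, normalized to a calibrator sample assumed to
        carry two copies. A value near 0.5 suggests a hemizygous
        deletion; a value near 1.5 suggests a duplication."""
        d_ct = ct_target - ct_ref
        d_ct_cal = cal_ct_target - cal_ct_ref
        return 2.0 ** -(d_ct - d_ct_cal)
    ```

    The intuition: each missing copy of the target delays its amplification by roughly one cycle relative to the reference locus, halving the computed quantity.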

  11. Methods of Evaluating Performances for Marketing Strategies

    OpenAIRE

    Ioan Cucu

    2005-01-01

    There are specific methods for assessing and improving the effectiveness of a marketing strategy. A marketer should state in the marketing plan what a marketing strategy is supposed to accomplish. These statements should set forth performance standards, which usually are stated in terms of profits, sales, or costs. Actual performance must be measured in similar terms so that comparisons are possible. This paper describes sales analysis and cost analysis, two general ways of evaluating the act...

  12. Comparative evaluation of different histoprocessing methods.

    Science.gov (United States)

    Singla, Kartesh; Sandhu, Simarpreet Virk; Pal, Rana A G K; Bansal, Himanta; Bhullar, Ramanpreet Kaur; Kaur, Preetinder

    2017-01-01

    Tissue processing has for years been carried out by the conventional method, a time-consuming technique resulting in a 1-day delay in diagnosis. However, in this era of modernization and managed care, rapid diagnosis is increasingly desirable to fulfill the needs of clinicians. The objective of the present study was to compare and determine the positive impact on turnaround times of different tissue processing methods by comparing the color intensity, cytoplasmic details, and nuclear details of the tissues processed by three methods. A total of sixty biopsied tissues were grossed and cut into three equal parts. One part was processed by the conventional method, the second by rapid manual, and the third by the microwave-assisted method. The slides obtained after processing were circulated among four observers for evaluation. Sections processed by the three techniques were subjected to statistical analysis by the Kruskal-Wallis test. Cronbach's alpha reliability test was applied to assess the reliability among observers. One-way analysis of variance (ANOVA) was used for comparing mean shrinkage before and after processing. All observers were assumed to be reliable as Cronbach's reliability test was statistically significant. The results were statistically non-significant as observed by the Kruskal-Wallis test. One-way ANOVA revealed a significant value on comparison of the tissue shrinkage produced by the three techniques. The histological evaluation of the tissues revealed that the nuclear-cytoplasmic contrast was good in tissues processed by microwave, followed by the conventional and rapid manual processing techniques. The color intensity of the tissues processed by microwave was crisper, and there was a good contrast between the hematoxylin and eosin-stained areas as compared to the manual methods. The overall quality of tissues from all three methods was similar. It was not feasible to distinguish between the three techniques by observing the tissue sections. Microwave

  13. gEVAL - a web-based browser for evaluating genome assemblies.

    Science.gov (United States)

    Chow, William; Brugger, Kim; Caccamo, Mario; Sealy, Ian; Torrance, James; Howe, Kerstin

    2016-08-15

    For most research approaches, genome analyses are dependent on the existence of a high quality genome reference assembly. However, the local accuracy of an assembly remains difficult to assess and improve. The gEVAL browser allows the user to interrogate an assembly in any region of the genome by comparing it to different datasets and evaluating the concordance. These analyses include: a wide variety of sequence alignments, comparative analyses of multiple genome assemblies, and consistency with optical and other physical maps. gEVAL highlights allelic variations, regions of low complexity, abnormal coverage, and potential sequence and assembly errors, and offers strategies for improvement. Although gEVAL focuses primarily on sequence integrity, it can also display arbitrary annotation including from Ensembl or TrackHub sources. We provide gEVAL web sites for many human, mouse, zebrafish and chicken assemblies to support the Genome Reference Consortium, and gEVAL is also downloadable to enable its use for any organism and assembly. Web browser: http://geval.sanger.ac.uk; plugin: http://wchow.github.io/wtsi-geval-plugin. Contact: kj2@sanger.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  14. Genomic Analysis of a Marine Bacterium: Bioinformatics for Comparison, Evaluation, and Interpretation of DNA Sequences

    Directory of Open Access Journals (Sweden)

    Bhagwan N. Rekadwad

    2016-01-01

    Full Text Available A total of five highly related strains of an unidentified marine bacterium were analyzed through their short genome sequences (AM260709–AM260713). Genome-to-Genome Distance (GGDC) analysis showed high similarity to Pseudoalteromonas haloplanktis (X67024). The generated unique Quick Response (QR) codes indicated no identity to other microbial species or gene sequences. Chaos Game Representation (CGR) showed the areas of the plot in which bases were concentrated. Guanine residues were the most numerous, followed by cytosine. Frequency of Chaos Game Representation (FCGR) indicated that CC and GG blocks have a higher frequency in the sequences from the evaluated marine bacterium strains. The maximum GC content for the marine bacterium strains ranged from 53% to 54%. The use of QR codes, CGR, FCGR, and the GC dataset helped in identifying and interpreting short genome sequences from specific isolates. A phylogenetic tree was constructed with the bootstrap test (1000 replicates) using MEGA6 software. Principal Component Analysis (PCA) was carried out using the EMBL-EBI MUSCLE program. The genomic data thus generated are of great assistance for hierarchical classification in bacterial systematics, which, combined with phenotypic features, represents a basic procedure for a polyphasic approach to unambiguous taxonomic classification of bacterial isolates.
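
The GC-content, CGR, and FCGR computations described above are straightforward to reproduce. A minimal Python sketch follows; the sequence is an invented placeholder, not one of the AM260709–AM260713 entries:

```python
from collections import Counter

# Invented short sequence standing in for a genome fragment
seq = "ATGCGGCCGCTTAAGGCCGGCGCATGCCGG"

# GC content as a percentage
counts = Counter(seq)
gc_content = 100.0 * (counts["G"] + counts["C"]) / len(seq)

# Chaos Game Representation: each base maps to a corner of the unit square;
# each plotted point is the midpoint between the previous point and that corner.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_points(sequence):
    x, y = 0.5, 0.5
    pts = []
    for base in sequence:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        pts.append((x, y))
    return pts

points = cgr_points(seq)

# FCGR reduces to k-mer counting; with k=2 it tallies blocks such as CC and GG
def fcgr_counts(sequence, k=2):
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

dinucs = fcgr_counts(seq)
```

Plotting `points` as a scatter gives the familiar CGR square in which base composition biases appear as density differences.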

  15. Usability Evaluation Method for Agile Software Development

    Directory of Open Access Journals (Sweden)

    Saad Masood Butt

    2015-02-01

    Full Text Available Agile methods are a good fit for the rapidly growing software industry due to their flexible and dynamic nature. But does software developed using agile methods meet usability standards? To answer this question, note that the majority of agile software development projects currently involve interactive user interface designs, which are only possible by following User Centered Design (UCD) within agile methods. The question, then, is how to integrate UCD with agile models. Both agile models and UCD are iterative in nature, but agile models focus on coding and development of the software, whereas UCD focuses on its user interface. Similarly, both include testing: the agile model involves automated testing of code, while UCD involves an expert or a user testing the user interface. In this paper, a new agile usability model is proposed, and the evaluation of the proposed model is presented by practically implementing it in three real-life projects. Key results from these projects clearly show that the proposed agile model incorporates usability evaluation methods, improves collaboration between usability experts and agile software experts, and allows agile developers to incorporate the results from UCD into subsequent iterations.

  16. An Objective Train Timetabling Quality Evaluation Method

    Directory of Open Access Journals (Sweden)

    Feng Jiang

    2017-01-01

    Full Text Available The train timetable dominates rail traffic organization. Timetabling quality should be evaluated to check the skill of train timetable managers. The values of existing timetable evaluation indexes vary with infrastructure features and traffic flow; therefore, they are not in fact comparable. Furthermore, subjective inputs such as expert scores are involved in evaluation, which can lead to unreliable results because experts may hold different opinions. To overcome these shortcomings, we propose a relative train path efficiency index that treats train paths as production units. Each unit consumes some transport resources and produces some feedback outputs. A data envelopment analysis (DEA) model is applied to compute train path efficiency. Two statistical functions of train path efficiency are used to evaluate timetabling quality. We verify our method with real-world timetables. First, we use the Shibantan-to-Xinqiao line timetable to test the relative nature of the proposed index, and the results show that the train path efficiency value is relative and can reflect whether stops are evenly distributed or not. Second, we evaluate the timetabling quality of two timetables of the Qingdao-to-Jinan line with different traffic flows, and the results show that, compared with the 2012 timetable, timetabling quality decreased in 2013.
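
The efficiency computation behind such an index can be sketched with a standard input-oriented CCR DEA model solved as a linear program. This is a generic DEA sketch, not the paper's exact formulation, and all inputs, outputs, and units below are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: each train path (unit) consumes inputs (e.g. running time,
# track occupation) and produces outputs (e.g. stops served).
inputs = np.array([[2.0, 1.0], [3.0, 2.0], [2.5, 1.5], [4.0, 3.0]])   # x_j
outputs = np.array([[3.0], [4.0], [3.5], [4.5]])                      # y_j

def ccr_efficiency(j, inputs, outputs):
    """Input-oriented CCR efficiency of unit j via the multiplier LP:
    max u.y_j  s.t.  v.x_j = 1,  u.y_i - v.x_i <= 0 for all i,  u, v >= 0."""
    n_units, m_in = inputs.shape
    m_out = outputs.shape[1]
    # decision vector: [u (output weights), v (input weights)]
    c = np.concatenate([-outputs[j], np.zeros(m_in)])   # linprog minimizes -u.y_j
    A_ub = np.hstack([outputs, -inputs])                # u.y_i - v.x_i <= 0
    b_ub = np.zeros(n_units)
    A_eq = np.concatenate([np.zeros(m_out), inputs[j]]).reshape(1, -1)
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m_out + m_in))
    return -res.fun

efficiencies = [ccr_efficiency(j, inputs, outputs) for j in range(len(inputs))]
```

Statistics of the resulting per-path efficiencies (mean, spread) can then serve as timetable-level quality summaries, in the spirit of the two statistical functions the abstract mentions.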

  17. Cryptosporidiosis: multiattribute evaluation of six diagnostic methods.

    Science.gov (United States)

    MacPherson, D W; McQueen, R

    1993-02-01

    Six diagnostic methods (Giemsa staining, Ziehl-Neelsen staining, auramine-rhodamine staining, Sheather's sugar flotation, an indirect immunofluorescence procedure, and a modified concentration-sugar flotation method) for the detection of Cryptosporidium spp. in stool specimens were compared on the following attributes: diagnostic yield, cost to perform each test, ease of handling, and ability to process large numbers of specimens for screening purposes by batching. A rank ordering from least desirable to most desirable was then established for each method by using the study attributes. The process of decision analysis with respect to the laboratory diagnosis of cryptosporidiosis is discussed through the application of multiattribute utility theory to the rank ordering of the study criteria. Within a specific health care setting, a diagnostic facility will be able to calculate its own utility scores for our study attributes. Multiattribute evaluation and analysis are potentially powerful tools in the allocation of resources in the laboratory.
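
A multiattribute utility rank ordering of this kind can be sketched as a weighted additive score over the study attributes. The per-method scores and the weights below are invented for illustration, not the study's values:

```python
# Invented scores (1 = least desirable, 6 = most desirable) for each
# diagnostic method on the four study attributes.
methods = {
    "Giemsa":                  {"yield": 2, "cost": 5, "handling": 4, "batching": 3},
    "Ziehl-Neelsen":           {"yield": 3, "cost": 4, "handling": 4, "batching": 3},
    "auramine-rhodamine":      {"yield": 5, "cost": 2, "handling": 3, "batching": 6},
    "Sheather's flotation":    {"yield": 4, "cost": 6, "handling": 2, "batching": 2},
    "immunofluorescence":      {"yield": 6, "cost": 1, "handling": 3, "batching": 5},
    "concentration-flotation": {"yield": 5, "cost": 3, "handling": 2, "batching": 2},
}

# Attribute weights chosen by the laboratory; they must sum to 1.
weights = {"yield": 0.4, "cost": 0.3, "handling": 0.15, "batching": 0.15}

def utility(scores, weights):
    # Additive multiattribute utility: weighted sum of single-attribute scores
    return sum(weights[a] * s for a, s in scores.items())

ranking = sorted(methods, key=lambda m: utility(methods[m], weights), reverse=True)
```

As the abstract notes, each facility would supply its own scores and weights, so the resulting rank ordering is setting-specific.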

  18. Constructs and methods for genome editing and genetic engineering of fungi and protists

    Science.gov (United States)

    Hittinger, Christopher Todd; Alexander, William Gerald

    2018-01-30

    Provided herein are constructs for genome editing or genetic engineering in fungi or protists, methods of using the constructs, and media for use in selecting cells. The constructs include a polynucleotide encoding a thymidine kinase operably connected to a promoter, suitably a constitutive promoter; a polynucleotide encoding an endonuclease operably connected to an inducible promoter; and a recognition site for the endonuclease. The constructs may also include selectable markers for use in selecting recombinants.

  19. Systematic differences in the response of genetic variation to pedigree and genome-based selection methods.

    Science.gov (United States)

    Heidaritabar, M; Vereijken, A; Muir, W M; Meuwissen, T; Cheng, H; Megens, H-J; Groenen, M A M; Bastiaansen, J W M

    2014-12-01

    Genomic selection (GS) is a DNA-based method of selecting for quantitative traits in animal and plant breeding, and offers a potentially superior alternative to traditional breeding methods that rely on pedigree and phenotype information. Using a 60 K SNP chip with markers spaced throughout the entire chicken genome, we compared the impact of GS and traditional BLUP (best linear unbiased prediction) selection methods applied side-by-side in three different lines of egg-laying chickens. Differences were demonstrated between methods, both in the magnitude and in the genomic distribution of allele frequency changes. In all three lines, the average allele frequency changes were larger with GS, 0.056, 0.064 and 0.066, compared with BLUP, 0.044, 0.045 and 0.036 for lines B1, B2 and W1, respectively. With BLUP, 35 selected regions (empirical P selected regions were identified. Empirical thresholds for local allele frequency changes were determined from gene dropping, and differed considerably between GS (0.167-0.198) and BLUP (0.105-0.126). Between lines, the genomic regions with large changes in allele frequencies showed limited overlap. Our results show that GS applies selection pressure much more locally than BLUP, resulting in larger allele frequency changes. With these results, novel insights into the nature of selection on quantitative traits have been gained, and important questions regarding the long-term impact of GS are raised. The rapid changes to one part of the genetic architecture, while another part may, at least in the short term, remain unselected, require careful consideration, especially when selection occurs before phenotypes are observed.
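
The gene-dropping idea used to derive empirical thresholds can be illustrated without a real pedigree: simulate neutral allele transmission many times and take a high percentile of the resulting allele frequency changes. The sketch below replaces pedigree-based gene dropping with simple binomial sampling of gametes, and the parent count, generation count, and replicate number are invented:

```python
import random

random.seed(1)

def drift_change(p0, n_parents, n_generations):
    """Neutral allele-frequency change from binomial sampling of 2N gametes
    per generation (a pedigree-free stand-in for gene dropping)."""
    p = p0
    for _ in range(n_generations):
        k = sum(random.random() < p for _ in range(2 * n_parents))
        p = k / (2 * n_parents)
    return abs(p - p0)

# Null distribution of |delta p| over many replicates, as in gene dropping
changes = sorted(drift_change(0.5, n_parents=60, n_generations=3)
                 for _ in range(2000))
threshold = changes[int(0.95 * len(changes))]   # empirical 95% threshold
```

Observed local allele frequency changes exceeding such a drift-only threshold are then candidate signatures of selection rather than chance.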

  20. Comparison of Eleven Methods for Genomic DNA Extraction Suitable for Large-Scale Whole-Genome Genotyping and Long-Term DNA Banking Using Blood Samples

    Science.gov (United States)

    Psifidi, Androniki; Dovas, Chrysostomos I.; Bramis, Georgios; Lazou, Thomai; Russel, Claire L.; Arsenos, Georgios; Banos, Georgios

    2015-01-01

    In recent years, next-generation sequencing and microarray technologies have revolutionized scientific research with their applications to high-throughput analysis of biological systems. Isolation of high quantities of pure, intact, double-stranded, highly concentrated, uncontaminated genomic DNA is a prerequisite for successful and reliable large-scale genotyping analysis. High quantities of pure DNA are also required for the creation of DNA banks. In the present study, eleven different DNA extraction procedures, including phenol-chloroform, silica-based and magnetic-bead-based extractions, were examined to ascertain their relative effectiveness for extracting DNA from ovine blood samples. The quality and quantity of the differentially extracted DNA were subsequently assessed by spectrophotometric measurements, Qubit measurements, real-time PCR amplifications and gel electrophoresis. Processing time, intensity of labor and cost for each method were also evaluated. Results revealed significant differences among the eleven procedures, and only four of the methods yielded satisfactory outputs. These four methods, comprising three modified silica-based commercial kits (Modified Blood, Modified Tissue and Modified Dx kits) and an in-house developed magnetic-bead-based protocol, were most appropriate for extracting DNA of high quality and quantity suitable for large-scale microarray genotyping and also for long-term DNA storage, as demonstrated by their successful application to 600 individuals. PMID:25635817

  1. r2VIM: A new variable selection method for random forests in genome-wide association studies.

    Science.gov (United States)

    Szymczak, Silke; Holzinger, Emily; Dasgupta, Abhijit; Malley, James D; Molloy, Anne M; Mills, James L; Brody, Lawrence C; Stambolian, Dwight; Bailey-Wilson, Joan E

    2016-01-01

    Machine learning methods, and in particular random forests (RFs), are a promising alternative to standard single-SNP analyses in genome-wide association studies (GWAS). RFs provide variable importance measures (VIMs) to rank SNPs according to their predictive power. However, in contrast to the established genome-wide significance threshold, no clear criteria exist to determine how many SNPs should be selected for downstream analyses. We propose a new variable selection approach, the recurrent relative variable importance measure (r2VIM). Importance values are calculated relative to an observed minimal importance score across several runs of RF, and only SNPs with large relative VIMs in all of the runs are selected as important. Evaluations on simulated GWAS data show that the new method controls the number of false positives under the null hypothesis. Under a simple alternative hypothesis with several independent main effects, it is only slightly less powerful than logistic regression. In an experimental GWAS dataset, the same strong signal is identified, while the approach selects none of the SNPs in an underpowered GWAS. The novel variable selection method r2VIM is a promising extension to standard RF for objectively selecting relevant SNPs in GWAS while controlling the number of false-positive results.
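
The r2VIM selection rule, run several forests, rescale importances by the minimal observed score, and keep SNPs that score highly in every run, can be roughly sketched with scikit-learn. Note this sketch uses Gini importance as a stand-in for the permutation importance the actual method relies on, and the dataset, threshold, and run count are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented toy GWAS: 200 individuals, 50 SNPs coded 0/1/2; SNP 0 carries signal
n, p = 200, 50
X = rng.integers(0, 3, size=(n, p)).astype(float)
y = (X[:, 0] + rng.normal(0, 0.5, n) > 1.5).astype(int)

def relative_importances(X, y, seed):
    rf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)
    imp = rf.feature_importances_
    # r2VIM idea: rescale by the absolute minimal observed importance,
    # which serves as an estimate of the importance noise level
    return imp / abs(imp[imp != 0].min())

# Several RF runs; keep only SNPs whose relative VIM is large in every run
runs = np.array([relative_importances(X, y, seed) for seed in range(5)])
selected = np.where((runs >= 3.0).all(axis=0))[0]
```

Requiring a large relative VIM in all runs, rather than in any single run, is what suppresses SNPs whose apparent importance is an artifact of one forest's randomness.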

  2. A simple method of genomic DNA extraction suitable for analysis of bulk fungal strains.

    Science.gov (United States)

    Zhang, Y J; Zhang, S; Liu, X Z; Wen, H A; Wang, M

    2010-07-01

    A simple and rapid method (designated thermolysis) for extracting genomic DNA from bulk fungal strains is described. In the thermolysis method, a few mycelia or yeast cells are first rinsed with pure water to remove potential PCR inhibitors and then incubated in a lysis buffer at 85°C to break down cell walls and membranes. This method was used to extract genomic DNA from large numbers of fungal strains (more than 92 species in 35 genera of three phyla) isolated from different sections of natural Ophiocordyceps sinensis specimens. Regions of interest from high- as well as single-copy-number genes were successfully amplified from the extracted DNA samples. The DNA samples obtained by this method can be stored at -20°C for over 1 year. The method is effective, easy and fast, and allows batch DNA extraction from multiple fungal isolates. Use of the thermolysis method will allow researchers to obtain DNA from fungi quickly for use in molecular assays. This method requires only minute quantities of starting material and is suitable for diverse fungal species.

  3. A Review of Study Designs and Statistical Methods for Genomic Epidemiology Studies using Next Generation Sequencing

    Directory of Open Access Journals (Sweden)

    Qian eWang

    2015-04-01

    Full Text Available Results from numerous linkage and association studies have greatly deepened scientists' understanding of the genetic basis of many human diseases, yet some important questions remain unanswered. For example, although a large number of disease-associated loci have been identified from genome-wide association studies (GWAS) in the past 10 years, it is challenging to interpret these results, as most disease-associated markers have no clear functional roles in disease etiology, and all the identified genomic factors together explain only a small portion of disease heritability. With the help of next-generation sequencing (NGS), diverse types of genomic and epigenetic variations can be detected with high accuracy. More importantly, instead of using linkage disequilibrium to detect association signals based on a set of pre-set probes, NGS allows researchers to directly study all the variants in each individual, thereby promising opportunities for identifying functional variants and a more comprehensive dissection of disease heritability. Although the current scale of NGS studies is still limited by high cost, the success of several recent studies suggests great potential for applying NGS in genomic epidemiology, especially as the cost of sequencing continues to drop. In this review, we discuss several pioneering applications of NGS, summarize scientific discoveries for rare and complex diseases, and compare various study designs, including targeted sequencing and whole-genome sequencing using population-based and family-based cohorts. Finally, we highlight recent advancements in statistical methods proposed for sequencing analysis, including group-based association tests, meta-analysis techniques, and annotation tools for variant prioritization.

  4. Evaluation of protein dihedral angle prediction methods.

    Directory of Open Access Journals (Sweden)

    Harinder Singh

    Full Text Available Tertiary structure prediction of a protein from its amino acid sequence is one of the major challenges in the field of bioinformatics. The hierarchical approach is one of the persuasive techniques used for predicting protein tertiary structure, especially in the absence of homologous protein structures. In the hierarchical approach, intermediate states are predicted, such as secondary structure, dihedral angles, and Cα-Cα distance bounds. These intermediate states are used to restrain the protein backbone and assist its correct folding. In recent years, several methods have been developed for predicting the dihedral angles of a protein, but it is difficult to conclude which method is better than the others. In this study, we benchmarked the performance of the dihedral prediction methods ANGLOR and SPINE X on various datasets, including independent datasets. The TANGLE dihedral prediction method was not benchmarked (due to the unavailability of a standalone version) and was compared with SPINE X and ANGLOR only on the ANGLOR dataset, on which TANGLE has reported its results. It was observed that SPINE X performed better than ANGLOR and TANGLE, especially in the prediction of the dihedral angles of glycine and proline residues. The analysis suggested that angle shifting was the foremost reason for the better performance of SPINE X. We further evaluated the performance of the methods on the independent ccPDB30 dataset and observed that SPINE X performed better than ANGLOR.

  5. Evaluation of check valve monitoring methods

    International Nuclear Information System (INIS)

    Haynes, H.D.

    1989-01-01

    Check valves are used extensively in nuclear plant safety systems and balance-of-plant (BOP) systems. Their failures have resulted in significant maintenance efforts and, on occasion, have resulted in water hammer, overpressurization of low-pressure systems and damage to flow system components. Consequently, in recent years check valves have received considerable attention by the Nuclear Regulatory Commission (NRC) and the nuclear power industry. Oak Ridge National Laboratory (ORNL) is carrying out a comprehensive two phase aging assessment of check valves in support of the Nuclear Plant Aging Research (NPAR) program. As part of the second phase, ORNL is evaluating several developmental and/or commercially available check valve diagnostic monitoring methods; in particular, those based on measurements of acoustic emission, ultrasonics, and magnetic flux. These three methods were found to provide different (and complementary) diagnostic information. The combination of acoustic emission with either ultrasonic or magnetic flux monitoring yields a monitoring system that succeeds in providing sensitivity to detect all major check valve operating conditions. All three methods are still under development and should improve in many respects as a result of further testing and evaluation. 10 refs., 19 figs., 1 tab

  6. Development and evaluation of a core genome multilocus typing scheme for whole-genome sequence-based typing of Acinetobacter baumannii.

    Directory of Open Access Journals (Sweden)

    Paul G Higgins

    Full Text Available We have employed whole genome sequencing to define and evaluate a core genome multilocus sequence typing (cgMLST) scheme for Acinetobacter baumannii. To define a core genome we downloaded a total of 1,573 putative A. baumannii genomes from NCBI as well as representative isolates belonging to the eight previously described international A. baumannii clonal lineages. The core genome was then employed against a total of fifty-three carbapenem-resistant A. baumannii isolates that were previously typed by PFGE and linked to hospital outbreaks in eight German cities. We defined a core genome of 2,390 genes, of which an average of 98.4% were called successfully from 1,339 A. baumannii genomes, while Acinetobacter nosocomialis, Acinetobacter pittii, and Acinetobacter calcoaceticus yielded 71.2%, 33.3%, and 23.2% good targets, respectively. When tested against the previously identified outbreak strains, we found good correlation between PFGE and cgMLST clustering, with 0-8 allelic differences within a pulsotype, and 40-2,166 differences between pulsotypes. The highest number of allelic differences was between the isolates representing the international clones. This typing scheme was highly discriminatory and identified separate A. baumannii outbreaks. Moreover, because a standardised cgMLST nomenclature is used, the system will allow inter-laboratory exchange of data.
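
The quantity behind the 0-8 within-pulsotype and 40-2,166 between-pulsotype figures is the pairwise allelic distance between cgMLST profiles. It can be sketched as a simple comparison that ignores loci missing in either profile; the profiles below are invented and far shorter than the 2,390-locus scheme:

```python
# Invented cgMLST profiles: one allele number per core gene locus;
# None marks a locus that could not be called in that genome.
profiles = {
    "outbreak_A1": [1, 4, 2, 7, 1, 3],
    "outbreak_A2": [1, 4, 2, 7, 1, 5],
    "outbreak_B1": [9, 2, 8, 1, 6, 3],
    "partial":     [1, None, 2, 7, 1, 3],
}

def allelic_distance(p, q):
    """Number of differing alleles, ignoring loci missing in either profile
    (a pairwise-ignore comparison, as is common for cgMLST data)."""
    return sum(1 for a, b in zip(p, q)
               if a is not None and b is not None and a != b)

d_within = allelic_distance(profiles["outbreak_A1"], profiles["outbreak_A2"])
d_between = allelic_distance(profiles["outbreak_A1"], profiles["outbreak_B1"])
```

Clustering isolates whose pairwise distance falls below a chosen threshold then recovers outbreak groups analogous to pulsotypes.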

  7. Development of methods for evaluating active faults

    International Nuclear Information System (INIS)

    2013-01-01

    The report for long-term evaluation of active faults was published by the Headquarters for Earthquake Research Promotion in Nov. 2010. After the occurrence of the 2011 Tohoku-oki earthquake, the safety review guide with regard to the geology and ground of sites was revised by the Nuclear Safety Commission in Mar. 2012, incorporating scientific knowledge of the earthquake. The Nuclear Regulation Authority, established in Sep. 2012, is now planning the New Safety Design Standard related to Earthquakes and Tsunamis of Light Water Nuclear Power Reactor Facilities. With respect to those guides and standards, our investigations for developing methods of evaluating active faults are as follows: (1) For better evaluation of the activity of offshore faults, we proposed a workflow to date marine terraces (indicators of offshore fault activity) over the last 400,000 years. We also developed fault-related fold analysis for evaluating blind faults. (2) To clarify the activity of faults lacking overlying strata, we carried out color analysis of fault gouge and classified fault activity into timescales of thousands of years and tens of thousands of years. (3) To reduce uncertainties in fault activity and earthquake frequency, we compiled the survey data and their possible errors. (4) To improve seismic hazard analysis, we compiled the activity of the Yunotake and Itozawa faults, induced by the 2011 Tohoku-oki earthquake. (author)

  8. New method for evaluating liquefaction potential

    Energy Technology Data Exchange (ETDEWEB)

    Arulmoli, K.; Arulanandan, K.; Seed, H.B.

    1985-01-01

    A new method of indexing the grain and aggregate properties of sand using electrical parameters is described. Correlations are established between these parameters and relative density, D_r, cyclic stress ratio, τ/σ′_0, and K_2max. An electrical probe, used to predict these parameters from in-situ electrical measurements, is described. Evaluations are made of D_r and τ/σ′_0, which are compared with values measured independently from controlled laboratory tests. Reasonable agreement is found between predicted and measured values. The potential applicability of the electrical probe in the field is shown by evaluation of liquefaction and nonliquefaction at sites affected by the 1906 San Francisco, Niigata and Tangshan earthquakes.

  9. Instrumentation and quantitative methods of evaluation

    International Nuclear Information System (INIS)

    Beck, R.N.; Cooper, M.D.

    1991-01-01

    This report summarizes goals and accomplishments of the research program entitled Instrumentation and Quantitative Methods of Evaluation, during the period January 15, 1989 through July 15, 1991. This program is very closely integrated with the radiopharmaceutical program entitled Quantitative Studies in Radiopharmaceutical Science. Together, they constitute the PROGRAM OF NUCLEAR MEDICINE AND QUANTITATIVE IMAGING RESEARCH within The Franklin McLean Memorial Research Institute (FMI). The program addresses problems involving the basic science and technology that underlie the physical and conceptual tools of radiotracer methodology as they relate to the measurement of structural and functional parameters of physiologic importance in health and disease. The principal tool is quantitative radionuclide imaging. The objective of this program is to further the development and transfer of radiotracer methodology from basic theory to routine clinical practice. The focus of the research is on the development of new instruments and radiopharmaceuticals, and the evaluation of these through the phase of clinical feasibility. 234 refs., 11 figs., 2 tabs

  10. Statistical techniques to construct assays for identifying likely responders to a treatment under evaluation from cell line genomic data

    International Nuclear Information System (INIS)

    Huang, Erich P; Fridlyand, Jane; Lewin-Koh, Nicholas; Yue, Peng; Shi, Xiaoyan; Dornan, David; Burington, Bart

    2010-01-01

    Developing the right drugs for the right patients has become a mantra of drug development. In practice, it is very difficult to identify subsets of patients who will respond to a drug under evaluation. Most of the time, no single diagnostic will be available, and more complex decision rules will be required to define a sensitive population, using, for instance, mRNA expression, protein expression or DNA copy number. Moreover, diagnostic development will often begin with in-vitro cell-line data and a high-dimensional exploratory platform, only later to be transferred to a diagnostic assay for use with patient samples. In this manuscript, we present a novel approach to developing robust genomic predictors that are not only capable of generalizing from in-vitro data to patients, but are also amenable to clinically validated assays such as qRT-PCR. Using our approach, we constructed a predictor of sensitivity to dacetuzumab, an investigational drug for CD40-expressing malignancies such as lymphoma, using genomic measurements of cell lines treated with dacetuzumab. Additionally, we evaluated several state-of-the-art prediction methods by independently pairing the feature selection and classification components of the predictor. In this way, we constructed several predictors that we validated on an independent DLBCL patient dataset. Similar analyses were performed on genomic measurements of breast cancer cell lines and patients to construct a predictor of estrogen receptor (ER) status. The best dacetuzumab sensitivity predictors involved ten or fewer genes and accurately classified lymphoma patients by their survival and known prognostic subtypes. The best ER status classifiers involved one or two genes and led to accurate ER status predictions more than 85% of the time. The novel method we proposed performed as well as or better than the other methods evaluated. We demonstrated the feasibility of combining feature selection techniques with classification methods to develop assays
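
The independent pairing of feature selection and classification components can be sketched with a scikit-learn pipeline. The data, the univariate F-test selector, and the logistic-regression classifier below are illustrative stand-ins, not the specific components evaluated in the study:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Invented stand-in for cell-line expression data: 120 lines x 500 genes,
# with the first 5 genes carrying the sensitivity signal
X = rng.normal(size=(120, 500))
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 120) > 0).astype(int)

# Pair a feature-selection step with a classifier; keeping the gene count
# small (k=10) mirrors the goal of transferring a signature to qRT-PCR
predictor = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),
    ("classify", LogisticRegression(max_iter=1000)),
])

# Cross-validation refits the selection step inside each training fold,
# avoiding selection bias in the accuracy estimate
scores = cross_val_score(predictor, X, y, cv=5)
```

Swapping either pipeline stage (a different selector or classifier) gives the kind of independently paired predictor comparison the abstract describes.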

  11. Evaluation of tracking methods for maritime surveillance

    Science.gov (United States)

    Fischer, Yvonne; Baum, Marcus; Flohr, Fabian; Hanebeck, Uwe D.; Beyerer, Jürgen

    2012-06-01

    In this article, we present an evaluation of several multi-target tracking methods based on simulated scenarios in the maritime domain. In particular, we consider variations of the Joint Integrated Probabilistic Data Association (JIPDA) algorithm, namely the Linear Multi-Target IPDA (LMIPDA), Linear Joint IPDA (LJIPDA), and Markov Chain Monte Carlo Data Association (MCMCDA). The algorithms are compared with respect to an extension of the Optimal Subpattern Assignment (OSPA) metric, the Hellinger distance, and further performance measures. As no single algorithm is equally well suited to all tested scenarios, our results show which algorithm fits best for specific scenarios.
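
For small point sets, the basic OSPA metric (not the extension used in the article) can be computed by brute force over assignments. A minimal sketch, with an invented cutoff and invented track/truth positions:

```python
import math
from itertools import permutations

def ospa(X, Y, c=10.0, p=2):
    """Optimal Subpattern Assignment distance between two point sets:
    cutoff-capped localization error plus a cardinality penalty c per
    unmatched point (brute-force assignment; fine for small sets)."""
    if len(X) > len(Y):
        X, Y = Y, X
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    def dist(a, b):
        return min(c, math.dist(a, b))
    best = min(sum(dist(x, Y[j]) ** p for x, j in zip(X, perm))
               for perm in permutations(range(n), m))
    return ((best + c ** p * (n - m)) / n) ** (1.0 / p)

# Invented example: estimated tracks vs. ground-truth targets (2-D positions)
truth = [(0.0, 0.0), (5.0, 5.0)]
tracks = [(0.5, 0.0), (5.0, 4.5), (20.0, 20.0)]   # one false track
err = ospa(truth, tracks)
```

The cutoff c controls how heavily cardinality errors (missed or false tracks) are penalized relative to localization errors.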

  12. Ultrahigh-dimensional variable selection method for whole-genome gene-gene interaction analysis

    Directory of Open Access Journals (Sweden)

    Ueki Masao

    2012-05-01

    Full Text Available Abstract Background Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify genetic components that confer susceptibility to complex human diseases. Individual hypothesis testing for SNP-SNP pairs, as in a common genome-wide association study (GWAS), however, makes it difficult to set an overall p-value due to the complicated correlation structure; this multiple testing problem causes unacceptable false-negative results. A number of SNP-SNP pairs far larger than the sample size, the so-called large p, small n problem, precludes simultaneous analysis using multiple regression. A method that overcomes the above issues is thus needed. Results We adopt an up-to-date method for ultrahigh-dimensional variable selection, termed sure independence screening (SIS), for appropriate handling of the enormous number of SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy-coding methods and a subsequent variable selection procedure in the SIS method, suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case Control Consortium) data. Conclusions Based on the machine-learning principle, the proposed method provides a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
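
In its simplest form, the SIS ranking step reduces to scoring each SNP-SNP product term by its marginal association with the phenotype and keeping the top-ranked pairs for follow-up modeling. The toy sketch below is not the EPISIS implementation: it uses a single multiplicative coding instead of the paper's dummy-coding strategies, plain correlation as the marginal score, and an invented dataset:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented case-control data: 300 individuals, 40 SNPs -> 780 SNP-SNP pairs.
# The interaction between SNP 0 and SNP 1 drives disease status.
n, p = 300, 40
G = rng.integers(0, 3, size=(n, p)).astype(float)
logit = 1.2 * G[:, 0] * G[:, 1] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Screening: rank every product predictor by absolute marginal correlation
pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]

def marginal_corr(i, j):
    z = G[:, i] * G[:, j]          # one multiplicative interaction coding
    return abs(np.corrcoef(z, y)[0, 1])

scores = np.array([marginal_corr(i, j) for i, j in pairs])
d = int(n / np.log(n))             # screening size in the spirit of SIS
top = [pairs[k] for k in np.argsort(scores)[::-1][:d]]
```

The surviving d pairs, now far fewer than the sample size, can then be analyzed jointly, for example in a penalized logistic regression.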

  13. A Scalable Bayesian Method for Integrating Functional Information in Genome-wide Association Studies.

    Science.gov (United States)

    Yang, Jingjing; Fritsche, Lars G; Zhou, Xiang; Abecasis, Gonçalo

    2017-09-07

    Genome-wide association studies (GWASs) have identified many loci associated with complex traits. However, most loci reside in noncoding regions and have unknown biological functions. Integrative analysis that incorporates known functional information into GWASs can help elucidate the underlying biological mechanisms and prioritize important functional variants. Hence, we develop a flexible Bayesian variable selection model with efficient computational techniques for such integrative analysis. Unlike previous approaches, our method models the effect-size distribution and probability of causality for variants with different annotations and jointly models genome-wide variants to account for linkage disequilibrium (LD), thus prioritizing associations based on the quantification of the annotations and allowing for multiple associated variants per locus. Our method dramatically improves both computational speed and posterior sampling convergence by taking advantage of the block-wise LD structures in human genomes. In simulations, our method accurately quantifies the functional enrichment and is more powerful for prioritizing true associations than alternative methods; the power gain is especially apparent when multiple associated variants in LD reside in the same locus. We applied our method to an in-depth GWAS of age-related macular degeneration with 33,976 individuals and 9,857,286 variants. We find the strongest enrichment for causality among non-synonymous variants (54× more likely to be causal, 1.4× larger effect sizes) and variants in transcription, repressed Polycomb, and enhancer regions, and identify five additional candidate loci beyond the 32 known AMD risk loci. In conclusion, our method efficiently integrates functional information into GWASs, helping to identify functionally associated variants and the underlying biology. Published by Elsevier Inc.

  14. Laboratory methods to evaluate therapeutic radiopharmaceuticals

    International Nuclear Information System (INIS)

    Arteaga de Murphy, C.; Rodriguez-Cortes, J.; Pedraza-Lopez, M.; Ramirez-Iglesias, MT.; Ferro-Flores, G.

    2007-01-01

The overall aim of this coordinated research project was to develop in vivo and in vitro laboratory methods to evaluate therapeutic radiopharmaceuticals. Towards this end, the laboratory methods used in this study are described in detail. Two peptides - an 8 amino acid minigastrin analogue and octreotate - were labelled with 177Lu. Bombesin was labelled with 99mTc, and its diagnostic utility was proven. For comparison, 99mTc-TOC was used. The cell lines used in this study were AR42J cells, which overexpress the somatostatin receptors found in neuroendocrine cancers, and PC3 cells, which overexpress the gastrin-releasing peptide receptors (GRP-r) found in human prostate and breast cancers. The animal model chosen was athymic mice with implanted dorsal tumours of pathologically confirmed cell cancers. The methodology described for labelling, quality control, and in vitro and in vivo assays can easily be used with other radionuclides and other peptides of interest. (author)

  15. Improved analytical methods for microarray-based genome-composition analysis.

    Science.gov (United States)

    Kim, Charles C; Joyce, Elizabeth A; Chan, Kaman; Falkow, Stanley

    2002-10-29

    Whereas genome sequencing has given us high-resolution pictures of many different species of bacteria, microarrays provide a means of obtaining information on genome composition for many strains of a given species. Genome-composition analysis using microarrays, or 'genomotyping', can be used to categorize genes into 'present' and 'divergent' categories based on the level of hybridization signal. This typically involves selecting a signal value that is used as a cutoff to discriminate present (high signal) and divergent (low signal) genes. Current methodology uses empirical determination of cutoffs for classification into these categories, but this methodology is subject to several problems that can result in the misclassification of many genes. We describe a method that depends on the shape of the signal-ratio distribution and does not require empirical determination of a cutoff. Moreover, the cutoff is determined on an array-to-array basis, accounting for variation in strain composition and hybridization quality. The algorithm also provides an estimate of the probability that any given gene is present, which provides a measure of confidence in the categorical assignments. Many genes previously classified as present using static methods are in fact divergent on the basis of microarray signal; this is corrected by our algorithm. We have reassigned hundreds of genes from previous genomotyping studies of Helicobacter pylori and Campylobacter jejuni strains, and expect that the algorithm should be widely applicable to genomotyping data.
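
The distribution-shaped cutoff the abstract describes can be illustrated with a small sketch: fit a two-component Gaussian mixture to each array's signal log-ratios by EM, then read off a per-gene probability of being 'present'. This is a minimal stand-in that assumes a Gaussian mixture adequately models the ratio distribution; the published algorithm's exact estimation procedure may differ.

```python
import math

def fit_two_gaussians(log_ratios, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture to signal log-ratios by
    EM; the low-mean component models 'divergent' genes and the
    high-mean component 'present' genes."""
    lo, hi = min(log_ratios), max(log_ratios)
    mu = [lo + 0.25 * (hi - lo), lo + 0.75 * (hi - lo)]
    sd = [max((hi - lo) / 4.0, 1e-6)] * 2
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior probability that each gene is 'present'
        resp = []
        for x in log_ratios:
            dens = [w[k] / sd[k] * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
                    for k in (0, 1)]
            resp.append(dens[1] / (dens[0] + dens[1]))
        # M-step: re-estimate weights, means and standard deviations
        for k in (0, 1):
            rk = [r if k == 1 else 1.0 - r for r in resp]
            total = sum(rk)
            w[k] = total / len(log_ratios)
            mu[k] = sum(r * x for r, x in zip(rk, log_ratios)) / total
            var = sum(r * (x - mu[k]) ** 2
                      for r, x in zip(rk, log_ratios)) / total
            sd[k] = max(math.sqrt(var), 1e-6)
    return w, mu, sd

def present_probability(x, w, mu, sd):
    """Probability that a gene with log-ratio x is 'present' under the
    fitted mixture -- a per-gene confidence in the categorical call."""
    dens = [w[k] / sd[k] * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
            for k in (0, 1)]
    return dens[1] / (dens[0] + dens[1])
```

Because the mixture is refit per array, the effective cutoff adapts to each hybridization rather than relying on a single empirical threshold.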

  16. Genomic signal processing methods for computation of alignment-free distances from DNA sequences.

    Science.gov (United States)

    Borrayo, Ernesto; Mendizabal-Ruiz, E Gerardo; Vélez-Pérez, Hugo; Romo-Vázquez, Rebeca; Mendizabal, Adriana P; Morales, J Alejandro

    2014-01-01

    Genomic signal processing (GSP) refers to the use of digital signal processing (DSP) tools for analyzing genomic data such as DNA sequences. A possible application of GSP that has not been fully explored is the computation of the distance between a pair of sequences. In this work we present GAFD, a novel GSP alignment-free distance computation method. We introduce a DNA sequence-to-signal mapping function based on the employment of doublet values, which increases the number of possible amplitude values for the generated signal. Additionally, we explore the use of three DSP distance metrics as descriptors for categorizing DNA signal fragments. Our results indicate the feasibility of employing GAFD for computing sequence distances and the use of descriptors for characterizing DNA fragments.
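
A minimal sketch of the sequence-to-signal idea: each overlapping doublet (dinucleotide) is mapped to a numeric amplitude, and the resulting signals are compared with a DSP-style metric. The 0-15 doublet encoding and the choice of Euclidean distance between DFT magnitude spectra are illustrative assumptions, not the paper's exact GAFD definitions; equal-length sequences are assumed.

```python
import cmath

# Illustrative doublet-to-amplitude mapping: AA=0, AC=1, ..., TT=15
DOUBLETS = {a + b: i for i, (a, b) in enumerate(
    (x, y) for x in "ACGT" for y in "ACGT")}

def to_signal(seq):
    """Map a DNA string to a numeric signal via overlapping doublets,
    giving 16 possible amplitude values instead of 4."""
    return [DOUBLETS[seq[i:i + 2]] for i in range(len(seq) - 1)]

def spectrum(signal):
    """Magnitude spectrum via a direct DFT (fine for short signals)."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(signal))) / n
            for k in range(n)]

def gafd_distance(seq_a, seq_b):
    """Alignment-free distance: Euclidean distance between the
    magnitude spectra of two equal-length sequences."""
    sa, sb = spectrum(to_signal(seq_a)), spectrum(to_signal(seq_b))
    return sum((x - y) ** 2 for x, y in zip(sa, sb)) ** 0.5
```

Identical sequences map to identical signals and therefore distance zero; diverged sequences produce different spectra and a positive distance.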

  17. Comparison of advanced whole genome sequence-based methods to distinguish strains of Salmonella enterica serovar Heidelberg involved in foodborne outbreaks in Québec.

    Science.gov (United States)

    Vincent, Caroline; Usongo, Valentine; Berry, Chrystal; Tremblay, Denise M; Moineau, Sylvain; Yousfi, Khadidja; Doualla-Bell, Florence; Fournier, Eric; Nadon, Céline; Goodridge, Lawrence; Bekal, Sadjia

    2018-08-01

    Salmonella enterica serovar Heidelberg (S. Heidelberg) is one of the top serovars causing human salmonellosis. This serovar ranks second and third among serovars that cause human infections in Québec and Canada, respectively, and has been associated with severe infections. Traditional typing methods such as PFGE do not display adequate discrimination required to resolve outbreak investigations due to the low level of genetic diversity of isolates belonging to this serovar. This study evaluates the ability of four whole genome sequence (WGS)-based typing methods to differentiate among 145 S. Heidelberg strains involved in four distinct outbreak events and sporadic cases of salmonellosis that occurred in Québec between 2007 and 2016. Isolates from all outbreaks were indistinguishable by PFGE. The core genome single nucleotide variant (SNV), core genome multilocus sequence typing (MLST) and whole genome MLST approaches were highly discriminatory and separated outbreak strains into four distinct phylogenetic clusters that were concordant with the epidemiological data. The clustered regularly interspaced short palindromic repeats (CRISPR) typing method was less discriminatory. However, CRISPR typing may be used as a secondary method to differentiate isolates of S. Heidelberg that are genetically similar but epidemiologically unrelated to outbreak events. WGS-based typing methods provide a highly discriminatory alternative to PFGE for the laboratory investigation of foodborne outbreaks. Copyright © 2018 Elsevier Ltd. All rights reserved.
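
The core genome SNV approach can be caricatured in a few lines: count the core-genome positions at which two isolates differ, then group isolates whose pairwise distance falls under a threshold. The union-find single-linkage clustering and the threshold below are illustrative stand-ins for the phylogenetic clustering actually used in the study.

```python
def snv_distance(profile_a, profile_b):
    """Number of core-genome positions at which two isolates differ."""
    return sum(a != b for a, b in zip(profile_a, profile_b))

def single_linkage_clusters(profiles, threshold):
    """Group isolates whose SNV distance is <= threshold, using a
    union-find structure (an illustrative stand-in for phylogenetic
    cluster assignment)."""
    names = list(profiles)
    parent = {n: n for n in names}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path compression
            n = parent[n]
        return n

    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if snv_distance(profiles[a], profiles[b]) <= threshold:
                parent[find(a)] = find(b)
    clusters = {}
    for n in names:
        clusters.setdefault(find(n), set()).add(n)
    return sorted(clusters.values(), key=min)
```

With outbreak isolates a few SNVs apart and sporadic isolates many SNVs away, such a scheme separates clusters that PFGE cannot resolve.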

  18. Methods and Metrics for Evaluating Environmental Dredging ...

    Science.gov (United States)

This report documents the objectives, approach, methodologies, results, and interpretation of a collaborative research study conducted by the National Risk Management Research Laboratory (NRMRL) and the National Exposure Research Laboratory (NERL) of the U.S. Environmental Protection Agency’s (U.S. EPA’s) Office of Research and Development (ORD) and the U.S. EPA’s Great Lakes National Program Office (GLNPO). The objectives of the research study were to: 1) evaluate remedy effectiveness of environmental dredging as applied to contaminated sediments in the Ashtabula River in northeastern Ohio, and 2) monitor the recovery of the surrounding ecosystem. The project was carried out over 6 years from 2006 through 2011 and consisted of the development and evaluation of methods and approaches to assess river and ecosystem conditions prior to dredging (2006), during dredging (2006 and 2007), and following dredging, both short term (2008) and long term (2009-2011). This project report summarizes and interprets the results of this 6-year study to develop and assess methods for monitoring pollutant fate and transport and ecosystem recovery through the use of biological, chemical, and physical lines of evidence (LOEs) such as: 1) comprehensive sampling of and chemical analysis of contaminants in surface, suspended, and historic sediments; 2) extensive grab and multi-level real-time water sampling and analysis of contaminants in the water column; 3) sampling, chemi

  19. A Method to Constrain Genome-Scale Models with 13C Labeling Data.

    Directory of Open Access Journals (Sweden)

    Héctor García Martín

    2015-09-01

Full Text Available Current limitations in quantitatively predicting biological behavior hinder our efforts to engineer biological systems to produce biofuels and other desired chemicals. Here, we present a new method for calculating metabolic fluxes, key targets in metabolic engineering, that incorporates data from 13C labeling experiments and genome-scale models. The data from 13C labeling experiments provide strong flux constraints that eliminate the need to assume an evolutionary optimization principle such as the growth rate optimization assumption used in Flux Balance Analysis (FBA). This effective constraining is achieved by making the simple but biologically relevant assumption that flux flows from core to peripheral metabolism and does not flow back. The new method is significantly more robust than FBA with respect to errors in genome-scale model reconstruction. Furthermore, it can provide a comprehensive picture of metabolite balancing and predictions for unmeasured extracellular fluxes as constrained by 13C labeling data. A comparison shows that the results of this new method are similar to those found through 13C Metabolic Flux Analysis (13C MFA) for central carbon metabolism but, additionally, it provides flux estimates for peripheral metabolism. The extra validation gained by matching 48 relative labeling measurements is used to identify where and why several existing COnstraint Based Reconstruction and Analysis (COBRA) flux prediction algorithms fail. We demonstrate how to use this knowledge to refine these methods and improve their predictive capabilities. This method provides a reliable base upon which to improve the design of biological systems.
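
The central constraint — that fluxes fixed by 13C labeling plus steady-state metabolite balancing pin down the remaining fluxes — can be sketched on a toy network. The propagation below handles only the simple case where each balance leaves a single unknown; the published method solves the general constrained problem on a genome-scale model.

```python
def balance_fluxes(stoich, measured):
    """Propagate steady-state metabolite balances (S·v = 0) to solve
    for unmeasured fluxes, given fluxes fixed by 13C labeling data.

    stoich: {metabolite: {flux_name: stoichiometric coefficient}}
    measured: {flux_name: value}
    Works when each balance eventually leaves one unknown -- a toy
    stand-in for the paper's genome-scale constrained solution.
    """
    fluxes = dict(measured)
    changed = True
    while changed:
        changed = False
        for met, row in stoich.items():
            unknown = [f for f in row if f not in fluxes]
            if len(unknown) == 1:
                f = unknown[0]
                known_sum = sum(c * fluxes[g]
                                for g, c in row.items() if g != f)
                fluxes[f] = -known_sum / row[f]  # enforce S·v = 0 at met
                changed = True
    return fluxes
```

For a pathway A→B (v1), B→C (v2), B→D (v3), C→E (v4) with v1 and v3 measured, the balances at B and C determine v2 and v4 without any optimality assumption.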

  20. Bioethics methods in the ethical, legal, and social implications of the human genome project literature.

    Science.gov (United States)

    Walker, Rebecca L; Morrissey, Clair

    2014-11-01

    While bioethics as a field has concerned itself with methodological issues since the early years, there has been no systematic examination of how ethics is incorporated into research on the Ethical, Legal and Social Implications (ELSI) of the Human Genome Project. Yet ELSI research may bear a particular burden of investigating and substantiating its methods given public funding, an explicitly cross-disciplinary approach, and the perceived significance of adequate responsiveness to advances in genomics. We undertook a qualitative content analysis of a sample of ELSI publications appearing between 2003 and 2008 with the aim of better understanding the methods, aims, and approaches to ethics that ELSI researchers employ. We found that the aims of ethics within ELSI are largely prescriptive and address multiple groups. We also found that the bioethics methods used in the ELSI literature are both diverse between publications and multiple within publications, but are usually not themselves discussed or employed as suggested by bioethics method proponents. Ethics in ELSI is also sometimes undistinguished from related inquiries (such as social, legal, or political investigations). © 2013 John Wiley & Sons Ltd.

  2. Genomic DNA extraction methods using formalin-fixed paraffin-embedded tissue.

    Science.gov (United States)

    Potluri, Keerti; Mahas, Ahmed; Kent, Michael N; Naik, Sameep; Markey, Michael

    2015-10-01

    As new technologies come within reach for the average cytogenetic laboratory, the study of chromosome structure has become increasingly more sophisticated. Resolution has improved from karyotyping (in which whole chromosomes are discernible) to fluorescence in situ hybridization and comparative genomic hybridization (CGH, with which specific megabase regions are visualized), array-based CGH (aCGH, examining hundreds of base pairs), and next-generation sequencing (providing single base pair resolution). Whole genome next-generation sequencing remains a cost-prohibitive method for many investigators. Meanwhile, the cost of aCGH has been reduced during recent years, even as resolution has increased and protocols have simplified. However, aCGH presents its own set of unique challenges. DNA of sufficient quantity and quality to hybridize to arrays and provide meaningful results is required. This is especially difficult for DNA from formalin-fixed paraffin-embedded (FFPE) tissues. Here, we compare three different methods for acquiring DNA of sufficient length, purity, and "amplifiability" for aCGH and other downstream applications. Phenol-chloroform extraction and column-based commercial kits were compared with adaptive focused acoustics (AFA). Of the three extraction methods, AFA samples showed increased amplicon length and decreased polymerase chain reaction (PCR) failure rate. These findings support AFA as an improvement over previous DNA extraction methods for FFPE tissues. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Functional regression method for whole genome eQTL epistasis analysis with sequencing data.

    Science.gov (United States)

    Xu, Kelin; Jin, Li; Xiong, Momiao

    2017-05-18

Epistasis plays an essential role in understanding the regulation mechanisms and is an essential component of the genetic architecture of gene expression. However, interaction analysis of gene expression remains fundamentally unexplored due to great computational challenges and limited data availability. Due to variation in splicing, transcription start sites, polyadenylation sites, post-transcriptional RNA editing across the entire gene, and transcription rates of the cells, RNA-seq measurements generate large expression variability and collectively create the observed position-level read count curves. A single number for measuring gene expression, which is widely used for microarray-measured gene expression analysis, is highly unlikely to sufficiently account for large expression variation across the gene. Simultaneously analyzing epistatic architecture using the RNA-seq and whole genome sequencing (WGS) data poses enormous challenges. We develop a nonlinear functional regression model (FRGM) with functional responses, where the position-level read counts within a gene are taken as a function of genomic position, and functional predictors, where genotype profiles are viewed as a function of genomic position, for epistasis analysis with RNA-seq data. Instead of testing the interaction of all possible pairs of SNPs, the FRGM takes a gene as the basic unit for epistasis analysis, testing for the interaction of all possible pairs of genes and using all accessible information to collectively test interaction between all possible pairs of SNPs within two genome regions. By large-scale simulations, we demonstrate that the proposed FRGM for epistasis analysis can achieve the correct type 1 error and has higher power to detect the interactions between genes than the existing methods. The proposed methods are applied to the RNA-seq and WGS data from the 1000 Genomes Project. The numbers of pairs of significantly interacting genes after Bonferroni correction

  4. A Bayesian method and its variational approximation for prediction of genomic breeding values in multiple traits

    Directory of Open Access Journals (Sweden)

    Hayashi Takeshi

    2013-01-01

Full Text Available Background: Genomic selection is an effective tool for animal and plant breeding, allowing effective individual selection without phenotypic records through the prediction of genomic breeding value (GBV). To date, genomic selection has focused on a single trait. However, actual breeding often targets multiple correlated traits, and, therefore, joint analysis taking into consideration the correlation between traits, which might result in more accurate GBV prediction than analyzing each trait separately, is suitable for multi-trait genomic selection. This would require an extension of the prediction model for single-trait GBV to the multi-trait case. As the computational burden of multi-trait analysis is even higher than that of single-trait analysis, an effective computational method for constructing a multi-trait prediction model is also needed. Results: We described a Bayesian regression model incorporating variable selection for jointly predicting GBVs of multiple traits and devised both an MCMC iteration and a variational approximation for Bayesian estimation of parameters in this multi-trait model. The proposed Bayesian procedures with MCMC iteration and variational approximation were referred to as MCBayes and varBayes, respectively. Using simulated datasets of SNP genotypes and phenotypes for three traits with high and low heritabilities, we compared the accuracy in predicting GBVs between multi-trait and single-trait analyses as well as between MCBayes and varBayes. The results showed that, compared to single-trait analysis, multi-trait analysis enabled much more accurate GBV prediction for low-heritability traits correlated with high-heritability traits, by utilizing the correlation structure between traits, while the prediction accuracy for uncorrelated low-heritability traits was comparable to or lower with multi-trait analysis than with single-trait analysis, depending on the setting for prior probability that a SNP has zero
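
As a point of reference for the GBV machinery the abstract builds on, here is a single-trait SNP-BLUP (ridge regression) sketch: estimate marker effects from genotypes and phenotypes, then score an individual's GBV as the sum of its marker effects. The MCBayes/varBayes models extend this backbone with variable selection and a multi-trait covariance structure, neither of which is shown here.

```python
def snp_blup(genotypes, phenotypes, lam):
    """Single-trait SNP-BLUP: solve (X'X + lam*I) b = X'y by Gaussian
    elimination, where X holds 0/1/2 marker genotypes and lam is the
    ridge (shrinkage) parameter."""
    n, p = len(genotypes), len(genotypes[0])
    # Build the normal equations
    xtx = [[sum(genotypes[i][a] * genotypes[i][b] for i in range(n))
            + (lam if a == b else 0.0) for b in range(p)] for a in range(p)]
    xty = [sum(genotypes[i][a] * phenotypes[i] for i in range(n))
           for a in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, p):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, p):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back-substitution
    b = [0.0] * p
    for r in range(p - 1, -1, -1):
        b[r] = (xty[r] - sum(xtx[r][c] * b[c]
                             for c in range(r + 1, p))) / xtx[r][r]
    return b

def gbv(genotype, effects):
    """Genomic breeding value: sum of estimated marker effects."""
    return sum(g * e for g, e in zip(genotype, effects))
```

With true effects of +2 and -1 and a small shrinkage parameter, the estimates recover the simulated effects and a new individual's GBV follows directly.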

  5. Evaluation of genetic variation among Brazilian soybean cultivars through genome resequencing.

    Science.gov (United States)

    Maldonado dos Santos, João Vitor; Valliyodan, Babu; Joshi, Trupti; Khan, Saad M; Liu, Yang; Wang, Juexin; Vuong, Tri D; de Oliveira, Marcelo Fernandes; Marcelino-Guimarães, Francismar Corrêa; Xu, Dong; Nguyen, Henry T; Abdelnoor, Ricardo Vilela

    2016-02-13

    Soybean [Glycine max (L.) Merrill] is one of the most important legumes cultivated worldwide, and Brazil is one of the main producers of this crop. Since the sequencing of its reference genome, interest in structural and allelic variations of cultivated and wild soybean germplasm has grown. To investigate the genetics of the Brazilian soybean germplasm, we selected soybean cultivars based on the year of commercialization, geographical region and maturity group and resequenced their genomes. We resequenced the genomes of 28 Brazilian soybean cultivars with an average genome coverage of 14.8X. A total of 5,835,185 single nucleotide polymorphisms (SNPs) and 1,329,844 InDels were identified across the 20 soybean chromosomes, with 541,762 SNPs, 98,922 InDels and 1,093 CNVs that were exclusive to the 28 Brazilian cultivars. In addition, 668 allelic variations of 327 genes were shared among all of the Brazilian cultivars, including genes related to DNA-dependent transcription-elongation, photosynthesis, ATP synthesis-coupled electron transport, cellular respiration, and precursors of metabolite generation and energy. A very homogeneous structure was also observed for the Brazilian soybean germplasm, and we observed 41 regions putatively influenced by positive selection. Finally, we detected 3,880 regions with copy-number variations (CNVs) that could help to explain the divergence among the accessions evaluated. The large number of allelic and structural variations identified in this study can be used in marker-assisted selection programs to detect unique SNPs for cultivar fingerprinting. The results presented here suggest that despite the diversification of modern Brazilian cultivars, the soybean germplasm remains very narrow because of the large number of genome regions that exhibit low diversity. These results emphasize the need to introduce new alleles to increase the genetic diversity of the Brazilian germplasm.

  6. Seismic evaluation methods for existing buildings

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, B.J.

    1995-07-01

Recent US Department of Energy natural phenomena hazards mitigation directives require the earthquake reassessment of existing hazardous facilities and general use structures. This applies also to structures located, in accordance with the Uniform Building Code, in Seismic Zone 0, where usually no consideration is given to seismic design but where DOE specifies seismic hazard levels. An economical approach for performing such a seismic evaluation, which relies heavily on the use of preexisting structural analysis results, is outlined below. Specifically, three different methods are used to estimate the seismic capacity of a building, which is a unit of a building complex located on a site considered low risk for earthquakes. For structures not originally designed for seismic loads, which may not have, or be able to demonstrate, sufficient capacity to meet new arbitrarily high seismic design requirements and which are located on low-seismicity sites, it may be very cost effective to perform detailed site-specific seismic hazard studies in order to establish the true seismic threat. This is particularly beneficial to sites with many buildings and facilities to be seismically evaluated.

  7. A human-machine interface evaluation method: A difficulty evaluation method in information searching (DEMIS)

    International Nuclear Information System (INIS)

    Ha, Jun Su; Seong, Poong Hyun

    2009-01-01

A human-machine interface (HMI) evaluation method, named the 'difficulty evaluation method in information searching' (DEMIS), is proposed and demonstrated with an experimental study. The DEMIS is based on a human performance model and two measures of attentional-resource effectiveness in monitoring and detection tasks in nuclear power plants (NPPs). Operator competence and HMI design are modeled as the most significant factors affecting human performance. One of the two effectiveness measures is the fixation-to-importance ratio (FIR), which represents the attentional resources (eye fixations) spent on an information source relative to the importance of that information source. The other measure is selective attention effectiveness (SAE), which incorporates the FIRs for all information sources. The underlying principle of both measures is that an information source should be selectively attended to according to its informational importance. In this study, poor performance in information searching tasks is modeled as being coupled with difficulties caused by poor operator mental models and/or poor HMI design. Human performance in information searching tasks is evaluated by analyzing the FIR and the SAE. Operator mental models are evaluated by a questionnaire-based method. Difficulties caused by poor HMI design are then evaluated through a focused interview based on the FIR evaluation, and root causes leading to poor performance are identified in a systematic way.
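
The FIR is straightforward to compute from eye-tracking data: the share of fixations on a source divided by that source's share of importance. The SAE aggregation below (importance-weighted deviation of the FIRs from 1) is a plausible illustration only; the paper's exact SAE formula may differ.

```python
def fixation_to_importance_ratios(fixations, importance):
    """FIR per information source: the share of eye fixations spent on
    a source divided by that source's share of informational
    importance. FIR near 1 means attention matched importance."""
    fix_total = sum(fixations.values())
    imp_total = sum(importance.values())
    return {s: (fixations[s] / fix_total) / (importance[s] / imp_total)
            for s in fixations}

def selective_attention_effectiveness(fixations, importance):
    """A plausible aggregation of per-source FIRs into one score
    (assumed here, not the paper's exact formula): 1.0 when attention
    is allocated exactly in proportion to importance, lower otherwise."""
    firs = fixation_to_importance_ratios(fixations, importance)
    imp_total = sum(importance.values())
    penalty = sum((importance[s] / imp_total) * abs(firs[s] - 1.0)
                  for s in firs)
    return 1.0 - 0.5 * penalty
```

When fixation counts are exactly proportional to importance weights, every FIR is 1 and the score is 1; over-attending a low-importance source drives it down.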

  8. Evaluation of determinative methods for sodium impurities

    International Nuclear Information System (INIS)

    Molinari, Marcelo; Guido, Osvaldo; Botbol, Jose; Ares, Osvaldo

    1988-01-01

Sodium, universally accepted as a heat transfer fluid in fast breeder reactors, requires a special technology for every operation involved in any applicable methodology, due to its well-known chemical reactivity. The purpose of this work is: a) to study the sources and effects of chemical species which, as traces, accompany sodium used in the nuclear field; b) to classify, taking into account the present requirements and resources of the National Atomic Energy Commission (CNEA), the procedures found in the literature for determining the most important impurities in experimental liquid sodium systems; and c) to describe the principles of the methods and to evaluate them in order to make a selection. It was concluded that it is convenient to develop, as a first stage, laboratory procedures to determine carbon, oxygen, hydrogen and non-volatile impurities; besides serving present needs, these will be a reference for direct methods with immediate response. The latter are needed in liquid sodium experimental loops and require, primarily, more complex and extended development. Additionally, a description is given of the experimental work performed to date in this laboratory, consisting of a transfer device for sodium sampling and a sodium distillation device, adapted from a previous design, with associated vacuum and inert gas systems. It is intended as a separative technique for the indirect determination of oxygen and non-volatile impurities. (Author) [es

  9. Evaluation of methods to assess physical activity

    Science.gov (United States)

    Leenders, Nicole Y. J. M.

Epidemiological evidence has accumulated that demonstrates that the amount of physical activity-related energy expenditure during a week reduces the incidence of cardiovascular disease, diabetes, obesity, and all-cause mortality. To further understand the amount of daily physical activity and related energy expenditure that are necessary to maintain or improve functional health status and quality of life, instruments that estimate total energy expenditure (TDEE) and physical activity-related energy expenditure (PAEE) under free-living conditions must be shown to be valid and reliable. Without evaluating the various methods that estimate TDEE and PAEE against the doubly labeled water (DLW) method in females, there will be significant limitations on assessing the efficacy of physical activity interventions on health status in this population. A triaxial accelerometer (Tritrac-R3D (TT)), a uniaxial activity monitor (Computer Science and Applications Inc. (CSA)), a Yamax Digiwalker-500 (YX-stepcounter), heart rate responses (HR method) and a 7-d Physical Activity Recall questionnaire (7-d PAR) were compared with the "criterion method" of DLW during a 7-d period in female adults. The DLW-TDEE was underestimated on average by 9, 11 and 15% using the 7-d PAR, HR method and TT, respectively. The underestimation of DLW-PAEE by the 7-d PAR was 21%, compared to 47% and 67% for the TT and YX-stepcounter. Approximately 56% of the variance in DLW-PAEE·kg⁻¹ is explained by the registration of body movement with accelerometry. A larger proportion of the variance in DLW-PAEE·kg⁻¹ was explained by jointly incorporating information from the vertical and horizontal movement measured with the CSA and Tritrac-R3D (r² = 0.87). Although only a small amount of variance in DLW-PAEE·kg⁻¹ is explained by the number of steps taken per day, because of its low cost and ease of use, the Yamax-stepcounter is useful in studies promoting daily walking. Thus, studies involving the

  10. Multilevel Summation Method for Electrostatic Force Evaluation

    Science.gov (United States)

    2015-01-01

    The multilevel summation method (MSM) offers an efficient algorithm utilizing convolution for evaluating long-range forces arising in molecular dynamics simulations. Shifting the balance of computation and communication, MSM provides key advantages over the ubiquitous particle–mesh Ewald (PME) method, offering better scaling on parallel computers and permitting more modeling flexibility, with support for periodic systems as does PME but also for semiperiodic and nonperiodic systems. The version of MSM available in the simulation program NAMD is described, and its performance and accuracy are compared with the PME method. The accuracy feasible for MSM in practical applications reproduces PME results for water property calculations of density, diffusion constant, dielectric constant, surface tension, radial distribution function, and distance-dependent Kirkwood factor, even though the numerical accuracy of PME is higher than that of MSM. Excellent agreement between MSM and PME is found also for interface potentials of air–water and membrane–water interfaces, where long-range Coulombic interactions are crucial. Applications demonstrate also the suitability of MSM for systems with semiperiodic and nonperiodic boundaries. For this purpose, simulations have been performed with periodic boundaries along directions parallel to a membrane surface but not along the surface normal, yielding membrane pore formation induced by an imbalance of charge across the membrane. Using a similar semiperiodic boundary condition, ion conduction through a graphene nanopore driven by an ion gradient has been simulated. Furthermore, proteins have been simulated inside a single spherical water droplet. Finally, parallel scalability results show the ability of MSM to outperform PME when scaling a system of modest size (less than 100 K atoms) to over a thousand processors, demonstrating the suitability of MSM for large-scale parallel simulation. PMID:25691833

  11. Methods for the comparative evaluation of pharmaceuticals

    Directory of Open Access Journals (Sweden)

    Busse, Reinhard

    2005-11-01

Full Text Available Political background: As a German novelty, the Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, IQWiG) was established in 2004 to, among other tasks, evaluate the benefit of pharmaceuticals. In this context it is of importance that patented pharmaceuticals are only excluded from the reference pricing system if they offer a therapeutic improvement. The institute is commissioned by the Federal Joint Committee (Gemeinsamer Bundesausschuss, G-BA) or by the Ministry of Health and Social Security. The German policy objective, expressed by the latest health care reform (Gesetz zur Modernisierung der Gesetzlichen Krankenversicherung, GMG), is to base decisions on a scientific assessment of pharmaceuticals in comparison to already available treatments. However, procedures and methods are still to be established. Research questions and methods: This health technology assessment (HTA) report was commissioned by the German Agency for HTA at the Institute for Medical Documentation and Information (DAHTA@DIMDI). It analysed criteria, procedures, and methods of comparative drug assessment in other EU/OECD countries. The research question was the following: How do national public institutions compare medicines in connection with pharmaceutical regulation, i.e. licensing, reimbursement and pricing of drugs? Institutions as well as documents concerning comparative drug evaluation (e.g. regulations, guidelines) were identified through internet, systematic literature, and hand searches. Publications were selected according to pre-defined inclusion and exclusion criteria. Documents were analysed in a qualitative manner following an analytic framework that had been developed in advance. Results were summarised narratively and presented in evidence tables. Results and discussion: Currently licensing agencies do not systematically assess a new drug's added value for patients and society. This is why many

  12. Utilization of defined microbial communities enables effective evaluation of meta-genomic assemblies.

    Science.gov (United States)

    Greenwald, William W; Klitgord, Niels; Seguritan, Victor; Yooseph, Shibu; Venter, J Craig; Garner, Chad; Nelson, Karen E; Li, Weizhong

    2017-04-13

    Metagenomics is the study of the microbial genomes isolated from communities found on our bodies or in our environment. By correctly determining the relation between human health and the human-associated microbial communities, novel mechanisms of health and disease can be found, thus enabling the development of novel diagnostics and therapeutics. Owing to the diversity of the microbial communities, strategies developed for aligning human genomes cannot be utilized, and genomes of the microbial species in the community must be assembled de novo. However, in order to obtain the best metagenomic assemblies, it is important to choose the proper assembler. Because of the rapidly evolving nature of metagenomics, new assemblers are constantly created, and the field has not yet agreed on a standardized process. Furthermore, the truth sets used to compare these methods are either too simple (computationally derived diverse communities) or too complex (microbial communities of unknown composition), yielding results that are hard to interpret. In this analysis, we interrogate the strengths and weaknesses of five popular assemblers through the use of defined biological samples of known genomic composition and abundance. We assessed the performance of each assembler on its ability to reassemble genomes, call taxonomic abundances, and recreate open reading frames (ORFs). We tested five metagenomic assemblers: Omega, metaSPAdes, IDBA-UD, metaVelvet and MEGAHIT on known and synthetic metagenomic data sets. MetaSPAdes excelled in diverse sets, IDBA-UD performed well all around, metaVelvet had high accuracy in high-abundance organisms, and MEGAHIT was able to accurately differentiate similar organisms within a community. At the ORF level, metaSPAdes and MEGAHIT had the fewest missing ORFs within diverse and similar communities, respectively. The correct assembler for the task at hand will therefore differ depending on the metagenomics question asked.
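
Assembler comparisons like the one above typically rest on simple contiguity statistics. As a hedged illustration (not code from the study), the standard N50 metric can be computed as:

```python
def n50(contig_lengths):
    """Return N50: the length of the shortest contig in the minimal set
    of longest contigs that together cover >= 50% of the assembly."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Example: four contigs totalling 1,000 bp
print(n50([100, 200, 300, 400]))  # → 300
```

A larger N50 indicates a more contiguous assembly, but it says nothing about misassemblies, which is why reference-based evaluation on defined communities is valuable.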

  13. Evaluating Digital PCR for the Quantification of Human Genomic DNA: Accessible Amplifiable Targets.

    Science.gov (United States)

    Kline, Margaret C; Romsos, Erica L; Duewer, David L

    2016-02-16

    Polymerase chain reaction (PCR) multiplexed assays perform best when the input quantity of template DNA is controlled to within about a factor of √2. To help ensure that PCR assays yield consistent results over time and place, results from methods used to determine DNA quantity need to be metrologically traceable to a common reference. Many DNA quantitation systems can be accurately calibrated with solutions of DNA in aqueous buffer. Since they do not require external calibration, end-point limiting dilution technologies, collectively termed "digital PCR (dPCR)", have been proposed as suitable for value assigning such DNA calibrants. The performance characteristics of several commercially available dPCR systems have recently been documented using plasmid, viral, or fragmented genomic DNA; dPCR performance with more complex materials, such as human genomic DNA, has been less studied. With the goal of providing a human genomic reference material traceably certified for mass concentration, we are investigating the measurement characteristics of several dPCR systems. We here report results of measurements from multiple PCR assays, on four human genomic DNAs treated with four endonuclease restriction enzymes using both chamber and droplet dPCR platforms. We conclude that dPCR does not estimate the absolute number of PCR targets in a given volume but rather the number of accessible and amplifiable targets. While enzymatic restriction of human genomic DNA increases accessibility for some assays, in well-optimized PCR assays it can reduce the number of amplifiable targets and increase assay variability relative to uncut sample.
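
The distinction between total and amplifiable targets matters because dPCR concentration estimates derive from Poisson statistics on partition counts, not from direct counting. A minimal sketch of the standard correction (illustrative numbers, not data from the study):

```python
import math

def dpcr_copies_per_partition(positive, total):
    """Poisson-corrected mean amplifiable target copies per partition.

    A partition reads negative only if it received zero accessible,
    amplifiable targets, so P(negative) = exp(-lambda)."""
    negative_fraction = 1.0 - positive / total
    return -math.log(negative_fraction)

# 632 positive partitions out of 1,000 -> lambda close to 1 copy/partition
lam = dpcr_copies_per_partition(632, 1000)
print(round(lam, 2))  # → 1.0
```

Multiplying lambda by the partition count and dividing by the loaded volume gives a concentration, which is why dPCR reports accessible, amplifiable targets per volume rather than an absolute molecule count.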

  14. Comparison of three genomic DNA extraction methods to obtain high DNA quality from maize.

    Science.gov (United States)

    Abdel-Latif, Amani; Osman, Gamal

    2017-01-01

    The world's top three cereals, based on their monetary value, are rice, wheat, and corn. In cereal crops, DNA extraction is difficult owing to rigid non-cellulose components in the cell wall of leaves and high starch and protein content in grains. Advanced techniques in molecular biology require pure and rapid extraction of DNA. The majority of existing DNA extraction methods rely on long incubations and multiple precipitations, or on commercially available kits, to produce contaminant-free high molecular weight DNA. In this study, we compared three different methods, with minor modifications, for the isolation of high-quality genomic DNA from the grains of the cereal crop Zea mays. The DNA from the grains of two maize hybrids, M10 and M321, was extracted using the DNeasy Qiagen Plant Mini Kit, the CTAB method (with/without 1% PVP) and a modified Mericon extraction. Genes coding for 45S ribosomal RNA are organized in tandem arrays of up to several thousand copies and contain codes for the 18S, 5.8S and 26S rRNA units separated by the internal transcribed spacers ITS1 and ITS2. While the rRNA units are evolutionarily conserved, ITS regions show a high level of interspecific divergence and have been used frequently in genetic diversity and phylogenetic studies. In this study, the genomic DNA was amplified by PCR using primers specific for the ITS region, and the PCR products were visualized on an agarose gel. The modified Mericon extraction method was found to be the most efficient, providing high DNA yields of better quality at affordable cost and in less time.

  15. Comprehensive evaluation of genome-wide 5-hydroxymethylcytosine profiling approaches in human DNA.

    Science.gov (United States)

    Skvortsova, Ksenia; Zotenko, Elena; Luu, Phuc-Loi; Gould, Cathryn M; Nair, Shalima S; Clark, Susan J; Stirzaker, Clare

    2017-01-01

    The discovery that 5-methylcytosine (5mC) can be oxidized to 5-hydroxymethylcytosine (5hmC) by the ten-eleven translocation (TET) proteins has prompted wide interest in the potential role of 5hmC in reshaping the mammalian DNA methylation landscape. The gold-standard bisulphite conversion technologies to study DNA methylation do not distinguish between 5mC and 5hmC. However, new approaches to mapping 5hmC genome-wide have advanced rapidly, although it is unclear how the different methods compare in accurately calling 5hmC. In this study, we provide a comparative analysis on brain DNA using three 5hmC genome-wide approaches, namely whole-genome bisulphite/oxidative bisulphite sequencing (WG Bis/OxBis-seq), Infinium HumanMethylation450 BeadChip arrays coupled with oxidative bisulphite (HM450K Bis/OxBis) and antibody-based immunoprecipitation and sequencing of hydroxymethylated DNA (hMeDIP-seq). We also perform loci-specific TET-assisted bisulphite sequencing (TAB-seq) for validation of candidate regions. We show that whole-genome single-base resolution approaches are advantaged in providing precise 5hmC values but require high sequencing depth to accurately measure 5hmC, as this modification is commonly in low abundance in mammalian cells. HM450K arrays coupled with oxidative bisulphite provide a cost-effective representation of 5hmC distribution, at CpG sites with 5hmC levels >~10%. However, 5hmC analysis is restricted to the genomic location of the probes, which is an important consideration as 5hmC modification is commonly enriched at enhancer elements. Finally, we show that the widely used hMeDIP-seq method provides an efficient genome-wide profile of 5hmC and shows high correlation with WG Bis/OxBis-seq 5hmC distribution in brain DNA. 
However, in cell line DNA with low levels of 5hmC, hMeDIP-seq-enriched regions are not detected by WG Bis/OxBis or HM450K, suggesting either misinterpretation of 5hmC calls by hMeDIP or a lack of sensitivity of the latter methods.
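
In the paired bisulphite/oxidative-bisulphite design described above, 5hmC at a CpG site is estimated by subtraction: BS conversion reports 5mC + 5hmC together, while oxBS reports 5mC alone. A hedged sketch with invented beta values (not data from the study):

```python
def estimate_5hmc(bs_beta, oxbs_beta):
    """Estimate the 5hmC fraction at a CpG site by subtraction.

    bs_beta   : methylation level from bisulphite seq (5mC + 5hmC)
    oxbs_beta : methylation level from oxidative bisulphite seq (5mC only)
    Negative differences (sampling noise) are clipped to zero."""
    return max(0.0, bs_beta - oxbs_beta)

print(estimate_5hmc(0.80, 0.65))  # ~0.15: site carries roughly 15% 5hmC
print(estimate_5hmc(0.40, 0.42))  # 0.0: noise-level negative difference clipped
```

Because the estimate is a difference of two noisy measurements, low-abundance 5hmC demands high sequencing depth, which is exactly the limitation the abstract notes for whole-genome approaches.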

  16. A simple and efficient total genomic DNA extraction method for individual zooplankton.

    Science.gov (United States)

    Fazhan, Hanafiah; Waiho, Khor; Shahreza, Md Sheriff

    2016-01-01

    Molecular approaches are widely applied in species identification and taxonomic studies of minute zooplankton. One of the most intensively studied zooplankton groups at present is the subclass Copepoda. Accurate species identification of all life stages of the generally small-sized copepods through molecular analysis is important, especially in taxonomic and systematic assessments of harpacticoid copepod populations and for understanding their dynamics within the marine community. However, total genomic DNA (TGDNA) extraction from individual harpacticoid copepods can be problematic due to their small size and epibenthic behavior. In this research, six TGDNA extraction methods performed on individual harpacticoid copepods were compared. A new, simple, efficient and consistent TGDNA extraction method was designed and compared with a commercial kit and modified published TGDNA extraction methods. The newly described method, "Incubation in PCR buffer", yielded good and consistent results, with a high PCR amplification success rate (82%) compared to the other methods. Given its consistency and low cost, the "Incubation in PCR buffer" method is highly recommended for TGDNA extraction from other minute zooplankton species.

  17. Evaluation of plasmid and genomic DNA calibrants used for the quantification of genetically modified organisms.

    Science.gov (United States)

    Caprioara-Buda, M; Meyer, W; Jeynov, B; Corbisier, P; Trapmann, S; Emons, H

    2012-07-01

    The reliable quantification of genetically modified organisms (GMOs) by real-time PCR requires, besides thoroughly validated quantitative detection methods, sustainable calibration systems. The latter establishes the anchor points for the measured value and the measurement unit, respectively. In this paper, the suitability of two types of DNA calibrants, i.e. plasmid DNA and genomic DNA extracted from plant leaves, for the certification of the GMO content in reference materials as copy number ratio between two targeted DNA sequences was investigated. The PCR efficiencies and coefficients of determination of the calibration curves as well as the measured copy number ratios for three powder certified reference materials (CRMs), namely ERM-BF415e (NK603 maize), ERM-BF425c (356043 soya), and ERM-BF427c (98140 maize), originally certified for their mass fraction of GMO, were compared for both types of calibrants. In all three systems investigated, the PCR efficiencies of plasmid DNA were slightly closer to the PCR efficiencies observed for the genomic DNA extracted from seed powders rather than those of the genomic DNA extracted from leaves. Although the mean DNA copy number ratios for each CRM overlapped within their uncertainties, the DNA copy number ratios were significantly different using the two types of calibrants. Based on these observations, both plasmid and leaf genomic DNA calibrants would be technically suitable as anchor points for the calibration of the real-time PCR methods applied in this study. However, the most suitable approach to establish a sustainable traceability chain is to fix a reference system based on plasmid DNA.

  18. Defining and Evaluating a Core Genome Multilocus Sequence Typing Scheme for Whole-Genome Sequence-Based Typing of Listeria monocytogenes.

    Science.gov (United States)

    Ruppitsch, Werner; Pietzka, Ariane; Prior, Karola; Bletz, Stefan; Fernandez, Haizpea Lasa; Allerberger, Franz; Harmsen, Dag; Mellmann, Alexander

    2015-09-01

    Whole-genome sequencing (WGS) has emerged today as an ultimate typing tool to characterize Listeria monocytogenes outbreaks. However, data analysis and interlaboratory comparability of WGS data are still challenging for most public health laboratories. Therefore, we have developed and evaluated a new L. monocytogenes typing scheme based on genome-wide gene-by-gene comparisons (core genome multilocus sequence typing [cgMLST]) to allow for a unique typing nomenclature. Initially, we determined the breadth of the L. monocytogenes population based on MLST data with a Bayesian approach. Based on the genome sequence data of representative isolates for the whole population, cgMLST target genes were defined and reappraised with 67 L. monocytogenes isolates from two outbreaks and serotype reference strains. The Bayesian population analysis generated five L. monocytogenes groups. Using all available NCBI RefSeq genomes (n = 36) and six additionally sequenced strains, all genetic groups were covered. Pairwise comparisons of these 42 genome sequences resulted in 1,701 cgMLST targets present in all 42 genomes with 100% overlap and ≥90% sequence similarity. Overall, ≥99.1% of the cgMLST targets were present in 67 outbreak and serotype reference strains, underlining the representativeness of the cgMLST scheme. Moreover, cgMLST enabled clustering of outbreak isolates with ≤10 alleles difference and unambiguous separation from unrelated outgroup isolates. In conclusion, the novel cgMLST scheme not only improves outbreak investigations but also enables, due to the availability of the automatically curated cgMLST nomenclature, interlaboratory exchange of data that are crucial, especially for rapid responses during transsectorial outbreaks. Copyright © 2015 Ruppitsch et al.
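
Clustering in cgMLST reduces to counting allele differences between gene-by-gene profiles; the ≤10-allele outbreak threshold from the abstract can be sketched as follows (hypothetical mini-profiles, not the 1,701-target scheme itself):

```python
def allele_distance(profile_a, profile_b):
    """Count cgMLST targets with differing allele numbers.

    Profiles map target gene -> allele number; targets missing from
    either profile are skipped (pairwise-ignore comparison)."""
    shared = profile_a.keys() & profile_b.keys()
    return sum(1 for t in shared if profile_a[t] != profile_b[t])

OUTBREAK_THRESHOLD = 10  # maximum allele differences, per the scheme above

iso1 = {"lmo0001": 1, "lmo0002": 5, "lmo0003": 2}
iso2 = {"lmo0001": 1, "lmo0002": 7, "lmo0003": 2}
d = allele_distance(iso1, iso2)
print(d, d <= OUTBREAK_THRESHOLD)  # → 1 True: same putative cluster
```

Because the distance is defined over named alleles rather than raw sequence, two laboratories sharing the nomenclature can compare isolates without exchanging reads, which is the interoperability benefit the abstract emphasizes.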

  19. SEAM PUCKERING EVALUATION METHOD FOR SEWING PROCESS

    Directory of Open Access Journals (Sweden)

    BRAD Raluca

    2014-07-01

    Full Text Available The paper presents an automated method for the assessment and classification of puckering defects detected during the preproduction control stage of the sewing machine or product inspection. In this respect, we have presented the possible causes and remedies of the wrinkle nonconformities. Subjective factors related to the control environment and operators during seam evaluation can be reduced using an automated system whose operation is based on image processing. Our implementation involves spectral image analysis using the Fourier transform and an unsupervised neural network, the Kohonen map, employed to classify material specimens, the input images, into five discrete degrees of quality, from grade 5 (best) to grade 1 (worst). The puckering features presented in the learning and test images have been pre-classified using the seam puckering quality standard. The network training stage consists of presenting five input vectors (derived from the down-sampled arrays), representing the puckering grades. Puckering classification consists of providing an input vector derived from the image to be classified. A scalar product between the input value vectors and the weighted training images is computed, and the result is assigned to one of the five classes to which the input image belongs. Using the Kohonen network, the puckering defects were correctly classified in 71.42% of cases.

  20. Cooperative Student Assessment Method: an Evaluation Study

    Directory of Open Access Journals (Sweden)

    Antonella Grasso

    2006-06-01

    Full Text Available Training through the Internet poses a series of technical problems and pedagogical issues. Traditional training is not indiscriminate but takes on different forms according to the needs of the subject being trained and the context where such training occurs. In order to make systems adaptable in this way, a model of the student's characteristics - the student model - has to be set up, maintained and updated. However, there are many difficulties involved in obtaining sufficient information to create an accurate student model. One way to solve this problem is to involve students in the student modeling process, stimulating them to provide the necessary information by means of a dialog in which the student and system build the student model according to a collaborative process. The present work describes a cooperative student modeling method (Cooperative Student Assessment, CSA), which builds a joint system-student assessment of the student's activities on the basis of an estimate of the student's self-assessment ability, and a prototype system for children, addressing the learning of fractions, in which CSA is implemented. The article also reports the results of an experiment carried out with learners attending primary school, aimed at evaluating the effectiveness of involving students in the assessment process by comparing two versions of the same system: one using cooperative student modeling and the other the traditional overlay model.

  1. Improved statistical methods enable greater sensitivity in rhythm detection for genome-wide data.

    Directory of Open Access Journals (Sweden)

    Alan L Hutchison

    2015-03-01

    Full Text Available Robust methods for identifying patterns of expression in genome-wide data are important for generating hypotheses regarding gene function. To this end, several analytic methods have been developed for detecting periodic patterns. We improve one such method, JTK_CYCLE, by explicitly calculating the null distribution such that it accounts for multiple hypothesis testing and by including non-sinusoidal reference waveforms. We term this method empirical JTK_CYCLE with asymmetry search, and we compare its performance to JTK_CYCLE with Bonferroni and Benjamini-Hochberg multiple hypothesis testing correction, as well as to five other methods: cyclohedron test, address reduction, stable persistence, ANOVA, and F24. We find that ANOVA, F24, and JTK_CYCLE consistently outperform the other three methods when data are limited and noisy; empirical JTK_CYCLE with asymmetry search gives the greatest sensitivity while controlling for the false discovery rate. Our analysis also provides insight into experimental design and we find that, for a fixed number of samples, better sensitivity and specificity are achieved with higher numbers of replicates than with higher sampling density. Application of the methods to detecting circadian rhythms in a metadataset of microarrays that quantify time-dependent gene expression in whole heads of Drosophila melanogaster reveals annotations that are enriched among genes with highly asymmetric waveforms. These include a wide range of oxidation reduction and metabolic genes, as well as genes with transcripts that have multiple splice forms.

  2. Application of Response Surface Methods To Determine Conditions for Optimal Genomic Prediction

    Science.gov (United States)

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2017-01-01

    An epistatic genetic architecture can have a significant impact on prediction accuracies of genomic prediction (GP) methods. Machine learning methods predict traits with epistatic genetic architectures more accurately than statistical methods based on additive mixed linear models. The differences between these types of GP methods suggest a diagnostic for revealing genetic architectures underlying traits of interest. In addition to genetic architecture, the performance of GP methods may be influenced by the sample size of the training population, the number of QTL, and the proportion of phenotypic variability due to genotypic variability (heritability). Possible values for these factors and the number of combinations of the factor levels that influence the performance of GP methods can be large. Thus, efficient methods for identifying combinations of factor levels that produce the most accurate GPs are needed. Herein, we employ response surface methods (RSMs) to find the experimental conditions that produce the most accurate GPs. We illustrate RSM with an example of simulated doubled haploid populations and identify the combination of factors that maximize the difference between prediction accuracies of best linear unbiased prediction (BLUP) and support vector machine (SVM) GP methods. The greatest impact on the response is due to the genetic architecture of the population, heritability of the trait, and the sample size. When epistasis is responsible for all of the genotypic variance, heritability is equal to one, and the sample size of the training population is large, the advantage of using the SVM method vs. the BLUP method is greatest. However, except for values close to the maximum, most of the response surface shows little difference between the methods. We also determined that the conditions resulting in the greatest prediction accuracy for BLUP occurred when genetic architecture consists solely of additive effects and heritability is equal to one.

  3. Multidisciplinary eHealth Survey Evaluation Methods

    Science.gov (United States)

    Karras, Bryant T.; Tufano, James T.

    2006-01-01

    This paper describes the development process of an evaluation framework for describing and comparing web survey tools. We believe that this approach will help shape the design, development, deployment, and evaluation of population-based health interventions. A conceptual framework for describing and evaluating web survey systems will enable the…

  4. High-throughput functional genomic methods to analyze the effects of dietary lipids.

    Science.gov (United States)

    Puskás, László G; Ménesi, Dalma; Fehér, Liliána Z; Kitajka, Klára

    2006-12-01

    The applications of 'omics' (genomics, transcriptomics, proteomics and metabolomics) technologies in nutritional studies have opened new possibilities to understand the effects and the action of different diets both in healthy and diseased states, and help to define personalized diets and to develop new drugs that revert or prevent the negative dietary effects. Several single nucleotide polymorphisms have already been investigated for potential gene-diet interactions in the response to different lipid diets. It is also well known that, besides the known cellular effects of lipid nutrition, dietary lipids influence gene expression in a tissue-, concentration- and age-dependent manner. Protein expression and post-translational changes due to different diets have been reported as well. To understand the molecular basis of the effects and roles of dietary lipids, high-throughput functional genomic methods such as DNA or protein microarrays, high-throughput NMR and mass spectrometry are needed to assess the changes in a global way at the genome, transcriptome, proteome and metabolome level. The present review focuses on different high-throughput technologies for assessing the effects of dietary fatty acids, including cholesterol and polyunsaturated fatty acids. Several genes were identified that exhibited altered expression in response to fish-oil treatment of human lung cancer cells, including protein kinase C, natriuretic peptide receptor-A, PKNbeta, interleukin-1 receptor associated kinase-1 (IRAK-1) and diacylglycerol kinase genes, by using high-throughput quantitative real-time PCR. Other results obtained from cholesterol- and polyunsaturated fatty acid-fed animals using DNA and protein microarrays are also discussed.
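
Expression changes from the kind of high-throughput real-time PCR described above are conventionally summarised with the 2^(-ΔΔCt) method. A hedged sketch with invented Ct values (assuming roughly 100% primer efficiency, i.e. a perfect doubling per cycle):

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-delta-delta-Ct) method.

    Each delta-Ct normalises the target gene to a reference gene;
    the difference between conditions gives the fold change."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical: target amplifies 2 cycles earlier after treatment,
# with the reference gene unchanged -> 4-fold up-regulation.
print(fold_change_ddct(24.0, 20.0, 26.0, 20.0))  # → 4.0
```

Lower Ct means earlier detection and hence more starting template, which is why a negative ΔΔCt corresponds to up-regulation.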

  5. A sensitive, support-vector-machine method for the detection of horizontal gene transfers in viral, archaeal and bacterial genomes.

    Science.gov (United States)

    Tsirigos, Aristotelis; Rigoutsos, Isidore

    2005-01-01

    In earlier work, we introduced and discussed a generalized computational framework for identifying horizontal transfers. This framework relied on a gene's nucleotide composition, obviated the need for knowledge of codon boundaries and database searches, and was shown to perform very well across a wide range of archaeal and bacterial genomes when compared with previously published approaches, such as Codon Adaptation Index and C + G content. Nonetheless, two considerations remained outstanding: we wanted to further increase the sensitivity of detecting horizontal transfers and also to be able to apply the method to increasingly smaller genomes. In the discussion that follows, we present such a method, Wn-SVM, and show that it exhibits a very significant improvement in sensitivity compared with earlier approaches. Wn-SVM uses a one-class support-vector machine and can learn using rather small training sets. This property makes Wn-SVM particularly suitable for studying small-size genomes, similar to those of viruses, as well as the typically larger archaeal and bacterial genomes. We show experimentally that the new method results in a superior performance across a wide range of organisms and that it improves even upon our own earlier method by an average of 10% across all examined genomes. As a small-genome case study, we analyze the genome of the human cytomegalovirus and demonstrate that Wn-SVM correctly identifies regions that are known to be conserved and prototypical of all beta-herpesvirinae, regions that are known to have been acquired horizontally from the human host and, finally, regions that had not up to now been suspected to be horizontally transferred. Atypical region predictions for many eukaryotic viruses, including the alpha-, beta- and gamma-herpesvirinae, and 123 archaeal and bacterial genomes, have been made available online at http://cbcsrv.watson.ibm.com/HGT_SVM/.
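
Composition-based detectors such as Wn-SVM operate on nucleotide word frequencies computed without reference to codon boundaries. As a hedged, stdlib-only sketch, here is the feature-extraction step (dinucleotide frequencies per window) that would feed such a classifier; the one-class SVM itself would then be trained on these vectors with a standard machine-learning library:

```python
from itertools import product

WORDS = ["".join(p) for p in product("ACGT", repeat=2)]  # 16 dinucleotides

def word_frequencies(seq):
    """Overlapping dinucleotide frequency vector for one genomic window.

    Windows dominated by atypical word usage (e.g. horizontally
    transferred regions) stand out from the genome-wide background."""
    counts = {w: 0 for w in WORDS}
    for i in range(len(seq) - 1):
        w = seq[i:i + 2]
        if w in counts:  # skip ambiguous bases such as N
            counts[w] += 1
    total = max(1, len(seq) - 1)
    return [counts[w] / total for w in WORDS]

window = "ATGCGCGCATATGCGC"   # hypothetical window
vec = word_frequencies(window)
print(len(vec), round(sum(vec), 6))  # → 16 1.0
```

Longer words (the "n" in Wn) give sharper signatures but need longer windows to estimate reliably, which is the trade-off that makes short-word variants attractive for small viral genomes.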

  6. DNA immunoprecipitation semiconductor sequencing (DIP-SC-seq) as a rapid method to generate genome wide epigenetic signatures

    OpenAIRE

    Thomson, John P.; Fawkes, Angie; Ottaviano, Raffaele; Hunter, Jennifer M.; Shukla, Ruchi; Mjoseng, Heidi K.; Clark, Richard; Coutts, Audrey; Murphy, Lee; Meehan, Richard R.

    2015-01-01

    Modification of DNA resulting in 5-methylcytosine (5 mC) or 5-hydroxymethylcytosine (5hmC) has been shown to influence the local chromatin environment and affect transcription. Although recent advances in next generation sequencing technology allow researchers to map epigenetic modifications across the genome, such experiments are often time-consuming and cost prohibitive. Here we present a rapid and cost effective method of generating genome wide DNA modification maps utilising commercially ...

  7. Facilitating comparative effectiveness research in cancer genomics: evaluating stakeholder perceptions of the engagement process.

    Science.gov (United States)

    Deverka, Patricia A; Lavallee, Danielle C; Desai, Priyanka J; Armstrong, Joanne; Gorman, Mark; Hole-Curry, Leah; O'Leary, James; Ruffner, B W; Watkins, John; Veenstra, David L; Baker, Laurence H; Unger, Joseph M; Ramsey, Scott D

    2012-07-01

    The Center for Comparative Effectiveness Research in Cancer Genomics completed a 2-year stakeholder-guided process for the prioritization of genomic tests for comparative effectiveness research studies. We sought to evaluate the effectiveness of engagement procedures in achieving project goals and to identify opportunities for future improvements. The evaluation included an online questionnaire, one-on-one telephone interviews and facilitated discussion. Responses to the online questionnaire were tabulated for descriptive purposes, while transcripts from key informant interviews were analyzed using a directed content analysis approach. A total of 11 out of 13 stakeholders completed both the online questionnaire and interview process, while nine participated in the facilitated discussion. Eighty-nine percent of questionnaire items received overall ratings of agree or strongly agree; 11% of responses were rated as neutral with the exception of a single rating of disagreement with an item regarding the clarity of how stakeholder input was incorporated into project decisions. Recommendations for future improvement included developing standard recruitment practices, role descriptions and processes for improved communication with clinical and comparative effectiveness research investigators. Evaluation of the stakeholder engagement process provided constructive feedback for future improvements and should be routinely conducted to ensure maximal effectiveness of stakeholder involvement.

  8. Methods of marketing and advertising activity evaluation

    Directory of Open Access Journals (Sweden)

    A.I. Yakovlev

    2016-09-01

    Full Text Available The performance of business entities depends on the development of instruments for determining the efficiency of economic processes, including marketing activities; this determined the purpose of the article. Methodological principles in this area are developed. It is shown that increases in sales and profit margin depend only partly on the implementation of advertising measures. Methodical approaches for estimating exhibition and advertising activity, and for rewarding the employees involved, are specified. The work evaluates the value of the advertising effect on the basis of the share of the advertising impact in the increase of sales and revenue from the sale of products; the corresponding share is determined from consumer surveys. An index of trade-fair performance is calculated from two components: how many times a specific company participated in such events, and how well the company was represented at the relevant trade fairs. Indices of the cost of advertising and promotion for a given product manufacturer are also provided. The scientific novelty of the research is as follows: it is shown that the sales increase effect should not be attributed to advertising alone, and the components that influence consumer preferences, together with their shares in the total effect, are determined. Also new is the proposed index of trade-fair performance as a function of the selected factors. The practical importance of the results lies in a more accurate calculation of the effect of the activities undertaken and, consequently, in increased efficiency of business entities.

  9. Correcting for cryptic relatedness by a regression-based genomic control method

    Directory of Open Access Journals (Sweden)

    Yang Yaning

    2009-12-01

    Full Text Available Abstract Background Genomic control (GC) is a useful tool to correct for the cryptic relatedness in population-based association studies. It was originally proposed for correcting for the variance inflation of Cochran-Armitage's additive trend test by using information from unlinked null markers, and was later generalized to be applicable to other tests with the additional requirement that the null markers are matched with the candidate marker in allele frequencies. However, matching allele frequencies limits the number of available null markers and thus limits the applicability of the GC method. On the other hand, errors in genotype/allele frequencies may cause further bias and variance inflation and thereby aggravate the effect of GC correction. Results In this paper, we propose a regression-based GC method using null markers that are not necessarily matched in allele frequencies with the candidate marker. Variation of allele frequencies of the null markers is adjusted by a regression method. Conclusion The proposed method can be readily applied to Cochran-Armitage trend tests other than the additive trend test, Pearson's chi-square test and other robust efficiency tests. Simulation results show that the proposed method is effective in controlling type I error in the presence of population substructure.
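
Classical genomic control, which the regression method above extends, divides each candidate test statistic by an inflation factor estimated from the null markers. A minimal sketch with illustrative statistics (1-df chi-square trend tests assumed):

```python
from statistics import median

CHI2_1DF_MEDIAN = 0.4549  # approximate median of the chi-square(1) distribution

def gc_corrected(candidate_stat, null_stats):
    """Deflate a trend-test statistic by the genomic-control factor.

    lambda is the median of the null-marker statistics divided by the
    theoretical chi-square(1) median; lambda is floored at 1 so that
    uninflated data are left unchanged."""
    lam = max(1.0, median(null_stats) / CHI2_1DF_MEDIAN)
    return candidate_stat / lam

# Null markers inflated two-fold -> candidate statistic is halved:
print(gc_corrected(8.0, [2 * CHI2_1DF_MEDIAN, 0.2, 1.5]))  # → 4.0
```

The regression-based variant in the abstract replaces the single median-based factor with an adjustment that models how the null-marker statistics vary with allele frequency, removing the need for frequency-matched null markers.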

  10. Systematic evaluation of bias in microbial community profiles induced by whole genome amplification.

    Science.gov (United States)

    Direito, Susana O L; Zaura, Egija; Little, Miranda; Ehrenfreund, Pascale; Röling, Wilfred F M

    2014-03-01

    Whole genome amplification methods facilitate the detection and characterization of microbial communities in low biomass environments. We examined the extent to which the actual community structure is reliably revealed and factors contributing to bias. One widely used method [multiple displacement amplification (MDA)] and one new primer-free method [primase-based whole genome amplification (pWGA)] were compared using a polymerase chain reaction (PCR)-based method as control. Pyrosequencing of an environmental sample and principal component analysis revealed that MDA impacted community profiles more strongly than pWGA and indicated that this related to species GC content, although an influence of DNA integrity could not be excluded. Subsequently, biases by species GC content, DNA integrity and fragment size were separately analysed using defined mixtures of DNA from various species. We found significantly less amplification of species with the highest GC content for MDA-based templates and, to a lesser extent, for pWGA. DNA fragmentation also interfered severely: species with more fragmented DNA were less amplified with MDA and pWGA. pWGA was unable to amplify low molecular weight DNA (< 1.5 kb), whereas MDA was inefficient. We conclude that pWGA is the most promising method for characterization of microbial communities in low-biomass environments and for currently planned astrobiological missions to Mars. © 2013 Society for Applied Microbiology and John Wiley & Sons Ltd.
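
The GC-content bias described above is straightforward to screen for before amplification. A hedged helper (not from the study; the template sequences are invented) for ranking templates by GC fraction:

```python
def gc_fraction(seq):
    """Fraction of G+C bases in a sequence (case-insensitive)."""
    seq = seq.upper()
    gc = sum(1 for base in seq if base in "GC")
    return gc / len(seq) if seq else 0.0

# Hypothetical templates: the highest-GC species is flagged as most
# at risk of under-amplification in MDA-based workflows.
templates = {"sp_high_gc": "GGCCGGCCGGAT", "sp_low_gc": "ATATTAATGCAT"}
ranked = sorted(templates, key=lambda s: gc_fraction(templates[s]), reverse=True)
print(ranked[0])  # → sp_high_gc
```

Such a pre-screen cannot remove the bias, but it indicates which community members' abundances should be interpreted with extra caution after whole genome amplification.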

  11. Use of different marker pre-selection methods based on single SNP regression in the estimation of Genomic-EBVs

    Directory of Open Access Journals (Sweden)

    Corrado Dimauro

    2010-01-01

    Full Text Available Two methods of SNP pre-selection based on single-marker regression for the estimation of genomic breeding values (G-EBVs) were compared using simulated data provided by the XII QTL-MAS workshop: (i) Bonferroni correction of the significance threshold and (ii) a permutation test to obtain the reference distribution of the null hypothesis and identify significant markers at the P<0.01 and P<0.001 significance thresholds. From the set of markers significant at P<0.001, random subsets of 50% and 25% of the markers were extracted, to evaluate the effect of further reducing the number of significant SNPs on G-EBV predictions. The Bonferroni correction method identified 595 significant SNPs, which gave the best G-EBV accuracies in the prediction generations (82.80%). The permutation methods gave slightly lower G-EBV accuracies even though a larger number of SNPs were significant (2,053 and 1,352 for the 0.01 and 0.001 significance thresholds, respectively). Interestingly, halving or quartering the number of SNPs significant at P<0.001 resulted in only a slight decrease in G-EBV accuracies. The genetic structure of the simulated population, with few QTL carrying large effects, might have favoured the Bonferroni method.
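The two pre-selection thresholds compared above can be sketched as follows. A toy covariance statistic stands in for the full single-marker regression, and all names are illustrative rather than taken from the paper:

```python
import random

def bonferroni_threshold(alpha, n_markers):
    """Per-marker significance threshold, e.g. 0.01 / 6000 markers."""
    return alpha / n_markers

def single_marker_stat(genos, phenos):
    """Absolute covariance between genotype dosage (0/1/2) and phenotype;
    a stand-in for the regression test statistic."""
    n = len(genos)
    mg, mp = sum(genos) / n, sum(phenos) / n
    return abs(sum((g - mg) * (p - mp) for g, p in zip(genos, phenos)) / n)

def permutation_pvalue(genos, phenos, n_perm=500, seed=7):
    """Empirical p-value from the null distribution obtained by shuffling
    phenotypes, which breaks any genotype-phenotype association."""
    rng = random.Random(seed)
    observed = single_marker_stat(genos, phenos)
    hits = 0
    for _ in range(n_perm):
        perm = phenos[:]
        rng.shuffle(perm)
        if single_marker_stat(genos, perm) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

A marker would be retained if its p-value falls below the Bonferroni threshold, or below the chosen quantile of the permutation null distribution.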

  12. Improved genome editing in human cell lines using the CRISPR method.

    Directory of Open Access Journals (Sweden)

    Ivan M Munoz

    Full Text Available The Cas9/CRISPR system has become a popular choice for genome editing. In this system, binding of a single guide (sg) RNA to a cognate genomic sequence enables the Cas9 nuclease to induce a double-strand break at that locus. This break is then repaired by an error-prone mechanism, leading to mutation and gene disruption. In this study we describe a range of refinements of the method, including stable cell lines expressing Cas9, and a PCR-based protocol for the generation of the sgRNA. We also describe a simple methodology that allows both elimination of Cas9 from cells after gene disruption and re-introduction of the disrupted gene. This advance enables easy assessment of the off-target effects associated with gene disruption, as well as phenotype-based structure-function analysis. In our study, we used the Fan1 DNA repair gene as a control in these experiments. Cas9/CRISPR-mediated Fan1 disruption occurred at frequencies of around 29%, and resulted in the anticipated spectrum of genotoxin hypersensitivity, which was rescued by re-introduction of Fan1.

  13. Methods for open innovation on a genome-design platform associating scientific, commercial, and educational communities in synthetic biology.

    Science.gov (United States)

    Toyoda, Tetsuro

    2011-01-01

    Synthetic biology requires both engineering efficiency and compliance with safety guidelines and ethics. Focusing on the rational construction of biological systems based on engineering principles, synthetic biology depends on a genome-design platform to explore the combinations of multiple biological components or BIO bricks for quickly producing innovative devices. This chapter explains the differences among various platform models and details a methodology for promoting open innovation within the scope of the statutory exemption of patent laws. The detailed platform adopts a centralized evaluation model (CEM), computer-aided design (CAD) bricks, and a freemium model. It is also important for the platform to support the legal aspects of copyrights as well as patent and safety guidelines because intellectual work including DNA sequences designed rationally by human intelligence is basically copyrightable. An informational platform with high traceability, transparency, auditability, and security is required for copyright proof, safety compliance, and incentive management for open innovation in synthetic biology. GenoCon, which we have organized and explained here, is a competition-styled, open-innovation method involving worldwide participants from scientific, commercial, and educational communities that aims to improve the designs of genomic sequences that confer a desired function on an organism. Using only a Web browser, a participating contributor proposes a design expressed with CAD bricks that generate a relevant DNA sequence, which is then experimentally and intensively evaluated by the GenoCon organizers. The CAD bricks that comprise programs and databases as a Semantic Web are developed, executed, shared, reused, and well stocked on the secure Semantic Web platform called the Scientists' Networking System or SciNetS/SciNeS, based on which a CEM research center for synthetic biology and open innovation should be established. Copyright © 2011 Elsevier Inc

  14. Genomic diversity of Saccharomyces cerevisiae yeasts associated with alcoholic fermentation of bacanora produced by artisanal methods.

    Science.gov (United States)

    Álvarez-Ainza, M L; Zamora-Quiñonez, K A; Moreno-Ibarra, G M; Acedo-Félix, E

    2015-03-01

    Bacanora is a spirituous beverage produced from Agave angustifolia Haw by an artisanal process. Natural fermentation is mostly performed by native yeasts and bacteria. In this study, 228 strains of Saccharomyces-like yeasts were isolated from the natural alcoholic fermentation during bacanora production. Restriction analysis of the amplified ITS1-5.8S-ITS2 region of the ribosomal DNA genes (RFLPr) was used to confirm the genus, and 182 strains were identified as Saccharomyces cerevisiae. These strains displayed high genomic variability in their chromosome profiles by karyotyping. Electrophoretic profiles of the evaluated strains showed a large number of chromosomes, with sizes ranging from approximately 225 to 2,200 kbp.

  15. Evaluation of a new automated homogeneous PCR assay, GenomEra C. difficile, for rapid detection of Toxigenic Clostridium difficile in fecal specimens.

    Science.gov (United States)

    Hirvonen, Jari J; Mentula, Silja; Kaukoranta, Suvi-Sirkku

    2013-09-01

    We evaluated a new automated homogeneous PCR assay to detect toxigenic Clostridium difficile, the GenomEra C. difficile assay (Abacus Diagnostica, Finland), with 310 diarrheal stool specimens and with a collection of 33 known clostridial and nonclostridial isolates. Results were compared with toxigenic culture results, with discrepancies being resolved by the GeneXpert C. difficile PCR assay (Cepheid). Among the 80 toxigenic culture-positive or GeneXpert C. difficile assay-positive fecal specimens, 79 were also positive with the GenomEra C. difficile assay. Additionally, one specimen was positive with the GenomEra assay but negative with the confirmatory methods. Thus, the sensitivity and specificity were 98.8% and 99.6%, respectively. With the culture collection, no false-positive or -negative results were observed. The analytical sensitivity of the GenomEra C. difficile assay was approximately 5 CFU per PCR test. The short hands-on (<5 min for 1 to 4 samples) and total turnaround (<1 h) times, together with the high positive and negative predictive values (98.8% and 99.6%, respectively), make the GenomEra C. difficile assay an excellent option for toxigenic C. difficile detection in fecal specimens.
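The headline figures can be checked from the underlying confusion counts. Assuming 79 true positives, 1 false negative, 1 false positive, and 229 true negatives (the negatives inferred from 310 specimens minus 80 positives; our reconstruction, not stated in the abstract), a small helper reproduces the reported sensitivity and specificity:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy measures from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)   # fraction of true positives detected
    specificity = tn / (tn + fp)   # fraction of true negatives cleared
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv
```

With these counts, sensitivity is 79/80 = 98.8% and specificity 229/230 = 99.6%, matching the abstract.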

  16. Computational Evaluation of the Traceback Method

    Science.gov (United States)

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  17. Research on psychological evaluation method for nuclear power plant operators

    International Nuclear Information System (INIS)

    Fang Xiang; He Xuhong; Zhao Bingquan

    2007-01-01

    The qualitative and quantitative psychological evaluation methods for nuclear power plant operators were analyzed and discussed in this paper. A comparative analysis of the scope and results of application was carried out between the outline-figure-fitting method and the fuzzy synthetic evaluation method. The research results can serve as a reference for the evaluation of nuclear power plant operators. (authors)

  18. A comparison of statistical methods for genomic selection in a mice population

    Directory of Open Access Journals (Sweden)

    Neves Haroldo HR

    2012-11-01

    Full Text Available Abstract Background The availability of high-density panels of SNP markers has opened new perspectives for marker-assisted selection strategies, such that genotypes for these markers are used to predict the genetic merit of selection candidates. Because the number of markers is often much larger than the number of phenotypes, marker effect estimation is not a trivial task. The objective of this research was to compare the predictive performance of ten different statistical methods employed in genomic selection, by analyzing data from a heterogeneous stock mice population. Results For the five traits analyzed (W6W: weight at six weeks; WGS: growth slope; BL: body length; %CD8+: percentage of CD8+ cells; CD4+/CD8+: ratio between CD4+ and CD8+ cells), within-family predictions were more accurate than across-family predictions, although this superiority in accuracy varied markedly across traits. For within-family prediction, two kernel methods, Reproducing Kernel Hilbert Spaces Regression (RKHS) and Support Vector Regression (SVR), were the most accurate for W6W, while a polygenic model also had comparable performance. A form of ridge regression assuming that all markers contribute to the additive variance (RR_GBLUP) figured among the most accurate for WGS and BL, while two variable selection methods (LASSO and Random Forest, RF) had the greatest predictive abilities for %CD8+ and CD4+/CD8+. RF, RKHS, SVR and RR_GBLUP outperformed the remaining methods in terms of bias and inflation of predictions. Conclusions Methods with large conceptual differences reached very similar predictive abilities and a clear re-ranking of methods was observed as a function of the trait analyzed. Variable selection methods were more accurate than the others in the case of %CD8+ and CD4+/CD8+, and these traits are likely to be influenced by a smaller number of QTL than the others. Judged by their overall performance across traits and computational requirements, RR
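Of the ten methods compared, the ridge formulation (RR_GBLUP) is the easiest to state precisely: every marker effect is shrunk equally toward zero via beta = (X'X + lambda*I)^-1 X'y. The dependency-free sketch below is illustrative; the paper's actual models are richer:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ridge_marker_effects(X, y, lam):
    """RR-BLUP-style estimate: beta = (X'X + lam*I)^-1 X'y, shrinking all
    marker effects equally (X: genotype matrix, rows = animals)."""
    p = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) + (lam if i == j else 0.0)
            for j in range(p)] for i in range(p)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    return solve_linear(XtX, Xty)
```

A genomic EBV for a new animal is then just the dot product of its marker genotypes with the estimated effects.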

  19. In silico method for modelling metabolism and gene product expression at genome scale

    Energy Technology Data Exchange (ETDEWEB)

    Lerman, Joshua A.; Hyduke, Daniel R.; Latif, Haythem; Portnoy, Vasiliy A.; Lewis, Nathan E.; Orth, Jeffrey D.; Rutledge, Alexandra C.; Smith, Richard D.; Adkins, Joshua N.; Zengler, Karsten; Palsson, Bernard O.

    2012-07-03

    Transcription and translation use raw materials and energy generated metabolically to create the macromolecular machinery responsible for all cellular functions, including metabolism. A biochemically accurate model of molecular biology and metabolism will facilitate comprehensive and quantitative computations of an organism's molecular constitution as a function of genetic and environmental parameters. Here we formulate a model of metabolism and macromolecular expression. Prototyping it using the simple microorganism Thermotoga maritima, we show our model accurately simulates variations in cellular composition and gene expression. Moreover, through in silico comparative transcriptomics, the model allows the discovery of new regulons and improvement of the genome and transcription-unit annotations. Our method presents a framework for investigating molecular biology and cellular physiology in silico and may allow quantitative interpretation of multi-omics data sets in the context of an integrated biochemical description of an organism.
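At the core of such genome-scale models is the steady-state constraint S·v = 0 over a stoichiometric matrix S and flux vector v; the metabolism-and-expression formulation extends this with reactions for transcription and translation machinery. A toy check of the constraint (illustrative only, not the authors' code):

```python
def is_steady_state(S, v, tol=1e-9):
    """Check the core constraint of constraint-based models, S @ v == 0:
    every metabolite (row of S) is produced exactly as fast as it is
    consumed by the reaction fluxes v."""
    return all(
        abs(sum(S[i][j] * v[j] for j in range(len(v)))) <= tol
        for i in range(len(S))
    )
```

For a single metabolite produced by one reaction and consumed by another, equal fluxes satisfy the constraint and unequal fluxes violate it.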

  20. Application of genomic and molecular methods to fundamental questions in canine and feline reproductive health.

    Science.gov (United States)

    Meyers-Wallen, V N

    2012-12-01

    Molecular tools are becoming increasingly available to investigate the genetic basis of reproductive disorders in dogs and cats. These were first successful in identifying the molecular basis of diseases inherited as simple Mendelian traits, and these are now being applied to those that are inherited as complex traits. In order to promote similar studies of reproductive disorders, we need to understand how we can play a proactive role in accumulating sufficient case material. We also need to understand these mutation discovery tools and identify collaborators who have experience with their use. The candidate gene and genomic approaches to mutation discovery in dogs are presented, including new sequencing methods and those used to confirm that a mutation has a role in disease pathology. As the final goal is to use our study results to prevent inherited disorders, we need to consider how we can promote efficiency in obtaining DNA test results and providing genetic counselling. © 2012 Blackwell Verlag GmbH.

  1. G-MAPSEQ – a new method for mapping reads to a reference genome

    Directory of Open Access Journals (Sweden)

    Wojciechowski Pawel

    2016-06-01

    Full Text Available The problem of mapping reads to a reference genome is one of the most essential problems in modern computational biology. The most popular algorithms used to solve this problem are based on the Burrows-Wheeler transform and the FM-index. However, these cause some issues with highly mutated sequences due to the limited number of mutations allowed. G-MAPSEQ is a novel, hybrid algorithm combining two interesting methods: alignment-free sequence comparison and ultra-fast sequence alignment. The former is a fast heuristic algorithm which uses k-mer characteristics of nucleotide sequences to find potential mapping places. The latter is a very fast GPU implementation of sequence alignment used to verify the correctness of these mapping positions. The source code of G-MAPSEQ, along with other bioinformatic software, is available at: http://gpualign.cs.put.poznan.pl.
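The alignment-free first stage described above can be illustrated with a naive k-mer filter: genome windows sharing enough k-mers with the read become candidate mapping positions, which the (GPU) alignment stage would then verify. This sketch is our illustration, not the G-MAPSEQ code:

```python
def kmer_set(seq, k):
    """All length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def candidate_positions(read, genome, k=4, min_shared=3):
    """Slide a read-sized window over the genome and keep positions whose
    k-mer content overlaps the read's k-mers enough. Because k-mers are
    compared as sets, a few mismatches in the read still leave enough
    shared k-mers for the true position to survive the filter."""
    window = len(read)
    read_kmers = kmer_set(read, k)
    hits = []
    for pos in range(len(genome) - window + 1):
        shared = len(read_kmers & kmer_set(genome[pos:pos + window], k))
        if shared >= min_shared:
            hits.append(pos)
    return hits
```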

  2. A MITE-based genotyping method to reveal hundreds of DNA polymorphisms in an animal genome after a few generations of artificial selection

    Directory of Open Access Journals (Sweden)

    Tetreau Guillaume

    2008-10-01

    Full Text Available Abstract Background For most organisms, developing hundreds of genetic markers spanning the whole genome still requires excessive if not unrealistic efforts. In this context, there is an obvious need for methodologies allowing the low-cost, fast and high-throughput genotyping of virtually any species, such as the Diversity Arrays Technology (DArT). One of the crucial steps of the DArT technique is genome complexity reduction, which yields a genomic representation characteristic of the studied DNA sample that is necessary for subsequent genotyping. In this article, using the mosquito Aedes aegypti as a study model, we describe a new genome complexity reduction method taking advantage of the abundance of miniature inverted-repeat transposable elements (MITEs) in the genome of this species. Results Ae. aegypti genomic representations were produced following a two-step procedure: (1) restriction digestion of the genomic DNA and simultaneous ligation of a specific adaptor to compatible ends, and (2) amplification of restriction fragments containing a particular MITE element called Pony using two primers, one annealing to the adaptor sequence and one annealing to a conserved sequence motif of the Pony element. Using this protocol, we constructed a library comprising more than 6,000 DArT clones, of which at least 5.70% were highly reliable polymorphic markers for two closely related mosquito strains separated by only a few generations of artificial selection. Within this dataset, linkage disequilibrium was low, and marker redundancy was evaluated at only 2.86%. Most of the detected genetic variability was observed between the two studied mosquito strains, but individuals of the same strain could still be clearly distinguished. Conclusion The new complexity reduction method was particularly efficient at revealing genetic polymorphisms in Ae. aegypti. Overall, our results testify to the flexibility of the DArT genotyping technique and open new

  3. A New Method for the Evaluation of Vaccine Safety Based on Comprehensive Gene Expression Analysis

    Directory of Open Access Journals (Sweden)

    Haruka Momose

    2010-01-01

    Full Text Available For the past 50 years, quality control and safety tests have been used to evaluate vaccine safety. However, conventional animal safety tests need to be improved in several aspects. For example, the number of test animals used needs to be reduced and the test period shortened. It is, therefore, necessary to develop a new vaccine evaluation system. In this review, we show that gene expression patterns are well correlated to biological responses in vaccinated rats. Our findings and methods using experimental biology and genome science provide an important means of assessment for vaccine toxicity.

  4. Evaluating Methods for Evaluating Instruction: The Case of Higher Education

    OpenAIRE

    Bruce A. Weinberg; Belton M. Fleisher; Masanori Hashimoto

    2007-01-01

    This paper develops an original measure of learning in higher education, based on grades in subsequent courses. Using this measure of learning, this paper shows that student evaluations are positively related to current grades but unrelated to learning once current grades are controlled. It offers evidence that the weak relationship between learning and student evaluations arises, in part, because students are unaware of how much they have learned in a course. The paper concludes with a discu...

  5. Genome analysis methods: Malus x domestica [PGDBj Registered plant list, Marker list, QTL list, Plant DB link and Genome analysis methods[Archive

    Lifescience Database Archive (English)

    Full Text Available oints for assembly of corresponding gene sequences ... 103,076 FgenesH, Twinscan, GlimmerHMM, and GeneWise 57,386 GDR; http://www.ros...aceae.org/species/malus/malus_x_domestica/genome_v1.0 v1.0 v1.0 10.1038/ng.654 20802477 ...

  6. A quantitative method to evaluate neutralizer toxicity against Acanthamoeba castellanii.

    Science.gov (United States)

    Buck, S L; Rosenthal, R A

    1996-09-01

    A standard methodology for quantitatively evaluating neutralizer toxicity against Acanthamoeba castellanii does not exist. The objective of this study was to provide a quantitative method for evaluating neutralizer toxicity against A. castellanii. Two methods were evaluated. A quantitative microtiter method for enumerating A. castellanii was evaluated by a 50% lethal dose endpoint method. The microtiter method was compared with the hemacytometer count method. A method for determining the toxicity of neutralizers for antimicrobial agents to A. castellanii was also evaluated. The toxicity to A. castellanii of Dey-Engley neutralizing broth was compared with Page's saline. The microtiter viable cell counts were lower than predicted by the hemacytometer counts. However, the microtiter method gives more reliable counts of viable cells. Dey-Engley neutralizing medium was not toxic to A. castellanii. The method presented gives consistent, reliable results and is simple compared with previous methods.
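The 50% lethal-dose endpoint used to score the microtiter method can be estimated by interpolating between the two doses that bracket 50% viability. The helper below is a generic sketch of that endpoint calculation (our illustration, not the paper's exact procedure):

```python
def ld50(doses, survival_fracs):
    """Dose at which survival crosses 50%, by linear interpolation.
    Assumes doses ascending and survival fractions non-increasing;
    returns None if survival never crosses 50% in the tested range."""
    pairs = list(zip(doses, survival_fracs))
    for (d0, s0), (d1, s1) in zip(pairs, pairs[1:]):
        if s0 >= 0.5 >= s1:
            if s0 == s1:
                return d0
            return d0 + (s0 - 0.5) * (d1 - d0) / (s0 - s1)
    return None
```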

  7. Identification of genomic insertion and flanking sequence of G2-EPSPS and GAT transgenes in soybean using whole genome sequencing method

    Directory of Open Access Journals (Sweden)

    Bingfu Guo

    2016-07-01

    Full Text Available Molecular characterization of sequences flanking exogenous fragment insertions is essential for safety assessment and labeling of genetically modified organisms (GMOs). In this study, the T-DNA insertion sites and flanking sequences were identified in two newly developed transgenic glyphosate-tolerant soybeans, GE-J16 and ZH10-6, based on a whole genome sequencing (WGS) method. About 21 Gb of sequence data (~21× coverage) for each line was generated on the Illumina HiSeq 2500 platform. The junction reads mapping to the boundaries of the T-DNA and flanking sequences in these two events were identified by comparing all sequencing reads with the soybean reference genome and the sequence of the transgenic vector. The putative insertion loci and flanking sequences were further confirmed by PCR amplification, Sanger sequencing, and co-segregation analysis. All these analyses supported that the exogenous T-DNA fragments were integrated at positions Chr19: 50543767-50543792 and Chr17: 7980527-7980541 in these two transgenic lines. Identification of the genomic insertion sites of the G2-EPSPS and GAT transgenes will facilitate the use of their glyphosate-tolerant traits in soybean breeding programs. These results also demonstrated that WGS is a cost-effective and rapid method of identifying sites of T-DNA insertions and flanking sequences in soybean.

  8. Identification of Genomic Insertion and Flanking Sequence of G2-EPSPS and GAT Transgenes in Soybean Using Whole Genome Sequencing Method.

    Science.gov (United States)

    Guo, Bingfu; Guo, Yong; Hong, Huilong; Qiu, Li-Juan

    2016-01-01

    Molecular characterization of sequence flanking exogenous fragment insertion is essential for safety assessment and labeling of genetically modified organism (GMO). In this study, the T-DNA insertion sites and flanking sequences were identified in two newly developed transgenic glyphosate-tolerant soybeans GE-J16 and ZH10-6 based on whole genome sequencing (WGS) method. More than 22.4 Gb sequence data (∼21 × coverage) for each line was generated on Illumina HiSeq 2500 platform. The junction reads mapped to boundaries of T-DNA and flanking sequences in these two events were identified by comparing all sequencing reads with soybean reference genome and sequence of transgenic vector. The putative insertion loci and flanking sequences were further confirmed by PCR amplification, Sanger sequencing, and co-segregation analysis. All these analyses supported that exogenous T-DNA fragments were integrated in positions of Chr19: 50543767-50543792 and Chr17: 7980527-7980541 in these two transgenic lines. Identification of genomic insertion sites of G2-EPSPS and GAT transgenes will facilitate the utilization of their glyphosate-tolerant traits in soybean breeding program. These results also demonstrated that WGS was a cost-effective and rapid method for identifying sites of T-DNA insertions and flanking sequences in soybean.
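The junction-read idea in both records can be sketched with exact string matching: a read supports an insertion boundary if one part of it matches the vector and the rest matches the genome. Real pipelines use aligners with mismatch tolerance; this toy version (our names and logic) is only illustrative:

```python
def find_junction_reads(reads, vector, genome, min_part=6):
    """Return (read, split_point) pairs where the read's prefix matches one
    sequence and its suffix matches the other, suggesting the read spans a
    T-DNA/genome boundary. Exact substring match stands in for alignment."""
    junctions = []
    for read in reads:
        for cut in range(min_part, len(read) - min_part + 1):
            left, right = read[:cut], read[cut:]
            if (left in vector and right in genome) or \
               (left in genome and right in vector):
                junctions.append((read, cut))
                break
    return junctions
```

Reads mapping entirely within the genome (or entirely within the vector) are not reported, which is what lets junction reads pinpoint the insertion locus.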

  9. Quantitative Methods to Evaluate Timetable Attractiveness

    DEFF Research Database (Denmark)

    Schittenhelm, Bernd; Landex, Alex

    2009-01-01

    The article describes how the attractiveness of timetables can be evaluated quantitatively to ensure a consistent evaluation of timetables. Since the different key stakeholders (infrastructure manager, train operating company, customers, and society) have different opinions on what an attractive...... attractiveness index. To identify the preferred timetable structure it could e.g. be useful to apply multi criteria analysis methodology to weight the input from the stakeholders. A route choice model could for instance be used to get a better picture of the transfer patterns in a given timetable, and thereby...

  10. Evaluation of Ponseti method in neglected clubfoot

    Directory of Open Access Journals (Sweden)

    Abhinav Sinha

    2016-01-01

    Conclusions: Painless, supple, plantigrade, and cosmetically acceptable feet were achieved in neglected clubfeet without any extensive surgery. A fair trial of the conservative Ponseti method should be attempted before resorting to extensive soft tissue procedures.

  11. Evaluation of box culvert maintenance methods.

    Science.gov (United States)

    2015-02-01

    Traditional methods, such as using a vactor truck, for clearing culverts greater than 48 inches : of debris and accumulated sediment may be inefficient and costly. A survey of states outside : of Ohio has shown several regularly use remote controlled...

  12. Cryptosporidiosis: multiattribute evaluation of six diagnostic methods.

    OpenAIRE

    MacPherson, D W; McQueen, R

    1993-01-01

    Six diagnostic methods (Giemsa staining, Ziehl-Neelsen staining, auramine-rhodamine staining, Sheather's sugar flotation, an indirect immunofluorescence procedure, and a modified concentration-sugar flotation method) for the detection of Cryptosporidium spp. in stool specimens were compared on the following attributes: diagnostic yield, cost to perform each test, ease of handling, and ability to process large numbers of specimens for screening purposes by batching. A rank ordering from least ...

  13. Performance Evaluation of NIPT in Detection of Chromosomal Copy Number Variants Using Low-Coverage Whole-Genome Sequencing of Plasma DNA.

    Science.gov (United States)

    Liu, Hongtai; Gao, Ya; Hu, Zhiyang; Lin, Linhua; Yin, Xuyang; Wang, Jun; Chen, Dayang; Chen, Fang; Jiang, Hui; Ren, Jinghui; Wang, Wei

    2016-01-01

    The aim of this study was to assess the performance of noninvasive prenatal testing (NIPT) for fetal copy number variants (CNVs) in clinical samples, using a whole-genome sequencing method. A total of 919 archived maternal plasma samples with karyotyping/microarray results, including 33 CNV samples and 886 normal samples collected from September 1, 2011 to May 31, 2013, were enrolled in this study. The samples were randomly rearranged and blindly sequenced by low-coverage (about 7M reads) whole-genome sequencing of plasma DNA. Fetal CNVs were detected by Fetal Copy-number Analysis through Maternal Plasma Sequencing (FCAPS) and compared to the karyotyping/microarray results. Sensitivity and specificity were evaluated. The 33 samples with deletions/duplications ranging from 1 to 129 Mb were detected with CNV size and location consistent with the karyotyping/microarray results. Ten false-positive and two false-negative results were obtained. The sensitivity and specificity of detecting deletions/duplications were 84.21% and 98.42%, respectively. Whole-genome sequencing-based NIPT has high performance in detecting genome-wide CNVs, in particular CNVs >10 Mb, using the current FCAPS algorithm. It is possible to implement the current method in NIPT to prenatally screen for fetal CNVs.
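Low-coverage CNV detection of this kind typically bins the genome, normalizes read counts, and flags runs of bins whose counts deviate from a reference panel. The sketch below shows that generic logic with a z-score cutoff; it is an assumption-laden stand-in for FCAPS, whose algorithm the abstract does not describe:

```python
def cnv_zscores(bin_counts, ref_mean, ref_sd):
    """Per-bin z-scores of read counts against reference-panel statistics;
    sustained positive runs suggest duplications, negative runs deletions."""
    return [(c - ref_mean) / ref_sd for c in bin_counts]

def call_segments(zs, cutoff=3.0, min_bins=3):
    """Collapse consecutive bins beyond the cutoff into (start, end, type)
    calls, requiring at least min_bins bins per call."""
    calls = []
    run_start, sign = None, 0
    for i, z in enumerate(list(zs) + [0.0]):   # sentinel flushes the last run
        s = 1 if z > cutoff else -1 if z < -cutoff else 0
        if s != sign:
            if sign != 0 and i - run_start >= min_bins:
                calls.append((run_start, i, "dup" if sign > 0 else "del"))
            run_start, sign = i, s
    return calls
```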

  14. Method of evaluating the reactor core performance

    International Nuclear Information System (INIS)

    Eguchi, Yumiko.

    1987-01-01

    Purpose: To enable accurate evaluation of core performance in a short time. Constitution: The reactor core is divided equally into 2, 4 or 8 sections, exploiting the symmetry of its structure, and the core-performance evaluation calculation is carried out for at least one of the divided regions. However, the core cannot be said to be completely symmetrical: even when fuels of identical type are loaded, they burn differently depending on their positions, causing differences in the total heat generated. Accordingly, a performance evaluation is conducted for the entire core at predetermined time intervals, a compensation value for each fuel is calculated from the result for the entire core and the corresponding result for the divided core, and the compensation values are added to the divided-core results to correct the calculated evaluation values. This shortens the calculation time and improves the calculation accuracy. (Yoshino, Y.)

  15. A method to produce radiation hybrids for the D-genome chromosomes of wheat (Triticum aestivum L.).

    Science.gov (United States)

    Riera-Lizarazu, O; Leonard, J M; Tiwari, V K; Kianian, S F

    2010-07-01

    Radiation hybrid (RH) mapping is based on radiation-induced chromosome breakage rather than meiotic recombination, as a means to induce marker segregation for mapping. To date, the implementation of this mapping approach in hexaploid (Triticum aestivum L.; 2n = 6x = 42; AABBDD) and tetraploid (T. turgidum L.; 2n = 4x = 28; AABB) wheat has concentrated on the production of mapping panels for individual chromosomes. In order to extend the usefulness of this approach, we have devised a method to produce panels for the simultaneous mapping of all chromosomes of the D subgenome of hexaploid wheat. In this approach, seeds of hexaploid wheat (AABBDD) are irradiated and the surviving plants are crossed to tetraploid wheat (AABB) to produce a mapping panel based on quasi-pentaploids (AABBD). Chromosome lesions in the A and B genomes are largely masked in the quasi-pentaploids due to the presence of A- and B-genome chromosomes from the tetraploid parent. On the other hand, the chromosomes from the D-genome are present in one copy (hemizygous) and allow radiation hybrid mapping of all D-genome chromosomes simultaneously. Our analyses showed that transmission of D-genome chromosomes was apparently normal and that radiation-induced chromosome breakage along D-genome chromosomes was homogeneous. Chromosome breakage levels between D-genome chromosomes were comparable except for chromosome 6D which suffered greater chromosome breakage. These results demonstrate the feasibility of constructing D-genome radiation hybrids (DGRHs) in wheat. Copyright 2010 S. Karger AG, Basel.

  16. Parametric Method For Evaluating Optimal Ship Deadweight

    Directory of Open Access Journals (Sweden)

    Michalski Jan P.

    2014-04-01

    Full Text Available The paper presents a method for choosing the optimal value of cargo ship deadweight. The method may be useful at the stage of establishing the owner's main requirements concerning the ship design parameters, as well as for choosing a proper ship for a given transportation task. The deadweight is determined on the basis of a selected economic measure of the ship's transport effectiveness: the Required Freight Rate (RFR). The mathematical model of the problem is deterministic, and the simplifying assumptions are justified for ships operating in the liner trade. The assumptions are selected such that the solution of the problem is obtained in closed analytical form. The presented method can be useful in simulating ship design parameters at the pre-investment stage or in transportation task studies.
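In its standard form, the Required Freight Rate is the freight rate at which annual revenue just covers operating costs plus annualized capital cost (via a capital recovery factor). The paper's closed-form model is more detailed; this is a generic sketch with illustrative parameter names:

```python
def required_freight_rate(capex, annual_opex, annual_cargo_tonnes,
                          interest=0.08, years=20):
    """RFR per tonne of cargo: (annualized capital + annual operating
    costs) / annual cargo carried. Capital is annualized with the
    capital recovery factor i(1+i)^n / ((1+i)^n - 1)."""
    crf = interest * (1 + interest) ** years / ((1 + interest) ** years - 1)
    annual_capital = capex * crf
    return (annual_capital + annual_opex) / annual_cargo_tonnes
```

In a deadweight optimization, capex, opex and annual cargo all depend on deadweight, and the optimum is the deadweight minimizing the RFR.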

  17. Performance Evaluation of NIPT in Detection of Chromosomal Copy Number Variants Using Low-Coverage Whole-Genome Sequencing of Plasma DNA

    DEFF Research Database (Denmark)

    Liu, Hongtai; Gao, Ya; Hu, Zhiyang

    2016-01-01

    Objectives: The aim of this study was to assess the performance of noninvasive prenatal testing (NIPT) for fetal copy number variants (CNVs) in clinical samples, using a whole-genome sequencing method. Method: A total of 919 archived maternal plasma samples with karyotyping/microarray results, including 33 CNV samples and 886 normal samples from September 1, 2011 to May 31, 2013, were enrolled in this study. The samples were randomly rearranged and blindly sequenced by low-coverage (about 7M reads) whole-genome sequencing of plasma DNA. Fetal CNVs were detected by Fetal Copy-number Analysis through Maternal Plasma Sequencing (FCAPS) to compare to the karyotyping/microarray results. Sensitivity and specificity were evaluated. Results: 33 samples with deletions/duplications ranging from 1 to 129 Mb were detected with CNV size and location consistent with the karyotyping/microarray results...

  18. Land management planning: a method of evaluating alternatives

    Science.gov (United States)

    Andres Weintraub; Richard Adams; Linda Yellin

    1982-01-01

    A method is described for developing and evaluating alternatives in land management planning. A structured set of 15 steps provides a framework for such an evaluation when multiple objectives and uncertainty must be considered in the planning process. The method is consistent with other processes used in organizational evaluation, and allows for the interaction of...

  19. An analysis of normalization methods for Drosophila RNAi genomic screens and development of a robust validation scheme

    Science.gov (United States)

    Wiles, Amy M.; Ravi, Dashnamoorthy; Bhavani, Selvaraj; Bishop, Alexander J.R.

    2010-01-01

    Genome-wide RNAi screening is a powerful, yet relatively immature technology that allows investigation into the role of individual genes in a process of choice. Most RNAi screens identify a large number of genes with a continuous gradient in the assessed phenotype. Screeners must then decide whether to examine just those genes with the most robust phenotype or to examine the full gradient of genes that cause an effect and how to identify the candidate genes to be validated. We have used RNAi in Drosophila cells to examine viability in a 384-well plate format and compare two screens, untreated control and treatment. We compare multiple normalization methods, which take advantage of different features within the data, including quantile normalization, background subtraction, scaling, cellHTS2, and interquartile range measurement. Considering the false-positive potential that arises from RNAi technology, a robust validation method was designed for the purpose of gene selection for future investigations. In a retrospective analysis, we describe the use of validation data to evaluate each normalization method. While no normalization method worked ideally, we found that a combination of two methods, background subtraction followed by quantile normalization and cellHTS2, at different thresholds, captures the most dependable and diverse candidate genes. Thresholds are suggested depending on whether a few candidate genes are desired or a more extensive systems level analysis is sought. In summary, our normalization approaches and experimental design to perform validation experiments are likely to apply to those high-throughput screening systems attempting to identify genes for systems level analysis. PMID:18753689
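Of the normalization methods compared above, quantile normalization is the easiest to sketch: every plate (column) is forced onto a common reference distribution. A minimal NumPy version on toy plate data (values are illustrative):

```python
import numpy as np

def quantile_normalize(x):
    """Quantile-normalize the columns of a (wells x plates) matrix.

    Each column is mapped onto the same reference distribution: the
    mean of the sorted values across columns, assigned by within-column rank.
    """
    order = np.argsort(x, axis=0)                   # per-column sort order
    ranks = np.argsort(order, axis=0)               # rank of each entry in its column
    mean_sorted = np.sort(x, axis=0).mean(axis=1)   # reference distribution
    return mean_sorted[ranks]

# Toy screen data: 4 wells x 3 plates with plate-level scale differences.
plates = np.array([[5., 4., 3.],
                   [2., 1., 4.],
                   [3., 4., 6.],
                   [4., 2., 8.]])
norm = quantile_normalize(plates)
```

After normalization, every plate shares the same set of sorted values, which removes plate-level intensity bias while preserving within-plate ranking.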

  20. Genome analysis methods: Lotus japonicus [PGDBj Registered plant list, Marker list, QTL list, Plant DB link and Genome analysis methods[Archive

    Lifescience Database Archive (English)

    Full Text Available Lotus japonicus Draft 2n=12 472 Mb 2008 Sanger (Clone-based) ... 315.1 Mb 3-5x Paracel Genome Assembler 954 110,940 Kazusa Annotation PipelinE for Lotus japonicus (KAPSEL) 37,971 (v2.5) KDRI; http://www.kazusa.or.jp/lotus/ v2.5 v2.5 10.1093/dnares/dsn008 18511435 ...

  1. Genome analysis methods: Arabidopsis lyrata [PGDBj Registered plant list, Marker list, QTL list, Plant DB link and Genome analysis methods[Archive

    Lifescience Database Archive (English)

    Full Text Available ... 8.3x Arachne 1,309 ... Fgenesh package of ab initio and homology-based gene predictors, EuGene12, and GeneID13 (http://genome.imim.es/software/geneid/) applying dicot and A. thaliana specific matrices 32,670 (v1.0) JGI; http://www.phytozome.net/alyrata v1.0 v1.0 10.1038/ng.807 21478890 ...

  2. A comparison of genomic selection models across time in interior spruce (Picea engelmannii × glauca) using unordered SNP imputation methods.

    Science.gov (United States)

    Ratcliffe, B; El-Dien, O G; Klápště, J; Porth, I; Chen, C; Jaquish, B; El-Kassaby, Y A

    2015-12-01

    Genomic selection (GS) potentially offers an unparalleled advantage over traditional pedigree-based selection (TS) methods by reducing the time commitment required to carry out a single cycle of tree improvement. This quality is particularly appealing to tree breeders, where lengthy improvement cycles are the norm. We explored the prospect of implementing GS for interior spruce (Picea engelmannii × glauca) utilizing a genotyped population of 769 trees belonging to 25 open-pollinated families. A series of repeated tree height measurements through ages 3-40 years permitted the testing of GS methods temporally. The genotyping-by-sequencing (GBS) platform was used for single nucleotide polymorphism (SNP) discovery in conjunction with three unordered imputation methods applied to a data set with 60% missing information. Further, three diverse GS models were evaluated based on predictive accuracy (PA) and their marker effects. Moderate levels of PA (0.31-0.55) were observed and were of sufficient capacity to deliver improved selection response over TS. Additionally, PA varied substantially through time in accordance with spatial competition among trees. As expected, temporal PA was well correlated with age-age genetic correlation (r=0.99), and decreased substantially with increasing difference in age between the training and validation populations (0.04-0.47). Moreover, our imputation comparisons indicate that k-nearest neighbor and singular value decomposition yielded a greater number of SNPs and gave higher predictive accuracies than imputing with the mean. Furthermore, the ridge regression (rrBLUP) and BayesCπ (BCπ) models yielded equal PA to each other, and better PA than the generalized ridge regression heteroscedastic effect model, for the traits evaluated.
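The rrBLUP model mentioned above fits all marker effects jointly by ridge regression. A generic sketch on simulated data follows; the genotype matrix, effect sizes, and shrinkage parameter are hypothetical, not the study's:

```python
import numpy as np

def ridge_marker_effects(Z, y, lam):
    """Ridge-regression (rrBLUP-style) marker effect estimates.

    Z: (individuals x markers) genotype matrix, y: phenotypes.
    Solves (Z'Z + lam*I) b = Z'y for the marker effects b.
    """
    p = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

# Simulated toy data (illustrative only): 50 trees, 20 SNPs coded 0/1/2.
rng = np.random.default_rng(0)
Z = rng.integers(0, 3, size=(50, 20)).astype(float)
b_true = rng.normal(0.0, 1.0, 20)
y = Z @ b_true + rng.normal(0.0, 0.5, 50)    # phenotype = signal + noise

b_hat = ridge_marker_effects(Z, y, lam=1.0)  # lam chosen arbitrarily here
gebv = Z @ b_hat                             # genomic estimated breeding values
```

In practice the shrinkage parameter is tied to the ratio of residual to marker variance rather than fixed at 1.0 as in this toy example.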

  3. Software comparison for evaluating genomic copy number variation for Affymetrix 6.0 SNP array platform

    Directory of Open Access Journals (Sweden)

    Kardia Sharon LR

    2011-05-01

    Full Text Available Abstract Background Copy number data are routinely being extracted from genome-wide association study chips using a variety of software. We empirically evaluated and compared four freely-available software packages designed for Affymetrix SNP chips to estimate copy number: Affymetrix Power Tools (APT), Aroma.Affymetrix, PennCNV and CRLMM. Our evaluation used 1,418 GENOA samples that were genotyped on the Affymetrix Genome-Wide Human SNP Array 6.0. We compared bias and variance in the locus-level copy number data, the concordance amongst regions of copy number gains/deletions and the false-positive rate amongst deleted segments. Results APT had median locus-level copy numbers closest to a value of two, whereas PennCNV and Aroma.Affymetrix had the smallest variability associated with the median copy number. Of those evaluated, only PennCNV provides copy number specific quality-control metrics and identified 136 poor CNV samples. Regions of copy number variation (CNV) were detected using the hidden Markov models provided within PennCNV and CRLMM/VanillaIce. PennCNV detected more CNVs than CRLMM/VanillaIce; the median number of CNVs detected per sample was 39 and 30, respectively. PennCNV detected most of the regions that CRLMM/VanillaIce did as well as additional CNV regions. The median concordance between PennCNV and CRLMM/VanillaIce was 47.9% for duplications and 51.5% for deletions. The estimated false-positive rate associated with deletions was similar for PennCNV and CRLMM/VanillaIce. Conclusions If the objective is to perform statistical tests on the locus-level copy number data, our empirical results suggest that PennCNV or Aroma.Affymetrix is optimal. If the objective is to perform statistical tests on the summarized segmented data then PennCNV would be preferred over CRLMM/VanillaIce. Specifically, PennCNV allows the analyst to estimate locus-level copy number, perform segmentation and evaluate CNV-specific quality-control metrics within a single software package.

  4. Software comparison for evaluating genomic copy number variation for Affymetrix 6.0 SNP array platform.

    Science.gov (United States)

    Eckel-Passow, Jeanette E; Atkinson, Elizabeth J; Maharjan, Sooraj; Kardia, Sharon L R; de Andrade, Mariza

    2011-05-31

    Copy number data are routinely being extracted from genome-wide association study chips using a variety of software. We empirically evaluated and compared four freely-available software packages designed for Affymetrix SNP chips to estimate copy number: Affymetrix Power Tools (APT), Aroma.Affymetrix, PennCNV and CRLMM. Our evaluation used 1,418 GENOA samples that were genotyped on the Affymetrix Genome-Wide Human SNP Array 6.0. We compared bias and variance in the locus-level copy number data, the concordance amongst regions of copy number gains/deletions and the false-positive rate amongst deleted segments. APT had median locus-level copy numbers closest to a value of two, whereas PennCNV and Aroma.Affymetrix had the smallest variability associated with the median copy number. Of those evaluated, only PennCNV provides copy number specific quality-control metrics and identified 136 poor CNV samples. Regions of copy number variation (CNV) were detected using the hidden Markov models provided within PennCNV and CRLMM/VanillaIce. PennCNV detected more CNVs than CRLMM/VanillaIce; the median number of CNVs detected per sample was 39 and 30, respectively. PennCNV detected most of the regions that CRLMM/VanillaIce did as well as additional CNV regions. The median concordance between PennCNV and CRLMM/VanillaIce was 47.9% for duplications and 51.5% for deletions. The estimated false-positive rate associated with deletions was similar for PennCNV and CRLMM/VanillaIce. If the objective is to perform statistical tests on the locus-level copy number data, our empirical results suggest that PennCNV or Aroma.Affymetrix is optimal. If the objective is to perform statistical tests on the summarized segmented data then PennCNV would be preferred over CRLMM/VanillaIce. Specifically, PennCNV allows the analyst to estimate locus-level copy number, perform segmentation and evaluate CNV-specific quality-control metrics within a single software package. 
PennCNV has relatively small bias ...
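Concordance between two CNV call sets, as reported in the study above, requires a matching rule. A common choice (assumed here; the abstract does not state its criterion) is 50% reciprocal overlap:

```python
def overlaps(a, b, min_frac=0.5):
    """Reciprocal-overlap test for two CNV intervals (start, end)."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    return (inter >= min_frac * (a[1] - a[0])
            and inter >= min_frac * (b[1] - b[0]))

def concordance(calls_a, calls_b, min_frac=0.5):
    """Fraction of calls in A matched by at least one call in B."""
    hits = sum(any(overlaps(a, b, min_frac) for b in calls_b)
               for a in calls_a)
    return hits / len(calls_a) if calls_a else 0.0

# Toy call sets (coordinates are illustrative): only the first interval
# in set A is reciprocally matched in set B.
c = concordance([(100, 200), (500, 600), (1000, 1500)],
                [(120, 210), (2000, 2100)])
```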

  5. Comprehensive evaluation of SNP identification with the Restriction Enzyme-based Reduced Representation Library (RRL) method

    Directory of Open Access Journals (Sweden)

    Du Ye

    2012-02-01

    Full Text Available Abstract Background The Restriction Enzyme-based Reduced Representation Library (RRL) method represents a relatively feasible and flexible strategy used for Single Nucleotide Polymorphism (SNP) identification in different species. It has the remarkable advantage of reducing the complexity of the genome by orders of magnitude. However, a comprehensive evaluation of the actual efficacy of SNP identification by this method is still unavailable. Results In order to evaluate the efficacy of the Restriction Enzyme-based RRL method, we selected the Tsp 45I enzyme, which covers a 266 Mb flanking region of the enzyme recognition site according to in silico simulation on the human reference genome. We then sequenced the YH RRL after Tsp 45I treatment and obtained reads of which 80.8% were mapped to the target region with a 20-fold average coverage; about 96.8% of the target region was covered by at least one read, and 257 K SNPs were identified in the region using SOAPsnp software. Compared with whole-genome resequencing data, we observed a false discovery rate (FDR) of 13.95% and a false negative rate (FNR) of 25.90%. The concordance rate of homozygote loci was over 99.8%, but that of heterozygote loci was only 92.56%. Repeat sequences and base quality were shown to have a great effect on the accuracy of SNP calling, and SNPs in recognition sites contributed evidently to the high FNR and the low concordance rate of heterozygotes. Our results indicated that repeat masking and highly stringent filter criteria could significantly decrease both FDR and FNR. Conclusions This study demonstrates that the Restriction Enzyme-based RRL method is effective for SNP identification. The results highlight the biases and method-derived defects inherent in this approach and emphasize the points requiring special attention.
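The FDR and FNR reported above follow directly from comparing a call set against a truth set. A sketch with hypothetical SNP positions (not the study's data):

```python
def fdr_fnr(called, truth):
    """False discovery rate and false negative rate for SNP call sets.

    FDR = false positives / all calls; FNR = missed true SNPs / all true SNPs.
    """
    called, truth = set(called), set(truth)
    fp = len(called - truth)   # called but not in the truth set
    fn = len(truth - called)   # true SNPs that were missed
    fdr = fp / len(called) if called else 0.0
    fnr = fn / len(truth) if truth else 0.0
    return fdr, fnr

# Illustrative positions only.
called = {"chr1:100", "chr1:200", "chr2:50"}
truth = {"chr1:100", "chr2:50", "chr3:10", "chr3:20"}
fdr, fnr = fdr_fnr(called, truth)
```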

  6. Analysis of IAV Replication and Co-infection Dynamics by a Versatile RNA Viral Genome Labeling Method

    Directory of Open Access Journals (Sweden)

    Dan Dou

    2017-07-01

    Full Text Available Genome delivery to the proper cellular compartment for transcription and replication is a primary goal of viruses. However, methods for analyzing viral genome localization and differentiating genomes with high identity are lacking, making it difficult to investigate entry-related processes and co-examine heterogeneous RNA viral populations. Here, we present an RNA labeling approach for single-cell analysis of RNA viral replication and co-infection dynamics in situ, which uses the versatility of padlock probes. We applied this method to identify influenza A virus (IAV) infections in cells and lung tissue with single-nucleotide specificity and to classify entry and replication stages by gene segment localization. Extending the classification strategy to co-infections of IAVs with single-nucleotide variations, we found that the dependence on intracellular trafficking places a time restriction on secondary co-infections necessary for genome reassortment. Altogether, these data demonstrate how RNA viral genome labeling can help dissect entry and co-infections.

  7. Evaluations of Three Methods for Remote Training

    Science.gov (United States)

    Woolford, B.; Chmielewski, C.; Pandya, A.; Adolf, J.; Whitmore, M.; Berman, A.; Maida, J.

    1999-01-01

    Long duration space missions require a change in training methods and technologies. For Shuttle missions, crew members could train for all the planned procedures, and carry documentation of planned procedures for a variety of contingencies. As International Space Station (ISS) missions of three months or longer are carried out, many more tasks will need to be performed for which little or no training was received prior to launch. Eventually, exploration missions will last several years, and communications with Earth will have long time delays or be impossible at times. This series of three studies was performed to identify the advantages and disadvantages of three types of training for self-instruction: video-conferencing, multimedia, and virtual reality. Each study compared two types of training methods on two different types of tasks. In two of the studies, the subjects were in an isolated, confined environment analogous to space flight; the third study was performed in a laboratory.

  8. Systematic evaluation of nondestructive testing methods

    International Nuclear Information System (INIS)

    Segal, Y.; Notea, A.; Segal, E.

    1977-01-01

    The main task of an NDT engineer is to select the best method, considering the cost-benefit value of different available systems and taking into account the special existing constraints. The aim of the paper is to suggest a tool that will enable characterization of measuring systems. The derivation of the characterization parameters and functions has to be general, i.e., suitable for all possible measuring methods, independent of their principle of operation. Quite often the properties measured during the NDT procedure are not the wanted ones, but there must be a correlation between the measured property and the performance of the product. One has to bear in mind that the ultimate choice between systems is not, in practice, just based on the mathematical optimization approach that is presented. Factors like cost-benefit, availability of trained manpower, service, real-time information, weight, volume, etc., may be crucial problems, and they may well dictate the final selection

  9. Evaluation of toothbrush disinfection via different methods

    Directory of Open Access Journals (Sweden)

    Adil BASMAN

    2016-01-01

    Full Text Available The aim of this study was to compare the efficacy of using a dishwasher or different chemical agents, including 0.12% chlorhexidine gluconate, 2% sodium hypochlorite (NaOCl), a mouthrinse containing essential oils and alcohol, and 50% white vinegar, for toothbrush disinfection. Sixty volunteers were divided into five experimental groups and one control group (n = 10). Participants brushed their teeth using toothbrushes with standard bristles, and they disinfected the toothbrushes according to the instructed methods. Bacterial contamination of the toothbrushes was compared between the experimental groups and the control group. Data were analyzed by Kruskal–Wallis and Duncan's multiple range tests, with 95% confidence intervals for multiple comparisons. Bacterial contamination of toothbrushes from individuals in the experimental groups differed from those in the control group (p < 0.05). The most effective method for elimination of all tested bacterial species was 50% white vinegar, followed in order by 2% NaOCl, the mouthrinse containing essential oils and alcohol, 0.12% chlorhexidine gluconate, dishwasher use, and tap water (control). The results of this study show that the most effective method for disinfecting toothbrushes was submersion in 50% white vinegar, which is cost-effective, easy to access, and appropriate for household use.

  10. Performance Evaluation Methods for Assistive Robotic Technology

    Science.gov (United States)

    Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.

    Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.

  11. A method to customize population-specific arrays for genome-wide association testing

    NARCIS (Netherlands)

    Ehli, E.A.; Abdellaoui, A.; Fedko, I.O.; Grieser, C.; Nohzadeh-Malakshah, S.; Willemsen, G.; de Geus, E.J.C.; Boomsma, D.I.; Davies, G.E.; Hottenga, J.J.

    2017-01-01

    As an example of optimizing population-specific genotyping assays using a whole-genome sequence reference set, we detail the approach followed to design the Axiom-NL array, which is characterized by an improved imputation backbone based on the Genome of the Netherlands (GoNL) reference sequence.

  12. A simple, rapid and efficient method for the extraction of genomic ...

    African Journals Online (AJOL)

    The isolation of intact, high-molecular-mass genomic DNA is essential for many molecular biology applications including long range PCR, endonuclease restriction digestion, Southern blot analysis, and genomic library construction. Many protocols are available for the extraction of DNA from plant material, but obtaining it is ...

  13. Whole-genome regression and prediction methods applied to plant and animal breeding

    NARCIS (Netherlands)

    Los Campos, De G.; Hickey, J.M.; Pong-Wong, R.; Daetwyler, H.D.; Calus, M.P.L.

    2013-01-01

    Genomic-enabled prediction is becoming increasingly important in animal and plant breeding, and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of

  14. A simple and rapid lysis method for preparation of genomic DNA ...

    African Journals Online (AJOL)

    Moreover, the resultant genomic DNA was in good quantity and quality and can be used successfully for restriction endonucleases digestion, PCR amplification and others types of molecular biology manipulations. Keywords: Genomic DNA, lysis, carvacrol, Gram-negative bacteria, Escherichia coli, Erwinia chrysanthemi ...

  15. Platform comparison for evaluation of ALK protein immunohistochemical expression, genomic copy number and hotspot mutation status in neuroblastomas.

    Directory of Open Access Journals (Sweden)

    Benedict Yan

    Full Text Available ALK is an established causative oncogenic driver in neuroblastoma, and is likely to emerge as a routine biomarker in neuroblastoma diagnostics. At present, the optimal strategy for clinical diagnostic evaluation of ALK protein, genomic and hotspot mutation status is not well-studied. We evaluated ALK immunohistochemical (IHC) protein expression using three different antibodies (ALK1, 5A4 and D5F3 clones), ALK genomic status using single-color chromogenic in situ hybridization (CISH), and ALK hotspot mutation status using conventional Sanger sequencing and a next-generation sequencing platform (Ion Torrent Personal Genome Machine, IT-PGM), in archival formalin-fixed, paraffin-embedded neuroblastoma samples. We found a significant difference in IHC results using the three different antibodies, with the highest percentage of positive cases seen with D5F3 immunohistochemistry. Correlation with ALK genomic and hotspot mutational status revealed that the majority of D5F3 ALK-positive cases did not possess either ALK genomic amplification or hotspot mutations. Comparison of sequencing platforms showed a perfect correlation between conventional Sanger and IT-PGM sequencing. Our findings suggest that D5F3 immunohistochemistry, single-color CISH and IT-PGM sequencing are suitable assays for evaluation of ALK status in future neuroblastoma clinical trials.

  16. Auditing as method of QA programme evaluation

    International Nuclear Information System (INIS)

    Wilhelm, H.

    1980-01-01

    The status and adequacy of a quality assurance programme should be regularly reviewed by the cognizant management. The programme audit is an independent review to determine compliance with the respective quality assurance requirements and to determine the effectiveness of that programme. This lecture gives an introduction to the method of performing audits under the following topics: 1. Definition and purpose of quality audits. 2. Organization of the quality audit function. 3. Unique requirements for auditors. 4. Audit preparation and planning. 5. Conduct of the audit. 6. Reporting the audit results. 7. Follow-up activities. (RW)

  17. Credit Institutions Management Evaluation using Quantitative Methods

    Directory of Open Access Journals (Sweden)

    Nicolae Dardac

    2006-04-01

    Full Text Available The supervision of credit institutions by state authorities is mostly assimilated with systemic risk prevention. At present, the mission is oriented towards analyzing the risk profile of the credit institutions, and the mechanisms and existing systems as management tools providing bank rules with the proper instruments to avoid and control specific bank risks. Rating systems are sophisticated measurement instruments capable of assuring the above objectives, such as success in banking risk management. Management quality is one of the most important elements in the set of variables used in the rating process for credit operations. Evaluation of this quality is, generally speaking, founded on qualitative appreciations, which can induce subjectivism and heterogeneity into the rating. The problem can be solved by complementary use of quantitative techniques such as DEA (Data Envelopment Analysis).

  19. [Evaluation of Wits appraisal with superimposition method].

    Science.gov (United States)

    Xu, T; Ahn, J; Baumrind, S

    1999-07-01

    To compare the conventional Wits appraisal with the superimposed Wits appraisal in evaluating the change in sagittal jaw relationship between pre- and post-orthodontic treatment. The sample consisted of pre- and post-treatment lateral head films of 48 cases. Computerized digitizing was used to obtain the cephalometric landmarks and measure the conventional Wits value, the superimposed Wits value and the ANB angle. The correlation analysis among these three measures was done with the SAS statistical package. The change in ANB angle has a higher correlation with the change in the superimposed Wits than with that of the conventional Wits; the r-value is as high as 0.849 (P < 0.001). The superimposed Wits appraisal reflects the change of sagittal jaw relationship more objectively than the conventional one.
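The reported r-value is a plain Pearson correlation between paired changes. A sketch with made-up measurements (illustrative values, not the study's 48 cases):

```python
import numpy as np

# Hypothetical paired changes per case: ANB angle [degrees] and
# superimposed Wits value [mm]; values are invented for illustration.
anb_change = np.array([1.2, -0.5, 2.0, 0.3, -1.1, 1.7])
wits_change = np.array([2.1, -0.8, 3.5, 0.4, -2.0, 2.9])

# Pearson correlation coefficient between the two change measures.
r = np.corrcoef(anb_change, wits_change)[0, 1]
```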

  20. A Design Process Evaluation Method for Sustainable Buildings

    OpenAIRE

    Christopher S. Magent; Sinem Korkmaz; Leidy E Klotz; David R. Riley

    2009-01-01

    This research develops a technique to model and evaluate the design process for sustainable buildings. Three case studies were conducted to validate this method. The resulting design process evaluation method for sustainable buildings (DPEMSB) may assist project teams in designing their own sustainable building design processes. This method helps to identify critical decisions in the design process, to evaluate these decisions for time and sequence, to define information required for decision...

  1. EVALUATION METHODS USED FOR TANGIBLE ASSETS BY ECONOMIC ENTITIES

    OpenAIRE

    Csongor CSŐSZ; Partenie DUMBRAVĂ

    2014-01-01

    At many entities the net asset value is influenced by the evaluation methods applied for tangible assets, because the value of intangible assets and financial assets is small in most cases. The objective of this paper is to analyze the differences between the procedures / methods of evaluation applied by micro and small entities and medium and large entities for tangible assets in Romania and Hungary. Furthermore, we analyze the differences between the procedures / methods of evaluation appli...

  2. Evaluating different methods of microarray data normalization

    Directory of Open Access Journals (Sweden)

    Ferreira Carlos

    2006-10-01

    Full Text Available Abstract Background With the development of DNA hybridization microarray technologies, nowadays it is possible to simultaneously assess the expression levels of thousands to tens of thousands of genes. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Due to technical biases, normalization of the intensity levels is a pre-requisite to performing further statistical analyses. Therefore, choosing a suitable approach for normalization can be critical, deserving judicious consideration. Results Here, we considered three commonly used normalization approaches, namely: Loess, Splines and Wavelets, and two non-parametric regression methods, which have yet to be used for normalization, namely, the Kernel smoothing and Support Vector Regression. The results obtained were compared using artificial microarray data and benchmark studies. The results indicate that the Support Vector Regression is the most robust to outliers and that Kernel is the worst normalization technique, while no practical differences were observed between Loess, Splines and Wavelets. Conclusion In face of our results, the Support Vector Regression is favored for microarray normalization due to its superiority when compared to the other methods for its robustness in estimating the normalization curve.
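Among the approaches compared above, kernel smoothing fits a normalization curve directly from the data. A minimal Nadaraya-Watson sketch on synthetic intensity data (all values below are illustrative):

```python
import numpy as np

def kernel_smooth(x, y, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    # w[i, j] = K((x_i - x_j) / h); each fitted value is a weighted mean of y.
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

# Synthetic microarray-style data: log-ratios M with an intensity-dependent
# trend over intensities A; normalization subtracts the fitted trend.
A = np.linspace(6, 14, 200)
M = 0.3 * np.sin(A) + 0.05 * np.random.default_rng(1).normal(size=200)
M_norm = M - kernel_smooth(A, M, bandwidth=0.5)
```

The bandwidth controls the bias-variance trade-off of the fitted normalization curve; 0.5 is an arbitrary choice for this synthetic example.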

  3. A new method used to evaluate organic working fluids

    International Nuclear Information System (INIS)

    Zhang, Xinxin; He, Maogang; Wang, Jingfu

    2014-01-01

    In this paper, we propose a method named “Weight Classification-Hasse Dominance” to evaluate organic working fluids. This new method combines the advantages of both the method of weight determination and the Hasse Diagram Technique (HDT). It can be used to evaluate the thermodynamic performance, environmental protection indicator, and safety requirement of an organic working fluid simultaneously. This evaluation method can offer a good reference for working fluid selection. Using this method, the organic working fluids which have been phased out or will be phased out by the Montreal Protocol, including CFCs (chlorofluorocarbons), HCFCs (hydrochlorofluorocarbons), and HFCs (hydrofluorocarbons), were evaluated. Moreover, HCs (hydrocarbons) can be considered a completely different kind of organic working fluid from CFCs, HCFCs, and HFCs according to the comparison based on this new evaluation method. - Highlights: • We propose a new method used to evaluate organic working fluids. • This evaluation method can offer good reference for working fluid selection. • CFC, HCFC, and HFC working fluids were evaluated using this evaluation method. • HC can be considered as a totally different working fluid from CFC, HCFC, and HFC
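The Hasse Diagram Technique orders alternatives by componentwise dominance. A minimal sketch with hypothetical working-fluid scores (names and values invented; higher taken as better on every criterion):

```python
def dominates(a, b):
    """True if a is at least as good as b on every criterion and strictly
    better on at least one: the partial order underlying a Hasse diagram."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

# Hypothetical scores: (thermodynamic performance, environmental
# indicator, safety requirement), higher is better.
fluids = {"A": (3, 2, 2), "B": (1, 2, 2), "C": (2, 3, 1)}

# Dominance edges of the Hasse-style partial order over these fluids.
edges = [(p, q) for p in fluids for q in fluids
         if dominates(fluids[p], fluids[q])]
```

Fluids that neither dominate nor are dominated (here "C" against the others) stay incomparable, which is exactly what the Hasse diagram makes visible.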

  4. Household batteries: Evaluation of collection methods

    Energy Technology Data Exchange (ETDEWEB)

    Seeberger, D.A.

    1992-12-31

    While it is difficult to prove that a specific material is causing contamination in a landfill, tests have been conducted at waste-to-energy facilities that indicate that household batteries contribute significant amounts of heavy metals to both air emissions and ash residue. Hennepin County, MN, used a dual approach for developing and implementing a special household battery collection. Alternative collection methods were examined; test collections were conducted. The second phase examined operating and disposal policy issues. This report describes the results of the grant project, moving from a broad examination of the construction and content of batteries, to a description of the pilot collection programs, and ending with a discussion of variables affecting the cost and operation of a comprehensive battery collection program. Three out-of-state companies (PA, NY) were found that accept spent batteries; difficulties in reclaiming household batteries are discussed.

  6. A re-evaluation of the taxonomy of phytopathogenic genera Dickeya and Pectobacterium using whole-genome sequencing data.

    Science.gov (United States)

    Zhang, Yucheng; Fan, Qiurong; Loria, Rosemary

    2016-06-01

    The genera Dickeya and Pectobacterium contain important plant pathogens. However, species from these genera are often poorly defined and some new isolates could not be assigned to any of the existing species. Due to their wide geographic distribution and lethality, a reliable and easy classification scheme for these pathogens is urgently needed. The low cost of next-generation sequencing has generated an upsurge of microbial genome sequences. Here, we present a phylogenomic and systematic analysis of the genera Dickeya and Pectobacterium. Eighty-three genomes from these two genera as well as two Brenneria genomes were included in this study. We estimated average nucleotide identity (ANI) and in silico DNA-DNA hybridization (isDDH) values in combination with the whole-genome-based phylogeny from 895 single-copy orthologous genes using these 85 genomes. Strains with ANI values of ≥96% and isDDH values of ≥70% were consistently grouped together in the phylogenetic tree. ANI, isDDH, and whole-genome-based phylogeny all support the elevation of Pectobacterium carotovorum's four subspecies (actinidiae, odoriferum, carotovorum, and brasiliense) to the species level. We also found that some strains could not be assigned to any of the existing species, indicating that these strains represent novel species. Furthermore, our study revealed that at least ten tested genomes from these genera were misnamed in GenBank. This work highlights the potential of using whole genome sequences to re-evaluate the current prokaryotic species definition and to establish a unified prokaryotic species definition framework for taxonomically challenging genera. Copyright © 2016 Elsevier GmbH. All rights reserved.
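    The ANI/isDDH cutoffs described above amount to a simple pairwise clustering rule. A minimal sketch in Python, using made-up pairwise values (not data from the study) and a small union-find to group strains into putative species:

```python
# Sketch: group strains into putative species when pairwise ANI >= 96%
# and isDDH >= 70%, as in the thresholds above. Values are hypothetical.

def group_strains(pairs, ani_cut=96.0, isddh_cut=70.0):
    """pairs: dict mapping (strainA, strainB) -> (ani, isddh).
    Returns a list of sets, one per putative species."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for (a, b), (ani, isddh) in pairs.items():
        find(a), find(b)                   # register both strains
        if ani >= ani_cut and isddh >= isddh_cut:
            union(a, b)

    clusters = {}
    for s in parent:
        clusters.setdefault(find(s), set()).add(s)
    return list(clusters.values())

# Hypothetical strain names and values, for illustration only:
pairs = {
    ("Pc_odoriferum_1", "Pc_odoriferum_2"): (98.5, 85.0),   # same species
    ("Pc_odoriferum_1", "Pc_brasiliense_1"): (93.0, 52.0),  # distinct
}
species = group_strains(pairs)
print(len(species))  # 2 putative species
```

    Real analyses would derive the pairwise values from tools such as ANI calculators and in silico DDH estimators; the clustering step itself is this simple.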

  7. A Rapid and Efficient Method for Purifying High Quality Total RNA from Peaches (Prunus persica) for Functional Genomics Analyses

    Directory of Open Access Journals (Sweden)

    LEE MEISEL

    2005-01-01

    Full Text Available Prunus persica has been proposed as a genomic model for deciduous trees and the Rosaceae family. Optimized protocols for RNA isolation are necessary to further advance studies in this model species such that functional genomics analyses may be performed. Here we present an optimized protocol to rapidly and efficiently purify high quality total RNA from peach fruits (Prunus persica). Isolating high-quality RNA from fruit tissue is often difficult due to large quantities of polysaccharides and polyphenolic compounds that accumulate in this tissue and co-purify with the RNA. Here we demonstrate that a modified version of the method used to isolate RNA from pine trees and the woody plant Cinnamomun tenuipilum is ideal for isolating high quality RNA from the fruits of Prunus persica. This RNA may be used for many functional genomic based experiments such as RT-PCR and the construction of large-insert cDNA libraries.

  8. Partial digestion with restriction enzymes of ultraviolet-irradiated human genomic DNA: a method for identifying restriction site polymorphisms

    International Nuclear Information System (INIS)

    Nobile, C.; Romeo, G.

    1988-01-01

    A method for partial digestion of total human DNA with restriction enzymes has been developed on the basis of a principle already utilized by P.A. Whittaker and E. Southern for the analysis of phage lambda recombinants. Total human DNA irradiated with UV light of 254 nm is partially digested by restriction enzymes that recognize sequences containing adjacent thymidines, because of TT dimer formation. The products resulting from partial digestion of specific genomic regions are detected in Southern blots by genomic-unique DNA probes with high reproducibility. This procedure is rapid and simple to perform because the same conditions of UV irradiation are used for different enzymes and probes. It is shown that restriction site polymorphisms occurring in the genomic regions analyzed are recognized by the allelic partial digest patterns they determine.

  9. GONAD: A Novel CRISPR/Cas9 Genome Editing Method that Does Not Require Ex Vivo Handling of Embryos.

    Science.gov (United States)

    Gurumurthy, Channabasavaiah B; Takahashi, Gou; Wada, Kenta; Miura, Hiromi; Sato, Masahiro; Ohtsuka, Masato

    2016-01-01

    Transgenic technologies used for creating a desired genomic change in animals involve three critical steps: isolation of fertilized eggs, microinjection of transgenic DNA into them and their subsequent transfer to recipient females. These ex vivo steps have been widely used for over 3 decades and they were also readily adapted for the latest genome editing technologies such as ZFNs, TALENs, and CRISPR/Cas9 systems. We recently developed a method called GONAD (Genome editing via Oviductal Nucleic Acids Delivery) that does not require all the three critical steps of transgenesis and therefore relieves the bottlenecks of widely used animal transgenic technologies. Here we provide protocols for the GONAD system. Copyright © 2016 John Wiley & Sons, Inc.

  10. MERITED LABOUR: METHODOLOGY AND METHODS OF EVALUATION

    Directory of Open Access Journals (Sweden)

    N.Z. Shaimardanov

    2009-12-01

    Full Text Available A methodological approach to the analysis of the economic category "merited labour", in which the employee is considered simultaneously as a subject and an object of work, is developed, and a method for calculating an integral index of merited labour for the subjects of Russia over time and across industries and municipalities of the Sverdlovsk region is justified. Ratings of integral indicators of quality of life in terms of the "merited labour" program are compiled. Integral indices and ratings of decent work in the RF subjects and municipalities of the Sverdlovsk region are calculated and analyzed. The practical significance of the work lies in its possible use in monitoring the social and labor sphere of the region, taking into account regional specificity, and in the application of the integral index of merited labour, which gives a qualitative description of the social and labor sphere of the region and estimates the effectiveness of executive-power policies in this area.

  11. Evaluation of splenic autotransplants by radionuclide methods

    International Nuclear Information System (INIS)

    Nawaz, K.; Nema, T.A.; Al-Mohannadi, S.; Abdel-Dayem, H.M.

    1986-01-01

    The viability of omental autotransplantation of splenic tissue after splenectomy has been disputed. The authors followed up splenic implants by imaging with either Tc-99m tin colloid or heat-damaged RBCs to determine how early implants can be visualized and whether a difference exists between patients who underwent emergency splenectomy for trauma (nine patients) and those who underwent elective splenectomy (seven patients). In the latter group, splenectomy was performed for portal hypertension in six patients and for a hematologic disorder (Wiskott-Aldrich syndrome) in one. All patients were imaged 2-4 weeks and 6 months after surgery. In the first group, seven implants were seen at 2-4 weeks and all nine were seen by 6 months. In the second group, only two implants were seen at 2-4 weeks and four were seen at 6 months; two implants were not visualized even at 6 months. The implant of the patient with the hematologic disorder was not seen before 6 months. The authors conclude that splenic implants can be visualized by scintigraphic methods as early as 2-4 weeks after surgery, and that by 6 months all implants from normal spleen are viable. By contrast, spleen implants placed for portal hypertension or hematologic disorders may fail.

  12. Novel methods for physical mapping of the human genome applied to the long arm of chromosome 5

    Energy Technology Data Exchange (ETDEWEB)

    McClelland, M.

    1991-12-01

    The object of our current grant is to develop novel methods for mapping of the human genome. The techniques to be assessed were: (1) three methods for the production of unique sequence clones from the region of interest; (2) novel methods for the production and separation of multi-megabase DNA fragments; (3) methods for the production of "physical linking clones" that contain rare restriction sites; (4) application of these methods and available resources to map the region of interest. Progress includes: In the first two years methods were developed for physical mapping and the production of arrayed clones; we have concentrated on developing rare-cleavage tools based on restriction endonucleases and methylases; we studied the effect of methylation on enzymes used for PFE mapping of the human genome; we characterized two new isoschizomers of rare-cutting endonucleases; we developed a reliable way to produce partial digests of DNA in agarose plugs and applied it to the human genome; and we applied a method to double the apparent specificity of the "rare-cutter" endonucleases.

  13. Novel methods for physical mapping of the human genome applied to the long arm of chromosome 5. Final report

    Energy Technology Data Exchange (ETDEWEB)

    McClelland, M.

    1991-12-01

    The object of our current grant is to develop novel methods for mapping of the human genome. The techniques to be assessed were: (1) three methods for the production of unique sequence clones from the region of interest; (2) novel methods for the production and separation of multi-megabase DNA fragments; (3) methods for the production of "physical linking clones" that contain rare restriction sites; (4) application of these methods and available resources to map the region of interest. Progress includes: In the first two years methods were developed for physical mapping and the production of arrayed clones; we have concentrated on developing rare-cleavage tools based on restriction endonucleases and methylases; we studied the effect of methylation on enzymes used for PFE mapping of the human genome; we characterized two new isoschizomers of rare-cutting endonucleases; we developed a reliable way to produce partial digests of DNA in agarose plugs and applied it to the human genome; and we applied a method to double the apparent specificity of the "rare-cutter" endonucleases.

  14. DPS - a rapid method for genome sequencing of DNA-containing bacteriophages directly from a single plaque.

    Science.gov (United States)

    Kot, Witold; Vogensen, Finn K; Sørensen, Søren J; Hansen, Lars H

    2014-02-01

    Bacteriophages (phages) coexist with bacteria in all environments and influence microbial diversity, evolution and industrial production processes. As a result of this major impact of phages on microbes, tools that allow rapid characterization of phages are needed. Today, one of the most powerful methods for characterization of phages is determination of the whole genome using high throughput sequencing approaches. Here direct plaque sequencing (DPS) is described, a rapid method that allows easy full genome sequencing of DNA-containing phages using the Nextera XT™ kit. A combination of host-DNA removal followed by purification and concentration of the viral DNA allowed the construction of Illumina-compatible sequencing libraries using the Nextera™ XT technology directly from single phage plaques without any whole genome amplification step. This method was tested on three Caudovirales phages: ϕ29 (Podoviridae), P113g (Siphoviridae) and T4 (Myoviridae), which are representative of >96% of all known phages, and the phages were sequenced using the Illumina MiSeq platform. Successful de novo assembly of the viral genomes was possible. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. A new method for isolation of purified genomic DNA from haemosporidian parasites inhabiting nucleated red blood cells.

    Science.gov (United States)

    Palinauskas, Vaidas; Križanauskienė, Asta; Iezhova, Tatjana A; Bolshakov, Casimir V; Jönsson, Jane; Bensch, Staffan; Valkiūnas, Gediminas

    2013-03-01

    During the last 10 years, whole genomes have been sequenced from an increasing number of organisms. However, there are still no data on complete genomes of avian and lizard Plasmodium spp. or other haemosporidian parasites. In contrast to mammals, bird and reptile red blood cells have nuclei, and thus the blood of these vertebrates contains a high amount of host DNA; this complicates the preparation of purified template DNA from haemosporidian parasites, which has been the main obstacle for genomic studies of these parasites. In the present study we describe a method that generates a large amount of purified avian haemosporidian DNA. The method is based on a unique biological feature of haemosporidian parasites, namely that mature gametocytes in blood can be induced to exflagellate in vitro. This results in the development of numerous microgametes, which can be separated from host blood cells by simple centrifugation. Our results reveal that this straightforward method provides opportunities to collect pure parasite DNA material, which can be used as a template for various genetic analyses including whole genome sequencing of haemosporidians infecting birds and lizards. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Evaluating the accuracy of genomic prediction of growth and wood traits in two Eucalyptus species and their F1 hybrids.

    Science.gov (United States)

    Tan, Biyue; Grattapaglia, Dario; Martins, Gustavo Salgado; Ferreira, Karina Zamprogno; Sundberg, Björn; Ingvarsson, Pär K

    2017-06-29

    Genomic prediction is a genomics-assisted breeding methodology that can increase genetic gains by accelerating the breeding cycle and potentially improving the accuracy of breeding values. In this study, we use 41,304 informative SNPs genotyped in a Eucalyptus breeding population involving 90 E. grandis and 78 E. urophylla parents and their 949 F1 hybrids to develop genomic prediction models for eight phenotypic traits - basic density and pulp yield, and circumference at breast height, height and tree volume scored at ages three and six years. We assessed the impact of different genomic prediction methods, the composition and size of the training and validation sets, and the number and genomic location of SNPs on the predictive ability (PA). Heritabilities estimated using the realized genomic relationship matrix (GRM) were considerably higher than estimates based on the expected pedigree, mainly due to inconsistencies in the expected pedigree that were readily corrected by the GRM. Moreover, the GRM more precisely captures Mendelian sampling among related individuals, such that the genetic covariance is based on the true proportion of the genome shared between individuals. PA improved considerably when increasing the size of the training set and by enhancing relatedness to the validation set. Prediction models trained on pure-species parents could not predict well in F1 hybrids, indicating that model training has to be carried out in hybrid populations if one is to predict in hybrid selection candidates. The different genomic prediction methods provided similar results for all traits; therefore, either GBLUP or rrBLUP represents a good compromise between computational time and prediction efficiency. Only slight improvement was observed in PA when more than 5,000 SNPs were used for all traits. Using SNPs in intergenic regions provided slightly better PA than using SNPs sampled exclusively in genic regions.
The size and composition of the training set and number of SNPs
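    The realized genomic relationship matrix (GRM) mentioned above is commonly computed from centered SNP genotypes. A minimal sketch of VanRaden's first method, with a toy 0/1/2 genotype matrix that is illustrative only (not the study's data):

```python
import numpy as np

# Sketch of VanRaden's (2008) first method for the realized genomic
# relationship matrix used in GBLUP. Rows are individuals, columns are
# SNPs coded as 0/1/2 copies of the reference allele.

def grm_vanraden(M):
    M = np.asarray(M, dtype=float)
    p = M.mean(axis=0) / 2.0            # observed allele frequency per SNP
    Z = M - 2.0 * p                     # center genotypes at 2p
    denom = 2.0 * np.sum(p * (1.0 - p)) # scale to an analogy of pedigree A
    return Z @ Z.T / denom

M = np.array([
    [0, 1, 2, 1],
    [1, 1, 2, 0],
    [2, 0, 0, 1],
])
G = grm_vanraden(M)
print(G.shape)               # (3, 3): one row/column per individual
print(np.allclose(G, G.T))   # symmetric by construction: True
```

    In practice the GRM is built from tens of thousands of SNPs (41,304 here) and plugged into the GBLUP mixed-model equations in place of the pedigree relationship matrix.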

  17. MEBS, a software platform to evaluate large (meta)genomic collections according to their metabolic machinery: unraveling the sulfur cycle.

    Science.gov (United States)

    De Anda, Valerie; Zapata-Peñasco, Icoquih; Poot-Hernandez, Augusto Cesar; Eguiarte, Luis E; Contreras-Moreira, Bruno; Souza, Valeria

    2017-11-01

    The increasing number of metagenomic and genomic sequences has dramatically improved our understanding of microbial diversity, yet our ability to infer metabolic capabilities in such datasets remains challenging. We describe the Multigenomic Entropy Based Score pipeline (MEBS), a software platform designed to evaluate, compare, and infer complex metabolic pathways in large "omic" datasets, including entire biogeochemical cycles. MEBS is open source and available through https://github.com/eead-csic-compbio/metagenome_Pfam_score. To demonstrate its use, we modeled the sulfur cycle by exhaustively curating the molecular and ecological elements involved (compounds, genes, metabolic pathways, and microbial taxa). This information was reduced to a collection of 112 characteristic Pfam protein domains and a list of complete-sequenced sulfur genomes. Using the mathematical framework of relative entropy (H′), we quantitatively measured the enrichment of these domains among sulfur genomes. The entropy of each domain was used both to build up a final score that indicates whether a (meta)genomic sample contains the metabolic machinery of interest and to propose marker domains in metagenomic sequences such as DsrC (PF04358). MEBS was benchmarked with a dataset of 2107 non-redundant microbial genomes from RefSeq and 935 metagenomes from MG-RAST. Its performance, reproducibility, and robustness were evaluated using several approaches, including random sampling, linear regression models, receiver operator characteristic plots, and the area under the curve metric (AUC). Our results support the broad applicability of this algorithm to accurately classify (AUC = 0.985) hard-to-culture genomes (e.g., Candidatus Desulforudis audaxviator), previously characterized ones, and metagenomic environments such as hydrothermal vents or deep-sea sediments.
Our benchmark indicates that an entropy-based score can capture the metabolic machinery of interest and can be used to efficiently classify
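    The per-domain relative-entropy idea can be sketched as follows. The scoring formula here (a Bernoulli Kullback-Leibler term comparing a domain's frequency among sulfur genomes against its background frequency) and all frequencies are illustrative assumptions, not necessarily the exact H′ computation MEBS performs:

```python
import math

# Sketch: score how informative a Pfam domain is for a metabolic guild by
# comparing its frequency q among "sulfur" genomes with its background
# frequency p across all genomes. Frequencies below are made up.

def relative_entropy(q, p):
    """Kullback-Leibler divergence for a Bernoulli presence/absence variable."""
    def term(a, b):
        return a * math.log2(a / b) if a > 0 else 0.0
    return term(q, p) + term(1 - q, 1 - p)

domains = {
    "DsrC_PF04358": (0.90, 0.05),  # hypothetically enriched in sulfur genomes
    "Ribosomal_L2": (0.99, 0.99),  # universal domain, carries no signal
}
scores = {name: relative_entropy(q, p) for name, (q, p) in domains.items()}
print(scores["DsrC_PF04358"] > scores["Ribosomal_L2"])  # True
```

    Summing such per-domain scores over the domains detected in a (meta)genome yields a single number indicating whether the sample carries the metabolic machinery of interest, which is the spirit of the final MEBS score.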

  18. Genomic Methods and Microbiological Technologies for Profiling Novel and Extreme Environments for the Extreme Microbiome Project (XMP).

    Science.gov (United States)

    Tighe, Scott; Afshinnekoo, Ebrahim; Rock, Tara M; McGrath, Ken; Alexander, Noah; McIntyre, Alexa; Ahsanuddin, Sofia; Bezdan, Daniela; Green, Stefan J; Joye, Samantha; Stewart Johnson, Sarah; Baldwin, Don A; Bivens, Nathan; Ajami, Nadim; Carmical, Joseph R; Herriott, Ian Charold; Colwell, Rita; Donia, Mohamed; Foox, Jonathan; Greenfield, Nick; Hunter, Tim; Hoffman, Jessica; Hyman, Joshua; Jorgensen, Ellen; Krawczyk, Diana; Lee, Jodie; Levy, Shawn; Garcia-Reyero, Natàlia; Settles, Matthew; Thomas, Kelley; Gómez, Felipe; Schriml, Lynn; Kyrpides, Nikos; Zaikova, Elena; Penterman, Jon; Mason, Christopher E

    2017-04-01

    The Extreme Microbiome Project (XMP) is a project launched by the Association of Biomolecular Resource Facilities Metagenomics Research Group (ABRF MGRG) that focuses on whole genome shotgun sequencing of extreme and unique environments using a wide variety of biomolecular techniques. The goals are multifaceted, including development and refinement of new techniques for the following: 1) the detection and characterization of novel microbes, 2) the evaluation of nucleic acid techniques for extremophilic samples, and 3) the identification and implementation of the appropriate bioinformatics pipelines. Here, we highlight the different ongoing projects that we have been working on, as well as details on the various methods we use to characterize the microbiome and metagenome of these complex samples. In particular, we present data of a novel multienzyme extraction protocol that we developed, called Polyzyme or MetaPolyZyme. Presently, the XMP is characterizing sample sites around the world with the intent of discovering new species, genes, and gene clusters. Once a project site is complete, the resulting data will be publically available. Sites include Lake Hillier in Western Australia, the "Door to Hell" crater in Turkmenistan, deep ocean brine lakes of the Gulf of Mexico, deep ocean sediments from Greenland, permafrost tunnels in Alaska, ancient microbial biofilms from Antarctica, Blue Lagoon Iceland, Ethiopian toxic hot springs, and the acidic hypersaline ponds in Western Australia.

  19. Evaluation of multiple approaches to identify genome-wide polymorphisms in closely related genotypes of sweet cherry (Prunus avium L.)

    Directory of Open Access Journals (Sweden)

    Seanna Hewitt

    Full Text Available Identification of genetic polymorphisms and subsequent development of molecular markers is important for marker-assisted breeding of superior cultivars of economically important species. Sweet cherry (Prunus avium L.) is an economically important non-climacteric tree fruit crop in the Rosaceae family and has undergone a genetic bottleneck due to breeding, resulting in limited genetic diversity in the germplasm that is utilized for breeding new cultivars. Therefore, it is critical to recognize the best platforms for identifying genome-wide polymorphisms that can help identify, and consequently preserve, the diversity in a genetically constrained species. For the identification of polymorphisms in five closely related genotypes of sweet cherry, a gel-based approach (TRAP), reduced representation sequencing (TRAPseq), a 6k cherry SNParray, and whole genome sequencing (WGS) approaches were evaluated in the identification of genome-wide polymorphisms in sweet cherry cultivars. All platforms facilitated detection of polymorphisms among the genotypes with variable efficiency. In assessing multiple SNP detection platforms, this study has demonstrated that a combination of appropriate approaches is necessary for efficient polymorphism identification, especially between closely related cultivars of a species. The information generated in this study provides a valuable resource for future genetic and genomic studies in sweet cherry, and the insights gained from the evaluation of multiple approaches can be utilized for other closely related species with limited genetic diversity in the breeding germplasm. Keywords: Polymorphisms, Prunus avium, Next-generation sequencing, Target region amplification polymorphism (TRAP), Genetic diversity, SNParray, Reduced representation sequencing, Whole genome sequencing (WGS)

  20. Genomic estimation of additive and dominance effects and impact of accounting for dominance on accuracy of genomic evaluation in sheep populations.

    Science.gov (United States)

    Moghaddar, N; van der Werf, J H J

    2017-12-01

    The objectives of this study were to estimate the additive and dominance variance components of several weight and ultrasound-scanned body composition traits in purebred and combined cross-bred sheep populations based on single nucleotide polymorphism (SNP) marker genotypes, and then to investigate the effect of fitting additive and dominance effects on the accuracy of genomic evaluation. Additive and dominance variance components were estimated in a mixed model equation based on "average information restricted maximum likelihood" using additive and dominance (co)variances between animals calculated from 48,599 SNP marker genotypes. Genomic prediction was based on genomic best linear unbiased prediction (GBLUP), and the accuracy of prediction was assessed based on a random 10-fold cross-validation. Across different weight and scanned body composition traits, dominance variance ranged from 0.0% to 7.3% of the phenotypic variance in the purebred population and from 7.1% to 19.2% in the combined cross-bred population. In the combined cross-bred population, the range of dominance variance decreased to between 3.1% and 9.9% after accounting for heterosis effects. Accounting for dominance effects significantly improved the likelihood of the fitted model in the combined cross-bred population. This study showed a substantial dominance genetic variance for weight and ultrasound-scanned body composition traits, particularly in the cross-bred population; however, the improvement in the accuracy of genomic breeding values was small and statistically not significant. Dominance variance estimates in the combined cross-bred population could be overestimated if heterosis is not fitted in the model. © 2017 Blackwell Verlag GmbH.
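    Fitting a dominance term alongside the additive GRM requires a genomic dominance relationship matrix. A minimal sketch following one common parameterization (Vitezica et al. 2013), which may differ from the exact coding used in this study; the toy 0/1/2 genotypes are illustrative only:

```python
import numpy as np

# Sketch of a genomic dominance relationship matrix D, the dominance
# analogue of the additive GRM used in GBLUP. Rows are individuals,
# columns are SNPs coded 0/1/2.

def dominance_matrix(M):
    M = np.asarray(M, dtype=float)
    p = M.mean(axis=0) / 2.0  # allele frequency per SNP
    # Dominance covariate per genotype class:
    #   0 -> -2p^2,  1 -> 2p(1-p),  2 -> -2(1-p)^2
    W = np.where(M == 0, -2.0 * p**2,
        np.where(M == 1, 2.0 * p * (1.0 - p), -2.0 * (1.0 - p)**2))
    denom = np.sum((2.0 * p * (1.0 - p))**2)
    return W @ W.T / denom

M = np.array([
    [0, 1, 2, 1],
    [1, 2, 0, 0],
    [2, 1, 1, 2],
])
D = dominance_matrix(M)
print(D.shape)                      # (3, 3)
print(bool(np.allclose(D, D.T)))    # symmetric: True
```

    In the mixed model, D then scales the dominance variance component just as the additive GRM scales the additive one, so both effects can be estimated jointly (e.g., by AI-REML, as in the study).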

  1. Breaking RAD: an evaluation of the utility of restriction site-associated DNA sequencing for genome scans of adaptation.

    Science.gov (United States)

    Lowry, David B; Hoban, Sean; Kelley, Joanna L; Lotterhos, Katie E; Reed, Laura K; Antolin, Michael F; Storfer, Andrew

    2017-03-01

    Understanding how and why populations evolve is of fundamental importance to molecular ecology. Restriction site-associated DNA sequencing (RADseq), a popular reduced representation method, has ushered in a new era of genome-scale research for assessing population structure, hybridization, demographic history, phylogeography and migration. RADseq has also been widely used to conduct genome scans to detect loci involved in adaptive divergence among natural populations. Here, we examine the capacity of those RADseq-based genome scan studies to detect loci involved in local adaptation. To understand what proportion of the genome is missed by RADseq studies, we developed a simple model using different numbers of RAD-tags, genome sizes and extents of linkage disequilibrium (length of haplotype blocks). Under the best-case modelling scenario, we found that RADseq using six- or eight-base pair cutting restriction enzymes would fail to sample many regions of the genome, especially for species with short linkage disequilibrium. We then surveyed recent studies that have used RADseq for genome scans and found that the median density of markers across these studies was 4.08 RAD-tag markers per megabase (one marker per 245 kb). The extent of linkage disequilibrium for many species is one to three orders of magnitude shorter than this typical inter-marker distance. Thus, we conclude that genome scans based on RADseq data alone, while useful for studies of neutral genetic variation and genetic population structure, will likely miss many loci under selection in studies of local adaptation. © 2016 John Wiley & Sons Ltd.
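    The coverage argument above can be approximated with a one-line model. The Poisson approximation used here (tags landing uniformly at random, each capturing one LD block) is an illustrative assumption, not the authors' exact model:

```python
import math

# Sketch: fraction of the genome lying within an LD (haplotype) block of
# at least one RAD-tag, assuming tags land uniformly at random. With
# density d (tags per bp) and block length w (bp), the uncovered fraction
# is approximately exp(-d * w), so coverage is 1 - exp(-d * w).

def fraction_covered(tags_per_mb, ld_block_bp):
    density = tags_per_mb / 1e6
    return 1.0 - math.exp(-density * ld_block_bp)

# The surveyed median density was ~4.08 tags/Mb. Compare a species with
# short LD (~1 kb blocks) against one with very long LD (~100 kb blocks):
short_ld = fraction_covered(4.08, 1_000)
long_ld = fraction_covered(4.08, 100_000)
print(round(short_ld, 3))  # ~0.004: nearly all of the genome is missed
print(round(long_ld, 3))   # ~0.335: still only about a third covered
```

    This simple calculation reproduces the paper's qualitative conclusion: at typical RADseq marker densities, species with short linkage disequilibrium leave most of the genome unsampled by any marker.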

  2. Short communication: Improving accuracy of Jersey genomic evaluations in the United States and Denmark by sharing reference population bulls

    Science.gov (United States)

    The effect on prediction accuracy for Jersey genomic evaluations in Denmark and the United States from using larger reference populations was assessed. Each country contributed genotypes from 1,157 Jersey bulls to the reference population of the other. Eight of 9 traits analyzed by Denmark (milk, fa...

  3. Statistical evaluation of mathematical methods in solving linear ...

    African Journals Online (AJOL)

    Statistical evaluation of mathematical methods in solving linear theory problems: Design of water distribution systems. ... The flow obtained using the methods were evaluated using model of selection criterion MSC, coefficient of determination CD, reliability RD and errors. The study revealed that flow in pipe network ...

  4. Alternative methods for clinical nursing assessment and evaluation ...

    African Journals Online (AJOL)

    The recommendations made in the article on nurse educators' perceptions of OSCE as a clinical evaluation method (Chabeli, 2001:84-91) are addressed in this article. The research question: What alternative methods of assessment and evaluation can be used to measure the comprehensive and holistic clinical nursing ...

  5. Using Developmental Evaluation Methods with Communities of Practice

    Science.gov (United States)

    van Winkelen, Christine

    2016-01-01

    Purpose: This paper aims to explore the use of developmental evaluation methods with community of practice programmes experiencing change or transition to better understand how to target support resources. Design/methodology/approach: The practical use of a number of developmental evaluation methods was explored in three organizations over a…

  6. Plant DB link - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Plant DB link - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive

  7. QTL list - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available QTL list - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive

  8. Marker list - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Marker list - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive

  9. A comprehensive evaluation of rodent malaria parasite genomes and gene expression

    KAUST Repository

    Otto, Thomas D

    2014-10-30

    Background: Rodent malaria parasites (RMP) are used extensively as models of human malaria. Draft RMP genomes have been published for Plasmodium yoelii, P. berghei ANKA (PbA) and P. chabaudi AS (PcAS). Although availability of these genomes made a significant impact on recent malaria research, these genomes were highly fragmented and were annotated with little manual curation. The fragmented nature of the genomes has hampered genome-wide analysis of Plasmodium gene regulation and function. Results: We have greatly improved the genome assemblies of PbA and PcAS, newly sequenced the virulent parasite P. yoelii YM genome, sequenced additional RMP isolates/lines and have characterized genotypic diversity within RMP species. We have produced RNA-seq data and utilized it to improve gene-model prediction and to provide quantitative, genome-wide data on gene expression. Comparison of the RMP genomes with the genome of the human malaria parasite P. falciparum and RNA-seq mapping permitted gene annotation at base-pair resolution. Full-length chromosomal annotation permitted a comprehensive classification of all subtelomeric multigene families including the 'Plasmodium interspersed repeat genes' (pir). Phylogenetic classification of the pir family, combined with pir expression patterns, indicates functional diversification within this family. Conclusions: Complete RMP genomes, RNA-seq and genotypic diversity data are excellent and important resources for gene-function and post-genomic analyses and to better interrogate Plasmodium biology. Genotypic diversity between P. chabaudi isolates makes this species an excellent parasite to study genotype-phenotype relationships. The improved classification of multigene families will enhance studies on the role of (variant) exported proteins in virulence and immune evasion/modulation.

  10. Bridging the Gulf: Mixed Methods and Library Service Evaluation

    Science.gov (United States)

    Haynes, Abby

    2004-01-01

    This paper explores library evaluation in Australia and proposes a return to research fundamentals in which evaluators are asked to consider the centrality of philosophical issues and the role of different research methods. A critique of current evaluation examples demonstrates a system-centred, quantitative, input/output focus which fails to…

  11. Research Methods for Assessing and Evaluating School-Based Clinics.

    Science.gov (United States)

    Kirby, Douglas

    This monograph describes three types of evaluation that are potentially useful to school-based clinics: needs assessments, process evaluations, and impact evaluations. Two important methodological principles are involved: (1) collecting multiple kinds of data with multiple methods; and (2) collecting comparison data. Student needs can be…

  12. EVALUATION METHODS USED FOR TANGIBLE ASSETS BY ECONOMIC ENTITIES

    Directory of Open Access Journals (Sweden)

    Csongor CSŐSZ

    2014-06-01

    At many entities the net asset value is influenced by the evaluation methods applied to tangible assets, because the value of intangible and financial assets is small in most cases. The objective of this paper is to analyze the differences between the evaluation procedures/methods applied to tangible assets by micro and small entities and by medium and large entities in Romania and Hungary. Furthermore, we analyze the differences between micro and small entities in Romania and Hungary, and between medium and large entities in the two countries, regarding the evaluation methods for tangible assets. For this empirical study a questionnaire was used as the research technique, and the Kolmogorov-Smirnov Z test was used to demonstrate the significant differences between the evaluation methods.

  13. New approach to equipment quality evaluation method with distinct functions

    Directory of Open Access Journals (Sweden)

    Milisavljević Vladimir M.

    2016-01-01

    The paper presents a new approach to improving a method for the quality evaluation and selection of equipment (devices and machinery) by applying distinct functions. Quality evaluation and selection of devices and machinery is a multi-criteria problem that involves numerous parameters of various origins. The original selection method with distinct functions is based on technical parameters with arbitrary weighting of each parameter's importance. The improvement of this method presented in this paper addresses the weighting of parameters by using the Delphi method. Finally, two case studies are provided, covering quality evaluation of standard heating boilers and of load-haul-dump (LHD) machines, to demonstrate the applicability of this approach. The Analytic Hierarchy Process (AHP) is used as a control method.

  14. Identification of replication origins in archaeal genomes based on the Z-curve method

    Directory of Open Access Journals (Sweden)

    Ren Zhang

    2005-01-01

    The Z-curve is a three-dimensional curve that constitutes a unique representation of a DNA sequence: the Z-curve and the corresponding DNA sequence can each be uniquely reconstructed from the other. We employed Z-curve analysis to identify one replication origin in the Methanocaldococcus jannaschii genome, two replication origins in the Halobacterium species NRC-1 genome and one replication origin in the Methanosarcina mazei genome. One of the predicted replication origins of Halobacterium species NRC-1 is the same as a replication origin later identified by in vivo experiments. The Z-curve analysis of the Sulfolobus solfataricus P2 genome suggested the existence of three replication origins, which is also consistent with later experimental results. This review aims to summarize applications of the Z-curve in identifying replication origins of archaeal genomes, and to provide clues about the locations of as yet unidentified replication origins of the Aeropyrum pernix K1, Methanococcus maripaludis S2, Picrophilus torridus DSM 9790 and Pyrobaculum aerophilum str. IM2 genomes.
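The Z-curve itself is straightforward to compute. The sketch below is a minimal cumulative form of the standard definition (helper name is illustrative, and an A/C/G/T-only sequence is assumed):

```python
def z_curve(seq):
    """Cumulative Z-curve of a DNA sequence (assumes A/C/G/T only).

    At each position: x = (A+G)-(C+T)  purine vs. pyrimidine disparity,
                      y = (A+C)-(G+T)  amino vs. keto disparity,
                      z = (A+T)-(G+C)  weak vs. strong H-bond disparity.
    Replication-origin prediction looks for extrema in such disparity curves.
    """
    x = y = z = 0
    points = []
    for base in seq.upper():
        x += 1 if base in "AG" else -1
        y += 1 if base in "AC" else -1
        z += 1 if base in "AT" else -1
        points.append((x, y, z))
    return points
```

For origin finding, one would scan a component such as the y (AT) disparity along a whole chromosome and look for sharp turning points.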

  15. Multiple-Trait Genomic Selection Methods Increase Genetic Value Prediction Accuracy

    Science.gov (United States)

    Jia, Yi; Jannink, Jean-Luc

    2012-01-01

    Genetic correlations between quantitative traits measured in many breeding programs are pervasive. These correlations indicate that measurements of one trait carry information on other traits. Current single-trait (univariate) genomic selection does not take advantage of this information. Multivariate genomic selection on multiple traits could accomplish this but has been little explored and tested in practical breeding programs. In this study, three multivariate linear models (i.e., GBLUP, BayesA, and BayesCπ) were presented and compared to univariate models using simulated and real quantitative traits controlled by different genetic architectures. We also extended BayesA with fixed hyperparameters to a full hierarchical model that estimated hyperparameters, and BayesCπ to impute missing phenotypes. We found that optimal marker-effect variance priors depended on the genetic architecture of the trait, so that estimating them was beneficial. We showed that the prediction accuracy for a low-heritability trait could be significantly increased by multivariate genomic selection when a correlated high-heritability trait was available. Further, multiple-trait genomic selection had higher prediction accuracy than single-trait genomic selection when phenotypes were not available on all individuals and traits. Additional factors affecting the performance of multiple-trait genomic selection were explored. PMID:23086217
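As a point of reference for the univariate baseline that these multivariate models extend, genomic BLUP can be sketched in a few lines. The helper name, 0/1/2 marker coding, and heritability-derived variance ratio below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gblup_predict(W, y_obs, idx_obs, h2=0.5):
    """Univariate GBLUP sketch: predict (centered) breeding values for all
    individuals from a marker matrix W (n x m, 0/1/2 coded) and phenotypes
    y_obs observed on the subset idx_obs. h2 sets the variance ratio."""
    n, m = W.shape
    Wc = W - W.mean(axis=0)                 # center marker genotypes
    G = Wc @ Wc.T / m + 1e-6 * np.eye(n)    # genomic relationship matrix
    lam = (1 - h2) / h2                     # sigma_e^2 / sigma_g^2
    Go = G[np.ix_(idx_obs, idx_obs)]
    V = Go + lam * np.eye(len(idx_obs))     # phenotypic (co)variance, scaled
    alpha = np.linalg.solve(V, y_obs - y_obs.mean())
    return G[:, idx_obs] @ alpha            # GEBVs for every individual
```

A multivariate version would replace the scalar variance ratio with trait (co)variance matrices, which is how information flows from a high-heritability trait to a correlated low-heritability one.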

  16. A novel data mining method to identify assay-specific signatures in functional genomic studies

    Directory of Open Access Journals (Sweden)

    Guidarelli Jack W

    2006-08-01

    Background: The highly dimensional data produced by functional genomic (FG) studies make it difficult to visualize relationships between gene products and experimental conditions (i.e., assays). Although dimensionality reduction methods such as principal component analysis (PCA) have been very useful, their application to identify assay-specific signatures has been limited by the lack of appropriate methodologies. This article proposes a new and powerful PCA-based method for the identification of assay-specific gene signatures in FG studies. Results: The proposed method (PM) is unique for several reasons. First, it is the only one, to our knowledge, that uses gene contribution, a product of the loading and expression level, to obtain assay signatures. The PM develops and exploits two types of assay-specific contribution plots, which are new to the application of PCA in the FG area. The first type plots the assay-specific gene contribution against the given order of the genes and reveals variations in distribution between assay-specific gene signatures as well as outliers within assay groups indicating the degree of importance of the most dominant genes. The second type plots the contribution of each gene in ascending or descending order against a constantly increasing index. This type of plot reveals assay-specific gene signatures defined by the inflection points in the curve. In addition, sharp regions within the signature define the genes that contribute the most to the signature. We proposed and used the curvature as an appropriate metric to characterize these sharp regions, thus identifying the subset of genes contributing the most to the signature. Finally, the PM uses the full dataset to determine the final gene signature, thus eliminating the chance of gene exclusion by poor screening in earlier steps. The strengths of the PM are demonstrated using a simulation study, and two studies of real DNA microarray data – a study of…
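The core quantity here, the product of a gene's PC loading and its (centered) expression level, is simple to compute. A minimal sketch, using only the first principal component and illustrative names (not the authors' code):

```python
import numpy as np

def gene_contributions(X):
    """X: assays x genes expression matrix. Returns the per-assay gene
    contributions along PC1, i.e. (centered expression) * (PC1 loading)
    for every gene. Summing a row recovers that assay's PC1 score."""
    Xc = X - X.mean(axis=0)
    # PC1 loadings = first right singular vector (genes' weights on PC1)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    loadings = Vt[0]
    return Xc * loadings      # broadcast: one contribution per gene per assay
```

Sorting one assay's row in descending order and locating high-curvature (inflection) regions then delimits that assay's signature, in the spirit of the second plot type described above.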

  17. Conceptual evaluation of population health surveillance programs: method and example.

    Science.gov (United States)

    El Allaki, Farouk; Bigras-Poulin, Michel; Ravel, André

    2013-03-01

    Veterinary and public health surveillance programs can be evaluated to assess and improve the planning, implementation and effectiveness of these programs. Guidelines, protocols and methods have been developed for such evaluation. In general, they focus on a limited set of attributes (e.g., sensitivity and simplicity) that are assessed quantitatively whenever possible, otherwise qualitatively. Despite efforts at standardization, replication by different evaluators is difficult, making evaluation outcomes open to interpretation. This ultimately limits the usefulness of surveillance evaluations. At the same time, the growing demand to prove freedom from disease or pathogens, and the Sanitary and Phytosanitary Agreement and the International Health Regulations, require stronger surveillance programs. We developed a method for evaluating veterinary and public health surveillance programs that is detailed, structured, transparent and based on surveillance concepts that are part of all types of surveillance programs. The proposed conceptual evaluation method comprises four steps: (1) text analysis, (2) extraction of the surveillance conceptual model, (3) comparison of the extracted surveillance conceptual model to a theoretical standard, and (4) validation interview with a surveillance program designer. This conceptual evaluation method was applied in 2005 to C-EnterNet, a new Canadian zoonotic disease surveillance program that encompasses laboratory-based surveillance of enteric diseases in humans and active surveillance of the pathogens in food, water, and livestock. The theoretical standard used for evaluating C-EnterNet was a relevant existing structure called the "Population Health Surveillance Theory". Five out of 152 surveillance concepts were absent in the design of C-EnterNet. However, all of the surveillance concept relationships found in C-EnterNet were valid. The proposed method can be used to improve the design and documentation of surveillance programs.

  18. A hybrid method for evaluating enterprise architecture implementation.

    Science.gov (United States)

    Nikpay, Fatemeh; Ahmad, Rodina; Yin Kia, Chiam

    2017-02-01

    Enterprise Architecture (EA) implementation evaluation provides a set of methods and practices for evaluating the EA implementation artefacts within an EA implementation project. There are insufficient practices in existing EA evaluation models in terms of considering all EA functions and processes, using structured methods in developing EA implementation, employing matured practices, and using appropriate metrics to achieve proper evaluation. The aim of this research is to develop a hybrid evaluation method that supports achieving the objectives of EA implementation. To attain this aim, the first step is to identify EA implementation evaluation practices. To this end, a Systematic Literature Review (SLR) was conducted. Second, the proposed hybrid method was developed based on the foundation and information extracted from the SLR, semi-structured interviews with EA practitioners, program theory evaluation and Information Systems (ISs) evaluation. Finally, the proposed method was validated by means of a case study and expert reviews. This research provides a suitable foundation for researchers who wish to extend and continue this research topic with further analysis and exploration, and for practitioners who would like to employ an effective and lightweight evaluation method for EA projects. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. A method for evaluating discoverability and navigability of recommendation algorithms.

    Science.gov (United States)

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis

    2017-01-01

    Recommendations are increasingly used to support and enable discovery, browsing, and exploration of items. This is especially true for entertainment platforms such as Netflix or YouTube, where frequently no clear categorization of items exists. Yet, the suitability of a recommendation algorithm to support these use cases cannot be comprehensively evaluated by any recommendation evaluation measure proposed so far. In this paper, we propose a method that expands the repertoire of existing recommendation evaluation techniques by evaluating the discoverability and navigability of recommendation algorithms. The method first evaluates discoverability by investigating structural properties of the resulting recommender systems in terms of bow-tie structure and path lengths. Second, it evaluates navigability by simulating three different models of information-seeking scenarios and measuring the success rates. We show the feasibility of our method by applying it to four non-personalized recommendation algorithms on three data sets and also illustrate its applicability to personalized algorithms. Our work expands the arsenal of evaluation techniques for recommendation algorithms, extends from a one-click-based evaluation towards multi-click analysis, and presents a general, comprehensive method for evaluating the navigability of arbitrary recommendation algorithms.
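A toy version of the navigability simulation can clarify the idea: walk over item-to-item recommendation links toward a target and measure how often the target is reached. The random-surfer walk model, hop budget, and graph shape below are assumptions chosen for illustration, not the paper's exact seeking models:

```python
import random

def success_rate(graph, trials=1000, max_hops=5, seed=0):
    """graph: dict mapping each item to its list of recommended items.
    Simulates random walks from a start item toward a target item and
    returns the fraction of walks that reach the target within max_hops."""
    rng = random.Random(seed)
    nodes = list(graph)
    hits = 0
    for _ in range(trials):
        start, target = rng.sample(nodes, 2)
        node = start
        for _ in range(max_hops):
            succ = graph.get(node, [])
            if not succ:                 # dead end: no recommendations
                break
            node = rng.choice(succ)      # random surfer over recommendations
            if node == target:
                hits += 1
                break
    return hits / trials
```

On a graph whose bow-tie "out" component is large, many targets become unreachable and the success rate drops, which is exactly the link between the structural and the simulation-based parts of the method.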

  20. An effective method to purify Plasmodium falciparum DNA directly from clinical blood samples for whole genome high-throughput sequencing.

    Directory of Open Access Journals (Sweden)

    Sarah Auburn

    Highly parallel sequencing technologies permit cost-effective whole genome sequencing of hundreds of Plasmodium parasites. The ability to sequence clinical Plasmodium samples, extracted directly from patient blood without a culture step, presents a unique opportunity to sample the diversity of "natural" parasite populations in high-resolution clinical and epidemiological studies. A major challenge to sequencing clinical Plasmodium samples is the abundance of human DNA, which may substantially reduce the yield of Plasmodium sequence. We tested a range of human white blood cell (WBC) depletion methods on P. falciparum-infected patient samples in search of a method displaying an optimal balance of WBC-removal efficacy, cost, simplicity, and applicability to low-resource settings. In the first of a two-part study, combinations of three different WBC depletion methods were tested on 43 patient blood samples in Mali. A two-step combination of Lymphoprep plus Plasmodipur best fitted our requirements, although moderate variability was observed in human DNA quantity. This approach was further assessed in a larger sample of 76 patients from Burkina Faso. WBC-removal efficacy remained high (70% of samples) and lower variation was observed in human DNA quantities. In order to assess the Plasmodium sequence yield at different human DNA proportions, 59 samples with up to 60% human DNA contamination were sequenced on the Illumina Genome Analyzer platform. An average ~40-fold coverage of the genome was observed per lane for samples with ≤ 30% human DNA. Even in low-resource settings, using a simple two-step combination of Lymphoprep plus Plasmodipur, over 70% of clinical sample preparations should exhibit sufficiently low human DNA quantities to enable ~40-fold sequence coverage of the P. falciparum genome using a single lane on the Illumina Genome Analyzer platform. This approach should greatly facilitate large-scale clinical and epidemiologic studies of P. falciparum.
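The ~40-fold figure is easy to sanity-check with back-of-envelope arithmetic. The per-lane yield used below (~1.3 Gb, plausible for a Genome Analyzer lane of that era) is an illustrative assumption, not a number taken from the paper:

```python
def fold_coverage(lane_bases, human_fraction, genome_size=23e6):
    """Expected fold coverage of the ~23 Mb P. falciparum genome from one
    sequencing lane, given the fraction of bases lost to human DNA."""
    return lane_bases * (1 - human_fraction) / genome_size
```

With these assumptions, a 1.3 Gb lane at 30% human contamination lands near 40-fold, consistent with the abstract's observation.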

  1. SNPDelScore: combining multiple methods to score deleterious effects of noncoding mutations in the human genome.

    Science.gov (United States)

    Vera Alvarez, Roberto; Li, Shan; Landsman, David; Ovcharenko, Ivan

    2017-09-14

    Addressing deleterious effects of noncoding mutations is an essential step towards the identification of disease-causal mutations of gene regulatory elements. Several methods for quantifying the deleteriousness of noncoding mutations using artificial intelligence, deep learning, and other approaches have been proposed recently. Although the majority of the proposed methods have demonstrated excellent accuracy on different test sets, there is rarely a consensus. In addition, the advanced statistical and artificial learning approaches used by these methods make it difficult to port them outside of the labs that developed them. To address these challenges and to transform the methodological advances in predicting deleterious noncoding mutations into a practical resource available for the broader functional genomics and population genetics communities, we developed SNPDelScore, which uses a panel of proposed methods for quantifying deleterious effects of noncoding mutations to precompute and compare the deleteriousness scores of all common SNPs in the human genome in 44 cell lines. The panel of deleteriousness scores of a SNP computed using different methods is supplemented by functional information from the GWAS Catalog, libraries of transcription factor binding sites, and genic characteristics of mutations. SNPDelScore comes with a genome browser capable of displaying and comparing large sets of SNPs in a genomic locus and rapidly identifying consensus SNPs with the highest deleteriousness scores, making them prime candidates for phenotype-causal polymorphisms. https://www.ncbi.nlm.nih.gov/research/snpdelscore/. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.
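One simple way to find consensus across such a score panel is mean-rank aggregation, since the raw scores of different methods are not directly comparable. This sketch is illustrative only and is not the site's actual scoring logic:

```python
def consensus_rank(score_table):
    """score_table: {snp_id: {method: score}} with higher = more deleterious.
    Ranks SNPs per method, then orders them by mean rank (best first).
    SNPs missing a method's score rank last for that method."""
    methods = sorted({m for s in score_table.values() for m in s})
    ranks = {snp: [] for snp in score_table}
    for m in methods:
        ordered = sorted(score_table,
                         key=lambda s: -score_table[s].get(m, float("-inf")))
        for r, snp in enumerate(ordered, 1):
            ranks[snp].append(r)
    return sorted(score_table, key=lambda s: sum(ranks[s]) / len(ranks[s]))
```

Rank aggregation sidesteps scale differences between methods, which is one reason consensus views like this are useful when individual predictors disagree.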

  2. Evaluating the effectiveness of methods for capturing meetings

    OpenAIRE

    Hall, Mark John; Bermell-Garcia, Pablo; McMahon, Chris A.; Johansson, Anders; Gonzalez-Franco, Mar

    2015-01-01

    The purpose of this paper is to evaluate the effectiveness of commonly used methods for capturing synchronous meetings for information and knowledge retrieval. Four methods of capture are evaluated in the form of a case study whereby a technical design meeting was captured by: (i) transcription; (ii) diagrammatic argumentation; (iii) meeting minutes; and (iv) video. The paper describes an experiment where participants undertook an information retrieval task and provided feedback on the methods. ...

  3. Long-term response to genomic selection: effects of estimation method and reference population structure for different genetic architectures.

    Science.gov (United States)

    Bastiaansen, John W M; Coster, Albart; Calus, Mario P L; van Arendonk, Johan A M; Bovenhuis, Henk

    2012-01-24

    Genomic selection has become an important tool in the genetic improvement of animals and plants. The objective of this study was to investigate the impacts of breeding value estimation method, reference population structure, and trait genetic architecture on long-term response to genomic selection without updating marker effects. Three methods were used to estimate genomic breeding values: a BLUP method with relationships estimated from genome-wide markers (GBLUP), a Bayesian method (BM), and a partial least squares regression method (PLSR). A shallow reference population (individuals from one generation) or a deep reference population (individuals from five generations) was used with each method. The effects of the different selection approaches were compared under four different genetic architectures for the trait under selection. Selection was based on one of the three genomic breeding values, on pedigree BLUP breeding values, or performed at random. Selection continued for ten generations. Differences in long-term selection response were small. For a genetic architecture with a very small number of three to four quantitative trait loci (QTL), the Bayesian method achieved a response that was 0.05 to 0.1 genetic standard deviation higher than the other methods in generation 10. For genetic architectures with approximately 30 to 300 QTL, PLSR (shallow reference) or GBLUP (deep reference) had an average advantage of 0.2 genetic standard deviation over the Bayesian method in generation 10. GBLUP resulted in 0.6% and 0.9% less inbreeding than PLSR and BM, respectively, and on average a one-third smaller reduction of genetic variance. Responses in early generations were greater with the shallow reference population, while long-term response was not affected by reference population structure. The ranking of estimation methods was different with selection than without it. Under selection, applying GBLUP led to lower inbreeding and a smaller reduction of genetic variance while a similar response to selection was…
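The "relationships estimated from genome-wide markers" behind GBLUP are commonly computed with VanRaden's first genomic relationship matrix. A sketch, assuming 0/1/2 genotype coding (not the authors' code):

```python
import numpy as np

def vanraden_G(M):
    """VanRaden's genomic relationship matrix from a genotype matrix M
    (individuals x markers, coded 0/1/2 copies of the reference allele).
    Diagonal elements minus 1 estimate genomic inbreeding coefficients,
    which is how marker-based inbreeding can be tracked over generations."""
    p = M.mean(axis=0) / 2.0          # allele frequencies per marker
    Z = M - 2.0 * p                   # center each column by 2p
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom
```

Under Hardy-Weinberg proportions the diagonal averages about 1, so drift and inbreeding accumulated during selection show up directly as the diagonal rising above 1.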

  4. Calculation of 3D genome structures for comparison of chromosome conformation capture experiments with microscopy: An evaluation of single-cell Hi-C protocols.

    Science.gov (United States)

    Lando, David; Stevens, Tim J; Basu, Srinjan; Laue, Ernest D

    2018-01-01

    Single-cell chromosome conformation capture approaches are revealing the extent of cell-to-cell variability in the organization and packaging of genomes. These single-cell methods, unlike their multi-cell counterparts, allow straightforward computation of realistic chromosome conformations that may be compared and combined with other, independent, techniques to study 3D structure. Here we discuss how single-cell Hi-C and subsequent 3D genome structure determination allows comparison with data from microscopy. We then carry out a systematic evaluation of recently published single-cell Hi-C datasets to establish a computational approach for the evaluation of single-cell Hi-C protocols. We show that the calculation of genome structures provides a useful tool for assessing the quality of single-cell Hi-C data because it requires a self-consistent network of interactions, relating to the underlying 3D conformation, with few errors, as well as sufficient longer-range cis- and trans-chromosomal contacts.

  5. Evaluation of Signature Erosion in Ebola Virus Due to Genomic Drift and Its Impact on the Performance of Diagnostic Assays

    Science.gov (United States)

    Sozhamannan, Shanmuga; Holland, Mitchell Y.; Hall, Adrienne T.; Negrón, Daniel A.; Ivancich, Mychal; Koehler, Jeffrey W.; Minogue, Timothy D.; Campbell, Catherine E.; Berger, Walter J.; Christopher, George W.; Goodwin, Bruce G.; Smith, Michael A.

    2015-01-01

    Genome sequence analyses of the 2014 Ebola Virus (EBOV) isolates revealed a potential problem with the diagnostic assays currently in use; i.e., drifting genomic profiles of the virus may affect the sensitivity or even produce false-negative results. We evaluated signature erosion in ebolavirus molecular assays using an in silico approach and found frequent potential false-negative and false-positive results. We further empirically evaluated many EBOV assays under real-time PCR conditions using EBOV Kikwit (1995) and Makona (2014) RNA templates. These results revealed differences in performance between assays but were comparable between the old and new EBOV templates. Using a whole genome approach and a novel algorithm, termed BioVelocity, we identified new signatures that are unique to each of EBOV, Sudan virus (SUDV), and Reston virus (RESTV). Interestingly, many of the current assay signatures do not fall within these regions, indicating a potential drawback in past assay design strategies. The new signatures identified in this study may be incorporated into real-time reverse transcription PCR (rRT-PCR) assay development and validation. In addition, we discuss the regulatory implications and timely availability of using existing, but perhaps less than optimal, assays to impact a rapidly evolving outbreak versus redesigning these assays to address genomic changes. PMID:26090727
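At its core, the in silico part of a signature-erosion check reduces to counting mismatches between an assay's primer/probe signature and the corresponding region of each observed genome. The sketch below uses an illustrative fixed mismatch threshold and pre-aligned sites; real assessments also weight 3'-end mismatches and align rather than assume fixed positions:

```python
def flag_false_negatives(primer, targets, max_mismatches=2):
    """primer: assay signature sequence. targets: {genome_name: aligned
    site of the same length}. Flags genomes where accumulated drift
    exceeds the mismatch tolerance, i.e. potential false negatives."""
    flagged = {}
    for name, site in targets.items():
        mism = sum(a != b for a, b in zip(primer, site))
        flagged[name] = mism > max_mismatches
    return flagged
```

Run against isolates sampled over time, a rising flag rate is exactly the "signature erosion" the abstract describes.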

  6. Intellectual Data Analysis Method for Evaluation of Virtual Teams

    Directory of Open Access Journals (Sweden)

    Dalia Krikščiūnienė

    2012-07-01

    The purpose of the article is to present a method for virtual team performance evaluation based on intelligent team member collaboration data analysis. The motivation for the research is based on the ability to create an evaluation method that is similar to ambiguous expert evaluations. The concept of the hierarchical fuzzy rule based method aims to evaluate the data from virtual team interaction instances related to implementation of project tasks. The suggested method is designed for project managers or virtual team leaders to help in virtual teamwork evaluation that is based on captured data analysis. The main point of the method is the ability to repeat human thinking and expert valuation process for data analysis by applying fuzzy logic: fuzzy sets, fuzzy signatures and fuzzy rules. The fuzzy set principle used in the method allows evaluation criteria numerical values to transform into linguistic terms and use it in constructing fuzzy rules. Using a fuzzy signature is possible in constructing a hierarchical criteria structure. This structure helps to solve the problem of exponential increase of fuzzy rules including more input variables. The suggested method is aimed to be applied in the virtual collaboration software as a real time teamwork evaluation tool. The research shows that by applying fuzzy logic for team collaboration data analysis it is possible to get evaluations equal to expert insights. The method includes virtual team, project task and team collaboration data analysis. The advantage of the suggested method is the possibility to use variables gained from virtual collaboration systems as fuzzy rules inputs. Information on fuzzy logic based virtual teamwork collaboration evaluation has evidence that can be investigated in the future. Also the method can be seen as the next virtual collaboration software development step.

  7. Intellectual Data Analysis Method for Evaluation of Virtual Teams

    Directory of Open Access Journals (Sweden)

    Sandra Strigūnaitė

    2013-01-01

    The purpose of the article is to present a method for virtual team performance evaluation based on intelligent team member collaboration data analysis. The motivation for the research is based on the ability to create an evaluation method that is similar to ambiguous expert evaluations. The concept of the hierarchical fuzzy rule based method aims to evaluate the data from virtual team interaction instances related to implementation of project tasks. The suggested method is designed for project managers or virtual team leaders to help in virtual teamwork evaluation that is based on captured data analysis. The main point of the method is the ability to repeat human thinking and expert valuation process for data analysis by applying fuzzy logic: fuzzy sets, fuzzy signatures and fuzzy rules. The fuzzy set principle used in the method allows evaluation criteria numerical values to transform into linguistic terms and use it in constructing fuzzy rules. Using a fuzzy signature is possible in constructing a hierarchical criteria structure. This structure helps to solve the problem of exponential increase of fuzzy rules including more input variables. The suggested method is aimed to be applied in the virtual collaboration software as a real time teamwork evaluation tool. The research shows that by applying fuzzy logic for team collaboration data analysis it is possible to get evaluations equal to expert insights. The method includes virtual team, project task and team collaboration data analysis. The advantage of the suggested method is the possibility to use variables gained from virtual collaboration systems as fuzzy rules inputs. Information on fuzzy logic based virtual teamwork collaboration evaluation has evidence that can be investigated in the future. Also the method can be seen as the next virtual collaboration software development step.
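The fuzzification step these records describe (mapping a numeric criterion value to linguistic terms) can be sketched with triangular membership functions. The breakpoints below are illustrative, not the authors' calibration:

```python
def triangular(x, a, b, c):
    """Triangular membership on [a, c] peaking at b; degenerates to a
    shoulder (flat side) when a == b or b == c."""
    if b > a and x < b:
        return max(0.0, (x - a) / (b - a))
    if c > b and x > b:
        return max(0.0, (c - x) / (c - b))
    return 1.0 if a <= x <= c else 0.0

def fuzzify(score):
    """Map a numeric criterion score in [0, 10] to linguistic terms."""
    return {
        "low":    triangular(score, 0, 0, 5),
        "medium": triangular(score, 2, 5, 8),
        "high":   triangular(score, 5, 10, 10),
    }
```

Fuzzy rules then fire on these memberships (e.g. IF responsiveness IS low AND task quality IS medium THEN performance IS medium), and a hierarchical fuzzy signature groups such rules per criterion branch to avoid the rule-count explosion mentioned above.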

  8. FN-Identify: Novel Restriction Enzymes-Based Method for Bacterial Identification in Absence of Genome Sequencing

    Directory of Open Access Journals (Sweden)

    Mohamed Awad

    2015-01-01

    Sequencing and restriction analysis of genes like 16S rRNA and HSP60 are intensively used for molecular identification in microbial communities. With the aid of the rapid progress in bioinformatics, genome sequencing became the method of choice for bacterial identification. However, genome sequencing technology is still out of reach in the developing countries. In this paper, we propose FN-Identify, a sequencing-free method for bacterial identification. FN-Identify exploits the gene sequence data available in GenBank and other databases and two algorithms that we developed, CreateScheme and GeneIdentify, to create a restriction enzyme-based identification scheme. FN-Identify was tested using three different and diverse bacterial populations (members of the Lactobacillus, Pseudomonas, and Mycobacterium groups) in an in silico analysis using restriction enzymes and sequences of the 16S rRNA gene. The analysis of the restriction maps of the members of the three groups, using the fragment number information alone or along with fragment sizes, successfully identified all of the members of the three groups using a minimum of four and a maximum of eight restriction enzymes. Our results demonstrate the utility and accuracy of the FN-Identify method and its two algorithms as an alternative method that uses standard microbiology laboratory techniques when genome sequencing is not available.

  9. FN-Identify: Novel Restriction Enzymes-Based Method for Bacterial Identification in Absence of Genome Sequencing.

    Science.gov (United States)

    Awad, Mohamed; Ouda, Osama; El-Refy, Ali; El-Feky, Fawzy A; Mosa, Kareem A; Helmy, Mohamed

    2015-01-01

    Sequencing and restriction analysis of genes like 16S rRNA and HSP60 are intensively used for molecular identification in microbial communities. With the aid of the rapid progress in bioinformatics, genome sequencing became the method of choice for bacterial identification. However, genome sequencing technology is still out of reach in the developing countries. In this paper, we propose FN-Identify, a sequencing-free method for bacterial identification. FN-Identify exploits the gene sequence data available in GenBank and other databases and two algorithms that we developed, CreateScheme and GeneIdentify, to create a restriction enzyme-based identification scheme. FN-Identify was tested using three different and diverse bacterial populations (members of the Lactobacillus, Pseudomonas, and Mycobacterium groups) in an in silico analysis using restriction enzymes and sequences of the 16S rRNA gene. The analysis of the restriction maps of the members of the three groups, using the fragment number information alone or along with fragment sizes, successfully identified all of the members of the three groups using a minimum of four and a maximum of eight restriction enzymes. Our results demonstrate the utility and accuracy of the FN-Identify method and its two algorithms as an alternative method that uses standard microbiology laboratory techniques when genome sequencing is not available.
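The in silico digest behind such a fragment-number scheme is easy to sketch: count each enzyme's recognition sites in a 16S sequence and derive the fragment count. The enzyme recognition sites below are real, but the mini scheme itself is illustrative and unrelated to the paper's actual enzyme panel:

```python
import re

ENZYMES = {"EcoRI": "GAATTC", "HindIII": "AAGCTT", "HaeIII": "GGCC"}

def fragment_counts(seq):
    """In silico digest of a linear DNA sequence: for each enzyme, count
    recognition sites (lookahead handles overlaps) and return the
    resulting fragment number (n cuts -> n + 1 fragments)."""
    counts = {}
    for name, site in ENZYMES.items():
        cuts = len(re.findall(f"(?={site})", seq))
        counts[name] = cuts + 1
    return counts
```

A scheme like CreateScheme's would then pick the smallest enzyme subset whose fragment-count vectors are unique across the candidate species, and GeneIdentify-style lookup would match an unknown isolate's observed counts against that table.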

  10. Evaluating website quality: Five studies on user-focused evaluation methods

    NARCIS (Netherlands)

    Elling, S.K.

    2012-01-01

    The benefits of evaluating websites among potential users are widely acknowledged. Several methods can be used to evaluate a website's quality from the users' perspective. In current practice, however, many evaluations are executed with inadequate methods that lack research-based validation.

  11. Evaluation of two gas-dilution methods for instrument calibration

    Science.gov (United States)

    Evans, A., Jr.

    1977-01-01

    Two gas dilution methods were evaluated for use in the calibration of analytical instruments used in air pollution studies. A dual isotope fluorescence carbon monoxide analyzer was used as the transfer standard. The methods are not new but some modifications are described. The rotary injection gas dilution method was found to be more accurate than the closed loop method. Results by the two methods differed by 5 percent. This could not be accounted for by the random errors in the measurements. The methods avoid the problems associated with pressurized cylinders. Both methods have merit and have found a place in instrument calibration work.

  12. Benchmark calculations for evaluation methods of gas volumetric leakage rate

    International Nuclear Information System (INIS)

    Asano, R.; Aritomi, M.; Matsuzaki, M.

    1998-01-01

    A containment function of radioactive materials transport casks is essential for safe transportation, to prevent radioactive materials from being released into the environment. Regulations such as the IAEA standard set limits on the radioactivity that may be released. Since it is not practical for leakage tests to measure directly the radioactivity released from a package, gas volumetric leakage rates are proposed instead in the ANSI N14.5 and ISO standards. In our previous work, gas volumetric leakage rates for several kinds of gas from various leaks were measured, and two evaluation methods, a 'simple evaluation method' and a 'strict evaluation method', were proposed based on the results. The simple evaluation method considers the friction loss of laminar flow with an expansion effect. The strict evaluation method considers an exit loss in addition to the friction loss. In this study, four worked examples were completed for an assumed large spent fuel transport cask (Type B package) with a wet or dry cavity and at three transport conditions: normal transport with intact fuels or failed fuels, and an accident in transport. The standard leakage rates and the criteria for two kinds of leak test were calculated for each example by each evaluation method. The following observations are made based upon the calculations and evaluations: the choked flow model of the ANSI method greatly overestimates the test criteria; the laminar flow models of both the ANSI and ISO methods slightly overestimate the test criteria; these two results are within the design margin for ordinary transport conditions, so all methods are useful for the evaluation; for severe conditions such as failed fuel transportation, attention should be paid when applying the choked flow model of the ANSI method. (authors)

  13. Advanced evaluation method of SG TSP BEC hole blockage rate

    International Nuclear Information System (INIS)

    Izumida, Hiroyuki; Nagata, Yasuyuki; Harada, Yutaka; Murakami, Ryuji

    2003-01-01

    In spite of the control of the water chemistry of the secondary feed-water in PWR steam generators (SG), SG TSP BEC holes, which are the flow paths of secondary water, are often clogged. In the past, the trending of the BEC hole blockage rate was conducted by evaluating original ECT signals and by visual inspections. However, because the original ECT signals of deposits are diverse, it has become difficult to analyze them with the existing evaluation method based on original ECT signals. In this regard, we have developed a secondary-side visual inspection system that enables high-accuracy evaluation of the BEC hole blockage rate, together with a new ECT signal evaluation method. (author)

  14. A Design Process Evaluation Method for Sustainable Buildings

    Directory of Open Access Journals (Sweden)

    Christopher S. Magent

    2009-12-01

    Full Text Available This research develops a technique to model and evaluate the design process for sustainable buildings. Three case studies were conducted to validate this method. The resulting design process evaluation method for sustainable buildings (DPEMSB) may assist project teams in designing their own sustainable building design processes. This method helps to identify critical decisions in the design process, to evaluate these decisions for time and sequence, to define the information required for decisions from various project stakeholders, and to identify stakeholder competencies for process implementation. Published in the journal AEDM, Volume 5, Numbers 1-2, 2009, pp. 62-74.

  15. An online credit evaluation method based on AHP and SPA

    Science.gov (United States)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It solves some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then derived from the optimal perspective. Finally, a case analysis of China Garment Network is provided for illustrative purposes.
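    A minimal sketch of the AHP step in such a scheme, using the common row-geometric-mean approximation to the principal-eigenvector weights (the credit indicators and pairwise comparison values below are hypothetical, not taken from the paper):

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights via the row geometric mean."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]  # geometric mean per row
    total = sum(gm)
    return [g / total for g in gm]  # normalize so the weights sum to 1

# Hypothetical pairwise comparisons of three credit indicators
# (transaction history vs. feedback score vs. dispute rate)
M = [[1,     3,   5],
     [1/3,   1,   2],
     [1/5, 1/2,   1]]
weights = ahp_weights(M)
```

    The row geometric mean reproduces the eigenvector weights exactly for consistent matrices; a full AHP implementation would also compute the consistency ratio before accepting the comparisons.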

  16. An efficient genotyping method for genome-modified animals and human cells generated with CRISPR/Cas9 system.

    Science.gov (United States)

    Zhu, Xiaoxiao; Xu, Yajie; Yu, Shanshan; Lu, Lu; Ding, Mingqin; Cheng, Jing; Song, Guoxu; Gao, Xing; Yao, Liangming; Fan, Dongdong; Meng, Shu; Zhang, Xuewen; Hu, Shengdi; Tian, Yong

    2014-09-19

    The rapid generation of various species and strains of laboratory animals using CRISPR/Cas9 technology has dramatically accelerated the interrogation of gene function in vivo. So far, the dominant approach for genotyping genome-modified animals has been the T7E1 endonuclease cleavage assay. Here, we present a polyacrylamide gel electrophoresis (PAGE)-based method to genotype mice harboring different types of indel mutations. We developed six strains of genome-modified mice using the CRISPR/Cas9 system and utilized this approach to genotype mice from the F0 to F2 generations, including single and multiplexed genome-modified mice. We also determined the maximal detection sensitivity of the PAGE-based assay for mosaic DNA to be 0.5%. We further applied the PAGE-based genotyping approach to detect CRISPR/Cas9-mediated on- and off-target effects in human 293T cells and induced pluripotent stem cells (iPSCs). Thus, the PAGE-based genotyping approach meets the rapidly increasing demand for genotyping the fast-growing number of genome-modified animals and human cell lines created using the CRISPR/Cas9 system or other nuclease systems such as TALEN or ZFN.

  17. A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods

    Directory of Open Access Journals (Sweden)

    Michael Amberg

    1996-11-01

    Full Text Available The paper describes a pattern-oriented approach to evaluating modeling methods and to comparing various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and documentation of comparable modeling methods. Each principle helps to examine parts of the methods from a specific point of view; all principles together lead to an overall picture of the method under examination. First, the core ("method-neutral") meaning of each principle is described. Then the methods are examined with regard to the principle. Afterwards, the method-specific interpretations are compared with each other and with the core meaning of the principle. Through this procedure, the strengths and weaknesses of modeling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template, analogous to descriptions of object-oriented design patterns. The approach is demonstrated by evaluating a business process modeling method.

  18. Biological chromodynamics: a general method for measuring protein occupancy across the genome by calibrating ChIP-seq

    Science.gov (United States)

    Hu, Bin; Petela, Naomi; Kurze, Alexander; Chan, Kok-Lung; Chapard, Christophe; Nasmyth, Kim

    2015-01-01

    Sequencing DNA fragments associated with proteins following in vivo cross-linking with formaldehyde (known as ChIP-seq) has been used extensively to describe the distribution of proteins across genomes. It is not widely appreciated that this method merely estimates a protein's distribution and cannot reveal changes in occupancy between samples. To do this, we tagged with the same epitope orthologous proteins in Saccharomyces cerevisiae and Candida glabrata, whose sequences have diverged to a degree that most DNA fragments longer than 50 bp are unique to just one species. By mixing defined numbers of C. glabrata cells (the calibration genome) with S. cerevisiae samples (the experimental genomes) prior to chromatin fragmentation and immunoprecipitation, it is possible to derive a quantitative measure of occupancy (the occupancy ratio – OR) that enables a comparison of occupancies not only within but also between genomes. We demonstrate for the first time that this ‘internal standard’ calibration method satisfies the sine qua non for quantifying ChIP-seq profiles, namely linearity over a wide range. Crucially, by employing functional tagged proteins, our calibration process describes a method that distinguishes genuine association within ChIP-seq profiles from background noise. Our method is applicable to any protein, not merely highly conserved ones, and obviates the need for the time consuming, expensive, and technically demanding quantification of ChIP using qPCR, which can only be performed on individual loci. As we demonstrate for the first time in this paper, calibrated ChIP-seq represents a major step towards documenting the quantitative distributions of proteins along chromosomes in different cell states, which we term biological chromodynamics. PMID:26130708
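    Once reads have been assigned to the experimental and calibration genomes, the occupancy ratio reduces to simple arithmetic. A sketch under the assumption that OR is the experimental-to-calibration read ratio in the IP sample normalized by the same ratio in the input (pre-IP) sample; the function name and read counts are illustrative, not taken from the paper:

```python
def occupancy_ratio(ip_exp, ip_cal, input_exp, input_cal):
    """Occupancy ratio (OR): experimental-to-calibration read ratio in the
    immunoprecipitate, normalized by the same ratio in the input sample
    to correct for the actual mixing proportion of the two cell types."""
    return (ip_exp / ip_cal) * (input_cal / input_exp)

# Illustrative read counts (not from the paper)
ratio = occupancy_ratio(ip_exp=9e6, ip_cal=1e6, input_exp=4e6, input_cal=2e6)
```

    The input normalization is what makes OR comparable between samples: it cancels differences in how many calibration cells were actually spiked into each experiment.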

  19. Evaluation and validation of de novo and hybrid assembly techniques to derive high-quality genome sequences.

    Science.gov (United States)

    Utturkar, Sagar M; Klingeman, Dawn M; Land, Miriam L; Schadt, Christopher W; Doktycz, Mitchel J; Pelletier, Dale A; Brown, Steven D

    2014-10-01

    To assess the potential of different types of sequence data combined with de novo and hybrid assembly approaches to improve existing draft genome sequences. Illumina, 454 and PacBio sequencing technologies were used to generate de novo and hybrid genome assemblies for four different bacteria, which were assessed for quality using summary statistics (e.g. number of contigs, N50) and in silico evaluation tools. Differences in predictions of multiple copies of rDNA operons for each respective bacterium were evaluated by PCR and Sanger sequencing, and then the validated results were applied as an additional criterion to rank assemblies. In general, assemblies using longer PacBio reads were better able to resolve repetitive regions. In this study, the combination of Illumina and PacBio sequence data assembled through the ALLPATHS-LG algorithm gave the best summary statistics and most accurate rDNA operon number predictions. This study will aid others looking to improve existing draft genome assemblies. All assembly tools except CLC Genomics Workbench are freely available under GNU General Public License. brownsd@ornl.gov Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
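    One of the summary statistics mentioned, N50, can be computed directly from contig lengths; a minimal sketch (a generic implementation, not any of the cited evaluation tools):

```python
def n50(contig_lengths):
    """N50: the contig length L such that contigs of length >= L together
    cover at least half of the total assembly length."""
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

print(n50([100, 200, 300, 400, 500]))  # 400
```

    A higher N50 generally indicates a more contiguous assembly, which is why long PacBio reads that span repeats tend to improve it.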

  20. A One-Step PCR-Based Assay to Evaluate the Efficiency and Precision of Genomic DNA-Editing Tools

    Directory of Open Access Journals (Sweden)

    Diego Germini

    2017-06-01

    Full Text Available Despite rapid progress, many problems and limitations persist and limit the applicability of gene-editing techniques. Making use of meganucleases, TALENs, or CRISPR/Cas9-based tools requires an initial pre-screening step to determine the efficiency and specificity of the designed tools. This step remains time- and material-consuming. Here we propose a simple, cheap, reliable, time-saving, and highly sensitive method to evaluate a given gene-editing tool based on its capacity to induce chromosomal translocations when combined with a reference engineered nuclease. In the proposed technique, designated engineered nuclease-induced translocations (ENIT), a plasmid coding for the DNA-editing tool to be tested is co-transfected into carefully chosen target cells along with one for an engineered nuclease of known specificity and efficiency. If the new enzyme efficiently cuts within the desired region, specific chromosomal translocations are generated between the two targeted genomic regions and are readily detectable by a one-step PCR or qPCR assay. The PCR product thus obtained can be directly sequenced, thereby determining the exact position of the double-strand breaks induced by the gene-editing tools. As a proof of concept, ENIT was successfully tested in different cell types and with different meganucleases, TALENs, and CRISPR/Cas9-based editing tools.

  1. Radiochemistry methods in DOE methods for evaluating environmental and waste management samples

    International Nuclear Information System (INIS)

    Fadeff, S.K.; Goheen, S.C.

    1994-08-01

    Current standard sources of radiochemistry methods are often inappropriate for evaluating US Department of Energy environmental and waste management (DOE/EM) samples. Examples of current sources include EPA, ASTM, Standard Methods for the Examination of Water and Wastewater, and HASL-300. The applicability of these methods is limited to specific matrices (usually water), radiation levels (usually environmental levels), and a limited number of analytes. The radiochemistry methods in DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) attempt to fill the applicability gap between standard methods and those needed for DOE/EM activities. The Radiochemistry chapter in DOE Methods includes an 'analysis and reporting' guidance section as well as radiochemistry methods. A basis for identifying DOE/EM radiochemistry needs is discussed. Within this needs framework, the applicability of standard methods and targeted new methods is identified. Sources of new methods (consolidated methods from DOE laboratories and submissions from individuals) and the methods review process are discussed, and the processes involved in generating consolidated methods and editing individually submitted methods are compared. DOE Methods is a living document and continues to expand by adding various kinds of methods; radiochemistry methods are highlighted in this paper. DOE Methods is intended to be a resource for methods applicable to DOE/EM problems. Although it is intended to support DOE, the guidance and methods are not necessarily exclusive to DOE. The document is available at no cost through the Laboratory Management Division of DOE, Office of Technology Development

  2. Resampling methods for evaluating classification accuracy of wildlife habitat models

    Science.gov (United States)

    Verbyla, David L.; Litvaitis, John A.

    1989-11-01

    Predictive models of wildlife-habitat relationships often have been developed without being tested. The apparent classification accuracy of such models can be optimistically biased and misleading. Data resampling methods exist that yield a more realistic estimate of model classification accuracy. These methods are simple and require no new sample data. We illustrate these methods (cross-validation, jackknife resampling, and bootstrap resampling) with computer simulation to demonstrate the increase in precision of the estimate. The bootstrap method is then applied to field data as a technique for model comparison. We recommend that biologists use some resampling procedure to evaluate wildlife habitat models prior to field evaluation.
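    The out-of-bag flavor of the bootstrap described here can be sketched as follows, with a toy one-dimensional nearest-class-mean classifier standing in for a habitat model (the data and classifier are illustrative, not the authors'):

```python
import random

def nearest_mean_classify(train, x):
    """Toy 1-D nearest-class-mean classifier (stand-in for a habitat model)."""
    groups = {}
    for value, label in train:
        groups.setdefault(label, []).append(value)
    means = {lab: sum(v) / len(v) for lab, v in groups.items()}
    return min(means, key=lambda lab: abs(means[lab] - x))

def bootstrap_oob_accuracy(data, n_boot=200, seed=1):
    """Out-of-bag bootstrap: fit on a resample, score on the omitted cases."""
    rng = random.Random(seed)
    accs = []
    for _ in range(n_boot):
        idx = [rng.randrange(len(data)) for _ in range(len(data))]
        chosen = set(idx)
        oob = [i for i in range(len(data)) if i not in chosen]
        if not oob:
            continue  # rare: every case was drawn into the resample
        train = [data[i] for i in idx]
        correct = sum(nearest_mean_classify(train, data[i][0]) == data[i][1]
                      for i in oob)
        accs.append(correct / len(oob))
    return sum(accs) / len(accs)

# Hypothetical habitat data: (cover value, presence/absence label)
data = [(x, "present") for x in (8, 9, 10, 11)] + \
       [(x, "absent") for x in (1, 2, 3, 4)]
```

    On separable toy data like this the out-of-bag estimate stays near 1.0; on real field data it is typically noticeably lower than the apparent (resubstitution) accuracy, which is exactly the optimistic bias the abstract warns about.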

  3. PCR-fingerprint profiles of mitochondrial and genomic DNA extracted from Fetus cervi using different extraction methods.

    Science.gov (United States)

    Ai, Jinxia; Wang, Xuesong; Gao, Lijun; Xia, Wei; Li, Mingcheng; Yuan, Guangxin; Niu, Jiamu; Zhang, Lihua

    2017-11-01

    The use of Fetus cervi, which is derived from the embryo and placenta of Cervus nippon Temminck or Cervus elaphus Linnaeus, has been documented for a long time in China. There are abundant species of deer worldwide; of these, only those recorded by the China Pharmacopoeia (2010 edition) are authentic sources, while products from other species are adulterants or counterfeits. Identification of their origin and authenticity is therefore key in the preparation of authentic products. The traditional SDS alkaline-lysis and salting-out methods were modified to extract mtDNA and genomic DNA, respectively, from fresh and dry Fetus cervi as well as from the fetuses of false animals. A set of primers was designed by bioinformatics to target intra- and inter-species variation. The mtDNA and genomic DNA extracted from Fetus cervi using the two methods met the requirements for authentication. Extraction of mtDNA by SDS alkaline lysis proved more practical and accurate than extraction of genomic DNA by the salting-out method. There were differences in the length and number of segments amplified by PCR between mtDNA from authentic Fetus cervi and from false animal fetuses. These distinctive PCR fingerprint patterns can distinguish Fetus cervi from adulterant and counterfeit animal fetuses.

  4. Method of evaluation of diagnostics reference levels in computerized tomography

    International Nuclear Information System (INIS)

    Vega, Walter Flores

    1999-04-01

    Computerized tomography is a complex technique with several selectable exposure parameters, delivering high doses to the patient. In this work, a simple methodology was developed to evaluate diagnostic reference levels in computerized tomography, using the concept of the Multiple Scan Average Dose (MSAD), recently adopted by the Health Ministry. To evaluate the MSAD, a dose distribution was obtained from dose profiles measured on the axial axis of a water phantom with thermoluminescent dosimeters (TLD-100) for different exam techniques. The MSAD was evaluated through two distinct methods: first, by integrating the dose profile of a single slice, and second, by integrating the profile of several slices over the central slice. The latter is in accordance with the ionization chamber method, suggesting it is the most practical method of dose evaluation to be applied in routine diagnostic reference level assessment for CT using TLDs. (author)
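    Numerically, the MSAD is the multi-scan dose profile integrated over one couch increment centred on the middle slice, divided by that increment. A sketch using the trapezoidal rule (the grid and dose values are illustrative; it assumes the sampling grid lines up with the ±I/2 boundaries):

```python
def msad(z, dose, increment):
    """MSAD = (1/I) * integral of the multi-scan dose profile D(z) over one
    couch increment I centred on z = 0, via the trapezoidal rule."""
    total = 0.0
    for i in range(len(z) - 1):
        # keep only the trapezoids lying inside [-I/2, +I/2]
        if z[i] >= -increment / 2 and z[i + 1] <= increment / 2:
            total += 0.5 * (dose[i] + dose[i + 1]) * (z[i + 1] - z[i])
    return total / increment

# Illustrative flat 10 mGy profile sampled every 1 mm, couch increment 10 mm
zs = list(range(-10, 11))
profile = [10.0] * len(zs)
value = msad(zs, profile, increment=10)  # a flat profile averages to itself
```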

  5. Entrepreneur environment management behavior evaluation method derived from environmental economy.

    Science.gov (United States)

    Zhang, Lili; Hou, Xilin; Xi, Fengru

    2013-12-01

    Evaluation systems can encourage and guide entrepreneurs and impel them to perform well in environmental management. An evaluation method based on advantage structure is established and used to analyze entrepreneurs' environmental management behavior in China. An evaluation index system for entrepreneurs' environmental management behavior is constructed based on empirical research. An evaluation method for entrepreneurs is put forward from the standpoint of goal-programming theory, to alert the entrepreneurs concerned: the minimized objective function is taken as the comprehensive evaluation result, and disadvantage structure patterns are identified. Application research shows that the overall environmental management behavior of Chinese entrepreneurs is good; specifically, environmental strategic behavior ranks best, environmental management behavior second, and cultural behavior last. The application results show the efficiency and feasibility of this method. Copyright © 2013 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.

  6. ASTM test methods for composite characterization and evaluation

    Science.gov (United States)

    Masters, John E.

    1994-01-01

    A discussion of the American Society for Testing and Materials is given. Under the topic of composite materials characterization and evaluation, general industry practice and test methods for textile composites are presented.

  7. DC potential drop method for evaluating material degradation

    International Nuclear Information System (INIS)

    Seok, Chang Sung; Bae, Bong Kook; Koo, Jae Mean

    2004-01-01

    Estimating the remaining life of aged components in power plants and chemical plants is very important because the mechanical properties of components degrade with in-service exposure time at high temperatures. Since it is difficult to take specimens from operating components to evaluate their mechanical properties, nondestructive techniques are needed to evaluate the degradation. In this study, test materials with several different degradation levels were prepared by isothermal aging heat treatment at 630°C. The DC potential drop method and destructive methods such as tensile and fracture toughness tests were used to evaluate the degradation of 1Cr-1Mo-0.25V steels. The results show that tensile strength and fracture toughness can be calculated from resistivity, and that it is therefore possible to evaluate material degradation using the DC potential drop method, a non-destructive technique.

  8. Multivariate Methods Based Soft Measurement for Wine Quality Evaluation

    Directory of Open Access Journals (Sweden)

    Shen Yin

    2014-01-01

    a decision. However, since the physicochemical indexes of wine can to some extent reflect its quality, soft measurement based on multivariate statistical methods can help the oenologist in wine evaluation.

  9. Evaluation of cassava (Manihot esculenta (Crantz) planting methods ...

    African Journals Online (AJOL)

    Evaluation of cassava (Manihot esculenta (Crantz) planting methods and soybean [Glycine max (L.) Merrill] sowing dates on the yield performance of the component species in cassava/soybean intercrop under the humid tropical lowlands of southeastern Nigeria.

  10. EVALUATION OF ANALYTICAL METHODS FOR DETERMINING PESTICIDES IN BABY FOOD

    Science.gov (United States)

    Three extraction methods and two detection techniques for determining pesticides in baby food were evaluated. The extraction techniques examined were supercritical fluid extraction (SFE), enhanced solvent extraction (ESE), and solid phase extraction (SPE). The detection techni...

  11. Evaluation of full-scope simulator testing methods

    International Nuclear Information System (INIS)

    Feher, M.P.; Moray, N.; Senders, J.W.; Biron, K.

    1995-03-01

    This report discusses the use of full-scope nuclear power plant simulators in licensing examinations for Unit First Operators of CANDU reactors. The existing literature is reviewed, and an annotated bibliography of the more important sources is provided. Since existing methods are judged inadequate, conceptual bases for designing a licensing system are discussed, and a method is proposed that would make use of objective scoring based on data collection in full-scope simulators. A field trial of such a method is described. The practicality of the method is critically discussed, and possible advantages of subjective methods of evaluation are considered. (author). 32 refs., 1 tab., 4 figs

  12. Development of an automatic evaluation method for patient positioning error.

    Science.gov (United States)

    Kubota, Yoshiki; Tashiro, Mutsumi; Shinohara, Ayaka; Abe, Satoshi; Souda, Saki; Okada, Ryosuke; Ishii, Takayoshi; Kanai, Tatsuaki; Ohno, Tatsuya; Nakano, Takashi

    2015-07-08

    Highly accurate radiotherapy requires highly accurate patient positioning. At our facility, patient positioning is performed manually by radiology technicians. After positioning, the positioning error is measured by manually comparing positions on a digital radiography (DR) image to the corresponding positions on a digitally reconstructed radiography (DRR) image. This method is prone to error and can be time-consuming because of its manual nature. We therefore propose an automated method for measuring positioning error, to improve patient throughput and achieve higher reliability. The error between a position on the DR and a position on the DRR was calculated to determine the best-matched position using the block-matching method. The zero-mean normalized cross-correlation was used as the evaluation function, and a Gaussian weight function was used to increase the importance of pixels as they approach the isocenter. The accuracy of the calculation method was evaluated using pelvic phantom images, and its effectiveness was evaluated on pre-positioning images of prostate cancer patients, comparing the results with the radiology technicians' measurements. The root mean square error (RMSE) of the calculation method for the pelvic phantom was 0.23 ± 0.05 mm. The correlation coefficients between the calculation method and the technicians' measurements were 0.989 for the phantom images and 0.980 for the patient images. The RMSE of the total positioning evaluation for prostate cancer patients using the calculation method was 0.32 ± 0.18 mm. Using the proposed method, we successfully measured residual positioning errors. The accuracy and effectiveness of the method were evaluated for pelvic phantom images and images of prostate cancer patients. In the future, positioning for cancer patients at other sites will be evaluated using this method. Consequently, we expect an improvement in treatment throughput for these other sites.
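    The core of the described matching step, ZNCC-scored block matching between DR and DRR, can be sketched as follows (this omits the paper's Gaussian weighting toward the isocenter; the block coordinates and search range are illustrative):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two equal-size blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_shift(dr, drr, block, search=5):
    """Exhaustive block matching: find the (dy, dx) shift of the DR that
    best matches a DRR template, scored by ZNCC."""
    y0, y1, x0, x1 = block
    template = drr[y0:y1, x0:x1]
    best = (0, 0, -1.0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if y0 + dy < 0 or x0 + dx < 0:
                continue  # avoid negative-index wraparound
            cand = dr[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            if cand.shape != template.shape:
                continue  # shifted window falls off the image
            score = zncc(cand, template)
            if score > best[2]:
                best = (dy, dx, score)
    return best
```

    ZNCC is insensitive to the global brightness and contrast differences between a DR and a DRR, which is why it is a common choice for this kind of cross-modality matching.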

  13. CHARACTERISTICS OF MIRR METHOD IN EVALUATION OF INVESTMENT PROJECTS' EFFECTIVENESS

    OpenAIRE

    P. Kukhta

    2014-01-01

    Characteristics of the Modified Internal Rate of Return (MIRR) method in the evaluation of investment projects were analyzed, together with the restrictions connected with its application and its advantages and disadvantages compared with the original Internal Rate of Return and Net Present Value indicators for projects with certain baseline characteristics. Opportunities were identified to adapt the MIRR method to alternative computational approaches to project cash flow evalu...

  14. CHARACTERISTICS OF MIRR METHOD IN EVALUATION OF INVESTMENT PROJECTS' EFFECTIVENESS

    Directory of Open Access Journals (Sweden)

    P. Kukhta

    2014-09-01

    Full Text Available Characteristics of the Modified Internal Rate of Return (MIRR) method in the evaluation of investment projects were analyzed, together with the restrictions connected with its application and its advantages and disadvantages compared with the original Internal Rate of Return and Net Present Value indicators for projects with certain baseline characteristics. Opportunities were identified to adapt the MIRR method to alternative computational approaches to the evaluation of project cash flows.
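    The MIRR itself is straightforward to compute: negative cash flows are discounted to the present at the finance rate, positive flows are compounded to the project's end at the reinvestment rate, and the rate equating the two is extracted. A minimal sketch (the example cash flows and rates are hypothetical):

```python
def mirr(cash_flows, finance_rate, reinvest_rate):
    """Modified Internal Rate of Return for cash_flows[t], t = 0..n:
    negative flows are discounted to t = 0 at the finance rate, positive
    flows are compounded to t = n at the reinvestment rate."""
    n = len(cash_flows) - 1
    pv_neg = sum(cf / (1 + finance_rate) ** t
                 for t, cf in enumerate(cash_flows) if cf < 0)
    fv_pos = sum(cf * (1 + reinvest_rate) ** (n - t)
                 for t, cf in enumerate(cash_flows) if cf > 0)
    return (fv_pos / -pv_neg) ** (1.0 / n) - 1

rate = mirr([-1000, 300, 400, 500], finance_rate=0.08, reinvest_rate=0.10)
```

    Unlike the IRR, this always yields a single rate, because the sign pattern of the separated flow streams is fixed by construction.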

  15. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice

    Science.gov (United States)

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel “trick” concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods; least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available. PMID:27555865

  16. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    Science.gov (United States)

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods; least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.

  17. Analysis of n-gram based promoter recognition methods and application to whole genome promoter prediction.

    Science.gov (United States)

    Rani, T Sobha; Bapi, Raju S

    2009-01-01

    Promoter prediction is an important and complex problem. Pattern recognition algorithms typically require features that can capture this complexity. A special bias towards certain combinations of base pairs may exist in promoter sequences; to determine these biases, n-grams are usually extracted and analyzed. An n-gram is a selection of n contiguous characters from a given character stream, in this case DNA sequence segments. Here, a systematic study is made of the efficacy of n-grams for n = 2, 3, 4, 5 in promoter prediction, using them as features for a neural network classifier for E. coli and Drosophila promoters. For E. coli, n = 3, and for Drosophila, n = 4, seem to give optimal prediction values. Using the 3-gram features, promoter prediction was performed on the E. coli genome sequence. The results are encouraging in terms of positive identification of promoters in the genome compared to software packages such as BPROM, NNPP, and SAK. Whole-genome promoter prediction on the Drosophila genome was also performed, but with 4-gram features.
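    Extracting overlapping n-gram frequency vectors of the kind used as classifier inputs can be sketched as follows (a generic implementation, not the authors' code):

```python
from itertools import product

def ngram_features(sequence, n):
    """Frequency vector over all 4**n possible DNA n-grams, counted with a
    sliding window of overlapping n-grams; ordered by sorted n-gram."""
    alphabet = "ACGT"
    counts = {"".join(p): 0 for p in product(alphabet, repeat=n)}
    total = len(sequence) - n + 1
    for i in range(total):
        gram = sequence[i:i + n]
        if gram in counts:  # skip windows containing ambiguous bases
            counts[gram] += 1
    return [counts[k] / total for k in sorted(counts)]

vec = ngram_features("ACGTACGT", 3)  # 64-dimensional frequency vector
```

    For n = 3 this yields the 64-dimensional input the E. coli classifier would consume; for n = 4 the vector grows to 256 dimensions, which is one reason larger n values trade resolution against sparsity.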

  18. Microbial genomics, transcriptomics and proteomics: new discoveries in microbial decomposition research using complementary methods

    Czech Academy of Sciences Publication Activity Database

    Baldrian, Petr; López-Mondéjar, Rubén

    2014-01-01

    Roč. 98, č. 4 (2014), s. 1531-1537 ISSN 0175-7598 R&D Projects: GA MŠk LD12050; GA MŠk(CZ) EE2.3.30.0003 Institutional support: RVO:61388971 Keywords : decomposition * genomics * proteomics * saprotrophic fungi * bacteria Subject RIV: EE - Microbiology, Virology Impact factor: 3.337, year: 2014

  19. A Critical Review of Concepts and Methods Used in Classical Genome Analysis

    DEFF Research Database (Denmark)

    Seberg, Ole; Petersen, Gitte

    1998-01-01

    A short account of the development of classical genome analysis, the analysis of chromosome behaviour in metaphase I of meiosis, primarily in interspecific hybrids, is given. The application of the concept of homology to describe chromosome pairing between the respective chromosomes of a pair dur...

  20. Telomerecat: A ploidy-agnostic method for estimating telomere length from whole genome sequencing data

    NARCIS (Netherlands)

    Farmery, James H. R.; Smith, Mike L.; Lynch, Andy G.; Huissoon, Aarnoud; Furnell, Abigail; Mead, Adam; Levine, Adam P.; Manzur, Adnan; Thrasher, Adrian; Greenhalgh, Alan; Parker, Alasdair; Sanchis-Juan, Alba; Richter, Alex; Gardham, Alice; Lawrie, Allan; Sohal, Aman; Creaser-Myers, Amanda; Frary, Amy; Greinacher, Andreas; Themistocleous, Andreas; Peacock, Andrew J.; Marshall, Andrew; Mumford, Andrew; Rice, Andrew; Webster, Andrew; Brady, Angie; Koziell, Ania; Manson, Ania; Chandra, Anita; Hensiek, Anke; Veld, Anna Huis In't; Maw, Anna; Kelly, Anne M.; Moore, Anthony; Vonk Noordegraaf, Anton; Attwood, Antony; Herwadkar, Archana; Ghofrani, Ardi; Houweling, Arjan C.; Girerd, Barbara; Furie, Bruce; Treacy, Carmen M.; Millar, Carolyn M.; Sewell, Carrock; Roughley, Catherine; Titterton, Catherine; Williamson, Catherine; Hadinnapola, Charaka; Deshpande, Charu; Toh, Cheng-Hock; Bacchelli, Chiara; Patch, Chris; Geet, Chris Van; Babbs, Christian; Bryson, Christine; Penkett, Christopher J.; Rhodes, Christopher J.; Watt, Christopher; Bethune, Claire; Booth, Claire; Lentaigne, Claire; McJannet, Coleen; Church, Colin; French, Courtney; Samarghitean, Crina; Halmagyi, Csaba; Gale, Daniel; Greene, Daniel; Hart, Daniel; Allsup, David; Bennett, David; Edgar, David; Kiely, David G.; Gosal, David; Perry, David J.; Keeling, David; Montani, David; Shipley, Debbie; Whitehorn, Deborah; Fletcher, Debra; Krishnakumar, Deepa; Grozeva, Detelina; Kumararatne, Dinakantha; Thompson, Dorothy; Josifova, Dragana; Maher, Eamonn; Wong, Edwin K. S.; Murphy, Elaine; Dewhurst, Eleanor; Louka, Eleni; Rosser, Elisabeth; Chalmers, Elizabeth; Colby, Elizabeth; Drewe, Elizabeth; McDermott, Elizabeth; Thomas, Ellen; Staples, Emily; Clement, Emma; Matthews, Emma; Wakeling, Emma; Oksenhendler, Eric; Turro, Ernest; Reid, Evan; Wassmer, Evangeline; Raymond, F. 
Lucy; Hu, Fengyuan; Kennedy, Fiona; Soubrier, Florent; Flinter, Frances; Kovacs, Gabor; Polwarth, Gary; Ambegaonkar, Gautum; Arno, Gavin; Hudson, Gavin; Woods, Geoff; Coghlan, Gerry; Hayman, Grant; Arumugakani, Gururaj; Schotte, Gwen; Cook, H. Terry; Alachkar, Hana; Lango Allen, Hana; Lango-Allen, Hana; Stark, Hannah; Stauss, Hans; Schulze, Harald; Boggard, Harm J.; Baxendale, Helen; Dolling, Helen; Firth, Helen; Gall, Henning; Watson, Henry; Longhurst, Hilary; Markus, Hugh S.; Watkins, Hugh; Simeoni, Ilenia; Emmerson, Ingrid; Roberts, Irene; Quinti, Isabella; Wanjiku, Ivy; Gibbs, J. Simon R.; Thaventhiran, James; Whitworth, James; Hurst, Jane; Collins, Janine; Suntharalingam, Jay; Payne, Jeanette; Thachil, Jecko; Martin, Jennifer M.; Martin, Jennifer; Carmichael, Jenny; Maimaris, Jesmeen; Paterson, Joan; Pepke-Zaba, Joanna; Heemskerk, Johan W. M.; Gebhart, Johanna; Davis, John; Pasi, John; Bradley, John R.; Wharton, John; Stephens, Jonathan; Rankin, Julia; Anderson, Julie; Vogt, Julie; von Ziegenweldt, Julie; Rehnstrom, Karola; Megy, Karyn; Talks, Kate; Peerlinck, Kathelijne; Yates, Katherine; Freson, Kathleen; Stirrups, Kathleen; Gomez, Keith; Smith, Kenneth G. 
C.; Carss, Keren; Rue-Albrecht, Kevin; Gilmour, Kimberley; Masati, Larahmie; Scelsi, Laura; Southgate, Laura; Ranganathan, Lavanya; Ginsberg, Lionel; Devlin, Lisa; Willcocks, Lisa; Ormondroyd, Liz; Lorenzo, Lorena; Harper, Lorraine; Allen, Louise; Daugherty, Louise; Chitre, Manali; Kurian, Manju; Humbert, Marc; Tischkowitz, Marc; Bitner-Glindzicz, Maria; Erwood, Marie; Scully, Marie; Veltman, Marijke; Caulfield, Mark; Layton, Mark; McCarthy, Mark; Ponsford, Mark; Toshner, Mark; Bleda, Marta; Wilkins, Martin; Mathias, Mary; Reilly, Mary; Afzal, Maryam; Brown, Matthew; Rondina, Matthew; Stubbs, Matthew; Haimel, Matthias; Lees, Melissa; Laffan, Michael A.; Browning, Michael; Gattens, Michael; Richards, Michael; Michaelides, Michel; Lambert, Michele P.; Makris, Mike; de Vries, Minka; Mahdi-Rogers, Mohamed; Saleem, Moin; Thomas, Moira; Holder, Muriel; Eyries, Mélanie; Clements-Brod, Naomi; Canham, Natalie; Dormand, Natalie; Zuydam, Natalie Van; Kingston, Nathalie; Ghali, Neeti; Cooper, Nichola; Morrell, Nicholas W.; Yeatman, Nigel; Roy, Noémi; Shamardina, Olga; Alavijeh, Omid S.; Gresele, Paolo; Nurden, Paquita; Chinnery, Patrick; Deegan, Patrick; Yong, Patrick; Man, Patrick Yu Wai; Corris, Paul A.; Calleja, Paul; Gissen, Paul; Bolton-Maggs, Paula; Rayner-Matthews, Paula; Ghataorhe, Pavandeep K.; Gordins, Pavel; Stein, Penelope; Collins, Peter; Dixon, Peter; Kelleher, Peter; Ancliff, Phil; Yu, Ping; Tait, R. 
Campbell; Linger, Rachel; Doffinger, Rainer; Machado, Rajiv; Kazmi, Rashid; Sargur, Ravishankar; Favier, Remi; Tan, Rhea; Liesner, Ri; Antrobus, Richard; Sandford, Richard; Scott, Richard; Trembath, Richard; Horvath, Rita; Hadden, Rob; MackenzieRoss, Rob V.; Henderson, Robert; MacLaren, Robert; James, Roger; Ghurye, Rohit; DaCosta, Rosa; Hague, Rosie; Mapeta, Rutendo; Armstrong, Ruth; Noorani, Sadia; Murng, Sai; Santra, Saikat; Tuna, Salih; Johnson, Sally; Chong, Sam; Lear, Sara; Walker, Sara; Goddard, Sarah; Mangles, Sarah; Westbury, Sarah; Mehta, Sarju; Hackett, Scott; Nejentsev, Sergey; Moledina, Shahin; Bibi, Shahnaz; Meehan, Sharon; Othman, Shokri; Revel-Vilk, Shoshana; Holden, Simon; McGowan, Simon; Staines, Simon; Savic, Sinisa; Burns, Siobhan; Grigoriadou, Sofia; Papadia, Sofia; Ashford, Sofie; Schulman, Sol; Ali, Sonia; Park, Soo-Mi; Davies, Sophie; Stock, Sophie; Ali, Souad; Deevi, Sri V. V.; Gräf, Stefan; Ghio, Stefano; Wort, Stephen J.; Jolles, Stephen; Austin, Steve; Welch, Steve; Meacham, Stuart; Rankin, Stuart; Walker, Suellen; Seneviratne, Suranjith; Holder, Susan; Sivapalaratnam, Suthesh; Richardson, Sylvia; Kuijpers, Taco; Bariana, Tadbir K.; Bakchoul, Tamam; Everington, Tamara; Renton, Tara; Young, Tim; Aitman, Timothy; Warner, Timothy Q.; Vale, Tom; Hammerton, Tracey; Pollock, Val; Matser, Vera; Cookson, Victoria; Clowes, Virginia; Qasim, Waseem; Wei, Wei; Erber, Wendy N.; Ouwehand, Willem H.; Astle, William; Egner, William; Turek, Wojciech; Henskens, Yvonne; Tan, Yvonne

    2018-01-01

    Telomere length is a risk factor in disease and the dynamics of telomere length are crucial to our understanding of cell replication and vitality. The proliferation of whole genome sequencing represents an unprecedented opportunity to glean new insights into telomere biology on a previously

  1. Restriction site extension PCR: a novel method for high-throughput characterization of tagged DNA fragments and genome walking.

    Directory of Open Access Journals (Sweden)

    Jiabing Ji

    Full Text Available BACKGROUND: Insertion mutant isolation and characterization are extremely valuable for linking genes to physiological function. Once an insertion mutant phenotype is identified, the challenge is to isolate the responsible gene. Multiple strategies have been employed to isolate unknown genomic DNA that flanks mutagenic insertions; however, all these methods suffer from limitations due to inefficient ligation steps, inclusion of restriction sites within the target DNA, and non-specific product generation. These limitations become close to insurmountable when the goal is to identify insertion sites in a high-throughput manner. METHODOLOGY/PRINCIPAL FINDINGS: We designed a novel strategy called Restriction Site Extension PCR (RSE-PCR) to efficiently conduct large-scale isolation of unknown genomic DNA fragments linked to DNA insertions. The strategy is a modified adaptor-mediated PCR without ligation. An adapter, with complementarity to the 3' overhang of endonuclease-restricted (KpnI, NsiI, PstI, or SacI) DNA fragments, extends the 3' end of the DNA fragments in the first cycle of the primary RSE-PCR. During subsequent PCR cycles and a second semi-nested PCR (secondary RSE-PCR), touchdown and two-step PCR are combined to increase the amplification specificity of target fragments. The efficiency and specificity were demonstrated in our characterization of 37 tex mutants of Arabidopsis. All the steps of RSE-PCR can be executed in a 96-well PCR plate. Finally, RSE-PCR serves as a successful alternative to Genome Walker, as demonstrated by gene isolation from maize, a plant with a more complex genome than Arabidopsis. CONCLUSIONS/SIGNIFICANCE: RSE-PCR has high potential application in identifying tagged (T-DNA or transposon) sequences or walking from known DNA toward unknown regions in large-genome plants, with likely application in other organisms as well.

  2. Systematic evaluation of observational methods assessing biomechanical exposures at work

    DEFF Research Database (Denmark)

    Takala, Esa-Pekka; Irmeli, Pehkonen; Forsman, Mikael

    2009-01-01

      Systematic evaluation of observational methods assessing biomechanical exposures at work   Esa-Pekka Takala 1, Irmeli Pehkonen 1, Mikael Forsman 2, Gert-Åke Hansson 3, Svend Erik Mathiassen 4, W. Patrick Neumann 5, Gisela Sjøgaard 6, Kaj Bo Veiersted 7, Rolf Westgaard 8, Jørgen Winkel 9   1...... University of Science and Technology, Trondheim, 9 University of Gothenburg and National Research Centre for the Working Environment, Copenhagen   The aim of this project was to identify and systematically evaluate observational methods to assess workload on the musculoskeletal system. Searches...... by sorting the methods according to the several items evaluated.   Numerous methods have been developed to assess physical workload (biomechanical exposures) in order to identify hazards leading to musculoskeletal disorders, to monitor the effects of ergonomic changes, and for research. No individual method...

  3. Data-driven performance evaluation method for CMS RPC trigger ...

    Indian Academy of Sciences (India)

    2012-10-06

    Oct 6, 2012 ... A data-driven method for muon trigger performance evaluation. The task of the GMT algorithm is ... For example, to evaluate the RPC trigger system efficiency in the barrel, we select the events in .... to ∼5 GeV/c) in general hit a lower number of layers thus producing low-quality regional trigger candidates.

  4. Evaluation of man-machine systems - methods and problems

    International Nuclear Information System (INIS)

    1985-01-01

    The symposium gives a survey of the methods of evaluation which permit as quantitative an assessment as possible of the collaboration between men and machines. This complex of problems is of great current significance in many areas of application. The systems to be evaluated are aircraft, land vehicles and watercraft as well as process control systems. (orig./GL) [de

  5. Comparative study of heuristic evaluation and usability testing methods.

    Science.gov (United States)

    Thyvalikakath, Thankam Paul; Monaco, Valerie; Thambuganipalle, Himabindu; Schleyer, Titus

    2009-01-01

    Usability methods, such as heuristic evaluation, cognitive walk-throughs and user testing, are increasingly used to evaluate and improve the design of clinical software applications. There is still some uncertainty, however, as to how those methods can be used to support the development process and evaluation in the most meaningful manner. In this study, we compared the results of a heuristic evaluation with those of formal user tests in order to determine which usability problems were detected by both methods. We conducted heuristic evaluation and usability testing on four major commercial dental computer-based patient records (CPRs), which together cover 80% of the market for chairside computer systems among general dentists. Both methods yielded strong evidence that the dental CPRs have significant usability problems. An average of 50% of empirically-determined usability problems were identified by the preceding heuristic evaluation. Some statements of heuristic violations were specific enough to precisely identify the actual usability problem that study participants encountered. Other violations were less specific, but still manifested themselves in usability problems and poor task outcomes. In this study, heuristic evaluation identified a significant portion of problems found during usability testing. While we make no assumptions about the generalizability of the results to other domains and software systems, heuristic evaluation may, under certain circumstances, be a useful tool to determine design problems early in the development cycle.

  6. Assessing Student Understanding of the "New Biology": Development and Evaluation of a Criterion-Referenced Genomics and Bioinformatics Assessment

    Science.gov (United States)

    Campbell, Chad Edward

    Over the past decade, hundreds of studies have introduced genomics and bioinformatics (GB) curricula and laboratory activities at the undergraduate level. While these publications have facilitated the teaching and learning of cutting-edge content, there has yet to be an evaluation of these assessment tools to determine if they are meeting the quality control benchmarks set forth by the educational research community. An analysis of these assessment tools indicated that they do not support valid and reliable inferences about student learning. To remedy this situation, the development of a robust GB assessment aligned with the quality control benchmarks was undertaken in order to ensure evidence-based evaluation of student learning outcomes. Content validity is a central piece of construct validity, and it must be used to guide instrument and item development. This study reports on: (1) the correspondence of content validity evidence gathered from independent sources; (2) the process of item development using this evidence; (3) the results from a pilot administration of the assessment; (4) the subsequent modification of the assessment based on the pilot administration results; and (5) the results from the second administration of the assessment. Twenty-nine different subtopics within GB (Appendix B: Genomics and Bioinformatics Expert Survey) were developed based on preliminary GB textbook analyses. These subtopics were analyzed using two methods designed to gather content validity evidence: (1) a survey of GB experts (n=61) and (2) detailed content analyses of GB textbooks (n=6). By including only the subtopics that were shown to have robust support across these sources, 22 GB subtopics were established for inclusion in the assessment. An expert panel subsequently developed, evaluated, and revised two multiple-choice items to align with each of the 22 subtopics, producing a final item pool of 44 items. These items were piloted with student samples of varying content exposure levels

  7. Omni-PolyA: a method and tool for accurate recognition of Poly(A) signals in human genomic DNA

    KAUST Repository

    Magana-Mora, Arturo

    2017-08-15

    Background: Polyadenylation is a critical stage of RNA processing during the formation of mature mRNA, and is present in most of the known eukaryote protein-coding transcripts and many long non-coding RNAs. The correct identification of poly(A) signals (PAS) not only helps to elucidate the 3′-end genomic boundaries of a transcribed DNA region and gene regulatory mechanisms but also gives insight into the multiple transcript isoforms resulting from alternative PAS. Although progress has been made in the in-silico prediction of genomic signals, the recognition of PAS in DNA genomic sequences remains a challenge. Results: In this study, we analyzed human genomic DNA sequences for the 12 most common PAS variants. Our analysis has identified a set of features that helps in the recognition of true PAS, which may be involved in the regulation of the polyadenylation process. The proposed features, in combination with a recognition model, resulted in a novel method and tool, Omni-PolyA. Omni-PolyA combines several machine learning techniques such as different classifiers in a tree-like decision structure and genetic algorithms for deriving a robust classification model. We performed a comparison between results obtained by state-of-the-art methods, deep neural networks, and Omni-PolyA. Results show that Omni-PolyA significantly reduced the average classification error rate by 35.37% in the prediction of the 12 considered PAS variants relative to the state-of-the-art results. Conclusions: The results of our study demonstrate that Omni-PolyA is currently the most accurate model for the prediction of PAS in human and can serve as a useful complement to other PAS recognition methods. Omni-PolyA is publicly available as an online tool accessible at www.cbrc.kaust.edu.sa/omnipolya/.
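
    Locating candidate PAS hexamers is the first step before any such classifier is applied; it can be sketched as below (only two of the 12 common variants are listed here, and the sequence is invented for illustration):

```python
import re

# Two of the 12 common human PAS hexamer variants (canonical AATAAA first)
PAS_VARIANTS = ["AATAAA", "ATTAAA"]

def find_pas_candidates(seq):
    """Return sorted (position, variant) pairs for every PAS hexamer occurrence."""
    seq = seq.upper()
    hits = []
    for variant in PAS_VARIANTS:
        # A lookahead pattern catches overlapping occurrences as well
        hits.extend((m.start(), variant) for m in re.finditer(f"(?={variant})", seq))
    return sorted(hits)

hits = find_pas_candidates("ccgAATAAAggATTAAAtt")
# hits == [(3, "AATAAA"), (11, "ATTAAA")]
```

    A recognition model such as Omni-PolyA then classifies each candidate site as a true or false PAS from features of the surrounding sequence.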

  8. Demographically-Based Evaluation of Genomic Regions under Selection in Domestic Dogs.

    Directory of Open Access Journals (Sweden)

    Adam H Freedman

    2016-03-01

    Full Text Available Controlling for background demographic effects is important for accurately identifying loci that have recently undergone positive selection. To date, the effects of demography have not yet been explicitly considered when identifying loci under selection during dog domestication. To investigate positive selection on the dog lineage early in domestication, we examined patterns of polymorphism in six canid genomes that were previously used to infer a demographic model of dog domestication. Using an inferred demographic model, we computed false discovery rates (FDR) and identified 349 outlier regions consistent with positive selection at a low FDR. The signals in the top 100 regions were frequently centered on candidate genes related to brain function and behavior, including LHFPL3, CADM2, GRIK3, SH3GL2, MBP, PDE7B, NTAN1, and GLRA1. These regions contained significant enrichments in behavioral ontology categories. The third top hit, CCRN4L, plays a major role in lipid metabolism, which is supported by additional metabolism-related candidates revealed in our scan, including SCP2D1 and PDXC1. Comparing our method to an empirical outlier approach that does not directly account for demography, we found only modest overlaps between the two methods, with 60% of empirical outliers having no overlap with our demography-based outlier detection approach. Demography-aware approaches have lower rates of false discovery. Our top candidates for selection, in addition to expanding the set of neurobehavioral candidate genes, include genes related to lipid metabolism, suggesting a dietary target of selection that was important during the period when proto-dogs hunted and fed alongside hunter-gatherers.

  9. Demographically-Based Evaluation of Genomic Regions under Selection in Domestic Dogs

    Science.gov (United States)

    Freedman, Adam H.; Schweizer, Rena M.; Ortega-Del Vecchyo, Diego; Han, Eunjung; Davis, Brian W.; Gronau, Ilan; Silva, Pedro M.; Galaverni, Marco; Fan, Zhenxin; Marx, Peter; Lorente-Galdos, Belen; Ramirez, Oscar; Hormozdiari, Farhad; Alkan, Can; Vilà, Carles; Squire, Kevin; Geffen, Eli; Kusak, Josip; Boyko, Adam R.; Parker, Heidi G.; Lee, Clarence; Tadigotla, Vasisht; Siepel, Adam; Bustamante, Carlos D.; Harkins, Timothy T.; Nelson, Stanley F.; Marques-Bonet, Tomas; Ostrander, Elaine A.; Wayne, Robert K.; Novembre, John

    2016-01-01

    Controlling for background demographic effects is important for accurately identifying loci that have recently undergone positive selection. To date, the effects of demography have not yet been explicitly considered when identifying loci under selection during dog domestication. To investigate positive selection on the dog lineage early in domestication, we examined patterns of polymorphism in six canid genomes that were previously used to infer a demographic model of dog domestication. Using an inferred demographic model, we computed false discovery rates (FDR) and identified 349 outlier regions consistent with positive selection at a low FDR. The signals in the top 100 regions were frequently centered on candidate genes related to brain function and behavior, including LHFPL3, CADM2, GRIK3, SH3GL2, MBP, PDE7B, NTAN1, and GLRA1. These regions contained significant enrichments in behavioral ontology categories. The third top hit, CCRN4L, plays a major role in lipid metabolism, which is supported by additional metabolism-related candidates revealed in our scan, including SCP2D1 and PDXC1. Comparing our method to an empirical outlier approach that does not directly account for demography, we found only modest overlaps between the two methods, with 60% of empirical outliers having no overlap with our demography-based outlier detection approach. Demography-aware approaches have lower rates of false discovery. Our top candidates for selection, in addition to expanding the set of neurobehavioral candidate genes, include genes related to lipid metabolism, suggesting a dietary target of selection that was important during the period when proto-dogs hunted and fed alongside hunter-gatherers. PMID:26943675
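
    The demography-aware idea, calibrating the outlier threshold against statistics simulated under the inferred demographic model rather than against the empirical tail alone, can be sketched as follows (toy normally distributed statistics, not the paper's actual summary statistics or pipeline):

```python
import numpy as np

def fdr_at_threshold(observed, simulated_null, threshold):
    """Estimated FDR: expected null discoveries / observed discoveries at a threshold."""
    n_discoveries = int((observed >= threshold).sum())
    if n_discoveries == 0:
        return 0.0
    # Fraction of null (neutral-model) windows exceeding the threshold,
    # scaled to the number of observed windows
    expected_false = (simulated_null >= threshold).mean() * len(observed)
    return min(expected_false / n_discoveries, 1.0)

rng = np.random.default_rng(1)
simulated_null = rng.normal(0.0, 1.0, size=100_000)   # stat under neutral demography
observed = np.concatenate([
    rng.normal(0.0, 1.0, size=990),    # neutral windows
    rng.normal(4.0, 0.5, size=10),     # windows under positive selection
])

fdr = fdr_at_threshold(observed, simulated_null, threshold=3.0)
```

    An empirical outlier approach would instead take, say, the top 1% of `observed` directly, with no way to estimate how many of those windows are expected under neutrality alone.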

  10. SSFI and SSOMI new method of evaluating design

    International Nuclear Information System (INIS)

    Tolson, G.M.

    1992-01-01

    The NRC has developed a new inspection method which has proven its effectiveness in evaluating design organizations. The new method is used in two types of NRC inspections: the Safety System Functional Inspection (SSFI) and the Safety System Outage Modification Inspection (SSOMI). The SSFI/SSOMI audits were developed following an event which brought a nuclear power plant close to a core meltdown. That event was caused by a series of problems which would not have been found using conventional methods. The SSFI and SSOMI audits involve intense technical evaluation of a nuclear system to determine whether the system will function as designed. The SSFI/SSOMI method normally uses eight to fifteen engineers with different fields of expertise to evaluate a system, or a change to a system in the case of a SSOMI. The effectiveness of each engineer's input is amplified in a series of open, questioning, free-wheeling, brainstorming-type team meetings. During the team meetings, all aspects of the audit are controlled by a consensus of the team members. The findings from these new methods are surprisingly consistent, regardless of which organization is audited or which organization performs the audit. This consistency implies a widespread generic weakness in the manner in which design is being performed. This paper addresses generic findings and recommends increased use of these new methods to evaluate design organizations. These audit methods can be readily used to evaluate any process or system. (orig.)

  11. Genomic variation in Salmonella enterica core genes for epidemiological typing

    DEFF Research Database (Denmark)

    Leekitcharoenphon, Pimlapas; Lukjancenko, Oksana; Rundsten, Carsten Friis

    2012-01-01

    genomes and evaluate their value as typing targets, comparing whole genome typing and traditional methods such as 16S and MLST. A consensus tree based on variation of core genes gives much better resolution than 16S and MLST; the pan-genome family tree is similar to the consensus tree, but with higher...... that there is a positive selection towards mutations leading to amino acid changes. Conclusions: Genomic variation within the core genome is useful for investigating molecular evolution and providing candidate genes for bacterial genome typing. Identification of genes with different degrees of variation is important...

  12. Selection of Suitable DNA Extraction Methods for Genetically Modified Maize 3272, and Development and Evaluation of an Event-Specific Quantitative PCR Method for 3272.

    Science.gov (United States)

    Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi

    2016-01-01

    A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) maize, 3272. We first attempted to obtain genomic DNA from this maize using a DNeasy Plant Maxi kit and a DNeasy Plant Mini kit, which have been widely utilized in our previous studies, but DNA extraction yields from 3272 were markedly lower than those from non-GM maize seeds. However, lowering of DNA extraction yields was not observed with GM quicker or Genomic-tip 20/G. We chose GM quicker for evaluation of the quantitative method. We prepared a standard plasmid for 3272 quantification. The conversion factor (Cf), which is required to calculate the amount of a genetically modified organism (GMO), was experimentally determined for two real-time PCR instruments, the Applied Biosystems 7900HT (the ABI 7900) and the Applied Biosystems 7500 (the ABI 7500). The determined Cf values were 0.60 and 0.59 for the ABI 7900 and the ABI 7500, respectively. To evaluate the developed method, a blind test was conducted as part of an interlaboratory study. The trueness and precision were evaluated as the bias and reproducibility of the relative standard deviation (RSDr). The determined values were similar to those in our previous validation studies. The limit of quantitation for the method was estimated to be 0.5% or less, and we concluded that the developed method would be suitable and practical for detection and quantification of 3272.
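
    The role of the conversion factor (Cf) in such event-specific quantification can be sketched as follows (the Cf value is the one reported above for the ABI 7900; the copy numbers are invented for illustration):

```python
def gmo_percent(event_copies, reference_copies, cf):
    """GMO amount (%) = (event / endogenous-reference copy-number ratio) / Cf * 100."""
    return (event_copies / reference_copies) / cf * 100.0

cf_abi7900 = 0.60   # conversion factor determined for the ABI 7900
pct = gmo_percent(event_copies=30.0, reference_copies=10_000.0, cf=cf_abi7900)
# (30 / 10000) = 0.003; 0.003 / 0.60 = 0.005; * 100 = 0.5 (%)
```

    Because the Cf corrects the raw copy-number ratio for the event-to-endogenous ratio inherent in the GM seed itself, it has to be determined separately per instrument, as was done here for the two ABI machines.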

  13. Accounting for discovery bias in genomic prediction

    Science.gov (United States)

    Our objective was to evaluate an approach to mitigating discovery bias in genomic prediction. Accuracy may be improved by placing greater emphasis on regions of the genome expected to be more influential on a trait. Methods emphasizing regions result in a phenomenon known as “discovery bias” if info...

  14. Efficient Server-Aided Secure Two-Party Function Evaluation with Applications to Genomic Computation

    Science.gov (United States)

    2016-07-14

    for medical or other purposes. Non-medical uses of genomic data in a computation often take place in a server-mediated setting where the server...through some third-party service provider. Thus, in this work we look at private genomic computation in the light of the server-mediated setting and utilize...adversaries who corrupt them) do not collude, at any given point of time there might be multiple adversaries, but they are independent of each other

  15. Simulation study on the effects of excluding offspring information for genetic evaluation versus using genomic markers for selection in dog breeding.

    Science.gov (United States)

    Stock, K F; Distl, O

    2010-02-01

    Different modes of selection in dogs were studied with a special focus on the availability of disease information. Canine hip dysplasia (CHD) in the German shepherd dog was used as an example. The study was performed using a simulation model, comparing cases when selection was based on phenotype, true or predicted breeding value, or genomic breeding value. The parameters in the simulation model were drawn from the real population data. The data on all parents and 40% of their progeny were assumed to be available for the genetic evaluation carried out by Gibbs sampling. With respect to the use of disease records on progeny, three scenarios were considered: random exclusion of disease data (no restrictions, N), general exclusion of disease data (G) and exclusion of disease data for popular sires (P). One round of selection was considered, and the response was expressed as change of mean CHD score, proportion of dogs scored as normal, proportion of dogs scored as clearly affected and true mean breeding value in progeny of popular sires in comparison with all sires. When no restrictions on data were applied, selection on breeding value was three times more efficient than when some systematic exclusion was practised. Higher selection response than in the exclusion cases was achieved by selecting on the basis of genomic breeding value and CHD score. Genomic selection would therefore be the method of choice in the future.
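
    The core of such a simulation, comparing the response from selecting on phenotype with the upper bound of selecting directly on breeding value, can be sketched as follows (a toy additive model with an invented heritability, not the paper's CHD scoring model):

```python
import numpy as np

rng = np.random.default_rng(2)
n, h2 = 10_000, 0.3                                  # candidates and trait heritability
bv = rng.normal(0.0, np.sqrt(h2), n)                 # true breeding values
phenotype = bv + rng.normal(0.0, np.sqrt(1 - h2), n) # environmental noise added

def response(criterion, bv, top_frac=0.05):
    """Mean true breeding value of the top fraction ranked by the criterion."""
    k = int(len(bv) * top_frac)
    selected = np.argsort(criterion)[-k:]
    return float(bv[selected].mean())

r_pheno = response(phenotype, bv)   # selection on phenotype (e.g., CHD score)
r_true = response(bv, bv)           # selection on true breeding value (upper bound)
```

    Predicted and genomic breeding values fall between these two extremes, their accuracy depending on how much information (progeny records, marker data) is available, which is the gap the study quantifies under the different data-exclusion scenarios.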

  16. A unified and comprehensible view of parametric and kernel methods for genomic prediction with application to rice

    Directory of Open Access Journals (Sweden)

    Laval Jacquin

    2016-08-01

    Full Text Available One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Several parametric and kernel methods, namely least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.

  17. Computational methods for planning and evaluating geothermal energy projects

    International Nuclear Information System (INIS)

    Goumas, M.G.; Lygerou, V.A.; Papayannakis, L.E.

    1999-01-01

    In planning, designing and evaluating a geothermal energy project, a number of technical, economic, social and environmental parameters should be considered. The use of computational methods provides a rigorous analysis improving the decision-making process. This article demonstrates the application of decision-making methods developed in operational research for the optimum exploitation of geothermal resources. Two characteristic problems are considered: (1) the economic evaluation of a geothermal energy project under uncertain conditions using a stochastic analysis approach and (2) the evaluation of alternative exploitation schemes for optimum development of a low enthalpy geothermal field using a multicriteria decision-making procedure. (Author)
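
    The stochastic (Monte Carlo) part of such an economic evaluation can be sketched as follows (all cash-flow figures, the discount rate and the revenue distribution are invented for illustration, not taken from the article):

```python
import numpy as np

def npv(cash_flows, rate):
    """Net present value; cash_flows[0] is the year-0 investment (undiscounted)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

rng = np.random.default_rng(3)
n_sims, years = 5_000, 20
investment = -2_000_000.0                       # up-front drilling and plant cost
# Uncertain yearly net revenue, one row of 20 years per simulation
revenues = rng.normal(250_000.0, 60_000.0, size=(n_sims, years))

npvs = np.array([npv([investment, *r], rate=0.08) for r in revenues])
p_profitable = float((npvs > 0).mean())   # probability the project pays off
```

    Instead of a single deterministic NPV, the decision-maker gets a distribution of outcomes and risk measures such as the probability of a negative NPV, which is the kind of input a multicriteria procedure can then weigh against non-economic criteria.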

  18. New knowledge network evaluation method for design rationale management

    Science.gov (United States)

    Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao

    2015-01-01

    Current design rationale (DR) systems have not demonstrated the value of the approach in practice because little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is to provide a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives, namely, the design rationale structure scale, association knowledge and reasoning ability, the degree of design justification support and the degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge network and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method that can provide objective metrics and a selection basis for DR knowledge reuse during the product design process. In addition, the method proves to give more effective guidance and support for the application and management of DR knowledge.

  19. Evaluation of SNP Data from the Malus Infinium Array Identifies Challenges for Genetic Analysis of Complex Genomes of Polyploid Origin.

    Directory of Open Access Journals (Sweden)

    Michela Troggio

    Full Text Available High throughput arrays for the simultaneous genotyping of thousands of single-nucleotide polymorphisms (SNPs have made the rapid genetic characterisation of plant genomes and the development of saturated linkage maps a realistic prospect for many plant species of agronomic importance. However, the correct calling of SNP genotypes in divergent polyploid genomes using array technology can be problematic due to paralogy, and to divergence in probe sequences causing changes in probe binding efficiencies. An Illumina Infinium II whole-genome genotyping array was recently developed for the cultivated apple and used to develop a molecular linkage map for an apple rootstock progeny (M432, but a large proportion of segregating SNPs were not mapped in the progeny, due to unexpected genotype clustering patterns. To investigate the causes of this unexpected clustering we performed BLAST analysis of all probe sequences against the 'Golden Delicious' genome sequence and discovered evidence for paralogous annealing sites and probe sequence divergence for a high proportion of probes contained on the array. Following visual re-evaluation of the genotyping data generated for 8,788 SNPs for the M432 progeny using the array, we manually re-scored genotypes at 818 loci and mapped a further 797 markers to the M432 linkage map. The newly mapped markers included the majority of those that could not be mapped previously, as well as loci that were previously scored as monomorphic, but which segregated due to divergence leading to heterozygosity in probe annealing sites. An evaluation of the 8,788 probes in a diverse collection of Malus germplasm showed that more than half the probes returned genotype clustering patterns that were difficult or impossible to interpret reliably, highlighting implications for the use of the array in genome-wide association studies.

  20. Update History of This Database - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of the original website: 2014/10/10 PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods English archive site is opened. 2012/08/08 PGDBj Regis... Update History of This Database - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive

  1. DOE methods for evaluating environmental and waste management samples.

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S C; McCulloch, M; Thomas, B L; Riley, R G; Sklarew, D S; Mong, G M; Fadeff, S K [eds.; Pacific Northwest Lab., Richland, WA (United States)

    1994-04-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) provides applicable methods in use by the US Department of Energy (DOE) laboratories for sampling and analyzing constituents of waste and environmental samples. The development of DOE Methods is supported by the Laboratory Management Division (LMD) of the DOE. This document contains chapters and methods that are proposed for use in evaluating components of DOE environmental and waste management samples. DOE Methods is a resource intended to support sampling and analytical activities that will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the US Environmental Protection Agency (EPA), or others.

  2. DOE methods for evaluating environmental and waste management samples

    International Nuclear Information System (INIS)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K.

    1993-03-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) provides applicable methods in use by the US Department of Energy (DOE) laboratories for sampling and analyzing constituents of waste and environmental samples. The development of DOE Methods is supported by the Laboratory Management Division (LMD) of the DOE. This document contains chapters and methods that are proposed for use in evaluating components of DOE environmental and waste management samples. DOE Methods is a resource intended to support sampling and analytical activities that will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the US Environmental Protection Agency (EPA), or others.

  3. Evaluation of methods for the concentration and extraction of viruses from sewage in the context of metagenomic sequencing

    DEFF Research Database (Denmark)

    Hjelmsø, Mathis Hjort; Hellmér, Maria; Fernandez-Cassi, Xavier

    2017-01-01

    concentrations. This necessitates a step of sample concentration to allow for sensitive virus detection. Additionally, viruses harbor a large diversity of both surface and genome structures, which makes universal viral genomic extraction difficult. Current studies have tackled these challenges in many different...... this study aimed to evaluate the efficiency of four commonly applied viral concentration techniques (precipitation with polyethylene glycol, organic flocculation with skim milk, monolithic adsorption filtration and glass wool filtration) and extraction methods (Nucleospin RNA XS, QIAamp Viral RNA Mini Kit...... or PowerViral® Environmental RNA/DNA Isolation Kit. The highest viral specificity was found in samples concentrated by precipitation with polyethylene glycol or extracted with Nucleospin RNA XS. Detection of viral pathogens depended on the method used. These results contribute to the understanding of method...

  4. Development and Evaluation of a Genome-Wide 6K SNP Array for Diploid Sweet Cherry and Tetraploid Sour Cherry

    Science.gov (United States)

    Peace, Cameron; Bassil, Nahla; Main, Dorrie; Ficklin, Stephen; Rosyara, Umesh R.; Stegmeir, Travis; Sebolt, Audrey; Gilmore, Barbara; Lawley, Cindy; Mockler, Todd C.; Bryant, Douglas W.; Wilhelm, Larry; Iezzoni, Amy

    2012-01-01

    High-throughput genome scans are important tools for genetic studies and breeding applications. Here, a 6K SNP array for use with the Illumina Infinium® system was developed for diploid sweet cherry (Prunus avium) and allotetraploid sour cherry (P. cerasus). This effort was led by RosBREED, a community initiative to enable marker-assisted breeding for rosaceous crops. Next-generation sequencing in diverse breeding germplasm provided 25 billion basepairs (Gb) of cherry DNA sequence from which were identified genome-wide SNPs for sweet cherry and for the two sour cherry subgenomes derived from sweet cherry (avium subgenome) and P. fruticosa (fruticosa subgenome). Anchoring to the peach genome sequence, recently released by the International Peach Genome Initiative, predicted relative physical locations of the 1.9 million putative SNPs detected, preliminarily filtered to 368,943 SNPs. Further filtering was guided by results of a 144-SNP subset examined with the Illumina GoldenGate® assay on 160 accessions. A 6K Infinium® II array was designed with SNPs evenly spaced genetically across the sweet and sour cherry genomes. SNPs were developed for each sour cherry subgenome by using minor allele frequency in the sour cherry detection panel to enrich for subgenome-specific SNPs followed by targeting to either subgenome according to alleles observed in sweet cherry. The array was evaluated using panels of sweet (n = 269) and sour (n = 330) cherry breeding germplasm. Approximately one third of array SNPs were informative for each crop. A total of 1825 polymorphic SNPs were verified in sweet cherry, 13% of these originally developed for sour cherry. Allele dosage was resolved for 2058 polymorphic SNPs in sour cherry, one third of these being originally developed for sweet cherry. This publicly available genomics resource represents a significant advance in cherry genome-scanning capability that will accelerate marker-locus-trait association discovery, genome

  5. Noise robustness of interferometric surface topography evaluation methods. Correlogram correlation

    Science.gov (United States)

    Kiselev, Ilia; Kiselev, Egor I.; Drexel, Michael; Hauptmannl, Michael

    2017-12-01

    Different surface height estimation methods are differently affected by interferometric noise. From a theoretical analysis we obtain height variance estimators for the methods. The estimations allow us to rigorously compare the noise robustness of popular evaluation algorithms. The envelope methods have the highest variances and hence the lowest noise resistances. The noise robustness improves from the envelope to the phase methods, but a technique involving the correlation of correlograms is superior even to the latter. We dwell on some details of this correlogram correlation method and the range of its application.

  6. Study on evaluation methods for Rayleigh wave dispersion characteristic

    Science.gov (United States)

    Shi, L.; Tao, X.; Kayen, R.; Shi, H.; Yan, S.

    2005-01-01

    The evaluation of Rayleigh wave dispersion characteristics is the key step in detecting S-wave velocity structure. By comparing the dispersion curves directly with those of the spectral analysis of surface waves (SASW) method, rather than comparing the S-wave velocity structures, the validity and precision of the microtremor-array method (MAM) can be evaluated more objectively. The results from the China-US joint surface wave investigation at 26 sites in Tangshan, China, show that the MAM has the same precision as the SASW method at 83% of the 26 sites. The MAM is valid for testing Rayleigh wave dispersion characteristics and has great application potential for detecting site S-wave velocity structure.

  7. Methods of Identification and Evaluation of Brownfield Sites

    Directory of Open Access Journals (Sweden)

    Safet Kurtović

    2014-04-01

    Full Text Available The basic objective of this paper was to determine the importance and restoration potential of brownfield sites in terms of the economic prosperity of a particular region or country. In addition, in a theoretical sense, this paper presents the methods used in the identification of brownfield sites, such as the Smart Growth Network model and the Thomas GIS model, and methods for the evaluation of brownfield sites, namely the indexing method, cost-benefit analysis and multivariate analysis.

  8. Investigation of evaluation methods for human factors education effectiveness

    International Nuclear Information System (INIS)

    Yoshimura, Seiichi; Fujimoto, Junzo; Sasou, Kunihide; Hasegawa, Naoko

    2004-01-01

    Education effectiveness commensurate with investment is required in the context of the deregulation of the electric power industry. Therefore, evaluation methods for the effectiveness of human factors education, capable of observing how a human factors culture pervades an organization, were investigated through research on education effectiveness at universities and actual in-house education at industrial companies. As a result, the appropriate contents of evaluation were found to be changes in attitudes toward human factors and improvement proposals in workplaces, in keeping with the purpose of human factors education. A questionnaire was found to be a suitable form of evaluation, and the desirable timing of evaluation is both immediately after education and after some period back in the workplace. Hereafter, data will be collected using these two kinds of questionnaires in human factors education courses at CRIEPI and in some education courses at utilities. Thus, an education effectiveness evaluation method suitable for human factors will be established. (author)

  9. System and method for evaluating a wire conductor

    Science.gov (United States)

    Panozzo, Edward; Parish, Harold

    2013-10-22

    A method of evaluating an electrically conductive wire segment having an insulated intermediate portion and non-insulated ends includes passing the insulated portion of the wire segment through an electrically conductive brush. According to the method, an electrical potential is established on the brush by a power source. The method also includes determining a value of electrical current that is conducted through the wire segment by the brush when the potential is established on the brush. The method additionally includes comparing the value of electrical current conducted through the wire segment with a predetermined current value to thereby evaluate the wire segment. A system for evaluating an electrically conductive wire segment is also disclosed.

  10. Methods to evaluate fish freshness in research and industry

    DEFF Research Database (Denmark)

    Olafsdottir, G.; Martinsdóttir, E.; Oehlenschläger, J.

    1997-01-01

    Current work in a European concerted action project 'Evaluation of Fish Freshness' (AIR3 CT94-2283) focuses on harmonizing research activities in the area of fish freshness evaluation in leading fish laboratories in Europe (see Box 1). The overall aim of the concerted action project is to validat...... measurements with respect to fish freshness evaluation. In this article, the different subgroups have summarized changes that occur in fish and methods to evaluate fish freshness as a first step towards the definition of criteria for fish freshness...

  11. Durability evaluation method on rebar corrosion of reinforced concrete

    International Nuclear Information System (INIS)

    Kitsutaka, Yoshinori

    2013-01-01

    In this paper, methods for evaluating the durability of concrete structures in nuclear power plants were investigated. In view of the importance of evaluating the degree of deterioration of reinforced concrete structures, relationships should be formulated among the number of years elapsed, t, the amount of action of a deteriorative factor, F, the degree of material deterioration, D, and the performance of the structure, P. Evaluation by PDFt diagrams combining these relationships may be effective. A detailed procedure of durability evaluation for a reinforced concrete structure using the PDFt concept is presented for rebar corrosion caused by neutralization and the penetration of salinity, with reference to recent papers. (author)

  12. Novel degenerate PCR method for whole genome amplification applied to Peru Margin (ODP Leg 201 subsurface samples

    Directory of Open Access Journals (Sweden)

    Amanda eMartino

    2012-01-01

    Full Text Available A degenerate PCR-based method of whole-genome amplification, designed to work fluidly with 454 sequencing technology, was developed and tested for use on deep marine subsurface DNA samples. The method, which we have called Random Amplification Metagenomic PCR (RAMP, involves the use of specific primers from Roche 454 amplicon sequencing, modified by the addition of a degenerate region at the 3’ end. It utilizes a PCR reaction, which resulted in no amplification from blanks, even after 50 cycles of PCR. After efforts to optimize experimental conditions, the method was tested with DNA extracted from cultured E. coli cells, and genome coverage was estimated after sequencing on three different occasions. Coverage did not vary greatly with the different experimental conditions tested, and was around 62% with a sequencing effort equivalent to a theoretical genome coverage of 14.10X. The GC content of the sequenced amplification product was within 2% of the predicted values for this strain of E. coli. The method was also applied to DNA extracted from marine subsurface samples from ODP Leg 201 site 1229 (Peru Margin, and results of a taxonomic analysis revealed microbial communities dominated by Proteobacteria, Chloroflexi, Firmicutes, Euryarchaeota, and Crenarchaeota, among others. These results were similar to those obtained previously for those samples; however, variations in the proportions of taxa show that community analysis can be sensitive to both the amplification technique used and the method of assigning sequences to taxonomic groups. Overall, we find that RAMP represents a valid methodology for amplifying metagenomes from low biomass samples.

  13. Methods and extractants to evaluate silicon availability for sugarcane.

    Science.gov (United States)

    Crusciol, Carlos Alexandre Costa; de Arruda, Dorival Pires; Fernandes, Adalton Mazetti; Antonangelo, João Arthur; Alleoni, Luís Reynaldo Ferracciú; Nascimento, Carlos Antonio Costa do; Rossato, Otávio Bagiotto; McCray, James Mabry

    2018-01-17

    The correct evaluation of silicon (Si) availability in different soil types is critical in defining the amount of Si to be supplied to crops. This study was carried out to evaluate two methods and five chemical Si extractants in clayey, sandy-loam, and sandy soils cultivated with sugarcane (Saccharum spp. hybrids). Soluble Si was extracted using two extraction methods (conventional and microwave oven) and five Si extractants (CaCl 2 , deionized water, KCl, Na-acetate buffer (pH 4.0), and acetic acid). No single method and/or extractant adequately estimated the Si availability in the soils. Conventional extraction with KCl was no more effective than other methods in evaluating Si availability; however, it had less variation in estimating soluble Si between soils with different textural classes. In the clayey and sandy soils, the Na-acetate buffer (pH 4.0) and acetic acid were effective in evaluating the Si availability in the soil regardless of the extraction methods. The extraction with acetic acid using the microwave oven, however, overestimated the Si availability. In the sandy-loam soil, extraction with deionized water using the microwave oven method was more effective in estimating the Si availability in the soil than the other extraction methods.

  14. Genome-scale genetic manipulation methods for exploring bacterial molecular biology.

    Science.gov (United States)

    Gagarinova, Alla; Emili, Andrew

    2012-06-01

    Bacteria are diverse and abundant, playing key roles in human health and disease, the environment, and biotechnology. Despite progress in genome sequencing and bioengineering, much remains unknown about the functional organization of prokaryotes. For instance, roughly a third of the protein-coding genes of the best-studied model bacterium, Escherichia coli, currently lack experimental annotations. Systems-level experimental approaches for investigating the functional associations of bacterial genes and genetic structures are essential for defining the fundamental molecular biology of microbes, preventing the spread of antibacterial resistance in the clinic, and driving the development of future biotechnological applications. This review highlights recently introduced large-scale genetic manipulation and screening procedures for the systematic exploration of bacterial gene functions, molecular relationships, and the global organization of bacteria at the gene, pathway, and genome levels.

  15. Combining genomic sequencing methods to explore viral diversity and reveal potential virus-host interactions

    Directory of Open Access Journals (Sweden)

    Cheryl-Emiliane Tien Chow

    2015-04-01

    Full Text Available Viral diversity and virus-host interactions in oxygen-starved regions of the ocean, also known as oxygen minimum zones (OMZs), remain relatively unexplored. Microbial community metabolism in OMZs alters nutrient and energy flow through marine food webs, resulting in biological nitrogen loss and greenhouse gas production. Thus, viruses infecting OMZ microbes have the potential to modulate community metabolism with resulting feedback on ecosystem function. Here, we describe viral communities inhabiting oxic surface (10 m) and oxygen-starved basin (200 m) waters of Saanich Inlet, a seasonally anoxic fjord on the coast of Vancouver Island, British Columbia, using viral metagenomics and complete viral fosmid sequencing on samples collected between April 2007 and April 2010. Of 6459 open reading frames (ORFs) predicted across all 34 viral fosmids, 77.6% (n=5010) had no homology to reference viral genomes. These fosmids recruited a higher proportion of viral metagenomic sequences from Saanich Inlet than from nearby northeastern subarctic Pacific Ocean (Line P) waters, indicating differences in the viral communities between coastal and open ocean locations. While functional annotations of fosmid ORFs were limited, recruitment to NCBI's non-redundant 'nr' database and publicly available single-cell genomes identified putative viruses infecting marine thaumarchaeal and SUP05 proteobacteria, providing potential host linkages with relevance to coupled biogeochemical cycling processes in OMZ waters. Taken together, these results highlight the power of coupled analyses of multiple sequence data types, such as viral metagenomic and fosmid sequence data with prokaryotic single-cell genomes, to chart viral diversity, elucidate genomic and ecological contexts for previously unclassifiable viral sequences, and identify novel host interactions in natural and engineered ecosystems.

  16. A Critical Review of Concepts and Methods Used in Classical Genome Analysis

    DEFF Research Database (Denmark)

    Seberg, Ole; Petersen, Gitte

    1998-01-01

    A short account of the development of classical genome analysis, the analysis of chromosome behaviour in metaphase I of meiosis, primarily in interspecific hybrids, is given. The application of the concept of homology to describe chromosome pairing between the respective chromosomes of a pair...... breeding but it has no place in systematics. With an increased knowledge and understanding of the mechanism behind meiosis, data useful in a systematic context may eventually be produced....

  17. A new method for detecting signal regions in ordered sequences of real numbers, and application to viral genomic data.

    Science.gov (United States)

    Gog, Julia R; Lever, Andrew M L; Skittrall, Jordan P

    2018-01-01

    We present a fast, robust and parsimonious approach to detecting signals in an ordered sequence of numbers. Our motivation is in seeking a suitable method to take a sequence of scores corresponding to properties of positions in virus genomes, and find outlying regions of low scores. Suitable statistical methods without using complex models or making many assumptions are surprisingly lacking. We resolve this by developing a method that detects regions of low score within sequences of real numbers. The method makes no assumptions a priori about the length of such a region; it gives the explicit location of the region and scores it statistically. It does not use detailed mechanistic models so the method is fast and will be useful in a wide range of applications. We present our approach in detail, and test it on simulated sequences. We show that it is robust to a wide range of signal morphologies, and that it is able to capture multiple signals in the same sequence. Finally we apply it to viral genomic data to identify regions of evolutionary conservation within influenza and rotavirus.
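
    The core idea, scanning for the contiguous region with the largest cumulative deficit below the sequence mean and then scoring it against a null, can be sketched as follows. This is a simplified stand-in, not the authors' algorithm; the toy score sequence and the permutation test are assumptions made for illustration:

    ```python
    import random

    def lowest_scoring_region(scores):
        """Kadane-style minimum-sum scan: find the contiguous region whose
        cumulative score, measured relative to the sequence mean, is lowest.
        Returns (start, end, deficit), with end exclusive."""
        mean = sum(scores) / len(scores)
        best, best_sum = None, float("inf")
        cur_start, cur_sum = 0, 0.0
        for i, s in enumerate(scores):
            if cur_sum > 0:  # a positive running prefix can only hurt: restart here
                cur_start, cur_sum = i, 0.0
            cur_sum += s - mean
            if cur_sum < best_sum:
                best_sum = cur_sum
                best = (cur_start, i + 1, cur_sum)
        return best

    def permutation_p_value(scores, n_perm=1000, seed=0):
        """Crude significance check: how often does a shuffled sequence contain
        an equally deep region? (A stand-in for the paper's statistical scoring.)"""
        rng = random.Random(seed)
        observed = lowest_scoring_region(scores)[2]
        shuffled, hits = list(scores), 0
        for _ in range(n_perm):
            rng.shuffle(shuffled)
            if lowest_scoring_region(shuffled)[2] <= observed:
                hits += 1
        return (hits + 1) / (n_perm + 1)

    # toy data: a run of low scores embedded at positions 40-49
    scores = [1.0] * 40 + [-2.0] * 10 + [1.0] * 50
    start, end, deficit = lowest_scoring_region(scores)
    ```

    On the toy data the scan recovers the embedded low-scoring run without any prior assumption about its length, and the permutation test flags it as unlikely under shuffling.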

  18. Reference size-matching, whole-genome amplification, and fluorescent labeling as a method for chromosomal microarray analysis of clinically actionable copy number alterations in formalin-fixed, paraffin-embedded tumor tissue.

    Science.gov (United States)

    Gunn, Shelly R; Govender, Shailin; Sims, Cynthe L; Khurana, Aditi; Koo, Samuel; Scoggin, Jayne; Moore, Mathew W; Cotter, Philip D

    2018-02-19

    Cancer genome copy number alterations (CNAs) assist clinicians in selecting targeted therapeutics. Solid tumor CNAs are most commonly evaluated in formalin-fixed, paraffin-embedded (FFPE) tissue by fluorescence in situ hybridization. Although fluorescence in situ hybridization is a sensitive and specific assay for interrogating pre-selected genomic regions, it provides no information about co-existing clinically significant copy number changes. Chromosomal microarray analysis is an alternative DNA-based method for interrogating genome-wide CNAs in solid tumors. However, DNA extracted from FFPE tumor tissue produces an essential, yet problematic, sample type. The College of American Pathologists/American Society of Clinical Oncology guidelines for optimal tumor tissue handling published in 2007 for breast cancer, and in 2016 for gastroesophageal adenocarcinomas are lacking for other solid tumors. Thus, cold ischemia times are seldom monitored in non-breast cancer, non-gastroesophageal adenocarcinomas, and all tumor biospecimens are affected by chemical fixation. Although intended to preserve specimens for long-term storage, formalin fixation causes loss of genetic information through DNA damage. Here, we describe a reference size matching, whole-genome amplification, and fluorescent labeling method for FFPE-derived DNA designed to improve chromosomal microarray results from sub-optimal nucleic acids and salvage highly degraded samples. With this technological advance, whole-genome copy number analysis of tumor DNA can be reliably performed in the clinical laboratory for a wide variety of tissue conditions and tumor types. Copyright © 2018. Published by Elsevier Inc.

  19. Enriching the gene set analysis of genome-wide data by incorporating directionality of gene expression and combining statistical hypotheses and methods

    Science.gov (United States)

    Väremo, Leif; Nielsen, Jens; Nookaew, Intawat

    2013-01-01

    Gene set analysis (GSA) is used to elucidate genome-wide data, in particular transcriptome data. A multitude of methods have been proposed for this step of the analysis, and many of them have been compared and evaluated. Unfortunately, there is no consolidated opinion regarding what methods should be preferred, and the variety of available GSA software and implementations pose a difficulty for the end-user who wants to try out different methods. To address this, we have developed the R package Piano that collects a range of GSA methods into the same system, for the benefit of the end-user. Further on we refine the GSA workflow by using modifications of the gene-level statistics. This enables us to divide the resulting gene set P-values into three classes, describing different aspects of gene expression directionality at gene set level. We use our fully implemented workflow to investigate the impact of the individual components of GSA by using microarray and RNA-seq data. The results show that the evaluated methods are globally similar and the major separation correlates well with our defined directionality classes. As a consequence of this, we suggest to use a consensus scoring approach, based on multiple GSA runs. In combination with the directionality classes, this constitutes a more thorough basis for an enriched biological interpretation. PMID:23444143
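
    The consensus scoring suggested above can be illustrated with a simple rank-aggregation sketch (a generic illustration, not the Piano package's implementation; the method names and p-values are made up):

    ```python
    import statistics

    def consensus_rank(pvals_by_method):
        """Median-rank consensus across several GSA runs: rank gene sets
        within each method by p-value, then order sets by their median rank.
        (A generic consensus-scoring sketch, not Piano's implementation.)"""
        gene_sets = list(next(iter(pvals_by_method.values())).keys())
        ranks = {s: [] for s in gene_sets}
        for pvals in pvals_by_method.values():
            ordered = sorted(gene_sets, key=lambda s: pvals[s])
            for r, s in enumerate(ordered, start=1):
                ranks[s].append(r)
        return sorted(gene_sets, key=lambda s: statistics.median(ranks[s]))

    # toy p-values from three hypothetical GSA methods
    runs = {
        "methodA": {"cell cycle": 0.001, "apoptosis": 0.20, "glycolysis": 0.04},
        "methodB": {"cell cycle": 0.005, "apoptosis": 0.50, "glycolysis": 0.01},
        "methodC": {"cell cycle": 0.002, "apoptosis": 0.30, "glycolysis": 0.09},
    }
    ordering = consensus_rank(runs)
    ```

    Gene sets that rank highly across all runs float to the top regardless of any single method's quirks, which is the point of basing interpretation on multiple GSA runs rather than one.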

  20. Nurse educators’ perceptions of OSCE as a clinical evaluation method

    Directory of Open Access Journals (Sweden)

    MM Chabeli

    2001-09-01

    Full Text Available The South African Qualifications Authority and the South African Nursing Council are in pursuit of quality nursing education to enable learners to practise as independent and autonomous practitioners. The educational programme should focus on the facilitation of critical and reflective thinking skills that will help the learner to make rational decisions and solve problems. A way of achieving this level of functioning is the use of assessment and evaluation methods that measure the learners' clinical competence holistically. This article focuses on the perceptions of twenty nurse educators, purposively selected from three nursing colleges affiliated to a university in Gauteng, regarding the use of the OSCE (Objective Structured Clinical Examination) as a clinical evaluation method, within a qualitative and descriptive research strategy. Three focus group interviews were conducted in different sessions. A descriptive content analysis was used. Trustworthiness was ensured by using Lincoln and Guba's model (1985). The results revealed both positive and negative aspects of the OSCE as a clinical evaluation method with regard to: administrative aspects; evaluators; learners; procedures/instruments and evaluation. The conclusion drawn from the related findings is that the OSCE does not measure the learners' clinical competence holistically. It is therefore recommended that the identified negative perceptions be taken as challenges faced by nurse educators and that the positive aspects be strengthened. One way of meeting these recommendations is the use of varied alternative methods for clinical assessment and evaluation that focus on the holistic measurement of the learners' clinical competence.

  1. Use Case Evaluation (UCE): A Method for Early Usability Evaluation in Software Development

    DEFF Research Database (Denmark)

    Stage, Jan; Høegh, Rune Thaarup; Hornbæk, K.

    2007-01-01

    It is often argued that usability problems should be identified as early as possible during software development, but many usability evaluation methods do not fit well in early development activities. We propose a method for usability evaluation of use cases, a widely used representation of design...... ideas produced early in software development processes. The method proceeds by systematic inspection of use cases with reference to a set of guidelines for usable design. To validate the method, four evaluators inspected a set of use cases for a health care application.

  2. An evaluation of multiple annealing and looping based genome amplification using a synthetic bacterial community

    KAUST Repository

    Wang, Yong

    2016-02-23

    The low biomass in environmental samples is a major challenge for microbial metagenomic studies. Amplification of genomic DNA is therefore frequently applied to meet the minimum DNA requirement for high-throughput next-generation sequencing. Using a synthetic bacterial community, the amplification efficiency of the Multiple Annealing and Looping Based Amplification Cycles (MALBAC) kit, originally developed to amplify single-cell genomic DNA of mammalian organisms, was examined. A DNA template of 10 pg per MALBAC reaction can generate enough DNA for Illumina sequencing. Using 10 pg and 100 pg templates per reaction set, the MALBAC kit shows stable and homogeneous amplification, as indicated by the highly consistent coverage of reads from the two amplified samples on contigs assembled from the original unamplified sample. Although the GenomePlex whole genome amplification kit generates enough DNA from 100 pg of template per reaction, minority species of the mixed bacterial community are not linearly amplified. For both kits, GC-rich regions of the genomic DNA are not efficiently amplified, as suggested by the low coverage of contigs with high GC content. These results support the use of the MALBAC kit for amplifying environmental microbial DNA samples, while raising concerns about its application to bacterial species with high GC content.
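
    The GC-bias check described here, where under-amplification shows up as low read coverage on GC-rich contigs, can be sketched as a simple QC computation pairing each contig's GC fraction with its mean coverage. The contig sequences and coverage values below are invented examples, not data from the study:

    ```python
    def gc_content(seq):
        """Fraction of G and C bases in a nucleotide sequence."""
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq)

    def gc_bias_table(contigs, coverage):
        """Pair each contig's GC fraction with its mean read coverage, sorted
        by GC. Under-amplification of GC-rich regions then appears as low
        coverage at the high-GC end of the table."""
        return sorted((gc_content(seq), coverage[name])
                      for name, seq in contigs.items())

    # invented example: a low-GC and a high-GC contig with their mean coverages
    contigs = {
        "c1": "ATATATATGCATATAT",  # low GC
        "c2": "GCGCGCATGCGCGCGC",  # high GC
    }
    coverage = {"c1": 35.2, "c2": 8.1}
    table = gc_bias_table(contigs, coverage)
    ```

    Plotting or correlating the two columns of such a table is a quick way to see whether an amplification kit depresses coverage as GC content rises.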

  3. DOE methods for evaluating environmental and waste management samples

    International Nuclear Information System (INIS)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K.

    1994-04-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Laboratory Management Division of the DOE. Methods are prepared for entry into DOE Methods as chapter editors, together with DOE and other participants in this program, identify analytical and sampling method needs. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types: "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations

  4. Evaluation of a Rapid Method of Determination of Plasma Fibrinogen

    Science.gov (United States)

    Thomson, G. W.; McSherry, B. J.; Valli, V. E. O.

    1974-01-01

    An evaluation was made of a rapid semiautomated method of determining fibrinogen levels in bovine plasma. This method, the fibrometer method of Morse, Panek and Menga (8), is based on the principle that when thrombin is added to suitably diluted plasma the time of clotting is linearly related to the fibrinogen concentration. A standard curve prepared using bovine plasma had an r value of .9987 and analysis of variance showed there was no significant deviation from regression. A comparison of the fibrometer method and the biuret method of Ware, Guest and Seegers done on 158 bovine plasma samples showed good correlation between the two methods. It was concluded that the fibrometer method does measure bovine fibrinogen and has considerable merit for use in clinical diseases of cattle. PMID:4277474
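
    The calibration idea behind the fibrometer method can be sketched in a few lines: fit a straight line to the clotting times of standards, then invert it to read off an unknown sample. The concentrations and times below are synthetic, chosen to fall exactly on a line; they are not values from the paper:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# standards: fibrinogen (mg/dL) -> clotting time (s); hypothetical values
conc = [100, 200, 300, 400]
time = [30.0, 22.0, 14.0, 6.0]
m, b = fit_line(conc, time)

def fibrinogen_from_time(t):
    """Invert the standard curve to estimate concentration from clotting time."""
    return (t - b) / m

print(round(fibrinogen_from_time(18.0)))  # -> 250 on this synthetic line
```

    In practice the curve would be prepared from dilutions of a plasma standard, as in the paper's bovine standard curve (r = .9987).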

  5. Evaluation of genome-wide genotyping concordance between tumor tissues and peripheral blood.

    Science.gov (United States)

    Shao, Wei; Ge, Yuqiu; Ma, Gaoxiang; Du, Mulong; Chu, Haiyan; Qiang, Fulin; Zhang, Zhengdong; Wang, Meilin

    2017-03-01

    Tumor tissues are a potential resource for cancer-susceptibility studies. To assess genotyping concordance between tumor tissues and peripheral blood, we conducted this study with a large sample size and at genome-wide scale. Genome-wide genotypes of human colon adenocarcinoma (COAD) retrieved from The Cancer Genome Atlas (TCGA) were analyzed. A total of 387 pairs of matched fresh-frozen tumor tissues and peripheral blood samples passed the quality control processes. A high concordance rate (94.85% with no-calls and 97.89% without no-calls) was found between tumor tissues and peripheral blood. The discordance rate rose with increasing heterozygote rate, and the trend was statistically significant. The total missing rate was 3.10%. We also verified 14 susceptibility SNPs, for which the average genotyping concordance rate was 97.42%. These findings suggest that the majority of SNPs can be accurately genotyped using DNA isolated from tumor tissues. Copyright © 2017 Elsevier Inc. All rights reserved.
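
    The two headline figures above (concordance with and without no-calls) can be reproduced on toy data with a short sketch; the genotype calls below are invented:

```python
def concordance(tumor, blood, drop_no_calls):
    """Fraction of matching genotype calls between two sample lists.

    A call of "NC" marks a no-call; with drop_no_calls=True such pairs are
    excluded from the denominator, otherwise they count as discordant.
    """
    pairs = list(zip(tumor, blood))
    if drop_no_calls:
        pairs = [(t, b) for t, b in pairs if "NC" not in (t, b)]
    if not pairs:
        return float("nan")
    return sum(t == b for t, b in pairs) / len(pairs)

# invented calls at six SNPs for one matched tumor/blood pair
tumor = ["AA", "AG", "GG", "NC", "AG", "AA"]
blood = ["AA", "AG", "GG", "GG", "AA", "AA"]

print(f"with no-calls:    {concordance(tumor, blood, False):.2%}")
print(f"without no-calls: {concordance(tumor, blood, True):.2%}")
```

    On the real data this computation, run over all 387 pairs and all SNPs, yields the 94.85% / 97.89% rates quoted above.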

  6. Multiple centroid method to evaluate the adaptability of alfalfa genotypes

    Directory of Open Access Journals (Sweden)

    Moysés Nascimento

    2015-02-01

    Full Text Available This study aimed to evaluate the efficiency of the multiple centroid method for studying the adaptability of alfalfa (Medicago sativa L.) genotypes. In this method, genotypes are compared with ideotypes defined by the bisegmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). We therefore used data on the dry matter production of 92 alfalfa cultivars over 20 cuttings, from an experiment in randomized blocks with two replications carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it gave no ambiguous indications and, provided that ideotypes are defined according to the researcher's interest, it facilitates data interpretation.

  7. Comparison of two heuristic evaluation methods for evaluating the usability of health information systems.

    Science.gov (United States)

    Khajouei, Reza; Hajesmaeel Gohari, Sadrieh; Mirzaee, Moghaddameh

    2018-04-01

    In addition to following the usual Heuristic Evaluation (HE) method, the usability of health information systems can also be evaluated using a checklist. The objective of this study is to compare the performance of these two methods in identifying usability problems of health information systems. Eight evaluators independently evaluated different parts of a Medical Records Information System using two methods of HE (usual and with a checklist). The two methods were compared in terms of the number of problems identified, problem type, and the severity of identified problems. In all, 192 usability problems were identified by two methods in the Medical Records Information System. This was significantly higher than the number of usability problems identified by the checklist and usual method (148 and 92, respectively) (p information systems. The results demonstrated that the checklist method had significantly better performance in terms of the number of identified usability problems; however, the performance of the usual method for identifying problems of higher severity was significantly better. Although the checklist method can be more efficient for less experienced evaluators, wherever usability is critical, the checklist should be used with caution in usability evaluations. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Methods for evaluation of energy efficiency of machine tools

    International Nuclear Information System (INIS)

    Schudeleit, Timo; Züst, Simon; Wegener, Konrad

    2015-01-01

    Energy efficiency of machine tools remains an ongoing challenge for manufacturing industries, as a number of international initiatives show. The first part of the ISO 14955 series focusses on the basic understanding, power metering and energy-efficient design of machine tools. The ISO standardization body (ISO/TC 39 WG 12) is currently working on the second part of the ISO 14955 series, which aims at defining a standardized test method. However, a method suitable for standardization has not yet been identified, owing to the varied advantages and disadvantages of the different test methods. In order to find the most feasible test method for standardization, four general energy-efficiency test methods are described and compared in a state-of-the-art review. The test methods are then evaluated against seven key characteristic criteria using the Analytic Hierarchy Process (AHP), a structured multiple-criteria decision-making technique. The selection of criteria and the judgement of their relative importance were carried out in collaboration with experts from the machine tool industry and research institutes. From these, weight factors are derived and the test method best suited for both industrial application and standardization is identified. The validity of the evaluation results is proven using the geometric consistency method. - Highlights: • Study for pushing forward the standardization work on the ISO 14955 series. • Comparison of methods for testing the energy efficiency of machine tools. • Evaluation of test methods using a multiple-criteria decision-making technique. • The reference process is the recommended test method for the ISO 14955 series.
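
    The AHP weighting step mentioned above can be sketched compactly: a pairwise comparison matrix over the criteria is collapsed into weight factors. The sketch uses the geometric-mean approximation of the principal eigenvector; the three criteria and the Saaty-scale judgements are invented, not those of the study:

```python
import math

# A[i][j] = how much more important criterion i is than j (Saaty 1-9 scale);
# the matrix is reciprocal: A[j][i] = 1 / A[i][j]. Judgements are invented.
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]

def ahp_weights(matrix):
    """Geometric-mean approximation of the AHP principal-eigenvector weights."""
    gmeans = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

w = ahp_weights(A)
print([round(x, 3) for x in w])  # weights sum to 1; criterion 1 dominates
```

    A full AHP evaluation would also compute a consistency ratio to check the judgements before trusting the weights.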

  9. User Experience Evaluation Methods in Product Development (UXEM'09)

    Science.gov (United States)

    Roto, Virpi; Väänänen-Vainio-Mattila, Kaisa; Law, Effie; Vermeeren, Arnold

    High quality user experience (UX) has become a central competitive factor of product development in mature consumer markets [1]. Although the term UX originated from industry and is a widely used term also in academia, the tools for managing UX in product development are still inadequate. A prerequisite for designing delightful UX in an industrial setting is to understand both the requirements tied to the pragmatic level of functionality and interaction and the requirements pertaining to the hedonic level of personal human needs, which motivate product use [2]. Understanding these requirements helps managers set UX targets for product development. The next phase in a good user-centered design process is to iteratively design and evaluate prototypes [3]. Evaluation is critical for systematically improving UX. In many approaches to UX, evaluation basically needs to be postponed until the product is fully or at least almost fully functional. However, in an industrial setting, it is very expensive to find the UX failures only at this phase of product development. Thus, product development managers and developers have a strong need to conduct UX evaluation as early as possible, well before all the parts affecting the holistic experience are available. Different types of products require evaluation on different granularity and maturity levels of a prototype. For example, due to its multi-user characteristic, a community service or an enterprise resource planning system requires a broader scope of UX evaluation than a microwave oven or a word processor that is meant for a single user at a time. Before systematic UX evaluation can be taken into practice, practical, lightweight UX evaluation methods suitable for different types of products and different phases of product readiness are needed. A considerable amount of UX research is still about the conceptual frameworks and models for user experience [4]. 
Besides, applying existing usability evaluation methods (UEMs) without

  10. The genome of Shigella dysenteriae strain Sd1617 comparison to representative strains in evaluating pathogenesis.

    Science.gov (United States)

    Vongsawan, Ajchara A; Kapatral, Vinayak; Vaisvil, Benjamin; Burd, Henry; Serichantalergs, Oralak; Venkatesan, Malabi M; Mason, Carl J

    2015-03-01

    We sequenced and analyzed Shigella dysenteriae strain Sd1617, the serotype 1 strain that is widely used as a model for vaccine design, trials and research. A combination of next-generation sequencing platforms and assembly yielded two contigs, representing a chromosome of 4.34 Mb and the large virulence plasmid of 177 kb. This genome sequence is compared with other Shigella genomes in order to understand gene complexity and pathogenic factors. © The Author 2015. Published by Oxford University Press on behalf of the Federation of European Microbiological Societies.

  11. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Full Text Available Abstract Background We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring applications of their methods to biology. Results Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular and cellular to tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion This online benchmark will facilitate integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.
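
    One common way ground truth like this is used to evaluate segmentation methods is an overlap score between a predicted mask and the reference, e.g. the Jaccard index (IoU). A minimal sketch with invented flattened binary masks; the benchmark itself may use additional or different metrics:

```python
def jaccard(pred, truth):
    """Jaccard index (intersection over union) of two binary masks,
    given as equal-length flat sequences of 0/1 pixel labels."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

# invented 8-pixel masks: ground truth vs. a method's prediction
truth = [0, 1, 1, 1, 0, 0, 1, 1]
pred  = [0, 1, 1, 0, 0, 1, 1, 1]

print(f"IoU = {jaccard(pred, truth):.2f}")
```

    Scoring every method on every annotated image this way gives directly comparable numbers across the benchmark's subcellular, cellular and tissue-level datasets.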

  12. Registered plant list - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data name: Registered plant list. DOI: 10.18908/lsdba.nbdc01194-01-001

  13. Download - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Search and download: 1. README (README_e.html); 2. Registered plant list (pgdbj_dna_marker_linkage_map_plant_specie...)

  14. License - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available If you use data from this database, please be sure to attribute this database as follows: PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods, LSDB Archive.

  15. Evaluation of five methods for total DNA extraction from western corn rootworm beetles.

    Directory of Open Access Journals (Sweden)

    Hong Chen

    Full Text Available BACKGROUND: DNA extraction is a routine step in many insect molecular studies. A variety of methods have been used to isolate DNA molecules from insects, and many commercial kits are available. Extraction methods need to be evaluated for their efficiency, cost, and side effects such as DNA degradation during extraction. METHODOLOGY/PRINCIPAL FINDINGS: From individual western corn rootworm beetles, Diabrotica virgifera virgifera, DNA extractions by the SDS method, CTAB method, DNAzol reagent, Puregene solutions and DNeasy column were compared in terms of DNA quantity and quality, cost of materials, and time consumed. Although all five methods resulted in acceptable DNA concentrations and absorbance ratios, the SDS and CTAB methods resulted in higher DNA yield (ng DNA vs. mg tissue) at much lower cost and less degradation as revealed on agarose gels. The DNeasy kit was most time-efficient but was the costliest among the methods tested. The effects of ethanol volume, temperature and incubation time on precipitation of DNA were also investigated. The DNA samples obtained by the five methods were tested in PCR for six microsatellites located in various positions of the beetle's genome, and all samples showed successful amplifications. CONCLUSION/SIGNIFICANCE: These evaluations provide a guide for choosing methods of DNA extraction from western corn rootworm beetles based on expected DNA yield and quality, extraction time, cost, and waste control. The extraction conditions for this mid-size insect were optimized. The DNA extracted by the five methods was suitable for further molecular applications such as PCR and sequencing by synthesis.

  16. The large-scale blast score ratio (LS-BSR) pipeline: a method to rapidly compare genetic content between bacterial genomes.

    Science.gov (United States)

    Sahl, Jason W; Caporaso, J Gregory; Rasko, David A; Keim, Paul

    2014-01-01

    Background. As whole genome sequence data from bacterial isolates becomes cheaper to generate, computational methods are needed to correlate sequence data with biological observations. Here we present the large-scale BLAST score ratio (LS-BSR) pipeline, which rapidly compares the genetic content of hundreds to thousands of bacterial genomes, and returns a matrix that describes the relatedness of all coding sequences (CDSs) in all genomes surveyed. This matrix can be easily parsed in order to identify genetic relationships between bacterial genomes. Although pipelines have been published that group peptides by sequence similarity, no other software performs the rapid, large-scale, full-genome comparative analyses carried out by LS-BSR. Results. To demonstrate the utility of the method, the LS-BSR pipeline was tested on 96 Escherichia coli and Shigella genomes; the pipeline ran in 163 min using 16 processors, which is a greater than 7-fold speedup compared to using a single processor. The BSR values for each CDS, which indicate a relative level of relatedness, were then mapped to each genome on an independent core genome single nucleotide polymorphism (SNP) based phylogeny. Comparisons were then used to identify clade specific CDS markers and validate the LS-BSR pipeline based on molecular markers that delineate between classical E. coli pathogenic variant (pathovar) designations. Scalability tests demonstrated that the LS-BSR pipeline can process 1,000 E. coli genomes in 27-57 h, depending upon the alignment method, using 16 processors. Conclusions. LS-BSR is an open-source, parallel implementation of the BSR algorithm, enabling rapid comparison of the genetic content of large numbers of genomes. The results of the pipeline can be used to identify specific markers between user-defined phylogenetic groups, and to identify the loss and/or acquisition of genetic information between bacterial isolates. 
Taxa-specific genetic markers can then be translated into clinical
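
    The BLAST score ratio at the heart of LS-BSR is simple to state: each CDS's alignment score against a genome is divided by the CDS's self-alignment score, so conserved genes score near 1.0 and absent ones near 0.0. A sketch with stand-in raw scores in place of real BLAST bit scores; the gene and genome names are invented:

```python
def bsr_matrix(self_scores, hit_scores):
    """BLAST score ratios.

    self_scores[cds]        -> score of the CDS aligned against itself
    hit_scores[cds][genome] -> best alignment score of the CDS in that genome
    Returns ratios in [0, 1] describing relatedness, as in the LS-BSR matrix.
    """
    return {
        cds: {g: s / self_scores[cds] for g, s in per_genome.items()}
        for cds, per_genome in hit_scores.items()
    }

# invented stand-in scores; real values would come from BLAST/BLAT alignments
self_scores = {"cdsA": 500.0, "cdsB": 300.0}
hit_scores = {
    "cdsA": {"genome1": 500.0, "genome2": 480.0},   # core gene, BSR near 1.0
    "cdsB": {"genome1": 290.0, "genome2": 15.0},    # genome2 lacks cdsB
}
m = bsr_matrix(self_scores, hit_scores)
print(m["cdsB"]["genome2"])  # 0.05 -> effectively absent from genome2
```

    Thresholding this matrix (LS-BSR's documentation discusses suitable cutoffs) is what lets clade-specific markers and gene gain/loss be read off directly.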

  17. Technology transfer - insider protection workshop (Safeguards Evaluation Method - Insider Threat)

    International Nuclear Information System (INIS)

    Strait, R.S.; Renis, T.A.

    1986-01-01

    The Safeguards Evaluation Method - Insider Threat, developed by Lawrence Livermore National Laboratory, is a field-applicable tool to evaluate facility safeguards against theft or diversion of special nuclear material (SNM) by nonviolent insiders. To ensure successful transfer of this technology from the laboratory to DOE field offices and contractors, LLNL developed a three-part package. The package includes a workbook, user-friendly microcomputer software, and a three-day training program. The workbook guides an evaluation team through the Safeguards Evaluation Method and provides forms for gathering data. The microcomputer software assists in the evaluation of safeguards effectiveness. The software is designed for safeguards analysts with no previous computer experience. It runs on an IBM Personal Computer or any compatible machine. The three-day training program is called the Insider Protection Workshop. The workshop students learn how to use the workbook and the computer software to assess insider vulnerabilities and to evaluate the benefits and costs of potential improvements. These activities increase the students' appreciation of the insider threat. The workshop format is informal and interactive, employing four different instruction modes: classroom presentations, small-group sessions, a practical exercise, and ''hands-on'' analysis using microcomputers. This approach to technology transfer has been successful: over 100 safeguards planners and analysts have been trained in the method, and it is being used at facilities throughout the DOE complex.

  18. Use of Thermoanalytic Methods in the Evaluation of Combusted Materials

    Directory of Open Access Journals (Sweden)

    František Krepelka

    2006-12-01

    Full Text Available The paper describes possibilities of using thermoanalytic methods for the evaluation and comparison of materials intended for direct combustion. Differential thermal analysis (DTA) and thermogravimetric analysis (TGA) were both used in the evaluation. The paper includes a description of methods for processing the data from these analyses so that the materials can be compared with respect to their heating values. The following materials were analysed in the experiments: charcoal of off-specification grain size, fly ash from heating-plant exhaust funnels, and dendromass waste: spruce sawdust and micro-briquettes of spruce sawdust combined with fly ash.

  19. DOE methods for evaluating environmental and waste management samples

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K. [eds.

    1994-10-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Analytical Services Division of DOE. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations. These methods have delineated measures of precision and accuracy.

  20. DOE methods for evaluating environmental and waste management samples

    International Nuclear Information System (INIS)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K.

    1994-10-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Analytical Services Division of DOE. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations. These methods have delineated measures of precision and accuracy.

  1. Multi-criteria evaluation methods in the production scheduling

    Science.gov (United States)

    Kalinowski, K.; Krenczyk, D.; Paprocka, I.; Kempa, W.; Grabowik, C.

    2016-08-01

    The paper presents a discussion on the practical application of different methods of multi-criteria evaluation in the process of scheduling in manufacturing systems. The methods fall into two main groups: methods based on a distance function (using a metacriterion) and methods that create a Pareto set of possible solutions. The basic criteria used for scheduling are also described, and the overall evaluation procedure in production scheduling is presented. It takes into account the actions in the whole scheduling process and the participation of the human decision maker (HDM). The HDM's decisions include creating and editing the set of evaluation criteria, selecting the multi-criteria evaluation method, interacting in the search process, applying informal criteria, and making final changes to the schedule before implementation. Depending on need, scheduling may be completely or partially automated: full automation is possible with a metacriterion-based objective function, whereas if a Pareto set is generated, the final decision must be made by the HDM.
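
    The two groups of methods named above can be contrasted on a toy scheduling problem with two minimized criteria (say, makespan and total tardiness): a metacriterion picks a single schedule automatically, while the Pareto set leaves the final choice to the HDM. Schedule names and scores below are invented:

```python
# candidate schedules: name -> (makespan, total tardiness), both minimized
schedules = {
    "S1": (10, 7),
    "S2": (12, 3),
    "S3": (11, 8),   # dominated by S1
    "S4": (14, 4),   # dominated by S2
}

def dominates(a, b):
    """True if a is at least as good as b on every criterion and better on one."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_set(items):
    """Names of the non-dominated schedules."""
    return {name for name, v in items.items()
            if not any(dominates(w, v) for w in items.values())}

def metacriterion(v, weights=(0.5, 0.5)):
    """Weighted distance from the ideal point (0, 0): here a weighted sum."""
    return sum(w * x for w, x in zip(weights, v))

print(sorted(pareto_set(schedules)))                       # ['S1', 'S2']
best = min(schedules, key=lambda n: metacriterion(schedules[n]))
print(best)                                                # 'S2' under equal weights
```

    With the metacriterion the process runs unattended; with the Pareto set, S1 and S2 are both returned and the HDM trades off makespan against tardiness.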

  2. A method to customize population-specific arrays for genome-wide association testing.

    Science.gov (United States)

    Ehli, Erik A; Abdellaoui, Abdel; Fedko, Iryna O; Grieser, Charlie; Nohzadeh-Malakshah, Sahar; Willemsen, Gonneke; de Geus, Eco Jc; Boomsma, Dorret I; Davies, Gareth E; Hottenga, Jouke J

    2017-02-01

    As an example of optimizing population-specific genotyping assays using a whole-genome sequence reference set, we detail the approach followed to design the Axiom-NL array, which is characterized by an improved imputation backbone based on the Genome of the Netherlands (GoNL) reference sequence and, compared with earlier arrays, a more comprehensive inclusion of SNPs on chromosomes X and Y and the mitochondrial genome. Common variants on the array were selected to be compatible with the Illumina Psych Array and the Affymetrix UK Biobank Axiom array. About 3.5% of the array (23,977 markers) represents SNPs from the GWAS catalog, including SNPs at FTO, APOE, ion channels, killer-cell immunoglobulin-like receptors, and HLA. Around 26,000 markers associated with common psychiatric disorders are included, as well as 6,705 markers suggested to be associated with fertility and twinning. The platform can thus be used for risk profiling and detection of new variants, as well as ancestry determination. Results of coverage tests in 249 unrelated subjects with GoNL-based sequence data show that, after imputation with 1000G as a reference, the median concordance between original and imputed genotypes is above 98%. The median imputation-quality R² values for MAF thresholds of 0.001, 0.01, 0.05, and >0.05 are 0.05, 0.28, 0.80, and 0.99, respectively, for the 1000G-imputed SNPs, with similar quality for the autosomes and the X chromosome, showing good genome-wide coverage for association studies after imputation.
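
    The imputation-quality R² reported above is, for a single SNP, the squared correlation between true genotypes (coded 0/1/2) and imputed allele dosages. A sketch with invented genotypes and dosages:

```python
def r_squared(true, imputed):
    """Squared Pearson correlation between true genotype codes and dosages."""
    n = len(true)
    mt, mi = sum(true) / n, sum(imputed) / n
    cov = sum((t - mt) * (i - mi) for t, i in zip(true, imputed))
    vt = sum((t - mt) ** 2 for t in true)
    vi = sum((i - mi) ** 2 for i in imputed)
    return cov * cov / (vt * vi)

# one SNP in eight subjects: true 0/1/2 genotypes vs. imputed dosages (invented)
true_geno = [0, 1, 2, 1, 0, 2, 1, 0]
imputed   = [0.1, 0.9, 1.8, 1.2, 0.2, 1.9, 0.8, 0.1]
print(f"imputation R^2 = {r_squared(true_geno, imputed):.3f}")
```

    Aggregating this statistic per MAF bin is what produces the 0.05 / 0.28 / 0.80 / 0.99 medians quoted in the abstract; rare variants impute poorly, common ones almost perfectly.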

  3. Genome-enabled methods for predicting litter size in pigs: a comparison.

    Science.gov (United States)

    Tusell, L; Pérez-Rodríguez, P; Forni, S; Wu, X-L; Gianola, D

    2013-11-01

    Predictive ability of models for litter size in swine on the basis of different sources of genetic information was investigated. Data represented average litter size on 2598, 1604 and 1897 60K genotyped sows from two purebred and one crossbred line, respectively. The average correlation (r) between observed and predicted phenotypes in a 10-fold cross-validation was used to assess predictive ability. Models were: pedigree-based mixed-effects model (PED), Bayesian ridge regression (BRR), Bayesian LASSO (BL), genomic BLUP (GBLUP), reproducing kernel Hilbert spaces regression (RKHS), Bayesian regularized neural networks (BRNN) and radial basis function neural networks (RBFNN). BRR and BL used the marker matrix or its principal component scores matrix (UD) as covariates; RKHS employed a Gaussian kernel with additive codes for markers, whereas neural networks employed the additive genomic relationship matrix (G) or UD as inputs. The non-parametric models (RKHS, BRNN, RBFNN) gave predictions similar to their parametric counterparts (average r ranged from 0.15 to 0.23); most of the genome-based models outperformed PED (r = 0.16). Predictive abilities of linear models and RKHS were similar over lines, but BRNN varied markedly, giving the best prediction (r = 0.31) when G was used in crossbreds, but the worst (r = 0.02) when the G matrix was used in one of the purebred lines. The r values for RBFNN ranged from 0.16 to 0.23. Predictive ability was better in crossbreds (0.26) than in purebreds (0.15 to 0.22). This may be related to family structure in the purebred lines.
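
    The evaluation protocol above, the average Pearson correlation r between observed and predicted phenotypes across cross-validation folds, can be sketched directly. The two tiny folds below are invented; in the study, any of the listed models (PED, GBLUP, RKHS, ...) would supply the predictions for each held-out fold:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (observed litter sizes, model predictions) per held-out fold: invented values
folds = [
    ([10, 12, 11, 14], [9.5, 12.5, 10.0, 13.0]),
    ([13, 9, 12, 10], [12.0, 10.5, 11.5, 9.0]),
]
avg_r = sum(pearson(obs, pred) for obs, pred in folds) / len(folds)
print(f"average predictive correlation r = {avg_r:.2f}")
```

    Repeating this with 10 folds per line is exactly how the r values of 0.15-0.31 quoted above would be obtained and compared across models.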

  4. A reliability evaluation method for NPP safety DCS application software

    International Nuclear Information System (INIS)

    Li Yunjian; Zhang Lei; Liu Yuan

    2014-01-01

    In the field of nuclear power plant (NPP) digital I&C applications, reliability evaluation of safety DCS application software is a key obstacle to be removed. In order to quantitatively evaluate the reliability of NPP safety DCS application software, this paper proposes an evaluation method based on the V&V defect-density characteristics of each stage of the software development life cycle, by which the operational reliability level of the software can be predicted before delivery; this helps to improve the reliability of NPP safety-important software. (authors)
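
    The abstract does not state the model itself. As a generic illustration of how per-stage V&V defect densities could feed a reliability prediction, the sketch below assumes each stage detects a fixed fraction of the defects present and maps the estimated residual density through an exponential reliability curve; the detection efficiency, the constant `c`, and the function names are all assumptions:

```python
import math

def predicted_reliability(found_densities, detection_eff=0.9, c=0.1):
    """Map per-stage defect densities found during V&V (e.g. defects/KLOC
    per life-cycle stage) to a predicted operational reliability in (0, 1]."""
    # If each stage finds a fraction `detection_eff` of the defects present,
    # then found = eff * present, so the undetected remainder per stage is
    # present - found = found * (1/eff - 1).
    residual = sum(d * (1.0 / detection_eff - 1.0) for d in found_densities)
    # Exponential reliability model on the accumulated residual density.
    return math.exp(-c * residual)
```

The point of such a model is the ordering it induces: more defects found at equal detection efficiency implies more defects remaining, hence lower predicted reliability.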

  5. Genome-wide prediction methods in highly diverse and heterozygous species: proof-of-concept through simulation in grapevine.

    Directory of Open Access Journals (Sweden)

    Agota Fodor

    Full Text Available Nowadays, genome-wide association studies (GWAS) and genomic selection (GS) methods which use genome-wide marker data for phenotype prediction are of much potential interest in plant breeding. However, to our knowledge, no studies have been performed yet on the predictive ability of these methods for structured traits when using training populations with high levels of genetic diversity. One example of such a highly heterozygous, perennial species is grapevine. The present study compares the accuracy of models based on GWAS or GS alone, or in combination, for predicting simple or complex traits, linked or not with population structure. In order to explore the relevance of these methods in this context, we performed simulations using approximately 90,000 SNPs on a population of 3,000 individuals structured into three groups and corresponding to published grapevine diversity data. To estimate the parameters of the prediction models, we defined four training populations of 1,000 individuals, corresponding to these three groups and a core collection. Finally, to estimate the accuracy of the models, we also simulated four breeding populations of 200 individuals. Although prediction accuracy was low when breeding populations were too distant from the training populations, high accuracy levels were obtained using the core collection alone as the training population. The highest prediction accuracy (up to 0.9) was obtained using the combined GWAS-GS model. We thus recommend using the combined prediction model and a core collection as the training population for grapevine breeding or for other economically important crops with the same characteristics.

  6. Improved methods and resources for paramecium genomics: transcription units, gene annotation and gene expression.

    Science.gov (United States)

    Arnaiz, Olivier; Van Dijk, Erwin; Bétermier, Mireille; Lhuillier-Akakpo, Maoussi; de Vanssay, Augustin; Duharcourt, Sandra; Sallet, Erika; Gouzy, Jérôme; Sperling, Linda

    2017-06-26

    The 15 sibling species of the Paramecium aurelia cryptic species complex emerged after a whole genome duplication that occurred tens of millions of years ago. Given extensive knowledge of the genetics and epigenetics of Paramecium acquired over the last century, this species complex offers a uniquely powerful system to investigate the consequences of whole genome duplication in a unicellular eukaryote as well as the genetic and epigenetic mechanisms that drive speciation. High quality Paramecium gene models are important for research using this system. The major aim of the work reported here was to build an improved gene annotation pipeline for the Paramecium lineage. We generated oriented RNA-Seq transcriptome data across the sexual process of autogamy for the model species Paramecium tetraurelia. We determined, for the first time in a ciliate, candidate P. tetraurelia transcription start sites using an adapted Cap-Seq protocol. We developed TrUC, multi-threaded Perl software that in conjunction with TopHat mapping of RNA-Seq data to a reference genome, predicts transcription units for the annotation pipeline. We used EuGene software to combine annotation evidence. The high quality gene structural annotations obtained for P. tetraurelia were used as evidence to improve published annotations for 3 other Paramecium species. The RNA-Seq data were also used for differential gene expression analysis, providing a gene expression atlas that is more sensitive than the previously established microarray resource. We have developed a gene annotation pipeline tailored for the compact genomes and tiny introns of Paramecium species. A novel component of this pipeline, TrUC, predicts transcription units using Cap-Seq and oriented RNA-Seq data. TrUC could prove useful beyond Paramecium, especially in the case of high gene density. Accurate predictions of 3' and 5' UTR will be particularly valuable for studies of gene expression (e.g. nucleosome positioning, identification of cis …

  7. Acoustic Methods for Evaluation of High Energy Explosions

    OpenAIRE

    Lobanovsky, Yury I.

    2013-01-01

    Two independent acoustic methods were used to verify the results of earlier explosion energy calculations for the Chelyabinsk meteoroid: estimation through the path length of the infrasound wave and through the maximum concentration of the wave energy. The energy of this explosion turned out to be the same as in the earlier calculations, close to 57 Mt of TNT. The first method, as well as evaluations through seismic signals and barograms, has confirmed the energy of the Tunguska meteoroid explosion ...

  8. Combination of Three Methods of Photo Voltaic Panels Damage Evaluation

    Directory of Open Access Journals (Sweden)

    Olšan T.

    2017-06-01

    Full Text Available In broken photovoltaic (PV) cells the flow of electric current can be reduced in some places, which results in lowered efficiency. In the present study, the damage of PV cells and panels was evaluated using three methods: electroluminescence, infrared camera imaging, and visual examination. The damage is detectable by all of these methods, which were presented and compared from the viewpoint of resolution, difficulty, and accuracy of monitoring PV panel damage.

  9. Quantitative methods for somatosensory evaluation in atypical odontalgia

    OpenAIRE

    PORPORATTI,André Luís; COSTA,Yuri Martins; STUGINSKI-BARBOSA,Juliana; BONJARDIM,Leonardo Rigoldi; CONTI,Paulo César Rodrigues; SVENSSON,Peter

    2015-01-01

    A systematic review was conducted to identify reliable somatosensory evaluation methods for atypical odontalgia (AO) patients. The computerized search included the main databases (MEDLINE, EMBASE, and Cochrane Library). The studies included used the following quantitative sensory testing (QST) methods: mechanical detection threshold (MDT), mechanical pain threshold (MPT) (pinprick), pressure pain threshold (PPT), dynamic mechanical allodynia with a cotton swab (DMA1) or a brush (DMA2), warm d...

  10. Evaluation of Information Requirements of Reliability Methods in Engineering Design

    DEFF Research Database (Denmark)

    Marini, Vinicius Kaster; Restrepo-Giraldo, John Dairo; Ahmed-Kristensen, Saeema

    2010-01-01

    This paper aims to characterize the information needed to perform methods for robustness and reliability, and to verify their applicability to early design stages. Several methods were evaluated on their support for synthesis in engineering design. Of those methods, FMEA, FTA and HAZOP were selected for their insight into design risks and their widespread application. A pilot case study was performed on a washing machine, using these methods to assess design risks following a reverse-engineering approach. The study has shown that the methods can be initiated at early design stages, but cannot be concluded there. For that reason, new methods are needed to assist in assessing robustness and reliability at early design stages. A specific taxonomy of robustness and reliability information in design could support classifying available design information to orient new techniques for assessing innovative designs.

  11. Nondestructive methods for quality evaluation of livestock products.

    Science.gov (United States)

    Narsaiah, K; Jha, Shyam N

    2012-06-01

    The muscles derived from livestock are highly perishable. Rapid and nondestructive methods are essential for quality assurance of such products. Potential nondestructive methods, which can supplement or replace many of the traditional time-consuming destructive methods, include colour and computer image analysis, NIR spectroscopy, NMRI, electronic nose, ultrasound, X-ray imaging and biosensors. These methods are briefly described, and the research work involving them for products derived from livestock is reviewed. These methods will be helpful in rapid screening of large numbers of samples, monitoring distribution networks, enabling quick product recall and enhancing traceability in the value chain of livestock products. With new developments in the areas of basic science related to these methods, colour and image processing, NIR spectroscopy, biosensors and ultrasonic analysis are expected to become widespread and cost-effective for large-scale meat quality evaluation in the near future.

  12. Systematic drug safety evaluation based on public genomic expression (Connectivity Map) data: Myocardial and infectious adverse reactions as application cases

    International Nuclear Information System (INIS)

    Wang, Kejian; Weng, Zuquan; Sun, Liya; Sun, Jiazhi; Zhou, Shu-Feng; He, Lin

    2015-01-01

    Adverse drug reaction (ADR) is of great importance to both regulatory agencies and the pharmaceutical industry. Various techniques, such as quantitative structure–activity relationship (QSAR) and animal toxicology, are widely used to identify potential risks during the preclinical stage of drug development. Despite these efforts, drugs with safety liabilities can still pass through safety checkpoints and enter the market. This situation raises the concern that conventional chemical structure analysis and phenotypic screening are not sufficient to avoid all clinical adverse events. Genomic expression data following in vitro drug treatments characterize drug actions and thus have become widely used in drug repositioning. In the present study, we explored prediction of ADRs based on the drug-induced gene-expression profiles from cultured human cells in the Connectivity Map (CMap) database. The results showed that drugs inducing comparable ADRs generally lead to similar CMap expression profiles. Based on such ADR-gene expression association, we established prediction models for various ADRs, including severe myocardial and infectious events. Drugs with FDA boxed warnings of safety liability were effectively identified. We therefore suggest that drug-induced gene expression change, in combination with effective computational methods, may provide a new dimension of information to facilitate systematic drug safety evaluation. - Highlights: • Drugs causing common toxicity lead to similar in vitro gene expression changes. • We built a model to predict drug toxicity with drug-specific expression profiles. • Drugs with FDA black box warnings were effectively identified by our model. • In vitro assay can detect severe toxicity in the early stage of drug development
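
    The core observation here, that drugs inducing comparable ADRs lead to similar expression profiles, reduces to comparing drug-induced expression signatures. A minimal sketch using cosine similarity follows; the vectors, the threshold, and the function names are illustrative assumptions, not the study's actual CMap pipeline:

```python
def cosine_similarity(a, b):
    """Similarity between two drug-induced expression signatures
    (vectors of per-gene expression changes)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

def shares_adr_profile(sig_query, sig_reference, threshold=0.8):
    """Flag a query drug as sharing an ADR-associated profile when its
    signature is sufficiently similar to a reference drug's signature."""
    return cosine_similarity(sig_query, sig_reference) >= threshold
```

In practice one would compare against many reference drugs per ADR and calibrate the threshold on drugs with known safety labels, as the study does with FDA boxed warnings.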

  14. Comparative evaluation of the genomes of three common Drosophila-associated bacteria

    Directory of Open Access Journals (Sweden)

    Kristina Petkau

    2016-09-01

    Full Text Available Drosophila melanogaster is an excellent model to explore the molecular exchanges that occur between an animal intestine and associated microbes. Previous studies in Drosophila uncovered a sophisticated web of host responses to intestinal bacteria. The outcomes of these responses define critical events in the host, such as the establishment of immune responses, access to nutrients, and the rate of larval development. Despite our steady march towards illuminating the host machinery that responds to bacterial presence in the gut, there are significant gaps in our understanding of the microbial products that influence bacterial association with a fly host. We sequenced and characterized the genomes of three common Drosophila-associated microbes: Lactobacillus plantarum, Lactobacillus brevis and Acetobacter pasteurianus. For each species, we compared the genomes of Drosophila-associated strains to the genomes of strains isolated from alternative sources. We found that environmental Lactobacillus strains readily associated with adult Drosophila and were similar to fly isolates in terms of genome organization. In contrast, we identified a strain of A. pasteurianus that apparently fails to associate with adult Drosophila due to an inability to grow on fly nutrient food. Comparisons between association-competent and association-incompetent A. pasteurianus strains identified a short list of candidate genes that may contribute to survival on fly medium. Many of the gene products unique to fly-associated strains have established roles in the stabilization of host-microbe interactions. These data add to a growing body of literature that examines the microbial perspective of host-microbe relationships.

  15. Evaluation of whole genome sequencing for outbreak detection of Salmonella enterica

    DEFF Research Database (Denmark)

    Leekitcharoenphon, Pimlapas; Nielsen, Eva M.; Kaas, Rolf Sommer

    2014-01-01

    Salmonella enterica is a common cause of minor and large foodborne outbreaks. To achieve successful and nearly 'real-time' monitoring and identification of outbreaks, reliable sub-typing is essential. Whole genome sequencing (WGS) shows great promise for use as a routine epidemiological typing...

  16. The MetJ regulon in gammaproteobacteria determined by comparative genomics methods

    Directory of Open Access Journals (Sweden)

    Augustus Anne M

    2011-11-01

    Full Text Available Abstract Background: Whole-genome sequencing of bacteria has proceeded at an exponential pace, but annotation validation has lagged behind. For instance, the MetJ regulon, which controls methionine biosynthesis and transport, has been studied almost exclusively in E. coli and Salmonella, but homologs of MetJ exist in a variety of other species. These include some that are pathogenic (e.g. Yersinia) and some that are important for environmental remediation (e.g. Shewanella), many of which have not been extensively characterized in the literature. Results: We have determined the likely composition of the MetJ regulon in all species which have MetJ homologs using bioinformatics techniques. We show that the core genes known from E. coli are consistently regulated in other species, and we identify previously unknown members of the regulon. These include the cobalamin transporter, btuB; all the genes involved in the methionine salvage pathway; as well as several enzymes and transporters of unknown specificity. Conclusions: The MetJ regulon is present and functional in five orders of gammaproteobacteria: Enterobacteriales, Pasteurellales, Vibrionales, Aeromonadales and Alteromonadales. New regulatory activity for MetJ was identified in the genomic data and verified experimentally. This strategy should be applicable for the elucidation of regulatory pathways in other systems by using the extensive sequencing data currently being generated.
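
    Regulon reconstruction of this kind typically scans promoter regions with a position weight matrix (PWM) built from known binding sites of the regulator. A minimal sketch follows; the example sites and sequences in the usage are invented for illustration and are not the actual MetJ met-box data:

```python
import math

def pwm_from_sites(sites, pseudocount=0.5):
    """Build a log-odds PWM (vs. a uniform background) from aligned
    binding sites of equal length over the alphabet ACGT."""
    length = len(sites[0])
    bases = "ACGT"
    pwm = []
    for i in range(length):
        counts = {b: pseudocount for b in bases}
        for site in sites:
            counts[site[i]] += 1
        total = sum(counts.values())
        pwm.append({b: math.log2((counts[b] / total) / 0.25) for b in bases})
    return pwm

def best_hit(pwm, seq):
    """Return (score, start) of the highest-scoring window in seq."""
    width = len(pwm)
    best = (float("-inf"), -1)
    for start in range(len(seq) - width + 1):
        score = sum(pwm[i][seq[start + i]] for i in range(width))
        best = max(best, (score, start))
    return best
```

Candidate regulon members are then genes whose upstream regions contain hits scoring above a threshold calibrated on the known sites, followed by cross-species conservation checks as described in the abstract.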

  17. Applying Hierarchical Task Analysis Method to Discovery Layer Evaluation

    Directory of Open Access Journals (Sweden)

    Marlen Promann

    2015-03-01

    Full Text Available Libraries are implementing discovery layers to offer better user experiences. While usability tests have been helpful in evaluating the success or failure of implementing discovery layers in the library context, the focus has remained on their relative interface benefits over traditional federated search. Informal, site- and context-specific usability tests have offered little to test the rigor of discovery layers against the user goals, motivations and workflows they have been designed to support. This study proposes hierarchical task analysis (HTA) as an important complementary evaluation method to usability testing of discovery layers. Relevant literature is reviewed for discovery layers and the HTA method. As no previous application of HTA to the evaluation of discovery layers was found, this paper presents the application of HTA as an expert-based and workflow-centered (e.g. retrieving a relevant book or journal article) method for evaluating discovery layers. Purdue University's Primo by Ex Libris was used to map eleven use cases as HTA charts. Nielsen's Goal Composition theory was used as an analytical framework to evaluate the goal charts from two perspectives: (a) users' physical interactions (i.e. clicks), and (b) users' cognitive steps (i.e. decision points for what to do next). A brief comparison of HTA and usability test findings is offered by way of conclusion.

  18. Investigation of Evaluation method of chemical runaway reaction

    International Nuclear Information System (INIS)

    Sato, Yoshihiko; Sasaya, Shinji; Kurakata, Koichiro; Nojiri, Ichiro

    2002-02-01

    The safety study 'Study of evaluation of abnormal occurrences for chemical substances in nuclear fuel facilities' will be carried out from 2001 to 2005. In this study, methods for predicting the thermal hazards of chemical substances will be investigated and prepared, and a hazard prediction method for chemical substances will be constructed from the results. Accordingly, hazard prediction methods applied in chemical engineering, where chemical substances with fire and explosion hazards are often handled, were investigated. CHETAH (The ASTM Computer Program for Chemical Thermodynamic and Energy Release Evaluation), developed by ASTM (American Society for Testing and Materials), and TSS (Thermal Safety Software), developed by CISP (ChemInform St. Petersburg), were introduced, and the fire and explosion hazards of chemical substances and reactions in the reprocessing process were evaluated. From these results, CHETAH could estimate the heat of reaction to within about 10% accuracy, and appears useful as a screening tool for the fire and explosion hazards of new chemical substances. TSS could rapidly calculate reaction rates and reaction behavior from data measured by various calorimeters, and appears useful as an evaluation method for the fire and explosion hazards of new chemical reactions. (author)

  19. Evaluation methods used for phosphate-solubilizing bacteria ...

    African Journals Online (AJOL)

    This work aimed to evaluate the different selection methods and select inorganic phosphorus-solubilizing bacteria as potential plant-growth promoters. Bacterial isolates obtained from sugarcane roots and soil were tested using solid growth media containing bicalcium phosphate and Irecê Apatite ground rock phosphate as ...

  20. Data-driven performance evaluation method for CMS RPC trigger ...

    Indian Academy of Sciences (India)

    level triggers, to handle the large stream of data produced in collision. The information transmitted from the three muon subsystems (DT, CSC and RPC) are collected by the Global Muon Trigger (GMT) Board and merged. A method for evaluating ...

  1. [Comparison of the software safety evaluation methods in medical devices].

    Science.gov (United States)

    Yu, Sicong; Pan, Ying; Yu, Xiping; Zhu, Yinfeng

    2010-09-01

    The article analyzes software safety problems in high-risk medical devices, based on an investigation of software R&D quality-control procedures in Shanghai medical device manufacturing enterprises. Ideas for improving the pre-market software safety evaluation method in China are also explored by comparing the approaches used in the U.S. and Europe.

  2. Evaluation of some Methods for Preparing Gliclazide-β-Cyclodextrin ...

    African Journals Online (AJOL)

    Purpose: Gliclazide has been found to form inclusion complexes with β-cyclodextrin (β-CD) in solution and in solid state. The present study was undertaken to determine a suitable method for scaling up gliclazide-β-CD inclusion complex formation and to evaluate the effect of some parameters on the efficiency of ...

  3. Comparison of methods to evaluate bacterial contact-killing materials

    NARCIS (Netherlands)

    van de lagemaat, Marieke; Grotenhuis, Arjen; van de Belt-Gritter, Betsy; Roest, Steven; Loontjens, Ton J. A.; Busscher, Henk J.; van der Mei, Henny C.; Ren, Yijin

    2017-01-01

    Cationic surfaces with alkylated quaternary-ammonium groups kill adhering bacteria upon contact by membrane disruption and are considered increasingly promising as a non-antibiotic based way to eradicate bacteria adhering to surfaces. However, reliable in vitro evaluation methods for bacterial

  5. Evaluation of different methods to overcome in vitro seed dormancy ...

    African Journals Online (AJOL)

    2014-09-03

    Seeds from yellow passion fruit (Passiflora edulis Sims) present dormancy imposed by the seed-coat. The present study aimed to evaluate some methods to overcome dormancy of seeds from P. edulis grown under in vitro conditions. The experimental design was completely randomized in factorial scheme ...

  6. Evaluation of different methods to overcome in vitro seed dormancy ...

    African Journals Online (AJOL)

    Seeds from yellow passion fruit (Passiflora edulis Sims) present dormancy imposed by the seed-coat. The present study aimed to evaluate some methods to overcome dormancy of seeds from P. edulis grown under in vitro conditions. The experimental design was completely randomized in factorial scheme (15 scarification ...

  7. Evaluation and analysis of term scoring methods for term extraction

    NARCIS (Netherlands)

    Verberne, S.; Sappelli, M.; Hiemstra, D.; Kraaij, W.

    2016-01-01

    We evaluate five term scoring methods for automatic term extraction on four different types of text collections: personal document collections, news articles, scientific articles and medical discharge summaries. Each collection has its own use case: author profiling, boolean query term suggestion,

  8. An evaluation of solutions to moment method of biochemical oxygen ...

    African Journals Online (AJOL)

    This paper evaluated selected solutions of the moment method with respect to Biochemical Oxygen Demand (BOD) kinetics, with the aim of ascertaining an error-free solution. Domestic-institutional wastewaters were collected two-weekly for three months from waste-stabilization ponds at Obafemi Awolowo University, Ile-Ife.

  9. Evaluation of current methods to estimate pulp yield of hemp.

    NARCIS (Netherlands)

    Meijer, de E.P.M.; Werf, van der H.M.G.

    1994-01-01

    Large-scale evaluation of hemp stems from field trials requires a rapid method for the characterization of stem quality. The large differences between bark and woody core in anatomical and chemical properties make a quantification of these two fractions of primary importance for quality assessment.

  10. Laboratory methods for evaluating the effect of low level laser ...

    African Journals Online (AJOL)

    Laboratory methods for evaluating the effect of low level laser therapy (LLLT) in wound healing. D Hawkins, H Abrahamse. Abstract. The basic tenet of laser therapy is that laser radiation has a wavelength dependent capability to alter cellular behaviour in the absence of significant heating. Low intensity radiation can inhibit ...

  11. Dogmas in the assessment of usability evaluation methods

    DEFF Research Database (Denmark)

    Hornbæk, Kasper

    2010-01-01

    Usability evaluation methods (UEMs) are widely recognised as an essential part of systems development. Assessments of the performance of UEMs, however, have been criticised for low validity and limited reliability. The present study extends this critique by describing seven dogmas in recent work ...

  12. Piloting a method to evaluate the implementation of integrated water ...

    African Journals Online (AJOL)

    ISSN 1816-7950 (On-line) = Water SA Vol. 41 No. 5 October 2015. Published under a Creative Commons Attribution Licence. Piloting a method to evaluate the implementation of integrated water resource management in the Inkomati River Basin. Melanie J Wilkinson1, Thandi K Magagula1* and Rashid M Hassan2.

  13. MIMO Terminal Performance Evaluation with a Novel Wireless Cable Method

    DEFF Research Database (Denmark)

    Fan, Wei; Kyösti, Pekka; Hentilä, Lassi

    2017-01-01

    The conventional conductive method, where antennas on the device under test (DUT) are disconnected from the antenna ports and replaced with radio frequency (RF) coaxial cables, has been dominant in industry for evaluating multiple-input multiple-output (MIMO) capable terminals. However, direct RF...

  14. Statistical methods for evaluating the attainment of cleanup standards

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, R.O.; Simpson, J.C.

    1992-12-01

    This document is the third volume in a series of volumes sponsored by the US Environmental Protection Agency (EPA), Statistical Policy Branch, that provide statistical methods for evaluating the attainment of cleanup standards at Superfund sites. Volume 1 (USEPA 1989a) provides sampling designs and tests for evaluating attainment of risk-based standards for soils and solid media. Volume 2 (USEPA 1992) provides designs and tests for evaluating attainment of risk-based standards for groundwater. The purpose of this third volume is to provide statistical procedures for designing sampling programs and conducting statistical tests to determine whether pollution parameters in remediated soils and solid media at Superfund sites attain site-specific reference-based standards. This document is written for individuals who may not have extensive training or experience with statistical methods. The intended audience includes EPA regional remedial project managers, Superfund-site potentially responsible parties, state environmental protection agencies, and contractors for these groups.
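
    As an illustration of the kind of attainment test such guidance describes, the sketch below runs a one-sided test of whether the site mean concentration is below a cleanup standard. The normal approximation to the critical value and all names are simplifying assumptions, not the EPA procedure itself:

```python
from statistics import NormalDist, mean, stdev

def attains_standard(samples, cleanup_standard, alpha=0.05):
    """One-sided test with H0: true mean >= standard. Attainment is
    declared only if H0 is rejected at level alpha, so the burden of
    proof is on showing the site is clean. Normal approximation used
    for the critical value (adequate only for reasonably large n)."""
    n = len(samples)
    xbar = mean(samples)
    s = stdev(samples)
    t_stat = (xbar - cleanup_standard) / (s / n ** 0.5)
    z_crit = NormalDist().inv_cdf(alpha)  # negative for alpha < 0.5
    return t_stat < z_crit
```

Placing the standard under the null hypothesis is the key design choice: inconclusive data then defaults to "not attained", which is the conservative outcome for site cleanup.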

  15. Sensitivity evaluation of dynamic speckle activity measurements using clustering methods

    International Nuclear Information System (INIS)

    Etchepareborda, Pablo; Federico, Alejandro; Kaufmann, Guillermo H.

    2010-01-01

    We evaluate and compare the use of competitive neural networks, self-organizing maps, the expectation-maximization algorithm, K-means, and fuzzy C-means techniques as partitional clustering methods, when the sensitivity of the activity measurement of dynamic speckle images needs to be improved. The temporal history of the acquired intensity generated by each pixel is analyzed in a wavelet decomposition framework, and it is shown that the mean energy of its corresponding wavelet coefficients provides a suited feature space for clustering purposes. The sensitivity obtained by using the evaluated clustering techniques is also compared with the well-known methods of Konishi-Fujii, weighted generalized differences, and wavelet entropy. The performance of the partitional clustering approach is evaluated using simulated dynamic speckle patterns and also experimental data.
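
    The pipeline described, wavelet decomposition of each pixel's temporal intensity history followed by partitional clustering on mean coefficient energy, can be sketched with a Haar decomposition and a basic 1-D k-means. This is a simplified stand-in, not the authors' implementation:

```python
import numpy as np

def haar_detail_energy(signal, levels=3):
    """Mean energy of Haar wavelet detail coefficients of one pixel's
    temporal intensity history, averaged over decomposition levels."""
    s = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        if s.size < 2:
            break
        if s.size % 2:
            s = s[:-1]                          # drop odd trailing sample
        detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
        energies.append(float(np.mean(detail ** 2)))
        s = (s[0::2] + s[1::2]) / np.sqrt(2.0)  # approximation, next level
    return float(np.mean(energies))

def kmeans_1d(values, k=2, iters=100):
    """Basic k-means on scalar features (here: per-pixel wavelet energies)."""
    v = np.asarray(values, dtype=float)
    centers = np.linspace(v.min(), v.max(), k)
    labels = np.zeros(v.size, dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = v[labels == j].mean()
    return labels, centers
```

Pixels over high-activity regions fluctuate more, giving larger detail-coefficient energies, so clustering the energies separates active from inactive regions of the speckle image.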

  16. Methods for the Evaluation of Waste Treatment Processes

    Directory of Open Access Journals (Sweden)

    Hans-Joachim Gehrmann

    2017-01-01

    Full Text Available Decision makers for waste management are confronted with the problem of selecting the most economic, environmental, and socially acceptable waste treatment process. This paper elucidates evaluation methods for waste treatment processes for the comparison of ecological and economic aspects, such as material flow analysis, statistical entropy analysis, energetic and exergetic assessment, cumulative energy demand, and life cycle assessment. The work is based on the VDI guideline 3925. A comparison of two thermal waste treatment plants with different process designs and energy recovery systems was performed with the described evaluation methods. The results are mainly influenced by the type of energy recovery, where the waste-to-energy plant providing district heat and process steam emerged as beneficial in most aspects. Material recovery options from waste incineration were evaluated according to sustainability targets, such as saving of resources and environmental protection.

  17. Evaluation of VOC emission measurement methods for paint spray booths.

    Science.gov (United States)

    Eklund, B M; Nelson, T P

    1995-03-01

    Interest in regulations to control solvent emissions from automotive painting systems is increasing, especially in ozone nonattainment areas. Therefore, an accurate measurement method for VOC emissions from paint spray booths used in the automotive industry is needed to ascertain the efficiency of the spray booth capture and the total emissions. This paper presents the results of a laboratory study evaluating potential VOC sampling and analytical methods used in estimating paint spray booth emissions, and discusses these results relative to other published data. Eight test methods were selected for evaluation. The accuracy of each sampling and analytical method was determined using test atmospheres of known concentration and composition that closely matched the actual exhaust air from paint spray booths. The solvent mixture to generate the test atmospheres contained a large proportion of polar, oxygenated hydrocarbons such as ketones and alcohols. A series of identical tests was performed for each sampling/analytical method with each test atmosphere to assess the precision of the methods. The study identified significant differences among the test methods in terms of accuracy, precision, cost, and complexity.

  18. Efficiency of boiling and four other methods for genomic DNA extraction of deteriorating spore-forming bacteria from milk

    Directory of Open Access Journals (Sweden)

    Jose Carlos Ribeiro Junior

    2016-10-01

    Full Text Available The spore-forming microbiota is mainly responsible for the deterioration of pasteurized milk with long shelf life in the United States. The identification of these microorganisms, using molecular tools, is of particular importance for the maintenance of the quality of milk. However, these molecular techniques are not only costly but also labor-intensive and time-consuming. The aim of this study was to compare the efficiency of boiling with that of four other methods for the genomic DNA extraction of sporulated bacteria with proteolytic and lipolytic potential isolated from raw milk in the states of Paraná and Maranhão, Brazil. Protocols based on cellular lysis by enzymatic digestion, phenolic extraction, microwave heating, as well as the use of guanidine isothiocyanate were used. This study proposes a method involving simple boiling for the extraction of genomic DNA from these microorganisms. Variations in the quality and yield of the extracted DNA among these methods were observed. However, both the cell lysis protocol by enzymatic digestion (commercial kit and the simple boiling method proposed in this study yielded sufficient DNA for successfully carrying out the Polymerase Chain Reaction (PCR of the rpoB and 16S rRNA genes for all 11 strains of microorganisms tested. The other protocols failed to yield DNA of sufficient quantity and quality from all microorganisms tested, since only a few strains showed positive results by PCR, thereby hindering the search for new microorganisms. Thus, the simple boiling method for DNA extraction from sporulated bacteria in spoiled milk showed the same efficacy as that of the commercial kit. Moreover, the method is inexpensive, easy to perform, and much less time-consuming.

  19. Proportionate methods for evaluating a simple digital mental health tool.

    Science.gov (United States)

    Davies, E Bethan; Craven, Michael P; Martin, Jennifer L; Simons, Lucy

    2017-11-01

    Traditional evaluation methods are not keeping pace with rapid developments in mobile health. More flexible methodologies are needed to evaluate mHealth technologies, particularly simple, self-help tools. One approach is to combine a variety of methods and data to build a comprehensive picture of how a technology is used and its impact on users. This paper aims to demonstrate how analytical data and user feedback can be triangulated to provide a proportionate and practical approach to the evaluation of a mental well-being smartphone app (In Hand). A three-part process was used to collect data: (1) app analytics; (2) an online user survey and (3) interviews with users. Analytics showed that >50% of user sessions counted as 'meaningful engagement'. User survey findings (n=108) revealed that In Hand was perceived to be helpful on several dimensions of mental well-being. Interviews (n=8) provided insight into how these self-reported positive effects were understood by users. This evaluation demonstrates how different methods can be combined to complete a real world, naturalistic evaluation of a self-help digital tool and provide insights into how and why an app is used and its impact on users' well-being. This triangulation approach to evaluation provides insight into how well-being apps are used and their perceived impact on users' mental well-being. This approach is useful for mental healthcare professionals and commissioners who wish to recommend simple digital tools to their patients and evaluate their uptake, use and benefits.

  20. On Some Methods in Safety Evaluation in Geotechnics

    Science.gov (United States)

    Puła, Wojciech; Zaskórski, Łukasz

    2015-06-01

    The paper demonstrates how the reliability methods can be utilised in order to evaluate safety in geotechnics. Special attention is paid to the so-called reliability based design that can play a useful and complementary role to Eurocode 7. In the first part, a brief review of first- and second-order reliability methods is given. Next, two examples of reliability-based design are demonstrated. The first one is focussed on bearing capacity calculation and is dedicated to comparison with EC7 requirements. The second one analyses a rigid pile subjected to lateral load and is oriented towards working stress design method. In the second part, applications of random field to safety evaluations in geotechnics are addressed. After a short review of the theory a Random Finite Element algorithm to reliability based design of shallow strip foundation is given. Finally, two illustrative examples for cohesive and cohesionless soils are demonstrated.
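    For a linear limit state with independent normal variables, the first-order reliability methods reviewed above reduce to a closed-form reliability index. The following sketch uses hypothetical bearing-capacity numbers (not taken from the paper) and cross-checks the index against a crude Monte Carlo estimate of the failure probability:

```python
import math
import random

def beta_analytic(mu_r, sig_r, mu_s, sig_s):
    """Reliability index for the linear limit state g = R - S with
    independent normal resistance R and load effect S. FORM gives
    this result exactly in this special case."""
    return (mu_r - mu_s) / math.sqrt(sig_r**2 + sig_s**2)

def pf_monte_carlo(mu_r, sig_r, mu_s, sig_s, n=200_000, seed=1):
    """Crude Monte Carlo estimate of P(g < 0), for cross-checking."""
    rng = random.Random(seed)
    fails = sum(
        1 for _ in range(n)
        if rng.gauss(mu_r, sig_r) - rng.gauss(mu_s, sig_s) < 0
    )
    return fails / n

# Hypothetical check: resistance 500 kN (15% cov), load 300 kN (20% cov).
beta = beta_analytic(500, 75, 300, 60)
pf = pf_monte_carlo(500, 75, 300, 60)
```

    The Monte Carlo failure fraction should agree with the normal tail probability implied by beta, which is the kind of cross-check a reliability-based design would rest on.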

  1. The Bolmen tunnel project - evaluation of geophysical site investigation methods

    International Nuclear Information System (INIS)

    Stanfors, R.

    1987-12-01

    The report presents geophysical measurements along and adjacent to the tunnel and an evaluation of the ability of the various methods to permit prediction of rock mass parameters of significance to stability and water-bearing ability. The evaluation shows that, using airborne electro-magnetic surveys, it was possible to indicate about 80% of all the zones of weakness more than 50 m wide in the tunnel. Airborne magnetic surveys located about 90% of all dolerite dykes more than 10 m wide. Ground-level VLF and Slingram methods of electro-magnetic measurement indicated 75% and 85% respectively of all zones of weakness more than 50 m wide. Resistivity methods were successfully used to locate clay-filled and water-bearing fracture zones. About 75% of the length of tunnel over which resistivity values below 500 ohm m were measured required shotcrete support and pre-grouting. (orig./DG)

  2. On Some Methods in Safety Evaluation in Geotechnics

    Directory of Open Access Journals (Sweden)

    Puła Wojciech

    2015-06-01

    Full Text Available The paper demonstrates how the reliability methods can be utilised in order to evaluate safety in geotechnics. Special attention is paid to the so-called reliability based design that can play a useful and complementary role to Eurocode 7. In the first part, a brief review of first- and second-order reliability methods is given. Next, two examples of reliability-based design are demonstrated. The first one is focussed on bearing capacity calculation and is dedicated to comparison with EC7 requirements. The second one analyses a rigid pile subjected to lateral load and is oriented towards working stress design method. In the second part, applications of random field to safety evaluations in geotechnics are addressed. After a short review of the theory a Random Finite Element algorithm to reliability based design of shallow strip foundation is given. Finally, two illustrative examples for cohesive and cohesionless soils are demonstrated.

  3. Methods for in vitro evaluating antimicrobial activity: A review

    Directory of Open Access Journals (Sweden)

    Mounyr Balouiri

    2016-04-01

    Full Text Available In recent years, there has been a growing interest in researching and developing new antimicrobial agents from various sources to combat microbial resistance. Therefore, greater attention has been paid to methods for screening and evaluating antimicrobial activity. Several bioassays such as disk-diffusion, well diffusion and broth or agar dilution are well known and commonly used, but others such as flow cytofluorometric and bioluminescent methods are not widely used because they require specialized equipment and further evaluation for reproducibility and standardization, even though they can provide rapid results on the antimicrobial agent's effects and a better understanding of its impact on the viability of, and cell damage inflicted on, the tested microorganism. In this review article, an exhaustive list of in vitro antimicrobial susceptibility testing methods and detailed information on their advantages and limitations are reported.

  4. Holistic Evaluation of Lightweight Operating Systems using the PERCU Method

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, William T.C.; He, Yun (Helen); Carter, Jonathan; Glenski, Joseph; Rippe, Lynn; Cardo, Nicholas

    2008-05-01

    The scale of Leadership Class Systems presents unique challenges to the features and performance of operating system services. This paper reports results of comprehensive evaluations of two Light Weight Operating Systems (LWOS), Cray's Catamount Virtual Node (CVN) and Linux Environment (CLE) operating systems, on the exact same large-scale hardware. The evaluation was carried out over a 5-month period on NERSC's 19,480 core Cray XT-4, Franklin, using a comprehensive evaluation method that spans Performance, Effectiveness, Reliability, Consistency and Usability criteria for all major subsystems and features. The paper presents the results of the comparison between CVN and CLE, evaluates their relative strengths, and reports observations regarding the world's largest Cray XT-4 as well.

  5. Are three methods better than one? A comparative assessment of usability evaluation methods in an EHR.

    Science.gov (United States)

    Walji, Muhammad F; Kalenderian, Elsbeth; Piotrowski, Mark; Tran, Duong; Kookal, Krishna K; Tokede, Oluwabunmi; White, Joel M; Vaderhobli, Ram; Ramoni, Rachel; Stark, Paul C; Kimmes, Nicole S; Lagerweij, Maxim; Patel, Vimla L

    2014-05-01

    To comparatively evaluate the effectiveness of three different methods involving end-users for detecting usability problems in an EHR: user testing, semi-structured interviews and surveys. Data were collected at two major urban dental schools from faculty, residents and dental students to assess the usability of a dental EHR for developing a treatment plan. These included user testing (N=32), semi-structured interviews (N=36), and surveys (N=35). The three methods together identified a total of 187 usability violations: 54% via user testing, 28% via the semi-structured interview and 18% from the survey method, with modest overlap. These usability problems were classified into 24 problem themes in 3 broad categories. User testing covered the broadest range of themes (83%), followed by the interview (63%) and survey (29%) methods. Multiple evaluation methods provide a comprehensive approach to identifying EHR usability challenges and specific problems. The three methods were found to be complementary, and thus each can provide unique insights for software enhancement. Interview and survey methods were found not to be sufficient by themselves, but when used in conjunction with the user testing method, they provided a comprehensive evaluation of the EHR. We recommend using a multi-method approach when testing the usability of health information technology because it provides a more comprehensive picture of usability challenges.

  6. A method for labeling proteins with tags at the native genomic loci in budding yeast.

    Science.gov (United States)

    Wang, Qian; Xue, Huijun; Li, Siqi; Chen, Ying; Tian, Xuelei; Xu, Xin; Xiao, Wei; Fu, Yu Vincent

    2017-01-01

    Fluorescent proteins and epitope tags are often used as protein fusion tags to study target proteins. One prevailing technique in the budding yeast Saccharomyces cerevisiae is to fuse these tags to a target gene at the precise chromosomal location via homologous recombination. However, several limitations hamper the application of this technique, such as the selectable markers not being reusable, tagging of only the C-terminal being possible, and a "scar" sequence being left in the genome. Here, we describe a strategy to solve these problems by tagging target genes based on a pop-in/pop-out and counter-selection system. Three fluorescent protein tag (mCherry, sfGFP, and mKikGR) and two epitope tag (HA and 3×FLAG) constructs were developed and utilized to tag HHT1, UBC13 or RAD5 at the chromosomal locus as proof-of-concept.

  7. A method for labeling proteins with tags at the native genomic loci in budding yeast.

    Directory of Open Access Journals (Sweden)

    Qian Wang

    Full Text Available Fluorescent proteins and epitope tags are often used as protein fusion tags to study target proteins. One prevailing technique in the budding yeast Saccharomyces cerevisiae is to fuse these tags to a target gene at the precise chromosomal location via homologous recombination. However, several limitations hamper the application of this technique, such as the selectable markers not being reusable, tagging of only the C-terminal being possible, and a "scar" sequence being left in the genome. Here, we describe a strategy to solve these problems by tagging target genes based on a pop-in/pop-out and counter-selection system. Three fluorescent protein tag (mCherry, sfGFP, and mKikGR) and two epitope tag (HA and 3×FLAG) constructs were developed and utilized to tag HHT1, UBC13 or RAD5 at the chromosomal locus as proof-of-concept.

  8. Participatory Training Evaluation Method (PATEM) as a Collaborative Evaluation Capacity Building Strategy

    Science.gov (United States)

    Kuzmin, Alexey

    2012-01-01

    This article describes the Participatory Training Evaluation Method (PATEM) of measuring participants' reaction to training. PATEM provides rich information; allows evaluators to document evaluation findings; becomes an organic part of the training that helps participants process their experience individually and as a group; makes sense to participants; is an…

  9. Development of methods for usability evaluations of EHR systems.

    Science.gov (United States)

    Lilholt, Lars H; Pedersen, Signe S; Madsen, Inge; Nielsen, Per H; Boye, Niels; Andersen, Stig K; Nøhr, Christian

    2006-01-01

    Developing electronic health record (EHR) systems in Denmark is an ongoing, iterative process, in which the maturation of systems for clinical use must also be considered. No convincing methodology is yet at hand for collecting knowledge about, and robustness for, the clinical environment and incorporating it into the software and hardware. One way to involve clinicians in the development process is to conduct usability evaluations. The complexity of the clinical use of the systems is difficult to transmit to a usability laboratory, and due to ethical issues a traditional field study can be impossible to carry out. The aim of this study has been to investigate how it is possible to identify usability problems in an EHR system by combining methods from laboratory tests and field studies. The methods selected for the test design are: the think-aloud method, video and screen recording, debriefing, a scenario based on an authentic patient record, and testing on the normal production system. The reliability and validity of the results are increased by the application of method- and data-triangulation. The results of the usability evaluation include problems in the categories: system response time, GUI design, functionality, procedures, and error messages. The problems were classified as cosmetic, severe, or critical according to a rating scale. The experience with each method is discussed. It is concluded that combining methods from laboratory tests and field studies makes it possible to identify usability problems. There are indications that some of the usability problems only occurred due to the establishment of an authentic scenario.

  10. Initial Results of an MDO Method Evaluation Study

    Science.gov (United States)

    Alexandrov, Natalia M.; Kodiyalam, Srinivas

    1998-01-01

    The NASA Langley MDO method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. In the first phase of the study, three MDO methods were implemented in the SIGHT framework and used to solve a set of ten relatively simple problems. In this paper, we comment on the general considerations for conducting method evaluation studies and report some initial results obtained to date, although the results are not conclusive because of the small initial test set. MDO formulations can be analyzed in terms of their equivalence to other formulations, optimality conditions, and the sensitivity of solutions to various perturbations. Optimization algorithms are used to solve a particular MDO formulation. It is then appropriate to speak of local convergence rates and of global convergence properties of an optimization algorithm applied to a specific formulation. An analogous distinction exists in the field of partial differential equations. On the one hand, equations are analyzed in terms of regularity, well-posedness, and the existence and uniqueness of solutions. On the other, one considers numerous algorithms for solving differential equations. The area of MDO methods studies MDO formulations combined with optimization algorithms, although at times the distinction is blurred. It is important to

  11. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency

    Directory of Open Access Journals (Sweden)

    Rodrigo Aniceto

    2015-01-01

    Full Text Available Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. To find an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.
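    The write-oriented, wide-partition data modeling that favors Cassandra for this workload can be sketched as CQL statements assembled in Python. The keyspace, table, and column names below are hypothetical illustrations, not taken from the paper; in a real deployment the statements would be run through the cassandra-driver session:

```python
# Partitioning reads by (sample, chromosome) lets a whole region be
# scanned with a single partition read, while rows are clustered by
# position. All names here are illustrative assumptions.
KEYSPACE_DDL = (
    "CREATE KEYSPACE IF NOT EXISTS genomics "
    "WITH replication = {'class': 'SimpleStrategy', "
    "'replication_factor': 3};"
)

TABLE_DDL = (
    "CREATE TABLE IF NOT EXISTS genomics.reads ("
    " sample_id text, chrom text, pos bigint,"
    " read_id text, bases text, quals text,"
    " PRIMARY KEY ((sample_id, chrom), pos, read_id));"
)

def insert_read_cql(sample_id, chrom, pos, read_id, bases, quals):
    """Build a parameterized INSERT for one read, to be executed as
    session.execute(stmt, params) with the cassandra-driver."""
    stmt = (
        "INSERT INTO genomics.reads "
        "(sample_id, chrom, pos, read_id, bases, quals) "
        "VALUES (%s, %s, %s, %s, %s, %s);"
    )
    params = (sample_id, chrom, pos, read_id, bases, quals)
    return stmt, params

stmt, params = insert_read_cql("S1", "chr1", 12345, "r001", "ACGT", "IIII")
```

    The composite partition key is the design choice that makes bulk writes and region retrievals cheap relative to a normalized relational schema.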

  12. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.

    Science.gov (United States)

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. To find an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.

  13. Evaluation method of radon preventing effect in underground construction

    International Nuclear Information System (INIS)

    Luo Shaodong; Deng Yuequan; Dong Faqin; Qu Ruixue; Xie Zhonglei

    2014-01-01

    Background: It is difficult to evaluate the radon prevention effect because of the short operating time of measuring instruments under the high-humidity conditions of underground construction. Purpose: A new rapid method to evaluate the radon prevention efficiency of underground construction was introduced. Methods: The radon concentrations before and after the shielding operation were determined, and according to the regularity of radon decay, the shielding rate can be calculated. Results: The results showed that the radon shielding rate in underground construction remains generally stable over time, with an actual relative standard deviation of 3.95%. Thus, rapid determination and evaluation of the radon preventing effect under the special conditions of underground construction can be realized by taking the shielding rate measured over a short time as the final shielding rate. The results were similar to those obtained by the local static method in a ground laboratory. Conclusion: This paper provides a prompt, accurate and practicable way to evaluate radon prevention in underground construction, and has a certain reference value. (authors)

  14. Methodological proposal for environmental impact evaluation since different specific methods

    International Nuclear Information System (INIS)

    Leon Pelaez, Juan Diego; Lopera Arango Gabriel Jaime

    1999-01-01

    Some conceptual and practical elements of environmental impact evaluation are described in relation to the preparation of technical reports (environmental impact studies and environmental management plans) to be submitted to the environmental authorities when seeking environmental permits for development projects. The first part of the document summarizes the main regulatory aspects that support environmental impact studies in Colombia. We propose a scheme for approaching and elaborating the environmental impact evaluation, which begins with the description of the project and of the environmental conditions in its area, proceeds to identify the impacts through a matrix method, and continues with their quantitative evaluation, for which we propose the method developed by Arboleda (1994). We also propose rating the activities of the project and the components of the environment by their relative importance, by means of a method here termed agglomerate evaluation, which allows the most impacting activities and the most heavily impacted components to be identified. Finally, some models are presented for the elaboration and presentation of environmental management plans, monitoring programs, and environmental supervision programs.

  15. An IMU Evaluation Method Using a Signal Grafting Scheme.

    Science.gov (United States)

    Niu, Xiaoji; Wang, Qiang; Li, You; Zhang, Quan; Jiang, Peng

    2016-06-10

    As various inertial measurement units (IMUs) from different manufacturers appear every year, it is not affordable to evaluate every IMU through field tests. Therefore, this paper presents an IMU evaluation method based on grafting data from the tested IMU onto reference data from a higher-grade IMU. The signal grafting (SG) method has several benefits: (a) only one set of field tests with a higher-grade IMU is needed, and it can be used to evaluate numerous IMUs; thus, SG is effective and economic because all data from the tested IMU are collected in the lab; (b) it is a general approach for comparing the navigation performance of various IMUs against the same reference data; and, finally, (c) through SG, one can first evaluate an IMU in the lab, and then decide whether to test it further. Moreover, this paper verified the validity of SG for both medium- and low-grade IMUs, and presents and compares two SG strategies, i.e., the basic-error strategy and the full-error strategy. SG provided results similar to field tests, with differences under 5% for the tested tactical-grade IMUs and 19.4%-26.7% for the MEMS IMUs. Meanwhile, it was found that dynamic IMU errors were essential to guarantee the effect of the SG method.
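    The grafting idea can be sketched in a few lines: superimpose the tested IMU's lab-characterized error terms on reference sensor data from the higher-grade unit. The constant-bias-plus-white-noise error model and the numbers below are illustrative assumptions corresponding roughly to a basic-error strategy, not the paper's full-error strategy:

```python
import random

def graft(reference, bias, noise_std, seed=0):
    """Impose a tested IMU's lab-measured error terms (constant bias
    plus white noise) on reference sensor samples from a higher-grade
    IMU, producing a synthetic data stream for the tested unit."""
    rng = random.Random(seed)
    return [x + bias + rng.gauss(0.0, noise_std) for x in reference]

# Reference gyro samples (deg/s) from the higher-grade IMU, with a
# hypothetical 0.5 deg/s bias and 0.1 deg/s noise for the tested unit.
ref = [0.0, 0.1, 0.2, 0.1, 0.0]
grafted = graft(ref, bias=0.5, noise_std=0.1)
```

    The grafted stream can then be fed to the same navigation processing as the reference data, so differences in navigation output isolate the tested IMU's error contribution.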

  16. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions.

    Directory of Open Access Journals (Sweden)

    Jaroslav Bendl

    2016-05-01

    Full Text Available An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors.
A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of
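    The consensus idea, combining several tools' binary calls into one prediction with a confidence score, can be sketched as a simple majority vote with agreement-based confidence. PredictSNP2's actual weighting is more elaborate, so treat this as a schematic stand-in:

```python
def consensus(predictions):
    """Majority-vote consensus over per-tool binary calls
    ('deleterious' / 'neutral'), with the fraction of agreeing
    tools reported as a confidence score."""
    votes = sum(1 if p == "deleterious" else -1 for p in predictions)
    label = "deleterious" if votes > 0 else "neutral"
    agreement = max(
        predictions.count("deleterious"), predictions.count("neutral")
    ) / len(predictions)
    return label, agreement

# Hypothetical calls from five prioritization tools for one variant.
label, conf = consensus(
    ["deleterious", "deleterious", "neutral", "deleterious", "neutral"]
)
```

    Reporting a binary label plus an agreement score addresses the usability gap the abstract notes, namely that raw numeric scores are hard for end users to interpret.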

  17. An evaluation of methods for estimating decadal stream loads

    Science.gov (United States)

    Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-01-01

    Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen; lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicate that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between
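    Beale's ratio estimator, one of the better-performing methods above, can be sketched as follows. The formula is the standard bias-corrected ratio form from the load-estimation literature; the data in the example are hypothetical:

```python
def beale_load(sample_loads, sample_flows, mean_flow_period):
    """Beale's ratio estimator of mean constituent load: scale the
    sampled load/flow ratio by the period-mean discharge (from the
    continuous flow record), with a first-order bias correction.
    sample_loads and sample_flows are paired observations."""
    n = len(sample_loads)
    lbar = sum(sample_loads) / n
    qbar = sum(sample_flows) / n
    # Unbiased sample covariance of load and flow, and flow variance.
    s_lq = sum(
        (l - lbar) * (q - qbar)
        for l, q in zip(sample_loads, sample_flows)
    ) / (n - 1)
    s_qq = sum((q - qbar) ** 2 for q in sample_flows) / (n - 1)
    correction = (1 + s_lq / (n * lbar * qbar)) / (
        1 + s_qq / (n * qbar**2)
    )
    return mean_flow_period * (lbar / qbar) * correction

# Hypothetical sampled daily loads (kg/d) and flows (m^3/s), with a
# period-mean flow of 2.0 m^3/s from the continuous gage record.
mean_load = beale_load([10.0, 20.0, 30.0], [1.0, 2.0, 3.0], 2.0)
```

    When sampled loads are exactly proportional to flow, as in the example, the bias-correction factor is 1 and the estimator reduces to the plain ratio estimate.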

  18. Test Methods for Evaluating Solid Waste, Physical/Chemical Methods. First Update. (3rd edition)

    International Nuclear Information System (INIS)

    Friedman; Sellers.

    1988-01-01

    The proposed Update is for Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, SW-846, Third Edition. Attached to the report is a list of methods included in the proposed update indicating whether the method is a new method, a partially revised method, or a totally revised method. Do not discard or replace any of the current pages in the SW-846 manual until the proposed Update I package is promulgated. Until promulgation of the update package, the methods in the update package are not officially part of the SW-846 manual and thus do not carry the status of EPA-approved methods. In addition to the proposed Update, six finalized methods are included for immediate inclusion into the Third Edition of SW-846. Four methods, originally proposed October 1, 1984, will be finalized in a soon-to-be-released rulemaking. They are, however, being submitted to subscribers for the first time in the update. These methods are 7211, 7381, 7461, and 7951. Two other methods were finalized in the 2nd Edition of SW-846. They were inadvertently omitted from the 3rd Edition and are not being proposed as new. These methods are 7081 and 7761.

  19. Evaluation of Quality Assessment Protocols for High Throughput Genome Resequencing Data.

    Science.gov (United States)

    Chiara, Matteo; Pavesi, Giulio

    2017-01-01

    Large-scale initiatives aiming to recover the complete sequence of thousands of human genomes are currently being undertaken worldwide, concurring to the generation of a comprehensive catalog of human genetic variation. The ultimate and most ambitious goal of human population-scale genomics is the characterization of the so-called human "variome," through the identification of causal mutations or haplotypes. Several research institutions worldwide currently use genotyping assays based on Next-Generation Sequencing (NGS) for diagnostics and clinical screenings, and the widespread application of such technologies promises major revolutions in medical science. Bioinformatic analysis of human resequencing data is one of the main factors limiting the effectiveness and general applicability of NGS for clinical studies. The requirement for multiple tools, combined in dedicated protocols in order to accommodate different types of data (gene panels, exomes, or whole genomes), and the high variability of the data make it difficult to establish an ultimate strategy of general use. While several studies have already compared the sensitivity and accuracy of bioinformatic pipelines for the identification of single nucleotide variants from resequencing data, little is known about the impact of quality assessment and read pre-processing strategies. In this work we discuss the major strengths and limitations of the various genome resequencing protocols that are currently used in molecular diagnostics and for the discovery of novel disease-causing mutations. By taking advantage of publicly available data we devise and suggest a series of best practices for the pre-processing of the data that consistently improve the outcome of genotyping with minimal impact on computational costs.
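    A typical read pre-processing step of the kind evaluated here is 3' quality trimming of FASTQ reads. The sketch below is a schematic stand-in, not a specific published tool; the Phred threshold and simple scan-from-the-end rule are illustrative assumptions:

```python
def trim_read(bases, quals, threshold=20, offset=33):
    """Trim the low-quality 3' tail of a read: scanning from the 3'
    end, drop bases until one meets the Phred threshold. Quality
    characters are decoded with the standard ASCII offset of 33."""
    scores = [ord(c) - offset for c in quals]
    end = len(scores)
    while end > 0 and scores[end - 1] < threshold:
        end -= 1
    return bases[:end], quals[:end]

# 'I' encodes Phred 40, '#' encodes Phred 2, so the last two bases
# of this hypothetical read are trimmed.
bases, quals = trim_read("ACGTACGT", "IIIIII##")
```

    Choices like the trimming threshold are exactly the pre-processing parameters whose downstream effect on genotyping accuracy the abstract argues should be assessed systematically.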

  20. Operator performance evaluation using multi criteria decision making methods

    Science.gov (United States)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Razali, Siti Fatihah

    2014-06-01

    Operator performance evaluation is a very important operation in labor-intensive manufacturing industry because the company's productivity depends on the performance of its operators. The aims of operator performance evaluation are to give feedback to operators on their performance, to increase the company's productivity, and to identify the strengths and weaknesses of each operator. In this paper, six multi-criteria decision making methods; Analytical Hierarchy Process (AHP), fuzzy AHP (FAHP), ELECTRE, PROMETHEE II, Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR); are used to evaluate the operators' performance and to rank the operators. The performance evaluation is based on six main criteria: competency, experience and skill, teamwork and time punctuality, personal characteristics, capability, and outcome. The study was conducted at one of the SME food manufacturing companies in Selangor. From the study, it is found that AHP and FAHP yielded "outcome" as the most important criterion. The results of the operator performance evaluation showed that the same operator is ranked first by all six methods.
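    Of the six methods listed, TOPSIS is the most compact to illustrate. A minimal sketch with made-up operator ratings, treating every criterion as benefit-type (the study's actual criteria and weights are not reproduced here):

```python
import math

def topsis(matrix, weights):
    """Rank alternatives with TOPSIS (benefit criteria only).

    matrix: rows = alternatives, columns = criteria scores.
    weights: one weight per criterion, summing to 1.
    Returns closeness coefficients (higher = better).
    """
    n_alt, n_crit = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
         for i in range(n_alt)]
    # Ideal (best) and anti-ideal (worst) points per criterion
    best = [max(v[i][j] for i in range(n_alt)) for j in range(n_crit)]
    worst = [min(v[i][j] for i in range(n_alt)) for j in range(n_crit)]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Three hypothetical operators scored on two criteria (e.g. outcome, skill)
ratings = [[7, 9], [8, 7], [5, 6]]
print(topsis(ratings, [0.6, 0.4]))
```

An operator dominated on every criterion receives a closeness coefficient of 0, since its distance to the anti-ideal point is zero.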

  1. Development of evaluation method for software hazard identification techniques

    International Nuclear Information System (INIS)

    Huang, H. W.; Chen, M. H.; Shih, C.; Yih, S.; Kuo, C. T.; Wang, L. H.; Yu, Y. C.; Chen, C. W.

    2006-01-01

    This research evaluated currently applicable software hazard identification techniques, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flow-graph Methodology (DFM), and simulation-based model analysis, and then determined evaluation indexes in view of their characteristics, which include dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. With this proposed method, analysts can evaluate various software hazard identification combinations for a specific purpose. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (with transfer rate) combination is not competitive, owing to the dilemma of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for understanding the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio; their disadvantages are completeness, complexity, and implementation cost. This evaluation method can serve as a platform for reaching common consensus among stakeholders. As software hazard identification techniques evolve, the evaluation results may change. However, the insight into software hazard identification techniques is much more important than the numbers obtained by the evaluation. (authors)
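    The index-based comparison described above amounts to a weighted scoring of each technique combination across the seven indexes. A hedged sketch with purely illustrative 1-5 ratings and weights (not the study's numbers):

```python
# Hypothetical 1-5 ratings on the paper's evaluation indexes for two
# candidate technique combinations (values are illustrative only).
indexes = ["dynamic capability", "completeness", "achievability",
           "detail", "signal/noise ratio", "complexity", "cost"]
weights = [0.2, 0.2, 0.15, 0.15, 0.1, 0.1, 0.1]  # stakeholder consensus

candidates = {
    "PHA+FMEA+FTA+Markov": [2, 4, 2, 4, 3, 2, 2],
    "DFM":                 [5, 3, 4, 4, 4, 2, 2],
}

def weighted_score(ratings, weights):
    """Simple additive weighting: higher is better on every index."""
    return sum(r * w for r, w in zip(ratings, weights))

for name, ratings in candidates.items():
    print(f"{name}: {weighted_score(ratings, weights):.2f}")
```

In practice the weights would be negotiated by the stakeholders the paper mentions, which is precisely what makes the method a consensus platform.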

  2. Evaluating the adaptive evolutionary convergence of carnivorous plant taxa through functional genomics.

    Science.gov (United States)

    Wheeler, Gregory L; Carstens, Bryan C

    2018-01-01

    Carnivorous plants are striking examples of evolutionary convergence, displaying complex and often highly similar adaptations despite lack of shared ancestry. Using available carnivorous plant genomes along with non-carnivorous reference taxa, this study examines the convergence of functional overrepresentation of genes previously implicated in plant carnivory. Gene Ontology (GO) coding was used to quantitatively score functional representation in these taxa, in terms of proportion of carnivory-associated functions relative to all functional sequence. Statistical analysis revealed that, in carnivorous plants as a group, only two of the 24 functions tested showed a signal of substantial overrepresentation. However, when the four carnivorous taxa were analyzed individually, 11 functions were found to be significant in at least one taxon. Though carnivorous plants collectively may show overrepresentation in functions from the predicted set, the specific functions that are overrepresented vary substantially from taxon to taxon. While it is possible that some functions serve a similar practical purpose such that one taxon does not need to utilize both to achieve the same result, it appears that there are multiple approaches for the evolution of carnivorous function in plant genomes. Our approach could be applied to tests of functional convergence in other systems, provided genomes and annotation data are available for the group.

  3. Evaluating the adaptive evolutionary convergence of carnivorous plant taxa through functional genomics

    Directory of Open Access Journals (Sweden)

    Gregory L. Wheeler

    2018-01-01

    Full Text Available Carnivorous plants are striking examples of evolutionary convergence, displaying complex and often highly similar adaptations despite lack of shared ancestry. Using available carnivorous plant genomes along with non-carnivorous reference taxa, this study examines the convergence of functional overrepresentation of genes previously implicated in plant carnivory. Gene Ontology (GO) coding was used to quantitatively score functional representation in these taxa, in terms of proportion of carnivory-associated functions relative to all functional sequence. Statistical analysis revealed that, in carnivorous plants as a group, only two of the 24 functions tested showed a signal of substantial overrepresentation. However, when the four carnivorous taxa were analyzed individually, 11 functions were found to be significant in at least one taxon. Though carnivorous plants collectively may show overrepresentation in functions from the predicted set, the specific functions that are overrepresented vary substantially from taxon to taxon. While it is possible that some functions serve a similar practical purpose such that one taxon does not need to utilize both to achieve the same result, it appears that there are multiple approaches for the evolution of carnivorous function in plant genomes. Our approach could be applied to tests of functional convergence in other systems, provided genomes and annotation data are available for the group.
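    Functional overrepresentation of the kind scored in this study is typically assessed with a one-sided hypergeometric test. A minimal sketch with illustrative gene counts (the paper's actual counts and scoring procedure are not reproduced here):

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k): probability of drawing at least k annotated genes
    when n genes are sampled from a genome of N genes, K of which
    carry the GO function of interest (one-sided enrichment test)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Illustrative numbers: a genome of 20000 genes, 100 annotated with the
# function, and 15 of 500 carnivory-candidate genes carrying it.
p = hypergeom_sf(15, 20000, 100, 500)
print(f"enrichment p-value: {p:.3g}")
```

With these toy numbers the expected overlap under the null is 2.5 genes, so observing 15 gives a very small p-value; in a real analysis the p-values would then be corrected for testing multiple GO terms.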

  4. An evaluation of the genetic-matched pair study design using genome-wide SNP data from the European population

    DEFF Research Database (Denmark)

    Lu, Timothy Tehua; Lao, Oscar; Nothnagel, Michael

    2009-01-01

    Genetic matching potentially provides a means to alleviate the effects of incomplete Mendelian randomization in population-based gene-disease association studies. We therefore evaluated the genetic-matched pair study design on the basis of genome-wide SNP data (309,790 markers; Affymetrix Gene...... of predicting the BOM than randomly chosen subsets. This leads us to conclude that, at least in Europe, the utility of the genetic-matched pair study design depends critically on the availability of comprehensive genotype information for both cases and controls....

  5. Evaluation of an explicit NOx chemistry method in AERMOD.

    Science.gov (United States)

    Carruthers, David J; Stocker, Jenny R; Ellis, Andrew; Seaton, Martin D; Smith, Stephen E

    2017-06-01

    An explicit NOx chemistry method has been implemented in AERMOD version 15181, ADMSM. The scheme has been evaluated by comparison with the methodologies currently recommended by the U.S. EPA for Tier 3 NO2 calculations, that is, OLM and PVMRM2. Four data sets have been used for NO2 chemistry method evaluation. Overall, ADMSM-modeled NO2 concentrations show the most consistency with the AERMOD calculations of NOx and the highest Index of Agreement; they are also on average lower than those of both OLM and PVMRM2. OLM shows little consistency with modeled NOx concentrations and markedly overpredicts NO2. PVMRM2 shows performance closer to that of ADMSM than OLM; however, its behavior is inconsistent with modeled NOx in some cases and it has poorer statistics for NO2. The trend in model performance can be explained by examining the features particular to each chemistry method: OLM can be considered a screening model, as it calculates the upper bound of conversion from NO to NO2 possible with the background O3 concentration; PVMRM2 includes a much-improved estimate of in-plume O3 but is otherwise similar to OLM, assuming instantaneous reaction of NO with O3; and ADMSM allows for the rate of this reaction and also the photolysis of NO2. Evaluation with additional data sets is needed to further clarify the relative performance of ADMSM and PVMRM2. Extensive evaluation of the current AERMOD Tier 3 chemistry methods OLM and PVMRM2, alongside a new scheme that explicitly calculates the oxidation of NO by O3 and the reverse photolytic reaction, shows that OLM consistently overpredicts NO2 concentrations. PVMRM2 performs well in general, but there are some cases where this method overpredicts NO2. The new explicit NOx chemistry scheme, ADMSM, predicts NO2 concentrations that are more consistent with both the modeled NOx concentrations and the observations.
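    The ozone-limited conversion that makes OLM an upper-bound screening method can be sketched as follows. The in-stack NO2 fraction and the concentrations are illustrative, and this is a simplified reading of the OLM mass balance rather than AERMOD's implementation:

```python
def olm_no2(nox_conc, o3_ambient, in_stack_no2_fraction=0.1):
    """Ozone Limiting Method estimate of NO2 from modeled NOx (ug/m3).

    Assumes a fraction of NOx is emitted directly as NO2; the rest
    converts only as far as ambient O3 allows. The factor 46/48
    converts the O3 mass concentration to its NO2-equivalent
    (molecular weights 46 and 48 g/mol).
    """
    direct_no2 = in_stack_no2_fraction * nox_conc
    remaining_no = (1.0 - in_stack_no2_fraction) * nox_conc
    return direct_no2 + min(remaining_no, (46.0 / 48.0) * o3_ambient)

# O3-rich case: essentially full conversion of NO to NO2
print(olm_no2(100.0, 150.0))  # prints 100.0
# O3-limited case: conversion capped by the available ozone
print(olm_no2(100.0, 40.0))
```

The explicit ADMSM scheme described above replaces the `min` cap with the actual NO + O3 reaction rate and the reverse NO2 photolysis, which is why it predicts lower NO2 in many cases.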

  6. Evaluation of statistical methods for normalization and differential expression in mRNA-Seq experiments

    Directory of Open Access Journals (Sweden)

    Hansen Kasper D

    2010-02-01

    Full Text Available Abstract Background High-throughput sequencing technologies, such as the Illumina Genome Analyzer, are powerful new tools for investigating a wide range of biological and medical questions. Statistical and computational methods are key for drawing meaningful and accurate conclusions from the massive and complex datasets generated by the sequencers. We provide a detailed evaluation of statistical methods for normalization and differential expression (DE) analysis of Illumina transcriptome sequencing (mRNA-Seq) data. Results We compare statistical methods for detecting genes that are significantly DE between two types of biological samples and find that there are substantial differences in how the test statistics handle low-count genes. We evaluate how DE results are affected by features of the sequencing platform, such as varying gene lengths, base-calling calibration method (with and without phi X control lane), and flow-cell/library preparation effects. We investigate the impact of the read count normalization method on DE results and show that the standard approach of scaling by total lane counts (e.g., RPKM) can bias estimates of DE. We propose more general quantile-based normalization procedures and demonstrate an improvement in DE detection. Conclusions Our results have significant practical and methodological implications for the design and analysis of mRNA-Seq experiments. They highlight the importance of appropriate statistical methods for normalization and DE inference, to account for features of the sequencing platform that could impact the accuracy of results. They also reveal the need for further research in the development of statistical and computational methods for mRNA-Seq.
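    The quantile-based alternative to total-count scaling can be illustrated with upper-quartile normalization. This sketch, using toy counts, is a simplified variant of the kind of procedure the paper proposes, not its exact method:

```python
def upper_quartile_factors(counts):
    """Per-lane scaling factors from the upper quartile of the nonzero
    gene counts, so that lanes are comparable without letting a few
    very highly expressed genes dominate (as total-count scaling can).

    counts: dict lane -> list of per-gene read counts.
    """
    def upper_quartile(values):
        nonzero = sorted(v for v in values if v > 0)
        return nonzero[int(0.75 * (len(nonzero) - 1))]

    uqs = {lane: upper_quartile(v) for lane, v in counts.items()}
    mean_uq = sum(uqs.values()) / len(uqs)
    # Scale every lane so its upper quartile matches the cross-lane mean
    return {lane: mean_uq / uq for lane, uq in uqs.items()}

lanes = {"lane1": [0, 10, 100, 400, 40], "lane2": [0, 20, 200, 800, 80]}
factors = upper_quartile_factors(lanes)
print(factors)
```

Each lane's counts would then be multiplied by its factor before DE testing.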

  7. Bacterial whole genome-based phylogeny: construction of a new benchmarking dataset and assessment of some existing methods

    DEFF Research Database (Denmark)

    Ahrenfeldt, Johanne; Skaarup, Carina; Hasman, Henrik

    2017-01-01

    for sequencing. The result is a data set consisting of 101 whole genome sequences with known phylogenetic relationship. Among the sequenced samples 51 correspond to internal nodes in the phylogeny because they are ancestral, while the remaining 50 correspond to leaves. We also used the newly created data set...... sequences are placed as leaves, even though some of them are in fact ancestral. We therefore devised a method for post-processing the inferred trees by collapsing short branches (thus relocating some leaves to internal nodes), and also present two new measures of tree similarity that take into account...... the identity of both internal and leaf nodes. Conclusions Based on this analysis we find that, among the investigated methods, CSI Phylogeny had the best performance, correctly identifying 73% of all branches in the tree and 71% of all clades. We have made all data from this experiment (raw sequencing reads...

  8. Evaluation of Test Method for Solar Collector Efficiency

    DEFF Research Database (Denmark)

    Fan, Jianhua; Shah, Louise Jivan; Furbo, Simon

    on these efficiencies, an efficiency equation is determined by regression analysis. In the test method, there are no requirements on the ambient air temperature and the sky temperature. The paper will present an evaluation of the test method for a 12.5 m² flat plate solar collector panel from Arcon Solvarme A....../S. The solar collector panel investigated has 16 parallel connected horizontal absorber fins. CFD (Computational Fluid Dynamics) simulations, calculations with a solar collector simulation program SOLEFF (Rasmussen and Svendsen, 1996) and thermal experiments are carried out in the investigation......The test method of the standard EN12975-2 (European Committee for Standardization, 2004) is used by European test laboratories to determine the efficiency of solar collectors. In the test methods the mean solar collector fluid temperature in the solar collector, Tm is determined by the approximated...
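    The efficiency equation determined by regression in EN 12975-2 testing has the standard steady-state form eta = eta0 - a1*(Tm - Ta)/G - a2*(Tm - Ta)^2/G. A sketch with coefficient values typical of a flat-plate collector (illustrative numbers, not the Arcon panel's test results):

```python
def collector_efficiency(G, Tm, Ta, eta0=0.8, a1=3.5, a2=0.015):
    """Instantaneous collector efficiency from the steady-state model
    fitted to EN 12975-2 test data:

        eta = eta0 - a1*(Tm - Ta)/G - a2*(Tm - Ta)**2 / G

    G is irradiance in W/m2, temperatures are in degrees C; eta0
    (optical efficiency), a1 and a2 (heat loss coefficients) are
    illustrative flat-plate values, not measured Arcon data.
    """
    dT = Tm - Ta
    return eta0 - a1 * dT / G - a2 * dT ** 2 / G

# Efficiency falls as the mean fluid temperature runs above ambient
for dT in (0, 20, 40, 60):
    print(dT, round(collector_efficiency(1000.0, 25.0 + dT, 25.0), 3))
```

The regression mentioned in the record estimates eta0, a1, and a2 by least squares from measured (G, Tm, Ta, efficiency) points.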

  9. Evaluation of non cyanide methods for hemoglobin estimation

    Directory of Open Access Journals (Sweden)

    Vinaya B Shah

    2011-01-01

    Full Text Available Background: The hemoglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its advantages are the ready availability of a stable and internationally accepted reference standard calibrator. However, its use may create a problem, as the waste disposal of large volumes of reagent containing cyanide constitutes a potential toxic hazard. Aims and Objective: As an alternative to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two non-cyanide methods: alkaline hematin detergent (AHD-575) using Triton X-100 as lyser, and an alkaline-borax method using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a light-emitting diode (LED)-based colorimeter. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples of infants and compared by these methods. The statistical analysis used was Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range from 3 g/dl to 20 g/dl. The non-cyanide methods have a precision of ±0.25 g/dl (coefficient of variation = 2.34%) and are suitable for use with fixed-wavelength colorimeters at wavelengths of 530 nm and 580 nm. Correlation between these methods and HiCN was excellent (r = 0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous, did not affect the reliability of data determination, and cost less than those of the HiCN method. Conclusions: Thus, non-cyanide methods of Hb estimation offer the possibility of safe, quality Hb estimation and should prove useful for routine laboratory use. Non-cyanide methods are easily incorporated into hemoglobinometers.

  10. Identification, evaluation, and application of the genomic-SSR loci in ramie

    Directory of Open Access Journals (Sweden)

    Ming-Bao Luan

    2016-09-01

    Full Text Available To provide a theoretical and practical foundation for ramie genetic analysis, simple sequence repeats (SSRs) were identified in the ramie genome and employed in this study. From the 115 369 sequences of a specific-locus amplified fragment library, a type of reduced-representation library obtained by high-throughput sequencing, we identified 4774 sequences containing 5064 SSR motifs. SSRs of ramie included repeat motifs with lengths of 1 to 6 nucleotides, and the abundance of each motif type varied greatly. We found that mononucleotide, dinucleotide, and trinucleotide repeat motifs were the most prevalent (95.91%). A total of 98 distinct motif types were detected in the genomic-SSRs of ramie. Of these, the A/T mononucleotide motif was the most abundant, accounting for 41.45% of motifs, followed by AT/TA, accounting for 20.30%. The number of alleles per locus in 31 polymorphic microsatellite loci ranged from 2 to 7, and observed and expected heterozygosities ranged from 0.04 to 1.00 and 0.04 to 0.83, respectively. Furthermore, molecular identity cards (IDs) of the germplasms were constructed employing the ID Analysis 3.0 software. In the current study, the 26 germplasms of ramie can be distinguished by a combination of five SSR primers, including Ibg5-5, Ibg3-210, Ibg1-11, Ibg6-468, and Ibg6-481. The allele polymorphisms produced by all SSR primers were used to analyze genetic relationships among the germplasms. The similarity coefficients ranged from 0.41 to 0.88. We found that these 26 germplasms clustered into five categories using UPGMA, with poor correlation between germplasm and geographical distribution. Our study is the first large-scale SSR identification from ramie genomic sequences. We have further studied the SSR distribution pattern in the ramie genome, and propose that it is possible to develop SSR loci from genomic data for population genetics studies, linkage mapping, quantitative trait locus mapping, cultivar fingerprinting
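    SSR identification of the kind performed here is commonly done with regular-expression scans for tandem repeats. A minimal sketch for mono-, di-, and trinucleotide motifs; the repeat-count thresholds and the toy sequence are illustrative, not the study's parameters:

```python
import re

def find_ssrs(seq, min_repeats={1: 10, 2: 6, 3: 5}):
    """Locate perfect mono-, di-, and trinucleotide SSRs in a sequence.

    min_repeats maps motif length to the minimum number of tandem
    copies required (thresholds are illustrative; SSR search tools
    use comparable defaults). Returns (motif, start, copies) tuples.
    """
    hits = []
    for unit_len, min_n in min_repeats.items():
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit_len, min_n - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            if unit_len > 1 and len(set(motif)) == 1:
                continue  # skip mononucleotide runs re-matched as di/tri motifs
            hits.append((motif, m.start(), len(m.group(0)) // unit_len))
    return hits

seq = "GGC" + "AT" * 8 + "CCG" + "A" * 12 + "TTG"
print(find_ssrs(seq))  # expect an (AT)8 and an (A)12 locus
```

Real pipelines additionally handle compound and imperfect repeats, which this sketch ignores.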

  11. scnRCA: a novel method to detect consistent patterns of translational selection in mutationally-biased genomes.

    Directory of Open Access Journals (Sweden)

    Patrick K O'Neill

    Full Text Available Codon usage bias (CUB) results from the complex interplay between translational selection and mutational biases. Current methods for CUB analysis apply heuristics to integrate both components, limiting the depth and scope of CUB analysis as a technique to probe into the evolution and optimization of protein-coding genes. Here we introduce a self-consistent CUB index (scnRCA) that incorporates implicit correction for mutational biases, facilitating exploration of the translational selection component of CUB. We validate this technique using gene expression data and we apply it to a detailed analysis of CUB in the Pseudomonadales. Our results illustrate how the selective enrichment of specific codons among highly expressed genes is preserved in the context of genome-wide shifts in codon frequencies, and how the balance between mutational and translational biases leads to varying definitions of codon optimality. We extend this analysis to other moderate and fast growing bacteria and we provide unified support for the hypothesis that C- and A-ending codons of two-box amino acids, and the U-ending codons of four-box amino acids, are systematically enriched among highly expressed genes across bacteria. The use of an unbiased estimator of CUB allows us to report for the first time that the signature of translational selection is strongly conserved in the Pseudomonadales in spite of drastic changes in genome composition, and extends well beyond the core set of highly optimized genes in each genome. We generalize these results to other moderate and fast growing bacteria, hinting at selection for a universal pattern of gene expression that is conserved and detectable in conserved patterns of codon usage bias.
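    A standard building block for CUB analysis is relative synonymous codon usage (RSCU). A minimal sketch over a toy two-codon family; this illustrates the general idea, not the scnRCA index itself, whose mutational-bias correction is more involved:

```python
from collections import Counter

# Minimal synonymous-codon table for two two-box amino acids
# (illustrative subset, not the full genetic code).
SYN = {"AAT": "N", "AAC": "N", "GAT": "D", "GAC": "D"}

def rscu(coding_seq):
    """Relative synonymous codon usage: observed count of a codon
    divided by the mean count over its synonymous family. Values > 1
    mark codons used more often than expected under uniform usage."""
    codons = [coding_seq[i:i + 3] for i in range(0, len(coding_seq) - 2, 3)]
    counts = Counter(c for c in codons if c in SYN)
    out = {}
    for codon, aa in SYN.items():
        family = [c for c, a in SYN.items() if a == aa]
        total = sum(counts[c] for c in family)
        if total:
            out[codon] = counts[codon] / (total / len(family))
    return out

# 3 AAC vs 1 AAT: AAC is enriched (RSCU 1.5), AAT depleted (0.5)
print(rscu("AACAACAACAATGATGAC"))
```

Indexes such as scnRCA go further by normalizing these ratios against the genome-wide nucleotide composition, so that mutational bias alone does not masquerade as selection.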

  12. Gene Editing in Human Lymphoid Cells: Role for Donor DNA, Type of Genomic Nuclease and Cell Selection Method

    Directory of Open Access Journals (Sweden)

    Anastasia Zotova

    2017-11-01

    Full Text Available Programmable endonucleases introduce DNA breaks at specific sites, which are repaired by non-homologous end joining (NHEJ) or homology-directed repair (HDR). Genome editing in human lymphoid cells is challenging, as these difficult-to-transfect cells may also repair DNA by HDR inefficiently. Here, we estimated the efficiencies and dynamics of knockout (KO) and knockin (KI) generation in human T and B cell lines depending on the repair template, target loci, and type of genomic endonuclease. Using zinc finger nuclease (ZFN), we have engineered Jurkat and CEM cells with the 8.2 kb human immunodeficiency virus type 1 (HIV-1) ∆Env genome integrated at the adeno-associated virus integration site 1 (AAVS1) locus that stably produce virus particles and mediate infection upon transfection with helper vectors. Knockouts generated by ZFN or clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 double-nicking techniques were comparably efficient in lymphoid cells. However, unlike polyclonal sorted cells, gene-edited cells selected by cloning showed tremendous deviations in functionality, as estimated by replication of HIV-1 and human T cell leukemia virus type 1 (HTLV-1) in these cells. Notably, the recently reported high-fidelity eCas9 1.1, when combined with the nickase mutation, displayed a gene-dependent decrease in on-target activity. Thus, the balance between off-target effects and on-target efficiency of nucleases, as well as the choice of the optimal method of edited-cell selection, should be taken into account for proper gene function validation in lymphoid cells.

  13. REGEN: Ancestral Genome Reconstruction for Bacteria

    Directory of Open Access Journals (Sweden)

    João C. Setubal

    2012-07-01

    Full Text Available Ancestral genome reconstruction can be understood as a phylogenetic study with more details than a traditional phylogenetic tree reconstruction. We present a new computational system called REGEN for ancestral bacterial genome reconstruction at both the gene and replicon levels. REGEN reconstructs gene content, contiguous gene runs, and replicon structure for each ancestral genome. Along each branch of the phylogenetic tree, REGEN infers evolutionary events, including gene creation and deletion and replicon fission and fusion. The reconstruction can be performed by either a maximum parsimony or a maximum likelihood method. Gene content reconstruction is based on the concept of neighboring gene pairs. REGEN was designed to be used with any set of genomes that are sufficiently related, which will usually be the case for bacteria within the same taxonomic order. We evaluated REGEN using simulated genomes and genomes in the Rhizobiales order.

  14. REGEN: Ancestral Genome Reconstruction for Bacteria.

    Science.gov (United States)

    Yang, Kuan; Heath, Lenwood S; Setubal, João C

    2012-07-18

    Ancestral genome reconstruction can be understood as a phylogenetic study with more details than a traditional phylogenetic tree reconstruction. We present a new computational system called REGEN for ancestral bacterial genome reconstruction at both the gene and replicon levels. REGEN reconstructs gene content, contiguous gene runs, and replicon structure for each ancestral genome. Along each branch of the phylogenetic tree, REGEN infers evolutionary events, including gene creation and deletion and replicon fission and fusion. The reconstruction can be performed by either a maximum parsimony or a maximum likelihood method. Gene content reconstruction is based on the concept of neighboring gene pairs. REGEN was designed to be used with any set of genomes that are sufficiently related, which will usually be the case for bacteria within the same taxonomic order. We evaluated REGEN using simulated genomes and genomes in the Rhizobiales order.
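    REGEN's maximum-parsimony mode reconstructs gene content at ancestral nodes. A hedged sketch of the classic Fitch bottom-up pass for a single gene's presence/absence on a small rooted binary tree; this illustrates the general small-parsimony idea, not REGEN's actual neighboring-gene-pair algorithm:

```python
def fitch_presence(tree, leaf_states):
    """Fitch small-parsimony bottom-up pass for one gene's
    presence (1) / absence (0) across a rooted binary tree.

    tree: dict mapping each internal node to (left child, right child).
    Returns the candidate state set computed at each node.
    """
    states = {leaf: {s} for leaf, s in leaf_states.items()}

    def visit(node):
        if node in states:
            return states[node]
        left, right = tree[node]
        a, b = visit(left), visit(right)
        # Intersection if compatible, otherwise union (one event implied)
        states[node] = (a & b) or (a | b)
        return states[node]

    # The root is the internal node that is nobody's child
    root = next(n for n in tree if all(n not in kids for kids in tree.values()))
    visit(root)
    return states

# Gene present in leaves A and B, absent in the outgroup C
tree = {"root": ("anc1", "C"), "anc1": ("A", "B")}
states = fitch_presence(tree, {"A": 1, "B": 1, "C": 0})
print(states["anc1"], states["root"])  # prints {1} {0, 1}
```

A top-down pass would then resolve the ambiguous root set; gene creations and deletions are read off the branches where the resolved state changes.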

  15. Microsatellite markers for evaluating the diversity of the natural killer complex and major histocompatibility complex genomic regions in domestic horses.

    Science.gov (United States)

    Horecky, C; Horecka, E; Futas, J; Janova, E; Horin, P; Knoll, A

    2018-04-01

    Genotyping microsatellite markers represents a standard, relatively easy, and inexpensive method of assessing genetic diversity of complex genomic regions in various animal species, such as the major histocompatibility complex (MHC) and/or natural killer cell receptor (NKR) genes. MHC-linked microsatellite markers have been identified, and some of them have been used for characterizing MHC polymorphism in various species, including horses. However, most of those were MHC class II markers, while the MHC class I and III sub-regions were less well covered. No tools for studying genetic diversity of NKR complex genomic regions are available in horses. Therefore, the aims of this work were to establish a panel of markers suitable for analyzing genetic diversity of the natural killer complex (NKC), and to develop additional microsatellite markers for the MHC class I and class III genomic sub-regions in horses. Nine polymorphic microsatellite loci were newly identified in the equine NKC. Along with two previously reported microsatellites flanking this region, they constituted a panel of 11 loci allowing characterization of genetic variation in this functionally important part of the horse genome. Four newly described MHC class I/III-linked markers were added to 11 known microsatellites to establish a panel of 15 MHC markers with better coverage of the class I and class III sub-regions. Major characteristics of the two panels, produced on a group of 65 horses of 13 breeds and on five Przewalski's horses, showed that they do reflect genetic variation within the horse species. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
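    The observed and expected heterozygosities that characterize such marker panels are computed from genotype and allele frequencies. A minimal sketch with hypothetical allele sizes at a single locus (the locus name and values are made up for illustration):

```python
def heterozygosities(genotypes):
    """Observed and expected heterozygosity for one microsatellite locus.

    genotypes: list of (allele1, allele2) pairs, one per individual.
    Ho = fraction of heterozygous individuals;
    He = 1 - sum(p_i**2) over allele frequencies p_i
    (the plain estimator, without small-sample correction).
    """
    n = len(genotypes)
    ho = sum(a != b for a, b in genotypes) / n
    alleles = [a for pair in genotypes for a in pair]
    freqs = {a: alleles.count(a) / len(alleles) for a in set(alleles)}
    he = 1.0 - sum(p ** 2 for p in freqs.values())
    return ho, he

# Four horses typed at one hypothetical NKC-linked locus
geno = [(150, 154), (150, 150), (154, 158), (150, 154)]
ho, he = heterozygosities(geno)
print(round(ho, 3), round(he, 3))  # prints 0.75 0.594
```

Panel-level summaries average these per-locus values across all markers.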