WorldWideScience

Sample records for high-throughput contact mapping

  1. High-throughput antibody development and retrospective epitope mapping

    DEFF Research Database (Denmark)

    Rydahl, Maja Gro

Plant cell walls are composed of an interlinked network of polysaccharides, glycoproteins and phenolic polymers. When addressing the diverse polysaccharides in green plants, including land plants and the ancestral green algae, there are significant overlaps in the cell wall structures. Yet there are noteworthy differences in the less evolved species of algae as compared to land plants. The dynamic process orchestrating the deposition of these biopolymers, both in algae and higher plants, is complex and highly heterogeneous, yet immensely important for the development and differentiation of the cell … of green algae during the development into land plants. Hence, there is a pressing need for rethinking the glycomic toolbox by developing new, high-throughput (HTP) technology in order to acquire information on the location and relative abundance of diverse cell wall polymers. In this dissertation…

  2. Algorithms for mapping high-throughput DNA sequences

    DEFF Research Database (Denmark)

    Frellsen, Jes; Menzel, Peter; Krogh, Anders

    2014-01-01

…of data generation, new bioinformatics approaches have been developed to cope with the large amount of sequencing reads obtained in these experiments. In this chapter, we first introduce HTS technologies and their usage in molecular biology and discuss the problem of mapping sequencing reads…

  3. High-Throughput SNP Discovery And Genetic Mapping In Perennial Ryegrass

    DEFF Research Database (Denmark)

    Asp, Torben; Studer, Bruno; Lübberstedt, Thomas

Gene-associated single nucleotide polymorphisms (SNPs) are of major interest for genome analysis and breeding applications in the key grassland species perennial ryegrass. High-throughput 454 Titanium transcriptome sequencing was performed on two genotypes, which have previously been used … in the VrnA mapping population. Here we report on large-scale SNP discovery and the construction of a genetic map enabling QTL fine mapping, map-based cloning, and comparative genomics in perennial ryegrass.

  4. Single DNA molecule patterning for high-throughput epigenetic mapping.

    Science.gov (United States)

    Cerf, Aline; Cipriany, Benjamin R; Benítez, Jaime J; Craighead, Harold G

    2011-11-01

We present a method for profiling the 5-methylcytosine distribution on single DNA molecules. Our method combines soft lithography and molecular elongation to form ordered arrays estimated to contain more than 250,000 individual DNA molecules immobilized on a solid substrate. The methylation state of the DNA is detected and mapped by binding fluorescently labeled methyl-CpG binding domain peptides to the elongated dsDNA molecules and imaging their distribution. The stretched molecules are fixed in their extended configuration by adsorption onto the substrate, so analysis can be performed with high spatial resolution and signal averaging. We further show that this technique allows imaging of DNA molecules with different methylation states.

  5. Perspective: Composition–structure–property mapping in high-throughput experiments: Turning data into knowledge

    Directory of Open Access Journals (Sweden)

    Jason R. Hattrick-Simpers

    2016-05-01

With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. An unresolved issue is transforming structural characterization data into phase mappings. This difficulty stems from the complex information present in diffraction and spectroscopic data and its variation with composition and processing. We review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have on the generation of phase diagrams and beyond.

  6. Construction and analysis of high-density linkage map using high-throughput sequencing data.

    Directory of Open Access Journals (Sweden)

    Dongyuan Liu

Linkage maps enable the study of important biological questions. The construction of high-density linkage maps has become more feasible since the advent of next-generation sequencing (NGS), which eases SNP discovery and high-throughput genotyping of large populations. However, the explosion in marker numbers and the genotyping errors inherent in NGS data challenge the computational efficiency and map quality of linkage-study methods. Here we report the HighMap method for constructing high-density linkage maps from NGS data. HighMap employs an iterative ordering and error-correction strategy based on a k-nearest-neighbor algorithm and a Monte Carlo multipoint maximum-likelihood algorithm. A simulation study shows HighMap can create a linkage map with three times as many markers as ordering-only methods while offering more accurate marker orders and stable genetic distances. Using HighMap, we constructed a common carp linkage map with 10,004 markers. Its singleton rate was less than one-ninth of that generated by JoinMap4.1. Its total map distance was 5,908 cM, consistent with reports on low-density maps. HighMap is an efficient method for constructing high-density, high-quality linkage maps from high-throughput population NGS data. It will facilitate genome assembly, comparative genomic analysis, and QTL studies. HighMap is available at http://highmap.biomarker.com.cn/.
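The k-nearest-neighbor error-correction idea described in this record can be sketched as follows. This is an illustrative simplification, not the actual HighMap implementation: the window size and the unanimity rule are invented for the example.

```python
# Illustrative sketch of k-nearest-neighbor genotype error correction
# on an ordered marker sequence (NOT the actual HighMap algorithm).
def knn_correct(genotypes, k=4):
    """Replace a call with the consensus of its k nearest flanking
    markers when those neighbors unanimously disagree with it."""
    corrected = list(genotypes)
    n = len(genotypes)
    for i in range(n):
        # collect up to k nearest neighbors on the ordered map
        neighbors = []
        offset = 1
        while len(neighbors) < k and (i - offset >= 0 or i + offset < n):
            if i - offset >= 0:
                neighbors.append(genotypes[i - offset])
            if i + offset < n and len(neighbors) < k:
                neighbors.append(genotypes[i + offset])
            offset += 1
        if not neighbors:
            continue
        majority = max(set(neighbors), key=neighbors.count)
        # flag a likely genotyping error only on unanimous disagreement
        if neighbors.count(majority) == len(neighbors) and majority != genotypes[i]:
            corrected[i] = majority
    return corrected

# a single discordant call inside a run of identical genotypes is corrected
print(knn_correct([0, 0, 0, 1, 0, 0, 0]))  # -> [0, 0, 0, 0, 0, 0, 0]
```

Note that a genuine crossover (a sustained switch of genotype) leaves mixed neighborhoods on both sides, so the unanimity condition protects it from "correction".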

  7. Arabidopsis Seed Content QTL Mapping Using High-Throughput Phenotyping: The Assets of Near Infrared Spectroscopy.

    Science.gov (United States)

    Jasinski, Sophie; Lécureuil, Alain; Durandet, Monique; Bernard-Moulin, Patrick; Guerche, Philippe

    2016-01-01

Seed storage compounds are of crucial importance for human diet, feed, and industrial uses. In oleo-proteaginous species like rapeseed, seed oil and protein are the qualitative determinants that confer economic value on the harvested seed. To date, although the biosynthesis pathways of oil and storage protein are rather well known, the factors that determine how these types of reserves are partitioned in seeds remain to be identified. With the aim of implementing a quantitative genetics approach, requiring phenotyping of hundreds of plants, our first objective was to establish near-infrared reflectance spectroscopy (NIRS) predictive equations to estimate oil, protein, carbon, and nitrogen content in Arabidopsis seed at high throughput. Our results demonstrated that NIRS is a powerful, non-destructive, high-throughput method to assess the content of these four major components in Arabidopsis seed. With this tool in hand, we analyzed Arabidopsis natural variation for these four components and showed that they all display a wide range of variation. Finally, NIRS was used to map QTL for these four traits using seeds from the Arabidopsis thaliana Ct-1 × Col-0 recombinant inbred line population. Some QTL co-localized with QTL previously identified, but others mapped to chromosomal regions never before associated with such traits. This paper illustrates the usefulness of NIRS predictive equations for accurate, high-throughput phenotyping of Arabidopsis seed content, opening new perspectives in gene identification following QTL mapping and genome-wide association studies.

  8. Arabidopsis seed content QTL mapping using high-throughput phenotyping: the assets of Near Infrared Spectroscopy

    Directory of Open Access Journals (Sweden)

    Sophie Jasinski

    2016-11-01

Seed storage compounds are of crucial importance for human diet, feed and industrial uses. In oleo-proteaginous species like rapeseed, seed oil and protein are the qualitative determinants that confer economic value on the harvested seed. To date, although the biosynthesis pathways of oil and storage protein are rather well known, the factors that determine how these types of reserves are partitioned in seeds remain to be identified. With the aim of implementing a quantitative genetics approach, requiring phenotyping of hundreds of plants, our first objective was to establish near-infrared reflectance spectroscopy (NIRS) predictive equations to estimate oil, protein, carbon and nitrogen content in Arabidopsis seed at high throughput. Our results demonstrated that NIRS is a powerful, non-destructive, high-throughput method to assess the content of these four major components in Arabidopsis seed. With this tool in hand, we analysed Arabidopsis natural variation for these four components and showed that they all display a wide range of variation. Finally, NIRS was used to map QTL for these four traits using seeds from the Arabidopsis thaliana Ct-1 × Col-0 recombinant inbred line population. Some QTL co-localised with QTL previously identified, but others mapped to chromosomal regions never before associated with such traits. This paper illustrates the usefulness of NIRS predictive equations for accurate, high-throughput phenotyping of Arabidopsis seed content, opening new perspectives in gene identification following QTL mapping and genome-wide association studies.

  9. High-throughput mapping of cell-wall polymers within and between plants using novel microarrays

    DEFF Research Database (Denmark)

    Moller, Isabel Eva; Sørensen, Iben; Bernal Giraldo, Adriana Jimena

    2007-01-01

We describe here a methodology that enables the occurrence of cell-wall glycans to be systematically mapped throughout plants in a semi-quantitative, high-throughput fashion. The technique (comprehensive microarray polymer profiling, or CoMPP) integrates the sequential extraction of glycans from … analysis of mutant and wild-type plants, as demonstrated here for the Arabidopsis thaliana mutants fra8, mur1 and mur3. CoMPP was also applied to Physcomitrella patens cell walls and was validated by carbohydrate linkage analysis. These data provide new insights into the structure and functions of plant…

  10. High-Throughput Peptide Epitope Mapping Using Carbon Nanotube Field-Effect Transistors

    Directory of Open Access Journals (Sweden)

    Steingrimur Stefansson

    2013-01-01

Label-free and real-time detection technologies can dramatically reduce the time and cost of pharmaceutical testing and development. However, to reach their full promise, these technologies need to be adaptable to high-throughput automation. To demonstrate the potential of single-walled carbon nanotube field-effect transistors (SWCNT-FETs) for high-throughput peptide-based assays, we designed circuits arranged in an 8 × 12 (96-well) format that are accessible to standard multichannel pipettors. We performed epitope mapping of two HIV-1 gp160 antibodies using an overlapping gp160 15-mer peptide library coated onto non-functionalized SWCNTs. The 15-mer peptides did not require a linker to adhere to the non-functionalized SWCNTs, and binding data were obtained in real time for all 96 circuits. Despite some sequence differences between the HIV strains used to generate these antibodies and the overlapping peptide library, our results with these antibodies are in good agreement with known data, indicating that peptides immobilized onto SWCNTs are accessible and that linear epitope mapping can be performed in minutes using SWCNT-FETs.
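An "overlapping 15-mer peptide library" of the kind coated onto the 96 circuits is simply a sliding window over the protein sequence. A minimal sketch, where the sequence fragment and the step size are invented placeholders (the record does not state the library's overlap):

```python
# Sketch of building an overlapping 15-mer peptide library for linear
# epitope mapping; the sequence fragment and 5-residue step below are
# illustrative placeholders, not the library actually used in the study.
def overlapping_peptides(protein, length=15, step=5):
    """Return every window of `length` residues, advancing by `step`."""
    return [protein[i:i + length]
            for i in range(0, len(protein) - length + 1, step)]

seq = "MRVKEKYQHLWRWGWRWGTMLLGMLMICSA"  # illustrative 30-residue fragment
peps = overlapping_peptides(seq)
print(len(peps), peps[0], peps[1])  # 4 peptides, each sharing 10 residues
```

Neighboring peptides share `length - step` residues, so any linear epitope shorter than that overlap is guaranteed to appear intact in at least one peptide.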

  11. High-Throughput Phenotyping and QTL Mapping Reveals the Genetic Architecture of Maize Plant Growth.

    Science.gov (United States)

    Zhang, Xuehai; Huang, Chenglong; Wu, Di; Qiao, Feng; Li, Wenqiang; Duan, Lingfeng; Wang, Ke; Xiao, Yingjie; Chen, Guoxing; Liu, Qian; Xiong, Lizhong; Yang, Wanneng; Yan, Jianbing

    2017-03-01

    With increasing demand for novel traits in crop breeding, the plant research community faces the challenge of quantitatively analyzing the structure and function of large numbers of plants. A clear goal of high-throughput phenotyping is to bridge the gap between genomics and phenomics. In this study, we quantified 106 traits from a maize (Zea mays) recombinant inbred line population (n = 167) across 16 developmental stages using the automatic phenotyping platform. Quantitative trait locus (QTL) mapping with a high-density genetic linkage map, including 2,496 recombinant bins, was used to uncover the genetic basis of these complex agronomic traits, and 988 QTLs have been identified for all investigated traits, including three QTL hotspots. Biomass accumulation and final yield were predicted using a combination of dissected traits in the early growth stage. These results reveal the dynamic genetic architecture of maize plant growth and enhance ideotype-based maize breeding and prediction. © 2017 American Society of Plant Biologists. All Rights Reserved.

  12. High throughput integrated thermal characterization with non-contact optical calorimetry

    Science.gov (United States)

    Hou, Sichao; Huo, Ruiqing; Su, Ming

    2017-10-01

Commonly used thermal analysis tools such as calorimeters and thermal conductivity meters are separate instruments with low throughput: only one sample is examined at a time. This work reports an infrared-based optical calorimetry, with its theoretical foundation, that provides an integrated solution for characterizing the thermal properties of materials at high throughput. By taking time-domain temperature information from spatially distributed samples, the method allows a single device (an infrared camera) to determine the thermal properties of both phase-change systems (melting temperature and latent heat of fusion) and non-phase-change systems (thermal conductivity and heat capacity). It further allows these thermal properties to be determined for multiple samples rapidly, remotely, and simultaneously. In this proof-of-concept experiment, the thermal properties of a panel of 16 samples, including melting temperatures, latent heats of fusion, heat capacities, and thermal conductivities, were determined in 2 min with high accuracy. Given the high thermal, spatial, and temporal resolutions of advanced infrared cameras, this method has the potential to revolutionize the thermal characterization of materials by providing an integrated solution with high throughput, high sensitivity, and short analysis time.
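The idea of extracting a heat capacity from a time-domain temperature record can be illustrated with the simplest possible model, lumped-capacitance Newton cooling. This is a sketch under stated assumptions (uniform sample temperature, known convective coefficient and area); the paper's actual model and parameter values are not given in the record.

```python
import math

# Minimal lumped-capacitance sketch of extracting specific heat from a
# cooling curve such as an infrared camera would record (illustrative
# only; all parameter values below are invented for the example).
def heat_capacity_from_decay(t1, T1, t2, T2, T_env, h, A, mass):
    """Fit the exponential time constant tau from two (time, temperature)
    samples of T(t) = T_env + (T0 - T_env) * exp(-t / tau), then recover
    specific heat c from tau = mass * c / (h * A)."""
    tau = (t2 - t1) / math.log((T1 - T_env) / (T2 - T_env))
    return h * A * tau / mass

# synthetic cooling curve with known c = 900 J/(kg K)
h, A, mass, T_env = 10.0, 1e-4, 1e-3, 25.0   # W/(m2 K), m2, kg, deg C
tau_true = mass * 900.0 / (h * A)            # = 900 s
T = lambda t: T_env + 50.0 * math.exp(-t / tau_true)
c = heat_capacity_from_decay(10.0, T(10.0), 60.0, T(60.0), T_env, h, A, mass)
print(round(c, 6))  # recovers 900.0
```

Because an infrared camera records such a curve for every pixel region simultaneously, the same two-point fit can be applied to all 16 samples in one pass, which is the throughput advantage the abstract describes.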

  13. Comparison of mapping algorithms used in high-throughput sequencing: application to Ion Torrent data.

    Science.gov (United States)

    Caboche, Ségolène; Audebert, Christophe; Lemoine, Yves; Hot, David

    2014-04-05

    The rapid evolution in high-throughput sequencing (HTS) technologies has opened up new perspectives in several research fields and led to the production of large volumes of sequence data. A fundamental step in HTS data analysis is the mapping of reads onto reference sequences. Choosing a suitable mapper for a given technology and a given application is a subtle task because of the difficulty of evaluating mapping algorithms. In this paper, we present a benchmark procedure to compare mapping algorithms used in HTS using both real and simulated datasets and considering four evaluation criteria: computational resource and time requirements, robustness of mapping, ability to report positions for reads in repetitive regions, and ability to retrieve true genetic variation positions. To measure robustness, we introduced a new definition for a correctly mapped read taking into account not only the expected start position of the read but also the end position and the number of indels and substitutions. We developed CuReSim, a new read simulator, that is able to generate customized benchmark data for any kind of HTS technology by adjusting parameters to the error types. CuReSim and CuReSimEval, a tool to evaluate the mapping quality of the CuReSim simulated reads, are freely available. We applied our benchmark procedure to evaluate 14 mappers in the context of whole genome sequencing of small genomes with Ion Torrent data for which such a comparison has not yet been established. A benchmark procedure to compare HTS data mappers is introduced with a new definition for the mapping correctness as well as tools to generate simulated reads and evaluate mapping quality. The application of this procedure to Ion Torrent data from the whole genome sequencing of small genomes has allowed us to validate our benchmark procedure and demonstrate that it is helpful for selecting a mapper based on the intended application, questions to be addressed, and the technology used. 
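The correctness criterion this record introduces, requiring the expected start and end positions as well as the expected indel and substitution counts, can be expressed as a small predicate. The field names and tolerance parameter here are invented for illustration and are not CuReSimEval's actual interface.

```python
# Illustrative predicate for the record's notion of a "correctly mapped"
# read: the aligner must recover the expected start AND end positions and
# the expected numbers of indels and substitutions (field names and the
# position tolerance are invented, not CuReSimEval's API).
def correctly_mapped(aligned, truth, pos_tolerance=0):
    return (abs(aligned["start"] - truth["start"]) <= pos_tolerance
            and abs(aligned["end"] - truth["end"]) <= pos_tolerance
            and aligned["indels"] == truth["indels"]
            and aligned["substitutions"] == truth["substitutions"])

truth   = {"start": 100, "end": 249, "indels": 1, "substitutions": 2}
good    = {"start": 100, "end": 249, "indels": 1, "substitutions": 2}
shifted = {"start": 100, "end": 251, "indels": 0, "substitutions": 2}
print(correctly_mapped(good, truth), correctly_mapped(shifted, truth))  # True False
```

Checking the end position and edit counts matters for homopolymer-error-prone data such as Ion Torrent reads: an alignment can land on the right start coordinate yet absorb sequencing errors as spurious indels, which a start-only criterion would miss.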

  14. AWV: high-throughput cross-array cross-wafer variation mapping

    Science.gov (United States)

    Yeo, Jeong-Ho; Lee, Byoung-Ho; Lee, Tae-Yong; Greenberg, Gadi; Meshulach, Doron; Ravid, Erez; Levi, Shimon; Kan, Kobi; Shabtay, Saar; Cohen, Yehuda; Rotlevi, Ofer

    2008-03-01

Minute variations in advanced VLSI manufacturing processes are well known to significantly impact device performance and die yield. These variations drive the need for increased measurement sampling with minimal impact on fab productivity. Traditional discrete measurements, such as CDSEM or OCD, provide statistical information for process control and monitoring; typically these measurements require a relatively long time and cover only a fraction of the wafer area. Across-array across-wafer variation mapping (AWV) offers a new approach to high-throughput, full-wafer process variation monitoring using a DUV bright-field inspection tool. With this technique we present full-wafer scans that visualize variation trends within a single die and across the wafer. The underlying principle of the AWV inspection method is to measure variations in the light reflected from periodic structures under optimized illumination and collection conditions. Structural changes in the periodic array induce variations in the reflected light; this information is collected and analyzed in real time. In this paper we present the AWV concept along with measurement and simulation results. Experiments were performed using a DUV bright-field inspection tool (UVision™, Applied Materials) on memory short-loop experiment (SLE), focus-exposure matrix (FEM), and normal wafers. AWV and CDSEM results are presented that reflect CD variations within a memory array and across wafers.

  15. High-throughput SNP genotyping in Cucurbita pepo for map construction and quantitative trait loci mapping

    Directory of Open Access Journals (Sweden)

    Esteras Cristina

    2012-02-01

Background: Cucurbita pepo is a member of the Cucurbitaceae family, the second-most important horticultural family in terms of economic importance after Solanaceae. The "summer squash" types, including Zucchini and Scallop, rank among the highest-valued vegetables worldwide. There are few genomic tools available for this species. The first Cucurbita transcriptome, along with a large collection of Single Nucleotide Polymorphisms (SNP), was recently generated using massive sequencing. A set of 384 SNP was selected to generate an Illumina GoldenGate assay in order to construct the first SNP-based genetic map of Cucurbita and map quantitative trait loci (QTL). Results: We herein present the construction of the first SNP-based genetic map of Cucurbita pepo using a population derived from the cross of two varieties with contrasting phenotypes, representing the main cultivar groups of the species' two subspecies: Zucchini (subsp. pepo) × Scallop (subsp. ovifera). The mapping population was genotyped with 384 SNP, a set of selected EST-SNP identified in silico after massive sequencing of the transcriptomes of both parents, using the Illumina GoldenGate platform. The global success rate of the assay was higher than 85%. In total, 304 SNP were mapped, along with 11 SSR from a previous map, giving a map density of 5.56 cM/marker. This map was used to infer syntenic relationships between C. pepo and cucumber and to successfully map QTL that control plant, flowering and fruit traits that are of benefit to squash breeding. The QTL effects were validated in backcross populations. Conclusions: Our results show that massive sequencing in different genotypes is an excellent tool for SNP discovery, and that the Illumina GoldenGate platform can be successfully applied to constructing genetic maps and performing QTL analysis in Cucurbita. This is the first SNP-based genetic map in the Cucurbita genus and is an invaluable new tool for biological research…
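The reported map density is straightforward arithmetic on the stated marker counts; a quick sanity check using only the numbers given in the abstract:

```python
# Quick arithmetic check of the reported map density, using only numbers
# stated in the abstract (304 SNP + 11 SSR markers, 5.56 cM/marker).
snp, ssr = 304, 11
markers = snp + ssr
density_cm_per_marker = 5.56
implied_total_length_cm = markers * density_cm_per_marker
print(markers, round(implied_total_length_cm, 1))  # 315 markers, 1751.4 cM implied
```

The implied total map length of roughly 1,750 cM follows only from the abstract's figures; the paper itself would state the exact per-linkage-group lengths.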

  16. High-throughput SNP genotyping in Cucurbita pepo for map construction and quantitative trait loci mapping.

    Science.gov (United States)

    Esteras, Cristina; Gómez, Pedro; Monforte, Antonio J; Blanca, José; Vicente-Dólera, Nelly; Roig, Cristina; Nuez, Fernando; Picó, Belén

    2012-02-22

Cucurbita pepo is a member of the Cucurbitaceae family, the second-most important horticultural family in terms of economic importance after Solanaceae. The "summer squash" types, including Zucchini and Scallop, rank among the highest-valued vegetables worldwide. There are few genomic tools available for this species. The first Cucurbita transcriptome, along with a large collection of Single Nucleotide Polymorphisms (SNP), was recently generated using massive sequencing. A set of 384 SNP was selected to generate an Illumina GoldenGate assay in order to construct the first SNP-based genetic map of Cucurbita and map quantitative trait loci (QTL). We herein present the construction of the first SNP-based genetic map of Cucurbita pepo using a population derived from the cross of two varieties with contrasting phenotypes, representing the main cultivar groups of the species' two subspecies: Zucchini (subsp. pepo) × Scallop (subsp. ovifera). The mapping population was genotyped with 384 SNP, a set of selected EST-SNP identified in silico after massive sequencing of the transcriptomes of both parents, using the Illumina GoldenGate platform. The global success rate of the assay was higher than 85%. In total, 304 SNP were mapped, along with 11 SSR from a previous map, giving a map density of 5.56 cM/marker. This map was used to infer syntenic relationships between C. pepo and cucumber and to successfully map QTL that control plant, flowering and fruit traits that are of benefit to squash breeding. The QTL effects were validated in backcross populations. Our results show that massive sequencing in different genotypes is an excellent tool for SNP discovery, and that the Illumina GoldenGate platform can be successfully applied to constructing genetic maps and performing QTL analysis in Cucurbita. This is the first SNP-based genetic map in the Cucurbita genus and is an invaluable new tool for biological research, especially considering that most of these markers are located in…

  17. High-throughput microarray mapping of cell wall polymers in roots and tubers during the viscosity-reducing process

    DEFF Research Database (Denmark)

    Huang, Yuhong; Willats, William George Tycho; Lange, Lene

    2016-01-01

…the viscosity-reducing process are poorly characterized. Comprehensive microarray polymer profiling, which is a high-throughput microarray technique, was used for the first time to map changes in the cell wall polymers of sweet potato (Ipomoea batatas), cassava (Manihot esculenta), and Canna edulis Ker. over the entire … of the sweet potato and the cassava was attributed to the degradation of homogalacturonan and the released 1,4-β-d-galactan and 1,5-α-l-arabinan…

  18. Mapping and Classifying Molecules from a High-Throughput Structural Database

    CERN Document Server

    De, Sandip; Ingram, Teresa; Baldauf, Carsten; Ceriotti, Michele

    2016-01-01

High-throughput computational materials design promises to greatly accelerate the process of discovering new materials and compounds and of optimizing their properties. The large databases of structures and properties that result from computational searches, as well as the agglomeration of data of heterogeneous provenance, lead to considerable challenges in navigating the database, representing its structure at a glance, understanding structure-property relations, eliminating duplicates, and identifying inconsistencies. Here we present a case study, based on a data set of conformers of amino acids and dipeptides, of how machine-learning techniques can help address these issues. We exploit a recently developed strategy to define a metric between structures and use it as the basis of both clustering and dimensionality-reduction techniques, showing how these can help reveal structure-property relations, identify outliers and inconsistent structures, and rationalise how perturbations (e.g. b…

  19. High-throughput 454 resequencing for allele discovery and recombination mapping in Plasmodium falciparum

    Directory of Open Access Journals (Sweden)

    Tan John C

    2011-02-01

Background: Knowledge of the origins, distribution, and inheritance of variation in the malaria parasite (Plasmodium falciparum) genome is crucial for understanding its evolution; however, the 81% (A+T) genome poses challenges to high-throughput sequencing technologies. We explore the viability of the Roche 454 Genome Sequencer FLX (GS FLX) high-throughput sequencing technology for both whole genome sequencing and fine-resolution characterization of genetic exchange in malaria parasites. Results: We present a scheme to survey recombination in the haploid-stage genomes of two sibling parasite clones, using whole genome pyrosequencing that includes a sliding-window approach to predict recombination breakpoints. Whole genome shotgun (WGS) sequencing generated approximately 2 million reads, with an average read length of approximately 300 bp. De novo assembly using a combination of WGS and 3 kb paired-end libraries resulted in contigs ≤ 34 kb. More than 8,000 of the 24,599 SNP markers identified between the parents were genotyped in the progeny, resulting in a marker density of approximately 1 marker/3.3 kb and allowing the detection of previously unrecognized crossovers (COs) and many non-crossover (NCO) gene conversions throughout the genome. Conclusions: By sequencing the 23 Mb genomes of two haploid progeny clones derived from a genetic cross at more than 30× coverage, we captured high-resolution information on COs, NCOs, and genetic variation within the progeny genomes. This study is the first to resequence progeny clones to examine the fine structure of COs and NCOs in malaria parasites.
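The sliding-window breakpoint prediction described here can be sketched as a scan over the parental-origin calls of a haploid progeny genome, reporting positions where the flanking windows disagree. The window size, encoding, and unanimity rule are invented for illustration; the record does not state the paper's parameters.

```python
# Sketch of sliding-window recombination-breakpoint detection on a
# sequence of parental-origin calls (0 = parent A, 1 = parent B).
# Window size and the unanimity rule are illustrative, not the paper's.
def find_breakpoints(calls, window=3):
    """Return indices where the window to the left and the window to the
    right are each internally consistent but disagree with each other."""
    breakpoints = []
    for i in range(window, len(calls) - window + 1):
        left = set(calls[i - window:i])
        right = set(calls[i:i + window])
        # call a breakpoint only between two internally uniform windows
        if len(left) == 1 and len(right) == 1 and left != right:
            breakpoints.append(i)
    return breakpoints

# one crossover between the 6th and 7th markers
print(find_breakpoints([0, 0, 0, 0, 0, 0, 1, 1, 1, 1]))  # -> [6]
```

Requiring both flanking windows to be uniform suppresses calls at isolated genotyping errors, the same concern that motivates window-based approaches over per-marker switching.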

  20. High-Throughput Dietary Exposure Predictions for Chemical Migrants from Food Contact Substances for Use in Chemical Prioritization

    Data.gov (United States)

    U.S. Environmental Protection Agency — Under the ExpoCast program, United States Environmental Protection Agency (EPA) researchers have developed a high-throughput (HT) framework for estimating aggregate...

  21. A Torrent of data: mapping chromatin organization using 5C and high-throughput sequencing.

    Science.gov (United States)

    Fraser, James; Ethier, Sylvain D; Miura, Hisashi; Dostie, Josée

    2012-01-01

    The study of three-dimensional genome organization is an exciting research area, which has benefited from the rapid development of high-resolution molecular mapping techniques over the past decade. These methods are derived from the chromosome conformation capture (3C) technique and are each aimed at improving some aspect of 3C. All 3C technologies use formaldehyde fixation and proximity-based ligation to capture chromatin contacts in cell populations and consider in vivo spatial proximity more or less inversely proportional to the frequency of measured interactions. The 3C-carbon copy (5C) method is among the most quantitative of these approaches. 5C is extremely robust and can be used to study chromatin organization at various scales. Here, we present a modified 5C analysis protocol adapted for sequencing with an Ion Torrent Personal Genome Machine™ (PGM™). We explain how Torrent 5C libraries are produced and sequenced. We also describe the statistical and computational methods we developed to normalize and analyze raw Torrent 5C sequence data. The Torrent 5C protocol should facilitate the study of in vivo chromatin architecture at high resolution because it benefits from high accuracy, greater speed, low running costs, and the flexibility of in-house next-generation sequencing. Copyright © 2012 Elsevier Inc. All rights reserved.

  22. High-throughput dietary exposure predictions for chemical migrants from food contact substances for use in chemical prioritization.

    Science.gov (United States)

    Biryol, Derya; Nicolas, Chantel I; Wambaugh, John; Phillips, Katherine; Isaacs, Kristin

    2017-11-01

Under the ExpoCast program, United States Environmental Protection Agency (EPA) researchers have developed a high-throughput (HT) framework for estimating aggregate exposures to chemicals from multiple pathways to support rapid prioritization of chemicals. Here, we present methods to estimate HT exposures to chemicals migrating into food from food contact substances (FCS). These methods combine an empirical model of chemical migration with estimates of daily population food intakes derived from food diaries from the National Health and Nutrition Examination Survey (NHANES). A linear regression model for migration at equilibrium was developed by fitting available migration measurements as a function of temperature, food type (i.e., fatty, aqueous, acidic, alcoholic), initial chemical concentration in the FCS (C0), and chemical properties. The most predictive variables in the resulting model were C0, molecular weight, log Kow, and food type (R2 = 0.71, p …). Migration into 12 food groups (combinations of 3 storage temperatures and food type) was predicted for chemicals identified via publicly available data sources as being present in polymer FCSs. The model was parameterized with screening-level estimates of C0 based on the functional role of the chemicals in the FCS. By combining these concentrations with daily intakes for food groups derived from NHANES, population ingestion exposures in mg/kg-bodyweight/day (mg/kg-BW/day) were estimated. Calibrated aggregate exposures were estimated for 1,931 chemicals by fitting HT FCS and consumer product exposures to exposures inferred from NHANES biomonitoring (R2 = 0.61, p … chemicals), they can provide critical refinement to aggregate exposure predictions used in risk-based chemical priority-setting. Published by Elsevier Ltd.
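The migration model described above is a linear regression on C0, molecular weight, log Kow, and food-type indicators. A minimal sketch follows; every coefficient, the log-scale form, and the example chemical are invented placeholders, since the abstract gives the predictor set but not the fitted values.

```python
import math

# Minimal sketch of the equilibrium-migration regression described in the
# record: predicted migration as a linear function of initial concentration
# (C0), molecular weight, log Kow, and food-type indicators.
# ALL coefficients below are invented placeholders, not the fitted values.
COEF = {"intercept": -1.0, "log_C0": 0.8, "log_MW": -0.5,
        "log_Kow": 0.1, "fatty": 0.6, "aqueous": 0.0,
        "acidic": 0.1, "alcoholic": 0.3}

def log_migration(C0_mg_kg, mol_weight, log_kow, food_type):
    """Return log10 of the predicted migrated concentration in food
    (mg/kg) under the placeholder coefficients above."""
    return (COEF["intercept"]
            + COEF["log_C0"] * math.log10(C0_mg_kg)
            + COEF["log_MW"] * math.log10(mol_weight)
            + COEF["log_Kow"] * log_kow
            + COEF[food_type])

# a hypothetical plasticizer stored in contact with a fatty food
print(round(log_migration(1000.0, 390.0, 7.6, "fatty"), 3))
```

In the framework the abstract describes, such a per-chemical migration estimate would then be multiplied by NHANES-derived daily food intakes and divided by body weight to yield mg/kg-BW/day ingestion exposures.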

  23. High-Throughput Phenotyping and QTL Mapping Reveals the Genetic Architecture of Maize Plant Growth

    Science.gov (United States)

    Huang, Chenglong; Wu, Di; Qiao, Feng; Li, Wenqiang; Duan, Lingfeng; Wang, Ke; Xiao, Yingjie; Chen, Guoxing; Liu, Qian; Yang, Wanneng

    2017-01-01

    With increasing demand for novel traits in crop breeding, the plant research community faces the challenge of quantitatively analyzing the structure and function of large numbers of plants. A clear goal of high-throughput phenotyping is to bridge the gap between genomics and phenomics. In this study, we quantified 106 traits from a maize (Zea mays) recombinant inbred line population (n = 167) across 16 developmental stages using the automatic phenotyping platform. Quantitative trait locus (QTL) mapping with a high-density genetic linkage map, including 2,496 recombinant bins, was used to uncover the genetic basis of these complex agronomic traits, and 988 QTLs have been identified for all investigated traits, including three QTL hotspots. Biomass accumulation and final yield were predicted using a combination of dissected traits in the early growth stage. These results reveal the dynamic genetic architecture of maize plant growth and enhance ideotype-based maize breeding and prediction. PMID:28153923

  4. High-throughput genomic mapping of vector integration sites in gene therapy studies.

    Science.gov (United States)

    Beard, Brian C; Adair, Jennifer E; Trobridge, Grant D; Kiem, Hans-Peter

    2014-01-01

    Gene therapy has enormous potential to treat a variety of infectious and genetic diseases. To date hundreds of patients worldwide have received hematopoietic cell products that have been gene-modified with retrovirus vectors carrying therapeutic transgenes, and many patients have been cured or demonstrated disease stabilization as a result (Adair et al., Sci Transl Med 4:133ra57, 2012; Biffi et al., Science 341:1233158, 2013; Aiuti et al., Science 341:1233151, 2013; Fischer et al., Gene 525:170-173, 2013). Unfortunately, for some patients the provirus integration dysregulated the expression of nearby genes, leading to clonal outgrowth and, in some cases, cancer. Thus, the unwanted side effect of insertional mutagenesis has become a major concern for retrovirus gene therapy. The careful study of retrovirus integration sites (RIS) and the contribution of individual gene-modified clones to hematopoietic repopulating cells is of crucial importance for all gene therapy studies. Supporting this, the US Food and Drug Administration (FDA) has mandated the careful monitoring of RIS in all clinical trials of gene therapy. An invaluable method was developed, linear amplification-mediated polymerase chain reaction (LAM-PCR), capable of analyzing in vitro and complex in vivo samples and of capturing valuable genomic information directly flanking the site of provirus integration. Linking this and similar methods to high-throughput sequencing has now made possible an unprecedented understanding of the integration profile of various retrovirus vectors, and allows for sensitive monitoring of their safety. It also allows for a detailed comparison of improved safety-enhanced gene therapy vectors. An important readout of safety is the relative contribution of individual gene-modified repopulating clones.
One limitation of LAM-PCR is that its ability to capture the relative contribution of individual clones is compromised by the initial linear PCR step common to all current methods.
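The "relative contribution of individual clones" readout mentioned above reduces, in its simplest form, to normalizing per-site read counts. A hedged sketch with invented integration sites and counts:

```python
from collections import Counter

def clonal_contributions(ris_read_counts):
    """Fractional contribution of each integration-site clone."""
    total = sum(ris_read_counts.values())
    return {site: count / total for site, count in ris_read_counts.items()}

# Invented retrovirus integration sites (RIS) and sequencing read counts:
reads = Counter({"chr2:1234567": 600, "chr11:7654321": 300, "chrX:111222": 100})
shares = clonal_contributions(reads)
print(shares)
```

Real analyses must additionally correct for the amplification biases that the record discusses; raw read counts are only a proxy for clone abundance.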

  5. High-throughput physical map anchoring via BAC-pool sequencing

    Czech Academy of Sciences Publication Activity Database

    Cviková, Kateřina; Cattonaro, F.; Alaux, M.; Stein, N.; Mayer, K.F.X.; Doležel, Jaroslav; Bartoš, Jan

    2015-01-01

    Roč. 15, APR 11 (2015) ISSN 1471-2229 R&D Projects: GA ČR GA13-08786S; GA MŠk(CZ) LO1204 Institutional support: RVO:61389030 Keywords : Physical map * Contig anchoring * Next generation sequencing Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 3.631, year: 2015

  6. Next generation MUT-MAP, a high-sensitivity high-throughput microfluidics chip-based mutation analysis panel.

    Directory of Open Access Journals (Sweden)

    Erica B Schleifman

    Full Text Available Molecular profiling of tumor tissue to detect alterations, such as oncogenic mutations, plays a vital role in determining treatment options in oncology. Hence, there is an increasing need for a robust and high-throughput technology to detect oncogenic hotspot mutations. Although commercial assays are available to detect genetic alterations in single genes, only a limited amount of tissue is often available from patients, requiring multiplexing to allow for simultaneous detection of mutations in many genes using low DNA input. Even though next-generation sequencing (NGS) platforms provide powerful tools for this purpose, they face challenges such as high cost, large DNA input requirement, complex data analysis, and long turnaround times, limiting their use in clinical settings. We report the development of the next generation mutation multi-analyte panel (MUT-MAP), a high-throughput microfluidic panel for detecting 120 somatic mutations across eleven genes of therapeutic interest (AKT1, BRAF, EGFR, FGFR3, FLT3, HRAS, KIT, KRAS, MET, NRAS, and PIK3CA) using allele-specific PCR (AS-PCR) and TaqMan technology. This mutation panel requires as little as 2 ng of high quality DNA from fresh frozen or 100 ng of DNA from formalin-fixed paraffin-embedded (FFPE) tissues. An automated workflow, including data analysis and mutation calling, has been implemented to run 88 samples per day. Validation of this platform using plasmids showed robust signal and low cross-reactivity in all of the newly added assays, and mutation calls in cell line samples were found to be consistent with the Catalogue of Somatic Mutations in Cancer (COSMIC) database, allowing for direct comparison of our platform to Sanger sequencing. High correlation with NGS when compared to the SuraSeq500 panel run on the Ion Torrent platform in an FFPE dilution experiment showed assay sensitivity down to 0.45%. This multiplexed mutation panel is a valuable tool for high-throughput biomarker discovery.

  7. Detection and mapping of mtDNA SNPs in Atlantic salmon using high throughput DNA sequencing

    Directory of Open Access Journals (Sweden)

    Olafsdottir Gudbjorg

    2011-04-01

    Full Text Available Abstract Background Approximately half of the mitochondrial genome within 546 individual Atlantic salmon (Salmo salar), derived from across the species' North Atlantic range, was selectively amplified with a novel combination of standard PCR and pyrosequencing in a single run using 454 Titanium FLX technology (Roche, 454 Life Sciences). A unique combination of barcoded primers and a partitioned sequencing plate was employed to assign each sequence read to its original sample. The sequence reads were aligned against the S. salar mitochondrial reference sequence (NC_001960.1) with the objective of identifying single nucleotide polymorphisms (SNPs). SNPs were validated if they met the following three stringent criteria: (i) sequence reads were produced from both DNA strands; (ii) SNPs were confirmed in a minimum of 90% of replicate sequence reads; and (iii) SNPs occurred in more than one individual. Results Pyrosequencing generated a total of 179,826,884 bp of data, and 10,765 of the total 10,920 S. salar sequences (98.6%) were assigned back to their original samples. The approach taken resulted in a total of 216 SNPs and 2 indels, which were validated and mapped onto the S. salar mitochondrial genome, including 107 SNPs and one indel not previously reported. An average of 27.3 sequence reads with a standard deviation of 11.7 supported each SNP per individual. Conclusion The study generated a mitochondrial SNP panel from a large sample group across a broad geographical area, reducing the potential for ascertainment bias, which has hampered previous studies. The SNPs identified here validate those identified in previous studies, and also contribute additional potentially informative loci for the future study of phylogeography and evolution in the Atlantic salmon.
The overall success experienced with this novel application of HT sequencing of targeted regions suggests that the same approach could be successfully applied for SNP mining.
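The three validation criteria listed above translate directly into a filter over candidate SNPs. The candidate records here are invented for illustration:

```python
def validate_snp(candidate, min_replicate_fraction=0.90):
    """Apply the study's three criteria to one candidate SNP."""
    # (i) sequence reads produced from both DNA strands
    both_strands = (candidate["plus_strand_reads"] > 0
                    and candidate["minus_strand_reads"] > 0)
    # (ii) variant confirmed in at least 90% of replicate reads
    replicated = (candidate["variant_reads"] / candidate["total_reads"]
                  >= min_replicate_fraction)
    # (iii) observed in more than one individual
    shared = candidate["n_individuals"] > 1
    return both_strands and replicated and shared

# Invented candidates: `good` passes all three criteria, `bad` fails (i).
good = {"plus_strand_reads": 14, "minus_strand_reads": 13,
        "variant_reads": 26, "total_reads": 27, "n_individuals": 5}
bad = {"plus_strand_reads": 20, "minus_strand_reads": 0,
       "variant_reads": 20, "total_reads": 20, "n_individuals": 3}
print(validate_snp(good), validate_snp(bad))
```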

  8. High-throughput microarray mapping of cell wall polymers in roots and tubers during the viscosity-reducing process.

    Science.gov (United States)

    Huang, Yuhong; Willats, William G; Lange, Lene; Jin, Yanling; Fang, Yang; Salmeán, Armando A; Pedersen, Henriette L; Busk, Peter Kamp; Zhao, Hai

    2016-01-01

    Viscosity reduction has a great impact on the efficiency of ethanol production when using roots and tubers as feedstock. Plant cell wall-degrading enzymes have been successfully applied to overcome the challenges posed by high viscosity. However, the changes in cell wall polymers during the viscosity-reducing process are poorly characterized. Comprehensive microarray polymer profiling, a high-throughput microarray method, was used for the first time to map changes in the cell wall polymers of sweet potato (Ipomoea batatas), cassava (Manihot esculenta), and Canna edulis Ker. over the entire viscosity-reducing process. The results indicated that the composition of cell wall polymers among these three roots and tubers was markedly different. The gel-like matrix and glycoprotein network in the C. edulis Ker. cell wall caused difficulty in viscosity reduction. The obvious viscosity reduction of the sweet potato and the cassava was attributed to the degradation of homogalacturonan and the released 1,4-β-d-galactan and 1,5-α-l-arabinan. © 2015 International Union of Biochemistry and Molecular Biology, Inc.

  9. QTL Mapping for Rice RVA Properties Using High-Throughput Re-sequenced Chromosome Segment Substitution Lines

    Directory of Open Access Journals (Sweden)

    Chang-quan ZHANG

    2013-11-01

    Full Text Available The rapid visco analyser (RVA) profile is an important factor in evaluating the cooking and eating quality of rice. To improve rice quality, the identification of new quantitative trait loci (QTLs) for the RVA profile is of great significance. We used the japonica rice cultivar Nipponbare as the recipient and the indica rice 9311 as the donor to develop a population containing 38 chromosome segment substitution lines (CSSLs) genotyped by a high-throughput re-sequencing strategy. In this study, the population and the parent lines, which had similar apparent amylose contents, were used to map QTLs for RVA properties including peak paste viscosity (PKV), hot paste viscosity (HPV), cool paste viscosity (CPV), breakdown viscosity (BKV), setback viscosity (SBV), consistency viscosity (CSV), peak time (PeT) and pasting temperature (PaT). QTL analysis was carried out using one-way analysis of variance and Dunnett's test, and stable QTLs were identified over two years and under two environments. We identified 10 stable QTLs: qPKV2-1, qSBV2-1; qPKV5-1, qHPV5-1, qCPV5-1; qPKV7-1, qHPV7-1, qCPV7-1, qSBV7-1; and qPKV8-1 on chromosomes 2, 5, 7 and 8, respectively, with contributions ranging from −95.6% to 47.1%. In addition, the QTLs on chromosomes 2, 5 and 7 showed pleiotropy.
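The one-way ANOVA behind this kind of QTL scan can be written out in a few lines of plain Python. The phenotype values below are invented, not the study's RVA measurements; in the study each group would be the lines carrying a given substituted segment:

```python
def anova_f(groups):
    """F-statistic for a one-way ANOVA across genotype groups."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group and within-group sums of squares:
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two invented CSSL genotype classes with clearly different mean viscosity:
f = anova_f([[2400, 2450, 2380], [2100, 2150, 2090]])
print(f > 10)
```

A large F across genotype classes at a marker flags a putative QTL; the study then confirms stability over years and environments.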

  10. High Throughput Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Argonne's high throughput facility provides highly automated and parallel approaches to materials and materials chemistry development. The facility allows scientists...

  11. Novel mobilizable prokaryotic two-hybrid system vectors for high-throughput protein interaction mapping in Escherichia coli by bacterial conjugation.

    Science.gov (United States)

    Clarke, Paul; Cuív, Páraic O; O'Connell, Michael

    2005-02-01

    Since its initial description, the yeast two-hybrid (Y2H) system has been widely used for the detection and analysis of protein-protein interactions. Mating-based strategies have been developed, permitting its application to automated proteomic interaction mapping projects using both exhaustive and high-throughput strategies. More recently, a number of prokaryotic two-hybrid (P2H) systems have been developed but, despite the many advantages such Escherichia coli-based systems have over the Y2H system, they have not yet been widely implemented for proteomic interaction mapping. This may be largely because high-throughput strategies employing bacterial transformation are not as amenable to automation as Y2H mating-based strategies. Here, we describe the construction of novel conjugative P2H system vectors. These vectors carry a mobilization element of the IncPα group plasmid RP4 and can therefore be mobilized with high efficiency from an E. coli donor strain encoding all of the required transport functions in trans. We demonstrate how these vectors permit the exploitation of bacterial conjugation for technically simplified and automated proteomic interaction mapping strategies in E. coli, analogous to the mating-based strategies developed for the Y2H system.

  12. High-throughput materials discovery and development: breakthroughs and challenges in the mapping of the materials genome

    Science.gov (United States)

    Buongiorno Nardelli, Marco

    High-Throughput Quantum-Mechanics computation of materials properties by ab initio methods has become the foundation of an effective approach to materials design, discovery and characterization. This data-driven approach to materials science currently presents the most promising path to the development of advanced technological materials that could solve or mitigate important social and economic challenges of the 21st century. In particular, the rapid proliferation of computational data on materials properties presents the possibility to complement and extend materials property databases where the experimental data is lacking and difficult to obtain. Enhanced repositories such as AFLOWLIB open novel opportunities for structure discovery and optimization, including the uncovering of unsuspected compounds, metastable structures and correlations between various properties. The practical realization of these opportunities depends almost exclusively on the design of efficient algorithms for electronic structure simulations of realistic material systems beyond the limitations of the current standard theories. In this talk, I will review recent progress in theoretical and computational tools and, in particular, discuss the development and validation of novel functionals within Density Functional Theory and of local basis representations for effective ab initio tight-binding schemes. Marco Buongiorno Nardelli is a pioneer in the development of computational platforms for theory/data/applications integration, rooted in his profound and extensive expertise in the design of electronic structure codes and in his vision for sustainable and innovative software development for high-performance materials simulations. His research activities range from the design and discovery of novel materials for 21st-century applications in renewable energy, environment, nano-electronics and devices to the development of advanced electronic structure theories and high-throughput techniques.

  13. A high-throughput strategy for screening of bacterial artificial chromosome libraries and anchoring of clones on a genetic map constructed with single nucleotide polymorphisms

    Directory of Open Access Journals (Sweden)

    Deal Karin R

    2009-01-01

    Full Text Available Abstract Background Current techniques of screening bacterial artificial chromosome (BAC) libraries for molecular markers during the construction of physical maps are slow, laborious and often assign multiple BAC contigs to a single locus on a genetic map. These limitations are the principal impediment in the construction of physical maps of large eukaryotic genomes. It is hypothesized that this impediment can be overcome by screening multidimensional pools of BAC clones using the highly parallel Illumina GoldenGate™ assay. Results To test the efficacy of the GoldenGate assay in BAC library screening, multidimensional pools involving 302976 Aegilops tauschii BAC clones were genotyped for the presence/absence of specific gene sequences with multiplexed Illumina GoldenGate oligonucleotide assays previously used to place single nucleotide polymorphisms on an Ae. tauschii genetic map. Of 1384 allele-informative oligonucleotide assays, 87.6% successfully clustered BAC pools into those positive for a BAC clone harboring a specific gene locus and those negative for it. The location of the positive BAC clones within contigs assembled from 199190 fingerprinted Ae. tauschii BAC clones was used to evaluate the precision of anchoring of BAC clones and contigs on the Ae. tauschii genetic map. For 41 (95%) assays, positive BAC clones were neighbors in single contigs. Those contigs could be unequivocally assigned to loci on the genetic map. For two (5%) assays, positive clones were in two different contigs and the relationships of these contigs to loci on the Ae. tauschii genetic map were equivocal. Screening of BAC libraries with a simple five-dimensional BAC pooling strategy was evaluated and shown to allow direct detection of positive BAC clones without the need for manual deconvolution of BAC clone pools.
Conclusion The highly parallel Illumina oligonucleotide assay is shown here to be an efficient tool for screening BAC libraries and for high-throughput anchoring of BAC clones and contigs on a genetic map.
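The direct deconvolution that multidimensional pooling enables can be illustrated with a toy mixed-radix layout: each clone gets one pool coordinate per dimension, so a single positive pool in every dimension pinpoints one clone. The dimensions and clone index here are hypothetical, not the actual five-dimensional Ae. tauschii design:

```python
DIMS = (4, 4, 4, 4, 4)  # hypothetical pool counts for a 5-dimensional design

def clone_coordinates(clone_index, dims=DIMS):
    """Mixed-radix pool coordinates of a clone, one per dimension."""
    coords = []
    for d in dims:
        coords.append(clone_index % d)
        clone_index //= d
    return tuple(coords)

def deconvolve(positive_pools, dims=DIMS):
    """Recover a clone index directly from one positive pool per dimension."""
    index, multiplier = 0, 1
    for coord, d in zip(positive_pools, dims):
        index += coord * multiplier
        multiplier *= d
    return index

# A positive assay lights up exactly one pool in each dimension:
pools = clone_coordinates(389)
print(pools, deconvolve(pools))
```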

  14. High-throughput dual-color precision imaging for brain-wide mapping of the connectome with cytoarchitectonic landmarks at the cellular level (Conference Presentation)

    Science.gov (United States)

    Luo, Qingming; Gong, Hui; Yuan, Jing; Li, Xiangning; Li, Anan; Xu, Tonghui

    2017-02-01

    Deciphering the fine morphology and precise location of neurons and neural circuits is crucial to enhance our understanding of brain function and diseases. Traditionally, we have to map brain images to coarse axial-sampling planar reference atlases to orient neural structures. However, such mapping might fail to orient neural projections at single-cell resolution due to position errors resulting from individual differences at the cellular level. Here, we present a high-throughput imaging method that can automatically obtain the fine morphologies and precise locations of both neurons and circuits, employing wide-field large-volume tomography to acquire three-dimensional images of thick tissue and implementing real-time soma counterstaining to obtain cytoarchitectonic landmarks during the imaging process. The reconstruction and orientation of brain-wide neural circuits at single-neuron resolution can be accomplished for the same mouse brain without additional counterstains or image registration. Using our method, mouse brain imaging datasets of multiple type-specific neurons and circuits were successfully acquired, demonstrating its versatility. The results show that the simultaneous acquisition of labeled neural structures and cytoarchitecture reference at single-neuron resolution in the same brain greatly facilitates precise tracing of long-range projections and accurate locating of nuclei. Our method provides a novel and effective tool for application in studies on genetic dissection, brain function and the pathology of the nervous system.

  15. Optimal contact definition for reconstruction of Contact Maps

    Directory of Open Access Journals (Sweden)

    Stehr Henning

    2010-05-01

    Full Text Available Abstract Background Contact maps have been extensively used as a simplified representation of protein structures. They capture most of the important features of a protein's fold and are preferred by a number of researchers for the description and study of protein structures. Inspired by the model's simplicity, many groups have dedicated a considerable amount of effort towards contact prediction as a proxy for protein structure prediction. However, a contact map's biological interest is subject to the availability of reliable methods for 3-dimensional reconstruction of the structure. Results We use an implementation of the well-known distance geometry protocol to build realistic protein 3-dimensional models from contact maps, performing an extensive exploration of many of the parameters involved in the reconstruction process. We address three questions: (a) to what accuracy does a contact map represent its corresponding 3D structure; (b) what is the best contact map representation with regard to reconstructability; and (c) what is the effect of partial or inaccurate contact information on 3D structure recovery. Our results suggest that contact maps derived from the application of a distance cutoff of 9 to 11 Å around the Cβ atoms constitute the most accurate representation of the 3D structure. The reconstruction process does not provide a single solution to the problem but rather an ensemble of conformations that are within 2 Å RMSD of the crystal structure and with lower values for the pairwise average ensemble RMSD. Interestingly, it is still possible to recover a structure with partial contact information, although wrong contacts can lead to a dramatic loss in reconstruction fidelity. Conclusions Contact maps thus represent a valid approximation to the structures with an accuracy comparable to that of experimental methods. The optimal contact definitions constitute key guidelines for methods based on contact maps, such as structure prediction.
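The contact definition reported as optimal above (a distance cutoff around the Cβ atoms) is simple to implement. The coordinates below are a made-up toy chain, not a real structure:

```python
import math

def contact_map(cbeta_coords, cutoff=10.0, min_seq_sep=1):
    """Residue pairs (i < j) whose Cbeta atoms lie within `cutoff` angstroms."""
    contacts = set()
    n = len(cbeta_coords)
    for i in range(n):
        for j in range(i + min_seq_sep, n):
            if math.dist(cbeta_coords[i], cbeta_coords[j]) <= cutoff:
                contacts.add((i, j))
    return contacts

# Made-up toy chain, roughly 3.8 angstroms between consecutive residues:
toy_chain = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0), (11.4, 0.0, 0.0)]
print(sorted(contact_map(toy_chain)))
```

In practice a minimum sequence separation larger than 1 is often used so that trivially adjacent residues are not counted as contacts.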

  16. High Throughput Plasma Water Treatment

    Science.gov (United States)

    Mujovic, Selman; Foster, John

    2016-10-01

    The troublesome emergence of new classes of micro-pollutants, such as pharmaceuticals and endocrine disruptors, poses challenges for conventional water treatment systems. In an effort to address these contaminants and to support water reuse in drought-stricken regions, new technologies must be introduced. The interaction of water with plasma rapidly mineralizes organics by inducing advanced oxidation in addition to other chemical, physical and radiative processes. The primary barrier to the implementation of plasma-based water treatment is process volume scale-up. In this work, we investigate a potentially scalable, high-throughput plasma water reactor that utilizes a packed-bed dielectric barrier-like geometry to maximize the plasma-water interface. Here, the water serves as the dielectric medium. High-speed imaging and emission spectroscopy are used to characterize the reactor discharges. Changes in methylene blue concentration and basic water parameters are mapped as a function of plasma treatment time. Experimental results are compared to electrostatic and plasma chemistry computations, which will provide insight into the reactor's operation so that efficiency can be assessed. Supported by NSF (CBET 1336375).
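Mapping contaminant concentration against plasma treatment time, as described above, is commonly summarized with a pseudo-first-order decay fit. The rate constant and sample values here are invented for illustration:

```python
import math

def fit_first_order_rate(times_s, concentrations):
    """Least-squares slope of ln(C0/C) vs t gives the rate constant k (1/s)."""
    c0 = concentrations[0]
    ys = [math.log(c0 / c) for c in concentrations]
    n = len(times_s)
    mt, my = sum(times_s) / n, sum(ys) / n
    num = sum((t - mt) * (y - my) for t, y in zip(times_s, ys))
    den = sum((t - mt) ** 2 for t in times_s)
    return num / den

# Invented methylene blue readings following roughly exponential decay:
times = [0, 60, 120, 180]          # seconds of plasma treatment
conc = [10.0, 6.07, 3.68, 2.23]    # mg/L
k = fit_first_order_rate(times, conc)
print(round(k, 4))
```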

  17. CMView: interactive contact map visualization and analysis.

    Science.gov (United States)

    Vehlow, Corinna; Stehr, Henning; Winkelmann, Matthias; Duarte, José M; Petzold, Lars; Dinse, Juliane; Lappe, Michael

    2011-06-01

    Contact maps are a valuable visualization tool in structural biology. They are a convenient way to display proteins in two dimensions and to quickly identify structural features such as domain architecture, secondary structure and contact clusters. We developed a tool called CMView which integrates rich contact map analysis with 3D visualization using PyMol. Our tool provides functions for contact map calculation from structure, basic editing, visualization in contact map and 3D space and structural comparison with different built-in alignment methods. A unique feature is the interactive refinement of structural alignments based on user selected substructures. CMView is freely available for Linux, Windows and MacOS. The software and a comprehensive manual can be downloaded from http://www.bioinformatics.org/cmview/. The source code is licensed under the GNU General Public License.

  18. High Throughput Transcriptomics @ USEPA (Toxicology ...

    Science.gov (United States)

    The ideal chemical testing approach will provide complete coverage of all relevant toxicological responses. It should be sensitive and specific. It should identify the mechanism/mode-of-action (with dose dependence). It should identify responses relevant to the species of interest. Responses should ideally be translated into tissue-, organ-, and organism-level effects. It must be economical and scalable. Using a High Throughput Transcriptomics platform within the US EPA provides broader coverage of biological activity space and toxicological MOAs and helps fill the toxicological data gap. Slide presentation at the 2016 ToxForum on using High Throughput Transcriptomics at US EPA for broader coverage of biological activity space and toxicological MOAs.

  19. Genome-wide association mapping in winter barley for grain yield and culm cell wall polymer content using the high-throughput CoMPP technique.

    Directory of Open Access Journals (Sweden)

    Andrea Bellucci

    Full Text Available A collection of 112 winter barley varieties (Hordeum vulgare L.) was grown in the field for two years (2008/09 and 2009/10) in northern Italy, and grain and straw yields were recorded. In the first year of the trial, a severe attack of barley yellow mosaic virus (BaYMV) strongly influenced final performance, with an average reduction of ~50% in grain and straw harvested in comparison to the second year. The genetic determination (GD) for grain yield was 0.49 and 0.70 for the two years, respectively; for straw yield, GD was low in 2009 (0.09) and higher in 2010 (0.29). Cell wall polymers in culms were quantified by means of the monoclonal antibodies LM6, LM11, JIM13 and BS-400-3 and the carbohydrate-binding module CBM3a using the high-throughput CoMPP technique. Of these, LM6, which detects arabinan components, showed a relatively high GD in both years and a significantly negative correlation with grain yield (GYLD). Overall, heritability (H2) was calculated for GYLD, LM6 and JIM13 and was 0.42, 0.32 and 0.20, respectively. A total of 4,976 SNPs from the 9K iSelect array were used in the study for the analysis of population structure, linkage disequilibrium (LD) and a genome-wide association study (GWAS). Marker-trait associations (MTAs) were analyzed for grain yield and the cell wall determinations by LM6 and JIM13, as these were the traits showing significant correlations between the years. A single QTL for GYLD containing three MTAs was found on chromosome 3H, located close to the Hv-eIF4E gene, which is known to regulate resistance to BaYMV. Subsequently, the QTL was shown to be tightly linked to rym4, a locus for resistance to the virus. GWAS on arabinans quantified by LM6 resulted in the identification of major QTLs closely located on 3H, and hypotheses regarding putative candidate genes were formulated through the study of gene expression levels based on bioinformatics tools.
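A bare-bones marker-trait association scan of the kind that produces such MTAs: per-SNP correlation between genotype dosage (0/1/2) and phenotype. The genotypes, yields, and SNP names below (one echoing the rym4-linked 3H locus) are toy data, not the study's panel:

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def scan(genotypes_by_snp, phenotype):
    """Rank SNPs by the absolute correlation of dosage with the trait."""
    scored = {snp: pearson_r(g, phenotype) for snp, g in genotypes_by_snp.items()}
    return sorted(scored.items(), key=lambda kv: -abs(kv[1]))

geno = {"snp_3H_rym4": [0, 0, 1, 1, 2, 2],    # hypothetical dosages
        "snp_5H_null": [1, 0, 2, 0, 1, 2]}
yield_t_ha = [3.1, 3.0, 4.2, 4.4, 5.3, 5.1]   # hypothetical grain yields
top_snp, r = scan(geno, yield_t_ha)[0]
print(top_snp)
```

A real GWAS would additionally model population structure and apply a multiple-testing threshold before declaring an MTA.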

  20. High throughput protein production screening

    Science.gov (United States)

    Beernink, Peter T [Walnut Creek, CA; Coleman, Matthew A [Oakland, CA; Segelke, Brent W [San Ramon, CA

    2009-09-08

    Methods, compositions, and kits for the cell-free production and analysis of proteins are provided. The invention allows for the production of proteins from prokaryotic or eukaryotic sequences, including human cDNAs, using PCR and IVT methods, and for detecting the proteins through fluorescence or immunoblot techniques. This invention can be used to identify optimized PCR and IVT conditions, codon usages and mutations. The methods are readily automated and can be used for high-throughput analysis of protein expression levels, interactions, and functional states.

  1. Parallelism maps for optically contacted etalons

    Science.gov (United States)

    Killeen, T. L.; Hays, P. B.; Devos, J.

    1981-01-01

    A measurement technique is described that is capable of producing accurate numerical maps of the parallelism defect of optically contacted Fabry-Perot etalons. A photoelectric raster scan of the etalon transmission intensity distribution is performed at each of three closely spaced etalon tunings: one at the on-axis maximum and two at the on-axis half-maximum positions. Simple data manipulation then gives numerical or contour maps of the parallelism defect. Results demonstrating the effect of mechanical constraints on etalon parallelism are presented.

  2. CloudDOE: a user-friendly tool for deploying Hadoop clouds and analyzing high-throughput sequencing data with MapReduce.

    Directory of Open Access Journals (Sweden)

    Wei-Chun Chung

    Full Text Available BACKGROUND: Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. RESULTS: We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. CONCLUSIONS: CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management.
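The MapReduce pattern these wizards deploy can be sketched with a plain-Python map/reduce pair counting k-mers in sequencing reads. This is an illustrative stand-in in the Hadoop streaming style, not the actual implementation of CloudBurst or CloudRS:

```python
from itertools import groupby

def mapper(read, k=4):
    """Emit (kmer, 1) pairs for every k-mer in one read."""
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def reducer(sorted_pairs):
    """Sum counts per k-mer, as the shuffle/reduce phase would."""
    return {key: sum(c for _, c in grp)
            for key, grp in groupby(sorted_pairs, key=lambda kv: kv[0])}

# Two invented reads; sorting the pairs stands in for Hadoop's shuffle:
reads = ["ACGTACGT", "ACGTA"]
pairs = sorted(p for r in reads for p in mapper(r))
counts = reducer(pairs)
print(counts["ACGT"])
```

In a real Hadoop job the mapper and reducer run as separate processes over distributed splits, and the framework performs the sort/shuffle between them.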

  4. CloudDOE: A User-Friendly Tool for Deploying Hadoop Clouds and Analyzing High-Throughput Sequencing Data with MapReduce

    Science.gov (United States)

    Chung, Wei-Chun; Chen, Chien-Chih; Ho, Jan-Ming; Lin, Chung-Yen; Hsu, Wen-Lian; Wang, Yu-Chun; Lee, D. T.; Lai, Feipei; Huang, Chih-Wei; Chang, Yu-Jung

    2014-01-01

    Background Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. Results We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. Conclusions CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. Interested users
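The MapReduce pattern the CloudDOE record describes can be sketched in plain Python: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. A toy k-mer counting job stands in for the read-oriented workloads that tools like CloudBurst distribute; all names here are illustrative, not part of CloudDOE or Hadoop.

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the mapper to each input record, yielding (key, value) pairs."""
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    """Group intermediate values by key, as the framework's shuffle step would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the reducer to each key group."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Toy job: count k-mer occurrences across sequencing reads.
def kmer_mapper(read, k=3):
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def sum_reducer(key, values):
    return sum(values)

reads = ["ACGTAC", "CGTACG"]
counts = reduce_phase(shuffle(map_phase(reads, kmer_mapper)), sum_reducer)
# counts == {"ACG": 2, "CGT": 2, "GTA": 2, "TAC": 2}
```

In a real Hadoop deployment the shuffle is performed by the framework across the network; the single-process simulation above only illustrates the dataflow.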

  5. Towards High-throughput Immunomics for Infectious Diseases: Use of Next-generation Peptide Microarrays for Rapid Discovery and Mapping of Antigenic Determinants

    DEFF Research Database (Denmark)

    J. Carmona, Santiago; Nielsen, Morten; Schafer-Nielsen, Claus

    2015-01-01

    , we developed a highly-multiplexed platform based on next-generation high-density peptide microarrays to map these specificities in Chagas Disease, an exemplar of a human infectious disease caused by the protozoan Trypanosoma cruzi. We designed a high-density peptide microarray containing more than...... 175,000 overlapping 15mer peptides derived from T. cruzi proteins. Peptides were synthesized in situ on microarray slides, spanning the complete length of 457 parasite proteins with fully overlapped 15mers (1 residue shift). Screening of these slides with antibodies purified from infected patients...

  6. Regular Simple Queues of Protein Contact Maps.

    Science.gov (United States)

    Guo, Qiang-Hui; Sun, Lisa Hui; Wang, Jian

    2017-01-01

A protein fold can be viewed as a self-avoiding walk in a certain lattice model, and its contact map is a graph that represents the patterns of contacts in the fold. Goldman, Istrail, and Papadimitriou showed that a contact map in the 2D square lattice can be decomposed into at most two stacks and one queue. In the terminology of combinatorics, stacks and queues are noncrossing and nonnesting partitions, respectively. In this paper, we are concerned with 2-regular and 3-regular simple queues, for which the degree of each vertex is at most one and the arc lengths are at least 2 and 3, respectively. We show that 2-regular simple queues are in one-to-one correspondence with hill-free Motzkin paths, which have been enumerated by Barcucci, Pergola, Pinzani, and Rinaldi using the Enumerating Combinatorial Objects method. We derive a recurrence relation for the generating function of Motzkin paths with [Formula: see text] peaks at level i, which reduces to the generating function for hill-free Motzkin paths. Moreover, we show that 3-regular simple queues are in one-to-one correspondence with Motzkin paths avoiding certain patterns. We then obtain a formula for the generating function of 3-regular simple queues. Asymptotic formulas for 2-regular and 3-regular simple queues are derived from the generating functions.
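The objects in this record are small enough to enumerate directly. The brute-force sketch below counts Motzkin paths (steps U, F, D that stay at height >= 0 and return to 0) and the subset avoiding hills, where a hill is taken to be an up-step starting at height 0 followed immediately by a down-step (an assumed definition, carried over from Dyck-path terminology); the first sequence reproduces the Motzkin numbers.

```python
from itertools import product

STEP = {"U": 1, "F": 0, "D": -1}

def motzkin_paths(n):
    """Enumerate lattice paths of n steps from {U, F, D} that never go
    below height 0 and end at height 0 (Motzkin paths)."""
    for steps in product("UFD", repeat=n):
        h, ok = 0, True
        for s in steps:
            h += STEP[s]
            if h < 0:
                ok = False
                break
        if ok and h == 0:
            yield steps

def is_hill_free(path):
    """A hill is assumed to be a U starting at height 0 followed at once by a D."""
    h = 0
    for a, b in zip(path, path[1:]):
        if h == 0 and a == "U" and b == "D":
            return False
        h += STEP[a]
    return True

motzkin = [sum(1 for _ in motzkin_paths(n)) for n in range(6)]
# motzkin == [1, 1, 2, 4, 9, 21]  (the Motzkin numbers)
hill_free = [sum(1 for p in motzkin_paths(n) if is_hill_free(p)) for n in range(5)]
# hill_free == [1, 1, 1, 2, 5]
```

Brute force is exponential in n; the paper's generating-function approach is what makes the enumeration tractable in general.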

  7. Applications of High Throughput Nucleotide Sequencing

    DEFF Research Database (Denmark)

    Waage, Johannes Eichler

    The recent advent of high throughput sequencing of nucleic acids (RNA and DNA) has vastly expanded research into the functional and structural biology of the genome of all living organisms (and even a few dead ones). With this enormous and exponential growth in biological data generation come...... equally large demands in data handling, analysis and interpretation, perhaps defining the modern challenge of the computational biologist of the post-genomic era. The first part of this thesis consists of a general introduction to the history, common terms and challenges of next generation sequencing......, focusing on oft encountered problems in data processing, such as quality assurance, mapping, normalization, visualization, and interpretation. Presented in the second part are scientific endeavors representing solutions to problems of two sub-genres of next generation sequencing. For the first flavor, RNA-sequencing...

  8. P56-M High-Throughput Genotyping of International HapMap Project Populations with Applied Biosystems TaqMan Drug Metabolism Genotyping Assays: An Automated Laboratory and Analysis Pipeline

    Science.gov (United States)

    Haque, K. A.; Wronka, L. M.; Dagnall, C. L.; Stefan, C. M.; Beerman, M. B.; Hicks, B. D.; Welch, R. A.

    2007-01-01

Although high-density whole-genome SNP scans are available for association studies, the tagging SNP approach used to design many of these panels from International HapMap Project data may miss a substantial number of coding functional variations of drug metabolism enzymes (DME). In fact, more than 40 DME genes are not covered by the HapMap Project, probably due to the difficulties in assay design for these highly homologous gene families. Additionally, many of these technologies do not provide detection in a high number of known DME genes, leading to further gaps in whole-genome scans. Of the polymorphic putative functional DME variants not typed in HapMap, a large proportion is untagged by any combination of HapMap SNPs. Therefore, to correlate phenotypes to putative functional DME variations in pharmacogenomic studies, direct genotyping of these functional SNPs will be necessary. Applied Biosystems has developed a panel of N = 2394 TaqMan Drug Metabolism Genotyping Assays to interrogate putative functional variations in N = 220 DME genes. At the National Cancer Institute’s Core Genotyping Facility, an automated, high-throughput pipeline has been created to genotype these assays on the International HapMap Project population. DNA sample preparation and handling, assay set-up, genotype analysis, and data publishing at the SNP500 Cancer Database (http://snp500cancer.nci.nih.gov) have all been automated. Using a series of custom-designed methods on five Beckman Coulter Biomek FXs, a Laboratory Information Management System, and analysis software, >650,000 genotypes have been obtained and analyzed by a single person in about 8 weeks. Using this pipeline, a completion rate of >99% and no Mendelian inheritance errors were observed. Furthermore, the CGF has implemented quality-controlled, automated pipelines for sample receiving, quantification, numerous DNA handling procedures, genotyping, and analysis for all samples and studies processed.

  9. Advances in protein contact map prediction based on machine learning.

    Science.gov (United States)

    Xie, Jiang; Ding, Wang; Chen, Luonan; Guo, Qiang; Zhang, Wu

    2015-01-01

A protein contact map is a simplified, two-dimensional representation of the three-dimensional protein structure, and the contact map has been shown to be crucial in forming that structure. Contact map prediction has become an indispensable and promising intermediate step towards final three-dimensional structure prediction, as direct sequence-to-structure prediction has hit its bottlenecks. In this article, different evaluation scores of prediction efficiency are compared. Next, the state of the art and future perspectives of contact map prediction methods are reviewed, with special attention paid to those relying on machine learning algorithms. Details of neural-network-based methods, as well as a list of machine-learning-based methods, are given. Finally, bottlenecks and potential improvements of contact map prediction are discussed.
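The contact-map representation itself is straightforward to compute once 3D coordinates are known: residues i and j are "in contact" when their distance falls below a cutoff. A minimal sketch, assuming an 8 Å C-alpha distance cutoff (a common but not universal choice):

```python
from math import dist

def contact_map(coords, threshold=8.0):
    """Binary contact map: residues i and j (i != j) are in contact when
    the Euclidean distance between their C-alpha coordinates is below the
    threshold, given here in Ångströms."""
    n = len(coords)
    return [[i != j and dist(coords[i], coords[j]) < threshold
             for j in range(n)] for i in range(n)]

# Three residues on a line, 5 Å apart: adjacent pairs are in contact,
# the endpoints (10 Å apart) are not, and the map is symmetric.
cmap = contact_map([(0, 0, 0), (5, 0, 0), (10, 0, 0)])
```

Predictors like those surveyed in the record work in the opposite direction, estimating this matrix from sequence features before any 3D coordinates exist.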

  10. High-Throughput Toxicity Testing: New Strategies for ...

    Science.gov (United States)

    In recent years, the food industry has made progress in improving safety testing methods focused on microbial contaminants in order to promote food safety. However, food industry toxicologists must also assess the safety of food-relevant chemicals including pesticides, direct additives, and food contact substances. With the rapidly growing use of new food additives, as well as innovation in food contact substance development, an interest in exploring the use of high-throughput chemical safety testing approaches has emerged. Currently, the field of toxicology is undergoing a paradigm shift in how chemical hazards can be evaluated. Since there are tens of thousands of chemicals in use, many of which have little to no hazard information and there are limited resources (namely time and money) for testing these chemicals, it is necessary to prioritize which chemicals require further safety testing to better protect human health. Advances in biochemistry and computational toxicology have paved the way for animal-free (in vitro) high-throughput screening which can characterize chemical interactions with highly specific biological processes. Screening approaches are not novel; in fact, quantitative high-throughput screening (qHTS) methods that incorporate dose-response evaluation have been widely used in the pharmaceutical industry. For toxicological evaluation and prioritization, it is the throughput as well as the cost- and time-efficient nature of qHTS that makes it

  11. Graph-based signal integration for high-throughput phenotyping.

    Science.gov (United States)

    Herskovic, Jorge R; Subramanian, Devika; Cohen, Trevor; Bozzo-Silva, Pamela A; Bearden, Charles F; Bernstam, Elmer V

    2012-01-01

Electronic Health Records aggregated in Clinical Data Warehouses (CDWs) promise to revolutionize Comparative Effectiveness Research and suggest new avenues of research. However, the effectiveness of CDWs is diminished by the lack of properly labeled data. We present a novel approach that integrates knowledge from the CDW, the biomedical literature, and the Unified Medical Language System (UMLS) to perform high-throughput phenotyping. In this paper, we automatically construct a graphical knowledge model and then use it to phenotype breast cancer patients. We compare the performance of this approach to using MetaMap when labeling records. MetaMap's overall accuracy at identifying breast cancer patients was 51.1% (n=428); recall=85.4%, precision=26.2%, and F1=40.1%. Our unsupervised graph-based high-throughput phenotyping had an accuracy of 84.1%; recall=46.3%, precision=61.2%, and F1=52.8%. We conclude that our approach is a promising alternative for unsupervised high-throughput phenotyping.
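The F1 values reported above are the harmonic mean of precision and recall, which can be checked directly from the published percentages (the graph-based figure differs in the third decimal only because the inputs are rounded):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# MetaMap: precision 26.2%, recall 85.4%
f1_metamap = f1_score(0.262, 0.854)   # ~0.401, matching the reported 40.1%
# Graph-based method: precision 61.2%, recall 46.3%
f1_graph = f1_score(0.612, 0.463)     # ~0.527, vs the reported 52.8%
```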

  12. High-throughput scoring of seed germination

    NARCIS (Netherlands)

    Ligterink, Wilco; Hilhorst, Henk W.M.

    2017-01-01

    High-throughput analysis of seed germination for phenotyping large genetic populations or mutant collections is very labor intensive and would highly benefit from an automated setup. Although very often used, the total germination percentage after a nominated period of time is not very

  13. A high throughput spectral image microscopy system

    Science.gov (United States)

    Gesley, M.; Puri, R.

    2018-01-01

A high throughput spectral image microscopy system is configured for rapid detection of rare cells in large populations. To overcome the rate limits of flow cytometry and its reliance on fluorophore tags, the system architecture integrates sample mechanical handling, signal processors, and optics in a non-confocal version of light absorption and scattering spectroscopic microscopy. Spectral images with native contrast do not require exogenous stains to render cells at submicron resolution. Cell structure may be characterized without restriction to cluster-of-differentiation markers.

  14. High Throughput Neuro-Imaging Informatics

    Directory of Open Access Journals (Sweden)

    Michael I Miller

    2013-12-01

This paper describes neuroinformatics technologies at 1 mm anatomical scale based on high throughput 3D functional and structural imaging technologies of the human brain. The core is an abstract pipeline for converting functional and structural imagery into high-dimensional neuroinformatic representations containing O(10^3-10^4) discriminating dimensions. The pipeline is based on advanced image analysis coupled to digital knowledge representations in the form of dense atlases of the human brain at gross anatomical scale. We demonstrate the integration of these high-dimensional representations with machine learning methods, which have become the mainstay of other fields of science, including genomics as well as social networks. Such high throughput facilities have the potential to alter the way medical images are stored and utilized in radiological workflows. The neuroinformatics pipeline is used to examine cross-sectional and personalized analyses of neuropsychiatric illnesses in clinical applications as well as longitudinal studies. We demonstrate the use of high throughput machine learning methods for supporting (i) cross-sectional image analysis to evaluate the health status of individual subjects with respect to the population data, and (ii) integration of image and non-image information for diagnosis and prognosis.

  15. High Throughput PBTK: Open-Source Data and Tools for ...

    Science.gov (United States)

Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy.

  16. Economic consequences of high throughput maskless lithography

    Science.gov (United States)

    Hartley, John G.; Govindaraju, Lakshmi

    2005-11-01

Many people in the semiconductor industry bemoan the high costs of masks and view mask cost as one of the significant barriers to bringing new chip designs to market. All that is needed is a viable maskless technology and the problem will go away. Numerous sites around the world are working on maskless lithography but inevitably, the question asked is "Wouldn't a one wafer per hour maskless tool make a really good mask writer?" Of course, the answer is yes; the hesitation you hear in the answer isn't based on technology concerns, it's financial. The industry needs maskless lithography because mask costs are too high. Mask costs are too high because mask pattern generators (PGs) are slow and expensive. If mask PGs become much faster, mask costs go down, the maskless market goes away and the PG supplier is faced with an even smaller tool demand from the mask shops. Technical success becomes financial suicide - or does it? In this paper we will present the results of a model that examines some of the consequences of introducing high throughput maskless pattern generation. Specific features in the model include tool throughput for masks and wafers, market segmentation by node for masks and wafers, and mask cost as an entry barrier to new chip designs. How does the availability of low-cost masks and maskless tools affect the industry's tool makeup, and what is the ultimate potential market for high throughput maskless pattern generators?

  17. Preliminary High-Throughput Metagenome Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Dusheyko, Serge; Furman, Craig; Pangilinan, Jasmyn; Shapiro, Harris; Tu, Hank

    2007-03-26

Metagenome data sets present a qualitatively different assembly problem than traditional single-organism whole-genome shotgun (WGS) assembly. The unique aspects of such projects include the presence of a potentially large number of distinct organisms and their representation in the data set at widely different fractions. In addition, multiple closely related strains could be present, which would be difficult to assemble separately. Failure to take these issues into account can result in poor assemblies that either jumble together different strains or which fail to yield useful results. The DOE Joint Genome Institute has sequenced a number of metagenomic projects and plans to considerably increase this number in the coming year. As a result, the JGI has a need for high-throughput tools and techniques for handling metagenome projects. We present the techniques developed to handle metagenome assemblies in a high-throughput environment. This includes a streamlined assembly wrapper, based on the JGI's in-house WGS assembler, Jazz. It also includes the selection of sensible defaults targeted for metagenome data sets, as well as quality control automation for cleaning up the raw results. While analysis is ongoing, we will discuss preliminary assessments of the quality of the assembly results (http://fames.jgi-psf.org).

  18. Modeling Steroidogenesis Disruption Using High-Throughput ...

    Science.gov (United States)

    Environmental chemicals can elicit endocrine disruption by altering steroid hormone biosynthesis and metabolism (steroidogenesis) causing adverse reproductive and developmental effects. Historically, a lack of assays resulted in few chemicals having been evaluated for effects on steroidogenesis. The steroidogenic pathway is a series of hydroxylation and dehydrogenation steps carried out by CYP450 and hydroxysteroid dehydrogenase enzymes, yet the only enzyme in the pathway for which a high-throughput screening (HTS) assay has been developed is aromatase (CYP19A1), responsible for the aromatization of androgens to estrogens. Recently, the ToxCast HTS program adapted the OECD validated H295R steroidogenesis assay using human adrenocortical carcinoma cells into a high-throughput model to quantitatively assess the concentration-dependent (0.003-100 µM) effects of chemicals on 10 steroid hormones including progestagens, androgens, estrogens and glucocorticoids. These results, in combination with two CYP19A1 inhibition assays, comprise a large dataset amenable to clustering approaches supporting the identification and characterization of putative mechanisms of action (pMOA) for steroidogenesis disruption. In total, 514 chemicals were tested in all CYP19A1 and steroidogenesis assays. 216 chemicals were identified as CYP19A1 inhibitors in at least one CYP19A1 assay. 208 of these chemicals also altered hormone levels in the H295R assay, suggesting 96% sensitivity in the

  19. High-Throughput Analysis of Enzyme Activities

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Guoxin [Iowa State Univ., Ames, IA (United States)

    2007-01-01

High-throughput screening (HTS) techniques are now applied in many research fields. Robotic microarray printing and automated microtiter plate handling allow HTS to be performed in both heterogeneous and homogeneous formats, with minimal sample required for each assay element. In this dissertation, new HTS techniques for enzyme activity analysis were developed. First, patterns of enzyme immobilized on a nylon screen were detected by a multiplexed capillary system. The imaging resolution is limited by the outer diameter of the capillaries; to obtain finer images, capillaries with smaller outer diameters can be used to form the imaging probe. Application of capillary electrophoresis allows separation of the product from the substrate in the reaction mixture, so the product does not need to have optical properties different from the substrate's. UV absorption detection allows almost universal detection of organic molecules, so no modification of either the substrate or the product molecules is necessary. This technique has the potential to be used in screening local distribution variations of specific biomolecules in a tissue or in screening multiple immobilized catalysts. A second high-throughput screening technique was developed by directly monitoring the light intensity of the immobilized-catalyst surface using a scientific charge-coupled device (CCD). Briefly, the surface of the enzyme microarray is focused onto the CCD using an objective lens. By carefully choosing the detection wavelength, generation of product on an enzyme spot can be seen by the CCD, and analyzing the change in light intensity over time on an enzyme spot gives information on the reaction rate. The same microarray can be reused many times, making high-throughput kinetic studies of hundreds of catalytic reactions possible. Finally, we studied the fluorescence emission spectra of ADP and obtained the detection limits for ADP under three different

  20. High throughput assays for analyzing transcription factors.

    Science.gov (United States)

    Li, Xianqiang; Jiang, Xin; Yaoi, Takuro

    2006-06-01

    Transcription factors are a group of proteins that modulate the expression of genes involved in many biological processes, such as cell growth and differentiation. Alterations in transcription factor function are associated with many human diseases, and therefore these proteins are attractive potential drug targets. A key issue in the development of such therapeutics is the generation of effective tools that can be used for high throughput discovery of the critical transcription factors involved in human diseases, and the measurement of their activities in a variety of disease or compound-treated samples. Here, a number of innovative arrays and 96-well format assays for profiling and measuring the activities of transcription factors will be discussed.

  1. High-throughput hyperdimensional vertebrate phenotyping.

    Science.gov (United States)

    Pardo-Martin, Carlos; Allalou, Amin; Medina, Jaime; Eimon, Peter M; Wählby, Carolina; Fatih Yanik, Mehmet

    2013-01-01

Most gene mutations and biologically active molecules cause complex responses in animals that cannot be predicted by cell culture models. Yet animal studies remain too slow and their analyses are often limited to only a few readouts. Here we demonstrate high-throughput optical projection tomography with micrometre resolution and hyperdimensional screening of entire vertebrates in tens of seconds using a simple fluidic system. Hundreds of independent morphological features and complex phenotypes are automatically captured in three dimensions with unprecedented speed and detail in semitransparent zebrafish larvae. By clustering quantitative phenotypic signatures, we can detect and classify even subtle alterations in many biological processes simultaneously. We term our approach hyperdimensional in vivo phenotyping. To illustrate its power, we have analysed the effects of several classes of teratogens on cartilage formation using 200 independent morphological measurements, and identified similarities and differences that correlate well with their known mechanisms of action in mammals.

  2. High-Throughput Process Development for Biopharmaceuticals.

    Science.gov (United States)

    Shukla, Abhinav A; Rameez, Shahid; Wolfe, Leslie S; Oien, Nathan

    2017-11-14

The ability to conduct multiple experiments in parallel significantly reduces the time that it takes to develop a manufacturing process for a biopharmaceutical. This is particularly significant before clinical entry, because process development and manufacturing are on the "critical path" for a drug candidate to enter clinical development. High-throughput process development (HTPD) methodologies can be similarly impactful during late-stage development, both for developing the final commercial process as well as for process characterization and scale-down validation activities that form a key component of the licensure filing package. This review examines the current state of the art for HTPD methodologies as they apply to cell culture, downstream purification, and analytical techniques. In addition, we provide a vision of how HTPD activities across all of these spaces can integrate to create a rapid process development engine that can accelerate biopharmaceutical drug development.

  3. Applications of High Throughput Nucleotide Sequencing

    DEFF Research Database (Denmark)

    Waage, Johannes Eichler

    The recent advent of high throughput sequencing of nucleic acids (RNA and DNA) has vastly expanded research into the functional and structural biology of the genome of all living organisms (and even a few dead ones). With this enormous and exponential growth in biological data generation come...... equally large demands in data handling, analysis and interpretation, perhaps defining the modern challenge of the computational biologist of the post-genomic era. The first part of this thesis consists of a general introduction to the history, common terms and challenges of next generation sequencing......). For the second flavor, DNA-seq, a study presenting genome wide profiling of transcription factor CEBP/A in liver cells undergoing regeneration after partial hepatectomy (article IV) is included....

  4. Genome contact map explorer: a platform for the comparison, interactive visualization and analysis of genome contact maps

    Science.gov (United States)

    Kumar, Rajendra; Sobhy, Haitham

    2017-01-01

Hi-C experiments generate data in the form of large genome contact maps (Hi-C maps). These show that chromosomes are arranged in a hierarchy of three-dimensional compartments. But to understand how these compartments form and how much they affect genetic processes such as gene regulation, biologists and bioinformaticians need efficient tools to visualize and analyze Hi-C data. This is technically challenging because these maps are large. In this paper, we address this problem, partly by implementing an efficient file format, and present the genome contact map explorer platform. Apart from tools to process Hi-C data, such as normalization methods and a programmable interface, we provide a graphical interface that lets users browse, scroll and zoom Hi-C maps to visually search for patterns in the Hi-C data. The software also makes it possible to browse several maps simultaneously and plot related genomic data, and it is openly accessible to the scientific community. PMID:28973466

  5. High throughput sequencing of microRNAs in chicken somites.

    Science.gov (United States)

    Rathjen, Tina; Pais, Helio; Sweetman, Dylan; Moulton, Vincent; Munsterberg, Andrea; Dalmay, Tamas

    2009-05-06

    High throughput Solexa sequencing technology was applied to identify microRNAs in somites of developing chicken embryos. We obtained 651,273 reads, from which 340,415 were mapped to the chicken genome representing 1701 distinct sequences. Eighty-five of these were known microRNAs and 42 novel miRNA candidates were identified. Accumulation of 18 of 42 sequences was confirmed by Northern blot analysis. Ten of the 18 sequences are new variants of known miRNAs and eight short RNAs are novel miRNAs. Six of these eight have not been reported by other deep sequencing projects. One of the six new miRNAs is highly enriched in somite tissue suggesting that deep sequencing of other specific tissues has the potential to identify novel tissue specific miRNAs.
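The first processing step described above, reducing 651,273 raw reads to 1701 distinct sequences before genome mapping, amounts to counting duplicate reads. A minimal sketch with made-up read sequences (the reads shown are illustrative, not from the chicken data set):

```python
from collections import Counter

def collapse_reads(reads):
    """Collapse raw sequencing reads into distinct sequences with their
    read counts, a standard first step in small-RNA profiling."""
    return Counter(reads)

reads = [
    "TGAGGTAGTAGGTTGT",  # hypothetical small-RNA read, seen twice
    "TGAGGTAGTAGGTTGT",
    "ACTGGCCTTGGAGTCA",  # seen once
]
distinct = collapse_reads(reads)
# 3 reads collapse to 2 distinct sequences; counts are kept for abundance analysis
```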

  6. ESSENTIALS: Software for Rapid Analysis of High Throughput Transposon Insertion Sequencing Data.

    NARCIS (Netherlands)

    Zomer, A.L.; Burghout, P.J.; Bootsma, H.J.; Hermans, P.W.M.; Hijum, S.A.F.T. van

    2012-01-01

    High-throughput analysis of genome-wide random transposon mutant libraries is a powerful tool for (conditional) essential gene discovery. Recently, several next-generation sequencing approaches, e.g. Tn-seq/INseq, HITS and TraDIS, have been developed that accurately map the site of transposon

  7. High-throughput microcavitation bubble induced cellular mechanotransduction

    Science.gov (United States)

    Compton, Jonathan Lee

inhibitor to IP3-induced Ca2+ release. This capability opens the development of a high-throughput screening platform for molecules that modulate cellular mechanotransduction. We have applied this approach to screen the effects of a small set of small molecules in a 96-well plate in less than an hour. These detailed studies offer a basis for the design, development, and implementation of a novel assay to rapidly screen the effects of small molecules on cellular mechanotransduction at high throughput.

  8. High Throughput Spectroscopic Catalyst Screening via Surface Plasmon Spectroscopy

    Science.gov (United States)

    2015-07-15

Final report for AOARD Grant 144064 (contract FA2386-14-1-4064), "High Throughput Spectroscopic Catalyst Screening by Surface Plasmon Spectroscopy," covering 26 June 2014 to 25 March 2015; dated July 15, 2015.

  9. High-throughput crystallography for structural genomics.

    Science.gov (United States)

    Joachimiak, Andrzej

    2009-10-01

Protein X-ray crystallography recently celebrated its 50th anniversary. The structures of myoglobin and hemoglobin determined by Kendrew and Perutz provided the first glimpses into the complex protein architecture and chemistry. Since then, the field of structural molecular biology has experienced extraordinary progress and now more than 55,000 protein structures have been deposited into the Protein Data Bank. In the past decade many advances in macromolecular crystallography have been driven by world-wide structural genomics efforts. This was made possible because of third-generation synchrotron sources, structure phasing approaches using anomalous signal, and cryo-crystallography. Complementary progress in molecular biology, proteomics, hardware and software for crystallographic data collection, structure determination and refinement, computer science, databases, robotics and automation improved and accelerated many processes. These advancements provide the robust foundation for structural molecular biology and assure strong contribution to science in the future. In this report we focus mainly on reviewing structural genomics high-throughput X-ray crystallography technologies and their impact.

  10. High-throughput Crystallography for Structural Genomics

    Science.gov (United States)

    Joachimiak, Andrzej

    2009-01-01

    Protein X-ray crystallography recently celebrated its 50th anniversary. The structures of myoglobin and hemoglobin determined by Kendrew and Perutz provided the first glimpses into the complex protein architecture and chemistry. Since then, the field of structural molecular biology has experienced extraordinary progress and now over 53,000 proteins structures have been deposited into the Protein Data Bank. In the past decade many advances in macromolecular crystallography have been driven by world-wide structural genomics efforts. This was made possible because of third-generation synchrotron sources, structure phasing approaches using anomalous signal and cryo-crystallography. Complementary progress in molecular biology, proteomics, hardware and software for crystallographic data collection, structure determination and refinement, computer science, databases, robotics and automation improved and accelerated many processes. These advancements provide the robust foundation for structural molecular biology and assure strong contribution to science in the future. In this report we focus mainly on reviewing structural genomics high-throughput X-ray crystallography technologies and their impact. PMID:19765976

  11. A pocket device for high-throughput optofluidic holographic microscopy

    Science.gov (United States)

    Mandracchia, B.; Bianco, V.; Wang, Z.; Paturzo, M.; Bramanti, A.; Pioggia, G.; Ferraro, P.

    2017-06-01

    Here we introduce a compact holographic microscope embedded onboard a Lab-on-a-Chip (LoC) platform. A wavefront division interferometer is realized by writing a polymer grating onto the channel to extract a reference wave from the object wave impinging on the LoC. A portion of the beam reaches the samples flowing along the channel path, carrying their information content to the recording device, while one of the diffraction orders from the grating acts as an off-axis reference wave. Polymeric micro-lenses are delivered onto the chip by Pyro-ElectroHydroDynamic (Pyro-EHD) inkjet printing techniques. Thus, all the required optical components are embedded onboard a pocket device, and fast, non-iterative reconstruction algorithms can be used. We use our device in combination with a novel high-throughput technique, named Space-Time Digital Holography (STDH). STDH exploits the sample motion inside microfluidic channels to obtain a synthetic hologram, mapped in a hybrid space-time domain and with intrinsically useful features. Indeed, a single Linear Sensor Array (LSA) is sufficient to build up a synthetic representation of the entire experiment (i.e. the STDH) with unlimited Field of View (FoV) along the scanning direction, independently of the magnification factor. The throughput of the imaging system is dramatically increased, as STDH provides unlimited FoV and refocusable imaging of samples inside the liquid volume with no need for hologram stitching. To test our embedded STDH microscopy module, we counted, imaged, and tracked in 3D, with high throughput, red blood cells moving inside the channel volume under non-ideal flow conditions.

  12. Ultraspecific probes for high throughput HLA typing

    Directory of Open Access Journals (Sweden)

    Eggers Rick

    2009-02-01

    Full Text Available Abstract Background The variations within an individual's HLA (Human Leukocyte Antigen) genes have been linked to many immunological events, e.g. susceptibility to disease, response to vaccines, and the success of blood, tissue, and organ transplants. Although the microarray format has the potential to achieve high-resolution typing, this has yet to be attained due to inefficiencies of current probe design strategies. Results We present a novel three-step approach for the design of high-throughput microarray assays for HLA typing. This approach first selects sequences containing the SNPs present in all alleles of the locus of interest, and then calculates the number of base changes necessary to convert a candidate probe sequence to the closest subsequence within the set of sequences that are likely to be present in the sample, including the remainder of the human genome, in order to identify those candidate probes which are "ultraspecific" for the allele of interest. Due to the high specificity of these sequences, it is possible that preliminary steps such as PCR amplification are no longer necessary. Lastly, the minimum number of these ultraspecific probes is selected such that the highest resolution typing can be achieved for the minimal cost of production. As an example, an array was designed and in silico results were obtained for typing of the HLA-B locus. Conclusion The assay presented here provides a higher resolution than has previously been developed and includes more alleles than previously considered. Based upon the in silico and preliminary experimental results, we believe that the proposed approach can be readily applied to any highly polymorphic gene system.
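
    The "base changes to the closest off-target subsequence" criterion above amounts to a minimum-distance filter. A toy sketch of that idea (illustrative function names and toy sequences; a real design would scan the full set of HLA alleles plus the human genome with alignment-aware distances):

```python
# Sketch of the "ultraspecific" probe filter (assumed logic): a candidate probe
# is kept only if every same-length window in the background sequence pool
# differs from it by at least `min_mismatches` bases.

def min_hamming_to_background(probe, background_seqs):
    """Smallest Hamming distance between `probe` and any equal-length window."""
    k = len(probe)
    best = k + 1
    for seq in background_seqs:
        for i in range(len(seq) - k + 1):
            d = sum(a != b for a, b in zip(probe, seq[i:i + k]))
            best = min(best, d)
    return best

def select_ultraspecific(candidates, background_seqs, min_mismatches=3):
    return [p for p in candidates
            if min_hamming_to_background(p, background_seqs) >= min_mismatches]

probes = ["ACGTACGT", "AAAAAAAA"]
background = ["TTTTAAAAAAAATTTT"]          # contains the second probe exactly
print(select_ultraspecific(probes, background))  # only "ACGTACGT" survives
```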

  13. High Throughput Determinations of Critical Dosing Parameters (IVIVE workshop)

    Science.gov (United States)

    High throughput toxicokinetics (HTTK) is an approach that allows for rapid estimations of TK for hundreds of environmental chemicals. HTTK-based reverse dosimetry (i.e, reverse toxicokinetics or RTK) is used in order to convert high throughput in vitro toxicity screening (HTS) da...

  14. Savant: genome browser for high-throughput sequencing data.

    Science.gov (United States)

    Fiume, Marc; Williams, Vanessa; Brook, Andrew; Brudno, Michael

    2010-08-15

    The advent of high-throughput sequencing (HTS) technologies has made it affordable to sequence many individuals' genomes. Simultaneously the computational analysis of the large volumes of data generated by the new sequencing machines remains a challenge. While a plethora of tools are available to map the resulting reads to a reference genome, and to conduct primary analysis of the mappings, it is often necessary to visually examine the results and underlying data to confirm predictions and understand the functional effects, especially in the context of other datasets. We introduce Savant, the Sequence Annotation, Visualization and ANalysis Tool, a desktop visualization and analysis browser for genomic data. Savant was developed for visualizing and analyzing HTS data, with special care taken to enable dynamic visualization in the presence of gigabases of genomic reads and references the size of the human genome. Savant supports the visualization of genome-based sequence, point, interval and continuous datasets, and multiple visualization modes that enable easy identification of genomic variants (including single nucleotide polymorphisms, structural and copy number variants), and functional genomic information (e.g. peaks in ChIP-seq data) in the context of genomic annotations. Savant is freely available at http://compbio.cs.toronto.edu/savant.

  15. Complex multi-enhancer contacts captured by genome architecture mapping.

    Science.gov (United States)

    Beagrie, Robert A; Scialdone, Antonio; Schueler, Markus; Kraemer, Dorothee C A; Chotalia, Mita; Xie, Sheila Q; Barbieri, Mariano; de Santiago, Inês; Lavitas, Liron-Mark; Branco, Miguel R; Fraser, James; Dostie, Josée; Game, Laurence; Dillon, Niall; Edwards, Paul A W; Nicodemi, Mario; Pombo, Ana

    2017-03-23

    The organization of the genome in the nucleus and the interactions of genes with their regulatory elements are key features of transcriptional control and their disruption can cause disease. Here we report a genome-wide method, genome architecture mapping (GAM), for measuring chromatin contacts and other features of three-dimensional chromatin topology on the basis of sequencing DNA from a large collection of thin nuclear sections. We apply GAM to mouse embryonic stem cells and identify enrichment for specific interactions between active genes and enhancers across very large genomic distances using a mathematical model termed SLICE (statistical inference of co-segregation). GAM also reveals an abundance of three-way contacts across the genome, especially between regions that are highly transcribed or contain super-enhancers, providing a level of insight into genome architecture that, owing to the technical limitations of current technologies, has previously remained unattainable. Furthermore, GAM highlights a role for gene-expression-specific contacts in organizing the genome in mammalian nuclei.
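
    The co-segregation idea underlying GAM can be illustrated with a small counting sketch (this is only the intuition, not the published SLICE inference model; the data and locus names below are invented): loci that lie close together in the nucleus are captured together in the same thin sections more often than expected by chance.

```python
# Toy co-segregation calculation: 1 = locus detected in a given nuclear section.

def cosegregation(detection, a, b):
    """Fraction of sections in which loci a and b are detected together."""
    n = len(detection[a])
    return sum(x and y for x, y in zip(detection[a], detection[b])) / n

def expected_if_independent(detection, a, b):
    """Product of the loci's individual detection frequencies."""
    n = len(detection[a])
    fa = sum(detection[a]) / n
    fb = sum(detection[b]) / n
    return fa * fb

detection = {
    "geneA":     [1, 1, 0, 1, 0, 1, 0, 0],
    "enhancer1": [1, 1, 0, 1, 0, 1, 0, 0],   # always with geneA -> likely contact
    "geneB":     [0, 1, 1, 0, 1, 0, 0, 1],
}
obs = cosegregation(detection, "geneA", "enhancer1")
exp = expected_if_independent(detection, "geneA", "enhancer1")
print(obs, exp)  # observed 0.5 vs 0.25 expected under independence
```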

  16. High Throughput, Continuous, Mass Production of Photovoltaic Modules

    Energy Technology Data Exchange (ETDEWEB)

    Kurt Barth

    2008-02-06

    AVA Solar has developed a very low cost solar photovoltaic (PV) manufacturing process and has demonstrated the significant economic and commercial potential of this technology. This I & I Category 3 project provided significant assistance toward accomplishing these milestones. The original goals of this project were to design, construct and test a production prototype system, fabricate PV modules and test the module performance. The module manufacturing costs in the original proposal were estimated at $2/Watt. The objectives of this project have been exceeded. An advanced processing line was designed, fabricated and installed. Using this automated, high-throughput system, high efficiency devices and fully encapsulated modules were manufactured. AVA Solar has obtained two rounds of private equity funding, expanded to 50 people and initiated the development of a large-scale factory for 100+ megawatts of annual production. Modules will be manufactured at an industry-leading cost, which will enable AVA Solar's modules to produce power that is cost-competitive with traditional energy resources. With low manufacturing costs and the ability to scale manufacturing, AVA Solar has been contacted by some of the largest customers in the PV industry to negotiate long-term supply contracts. The current market for PV has continued to grow at 40%+ per year for nearly a decade and is projected to reach $40-$60 Billion by 2012. Currently, a crystalline silicon raw material supply shortage is limiting growth and raising costs. Our process does not use silicon, eliminating these limitations.

  17. High Throughput Architecture for High Performance NoC

    OpenAIRE

    Ghany, Mohamed A. Abd El; El-Moursy, Magdy A.; Ismail, Mohammed

    2010-01-01

    In this chapter, the high throughput NoC architecture is proposed to increase the throughput of the switch in NoC. The proposed architecture can also improve the latency of the network. The proposed high throughput interconnect architecture is applied on different NoC architectures. The architecture increases the throughput of the network by more than 38% while preserving the average latency. The area of high throughput NoC switch is decreased by 18% as compared to the area of BFT switch. The...

  18. A high throughput DNA extraction method with high yield and quality

    Directory of Open Access Journals (Sweden)

    Xin Zhanguo

    2012-07-01

    Full Text Available Abstract Background Preparation of large quantity and high quality genomic DNA from a large number of plant samples is a major bottleneck for most genetic and genomic analyses, such as genetic mapping, TILLING (Targeting Induced Local Lesion IN Genome), and next-generation sequencing directly from sheared genomic DNA. A variety of DNA preparation methods and commercial kits are available. However, they are either low throughput, low yield, or costly. Here, we describe a method for high throughput genomic DNA isolation from sorghum [Sorghum bicolor (L.) Moench] leaves and dry seeds with high yield, high quality, and affordable cost. Results We developed a high throughput DNA isolation method by combining a high yield CTAB extraction method with an improved cleanup procedure based on the MagAttract kit. The method yielded large quantities of high quality DNA from both lyophilized sorghum leaves and dry seeds. The DNA yield was improved by nearly 30 fold with 4 times less consumption of MagAttract beads. The method can also be used in other plant species, including cotton leaves and pine needles. Conclusion A high throughput system for DNA extraction from sorghum leaves and seeds was developed and validated. The main advantages of the method are low cost, high yield, high quality, and high throughput. One person can process two 96-well plates in a working day at a cost of $0.10 per sample of magnetic beads plus other consumables that other methods will also need.

  19. A high throughput DNA extraction method with high yield and quality.

    Science.gov (United States)

    Xin, Zhanguo; Chen, Junping

    2012-07-28

    Preparation of large quantity and high quality genomic DNA from a large number of plant samples is a major bottleneck for most genetic and genomic analyses, such as genetic mapping, TILLING (Targeting Induced Local Lesion IN Genome), and next-generation sequencing directly from sheared genomic DNA. A variety of DNA preparation methods and commercial kits are available. However, they are either low throughput, low yield, or costly. Here, we describe a method for high throughput genomic DNA isolation from sorghum [Sorghum bicolor (L.) Moench] leaves and dry seeds with high yield, high quality, and affordable cost. We developed a high throughput DNA isolation method by combining a high yield CTAB extraction method with an improved cleanup procedure based on the MagAttract kit. The method yielded large quantities of high quality DNA from both lyophilized sorghum leaves and dry seeds. The DNA yield was improved by nearly 30 fold with 4 times less consumption of MagAttract beads. The method can also be used in other plant species, including cotton leaves and pine needles. A high throughput system for DNA extraction from sorghum leaves and seeds was developed and validated. The main advantages of the method are low cost, high yield, high quality, and high throughput. One person can process two 96-well plates in a working day at a cost of $0.10 per sample of magnetic beads plus other consumables that other methods will also need.

  20. A high-throughput Arabidopsis reverse genetics system.

    Science.gov (United States)

    Sessions, Allen; Burke, Ellen; Presting, Gernot; Aux, George; McElver, John; Patton, David; Dietrich, Bob; Ho, Patrick; Bacwaden, Johana; Ko, Cynthia; Clarke, Joseph D; Cotton, David; Bullis, David; Snell, Jennifer; Miguel, Trini; Hutchison, Don; Kimmerly, Bill; Mitzel, Theresa; Katagiri, Fumiaki; Glazebrook, Jane; Law, Marc; Goff, Stephen A

    2002-12-01

    A collection of Arabidopsis lines with T-DNA insertions in known sites was generated to increase the efficiency of functional genomics. A high-throughput modified thermal asymmetric interlaced (TAIL)-PCR protocol was developed and used to amplify DNA fragments flanking the T-DNA left borders from approximately 100000 transformed lines. A total of 85108 TAIL-PCR products from 52964 T-DNA lines were sequenced and compared with the Arabidopsis genome to determine the positions of T-DNAs in each line. Predicted T-DNA insertion sites, when mapped, showed a bias against predicted coding sequences. Predicted insertion mutations in genes of interest can be identified using Arabidopsis Gene Index name searches or by BLAST (Basic Local Alignment Search Tool) search. Insertions can be confirmed by simple PCR assays on individual lines. Predicted insertions were confirmed in 257 of 340 lines tested (76%). This resource has been named SAIL (Syngenta Arabidopsis Insertion Library) and is available to the scientific community at www.tmri.org.

  1. Use of High Throughput Screening Data in IARC Monograph ...

    Science.gov (United States)

    Purpose: Evaluation of carcinogenic mechanisms serves a critical role in IARC monograph evaluations, and can lead to “upgrade” or “downgrade” of the carcinogenicity conclusions based on human and animal evidence alone. Three recent IARC monograph Working Groups (110, 112, and 113) pioneered analysis of high throughput in vitro screening data from the U.S. Environmental Protection Agency’s ToxCast program in evaluations of carcinogenic mechanisms. Methods: For monograph 110, ToxCast assay data across multiple nuclear receptors were used to test the hypothesis that PFOA acts exclusively through the PPAR family of receptors, with activity profiles compared to several prototypical nuclear receptor-activating compounds. For monographs 112 and 113, ToxCast assays were systematically evaluated and used as an additional data stream in the overall evaluation of the mechanistic evidence. Specifically, ToxCast assays were mapped to 10 “key characteristics of carcinogens” recently identified by an IARC expert group, and chemicals’ bioactivity profiles were evaluated both in absolute terms (number of relevant assays positive for bioactivity) and relative terms (ranking with respect to other compounds evaluated by IARC, using the ToxPi methodology). Results: PFOA activates multiple nuclear receptors in addition to the PPAR family in the ToxCast assays. ToxCast assays offered substantial coverage for 5 of the 10 “key characteristics,” with the greates
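
    The relative-ranking step described here can be illustrated with a toy score (this is not EPA's ToxPi methodology, which uses weighted component scores; the assay names and the assay-to-characteristic mapping below are invented):

```python
# Illustrative sketch: assays are mapped to "key characteristics of
# carcinogens", each chemical is scored by the number of relevant assays
# positive for bioactivity, and chemicals are ranked by that total.

ASSAY_TO_CHARACTERISTIC = {        # hypothetical mapping
    "NR_PPARg": "modulates receptor-mediated effects",
    "NR_ERa":   "modulates receptor-mediated effects",
    "OXSTRESS": "induces oxidative stress",
}

def profile(hits):
    """Count positive assays per key characteristic for one chemical."""
    out = {}
    for assay in hits:
        kc = ASSAY_TO_CHARACTERISTIC.get(assay)
        if kc:
            out[kc] = out.get(kc, 0) + 1
    return out

def rank(chemicals):
    """Rank chemicals by total number of relevant positive assays."""
    return sorted(chemicals, key=lambda c: -sum(profile(chemicals[c]).values()))

chemicals = {"chemA": ["NR_PPARg", "NR_ERa", "OXSTRESS"],
             "chemB": ["NR_PPARg"]}
print(rank(chemicals))  # ['chemA', 'chemB']
```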

  2. Strategies for high-throughput gene cloning and expression.

    Science.gov (United States)

    Dieckman, L J; Hanly, W C; Collart, E R

    2006-01-01

    High-throughput approaches for gene cloning and expression require the development of new, nonstandard tools for use by molecular biologists and biochemists. We have developed and implemented a series of methods that enable the production of expression constructs in 96-well plate format. A screening process is described that facilitates the identification of bacterial clones expressing soluble protein. Application of the solubility screen then provides a plate map that identifies the location of wells containing clones producing soluble proteins. A series of semi-automated methods can then be applied for validation of solubility and production of freezer stocks for the protein production group. This process provides an 80% success rate for the identification of clones producing soluble protein and results in a significant decrease in the level of effort required for the labor-intensive components of validation and preparation of freezer stocks. This process is customized for large-scale structural genomics programs that rely on the production of large amounts of soluble proteins for crystallization trials.
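
    The plate map produced by the solubility screen can be represented very simply (a hypothetical sketch; the well naming and the 0-1 solubility score are assumptions, not the authors' format):

```python
# A 96-well plate as a dict of well -> solubility score; the plate map lists
# the wells whose clones passed the solubility threshold.

ROWS, COLS = "ABCDEFGH", range(1, 13)

def plate_map(scores, threshold=0.5):
    """Return the wells (e.g. 'A1') whose soluble-protein score passes threshold."""
    return sorted(w for w, s in scores.items() if s >= threshold)

# Toy screen: most wells insoluble, three clones produce soluble protein.
scores = {f"{r}{c}": 0.1 for r in ROWS for c in COLS}
scores.update({"A1": 0.9, "C7": 0.8, "H12": 0.6})
print(plate_map(scores))  # ['A1', 'C7', 'H12']
```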

  3. SNP-PHAGE – High throughput SNP discovery pipeline

    Directory of Open Access Journals (Sweden)

    Cregan Perry B

    2006-10-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) as defined here are single base sequence changes or short insertion/deletions between or within individuals of a given species. As a result of their abundance and the availability of high throughput analysis technologies, SNP markers have begun to replace other traditional markers such as restriction fragment length polymorphisms (RFLPs), amplified fragment length polymorphisms (AFLPs) and simple sequence repeats (SSRs) or microsatellite markers for fine mapping and association studies in several species. For SNP discovery from chromatogram data, several bioinformatics programs have to be combined to generate an analysis pipeline. Results have to be stored in a relational database to facilitate interrogation through queries or to generate data for further analyses such as determination of linkage disequilibrium and identification of common haplotypes. Although these tasks are routinely performed by several groups, an integrated open source SNP discovery pipeline that can be easily adapted by new groups interested in SNP marker development is currently unavailable. Results We developed SNP-PHAGE (SNP discovery Pipeline with additional features for identification of common haplotypes within a sequence tagged site (Haplotype Analysis) and GenBank (dbSNP) submissions). This tool was applied for analyzing sequence traces from diverse soybean genotypes to discover over 10,000 SNPs. This package was developed on a UNIX/Linux platform, written in Perl, and uses a MySQL database. Scripts to generate a user-friendly web interface are also provided, with common queries for preliminary data analysis. A machine learning tool developed by this group for increasing the efficiency of SNP discovery is integrated as a part of this package as an optional feature. The SNP-PHAGE package is being made available open source at http://bfgl.anri.barc.usda.gov/ML/snp-phage/. Conclusion SNP-PHAGE provides a bioinformatics
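
    Once traces or reads are aligned, the core SNP-discovery step reduces to finding polymorphic columns. A bare-bones sketch (not SNP-PHAGE code; base-quality scores, which a real pipeline would use, are ignored here):

```python
# Scan a multiple alignment column by column and report positions where more
# than one base is observed -- candidate SNPs.

def candidate_snps(aligned_seqs, min_minor_count=1):
    snps = []
    for pos, column in enumerate(zip(*aligned_seqs)):
        bases = [b for b in column if b in "ACGT"]     # ignore gaps/ambiguity
        counts = {b: bases.count(b) for b in set(bases)}
        if len(counts) > 1 and sorted(counts.values())[-2] >= min_minor_count:
            snps.append((pos, counts))
    return snps

genotypes = ["ACGTACGA",
             "ACGTACGA",
             "ACCTACGG"]   # polymorphic at positions 2 (G/C) and 7 (A/G)
for pos, counts in candidate_snps(genotypes):
    print(pos, counts)
```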

  4. Automated High-Throughput Root Phenotyping of Arabidopsis thaliana Under Nutrient Deficiency Conditions.

    Science.gov (United States)

    Satbhai, Santosh B; Göschl, Christian; Busch, Wolfgang

    2017-01-01

    The central question of genetics is how a genotype determines the phenotype of an organism. Genetic mapping approaches are a key for finding answers to this question. In particular, genome-wide association (GWA) studies have been rapidly adopted to study the architecture of complex quantitative traits. This was only possible due to the improvement of high-throughput and low-cost phenotyping methodologies. In this chapter we provide a detailed protocol for obtaining root trait data from the model species Arabidopsis thaliana using the semiautomated, high-throughput phenotyping pipeline BRAT (Busch-lab Root Analysis Toolchain) for early root growth under the stress condition of iron deficiency. Extracted root trait data can be directly used to perform GWA mapping using the freely accessible web application GWAPP to identify marker polymorphisms associated with the phenotype of interest.

  5. High Throughput Hall Thruster for Small Spacecraft Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Busek is developing a high throughput nominal 100-W Hall Effect Thruster. This device is well sized for spacecraft ranging in size from several tens of kilograms to...

  6. AOPs & Biomarkers: Bridging High Throughput Screening and Regulatory Decision Making.

    Science.gov (United States)

    As high throughput screening (HTS) approaches play a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models for this purpose are becoming increasingly more sophisticated...

  7. High Throughput Hall Thruster for Small Spacecraft Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Busek Co. Inc. proposes to develop a high throughput, nominal 100 W Hall Effect Thruster (HET). This HET will be sized for small spacecraft (< 180 kg), including...

  8. High-Throughput Analysis and Automation for Glycomics Studies

    NARCIS (Netherlands)

    Shubhakar, A.; Reiding, K.R.; Gardner, R.A.; Spencer, D.I.R.; Fernandes, D.L.; Wuhrer, M.

    2015-01-01

    This review covers advances in analytical technologies for high-throughput (HTP) glycomics. Our focus is on structural studies of glycoprotein glycosylation to support biopharmaceutical realization and the discovery of glycan biomarkers for human disease. For biopharmaceuticals, there is increasing

  9. Materiomics - High-Throughput Screening of Biomaterial Properties

    NARCIS (Netherlands)

    de Boer, Jan; van Blitterswijk, Clemens

    2013-01-01

    This complete, yet concise, guide introduces you to the rapidly developing field of high throughput screening of biomaterials: materiomics. Bringing together the key concepts and methodologies used to determine biomaterial properties, you will understand the adaptation and application of materiomics

  10. MIPHENO: Data normalization for high throughput metabolic analysis.

    Science.gov (United States)

    High throughput methodologies such as microarrays, mass spectrometry and plate-based small molecule screens are increasingly used to facilitate discoveries from gene function to drug candidate identification. These large-scale experiments are typically carried out over the course...

  11. Applications of High Throughput Sequencing for Immunology and Clinical Diagnostics

    OpenAIRE

    Kim, Hyunsung John

    2014-01-01

    High throughput sequencing methods have fundamentally shifted the manner in which biological experiments are performed. In this dissertation, conventional and novel high throughput sequencing and bioinformatics methods are applied to immunology and diagnostics. In order to study rare subsets of cells, an RNA sequencing method was first optimized for use with minimal levels of RNA and cellular input. The optimized RNA sequencing method was then applied to study the transcriptional differences ...

  12. Pyicos: a versatile toolkit for the analysis of high-throughput sequencing data

    OpenAIRE

    Althammer, Sonja Daniela; González-Vallinas Rostes, Juan, 1983-; Ballaré, Cecilia Julia; Beato, Miguel; Eyras Jiménez, Eduardo

    2011-01-01

    Motivation: High-throughput sequencing (HTS) has revolutionized gene regulation studies and is now fundamental for the detection of protein–DNA and protein–RNA binding, as well as for measuring RNA expression. With increasing variety and sequencing depth of HTS datasets, the need for more flexible and memory-efficient tools to analyse them is growing. Results: We describe Pyicos, a powerful toolkit for the analysis of mapped reads from diverse HTS experiments: ChIP-Seq, either punctuated or b...

  13. Emerging metrology for high-throughput nanomaterial genotoxicology.

    Science.gov (United States)

    Nelson, Bryant C; Wright, Christa W; Ibuki, Yuko; Moreno-Villanueva, Maria; Karlsson, Hanna L; Hendriks, Giel; Sims, Christopher M; Singh, Neenu; Doak, Shareen H

    2017-01-01

    The rapid development of the engineered nanomaterial (ENM) manufacturing industry has accelerated the incorporation of ENMs into a wide variety of consumer products across the globe. Unintentionally or not, some of these ENMs may be introduced into the environment or come into contact with humans or other organisms resulting in unexpected biological effects. It is thus prudent to have rapid and robust analytical metrology in place that can be used to critically assess and/or predict the cytotoxicity, as well as the potential genotoxicity of these ENMs. Many of the traditional genotoxicity test methods [e.g. unscheduled DNA synthesis assay, bacterial reverse mutation (Ames) test, etc.] for determining the DNA damaging potential of chemical and biological compounds are not suitable for the evaluation of ENMs, due to a variety of methodological issues ranging from potential assay interferences to problems centered on low sample throughput. Recently, a number of sensitive, high-throughput genotoxicity assays/platforms (CometChip assay, flow cytometry/micronucleus assay, flow cytometry/γ-H2AX assay, automated 'Fluorimetric Detection of Alkaline DNA Unwinding' (FADU) assay, ToxTracker reporter assay) have been developed, based on substantial modifications and enhancements of traditional genotoxicity assays. These new assays have been used for the rapid measurement of DNA damage (strand breaks), chromosomal damage (micronuclei) and for detecting upregulated DNA damage signalling pathways resulting from ENM exposures. In this critical review, we describe and discuss the fundamental measurement principles and measurement endpoints of these new assays, as well as the modes of operation, analytical metrics and potential interferences, as applicable to ENM exposures. An unbiased discussion of the major technical advantages and limitations of each assay for evaluating and predicting the genotoxic potential of ENMs is also provided. Published by Oxford University Press on

  14. Accurate Classification of Protein Subcellular Localization from High-Throughput Microscopy Images Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Tanel Pärnamaa

    2017-05-01

    Full Text Available High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy.

  15. A High-Throughput Microfluidic Platform for Mammalian Cell Transfection and Culturing

    Science.gov (United States)

    Woodruff, Kristina; Maerkl, Sebastian J.

    2016-01-01

    Mammalian synthetic biology could be augmented through the development of high-throughput microfluidic systems that integrate cellular transfection, culturing, and imaging. We created a microfluidic chip that cultures cells and implements 280 independent transfections at up to 99% efficiency. The chip can perform co-transfections, in which the number of cells expressing each protein and the average protein expression level can be precisely tuned as a function of input DNA concentration and synthetic gene circuits can be optimized on chip. We co-transfected four plasmids to test a histidine kinase signaling pathway and mapped the dose dependence of this network on the level of one of its constituents. The chip is readily integrated with high-content imaging, enabling the evaluation of cellular behavior and protein expression dynamics over time. These features make the transfection chip applicable to high-throughput mammalian protein and synthetic biology studies. PMID:27030663

  16. High-throughput characterization of film thickness in thin film materials libraries by digital holographic microscopy.

    Science.gov (United States)

    Lai, Yiu Wai; Krause, Michael; Savan, Alan; Thienhaus, Sigurd; Koukourakis, Nektarios; Hofmann, Martin R; Ludwig, Alfred

    2011-10-01

    A high-throughput characterization technique based on digital holography for mapping film thickness in thin-film materials libraries was developed. Digital holographic microscopy is used for fully automatic measurements of the thickness of patterned films with nanometer resolution. The method has several significant advantages over conventional stylus profilometry: it is contactless and fast, substrate bending is compensated, and the experimental setup is simple. Patterned films prepared by different combinatorial thin-film approaches were characterized to investigate and demonstrate this method. The results show that this technique is valuable for the quick, reliable and high-throughput determination of the film thickness distribution in combinatorial materials research. Importantly, it can also be applied to thin films that have been structured by shadow masking.
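
    The phase-to-thickness conversion at the heart of this measurement is a one-line formula. A sketch, assuming a reflection geometry in which a step of height t adds 2t of optical path (transmission DHM through a film of refractive index n would instead divide by 2π(n−1)):

```python
# Convert a measured DHM phase step to film thickness, reflection geometry:
# t = delta_phi * wavelength / (4 * pi).

import math

def thickness_from_phase(delta_phi_rad, wavelength_nm):
    """Film thickness in nm from the phase step across a patterned edge."""
    return delta_phi_rad * wavelength_nm / (4 * math.pi)

# A pi/2 phase step at 632.8 nm (HeNe) corresponds to 632.8/8 = 79.1 nm.
t = thickness_from_phase(math.pi / 2, 632.8)
print(round(t, 1))  # 79.1
```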

  17. Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric

    NARCIS (Netherlands)

    R. Andonov (Rumen); H. Djidjev (Hristo); G.W. Klau (Gunnar); M. Le Boudic-Jamin (Mathilde); I. Wohlers (Inken)

    2015-01-01

    In this work, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of

  18. Automatic classification of protein structure using the maximum contact map overlap metric

    NARCIS (Netherlands)

    Andonov, Rumen; Djidjev, Hristo; Klau, Gunnar W.; Boudic-Jamin, Mathilde Le; Wohlers, Inken

    2015-01-01

    In this work, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein

  19. Protein contact maps: A binary depiction of protein 3D structures

    Science.gov (United States)

    Emerson, Isaac Arnold; Amala, Arumugam

    2017-01-01

    In recent years, there has been considerable interest in examining the structure and dynamics of complex networks. Proteins in 3D space may also be considered as complex systems that emerge through the interactions of their constituent amino acids. This representation provides a powerful framework to uncover the general organizing principles of protein contact networks. Here we review protein contact maps in terms of protein structure prediction and analyses. In addition, we also discuss the various computational techniques for the prediction of protein contact maps and the tools to visualize contact maps.
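
    In its simplest binary form, a contact map is a thresholded distance matrix over residue coordinates. A minimal sketch (the 8 Å Cα-Cα cutoff is one common convention; the papers discussed here use more elaborate, atom-level definitions):

```python
# Binary contact map from C-alpha coordinates: residues i and j are "in
# contact" if their distance is within the cutoff (self-contacts excluded).

import math

def contact_map(ca_coords, cutoff=8.0):
    n = len(ca_coords)
    cmap = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = math.dist(ca_coords[i], ca_coords[j])
            cmap[i][j] = 1 if i != j and d <= cutoff else 0
    return cmap

coords = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (20.0, 0.0, 0.0)]
m = contact_map(coords)
print(m)  # residues 0-1 in contact, residue 2 isolated
```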

  20. A novel high throughput method to investigate polymer dissolution.

    Science.gov (United States)

    Zhang, Ying; Mallapragada, Surya K; Narasimhan, Balaji

    2010-02-16

    The dissolution behavior of polystyrene (PS) in biodiesel was studied by developing a novel high throughput approach based on Fourier-transform infrared (FTIR) microscopy. A multiwell device for high throughput dissolution testing was fabricated using a photolithographic rapid prototyping method. The dissolution of PS films in each well was tracked by following the characteristic IR band of PS and the effect of PS molecular weight and temperature on the dissolution rate was simultaneously investigated. The results were validated with conventional gravimetric methods. The high throughput method can be extended to evaluate the dissolution profiles of a large number of samples, or to simultaneously investigate the effect of variables such as polydispersity, crystallinity, and mixed solvents. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Determination of contact maps in proteins: A combination of structural and chemical approaches

    Energy Technology Data Exchange (ETDEWEB)

    Wołek, Karol; Cieplak, Marek, E-mail: mc@ifpan.edu.pl [Institute of Physics, Polish Academy of Science, Al. Lotników 32/46, 02-668 Warsaw (Poland); Gómez-Sicilia, Àngel [Instituto Cajal, Consejo Superior de Investigaciones Cientificas (CSIC), Av. Doctor Arce, 37, 28002 Madrid (Spain); Instituto Madrileño de Estudios Avanzados en Nanociencia (IMDEA-Nanociencia), C/Faraday 9, 28049 Cantoblanco (Madrid) (Spain)

    2015-12-28

    Contact map selection is a crucial step in structure-based molecular dynamics modelling of proteins. The map can be determined in many different ways. We focus on the methods in which residues are represented as clusters of effective spheres. One contact map, denoted as overlap (OV), is based on the overlap of such spheres. Another contact map, named Contacts of Structural Units (CSU), involves the geometry in a different way and, in addition, takes chemical considerations into account. We develop a variant of the CSU approach in which we also incorporate Coulombic effects such as the formation of ionic bridges and the destabilization of possible links through repulsion. In this way, the most essential and well-defined contacts are identified. The resulting residue-residue contact map, dubbed repulsive CSU (rCSU), is more sound in its physico-chemical justification than CSU. It also provides a clear prescription for the validity of an inter-residual contact: the number of attractive atomic contacts should be larger than the number of repulsive ones — a feature that is not present in CSU. However, neither of these maps correlates well with the experimental data on protein stretching. Thus, we propose to use rCSU together with the OV map. We find that the combined map, denoted as OV+rCSU, performs better than OV. In most situations, OV and OV+rCSU yield comparable folding properties, but for some proteins rCSU provides contacts which improve folding in a substantial way. We discuss the likely residue-specificity of the rCSU contacts. Finally, we make comparisons to the recently proposed shadow contact map, which is derived from different principles.
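    An OV-style sphere-overlap criterion can be sketched as follows. This is a minimal illustration using assumed van der Waals radii and an enlargement factor of 1.24 (a commonly used parameterization; the paper's exact values and atom typing may differ), with synthetic coordinates.

```python
import numpy as np

# van der Waals radii (angstroms) for a few heavy-atom types; illustrative
VDW = {"C": 1.7, "N": 1.55, "O": 1.52, "S": 1.8}
ENLARGE = 1.24  # multiplicative sphere-enlargement factor (assumed)

def residues_in_contact(res_a, res_b):
    """res_a/res_b: lists of (element, xyz). Two residues are in
    OV-style contact if any pair of enlarged vdW spheres overlaps."""
    for el_a, xa in res_a:
        for el_b, xb in res_b:
            d = np.linalg.norm(np.asarray(xa) - np.asarray(xb))
            if d <= ENLARGE * (VDW[el_a] + VDW[el_b]):
                return True
    return False

res1 = [("C", (0.0, 0.0, 0.0)), ("O", (1.2, 0.0, 0.0))]
res2 = [("N", (4.5, 0.0, 0.0))]    # 3.3 A from the O atom: overlapping
res3 = [("C", (12.0, 0.0, 0.0))]   # far from every atom of res1
print(residues_in_contact(res1, res2))  # → True
print(residues_in_contact(res1, res3))  # → False
```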

  2. Screening and synthesis: high throughput technologies applied to parasitology.

    Science.gov (United States)

    Morgan, R E; Westwood, N J

    2004-01-01

    High throughput technologies continue to develop in response to the challenges set by the genome projects. This article discusses how the techniques of both high throughput screening (HTS) and synthesis can influence research in parasitology. Examples of the use of targeted and phenotype-based HTS using unbiased compound collections are provided. The important issue of identifying the protein target(s) of bioactive compounds is discussed from the synthetic chemist's perspective. This article concludes by reviewing recent examples of successful target identification studies in parasitology.

  3. High throughput materials research and development for lithium ion batteries

    Directory of Open Access Journals (Sweden)

    Parker Liu

    2017-09-01

    Full Text Available Development of next generation batteries requires a breakthrough in materials. The traditional one-by-one method, which is suited to synthesizing a large number of single-composition materials, is time-consuming and costly. High throughput and combinatorial experimentation is an effective method to synthesize and characterize a huge number of materials over a broad compositional region in a short time, which makes it possible to greatly speed up the discovery and optimization of materials at lower cost. In this work, high throughput and combinatorial materials synthesis technologies for lithium ion battery research are discussed, and our efforts on developing such instrumentation are introduced.

  4. High-throughput optical coherence tomography at 800 nm.

    Science.gov (United States)

    Goda, Keisuke; Fard, Ali; Malik, Omer; Fu, Gilbert; Quach, Alan; Jalali, Bahram

    2012-08-27

    We report high-throughput optical coherence tomography (OCT) that offers 1,000 times higher axial scan rate than conventional OCT in the 800 nm spectral range. This is made possible by employing photonic time-stretch for chirping a pulse train and transforming it into a passive swept source. We demonstrate a record high axial scan rate of 90.9 MHz. To show the utility of our method, we also demonstrate real-time observation of laser ablation dynamics. Our high-throughput OCT is expected to be useful for industrial applications where the speed of conventional OCT falls short.

  5. High throughput calorimetry for evaluating enzymatic reactions generating phosphate.

    Science.gov (United States)

    Hoflack, Lieve; De Groeve, Manu; Desmet, Tom; Van Gerwen, Peter; Soetaert, Wim

    2010-05-01

    A calorimetric assay is described for the high-throughput screening of enzymes that produce inorganic phosphate. In the current example, cellobiose phosphorylase (EC 2.4.1.20) is tested for its ability to synthesise rare disaccharides. The generated phosphate is measured in a high-throughput calorimeter by coupling the reaction to pyruvate oxidase and catalase. This procedure allows for the simultaneous analysis of 48 reactions in microtiter plate format and has been validated by comparison with a colorimetric phosphate assay. The proposed assay has a coefficient of variation of 3.14% and is useful for screening enzyme libraries for enhanced activity and substrate libraries for enzyme promiscuity.

  6. Towards a high throughput droplet-based agglutination assay

    KAUST Repository

    Kodzius, Rimantas

    2013-10-22

    This work demonstrates a detection method for a high-throughput droplet-based agglutination assay system. Using simple hydrodynamic forces to mix and aggregate functionalized microbeads, we avoid the need for magnetic assistance or mixing structures. The concentration of our target molecules was estimated from the agglutination strength, obtained through optical image analysis. Agglutination in droplets was performed at flow rates of 150 µl/min and occurred in under a minute, with the potential to perform high-throughput measurements. The lowest target concentration detected in droplet microfluidics was 0.17 nM, which is three orders of magnitude more sensitive than a conventional card-based agglutination assay.

  7. A two-stage approach for improved prediction of residue contact maps

    Directory of Open Access Journals (Sweden)

    Pollastri Gianluca

    2006-03-01

    Full Text Available Abstract Background Protein topology representations such as residue contact maps are an important intermediate step towards ab initio prediction of protein structure. Although improvements have occurred over the last years, the problem of accurately predicting residue contact maps from primary sequences is still largely unsolved. Among the reasons for this are the unbalanced nature of the problem (with far fewer examples of contacts than non-contacts, the formidable challenge of capturing long-range interactions in the maps, and the intrinsic difficulty of mapping one-dimensional input sequences into two-dimensional output maps. In order to alleviate these problems and achieve improved contact map predictions, in this paper we split the task into two stages: the prediction of a map's principal eigenvector (PE from the primary sequence; and the reconstruction of the contact map from the PE and the primary sequence. Predicting the PE from the primary sequence consists of mapping a vector into a vector. This task is less complex than mapping vectors directly into two-dimensional matrices, since the size of the problem is drastically reduced and so is the scale length of the interactions that need to be learned. Results We develop architectures composed of ensembles of two-layered bidirectional recurrent neural networks to classify the components of the PE into 2, 3 and 4 classes from the protein primary sequence, predicted secondary structure, and hydrophobicity interaction scales. Our predictor, tested on a non-redundant set of 2171 proteins, achieves classification performances of up to 72.6%, 16% above a base-line statistical predictor. We design a system for the prediction of contact maps from the predicted PE. Our results show that predicting maps through the PE yields sizeable gains, especially for long-range contacts which are particularly critical for accurate protein 3D reconstruction. The final predictor's accuracy on a non-redundant set of 327 targets is 35
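    The two-stage idea can be illustrated with a toy sketch: compute the principal eigenvector (PE) of a contact map and rebuild the map from a rank-1 approximation. The thresholded outer product below is a naive stand-in for the paper's learned second stage, shown on a synthetic map.

```python
import numpy as np

def principal_eigenvector(cmap):
    """Leading eigenvalue and eigenvector magnitudes of a symmetric
    binary contact map (the PE sign is arbitrary, so magnitudes are used)."""
    vals, vecs = np.linalg.eigh(cmap)
    return vals[-1], np.abs(vecs[:, -1])

def reconstruct(pe, eigval, threshold=0.5):
    """Naive rank-1 reconstruction: contact where eigval*pe_i*pe_j
    exceeds a threshold (a stand-in for a learned second stage)."""
    approx = eigval * np.outer(pe, pe)
    cmap = (approx > threshold).astype(int)
    np.fill_diagonal(cmap, 0)
    return cmap

# Toy map: a 4-residue clique plus two non-contacting residues
cmap = np.zeros((6, 6), dtype=int)
for i in range(4):
    for j in range(4):
        if i != j:
            cmap[i, j] = 1
eigval, pe = principal_eigenvector(cmap)
rebuilt = reconstruct(pe, eigval)
print((rebuilt == cmap).all())  # → True for this simple clique map
```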

  8. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  9. Infra-red thermography for high throughput field phenotyping in Solanum tuberosum.

    Science.gov (United States)

    Prashar, Ankush; Yildiz, Jane; McNicol, James W; Bryan, Glenn J; Jones, Hamlyn G

    2013-01-01

    The rapid development of genomic technology has made high throughput genotyping widely accessible but the associated high throughput phenotyping is now the major limiting factor in genetic analysis of traits. This paper evaluates the use of thermal imaging for the high throughput field phenotyping of Solanum tuberosum for differences in stomatal behaviour. A large multi-replicated trial of a potato mapping population was used to investigate the consistency in genotypic rankings across different trials and across measurements made at different times of day and on different days. The results confirmed a high degree of consistency between the genotypic rankings based on relative canopy temperature on different occasions. Genotype discrimination was enhanced both through normalising data by expressing genotype temperatures as differences from image means and through the enhanced replication obtained by using overlapping images. A Monte Carlo simulation approach was used to confirm the magnitude of genotypic differences that it is possible to discriminate. The results showed a clear negative association between canopy temperature and final tuber yield for this population, when grown under ample moisture supply. We have therefore established infrared thermography as an easy, rapid and non-destructive screening method for evaluating large population trials for genetic analysis. We also envisage this approach as having great potential for evaluating plant response to stress under field conditions.

  10. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  11. Infra-red thermography for high throughput field phenotyping in Solanum tuberosum.

    Directory of Open Access Journals (Sweden)

    Ankush Prashar

    Full Text Available The rapid development of genomic technology has made high throughput genotyping widely accessible but the associated high throughput phenotyping is now the major limiting factor in genetic analysis of traits. This paper evaluates the use of thermal imaging for the high throughput field phenotyping of Solanum tuberosum for differences in stomatal behaviour. A large multi-replicated trial of a potato mapping population was used to investigate the consistency in genotypic rankings across different trials and across measurements made at different times of day and on different days. The results confirmed a high degree of consistency between the genotypic rankings based on relative canopy temperature on different occasions. Genotype discrimination was enhanced both through normalising data by expressing genotype temperatures as differences from image means and through the enhanced replication obtained by using overlapping images. A Monte Carlo simulation approach was used to confirm the magnitude of genotypic differences that it is possible to discriminate. The results showed a clear negative association between canopy temperature and final tuber yield for this population, when grown under ample moisture supply. We have therefore established infrared thermography as an easy, rapid and non-destructive screening method for evaluating large population trials for genetic analysis. We also envisage this approach as having great potential for evaluating plant response to stress under field conditions.

  12. High throughput defect detection with multiple parallel electron beams

    NARCIS (Netherlands)

    Himbergen, H.M.P. van; Nijkerk, M.D.; Jager, P.W.H. de; Hosman, T.C.; Kruit, P.

    2007-01-01

    A new concept for high throughput defect detection with multiple parallel electron beams is described. As many as 30 000 beams can be placed on a footprint of 1 in.², each beam having its own microcolumn and detection system without cross-talk. Based on the International Technology Roadmap for

  13. High-throughput cloning and expression in recalcitrant bacteria

    NARCIS (Netherlands)

    Geertsma, Eric R.; Poolman, Bert

    We developed a generic method for high-throughput cloning in bacteria that are less amenable to conventional DNA manipulations. The method involves ligation-independent cloning in an intermediary Escherichia coli vector, which is rapidly converted via vector-backbone exchange (VBEx) into an

  14. High-throughput screening, predictive modeling and computational embryology - Abstract

    Science.gov (United States)

    High-throughput screening (HTS) studies are providing a rich source of data that can be applied to chemical profiling to address sensitivity and specificity of molecular targets, biological pathways, cellular and developmental processes. EPA’s ToxCast project is testing 960 uniq...

  15. High-throughput screening, predictive modeling and computational embryology

    Science.gov (United States)

    High-throughput screening (HTS) studies are providing a rich source of data that can be applied to profile thousands of chemical compounds for biological activity and potential toxicity. EPA’s ToxCast™ project, and the broader Tox21 consortium, in addition to projects worldwide,...

  16. High-throughput sequencing in mitochondrial DNA research.

    Science.gov (United States)

    Ye, Fei; Samuels, David C; Clark, Travis; Guo, Yan

    2014-07-01

    Next-generation sequencing, also known as high-throughput sequencing, has greatly enhanced researchers' ability to conduct biomedical research on all levels. Mitochondrial research has also benefitted greatly from high-throughput sequencing; sequencing technology now allows for screening of all 16,569 base pairs of the mitochondrial genome simultaneously for SNPs and low level heteroplasmy and, in some cases, the estimation of mitochondrial DNA copy number. It is important to realize the full potential of high-throughput sequencing for the advancement of mitochondrial research. To this end, we review how high-throughput sequencing has impacted mitochondrial research in the categories of SNPs, low level heteroplasmy, copy number, and structural variants. We also discuss the different types of mitochondrial DNA sequencing and their pros and cons. Based on previous studies conducted by various groups, we provide strategies for processing mitochondrial DNA sequencing data, including assembly, variant calling, and quality control. Copyright © 2014 Elsevier B.V. and Mitochondria Research Society. All rights reserved.
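    Low-level heteroplasmy screening of the kind discussed above amounts to calling minor-allele fractions from per-position read counts. A minimal sketch follows, with illustrative thresholds (1% minor-allele fraction, 1000× depth — assumptions, not values from the review) and synthetic counts.

```python
def heteroplasmy_fraction(ref_count, alt_count):
    """Minor-allele fraction at an mtDNA position from read counts."""
    depth = ref_count + alt_count
    if depth == 0:
        return 0.0
    return min(ref_count, alt_count) / depth

def call_heteroplasmy(sites, min_fraction=0.01, min_depth=1000):
    """Flag positions whose minor-allele fraction exceeds a noise
    floor at sufficient depth. `sites`: (position, ref, alt) tuples."""
    calls = []
    for pos, ref, alt in sites:
        if ref + alt >= min_depth:
            frac = heteroplasmy_fraction(ref, alt)
            if frac >= min_fraction:
                calls.append((pos, round(frac, 4)))
    return calls

# Synthetic pileup counts at three mtDNA positions
sites = [(73, 4975, 25), (263, 990, 10), (3243, 4200, 800)]
print(call_heteroplasmy(sites))  # → [(263, 0.01), (3243, 0.16)]
```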

  17. Enzyme free cloning for high throughput gene cloning and expression

    NARCIS (Netherlands)

    de Jong, R.N.; Daniëls, M.; Kaptein, R.|info:eu-repo/dai/nl/074334603; Folkers, G.E.|info:eu-repo/dai/nl/162277202

    2006-01-01

    Structural and functional genomics initiatives significantly improved cloning methods over the past few years. Although recombinational cloning is highly efficient, its costs urged us to search for an alternative high throughput (HTP) cloning method. We implemented a modified Enzyme Free Cloning

  18. Fully Bayesian Analysis of High-throughput Targeted Metabolomics Assays

    Science.gov (United States)

    High-throughput metabolomic assays that allow simultaneous targeted screening of hundreds of metabolites have recently become available in kit form. Such assays provide a window into understanding changes to biochemical pathways due to chemical exposure or disease, and are usefu...

  19. Chemometric Optimization Studies in Catalysis Employing High-Throughput Experimentation

    NARCIS (Netherlands)

    Pereira, S.R.M.

    2008-01-01

    The main topic of this thesis is the investigation of the synergies between High-Throughput Experimentation (HTE) and Chemometric Optimization methodologies in Catalysis research and of the use of such methodologies to maximize the advantages of using HTE methods. Several case studies were analysed

  20. High throughput production of mouse monoclonal antibodies using antigen microarrays

    DEFF Research Database (Denmark)

    De Masi, Federico; Chiarella, P.; Wilhelm, H.

    2005-01-01

    Recent advances in proteomics research underscore the increasing need for high-affinity monoclonal antibodies, which are still generated with lengthy, low-throughput antibody production techniques. Here we present a semi-automated, high-throughput method of hybridoma generation and identification...

  1. High throughput 16S rRNA gene amplicon sequencing

    DEFF Research Database (Denmark)

    Nierychlo, Marta; Larsen, Poul; Jørgensen, Mads Koustrup

    16S rRNA gene amplicon sequencing has been developed over the past few years and is now ready to use for more comprehensive studies related to plant operation and optimization thanks to short analysis time, low cost, high throughput, and high taxonomic resolution. In this study we show how 16S r...

  2. High-throughput retrotransposon-based fluorescent markers: improved information content and allele discrimination

    Directory of Open Access Journals (Sweden)

    Baker David

    2009-07-01

    Full Text Available Abstract Background Dense genetic maps, together with the efficiency and accuracy of their construction, are integral to genetic studies and marker assisted selection for plant breeding. High-throughput multiplex markers that are robust and reproducible can contribute to both efficiency and accuracy. Multiplex markers are often dominant and so have low information content, this coupled with the pressure to find alternatives to radio-labelling, has led us to adapt the SSAP (sequence specific amplified polymorphism marker method from a 33P labelling procedure to fluorescently tagged markers analysed from an automated ABI 3730 xl platform. This method is illustrated for multiplexed SSAP markers based on retrotransposon insertions of pea and is applicable for the rapid and efficient generation of markers from genomes where repetitive element sequence information is available for primer design. We cross-reference SSAP markers previously generated using the 33P manual PAGE system to fluorescent peaks, and use these high-throughput fluorescent SSAP markers for further genetic studies in Pisum. Results The optimal conditions for the fluorescent-labelling method used a triplex set of primers in the PCR. These included a fluorescently labelled specific primer together with its unlabelled counterpart, plus an adapter-based primer with two bases of selection on the 3' end. The introduction of the unlabelled specific primer helped to optimise the fluorescent signal across the range of fragment sizes expected, and eliminated the need for extensive dilutions of PCR amplicons. The software (GeneMarker Version 1.6 used for the high-throughput data analysis provided an assessment of amplicon size in nucleotides, peak areas and fluorescence intensity in a table format, so providing additional information content for each marker. 
The method has been tested in a small-scale study with 12 pea accessions resulting in 467 polymorphic fluorescent SSAP markers of which

  3. Screening and Crystallization Plates for Manual and High-throughput Protein Crystal Growth

    Science.gov (United States)

    Thorne, Robert E. (Inventor); Berejnov, Viatcheslav (Inventor); Kalinin, Yevgeniy (Inventor)

    2010-01-01

    In one embodiment, a crystallization and screening plate comprises a plurality of cells open at a top and a bottom, a frame that defines the cells in the plate, and at least two films. The first film seals a top of the plate and the second film seals a bottom of the plate. At least one of the films is patterned to strongly pin the contact lines of drops dispensed onto it, fixing their position and shape. The present invention also includes methods and other devices for manual and high-throughput protein crystal growth.

  4. Bedrock Geologic Map of Vermont - Faults and Contacts

    Data.gov (United States)

    Vermont Center for Geographic Information — The bedrock geology was last mapped at a statewide scale 50 years ago at a scale of 1:250,000 (Doll and others, 1961). The 1961 map was compiled from 1:62,500-scale...

  5. High-throughput scanning of the rat genome using interspersed repetitive sequence-PCR markers.

    Science.gov (United States)

    Gösele, C; Hong, L; Kreitler, T; Rossmann, M; Hieke, B; Gross, U; Kramer, M; Himmelbauer, H; Bihoreau, M T; Kwitek-Black, A E; Twigger, S; Tonellato, P J; Jacob, H J; Schalkwyk, L C; Lindpaintner, K; Ganten, D; Lehrach, H; Knoblauch, M

    2000-11-01

    We report the establishment of a hybridization-based marker system for the rat genome based on the PCR amplification of interspersed repetitive sequences (IRS). Overall, 351 IRS markers were mapped within the rat genome. The IRS marker panel consists of 210 nonpolymorphic and 141 polymorphic markers that were screened for presence/absence polymorphism patterns in 38 different rat strains and substrains that are commonly used in biomedical research. The IRS marker panel was demonstrated to be useful for rapid genome screening in experimental rat crosses and high-throughput characterization of large-insert genomic library clones. Information on corresponding YAC clones is made available for this IRS marker set distributed over the whole rat genome. The two existing rat radiation hybrid maps were integrated by placing the IRS markers in both maps. The genetic and physical mapping data presented provide substantial information for ongoing positional cloning projects in the rat. Copyright 2000 Academic Press.

  6. Amino acid empirical contact energy definitions for fold recognition in the space of contact maps

    Directory of Open Access Journals (Sweden)

    Fogolari Federico

    2003-02-01

    Full Text Available Abstract Background Contradicting evidence has been presented in the literature concerning the effectiveness of empirical contact energies for fold recognition. Empirical contact energies are calculated on the basis of information available from selected protein structures, with respect to a defined reference state, according to the quasi-chemical approximation. Protein-solvent interactions are estimated from residue solvent accessibility. Results In the approach presented here, contact energies are derived from the potential of mean force theory, several definitions of contact are examined and their performance in fold recognition is evaluated on sets of decoy structures. The best definition of contact is tested, on a more realistic scenario, on all predictions including sidechains accepted in the CASP4 experiment. In 30 out of 35 cases the native structure is correctly recognized and best predictions are usually found among the 10 lowest energy predictions. Conclusion The definition of contact based on van der Waals radii of alpha carbon and side chain heavy atoms is seen to perform better than other definitions involving only alpha carbons, only beta carbons, all heavy atoms or only backbone atoms. An important prerequisite for the applicability of the approach is that the protein structure under study should not exhibit anomalous solvent accessibility, compared to soluble proteins whose structure is deposited in the Protein Data Bank. The combined evaluation of a solvent accessibility parameter and contact energy allows for an effective gross screening of predictive models.

  7. Amino acid empirical contact energy definitions for fold recognition in the space of contact maps.

    Science.gov (United States)

    Berrera, Marco; Molinari, Henriette; Fogolari, Federico

    2003-02-28

    Contradicting evidence has been presented in the literature concerning the effectiveness of empirical contact energies for fold recognition. Empirical contact energies are calculated on the basis of information available from selected protein structures, with respect to a defined reference state, according to the quasi-chemical approximation. Protein-solvent interactions are estimated from residue solvent accessibility. In the approach presented here, contact energies are derived from the potential of mean force theory, several definitions of contact are examined and their performance in fold recognition is evaluated on sets of decoy structures. The best definition of contact is tested, on a more realistic scenario, on all predictions including sidechains accepted in the CASP4 experiment. In 30 out of 35 cases the native structure is correctly recognized and best predictions are usually found among the 10 lowest energy predictions. The definition of contact based on van der Waals radii of alpha carbon and side chain heavy atoms is seen to perform better than other definitions involving only alpha carbons, only beta carbons, all heavy atoms or only backbone atoms. An important prerequisite for the applicability of the approach is that the protein structure under study should not exhibit anomalous solvent accessibility, compared to soluble proteins whose structure is deposited in the Protein Data Bank. The combined evaluation of a solvent accessibility parameter and contact energy allows for an effective gross screening of predictive models.
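    The quasi-chemical approximation mentioned above can be sketched as follows: contact energies (in kT units) from the log-ratio of observed to expected contact frequencies, with the expectation built from single-residue contact fractions. The contact list is a toy, not the paper's statistics, and the two-residue alphabet is purely illustrative.

```python
import math
from collections import Counter

def contact_energies(contacts):
    """Quasi-chemical contact energies from observed residue-type
    contact pairs: e_ij = -ln(f_obs / f_exp), with f_exp taken from
    the product of single-residue contact fractions."""
    pair_counts = Counter(tuple(sorted(p)) for p in contacts)
    single = Counter()
    for a, b in contacts:
        single[a] += 1
        single[b] += 1
    n_pairs = len(contacts)
    n_single = sum(single.values())
    energies = {}
    for (a, b), n in pair_counts.items():
        f_obs = n / n_pairs
        f_exp = (single[a] / n_single) * (single[b] / n_single)
        if a != b:
            f_exp *= 2   # two orderings of an unlike pair
        energies[(a, b)] = -math.log(f_obs / f_exp)
    return energies

# Toy contact list: hydrophobic (H) residues pair more often than chance
contacts = [("H", "H")] * 6 + [("H", "P")] * 2 + [("P", "P")] * 2
e = contact_energies(contacts)
print({k: round(v, 2) for k, v in e.items()})  # H-H favorable (negative)
```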

  8. Scanning droplet cell for high throughput electrochemical and photoelectrochemical measurements

    Science.gov (United States)

    Gregoire, John M.; Xiang, Chengxiang; Liu, Xiaonao; Marcin, Martin; Jin, Jian

    2013-02-01

    High throughput electrochemical techniques are widely applied in material discovery and optimization. For many applications, the most desirable electrochemical characterization requires a three-electrode cell under potentiostat control. In high throughput screening, a material library is explored by either employing an array of such cells, or rastering a single cell over the library. To attain this latter capability with unprecedented throughput, we have developed a highly integrated, compact scanning droplet cell that is optimized for rapid electrochemical and photoelectrochemical measurements. Using this cell, we screened a quaternary oxide library as (photo)electrocatalysts for the oxygen evolution (water splitting) reaction. High quality electrochemical measurements were carried out and key electrocatalytic properties were identified for each of 5456 samples with a throughput of 4 s per sample.

  9. High-throughput theoretical design of lithium battery materials

    Science.gov (United States)

    Shi-Gang, Ling; Jian, Gao; Rui-Juan, Xiao; Li-Quan, Chen

    2016-01-01

    The rapid evolution of high-throughput theoretical design schemes to discover new lithium battery materials is reviewed, including high-capacity cathodes, low-strain cathodes, anodes, solid state electrolytes, and electrolyte additives. With the development of efficient theoretical methods and inexpensive computers, high-throughput theoretical calculations have played an increasingly important role in the discovery of new materials. With the help of automatic simulation flow, many types of materials can be screened, optimized and designed from a structural database according to specific search criteria. In advanced cell technology, new materials for next generation lithium batteries are of great significance for achieving improved performance, and some representative criteria are: higher energy density, better safety, and faster charge/discharge speed. Project supported by the National Natural Science Foundation of China (Grant Nos. 11234013 and 51172274) and the National High Technology Research and Development Program of China (Grant No. 2015AA034201).

  10. High throughput screening of starch structures using carbohydrate microarrays.

    Science.gov (United States)

    Tanackovic, Vanja; Rydahl, Maja Gro; Pedersen, Henriette Lodberg; Motawia, Mohammed Saddik; Shaik, Shahnoor Sultana; Mikkelsen, Maria Dalgaard; Krunic, Susanne Langgaard; Fangel, Jonatan Ulrik; Willats, William George Tycho; Blennow, Andreas

    2016-07-29

    In this study we introduce the starch-recognising carbohydrate binding module family 20 (CBM20) from Aspergillus niger for screening biological variations in starch molecular structure using high throughput carbohydrate microarray technology. Defined linear, branched and phosphorylated maltooligosaccharides, pure starch samples including a variety of different structures with variations in the amylopectin branching pattern, amylose content and phosphate content, enzymatically modified starches and glycogen were included. Using this technique, different important structures, including amylose content and branching degrees could be differentiated in a high throughput fashion. The screening method was validated using transgenic barley grain analysed during development and subjected to germination. Extremely branched or linear structures were typically detected less readily than normal starch structures. The method offers the potential for rapidly analysing resistant and slowly digested dietary starches.

  11. High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME

    Science.gov (United States)

    Otis, Richard A.; Liu, Zi-Kui

    2017-05-01

    One foundational component of the integrated computational materials engineering (ICME) and Materials Genome Initiative is the computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts will be presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput, first-principles calculations and the CALPHAD method along with their potential propagations to downstream ICME modeling and simulations.

  12. Trade-Off Analysis in High-Throughput Materials Exploration.

    Science.gov (United States)

    Volety, Kalpana K; Huyberechts, Guido P J

    2017-03-13

    This Research Article presents a strategy to identify the optimum compositions in metal alloys with certain desired properties in a high-throughput screening environment, using a multiobjective optimization approach. In addition to identifying the optimum compositions in a primary screening, the strategy also points to regions of the compositional space where further exploration in a secondary screening could be carried out. The strategy for the primary screening combines two multiobjective optimization approaches, namely Pareto optimality and desirability functions. The experimental data used in the present study were collected from over 200 different compositions belonging to four different alloy systems. The metal alloys (comprising Fe, Ti, Al, Nb, Hf, Zr) were synthesized and screened using high-throughput technologies. The advantages of this kind of approach over traditional and comparatively simpler approaches, such as ranking and calculating figures of merit, are discussed.
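    The Pareto-optimality half of the strategy above can be sketched in a few lines: a composition survives the primary screening if no other composition is at least as good in every objective and strictly better in one. The alloy property values below are invented toy data, and the desirability-function step that follows in the article is omitted.

```python
# Sketch of Pareto-optimal selection over alloy property measurements.
# Both objectives are maximized here; data values are hypothetical.

def pareto_front(points):
    """Return indices of points not dominated by any other point.
    q dominates p if q >= p in all objectives and q > p in at least one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] >= p[k] for k in range(len(p))) and
            any(q[k] > p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# (strength, ductility) for four hypothetical alloy compositions
alloys = [(800, 10), (750, 14), (820, 8), (700, 9)]
print(pareto_front(alloys))
```

    The non-dominated set typically contains several trade-off compositions, which is exactly why the article pairs Pareto optimality with desirability functions to rank within the front.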

  13. High throughput screening of starch structures using carbohydrate microarrays

    DEFF Research Database (Denmark)

    Tanackovic, Vanja; Rydahl, Maja Gro; Pedersen, Henriette Lodberg

    2016-01-01

    In this study we introduce the starch-recognising carbohydrate binding module family 20 (CBM20) from Aspergillus niger for screening biological variations in starch molecular structure using high throughput carbohydrate microarray technology. Defined linear, branched and phosphorylated maltooligosaccharides, pure starch samples including a variety of different structures with variations in the amylopectin branching pattern, amylose content and phosphate content, enzymatically modified starches and glycogen were included. Using this technique, different important structures, including amylose content and branching degrees could be differentiated in a high throughput fashion. The screening method was validated using transgenic barley grain analysed during development and subjected to germination. Extremely branched or linear structures were typically detected less readily than normal starch structures. The method offers the potential for rapidly analysing resistant and slowly digested dietary starches.

  14. A high-throughput multiplex method adapted for GMO detection.

    Science.gov (United States)

    Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique

    2008-12-24

    A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxa endogenous reference genes, from GMO constructions, screening targets, construct-specific, and event-specific targets, and finally from donor organisms. This assay avoids certain shortcomings of multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.

  15. High-throughput screening for modulators of cellular contractile force

    CERN Document Server

    Park, Chan Young; Tambe, Dhananjay; Chen, Bohao; Lavoie, Tera; Dowell, Maria; Simeonov, Anton; Maloney, David J; Marinkovic, Aleksandar; Tschumperlin, Daniel J; Burger, Stephanie; Frykenberg, Matthew; Butler, James P; Stamer, W Daniel; Johnson, Mark; Solway, Julian; Fredberg, Jeffrey J; Krishnan, Ramaswamy

    2014-01-01

    When cellular contractile forces are central to pathophysiology, these forces comprise a logical target of therapy. Nevertheless, existing high-throughput screens are limited to upstream signaling intermediates with poorly defined relationship to such a physiological endpoint. Using cellular force as the target, here we screened libraries to identify novel drug candidates in the case of human airway smooth muscle cells in the context of asthma, and also in the case of Schlemm's canal endothelial cells in the context of glaucoma. This approach identified several drug candidates for both asthma and glaucoma. We attained rates of 1000 compounds per screening day, thus establishing a force-based cellular platform for high-throughput drug discovery.

  16. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Full Text Available Abstract Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.

  17. High-throughput optical screening of cellular mechanotransduction

    OpenAIRE

    Compton, JL; Luo, JC; Ma, H.; Botvinick, E; Venugopalan, V

    2014-01-01

    We introduce an optical platform for rapid, high-throughput screening of exogenous molecules that affect cellular mechanotransduction. Our method initiates mechanotransduction in adherent cells using single laser-microbeam generated microcavitation bubbles without requiring flow chambers or microfluidics. These microcavitation bubbles expose adherent cells to a microtsunami, a transient microscale burst of hydrodynamic shear stress, which stimulates cells over areas approaching 1 mm2. We demo...

  18. Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.

  19. High-throughput sequence alignment using Graphics Processing Units.

    Science.gov (United States)

    Schatz, Michael C; Trapnell, Cole; Delcher, Arthur L; Varshney, Amitabh

    2007-12-10

    The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.
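    The task MUMmerGPU parallelizes — finding every exact occurrence of each query read in a reference indexed as a suffix tree — can be stated compactly as a serial baseline. The sketch below uses plain string search instead of a suffix tree, and the sequences are made-up toy data, so it illustrates the problem rather than MUMmerGPU's actual algorithm.

```python
# Serial baseline of exact read mapping: for each read, report every
# start position where it occurs verbatim in the reference. MUMmerGPU
# runs this kind of query in parallel over thousands of reads on a GPU.

def map_reads(reference, reads):
    """Return {read: [start positions of exact matches in reference]}."""
    hits = {}
    for read in reads:
        positions, start = [], reference.find(read)
        while start != -1:
            positions.append(start)
            start = reference.find(read, start + 1)
        hits[read] = positions
    return hits

ref = "ACGTACGTGACGT"   # toy reference sequence
print(map_reads(ref, ["ACGT", "GAC", "TTT"]))
```

    Replacing the linear scan with a suffix-tree lookup makes each query proportional to the read length rather than the reference length, which is what makes the GPU batch formulation pay off.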

  20. High-throughput evaluation of synthetic metabolic pathways.

    Science.gov (United States)

    Klesmith, Justin R; Whitehead, Timothy A

    2016-03-01

    A central challenge in the field of metabolic engineering is the efficient identification of a metabolic pathway genotype that maximizes specific productivity over a robust range of process conditions. Here we review current methods for optimizing specific productivity of metabolic pathways in living cells. New tools for library generation, computational analysis of pathway sequence-flux space, and high-throughput screening and selection techniques are discussed.

  1. The high-throughput highway to computational materials design.

    Science.gov (United States)

    Curtarolo, Stefano; Hart, Gus L W; Nardelli, Marco Buongiorno; Mingo, Natalio; Sanvito, Stefano; Levy, Ohad

    2013-03-01

    High-throughput computational materials design is an emerging area of materials science. By combining advanced thermodynamic and electronic-structure methods with intelligent data mining and database construction, and exploiting the power of current supercomputer architectures, scientists generate, manage and analyse enormous data repositories for the discovery of novel materials. In this Review we provide a current snapshot of this rapidly evolving field, and highlight the challenges and opportunities that lie ahead.

  2. Web-based visual analysis for high-throughput genomics.

    Science.gov (United States)

    Goecks, Jeremy; Eberhard, Carl; Too, Tomithy; Nekrutenko, Anton; Taylor, James

    2013-06-13

    Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy. Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics.

  3. Validation of high throughput sequencing and microbial forensics applications

    OpenAIRE

    Budowle, Bruce; Connell, Nancy D.; Bielecka-Oder, Anna; Rita R Colwell; Corbett, Cindi R.; Fletcher, Jacqueline; Forsman, Mats; Kadavy, Dana R; Markotic, Alemka; Morse, Stephen A.; Murch, Randall S; Sajantila, Antti; Schemes, Sarah E; Ternus, Krista L; Turner, Stephen D

    2014-01-01

    Abstract High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results a...

  4. Condor-COPASI: high-throughput computing for biochemical networks

    Directory of Open Access Journals (Sweden)

    Kent Edward

    2012-07-01

    Full Text Available Abstract Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.
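    The split/merge pattern Condor-COPASI relies on — cutting a large task into chunks, running each chunk as an independent job, and merging the partial results in order — can be sketched without Condor itself. The `model` function below is a stand-in for one COPASI simulation, and the thread pool stands in for a Condor pool; this is an illustration of the pattern, not Condor-COPASI's implementation.

```python
# Sketch of transparent task splitting: a parameter scan is chunked,
# each chunk runs as a separate job, and results are merged in order.

from concurrent.futures import ThreadPoolExecutor

def model(k):
    # Stand-in for one simulation run with parameter k.
    return k * k

def chunk(seq, n):
    """Split seq into n nearly equal contiguous parts."""
    size, rem = divmod(len(seq), n)
    parts, start = [], 0
    for i in range(n):
        end = start + size + (1 if i < rem else 0)
        parts.append(seq[start:end])
        start = end
    return parts

def run_scan(params, n_jobs=4):
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        partials = list(pool.map(lambda part: [model(k) for k in part],
                                 chunk(params, n_jobs)))
    return [r for part in partials for r in part]

print(run_scan(list(range(10))))  # same result as a serial scan
```

    Because `Executor.map` preserves submission order, the merged output is identical to a serial scan, which is what lets a tool like Condor-COPASI hide the parallelism from the user.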

  5. High-throughput computational and experimental techniques in structural genomics.

    Science.gov (United States)

    Chance, Mark R; Fiser, Andras; Sali, Andrej; Pieper, Ursula; Eswar, Narayanan; Xu, Guiping; Fajardo, J Eduardo; Radhakannan, Thirumuruhan; Marinkovic, Nebojsa

    2004-10-01

    Structural genomics has as its goal the provision of structural information for all possible ORF sequences through a combination of experimental and computational approaches. The access to genome sequences and cloning resources from an ever-widening array of organisms is driving high-throughput structural studies by the New York Structural Genomics Research Consortium. In this report, we outline the progress of the Consortium in establishing its pipeline for structural genomics, and some of the experimental and bioinformatics efforts leading to structural annotation of proteins. The Consortium has established a pipeline for structural biology studies, automated modeling of ORF sequences using solved (template) structures, and a novel high-throughput approach (metallomics) to examining the metal binding to purified protein targets. The Consortium has so far produced 493 purified proteins from >1077 expression vectors. A total of 95 have resulted in crystal structures, and 81 are deposited in the Protein Data Bank (PDB). Comparative modeling of these structures has generated >40,000 structural models. We also initiated a high-throughput metal analysis of the purified proteins; this has determined that 10%-15% of the targets contain a stoichiometric structural or catalytic transition metal atom. The progress of the structural genomics centers in the U.S. and around the world suggests that the goal of providing useful structural information on most all ORF domains will be realized. This projected resource will provide structural biology information important to understanding the function of most proteins of the cell.

  6. Mapping of the Rsd contact site on the sigma 70 subunit of Escherichia coli RNA polymerase.

    Science.gov (United States)

    Jishage, M; Dasgupta, D; Ishihama, A

    2001-05-01

    Rsd (regulator of sigma D) is an anti-sigma factor for the Escherichia coli RNA polymerase sigma(70) subunit. The contact site of Rsd on sigma(70) was analyzed after mapping of the contact-dependent cleavage sites by Rsd-tethered iron-p-bromoacetamidobenzyl EDTA and by analysis of the complex formation between Ala-substituted sigma(70) and Rsd. Results indicate that the Rsd contact site is located downstream of the promoter -35 recognition helix-turn-helix motif within region 4, overlapping with the regions involved in interaction with both the core enzyme and sigma(70)-contacting transcription factors.

  7. Toward high-throughput, multicriteria protein-structure comparison and analysis.

    Science.gov (United States)

    Shah, Azhar Ali; Folino, Gianluigi; Krasnogor, Natalio

    2010-06-01

    Protein-structure comparison (PSC) is an essential component of biomedical research as it impacts on, e.g., drug design, molecular docking, protein folding and structure prediction algorithms as well as being essential to the assessment of these predictions. Each of these applications, as well as many others where molecular comparison plays an important role, requires a different notion of similarity that naturally lead to the multicriteria PSC (MC-PSC) problem. Protein (Structure) Comparison, Knowledge, Similarity, and Information (ProCKSI) (www.procksi.org) provides algorithmic solutions for the MC-PSC problem by means of an enhanced structural comparison that relies on the principled application of information fusion to similarity assessments derived from multiple comparison methods. Current MC-PSC works well for moderately sized datasets and it is time consuming as it provides public service to multiple users. Many of the structural bioinformatics applications mentioned above would benefit from the ability to perform, for a dedicated user, thousands or tens of thousands of comparisons through multiple methods in real time, a capacity beyond our current technology. In this paper, we take a key step into that direction by means of a high-throughput distributed reimplementation of ProCKSI for very large datasets. The core of the proposed framework lies in the design of an innovative distributed algorithm that runs on each compute node in a cluster/grid environment to perform structure comparison of a given subset of input structures using some of the most popular PSC methods [e.g., universal similarity metric (USM), maximum contact map overlap (MaxCMO), fast alignment and search tool (FAST), distance alignment (DaliLite), combinatorial extension (CE), template modeling alignment (TMAlign)]. We follow this with a procedure of distributed consensus building. 
Thus, the new algorithms proposed here achieve ProCKSI's similarity assessment quality but with a fraction of
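    The information-fusion step at the heart of ProCKSI's multicriteria comparison — combining similarity assessments from several PSC methods into one consensus — can be sketched as an element-wise average of normalized similarity matrices. The matrices below are invented toy scores, and simple averaging is only one of many possible fusion rules.

```python
# Sketch of consensus building over multiple protein-structure
# comparison methods: average the normalized similarity matrices
# produced by each method into a single fused matrix.

def fuse(matrices):
    """Element-wise mean of equally sized similarity matrices."""
    n = len(matrices)
    rows, cols = len(matrices[0]), len(matrices[0][0])
    return [[sum(m[i][j] for m in matrices) / n for j in range(cols)]
            for i in range(rows)]

# Toy pairwise similarities for two structures from two methods:
usm    = [[1.0, 0.4], [0.4, 1.0]]   # e.g. scores in the style of USM
maxcmo = [[1.0, 0.6], [0.6, 1.0]]   # e.g. scores in the style of MaxCMO
print(fuse([usm, maxcmo]))
```

    In the distributed setting described above, each compute node would produce such per-method matrices for its subset of structures before the consensus step merges them.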

  8. 76 FR 28990 - Ultra High Throughput Sequencing for Clinical Diagnostic Applications-Approaches To Assess...

    Science.gov (United States)

    2011-05-19

    ... Clinical Diagnostic Applications--Approaches To Assess Analytical Validity.'' The purpose of the public... approaches to assess analytical validity of ultra high throughput sequencing for clinical diagnostic... HUMAN SERVICES Food and Drug Administration Ultra High Throughput Sequencing for Clinical Diagnostic...

  9. High-throughput cultivation and screening platform for unicellular phototrophs.

    Science.gov (United States)

    Tillich, Ulrich M; Wolter, Nick; Schulze, Katja; Kramer, Dan; Brödel, Oliver; Frohme, Marcus

    2014-09-16

    High-throughput cultivation and screening methods allow a parallel, miniaturized and cost efficient processing of many samples. These methods however, have not been generally established for phototrophic organisms such as microalgae or cyanobacteria. In this work we describe and test high-throughput methods with the model organism Synechocystis sp. PCC6803. The required technical automation for these processes was achieved with a Tecan Freedom Evo 200 pipetting robot. The cultivation was performed in 2.2 ml deepwell microtiter plates within a cultivation chamber outfitted with programmable shaking conditions, variable illumination, variable temperature, and an adjustable CO2 atmosphere. Each microtiter-well within the chamber functions as a separate cultivation vessel with reproducible conditions. The automated measurement of various parameters such as growth, full absorption spectrum, chlorophyll concentration, MALDI-TOF-MS, as well as a novel vitality measurement protocol, have already been established and can be monitored during cultivation. Measurement of growth parameters can be used as inputs for the system to allow for periodic automatic dilutions and therefore a semi-continuous cultivation of hundreds of cultures in parallel. The system also allows the automatic generation of mid and long term backups of cultures to repeat experiments or to retrieve strains of interest. The presented platform allows for high-throughput cultivation and screening of Synechocystis sp. PCC6803. The platform should be usable for many phototrophic microorganisms as is, and be adaptable for even more. A variety of analyses are already established and the platform is easily expandable both in quality, i.e. with further parameters to screen for additional targets and in quantity, i.e. size or number of processed samples.
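    The periodic automatic dilution described above amounts to turbidostat logic: when a culture's measured density exceeds a setpoint, add enough fresh medium to bring it back to a target density. The setpoints and volumes below are illustrative, not the platform's actual parameters.

```python
# Sketch of the dilution decision for semi-continuous cultivation:
# from conservation of cells, od_measured * V = od_target * (V + V_add).

def dilution_volume(od_measured, od_target, culture_volume_ml):
    """Fresh-medium volume (ml) to add so the OD drops to od_target."""
    if od_measured <= od_target:
        return 0.0   # below setpoint: no dilution this cycle
    return culture_volume_ml * (od_measured / od_target - 1.0)

# A 2.2 ml deepwell culture at OD 0.8 diluted back to OD 0.4
# needs its volume doubled:
print(round(dilution_volume(0.8, 0.4, 2.2), 2))
```

    In practice the platform would cap the added volume at the well's remaining capacity and discard excess culture, details omitted here.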

  10. A high throughput architecture for a low complexity soft-output demapping algorithm

    Science.gov (United States)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in double iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms. We implement these architectures on a FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture based on the best low-complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray mapped 16-QAM modulation on Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
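    The max-log approximation is the usual starting point for the low-complexity soft demappers evaluated above. A minimal sketch for one axis of Gray-mapped 16-QAM (2 bits per PAM dimension) follows; the specific level-to-bit mapping and the LLR sign convention (positive = bit 0 more likely) are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch of a max-log soft demapper for one axis of Gray-mapped
# 16-QAM. LLR(b) ≈ (min dist² over symbols with b=1
#                   - min dist² over symbols with b=0) / (2·σ²).

# Assumed Gray mapping of the 4-PAM levels to bit pairs (b0, b1):
GRAY_PAM4 = {-3.0: (0, 0), -1.0: (0, 1), 1.0: (1, 1), 3.0: (1, 0)}

def maxlog_llr(r, noise_var):
    """Return (LLR_b0, LLR_b1) for received amplitude r on one axis."""
    llrs = []
    for bit_idx in range(2):
        d0 = min((r - s) ** 2 for s, bits in GRAY_PAM4.items() if bits[bit_idx] == 0)
        d1 = min((r - s) ** 2 for s, bits in GRAY_PAM4.items() if bits[bit_idx] == 1)
        llrs.append((d1 - d0) / (2.0 * noise_var))
    return tuple(llrs)

# A sample received near +3 should yield confident soft decisions
# b0 = 1 (negative LLR) and b1 = 0 (positive LLR):
l0, l1 = maxlog_llr(2.9, noise_var=0.5)
print(l0 < 0, l1 > 0)
```

    The hardware appeal of max-log is that only squared distances and comparisons are needed, with no exponentials or logarithms, which is what keeps the FPGA footprint small.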

  11. High-throughput sequencing enhanced phage display enables the identification of patient-specific epitope motifs in serum

    DEFF Research Database (Denmark)

    Christiansen, Anders; Kringelum, Jens Vindahl; Hansen, Christian Skjødt

    2015-01-01

    Phage display is a prominent screening technique with a multitude of applications including therapeutic antibody development and mapping of antigen epitopes. In this study, phages were selected based on their interaction with patient serum and exhaustively characterised by high-throughput sequencing. A bioinformatics approach was developed in order to identify peptide motifs of interest based on clustering and contrasting to control samples. Comparison of patient and control samples confirmed a major issue in phage display, namely the selection of unspecific peptides. The potential ... display by (i) enabling the analysis of complex biological samples, (ii) circumventing the traditional laborious picking and functional testing of individual phage clones and (iii) reducing the number of selection rounds.

  12. Adaptive Sampling for High Throughput Data Using Similarity Measures

    Energy Technology Data Exchange (ETDEWEB)

    Bulaevskaya, V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sales, A. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    The need for adaptive sampling arises in the context of high throughput data because the rates of data arrival are many orders of magnitude larger than the rates at which they can be analyzed. A very fast decision must therefore be made regarding the value of each incoming observation and its inclusion in the analysis. In this report we discuss one approach to adaptive sampling, based on the new data point’s similarity to the other data points being considered for inclusion. We present preliminary results for one real and one synthetic data set.
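    The similarity-based inclusion rule described above can be sketched directly: a streaming point is retained only if it is sufficiently dissimilar (here, Euclidean distance above a threshold) from every point already retained. The threshold and data are illustrative; the report's actual similarity measures may differ.

```python
# Sketch of similarity-based adaptive sampling for a high-throughput
# stream: keep a point only if it is novel relative to what we kept.

import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def adaptive_sample(stream, threshold):
    retained = []
    for point in stream:
        if all(euclid(point, kept) > threshold for kept in retained):
            retained.append(point)   # novel enough: include in analysis
        # else: discard as redundant with already-retained points
    return retained

stream = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9), (0.0, 9.0)]
print(adaptive_sample(stream, threshold=1.0))
```

    The per-point decision cost grows with the retained set, so practical versions bound or index the retained points to keep the decision fast enough for the data arrival rate.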

  13. High-throughput sequencing: a roadmap toward community ecology.

    Science.gov (United States)

    Poisot, Timothée; Péquin, Bérangère; Gravel, Dominique

    2013-04-01

    High-throughput sequencing is becoming increasingly important in microbial ecology, yet it is surprisingly under-used to generate or test biogeographic hypotheses. In this contribution, we highlight how adding these methods to the ecologist toolbox will allow the detection of new patterns, and will help our understanding of the structure and dynamics of diversity. Starting with a review of ecological questions that can be addressed, we move on to the technical and analytical issues that will benefit from an increased collaboration between different disciplines.

  14. UAV-based high-throughput phenotyping in legume crops

    Science.gov (United States)

    Sankaran, Sindhuja; Khot, Lav R.; Quirós, Juan; Vandemark, George J.; McGee, Rebecca J.

    2016-05-01

    In plant breeding, one of the biggest obstacles in genetic improvement is the lack of proven rapid methods for measuring plant responses in field conditions. Therefore, the major objective of this research was to evaluate the feasibility of utilizing high-throughput remote sensing technology for rapid measurement of phenotyping traits in legume crops. The plant responses of several chickpea and pea varieties to the environment were assessed with an unmanned aerial vehicle (UAV) integrated with multispectral imaging sensors. Our preliminary assessment showed that the vegetation indices are strongly correlated with the measured phenotyping traits.
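    Multispectral UAV imagery is typically reduced to vegetation indices before correlation with phenotyping traits; NDVI is the most common. A minimal sketch with made-up per-plot mean reflectances (values in [0, 1]) follows; the plot names and numbers are hypothetical.

```python
# NDVI from mean near-infrared (NIR) and red reflectance per plot.
# Healthy vegetation reflects NIR strongly and absorbs red, so NDVI
# approaches 1; bare soil sits near 0.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

plots = {"chickpea_1": (0.60, 0.10), "pea_1": (0.55, 0.15), "bare_soil": (0.30, 0.25)}
for name, (nir, red) in plots.items():
    print(name, round(ndvi(nir, red), 3))
```

    In a phenotyping workflow these per-plot index values would then be correlated against ground-truth traits such as biomass or seed yield.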

  15. REDItools: high-throughput RNA editing detection made easy.

    Science.gov (United States)

    Picardi, Ernesto; Pesole, Graziano

    2013-07-15

    The reliable detection of RNA editing sites from massive sequencing data remains challenging and, although several methodologies have been proposed, no computational tools have been released to date. Here, we introduce REDItools, a suite of Python scripts to perform high-throughput investigation of RNA editing using next-generation sequencing data. REDItools is written in the Python programming language and freely available at http://code.google.com/p/reditools/. ernesto.picardi@uniba.it or graziano.pesole@uniba.it Supplementary data are available at Bioinformatics online.
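    At its core, RNA editing detection compares matched DNA and RNA base calls per genomic position. The toy sketch below calls a candidate A-to-I site (read as A→G) where the DNA is A but the RNA reads show G above a frequency cutoff; the positions, counts, and thresholds are invented and this is a simplification of what REDItools actually does (which includes quality filtering, strand handling, and SNP masking).

```python
# Toy version of the DNA/RNA comparison behind RNA-editing detection.

def editing_sites(positions, min_g_freq=0.1, min_coverage=10):
    """positions: {pos: {'dna': base, 'rna_counts': {base: count}}}.
    Return sorted positions where DNA=A but RNA shows G above cutoff."""
    sites = []
    for pos, info in positions.items():
        counts = info["rna_counts"]
        cov = sum(counts.values())
        if info["dna"] == "A" and cov >= min_coverage:
            if counts.get("G", 0) / cov >= min_g_freq:
                sites.append(pos)
    return sorted(sites)

data = {
    101: {"dna": "A", "rna_counts": {"A": 70, "G": 30}},   # candidate site
    102: {"dna": "A", "rna_counts": {"A": 99, "G": 1}},    # below cutoff
    103: {"dna": "C", "rna_counts": {"C": 50, "T": 50}},   # DNA is not A
}
print(editing_sites(data))
```

    Real pipelines must also exclude genomic SNPs and sequencing errors, which is where most of the methodological difficulty noted in the abstract lies.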

  16. High throughput platforms for structural genomics of integral membrane proteins.

    Science.gov (United States)

    Mancia, Filippo; Love, James

    2011-08-01

    Structural genomics approaches on integral membrane proteins have been postulated for over a decade, yet specific efforts lag years behind their soluble counterparts. Indeed, high-throughput methodologies for the production and characterization of prokaryotic integral membrane proteins are only now emerging, while large-scale efforts for eukaryotic ones are still in their infancy. Presented here is a review of recent literature on actively ongoing structural genomics of membrane protein initiatives, with a focus on those implementing techniques aimed at increasing our rate of success for this class of macromolecules. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. High-throughput epitope identification for snakebite antivenom

    DEFF Research Database (Denmark)

    Engmark, Mikael; De Masi, Federico; Laustsen, Andreas Hougaard

    Insight into the epitopic recognition pattern of polyclonal antivenoms is a strong tool for accurate prediction of antivenom cross-reactivity and provides a basis for the design of novel antivenoms. In this work, a high-throughput approach was applied to characterize linear epitopes in 966 individual toxins from pit vipers (Crotalidae) using the ICP Crotalidae antivenom. Due to an abundance of snake venom metalloproteinases and phospholipase A2s in the venoms used for production of the investigated antivenom, this study focuses on these toxin families.

  18. Bifrost: Stream processing framework for high-throughput applications

    Science.gov (United States)

    Barsdell, Ben; Price, Daniel; Cranmer, Miles; Garsden, Hugh; Dowell, Jayce

    2017-11-01

    Bifrost is a stream processing framework that eases the development of high-throughput processing CPU/GPU pipelines. It is designed for digital signal processing (DSP) applications within radio astronomy. Bifrost uses a flexible ring buffer implementation that allows different signal processing blocks to be connected to form a pipeline. Each block may be assigned to a CPU core, and the ring buffers are used to transport data to and from blocks. Processing blocks may be run on either the CPU or GPU, and the ring buffer will take care of memory copies between the CPU and GPU spaces.
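    The block-and-ring-buffer design described above can be sketched with standard Python threads and bounded queues standing in for Bifrost's ring buffers. This is an illustration of the architecture only, not the actual Bifrost API; all names and values are invented:

```python
# Illustrative sketch (not the real Bifrost API): two processing blocks
# connected by a bounded queue standing in for Bifrost's ring buffer.
import threading
import queue

def source(out_ring, n):
    # Produce n "frames" of raw samples into the ring.
    for i in range(n):
        out_ring.put([i, i + 1, i + 2])
    out_ring.put(None)  # sentinel: end of stream

def scale_block(in_ring, out_ring, gain):
    # A processing block: read frames, apply a gain, pass them on.
    while True:
        frame = in_ring.get()
        if frame is None:
            out_ring.put(None)
            break
        out_ring.put([gain * x for x in frame])

ring_a = queue.Queue(maxsize=4)   # bounded, like a ring buffer
ring_b = queue.Queue(maxsize=4)

threads = [
    threading.Thread(target=source, args=(ring_a, 3)),
    threading.Thread(target=scale_block, args=(ring_a, ring_b, 10)),
]
for t in threads:
    t.start()

results = []
while True:
    frame = ring_b.get()
    if frame is None:
        break
    results.append(frame)
for t in threads:
    t.join()

print(results)  # [[0, 10, 20], [10, 20, 30], [20, 30, 40]]
```

    As in Bifrost, each block runs concurrently and the bounded buffer transports data between them; the real framework additionally handles CPU core pinning and CPU/GPU memory copies.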

  19. High-throughput DNA sequencing: a genomic data manufacturing process.

    Science.gov (United States)

    Huang, G M

    1999-01-01

    Progress trends in automated DNA sequencing operations are reviewed. Technological development in sequencing instruments, enzymatic chemistry, and robotic stations has resulted in ever-increasing capacity of sequence data production. This progress leads to a higher demand on laboratory information management and data quality assessment. High-throughput laboratories face the challenge of organizational management as well as technology management. Engineering principles of process control should be adopted in this biological data manufacturing procedure. While various systems attempt to provide solutions to automate different parts of, or even the entire, process, new technical advances will continue to change the paradigm and provide new challenges.

  20. Spectrophotometric Enzyme Assays for High-Throughput Screening

    Directory of Open Access Journals (Sweden)

    Jean-Louis Reymond

    2004-01-01

    This paper reviews high-throughput screening enzyme assays developed in our laboratory over the last ten years. These enzyme assays were initially developed for the purpose of discovering catalytic antibodies by screening cell culture supernatants, but have proved generally useful for testing enzyme activities. Examples include TLC-based screening using acridone-labeled substrates, fluorogenic assays based on the β-elimination of umbelliferone or nitrophenol, and indirect assays such as the back-titration method with adrenaline and the copper-calcein fluorescence assay for amino acids.
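    A chromogenic assay like the nitrophenol-based ones mentioned above reduces to a Beer-Lambert calculation: an absorbance slope is converted into a product formation rate. A minimal sketch, with an assumed extinction coefficient and well volume:

```python
# Sketch of how a chromogenic assay converts an absorbance slope into an
# enzyme rate via the Beer-Lambert law: v = (dA/dt) / (epsilon * l).
# The extinction coefficient and well volume below are illustrative assumptions.
def rate_umol_per_min(dA_per_min, epsilon_M_cm, path_cm=1.0, volume_L=2e-4):
    # Beer-Lambert: A = epsilon * c * l  ->  dc/dt in mol/L/min
    dc_per_min = dA_per_min / (epsilon_M_cm * path_cm)
    return dc_per_min * volume_L * 1e6  # mol -> umol produced in the well

# Nitrophenol release read at 405 nm in a 200 uL well (assumed epsilon).
v = rate_umol_per_min(dA_per_min=0.18, epsilon_M_cm=18000.0)
print(round(v, 4))  # 0.002 umol of product per minute
```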

  1. Gold nanoparticles for high-throughput genotyping of long-range haplotypes

    Science.gov (United States)

    Chen, Peng; Pan, Dun; Fan, Chunhai; Chen, Jianhua; Huang, Ke; Wang, Dongfang; Zhang, Honglu; Li, You; Feng, Guoyin; Liang, Peiji; He, Lin; Shi, Yongyong

    2011-10-01

    Completion of the Human Genome Project and the HapMap Project has led to increasing demands for mapping complex traits in humans to understand the aetiology of diseases. Identifying variations in the DNA sequence, which affect how we develop disease and respond to pathogens and drugs, is important for this purpose, but it is difficult to identify these variations in large sample sets. Here we show that through a combination of capillary sequencing and polymerase chain reaction assisted by gold nanoparticles, it is possible to identify several DNA variations that are associated with age-related macular degeneration and psoriasis on significant regions of human genomic DNA. Our method is accurate and promising for large-scale and high-throughput genetic analysis of susceptibility towards disease and drug resistance.

  2. MassCode liquid arrays as a tool for multiplexed high-throughput genetic profiling.

    Directory of Open Access Journals (Sweden)

    Gregory S Richmond

    Multiplexed detection assays that analyze a modest number of nucleic acid targets over large sample sets are emerging as the preferred testing approach in such applications as routine pathogen typing, outbreak monitoring, and diagnostics. However, very few DNA testing platforms have proven to offer a solution for mid-plexed analysis that is high-throughput, sensitive, and with a low cost per test. In this work, an enhanced genotyping method based on MassCode technology was devised and integrated as part of a high-throughput mid-plexing analytical system that facilitates robust qualitative differential detection of DNA targets. Samples are first analyzed using MassCode PCR (MC-PCR) performed with an array of primer sets encoded with unique mass tags. Lambda exonuclease and an array of MassCode probes are then contacted with MC-PCR products for further interrogation and target sequences are specifically identified. Primer and probe hybridizations occur in homogeneous solution, a clear advantage over micro- or nanoparticle suspension arrays. The two cognate tags coupled to resultant MassCode hybrids are detected in an automated process using a benchtop single quadrupole mass spectrometer. The prospective value of using MassCode probe arrays for multiplexed bioanalysis was demonstrated after developing a 14plex proof of concept assay designed to subtype a select panel of Salmonella enterica serogroups and serovars. This MassCode system is very flexible and test panels can be customized to include more, less, or different markers.
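    The readout step, in which the two cognate mass tags on a hybrid identify the target, can be sketched as a tolerance lookup against the panel. The tag masses, target names, and tolerance below are invented for illustration, not values from the MassCode assay:

```python
# Hypothetical sketch of the MassCode-style readout logic: each target is
# identified by the pair of mass tags (primer tag, probe tag) observed on a
# hybrid. Tag masses and target names are invented.
TAG_PAIR_TO_TARGET = {
    (331.2, 417.3): "Salmonella serogroup B",
    (331.2, 445.3): "Salmonella serogroup D",
    (359.2, 417.3): "Salmonella Typhimurium",
}

def identify(observed_masses, tolerance=0.2):
    """Match an observed (primer, probe) mass pair against the panel."""
    hits = []
    for (m1, m2), target in TAG_PAIR_TO_TARGET.items():
        if (abs(observed_masses[0] - m1) <= tolerance
                and abs(observed_masses[1] - m2) <= tolerance):
            hits.append(target)
    return hits

print(identify((331.3, 417.2)))  # ['Salmonella serogroup B']
```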

  3. High throughput instruments, methods, and informatics for systems biology.

    Energy Technology Data Exchange (ETDEWEB)

    Sinclair, Michael B.; Cowie, Jim R. (New Mexico State University, Las Cruces, NM); Van Benthem, Mark Hilary; Wylie, Brian Neil; Davidson, George S.; Haaland, David Michael; Timlin, Jerilyn Ann; Aragon, Anthony D. (University of New Mexico, Albuquerque, NM); Keenan, Michael Robert; Boyack, Kevin W.; Thomas, Edward Victor; Werner-Washburne, Margaret C. (University of New Mexico, Albuquerque, NM); Mosquera-Caro, Monica P. (University of New Mexico, Albuquerque, NM); Martinez, M. Juanita (University of New Mexico, Albuquerque, NM); Martin, Shawn Bryan; Willman, Cheryl L. (University of New Mexico, Albuquerque, NM)

    2003-12-01

    High throughput instruments and analysis techniques are required in order to make good use of the genomic sequences that have recently become available for many species, including humans. These instruments and methods must work with tens of thousands of genes simultaneously, and must be able to identify the small subsets of those genes that are implicated in the observed phenotypes, or, for instance, in responses to therapies. Microarrays represent one such high throughput method, which continue to find increasingly broad application. This project has improved microarray technology in several important areas. First, we developed the hyperspectral scanner, which has discovered and diagnosed numerous flaws in techniques broadly employed by microarray researchers. Second, we used a series of statistically designed experiments to identify and correct errors in our microarray data to dramatically improve the accuracy, precision, and repeatability of the microarray gene expression data. Third, our research developed new informatics techniques to identify genes with significantly different expression levels. Finally, natural language processing techniques were applied to improve our ability to make use of online literature annotating the important genes. In combination, this research has improved the reliability and precision of laboratory methods and instruments, while also enabling substantially faster analysis and discovery.

  4. High throughput inclusion body sizing: Nano particle tracking analysis.

    Science.gov (United States)

    Reichelt, Wieland N; Kaineder, Andreas; Brillmann, Markus; Neutsch, Lukas; Taschauer, Alexander; Lohninger, Hans; Herwig, Christoph

    2017-06-01

    The expression of pharmaceutically relevant proteins in Escherichia coli frequently triggers inclusion body (IB) formation caused by protein aggregation. In the scientific literature, substantial effort has been devoted to the quantification of IB size. However, particle-based methods used up to this point to analyze the physical properties of representative numbers of IBs lack sensitivity and/or orthogonal verification. Using high-pressure freezing and automated freeze substitution for transmission electron microscopy (TEM), the cytosolic inclusion body structure was preserved within the cells. TEM imaging in combination with manual grey-scale image segmentation allowed the quantification of the relative area covered by the inclusion body within the cytosol. As a high-throughput method, nanoparticle tracking analysis (NTA) enables one to derive the diameter of inclusion bodies in cell homogenate from a measurement of their Brownian motion. The NTA analysis of fixated (glutaraldehyde) and non-fixated IBs suggests that high-pressure homogenization annihilates the native physiological shape of IBs. Nevertheless, the ratio of particle counts in non-fixated and fixated samples could potentially serve as a factor for particle stickiness. In this contribution, we establish image segmentation of TEM pictures as an orthogonal method to size biologic particles in the cytosol of cells. More importantly, NTA has been established as a particle-based, fast, and high-throughput method (1000-3000 particles), thus constituting a much more accurate and representative analysis than currently available methods. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
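    NTA sizes particles by relating their observed Brownian motion to a hydrodynamic diameter through the Stokes-Einstein relation. A minimal sketch of that calculation, assuming 2D tracking in water at 25 °C (the MSD value and frame time are invented example numbers):

```python
# Stokes-Einstein sizing as used in NTA-style analysis: the mean squared
# displacement of a tracked particle gives its diffusion coefficient, which
# gives its hydrodynamic diameter d = kT / (3 * pi * eta * D).
import math

def hydrodynamic_diameter(msd_m2, dt_s, temp_k=298.15, viscosity_pa_s=0.89e-3):
    """Estimate particle diameter (m) from 2D Brownian tracks.

    msd_m2: mean squared displacement per step (m^2); dt_s: step time (s).
    Water viscosity at 25 C is assumed; inputs are illustrative.
    """
    k_b = 1.380649e-23            # Boltzmann constant, J/K
    diff = msd_m2 / (4.0 * dt_s)  # 2D diffusion coefficient, m^2/s
    return k_b * temp_k / (3.0 * math.pi * viscosity_pa_s * diff)

# A particle with D ~ 9.8e-13 m^2/s tracked at 30 fps (dt ~ 0.033 s).
d = hydrodynamic_diameter(msd_m2=4 * 9.8e-13 * 0.033, dt_s=0.033)
print(round(d * 1e9))  # 501 (diameter in nm)
```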

  5. Efficient Management of High-Throughput Screening Libraries with SAVANAH

    DEFF Research Database (Denmark)

    List, Markus; Elnegaard, Marlene Pedersen; Schmidt, Steffen

    2017-01-01

    High-throughput screening (HTS) has become an indispensable tool for the pharmaceutical industry and for biomedical research. A high degree of automation allows for experiments in the range of a few hundred up to several hundred thousand to be performed in close succession. The basis for such screens are molecular libraries, that is, microtiter plates with solubilized reagents such as siRNAs, shRNAs, miRNA inhibitors or mimics, and sgRNAs, or small compounds, that is, drugs. These reagents are typically condensed to provide enough material for covering several screens. Library plates thus need to be serially diluted before they can be used as assay plates. This process, however, leads to an explosion in the number of plates and samples to be tracked. Here, we present SAVANAH, the first tool to effectively manage molecular screening libraries across dilution series. It conveniently links ...
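    The bookkeeping problem described above starts with tracking well concentrations through a dilution series. A minimal sketch (the stock concentration and dilution factor are invented example values, not SAVANAH defaults):

```python
# Minimal sketch of one piece of the bookkeeping a library manager automates:
# the concentration of a library well after each step of a serial dilution.
# Stock concentration and dilution factor are invented example values.
def dilution_series(stock_uM, factor, steps):
    """Concentrations (uM) of a well after each 1:factor serial dilution."""
    concs = [stock_uM]
    for _ in range(steps):
        concs.append(concs[-1] / factor)
    return concs

# A 100 uM siRNA library well diluted 1:10 three times.
print(dilution_series(100.0, 10.0, 3))  # [100.0, 10.0, 1.0, 0.1]
```

    Multiply this by thousands of wells across many plates and screens, and the need for dedicated tracking software becomes clear.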

  6. Human transcriptome array for high-throughput clinical studies

    Science.gov (United States)

    Xu, Weihong; Seok, Junhee; Mindrinos, Michael N.; Schweitzer, Anthony C.; Jiang, Hui; Wilhelmy, Julie; Clark, Tyson A.; Kapur, Karen; Xing, Yi; Faham, Malek; Storey, John D.; Moldawer, Lyle L.; Maier, Ronald V.; Tompkins, Ronald G.; Wong, Wing Hung; Davis, Ronald W.; Xiao, Wenzhong; Toner, Mehmet; Warren, H. Shaw; Schoenfeld, David A.; Rahme, Laurence; McDonald-Smith, Grace P.; Hayden, Douglas; Mason, Philip; Fagan, Shawn; Yu, Yong-Ming; Cobb, J. Perren; Remick, Daniel G.; Mannick, John A.; Lederer, James A.; Gamelli, Richard L.; Silver, Geoffrey M.; West, Michael A.; Shapiro, Michael B.; Smith, Richard; Camp, David G.; Qian, Weijun; Tibshirani, Rob; Lowry, Stephen; Calvano, Steven; Chaudry, Irshad; Cohen, Mitchell; Moore, Ernest E.; Johnson, Jeffrey; Baker, Henry V.; Efron, Philip A.; Balis, Ulysses G. J.; Billiar, Timothy R.; Ochoa, Juan B.; Sperry, Jason L.; Miller-Graziano, Carol L.; De, Asit K.; Bankey, Paul E.; Herndon, David N.; Finnerty, Celeste C.; Jeschke, Marc G.; Minei, Joseph P.; Arnoldo, Brett D.; Hunt, John L.; Horton, Jureta; Cobb, J. Perren; Brownstein, Bernard; Freeman, Bradley; Nathens, Avery B.; Cuschieri, Joseph; Gibran, Nicole; Klein, Matthew; O'Keefe, Grant

    2011-01-01

    A 6.9 million-feature oligonucleotide array of the human transcriptome [Glue Grant human transcriptome (GG-H array)] has been developed for high-throughput and cost-effective analyses in clinical studies. This array allows comprehensive examination of gene expression and genome-wide identification of alternative splicing as well as detection of coding SNPs and noncoding transcripts. The performance of the array was examined and compared with mRNA sequencing (RNA-Seq) results over multiple independent replicates of liver and muscle samples. Compared with RNA-Seq of 46 million uniquely mappable reads per replicate, the GG-H array is highly reproducible in estimating gene and exon abundance. Although both platforms detect similar expression changes at the gene level, the GG-H array is more sensitive at the exon level. Deeper sequencing is required to adequately cover low-abundance transcripts. The array has been implemented in a multicenter clinical program and has generated high-quality, reproducible data. Considering the clinical trial requirements of cost, sample availability, and throughput, the GG-H array has a wide range of applications. An emerging approach for large-scale clinical genomic studies is to first use RNA-Seq to the sufficient depth for the discovery of transcriptome elements relevant to the disease process followed by high-throughput and reliable screening of these elements on thousands of patient samples using custom-designed arrays. PMID:21317363

  7. Fluorescent foci quantitation for high-throughput analysis

    Directory of Open Access Journals (Sweden)

    Elena Ledesma-Fernández

    2015-06-01

    A number of cellular proteins localize to discrete foci within cells, for example DNA repair proteins, microtubule organizing centers, P bodies or kinetochores. It is often possible to measure the fluorescence emission from tagged proteins within these foci as a surrogate for the concentration of that specific protein. We wished to develop tools that would allow quantitation of fluorescence foci intensities in high-throughput studies. As proof of principle we have examined the kinetochore, a large multi-subunit complex that is critical for the accurate segregation of chromosomes during cell division. Kinetochore perturbations lead to aneuploidy, which is a hallmark of cancer cells. Hence, understanding kinetochore homeostasis and regulation are important for a global understanding of cell division and genome integrity. The 16 budding yeast kinetochores colocalize within the nucleus to form a single focus. Here we have created a set of freely-available tools to allow high-throughput quantitation of kinetochore foci fluorescence. We use this ‘FociQuant’ tool to compare methods of kinetochore quantitation and we show proof of principle that FociQuant can be used to identify changes in kinetochore protein levels in a mutant that affects kinetochore function. This analysis can be applied to any protein that forms discrete foci in cells.
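    The core of focus quantitation is background subtraction followed by integration of above-threshold pixels. An illustrative sketch on a toy image (this is not the FociQuant code; the image values and cutoff are invented):

```python
# Illustrative foci quantitation on a toy image: estimate background as the
# median pixel, then integrate the signal of every pixel rising clearly
# above it. Pixel values and the cutoff are invented.
from statistics import median

image = [
    [10, 11, 10,  9, 10],
    [10, 50, 60, 12, 10],
    [ 9, 55, 70, 11, 10],
    [10, 12, 11, 10,  9],
]

pixels = [p for row in image for p in row]
background = median(pixels)
threshold = background + 20          # assumed cutoff for "in focus"

focus_intensity = sum(p - background for p in pixels if p > threshold)
focus_area = sum(1 for p in pixels if p > threshold)

print(focus_intensity, focus_area)  # 195.0 4
```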

  8. High-throughput technology for novel SO2 oxidation catalysts

    Directory of Open Access Journals (Sweden)

    Jonas Loskyll, Klaus Stoewe and Wilhelm F Maier

    2011-01-01

    We review the state of the art and explain the need for better SO2 oxidation catalysts for the production of sulfuric acid. A high-throughput technology has been developed for the study of potential catalysts in the oxidation of SO2 to SO3. High-throughput methods are reviewed and the problems encountered with their adaptation to the corrosive conditions of SO2 oxidation are described. We show that while emissivity-corrected infrared thermography (ecIRT) can be used for primary screening, it is prone to errors because of the large variations in the emissivity of the catalyst surface. UV-visible (UV-Vis) spectrometry was selected instead as a reliable analysis method for monitoring SO2 conversion. Installing plain sugar absorbents at reactor outlets proved valuable for the detection and quantitative removal of SO3 from the product gas before the UV-Vis analysis. We also give an overview of the elements used for prescreening and of those remaining after screening of the first catalyst generations.

  9. Fusion genes and their discovery using high throughput sequencing.

    Science.gov (United States)

    Annala, M J; Parker, B C; Zhang, W; Nykter, M

    2013-11-01

    Fusion genes are hybrid genes that combine parts of two or more original genes. They can form as a result of chromosomal rearrangements or abnormal transcription, and have been shown to act as drivers of malignant transformation and progression in many human cancers. The biological significance of fusion genes together with their specificity to cancer cells has made them into excellent targets for molecular therapy. Fusion genes are also used as diagnostic and prognostic markers to confirm cancer diagnosis and monitor response to molecular therapies. High-throughput sequencing has enabled the systematic discovery of fusion genes in a wide variety of cancer types. In this review, we describe the history of fusion genes in cancer and the ways in which fusion genes form and affect cellular function. We also describe computational methodologies for detecting fusion genes from high-throughput sequencing experiments, and the most common sources of error that lead to false discovery of fusion genes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
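    One of the standard computational signals for fusion detection mentioned above is discordant read pairs: mates of one fragment that map to two different genes. A minimal sketch with made-up mappings (BCR-ABL1 and KIF5B-RET are real, well-known fusions; the reads and the support threshold are invented):

```python
# A minimal sketch of fusion-candidate detection from paired-end mappings:
# read pairs whose two mates land in different genes are evidence of a
# fusion junction; recurrent gene pairs become candidates. Data are made up.
from collections import Counter

# (read_id, gene_of_mate1, gene_of_mate2)
mapped_pairs = [
    ("r1", "BCR",   "ABL1"),
    ("r2", "BCR",   "ABL1"),
    ("r3", "TP53",  "TP53"),
    ("r4", "BCR",   "ABL1"),
    ("r5", "EGFR",  "EGFR"),
    ("r6", "KIF5B", "RET"),
]

discordant = Counter(
    (g1, g2) for _, g1, g2 in mapped_pairs if g1 != g2
)

min_support = 2  # assumed threshold: require >= 2 supporting pairs
candidates = {pair: n for pair, n in discordant.items() if n >= min_support}
print(candidates)  # {('BCR', 'ABL1'): 3}
```

    Real callers add split-read evidence, mapping-quality filters, and blacklists of homologous genes to control the false discoveries the review discusses.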

  10. Computational analysis of high-throughput flow cytometry data.

    Science.gov (United States)

    Robinson, J Paul; Rajwa, Bartek; Patsekin, Valery; Davisson, Vincent Jo

    2012-08-01

    Flow cytometry has been around for over 40 years, but only recently has the opportunity arisen to move into the high-throughput domain. The technology is now available and is highly competitive with imaging tools under the right conditions. Flow cytometry has, however, been a technology that has focused on its unique ability to study single cells, and appropriate analytical tools are readily available to handle this traditional role of the technology. Expansion of flow cytometry to a high-throughput (HT) and high-content technology requires advances in both hardware and analytical tools. The historical perspective of flow cytometry operation is discussed, as well as how the field has changed and what the key changes have been. The authors provide a background and compelling arguments for moving toward HT flow, where there are many innovative opportunities. With alternative approaches now available for flow cytometry, there will be a considerable number of new applications. These opportunities show strong capability for drug screening and functional studies with cells in suspension. There is no doubt that HT flow is a rich technology awaiting acceptance by the pharmaceutical community. It can provide a powerful phenotypic analytical toolset that has the capacity to change many current approaches to HT screening. The previous restrictions on the technology, based on its reduced capacity for sample throughput, are no longer a major issue. Overcoming this barrier has transformed a mature technology into one that can focus on systems biology questions not previously considered possible.

  11. Evaluation of a High Throughput Starch Analysis Optimised for Wood

    Science.gov (United States)

    Bellasio, Chandra; Fini, Alessio; Ferrini, Francesco

    2014-01-01

    Starch is the most important long-term reserve in trees, and the analysis of starch is therefore a useful source of physiological information. Currently published protocols for wood starch analysis impose several limitations, such as long procedures and a neutralization step. The high-throughput standard protocols for starch analysis in food and feed represent a valuable alternative. However, they have not been optimised or tested with woody samples, which have particular chemical and structural characteristics, including the presence of interfering secondary metabolites, low reactivity of starch, and low starch content. In this study, a standard method for starch analysis used for food and feed (AOAC standard method 996.11) was optimised to improve precision and accuracy for the analysis of starch in wood. Key modifications were introduced in the digestion conditions and in the glucose assay. The optimised protocol was then evaluated through 430 starch analyses of standards at known starch content, matrix polysaccharides, and wood collected from three organs (roots, twigs, mature wood) of four species (coniferous and flowering plants). The optimised protocol proved to be remarkably precise and accurate (3%), and suitable for high-throughput routine analysis (35 samples a day) of specimens with a starch content between 40 mg and 21 µg. Samples may include lignified organs of coniferous and flowering plants and non-lignified organs, such as leaves, fruits and rhizomes. PMID:24523863
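    Enzymatic starch assays of this kind end in a simple calculation: absorbance is converted to glucose via a standard curve, and glucose to starch via the 162/180 anhydroglucose mass correction. A hedged sketch (the standard-curve slope and intercept are invented, not values from AOAC 996.11):

```python
# Sketch of the calculation behind an enzymatic starch assay:
# absorbance -> glucose via a linear standard curve, then glucose -> starch
# using the 162/180 anhydroglucose mass correction. Curve values are invented.
def glucose_mg(absorbance, slope=0.55, intercept=0.01):
    # Standard curve A = slope * mg + intercept (assumed calibration).
    return (absorbance - intercept) / slope

def starch_mg(absorbance):
    # 162/180: mass of a glucose unit once condensed into the starch polymer.
    return glucose_mg(absorbance) * 162.0 / 180.0

a = 0.56
print(round(starch_mg(a), 3))  # 0.9 (1.0 mg glucose -> 0.9 mg starch)
```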

  12. Plant chip for high-throughput phenotyping of Arabidopsis.

    Science.gov (United States)

    Jiang, Huawei; Xu, Zhen; Aluru, Maneesha R; Dong, Liang

    2014-04-07

    We report on the development of a vertical and transparent microfluidic chip for high-throughput phenotyping of Arabidopsis thaliana plants. Multiple Arabidopsis seeds can be germinated and grown hydroponically over more than two weeks in the chip, thus enabling large-scale and quantitative monitoring of plant phenotypes. The novel vertical arrangement of this microfluidic device not only allows for normal gravitropic growth of the plants but also, more importantly, makes it convenient to continuously monitor phenotypic changes in plants at the whole organismal level, including seed germination and root and shoot growth (hypocotyls, cotyledons, and leaves), as well as at the cellular level. We also developed a hydrodynamic trapping method to automatically place single seeds into seed holding sites of the device and to avoid potential damage to seeds that might occur during manual loading. We demonstrated general utility of this microfluidic device by showing clear visible phenotypes of the immutans mutant of Arabidopsis, and we also showed changes occurring during plant-pathogen interactions at different developmental stages. Arabidopsis plants grown in the device maintained normal morphological and physiological behaviour, and distinct phenotypic variations consistent with a priori data were observed via high-resolution images taken in real time. Moreover, the timeline for different developmental stages for plants grown in this device was highly comparable to growth using a conventional agar plate method. This prototype plant chip technology is expected to lead to the establishment of a powerful experimental and cost-effective framework for high-throughput and precise plant phenotyping.

  13. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Directory of Open Access Journals (Sweden)

    Afonnikov D.

    2012-08-01

    The growing need for rapid and accurate approaches to large-scale assessment of phenotypic characters in plants becomes more and more obvious in studies looking into relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands or dozens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are of limited effectiveness on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which enables much more rapid data acquisition, higher accuracy in the assessment of phenotypic features, measurement of new parameters of these features, and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integrating genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants, as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  14. A Fully Automated High-Throughput Zebrafish Behavioral Ototoxicity Assay.

    Science.gov (United States)

    Todd, Douglas W; Philip, Rohit C; Niihori, Maki; Ringle, Ryan A; Coyle, Kelsey R; Zehri, Sobia F; Zabala, Leanne; Mudery, Jordan A; Francis, Ross H; Rodriguez, Jeffrey J; Jacob, Abraham

    2017-08-01

    Zebrafish animal models lend themselves to behavioral assays that can facilitate rapid screening of ototoxic, otoprotective, and otoregenerative drugs. Structurally similar to human inner ear hair cells, the mechanosensory hair cells on their lateral line allow the zebrafish to sense water flow and orient head-to-current in a behavior called rheotaxis. This rheotaxis behavior deteriorates in a dose-dependent manner with increased exposure to the ototoxin cisplatin, thereby establishing itself as an excellent biomarker for anatomic damage to lateral line hair cells. Building on work by our group and others, we have built a new, fully automated high-throughput behavioral assay system that uses automated image analysis techniques to quantify rheotaxis behavior. This novel system consists of a custom-designed swimming apparatus and imaging system consisting of network-controlled Raspberry Pi microcomputers capturing infrared video. Automated analysis techniques detect individual zebrafish, compute their orientation, and quantify the rheotaxis behavior of a zebrafish test population, producing a powerful, high-throughput behavioral assay. Using our fully automated biological assay to test a standardized ototoxic dose of cisplatin against varying doses of compounds that protect or regenerate hair cells may facilitate rapid translation of candidate drugs into preclinical mammalian models of hearing loss.
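    The rheotaxis metric described above reduces to comparing each detected fish's heading against the flow direction. A sketch of that scoring step (the headings and the 45-degree tolerance are assumptions for illustration, not parameters from this assay):

```python
# Sketch of a rheotaxis score: given each detected fish's heading and the
# flow direction, report the fraction oriented head-to-current within a
# tolerance. Angles (degrees) and the tolerance are invented.
def angular_difference(a, b):
    # Smallest absolute difference between two headings, in [0, 180].
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def rheotaxis_fraction(headings_deg, flow_from_deg, tolerance=45.0):
    # A fish swims head-to-current when it faces where the flow comes from.
    n_oriented = sum(
        1 for h in headings_deg
        if angular_difference(h, flow_from_deg) <= tolerance
    )
    return n_oriented / len(headings_deg)

headings = [350, 10, 20, 170, 95, 355]   # detected fish orientations
print(round(rheotaxis_fraction(headings, flow_from_deg=0), 2))  # 0.67
```

    Tracking this fraction across cisplatin doses would reproduce the dose-dependent deterioration the assay uses as its biomarker.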

  15. Structuring intuition with theory: The high-throughput way

    Science.gov (United States)

    Fornari, Marco

    2015-03-01

    First-principles methodologies have grown in accuracy and applicability to the point where large databases can be built, shared, and analyzed with the goal of predicting novel compositions, optimizing functional properties, and discovering unexpected relationships between the data. In order to be useful to a large community of users, data should be standardized, validated, and distributed. In addition, tools to easily manage large datasets should be made available to effectively lead to materials development. Within the AFLOW consortium we have developed a simple frame to expand, validate, and mine data repositories: the MTFrame. Our minimalistic approach complements AFLOW and other existing high-throughput infrastructures and aims to integrate data generation with data analysis. We present a few examples from our work on materials for energy conversion. Our intent is to pinpoint the usefulness of high-throughput methodologies for guiding the discovery process by quantitatively structuring the scientific intuition. This work was supported by ONR-MURI under Contract N00014-13-1-0635 and the Duke University Center for Materials Genomics.

  16. High-throughput genomics enhances tomato breeding efficiency.

    Science.gov (United States)

    Barone, A; Di Matteo, A; Carputo, D; Frusciante, L

    2009-03-01

    Tomato (Solanum lycopersicum) is considered a model plant species for a group of economically important crops, such as potato, pepper, and eggplant, since it exhibits a reduced genome size (950 Mb), a short generation time, and routine transformation technologies. Moreover, it shares with the other Solanaceous plants the same haploid chromosome number and a high level of conserved genomic organization. Finally, many genomic and genetic resources are currently available for tomato, and the sequencing of its genome is in progress. These features make tomato an ideal species for theoretical studies and practical applications in the genomics field. The present review describes how structural genomics assists the selection of new varieties resistant to pathogens that cause damage to this crop. Many molecular markers highly linked to resistance genes, as well as cloned resistance genes, are available and could be used for high-throughput screening of multiresistant varieties. Moreover, a new genomics-assisted breeding approach for improving fruit quality is presented and discussed. It relies on the identification of the genetic mechanisms controlling the trait of interest through functional genomics tools. Following this approach, polymorphisms in major gene sequences responsible for variability in the expression of the trait under study are then exploited for tracking favourable allele combinations simultaneously in breeding programs using high-throughput genomic technologies. This aims at pyramiding, in the genetic background of commercial cultivars, alleles that increase their performance. In conclusion, tomato breeding strategies supported by advanced technologies are expected to target increased productivity and lower costs of improved genotypes, even for complex traits.
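    The pyramiding step described above amounts to selecting only those genotypes that carry the favourable allele at every marker locus. A toy sketch (Mi-1.2, Ty-1 and Sw-5 are real tomato resistance loci, but the genotype calls and line names are invented):

```python
# A toy sketch of marker-assisted pyramiding: keep only genotypes carrying
# the favourable allele at every marker locus linked to a resistance gene.
# Genotype calls and line names are invented for illustration.
favourable = {"Mi-1.2": "R", "Ty-1": "R", "Sw-5": "R"}

genotypes = {
    "line_01": {"Mi-1.2": "R", "Ty-1": "R", "Sw-5": "R"},
    "line_02": {"Mi-1.2": "R", "Ty-1": "S", "Sw-5": "R"},
    "line_03": {"Mi-1.2": "R", "Ty-1": "R", "Sw-5": "S"},
}

selected = [
    name for name, calls in genotypes.items()
    if all(calls.get(m) == allele for m, allele in favourable.items())
]
print(selected)  # ['line_01']
```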

  17. Computational analysis of high-throughput flow cytometry data

    Science.gov (United States)

    Robinson, J Paul; Rajwa, Bartek; Patsekin, Valery; Davisson, Vincent Jo

    2015-01-01

    Introduction: Flow cytometry has been around for over 40 years, but only recently has the opportunity arisen to move into the high-throughput domain. The technology is now available and is highly competitive with imaging tools under the right conditions. Flow cytometry has, however, been a technology that has focused on its unique ability to study single cells, and appropriate analytical tools are readily available to handle this traditional role of the technology. Areas covered: Expansion of flow cytometry to a high-throughput (HT) and high-content technology requires advances in both hardware and analytical tools. The historical perspective of flow cytometry operation is discussed, as well as how the field has changed and what the key changes have been. The authors provide a background and compelling arguments for moving toward HT flow, where there are many innovative opportunities. With alternative approaches now available for flow cytometry, there will be a considerable number of new applications. These opportunities show strong capability for drug screening and functional studies with cells in suspension. Expert opinion: There is no doubt that HT flow is a rich technology awaiting acceptance by the pharmaceutical community. It can provide a powerful phenotypic analytical toolset that has the capacity to change many current approaches to HT screening. The previous restrictions on the technology, based on its reduced capacity for sample throughput, are no longer a major issue. Overcoming this barrier has transformed a mature technology into one that can focus on systems biology questions not previously considered possible. PMID:22708834

  18. High-Throughput Microfluidics for the Screening of Yeast Libraries.

    Science.gov (United States)

    Huang, Mingtao; Joensson, Haakan N; Nielsen, Jens

    2018-01-01

    Cell factory development is critically important for efficient biological production of chemicals, biofuels, and pharmaceuticals. Many rounds of the Design-Build-Test-Learn cycle may be required before an engineered strain meets the specific metrics required for industrial application. The bioindustry prefers products in secreted form (secreted products or extracellular metabolites), as secretion can lower the cost of downstream processing, reduce the metabolic burden on host cells, and allow necessary modifications to the final products, such as biopharmaceuticals. Yet products in secreted form disconnect phenotype from genotype, which limits throughput in the Test step when identifying desired variants from large libraries of mutant strains. In droplet microfluidic screening, single cells are encapsulated in individual droplets, enabling high-throughput processing and sorting of single cells or clones. Encapsulation in droplets allows this technology to overcome the throughput limitations of traditional methods for screening by extracellular phenotypes. In this chapter, we describe a protocol/guideline for high-throughput droplet microfluidic screening of yeast libraries for increased protein secretion. This protocol can be adapted to screening for a range of other extracellular products from yeast or other hosts.
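    Single-cell encapsulation in droplets generally follows Poisson loading statistics, which is why such screens are typically run at low mean occupancy: most droplets are empty, but multi-cell droplets (which would mix genotypes) become rare. A short sketch; the loading value lam = 0.1 is illustrative, not taken from the chapter:

```python
import math

def poisson_occupancy(lam, k):
    """P(a droplet contains exactly k cells) under Poisson loading
    with a mean of lam cells per droplet."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# At dilute loading (lam = 0.1, an illustrative value), most droplets
# are empty and multi-cell droplets are rare -- the usual trade-off
# accepted to keep one genotype per droplet.
singles = poisson_occupancy(0.1, 1)                              # ~9%
doublets = sum(poisson_occupancy(0.1, k) for k in range(2, 10))  # rare
```

    Raising lam increases the fraction of useful single-cell droplets but also the fraction of contaminating doublets, so the operating point is a deliberate compromise.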

  19. High-throughput search for improved transparent conducting oxides

    Science.gov (United States)

    Miglio, Anna

    High-throughput methodologies are a very useful computational tool to explore the space of binary and ternary oxides. We use these methods to search for new and improved transparent conducting oxides (TCOs). TCOs exhibit both visible transparency and good carrier mobility and underpin many energy and electronic applications (e.g. photovoltaics, transparent transistors). We find several potential new n-type and p-type TCOs with a low effective mass. Combining different ab initio approaches, we characterize candidate oxides by their effective mass (mobility), band gap (transparency) and dopability. We present several compounds, not considered previously as TCOs, and discuss the chemical rationale for their promising properties. This analysis is useful to formulate design strategies for future high mobility oxides and has led to follow-up studies including preliminary experimental characterization of a p-type TCO candidate with unexpected chemistry. G. Hautier, A. Miglio, D. Waroquiers, G.-M. Rignanese, and X. Gonze, "How Does Chemistry Influence Electron Effective Mass in Oxides? A High-Throughput Computational Analysis", Chem. Mater. 26, 5447 (2014). G. Hautier, A. Miglio, G. Ceder, G.-M. Rignanese, and X. Gonze, "Identification and design principles of low hole effective mass p-type transparent conducting oxides", Nature Commun. 4, 2292 (2013).

  20. A systematical analysis of in vivo contact forces on virtual catheter tip/tissue surface contact during cardiac mapping and intervention.

    Science.gov (United States)

    Okumura, Yasuo; Johnson, Susan B; Bunch, T Jared; Henz, Benhur D; O'Brien, Christine J; Packer, Douglas L

    2008-06-01

    While catheter tip/tissue contact has been shown to be an important determinant of ablative lesions in in vitro studies, the impact of contact on the outcomes of mapping and ablation in the intact heart has not been evaluated. Twelve dogs underwent atrial ablation guided by the Sensei™ robotic catheter remote control system. After intracardiac ultrasound (ICE) validation of contact force measured by an in-line mechanical sensor, the relationship between contact force and individual lesion formation was established during irrigated-tip ablation (flow 17 mL/sec) at 15 watts for 30 seconds. Minimal contact by ICE correlated with a force of 4.7 ± 5.8 grams, consistent contact with 9.9 ± 8.6 grams, and tissue tenting produced 25.0 ± 14.0 grams. Conversely, catheter tip/tissue contact by ICE was predicted by contact force. Contact forces of 10-20 and ≥20 grams generated full-thickness, larger-volume ablative lesions than those created with <10 grams (98 ± 69 and 89 ± 70 mm³ vs 40 ± 42 mm³, P < 0.05). Moderate (10 grams) and marked (15-20 grams) contact application produced 1.5× greater electroanatomic map volumes than were seen with minimal contact (5 grams) (26 ± 3 cm³ vs 33 ± 6 and 39 ± 3 cm³, P < 0.05). The electroanatomic map/CT merge process was also more distorted when mapping was generated at moderate to marked contact force. This study shows that mapping and ablation using a robotic sheath guidance system are critically dependent on the generated force. These findings suggest that ablative lesion size is optimized by the application of 10-20 grams of contact force, although mapping requires lower-force application to avoid image distortions.

  1. Morphology control in polymer blend fibers—a high throughput computing approach

    Science.gov (United States)

    Sesha Sarath Pokuri, Balaji; Ganapathysubramanian, Baskar

    2016-08-01

    Fibers made from polymer blends have conventionally enjoyed wide use, particularly in textiles. This wide applicability is primarily aided by the ease of manufacturing such fibers. More recently, the ability to tailor the internal morphology of polymer blend fibers by carefully designing processing conditions has enabled such fibers to be used in technologically relevant applications. Some examples include anisotropic insulating properties for heat and anisotropic wicking of moisture, coaxial morphologies for optical applications, as well as fibers with high internal surface area for filtration and catalysis applications. However, identifying the appropriate processing conditions from the large space of possibilities using conventional trial-and-error approaches is a tedious and resource-intensive process. Here, we illustrate a high-throughput computational approach to rapidly explore and characterize how processing conditions (specifically blend ratio and evaporation rate) affect the internal morphology of polymer blends during solvent-based fabrication. We focus on a PS:PMMA system and identify two distinct classes of morphologies formed due to variations in the processing conditions. We subsequently map the processing conditions to the morphology class, thus constructing a 'phase diagram' that enables rapid identification of processing parameters for a specific morphology class. We finally demonstrate the potential of time-dependent processing conditions to obtain desired features of the morphology. This opens up the possibility of rational stage-wise design of processing pathways for tailored fiber morphology using high-throughput computing.
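    The processing-conditions-to-morphology-class mapping described above amounts to a grid sweep over the two parameters followed by a per-point classification. In the sketch below, classify_morphology is a hypothetical stand-in for the paper's simulate-then-classify step (a real pipeline would run a phase-field simulation and cluster the resulting structures); the decision rule and class names are invented for illustration:

```python
import numpy as np

def classify_morphology(blend_ratio, evaporation_rate):
    """Hypothetical stand-in for the simulate-and-classify step.
    Toy rule (an assumption, not the paper's result): fast evaporation
    with a near-symmetric blend yields class 1 ("bicontinuous"),
    anything else yields class 0 ("droplet-like")."""
    symmetric = abs(blend_ratio - 0.5) < 0.15
    return 1 if (symmetric and evaporation_rate > 0.5) else 0

# Sweep the 2D processing space and record the class at each point,
# producing a discrete 'phase diagram' over (blend ratio, evap. rate).
ratios = np.linspace(0.1, 0.9, 9)
rates = np.linspace(0.0, 1.0, 11)
phase_diagram = np.array([[classify_morphology(r, e) for e in rates]
                          for r in ratios])
```

    With a real (expensive) simulator in place of the toy rule, each grid point becomes an independent job, which is what makes the problem naturally suited to high-throughput computing.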

  2. Blurring contact maps of thousands of proteins: what we can learn by reconstructing 3D structure

    Science.gov (United States)

    2011-01-01

    Background: The present knowledge of protein structures at the atomic level derives from some 60,000 molecules. Yet the ever-growing set of hypothetical protein sequences comprises some 10 million chains, and this makes protein structure prediction one of the challenging goals of bioinformatics. In this context, the representation of proteins with contact maps is an intermediate step of fold recognition and constitutes the input of contact map predictors. However, contact map representations require fast and reliable methods to reconstruct the specific folding of the protein backbone. Methods: In this paper, by adopting GRID technology, our algorithm for 3D reconstruction, FT-COMAR, is benchmarked on a huge set of non-redundant proteins (1,716) taking random noise into consideration, which makes our computation the largest ever performed for the task at hand. Results: We can observe the effects of introducing random noise on 3D reconstruction and derive some considerations useful for future implementations. The size of the protein set also allows statistical considerations after grouping by SCOP structural class. Conclusions: All together, our data indicate that the quality of 3D reconstruction is unaffected by deleting up to an average of 75% of the real contacts, while only a few percent of randomly generated contacts in place of non-contacts is sufficient to hamper 3D reconstruction. PMID:21232136
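    The contact-map representation and the noise experiment described above can be illustrated in a few lines: a binary map is derived by thresholding pairwise Cα distances, and "deleting real contacts" means zeroing a random fraction of the 1-entries. The 8 Å threshold and the toy random-walk chain below are common illustrative conventions, not necessarily the exact settings used by FT-COMAR:

```python
import numpy as np

def contact_map(coords, threshold=8.0):
    """Binary contact map from an (N, 3) array of C-alpha coordinates.
    The 8 angstrom cutoff is a common convention (an assumption here)."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return (dist < threshold).astype(int)

def thin_contacts(cmap, keep_fraction, rng):
    """Randomly delete a fraction of the true (off-diagonal) contacts,
    mimicking the noise experiment described in the abstract."""
    noisy = cmap.copy()
    i, j = np.triu_indices_from(cmap, k=1)
    contacts = np.flatnonzero(cmap[i, j])
    n_drop = int(len(contacts) * (1 - keep_fraction))
    drop = rng.choice(contacts, size=n_drop, replace=False)
    noisy[i[drop], j[drop]] = 0
    noisy[j[drop], i[drop]] = 0  # keep the map symmetric
    return noisy

rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(scale=2.0, size=(50, 3)), axis=0)  # toy chain
cmap = contact_map(coords)
noisy = thin_contacts(cmap, keep_fraction=0.25, rng=rng)  # keep 25% of contacts
```

    The paper's striking result is that a reconstructor like FT-COMAR can still recover the fold from a map thinned this aggressively, whereas even a small number of spurious 1-entries (false contacts) is much more damaging.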

  3. A Coarea Formula for Smooth Contact Mappings of Carnot–Carathéodory Spaces

    OpenAIRE

    Karmanova, M.; Vodopyanov, S.

    2011-01-01

    We prove the coarea formula for sufficiently smooth contact mappings of Carnot manifolds. In particular, we investigate level surfaces of these mappings, and compare Riemannian and sub-Riemannian measures on them. Our main tool is the sharp asymptotic behavior of the Riemannian measure of the intersection of a tangent plane to a level surface and a sub-Riemannian ball. This calculation in particular implies that the sub-Riemannian measure of the set of characteristic points (i.e., the point...

  4. Mapping of the Rsd Contact Site on the Sigma 70 Subunit of Escherichia coli RNA Polymerase

    OpenAIRE

    Jishage, Miki; Dasgupta, Dipak; Ishihama, Akira

    2001-01-01

    Rsd (regulator of sigma D) is an anti-sigma factor for the Escherichia coli RNA polymerase σ70 subunit. The contact site of Rsd on σ70 was analyzed after mapping of the contact-dependent cleavage sites by Rsd-tethered iron-p-bromoacetamidobenzyl EDTA and by analysis of the complex formation between Ala-substituted σ70 and Rsd. Results indicate that the Rsd contact site is located downstream of the promoter −35 recognition helix-turn-helix motif within region 4, overlapping with the regions in...

  5. High-throughput phenotyping and genetic linkage of cortical bone microstructure in the mouse.

    Science.gov (United States)

    Mader, Kevin S; Donahue, Leah Rae; Müller, Ralph; Stampanoni, Marco

    2015-07-03

    Understanding cellular structure and organization, which plays an important role in biological systems ranging from mechanosensation to neural organization, is a complicated multifactorial problem depending on genetics, environmental factors, and stochastic processes. Isolating these factors necessitates the measurement and sensitive quantification of many samples in a reliable, high-throughput, unbiased manner. In this manuscript we present a pipelined approach using a fully automated framework based on Synchrotron-based X-ray Tomographic Microscopy (SRXTM) for performing a full 3D characterization of millions of substructures. We demonstrate the framework on a genetic study of the femur bones of inbred mice. We measured 1,300 femurs from an F2 cross experiment in mice lacking growth hormone (which can confound many of the smaller structural differences between strains) and characterized more than 50 million osteocyte lacunae (cell-sized hollows in the bone). The results were then correlated with genetic markers in a process called quantitative trait locus (QTL) mapping. Our findings provide a mapping between regions of the genome (all 19 autosomes) and observable phenotypes which could explain between 8-40% of the variance using between 2-10 loci for each trait. This map shows 4 areas of overlap with previous studies looking at bone strength and 3 areas not previously associated with bone. The mapping of microstructural phenotypes provides a starting point for both structure-function and genetic studies on murine bone structure, and the specific loci can be investigated in more detail to identify single-gene candidates which can then be translated to human investigations. The flexible infrastructure offers a full spectrum of shape, distribution, and connectivity metrics for cellular networks and can be adapted to a wide variety of materials ranging from plant roots to lung tissue in studies requiring high sample counts and sensitive metrics such as the drug

  6. The landscape of existing models for high-throughput exposure assessment

    DEFF Research Database (Denmark)

    Jolliet, O.; Fantke, Peter; Huang, L.

    2017-01-01

    Models are becoming increasingly available to model near-field fate and exposure, but not all are suited for high throughput. This presentation evaluates the available models for modeling exposure to chemicals in cosmetics, cleaning products, food contact and building materials. It assesses ... in indoor air (Little et al., 2012; Liu et al., 2013), but they do not well account for SVOC sorption into indoor surfaces and absorption into human skin (Huang et al., 2017). Thus a more comprehensive simplified solution is needed for SVOCs. For personal care products, a mass balance model that accounts for skin permeation and volatilization as competing processes and that requires a limited number of readily available physicochemical properties would be suitable for LCA and HTS purposes. Thus, the multi-pathway exposure model for chemicals in cosmetics developed by Ernstoff et al. constitutes a suitable ...

  7. Miniaturization of High-Throughput Epigenetic Methyltransferase Assays with Acoustic Liquid Handling.

    Science.gov (United States)

    Edwards, Bonnie; Lesnick, John; Wang, Jing; Tang, Nga; Peters, Carl

    2016-02-01

    Epigenetics continues to emerge as an important target class for drug discovery and cancer research. As programs scale to evaluate many new targets related to epigenetic expression, new tools and techniques are required to enable efficient and reproducible high-throughput epigenetic screening. Assay miniaturization increases screening throughput and reduces operating costs. Echo liquid handlers can transfer compounds, samples, reagents, and beads in submicroliter volumes to high-density assay formats using only acoustic energy-no contact or tips required. This eliminates tip costs and reduces the risk of reagent carryover. In this study, we demonstrate the miniaturization of a methyltransferase assay using Echo liquid handlers and two different assay technologies: AlphaLISA from PerkinElmer and EPIgeneous HTRF from Cisbio. © 2015 Society for Laboratory Automation and Screening.

  8. High-throughput ballistic injection nanorheology to measure cell mechanics

    Science.gov (United States)

    Wu, Pei-Hsun; Hale, Christopher M; Chen, Wei-Chiang; Lee, Jerry S H; Tseng, Yiider; Wirtz, Denis

    2015-01-01

    High-throughput ballistic injection nanorheology is a method for the quantitative study of cell mechanics. Cell mechanics are measured by ballistic injection of submicron particles into the cytoplasm of living cells and tracking the spontaneous displacement of the particles at high spatial resolution. The trajectories of the cytoplasm-embedded particles are transformed into mean-squared displacements, which are subsequently transformed into frequency-dependent viscoelastic moduli and time-dependent creep compliance of the cytoplasm. This method allows for the study of a wide range of cellular conditions, including cells inside a 3D matrix, cells subjected to shear flows and biochemical stimuli, and cells in a live animal. Ballistic injection lasts < 1 min and is followed by overnight incubation. Multiple particle tracking for one cell lasts < 1 min. Forty cells can be examined in < 1 h. PMID:22222790
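    The trajectory-to-MSD step of the pipeline above can be sketched in a few lines; the onward transformation to viscoelastic moduli (conventionally done via the generalized Stokes-Einstein relation in particle-tracking microrheology) is omitted, and the array layout is an assumption for illustration:

```python
import numpy as np

def msd(traj):
    """Time-averaged mean-squared displacement for one particle
    trajectory, given as a (T, d) array of positions (d = 2 or 3)."""
    T = len(traj)
    out = np.empty(T - 1)
    for lag in range(1, T):
        disp = traj[lag:] - traj[:-lag]      # all displacements at this lag
        out[lag - 1] = (disp ** 2).sum(axis=1).mean()
    return out

# Synthetic diffusive trajectory as a sanity check: for pure diffusion
# the MSD grows roughly linearly with lag time, whereas elastic trapping
# in the cytoskeleton would make it plateau.
rng = np.random.default_rng(1)
curve = msd(np.cumsum(rng.normal(scale=0.05, size=(600, 2)), axis=0))
```

    In practice one MSD curve is computed per embedded particle, and the ensemble of curves across many cells is what feeds the downstream viscoelasticity analysis.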

  9. Machine Learning for High-Throughput Stress Phenotyping in Plants.

    Science.gov (United States)

    Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh Kumar; Sarkar, Soumik

    2016-02-01

    Advances in automated and high-throughput imaging technologies have resulted in a deluge of high-resolution images and sensor data of plants. However, extracting patterns and features from this large corpus of data requires the use of machine learning (ML) tools to enable data assimilation and feature identification for stress phenotyping. Four stages of the decision cycle in plant stress phenotyping and plant breeding activities where different ML approaches can be deployed are (i) identification, (ii) classification, (iii) quantification, and (iv) prediction (ICQP). We provide here a comprehensive overview and user-friendly taxonomy of ML tools to enable the plant community to correctly and easily apply the appropriate ML tools and best-practice guidelines for various biotic and abiotic stress traits. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. An improved high throughput sequencing method for studying oomycete communities

    DEFF Research Database (Denmark)

    Sapkota, Rumakanta; Nicolaisen, Mogens

    2015-01-01

    Culture-independent studies using next-generation sequencing have revolutionized microbial ecology; however, oomycete ecology in soils is severely lagging behind. The aim of this study was to improve and validate standard techniques for using high-throughput sequencing as a tool for studying oomycete ... agricultural fields in Denmark, and 11 samples from carrot tissue with symptoms of Pythium infection. Sequence data from the Pythium and Phytophthora mock communities showed that our strategy successfully detected all included species. Taxonomic assignments of OTUs from 26 soil samples showed that 95 ... the usefulness of the method not only in soil DNA but also in a plant DNA background. In conclusion, we demonstrate a successful approach for pyrosequencing of oomycete communities using ITS1 as the barcode sequence with well-known primers for oomycete DNA amplification.

  11. Dimensioning storage and computing clusters for efficient High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Scientific experiments are producing huge amounts of data, and they continue to increase the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centres is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...

  12. High-Throughput Screening Using Fourier-Transform Infrared Imaging

    Directory of Open Access Journals (Sweden)

    Erdem Sasmaz

    2015-06-01

    Efficient parallel screening of combinatorial libraries is one of the most challenging aspects of the high-throughput (HT) heterogeneous catalysis workflow. Today, a number of methods have been used in HT catalyst studies, including various optical, mass-spectrometry, and gas-chromatography techniques. Of these, rapid-scanning Fourier-transform infrared (FTIR) imaging is one of the fastest and most versatile screening techniques. Here, the new design of a 16-channel HT reactor is presented and test results for its accuracy and reproducibility are shown. The performance of the system was evaluated through the oxidation of CO over commercial Pd/Al2O3 and over cobalt oxide nanoparticles synthesized with different reducer-reductant molar ratios, surfactant types, metal and surfactant concentrations, synthesis temperatures, and ramp rates.

  13. High throughput parametric studies of the structure of complex nanomaterials

    Science.gov (United States)

    Tian, Peng

    The structure of nanoscale materials is difficult to study because crystallography, the gold standard for structure studies, no longer works at the nanoscale. New tools are needed to study nanostructure. Furthermore, it is important to study the evolution of the nanostructure of complex nanostructured materials as a function of various parameters such as temperature or other environmental variables. These are called parametric studies because an environmental parameter is being varied. This means that the new tools for studying nanostructure also need to be extended to work quickly and on large numbers of datasets. This thesis describes the development of new tools for high-throughput studies of complex and nanostructured materials, and their application to studying the structural evolution of bulk MnAs, and of MnAs nanoparticles, as a function of temperature. The tool for high-throughput analysis of the bulk material, called SrRietveld, was developed as part of this PhD thesis work. A large part of making a new tool is validating it, and we did this for SrRietveld by carrying out a high-throughput study of the uncertainties produced by the program using different ways of estimating the uncertainty. This tool was applied to study structural changes in MnAs as a function of temperature. We were also interested in studying MnAs nanoparticles fabricated through different methods because of their applications in information storage. PDFgui, an existing tool for analyzing nanoparticles using pair distribution function (PDF) refinement, was used in these cases. Comparing the results from the analysis by SrRietveld and PDFgui, we obtained more comprehensive structural information about MnAs. The layout of the thesis is as follows. First, background knowledge about material structures is given. Conventional crystallographic analysis is introduced in both theoretical and practical terms. For the high-throughput study, the next-generation Rietveld analysis program: Sr

  14. High-Throughput Automation in Chemical Process Development.

    Science.gov (United States)

    Selekman, Joshua A; Qiu, Jun; Tran, Kristy; Stevens, Jason; Rosso, Victor; Simmons, Eric; Xiao, Yi; Janey, Jacob

    2017-06-07

    High-throughput (HT) techniques built upon laboratory automation technology and coupled to statistical experimental design and parallel experimentation have enabled the acceleration of chemical process development across multiple industries. HT technologies are often applied to interrogate wide, often multidimensional experimental spaces to inform the design and optimization of any number of unit operations that chemical engineers use in process development. In this review, we outline the evolution of HT technology and provide a comprehensive overview of how HT automation is used throughout different industries, with a particular focus on chemical and pharmaceutical process development. In addition, we highlight the common strategies of how HT automation is incorporated into routine development activities to maximize its impact in various academic and industrial settings.

  15. Interactive Visual Analysis of High Throughput Text Streams

    Energy Technology Data Exchange (ETDEWEB)

    Steed, Chad A [ORNL]; Potok, Thomas E [ORNL]; Patton, Robert M [ORNL]; Goodall, John R [ORNL]; Maness, Christopher S [ORNL]; Senter, James K [ORNL]

    2012-01-01

    The scale, velocity, and dynamic nature of large-scale social media systems like Twitter demand a new set of visual analytics techniques that support near real-time situational awareness. Social media systems are credited with escalating social protest during recent large-scale riots. Virtual communities form rapidly in these online systems, and they occasionally foster violence and unrest, which is conveyed in the users' language. Techniques for analyzing broad trends over these networks or reconstructing conversations within small groups have been demonstrated in recent years, but state-of-the-art tools are inadequate at supporting near real-time analysis of these high-throughput streams of unstructured information. In this paper, we present an adaptive system to discover and interactively explore these virtual networks, as well as detect sentiment, highlight change, and discover spatio-temporal patterns.

  16. The Principles and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of distributed processing systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...

  17. High-Throughput Mass Spectrometry Applied to Structural Genomics

    Directory of Open Access Journals (Sweden)

    Rod Chalk

    2014-10-01

    Mass spectrometry (MS) remains under-utilized for the analysis of expressed proteins because it is inaccessible to the non-specialist, and sample turnaround from service labs is slow. Here, we describe 3.5 min liquid chromatography-mass spectrometry (LC-MS) and 16 min LC-MS/MS methods which are tailored to the validation and characterization of recombinant proteins in a high-throughput structural biology pipeline. We illustrate the type and scope of MS data typically obtained from a 96-well expression and purification test for both soluble and integral membrane proteins (IMPs), and describe their utility in the selection of constructs for scale-up structural work, leading to cost and efficiency savings. We propose that the value of MS data lies in how quickly it becomes available, and that this can fundamentally change the way in which it is used.

  18. High-throughput screening: update on practices and success.

    Science.gov (United States)

    Fox, Sandra; Farr-Jones, Shauna; Sopchak, Lynne; Boggs, Amy; Nicely, Helen Wang; Khoury, Richard; Biros, Michael

    2006-10-01

    High-throughput screening (HTS) has become an important part of drug discovery at most pharmaceutical and many biotechnology companies worldwide, and use of HTS technologies is expanding into new areas. Target validation, assay development, secondary screening, ADME/Tox, and lead optimization are among the areas in which there is an increasing use of HTS technologies. It is becoming fully integrated within drug discovery, both upstream and downstream, which includes increasing use of cell-based assays and high-content screening (HCS) technologies to achieve more physiologically relevant results and to find higher quality leads. In addition, HTS laboratories are continually evaluating new technologies as they struggle to increase their success rate for finding drug candidates. The material in this article is based on a 900-page HTS industry report involving 54 HTS directors representing 58 HTS laboratories and 34 suppliers.

  19. Single-platelet nanomechanics measured by high-throughput cytometry

    Science.gov (United States)

    Myers, David R.; Qiu, Yongzhi; Fay, Meredith E.; Tennenbaum, Michael; Chester, Daniel; Cuadrado, Jonas; Sakurai, Yumiko; Baek, Jong; Tran, Reginald; Ciciliano, Jordan C.; Ahn, Byungwook; Mannino, Robert G.; Bunting, Silvia T.; Bennett, Carolyn; Briones, Michael; Fernandez-Nieves, Alberto; Smith, Michael L.; Brown, Ashley C.; Sulchek, Todd; Lam, Wilbur A.

    2017-02-01

    Haemostasis occurs at sites of vascular injury, where flowing blood forms a clot, a dynamic and heterogeneous fibrin-based biomaterial. Paramount in the clot's capability to stem haemorrhage are its changing mechanical properties, the major drivers of which are the contractile forces exerted by platelets against the fibrin scaffold. However, how platelets transduce microenvironmental cues to mediate contraction and alter clot mechanics is unknown. This is clinically relevant, as overly softened and stiffened clots are associated with bleeding and thrombotic disorders. Here, we report a high-throughput hydrogel-based platelet-contraction cytometer that quantifies single-platelet contraction forces in different clot microenvironments. We also show that platelets, via the Rho/ROCK pathway, synergistically couple mechanical and biochemical inputs to mediate contraction. Moreover, highly contractile platelet subpopulations present in healthy controls are conspicuously absent in a subset of patients with undiagnosed bleeding disorders, and therefore may function as a clinical diagnostic biophysical biomarker.

  20. Ethoscopes: An open platform for high-throughput ethomics.

    Directory of Open Access Journals (Sweden)

    Quentin Geissmann

    2017-10-01

    Here, we present the use of ethoscopes, which are machines for high-throughput analysis of behavior in Drosophila and other animals. Ethoscopes provide a software and hardware solution that is reproducible and easily scalable. They perform real-time tracking and profiling of behavior by using a supervised machine learning algorithm, are able to deliver behaviorally triggered stimuli to flies in a feedback-loop mode, and are highly customizable and open source. Ethoscopes can be built easily by using 3D printing technology and rely on Raspberry Pi microcomputers and Arduino boards to provide affordable and flexible hardware. All software and construction specifications are available at http://lab.gilest.ro/ethoscope.

  3. High-throughput drawing and testing of metallic glass nanostructures.

    Science.gov (United States)

    Hasan, Molla; Kumar, Golden

    2017-03-02

    Thermoplastic embossing of metallic glasses promises direct imprinting of metal nanostructures using templates. However, embossing high-aspect-ratio nanostructures faces unworkable flow resistance due to friction and non-wetting conditions at the template interface. Herein, we show that these inherent challenges of embossing can be reversed by thermoplastic drawing using templates. The flow resistance not only remains independent of wetting but also decreases with increasing feature aspect-ratio. Arrays of assembled nanotips, nanowires, and nanotubes with aspect-ratios exceeding 1000 can be produced through controlled elongation and fracture of metallic glass structures. In contrast to embossing, the drawing approach generates two sets of nanostructures upon final fracture; one set remains anchored to the metallic glass substrate while the second set is assembled on the template. This method can be readily adapted for high-throughput fabrication and testing of nanoscale tensile specimens, enabling rapid screening of size-effects in mechanical behavior.

  4. Statistically invalid classification of high throughput gene expression data

    Science.gov (United States)

    Barbash, Shahar; Soreq, Hermona

    2013-01-01

    Classification analysis based on high throughput data is a common feature in neuroscience and other fields of science, with a rapidly increasing impact on both basic biology and disease-related studies. The outcome of such classifications often serves to delineate novel biochemical mechanisms in health and disease states, identify new targets for therapeutic interference, and develop innovative diagnostic approaches. Given the importance of this type of study, we screened 111 recently-published high-impact manuscripts involving classification analysis of gene expression, and found that 58 of them (53%) based their conclusions on a statistically invalid method which can lead to bias in a statistical sense (lower true classification accuracy than the reported classification accuracy). In this report we characterize the potential methodological error and its scope, investigate how it is influenced by different experimental parameters, and describe statistically valid methods for avoiding such classification mistakes. PMID:23346359
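The invalid procedure flagged here is most often feature selection performed on the full dataset before cross-validation, so held-out samples leak into the classifier. A pure-Python sketch on synthetic noise data (hypothetical nearest-centroid classifier; not code from any of the screened studies) shows how this inflates the reported accuracy:

```python
import random

random.seed(0)
n, p, k = 40, 500, 10                       # samples, features, features kept
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [0] * (n // 2) + [1] * (n // 2)         # labels are independent of X: pure noise

def top_k(rows, labels, k):
    """Rank features by absolute difference of class means, keep the top k."""
    scores = []
    for j in range(len(rows[0])):
        v0 = [r[j] for r, lab in zip(rows, labels) if lab == 0]
        v1 = [r[j] for r, lab in zip(rows, labels) if lab == 1]
        scores.append((abs(sum(v0) / len(v0) - sum(v1) / len(v1)), j))
    return [j for _, j in sorted(scores)[-k:]]

def loocv_accuracy(select_inside_cv):
    """Leave-one-out CV with a nearest-centroid classifier on selected features."""
    feats_global = top_k(X, y, k)           # INVALID: selection has seen every sample
    correct = 0
    for t in range(n):
        tr = [i for i in range(n) if i != t]
        rows, labs = [X[i] for i in tr], [y[i] for i in tr]
        # valid protocol: re-select features inside each training fold
        feats = top_k(rows, labs, k) if select_inside_cv else feats_global
        cents = {}
        for c in (0, 1):
            members = [X[i] for i in tr if y[i] == c]
            cents[c] = [sum(r[j] for r in members) / len(members) for j in feats]
        dist = {c: sum((X[t][j] - m) ** 2 for j, m in zip(feats, cents[c]))
                for c in (0, 1)}
        correct += (0 if dist[0] < dist[1] else 1) == y[t]
    return correct / n

biased = loocv_accuracy(select_inside_cv=False)
valid = loocv_accuracy(select_inside_cv=True)
```

Since the labels carry no signal, any honest estimate sits near 50%; the biased protocol tends to report a far higher accuracy on exactly the same data.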

  5. Applications of High-Throughput Nucleotide Sequencing (PhD)

    DEFF Research Database (Denmark)

    Waage, Johannes

    The recent advent of high throughput sequencing of nucleic acids (RNA and DNA) has vastly expanded research into the functional and structural biology of the genome of all living organisms (and even a few dead ones). With this enormous and exponential growth in biological data generation come equally large demands in data handling, analysis and interpretation, perhaps defining the modern challenge of the computational biologist of the post-genomic era. The first part of this thesis consists of a general introduction to the history, common terms and challenges of next generation sequencing (...). For the second flavor, DNA-seq, a study presenting genome wide profiling of transcription factor CEBP/A in liver cells undergoing regeneration after partial hepatectomy (article IV) is included.

  6. Automated high-throughput behavioral analyses in zebrafish larvae.

    Science.gov (United States)

    Richendrfer, Holly; Créton, Robbert

    2013-07-04

    We have created a novel high-throughput imaging system for the analysis of behavior in 7-day-old zebrafish larvae in multi-lane plates. This system measures spontaneous behaviors and the response to an aversive stimulus, which is shown to the larvae via a PowerPoint presentation. The recorded images are analyzed with an ImageJ macro, which automatically splits the color channels, subtracts the background, and applies a threshold to identify individual larvae placement in the lanes. We can then import the coordinates into an Excel sheet to quantify swim speed, preference for edge or side of the lane, resting behavior, thigmotaxis, distance between larvae, and avoidance behavior. Subtle changes in behavior are easily detected using our system, making it useful for behavioral analyses after exposure to environmental toxicants or pharmaceuticals.
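The measures named here (swim speed, resting behavior, edge preference) all derive from per-frame larva coordinates like those exported to the spreadsheet. A minimal sketch of the arithmetic (hypothetical coordinates in mm at 1 s sampling, hypothetical lane width; not the authors' ImageJ macro):

```python
import math

# hypothetical (x, y) positions of one larva in mm, sampled once per second
track = [(1.0, 4.0), (2.2, 4.1), (3.9, 4.0), (3.9, 4.0), (5.5, 3.2)]

def swim_speed(track, dt=1.0):
    """Mean speed: total path length divided by elapsed time."""
    steps = [math.dist(a, b) for a, b in zip(track, track[1:])]
    return sum(steps) / (dt * len(steps))

def resting_fraction(track, eps=0.1):
    """Fraction of intervals in which the larva moved less than eps."""
    steps = [math.dist(a, b) for a, b in zip(track, track[1:])]
    return sum(s < eps for s in steps) / len(steps)

def edge_preference(track, lane_width=8.0, margin=1.0):
    """Fraction of samples within `margin` of the lane walls (thigmotaxis proxy)."""
    return sum(y < margin or y > lane_width - margin for _, y in track) / len(track)
```

The same per-frame coordinate table supports all of the listed metrics, which is why a single threshold-and-export pipeline suffices.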

  7. High-throughput ab-initio dilute solute diffusion database

    Science.gov (United States)

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-01-01

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world. PMID:27434308
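The quoted 0.176 eV figure is a weighted RMS error over activation barriers. For readers, the quantity reduces to a one-liner (sketch with hypothetical barrier values and weights, not data from the database):

```python
import math

def weighted_rmse(predicted, measured, weights):
    """Weighted root-mean-square error, e.g. over activation barriers in eV."""
    num = sum(w * (p - m) ** 2 for p, m, w in zip(predicted, measured, weights))
    return math.sqrt(num / sum(weights))

# hypothetical DFT vs. experimental barriers (eV) for three solutes
err = weighted_rmse([1.35, 0.92, 2.10], [1.30, 1.00, 2.05], [1.0, 2.0, 1.0])
```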

  8. Reverse Phase Protein Arrays for High-throughput Toxicity Screening

    DEFF Research Database (Denmark)

    Pedersen, Marlene Lemvig; Block, Ines; List, Markus

    (...) beneficially in automated high-throughput toxicity testing. An advantage of using RPPAs is that, in addition to the baseline toxicity readout, they allow testing of multiple markers of toxicity, such as inflammatory responses, which do not necessarily cumulate in cell death. We used transfection of siRNAs with known killing effects as a model system to demonstrate that RPPA-based protein quantification can serve as a substitute readout of cell viability, hereby reliably reflecting toxicity. In terms of automation, cell exposure, protein harvest, serial dilution and sample reformatting were performed using a robotic screening platform. Furthermore, we automated sample tracking and data analysis by developing a bundled bioinformatics tool named “MIRACLE”. Automation and RPPA-based viability/toxicity readouts enable rapid testing of large sample numbers, while granting the possibility for flexible consecutive (...)

  9. A Dual-Mode Large-Arrayed CMOS ISFET Sensor for Accurate and High-Throughput pH Sensing in Biomedical Diagnosis.

    Science.gov (United States)

    Huang, Xiwei; Yu, Hao; Liu, Xu; Jiang, Yu; Yan, Mei; Wu, Dongping

    2015-09-01

    The existing ISFET-based DNA sequencing detects hydrogen ions released during the polymerization of DNA strands on microbeads, which are scattered into a microwell array above the ISFET sensor with unknown distribution. However, false pH detection happens at empty microwells due to crosstalk from neighboring microbeads. In this paper, a dual-mode CMOS ISFET sensor is proposed for accurate pH detection toward DNA sequencing. Dual-mode sensing (optical and chemical) is realized by integrating a CMOS image sensor (CIS) with an ISFET pH sensor, fabricated in a standard 0.18-μm CIS process. By accurately determining microbead physical locations with CIS pixels through contact imaging, the dual-mode sensor can correlate the local pH for one DNA slice with one location-determined microbead, improving pH detection accuracy. Moreover, toward high-throughput DNA sequencing, a correlated-double-sampling readout that supports large arrays in both modes is deployed to reduce pixel-to-pixel nonuniformity such as threshold-voltage mismatch. The proposed CMOS dual-mode sensor is experimentally shown to produce a well-correlated pH map and optical image for microbeads, with a pH sensitivity of 26.2 mV/pH, a fixed pattern noise (FPN) reduction from 4% to 0.3%, and a readout speed of 1200 frames/s. A dual-mode CMOS ISFET sensor with suppressed FPN for accurate large-arrayed pH sensing is thus demonstrated with state-of-the-art measured results toward accurate and high-throughput DNA sequencing. The developed dual-mode CMOS ISFET sensor has great potential for future personal genome diagnostics with high accuracy and low cost.
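Correlated double sampling suppresses fixed pattern noise by subtracting each pixel's reset-phase sample from its signal-phase sample, cancelling static per-pixel offsets such as threshold-voltage mismatch. A toy numeric sketch of that subtraction (values are illustrative, not from the paper):

```python
# static per-pixel offsets (fixed pattern noise, e.g. Vth mismatch), in mV
offsets = [5.0, -3.0, 1.5]
signal = 20.0  # true pH-induced signal, identical for every pixel (mV)

single_sample = [signal + o for o in offsets]  # raw readout: offsets corrupt it
reset_sample = offsets                         # sample taken during the reset phase
cds = [s - r for s, r in zip(single_sample, reset_sample)]
# after CDS every pixel reports the same value: pixel-to-pixel FPN is cancelled
```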

  10. Spatio-temporal flow maps for visualizing movement and contact patterns

    Directory of Open Access Journals (Sweden)

    Bing Ni

    2017-03-01

    The advanced telecom technologies and massive volumes of intelligent mobile phone users have yielded a huge amount of real-time data of people’s all-in-one telecommunication records, which we call telco big data. With telco data and the domain knowledge of an urban city, we are now able to analyze the movement and contact patterns of humans at an unprecedented scale. The flow map is widely used to display the movements of humans from one single source to multiple destinations by representing locations as nodes and movements as edges; however, it fails at the task of visualizing both movement and contact data. In addition, analysts often need to compare and examine the patterns side by side and perform various quantitative analyses. In this work, we propose a novel spatio-temporal flow map layout to visualize when and where people from different locations move into the same places and make contact. We also propose integrating the spatio-temporal flow maps into existing spatio-temporal visualization techniques to form a suite of techniques for visualizing movement and contact patterns. We report a potential application of the proposed techniques. The results show that our design and techniques properly unveil hidden information, while analysis can be achieved efficiently. Keywords: Spatio-temporal data, Flow map, Urban mobility

  11. A robust robotic high-throughput antibody purification platform.

    Science.gov (United States)

    Schmidt, Peter M; Abdo, Michael; Butcher, Rebecca E; Yap, Min-Yin; Scotney, Pierre D; Ramunno, Melanie L; Martin-Roussety, Genevieve; Owczarek, Catherine; Hardy, Matthew P; Chen, Chao-Guang; Fabri, Louis J

    2016-07-15

    Monoclonal antibodies (mAbs) have become the fastest growing segment in the drug market, with annual sales of more than US$40 billion in 2013. The selection of lead candidate molecules involves the generation of large repertoires of antibodies from which to choose a final therapeutic candidate. Improvements in the ability to rapidly produce and purify many antibodies in sufficient quantities reduce the lead time for selection, which ultimately impacts the speed with which an antibody may transition through the research stage and into product development. Miniaturization and automation of chromatography using micro columns (RoboColumns(®) from Atoll GmbH) coupled to an automated liquid handling instrument (ALH; Freedom EVO(®) from Tecan) has been a successful approach to establish high throughput process development platforms. Recent advances in transient gene expression (TGE) using the high-titre Expi293F™ system have enabled recombinant mAb titres of greater than 500 mg/L. These relatively high protein titres reduce the volume required to generate several milligrams of individual antibodies for initial biochemical and biological downstream assays, making TGE in the Expi293F™ system ideally suited to high throughput chromatography on an ALH. The present publication describes a novel platform for purifying Expi293F™-expressed recombinant mAbs directly from cell-free culture supernatant on a Perkin Elmer JANUS-VariSpan ALH equipped with a plate shuttle device. The purification platform allows automated 2-step purification (Protein A-desalting/size exclusion chromatography) of several hundred mAbs per week. The new robotic method can purify mAbs with high recovery (>90%) at sub-milligram level, with yields of up to 2 mg from 4 mL of cell-free culture supernatant.

  12. Surrogate-assisted feature extraction for high-throughput phenotyping.

    Science.gov (United States)

    Yu, Sheng; Chakrabortty, Abhishek; Liao, Katherine P; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2017-04-01

    Phenotyping algorithms are capable of accurately identifying patients with specific phenotypes from within electronic medical records systems. However, developing phenotyping algorithms in a scalable way remains a challenge due to the extensive human resources required. This paper introduces a high-throughput unsupervised feature selection method, which improves the robustness and scalability of electronic medical record phenotyping without compromising its accuracy. The proposed Surrogate-Assisted Feature Extraction (SAFE) method selects candidate features from a pool of comprehensive medical concepts found in publicly available knowledge sources. The target phenotype's International Classification of Diseases, Ninth Revision and natural language processing counts, acting as noisy surrogates to the gold-standard labels, are used to create silver-standard labels. Candidate features highly predictive of the silver-standard labels are selected as the final features. Algorithms were trained to identify patients with coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis using various numbers of labels to compare the performance of features selected by SAFE, a previously published automated feature extraction for phenotyping procedure, and domain experts. The out-of-sample area under the receiver operating characteristic curve and F-score from SAFE algorithms were remarkably higher than those from the other two, especially at small label sizes. SAFE advances high-throughput phenotyping methods by automatically selecting a succinct set of informative features for algorithm training, which in turn reduces overfitting and the needed number of gold-standard labels. SAFE also potentially identifies important features missed by automated feature extraction for phenotyping or experts.

  13. A High-Throughput Antibody-Based Microarray Typing Platform

    Science.gov (United States)

    Gehring, Andrew; Barnett, Charles; Chu, Ted; DebRoy, Chitrita; D'Souza, Doris; Eaker, Shannon; Fratamico, Pina; Gillespie, Barbara; Hegde, Narasimha; Jones, Kevin; Lin, Jun; Oliver, Stephen; Paoli, George; Perera, Ashan; Uknalis, Joseph

    2013-01-01

    Many rapid methods have been developed for screening foods for the presence of pathogenic microorganisms. Rapid methods that have the additional ability to identify microorganisms via multiplexed immunological recognition have the potential for classification or typing of microbial contaminants, thus facilitating epidemiological investigations that aim to identify outbreaks and trace back the contamination to its source. This manuscript introduces a novel, high throughput typing platform that employs microarrayed multiwell plate substrates and laser-induced fluorescence of the nucleic acid intercalating dye/stain SYBR Gold for detection of antibody-captured bacteria. The aim of this study was to use this platform for comparison of different sets of antibodies raised against the same pathogens as well as to demonstrate its potential effectiveness for serotyping. To that end, two sets of antibodies raised against each of the “Big Six” non-O157 Shiga toxin-producing E. coli (STEC) as well as E. coli O157:H7 were array-printed into microtiter plates, and serial dilutions of the bacteria were added and subsequently detected. Though antibody specificity was not sufficient for the development of an STEC serotyping method, the STEC antibody sets performed reasonably well, showing that specificity increased at lower capture antibody concentrations or, conversely, at lower bacterial target concentrations. The favorable results indicated that with sufficiently selective and ideally concentrated sets of biorecognition elements (e.g., antibodies or aptamers), this high-throughput platform can be used to rapidly type microbial isolates derived from food samples within ca. 80 min of total assay time. It can also potentially be used to detect the pathogens from food enrichments and at least serve as a platform for testing antibodies. PMID:23645110

  14. High throughput phenotyping for aphid resistance in large plant collections

    Directory of Open Access Journals (Sweden)

    Chen Xi

    2012-08-01

    Background: Phloem-feeding insects are among the most devastating pests worldwide. They not only cause damage by feeding from the phloem, thereby depleting the plant of photo-assimilates, but also by vectoring viruses. Until now, the main way to prevent such problems has been the frequent use of insecticides. Applying resistant varieties would be a more environmentally friendly and sustainable solution. For this, resistant sources need to be identified first, yet no methods were available that are suitable for high throughput phenotyping of plant germplasm to identify sources of resistance towards phloem-feeding insects. Results: In this paper we present a high throughput screening system to identify plants with an increased resistance against aphids. Its versatility is demonstrated using an Arabidopsis thaliana activation tag mutant line collection. The system consists of the green peach aphid Myzus persicae (Sulzer) and the circulative virus Turnip yellows virus (TuYV). In an initial screening, with one plant representing one mutant line, 13 virus-free mutant lines were identified by ELISA. Using seeds produced from these lines, the putative candidates were re-evaluated and characterized, resulting in nine lines with increased resistance towards the aphid. Conclusions: This M. persicae-TuYV screening system is an efficient, reliable and quick procedure to identify, among thousands of mutated lines, those resistant to aphids. In our study, nine mutant lines with increased resistance against the aphid were selected from 5160 mutant lines in just 5 months by one person. The system can be extended to other phloem-feeding insects and circulative viruses to identify insect-resistant sources from several collections, including for example genebanks and artificially prepared mutant collections.

  15. High-throughput phenotyping of seminal root traits in wheat.

    Science.gov (United States)

    Richard, Cecile Ai; Hickey, Lee T; Fletcher, Susan; Jennings, Raeleen; Chenu, Karine; Christopher, Jack T

    2015-01-01

    Water availability is a major limiting factor for wheat (Triticum aestivum L.) production in rain-fed agricultural systems worldwide. Root system architecture has important functional implications for the timing and extent of soil water extraction, yet selection for root architectural traits in breeding programs has been limited by a lack of suitable phenotyping methods. The aim of this research was to develop low-cost, high-throughput phenotyping methods to facilitate selection for desirable root architectural traits. Here, we report two methods, one using clear pots and the other using growth pouches, to assess the angle and the number of seminal roots in wheat seedlings: two proxy traits associated with the root architecture of mature wheat plants. Both methods revealed genetic variation for seminal root angle and number in a panel of 24 wheat cultivars. The clear pot method provided higher heritability and higher genetic correlations across experiments compared to the growth pouch method. In addition, the clear pot method was more efficient, requiring less time, space, and labour than the growth pouch method. Therefore, the clear pot method was considered the most suitable for large-scale, high-throughput screening of seedling root characteristics in crop improvement programs. The clear-pot method could be easily integrated into breeding programs targeting drought tolerance to rapidly enrich breeding populations with desirable alleles. For instance, selection for narrow root angle and a high number of seminal roots could lead to deeper root systems with higher branching at depth. Such root characteristics are highly desirable in wheat to cope with anticipated future climate conditions, particularly where crops rely heavily on stored soil moisture at depth, including some Australian, Indian, South American, and African cropping regions.

  16. High-Throughput Genomics Enhances Tomato Breeding Efficiency

    Science.gov (United States)

    Barone, A; Di Matteo, A; Carputo, D; Frusciante, L

    2009-01-01

    Tomato (Solanum lycopersicum) is considered a model plant species for a group of economically important crops, such as potato, pepper, and eggplant, since it exhibits a reduced genome size (950 Mb), a short generation time, and routine transformation technologies. Moreover, it shares with the other Solanaceous plants the same haploid chromosome number and a high level of conserved genomic organization. Finally, many genomic and genetic resources are currently available for tomato, and the sequencing of its genome is in progress. These features make tomato an ideal species for theoretical studies and practical applications in the genomics field. The present review describes how structural genomics assists the selection of new varieties resistant to pathogens that cause damage to this crop. Many molecular markers tightly linked to resistance genes, as well as cloned resistance genes, are available and could be used for high-throughput screening of multi-resistant varieties. Moreover, a new genomics-assisted breeding approach for improving fruit quality is presented and discussed. It relies on the identification of the genetic mechanisms controlling the trait of interest through functional genomics tools. Following this approach, polymorphisms in major gene sequences responsible for variability in the expression of the trait under study are then exploited for simultaneously tracking favourable allele combinations in breeding programs using high-throughput genomic technologies. This aims at pyramiding, in the genetic background of commercial cultivars, alleles that increase their performance. In conclusion, tomato breeding strategies supported by advanced technologies are expected to target increased productivity and lower costs of improved genotypes, even for complex traits. PMID:19721805

  17. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

    Xiao-Lin eWu

    2011-02-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general purpose computation on a graphics processing unit (GPU) provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin – Madison, which can be leveraged for genomic selection, in terms of central processing unit (CPU) capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of
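The batch idea described here — run one independent job per trait and collect results as workers finish, rather than evaluating traits sequentially — can be sketched with a worker pool. This is an illustrative stand-in for a real cluster scheduler, and `fit_trait` is a hypothetical placeholder for a per-trait model fit:

```python
from concurrent.futures import ThreadPoolExecutor

def fit_trait(trait):
    """Hypothetical stand-in: a real job would fit one genomic prediction
    model for this trait on a cluster node and return its summary."""
    return trait, len(trait)

traits = ["milk_yield", "fertility", "stature", "longevity"]

# fan the independent per-trait jobs out to workers instead of running serially
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(fit_trait, traits))
```

On an actual HTC system the pool would be replaced by job submission to the cluster middleware, but the structure — independent tasks, parallel dispatch, gathered results — is the same.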

  18. A high throughput mechanical screening device for cartilage tissue engineering.

    Science.gov (United States)

    Mohanraj, Bhavana; Hou, Chieh; Meloni, Gregory R; Cosgrove, Brian D; Dodge, George R; Mauck, Robert L

    2014-06-27

    Articular cartilage enables efficient and near-frictionless load transmission, but suffers from poor inherent healing capacity. As such, cartilage tissue engineering strategies have focused on mimicking both compositional and mechanical properties of native tissue in order to provide effective repair materials for the treatment of damaged or degenerated joint surfaces. However, given the large number of design parameters available (e.g. cell sources, scaffold designs, and growth factors), it is difficult to conduct combinatorial experiments of engineered cartilage. This is particularly exacerbated when mechanical properties are a primary outcome, given the long time required for testing of individual samples. High throughput screening is utilized widely in the pharmaceutical industry to rapidly and cost-effectively assess the effects of thousands of compounds for therapeutic discovery. Here we adapted this approach to develop a high throughput mechanical screening (HTMS) system capable of measuring the mechanical properties of up to 48 materials simultaneously. The HTMS device was validated by testing various biomaterials and engineered cartilage constructs and by comparing the HTMS results to those derived from conventional single sample compression tests. Further evaluation showed that the HTMS system was capable of distinguishing and identifying 'hits', or factors that influence the degree of tissue maturation. Future iterations of this device will focus on reducing data variability, increasing force sensitivity and range, as well as scaling-up to even larger (96-well) formats. This HTMS device provides a novel tool for cartilage tissue engineering, freeing experimental design from the limitations of mechanical testing throughput.
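Whatever the well count, each sample's compression readout reduces to an effective stiffness, i.e. the slope of its force-displacement curve. A least-squares sketch of that reduction (hypothetical readings, not the authors' device code):

```python
def stiffness(displacements, forces):
    """Least-squares slope of force vs. displacement (zero-intercept fit)."""
    num = sum(d * f for d, f in zip(displacements, forces))
    den = sum(d * d for d in displacements)
    return num / den

# hypothetical readings from one well: displacement (mm) and force (N)
d = [0.0, 0.1, 0.2, 0.3, 0.4]
f = [0.0, 0.52, 1.01, 1.49, 2.02]
k = stiffness(d, f)  # effective stiffness in N/mm
```

Running the same fit over 48 wells in parallel is what turns single-sample testing into a screen.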

  19. High-throughput heterodyne thermoreflectance: Application to thermal conductivity measurements of a Fe-Si-Ge thin film alloy library

    Science.gov (United States)

    d'Acremont, Quentin; Pernot, Gilles; Rampnoux, Jean-Michel; Furlan, Andrej; Lacroix, David; Ludwig, Alfred; Dilhaire, Stefan

    2017-07-01

    A High-Throughput Time-Domain ThermoReflectance (HT-TDTR) technique was developed to perform fast thermal conductivity measurements with minimal user action required. This new setup is based on a heterodyne picosecond thermoreflectance system. The use of two different laser oscillators has been proven to reduce the acquisition time by two orders of magnitude and to avoid the experimental artefacts usually induced by the moving elements present in TDTR systems. Amplitude modulation associated with a lock-in detection scheme is included to maintain a high sensitivity to thermal properties. We demonstrate the capabilities of the HT-TDTR setup to perform high-throughput thermal analysis by mapping the thermal conductivity and interface resistances of a ternary thin-film silicide library FexSiyGe100-x-y over 20 analysis areas of different ternary alloy compositions.

  20. ViewBS: a powerful toolkit for visualization of high-throughput bisulfite sequencing data.

    Science.gov (United States)

    Huang, Xiaosan; Zhang, Shaoling; Li, Kongqing; Thimmapuram, Jyothi; Xie, Shaojun

    2017-10-26

    High throughput bisulfite sequencing (BS-seq) is an important technology for generating single-base DNA methylomes in both plants and animals. Toolkits for visualization are required to accelerate the analysis of BS-seq data. ViewBS, an open-source toolkit, can extract and visualize DNA methylome data easily and flexibly. By using Tabix, ViewBS can visualize BS-seq data for large datasets quickly. ViewBS can generate publication-quality figures, such as meta-plots, heat maps and violin-boxplots, which can help users answer biological questions. We illustrate its application using BS-seq data from Arabidopsis thaliana. ViewBS is freely available at: https://github.com/xie186/ViewBS. xie186@purdue.edu. Supplementary data are available at Bioinformatics online.
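A meta-plot of the kind ViewBS draws averages per-cytosine methylation levels (methylated reads over total reads) into fixed bins along a region. The underlying reduction is simple; a sketch with synthetic counts (not ViewBS's implementation):

```python
def meta_profile(positions, meth, total, region_len, bins=5):
    """Average methylation level (meth/total) per bin across a region."""
    sums = [0.0] * bins
    counts = [0] * bins
    for pos, m, t in zip(positions, meth, total):
        if t == 0:
            continue  # no read coverage at this cytosine
        b = min(pos * bins // region_len, bins - 1)
        sums[b] += m / t
        counts[b] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# synthetic cytosines: position, methylated reads, total reads
profile = meta_profile([10, 30, 50, 70, 90], [8, 4, 0, 2, 9],
                       [10, 10, 10, 4, 10], region_len=100, bins=5)
```

Averaging such profiles over many regions (all genes, all TEs, ...) gives the familiar meta-plot curve.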

  1. Field-based high throughput phenotyping rapidly identifies genomic regions controlling yield components in rice.

    Science.gov (United States)

    Tanger, Paul; Klassen, Stephen; Mojica, Julius P; Lovell, John T; Moyers, Brook T; Baraoidan, Marietta; Naredo, Maria Elizabeth B; McNally, Kenneth L; Poland, Jesse; Bush, Daniel R; Leung, Hei; Leach, Jan E; McKay, John K

    2017-02-21

    To ensure food security in the face of population growth, decreasing water and land for agriculture, and increasing climate variability, crop yields must increase faster than the current rates. Increased yields will require implementing novel approaches in genetic discovery and breeding. Here we demonstrate the potential of field-based high throughput phenotyping (HTP) on a large recombinant population of rice to identify genetic variation underlying important traits. We find that detecting quantitative trait loci (QTL) with HTP is as accurate and effective as traditional labor-intensive measures of flowering time, height, biomass, grain yield, and harvest index. Genetic mapping in this population, derived from a cross of a modern cultivar (IR64) with a landrace (Aswina), identified four alleles with negative effects on grain yield that are fixed in IR64, demonstrating the potential of HTP of large populations as a strategy for the second green revolution.

  2. High-throughput phenotyping of plant populations using a personal digital assistant.

    Science.gov (United States)

    Vankudavath, Raju Naik; Bodanapu, Reddaiah; Sreelakshmi, Yellamaraju; Sharma, Rameshwar

    2012-01-01

    During many biological experiments voluminous data are acquired, best collected with portable data acquisition devices and later analyzed on a personal computer (PC). Public domain software catering to data acquisition and analysis is currently limited. The necessity of phenotyping large plant populations led to the development of the application "PHENOME" to manage the data. PHENOME allows acquisition of phenotypic data using a personal digital assistant (PDA) with a built-in barcode scanner. The acquired data can be exported to a customized database on a PC for further analysis and cataloging. PHENOME can be used for a variety of applications, for example, high-throughput phenotyping of a mutagenized or mapping population, or phenotyping of several individuals in one or more ecological niches.

  3. Towards low-delay and high-throughput cognitive radio vehicular networks

    Directory of Open Access Journals (Sweden)

    Nada Elgaml

    2017-12-01

    Cognitive Radio Vehicular Ad-hoc Networks (CR-VANETs) exploit cognitive radios to allow vehicles to access the unused channels in their radio environment. Thus, CR-VANETs not only suffer from the traditional CR problems, especially spectrum sensing, but also face new challenges due to the highly dynamic nature of VANETs. In this paper, we present a low-delay and high-throughput radio environment assessment scheme for CR-VANETs that can be easily incorporated into the IEEE 802.11p standard developed for VANETs. Simulation results show that the proposed scheme significantly reduces the time needed to obtain the radio environment map and increases CR-VANET throughput.

  4. SNP calling using genotype model selection on high-throughput sequencing data

    KAUST Repository

    You, Na

    2012-01-16

    Motivation: A review of the available single nucleotide polymorphism (SNP) calling procedures for Illumina high-throughput sequencing (HTS) platform data reveals that most rely mainly on base-calling and mapping qualities as sources of error when calling SNPs. Thus, errors not involved in base-calling or alignment, such as those in genomic sample preparation, are not accounted for. Results: A novel method of consensus and SNP calling, Genotype Model Selection (GeMS), is given, which accounts for the errors that occur during the preparation of the genomic sample. Simulations and real data analyses indicate that GeMS has the best performance balance of sensitivity and positive predictive value among the tested SNP callers. © The Author 2012. Published by Oxford University Press. All rights reserved.
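    The genotype-model idea behind likelihood-based SNP callers can be illustrated with a toy calculation: each candidate diploid genotype implies an expected alternate-allele fraction, and the genotype maximizing the read-count likelihood is called. This is a minimal sketch, not GeMS itself (it omits the sample-preparation error terms that distinguish GeMS); the function names and the 1% error rate are assumptions.

    ```python
    import math

    def genotype_likelihoods(ref_count, alt_count, error_rate=0.01):
        """Log-likelihood of each diploid genotype given allele counts.

        Each genotype implies an expected alternate-allele fraction
        (0, 0.5, 1), softened by a per-base error rate.
        """
        loglik = {}
        for genotype, alt_frac in (("RR", 0.0), ("RA", 0.5), ("AA", 1.0)):
            p_alt = alt_frac * (1 - error_rate) + (1 - alt_frac) * error_rate
            loglik[genotype] = (alt_count * math.log(p_alt)
                                + ref_count * math.log(1 - p_alt))
        return loglik

    def call_genotype(ref_count, alt_count):
        # Pick the genotype with the highest log-likelihood.
        ll = genotype_likelihoods(ref_count, alt_count)
        return max(ll, key=ll.get)
    ```

    For example, 30 reference reads against 2 alternate reads support a homozygous-reference call, while a roughly even split supports a heterozygous call.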

  5. Analysis and Ranking of Protein-Protein Docking Models Using Inter-Residue Contacts and Inter-Molecular Contact Maps

    Directory of Open Access Journals (Sweden)

    Romina Oliva

    2015-07-01

    Full Text Available In view of the increasing interest both in inhibitors of protein-protein interactions and in protein drugs themselves, analysis of the three-dimensional structure of protein-protein complexes is assuming greater relevance in drug design. In the many cases where an experimental structure is not available, protein-protein docking becomes the method of choice for predicting the arrangement of the complex. However, reliably scoring protein-protein docking poses is still an unsolved problem. As a consequence, the screening of many docking models is usually required in the analysis step, to possibly single out the correct ones. Here, making use of exemplary cases, we review our recently introduced methods for the analysis of protein complex structures and for the scoring of protein docking poses, based on the use of inter-residue contacts and their visualization in inter-molecular contact maps. We also show that the ensemble of tools we developed can be used in the context of rational drug design targeting protein-protein interactions.
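    The core data structure this review builds on, an inter-molecular contact map, can be sketched in a few lines: two residues are in contact when their representative atoms fall within a distance cutoff, and docking poses can be compared by the contacts they share. The coordinates, the 8 Å cutoff, and the function names below are illustrative assumptions, not taken from the reviewed tools.

    ```python
    import math

    def contact_map(receptor_coords, ligand_coords, cutoff=8.0):
        """Boolean matrix: entry [i][j] is True if residue i of the
        receptor contacts residue j of the ligand (distance <= cutoff)."""
        return [[math.dist(r, l) <= cutoff for l in ligand_coords]
                for r in receptor_coords]

    def shared_contacts(map_a, map_b):
        """Number of contacts two docking poses have in common -- a simple
        similarity measure for clustering or ranking poses."""
        return sum(a and b
                   for row_a, row_b in zip(map_a, map_b)
                   for a, b in zip(row_a, row_b))
    ```

    Counting shared contacts between poses is one simple way such maps support consensus-based ranking.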

  6. Analysis and Ranking of Protein-Protein Docking Models Using Inter-Residue Contacts and Inter-Molecular Contact Maps

    KAUST Repository

    Oliva, Romina

    2015-07-01

    In view of the increasing interest both in inhibitors of protein-protein interactions and in protein drugs themselves, analysis of the three-dimensional structure of protein-protein complexes is assuming greater relevance in drug design. In the many cases where an experimental structure is not available, protein-protein docking becomes the method of choice for predicting the arrangement of the complex. However, reliably scoring protein-protein docking poses is still an unsolved problem. As a consequence, the screening of many docking models is usually required in the analysis step, to possibly single out the correct ones. Here, making use of exemplary cases, we review our recently introduced methods for the analysis of protein complex structures and for the scoring of protein docking poses, based on the use of inter-residue contacts and their visualization in inter-molecular contact maps. We also show that the ensemble of tools we developed can be used in the context of rational drug design targeting protein-protein interactions.

  7. A Primer on High-Throughput Computing for Genomic Selection

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized
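    The throughput gain from dispatching independent trait evaluations to a pool of workers, rather than running them strictly one after another, can be sketched with Python's standard library (the paper itself discusses shell, Perl, and R pipelines); `fit_model` here is a hypothetical stand-in for an expensive genomic-prediction model fit.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def fit_model(trait, markers):
        """Placeholder "model fit": average marker value per trait.

        Purely illustrative -- a real genomic-selection fit would be a
        long-running statistical computation per trait.
        """
        return trait, sum(markers) / len(markers)

    def evaluate_traits(traits, markers, workers=4):
        """Dispatch one independent model fit per trait to a worker pool,
        so evaluations overlap instead of running sequentially."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(fit_model, t, markers) for t in traits]
            return dict(f.result() for f in futures)
    ```

    With genuinely compute-bound fits, the same pattern would use processes or cluster job submission rather than threads.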

  8. A practical strategy for using miniature chromatography columns in a standardized high-throughput workflow for purification development of monoclonal antibodies.

    Science.gov (United States)

    Welsh, John P; Petroff, Matthew G; Rowicki, Patricia; Bao, Haiying; Linden, Thomas; Roush, David J; Pollard, Jennifer M

    2014-01-01

    The emergence of monoclonal antibody (mAb) therapies has created a need for faster and more efficient bioprocess development strategies in order to meet timeline and material demands. In this work, a high-throughput process development (HTPD) strategy implementing several high-throughput chromatography purification techniques is described. Namely, batch incubations are used to scout feasible operating conditions, miniature columns are then used to determine separation of impurities, and, finally, a limited number of lab scale columns are tested to confirm the conditions identified using high-throughput techniques and to provide a path toward large scale processing. This multistep approach builds upon previous HTPD work by combining, in a unique sequential fashion, the flexibility and throughput of batch incubations with the increased separation characteristics for the packed bed format of miniature columns. Additionally, in order to assess the applicability of using miniature columns in this workflow, transport considerations were compared with traditional lab scale columns, and performances were mapped for the two techniques. The high-throughput strategy was utilized to determine optimal operating conditions with two different types of resins for a difficult separation of a mAb monomer from aggregates. Other more detailed prediction models are cited, but the intent of this work was to use high-throughput strategies as a general guide for scaling and assessing operating space rather than as a precise model to exactly predict performance. © 2014 American Institute of Chemical Engineers.

  9. High-throughput optical screening of cellular mechanotransduction

    Science.gov (United States)

    Compton, Jonathan L.; Luo, Justin C.; Ma, Huan; Botvinick, Elliot; Venugopalan, Vasan

    2014-09-01

    We introduce an optical platform for rapid, high-throughput screening of exogenous molecules that affect cellular mechanotransduction. Our method initiates mechanotransduction in adherent cells using single laser-microbeam generated microcavitation bubbles without requiring flow chambers or microfluidics. These microcavitation bubbles expose adherent cells to a microtsunami, a transient microscale burst of hydrodynamic shear stress, which stimulates cells over areas approaching 1 mm². We demonstrate microtsunami-initiated mechanosignalling in primary human endothelial cells. This observed signalling is consistent with G-protein-coupled receptor stimulation, resulting in Ca2+ release by the endoplasmic reticulum. Moreover, we demonstrate the dose-dependent modulation of microtsunami-induced Ca2+ signalling by introducing a known inhibitor to this pathway. The imaging of Ca2+ signalling and its modulation by exogenous molecules demonstrates the capacity to initiate and assess cellular mechanosignalling in real time. We utilize this capability to screen the effects of a set of small molecules on cellular mechanotransduction in 96-well plates using standard imaging cytometry.

  10. High-Throughput Printing Process for Flexible Electronics

    Science.gov (United States)

    Hyun, Woo Jin

    Printed electronics is an emerging field for manufacturing electronic devices with low cost and minimal material waste for a variety of applications including displays, distributed sensing, smart packaging, and energy management. Moreover, its compatibility with roll-to-roll production formats and flexible substrates is desirable for continuous, high-throughput production of flexible electronics. Despite the promise, however, the roll-to-roll production of printed electronics is quite challenging due to web movement hindering accurate ink registration and high-fidelity printing. In this talk, I will present a promising strategy for roll-to-roll production using a novel printing process that we term SCALE (Self-aligned Capillarity-Assisted Lithography for Electronics). By utilizing capillarity of liquid inks on nano/micro-structured substrates, the SCALE process facilitates high-resolution and self-aligned patterning of electrically functional inks with greatly improved printing tolerance. I will show the fabrication of key building blocks (e.g. transistor, resistor, capacitor) for electronic circuits using the SCALE process on plastics.

  11. A high-throughput biliverdin assay using infrared fluorescence.

    Science.gov (United States)

    Berlec, Aleš; Štrukelj, Borut

    2014-07-01

    Biliverdin is an intermediate of heme degradation with an established role in veterinary clinical diagnostics of liver-related diseases. The need for chromatographic assays has so far prevented its wider use in diagnostic laboratories. The current report describes a simple, fast, high-throughput, and inexpensive assay, based on the interaction of biliverdin with infrared fluorescent protein (iRFP), that yields functional protein exhibiting infrared fluorescence. The assay is linear in the range of 0-10 µmol/l of biliverdin, with a limit of detection of 0.02 µmol/l and a limit of quantification of 0.03 µmol/l. The assay is accurate, with a relative error of less than 0.15, and precise, with a coefficient of variation of less than 5% in the concentration range of 2-9 µmol/l of biliverdin. More than 95% of biliverdin was recovered from biological samples by simple dimethyl sulfoxide extraction. There was almost no interference by hemin, although bilirubin caused an increase in the biliverdin concentration, probably due to spontaneous oxidation of bilirubin to biliverdin. The newly developed biliverdin assay is appropriate for reliable quantification of large numbers of samples in veterinary medicine.
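    The quantification step of such a linear fluorescence assay can be sketched as fitting a calibration line to known standards and inverting it for unknown samples. The function names and numbers below are illustrative assumptions, not the published calibration data.

    ```python
    def fit_line(x, y):
        """Ordinary least-squares fit y = slope * x + intercept,
        e.g. fluorescence signal against standard concentrations."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        return slope, my - slope * mx

    def concentration(signal, slope, intercept):
        """Invert the calibration curve: fluorescence -> concentration."""
        return (signal - intercept) / slope
    ```

    In practice the calibration would be restricted to the assay's validated linear range (here, 0-10 µmol/l).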

  12. High-Throughput Network Communication with NetIO

    CERN Document Server

    Schumacher, Jörn; The ATLAS collaboration; Vandelli, Wainer

    2016-01-01

    HPC network technologies like Infiniband, TrueScale, or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS target exclusively the HPC community and are not well suited for DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs (and this has been done), but it requires a non-negligible effort and expert knowledge. On the other hand, message services like 0MQ have gained popularity in the HEP community. Such APIs allow building distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on to...

  13. Quantitative High-Throughput Screening Using a Coincidence Reporter Biocircuit.

    Science.gov (United States)

    Schuck, Brittany W; MacArthur, Ryan; Inglese, James

    2017-04-10

    Reporter-biased artifacts, i.e., compounds that interact directly with the reporter enzyme used in a high-throughput screening (HTS) assay rather than with the biological process or pharmacology being interrogated, are now widely recognized to reduce the efficiency and quality of HTS used for chemical probe and therapeutic development. Furthermore, narrow or single-concentration HTS perpetuates false negatives during primary screening campaigns. Titration-based HTS, or quantitative HTS (qHTS), and coincidence reporter technology can be employed to reduce false negatives and false positives, respectively, thereby increasing the quality and efficiency of primary screening efforts, where the number of compounds investigated can range from tens of thousands to millions. The three protocols described here allow for generation of a coincidence reporter (CR) biocircuit to interrogate a biological or pharmacological question of interest, generation of a stable cell line expressing the CR biocircuit, and qHTS using the CR biocircuit to efficiently identify high-quality biologically active small molecules. © 2017 John Wiley & Sons, Inc.

  14. Functional approach to high-throughput plant growth analysis

    Science.gov (United States)

    2013-01-01

    Method: Taking advantage of the current rapid development in imaging systems and computer vision algorithms, we present HPGA, a high-throughput phenotyping platform for plant growth modeling and functional analysis, which produces a better understanding of energy distribution with regard to the balance between growth and defense. HPGA has two components, PAE (Plant Area Estimation) and GMA (Growth Modeling and Analysis). In PAE, taking the complex leaf-overlap problem into consideration, the area of every plant is measured from top-view images in four steps. Given the abundant measurements obtained with PAE, the second module, GMA, applies a nonlinear growth model to generate growth curves, followed by functional data analysis. Results: Experimental results on the model plant Arabidopsis thaliana show that, compared to an existing approach, HPGA reduces the error rate of measuring plant area by half. The application of HPGA to cfq mutant plants under fluctuating light reveals a correlation between low photosynthetic rates and small plant area (compared to wild type), which suggests the hypothesis that knocking out cfq changes the sensitivity of the energy distribution under fluctuating light conditions to repress leaf growth. Availability: HPGA is available at http://www.msu.edu/~jinchen/HPGA. PMID:24565437
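    The growth-modeling step can be illustrated with a minimal sketch that assumes a logistic growth curve with a known carrying capacity K; under that assumption the model linearizes, so an ordinary least-squares fit suffices. GMA's actual model and fitting procedure are not specified in the record, so everything below is an assumption.

    ```python
    import math

    def logistic(t, r, b, capacity):
        """Logistic growth curve: area(t) = K / (1 + exp(-(r*t + b)))."""
        return capacity / (1 + math.exp(-(r * t + b)))

    def fit_logistic(times, areas, capacity):
        """Fit r and b for a known capacity K by linearizing:
        ln(K/y - 1) = -r*t - b, then ordinary least squares."""
        z = [math.log(capacity / a - 1) for a in areas]
        n = len(times)
        mt, mz = sum(times) / n, sum(z) / n
        stt = sum((t - mt) ** 2 for t in times)
        stz = sum((t - mt) * (zi - mz) for t, zi in zip(times, z))
        slope = stz / stt            # equals -r
        intercept = mz - slope * mt  # equals -b
        return -slope, -intercept    # growth rate r, offset b
    ```

    With noisy measurements a nonlinear least-squares fit (with K also free) would replace the linearization, but the overall pipeline shape is the same: per-plant area series in, growth-curve parameters out.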

  15. High-throughput literature mining to support read-across ...

    Science.gov (United States)

    Building scientific confidence in the development and evaluation of read-across remains an ongoing challenge. Approaches include establishing systematic frameworks to identify sources of uncertainty and ways to address them. One source of uncertainty is related to characterizing biological similarity. Many research efforts are underway, such as structuring mechanistic data in adverse outcome pathways and investigating the utility of high-throughput (HT)/high-content (HC) screening data. A largely untapped resource for read-across to date is the biomedical literature. This information has the potential to support read-across by facilitating the identification of valid source analogues with similar biological and toxicological profiles as well as providing the mechanistic understanding for any prediction made. A key challenge in using biomedical literature is to convert and translate its unstructured form into a computable format that can be linked to chemical structure. We developed a novel text-mining strategy to represent literature information for read-across. Keywords were used to organize literature into toxicity signatures at the chemical level. These signatures were integrated with HT in vitro data and curated chemical structures. A rule-based algorithm assessed the strength of the literature relationship, providing a mechanism to rank and visualize the signature as literature ToxPIs (LitToxPIs). LitToxPIs were developed for over 6,000 chemicals for a varie
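    The signature-building idea, organizing literature into keyword counts at the chemical level and applying a rule to assess relationship strength, can be sketched as follows. The term list, the recurrence threshold, and the function names are hypothetical illustrations, not the LitToxPI vocabulary or rules.

    ```python
    from collections import Counter

    # Hypothetical toxicity vocabulary; the real system uses a curated set.
    TOXICITY_TERMS = {"hepatotoxicity", "carcinogenicity", "neurotoxicity"}

    def literature_signature(abstracts):
        """Count toxicity keywords across one chemical's abstracts."""
        counts = Counter()
        for text in abstracts:
            for word in text.lower().replace(",", " ").replace(".", " ").split():
                if word in TOXICITY_TERMS:
                    counts[word] += 1
        return counts

    def relationship_strength(counts, threshold=2):
        """Toy rule: a term is 'supported' when it recurs in the literature
        at least `threshold` times."""
        return {term for term, n in counts.items() if n >= threshold}
    ```

    In the published workflow, such per-chemical signatures would then be merged with in vitro data and rendered as ranked ToxPI-style profiles.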

  16. Field high-throughput phenotyping: the new crop breeding frontier.

    Science.gov (United States)

    Araus, José Luis; Cairns, Jill E

    2014-01-01

    Constraints in field phenotyping capability limit our ability to dissect the genetics of quantitative traits, particularly those related to yield and stress tolerance (e.g., yield potential as well as increased drought and heat tolerance and nutrient efficiency). The development of effective field-based high-throughput phenotyping platforms (HTPPs) remains a bottleneck for future breeding advances. However, progress in sensors, aeronautics, and high-performance computing is paving the way. Here, we review recent advances in field HTPPs, which should combine, at an affordable cost, high capacity for data recording, scoring, and processing with non-invasive remote sensing methods and automated environmental data collection. Laboratory analyses of key plant parts may complement direct phenotyping under field conditions. Improvements in user-friendly data management, together with more powerful interpretation of results, should increase the use of field HTPPs, thereby increasing the efficiency of crop genetic improvement to meet the needs of future generations. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Mouse eye enucleation for remote high-throughput phenotyping.

    Science.gov (United States)

    Mahajan, Vinit B; Skeie, Jessica M; Assefnia, Amir H; Mahajan, Maryann; Tsang, Stephen H

    2011-11-19

    The mouse eye is an important genetic model for the translational study of human ophthalmic disease. Blinding diseases in humans, such as macular degeneration, photoreceptor degeneration, cataract, glaucoma, retinoblastoma, and diabetic retinopathy have been recapitulated in transgenic mice.(1-5) Most transgenic and knockout mice have been generated by laboratories to study non-ophthalmic diseases, but genetic conservation between organ systems suggests that many of the same genes may also play a role in ocular development and disease. Hence, these mice represent an important resource for discovering new genotype-phenotype correlations in the eye. Because these mice are scattered across the globe, it is difficult to acquire, maintain, and phenotype them in an efficient, cost-effective manner. Thus, most high-throughput ophthalmic phenotyping screens are restricted to a few locations that require on-site, ophthalmic expertise to examine eyes in live mice. (6-9) An alternative approach developed by our laboratory is a method for remote tissue-acquisition that can be used in large or small-scale surveys of transgenic mouse eyes. Standardized procedures for video-based surgical skill transfer, tissue fixation, and shipping allow any lab to collect whole eyes from mutant animals and send them for molecular and morphological phenotyping. In this video article, we present techniques to enucleate and transfer both unfixed and perfusion fixed mouse eyes for remote phenotyping analyses.

  18. Large scale library generation for high throughput sequencing.

    Directory of Open Access Journals (Sweden)

    Erik Borgström

    Full Text Available BACKGROUND: Large efforts have recently been made to automate the sample preparation protocols for massively parallel sequencing in order to match the increasing instrument throughput. Still, size selection through agarose gel electrophoresis separation is a labor-intensive bottleneck of these protocols. METHODOLOGY/PRINCIPAL FINDINGS: In this study a method for automatic library preparation and size selection on a liquid handling robot is presented. The method utilizes selective precipitation of certain sizes of DNA molecules onto paramagnetic beads for cleanup and selection after standard enzymatic reactions. CONCLUSIONS/SIGNIFICANCE: The method is used to generate libraries for de novo and re-sequencing on the Illumina HiSeq 2000 instrument with a throughput of 12 samples per instrument in approximately 4 hours. The resulting output data show quality scores and pass filter rates comparable to manually prepared samples. The sample size distribution can be adjusted for each application, making the method suitable for all high-throughput DNA processing protocols seeking to control size intervals.

  19. High Throughput Multispectral Image Processing with Applications in Food Science.

    Science.gov (United States)

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Recently, machine vision has been gaining attention in food science as well as in the food industry concerning food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only for estimation and even prediction of food quality but also for detection of adulteration. Towards these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème, and table olives, so as to increase objectivity and data reproducibility and to enable low-cost information extraction and faster quality assessment, without human intervention. The outcome of image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
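    The segmentation idea, clustering pixel intensities with a Gaussian mixture, can be illustrated with a minimal one-dimensional, two-component EM sketch (fixed unit variance for brevity). The real method operates on multispectral images with automatically selected bands, so this is only a conceptual toy under stated assumptions.

    ```python
    import math

    def em_two_gaussians(data, iters=50):
        """EM for a two-component 1-D Gaussian mixture with fixed unit
        variance; returns the two component means (e.g. foreground and
        background pixel intensities)."""
        mu1, mu2 = min(data), max(data)  # simple initialization
        w = 0.5                          # mixing weight of component 1
        for _ in range(iters):
            # E-step: responsibility of component 1 for each point.
            resp = []
            for x in data:
                p1 = w * math.exp(-(x - mu1) ** 2 / 2)
                p2 = (1 - w) * math.exp(-(x - mu2) ** 2 / 2)
                resp.append(p1 / (p1 + p2))
            # M-step: update means and mixing weight.
            n1 = sum(resp)
            n2 = len(data) - n1
            mu1 = sum(r * x for r, x in zip(resp, data)) / n1
            mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / n2
            w = n1 / len(data)
        return mu1, mu2
    ```

    Segmentation then amounts to assigning each pixel to the component with the higher responsibility.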

  20. High Throughput Heuristics for Prioritizing Human Exposure to ...

    Science.gov (United States)

    The risk posed to human health by any of the thousands of untested anthropogenic chemicals in our environment is a function of both the potential hazard presented by the chemical, and the possibility of being exposed. Without the capacity to make quantitative, albeit uncertain, forecasts of exposure, the putative risk of adverse health effect from a chemical cannot be evaluated. We used Bayesian methodology to infer ranges of exposure intakes that are consistent with biomarkers of chemical exposures identified in urine samples from the U.S. population by the National Health and Nutrition Examination Survey (NHANES). We perform linear regression on inferred exposure for demographic subsets of NHANES demarcated by age, gender, and weight, using high-throughput chemical descriptors gleaned from databases and chemical structure-based calculators. We find that five of these descriptors are capable of explaining roughly 50% of the variability across chemicals for all the demographic groups examined, including children aged 6-11. For the thousands of chemicals with no other source of information, this approach allows rapid and efficient prediction of average exposure intake of environmental chemicals. The methods described by this manuscript provide a highly improved methodology for HTS of human exposure to environmental chemicals. The manuscript includes a ranking of 7785 environmental chemicals with respect to potential human exposure, including most of the Tox21 in vit
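    The reported "roughly 50% of the variability" corresponds to a coefficient of determination near 0.5; the metric itself is a short computation (the actual regression and descriptors are not reproduced here, so the data below is purely illustrative):

    ```python
    def r_squared(observed, predicted):
        """Coefficient of determination: the fraction of variance in the
        observed values explained by the predictions."""
        mean_obs = sum(observed) / len(observed)
        ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
        ss_tot = sum((o - mean_obs) ** 2 for o in observed)
        return 1 - ss_res / ss_tot
    ```

    An R² of 1 means the descriptors explain the inferred exposures exactly; predicting only the mean yields 0.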

  1. High-throughput membrane surface modification to control NOM fouling.

    Science.gov (United States)

    Zhou, Mingyan; Liu, Hongwei; Kilduff, James E; Langer, Robert; Anderson, Daniel G; Belfort, Georges

    2009-05-15

    A novel method for synthesis and screening of fouling-resistant membrane surfaces was developed by combining a high-throughput platform (HTP) approach with photoinduced graft polymerization (PGP) for facile modification of commercial poly(aryl sulfone) membranes. This method is an inexpensive, fast, simple, reproducible, and scalable approach to identify fouling-resistant surfaces appropriate for a specific feed. In this research, natural organic matter (NOM)-resistant surfaces were synthesized and identified from a library of 66 monomers. Surfaces were prepared via graft polymerization onto poly(ether sulfone) (PES) membranes and were evaluated using an assay involving NOM adsorption, followed by pressure-driven filtration. In this work new and previously tested low-fouling surfaces for NOM are identified, and their ability to mitigate NOM and protein (bovine serum albumin) fouling is compared. The best-performing monomers were the zwitterion [2-(methacryloyloxy)ethyl]dimethyl-(3-sulfopropyl)ammonium hydroxide and diacetone acrylamide, a neutral monomer containing an amide group. Other excellent surfaces were synthesized from amides, amines, basic monomers, and long-chain poly(ethylene glycol)s. Bench-scale studies conducted for selected monomers verified the scalability of HTP-PGP results. The results and the synthesis and screening method presented here offer new opportunities for choosing new membrane chemistries that minimize NOM fouling.

  2. Efficient Management of High-Throughput Screening Libraries with SAVANAH.

    Science.gov (United States)

    List, Markus; Elnegaard, Marlene Pedersen; Schmidt, Steffen; Christiansen, Helle; Tan, Qihua; Mollenhauer, Jan; Baumbach, Jan

    2017-02-01

    High-throughput screening (HTS) has become an indispensable tool for the pharmaceutical industry and for biomedical research. A high degree of automation allows for experiments in the range of a few hundred up to several hundred thousand to be performed in close succession. The basis for such screens are molecular libraries, that is, microtiter plates with solubilized reagents such as siRNAs, shRNAs, miRNA inhibitors or mimics, and sgRNAs, or small compounds, that is, drugs. These reagents are typically condensed to provide enough material for covering several screens. Library plates thus need to be serially diluted before they can be used as assay plates. This process, however, leads to an explosion in the number of plates and samples to be tracked. Here, we present SAVANAH, the first tool to effectively manage molecular screening libraries across dilution series. It conveniently links sample information from the library to experimental results from the assay plates. All results can be exported to the R statistical environment or piped into HiTSeekR ( http://hitseekr.compbio.sdu.dk ) for comprehensive follow-up analyses. In summary, SAVANAH supports the HTS community in managing and analyzing HTS experiments with an emphasis on serially diluted molecular libraries.
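    The bookkeeping problem SAVANAH addresses can be sketched as tracking concentrations through a serial dilution and tracing each assay well back to its library sample. The data shapes and names below are illustrative assumptions, not SAVANAH's actual schema.

    ```python
    def dilution_series(stock_conc, factor, steps):
        """Concentrations produced by serially diluting a library plate
        `steps` times by `factor` (stock concentration first)."""
        concs = [stock_conc]
        for _ in range(steps):
            concs.append(concs[-1] / factor)
        return concs

    def assay_well_source(assay_plate, well, plate_map):
        """Trace an assay-plate well back to its library sample via a
        (plate, well) -> (library plate, well) mapping."""
        return plate_map[(assay_plate, well)]
    ```

    A real library manager must keep such mappings consistent as the number of derived plates multiplies with every dilution step.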

  3. Assessing the utility and limitations of high throughput virtual screening

    Directory of Open Access Journals (Sweden)

    Paul Daniel Phillips

    2016-05-01

    Full Text Available Due to low cost, speed, and unmatched ability to explore large numbers of compounds, high-throughput virtual screening and molecular docking engines have become widely utilized by computational scientists. It is generally accepted that docking engines, such as AutoDock, produce reliable qualitative results for ligand-macromolecular receptor binding, and molecular docking results are commonly reported in the literature in the absence of complementary wet-lab experimental data. In this investigation, three variants of the sixteen-amino-acid peptide α-conotoxin MII were docked to a homology model of the α3β2 nicotinic acetylcholine receptor. DockoMatic version 2.0 was used to perform a virtual screen of each peptide ligand against the receptor for ten docking trials consisting of 100 AutoDock cycles per trial. The results were analyzed for both variation in the calculated binding energy obtained from AutoDock and the orientation of bound peptide within the receptor. The results show that, while no clear correlation exists between a consistent ligand binding pose and the calculated binding energy, AutoDock determined a consistent positioning of bound peptide in the majority of trials when at least ten trials were evaluated.
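    The analysis described, comparing calculated binding energies across repeated docking trials, can be sketched as simple summary statistics with a consistency flag. The 0.5 kcal/mol tolerance and the function name are assumptions for illustration, not values from the study.

    ```python
    from statistics import mean, stdev

    def summarize_trials(energies, tolerance=0.5):
        """Summarize binding energies (kcal/mol) from repeated docking
        trials; flag whether the trials agree within a spread tolerance."""
        mu = mean(energies)
        sd = stdev(energies)
        return {"mean": mu, "stdev": sd, "consistent": sd <= tolerance}
    ```

    With ten or more trials per ligand, such a summary makes it easy to spot ligands whose calculated energies scatter too widely to trust a single run.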

  4. High-Throughput Identification of Antimicrobial Peptides from Amphibious Mudskippers.

    Science.gov (United States)

    Yi, Yunhai; You, Xinxin; Bian, Chao; Chen, Shixi; Lv, Zhao; Qiu, Limei; Shi, Qiong

    2017-11-22

    Widespread existence of antimicrobial peptides (AMPs) has been reported in various animals with comprehensive biological activities, which is consistent with the important roles of AMPs as the first line of host defense system. However, no big-data-based analysis on AMPs from any fish species is available. In this study, we identified 507 AMP transcripts on the basis of our previously reported genomes and transcriptomes of two representative amphibious mudskippers, Boleophthalmus pectinirostris (BP) and Periophthalmus magnuspinnatus (PM). The former is predominantly aquatic with less time out of water, while the latter is primarily terrestrial with extended periods of time on land. Within these identified AMPs, 449 sequences are novel; 15 were reported in BP previously; 48 are identically overlapped between BP and PM; 94 were validated by mass spectrometry. Moreover, most AMPs presented differential tissue transcription patterns in the two mudskippers. Interestingly, we discovered two AMPs, hemoglobin β1 and amylin, with high inhibitions on Micrococcus luteus. In conclusion, our high-throughput screening strategy based on genomic and transcriptomic data opens an efficient pathway to discover new antimicrobial peptides for ongoing development of marine drugs.

  5. High-Throughput Identification of Antimicrobial Peptides from Amphibious Mudskippers

    Directory of Open Access Journals (Sweden)

    Yunhai Yi

    2017-11-01

    Full Text Available Widespread existence of antimicrobial peptides (AMPs) has been reported in various animals with comprehensive biological activities, which is consistent with the important roles of AMPs as the first line of host defense system. However, no big-data-based analysis on AMPs from any fish species is available. In this study, we identified 507 AMP transcripts on the basis of our previously reported genomes and transcriptomes of two representative amphibious mudskippers, Boleophthalmus pectinirostris (BP) and Periophthalmus magnuspinnatus (PM). The former is predominantly aquatic with less time out of water, while the latter is primarily terrestrial with extended periods of time on land. Within these identified AMPs, 449 sequences are novel; 15 were reported in BP previously; 48 are identically overlapped between BP and PM; 94 were validated by mass spectrometry. Moreover, most AMPs presented differential tissue transcription patterns in the two mudskippers. Interestingly, we discovered two AMPs, hemoglobin β1 and amylin, with high inhibitions on Micrococcus luteus. In conclusion, our high-throughput screening strategy based on genomic and transcriptomic data opens an efficient pathway to discover new antimicrobial peptides for ongoing development of marine drugs.

  6. High throughput screening for anti-Trypanosoma cruzi drug discovery.

    Directory of Open Access Journals (Sweden)

    Julio Alonso-Padilla

    2014-12-01

    Full Text Available The discovery of new therapeutic options against Trypanosoma cruzi, the causative agent of Chagas disease, stands as a fundamental need. Currently, there are only two drugs available to treat this neglected disease, which represents a major public health problem in Latin America. Both available therapies, benznidazole and nifurtimox, have significant toxic side effects, and their efficacy against the life-threatening symptomatic chronic stage of the disease is variable. Thus, there is an urgent need for new, improved anti-T. cruzi drugs. With the objective of reliably accelerating the drug discovery process against Chagas disease, several advances have been made in the last few years. The availability of engineered reporter-gene-expressing parasites triggered the development of phenotypic in vitro assays suitable for high throughput screening (HTS), as well as the establishment of new in vivo protocols that allow faster experimental outcomes. Recently, automated high content microscopy approaches have also been used to identify new parasitic inhibitors. These in vitro and in vivo early drug discovery approaches, which hopefully will contribute to bring better anti-T. cruzi drug entities in the near future, are reviewed here.

  7. Nanoliter high-throughput PCR for DNA and RNA profiling.

    Science.gov (United States)

    Brenan, Colin J H; Roberts, Douglas; Hurley, James

    2009-01-01

    The increasing emphasis in life science research on utilization of genetic and genomic information underlies the need for high-throughput technologies capable of analyzing the expression of multiple genes or the presence of informative single nucleotide polymorphisms (SNPs) in large-scale, population-based applications. Human disease research, disease diagnosis, personalized therapeutics, environmental monitoring, blood testing, and identification of genetic traits impacting agricultural practices, both in terms of food quality and production efficiency, are a few areas where such systems are in demand. This has stimulated the need for PCR technologies that preserve the intrinsic analytical benefits of PCR yet enable higher throughput without increasing time to answer, labor and reagent expenses, or workflow complexity. An example of such a system, based on a high-density array of nanoliter PCR assays, is described here. Functionally equivalent to a microtiter plate, the nanoplate system makes possible up to 3,072 simultaneous end-point or real-time PCR measurements in a device the size of a standard microscope slide. Methods for SNP genotyping with end-point TaqMan PCR assays and quantitative measurement of gene expression with SYBR Green I real-time PCR are outlined, and illustrative data showing system performance are provided.
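End-point TaqMan genotyping reduces to comparing two allele-specific fluorescence channels after amplification. The sketch below shows the basic cluster logic with invented thresholds; it is not the instrument's calling algorithm.

```python
def call_genotype(fam, vic, noise_floor=100, ratio_hom=3.0):
    """Toy end-point TaqMan caller: FAM reports allele 1, VIC allele 2.
    Thresholds (noise_floor, ratio_hom) are illustrative only."""
    if fam < noise_floor and vic < noise_floor:
        return "NoCall"            # neither probe amplified
    if fam > ratio_hom * vic:
        return "AA"                # homozygous allele 1
    if vic > ratio_hom * fam:
        return "BB"                # homozygous allele 2
    return "AB"                    # both probes fluoresce: heterozygote

print(call_genotype(5000, 300))    # AA
print(call_genotype(2400, 2100))   # AB
print(call_genotype(10, 20))       # NoCall
```

Production systems fit clusters across all samples on a plate rather than using fixed per-sample ratios.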

  8. The JCSG high-throughput structural biology pipeline.

    Science.gov (United States)

    Elsliger, Marc André; Deacon, Ashley M; Godzik, Adam; Lesley, Scott A; Wooley, John; Wüthrich, Kurt; Wilson, Ian A

    2010-10-01

    The Joint Center for Structural Genomics high-throughput structural biology pipeline has delivered more than 1000 structures to the community over the past ten years. The JCSG has made a significant contribution to the overall goal of the NIH Protein Structure Initiative (PSI) of expanding structural coverage of the protein universe, as well as making substantial inroads into structural coverage of an entire organism. Targets are processed through an extensive combination of bioinformatics and biophysical analyses to efficiently characterize and optimize each target prior to selection for structure determination. The pipeline uses parallel processing methods at almost every step in the process and can adapt to a wide range of protein targets from bacterial to human. The construction, expansion and optimization of the JCSG gene-to-structure pipeline over the years have resulted in many technological and methodological advances and developments. The vast number of targets and the enormous amounts of associated data processed through the multiple stages of the experimental pipeline required the development of a variety of valuable resources that, wherever feasible, have been converted to free-access web-based tools and applications.

  9. Achieving High Throughput for Data Transfer over ATM Networks

    Science.gov (United States)

    Johnson, Marjory J.; Townsend, Jeffrey N.

    1996-01-01

    File-transfer rates for ftp are often reported to be relatively slow, compared to the raw bandwidth available in emerging gigabit networks. While a major bottleneck is disk I/O, protocol issues impact performance as well. Ftp was developed and optimized for use over the TCP/IP protocol stack of the Internet. However, TCP has been shown to run inefficiently over ATM. In an effort to maximize network throughput, data-transfer protocols can be developed to run over UDP or directly over IP, rather than over TCP. If error-free transmission is required, techniques for achieving reliable transmission can be included as part of the transfer protocol. However, selected image-processing applications can tolerate a low level of errors in images that are transmitted over a network. In this paper we report on experimental work to develop a high-throughput protocol for unreliable data transfer over ATM networks. We attempt to maximize throughput by keeping the communications pipe full, but still keep packet loss under five percent. We use the Bay Area Gigabit Network Testbed as our experimental platform.
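Keeping packet loss under a stated bound over an unreliable transport implies per-datagram sequence numbering on the sender and loss accounting on the receiver. A minimal accounting sketch (our illustration, not the testbed code; the simulated 4% loss is invented data):

```python
def loss_rate(received_seqs, total_sent):
    """Fraction of datagrams lost, inferred from sender-stamped
    sequence numbers collected at the receiver."""
    return 1.0 - len(set(received_seqs)) / total_sent

def throughput_mbps(bytes_received, seconds):
    """Achieved application-level throughput in megabits per second."""
    return bytes_received * 8 / seconds / 1e6

# simulate a transfer of 1000 datagrams where every 25th is dropped (4% loss)
recv = [s for s in range(1000) if s % 25 != 0]
print(round(loss_rate(recv, 1000), 3))   # 0.04, under the 5% target
print(throughput_mbps(1_250_000, 10))    # 1.0 Mb/s for 1.25 MB in 10 s
```

In a real unreliable-transfer protocol the receiver would report these statistics back so the sender can throttle its rate to stay within the loss budget.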

  10. Validation of high throughput sequencing and microbial forensics applications.

    Science.gov (United States)

    Budowle, Bruce; Connell, Nancy D; Bielecka-Oder, Anna; Colwell, Rita R; Corbett, Cindi R; Fletcher, Jacqueline; Forsman, Mats; Kadavy, Dana R; Markotic, Alemka; Morse, Stephen A; Murch, Randall S; Sajantila, Antti; Schmedes, Sarah E; Ternus, Krista L; Turner, Stephen D; Minot, Samuel

    2014-01-01

    High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results allow analysis and interpretation, significantly influencing the course and/or focus of an investigation, and can impact the response of the government to an attack having individual, political, economic or military consequences. Interpretation of the results of microbial forensic analyses relies on understanding the performance and limitations of HTS methods, including analytical processes, assays and data interpretation. The utility of HTS must be defined carefully within established operating conditions and tolerances. Validation is essential in the development and implementation of microbial forensics methods used for formulating investigative leads and supporting attribution. HTS strategies vary, requiring guiding principles for HTS system validation. Three initial aspects of HTS, irrespective of chemistry, instrumentation or software, are: 1) sample preparation, 2) sequencing, and 3) data analysis. Criteria that should be considered for HTS validation for microbial forensics are presented here. Validation should be defined in terms of the specific application, and the criteria described here comprise a foundation for investigators to establish, validate and implement HTS as a tool in microbial forensics, enhancing public safety and national security.

  11. Advances in High Throughput Screening of Biomass Recalcitrance (Poster)

    Energy Technology Data Exchange (ETDEWEB)

    Turner, G. B.; Decker, S. R.; Tucker, M. P.; Law, C.; Doeppke, C.; Sykes, R. W.; Davis, M. F.; Ziebell, A.

    2012-06-01

    This was a poster displayed at the symposium. Advances on previous high-throughput screening of biomass recalcitrance methods have resulted in improved conversion and replicate precision. Changes in plate reactor metallurgy, improved preparation of control biomass, species-specific pretreatment conditions, and adjusted enzymatic hydrolysis parameters have reduced overall coefficients of variation to an average of 6% for sample replicates. These method changes have reduced plate-to-plate variation in control biomass recalcitrance and improved confidence in sugar-release differences between samples. With smaller errors, plant researchers can have a higher degree of assurance that more low-recalcitrance candidates can be identified. Significant changes in the plate reactor, control biomass preparation, pretreatment conditions, and enzyme formulation have significantly reduced sample and control replicate variability. Reactor plate metallurgy significantly impacts sugar release: aluminum leaching into the reaction during pretreatment degrades sugars and inhibits enzyme activity. Removal of starch and extractives significantly decreases control biomass variability. New enzyme formulations give more consistent and higher conversion levels, though they required re-optimization for switchgrass. Pretreatment time and temperature (severity) should be adjusted to specific biomass types, i.e., woody vs. herbaceous. Desalting of enzyme preps to remove low-molecular-weight stabilizers also improved conversion levels, likely due to the impact of water activity on enzyme structure and substrate interactions; it was not adopted here because of the need to continually desalt and re-validate precise enzyme concentration and activity.
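The replicate-precision metric quoted above, the coefficient of variation, is a one-line computation; the replicate values below are invented for illustration.

```python
from statistics import mean, pstdev

def coefficient_of_variation(replicates):
    """CV (%) = standard deviation / mean * 100, the replicate-precision
    metric behind the ~6% figure in the poster (data here are made up)."""
    return pstdev(replicates) / mean(replicates) * 100

# hypothetical sugar-release replicates for one biomass sample
print(round(coefficient_of_variation([0.95, 1.00, 1.05]), 1))  # 4.1
```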

  12. Generation of RNAi Libraries for High-Throughput Screens

    Directory of Open Access Journals (Sweden)

    Julie Clark

    2006-01-01

    Full Text Available The completion of genome sequencing for several organisms has created a great demand for genomic tools that can systematically analyze the growing wealth of data. In contrast to the classical reverse genetics approach of creating specific knockout cell lines or animals, which is time-consuming and expensive, RNA-mediated interference (RNAi) has emerged as a fast, simple, and cost-effective technique for gene knockdown at large scale. Since its discovery as a gene silencing response to double-stranded RNA (dsRNA) with homology to endogenous genes in Caenorhabditis elegans (C. elegans), RNAi technology has been adapted to various high-throughput screens (HTS) for genome-wide loss-of-function (LOF) analysis. Biochemical insights into the endogenous mechanism of RNAi have led to advances in RNAi methodology, including RNAi molecule synthesis, delivery, and sequence design. In this article, we briefly review these various RNAi library designs and discuss the benefits and drawbacks of each library strategy.

  13. Contact Pressure and Sliding Velocity Maps of the Friction, Wear and Emission from a Low-Metallic/Cast-Iron Disc Brake Contact Pair

    Directory of Open Access Journals (Sweden)

    J. Wahlström

    2017-12-01

    Full Text Available Particulate matter with an aerodynamic diameter of less than 10 µm (PM10) from car disc brakes contributes up to 50% of the total non-exhaust emissions from road transport in the EU. These emissions come from wear of the pad and rotor contact surfaces. Yet few studies have reported contact pressure and sliding velocity maps of the friction, wear, and particle emission performance of disc brake materials at a material level. Such maps are crucial to understanding material behaviour at different loads and can be used as input data for numerical simulations. A low-metallic pad and grey cast-iron rotor contact pair, commonly used in today's passenger car disc brakes, was studied using a pin-on-disc tribometer at twelve combinations of contact pressure and sliding speed. Maps of the coefficient of friction, specific wear rate, particle number rate, and particle mass rate are presented and discussed.

  14. Development of a phenotyping platform for high throughput screening of nodal root angle in sorghum.

    Science.gov (United States)

    Joshi, Dinesh C; Singh, Vijaya; Hunt, Colleen; Mace, Emma; van Oosterom, Erik; Sulman, Richard; Jordan, David; Hammer, Graeme

    2017-01-01

    In sorghum, the growth angle of nodal roots is a major component of root system architecture. It strongly influences the spatial distribution of roots of mature plants in the soil profile, which can impact drought adaptation. However, selection for nodal root angle in sorghum breeding programs has been restricted by the absence of a suitable high throughput phenotyping platform. The aim of this study was to develop a phenotyping platform for the rapid, non-destructive and digital measurement of nodal root angle of sorghum at the seedling stage. The phenotyping platform comprises 500 soil-filled root chambers (50 × 45 × 0.3 cm in size), made of transparent Perspex sheets that were placed in metal tubs and covered with polycarbonate sheets. Around 3 weeks after sowing, once the first flush of nodal roots was visible, roots were imaged in situ using an imaging box that included two digital cameras remotely controlled by two Android tablets. Free software (openGelPhoto.tcl) allowed precise measurement of nodal root angle from the digital images. The reliability and efficiency of the platform were evaluated by screening a large nested association mapping population of sorghum and a set of hybrids in six independent experimental runs that included up to 500 plants each. The platform revealed extensive genetic variation and high heritability (repeatability) for nodal root angle. High genetic correlations and consistent ranking of genotypes across experimental runs confirmed the reproducibility of the platform. This low cost, high throughput root phenotyping platform requires no sophisticated equipment, is adaptable to most glasshouse environments and is well suited to dissecting the genetic control of nodal root angle of sorghum. The platform is suitable for use in sorghum breeding programs aiming to improve drought adaptation through root system architecture manipulation.
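Measuring a root angle from a digitized image reduces to trigonometry on two points along the root axis. The reduction below is ours, not openGelPhoto.tcl's code; it assumes pixel coordinates with y increasing downward, so the angle is reported relative to vertical.

```python
import math

def nodal_root_angle(base, tip):
    """Angle (degrees) of a root axis relative to vertical, from two
    digitized points (x, y) in image coordinates, y increasing downward.
    Illustrative reduction of the image-based measurement."""
    dx = tip[0] - base[0]
    dy = tip[1] - base[1]          # positive: root grows downward
    return math.degrees(math.atan2(abs(dx), dy))

# a root running 60 px sideways for every 60 px downward: 45° from vertical
print(round(nodal_root_angle((100, 10), (160, 70)), 1))  # 45.0
```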

  15. High-Throughput Quantification of GFP-LC3+ Dots by Automated Fluorescence Microscopy.

    Science.gov (United States)

    Bravo-San Pedro, J M; Pietrocola, F; Sica, V; Izzo, V; Sauvat, A; Kepp, O; Maiuri, M C; Kroemer, G; Galluzzi, L

    2017-01-01

    Macroautophagy is a specific variant of autophagy that involves a dedicated double-membraned organelle commonly known as the autophagosome. Various methods have been developed to quantify the size of the autophagosomal compartment, an indirect indicator of macroautophagic responses, based on the peculiar ability of microtubule-associated protein 1 light chain 3 beta (MAP1LC3B; best known as LC3) to accumulate in forming autophagosomes upon maturation. One particularly convenient method to monitor the accumulation of mature LC3 within autophagosomes relies on a green fluorescent protein (GFP)-tagged variant of this protein and fluorescence microscopy. In physiological conditions, cells transfected temporarily or stably with a GFP-LC3-encoding construct exhibit a diffuse green fluorescence over the cytoplasm and nucleus. Conversely, in response to macroautophagy-promoting stimuli, the GFP-LC3 signal becomes punctate and often (but not always) predominantly cytoplasmic. The accumulation of GFP-LC3 in cytoplasmic dots, however, also follows blockage of any of the steps that ensure the degradation of mature autophagosomes, calling for the implementation of strategies that accurately discriminate between an increase in autophagic flux and an arrest in autophagic degradation. Various cell lines have been engineered to stably express GFP-LC3, which, combined with appropriate controls of flux, high-throughput imaging stations, and automated image analysis, offers a relatively straightforward tool to screen large chemical or biological libraries for inducers or inhibitors of autophagy. Here, we describe a simple and robust method for the high-throughput quantification of GFP-LC3+ dots by automated fluorescence microscopy. © 2017 Elsevier Inc. All rights reserved.
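At its computational core, counting GFP-LC3+ dots in a thresholded fluorescence image is connected-component labeling. The toy flood-fill sketch below conveys only that step; real pipelines first segment individual cells and filter dots by size and intensity, and the grid and threshold here are invented.

```python
def count_dots(grid, threshold):
    """Count 4-connected bright regions ("dots") in a small intensity
    grid: a toy stand-in for automated GFP-LC3 puncta quantification."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]

    def flood(r, c):
        stack = [(r, c)]
        while stack:
            y, x = stack.pop()
            if 0 <= y < h and 0 <= x < w and not seen[y][x] and grid[y][x] >= threshold:
                seen[y][x] = True
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]

    dots = 0
    for r in range(h):
        for c in range(w):
            if grid[r][c] >= threshold and not seen[r][c]:
                dots += 1
                flood(r, c)
    return dots

image = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 8],
    [0, 0, 0, 0, 8],
]
print(count_dots(image, threshold=5))  # 2 puncta
```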

  16. Detection of genomic variation by selection of a 9 Mb DNA region and high throughput sequencing.

    Directory of Open Access Journals (Sweden)

    Sergey I Nikolaev

    Full Text Available Detection of the rare polymorphisms and causative mutations of genetic diseases in a targeted genomic area has become a major goal in order to understand genomic and phenotypic variability. We have interrogated repeat-masked regions of 8.9 Mb on human chromosomes 21 (7.8 Mb) and 7 (1.1 Mb) from an individual from the International HapMap Project (NA12872). We have optimized a method of genomic selection for high throughput sequencing. Microarray-based selection and sequencing resulted in 260-fold enrichment, with 41% of reads mapping to the target region. 83% of SNPs in the targeted region had at least 4-fold sequence coverage and 54% at least 15-fold. When assaying HapMap SNPs in NA12872, our sequence genotypes are 91.3% concordant in regions with coverage ≥ 4-fold, and 97.9% concordant in regions with coverage ≥ 15-fold. About 81% of the SNPs recovered with both thresholds are listed in dbSNP. We observed that regions with low sequence coverage occur in close proximity to low-complexity DNA. Validation experiments using Sanger sequencing were performed for 46 SNPs with 15-20 fold coverage, with a confirmation rate of 96%, suggesting that DNA selection provides an accurate and cost-effective method for identifying rare genomic variants.
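The coverage-stratified concordance figures above follow a simple pattern: restrict sites to a minimum read depth, then compare sequence-derived genotypes against array genotypes. A sketch with invented data:

```python
def concordance(seq_calls, array_calls, coverage, min_cov):
    """Fraction of sites whose sequence-derived genotype matches the
    array genotype, restricted to sites with >= min_cov reads
    (mirrors the 4x / 15x stratification above; data are made up)."""
    kept = [(s, a) for s, a, cov in zip(seq_calls, array_calls, coverage)
            if cov >= min_cov]
    return sum(s == a for s, a in kept) / len(kept)

seq_calls   = ["AA", "AB", "BB", "AB", "AA"]
array_calls = ["AA", "AB", "BB", "AA", "AA"]
coverage    = [20,   18,   5,    4,    30]

print(concordance(seq_calls, array_calls, coverage, 15))  # 1.0 at >=15x
print(concordance(seq_calls, array_calls, coverage, 4))   # 0.8 at >=4x
```

Higher depth cut-offs typically raise concordance at the cost of excluding more sites, which is the trade-off the abstract quantifies.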

  17. SUGAR: graphical user interface-based data refiner for high-throughput DNA sequencing.

    Science.gov (United States)

    Sato, Yukuto; Kojima, Kaname; Nariai, Naoki; Yamaguchi-Kabata, Yumi; Kawai, Yosuke; Takahashi, Mamoru; Mimori, Takahiro; Nagasaki, Masao

    2014-08-08

    Next-generation sequencers (NGSs) have become one of the main tools of current biology. To obtain useful insights from NGS data, it is essential to control for low-quality portions of the data affected by technical errors such as air bubbles in the sequencing fluidics. We developed SUGAR (subtile-based GUI-assisted refiner), software that can handle ultra-high-throughput data through a user-friendly graphical user interface (GUI) with interactive analysis capability. SUGAR generates high-resolution quality heatmaps of the flowcell, enabling users to find possible signatures of technical errors that occurred during sequencing. The sequencing data generated from the error-affected regions of a flowcell can be selectively removed by automated analysis or GUI-assisted operations implemented in SUGAR. The automated data-cleaning function based on sequence read quality (Phred) scores was applied to public whole human genome sequencing data, and we show that the overall mapping quality was improved. The detailed data evaluation and cleaning enabled by SUGAR reduce technical problems in sequence read mapping, improving subsequent variant analyses that require high-quality sequence data and mapping results. The software should therefore be especially useful for controlling the quality of variant calls for low-frequency cell populations, e.g., cancers, in samples affected by technical errors during sequencing.
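The quality scores underlying such cleaning are standard FASTQ Phred values, encoded as ASCII characters with an offset of 33 (Sanger/Illumina 1.8+). The sketch below shows the bare arithmetic, not SUGAR's internals; the cutoff is illustrative.

```python
def mean_phred(quality_string, offset=33):
    """Mean Phred score of one read from its FASTQ quality line
    (ASCII offset 33 is the Sanger/Illumina 1.8+ convention)."""
    return sum(ord(ch) - offset for ch in quality_string) / len(quality_string)

def keep_read(quality_string, cutoff=20):
    """Illustrative quality filter: keep reads whose mean Phred score
    meets a cutoff (Q20 = 1% expected per-base error)."""
    return mean_phred(quality_string) >= cutoff

print(mean_phred("IIII"))      # 'I' encodes Q40, so the mean is 40.0
print(keep_read("!!!!"))       # '!' encodes Q0: False
```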

  18. Complex multi-enhancer contacts captured by Genome Architecture Mapping (GAM)

    Science.gov (United States)

    Beagrie, Robert A.; Scialdone, Antonio; Schueler, Markus; Kraemer, Dorothee C.A.; Chotalia, Mita; Xie, Sheila Q.; Barbieri, Mariano; de Santiago, Inês; Lavitas, Liron-Mark; Branco, Miguel R.; Fraser, James; Dostie, Josée; Game, Laurence; Dillon, Niall; Edwards, Paul A.W.; Nicodemi, Mario; Pombo, Ana

    2017-01-01

    Summary The organization of the genome in the nucleus and the interactions of genes with their regulatory elements are key features of transcriptional control and their disruption can cause disease. We developed a novel genome-wide method, Genome Architecture Mapping (GAM), for measuring chromatin contacts, and other features of three-dimensional chromatin topology, based on sequencing DNA from a large collection of thin nuclear sections. We apply GAM to mouse embryonic stem cells and identify an enrichment for specific interactions between active genes and enhancers across very large genomic distances, using a mathematical model ‘SLICE’ (Statistical Inference of Co-segregation). GAM also reveals an abundance of three-way contacts genome-wide, especially between regions that are highly transcribed or contain super-enhancers, highlighting a previously inaccessible complexity in genome architecture and a major role for gene-expression specific contacts in organizing the genome in mammalian nuclei. PMID:28273065
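The raw quantity behind GAM is co-segregation: how often two genomic windows are detected together across many thin nuclear sections. The sketch below computes that frequency on toy data; SLICE then fits a statistical model to such counts to infer specific interactions, which is not shown here.

```python
def cosegregation(profiles, locus_a, locus_b):
    """Raw co-segregation frequency: the fraction of nuclear profiles
    (each a set of loci detected by sequencing one thin section) in
    which both loci appear together. Toy illustration of the GAM signal."""
    both = sum((locus_a in p) and (locus_b in p) for p in profiles)
    return both / len(profiles)

# each set lists loci detected in one sequenced nuclear section (invented)
profiles = [{"geneX", "enh1"}, {"geneX"}, {"enh1"},
            {"geneX", "enh1"}, set()]
print(cosegregation(profiles, "geneX", "enh1"))  # 0.4
```

Three-way contacts, which the paper highlights, extend the same counting to triplets of windows detected in the same section.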

  19. High-throughput flow cytometry data normalization for clinical trials.

    Science.gov (United States)

    Finak, Greg; Jiang, Wenxin; Krouse, Kevin; Wei, Chungwen; Sanz, Ignacio; Phippard, Deborah; Asare, Adam; De Rosa, Stephen C; Self, Steve; Gottardo, Raphael

    2014-03-01

    Flow cytometry experiments in clinical trials generate very large datasets and are usually highly standardized, focusing on endpoints that are well defined a priori. Staining variability of individual markers is not uncommon and complicates manual gating, requiring the analyst to adapt gates for each sample, which is unwieldy for large datasets. It can lead to unreliable measurements, especially if a template-gating approach is used without further correction to the gates. In this article, a computational framework is presented for normalizing the fluorescence intensity of multiple markers in specific cell populations across samples, suitable for high-throughput processing of large clinical trial datasets. Previous approaches to normalization have been global, applied to all cells or to data with debris removed, and provided no mechanism to handle specific cell subsets. This approach integrates tightly with the gating process, so that normalization is performed during gating and is local to the specific cell subsets exhibiting variability. This improves peak alignment and the performance of the algorithm. The performance of the algorithm is demonstrated on two clinical trial datasets from the HIV Vaccine Trials Network (HVTN) and the Immune Tolerance Network (ITN). In the ITN dataset, we show that local normalization combined with template gating can account for sample-to-sample variability as effectively as manual gating. In the HVTN dataset, it is shown that local normalization mitigates false-positive vaccine response calls in an intracellular cytokine staining assay. In both datasets, local normalization performs better than global normalization. The normalization framework allows the use of template gates even in the presence of sample-to-sample staining variability, mitigates the subjectivity and bias of manual gating, and decreases the time necessary to analyze large datasets. © 2013 International Society for Advancement of Cytometry.
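A heavily simplified illustration of landmark-style normalization: shift each sample's intensities so a common peak lands at a reference location. The published method aligns density landmarks within gated subsets; this sketch uses the median as a stand-in landmark and invented values, and only conveys the idea.

```python
from statistics import median

def align_to_landmark(values, reference_peak):
    """Shift one sample's fluorescence intensities so its central peak
    (approximated here by the median) sits at a common reference.
    Toy stand-in for per-subset landmark alignment."""
    shift = reference_peak - median(values)
    return [v + shift for v in values]

sample_a = [980, 1000, 1020, 1050]
sample_b = [1180, 1200, 1220, 1250]   # same population, stained brighter
ref = 1000

aligned_a = align_to_landmark(sample_a, ref)
aligned_b = align_to_landmark(sample_b, ref)
print(median(aligned_a), median(aligned_b))  # both land on 1000.0
```

After alignment, a single template gate placed at the reference location captures the population in both samples despite the staining shift.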

  20. Mining Chemical Activity Status from High-Throughput Screening Assays.

    Science.gov (United States)

    Soufan, Othman; Ba-alawi, Wail; Afeef, Moataz; Essack, Magbubah; Rodionov, Valentin; Kalnis, Panos; Bajic, Vladimir B

    2015-01-01

    High-throughput screening (HTS) experiments provide a valuable resource that reports biological activity of numerous chemical compounds relative to their molecular targets. Building computational models that accurately predict such activity status (active vs. inactive) in specific assays is a challenging task given the large volume of data and the frequently small proportion of active compounds relative to inactive ones. We developed a method, DRAMOTE, to predict the activity status of chemical compounds in HTS activity assays. For a class of HTS assays, our method achieves considerably better results than the current state-of-the-art solutions. We achieved this by modification of a minority oversampling technique. To demonstrate that DRAMOTE performs better than the other methods, we performed a comprehensive comparison analysis with several other methods and evaluated them on data from 11 PubChem assays through 1,350 experiments that involved approximately 500,000 interactions between chemicals and their target proteins. As an example of potential use, we applied DRAMOTE to develop robust models for predicting FDA-approved drugs that have a high probability of interacting with the thyroid stimulating hormone receptor (TSHR) in humans. Our findings are further partially and indirectly supported by 3D docking results and literature information. The results based on approximately 500,000 interactions suggest that DRAMOTE has performed the best and that it can be used for developing robust virtual screening models. The datasets and implementation of all solutions are available as a MATLAB toolbox online at www.cbrc.kaust.edu.sa/dramote and can be found on Figshare.
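The unmodified idea behind minority oversampling (SMOTE-style interpolation between active compounds' feature vectors) can be sketched as follows; DRAMOTE's particular modification is not described in the abstract and is not shown here, and the descriptors are invented.

```python
import random

def oversample_minority(minority, n_new, seed=0):
    """SMOTE-style oversampling sketch: synthesize new minority-class
    points by linear interpolation between random pairs of existing
    actives (each point a tuple of numeric descriptors)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)
        lam = rng.random()  # interpolation weight in [0, 1)
        synthetic.append(tuple(ai + lam * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

actives = [(0.1, 0.9), (0.2, 0.8), (0.15, 0.95)]   # invented descriptors
new_points = oversample_minority(actives, 5)
print(len(new_points))  # 5 synthetic actives, each between a pair of originals
```

Balancing the training set this way counteracts the "frequently small proportion of active compounds" noted above before a classifier is fit.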

  1. Compound Cytotoxicity Profiling Using Quantitative High-Throughput Screening

    Science.gov (United States)

    Xia, Menghang; Huang, Ruili; Witt, Kristine L.; Southall, Noel; Fostel, Jennifer; Cho, Ming-Hsuang; Jadhav, Ajit; Smith, Cynthia S.; Inglese, James; Portier, Christopher J.; Tice, Raymond R.; Austin, Christopher P.

    2008-01-01

    Background The propensity of compounds to produce adverse health effects in humans is generally evaluated using animal-based test methods. Such methods can be relatively expensive, low-throughput, and associated with pain suffered by the treated animals. In addition, differences in species biology may confound extrapolation to human health effects. Objective The National Toxicology Program and the National Institutes of Health Chemical Genomics Center are collaborating to identify a battery of cell-based screens to prioritize compounds for further toxicologic evaluation. Methods A collection of 1,408 compounds previously tested in one or more traditional toxicologic assays were profiled for cytotoxicity using quantitative high-throughput screening (qHTS) in 13 human and rodent cell types derived from six common targets of xenobiotic toxicity (liver, blood, kidney, nerve, lung, skin). Selected cytotoxicants were further tested to define response kinetics. Results qHTS of these compounds produced robust and reproducible results, which allowed cross-compound, cross-cell type, and cross-species comparisons. Some compounds were cytotoxic to all cell types at similar concentrations, whereas others exhibited species- or cell type–specific cytotoxicity. Closely related cell types and analogous cell types in human and rodent frequently showed different patterns of cytotoxicity. Some compounds inducing similar levels of cytotoxicity showed distinct time dependence in kinetic studies, consistent with known mechanisms of toxicity. Conclusions The generation of high-quality cytotoxicity data on this large library of known compounds using qHTS demonstrates the potential of this methodology to profile a much broader array of assays and compounds, which, in aggregate, may be valuable for prioritizing compounds for further toxicologic evaluation, identifying compounds with particular mechanisms of action, and potentially predicting in vivo biological response. PMID:18335092
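qHTS summarizes each compound's cytotoxicity with a concentration-response curve, conventionally a four-parameter Hill function. An illustrative evaluation with invented parameters (this is the standard curve form, not the study's fitted values):

```python
def hill(conc, top, bottom, ec50, n):
    """Four-parameter Hill curve: response rises from `bottom` to `top`
    with half-maximal effect at `ec50` and slope `n`. Parameters here
    are illustrative, not fitted qHTS values."""
    return bottom + (top - bottom) / (1 + (ec50 / conc) ** n)

# hypothetical % cytotoxicity with EC50 = 1 µM and Hill slope 1
for c in (0.01, 1.0, 100.0):
    print(round(hill(c, top=100, bottom=0, ec50=1.0, n=1), 1))
# roughly 1.0, exactly 50.0, roughly 99.0
```

Fitting such curves across 13 cell types and 1,408 compounds is what enables the cross-compound and cross-species potency comparisons described above.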

  2. Hypoxia-sensitive reporter system for high-throughput screening.

    Science.gov (United States)

    Tsujita, Tadayuki; Kawaguchi, Shin-ichi; Dan, Takashi; Baird, Liam; Miyata, Toshio; Yamamoto, Masayuki

    2015-02-01

    The induction of anti-hypoxic stress enzymes and proteins has the potential to be a potent therapeutic strategy to prevent the progression of ischemic heart, kidney or brain diseases. To realize this idea, small chemical compounds, which mimic hypoxic conditions by activating the PHD-HIF-α system, have been developed. However, to date, none of these compounds were identified by monitoring the transcriptional activation of hypoxia-inducible factors (HIFs). Thus, to facilitate the discovery of potent inducers of HIF-α, we have developed an effective high-throughput screening (HTS) system to directly monitor the output of HIF-α transcription. We generated a HIF-α-dependent reporter system that responds to hypoxic stimuli in a concentration- and time-dependent manner. This system was developed through multiple optimization steps, resulting in the generation of a construct that consists of the secretion-type luciferase gene (Metridia luciferase, MLuc) under the transcriptional regulation of an enhancer containing 7 copies of 40-bp hypoxia responsive element (HRE) upstream of a mini-TATA promoter. This construct was stably integrated into the human neuroblastoma cell line, SK-N-BE(2)c, to generate a reporter system, named SKN:HRE-MLuc. To improve this system and to increase its suitability for the HTS platform, we incorporated the next generation luciferase, Nano luciferase (NLuc), whose longer half-life provides us with flexibility for the use of this reporter. We thus generated a stably transformed clone with NLuc, named SKN:HRE-NLuc, and found that it showed significantly improved reporter activity compared to SKN:HRE-MLuc. In this study, we have successfully developed the SKN:HRE-NLuc screening system as an efficient platform for future HTS.

  3. Towards Chip Scale Liquid Chromatography and High Throughput Immunosensing

    Energy Technology Data Exchange (ETDEWEB)

    Ni, Jing [Iowa State Univ., Ames, IA (United States)

    2000-09-21

    This work describes several research projects aimed towards developing new instruments and novel methods for high throughput chemical and biological analysis. Approaches are taken in two directions. The first direction takes advantage of well-established semiconductor fabrication techniques and applies them to miniaturize instruments that are workhorses in analytical laboratories. Specifically, the first part of this work focused on the development of micropumps and microvalves for controlled fluid delivery. The mechanism of these micropumps and microvalves relies on the electrochemically-induced surface tension change at a mercury/electrolyte interface. A miniaturized flow injection analysis device was integrated and flow injection analyses were demonstrated. In the second part of this work, microfluidic chips were also designed, fabricated, and tested. Separations of two fluorescent dyes were demonstrated in microfabricated channels, based on an open-tubular liquid chromatography (OT LC) or an electrochemically-modulated liquid chromatography (EMLC) format. A reduction in instrument size can potentially increase analysis speed, and allow exceedingly small amounts of sample to be analyzed under diverse separation conditions. The second direction explores the surface enhanced Raman spectroscopy (SERS) as a signal transduction method for immunoassay analysis. It takes advantage of the improved detection sensitivity as a result of surface enhancement on colloidal gold, the narrow width of Raman band, and the stability of Raman scattering signals to distinguish several different species simultaneously without exploiting spatially-separated addresses on a biochip. By labeling gold nanoparticles with different Raman reporters in conjunction with different detection antibodies, a simultaneous detection of a dual-analyte immunoassay was demonstrated. 
The use of this scheme for quantitative analysis was also studied, and preliminary dose-response curves from an immunoassay of a

  4. Missing call bias in high-throughput genotyping

    Directory of Open Access Journals (Sweden)

    Lin Rong

    2009-03-01

    Full Text Available Abstract Background The advent of high-throughput and cost-effective genotyping platforms made genome-wide association (GWA) studies a reality. While the primary focus has been on reducing genotyping error, the problems associated with missing calls are largely overlooked. Results To probe the effect of missing calls on GWA studies, we demonstrated experimentally the prevalence and severity of the problem of missing call bias (MCB) in four genotyping technologies (Affymetrix 500 K SNP array, SNPstream, TaqMan, and Illumina Beadlab). Subsequently, we showed theoretically that MCB leads to biased conclusions in subsequent analyses, including estimation of allele/genotype frequencies, measurement of HWE, and association tests under various modes of inheritance. We showed that MCB usually leads to power loss in association tests, and such power change is greater than what could be achieved by an equivalent unbiased reduction of sample size. We also compared the bias in allele frequency estimation and in association tests introduced by MCB with that introduced by genotyping errors. Our results illustrate that in most cases the bias can be greatly reduced by increasing the call rate at the cost of the genotyping error rate. Conclusion The commonly used 'no-call' procedure for observations of borderline quality should be modified. If the objective is to minimize bias, the cut-off for call rate and that for genotyping error rate should be properly coupled in GWA studies. We suggest that the commonly used QC cut-off for call rate should be increased, while the cut-off for genotyping error rate can be properly relaxed.
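Why missing calls bias allele-frequency estimates: if no-calls depend on genotype, the frequency computed from the remaining called genotypes shifts. A toy demonstration with invented counts (our illustration of the effect, not the paper's derivation):

```python
def allele_freq(genotypes):
    """Frequency of allele 'A' among called genotypes; None = missing call."""
    called = [g for g in genotypes if g is not None]
    return sum(g.count("A") for g in called) / (2 * len(called))

# complete data: 25 AA + 50 AB + 25 BB gives p(A) = 0.5
truth = ["AA"] * 25 + ["AB"] * 50 + ["BB"] * 25
print(allele_freq(truth))                 # 0.5

# MCB: suppose weak-signal AA clusters are preferentially 'no-called'
biased = ["AA"] * 15 + [None] * 10 + ["AB"] * 50 + ["BB"] * 25
print(round(allele_freq(biased), 3))      # 0.444, biased downward
```

Note that dropping calls uniformly at random would leave the estimate unbiased; it is the genotype-dependent missingness that creates the bias, which is why coupling the call-rate cut-off with the error-rate cut-off matters.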

  5. Mining Chemical Activity Status from High-Throughput Screening Assays

    KAUST Repository

    Soufan, Othman

    2015-12-14

    High-throughput screening (HTS) experiments provide a valuable resource that reports the biological activity of numerous chemical compounds relative to their molecular targets. Building computational models that accurately predict such activity status (active vs. inactive) in specific assays is a challenging task given the large volume of data and the frequently small proportion of active compounds relative to inactive ones. We developed a method, DRAMOTE, to predict the activity status of chemical compounds in HTS activity assays. For a class of HTS assays, our method achieves considerably better results than the current state-of-the-art solutions. We achieved this by modifying a minority oversampling technique. To demonstrate that DRAMOTE performs better than the other methods, we performed a comprehensive comparison analysis with several other methods and evaluated them on data from 11 PubChem assays through 1,350 experiments that involved approximately 500,000 interactions between chemicals and their target proteins. As an example of potential use, we applied DRAMOTE to develop robust models for predicting FDA-approved drugs that are likely to interact with the thyroid stimulating hormone receptor (TSHR) in humans. Our findings are partially and indirectly supported by 3D docking results and literature information. The results suggest that DRAMOTE performed best and that it can be used for developing robust virtual screening models. The datasets and implementation of all solutions are available as a MATLAB toolbox online at www.cbrc.kaust.edu.sa/dramote and can be found on Figshare.
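DRAMOTE itself is described as a modification of a minority oversampling technique; its exact algorithm is not given here. As a hedged sketch of the generic idea it builds on, the snippet below balances an imbalanced HTS-style dataset by duplicating "active" (minority) rows at random until the classes match (a simpler relative of SMOTE-style methods; all data are toy values):

```python
# Generic minority oversampling for imbalanced activity data (illustrative,
# not DRAMOTE's actual algorithm): active compounds (label 1) are resampled
# with replacement until the classes are balanced.

import random

def oversample_minority(X, y, seed=0):
    rng = random.Random(seed)
    minority = [i for i, label in enumerate(y) if label == 1]
    majority = [i for i, label in enumerate(y) if label == 0]
    X_out, y_out = list(X), list(y)
    # Draw minority rows with replacement until counts match.
    for _ in range(len(majority) - len(minority)):
        i = rng.choice(minority)
        X_out.append(X[i])
        y_out.append(1)
    return X_out, y_out

# Toy assay: 6 inactive vs. 2 active compounds.
X = [[0.1], [0.2], [0.3], [0.4], [0.5], [0.6], [0.9], [1.0]]
y = [0, 0, 0, 0, 0, 0, 1, 1]
X_b, y_b = oversample_minority(X, y)
print(sum(y_b), len(y_b) - sum(y_b))  # 6 6 (balanced classes)
```

Oversampling lets a downstream classifier see the rare active class often enough to learn it, which is the imbalance problem the abstract highlights.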

  6. Applications of Biophysics in High-Throughput Screening Hit Validation.

    Science.gov (United States)

    Genick, Christine Clougherty; Barlier, Danielle; Monna, Dominique; Brunner, Reto; Bé, Céline; Scheufler, Clemens; Ottl, Johannes

    2014-06-01

    For approximately a decade, biophysical methods have been used to validate positive hits selected from high-throughput screening (HTS) campaigns, with the goal of verifying binding interactions using label-free assays. Label-free readouts reveal screen artifacts created by compound interference and fluorescence, enabling further characterization of the hits for their target specificity and selectivity. The use of several biophysical methods to extract this type of high-content information is required to prevent the promotion of false positives to the next level of hit validation and to select the best candidates for further chemical optimization. The typical technologies applied in this arena include dynamic light scattering, turbidimetry, resonance waveguide, surface plasmon resonance, differential scanning fluorimetry, mass spectrometry, and others. Each technology can provide different types of information to enable the characterization of the binding interaction. Thus, these technologies can be incorporated in a hit-validation strategy not only according to the profile of chemical matter that is desired by the medicinal chemists, but also in a manner that is in agreement with the target protein's amenability to the screening format. Here, we present the results of screening strategies using biophysics, with the objective of evaluating the approaches, discussing the advantages and challenges, and summarizing the benefits in reference to lead discovery. In summary, the biophysics screens presented here demonstrated various hit rates from a list of ~2000 preselected, IC50-validated hits from HTS (an IC50 is the inhibitor concentration at which 50% inhibition of activity is observed). There are several lessons learned from these biophysical screens, which are discussed in this article. © 2014 Society for Laboratory Automation and Screening.

  7. High-throughput DNA extraction of forensic adhesive tapes.

    Science.gov (United States)

    Forsberg, Christina; Jansson, Linda; Ansell, Ricky; Hedman, Johannes

    2016-09-01

    Tape-lifting has since its introduction in the early 2000s become a well-established sampling method in forensic DNA analysis. Sampling is quick and straightforward, while the subsequent DNA extraction is more challenging due to the "stickiness", rigidity and size of the tape. We have developed, validated and implemented a simple and efficient direct lysis DNA extraction protocol for adhesive tapes that requires limited manual labour. The method uses Chelex beads and is applied with SceneSafe FAST tape. This direct lysis protocol provided higher mean DNA yields than PrepFiler Express BTA on Automate Express, although the differences were not significant when using clothes worn in a controlled fashion as reference material (p=0.13 and p=0.34 for T-shirts and button-down shirts, respectively). Through in-house validation we show that the method is fit for purpose for application in casework, as it provides high DNA yields and amplifiability, as well as good reproducibility and DNA extract stability. After implementation in casework, the proportion of extracts with DNA concentrations above 0.01 ng/μL increased from 71% to 76%. Apart from providing higher DNA yields compared with the previous method, the introduction of the developed direct lysis protocol also reduced the amount of manual labour by half and doubled the potential throughput for tapes at the laboratory. Generally, simplified manual protocols can serve as a cost-effective alternative to sophisticated automation solutions when the aim is to enable high-throughput DNA extraction of complex crime scene samples. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  8. High-throughput microfluidic line scan imaging for cytological characterization

    Science.gov (United States)

    Hutcheson, Joshua A.; Powless, Amy J.; Majid, Aneeka A.; Claycomb, Adair; Fritsch, Ingrid; Balachandran, Kartik; Muldoon, Timothy J.

    2015-03-01

    Imaging cells in a microfluidic chamber with an area scan camera is difficult due to motion blur and to data loss during frame readout, which causes discontinuities in acquisition as cells move at relatively high speed through the chamber. We have developed a method to continuously acquire high-resolution images of cells in motion through a microfluidic chamber using a high-speed line scan camera. The sensor acquires images in a line-by-line fashion in order to continuously image moving objects without motion blur. The optical setup comprises an epi-illuminated microscope with a 40× oil immersion, 1.4 NA objective and a 150 mm tube lens focused on a microfluidic channel. Samples containing suspended cells fluorescently stained with 0.01% (w/v) proflavine in saline are introduced into the microfluidic chamber via a syringe pump; illumination is provided by a blue LED (455 nm). Images were taken of samples at the focal plane using an ELiiXA+ 8k/4k monochrome line-scan camera at a line rate of up to 40 kHz. The system's line rate and fluid velocity are tightly controlled to reduce image distortion and are validated using fluorescent microspheres. Image acquisition was controlled via MATLAB's Image Acquisition toolbox. Data sets comprise discrete images of every detectable cell, which may be subsequently mined for morphological statistics and definable features by a custom texture analysis algorithm. This high-throughput screening method, comparable to cell counting by flow cytometry, provided efficient examination, including counting, classification, and differentiation, of saliva, blood, and cultured human cancer cells.
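The abstract stresses that line rate and fluid velocity must be tightly matched to avoid image distortion: each acquired line should correspond to the distance a cell travels in one line period. A back-of-envelope sketch of that matching condition (the pixel pitch, magnification, and flow velocity below are assumed numbers, not values from the paper):

```python
# Matching a line-scan camera's line rate to the flow velocity so that
# one acquired line corresponds to one object-plane pixel of travel:
#   line_rate = flow_velocity / (sensor pixel pitch / magnification)
# All numeric values here are illustrative assumptions.

def required_line_rate(velocity_um_s, pixel_pitch_um, magnification):
    """Line rate (lines/s) that maps sample motion to one pixel per line."""
    object_pixel_um = pixel_pitch_um / magnification
    return velocity_um_s / object_pixel_um

# Hypothetical values: 5 um sensor pixels, 40x objective, 5 mm/s flow.
rate = required_line_rate(5000, 5.0, 40)
print(f"{rate:.0f} lines/s")  # 40000 lines/s
```

Running slower than this rate stretches cells along the flow axis; running faster compresses them, which is why the paper validates the pairing with fluorescent microspheres.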

  9. A bioimage informatics platform for high-throughput embryo phenotyping.

    Science.gov (United States)

    Brown, James M; Horner, Neil R; Lawson, Thomas N; Fiegel, Tanja; Greenaway, Simon; Morgan, Hugh; Ring, Natalie; Santos, Luis; Sneddon, Duncan; Teboul, Lydia; Vibert, Jennifer; Yaikhom, Gagarine; Westerberg, Henrik; Mallon, Ann-Marie

    2018-01-01

    High-throughput phenotyping is a cornerstone of numerous functional genomics projects. In recent years, imaging screens have become increasingly important in understanding gene-phenotype relationships in studies of cells, tissues and whole organisms. Three-dimensional (3D) imaging has risen to prominence in the field of developmental biology for its ability to capture whole embryo morphology and gene expression, as exemplified by the International Mouse Phenotyping Consortium (IMPC). Large volumes of image data are being acquired by multiple institutions around the world that encompass a range of modalities, proprietary software and metadata. To facilitate robust downstream analysis, images and metadata must be standardized to account for these differences. As an open scientific enterprise, making the data readily accessible is essential so that members of biomedical and clinical research communities can study the images for themselves without the need for highly specialized software or technical expertise. In this article, we present a platform of software tools that facilitate the upload, analysis and dissemination of 3D images for the IMPC. Over 750 reconstructions from 80 embryonic lethal and subviable lines have been captured to date, all of which are openly accessible at mousephenotype.org. Although designed for the IMPC, all software is available under an open-source licence for others to use and develop further. Ongoing developments aim to increase throughput and improve the analysis and dissemination of image data. Furthermore, we aim to ensure that images are searchable so that users can locate relevant images associated with genes, phenotypes or human diseases of interest. © The Author 2016. Published by Oxford University Press.

  10. High-Throughput Neuroimaging-Genetics Computational Infrastructure

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2014-04-01

    Full Text Available Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate and disseminate novel scientific methods, computational resources and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval and aggregation. Computational processing involves the necessary software, hardware and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical and phenotypic data and meta-data. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure.

  11. Thin-film-transistor array: an exploratory attempt for high throughput cell manipulation using electrowetting principle

    Science.gov (United States)

    Shaik, F. Azam; Cathcart, G.; Ihida, S.; Lereau-Bernier, M.; Leclerc, E.; Sakai, Y.; Toshiyoshi, H.; Tixier-Mita, A.

    2017-05-01

    In lab-on-a-chip (LoC) devices, microfluidic displacement of liquids is a key component. Electrowetting on dielectric (EWOD) is a technique to move fluids, with the advantage of not requiring channels, pumps or valves. Fluids are discretized into droplets on microelectrodes and moved by applying an electric field via the electrodes to manipulate the contact angle. Micro-objects, such as biological cells, can be transported inside these droplets. However, the design of conventional microelectrodes, made by standard micro-fabrication techniques, fixes the path of the droplets, which limits the reconfigurability of paths and thus the parallel processing of droplets. In that respect, thin film transistor (TFT) technology presents a great opportunity as it allows infinitely reconfigurable paths with high parallelizability. We propose here to investigate the possibility of using TFT array devices for high-throughput cell manipulation using EWOD. A COMSOL-based 2D simulation coupled with a MATLAB algorithm was used to simulate the contact angle modulation, displacement and mixing of droplets. These simulations were confirmed by experimental results. The EWOD technique was applied to a droplet of culture medium containing HepG2 carcinoma cells and demonstrated no negative effects on the viability of the cells. This confirms the possibility of applying EWOD techniques to cellular applications, such as parallel cell analysis.
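The contact angle modulation that EWOD actuation relies on is conventionally described by the Young-Lippmann relation. The sketch below evaluates that textbook relation; the dielectric stack, surface tension, and voltages are generic assumed values, not parameters of the TFT device in the abstract:

```python
import math

# Young-Lippmann relation underlying EWOD (textbook form; parameter
# values below are illustrative assumptions):
#   cos(theta_V) = cos(theta_0) + (eps0 * eps_r * V^2) / (2 * d * gamma)

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def ewod_contact_angle(theta0_deg, V, eps_r, d, gamma):
    """Contact angle (deg) at voltage V for a dielectric of thickness d (m),
    relative permittivity eps_r, and liquid surface tension gamma (N/m)."""
    c = math.cos(math.radians(theta0_deg)) + EPS0 * eps_r * V**2 / (2 * d * gamma)
    c = min(c, 1.0)  # guard: cosine cannot exceed 1 (angle saturation)
    return math.degrees(math.acos(c))

# Hypothetical stack: 1 um SiO2-like dielectric (eps_r ~ 3.9), water
# (gamma ~ 0.072 N/m), initial contact angle 110 deg.
print(round(ewod_contact_angle(110, 0, 3.9, 1e-6, 0.072)))  # 110 (no bias)
angle_on = ewod_contact_angle(110, 30, 3.9, 1e-6, 0.072)
print(round(angle_on))  # lower angle under bias: the droplet spreads
```

The voltage-dependent drop in contact angle on one electrode edge is what produces the asymmetric wetting force that moves the droplet.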

  12. HiC-spector: a matrix library for spectral and reproducibility analysis of Hi-C contact maps.

    Science.gov (United States)

    Yan, Koon-Kiu; Yardimci, Galip Gürkan; Yan, Chengfei; Noble, William S; Gerstein, Mark

    2017-07-15

    Genome-wide proximity ligation based assays like Hi-C have opened a window to the 3D organization of the genome. In so doing, they present data structures that are different from conventional 1D signal tracks. To exploit the 2D nature of Hi-C contact maps, matrix techniques like spectral analysis are particularly useful. Here, we present HiC-spector, a collection of matrix-related functions for analyzing Hi-C contact maps. In particular, we introduce a novel reproducibility metric for quantifying the similarity between contact maps based on spectral decomposition. The metric successfully separates contact maps derived from Hi-C data of biological replicates, pseudo-replicates and different cell types. Source code in Julia and Python and detailed documentation are available at https://github.com/gersteinlab/HiC-spector . koonkiu.yan@gmail.com or mark@gersteinlab.org. Supplementary data are available at Bioinformatics online.
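A minimal sketch of the idea behind a spectral reproducibility metric of this kind (simplified; see the HiC-spector paper and repository for the exact normalization and scaling): compare the leading eigenvectors of the symmetric normalized Laplacians of two contact maps, and fold the mean eigenvector distance into a bounded similarity score.

```python
import numpy as np

# Simplified spectral comparison of two contact maps: sign-aligned
# distances between the r smallest eigenvectors of the normalized
# Laplacian, mapped to a [0, 1]-style score (1 = identical maps).

def normalized_laplacian(W):
    d = W.sum(axis=1)
    d[d == 0] = 1.0                      # guard against empty bins
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt

def spectral_score(A, B, r=3):
    _, Va = np.linalg.eigh(normalized_laplacian(A))
    _, Vb = np.linalg.eigh(normalized_laplacian(B))
    dists = []
    for i in range(r):                    # r smallest eigenvectors
        va, vb = Va[:, i], Vb[:, i]
        if np.dot(va, vb) < 0:            # eigenvectors are sign-ambiguous
            vb = -vb
        dists.append(np.linalg.norm(va - vb))
    return 1.0 - np.mean(dists) / np.sqrt(2.0)

# Toy 4-bin contact map compared with itself and with a perturbed copy.
A = np.array([[10, 5, 1, 0],
              [5, 10, 4, 1],
              [1, 4, 10, 6],
              [0, 1, 6, 10]], dtype=float)
B = A + np.array([[0, 2, 0, 0],
                  [2, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
print(round(spectral_score(A, A), 3))  # 1.0 (identical maps)
print(spectral_score(A, B) < 1.0)      # True: perturbation lowers the score
```

The appeal of the spectral view is that eigenvectors capture large-scale compartment-like structure of the map, so the score reflects global organization rather than bin-by-bin noise.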

  13. Recent advances in high-throughput molecular marker identification for superficial and invasive bladder cancers

    DEFF Research Database (Denmark)

    Andersen, Lars Dyrskjøt; Zieger, Karsten; Ørntoft, Torben Falck

    2007-01-01

    individually contributed to the management of the disease. However, the development of high-throughput techniques for simultaneous assessment of a large number of markers has allowed classification of tumors into clinically relevant molecular subgroups beyond those possible by pathological classification. Here, we review the recent advances in high-throughput molecular marker identification for superficial and invasive bladder cancers.

  14. High-throughput and computational approaches for diagnostic and prognostic host tuberculosis biomarkers

    Directory of Open Access Journals (Sweden)

    January Weiner

    2017-03-01

    Full Text Available High-throughput techniques strive to identify new biomarkers that will be useful for the diagnosis, treatment, and prevention of tuberculosis (TB). However, their analysis and interpretation pose considerable challenges. Recent developments in the high-throughput detection of host biomarkers in TB are reported in this review.

  15. Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays (SOT)

    Science.gov (United States)

    Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays DE DeGroot, RS Thomas, and SO SimmonsNational Center for Computational Toxicology, US EPA, Research Triangle Park, NC USAThe EPA’s ToxCast program utilizes a wide variety of high-throughput s...

  16. High-Throughput Structural and Functional Characterization of the Thin Film Materials System Ni-Co-Al.

    Science.gov (United States)

    Decker, Peer; Naujoks, Dennis; Langenkämper, Dennis; Somsen, Christoph; Ludwig, Alfred

    2017-10-09

    High-throughput methods were used to investigate a Ni-Co-Al thin film materials library, which is of interest for structural and functional applications (superalloys, shape memory alloys). X-ray diffraction (XRD) measurements were performed to identify the phase regions of the Ni-Co-Al system in its state after annealing at 600 °C. Optical, electrical, and magneto-optical measurements were performed to map functional properties and confirm the XRD results. All results and literature data were used to propose a ternary phase diagram of the Ni-Co-Al thin film system.

  17. High-Throughput Continuous Hydrothermal Synthesis of Transparent Conducting Aluminum and Gallium Co-doped Zinc Oxides.

    Science.gov (United States)

    Howard, Dougal P; Marchand, Peter; McCafferty, Liam; Carmalt, Claire J; Parkin, Ivan P; Darr, Jawwad A

    2017-04-10

    High-throughput continuous hydrothermal flow synthesis was used to generate a library of aluminum- and gallium-codoped zinc oxide nanoparticles of specific atomic ratios. Resistivities of the materials were determined by Hall effect measurements on heat-treated pressed discs, and the results were collated into a conductivity-composition map. Optimal resistivities of ∼9 × 10⁻³ Ω cm were reproducibly achieved for several samples, for example, codoped ZnO with 2 at% Ga and 1 at% Al. The optimum sample on balance of performance and cost was deemed to be ZnO codoped with 3 at% Al and 1 at% Ga.

  18. Construction of adhesion maps for contacts between a sphere and a half-space: Considering size effects of the sphere.

    Science.gov (United States)

    Zhang, Yuyan; Wang, Xiaoli; Li, Hanqing; Yang, Weixu

    2015-11-15

    Previous adhesion maps, such as the JG (Johnson-Greenwood) and YCG (Yao-Ciavarella-Gao) maps, are used to guide the selection of the Bradley, DMT, M-D, JKR and Hertz models. However, when the size of the contacting sphere decreases to small scales, the applicability of the JG and YCG maps is limited because the assumptions regarding the contact region profile, the interaction between contacting bodies, and the sphere shape in the classical models constituting these two maps are no longer valid. To avoid this limitation, in this paper a new numerical model considering size effects of the sphere is established first and then incorporated into new adhesion maps together with the YGG (Yao-Guduru-Gao) model and the Hertz model. Regimes of these models in the new map under a given sphere radius are demarcated by criteria related to the relative force differences and the ratio of contact radius to sphere radius. In addition, the approaches at pull-off, jump-in and jump-out for different Tabor parameters and sphere radii are provided in the new maps. Finally, to make the new maps more practical, the numerical results for the approaches, force and contact radius involved in the maps are formularized by piecewise fitting. Copyright © 2015 Elsevier Inc. All rights reserved.
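Classical adhesion maps of the JG/YCG kind are organized around the Tabor parameter, which weighs elastic deformation against the range of adhesive forces. A hedged sketch of that standard criterion (the thresholds ~0.1 and ~5 are the commonly quoted ones; the material numbers are illustrative assumptions, and this is the classical picture the paper's size-corrected maps refine, not the paper's new model):

```python
# Classical Tabor-parameter regime selection (illustrative):
#   mu = (R * dgamma**2 / (E_star**2 * z0**3)) ** (1/3)

def tabor_parameter(R, dgamma, E_star, z0):
    """R: sphere radius (m), dgamma: work of adhesion (J/m^2),
    E_star: reduced elastic modulus (Pa), z0: equilibrium separation (m)."""
    return (R * dgamma**2 / (E_star**2 * z0**3)) ** (1.0 / 3.0)

def classical_regime(mu):
    if mu < 0.1:
        return "DMT-like (stiff / small sphere)"
    if mu > 5.0:
        return "JKR-like (compliant / large sphere)"
    return "transition (Maugis-Dugdale)"

# Illustrative size effect: 10 nm vs. 1 mm sphere with glass-like numbers
# (E* = 50 GPa, dgamma = 0.05 J/m^2, z0 = 0.3 nm).
for R in (10e-9, 1e-3):
    mu = tabor_parameter(R, 0.05, 50e9, 0.3e-9)
    print(f"R = {R:.0e} m -> mu = {mu:.2f}: {classical_regime(mu)}")
```

Shrinking the sphere radius alone pushes the contact from the transition regime toward DMT-like behavior, which is precisely the small-scale region where the paper argues the classical maps' assumptions break down.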

  19. The Utilization of Formalin Fixed-Paraffin-Embedded Specimens in High Throughput Genomic Studies

    Directory of Open Access Journals (Sweden)

    Pan Zhang

    2017-01-01

    Full Text Available High-throughput genomic assays empower us to study the entire human genome in a short time at reasonable cost. Formalin-fixed, paraffin-embedded (FFPE) tissue processing remains the most economical approach for longitudinal tissue specimen storage. Therefore, the ability to apply high-throughput genomic applications to FFPE specimens can expand clinical assays and discovery. Many studies have measured the accuracy and repeatability of data generated from FFPE specimens using high-throughput genomic assays. Together, these studies demonstrate feasibility and provide crucial guidance for future studies using FFPE specimens. Here, we summarize the findings of these studies and discuss the limitations of high-throughput data generated from FFPE specimens across several platforms, including microarray, high-throughput sequencing, and NanoString.

  20. Performance of high-throughput DNA quantification methods

    Directory of Open Access Journals (Sweden)

    Chanock Stephen J

    2003-10-01

    Full Text Available Abstract Background The accuracy and precision of estimates of DNA concentration are critical factors for efficient use of DNA samples in high-throughput genotype and sequence analyses. We evaluated the performance of spectrophotometric (OD) DNA quantification and compared it to two fluorometric quantification methods, the PicoGreen® assay (PG) and a novel real-time quantitative genomic PCR assay (QG) specific to a region at the human BRCA1 locus. Twenty-two lymphoblastoid cell line DNA samples with an initial concentration of ~350 ng/μL were diluted to 20 ng/μL. DNA concentration was estimated by OD and further diluted to 5 ng/μL. The concentrations of multiple aliquots of the final dilution were measured by the OD, QG and PG methods. The effects of manual and robotic laboratory sample handling procedures on the estimates of DNA concentration were assessed using variance components analyses. Results The OD method was the DNA quantification method most concordant with the reference sample among the three methods evaluated. A large fraction of the total variance for all three methods (36.0–95.7%) was explained by sample-to-sample variation, whereas the amount of variance attributable to sample handling was small (0.8–17.5%). Residual error (3.2–59.4%), corresponding to un-modelled factors, contributed more to the total variation than the sample handling procedures. Conclusion The application of a specific DNA quantification method to a particular molecular genetic laboratory protocol must take into account the accuracy and precision of the specific method, as well as the requirements of the experimental workflow with respect to sample volumes and throughput. While OD was the most concordant and precise DNA quantification method in this study, the information provided by the quantitative PCR assay regarding the suitability of DNA samples for PCR may be an essential factor for some protocols, despite the decreased concordance and
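The OD method referenced above rests on a standard spectrophotometric conversion (a textbook relation, not something specific to this study): an absorbance of 1.0 at 260 nm corresponds to roughly 50 ng/μL of double-stranded DNA, with the A260/A280 ratio used as a purity check.

```python
# Standard spectrophotometric (OD) dsDNA quantification (textbook
# relation): A260 of 1.0 ~ 50 ng/uL dsDNA; A260/A280 ~ 1.8 suggests
# protein-free DNA. Example absorbances below are hypothetical.

def dsdna_conc_ng_per_ul(a260, dilution_factor=1.0):
    """Estimated dsDNA concentration (ng/uL) from A260."""
    return a260 * 50.0 * dilution_factor

def purity_ratio(a260, a280):
    """A260/A280 ratio; ~1.8 is typically taken as 'pure' dsDNA."""
    return a260 / a280

print(dsdna_conc_ng_per_ul(0.40))          # 20.0 ng/uL
print(round(purity_ratio(0.40, 0.22), 2))  # 1.82
```

Note that OD reports total nucleic acid absorbance, whereas the PG and QG assays in the abstract are sequence- or dye-specific, which is why their estimates can diverge.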

  1. High throughput modular chambers for rapid evaluation of anesthetic sensitivity

    Directory of Open Access Journals (Sweden)

    Eckmann David M

    2006-11-01

    Full Text Available Abstract Background Anesthetic sensitivity is determined by the interaction of multiple genes. Hence, a dissection of genetic contributors would be aided by precise and high-throughput behavioral screens. Traditionally, anesthetic phenotyping has addressed only induction of anesthesia, evaluated with dose-response curves, while ignoring potentially important data on emergence from anesthesia. Methods We designed and built a controlled-environment apparatus to permit rapid phenotyping of twenty-four mice simultaneously. We used the loss of righting reflex to indicate anesthetic-induced unconsciousness. After fitting the data to a sigmoidal dose-response curve with variable slope, we calculated the MACLORR (EC50), the Hill coefficient, and the 95% confidence intervals bracketing these values. Upon termination of the anesthetic, Emergence timeRR was determined and expressed as the mean ± standard error for each inhaled anesthetic. Results In agreement with several previously published reports, we find that the MACLORR of halothane, isoflurane, and sevoflurane in 8–12 week old C57BL/6J mice is 0.79% (95% confidence interval = 0.78–0.79%), 0.91% (95% confidence interval = 0.90–0.93%), and 1.96% (95% confidence interval = 1.94–1.97%), respectively. Hill coefficients for halothane, isoflurane, and sevoflurane are 24.7 (95% confidence interval = 19.8–29.7), 19.2 (95% confidence interval = 14.0–24.3), and 33.1 (95% confidence interval = 27.3–38.8), respectively. After roughly 2.5 MACLORR·hr exposures, mice take 16.00 ± 1.07, 6.19 ± 0.32, and 2.15 ± 0.12 minutes to emerge from halothane, isoflurane, and sevoflurane, respectively. Conclusion This system enabled assessment of inhaled anesthetic responsiveness with a higher precision than previously reported. It is broadly adaptable for delivering an inhaled therapeutic (or toxin) to a population while monitoring its vital signs, motor reflexes, and providing precise control
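The MACLORR and Hill coefficient come from fitting a variable-slope sigmoid to the fraction of mice showing loss of righting reflex at each dose. A minimal sketch of that fit using a coarse grid search (the data below are synthetic, generated from isoflurane-like parameters, not the paper's measurements; the paper's actual fitting procedure may differ):

```python
# Fitting a variable-slope sigmoid f(c) = c^n / (c^n + EC50^n) to
# loss-of-righting-reflex fractions by grid search over EC50 and the
# Hill coefficient n. Data are synthetic, for illustration only.

def hill(c, ec50, n):
    return c**n / (c**n + ec50**n)

def fit_hill(doses, fractions):
    best = (None, None, float("inf"))
    for ec50 in [x / 100 for x in range(50, 151)]:  # 0.50-1.50 % atm
        for n in range(2, 41):                       # steep curves expected
            sse = sum((hill(c, ec50, n) - f) ** 2
                      for c, f in zip(doses, fractions))
            if sse < best[2]:
                best = (ec50, n, sse)
    return best[:2]

# Synthetic isoflurane-like data generated from EC50 = 0.91%, n = 19.
doses = [0.7, 0.8, 0.85, 0.9, 0.95, 1.0, 1.1]
fracs = [hill(c, 0.91, 19) for c in doses]
ec50, n = fit_hill(doses, fracs)
print(ec50, n)  # 0.91 19 (the generating parameters are recovered)
```

The very large Hill coefficients reported in the abstract correspond to extremely steep population dose-response curves, which is why doses only a few hundredths of a percent apart can separate fully awake from fully anesthetized cohorts.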

  2. High throughput comet assay to study genotoxicity of nanomaterials

    Directory of Open Access Journals (Sweden)

    Naouale El Yamani

    2015-06-01

    Full Text Available The unique physicochemical properties of engineered nanomaterials (NMs) have accelerated their use in diverse industrial and domestic products. Although their presence in consumer products represents a major concern for public health safety, their potential impact on human health is poorly understood. There is therefore an urgent need to clarify the toxic effects of NMs and to elucidate the mechanisms involved. In view of the large number of NMs currently being used, high-throughput (HTP) screening technologies are clearly needed for efficient assessment of toxicity. The comet assay is the most widely used method in nanogenotoxicity studies and has great potential for increasing throughput as it is fast, versatile and robust; simple technical modifications of the assay make it possible to test many compounds (NMs) in a single experiment. The standard gel of 70-100 μL contains thousands of cells, of which only a tiny fraction are actually scored. Reducing the gel to a volume of 5 μL, with just a few hundred cells, allows twelve gels to be set on a standard slide, or 96 as a standard 8×12 array. For the 12-gel format, standard slides precoated with agarose are placed on a metal template and gels are set on the positions marked on the template. The HTP comet assay, incorporating digestion of DNA with formamidopyrimidine DNA glycosylase (FPG) to detect oxidised purines, has recently been applied to study the potential induction of genotoxicity by NMs via reactive oxygen. In the NanoTEST project we investigated the genotoxic potential of several well-characterized metal and polymeric nanoparticles with the comet assay. All in vitro studies were harmonized; i.e. NMs were from the same batch, and identical dispersion protocols, exposure times, concentration ranges, culture conditions, and time-courses were used. As a kidney model, Cos-1 fibroblast-like kidney cells were treated with different concentrations of iron oxide NMs, and cells embedded in minigels (12

  3. High throughput RNAi assay optimization using adherent cell cytometry

    Directory of Open Access Journals (Sweden)

    Pradhan Leena

    2011-04-01

    Full Text Available Abstract Background siRNA technology is a promising tool for gene therapy of vascular disease. Due to the multitude of reagents and cell types, RNAi experiment optimization can be time-consuming. In this study, adherent cell cytometry was used to rapidly optimize siRNA transfection in human aortic vascular smooth muscle cells (AoSMC). Methods AoSMC were seeded at a density of 3000-8000 cells/well of a 96-well plate. 24 hours later, AoSMC were transfected with either non-targeting unlabeled siRNA (50 nM) or non-targeting labeled siRNA, siGLO Red (5 or 50 nM), using no transfection reagent, HiPerfect or Lipofectamine RNAiMax. For counting cells, Hoechst nuclei stain or Cell Tracker Green were used. For data analysis, an adherent cell cytometer, Celigo®, was used. Data were normalized to the transfection reagent alone group and expressed as red pixel count/cell. Results After 24 hours, none of the transfection conditions led to cell loss. Red fluorescence counts were normalized to the AoSMC count. RNAiMax was more potent compared to HiPerfect or no transfection reagent at 5 nM siGLO Red (4.12 ± 1.04 vs. 0.70 ± 0.26 vs. 0.15 ± 0.13 red pixels/cell) and 50 nM siGLO Red (6.49 ± 1.81 vs. 2.52 ± 0.67 vs. 0.34 ± 0.19). Fluorescence expression results supported gene knockdown achieved by using MARCKS-targeting siRNA in AoSMCs. Conclusion This study underscores that RNAi delivery depends heavily on the choice of delivery method. Adherent cell cytometry can be used as a high-throughput screening tool for the optimization of RNAi assays. This technology can accelerate in vitro cell assays and thus save costs.
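The readout above is a per-well red pixel count divided by the cell count, optionally rescaled against a control group. A hedged sketch of how such a normalization might be computed (the well totals below are hypothetical, chosen only to yield readable per-cell values; the paper's exact pipeline runs inside the Celigo software):

```python
# Per-cell fluorescence normalization for an adherent-cytometry readout
# (illustrative; all well totals are hypothetical).

def red_pixels_per_cell(red_pixels, cell_counts):
    """Red pixel count divided by cell count, well by well."""
    return [r / c for r, c in zip(red_pixels, cell_counts)]

def normalize_to_control(values, control_values):
    """Rescale readouts by the mean of a control (e.g. reagent-alone) group."""
    baseline = sum(control_values) / len(control_values)
    return [v / baseline for v in values]

red = [41200, 6490]      # hypothetical per-well red pixel totals
cells = [10000, 1000]    # hypothetical per-well cell counts
per_cell = red_pixels_per_cell(red, cells)
print(per_cell)  # [4.12, 6.49]
```

Dividing by the cell count is what makes wells with different seeding densities comparable; without it, a dense well would look "brighter" regardless of transfection efficiency.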

  4. High-throughput metal susceptibility testing of microbial biofilms

    Directory of Open Access Journals (Sweden)

    Turner Raymond J

    2005-10-01

    Full Text Available Abstract Background Microbial biofilms exist all over the natural world, a distribution that is paralleled by metal cations and oxyanions. Despite this reality, very few studies have examined how biofilms withstand exposure to these toxic compounds. This article describes a batch culture technique for biofilm and planktonic cell metal susceptibility testing using the MBEC assay. This device is compatible with standard 96-well microtiter plate technology. As part of this method, a two-part, metal-specific neutralization protocol is summarized. This procedure minimizes residual biological toxicity arising from the carry-over of metals from challenge to recovery media. Neutralization consists of treating cultures with a chemical compound known to react with or to chelate the metal. Treated cultures are plated onto rich agar to allow metal complexes to diffuse into the recovery medium while bacteria remain on top to recover. Two difficulties associated with metal susceptibility testing were the focus of two applications of this technique. First, assays were calibrated to allow comparisons of the susceptibility of different organisms to metals. Second, the effects of exposure time and growth medium composition on the susceptibility of E. coli JM109 biofilms to metals were investigated. Results This high-throughput method generated 96 statistically equivalent biofilms in a single device and thus allowed for comparative and combinatorial experiments of media, microbial strains, exposure times and metals. By adjusting growth conditions, it was possible to examine biofilms of different microorganisms that had similar cell densities. In one example, Pseudomonas aeruginosa ATCC 27853 was up to 80 times more resistant to heavy metalloid oxyanions than Escherichia coli TG1. Further, biofilms were up to 133 times more tolerant to tellurite (TeO32-) than corresponding planktonic cultures. Regardless of the growth medium, the tolerance of biofilm and planktonic

  5. High-throughput metal susceptibility testing of microbial biofilms

    Science.gov (United States)

    Harrison, Joe J; Turner, Raymond J; Ceri, Howard

    2005-01-01

    Background Microbial biofilms exist all over the natural world, a distribution that is paralleled by metal cations and oxyanions. Despite this reality, very few studies have examined how biofilms withstand exposure to these toxic compounds. This article describes a batch culture technique for biofilm and planktonic cell metal susceptibility testing using the MBEC assay. This device is compatible with standard 96-well microtiter plate technology. As part of this method, a two-part, metal-specific neutralization protocol is summarized. This procedure minimizes residual biological toxicity arising from the carry-over of metals from challenge to recovery media. Neutralization consists of treating cultures with a chemical compound known to react with or to chelate the metal. Treated cultures are plated onto rich agar to allow metal complexes to diffuse into the recovery medium while bacteria remain on top to recover. Two difficulties associated with metal susceptibility testing were the focus of two applications of this technique. First, assays were calibrated to allow comparisons of the susceptibility of different organisms to metals. Second, the effects of exposure time and growth medium composition on the susceptibility of E. coli JM109 biofilms to metals were investigated. Results This high-throughput method generated 96 statistically equivalent biofilms in a single device and thus allowed for comparative and combinatorial experiments of media, microbial strains, exposure times and metals. By adjusting growth conditions, it was possible to examine biofilms of different microorganisms that had similar cell densities. In one example, Pseudomonas aeruginosa ATCC 27853 was up to 80 times more resistant to heavy metalloid oxyanions than Escherichia coli TG1. Further, biofilms were up to 133 times more tolerant to tellurite (TeO32-) than corresponding planktonic cultures. Regardless of the growth medium, the tolerance of biofilm and planktonic cell E. coli JM109 to metals

  6. HiCTMap: Detection and analysis of chromosome territory structure and position by high-throughput imaging.

    Science.gov (United States)

    Jowhar, Ziad; Gudla, Prabhakar R; Shachar, Sigal; Wangsa, Darawalee; Russ, Jill L; Pegoraro, Gianluca; Ried, Thomas; Raznahan, Armin; Misteli, Tom

    2018-02-10

    The spatial organization of chromosomes in the nuclear space is an extensively studied field that relies on measurements of structural features and 3D positions of chromosomes with high precision and robustness. However, no tools are currently available to image and analyze chromosome territories in a high-throughput format. Here, we have developed High-throughput Chromosome Territory Mapping (HiCTMap), a method for the robust and rapid analysis of 2D and 3D chromosome territory positioning in mammalian cells. HiCTMap is a high-throughput imaging-based chromosome detection method which enables routine analysis of chromosome structure and nuclear position. Using an optimized FISH staining protocol in a 384-well plate format in conjunction with a bespoke automated image analysis workflow, HiCTMap faithfully detects chromosome territories and their position in 2D and 3D in a large population of cells per experimental condition. We apply this novel technique to visualize chromosomes 18, X, and Y in male and female primary human skin fibroblasts, and show accurate detection of the correct number of chromosomes in the respective genotypes. Given the ability to visualize and quantitatively analyze large numbers of nuclei, we use HiCTMap to measure chromosome territory area and volume with high precision and determine the radial position of chromosome territories using either centroid or equidistant-shell analysis. The HiCTMap protocol is also compatible with RNA FISH as demonstrated by simultaneous labeling of X chromosomes and Xist RNA in female cells. We suggest HiCTMap will be a useful tool for routine precision mapping of chromosome territories in a wide range of cell types and tissues. Published by Elsevier Inc.
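
    The equidistant-shell analysis mentioned above can be illustrated with a minimal 2D sketch: divide the nuclear radius into concentric shells of equal width and report which shell a territory centroid falls into. The function name, the circular test nucleus and the shell count are illustrative assumptions, not part of HiCTMap.

```python
import numpy as np

def radial_shell_position(nucleus_mask, territory_centroid, n_shells=5):
    """Assign a chromosome territory centroid to one of n equidistant
    concentric shells of a 2D nucleus mask.

    Returns a shell index in [0, n_shells): 0 = most central shell.
    """
    ys, xs = np.nonzero(nucleus_mask)
    center = np.array([ys.mean(), xs.mean()])
    # The farthest nuclear pixel from the centroid approximates the radius.
    max_r = np.max(np.hypot(ys - center[0], xs - center[1]))
    r = np.hypot(*(np.asarray(territory_centroid, dtype=float) - center))
    frac = min(r / max_r, 1.0)          # normalised radial position in [0, 1]
    return min(int(frac * n_shells), n_shells - 1)

# Toy example: circular nucleus of radius 50, territory near the periphery.
yy, xx = np.mgrid[:101, :101]
mask = (yy - 50) ** 2 + (xx - 50) ** 2 <= 50 ** 2
print(radial_shell_position(mask, (50, 95), n_shells=5))   # → 4
```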

  7. Multiscale cartilage biomechanics: technical challenges in realizing a high-throughput modelling and simulation workflow.

    Science.gov (United States)

    Erdemir, Ahmet; Bennetts, Craig; Davis, Sean; Reddy, Akhil; Sibole, Scott

    2015-04-06

    Understanding the mechanical environment of articular cartilage and chondrocytes is of the utmost importance in evaluating tissue damage which is often related to failure of the fibre architecture and mechanical injury to the cells. This knowledge also has significant implications for understanding the mechanobiological response in healthy and diseased cartilage and can drive the development of intervention strategies, ranging from the design of tissue-engineered constructs to the establishment of rehabilitation protocols. Spanning multiple spatial scales, a wide range of biomechanical factors dictate this mechanical environment. Computational modelling and simulation provide descriptive and predictive tools to identify multiscale interactions, and can lead towards a greater comprehension of healthy and diseased cartilage function, possibly in an individualized manner. Cartilage and chondrocyte mechanics can be examined in silico, through post-processing or feed-forward approaches. First, joint-tissue level simulations, typically using the finite-element method, solve boundary value problems representing the joint articulation and underlying tissue, which can differentiate the role of compartmental joint loading in cartilage contact mechanics and macroscale cartilage field mechanics. Subsequently, tissue-cell scale simulations, driven by the macroscale cartilage mechanical field information, can predict chondrocyte deformation metrics along with the mechanics of the surrounding pericellular and extracellular matrices. A high-throughput modelling and simulation framework is necessary to develop models representative of regional and population-wide variations in cartilage and chondrocyte anatomy and mechanical properties, and to conduct large-scale analysis accommodating a multitude of loading scenarios. 
However, realization of such a framework is a daunting task, with technical difficulties hindering the processes of model development, scale coupling, simulation and

  8. A TMA de-arraying method for high throughput biomarker discovery in tissue research.

    Directory of Open Access Journals (Sweden)

    Yinhai Wang

    Full Text Available BACKGROUND: Tissue MicroArrays (TMAs) represent a potential high-throughput platform for the analysis and discovery of tissue biomarkers. As TMA slides are produced manually and subject to processing and sectioning artefacts, the layout of TMA cores on the final slide and subsequent digital scan (TMA digital slide) is often disturbed making it difficult to associate cores with their original position in the planned TMA map. Additionally, the individual cores can be greatly altered and contain numerous irregularities such as missing cores, grid rotation and stretching. These factors demand the development of a robust method for de-arraying TMAs which identifies each TMA core, and assigns them to their appropriate coordinates on the constructed TMA slide. METHODOLOGY: This study presents a robust TMA de-arraying method consisting of three functional phases: TMA core segmentation, gridding and mapping. The segmentation of TMA cores uses a set of morphological operations to identify each TMA core. Gridding then utilises a Delaunay Triangulation based method to find the row and column indices of each TMA core. Finally, mapping correlates each TMA core from a high resolution TMA whole slide image with its name within a TMAMap. CONCLUSION: This study describes a genuine robust TMA de-arraying algorithm for the rapid identification of TMA cores from digital slides. The result of this de-arraying algorithm allows the easy partition of each TMA core for further processing. Based on a test group of 19 TMA slides (3129 cores), 99.84% of cores were segmented successfully, 99.81% of cores were gridded correctly and 99.96% of cores were mapped with their correct names via TMAMaps. The gridding of TMA cores was also extensively tested using a set of 113 pseudo slides (13,536 cores) with a variety of irregular grid layouts including missing cores, rotation and stretching. 100% of the cores were gridded correctly.

  9. A TMA de-arraying method for high throughput biomarker discovery in tissue research.

    Science.gov (United States)

    Wang, Yinhai; Savage, Kienan; Grills, Claire; McCavigan, Andrena; James, Jacqueline A; Fennell, Dean A; Hamilton, Peter W

    2011-01-01

    Tissue MicroArrays (TMAs) represent a potential high-throughput platform for the analysis and discovery of tissue biomarkers. As TMA slides are produced manually and subject to processing and sectioning artefacts, the layout of TMA cores on the final slide and subsequent digital scan (TMA digital slide) is often disturbed making it difficult to associate cores with their original position in the planned TMA map. Additionally, the individual cores can be greatly altered and contain numerous irregularities such as missing cores, grid rotation and stretching. These factors demand the development of a robust method for de-arraying TMAs which identifies each TMA core, and assigns them to their appropriate coordinates on the constructed TMA slide. This study presents a robust TMA de-arraying method consisting of three functional phases: TMA core segmentation, gridding and mapping. The segmentation of TMA cores uses a set of morphological operations to identify each TMA core. Gridding then utilises a Delaunay Triangulation based method to find the row and column indices of each TMA core. Finally, mapping correlates each TMA core from a high resolution TMA whole slide image with its name within a TMAMap. This study describes a genuine robust TMA de-arraying algorithm for the rapid identification of TMA cores from digital slides. The result of this de-arraying algorithm allows the easy partition of each TMA core for further processing. Based on a test group of 19 TMA slides (3129 cores), 99.84% of cores were segmented successfully, 99.81% of cores were gridded correctly and 99.96% of cores were mapped with their correct names via TMAMaps. The gridding of TMA cores was also extensively tested using a set of 113 pseudo slides (13,536 cores) with a variety of irregular grid layouts including missing cores, rotation and stretching. 100% of the cores were gridded correctly.
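
    The Delaunay-triangulation gridding used by the published method is involved; as a much-simplified illustration of the gridding phase only, the sketch below recovers row and column indices by 1-D clustering of core centroids, assuming modest jitter and no large rotation. The function names and the spacing parameter are hypothetical.

```python
import numpy as np

def grid_cores(centroids, gap):
    """Assign (row, col) indices to TMA core centroids.

    centroids: (N, 2) array of (y, x) positions; gap: approximate core
    spacing. A new row/column starts whenever the sorted coordinates
    jump by more than half the spacing.
    """
    def index_1d(vals):
        order = np.argsort(vals)
        idx = np.empty(len(vals), dtype=int)
        current = 0
        for k, i in enumerate(order):
            if k > 0 and vals[i] - vals[order[k - 1]] > gap / 2:
                current += 1
            idx[i] = current
        return idx

    rows = index_1d(centroids[:, 0])
    cols = index_1d(centroids[:, 1])
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

# 2x2 layout with slight jitter in the core positions.
cores = np.array([[10, 12], [11, 52], [49, 11], [52, 50]])
print(grid_cores(cores, gap=40))   # → [(0, 0), (0, 1), (1, 0), (1, 1)]
```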

  10. A microreactor array for spatially resolved measurement of catalytic activity for high-throughput catalysis science

    Energy Technology Data Exchange (ETDEWEB)

    Kondratyuk, Petro; Gumuslu, Gamze; Shukla, Shantanu; Miller, James B; Morreale, Bryan D; Gellman, Andrew J

    2013-04-01

    We describe a 100 channel microreactor array capable of spatially resolved measurement of catalytic activity across the surface of a flat substrate. When used in conjunction with a composition spread alloy film (CSAF, e.g. Pd{sub x}Cu{sub y}Au{sub 1-x-y}) across which component concentrations vary smoothly, such measurements permit high-throughput analysis of catalytic activity and selectivity as a function of catalyst composition. In the reported implementation, the system achieves spatial resolution of 1 mm{sup 2} over a 10×10 mm{sup 2} area. During operation, the reactant gases are delivered at constant flow rate to 100 points of differing composition on the CSAF surface by means of a 100-channel microfluidic device. After coming into contact with the CSAF catalyst surface, the product gas mixture from each of the 100 points is withdrawn separately through a set of 100 isolated channels for analysis using a mass spectrometer. We demonstrate the operation of the device on a Pd{sub x}Cu{sub y}Au{sub 1-x-y} CSAF catalyzing the H{sub 2}-D{sub 2} exchange reaction at 333 K. In essentially a single experiment, we measured the catalytic activity over a broad swathe of concentrations from the ternary composition space of the Pd{sub x}Cu{sub y}Au{sub 1-x-y} alloy.

  11. FLIC: high-throughput, continuous analysis of feeding behaviors in Drosophila.

    Directory of Open Access Journals (Sweden)

    Jennifer Ro

    Full Text Available We present a complete hardware and software system for collecting and quantifying continuous measures of feeding behaviors in the fruit fly, Drosophila melanogaster. The FLIC (Fly Liquid-Food Interaction Counter) detects analog electronic signals as brief as 50 µs that occur when a fly makes physical contact with liquid food. Signal characteristics effectively distinguish between different types of behaviors, such as feeding and tasting events. The FLIC system performs as well or better than popular methods for simple assays, and it provides an unprecedented opportunity to study novel components of feeding behavior, such as time-dependent changes in food preference and individual levels of motivation and hunger. Furthermore, FLIC experiments can persist indefinitely without disturbance, and we highlight this ability by establishing a detailed picture of circadian feeding behaviors in the fly. We believe that the FLIC system will work hand-in-hand with modern molecular techniques to facilitate mechanistic studies of feeding behaviors in Drosophila using modern, high-throughput technologies.
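
    A contact signal like FLIC's can be segmented into events and labelled by duration and amplitude, for example separating brief low-amplitude "tasting" contacts from sustained "feeding" bouts. The thresholds and function below are illustrative guesses, not the published calibration.

```python
def classify_contacts(signal, fs, amp_thresh=100, min_feed_s=0.5):
    """Split a FLIC-style contact signal into events and label each as
    'feeding' or 'tasting'.

    signal: list of ADC samples; fs: sampling rate (Hz). An event is a
    maximal run of samples above zero; it counts as feeding when its
    peak reaches amp_thresh and it lasts at least min_feed_s.
    """
    events, start = [], None
    for i, v in enumerate(signal + [0]):        # sentinel closes a trailing run
        if v > 0 and start is None:
            start = i                            # event begins
        elif v <= 0 and start is not None:
            run = signal[start:i]                # event ends; classify it
            dur = len(run) / fs
            label = ("feeding" if max(run) >= amp_thresh and dur >= min_feed_s
                     else "tasting")
            events.append((start / fs, dur, label))
            start = None
    return events

# A brief weak contact followed by a sustained strong one, sampled at 10 Hz.
sig = [0, 30, 40, 0, 0] + [150] * 10 + [0]
print(classify_contacts(sig, fs=10))   # → [(0.1, 0.2, 'tasting'), (0.5, 1.0, 'feeding')]
```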

  12. SIVQ-aided laser capture microdissection: A tool for high-throughput expression profiling

    Directory of Open Access Journals (Sweden)

    Jason Hipp

    2011-01-01

    Full Text Available Introduction: Laser capture microdissection (LCM) facilitates procurement of defined cell populations for study in the context of histopathology. The morphologic assessment step in the LCM procedure is time consuming and tedious, thus restricting the utility of the technology for large applications. Results: Here, we describe the use of Spatially Invariant Vector Quantization (SIVQ) for histological analysis and LCM. Using SIVQ, we selected vectors as morphologic predicates that were representative of normal epithelial or cancer cells and then searched for phenotypically similar cells across entire tissue sections. The selected cells were subsequently auto-microdissected and the recovered RNA was analyzed by expression microarray. Gene expression profiles from SIVQ-LCM and standard LCM-derived samples demonstrated highly congruous signatures, confirming the equivalence of the differing microdissection methods. Conclusion: SIVQ-LCM improves the work-flow of microdissection in two significant ways. First, the process is transformative in that it shifts the pathologist's role from technical execution of the entire microdissection to a limited-contact supervisory role, enabling large-scale extraction of tissue by expediting subsequent semi-autonomous identification of target cell populations. Second, this work-flow model provides an opportunity to systematically identify highly constrained cell populations and morphologically consistent regions within tissue sections. Integrating SIVQ with LCM in a single environment provides advanced capabilities for efficient and high-throughput histological-based molecular studies.

  13. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model

    Science.gov (United States)

    Li, Zhen; Zhang, Renyu

    2017-01-01

    -assisted models also have much better quality than template-based models especially for membrane proteins. The 3D models built from our contact prediction have TMscore>0.5 for 208 of the 398 membrane proteins, while those from homology modeling have TMscore>0.5 for only 10 of them. Further, even if trained mostly by soluble proteins, our deep learning method works very well on membrane proteins. In the recent blind CAMEO benchmark, our fully-automated web server implementing this method successfully folded 6 targets with a new fold and only 0.3L-2.3L effective sequence homologs, including one β protein of 182 residues, one α+β protein of 125 residues, one α protein of 140 residues, one α protein of 217 residues, one α/β of 260 residues and one α protein of 462 residues. Our method also achieved the highest F1 score on free-modeling targets in the latest CASP (Critical Assessment of Structure Prediction), although it was not fully implemented back then. Availability http://raptorx.uchicago.edu/ContactMap/ PMID:28056090

  14. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model.

    Science.gov (United States)

    Wang, Sheng; Sun, Siqi; Li, Zhen; Zhang, Renyu; Xu, Jinbo

    2017-01-01

    much better quality than template-based models especially for membrane proteins. The 3D models built from our contact prediction have TMscore>0.5 for 208 of the 398 membrane proteins, while those from homology modeling have TMscore>0.5 for only 10 of them. Further, even if trained mostly by soluble proteins, our deep learning method works very well on membrane proteins. In the recent blind CAMEO benchmark, our fully-automated web server implementing this method successfully folded 6 targets with a new fold and only 0.3L-2.3L effective sequence homologs, including one β protein of 182 residues, one α+β protein of 125 residues, one α protein of 140 residues, one α protein of 217 residues, one α/β of 260 residues and one α protein of 462 residues. Our method also achieved the highest F1 score on free-modeling targets in the latest CASP (Critical Assessment of Structure Prediction), although it was not fully implemented back then. http://raptorx.uchicago.edu/ContactMap/.

  15. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model.

    Directory of Open Access Journals (Sweden)

    Sheng Wang

    2017-01-01

    -assisted models also have much better quality than template-based models especially for membrane proteins. The 3D models built from our contact prediction have TMscore>0.5 for 208 of the 398 membrane proteins, while those from homology modeling have TMscore>0.5 for only 10 of them. Further, even if trained mostly by soluble proteins, our deep learning method works very well on membrane proteins. In the recent blind CAMEO benchmark, our fully-automated web server implementing this method successfully folded 6 targets with a new fold and only 0.3L-2.3L effective sequence homologs, including one β protein of 182 residues, one α+β protein of 125 residues, one α protein of 140 residues, one α protein of 217 residues, one α/β of 260 residues and one α protein of 462 residues. Our method also achieved the highest F1 score on free-modeling targets in the latest CASP (Critical Assessment of Structure Prediction), although it was not fully implemented back then. http://raptorx.uchicago.edu/ContactMap/.
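
    Contact-prediction accuracy of the kind reported in CASP is usually scored as the precision of the top-L/k predicted long-range contacts (L = sequence length). A minimal sketch of that metric, using a synthetic contact map and the conventional 24-residue separation cutoff; the function name and test data are illustrative.

```python
import numpy as np

def top_l_precision(pred, native, L, k=1, min_sep=24):
    """Precision of the top-L/k predicted long-range contacts.

    pred:   (N, N) predicted contact probabilities
    native: (N, N) boolean true-contact map
    Only residue pairs separated by at least min_sep positions count as
    long-range, following the usual CASP convention.
    """
    n = pred.shape[0]
    i, j = np.triu_indices(n, k=min_sep)            # long-range pairs only
    order = np.argsort(pred[i, j])[::-1][: max(L // k, 1)]
    return float(native[i[order], j[order]].mean())

# Synthetic example: 20 true long-range contacts in a 50-residue map,
# scored higher than every non-contact by a "perfect" predictor.
rng = np.random.default_rng(0)
n = 50
native = np.zeros((n, n), dtype=bool)
i, j = np.triu_indices(n, k=24)
true_idx = rng.choice(len(i), size=20, replace=False)
native[i[true_idx], j[true_idx]] = True
native |= native.T
pred = rng.random((n, n)) * 0.5                     # non-contacts score < 0.5
pred[native] = 0.5 + rng.random(native.sum()) * 0.5 # contacts score >= 0.5
print(top_l_precision(pred, native, L=n, k=5))      # → 1.0
```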

  16. Laser-Induced Fluorescence Detection in High-Throughput Screening of Heterogeneous Catalysts and Single Cells Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Su, Hui [Iowa State Univ., Ames, IA (United States)

    2001-01-01

    Laser-induced fluorescence detection is one of the most sensitive detection techniques and it has found enormous applications in various areas. The purpose of this research was to develop detection approaches based on laser-induced fluorescence detection in two different areas, heterogeneous catalysts screening and single cell study. First, the author introduced laser-induced fluorescence imaging (LIFI) as a high-throughput screening technique for heterogeneous catalysts to explore the use of this high-throughput screening technique in discovery and study of various heterogeneous catalyst systems. This scheme is based on the fact that the creation or the destruction of chemical bonds alters the fluorescence properties of suitably designed molecules. By irradiating the region immediately above the catalytic surface with a laser, the fluorescence intensity of a selected product or reactant can be imaged by a charge-coupled device (CCD) camera to follow the catalytic activity as a function of time and space. By screening the catalytic activity of vanadium pentoxide catalysts in oxidation of naphthalene, the author demonstrated that LIFI has good detection performance and the spatial and temporal resolution needed for high-throughput screening of heterogeneous catalysts. The sample packing density can reach up to 250 x 250 subunits/cm2 for 40-μm wells. This experimental set-up can also screen solid catalysts via near infrared thermography detection. In the second part of this dissertation, the author used laser-induced native fluorescence coupled with capillary electrophoresis (LINF-CE) and microscope imaging to study single cell degranulation. On the basis of good temporal correlation with events observed through an optical microscope, the author identified individual peaks in the fluorescence electropherograms as serotonin released from the granular core on contact with the surrounding fluid.

  17. SNP high-throughput screening in grapevine using the SNPlex genotyping system.

    Science.gov (United States)

    Pindo, Massimo; Vezzulli, Silvia; Coppola, Giuseppina; Cartwright, Dustin A; Zharkikh, Andrey; Velasco, Riccardo; Troggio, Michela

    2008-01-28

    Until recently, only a small number of low- and mid-throughput methods have been used for single nucleotide polymorphism (SNP) discovery and genotyping in grapevine (Vitis vinifera L.). However, following completion of the sequence of the highly heterozygous genome of Pinot Noir, it has been possible to identify millions of electronic SNPs (eSNPs) thus providing a valuable source for high-throughput genotyping methods. Herein we report the first application of the SNPlex genotyping system in grapevine aiming at the anchoring of a eukaryotic genome. This approach combines robust SNP detection with automated assay readout and data analysis. 813 candidate eSNPs were developed from non-repetitive contigs of the assembled genome of Pinot Noir and tested in 90 progeny of Syrah x Pinot Noir cross. 563 new SNP-based markers were obtained and mapped. The efficiency rate of 69% was enhanced to 80% when multiple displacement amplification (MDA) methods were used for preparation of genomic DNA for the SNPlex assay. Unlike other SNP genotyping methods used to investigate thousands of SNPs in a few genotypes, or a few SNPs in around a thousand genotypes, the SNPlex genotyping system represents a good compromise to investigate several hundred SNPs in a hundred or more samples simultaneously. Therefore, the use of the SNPlex assay, coupled with whole genome amplification (WGA), is a good solution for future applications in well-equipped laboratories.

  18. SNP high-throughput screening in grapevine using the SNPlex™ genotyping system

    Science.gov (United States)

    Pindo, Massimo; Vezzulli, Silvia; Coppola, Giuseppina; Cartwright, Dustin A; Zharkikh, Andrey; Velasco, Riccardo; Troggio, Michela

    2008-01-01

    Background Until recently, only a small number of low- and mid-throughput methods have been used for single nucleotide polymorphism (SNP) discovery and genotyping in grapevine (Vitis vinifera L.). However, following completion of the sequence of the highly heterozygous genome of Pinot Noir, it has been possible to identify millions of electronic SNPs (eSNPs) thus providing a valuable source for high-throughput genotyping methods. Results Herein we report the first application of the SNPlex™ genotyping system in grapevine aiming at the anchoring of a eukaryotic genome. This approach combines robust SNP detection with automated assay readout and data analysis. 813 candidate eSNPs were developed from non-repetitive contigs of the assembled genome of Pinot Noir and tested in 90 progeny of Syrah × Pinot Noir cross. 563 new SNP-based markers were obtained and mapped. The efficiency rate of 69% was enhanced to 80% when multiple displacement amplification (MDA) methods were used for preparation of genomic DNA for the SNPlex assay. Conclusion Unlike other SNP genotyping methods used to investigate thousands of SNPs in a few genotypes, or a few SNPs in around a thousand genotypes, the SNPlex genotyping system represents a good compromise to investigate several hundred SNPs in a hundred or more samples simultaneously. Therefore, the use of the SNPlex assay, coupled with whole genome amplification (WGA), is a good solution for future applications in well-equipped laboratories. PMID:18226250

  19. Unlocking the Potential of High-Throughput Drug Combination Assays Using Acoustic Dispensing.

    Science.gov (United States)

    Chan, Grace Ka Yan; Wilson, Stacy; Schmidt, Stephen; Moffat, John G

    2016-02-01

    Assessment of synergistic effects of drug combinations in vitro is a critical part of anticancer drug research. However, the complexities of dosing and analyzing two drugs over the appropriate range of doses have generally led to compromises in experimental design that restrict the quality and robustness of the data. In particular, the use of a single dose response of combined drugs, rather than a full two-way matrix of varying doses, has predominated in higher-throughput studies. Acoustic dispensing unlocks the potential of high-throughput dose matrix analysis. We have developed acoustic dispensing protocols that enable compound synergy assays in a 384-well format. This experimental design is considerably more efficient and flexible with respect to time, reagent usage, and labware than is achievable using traditional serial-dilution approaches. Data analysis tools integrated in Genedata Screener were used to efficiently deconvolute the combination compound mapping scheme and calculate compound potency and synergy metrics. We have applied this workflow to evaluate interactions among drugs targeting different nodes of the mitogen-activated protein kinase pathway in a panel of cancer cell lines. © 2015 Society for Laboratory Automation and Screening.
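
    One common way to quantify synergy across a full two-way dose matrix is excess over Bliss independence, where the expected combined inhibition of two independent drugs is fa + fb - fa·fb. The sketch below is a generic illustration with toy numbers, not the Genedata Screener implementation.

```python
import numpy as np

def bliss_excess(fa, fb, fab):
    """Excess over Bliss independence for a full dose-combination matrix.

    fa:  fractional inhibition of drug A alone, shape (nA,)
    fb:  fractional inhibition of drug B alone, shape (nB,)
    fab: measured inhibition of the combinations, shape (nA, nB)
    Positive entries indicate synergy, negative entries antagonism.
    """
    expected = fa[:, None] + fb[None, :] - fa[:, None] * fb[None, :]
    return fab - expected

# Toy 2x2 dose matrix: every combination slightly beats the Bliss expectation.
fa = np.array([0.2, 0.5])
fb = np.array([0.1, 0.4])
measured = np.array([[0.30, 0.60],
                     [0.60, 0.80]])
print(bliss_excess(fa, fb, measured))
```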

  20. ESSENTIALS: Software for Rapid Analysis of High Throughput Transposon Insertion Sequencing Data

    Science.gov (United States)

    Zomer, Aldert; Burghout, Peter; Bootsma, Hester J.; Hermans, Peter W. M.; van Hijum, Sacha A. F. T.

    2012-01-01

    High-throughput analysis of genome-wide random transposon mutant libraries is a powerful tool for (conditional) essential gene discovery. Recently, several next-generation sequencing approaches, e.g. Tn-seq/INseq, HITS and TraDIS, have been developed that accurately map the site of transposon insertions by mutant-specific amplification and sequence readout of DNA flanking the transposon insertions site, assigning a measure of essentiality based on the number of reads per insertion site flanking sequence or per gene. However, analysis of these large and complex datasets is hampered by the lack of an easy to use and automated tool for transposon insertion sequencing data. To fill this gap, we developed ESSENTIALS, an open source, web-based software tool for researchers in the genomics field utilizing transposon insertion sequencing analysis. It accurately predicts (conditionally) essential genes and offers the flexibility of using different sample normalization methods, genomic location bias correction, data preprocessing steps, appropriate statistical tests and various visualizations to examine the results, while requiring only a minimum of input and hands-on work from the researcher. We successfully applied ESSENTIALS to in-house and published Tn-seq, TraDIS and HITS datasets and we show that the various pre- and post-processing steps on the sequence reads and count data with ESSENTIALS considerably improve the sensitivity and specificity of predicted gene essentiality. PMID:22900082
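
    At their core, Tn-seq-style essentiality measures start from insertion or read counts per gene. A deliberately minimal sketch of that tally, with hypothetical gene coordinates; ESSENTIALS layers normalisation, bias correction and proper statistics on top of counts like these.

```python
def insertions_per_gene(insertion_sites, genes):
    """Tally transposon insertion sites per gene.

    insertion_sites: iterable of genomic positions of mapped insertions
    genes: dict name -> (start, end), half-open coordinates
    In a saturated library, genes with no (or very few) insertions are
    candidate essentials.
    """
    counts = {name: 0 for name in genes}
    for pos in insertion_sites:
        for name, (start, end) in genes.items():
            if start <= pos < end:
                counts[name] += 1
    return counts

# Hypothetical two-gene genome with five mapped insertion sites.
genes = {"geneA": (0, 100), "geneB": (100, 200)}
sites = [5, 20, 60, 99, 150]
print(insertions_per_gene(sites, genes))   # → {'geneA': 4, 'geneB': 1}
```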

  1. High-throughput identification of potential minor histocompatibility antigens by MHC tetramer-based screening

    DEFF Research Database (Denmark)

    Hombrink, Pleun; Hadrup, Sine R; Bakker, Arne

    2011-01-01

    T-cell recognition of minor histocompatibility antigens (MiHA) plays an important role in the graft-versus-tumor (GVT) effect of allogeneic stem cell transplantation (allo-SCT). However, the number of MiHA identified to date remains limited, making clinical application of MiHA reactive T......-cell infusion difficult. This study represents the first attempt of genome-wide prediction of MiHA, coupled to the isolation of T-cell populations that react with these antigens. In this unbiased high-throughput MiHA screen, both the possibilities and pitfalls of this approach were investigated. First, 973......MHC-tetramer-based enrichment and multi-color flow cytometry. Using this approach, 71 peptide-reactive T-cell populations were generated. The isolation of a T-cell line specifically recognizing target cells expressing the MAP4K1(IMA) antigen demonstrates that identification of MiHA through this approach is in principle...

  2. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    Science.gov (United States)

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
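
    Partitioning a spatial index, as described above, is often done by linearising 3-D block coordinates with a Morton (Z-order) code and range-partitioning the codes across nodes, so spatially adjacent blocks tend to land together. The abstract does not specify OCP's exact layout, so this is a generic sketch with hypothetical names.

```python
def morton3d(x, y, z, bits=10):
    """Interleave the bits of 3-D block coordinates into a Morton
    (Z-order) code; nearby blocks receive nearby codes."""
    code = 0
    for b in range(bits):
        code |= ((x >> b) & 1) << (3 * b)        # x bit -> position 3b
        code |= ((y >> b) & 1) << (3 * b + 1)    # y bit -> position 3b+1
        code |= ((z >> b) & 1) << (3 * b + 2)    # z bit -> position 3b+2
    return code

def node_for_block(x, y, z, n_nodes, bits=10):
    """Map a block to one of n_nodes by range-partitioning Morton codes."""
    return morton3d(x, y, z, bits) * n_nodes >> (3 * bits)

print(morton3d(1, 0, 0))   # → 1
print(morton3d(0, 1, 0))   # → 2
print(morton3d(1, 1, 1))   # → 7
```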

  3. Pyicos: a versatile toolkit for the analysis of high-throughput sequencing data.

    Science.gov (United States)

    Althammer, Sonja; González-Vallinas, Juan; Ballaré, Cecilia; Beato, Miguel; Eyras, Eduardo

    2011-12-15

    High-throughput sequencing (HTS) has revolutionized gene regulation studies and is now fundamental for the detection of protein-DNA and protein-RNA binding, as well as for measuring RNA expression. With increasing variety and sequencing depth of HTS datasets, the need for more flexible and memory-efficient tools to analyse them is growing. We describe Pyicos, a powerful toolkit for the analysis of mapped reads from diverse HTS experiments: ChIP-Seq, either punctuated or broad signals, CLIP-Seq and RNA-Seq. We prove the effectiveness of Pyicos to select for significant signals and show that its accuracy is comparable and sometimes superior to that of methods specifically designed for each particular type of experiment. Pyicos facilitates the analysis of a variety of HTS datatypes through its flexibility and memory efficiency, providing a useful framework for data integration into models of regulatory genomics. Open-source software, with tutorials and protocol files, is available at http://regulatorygenomics.upf.edu/pyicos or as a Galaxy server at http://regulatorygenomics.upf.edu/galaxy. Contact: eduardo.eyras@upf.edu. Supplementary data are available at Bioinformatics online.

  4. A beginners guide to SNP calling from high-throughput DNA-sequencing data.

    Science.gov (United States)

    Altmann, André; Weber, Peter; Bader, Daniel; Preuss, Michael; Binder, Elisabeth B; Müller-Myhsok, Bertram

    2012-10-01

    High-throughput DNA sequencing (HTS) is of increasing importance in the life sciences. One of its most prominent applications is the sequencing of whole genomes or targeted regions of the genome such as all exonic regions (i.e., the exome). Here, the objective is the identification of genetic variants such as single nucleotide polymorphisms (SNPs). The extraction of SNPs from the raw genetic sequences involves many processing steps and the application of a diverse set of tools. We review the essential building blocks for a pipeline that calls SNPs from raw HTS data. The pipeline includes quality control, mapping of short reads to the reference genome, visualization and post-processing of the alignment including base quality recalibration. The final steps of the pipeline include the SNP calling procedure along with filtering of SNP candidates. The steps of this pipeline are accompanied by an analysis of a publicly available whole-exome sequencing dataset. To this end, we employ several alignment programs and SNP calling routines for highlighting the fact that the choice of the tools significantly affects the final results.
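
    The final filtering step of such a pipeline typically applies hard cutoffs on read depth, call quality and strand balance to the raw SNP candidates. A minimal sketch with illustrative (not recommended) defaults; the field names and thresholds are assumptions for the example only.

```python
def filter_snp_candidates(candidates, min_depth=10, min_qual=30, max_strand_bias=0.9):
    """Filter raw SNP candidates by simple hard cutoffs.

    candidates: list of dicts with 'depth', 'qual' (Phred-scaled) and
    'fwd_frac' (fraction of supporting reads on the forward strand).
    """
    kept = []
    for c in candidates:
        if c["depth"] < min_depth:
            continue                      # too few reads to trust the call
        if c["qual"] < min_qual:
            continue                      # low variant-calling confidence
        if not (1 - max_strand_bias) <= c["fwd_frac"] <= max_strand_bias:
            continue                      # supporting reads mostly on one strand
        kept.append(c)
    return kept

calls = [
    {"pos": 101, "depth": 25, "qual": 50, "fwd_frac": 0.48},
    {"pos": 202, "depth": 4,  "qual": 60, "fwd_frac": 0.50},   # too shallow
    {"pos": 303, "depth": 30, "qual": 12, "fwd_frac": 0.55},   # low quality
    {"pos": 404, "depth": 40, "qual": 45, "fwd_frac": 0.97},   # strand-biased
]
print([c["pos"] for c in filter_snp_candidates(calls)])   # → [101]
```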

  5. SNP high-throughput screening in grapevine using the SNPlex™ genotyping system

    Directory of Open Access Journals (Sweden)

    Velasco Riccardo

    2008-01-01

    Full Text Available Abstract Background Until recently, only a small number of low- and mid-throughput methods have been used for single nucleotide polymorphism (SNP discovery and genotyping in grapevine (Vitis vinifera L.. However, following completion of the sequence of the highly heterozygous genome of Pinot Noir, it has been possible to identify millions of electronic SNPs (eSNPs thus providing a valuable source for high-throughput genotyping methods. Results Herein we report the first application of the SNPlex™ genotyping system in grapevine aiming at the anchoring of an eukaryotic genome. This approach combines robust SNP detection with automated assay readout and data analysis. 813 candidate eSNPs were developed from non-repetitive contigs of the assembled genome of Pinot Noir and tested in 90 progeny of Syrah × Pinot Noir cross. 563 new SNP-based markers were obtained and mapped. The efficiency rate of 69% was enhanced to 80% when multiple displacement amplification (MDA methods were used for preparation of genomic DNA for the SNPlex assay. Conclusion Unlike other SNP genotyping methods used to investigate thousands of SNPs in a few genotypes, or a few SNPs in around a thousand genotypes, the SNPlex genotyping system represents a good compromise to investigate several hundred SNPs in a hundred or more samples simultaneously. Therefore, the use of the SNPlex assay, coupled with whole genome amplification (WGA, is a good solution for future applications in well-equipped laboratories.

  6. EuroPhenome: a repository for high-throughput mouse phenotyping data.

    Science.gov (United States)

    Morgan, Hugh; Beck, Tim; Blake, Andrew; Gates, Hilary; Adams, Niels; Debouzy, Guillaume; Leblanc, Sophie; Lengger, Christoph; Maier, Holger; Melvin, David; Meziane, Hamid; Richardson, Dave; Wells, Sara; White, Jacqui; Wood, Joe; de Angelis, Martin Hrabé; Brown, Steve D M; Hancock, John M; Mallon, Ann-Marie

    2010-01-01

    The broad aim of biomedical science in the postgenomic era is to link genomic and phenotype information to allow a deeper understanding of the processes leading from genomic changes to altered phenotype and disease. The EuroPhenome project (http://www.EuroPhenome.org) is a comprehensive resource for raw and annotated high-throughput phenotyping data arising from projects such as EUMODIC. EUMODIC is gathering data from the EMPReSSslim pipeline (http://www.empress.har.mrc.ac.uk/), which is performed on inbred mouse strains and knock-out lines arising from the EUCOMM project. The EuroPhenome interface allows the user to access the data via phenotype or genotype, and in a variety of ways, including graphical display, statistical analysis and access to the raw data via web services. The raw phenotyping data captured in EuroPhenome are annotated by a pipeline which automatically identifies statistically different mutants from the appropriate baseline and assigns ontology terms for that specific test. Mutant phenotypes can be quickly identified using two EuroPhenome tools: PhenoMap, a graphical representation of statistically relevant phenotypes, and mutant mining using ontology terms. To assist with data definition and cross-database comparisons, phenotype data are annotated using combinations of terms from biological ontologies.

  7. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    Science.gov (United States)

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays, writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992

  8. High-throughput continuous hydrothermal synthesis of an entire nanoceramic phase diagram.

    Science.gov (United States)

    Weng, Xiaole; Cockcroft, Jeremy K; Hyett, Geoffrey; Vickers, Martin; Boldrin, Paul; Tang, Chiu C; Thompson, Stephen P; Parker, Julia E; Knowles, Jonathan C; Rehman, Ihtesham; Parkin, Ivan; Evans, Julian R G; Darr, Jawwad A

    2009-01-01

    A novel High-Throughput Continuous Hydrothermal (HiTCH) flow synthesis reactor was used to directly and rapidly make a 66-sample nanoparticle library (an entire phase diagram) of nanocrystalline Ce(x)Zr(y)Y(z)O(2-delta) in less than 12 h. Rietveld-quality high-resolution powder X-ray diffraction (PXRD) data were obtained for the entire heat-treated library (1000 °C for 1 h) in less than a day using the new robotic beamline I11 at Diamond Light Source (DLS). Consequently, the authors rapidly mapped out phase and sintering behavior for the entire library. Of the 66-sample heat-treated library, the PXRD data suggest that 43 samples possess the fluorite structure, of which 30 (out of 36) are ternary compositions. The speed, quantity and quality of data obtained by our new approach offer an exciting new development that will allow structure-property relationships to be accessed for nanoceramics in much shorter time periods.
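    The 66-sample count is what a full ternary phase diagram yields when sampled on a 10 at.% composition grid (the step size is inferred from the sample count, not stated in the abstract); a quick sketch:

```python
def ternary_grid(steps):
    """All (x, y, z) molar fractions with x + y + z = 1 on a grid
    with `steps` divisions per axis."""
    return [(i / steps, j / steps, (steps - i - j) / steps)
            for i in range(steps + 1)
            for j in range(steps + 1 - i)]

# A 10 at.% grid over Ce-Zr-Y gives exactly the 66 compositions
# of the HiTCH library described above.
library = ternary_grid(10)
print(len(library))  # prints 66
```

In general such a grid contains (n+1)(n+2)/2 compositions for n divisions, which is why library sizes like 66 recur in combinatorial materials work.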

  9. Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis

    Directory of Open Access Journals (Sweden)

    Tu Jing

    2012-01-01

    Background: Multiplexing has become the major limitation of next-generation sequencing (NGS) in application to low-complexity samples. Physical space segregation allows only limited multiplexing, while existing barcode approaches permit simultaneous analysis of at most several dozen samples. Results: Here we introduce pair-barcode sequencing (PBS), an economical and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were sequenced simultaneously using the combination of 4 unique forward barcodes and 8 unique reverse barcodes. Over 174,000,000 reads were generated, and about 64% of them were assigned to both barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns were captured from the two clinical groups. The strong correlation between different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. Conclusions: By employing PBS in NGS, large-scale multiplexed pooled samples can be practically analyzed in parallel, so that high-throughput sequencing economically meets the requirements of samples with low sequencing-throughput demands.
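    The combinatorics of pair-barcoding are simple to sketch: with f forward and r reverse barcodes, f × r libraries can share one run, and each read is assigned by matching both ends (the barcode sequences and reads below are made up for illustration; they are not from the paper):

```python
def demultiplex(reads, fwd_codes, rev_codes):
    """Assign each (fwd_tag, rev_tag, sequence) read to the library keyed by
    its barcode pair; reads with an unknown tag on either end are dropped."""
    libraries = {(f, r): [] for f in fwd_codes for r in rev_codes}
    assigned = 0
    for fwd, rev, seq in reads:
        if (fwd, rev) in libraries:
            libraries[(fwd, rev)].append(seq)
            assigned += 1
    return libraries, assigned

fwd = ["AAC", "GGT", "TTA", "CCG"]          # 4 forward barcodes
rev = ["ACA", "CTC", "GAG", "TGT",
       "AGG", "CAA", "GTT", "TCC"]          # 8 reverse barcodes
print(len(fwd) * len(rev))  # prints 32: addressable libraries, as in the paper

reads = [("AAC", "ACA", "TGCA"), ("GGT", "TCC", "ACGT"), ("NNN", "ACA", "GGCC")]
libs, n = demultiplex(reads, fwd, rev)
print(n)  # prints 2: reads assigned to both barcodes
```

The requirement that both tags match is what produced the reported ~64% assignment rate: a read failing either barcode is discarded.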

  10. Performance Evaluation of IEEE 802.11ah Networks With High-Throughput Bidirectional Traffic.

    Science.gov (United States)

    Šljivo, Amina; Kerkhove, Dwight; Tian, Le; Famaey, Jeroen; Munteanu, Adrian; Moerman, Ingrid; Hoebeke, Jeroen; De Poorter, Eli

    2018-01-23

    So far, existing sub-GHz wireless communication technologies focused on low-bandwidth, long-range communication with large numbers of constrained devices. Although these characteristics are fine for many Internet of Things (IoT) applications, more demanding application requirements could not be met and legacy Internet technologies such as Transmission Control Protocol/Internet Protocol (TCP/IP) could not be used. This has changed with the advent of the new IEEE 802.11ah Wi-Fi standard, which is much more suitable for reliable bidirectional communication and high-throughput applications over a wide area (up to 1 km). The standard offers great possibilities for network performance optimization through a number of physical- and link-layer configurable features. However, given that the optimal configuration parameters depend on traffic patterns, the standard does not dictate how to determine them. Such a large number of configuration options can lead to sub-optimal or even incorrect configurations. Therefore, we investigated how two key mechanisms, Restricted Access Window (RAW) grouping and Traffic Indication Map (TIM) segmentation, influence scalability, throughput, latency and energy efficiency in the presence of bidirectional TCP/IP traffic. We considered both high-throughput video streaming traffic and large-scale reliable sensing traffic and investigated TCP behavior in both scenarios when the link layer introduces long delays. This article presents the relations between attainable throughput per station and attainable number of stations, as well as the influence of RAW, TIM and TCP parameters on both. We found that up to 20 continuously streaming IP-cameras can be reliably connected via IEEE 802.11ah with a maximum average data rate of 160 kbps, whereas 10 IP-cameras can achieve average data rates of up to 255 kbps over 200 m. Up to 6960 stations transmitting every 60 s can be connected over 1 km with no lost packets. The presented results enable the fine tuning

  11. Transcriptomic and proteomic profiling of two porcine tissues using high-throughput technologies

    Directory of Open Access Journals (Sweden)

    Panitz Frank

    2009-01-01

    Background: The recent development of high-throughput technologies for expression profiling has allowed parallel analysis of transcriptomes and proteomes in biological systems, such as comparative analysis of transcript and protein levels of tissue-regulated genes. Until now, such studies have only included microarrays or short sequence tags for transcript profiling. Furthermore, most comparisons of transcript and protein levels have been based on absolute expression values from within the same tissue, not relative expression values based on tissue ratios. Results: Presented here is a novel study of two porcine tissues based on integrative analysis of data from expression profiling of identical samples using cDNA microarray, 454-sequencing and iTRAQ-based proteomics. Sequence homology identified 2,541 unique transcripts that are detectable by both microarray hybridization and 454-sequencing of 1.2 million cDNA tags. Both transcript-based technologies showed high reproducibility between sample replicates of the same tissue, but the correlation across the two technologies was modest. Thousands of differentially expressed genes were identified with the microarray. Of the 306 differentially expressed genes identified by 454-sequencing, 198 (65%) were also found by microarray. The relationship between the regulation of transcript and protein levels was analyzed by integrating iTRAQ-based proteomics data. Protein expression ratios were determined for 354 genes, of which 148 could be mapped to both microarray and 454-sequencing data. A comparison of the expression ratios from the three technologies revealed that differences in transcript and protein levels across heart and muscle tissues are positively correlated. Conclusion: We show that the reproducibility within cDNA microarray and 454-sequencing is high, but that the agreement across these two technologies is modest. We demonstrate that the regulation of transcript

  12. Video observation to map hand contact and bacterial transmission in operating rooms.

    Science.gov (United States)

    Rowlands, John; Yeager, Mark P; Beach, Michael; Patel, Hetal M; Huysman, Bridget C; Loftus, Randy W

    2014-07-01

    Hand hygiene (HH) is considered a primary intervention to avoid transmission of bacteria in health care settings and to prevent health care-associated infections. Despite efforts to decrease the incidence of health care-associated infections by improving HH, HH compliance rates vary widely depending on the hospital environment. We used intraoperative video observation to map temporal patterns of anesthesia provider hand contact with anesthesia work environment (AWE) surfaces and to assess HH compliance. Serial bacterial cultures of high contact objects were subsequently used to characterize bacterial transmission over time. Using World Health Organization criteria, we found a large number of HH opportunities and a low rate of HH compliance by anesthesia providers (mean, 2.9%). We observed an inverse correlation between provider hand hygiene compliance during induction and emergence from anesthesia (3.2% and 4.1%, respectively) and the magnitude of AWE surface contamination (103 and 147 CFU, respectively) at these time points. We found no correlation between frequency of hand contact with the AWE and bacterial contamination. Compliance with current HH recommendations by anesthesia providers is not feasible. However, there does appear to be a correlation between HH compliance rates and bacterial contamination of the AWE, an observation that should stimulate further work to design new methods for control of bacterial transmission in operating rooms. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  13. Generating information-rich high-throughput experimental materials genomes using functional clustering via multitree genetic programming and information theory.

    Science.gov (United States)

    Suram, Santosh K; Haber, Joel A; Jin, Jian; Gregoire, John M

    2015-04-13

    High-throughput experimental methodologies are capable of synthesizing, screening and characterizing vast arrays of combinatorial material libraries at a very rapid rate. These methodologies strategically employ tiered screening, wherein the number of compositions screened decreases as the complexity, and very often the scientific information obtained from a screening experiment, increases. The algorithm used for down-selecting samples from a higher-throughput to a lower-throughput screening experiment is vital in achieving information-rich experimental materials genomes. The fundamental science of materials discovery lies in the establishment of composition-structure-property relationships, motivating the development of advanced down-selection algorithms that consider the information value of the selected compositions, as opposed to simply selecting the best-performing compositions from a high-throughput experiment. Identification of property fields (composition regions with distinct composition-property relationships) in high-throughput data enables down-selection algorithms to employ advanced selection strategies, such as the selection of representative compositions from each field or of compositions that span the composition space of the highest-performing field. Such strategies would greatly enhance the generation of data-driven discoveries. We introduce an informatics-based clustering of composition-property functional relationships using a combination of information theory and multitree genetic programming concepts for identification of property fields in a composition library. We demonstrate our approach using a complex synthetic composition-property map for a 5 at.% step ternary library consisting of four distinct property fields and finally explore the application of this methodology for capturing relationships between composition and catalytic activity for the oxygen evolution reaction for 5429 catalyst compositions in a

  14. Wide Throttling, High Throughput Hall Thruster for Science and Exploration Missions Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to Topic S3.04 "Propulsion Systems," Busek Co. Inc. will develop a high throughput Hall effect thruster with a nominal peak power of 1-kW and wide...

  15. High-Throughput Approaches to Pinpoint Function within the Noncoding Genome.

    Science.gov (United States)

    Montalbano, Antonino; Canver, Matthew C; Sanjana, Neville E

    2017-10-05

    The clustered regularly interspaced short palindromic repeats (CRISPR)-Cas nuclease system is a powerful tool for genome editing, and its simple programmability has enabled high-throughput genetic and epigenetic studies. These high-throughput approaches offer investigators a toolkit for functional interrogation of not only protein-coding genes but also noncoding DNA. Historically, noncoding DNA has lacked the detailed characterization that has been applied to protein-coding genes in large part because there has not been a robust set of methodologies for perturbing these regions. Although the majority of high-throughput CRISPR screens have focused on the coding genome to date, an increasing number of CRISPR screens targeting noncoding genomic regions continue to emerge. Here, we review high-throughput CRISPR-based approaches to uncover and understand functional elements within the noncoding genome and discuss practical aspects of noncoding library design and screen analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. High-Throughput Industrial Coatings Research at The Dow Chemical Company.

    Science.gov (United States)

    Kuo, Tzu-Chi; Malvadkar, Niranjan A; Drumright, Ray; Cesaretti, Richard; Bishop, Matthew T

    2016-09-12

    At The Dow Chemical Company, high-throughput research is an active area for developing new industrial coatings products. Using the principles of automation (i.e., robotic instruments), parallel processing (i.e., preparing, processing and evaluating samples in parallel) and miniaturization (i.e., reduced sample size), high-throughput tools for synthesizing, formulating and applying coating compositions have been developed at Dow. In addition, high-throughput workflows for measuring various coating properties, such as cure speed, hardness development, scratch resistance, impact toughness, resin compatibility, pot-life and surface defects, among others, have also been developed in-house. These workflows correlate well with the traditional coatings tests, but they do not necessarily mimic those tests. The use of such high-throughput workflows in combination with smart experimental designs allows accelerated discovery and commercialization.

  17. High-throughput phenotyping of multicellular organisms: finding the link between genotype and phenotype

    OpenAIRE

    Sozzani, Rosangela; Benfey, Philip N

    2011-01-01

    High-throughput phenotyping approaches (phenomics) are being combined with genome-wide genetic screens to identify alterations in phenotype that result from gene inactivation. Here we highlight promising technologies for 'phenome-scale' analyses in multicellular organisms.

  18. High-throughput phenotyping of multicellular organisms: finding the link between genotype and phenotype

    Science.gov (United States)

    2011-01-01

    High-throughput phenotyping approaches (phenomics) are being combined with genome-wide genetic screens to identify alterations in phenotype that result from gene inactivation. Here we highlight promising technologies for 'phenome-scale' analyses in multicellular organisms. PMID:21457493

  19. EMPeror: a tool for visualizing high-throughput microbial community data

    National Research Council Canada - National Science Library

    Vázquez-Baeza, Yoshiki; Pirrung, Meg; Gonzalez, Antonio; Knight, Rob

    2013-01-01

    As microbial ecologists take advantage of high-throughput sequencing technologies to describe microbial communities across ever-increasing numbers of samples, new analysis tools are required to relate...

  20. High-throughput system-wide engineering and screening for microbial biotechnology.

    Science.gov (United States)

    Vervoort, Yannick; Linares, Alicia Gutiérrez; Roncoroni, Miguel; Liu, Chengxun; Steensels, Jan; Verstrepen, Kevin J

    2017-08-01

    Genetic engineering and screening of large numbers of cells or populations is a crucial bottleneck in today's systems biology and applied (micro)biology. Instead of using standard methods in bottles, flasks or 96-well plates, scientists are increasingly relying on high-throughput strategies that miniaturize their experiments to the nanoliter and picoliter scale and the single-cell level. In this review, we summarize different high-throughput system-wide genome engineering and screening strategies for microbes. More specifically, we emphasize the use of multiplex automated genome engineering (MAGE) and CRISPR/Cas systems for high-throughput genome engineering and the application of (lab-on-chip) nanoreactors for high-throughput single-cell or population screening. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. An image analysis toolbox for high-throughput C. elegans assays.

    Science.gov (United States)

    Wählby, Carolina; Kamentsky, Lee; Liu, Zihan H; Riklin-Raviv, Tammy; Conery, Annie L; O'Rourke, Eyleen J; Sokolnicki, Katherine L; Visvikis, Orane; Ljosa, Vebjorn; Irazoqui, Javier E; Golland, Polina; Ruvkun, Gary; Ausubel, Frederick M; Carpenter, Anne E

    2012-04-22

    We present a toolbox for high-throughput screening of image-based Caenorhabditis elegans phenotypes. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems. This WormToolbox is available through the open-source CellProfiler project and enables objective scoring of whole-worm high-throughput image-based assays of C. elegans for the study of diverse biological pathways that are relevant to human disease.

  2. Deep Recurrent Neural Network for Mobile Human Activity Recognition with High Throughput

    OpenAIRE

    Inoue, Masaya; Inoue, Sozo; Nishida, Takeshi

    2016-01-01

    In this paper, we propose a method for human activity recognition with high throughput from raw accelerometer data applying a deep recurrent neural network (DRNN), and investigate various architectures and their combinations to find the best parameter values. Here, "high throughput" refers to a short recognition time. We investigated various parameters and architectures of the DRNN using a training dataset of 432 trials with 6 activity classes from 7 people. The maximum recognition ...

  3. High-throughput ion beam analysis at imec

    Science.gov (United States)

    Meersschaut, J.; Vandervorst, W.

    2017-09-01

    We describe the ion beam analysis activities at imec. Rutherford backscattering spectrometry and time of flight-energy (TOF-E) elastic recoil detection analysis are pursued to support the nano-electronics research and development. We outline the experimental set-up and we introduce a new data acquisition software platform. Finally, we illustrate the use of Rutherford backscattering spectrometry to map the thickness of a metallic thin film on a 300 mm Si wafer.

  4. HTSSIP: An R package for analysis of high throughput sequencing data from nucleic acid stable isotope probing (SIP) experiments.

    Science.gov (United States)

    Youngblut, Nicholas D; Barnett, Samuel E; Buckley, Daniel H

    2018-01-01

    Combining high throughput sequencing with stable isotope probing (HTS-SIP) is a powerful method for mapping in situ metabolic processes to thousands of microbial taxa. However, accurately mapping metabolic processes to taxa is complex and challenging. Multiple HTS-SIP data analysis methods have been developed, including high-resolution stable isotope probing (HR-SIP), multi-window high-resolution stable isotope probing (MW-HR-SIP), quantitative stable isotope probing (qSIP), and ΔBD. Currently, there is no publicly available software designed specifically for analyzing HTS-SIP data. To address this shortfall, we have developed the HTSSIP R package, an open-source, cross-platform toolset for conducting HTS-SIP analyses in a straightforward and easily reproducible manner. The HTSSIP package, along with full documentation and examples, is available from CRAN at https://cran.r-project.org/web/packages/HTSSIP/index.html and Github at https://github.com/buckleylab/HTSSIP.
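    Of the methods listed, ΔBD is the simplest to illustrate: a taxon's buoyant-density shift is the difference between its abundance-weighted mean density in the labeled and control gradients. The sketch below is a minimal illustration of that idea, not the HTSSIP implementation, and the fraction densities and abundances are hypothetical:

```python
def weighted_mean_density(densities, rel_abundances):
    """Abundance-weighted mean buoyant density of one taxon across gradient fractions."""
    total = sum(rel_abundances)
    return sum(d * a for d, a in zip(densities, rel_abundances)) / total

def delta_bd(densities, control_abund, labeled_abund):
    """Density shift: positive values suggest isotope incorporation."""
    return (weighted_mean_density(densities, labeled_abund)
            - weighted_mean_density(densities, control_abund))

fractions = [1.70, 1.72, 1.74, 1.76]   # g/ml, hypothetical gradient fractions
control = [0.5, 0.4, 0.1, 0.0]         # taxon centered in light fractions
labeled = [0.1, 0.2, 0.4, 0.3]         # shifted toward heavier fractions
print(round(delta_bd(fractions, control, labeled), 4))  # prints 0.026
```

The HR-SIP and qSIP methods in the package refine this basic signal with statistical testing and copy-number quantification, respectively.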

  5. Lessons from high-throughput protein crystallization screening: 10 years of practical experience

    Science.gov (United States)

    JR, Luft; EH, Snell; GT, DeTitta

    2011-01-01

    Introduction: X-ray crystallography provides the majority of our structural biological knowledge at a molecular level and, in terms of pharmaceutical design, is a valuable tool to accelerate discovery. It is the premier technique in the field, but its usefulness is significantly limited by the need to grow well-diffracting crystals. It is for this reason that high-throughput crystallization has become a key technology that has matured over the past 10 years through the field of structural genomics. Areas covered: The authors describe their experiences in high-throughput crystallization screening in the context of structural genomics and the general biomedical community. They focus on the lessons learnt from the operation of a high-throughput crystallization screening laboratory, which to date has screened over 12,500 biological macromolecules. They also describe the approaches taken to maximize success while minimizing effort. Through this, the authors hope that the reader will gain an insight into the efficient design of a laboratory and protocols to accomplish high-throughput crystallization on a single-user, multiuser-laboratory or industrial scale. Expert opinion: High-throughput crystallization screening is readily available but, despite the power of the crystallographic technique, getting crystals is still not a solved problem. High-throughput approaches can help when used skillfully; however, they still require human input in the detailed analysis and interpretation of results to be more successful. PMID:22646073

  6. Using manifold embedding for assessing and predicting protein interactions from high-throughput experimental data.

    Science.gov (United States)

    You, Zhu-Hong; Lei, Ying-Ke; Gui, Jie; Huang, De-Shuang; Zhou, Xiaobo

    2010-11-01

    High-throughput protein interaction data, with ever-increasing volume, are becoming the foundation of many biological discoveries, and thus high-quality protein-protein interaction (PPI) maps are critical for a deeper understanding of cellular processes. However, the unreliability and paucity of currently available PPI data are key obstacles to subsequent quantitative studies. It is therefore highly desirable to develop an approach to deal with these issues from the computational perspective. Most previous works for assessing and predicting protein interactions either need supporting evidence from multiple information resources or are severely impacted by the sparseness of PPI networks. We developed a robust manifold embedding technique for assessing the reliability of interactions and predicting new interactions, which purely utilizes the topological information of PPI networks and can work on a sparse input protein interactome without requiring additional information types. After transforming a given PPI network into a low-dimensional metric space using manifold embedding based on isometric feature mapping (ISOMAP), the problem of assessing and predicting protein interactions is recast as measuring similarity between points in this metric space. A reliability index, a likelihood indicating the interaction of two proteins, is then assigned to each protein pair in the PPI network based on the similarity between the points in the embedded space. Validation of the proposed method is performed with extensive experiments on densely connected and sparse PPI networks of yeast, respectively. Results demonstrate that the interactions ranked top by our method have high functional homogeneity and localization coherence; in particular, our method is very efficient for large sparse PPI networks on which traditional algorithms fail. Therefore, the proposed algorithm is a much more promising method to detect both false positive and false negative interactions.
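    The first step of an ISOMAP-style embedding, geodesic distance over the network, can be sketched on a toy PPI graph: pairs that are close along the graph end up close in the embedded space and thus receive higher reliability, while distant pairs receive lower reliability. The four-node network below is invented for illustration, and the real method goes on to embed these distances into a low-dimensional metric space:

```python
def geodesic_distances(n, edges):
    """All-pairs shortest paths (Floyd-Warshall) over an unweighted PPI graph,
    the distance matrix that an ISOMAP-style embedding starts from."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        d[u][v] = d[v][u] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Toy network: proteins 0-1-2 form a path, protein 3 hangs off 2.
dist = geodesic_distances(4, [(0, 1), (1, 2), (2, 3)])
print(dist[0][2])  # prints 2: a nearby non-interacting pair, candidate interaction
print(dist[0][3])  # prints 3: a distant pair, ranked as less reliable
```

Because the distances use only network topology, the approach needs no information beyond the interactome itself, which is the point the abstract emphasizes.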

  7. High-throughput two-dimensional root system phenotyping platform facilitates genetic analysis of root growth and development.

    Science.gov (United States)

    Clark, Randy T; Famoso, Adam N; Zhao, Keyan; Shaff, Jon E; Craft, Eric J; Bustamante, Carlos D; McCouch, Susan R; Aneshansley, Daniel J; Kochian, Leon V

    2013-02-01

    High-throughput phenotyping of root systems requires a combination of specialized techniques and adaptable plant growth, root imaging and software tools. A custom phenotyping platform was designed to capture images of whole root systems, and novel software tools were developed to process and analyse these images. The platform and its components are adaptable to a wide range of root phenotyping studies using diverse growth systems (hydroponics, paper pouches, gel and soil) involving several plant species, including, but not limited to, rice, maize, sorghum, tomato and Arabidopsis. The RootReader2D software tool is free and publicly available and was designed with both user-guided and automated features that increase flexibility and enhance efficiency when measuring root growth traits from specific roots or entire root systems during large-scale phenotyping studies. To demonstrate the unique capabilities and high-throughput capacity of this phenotyping platform for studying root systems, genome-wide association studies on rice (Oryza sativa) and maize (Zea mays) root growth were performed and root traits related to aluminium (Al) tolerance were analysed on the parents of the maize nested association mapping (NAM) population. © 2012 Blackwell Publishing Ltd.

  8. CRISPR-Cas9 epigenome editing enables high-throughput screening for functional regulatory elements in the human genome.

    Science.gov (United States)

    Klann, Tyler S; Black, Joshua B; Chellappan, Malathi; Safi, Alexias; Song, Lingyun; Hilton, Isaac B; Crawford, Gregory E; Reddy, Timothy E; Gersbach, Charles A

    2017-06-01

    Large genome-mapping consortia and thousands of genome-wide association studies have identified non-protein-coding elements in the genome as having a central role in various biological processes. However, decoding the functions of the millions of putative regulatory elements discovered in these studies remains challenging. CRISPR-Cas9-based epigenome editing technologies have enabled precise perturbation of the activity of specific regulatory elements. Here we describe CRISPR-Cas9-based epigenomic regulatory element screening (CERES) for improved high-throughput screening of regulatory element activity in the native genomic context. Using dCas9KRAB repressor and dCas9p300 activator constructs and lentiviral single guide RNA libraries to target DNase I hypersensitive sites surrounding a gene of interest, we carried out both loss- and gain-of-function screens to identify regulatory elements for the β-globin and HER2 loci in human cells. CERES readily identified known and previously unidentified regulatory elements, some of which were dependent on cell type or direction of perturbation. This technology allows the high-throughput functional annotation of putative regulatory elements in their native chromosomal context.
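    The readout of screens like this generally reduces to comparing guide abundance between selected and unselected populations; the sketch below is a generic enrichment calculation, not the CERES analysis pipeline, and the guide names, counts, and pseudocount are invented for illustration:

```python
from math import log2

def guide_log2fc(counts_selected, counts_reference, pseudocount=1.0):
    """Per-guide log2 fold-change after normalizing each library to
    relative abundance; a small pseudocount avoids division by zero."""
    tot_s = sum(counts_selected.values())
    tot_r = sum(counts_reference.values())
    return {g: log2(((counts_selected[g] + pseudocount) / tot_s)
                    / ((counts_reference[g] + pseudocount) / tot_r))
            for g in counts_selected}

selected = {"sg_DHS1": 800, "sg_DHS2": 50, "sg_ctrl": 150}
reference = {"sg_DHS1": 200, "sg_DHS2": 210, "sg_ctrl": 180}
fc = guide_log2fc(selected, reference)
print(max(fc, key=fc.get))  # prints sg_DHS1: the guide enriched after selection
```

A guide targeting a functional regulatory element shifts the phenotype of its cells, so its abundance changes under selection; enriched or depleted guides point back to the element they target.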

  9. High-throughput characterization of stresses in thin film materials libraries using Si cantilever array wafers and digital holographic microscopy.

    Science.gov (United States)

    Lai, Y W; Hamann, S; Ehmann, M; Ludwig, A

    2011-06-01

    We report the development of an advanced high-throughput stress characterization method for thin film materials libraries sputter-deposited on micro-machined cantilever arrays consisting of around 1500 cantilevers on 4-inch silicon-on-insulator wafers. A low-cost custom-designed digital holographic microscope (DHM) is employed to simultaneously monitor the thin film thickness, the surface topography and the curvature of each of the cantilevers before and after deposition. The variation in stress state across the thin film materials library is then calculated by Stoney's equation based on the obtained radii of curvature of the cantilevers and film thicknesses. DHM with nanometer-scale out-of-plane resolution allows stress measurements in a wide range, at least from several MPa to several GPa. By using an automatic x-y translation stage, the local stresses within a 4-inch materials library are mapped with high accuracy within 10 min. The speed of measurement is greatly improved compared with the prior laser scanning approach, which needed more than an hour of measuring time. A high-throughput stress measurement of an as-deposited Fe-Pd-W materials library was evaluated for demonstration. The fast characterization method is expected to accelerate the development of (functional) thin films, e.g., (magnetic) shape memory materials, whose functionality is greatly stress dependent. © 2011 American Institute of Physics.
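    Stoney's equation, which the method uses to convert a cantilever's curvature change into film stress, is easy to sketch. The material constants and geometry below are generic illustrative values for a silicon cantilever, not numbers from the paper:

```python
def stoney_stress(E_s, nu_s, t_s, t_f, R_before, R_after):
    """Film stress (Pa) from substrate curvature change via Stoney's equation:
    sigma = E_s * t_s^2 / (6 * (1 - nu_s) * t_f) * (1/R_after - 1/R_before)."""
    biaxial_modulus = E_s / (1 - nu_s)            # substrate biaxial modulus
    curvature_change = 1.0 / R_after - 1.0 / R_before
    return biaxial_modulus * t_s**2 * curvature_change / (6.0 * t_f)

# Generic numbers: Si substrate (E = 169 GPa, nu = 0.27), 10 um thick cantilever,
# 200 nm film, curvature going from flat (R = infinity) to R = 0.5 m.
sigma = stoney_stress(169e9, 0.27, 10e-6, 200e-9, float("inf"), 0.5)
print(f"{sigma / 1e6:.0f} MPa")  # prints 39 MPa
```

Because stress scales with the square of substrate thickness and inversely with film thickness, the thin cantilevers used here make even modest curvature changes easy to resolve, which is what gives the method its MPa-to-GPa range.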

  10. High-throughput RNA structure probing reveals critical folding events during early 60S ribosome assembly in yeast.

    Science.gov (United States)

    Burlacu, Elena; Lackmann, Fredrik; Aguilar, Lisbeth-Carolina; Belikov, Sergey; Nues, Rob van; Trahan, Christian; Hector, Ralph D; Dominelli-Whiteley, Nicholas; Cockroft, Scott L; Wieslander, Lars; Oeffinger, Marlene; Granneman, Sander

    2017-09-28

    While the protein composition of various yeast 60S ribosomal subunit assembly intermediates has been studied in detail, little is known about ribosomal RNA (rRNA) structural rearrangements that take place during early 60S assembly steps. Using a high-throughput RNA structure probing method, we provide nucleotide resolution insights into rRNA structural rearrangements during nucleolar 60S assembly. Our results suggest that many rRNA-folding steps, such as folding of 5.8S rRNA, occur at a very specific stage of assembly, and propose that downstream nuclear assembly events can only continue once 5.8S folding has been completed. Our maps of nucleotide flexibility enable making predictions about the establishment of protein-rRNA interactions, providing intriguing insights into the temporal order of protein-rRNA as well as long-range inter-domain rRNA interactions. These data argue that many distant domains in the rRNA can assemble simultaneously during early 60S assembly and underscore the enormous complexity of 60S synthesis. Ribosome biogenesis is a dynamic process that involves the ordered assembly of ribosomal proteins and numerous RNA structural rearrangements. Here the authors apply ChemModSeq, a high-throughput RNA structure probing method, to quantitatively measure changes in RNA flexibility during the nucleolar stages of 60S assembly in yeast.

  11. Deglutition-Induced Atrial Tachycardia Localised with Contact Force-Guided Activation Mapping.

    Science.gov (United States)

    McLellan, Alex J A; Bates, Matthew G D; Young, Shane D F; Sparks, Paul B

    2016-11-01

    Deglutition-induced atrial tachycardia is a rare arrhythmia with a poor response to medical therapy. Electrophysiological study is challenging due to the dependence of induction on swallowing. We present a novel approach to the management of deglutition-induced atrial tachycardia arising from the right superior pulmonary vein. Use of minimal conscious sedation and repeated swallow-challenge inductions, together with contact force-guided mapping, were key determinants of success. We review published cases, discussing potential mechanisms including oesophageal distension and neural reflexes. Copyright © 2016 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.

  12. LightAssembler: fast and memory-efficient assembly algorithm for high-throughput sequencing reads.

    Science.gov (United States)

    El-Metwally, Sara; Zakaria, Magdi; Hamza, Taher

    2016-11-01

    The deluge of current sequenced data has exceeded Moore's Law, more than doubling every 2 years since the next-generation sequencing (NGS) technologies were invented. Accordingly, we will be able to generate more and more data with high speed at fixed cost, but lack the computational resources to store, process and analyze it. With error-prone high-throughput NGS reads and genomic repeats, the assembly graph contains a massive number of redundant nodes and branching edges. Most assembly pipelines require this large graph to reside in memory to start their workflows, which is intractable for mammalian genomes. Resource-efficient genome assemblers combine both the power of advanced computing techniques and innovative data structures to encode the assembly graph efficiently in computer memory. LightAssembler is a lightweight assembly algorithm designed to be executed on a desktop machine. It uses a pair of cache-oblivious Bloom filters, one holding a uniform sample of [Formula: see text]-spaced sequenced [Formula: see text]-mers and the other holding [Formula: see text]-mers classified as likely correct, using a simple statistical test. LightAssembler contains a light implementation of the graph traversal and simplification modules that achieves comparable assembly accuracy and contiguity to other competing tools. Our method reduces the memory usage by [Formula: see text] compared to the resource-efficient assemblers using benchmark datasets from the GAGE and Assemblathon projects. While LightAssembler can be considered a gap-based sequence assembler, different gap sizes result in an almost constant assembly size and genome coverage. Availability: https://github.com/SaraEl-Metwally/LightAssembler. Contact: sarah_almetwally4@mans.edu.eg. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
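
    The core idea of holding k-mers in Bloom filters can be illustrated with a minimal membership sketch. This is not LightAssembler's cache-oblivious implementation, just a plain Bloom filter storing sampled k-mers; class and function names are illustrative:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m bits, k hash functions derived from SHA-256."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # May report false positives, never false negatives.
        return all(self.bits[p] for p in self._positions(item))

def kmers(read, k):
    """All k-length substrings of a read."""
    return (read[i:i + k] for i in range(len(read) - k + 1))

# Populate a filter with the 5-mers of one read.
sampled = BloomFilter(m=10_000, k=3)
for kmer in kmers("ACGTACGTGGTTAACC", 5):
    sampled.add(kmer)
```

    A second filter, populated only with k-mers that pass a frequency test, would then play the role of LightAssembler's "likely correct" set; the bit array is the reason memory use stays small even for large read sets.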

  13. An Updated Protocol for High Throughput Plant Tissue Sectioning

    Directory of Open Access Journals (Sweden)

    Jonathan A. Atkinson

    2017-10-01

    Quantification of the tissue and cellular structure of plant material is essential for the study of a variety of plant sciences applications. Currently, many methods for sectioning plant material are either low throughput or involve free-hand sectioning, which requires a significant amount of practice. Here, we present an updated method to provide rapid and high-quality cross sections, primarily of root tissue, but which can also be readily applied to other tissues such as leaves or stems. To increase the throughput of traditional agarose embedding and sectioning, custom-designed 3D-printed molds were utilized to embed 5–15 roots in a block for sectioning in a single cut. A single fluorescent stain in combination with laser scanning confocal microscopy was used to obtain high-quality images of thick sections. The provided CAD files allow production of the embedding molds described here from a number of online 3D printing services. Although originally developed for roots, this method provides rapid, high-quality cross sections of many plant tissue types, making it suitable for use in forward genetic screens for differences in specific cell structures or developmental changes. To demonstrate the utility of the technique, the two parent lines of the wheat (Triticum aestivum) Chinese Spring × Paragon doubled haploid mapping population were phenotyped for root anatomical differences. Significant differences in adventitious root cross-section area, stele area, xylem, phloem, metaxylem, and cortical cell file count were found.

  14. eRNA: a graphic user interface-based tool optimized for large data analysis from high-throughput RNA sequencing.

    Science.gov (United States)

    Yuan, Tiezheng; Huang, Xiaoyi; Dittmar, Rachel L; Du, Meijun; Kohli, Manish; Boardman, Lisa; Thibodeau, Stephen N; Wang, Liang

    2014-03-05

    RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high-throughput sequencers. We developed a standalone tool with graphic user interface (GUI)-based analytic modules, known as eRNA. The capacity of performing parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module "miRNA identification" includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module "mRNA identification" includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module "Target screening" provides expression profiling analyses and graphic visualization. The module "Self-testing" offers the directory setups, sample management, and a check for third-party package dependency. Integration of other GUIs including Bowtie, miRDeep2, and miRspring extends the program's functionality. eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory.
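
    Adapter removal of the kind wrapped by eRNA's GUI can be illustrated with a minimal trimmer. This sketch is not eRNA's actual implementation; it removes a 3' adapter even when the adapter runs off the end of the read:

```python
def trim_adapter(read, adapter, min_overlap=3):
    """Remove a 3' adapter, allowing partial matches at the read end.

    Scans for the leftmost position where the remainder of the read
    matches a prefix of the adapter over at least min_overlap bases,
    then returns the read truncated at that position.
    """
    for i in range(len(read) - min_overlap + 1):
        tail = read[i:]
        n = min(len(tail), len(adapter))
        if tail[:n] == adapter[:n]:
            return read[:i]
    return read
```

    Real trimmers (e.g. those behind eRNA's raw-data module) additionally tolerate mismatches and quality-trim read ends; this sketch shows only the exact-match core.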

  15. High-throughput phenotyping and genomic selection: the frontiers of crop breeding converge.

    Science.gov (United States)

    Cabrera-Bosquet, Llorenç; Crossa, José; von Zitzewitz, Jarislav; Serret, María Dolors; Araus, José Luis

    2012-05-01

    Genomic selection (GS) and high-throughput phenotyping have recently been captivating the interest of the crop breeding community from both the public and private sectors world-wide. Both approaches promise to revolutionize the prediction of complex traits, including growth, yield and adaptation to stress. Whereas high-throughput phenotyping may help to improve understanding of crop physiology, the most powerful techniques for high-throughput field phenotyping are empirical rather than analytical, and in that respect comparable to genomic selection. Despite the fact that the two methodological approaches represent the extremes of what is understood as the breeding process (phenotype versus genome), they both consider the targeted traits (e.g. grain yield, growth, phenology, plant adaptation to stress) as a black box instead of dissecting them as a set of secondary traits (i.e. physiological) putatively related to the target trait. Both GS and high-throughput phenotyping have in common their empirical approach enabling breeders to use genome profile or phenotype without understanding the underlying biology. This short review discusses the main aspects of both approaches and focuses on the case of genomic selection of maize flowering traits and near-infrared spectroscopy (NIRS) and plant spectral reflectance as high-throughput field phenotyping methods for complex traits such as crop growth and yield. © 2012 Institute of Botany, Chinese Academy of Sciences.

  16. Protocol: A high-throughput DNA extraction system suitable for conifers.

    Science.gov (United States)

    Bashalkhanov, Stanislav; Rajora, Om P

    2008-08-01

    High throughput DNA isolation from plants is a major bottleneck for most studies requiring large sample sizes. A variety of protocols have been developed for DNA isolation from plants. However, many species, including conifers, have high contents of secondary metabolites that interfere with the extraction process or the subsequent analysis steps. Here, we describe a procedure for high-throughput DNA isolation from conifers. We have developed a high-throughput DNA extraction protocol for conifers using an automated liquid handler and modifying the Qiagen MagAttract Plant Kit protocol. The modifications involve change to the buffer system and improving the protocol so that it almost doubles the number of samples processed per kit, which significantly reduces the overall costs. We describe two versions of the protocol: one for medium-throughput (MTP) and another for high-throughput (HTP) DNA isolation. The HTP version works from start to end in the industry-standard 96-well format, while the MTP version provides higher DNA yields per sample processed. We have successfully used the protocol for DNA extraction and genotyping of thousands of individuals of several spruce and a pine species. A high-throughput system for DNA extraction from conifer needles and seeds has been developed and validated. The quality of the isolated DNA was comparable with that obtained from two commonly used methods: the silica-spin column and the classic CTAB protocol. Our protocol provides a fully automatable and cost effective solution for processing large numbers of conifer samples.

  17. Embedded image enhancement for high-throughput cameras

    Science.gov (United States)

    Geerts, Stan J. C.; Cornelissen, Dion; de With, Peter H. N.

    2014-03-01

    This paper presents image enhancement for a novel Ultra-High-Definition (UHD) video camera offering 4K images and higher. Conventional image enhancement techniques need to be reconsidered for the high-resolution images and the low-light sensitivity of the new sensor. We study two image enhancement functions and evaluate and optimize the algorithms for embedded implementation in programmable logic (FPGA). The enhancement study involves high-quality Auto White Balancing (AWB) and Local Contrast Enhancement (LCE). We have compared multiple algorithms from literature, both with objective and subjective metrics. In order to objectively compare Local Contrast (LC), an existing LC metric is modified for LC measurement in UHD images. For AWB, we have found that color histogram stretching offers a subjective high image quality and it is among the algorithms with the lowest complexity, while giving only a small balancing error. We impose a color-to-color gain constraint, which improves robustness of low-light images. For local contrast enhancement, a combination of contrast preserving gamma and single-scale Retinex is selected. A modified bilateral filter is designed to prevent halo artifacts, while significantly reducing the complexity and simultaneously preserving quality. We show that by cascading contrast preserving gamma and single-scale Retinex, the visibility of details is improved towards the level appropriate for high-quality surveillance applications. The user is offered control over the amount of enhancement. Also, we discuss the mapping of those functions on a heterogeneous platform to come to an effective implementation while preserving quality and robustness.
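
    The local contrast enhancement step can be sketched in outline. The following single-scale Retinex with a separable Gaussian illumination estimate is an illustrative stand-in, not the paper's implementation: it omits the contrast-preserving gamma stage and the modified bilateral filter the authors use to suppress halo artifacts:

```python
import numpy as np

def gaussian_kernel(sigma):
    """1D Gaussian kernel truncated at 3 sigma, normalized to sum 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gaussian_kernel(sigma)
    blurred = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, blurred, k, mode="same")

def single_scale_retinex(img, sigma=20.0, eps=1e-6):
    """log(image) - log(estimated illumination): boosts local detail."""
    img = img.astype(float) + eps
    return np.log(img) - np.log(gaussian_blur(img, sigma) + eps)
```

    In a production pipeline targeting 4K streams, this per-pixel log-ratio form maps naturally onto FPGA logic, which is one reason Retinex-style operators suit embedded implementation.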

  18. High-throughput phenotyping to detect drought tolerance QTL in wild barley introgression lines

    KAUST Repository

    Honsdorf, Nora

    2014-05-13

    Drought is one of the most severe stresses, endangering crop yields worldwide. In order to select drought tolerant genotypes, access to exotic germplasm and efficient phenotyping protocols are needed. In this study the high-throughput phenotyping platform "The Plant Accelerator", Adelaide, Australia, was used to screen a set of 47 juvenile (six week old) wild barley introgression lines (S42ILs) for drought stress responses. The kinetics of growth development was evaluated under early drought stress and well watered treatments. High correlation (r = 0.98) between image based biomass estimates and actual biomass was demonstrated, and the suitability of the system to accurately and non-destructively estimate biomass was validated. Subsequently, quantitative trait loci (QTL) were located, which contributed to the genetic control of growth under drought stress. In total, 44 QTL for eleven out of 14 investigated traits were mapped, which for example controlled growth rate and water use efficiency. The correspondence of those QTL with QTL previously identified in field trials is shown. For instance, six out of eight QTL controlling plant height were also found in previous field and glasshouse studies with the same introgression lines. This indicates that phenotyping juvenile plants may assist in predicting adult plant performance. In addition, favorable wild barley alleles for growth and biomass parameters were detected, for instance, a QTL that increased biomass by approximately 36%. In particular, introgression line S42IL-121 revealed improved growth under drought stress compared to the control Scarlett. The introgression line showed a similar behavior in previous field experiments, indicating that S42IL-121 may be an attractive donor for breeding of drought tolerant barley cultivars. © 2014 Honsdorf et al.

  19. High-throughput phenotyping to detect drought tolerance QTL in wild barley introgression lines.

    Directory of Open Access Journals (Sweden)

    Nora Honsdorf

    Drought is one of the most severe stresses, endangering crop yields worldwide. In order to select drought tolerant genotypes, access to exotic germplasm and efficient phenotyping protocols are needed. In this study the high-throughput phenotyping platform "The Plant Accelerator", Adelaide, Australia, was used to screen a set of 47 juvenile (six week old) wild barley introgression lines (S42ILs) for drought stress responses. The kinetics of growth development was evaluated under early drought stress and well watered treatments. High correlation (r = 0.98) between image based biomass estimates and actual biomass was demonstrated, and the suitability of the system to accurately and non-destructively estimate biomass was validated. Subsequently, quantitative trait loci (QTL) were located, which contributed to the genetic control of growth under drought stress. In total, 44 QTL for eleven out of 14 investigated traits were mapped, which for example controlled growth rate and water use efficiency. The correspondence of those QTL with QTL previously identified in field trials is shown. For instance, six out of eight QTL controlling plant height were also found in previous field and glasshouse studies with the same introgression lines. This indicates that phenotyping juvenile plants may assist in predicting adult plant performance. In addition, favorable wild barley alleles for growth and biomass parameters were detected, for instance, a QTL that increased biomass by approximately 36%. In particular, introgression line S42IL-121 revealed improved growth under drought stress compared to the control Scarlett. The introgression line showed a similar behavior in previous field experiments, indicating that S42IL-121 may be an attractive donor for breeding of drought tolerant barley cultivars.

  20. Use of flow cytometry for high-throughput cell population estimates in fixed brain tissue

    Directory of Open Access Journals (Sweden)

    Nicole A Young

    2012-07-01

    The numbers and types of cells in an area of cortex define its function. Therefore it is essential to characterize the numbers and distributions of total cells in areas of the cortex, as well as to identify numbers of subclasses of neurons and glial cells. To date, the large size of the primate brain and the lack of innovation in cell counting methods have been a roadblock to obtaining high-resolution maps of cell and neuron density across the cortex in humans and non-human primates. Stereological counting methods and the isotropic fractionator are valuable tools for estimating cell numbers, but are better suited to smaller, well-defined brain structures or to cortex as a whole. In the present study, we have extended our flow-cytometry based counting method, the flow fractionator (Collins et al., 2010a), to include high-throughput total cell population estimates in homogenized cortical samples. We demonstrate that our method produces consistent, accurate and repeatable cell estimates quickly. The estimates we report are in excellent agreement with estimates for the same samples obtained using a Neubauer chamber and a fluorescence microscope. We show that our flow cytometry-based method for total cell estimation in homogenized brain tissue is more efficient and more precise than manual counting methods. The addition of automated nuclei counting to our flow fractionator method allows for a fully automated, rapid characterization of total cells and neuronal and non-neuronal populations in human and non-human primate brains, providing valuable data to further our understanding of the functional organization of normal, aging and diseased brains.

  1. A new pooling strategy for high-throughput screening: the Shifted Transversal Design

    Directory of Open Access Journals (Sweden)

    Thierry-Mieg Nicolas

    2006-01-01

    Background: In binary high-throughput screening projects where the goal is the identification of low-frequency events, beyond the obvious issue of efficiency, false positives and false negatives are a major concern. Pooling constitutes a natural solution: it reduces the number of tests, while providing critical duplication of the individual experiments, thereby correcting for experimental noise. The main difficulty consists in designing the pools in a manner that is both efficient and robust: few pools should be necessary to correct the errors and identify the positives, yet the experiment should not be too vulnerable to biological shakiness. For example, some information should still be obtained even if there are slightly more positives or errors than expected. This is known as the group testing problem, or pooling problem. Results: In this paper, we present a new non-adaptive combinatorial pooling design: the "shifted transversal design" (STD). It relies on arithmetics, and rests on two intuitive ideas: minimizing the co-occurrence of objects, and constructing pools of constant-sized intersections. We prove that it allows unambiguous decoding of noisy experimental observations. This design is highly flexible, and can be tailored to function robustly in a wide range of experimental settings (i.e., numbers of objects, fractions of positives, and expected error-rates). Furthermore, we show that our design compares favorably, in terms of efficiency, to the previously described non-adaptive combinatorial pooling designs. Conclusion: This method is currently being validated by field-testing in the context of yeast two-hybrid interactome mapping, in collaboration with Marc Vidal's lab at the Dana Farber Cancer Institute. Many similar projects could benefit from using the Shifted Transversal Design.
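
    The layer construction at the heart of STD can be sketched as follows, assuming a prime number of pools per layer q. This simplified version (function name mine) builds only the first q layers, enough to illustrate the key guarantee that any two distinct objects co-occur in very few pools:

```python
def std_layers(n, q, num_layers):
    """First num_layers layers of a Shifted Transversal Design sketch.

    Each layer partitions n objects into q pools; object j joins pool
    sum_c digit_c(j) * i**c (mod q) in layer i, where digit_c(j) are
    the base-q digits of j. Because the difference of two distinct
    digit polynomials has at most Gamma roots mod a prime q, two
    objects share at most Gamma pools (Gamma = highest digit index).
    """
    gamma = 0
    while q ** (gamma + 1) < n:
        gamma += 1
    assert num_layers <= q, "this sketch covers only layers 0..q-1"
    layers = []
    for i in range(num_layers):
        pools = [[] for _ in range(q)]
        for j in range(n):
            digits, x = [], j
            for _ in range(gamma + 1):
                digits.append(x % q)
                x //= q
            s = sum(d * i ** c for c, d in enumerate(digits)) % q
            pools[s].append(j)
        layers.append(pools)
    return layers
```

    For example, 20 objects with q = 5 give Gamma = 1, so each pair of objects shares at most one pool across the layers, which is what allows noisy positives to be decoded unambiguously.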

  2. High-throughput phenotyping to detect drought tolerance QTL in wild barley introgression lines.

    Science.gov (United States)

    Honsdorf, Nora; March, Timothy John; Berger, Bettina; Tester, Mark; Pillen, Klaus

    2014-01-01

    Drought is one of the most severe stresses, endangering crop yields worldwide. In order to select drought tolerant genotypes, access to exotic germplasm and efficient phenotyping protocols are needed. In this study the high-throughput phenotyping platform "The Plant Accelerator", Adelaide, Australia, was used to screen a set of 47 juvenile (six week old) wild barley introgression lines (S42ILs) for drought stress responses. The kinetics of growth development was evaluated under early drought stress and well watered treatments. High correlation (r=0.98) between image based biomass estimates and actual biomass was demonstrated, and the suitability of the system to accurately and non-destructively estimate biomass was validated. Subsequently, quantitative trait loci (QTL) were located, which contributed to the genetic control of growth under drought stress. In total, 44 QTL for eleven out of 14 investigated traits were mapped, which for example controlled growth rate and water use efficiency. The correspondence of those QTL with QTL previously identified in field trials is shown. For instance, six out of eight QTL controlling plant height were also found in previous field and glasshouse studies with the same introgression lines. This indicates that phenotyping juvenile plants may assist in predicting adult plant performance. In addition, favorable wild barley alleles for growth and biomass parameters were detected, for instance, a QTL that increased biomass by approximately 36%. In particular, introgression line S42IL-121 revealed improved growth under drought stress compared to the control Scarlett. The introgression line showed a similar behavior in previous field experiments, indicating that S42IL-121 may be an attractive donor for breeding of drought tolerant barley cultivars.

  3. High-Throughput Sequencing of Three Lemnoideae (Duckweeds) Chloroplast Genomes from Total DNA

    Science.gov (United States)

    Wang, Wenqin; Messing, Joachim

    2011-01-01

    Background Chloroplast genomes provide a wealth of information for evolutionary and population genetic studies. Chloroplasts play a particularly important role in the adaptation of aquatic plants because they float on water and their major surface is exposed continuously to sunlight. The subfamily of Lemnoideae represents such a collection of aquatic species that because of photosynthesis represents one of the fastest growing plant species on earth. Methods We sequenced the chloroplast genomes from three different genera of Lemnoideae, Spirodela polyrhiza, Wolffiella lingulata and Wolffia australiana by high-throughput DNA sequencing of genomic DNA using the SOLiD platform. Unfractionated total DNA contains high copies of plastid DNA so that sequences from the nucleus and mitochondria can easily be filtered computationally. Remaining sequence reads were assembled into contiguous sequences (contigs) using SOLiD software tools. Contigs were mapped to a reference genome of Lemna minor and gaps, selected by PCR, were sequenced on the ABI3730xl platform. Conclusions This combinatorial approach yielded whole genomic contiguous sequences in a cost-effective manner. Over 1,000-fold coverage of the chloroplast genome from total DNA was reached by the SOLiD platform in a single spot on a quadrant slide without purification. Comparative analysis indicated that the chloroplast genome was conserved in gene number and organization with respect to the reference genome of L. minor. However, higher nucleotide substitution, abundant deletions and insertions occurred in non-coding regions of these genomes, indicating greater genomic dynamics than expected from the comparison of other related species in the Pooideae. Noticeably, there was no bias of transitions over transversions in Lemnoideae. The data should have immediate applications in evolutionary biology and plant taxonomy with increased resolution and statistical power. PMID:21931804

  4. High-throughput sequencing of three Lemnoideae (duckweeds) chloroplast genomes from total DNA.

    Directory of Open Access Journals (Sweden)

    Wenqin Wang

    BACKGROUND: Chloroplast genomes provide a wealth of information for evolutionary and population genetic studies. Chloroplasts play a particularly important role in the adaptation of aquatic plants because they float on water and their major surface is exposed continuously to sunlight. The subfamily of Lemnoideae represents such a collection of aquatic species that because of photosynthesis represents one of the fastest growing plant species on earth. METHODS: We sequenced the chloroplast genomes from three different genera of Lemnoideae, Spirodela polyrhiza, Wolffiella lingulata and Wolffia australiana by high-throughput DNA sequencing of genomic DNA using the SOLiD platform. Unfractionated total DNA contains high copies of plastid DNA so that sequences from the nucleus and mitochondria can easily be filtered computationally. Remaining sequence reads were assembled into contiguous sequences (contigs) using SOLiD software tools. Contigs were mapped to a reference genome of Lemna minor and gaps, selected by PCR, were sequenced on the ABI3730xl platform. CONCLUSIONS: This combinatorial approach yielded whole genomic contiguous sequences in a cost-effective manner. Over 1,000-fold coverage of the chloroplast genome from total DNA was reached by the SOLiD platform in a single spot on a quadrant slide without purification. Comparative analysis indicated that the chloroplast genome was conserved in gene number and organization with respect to the reference genome of L. minor. However, higher nucleotide substitution, abundant deletions and insertions occurred in non-coding regions of these genomes, indicating greater genomic dynamics than expected from the comparison of other related species in the Pooideae. Noticeably, there was no bias of transitions over transversions in Lemnoideae. The data should have immediate applications in evolutionary biology and plant taxonomy with increased resolution and statistical power.

  5. NucTools: analysis of chromatin feature occupancy profiles from high-throughput sequencing data.

    Science.gov (United States)

    Vainshtein, Yevhen; Rippe, Karsten; Teif, Vladimir B

    2017-02-14

    Biomedical applications of high-throughput sequencing methods generate a vast amount of data in which numerous chromatin features are mapped along the genome. The results are frequently analysed by creating binary data sets that link the presence/absence of a given feature to specific genomic loci. However, the nucleosome occupancy or chromatin accessibility landscape is essentially continuous. It is currently a challenge in the field to cope with continuous distributions of deep sequencing chromatin readouts and to integrate the different types of discrete chromatin features to reveal linkages between them. Here we introduce the NucTools suite of Perl scripts as well as MATLAB- and R-based visualization programs for a nucleosome-centred downstream analysis of deep sequencing data. NucTools accounts for the continuous distribution of nucleosome occupancy. It allows calculations of nucleosome occupancy profiles averaged over several replicates, comparisons of nucleosome occupancy landscapes between different experimental conditions, and the estimation of the changes of integral chromatin properties such as the nucleosome repeat length. Furthermore, NucTools facilitates the annotation of nucleosome occupancy with other chromatin features like binding of transcription factors or architectural proteins, and epigenetic marks like histone modifications or DNA methylation. The applications of NucTools are demonstrated for the comparison of several datasets for nucleosome occupancy in mouse embryonic stem cells (ESCs) and mouse embryonic fibroblasts (MEFs). The typical workflows of data processing and integrative analysis with NucTools reveal information on the interplay of nucleosome positioning with other features such as for example binding of a transcription factor CTCF, regions with stable and unstable nucleosomes, and domains of large organized chromatin K9me2 modifications (LOCKs). As potential limitations and problems we discuss how inter-replicate variability of
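
    An occupancy profile of the kind NucTools averages can be sketched as per-base read coverage. This minimal Python analogue (NucTools itself is a Perl suite; function names are mine) computes a profile from read intervals and a mean/variability summary across replicates:

```python
import numpy as np

def occupancy_profile(read_intervals, length):
    """Per-base coverage from (start, end) read intervals (end exclusive)."""
    profile = np.zeros(length)
    for start, end in read_intervals:
        profile[start:end] += 1
    return profile

def average_profiles(replicates):
    """Mean and standard deviation across replicate occupancy profiles.

    The standard deviation track exposes the inter-replicate
    variability that a binary presence/absence call would hide.
    """
    stacked = np.vstack(replicates)
    return stacked.mean(axis=0), stacked.std(axis=0)
```

    Comparing two such averaged tracks (e.g. ESCs vs MEFs) at matched loci is the continuous-signal analogue of the binary overlap analyses the abstract argues against.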

  6. The sva package for removing batch effects and other unwanted variation in high-throughput experiments.

    Science.gov (United States)

    Leek, Jeffrey T; Johnson, W Evan; Parker, Hilary S; Jaffe, Andrew E; Storey, John D

    2012-03-15

    Heterogeneity and latent variables are now widely recognized as major sources of bias and variability in high-throughput experiments. The most well-known sources of latent variation in genomic experiments are batch effects, which arise when samples are processed on different days, in different groups or by different people. However, there are also a large number of other variables that may have a major impact on high-throughput measurements. Here we describe the sva package for identifying, estimating and removing unwanted sources of variation in high-throughput experiments. The sva package supports surrogate variable estimation with the sva function, direct adjustment for known batch effects with the ComBat function and adjustment for batch and latent variables in prediction problems with the fsva function.
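
    sva itself is an R package. As a rough Python illustration of what a batch adjustment does, the following removes per-batch mean shifts; this is a location-only simplification of ComBat's empirical-Bayes model (which also shrinks per-batch scale estimates), and the function name is mine:

```python
import numpy as np

def center_batches(X, batches):
    """Subtract each batch's mean expression, feature-wise.

    X: samples x features matrix; batches: one label per sample.
    A deliberately simplified stand-in for ComBat-style adjustment.
    """
    X = np.asarray(X, dtype=float).copy()
    batches = np.asarray(batches)
    for b in np.unique(batches):
        idx = batches == b
        X[idx] -= X[idx].mean(axis=0)
    return X
```

    After this adjustment every batch has zero mean per feature, so downstream comparisons no longer pick up processing-day offsets; surrogate variable analysis goes further by estimating unknown batch-like factors directly from the data.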

  7. Recent progress using high-throughput sequencing technologies in plant molecular breeding.

    Science.gov (United States)

    Gao, Qiang; Yue, Guidong; Li, Wenqi; Wang, Junyi; Xu, Jiaohui; Yin, Ye

    2012-04-01

    High-throughput sequencing is a revolutionary technological innovation in DNA sequencing. This technology has an ultra-low cost per base of sequencing and an overwhelmingly high data output. High-throughput sequencing has brought novel research methods and solutions to the research fields of genomics and post-genomics. Furthermore, this technology is leading to a new molecular breeding revolution that has landmark significance for scientific research and enables us to launch multi-level, multi-faceted, and multi-extent studies in the fields of crop genetics, genomics, and crop breeding. In this paper, we review progress in the application of high-throughput sequencing technologies to plant molecular breeding studies. © 2012 Institute of Botany, Chinese Academy of Sciences.

  8. Plant phenomics and high-throughput phenotyping: accelerating rice functional genomics using multidisciplinary technologies.

    Science.gov (United States)

    Yang, Wanneng; Duan, Lingfeng; Chen, Guoxing; Xiong, Lizhong; Liu, Qian

    2013-05-01

    The functional analysis of the rice genome has entered into a high-throughput stage, and a project named RICE2020 has been proposed to determine the function of every gene in the rice genome by the year 2020. However, as compared with the robustness of genetic techniques, the evaluation of rice phenotypic traits is still performed manually, and the process is subjective, inefficient, destructive and error-prone. To overcome these limitations and help rice phenomics more closely parallel rice genomics, reliable, automatic, multifunctional, and high-throughput phenotyping platforms should be developed. In this article, we discuss the key plant phenotyping technologies, particularly photonics-based technologies, and then introduce their current applications in rice (wheat or barley) phenomics. We also note the major challenges in rice phenomics and are confident that these reliable high-throughput phenotyping tools will give plant scientists new perspectives on the information encoded in the rice genome. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Convergence of topological domain boundaries, insulators, and polytene interbands revealed by high-resolution mapping of chromatin contacts in the early Drosophila melanogaster embryo

    Science.gov (United States)

    Stadler, Michael R; Haines, Jenna E

    2017-01-01

    High-throughput assays of three-dimensional interactions of chromosomes have shed considerable light on the structure of animal chromatin. Despite this progress, the precise physical nature of observed structures and the forces that govern their establishment remain poorly understood. Here we present high-resolution Hi-C data from early Drosophila embryos. We demonstrate that boundaries between topological domains of various sizes map to DNA elements that resemble classical insulator elements: short genomic regions sensitive to DNase digestion that are strongly bound by known insulator proteins and are frequently located between divergent promoters. Further, we show a striking correspondence between these elements and the locations of mapped polytene interband regions. We believe it is likely that this relationship between insulators, topological boundaries, and polytene interbands extends across the genome, and we therefore propose a model in which decompaction of boundary-insulator-interband regions drives the organization of interphase chromosomes by creating stable physical separation between adjacent domains. PMID:29148971

  10. High-throughput crystallization of membrane proteins using the lipidic bicelle method.

    Science.gov (United States)

    Ujwal, Rachna; Abramson, Jeff

    2012-01-09

    Because of these technical difficulties, we utilized another lipidic medium for crystallization: bicelles (Figure 1). Bicelles are lipid/amphiphile mixtures formed by blending a phosphatidylcholine lipid (DMPC) with an amphiphile (CHAPSO) or a short-chain lipid (DHPC). Within each bicelle disc, the lipid molecules generate a bilayer while the amphiphile molecules line the apolar edges providing beneficial properties of both bilayers and detergents. Importantly, below their transition temperature, protein-bicelle mixtures have a reduced viscosity and are manipulated in a similar manner as detergent-solubilized MPs, making bicelles compatible with crystallization robots. Bicelles have been successfully used to crystallize several membrane proteins (Table 1). This growing collection of proteins demonstrates the versatility of bicelles for crystallizing both alpha helical and beta sheet MPs from prokaryotic and eukaryotic sources. Because of these successes and the simplicity of high-throughput implementation, bicelles should be part of every membrane protein crystallographer's arsenal. In this video, we describe the bicelle methodology and provide a step-by-step protocol for setting up high-throughput crystallization trials of purified MPs using standard robotics. Copyright © 2012 Journal of Visualized Experiments

  11. Correlation of Scar in Cardiac MRI and High‐Resolution Contact Mapping of Left Ventricle in a Chronic Infarct Model

    Science.gov (United States)

    THAJUDEEN, ANEES; STEWART, BRIAN; COKIC, IVAN; NAKAGAWA, HIROSHI; SHEHATA, MICHAEL; AMORN, ALLEN M.; KALI, AVINASH; LIU, EZH; HARLEV, DORON; BENNETT, NATHAN; DHARMAKUMAR, ROHAN; CHUGH, SUMEET S.; WANG, XUNZHANG

    2015-01-01

    Background Endocardial mapping for scars and abnormal electrograms forms the most essential component of ventricular tachycardia ablation. The utility of ultra‐high resolution mapping of ventricular scar was assessed using a multielectrode contact mapping system in a chronic canine infarct model. Methods Chronic infarcts were created in five anesthetized dogs by ligating the left anterior descending coronary artery. Late gadolinium‐enhanced magnetic resonance imaging (LGE MRI) was obtained 4.9 ± 0.9 months after infarction, with three‐dimensional (3D) gadolinium enhancement signal intensity maps at 1‐mm and 5‐mm depths from the endocardium. Ultra‐high resolution electroanatomical maps were created using a novel mapping system (Rhythmia Mapping System, Rhythmia Medical/Boston Scientific, Marlborough, MA, USA) with an 8.5F catheter with mini‐basket electrode array (64 tiny electrodes, 2.5‐mm spacing, center‐to‐center). Results The maps contained 7,754 ± 1,960 electrograms per animal with a mean resolution of 2.8 ± 0.6 mm. Bipolar voltages over low‐voltage regions (transmural scar and dense transmural scar), as well as over normal tissue, were significantly different. Unipolar voltage reflected the transmural extension of scar in MRI. Electrograms exhibiting isolated late potentials (ILPs) were manually annotated and ILP maps were created showing ILP location and timing. ILPs were identified in 203 ± 159 electrograms per dog (within low‐voltage areas) and ILP maps showed gradation in timing of ILPs at different locations in the scar. Conclusions Ultra‐high resolution contact electroanatomical mapping accurately localizes ventricular scar and abnormal myocardial tissue in this chronic canine infarct model. The high fidelity electrograms provided clear identification of the very low amplitude ILPs within the scar tissue and have the potential to quickly identify targets for ablation. PMID:25656924

  12. Correlation of scar in cardiac MRI and high-resolution contact mapping of left ventricle in a chronic infarct model.

    Science.gov (United States)

    Thajudeen, Anees; Jackman, Warren M; Stewart, Brian; Cokic, Ivan; Nakagawa, Hiroshi; Shehata, Michael; Amorn, Allen M; Kali, Avinash; Liu, Ezh; Harlev, Doron; Bennett, Nathan; Dharmakumar, Rohan; Chugh, Sumeet S; Wang, Xunzhang

    2015-06-01

    Endocardial mapping for scars and abnormal electrograms forms the most essential component of ventricular tachycardia ablation. The utility of ultra-high resolution mapping of ventricular scar was assessed using a multielectrode contact mapping system in a chronic canine infarct model. Chronic infarcts were created in five anesthetized dogs by ligating the left anterior descending coronary artery. Late gadolinium-enhanced magnetic resonance imaging (LGE MRI) was obtained 4.9 ± 0.9 months after infarction, with three-dimensional (3D) gadolinium enhancement signal intensity maps at 1-mm and 5-mm depths from the endocardium. Ultra-high resolution electroanatomical maps were created using a novel mapping system (Rhythmia Mapping System, Rhythmia Medical/Boston Scientific, Marlborough, MA, USA) with an 8.5F catheter with mini-basket electrode array (64 tiny electrodes, 2.5-mm spacing, center-to-center). The maps contained 7,754 ± 1,960 electrograms per animal with a mean resolution of 2.8 ± 0.6 mm. Bipolar voltages over low-voltage regions (transmural scar and dense transmural scar), as well as over normal tissue, were significantly different. Unipolar voltage reflected the transmural extension of scar in MRI. Electrograms exhibiting isolated late potentials (ILPs) were manually annotated and ILP maps were created showing ILP location and timing. ILPs were identified in 203 ± 159 electrograms per dog (within low-voltage areas) and ILP maps showed gradation in timing of ILPs at different locations in the scar. Ultra-high resolution contact electroanatomical mapping accurately localizes ventricular scar and abnormal myocardial tissue in this chronic canine infarct model. The high fidelity electrograms provided clear identification of the very low amplitude ILPs within the scar tissue and have the potential to quickly identify targets for ablation. ©2015 The Authors. Pacing and Clinical Electrophysiology, published by Wiley Periodicals, Inc.

  13. High-Throughput 3D Tumor Culture in a Recyclable Microfluidic Platform.

    Science.gov (United States)

    Liu, Wenming; Wang, Jinyi

    2017-01-01

    Three-dimensional (3D) tumor culture miniaturized platforms are of importance to biomimetic model construction and pathophysiological studies. Controllable and high-throughput production of 3D tumors is desirable to make cell-based manipulation dynamic and efficient at micro-scale. Moreover, the 3D culture platform being reusable is convenient to research scholars. In this chapter, we describe a dynamically controlled 3D tumor manipulation and culture method using pneumatic microstructure-based microfluidics, which has potential applications in the fields of tissue engineering, tumor biology, and clinical medicine in a high-throughput way.

  14. Applications of high-throughput plant phenotyping to study nutrient use efficiency.

    Science.gov (United States)

    Berger, Bettina; de Regt, Bas; Tester, Mark

    2013-01-01

    Remote sensing and spectral reflectance measurements of plants have long been used to assess the growth and nutrient status of plants in a noninvasive manner. With improved imaging and computer technologies, these approaches can now be used at high throughput for more extensive physiological and genetic studies. Here, we present an example of how high-throughput imaging can be used to study the growth of plants exposed to different nutrient levels. In addition, the color of the leaves can be used to estimate leaf chlorophyll and nitrogen status of the plant.

  15. A platform for high-throughput screening of DNA-encoded catalyst libraries in organic solvents.

    Science.gov (United States)

    Hook, K Delaney; Chambers, John T; Hili, Ryan

    2017-10-01

    We have developed a novel high-throughput screening platform for the discovery of small-molecule catalysts for bond-forming reactions. The method employs an in vitro selection for bond formation using amphiphilic DNA-encoded small molecules charged with reaction substrate, which enables selections to be conducted in a variety of organic or aqueous solvents. Using the amine-catalysed aldol reaction as a catalytic model and high-throughput DNA sequencing as a selection read-out, we demonstrate the 1200-fold enrichment of a known aldol catalyst from a library of 16.7 million uncompetitive library members.

  16. Macro-to-micro structural proteomics: native source proteins for high-throughput crystallization.

    Science.gov (United States)

    Totir, Monica; Echols, Nathaniel; Nanao, Max; Gee, Christine L; Moskaleva, Alisa; Gradia, Scott; Iavarone, Anthony T; Berger, James M; May, Andrew P; Zubieta, Chloe; Alber, Tom

    2012-01-01

    Structural biology and structural genomics projects routinely rely on recombinantly expressed proteins, but many proteins and complexes are difficult to obtain by this approach. We investigated native source proteins for high-throughput protein crystallography applications. The Escherichia coli proteome was fractionated, purified, crystallized, and structurally characterized. Macro-scale fermentation and fractionation were used to subdivide the soluble proteome into 408 unique fractions of which 295 fractions yielded crystals in microfluidic crystallization chips. Of the 295 crystals, 152 were selected for optimization, diffraction screening, and data collection. Twenty-three structures were determined, four of which were novel. This study demonstrates the utility of native source proteins for high-throughput crystallography.

  17. HTP-NLP: A New NLP System for High Throughput Phenotyping.

    Science.gov (United States)

    Schlegel, Daniel R; Crowner, Chris; Lehoullier, Frank; Elkin, Peter L

    2017-01-01

    Secondary use of clinical data for research requires a method to process the data quickly so that researchers can extract cohorts efficiently. We present two advances in the High Throughput Phenotyping NLP system which support the aim of truly high throughput processing of clinical data, inspired by a characterization of the linguistic properties of such data. Semantic indexing to store and generalize partially-processed results and the use of compositional expressions for ungrammatical text are discussed, along with a set of initial timing results for the system.

  18. High-throughput exposure modeling to support prioritization of chemicals in personal care products

    DEFF Research Database (Denmark)

    Csiszar, Susan A.; Ernstoff, Alexi; Fantke, Peter

    2016-01-01

    We demonstrate the application of a high-throughput modeling framework to estimate exposure to chemicals used in personal care products (PCPs). As a basis for estimating exposure, we use the product intake fraction (PiF), defined as the mass of chemical taken in by an individual or population per mass...... intakes were associated with body lotion. Bioactive doses derived from high-throughput in vitro toxicity data were combined with the estimated PiFs to demonstrate an approach to estimate bioactive equivalent chemical content and to screen chemicals for risk....
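The screening logic described in this record reduces to simple arithmetic, sketched below with hypothetical names and numbers (none of them from the study): per-use intake is the product of the PiF and the chemical mass applied, and comparing it with a bioactivity-derived dose flags a chemical for closer review.

```python
def intake_per_use(pif, chemical_mass_mg):
    """Mass taken in per product use: product intake fraction
    times the mass of chemical applied (mg)."""
    return pif * chemical_mass_mg

def flags_for_review(intake_mg, bioactive_dose_mg):
    """Screening-level flag: does the estimated intake reach the
    bioactive-equivalent dose derived from in vitro data?"""
    return intake_mg >= bioactive_dose_mg

# Hypothetical example: a 500 mg lotion application containing 1% of a
# chemical, with a PiF of 0.1 (10% of the applied chemical is taken in)
mass_mg = 500 * 0.01                     # 5 mg of chemical applied
intake = intake_per_use(0.1, mass_mg)    # 0.5 mg taken in per use
```

This is the prioritization step in miniature: chemicals whose estimated intake approaches their bioactive dose rise to the top of the screening list.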

  19. High-Throughput Combinatorial Development of High-Entropy Alloys For Light-Weight Structural Applications

    Energy Technology Data Exchange (ETDEWEB)

    Van Duren, Jeroen K; Koch, Carl; Luo, Alan; Sample, Vivek; Sachdev, Anil

    2017-12-29

    The primary limitation of today’s lightweight structural alloys is that specific yield strengths (SYS) higher than 200 MPa·cc/g (typical value for titanium alloys) are extremely difficult to achieve. This holds true especially at a cost lower than $5/kg (typical value for magnesium alloys). Recently, high-entropy alloys (HEA) have shown promising SYS, yet the large composition space of HEA makes screening compositions complex and time-consuming. Over the course of this 2-year project we started from 150 billion compositions and reduced the number of potential low-density (<5 g/cc), low-cost (<$5/kg) high-entropy alloy (LDHEA) candidates that are single-phase, disordered, solid-solution (SPSS) to a few thousand compositions. This was accomplished by means of machine learning to guide design for SPSS LDHEA based on a combination of recursive partitioning, an extensive, experimental HEA database compiled from 24 literature sources, and 91 calculated parameters serving as phenomenological selection rules. Machine learning shows an accuracy of 82% in identifying which compositions of a separate, smaller, experimental HEA database are SPSS HEA. Calculation of Phase Diagrams (CALPHAD) shows an accuracy of 71-77% for the alloys supported by the CALPHAD database, where 30% of the compiled HEA database is not supported by CALPHAD. In addition to machine learning and CALPHAD, a third tool was developed to aid design of SPSS LDHEA. Phase diagrams were calculated by constructing the Gibbs free-energy convex hull based on easily accessible enthalpy and entropy terms. Surprisingly, accuracy was 78%. Pursuing these LDHEA candidates by high-throughput experimental methods resulted in SPSS LDHEA composed of transition metals (e.g. Cr, Mn, Fe, Ni, Cu) alloyed with Al, yet the high concentration of Al, necessary to bring the mass density below 5.0 g/cc, makes these materials hard and brittle, body-centered-cubic (BCC) alloys. A related, yet multi-phase BCC alloy, based
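One of the cheap selection rules needed to prune a 150-billion-composition space is a rule-of-mixtures density estimate. The sketch below is a generic illustration of that screening step, not the project's code; elemental densities and atomic masses come from standard tables, and the 5 g/cc cutoff from the abstract.

```python
# Rule-of-mixtures mass density for a candidate alloy (atomic fractions).
# A coarse pre-screen, not a CALPHAD or convex-hull calculation.
ATOMIC_MASS = {"Al": 26.98, "Cr": 52.00, "Mn": 54.94,
               "Fe": 55.85, "Ni": 58.69, "Cu": 63.55}  # g/mol
DENSITY = {"Al": 2.70, "Cr": 7.19, "Mn": 7.21,
           "Fe": 7.87, "Ni": 8.91, "Cu": 8.96}         # g/cc

def alloy_density(composition):
    """composition: {element: atomic fraction}, fractions summing to 1."""
    mass = sum(f * ATOMIC_MASS[el] for el, f in composition.items())
    volume = sum(f * ATOMIC_MASS[el] / DENSITY[el]
                 for el, f in composition.items())
    return mass / volume

def is_low_density(composition, limit=5.0):
    """LDHEA density filter from the project's <5 g/cc criterion."""
    return alloy_density(composition) < limit

equimolar = {el: 0.2 for el in ("Al", "Cr", "Mn", "Fe", "Ni")}
al_rich = {"Al": 0.6, "Cr": 0.1, "Mn": 0.1, "Fe": 0.1, "Ni": 0.1}
```

Consistent with the abstract, only Al-heavy compositions pass the density filter: the equimolar transition-metal alloy sits well above 5 g/cc, while the 60 at.% Al variant falls below it.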

  20. Influence of Intramyocardial Adipose Tissue on the Accuracy of Endocardial Contact Mapping of the Chronic Myocardial Infarction Substrate.

    Science.gov (United States)

    Samanta, Rahul; Kumar, Saurabh; Chik, William; Qian, Pierre; Barry, Michael A; Al Raisi, Sara; Bhaskaran, Abhishek; Farraha, Melad; Nadri, Fazlur; Kizana, Eddy; Thiagalingam, Aravinda; Kovoor, Pramesh; Pouliopoulos, Jim

    2017-10-01

    Recent studies have demonstrated that intramyocardial adipose tissue (IMAT) may contribute to ventricular electrophysiological remodeling in patients with chronic myocardial infarction. Using an ovine model of myocardial infarction, we aimed to determine the influence of IMAT on scar tissue identification during endocardial contact mapping and optimal voltage-based mapping criteria for defining IMAT-dense regions. In 7 sheep, left ventricular endocardial and transmural mapping was performed 84 weeks (15-111 weeks) post-myocardial infarction. The Spearman rank correlation coefficient was used to assess the relationship between endocardial contact electrogram amplitude and histological composition of myocardium. Receiver operating characteristic curves were used to derive optimal electrogram thresholds for IMAT delineation during endocardial mapping and to describe the use of endocardial mapping for delineation of IMAT-dense regions within scar. Endocardial electrogram amplitude correlated significantly with IMAT (unipolar r=-0.48±0.12, P<0.001; bipolar r=-0.45±0.22, P=0.04) but not collagen (unipolar r=-0.36±0.24, P=0.13; bipolar r=-0.43±0.31, P=0.16). IMAT-dense regions of myocardium were reliably identified during endocardial mapping using thresholds of <3.7 mV (unipolar) and <0.6 mV (bipolar), applied singly or in combination (single modality area under the curve=0.80, P<0.001; combined modality area under the curve=0.84, P<0.001). Unipolar mapping using optimal thresholding remained significantly reliable (area under the curve=0.76, P<0.001) during mapping of IMAT confined to putative scar border zones (bipolar amplitude, 0.5-1.5 mV). These novel findings enhance our understanding of the confounding influence of IMAT on endocardial scar mapping. Combined bipolar and unipolar voltage mapping using optimal thresholds may be useful for delineating IMAT-dense regions of myocardium in postinfarct cardiomyopathy. © 2017 American Heart Association, Inc.
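The ROC-based threshold derivation used in this record can be sketched with a small Youden-index computation. The amplitudes and labels below are made up for illustration, not the study's data; the study's own cutoffs were <3.7 mV (unipolar) and <0.6 mV (bipolar).

```python
def youden_threshold(amplitudes, is_imat):
    """Pick the 'amplitude <= cutoff means IMAT' threshold that
    maximizes Youden's J = sensitivity + specificity - 1."""
    best_j, best_t = -1.0, None
    for t in sorted(set(amplitudes)):
        tp = sum(1 for a, y in zip(amplitudes, is_imat) if a <= t and y)
        fn = sum(1 for a, y in zip(amplitudes, is_imat) if a > t and y)
        tn = sum(1 for a, y in zip(amplitudes, is_imat) if a > t and not y)
        fp = sum(1 for a, y in zip(amplitudes, is_imat) if a <= t and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Hypothetical bipolar amplitudes (mV): low values recorded over IMAT
amps = [0.2, 0.4, 0.5, 0.7, 1.2, 1.8, 2.5, 3.0]
labels = [True, True, True, False, False, False, False, False]
```

Sweeping every observed amplitude as a candidate cutoff and keeping the best J is exactly the operating-point choice an ROC analysis makes; on this toy data the classes separate cleanly at 0.5 mV.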

  1. A High-Throughput Technique Reveals the Load- and Site Density-Dependent Kinetics of E-Selectin

    Science.gov (United States)

    Snook, Jeremy H.; Guilford, William H.

    2013-01-01

    The kinetics of bond rupture between receptors and ligands are critically dependent on applied mechanical force. Force spectroscopy of single receptor-ligand pairs to measure kinetics is a laborious and time-consuming process that is generally performed using individual force probes, making one measurement at a time when typically hundreds of measurements are needed. A high-throughput approach is thus desirable. We report here a magnetic bond puller that provides high-throughput measurements of single receptor-ligand bond kinetics. Electromagnets are used to apply pN tensile and compressive forces to receptor-coated magnetic microspheres while monitoring their contact with a ligand-coated surface. Bond lifetimes and the probability of forming a bond are measured via videomicroscopy, and the data are used to determine the load-dependent rates of bond rupture and bond formation. The approach is simple, customizable, relatively inexpensive, and can make dozens of kinetic measurements simultaneously. We used the device to investigate how compressive and tensile forces affect the rates of formation and rupture, respectively, of bonds between E-selectin and sialyl Lewis a (sLea), a sugar on P-selectin glycoprotein ligand-1 to which selectins bind. We confirmed earlier findings of a load-dependent rate of bond formation between these two molecules, and that they form a catch-slip bond like other selectin family members. We also make the novel observation of an “ideal” bond in a highly multivalent system of this receptor-ligand pair. PMID:24511329
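A catch-slip bond of the kind reported here is commonly modeled as two force-dependent dissociation pathways with Bell-like exponentials: the catch pathway slows with force while the slip pathway accelerates. The sketch below uses illustrative parameters, not values fitted to the E-selectin data, and shows the signature non-monotonic lifetime.

```python
import math

def bond_lifetime(force_pN, k_catch=10.0, f_catch=5.0,
                  k_slip=0.1, f_slip=10.0):
    """Mean bond lifetime under a two-pathway catch-slip model.
    The catch pathway rate decays with force and the slip pathway
    rate grows with force; parameters are illustrative only."""
    k_off = (k_catch * math.exp(-force_pN / f_catch)
             + k_slip * math.exp(force_pN / f_slip))
    return 1.0 / k_off

# Lifetime rises from zero force to an intermediate optimum,
# then falls again as the slip pathway dominates
lifetimes = {f: bond_lifetime(f) for f in (0, 15, 60)}
```

The hallmark of a catch-slip bond is visible directly: lifetime at an intermediate force exceeds the lifetime at both zero and high force, which is how such bonds are distinguished from ordinary slip bonds in lifetime-versus-force data.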

  2. MicroRNA from Moringa oleifera: Identification by High Throughput Sequencing and Their Potential Contribution to Plant Medicinal Value.

    Directory of Open Access Journals (Sweden)

    Stefano Pirrò

    Full Text Available Moringa oleifera is a widespread plant with substantial nutritional and medicinal value. We postulated that microRNAs (miRNAs), which are endogenous, noncoding small RNAs regulating gene expression at the post-transcriptional level, might contribute to the medicinal properties of plants of this species after ingestion into the human body, regulating human gene expression. However, knowledge about miRNAs in Moringa is scarce. Furthermore, to test the hypothesis that these miRNAs have pharmacological potential, we conducted a high-throughput sequencing analysis using the Illumina platform. A total of 31,290,964 raw reads were produced from a library of small RNA isolated from M. oleifera seeds. We identified 94 conserved and two novel miRNAs that were validated by qRT-PCR assays. Results from qRT-PCR trials conducted on the expression of 20 Moringa miRNAs showed that they are conserved across multiple plant species, as determined by their detection in tissue of other common crop plants. In silico analyses predicted target genes for the conserved miRNAs, which in turn allowed us to relate the miRNAs to the regulation of physiological processes. Some of the predicted plant miRNAs have functional homology to their mammalian counterparts and regulated human genes when they were transfected into cell lines. To our knowledge, this is the first report of discovering M. oleifera miRNAs based on high-throughput sequencing and bioinformatics analysis, and we provide new insight into a potential cross-species control of human gene expression. The widespread cultivation and consumption of M. oleifera, for nutritional and medicinal purposes, brings humans into close contact with products and extracts of this plant species. The potential for miRNA transfer should be evaluated as one possible mechanism of action to account for beneficial properties of this valuable species.

  3. MicroRNA from Moringa oleifera: Identification by High Throughput Sequencing and Their Potential Contribution to Plant Medicinal Value.

    Science.gov (United States)

    Pirrò, Stefano; Zanella, Letizia; Kenzo, Maurice; Montesano, Carla; Minutolo, Antonella; Potestà, Marina; Sobze, Martin Sanou; Canini, Antonella; Cirilli, Marco; Muleo, Rosario; Colizzi, Vittorio; Galgani, Andrea

    2016-01-01

    Moringa oleifera is a widespread plant with substantial nutritional and medicinal value. We postulated that microRNAs (miRNAs), which are endogenous, noncoding small RNAs regulating gene expression at the post-transcriptional level, might contribute to the medicinal properties of plants of this species after ingestion into the human body, regulating human gene expression. However, knowledge about miRNAs in Moringa is scarce. Furthermore, to test the hypothesis that these miRNAs have pharmacological potential, we conducted a high-throughput sequencing analysis using the Illumina platform. A total of 31,290,964 raw reads were produced from a library of small RNA isolated from M. oleifera seeds. We identified 94 conserved and two novel miRNAs that were validated by qRT-PCR assays. Results from qRT-PCR trials conducted on the expression of 20 Moringa miRNAs showed that they are conserved across multiple plant species, as determined by their detection in tissue of other common crop plants. In silico analyses predicted target genes for the conserved miRNAs, which in turn allowed us to relate the miRNAs to the regulation of physiological processes. Some of the predicted plant miRNAs have functional homology to their mammalian counterparts and regulated human genes when they were transfected into cell lines. To our knowledge, this is the first report of discovering M. oleifera miRNAs based on high-throughput sequencing and bioinformatics analysis, and we provide new insight into a potential cross-species control of human gene expression. The widespread cultivation and consumption of M. oleifera, for nutritional and medicinal purposes, brings humans into close contact with products and extracts of this plant species. The potential for miRNA transfer should be evaluated as one possible mechanism of action to account for beneficial properties of this valuable species.

  4. Liquid Phase Multiplex High-Throughput Screening of Metagenomic Libraries Using p-Nitrophenyl-Linked Substrates for Accessory Lignocellulosic Enzymes.

    Science.gov (United States)

    Smart, Mariette; Huddy, Robert J; Cowan, Don A; Trindade, Marla

    2017-01-01

    To access the genetic potential contained in large metagenomic libraries, suitable high-throughput functional screening methods are required. Here we describe a high-throughput screening approach which enables the rapid identification of metagenomic library clones expressing functional accessory lignocellulosic enzymes. The high-throughput nature of this method hinges on the multiplexing of both the E. coli metagenomic library clones and the colorimetric p-nitrophenyl-linked substrates, which allows for the simultaneous screening for β-glucosidases, β-xylosidases, and α-L-arabinofuranosidases. This method is readily automated and compatible with high-throughput robotic screening systems.

  5. High-throughput analysis of the impact of antibiotics on the human intestinal microbiota composition

    NARCIS (Netherlands)

    Ladirat, S.E.; Schols, H.A.; Nauta, A.; Schoterman, M.H.C.; Keijser, B.J.F.; Montijn, R.C.; Gruppen, H.; Schuren, F.H.J.

    2013-01-01

    Antibiotic treatments can lead to a disruption of the human microbiota. In this in-vitro study, the impact of antibiotics on adult intestinal microbiota was monitored in a new high-throughput approach: a fermentation screening-platform was coupled with a phylogenetic microarray analysis

  6. Complementing high-throughput X-ray powder diffraction data with quantum-chemical calculations

    DEFF Research Database (Denmark)

    Naelapaa, Kaisa; van de Streek, Jacco; Rantanen, Jukka

    2012-01-01

    High-throughput crystallisation and characterisation platforms provide an efficient means to carry out solid-form screening during the pre-formulation phase. Determining the crystal structures of newly identified solid phases, however, usually requires independent crystallisation trials to produce...... obtained only during high-energy processing such as spray drying or milling.

  7. Reverse Phase Protein Arrays for High-Throughput Protein Measurements in Mammospheres

    DEFF Research Database (Denmark)

    Pedersen, Marlene Lemvig; Block, Ines; List, Markus

    Protein Array (RPPA)-based readout format integrated into robotic siRNA screening. This technique would allow post-screening high-throughput quantification of protein changes. Recently, breast cancer stem cells (BCSCs) have attracted much attention, as a tumor- and metastasis-driving subpopulation...

  8. High-throughput open source computational methods for genetics and genomics

    NARCIS (Netherlands)

    Prins, J.C.P.

    2015-01-01

    Biology is increasingly data-driven by virtue of the development of high-throughput technologies, such as DNA and RNA sequencing. Computational biology and bioinformatics are scientific disciplines that cross over between biology, informatics and statistics; which is clearly

  9. ToxCast Workflow: High-throughput screening assay data processing, analysis and management (SOT)

    Science.gov (United States)

    US EPA’s ToxCast program is generating data in high-throughput screening (HTS) and high-content screening (HCS) assays for thousands of environmental chemicals, for use in developing predictive toxicity models. Currently the ToxCast screening program includes over 1800 unique c...

  10. Microfluidic Impedance Flow Cytometry Enabling High-Throughput Single-Cell Electrical Property Characterization

    OpenAIRE

    Jian Chen; Chengcheng Xue; Yang Zhao; Deyong Chen; Min-Hsien Wu; Junbo Wang

    2015-01-01

    This article reviews recent developments in microfluidic impedance flow cytometry for high-throughput electrical property characterization of single cells. Four major perspectives of microfluidic impedance flow cytometry for single-cell characterization are included in this review: (1) early developments of microfluidic impedance flow cytometry for single-cell electrical property characterization; (2) microfluidic impedance flow cytometry with enhanced sensitivity; (3) microfluidic impedance ...

  11. High-throughput transformation of Saccharomyces cerevisiae using liquid handling robots.

    Directory of Open Access Journals (Sweden)

    Guangbo Liu

    Full Text Available Saccharomyces cerevisiae (budding yeast) is a powerful eukaryotic model organism ideally suited to high-throughput genetic analyses, which time and again has yielded insights that further our understanding of cell biology processes conserved in humans. Lithium Acetate (LiAc) transformation of yeast with DNA for the purposes of exogenous protein expression (e.g., plasmids) or genome mutation (e.g., gene mutation, deletion, epitope tagging) is a useful and long-established method. However, a reliable and optimized high throughput transformation protocol that runs almost no risk of human error has not been described in the literature. Here, we describe such a method that is broadly transferable to most liquid handling high-throughput robotic platforms, which are now commonplace in academic and industry settings. Using our optimized method, we are able to comfortably transform approximately 1200 individual strains per day, allowing complete transformation of typical genomic yeast libraries within 6 days. In addition, use of our protocol for gene knockout purposes also provides a potentially quicker, easier and more cost-effective approach to generating collections of double mutants than the popular and elegant synthetic genetic array methodology. In summary, our methodology will be of significant use to anyone interested in high throughput molecular and/or genetic analysis of yeast.

  12. The Power of High-Throughput Experimentation in Homogeneous Catalysis Research for Fine Chemicals

    NARCIS (Netherlands)

    Vries, Johannes G. de; Vries, André H.M. de

    2003-01-01

    The use of high-throughput experimentation (HTE) in homogeneous catalysis research for the production of fine chemicals is an important breakthrough. Whereas in the past stoichiometric chemistry was often preferred because of time-to-market constraints, HTE allows catalytic solutions to be found

  13. High throughput phenotyping to accelerate crop breeding and monitoring of diseases in the field.

    Science.gov (United States)

    Shakoor, Nadia; Lee, Scott; Mockler, Todd C

    2017-08-01

    Effective implementation of technology that facilitates accurate and high-throughput screening of thousands of field-grown lines is critical for accelerating crop improvement and breeding strategies for higher yield and disease tolerance. Progress in the development of field-based high throughput phenotyping methods has advanced considerably in the last 10 years through technological progress in sensor development and high-performance computing. Here, we review recent advances in high throughput field phenotyping technologies designed to inform the genetics of quantitative traits, including crop yield and disease tolerance. Successful application of phenotyping platforms to advance crop breeding and identify and monitor disease requires: (1) high resolution of imaging and environmental sensors; (2) quality data products that facilitate computer vision, machine learning and GIS; (3) capacity infrastructure for data management and analysis; and (4) automated environmental data collection. Accelerated breeding for agriculturally relevant crop traits is key to the development of improved varieties and is critically dependent on high-resolution, high-throughput field-scale phenotyping technologies that can efficiently discriminate better performing lines within a larger population and across multiple environments. Copyright © 2017. Published by Elsevier Ltd.

  14. Functional characterisation of human glycine receptors in a fluorescence-based high throughput screening assay

    DEFF Research Database (Denmark)

    Jensen, Anders A.

    2005-01-01

    receptors in this assay were found to be in good agreement with those from electrophysiology studies of the receptors expressed in Xenopus oocytes or mammalian cell lines. Hence, this high throughput screening assay will be of great use in future pharmacological studies of glycine receptors, particular...

  15. Predicting gene function through systematic analysis and quality assessment of high-throughput data.

    Science.gov (United States)

    Kemmeren, Patrick; Kockelkorn, Thessa T J P; Bijma, Theo; Donders, Rogier; Holstege, Frank C P

    2005-04-15

    Determining gene function is an important challenge arising from the availability of whole genome sequences. Until recently, approaches based on sequence homology were the only high-throughput method for predicting gene function. Use of high-throughput generated experimental data sets for determining gene function has been limited for several reasons. Here a new approach is presented for integration of high-throughput data sets, leading to prediction of function based on relationships supported by multiple types and sources of data. This is achieved with a database containing 125 different high-throughput data sets describing phenotypes, cellular localizations, protein interactions and mRNA expression levels from Saccharomyces cerevisiae, using a bit-vector representation and information content-based ranking. The approach takes characteristic and qualitative differences between the data sets into account, is highly flexible, efficient and scalable. Database queries result in predictions for 543 uncharacterized genes, based on multiple functional relationships each supported by at least three types of experimental data. Some of these are experimentally verified, further demonstrating their reliability. The results also generate insights into the relative merits of different data types and provide a coherent framework for functional genomic datamining. Free availability over the Internet. f.c.p.holstege@med.uu.nl http://www.genomics.med.uu.nl/pub/pk/comb_gen_network.
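
    The integration scheme this record describes (genes as bit-vectors over many high-throughput data sets, with functional links kept only when several data types agree) can be sketched roughly as follows; the data-type names and example vectors are invented for illustration and are not taken from the paper.

```python
# Sketch of a bit-vector gene representation (hypothetical data).
# Each gene is a bit-vector over data-set categories; a putative
# functional link between two genes is retained only when it is
# supported by at least three distinct types of experimental data.

DATASET_TYPES = ["phenotype", "localization", "interaction", "expression"]

def shared_evidence(vec_a, vec_b):
    """Return the data types in which both genes score positive."""
    return [t for t, a, b in zip(DATASET_TYPES, vec_a, vec_b) if a and b]

def supported_link(vec_a, vec_b, min_types=3):
    """Keep a link only if >= min_types data types agree."""
    return len(shared_evidence(vec_a, vec_b)) >= min_types

gene1 = (1, 1, 1, 0)   # phenotype, localization, interaction evidence
gene2 = (1, 1, 1, 1)
gene3 = (0, 0, 1, 1)

print(supported_link(gene1, gene2))  # True: three shared types
print(supported_link(gene1, gene3))  # False: only one shared type
```

    A real system would of course weight data sets by quality and information content, as the record notes, rather than treating all types equally.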

  16. tcpl: The ToxCast Pipeline for High-Throughput Screening Data

    Science.gov (United States)

    Motivation: The large and diverse high-throughput chemical screening efforts carried out by the US EPA ToxCast program require an efficient, transparent, and reproducible data pipeline. Summary: The tcpl R package and its associated MySQL database provide a generalized platform fo...

  17. Application of high-throughput technologies to a structural proteomics-type analysis of Bacillus anthracis

    NARCIS (Netherlands)

    Au, K.; Folkers, G.E.; Kaptein, R.

    2006-01-01

    A collaborative project between two Structural Proteomics In Europe (SPINE) partner laboratories, York and Oxford, aimed at high-throughput (HTP) structure determination of proteins from Bacillus anthracis, the aetiological agent of anthrax and a biomedically important target, is described. Based

  18. Roche genome sequencer FLX based high-throughput sequencing of ancient DNA

    DEFF Research Database (Denmark)

    Alquezar-Planas, David E; Fordyce, Sarah Louise

    2012-01-01

    Since the development of so-called "next generation" high-throughput sequencing in 2005, this technology has been applied to a variety of fields. Such applications include disease studies, evolutionary investigations, and ancient DNA. Each application requires a specialized protocol to ensure tha...

  19. 20170913 - Retrofit Strategies for Incorporating Xenobiotic Metabolism into High Throughput Screening Assays (EMGS)

    Science.gov (United States)

    The US EPA’s ToxCast program is designed to assess chemical perturbations of molecular and cellular endpoints using a variety of high-throughput screening (HTS) assays. However, existing HTS assays have limited or no xenobiotic metabolism which could lead to a mischaracteri...

  20. PLASMA PROTEIN PROFILING AS A HIGH THROUGHPUT TOOL FOR CHEMICAL SCREENING USING A SMALL FISH MODEL

    Science.gov (United States)

    Hudson, R. Tod, Michael J. Hemmer, Kimberly A. Salinas, Sherry S. Wilkinson, James Watts, James T. Winstead, Peggy S. Harris, Amy Kirkpatrick and Calvin C. Walker. In press. Plasma Protein Profiling as a High Throughput Tool for Chemical Screening Using a Small Fish Model (Abstra...

  1. HTPheno: an image analysis pipeline for high-throughput plant phenotyping.

    Science.gov (United States)

    Hartmann, Anja; Czauderna, Tobias; Hoffmann, Roberto; Stein, Nils; Schreiber, Falk

    2011-05-12

    In the last few years high-throughput analysis methods have become state-of-the-art in the life sciences. One of the latest developments is automated greenhouse systems for high-throughput plant phenotyping. Such systems allow the non-destructive screening of plants over a period of time by means of image acquisition techniques. During such screening different images of each plant are recorded and must be analysed by applying sophisticated image analysis algorithms. This paper presents an image analysis pipeline (HTPheno) for high-throughput plant phenotyping. HTPheno is implemented as a plugin for ImageJ, an open source image processing software. It provides the possibility to analyse colour images of plants which are taken in two different views (top view and side view) during a screening. Within the analysis different phenotypical parameters for each plant such as height, width and projected shoot area of the plants are calculated for the duration of the screening. HTPheno is applied to analyse two barley cultivars. HTPheno, an open source image analysis pipeline, supplies a flexible and adaptable ImageJ plugin which can be used for automated image analysis in high-throughput plant phenotyping and therefore to derive new biological insights, such as determination of fitness.

  2. HTPheno: An image analysis pipeline for high-throughput plant phenotyping

    Directory of Open Access Journals (Sweden)

    Stein Nils

    2011-05-01

    Abstract Background In the last few years high-throughput analysis methods have become state-of-the-art in the life sciences. One of the latest developments is automated greenhouse systems for high-throughput plant phenotyping. Such systems allow the non-destructive screening of plants over a period of time by means of image acquisition techniques. During such screening different images of each plant are recorded and must be analysed by applying sophisticated image analysis algorithms. Results This paper presents an image analysis pipeline (HTPheno) for high-throughput plant phenotyping. HTPheno is implemented as a plugin for ImageJ, an open source image processing software. It provides the possibility to analyse colour images of plants which are taken in two different views (top view and side view) during a screening. Within the analysis different phenotypical parameters for each plant such as height, width and projected shoot area of the plants are calculated for the duration of the screening. HTPheno is applied to analyse two barley cultivars. Conclusions HTPheno, an open source image analysis pipeline, supplies a flexible and adaptable ImageJ plugin which can be used for automated image analysis in high-throughput plant phenotyping and therefore to derive new biological insights, such as determination of fitness.
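
    A toy version of the trait extraction this record describes (projected shoot area, height and width from a segmented plant image) might look like the sketch below. HTPheno itself is an ImageJ plugin; the binary mask and helper name here are hypothetical stand-ins for its segmentation output.

```python
# Minimal sketch of HTPheno-style trait extraction from a binary
# plant mask (1 = plant pixel, 0 = background). Projected shoot area
# is the pixel count; height and width come from the bounding box.

def shoot_traits(mask):
    """Return (area, height, width) from a 2-D 0/1 plant mask."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0]))
            if any(row[c] for row in mask)]
    area = sum(sum(row) for row in mask)
    height = rows[-1] - rows[0] + 1 if rows else 0
    width = cols[-1] - cols[0] + 1 if cols else 0
    return area, height, width

mask = [
    [0, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 1, 1, 0],
]
print(shoot_traits(mask))  # (6, 3, 3)
```

    Repeating this per image over the screening period yields the growth curves the pipeline reports for each plant.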

  3. High-Throughput Immunogenetics for Clinical and Research Applications in Immunohematology: Potential and Challenges

    NARCIS (Netherlands)

    Langerak, A.W.; Bruggemann, M.; Davi, F.; Darzentas, N.; Dongen, J.J. van; Gonzalez, D.; Cazzaniga, G.; Giudicelli, V.; Lefranc, M.P.; Giraud, M.; Macintyre, E.A.; Hummel, M.; Pott, C.; Groenen, P.J.T.A.; Stamatopoulos, K.

    2017-01-01

    Analysis and interpretation of Ig and TCR gene rearrangements in the conventional, low-throughput way have their limitations in terms of resolution, coverage, and biases. With the advent of high-throughput, next-generation sequencing (NGS) technologies, a deeper analysis of Ig and/or TCR (IG/TR)

  4. Accelerating the design of solar thermal fuel materials through high throughput simulations.

    Science.gov (United States)

    Liu, Yun; Grossman, Jeffrey C

    2014-12-10

    Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.
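
    The screening logic described above can be illustrated with a minimal sketch: rank candidate molecules by isomerization enthalpy, dH = E(metastable) − E(ground), and keep those above a storage threshold. All molecule names and energies below are invented placeholders; a real pipeline would compute them ab initio.

```python
# Hedged sketch of high-throughput STF candidate screening.
# Energies are hypothetical (eV per molecule), not computed values.

candidates = {
    # name: (E_ground, E_metastable) in eV -- made-up numbers
    "azobenzene-like": (0.0, 0.6),
    "norbornadiene-like": (0.0, 0.9),
    "low-storage": (0.0, 0.1),
}

def isomerization_enthalpy(e_ground, e_meta):
    """Stored energy per molecule for the metastable isomer."""
    return e_meta - e_ground

def screen(mols, min_dh=0.5):
    """Keep candidates whose stored enthalpy is at least min_dh eV."""
    hits = {name: isomerization_enthalpy(*e) for name, e in mols.items()}
    return {name: dh for name, dh in hits.items() if dh >= min_dh}

print(screen(candidates))  # the two high-enthalpy candidates survive
```

    The record's actual platform also checks stability of each metastable structure; that step is omitted here for brevity.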

  5. Investigation of non-halogenated solvent mixtures for high throughput fabrication of polymer:fullerene solar cells

    NARCIS (Netherlands)

    Schmidt-Hansberg, B.; Sanyal, M.; Grossiord, N.; Galagan, Y.O.; Baunach, M.; Klein, M.F.G.; Colsmann, A.; Scharfer, P.; Lemmer, U.; Dosch, H.; Michels, J.J; Barrena, E.; Schabel, W.

    2012-01-01

    The rapidly increasing power conversion efficiencies of organic solar cells are an important prerequisite towards low-cost photovoltaics fabricated in high throughput. In this work we suggest indane as a non-halogenated replacement for the commonly used halogenated solvent o-dichlorobenzene. Indane

  6. Development of a thyroperoxidase inhibition assay for high-throughput screening

    Science.gov (United States)

    High-throughput screening (HTPS) assays to detect inhibitors of thyroperoxidase (TPO), the enzymatic catalyst for thyroid hormone (TH) synthesis, are not currently available. Herein we describe the development of a HTPS TPO inhibition assay. Rat thyroid microsomes and a fluores...

  7. Evaluation of Simple and Inexpensive High-Throughput Methods for Phytic Acid Determination

    DEFF Research Database (Denmark)

    Raboy, Victor; Johnson, Amy; Bilyeu, Kristin

    2017-01-01

    High-throughput/low-cost/low-tech methods for phytic acid determination that are sufficiently accurate and reproducible would be of value in plant genetics, crop breeding and in the food and feed industries. Variants of two candidate methods, those described by Vaintraub and Lapteva (Anal Biochem...

  8. High-throughput genotoxicity assay identifies antioxidants as inducers of DNA damage response and cell death

    Science.gov (United States)

    Human ATAD5 is an excellent biomarker for identifying genotoxic compounds because ATAD5 protein levels increase post-transcriptionally following exposure to a variety of DNA damaging agents. Here we report a novel quantitative high-throughput ATAD5-luciferase assay that can moni...

  9. Increasing ecological inference from high throughput sequencing of fungi in the environment through a tagging approach

    Science.gov (United States)

    D. Lee Taylor; Michael G. Booth; Jack W. McFarland; Ian C. Herriott; Niall J. Lennon; Chad Nusbaum; Thomas G. Marr

    2008-01-01

    High throughput sequencing methods are widely used in analyses of microbial diversity but are generally applied to small numbers of samples, which precludes characterization of patterns of microbial diversity across space and time. We have designed a primer-tagging approach that allows pooling and subsequent sorting of numerous samples, which is directed to...

  10. The protein crystallography beamline BW6 at DORIS - automatic operation and high-throughput data collection

    CERN Document Server

    Blume, H; Bourenkov, G P; Kosciesza, D; Bartunik, H D

    2001-01-01

    The wiggler beamline BW6 at DORIS has been optimized for de-novo solution of protein structures on the basis of MAD phasing. Facilities for automatic data collection, rapid data transfer and storage, and online processing have been developed which provide adequate conditions for high-throughput applications, e.g., in structural genomics.

  11. The Complete Automation of Cell Culture: Improvements for High-Throughput and High-Content Screening

    NARCIS (Netherlands)

    Jain, S.; Sondervan, D.; Rizzu, P.; Bochdanovits, Z.; Caminada, D.; Heutink, P.

    2011-01-01

    Genomic approaches provide enormous amounts of raw data with regard to genetic variation, the diversity of RNA species, and protein complement. High-throughput (HT) and high-content (HC) cellular screens are ideally suited to contextualize the information gathered from other "omic" approaches into

  12. A high throughput platform for understanding the influence of excipients on physical and chemical stability

    DEFF Research Database (Denmark)

    Raijada, Dhara; Cornett, Claus; Rantanen, Jukka

    2013-01-01

    The present study puts forward a miniaturized high-throughput platform to understand the influence of excipient selection and processing on the stability of a given drug compound. Four model drugs (sodium naproxen, theophylline, amlodipine besylate and nitrofurantoin) and ten different excipients were...

  13. A High-Throughput MALDI-TOF Mass Spectrometry-Based Assay of Chitinase Activity

    Science.gov (United States)

    A high-throughput MALDI-TOF mass spectrometric assay is described for assay of chitolytic enzyme activity. The assay uses unmodified chitin oligosaccharide substrates, and is readily achievable on a microliter scale (2 µL total volume, containing 2 µg of substrate and 1 ng of protein). The speed a...

  14. High-throughput siRNA screening applied to the ubiquitin-proteasome system

    DEFF Research Database (Denmark)

    Poulsen, Esben Guldahl; Nielsen, Sofie V.; Pietras, Elin J.

    2016-01-01

    that are not genetically tractable as, for instance, a yeast model system. Here, we describe a method relying on high-throughput cellular imaging of cells transfected with a targeted siRNA library to screen for components involved in degradation of a protein of interest. This method is a rapid and cost-effective tool...

  15. High-throughput shotgun lipidomics by quadrupole time-of-flight mass spectrometry

    DEFF Research Database (Denmark)

    Ståhlman, Marcus; Ejsing, Christer S.; Tarasov, Kirill

    2009-01-01

    we describe a novel high-throughput shotgun lipidomic platform based on 96-well robot-assisted lipid extraction, automated sample infusion by microfluidic-based nanoelectrospray ionization, and quantitative multiple precursor ion scanning analysis on a quadrupole time-of-flight mass spectrometer...

  16. High-throughput verification of transcriptional starting sites by Deep-RACE

    DEFF Research Database (Denmark)

    Olivarius, Signe; Plessy, Charles; Carninci, Piero

    2009-01-01

    We present a high-throughput method for investigating the transcriptional starting sites of genes of interest, which we named Deep-RACE (Deep–rapid amplification of cDNA ends). Taking advantage of the latest sequencing technology, it allows the parallel analysis of multiple genes and is free...

  17. High throughput generated micro-aggregates of chondrocytes stimulate cartilage formation in vitro and in vivo

    NARCIS (Netherlands)

    Moreira Teixeira, Liliana; Leijten, Jeroen Christianus Hermanus; Sobral, J.; Jin, R.; van Apeldoorn, Aart A.; Feijen, Jan; van Blitterswijk, Clemens; Dijkstra, Pieter J.; Karperien, Hermanus Bernardus Johannes

    2012-01-01

    Cell-based cartilage repair strategies such as matrix-induced autologous chondrocyte implantation (MACI) could be improved by enhancing cell performance. We hypothesised that micro-aggregates of chondrocytes generated in high-throughput prior to implantation in a defect could stimulate cartilaginous

  18. DNA from buccal swabs suitable for high-throughput SNP multiplex analysis.

    Science.gov (United States)

    McMichael, Gai L; Gibson, Catherine S; O'Callaghan, Michael E; Goldwater, Paul N; Dekker, Gustaaf A; Haan, Eric A; MacLennan, Alastair H

    2009-12-01

    We sought a convenient and reliable method for collection of genetic material that is inexpensive and noninvasive and suitable for self-collection and mailing and a compatible, commercial DNA extraction protocol to meet quantitative and qualitative requirements for high-throughput single nucleotide polymorphism (SNP) multiplex analysis on an automated platform. Buccal swabs were collected from 34 individuals as part of a pilot study to test commercially available buccal swabs and DNA extraction kits. DNA was quantified on a spectrofluorometer with Picogreen dsDNA prior to testing the DNA integrity with predesigned SNP multiplex assays. Based on the pilot study results, the Catch-All swabs and Isohelix buccal DNA isolation kit were selected for our high-throughput application and extended to a further 1140 samples as part of a large cohort study. The average DNA yield in the pilot study (n=34) was 1.94 µg ± 0.54 with a 94% genotyping pass rate. For the high-throughput application (n=1140), the average DNA yield was 2.44 µg ± 1.74 with a ≥93% genotyping pass rate. The Catch-All buccal swabs are a convenient and cost-effective alternative to blood sampling. Combined with the Isohelix buccal DNA isolation kit, they provided DNA of sufficient quantity and quality for high-throughput SNP multiplex analysis.

  19. High-Throughput Dietary Exposure Predictions for Chemical Migrants from Food Packaging Materials

    Science.gov (United States)

    United States Environmental Protection Agency researchers have developed a Stochastic Human Exposure and Dose Simulation High-Throughput (SHEDS-HT) model for use in prioritization of chemicals under the ExpoCast program. In this research, new methods were implemented in SHEDS-HT...

  20. New approach for high-throughput screening of drug activity on Plasmodium liver stages.

    NARCIS (Netherlands)

    Gego, A.; Silvie, O.; Franetich, J.F.; Farhati, K.; Hannoun, L.; Luty, A.J.F.; Sauerwein, R.W.; Boucheix, C.; Rubinstein, E.; Mazier, D.

    2006-01-01

    Plasmodium liver stages represent potential targets for antimalarial prophylactic drugs. Nevertheless, there is a lack of molecules active on these stages. We have now developed a new approach for the high-throughput screening of drug activity on Plasmodium liver stages in vitro, based on an

  1. High-throughput tri-colour flow cytometry technique to assess Plasmodium falciparum parasitaemia in bioassays

    DEFF Research Database (Denmark)

    Tiendrebeogo, Regis W; Adu, Bright; Singh, Susheel K

    2014-01-01

    distinction of early ring stages of Plasmodium falciparum from uninfected red blood cells (uRBC) remains a challenge. METHODS: Here, a high-throughput, three-parameter (tri-colour) flow cytometry technique based on mitotracker red dye, the nucleic acid dye coriphosphine O (CPO) and the leucocyte marker CD45...

  2. High throughput system for magnetic manipulation of cells, polymers, and biomaterials

    Science.gov (United States)

    Spero, Richard Chasen; Vicci, Leandra; Cribb, Jeremy; Bober, David; Swaminathan, Vinay; O’Brien, E. Timothy; Rogers, Stephen L.; Superfine, R.

    2008-01-01

    In the past decade, high throughput screening (HTS) has changed the way biochemical assays are performed, but manipulation and mechanical measurement of micro- and nanoscale systems have not benefited from this trend. Techniques using microbeads (particles ∼0.1–10 μm) show promise for enabling high throughput mechanical measurements of microscopic systems. We demonstrate instrumentation to magnetically drive microbeads in a biocompatible, multiwell magnetic force system. It is based on commercial HTS standards and is scalable to 96 wells. Cells can be cultured in this magnetic high throughput system (MHTS). The MHTS can apply independently controlled forces to 16 specimen wells. Force calibrations demonstrate forces in excess of 1 nN, predicted force saturation as a function of pole material, and power-law dependence of F ∼ r^(−2.7±0.1). We employ this system to measure the stiffness of S2R+ Drosophila cells. MHTS technology is a key step toward a high throughput screening system for micro- and nanoscale biophysical experiments. PMID:19044357
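
    The power-law force calibration quoted in this record is the kind of relationship recovered from calibration data by a least-squares fit in log-log space. The (r, F) pairs below are synthetic, generated from a fixed exponent of −2.7, so the fit simply recovers that value.

```python
import math

# Recover the exponent b of F = a * r**b from (r, F) calibration pairs
# by ordinary least squares on log r vs. log F (synthetic data).

def fit_power_law(rs, fs):
    """Return the exponent b of F = a * r**b via a log-log fit."""
    xs = [math.log(r) for r in rs]
    ys = [math.log(f) for f in fs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

rs = [1.0, 2.0, 4.0, 8.0]          # bead-pole distances (arbitrary units)
fs = [r ** -2.7 for r in rs]       # forces generated from exponent -2.7
print(round(fit_power_law(rs, fs), 3))  # -2.7
```

    With real calibration data the residual scatter would yield the quoted ±0.1 uncertainty on the exponent.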

  3. The Impact of Data Fragmentation on High-Throughput Clinical Phenotyping

    Science.gov (United States)

    Wei, Weiqi

    2012-01-01

    Subject selection is essential and has become the rate-limiting step for harvesting knowledge to advance healthcare through clinical research. Present manual approaches inhibit researchers from conducting deep and broad studies and drawing confident conclusions. High-throughput clinical phenotyping (HTCP), a recently proposed approach, leverages…

  4. A high-throughput method for GMO multi-detection using a microfluidic dynamic array

    NARCIS (Netherlands)

    Brod, F.C.A.; Dijk, van J.P.; Voorhuijzen, M.M.; Dinon, A.Z.; Guimarães, L.H.S.; Scholtens, I.M.J.; Arisi, A.C.M.; Kok, E.J.

    2014-01-01

    The ever-increasing production of genetically modified crops generates a demand for high-throughput DNA-based methods for the enforcement of genetically modified organisms (GMO) labelling requirements. The application of standard real-time PCR will become increasingly costly with the growth of the

  5. High-throughput assessment of context-dependent effects of chromatin proteins

    NARCIS (Netherlands)

    Brueckner, L. (Laura); Van Arensbergen, J. (Joris); Akhtar, W. (Waseem); L. Pagie (Ludo); B. van Steensel (Bas)

    2016-01-01

    Background: Chromatin proteins control gene activity in a concerted manner. We developed a high-throughput assay to study the effects of the local chromatin environment on the regulatory activity of a protein of interest. The assay combines a previously reported multiplexing strategy

  6. Establishment of integrated protocols for automated high throughput kinetic chlorophyll fluorescence analyses.

    Science.gov (United States)

    Tschiersch, Henning; Junker, Astrid; Meyer, Rhonda C; Altmann, Thomas

    2017-01-01

    Automated plant phenotyping has been established as a powerful new tool in studying plant growth, development and response to various types of biotic or abiotic stressors. Respective facilities mainly apply non-invasive imaging based methods, which enable the continuous quantification of the dynamics of plant growth and physiology during developmental progression. However, especially for plants of larger size, integrative, automated and high throughput measurements of complex physiological parameters such as photosystem II efficiency determined through kinetic chlorophyll fluorescence analysis remain a challenge. We present the technical installations and the establishment of experimental procedures that allow the integrated high throughput imaging of all commonly determined PSII parameters for small and large plants using kinetic chlorophyll fluorescence imaging systems (FluorCam, PSI) integrated into automated phenotyping facilities (Scanalyzer, LemnaTec). Besides determination of the maximum PSII efficiency, we focused on implementation of high throughput amenable protocols recording PSII operating efficiency (ΦPSII). Using the presented setup, this parameter is shown to be reproducibly measured in differently sized plants despite the corresponding variation in distance between plants and light source that caused small differences in incident light intensity. Values of ΦPSII obtained with the automated chlorophyll fluorescence imaging setup correlated very well with conventionally determined data using a spot-measuring chlorophyll fluorometer. The established high throughput operating protocols enable the screening of up to 1080 small and 184 large plants per hour, respectively. The application of the implemented high throughput protocols is demonstrated in screening experiments performed with large Arabidopsis and maize populations assessing natural variation in PSII efficiency. The incorporation of imaging systems suitable for kinetic chlorophyll
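
    The PSII operating efficiency (ΦPSII) screened for above follows a standard chlorophyll-fluorescence relation, ΦPSII = (Fm′ − F)/Fm′, where F is the steady-state fluorescence in light and Fm′ the light-adapted maximal fluorescence. The signal values in the sketch below are illustrative, not data from the cited study.

```python
# PSII operating efficiency from two fluorescence signals
# (illustrative values; real signals come from the imaging system).

def phi_psii(f_steady, fm_prime):
    """PhiPSII = (Fm' - F) / Fm' for light-adapted measurements."""
    if fm_prime <= 0:
        raise ValueError("Fm' must be positive")
    return (fm_prime - f_steady) / fm_prime

print(phi_psii(300.0, 1000.0))  # 0.7
```

    In a high-throughput setup this calculation is applied per pixel of the fluorescence image and then averaged over the segmented plant area.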

  7. R2C: improving ab initio residue contact map prediction using dynamic fusion strategy and Gaussian noise filter.

    Science.gov (United States)

    Yang, Jing; Jin, Qi-Yu; Zhang, Biao; Shen, Hong-Bin

    2016-08-15

    Inter-residue contacts in proteins dictate the topology of protein structures. They are crucial for protein folding and structural stability. Accurate prediction of residue contacts, especially long-range contacts, is important to the quality of ab initio structure modeling since they can enforce strong restraints to structure assembly. In this paper, we present a new Residue-Residue Contact predictor called R2C that combines machine learning-based and correlated mutation analysis-based methods, together with a two-dimensional Gaussian noise filter to enhance the long-range residue contact prediction. Our results show that the outputs from the machine learning-based method are concentrated with better performance on short-range contacts; while for the correlated mutation analysis-based approach, the predictions are widespread with higher accuracy on long-range contacts. An effective query-driven dynamic fusion strategy proposed here takes full advantage of the two different methods, resulting in an impressive overall accuracy improvement. We also show that the contact map directly from the prediction model contains interesting Gaussian noise, which has not been discovered before. Different from recent studies that tried to further enhance the quality of the contact map by removing its transitive noise, we designed a new two-dimensional Gaussian noise filter, which was especially helpful for reinforcing the long-range residue contact prediction. Tested on recent CASP10/11 datasets, the overall top L/5 accuracy of our final R2C predictor is 17.6%/15.5% higher than the pure machine learning-based method and 7.8%/8.3% higher than the correlated mutation analysis-based approach for the long-range residue contact prediction. http://www.csbio.sjtu.edu.cn/bioinf/R2C/ Contact: hbshen@sjtu.edu.cn Supplementary data are available at Bioinformatics online.
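
    The idea of treating a contact map as a noisy two-dimensional signal can be illustrated with a small Gaussian smoothing pass over a predicted contact matrix. Note this simplified blur only conveys the neighbourhood idea; it is not the actual noise-modelling filter R2C implements.

```python
import math

# Smooth a (tiny, hypothetical) predicted contact map with a
# normalized 2-D Gaussian kernel. Real contact maps are L x L
# probability matrices over residue pairs.

def gaussian_kernel(radius=1, sigma=1.0):
    """Normalized (2*radius+1)^2 Gaussian weights."""
    k = [[math.exp(-(i * i + j * j) / (2 * sigma * sigma))
          for j in range(-radius, radius + 1)]
         for i in range(-radius, radius + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def smooth(cmap, radius=1, sigma=1.0):
    """Convolve a square map with the Gaussian kernel (zero padding)."""
    k = gaussian_kernel(radius, sigma)
    n = len(cmap)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < n and 0 <= jj < n:
                        acc += k[di + radius][dj + radius] * cmap[ii][jj]
            out[i][j] = acc
    return out

cmap = [[0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0]]
sm = smooth(cmap)
# The isolated "contact" spreads to neighbours, centre weight largest.
print(round(sm[1][1], 3))
```

    R2C goes further: it estimates the Gaussian noise component and subtracts it, rather than merely blurring, which is what reinforces the long-range predictions.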

  8. Development and application of high-throughput techniques for evaluation of photopolymerizable monomer formulations

    Science.gov (United States)

    Johnson, Peter Michael

    Photopolymerization of multicomponent mixtures is widely used in industry to create a wide range of material properties for a variety of applications including microelectronics, contact lenses, dental restorations, adhesives and coatings. Under typical photopolymerization conditions, crosslinking polymerizations exhibit non-classical phenomena such as autoacceleration, autodeceleration, incomplete conversion, and reaction diffusion controlled termination. These non-classical behaviors present difficult challenges to predicting network properties over various polymerization conditions. However, the complex photopolymerization reaction that generates these properties is modulated by the variety of conditions that these materials are subjected to both during and after the photopolymerization. A design of experiments approach limits the number of systems that are required for analysis, but a significant fraction of photopolymer analysis techniques are time-consuming and the conclusions are limited to the region analyzed. This process creates a large bottleneck in the ability to optimize formulations and understand the effects that develop such advantageous properties. The wide spectrum of monomer chemistries adds to this complexity and hinders the efficient optimization of various control parameters to produce a resulting polymer with the suitable material properties. In this work, high-throughput techniques for analyzing photopolymer conversion and modeling copolymerizations were developed. Controllable gradients of significant properties in photopolymerization were employed to produce a two-factor system with all available analysis conditions on a single substrate. Gradients of composition, light intensity, exposure time, and temperature were all developed to produce combinatorial samples that were subsequently analyzed for their conversion. This technique allows for the rapid assessment of conversion over the entire substrate, generating a large set of data in a

  9. Protocol: A high-throughput DNA extraction system suitable for conifers

    Directory of Open Access Journals (Sweden)

    Rajora Om P

    2008-08-01

    Abstract Background High throughput DNA isolation from plants is a major bottleneck for most studies requiring large sample sizes. A variety of protocols have been developed for DNA isolation from plants. However, many species, including conifers, have high contents of secondary metabolites that interfere with the extraction process or the subsequent analysis steps. Here, we describe a procedure for high-throughput DNA isolation from conifers. Results We have developed a high-throughput DNA extraction protocol for conifers using an automated liquid handler and modifying the Qiagen MagAttract Plant Kit protocol. The modifications involve changes to the buffer system and improving the protocol so that it almost doubles the number of samples processed per kit, which significantly reduces the overall costs. We describe two versions of the protocol: one for medium-throughput (MTP) and another for high-throughput (HTP) DNA isolation. The HTP version works from start to end in the industry-standard 96-well format, while the MTP version provides higher DNA yields per sample processed. We have successfully used the protocol for DNA extraction and genotyping of thousands of individuals of several spruce and a pine species. Conclusion A high-throughput system for DNA extraction from conifer needles and seeds has been developed and validated. The quality of the isolated DNA was comparable with that obtained from two commonly used methods: the silica-spin column and the classic CTAB protocol. Our protocol provides a fully automatable and cost-effective solution for processing large numbers of conifer samples.

  10. A functional analysis of the CREB signaling pathway using HaloCHIP-chip and high throughput reporter assays

    Directory of Open Access Journals (Sweden)

    Aldred Shelley F

    2009-10-01

    Abstract Background Regulation of gene expression is essential for normal development and cellular growth. Transcriptional events are tightly controlled both spatially and temporally by specific DNA-protein interactions. In this study we finely map the genome-wide targets of the CREB protein across all known and predicted human promoters, and characterize the functional consequences of a subset of these binding events using high-throughput reporter assays. To measure CREB binding, we used HaloCHIP, an antibody-free alternative to the ChIP method that utilizes the HaloTag fusion protein, and also high-throughput promoter-luciferase reporter assays, which provide rapid and quantitative screening of promoters for transcriptional activation or repression in living cells. Results In analysis of CREB genome-wide binding events using a comprehensive DNA microarray of human promoters, we observe for the first time that CREB has a strong preference for binding at bidirectional promoters and unlike unidirectional promoters, these binding events often occur downstream of transcription start sites. Comparison between HaloCHIP-chip and ChIP-chip data reveal this to be true for both methodologies, indicating it is not a bias of the technology chosen. Transcriptional data obtained from promoter-luciferase reporter arrays also show an unprecedented, high level of activation of CREB-bound promoters in the presence of the co-activator protein TORC1. Conclusion These data suggest for the first time that TORC1 provides directional information when CREB is bound at bidirectional promoters and possible pausing of the CREB protein after initial transcriptional activation. Also, this combined approach demonstrates the ability to more broadly characterize CREB protein-DNA interactions wherein not only DNA binding sites are discovered, but also the potential of the promoter sequence to respond to CREB is evaluated.

  11. Bulk segregant analysis by high-throughput sequencing reveals a novel xylose utilization gene from Saccharomyces cerevisiae.

    Directory of Open Access Journals (Sweden)

    Jared W Wenger

    2010-05-01

    Fermentation of xylose is a fundamental requirement for the efficient production of ethanol from lignocellulosic biomass sources. Although they aggressively ferment hexoses, it has long been thought that native Saccharomyces cerevisiae strains cannot grow fermentatively or non-fermentatively on xylose. Population surveys have uncovered a few naturally occurring strains that are weakly xylose-positive, and some S. cerevisiae strains have been genetically engineered to ferment xylose, but no strain, either natural or engineered, has yet been reported to ferment xylose as efficiently as glucose. Here, we used a medium-throughput screen to identify Saccharomyces strains that can increase in optical density when xylose is presented as the sole carbon source. We identified 38 strains that have this xylose utilization phenotype, including strains of S. cerevisiae, other sensu stricto members, and hybrids between them. All the S. cerevisiae xylose-utilizing strains we identified are wine yeasts, and for those that could produce meiotic progeny, the xylose phenotype segregates as a single-gene trait. We mapped this gene by Bulk Segregant Analysis (BSA) using tiling microarrays and high-throughput sequencing. The gene is a putative xylitol dehydrogenase, which we name XDH1; it is located in the subtelomeric region of the right end of chromosome XV, in a region not present in the S288c reference genome. We further characterized the xylose phenotype by performing gene expression microarrays and by genetically dissecting the endogenous Saccharomyces xylose pathway. We have demonstrated that natural S. cerevisiae yeasts are capable of utilizing xylose as the sole carbon source, characterized the genetic basis for this trait as well as the endogenous xylose utilization pathway, and demonstrated the feasibility of BSA using high-throughput sequencing.
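
The mapping logic behind BSA with sequencing data can be illustrated in a few lines. The sketch below (hypothetical read counts and SNP labels; not the authors' pipeline) computes the per-SNP allele-frequency difference between the two phenotype-sorted pools: near the causal locus the difference approaches 1, while unlinked SNPs stay near 0.

```python
def allele_freq(ref_reads: int, alt_reads: int) -> float:
    """Frequency of the alternate allele at one SNP in a sequenced pool."""
    total = ref_reads + alt_reads
    return alt_reads / total if total else 0.0

def bsa_delta(snps):
    """Per-SNP allele-frequency difference between the phenotype-positive
    and phenotype-negative pools; a large |delta| flags linkage."""
    return [(name, round(allele_freq(*pos) - allele_freq(*neg), 2))
            for name, pos, neg in snps]

# Hypothetical (ref, alt) read counts for each pool at two SNPs.
snps = [
    ("chrXV:unlinked",  (48, 52), (51, 49)),  # pools look alike: unlinked
    ("chrXV:near_XDH1", (3, 97),  (95, 5)),   # pools diverge: linked
]
print(bsa_delta(snps))
```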

  12. Automated High Throughput Protein Crystallization Screening at Nanoliter Scale and Protein Structural Study on Lactate Dehydrogenase

    Energy Technology Data Exchange (ETDEWEB)

    Li, Fenglei [Iowa State Univ., Ames, IA (United States)

    2006-08-09

    The purposes of our research were: (1) to develop an economical, easy-to-use, automated, high-throughput system for large-scale protein crystallization screening; (2) to develop a new protein crystallization method with high screening efficiency, low protein consumption, and complete compatibility with the high-throughput screening system; and (3) to determine the structure of lactate dehydrogenase complexed with NADH by X-ray protein crystallography to study its inherent structural properties. First, we demonstrated that large-scale protein crystallization screening can be performed in a high-throughput manner at low cost and with easy operation. The overall system integrates liquid dispensing, crystallization, and detection, and serves as a complete solution for protein crystallization screening. The system can dispense protein and multiple different precipitants at nanoliter scale and in parallel. A new detection scheme, native fluorescence, was developed in this system to form a two-detector system together with a visible-light detector for reading out crystallization screening results. This detection scheme can eliminate common false positives by distinguishing protein crystals from inorganic crystals in a high-throughput and non-destructive manner. The entire system, from liquid dispensing and crystallization to crystal detection, is essentially parallel, high-throughput, and compatible with automation. The system was successfully demonstrated by lysozyme crystallization screening. Second, we developed a new crystallization method with high screening efficiency, low protein consumption, and compatibility with automation and high throughput. In this crystallization method, a gas-permeable membrane is employed to achieve the gentle evaporation required by protein crystallization. Protein consumption is significantly reduced, to nanoliter scale for each condition, and thus permits exploring more conditions in a phase diagram for a given amount of protein. In addition

  13. Genome-Wide Assessment of the Binding Effects of Artificial Transcriptional Activators by High-Throughput Sequencing.

    Science.gov (United States)

    Chandran, Anandhakumar; Syed, Junetha; Li, Yue; Sato, Shinsuke; Bando, Toshikazu; Sugiyama, Hiroshi

    2016-10-17

    One of the major goals in DNA-based personalized medicine is the development of sequence-specific small molecules to target the genome. SAHA-PIPs belong to such a class of small molecules. In the context of the complex eukaryotic genome, the differential biological effects of SAHA-PIPs are unclear. This question can be addressed by identifying the binding regions across the genome; however, it is a challenge to enrich small-molecule-bound DNA without chemical crosslinking. Here, we developed a method that employs high-throughput sequencing to map the binding regions of small molecules throughout the chromatinized human genome. Analysis of the sequenced data confirmed the presence of specific binding sites for SAHA-PIPs in the enriched sequence reads. Mapping the binding sites and enriched regions on the human genome clarifies the reason for the distinct biological effects of SAHA-PIPs. This approach will be useful for identifying the function of other small molecules on a large scale. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. HTSSIP: An R package for analysis of high throughput sequencing data from nucleic acid stable isotope probing (SIP) experiments.

    Directory of Open Access Journals (Sweden)

    Nicholas D Youngblut

    Combining high-throughput sequencing with stable isotope probing (HTS-SIP) is a powerful method for mapping in situ metabolic processes to thousands of microbial taxa. However, accurately mapping metabolic processes to taxa is complex and challenging. Multiple HTS-SIP data analysis methods have been developed, including high-resolution stable isotope probing (HR-SIP), multi-window high-resolution stable isotope probing (MW-HR-SIP), quantitative stable isotope probing (qSIP), and ΔBD. Currently, there is no publicly available software designed specifically for analyzing HTS-SIP data. To address this shortfall, we have developed the HTSSIP R package, an open-source, cross-platform toolset for conducting HTS-SIP analyses in a straightforward and easily reproducible manner. The HTSSIP package, along with full documentation and examples, is available from CRAN at https://cran.r-project.org/web/packages/HTSSIP/index.html and GitHub at https://github.com/buckleylab/HTSSIP.
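
As an illustration of one of the listed approaches, the ΔBD method compares a taxon's abundance-weighted mean buoyant density between labeled and control gradients. The sketch below uses invented fraction data and is written in Python; it is not code from the HTSSIP package itself.

```python
def weighted_mean_bd(fractions):
    """Abundance-weighted mean buoyant density (g/ml) of one taxon.
    fractions: list of (buoyant_density, relative_abundance) pairs."""
    total = sum(a for _, a in fractions)
    return sum(bd * a for bd, a in fractions) / total

# Invented gradient profiles for one taxon (density in g/ml, abundance share).
control = [(1.70, 0.6), (1.72, 0.3), (1.74, 0.1)]  # unlabeled substrate
labeled = [(1.70, 0.1), (1.72, 0.3), (1.74, 0.6)]  # isotopically labeled

# A positive shift toward denser fractions suggests isotope incorporation.
delta_bd = weighted_mean_bd(labeled) - weighted_mean_bd(control)
print(round(delta_bd, 3))
```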

  15. High-throughput Sequencing Based Immune Repertoire Study during Infectious Disease

    Directory of Open Access Journals (Sweden)

    Dongni Hou

    2016-08-01

    The selectivity of the adaptive immune response is based on the enormous diversity of T and B cell antigen-specific receptors. The immune repertoire, the collection of T and B cells with functional diversity in the circulatory system at any given time, is dynamic and reflects the essence of immune selectivity. In this article, we review recent advances in immune repertoire studies of infectious diseases achieved by traditional techniques and by high-throughput sequencing techniques. High-throughput sequencing enables the determination of the complementarity-determining regions of lymphocyte receptors with unprecedented efficiency and scale. This progress in methodology enhances the understanding of immunologic changes during pathogen challenge, and also provides a basis for the further development of novel diagnostic markers, immunotherapies, and vaccines.

  16. Nanostructured biosensing platform-shadow edge lithography for high-throughput nanofabrication.

    Science.gov (United States)

    Bai, John G; Yeo, Woon-Hong; Chung, Jae-Hyun

    2009-02-07

    One of the critical challenges in nanostructured biosensors is to manufacture an addressable array of nanopatterns at low cost. The addressable array (1) provides multiplexing for biomolecule detection and (2) enables direct detection of biomolecules without labeling and amplification. To fabricate such an array of nanostructures, current nanolithography methods are limited by the lack of either high throughput or high resolution. This paper presents a high-resolution and high-throughput nanolithography method using the compensated shadow effect in high-vacuum evaporation. The approach enables the fabrication of uniform nanogaps down to 20 nm in width across a 100 mm silicon wafer. The nanogap pattern is used as a template for the routine fabrication of zero-, one-, and two-dimensional nanostructures with a high yield. The method can facilitate the fabrication of nanostructured biosensors on a wafer scale at a low manufacturing cost.

  17. An automated system for high-throughput single cell-based breeding

    Science.gov (United States)

    Yoshimoto, Nobuo; Kida, Akiko; Jie, Xu; Kurokawa, Masaya; Iijima, Masumi; Niimi, Tomoaki; Maturana, Andrés D.; Nikaido, Itoshi; Ueda, Hiroki R.; Tatematsu, Kenji; Tanizawa, Katsuyuki; Kondo, Akihiko; Fujii, Ikuo; Kuroda, Shun'ichi

    2013-01-01

    When establishing the most appropriate cells for practical use in regenerative medicine and the production of various biopharmaceuticals from the huge numbers in a cell library, the cell heterogeneity often found in an isogenic cell population limits the refinement of clonal cell culture. Here, we demonstrated high-throughput screening for the most suitable cells in a cell library by an automated, undisruptive single-cell analysis and isolation system, followed by expansion of the isolated single cells. This system enabled establishment of the most suitable cells, such as embryonic stem cells with the highest expression of the pluripotency marker Rex1 and hybridomas with the highest antibody secretion, which could not be achieved by conventional high-throughput cell screening systems (e.g., a fluorescence-activated cell sorter). This single cell-based breeding system may be a powerful tool for analyzing stochastic fluctuations and delineating their molecular mechanisms. PMID:23378922

  18. A high-throughput, multi-channel photon-counting detector with picosecond timing

    CERN Document Server

    Lapington, J S; Miller, G M; Ashton, T J R; Jarron, P; Despeisse, M; Powolny, F; Howorth, J; Milnes, J

    2009-01-01

    High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high-content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high-time-resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies: small pore microchanne...

  19. Robust, high-throughput solution structural analyses by small angle X-ray scattering (SAXS)

    Energy Technology Data Exchange (ETDEWEB)

    Hura, Greg L.; Menon, Angeli L.; Hammel, Michal; Rambo, Robert P.; Poole II, Farris L.; Tsutakawa, Susan E.; Jenney Jr, Francis E.; Classen, Scott; Frankel, Kenneth A.; Hopkins, Robert C.; Yang, Sungjae; Scott, Joseph W.; Dillard, Bret D.; Adams, Michael W. W.; Tainer, John A.

    2009-07-20

    We present an efficient pipeline enabling high-throughput analysis of protein structure in solution with small angle X-ray scattering (SAXS). Our SAXS pipeline combines automated sample handling of microliter volumes, temperature and anaerobic control, rapid data collection and data analysis, and couples structural analysis with automated archiving. We subjected 50 representative proteins, mostly from Pyrococcus furiosus, to this pipeline and found that 30 were multimeric structures in solution. SAXS analysis allowed us to distinguish aggregated and unfolded proteins, define global structural parameters and oligomeric states for most samples, identify shapes and similar structures for 25 unknown structures, and determine envelopes for 41 proteins. We believe that high-throughput SAXS is an enabling technology that may change the way that structural genomics research is done.
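
A standard first step in automated SAXS analysis of this kind is Guinier analysis, which extracts the radius of gyration Rg from the low-q region via ln I(q) ≈ ln I(0) − (Rg²/3)q². The sketch below (synthetic, noise-free data; not the authors' pipeline software) recovers Rg with a linear fit:

```python
import numpy as np

def guinier_rg(q, intensity):
    """Radius of gyration from a linear fit of ln I vs q^2 (Guinier region)."""
    slope, _ = np.polyfit(q**2, np.log(intensity), 1)
    return float(np.sqrt(-3.0 * slope))

# Synthetic low-q profile for a particle with Rg = 20 Å.
rg_true = 20.0
q = np.linspace(0.005, 0.05, 30)                  # Å^-1, keeps q*Rg small
i_of_q = 1000.0 * np.exp(-(q * rg_true) ** 2 / 3.0)

print(round(guinier_rg(q, i_of_q), 1))  # → 20.0
```

With real data, the fit would be restricted to points satisfying q·Rg ≲ 1.3 and preceded by the aggregation checks the pipeline describes.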

  20. The promise and challenge of high-throughput sequencing of the antibody repertoire

    Science.gov (United States)

    Georgiou, George; Ippolito, Gregory C; Beausang, John; Busse, Christian E; Wardemann, Hedda; Quake, Stephen R

    2014-01-01

    Efforts to determine the antibody repertoire encoded by B cells in the blood or lymphoid organs using high-throughput DNA sequencing technologies have been advancing at an extremely rapid pace and are transforming our understanding of humoral immune responses. Information gained from high-throughput DNA sequencing of immunoglobulin genes (Ig-seq) can be applied to detect B-cell malignancies with high sensitivity, to discover antibodies specific for antigens of interest, to guide vaccine development and to understand autoimmunity. Rapid progress in the development of experimental protocols and informatics analysis tools is helping to reduce sequencing artifacts, to achieve more precise quantification of clonal diversity and to extract the most pertinent biological information. That said, broader application of Ig-seq, especially in clinical settings, will require the development of a standardized experimental design framework that will enable the sharing and meta-analysis of sequencing data generated by different laboratories. PMID:24441474

  1. tcpl: the ToxCast pipeline for high-throughput screening data.

    Science.gov (United States)

    Filer, Dayne L; Kothiya, Parth; Setzer, R Woodrow; Judson, Richard S; Martin, Matthew T

    2017-02-15

    Large high-throughput screening (HTS) efforts are widely used in drug development and chemical toxicity screening. Wide use and integration of these data can benefit from an efficient, transparent, and reproducible data pipeline. The tcpl R package and its associated MySQL database provide a generalized platform for efficiently storing, normalizing, and dose-response modeling of large high-throughput and high-content chemical screening data. The novel dose-response modeling algorithm has been tested against millions of diverse dose-response series, and robustly fits data with outliers and cytotoxicity-related signal loss. tcpl is freely available on the Comprehensive R Archive Network under the GPL-2 license. Contact: martin.matt@epa.gov.
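
To make the dose-response modeling step concrete, the toy sketch below fits a one-site Hill curve to invented concentration-response data by grid-search least squares. It is only schematic: tcpl's actual algorithm uses maximum-likelihood fitting with outlier-robust error models and several competing model forms.

```python
import numpy as np

def hill(conc, top, ac50):
    """Hill curve with unit slope: response rises from 0 toward `top`,
    reaching half-maximum at concentration `ac50`."""
    return top * conc / (ac50 + conc)

def fit_hill(conc, resp):
    """Crude grid-search least squares over (top, ac50)."""
    best = None
    for top in np.linspace(10.0, 120.0, 111):        # response units
        for ac50 in np.logspace(-2, 2, 201):         # µM
            sse = float(np.sum((resp - hill(conc, top, ac50)) ** 2))
            if best is None or sse < best[0]:
                best = (sse, float(top), float(ac50))
    return best[1], best[2]

conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0])       # µM
resp = hill(conc, top=100.0, ac50=1.0)               # noise-free toy data
top, ac50 = fit_hill(conc, resp)
print(round(top), round(ac50, 2))
```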

  2. Current developments in high-throughput analysis for microalgae cellular contents.

    Science.gov (United States)

    Lee, Tsung-Hua; Chang, Jo-Shu; Wang, Hsiang-Yu

    2013-11-01

    Microalgae have emerged as one of the most promising feedstocks for biofuels and bio-based chemical production. However, due to the lack of effective tools enabling rapid and high-throughput analysis of the content of microalgae biomass, the efficiency of screening and identification of microalgae with desired functional components from the natural environment is usually quite low. Moreover, the real-time monitoring of the production of target components from microalgae is also difficult. Recently, research efforts focusing on overcoming this limitation have started. In this review, the recent development of high-throughput methods for analyzing microalgae cellular contents is summarized. The future prospects and impacts of these detection methods in microalgae-related processing and industries are also addressed. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Quantitative monitoring of Arabidopsis thaliana growth and development using high-throughput plant phenotyping.

    Science.gov (United States)

    Arend, Daniel; Lange, Matthias; Pape, Jean-Michel; Weigelt-Fischer, Kathleen; Arana-Ceballos, Fernando; Mücke, Ingo; Klukas, Christian; Altmann, Thomas; Scholz, Uwe; Junker, Astrid

    2016-08-16

    With the implementation of novel automated, high-throughput methods and facilities in recent years, plant phenomics has developed into a highly interdisciplinary research domain integrating biology, engineering, and bioinformatics. Here we present a dataset from a non-invasive high-throughput plant phenotyping experiment, which uses image- and image-analysis-based approaches to monitor the growth and development of 484 Arabidopsis thaliana (thale cress) plants. The result is a comprehensive dataset of images and extracted phenotypical features. Such datasets require detailed documentation, standardized description of experimental metadata, and sustainable data storage and publication in order to ensure the reproducibility of experiments, data reuse, and comparability among the scientific community. The dataset presented here has therefore been annotated using the standardized ISA-Tab format, taking into account the recently published recommendations for the semantic description of plant phenotyping experiments.

  4. Development of New Sensing Materials Using Combinatorial and High-Throughput Experimentation

    Science.gov (United States)

    Potyrailo, Radislav A.; Mirsky, Vladimir M.

    New sensors with improved performance characteristics are needed for applications as diverse as bedside continuous monitoring, tracking of environmental pollutants, monitoring of food and water quality, monitoring of chemical processes, and safety in industrial, consumer, and automotive settings. Typical targets for sensor improvement are selectivity, long-term stability, sensitivity, response time, reversibility, and reproducibility. The design of new sensing materials is an important cornerstone in the effort to develop new sensors. Often, sensing materials are too complex for their performance to be predicted quantitatively at the design stage. Thus, combinatorial and high-throughput experimentation methodologies provide an opportunity to generate the new data required to discover new sensing materials and/or to optimize existing material compositions. The goal of this chapter is to provide an overview of the key concepts of experimental development of sensing materials using combinatorial and high-throughput experimentation tools, and to promote additional fruitful interactions between computational scientists and experimentalists.

  5. High-throughput clone screening followed by protein expression cross-check: A visual assay platform.

    Science.gov (United States)

    Bose, Partha Pratim; Kumar, Prakash

    2017-01-01

    In high-throughput biotechnology and structural biology, molecular cloning is an essential prerequisite for attaining high yields of recombinant protein. However, a rapid, cost-effective, easy clone screening protocol is still required to identify colonies carrying the desired insert, along with a cross-check method to certify expression of the desired protein as the end product. We report an easy, fast, sensitive, and cheap visual clone screening and protein expression cross-check protocol employing a gold nanoparticle-based plasmonic detection phenomenon. This is a non-gel, non-PCR-based visual detection technique, which can be used for simultaneous high-throughput clone screening followed by determination of the expression of the desired protein. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Engineering High Affinity Protein-Protein Interactions Using a High-Throughput Microcapillary Array Platform.

    Science.gov (United States)

    Lim, Sungwon; Chen, Bob; Kariolis, Mihalis S; Dimov, Ivan K; Baer, Thomas M; Cochran, Jennifer R

    2017-02-17

    Affinity maturation of protein-protein interactions requires iterative rounds of protein library generation and high-throughput screening to identify variants that bind with increased affinity to a target of interest. We recently developed a multipurpose protein engineering platform, termed μSCALE (Microcapillary Single Cell Analysis and Laser Extraction). This technology enables high-throughput screening of libraries of millions of cell-expressing protein variants based on their binding properties or functional activity. Here, we demonstrate the first use of the μSCALE platform for affinity maturation of a protein-protein binding interaction. In this proof-of-concept study, we engineered an extracellular domain of the Axl receptor tyrosine kinase to bind tighter to its ligand Gas6. Within 2 weeks, two iterative rounds of library generation and screening resulted in engineered Axl variants with a 50-fold decrease in kinetic dissociation rate, highlighting the use of μSCALE as a new tool for directed evolution.
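
To put the reported kinetics in perspective: for a 1:1 complex, half-life scales inversely with the dissociation rate (t½ = ln 2 / koff), so a 50-fold slower off-rate means a 50-fold longer-lived complex. A back-of-envelope sketch with a hypothetical parental off-rate (the paper does not state absolute values here):

```python
import math

def complex_half_life(koff_per_s: float) -> float:
    """Half-life (s) of a 1:1 complex dissociating at rate koff."""
    return math.log(2) / koff_per_s

koff_wt  = 1e-3          # s^-1, hypothetical parental Axl-Gas6 off-rate
koff_eng = koff_wt / 50  # engineered variant: 50-fold slower dissociation

fold = complex_half_life(koff_eng) / complex_half_life(koff_wt)
print(round(fold))  # → 50
```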

  7. Post-high-throughput screening analysis: an empirical compound prioritization scheme.

    Science.gov (United States)

    Oprea, Tudor I; Bologa, Cristian G; Edwards, Bruce S; Prossnitz, Eric R; Sklar, Larry A

    2005-08-01

    An empirical scheme to evaluate and prioritize screening hits from high-throughput screening (HTS) is proposed. Negative scores are given when chemotypes found in the HTS hits are present in annotated databases such as MDDR and WOMBAT or for testing positive in toxicity-related experiments reported in TOXNET. Positive scores were given for higher measured biological activities, for testing negative in toxicity-related literature, and for good overlap when profiled against drug-related properties. Particular emphasis is placed on estimating aqueous solubility to prioritize in vivo experiments. This empirical scheme is given as an illustration to assist the decision-making process in selecting chemotypes and individual compounds for further experimentation, when confronted with multiple hits from high-throughput experiments. The decision-making process is discussed for a set of G-protein coupled receptor antagonists and validated on a literature example for dihydrofolate reductase inhibition.
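
The flavor of such an empirical scoring scheme can be sketched as follows. The criteria and weights here are invented for illustration; the paper's actual scheme scores chemotypes against MDDR, WOMBAT, and TOXNET annotations and measured properties.

```python
def priority_score(hit: dict) -> int:
    """Sum of empirical bonuses and penalties (weights are illustrative)."""
    score = {"high": 2, "medium": 1, "low": 0}[hit["activity"]]
    score += -2 if hit["toxnet_positive"] else 1   # toxicity literature
    score += -1 if hit["known_chemotype"] else 0   # already annotated
    score += 1 if hit["soluble"] else -2           # aqueous solubility gate
    return score

hits = [
    {"name": "cmpd-A", "activity": "high", "toxnet_positive": False,
     "known_chemotype": False, "soluble": True},
    {"name": "cmpd-B", "activity": "medium", "toxnet_positive": True,
     "known_chemotype": True, "soluble": False},
]
ranked = sorted(hits, key=priority_score, reverse=True)
print([h["name"] for h in ranked])  # → ['cmpd-A', 'cmpd-B']
```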

  8. High-throughput investigation of catalysts for JP-8 fuel cracking to liquefied petroleum gas.

    Science.gov (United States)

    Bedenbaugh, John E; Kim, Sungtak; Sasmaz, Erdem; Lauterbach, Jochen

    2013-09-09

    Portable power technologies for military applications necessitate the production of fuels similar to LPG from existing feedstocks. Catalytic cracking of military jet fuel to form a mixture of C₂-C₄ hydrocarbons was investigated using high-throughput experimentation. Cracking experiments were performed in a gas-phase, 16-sample high-throughput reactor. Zeolite ZSM-5 catalysts with low Si/Al ratios (≤25) demonstrated the highest production of C₂-C₄ hydrocarbons at moderate reaction temperatures (623-823 K). ZSM-5 catalysts were optimized for JP-8 cracking activity to LPG through varying reaction temperature and framework Si/Al ratio. The reducing atmosphere required during catalytic cracking resulted in coking of the catalyst and a commensurate decrease in conversion rate. Rare earth metal promoters for ZSM-5 catalysts were screened to reduce coking deactivation rates, while noble metal promoters reduced onset temperatures for coke burnoff regeneration.

  9. Macro-to-micro structural proteomics: native source proteins for high-throughput crystallization.

    Directory of Open Access Journals (Sweden)

    Monica Totir

    Structural biology and structural genomics projects routinely rely on recombinantly expressed proteins, but many proteins and complexes are difficult to obtain by this approach. We investigated native source proteins for high-throughput protein crystallography applications. The Escherichia coli proteome was fractionated, purified, crystallized, and structurally characterized. Macro-scale fermentation and fractionation were used to subdivide the soluble proteome into 408 unique fractions, of which 295 yielded crystals in microfluidic crystallization chips. Of the 295 crystals, 152 were selected for optimization, diffraction screening, and data collection. Twenty-three structures were determined, four of which were novel. This study demonstrates the utility of native source proteins for high-throughput crystallography.

  10. The complete automation of cell culture: improvements for high-throughput and high-content screening.

    Science.gov (United States)

    Jain, Shushant; Sondervan, David; Rizzu, Patrizia; Bochdanovits, Zoltan; Caminada, Daniel; Heutink, Peter

    2011-09-01

    Genomic approaches provide enormous amounts of raw data with regard to genetic variation, the diversity of RNA species, and protein complement. High-throughput (HT) and high-content (HC) cellular screens are ideally suited to contextualize the information gathered from other "omic" approaches into networks and can be used for the identification of therapeutic targets. Current methods used for HT-HC screens are laborious, time-consuming, and prone to human error. The authors thus developed an automated high-throughput system with an integrated fluorescent imager for HC screens called the AI.CELLHOST. The implementation of user-defined culturing and assay plate setup parameters allows parallel operation of multiple screens in diverse mammalian cell types. The authors demonstrate that such a system is able to successfully maintain different cell lines in culture for extended periods of time as well as significantly increasing throughput, accuracy, and reproducibility of HT and HC screens.

  11. Multiple and high-throughput droplet reactions via combination of microsampling technique and microfluidic chip

    KAUST Repository

    Wu, Jinbo

    2012-11-20

    Microdroplets offer unique compartments for accommodating a large number of chemical and biological reactions in tiny volumes with precise control. A major concern in droplet-based microfluidics is the difficulty of addressing droplets individually while achieving high throughput at the same time. Here, we have combined an improved cartridge sampling technique with a microfluidic chip to perform droplet screenings and aggressive reactions with minimal (nanoliter-scale) reagent consumption. The droplet composition, distance, volume (nanoliter to subnanoliter scale), number, and sequence can be precisely and digitally programmed through the improved sampling technique, while sample evaporation and cross-contamination are effectively eliminated. Our combined device provides a simple model for utilizing multiple droplets in various reactions with low reagent consumption and high throughput. © 2012 American Chemical Society.

  12. Marine natural product libraries for high-throughput screening and rapid drug discovery.

    Science.gov (United States)

    Bugni, Tim S; Richards, Burt; Bhoite, Leen; Cimbora, Daniel; Harper, Mary Kay; Ireland, Chris M

    2008-06-01

    There is a need for diverse molecular libraries for phenotype-selective and high-throughput screening. To make marine natural products (MNPs) more amenable to newer screening paradigms and to shorten discovery timelines, we have created an MNP library characterized online using MS. To test the potential of the library, we screened a subset of it in a phenotype-selective screen to identify compounds that inhibited the growth of BRCA2-deficient cells.

  13. Geochip: A high throughput genomic tool for linking community structure to functions

    Energy Technology Data Exchange (ETDEWEB)

    Van Nostrand, Joy D.; Liang, Yuting; He, Zhili; Li, Guanghe; Zhou, Jizhong

    2009-01-30

    GeoChip is a comprehensive functional gene array that targets key functional genes involved in the geochemical cycling of N, C, and P, sulfate reduction, metal resistance and reduction, and contaminant degradation. Studies have shown the GeoChip to be a sensitive, specific, and high-throughput tool for microbial community analysis that has the power to link geochemical processes with microbial community structure. However, several challenges remain regarding the development and applications of microarrays for microbial community analysis.

  14. Computational and Statistical Methods for High-Throughput Mass Spectrometry-Based PTM Analysis.

    Science.gov (United States)

    Schwämmle, Veit; Vaudel, Marc

    2017-01-01

    Cell signaling and functions heavily rely on post-translational modifications (PTMs) of proteins. Their high-throughput characterization is thus of utmost interest for multiple biological and medical investigations. In combination with efficient enrichment methods, peptide mass spectrometry analysis allows the quantitative comparison of thousands of modified peptides over different conditions. However, the large and complex datasets produced pose multiple data interpretation challenges, ranging from spectral interpretation to statistical and multivariate analyses. Here, we present a typical workflow to interpret such data.

  15. In Vitro High Throughput Screening, What Next? Lessons from the Screening for Aurora Kinase Inhibitors

    Directory of Open Access Journals (Sweden)

    Thi-My-Nhung Hoang

    2014-02-01

    Based on in vitro assays, we performed a High Throughput Screening (HTS) to identify kinase inhibitors among 10,000 small chemical compounds. In this didactic paper, we describe step-by-step the approach to validate the hits as well as the major pitfalls encountered in the development of active molecules. We propose a decision tree that could be adapted to most in vitro HTS.

  17. DRABAL: novel method to mine large high-throughput screening assays using Bayesian active learning

    OpenAIRE

    Soufan, Othman; Ba-Alawi, Wail; Afeef, Moataz; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B.

    2016-01-01

    Background: Mining high-throughput screening (HTS) assays is key for enhancing decisions in the areas of drug repositioning and drug discovery. However, many challenges are encountered in the process of developing suitable and accurate methods for extracting useful information from these assays. Virtual screening and the wide variety of databases, methods, and solutions proposed to date have not completely overcome these challenges. This study is based on a multi-label classification (MLC) techniq...

  18. Machine learning in computational biology to accelerate high-throughput protein expression

    DEFF Research Database (Denmark)

    Sastry, Anand; Monk, Jonathan M.; Tegel, Hanna

    2017-01-01

    Motivation: The Human Protein Atlas (HPA) enables the simultaneous characterization of thousands of proteins across various tissues to pinpoint their spatial location in the human body. This has been achieved through transcriptomics and high-throughput immunohistochemistry-based approaches, where over 40 000 unique human protein fragments have been expressed in E. coli. These datasets enable quantitative tracking of entire cellular proteomes and present new avenues for understanding molecular-level properties influencing expression and solubility. Results: Combining computational biology...

  19. A Concept for a Sensitive Micro Total Analysis System for High Throughput Fluorescence Imaging

    OpenAIRE

    Rabner, Arthur; Shacham, Yosi

    2006-01-01

    This paper discusses possible methods for on-chip fluorescent imaging for integrated bio-sensors. The integration of optical and electro-optical accessories, according to the suggested methods, can improve the performance of fluorescence imaging. It can boost the signal-to-background ratio by a few orders of magnitude in comparison to conventional discrete setups. The methods presented in this paper are oriented towards building reproducible arrays for high-throughput micro total analysis...

  20. Patterning cell using Si-stencil for high-throughput assay

    KAUST Repository

    Wu, Jinbo

    2011-01-01

    In this communication, we report a newly developed cell patterning methodology based on a silicon stencil, which exhibits advantages such as easy handling, reusability, a hydrophilic surface and mature fabrication technologies. Cell arrays obtained by this method were used to investigate cell growth under a temperature gradient, which demonstrated the possibility of studying cell behavior in a high-throughput assay. This journal is © The Royal Society of Chemistry 2011.

  1. Streptococcus mutans Protein Synthesis during Mixed-Species Biofilm Development by High-Throughput Quantitative Proteomics

    OpenAIRE

    Klein, Marlise I.; Xiao, Jin; Lu, Bingwen; Delahunty, Claire M.; Yates, John R.; Koo, Hyun

    2012-01-01

    Biofilms formed on tooth surfaces are comprised of mixed microbiota enmeshed in an extracellular matrix. Oral biofilms are constantly exposed to environmental changes, which influence the microbial composition, matrix formation and expression of virulence. Streptococcus mutans and sucrose are key modulators associated with the evolution of virulent-cariogenic biofilms. In this study, we used a high-throughput quantitative proteomics approach to examine how S. mutans produces relevant proteins...

  2. Statistical Methods for Integrating Multiple Types of High-Throughput Data

    OpenAIRE

    Xie, Yang; Ahn, Chul

    2010-01-01

    Large-scale sequencing, copy number, mRNA, and protein data hold great promise for biomedical research, while posing great challenges to data management and data analysis. Integrating different types of high-throughput data from diverse sources can increase the statistical power of data analysis and provide deeper biological understanding. This chapter uses two biomedical research examples to illustrate why there is an urgent need to develop reliable and robust methods for integratin...

  3. Acanthamoeba castellanii: a new high-throughput method for drug screening in vitro

    OpenAIRE

    Ortega-Rivas, Antonio; Padrón, José M; Valladares, Basilio; Elsheikha, Hany M

    2016-01-01

    Despite significant public health impact, there is no specific antiprotozoal therapy for prevention and treatment of Acanthamoeba castellanii infection. There is a need for new and efficient anti-Acanthamoeba drugs that are less toxic and can reduce treatment duration and frequency of administration. In this context a new, rapid and sensitive assay is required for high-throughput activity testing and screening of new therapeutic compounds. A colorimetric assay based on sulforhodamine B (SRB) ...

  4. Predictions versus high-throughput experiments in T-cell epitope discovery: competition or synergy?

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2012-01-01

    Prediction methods as well as experimental methods for T-cell epitope discovery have developed significantly in recent years. High-throughput experimental methods have made it possible to perform full-length protein scans for epitopes restricted to a limited number of MHC alleles. The high costs...... discovery. We expect prediction methods as well as experimental validation methods to continue to develop and that we will soon see clinical trials of products whose development has been guided by prediction methods....

  5. High-Throughput Synthesis, Screening, and Scale-Up of Optimized Conducting Indium Tin Oxides

    OpenAIRE

    Marchand, P; Makwana, N. M.; Tighe, C. J.; Gruar, R. I.; Parkin, I. P.; Carmalt, C. J.; Darr, J. A.

    2016-01-01

    A high-throughput optimization and subsequent scale-up methodology has been used for the synthesis of conductive tin-doped indium oxide (known as ITO) nanoparticles. ITO nanoparticles with up to 12 at % Sn were synthesized using a laboratory scale (15 g/hour by dry mass) continuous hydrothermal synthesis process, and the as-synthesized powders were characterized by powder X-ray diffraction, transmission electron microscopy, energy-dispersive X-ray analysis, and X-ray photoelectron spectroscop...

  6. Deep Mutational Scanning: Library Construction, Functional Selection, and High-Throughput Sequencing.

    Science.gov (United States)

    Starita, Lea M; Fields, Stanley

    2015-08-03

    Deep mutational scanning is a highly parallel method that uses high-throughput sequencing to track changes in >10⁵ protein variants before and after selection to measure the effects of mutations on protein function. Here we outline the stages of a deep mutational scanning experiment, focusing on the construction of libraries of protein sequence variants and the preparation of Illumina sequencing libraries. © 2015 Cold Spring Harbor Laboratory Press.
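    The core computation in such an experiment, scoring each variant by the change in its read frequency across selection, can be sketched in a few lines. The counts, variant names, and wild-type normalization below are illustrative assumptions, not the authors' published pipeline:

```python
import math

def enrichment_scores(pre_counts, post_counts, pseudocount=0.5):
    """Log2 enrichment of each variant relative to wild type ('WT').

    pre_counts/post_counts map variant name -> read count before/after
    selection; a pseudocount guards against zero counts.
    """
    pre_total = sum(pre_counts.values())
    post_total = sum(post_counts.values())

    def freq_ratio(variant):
        post = (post_counts.get(variant, 0) + pseudocount) / post_total
        pre = (pre_counts[variant] + pseudocount) / pre_total
        return post / pre

    wt = freq_ratio("WT")
    # score 0 means wild-type-like fitness; negative means depleted
    return {v: math.log2(freq_ratio(v) / wt) for v in pre_counts}

# invented read counts for a wild type and two point mutants
pre = {"WT": 1000, "A24V": 500, "G77D": 500}
post = {"WT": 2000, "A24V": 1000, "G77D": 50}
scores = enrichment_scores(pre, post)
```

Here "A24V" tracks the wild type through selection (score near 0), while "G77D" is strongly depleted (large negative score).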

  7. The Role of Molecular Biology and HTS (High Throughput Screening) in the Development of New Synthetic Drugs

    OpenAIRE

    Nurrochmad, Arief

    2004-01-01

    Recently, the discovery of new drugs has used new concepts and modern techniques instead of conventional ones. As scientific knowledge develops, molecular biology and modern techniques have become important in the investigation and discovery of new drugs. Many modern methods and techniques are used in the discovery of new drugs, i.e., genetic engineering, recombinant DNA, radioligand binding assay techniques, HTS (High Throughput Screening) techniques, and mass ligan...

  8. A Novel High-Throughput Approach to Measure Hydroxyl Radicals Induced by Airborne Particulate Matter

    Directory of Open Access Journals (Sweden)

    Yeongkwon Son

    2015-10-01

    Full Text Available Oxidative stress is one of the key mechanisms linking ambient particulate matter (PM) exposure with various adverse health effects. The oxidative potential of PM has been used to characterize the ability of PM to induce oxidative stress. The hydroxyl radical (•OH) is the most destructive radical produced by PM. However, there is currently no high-throughput approach that can rapidly measure PM-induced •OH for a large number of samples with an automated system. This study evaluated four existing molecular probes (disodium terephthalate, 3′-(p-aminophenyl)fluorescein, coumarin-3-carboxylic acid, and sodium benzoate) for their applicability to measure •OH induced by PM in a high-throughput cell-free system using fluorescence techniques, based on both our experiments and on an assessment of the physicochemical properties of the probes reported in the literature. Disodium terephthalate (TPT) was the most applicable molecular probe to measure •OH induced by PM, due to its high solubility, the high stability of the corresponding fluorescent product (i.e., 2-hydroxyterephthalic acid), its high yield compared with the other molecular probes, and its stable fluorescence intensity over a wide range of pH environments. TPT was applied in a high-throughput format to measure PM (NIST 1648a)-induced •OH in phosphate-buffered saline. The formed fluorescent product was measured at designated time points up to 2 h. The fluorescent product of TPT had a detection limit of 17.59 nM. The soluble fraction of PM contributed approximately 76.9% of the •OH induced by total PM, and the soluble metal ions of PM contributed 57.4% of the overall •OH formation. This study provides a promising, cost-effective, high-throughput method to measure •OH induced by PM on a routine basis.
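    A detection limit like the 17.59 nM reported above is conventionally derived from blank variability and a calibration slope (the 3-sigma criterion). The sketch below assumes that convention; the blank readings and slope are made-up numbers and do not reproduce the paper's calibration:

```python
def detection_limit(blank_readings, slope):
    """3-sigma limit of detection: 3 * SD(blanks) / calibration slope."""
    n = len(blank_readings)
    mean = sum(blank_readings) / n
    sd = (sum((x - mean) ** 2 for x in blank_readings) / (n - 1)) ** 0.5
    return 3 * sd / slope

# made-up blank fluorescence readings (a.u.) and a calibration slope
# (a.u. per nM of 2-hydroxyterephthalic acid) -- purely illustrative
blanks = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
lod_nM = detection_limit(blanks, slope=0.032)
```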

  9. A high-throughput method for quantifying metabolically active yeast cells

    DEFF Research Database (Denmark)

    Nandy, Subir Kumar; Knudsen, Peter Boldsen; Rosenkjær, Alexander

    2015-01-01

    By redesigning the established methylene blue reduction test for bacteria and yeast, we present a cheap and efficient methodology for quantitative physiology of eukaryotic cells applicable to high-throughput systems. Validation of the method in fermenters and high-throughput systems proved...... equivalent, displaying reduction curves that correlated directly with CFU counts. For growth rate estimation, the methylene blue reduction test (MBRT) proved superior, since the discriminatory nature of the method allowed for the quantification of metabolically active cells only, excluding dead cells...

  10. High-throughput, image-based screening of pooled genetic-variant libraries.

    Science.gov (United States)

    Emanuel, George; Moffitt, Jeffrey R; Zhuang, Xiaowei

    2017-12-01

    We report a high-throughput screening method that allows diverse genotypes and corresponding phenotypes to be imaged in individual cells. We achieve genotyping by introducing barcoded genetic variants into cells as pooled libraries and reading the barcodes out using massively multiplexed fluorescence in situ hybridization. To demonstrate the power of image-based pooled screening, we identified brighter and more photostable variants of the fluorescent protein YFAST among 60,000 variants.

  11. Computational and statistical methods for high-throughput analysis of post-translational modifications of proteins

    DEFF Research Database (Denmark)

    Schwämmle, Veit; Braga, Thiago Verano; Roepstorff, Peter

    2015-01-01

    The investigation of post-translational modifications (PTMs) represents one of the main research focuses for the study of protein function and cell signaling. Mass spectrometry instrumentation with increasing sensitivity, improved protocols for PTM enrichment, and recently established pipelines...... for high-throughput experiments allow large-scale identification and quantification of several PTM types. This review addresses the concurrently emerging challenges for the computational analysis of the resulting data and presents PTM-centered approaches for spectra identification, statistical analysis...

  12. Development of High-Throughput Quantitative Assays for Glucose Uptake in Cancer Cell Lines

    Science.gov (United States)

    Hassanein, Mohamed; Weidow, Brandy; Koehler, Elizabeth; Bakane, Naimish; Garbett, Shawn; Shyr, Yu; Quaranta, Vito

    2013-01-01

    Purpose Metabolism, and especially glucose uptake, is a key quantitative cell trait that is closely linked to cancer initiation and progression. Therefore, developing high-throughput assays for measuring glucose uptake in cancer cells would be desirable for simultaneous comparisons of multiple cell lines and microenvironmental conditions. This study was designed with two specific aims in mind: the first was to develop and validate a high-throughput screening method for quantitative assessment of glucose uptake in “normal” and tumor cells using the fluorescent 2-deoxyglucose analog 2-[N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl)amino]-2-deoxyglucose (2-NBDG), and the second was to develop an image-based, quantitative, single-cell assay for measuring glucose uptake using the same probe to dissect the full spectrum of metabolic variability within populations of tumor cells in vitro at higher resolution. Procedure The kinetics of population-based glucose uptake was evaluated for MCF10A mammary epithelial and CA1d breast cancer cell lines, using 2-NBDG and a fluorometric microplate reader. Glucose uptake for the same cell lines was also examined at the single-cell level using high-content automated microscopy coupled with semi-automated cell-cytometric image analysis approaches. Statistical treatments were also implemented to analyze intra-population variability. Results Our results demonstrate that the high-throughput fluorometric assay using 2-NBDG is a reliable method to assess population-level kinetics of glucose uptake in cell lines in vitro. Similarly, single-cell image-based assays and analyses of 2-NBDG fluorescence proved an effective and accurate means for assessing glucose uptake, which revealed that breast tumor cell lines display intra-population variability that is modulated by growth conditions. Conclusions These studies indicate that 2-NBDG can be used to aid in the high-throughput analysis of the influence of chemotherapeutics on glucose uptake in cancer
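    The intra-population variability described above is often summarized as a coefficient of variation over single-cell intensities. A minimal sketch with invented per-cell 2-NBDG readings (not the study's data):

```python
def coefficient_of_variation(values):
    """CV = sample SD / mean; a unitless measure of population spread."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return sd / mean

# invented per-cell 2-NBDG fluorescence intensities (a.u.)
epithelial_like = [100, 105, 98, 102, 99, 101]
tumor_like = [60, 140, 95, 180, 70, 120]
cv_epithelial = coefficient_of_variation(epithelial_like)
cv_tumor = coefficient_of_variation(tumor_like)
```

A broad, heterogeneous population (here the tumor-like values) yields a much larger CV than a tight one, which is the kind of contrast the single-cell assay resolves.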

  13. High Throughput Single-cell and Multiple-cell Micro-encapsulation

    OpenAIRE

    Lagus, Todd P.; Edd, Jon F.

    2012-01-01

    Microfluidic encapsulation methods have been previously utilized to capture cells in picoliter-scale aqueous, monodisperse drops, providing confinement from a bulk fluid environment with applications in high throughput screening, cytometry, and mass spectrometry. We describe a method to not only encapsulate single cells, but to repeatedly capture a set number of cells (here we demonstrate one- and two-cell encapsulation) to study both isolation and the interactions between cells in groups of ...

  14. Rapid 2,2'-bicinchoninic-based xylanase assay compatible with high throughput screening

    Science.gov (United States)

    William R. Kenealy; Thomas W. Jeffries

    2003-01-01

    High-throughput screening requires simple assays that give reliable quantitative results. A microplate assay was developed for reducing sugar analysis that uses a 2,2'-bicinchoninic-based protein reagent. Endo-1,4-β-D-xylanase activity against oat spelt xylan was detected at activities of 0.002 to 0.011 IU ml⁻¹. The assay is linear for sugar...

  15. An Automated High Throughput Proteolysis and Desalting Platform for Quantitative Proteomic Analysis

    Directory of Open Access Journals (Sweden)

    Albert-Baskar Arul

    2013-06-01

    Full Text Available Proteomics for biomarker validation needs high-throughput instrumentation to analyze large sets of clinical samples quantitatively and reproducibly in minimal time and without manual experimental errors. Sample preparation, a vital step in proteomics, plays a major role in the identification and quantification of proteins from biological samples. Tryptic digestion, a major checkpoint in sample preparation for mass spectrometry-based proteomics, needs to be more accurate with rapid processing time. The present study focuses on establishing a high-throughput automated online system for proteolytic digestion and desalting of proteins from biological samples quantitatively and qualitatively in a reproducible manner. The present study compares online protein digestion and desalting of BSA with the conventional off-line (in-solution) method and validates reproducibility with a real sample. Proteins were identified using the SEQUEST database search engine and the data were quantified using IDEALQ software. The present study shows that the online system, capable of handling high-throughput samples in 96-well format, carries out protein digestion and peptide desalting efficiently in a reproducible and quantitative manner. Label-free quantification showed a clear increase of peptide quantities with increasing concentration, with greater linearity than the off-line method. Hence we suggest that inclusion of this online system in the proteomic pipeline will be effective for the quantification of proteins in comparative proteomics, where quantification is crucial.
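    The linearity claim above is the kind of check that reduces to an ordinary least-squares fit of measured intensity against spiked amount. The dilution-series numbers below are hypothetical, not the study's data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical dilution series: spiked BSA digest (fmol) vs summed
# label-free peptide intensity (a.u.)
amounts = [10, 20, 40, 80]
intensities = [1.1e6, 2.0e6, 4.2e6, 7.9e6]
slope, intercept = linear_fit(amounts, intensities)
```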

  16. A versatile toolkit for high throughput functional genomics with Trichoderma reesei

    Energy Technology Data Exchange (ETDEWEB)

    Schuster, Andre; Bruno, Kenneth S.; Collett, James R.; Baker, Scott E.; Seiboth, Bernhard; Kubicek, Christian P.; Schmoll, Monika

    2012-01-02

    The ascomycete fungus Trichoderma reesei (anamorph of Hypocrea jecorina) represents a biotechnological workhorse and is currently one of the most proficient cellulase producers. While strain improvement was traditionally accomplished by random mutagenesis, a detailed understanding of cellulase regulation can only be gained using recombinant technologies. RESULTS: Aiming at high-efficiency and high-throughput methods, we present here a construction kit for gene knockout in T. reesei. We provide a primer database for gene deletion using the pyr4, amdS and hph selection markers. For high-throughput generation of gene knockouts, we constructed vectors using yeast-mediated recombination and then transformed a T. reesei strain deficient in non-homologous end joining (NHEJ) by spore electroporation. This NHEJ defect was subsequently removed by crossing of mutants with a sexually competent strain derived from the parental strain, QM9414. CONCLUSIONS: Using this strategy and the materials provided, high-throughput gene deletion in T. reesei becomes feasible. Moreover, with the application of sexual development, the NHEJ defect can be removed efficiently and without the need for additional selection markers. The same advantages apply for the construction of multiple mutants by crossing of strains with different gene deletions, which is now possible with considerably less hands-on time and minimal screening effort compared to a transformation approach. Consequently, this toolkit can considerably boost research towards efficient exploitation of the resources of T. reesei for cellulase expression and hence second-generation biofuel production.

  17. Engineering serendipity: High-throughput discovery of materials that resist bacterial attachment.

    Science.gov (United States)

    Magennis, E P; Hook, A L; Davies, M C; Alexander, C; Williams, P; Alexander, M R

    2016-04-01

    Controlling the colonisation of materials by microorganisms is important in a wide range of industries and clinical settings. To date, the underlying mechanisms that govern the interactions of bacteria with material surfaces remain poorly understood, limiting the ab initio design and engineering of biomaterials to control bacterial attachment. Combinatorial approaches involving high-throughput screening have emerged as key tools for identifying materials to control bacterial attachment. The assessment of hundreds of different materials using these methods can be aided by computational modelling. This approach can develop an understanding of the rules used to predict bacterial attachment to surfaces of non-toxic synthetic materials. Here we outline our view on the state of this field and the challenges and opportunities in this area for the coming years. This opinion article on high-throughput screening methods reflects one aspect of how the field of biomaterials research has developed and progressed. The piece takes the reader through key developments in biomaterials discovery, particularly focusing on the need to reduce bacterial colonisation of surfaces. Such bacteria-resistant surfaces are increasingly required in this age of antibiotic resistance. The influence and origin of high-throughput methods are discussed, with insights into the future of biomaterials development, where computational methods may drive materials development into new fertile areas of discovery. New biomaterials will exhibit responsiveness to adapt to the biological environment and promote better integration and reduced rejection or infection. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  18. Carbohydrate chips for studying high-throughput carbohydrate-protein interactions.

    Science.gov (United States)

    Park, Sungjin; Lee, Myung-ryul; Pyo, Soon-Jin; Shin, Injae

    2004-04-21

    Carbohydrate-protein interactions play important biological roles in living organisms. For the most part, biophysical and biochemical methods have been used for studying these biomolecular interactions. Less attention has been given to the development of high-throughput methods to elucidate recognition events between carbohydrates and proteins. In the current effort to develop a novel high-throughput tool for monitoring carbohydrate-protein interactions, we prepared carbohydrate microarrays by immobilizing maleimide-linked carbohydrates on thiol-derivatized glass slides and carried out lectin binding experiments by using these microarrays. The results showed that carbohydrates with different structural features selectively bound to the corresponding lectins with relative binding affinities that correlated with those obtained from solution-based assays. In addition, binding affinities of lectins to carbohydrates were also quantitatively analyzed by determining IC50 values of soluble carbohydrates with the carbohydrate microarrays. To fabricate carbohydrate chips that contained more diverse carbohydrate probes, solution-phase parallel and enzymatic glycosylations were performed. Three model disaccharides were synthesized in parallel in solution phase and used as carbohydrate probes for the fabrication of carbohydrate chips. Three enzymatic glycosylations on glass slides were performed consecutively to generate carbohydrate microarrays that contained the complex oligosaccharide sialyl Le(x). Overall, these studies demonstrated that carbohydrate chips can be efficiently prepared by covalent immobilization of maleimide-linked carbohydrates on thiol-coated glass slides and applied to high-throughput analyses of carbohydrate-protein interactions.

  19. A strategy for primary high throughput cytotoxicity screening in pharmaceutical toxicology.

    Science.gov (United States)

    Bugelski, P J; Atif, U; Molton, S; Toeg, I; Lord, P G; Morgan, D G

    2000-10-01

    Recent advances in combinatorial chemistry and high-throughput screens for pharmacologic activity have created an increasing demand for in vitro high-throughput screens for toxicological evaluation in the early phases of drug discovery. To develop a strategy for such a screen, we have conducted a data-mining study of the National Cancer Institute's Developmental Therapeutics Program (DTP) cytotoxicity database. Using hierarchical cluster analysis, we confirmed that the different tissues of origin and individual cell lines showed differential sensitivity to compounds in the DTP Standard Agents database. Surprisingly, however, approaching the data globally, linear regression analysis showed that the differences were relatively minor. Comparison with the literature on acute toxicity in mice showed that the predictive power of growth inhibition was marginally superior to that of cell death. This data-mining study suggests that in designing a strategy for high-throughput cytotoxicity screening: a single cell line, the choice of which may not be critical, can be used as a primary screen; a single end point may be an adequate measure and a cut-off value for 50% growth inhibition between 10⁻⁶ and 10⁻⁸ M may be a reasonable starting point for accepting a cytotoxic compound for scale-up and further study.
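    The suggested acceptance rule, a 50% growth-inhibition concentration between 10⁻⁶ and 10⁻⁸ M, reduces to a simple filter. Compound names and GI50 values below are hypothetical:

```python
def passes_primary_screen(gi50_molar, upper=1e-6, lower=1e-8):
    """Accept compounds whose GI50 falls inside the suggested window."""
    return lower <= gi50_molar <= upper

# hypothetical GI50 values (molar) for three candidate compounds
compounds = {"cmpd_A": 5e-7, "cmpd_B": 2e-5, "cmpd_C": 3e-8}
hits = [name for name, gi50 in compounds.items() if passes_primary_screen(gi50)]
```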

  20. High-throughput gene expression profiling of memory differentiation in primary human T cells

    Directory of Open Access Journals (Sweden)

    Russell Kate

    2008-08-01

    Full Text Available Abstract Background The differentiation of naive T and B cells into memory lymphocytes is essential for immunity to pathogens. Therapeutic manipulation of this cellular differentiation program could improve vaccine efficacy and the in vitro expansion of memory cells. However, chemical screens to identify compounds that induce memory differentiation have been limited by (1) the lack of reporter-gene or functional assays that can distinguish naive and memory-phenotype T cells at high throughput and (2) the lack of a suitable cell line representative of naive T cells. Results Here, we describe a method for gene-expression-based screening that allows primary naive and memory-phenotype lymphocytes to be discriminated based on complex gene signatures corresponding to these differentiation states. We used ligation-mediated amplification and a fluorescent, bead-based detection system to simultaneously quantify 55 transcripts representing naive and memory-phenotype signatures in purified populations of human T cells. The use of a multi-gene panel allowed better resolution than any constituent single gene. The method was precise, correlated well with Affymetrix microarray data, and could be easily scaled up for high throughput. Conclusion This method provides a generic solution for high-throughput differentiation screens in primary human T cells where no single-gene or functional assay is available. This screening platform will allow the identification of small molecules, genes or soluble factors that direct memory differentiation in naive human lymphocytes.
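    A multi-gene signature score of the kind described, weighing memory-phenotype transcripts against naive-phenotype transcripts, can be sketched as a difference of panel means. The two-gene panels and expression values here are illustrative stand-ins, not the study's 55-transcript panel:

```python
import math

def signature_score(expression, memory_genes, naive_genes):
    """Memory-vs-naive score: mean log2 expression of the memory panel
    minus mean log2 expression of the naive panel."""
    mem = sum(math.log2(expression[g]) for g in memory_genes) / len(memory_genes)
    nai = sum(math.log2(expression[g]) for g in naive_genes) / len(naive_genes)
    return mem - nai

# hypothetical marker panels and transcript measurements (arbitrary units)
memory_panel = ["KLRG1", "GZMB"]
naive_panel = ["CCR7", "SELL"]
memory_cell = {"KLRG1": 400, "GZMB": 800, "CCR7": 50, "SELL": 60}
naive_cell = {"KLRG1": 20, "GZMB": 10, "CCR7": 900, "SELL": 700}
score_memory = signature_score(memory_cell, memory_panel, naive_panel)
score_naive = signature_score(naive_cell, memory_panel, naive_panel)
```

A positive score flags a memory-like profile and a negative score a naive-like one, which is how a panel can resolve states that no single gene separates cleanly.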

  1. Machine learning in computational biology to accelerate high-throughput protein expression.

    Science.gov (United States)

    Sastry, Anand; Monk, Jonathan; Tegel, Hanna; Uhlen, Mathias; Palsson, Bernhard O; Rockberg, Johan; Brunk, Elizabeth

    2017-08-15

    The Human Protein Atlas (HPA) enables the simultaneous characterization of thousands of proteins across various tissues to pinpoint their spatial location in the human body. This has been achieved through transcriptomics and high-throughput immunohistochemistry-based approaches, where over 40 000 unique human protein fragments have been expressed in E. coli. These datasets enable quantitative tracking of entire cellular proteomes and present new avenues for understanding molecular-level properties influencing expression and solubility. Combining computational biology and machine learning identifies protein properties that hinder the HPA high-throughput antibody production pipeline. We predict protein expression and solubility with accuracies of 70% and 80%, respectively, based on a subset of key properties (aromaticity, hydropathy and isoelectric point). We guide the selection of protein fragments based on these characteristics to optimize high-throughput experimentation. We present the machine learning workflow as a series of IPython notebooks hosted on GitHub (https://github.com/SBRG/Protein_ML). The workflow can be used as a template for analysis of further expression and solubility datasets. ebrunk@ucsd.edu or johanr@biotech.kth.se. Supplementary data are available at Bioinformatics online.
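    Two of the key properties named above, aromaticity and hydropathy, are simple functions of the amino-acid sequence. The sketch below is not the authors' workflow (which is hosted on GitHub); the protein fragment is an arbitrary example:

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
      "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
      "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
      "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def aromaticity(seq):
    """Fraction of aromatic residues (F, W, Y)."""
    return sum(seq.count(a) for a in "FWY") / len(seq)

def gravy(seq):
    """Grand average of hydropathy (Kyte-Doolittle)."""
    return sum(KD[a] for a in seq) / len(seq)

# arbitrary example fragment, not an HPA sequence
fragment = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
features = {"aromaticity": aromaticity(fragment), "gravy": gravy(fragment)}
```

Features like these (plus the isoelectric point) would then feed a classifier predicting expression or solubility, as the abstract describes.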

  2. High-Throughput Cloning and Expression Library Creation for Functional Proteomics

    Science.gov (United States)

    Festa, Fernanda; Steel, Jason; Bian, Xiaofang; Labaer, Joshua

    2013-01-01

    The study of protein function usually requires the use of a cloned version of the gene for protein expression and functional assays. This strategy is particularly important when the information available regarding function is limited. The functional characterization of the thousands of newly identified proteins revealed by genomics requires faster methods than traditional single-gene experiments, creating the need for fast, flexible and reliable cloning systems. These collections of open reading frame (ORF) clones can be coupled with high-throughput proteomics platforms, such as protein microarrays and cell-based assays, to answer biological questions. In this tutorial we provide the background for DNA cloning, discuss the major high-throughput cloning systems (Gateway® Technology, Flexi® Vector Systems, and Creator™ DNA Cloning System) and compare them side by side. We also report an example of a high-throughput cloning study and its application in functional proteomics. This Tutorial is part of the International Proteomics Tutorial Programme (IPTP12). Details can be found at http://www.proteomicstutorials.org. PMID:23457047

  3. PCR cycles above routine numbers do not compromise high-throughput DNA barcoding results.

    Science.gov (United States)

    Vierna, J; Doña, J; Vizcaíno, A; Serrano, D; Jovani, R

    2017-10-01

    High-throughput DNA barcoding has become essential in ecology and evolution, but some technical questions still remain. Increasing the number of PCR cycles above the routine 20-30 cycles is a common practice when working with old-type specimens, which provide only small amounts of DNA, or when facing annealing issues with the primers. However, increasing the number of cycles can raise the number of artificial mutations due to polymerase errors. In this work, we sequenced 20 COI libraries on the Illumina MiSeq platform. Libraries were prepared with 40, 45, 50, 55, and 60 PCR cycles from four individuals belonging to four species of four genera of cephalopods. We found no relationship between the number of PCR cycles and the number of mutations despite using a nonproofreading polymerase. Moreover, even when using a high number of PCR cycles, the resulting number of mutations was low enough not to be an issue in the context of high-throughput DNA barcoding (but may still remain an issue in DNA metabarcoding due to chimera formation). We conclude that the common practice of increasing the number of PCR cycles should not negatively impact the outcome of a high-throughput DNA barcoding study in terms of the occurrence of point mutations.
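    The concern the study tests can be framed with a back-of-the-envelope model: if a non-proofreading polymerase introduces errors at a fixed per-base rate each cycle, expected errors grow linearly with cycle number. The error rate below is an assumed, Taq-like figure; the study's empirical finding was that the extra mutations did not become a practical problem:

```python
def expected_errors_per_amplicon(per_base_error_rate, amplicon_length, cycles):
    """Naive expectation: every cycle adds per_base_error_rate errors per base,
    so errors grow linearly with cycle number (ignores PCR plateau effects)."""
    return per_base_error_rate * amplicon_length * cycles

# assumed Taq-like error rate (substitutions/base/cycle) and a 658 bp COI barcode
err_40 = expected_errors_per_amplicon(1e-5, 658, 40)
err_60 = expected_errors_per_amplicon(1e-5, 658, 60)
```

Even this worst-case linear model predicts well under one error per read at 60 cycles, consistent with the abstract's conclusion.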

  4. An Air-Well sparging minifermenter system for high-throughput protein production.

    Science.gov (United States)

    Deantonio, Cecilia; Sedini, Valentina; Cesaro, Patrizia; Quasso, Fabio; Cotella, Diego; Persichetti, Francesca; Santoro, Claudio; Sblattero, Daniele

    2014-09-14

    Over the last few years, High-Throughput Protein Production (HTPP) has played a crucial role in functional proteomics. High-quality, high-yield and fast recombinant protein production is critical for new HTPP technologies. Escherichia coli is usually the expression system of choice in protein production thanks to its fast growth, ease of handling and high yields of protein produced. Even though shake-flask cultures are widely used, there is an increasing need for easy-to-handle, lab-scale, high-throughput systems. In this article we describe a novel minifermenter system suitable for HTPP. The Air-Well minifermenter system is made up of a homogeneous air-sparging device that includes an air diffusion system and a stainless-steel 96-needle plate integrated with a 96-deep-well plate where cultures take place. This system provides aeration to achieve higher optical density growth compared to classical shaking growth, without the decrease in pH value and bacterial viability. Moreover, the yield of recombinant protein is up to 3-fold higher, with a considerable improvement in the amount of full-length proteins. High-throughput production of hundreds of proteins in parallel can be obtained by sparging air in a continuous and controlled manner. The system is modular and can be easily modified and scaled up to meet the demands of HTPP.

  5. Design of novel solar thermal fuels with high-throughput ab initio simulations

    Science.gov (United States)

    Liu, Yun; Grossman, Jeffrey

    2014-03-01

    Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. Previously, we predicted a new class of functional materials that have the potential to address these challenges. Recently, we have developed an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening algorithm we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. The corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy-density STF. We have also discovered physical design principles to guide further STF materials design through the correlation between isomerization enthalpy and structural properties.
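    Screening of this kind ultimately ranks candidates by stored energy, i.e. the enthalpy difference between the metastable (charged) and ground (discharged) isomers, often normalized per unit mass. The energies and molar masses below are invented placeholders, not computed results:

```python
def storage_density(e_metastable, e_ground, molar_mass):
    """Isomerization enthalpy per unit mass (kJ/mol over g/mol -> kJ/g)."""
    return (e_metastable - e_ground) / molar_mass

# invented candidates: (ground-state energy, metastable-state energy) in
# kJ/mol and molar mass in g/mol -- placeholders, not computed results
candidates = {
    "mol_A": (0.0, 90.0, 180.0),
    "mol_B": (0.0, 120.0, 150.0),
    "mol_C": (0.0, 60.0, 210.0),
}
ranked = sorted(
    candidates,
    key=lambda m: storage_density(candidates[m][1], candidates[m][0],
                                  candidates[m][2]),
    reverse=True,
)
```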

  6. High-throughput screening to identify selective inhibitors of microbial sulfate reduction (and beyond)

    Science.gov (United States)

    Carlson, H. K.; Coates, J. D.; Deutschbauer, A. M.

    2015-12-01

    The selective perturbation of complex microbial ecosystems to predictably influence outcomes in engineered and industrial environments remains a grand challenge for geomicrobiology. In some industrial ecosystems, such as oil reservoirs, sulfate reducing microorganisms (SRM) produce hydrogen sulfide which is toxic, explosive and corrosive. Current strategies to selectively inhibit sulfidogenesis are based on non-specific biocide treatments, bio-competitive exclusion by alternative electron acceptors or sulfate-analogs which are competitive inhibitors or futile/alternative substrates of the sulfate reduction pathway. Despite the economic cost of sulfidogenesis, there has been minimal exploration of the chemical space of possible inhibitory compounds, and very little work has quantitatively assessed the selectivity of putative souring treatments. We have developed a high-throughput screening strategy to target SRM, quantitatively ranked the selectivity and potency of hundreds of compounds and identified previously unrecognized SRM selective inhibitors and synergistic interactions between inhibitors. Once inhibitor selectivity is defined, high-throughput characterization of microbial community structure across compound gradients and identification of fitness determinants using isolate bar-coded transposon mutant libraries can give insights into the genetic mechanisms whereby compounds structure microbial communities. The high-throughput (HT) approach we present can be readily applied to target SRM in diverse environments and more broadly, could be used to identify and quantify the potency and selectivity of inhibitors of a variety of microbial metabolisms. Our findings and approach are relevant for engineering environmental ecosystems and also to understand the role of natural gradients in shaping microbial niche space.
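
    One common way to quantify the selectivity described above is the ratio of a compound's potency against non-target organisms to its potency against the target metabolism (here SRM): the larger the ratio, the more selective the inhibitor. The compound names and IC50 values below are invented, and the exact ranking metric used in the study may differ; this is a minimal sketch of the idea.

```python
# Hypothetical ranking of inhibitors by SRM selectivity, then potency.
# Selectivity = IC50(non-target) / IC50(SRM); higher = more selective.
# All compound names and IC50 values are invented.

compounds = {
    # name: (IC50 vs SRM in uM, IC50 vs non-target microbes in uM)
    "compound_A": (5.0, 500.0),
    "compound_B": (1.0, 2.0),
    "compound_C": (50.0, 5000.0),
}

def selectivity(srm_ic50: float, nontarget_ic50: float) -> float:
    return nontarget_ic50 / srm_ic50

# Sort by selectivity (descending), breaking ties by potency (lower
# IC50 against SRM first).
ranked = sorted(compounds,
                key=lambda c: (-selectivity(*compounds[c]), compounds[c][0]))
print(ranked)
```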

  7. High throughput workflow for coacervate formation and characterization in shampoo systems.

    Science.gov (United States)

    Kalantar, T H; Tucker, C J; Zalusky, A S; Boomgaard, T A; Wilson, B E; Ladika, M; Jordan, S L; Li, W K; Zhang, X; Goh, C G

    2007-01-01

    Cationic cellulosic polymers find wide utility as benefit agents in shampoo. Deposition of these polymers onto hair has been shown to mend split ends, improve appearance and wet combing, and provide controlled delivery of insoluble actives. Deposition is thought to be enhanced by the formation of a polymer/surfactant complex that phase-separates from the bulk solution upon dilution. A standard method exists to characterize this coacervate formation upon dilution, but the test is prohibitively time- and material-intensive. We have developed a semi-automated high-throughput workflow to characterize the coacervate-forming behavior of different shampoo formulations. A procedure that allows testing of real-use shampoo dilutions without first formulating a complete shampoo was identified. This procedure was adapted to a Tecan liquid handler by optimizing the parameters for liquid dispensing and mixing. The high-throughput workflow enabled preparation and testing of hundreds of formulations with different types and levels of cationic cellulosic polymers and surfactants, and for each formulation a haze diagram was constructed. Optimal formulations, and the dilutions at which they give substantial coacervate formation (determined by haze measurements), were identified. Results from this high-throughput workflow were shown to reproduce standard haze and bench-top turbidity measurements, and the workflow has the advantages of using less material and allowing more variables to be tested with significant time savings.
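
    The core of a haze diagram is simple: for each formulation, record haze across a dilution series, find the dilution of peak haze, and flag formulations whose peak clears a coacervation threshold. The formulation names, dilution factors, haze values, and threshold below are all illustrative, not data from the study.

```python
# Minimal haze-diagram sketch: per-formulation peak haze over a dilution
# series, with a threshold flag. All values are invented.

haze = {
    # formulation: {dilution factor: haze reading (arbitrary units)}
    "F1": {1: 5, 5: 40, 10: 120, 20: 30},
    "F2": {1: 2, 5: 8, 10: 12, 20: 6},
}

THRESHOLD = 50  # assumed cutoff for "substantial coacervate formation"

# (dilution, haze) at the haze maximum for each formulation
peaks = {f: max(series.items(), key=lambda kv: kv[1])
         for f, series in haze.items()}
hits = [f for f, (dilution, h) in peaks.items() if h >= THRESHOLD]
print(peaks, hits)
```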

  8. A high throughput array microscope for the mechanical characterization of biomaterials

    Science.gov (United States)

    Cribb, Jeremy; Osborne, Lukas D.; Hsiao, Joe Ping-Lin; Vicci, Leandra; Meshram, Alok; O'Brien, E. Tim; Spero, Richard Chasen; Taylor, Russell; Superfine, Richard

    2015-02-01

    In the last decade, the emergence of high-throughput screening has enabled the development of novel drug therapies and elucidated many complex cellular processes. Concurrently, the mechanobiology community has developed tools and methods showing that the dysregulation of biophysical properties, and of the biochemical mechanisms controlling those properties, contributes significantly to many human diseases. Despite these advances, a complete understanding of the connection between biomechanics and disease will require advances in instrumentation that enable parallelized, high-throughput assays capable of probing complex signaling pathways, studying biology in physiologically relevant conditions, and capturing specimen and mechanical heterogeneity. Traditional biophysical instruments are unable to meet this need. To address the challenge of large-scale, parallelized biophysical measurements, we have developed an automated array high-throughput microscope system that uses passive microbead diffusion to characterize the mechanical properties of biomaterials. The instrument acquires data on twelve channels simultaneously, and each channel can independently drive two-channel fluorescence imaging at up to 50 frames per second. We employ this system to measure the concentration-dependent apparent viscosity of hyaluronan, an essential polymer found in connective tissue and whose expression has been implicated in cancer progression.
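
    Passive-bead microrheology of this kind typically rests on the Stokes-Einstein relation: for a bead of radius r in a purely viscous fluid at temperature T, the 2D mean-squared displacement grows as MSD(τ) = 4Dτ, and η = kB·T / (6πDr). The sketch below applies that relation under the Newtonian-fluid assumption; the bead radius, temperature and MSD value are illustrative, not measurements from the instrument.

```python
import math

# Apparent viscosity from 2D bead tracking via Stokes-Einstein,
# assuming a purely viscous (Newtonian) medium. Input values invented.

KB = 1.380649e-23  # Boltzmann constant, J/K

def apparent_viscosity(msd_m2: float, tau_s: float,
                       radius_m: float, temp_k: float = 298.0) -> float:
    """Viscosity in Pa*s from a 2D MSD measured over lag time tau."""
    d = msd_m2 / (4.0 * tau_s)  # 2D diffusion coefficient, m^2/s
    return KB * temp_k / (6.0 * math.pi * d * radius_m)

# Example: 0.5 um radius bead with MSD ~1.7e-12 m^2 at tau = 1 s,
# which should come out close to water (~1e-3 Pa*s).
eta = apparent_viscosity(1.7e-12, 1.0, 0.5e-6)
print(f"{eta:.2e} Pa*s")
```

    For a viscoelastic polymer solution like hyaluronan the MSD is sub-diffusive and this single-lag estimate is only an "apparent" viscosity, which matches the wording of the abstract.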

  9. A High-Throughput Biological Calorimetry Core: Steps to Startup, Run, and Maintain a Multiuser Facility.

    Science.gov (United States)

    Yennawar, Neela H; Fecko, Julia A; Showalter, Scott A; Bevilacqua, Philip C

    2016-01-01

    Many labs have conventional calorimeters where denaturation and binding experiments are set up and run one at a time. While these systems are highly informative about biopolymer folding and ligand interaction, they require considerable manual intervention for cleaning and setup. As such, throughput is typically limited to a few runs a day. With a large number of experimental parameters to explore, including different buffers, macromolecule concentrations, temperatures, ligands, mutants, controls, replicates, and instrument tests, the need for high-throughput automated calorimeters is on the rise. Lower sample-volume requirements and reduced user intervention time compared to manual instruments have improved the turnover of calorimetry experiments in a high-throughput format, where 25 or more runs can be conducted per day. The cost and effort of maintaining high-throughput equipment typically demand that these instruments be housed in a multiuser core facility. We describe here the steps taken to successfully start and run an automated biological calorimetry facility at Pennsylvania State University. Scientists from various departments at Penn State, including Chemistry, Biochemistry and Molecular Biology, Bioengineering, Biology, Food Science, and Chemical Engineering, are benefiting from this core facility. Samples studied include proteins, nucleic acids, sugars, lipids, synthetic polymers, small molecules, natural products, and virus capsids. This facility has led to higher data throughput, which has been leveraged into grant support, has attracted new faculty hires, and has led to some exciting publications. © 2016 Elsevier Inc. All rights reserved.

  10. A versatile toolkit for high throughput functional genomics with Trichoderma reesei

    Directory of Open Access Journals (Sweden)

    Schuster André

    2012-01-01

    Abstract Background The ascomycete fungus Trichoderma reesei (anamorph of Hypocrea jecorina) represents a biotechnological workhorse and is currently one of the most proficient cellulase producers. While strain improvement was traditionally accomplished by random mutagenesis, a detailed understanding of cellulase regulation can only be gained using recombinant technologies. Results Aiming at high-efficiency, high-throughput methods, we present here a construction kit for gene knockout in T. reesei. We provide a primer database for gene deletion using the pyr4, amdS and hph selection markers. For high-throughput generation of gene knockouts, we constructed vectors using yeast-mediated recombination and then transformed a T. reesei strain deficient in non-homologous end joining (NHEJ) by spore electroporation. This NHEJ defect was subsequently removed by crossing mutants with a sexually competent strain derived from the parental strain QM9414. Conclusions Using this strategy and the materials provided, high-throughput gene deletion in T. reesei becomes feasible. Moreover, with the application of sexual development, the NHEJ defect can be removed efficiently and without the need for additional selection markers. The same advantages apply to the construction of multiple mutants by crossing strains with different gene deletions, which is now possible with considerably less hands-on time and minimal screening effort compared to a transformation approach. Consequently, this toolkit can considerably boost research towards efficient exploitation of the resources of T. reesei for cellulase expression and hence second-generation biofuel production.

  11. High Throughput, Polymeric Aqueous Two-Phase Printing of Tumor Spheroids

    Science.gov (United States)

    Atefi, Ehsan; Lemmo, Stephanie; Fyffe, Darcy; Luker, Gary D.; Tavana, Hossein

    2014-01-01

    This paper presents a new 3D culture microtechnology for high throughput production of tumor spheroids and validates its utility for screening anti-cancer drugs. We use two immiscible polymeric aqueous solutions and microprint a submicroliter drop of the “patterning” phase containing cells into a bath of the “immersion” phase. Selecting proper formulations of biphasic systems using a panel of biocompatible polymers results in the formation of a round drop that confines cells to facilitate spontaneous formation of a spheroid without any external stimuli. Adapting this approach to robotic tools enables straightforward generation and maintenance of spheroids of well-defined size in standard microwell plates and biochemical analysis of spheroids in situ, which is not possible with existing techniques for spheroid culture. To enable high throughput screening, we establish a phase diagram to identify minimum cell densities within specific volumes of the patterning drop to result in a single spheroid. Spheroids show normal growth over long-term incubation and dose-dependent decrease in cellular viability when treated with drug compounds, but present significant resistance compared to monolayer cultures. The unprecedented ease of implementing this microtechnology and its robust performance will benefit high throughput studies of drug screening against cancer cells with physiologically-relevant 3D tumor models. PMID:25411577
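
    The phase-diagram step described above reduces to a small calculation: the expected number of cells confined in a printed drop is the seeding density times the drop volume, and a condition qualifies only if that count meets the minimum needed for one spheroid. The threshold and volumes below are assumed round numbers for illustration, not values from the paper.

```python
# Sketch of the cell-density / drop-volume relationship behind the
# phase diagram. MIN_CELLS_PER_DROP is an assumed illustrative value.

MIN_CELLS_PER_DROP = 100  # assumed minimum for reliable spheroid formation

def cells_per_drop(density_cells_per_ul: float, volume_ul: float) -> float:
    """Expected number of cells confined in one printed drop."""
    return density_cells_per_ul * volume_ul

def min_density(volume_ul: float,
                min_cells: int = MIN_CELLS_PER_DROP) -> float:
    """Smallest seeding density (cells/uL) meeting the per-drop threshold."""
    return min_cells / volume_ul

for v in (0.1, 0.3, 0.5):  # sub-microliter drop volumes, uL
    print(v, min_density(v))
```

    Smaller drops therefore require proportionally denser cell suspensions, which is why the phase diagram is drawn over both density and volume.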

  12. High-throughput micro-scale cultivations and chromatography modeling: Powerful tools for integrated process development.

    Science.gov (United States)

    Baumann, Pascal; Hahn, Tobias; Hubbuch, Jürgen

    2015-10-01

    Upstream processes are rather complex to design, and the productivity of cells under suitable cultivation conditions is hard to predict. The method of choice for examining the design space is to execute high-throughput cultivation screenings in micro-scale format. Various predictive in silico models have been developed for many downstream processes, leading to a reduction of time and material costs. This paper presents a combined optimization approach based on high-throughput micro-scale cultivation experiments and chromatography modeling. The overall optimal system need not be the one with the highest product titer, but the one resulting in superior overall process performance across up- and downstream. The methodology is presented in a case study for the Cherry-tagged enzyme Glutathione-S-Transferase from Escherichia coli SE1. The Cherry-Tag™ (Delphi Genetics, Belgium), which can be fused to any target protein, allows for direct product analytics by simple VIS absorption measurements. High-throughput cultivations were carried out in a 48-well format in a BioLector micro-scale cultivation system (m2p-Labs, Germany). The downstream process optimization for a set of randomly picked upstream conditions producing high yields was performed in silico using a chromatography modeling software developed in-house (ChromX). The suggested in silico-optimized operational modes for product capture were validated subsequently. The overall best system was chosen based on a combination of excellent up- and downstream performance. © 2015 Wiley Periodicals, Inc.
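
    The selection logic in the last sentences can be made concrete with a toy objective: score each cultivation condition by upstream titer multiplied by the model-predicted chromatographic yield, so a moderate-titer condition with an easy-to-purify product can beat the highest-titer one. The condition names and numbers are invented; the paper's actual scoring may weight the stages differently.

```python
# Toy combined up-/downstream scoring: overall product recovered =
# upstream titer * predicted capture-step yield. All values invented.

conditions = {
    # condition id: (upstream titer g/L, predicted capture yield 0-1)
    "cond_1": (2.0, 0.50),
    "cond_2": (1.6, 0.90),
    "cond_3": (2.2, 0.40),
}

overall = {c: titer * y for c, (titer, y) in conditions.items()}
best = max(overall, key=overall.get)
print(best, round(overall[best], 2))
```

    Here the highest-titer condition (cond_3) loses to cond_2 once downstream yield is folded in, which is exactly the point the abstract makes.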

  13. The application of the high throughput sequencing technology in the transposable elements.

    Science.gov (United States)

    Liu, Zhen; Xu, Jian-hong

    2015-09-01

    High throughput sequencing technology has dramatically improved the efficiency of DNA sequencing, and decreased the costs to a great extent. Meanwhile, this technology usually has advantages of better specificity, higher sensitivity and accuracy. Therefore, it has been applied to the research on genetic variations, transcriptomics and epigenomics. Recently, this technology has been widely employed in the studies of transposable elements and has achieved fruitful results. In this review, we summarize the application of high throughput sequencing technology in the fields of transposable elements, including the estimation of transposon content, preference of target sites and distribution, insertion polymorphism and population frequency, identification of rare copies, transposon horizontal transfers as well as transposon tagging. We also briefly introduce the major common sequencing strategies and algorithms, their advantages and disadvantages, and the corresponding solutions. Finally, we envision the developing trends of high throughput sequencing technology, especially the third generation sequencing technology, and its application in transposon studies in the future, hopefully providing a comprehensive understanding and reference for related scientific researchers.
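
    Of the applications listed, insertion polymorphism and population frequency are the most directly computable from short-read data: the frequency of a transposon insertion at a locus is commonly estimated as the fraction of covering reads that support the insertion junction versus the reference allele. The sketch below shows that calculation with invented read counts.

```python
# Estimating transposon insertion population frequency from read counts
# at each locus: insertion-supporting reads / total covering reads.
# Locus names and counts are invented.

def insertion_frequency(insertion_reads: int, reference_reads: int) -> float:
    """Fraction of covering reads supporting the insertion allele."""
    total = insertion_reads + reference_reads
    if total == 0:
        raise ValueError("no reads covering locus")
    return insertion_reads / total

loci = {
    # locus: (insertion-supporting reads, reference-allele reads)
    "locus_1": (30, 70),
    "locus_2": (95, 5),
}
freqs = {name: insertion_frequency(*counts) for name, counts in loci.items()}
print(freqs)  # locus_1 is polymorphic; locus_2 is near fixation
```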

  14. A priori Considerations When Conducting High-Throughput Amplicon-Based Sequence Analysis

    Directory of Open Access Journals (Sweden)

    Aditi Sengupta

    2016-03-01

    Full Text Available Amplicon-based sequencing strategies that include 16S rRNA and functional genes, alongside “meta-omics” analyses of communities of microorganisms, have allowed researchers to pose questions and find answers to “who” is present in the environment and “what” they are doing. Next-generation sequencing approaches that aid microbial ecology studies of agricultural systems are fast gaining popularity among agronomy, crop, soil, and environmental science researchers. Given the rapid development of these high-throughput sequencing techniques, researchers with no prior experience will desire information about the best practices that can be used before actually starting high-throughput amplicon-based sequence analyses. We have outlined items that need to be carefully considered in experimental design, sampling, basic bioinformatics, sequencing of mock communities and negative controls, acquisition of metadata, and in standardization of reaction conditions as per experimental requirements. Not all considerations mentioned here may pertain to a particular study. The overall goal is to inform researchers about considerations that must be taken into account when conducting high-throughput microbial DNA sequencing and sequences analysis.

  15. A general approach for discriminative de novo motif discovery from high-throughput data.

    Science.gov (United States)

    Grau, Jan; Posch, Stefan; Grosse, Ivo; Keilwagen, Jens

    2013-11-01

    De novo motif discovery has been an important challenge of bioinformatics for the past two decades. Since the emergence of high-throughput techniques like ChIP-seq, ChIP-exo and protein-binding microarrays (PBMs), the focus of de novo motif discovery has shifted to runtime and accuracy on large data sets. For this purpose, specialized algorithms have been designed for discovering motifs in ChIP-seq or PBM data. However, none of the existing approaches work perfectly for all three high-throughput techniques. In this article, we propose Dimont, a general approach for fast and accurate de novo motif discovery from high-throughput data. We demonstrate that Dimont yields a higher number of correct motifs from ChIP-seq data than any of the specialized approaches and achieves a higher accuracy for predicting PBM intensities from probe sequence than any of the approaches specifically designed for that purpose. Dimont also reports the expected motifs for several ChIP-exo data sets. Investigating differences between in vitro and in vivo binding, we find that for most transcription factors, the motifs discovered by Dimont are in good accordance between techniques, but we also find notable exceptions. We also observe that modeling intra-motif dependencies may increase accuracy, which indicates that more complex motif models are a worthwhile field of research.
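
    Motif finders of this family ultimately represent a motif as a position weight matrix (PWM) and score candidate sites by the log-odds of the motif model against a background model. The toy 4-bp matrix and sequence below are invented; this is the generic scoring scheme, not Dimont's specific model (which additionally handles intra-motif dependencies).

```python
import math

# Generic PWM log-odds scoring against a uniform background, scanned
# over a sequence. The 4-bp motif below is a made-up toy example.

PWM = [  # rows = motif positions, values = base probabilities
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},
]
BACKGROUND = 0.25  # uniform base frequencies

def score_window(window: str) -> float:
    """Sum of per-position log2 odds for one candidate site."""
    return sum(math.log2(PWM[i][b] / BACKGROUND)
               for i, b in enumerate(window))

def best_site(seq: str):
    """Return (offset, score) of the best-scoring window in seq."""
    w = len(PWM)
    return max(((i, score_window(seq[i:i + w]))
                for i in range(len(seq) - w + 1)),
               key=lambda x: x[1])

print(best_site("TTACGTTT"))  # the embedded ACGT should win
```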

  16. Comprehensive molecular diagnosis of Bardet-Biedl syndrome by high-throughput targeted exome sequencing.

    Directory of Open Access Journals (Sweden)

    Dong-Jun Xing

    Bardet-Biedl syndrome (BBS) is an autosomal recessive disorder with significant genetic heterogeneity. BBS is linked to mutations in 17 genes, which together contain more than 200 coding exons. Currently, BBS is diagnosed by direct DNA sequencing for mutations in these genes, which, because of the large genomic screening region, is both time-consuming and expensive. To develop a practical method for the clinical diagnosis of BBS, we established a high-throughput targeted exome sequencing (TES) approach for genetic diagnosis. Five typical BBS patients were recruited and screened for mutations in a total of 144 known genes responsible for inherited retinal diseases, a hallmark symptom of BBS. The genomic DNA of these patients and their families was subjected to high-throughput DNA re-sequencing. Deep bioinformatics analysis was carried out to filter the massive sequencing data, and candidate variants were further confirmed through co-segregation analysis. TES successfully revealed mutations in BBS genes in each patient and family member. Six pathological mutations, including five novel mutations, were revealed in the genes BBS2, MKKS, ARL6 and MKS1. This study represents the first report of targeted exome sequencing in BBS patients and demonstrates that high-throughput TES is an accurate and rapid method for the genetic diagnosis of BBS.
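
    The variant-filtering step in such a pipeline typically keeps only rare, on-panel variants whose genotypes fit the inheritance model, here autosomal recessive: homozygous in the proband and heterozygous in both parents (the co-segregation check). The variant records, frequency cutoff, and genotype encodings below are invented for illustration and deliberately simplified (real pipelines also handle compound heterozygotes).

```python
# Toy recessive-model variant filter of the kind used after targeted
# exome sequencing. All records and the frequency cutoff are invented.

PANEL = {"BBS2", "MKKS", "ARL6", "MKS1"}  # genes of interest
MAX_POP_FREQ = 0.005                      # assumed rare-variant cutoff

variants = [
    # (gene, population freq, proband GT, father GT, mother GT)
    ("BBS2", 0.0001, "1/1", "0/1", "0/1"),  # fits recessive model
    ("BBS2", 0.2000, "1/1", "0/1", "0/1"),  # too common -> filtered
    ("TTN",  0.0001, "1/1", "0/1", "0/1"),  # off-panel -> filtered
    ("MKS1", 0.0002, "0/1", "0/1", "0/0"),  # not homozygous -> filtered
]

def recessive_hits(records):
    """Keep rare on-panel variants homozygous in proband, het in parents."""
    return [v for v in records
            if v[0] in PANEL and v[1] <= MAX_POP_FREQ
            and v[2] == "1/1" and v[3] == "0/1" and v[4] == "0/1"]

print(recessive_hits(variants))
```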

  17. An improved high-throughput lipid extraction method for the analysis of human brain lipids.

    Science.gov (United States)

    Abbott, Sarah K; Jenner, Andrew M; Mitchell, Todd W; Brown, Simon H J; Halliday, Glenda M; Garner, Brett

    2013-03-01

    We have developed a protocol suitable for high-throughput lipidomic analysis of human brain samples. The traditional Folch extraction (using chloroform and glass-glass homogenization) was compared to a high-throughput method combining methyl-tert-butyl ether (MTBE) extraction with mechanical homogenization utilizing ceramic beads. This high-throughput method significantly reduced sample handling time and increased efficiency compared to glass-glass homogenizing. Furthermore, replacing chloroform with MTBE is safer (less carcinogenic/toxic), with lipids dissolving in the upper phase, allowing for easier pipetting and the potential for automation (i.e., robotics). Both methods were applied to the analysis of human occipital cortex. Lipid species (including ceramides, sphingomyelins, choline glycerophospholipids, ethanolamine glycerophospholipids and phosphatidylserines) were analyzed via electrospray ionization mass spectrometry and sterol species were analyzed using gas chromatography mass spectrometry. No differences in lipid species composition were evident when the lipid extraction protocols were compared, indicating that MTBE extraction with mechanical bead homogenization provides an improved method for the lipidomic profiling of human brain tissue.

  18. Droplet microfluidic technology for single-cell high-throughput screening.

    Science.gov (United States)

    Brouzes, Eric; Medkova, Martina; Savenelli, Neal; Marran, Dave; Twardowski, Mariusz; Hutchison, J Brian; Rothberg, Jonathan M; Link, Darren R; Perrimon, Norbert; Samuels, Michael L

    2009-08-25

    We present a droplet-based microfluidic technology that enables high-throughput screening of single mammalian cells. This integrated platform allows for the encapsulation of single cells and reagents in independent aqueous microdroplets (1 pL to 10 nL volumes) dispersed in an immiscible carrier oil, and enables the digital manipulation of these reactors at very high throughput. Here, we validate a full droplet screening workflow by conducting a droplet-based cytotoxicity screen. To perform this screen, we first developed a droplet viability assay that permits the quantitative scoring of cell viability and growth within intact droplets. Next, we demonstrated the high viability of encapsulated human monocytic U937 cells over a period of 4 days. Finally, we developed an optically coded droplet library enabling identification of each droplet's composition during the assay read-out. Using the integrated droplet technology, we screened a drug library for its cytotoxic effect against U937 cells. Taken together, our droplet microfluidic platform is modular, robust, uses no moving parts, and has a wide range of potential applications, including high-throughput single-cell analyses, combinatorial screening, and facilitating small-sample analyses.

  19. CrossCheck: an open-source web tool for high-throughput screen data analysis.

    Science.gov (United States)

    Najafov, Jamil; Najafov, Ayaz

    2017-07-19

    Modern high-throughput screening methods allow researchers to generate large datasets that potentially contain important biological information. However, picking relevant hits from such screens and generating testable hypotheses often requires training in bioinformatics and the skills to efficiently perform database mining. There are currently no tools available to the general public that allow users to cross-reference their screen datasets with published screen datasets. To this end, we developed CrossCheck, an online platform for high-throughput screen data analysis. CrossCheck is a centralized database that allows effortless comparison of a user-entered list of gene symbols with 16,231 published datasets. These datasets include published data from genome-wide RNAi and CRISPR screens, interactome proteomics and phosphoproteomics screens, cancer mutation databases, low-throughput studies of major cell-signaling mediators, such as kinases, E3 ubiquitin ligases and phosphatases, and gene-ontology information. Moreover, CrossCheck includes a novel database of predicted protein kinase substrates, which was developed using proteome-wide consensus motif searches. CrossCheck dramatically simplifies high-throughput screen data analysis and enables researchers to dig deep into the published literature and streamline data-driven hypothesis generation. CrossCheck is freely accessible as a web-based application at http://proteinguru.com/crosscheck.
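
    At its core, cross-referencing a screen against published datasets is set intersection: take the user's gene symbols, intersect them with each dataset's hit list, and rank datasets by overlap size. The dataset names and gene lists below are invented; a real tool would also compute overlap significance (e.g. a hypergeometric test), which this sketch omits.

```python
# Minimal gene-list cross-referencing: intersect a user list with
# published hit lists and rank by overlap. All data are invented.

user_genes = {"TP53", "ATM", "CHEK2", "BRCA1"}

datasets = {
    "rnai_screen_x":    {"TP53", "ATM", "MDM2"},
    "crispr_screen_y":  {"BRCA1", "BRCA2"},
    "phospho_screen_z": {"EGFR", "MAPK1"},
}

overlaps = {name: sorted(user_genes & hits)
            for name, hits in datasets.items()}
ranked = sorted(overlaps, key=lambda n: len(overlaps[n]), reverse=True)
for name in ranked:
    print(name, overlaps[name])
```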

  20. Use of a high-throughput screening approach coupled with in vivo zebrafish embryo screening to develop hazard ranking for engineered nanomaterials.

    Science.gov (United States)

    George, Saji; Xia, Tian; Rallo, Robert; Zhao, Yan; Ji, Zhaoxia; Lin, Sijie; Wang, Xiang; Zhang, Haiyuan; France, Bryan; Schoenfeld, David; Damoiseaux, Robert; Liu, Rong; Lin, Shuo; Bradley, Kenneth A; Cohen, Yoram; Nel, André E

    2011-03-22

    Because of concerns about the safety of a growing number of engineered nanomaterials (ENM), it is necessary to develop high-throughput screening and in silico data transformation tools that can speed up in vitro hazard ranking. Here, we report the use of a multiparametric, automated screening assay that incorporates sublethal and lethal cellular injury responses to perform high-throughput analysis of a batch of commercial metal/metal oxide nanoparticles (NP) with the inclusion of a quantum dot (QD1). The responses chosen for tracking cellular injury through automated epifluorescence microscopy included ROS production, intracellular calcium flux, mitochondrial depolarization, and plasma membrane permeability. The z-score transformed high volume data set was used to construct heat maps for in vitro hazard ranking as well as showing the similarity patterns of NPs and response parameters through the use of self-organizing maps (SOM). Among the materials analyzed, QD1 and nano-ZnO showed the most prominent lethality, while Pt, Ag, SiO2, Al2O3, and Au triggered sublethal effects but without cytotoxicity. In order to compare the in vitro with the in vivo response outcomes in zebrafish embryos, NPs were used to assess their impact on mortality rate, hatching rate, cardiac rate, and morphological defects. While QDs, ZnO, and Ag induced morphological abnormalities or interfered in embryo hatching, Pt and Ag exerted inhibitory effects on cardiac rate. Ag toxicity in zebrafish differed from the in vitro results, which is congruent with this material's designation as extremely dangerous in the environment. Interestingly, while toxicity in the initially selected QD formulation was due to a solvent (toluene), supplementary testing of additional QDs selections yielded in vitro hazard profiling that reflect the release of chalcogenides. In conclusion, the use of a high-throughput screening, in silico data handling and zebrafish testing may constitute a paradigm for rapid and
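
    The z-score transformation mentioned above puts heterogeneous response parameters (ROS, calcium flux, permeability, etc.) on a common scale before heat-map construction: each column is centered on its mean and divided by its standard deviation. The nanoparticle names and raw readings below are invented placeholders, not data from the screen.

```python
from statistics import mean, pstdev

# Column-wise z-score transformation of a screening response matrix,
# as done before heat-map / SOM construction. Data are invented.

matrix = {
    # nanoparticle: [ROS, calcium flux, membrane permeability] (raw units)
    "ZnO": [900.0, 5.0, 80.0],
    "Pt":  [400.0, 2.0, 10.0],
    "Au":  [100.0, 1.0, 5.0],
}

def zscore_columns(data):
    """Center and scale each response parameter (column) independently."""
    names = list(data)
    ncols = len(next(iter(data.values())))
    cols = [[data[n][j] for n in names] for j in range(ncols)]
    mus = [mean(c) for c in cols]
    sds = [pstdev(c) for c in cols]
    return {n: [(data[n][j] - mus[j]) / sds[j] for j in range(ncols)]
            for n in names}

z = zscore_columns(matrix)
print(z["ZnO"])  # after scaling, each column has mean 0 and unit variance
```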

  2. Development of an efficient transformation method by Agrobacterium tumefaciens and high throughput spray assay to identify transgenic plants for woodland strawberry (Fragaria vesca) using NPTII selection.

    Science.gov (United States)

    Pantazis, Christopher J; Fisk, Sarah; Mills, Kerri; Flinn, Barry S; Shulaev, Vladimir; Veilleux, Richard E; Dan, Yinghui

    2013-03-01

    KEY MESSAGE: We developed an efficient Agrobacterium-mediated transformation method using an Ac/Ds transposon tagging construct for F. vesca, and a high-throughput paromomycin spray assay to identify its transformants, for strawberry functional genomics. Genomic resources for Rosaceae species are now readily available, including the Fragaria vesca genome, EST sequences, markers, linkage maps, and physical maps. The Rosaceae Genomic Executive Committee has promoted strawberry as a translational genomics model due to its unique biological features and transformability for fruit-trait improvement. Our overall research goal is to use functional genomic and metabolic approaches to pursue high-throughput gene discovery in the diploid woodland strawberry. F. vesca offers several advantages: a fleshy fruit typical of most fruit crops, a short life cycle (seed to seed in 12-16 weeks), a small genome (206 Mb/C), small plant size, self-compatibility, and many seeds per plant. We have developed an efficient Agrobacterium tumefaciens-mediated strawberry transformation method using kanamycin selection, and a high-throughput paromomycin spray assay to efficiently identify transgenic strawberry plants. Using our kanamycin transformation method, we were able to produce up to 98 independent kanamycin-resistant insertional mutant lines using a T-DNA construct carrying an Ac/Ds transposon Launchpad system, from a single transformation experiment involving inoculation of 22 leaf explants of F. vesca accession 551572, within approximately 11 weeks (from inoculation to soil). Transgenic plants with 1-2 copies of the transgene were confirmed by Southern blot analysis. Using our paromomycin spray assay, transgenic F. vesca plants were rapidly identified within 10 days after spraying.

  3. eRNA: a graphic user interface-based tool optimized for large data analysis from high-throughput RNA sequencing

    Science.gov (United States)

    2014-01-01

    Background RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high-throughput sequencers. Results We developed a standalone tool with graphic user interface (GUI)-based analytic modules, known as eRNA. The capacity for parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module “miRNA identification” includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module “mRNA identification” includes GUIs for reference sequences, genome mapping, transcript assembly, and differential expression. The module “Target screening” provides expression profiling analyses and graphic visualization. The module “Self-testing” offers directory setup, sample management, and a check for third-party package dependencies. Integration of other GUIs, including Bowtie, miRDeep2, and miRspring, extends the program's functionality. Conclusions eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory. PMID:24593312
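
    The adapter-removal step mentioned for miRNA-seq reduces to locating the 3' adapter sequence in each read and clipping from there, then discarding reads that become too short. The adapter string, reads, and minimum length below are illustrative, not a real sequencing-kit adapter or eRNA's actual implementation.

```python
# Toy 3' adapter trimming for small-RNA reads. The adapter sequence,
# reads, and length cutoff are invented for illustration.

ADAPTER = "TGGAATTC"
MIN_LEN = 16  # discard reads shorter than this after trimming

def trim_adapter(read: str, adapter: str = ADAPTER) -> str:
    """Clip the read at the first occurrence of the adapter, if any."""
    idx = read.find(adapter)
    return read if idx == -1 else read[:idx]

reads = [
    "TAGCTTATCAGACTGATGTTGATGGAATTCAGGG",  # insert + adapter read-through
    "TGGAATTCAAAA",                        # adapter-only read
]
trimmed = [t for t in (trim_adapter(r) for r in reads) if len(t) >= MIN_LEN]
print(trimmed)
```

    Real trimmers additionally tolerate sequencing errors in the adapter (mismatch-aware alignment) and trim partial adapter occurrences at the read end, which this exact-match sketch does not.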

  4. Developing high throughput genotyped chromosome segment substitution lines based on population whole-genome re-sequencing in rice (Oryza sativa L.)

    Directory of Open Access Journals (Sweden)

    Gu Minghong

    2010-11-01

    Full Text Available Abstract Background Genetic populations provide the basis for a wide range of genetic and genomic studies and have been widely used in genetic mapping, gene discovery and genomics-assisted breeding. Chromosome segment substitution lines (CSSLs) are the most powerful tools for the detection and precise mapping of quantitative trait loci (QTLs), for the analysis of complex traits in plant molecular genetics. Results In this study, a wide population consisting of 128 CSSLs was developed, derived from the crossing and back-crossing of two sequenced rice cultivars: 9311, an elite indica cultivar as the recipient and Nipponbare, a japonica cultivar as the donor. First, a physical map of the 128 CSSLs was constructed on the basis of estimates of the lengths and locations of the substituted chromosome segments using 254 PCR-based molecular markers. From this map, the total size of the 142 substituted segments in the population was 882.2 Mb, 2.37 times that of the rice genome. Second, every CSSL underwent high-throughput genotyping by whole-genome re-sequencing with a 0.13× genome sequence, and an ultrahigh-quality physical map was constructed. This sequencing-based physical map indicated that 117 new segments were detected; almost all were shorter than 3 Mb and were not apparent in the molecular marker map. Furthermore, relative to the molecular marker-based map, the sequencing-based map yielded more precise recombination breakpoint determination and greater accuracy of the lengths of the substituted segments, and provided more accurate background information. Third, using the 128 CSSLs combined with the bin-map converted from the sequencing-based physical map, a multiple linear regression QTL analysis mapped nine QTLs, which explained 89.50% of the phenotypic variance for culm length. A large-effect QTL was located in a 791,655 bp region that contained the rice 'green revolution' gene. Conclusions The present results demonstrated that high

  5. Developing high throughput genotyped chromosome segment substitution lines based on population whole-genome re-sequencing in rice (Oryza sativa L.).

    Science.gov (United States)

    Xu, Jianjun; Zhao, Qiang; Du, Peina; Xu, Chenwu; Wang, Baohe; Feng, Qi; Liu, Qiaoquan; Tang, Shuzhu; Gu, Minghong; Han, Bin; Liang, Guohua

    2010-11-24

    Genetic populations provide the basis for a wide range of genetic and genomic studies and have been widely used in genetic mapping, gene discovery and genomics-assisted breeding. Chromosome segment substitution lines (CSSLs) are the most powerful tools for the detection and precise mapping of quantitative trait loci (QTLs), for the analysis of complex traits in plant molecular genetics. In this study, a wide population consisting of 128 CSSLs was developed, derived from the crossing and back-crossing of two sequenced rice cultivars: 9311, an elite indica cultivar as the recipient and Nipponbare, a japonica cultivar as the donor. First, a physical map of the 128 CSSLs was constructed on the basis of estimates of the lengths and locations of the substituted chromosome segments using 254 PCR-based molecular markers. From this map, the total size of the 142 substituted segments in the population was 882.2 Mb, 2.37 times that of the rice genome. Second, every CSSL underwent high-throughput genotyping by whole-genome re-sequencing with a 0.13× genome sequence, and an ultrahigh-quality physical map was constructed. This sequencing-based physical map indicated that 117 new segments were detected; almost all were shorter than 3 Mb and were not apparent in the molecular marker map. Furthermore, relative to the molecular marker-based map, the sequencing-based map yielded more precise recombination breakpoint determination and greater accuracy of the lengths of the substituted segments, and provided more accurate background information. Third, using the 128 CSSLs combined with the bin-map converted from the sequencing-based physical map, a multiple linear regression QTL analysis mapped nine QTLs, which explained 89.50% of the phenotypic variance for culm length. A large-effect QTL was located in a 791,655 bp region that contained the rice 'green revolution' gene. The present results demonstrated that high throughput genotyped CSSLs combine the advantages of an ultrahigh
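    The bin-map QTL step described above can be sketched as an ordinary multiple linear regression of the trait on binary bin genotypes. This is a minimal illustration only, not the authors' pipeline: the function name, the 0/1 donor-segment coding, and the synthetic data are assumptions, and a real analysis would add model selection over bins.

```python
import numpy as np

def qtl_regression(genotypes, phenotypes):
    """Regress a trait (e.g. culm length) on bin genotypes.

    genotypes: (n_lines, n_bins) matrix, 1 = donor segment present in that bin
    phenotypes: (n_lines,) trait values
    Returns per-bin effect estimates and the fraction of variance explained.
    """
    # Design matrix with an intercept column, solved by least squares.
    X = np.column_stack([np.ones(len(phenotypes)), genotypes])
    beta, *_ = np.linalg.lstsq(X, phenotypes, rcond=None)
    fitted = X @ beta
    ss_res = np.sum((phenotypes - fitted) ** 2)
    ss_tot = np.sum((phenotypes - np.mean(phenotypes)) ** 2)
    r2 = 1.0 - ss_res / ss_tot  # analogous to "% phenotypic variance explained"
    return beta[1:], r2
```

On a CSSL population the fitted `r2` plays the role of the "89.50% of the phenotypic variance" figure reported above, and large per-bin effects flag candidate QTL bins.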

  6. Optimizing transformations for automated, high throughput analysis of flow cytometry data

    Directory of Open Access Journals (Sweden)

    Weng Andrew

    2010-11-01

    Full Text Available Abstract Background In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter
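    The maximum-likelihood parameter optimization described above can be illustrated for the simplest case: a hyperbolic-arcsine transform with a single cofactor, chosen to make the transformed data as Gaussian as possible. This is a hedged sketch, not the flowCore implementation; the function names, the single-population normality assumption, and the search bounds are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik_arcsinh(b, x):
    """Negative Gaussian log-likelihood of y = arcsinh(x / b).

    Includes the Jacobian term dy/dx = 1 / sqrt(x^2 + b^2), so that
    likelihoods for different cofactors b are comparable on the raw scale.
    """
    y = np.arcsinh(x / b)
    mu, sigma = y.mean(), y.std()
    ll = (-0.5 * len(y) * np.log(2.0 * np.pi * sigma**2)
          - np.sum((y - mu) ** 2) / (2.0 * sigma**2)
          - 0.5 * np.sum(np.log(x**2 + b**2)))  # Jacobian of the transform
    return -ll

def fit_cofactor(x, lo=1.0, hi=1000.0):
    """Pick the cofactor that maximizes the likelihood (bounds are arbitrary)."""
    res = minimize_scalar(neg_loglik_arcsinh, bounds=(lo, hi),
                          args=(x,), method="bounded")
    return res.x
```

Real cytometry data are mixtures of populations, so the production methods referenced above use richer models; the Jacobian-corrected likelihood is the common core idea.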

  7. A cell-based high-throughput screening assay for radiation susceptibility using automated cell counting.

    Science.gov (United States)

    Hodzic, Jasmina; Dingjan, Ilse; Maas, Mariëlle Jp; van der Meulen-Muileman, Ida H; de Menezes, Renee X; Heukelom, Stan; Verheij, Marcel; Gerritsen, Winald R; Geldof, Albert A; van Triest, Baukelien; van Beusechem, Victor W

    2015-02-27

    Radiotherapy is one of the mainstays in the treatment for cancer, but its success can be limited due to inherent or acquired resistance. Mechanisms underlying radioresistance in various cancers are poorly understood and available radiosensitizers have shown only modest clinical benefit. There is thus a need to identify new targets and drugs for more effective sensitization of cancer cells to irradiation. Compound and RNA interference high-throughput screening technologies allow comprehensive enterprises to identify new agents and targets for radiosensitization. However, the gold standard assay to investigate radiosensitivity of cancer cells in vitro, the colony formation assay (CFA), is unsuitable for high-throughput screening. We developed a new high-throughput screening method for determining radiation susceptibility. Fast and uniform irradiation of batches up to 30 microplates was achieved using a Perspex container and a clinically employed linear accelerator. The readout was done by automated counting of fluorescently stained nuclei using the Acumen eX3 laser scanning cytometer. Assay performance was compared to that of the CFA and the CellTiter-Blue homogeneous uniform-well cell viability assay. The assay was validated in a whole-genome siRNA library screening setting using PC-3 prostate cancer cells. On 4 different cancer cell lines, the automated cell counting assay produced radiation dose response curves that followed a linear-quadratic equation and that exhibited a better correlation to the results of the CFA than did the cell viability assay. Moreover, the cell counting assay could be used to detect radiosensitization by silencing DNA-PKcs or by adding caffeine. In a high-throughput screening setting, using 4 Gy irradiated and control PC-3 cells, the effects of DNA-PKcs siRNA and non-targeting control siRNA could be clearly discriminated. We developed a simple assay for radiation susceptibility that can be used for high-throughput screening. This will
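    The linear-quadratic dose response mentioned above, S(D) = exp(-(alpha*D + beta*D^2)), can be fit by linearizing: -ln(S) is a straight-through-origin function of D and D^2. A minimal sketch (function and variable names are illustrative, not the authors' analysis code):

```python
import numpy as np

def fit_linear_quadratic(dose, surviving_fraction):
    """Fit -ln(S) = alpha*D + beta*D^2 by least squares (no intercept,
    since S(0) = 1 by definition). Returns (alpha, beta)."""
    y = -np.log(surviving_fraction)
    X = np.column_stack([dose, dose**2])
    (alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
    return alpha, beta
```

Radiosensitization (e.g. by caffeine or DNA-PKcs silencing, as in the abstract) would show up as an increase in the fitted alpha and/or beta relative to the control curve.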

  8. A high-throughput absorbance-based assay for methionine produced by methionine aminopeptidase using S-adenosyl-L-methionine synthetase.

    Science.gov (United States)

    Shapiro, Adam B; Gao, Ning; Thresher, Jason; Walkup, Grant K; Whiteaker, James

    2011-06-01

    Methionine aminopeptidase (MAP) (E.C. 3.4.11.18) is a metallopeptidase that cleaves the N-terminal methionine (Met) residue from some proteins. MAP is essential for growth of several bacterial pathogens, making it a target for antibacterial drug discovery. MAP enzymes are also present in eukaryotic cells, and one is a target for antiangiogenic cancer therapy. To screen large compound libraries for MAP inhibitors as the starting point for drug discovery, a high-throughput-compatible assay is valuable. Here the authors describe a novel assay, which detects the Met product of MAP-catalyzed peptide cleavage by coupling it to adenosine triphosphate (ATP)-dependent production of S-adenosyl-L-methionine (SAM) and inorganic phosphate (P(i)) by SAM synthetase (MetK) combined with inorganic pyrophosphatase. The three P(i) ions produced for each Met consumed are detected using Malachite Green/molybdate reagent. This assay can use any unmodified peptide MAP substrate with an N-terminal Met. The assay was used to measure kinetic constants for Escherichia coli MAP using Mn(2+) as the activator and the peptide Met-Gly-Met-Met as the substrate, as well as to measure the potency of a MAP inhibitor. A Mn(2+) buffer is described that can be used to prevent free Mn(2+) depletion by chelating compounds from interfering in screens for MAP inhibitors.
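    Because the coupled MetK/pyrophosphatase reaction releases three P(i) for each Met consumed, back-calculating Met from the Malachite Green signal is simple arithmetic. A hypothetical helper (the function name and the standard-curve slope parameter are assumptions, not values from the paper):

```python
def methionine_from_absorbance(a620, blank, slope_uM_per_au):
    """Convert a Malachite Green absorbance reading to Met produced (uM).

    a620: absorbance of the well; blank: no-enzyme control absorbance.
    slope_uM_per_au: uM phosphate per absorbance unit, from a Pi standard
    curve (an assumed calibration parameter). The 3:1 Pi-to-Met
    stoichiometry of the coupled assay gives [Met] = [Pi] / 3.
    """
    pi_uM = (a620 - blank) * slope_uM_per_au
    return pi_uM / 3.0
```

The same conversion, divided by reaction time, yields an initial rate for Michaelis-Menten fitting or percent-inhibition calculations in a screen.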

  9. ImmuneDB: a system for the analysis and exploration of high-throughput adaptive immune receptor sequencing data.

    Science.gov (United States)

    Rosenfeld, Aaron M; Meng, Wenzhao; Luning Prak, Eline T; Hershberg, Uri

    2017-01-15

    As high-throughput sequencing of B cells becomes more common, the need for tools to analyze the large quantity of data also increases. This article introduces ImmuneDB, a system for analyzing vast amounts of heavy chain variable region sequences and exploring the resulting data. It can take as input raw FASTA/FASTQ data, identify genes, determine clones, construct lineages, as well as provide information such as selection pressure and mutation analysis. It uses an industry-leading database, MySQL, to provide fast analysis and avoid the complexities of using error-prone flat files. ImmuneDB is freely available at http://immunedb.com. A demo of the ImmuneDB web interface is available at http://immunedb.com/demo. Contact: Uh25@drexel.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Soil DNA metabarcoding and high-throughput sequencing as a forensic tool: considerations, potential limitations and recommendations.

    Science.gov (United States)

    Young, J M; Austin, J J; Weyrich, L S

    2017-02-01

    Analysis of physical evidence is typically a deciding factor in forensic casework by establishing what transpired at a scene or who was involved. Forensic geoscience is an emerging multi-disciplinary science that can offer significant benefits to forensic investigations. Soil is a powerful, nearly 'ideal' contact trace evidence, as it is highly individualistic, easy to characterise, has a high transfer and retention probability, and is often overlooked in attempts to conceal evidence. However, many real-life cases encounter close proximity soil samples or soils with low inorganic content, which cannot be easily discriminated based on current physical and chemical analysis techniques. The capability to improve forensic soil discrimination, and identify key indicator taxa from soil using the organic fraction is currently lacking. The development of new DNA sequencing technologies offers the ability to generate detailed genetic profiles from soils and enhance current forensic soil analyses. Here, we discuss the use of DNA metabarcoding combined with high-throughput sequencing (HTS) technology to distinguish between soils from different locations in a forensic context. Specifically, we provide recommendations for best practice, outline the potential limitations encountered in a forensic context and describe the future directions required to integrate soil DNA analysis into casework. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Third Generation (3G) Site Characterization: Cryogenic Core Collection and High Throughput Core Analysis - An Addendum to Basic Research Addressing Contaminants in Low Permeability Zones - A State of the Science Review

    Science.gov (United States)

    2016-07-29

    Hollow-Stem Auger; HTCA, High-Throughput Core Analysis; IC, Ion Chromatograph; ID, Inner Diameter; k, Permeability; LN, Liquid Nitrogen; LNAPL, Light ... liner and the cooling system. • Applying food-grade oil to the outside of the sample liner to limit direct contact of water with the sample liner ... strong magnetic field after radio frequency (RF) pulsing; the resulting data can be used to determine the proton densities spatially throughout the

  12. The Candida Genome Database (CGD): incorporation of Assembly 22, systematic identifiers and visualization of high throughput sequencing data.

    Science.gov (United States)

    Skrzypek, Marek S; Binkley, Jonathan; Binkley, Gail; Miyasato, Stuart R; Simison, Matt; Sherlock, Gavin

    2017-01-04

    The Candida Genome Database (CGD, http://www.candidagenome.org/) is a freely available online resource that provides gene, protein and sequence information for multiple Candida species, along with web-based tools for accessing, analyzing and exploring these data. The mission of CGD is to facilitate and accelerate research into Candida pathogenesis and biology, by curating the scientific literature in real time, and connecting literature-derived annotations to the latest version of the genomic sequence and its annotations. Here, we report the incorporation into CGD of Assembly 22, the first chromosome-level, phased diploid assembly of the C. albicans genome, coupled with improvements that we have made to the assembly using additional available sequence data. We also report the creation of systematic identifiers for C. albicans genes and sequence features using a system similar to that adopted by the yeast community over two decades ago. Finally, we describe the incorporation of JBrowse into CGD, which allows online browsing of mapped high throughput sequencing data, and its implementation for several RNA-Seq data sets, as well as the whole genome sequencing data that was used in the construction of Assembly 22. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. A high-throughput SNP array in the amphidiploid species Brassica napus shows diversity in resistance genes.

    Science.gov (United States)

    Dalton-Morgan, Jessica; Hayward, Alice; Alamery, Salman; Tollenaere, Reece; Mason, Annaliese S; Campbell, Emma; Patel, Dhwani; Lorenc, Michał T; Yi, Bin; Long, Yan; Meng, Jinling; Raman, Rosy; Raman, Harsh; Lawley, Cindy; Edwards, David; Batley, Jacqueline

    2014-12-01

    Single-nucleotide polymorphisms (SNPs) are molecular markers based on nucleotide variation and can be used for genotyping assays across populations and to track genomic inheritance. SNPs offer a comprehensive genotyping alternative to whole-genome sequencing for both agricultural and research purposes including molecular breeding and diagnostics, genome evolution and genetic diversity analyses, genetic mapping, and trait association studies. Here genomic SNPs were discovered between four cultivars of the important amphidiploid oilseed species Brassica napus and used to develop a B. napus Infinium™ array containing 5,306 SNPs randomly dispersed across the genome. Assay success was high, with >94% of these producing a reproducible, polymorphic genotype in the 1,070 samples screened. Although the assay was designed to B. napus, successful SNP amplification was achieved in the B. napus progenitor species, Brassica rapa and Brassica oleracea, and to a lesser extent in the related species Brassica nigra. Phylogenetic analysis was consistent with the expected relationships between B. napus individuals. This study presents an efficient custom SNP assay development pipeline in the complex polyploid Brassica genome and demonstrates the utility of the array for high-throughput genotyping in a number of related Brassica species. It also demonstrates the utility of this assay in genotyping resistance genes on chromosome A7, which segregate amongst the 1,070 samples.

  14. VarSim: a high-fidelity simulation and validation framework for high-throughput genome sequencing with cancer applications.

    Science.gov (United States)

    Mu, John C; Mohiyuddin, Marghoob; Li, Jian; Bani Asadi, Narges; Gerstein, Mark B; Abyzov, Alexej; Wong, Wing H; Lam, Hugo Y K

    2015-05-01

    VarSim is a framework for assessing alignment and variant calling accuracy in high-throughput genome sequencing through simulation or real data. In contrast to simulating a random mutation spectrum, it synthesizes diploid genomes with germline and somatic mutations based on a realistic model. This model leverages information such as previously reported mutations to make the synthetic genomes biologically relevant. VarSim simulates and validates a wide range of variants, including single nucleotide variants, small indels and large structural variants. It is an automated, comprehensive compute framework supporting parallel computation and multiple read simulators. Furthermore, we developed a novel map data structure to validate read alignments, a strategy to compare variants binned in size ranges and a lightweight, interactive, graphical report to visualize validation results with detailed statistics. Thus far, it is the most comprehensive validation tool for secondary analysis in next generation sequencing. Code in Java and Python along with instructions to download the reads and variants is at http://bioinform.github.io/varsim. Contact: rd@bina.com. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  15. An infrared optical pacing system for high-throughput screening of cardiac electrophysiology in human cardiomyocytes (Conference Presentation)

    Science.gov (United States)

    McPheeters, Matt T.; Wang, Yves T.; Laurita, Kenneth R.; Jenkins, Michael W.

    2017-02-01

    Cardiomyocytes derived from human induced pluripotent stem cells (hiPS-CM) have the potential to provide individualized therapies for patients and to test drug candidates for cardiac toxicity. In order for hiPS-CM to be useful for such applications, there is a need for high-throughput technology to rapidly assess cardiac electrophysiology parameters. Here, we designed and tested a fully contactless optical mapping (OM) and optical pacing (OP) system capable of imaging and point stimulation of hiPS-CM in small wells. OM allowed us to characterize cardiac electrophysiological parameters (conduction velocity, action potential duration, etc.) using voltage-sensitive dyes with high temporal and spatial resolution over the entire well. To improve OM signal-to-noise ratio, we tested a new voltage-sensitive dye (Fluovolt) for accuracy and phototoxicity. Stimulation is essential because most electrophysiological parameters are rate dependent; however, traditional electrical stimulation methods are difficult to apply in small wells. To overcome this limitation, we utilized OP (λ = 1464 nm) to precisely control heart rate with spatial precision without the addition of exogenous agents. We optimized OP parameters (e.g., well size, pulse width, spot size) to achieve robust pacing and minimize the threshold radiant exposure. Finally, we tested system sensitivity using Flecainide, a drug with well described action on multiple electrophysiological properties.

  16. Combined Rational Design and a High Throughput Screening Platform for Identifying Chemical Inhibitors of a Ras-activating Enzyme*

    Science.gov (United States)

    Evelyn, Chris R.; Biesiada, Jacek; Duan, Xin; Tang, Hong; Shang, Xun; Papoian, Ruben; Seibel, William L.; Nelson, Sandra; Meller, Jaroslaw; Zheng, Yi

    2015-01-01

    The Ras family small GTPases regulate multiple cellular processes, including cell growth, survival, movement, and gene expression, and are intimately involved in cancer pathogenesis. Activation of these small GTPases is catalyzed by a special class of enzymes, termed guanine nucleotide exchange factors (GEFs). Herein, we developed a small molecule screening platform for identifying lead hits targeting a Ras GEF enzyme, SOS1. We employed an ensemble structure-based virtual screening approach in combination with a multiple tier high throughput experimental screen utilizing two complementary fluorescent guanine nucleotide exchange assays to identify small molecule inhibitors of GEF catalytic activity toward Ras. From a library of 350,000 compounds, we selected a set of 418 candidate compounds predicted to disrupt the GEF-Ras interaction, of which dual wavelength GDP dissociation and GTP-loading experimental screening identified two chemically distinct small molecule inhibitors. Subsequent biochemical validations indicate that they are capable of dose-dependently inhibiting GEF catalytic activity, binding to SOS1 with micromolar affinity, and disrupting GEF-Ras interaction. Mutagenesis studies in conjunction with structure-activity relationship studies mapped both compounds to different sites in the catalytic pocket, and both inhibited Ras signaling in cells. The unique screening platform established here for targeting Ras GEF enzymes could be broadly useful for identifying lead inhibitors for a variety of small GTPase-activating GEF reactions. PMID:25825487

  17. The RABiT: high-throughput technology for assessing global DSB repair.

    Science.gov (United States)

    Turner, Helen C; Sharma, P; Perrier, J R; Bertucci, A; Smilenov, L; Johnson, G; Taveras, M; Brenner, D J; Garty, G

    2014-05-01

    At the Center for High-Throughput Minimally Invasive Radiation Biodosimetry, we have developed a rapid automated biodosimetry tool (RABiT); this is a completely automated, ultra-high-throughput robotically based biodosimetry workstation designed for use following a large-scale radiological event, to perform radiation biodosimetry measurements based on a fingerstick blood sample. High throughput is achieved through purpose-built robotics, sample handling in filter-bottomed multi-well plates and innovations in high-speed imaging and analysis. Currently, we are adapting the RABiT technologies for use in laboratory settings, for applications in epidemiological and clinical studies. Our overall goal is to extend the RABiT system to directly measure the kinetics of DNA repair proteins. The design of the kinetic/time-dependent studies is based on repeated, automated sampling of lymphocytes from a central reservoir of cells housed in the RABiT incubator as a function of time after the irradiation challenge. In the present study, we have characterized the DNA repair kinetics of the following repair proteins: γ-H2AX, 53-BP1, ATM kinase and MDC1 at multiple times (0.5, 2, 4, 7 and 24 h) after irradiation with 4 Gy γ rays. In order to provide a consistent dose exposure at time zero, we have developed an automated capillary irradiator to introduce DNA DSBs into fingerstick-size blood samples within the RABiT. To demonstrate the scalability of the laboratory-based RABiT system, we have initiated a population study using γ-H2AX as a biomarker.

  18. Generalized empirical Bayesian methods for discovery of differential data in high-throughput biology.

    Science.gov (United States)

    Hardcastle, Thomas J

    2016-01-15

    High-throughput data are now commonplace in biological research. Rapidly changing technologies and applications mean that novel methods for detecting differential behaviour that account for a 'large P, small n' setting are required at an increasing rate. The development of such methods is, in general, being done on an ad hoc basis, requiring further development cycles and a lack of standardization between analyses. We present here a generalized method for identifying differential behaviour within high-throughput biological data through empirical Bayesian methods. This approach is based on our baySeq algorithm for identification of differential expression in RNA-seq data based on a negative binomial distribution, and in paired data based on a beta-binomial distribution. Here we show how the same empirical Bayesian approach can be applied to any parametric distribution, removing the need for lengthy development of novel methods for differently distributed data. Comparisons with existing methods developed to address specific problems in high-throughput biological data show that these generic methods can achieve equivalent or better performance. A number of enhancements to the basic algorithm are also presented to increase flexibility and reduce computational costs. The methods are implemented in the R baySeq (v2) package, available on Bioconductor at http://www.bioconductor.org/packages/release/bioc/html/baySeq.html. Contact: tjh48@cam.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
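    The model comparison at the heart of this kind of empirical Bayesian analysis can be sketched for a single gene with a negative binomial likelihood: compare a "same expression" model (one mean for all samples) against a "differential" model (one mean per group), and weight them by prior probabilities. This is an illustration of the general idea only, not baySeq itself: baySeq estimates its priors empirically across genes, whereas this sketch fixes the dispersion and the prior by hand.

```python
import numpy as np
from scipy.stats import nbinom

def nb_loglik(counts, mean, dispersion):
    """Negative binomial log-likelihood with variance = mean + dispersion*mean^2,
    translated into scipy's (size, p) parameterization."""
    size = 1.0 / dispersion
    p = size / (size + mean)
    return nbinom.logpmf(counts, size, p).sum()

def posterior_differential(group_a, group_b, dispersion=0.1, prior_de=0.1):
    """Posterior probability of differential expression for one gene."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    both = np.concatenate([a, b])
    # Model 1: one shared mean; Model 2: group-specific means.
    ll_same = nb_loglik(both, max(both.mean(), 1e-8), dispersion)
    ll_diff = (nb_loglik(a, max(a.mean(), 1e-8), dispersion)
               + nb_loglik(b, max(b.mean(), 1e-8), dispersion))
    log_num = np.log(prior_de) + ll_diff
    log_den = np.logaddexp(log_num, np.log(1.0 - prior_de) + ll_same)
    return np.exp(log_num - log_den)
```

Swapping `nb_loglik` for another parametric log-likelihood (e.g. beta-binomial for paired data) is exactly the generalization the abstract describes.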

  19. A high-throughput, multi-channel photon-counting detector with picosecond timing

    Science.gov (United States)

    Lapington, J. S.; Fraser, G. W.; Miller, G. M.; Ashton, T. J. R.; Jarron, P.; Despeisse, M.; Powolny, F.; Howorth, J.; Milnes, J.

    2009-06-01

    High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies; small pore microchannel plate devices with very high time resolution, and high-speed multi-channel ASIC electronics developed for the LHC at CERN, provides the necessary building blocks for a high-throughput detector system with up to 1024 parallel counting channels and 20 ps time resolution. We describe the detector and electronic design, discuss the current status of the HiContent project and present the results from a 64-channel prototype system. In the absence of an operational detector, we present measurements of the electronics performance using a pulse generator to simulate detector events. Event timing results from the NINO high-speed front-end ASIC captured using a fast digital oscilloscope are compared with data taken with the proposed electronic configuration which uses the multi-channel HPTDC timing ASIC.

  20. High-throughput detection, genotyping and quantification of the human papillomavirus using real-time PCR.

    Science.gov (United States)

    Micalessi, Isabel M; Boulet, Gaëlle A V; Bogers, Johannes J; Benoy, Ina H; Depuydt, Christophe E

    2011-12-20

    The establishment of the causal relationship between high-risk human papillomavirus (HR-HPV) infection and cervical cancer and its precursors has resulted in the development of HPV DNA detection systems. Currently, real-time PCR assays for the detection of HPV, such as the RealTime High Risk (HR) HPV assay (Abbott) and the cobas® 4800 HPV Test (Roche Molecular Diagnostics) are commercially available. However, none of them enables the detection and typing of all HR-HPV types in a clinical high-throughput setting. This paper describes the laboratory workflow and the validation of a type-specific real-time quantitative PCR (qPCR) assay for high-throughput HPV detection, genotyping and quantification. This assay is routinely applied in a liquid-based cytology screening setting (700 samples in 24 h) and was used in many epidemiological and clinical studies. The TaqMan-based qPCR assay enables the detection of 17 HPV genotypes and β-globin in seven multiplex reactions. These HPV types include all 12 high-risk types (HPV16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59), three probably high-risk types (HPV53, 66 and 68), one low-risk type (HPV6) and one undetermined risk type (HPV67). An analytical sensitivity of ≤100 copies was obtained for all the HPV types. The analytical specificity of each primer pair was 100%, and intra- and inter-run variability were assessed. This real-time PCR approach enables detection of 17 HPV types, identification of the HPV type and determination of the viral load in a single sensitive assay suitable for high-throughput screening.
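    Viral load determination in qPCR assays like this one typically relies on a standard curve relating Ct to log10 copy number, where a slope near -3.32 corresponds to 100% amplification efficiency. A minimal sketch of the back-calculation (the slope and intercept defaults are generic placeholders, not this assay's calibration):

```python
def copies_from_ct(ct, slope=-3.32, intercept=40.0):
    """Estimate input copy number from a qPCR Ct value.

    Assumes a standard curve Ct = slope * log10(copies) + intercept,
    fitted from serial dilutions of a quantified control. A slope of
    -3.32 cycles per 10-fold dilution implies perfect doubling per cycle.
    """
    return 10 ** ((ct - intercept) / slope)
```

In a typing assay, dividing the HPV copy number by a cellular reference (e.g. the β-globin signal mentioned above) normalizes viral load to cell input.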