WorldWideScience

Sample records for high-throughput sequence-based epigenomic

  1. High-throughput Sequencing Based Immune Repertoire Study during Infectious Disease

    Directory of Open Access Journals (Sweden)

    Dongni Hou

    2016-08-01

    Full Text Available The selectivity of the adaptive immune response is based on the enormous diversity of T and B cell antigen-specific receptors. The immune repertoire, the collection of T and B cells with functional diversity in the circulatory system at any given time, is dynamic and reflects the essence of immune selectivity. In this article, we review recent advances in immune repertoire studies of infectious diseases achieved with traditional techniques and with high-throughput sequencing. High-throughput sequencing techniques enable the determination of the complementarity-determining regions of lymphocyte receptors with unprecedented efficiency and scale. This progress in methodology enhances the understanding of immunologic changes during pathogen challenge, and also provides a basis for further development of novel diagnostic markers, immunotherapies and vaccines.
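
    To make the repertoire-diversity idea concrete, here is a minimal Python sketch of how clonotype diversity could be summarized from high-throughput repertoire sequencing output; the clonotype sequences and read counts are hypothetical and not taken from the record.

```python
import math
from collections import Counter

def shannon_diversity(clonotype_counts):
    """Shannon entropy (in bits) of a clonotype frequency distribution."""
    total = sum(clonotype_counts.values())
    freqs = [c / total for c in clonotype_counts.values() if c > 0]
    return -sum(f * math.log2(f) for f in freqs)

# Hypothetical CDR3 clonotype read counts from one sample.
sample = Counter({"CASSLGQGYEQYF": 1200, "CASSPTSGGTDTQYF": 300, "CASSFSTCSANYGYTF": 25})
print(f"Observed clonotypes: {len(sample)}, Shannon diversity: {shannon_diversity(sample):.2f} bits")
```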

  2. CRISPR-Cas9 epigenome editing enables high-throughput screening for functional regulatory elements in the human genome.

    Science.gov (United States)

    Klann, Tyler S; Black, Joshua B; Chellappan, Malathi; Safi, Alexias; Song, Lingyun; Hilton, Isaac B; Crawford, Gregory E; Reddy, Timothy E; Gersbach, Charles A

    2017-06-01

    Large genome-mapping consortia and thousands of genome-wide association studies have identified non-protein-coding elements in the genome as having a central role in various biological processes. However, decoding the functions of the millions of putative regulatory elements discovered in these studies remains challenging. CRISPR-Cas9-based epigenome editing technologies have enabled precise perturbation of the activity of specific regulatory elements. Here we describe CRISPR-Cas9-based epigenomic regulatory element screening (CERES) for improved high-throughput screening of regulatory element activity in the native genomic context. Using dCas9-KRAB repressor and dCas9-p300 activator constructs and lentiviral single guide RNA libraries to target DNase I hypersensitive sites surrounding a gene of interest, we carried out both loss- and gain-of-function screens to identify regulatory elements for the β-globin and HER2 loci in human cells. CERES readily identified known and previously unidentified regulatory elements, some of which were dependent on cell type or direction of perturbation. This technology allows the high-throughput functional annotation of putative regulatory elements in their native chromosomal context.
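
    As a hedged illustration of how hits might be called in such a screen (not the authors' actual pipeline), the sketch below computes library-size-normalized log2 enrichment per single guide RNA between a sorted and a control population; the guide names and counts are invented.

```python
import math

def sgRNA_log2fc(counts_sorted, counts_control, pseudocount=1.0):
    """Library-size-normalized log2 fold change per sgRNA (sorted vs. control)."""
    n_sorted = sum(counts_sorted.values())
    n_control = sum(counts_control.values())
    lfc = {}
    for guide in counts_control:
        a = (counts_sorted.get(guide, 0) + pseudocount) / n_sorted
        b = (counts_control[guide] + pseudocount) / n_control
        lfc[guide] = math.log2(a / b)
    return lfc

# Hypothetical read counts for guides targeting DNase I hypersensitive sites.
sorted_pop = {"DHS1_g1": 850, "DHS2_g1": 40, "NT_ctrl": 500}
control_pop = {"DHS1_g1": 210, "DHS2_g1": 180, "NT_ctrl": 520}
for guide, lfc in sgRNA_log2fc(sorted_pop, control_pop).items():
    print(guide, round(lfc, 2))
```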

  3. Novel Sequencing-based Strategies for High-Throughput Discovery of Genetic Mutations Underlying Inherited Antibody Deficiency Disorders

    OpenAIRE

    Wang, Hong-Ying; Jain, Ashish

    2011-01-01

    Human inherited antibody deficiency disorders are generally caused by mutations in genes involved in the pathways regulating B-cell class switch recombination; DNA damage repair; and B-cell development, differentiation, and survival. Sequencing a large set of candidate genes involved in these pathways appears to be a highly efficient way to identify novel mutations. Herein we review several high-throughput sequencing approaches as well as recent improvements in target gene enrichment technolo...

  4. High-throughput sequencing-based genome-wide identification of microRNAs expressed in developing cotton seeds.

    Science.gov (United States)

    Wang, YanMei; Ding, Yan; Yu, DingWei; Xue, Wei; Liu, JinYuan

    2015-08-01

    MicroRNAs (miRNAs) have been shown to play critical regulatory roles in gene expression in cotton. Although a large number of miRNAs have been identified in cotton fibers, the functions of miRNAs in seed development remain unexplored. In this study, a small RNA library was constructed from cotton seeds sampled at 15 days post-anthesis (DPA) and was subjected to high-throughput sequencing. A total of 95 known miRNAs were detected to be expressed in cotton seeds. The expression pattern of these identified miRNAs was profiled and 48 known miRNAs were differentially expressed between cotton seeds and fibers at 15 DPA. In addition, 23 novel miRNA candidates were identified in 15-DPA seeds. Putative targets for 21 novel and 87 known miRNAs were successfully predicted and 900 expressed sequence tag (EST) sequences were proposed to be candidate target genes, which are involved in various metabolic and biological processes, suggesting a complex regulatory network in developing cotton seeds. Furthermore, miRNA-mediated cleavage of three important transcripts in vivo was validated by RLM-5' RACE. This study is the first to show the regulatory network of miRNAs that are involved in developing cotton seeds and provides a foundation for future studies on the specific functions of these miRNAs in seed development.
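
    A minimal sketch, under the assumption that differential expression between libraries is assessed from reads-per-million-normalized counts (the study's actual statistics may differ); the miRNA names and counts are illustrative only.

```python
import math

def rpm(counts):
    """Reads-per-million normalization of a raw count table."""
    total = sum(counts.values())
    return {m: c * 1e6 / total for m, c in counts.items()}

def log2_ratio(seed_counts, fiber_counts, floor=0.25):
    """log2(seed / fiber) per miRNA after RPM normalization, with a small floor."""
    seed, fiber = rpm(seed_counts), rpm(fiber_counts)
    return {m: math.log2((seed[m] + floor) / (fiber.get(m, 0) + floor)) for m in seed}

# Hypothetical read counts for known miRNAs in 15-DPA seed and fiber libraries.
seed_lib = {"miR156": 5200, "miR166": 900, "miR172": 60}
fiber_lib = {"miR156": 400, "miR166": 1100, "miR172": 800}
for mirna, lfc in log2_ratio(seed_lib, fiber_lib).items():
    print(mirna, round(lfc, 2))
```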

  5. ViralEpi v1.0: a high-throughput spectrum of viral epigenomic methylation profiles from diverse diseases.

    Science.gov (United States)

    Khan, Mohd Shoaib; Gupta, Amit Kumar; Kumar, Manoj

    2016-01-01

    The aim was to develop a computational resource for viral epigenomic methylation profiles from diverse diseases. Methylation patterns of Epstein-Barr virus and hepatitis B virus genomic regions are provided as a web platform built on the open-source Linux-Apache-MySQL-PHP (LAMP) bundle together with HTML, JavaScript and Perl. The result is a comprehensive and integrated web resource, ViralEpi v1.0, providing a well-organized compendium of methylation events and statistical analyses associated with several diseases. It also offers a 'Viral EpiGenome Browser' for user-friendly browsing based on the JavaScript genome browser JBrowse. This resource should be helpful for the research community studying epigenetic biomarkers for the prognosis and diagnosis of diseases and their various stages.

  6. Towards cracking the epigenetic code using a combination of high-throughput epigenomics and quantitative mass spectrometry-based proteomics.

    Science.gov (United States)

    Stunnenberg, Hendrik G; Vermeulen, Michiel

    2011-07-01

    High-throughput genomic sequencing and quantitative mass spectrometry (MS)-based proteomics technology have recently emerged as powerful tools, increasing our understanding of chromatin structure and function. Both of these approaches require substantial investments and expertise in terms of instrumentation, experimental methodology, bioinformatics, and data interpretation and are, therefore, usually applied independently from each other by dedicated research groups. However, when applied reiteratively in the context of epigenetics research these approaches are strongly synergistic in nature. Copyright © 2011 WILEY Periodicals, Inc.

  7. Epigenomics

    Science.gov (United States)

    ... the epigenome? The epigenome is a multitude of chemical compounds that can tell the genome what to ...

  8. Evaluation of a High-Throughput Repetitive-Sequence-Based PCR System for DNA Fingerprinting of Mycobacterium tuberculosis and Mycobacterium avium Complex Strains

    Science.gov (United States)

    Cangelosi, Gerard A.; Freeman, Robert J.; Lewis, Kaeryn N.; Livingston-Rosanoff, Devon; Shah, Ketan S.; Milan, Sparrow Joy; Goldberg, Stefan V.

    2004-01-01

    Repetitive-sequence-based PCR (rep-PCR) is useful for generating DNA fingerprints of diverse bacterial and fungal species. Rep-PCR amplicon fingerprints represent genomic segments lying between repetitive sequences. A commercial system that electrophoretically separates rep-PCR amplicons on microfluidic chips, and provides computer-generated readouts of results has been adapted for use with Mycobacterium species. The ability of this system to type M. tuberculosis and M. avium complex (MAC) isolates was evaluated. M. tuberculosis strains (n = 56) were typed by spoligotyping with rep-PCR as a high-resolution adjunct. Results were compared with those generated by a standard approach of spoligotyping with IS6110-targeted restriction fragment length polymorphism (IS6110-RFLP) as the high-resolution adjunct. The sample included 11 epidemiologically and genotypically linked outbreak isolates and a population-based sample of 45 isolates from recent immigrants to Seattle, Wash., from the African Horn countries of Somalia, Eritrea, and Ethiopia. Twenty isolates exhibited unique spoligotypes and were not analyzed further. Of the 36 outbreak and African Horn isolates with nonunique spoligotypes, 23 fell into four clusters identified by IS6110-RFLP and rep-PCR, with 97% concordance observed between the two methods. Both approaches revealed extensive strain heterogeneity within the African Horn sample, consistent with a predominant pattern of reactivation of latent infections in this immigrant population. Rep-PCR exhibited 89% concordance with IS1245-RFLP typing of 28 M. avium subspecies avium strains. For M. tuberculosis as well as M. avium subspecies avium, the discriminative power of rep-PCR equaled or exceeded that of RFLP. Rep-PCR also generated DNA fingerprints from M. intracellulare (n = 8) and MACx (n = 2) strains. It shows promise as a fast, unified method for high-throughput genotypic fingerprinting of multiple Mycobacterium species. PMID:15184453
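
    To illustrate the kind of concordance figure quoted above, the following sketch computes pairwise agreement between two typing methods' cluster assignments; it is not the authors' analysis, and the isolate labels are hypothetical.

```python
from itertools import combinations

def pairwise_concordance(typing_a, typing_b):
    """Fraction of isolate pairs on which two typing methods agree
    (both place the pair in the same cluster, or both in different clusters)."""
    isolates = sorted(set(typing_a) & set(typing_b))
    agree = total = 0
    for x, y in combinations(isolates, 2):
        total += 1
        same_a = typing_a[x] == typing_a[y]
        same_b = typing_b[x] == typing_b[y]
        agree += same_a == same_b
    return agree / total if total else float("nan")

# Hypothetical cluster labels from rep-PCR and IS6110-RFLP for five isolates.
rep_pcr = {"iso1": "A", "iso2": "A", "iso3": "B", "iso4": "C", "iso5": "C"}
rflp    = {"iso1": "A", "iso2": "A", "iso3": "B", "iso4": "C", "iso5": "B"}
print(f"Pairwise concordance: {pairwise_concordance(rep_pcr, rflp):.0%}")
```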

  9. High-Throughput Analysis With 96-Capillary Array Electrophoresis and Integrated Sample Preparation for DNA Sequencing Based on Laser Induced Fluorescence Detection

    Energy Technology Data Exchange (ETDEWEB)

    Xue, Gang [Iowa State Univ., Ames, IA (United States)

    2001-01-01

    The purpose of this research was to improve the fluorescence detection for the multiplexed capillary array electrophoresis, extend its use beyond the genomic analysis, and to develop an integrated micro-sample preparation system for high-throughput DNA sequencing. The authors first demonstrated multiplexed capillary zone electrophoresis (CZE) and micellar electrokinetic chromatography (MEKC) separations in a 96-capillary array system with laser-induced fluorescence detection. Migration times of four kinds of fluoresceins and six polyaromatic hydrocarbons (PAHs) are normalized to one of the capillaries using two internal standards. The relative standard deviations (RSD) after normalization are 0.6-1.4% for the fluoresceins and 0.1-1.5% for the PAHs. Quantitative calibration of the separations based on peak areas is also performed, again with substantial improvement over the raw data. This opens up the possibility of performing massively parallel separations for high-throughput chemical analysis for process monitoring, combinatorial synthesis, and clinical diagnosis. The authors further improved the fluorescence detection by step laser scanning. A computer-controlled galvanometer scanner is adapted for scanning a focused laser beam across a 96-capillary array for laser-induced fluorescence detection. The signal at a single photomultiplier tube is temporally sorted to distinguish among the capillaries. The limit of detection for fluorescein is 3 × 10^-11 M (S/N = 3) for 5 mW of total laser power scanned at 4 Hz. The observed cross-talk among capillaries is 0.2%. Advantages include the efficient utilization of light due to the high duty-cycle of step scan, good detection performance due to the reduction of stray light, ruggedness due to the small mass of the galvanometer mirror, low cost due to the simplicity of components, and flexibility due to the independent paths for excitation and emission.
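
    A minimal sketch of migration-time normalization to a reference capillary using two internal standards, assuming a simple linear rescaling between the standards; all times below are hypothetical.

```python
def normalize_migration(times, std1, std2, ref_std1, ref_std2):
    """Linearly rescale migration times so the two internal standards in this
    capillary line up with their times in the reference capillary."""
    scale = (ref_std2 - ref_std1) / (std2 - std1)
    return [ref_std1 + (t - std1) * scale for t in times]

# Hypothetical migration times (s) of three analytes in capillary 37, normalized
# to the internal standards' times in the reference capillary.
analytes = [312.0, 355.5, 401.2]
print(normalize_migration(analytes, std1=300.0, std2=420.0, ref_std1=295.0, ref_std2=410.0))
```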

  10. High Throughput Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Argonne's high throughput facility provides highly automated and parallel approaches to material and materials chemistry development. The facility allows scientists...

  11. High-Throughput Sequencing Based Methods of RNA Structure Investigation

    DEFF Research Database (Denmark)

    Kielpinski, Lukasz Jan

    In this thesis we describe the development of four related methods for RNA structure probing that utilize massive parallel sequencing. Using them, we were able to gather structural data for multiple, long molecules simultaneously. First, we have established an easy to follow experimental and computational ... with known priming sites.

  12. The gymnastics of epigenomics in rice.

    Science.gov (United States)

    Banerjee, Aditya; Roychoudhury, Aryadeep

    2017-09-02

    Epigenomics comprises high-throughput investigations of genome-wide epigenetic alterations, which ultimately dictate genomic, transcriptomic, proteomic and metabolomic dynamism. Rice has been accepted as the global staple crop. As a result, this model crop deserves significant attention in the rapidly emerging field of plant epigenomics. A large body of recently available data reveals the immense flexibility and potential of variable epigenomic landscapes. Such epigenomic impacts and variability are determined by a number of epigenetic regulators and several crucial inheritable epialleles, respectively. This article highlights the correlation of the epigenomic landscape with growth, flowering, reproduction, non-coding RNA-mediated post-transcriptional regulation, transposon mobility and even heterosis in rice. We also discuss the drastic epigenetic alterations reported in rice plants grown from seeds exposed to the extraterrestrial environment. Such abiotic conditions impose stress on the plants, leading to epigenomic modifications in a genotype-specific manner. Some significant bioinformatic databases and in silico approaches are also explained in this article; these software tools provide important interfaces for comparative epigenomics. The discussion concludes with the unified goal of developing epigenome editing to promote biological hacking of the rice epigenome. Such a cutting-edge technology, if properly standardized, can integrate genomics and epigenomics and support the generation of high-yielding traits in several cultivars of rice.

  13. High Throughput Transcriptomics @ USEPA (Toxicology ...

    Science.gov (United States)

    The ideal chemical testing approach will provide complete coverage of all relevant toxicological responses. It should be sensitive and specific. It should identify the mechanism/mode-of-action (with dose-dependence). It should identify responses relevant to the species of interest. Responses should ideally be translated into tissue-, organ-, and organism-level effects. It must be economical and scalable. Using a High Throughput Transcriptomics platform within US EPA provides broader coverage of biological activity space and toxicological MOAs and helps fill the toxicological data gap. Slide presentation at the 2016 ToxForum on using High Throughput Transcriptomics at US EPA for broader coverage of biological activity space and toxicological MOAs.

  14. Enabling interspecies epigenomic comparison with CEpBrowser

    OpenAIRE

    Cao, Xiaoyi; Zhong, Sheng

    2013-01-01

    Summary: We developed the Comparative Epigenome Browser (CEpBrowser) to allow the public to perform multi-species epigenomic analysis. The web-based CEpBrowser integrates, manages and visualizes sequencing-based epigenomic datasets. Five key features were developed to maximize the efficiency of interspecies epigenomic comparisons. Availability: CEpBrowser is a web application implemented with PHP, MySQL, C and Apache. URL: http://www.cepbrowser.org/. Contact: Supplementary inf...

  15. High throughput protein production screening

    Science.gov (United States)

    Beernink, Peter T [Walnut Creek, CA]; Coleman, Matthew A [Oakland, CA]; Segelke, Brent W [San Ramon, CA]

    2009-09-08

    Methods, compositions, and kits for the cell-free production and analysis of proteins are provided. The invention allows for the production of proteins from prokaryotic or eukaryotic sequences, including human cDNAs, using PCR and IVT methods and detecting the proteins through fluorescence or immunoblot techniques. This invention can be used to identify optimized PCR and IVT conditions, codon usages and mutations. The methods are readily automated and can be used for high throughput analysis of protein expression levels, interactions, and functional states.

  16. Reconstructing ancient genomes and epigenomes

    DEFF Research Database (Denmark)

    Orlando, Ludovic Antoine Alexandre; Gilbert, M. Thomas P.; Willerslev, Eske

    2015-01-01

    Research involving ancient DNA (aDNA) has experienced a true technological revolution in recent years through advances in the recovery of aDNA and, particularly, through applications of high-throughput sequencing. Formerly restricted to the analysis of only limited amounts of genetic information, aDNA studies have now progressed to whole-genome sequencing for an increasing number of ancient individuals and extinct species, as well as to epigenomic characterization. Such advances have enabled the sequencing of specimens of up to 1 million years old, which, owing to their extensive DNA damage and contamination, were previously not amenable to genetic analyses. In this Review, we discuss these varied technical challenges and solutions for sequencing ancient genomes and epigenomes.

  17. High Throughput Plasma Water Treatment

    Science.gov (United States)

    Mujovic, Selman; Foster, John

    2016-10-01

    The troublesome emergence of new classes of micro-pollutants, such as pharmaceuticals and endocrine disruptors, poses challenges for conventional water treatment systems. In an effort to address these contaminants and to support water reuse in drought stricken regions, new technologies must be introduced. The interaction of water with plasma rapidly mineralizes organics by inducing advanced oxidation in addition to other chemical, physical and radiative processes. The primary barrier to the implementation of plasma-based water treatment is process volume scale up. In this work, we investigate a potentially scalable, high throughput plasma water reactor that utilizes a packed bed dielectric barrier-like geometry to maximize the plasma-water interface. Here, the water serves as the dielectric medium. High-speed imaging and emission spectroscopy are used to characterize the reactor discharges. Changes in methylene blue concentration and basic water parameters are mapped as a function of plasma treatment time. Experimental results are compared to electrostatic and plasma chemistry computations, which will provide insight into the reactor's operation so that efficiency can be assessed. Supported by NSF (CBET 1336375).

  18. Epigenetics and Epigenomics of Plants.

    Science.gov (United States)

    Yadav, Chandra Bhan; Pandey, Garima; Muthamilarasan, Mehanathan; Prasad, Manoj

    2018-01-23

    The genetic material DNA in association with histone proteins forms the complex structure called chromatin, which is prone to undergo modification through certain epigenetic mechanisms including cytosine DNA methylation, histone modifications, and small RNA-mediated methylation. Alterations in chromatin structure lead to inaccessibility of genomic DNA to various regulatory proteins such as transcription factors, which eventually modulates gene expression. Advancements in high-throughput sequencing technologies have provided the opportunity to study the epigenetic mechanisms at genome-wide levels. Epigenomic studies using high-throughput technologies will widen the understanding of mechanisms as well as functions of regulatory pathways in plant genomes, which will further help in manipulating these pathways using genetic and biochemical approaches. This technology could be a potential research tool for displaying the systematic associations of genetic and epigenetic variations, especially in terms of cytosine methylation onto the genomic region in a specific cell or tissue. A comprehensive study of plant populations to correlate genotype to epigenotype and to phenotype, and also the study of methyl quantitative trait loci (QTL) or epiGWAS, is possible by using high-throughput sequencing methods, which will further accelerate molecular breeding programs for crop improvement.

  19. High-throughput scoring of seed germination

    NARCIS (Netherlands)

    Ligterink, Wilco; Hilhorst, Henk W.M.

    2017-01-01

    High-throughput analysis of seed germination for phenotyping large genetic populations or mutant collections is very labor intensive and would highly benefit from an automated setup. Although very often used, the total germination percentage after a nominated period of time is not very

  20. A high throughput spectral image microscopy system

    Science.gov (United States)

    Gesley, M.; Puri, R.

    2018-01-01

    A high throughput spectral image microscopy system is configured for rapid detection of rare cells in large populations. To overcome flow cytometry throughput limits and the use of fluorophore tags, the system architecture integrates sample mechanical handling, signal processors, and optics in a non-confocal version of light absorption and scattering spectroscopic microscopy. Spectral images with native contrast do not require the use of exogenous stains to render cells with submicron resolution. Structure may be characterized without restriction to cell clusters of differentiation.

  1. High Throughput Neuro-Imaging Informatics

    Directory of Open Access Journals (Sweden)

    Michael I Miller

    2013-12-01

    Full Text Available This paper describes neuroinformatics technologies at 1 mm anatomical scale based on high throughput 3D functional and structural imaging technologies of the human brain. The core is an abstract pipeline for converting functional and structural imagery into their high dimensional neuroinformatic representation index containing O(10^3-10^4) discriminating dimensions. The pipeline is based on advanced image analysis coupled to digital knowledge representations in the form of dense atlases of the human brain at gross anatomical scale. We demonstrate the integration of these high-dimensional representations with machine learning methods, which have become the mainstay of other fields of science including genomics as well as social networks. Such high throughput facilities have the potential to alter the way medical images are stored and utilized in radiological workflows. The neuroinformatics pipeline is used to examine cross-sectional and personalized analyses of neuropsychiatric illnesses in clinical applications as well as longitudinal studies. We demonstrate the use of high throughput machine learning methods for supporting (i) cross-sectional image analysis to evaluate the health status of individual subjects with respect to the population data, and (ii) integration of image and non-image information for diagnosis and prognosis.

  2. Next generation sequencing-based multigene panel for high throughput detection of food-borne pathogens.

    Science.gov (United States)

    Ferrario, Chiara; Lugli, Gabriele Andrea; Ossiprandi, Maria Cristina; Turroni, Francesca; Milani, Christian; Duranti, Sabrina; Mancabelli, Leonardo; Mangifesta, Marta; Alessandri, Giulia; van Sinderen, Douwe; Ventura, Marco

    2017-09-01

    Contamination of food by chemicals or pathogenic bacteria may cause particular illnesses that are linked to food consumption, commonly referred to as foodborne diseases. Bacteria are present in/on various food products, such as fruits, vegetables and ready-to-eat products. Bacteria that cause foodborne diseases are known as foodborne pathogens (FBPs). Accurate detection methods that are able to reveal the presence of FBPs in food matrices are in constant demand, in order to ensure safe foods with a minimal risk of causing foodborne diseases. Here, a multiplex PCR-based Illumina sequencing method for FBP detection in food matrices was developed. Starting from 25 bacterial targets and 49 selected PCR primer pairs, a primer collection called the foodborne pathogen panel (FPP) consisting of 12 oligonucleotide pairs was developed. The FPP allows a more rapid and reliable identification of FBPs compared to classical cultivation methods. Furthermore, FPP permits sensitive and specific FBP detection in about two days from food sample acquisition to bioinformatics-based identification. The FPP is able to simultaneously identify eight different bacterial pathogens, i.e. Listeria monocytogenes, Campylobacter jejuni, Campylobacter coli, Salmonella enterica subsp. enterica serovar Enteritidis, Escherichia coli, Shigella sonnei, Staphylococcus aureus and Yersinia enterocolitica, in a given food matrix at a threshold contamination level of 10^1 cells/g. Moreover, this novel detection method may represent an alternative and/or a complementary approach to PCR-based techniques, which are routinely used for FBP detection, and could be implemented in (parts of) the food chain as a quality check. Copyright © 2017 Elsevier B.V. All rights reserved.
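
    As a hedged sketch of the bioinformatics-based identification step (the actual FPP pipeline is not described in this abstract), pathogen targets could be flagged from per-target amplicon read counts against simple thresholds; the counts and cutoffs below are invented.

```python
def call_pathogens(read_counts, min_reads=500, min_fraction=0.001):
    """Flag pathogen targets whose amplicon read support exceeds simple thresholds."""
    total = sum(read_counts.values())
    return sorted(
        target for target, n in read_counts.items()
        if n >= min_reads and n / total >= min_fraction
    )

# Hypothetical per-target read counts after demultiplexing an FPP sequencing run.
counts = {"Listeria_monocytogenes": 12840, "Salmonella_Enteritidis": 310, "Yersinia_enterocolitica": 9600}
print(call_pathogens(counts))
```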

  3. High Throughput PBTK: Open-Source Data and Tools for ...

    Science.gov (United States)

    Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy

  4. The application of the high throughput sequencing technology in the transposable elements.

    Science.gov (United States)

    Liu, Zhen; Xu, Jian-hong

    2015-09-01

    High throughput sequencing technology has dramatically improved the efficiency of DNA sequencing, and decreased the costs to a great extent. Meanwhile, this technology usually has advantages of better specificity, higher sensitivity and accuracy. Therefore, it has been applied to the research on genetic variations, transcriptomics and epigenomics. Recently, this technology has been widely employed in the studies of transposable elements and has achieved fruitful results. In this review, we summarize the application of high throughput sequencing technology in the fields of transposable elements, including the estimation of transposon content, preference of target sites and distribution, insertion polymorphism and population frequency, identification of rare copies, transposon horizontal transfers as well as transposon tagging. We also briefly introduce the major common sequencing strategies and algorithms, their advantages and disadvantages, and the corresponding solutions. Finally, we envision the developing trends of high throughput sequencing technology, especially the third generation sequencing technology, and its application in transposon studies in the future, hopefully providing a comprehensive understanding and reference for related scientific researchers.
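
    To illustrate one of the listed applications, insertion polymorphism and population frequency, here is a minimal sketch that derives insertion frequencies from a hypothetical presence/absence table; it is not tied to any specific tool mentioned in the record.

```python
def insertion_frequencies(presence):
    """Population frequency of each transposon insertion from a
    presence/absence table: {insertion_id: {individual: True/False}}."""
    return {
        ins: sum(calls.values()) / len(calls)
        for ins, calls in presence.items()
    }

# Hypothetical presence/absence calls for two insertions across four individuals.
calls = {
    "mPing_chr1_102334": {"ind1": True, "ind2": True, "ind3": False, "ind4": True},
    "Tos17_chr7_884201": {"ind1": False, "ind2": False, "ind3": True, "ind4": False},
}
for ins, freq in insertion_frequencies(calls).items():
    print(ins, f"{freq:.2f}")
```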

  5. Economic consequences of high throughput maskless lithography

    Science.gov (United States)

    Hartley, John G.; Govindaraju, Lakshmi

    2005-11-01

    Many people in the semiconductor industry bemoan the high costs of masks and view mask cost as one of the significant barriers to bringing new chip designs to market. All that is needed is a viable maskless technology and the problem will go away. Numerous sites around the world are working on maskless lithography but inevitably, the question asked is "Wouldn't a one wafer per hour maskless tool make a really good mask writer?" Of course, the answer is yes; the hesitation you hear in the answer isn't based on technology concerns, it's financial. The industry needs maskless lithography because mask costs are too high. Mask costs are too high because mask pattern generators (PG's) are slow and expensive. If mask PG's become much faster, mask costs go down, the maskless market goes away and the PG supplier is faced with an even smaller tool demand from the mask shops. Technical success becomes financial suicide - or does it? In this paper we will present the results of a model that examines some of the consequences of introducing high throughput maskless pattern generation. Specific features in the model include tool throughput for masks and wafers, market segmentation by node for masks and wafers, and mask cost as an entry barrier to new chip designs. How does the availability of low cost masks and maskless tools affect the industry's tool makeup, and what is the ultimate potential market for high throughput maskless pattern generators?

  6. Preliminary High-Throughput Metagenome Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Dusheyko, Serge; Furman, Craig; Pangilinan, Jasmyn; Shapiro, Harris; Tu, Hank

    2007-03-26

    Metagenome data sets present a qualitatively different assembly problem than traditional single-organism whole-genome shotgun (WGS) assembly. The unique aspects of such projects include the presence of a potentially large number of distinct organisms and their representation in the data set at widely different fractions. In addition, multiple closely related strains could be present, which would be difficult to assemble separately. Failure to take these issues into account can result in poor assemblies that either jumble together different strains or which fail to yield useful results. The DOE Joint Genome Institute has sequenced a number of metagenomic projects and plans to considerably increase this number in the coming year. As a result, the JGI has a need for high-throughput tools and techniques for handling metagenome projects. We present the techniques developed to handle metagenome assemblies in a high-throughput environment. This includes a streamlined assembly wrapper, based on the JGI's in-house WGS assembler, Jazz. It also includes the selection of sensible defaults targeted for metagenome data sets, as well as quality control automation for cleaning up the raw results. While analysis is ongoing, we will discuss preliminary assessments of the quality of the assembly results (http://fames.jgi-psf.org).

  7. Modeling Steroidogenesis Disruption Using High-Throughput ...

    Science.gov (United States)

    Environmental chemicals can elicit endocrine disruption by altering steroid hormone biosynthesis and metabolism (steroidogenesis) causing adverse reproductive and developmental effects. Historically, a lack of assays resulted in few chemicals having been evaluated for effects on steroidogenesis. The steroidogenic pathway is a series of hydroxylation and dehydrogenation steps carried out by CYP450 and hydroxysteroid dehydrogenase enzymes, yet the only enzyme in the pathway for which a high-throughput screening (HTS) assay has been developed is aromatase (CYP19A1), responsible for the aromatization of androgens to estrogens. Recently, the ToxCast HTS program adapted the OECD validated H295R steroidogenesis assay using human adrenocortical carcinoma cells into a high-throughput model to quantitatively assess the concentration-dependent (0.003-100 µM) effects of chemicals on 10 steroid hormones including progestagens, androgens, estrogens and glucocorticoids. These results, in combination with two CYP19A1 inhibition assays, comprise a large dataset amenable to clustering approaches supporting the identification and characterization of putative mechanisms of action (pMOA) for steroidogenesis disruption. In total, 514 chemicals were tested in all CYP19A1 and steroidogenesis assays. 216 chemicals were identified as CYP19A1 inhibitors in at least one CYP19A1 assay. 208 of these chemicals also altered hormone levels in the H295R assay, suggesting 96% sensitivity in the
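
    A minimal sketch of the concentration-response modeling such data support, fitting a three-parameter Hill curve to hypothetical fold-change values with SciPy; the ToxCast analysis itself uses its own statistical pipeline, and all values below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ec50, slope):
    """Three-parameter Hill model for fold-change in hormone level vs. concentration."""
    return 1.0 + (top - 1.0) / (1.0 + (ec50 / conc) ** slope)

# Hypothetical fold-changes of one hormone across the 0.003-100 uM test range.
conc = np.array([0.003, 0.01, 0.1, 1.0, 10.0, 100.0])
fold = np.array([1.02, 1.05, 1.30, 2.10, 3.60, 3.90])

# Fit with positivity bounds on all parameters to keep the model well defined.
params, _ = curve_fit(hill, conc, fold, p0=[4.0, 1.0, 1.0],
                      bounds=([1.0, 1e-4, 0.1], [10.0, 1e3, 5.0]))
print(dict(zip(["top", "EC50 (uM)", "slope"], np.round(params, 2))))
```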

  8. Integrating Epigenomics into the Understanding of Biomedical Insight.

    Science.gov (United States)

    Han, Yixing; He, Ximiao

    2016-01-01

    Epigenetics is one of the most rapidly expanding fields in biomedical research, and the popularity of high-throughput next-generation sequencing (NGS) highlights the accelerating speed of epigenomics discovery over the past decade. Epigenetics studies the heritable phenotypes resulting from chromatin changes without alteration of the DNA sequence. Epigenetic factors and their interactive networks regulate almost all of the fundamental biological processes, and incorrect epigenetic information may lead to complex diseases. A comprehensive, genome-wide understanding of epigenetic mechanisms, their interactions, and their alterations in health and disease has become a priority in biological research. Bioinformatics is expected to make a remarkable contribution for this purpose, especially in processing and interpreting large-scale NGS datasets. In this review, we introduce pioneering epigenetics achievements in health and complex diseases; next, we give a systematic review of epigenomics data generation, summarize public resources and integrative analysis approaches, and finally outline the challenges and future directions in computational epigenomics.

  9. High-Throughput Analysis of Enzyme Activities

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Guoxin [Iowa State Univ., Ames, IA (United States)

    2007-01-01

    High-throughput screening (HTS) techniques have been applied to many research fields nowadays. Robot microarray printing and automated microtiter handling techniques allow HTS to be performed in both heterogeneous and homogeneous formats, with minimal sample required for each assay element. In this dissertation, new HTS techniques for enzyme activity analysis were developed. First, patterns of immobilized enzyme on a nylon screen were detected by a multiplexed capillary system. The imaging resolution is limited by the outer diameter of the capillaries. In order to get finer images, capillaries with smaller outer diameters can be used to form the imaging probe. Application of capillary electrophoresis allows separation of the product from the substrate in the reaction mixture, so that the product does not need to have optical properties different from those of the substrate. UV absorption detection allows almost universal detection for organic molecules. Thus, no modifications of either the substrate or the product molecules are necessary. This technique has the potential to be used in screening of local distribution variations of specific bio-molecules in a tissue or in screening of multiple immobilized catalysts. Another high-throughput screening technique was developed by directly monitoring the light intensity of the immobilized-catalyst surface using a scientific charge-coupled device (CCD). Briefly, the surface of the enzyme microarray is focused onto a scientific CCD using an objective lens. By carefully choosing the detection wavelength, generation of product on an enzyme spot can be seen by the CCD. Analyzing the light intensity change over time on an enzyme spot gives information on the reaction rate. The same microarray can be reused many times. Thus, high-throughput kinetic studies of hundreds of catalytic reactions are made possible. Finally, we studied the fluorescence emission spectra of ADP and obtained the detection limits for ADP under three different ...
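
    As a hedged illustration of extracting a reaction rate from spot intensity over time (the dissertation's exact processing is not given), a linear fit to the initial intensity trace suffices; the readings below are hypothetical.

```python
import numpy as np

def initial_rate(time_s, intensity):
    """Initial reaction rate estimated as the slope of a linear fit to
    product-related light intensity vs. time for one enzyme spot."""
    slope, _intercept = np.polyfit(time_s, intensity, 1)
    return slope

# Hypothetical intensity readings (arbitrary units) for one spot over 60 s.
t = np.arange(0, 70, 10)
signal = np.array([100, 118, 141, 159, 183, 201, 222])
print(f"rate = {initial_rate(t, signal):.2f} a.u./s")
```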

  10. High throughput assays for analyzing transcription factors.

    Science.gov (United States)

    Li, Xianqiang; Jiang, Xin; Yaoi, Takuro

    2006-06-01

    Transcription factors are a group of proteins that modulate the expression of genes involved in many biological processes, such as cell growth and differentiation. Alterations in transcription factor function are associated with many human diseases, and therefore these proteins are attractive potential drug targets. A key issue in the development of such therapeutics is the generation of effective tools that can be used for high throughput discovery of the critical transcription factors involved in human diseases, and the measurement of their activities in a variety of disease or compound-treated samples. Here, a number of innovative arrays and 96-well format assays for profiling and measuring the activities of transcription factors will be discussed.

  11. High-throughput hyperdimensional vertebrate phenotyping.

    Science.gov (United States)

    Pardo-Martin, Carlos; Allalou, Amin; Medina, Jaime; Eimon, Peter M; Wählby, Carolina; Fatih Yanik, Mehmet

    2013-01-01

    Most gene mutations and biologically active molecules cause complex responses in animals that cannot be predicted by cell culture models. Yet animal studies remain too slow and their analyses are often limited to only a few readouts. Here we demonstrate high-throughput optical projection tomography with micrometre resolution and hyperdimensional screening of entire vertebrates in tens of seconds using a simple fluidic system. Hundreds of independent morphological features and complex phenotypes are automatically captured in three dimensions with unprecedented speed and detail in semitransparent zebrafish larvae. By clustering quantitative phenotypic signatures, we can detect and classify even subtle alterations in many biological processes simultaneously. We term our approach hyperdimensional in vivo phenotyping. To illustrate the power of hyperdimensional in vivo phenotyping, we have analysed the effects of several classes of teratogens on cartilage formation using 200 independent morphological measurements, and identified similarities and differences that correlate well with their known mechanisms of actions in mammals.
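
    A minimal sketch of clustering quantitative phenotypic signatures, assuming z-scored morphological features and average-linkage hierarchical clustering; the feature matrix and treatment labels are invented, not from the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical matrix: rows = treatment groups, columns = z-scored morphological features.
signatures = np.array([
    [0.1, -0.2, 1.5, 0.9],
    [0.0, -0.1, 1.4, 1.1],
    [-1.2, 2.0, 0.1, -0.3],
    [-1.0, 1.8, 0.2, -0.4],
])
labels = ["teratogenA", "teratogenB", "teratogenC", "teratogenD"]

# Cluster the phenotypic signatures and cut the tree into two groups.
tree = linkage(pdist(signatures, metric="euclidean"), method="average")
groups = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(labels, groups)))
```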

  12. High-Throughput Process Development for Biopharmaceuticals.

    Science.gov (United States)

    Shukla, Abhinav A; Rameez, Shahid; Wolfe, Leslie S; Oien, Nathan

    2017-11-14

    The ability to conduct multiple experiments in parallel significantly reduces the time that it takes to develop a manufacturing process for a biopharmaceutical. This is particularly significant before clinical entry, because process development and manufacturing are on the "critical path" for a drug candidate to enter clinical development. High-throughput process development (HTPD) methodologies can be similarly impactful during late-stage development, both for developing the final commercial process as well as for process characterization and scale-down validation activities that form a key component of the licensure filing package. This review examines the current state of the art for HTPD methodologies as they apply to cell culture, downstream purification, and analytical techniques. In addition, we provide a vision of how HTPD activities across all of these spaces can integrate to create a rapid process development engine that can accelerate biopharmaceutical drug development.

  13. Applications of High Throughput Nucleotide Sequencing

    DEFF Research Database (Denmark)

    Waage, Johannes Eichler

    The recent advent of high throughput sequencing of nucleic acids (RNA and DNA) has vastly expanded research into the functional and structural biology of the genome of all living organisms (and even a few dead ones). With this enormous and exponential growth in biological data generation come equally large demands in data handling, analysis and interpretation, perhaps defining the modern challenge of the computational biologist of the post-genomic era. The first part of this thesis consists of a general introduction to the history, common terms and challenges of next generation sequencing, focusing on oft encountered problems in data processing, such as quality assurance, mapping, normalization, visualization, and interpretation. Presented in the second part are scientific endeavors representing solutions to problems of two sub-genres of next generation sequencing. For the first flavor, RNA-sequencing ...

  14. Applications of High Throughput Nucleotide Sequencing

    DEFF Research Database (Denmark)

    Waage, Johannes Eichler

    The recent advent of high throughput sequencing of nucleic acids (RNA and DNA) has vastly expanded research into the functional and structural biology of the genome of all living organisms (and even a few dead ones). With this enormous and exponential growth in biological data generation come equally large demands in data handling, analysis and interpretation, perhaps defining the modern challenge of the computational biologist of the post-genomic era. The first part of this thesis consists of a general introduction to the history, common terms and challenges of next generation sequencing ... For the second flavor, DNA-seq, a study presenting genome wide profiling of transcription factor CEBP/A in liver cells undergoing regeneration after partial hepatectomy (article IV) is included.

  15. High Throughput Spectroscopic Catalyst Screening via Surface Plasmon Spectroscopy

    Science.gov (United States)

    2015-07-15

    Final report for AOARD Grant 144064 (contract FA2386-14-1-4064), "High Throughput Spectroscopic Catalyst Screening by Surface Plasmon Spectroscopy"; dates covered: 26 June 2014 to 25 March 2015; report dated July 15, 2015.

  16. High-throughput crystallography for structural genomics.

    Science.gov (United States)

    Joachimiak, Andrzej

    2009-10-01

    Protein X-ray crystallography recently celebrated its 50th anniversary. The structures of myoglobin and hemoglobin determined by Kendrew and Perutz provided the first glimpses into the complex protein architecture and chemistry. Since then, the field of structural molecular biology has experienced extraordinary progress and now more than 55000 protein structures have been deposited into the Protein Data Bank. In the past decade many advances in macromolecular crystallography have been driven by world-wide structural genomics efforts. This was made possible because of third-generation synchrotron sources, structure phasing approaches using anomalous signal, and cryo-crystallography. Complementary progress in molecular biology, proteomics, hardware and software for crystallographic data collection, structure determination and refinement, computer science, databases, robotics and automation improved and accelerated many processes. These advancements provide the robust foundation for structural molecular biology and assure strong contribution to science in the future. In this report we focus mainly on reviewing structural genomics high-throughput X-ray crystallography technologies and their impact.

  17. High-throughput Crystallography for Structural Genomics

    Science.gov (United States)

    Joachimiak, Andrzej

    2009-01-01

    Protein X-ray crystallography recently celebrated its 50th anniversary. The structures of myoglobin and hemoglobin determined by Kendrew and Perutz provided the first glimpses into the complex protein architecture and chemistry. Since then, the field of structural molecular biology has experienced extraordinary progress and now over 53,000 protein structures have been deposited into the Protein Data Bank. In the past decade many advances in macromolecular crystallography have been driven by world-wide structural genomics efforts. This was made possible because of third-generation synchrotron sources, structure phasing approaches using anomalous signal and cryo-crystallography. Complementary progress in molecular biology, proteomics, hardware and software for crystallographic data collection, structure determination and refinement, computer science, databases, robotics and automation improved and accelerated many processes. These advancements provide the robust foundation for structural molecular biology and assure strong contribution to science in the future. In this report we focus mainly on reviewing structural genomics high-throughput X-ray crystallography technologies and their impact. PMID:19765976

  18. Ultraspecific probes for high throughput HLA typing

    Directory of Open Access Journals (Sweden)

    Eggers Rick

    2009-02-01

    Full Text Available Background: The variations within an individual's HLA (Human Leukocyte Antigen) genes have been linked to many immunological events, e.g. susceptibility to disease, response to vaccines, and the success of blood, tissue, and organ transplants. Although the microarray format has the potential to achieve high-resolution typing, this has yet to be attained due to inefficiencies of current probe design strategies. Results: We present a novel three-step approach for the design of high-throughput microarray assays for HLA typing. This approach first selects sequences containing the SNPs present in all alleles of the locus of interest. It then calculates the number of base changes necessary to convert a candidate probe sequence to the closest subsequence within the set of sequences likely to be present in the sample (including the remainder of the human genome), in order to identify those candidate probes that are "ultraspecific" for the allele of interest. Due to the high specificity of these sequences, it is possible that preliminary steps such as PCR amplification are no longer necessary. Lastly, the minimum number of these ultraspecific probes is selected such that the highest resolution typing can be achieved for the minimal cost of production. As an example, an array was designed and in silico results were obtained for typing of the HLA-B locus. Conclusion: The assay presented here provides higher resolution than previously developed assays and includes more alleles than previously considered. Based upon the in silico and preliminary experimental results, we believe that the proposed approach can be readily applied to any highly polymorphic gene system.
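
    To make the probe-design idea concrete, here is a hedged sketch of the mismatch-counting step: for each candidate probe, find the minimum number of base changes to any same-length window of a background sequence set and keep only probes above a distance threshold. The sequences and threshold are toy values, not from the paper.

```python
def min_mismatches(probe, background_seqs):
    """Smallest number of base changes needed to turn the probe into any
    same-length window of the background sequence set."""
    best = len(probe)
    for seq in background_seqs:
        for i in range(len(seq) - len(probe) + 1):
            window = seq[i:i + len(probe)]
            best = min(best, sum(a != b for a, b in zip(probe, window)))
    return best

def ultraspecific(candidates, background_seqs, min_distance=3):
    """Keep candidate probes that are at least min_distance changes from any background window."""
    return [p for p in candidates if min_mismatches(p, background_seqs) >= min_distance]

# Hypothetical candidate probes and a toy background (real use: all other HLA alleles plus the genome).
candidates = ["ACGTTGCAGTACGTAA", "GGGTTTCCCAAAGGGT"]
background = ["TTTACGTTGCAGTACGTAATTT", "CCCCCCCCCCCCCCCCCCCCCC"]
print(ultraspecific(candidates, background, min_distance=3))
```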

  19. Transcriptional biomarkers--high throughput screening, quantitative verification, and bioinformatical validation methods.

    Science.gov (United States)

    Riedmaier, Irmgard; Pfaffl, Michael W

    2013-01-01

    Molecular biomarkers have found their way into many research fields, especially molecular medicine, medical diagnostics, disease prognosis and risk assessment, but also other areas such as food safety. Different definitions of the term biomarker exist, but on the whole biomarkers are measurable biological molecules that are characteristic of a specific physiological status, including drug intervention and normal or pathological processes. There are various examples of molecular biomarkers that are already successfully used in clinical diagnostics, especially as prognostic or diagnostic tools for diseases. Molecular biomarkers can be identified on different molecular levels, namely the genome, the epigenome, the transcriptome, the proteome, the metabolome and the lipidome. With special "omic" technologies, nowadays often high throughput technologies, these molecular biomarkers can be identified and quantitatively measured. This article describes the different molecular levels on which biomarker research is possible, including some biomarker candidates that have already been identified. The transcriptomic approach is described in detail, including available high throughput methods, molecular levels, quantitative verification, and biostatistical requirements for transcriptional biomarker identification and validation. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. High Throughput Determinations of Critical Dosing Parameters (IVIVE workshop)

    Science.gov (United States)

    High throughput toxicokinetics (HTTK) is an approach that allows for rapid estimations of TK for hundreds of environmental chemicals. HTTK-based reverse dosimetry (i.e., reverse toxicokinetics or RTK) is used to convert high throughput in vitro toxicity screening (HTS) da...
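
    A minimal sketch of the reverse-dosimetry conversion under a simple linear-toxicokinetics assumption (actual HTTK implementations use far more detailed models): an in vitro AC50 is scaled by the steady-state plasma concentration predicted for a unit daily dose. The numbers are hypothetical.

```python
def administered_equivalent_dose(ac50_uM, css_uM_per_mg_kg_day):
    """Reverse dosimetry: scale an in vitro AC50 (uM) by the steady-state plasma
    concentration predicted for a 1 mg/kg/day exposure to get an oral
    equivalent dose (mg/kg/day). Simplified linear-TK illustration only."""
    return ac50_uM / css_uM_per_mg_kg_day

# Hypothetical values: HTS AC50 of 3 uM, predicted Css of 1.5 uM per 1 mg/kg/day.
print(f"AED ~ {administered_equivalent_dose(3.0, 1.5):.1f} mg/kg/day")
```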

  1. Time for the epigenome

    CERN Multimedia

    2010-01-01

    The complexity of genetic regulation is one of the great wonders of nature, but it represents a daunting challenge to unravel. The International Human Epigenome Consortium is an appropriate response. (1 page)

  2. Epigenomics of Hypertension

    Science.gov (United States)

    Liang, Mingyu; Cowley, Allen W.; Mattson, David L.; Kotchen, Theodore A.; Liu, Yong

    2013-01-01

    Multiple genes and pathways are involved in the pathogenesis of hypertension. Epigenomic studies of hypertension are beginning to emerge and hold great promise of providing novel insights into the mechanisms underlying hypertension. Epigenetic marks or mediators including DNA methylation, histone modifications, and non-coding RNA can be studied at a genome or near-genome scale using epigenomic approaches. At the single gene level, several studies have identified changes in epigenetic modifications in genes expressed in the kidney that correlate with the development of hypertension. Systematic analysis and integration of epigenetic marks at the genome scale, demonstration of cellular and physiological roles of specific epigenetic modifications, and investigation of inheritance are among the major challenges and opportunities for future epigenomic and epigenetic studies of hypertension. Essential hypertension is a multifactorial disease involving multiple genetic and environmental factors and mediated by alterations in multiple biological pathways. Because the non-genetic mechanisms may involve epigenetic modifications, epigenomics is one of the latest concepts and approaches brought to bear on hypertension research. In this article, we summarize briefly the concepts and techniques for epigenomics, discuss the rationale for applying epigenomic approaches to study hypertension, and review the current state of this research area. PMID:24011581

  3. High Throughput Architecture for High Performance NoC

    OpenAIRE

    Ghany, Mohamed A. Abd El; El-Moursy, Magdy A.; Ismail, Mohammed

    2010-01-01

    In this chapter, the high throughput NoC architecture is proposed to increase the throughput of the switch in NoC. The proposed architecture can also improve the latency of the network. The proposed high throughput interconnect architecture is applied on different NoC architectures. The architecture increases the throughput of the network by more than 38% while preserving the average latency. The area of high throughput NoC switch is decreased by 18% as compared to the area of BFT switch. The...

  4. High Throughput Hall Thruster for Small Spacecraft Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Busek is developing a high throughput nominal 100-W Hall Effect Thruster. This device is well sized for spacecraft ranging in size from several tens of kilograms to...

  5. AOPs & Biomarkers: Bridging High Throughput Screening and Regulatory Decision Making.

    Science.gov (United States)

    As high throughput screening (HTS) approaches play a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models for this purpose are becoming increasingly sophisticated...

  6. High Throughput Hall Thruster for Small Spacecraft Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Busek Co. Inc. proposes to develop a high throughput, nominal 100 W Hall Effect Thruster (HET). This HET will be sized for small spacecraft (< 180 kg), including...

  7. High-Throughput Analysis and Automation for Glycomics Studies

    NARCIS (Netherlands)

    Shubhakar, A.; Reiding, K.R.; Gardner, R.A.; Spencer, D.I.R.; Fernandes, D.L.; Wuhrer, M.

    2015-01-01

    This review covers advances in analytical technologies for high-throughput (HTP) glycomics. Our focus is on structural studies of glycoprotein glycosylation to support biopharmaceutical realization and the discovery of glycan biomarkers for human disease. For biopharmaceuticals, there is increasing

  8. Materiomics - High-Throughput Screening of Biomaterial Properties

    NARCIS (Netherlands)

    de Boer, Jan; van Blitterswijk, Clemens

    2013-01-01

    This complete, yet concise, guide introduces you to the rapidly developing field of high throughput screening of biomaterials: materiomics. Bringing together the key concepts and methodologies used to determine biomaterial properties, it will help you understand the adaptation and application of materiomics

  9. MIPHENO: Data normalization for high throughput metabolic analysis.

    Science.gov (United States)

    High throughput methodologies such as microarrays, mass spectrometry and plate-based small molecule screens are increasingly used to facilitate discoveries from gene function to drug candidate identification. These large-scale experiments are typically carried out over the course...

  10. Epigenomics of hypertension.

    Science.gov (United States)

    Liang, Mingyu; Cowley, Allen W; Mattson, David L; Kotchen, Theodore A; Liu, Yong

    2013-07-01

    Multiple genes and pathways are involved in the pathogenesis of hypertension. Epigenomic studies of hypertension are beginning to emerge and hold great promise of providing novel insights into the mechanisms underlying hypertension. Epigenetic marks or mediators including DNA methylation, histone modifications, and noncoding RNA can be studied at a genome or near-genome scale using epigenomic approaches. At the single gene level, several studies have identified changes in epigenetic modifications in genes expressed in the kidney that correlate with the development of hypertension. Systematic analysis and integration of epigenetic marks at the genome-wide scale, demonstration of cellular and physiological roles of specific epigenetic modifications, and investigation of inheritance are among the major challenges and opportunities for future epigenomic and epigenetic studies of hypertension. © 2013 Elsevier Inc. All rights reserved.

  11. Applications of High Throughput Sequencing for Immunology and Clinical Diagnostics

    OpenAIRE

    Kim, Hyunsung John

    2014-01-01

    High throughput sequencing methods have fundamentally shifted the manner in which biological experiments are performed. In this dissertation, conventional and novel high throughput sequencing and bioinformatics methods are applied to immunology and diagnostics. In order to study rare subsets of cells, an RNA sequencing method was first optimized for use with minimal levels of RNA and cellular input. The optimized RNA sequencing method was then applied to study the transcriptional differences ...

  12. Epigenomics in marine fishes.

    Science.gov (United States)

    Metzger, David C H; Schulte, Patricia M

    2016-12-01

    Epigenetic mechanisms are an underappreciated and often ignored component of an organism's response to environmental change and may underlie many types of phenotypic plasticity. Recent technological advances in methods for detecting epigenetic marks at a whole-genome scale have launched new opportunities for studying epigenomics in ecologically relevant non-model systems. The study of ecological epigenomics holds great promise to better understand the linkages between genotype, phenotype, and the environment and to explore mechanisms of phenotypic plasticity. The many attributes of marine fish species, including their high diversity, variable life histories, high fecundity, impressive plasticity, and economic value provide unique opportunities for studying epigenetic mechanisms in an environmental context. To provide a primer on epigenomic research for fish biologists, we start by describing fundamental aspects of epigenetics, focusing on the most widely studied and most well understood of the epigenetic marks: DNA methylation. We then describe the techniques that have been used to investigate DNA methylation in marine fishes to date and highlight some new techniques that hold great promise for future studies. Epigenomic research in marine fishes is in its early stages, so we first briefly discuss what has been learned about the establishment, maintenance, and function of DNA methylation in fishes from studies in zebrafish and then summarize the studies demonstrating the pervasive effects of the environment on the epigenomes of marine fishes. We conclude by highlighting the potential for ongoing research on the epigenomics of marine fishes to reveal critical aspects of the interaction between organisms and their environments. Copyright © 2016 Elsevier B.V. All rights reserved.
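
    As a hedged illustration of the most widely studied mark discussed here, the sketch below computes per-CpG methylation fractions from bisulfite-sequencing counts, with a simple coverage filter; the sites and counts are hypothetical.

```python
def methylation_levels(cpg_counts, min_coverage=5):
    """Per-CpG methylation fraction from bisulfite-sequencing counts of
    methylated (C) vs. unmethylated (T) calls; low-coverage sites are skipped."""
    levels = {}
    for site, (meth, unmeth) in cpg_counts.items():
        coverage = meth + unmeth
        if coverage >= min_coverage:
            levels[site] = meth / coverage
    return levels

# Hypothetical counts (methylated, unmethylated) at three CpG sites.
counts = {"chr5:1203441": (18, 2), "chr5:1203460": (3, 17), "chr5:1203492": (1, 2)}
print(methylation_levels(counts))
```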

  13. Epigenomics of autoimmune diseases.

    Science.gov (United States)

    Gupta, Bhawna; Hawkins, R David

    2015-03-01

    Autoimmune diseases are complex disorders of largely unknown etiology. Genetic studies have identified a limited number of causal genes from a marginal number of individuals, and demonstrated a high degree of discordance in monozygotic twins. Studies have begun to reveal epigenetic contributions to these diseases, primarily through the study of DNA methylation, but chromatin and non-coding RNA changes are also emerging. Moving forward an integrative analysis of genomic, transcriptomic and epigenomic data, with the latter two coming from specific cell types, will provide an understanding that has been missed from genetics alone. We provide an overview of the current state of the field and vision for deriving the epigenomics of autoimmunity.

  14. A novel high throughput method to investigate polymer dissolution.

    Science.gov (United States)

    Zhang, Ying; Mallapragada, Surya K; Narasimhan, Balaji

    2010-02-16

    The dissolution behavior of polystyrene (PS) in biodiesel was studied by developing a novel high throughput approach based on Fourier-transform infrared (FTIR) microscopy. A multiwell device for high throughput dissolution testing was fabricated using a photolithographic rapid prototyping method. The dissolution of PS films in each well was tracked by following the characteristic IR band of PS and the effect of PS molecular weight and temperature on the dissolution rate was simultaneously investigated. The results were validated with conventional gravimetric methods. The high throughput method can be extended to evaluate the dissolution profiles of a large number of samples, or to simultaneously investigate the effect of variables such as polydispersity, crystallinity, and mixed solvents. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Screening and synthesis: high throughput technologies applied to parasitology.

    Science.gov (United States)

    Morgan, R E; Westwood, N J

    2004-01-01

    High throughput technologies continue to develop in response to the challenges set by the genome projects. This article discusses how the techniques of both high throughput screening (HTS) and synthesis can influence research in parasitology. Examples of the use of targeted and phenotype-based HTS using unbiased compound collections are provided. The important issue of identifying the protein target(s) of bioactive compounds is discussed from the synthetic chemist's perspective. This article concludes by reviewing recent examples of successful target identification studies in parasitology.

  16. High throughput materials research and development for lithium ion batteries

    Directory of Open Access Journals (Sweden)

    Parker Liu

    2017-09-01

    Full Text Available Development of next-generation batteries requires a breakthrough in materials. The traditional one-by-one method, which is suited to synthesizing large numbers of single-composition materials, is time-consuming and costly. High-throughput and combinatorial experimentation is an effective method for synthesizing and characterizing a huge number of materials over a broad compositional region in a short time, which greatly speeds up the discovery and optimization of materials at lower cost. In this work, high-throughput and combinatorial materials synthesis technologies for lithium ion battery research are discussed, and our efforts in developing such instrumentation are introduced.

  17. High-throughput optical coherence tomography at 800 nm.

    Science.gov (United States)

    Goda, Keisuke; Fard, Ali; Malik, Omer; Fu, Gilbert; Quach, Alan; Jalali, Bahram

    2012-08-27

    We report high-throughput optical coherence tomography (OCT) that offers 1,000 times higher axial scan rate than conventional OCT in the 800 nm spectral range. This is made possible by employing photonic time-stretch for chirping a pulse train and transforming it into a passive swept source. We demonstrate a record high axial scan rate of 90.9 MHz. To show the utility of our method, we also demonstrate real-time observation of laser ablation dynamics. Our high-throughput OCT is expected to be useful for industrial applications where the speed of conventional OCT falls short.

  18. High throughput calorimetry for evaluating enzymatic reactions generating phosphate.

    Science.gov (United States)

    Hoflack, Lieve; De Groeve, Manu; Desmet, Tom; Van Gerwen, Peter; Soetaert, Wim

    2010-05-01

    A calorimetric assay is described for the high-throughput screening of enzymes that produce inorganic phosphate. In the current example, cellobiose phosphorylase (EC 2.4.1.20) is tested for its ability to synthesise rare disaccharides. The generated phosphate is measured in a high-throughput calorimeter by coupling the reaction to pyruvate oxidase and catalase. This procedure allows for the simultaneous analysis of 48 reactions in microtiter plate format and has been validated by comparison with a colorimetric phosphate assay. The proposed assay has a coefficient of variation of 3.14% and is useful for screening enzyme libraries for enhanced activity and substrate libraries for enzyme promiscuity.
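
    To make the quality metric above concrete, the short sketch below computes a coefficient of variation (standard deviation divided by mean, expressed as a percentage) over a set of replicate assay signals; the replicate values are illustrative and are not data from the paper.
    ```python
    import statistics

    def coefficient_of_variation(values):
        """CV (%) = standard deviation / mean * 100, computed over replicate measurements."""
        return statistics.stdev(values) / statistics.mean(values) * 100

    replicates = [101.2, 98.7, 100.5, 99.1, 100.9, 97.8]   # hypothetical replicate assay signals
    print(round(coefficient_of_variation(replicates), 2), "%")
    ```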

  19. Towards a high throughput droplet-based agglutination assay

    KAUST Repository

    Kodzius, Rimantas

    2013-10-22

    This work demonstrates the detection method for a high throughput droplet based agglutination assay system. Using simple hydrodynamic forces to mix and aggregate functionalized microbeads we avoid the need to use magnetic assistance or mixing structures. The concentration of our target molecules was estimated by agglutination strength, obtained through optical image analysis. Agglutination in droplets was performed with flow rates of 150 µl/min and occurred in under a minute, with potential to perform high-throughput measurements. The lowest target concentration detected in droplet microfluidics was 0.17 nM, which is three orders of magnitude more sensitive than a conventional card based agglutination assay.

  20. High-throughput functional annotation and data mining with the Blast2GO suite

    Science.gov (United States)

    Götz, Stefan; García-Gómez, Juan Miguel; Terol, Javier; Williams, Tim D.; Nagaraj, Shivashankar H.; Nueda, María José; Robles, Montserrat; Talón, Manuel; Dopazo, Joaquín; Conesa, Ana

    2008-01-01

    Functional genomics technologies have been widely adopted in the biological research of both model and non-model species. An efficient functional annotation of DNA or protein sequences is a major requirement for the successful application of these approaches as functional information on gene products is often the key to the interpretation of experimental results. Therefore, there is an increasing need for bioinformatics resources which are able to cope with large amounts of sequence data, produce valuable annotation results and are easily accessible to laboratories where functional genomics projects are being undertaken. We present the Blast2GO suite as an integrated and biologist-oriented solution for the high-throughput and automatic functional annotation of DNA or protein sequences based on the Gene Ontology vocabulary. The most outstanding Blast2GO features are: (i) the combination of various annotation strategies and tools controlling type and intensity of annotation, (ii) the numerous graphical features such as the interactive GO-graph visualization for gene-set function profiling or descriptive charts, (iii) the general sequence management features and (iv) high-throughput capabilities. We used the Blast2GO framework to carry out a detailed analysis of annotation behaviour through homology transfer and its impact on functional genomics research. Our aim is to offer biologists useful information to take into account when addressing the task of functionally characterizing their sequence data. PMID:18445632
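
    As a rough illustration of the homology-transfer idea described above (not Blast2GO's actual implementation), the sketch below assigns GO terms to a query sequence from its BLAST hits; the hit structure, the e-value and identity cut-offs, and the support threshold are illustrative assumptions.
    ```python
    from collections import Counter

    def transfer_go_terms(hits, evalue_cutoff=1e-10, min_identity=50.0, min_support=2):
        """hits: list of dicts with keys 'query', 'evalue', 'identity' and 'go_terms'."""
        per_query = {}
        for hit in hits:
            if hit["evalue"] <= evalue_cutoff and hit["identity"] >= min_identity:
                per_query.setdefault(hit["query"], Counter()).update(hit["go_terms"])
        # keep only terms supported by at least `min_support` acceptable hits
        return {q: sorted(term for term, n in counts.items() if n >= min_support)
                for q, counts in per_query.items()}

    example_hits = [
        {"query": "seq1", "evalue": 1e-30, "identity": 82.0, "go_terms": ["GO:0003824", "GO:0008152"]},
        {"query": "seq1", "evalue": 1e-12, "identity": 61.5, "go_terms": ["GO:0003824"]},
        {"query": "seq1", "evalue": 0.5,   "identity": 30.0, "go_terms": ["GO:0005515"]},  # filtered out
    ]
    print(transfer_go_terms(example_hits))  # {'seq1': ['GO:0003824']}
    ```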

  1. Epigenome dysregulation in cholangiocarcinoma

    DEFF Research Database (Denmark)

    O'Rourke, Colm J; Munoz-Garrido, Patricia; Aguayo, Esmeralda L

    2017-01-01

    Epigenomics is a fast-evolving field of research that has lately attracted considerable interest, mainly due to the reversibility of epigenetic marks. Clinically, among solid tumors, the field is still limited. In cholangiocarcinoma (CCA) it is well known that the epigenetic landscape is deregula...... on the role of non-coding RNA (ncRNA) interactions, DNA methylation, post-translational modifications (PTMs) of histones and chromatin remodeling complexes....

  2. Epigenomics in cancer management

    Science.gov (United States)

    Costa, Fabricio F

    2010-01-01

    The identification of all epigenetic modifications implicated in gene expression is the next step for a better understanding of human biology in both normal and pathological states. This field is referred to as epigenomics, and it is defined as epigenetic changes (ie, DNA methylation, histone modifications and regulation by noncoding RNAs such as microRNAs) on a genomic scale rather than a single gene. Epigenetics modulate the structure of the chromatin, thereby affecting the transcription of genes in the genome. Different studies have already identified changes in epigenetic modifications in a few genes in specific pathways in cancers. Based on these epigenetic changes, drugs against different types of tumors were developed, which mainly target epimutations in the genome. Examples include DNA methylation inhibitors, histone modification inhibitors, and small molecules that target chromatin-remodeling proteins. However, these drugs are not specific, and side effects are a major problem; therefore, new DNA sequencing technologies combined with epigenomic tools have the potential to identify novel biomarkers and better molecular targets to treat cancers. The purpose of this review is to discuss current and emerging epigenomic tools and to address how these new technologies may impact the future of cancer management. PMID:21188117

  3. High throughput defect detection with multiple parallel electron beams

    NARCIS (Netherlands)

    Himbergen, H.M.P. van; Nijkerk, M.D.; Jager, P.W.H. de; Hosman, T.C.; Kruit, P.

    2007-01-01

    A new concept for high throughput defect detection with multiple parallel electron beams is described. As many as 30 000 beams can be placed on a footprint of 1 in.², each beam having its own microcolumn and detection system without cross-talk. Based on the International Technology Roadmap for

  4. High-throughput cloning and expression in recalcitrant bacteria

    NARCIS (Netherlands)

    Geertsma, Eric R.; Poolman, Bert

    We developed a generic method for high-throughput cloning in bacteria that are less amenable to conventional DNA manipulations. The method involves ligation-independent cloning in an intermediary Escherichia coli vector, which is rapidly converted via vector-backbone exchange (VBEx) into an

  5. High-throughput screening, predictive modeling and computational embryology - Abstract

    Science.gov (United States)

    High-throughput screening (HTS) studies are providing a rich source of data that can be applied to chemical profiling to address sensitivity and specificity of molecular targets, biological pathways, cellular and developmental processes. EPA’s ToxCast project is testing 960 uniq...

  6. High-throughput screening, predictive modeling and computational embryology

    Science.gov (United States)

    High-throughput screening (HTS) studies are providing a rich source of data that can be applied to profile thousands of chemical compounds for biological activity and potential toxicity. EPA’s ToxCast™ project, and the broader Tox21 consortium, in addition to projects worldwide,...

  7. High-throughput sequencing in mitochondrial DNA research.

    Science.gov (United States)

    Ye, Fei; Samuels, David C; Clark, Travis; Guo, Yan

    2014-07-01

    Next-generation sequencing, also known as high-throughput sequencing, has greatly enhanced researchers' ability to conduct biomedical research on all levels. Mitochondrial research has also benefitted greatly from high-throughput sequencing; sequencing technology now allows for screening of all 16,569 base pairs of the mitochondrial genome simultaneously for SNPs and low level heteroplasmy and, in some cases, the estimation of mitochondrial DNA copy number. It is important to realize the full potential of high-throughput sequencing for the advancement of mitochondrial research. To this end, we review how high-throughput sequencing has impacted mitochondrial research in the categories of SNPs, low level heteroplasmy, copy number, and structural variants. We also discuss the different types of mitochondrial DNA sequencing and their pros and cons. Based on previous studies conducted by various groups, we provide strategies for processing mitochondrial DNA sequencing data, including assembly, variant calling, and quality control. Copyright © 2014 Elsevier B.V. and Mitochondria Research Society. All rights reserved.
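
    Two quantities mentioned above, heteroplasmy level and mitochondrial DNA copy number, are commonly estimated from read counts and coverage ratios; the minimal sketch below illustrates that arithmetic under those standard assumptions (the input values are made up, not taken from the review).
    ```python
    def heteroplasmy_fraction(alt_reads: int, total_reads: int) -> float:
        """Fraction of reads supporting the minor/alternate allele at a mtDNA position."""
        if total_reads == 0:
            raise ValueError("no coverage at this position")
        return alt_reads / total_reads

    def mtdna_copy_number(mean_mt_coverage: float, mean_autosomal_coverage: float) -> float:
        """Approximate mtDNA copies per cell: coverage ratio times 2 (diploid nuclear genome)."""
        return 2.0 * mean_mt_coverage / mean_autosomal_coverage

    print(heteroplasmy_fraction(alt_reads=37, total_reads=1850))                     # 0.02 -> 2%
    print(mtdna_copy_number(mean_mt_coverage=4200.0, mean_autosomal_coverage=30.0))  # 280.0
    ```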

  8. Enzyme free cloning for high throughput gene cloning and expression

    NARCIS (Netherlands)

    de Jong, R.N.; Daniëls, M.; Kaptein, R.|info:eu-repo/dai/nl/074334603; Folkers, G.E.|info:eu-repo/dai/nl/162277202

    2006-01-01

    Structural and functional genomics initiatives significantly improved cloning methods over the past few years. Although recombinational cloning is highly efficient, its costs urged us to search for an alternative high throughput (HTP) cloning method. We implemented a modified Enzyme Free Cloning

  9. Fully Bayesian Analysis of High-throughput Targeted Metabolomics Assays

    Science.gov (United States)

    High-throughput metabolomic assays that allow simultaneous targeted screening of hundreds of metabolites have recently become available in kit form. Such assays provide a window into understanding changes to biochemical pathways due to chemical exposure or disease, and are usefu...

  10. Chemometric Optimization Studies in Catalysis Employing High-Throughput Experimentation

    NARCIS (Netherlands)

    Pereira, S.R.M.

    2008-01-01

    The main topic of this thesis is the investigation of the synergies between High-Throughput Experimentation (HTE) and Chemometric Optimization methodologies in Catalysis research and of the use of such methodologies to maximize the advantages of using HTE methods. Several case studies were analysed

  11. High throughput production of mouse monoclonal antibodies using antigen microarrays

    DEFF Research Database (Denmark)

    De Masi, Federico; Chiarella, P.; Wilhelm, H.

    2005-01-01

    Recent advances in proteomics research underscore the increasing need for high-affinity monoclonal antibodies, which are still generated with lengthy, low-throughput antibody production techniques. Here we present a semi-automated, high-throughput method of hybridoma generation and identification...

  12. High throughput 16S rRNA gene amplicon sequencing

    DEFF Research Database (Denmark)

    Nierychlo, Marta; Larsen, Poul; Jørgensen, Mads Koustrup

    16S rRNA gene amplicon sequencing has been developed over the past few years and is now ready to use for more comprehensive studies related to plant operation and optimization thanks to short analysis time, low cost, high throughput, and high taxonomic resolution. In this study we show how 16S r...

  13. Integrating Epigenomics into the Understanding of Biomedical Insight

    Science.gov (United States)

    Han, Yixing; He, Ximiao

    2016-01-01

    Epigenetics is one of the most rapidly expanding fields in biomedical research, and the popularity of high-throughput next-generation sequencing (NGS) highlights the accelerating pace of epigenomic discovery over the past decade. Epigenetics studies the heritable phenotypes resulting from chromatin changes without alteration of the DNA sequence. Epigenetic factors and their interactive networks regulate almost all fundamental biological processes, and incorrect epigenetic information may lead to complex diseases. A comprehensive, genome-wide understanding of epigenetic mechanisms, their interactions, and their alterations in health and disease has become a priority in biological research. Bioinformatics is expected to make a remarkable contribution to this goal, especially in processing and interpreting large-scale NGS datasets. In this review, we introduce pioneering epigenetic achievements in health and complex disease; next, we give a systematic review of epigenomic data generation, summarize public resources and integrative analysis approaches, and finally outline the challenges and future directions in computational epigenomics. PMID:27980397

  14. Performance comparison of genetic markers for high-throughput sequencing-based biodiversity assessment in complex communities.

    Science.gov (United States)

    Zhan, Aibin; Bailey, Sarah A; Heath, Daniel D; Macisaac, Hugh J

    2014-09-01

    Metabarcode surveys of DNA extracted from environmental samples are increasingly popular for biodiversity assessment in natural communities. Such surveys rely heavily on robust genetic markers. Therefore, analysis of PCR efficiency and subsequent biodiversity estimation for different types of genetic markers and their corresponding primers is important. Here, we test the PCR efficiency and biodiversity recovery potential of three commonly used genetic markers - nuclear small subunit ribosomal DNA (18S), mitochondrial cytochrome c oxidase subunit I (COI) and 16S ribosomal RNA (mt16S) - using 454 pyrosequencing of a zooplankton community collected from Hamilton Harbour, Ontario. We found that biodiversity detection power and PCR efficiency varied widely among these markers. All tested primers for COI failed to provide high-quality PCR products for pyrosequencing, but newly designed primers for 18S and 16S passed all tests. Furthermore, multiple analyses based on large-scale pyrosequencing (i.e. 1/2 PicoTiter plate for each marker) showed that primers for 18S recover more groups (38 orders) than those for 16S (10 orders) across all taxa, and four vs. two orders and nine vs. six families for Crustacea. Our results showed that 18S, using newly designed primers, is an efficient and powerful tool for profiling biodiversity in largely unexplored communities, especially when amplification difficulties exist for mitochondrial markers such as COI. Universal primers for higher resolution markers such as COI are still needed to address the possible low resolution of 18S for species-level identification. © 2014 John Wiley & Sons Ltd.

  15. The International Human Epigenome Consortium

    DEFF Research Database (Denmark)

    Stunnenberg, Hendrik G; Hirst, Martin

    2016-01-01

    The International Human Epigenome Consortium (IHEC) coordinates the generation of a catalog of high-resolution reference epigenomes of major primary human cell types. The studies now presented (see the Cell Press IHEC web portal at http://www.cell.com/consortium/IHEC) highlight the coordinated ac...

  16. Cancer epigenomics: beyond genomics.

    Science.gov (United States)

    Sandoval, Juan; Esteller, Manel

    2012-02-01

    For many years cancer research has focused on genetic defects, but during the last decade epigenetic deregulation has been increasingly recognized as a hallmark of cancer. The advent of genome-scale analysis techniques, including the recently developed next-generation sequencing, has enabled an invaluable advance in the molecular mechanisms underlying tumor initiation, progression, and expansion. In this review we describe recent advances in the field of cancer epigenomics concerning DNA methylation, histone modifications, and miRNAs. In the near future, this information will be used to generate novel biomarkers of relevance to diagnosis, prognosis, and chemotherapeutic response. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Scanning droplet cell for high throughput electrochemical and photoelectrochemical measurements

    Science.gov (United States)

    Gregoire, John M.; Xiang, Chengxiang; Liu, Xiaonao; Marcin, Martin; Jin, Jian

    2013-02-01

    High throughput electrochemical techniques are widely applied in material discovery and optimization. For many applications, the most desirable electrochemical characterization requires a three-electrode cell under potentiostat control. In high throughput screening, a material library is explored by either employing an array of such cells, or rastering a single cell over the library. To attain this latter capability with unprecedented throughput, we have developed a highly integrated, compact scanning droplet cell that is optimized for rapid electrochemical and photoelectrochemical measurements. Using this cell, we screened a quaternary oxide library as (photo)electrocatalysts for the oxygen evolution (water splitting) reaction. High quality electrochemical measurements were carried out and key electrocatalytic properties were identified for each of 5456 samples with a throughput of 4 s per sample.

  18. High-throughput theoretical design of lithium battery materials

    Science.gov (United States)

    Shi-Gang, Ling; Jian, Gao; Rui-Juan, Xiao; Li-Quan, Chen

    2016-01-01

    The rapid evolution of high-throughput theoretical design schemes to discover new lithium battery materials is reviewed, including high-capacity cathodes, low-strain cathodes, anodes, solid state electrolytes, and electrolyte additives. With the development of efficient theoretical methods and inexpensive computers, high-throughput theoretical calculations have played an increasingly important role in the discovery of new materials. With the help of automatic simulation flow, many types of materials can be screened, optimized and designed from a structural database according to specific search criteria. In advanced cell technology, new materials for next generation lithium batteries are of great significance to achieve performance, and some representative criteria are: higher energy density, better safety, and faster charge/discharge speed. Project supported by the National Natural Science Foundation of China (Grant Nos. 11234013 and 51172274) and the National High Technology Research and Development Program of China (Grant No. 2015AA034201).

  19. High throughput screening of starch structures using carbohydrate microarrays.

    Science.gov (United States)

    Tanackovic, Vanja; Rydahl, Maja Gro; Pedersen, Henriette Lodberg; Motawia, Mohammed Saddik; Shaik, Shahnoor Sultana; Mikkelsen, Maria Dalgaard; Krunic, Susanne Langgaard; Fangel, Jonatan Ulrik; Willats, William George Tycho; Blennow, Andreas

    2016-07-29

    In this study we introduce the starch-recognising carbohydrate binding module family 20 (CBM20) from Aspergillus niger for screening biological variations in starch molecular structure using high throughput carbohydrate microarray technology. Defined linear, branched and phosphorylated maltooligosaccharides, pure starch samples including a variety of different structures with variations in the amylopectin branching pattern, amylose content and phosphate content, enzymatically modified starches and glycogen were included. Using this technique, different important structures, including amylose content and branching degrees could be differentiated in a high throughput fashion. The screening method was validated using transgenic barley grain analysed during development and subjected to germination. Structures with extreme branching or extreme linearity were typically detected less readily than normal starch structures. The method offers the potential for rapidly analysing resistant and slowly digested dietary starches.

  20. High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME

    Science.gov (United States)

    Otis, Richard A.; Liu, Zi-Kui

    2017-05-01

    One foundational component of the integrated computational materials engineering (ICME) and Materials Genome Initiative is the computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts will be presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput, first-principles calculations and the CALPHAD method along with their potential propagations to downstream ICME modeling and simulations.

  1. Trade-Off Analysis in High-Throughput Materials Exploration.

    Science.gov (United States)

    Volety, Kalpana K; Huyberechts, Guido P J

    2017-03-13

    This Research Article presents a strategy to identify the optimum compositions in metal alloys with certain desired properties in a high-throughput screening environment, using a multiobjective optimization approach. In addition to the identification of the optimum compositions in a primary screening, the strategy also allows pointing to regions in the compositional space where further exploration in a secondary screening could be carried out. The strategy for the primary screening is a combination of two multiobjective optimization approaches, namely Pareto optimality and desirability functions. The experimental data used in the present study have been collected from over 200 different compositions belonging to four different alloy systems. The metal alloys (comprising Fe, Ti, Al, Nb, Hf, Zr) are synthesized and screened using high-throughput technologies. The advantages of this kind of approach, compared with the limitations of traditional and comparatively simpler approaches such as ranking and calculating figures of merit, are discussed.
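
    A minimal sketch of the primary-screening logic described above, combining Pareto filtering with a simple desirability ranking; the alloy names, property values, property ranges and maximization convention are illustrative assumptions, not data from the article.
    ```python
    def is_dominated(a, b):
        """True if candidate a is dominated by b (b >= a on all objectives, > on at least one)."""
        return all(bj >= aj for aj, bj in zip(a, b)) and any(bj > aj for aj, bj in zip(a, b))

    def pareto_front(candidates):
        """candidates: dict of name -> tuple of objectives, all to be maximized."""
        return {name: obj for name, obj in candidates.items()
                if not any(is_dominated(obj, other)
                           for other_name, other in candidates.items() if other_name != name)}

    def desirability(obj, lo, hi):
        """Geometric mean of per-objective desirabilities scaled to [0, 1]."""
        scaled = [(x - l) / (h - l) for x, l, h in zip(obj, lo, hi)]
        product = 1.0
        for d in scaled:
            product *= max(d, 1e-9)   # avoid a zero wiping out the geometric mean
        return product ** (1.0 / len(scaled))

    # (hardness, corrosion resistance) -- illustrative values, not data from the article
    alloys = {"Fe-Ti-1": (620.0, 0.71), "Fe-Nb-2": (580.0, 0.90),
              "Fe-Al-3": (450.0, 0.60), "Fe-Zr-4": (640.0, 0.65)}
    front = pareto_front(alloys)
    lo, hi = (450.0, 0.60), (640.0, 0.90)
    ranking = sorted(front, key=lambda name: desirability(front[name], lo, hi), reverse=True)
    print(front)    # Pareto-optimal candidates
    print(ranking)  # candidates ordered by overall desirability
    ```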

  2. High throughput screening of starch structures using carbohydrate microarrays

    DEFF Research Database (Denmark)

    Tanackovic, Vanja; Rydahl, Maja Gro; Pedersen, Henriette Lodberg

    2016-01-01

    In this study we introduce the starch-recognising carbohydrate binding module family 20 (CBM20) from Aspergillus niger for screening biological variations in starch molecular structure using high throughput carbohydrate microarray technology. Defined linear, branched and phosphorylated...... maltooligosaccharides, pure starch samples including a variety of different structures with variations in the amylopectin branching pattern, amylose content and phosphate content, enzymatically modified starches and glycogen were included. Using this technique, different important structures, including amylose content...... and branching degrees could be differentiated in a high throughput fashion. The screening method was validated using transgenic barley grain analysed during development and subjected to germination. Structures with extreme branching or extreme linearity were typically detected less readily than normal starch structures. The method offers...

  3. A high-throughput multiplex method adapted for GMO detection.

    Science.gov (United States)

    Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique

    2008-12-24

    A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxa endogenous reference genes, from GMO constructions, screening targets, construct-specific, and event-specific targets, and finally from donor organisms. This assay avoids certain shortcomings of multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.

  4. High-throughput screening for modulators of cellular contractile force

    CERN Document Server

    Park, Chan Young; Tambe, Dhananjay; Chen, Bohao; Lavoie, Tera; Dowell, Maria; Simeonov, Anton; Maloney, David J; Marinkovic, Aleksandar; Tschumperlin, Daniel J; Burger, Stephanie; Frykenberg, Matthew; Butler, James P; Stamer, W Daniel; Johnson, Mark; Solway, Julian; Fredberg, Jeffrey J; Krishnan, Ramaswamy

    2014-01-01

    When cellular contractile forces are central to pathophysiology, these forces comprise a logical target of therapy. Nevertheless, existing high-throughput screens are limited to upstream signaling intermediates with poorly defined relationship to such a physiological endpoint. Using cellular force as the target, here we screened libraries to identify novel drug candidates in the case of human airway smooth muscle cells in the context of asthma, and also in the case of Schlemm's canal endothelial cells in the context of glaucoma. This approach identified several drug candidates for both asthma and glaucoma. We attained rates of 1000 compounds per screening day, thus establishing a force-based cellular platform for high-throughput drug discovery.

  5. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Full Text Available Abstract Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.

  6. High-throughput optical screening of cellular mechanotransduction

    OpenAIRE

    Compton, JL; Luo, JC; Ma, H.; Botvinick, E; Venugopalan, V

    2014-01-01

    We introduce an optical platform for rapid, high-throughput screening of exogenous molecules that affect cellular mechanotransduction. Our method initiates mechanotransduction in adherent cells using single laser-microbeam generated microcavitation bubbles without requiring flow chambers or microfluidics. These microcavitation bubbles expose adherent cells to a microtsunami, a transient microscale burst of hydrodynamic shear stress, which stimulates cells over areas approaching 1 mm2. We demo...

  7. Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.

  8. High-throughput sequence alignment using Graphics Processing Units.

    Science.gov (United States)

    Schatz, Michael C; Trapnell, Cole; Delcher, Arthur L; Varshney, Amitabh

    2007-12-10

    The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.
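
    The per-query work that MUMmerGPU parallelizes on the GPU amounts to finding exact matches of each read against an indexed reference. The CPU-side sketch below illustrates that idea with a simple k-mer seed-and-extend search rather than the suffix tree the paper describes; the sequences and seed length k are illustrative, and matches shorter than k are missed.
    ```python
    def build_kmer_index(reference: str, k: int = 8) -> dict:
        """Map every length-k substring of the reference to its start positions."""
        index = {}
        for i in range(len(reference) - k + 1):
            index.setdefault(reference[i:i + k], []).append(i)
        return index

    def longest_exact_match(query: str, reference: str, index: dict, k: int = 8):
        """Return (query_pos, ref_pos, length) of the longest exact match found via k-mer seeds."""
        best = (0, 0, 0)
        for q in range(len(query) - k + 1):
            for r in index.get(query[q:q + k], []):
                length = k
                while (q + length < len(query) and r + length < len(reference)
                       and query[q + length] == reference[r + length]):
                    length += 1
                if length > best[2]:
                    best = (q, r, length)
        return best

    reference = "ACGTACGTTTGACCAGTACGGA"          # illustrative reference sequence
    queries = ["TTGACCAGTA", "GACCAGTACGGA"]      # illustrative reads
    index = build_kmer_index(reference)
    for read in queries:  # the real tool dispatches each read to its own GPU thread
        print(read, "->", longest_exact_match(read, reference, index))
    ```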

  9. High-throughput evaluation of synthetic metabolic pathways.

    Science.gov (United States)

    Klesmith, Justin R; Whitehead, Timothy A

    2016-03-01

    A central challenge in the field of metabolic engineering is the efficient identification of a metabolic pathway genotype that maximizes specific productivity over a robust range of process conditions. Here we review current methods for optimizing specific productivity of metabolic pathways in living cells. New tools for library generation, computational analysis of pathway sequence-flux space, and high-throughput screening and selection techniques are discussed.

  10. The high-throughput highway to computational materials design.

    Science.gov (United States)

    Curtarolo, Stefano; Hart, Gus L W; Nardelli, Marco Buongiorno; Mingo, Natalio; Sanvito, Stefano; Levy, Ohad

    2013-03-01

    High-throughput computational materials design is an emerging area of materials science. By combining advanced thermodynamic and electronic-structure methods with intelligent data mining and database construction, and exploiting the power of current supercomputer architectures, scientists generate, manage and analyse enormous data repositories for the discovery of novel materials. In this Review we provide a current snapshot of this rapidly evolving field, and highlight the challenges and opportunities that lie ahead.

  11. Web-based visual analysis for high-throughput genomics.

    Science.gov (United States)

    Goecks, Jeremy; Eberhard, Carl; Too, Tomithy; Nekrutenko, Anton; Taylor, James

    2013-06-13

    Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy. Finally, the visualizations we have created using the framework are useful tools for high-throughput

  12. Validation of high throughput sequencing and microbial forensics applications

    OpenAIRE

    Budowle, Bruce; Connell, Nancy D.; Bielecka-Oder, Anna; Rita R Colwell; Corbett, Cindi R.; Fletcher, Jacqueline; Forsman, Mats; Kadavy, Dana R; Markotic, Alemka; Morse, Stephen A.; Murch, Randall S; Sajantila, Antti; Schemes, Sarah E; Ternus, Krista L; Turner, Stephen D

    2014-01-01

    Abstract High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results a...

  13. Genomics and epigenomics of the human glycome.

    Science.gov (United States)

    Zoldoš, Vlatka; Novokmet, Mislav; Bečeheli, Ivona; Lauc, Gordan

    2013-01-01

    The majority of all proteins are glycosylated, and glycans have numerous important structural, functional and regulatory roles in various physiological processes. While the structure of the polypeptide part of a glycoprotein is defined by the sequence of nucleotides in the corresponding gene, the structure of the glycan part results from dynamic interactions between hundreds of genes, their protein products and environmental factors. The composition of the glycome attached to an individual protein, or to a complex mixture of proteins such as human plasma, is stable within an individual but highly variable between individuals. This variability stems from numerous common genetic polymorphisms reflected in changes in the complex biosynthetic pathway of glycans, but also from interaction with the environment. The environment can affect glycan biosynthesis at the level of substrate availability, regulation of enzyme activity and/or hormonal signals, but also through gene-environment interactions. Epigenetics provides a molecular basis for how the environment can modify the phenotype of an individual. Epigenetic information (the DNA methylation pattern and histone code) is especially vulnerable to environmental effects during early intrauterine and neonatal development, and many common late-onset diseases take root at that time. Evidence showing the link between epigenetics and glycosylation is accumulating. Recent progress in high-throughput glycomics, genomics and epigenomics has enabled the first epidemiological and genome-wide association studies of the glycome, which are presented in this mini-review.

  14. High-Throughput Toxicity Testing: New Strategies for ...

    Science.gov (United States)

    In recent years, the food industry has made progress in improving safety testing methods focused on microbial contaminants in order to promote food safety. However, food industry toxicologists must also assess the safety of food-relevant chemicals including pesticides, direct additives, and food contact substances. With the rapidly growing use of new food additives, as well as innovation in food contact substance development, an interest in exploring the use of high-throughput chemical safety testing approaches has emerged. Currently, the field of toxicology is undergoing a paradigm shift in how chemical hazards can be evaluated. Since there are tens of thousands of chemicals in use, many of which have little to no hazard information and there are limited resources (namely time and money) for testing these chemicals, it is necessary to prioritize which chemicals require further safety testing to better protect human health. Advances in biochemistry and computational toxicology have paved the way for animal-free (in vitro) high-throughput screening which can characterize chemical interactions with highly specific biological processes. Screening approaches are not novel; in fact, quantitative high-throughput screening (qHTS) methods that incorporate dose-response evaluation have been widely used in the pharmaceutical industry. For toxicological evaluation and prioritization, it is the throughput as well as the cost- and time-efficient nature of qHTS that makes it

  15. Graph-based signal integration for high-throughput phenotyping.

    Science.gov (United States)

    Herskovic, Jorge R; Subramanian, Devika; Cohen, Trevor; Bozzo-Silva, Pamela A; Bearden, Charles F; Bernstam, Elmer V

    2012-01-01

    Electronic Health Records aggregated in Clinical Data Warehouses (CDWs) promise to revolutionize Comparative Effectiveness Research and suggest new avenues of research. However, the effectiveness of CDWs is diminished by the lack of properly labeled data. We present a novel approach that integrates knowledge from the CDW, the biomedical literature, and the Unified Medical Language System (UMLS) to perform high-throughput phenotyping. In this paper, we automatically construct a graphical knowledge model and then use it to phenotype breast cancer patients. We compare the performance of this approach to using MetaMap when labeling records. MetaMap's overall accuracy at identifying breast cancer patients was 51.1% (n=428); recall=85.4%, precision=26.2%, and F1=40.1%. Our unsupervised graph-based high-throughput phenotyping had accuracy of 84.1%; recall=46.3%, precision=61.2%, and F1=52.8%. We conclude that our approach is a promising alternative for unsupervised high-throughput phenotyping.
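
    The reported F1 values follow directly from the precision and recall figures above, since F1 is the harmonic mean of the two; the quick check below reproduces them (the small discrepancy for the second case is rounding of the underlying values).
    ```python
    def f1(precision: float, recall: float) -> float:
        """Harmonic mean of precision and recall."""
        return 2 * precision * recall / (precision + recall)

    print(round(f1(0.262, 0.854), 3))  # 0.401 -> MetaMap's reported F1 of 40.1%
    print(round(f1(0.612, 0.463), 3))  # 0.527 -> close to the reported 52.8%
    ```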

  16. Condor-COPASI: high-throughput computing for biochemical networks

    Directory of Open Access Journals (Sweden)

    Kent Edward

    2012-07-01

    Full Text Available Abstract Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.

  17. High-throughput computational and experimental techniques in structural genomics.

    Science.gov (United States)

    Chance, Mark R; Fiser, Andras; Sali, Andrej; Pieper, Ursula; Eswar, Narayanan; Xu, Guiping; Fajardo, J Eduardo; Radhakannan, Thirumuruhan; Marinkovic, Nebojsa

    2004-10-01

    Structural genomics has as its goal the provision of structural information for all possible ORF sequences through a combination of experimental and computational approaches. Access to genome sequences and cloning resources from an ever-widening array of organisms is driving high-throughput structural studies by the New York Structural Genomics Research Consortium. In this report, we outline the progress of the Consortium in establishing its pipeline for structural genomics, and some of the experimental and bioinformatics efforts leading to structural annotation of proteins. The Consortium has established a pipeline for structural biology studies, automated modeling of ORF sequences using solved (template) structures, and a novel high-throughput approach (metallomics) to examining the metal binding to purified protein targets. The Consortium has so far produced 493 purified proteins from >1077 expression vectors. A total of 95 have resulted in crystal structures, and 81 are deposited in the Protein Data Bank (PDB). Comparative modeling of these structures has generated >40,000 structural models. We also initiated a high-throughput metal analysis of the purified proteins; this has determined that 10%-15% of the targets contain a stoichiometric structural or catalytic transition metal atom. The progress of the structural genomics centers in the U.S. and around the world suggests that the goal of providing useful structural information on nearly all ORF domains will be realized. This projected resource will provide structural biology information important to understanding the function of most proteins of the cell.

  18. 76 FR 28990 - Ultra High Throughput Sequencing for Clinical Diagnostic Applications-Approaches To Assess...

    Science.gov (United States)

    2011-05-19

    ... Clinical Diagnostic Applications--Approaches To Assess Analytical Validity.'' The purpose of the public... approaches to assess analytical validity of ultra high throughput sequencing for clinical diagnostic... HUMAN SERVICES Food and Drug Administration Ultra High Throughput Sequencing for Clinical Diagnostic...

  19. Chromatin replication and epigenome maintenance

    DEFF Research Database (Denmark)

    Alabert, Constance; Groth, Anja

    2012-01-01

    initiates, whereas the replication process itself disrupts chromatin and challenges established patterns of genome regulation. Specialized replication-coupled mechanisms assemble new DNA into chromatin, but epigenome maintenance is a continuous process taking place throughout the cell cycle. If DNA...

  20. Laboratory Information Management Software for genotyping workflows: applications in high throughput crop genotyping

    Directory of Open Access Journals (Sweden)

    Prasanth VP

    2006-08-01

    Full Text Available Abstract Background With the advances in DNA sequencer-based technologies, it has become possible to automate several steps of the genotyping process leading to increased throughput. To efficiently handle the large amounts of genotypic data generated and help with quality control, there is a strong need for a software system that can help with the tracking of samples and capture and management of data at different steps of the process. Such systems, while serving to manage the workflow precisely, also encourage good laboratory practice by standardizing protocols, recording and annotating data from every step of the workflow. Results A laboratory information management system (LIMS) has been designed and implemented at the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) that meets the requirements of a moderately high throughput molecular genotyping facility. The application has a modular design and is simple to learn and use. The application leads the user through each step of the process from starting an experiment to the storing of output data from the genotype detection step with auto-binning of alleles; thus ensuring that every DNA sample is handled in an identical manner and all the necessary data are captured. The application keeps track of DNA samples and generated data. Data entry into the system is through the use of forms for file uploads. The LIMS provides functions to trace back to the electrophoresis gel files or sample source for any genotypic data and for repeating experiments. The LIMS is presently being used for the capture of high throughput SSR (simple-sequence repeat) genotyping data from the legume (chickpea, groundnut and pigeonpea) and cereal (sorghum and millets) crops of importance in the semi-arid tropics. Conclusion A laboratory information management system is available that has been found useful in the management of microsatellite genotype data in a moderately high throughput genotyping
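
    One concrete step mentioned above is the auto-binning of alleles, in which raw SSR fragment sizes are snapped to expected allele bins. The sketch below is a generic illustration of that idea, not the ICRISAT implementation; the bin centres and tolerance are illustrative assumptions.
    ```python
    def auto_bin_alleles(raw_sizes, bin_centres, tolerance=0.5):
        """Assign each raw fragment size (in base pairs) to the nearest allele bin, or None."""
        calls = []
        for size in raw_sizes:
            nearest = min(bin_centres, key=lambda centre: abs(centre - size))
            calls.append(nearest if abs(nearest - size) <= tolerance else None)
        return calls

    # illustrative SSR marker with expected alleles at 150, 152 and 154 bp
    print(auto_bin_alleles([150.2, 152.4, 153.1, 154.0], bin_centres=[150, 152, 154]))
    # -> [150, 152, None, 154]  (153.1 bp falls outside the +/-0.5 bp tolerance)
    ```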

  1. High-throughput cultivation and screening platform for unicellular phototrophs.

    Science.gov (United States)

    Tillich, Ulrich M; Wolter, Nick; Schulze, Katja; Kramer, Dan; Brödel, Oliver; Frohme, Marcus

    2014-09-16

    High-throughput cultivation and screening methods allow a parallel, miniaturized and cost efficient processing of many samples. These methods however, have not been generally established for phototrophic organisms such as microalgae or cyanobacteria. In this work we describe and test high-throughput methods with the model organism Synechocystis sp. PCC6803. The required technical automation for these processes was achieved with a Tecan Freedom Evo 200 pipetting robot. The cultivation was performed in 2.2 ml deepwell microtiter plates within a cultivation chamber outfitted with programmable shaking conditions, variable illumination, variable temperature, and an adjustable CO2 atmosphere. Each microtiter-well within the chamber functions as a separate cultivation vessel with reproducible conditions. The automated measurement of various parameters such as growth, full absorption spectrum, chlorophyll concentration, MALDI-TOF-MS, as well as a novel vitality measurement protocol, have already been established and can be monitored during cultivation. Measurement of growth parameters can be used as inputs for the system to allow for periodic automatic dilutions and therefore a semi-continuous cultivation of hundreds of cultures in parallel. The system also allows the automatic generation of mid and long term backups of cultures to repeat experiments or to retrieve strains of interest. The presented platform allows for high-throughput cultivation and screening of Synechocystis sp. PCC6803. The platform should be usable for many phototrophic microorganisms as is, and be adaptable for even more. A variety of analyses are already established and the platform is easily expandable both in quality, i.e. with further parameters to screen for additional targets and in quantity, i.e. size or number of processed samples.

  2. Adaptive Sampling for High Throughput Data Using Similarity Measures

    Energy Technology Data Exchange (ETDEWEB)

    Bulaevskaya, V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sales, A. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    The need for adaptive sampling arises in the context of high throughput data because the rates of data arrival are many orders of magnitude larger than the rates at which they can be analyzed. A very fast decision must therefore be made regarding the value of each incoming observation and its inclusion in the analysis. In this report we discuss one approach to adaptive sampling, based on the new data point’s similarity to the other data points being considered for inclusion. We present preliminary results for one real and one synthetic data set.
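
    In the spirit of the approach described above, the sketch below keeps an incoming observation only if it is sufficiently dissimilar to the points already retained; the Euclidean distance metric, the threshold and the toy stream are illustrative assumptions rather than the report's actual method.
    ```python
    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def adaptive_sample(stream, min_distance=1.0, max_kept=1000):
        """Keep an incoming point only if it is sufficiently dissimilar to everything kept so far."""
        kept = []
        for point in stream:
            if len(kept) >= max_kept:
                break
            if all(euclidean(point, k) >= min_distance for k in kept):
                kept.append(point)
        return kept

    stream = [(0.0, 0.0), (0.1, 0.2), (3.0, 4.0), (3.2, 4.1), (10.0, 0.0)]
    print(adaptive_sample(stream, min_distance=1.0))  # [(0.0, 0.0), (3.0, 4.0), (10.0, 0.0)]
    ```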

  3. High-throughput sequencing: a roadmap toward community ecology.

    Science.gov (United States)

    Poisot, Timothée; Péquin, Bérangère; Gravel, Dominique

    2013-04-01

    High-throughput sequencing is becoming increasingly important in microbial ecology, yet it is surprisingly under-used to generate or test biogeographic hypotheses. In this contribution, we highlight how adding these methods to the ecologist toolbox will allow the detection of new patterns, and will help our understanding of the structure and dynamics of diversity. Starting with a review of ecological questions that can be addressed, we move on to the technical and analytical issues that will benefit from an increased collaboration between different disciplines.

  4. UAV-based high-throughput phenotyping in legume crops

    Science.gov (United States)

    Sankaran, Sindhuja; Khot, Lav R.; Quirós, Juan; Vandemark, George J.; McGee, Rebecca J.

    2016-05-01

    In plant breeding, one of the biggest obstacles in genetic improvement is the lack of proven rapid methods for measuring plant responses in field conditions. Therefore, the major objective of this research was to evaluate the feasibility of utilizing high-throughput remote sensing technology for rapid measurement of phenotyping traits in legume crops. The plant responses of several chickpea and pea varieties to the environment were assessed with an unmanned aerial vehicle (UAV) integrated with multispectral imaging sensors. Our preliminary assessment showed that the vegetation indices are strongly correlated with the phenotyping traits.
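
    As an example of the kind of vegetation index such a multispectral UAV payload can provide, the sketch below computes NDVI, (NIR - Red)/(NIR + Red), on a small array of made-up reflectance values; the specific indices used in the study are not named in the abstract, so NDVI here is an assumption.
    ```python
    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        """Normalized Difference Vegetation Index = (NIR - Red) / (NIR + Red)."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + 1e-9)   # small constant guards against division by zero

    nir_band = np.array([[0.62, 0.55], [0.48, 0.70]])   # illustrative reflectance values
    red_band = np.array([[0.08, 0.12], [0.20, 0.05]])
    print(ndvi(nir_band, red_band).round(2))            # healthy canopy pixels approach 1.0
    ```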

  5. REDItools: high-throughput RNA editing detection made easy.

    Science.gov (United States)

    Picardi, Ernesto; Pesole, Graziano

    2013-07-15

    The reliable detection of RNA editing sites from massive sequencing data remains challenging and, although several methodologies have been proposed, no computational tools have been released to date. Here, we introduce REDItools, a suite of Python scripts for high-throughput investigation of RNA editing using next-generation sequencing data. REDItools is written in the Python programming language and is freely available at http://code.google.com/p/reditools/. ernesto.picardi@uniba.it or graziano.pesole@uniba.it Supplementary data are available at Bioinformatics online.
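
    The central quantity in RNA editing detection is the editing level at a candidate site, i.e. the fraction of reads supporting the edited base in an RNA-seq pileup. The sketch below illustrates that calculation for A-to-I editing (read as G); it is a generic example with made-up counts, not REDItools code.
    ```python
    def editing_level(base_counts: dict, ref: str = "A", edited: str = "G") -> float:
        """base_counts: e.g. {'A': 120, 'C': 1, 'G': 40, 'T': 0} from an RNA-seq pileup."""
        ref_n = base_counts.get(ref, 0)
        edited_n = base_counts.get(edited, 0)
        total = ref_n + edited_n
        return edited_n / total if total else 0.0

    site = {"A": 120, "C": 1, "G": 40, "T": 0}   # made-up pileup counts at one genomic position
    print(round(editing_level(site), 3))          # 0.25 -> 25% editing at this site
    ```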

  6. High throughput platforms for structural genomics of integral membrane proteins.

    Science.gov (United States)

    Mancia, Filippo; Love, James

    2011-08-01

    Structural genomics approaches on integral membrane proteins have been postulated for over a decade, yet specific efforts are lagging years behind their soluble counterparts. Indeed, high throughput methodologies for production and characterization of prokaryotic integral membrane proteins are only now emerging, while large-scale efforts for eukaryotic ones are still in their infancy. Presented here is a review of recent literature on ongoing structural genomics initiatives for membrane proteins, with a focus on those implementing techniques aimed at increasing our rate of success for this class of macromolecules. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. High-throughput epitope identification for snakebite antivenom

    DEFF Research Database (Denmark)

    Engmark, Mikael; De Masi, Federico; Laustsen, Andreas Hougaard

    Insight into the epitopic recognition pattern for polyclonal antivenoms is a strong tool for accurate prediction of antivenom cross-reactivity and provides a basis for design of novel antivenoms. In this work, a high-throughput approach was applied to characterize linear epitopes in 966 individual...... toxins from pit vipers (Crotalidae) using the ICP Crotalidae antivenom. Due to an abundance of snake venom metalloproteinases and phospholipase A2s in the venoms used for production of the investigated antivenom, this study focuses on these toxin families....

  8. Bifrost: Stream processing framework for high-throughput applications

    Science.gov (United States)

    Barsdell, Ben; Price, Daniel; Cranmer, Miles; Garsden, Hugh; Dowell, Jayce

    2017-11-01

    Bifrost is a stream processing framework that eases the development of high-throughput processing CPU/GPU pipelines. It is designed for digital signal processing (DSP) applications within radio astronomy. Bifrost uses a flexible ring buffer implementation that allows different signal processing blocks to be connected to form a pipeline. Each block may be assigned to a CPU core, and the ring buffers are used to transport data to and from blocks. Processing blocks may be run on either the CPU or GPU, and the ring buffer will take care of memory copies between the CPU and GPU spaces.
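
    The block-and-ring-buffer pattern described above can be illustrated with a minimal, generic Python sketch; the classes and bounded queues below are invented for illustration and are not the Bifrost API. In Bifrost itself the buffers additionally handle memory copies between CPU and GPU spaces, which this toy example does not attempt.

        import threading
        import queue

        class Block(threading.Thread):
            """A processing block that reads frames from an input buffer, applies a
            function, and writes results to an output buffer (None = end of stream)."""
            def __init__(self, func, in_buf, out_buf=None):
                super().__init__()
                self.func, self.in_buf, self.out_buf = func, in_buf, out_buf

            def run(self):
                while True:
                    frame = self.in_buf.get()
                    if frame is None:                 # propagate end-of-stream marker
                        if self.out_buf: self.out_buf.put(None)
                        break
                    result = self.func(frame)
                    if self.out_buf: self.out_buf.put(result)

        # Bounded queues play the role of ring buffers transporting data between blocks.
        buf1, buf2 = queue.Queue(maxsize=8), queue.Queue(maxsize=8)

        detrend = Block(lambda x: [v - sum(x) / len(x) for v in x], in_buf=buf1, out_buf=buf2)
        power   = Block(lambda x: print(sum(v * v for v in x)), in_buf=buf2)

        detrend.start(); power.start()
        for i in range(4):                            # a toy "digital signal" source
            buf1.put([float(i + j) for j in range(16)])
        buf1.put(None)
        detrend.join(); power.join()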

  9. High-throughput DNA sequencing: a genomic data manufacturing process.

    Science.gov (United States)

    Huang, G M

    1999-01-01

    The progress trends in automated DNA sequencing operation are reviewed. Technological development in sequencing instruments, enzymatic chemistry and robotic stations has resulted in ever-increasing capacity of sequence data production. This progress leads to a higher demand on laboratory information management and data quality assessment. High-throughput laboratories face the challenge of organizational management, as well as technology management. Engineering principles of process control should be adopted in this biological data manufacturing procedure. While various systems attempt to provide solutions to automate different parts of, or even the entire process, new technical advances will continue to change the paradigm and provide new challenges.

  10. Spectrophotometric Enzyme Assays for High-Throughput Screening

    Directory of Open Access Journals (Sweden)

    Jean-Louis Reymond

    2004-01-01

    Full Text Available This paper reviews high-throughput screening enzyme assays developed in our laboratory over the last ten years. These enzyme assays were initially developed for the purpose of discovering catalytic antibodies by screening cell culture supernatants, but have proved generally useful for testing enzyme activities. Examples include TLC-based screening using acridone-labeled substrates, fluorogenic assays based on the β-elimination of umbelliferone or nitrophenol, and indirect assays such as the back-titration method with adrenaline and the copper-calcein fluorescence assay for amino acids.

  11. High throughput instruments, methods, and informatics for systems biology.

    Energy Technology Data Exchange (ETDEWEB)

    Sinclair, Michael B.; Cowie, Jim R. (New Mexico State University, Las Cruces, NM); Van Benthem, Mark Hilary; Wylie, Brian Neil; Davidson, George S.; Haaland, David Michael; Timlin, Jerilyn Ann; Aragon, Anthony D. (University of New Mexico, Albuquerque, NM); Keenan, Michael Robert; Boyack, Kevin W.; Thomas, Edward Victor; Werner-Washburne, Margaret C. (University of New Mexico, Albuquerque, NM); Mosquera-Caro, Monica P. (University of New Mexico, Albuquerque, NM); Martinez, M. Juanita (University of New Mexico, Albuquerque, NM); Martin, Shawn Bryan; Willman, Cheryl L. (University of New Mexico, Albuquerque, NM)

    2003-12-01

    High throughput instruments and analysis techniques are required in order to make good use of the genomic sequences that have recently become available for many species, including humans. These instruments and methods must work with tens of thousands of genes simultaneously, and must be able to identify the small subsets of those genes that are implicated in the observed phenotypes, or, for instance, in responses to therapies. Microarrays represent one such high throughput method, which continue to find increasingly broad application. This project has improved microarray technology in several important areas. First, we developed the hyperspectral scanner, which has discovered and diagnosed numerous flaws in techniques broadly employed by microarray researchers. Second, we used a series of statistically designed experiments to identify and correct errors in our microarray data to dramatically improve the accuracy, precision, and repeatability of the microarray gene expression data. Third, our research developed new informatics techniques to identify genes with significantly different expression levels. Finally, natural language processing techniques were applied to improve our ability to make use of online literature annotating the important genes. In combination, this research has improved the reliability and precision of laboratory methods and instruments, while also enabling substantially faster analysis and discovery.

  12. High throughput inclusion body sizing: Nano particle tracking analysis.

    Science.gov (United States)

    Reichelt, Wieland N; Kaineder, Andreas; Brillmann, Markus; Neutsch, Lukas; Taschauer, Alexander; Lohninger, Hans; Herwig, Christoph

    2017-06-01

    The expression of pharmaceutically relevant proteins in Escherichia coli frequently triggers inclusion body (IB) formation caused by protein aggregation. In the scientific literature, substantial effort has been devoted to the quantification of IB size. However, particle-based methods used up to this point to analyze the physical properties of representative numbers of IBs lack sensitivity and/or orthogonal verification. Using high pressure freezing and automated freeze substitution for transmission electron microscopy (TEM), the cytosolic inclusion body structure was preserved within the cells. TEM imaging in combination with manual grey scale image segmentation allowed the quantification of relative areas covered by the inclusion body within the cytosol. As a high throughput method, nano particle tracking analysis (NTA) enables one to derive the diameter of inclusion bodies in cell homogenate based on a measurement of the Brownian motion. The NTA analysis of fixated (glutaraldehyde) and non-fixated IBs suggests that high pressure homogenization annihilates the native physiological shape of IBs. Nevertheless, the ratio of particle counts of non-fixated and fixated samples could potentially serve as a factor for particle stickiness. In this contribution, we establish image segmentation of TEM pictures as an orthogonal method to size biologic particles in the cytosol of cells. More importantly, NTA has been established as a particle-based, fast and high throughput method (1000-3000 particles), thus constituting a much more accurate and representative analysis than currently available methods. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
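
    NTA converts the tracked Brownian motion into a hydrodynamic diameter via the Stokes-Einstein relation, d = kT/(3πηD). The sketch below is a minimal illustration of that conversion under assumed conditions (water viscosity at 25 °C, 2D tracking); it is not the instrument vendor's software or the authors' analysis pipeline, and the example MSD value is hypothetical.

        import math

        K_B = 1.380649e-23      # Boltzmann constant, J/K
        T = 298.15              # assumed temperature, K
        ETA = 8.9e-4            # assumed dynamic viscosity of water at 25 °C, Pa·s

        def diameter_from_diffusion(d_coeff_m2_per_s: float) -> float:
            """Hydrodynamic diameter (m) from a translational diffusion coefficient
            via the Stokes-Einstein relation d = kT / (3*pi*eta*D)."""
            return K_B * T / (3.0 * math.pi * ETA * d_coeff_m2_per_s)

        def diffusion_from_msd(msd_um2: float, lag_s: float) -> float:
            """2D Brownian motion: <r^2> = 4*D*t, with the MSD given in µm^2."""
            return (msd_um2 * 1e-12) / (4.0 * lag_s)   # m^2/s

        # Hypothetical tracked particle: MSD of 0.4 µm^2 over a 0.1 s lag.
        D = diffusion_from_msd(0.4, 0.1)
        print(f"diameter ≈ {diameter_from_diffusion(D) * 1e9:.0f} nm")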

  13. Efficient Management of High-Throughput Screening Libraries with SAVANAH

    DEFF Research Database (Denmark)

    List, Markus; Elnegaard, Marlene Pedersen; Schmidt, Steffen

    2017-01-01

    High-throughput screening (HTS) has become an indispensable tool for the pharmaceutical industry and for biomedical research. A high degree of automation allows for experiments in the range of a few hundred up to several hundred thousand to be performed in close succession. The basis for such screens are molecular libraries, that is, microtiter plates with solubilized reagents such as siRNAs, shRNAs, miRNA inhibitors or mimics, and sgRNAs, or small compounds, that is, drugs. These reagents are typically condensed to provide enough material for covering several screens. Library plates thus need to be serially diluted before they can be used as assay plates. This process, however, leads to an explosion in the number of plates and samples to be tracked. Here, we present SAVANAH, the first tool to effectively manage molecular screening libraries across dilution series. It conveniently links (connects ...

  14. Human transcriptome array for high-throughput clinical studies

    Science.gov (United States)

    Xu, Weihong; Seok, Junhee; Mindrinos, Michael N.; Schweitzer, Anthony C.; Jiang, Hui; Wilhelmy, Julie; Clark, Tyson A.; Kapur, Karen; Xing, Yi; Faham, Malek; Storey, John D.; Moldawer, Lyle L.; Maier, Ronald V.; Tompkins, Ronald G.; Wong, Wing Hung; Davis, Ronald W.; Xiao, Wenzhong; Toner, Mehmet; Warren, H. Shaw; Schoenfeld, David A.; Rahme, Laurence; McDonald-Smith, Grace P.; Hayden, Douglas; Mason, Philip; Fagan, Shawn; Yu, Yong-Ming; Cobb, J. Perren; Remick, Daniel G.; Mannick, John A.; Lederer, James A.; Gamelli, Richard L.; Silver, Geoffrey M.; West, Michael A.; Shapiro, Michael B.; Smith, Richard; Camp, David G.; Qian, Weijun; Tibshirani, Rob; Lowry, Stephen; Calvano, Steven; Chaudry, Irshad; Cohen, Mitchell; Moore, Ernest E.; Johnson, Jeffrey; Baker, Henry V.; Efron, Philip A.; Balis, Ulysses G. J.; Billiar, Timothy R.; Ochoa, Juan B.; Sperry, Jason L.; Miller-Graziano, Carol L.; De, Asit K.; Bankey, Paul E.; Herndon, David N.; Finnerty, Celeste C.; Jeschke, Marc G.; Minei, Joseph P.; Arnoldo, Brett D.; Hunt, John L.; Horton, Jureta; Cobb, J. Perren; Brownstein, Bernard; Freeman, Bradley; Nathens, Avery B.; Cuschieri, Joseph; Gibran, Nicole; Klein, Matthew; O'Keefe, Grant

    2011-01-01

    A 6.9 million-feature oligonucleotide array of the human transcriptome [Glue Grant human transcriptome (GG-H array)] has been developed for high-throughput and cost-effective analyses in clinical studies. This array allows comprehensive examination of gene expression and genome-wide identification of alternative splicing as well as detection of coding SNPs and noncoding transcripts. The performance of the array was examined and compared with mRNA sequencing (RNA-Seq) results over multiple independent replicates of liver and muscle samples. Compared with RNA-Seq of 46 million uniquely mappable reads per replicate, the GG-H array is highly reproducible in estimating gene and exon abundance. Although both platforms detect similar expression changes at the gene level, the GG-H array is more sensitive at the exon level. Deeper sequencing is required to adequately cover low-abundance transcripts. The array has been implemented in a multicenter clinical program and has generated high-quality, reproducible data. Considering the clinical trial requirements of cost, sample availability, and throughput, the GG-H array has a wide range of applications. An emerging approach for large-scale clinical genomic studies is to first use RNA-Seq to the sufficient depth for the discovery of transcriptome elements relevant to the disease process followed by high-throughput and reliable screening of these elements on thousands of patient samples using custom-designed arrays. PMID:21317363

  15. Fluorescent foci quantitation for high-throughput analysis

    Directory of Open Access Journals (Sweden)

    Elena Ledesma-Fernández

    2015-06-01

    Full Text Available A number of cellular proteins localize to discrete foci within cells, for example DNA repair proteins, microtubule organizing centers, P bodies or kinetochores. It is often possible to measure the fluorescence emission from tagged proteins within these foci as a surrogate for the concentration of that specific protein. We wished to develop tools that would allow quantitation of fluorescence foci intensities in high-throughput studies. As proof of principle we have examined the kinetochore, a large multi-subunit complex that is critical for the accurate segregation of chromosomes during cell division. Kinetochore perturbations lead to aneuploidy, which is a hallmark of cancer cells. Hence, understanding kinetochore homeostasis and regulation are important for a global understanding of cell division and genome integrity. The 16 budding yeast kinetochores colocalize within the nucleus to form a single focus. Here we have created a set of freely-available tools to allow high-throughput quantitation of kinetochore foci fluorescence. We use this ‘FociQuant’ tool to compare methods of kinetochore quantitation and we show proof of principle that FociQuant can be used to identify changes in kinetochore protein levels in a mutant that affects kinetochore function. This analysis can be applied to any protein that forms discrete foci in cells.
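
    A minimal sketch of the underlying quantitation step (segmenting bright foci above background and integrating their fluorescence) is given below in Python with scipy.ndimage; the thresholding rule, test image, and function names are illustrative assumptions, not the actual FociQuant implementation.

        import numpy as np
        from scipy import ndimage

        def quantify_foci(image: np.ndarray, n_sigma: float = 3.0):
            """Segment bright foci above background and return the integrated
            (background-subtracted) fluorescence of each focus."""
            background = np.median(image)
            threshold = background + n_sigma * image.std()
            labels, n_foci = ndimage.label(image > threshold)      # connected bright regions
            intensities = ndimage.sum(image - background, labels, index=range(1, n_foci + 1))
            return n_foci, np.asarray(intensities)

        # Hypothetical image: uniform background with two bright kinetochore-like foci.
        img = np.full((64, 64), 100.0)
        img[10:13, 10:13] += 500.0
        img[40:43, 20:23] += 800.0
        count, foci = quantify_foci(img)
        print(count, foci)   # per-focus integrated intensity as a proxy for protein level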

  16. High-throughput technology for novel SO2 oxidation catalysts

    Directory of Open Access Journals (Sweden)

    Jonas Loskyll, Klaus Stoewe and Wilhelm F Maier

    2011-01-01

    Full Text Available We review the state of the art and explain the need for better SO2 oxidation catalysts for the production of sulfuric acid. A high-throughput technology has been developed for the study of potential catalysts in the oxidation of SO2 to SO3. High-throughput methods are reviewed and the problems encountered with their adaptation to the corrosive conditions of SO2 oxidation are described. We show that while emissivity-corrected infrared thermography (ecIRT) can be used for primary screening, it is prone to errors because of the large variations in the emissivity of the catalyst surface. UV-visible (UV-Vis) spectrometry was selected instead as a reliable analysis method of monitoring the SO2 conversion. Installing plain sugar absorbents at reactor outlets proved valuable for the detection and quantitative removal of SO3 from the product gas before the UV-Vis analysis. We also overview some elements used for prescreening and those remaining after the screening of the first catalyst generations.

  17. Fusion genes and their discovery using high throughput sequencing.

    Science.gov (United States)

    Annala, M J; Parker, B C; Zhang, W; Nykter, M

    2013-11-01

    Fusion genes are hybrid genes that combine parts of two or more original genes. They can form as a result of chromosomal rearrangements or abnormal transcription, and have been shown to act as drivers of malignant transformation and progression in many human cancers. The biological significance of fusion genes together with their specificity to cancer cells has made them into excellent targets for molecular therapy. Fusion genes are also used as diagnostic and prognostic markers to confirm cancer diagnosis and monitor response to molecular therapies. High-throughput sequencing has enabled the systematic discovery of fusion genes in a wide variety of cancer types. In this review, we describe the history of fusion genes in cancer and the ways in which fusion genes form and affect cellular function. We also describe computational methodologies for detecting fusion genes from high-throughput sequencing experiments, and the most common sources of error that lead to false discovery of fusion genes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
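
    One common computational strategy mentioned above, counting discordant read pairs whose mates align to two different genes, can be sketched in a few lines of Python; the data structures, support threshold, and example alignments are hypothetical and do not correspond to any specific published fusion caller.

        from collections import Counter
        from typing import NamedTuple

        class ReadPair(NamedTuple):
            gene_mate1: str    # gene the first mate aligns to
            gene_mate2: str    # gene the second mate aligns to

        def candidate_fusions(pairs, min_support=5):
            """Count discordant read pairs bridging two different genes and report
            gene pairs with enough supporting evidence as fusion candidates."""
            support = Counter(
                tuple(sorted((p.gene_mate1, p.gene_mate2)))
                for p in pairs
                if p.gene_mate1 != p.gene_mate2
            )
            return {genes: n for genes, n in support.items() if n >= min_support}

        # Hypothetical alignments: many pairs bridge TMPRSS2 and ERG, a few are noise.
        pairs = [ReadPair("TMPRSS2", "ERG")] * 12 + [ReadPair("BRAF", "AKAP9")] * 2 \
                + [ReadPair("TP53", "TP53")] * 50
        print(candidate_fusions(pairs))   # {('ERG', 'TMPRSS2'): 12}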

  18. Computational analysis of high-throughput flow cytometry data.

    Science.gov (United States)

    Robinson, J Paul; Rajwa, Bartek; Patsekin, Valery; Davisson, Vincent Jo

    2012-08-01

    Flow cytometry has been around for over 40 years, but only recently has the opportunity arisen to move into the high-throughput domain. The technology is now available and is highly competitive with imaging tools under the right conditions. Flow cytometry has, however, been a technology that has focused on its unique ability to study single cells, and appropriate analytical tools are readily available to handle this traditional role of the technology. Expansion of flow cytometry to a high-throughput (HT) and high-content technology requires advances in both hardware and analytical tools. The historical perspective of flow cytometry operation, as well as how the field has changed and what the key changes have been, is discussed. The authors provide a background and compelling arguments for moving toward HT flow, where there are many innovative opportunities. With alternative approaches now available for flow cytometry, there will be a considerable number of new applications. These opportunities show strong capability for drug screening and functional studies with cells in suspension. There is no doubt that HT flow is a rich technology awaiting acceptance by the pharmaceutical community. It can provide a powerful phenotypic analytical toolset that has the capacity to change many current approaches to HT screening. The previous restrictions on the technology, based on its reduced capacity for sample throughput, are no longer a major issue. Overcoming this barrier has transformed a mature technology into one that can focus on systems biology questions not previously considered possible.

  19. Evaluation of a High Throughput Starch Analysis Optimised for Wood

    Science.gov (United States)

    Bellasio, Chandra; Fini, Alessio; Ferrini, Francesco

    2014-01-01

    Starch is the most important long-term reserve in trees, and the analysis of starch is therefore a useful source of physiological information. Currently published protocols for wood starch analysis impose several limitations, such as long procedures and a neutralization step. The high-throughput standard protocols for starch analysis in food and feed represent a valuable alternative. However, they have not been optimised or tested with woody samples. These have particular chemical and structural characteristics, including the presence of interfering secondary metabolites, low reactivity of starch, and low starch content. In this study, a standard method for starch analysis used for food and feed (AOAC standard method 996.11) was optimised to improve precision and accuracy for the analysis of starch in wood. Key modifications were introduced in the digestion conditions and in the glucose assay. The optimised protocol was then evaluated through 430 starch analyses of standards at known starch content, matrix polysaccharides, and wood collected from three organs (roots, twigs, mature wood) of four species (coniferous and flowering plants). The optimised protocol proved to be remarkably precise and accurate (3%), suitable for a high throughput routine analysis (35 samples a day) of specimens with a starch content between 40 mg and 21 µg. Samples may include lignified organs of coniferous and flowering plants and non-lignified organs, such as leaves, fruits and rhizomes. PMID:24523863

  20. Plant chip for high-throughput phenotyping of Arabidopsis.

    Science.gov (United States)

    Jiang, Huawei; Xu, Zhen; Aluru, Maneesha R; Dong, Liang

    2014-04-07

    We report on the development of a vertical and transparent microfluidic chip for high-throughput phenotyping of Arabidopsis thaliana plants. Multiple Arabidopsis seeds can be germinated and grown hydroponically over more than two weeks in the chip, thus enabling large-scale and quantitative monitoring of plant phenotypes. The novel vertical arrangement of this microfluidic device not only allows for normal gravitropic growth of the plants but also, more importantly, makes it convenient to continuously monitor phenotypic changes in plants at the whole organismal level, including seed germination and root and shoot growth (hypocotyls, cotyledons, and leaves), as well as at the cellular level. We also developed a hydrodynamic trapping method to automatically place single seeds into seed holding sites of the device and to avoid potential damage to seeds that might occur during manual loading. We demonstrated general utility of this microfluidic device by showing clear visible phenotypes of the immutans mutant of Arabidopsis, and we also showed changes occurring during plant-pathogen interactions at different developmental stages. Arabidopsis plants grown in the device maintained normal morphological and physiological behaviour, and distinct phenotypic variations consistent with a priori data were observed via high-resolution images taken in real time. Moreover, the timeline for different developmental stages for plants grown in this device was highly comparable to growth using a conventional agar plate method. This prototype plant chip technology is expected to lead to the establishment of a powerful experimental and cost-effective framework for high-throughput and precise plant phenotyping.

  1. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Directory of Open Access Journals (Sweden)

    Afonnikov D.

    2012-08-01

    Full Text Available The growing need for rapid and accurate approaches for large-scale assessment of phenotypic characters in plants becomes more and more obvious in the studies looking into relationships between genotype and phenotype. This need is due to the advent of high throughput methods for analysis of genomes. Nowadays, any genetic experiment involves data on thousands or even tens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those with reliance on the eye, the touch, the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which warrants much more rapid data acquisition, higher accuracy of the assessment of phenotypic features, measurement of new parameters of these features and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integration of genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between the genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  2. A Fully Automated High-Throughput Zebrafish Behavioral Ototoxicity Assay.

    Science.gov (United States)

    Todd, Douglas W; Philip, Rohit C; Niihori, Maki; Ringle, Ryan A; Coyle, Kelsey R; Zehri, Sobia F; Zabala, Leanne; Mudery, Jordan A; Francis, Ross H; Rodriguez, Jeffrey J; Jacob, Abraham

    2017-08-01

    Zebrafish animal models lend themselves to behavioral assays that can facilitate rapid screening of ototoxic, otoprotective, and otoregenerative drugs. Structurally similar to human inner ear hair cells, the mechanosensory hair cells on their lateral line allow the zebrafish to sense water flow and orient head-to-current in a behavior called rheotaxis. This rheotaxis behavior deteriorates in a dose-dependent manner with increased exposure to the ototoxin cisplatin, thereby establishing itself as an excellent biomarker for anatomic damage to lateral line hair cells. Building on work by our group and others, we have developed a new, fully automated high-throughput behavioral assay system that uses automated image analysis techniques to quantify rheotaxis behavior. This novel system consists of a custom-designed swimming apparatus and an imaging system built from network-controlled Raspberry Pi microcomputers capturing infrared video. Automated analysis techniques detect individual zebrafish, compute their orientation, and quantify the rheotaxis behavior of a zebrafish test population, producing a powerful, high-throughput behavioral assay. Using our fully automated biological assay to test a standardized ototoxic dose of cisplatin against varying doses of compounds that protect or regenerate hair cells may facilitate rapid translation of candidate drugs into preclinical mammalian models of hearing loss.
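
    A minimal sketch of how a rheotaxis index could be computed from detected fish orientations is shown below; the angle convention, tolerance, and example orientation values are illustrative assumptions rather than the authors' actual image-analysis code.

        import numpy as np

        def rheotaxis_index(orientations_deg, flow_direction_deg=0.0, tolerance_deg=45.0):
            """Fraction of fish whose body orientation points into the current,
            i.e. within +/- tolerance of the upstream (head-to-current) direction."""
            upstream = (flow_direction_deg + 180.0) % 360.0
            diff = np.abs((np.asarray(orientations_deg) - upstream + 180.0) % 360.0 - 180.0)
            return float(np.mean(diff <= tolerance_deg))

        # Hypothetical orientations (degrees) detected for two test populations in one frame.
        control   = [175, 182, 190, 168, 179, 185, 90, 177]
        cisplatin = [20, 310, 180, 95, 260, 45, 135, 200]
        print(rheotaxis_index(control), rheotaxis_index(cisplatin))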

  3. Structuring intuition with theory: The high-throughput way

    Science.gov (United States)

    Fornari, Marco

    2015-03-01

    First principles methodologies have grown in accuracy and applicability to the point where large databases can be built, shared, and analyzed with the goal of predicting novel compositions, optimizing functional properties, and discovering unexpected relationships between the data. In order to be useful to a large community of users, data should be standardized, validated, and distributed. In addition, tools to easily manage large datasets should be made available to effectively lead to materials development. Within the AFLOW consortium we have developed a simple framework to expand, validate, and mine data repositories: the MTFrame. Our minimalistic approach complements AFLOW and other existing high-throughput infrastructures and aims to integrate data generation with data analysis. We present a few examples from our work on materials for energy conversion. Our intent is to pinpoint the usefulness of high-throughput methodologies to guide the discovery process by quantitatively structuring the scientific intuition. This work was supported by ONR-MURI under Contract N00014-13-1-0635 and the Duke University Center for Materials Genomics.

  4. High-throughput genomics enhances tomato breeding efficiency.

    Science.gov (United States)

    Barone, A; Di Matteo, A; Carputo, D; Frusciante, L

    2009-03-01

    Tomato (Solanum lycopersicum) is considered a model plant species for a group of economically important crops, such as potato, pepper, and eggplant, since it exhibits a reduced genomic size (950 Mb), a short generation time, and routine transformation technologies. Moreover, it shares with the other Solanaceous plants the same haploid chromosome number and a high level of conserved genomic organization. Finally, many genomic and genetic resources are currently available for tomato, and the sequencing of its genome is in progress. These features make tomato an ideal species for theoretical studies and practical applications in the genomics field. The present review describes how structural genomics assists the selection of new varieties resistant to pathogens that cause damage to this crop. Many molecular markers highly linked to resistance genes and cloned resistance genes are available and could be used for high-throughput screening of multiresistant varieties. Moreover, a new genomics-assisted breeding approach for improving fruit quality is presented and discussed. It relies on the identification of genetic mechanisms controlling the trait of interest through functional genomics tools. Following this approach, polymorphisms in major gene sequences responsible for variability in the expression of the trait under study are then exploited for tracking favourable allele combinations simultaneously in breeding programs using high-throughput genomic technologies. This aims at pyramiding, in the genetic background of commercial cultivars, alleles that increase their performance. In conclusion, tomato breeding strategies supported by advanced technologies are expected to target increased productivity and lower costs of improved genotypes, even for complex traits.

  5. Computational analysis of high-throughput flow cytometry data

    Science.gov (United States)

    Robinson, J Paul; Rajwa, Bartek; Patsekin, Valery; Davisson, Vincent Jo

    2015-01-01

    Introduction Flow cytometry has been around for over 40 years, but only recently has the opportunity arisen to move into the high-throughput domain. The technology is now available and is highly competitive with imaging tools under the right conditions. Flow cytometry has, however, been a technology that has focused on its unique ability to study single cells and appropriate analytical tools are readily available to handle this traditional role of the technology. Areas covered Expansion of flow cytometry to a high-throughput (HT) and high-content technology requires both advances in hardware and analytical tools. The historical perspective of flow cytometry operation as well as how the field has changed and what the key changes have been discussed. The authors provide a background and compelling arguments for moving toward HT flow, where there are many innovative opportunities. With alternative approaches now available for flow cytometry, there will be a considerable number of new applications. These opportunities show strong capability for drug screening and functional studies with cells in suspension. Expert opinion There is no doubt that HT flow is a rich technology awaiting acceptance by the pharmaceutical community. It can provide a powerful phenotypic analytical toolset that has the capacity to change many current approaches to HT screening. The previous restrictions on the technology, based on its reduced capacity for sample throughput, are no longer a major issue. Overcoming this barrier has transformed a mature technology into one that can focus on systems biology questions not previously considered possible. PMID:22708834

  6. High-Throughput Microfluidics for the Screening of Yeast Libraries.

    Science.gov (United States)

    Huang, Mingtao; Joensson, Haakan N; Nielsen, Jens

    2018-01-01

    Cell factory development is critically important for efficient biological production of chemicals, biofuels, and pharmaceuticals. Many rounds of the Design-Build-Test-Learn cycles may be required before an engineered strain meeting specific metrics required for industrial application. The bioindustry prefer products in secreted form (secreted products or extracellular metabolites) as it can lower the cost of downstream processing, reduce metabolic burden to cell hosts, and allow necessary modification on the final products , such as biopharmaceuticals. Yet, products in secreted form result in the disconnection of phenotype from genotype, which may have limited throughput in the Test step for identification of desired variants from large libraries of mutant strains. In droplet microfluidic screening, single cells are encapsulated in individual droplet and enable high-throughput processing and sorting of single cells or clones. Encapsulation in droplets allows this technology to overcome the throughput limitations present in traditional methods for screening by extracellular phenotypes. In this chapter, we describe a protocol/guideline for high-throughput droplet microfluidics screening of yeast libraries for higher protein secretion . This protocol can be adapted to screening by a range of other extracellular products from yeast or other hosts.

  7. High-throughput search for improved transparent conducting oxides

    Science.gov (United States)

    Miglio, Anna

    High-throughput methodologies are a very useful computational tool to explore the space of binary and ternary oxides. We use these methods to search for new and improved transparent conducting oxides (TCOs). TCOs exhibit both visible transparency and good carrier mobility and underpin many energy and electronic applications (e.g. photovoltaics, transparent transistors). We find several potential new n-type and p-type TCOs with a low effective mass. Combining different ab initio approaches, we characterize candidate oxides by their effective mass (mobility), band gap (transparency) and dopability. We present several compounds, not considered previously as TCOs, and discuss the chemical rationale for their promising properties. This analysis is useful to formulate design strategies for future high mobility oxides and has led to follow-up studies including preliminary experimental characterization of a p-type TCO candidate with unexpected chemistry. G. Hautier, A. Miglio, D. Waroquiers, G.-M. Rignanese, and X. Gonze, "How Does Chemistry Influence Electron Effective Mass in Oxides? A High-Throughput Computational Analysis", Chem. Mater. 26, 5447 (2014). G. Hautier, A. Miglio, G. Ceder, G.-M. Rignanese, and X. Gonze, "Identification and design principles of low hole effective mass p-type transparent conducting oxides", Nature Commun. 4, 2292 (2013).

  8. High-throughput microcavitation bubble induced cellular mechanotransduction

    Science.gov (United States)

    Compton, Jonathan Lee

    inhibitor to IP3-induced Ca2+ release. This capability opens the development of a high-throughput screening platform for molecules that modulate cellular mechanotransduction. We have applied this approach to screen the effects of a small set of small molecules in a 96-well plate in less than an hour. These detailed studies offer a basis for the design, development, and implementation of a novel assay to rapidly screen the effect of small molecules on cellular mechanotransduction at high throughput.

  9. High-throughput ballistic injection nanorheology to measure cell mechanics

    Science.gov (United States)

    Wu, Pei-Hsun; Hale, Christopher M; Chen, Wei-Chiang; Lee, Jerry S H; Tseng, Yiider; Wirtz, Denis

    2015-01-01

    High-throughput ballistic injection nanorheology is a method for the quantitative study of cell mechanics. Cell mechanics are measured by ballistic injection of submicron particles into the cytoplasm of living cells and tracking the spontaneous displacement of the particles at high spatial resolution. The trajectories of the cytoplasm-embedded particles are transformed into mean-squared displacements, which are subsequently transformed into frequency-dependent viscoelastic moduli and time-dependent creep compliance of the cytoplasm. This method allows for the study of a wide range of cellular conditions, including cells inside a 3D matrix, cells subjected to shear flows and biochemical stimuli, and cells in a live animal. Ballistic injection lasts < 1 min and is followed by overnight incubation. Multiple particle tracking for one cell lasts < 1 min. Forty cells can be examined in < 1 h. PMID:22222790
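
    The first transformation in this workflow, turning particle trajectories into time-lag-dependent mean-squared displacements, can be sketched as follows; the synthetic random-walk trajectory is a hypothetical stand-in for real bead tracks, and the subsequent conversion to viscoelastic moduli is not shown.

        import numpy as np

        def mean_squared_displacement(track: np.ndarray, max_lag: int) -> np.ndarray:
            """Time-averaged MSD of a single 2D trajectory (shape: n_frames x 2)
            for lags 1..max_lag, in the same squared units as the input coordinates."""
            msd = np.empty(max_lag)
            for lag in range(1, max_lag + 1):
                disp = track[lag:] - track[:-lag]
                msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
            return msd

        # Hypothetical trajectory of one cytoplasm-embedded bead (random walk, µm).
        rng = np.random.default_rng(0)
        track = np.cumsum(rng.normal(scale=0.05, size=(600, 2)), axis=0)
        msd = mean_squared_displacement(track, max_lag=50)
        print(msd[:5])   # averaging over many beads/cells precedes the moduli conversion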

  10. Machine Learning for High-Throughput Stress Phenotyping in Plants.

    Science.gov (United States)

    Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh Kumar; Sarkar, Soumik

    2016-02-01

    Advances in automated and high-throughput imaging technologies have resulted in a deluge of high-resolution images and sensor data of plants. However, extracting patterns and features from this large corpus of data requires the use of machine learning (ML) tools to enable data assimilation and feature identification for stress phenotyping. Four stages of the decision cycle in plant stress phenotyping and plant breeding activities where different ML approaches can be deployed are (i) identification, (ii) classification, (iii) quantification, and (iv) prediction (ICQP). We provide here a comprehensive overview and user-friendly taxonomy of ML tools to enable the plant community to correctly and easily apply the appropriate ML tools and best-practice guidelines for various biotic and abiotic stress traits. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. An improved high throughput sequencing method for studying oomycete communities

    DEFF Research Database (Denmark)

    Sapkota, Rumakanta; Nicolaisen, Mogens

    2015-01-01

    Culture-independent studies using next generation sequencing have revolutionized microbial ecology; however, oomycete ecology in soils is severely lagging behind. The aim of this study was to improve and validate standard techniques for using high throughput sequencing as a tool for studying oomycete communities ... agricultural fields in Denmark, and 11 samples from carrot tissue with symptoms of Pythium infection. Sequence data from the Pythium and Phytophthora mock communities showed that our strategy successfully detected all included species. Taxonomic assignments of OTUs from 26 soil samples showed that 95 ... the usefulness of the method not only in soil DNA but also in a plant DNA background. In conclusion, we demonstrate a successful approach for pyrosequencing of oomycete communities using ITS1 as the barcode sequence with well-known primers for oomycete DNA amplification.

  12. Dimensioning storage and computing clusters for efficient High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Scientific experiments are producing huge amounts of data, and they continue increasing the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to loss of CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...

  13. High-Throughput Screening Using Fourier-Transform Infrared Imaging

    Directory of Open Access Journals (Sweden)

    Erdem Sasmaz

    2015-06-01

    Full Text Available Efficient parallel screening of combinatorial libraries is one of the most challenging aspects of the high-throughput (HT heterogeneous catalysis workflow. Today, a number of methods have been used in HT catalyst studies, including various optical, mass-spectrometry, and gas-chromatography techniques. Of these, rapid-scanning Fourier-transform infrared (FTIR imaging is one of the fastest and most versatile screening techniques. Here, the new design of the 16-channel HT reactor is presented and test results for its accuracy and reproducibility are shown. The performance of the system was evaluated through the oxidation of CO over commercial Pd/Al2O3 and cobalt oxide nanoparticles synthesized with different reducer-reductant molar ratios, surfactant types, metal and surfactant concentrations, synthesis temperatures, and ramp rates.

  14. High throughput parametric studies of the structure of complex nanomaterials

    Science.gov (United States)

    Tian, Peng

    The structure of nanoscale materials is difficult to study because crystallography, the gold-standard for structure studies, no longer works at the nanoscale. New tools are needed to study nanostructure. Furthermore, it is important to study the evolution of nanostructure of complex nanostructured materials as a function of various parameters such as temperature or other environmental variables. These are called parametric studies because an environmental parameter is being varied. This means that the new tools for studying nanostructure also need to be extended to work quickly and on large numbers of datasets. This thesis describes the development of new tools for high throughput studies of complex and nanostructured materials, and their application to study the structural evolution of bulk, and nanoparticles of, MnAs as a function of temperature. The tool for high throughput analysis of the bulk material was developed as part of this PhD thesis work and is called SrRietveld. A large part of making a new tool is to validate it and we did this for SrRietveld by carrying out a high-throughput study of uncertainties coming from the program using different ways of estimating the uncertainty. This tool was applied to study structural changes in MnAs as a function of temperature. We were also interested in studying different MnAs nanoparticles fabricated through different methods because of their applications in information storage. PDFgui, an existing tool for analyzing nanoparticles using Pair distribution function (PDF) refinement, was used in these cases. Comparing the results from the analysis by SrRietveld and PDFgui, we got more comprehensive structure information about MnAs. The layout of the thesis is as follows. First, the background knowledge about material structures is given. The conventional crystallographic analysis is introduced in both theoretical and practical ways. For high throughput study, the next-generation Rietveld analysis program: Sr

  15. High-Throughput Automation in Chemical Process Development.

    Science.gov (United States)

    Selekman, Joshua A; Qiu, Jun; Tran, Kristy; Stevens, Jason; Rosso, Victor; Simmons, Eric; Xiao, Yi; Janey, Jacob

    2017-06-07

    High-throughput (HT) techniques built upon laboratory automation technology and coupled to statistical experimental design and parallel experimentation have enabled the acceleration of chemical process development across multiple industries. HT technologies are often applied to interrogate wide, often multidimensional experimental spaces to inform the design and optimization of any number of unit operations that chemical engineers use in process development. In this review, we outline the evolution of HT technology and provide a comprehensive overview of how HT automation is used throughout different industries, with a particular focus on chemical and pharmaceutical process development. In addition, we highlight the common strategies of how HT automation is incorporated into routine development activities to maximize its impact in various academic and industrial settings.

  16. Interactive Visual Analysis of High Throughput Text Streams

    Energy Technology Data Exchange (ETDEWEB)

    Steed, Chad A [ORNL; Potok, Thomas E [ORNL; Patton, Robert M [ORNL; Goodall, John R [ORNL; Maness, Christopher S [ORNL; Senter, James K [ORNL; Potok, Thomas E [ORNL

    2012-01-01

    The scale, velocity, and dynamic nature of large scale social media systems like Twitter demand a new set of visual analytics techniques that support near real-time situational awareness. Social media systems are credited with escalating social protest during recent large scale riots. Virtual communities form rapidly in these online systems, and they occasionally foster violence and unrest, which is conveyed in the users' language. Techniques for analyzing broad trends over these networks or reconstructing conversations within small groups have been demonstrated in recent years, but state-of-the-art tools are inadequate at supporting near real-time analysis of these high throughput streams of unstructured information. In this paper, we present an adaptive system to discover and interactively explore these virtual networks, as well as detect sentiment, highlight change, and discover spatio-temporal patterns.

  17. The Principals and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker Miron Livny received a B.Sc. degree in Physics and Mat...

  18. High-Throughput Mass Spectrometry Applied to Structural Genomics

    Directory of Open Access Journals (Sweden)

    Rod Chalk

    2014-10-01

    Full Text Available Mass spectrometry (MS) remains under-utilized for the analysis of expressed proteins because it is inaccessible to the non-specialist, and sample turnaround from service labs is slow. Here, we describe 3.5 min Liquid-Chromatography (LC-MS) and 16 min LC-MSMS methods which are tailored to validation and characterization of recombinant proteins in a high throughput structural biology pipeline. We illustrate the type and scope of MS data typically obtained from a 96-well expression and purification test for both soluble and integral membrane proteins (IMPs), and describe their utility in the selection of constructs for scale-up structural work, leading to cost and efficiency savings. We propose that the value of MS data lies in how quickly it becomes available and that this can fundamentally change the way in which it is used.

  19. High-throughput screening: update on practices and success.

    Science.gov (United States)

    Fox, Sandra; Farr-Jones, Shauna; Sopchak, Lynne; Boggs, Amy; Nicely, Helen Wang; Khoury, Richard; Biros, Michael

    2006-10-01

    High-throughput screening (HTS) has become an important part of drug discovery at most pharmaceutical and many biotechnology companies worldwide, and use of HTS technologies is expanding into new areas. Target validation, assay development, secondary screening, ADME/Tox, and lead optimization are among the areas in which there is an increasing use of HTS technologies. It is becoming fully integrated within drug discovery, both upstream and downstream, which includes increasing use of cell-based assays and high-content screening (HCS) technologies to achieve more physiologically relevant results and to find higher quality leads. In addition, HTS laboratories are continually evaluating new technologies as they struggle to increase their success rate for finding drug candidates. The material in this article is based on a 900-page HTS industry report involving 54 HTS directors representing 58 HTS laboratories and 34 suppliers.

  20. Single-platelet nanomechanics measured by high-throughput cytometry

    Science.gov (United States)

    Myers, David R.; Qiu, Yongzhi; Fay, Meredith E.; Tennenbaum, Michael; Chester, Daniel; Cuadrado, Jonas; Sakurai, Yumiko; Baek, Jong; Tran, Reginald; Ciciliano, Jordan C.; Ahn, Byungwook; Mannino, Robert G.; Bunting, Silvia T.; Bennett, Carolyn; Briones, Michael; Fernandez-Nieves, Alberto; Smith, Michael L.; Brown, Ashley C.; Sulchek, Todd; Lam, Wilbur A.

    2017-02-01

    Haemostasis occurs at sites of vascular injury, where flowing blood forms a clot, a dynamic and heterogeneous fibrin-based biomaterial. Paramount in the clot's capability to stem haemorrhage are its changing mechanical properties, the major drivers of which are the contractile forces exerted by platelets against the fibrin scaffold. However, how platelets transduce microenvironmental cues to mediate contraction and alter clot mechanics is unknown. This is clinically relevant, as overly softened and stiffened clots are associated with bleeding and thrombotic disorders. Here, we report a high-throughput hydrogel-based platelet-contraction cytometer that quantifies single-platelet contraction forces in different clot microenvironments. We also show that platelets, via the Rho/ROCK pathway, synergistically couple mechanical and biochemical inputs to mediate contraction. Moreover, highly contractile platelet subpopulations present in healthy controls are conspicuously absent in a subset of patients with undiagnosed bleeding disorders, and therefore may function as a clinical diagnostic biophysical biomarker.

  1. High-throughput antibody development and retrospective epitope mapping

    DEFF Research Database (Denmark)

    Rydahl, Maja Gro

    Plant cell walls are composed of an interlinked network of polysaccharides, glycoproteins and phenolic polymers. When addressing the diverse polysaccharides in green plants, including land plants and the ancestral green algae, there are significant overlaps in the cell wall structures. Yet, there are noteworthy differences in the less evolved species of algae as compared to land plants. The dynamic process orchestrating the deposition of these biopolymers, both in algae and higher plants, is complex and highly heterogeneous, yet immensely important for the development and differentiation of the cell ... of green algae during the development into land plants. Hence, there is a pressing need for rethinking the glycomic toolbox by developing new and high-throughput (HTP) technology, in order to acquire information on the location and relative abundance of diverse cell wall polymers. In this dissertation ...

  2. Ethoscopes: An open platform for high-throughput ethomics.

    Directory of Open Access Journals (Sweden)

    Quentin Geissmann

    2017-10-01

    Full Text Available Here, we present the use of ethoscopes, which are machines for high-throughput analysis of behavior in Drosophila and other animals. Ethoscopes provide a software and hardware solution that is reproducible and easily scalable. They perform, in real-time, tracking and profiling of behavior by using a supervised machine learning algorithm, are able to deliver behaviorally triggered stimuli to flies in a feedback-loop mode, and are highly customizable and open source. Ethoscopes can be built easily by using 3D printing technology and rely on Raspberry Pi microcomputers and Arduino boards to provide affordable and flexible hardware. All software and construction specifications are available at http://lab.gilest.ro/ethoscope.

  3. Ethoscopes: An open platform for high-throughput ethomics.

    Science.gov (United States)

    Geissmann, Quentin; Garcia Rodriguez, Luis; Beckwith, Esteban J; French, Alice S; Jamasb, Arian R; Gilestro, Giorgio F

    2017-10-01

    Here, we present the use of ethoscopes, which are machines for high-throughput analysis of behavior in Drosophila and other animals. Ethoscopes provide a software and hardware solution that is reproducible and easily scalable. They perform, in real-time, tracking and profiling of behavior by using a supervised machine learning algorithm, are able to deliver behaviorally triggered stimuli to flies in a feedback-loop mode, and are highly customizable and open source. Ethoscopes can be built easily by using 3D printing technology and rely on Raspberry Pi microcomputers and Arduino boards to provide affordable and flexible hardware. All software and construction specifications are available at http://lab.gilest.ro/ethoscope.

  4. Ethoscopes: An open platform for high-throughput ethomics

    Science.gov (United States)

    Geissmann, Quentin; Garcia Rodriguez, Luis; Beckwith, Esteban J.; French, Alice S.; Jamasb, Arian R.

    2017-01-01

    Here, we present the use of ethoscopes, which are machines for high-throughput analysis of behavior in Drosophila and other animals. Ethoscopes provide a software and hardware solution that is reproducible and easily scalable. They perform, in real-time, tracking and profiling of behavior by using a supervised machine learning algorithm, are able to deliver behaviorally triggered stimuli to flies in a feedback-loop mode, and are highly customizable and open source. Ethoscopes can be built easily by using 3D printing technology and rely on Raspberry Pi microcomputers and Arduino boards to provide affordable and flexible hardware. All software and construction specifications are available at http://lab.gilest.ro/ethoscope. PMID:29049280

  5. High-throughput drawing and testing of metallic glass nanostructures.

    Science.gov (United States)

    Hasan, Molla; Kumar, Golden

    2017-03-02

    Thermoplastic embossing of metallic glasses promises direct imprinting of metal nanostructures using templates. However, embossing high-aspect-ratio nanostructures faces unworkable flow resistance due to friction and non-wetting conditions at the template interface. Herein, we show that these inherent challenges of embossing can be reversed by thermoplastic drawing using templates. The flow resistance not only remains independent of wetting but also decreases with increasing feature aspect-ratio. Arrays of assembled nanotips, nanowires, and nanotubes with aspect-ratios exceeding 1000 can be produced through controlled elongation and fracture of metallic glass structures. In contrast to embossing, the drawing approach generates two sets of nanostructures upon final fracture; one set remains anchored to the metallic glass substrate while the second set is assembled on the template. This method can be readily adapted for high-throughput fabrication and testing of nanoscale tensile specimens, enabling rapid screening of size-effects in mechanical behavior.

  6. Statistically invalid classification of high throughput gene expression data

    Science.gov (United States)

    Barbash, Shahar; Soreq, Hermona

    2013-01-01

    Classification analysis based on high throughput data is a common feature in neuroscience and other fields of science, with a rapidly increasing impact on both basic biology and disease-related studies. The outcome of such classifications often serves to delineate novel biochemical mechanisms in health and disease states, identify new targets for therapeutic interference, and develop innovative diagnostic approaches. Given the importance of this type of studies, we screened 111 recently published high-impact manuscripts involving classification analysis of gene expression, and found that 58 of them (53%) based their conclusions on a statistically invalid method which can lead to bias in a statistical sense (lower true classification accuracy than the reported classification accuracy). In this report we characterize the potential methodological error and its scope, investigate how it is influenced by different experimental parameters, and describe statistically valid methods for avoiding such classification mistakes. PMID:23346359

  7. Applications of High-Throughput Nucleotide Sequencing (PhD)

    DEFF Research Database (Denmark)

    Waage, Johannes

    The recent advent of high throughput sequencing of nucleic acids (RNA and DNA) has vastly expanded research into the functional and structural biology of the genome of all living organisms (and even a few dead ones). With this enormous and exponential growth in biological data generation come equally large demands in data handling, analysis and interpretation, perhaps defining the modern challenge of the computational biologist of the post-genomic era. The first part of this thesis consists of a general introduction to the history, common terms and challenges of next generation sequencing ... For the second flavor, DNA-seq, a study presenting genome-wide profiling of transcription factor CEBP/A in liver cells undergoing regeneration after partial hepatectomy (article IV) is included.

  8. High throughput sequencing of microRNAs in chicken somites.

    Science.gov (United States)

    Rathjen, Tina; Pais, Helio; Sweetman, Dylan; Moulton, Vincent; Munsterberg, Andrea; Dalmay, Tamas

    2009-05-06

    High throughput Solexa sequencing technology was applied to identify microRNAs in somites of developing chicken embryos. We obtained 651,273 reads, from which 340,415 were mapped to the chicken genome representing 1701 distinct sequences. Eighty-five of these were known microRNAs and 42 novel miRNA candidates were identified. Accumulation of 18 of 42 sequences was confirmed by Northern blot analysis. Ten of the 18 sequences are new variants of known miRNAs and eight short RNAs are novel miRNAs. Six of these eight have not been reported by other deep sequencing projects. One of the six new miRNAs is highly enriched in somite tissue suggesting that deep sequencing of other specific tissues has the potential to identify novel tissue specific miRNAs.

  9. Automated high-throughput behavioral analyses in zebrafish larvae.

    Science.gov (United States)

    Richendrfer, Holly; Créton, Robbert

    2013-07-04

    We have created a novel high-throughput imaging system for the analysis of behavior in 7-day-old zebrafish larvae in multi-lane plates. This system measures spontaneous behaviors and the response to an aversive stimulus, which is shown to the larvae via a PowerPoint presentation. The recorded images are analyzed with an ImageJ macro, which automatically splits the color channels, subtracts the background, and applies a threshold to identify the placement of individual larvae in the lanes. We can then import the coordinates into an Excel sheet to quantify swim speed, preference for the edge or side of the lane, resting behavior, thigmotaxis, distance between larvae, and avoidance behavior. Subtle changes in behavior are easily detected using our system, making it useful for behavioral analyses after exposure to environmental toxicants or pharmaceuticals.
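
    Once larva coordinates have been extracted, the behavioral metrics listed above reduce to simple arithmetic on the position time series; the following Python sketch (hypothetical coordinates, frame interval and lane width) illustrates swim speed and a simple edge-preference measure, and is not the authors' ImageJ macro or Excel sheet.

        import numpy as np

        def swim_speed(x, y, frame_interval_s: float) -> float:
            """Mean swim speed (coordinate units per second) from successive positions."""
            steps = np.hypot(np.diff(x), np.diff(y))
            return float(steps.sum() / (len(steps) * frame_interval_s))

        def edge_preference(y, lane_width: float, edge_fraction: float = 0.25) -> float:
            """Fraction of time spent within the outer edge_fraction of the lane width
            (a simple thigmotaxis-like measure)."""
            y = np.asarray(y)
            margin = edge_fraction * lane_width
            return float(np.mean((y < margin) | (y > lane_width - margin)))

        # Hypothetical larva coordinates (mm) sampled every 5 s in a 10 mm-wide lane.
        x = [2.0, 2.4, 3.1, 3.0, 3.6]
        y = [1.0, 4.8, 5.5, 2.2, 9.1]
        print(swim_speed(x, y, 5.0), edge_preference(y, lane_width=10.0))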

  10. High-throughput ab-initio dilute solute diffusion database

    Science.gov (United States)

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-01-01

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world. PMID:27434308
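    The summary statistic quoted above is a weighted RMS error over solute-host systems. A small Python sketch of that calculation follows; the barrier values and weights are placeholders, not data from the database.

      import numpy as np

      def weighted_rmse(calculated, experimental, weights):
          """Weighted root-mean-square error between calculated and experimental barriers (eV)."""
          c, e, w = (np.asarray(a, dtype=float) for a in (calculated, experimental, weights))
          return np.sqrt(np.sum(w * (c - e) ** 2) / np.sum(w))

      print(weighted_rmse([1.20, 0.95, 1.40], [1.10, 1.00, 1.55], [1.0, 2.0, 1.0]))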

  11. Reverse Phase Protein Arrays for High-throughput Toxicity Screening

    DEFF Research Database (Denmark)

    Pedersen, Marlene Lemvig; Block, Ines; List, Markus

    … beneficially in automated high-throughput toxicity testing. An advantage of using RPPAs is that, in addition to the baseline toxicity readout, they allow testing of multiple markers of toxicity, such as inflammatory responses, which do not necessarily culminate in cell death. We used transfection of siRNAs with known killing effects as a model system to demonstrate that RPPA-based protein quantification can serve as a substitute readout of cell viability, thereby reliably reflecting toxicity. In terms of automation, cell exposure, protein harvest, serial dilution and sample reformatting were performed using a robotic screening platform. Furthermore, we automated sample tracking and data analysis by developing a bundled bioinformatics tool named “MIRACLE”. Automation and RPPA-based viability/toxicity readouts enable rapid testing of large sample numbers, while granting the possibility for flexible consecutive …

  12. Sensitivity and accuracy of high-throughput metabarcoding methods for early detection of invasive fish species

    Science.gov (United States)

    Hatzenbuhler, Chelsea; Kelly, John R.; Martinson, John; Okum, Sara; Pilgrim, Erik

    2017-04-01

    High-throughput DNA metabarcoding has gained recognition as a potentially powerful tool for biomonitoring, including early detection of aquatic invasive species (AIS). DNA-based techniques are advancing, but our understanding of the limits to detection for metabarcoding complex samples is inadequate. For detecting AIS at an early stage of invasion when the species is rare, accuracy at low detection limits is key. To evaluate the utility of metabarcoding in future fish community monitoring programs, we conducted several experiments to determine the sensitivity and accuracy of routine metabarcoding methods. Experimental mixes used larval fish tissue from multiple “common” species spiked with varying proportions of tissue from an additional “rare” species. Pyrosequencing of the genetic marker COI (cytochrome c oxidase subunit I) and subsequent sequence data analysis provided experimental evidence of low-level detection of the target “rare” species at biomass percentages as low as 0.02% of total sample biomass. Limits to detection varied interspecifically and were susceptible to amplification bias. Moreover, results showed that some data processing methods can skew sequence-based biodiversity measurements away from the corresponding relative biomass abundances and increase false absences. We suggest caution in interpreting presence/absence and relative abundance in larval fish assemblages until metabarcoding methods are optimized for accuracy and precision.

  13. Galaxy Workflows for Web-based Bioinformatics Analysis of Aptamer High-throughput Sequencing Data

    Directory of Open Access Journals (Sweden)

    William H Thiel

    2016-01-01

    Full Text Available Development of RNA and DNA aptamers for diagnostic and therapeutic applications is a rapidly growing field. Aptamers are identified through iterative rounds of selection in a process termed SELEX (Systematic Evolution of Ligands by EXponential enrichment). High-throughput sequencing (HTS) revolutionized the modern SELEX process by identifying millions of aptamer sequences across multiple rounds of aptamer selection. However, these vast aptamer HTS datasets necessitated bioinformatics techniques. Herein, we describe a semiautomated approach to analyze aptamer HTS datasets using the Galaxy Project, a web-based open source collection of bioinformatics tools that were originally developed to analyze genome, exome, and transcriptome HTS data. Using a series of Workflows created in the Galaxy webserver, we demonstrate efficient processing of aptamer HTS data and compilation of a database of unique aptamer sequences. Additional Workflows were created to characterize the abundance and persistence of aptamer sequences within a selection and to filter sequences based on these parameters. A key advantage of this approach is that the online nature of the Galaxy webserver and its graphical interface allow for the analysis of HTS data without the need to compile code or install multiple programs.

  14. Galaxy Workflows for Web-based Bioinformatics Analysis of Aptamer High-throughput Sequencing Data.

    Science.gov (United States)

    Thiel, William H

    2016-01-01

    Development of RNA and DNA aptamers for diagnostic and therapeutic applications is a rapidly growing field. Aptamers are identified through iterative rounds of selection in a process termed SELEX (Systematic Evolution of Ligands by EXponential enrichment). High-throughput sequencing (HTS) revolutionized the modern SELEX process by identifying millions of aptamer sequences across multiple rounds of aptamer selection. However, these vast aptamer HTS datasets necessitated bioinformatics techniques. Herein, we describe a semiautomated approach to analyze aptamer HTS datasets using the Galaxy Project, a web-based open source collection of bioinformatics tools that were originally developed to analyze genome, exome, and transcriptome HTS data. Using a series of Workflows created in the Galaxy webserver, we demonstrate efficient processing of aptamer HTS data and compilation of a database of unique aptamer sequences. Additional Workflows were created to characterize the abundance and persistence of aptamer sequences within a selection and to filter sequences based on these parameters. A key advantage of this approach is that the online nature of the Galaxy webserver and its graphical interface allow for the analysis of HTS data without the need to compile code or install multiple programs. Copyright © 2016 Official journal of the American Society of Gene & Cell Therapy. Published by Elsevier Inc. All rights reserved.
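    The abundance and persistence metrics mentioned in both records above can be expressed compactly. The Python sketch below uses invented round labels and sequences to show one way such a filter might work; it is not the Galaxy Workflow implementation.

      from collections import Counter

      rounds = {
          "R4": ["ACGTACGT", "ACGTACGT", "GGGTTTAA"],
          "R6": ["ACGTACGT", "GGGTTTAA", "GGGTTTAA", "TTTTCCCC"],
          "R8": ["ACGTACGT", "TTTTCCCC"],
      }

      # Per-round abundance of each unique aptamer sequence.
      abundance = {label: Counter(seqs) for label, seqs in rounds.items()}

      # Persistence: number of selection rounds in which a sequence is observed.
      persistence = Counter(seq for counts in abundance.values() for seq in counts)

      # Simple filter: keep sequences observed in at least two rounds.
      kept = sorted(seq for seq, n in persistence.items() if n >= 2)
      print(kept)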

  15. High-throughput high-resolution class I HLA genotyping in East Africa.

    Directory of Open Access Journals (Sweden)

    Rebecca N Koehler

    Full Text Available HLA, the most genetically diverse loci in the human genome, play a crucial role in host-pathogen interaction by mediating innate and adaptive cellular immune responses. A vast number of infectious diseases affect East Africa, including HIV/AIDS, malaria, and tuberculosis, but the HLA genetic diversity in this region remains incompletely described. This is a major obstacle for the design and evaluation of preventive vaccines. Available HLA typing techniques that provide the 4-digit resolution needed to interpret immune responses lack sufficient throughput for large immunoepidemiological studies. Here we present a novel HLA typing assay bridging the gap between high resolution and high throughput. The assay is based on real-time PCR using sequence-specific primers (SSP) and can genotype carriers of the 49 most common East African class I HLA-A, -B, and -C alleles at the 4-digit level. Using a validation panel of 175 samples from Kampala, Uganda, previously defined by sequence-based typing, the new assay performed with 100% sensitivity and specificity. The assay was also implemented to define the HLA genetic complexity of a previously uncharacterized Tanzanian population, demonstrating its inclusion in the major East African genetic cluster. The availability of genotyping tools with this capacity will be extremely useful in the identification of correlates of immune protection and the evaluation of candidate vaccine efficacy.

  16. High-throughput phenotypic characterization of Pseudomonas aeruginosa membrane transport genes.

    Directory of Open Access Journals (Sweden)

    Daniel A Johnson

    2008-10-01

    Full Text Available The deluge of data generated by genome sequencing has led to an increasing reliance on bioinformatic predictions, since the traditional experimental approach of characterizing gene function one at a time cannot possibly keep pace with the sequence-based discovery of novel genes. We have utilized Biolog phenotype MicroArrays to identify phenotypes of gene knockout mutants in the opportunistic pathogen and versatile soil bacterium Pseudomonas aeruginosa in a relatively high-throughput fashion. Seventy-eight P. aeruginosa mutants defective in predicted sugar and amino acid membrane transporter genes were screened and clear phenotypes were identified for 27 of these. In all cases, these phenotypes were confirmed by independent growth assays on minimal media. Using qRT-PCR, we demonstrate that the expression levels of 11 of these transporter genes were induced from 4- to 90-fold by their substrates identified via phenotype analysis. Overall, the experimental data showed the bioinformatic predictions to be largely correct in 22 out of 27 cases, and led to the identification of novel transporter genes and a potentially new histamine catabolic pathway. Thus, rapid phenotype identification assays are an invaluable tool for confirming and extending bioinformatic predictions.

  17. A robust robotic high-throughput antibody purification platform.

    Science.gov (United States)

    Schmidt, Peter M; Abdo, Michael; Butcher, Rebecca E; Yap, Min-Yin; Scotney, Pierre D; Ramunno, Melanie L; Martin-Roussety, Genevieve; Owczarek, Catherine; Hardy, Matthew P; Chen, Chao-Guang; Fabri, Louis J

    2016-07-15

    Monoclonal antibodies (mAbs) have become the fastest growing segment in the drug market with annual sales of more than 40 billion US$ in 2013. The selection of lead candidate molecules involves the generation of large repertoires of antibodies from which to choose a final therapeutic candidate. Improvements in the ability to rapidly produce and purify many antibodies in sufficient quantities reduce the lead time for selection, which ultimately impacts the speed with which an antibody may transition through the research stage and into product development. Miniaturization and automation of chromatography using micro columns (RoboColumns® from Atoll GmbH) coupled to an automated liquid handling instrument (ALH; Freedom EVO® from Tecan) has been a successful approach to establish high-throughput process development platforms. Recent advances in transient gene expression (TGE) using the high-titre Expi293F™ system have enabled recombinant mAb titres of greater than 500 mg/L. These relatively high protein titres reduce the volume required to generate several milligrams of individual antibodies for initial biochemical and biological downstream assays, making TGE in the Expi293F™ system ideally suited to high-throughput chromatography on an ALH. The present publication describes a novel platform for purifying Expi293F™-expressed recombinant mAbs directly from cell-free culture supernatant on a Perkin Elmer JANUS-VariSpan ALH equipped with a plate shuttle device. The purification platform allows automated two-step purification (Protein A, then desalting/size-exclusion chromatography) of several hundred mAbs per week. The new robotic method can purify mAbs with high recovery (>90%) at the sub-milligram level, with yields of up to 2 mg from 4 mL of cell-free culture supernatant. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Surrogate-assisted feature extraction for high-throughput phenotyping.

    Science.gov (United States)

    Yu, Sheng; Chakrabortty, Abhishek; Liao, Katherine P; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2017-04-01

    Phenotyping algorithms are capable of accurately identifying patients with specific phenotypes from within electronic medical records systems. However, developing phenotyping algorithms in a scalable way remains a challenge due to the extensive human resources required. This paper introduces a high-throughput unsupervised feature selection method, which improves the robustness and scalability of electronic medical record phenotyping without compromising its accuracy. The proposed Surrogate-Assisted Feature Extraction (SAFE) method selects candidate features from a pool of comprehensive medical concepts found in publicly available knowledge sources. The target phenotype's International Classification of Diseases, Ninth Revision (ICD-9) and natural language processing counts, acting as noisy surrogates for the gold-standard labels, are used to create silver-standard labels. Candidate features highly predictive of the silver-standard labels are selected as the final features. Algorithms were trained to identify patients with coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis using various numbers of labels to compare the performance of features selected by SAFE, a previously published automated feature extraction for phenotyping procedure, and domain experts. The out-of-sample area under the receiver operating characteristic curve and F-score from SAFE algorithms were remarkably higher than those from the other two, especially at small label sizes. SAFE advances high-throughput phenotyping methods by automatically selecting a succinct set of informative features for algorithm training, which in turn reduces overfitting and the needed number of gold-standard labels. SAFE also potentially identifies important features missed by automated feature extraction for phenotyping or experts.
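    A conceptual sketch of the surrogate idea follows: a noisy silver-standard label built from a surrogate (here, a thresholded ICD-9 code count) is used to rank candidate features before supervised training. All data, thresholds, and the ranking statistic are assumptions for illustration and do not reproduce the published SAFE procedure.

      import numpy as np
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.poisson(1.0, size=(500, 200))            # candidate concept counts per patient
      icd9_counts = X[:, 0] + rng.poisson(0.5, 500)    # surrogate tied to an informative feature
      silver = (icd9_counts >= 2).astype(int)          # silver-standard labels from the surrogate

      # Keep the features most predictive of the silver-standard labels.
      selector = SelectKBest(mutual_info_classif, k=10).fit(X, silver)
      X_selected = selector.transform(X)

      # A small gold-standard set (stand-in for chart review) trains the final model.
      gold_idx = rng.choice(500, size=60, replace=False)
      gold_labels = silver[gold_idx]                   # placeholder for reviewed labels
      model = LogisticRegression(max_iter=1000).fit(X_selected[gold_idx], gold_labels)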

  19. A High-Throughput Antibody-Based Microarray Typing Platform

    Science.gov (United States)

    Andrew, Gehring; Charles, Barnett; Chu, Ted; DebRoy, Chitrita; D'Souza, Doris; Eaker, Shannon; Fratamico, Pina; Gillespie, Barbara; Hegde, Narasimha; Jones, Kevin; Lin, Jun; Oliver, Stephen; Paoli, George; Perera, Ashan; Uknalis, Joseph

    2013-01-01

    Many rapid methods have been developed for screening foods for the presence of pathogenic microorganisms. Rapid methods that have the additional ability to identify microorganisms via multiplexed immunological recognition have the potential for classification or typing of microbial contaminants thus facilitating epidemiological investigations that aim to identify outbreaks and trace back the contamination to its source. This manuscript introduces a novel, high throughput typing platform that employs microarrayed multiwell plate substrates and laser-induced fluorescence of the nucleic acid intercalating dye/stain SYBR Gold for detection of antibody-captured bacteria. The aim of this study was to use this platform for comparison of different sets of antibodies raised against the same pathogens as well as demonstrate its potential effectiveness for serotyping. To that end, two sets of antibodies raised against each of the “Big Six” non-O157 Shiga toxin-producing E. coli (STEC) as well as E. coli O157:H7 were array-printed into microtiter plates, and serial dilutions of the bacteria were added and subsequently detected. Though antibody specificity was not sufficient for the development of an STEC serotyping method, the STEC antibody sets performed reasonably well exhibiting that specificity increased at lower capture antibody concentrations or, conversely, at lower bacterial target concentrations. The favorable results indicated that with sufficiently selective and ideally concentrated sets of biorecognition elements (e.g., antibodies or aptamers), this high-throughput platform can be used to rapidly type microbial isolates derived from food samples within ca. 80 min of total assay time. It can also potentially be used to detect the pathogens from food enrichments and at least serve as a platform for testing antibodies. PMID:23645110

  20. High throughput phenotyping for aphid resistance in large plant collections

    Directory of Open Access Journals (Sweden)

    Chen Xi

    2012-08-01

    Full Text Available Abstract Background Phloem-feeding insects are among the most devastating pests worldwide. They not only cause damage by feeding from the phloem, thereby depleting the plant of photo-assimilates, but also by vectoring viruses. Until now, the main way to prevent such problems has been the frequent use of insecticides. Applying resistant varieties would be a more environmentally friendly and sustainable solution. For this, resistant sources need to be identified first. Up to now there have been no methods suitable for high-throughput phenotyping of plant germplasm to identify sources of resistance towards phloem-feeding insects. Results In this paper we present a high-throughput screening system to identify plants with an increased resistance against aphids. Its versatility is demonstrated using an Arabidopsis thaliana activation tag mutant line collection. This system consists of the green peach aphid Myzus persicae (Sulzer) and the circulative virus Turnip yellows virus (TuYV). In an initial screening, with one plant representing one mutant line, 13 virus-free mutant lines were identified by ELISA. Using seeds produced from these lines, the putative candidates were re-evaluated and characterized, resulting in nine lines with increased resistance towards the aphid. Conclusions This M. persicae-TuYV screening system is an efficient, reliable and quick procedure to identify, among thousands of mutated lines, those resistant to aphids. In our study, nine mutant lines with increased resistance against the aphid were selected among 5160 mutant lines in just 5 months by one person. The system can be extended to other phloem-feeding insects and circulative viruses to identify insect-resistant sources from several collections, including for example genebanks and artificially prepared mutant collections.

  1. High-throughput phenotyping of seminal root traits in wheat.

    Science.gov (United States)

    Richard, Cecile Ai; Hickey, Lee T; Fletcher, Susan; Jennings, Raeleen; Chenu, Karine; Christopher, Jack T

    2015-01-01

    Water availability is a major limiting factor for wheat (Triticum aestivum L.) production in rain-fed agricultural systems worldwide. Root system architecture has important functional implications for the timing and extent of soil water extraction, yet selection for root architectural traits in breeding programs has been limited by a lack of suitable phenotyping methods. The aim of this research was to develop low-cost, high-throughput phenotyping methods to facilitate selection for desirable root architectural traits. Here, we report two methods, one using clear pots and the other using growth pouches, to assess the angle and the number of seminal roots in wheat seedlings, two proxy traits associated with the root architecture of mature wheat plants. Both methods revealed genetic variation for seminal root angle and number in a panel of 24 wheat cultivars. The clear pot method provided higher heritability and higher genetic correlations across experiments compared to the growth pouch method. In addition, the clear pot method was more efficient, requiring less time, space, and labour compared to the growth pouch method. Therefore the clear pot method was considered the most suitable for large-scale and high-throughput screening of seedling root characteristics in crop improvement programs. The clear pot method could be easily integrated in breeding programs targeting drought tolerance to rapidly enrich breeding populations with desirable alleles. For instance, selection for narrow root angle and high number of seminal roots could lead to deeper root systems with higher branching at depth. Such root characteristics are highly desirable in wheat to cope with anticipated future climate conditions, particularly where crops rely heavily on stored soil moisture at depth, including some Australian, Indian, South American, and African cropping regions.

  2. High-Throughput Genomics Enhances Tomato Breeding Efficiency

    Science.gov (United States)

    Barone, A; Di Matteo, A; Carputo, D; Frusciante, L

    2009-01-01

    Tomato (Solanum lycopersicum) is considered a model plant species for a group of economically important crops, such as potato, pepper, and eggplant, since it exhibits a reduced genome size (950 Mb), a short generation time, and routine transformation technologies. Moreover, it shares with the other Solanaceous plants the same haploid chromosome number and a high level of conserved genomic organization. Finally, many genomic and genetic resources are currently available for tomato, and the sequencing of its genome is in progress. These features make tomato an ideal species for theoretical studies and practical applications in the genomics field. The present review describes how structural genomics assists the selection of new varieties resistant to pathogens that cause damage to this crop. Many molecular markers highly linked to resistance genes and cloned resistance genes are available and could be used for high-throughput screening of multiresistant varieties. Moreover, a new genomics-assisted breeding approach for improving fruit quality is presented and discussed. It relies on the identification of genetic mechanisms controlling the trait of interest through functional genomics tools. Following this approach, polymorphisms in major gene sequences responsible for variability in the expression of the trait under study are then exploited for simultaneously tracking favourable allele combinations in breeding programs using high-throughput genomic technologies. This aims at pyramiding, in the genetic background of commercial cultivars, alleles that increase their performance. In conclusion, tomato breeding strategies supported by advanced technologies are expected to target increased productivity and lower costs of improved genotypes even for complex traits. PMID:19721805

  3. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

    Xiao-Lin eWu

    2011-02-01

    Full Text Available High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general purpose computation on a graphics processing unit (GPU) provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin – Madison, which can be leveraged for genomic selection, in terms of central processing unit (CPU) capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of

  4. A high throughput mechanical screening device for cartilage tissue engineering.

    Science.gov (United States)

    Mohanraj, Bhavana; Hou, Chieh; Meloni, Gregory R; Cosgrove, Brian D; Dodge, George R; Mauck, Robert L

    2014-06-27

    Articular cartilage enables efficient and near-frictionless load transmission, but suffers from poor inherent healing capacity. As such, cartilage tissue engineering strategies have focused on mimicking both compositional and mechanical properties of native tissue in order to provide effective repair materials for the treatment of damaged or degenerated joint surfaces. However, given the large number design parameters available (e.g. cell sources, scaffold designs, and growth factors), it is difficult to conduct combinatorial experiments of engineered cartilage. This is particularly exacerbated when mechanical properties are a primary outcome, given the long time required for testing of individual samples. High throughput screening is utilized widely in the pharmaceutical industry to rapidly and cost-effectively assess the effects of thousands of compounds for therapeutic discovery. Here we adapted this approach to develop a high throughput mechanical screening (HTMS) system capable of measuring the mechanical properties of up to 48 materials simultaneously. The HTMS device was validated by testing various biomaterials and engineered cartilage constructs and by comparing the HTMS results to those derived from conventional single sample compression tests. Further evaluation showed that the HTMS system was capable of distinguishing and identifying 'hits', or factors that influence the degree of tissue maturation. Future iterations of this device will focus on reducing data variability, increasing force sensitivity and range, as well as scaling-up to even larger (96-well) formats. This HTMS device provides a novel tool for cartilage tissue engineering, freeing experimental design from the limitations of mechanical testing throughput. © 2013 Published by Elsevier Ltd.

  5. A pocket device for high-throughput optofluidic holographic microscopy

    Science.gov (United States)

    Mandracchia, B.; Bianco, V.; Wang, Z.; Paturzo, M.; Bramanti, A.; Pioggia, G.; Ferraro, P.

    2017-06-01

    Here we introduce a compact holographic microscope embedded onboard a Lab-on-a-Chip (LoC) platform. A wavefront division interferometer is realized by writing a polymer grating onto the channel to extract a reference wave from the object wave impinging the LoC. A portion of the beam reaches the samples flowing along the channel path, carrying their information content to the recording device, while one of the diffraction orders from the grating acts as an off-axis reference wave. Polymeric micro-lenses are delivered forward the chip by Pyro-ElectroHydroDynamic (Pyro-EHD) inkjet printing techniques. Thus, all the required optical components are embedded onboard a pocket device, and fast, non-iterative, reconstruction algorithms can be used. We use our device in combination with a novel high-throughput technique, named Space-Time Digital Holography (STDH). STDH exploits the samples motion inside microfluidic channels to obtain a synthetic hologram, mapped in a hybrid space-time domain, and with intrinsic useful features. Indeed, a single Linear Sensor Array (LSA) is sufficient to build up a synthetic representation of the entire experiment (i.e. the STDH) with unlimited Field of View (FoV) along the scanning direction, independently from the magnification factor. The throughput of the imaging system is dramatically increased as STDH provides unlimited FoV, refocusable imaging of samples inside the liquid volume with no need for hologram stitching. To test our embedded STDH microscopy module, we counted, imaged and tracked in 3D with high-throughput red blood cells moving inside the channel volume under non ideal flow conditions.

  6. Genomics and epigenomics: new promises of personalized medicine for cancer patients.

    Science.gov (United States)

    Schweiger, Michal-Ruth; Barmeyer, Christian; Timmermann, Bernd

    2013-09-01

    Recent years have brought about a marked extension of our understanding of the somatic basis of cancer. Parallel to the large-scale investigation of diverse tumor genomes the knowledge arose that cancer pathologies are most often not restricted to single genomic events. In contrast, a large number of different alterations in the genomes and epigenomes come together and promote the malignant transformation. The combination of mutations, structural variations and epigenetic alterations differs between each tumor, making individual diagnosis and treatment strategies necessary. This view is summarized in the new discipline of personalized medicine. To satisfy the ideas of this approach each tumor needs to be fully characterized and individual diagnostic and therapeutic strategies designed. Here, we will discuss the power of high-throughput sequencing technologies for genomic and epigenomic analyses. We will provide insight into the current status and how these technologies can be transferred to routine clinical usage.

  7. Genetic sources of population epigenomic variation

    NARCIS (Netherlands)

    Taudt, Aaron; Colome-Tatche, Maria; Johannes, Frank

    The field of epigenomics has rapidly progressed from the study of individual reference epigenomes to surveying epigenomic variation in populations. Recent studies in a number of species, from yeast to humans, have begun to dissect the cis- and trans-regulatory genetic mechanisms that shape patterns

  8. Study on radiation-responsive epigenomes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Hong; Chung, Byung Yeop; Lee, Seung Sik; Moon, Yu Ran; Lee, Min Hee; Kim, Ji Hong [KAERI, Daejeon (Korea, Republic of)]

    2011-01-15

    The purpose of this project is the development of world-class headspring techniques of biological science for the application of plant genomes/epigenomes through the study of radiation-responsive epigenomes, and the improvement of national competitiveness in the field of fundamental technology for biological science and industry. The research scope includes 1) investigation of radiation-responsive epigenomes and elucidation of their relation with phenotypes, 2) elucidation of interaction and transcription control of epigenomes and epigenetic regulators using ionizing radiation (IR), 3) investigation of epigenome-mediated traits in plant development, differentiation and antioxidant defense using IR, and 4) development of application techniques of radiation-responsive epigenomes for eco-monitoring and molecular breeding. The main results are as follows: setup of conditions for chromatin immunoprecipitation in irradiated plants; investigation of aberrations in DNA methylation after treatment with different IR; elucidation of responses of epigenetic regulators to gamma rays (GR); investigation of aberrations in GR-responsive epigenetic regulators at different developmental stages; elucidation of interactive aberrations of epigenomes and epigenetic regulators after treatment with GR; comparison of functional genomes after treatment with GR or H2O2; elucidation of the relation of epigenomes with GR-induced delay in senescence; elucidation of the relation of epigenomes with GR-induced aberrations in pigment metabolism; comparison of antioxidant defense in epigenetic mutants; investigation of senescence-associated changes in epigenomes; investigation of senescence-associated changes in epigenetic regulators; and comparison of aberrations in epigenomes at different doses of GR for mutation.

  9. SNP-PHAGE – High throughput SNP discovery pipeline

    Directory of Open Access Journals (Sweden)

    Cregan Perry B

    2006-10-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) as defined here are single base sequence changes or short insertion/deletions between or within individuals of a given species. As a result of their abundance and the availability of high-throughput analysis technologies, SNP markers have begun to replace other traditional markers such as restriction fragment length polymorphisms (RFLPs), amplified fragment length polymorphisms (AFLPs) and simple sequence repeats (SSRs), or microsatellite markers, for fine mapping and association studies in several species. For SNP discovery from chromatogram data, several bioinformatics programs have to be combined to generate an analysis pipeline. Results have to be stored in a relational database to facilitate interrogation through queries or to generate data for further analyses such as determination of linkage disequilibrium and identification of common haplotypes. Although these tasks are routinely performed by several groups, an integrated open source SNP discovery pipeline that can be easily adapted by new groups interested in SNP marker development is currently unavailable. Results We developed SNP-PHAGE (SNP discovery Pipeline with additional features for identification of common haplotypes within a sequence tagged site (Haplotype Analysis) and GenBank (dbSNP) submissions). This tool was applied for analyzing sequence traces from diverse soybean genotypes to discover over 10,000 SNPs. This package was developed on a UNIX/Linux platform, written in Perl, and uses a MySQL database. Scripts to generate a user-friendly web interface are also provided with common queries for preliminary data analysis. A machine learning tool developed by this group for increasing the efficiency of SNP discovery is integrated as a part of this package as an optional feature. The SNP-PHAGE package is being made available open source at http://bfgl.anri.barc.usda.gov/ML/snp-phage/. Conclusion SNP-PHAGE provides a bioinformatics

  10. A Primer on High-Throughput Computing for Genomic Selection

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized
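    As a toy illustration of the batch-versus-parallel contrast described in both versions of this primer, the Python sketch below evaluates several hypothetical traits serially and then through a small worker pool; the sleep call stands in for a long model-fitting job, and the trait names are invented.

      import time
      from multiprocessing import Pool

      def evaluate_trait(trait):
          time.sleep(0.5)                    # stand-in for fitting one genomic prediction model
          return f"{trait}: done"

      traits = ["milk_yield", "fertility", "stature", "longevity"]

      if __name__ == "__main__":
          start = time.time()
          serial = [evaluate_trait(t) for t in traits]        # one trait after another
          middle = time.time()
          with Pool(processes=4) as pool:                     # simple high-throughput stand-in
              parallel = pool.map(evaluate_trait, traits)
          end = time.time()
          print(f"serial: {middle - start:.1f} s, parallel: {end - middle:.1f} s")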

  11. A high-throughput Arabidopsis reverse genetics system.

    Science.gov (United States)

    Sessions, Allen; Burke, Ellen; Presting, Gernot; Aux, George; McElver, John; Patton, David; Dietrich, Bob; Ho, Patrick; Bacwaden, Johana; Ko, Cynthia; Clarke, Joseph D; Cotton, David; Bullis, David; Snell, Jennifer; Miguel, Trini; Hutchison, Don; Kimmerly, Bill; Mitzel, Theresa; Katagiri, Fumiaki; Glazebrook, Jane; Law, Marc; Goff, Stephen A

    2002-12-01

    A collection of Arabidopsis lines with T-DNA insertions in known sites was generated to increase the efficiency of functional genomics. A high-throughput modified thermal asymmetric interlaced (TAIL)-PCR protocol was developed and used to amplify DNA fragments flanking the T-DNA left borders from approximately 100000 transformed lines. A total of 85108 TAIL-PCR products from 52964 T-DNA lines were sequenced and compared with the Arabidopsis genome to determine the positions of T-DNAs in each line. Predicted T-DNA insertion sites, when mapped, showed a bias against predicted coding sequences. Predicted insertion mutations in genes of interest can be identified using Arabidopsis Gene Index name searches or by BLAST (Basic Local Alignment Search Tool) search. Insertions can be confirmed by simple PCR assays on individual lines. Predicted insertions were confirmed in 257 of 340 lines tested (76%). This resource has been named SAIL (Syngenta Arabidopsis Insertion Library) and is available to the scientific community at www.tmri.org.

  12. Use of High Throughput Screening Data in IARC Monograph ...

    Science.gov (United States)

    Purpose: Evaluation of carcinogenic mechanisms serves a critical role in IARC monograph evaluations, and can lead to “upgrade” or “downgrade” of the carcinogenicity conclusions based on human and animal evidence alone. Three recent IARC monograph Working Groups (110, 112, and 113) pioneered analysis of high throughput in vitro screening data from the U.S. Environmental Protection Agency’s ToxCast program in evaluations of carcinogenic mechanisms. Methods: For monograph 110, ToxCast assay data across multiple nuclear receptors were used to test the hypothesis that PFOA acts exclusively through the PPAR family of receptors, with activity profiles compared to several prototypical nuclear receptor-activating compounds. For monographs 112 and 113, ToxCast assays were systematically evaluated and used as an additional data stream in the overall evaluation of the mechanistic evidence. Specifically, ToxCast assays were mapped to 10 “key characteristics of carcinogens” recently identified by an IARC expert group, and chemicals’ bioactivity profiles were evaluated both in absolute terms (number of relevant assays positive for bioactivity) and relative terms (ranking with respect to other compounds evaluated by IARC, using the ToxPi methodology). Results: PFOA activates multiple nuclear receptors in addition to the PPAR family in the ToxCast assays. ToxCast assays offered substantial coverage for 5 of the 10 “key characteristics,” with the greates
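    Although the record is truncated, the ToxPi-style ranking it refers to can be gestured at with a schematic weighted score; the characteristic names, weights, and bioactivity fractions in the Python sketch below are invented for illustration and are not the Working Groups' data.

      import numpy as np

      characteristics = ["genotoxicity", "receptor_activation", "oxidative_stress"]
      weights = np.array([1.0, 1.0, 1.0])                 # equal slice weights (assumption)

      # Fraction of relevant assays positive per characteristic (made-up values).
      profiles = {
          "chemical_A": np.array([0.8, 0.2, 0.5]),
          "chemical_B": np.array([0.1, 0.9, 0.3]),
      }

      scores = {name: float(np.dot(weights, p) / weights.sum()) for name, p in profiles.items()}
      ranking = sorted(scores, key=scores.get, reverse=True)
      print(ranking, scores)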

  13. High-throughput optical screening of cellular mechanotransduction

    Science.gov (United States)

    Compton, Jonathan L.; Luo, Justin C.; Ma, Huan; Botvinick, Elliot; Venugopalan, Vasan

    2014-09-01

    We introduce an optical platform for rapid, high-throughput screening of exogenous molecules that affect cellular mechanotransduction. Our method initiates mechanotransduction in adherent cells using single laser-microbeam generated microcavitation bubbles without requiring flow chambers or microfluidics. These microcavitation bubbles expose adherent cells to a microtsunami, a transient microscale burst of hydrodynamic shear stress, which stimulates cells over areas approaching 1 mm². We demonstrate microtsunami-initiated mechanosignalling in primary human endothelial cells. This observed signalling is consistent with G-protein-coupled receptor stimulation, resulting in Ca2+ release by the endoplasmic reticulum. Moreover, we demonstrate the dose-dependent modulation of microtsunami-induced Ca2+ signalling by introducing a known inhibitor to this pathway. The imaging of Ca2+ signalling and its modulation by exogenous molecules demonstrates the capacity to initiate and assess cellular mechanosignalling in real time. We utilize this capability to screen the effects of a set of small molecules on cellular mechanotransduction in 96-well plates using standard imaging cytometry.

  14. Strategies for high-throughput gene cloning and expression.

    Science.gov (United States)

    Dieckman, L J; Hanly, W C; Collart, E R

    2006-01-01

    High-throughput approaches for gene cloning and expression require the development of new, nonstandard tools for use by molecular biologists and biochemists. We have developed and implemented a series of methods that enable the production of expression constructs in 96-well plate format. A screening process is described that facilitates the identification of bacterial clones expressing soluble protein. Application of the solubility screen then provides a plate map that identifies the location of wells containing clones producing soluble proteins. A series of semi-automated methods can then be applied for validation of solubility and production of freezer stocks for the protein production group. This process provides an 80% success rate for the identification of clones producing soluble protein and results in a significant decrease in the level of effort required for the labor-intensive components of validation and preparation of freezer stocks. This process is customized for large-scale structural genomics programs that rely on the production of large amounts of soluble proteins for crystallization trials.

  15. High-Throughput Printing Process for Flexible Electronics

    Science.gov (United States)

    Hyun, Woo Jin

    Printed electronics is an emerging field for manufacturing electronic devices with low cost and minimal material waste for a variety of applications including displays, distributed sensing, smart packaging, and energy management. Moreover, its compatibility with roll-to-roll production formats and flexible substrates is desirable for continuous, high-throughput production of flexible electronics. Despite the promise, however, the roll-to-roll production of printed electronics is quite challenging due to web movement hindering accurate ink registration and high-fidelity printing. In this talk, I will present a promising strategy for roll-to-roll production using a novel printing process that we term SCALE (Self-aligned Capillarity-Assisted Lithography for Electronics). By utilizing capillarity of liquid inks on nano/micro-structured substrates, the SCALE process facilitates high-resolution and self-aligned patterning of electrically functional inks with greatly improved printing tolerance. I will show the fabrication of key building blocks (e.g. transistor, resistor, capacitor) for electronic circuits using the SCALE process on plastics.

  16. A high-throughput biliverdin assay using infrared fluorescence.

    Science.gov (United States)

    Berlec, Aleš; Štrukelj, Borut

    2014-07-01

    Biliverdin is an intermediate of heme degradation with an established role in veterinary clinical diagnostics of liver-related diseases. The need for chromatographic assays has so far prevented its wider use in diagnostic laboratories. The current report describes a simple, fast, high-throughput, and inexpensive assay, based on the interaction of biliverdin with infrared fluorescent protein (iRFP) that yields functional protein exhibiting infrared fluorescence. The assay is linear in the range of 0-10 µmol/l of biliverdin, has a limit of detection of 0.02 μmol/l, and has a limit of quantification of 0.03 µmol/l. The assay is accurate with relative error less than 0.15, and precise, with coefficient of variation less than 5% in the concentration range of 2-9 µmol/l of biliverdin. More than 95% of biliverdin was recovered from biological samples by simple dimethyl sulfoxide extraction. There was almost no interference by hemin, although bilirubin caused an increase in the biliverdin concentration, probably due to spontaneous oxidation of bilirubin to biliverdin. The newly developed biliverdin assay is appropriate for reliable quantification of large numbers of samples in veterinary medicine.
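    For readers unfamiliar with how such detection and quantification limits are typically derived, the Python sketch below applies the common 3.3*SD/slope and 10*SD/slope conventions to an invented linear calibration; the record does not state which convention the authors used, so this is purely a generic illustration.

      import numpy as np

      conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])          # biliverdin, µmol/L
      signal = np.array([1.0, 21.0, 40.5, 61.2, 80.8, 101.0])   # infrared fluorescence (a.u.)

      slope, intercept = np.polyfit(conc, signal, 1)
      residual_sd = np.std(signal - (slope * conc + intercept), ddof=2)

      lod = 3.3 * residual_sd / slope     # limit of detection
      loq = 10.0 * residual_sd / slope    # limit of quantification
      print(f"LOD = {lod:.3f} µmol/L, LOQ = {loq:.3f} µmol/L")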

  17. High-Throughput Network Communication with NetIO

    CERN Document Server

    Schumacher, Jörn; The ATLAS collaboration; Vandelli, Wainer

    2016-01-01

    HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS target the HPC community exclusively and are not well suited to DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs (and this has been done), but it requires a non-negligible effort and expert knowledge. On the other hand, message services like 0MQ have gained popularity in the HEP community. Such APIs allow building distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on to...

  18. Quantitative High-Throughput Screening Using a Coincidence Reporter Biocircuit.

    Science.gov (United States)

    Schuck, Brittany W; MacArthur, Ryan; Inglese, James

    2017-04-10

    Reporter-biased artifacts, i.e., compounds that interact directly with the reporter enzyme used in a high-throughput screening (HTS) assay rather than with the biological process or pharmacology being interrogated, are now widely recognized to reduce the efficiency and quality of HTS used for chemical probe and therapeutic development. Furthermore, narrow or single-concentration HTS perpetuates false negatives during primary screening campaigns. Titration-based HTS, or quantitative HTS (qHTS), and coincidence reporter technology can be employed to reduce false negatives and false positives, respectively, thereby increasing the quality and efficiency of primary screening efforts, where the number of compounds investigated can range from tens of thousands to millions. The three protocols described here allow for generation of a coincidence reporter (CR) biocircuit to interrogate a biological or pharmacological question of interest, generation of a stable cell line expressing the CR biocircuit, and qHTS using the CR biocircuit to efficiently identify high-quality biologically active small molecules. © 2017 by John Wiley & Sons, Inc.

  19. Functional approach to high-throughput plant growth analysis

    Science.gov (United States)

    2013-01-01

    Method Taking advantage of the current rapid development in imaging systems and computer vision algorithms, we present HPGA, a high-throughput phenotyping platform for plant growth modeling and functional analysis, which produces a better understanding of energy distribution with regard to the balance between growth and defense. HPGA has two components, PAE (Plant Area Estimation) and GMA (Growth Modeling and Analysis). In PAE, by taking the complex leaf overlap problem into consideration, the area of every plant is measured from top-view images in four steps. Given the abundant measurements obtained with PAE, in the second module, GMA, a nonlinear growth model is applied to generate growth curves, followed by functional data analysis. Results Experimental results on the model plant Arabidopsis thaliana show that, compared to an existing approach, HPGA reduces the error rate of measuring plant area by half. The application of HPGA on cfq mutant plants under fluctuating light reveals the correlation between low photosynthetic rates and small plant area (compared to wild type), which raises the hypothesis that knocking out cfq changes the sensitivity of the energy distribution under fluctuating light conditions to repress leaf growth. Availability HPGA is available at http://www.msu.edu/~jinchen/HPGA. PMID:24565437
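    The growth-modeling step can be pictured with a single curve fit. The Python sketch below fits a logistic function to synthetic plant-area measurements with SciPy; the logistic form, parameters, and data are assumptions for illustration and are not claimed to be HPGA's actual model.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, K, r, t0):
          """Logistic growth: carrying capacity K, rate r, inflection time t0."""
          return K / (1.0 + np.exp(-r * (t - t0)))

      days = np.arange(0, 21, dtype=float)
      rng = np.random.default_rng(1)
      area = logistic(days, 25.0, 0.45, 10.0) + rng.normal(0.0, 0.5, days.size)  # cm^2, synthetic

      (K, r, t0), _ = curve_fit(logistic, days, area, p0=[20.0, 0.3, 9.0])
      print(f"fitted: K={K:.1f} cm^2, r={r:.2f} per day, t0={t0:.1f} days")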

  20. High-throughput literature mining to support read-across ...

    Science.gov (United States)

    Building scientific confidence in the development and evaluation of read-across remains an ongoing challenge. Approaches include establishing systematic frameworks to identify sources of uncertainty and ways to address them. One source of uncertainty is related to characterizing biological similarity. Many research efforts are underway such as structuring mechanistic data in adverse outcome pathways and investigating the utility of high throughput (HT)/high content (HC) screening data. A largely untapped resource for read-across to date is the biomedical literature. This information has the potential to support read-across by facilitating the identification of valid source analogues with similar biological and toxicological profiles as well as providing the mechanistic understanding for any prediction made. A key challenge in using biomedical literature is to convert and translate its unstructured form into a computable format that can be linked to chemical structure. We developed a novel text-mining strategy to represent literature information for read across. Keywords were used to organize literature into toxicity signatures at the chemical level. These signatures were integrated with HT in vitro data and curated chemical structures. A rule-based algorithm assessed the strength of the literature relationship, providing a mechanism to rank and visualize the signature as literature ToxPIs (LitToxPIs). LitToxPIs were developed for over 6,000 chemicals for a varie

  1. Field high-throughput phenotyping: the new crop breeding frontier.

    Science.gov (United States)

    Araus, José Luis; Cairns, Jill E

    2014-01-01

    Constraints in field phenotyping capability limit our ability to dissect the genetics of quantitative traits, particularly those related to yield and stress tolerance (e.g., yield potential as well as increased drought, heat tolerance, and nutrient efficiency, etc.). The development of effective field-based high-throughput phenotyping platforms (HTPPs) remains a bottleneck for future breeding advances. However, progress in sensors, aeronautics, and high-performance computing are paving the way. Here, we review recent advances in field HTPPs, which should combine at an affordable cost, high capacity for data recording, scoring and processing, and non-invasive remote sensing methods, together with automated environmental data collection. Laboratory analyses of key plant parts may complement direct phenotyping under field conditions. Improvements in user-friendly data management together with a more powerful interpretation of results should increase the use of field HTPPs, therefore increasing the efficiency of crop genetic improvement to meet the needs of future generations. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Mouse eye enucleation for remote high-throughput phenotyping.

    Science.gov (United States)

    Mahajan, Vinit B; Skeie, Jessica M; Assefnia, Amir H; Mahajan, Maryann; Tsang, Stephen H

    2011-11-19

    The mouse eye is an important genetic model for the translational study of human ophthalmic disease. Blinding diseases in humans, such as macular degeneration, photoreceptor degeneration, cataract, glaucoma, retinoblastoma, and diabetic retinopathy have been recapitulated in transgenic mice.(1-5) Most transgenic and knockout mice have been generated by laboratories to study non-ophthalmic diseases, but genetic conservation between organ systems suggests that many of the same genes may also play a role in ocular development and disease. Hence, these mice represent an important resource for discovering new genotype-phenotype correlations in the eye. Because these mice are scattered across the globe, it is difficult to acquire, maintain, and phenotype them in an efficient, cost-effective manner. Thus, most high-throughput ophthalmic phenotyping screens are restricted to a few locations that require on-site, ophthalmic expertise to examine eyes in live mice. (6-9) An alternative approach developed by our laboratory is a method for remote tissue-acquisition that can be used in large or small-scale surveys of transgenic mouse eyes. Standardized procedures for video-based surgical skill transfer, tissue fixation, and shipping allow any lab to collect whole eyes from mutant animals and send them for molecular and morphological phenotyping. In this video article, we present techniques to enucleate and transfer both unfixed and perfusion fixed mouse eyes for remote phenotyping analyses.

  3. Large scale library generation for high throughput sequencing.

    Directory of Open Access Journals (Sweden)

    Erik Borgström

    Full Text Available BACKGROUND: Large efforts have recently been made to automate the sample preparation protocols for massively parallel sequencing in order to match the increasing instrument throughput. Still, the size selection through agarose gel electrophoresis separation is a labor-intensive bottleneck of these protocols. METHODOLOGY/PRINCIPAL FINDINGS: In this study a method for automatic library preparation and size selection on a liquid handling robot is presented. The method utilizes selective precipitation of certain sizes of DNA molecules on to paramagnetic beads for cleanup and selection after standard enzymatic reactions. CONCLUSIONS/SIGNIFICANCE: The method is used to generate libraries for de novo and re-sequencing on the Illumina HiSeq 2000 instrument with a throughput of 12 samples per instrument in approximately 4 hours. The resulting output data show quality scores and pass filter rates comparable to manually prepared samples. The sample size distribution can be adjusted for each application, and are suitable for all high throughput DNA processing protocols seeking to control size intervals.

  4. High Throughput Multispectral Image Processing with Applications in Food Science.

    Science.gov (United States)

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Recently, machine vision has been gaining attention in food science as well as in the food industry for food quality assessment and monitoring. Within the framework of the implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only for estimation and even prediction of food quality but also for detection of adulteration. Toward these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility, low-cost information extraction and faster quality assessment, without human intervention. The outcome of image processing is then propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
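    To give a concrete sense of the unsupervised segmentation step, the Python sketch below clusters the pixels of a synthetic band stack with a two-component Gaussian mixture; the band count, image, and component number are assumptions, and the published band-selection scheme is not reproduced.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      bands, height, width = 5, 64, 64
      image = rng.normal(size=(bands, height, width))
      image[:, 16:48, 16:48] += 2.0                    # brighter square standing in for the sample

      pixels = image.reshape(bands, -1).T              # shape: (n_pixels, n_bands)
      labels = GaussianMixture(n_components=2, random_state=0).fit_predict(pixels)
      mask = labels.reshape(height, width)             # per-pixel component assignment
      print(int(mask.sum()), "pixels assigned to component 1")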

  5. High Throughput Heuristics for Prioritizing Human Exposure to ...

    Science.gov (United States)

    The risk posed to human health by any of the thousands of untested anthropogenic chemicals in our environment is a function of both the potential hazard presented by the chemical, and the possibility of being exposed. Without the capacity to make quantitative, albeit uncertain, forecasts of exposure, the putative risk of adverse health effect from a chemical cannot be evaluated. We used Bayesian methodology to infer ranges of exposure intakes that are consistent with biomarkers of chemical exposures identified in urine samples from the U.S. population by the National Health and Nutrition Examination Survey (NHANES). We perform linear regression on inferred exposure for demographic subsets of NHANES demarked by age, gender, and weight using high throughput chemical descriptors gleaned from databases and chemical structure-based calculators. We find that five of these descriptors are capable of explaining roughly 50% of the variability across chemicals for all the demographic groups examined, including children aged 6-11. For the thousands of chemicals with no other source of information, this approach allows rapid and efficient prediction of average exposure intake of environmental chemicals. The methods described by this manuscript provide a highly improved methodology for HTS of human exposure to environmental chemicals. The manuscript includes a ranking of 7785 environmental chemicals with respect to potential human exposure, including most of the Tox21 in vit
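    The descriptor-regression step can be sketched in a few lines of Python; the synthetic descriptors, coefficients, and noise level below are chosen only so that roughly half of the variance is explained, mirroring the figure quoted above, and have no connection to the actual NHANES analysis.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      n_chemicals, n_descriptors = 1000, 5
      descriptors = rng.normal(size=(n_chemicals, n_descriptors))   # e.g. use/structure indicators
      coefficients = np.array([0.8, -0.5, 0.3, 0.2, 0.1])
      log_exposure = descriptors @ coefficients + rng.normal(0.0, 1.0, n_chemicals)

      model = LinearRegression().fit(descriptors, log_exposure)
      print(f"R^2 = {model.score(descriptors, log_exposure):.2f}")   # about 0.5 with these settings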

  6. High-throughput membrane surface modification to control NOM fouling.

    Science.gov (United States)

    Zhou, Mingyan; Liu, Hongwei; Kilduff, James E; Langer, Robert; Anderson, Daniel G; Belfort, Georges

    2009-05-15

    A novel method for synthesis and screening of fouling-resistant membrane surfaces was developed by combining a high-throughput platform (HTP) approach together with photoinduced graft polymerization (PGP) for facile modification of commercial poly(aryl sulfone) membranes. This method is an inexpensive, fast, simple, reproducible, and scalable approach to identify fouling-resistant surfaces appropriate for a specific feed. In this research, natural organic matter (NOM)-resistant surfaces were synthesized and identified from a library of 66 monomers. Surfaces were prepared via graft polymerization onto poly(ether sulfone) (PES) membranes and were evaluated using an assay involving NOM adsorption, followed by pressure-driven filtration. In this work new and previously tested low-fouling surfaces for NOM are identified, and their ability to mitigate NOM and protein (bovine serum albumin) fouling is compared. The best-performing monomers were the zwitterion [2-(methacryloyloxy)ethyl]dimethyl-(3-sulfopropyl)ammonium hydroxide, and diacetone acrylamide, a neutral monomer containing an amide group. Other excellent surfaces were synthesized from amides, amines, basic monomers, and long-chain poly(ethylene) glycols. Bench-scale studies conducted for selected monomers verified the scalability of HTP-PGP results. The results and the synthesis and screening method presented here offer new opportunities for choosing new membrane chemistries that minimize NOM fouling.

  7. High Throughput, Continuous, Mass Production of Photovoltaic Modules

    Energy Technology Data Exchange (ETDEWEB)

    Kurt Barth

    2008-02-06

    AVA Solar has developed a very low cost solar photovoltaic (PV) manufacturing process and has demonstrated the significant economic and commercial potential of this technology. This I & I Category 3 project provided significant assistance toward accomplishing these milestones. The original goals of this project were to design, construct and test a production prototype system, fabricate PV modules and test the module performance. The original module manufacturing costs in the proposal were estimated at $2/Watt. The objectives of this project have been exceeded. An advanced processing line was designed, fabricated and installed. Using this automated, high throughput system, high efficiency devices and fully encapsulated modules were manufactured. AVA Solar has obtained two rounds of private equity funding, expanded to 50 people and initiated the development of a large-scale factory for 100+ megawatts of annual production. Modules will be manufactured at an industry-leading cost, which will enable AVA Solar's modules to produce power that is cost-competitive with traditional energy resources. With low manufacturing costs and the ability to scale manufacturing, AVA Solar has been contacted by some of the largest customers in the PV industry to negotiate long-term supply contracts. The current market for PV has continued to grow at 40%+ per year for nearly a decade and is projected to reach $40-$60 Billion by 2012. Currently, a crystalline silicon raw material supply shortage is limiting growth and raising costs. Our process does not use silicon, eliminating these limitations.

  8. Efficient Management of High-Throughput Screening Libraries with SAVANAH.

    Science.gov (United States)

    List, Markus; Elnegaard, Marlene Pedersen; Schmidt, Steffen; Christiansen, Helle; Tan, Qihua; Mollenhauer, Jan; Baumbach, Jan

    2017-02-01

    High-throughput screening (HTS) has become an indispensable tool for the pharmaceutical industry and for biomedical research. A high degree of automation allows for experiments in the range of a few hundred up to several hundred thousand to be performed in close succession. The basis for such screens are molecular libraries, that is, microtiter plates with solubilized reagents such as siRNAs, shRNAs, miRNA inhibitors or mimics, and sgRNAs, or small compounds, that is, drugs. These reagents are typically condensed to provide enough material for covering several screens. Library plates thus need to be serially diluted before they can be used as assay plates. This process, however, leads to an explosion in the number of plates and samples to be tracked. Here, we present SAVANAH, the first tool to effectively manage molecular screening libraries across dilution series. It conveniently links sample information from the library to experimental results from the assay plates. All results can be exported to the R statistical environment or piped into HiTSeekR (http://hitseekr.compbio.sdu.dk) for comprehensive follow-up analyses. In summary, SAVANAH supports the HTS community in managing and analyzing HTS experiments with an emphasis on serially diluted molecular libraries.
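
    A minimal sketch of the serial-dilution bookkeeping that such a tool must perform is shown below; it is not SAVANAH code, and the function name and units are illustrative only.

```python
# Minimal sketch (not SAVANAH's code): track reagent concentrations from a condensed
# library plate through a serial dilution into successive assay plates.
def dilution_series(stock_concentration_uM, dilution_factor, n_steps):
    """Return the concentration in each successive assay plate after serial dilution."""
    concentrations = []
    c = stock_concentration_uM
    for _ in range(n_steps):
        c /= dilution_factor
        concentrations.append(c)
    return concentrations

# Example: a 10 mM (10000 uM) library plate diluted 1:10 three times.
print(dilution_series(10000.0, 10.0, 3))   # [1000.0, 100.0, 10.0]
```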

  9. Assessing the utility and limitations of high throughput virtual screening

    Directory of Open Access Journals (Sweden)

    Paul Daniel Phillips

    2016-05-01

    Full Text Available Due to low cost, speed, and unmatched ability to explore large numbers of compounds, high throughput virtual screening and molecular docking engines have become widely utilized by computational scientists. It is generally accepted that docking engines, such as AutoDock, produce reliable qualitative results for ligand-macromolecular receptor binding, and molecular docking results are commonly reported in the literature in the absence of complementary wet lab experimental data. In this investigation, three variants of the sixteen-amino-acid peptide α-conotoxin MII were docked to a homology model of the α3β2-nicotinic acetylcholine receptor. DockoMatic version 2.0 was used to perform a virtual screen of each peptide ligand against the receptor for ten docking trials consisting of 100 AutoDock cycles per trial. The results were analyzed for both variation in the calculated binding energy obtained from AutoDock and the orientation of the bound peptide within the receptor. The results show that, while no clear correlation exists between consistent ligand binding pose and the calculated binding energy, AutoDock is able to determine a consistent positioning of the bound peptide in the majority of trials when at least ten trials were evaluated.
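
    Assuming the per-trial binding energies have already been extracted from the AutoDock output (parsing is not shown), a small sketch of the kind of trial-level aggregation described above might look like this; it is not DockoMatic code, and the example energies are made up.

```python
# Minimal sketch (assumes binding energies were already parsed out of the AutoDock output):
# summarise repeated docking trials for one peptide ligand.
import statistics

def summarise_trials(trial_energies):
    """trial_energies: list of lists, one inner list of binding energies (kcal/mol) per trial."""
    best_per_trial = [min(trial) for trial in trial_energies]   # most negative = best pose
    return {
        "mean_best_energy": statistics.mean(best_per_trial),
        "stdev_best_energy": statistics.stdev(best_per_trial),
        "n_trials": len(best_per_trial),
    }

# Toy usage with made-up energies for three of the ten trials.
print(summarise_trials([[-8.1, -7.4, -6.9], [-7.9, -7.8, -7.0], [-8.3, -6.5, -6.4]]))
```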

  10. Savant: genome browser for high-throughput sequencing data.

    Science.gov (United States)

    Fiume, Marc; Williams, Vanessa; Brook, Andrew; Brudno, Michael

    2010-08-15

    The advent of high-throughput sequencing (HTS) technologies has made it affordable to sequence many individuals' genomes. Simultaneously the computational analysis of the large volumes of data generated by the new sequencing machines remains a challenge. While a plethora of tools are available to map the resulting reads to a reference genome, and to conduct primary analysis of the mappings, it is often necessary to visually examine the results and underlying data to confirm predictions and understand the functional effects, especially in the context of other datasets. We introduce Savant, the Sequence Annotation, Visualization and ANalysis Tool, a desktop visualization and analysis browser for genomic data. Savant was developed for visualizing and analyzing HTS data, with special care taken to enable dynamic visualization in the presence of gigabases of genomic reads and references the size of the human genome. Savant supports the visualization of genome-based sequence, point, interval and continuous datasets, and multiple visualization modes that enable easy identification of genomic variants (including single nucleotide polymorphisms, structural and copy number variants), and functional genomic information (e.g. peaks in ChIP-seq data) in the context of genomic annotations. Savant is freely available at http://compbio.cs.toronto.edu/savant.

  11. High-Throughput Identification of Antimicrobial Peptides from Amphibious Mudskippers.

    Science.gov (United States)

    Yi, Yunhai; You, Xinxin; Bian, Chao; Chen, Shixi; Lv, Zhao; Qiu, Limei; Shi, Qiong

    2017-11-22

    Widespread existence of antimicrobial peptides (AMPs) has been reported in various animals with comprehensive biological activities, which is consistent with the important roles of AMPs as the first line of the host defense system. However, no big-data-based analysis on AMPs from any fish species is available. In this study, we identified 507 AMP transcripts on the basis of our previously reported genomes and transcriptomes of two representative amphibious mudskippers, Boleophthalmus pectinirostris (BP) and Periophthalmus magnuspinnatus (PM). The former is predominantly aquatic with less time out of water, while the latter is primarily terrestrial with extended periods of time on land. Within these identified AMPs, 449 sequences are novel; 15 were reported in BP previously; 48 are identically overlapped between BP and PM; 94 were validated by mass spectrometry. Moreover, most AMPs presented differential tissue transcription patterns in the two mudskippers. Interestingly, we discovered two AMPs, hemoglobin β1 and amylin, with strong inhibitory activity against Micrococcus luteus. In conclusion, our high-throughput screening strategy based on genomic and transcriptomic data opens an efficient pathway to discover new antimicrobial peptides for ongoing development of marine drugs.

  12. High-Throughput Identification of Antimicrobial Peptides from Amphibious Mudskippers

    Directory of Open Access Journals (Sweden)

    Yunhai Yi

    2017-11-01

    Full Text Available Widespread existence of antimicrobial peptides (AMPs) has been reported in various animals with comprehensive biological activities, which is consistent with the important roles of AMPs as the first line of the host defense system. However, no big-data-based analysis on AMPs from any fish species is available. In this study, we identified 507 AMP transcripts on the basis of our previously reported genomes and transcriptomes of two representative amphibious mudskippers, Boleophthalmus pectinirostris (BP) and Periophthalmus magnuspinnatus (PM). The former is predominantly aquatic with less time out of water, while the latter is primarily terrestrial with extended periods of time on land. Within these identified AMPs, 449 sequences are novel; 15 were reported in BP previously; 48 are identically overlapped between BP and PM; 94 were validated by mass spectrometry. Moreover, most AMPs presented differential tissue transcription patterns in the two mudskippers. Interestingly, we discovered two AMPs, hemoglobin β1 and amylin, with strong inhibitory activity against Micrococcus luteus. In conclusion, our high-throughput screening strategy based on genomic and transcriptomic data opens an efficient pathway to discover new antimicrobial peptides for ongoing development of marine drugs.

  13. High throughput screening for anti-Trypanosoma cruzi drug discovery.

    Directory of Open Access Journals (Sweden)

    Julio Alonso-Padilla

    2014-12-01

    Full Text Available The discovery of new therapeutic options against Trypanosoma cruzi, the causative agent of Chagas disease, stands as a fundamental need. Currently, there are only two drugs available to treat this neglected disease, which represents a major public health problem in Latin America. Both available therapies, benznidazole and nifurtimox, have significant toxic side effects, and their efficacy against the life-threatening symptomatic chronic stage of the disease is variable. Thus, there is an urgent need for new, improved anti-T. cruzi drugs. With the objective to reliably accelerate the drug discovery process against Chagas disease, several advances have been made in the last few years. The availability of engineered reporter-gene-expressing parasites triggered the development of phenotypic in vitro assays suitable for high throughput screening (HTS), as well as the establishment of new in vivo protocols that allow faster experimental outcomes. Recently, automated high content microscopy approaches have also been used to identify new parasitic inhibitors. These in vitro and in vivo early drug discovery approaches, which will hopefully contribute to bringing better anti-T. cruzi drug entities in the near future, are reviewed here.

  14. Nanoliter high-throughput PCR for DNA and RNA profiling.

    Science.gov (United States)

    Brenan, Colin J H; Roberts, Douglas; Hurley, James

    2009-01-01

    The increasing emphasis in life science research on utilization of genetic and genomic information underlies the need for high-throughput technologies capable of analyzing the expression of multiple genes or the presence of informative single nucleotide polymorphisms (SNPs) in large-scale, population-based applications. Human disease research, disease diagnosis, personalized therapeutics, environmental monitoring, blood testing, and identification of genetic traits impacting agricultural practices, both in terms of food quality and production efficiency, are a few areas where such systems are in demand. This has stimulated the need for PCR technologies that preserve the intrinsic analytical benefits of PCR yet enable higher throughput without increasing the time to answer, labor and reagent expenses, or workflow complexity. An example of such a system based on a high-density array of nanoliter PCR assays is described here. Functionally equivalent to a microtiter plate, the nanoplate system makes possible up to 3,072 simultaneous end-point or real-time PCR measurements in a device the size of a standard microscope slide. Methods for SNP genotyping with end-point TaqMan PCR assays and quantitative measurement of gene expression with SYBR Green I real-time PCR are outlined, and illustrative data showing system performance are provided.

  15. The JCSG high-throughput structural biology pipeline.

    Science.gov (United States)

    Elsliger, Marc André; Deacon, Ashley M; Godzik, Adam; Lesley, Scott A; Wooley, John; Wüthrich, Kurt; Wilson, Ian A

    2010-10-01

    The Joint Center for Structural Genomics high-throughput structural biology pipeline has delivered more than 1000 structures to the community over the past ten years. The JCSG has made a significant contribution to the overall goal of the NIH Protein Structure Initiative (PSI) of expanding structural coverage of the protein universe, as well as making substantial inroads into structural coverage of an entire organism. Targets are processed through an extensive combination of bioinformatics and biophysical analyses to efficiently characterize and optimize each target prior to selection for structure determination. The pipeline uses parallel processing methods at almost every step in the process and can adapt to a wide range of protein targets from bacterial to human. The construction, expansion and optimization of the JCSG gene-to-structure pipeline over the years have resulted in many technological and methodological advances and developments. The vast number of targets and the enormous amounts of associated data processed through the multiple stages of the experimental pipeline required the development of a variety of valuable resources that, wherever feasible, have been converted to free-access web-based tools and applications.

  16. Achieving High Throughput for Data Transfer over ATM Networks

    Science.gov (United States)

    Johnson, Marjory J.; Townsend, Jeffrey N.

    1996-01-01

    File-transfer rates for ftp are often reported to be relatively slow, compared to the raw bandwidth available in emerging gigabit networks. While a major bottleneck is disk I/O, protocol issues impact performance as well. Ftp was developed and optimized for use over the TCP/IP protocol stack of the Internet. However, TCP has been shown to run inefficiently over ATM. In an effort to maximize network throughput, data-transfer protocols can be developed to run over UDP or directly over IP, rather than over TCP. If error-free transmission is required, techniques for achieving reliable transmission can be included as part of the transfer protocol. However, selected image-processing applications can tolerate a low level of errors in images that are transmitted over a network. In this paper we report on experimental work to develop a high-throughput protocol for unreliable data transfer over ATM networks. We attempt to maximize throughput by keeping the communications pipe full, but still keep packet loss under five percent. We use the Bay Area Gigabit Network Testbed as our experimental platform.
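
    A minimal sketch of the idea of an unreliable, sequence-numbered UDP transfer with loss accounting is given below; it is not the protocol developed in the paper, and the host, port and packet sizes are illustrative only.

```python
# Minimal sketch (not the paper's protocol): send sequence-numbered UDP datagrams so the
# receiver can estimate packet loss; host, port and sizes are illustrative assumptions.
import socket
import struct

HOST, PORT = "127.0.0.1", 9999        # hypothetical endpoint
PAYLOAD = b"x" * 1024                 # 1 KiB of application data per datagram

def send(n_packets=10000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(n_packets):
        # 4-byte big-endian sequence number followed by the payload
        sock.sendto(struct.pack("!I", seq) + PAYLOAD, (HOST, PORT))
    sock.close()

def receive(expected=10000, timeout_s=5.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    sock.settimeout(timeout_s)
    seen = set()
    try:
        while len(seen) < expected:
            data, _ = sock.recvfrom(2048)
            seen.add(struct.unpack("!I", data[:4])[0])
    except socket.timeout:
        pass
    loss = 1.0 - len(seen) / expected
    print(f"packet loss: {loss:.2%}")  # the paper's goal is to keep this under five percent
    sock.close()
```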

  17. Validation of high throughput sequencing and microbial forensics applications.

    Science.gov (United States)

    Budowle, Bruce; Connell, Nancy D; Bielecka-Oder, Anna; Colwell, Rita R; Corbett, Cindi R; Fletcher, Jacqueline; Forsman, Mats; Kadavy, Dana R; Markotic, Alemka; Morse, Stephen A; Murch, Randall S; Sajantila, Antti; Schmedes, Sarah E; Ternus, Krista L; Turner, Stephen D; Minot, Samuel

    2014-01-01

    High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results allow analysis and interpretation, significantly influencing the course and/or focus of an investigation, and can impact the response of the government to an attack having individual, political, economic or military consequences. Interpretation of the results of microbial forensic analyses relies on understanding the performance and limitations of HTS methods, including analytical processes, assays and data interpretation. The utility of HTS must be defined carefully within established operating conditions and tolerances. Validation is essential in the development and implementation of microbial forensics methods used for formulating investigative leads and supporting attribution. HTS strategies vary, requiring guiding principles for HTS system validation. Three initial aspects of HTS, irrespective of chemistry, instrumentation or software, are: 1) sample preparation, 2) sequencing, and 3) data analysis. Criteria that should be considered for HTS validation for microbial forensics are presented here. Validation should be defined in terms of the specific application, and the criteria described here comprise a foundation for investigators to establish, validate and implement HTS as a tool in microbial forensics, enhancing public safety and national security.

  18. Advances in High Throughput Screening of Biomass Recalcitrance (Poster)

    Energy Technology Data Exchange (ETDEWEB)

    Turner, G. B.; Decker, S. R.; Tucker, M. P.; Law, C.; Doeppke, C.; Sykes, R. W.; Davis, M. F.; Ziebell, A.

    2012-06-01

    This was a poster displayed at the Symposium. Advances over previously reported high-throughput biomass recalcitrance screening methods have resulted in improved conversion and replicate precision. Changes in plate reactor metallurgy, improved preparation of control biomass, species-specific pretreatment conditions, and adjusted enzymatic hydrolysis parameters have reduced overall coefficients of variation to an average of 6% for sample replicates. These method changes have reduced plate-to-plate variation in control biomass recalcitrance and improved confidence in sugar release differences between samples. With smaller errors, plant researchers can have a higher degree of assurance that more low-recalcitrance candidates can be identified. Significant changes in the plate reactor, control biomass preparation, pretreatment conditions and enzyme have significantly reduced sample and control replicate variability. Reactor plate metallurgy significantly impacts sugar release: aluminum leaching into the reaction during pretreatment degrades sugars and inhibits enzyme activity. Removal of starch and extractives significantly decreases control biomass variability. New enzyme formulations give more consistent and higher conversion levels; however, they required re-optimization for switchgrass. Pretreatment time and temperature (severity) should be adjusted to specific biomass types, i.e. woody vs. herbaceous. Desalting of enzyme preparations to remove low-molecular-weight stabilizers improved conversion levels, likely due to the impact of water activity on enzyme structure and substrate interactions, but was not pursued here because of the need to continually desalt and validate precise enzyme concentration and activity.

  19. Generation of RNAi Libraries for High-Throughput Screens

    Directory of Open Access Journals (Sweden)

    Julie Clark

    2006-01-01

    Full Text Available The completion of the genome sequencing for several organisms has created a great demand for genomic tools that can systematically analyze the growing wealth of data. In contrast to the classical reverse genetics approach of creating specific knockout cell lines or animals that is time-consuming and expensive, RNA-mediated interference (RNAi) has emerged as a fast, simple, and cost-effective technique for gene knockdown in large scale. Since its discovery as a gene silencing response to double-stranded RNA (dsRNA) with homology to endogenous genes in Caenorhabditis elegans (C. elegans), RNAi technology has been adapted to various high-throughput screens (HTS) for genome-wide loss-of-function (LOF) analysis. Biochemical insights into the endogenous mechanism of RNAi have led to advances in RNAi methodology including RNAi molecule synthesis, delivery, and sequence design. In this article, we will briefly review these various RNAi library designs and discuss the benefits and drawbacks of each library strategy.

  20. Emerging metrology for high-throughput nanomaterial genotoxicology.

    Science.gov (United States)

    Nelson, Bryant C; Wright, Christa W; Ibuki, Yuko; Moreno-Villanueva, Maria; Karlsson, Hanna L; Hendriks, Giel; Sims, Christopher M; Singh, Neenu; Doak, Shareen H

    2017-01-01

    The rapid development of the engineered nanomaterial (ENM) manufacturing industry has accelerated the incorporation of ENMs into a wide variety of consumer products across the globe. Unintentionally or not, some of these ENMs may be introduced into the environment or come into contact with humans or other organisms resulting in unexpected biological effects. It is thus prudent to have rapid and robust analytical metrology in place that can be used to critically assess and/or predict the cytotoxicity, as well as the potential genotoxicity of these ENMs. Many of the traditional genotoxicity test methods [e.g. unscheduled DNA synthesis assay, bacterial reverse mutation (Ames) test, etc.] for determining the DNA damaging potential of chemical and biological compounds are not suitable for the evaluation of ENMs, due to a variety of methodological issues ranging from potential assay interferences to problems centered on low sample throughput. Recently, a number of sensitive, high-throughput genotoxicity assays/platforms (CometChip assay, flow cytometry/micronucleus assay, flow cytometry/γ-H2AX assay, automated 'Fluorimetric Detection of Alkaline DNA Unwinding' (FADU) assay, ToxTracker reporter assay) have been developed, based on substantial modifications and enhancements of traditional genotoxicity assays. These new assays have been used for the rapid measurement of DNA damage (strand breaks), chromosomal damage (micronuclei) and for detecting upregulated DNA damage signalling pathways resulting from ENM exposures. In this critical review, we describe and discuss the fundamental measurement principles and measurement endpoints of these new assays, as well as the modes of operation, analytical metrics and potential interferences, as applicable to ENM exposures. An unbiased discussion of the major technical advantages and limitations of each assay for evaluating and predicting the genotoxic potential of ENMs is also provided.

  1. High-throughput flow cytometry data normalization for clinical trials.

    Science.gov (United States)

    Finak, Greg; Jiang, Wenxin; Krouse, Kevin; Wei, Chungwen; Sanz, Ignacio; Phippard, Deborah; Asare, Adam; De Rosa, Stephen C; Self, Steve; Gottardo, Raphael

    2014-03-01

    Flow cytometry studies in clinical trials generate very large datasets that are usually highly standardized, focusing on endpoints that are well defined a priori. Staining variability of individual markers is not uncommon and complicates manual gating, requiring the analyst to adapt gates for each sample, which is unwieldy for large datasets. It can lead to unreliable measurements, especially if a template-gating approach is used without further correction to the gates. In this article, a computational framework is presented for normalizing the fluorescence intensity of multiple markers in specific cell populations across samples that is suitable for high-throughput processing of large clinical trial datasets. Previous approaches to normalization have been global and applied to all cells or data with debris removed. They provided no mechanism to handle specific cell subsets. This approach integrates tightly with the gating process so that normalization is performed during gating and is local to the specific cell subsets exhibiting variability. This improves peak alignment and the performance of the algorithm. The performance of this algorithm is demonstrated on two clinical trial datasets from the HIV Vaccine Trials Network (HVTN) and the Immune Tolerance Network (ITN). In the ITN data set we show that local normalization combined with template gating can account for sample-to-sample variability as effectively as manual gating. In the HVTN dataset, it is shown that local normalization mitigates false-positive vaccine response calls in an intracellular cytokine staining assay. In both datasets, local normalization performs better than global normalization. The normalization framework allows the use of template gates even in the presence of sample-to-sample staining variability, mitigates the subjectivity and bias of manual gating, and decreases the time necessary to analyze large datasets. © 2013 International Society for Advancement of Cytometry.
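
    The local, subset-specific peak alignment described above can be illustrated with a simplified sketch; this is not the published implementation, and the histogram-based mode finding and shift-only correction are assumptions.

```python
# Minimal sketch (not the published implementation): align the main fluorescence peak of a
# gated cell subset across samples by shifting each sample toward a reference peak location.
import numpy as np

def peak_location(values, bins=256):
    """Estimate the mode of a 1-D fluorescence distribution from a histogram."""
    counts, edges = np.histogram(values, bins=bins)
    i = np.argmax(counts)
    return 0.5 * (edges[i] + edges[i + 1])

def normalize_samples(samples):
    """samples: dict of sample name -> 1-D array of (transformed) marker intensities
    for one gated cell subset. Returns shifted copies aligned to the first sample."""
    names = list(samples)
    reference = peak_location(samples[names[0]])
    return {name: vals + (reference - peak_location(vals)) for name, vals in samples.items()}

# Toy usage: two "samples" whose peaks are offset by ~0.5 units before alignment.
rng = np.random.default_rng(0)
demo = {"s1": rng.normal(2.0, 0.3, 5000), "s2": rng.normal(2.5, 0.3, 5000)}
aligned = normalize_samples(demo)
print(round(peak_location(aligned["s1"]), 2), round(peak_location(aligned["s2"]), 2))
```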

  2. Mining Chemical Activity Status from High-Throughput Screening Assays.

    Science.gov (United States)

    Soufan, Othman; Ba-alawi, Wail; Afeef, Moataz; Essack, Magbubah; Rodionov, Valentin; Kalnis, Panos; Bajic, Vladimir B

    2015-01-01

    High-throughput screening (HTS) experiments provide a valuable resource that reports biological activity of numerous chemical compounds relative to their molecular targets. Building computational models that accurately predict such activity status (active vs. inactive) in specific assays is a challenging task given the large volume of data and frequently small proportion of active compounds relative to the inactive ones. We developed a method, DRAMOTE, to predict activity status of chemical compounds in HTP activity assays. For a class of HTP assays, our method achieves considerably better results than the current state-of-the-art solutions. We achieved this by modification of a minority oversampling technique. To demonstrate that DRAMOTE is performing better than the other methods, we performed a comprehensive comparison analysis with several other methods and evaluated them on data from 11 PubChem assays through 1,350 experiments that involved approximately 500,000 interactions between chemicals and their target proteins. As an example of potential use, we applied DRAMOTE to develop robust models for predicting FDA approved drugs that have high probability to interact with the thyroid stimulating hormone receptor (TSHR) in humans. Our findings are further partially and indirectly supported by 3D docking results and literature information. The results based on approximately 500,000 interactions suggest that DRAMOTE has performed the best and that it can be used for developing robust virtual screening models. The datasets and implementation of all solutions are available as a MATLAB toolbox online at www.cbrc.kaust.edu.sa/dramote and can be found on Figshare.
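
    DRAMOTE's modified oversampling technique is not reproduced here; the sketch below shows plain random oversampling of the minority (active) class before training a classifier, as a simplified stand-in for the class-imbalance handling described above.

```python
# Minimal sketch (plain random oversampling, not DRAMOTE's modified technique): balance
# active vs. inactive compounds before training a classifier on HTS assay data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def oversample_minority(X, y, random_state=0):
    """Duplicate minority-class rows at random until both classes have equal counts."""
    rng = np.random.default_rng(random_state)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    minority_idx = np.flatnonzero(y == minority)
    extra = rng.choice(minority_idx, size=deficit, replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx]

# Toy usage: ~95% inactive (0), ~5% active (1) compounds described by 8 random features.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))
y = (rng.random(1000) < 0.05).astype(int)
Xb, yb = oversample_minority(X, y)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xb, yb)
print("balanced class counts:", np.bincount(yb))
```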

  3. Compound Cytotoxicity Profiling Using Quantitative High-Throughput Screening

    Science.gov (United States)

    Xia, Menghang; Huang, Ruili; Witt, Kristine L.; Southall, Noel; Fostel, Jennifer; Cho, Ming-Hsuang; Jadhav, Ajit; Smith, Cynthia S.; Inglese, James; Portier, Christopher J.; Tice, Raymond R.; Austin, Christopher P.

    2008-01-01

    Background The propensity of compounds to produce adverse health effects in humans is generally evaluated using animal-based test methods. Such methods can be relatively expensive, low-throughput, and associated with pain suffered by the treated animals. In addition, differences in species biology may confound extrapolation to human health effects. Objective The National Toxicology Program and the National Institutes of Health Chemical Genomics Center are collaborating to identify a battery of cell-based screens to prioritize compounds for further toxicologic evaluation. Methods A collection of 1,408 compounds previously tested in one or more traditional toxicologic assays were profiled for cytotoxicity using quantitative high-throughput screening (qHTS) in 13 human and rodent cell types derived from six common targets of xenobiotic toxicity (liver, blood, kidney, nerve, lung, skin). Selected cytotoxicants were further tested to define response kinetics. Results qHTS of these compounds produced robust and reproducible results, which allowed cross-compound, cross-cell type, and cross-species comparisons. Some compounds were cytotoxic to all cell types at similar concentrations, whereas others exhibited species- or cell type–specific cytotoxicity. Closely related cell types and analogous cell types in human and rodent frequently showed different patterns of cytotoxicity. Some compounds inducing similar levels of cytotoxicity showed distinct time dependence in kinetic studies, consistent with known mechanisms of toxicity. Conclusions The generation of high-quality cytotoxicity data on this large library of known compounds using qHTS demonstrates the potential of this methodology to profile a much broader array of assays and compounds, which, in aggregate, may be valuable for prioritizing compounds for further toxicologic evaluation, identifying compounds with particular mechanisms of action, and potentially predicting in vivo biological response. PMID:18335092

  4. Hypoxia-sensitive reporter system for high-throughput screening.

    Science.gov (United States)

    Tsujita, Tadayuki; Kawaguchi, Shin-ichi; Dan, Takashi; Baird, Liam; Miyata, Toshio; Yamamoto, Masayuki

    2015-02-01

    The induction of anti-hypoxic stress enzymes and proteins has the potential to be a potent therapeutic strategy to prevent the progression of ischemic heart, kidney or brain diseases. To realize this idea, small chemical compounds, which mimic hypoxic conditions by activating the PHD-HIF-α system, have been developed. However, to date, none of these compounds were identified by monitoring the transcriptional activation of hypoxia-inducible factors (HIFs). Thus, to facilitate the discovery of potent inducers of HIF-α, we have developed an effective high-throughput screening (HTS) system to directly monitor the output of HIF-α transcription. We generated a HIF-α-dependent reporter system that responds to hypoxic stimuli in a concentration- and time-dependent manner. This system was developed through multiple optimization steps, resulting in the generation of a construct that consists of the secretion-type luciferase gene (Metridia luciferase, MLuc) under the transcriptional regulation of an enhancer containing 7 copies of 40-bp hypoxia responsive element (HRE) upstream of a mini-TATA promoter. This construct was stably integrated into the human neuroblastoma cell line, SK-N-BE(2)c, to generate a reporter system, named SKN:HRE-MLuc. To improve this system and to increase its suitability for the HTS platform, we incorporated the next generation luciferase, Nano luciferase (NLuc), whose longer half-life provides us with flexibility for the use of this reporter. We thus generated a stably transformed clone with NLuc, named SKN:HRE-NLuc, and found that it showed significantly improved reporter activity compared to SKN:HRE-MLuc. In this study, we have successfully developed the SKN:HRE-NLuc screening system as an efficient platform for future HTS.

  5. Towards Chip Scale Liquid Chromatography and High Throughput Immunosensing

    Energy Technology Data Exchange (ETDEWEB)

    Ni, Jing [Iowa State Univ., Ames, IA (United States)

    2000-09-21

    This work describes several research projects aimed towards developing new instruments and novel methods for high throughput chemical and biological analysis. Approaches are taken in two directions. The first direction takes advantage of well-established semiconductor fabrication techniques and applies them to miniaturize instruments that are workhorses in analytical laboratories. Specifically, the first part of this work focused on the development of micropumps and microvalves for controlled fluid delivery. The mechanism of these micropumps and microvalves relies on the electrochemically-induced surface tension change at a mercury/electrolyte interface. A miniaturized flow injection analysis device was integrated and flow injection analyses were demonstrated. In the second part of this work, microfluidic chips were also designed, fabricated, and tested. Separations of two fluorescent dyes were demonstrated in microfabricated channels, based on an open-tubular liquid chromatography (OT LC) or an electrochemically-modulated liquid chromatography (EMLC) format. A reduction in instrument size can potentially increase analysis speed, and allow exceedingly small amounts of sample to be analyzed under diverse separation conditions. The second direction explores the surface enhanced Raman spectroscopy (SERS) as a signal transduction method for immunoassay analysis. It takes advantage of the improved detection sensitivity as a result of surface enhancement on colloidal gold, the narrow width of Raman band, and the stability of Raman scattering signals to distinguish several different species simultaneously without exploiting spatially-separated addresses on a biochip. By labeling gold nanoparticles with different Raman reporters in conjunction with different detection antibodies, a simultaneous detection of a dual-analyte immunoassay was demonstrated. Using this scheme for quantitative analysis was also studied and preliminary dose-response curves from an immunoassay of a

  6. Missing call bias in high-throughput genotyping

    Directory of Open Access Journals (Sweden)

    Lin Rong

    2009-03-01

    Full Text Available Abstract Background The advent of high-throughput and cost-effective genotyping platforms made genome-wide association (GWA) studies a reality. While the primary focus has been on reducing genotyping error, the problems associated with missing calls are largely overlooked. Results To probe into the effect of missing calls on GWA studies, we demonstrated experimentally the prevalence and severity of the problem of missing call bias (MCB) in four genotyping technologies (Affymetrix 500 K SNP array, SNPstream, TaqMan, and Illumina Beadlab). Subsequently, we showed theoretically that MCB leads to biased conclusions in the subsequent analyses, including estimation of allele/genotype frequencies, the measurement of HWE and association tests under various modes of inheritance. We showed that MCB usually leads to power loss in association tests, and that this power change is greater than what would result from an equivalent, unbiased reduction of sample size. We also compared the bias in allele frequency estimation and in association tests introduced by MCB with that introduced by genotyping errors. Our results illustrated that in most cases, the bias can be greatly reduced by increasing the call-rate at the cost of the genotyping error rate. Conclusion The commonly used 'no-call' procedure for observations of borderline quality should be modified. If the objective is to minimize the bias, the cut-off for call-rate and that for genotyping error rate should be properly coupled in GWA studies. We suggested that the ongoing QC cut-off for call-rate should be increased, while the cut-off for genotyping error rate can be reduced accordingly.
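
    A toy simulation (not the paper's data) illustrating how genotype-dependent missing calls bias allele-frequency estimates might look like the sketch below; the missingness rates and minor-allele frequency are arbitrary.

```python
# Minimal sketch (toy numbers, not the paper's data): simulate genotype-dependent missing
# calls and compare the estimated minor-allele frequency with the true value.
import numpy as np

def simulate_mcb(n=100_000, maf=0.2, miss_rates=(0.01, 0.05, 0.10), seed=0):
    """miss_rates: probability of a 'no call' for genotypes with 0, 1 or 2 minor alleles."""
    rng = np.random.default_rng(seed)
    genotypes = rng.binomial(2, maf, size=n)             # 0, 1, 2 copies of the minor allele
    p_miss = np.take(miss_rates, genotypes)              # missingness depends on genotype
    called = genotypes[rng.random(n) >= p_miss]          # minor-allele carriers drop out more often
    return genotypes.mean() / 2, called.mean() / 2       # true vs. estimated minor-allele frequency

true_maf, est_maf = simulate_mcb()
print(f"true MAF {true_maf:.3f} vs estimated MAF {est_maf:.3f}")
```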

  7. Mining Chemical Activity Status from High-Throughput Screening Assays

    KAUST Repository

    Soufan, Othman

    2015-12-14

    High-throughput screening (HTS) experiments provide a valuable resource that reports biological activity of numerous chemical compounds relative to their molecular targets. Building computational models that accurately predict such activity status (active vs. inactive) in specific assays is a challenging task given the large volume of data and frequently small proportion of active compounds relative to the inactive ones. We developed a method, DRAMOTE, to predict activity status of chemical compounds in HTP activity assays. For a class of HTP assays, our method achieves considerably better results than the current state-of-the-art solutions. We achieved this by modification of a minority oversampling technique. To demonstrate that DRAMOTE is performing better than the other methods, we performed a comprehensive comparison analysis with several other methods and evaluated them on data from 11 PubChem assays through 1,350 experiments that involved approximately 500,000 interactions between chemicals and their target proteins. As an example of potential use, we applied DRAMOTE to develop robust models for predicting FDA approved drugs that have high probability to interact with the thyroid stimulating hormone receptor (TSHR) in humans. Our findings are further partially and indirectly supported by 3D docking results and literature information. The results based on approximately 500,000 interactions suggest that DRAMOTE has performed the best and that it can be used for developing robust virtual screening models. The datasets and implementation of all solutions are available as a MATLAB toolbox online at www.cbrc.kaust.edu.sa/dramote and can be found on Figshare.

  8. Applications of Biophysics in High-Throughput Screening Hit Validation.

    Science.gov (United States)

    Genick, Christine Clougherty; Barlier, Danielle; Monna, Dominique; Brunner, Reto; Bé, Céline; Scheufler, Clemens; Ottl, Johannes

    2014-06-01

    For approximately a decade, biophysical methods have been used to validate positive hits selected from high-throughput screening (HTS) campaigns with the goal to verify binding interactions using label-free assays. By applying label-free readouts, screen artifacts created by compound interference and fluorescence are discovered, enabling further characterization of the hits for their target specificity and selectivity. The use of several biophysical methods to extract this type of high-content information is required to prevent the promotion of false positives to the next level of hit validation and to select the best candidates for further chemical optimization. The typical technologies applied in this arena include dynamic light scattering, turbidometry, resonance waveguide, surface plasmon resonance, differential scanning fluorimetry, mass spectrometry, and others. Each technology can provide different types of information to enable the characterization of the binding interaction. Thus, these technologies can be incorporated in a hit-validation strategy not only according to the profile of chemical matter that is desired by the medicinal chemists, but also in a manner that is in agreement with the target protein's amenability to the screening format. Here, we present the results of screening strategies using biophysics with the objective to evaluate the approaches, discuss the advantages and challenges, and summarize the benefits in reference to lead discovery. In summary, the biophysics screens presented here demonstrated various hit rates from a list of ~2000 preselected, IC50-validated hits from HTS (an IC50 is the inhibitor concentration at which 50% inhibition of activity is observed). There are several lessons learned from these biophysical screens, which will be discussed in this article. © 2014 Society for Laboratory Automation and Screening.

  9. High-throughput DNA extraction of forensic adhesive tapes.

    Science.gov (United States)

    Forsberg, Christina; Jansson, Linda; Ansell, Ricky; Hedman, Johannes

    2016-09-01

    Tape-lifting has, since its introduction in the early 2000s, become a well-established sampling method in forensic DNA analysis. Sampling is quick and straightforward while the following DNA extraction is more challenging due to the "stickiness", rigidity and size of the tape. We have developed, validated and implemented a simple and efficient direct lysis DNA extraction protocol for adhesive tapes that requires limited manual labour. The method uses Chelex beads and is applied with SceneSafe FAST tape. This direct lysis protocol provided higher mean DNA yields than PrepFiler Express BTA on Automate Express, although the differences were not significant when using clothes worn in a controlled fashion as reference material (p=0.13 and p=0.34 for T-shirts and button-down shirts, respectively). Through in-house validation we show that the method is fit for purpose for application in casework, as it provides high DNA yields and amplifiability, as well as good reproducibility and DNA extract stability. After implementation in casework, the proportion of extracts with DNA concentrations above 0.01 ng/μL increased from 71% to 76%. Apart from providing higher DNA yields compared with the previous method, the introduction of the developed direct lysis protocol also reduced the amount of manual labour by half and doubled the potential throughput for tapes at the laboratory. Generally, simplified manual protocols can serve as a cost-effective alternative to sophisticated automation solutions when the aim is to enable high-throughput DNA extraction of complex crime scene samples. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  10. High-throughput microfluidic line scan imaging for cytological characterization

    Science.gov (United States)

    Hutcheson, Joshua A.; Powless, Amy J.; Majid, Aneeka A.; Claycomb, Adair; Fritsch, Ingrid; Balachandran, Kartik; Muldoon, Timothy J.

    2015-03-01

    Imaging cells in a microfluidic chamber with an area scan camera is difficult due to motion blur and to data loss during frame readout, which causes discontinuities in data acquisition as cells move at relatively high speed through the chamber. We have developed a method to continuously acquire high-resolution images of cells in motion through a microfluidics chamber using a high-speed line scan camera. The sensor acquires images in a line-by-line fashion in order to continuously image moving objects without motion blur. The optical setup comprises an epi-illuminated microscope with a 40X oil immersion, 1.4 NA objective and a 150 mm tube lens focused on a microfluidic channel. Samples containing suspended cells fluorescently stained with 0.01% (w/v) proflavine in saline are introduced into the microfluidics chamber via a syringe pump; illumination is provided by a blue LED (455 nm). Images were taken of samples at the focal plane using an ELiiXA+ 8k/4k monochrome line-scan camera at a line rate of up to 40 kHz. The system's line rate and fluid velocity are tightly controlled to reduce image distortion and are validated using fluorescent microspheres. Image acquisition was controlled via MATLAB's Image Acquisition toolbox. Data sets comprise discrete images of every detectable cell, which may be subsequently mined for morphological statistics and definable features by a custom texture analysis algorithm. This high-throughput screening method, comparable to cell counting by flow cytometry, provided efficient examination including counting, classification, and differentiation of saliva, blood, and cultured human cancer cells.
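
    The line-rate/velocity matching described above follows from requiring the image of a moving cell to advance exactly one pixel row per line period; the sketch below uses illustrative numbers (flow speed and pixel pitch are assumptions), not the authors' calibration.

```python
# Minimal sketch (illustrative numbers, not the authors' calibration): pick a line rate so
# that the image of a moving cell advances one pixel row per line period (no stretch/blur).
def required_line_rate_hz(fluid_velocity_um_s, magnification, pixel_pitch_um):
    """Line rate at which the object moves (pixel_pitch / magnification) per line,
    giving an undistorted (square-pixel) image of the moving cell."""
    image_plane_velocity = fluid_velocity_um_s * magnification     # um/s at the sensor
    return image_plane_velocity / pixel_pitch_um                   # lines per second

# Example: 5 mm/s flow, 40X objective, 5 um sensor pixels -> 40 kHz line rate.
print(required_line_rate_hz(5_000, 40, 5.0))   # 40000.0
```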

  11. A bioimage informatics platform for high-throughput embryo phenotyping.

    Science.gov (United States)

    Brown, James M; Horner, Neil R; Lawson, Thomas N; Fiegel, Tanja; Greenaway, Simon; Morgan, Hugh; Ring, Natalie; Santos, Luis; Sneddon, Duncan; Teboul, Lydia; Vibert, Jennifer; Yaikhom, Gagarine; Westerberg, Henrik; Mallon, Ann-Marie

    2018-01-01

    High-throughput phenotyping is a cornerstone of numerous functional genomics projects. In recent years, imaging screens have become increasingly important in understanding gene-phenotype relationships in studies of cells, tissues and whole organisms. Three-dimensional (3D) imaging has risen to prominence in the field of developmental biology for its ability to capture whole embryo morphology and gene expression, as exemplified by the International Mouse Phenotyping Consortium (IMPC). Large volumes of image data are being acquired by multiple institutions around the world that encompass a range of modalities, proprietary software and metadata. To facilitate robust downstream analysis, images and metadata must be standardized to account for these differences. As an open scientific enterprise, making the data readily accessible is essential so that members of biomedical and clinical research communities can study the images for themselves without the need for highly specialized software or technical expertise. In this article, we present a platform of software tools that facilitate the upload, analysis and dissemination of 3D images for the IMPC. Over 750 reconstructions from 80 embryonic lethal and subviable lines have been captured to date, all of which are openly accessible at mousephenotype.org. Although designed for the IMPC, all software is available under an open-source licence for others to use and develop further. Ongoing developments aim to increase throughput and improve the analysis and dissemination of image data. Furthermore, we aim to ensure that images are searchable so that users can locate relevant images associated with genes, phenotypes or human diseases of interest. © The Author 2016. Published by Oxford University Press.

  12. High-Throughput Neuroimaging-Genetics Computational Infrastructure

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2014-04-01

    Full Text Available Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate and disseminate novel scientific methods, computational resources and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval and aggregation. Computational processing involves the necessary software, hardware and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical and phenotypic data and meta-data. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer’s and Parkinson’s data, we provide several examples of translational applications using this infrastructure.

  13. Recent advances in high-throughput molecular marker identification for superficial and invasive bladder cancers

    DEFF Research Database (Denmark)

    Andersen, Lars Dyrskjøt; Zieger, Karsten; Ørntoft, Torben Falck

    2007-01-01

    individually contributed to the management of the disease. However, the development of high-throughput techniques for simultaneous assessment of a large number of markers has allowed classification of tumors into clinically relevant molecular subgroups beyond those possible by pathological classification. Here, we review the recent advances in high-throughput molecular marker identification for superficial and invasive bladder cancers.

  14. High-throughput and computational approaches for diagnostic and prognostic host tuberculosis biomarkers

    Directory of Open Access Journals (Sweden)

    January Weiner

    2017-03-01

    Full Text Available High-throughput techniques strive to identify new biomarkers that will be useful for the diagnosis, treatment, and prevention of tuberculosis (TB). However, their analysis and interpretation pose considerable challenges. Recent developments in the high-throughput detection of host biomarkers in TB are reported in this review.

  15. Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays (SOT)

    Science.gov (United States)

    Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays. DE DeGroot, RS Thomas, and SO Simmons, National Center for Computational Toxicology, US EPA, Research Triangle Park, NC, USA. The EPA’s ToxCast program utilizes a wide variety of high-throughput s...

  16. Integrative analysis of 111 reference human epigenomes

    Science.gov (United States)

    Kundaje, Anshul; Meuleman, Wouter; Ernst, Jason; Bilenky, Misha; Yen, Angela; Kheradpour, Pouya; Zhang, Zhizhuo; Heravi-Moussavi, Alireza; Liu, Yaping; Amin, Viren; Ziller, Michael J; Whitaker, John W; Schultz, Matthew D; Sandstrom, Richard S; Eaton, Matthew L; Wu, Yi-Chieh; Wang, Jianrong; Ward, Lucas D; Sarkar, Abhishek; Quon, Gerald; Pfenning, Andreas; Wang, Xinchen; Claussnitzer, Melina; Coarfa, Cristian; Harris, R Alan; Shoresh, Noam; Epstein, Charles B; Gjoneska, Elizabeta; Leung, Danny; Xie, Wei; Hawkins, R David; Lister, Ryan; Hong, Chibo; Gascard, Philippe; Mungall, Andrew J; Moore, Richard; Chuah, Eric; Tam, Angela; Canfield, Theresa K; Hansen, R Scott; Kaul, Rajinder; Sabo, Peter J; Bansal, Mukul S; Carles, Annaick; Dixon, Jesse R; Farh, Kai-How; Feizi, Soheil; Karlic, Rosa; Kim, Ah-Ram; Kulkarni, Ashwinikumar; Li, Daofeng; Lowdon, Rebecca; Mercer, Tim R; Neph, Shane J; Onuchic, Vitor; Polak, Paz; Rajagopal, Nisha; Ray, Pradipta; Sallari, Richard C; Siebenthall, Kyle T; Sinnott-Armstrong, Nicholas; Stevens, Michael; Thurman, Robert E; Wu, Jie; Zhang, Bo; Zhou, Xin; Beaudet, Arthur E; Boyer, Laurie A; De Jager, Philip; Farnham, Peggy J; Fisher, Susan J; Haussler, David; Jones, Steven; Li, Wei; Marra, Marco; McManus, Michael T; Sunyaev, Shamil; Thomson, James A; Tlsty, Thea D; Tsai, Li-Huei; Wang, Wei; Waterland, Robert A; Zhang, Michael; Chadwick, Lisa H; Bernstein, Bradley E; Costello, Joseph F; Ecker, Joseph R; Hirst, Martin; Meissner, Alexander; Milosavljevic, Aleksandar; Ren, Bing; Stamatoyannopoulos, John A; Wang, Ting; Kellis, Manolis

    2015-01-01

    The reference human genome sequence set the stage for studies of genetic variation and its association with human disease, but a comparable reference has been lacking for epigenomic studies. To address this need, the NIH Roadmap Epigenomics Consortium generated the largest collection to date of human epigenomes for primary cells and tissues. Here, we describe the integrative analysis of 111 reference human epigenomes generated as part of the program, profiled for histone modification patterns, DNA accessibility, DNA methylation, and RNA expression. We establish global maps of regulatory elements, define regulatory modules of coordinated activity, and their likely activators and repressors. We show that disease- and trait-associated genetic variants are enriched in tissue-specific epigenomic marks, revealing biologically relevant cell types for diverse human traits, and providing a resource for interpreting the molecular basis of human disease. Our results demonstrate the central role of epigenomic information for understanding gene regulation, cellular differentiation, and human disease. PMID:25693563

  17. The Utilization of Formalin Fixed-Paraffin-Embedded Specimens in High Throughput Genomic Studies

    Directory of Open Access Journals (Sweden)

    Pan Zhang

    2017-01-01

    Full Text Available High throughput genomic assays empower us to study the entire human genome in a short time at reasonable cost. Formalin-fixed, paraffin-embedded (FFPE) tissue processing remains the most economical approach for longitudinal tissue specimen storage. Therefore, the ability to apply high throughput genomic applications to FFPE specimens can expand clinical assays and discovery. Many studies have measured the accuracy and repeatability of data generated from FFPE specimens using high throughput genomic assays. Together, these studies demonstrate feasibility and provide crucial guidance for future studies using FFPE specimens. Here, we summarize the findings of these studies and discuss the limitations of high throughput data generated from FFPE specimens across several platforms that include microarray, high throughput sequencing, and NanoString.

  18. Epigenomic engineering for Down syndrome.

    Science.gov (United States)

    Mentis, A F

    2016-12-01

    Down syndrome (DS; trisomy 21), the commonest genetic cause of mental disability, affects approximately 250,000 families in the United States alone. Despite milestones in understanding the specific genetic causes of the syndrome, the major symptoms of DS - not least those related to neurocognitive function - are incurable. DS phenotypes are highly variable, and gene expression patterns cannot be explained by trisomy alone, implicating epigenetics in DS pathophysiology. DNA and histone modifications appear to contribute to DS pathology and cognitive defects, and epigenomic and genome-editing research has very recently opened up novel therapeutic avenues for several diseases including DS. Here, we discuss how epigenomic therapies might be used to ameliorate DS-related phenotypes with a particular focus on the CRISPR-Cas9 system for targeted epigenomic engineering in DS. This approach is likely to reap rewards in terms of understanding the pathophysiology of DS, especially when combined with animal models, but significant technical and ethical challenges must be overcome for clinical translation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Performance of high-throughput DNA quantification methods

    Directory of Open Access Journals (Sweden)

    Chanock Stephen J

    2003-10-01

    Full Text Available Abstract Background The accuracy and precision of estimates of DNA concentration are critical factors for efficient use of DNA samples in high-throughput genotype and sequence analyses. We evaluated the performance of spectrophotometric (OD) DNA quantification, and compared it to two fluorometric quantification methods, the PicoGreen® assay (PG) and a novel real-time quantitative genomic PCR assay (QG) specific to a region at the human BRCA1 locus. Twenty-two lymphoblastoid cell line DNA samples with an initial concentration of ~350 ng/uL were diluted to 20 ng/uL. DNA concentration was estimated by OD and further diluted to 5 ng/uL. The concentrations of multiple aliquots of the final dilution were measured by the OD, QG and PG methods. The effects of manual and robotic laboratory sample handling procedures on the estimates of DNA concentration were assessed using variance components analyses. Results The OD method was the DNA quantification method most concordant with the reference sample among the three methods evaluated. A large fraction of the total variance for all three methods (36.0–95.7%) was explained by sample-to-sample variation, whereas the amount of variance attributable to sample handling was small (0.8–17.5%). Residual error (3.2–59.4%), corresponding to un-modelled factors, contributed to a greater extent to the total variation than the sample handling procedures. Conclusion The application of a specific DNA quantification method to a particular molecular genetic laboratory protocol must take into account the accuracy and precision of the specific method, as well as the requirements of the experimental workflow with respect to sample volumes and throughput. While OD was the most concordant and precise DNA quantification method in this study, the information provided by the quantitative PCR assay regarding the suitability of DNA samples for PCR may be an essential factor for some protocols, despite the decreased concordance and
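
    As a simplified stand-in for the study's variance components analysis (which also modelled sample-handling effects), the sketch below decomposes replicate concentration readings into sample-to-sample and residual variance with a one-way random-effects model; the data are toy values.

```python
# Minimal sketch (one-way random-effects ANOVA only, a simplification of the study's full
# variance-components model): split total variance into sample-to-sample and residual parts.
import numpy as np

def variance_components(measurements):
    """measurements: 2-D array, rows = DNA samples, columns = replicate aliquot readings."""
    data = np.asarray(measurements, dtype=float)
    k, n = data.shape                                   # k samples, n replicates each
    grand = data.mean()
    ms_between = n * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    sigma2_sample = max((ms_between - ms_within) / n, 0.0)
    total = sigma2_sample + ms_within
    return {"sample_to_sample": sigma2_sample / total, "residual": ms_within / total}

# Toy usage: 6 samples x 4 replicate concentration readings (ng/uL).
rng = np.random.default_rng(0)
toy = rng.normal(5.0, 1.0, size=(6, 1)) + rng.normal(0.0, 0.3, size=(6, 4))
print(variance_components(toy))
```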

  20. High throughput modular chambers for rapid evaluation of anesthetic sensitivity

    Directory of Open Access Journals (Sweden)

    Eckmann David M

    2006-11-01

    Full Text Available Abstract Background Anesthetic sensitivity is determined by the interaction of multiple genes. Hence, a dissection of genetic contributors would be aided by precise and high-throughput behavioral screens. Traditionally, anesthetic phenotyping has addressed only induction of anesthesia, evaluated with dose-response curves, while ignoring potentially important data on emergence from anesthesia. Methods We designed and built a controlled environment apparatus to permit rapid phenotyping of twenty-four mice simultaneously. We used the loss of righting reflex to indicate anesthetic-induced unconsciousness. After fitting the data to a sigmoidal dose-response curve with variable slope, we calculated the MACLORR (EC50), the Hill coefficient, and the 95% confidence intervals bracketing these values. Upon termination of the anesthetic, Emergence timeRR was determined and expressed as the mean ± standard error for each inhaled anesthetic. Results In agreement with several previously published reports we find that the MACLORR of halothane, isoflurane, and sevoflurane in 8–12 week old C57BL/6J mice is 0.79% (95% confidence interval = 0.78 – 0.79%), 0.91% (95% confidence interval = 0.90 – 0.93%), and 1.96% (95% confidence interval = 1.94 – 1.97%), respectively. Hill coefficients for halothane, isoflurane, and sevoflurane are 24.7 (95% confidence interval = 19.8 – 29.7), 19.2 (95% confidence interval = 14.0 – 24.3), and 33.1 (95% confidence interval = 27.3 – 38.8), respectively. After roughly 2.5 MACLORR • hr exposures, mice take 16.00 ± 1.07, 6.19 ± 0.32, and 2.15 ± 0.12 minutes to emerge from halothane, isoflurane, and sevoflurane, respectively. Conclusion This system enabled assessment of inhaled anesthetic responsiveness with a higher precision than that previously reported. It is broadly adaptable for delivering an inhaled therapeutic (or toxin) to a population while monitoring its vital signs, motor reflexes, and providing precise control
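    The dose-response step described above amounts to fitting a variable-slope sigmoid (Hill equation) to the fraction of mice showing loss of righting reflex. A minimal sketch of that fit is shown below; the concentrations and response fractions are illustrative, not data from the study.

```python
# Fit a variable-slope sigmoid to loss-of-righting-reflex data to estimate
# the EC50 (MAC_LORR) and the Hill coefficient.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, n_hill):
    """Fraction of animals unresponsive at anesthetic concentration conc (vol%)."""
    return conc ** n_hill / (ec50 ** n_hill + conc ** n_hill)

conc = np.array([0.6, 0.7, 0.75, 0.8, 0.85, 0.9, 1.0])        # vol%, illustrative
frac_lorr = np.array([0.0, 0.08, 0.25, 0.55, 0.80, 0.95, 1.0])  # fraction with LORR

(ec50, n_hill), cov = curve_fit(hill, conc, frac_lorr, p0=[0.8, 10.0])
se = np.sqrt(np.diag(cov))
print(f"MAC_LORR (EC50) = {ec50:.2f}% (+/- {1.96 * se[0]:.2f}, approx. 95% CI)")
print(f"Hill coefficient = {n_hill:.1f} (+/- {1.96 * se[1]:.1f})")
```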

  1. High throughput comet assay to study genotoxicity of nanomaterials

    Directory of Open Access Journals (Sweden)

    Naouale El Yamani

    2015-06-01

    Full Text Available The unique physicochemical properties of engineered nanomaterials (NMs) have accelerated their use in diverse industrial and domestic products. Although their presence in consumer products represents a major concern for public health safety, their potential impact on human health is poorly understood. There is therefore an urgent need to clarify the toxic effects of NMs and to elucidate the mechanisms involved. In view of the large number of NMs currently being used, high-throughput (HTP) screening technologies are clearly needed for efficient assessment of toxicity. The comet assay is the most used method in nanogenotoxicity studies and has great potential for increasing throughput as it is fast, versatile and robust; simple technical modifications of the assay make it possible to test many compounds (NMs) in a single experiment. The standard gel of 70-100 μL contains thousands of cells, of which only a tiny fraction are actually scored. Reducing the gel to a volume of 5 μL, with just a few hundred cells, allows twelve gels to be set on a standard slide, or 96 as a standard 8x12 array. For the 12 gel format, standard slides precoated with agarose are placed on a metal template and gels are set on the positions marked on the template. The HTP comet assay, incorporating digestion of DNA with formamidopyrimidine DNA glycosylase (FPG) to detect oxidised purines, has recently been applied to study the potential induction of genotoxicity by NMs via reactive oxygen. In the NanoTEST project we investigated the genotoxic potential of several well-characterized metal and polymeric nanoparticles with the comet assay. All in vitro studies were harmonized; i.e. NMs were from the same batch, and identical dispersion protocols, exposure time, concentration range, culture conditions, and time-courses were used. As a kidney model, Cos-1 fibroblast-like kidney cells were treated with different concentrations of iron oxide NMs, and cells embedded in minigels (12

  2. High throughput RNAi assay optimization using adherent cell cytometry

    Directory of Open Access Journals (Sweden)

    Pradhan Leena

    2011-04-01

    Full Text Available Abstract Background siRNA technology is a promising tool for gene therapy of vascular disease. Due to the multitude of reagents and cell types, RNAi experiment optimization can be time-consuming. In this study adherent cell cytometry was used to rapidly optimize siRNA transfection in human aortic vascular smooth muscle cells (AoSMC). Methods AoSMC were seeded at a density of 3000-8000 cells/well of a 96-well plate. 24 hours later AoSMC were transfected with either non-targeting unlabeled siRNA (50 nM), or non-targeting labeled siRNA, siGLO Red (5 or 50 nM), using no transfection reagent, HiPerfect or Lipofectamine RNAiMax. For counting cells, Hoechst nuclei stain or Cell Tracker green were used. For data analysis an adherent cell cytometer, Celigo®, was used. Data was normalized to the transfection reagent alone group and expressed as red pixel count/cell. Results After 24 hours, none of the transfection conditions led to cell loss. Red fluorescence counts were normalized to the AoSMC count. RNAiMax was more potent compared to HiPerfect or no transfection reagent at 5 nM siGLO Red (4.12 +/-1.04 vs. 0.70 +/-0.26 vs. 0.15 +/-0.13 red pixel/cell) and 50 nM siGLO Red (6.49 +/-1.81 vs. 2.52 +/-0.67 vs. 0.34 +/-0.19). Fluorescence expression results supported gene knockdown achieved by using MARCKS-targeting siRNA in AoSMCs. Conclusion This study underscores that RNAi delivery depends heavily on the choice of delivery method. Adherent cell cytometry can be used as a high-throughput screening tool for the optimization of RNAi assays. This technology can accelerate in vitro cell assays and thus save costs.
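    The readout normalization described above (red pixel count per cell, relative to the transfection-reagent-alone wells) can be reproduced in a few lines. The sketch below uses hypothetical well values standing in for an adherent-cytometer export.

```python
# Normalize siGLO Red fluorescence per cell and subtract the reagent-alone background.
import pandas as pd

wells = pd.DataFrame({
    "condition": ["RNAiMax_only", "RNAiMax_only", "RNAiMax_siGLO_5nM", "RNAiMax_siGLO_5nM"],
    "red_pixels": [1200, 1500, 410000, 390000],   # hypothetical red pixel counts
    "cell_count": [6100, 6300, 6000, 5900],       # hypothetical Hoechst-based counts
})

wells["red_per_cell"] = wells["red_pixels"] / wells["cell_count"]
baseline = wells.loc[wells["condition"] == "RNAiMax_only", "red_per_cell"].mean()
wells["normalized"] = wells["red_per_cell"] - baseline   # background-subtracted signal

print(wells.groupby("condition")["normalized"].agg(["mean", "sem"]))
```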

  3. High-throughput metal susceptibility testing of microbial biofilms

    Directory of Open Access Journals (Sweden)

    Turner Raymond J

    2005-10-01

    Full Text Available Abstract Background Microbial biofilms exist all over the natural world, a distribution that is paralleled by metal cations and oxyanions. Despite this reality, very few studies have examined how biofilms withstand exposure to these toxic compounds. This article describes a batch culture technique for biofilm and planktonic cell metal susceptibility testing using the MBEC assay. This device is compatible with standard 96-well microtiter plate technology. As part of this method, a two-part, metal-specific neutralization protocol is summarized. This procedure minimizes residual biological toxicity arising from the carry-over of metals from challenge to recovery media. Neutralization consists of treating cultures with a chemical compound known to react with or to chelate the metal. Treated cultures are plated onto rich agar to allow metal complexes to diffuse into the recovery medium while bacteria remain on top to recover. Two difficulties associated with metal susceptibility testing were the focus of two applications of this technique. First, assays were calibrated to allow comparisons of the susceptibility of different organisms to metals. Second, the effects of exposure time and growth medium composition on the susceptibility of E. coli JM109 biofilms to metals were investigated. Results This high-throughput method generated 96 statistically equivalent biofilms in a single device and thus allowed for comparative and combinatorial experiments of media, microbial strains, exposure times and metals. By adjusting growth conditions, it was possible to examine biofilms of different microorganisms that had similar cell densities. In one example, Pseudomonas aeruginosa ATCC 27853 was up to 80 times more resistant to heavy metalloid oxyanions than Escherichia coli TG1. Further, biofilms were up to 133 times more tolerant to tellurite (TeO32-) than corresponding planktonic cultures. Regardless of the growth medium, the tolerance of biofilm and planktonic

  4. High-throughput metal susceptibility testing of microbial biofilms

    Science.gov (United States)

    Harrison, Joe J; Turner, Raymond J; Ceri, Howard

    2005-01-01

    Background Microbial biofilms exist all over the natural world, a distribution that is paralleled by metal cations and oxyanions. Despite this reality, very few studies have examined how biofilms withstand exposure to these toxic compounds. This article describes a batch culture technique for biofilm and planktonic cell metal susceptibility testing using the MBEC assay. This device is compatible with standard 96-well microtiter plate technology. As part of this method, a two-part, metal-specific neutralization protocol is summarized. This procedure minimizes residual biological toxicity arising from the carry-over of metals from challenge to recovery media. Neutralization consists of treating cultures with a chemical compound known to react with or to chelate the metal. Treated cultures are plated onto rich agar to allow metal complexes to diffuse into the recovery medium while bacteria remain on top to recover. Two difficulties associated with metal susceptibility testing were the focus of two applications of this technique. First, assays were calibrated to allow comparisons of the susceptibility of different organisms to metals. Second, the effects of exposure time and growth medium composition on the susceptibility of E. coli JM109 biofilms to metals were investigated. Results This high-throughput method generated 96 statistically equivalent biofilms in a single device and thus allowed for comparative and combinatorial experiments of media, microbial strains, exposure times and metals. By adjusting growth conditions, it was possible to examine biofilms of different microorganisms that had similar cell densities. In one example, Pseudomonas aeruginosa ATCC 27853 was up to 80 times more resistant to heavy metalloid oxyanions than Escherichia coli TG1. Further, biofilms were up to 133 times more tolerant to tellurite (TeO32-) than corresponding planktonic cultures. Regardless of the growth medium, the tolerance of biofilm and planktonic cell E. coli JM109 to metals

  5. TCGA Workflow: Analyze cancer genomics and epigenomics data using Bioconductor packages [version 2; referees: 1 approved, 2 approved with reservations

    Directory of Open Access Journals (Sweden)

    Tiago C. Silva

    2016-12-01

    Full Text Available Biotechnological advances in sequencing have led to an explosion of publicly available data via large international consortia such as The Cancer Genome Atlas (TCGA), The Encyclopedia of DNA Elements (ENCODE), and The NIH Roadmap Epigenomics Mapping Consortium (Roadmap). These projects have provided unprecedented opportunities to interrogate the epigenome of cultured cancer cell lines as well as normal and tumor tissues with high genomic resolution. The Bioconductor project offers more than 1,000 open-source software and statistical packages to analyze high-throughput genomic data. However, most packages are designed for specific data types (e.g. expression, epigenetics, genomics), and there is no one comprehensive tool that provides a complete integrative analysis of the resources and data provided by all three public projects. A need to create an integration of these different analyses was recently proposed. In this workflow, we provide a series of biologically focused integrative analyses of different molecular data. We describe how to download, process and prepare TCGA data and, by harnessing several key Bioconductor packages, we describe how to extract biologically meaningful genomic and epigenomic data. Using Roadmap and ENCODE data, we provide a work plan to identify biologically relevant functional epigenomic elements associated with cancer. To illustrate our workflow, we analyzed two types of brain tumors: low-grade glioma (LGG) versus high-grade glioma (glioblastoma multiforme, or GBM). This workflow introduces the following Bioconductor packages: AnnotationHub, ChIPSeeker, ComplexHeatmap, pathview, ELMER, GAIA, MINET, RTCGAToolbox, TCGAbiolinks.

  6. Perspectives of International Human Epigenome Consortium

    Directory of Open Access Journals (Sweden)

    Jae-Bum Bae

    2013-03-01

    Full Text Available As the International Human Epigenome Consortium (IHEC) launched officially at the 2010 Washington meeting, a giant step toward the conquest of unexplored regions of the human genome has begun. IHEC aims to provide 1,000 reference epigenomes to the international scientific community over the next 7-10 years. Seven member institutions, including South Korea's Korea National Institute of Health (KNIH), will each produce 25-200 reference epigenomes, and the produced data will be made publicly available through a data center. Epigenome data will range from whole-genome bisulfite sequencing, histone modification, and chromatin accessibility information to miRNA-seq. The final goal of IHEC is the production of reference maps of human epigenomes for key cellular states relevant to health and disease.

  7. Wide Throttling, High Throughput Hall Thruster for Science and Exploration Missions Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to Topic S3.04 "Propulsion Systems," Busek Co. Inc. will develop a high throughput Hall effect thruster with a nominal peak power of 1-kW and wide...

  8. High-Throughput Approaches to Pinpoint Function within the Noncoding Genome.

    Science.gov (United States)

    Montalbano, Antonino; Canver, Matthew C; Sanjana, Neville E

    2017-10-05

    The clustered regularly interspaced short palindromic repeats (CRISPR)-Cas nuclease system is a powerful tool for genome editing, and its simple programmability has enabled high-throughput genetic and epigenetic studies. These high-throughput approaches offer investigators a toolkit for functional interrogation of not only protein-coding genes but also noncoding DNA. Historically, noncoding DNA has lacked the detailed characterization that has been applied to protein-coding genes in large part because there has not been a robust set of methodologies for perturbing these regions. Although the majority of high-throughput CRISPR screens have focused on the coding genome to date, an increasing number of CRISPR screens targeting noncoding genomic regions continue to emerge. Here, we review high-throughput CRISPR-based approaches to uncover and understand functional elements within the noncoding genome and discuss practical aspects of noncoding library design and screen analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
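    A common analysis step in such pooled screens is computing per-guide enrichment or depletion between a selected population and a reference. The sketch below is a generic, hedged illustration of that step (depth normalization, pseudocounts, log2 fold change); the guide names and counts are hypothetical, not a published screen's pipeline.

```python
# Per-guide log2 fold change between a selected population and a reference library.
import numpy as np
import pandas as pd

counts = pd.DataFrame({
    "guide": ["sg_enh1_01", "sg_enh1_02", "sg_ctrl_01", "sg_ctrl_02"],
    "reference": [820, 640, 900, 760],     # reads before selection (hypothetical)
    "selected":  [160, 95, 880, 790],      # reads after selection (hypothetical)
}).set_index("guide")

pseudo = 1.0
cpm = counts.div(counts.sum(axis=0), axis=1) * 1e6            # normalize to sequencing depth
lfc = np.log2((cpm["selected"] + pseudo) / (cpm["reference"] + pseudo))
print(lfc.sort_values())    # strongly depleted guides point at candidate functional elements
```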

  9. High-Throughput Industrial Coatings Research at The Dow Chemical Company.

    Science.gov (United States)

    Kuo, Tzu-Chi; Malvadkar, Niranjan A; Drumright, Ray; Cesaretti, Richard; Bishop, Matthew T

    2016-09-12

    At The Dow Chemical Company, high-throughput research is an active area for developing new industrial coatings products. Using the principles of automation (i.e., using robotic instruments), parallel processing (i.e., prepare, process, and evaluate samples in parallel), and miniaturization (i.e., reduce sample size), high-throughput tools for synthesizing, formulating, and applying coating compositions have been developed at Dow. In addition, high-throughput workflows for measuring various coating properties, such as cure speed, hardness development, scratch resistance, impact toughness, resin compatibility, pot-life, surface defects, among others have also been developed in-house. These workflows correlate well with the traditional coatings tests, but they do not necessarily mimic those tests. The use of such high-throughput workflows in combination with smart experimental designs allows accelerated discovery and commercialization.

  10. High-throughput phenotyping of multicellular organisms: finding the link between genotype and phenotype

    OpenAIRE

    Sozzani, Rosangela; Benfey, Philip N

    2011-01-01

    High-throughput phenotyping approaches (phenomics) are being combined with genome-wide genetic screens to identify alterations in phenotype that result from gene inactivation. Here we highlight promising technologies for 'phenome-scale' analyses in multicellular organisms.

  11. High-throughput phenotyping of multicellular organisms: finding the link between genotype and phenotype

    Science.gov (United States)

    2011-01-01

    High-throughput phenotyping approaches (phenomics) are being combined with genome-wide genetic screens to identify alterations in phenotype that result from gene inactivation. Here we highlight promising technologies for 'phenome-scale' analyses in multicellular organisms. PMID:21457493

  12. EMPeror: a tool for visualizing high-throughput microbial community data

    National Research Council Canada - National Science Library

    Vázquez-Baeza, Yoshiki; Pirrung, Meg; Gonzalez, Antonio; Knight, Rob

    2013-01-01

    As microbial ecologists take advantage of high-throughput sequencing technologies to describe microbial communities across ever-increasing numbers of samples, new analysis tools are required to relate...

  13. High-throughput system-wide engineering and screening for microbial biotechnology.

    Science.gov (United States)

    Vervoort, Yannick; Linares, Alicia Gutiérrez; Roncoroni, Miguel; Liu, Chengxun; Steensels, Jan; Verstrepen, Kevin J

    2017-08-01

    Genetic engineering and screening of large numbers of cells or populations is a crucial bottleneck in today's systems biology and applied (micro)biology. Instead of using standard methods in bottles, flasks or 96-well plates, scientists are increasingly relying on high-throughput strategies that miniaturize their experiments to the nanoliter and picoliter scale and the single-cell level. In this review, we summarize different high-throughput system-wide genome engineering and screening strategies for microbes. More specifically, we will emphasize the use of multiplex automated genome evolution (MAGE) and CRISPR/Cas systems for high-throughput genome engineering and the application of (lab-on-chip) nanoreactors for high-throughput single-cell or population screening. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. An image analysis toolbox for high-throughput C. elegans assays.

    Science.gov (United States)

    Wählby, Carolina; Kamentsky, Lee; Liu, Zihan H; Riklin-Raviv, Tammy; Conery, Annie L; O'Rourke, Eyleen J; Sokolnicki, Katherine L; Visvikis, Orane; Ljosa, Vebjorn; Irazoqui, Javier E; Golland, Polina; Ruvkun, Gary; Ausubel, Frederick M; Carpenter, Anne E

    2012-04-22

    We present a toolbox for high-throughput screening of image-based Caenorhabditis elegans phenotypes. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems. This WormToolbox is available through the open-source CellProfiler project and enables objective scoring of whole-worm high-throughput image-based assays of C. elegans for the study of diverse biological pathways that are relevant to human disease.

  15. Deep Recurrent Neural Network for Mobile Human Activity Recognition with High Throughput

    OpenAIRE

    Inoue, Masaya; Inoue, Sozo; Nishida, Takeshi

    2016-01-01

    In this paper, we propose a method for high-throughput human activity recognition from raw accelerometer data using a deep recurrent neural network (DRNN), and investigate various architectures and their combinations to find the best parameter values. Here, "high throughput" refers to a short recognition time. We investigated various parameters and architectures of the DRNN by using a training dataset of 432 trials with 6 activity classes from 7 people. The maximum recognition ...

  16. NGS++: a library for rapid prototyping of epigenomics software tools.

    Science.gov (United States)

    Nordell Markovits, Alexei; Joly Beauparlant, Charles; Toupin, Dominique; Wang, Shengrui; Droit, Arnaud; Gevry, Nicolas

    2013-08-01

    The development of computational tools to enable testing and analysis of high-throughput-sequencing data is essential to modern genomics research. However, although multiple frameworks have been developed to facilitate access to these tools, comparatively little effort has been made at implementing low-level programming libraries to increase the speed and ease of their development. We propose NGS++, a programming library in C++11 specialized in manipulating both next-generation sequencing (NGS) datasets and genomic information files. This library allows easy integration of new formats and rapid prototyping of new functionalities with a focus on the analysis of genomic regions and features. It offers a powerful, yet versatile and easily extensible interface to read, write and manipulate multiple genomic file formats. By standardizing the internal data structures and presenting a common interface to the data parser, NGS++ offers an effective framework for epigenomics tool development. NGS++ was written in C++ using the C++11 standard. It requires minimal effort to build and is well documented via a complete Doxygen guide, online documentation and tutorials. Source code, tests, code examples and documentation are available via the website at http://www.ngsplusplus.ca and the github repository at https://github.com/NGS-lib/NGSplusplus. nicolas.gevry@usherbrooke.ca or arnaud.droit@crchuq.ulaval.ca.

  17. Lessons from high-throughput protein crystallization screening: 10 years of practical experience

    Science.gov (United States)

    JR, Luft; EH, Snell; GT, DeTitta

    2011-01-01

    Introduction X-ray crystallography provides the majority of our structural biological knowledge at a molecular level and in terms of pharmaceutical design is a valuable tool to accelerate discovery. It is the premier technique in the field, but its usefulness is significantly limited by the need to grow well-diffracting crystals. It is for this reason that high-throughput crystallization has become a key technology that has matured over the past 10 years through the field of structural genomics. Areas covered The authors describe their experiences in high-throughput crystallization screening in the context of structural genomics and the general biomedical community. They focus on the lessons learnt from the operation of a high-throughput crystallization screening laboratory, which to date has screened over 12,500 biological macromolecules. They also describe the approaches taken to maximize the success while minimizing the effort. Through this, the authors hope that the reader will gain an insight into the efficient design of a laboratory and protocols to accomplish high-throughput crystallization on a single-, multiuser-laboratory or industrial scale. Expert Opinion High-throughput crystallization screening is readily available but, despite the power of the crystallographic technique, getting crystals is still not a solved problem. High-throughput approaches can help when used skillfully; however, they still require human input in the detailed analysis and interpretation of results to be more successful. PMID:22646073

  18. DGW: an exploratory data analysis tool for clustering and visualisation of epigenomic marks.

    Science.gov (United States)

    Lukauskas, Saulius; Visintainer, Roberto; Sanguinetti, Guido; Schweikert, Gabriele B

    2016-12-13

    Functional genomic and epigenomic research relies fundamentally on sequencing-based methods like ChIP-seq for the detection of DNA-protein interactions. These techniques return large, high-dimensional data sets with visually complex structures, such as multi-modal peaks extended over large genomic regions. Current tools for visualisation and data exploration represent and leverage these complex features only to a limited extent. We present DGW, an open-source software package for simultaneous alignment and clustering of multiple epigenomic marks. DGW uses Dynamic Time Warping to adaptively rescale and align genomic distances, which makes it possible to group regions of interest with similar shapes, thereby capturing the structure of epigenomic marks. We demonstrate the effectiveness of the approach in a simulation study and on a real epigenomic data set from the ENCODE project. Our results show that DGW automatically recognises and aligns important genomic features such as transcription start sites and splicing sites from histone marks. DGW is available as an open-source Python package.
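    The core operation in DGW is dynamic time warping between signal profiles over regions of interest. The sketch below is a plain textbook DTW on two synthetic coverage profiles, intended only to illustrate the idea of shape-based alignment, not the DGW implementation itself.

```python
# Classic dynamic-programming DTW distance between two 1D signal profiles.
import numpy as np

def dtw_distance(a, b):
    """O(len(a)*len(b)) DTW with the standard unit step pattern."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two bimodal coverage profiles of different widths around a TSS-like feature.
profile_a = np.concatenate([np.zeros(5), [2, 6, 9, 6, 2], np.zeros(5), [1, 4, 7, 4, 1]])
profile_b = np.concatenate([np.zeros(3), [2, 5, 9, 9, 5, 2], np.zeros(8), [1, 5, 8, 5, 1]])
print(f"DTW distance: {dtw_distance(profile_a, profile_b):.1f}")
```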

  19. Enhancing genome assemblies by integrating non-sequence based data

    Directory of Open Access Journals (Sweden)

    Heider Thomas N

    2011-04-01

    Full Text Available Abstract Introduction Many genome projects were underway before the advent of high-throughput sequencing and have thus been supported by a wealth of genome information from other technologies. Such information frequently takes the form of linkage and physical maps, both of which can provide a substantial amount of data useful in de novo sequencing projects. Furthermore, the recent abundance of genome resources enables the use of conserved synteny maps identified in related species to further enhance genome assemblies. Methods The tammar wallaby (Macropus eugenii) is a model marsupial mammal with a low coverage genome. However, we have access to extensive comparative maps containing over 14,000 markers constructed through the physical mapping of conserved loci, chromosome painting and comprehensive linkage maps. Using a custom Bioperl pipeline, information from the maps was aligned to assembled tammar wallaby contigs using BLAT. This data was used to construct pseudo paired-end libraries with intervals ranging from 5-10 MB. We then used Bambus (a program designed to scaffold eukaryotic genomes by ordering and orienting contigs through the use of paired-end data) to scaffold our libraries. To determine how map data compares to sequence-based approaches to enhance assemblies, we repeated the experiment using a 0.5× coverage of unique reads from 4 KB and 8 KB Illumina paired-end libraries. Finally, we combined both the sequence and non-sequence-based data to determine how a combined approach could further enhance the quality of the low coverage de novo reconstruction of the tammar wallaby genome. Results Using the map data alone, we were able to order 2.2% of the initial contigs into scaffolds, and increase the N50 scaffold size to 39 KB (36 KB in the original assembly). Using only the 0.5× paired-end sequence-based data, 53% of the initial contigs were assigned to scaffolds. Combining both data sets resulted in a further 2% increase in the number of
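    The map-to-scaffold idea above can be illustrated with a small sketch that pairs map-ordered markers and emits the contigs they hit as pseudo mate pairs with wide insert intervals, for a scaffolder to consume. The marker tuples, interval bounds, and output format below are hypothetical illustrations, not the authors' Bioperl pipeline.

```python
# Turn map-anchored marker hits into pseudo "mate pair" links between contigs.
from itertools import combinations

# (marker id, map position in Mb, contig carrying the BLAT hit) -- hypothetical
marker_hits = [
    ("M001", 12.1, "contig_0042"),
    ("M002", 14.8, "contig_0107"),
    ("M003", 19.9, "contig_0007"),
]

MIN_GAP_MB, MAX_GAP_MB = 5.0, 10.0   # pseudo insert-size interval, per the 5-10 MB range above

pseudo_pairs = []
for (m1, pos1, c1), (m2, pos2, c2) in combinations(sorted(marker_hits, key=lambda x: x[1]), 2):
    gap = abs(pos2 - pos1)
    if c1 != c2 and MIN_GAP_MB <= gap <= MAX_GAP_MB:
        pseudo_pairs.append((m1, c1, m2, c2, gap))

for m1, c1, m2, c2, gap in pseudo_pairs:
    print(f"pseudo pair {m1}/{m2}: links {c1} <-> {c2}, expected gap ~{gap:.1f} Mb")
```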

  20. Epigenomics and the regulation of aging.

    Science.gov (United States)

    Boyd-Kirkup, Jerome D; Green, Christopher D; Wu, Gang; Wang, Dan; Han, Jing-Dong J

    2013-04-01

    It is tempting to assume that a gradual accumulation of damage 'causes' an organism to age, but other biological processes present during the lifespan, whether 'programmed' or 'hijacked', could control the type and speed of aging. Theories of aging have classically focused on changes at the genomic level; however, individuals with similar genetic backgrounds can age very differently. Epigenetic modifications include DNA methylation, histone modifications and ncRNA. Environmental cues may be 'remembered' during lifespan through changes to the epigenome that affect the rate of aging. Changes to the epigenomic landscape are now known to associate with aging, but so far causal links to longevity are only beginning to be revealed. Nevertheless, it is becoming apparent that there is significant reciprocal regulation occurring between the epigenomic levels. Future work utilizing new technologies and techniques should build a clearer picture of the link between epigenomic changes and aging.

  1. Chromatin epigenomic domain folding: size matters

    Directory of Open Access Journals (Sweden)

    Bertrand R. Caré

    2015-09-01

    Full Text Available In eukaryotes, chromatin is coated with epigenetic marks which induce differential gene expression profiles and eventually lead to different cellular phenotypes. One of the challenges of contemporary cell biology is to relate the wealth of epigenomic data with the observed physical properties of chromatin. In this study, we present a polymer physics framework that takes into account the sizes of epigenomic domains. We build a model of chromatin as a block copolymer made of domains with various sizes. This model produces a rich set of conformations which is well explained by finite-size scaling analysis of the coil-globule transition of epigenomic domains. Our results suggest that size-dependent folding of epigenomic domains may be a crucial physical mechanism able to provide chromatin with tissue-specific folding states, these being associated with differential gene expression.

  2. High-throughput phenotyping and genomic selection: the frontiers of crop breeding converge.

    Science.gov (United States)

    Cabrera-Bosquet, Llorenç; Crossa, José; von Zitzewitz, Jarislav; Serret, María Dolors; Araus, José Luis

    2012-05-01

    Genomic selection (GS) and high-throughput phenotyping have recently been captivating the interest of the crop breeding community from both the public and private sectors world-wide. Both approaches promise to revolutionize the prediction of complex traits, including growth, yield and adaptation to stress. Whereas high-throughput phenotyping may help to improve understanding of crop physiology, the most powerful techniques for high-throughput field phenotyping are empirical rather than analytical, and in this respect comparable to genomic selection. Despite the fact that the two methodological approaches represent the extremes of what is understood as the breeding process (phenotype versus genome), they both consider the targeted traits (e.g. grain yield, growth, phenology, plant adaptation to stress) as a black box instead of dissecting them as a set of secondary traits (i.e. physiological) putatively related to the target trait. Both GS and high-throughput phenotyping have in common their empirical approach, enabling breeders to use genome profiles or phenotypes without understanding the underlying biology. This short review discusses the main aspects of both approaches and focuses on the case of genomic selection of maize flowering traits and on near-infrared spectroscopy (NIRS) and plant spectral reflectance as high-throughput field phenotyping methods for complex traits such as crop growth and yield. © 2012 Institute of Botany, Chinese Academy of Sciences.
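    As a hedged illustration of the genomic selection side, the sketch below trains a whole-genome ridge regression (akin to RR-BLUP) on simulated genotyped and phenotyped lines and predicts breeding values for unphenotyped selection candidates; all data are simulated, not from any breeding program.

```python
# Genomic prediction sketch: ridge regression on marker genotypes.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_candidates, n_markers = 200, 50, 1000

X_train = rng.integers(0, 3, size=(n_train, n_markers)).astype(float)   # 0/1/2 genotypes
true_effects = rng.normal(0, 0.05, size=n_markers)
y_train = X_train @ true_effects + rng.normal(0, 1.0, size=n_train)     # e.g. flowering time

model = Ridge(alpha=100.0).fit(X_train, y_train)

X_candidates = rng.integers(0, 3, size=(n_candidates, n_markers)).astype(float)
gebv = model.predict(X_candidates)            # genomic estimated breeding values
best = np.argsort(gebv)[-5:]                  # select the top-ranked candidates
print("selected candidates:", best, "predicted values:", np.round(gebv[best], 2))
```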

  3. Protocol: A high-throughput DNA extraction system suitable for conifers.

    Science.gov (United States)

    Bashalkhanov, Stanislav; Rajora, Om P

    2008-08-01

    High throughput DNA isolation from plants is a major bottleneck for most studies requiring large sample sizes. A variety of protocols have been developed for DNA isolation from plants. However, many species, including conifers, have high contents of secondary metabolites that interfere with the extraction process or the subsequent analysis steps. Here, we describe a procedure for high-throughput DNA isolation from conifers. We have developed a high-throughput DNA extraction protocol for conifers using an automated liquid handler and modifying the Qiagen MagAttract Plant Kit protocol. The modifications involve change to the buffer system and improving the protocol so that it almost doubles the number of samples processed per kit, which significantly reduces the overall costs. We describe two versions of the protocol: one for medium-throughput (MTP) and another for high-throughput (HTP) DNA isolation. The HTP version works from start to end in the industry-standard 96-well format, while the MTP version provides higher DNA yields per sample processed. We have successfully used the protocol for DNA extraction and genotyping of thousands of individuals of several spruce and a pine species. A high-throughput system for DNA extraction from conifer needles and seeds has been developed and validated. The quality of the isolated DNA was comparable with that obtained from two commonly used methods: the silica-spin column and the classic CTAB protocol. Our protocol provides a fully automatable and cost effective solution for processing large numbers of conifer samples.

  4. Study on radiation-responsive epigenomes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Hong; Lee, Seung Sik; Bae, Hyung Woo; Kim, Ji Hong; Kim, Ji Eun; Cho, Eun Ju; Lee, Min Hee; Moon, Yu Ran [KAERI, Daejeon (Korea, Republic of)

    2012-01-15

    The purpose of this project is the development of world-class core techniques of biological science for the application of plant genomes/epigenomes through the study of radiation-responsive epigenomes, and the improvement of national competitiveness in the field of fundamental technology for biological science and industry. The research scope includes 1) investigation of radiation-responsive epigenomes and elucidation of their relation with phenotypes, 2) elucidation of the interaction and transcriptional control of epigenomes and epigenetic regulators using IR, 3) investigation of epigenome-mediated traits in plant development, differentiation and antioxidant defense using IR, and 4) development of application techniques of radiation-responsive epigenomes for eco-monitoring and molecular breeding. The main results are as follows: practical application of ChIP in GR-treated Arabidopsis using anti-histone antibodies; mapping of DNA methylomes associated with GR-responsive transcriptomes; setup of methylated DNA quantification using HPLC; elucidation of aberrations in epigenetic regulation induced by low-dose GR using a gamma phytotron; comparison of gene expression of histone-modifying enzymes after GR treatment; elucidation of transcriptomes and physiological alterations associated with delayed senescence of the drd1-6 mutant; comparison of gene expression of DNA methylation-related enzymes in GR-treated rice callus and Arabidopsis; investigation of germination capacity and of low-temperature, salinity and drought stress resistance in the drd1-6 epigenetic mutant; and investigation of aberrations in DNA methylation depending on the dose rate of gamma radiation.

  5. The sva package for removing batch effects and other unwanted variation in high-throughput experiments.

    Science.gov (United States)

    Leek, Jeffrey T; Johnson, W Evan; Parker, Hilary S; Jaffe, Andrew E; Storey, John D

    2012-03-15

    Heterogeneity and latent variables are now widely recognized as major sources of bias and variability in high-throughput experiments. The best-known sources of latent variation in genomic experiments are batch effects, which arise when samples are processed on different days, in different groups or by different people. However, there are also a large number of other variables that may have a major impact on high-throughput measurements. Here we describe the sva package for identifying, estimating and removing unwanted sources of variation in high-throughput experiments. The sva package supports surrogate variable estimation with the sva function, direct adjustment for known batch effects with the ComBat function and adjustment for batch and latent variables in prediction problems with the fsva function.
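    The sva package itself is an R/Bioconductor tool. As a language-neutral illustration of the simplest form of known-batch adjustment, the sketch below performs location-only per-gene mean centering by batch on simulated data; this is a deliberate simplification, not the empirical-Bayes ComBat model or surrogate variable estimation.

```python
# Location-only batch adjustment: re-center each batch to the per-gene grand mean.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_samples = 500, 12
batch = np.array([0] * 6 + [1] * 6)                  # two processing days

expr = rng.normal(8.0, 1.0, size=(n_genes, n_samples))
expr[:, batch == 1] += rng.normal(1.5, 0.2, size=(n_genes, 1))   # additive batch shift

adjusted = expr.copy()
grand_mean = expr.mean(axis=1, keepdims=True)
for b in np.unique(batch):
    cols = batch == b
    batch_mean = expr[:, cols].mean(axis=1, keepdims=True)
    adjusted[:, cols] -= (batch_mean - grand_mean)   # remove the per-gene batch offset

before = abs(expr[:, batch == 0].mean() - expr[:, batch == 1].mean())
after = abs(adjusted[:, batch == 0].mean() - adjusted[:, batch == 1].mean())
print(f"between-batch mean difference: {before:.2f} before, {after:.2f} after adjustment")
```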

  6. Recent progress using high-throughput sequencing technologies in plant molecular breeding.

    Science.gov (United States)

    Gao, Qiang; Yue, Guidong; Li, Wenqi; Wang, Junyi; Xu, Jiaohui; Yin, Ye

    2012-04-01

    High-throughput sequencing is a revolutionary technological innovation in DNA sequencing. This technology has an ultra-low cost per base of sequencing and an overwhelmingly high data output. High-throughput sequencing has brought novel research methods and solutions to the research fields of genomics and post-genomics. Furthermore, this technology is leading to a new molecular breeding revolution that has landmark significance for scientific research and enables us to launch multi-level, multi-faceted, and multi-extent studies in the fields of crop genetics, genomics, and crop breeding. In this paper, we review progress in the application of high-throughput sequencing technologies to plant molecular breeding studies. © 2012 Institute of Botany, Chinese Academy of Sciences.

  7. Plant phenomics and high-throughput phenotyping: accelerating rice functional genomics using multidisciplinary technologies.

    Science.gov (United States)

    Yang, Wanneng; Duan, Lingfeng; Chen, Guoxing; Xiong, Lizhong; Liu, Qian

    2013-05-01

    The functional analysis of the rice genome has entered into a high-throughput stage, and a project named RICE2020 has been proposed to determine the function of every gene in the rice genome by the year 2020. However, as compared with the robustness of genetic techniques, the evaluation of rice phenotypic traits is still performed manually, and the process is subjective, inefficient, destructive and error-prone. To overcome these limitations and help rice phenomics more closely parallel rice genomics, reliable, automatic, multifunctional, and high-throughput phenotyping platforms should be developed. In this article, we discuss the key plant phenotyping technologies, particularly photonics-based technologies, and then introduce their current applications in rice (wheat or barley) phenomics. We also note the major challenges in rice phenomics and are confident that these reliable high-throughput phenotyping tools will give plant scientists new perspectives on the information encoded in the rice genome. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. High-Throughput 3D Tumor Culture in a Recyclable Microfluidic Platform.

    Science.gov (United States)

    Liu, Wenming; Wang, Jinyi

    2017-01-01

    Three-dimensional (3D) tumor culture miniaturized platforms are of importance to biomimetic model construction and pathophysiological studies. Controllable and high-throughput production of 3D tumors is desirable to make cell-based manipulation dynamic and efficient at the micro-scale. Moreover, a reusable 3D culture platform is convenient for researchers. In this chapter, we describe a dynamically controlled 3D tumor manipulation and culture method using pneumatic microstructure-based microfluidics, which has potential high-throughput applications in the fields of tissue engineering, tumor biology, and clinical medicine.

  9. Applications of high-throughput plant phenotyping to study nutrient use efficiency.

    Science.gov (United States)

    Berger, Bettina; de Regt, Bas; Tester, Mark

    2013-01-01

    Remote sensing and spectral reflectance measurements of plants have long been used to assess the growth and nutrient status of plants in a noninvasive manner. With improved imaging and computer technologies, these approaches can now be used at high throughput for more extensive physiological and genetic studies. Here, we present an example of how high-throughput imaging can be used to study the growth of plants exposed to different nutrient levels. In addition, the color of the leaves can be used to estimate leaf chlorophyll and the nitrogen status of the plant.
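    The leaf-color readout mentioned above can be illustrated with a simple "excess green" index computed from an RGB image as a rough proxy for chlorophyll and nitrogen status. The index choice, threshold, and synthetic image below are illustrative assumptions, not the workflow used in the chapter; a real pipeline would segment plants from background more carefully.

```python
# Excess-green (ExG) index over a synthetic RGB image as a greenness proxy.
import numpy as np

rng = np.random.default_rng(2)
img = rng.uniform(0.1, 0.4, size=(100, 100, 3))   # synthetic RGB image, values in [0, 1]
img[30:70, 30:70, 1] += 0.4                       # a patch of green "leaf"

r, g, b = img[..., 0], img[..., 1], img[..., 2]
excess_green = 2 * g - r - b                      # ExG vegetation index
leaf_mask = excess_green > 0.2                    # crude segmentation threshold

mean_greenness = excess_green[leaf_mask].mean() if leaf_mask.any() else float("nan")
print(f"leaf pixels: {leaf_mask.sum()}, mean ExG over leaf: {mean_greenness:.2f}")
```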

  10. A platform for high-throughput screening of DNA-encoded catalyst libraries in organic solvents.

    Science.gov (United States)

    Hook, K Delaney; Chambers, John T; Hili, Ryan

    2017-10-01

    We have developed a novel high-throughput screening platform for the discovery of small-molecule catalysts for bond-forming reactions. The method employs an in vitro selection for bond formation using amphiphilic DNA-encoded small molecules charged with reaction substrate, which enables selections to be conducted in a variety of organic or aqueous solvents. Using the amine-catalysed aldol reaction as a catalytic model and high-throughput DNA sequencing as a selection read-out, we demonstrate the 1200-fold enrichment of a known aldol catalyst from a library of 16.7 million uncompetitive library members.
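    The sequencing read-out reduces to a fold-enrichment calculation on barcode frequencies before and after selection. The sketch below illustrates that calculation with hypothetical barcode names and counts, not data from the study.

```python
# Fold enrichment of DNA-encoded library members from read counts.
naive_counts = {"cat_proline_like": 12, "inert_A": 950, "inert_B": 1010}      # hypothetical
selected_counts = {"cat_proline_like": 4800, "inert_A": 310, "inert_B": 290}  # hypothetical

naive_total = sum(naive_counts.values())
selected_total = sum(selected_counts.values())

for member in naive_counts:
    f_before = naive_counts[member] / naive_total      # frequency in the naive library
    f_after = selected_counts[member] / selected_total  # frequency after selection
    print(f"{member}: {f_after / f_before:.1f}-fold enrichment")   # values < 1 indicate depletion
```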

  11. Perspective: Composition–structure–property mapping in high-throughput experiments: Turning data into knowledge

    Directory of Open Access Journals (Sweden)

    Jason R. Hattrick-Simpers

    2016-05-01

    Full Text Available With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. One unresolved issue is the transformation of structural characterization data into phase mappings. This difficulty is related to the complex information present in diffraction and spectroscopic data and its variation with composition and processing. We review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have on the generation of phase diagrams and beyond.

  12. Macro-to-micro structural proteomics: native source proteins for high-throughput crystallization.

    Science.gov (United States)

    Totir, Monica; Echols, Nathaniel; Nanao, Max; Gee, Christine L; Moskaleva, Alisa; Gradia, Scott; Iavarone, Anthony T; Berger, James M; May, Andrew P; Zubieta, Chloe; Alber, Tom

    2012-01-01

    Structural biology and structural genomics projects routinely rely on recombinantly expressed proteins, but many proteins and complexes are difficult to obtain by this approach. We investigated native source proteins for high-throughput protein crystallography applications. The Escherichia coli proteome was fractionated, purified, crystallized, and structurally characterized. Macro-scale fermentation and fractionation were used to subdivide the soluble proteome into 408 unique fractions of which 295 fractions yielded crystals in microfluidic crystallization chips. Of the 295 crystals, 152 were selected for optimization, diffraction screening, and data collection. Twenty-three structures were determined, four of which were novel. This study demonstrates the utility of native source proteins for high-throughput crystallography.

  13. HTP-NLP: A New NLP System for High Throughput Phenotyping.

    Science.gov (United States)

    Schlegel, Daniel R; Crowner, Chris; Lehoullier, Frank; Elkin, Peter L

    2017-01-01

    Secondary use of clinical data for research requires a method to process the data quickly so that researchers can rapidly extract cohorts. We present two advances in the High Throughput Phenotyping NLP system which support the aim of truly high-throughput processing of clinical data, inspired by a characterization of the linguistic properties of such data. Semantic indexing to store and generalize partially-processed results and the use of compositional expressions for ungrammatical text are discussed, along with a set of initial timing results for the system.

  14. High-throughput exposure modeling to support prioritization of chemicals in personal care products

    DEFF Research Database (Denmark)

    Csiszar, Susan A.; Ernstoff, Alexi; Fantke, Peter

    2016-01-01

    We demonstrate the application of a high-throughput modeling framework to estimate exposure to chemicals used in personal care products (PCPs). As a basis for estimating exposure, we use the product intake fraction (PiF), defined as the mass of chemical taken by an individual or population per mass...... intakes were associated with body lotion. Bioactive doses derived from high-throughput in vitro toxicity data were combined with the estimated PiFs to demonstrate an approach to estimate bioactive equivalent chemical content and to screen chemicals for risk....
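    A hedged sketch of how a PiF-based screen works: the estimated intake is the product of the chemical mass in the product per use and the PiF, and it is then compared against a bioactive dose derived from in vitro data. All numbers below are placeholders, not values from the study.

```python
# Screening-level intake estimate from a product intake fraction (PiF).
product_mass_g = 4.0              # body lotion applied per use (placeholder)
chemical_weight_fraction = 0.001  # 0.1% of the formulation (placeholder)
uses_per_day = 1.0
pif = 0.45                        # fraction of the applied chemical taken in (placeholder)

intake_mg_per_day = product_mass_g * 1000 * chemical_weight_fraction * uses_per_day * pif
bioactive_dose_mg_per_day = 25.0  # hypothetical dose equivalent to in vitro bioactivity

margin = bioactive_dose_mg_per_day / intake_mg_per_day
print(f"estimated intake: {intake_mg_per_day:.2f} mg/day, screening margin: {margin:.0f}x")
```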

  15. Epigenomic programing: a future way to health?

    Directory of Open Access Journals (Sweden)

    Boris A. Shenderov

    2014-05-01

    Full Text Available It is now generally accepted that the ‘central genome dogma’ (i.e. a causal chain going from DNA to RNA to proteins and downstream to biological functions) should be replaced by the ‘fluid genome dogma’, that is, complex feed-forward and feed-back cycles that interconnect organism and environment by epigenomic programing – and reprograming – throughout life and at all levels, sometimes also down the generations. The epigenomic programing is the net sum of interactions derived from one's own metabolism and microbiota as well as external factors such as diet, pharmaceuticals, environmental compounds, and so on. A growing body of results indicates that many chronic metabolic and degenerative disorders and diseases – often called ‘civilization diseases’ – are initiated and/or influenced by non-optimal epigenomic programing, often taking place early in life. In this context, the first 1,000 days of life – from conception into early infancy – is often called the most important period of life. The following sections present some major mechanisms for epigenomic programing as well as some factors assumed to be of importance. The need for more information about one's own genome and metagenome, as well as the substantial lack of adequate dietary and environmental databases, are also commented upon. However, the mere fact that we can influence epigenomic health programing opens up the way for prophylactic and therapeutic interventions. The authors underline the importance of creating a ‘Human Gut Microbiota and Epigenomic Platform’ in order to facilitate interdisciplinary collaborations among scientists and clinicians engaged in host microbial ecology, nutrition, metagenomics, epigenomics and metabolomics as well as in disease epidemiology, prevention and treatment.

  16. Integrating Epigenomics into Pharmacogenomic Studies.

    Science.gov (United States)

    Zhang, Wei; Huang, R Stephanie; Dolan, M Eileen

    2008-11-01

    The goal of personalized medicine is to recommend drug treatment based on an individual's genetic makeup. Pharmacogenomic studies utilize two main approaches: candidate gene and whole-genome. Both approaches analyze genetic variants such as single nucleotide polymorphisms (SNPs) to identify associations with drug response. In addition to DNA sequence variations, non-genetic but heritable epigenetic systems have also been implicated in regulating gene expression that could influence drug response. The International HapMap Project lymphoblastoid cell lines (LCLs) have been used to study genetic determinants responsible for expression variation and drug response. Recent studies have demonstrated that common genetic variants, including both SNPs and copy number variants (CNVs) account for a substantial fraction of natural variation in gene expression. Given the critical role played by DNA methylation in gene regulation and the fact that DNA methylation is currently the most studied epigenetic system, we suggest that profiling the variation in DNA methylation in the HapMap samples will provide new insights into the regulation of gene expression as well as the mechanisms of individual drug response at a new level of complexity. Epigenomics will substantially add to our knowledge of how genetics explains gene expression and pharmacogenomics.

  17. Integrating epigenomics into pharmacogenomic studies

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2008-11-01

    Full Text Available Wei Zhang, R Stephanie Huang, M Eileen Dolan, Section of Hematology/Oncology, Department of Medicine, The University of Chicago, Chicago, IL 60637, USA. Abstract: The goal of personalized medicine is to recommend drug treatment based on an individual’s genetic makeup. Pharmacogenomic studies utilize two main approaches: candidate gene and whole-genome. Both approaches analyze genetic variants such as single nucleotide polymorphisms (SNPs) to identify associations with drug response. In addition to DNA sequence variations, nongenetic but heritable epigenetic systems have also been implicated in regulating gene expression that could influence drug response. The International HapMap Project lymphoblastoid cell lines (LCLs) have been used to study genetic determinants responsible for expression variation and drug response. Recent studies have demonstrated that common genetic variants, including both SNPs and copy number variants (CNVs), account for a substantial fraction of natural variation in gene expression. Given the critical role played by DNA methylation in gene regulation and the fact that DNA methylation is currently the most studied epigenetic system, we suggest that profiling the variation in DNA methylation in the HapMap samples will provide new insights into the regulation of gene expression as well as the mechanisms of individual drug response at a new level of complexity. Epigenomics will substantially add to our knowledge of how genetics explains gene expression and pharmacogenomics. Keywords: epigenetics, DNA methylation, gene expression, pharmacogenomics, HapMap, drug response

  18. DNA methylation profiling of the human major histocompatibility complex: a pilot study for the human epigenome project.

    Directory of Open Access Journals (Sweden)

    Vardhman K Rakyan

    2004-12-01

    Full Text Available The Human Epigenome Project aims to identify, catalogue, and interpret genome-wide DNA methylation phenomena. Occurring naturally on cytosine bases at cytosine-guanine dinucleotides, DNA methylation is intimately involved in diverse biological processes and the aetiology of many diseases. Differentially methylated cytosines give rise to distinct profiles, thought to be specific for gene activity, tissue type, and disease state. The identification of such methylation variable positions will significantly improve our understanding of genome biology and our ability to diagnose disease. Here, we report the results of the pilot study for the Human Epigenome Project entailing the methylation analysis of the human major histocompatibility complex. This study involved the development of an integrated pipeline for high-throughput methylation analysis using bisulphite DNA sequencing, discovery of methylation variable positions, epigenotyping by matrix-assisted laser desorption/ionisation mass spectrometry, and development of an integrated public database available at http://www.epigenome.org. Our analysis of DNA methylation levels within the major histocompatibility complex, including regulatory exonic and intronic regions associated with 90 genes in multiple tissues and individuals, reveals a bimodal distribution of methylation profiles (i.e., the vast majority of the analysed regions were either hypo- or hypermethylated), tissue specificity, inter-individual variation, and correlation with independent gene expression data.

  19. DNA Methylation Profiling of the Human Major Histocompatibility Complex: A Pilot Study for the Human Epigenome Project

    Science.gov (United States)

    Rakyan, Vardhman K; Hildmann, Thomas; Novik, Karen L; Lewin, Jörn; Tost, Jörg; Cox, Antony V; Andrews, T. Dan; Howe, Kevin L; Otto, Thomas; Olek, Alexander; Fischer, Judith; Gut, Ivo G; Berlin, Kurt

    2004-01-01

    The Human Epigenome Project aims to identify, catalogue, and interpret genome-wide DNA methylation phenomena. Occurring naturally on cytosine bases at cytosine–guanine dinucleotides, DNA methylation is intimately involved in diverse biological processes and the aetiology of many diseases. Differentially methylated cytosines give rise to distinct profiles, thought to be specific for gene activity, tissue type, and disease state. The identification of such methylation variable positions will significantly improve our understanding of genome biology and our ability to diagnose disease. Here, we report the results of the pilot study for the Human Epigenome Project entailing the methylation analysis of the human major histocompatibility complex. This study involved the development of an integrated pipeline for high-throughput methylation analysis using bisulphite DNA sequencing, discovery of methylation variable positions, epigenotyping by matrix-assisted laser desorption/ionisation mass spectrometry, and development of an integrated public database available at http://www.epigenome.org. Our analysis of DNA methylation levels within the major histocompatibility complex, including regulatory exonic and intronic regions associated with 90 genes in multiple tissues and individuals, reveals a bimodal distribution of methylation profiles (i.e., the vast majority of the analysed regions were either hypo- or hypermethylated), tissue specificity, inter-individual variation, and correlation with independent gene expression data. PMID:15550986
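    The bimodality reported above suggests a simple summary: classify each analysed region as hypo-, hyper-, or intermediately methylated and tabulate the counts. The methylation fractions and thresholds in the sketch below are illustrative, not values from the pilot study.

```python
# Classify per-region methylation levels and summarize the bimodal profile.
import numpy as np

# Fraction methylated per amplicon/region (illustrative values).
methylation = np.array([0.03, 0.05, 0.92, 0.88, 0.07, 0.95, 0.10, 0.85, 0.45, 0.02])

def classify(m, low=0.25, high=0.75):
    if m <= low:
        return "hypomethylated"
    if m >= high:
        return "hypermethylated"
    return "intermediate"

labels = [classify(m) for m in methylation]
for category in ("hypomethylated", "intermediate", "hypermethylated"):
    n = labels.count(category)
    print(f"{category}: {n}/{len(labels)} regions")
```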

  20. Liquid Phase Multiplex High-Throughput Screening of Metagenomic Libraries Using p-Nitrophenyl-Linked Substrates for Accessory Lignocellulosic Enzymes.

    Science.gov (United States)

    Smart, Mariette; Huddy, Robert J; Cowan, Don A; Trindade, Marla

    2017-01-01

    To access the genetic potential contained in large metagenomic libraries, suitable high-throughput functional screening methods are required. Here we describe a high-throughput screening approach which enables the rapid identification of metagenomic library clones expressing functional accessory lignocellulosic enzymes. The high-throughput nature of this method hinges on the multiplexing of both the E. coli metagenomic library clones and the colorimetric p-nitrophenyl-linked substrates, which allows for the simultaneous screening for β-glucosidases, β-xylosidases, and α-L-arabinofuranosidases. This method is readily automated and compatible with high-throughput robotic screening systems.

  1. The toxicological application of transcriptomics and epigenomics in zebrafish and other teleosts.

    Science.gov (United States)

    Williams, Tim D; Mirbahai, Leda; Chipman, J Kevin

    2014-03-01

    Zebrafish (Danio rerio) is one of a number of teleost fish species frequently employed in toxicology. Toxico-genomics determines global transcriptomic responses to chemical exposures and can predict their effects. It has been applied successfully within aquatic toxicology to assist in chemical testing, determination of mechanisms and environmental monitoring. Moreover, the related field of toxico-epigenomics, that determines chemical-induced changes in DNA methylation, histone modifications and micro-RNA expression, is emerging as a valuable contribution to understanding mechanisms of both adaptive and adverse responses. Zebrafish has proven a useful and convenient model species for both transcriptomic and epigenetic toxicological studies. Despite zebrafish's dominance in other areas of fish biology, alternative fish species are used extensively in toxico-genomics. The main reason for this is that environmental monitoring generally focuses on species native to the region of interest. We are starting to see advances in the integration of high-throughput screening, omics techniques and bioinformatics together with more traditional indicator endpoints that are relevant to regulators. Integration of such approaches with high-throughput testing of zebrafish embryos, leading to the discovery of adverse outcome pathways, promises to make a major contribution to ensuring the safety of chemicals in the environment.

  2. High-throughput analysis of the impact of antibiotics on the human intestinal microbiota composition

    NARCIS (Netherlands)

    Ladirat, S.E.; Schols, H.A.; Nauta, A.; Schoterman, M.H.C.; Keijser, B.J.F.; Montijn, R.C.; Gruppen, H.; Schuren, F.H.J.

    2013-01-01

    Antibiotic treatments can lead to a disruption of the human microbiota. In this in-vitro study, the impact of antibiotics on adult intestinal microbiota was monitored in a new high-throughput approach: a fermentation screening-platform was coupled with a phylogenetic microarray analysis

  3. Complementing high-throughput X-ray powder diffraction data with quantum-chemical calculations

    DEFF Research Database (Denmark)

    Naelapaa, Kaisa; van de Streek, Jacco; Rantanen, Jukka

    2012-01-01

    High-throughput crystallisation and characterisation platforms provide an efficient means to carry out solid-form screening during the pre-formulation phase. To determine the crystal structures of identified new solid phases, however, usually requires independent crystallisation trials to produce...... obtained only during high-energy processing such as spray drying or milling....

  4. Reverse Phase Protein Arrays for High-Throughput Protein Measurements in Mammospheres

    DEFF Research Database (Denmark)

    Pedersen, Marlene Lemvig; Block, Ines; List, Markus

    Protein Array (RPPA)-based readout format integrated into robotic siRNA screening. This technique would allow post-screening high-throughput quantification of protein changes. Recently, breast cancer stem cells (BCSCs) have attracted much attention, as a tumor- and metastasis-driving subpopulation...

  5. High-throughput open source computational methods for genetics and genomics

    NARCIS (Netherlands)

    Prins, J.C.P.

    2015-01-01

    Biology is increasingly data driven by virtue of the development of high-throughput technologies, such as DNA and RNA sequencing. Computational biology and bioinformatics are scientific disciplines that cross-over between the disciplines of biology, informatics and statistics; which is clearly

  6. ToxCast Workflow: High-throughput screening assay data processing, analysis and management (SOT)

    Science.gov (United States)

    US EPA’s ToxCast program is generating data in high-throughput screening (HTS) and high-content screening (HCS) assays for thousands of environmental chemicals, for use in developing predictive toxicity models. Currently the ToxCast screening program includes over 1800 unique c...

  7. Microfluidic Impedance Flow Cytometry Enabling High-Throughput Single-Cell Electrical Property Characterization

    OpenAIRE

    Jian Chen; Chengcheng Xue; Yang Zhao; Deyong Chen; Min-Hsien Wu; Junbo Wang

    2015-01-01

    This article reviews recent developments in microfluidic impedance flow cytometry for high-throughput electrical property characterization of single cells. Four major perspectives of microfluidic impedance flow cytometry for single-cell characterization are included in this review: (1) early developments of microfluidic impedance flow cytometry for single-cell electrical property characterization; (2) microfluidic impedance flow cytometry with enhanced sensitivity; (3) microfluidic impedance ...

  8. High-throughput transformation of Saccharomyces cerevisiae using liquid handling robots.

    Directory of Open Access Journals (Sweden)

    Guangbo Liu

    Full Text Available Saccharomyces cerevisiae (budding yeast) is a powerful eukaryotic model organism ideally suited to high-throughput genetic analyses, which time and again has yielded insights that further our understanding of cell biology processes conserved in humans. Lithium Acetate (LiAc) transformation of yeast with DNA for the purposes of exogenous protein expression (e.g., plasmids) or genome mutation (e.g., gene mutation, deletion, epitope tagging) is a useful and long established method. However, a reliable and optimized high throughput transformation protocol that runs almost no risk of human error has not been described in the literature. Here, we describe such a method that is broadly transferable to most liquid handling high-throughput robotic platforms, which are now commonplace in academic and industry settings. Using our optimized method, we are able to comfortably transform approximately 1200 individual strains per day, allowing complete transformation of typical genomic yeast libraries within 6 days. In addition, use of our protocol for gene knockout purposes also provides a potentially quicker, easier and more cost-effective approach to generating collections of double mutants than the popular and elegant synthetic genetic array methodology. In summary, our methodology will be of significant use to anyone interested in high throughput molecular and/or genetic analysis of yeast.

  9. The Power of High-Throughput Experimentation in Homogeneous Catalysis Research for Fine Chemicals

    NARCIS (Netherlands)

    Vries, Johannes G. de; Vries, André H.M. de

    2003-01-01

    The use of high-throughput experimentation (HTE) in homogeneous catalysis research for the production of fine chemicals is an important breakthrough. Whereas in the past stoichiometric chemistry was often preferred because of time-to-market constraints, HTE allows catalytic solutions to be found

  10. High throughput phenotyping to accelerate crop breeding and monitoring of diseases in the field.

    Science.gov (United States)

    Shakoor, Nadia; Lee, Scott; Mockler, Todd C

    2017-08-01

    Effective implementation of technology that facilitates accurate and high-throughput screening of thousands of field-grown lines is critical for accelerating crop improvement and breeding strategies for higher yield and disease tolerance. Progress in the development of field-based high throughput phenotyping methods has advanced considerably in the last 10 years through technological progress in sensor development and high-performance computing. Here, we review recent advances in high throughput field phenotyping technologies designed to inform the genetics of quantitative traits, including crop yield and disease tolerance. Successful application of phenotyping platforms to advance crop breeding and identify and monitor disease requires: (1) high resolution of imaging and environmental sensors; (2) quality data products that facilitate computer vision, machine learning and GIS; (3) capacity infrastructure for data management and analysis; and (4) automated environmental data collection. Accelerated breeding for agriculturally relevant crop traits is key to the development of improved varieties and is critically dependent on high-resolution, high-throughput field-scale phenotyping technologies that can efficiently discriminate better performing lines within a larger population and across multiple environments. Copyright © 2017. Published by Elsevier Ltd.

  11. Functional characterisation of human glycine receptors in a fluorescence-based high throughput screening assay

    DEFF Research Database (Denmark)

    Jensen, Anders A.

    2005-01-01

    receptors in this assay were found to be in good agreement with those from electrophysiology studies of the receptors expressed in Xenopus oocytes or mammalian cell lines. Hence, this high throughput screening assay will be of great use in future pharmacological studies of glycine receptors, particular...

  12. A high throughput DNA extraction method with high yield and quality

    Directory of Open Access Journals (Sweden)

    Xin Zhanguo

    2012-07-01

    Full Text Available Abstract Background Preparation of large quantity and high quality genomic DNA from a large number of plant samples is a major bottleneck for most genetic and genomic analyses, such as, genetic mapping, TILLING (Targeting Induced Local Lesion IN Genome), and next-generation sequencing directly from sheared genomic DNA. A variety of DNA preparation methods and commercial kits are available. However, they are either low throughput, low yield, or costly. Here, we describe a method for high throughput genomic DNA isolation from sorghum [Sorghum bicolor (L.) Moench] leaves and dry seeds with high yield, high quality, and affordable cost. Results We developed a high throughput DNA isolation method by combining a high yield CTAB extraction method with an improved cleanup procedure based on MagAttract kit. The method yielded large quantity and high quality DNA from both lyophilized sorghum leaves and dry seeds. The DNA yield was improved by nearly 30 fold with 4 times less consumption of MagAttract beads. The method can also be used in other plant species, including cotton leaves and pine needles. Conclusion A high throughput system for DNA extraction from sorghum leaves and seeds was developed and validated. The main advantages of the method are low cost, high yield, high quality, and high throughput. One person can process two 96-well plates in a working day at a cost of $0.10 per sample of magnetic beads plus other consumables that other methods will also need.

  13. Predicting gene function through systematic analysis and quality assessment of high-throughput data.

    Science.gov (United States)

    Kemmeren, Patrick; Kockelkorn, Thessa T J P; Bijma, Theo; Donders, Rogier; Holstege, Frank C P

    2005-04-15

    Determining gene function is an important challenge arising from the availability of whole genome sequences. Until recently, approaches based on sequence homology were the only high-throughput method for predicting gene function. Use of high-throughput generated experimental data sets for determining gene function has been limited for several reasons. Here a new approach is presented for integration of high-throughput data sets, leading to prediction of function based on relationships supported by multiple types and sources of data. This is achieved with a database containing 125 different high-throughput data sets describing phenotypes, cellular localizations, protein interactions and mRNA expression levels from Saccharomyces cerevisiae, using a bit-vector representation and information content-based ranking. The approach takes characteristic and qualitative differences between the data sets into account, is highly flexible, efficient and scalable. Database queries result in predictions for 543 uncharacterized genes, based on multiple functional relationships each supported by at least three types of experimental data. Some of these are experimentally verified, further demonstrating their reliability. The results also generate insights into the relative merits of different data types and provide a coherent framework for functional genomic datamining. Free availability over the Internet. f.c.p.holstege@med.uu.nl http://www.genomics.med.uu.nl/pub/pk/comb_gen_network.
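
    The record does not include code; the Python sketch below only illustrates the general idea of a bit-vector gene representation ranked by information-content weighting. The sizes, sparsity and weighting scheme are assumptions for illustration, not the authors' implementation.

    ```python
    # Illustrative only: genes represented as bit-vectors over heterogeneous
    # high-throughput features, with candidate functional partners ranked by
    # an information-content-weighted overlap.
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes, n_features = 1000, 500                       # hypothetical sizes
    profiles = rng.random((n_genes, n_features)) < 0.05   # sparse boolean profiles

    # Rare features are more informative: weight each by -log2(frequency).
    freq = profiles.mean(axis=0).clip(1e-6, 1.0)
    weights = -np.log2(freq)

    def rank_partners(query_idx, k=10):
        """Rank genes by weighted feature overlap with the query gene."""
        overlap = (profiles & profiles[query_idx]).astype(float)
        scores = overlap @ weights
        scores[query_idx] = -np.inf                       # exclude the query itself
        return np.argsort(scores)[::-1][:k]

    print(rank_partners(0))
    ```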

  14. tcpl: The ToxCast Pipeline for High-Throughput Screening Data

    Science.gov (United States)

    Motivation: The large and diverse high-throughput chemical screening efforts carried out by the US EPAToxCast program requires an efficient, transparent, and reproducible data pipeline.Summary: The tcpl R package and its associated MySQL database provide a generalized platform fo...

  15. Application of high-throughput technologies to a structural proteomics-type analysis of Bacillus anthracis

    NARCIS (Netherlands)

    Au, K.; Folkers, G.E.; Kaptein, R.

    2006-01-01

    A collaborative project between two Structural Proteomics In Europe (SPINE) partner laboratories, York and Oxford, aimed at high-throughput (HTP) structure determination of proteins from Bacillus anthracis, the aetiological agent of anthrax and a biomedically important target, is described. Based

  16. Roche genome sequencer FLX based high-throughput sequencing of ancient DNA

    DEFF Research Database (Denmark)

    Alquezar-Planas, David E; Fordyce, Sarah Louise

    2012-01-01

    Since the development of so-called "next generation" high-throughput sequencing in 2005, this technology has been applied to a variety of fields. Such applications include disease studies, evolutionary investigations, and ancient DNA. Each application requires a specialized protocol to ensure tha...

  17. 20170913 - Retrofit Strategies for Incorporating Xenobiotic Metabolism into High Throughput Screening Assays (EMGS)

    Science.gov (United States)

    The US EPA’s ToxCast program is designed to assess chemical perturbations of molecular and cellular endpoints using a variety of high-throughput screening (HTS) assays. However, existing HTS assays have limited or no xenobiotic metabolism which could lead to a mischaracteri...

  18. PLASMA PROTEIN PROFILING AS A HIGH THROUGHPUT TOOL FOR CHEMICAL SCREENING USING A SMALL FISH MODEL

    Science.gov (United States)

    Hudson, R. Tod, Michael J. Hemmer, Kimberly A. Salinas, Sherry S. Wilkinson, James Watts, James T. Winstead, Peggy S. Harris, Amy Kirkpatrick and Calvin C. Walker. In press. Plasma Protein Profiling as a High Throughput Tool for Chemical Screening Using a Small Fish Model (Abstra...

  19. HTPheno: an image analysis pipeline for high-throughput plant phenotyping.

    Science.gov (United States)

    Hartmann, Anja; Czauderna, Tobias; Hoffmann, Roberto; Stein, Nils; Schreiber, Falk

    2011-05-12

    In the last few years high-throughput analysis methods have become state-of-the-art in the life sciences. One of the latest developments is automated greenhouse systems for high-throughput plant phenotyping. Such systems allow the non-destructive screening of plants over a period of time by means of image acquisition techniques. During such screening different images of each plant are recorded and must be analysed by applying sophisticated image analysis algorithms. This paper presents an image analysis pipeline (HTPheno) for high-throughput plant phenotyping. HTPheno is implemented as a plugin for ImageJ, an open source image processing software. It provides the possibility to analyse colour images of plants which are taken in two different views (top view and side view) during a screening. Within the analysis different phenotypical parameters for each plant such as height, width and projected shoot area of the plants are calculated for the duration of the screening. HTPheno is applied to analyse two barley cultivars. HTPheno, an open source image analysis pipeline, supplies a flexible and adaptable ImageJ plugin which can be used for automated image analysis in high-throughput plant phenotyping and therefore to derive new biological insights, such as determination of fitness.

  20. HTPheno: An image analysis pipeline for high-throughput plant phenotyping

    Directory of Open Access Journals (Sweden)

    Stein Nils

    2011-05-01

    Full Text Available Abstract Background In the last few years high-throughput analysis methods have become state-of-the-art in the life sciences. One of the latest developments is automated greenhouse systems for high-throughput plant phenotyping. Such systems allow the non-destructive screening of plants over a period of time by means of image acquisition techniques. During such screening different images of each plant are recorded and must be analysed by applying sophisticated image analysis algorithms. Results This paper presents an image analysis pipeline (HTPheno) for high-throughput plant phenotyping. HTPheno is implemented as a plugin for ImageJ, an open source image processing software. It provides the possibility to analyse colour images of plants which are taken in two different views (top view and side view) during a screening. Within the analysis different phenotypical parameters for each plant such as height, width and projected shoot area of the plants are calculated for the duration of the screening. HTPheno is applied to analyse two barley cultivars. Conclusions HTPheno, an open source image analysis pipeline, supplies a flexible and adaptable ImageJ plugin which can be used for automated image analysis in high-throughput plant phenotyping and therefore to derive new biological insights, such as determination of fitness.
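
    HTPheno itself is an ImageJ plugin; purely as an illustration of the kind of parameters it extracts (projected shoot area, height, width), a minimal Python sketch using a simple excess-green segmentation might look as follows. The threshold and the toy image are invented.

    ```python
    # Minimal sketch (not the HTPheno plugin): derive projected shoot area,
    # height and width from a segmented top- or side-view plant image.
    import numpy as np

    def plant_metrics(rgb, green_excess_threshold=30):
        """Segment 'green' pixels and return (area_px, height_px, width_px)."""
        r = rgb[..., 0].astype(int)
        g = rgb[..., 1].astype(int)
        b = rgb[..., 2].astype(int)
        mask = (2 * g - r - b) > green_excess_threshold   # simple excess-green index
        if not mask.any():
            return 0, 0, 0
        rows, cols = np.where(mask)
        area = int(mask.sum())
        height = int(rows.max() - rows.min() + 1)
        width = int(cols.max() - cols.min() + 1)
        return area, height, width

    # toy image: 100x100 pixels with a 20x10 "plant"
    img = np.zeros((100, 100, 3), dtype=np.uint8)
    img[40:60, 45:55, 1] = 200
    print(plant_metrics(img))
    ```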

  1. High-Throughput Immunogenetics for Clinical and Research Applications in Immunohematology: Potential and Challenges

    NARCIS (Netherlands)

    Langerak, A.W.; Bruggemann, M.; Davi, F.; Darzentas, N.; Dongen, J.J. van; Gonzalez, D.; Cazzaniga, G.; Giudicelli, V.; Lefranc, M.P.; Giraud, M.; Macintyre, E.A.; Hummel, M.; Pott, C.; Groenen, P.J.T.A.; Stamatopoulos, K.

    2017-01-01

    Analysis and interpretation of Ig and TCR gene rearrangements in the conventional, low-throughput way have their limitations in terms of resolution, coverage, and biases. With the advent of high-throughput, next-generation sequencing (NGS) technologies, a deeper analysis of Ig and/or TCR (IG/TR)

  2. Accelerating the design of solar thermal fuel materials through high throughput simulations.

    Science.gov (United States)

    Liu, Yun; Grossman, Jeffrey C

    2014-12-10

    Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.
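
    The screened quantity in such a workflow is the isomerization enthalpy between the ground-state and metastable isomers, often expressed as a gravimetric energy density. The short Python sketch below shows only that unit conversion; the energies, molar mass and function name are invented for illustration.

    ```python
    # Hedged sketch: convert a computed isomerization enthalpy (difference of
    # two electronic energies in Hartree) into an energy density in kJ/g.
    HARTREE_TO_KJ_PER_MOL = 2625.5

    def energy_density_kj_per_g(e_ground_hartree, e_metastable_hartree, molar_mass_g_per_mol):
        """Isomerization enthalpy (kJ/mol) divided by molar mass (g/mol)."""
        delta_h = (e_metastable_hartree - e_ground_hartree) * HARTREE_TO_KJ_PER_MOL
        return delta_h / molar_mass_g_per_mol

    # e.g. an azobenzene-like candidate (illustrative numbers only)
    print(energy_density_kj_per_g(-572.100, -572.080, 182.2))   # ~0.29 kJ/g
    ```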

  3. Investigation of non-halogenated solvent mixtures for high throughput fabrication of polymer:fullerene solar cells

    NARCIS (Netherlands)

    Schmidt-Hansberg, B.; Sanyal, M.; Grossiord, N.; Galagan, Y.O.; Baunach, M.; Klein, M.F.G.; Colsmann, A.; Scharfer, P.; Lemmer, U.; Dosch, H.; Michels, J.J; Barrena, E.; Schabel, W.

    2012-01-01

    The rapidly increasing power conversion efficiencies of organic solar cells are an important prerequisite for low-cost photovoltaics fabricated at high throughput. In this work we suggest indane as a non-halogenated replacement for the commonly used halogenated solvent o-dichlorobenzene. Indane

  4. ESSENTIALS: Software for Rapid Analysis of High Throughput Transposon Insertion Sequencing Data.

    NARCIS (Netherlands)

    Zomer, A.L.; Burghout, P.J.; Bootsma, H.J.; Hermans, P.W.M.; Hijum, S.A.F.T. van

    2012-01-01

    High-throughput analysis of genome-wide random transposon mutant libraries is a powerful tool for (conditional) essential gene discovery. Recently, several next-generation sequencing approaches, e.g. Tn-seq/INseq, HITS and TraDIS, have been developed that accurately map the site of transposon

  5. Development of a thyroperoxidase inhibition assay for high-throughput screening

    Science.gov (United States)

    High-throughput screening (HTPS) assays to detect inhibitors of thyroperoxidase (TPO), the enzymatic catalyst for thyroid hormone (TH) synthesis, are not currently available. Herein we describe the development of a HTPS TPO inhibition assay. Rat thyroid microsomes and a fluores...

  6. Evaluation of Simple and Inexpensive High-Throughput Methods for Phytic Acid Determination

    DEFF Research Database (Denmark)

    Raboy, Victor; Johnson, Amy; Bilyeu, Kristin

    2017-01-01

    High-throughput/low-cost/low-tech methods for phytic acid determination that are sufficiently accurate and reproducible would be of value in plant genetics, crop breeding and in the food and feed industries. Variants of two candidate methods, those described by Vaintraub and Lapteva (Anal Biochem...

  7. High-throughput genotoxicity assay identifies antioxidants as inducers of DNA damage response and cell death

    Science.gov (United States)

    Human ATAD5 is an excellent biomarker for identifying genotoxic compounds because ATAD5 protein levels increase post-transcriptionally following exposure to a variety of DNA damaging agents. Here we report a novel quantitative high-throughput ATAD5-luciferase assay that can moni...

  8. Increasing ecological inference from high throughput sequencing of fungi in the environment through a tagging approach

    Science.gov (United States)

    D. Lee Taylor; Michael G. Booth; Jack W. McFarland; Ian C. Herriott; Niall J. Lennon; Chad Nusbaum; Thomas G. Marr

    2008-01-01

    High throughput sequencing methods are widely used in analyses of microbial diversity but are generally applied to small numbers of samples, which precludes charaterization of patterns of microbial diversity across space and time. We have designed a primer-tagging approach that allows pooling and subsequent sorting of numerous samples, which is directed to...

  9. A high throughput DNA extraction method with high yield and quality.

    Science.gov (United States)

    Xin, Zhanguo; Chen, Junping

    2012-07-28

    Preparation of large quantity and high quality genomic DNA from a large number of plant samples is a major bottleneck for most genetic and genomic analyses, such as, genetic mapping, TILLING (Targeting Induced Local Lesion IN Genome), and next-generation sequencing directly from sheared genomic DNA. A variety of DNA preparation methods and commercial kits are available. However, they are either low throughput, low yield, or costly. Here, we describe a method for high throughput genomic DNA isolation from sorghum [Sorghum bicolor (L.) Moench] leaves and dry seeds with high yield, high quality, and affordable cost. We developed a high throughput DNA isolation method by combining a high yield CTAB extraction method with an improved cleanup procedure based on MagAttract kit. The method yielded large quantity and high quality DNA from both lyophilized sorghum leaves and dry seeds. The DNA yield was improved by nearly 30 fold with 4 times less consumption of MagAttract beads. The method can also be used in other plant species, including cotton leaves and pine needles. A high throughput system for DNA extraction from sorghum leaves and seeds was developed and validated. The main advantages of the method are low cost, high yield, high quality, and high throughput. One person can process two 96-well plates in a working day at a cost of $0.10 per sample of magnetic beads plus other consumables that other methods will also need.

  10. The protein crystallography beamline BW6 at DORIS - automatic operation and high-throughput data collection

    CERN Document Server

    Blume, H; Bourenkov, G P; Kosciesza, D; Bartunik, H D

    2001-01-01

    The wiggler beamline BW6 at DORIS has been optimized for de-novo solution of protein structures on the basis of MAD phasing. Facilities for automatic data collection, rapid data transfer and storage, and online processing have been developed which provide adequate conditions for high-throughput applications, e.g., in structural genomics.

  11. The Complete Automation of Cell Culture: Improvements for High-Throughput and High-Content Screening

    NARCIS (Netherlands)

    Jain, S.; Sondervan, D.; Rizzu, P.; Bochdanovits, Z.; Caminada, D.; Heutink, P.

    2011-01-01

    Genomic approaches provide enormous amounts of raw data with regard to genetic variation, the diversity of RNA species, and protein complement. High-throughput (HT) and high-content (HC) cellular screens are ideally suited to contextualize the information gathered from other "omic" approaches into

  12. A high throughput platform for understanding the influence of excipients on physical and chemical stability

    DEFF Research Database (Denmark)

    Raijada, Dhara; Cornett, Claus; Rantanen, Jukka

    2013-01-01

    The present study puts forward a miniaturized high-throughput platform to understand influence of excipient selection and processing on the stability of a given drug compound. Four model drugs (sodium naproxen, theophylline, amlodipine besylate and nitrofurantoin) and ten different excipients were...

  13. A High-Throughput MALDI-TOF Mass Spectrometry-Based Assay of Chitinase Activity

    Science.gov (United States)

    A high-throughput MALDI-TOF mass spectrometric assay is described for assay of chitolytic enzyme activity. The assay uses unmodified chitin oligosaccharide substrates, and is readily achievable on a microliter scale (2 µL total volume, containing 2 µg of substrate and 1 ng of protein). The speed a...

  14. High-throughput siRNA screening applied to the ubiquitin-proteasome system

    DEFF Research Database (Denmark)

    Poulsen, Esben Guldahl; Nielsen, Sofie V.; Pietras, Elin J.

    2016-01-01

    that are not genetically tractable as, for instance, a yeast model system. Here, we describe a method relying on high-throughput cellular imaging of cells transfected with a targeted siRNA library to screen for components involved in degradation of a protein of interest. This method is a rapid and cost-effective tool...

  15. High-throughput shotgun lipidomics by quadrupole time-of-flight mass spectrometry

    DEFF Research Database (Denmark)

    Ståhlman, Marcus; Ejsing, Christer S.; Tarasov, Kirill

    2009-01-01

    we describe a novel high-throughput shotgun lipidomic platform based on 96-well robot-assisted lipid extraction, automated sample infusion by microfluidic-based nanoelectrospray ionization, and quantitative multiple precursor ion scanning analysis on a quadrupole time-of-flight mass spectrometer...

  16. High-throughput verification of transcriptional starting sites by Deep-RACE

    DEFF Research Database (Denmark)

    Olivarius, Signe; Plessy, Charles; Carninci, Piero

    2009-01-01

    We present a high-throughput method for investigating the transcriptional starting sites of genes of interest, which we named Deep-RACE (Deep–rapid amplification of cDNA ends). Taking advantage of the latest sequencing technology, it allows the parallel analysis of multiple genes and is free...

  17. High throughput generated micro-aggregates of chondrocytes stimulate cartilage formation in vitro and in vivo

    NARCIS (Netherlands)

    Moreira Teixeira, Liliana; Leijten, Jeroen Christianus Hermanus; Sobral, J.; Jin, R.; van Apeldoorn, Aart A.; Feijen, Jan; van Blitterswijk, Clemens; Dijkstra, Pieter J.; Karperien, Hermanus Bernardus Johannes

    2012-01-01

    Cell-based cartilage repair strategies such as matrix-induced autologous chondrocyte implantation (MACI) could be improved by enhancing cell performance. We hypothesised that micro-aggregates of chondrocytes generated in high-throughput prior to implantation in a defect could stimulate cartilaginous

  18. DNA from buccal swabs suitable for high-throughput SNP multiplex analysis.

    Science.gov (United States)

    McMichael, Gai L; Gibson, Catherine S; O'Callaghan, Michael E; Goldwater, Paul N; Dekker, Gustaaf A; Haan, Eric A; MacLennan, Alastair H

    2009-12-01

    We sought a convenient and reliable method for collection of genetic material that is inexpensive and noninvasive and suitable for self-collection and mailing and a compatible, commercial DNA extraction protocol to meet quantitative and qualitative requirements for high-throughput single nucleotide polymorphism (SNP) multiplex analysis on an automated platform. Buccal swabs were collected from 34 individuals as part of a pilot study to test commercially available buccal swabs and DNA extraction kits. DNA was quantified on a spectrofluorometer with Picogreen dsDNA prior to testing the DNA integrity with predesigned SNP multiplex assays. Based on the pilot study results, the Catch-All swabs and Isohelix buccal DNA isolation kit were selected for our high-throughput application and extended to a further 1140 samples as part of a large cohort study. The average DNA yield in the pilot study (n=34) was 1.94 μg ± 0.54 with a 94% genotyping pass rate. For the high-throughput application (n=1140), the average DNA yield was 2.44 μg ± 1.74 with a ≥93% genotyping pass rate. The Catch-All buccal swabs are a convenient and cost-effective alternative to blood sampling. Combined with the Isohelix buccal DNA isolation kit, they provided DNA of sufficient quantity and quality for high-throughput SNP multiplex analysis.

  19. High-Throughput Dietary Exposure Predictions for Chemical Migrants from Food Packaging Materials

    Science.gov (United States)

    United States Environmental Protection Agency researchers have developed a Stochastic Human Exposure and Dose Simulation High-Throughput (SHEDS-HT) model for use in prioritization of chemicals under the ExpoCast program. In this research, new methods were implemented in SHEDS-HT...

  20. New approach for high-throughput screening of drug activity on Plasmodium liver stages.

    NARCIS (Netherlands)

    Gego, A.; Silvie, O.; Franetich, J.F.; Farhati, K.; Hannoun, L.; Luty, A.J.F.; Sauerwein, R.W.; Boucheix, C.; Rubinstein, E.; Mazier, D.

    2006-01-01

    Plasmodium liver stages represent potential targets for antimalarial prophylactic drugs. Nevertheless, there is a lack of molecules active on these stages. We have now developed a new approach for the high-throughput screening of drug activity on Plasmodium liver stages in vitro, based on an

  1. High-throughput tri-colour flow cytometry technique to assess Plasmodium falciparum parasitaemia in bioassays

    DEFF Research Database (Denmark)

    Tiendrebeogo, Regis W; Adu, Bright; Singh, Susheel K

    2014-01-01

    distinction of early ring stages of Plasmodium falciparum from uninfected red blood cells (uRBC) remains a challenge. METHODS: Here, a high-throughput, three-parameter (tri-colour) flow cytometry technique based on mitotracker red dye, the nucleic acid dye coriphosphine O (CPO) and the leucocyte marker CD45...

  2. High throughput system for magnetic manipulation of cells, polymers, and biomaterials

    Science.gov (United States)

    Spero, Richard Chasen; Vicci, Leandra; Cribb, Jeremy; Bober, David; Swaminathan, Vinay; O’Brien, E. Timothy; Rogers, Stephen L.; Superfine, R.

    2008-01-01

    In the past decade, high throughput screening (HTS) has changed the way biochemical assays are performed, but manipulation and mechanical measurement of micro- and nanoscale systems have not benefited from this trend. Techniques using microbeads (particles ∼0.1–10 μm) show promise for enabling high throughput mechanical measurements of microscopic systems. We demonstrate instrumentation to magnetically drive microbeads in a biocompatible, multiwell magnetic force system. It is based on commercial HTS standards and is scalable to 96 wells. Cells can be cultured in this magnetic high throughput system (MHTS). The MHTS can apply independently controlled forces to 16 specimen wells. Force calibrations demonstrate forces in excess of 1 nN, predicted force saturation as a function of pole material, and power-law dependence of F ∼ r^(−2.7±0.1). We employ this system to measure the stiffness of SR2+ Drosophila cells. MHTS technology is a key step toward a high throughput screening system for micro- and nanoscale biophysical experiments. PMID:19044357
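
    The quoted power-law calibration can be illustrated with a small fitting sketch: the exponent of F ∼ r^n is recovered by linear regression in log-log space. The data below are synthetic; the prefactor, distance range and noise level are assumptions.

    ```python
    # Sketch: recover a power-law exponent F ~ r^n from calibration data by
    # fitting a straight line in log-log space (synthetic data, n = -2.7 assumed).
    import numpy as np

    rng = np.random.default_rng(1)
    r = np.linspace(5e-6, 50e-6, 30)                        # pole-bead distance, m (made up)
    F = 1e-18 * r**-2.7 * rng.normal(1.0, 0.05, r.size)     # force with 5% multiplicative noise

    slope, intercept = np.polyfit(np.log(r), np.log(F), 1)
    print(f"fitted exponent: {slope:.2f}")                  # expect roughly -2.7
    ```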

  3. The Impact of Data Fragmentation on High-Throughput Clinical Phenotyping

    Science.gov (United States)

    Wei, Weiqi

    2012-01-01

    Subject selection is essential and has become the rate-limiting step for harvesting knowledge to advance healthcare through clinical research. Present manual approaches inhibit researchers from conducting deep and broad studies and drawing confident conclusions. High-throughput clinical phenotyping (HTCP), a recently proposed approach, leverages…

  4. A high-throughput method for GMO multi-detection using a microfluidic dynamic array

    NARCIS (Netherlands)

    Brod, F.C.A.; Dijk, van J.P.; Voorhuijzen, M.M.; Dinon, A.Z.; Guimarães, L.H.S.; Scholtens, I.M.J.; Arisi, A.C.M.; Kok, E.J.

    2014-01-01

    The ever-increasing production of genetically modified crops generates a demand for high-throughput DNA-based methods for the enforcement of genetically modified organisms (GMO) labelling requirements. The application of standard real-time PCR will become increasingly costly with the growth of the

  5. High-throughput assessment of context-dependent effects of chromatin proteins

    NARCIS (Netherlands)

    Brueckner, L. (Laura); Van Arensbergen, J. (Joris); Akhtar, W. (Waseem); L. Pagie (Ludo); B. van Steensel (Bas)

    2016-01-01

    Background: Chromatin proteins control gene activity in a concerted manner. We developed a high-throughput assay to study the effects of the local chromatin environment on the regulatory activity of a protein of interest. The assay combines a previously reported multiplexing strategy

  6. Establishment of integrated protocols for automated high throughput kinetic chlorophyll fluorescence analyses.

    Science.gov (United States)

    Tschiersch, Henning; Junker, Astrid; Meyer, Rhonda C; Altmann, Thomas

    2017-01-01

    Automated plant phenotyping has been established as a powerful new tool in studying plant growth, development and response to various types of biotic or abiotic stressors. Respective facilities mainly apply non-invasive imaging based methods, which enable the continuous quantification of the dynamics of plant growth and physiology during developmental progression. However, especially for plants of larger size, integrative, automated and high throughput measurements of complex physiological parameters such as photosystem II efficiency determined through kinetic chlorophyll fluorescence analysis remain a challenge. We present the technical installations and the establishment of experimental procedures that allow the integrated high throughput imaging of all commonly determined PSII parameters for small and large plants using kinetic chlorophyll fluorescence imaging systems (FluorCam, PSI) integrated into automated phenotyping facilities (Scanalyzer, LemnaTec). Besides determination of the maximum PSII efficiency, we focused on implementation of high throughput amenable protocols recording PSII operating efficiency (ΦPSII). Using the presented setup, this parameter is shown to be reproducibly measured in differently sized plants despite the corresponding variation in distance between plants and light source that caused small differences in incident light intensity. Values of ΦPSII obtained with the automated chlorophyll fluorescence imaging setup correlated very well with conventionally determined data using a spot-measuring chlorophyll fluorometer. The established high throughput operating protocols enable the screening of up to 1080 small and 184 large plants per hour, respectively. The application of the implemented high throughput protocols is demonstrated in screening experiments performed with large Arabidopsis and maize populations assessing natural variation in PSII efficiency. The incorporation of imaging systems suitable for kinetic chlorophyll
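
    The PSII parameters mentioned here follow standard chlorophyll-fluorescence arithmetic, e.g. Fv/Fm = (Fm − F0)/Fm and ΦPSII = (Fm′ − Fs)/Fm′. The short Python sketch below only restates that arithmetic; the fluorescence values are invented.

    ```python
    # Standard fluorescence-parameter arithmetic behind such screens
    # (formulas are standard; the numbers here are invented).
    def fv_fm(f0, fm):
        """Maximum PSII efficiency from dark-adapted minimal (F0) and maximal (Fm) fluorescence."""
        return (fm - f0) / fm

    def phi_psii(fs, fm_prime):
        """PSII operating efficiency from steady-state (Fs) and light-adapted maximal (Fm') fluorescence."""
        return (fm_prime - fs) / fm_prime

    print(fv_fm(300, 1500))      # ~0.80, typical for unstressed plants
    print(phi_psii(600, 1100))   # ~0.45
    ```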

  7. The folding landscape of the epigenome

    Science.gov (United States)

    Olarte-Plata, Juan D.; Haddad, Noelle; Vaillant, Cédric; Jost, Daniel

    2016-04-01

    The role of the spatial organization of chromatin in gene regulation is a long-standing but still open question. Experimentally it has been shown that the genome is segmented into epigenomic chromatin domains that are organized into hierarchical sub-nuclear spatial compartments. However, whether this non-random spatial organization only reflects or indeed contributes—and how—to the regulation of genome function remains to be elucidated. To address this question, we recently proposed a quantitative description of the folding properties of the fly genome as a function of its epigenomic landscape using a polymer model with epigenomic-driven attractions. We propose in this article, to characterize more deeply the physical properties of the 3D epigenome folding. Using an efficient lattice version of the original block copolymer model, we study the structural and dynamical properties of chromatin and show that the size of epigenomic domains and asymmetries in sizes and in interaction strengths play a critical role in the chromatin organization. Finally, we discuss the biological implications of our findings. In particular, our predictions are quantitatively compatible with experimental data and suggest a different mean of self-interaction in euchromatin versus heterochromatin domains.
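
    As a hedged illustration of the modelling idea (not the authors' implementation), a lattice block-copolymer energy term in which monomers carrying the same epigenomic state attract can be written as follows; the chain, states and interaction strengths are invented.

    ```python
    # Minimal sketch of an epigenomic block-copolymer contact energy: monomers
    # with the same chromatin state attract when they sit on neighbouring
    # lattice sites. Parameter values are illustrative only.
    import numpy as np
    from itertools import product

    def contact_energy(positions, states, eps_same=-1.0, eps_diff=0.0):
        """Sum pairwise contact energies for a chain on a cubic lattice.

        positions : (N, 3) integer lattice coordinates
        states    : length-N sequence of epigenomic state labels
        """
        energy = 0.0
        n = len(positions)
        for i, j in product(range(n), range(n)):
            if j <= i + 1:                                     # skip self, duplicates and bonded neighbours
                continue
            if np.abs(positions[i] - positions[j]).sum() == 1: # nearest-neighbour lattice contact
                energy += eps_same if states[i] == states[j] else eps_diff
        return energy

    pos = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
    print(contact_energy(pos, ["A", "B", "A", "A"]))           # -1.0 for the single A-A contact
    ```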

  8. Linking the Epigenome with Exposure Effects and ...

    Science.gov (United States)

    The epigenome is a dynamic mediator of gene expression that shapes the way that cells, tissues, and organisms respond to their environment. Initial studies in the emerging field of “toxicoepigenetics” have described either the impact of an environmental exposure on the epigenome or the association of epigenetic signatures with the onset or progression of disease; however, the majority of these pioneering studies examined the relationship between discrete epigenetic modifications and the effects of a single environmental factor. While these data provide critical blocks with which we construct our understanding of the role of the epigenome in susceptibility and disease, they are akin to individual letters in a complex alphabet that is used to compose the language of the epigenome. Advancing the use of epigenetic data to gain a more comprehensive understanding of the mechanisms underlying exposure effects, identify susceptible populations, and inform the next generation of risk management depends on our ability to integrate these data in a way that accounts for their cumulative impact on gene regulation. Here we will review current examples demonstrating how associations between the epigenetic impacts of intrinsic factors, such as age, genetics, and sex, and environmental exposures shape the epigenome and susceptibility to exposure effects and disease. We will also demonstrate how the “epigenetic seed and soil” model can be used as a conceptua

  9. Protocol: A high-throughput DNA extraction system suitable for conifers

    Directory of Open Access Journals (Sweden)

    Rajora Om P

    2008-08-01

    Full Text Available Abstract Background High throughput DNA isolation from plants is a major bottleneck for most studies requiring large sample sizes. A variety of protocols have been developed for DNA isolation from plants. However, many species, including conifers, have high contents of secondary metabolites that interfere with the extraction process or the subsequent analysis steps. Here, we describe a procedure for high-throughput DNA isolation from conifers. Results We have developed a high-throughput DNA extraction protocol for conifers using an automated liquid handler and modifying the Qiagen MagAttract Plant Kit protocol. The modifications involve changes to the buffer system and improvements to the protocol, so that it almost doubles the number of samples processed per kit, which significantly reduces the overall costs. We describe two versions of the protocol: one for medium-throughput (MTP) and another for high-throughput (HTP) DNA isolation. The HTP version works from start to end in the industry-standard 96-well format, while the MTP version provides higher DNA yields per sample processed. We have successfully used the protocol for DNA extraction and genotyping of thousands of individuals of several spruce and a pine species. Conclusion A high-throughput system for DNA extraction from conifer needles and seeds has been developed and validated. The quality of the isolated DNA was comparable with that obtained from two commonly used methods: the silica-spin column and the classic CTAB protocol. Our protocol provides a fully automatable and cost effective solution for processing large numbers of conifer samples.

  10. Epigenomics of Development in Populus

    Energy Technology Data Exchange (ETDEWEB)

    Strauss, Steve; Freitag, Michael; Mockler, Todd

    2013-01-10

    We conducted research to determine the role of epigenetic modifications during tree development using poplar (Populus trichocarpa), a model woody feedstock species. Using methylated DNA immunoprecipitation (MeDIP) or chromatin immunoprecipitation (ChIP), followed by high-throughput sequencing, we analyzed DNA and histone methylation patterns in the P. trichocarpa genome in relation to four biological processes: bud dormancy and release, mature organ maintenance, in vitro organogenesis, and methylation suppression. Our project is now completed. We have 1) produced 22 transgenic events for a gene involved in DNA methylation suppression and studied its phenotypic consequences; 2) completed sequencing of methylated DNA from eleven target tissues in wildtype P. trichocarpa; 3) updated our customized poplar genome browser using the open-source software tools (2.13) and (V2.2) of the P. trichocarpa genome; 4) produced summary data for genome methylation in P. trichocarpa, including distribution of methylation across chromosomes and in and around genes; 5) employed bioinformatic and statistical methods to analyze differences in methylation patterns among tissue types; 6) used bisulfite sequencing of selected target genes to confirm bioinformatics and sequencing results, and gain a higher-resolution view of methylation at selected genes; and 7) compared methylation patterns to expression using available microarray data. Our main findings of biological significance are the identification of extensive regions of the genome that display developmental variation in DNA methylation; highly distinctive gene-associated methylation profiles in reproductive tissues, particularly male catkins; a strong whole genome/all tissue inverse association of methylation at gene bodies and promoters with gene expression; a lack of evidence that tissue specificity of gene expression is associated with gene methylation; and evidence that genome methylation is a significant impediment to tissue
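
    Purely as an illustration of the kind of bookkeeping such MeDIP-seq analyses involve (not the project's pipeline), the sketch below counts reads over promoters versus gene bodies; the coordinates, read positions and promoter window are invented.

    ```python
    # Illustrative sketch: count MeDIP-seq read starts over promoters vs gene
    # bodies, the raw input for comparing methylation between tissues.
    import numpy as np

    genes = {"geneA": (10_000, 14_000), "geneB": (50_000, 53_000)}   # (start, end), invented
    read_starts = np.array([9_600, 9_800, 10_200, 12_500, 52_100, 52_900])
    PROMOTER_BP = 1_000                                              # assumed promoter window

    for name, (start, end) in genes.items():
        promoter = int(np.sum((read_starts >= start - PROMOTER_BP) & (read_starts < start)))
        body = int(np.sum((read_starts >= start) & (read_starts < end)))
        print(f"{name}: promoter={promoter} body={body}")
    ```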

  11. Automated High Throughput Protein Crystallization Screening at Nanoliter Scale and Protein Structural Study on Lactate Dehydrogenase

    Energy Technology Data Exchange (ETDEWEB)

    Li, Fenglei [Iowa State Univ., Ames, IA (United States)

    2006-08-09

    The purposes of our research were: (1) To develop an economical, easy to use, automated, high throughput system for large scale protein crystallization screening. (2) To develop a new protein crystallization method with high screening efficiency, low protein consumption and complete compatibility with high throughput screening system. (3) To determine the structure of lactate dehydrogenase complexed with NADH by x-ray protein crystallography to study its inherent structural properties. Firstly, we demonstrated large scale protein crystallization screening can be performed in a high throughput manner with low cost, easy operation. The overall system integrates liquid dispensing, crystallization and detection and serves as a whole solution to protein crystallization screening. The system can dispense protein and multiple different precipitants in nanoliter scale and in parallel. A new detection scheme, native fluorescence, has been developed in this system to form a two-detector system with a visible light detector for detecting protein crystallization screening results. This detection scheme has capability of eliminating common false positives by distinguishing protein crystals from inorganic crystals in a high throughput and non-destructive manner. The entire system from liquid dispensing, crystallization to crystal detection is essentially parallel, high throughput and compatible with automation. The system was successfully demonstrated by lysozyme crystallization screening. Secondly, we developed a new crystallization method with high screening efficiency, low protein consumption and compatibility with automation and high throughput. In this crystallization method, a gas permeable membrane is employed to achieve the gentle evaporation required by protein crystallization. Protein consumption is significantly reduced to nanoliter scale for each condition and thus permits exploring more conditions in a phase diagram for given amount of protein. In addition

  12. Accurate Classification of Protein Subcellular Localization from High-Throughput Microscopy Images Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Tanel Pärnamaa

    2017-05-01

    Full Text Available High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy.
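
    The authors' network is an 11-layer architecture trained on yeast microscopy data; the following is only a much smaller PyTorch sketch of the image-to-compartment classification setup, with the channel count, crop size and number of classes assumed for illustration.

    ```python
    # Not the authors' network: a minimal convolutional classifier that maps
    # single-cell image crops to subcellular-compartment labels.
    import torch
    import torch.nn as nn

    NUM_CLASSES = 12          # hypothetical number of compartments

    model = nn.Sequential(
        nn.Conv2d(2, 16, kernel_size=3, padding=1),   # e.g. GFP + reference channel (assumed)
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, NUM_CLASSES),         # assumes 64x64 input crops
    )

    x = torch.randn(8, 2, 64, 64)     # a batch of single-cell crops
    print(model(x).shape)             # torch.Size([8, 12])
    ```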

  13. Nanostructured biosensing platform-shadow edge lithography for high-throughput nanofabrication.

    Science.gov (United States)

    Bai, John G; Yeo, Woon-Hong; Chung, Jae-Hyun

    2009-02-07

    One of the critical challenges in nanostructured biosensors is to manufacture an addressable array of nanopatterns at low cost. The addressable array (1) provides multiplexing for biomolecule detection and (2) enables direct detection of biomolecules without labeling and amplification. To fabricate such an array of nanostructures, current nanolithography methods are limited by the lack of either high throughput or high resolution. This paper presents a high-resolution and high-throughput nanolithography method using the compensated shadow effect in high-vacuum evaporation. The approach enables the fabrication of uniform nanogaps down to 20 nm in width across a 100 mm silicon wafer. The nanogap pattern is used as a template for the routine fabrication of zero-, one-, and two-dimensional nanostructures with a high yield. The method can facilitate the fabrication of nanostructured biosensors on a wafer scale at a low manufacturing cost.

  14. An automated system for high-throughput single cell-based breeding

    Science.gov (United States)

    Yoshimoto, Nobuo; Kida, Akiko; Jie, Xu; Kurokawa, Masaya; Iijima, Masumi; Niimi, Tomoaki; Maturana, Andrés D.; Nikaido, Itoshi; Ueda, Hiroki R.; Tatematsu, Kenji; Tanizawa, Katsuyuki; Kondo, Akihiko; Fujii, Ikuo; Kuroda, Shun'ichi

    2013-01-01

    When establishing the most appropriate cells from the huge numbers of a cell library for practical use of cells in regenerative medicine and production of various biopharmaceuticals, cell heterogeneity often found in an isogenic cell population limits the refinement of clonal cell culture. Here, we demonstrated high-throughput screening of the most suitable cells in a cell library by an automated undisruptive single-cell analysis and isolation system, followed by expansion of isolated single cells. This system enabled establishment of the most suitable cells, such as embryonic stem cells with the highest expression of the pluripotency marker Rex1 and hybridomas with the highest antibody secretion, which could not be achieved by conventional high-throughput cell screening systems (e.g., a fluorescence-activated cell sorter). This single cell-based breeding system may be a powerful tool to analyze stochastic fluctuations and delineate their molecular mechanisms. PMID:23378922

  15. A high-throughput, multi-channel photon-counting detector with picosecond timing

    CERN Document Server

    Lapington, J S; Miller, G M; Ashton, T J R; Jarron, P; Despeisse, M; Powolny, F; Howorth, J; Milnes, J

    2009-01-01

    High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies; small pore microchanne...

  16. Robust, high-throughput solution structural analyses by small angle X-ray scattering (SAXS)

    Energy Technology Data Exchange (ETDEWEB)

    Hura, Greg L.; Menon, Angeli L.; Hammel, Michal; Rambo, Robert P.; Poole II, Farris L.; Tsutakawa, Susan E.; Jenney Jr, Francis E.; Classen, Scott; Frankel, Kenneth A.; Hopkins, Robert C.; Yang, Sungjae; Scott, Joseph W.; Dillard, Bret D.; Adams, Michael W. W.; Tainer, John A.

    2009-07-20

    We present an efficient pipeline enabling high-throughput analysis of protein structure in solution with small angle X-ray scattering (SAXS). Our SAXS pipeline combines automated sample handling of microliter volumes, temperature and anaerobic control, rapid data collection and data analysis, and couples structural analysis with automated archiving. We subjected 50 representative proteins, mostly from Pyrococcus furiosus, to this pipeline and found that 30 were multimeric structures in solution. SAXS analysis allowed us to distinguish aggregated and unfolded proteins, define global structural parameters and oligomeric states for most samples, identify shapes and similar structures for 25 unknown structures, and determine envelopes for 41 proteins. We believe that high-throughput SAXS is an enabling technology that may change the way that structural genomics research is done.
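
    One step such a pipeline automates is Guinier analysis, ln I(q) = ln I(0) − q²Rg²/3, to obtain the radius of gyration. The Python sketch below fits synthetic data only; a real pipeline would additionally check the q·Rg validity range and data quality.

    ```python
    # Sketch: estimate Rg from the Guinier region of a SAXS curve by linear
    # regression of ln I(q) against q^2 (synthetic, noise-free data).
    import numpy as np

    rg_true, i0_true = 25.0, 1000.0                     # Angstrom, arbitrary units (invented)
    q = np.linspace(0.005, 0.05, 40)                    # 1/Angstrom, keeps q*Rg < ~1.3
    I = i0_true * np.exp(-(q * rg_true) ** 2 / 3.0)

    slope, intercept = np.polyfit(q**2, np.log(I), 1)
    rg_fit = np.sqrt(-3.0 * slope)
    print(f"Rg ~ {rg_fit:.1f} A, I(0) ~ {np.exp(intercept):.0f}")
    ```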

  17. The promise and challenge of high-throughput sequencing of the antibody repertoire

    Science.gov (United States)

    Georgiou, George; Ippolito, Gregory C; Beausang, John; Busse, Christian E; Wardemann, Hedda; Quake, Stephen R

    2014-01-01

    Efforts to determine the antibody repertoire encoded by B cells in the blood or lymphoid organs using high-throughput DNA sequencing technologies have been advancing at an extremely rapid pace and are transforming our understanding of humoral immune responses. Information gained from high-throughput DNA sequencing of immunoglobulin genes (Ig-seq) can be applied to detect B-cell malignancies with high sensitivity, to discover antibodies specific for antigens of interest, to guide vaccine development and to understand autoimmunity. Rapid progress in the development of experimental protocols and informatics analysis tools is helping to reduce sequencing artifacts, to achieve more precise quantification of clonal diversity and to extract the most pertinent biological information. That said, broader application of Ig-seq, especially in clinical settings, will require the development of a standardized experimental design framework that will enable the sharing and meta-analysis of sequencing data generated by different laboratories. PMID:24441474

  18. tcpl: the ToxCast pipeline for high-throughput screening data.

    Science.gov (United States)

    Filer, Dayne L; Kothiya, Parth; Setzer, R Woodrow; Judson, Richard S; Martin, Matthew T

    2017-02-15

    Large high-throughput screening (HTS) efforts are widely used in drug development and chemical toxicity screening. Wide use and integration of these data can benefit from an efficient, transparent and reproducible data pipeline. Summary: The tcpl R package and its associated MySQL database provide a generalized platform for efficiently storing, normalizing and dose-response modeling of large high-throughput and high-content chemical screening data. The novel dose-response modeling algorithm has been tested against millions of diverse dose-response series, and robustly fits data with outliers and cytotoxicity-related signal loss. tcpl is freely available on the Comprehensive R Archive Network under the GPL-2 license. martin.matt@epa.gov.
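
    tcpl is an R package; the Python sketch below is not its algorithm but a generic example of the Hill-type dose-response fitting such pipelines perform, on invented concentration-response data.

    ```python
    # Generic dose-response (Hill) fit, illustrative of what HTS pipelines
    # automate at scale; data and starting values are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(conc, top, ac50, gain):
        """Hill model: response rising from 0 to 'top' with midpoint ac50."""
        return top / (1.0 + (ac50 / conc) ** gain)

    conc = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100])       # uM
    resp = np.array([1, 3, 2, 8, 20, 45, 70, 78, 80], dtype=float)   # % activity

    params, _ = curve_fit(hill, conc, resp, p0=[80.0, 3.0, 1.0])
    top, ac50, gain = params
    print(f"top={top:.1f}  AC50={ac50:.2f} uM  Hill slope={gain:.2f}")
    ```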

  19. Current developments in high-throughput analysis for microalgae cellular contents.

    Science.gov (United States)

    Lee, Tsung-Hua; Chang, Jo-Shu; Wang, Hsiang-Yu

    2013-11-01

    Microalgae have emerged as one of the most promising feedstocks for biofuels and bio-based chemical production. However, due to the lack of effective tools enabling rapid and high-throughput analysis of the content of microalgae biomass, the efficiency of screening and identification of microalgae with desired functional components from the natural environment is usually quite low. Moreover, the real-time monitoring of the production of target components from microalgae is also difficult. Recently, research efforts focusing on overcoming this limitation have started. In this review, the recent development of high-throughput methods for analyzing microalgae cellular contents is summarized. The future prospects and impacts of these detection methods in microalgae-related processing and industries are also addressed. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Quantitative monitoring of Arabidopsis thaliana growth and development using high-throughput plant phenotyping.

    Science.gov (United States)

    Arend, Daniel; Lange, Matthias; Pape, Jean-Michel; Weigelt-Fischer, Kathleen; Arana-Ceballos, Fernando; Mücke, Ingo; Klukas, Christian; Altmann, Thomas; Scholz, Uwe; Junker, Astrid

    2016-08-16

    With the implementation of novel automated, high throughput methods and facilities in recent years, plant phenomics has developed into a highly interdisciplinary research domain integrating biology, engineering and bioinformatics. Here we present a dataset of a non-invasive high throughput plant phenotyping experiment, which uses image- and image-analysis-based approaches to monitor the growth and development of 484 Arabidopsis thaliana plants (thale cress). The result is a comprehensive dataset of images and extracted phenotypical features. Such datasets require detailed documentation, standardized description of experimental metadata as well as sustainable data storage and publication in order to ensure the reproducibility of experiments, data reuse and comparability among the scientific community. Therefore, the dataset presented here has been annotated using the standardized ISA-Tab format, considering the recently published recommendations for the semantic description of plant phenotyping experiments.

  1. Automated High-Throughput Root Phenotyping of Arabidopsis thaliana Under Nutrient Deficiency Conditions.

    Science.gov (United States)

    Satbhai, Santosh B; Göschl, Christian; Busch, Wolfgang

    2017-01-01

    The central question of genetics is how a genotype determines the phenotype of an organism. Genetic mapping approaches are a key for finding answers to this question. In particular, genome-wide association (GWA) studies have been rapidly adopted to study the architecture of complex quantitative traits. This was only possible due to the improvement of high-throughput and low-cost phenotyping methodologies. In this chapter we provide a detailed protocol for obtaining root trait data from the model species Arabidopsis thaliana using the semiautomated, high-throughput phenotyping pipeline BRAT (Busch-lab Root Analysis Toolchain) for early root growth under the stress condition of iron deficiency. Extracted root trait data can be directly used to perform GWA mapping using the freely accessible web application GWAPP to identify marker polymorphisms associated with the phenotype of interest.
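
    GWA mapping itself is done here with GWAPP; the naive Python sketch below only illustrates the underlying marker-by-marker association idea (a real analysis would use mixed models to correct for population structure). All data are simulated.

    ```python
    # Naive per-SNP association scan on simulated genotypes and a root trait.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_plants, n_snps = 200, 1000
    genotypes = rng.integers(0, 3, size=(n_plants, n_snps))                 # 0/1/2 allele dosage
    root_length = rng.normal(50, 5, n_plants) + 2.0 * genotypes[:, 42]      # SNP 42 made causal

    pvals = np.array([
        stats.linregress(genotypes[:, j], root_length).pvalue
        for j in range(n_snps)
    ])
    print("top SNP:", int(pvals.argmin()), "p =", pvals.min())
    ```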

  2. Development of New Sensing Materials Using Combinatorial and High-Throughput Experimentation

    Science.gov (United States)

    Potyrailo, Radislav A.; Mirsky, Vladimir M.

    New sensors with improved performance characteristics are needed for applications as diverse as bedside continuous monitoring, tracking of environmental pollutants, monitoring of food and water quality, monitoring of chemical processes, and safety in industrial, consumer, and automotive settings. Typical requirements in sensor improvement are selectivity, long-term stability, sensitivity, response time, reversibility, and reproducibility. Design of new sensing materials is the important cornerstone in the effort to develop new sensors. Often, sensing materials are too complex to predict their performance quantitatively in the design stage. Thus, combinatorial and high-throughput experimentation methodologies provide an opportunity to generate new required data to discover new sensing materials and/or to optimize existing material compositions. The goal of this chapter is to provide an overview of the key concepts of experimental development of sensing materials using combinatorial and high-throughput experimentation tools, and to promote additional fruitful interactions between computational scientists and experimentalists.

  3. High-throughput clone screening followed by protein expression cross-check: A visual assay platform.

    Science.gov (United States)

    Bose, Partha Pratim; Kumar, Prakash

    2017-01-01

    In high-throughput biotechnology and structural biology, molecular cloning is an essential prerequisite for attaining high yields of recombinant protein. However, a rapid, cost-effective and easy clone-screening protocol is still required to identify colonies carrying the desired insert, along with a cross-check method to certify the expression of the desired protein as the end product. We report an easy, fast, sensitive and cheap visual clone-screening and protein-expression cross-check protocol employing a gold nanoparticle-based plasmonic detection phenomenon. This is a non-gel, non-PCR-based visual detection technique, which can be used for high-throughput clone screening followed by confirmation of expression of the desired protein. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Engineering High Affinity Protein-Protein Interactions Using a High-Throughput Microcapillary Array Platform.

    Science.gov (United States)

    Lim, Sungwon; Chen, Bob; Kariolis, Mihalis S; Dimov, Ivan K; Baer, Thomas M; Cochran, Jennifer R

    2017-02-17

    Affinity maturation of protein-protein interactions requires iterative rounds of protein library generation and high-throughput screening to identify variants that bind with increased affinity to a target of interest. We recently developed a multipurpose protein engineering platform, termed μSCALE (Microcapillary Single Cell Analysis and Laser Extraction). This technology enables high-throughput screening of libraries of millions of cells expressing protein variants based on their binding properties or functional activity. Here, we demonstrate the first use of the μSCALE platform for affinity maturation of a protein-protein binding interaction. In this proof-of-concept study, we engineered an extracellular domain of the Axl receptor tyrosine kinase to bind more tightly to its ligand Gas6. Within 2 weeks, two iterative rounds of library generation and screening resulted in engineered Axl variants with a 50-fold decrease in kinetic dissociation rate, highlighting the use of μSCALE as a new tool for directed evolution.

  5. Post-high-throughput screening analysis: an empirical compound prioritization scheme.

    Science.gov (United States)

    Oprea, Tudor I; Bologa, Cristian G; Edwards, Bruce S; Prossnitz, Eric R; Sklar, Larry A

    2005-08-01

    An empirical scheme to evaluate and prioritize screening hits from high-throughput screening (HTS) is proposed. Negative scores are given when chemotypes found in the HTS hits are present in annotated databases such as MDDR and WOMBAT, or when the compounds test positive in toxicity-related experiments reported in TOXNET. Positive scores are given for higher measured biological activities, for testing negative in the toxicity-related literature, and for good overlap when profiled against drug-related properties. Particular emphasis is placed on estimating aqueous solubility to prioritize in vivo experiments. This empirical scheme is given as an illustration to assist the decision-making process in selecting chemotypes and individual compounds for further experimentation when confronted with multiple hits from high-throughput experiments. The decision-making process is discussed for a set of G-protein coupled receptor antagonists and validated on a literature example for dihydrofolate reductase inhibition.
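
    As a rough illustration of how such an empirical prioritization could be coded, the sketch below combines penalty and reward terms into a single score per hit. The field names, weights and thresholds are hypothetical placeholders, not values taken from the publication.

        # Hedged sketch of an empirical HTS hit-prioritization score (Python).
        # All weights, field names and thresholds are illustrative assumptions.
        def prioritize_hit(hit):
            score = 0.0
            # Penalize chemotypes already annotated in databases such as MDDR/WOMBAT.
            if hit.get("chemotype_in_annotated_db"):
                score -= 2.0
            # Penalize or reward reported toxicity findings (TOXNET-style records).
            if hit.get("toxicity_positive"):
                score -= 3.0
            elif hit.get("toxicity_negative"):
                score += 1.0
            # Reward higher measured biological activity (assumed pIC50-like field).
            score += 0.5 * hit.get("activity_pic50", 0.0)
            # Reward estimated aqueous solubility above an assumed threshold (log S > -4).
            if hit.get("log_solubility", -10.0) > -4.0:
                score += 1.0
            return score

        hits = [
            {"id": "cmpd-1", "chemotype_in_annotated_db": True, "toxicity_negative": True,
             "activity_pic50": 7.2, "log_solubility": -3.5},
            {"id": "cmpd-2", "toxicity_positive": True, "activity_pic50": 6.0,
             "log_solubility": -5.1},
        ]
        for h in sorted(hits, key=prioritize_hit, reverse=True):
            print(h["id"], round(prioritize_hit(h), 2))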

  6. High-throughput investigation of catalysts for JP-8 fuel cracking to liquefied petroleum gas.

    Science.gov (United States)

    Bedenbaugh, John E; Kim, Sungtak; Sasmaz, Erdem; Lauterbach, Jochen

    2013-09-09

    Portable power technologies for military applications necessitate the production of fuels similar to LPG from existing feedstocks. Catalytic cracking of military jet fuel to form a mixture of C₂-C₄ hydrocarbons was investigated using high-throughput experimentation. Cracking experiments were performed in a gas-phase, 16-sample high-throughput reactor. Zeolite ZSM-5 catalysts with low Si/Al ratios (≤25) demonstrated the highest production of C₂-C₄ hydrocarbons at moderate reaction temperatures (623-823 K). ZSM-5 catalysts were optimized for JP-8 cracking activity to LPG through varying reaction temperature and framework Si/Al ratio. The reducing atmosphere required during catalytic cracking resulted in coking of the catalyst and a commensurate decrease in conversion rate. Rare earth metal promoters for ZSM-5 catalysts were screened to reduce coking deactivation rates, while noble metal promoters reduced onset temperatures for coke burnoff regeneration.

  7. A High-Throughput Microfluidic Platform for Mammalian Cell Transfection and Culturing

    Science.gov (United States)

    Woodruff, Kristina; Maerkl, Sebastian J.

    2016-01-01

    Mammalian synthetic biology could be augmented through the development of high-throughput microfluidic systems that integrate cellular transfection, culturing, and imaging. We created a microfluidic chip that cultures cells and implements 280 independent transfections at up to 99% efficiency. The chip can perform co-transfections, in which the number of cells expressing each protein and the average protein expression level can be precisely tuned as a function of input DNA concentration and synthetic gene circuits can be optimized on chip. We co-transfected four plasmids to test a histidine kinase signaling pathway and mapped the dose dependence of this network on the level of one of its constituents. The chip is readily integrated with high-content imaging, enabling the evaluation of cellular behavior and protein expression dynamics over time. These features make the transfection chip applicable to high-throughput mammalian protein and synthetic biology studies. PMID:27030663

  8. Macro-to-micro structural proteomics: native source proteins for high-throughput crystallization.

    Directory of Open Access Journals (Sweden)

    Monica Totir

    Full Text Available Structural biology and structural genomics projects routinely rely on recombinantly expressed proteins, but many proteins and complexes are difficult to obtain by this approach. We investigated native source proteins for high-throughput protein crystallography applications. The Escherichia coli proteome was fractionated, purified, crystallized, and structurally characterized. Macro-scale fermentation and fractionation were used to subdivide the soluble proteome into 408 unique fractions of which 295 fractions yielded crystals in microfluidic crystallization chips. Of the 295 crystals, 152 were selected for optimization, diffraction screening, and data collection. Twenty-three structures were determined, four of which were novel. This study demonstrates the utility of native source proteins for high-throughput crystallography.

  9. The complete automation of cell culture: improvements for high-throughput and high-content screening.

    Science.gov (United States)

    Jain, Shushant; Sondervan, David; Rizzu, Patrizia; Bochdanovits, Zoltan; Caminada, Daniel; Heutink, Peter

    2011-09-01

    Genomic approaches provide enormous amounts of raw data with regard to genetic variation, the diversity of RNA species, and protein complement. High-throughput (HT) and high-content (HC) cellular screens are ideally suited to contextualize the information gathered from other "omic" approaches into networks and can be used for the identification of therapeutic targets. Current methods used for HT-HC screens are laborious, time-consuming, and prone to human error. The authors thus developed an automated high-throughput system with an integrated fluorescent imager for HC screens called the AI.CELLHOST. The implementation of user-defined culturing and assay plate setup parameters allows parallel operation of multiple screens in diverse mammalian cell types. The authors demonstrate that such a system is able to successfully maintain different cell lines in culture for extended periods of time as well as significantly increasing throughput, accuracy, and reproducibility of HT and HC screens.

  10. High-throughput characterization of film thickness in thin film materials libraries by digital holographic microscopy.

    Science.gov (United States)

    Lai, Yiu Wai; Krause, Michael; Savan, Alan; Thienhaus, Sigurd; Koukourakis, Nektarios; Hofmann, Martin R; Ludwig, Alfred

    2011-10-01

    A high-throughput characterization technique based on digital holography for mapping film thickness in thin-film materials libraries was developed. Digital holographic microscopy is used for fully automatic measurements of the thickness of patterned films with nanometer resolution. The method has several significant advantages over conventional stylus profilometry: it is contactless and fast, substrate bending is compensated, and the experimental setup is simple. Patterned films prepared by different combinatorial thin-film approaches were characterized to investigate and demonstrate this method. The results show that this technique is valuable for the quick, reliable and high-throughput determination of the film thickness distribution in combinatorial materials research. Importantly, it can also be applied to thin films that have been structured by shadow masking.

  11. Multiple and high-throughput droplet reactions via combination of microsampling technique and microfluidic chip

    KAUST Repository

    Wu, Jinbo

    2012-11-20

    Microdroplets offer unique compartments for accommodating a large number of chemical and biological reactions in tiny volume with precise control. A major concern in droplet-based microfluidics is the difficulty to address droplets individually and achieve high throughput at the same time. Here, we have combined an improved cartridge sampling technique with a microfluidic chip to perform droplet screenings and aggressive reaction with minimal (nanoliter-scale) reagent consumption. The droplet composition, distance, volume (nanoliter to subnanoliter scale), number, and sequence could be precisely and digitally programmed through the improved sampling technique, while sample evaporation and cross-contamination are effectively eliminated. Our combined device provides a simple model to utilize multiple droplets for various reactions with low reagent consumption and high throughput. © 2012 American Chemical Society.

  12. Marine natural product libraries for high-throughput screening and rapid drug discovery.

    Science.gov (United States)

    Bugni, Tim S; Richards, Burt; Bhoite, Leen; Cimbora, Daniel; Harper, Mary Kay; Ireland, Chris M

    2008-06-01

    There is a need for diverse molecular libraries for phenotype-selective and high-throughput screening. To make marine natural products (MNPs) more amenable to newer screening paradigms and shorten discovery time lines, we have created an MNP library characterized online using MS. To test the potential of the library, we screened a subset of the library in a phenotype-selective screen to identify compounds that inhibited the growth of BRCA2-deficient cells.

  13. Geochip: A high throughput genomic tool for linking community structure to functions

    Energy Technology Data Exchange (ETDEWEB)

    Van Nostrand, Joy D.; Liang, Yuting; He, Zhili; Li, Guanghe; Zhou, Jizhong

    2009-01-30

    GeoChip is a comprehensive functional gene array that targets key functional genes involved in the geochemical cycling of N, C, and P, sulfate reduction, metal resistance and reduction, and contaminant degradation. Studies have shown the GeoChip to be a sensitive, specific, and high-throughput tool for microbial community analysis that has the power to link geochemical processes with microbial community structure. However, several challenges remain regarding the development and applications of microarrays for microbial community analysis.

  14. Computational and Statistical Methods for High-Throughput Mass Spectrometry-Based PTM Analysis.

    Science.gov (United States)

    Schwämmle, Veit; Vaudel, Marc

    2017-01-01

    Cell signaling and functions heavily rely on post-translational modifications (PTMs) of proteins. Their high-throughput characterization is thus of utmost interest for multiple biological and medical investigations. In combination with efficient enrichment methods, peptide mass spectrometry analysis allows the quantitative comparison of thousands of modified peptides over different conditions. However, the large and complex datasets produced pose multiple data interpretation challenges, ranging from spectral interpretation to statistical and multivariate analyses. Here, we present a typical workflow to interpret such data.

  15. In Vitro High Throughput Screening, What Next? Lessons from the Screening for Aurora Kinase Inhibitors

    Directory of Open Access Journals (Sweden)

    Thi-My-Nhung Hoang

    2014-02-01

    Full Text Available Based on in vitro assays, we performed a High Throughput Screening (HTS) to identify kinase inhibitors among 10,000 small chemical compounds. In this didactic paper, we describe step-by-step the approach to validate the hits as well as the major pitfalls encountered in the development of active molecules. We propose a decision tree that could be adapted to most in vitro HTS.

  16. In vitro high throughput screening, what next? Lessons from the screening for aurora kinase inhibitors.

    Science.gov (United States)

    Hoang, Thi-My-Nhung; Vu, Hong-Lien; Le, Ly-Thuy-Tram; Nguyen, Chi-Hung; Molla, Annie

    2014-02-27

    Based on in vitro assays, we performed a High Throughput Screening (HTS) to identify kinase inhibitors among 10,000 small chemical compounds. In this didactic paper, we describe step-by-step the approach to validate the hits as well as the major pitfalls encountered in the development of active molecules. We propose a decision tree that could be adapted to most in vitro HTS.

  17. High-Throughput SNP Discovery And Genetic Mapping In Perennial Ryegrass

    DEFF Research Database (Denmark)

    Asp, Torben; Studer, Bruno; Lübberstedt, Thomas

    Gene-associated single nucleotide polymorphisms (SNPs) are of major interest for genome analysis and breeding applications in the key grassland species perennial ryegrass. High-throughput 454 Titanium transcriptome sequencing was performed on two genotypes, which previously have been used...... in the VrnA mapping population. Here we report on large-scale SNP discovery, and the construction of a genetic map enabling QTL fine mapping, map-based cloning, and comparative genomics in perennial ryegrass....

  18. DRABAL: novel method to mine large high-throughput screening assays using Bayesian active learning

    OpenAIRE

    Soufan, Othman; Ba-Alawi, Wail; Afeef, Moataz; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B.

    2016-01-01

    Background: Mining high-throughput screening (HTS) assays is key for enhancing decisions in the area of drug repositioning and drug discovery. However, many challenges are encountered in the process of developing suitable and accurate methods for extracting useful information from these assays. Virtual screening and the wide variety of databases, methods and solutions proposed to date have not completely overcome these challenges. This study is based on a multi-label classification (MLC) technique...

  19. Machine learning in computational biology to accelerate high-throughput protein expression

    DEFF Research Database (Denmark)

    Sastry, Anand; Monk, Jonathan M.; Tegel, Hanna

    2017-01-01

    Motivation: The Human Protein Atlas (HPA) enables the simultaneous characterization of thousands of proteins across various tissues to pinpoint their spatial location in the human body. This has been achieved through transcriptomics and high-throughput immunohistochemistry-based approaches, where over 40 000 unique human protein fragments have been expressed in E. coli. These datasets enable quantitative tracking of entire cellular proteomes and present new avenues for understanding molecular-level properties influencing expression and solubility. Results: Combining computational biology...

  20. A Concept for a Sensitive Micro Total Analysis System for High Throughput Fluorescence Imaging

    OpenAIRE

    Rabner, Arthur; Shacham, Yosi

    2006-01-01

    This paper discusses possible methods for on-chip fluorescent imaging for integrated bio-sensors. The integration of optical and electro-optical accessories, according to the suggested methods, can improve the performance of fluorescence imaging. It can boost the signal-to-background ratio by a few orders of magnitude in comparison to conventional discrete setups. The methods presented in this paper are oriented towards building reproducible arrays for high-throughput micro total analysis...

  1. Patterning cell using Si-stencil for high-throughput assay

    KAUST Repository

    Wu, Jinbo

    2011-01-01

    In this communication, we report a newly developed cell patterning methodology using a silicon-based stencil, which exhibits advantages such as easy handling, reusability, a hydrophilic surface and mature fabrication technologies. Cell arrays obtained by this method were used to investigate cell growth under a temperature gradient, which demonstrated the possibility of studying cell behavior in a high-throughput assay. This journal is © The Royal Society of Chemistry 2011.

  2. Streptococcus mutans Protein Synthesis during Mixed-Species Biofilm Development by High-Throughput Quantitative Proteomics

    OpenAIRE

    Klein, Marlise I.; Xiao, Jin; Lu, Bingwen; Delahunty, Claire M.; Yates, John R.; Koo, Hyun

    2012-01-01

    Biofilms formed on tooth surfaces are comprised of mixed microbiota enmeshed in an extracellular matrix. Oral biofilms are constantly exposed to environmental changes, which influence the microbial composition, matrix formation and expression of virulence. Streptococcus mutans and sucrose are key modulators associated with the evolution of virulent-cariogenic biofilms. In this study, we used a high-throughput quantitative proteomics approach to examine how S. mutans produces relevant proteins...

  3. Statistical Methods for Integrating Multiple Types of High-Throughput Data

    OpenAIRE

    Xie, Yang; Ahn, Chul

    2010-01-01

    Large-scale sequencing, copy number, mRNA and protein data hold great promise for biomedical research, while posing great challenges to data management and data analysis. Integrating different types of high-throughput data from diverse sources can increase the statistical power of data analysis and provide deeper biological understanding. This chapter uses two biomedical research examples to illustrate why there is an urgent need to develop reliable and robust methods for integrating...

  4. Acanthamoeba castellanii: a new high-throughput method for drug screening in vitro

    OpenAIRE

    Ortega-Rivas, Antonio; Padrón, José M; Valladares, Basilio; Elsheikha, Hany M

    2016-01-01

    Despite significant public health impact, there is no specific antiprotozoal therapy for prevention and treatment of Acanthamoeba castellanii infection. There is a need for new and efficient anti-Acanthamoeba drugs that are less toxic and can reduce treatment duration and frequency of administration. In this context a new, rapid and sensitive assay is required for high-throughput activity testing and screening of new therapeutic compounds. A colorimetric assay based on sulforhodamine B (SRB) ...

  5. Predictions versus high-throughput experiments in T-cell epitope discovery: competition or synergy?

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2012-01-01

    Prediction methods as well as experimental methods for T-cell epitope discovery have developed significantly in recent years. High-throughput experimental methods have made it possible to perform full-length protein scans for epitopes restricted to a limited number of MHC alleles. The high costs...... discovery. We expect prediction methods as well as experimental validation methods to continue to develop and that we will soon see clinical trials of products whose development has been guided by prediction methods....

  6. High-Throughput Synthesis, Screening, and Scale-Up of Optimized Conducting Indium Tin Oxides

    OpenAIRE

    Marchand, P; Makwana, N. M.; Tighe, C. J.; Gruar, R. I.; Parkin, I. P.; Carmalt, C. J.; Darr, J. A.

    2016-01-01

    A high-throughput optimization and subsequent scale-up methodology has been used for the synthesis of conductive tin-doped indium oxide (known as ITO) nanoparticles. ITO nanoparticles with up to 12 at % Sn were synthesized using a laboratory scale (15 g/hour by dry mass) continuous hydrothermal synthesis process, and the as-synthesized powders were characterized by powder X-ray diffraction, transmission electron microscopy, energy-dispersive X-ray analysis, and X-ray photoelectron spectroscop...

  7. Deep Mutational Scanning: Library Construction, Functional Selection, and High-Throughput Sequencing.

    Science.gov (United States)

    Starita, Lea M; Fields, Stanley

    2015-08-03

    Deep mutational scanning is a highly parallel method that uses high-throughput sequencing to track changes in >10^5 protein variants before and after selection to measure the effects of mutations on protein function. Here we outline the stages of a deep mutational scanning experiment, focusing on the construction of libraries of protein sequence variants and the preparation of Illumina sequencing libraries. © 2015 Cold Spring Harbor Laboratory Press.
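
    A minimal sketch of the downstream calculation such an experiment feeds into: a log2 enrichment ratio for each variant computed from its read counts before and after selection, expressed relative to wild type. The counts and the pseudocount are invented for illustration; the record above does not specify the scoring pipeline.

        import math

        # Hedged sketch: per-variant log2 enrichment from before/after selection counts.
        # Counts and the pseudocount of 0.5 are illustrative assumptions.
        before = {"WT": 12000, "A23V": 950, "G47D": 40, "L88P": 300}
        after = {"WT": 15000, "A23V": 2100, "G47D": 3, "L88P": 280}

        def log2_enrichment(variant, pseudo=0.5):
            # Variant frequency within each sequencing library, with a small pseudocount.
            f_before = (before[variant] + pseudo) / sum(before.values())
            f_after = (after[variant] + pseudo) / sum(after.values())
            return math.log2(f_after / f_before)

        wt = log2_enrichment("WT")
        for v in before:
            print(v, round(log2_enrichment(v) - wt, 2))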

  8. The Role of Molecular Biology and HTS (High Throughput Screening) in the Development of New Synthetic Drugs

    OpenAIRE

    Nurrochmad, Arief

    2004-01-01

    Recently, the discovery of new drugs has relied on new concepts and modern techniques rather than conventional ones. With the development of scientific knowledge, molecular biology and modern techniques play an important role in the investigation and discovery of new drugs. Many modern methods and techniques are used in the discovery of new drugs, e.g. genetic engineering, recombinant DNA, radioligand binding assay techniques, HTS (High Throughput Screening) techniques, and mass ligand...

  9. A Novel High-Throughput Approach to Measure Hydroxyl Radicals Induced by Airborne Particulate Matter

    Directory of Open Access Journals (Sweden)

    Yeongkwon Son

    2015-10-01

    Full Text Available Oxidative stress is one of the key mechanisms linking ambient particulate matter (PM) exposure with various adverse health effects. The oxidative potential of PM has been used to characterize the ability of PM to induce oxidative stress. The hydroxyl radical (•OH) is the most destructive radical produced by PM. However, there is currently no high-throughput approach which can rapidly measure PM-induced •OH for a large number of samples with an automated system. This study evaluated four existing molecular probes (disodium terephthalate, 3′-p-(aminophenyl)fluorescein, coumarin-3-carboxylic acid, and sodium benzoate) for their applicability to measure •OH induced by PM in a high-throughput cell-free system using fluorescence techniques, based both on our experiments and on an assessment of the physicochemical properties of the probes reported in the literature. Disodium terephthalate (TPT) was the most applicable molecular probe to measure •OH induced by PM, due to its high solubility, the high stability of the corresponding fluorescent product (i.e., 2-hydroxyterephthalic acid), its high yield compared with the other molecular probes, and its stable fluorescence intensity over a wide range of pH environments. TPT was applied in a high-throughput format to measure •OH induced by PM (NIST 1648a) in phosphate-buffered saline. The fluorescent product formed was measured at designated time points up to 2 h. The fluorescent product of TPT had a detection limit of 17.59 nM. The soluble fraction of PM contributed approximately 76.9% of the •OH induced by total PM, and the soluble metal ions of PM contributed 57.4% of the overall •OH formation. This study provides a promising, cost-effective, high-throughput method to measure •OH induced by PM on a routine basis.

  10. A high-throughput method for quantifying metabolically active yeast cells

    DEFF Research Database (Denmark)

    Nandy, Subir Kumar; Knudsen, Peter Boldsen; Rosenkjær, Alexander

    2015-01-01

    By redesigning the established methylene blue reduction test for bacteria and yeast, we present a cheap and efficient methodology for quantitative physiology of eukaryotic cells applicable to high-throughput systems. Validation of the method in fermenters and high-throughput systems proved...... equivalent, displaying reduction curves that interrelated directly with CFU counts. For growth rate estimation, the methylene blue reduction test (MBRT) proved superior, since the discriminatory nature of the method allowed for the quantification of metabolically active cells only, excluding dead cells...

  11. High-throughput, image-based screening of pooled genetic-variant libraries.

    Science.gov (United States)

    Emanuel, George; Moffitt, Jeffrey R; Zhuang, Xiaowei

    2017-12-01

    We report a high-throughput screening method that allows diverse genotypes and corresponding phenotypes to be imaged in individual cells. We achieve genotyping by introducing barcoded genetic variants into cells as pooled libraries and reading the barcodes out using massively multiplexed fluorescence in situ hybridization. To demonstrate the power of image-based pooled screening, we identified brighter and more photostable variants of the fluorescent protein YFAST among 60,000 variants.

  12. Pyicos: a versatile toolkit for the analysis of high-throughput sequencing data

    OpenAIRE

    Althammer, Sonja Daniela; González-Vallinas Rostes, Juan, 1983-; Ballaré, Cecilia Julia; Beato, Miguel; Eyras Jiménez, Eduardo

    2011-01-01

    Motivation: High-throughput sequencing (HTS) has revolutionized gene regulation studies and is now fundamental for the detection of protein–DNA and protein–RNA binding, as well as for measuring RNA expression. With increasing variety and sequencing depth of HTS datasets, the need for more flexible and memory-efficient tools to analyse them is growing. Results: We describe Pyicos, a powerful toolkit for the analysis of mapped reads from diverse HTS experiments: ChIP-Seq, either punctuated or b...

  13. Computational and statistical methods for high-throughput analysis of post-translational modifications of proteins

    DEFF Research Database (Denmark)

    Schwämmle, Veit; Braga, Thiago Verano; Roepstorff, Peter

    2015-01-01

    The investigation of post-translational modifications (PTMs) represents one of the main research focuses for the study of protein function and cell signaling. Mass spectrometry instrumentation with increasing sensitivity, improved protocols for PTM enrichment and recently established pipelines...... for high-throughput experiments allow large-scale identification and quantification of several PTM types. This review addresses the concurrently emerging challenges for the computational analysis of the resulting data and presents PTM-centered approaches for spectra identification, statistical analysis...

  14. Development of High-Throughput Quantitative Assays for Glucose Uptake in Cancer Cell Lines

    Science.gov (United States)

    Hassanein, Mohamed; Weidow, Brandy; Koehler, Elizabeth; Bakane, Naimish; Garbett, Shawn; Shyr, Yu; Quaranta, Vito

    2013-01-01

    Purpose: Metabolism, and especially glucose uptake, is a key quantitative cell trait that is closely linked to cancer initiation and progression. Therefore, developing high-throughput assays for measuring glucose uptake in cancer cells would be desirable for simultaneous comparisons of multiple cell lines and microenvironmental conditions. This study was designed with two specific aims in mind: the first was to develop and validate a high-throughput screening method for quantitative assessment of glucose uptake in "normal" and tumor cells using the fluorescent 2-deoxyglucose analog 2-[N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl)amino]-2-deoxyglucose (2-NBDG), and the second was to develop an image-based, quantitative, single-cell assay for measuring glucose uptake using the same probe, to dissect the full spectrum of metabolic variability within populations of tumor cells in vitro at higher resolution. Procedure: The kinetics of population-based glucose uptake was evaluated for MCF10A mammary epithelial and CA1d breast cancer cell lines, using 2-NBDG and a fluorometric microplate reader. Glucose uptake for the same cell lines was also examined at the single-cell level using high-content automated microscopy coupled with semi-automated cell-cytometric image analysis approaches. Statistical treatments were also implemented to analyze intra-population variability. Results: Our results demonstrate that the high-throughput fluorometric assay using 2-NBDG is a reliable method to assess population-level kinetics of glucose uptake in cell lines in vitro. Similarly, single-cell image-based assays and analyses of 2-NBDG fluorescence proved an effective and accurate means for assessing glucose uptake, which revealed that breast tumor cell lines display intra-population variability that is modulated by growth conditions. Conclusions: These studies indicate that 2-NBDG can be used to aid in the high-throughput analysis of the influence of chemotherapeutics on glucose uptake in cancer

  15. High Throughput Single-cell and Multiple-cell Micro-encapsulation

    OpenAIRE

    Lagus, Todd P.; Edd, Jon F.

    2012-01-01

    Microfluidic encapsulation methods have been previously utilized to capture cells in picoliter-scale aqueous, monodisperse drops, providing confinement from a bulk fluid environment with applications in high throughput screening, cytometry, and mass spectrometry. We describe a method to not only encapsulate single cells, but to repeatedly capture a set number of cells (here we demonstrate one- and two-cell encapsulation) to study both isolation and the interactions between cells in groups of ...

  16. Rapid 2,2'-bicinchoninic-based xylanase assay compatible with high throughput screening

    Science.gov (United States)

    William R. Kenealy; Thomas W. Jeffries

    2003-01-01

    High-throughput screening requires simple assays that give reliable quantitative results. A microplate assay was developed for reducing sugar analysis that uses a 2,2'-bicinchoninic-based protein reagent. Endo-1,4-β-D-xylanase activity against oat spelt xylan was detected at activities of 0.002 to 0.011 IU ml⁻¹. The assay is linear for sugar...

  17. Keystone Symposia on Epigenomics and Chromatin Dynamics

    DEFF Research Database (Denmark)

    Ravnskjær, Kim

    2012-01-01

    Keystone Symposia kicked off the start of 2012 with two joint meetings on Epigenomics and Chromatin Dynamics and a star-studded list of speakers. Held in Keystone, CO, January 17-22, and organized by Steven Jacobsen and Steven Henikoff and by Bradley Cairns and Geneviève Almouzni, respectively...

  18. An Automated High Throughput Proteolysis and Desalting Platform for Quantitative Proteomic Analysis

    Directory of Open Access Journals (Sweden)

    Albert-Baskar Arul

    2013-06-01

    Full Text Available Proteomics for biomarker validation needs high-throughput instrumentation to analyze large sets of clinical samples quantitatively and reproducibly in minimal time and without manual experimental errors. Sample preparation, a vital step in proteomics, plays a major role in the identification and quantification of proteins from biological samples. Tryptic digestion, a major checkpoint in sample preparation for mass spectrometry-based proteomics, needs to be more accurate and to offer rapid processing times. The present study focuses on establishing a high-throughput automated online system for proteolytic digestion and desalting of proteins from biological samples in a quantitative, qualitative and reproducible manner. The study compares online protein digestion and desalting of BSA with the conventional off-line (in-solution) method and validates the approach on a real sample for reproducibility. Proteins were identified using the SEQUEST database search engine and the data were quantified using the IDEALQ software. The results show that the online system, capable of handling samples in a 96-well format, carries out protein digestion and peptide desalting efficiently in a reproducible and quantitative manner. Label-free quantification showed a clear, near-linear increase of peptide quantities with increasing concentration compared to the off-line method. We therefore suggest that inclusion of this online system in proteomic pipelines will be effective for protein quantification in comparative proteomics, where quantification is crucial.

  19. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917
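
    To give a flavour of the kind of digital trait such a pipeline extracts, the sketch below segments a synthetic top-view image with an excess-green threshold and reports projected shoot area in pixels. It is a generic NumPy illustration, not the actual Image Harvest API, and the image and threshold are invented.

        import numpy as np

        # Hedged sketch of one digital trait (projected shoot area) from an RGB image,
        # using a simple excess-green (ExG) threshold on a synthetic image.
        rng = np.random.default_rng(0)
        h, w = 120, 160
        gray = rng.integers(60, 90, size=(h, w, 1), dtype=np.uint8)
        img = np.repeat(gray, 3, axis=2)          # neutral, soil-like background
        img[40:80, 60:110, 1] = 180               # synthetic "plant": strong green patch

        def projected_shoot_area(rgb, threshold=40):
            r, g, b = (rgb[..., i].astype(int) for i in range(3))
            excess_green = 2 * g - r - b          # common vegetation index
            return int((excess_green > threshold).sum())

        print("projected shoot area (pixels):", projected_shoot_area(img))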

  20. A versatile toolkit for high throughput functional genomics with Trichoderma reesei

    Energy Technology Data Exchange (ETDEWEB)

    Schuster, Andre; Bruno, Kenneth S.; Collett, James R.; Baker, Scott E.; Seiboth, Bernhard; Kubicek, Christian P.; Schmoll, Monika

    2012-01-02

    The ascomycete fungus, Trichoderma reesei (anamorph of Hypocrea jecorina), represents a biotechnological workhorse and is currently one of the most proficient cellulase producers. While strain improvement was traditionally accomplished by random mutagenesis, a detailed understanding of cellulase regulation can only be gained using recombinant technologies. RESULTS: Aiming at high efficiency and high throughput methods, we present here a construction kit for gene knock out in T. reesei. We provide a primer database for gene deletion using the pyr4, amdS and hph selection markers. For high throughput generation of gene knock outs, we constructed vectors using yeast mediated recombination and then transformed a T. reesei strain deficient in non-homologous end joining (NHEJ) by spore electroporation. This NHEJ-defect was subsequently removed by crossing of mutants with a sexually competent strain derived from the parental strain, QM9414. CONCLUSIONS: Using this strategy and the materials provided, high throughput gene deletion in T. reesei becomes feasible. Moreover, with the application of sexual development, the NHEJ-defect can be removed efficiently and without the need for additional selection markers. The same advantages apply for the construction of multiple mutants by crossing of strains with different gene deletions, which is now possible with considerably less hands-on time and minimal screening effort compared to a transformation approach. Consequently this toolkit can considerably boost research towards efficient exploitation of the resources of T. reesei for cellulase expression and hence second generation biofuel production.

  1. Engineering serendipity: High-throughput discovery of materials that resist bacterial attachment.

    Science.gov (United States)

    Magennis, E P; Hook, A L; Davies, M C; Alexander, C; Williams, P; Alexander, M R

    2016-04-01

    Controlling the colonisation of materials by microorganisms is important in a wide range of industries and clinical settings. To date, the underlying mechanisms that govern the interactions of bacteria with material surfaces remain poorly understood, limiting the ab initio design and engineering of biomaterials to control bacterial attachment. Combinatorial approaches involving high-throughput screening have emerged as key tools for identifying materials to control bacterial attachment. The assessment of hundreds of different materials using these methods can be carried out with the aid of computational modelling. This approach can develop an understanding of the rules used to predict bacterial attachment to surfaces of non-toxic synthetic materials. Here we outline our view on the state of this field and the challenges and opportunities in this area for the coming years. This opinion article on high-throughput screening methods reflects one aspect of how the field of biomaterials research has developed and progressed. The piece takes the reader through key developments in biomaterials discovery, particularly focusing on the need to reduce bacterial colonisation of surfaces. Such bacteria-resistant surfaces are increasingly required in this age of antibiotic resistance. The influence and origin of high-throughput methods are discussed, with insights into the future of biomaterials development, where computational methods may drive materials development into new fertile areas of discovery. New biomaterials will exhibit responsiveness to adapt to the biological environment and promote better integration and reduced rejection or infection. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  2. Carbohydrate chips for studying high-throughput carbohydrate-protein interactions.

    Science.gov (United States)

    Park, Sungjin; Lee, Myung-ryul; Pyo, Soon-Jin; Shin, Injae

    2004-04-21

    Carbohydrate-protein interactions play important biological roles in living organisms. For the most part, biophysical and biochemical methods have been used for studying these biomolecular interactions. Less attention has been given to the development of high-throughput methods to elucidate recognition events between carbohydrates and proteins. In the current effort to develop a novel high-throughput tool for monitoring carbohydrate-protein interactions, we prepared carbohydrate microarrays by immobilizing maleimide-linked carbohydrates on thiol-derivatized glass slides and carried out lectin binding experiments by using these microarrays. The results showed that carbohydrates with different structural features selectively bound to the corresponding lectins with relative binding affinities that correlated with those obtained from solution-based assays. In addition, binding affinities of lectins to carbohydrates were also quantitatively analyzed by determining IC₅₀ values of soluble carbohydrates with the carbohydrate microarrays. To fabricate carbohydrate chips that contained more diverse carbohydrate probes, solution-phase parallel and enzymatic glycosylations were performed. Three model disaccharides were synthesized in parallel in solution phase and used as carbohydrate probes for the fabrication of carbohydrate chips. Three enzymatic glycosylations on glass slides were consecutively performed to generate carbohydrate microarrays that contained the complex oligosaccharide sialyl Leˣ. Overall, these works demonstrated that carbohydrate chips could be efficiently prepared by covalent immobilization of maleimide-linked carbohydrates on thiol-coated glass slides and applied for the high-throughput analysis of carbohydrate-protein interactions.

  3. A strategy for primary high throughput cytotoxicity screening in pharmaceutical toxicology.

    Science.gov (United States)

    Bugelski, P J; Atif, U; Molton, S; Toeg, I; Lord, P G; Morgan, D G

    2000-10-01

    Recent advances in combinatorial chemistry and high-throughput screens for pharmacologic activity have created an increasing demand for in vitro high-throughput screens for toxicological evaluation in the early phases of drug discovery. To develop a strategy for such a screen, we conducted a data mining study of the National Cancer Institute's Developmental Therapeutics Program (DTP) cytotoxicity database. Using hierarchical cluster analysis, we confirmed that the different tissues of origin and individual cell lines showed differential sensitivity to compounds in the DTP Standard Agents database. Surprisingly, however, approaching the data globally, linear regression analysis showed that the differences were relatively minor. Comparison with the literature on acute toxicity in mice showed that the predictive power of growth inhibition was marginally superior to that of cell death. This data mining study suggests that in designing a strategy for high-throughput cytotoxicity screening: a single cell line, the choice of which may not be critical, can be used as a primary screen; a single end point may be an adequate measure; and a cutoff value of 50% growth inhibition between 10⁻⁶ and 10⁻⁸ M may be a reasonable starting point for accepting a cytotoxic compound for scale-up and further study.
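
    The proposed acceptance window can be expressed as a simple filter over GI50 values, as in the hedged sketch below; the compound names and potencies are invented, and only the 10⁻⁸ to 10⁻⁶ M window comes from the abstract, with 10⁻⁷ M as an assumed working cutoff.

        # Hedged sketch: flag compounds against a cytotoxicity cutoff drawn from the
        # 1e-8 to 1e-6 M window mentioned above. Names, GI50 values and the choice of
        # 1e-7 M as the working cutoff are assumptions for illustration.
        CUTOFF_M = 1e-7

        gi50_molar = {"cmpd-A": 3.0e-7, "cmpd-B": 5.0e-9, "cmpd-C": 4.0e-5}

        def is_cytotoxic(gi50, cutoff=CUTOFF_M):
            # A lower GI50 means growth is halved at a lower concentration, i.e. more cytotoxic.
            return gi50 <= cutoff

        for name, gi50 in gi50_molar.items():
            label = "cytotoxic" if is_cytotoxic(gi50) else "not flagged"
            print(f"{name}: GI50 = {gi50:.1e} M -> {label}")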

  4. High-throughput gene expression profiling of memory differentiation in primary human T cells

    Directory of Open Access Journals (Sweden)

    Russell Kate

    2008-08-01

    Full Text Available Abstract Background: The differentiation of naive T and B cells into memory lymphocytes is essential for immunity to pathogens. Therapeutic manipulation of this cellular differentiation program could improve vaccine efficacy and the in vitro expansion of memory cells. However, chemical screens to identify compounds that induce memory differentiation have been limited by (1) the lack of reporter-gene or functional assays that can distinguish naive and memory-phenotype T cells at high throughput and (2) the lack of a suitable cell line representative of naive T cells. Results: Here, we describe a method for gene-expression based screening that allows primary naive and memory-phenotype lymphocytes to be discriminated based on complex gene signatures corresponding to these differentiation states. We used ligation-mediated amplification and a fluorescent, bead-based detection system to quantify simultaneously 55 transcripts representing naive and memory-phenotype signatures in purified populations of human T cells. The use of a multi-gene panel allowed better resolution than any constituent single gene. The method was precise, correlated well with Affymetrix microarray data, and could be easily scaled up for high throughput. Conclusion: This method provides a generic solution for high-throughput differentiation screens in primary human T cells where no single-gene or functional assay is available. This screening platform will allow the identification of small molecules, genes or soluble factors that direct memory differentiation in naive human lymphocytes.
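
    One way to turn such a multi-gene readout into a single screening statistic is a signature score, for example the mean expression of memory-associated transcripts minus the mean of naive-associated transcripts, as sketched below. The gene names and expression values are placeholders; the actual 55-transcript panel is not reproduced here.

        import statistics

        # Hedged sketch of a naive-vs-memory signature score from a multi-gene panel.
        # Gene lists and expression values are illustrative placeholders.
        memory_genes = ["GENE_M1", "GENE_M2", "GENE_M3"]
        naive_genes = ["GENE_N1", "GENE_N2", "GENE_N3"]

        def signature_score(expression):
            # Positive score = more memory-like, negative = more naive-like.
            mem = statistics.mean(expression[g] for g in memory_genes)
            nai = statistics.mean(expression[g] for g in naive_genes)
            return mem - nai

        well = {"GENE_M1": 8.1, "GENE_M2": 7.4, "GENE_M3": 6.9,
                "GENE_N1": 3.2, "GENE_N2": 4.0, "GENE_N3": 2.8}
        print("signature score:", round(signature_score(well), 2))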

  5. Machine learning in computational biology to accelerate high-throughput protein expression.

    Science.gov (United States)

    Sastry, Anand; Monk, Jonathan; Tegel, Hanna; Uhlen, Mathias; Palsson, Bernhard O; Rockberg, Johan; Brunk, Elizabeth

    2017-08-15

    The Human Protein Atlas (HPA) enables the simultaneous characterization of thousands of proteins across various tissues to pinpoint their spatial location in the human body. This has been achieved through transcriptomics and high-throughput immunohistochemistry-based approaches, where over 40 000 unique human protein fragments have been expressed in E. coli. These datasets enable quantitative tracking of entire cellular proteomes and present new avenues for understanding molecular-level properties influencing expression and solubility. Combining computational biology and machine learning identifies protein properties that hinder the HPA high-throughput antibody production pipeline. We predict protein expression and solubility with accuracies of 70% and 80%, respectively, based on a subset of key properties (aromaticity, hydropathy and isoelectric point). We guide the selection of protein fragments based on these characteristics to optimize high-throughput experimentation. We present the machine learning workflow as a series of IPython notebooks hosted on GitHub (https://github.com/SBRG/Protein_ML). The workflow can be used as a template for analysis of further expression and solubility datasets. ebrunk@ucsd.edu or johanr@biotech.kth.se. Supplementary data are available at Bioinformatics online.
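
    A hedged sketch of the type of model described: compute aromaticity, hydropathy (GRAVY) and isoelectric point with Biopython's ProtParam module and fit a simple scikit-learn classifier. The toy sequences and labels below are invented and are not HPA data; the authors' actual notebooks are at the GitHub repository cited above.

        import numpy as np
        from Bio.SeqUtils.ProtParam import ProteinAnalysis
        from sklearn.linear_model import LogisticRegression

        # Hedged sketch: predict expression success from three sequence-derived features.
        # Training sequences and labels are toy examples, not data from the study.
        def features(seq):
            pa = ProteinAnalysis(seq)
            return [pa.aromaticity(), pa.gravy(), pa.isoelectric_point()]

        train_seqs = ["MKKLLPTAAAGLLLLAAQPAMA", "MWWFWYYRRKKDDEEII",
                      "MAAAGGGSSSTTTLLLKKK", "MFFWWYYLLIIVVPP"]
        train_labels = [1, 0, 1, 0]   # 1 = expressed well, 0 = failed (invented labels)

        X = np.array([features(s) for s in train_seqs])
        clf = LogisticRegression().fit(X, train_labels)

        query = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
        print("predicted expression class:", clf.predict([features(query)])[0])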

  6. High-Throughput Cloning and Expression Library Creation for Functional Proteomics

    Science.gov (United States)

    Festa, Fernanda; Steel, Jason; Bian, Xiaofang; Labaer, Joshua

    2013-01-01

    The study of protein function usually requires the use of a cloned version of the gene for protein expression and functional assays. This strategy is particularly important when the information available regarding function is limited. The functional characterization of the thousands of newly identified proteins revealed by genomics requires faster methods than traditional single gene experiments, creating the need for fast, flexible and reliable cloning systems. Such collections of open reading frame (ORF) clones can be coupled with high-throughput proteomics platforms, such as protein microarrays and cell-based assays, to answer biological questions. In this tutorial we provide the background for DNA cloning, discuss the major high-throughput cloning systems (Gateway® Technology, Flexi® Vector Systems, and Creator™ DNA Cloning System) and compare them side-by-side. We also report an example of a high-throughput cloning study and its application in functional proteomics. This Tutorial is part of the International Proteomics Tutorial Programme (IPTP12). Details can be found at http://www.proteomicstutorials.org. PMID:23457047

  7. PCR cycles above routine numbers do not compromise high-throughput DNA barcoding results.

    Science.gov (United States)

    Vierna, J; Doña, J; Vizcaíno, A; Serrano, D; Jovani, R

    2017-10-01

    High-throughput DNA barcoding has become essential in ecology and evolution, but some technical questions still remain. Increasing the number of PCR cycles above the routine 20-30 cycles is a common practice when working with old-type specimens, which provide small amounts of DNA, or when facing annealing issues with the primers. However, increasing the number of cycles can raise the number of artificial mutations due to polymerase errors. In this work, we sequenced 20 COI libraries on the Illumina MiSeq platform. Libraries were prepared with 40, 45, 50, 55, and 60 PCR cycles from four individuals belonging to four species of four genera of cephalopods. We found no relationship between the number of PCR cycles and the number of mutations despite using a nonproofreading polymerase. Moreover, even when using a high number of PCR cycles, the resulting number of mutations was low enough not to be an issue in the context of high-throughput DNA barcoding (but may still remain an issue in DNA metabarcoding due to chimera formation). We conclude that the common practice of increasing the number of PCR cycles should not negatively impact the outcome of a high-throughput DNA barcoding study in terms of the occurrence of point mutations.
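
    The "no relationship" conclusion is the kind of claim that can be checked with a simple regression of observed mutation counts against cycle number, as in the sketch below; the per-library counts used here are invented, not the study's data.

        from scipy import stats

        # Hedged sketch: test for a trend between PCR cycle number and observed mutations.
        # The mutation counts below are invented for illustration only.
        cycles = [40, 45, 50, 55, 60]
        mutations_per_library = [3, 2, 4, 3, 3]

        slope, intercept, r, p, stderr = stats.linregress(cycles, mutations_per_library)
        print(f"slope={slope:.3f} mutations/cycle, r^2={r**2:.3f}, p={p:.3f}")
        # A slope near zero with a non-significant p-value would be consistent with the
        # conclusion that extra cycles did not inflate the number of point mutations.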

  8. An Air-Well sparging minifermenter system for high-throughput protein production.

    Science.gov (United States)

    Deantonio, Cecilia; Sedini, Valentina; Cesaro, Patrizia; Quasso, Fabio; Cotella, Diego; Persichetti, Francesca; Santoro, Claudio; Sblattero, Daniele

    2014-09-14

    Over the last few years, High-Throughput Protein Production (HTPP) has played a crucial role in functional proteomics. High-quality, high-yield and fast recombinant protein production is critical for new HTPP technologies. Escherichia coli is usually the expression system of choice for protein production thanks to its fast growth, ease of handling and high protein yields. Even though shake-flask cultures are widely used, there is an increasing need for easy-to-handle, lab-scale, high-throughput systems. In this article we describe a novel minifermenter system suitable for HTPP. The Air-Well minifermenter system consists of a homogeneous air-sparging device, comprising an air diffusion system and a stainless steel 96-needle plate, integrated with a 96-deep-well plate where the cultures take place. This system provides aeration that achieves growth to higher optical density compared with classical shaking, without the accompanying decrease in pH and bacterial viability. Moreover, the yield of recombinant protein is up to 3-fold higher, with a considerable improvement in the amount of full-length protein. High-throughput production of hundreds of proteins in parallel can be achieved by sparging air in a continuous and controlled manner. The system is modular and can be easily modified and scaled up to meet the demands of HTPP.

  9. Arabidopsis Seed Content QTL Mapping Using High-Throughput Phenotyping: The Assets of Near Infrared Spectroscopy.

    Science.gov (United States)

    Jasinski, Sophie; Lécureuil, Alain; Durandet, Monique; Bernard-Moulin, Patrick; Guerche, Philippe

    2016-01-01

    Seed storage compounds are of crucial importance for human diet, feed and industrial uses. In oleo-proteaginous species like rapeseed, seed oil and protein are the qualitative determinants that confer economic value on the harvested seed. To date, although the biosynthesis pathways of oil and storage protein are rather well known, the factors that determine how these types of reserves are partitioned in seeds remain to be identified. With the aim of implementing a quantitative genetics approach, requiring phenotyping of hundreds of plants, our first objective was to establish near-infrared reflectance spectroscopic (NIRS) predictive equations in order to estimate oil, protein, carbon, and nitrogen content in Arabidopsis seed at high throughput. Our results demonstrated that NIRS is a powerful, non-destructive, high-throughput method to assess the content of these four major components studied in Arabidopsis seed. With this tool in hand, we analyzed Arabidopsis natural variation for these four components and illustrated that they all display a wide range of variation. Finally, NIRS was used to map QTL for these four traits using seeds from the Arabidopsis thaliana Ct-1 × Col-0 recombinant inbred line population. Some QTL co-localized with QTL previously identified, but others mapped to chromosomal regions never identified so far for such traits. This paper illustrates the usefulness of NIRS predictive equations to perform accurate high-throughput phenotyping of Arabidopsis seed content, opening new perspectives in gene identification following QTL mapping and genome-wide association studies.

  10. Arabidopsis seed content QTL mapping using high-throughput phenotyping: the assets of Near Infrared Spectroscopy

    Directory of Open Access Journals (Sweden)

    Sophie Jasinski

    2016-11-01

    Full Text Available Seed storage compounds are of crucial importance for human diet, feed and industrial uses. In oleo-proteaginous species like rapeseed, seed oil and protein are the qualitative determinants that confer economic value on the harvested seed. To date, although the biosynthesis pathways of oil and storage protein are rather well known, the factors that determine how these types of reserves are partitioned in seeds remain to be identified. With the aim of implementing a quantitative genetics approach, requiring phenotyping of hundreds of plants, our first objective was to establish near-infrared reflectance spectroscopic (NIRS) predictive equations in order to estimate oil, protein, carbon and nitrogen content in Arabidopsis seed at high throughput. Our results demonstrated that NIRS is a powerful, non-destructive, high-throughput method to assess the content of these four major components studied in Arabidopsis seed. With this tool in hand, we analysed Arabidopsis natural variation for these four components and illustrated that they all display a wide range of variation. Finally, NIRS was used to map QTL for these four traits using seeds from the Arabidopsis thaliana Ct-1 × Col-0 recombinant inbred line population. Some QTL co-localised with QTL previously identified, but others mapped to chromosomal regions never identified so far for such traits. This paper illustrates the usefulness of NIRS predictive equations to perform accurate high-throughput phenotyping of Arabidopsis seed content, opening new perspectives in gene identification following QTL mapping and genome-wide association studies.
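
    NIRS predictive equations of this kind are usually built by multivariate calibration, for example partial least squares (PLS) regression; the records above do not state which regression was used, so the sketch below is only a generic illustration on simulated spectra and reference values.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        # Hedged sketch: generic NIRS-style calibration with PLS regression.
        # Spectra, loadings and "oil content" reference values are all synthetic.
        rng = np.random.default_rng(1)
        n_samples, n_wavelengths = 120, 200
        spectra = rng.normal(size=(n_samples, n_wavelengths))
        loadings = rng.normal(size=n_wavelengths)
        oil = spectra @ loadings * 0.05 + 35 + rng.normal(scale=0.5, size=n_samples)

        X_train, X_test, y_train, y_test = train_test_split(spectra, oil, random_state=0)
        pls = PLSRegression(n_components=10).fit(X_train, y_train)
        print("R^2 on held-out samples:", round(pls.score(X_test, y_test), 3))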

  11. Infra-red thermography for high throughput field phenotyping in Solanum tuberosum.

    Science.gov (United States)

    Prashar, Ankush; Yildiz, Jane; McNicol, James W; Bryan, Glenn J; Jones, Hamlyn G

    2013-01-01

    The rapid development of genomic technology has made high throughput genotyping widely accessible but the associated high throughput phenotyping is now the major limiting factor in genetic analysis of traits. This paper evaluates the use of thermal imaging for the high throughput field phenotyping of Solanum tuberosum for differences in stomatal behaviour. A large multi-replicated trial of a potato mapping population was used to investigate the consistency in genotypic rankings across different trials and across measurements made at different times of day and on different days. The results confirmed a high degree of consistency between the genotypic rankings based on relative canopy temperature on different occasions. Genotype discrimination was enhanced both through normalising data by expressing genotype temperatures as differences from image means and through the enhanced replication obtained by using overlapping images. A Monte Carlo simulation approach was used to confirm the magnitude of genotypic differences that it is possible to discriminate. The results showed a clear negative association between canopy temperature and final tuber yield for this population, when grown under ample moisture supply. We have therefore established infrared thermography as an easy, rapid and non-destructive screening method for evaluating large population trials for genetic analysis. We also envisage this approach as having great potential for evaluating plant response to stress under field conditions.
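
    The normalisation step described above, expressing each genotype's canopy temperature as a deviation from the mean of its image, can be written very compactly; the temperatures and genotype names below are invented.

        # Hedged sketch of the normalisation step: express each plot's canopy temperature
        # as a difference from its image mean so that images taken at different times of
        # day become comparable. All values are invented.
        images = {
            "image_01": {"geno_A": 24.8, "geno_B": 25.6, "geno_C": 24.1},
            "image_02": {"geno_A": 29.3, "geno_B": 30.2, "geno_C": 28.7},
        }

        normalised = {}
        for image, temps in images.items():
            mean_t = sum(temps.values()) / len(temps)
            for genotype, t in temps.items():
                normalised.setdefault(genotype, []).append(t - mean_t)

        # Rank genotypes by their average deviation across images (cooler first).
        for genotype, devs in sorted(normalised.items(), key=lambda kv: sum(kv[1]) / len(kv[1])):
            print(genotype, round(sum(devs) / len(devs), 2))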

  12. Design of novel solar thermal fuels with high-throughput ab initio simulations

    Science.gov (United States)

    Liu, Yun; Grossman, Jeffrey

    2014-03-01

    Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. Previously we have predicted a new class of functional materials that have the potential to address these challenges. Recently, we have developed an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening algorithm we have developed can run through large numbers of molecules composed of earth-abundant elements, and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical design principles to guide further STF materials design through the correlation between isomerization enthalpy and structural properties.
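
    The screening logic amounts to computing, for each candidate, the enthalpy difference between its metastable (charged) and ground-state (discharged) isomers and keeping the high-energy candidates. The sketch below shows only that bookkeeping; the energies and the threshold are placeholders standing in for ab initio results.

        # Hedged sketch of the screening bookkeeping for solar thermal fuel candidates:
        # isomerization enthalpy = E(metastable isomer) - E(ground-state isomer).
        # Energies (eV per molecule) and the threshold are illustrative placeholders.
        candidates = {
            "molecule_1": {"E_ground": -102.40, "E_metastable": -101.35},
            "molecule_2": {"E_ground": -87.10, "E_metastable": -86.95},
            "molecule_3": {"E_ground": -130.75, "E_metastable": -129.20},
        }

        MIN_ENTHALPY_EV = 1.0   # assumed screening threshold, not from the abstract

        def isomerization_enthalpy(entry):
            return entry["E_metastable"] - entry["E_ground"]

        hits = {name: round(isomerization_enthalpy(e), 2)
                for name, e in candidates.items()
                if isomerization_enthalpy(e) >= MIN_ENTHALPY_EV}
        print("candidates above the storage-energy threshold:", hits)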

  13. High-throughput screening to identify selective inhibitors of microbial sulfate reduction (and beyond)

    Science.gov (United States)

    Carlson, H. K.; Coates, J. D.; Deutschbauer, A. M.

    2015-12-01

    The selective perturbation of complex microbial ecosystems to predictably influence outcomes in engineered and industrial environments remains a grand challenge for geomicrobiology. In some industrial ecosystems, such as oil reservoirs, sulfate reducing microorganisms (SRM) produce hydrogen sulfide which is toxic, explosive and corrosive. Current strategies to selectively inhibit sulfidogenesis are based on non-specific biocide treatments, bio-competitive exclusion by alternative electron acceptors or sulfate-analogs which are competitive inhibitors or futile/alternative substrates of the sulfate reduction pathway. Despite the economic cost of sulfidogenesis, there has been minimal exploration of the chemical space of possible inhibitory compounds, and very little work has quantitatively assessed the selectivity of putative souring treatments. We have developed a high-throughput screening strategy to target SRM, quantitatively ranked the selectivity and potency of hundreds of compounds and identified previously unrecognized SRM selective inhibitors and synergistic interactions between inhibitors. Once inhibitor selectivity is defined, high-throughput characterization of microbial community structure across compound gradients and identification of fitness determinants using isolate bar-coded transposon mutant libraries can give insights into the genetic mechanisms whereby compounds structure microbial communities. The high-throughput (HT) approach we present can be readily applied to target SRM in diverse environments and more broadly, could be used to identify and quantify the potency and selectivity of inhibitors of a variety of microbial metabolisms. Our findings and approach are relevant for engineering environmental ecosystems and also to understand the role of natural gradients in shaping microbial niche space.
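
    One plausible way to express the potency and selectivity ranking described above is sketched below: potency as the IC50 against the sulfate-reducing target, and a selectivity index as the ratio of the IC50 against a non-target organism to the IC50 against the target. The input values are invented for illustration and are not results from the screen.

        # Hypothetical IC50 values (mM) for each compound against a sulfate-reducing
        # strain (target) and a non-target control organism.
        ic50 = {
            "compound_A": {"srm": 0.05, "non_target": 12.0},
            "compound_B": {"srm": 0.40, "non_target": 0.60},
            "compound_C": {"srm": 1.50, "non_target": 45.0},
        }

        ranked = sorted(
            ((name, vals["srm"], vals["non_target"] / vals["srm"])  # potency, selectivity index
             for name, vals in ic50.items()),
            key=lambda row: row[2],
            reverse=True,  # most selective first
        )

        for name, potency, selectivity in ranked:
            print(f"{name}: IC50(SRM) = {potency} mM, selectivity index = {selectivity:.1f}")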

  14. High throughput workflow for coacervate formation and characterization in shampoo systems.

    Science.gov (United States)

    Kalantar, T H; Tucker, C J; Zalusky, A S; Boomgaard, T A; Wilson, B E; Ladika, M; Jordan, S L; Li, W K; Zhang, X; Goh, C G

    2007-01-01

    Cationic cellulosic polymers find wide utility as benefit agents in shampoo. Deposition of these polymers onto hair has been shown to mend split ends, improve appearance and wet combing, and provide controlled delivery of insoluble actives. The deposition is thought to be enhanced by the formation of a polymer/surfactant complex that phase-separates from the bulk solution upon dilution. A standard method exists to characterize coacervate formation upon dilution, but the test is prohibitively time- and material-intensive. We have developed a semi-automated high throughput workflow to characterize the coacervate-forming behavior of different shampoo formulations. A procedure that allows testing of real-use shampoo dilutions without first formulating a complete shampoo was identified. This procedure was adapted to a Tecan liquid handler by optimizing the parameters for liquid dispensing as well as for mixing. The high throughput workflow enabled preparation and testing of hundreds of formulations with different types and levels of cationic cellulosic polymers and surfactants, and a haze diagram was constructed for each formulation. Optimal formulations, and the dilutions at which they give substantial coacervate formation (determined by haze measurements), were identified. Results from this high throughput workflow were shown to reproduce standard haze and bench-top turbidity measurements, and the workflow has the advantages of using less material and allowing more variables to be tested with significant time savings.

  15. A high throughput array microscope for the mechanical characterization of biomaterials

    Science.gov (United States)

    Cribb, Jeremy; Osborne, Lukas D.; Hsiao, Joe Ping-Lin; Vicci, Leandra; Meshram, Alok; O'Brien, E. Tim; Spero, Richard Chasen; Taylor, Russell; Superfine, Richard

    2015-02-01

    In the last decade, the emergence of high throughput screening has enabled the development of novel drug therapies and elucidated many complex cellular processes. Concurrently, the mechanobiology community has developed tools and methods to show that the dysregulation of biophysical properties and the biochemical mechanisms controlling those properties contribute significantly to many human diseases. Despite these advances, a complete understanding of the connection between biomechanics and disease will require advances in instrumentation that enable parallelized, high throughput assays capable of probing complex signaling pathways, studying biology in physiologically relevant conditions, and capturing specimen and mechanical heterogeneity. Traditional biophysical instruments are unable to meet this need. To address the challenge of large-scale, parallelized biophysical measurements, we have developed an automated array high-throughput microscope system that utilizes passive microbead diffusion to characterize mechanical properties of biomaterials. The instrument is capable of acquiring data on twelve channels simultaneously, where each channel in the system can independently drive two-channel fluorescence imaging at up to 50 frames per second. We employ this system to measure the concentration-dependent apparent viscosity of hyaluronan, an essential polymer found in connective tissue whose expression has been implicated in cancer progression.
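
    The passive-microbead measurement described above ultimately rests on the Stokes-Einstein relation: a diffusion coefficient D estimated from the beads' mean squared displacement gives an apparent viscosity eta = kB*T/(6*pi*D*r). A minimal Python sketch of that calculation, using simulated bead tracks rather than the instrument's output:

        import numpy as np

        KB = 1.380649e-23   # Boltzmann constant, J/K
        T = 298.15          # temperature, K
        R = 0.5e-6          # bead radius, m (1 um diameter bead)
        DT = 0.02           # frame interval, s (50 frames per second)

        # Hypothetical 2D bead trajectories: shape (n_beads, n_frames, 2), in metres.
        rng = np.random.default_rng(1)
        tracks = np.cumsum(rng.normal(scale=5e-8, size=(50, 200, 2)), axis=1)

        # Mean squared displacement at a lag of one frame, averaged over beads and time.
        disp = np.diff(tracks, axis=1)
        msd = np.mean(np.sum(disp ** 2, axis=2))

        # In 2D, MSD = 4*D*t, so D = MSD/(4*DT); Stokes-Einstein then gives viscosity.
        D = msd / (4.0 * DT)
        eta = KB * T / (6.0 * np.pi * D * R)
        print(f"Apparent viscosity: {eta * 1e3:.3f} mPa*s")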

  16. A High-Throughput Biological Calorimetry Core: Steps to Startup, Run, and Maintain a Multiuser Facility.

    Science.gov (United States)

    Yennawar, Neela H; Fecko, Julia A; Showalter, Scott A; Bevilacqua, Philip C

    2016-01-01

    Many labs have conventional calorimeters where denaturation and binding experiments are set up and run one at a time. While these systems are highly informative about biopolymer folding and ligand interactions, they require considerable manual intervention for cleaning and setup. As such, the throughput for such setups is typically limited to a few runs a day. With a large number of experimental parameters to explore including different buffers, macromolecule concentrations, temperatures, ligands, mutants, controls, replicates, and instrument tests, the need for high-throughput automated calorimeters is on the rise. Lower sample volume requirements and reduced user intervention time compared to the manual instruments have improved turnover of calorimetry experiments in a high-throughput format where 25 or more runs can be conducted per day. The cost and effort required to maintain high-throughput equipment typically demand that these instruments be housed in a multiuser core facility. We describe here the steps taken to successfully start and run an automated biological calorimetry facility at Pennsylvania State University. Scientists from various departments at Penn State including Chemistry, Biochemistry and Molecular Biology, Bioengineering, Biology, Food Science, and Chemical Engineering are benefiting from this core facility. Samples studied include proteins, nucleic acids, sugars, lipids, synthetic polymers, small molecules, natural products, and virus capsids. This facility has increased data throughput, which has been leveraged into grant support, helped attract new faculty hires and led to some exciting publications. © 2016 Elsevier Inc. All rights reserved.

  17. A versatile toolkit for high throughput functional genomics with Trichoderma reesei

    Directory of Open Access Journals (Sweden)

    Schuster André

    2012-01-01

    Full Text Available Abstract Background The ascomycete fungus Trichoderma reesei (anamorph of Hypocrea jecorina) represents a biotechnological workhorse and is currently one of the most proficient cellulase producers. While strain improvement was traditionally accomplished by random mutagenesis, a detailed understanding of cellulase regulation can only be gained using recombinant technologies. Results Aiming at high efficiency and high throughput methods, we present here a construction kit for gene knockout in T. reesei. We provide a primer database for gene deletion using the pyr4, amdS and hph selection markers. For high throughput generation of gene knockouts, we constructed vectors using yeast-mediated recombination and then transformed a T. reesei strain deficient in non-homologous end joining (NHEJ) by spore electroporation. This NHEJ defect was subsequently removed by crossing of mutants with a sexually competent strain derived from the parental strain, QM9414. Conclusions Using this strategy and the materials provided, high throughput gene deletion in T. reesei becomes feasible. Moreover, with the application of sexual development, the NHEJ defect can be removed efficiently and without the need for additional selection markers. The same advantages apply for the construction of multiple mutants by crossing of strains with different gene deletions, which is now possible with considerably less hands-on time and minimal screening effort compared to a transformation approach. Consequently, this toolkit can considerably boost research towards efficient exploitation of the resources of T. reesei for cellulase expression and hence second generation biofuel production.

  18. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is designed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires only a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  19. High Throughput, Polymeric Aqueous Two-Phase Printing of Tumor Spheroids

    Science.gov (United States)

    Atefi, Ehsan; Lemmo, Stephanie; Fyffe, Darcy; Luker, Gary D.; Tavana, Hossein

    2014-01-01

    This paper presents a new 3D culture microtechnology for high throughput production of tumor spheroids and validates its utility for screening anti-cancer drugs. We use two immiscible polymeric aqueous solutions and microprint a submicroliter drop of the “patterning” phase containing cells into a bath of the “immersion” phase. Selecting proper formulations of biphasic systems using a panel of biocompatible polymers results in the formation of a round drop that confines cells to facilitate spontaneous formation of a spheroid without any external stimuli. Adapting this approach to robotic tools enables straightforward generation and maintenance of spheroids of well-defined size in standard microwell plates and biochemical analysis of spheroids in situ, which is not possible with existing techniques for spheroid culture. To enable high throughput screening, we establish a phase diagram to identify minimum cell densities within specific volumes of the patterning drop to result in a single spheroid. Spheroids show normal growth over long-term incubation and dose-dependent decrease in cellular viability when treated with drug compounds, but present significant resistance compared to monolayer cultures. The unprecedented ease of implementing this microtechnology and its robust performance will benefit high throughput studies of drug screening against cancer cells with physiologically-relevant 3D tumor models. PMID:25411577

  20. High-throughput micro-scale cultivations and chromatography modeling: Powerful tools for integrated process development.

    Science.gov (United States)

    Baumann, Pascal; Hahn, Tobias; Hubbuch, Jürgen

    2015-10-01

    Upstream processes are rather complex to design and the productivity of cells under suitable cultivation conditions is hard to predict. The method of choice for examining the design space is to execute high-throughput cultivation screenings in micro-scale format. Various predictive in silico models have been developed for many downstream processes, leading to a reduction of time and material costs. This paper presents a combined optimization approach based on high-throughput micro-scale cultivation experiments and chromatography modeling. The overall optimum is not necessarily the system with the highest product titer, but the one with superior overall performance across the up- and downstream steps. The methodology is presented in a case study for the Cherry-tagged enzyme Glutathione-S-Transferase from Escherichia coli SE1. The Cherry-Tag™ (Delphi Genetics, Belgium), which can be fused to any target protein, allows for direct product analytics by simple VIS absorption measurements. High-throughput cultivations were carried out in a 48-well format in a BioLector micro-scale cultivation system (m2p-Labs, Germany). The downstream process optimization for a set of randomly picked upstream conditions producing high yields was performed in silico using chromatography modeling software developed in-house (ChromX). The suggested in silico-optimized operational modes for product capturing were subsequently validated. The overall best system was chosen based on a combination of excellent up- and downstream performance. © 2015 Wiley Periodicals, Inc.

  1. A priori Considerations When Conducting High-Throughput Amplicon-Based Sequence Analysis

    Directory of Open Access Journals (Sweden)

    Aditi Sengupta

    2016-03-01

    Full Text Available Amplicon-based sequencing strategies that include 16S rRNA and functional genes, alongside “meta-omics” analyses of communities of microorganisms, have allowed researchers to pose questions and find answers to “who” is present in the environment and “what” they are doing. Next-generation sequencing approaches that aid microbial ecology studies of agricultural systems are fast gaining popularity among agronomy, crop, soil, and environmental science researchers. Given the rapid development of these high-throughput sequencing techniques, researchers with no prior experience will desire information about the best practices that can be used before actually starting high-throughput amplicon-based sequence analyses. We have outlined items that need to be carefully considered in experimental design, sampling, basic bioinformatics, sequencing of mock communities and negative controls, acquisition of metadata, and in standardization of reaction conditions as per experimental requirements. Not all considerations mentioned here may pertain to a particular study. The overall goal is to inform researchers about considerations that must be taken into account when conducting high-throughput microbial DNA sequencing and sequences analysis.

  2. Infra-red thermography for high throughput field phenotyping in Solanum tuberosum.

    Directory of Open Access Journals (Sweden)

    Ankush Prashar

    Full Text Available The rapid development of genomic technology has made high throughput genotyping widely accessible but the associated high throughput phenotyping is now the major limiting factor in genetic analysis of traits. This paper evaluates the use of thermal imaging for the high throughput field phenotyping of Solanum tuberosum for differences in stomatal behaviour. A large multi-replicated trial of a potato mapping population was used to investigate the consistency in genotypic rankings across different trials and across measurements made at different times of day and on different days. The results confirmed a high degree of consistency between the genotypic rankings based on relative canopy temperature on different occasions. Genotype discrimination was enhanced both through normalising data by expressing genotype temperatures as differences from image means and through the enhanced replication obtained by using overlapping images. A Monte Carlo simulation approach was used to confirm the magnitude of genotypic differences that it is possible to discriminate. The results showed a clear negative association between canopy temperature and final tuber yield for this population, when grown under ample moisture supply. We have therefore established infrared thermography as an easy, rapid and non-destructive screening method for evaluating large population trials for genetic analysis. We also envisage this approach as having great potential for evaluating plant response to stress under field conditions.

  3. A general approach for discriminative de novo motif discovery from high-throughput data.

    Science.gov (United States)

    Grau, Jan; Posch, Stefan; Grosse, Ivo; Keilwagen, Jens

    2013-11-01

    De novo motif discovery has been an important challenge of bioinformatics for the past two decades. Since the emergence of high-throughput techniques like ChIP-seq, ChIP-exo and protein-binding microarrays (PBMs), the focus of de novo motif discovery has shifted to runtime and accuracy on large data sets. For this purpose, specialized algorithms have been designed for discovering motifs in ChIP-seq or PBM data. However, none of the existing approaches work perfectly for all three high-throughput techniques. In this article, we propose Dimont, a general approach for fast and accurate de novo motif discovery from high-throughput data. We demonstrate that Dimont yields a higher number of correct motifs from ChIP-seq data than any of the specialized approaches and achieves a higher accuracy for predicting PBM intensities from probe sequence than any of the approaches specifically designed for that purpose. Dimont also reports the expected motifs for several ChIP-exo data sets. Investigating differences between in vitro and in vivo binding, we find that for most transcription factors, the motifs discovered by Dimont are in good accordance between techniques, but we also find notable exceptions. We also observe that modeling intra-motif dependencies may increase accuracy, which indicates that more complex motif models are a worthwhile field of research.

  4. Comprehensive molecular diagnosis of Bardet-Biedl syndrome by high-throughput targeted exome sequencing.

    Directory of Open Access Journals (Sweden)

    Dong-Jun Xing

    Full Text Available Bardet-Biedl syndrome (BBS) is an autosomal recessive disorder with significant genetic heterogeneity. BBS is linked to mutations in 17 genes, which contain more than 200 coding exons. Currently, BBS is diagnosed by direct DNA sequencing for mutations in these genes, which, because of the large genomic screening region, is both time-consuming and expensive. To develop a practical method for the clinical diagnosis of BBS, we established a high-throughput targeted exome sequencing (TES) approach for genetic diagnosis. Five typical BBS patients were recruited and screened for mutations in a total of 144 known genes responsible for inherited retinal diseases, a hallmark symptom of BBS. The genomic DNA of these patients and their families was subjected to high-throughput DNA re-sequencing. Deep bioinformatics analysis was carried out to filter the massive sequencing data, and candidate variants were further confirmed through co-segregation analysis. TES successfully revealed mutations in BBS genes in each patient and family member. Six pathological mutations, including five novel mutations, were revealed in the genes BBS2, MKKS, ARL6 and MKS1. This study represents the first report of targeted exome sequencing in BBS patients and demonstrates that high-throughput TES is an accurate and rapid method for the genetic diagnosis of BBS.

  5. An improved high-throughput lipid extraction method for the analysis of human brain lipids.

    Science.gov (United States)

    Abbott, Sarah K; Jenner, Andrew M; Mitchell, Todd W; Brown, Simon H J; Halliday, Glenda M; Garner, Brett

    2013-03-01

    We have developed a protocol suitable for high-throughput lipidomic analysis of human brain samples. The traditional Folch extraction (using chloroform and glass-glass homogenization) was compared to a high-throughput method combining methyl-tert-butyl ether (MTBE) extraction with mechanical homogenization utilizing ceramic beads. This high-throughput method significantly reduced sample handling time and increased efficiency compared to glass-glass homogenizing. Furthermore, replacing chloroform with MTBE is safer (less carcinogenic/toxic), with lipids dissolving in the upper phase, allowing for easier pipetting and the potential for automation (i.e., robotics). Both methods were applied to the analysis of human occipital cortex. Lipid species (including ceramides, sphingomyelins, choline glycerophospholipids, ethanolamine glycerophospholipids and phosphatidylserines) were analyzed via electrospray ionization mass spectrometry and sterol species were analyzed using gas chromatography mass spectrometry. No differences in lipid species composition were evident when the lipid extraction protocols were compared, indicating that MTBE extraction with mechanical bead homogenization provides an improved method for the lipidomic profiling of human brain tissue.

  6. Droplet microfluidic technology for single-cell high-throughput screening.

    Science.gov (United States)

    Brouzes, Eric; Medkova, Martina; Savenelli, Neal; Marran, Dave; Twardowski, Mariusz; Hutchison, J Brian; Rothberg, Jonathan M; Link, Darren R; Perrimon, Norbert; Samuels, Michael L

    2009-08-25

    We present a droplet-based microfluidic technology that enables high-throughput screening of single mammalian cells. This integrated platform allows for the encapsulation of single cells and reagents in independent aqueous microdroplets (1 pL to 10 nL volumes) dispersed in an immiscible carrier oil and enables the digital manipulation of these reactors at very high throughput. Here, we validate a full droplet screening workflow by conducting a droplet-based cytotoxicity screen. To perform this screen, we first developed a droplet viability assay that permits the quantitative scoring of cell viability and growth within intact droplets. Next, we demonstrated the high viability of encapsulated human monocytic U937 cells over a period of 4 days. Finally, we developed an optically-coded droplet library enabling the identification of the droplets' composition during the assay read-out. Using the integrated droplet technology, we screened a drug library for its cytotoxic effect against U937 cells. Taken together, our droplet microfluidic platform is modular, robust, uses no moving parts, and has a wide range of potential applications including high-throughput single-cell analyses, combinatorial screening, and facilitating small sample analyses.

  7. CrossCheck: an open-source web tool for high-throughput screen data analysis.

    Science.gov (United States)

    Najafov, Jamil; Najafov, Ayaz

    2017-07-19

    Modern high-throughput screening methods allow researchers to generate large datasets that potentially contain important biological information. However, oftentimes, picking relevant hits from such screens and generating testable hypotheses requires training in bioinformatics and the skills to efficiently perform database mining. There are currently no tools available to the general public that allow users to cross-reference their screen datasets with published screen datasets. To this end, we developed CrossCheck, an online platform for high-throughput screen data analysis. CrossCheck is a centralized database that allows effortless comparison of the user-entered list of gene symbols with 16,231 published datasets. These datasets include published data from genome-wide RNAi and CRISPR screens, interactome proteomics and phosphoproteomics screens, cancer mutation databases, low-throughput studies of major cell signaling mediators, such as kinases, E3 ubiquitin ligases and phosphatases, and gene ontological information. Moreover, CrossCheck includes a novel database of predicted protein kinase substrates, which was developed using proteome-wide consensus motif searches. CrossCheck dramatically simplifies high-throughput screen data analysis and enables researchers to dig deep into the published literature and streamline data-driven hypothesis generation. CrossCheck is freely accessible as a web-based application at http://proteinguru.com/crosscheck.
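
    The core operation behind such cross-referencing (comparing a user's hit list against many published gene lists) reduces to set intersection over gene symbols. A minimal sketch of that idea in Python, with invented dataset contents rather than the actual CrossCheck database:

        # Hypothetical published datasets keyed by name; values are gene-symbol sets.
        published = {
            "RNAi_screen_2015": {"TP53", "MTOR", "RIPK1", "ATG7"},
            "CRISPR_screen_2016": {"MTOR", "ULK1", "RB1CC1"},
            "kinase_substrates": {"RIPK1", "AKT1", "GSK3B"},
        }

        user_hits = {"RIPK1", "MTOR", "FOXO3"}

        # For each published dataset, report which of the user's hits it also contains.
        for name, genes in published.items():
            overlap = sorted(user_hits & genes)
            if overlap:
                print(f"{name}: {', '.join(overlap)}")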

  8. Developing high throughput genotyped chromosome segment substitution lines based on population whole-genome re-sequencing in rice (Oryza sativa L.

    Directory of Open Access Journals (Sweden)

    Gu Minghong

    2010-11-01

    Full Text Available Abstract Background Genetic populations provide the basis for a wide range of genetic and genomic studies and have been widely used in genetic mapping, gene discovery and genomics-assisted breeding. Chromosome segment substitution lines (CSSLs) are the most powerful tools for the detection and precise mapping of quantitative trait loci (QTLs) and for the analysis of complex traits in plant molecular genetics. Results In this study, a wide population consisting of 128 CSSLs was developed, derived from the crossing and back-crossing of two sequenced rice cultivars: 9311, an elite indica cultivar, as the recipient and Nipponbare, a japonica cultivar, as the donor. First, a physical map of the 128 CSSLs was constructed on the basis of estimates of the lengths and locations of the substituted chromosome segments using 254 PCR-based molecular markers. From this map, the total size of the 142 substituted segments in the population was 882.2 Mb, 2.37 times that of the rice genome. Second, every CSSL underwent high-throughput genotyping by whole-genome re-sequencing with a 0.13× genome sequence, and an ultrahigh-quality physical map was constructed. This sequencing-based physical map indicated that 117 new segments were detected; almost all were shorter than 3 Mb and were not apparent in the molecular marker map. Furthermore, relative to the molecular marker-based map, the sequencing-based map yielded more precise recombination breakpoint determination and greater accuracy of the lengths of the substituted segments, and provided more accurate background information. Third, using the 128 CSSLs combined with the bin-map converted from the sequencing-based physical map, a multiple linear regression QTL analysis mapped nine QTLs, which explained 89.50% of the phenotypic variance for culm length. A large-effect QTL was located in a 791,655 bp region that contained the rice 'green revolution' gene. Conclusions The present results demonstrated that high

  9. Developing high throughput genotyped chromosome segment substitution lines based on population whole-genome re-sequencing in rice (Oryza sativa L.).

    Science.gov (United States)

    Xu, Jianjun; Zhao, Qiang; Du, Peina; Xu, Chenwu; Wang, Baohe; Feng, Qi; Liu, Qiaoquan; Tang, Shuzhu; Gu, Minghong; Han, Bin; Liang, Guohua

    2010-11-24

    Genetic populations provide the basis for a wide range of genetic and genomic studies and have been widely used in genetic mapping, gene discovery and genomics-assisted breeding. Chromosome segment substitution lines (CSSLs) are the most powerful tools for the detection and precise mapping of quantitative trait loci (QTLs) and for the analysis of complex traits in plant molecular genetics. In this study, a wide population consisting of 128 CSSLs was developed, derived from the crossing and back-crossing of two sequenced rice cultivars: 9311, an elite indica cultivar, as the recipient and Nipponbare, a japonica cultivar, as the donor. First, a physical map of the 128 CSSLs was constructed on the basis of estimates of the lengths and locations of the substituted chromosome segments using 254 PCR-based molecular markers. From this map, the total size of the 142 substituted segments in the population was 882.2 Mb, 2.37 times that of the rice genome. Second, every CSSL underwent high-throughput genotyping by whole-genome re-sequencing with a 0.13× genome sequence, and an ultrahigh-quality physical map was constructed. This sequencing-based physical map indicated that 117 new segments were detected; almost all were shorter than 3 Mb and were not apparent in the molecular marker map. Furthermore, relative to the molecular marker-based map, the sequencing-based map yielded more precise recombination breakpoint determination and greater accuracy of the lengths of the substituted segments, and provided more accurate background information. Third, using the 128 CSSLs combined with the bin-map converted from the sequencing-based physical map, a multiple linear regression QTL analysis mapped nine QTLs, which explained 89.50% of the phenotypic variance for culm length. A large-effect QTL was located in a 791,655 bp region that contained the rice 'green revolution' gene. The present results demonstrated that high throughput genotyped CSSLs combine the advantages of an ultrahigh

  10. High-throughput retrotransposon-based fluorescent markers: improved information content and allele discrimination

    Directory of Open Access Journals (Sweden)

    Baker David

    2009-07-01

    Full Text Available Abstract Background Dense genetic maps, together with the efficiency and accuracy of their construction, are integral to genetic studies and marker-assisted selection for plant breeding. High-throughput multiplex markers that are robust and reproducible can contribute to both efficiency and accuracy. Multiplex markers are often dominant and so have low information content; this, coupled with the pressure to find alternatives to radio-labelling, has led us to adapt the SSAP (sequence-specific amplified polymorphism) marker method from a 33P labelling procedure to fluorescently tagged markers analysed on an automated ABI 3730 xl platform. This method is illustrated for multiplexed SSAP markers based on retrotransposon insertions of pea and is applicable for the rapid and efficient generation of markers from genomes where repetitive element sequence information is available for primer design. We cross-reference SSAP markers previously generated using the 33P manual PAGE system to fluorescent peaks, and use these high-throughput fluorescent SSAP markers for further genetic studies in Pisum. Results The optimal conditions for the fluorescent-labelling method used a triplex set of primers in the PCR. These included a fluorescently labelled specific primer together with its unlabelled counterpart, plus an adapter-based primer with two bases of selection on the 3' end. The introduction of the unlabelled specific primer helped to optimise the fluorescent signal across the range of fragment sizes expected, and eliminated the need for extensive dilutions of PCR amplicons. The software (GeneMarker Version 1.6) used for the high-throughput data analysis provided an assessment of amplicon size in nucleotides, peak areas and fluorescence intensity in a table format, so providing additional information content for each marker. The method has been tested in a small-scale study with 12 pea accessions resulting in 467 polymorphic fluorescent SSAP markers of which

  11. Optimizing transformations for automated, high throughput analysis of flow cytometry data

    Directory of Open Access Journals (Sweden)

    Weng Andrew

    2010-11-01

    Full Text Available Abstract Background In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While the usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, and cell populations can have variances that depend on their mean fluorescence intensities and may exhibit heavily skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter
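
    As a simplified, single-population analogue of the parameter-optimisation idea described above (not the flowCore implementation or the authors' exact criterion), the following Python sketch chooses the cofactor of an arcsinh transformation by maximising a normal profile log-likelihood of the transformed data, including the Jacobian term; the example data are simulated.

        import numpy as np

        def arcsinh_profile_loglik(x, b):
            """Profile log-likelihood of cofactor b, assuming the arcsinh-transformed
            data are approximately normal (a simplified, single-population criterion)."""
            y = np.arcsinh(x / b)
            sigma2 = np.var(y)
            # Jacobian of y = arcsinh(x/b) with respect to x is 1/sqrt(b^2 + x^2).
            log_jacobian = -0.5 * np.sum(np.log(b ** 2 + x ** 2))
            return -0.5 * len(x) * np.log(sigma2) + log_jacobian

        # Simulated fluorescence channel with a wide dynamic range and some negative values.
        rng = np.random.default_rng(2)
        x = np.concatenate([rng.normal(50, 30, 5000), rng.lognormal(8, 1, 5000)])

        # Grid search over candidate cofactors; pick the one maximising the criterion.
        cofactors = np.logspace(0, 4, 40)
        best = max(cofactors, key=lambda b: arcsinh_profile_loglik(x, b))
        print(f"Selected arcsinh cofactor: {best:.1f}")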

  12. A cell-based high-throughput screening assay for radiation susceptibility using automated cell counting.

    Science.gov (United States)

    Hodzic, Jasmina; Dingjan, Ilse; Maas, Mariëlle Jp; van der Meulen-Muileman, Ida H; de Menezes, Renee X; Heukelom, Stan; Verheij, Marcel; Gerritsen, Winald R; Geldof, Albert A; van Triest, Baukelien; van Beusechem, Victor W

    2015-02-27

    Radiotherapy is one of the mainstays in the treatment for cancer, but its success can be limited due to inherent or acquired resistance. Mechanisms underlying radioresistance in various cancers are poorly understood and available radiosensitizers have shown only modest clinical benefit. There is thus a need to identify new targets and drugs for more effective sensitization of cancer cells to irradiation. Compound and RNA interference high-throughput screening technologies allow comprehensive efforts to identify new agents and targets for radiosensitization. However, the gold standard assay to investigate radiosensitivity of cancer cells in vitro, the colony formation assay (CFA), is unsuitable for high-throughput screening. We developed a new high-throughput screening method for determining radiation susceptibility. Fast and uniform irradiation of batches of up to 30 microplates was achieved using a Perspex container and a clinically employed linear accelerator. The readout was done by automated counting of fluorescently stained nuclei using the Acumen eX3 laser scanning cytometer. Assay performance was compared to that of the CFA and the CellTiter-Blue homogeneous uniform-well cell viability assay. The assay was validated in a whole-genome siRNA library screening setting using PC-3 prostate cancer cells. On 4 different cancer cell lines, the automated cell counting assay produced radiation dose response curves that followed a linear-quadratic equation and that correlated better with the results of the CFA than did the cell viability assay. Moreover, the cell counting assay could be used to detect radiosensitization by silencing DNA-PKcs or by adding caffeine. In a high-throughput screening setting, using 4 Gy irradiated and control PC-3 cells, the effects of DNA-PKcs siRNA and non-targeting control siRNA could be clearly discriminated. We developed a simple assay for radiation susceptibility that can be used for high-throughput screening. This will
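
    The linear-quadratic dose response mentioned above is typically written as S(D) = exp(-(alpha*D + beta*D^2)). A minimal Python sketch of fitting that model to normalised cell counts is shown below; the dose and survival values are invented for illustration and are not data from the assay.

        import numpy as np
        from scipy.optimize import curve_fit

        def linear_quadratic(dose, alpha, beta):
            """Surviving fraction under the linear-quadratic model."""
            return np.exp(-(alpha * dose + beta * dose ** 2))

        # Hypothetical normalised cell counts (relative to unirradiated controls)
        # at a series of radiation doses in Gy.
        dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
        surviving = np.array([1.00, 0.78, 0.55, 0.22, 0.07, 0.02])

        (alpha, beta), _ = curve_fit(linear_quadratic, dose, surviving, p0=(0.2, 0.02))
        print(f"alpha = {alpha:.3f} /Gy, beta = {beta:.4f} /Gy^2, alpha/beta = {alpha / beta:.1f} Gy")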

  13. Comparative genomics beyond sequence-based alignments

    DEFF Research Database (Denmark)

    Þórarinsson, Elfar; Yao, Zizhen; Wiklund, Eric D.

    2008-01-01

    Recent computational scans for non-coding RNAs (ncRNAs) in multiple organisms have relied on existing multiple sequence alignments. However, as sequence similarity drops, a key signal of RNA structure--frequent compensating base changes--is increasingly likely to cause sequence-based alignment me...

  14. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)-A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes.

    Science.gov (United States)

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop

  15. The RABiT: high-throughput technology for assessing global DSB repair.

    Science.gov (United States)

    Turner, Helen C; Sharma, P; Perrier, J R; Bertucci, A; Smilenov, L; Johnson, G; Taveras, M; Brenner, D J; Garty, G

    2014-05-01

    At the Center for High-Throughput Minimally Invasive Radiation Biodosimetry, we have developed a rapid automated biodosimetry tool (RABiT); this is a completely automated, ultra-high-throughput robotically based biodosimetry workstation designed for use following a large-scale radiological event, to perform radiation biodosimetry measurements based on a fingerstick blood sample. High throughput is achieved through purpose built robotics, sample handling in filter-bottomed multi-well plates and innovations in high-speed imaging and analysis. Currently, we are adapting the RABiT technologies for use in laboratory settings, for applications in epidemiological and clinical studies. Our overall goal is to extend the RABiT system to directly measure the kinetics of DNA repair proteins. The design of the kinetic/time-dependent studies is based on repeated, automated sampling of lymphocytes from a central reservoir of cells housed in the RABiT incubator as a function of time after the irradiation challenge. In the present study, we have characterized the DNA repair kinetics of the following repair proteins: γ-H2AX, 53-BP1, ATM kinase, MDC1 at multiple times (0.5, 2, 4, 7 and 24 h) after irradiation with 4 Gy γ rays. In order to provide a consistent dose exposure at time zero, we have developed an automated capillary irradiator to introduce DNA DSBs into fingerstick-size blood samples within the RABiT. To demonstrate the scalability of the laboratory-based RABiT system, we have initiated a population study using γ-H2AX as a biomarker.

  16. Generalized empirical Bayesian methods for discovery of differential data in high-throughput biology.

    Science.gov (United States)

    Hardcastle, Thomas J

    2016-01-15

    High-throughput data are now commonplace in biological research. Rapidly changing technologies and applications mean that novel methods for detecting differential behaviour that account for a 'large P, small n' setting are required at an increasing rate. The development of such methods is, in general, being done on an ad hoc basis, requiring repeated development cycles and resulting in a lack of standardization between analyses. We present here a generalized method for identifying differential behaviour within high-throughput biological data through empirical Bayesian methods. This approach is based on our baySeq algorithm for identification of differential expression in RNA-seq data based on a negative binomial distribution, and in paired data based on a beta-binomial distribution. Here we show how the same empirical Bayesian approach can be applied to any parametric distribution, removing the need for lengthy development of novel methods for differently distributed data. Comparisons with existing methods developed to address specific problems in high-throughput biological data show that these generic methods can achieve equivalent or better performance. A number of enhancements to the basic algorithm are also presented to increase flexibility and reduce computational costs. The methods are implemented in the R baySeq (v2) package, available on Bioconductor http://www.bioconductor.org/packages/release/bioc/html/baySeq.html. tjh48@cam.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. A high-throughput, multi-channel photon-counting detector with picosecond timing

    Science.gov (United States)

    Lapington, J. S.; Fraser, G. W.; Miller, G. M.; Ashton, T. J. R.; Jarron, P.; Despeisse, M.; Powolny, F.; Howorth, J.; Milnes, J.

    2009-06-01

    High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies (small-pore microchannel plate devices with very high time resolution and high-speed multi-channel ASIC electronics developed for the LHC at CERN) provides the necessary building blocks for a high-throughput detector system with up to 1024 parallel counting channels and 20 ps time resolution. We describe the detector and electronic design, discuss the current status of the HiContent project and present the results from a 64-channel prototype system. In the absence of an operational detector, we present measurements of the electronics performance using a pulse generator to simulate detector events. Event timing results from the NINO high-speed front-end ASIC captured using a fast digital oscilloscope are compared with data taken with the proposed electronic configuration, which uses the multi-channel HPTDC timing ASIC.

  18. High-throughput detection, genotyping and quantification of the human papillomavirus using real-time PCR.

    Science.gov (United States)

    Micalessi, Isabel M; Boulet, Gaëlle A V; Bogers, Johannes J; Benoy, Ina H; Depuydt, Christophe E

    2011-12-20

    The establishment of the causal relationship between high-risk human papillomavirus (HR-HPV) infection and cervical cancer and its precursors has resulted in the development of HPV DNA detection systems. Currently, real-time PCR assays for the detection of HPV, such as the RealTime High Risk (HR) HPV assay (Abbott) and the cobas® 4800 HPV Test (Roche Molecular Diagnostics), are commercially available. However, none of them enables the detection and typing of all HR-HPV types in a clinical high-throughput setting. This paper describes the laboratory workflow and the validation of a type-specific real-time quantitative PCR (qPCR) assay for high-throughput HPV detection, genotyping and quantification. This assay is routinely applied in a liquid-based cytology screening setting (700 samples in 24 h) and has been used in many epidemiological and clinical studies. The TaqMan-based qPCR assay enables the detection of 17 HPV genotypes and β-globin in seven multiplex reactions. These HPV types include all 12 high-risk types (HPV16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59), three probably high-risk types (HPV53, 66 and 68), one low-risk type (HPV6) and one undetermined risk type (HPV67). An analytical sensitivity of ≤100 copies was obtained for all the HPV types. The analytical specificity of each primer pair was 100%, and intra- and inter-run variability was assessed. This real-time PCR approach enables detection of 17 HPV types, identification of the HPV type and determination of the viral load in a single, sensitive assay suitable for high-throughput screening.
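
    Viral load quantification in assays of this kind is usually derived from a standard curve relating threshold cycle (Ct) to log10 copy number. The sketch below shows that conversion in Python; the slope, intercept and Ct values are invented placeholders, not the assay's calibration data.

        # Hypothetical standard curve for one HPV type: Ct = intercept + slope * log10(copies).
        # A slope near -3.32 corresponds to roughly 100% PCR efficiency.
        SLOPE, INTERCEPT = -3.32, 38.0

        def copies_from_ct(ct, slope=SLOPE, intercept=INTERCEPT):
            """Convert a threshold cycle to a copy number via the standard curve."""
            return 10 ** ((ct - intercept) / slope)

        # Example Ct values for one HPV channel (invented).
        for ct in (24.5, 30.1, 35.8):
            print(f"Ct {ct:>4}: ~{copies_from_ct(ct):,.0f} copies per reaction")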

  19. Integrated Analysis Platform: An Open-Source Information System for High-Throughput Plant Phenotyping.

    Science.gov (United States)

    Klukas, Christian; Chen, Dijun; Pape, Jean-Michel

    2014-06-01

    High-throughput phenotyping is emerging as an important technology to dissect phenotypic components in plants. Efficient image processing and feature extraction are prerequisites to quantify plant growth and performance based on phenotypic traits. Issues include data management, image analysis, and result visualization of large-scale phenotypic data sets. Here, we present Integrated Analysis Platform (IAP), an open-source framework for high-throughput plant phenotyping. IAP provides user-friendly interfaces, and its core functions are highly adaptable. Our system supports image data transfer from different acquisition environments and large-scale image analysis for different plant species based on real-time imaging data obtained from different spectra. Due to the huge amount of data to manage, we utilized a common data structure for efficient storage and organization of both input and result data. We implemented a block-based method for automated image processing to extract a representative list of plant phenotypic traits. We also provide tools for built-in data plotting and result export. For validation of IAP, we performed an example experiment with 33 maize (Zea mays 'Fernandez') plants grown for 9 weeks in an automated greenhouse with nondestructive imaging. Subsequently, the image data were subjected to automated analysis with the maize pipeline implemented in our system. We found that the computed digital volume and number of leaves correlate closely with our manually measured data, with correlations of up to 0.98 and 0.95, respectively. In summary, IAP provides a broad set of functionalities for import/export, management, and automated analysis of high-throughput plant phenotyping data, and its analysis results are highly reliable. © 2014 American Society of Plant Biologists. All Rights Reserved.

  20. SmartGrain: high-throughput phenotyping software for measuring seed shape through image analysis.

    Science.gov (United States)

    Tanabata, Takanari; Shibaya, Taeko; Hori, Kiyosumi; Ebana, Kaworu; Yano, Masahiro

    2012-12-01

    Seed shape and size are among the most important agronomic traits because they affect yield and market price. To obtain accurate seed size data, a large number of measurements are needed because there is little difference in size among seeds from one plant. To promote genetic analysis and selection for seed shape in plant breeding, efficient, reliable, high-throughput seed phenotyping methods are required. We developed SmartGrain software for high-throughput measurement of seed shape. This software uses a new image analysis method to reduce the time taken in the preparation of seeds and in image capture. Outlines of seeds are automatically recognized from digital images, and several shape parameters, such as seed length, width, area, and perimeter length, are calculated. To validate the software, we performed a quantitative trait locus (QTL) analysis for rice (Oryza sativa) seed shape using backcrossed inbred lines derived from a cross between japonica cultivars Koshihikari and Nipponbare, which showed small differences in seed shape. SmartGrain removed areas of awns and pedicels automatically, and several QTLs were detected for six shape parameters. The allelic effect of a QTL for seed length detected on chromosome 11 was confirmed in advanced backcross progeny; the cv Nipponbare allele increased seed length and, thus, seed weight. High-throughput measurement with SmartGrain reduced sampling error and made it possible to distinguish between lines with small differences in seed shape. SmartGrain could accurately recognize seed not only of rice but also of several other species, including Arabidopsis (Arabidopsis thaliana). The software is free to researchers.
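
    The shape parameters SmartGrain reports (length, width, area, perimeter) are the kind of descriptors that can be pulled from labelled seed regions in a binary image. The following sketch uses scikit-image region properties on a synthetic image of two ellipse-like "seeds"; it illustrates the general approach, not SmartGrain's algorithm.

        import numpy as np
        from skimage import measure

        # Synthetic binary image with two ellipse-like "seeds" standing in for a scan.
        yy, xx = np.mgrid[0:200, 0:300]
        seeds = (((yy - 60) / 20) ** 2 + ((xx - 80) / 45) ** 2 < 1) | \
                (((yy - 140) / 18) ** 2 + ((xx - 210) / 40) ** 2 < 1)

        # Label connected components and extract per-seed shape descriptors,
        # analogous to the length/width/area/perimeter traits described above.
        labels = measure.label(seeds)
        for region in measure.regionprops(labels):
            length = region.major_axis_length   # seed length (pixels)
            width = region.minor_axis_length    # seed width (pixels)
            print(f"seed {region.label}: length={length:.1f}, width={width:.1f}, "
                  f"area={region.area}, perimeter={region.perimeter:.1f}")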

  1. High-throughput biochemical fingerprinting of Saccharomyces cerevisiae by Fourier transform infrared spectroscopy.

    Directory of Open Access Journals (Sweden)

    Achim Kohler

    Full Text Available Single-channel optical density measurements of population growth are the dominant large-scale phenotyping methodology for bridging the gene-function gap in yeast. However, a substantial amount of the genetic variation induced by single allele, single gene or double gene knock-out technologies fails to manifest in detectable growth phenotypes under conditions readily testable in the laboratory. Thus, new high-throughput phenotyping technologies capable of providing information about molecular level consequences of genetic variation are sorely needed. Here we report a protocol for high-throughput Fourier transform infrared spectroscopy (FTIR) for measuring biochemical fingerprints of yeast strains. It includes high-throughput cultivation for FTIR spectroscopy, FTIR measurements and spectral pre-treatment to increase measurement accuracy. We demonstrate its capacity to distinguish not only yeast genera, species and populations, but also strains that differ only by a single gene, its excellent signal-to-noise ratio and its relative robustness to measurement bias. Finally, we illustrated its applicability by determining the FTIR signatures of all viable Saccharomyces cerevisiae single gene knock-outs corresponding to lipid biosynthesis genes. Many of the examined knock-out strains showed distinct, highly reproducible FTIR phenotypes despite having no detectable growth phenotype. These phenotypes were confirmed by conventional lipid analysis and could be linked to specific changes in lipid composition. We conclude that the introduced protocol is robust to noise and bias, possible to apply on a very large scale, and capable of generating biologically meaningful biochemical fingerprints that are strain specific, even when strains lack detectable growth phenotypes. Thus, it has a substantial potential for application in the molecular functionalization of the yeast genome.

  2. A High-Throughput, Field-Based Phenotyping Technology for Tall Biomass Crops

    Science.gov (United States)

    2017-01-01

    Recent advances in omics technologies have not been accompanied by equally efficient, cost-effective, and accurate phenotyping methods required to dissect the genetic architecture of complex traits. Even though high-throughput phenotyping platforms have been developed for controlled environments, field-based aerial and ground technologies have only been designed and deployed for short-stature crops. Therefore, we developed and tested Phenobot 1.0, an auto-steered and self-propelled field-based high-throughput phenotyping platform for tall dense canopy crops, such as sorghum (Sorghum bicolor). Phenobot 1.0 was equipped with laterally positioned and vertically stacked stereo RGB cameras. Images collected from 307 diverse sorghum lines were reconstructed in 3D for feature extraction. User interfaces were developed, and multiple algorithms were evaluated for their accuracy in estimating plant height and stem diameter. Tested feature extraction methods included the following: (1) User-interactive Individual Plant Height Extraction (UsIn-PHe) based on dense stereo three-dimensional reconstruction; (2) Automatic Hedge-based Plant Height Extraction (Auto-PHe) based on dense stereo 3D reconstruction; (3) User-interactive Dense Stereo Matching Stem Diameter Extraction; and (4) User-interactive Image Patch Stereo Matching Stem Diameter Extraction (IPaS-Di). Comparative genome-wide association analysis and ground-truth validation demonstrated that both UsIn-PHe and Auto-PHe were accurate methods to estimate plant height, while Auto-PHe had the additional advantage of being a completely automated process. For stem diameter, IPaS-Di generated the most accurate estimates of this biomass-related architectural trait. In summary, our technology was proven robust to obtain ground-based high-throughput plant architecture parameters of sorghum, a tall and densely planted crop species. PMID:28620124

  3. A High-Throughput, Field-Based Phenotyping Technology for Tall Biomass Crops.

    Science.gov (United States)

    Salas Fernandez, Maria G; Bao, Yin; Tang, Lie; Schnable, Patrick S

    2017-08-01

    Recent advances in omics technologies have not been accompanied by equally efficient, cost-effective, and accurate phenotyping methods required to dissect the genetic architecture of complex traits. Even though high-throughput phenotyping platforms have been developed for controlled environments, field-based aerial and ground technologies have only been designed and deployed for short-stature crops. Therefore, we developed and tested Phenobot 1.0, an auto-steered and self-propelled field-based high-throughput phenotyping platform for tall dense canopy crops, such as sorghum (Sorghum bicolor). Phenobot 1.0 was equipped with laterally positioned and vertically stacked stereo RGB cameras. Images collected from 307 diverse sorghum lines were reconstructed in 3D for feature extraction. User interfaces were developed, and multiple algorithms were evaluated for their accuracy in estimating plant height and stem diameter. Tested feature extraction methods included the following: (1) User-interactive Individual Plant Height Extraction (UsIn-PHe) based on dense stereo three-dimensional reconstruction; (2) Automatic Hedge-based Plant Height Extraction (Auto-PHe) based on dense stereo 3D reconstruction; (3) User-interactive Dense Stereo Matching Stem Diameter Extraction; and (4) User-interactive Image Patch Stereo Matching Stem Diameter Extraction (IPaS-Di). Comparative genome-wide association analysis and ground-truth validation demonstrated that both UsIn-PHe and Auto-PHe were accurate methods to estimate plant height, while Auto-PHe had the additional advantage of being a completely automated process. For stem diameter, IPaS-Di generated the most accurate estimates of this biomass-related architectural trait. In summary, our technology was proven robust to obtain ground-based high-throughput plant architecture parameters of sorghum, a tall and densely planted crop species. © 2017 American Society of Plant Biologists. All Rights Reserved.
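
    A much-simplified version of height extraction from a reconstructed 3D point cloud is sketched below: take a low percentile of the z coordinates as the ground level and a high percentile as the canopy top. This is a generic illustration under those assumptions, not the Phenobot UsIn-PHe or Auto-PHe algorithms, and the point cloud is simulated.

        import numpy as np

        def estimate_plant_height(points, ground_pct=2.0, canopy_pct=98.0):
            """Rough plant-height estimate from an (N, 3) point cloud in metres:
            ground level from a low z percentile, canopy top from a high one."""
            z = points[:, 2]
            return np.percentile(z, canopy_pct) - np.percentile(z, ground_pct)

        # Hypothetical reconstructed points for one plot: some ground, some canopy.
        rng = np.random.default_rng(3)
        ground_pts = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500),
                                      rng.normal(0.0, 0.02, 500)])
        canopy_pts = np.column_stack([rng.uniform(0, 1, 2000), rng.uniform(0, 1, 2000),
                                      rng.uniform(0.2, 2.4, 2000)])
        cloud = np.vstack([ground_pts, canopy_pts])
        print(f"Estimated plant height: {estimate_plant_height(cloud):.2f} m")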

  4. High-throughput gene targeting and phenotyping in zebrafish using CRISPR/Cas9.

    Science.gov (United States)

    Varshney, Gaurav K; Pei, Wuhong; LaFave, Matthew C; Idol, Jennifer; Xu, Lisha; Gallardo, Viviana; Carrington, Blake; Bishop, Kevin; Jones, MaryPat; Li, Mingyu; Harper, Ursula; Huang, Sunny C; Prakash, Anupam; Chen, Wenbiao; Sood, Raman; Ledin, Johan; Burgess, Shawn M

    2015-07-01

    The use of CRISPR/Cas9 as a genome-editing tool in various model organisms has radically changed targeted mutagenesis. Here, we present a high-throughput targeted mutagenesis pipeline using CRISPR/Cas9 technology in zebrafish that will make possible both saturation mutagenesis of the genome and large-scale phenotyping efforts. We describe a cloning-free single-guide RNA (sgRNA) synthesis, coupled with streamlined mutant identification methods utilizing fluorescent PCR and multiplexed, high-throughput sequencing. We report germline transmission data from 162 loci targeting 83 genes in the zebrafish genome, in which we obtained a 99% success rate for generating mutations and an average germline transmission rate of 28%. We verified 678 unique alleles from 58 genes by high-throughput sequencing. We demonstrate that our method can be used for efficient multiplexed gene targeting. We also demonstrate that phenotyping can be done in the F1 generation by inbreeding two injected founder fish, significantly reducing animal husbandry and time. This study compares germline transmission data from CRISPR/Cas9 with those of TALENs and ZFNs and shows that CRISPR/Cas9 is sixfold more efficient than these other techniques. We show that the majority of published "rules" for efficient sgRNA design do not effectively predict germline transmission rates in zebrafish, with the exception of a GG or GA dinucleotide genomic match at the 5' end of the sgRNA. Finally, we show that predicted off-target mutagenesis is of low concern for in vivo genetic studies. © 2015 Varshney et al.; Published by Cold Spring Harbor Laboratory Press.

  5. Neuraminidase activity provides a practical read-out for a high throughput influenza antiviral screening assay

    Directory of Open Access Journals (Sweden)

    Wu Meng

    2008-09-01

    Full Text Available Abstract Background The emergence of influenza strains that are resistant to commonly used antivirals has highlighted the need to develop new compounds that target viral gene products or host mechanisms that are essential for effective virus replication. Existing assays to identify potential antiviral compounds often use high throughput screening assays that target specific viral replication steps. To broaden the search for antivirals, cell-based replication assays can be performed, but these are often labor intensive and have limited throughput. Results We have adapted a traditional virus neutralization assay to develop a practical, cell-based, high throughput screening assay. This assay uses viral neuraminidase (NA as a read-out to quantify influenza replication, thereby offering an assay that is both rapid and sensitive. In addition to identification of inhibitors that target either viral or host factors, the assay allows simultaneous evaluation of drug toxicity. Antiviral activity was demonstrated for a number of known influenza inhibitors including amantadine that targets the M2 ion channel, zanamivir that targets NA, ribavirin that targets IMP dehydrogenase, and bis-indolyl maleimide that targets protein kinase A/C. Amantadine-resistant strains were identified by comparing IC50 with that of the wild-type virus. Conclusion Antivirals with specificity for a broad range of targets are easily identified in an accelerated viral inhibition assay that uses NA as a read-out of replication. This assay is suitable for high throughput screening to identify potential antivirals or can be used to identify drug-resistant influenza strains.

  6. A high throughput architecture for a low complexity soft-output demapping algorithm

    Science.gov (United States)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and are therefore part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in double iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
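
    As a worked illustration of the soft-demapping step discussed above, the Python sketch below computes max-log LLRs for a Gray-mapped 16-QAM symbol. It assumes an AWGN noise model and one particular Gray labelling; it is a software reference for the general technique only, not the paper's low-complexity algorithm or hardware architecture.

      # Hedged sketch: max-log LLR soft demapping for Gray-mapped 16-QAM.
      # Constellation labelling, noise model and scaling are assumptions for illustration.
      import numpy as np

      LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}   # Gray coding per I/Q axis
      CONST = {}
      for b0 in (0, 1):
          for b1 in (0, 1):
              for b2 in (0, 1):
                  for b3 in (0, 1):
                      CONST[(b0, b1, b2, b3)] = complex(LEVELS[(b0, b1)], LEVELS[(b2, b3)]) / np.sqrt(10)

      def maxlog_llrs(y, noise_var):
          """Max-log LLRs (log P(b=0)/P(b=1) convention) for the 4 bits of one received symbol y."""
          llrs = []
          for k in range(4):
              d0 = min(abs(y - s) ** 2 for bits, s in CONST.items() if bits[k] == 0)
              d1 = min(abs(y - s) ** 2 for bits, s in CONST.items() if bits[k] == 1)
              llrs.append((d1 - d0) / noise_var)
          return llrs

      tx_bits = (1, 0, 0, 1)
      y = CONST[tx_bits] + 0.05 * (np.random.randn() + 1j * np.random.randn())
      print(maxlog_llrs(y, noise_var=0.01))   # expect signs -, +, +, - for these transmitted bits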

  7. High throughput generated micro-aggregates of chondrocytes stimulate cartilage formation in vitro and in vivo.

    Science.gov (United States)

    Moreira Teixeira, L S; Leijten, J C H; Sobral, J; Jin, R; van Apeldoorn, A A; Feijen, J; van Blitterswijk, C; Dijkstra, P J; Karperien, M

    2012-06-05

    Cell-based cartilage repair strategies such as matrix-induced autologous chondrocyte implantation (MACI) could be improved by enhancing cell performance. We hypothesised that micro-aggregates of chondrocytes generated in high-throughput prior to implantation in a defect could stimulate cartilaginous matrix deposition and remodelling. To address this issue, we designed a micro-mould to enable controlled high-throughput formation of micro-aggregates. Morphology, stability, gene expression profiles and chondrogenic potential of micro-aggregates of human and bovine chondrocytes were evaluated and compared to single-cells cultured in micro-wells and in 3D after encapsulation in Dextran-Tyramine (Dex-TA) hydrogels in vitro and in vivo. We successfully formed micro-aggregates of human and bovine chondrocytes with highly controlled size, stability and viability within 24 hours. Micro-aggregates of 100 cells presented a superior balance in Collagen type I and Collagen type II gene expression over single cells and micro-aggregates of 50 and 200 cells. Matrix metalloproteinases 1, 9 and 13 mRNA levels were decreased in micro-aggregates compared to single-cells. Histological and biochemical analysis demonstrated enhanced matrix deposition in constructs seeded with micro-aggregates cultured in vitro and in vivo, compared to single-cell seeded constructs. Whole genome microarray analysis and single gene expression profiles using human chondrocytes confirmed increased expression of cartilage-related genes when chondrocytes were cultured in micro-aggregates. In conclusion, we succeeded in controlled high-throughput formation of micro-aggregates of chondrocytes. Compared to single cell-seeded constructs, seeding of constructs with micro-aggregates greatly improved neo-cartilage formation. Therefore, micro-aggregation prior to chondrocyte implantation in current MACI procedures, may effectively accelerate hyaline cartilage formation.

  8. A high throughput single nucleotide polymorphism multiplex assay for parentage assignment in New Zealand sheep.

    Directory of Open Access Journals (Sweden)

    Shannon M Clarke

    Full Text Available Accurate pedigree information is critical to animal breeding systems to ensure the highest rate of genetic gain and management of inbreeding. The abundance of available genomic data, together with development of high throughput genotyping platforms, means that single nucleotide polymorphisms (SNPs) are now the DNA marker of choice for genomic selection studies. Furthermore the superior qualities of SNPs compared to microsatellite markers allows for standardization between laboratories; a property that is crucial for developing an international set of markers for traceability studies. The objective of this study was to develop a high throughput SNP assay for use in the New Zealand sheep industry that gives accurate pedigree assignment and will allow a reduction in breeder input over lambing. This required two phases of development--firstly, a method of extracting quality DNA from ear-punch tissue performed in a high throughput cost efficient manner and secondly a SNP assay that has the ability to assign paternity to progeny resulting from mob mating. A likelihood based approach to infer paternity was used where sires with the highest LOD score (log of the ratio of the likelihood given parentage to likelihood given non-parentage) are assigned. An 84 "parentage SNP panel" was developed that assigned, on average, 99% of progeny to a sire in a problem where there were 3,000 progeny from 120 mob mated sires that included numerous half sib sires. In only 6% of those cases was there another sire with at least a 0.02 probability of paternity. Furthermore dam information (either recorded, or by genotyping possible dams) was absent, highlighting the SNP test's suitability for paternity testing. Utilization of this parentage SNP assay will allow implementation of progeny testing into large commercial farms where the improved accuracy of sire assignment and genetic evaluations will increase genetic gain in the sheep industry.

  9. A high throughput single nucleotide polymorphism multiplex assay for parentage assignment in New Zealand sheep.

    Science.gov (United States)

    Clarke, Shannon M; Henry, Hannah M; Dodds, Ken G; Jowett, Timothy W D; Manley, Tim R; Anderson, Rayna M; McEwan, John C

    2014-01-01

    Accurate pedigree information is critical to animal breeding systems to ensure the highest rate of genetic gain and management of inbreeding. The abundance of available genomic data, together with development of high throughput genotyping platforms, means that single nucleotide polymorphisms (SNPs) are now the DNA marker of choice for genomic selection studies. Furthermore the superior qualities of SNPs compared to microsatellite markers allows for standardization between laboratories; a property that is crucial for developing an international set of markers for traceability studies. The objective of this study was to develop a high throughput SNP assay for use in the New Zealand sheep industry that gives accurate pedigree assignment and will allow a reduction in breeder input over lambing. This required two phases of development--firstly, a method of extracting quality DNA from ear-punch tissue performed in a high throughput cost efficient manner and secondly a SNP assay that has the ability to assign paternity to progeny resulting from mob mating. A likelihood based approach to infer paternity was used where sires with the highest LOD score (log of the ratio of the likelihood given parentage to likelihood given non-parentage) are assigned. An 84 "parentage SNP panel" was developed that assigned, on average, 99% of progeny to a sire in a problem where there were 3,000 progeny from 120 mob mated sires that included numerous half sib sires. In only 6% of those cases was there another sire with at least a 0.02 probability of paternity. Furthermore dam information (either recorded, or by genotyping possible dams) was absent, highlighting the SNP test's suitability for paternity testing. Utilization of this parentage SNP assay will allow implementation of progeny testing into large commercial farms where the improved accuracy of sire assignment and genetic evaluations will increase genetic gain in the sheep industry.
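
    The LOD-based assignment described above can be illustrated with a toy calculation: for each candidate sire, sum per-SNP log10 ratios of the likelihood of the offspring genotype given that sire (with the dam treated as a random population draw) to its likelihood under non-parentage, and assign the sire with the highest total. The Python sketch below uses invented genotypes and allele frequencies and omits genotyping-error handling and the actual 84-SNP panel.

      # Hedged toy sketch of LOD-based paternity assignment from biallelic SNPs.
      # Genotypes are coded as the count of allele 'A' (0, 1 or 2); all numbers are invented.
      import math

      def transmit_prob(sire_geno, allele):
          """P(sire transmits `allele` | sire genotype)."""
          p_A = sire_geno / 2.0
          return p_A if allele == "A" else 1.0 - p_A

      def offspring_prob_given_sire(off_geno, sire_geno, freq_A):
          """P(offspring genotype | sire genotype, dam drawn from the population)."""
          freq = {"A": freq_A, "B": 1.0 - freq_A}
          prob = 0.0
          for sire_allele in ("A", "B"):
              for dam_allele in ("A", "B"):
                  geno = (sire_allele == "A") + (dam_allele == "A")
                  if geno == off_geno:
                      prob += transmit_prob(sire_geno, sire_allele) * freq[dam_allele]
          return prob

      def offspring_prob_random(off_geno, freq_A):
          """P(offspring genotype) under Hardy-Weinberg (non-parentage hypothesis)."""
          p, q = freq_A, 1.0 - freq_A
          return {0: q * q, 1: 2 * p * q, 2: p * p}[off_geno]

      def lod_score(offspring, sire, freqs):
          """Sum of per-SNP log10 likelihood ratios (parentage vs non-parentage)."""
          lod = 0.0
          for off_g, sire_g, f in zip(offspring, sire, freqs):
              num = offspring_prob_given_sire(off_g, sire_g, f)
              den = offspring_prob_random(off_g, f)
              lod += math.log10(max(num, 1e-6) / den)   # floor avoids -inf at apparent exclusions
          return lod

      # The candidate sire with the highest LOD would be assigned.
      offspring = [1, 2, 0, 1]
      sire_a    = [2, 2, 0, 1]
      sire_b    = [0, 1, 2, 1]
      freqs     = [0.5, 0.3, 0.7, 0.5]
      print(lod_score(offspring, sire_a, freqs), lod_score(offspring, sire_b, freqs))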

  10. High-throughput olfactory conditioning and memory retention test show variation in Nasonia parasitic wasps.

    Science.gov (United States)

    Hoedjes, K M; Steidle, J L M; Werren, J H; Vet, L E M; Smid, H M

    2012-10-01

    Most of our knowledge on learning and memory formation results from extensive studies on a small number of animal species. Although features and cellular pathways of learning and memory are highly similar in this diverse group of species, there are also subtle differences. Closely related species of parasitic wasps display substantial variation in memory dynamics and can be instrumental to understanding both the adaptive benefit of and mechanisms underlying this variation. Parasitic wasps of the genus Nasonia offer excellent opportunities for multidisciplinary research on this topic. Genetic and genomic resources available for Nasonia are unrivaled among parasitic wasps, providing tools for genetic dissection of mechanisms that cause differences in learning. This study presents a robust, high-throughput method for olfactory conditioning of Nasonia using a host encounter as reward. A T-maze olfactometer facilitates high-throughput memory retention testing and employs standardized odors of equal detectability, as quantified by electroantennogram recordings. Using this setup, differences in memory retention between Nasonia species were shown. In both Nasonia vitripennis and Nasonia longicornis, memory was observed up to at least 5 days after a single conditioning trial, whereas Nasonia giraulti lost its memory after 2 days. This difference in learning may be an adaptation to species-specific differences in ecological factors, for example, host preference. The high-throughput methods for conditioning and memory retention testing are essential tools to study both ultimate and proximate factors that cause variation in learning and memory formation in Nasonia and other parasitic wasp species. © 2012 The Authors. Genes, Brain and Behavior © 2012 Blackwell Publishing Ltd and International Behavioural and Neural Genetics Society.

  11. High-Throughput Screening of Myometrial Calcium-Mobilization to Identify Modulators of Uterine Contractility

    Science.gov (United States)

    Herington, Jennifer L.; Swale, Daniel R.; Brown, Naoko; Shelton, Elaine L.; Choi, Hyehun; Williams, Charles H.; Hong, Charles C.; Paria, Bibhash C.; Denton, Jerod S.; Reese, Jeff

    2015-01-01

    The uterine myometrium (UT-myo) is a therapeutic target for preterm labor, labor induction, and postpartum hemorrhage. Stimulation of intracellular Ca2+-release in UT-myo cells by oxytocin is a final pathway controlling myometrial contractions. The goal of this study was to develop a dual-addition assay for high-throughput screening of small-molecule compounds, which could regulate Ca2+-mobilization in UT-myo cells, and hence, myometrial contractions. Primary murine UT-myo cells in 384-well plates were loaded with a Ca2+-sensitive fluorescent probe, and then screened for inducers of Ca2+-mobilization and inhibitors of oxytocin-induced Ca2+-mobilization. The assay exhibited robust screening statistics (Z′ = 0.73), DMSO-tolerance, and was validated for high-throughput screening against 2,727 small molecules from the Spectrum, NIH Clinical I and II collections of well-annotated compounds. The screen revealed a hit-rate of 1.80% for agonist and 1.39% for antagonist compounds. Concentration-dependent responses of hit-compounds demonstrated an EC50 less than 10 μM for 21 hit-antagonist compounds, compared to only 7 hit-agonist compounds. Subsequent studies focused on hit-antagonist compounds. Based on the percent inhibition and functional annotation analyses, we selected 4 confirmed hit-antagonist compounds (benzbromarone, dipyridamole, fenoterol hydrobromide and nisoldipine) for further analysis. Using an ex vivo isometric contractility assay, each compound significantly inhibited uterine contractility, at different potencies (IC50). Overall, these results demonstrate for the first time that high-throughput small-molecule screening of myometrial Ca2+-mobilization is an ideal primary approach for discovering modulators of uterine contractility. PMID:26600013
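
    The Z′ value quoted above (Z′ = 0.73) is the standard plate-quality statistic for high-throughput assays. A minimal calculation on made-up control-well data is sketched below in Python; the values are illustrative assumptions, not the study's data.

      # Hedged sketch of the Z'-factor assay-quality statistic:
      # Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|; values above ~0.5 indicate a robust assay.
      import numpy as np

      def z_prime(positive_controls, negative_controls):
          pos = np.asarray(positive_controls, dtype=float)
          neg = np.asarray(negative_controls, dtype=float)
          return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

      rng = np.random.default_rng(0)
      pos = rng.normal(1.0, 0.05, 32)   # e.g. agonist-stimulated signal wells (arbitrary units, invented)
      neg = rng.normal(0.1, 0.05, 32)   # e.g. vehicle-only wells (invented)
      print(round(z_prime(pos, neg), 2))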

  12. Cancer panomics: computational methods and infrastructure for integrative analysis of cancer high-throughput "omics" data

    DEFF Research Database (Denmark)

    Brunak, Søren; De La Vega, Francisco M.; Rätsch, Gunnar

    2014-01-01

    , which continues to drop in cost, and that has enabled the sequencing of the genome, transcriptome, and epigenome of the tumors of a large number of cancer patients in order to discover the molecular aberrations that drive the oncogenesis of several types of cancer. Applying these technologies...... and response to therapy. This in turn necessitates new computational methods to integrate large-scale "omics" data for each patient with their electronic medical records, and in the context of the results from large-scale pan-cancer research studies, to select the best therapy and/or clinical trial...

  13. Epigenomics of cancer – emerging new concepts

    Science.gov (United States)

    Hassler, Melanie R.; Egger, Gerda

    2012-01-01

    The complexity of the mammalian genome is regulated by heritable epigenetic mechanisms, which provide the basis for differentiation, development and cellular homeostasis. These mechanisms act on the level of chromatin, by modifying DNA, histone proteins and nucleosome density/composition. During the last decade it became clear that cancer is defined by a variety of epigenetic changes, which occur in early stages of disease and parallel genetic mutations. With the advent of new technologies we are just starting to unravel the cancer epigenome and latest mechanistic findings provide the first clue as to how altered epigenetic patterns might occur in different cancers. Here we review latest findings on chromatin related mechanisms and hypothesize how their impairment might contribute to the altered epigenome of cancer cells. PMID:22609632

  14. Rice epigenomics and epigenetics: challenges and opportunities.

    Science.gov (United States)

    Chen, Xiangsong; Zhou, Dao-Xiu

    2013-05-01

    During recent years, rice genome-wide epigenomic information, such as DNA methylation and histone modifications, which is important for genome activity, has accumulated. The function of a number of rice epigenetic regulators has been studied, many of which are found to be involved in a diverse range of developmental and stress-responsive pathways. Analysis of epigenetic variations among different rice varieties indicates that epigenetic modification may lead to inheritable phenotypic variation. Characterizing the phenotypic consequences of rice epigenomic variations and the underlying chromatin mechanisms, and identifying epialleles related to important agronomic traits, may provide novel strategies to enhance agronomically favorable traits and grain productivity in rice. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Epigenomics of cancer - emerging new concepts.

    Science.gov (United States)

    Hassler, Melanie R; Egger, Gerda

    2012-11-01

    The complexity of the mammalian genome is regulated by heritable epigenetic mechanisms, which provide the basis for differentiation, development and cellular homeostasis. These mechanisms act on the level of chromatin, by modifying DNA, histone proteins and nucleosome density/composition. During the last decade it became clear that cancer is defined by a variety of epigenetic changes, which occur in early stages of disease and parallel genetic mutations. With the advent of new technologies we are just starting to unravel the cancer epigenome and latest mechanistic findings provide the first clue as to how altered epigenetic patterns might occur in different cancers. Here we review latest findings on chromatin related mechanisms and hypothesize how their impairment might contribute to the altered epigenome of cancer cells. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  16. Direct multiplex sequencing (DMPS)--a novel method for targeted high-throughput sequencing of ancient and highly degraded DNA

    National Research Council Canada - National Science Library

    Stiller, Mathias; Knapp, Michael; Stenzel, Udo; Hofreiter, Michael; Meyer, Matthias

    2009-01-01

    Although the emergence of high-throughput sequencing technologies has enabled whole-genome sequencing from extinct organisms, little progress has been made in accelerating targeted sequencing from highly degraded DNA...

  17. High-throughput glycosylation analysis of therapeutic immunoglobulin G by capillary gel electrophoresis using a DNA analyzer.

    NARCIS (Netherlands)

    Reusch, D.; Haberger, M.; Kailich, T.; Heidenreich, A.K.; Kampe, M.; Bulau, P.; Wuhrer, M.

    2014-01-01

    The Fc glycosylation of therapeutic antibodies is crucial for their effector functions and their behavior in pharmacokinetics and pharmacodynamics. To monitor the Fc glycosylation in bioprocess development and characterization, high-throughput techniques for glycosylation analysis are needed. Here,

  18. High-Throughput Dietary Exposure Predictions for Chemical Migrants from Food Contact Substances for Use in Chemical Prioritization

    Data.gov (United States)

    U.S. Environmental Protection Agency — Under the ExpoCast program, United States Environmental Protection Agency (EPA) researchers have developed a high-throughput (HT) framework for estimating aggregate...

  19. HTTK R Package v1.4 - JSS Article on HTTK: R Package for High-Throughput Toxicokinetics

    Data.gov (United States)

    U.S. Environmental Protection Agency — httk: High-Throughput Toxicokinetics Functions and data tables for simulation and statistical analysis of chemical toxicokinetics ("TK") using data obtained from...

  20. State of the Art High-Throughput Approaches to Genotoxicity: Flow Micronucleus, Ames II, GreenScreen and Comet

    Science.gov (United States)

    State of the Art High-Throughput Approaches to Genotoxicity: Flow Micronucleus, Ames II, GreenScreen and Comet (Presented by Dr. Marilyn J. Aardema, Chief Scientific Advisor, Toxicology, Dr. Leon Stankowski, et al., 6/28/2012)

  1. High-throughput search for caloric materials: the CaloriCool approach

    Science.gov (United States)

    Zarkevich, N. A.; Johnson, D. D.; Pecharsky, V. K.

    2018-01-01

    The high-throughput search paradigm adopted by the newly established caloric materials consortium—CaloriCool®—with the goal to substantially accelerate discovery and design of novel caloric materials is briefly discussed. We begin with describing material selection criteria based on known properties, which are then followed by heuristic fast estimates, ab initio calculations, all of which has been implemented in a set of automated computational tools and measurements. We also demonstrate how theoretical and computational methods serve as a guide for experimental efforts by considering a representative example from the field of magnetocaloric materials.

  2. High Throughput Preparation of Aligned Nanofibers Using an Improved Bubble-Electrospinning

    Directory of Open Access Journals (Sweden)

    Liang Yu

    2017-11-01

    Full Text Available An improved bubble-electrospinning setup, consisting of a cone-shaped air nozzle, a copper solution reservoir connected directly to the power generator, and a high-speed rotating copper wire drum as a collector, was presented for high-throughput preparation of aligned nanofibers. The influence of drum rotation speed on the morphology and properties of the obtained nanofibers was explored. The results showed that the alignment degree, diameter distribution, and properties of the nanofibers improved as the drum rotation speed increased.

  3. Filtration improves the performance of a high-throughput screen for anti-mycobacterial compounds.

    Directory of Open Access Journals (Sweden)

    Nancy Cheng

    Full Text Available The tendency for mycobacteria to aggregate poses a challenge for their use in microplate based assays. Good dispersions have been difficult to achieve in high-throughput screening (HTS) assays used in the search for novel antibacterial drugs to treat tuberculosis and other related diseases. Here we describe a method using filtration to overcome the problem of variability resulting from aggregation of mycobacteria. This method consistently yielded higher reproducibility and lower variability than conventional methods, such as settling under gravity and vortexing.

  4. Quantitative high-throughput screening identifies inhibitors of anthrax-induced cell death.

    Science.gov (United States)

    Zhu, Ping Jun; Hobson, John P; Southall, Noel; Qiu, Cunping; Thomas, Craig J; Lu, Jiamo; Inglese, James; Zheng, Wei; Leppla, Stephen H; Bugge, Thomas H; Austin, Christopher P; Liu, Shihui

    2009-07-15

    Here, we report the results of a quantitative high-throughput screen (qHTS) measuring the endocytosis and translocation of a beta-lactamase-fused lethal factor and the identification of small molecules capable of obstructing the process of anthrax toxin internalization. Several small molecules protected RAW264.7 macrophages and CHO cells from anthrax lethal toxin and also protected cells from an LF-Pseudomonas exotoxin fusion protein and diphtheria toxin. Further efforts demonstrated that these compounds impaired the PA heptamer pre-pore to pore conversion in cells expressing the CMG2 receptor, but not the related TEM8 receptor, indicating that these compounds likely interfere with toxin internalization.

  5. Screening and Crystallization Plates for Manual and High-throughput Protein Crystal Growth

    Science.gov (United States)

    Thorne, Robert E. (Inventor); Berejnov, Viatcheslav (Inventor); Kalinin, Yevgeniy (Inventor)

    2010-01-01

    In one embodiment, a crystallization and screening plate comprises a plurality of cells open at a top and a bottom, a frame that defines the cells in the plate, and at least two films. The first film seals a top of the plate and the second film seals a bottom of the plate. At least one of the films is patterned to strongly pin the contact lines of drops dispensed onto it, fixing their position and shape. The present invention also includes methods and other devices for manual and high-throughput protein crystal growth.

  6. Newborn screening for X-linked adrenoleukodystrophy: further evidence high throughput screening is feasible.

    Science.gov (United States)

    Theda, Christiane; Gibbons, Katy; Defor, Todd E; Donohue, Pamela K; Golden, W Christopher; Kline, Antonie D; Gulamali-Majid, Fizza; Panny, Susan R; Hubbard, Walter C; Jones, Richard O; Liu, Anita K; Moser, Ann B; Raymond, Gerald V

    2014-01-01

    X-linked adrenoleukodystrophy (ALD) is characterized by adrenal insufficiency and neurologic involvement with onset at variable ages. Plasma very long chain fatty acids are elevated in ALD; even in asymptomatic patients. We demonstrated previously that liquid chromatography tandem mass spectrometry measuring C26:0 lysophosphatidylcholine reliably identifies affected males. We prospectively applied this method to 4689 newborn blood spot samples; no false positives were observed. We show that high throughput neonatal screening for ALD is methodologically feasible. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Towards sensitive, high-throughput, biomolecular assays based on fluorescence lifetime

    Science.gov (United States)

    Ioanna Skilitsi, Anastasia; Turko, Timothé; Cianfarani, Damien; Barre, Sophie; Uhring, Wilfried; Hassiepen, Ulrich; Léonard, Jérémie

    2017-09-01

    Time-resolved fluorescence detection for robust sensing of biomolecular interactions is developed by implementing time-correlated single photon counting in high-throughput conditions. Droplet microfluidics is used as a promising platform for the very fast handling of low-volume samples. We illustrate the potential of this very sensitive and cost-effective technology in the context of an enzymatic activity assay based on fluorescently-labeled biomolecules. Fluorescence lifetime detection by time-correlated single photon counting is shown to enable reliable discrimination between positive and negative control samples at a throughput as high as several hundred samples per second.
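
    As a hedged illustration of how a lifetime read-out can be extracted from time-correlated single photon counting data, the sketch below fits a mono-exponential decay to a synthetic TCSPC histogram with a simple weighted log-linear fit. The decay model, bin widths and count threshold are assumptions; the assay's actual analysis chain is not reproduced.

      # Hedged sketch: estimate a fluorescence lifetime from a synthetic TCSPC decay histogram.
      import numpy as np

      rng = np.random.default_rng(3)
      tau_true = 2.5                               # ns (invented)
      t = np.linspace(0.0, 12.0, 120)              # time bins (ns)
      counts = rng.poisson(2000 * np.exp(-t / tau_true) + 1)

      def fit_lifetime(t, counts):
          """Weighted log-linear fit of ln(counts) = ln(A) - t/tau over bins with enough counts."""
          mask = counts > 20
          slope, _ = np.polyfit(t[mask], np.log(counts[mask]), 1, w=np.sqrt(counts[mask]))
          return -1.0 / slope

      print(round(fit_lifetime(t, counts), 2))     # should be close to 2.5 ns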

  8. Forecasting Ecological Genomics: High-Tech Animal Instrumentation Meets High-Throughput Sequencing.

    Science.gov (United States)

    Shafer, Aaron B A; Northrup, Joseph M; Wikelski, Martin; Wittemyer, George; Wolf, Jochen B W

    2016-01-01

    Recent advancements in animal tracking technology and high-throughput sequencing are rapidly changing the questions and scope of research in the biological sciences. The integration of genomic data with high-tech animal instrumentation comes as a natural progression of traditional work in ecological genetics, and we provide a framework for linking the separate data streams from these technologies. Such a merger will elucidate the genetic basis of adaptive behaviors like migration and hibernation and advance our understanding of fundamental ecological and evolutionary processes such as pathogen transmission, population responses to environmental change, and communication in natural populations.

  9. Improving High-Throughput Sequencing Approaches for Reconstructing the Evolutionary Dynamics of Upper Paleolithic Human Groups

    DEFF Research Database (Denmark)

    Seguin-Orlando, Andaine

    the development and testing of innovative molecular approaches aiming at improving the amount of informative HTS data one can recover from ancient DNA extracts. We have characterized important ligation and amplification biases in the sequencing library building and enrichment steps, which can impede further...... been mainly driven by the development of High-Throughput DNA Sequencing (HTS) technologies but also by the implementation of novel molecular tools tailored to the manipulation of ultra short and damaged DNA molecules. Our ability to retrieve traces of genetic material has tremendously improved, pushing...

  10. Microfluidic Impedance Flow Cytometry Enabling High-Throughput Single-Cell Electrical Property Characterization

    Directory of Open Access Journals (Sweden)

    Jian Chen

    2015-04-01

    Full Text Available This article reviews recent developments in microfluidic impedance flow cytometry for high-throughput electrical property characterization of single cells. Four major perspectives of microfluidic impedance flow cytometry for single-cell characterization are included in this review: (1) early developments of microfluidic impedance flow cytometry for single-cell electrical property characterization; (2) microfluidic impedance flow cytometry with enhanced sensitivity; (3) microfluidic impedance and optical flow cytometry for single-cell analysis and (4) integrated point of care system based on microfluidic impedance flow cytometry. We examine the advantages and limitations of each technique and discuss future research opportunities from the perspectives of both technical innovation and clinical applications.

  11. Microfluidic impedance flow cytometry enabling high-throughput single-cell electrical property characterization.

    Science.gov (United States)

    Chen, Jian; Xue, Chengcheng; Zhao, Yang; Chen, Deyong; Wu, Min-Hsien; Wang, Junbo

    2015-04-29

    This article reviews recent developments in microfluidic impedance flow cytometry for high-throughput electrical property characterization of single cells. Four major perspectives of microfluidic impedance flow cytometry for single-cell characterization are included in this review: (1) early developments of microfluidic impedance flow cytometry for single-cell electrical property characterization; (2) microfluidic impedance flow cytometry with enhanced sensitivity; (3) microfluidic impedance and optical flow cytometry for single-cell analysis and (4) integrated point of care system based on microfluidic impedance flow cytometry. We examine the advantages and limitations of each technique and discuss future research opportunities from the perspectives of both technical innovation and clinical applications.

  12. High-Throughput Cancer Cell Sphere Formation for 3D Cell Culture.

    Science.gov (United States)

    Chen, Yu-Chih; Yoon, Euisik

    2017-01-01

    Three-dimensional (3D) cell culture is critical in studying cancer pathology and drug response. Though 3D cancer sphere culture can be performed in low-adherent dishes or well plates, the unregulated cell aggregation may skew the results. In contrast, microfluidic 3D culture can allow precise control of cell microenvironments and provide higher throughput by orders of magnitude. In this chapter, we will look into engineering innovations in a microfluidic platform for high-throughput cancer cell sphere formation and review the implementation methods in detail.

  13. High-throughput SRCD using multi-well plates and its applications

    Science.gov (United States)

    Hussain, Rohanah; Jávorfi, Tamás; Rudd, Timothy R.; Siligardi, Giuliano

    2016-12-01

    The sample compartment for high-throughput synchrotron radiation circular dichroism (HT-SRCD) has been developed to satisfy an increased demand for protein characterisation in terms of folding and binding interaction properties, not only in the traditional field of structural biology but also in the growing research area of materials science, with the potential to save time by 80%. As the understanding of protein behaviour in different solvent environments has increased dramatically, and as novel functions are engineered, such as recombinant proteins modified for applications ranging from harvesting solar energy to metabolomics for cleaning up heavy-metal and organic-molecule pollution, there is a need to characterise these systems rapidly.

  14. High-Throughput Screening of the Asymmetric Decarboxylative Alkylation Reaction of Enolate-Stabilized Enol Carbonates

    KAUST Repository

    Stoltz, Brian

    2010-06-14

    The use of high-throughput screening allowed for the optimization of reaction conditions for the palladium-catalyzed asymmetric decarboxylative alkylation reaction of enolate-stabilized enol carbonates. Changing to a non-polar reaction solvent and to an electron-deficient PHOX derivative as ligand from our standard reaction conditions improved the enantioselectivity for the alkylation of a ketal-protected, 1,3-diketone-derived enol carbonate from 28% ee to 84% ee. Similar improvements in enantioselectivity were seen for a β-keto-ester-derived and an α-phenyl cyclohexanone-derived enol carbonate.

  15. Detection of regulatory polymorphisms: high-throughput capillary DNase I footprinting.

    Science.gov (United States)

    Hancock, Matthew; Shephard, Elizabeth A

    2013-01-01

    We describe a method for high-throughput analysis of protein-binding sites in DNA using 96-well plates and capillary electrophoresis. The genomic DNA or plasmid DNA to be analyzed is amplified using fluorescent primers, incubated with an appropriate nuclear extract and treated with DNase I. Separation of the DNase I-generated fragments and co-analysis of their base sequences identify the position of protein-binding sites in a DNA fragment. The method is applicable to the identification of base changes, e.g., single-nucleotide polymorphisms (SNPs), that eliminate protein binding to DNA.

  16. Statistical Methods for Comparative Phenomics Using High-Throughput Phenotype Microarrays

    KAUST Repository

    Sturino, Joseph

    2010-01-24

    We propose statistical methods for comparing phenomics data generated by the Biolog Phenotype Microarray (PM) platform for high-throughput phenotyping. Instead of the routinely used visual inspection of data with no sound inferential basis, we develop two approaches. The first approach is based on quantifying the distance between mean or median curves from two treatments and then applying a permutation test; we also consider a permutation test applied to areas under mean curves. The second approach employs functional principal component analysis. Properties of the proposed methods are investigated on both simulated data and data sets from the PM platform.
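
    The first approach described above, a permutation test on the distance between mean curves from two treatments, can be sketched as follows on synthetic growth curves. The data, replicate counts and the simple L2 distance are invented for illustration and do not come from the Biolog PM platform.

      # Hedged sketch: permutation test on the distance between mean curves from two treatments.
      import numpy as np

      def mean_curve_distance(group_a, group_b):
          """L2 distance between the pointwise mean curves of two groups (wells x timepoints)."""
          return np.sqrt(np.sum((group_a.mean(axis=0) - group_b.mean(axis=0)) ** 2))

      def permutation_pvalue(group_a, group_b, n_perm=2000, seed=0):
          rng = np.random.default_rng(seed)
          observed = mean_curve_distance(group_a, group_b)
          pooled = np.vstack([group_a, group_b])
          n_a = len(group_a)
          hits = 0
          for _ in range(n_perm):
              idx = rng.permutation(len(pooled))
              d = mean_curve_distance(pooled[idx[:n_a]], pooled[idx[n_a:]])
              hits += d >= observed
          return (hits + 1) / (n_perm + 1)

      # Synthetic example: two treatments, 8 replicate wells each, 48 timepoints.
      t = np.linspace(0, 48, 48)
      rng = np.random.default_rng(1)
      treat_1 = 1.0 / (1 + np.exp(-(t - 20) / 4)) + rng.normal(0, 0.05, (8, 48))
      treat_2 = 1.0 / (1 + np.exp(-(t - 24) / 4)) + rng.normal(0, 0.05, (8, 48))
      print(permutation_pvalue(treat_1, treat_2))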

  17. High-throughput screening for novel anti-infectives using a C. elegans pathogenesis model.

    Science.gov (United States)

    Conery, Annie L; Larkins-Ford, Jonah; Ausubel, Frederick M; Kirienko, Natalia V

    2014-03-14

    In recent history, the nematode Caenorhabditis elegans has provided a compelling platform for the discovery of novel antimicrobial drugs. In this protocol, we present an automated, high-throughput C. elegans pathogenesis assay, which can be used to screen for anti-infective compounds that prevent nematodes from dying due to Pseudomonas aeruginosa. New antibiotics identified from such screens would be promising candidates for treatment of human infections, and also can be used as probe compounds to identify novel targets in microbial pathogenesis or host immunity. Copyright © 2014 John Wiley & Sons, Inc.

  18. Multiplexed homogeneous proximity ligation assays for high throughput protein biomarker research in serological material

    DEFF Research Database (Denmark)

    Lundberg, Martin; Thorsen, Stine Buch; Assarsson, Erika

    2011-01-01

    specificity, even in multiplex, by its dual recognition feature, its proximity requirement, and most importantly by using unique sequence specific reporter fragments on both antibody-based probes. To illustrate the potential of this protein detection technology, a pilot biomarker research project......A high throughput protein biomarker discovery tool has been developed based on multiplexed proximity ligation assays (PLA) in a homogeneous format in the sense of no washing steps. The platform consists of four 24-plex panels profiling 74 putative biomarkers with sub pM sensitivity each consuming...

  19. Curating and Preparing High-Throughput Screening Data for Quantitative Structure-Activity Relationship Modeling.

    Science.gov (United States)

    Kim, Marlene T; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2016-01-01

    Publicly available bioassay data often contains errors. Curating massive bioassay data, especially high-throughput screening (HTS) data, for Quantitative Structure-Activity Relationship (QSAR) modeling requires the assistance of automated data curation tools. Using automated data curation tools is beneficial to users, especially those without prior computer skills, because many platforms have been developed and optimized based on standardized requirements. As a result, the users do not need to extensively configure the curation tool prior to the application procedure. In this chapter, a freely available automatic tool to curate and prepare HTS data for QSAR modeling purposes will be described.
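
    As a hedged illustration of the kind of curation such tools automate, the Python sketch below collapses duplicate structures and flags records with conflicting activity calls. The column names and toy records are assumptions, not the tool described in the chapter; real curation would also canonicalize structures with a cheminformatics toolkit.

      # Hedged sketch of two routine HTS-curation steps for QSAR modeling:
      # collapse duplicate structures and flag records whose activity calls conflict.
      import pandas as pd

      records = pd.DataFrame({
          "smiles":   ["CCO", "CCO", "c1ccccc1", "CCN", "CCN"],
          "activity": ["active", "active", "inactive", "active", "inactive"],
      })

      def curate(df):
          grouped = df.groupby("smiles")["activity"].agg(lambda s: set(s))
          consistent = grouped[grouped.apply(len) == 1].apply(lambda s: next(iter(s)))
          conflicting = grouped[grouped.apply(len) > 1]
          return consistent.reset_index(), list(conflicting.index)

      clean, conflicts = curate(records)
      print(clean)       # one row per unique structure with a consistent label
      print(conflicts)   # ['CCN'] -> flagged for manual review or removal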

  20. Lights, camera, action: high-throughput plant phenotyping is ready for a close-up.

    Science.gov (United States)

    Fahlgren, Noah; Gehan, Malia A; Baxter, Ivan

    2015-04-01

    Anticipated population growth, shifting demographics, and environmental variability over the next century are expected to threaten global food security. In the face of these challenges, crop yield for food and fuel must be maintained and improved using fewer input resources. In recent years, genetic tools for profiling crop germplasm have benefited from rapid advances in DNA sequencing, and now similar advances are needed to improve the throughput of plant phenotyping. We highlight recent developments in high-throughput plant phenotyping using robotic-assisted imaging platforms and computer vision-assisted analysis tools. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. AELAS: Automatic ELAStic property derivations via high-throughput first-principles computation

    Science.gov (United States)

    Zhang, S. H.; Zhang, R. F.

    2017-11-01

    The elastic properties are fundamental and important for crystalline materials as they relate to other mechanical properties, various thermodynamic qualities as well as some critical physical properties. However, a complete set of experimentally determined elastic properties is only available for a small subset of known materials, and an automatic scheme for the derivation of elastic properties that is adapted to high-throughput computation is much needed. In this paper, we present the AELAS code, an automated program for calculating second-order elastic constants of both two-dimensional and three-dimensional single crystal materials with any symmetry, which is designed mainly for high-throughput first-principles computation. Derivations of other general elastic properties, such as Young's, bulk and shear moduli, Poisson's ratio of polycrystalline materials, Pugh ratio, Cauchy pressure, elastic anisotropy and the elastic stability criterion, are also implemented in this code. The implementation of the code has been critically validated by extensive evaluations and tests on a broad class of materials, including two-dimensional and three-dimensional materials, demonstrating its efficiency and capability for high-throughput screening of specific materials with targeted mechanical properties. Program Files doi:http://dx.doi.org/10.17632/f8fwg4j9tw.1 Licensing provisions: BSD 3-Clause Programming language: Fortran Nature of problem: To automate the calculations of second-order elastic constants and the derivations of other elastic properties for two-dimensional and three-dimensional materials with any symmetry via high-throughput first-principles computation. Solution method: The space-group number is firstly determined by the SPGLIB code [1] and the structure is then redefined to a unit cell in IEEE format [2]. Secondly, based on the determined space group number, a set of distortion modes is automatically specified and the distorted structure files are generated
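
    To illustrate one of the derivations such a code automates, the Python sketch below applies the standard Voigt-Reuss-Hill averaging to a 6 × 6 elastic-constant matrix to obtain polycrystalline bulk and shear moduli, Young's modulus and Poisson's ratio. The cubic constants are placeholders, and this is not the AELAS Fortran implementation.

      # Hedged sketch: Voigt-Reuss-Hill averages from a 6x6 stiffness matrix C (GPa).
      import numpy as np

      def vrh_moduli(C):
          S = np.linalg.inv(C)   # compliance matrix
          # Voigt (upper) bounds from stiffnesses
          K_V = (C[0, 0] + C[1, 1] + C[2, 2] + 2 * (C[0, 1] + C[0, 2] + C[1, 2])) / 9.0
          G_V = (C[0, 0] + C[1, 1] + C[2, 2] - (C[0, 1] + C[0, 2] + C[1, 2])
                 + 3 * (C[3, 3] + C[4, 4] + C[5, 5])) / 15.0
          # Reuss (lower) bounds from compliances
          K_R = 1.0 / (S[0, 0] + S[1, 1] + S[2, 2] + 2 * (S[0, 1] + S[0, 2] + S[1, 2]))
          G_R = 15.0 / (4 * (S[0, 0] + S[1, 1] + S[2, 2]) - 4 * (S[0, 1] + S[0, 2] + S[1, 2])
                        + 3 * (S[3, 3] + S[4, 4] + S[5, 5]))
          K, G = (K_V + K_R) / 2.0, (G_V + G_R) / 2.0   # Hill averages
          E = 9 * K * G / (3 * K + G)                    # Young's modulus
          nu = (3 * K - 2 * G) / (2 * (3 * K + G))       # Poisson's ratio
          return K, G, E, nu

      # Placeholder cubic stiffness matrix: C11, C12, C44 chosen arbitrarily.
      C11, C12, C44 = 250.0, 120.0, 100.0
      C = np.zeros((6, 6))
      C[:3, :3] = C12
      np.fill_diagonal(C[:3, :3], C11)
      C[3, 3] = C[4, 4] = C[5, 5] = C44
      print([round(x, 2) for x in vrh_moduli(C)])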

  2. Solid-phase cloning for high-throughput assembly of single and multiple DNA parts

    DEFF Research Database (Denmark)

    Lundqvist, Magnus; Edfors, Fredrik; Sivertsson, Åsa

    2015-01-01

    present a robust automated protocol for restriction enzyme based SPC and its performance for the cloning of >60 000 unique human gene fragments into expression vectors. In addition, we report on SPC-based single-strand assembly for applications where exact control of the sequence between fragments......We describe solid-phase cloning (SPC) for high-throughput assembly of expression plasmids. Our method allows PCR products to be put directly into a liquid handler for capture and purification using paramagnetic streptavidin beads and conversion into constructs by subsequent cloning reactions. We...

  3. High-throughput gated photon counter with two detection windows programmable down to 70 ps width

    Energy Technology Data Exchange (ETDEWEB)

    Boso, Gianluca; Tosi, Alberto, E-mail: alberto.tosi@polimi.it; Zappa, Franco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano (Italy); Mora, Alberto Dalla [Dipartimento di Fisica, Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano (Italy)

    2014-01-15

    We present the design and characterization of a high-throughput gated photon counter able to count electrical pulses occurring within two well-defined and programmable detection windows. We extensively characterized and validated this instrument up to 100 Mcounts/s and with detection window width down to 70 ps. This instrument is suitable for many applications and proves to be a cost-effective and compact alternative to time-correlated single-photon counting equipment, thanks to its easy configurability, user-friendly interface, and fully adjustable settings via a Universal Serial Bus (USB) link to a remote computer.

  4. High-throughput mapping of cell-wall polymers within and between plants using novel microarrays

    DEFF Research Database (Denmark)

    Moller, Isabel Eva; Sørensen, Iben; Bernal Giraldo, Adriana Jimena

    2007-01-01

    We describe here a methodology that enables the occurrence of cell-wall glycans to be systematically mapped throughout plants in a semi-quantitative high-throughput fashion. The technique (comprehensive microarray polymer profiling, or CoMPP) integrates the sequential extraction of glycans from...... analysis of mutant and wild-type plants, as demonstrated here for the Arabidopsis thaliana mutants fra8, mur1 and mur3. CoMPP was also applied to Physcomitrella patens cell walls and was validated by carbohydrate linkage analysis. These data provide new insights into the structure and functions of plant...

  5. High-throughput sequencing of forensic genetic samples using punches of FTA cards with buccal swabs

    DEFF Research Database (Denmark)

    Kampmann, Marie-Louise; Buchard, Anders; Børsting, Claus

    2016-01-01

    Here, we demonstrate that punches from buccal swab samples preserved on FTA cards can be used for high-throughput DNA sequencing, also known as massively parallel sequencing (MPS). We typed 44 reference samples with the HID-Ion AmpliSeq Identity Panel using washed 1.2 mm punches from FTA cards...... with buccal swabs and compared the results with those obtained with DNA extracted using the EZ1 DNA Investigator Kit. Concordant profiles were obtained for all samples. Our protocol includes simple punch, wash, and PCR steps, reducing cost and hands-on time in the laboratory. Furthermore, it facilitates...... automation of DNA sequencing....

  6. Zebrafish: A marvel of high-throughput biology for 21st century toxicology.

    Science.gov (United States)

    Bugel, Sean M; Tanguay, Robert L; Planchart, Antonio

    2014-09-07

    The evolutionary conservation of genomic, biochemical and developmental features between zebrafish and humans is gradually coming into focus with the end result that the zebrafish embryo model has emerged as a powerful tool for uncovering the effects of environmental exposures on a multitude of biological processes with direct relevance to human health. In this review, we highlight advances in automation, high-throughput (HT) screening, and analysis that leverage the power of the zebrafish embryo model for unparalleled advances in our understanding of how chemicals in our environment affect our health and wellbeing.

  7. A spinnable and automatable StageTip for high throughput peptide desalting and proteomics

    OpenAIRE

    sprotocols

    2014-01-01

    Authors: Yanbao Yu, Madeline Smith & Rembert Pieper. Abstract: The stop-and-go-extraction tips (StageTips) have been widely used in shotgun proteomics to clean/desalt peptide samples prior to LC-MS/MS analysis. Here, an extremely simple and high-throughput StageTip protocol is described. In this protocol, an adaptor is introduced to the StageTip, which makes it readily amenable to bench-top centrifugation. Each spin step (with 200 μL buffer loaded) takes around 2 min at 4,000 r...

  8. High-throughput method for optimum solubility screening for homogeneity and crystallization of proteins

    Science.gov (United States)

    Kim, Sung-Hou [Moraga, CA; Kim, Rosalind [Moraga, CA; Jancarik, Jamila [Walnut Creek, CA

    2012-01-31

    An optimum solubility screen is described in which a panel of buffers and many additives are provided in order to obtain the most homogeneous and monodisperse protein condition for protein crystallization. The present methods are useful for proteins that aggregate and cannot be concentrated prior to setting up crystallization screens. A high-throughput method using the hanging-drop method, vapor diffusion equilibrium, and a panel of twenty-four buffers is further provided. Using the present methods, 14 poorly behaving proteins were screened; 11 of the proteins showed highly improved dynamic light scattering results that allowed them to be concentrated, and 9 were crystallized.

  9. Advanced transfection with Lipofectamine 2000 reagent: primary neurons, siRNA, and high-throughput applications.

    Science.gov (United States)

    Dalby, Brian; Cates, Sharon; Harris, Adam; Ohki, Elise C; Tilkins, Mary L; Price, Paul J; Ciccarone, Valentina C

    2004-06-01

    Lipofectamine 2000 is a cationic liposome based reagent that provides high transfection efficiency and high levels of transgene expression in a range of mammalian cell types in vitro using a simple protocol. Optimum transfection efficiency and subsequent cell viability depend on a number of experimental variables such as cell density, liposome and DNA concentrations, liposome-DNA complexing time, and the presence or absence of media components such as antibiotics and serum. The importance of these factors in Lipofectamine 2000 mediated transfection will be discussed together with some specific applications: transfection of primary neurons, high throughput transfection, and delivery of small interfering RNAs. Copyright 2003 Elsevier Inc.

  10. High-throughput enzyme screening platform for the IPP-bypass mevalonate pathway for isopentenol production

    DEFF Research Database (Denmark)

    Kang, Aram; Meadows, Corey W.; Canu, Nicolas

    2017-01-01

    ATP requirements and isopentenyl diphosphate (IPP) toxicity pose immediate challenges for engineering bacterial strains to overproduce commodities utilizing IPP as an intermediate. To overcome these limitations, we developed an “IPP-bypass” isopentenol pathway using the promiscuous activity...... the endogenous non-mevalonate pathway, we developed a high-throughput screening platform that correlated promiscuous PMD activity toward MVAP with cellular growth. Successful identification of mutants that altered PMD activity demonstrated the sensitivity and specificity of the screening platform. Strains...

  11. Development of Droplet Microfluidics Enabling High-Throughput Single-Cell Analysis

    Directory of Open Access Journals (Sweden)

    Na Wen

    2016-07-01

    Full Text Available This article reviews recent developments in droplet microfluidics enabling high-throughput single-cell analysis. Five key aspects in this field are included in this review: (1 prototype demonstration of single-cell encapsulation in microfluidic droplets; (2 technical improvements of single-cell encapsulation in microfluidic droplets; (3 microfluidic droplets enabling single-cell proteomic analysis; (4 microfluidic droplets enabling single-cell genomic analysis; and (5 integrated microfluidic droplet systems enabling single-cell screening. We examine the advantages and limitations of each technique and discuss future research opportunities by focusing on key performances of throughput, multifunctionality, and absolute quantification.

  12. Using machine learning and high-throughput RNA sequencing to classify the precursors of small non-coding RNAs

    OpenAIRE

    Ryvkin, Paul; Leung, Yuk Yee; Ungar, Lyle H.; Gregory, Brian D.; Wang, Li-San

    2013-01-01

    Recent advances in high-throughput sequencing allow researchers to examine the transcriptome in more detail than ever before. Using a method known as high-throughput small RNA-sequencing, we can now profile the expression of small regulatory RNAs such as microRNAs and small interfering RNAs (siRNAs) with a great deal of sensitivity. However, there are many other types of small RNAs (

  13. Polymer surface functionalities that control human embryoid body cell adhesion revealed by high throughput surface characterization of combinatorial material microarrays

    OpenAIRE

    Yang, Jing; Mei, Ying; Hook, Andrew L.; Taylor, Michael; Urquhart, Andrew J.; Bogatyrev, Said R.; Langer, Robert; Anderson, Daniel G.; Davies, Martyn C.; Alexander, Morgan R.

    2010-01-01

    High throughput materials discovery using combinatorial polymer microarrays to screen for new biomaterials with new and improved function is established as a powerful strategy. Here we combine this screening approach with high throughput surface characterisation (HT-SC) to identify surface structure-function relationships. We explore how this combination can help to identify surface chemical moieties that control protein adsorption and subsequent cellular response. The adhesion of human embry...

  14. The Einstein Center for Epigenomics: studying the role of epigenomic dysregulation in human disease.

    Science.gov (United States)

    McLellan, Andrew S; Dubin, Robert A; Jing, Qiang; Maqbool, Shahina B; Olea, Raul; Westby, Gael; Broin, Pilib Ó; Fazzari, Melissa J; Zheng, Deyou; Suzuki, Masako; Greally, John M

    2009-10-01

    There is increasing interest in the role of epigenetic and transcriptional dysregulation in the pathogenesis of a range of human diseases, not just in the best-studied example of cancer. It is, however, quite difficult for an individual investigator to perform these studies, as they involve genome-wide molecular assays combined with sophisticated computational analytical approaches of very large datasets that may be generated from various resources and technologies. In 2008, the Albert Einstein College of Medicine in New York, USA established a Center for Epigenomics to facilitate the research programs of its investigators, providing shared resources for genome-wide assays and for data analysis. As a result, several avenues of research are now expanding, with cancer epigenomics being complemented by studies of the epigenomics of infectious disease and a neuroepigenomics program.

  15. Representing high throughput expression profiles via perturbation barcodes reveals compound targets.

    Directory of Open Access Journals (Sweden)

    Tracey M Filzen

    2017-02-01

    Full Text Available High throughput mRNA expression profiling can be used to characterize the response of cell culture models to perturbations such as pharmacologic modulators and genetic perturbations. As profiling campaigns expand in scope, it is important to homogenize, summarize, and analyze the resulting data in a manner that captures significant biological signals in spite of various noise sources such as batch effects and stochastic variation. We used the L1000 platform for large-scale profiling of 978 representative genes across thousands of compound treatments. Here, a method is described that uses deep learning techniques to convert the expression changes of the landmark genes into a perturbation barcode that reveals important features of the underlying data, performing better than the raw data in revealing important biological insights. The barcode captures compound structure and target information, and predicts a compound's high throughput screening promiscuity, to a higher degree than the original data measurements, indicating that the approach uncovers underlying factors of the expression data that are otherwise entangled or masked by noise. Furthermore, we demonstrate that visualizations derived from the perturbation barcode can be used to more sensitively assign functions to unknown compounds through a guilt-by-association approach, which we use to predict and experimentally validate the activity of compounds on the MAPK pathway. The demonstrated application of deep metric learning to large-scale chemical genetics projects highlights the utility of this and related approaches to the extraction of insights and testable hypotheses from big, sometimes noisy data.
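
    As a hedged stand-in for the barcode idea, and explicitly not the paper's deep-metric-learning encoder, the sketch below uses a sign random projection (SimHash-style) to compress a 978-gene expression-change vector into a short binary code so that replicate profiles land close in Hamming distance. All profiles and dimensions are synthetic.

      # Hedged stand-in: sign random-projection hashing of expression-change profiles into binary codes.
      import numpy as np

      def barcode(profile, projection):
          """Binary code: sign pattern of the profile projected onto random directions."""
          return (profile @ projection > 0).astype(np.uint8)

      def hamming(a, b):
          return int(np.sum(a != b))

      rng = np.random.default_rng(42)
      n_genes, n_bits = 978, 64
      P = rng.standard_normal((n_genes, n_bits))

      x = rng.standard_normal(n_genes)                        # one perturbation profile (synthetic)
      x_replicate = x + 0.3 * rng.standard_normal(n_genes)    # noisy replicate of the same treatment
      y = rng.standard_normal(n_genes)                        # unrelated perturbation

      print(hamming(barcode(x, P), barcode(x_replicate, P)))  # small distance expected
      print(hamming(barcode(x, P), barcode(y, P)))            # about n_bits/2 expected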

  16. Microfluidic cell microarray platform for high throughput analysis of particle-cell interactions.

    Science.gov (United States)

    Tong, Ziqiu; Rajeev, Gayathri; Guo, Keying; Ivask, Angela; McCormick, Scott; Lombi, Enzo; Priest, Craig; Voelcker, Nicolas H

    2018-03-02

    With the advances in nanotechnology, particles with various size, shape, surface chemistry and composition can be easily produced. Nano- and microparticles have been extensively explored in many industrial and clinical applications. Ensuring that the particles themselves are not possessing any toxic effects to the biological system is of paramount importance. This paper describes a proof of concept method in which a microfluidic system is used in conjunction with a cell microarray technique aiming to streamline the analysis of particle-cell interaction in a high throughput manner. Polymeric microparticles, with different particle surface functionalities, were firstly used to investigate the efficiency of particle-cell adhesion under dynamic flow. Silver nanoparticles (AgNPs,10 nm in diameter) perfused at different concentrations (0 to 20 μg/ml) in parallel streams over the cells in the microchannel exhibited higher toxicity compared to the static culture in the 96 well plate format. This developed microfluidic system can be easily scaled up to accommodate larger number of microchannels for high throughput analysis of potential toxicity of a wide range of particles in a single experiment.

  17. A high-throughput method for detection of DNA in chloroplasts using flow cytometry

    Directory of Open Access Journals (Sweden)

    Oldenburg Delene J

    2007-03-01

    Full Text Available Abstract Background The amount of DNA in the chloroplasts of some plant species has been shown recently to decline dramatically during leaf development. A high-throughput method of DNA detection in chloroplasts is now needed in order to facilitate the further investigation of this process using large numbers of tissue samples. Results The DNA-binding fluorophores 4',6-diamidino-2-phenylindole (DAPI), SYBR Green I (SG), SYTO 42, and SYTO 45 were assessed for their utility in flow cytometric analysis of DNA in Arabidopsis chloroplasts. Fluorescence microscopy and real-time quantitative PCR (qPCR) were used to validate flow cytometry data. We found neither DAPI nor SYTO 45 suitable for flow cytometric analysis of chloroplast DNA (cpDNA) content, but did find changes in cpDNA content during development by flow cytometry using SG and SYTO 42. The latter dye provided more sensitive detection, and the results were similar to those from the fluorescence microscopic analysis. Differences in SYTO 42 fluorescence were found to correlate with differences in cpDNA content as determined by qPCR using three primer sets widely spaced across the chloroplast genome, suggesting that the whole genome undergoes copy number reduction during development, rather than selective reduction/degradation of subgenomic regions. Conclusion Flow cytometric analysis of chloroplasts stained with SYTO 42 is a high-throughput method suitable for determining changes in cpDNA content during development and for sorting chloroplasts on the basis of DNA content.

  18. High-Throughput Peptide Epitope Mapping Using Carbon Nanotube Field-Effect Transistors

    Directory of Open Access Journals (Sweden)

    Steingrimur Stefansson

    2013-01-01

    Full Text Available Label-free and real-time detection technologies can dramatically reduce the time and cost of pharmaceutical testing and development. However, to reach their full promise, these technologies need to be adaptable to high-throughput automation. To demonstrate the potential of single-walled carbon nanotube field-effect transistors (SWCNT-FETs) for high-throughput peptide-based assays, we have designed circuits arranged in an 8 × 12 (96-well) format that are accessible to standard multichannel pipettors. We performed epitope mapping of two HIV-1 gp160 antibodies using an overlapping gp160 15-mer peptide library coated onto non-functionalized SWCNTs. The 15-mer peptides did not require a linker to adhere to the non-functionalized SWCNTs, and binding data were obtained in real time for all 96 circuits. Despite some sequence differences between the HIV strains used to generate these antibodies and the strain represented in the overlapping peptide library, our results using these antibodies are in good agreement with known data, indicating that peptides immobilized onto SWCNTs are accessible and that linear epitope mapping can be performed in minutes using SWCNT-FETs.

  19. A multi-endpoint, high-throughput study of nanomaterial toxicity in Caenorhabditis elegans

    Science.gov (United States)

    Jung, Sang-Kyu; Qu, Xiaolei; Aleman-Meza, Boanerges; Wang, Tianxiao; Riepe, Celeste; Liu, Zheng; Li, Qilin; Zhong, Weiwei

    2015-01-01

    The booming nanotech industry has raised public concerns about the environmental health and safety impact of engineered nanomaterials (ENMs). High-throughput assays are needed to obtain toxicity data for the rapidly increasing number of ENMs. Here we present a suite of high-throughput methods to study nanotoxicity in intact animals using Caenorhabditis elegans as a model. At the population level, our system measures food consumption of thousands of animals to evaluate population fitness. At the organism level, our automated system analyzes hundreds of individual animals for body length, locomotion speed, and lifespan. To demonstrate the utility of our system, we applied this technology to test the toxicity of 20 nanomaterials under four concentrations. Only fullerene nanoparticles (nC60), fullerol, TiO2, and CeO2 showed little or no toxicity. Various degrees of toxicity were detected from different forms of carbon nanotubes, graphene, carbon black, Ag, and fumed SiO2 nanoparticles. Aminofullerene and UV irradiated nC60 also showed small but significant toxicity. We further investigated the effects of nanomaterial size, shape, surface chemistry, and exposure conditions on toxicity. Our data are publicly available at the open-access nanotoxicity database www.QuantWorm.org/nano. PMID:25611253

  20. A high-throughput and quantitative method to assess the mutagenic potential of translesion DNA synthesis

    Science.gov (United States)

    Taggart, David J.; Camerlengo, Terry L.; Harrison, Jason K.; Sherrer, Shanen M.; Kshetry, Ajay K.; Taylor, John-Stephen; Huang, Kun; Suo, Zucai

    2013-01-01

    Cellular genomes are constantly damaged by endogenous and exogenous agents that covalently and structurally modify DNA to produce DNA lesions. Although most lesions are mended by various DNA repair pathways in vivo, a significant number of damage sites persist during genomic replication. Our understanding of the mutagenic outcomes derived from these unrepaired DNA lesions has been hindered by the low throughput of existing sequencing methods. Therefore, we have developed a cost-effective high-throughput short oligonucleotide sequencing assay that uses next-generation DNA sequencing technology for the assessment of the mutagenic profiles of translesion DNA synthesis catalyzed by any error-prone DNA polymerase. The vast amount of sequencing data produced was aligned and quantified using our novel software. As an example, the high-throughput short oligonucleotide sequencing assay was used to analyze the types and frequencies of mutations upstream, downstream and at a site-specifically placed cis–syn thymidine–thymidine dimer generated individually by three lesion-bypass human Y-family DNA polymerases. PMID:23470999
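
    The core quantification step can be pictured as tallying base calls at each template position across the aligned short reads; the sketch below is a much-simplified stand-in for the authors' alignment software, with a hypothetical reference sequence and lesion position:

      from collections import Counter

      REFERENCE = "ACGTACGTTACGTACG"   # hypothetical template sequence
      LESION_POS = 8                   # hypothetical 0-based position of the lesion

      def mutation_profile(reads, reference=REFERENCE):
          # Tally base calls at every template position across aligned reads.
          profile = [Counter() for _ in reference]
          for read in reads:
              for pos, base in enumerate(read[:len(reference)]):
                  profile[pos][base] += 1
          return profile

      reads = ["ACGTACGTAACGTACG", "ACGTACGTTACGTACG", "ACGTACGTCACGTACG"]
      site = mutation_profile(reads)[LESION_POS]
      total = sum(site.values())
      for base, n in site.most_common():
          print(f"{base} opposite the lesion: {n / total:.2f}")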

  1. Strategies for Reliable Exploitation of Evolutionary Concepts in High Throughput Biology

    Directory of Open Access Journals (Sweden)

    Julie D. Thompson

    2008-01-01

    Full Text Available The recent availability of the complete genome sequences of a large number of model organisms, together with the immense amount of data being produced by the new high-throughput technologies, means that we can now begin comparative analyses to understand the mechanisms involved in the evolution of the genome and their consequences in the study of biological systems. Phylogenetic approaches provide a unique conceptual framework for performing comparative analyses of all this data, for propagating information between different systems and for predicting or inferring new knowledge. As a result, phylogeny-based inference systems are now playing an increasingly important role in most areas of high throughput genomics, including studies of promoters (phylogenetic footprinting), interactomes (based on the presence and degree of conservation of interacting proteins), and in comparisons of transcriptomes or proteomes (phylogenetic proximity and co-regulation/co-expression). Here we review the recent developments aimed at making automatic, reliable phylogeny-based inference feasible in large-scale projects. We also discuss how evolutionary concepts and phylogeny-based inference strategies are now being exploited in order to understand the evolution and function of biological systems. Such advances will be fundamental for the success of the emerging disciplines of systems biology and synthetic biology, and will have wide-reaching effects in applied fields such as biotechnology, medicine and pharmacology.

  2. A Direct Comparison of Remote Sensing Approaches for High-Throughput Phenotyping in Plant Breeding

    Science.gov (United States)

    Tattaris, Maria; Reynolds, Matthew P.; Chapman, Scott C.

    2016-01-01

    Remote sensing (RS) of plant canopies permits non-intrusive, high-throughput monitoring of plant physiological characteristics. This study compared three RS approaches: a low-flying UAV (unmanned aerial vehicle), proximal sensing, and satellite-based imagery. Two physiological traits were considered, canopy temperature (CT) and a vegetation index (NDVI), to determine the most viable approaches for large-scale crop genetic improvement. The UAV-based platform achieves plot-level resolution while measuring several hundred plots in one mission via high-resolution thermal and multispectral imagery measured at altitudes of 30–100 m. The satellite measures multispectral imagery from an altitude of 770 km. Information was compared with proximal measurements using IR thermometers and an NDVI sensor at a distance of 0.5–1 m above plots. For robust comparisons, CT and NDVI were assessed on panels of elite cultivars under irrigated and drought conditions, in different thermal regimes, and on un-adapted genetic resources under water deficit. Correlations between airborne data and yield/biomass at maturity were generally higher than equivalent proximal correlations. NDVI was derived from high-resolution satellite imagery for only larger sized plots (8.5 × 2.4 m) due to restricted pixel density. Results support the use of UAV-based RS techniques for high-throughput phenotyping for both precision and efficiency. PMID:27536304

  3. Exploring the elephant: histopathology in high-throughput phenotyping of mutant mice

    Directory of Open Access Journals (Sweden)

    Paul N. Schofield

    2012-01-01

    Full Text Available Recent advances in gene knockout techniques and the in vivo analysis of mutant mice, together with the advent of large-scale projects for systematic mouse mutagenesis and genome-wide phenotyping, have allowed the creation of platforms for the most complete and systematic analysis of gene function ever undertaken in a vertebrate. The development of high-throughput phenotyping pipelines for these and other large-scale projects allows investigators to search and integrate large amounts of directly comparable phenotype data from many mutants, on a genomic scale, to help develop and test new hypotheses about the origins of disease and the normal functions of genes in the organism. Histopathology has a venerable history in the understanding of the pathobiology of human and animal disease, and presents complementary advantages and challenges to in vivo phenotyping. In this review, we present evidence for the unique contribution that histopathology can make to a large-scale phenotyping effort, using examples from past and current programmes at Lexicon Pharmaceuticals and The Jackson Laboratory, and critically assess the role of histopathology analysis in high-throughput phenotyping pipelines.

  4. High throughput screening of hydrolytic enzymes from termites using a natural substrate derived from sugarcane bagasse

    Directory of Open Access Journals (Sweden)

    Lucena Severino A

    2011-11-01

    Full Text Available Abstract Background The description of new hydrolytic enzymes is an important step in the development of techniques which use lignocellulosic materials as a starting point for fuel production. Sugarcane bagasse, which is subjected to pre-treatment, hydrolysis and fermentation for the production of ethanol in several test refineries, is the most promising source of raw material for the production of second generation renewable fuels in Brazil. One problem when screening hydrolytic activities is that the activity against commercial substrates, such as carboxymethylcellulose, does not always correspond to the activity against the natural lignocellulosic material. Besides that, the macroscopic characteristics of the raw material, such as insolubility and heterogeneity, hinder its use for high throughput screenings. Results In this paper, we present the preparation of a colloidal suspension of particles obtained from sugarcane bagasse, with minimal chemical change in the lignocellulosic material, and demonstrate its use for high throughput assays of hydrolases using Brazilian termites as the screened organisms. Conclusions Important differences between the use of the natural substrate and commercial cellulase substrates, such as carboxymethylcellulose or crystalline cellulose, were observed. This suggests that wood feeding termites, in contrast to litter feeding termites, might not be the best source for enzymes that degrade sugarcane biomass.

  5. Semi-automated library preparation for high-throughput DNA sequencing platforms.

    Science.gov (United States)

    Farias-Hesson, Eveline; Erikson, Jonathan; Atkins, Alexander; Shen, Peidong; Davis, Ronald W; Scharfe, Curt; Pourmand, Nader

    2010-01-01

    Next-generation sequencing platforms are powerful technologies, providing gigabases of genetic information in a single run. An important prerequisite for high-throughput DNA sequencing is the development of robust and cost-effective preprocessing protocols for DNA sample library construction. Here we report the development of a semi-automated sample preparation protocol to produce adaptor-ligated fragment libraries. Using a liquid-handling robot in conjunction with Carboxy Terminated Magnetic Beads, we labeled each library sample using a unique 6 bp DNA barcode, which allowed multiplex sample processing and sequencing of 32 libraries in a single run using Applied Biosystems' SOLiD sequencer. We applied our semi-automated pipeline to targeted medical resequencing of nuclear candidate genes in individuals affected by mitochondrial disorders. This novel method is capable of preparing as many as 32 DNA libraries in 2.01 days (8-hour workdays) for emulsion PCR/high-throughput DNA sequencing, increasing sample preparation output 8-fold.
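
    Conceptually, the multiplexed reads are later demultiplexed by their 6 bp barcode before analysis; a minimal sketch of that step is shown below, with made-up barcode-to-library assignments (the actual barcodes are not given in the abstract):

      from collections import defaultdict

      BARCODES = {"ACGTAC": "library_01", "TGCATG": "library_02"}  # hypothetical

      def demultiplex(fastq_records, barcode_len=6):
          # Assign each (header, sequence, quality) record to a library by its
          # leading 6 bp barcode; unknown barcodes fall into 'undetermined'.
          bins = defaultdict(list)
          for header, seq, qual in fastq_records:
              sample = BARCODES.get(seq[:barcode_len], "undetermined")
              bins[sample].append((header, seq[barcode_len:], qual[barcode_len:]))
          return bins

      records = [("@read1", "ACGTACTTTTGGGGCCCC", "I" * 18),
                 ("@read2", "TGCATGAAAACCCCGGGG", "I" * 18)]
      for library, reads in demultiplex(records).items():
          print(library, len(reads))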

  6. High-throughput time-stretch microscopy with morphological and chemical specificity

    Science.gov (United States)

    Lei, Cheng; Ugawa, Masashi; Nozawa, Taisuke; Ideguchi, Takuro; Di Carlo, Dino; Ota, Sadao; Ozeki, Yasuyuki; Goda, Keisuke

    2016-03-01

    Particle analysis is an effective method in analytical chemistry for sizing and counting microparticles such as emulsions, colloids, and biological cells. However, conventional methods for particle analysis, which fall into two extreme categories, have severe limitations. Sieving and Coulter counting are capable of analyzing particles with high throughput, but due to their lack of detailed information such as morphological and chemical characteristics, they can only provide statistical results with low specificity. On the other hand, CCD or CMOS image sensors can be used to analyze individual microparticles with high content, but due to their slow charge download, the frame rate (hence, the throughput) is significantly limited. Here by integrating a time-stretch optical microscope with a three-color fluorescent analyzer on top of an inertial-focusing microfluidic device, we demonstrate an optofluidic particle analyzer with a sub-micrometer spatial resolution down to 780 nm and a high throughput of 10,000 particles/s. In addition to its morphological specificity, the particle analyzer provides chemical specificity to identify chemical expressions of particles via fluorescence detection. Our results indicate that we can identify different species of microparticles with high specificity without sacrificing throughput. Our method holds promise for high-precision statistical particle analysis in chemical industry and pharmaceutics.

  7. High throughput volatile fatty acid skin metabolite profiling by thermal desorption secondary electrospray ionisation mass spectrometry.

    Science.gov (United States)

    Martin, Helen J; Reynolds, James C; Riazanskaia, Svetlana; Thomas, C L Paul

    2014-09-07

    The non-invasive nature of volatile organic compound (VOC) sampling from skin makes this a priority in the development of new screening and diagnostic assays. Evaluation of recent literature highlights the tension between the analytical utility of ambient ionisation approaches for skin profiling and the practicality of undertaking larger campaigns (higher statistical power), or undertaking research in remote locations. This study describes how VOCs may be sampled from skin and recovered from a polydimethylsiloxane sampling coupon and analysed by thermal desorption (TD) interfaced to secondary electrospray ionisation (SESI) time-of-flight mass spectrometry (MS) for the high-throughput screening of volatile fatty acids (VFAs) from human skin. Analysis times were reduced by 79% compared to gas chromatography-mass spectrometry (GC-MS) methods, and limits of detection in the range 300 to 900 pg cm(-2) for VFA skin concentrations were obtained. Using body odour as a surrogate model for clinical testing, 10 Filipino participants (5 high and 5 low odour) were sampled in Manila; the samples were returned to the UK and screened by TD-SESI-MS and TD-GC-MS for malodour precursors, with greater than 95% agreement between the two analytical techniques. Eight additional VFAs were also identified by both techniques, with chain lengths of 4 to 15 carbons observed. TD-SESI-MS appears to have significant potential for the high-throughput targeted screening of volatile biomarkers in human skin.

  8. No Time To Lose - High Throughput Screening To Assess Nanomaterial Safety

    Science.gov (United States)

    Damoiseaux, R; George, S; Li, M; Pokhrel, S; Ji, Z; France, B; Xia, T; Suarez, E; Rallo, R; Mädler, L; Cohen, Y; Hoek, EMV; Nel, A

    2014-01-01

    Nanomaterials hold great promise for medical, technological and economic benefits. Knowledge concerning the toxicological properties of these novel materials is typically lacking. At the same time, it is becoming evident that some nanomaterials could have a toxic potential in humans and the environment. Animal-based systems lack the capacity to cope with the abundance of novel nanomaterials being produced, and thus we have to employ in vitro methods with high throughput to manage the rush logistically and use high-content readouts wherever needed in order to gain more depth of information. Towards this end, high throughput screening (HTS) and high content screening (HCS) approaches can be used to speed up safety analysis on a scale commensurate with the rate of expansion of new materials and new properties. The insights gained from HTS/HCS should aid our understanding of the tenets of nanomaterial hazard at the biological level as well as assist the development of safe-by-design approaches. This review aims to provide a comprehensive introduction to the HTS/HCS methodology employed for safety assessment of engineered nanomaterials (ENMs), including data analysis and prediction of potentially hazardous material properties. Given the current pace of nanomaterial development, HTS/HCS is a potentially effective means of keeping up with the rapid progress in this field – we have literally no time to lose. PMID:21301704

  9. High-throughput and reliable protocols for animal microRNA library cloning.

    Science.gov (United States)

    Xiao, Caide

    2011-01-01

    MicroRNAs are short single-stranded RNA molecules (18-25 nucleotides). Because of their ability to silence gene expression, they can be used to diagnose and treat tumors. Experimental construction of microRNA libraries is the most important step in identifying microRNAs from animal tissues. Although there are many commercial kits with special protocols to construct microRNA libraries, this chapter provides the most reliable, high-throughput, and affordable protocols for microRNA library construction. The high-throughput capability of our protocols comes from a double-concentration (3 and 15%, thickness 1.5 mm) polyacrylamide gel electrophoresis (PAGE), which could directly extract microRNA-size RNAs from up to 400 μg total RNA (enough for two microRNA libraries). The reliability of our protocols was assured by a third PAGE, which selected PCR products of microRNA-size RNAs ligated with 5' and 3' linkers by a miRCat™ kit. In addition, a MathCAD program was provided to automatically search for short RNAs inserted between the 5' and 3' linkers in thousands of sequencing text files.
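
    The final search step, described in the abstract as a MathCAD program, amounts to locating the sequence inserted between the 5' and 3' linkers in each sequencing read; a hedged Python sketch with hypothetical linker sequences is shown below:

      LINKER_5 = "GTCAG"   # hypothetical 5' linker suffix
      LINKER_3 = "CTGAC"   # hypothetical 3' linker prefix

      def extract_insert(read, min_len=18, max_len=25):
          # Return the putative microRNA-size insert between the linkers, or None.
          start = read.find(LINKER_5)
          if start == -1:
              return None
          start += len(LINKER_5)
          end = read.find(LINKER_3, start)
          if end == -1:
              return None
          insert = read[start:end]
          return insert if min_len <= len(insert) <= max_len else None

      print(extract_insert("NNGTCAG" + "TAGCTTATCAGACTGATGTTGA" + "CTGACNN"))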

  10. Sources of PCR-induced distortions in high-throughput sequencing data sets

    Science.gov (United States)

    Kebschull, Justus M.; Zador, Anthony M.

    2015-01-01

    PCR permits the exponential and sequence-specific amplification of DNA, even from minute starting quantities. PCR is a fundamental step in preparing DNA samples for high-throughput sequencing. However, there are errors associated with PCR-mediated amplification. Here we examine the effects of four important sources of error—bias, stochasticity, template switches and polymerase errors—on sequence representation in low-input next-generation sequencing libraries. We designed a pool of diverse PCR amplicons with a defined structure, and then used Illumina sequencing to search for signatures of each process. We further developed quantitative models for each process, and compared predictions of these models to our experimental data. We find that PCR stochasticity is the major force skewing sequence representation after amplification of a pool of unique DNA amplicons. Polymerase errors become very common in later cycles of PCR but have little impact on the overall sequence distribution as they are confined to small copy numbers. PCR template switches are rare and confined to low copy numbers. Our results provide a theoretical basis for removing distortions from high-throughput sequencing data. In addition, our findings on PCR stochasticity will have particular relevance to quantification of results from single cell sequencing, in which sequences are represented by only one or a few molecules. PMID:26187991
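
    The dominant effect reported, PCR stochasticity, can be reproduced with a toy branching-process simulation in which every copy of every starting molecule duplicates with a fixed per-cycle efficiency; the sketch below (with arbitrary parameter values) shows how initially equal starting amounts drift apart after amplification:

      import numpy as np

      def simulate_pcr(n_molecules=1000, cycles=20, efficiency=0.8, seed=0):
          # Each starting molecule is amplified independently; in every cycle,
          # each existing copy duplicates with probability `efficiency`.
          rng = np.random.default_rng(seed)
          copies = np.ones(n_molecules, dtype=np.int64)
          for _ in range(cycles):
              copies += rng.binomial(copies, efficiency)
          return copies

      copies = simulate_pcr()
      fractions = copies / copies.sum()
      print("max/min representation after PCR:", round(fractions.max() / fractions.min(), 2))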

  11. Quantitative dot blot analysis (QDB), a versatile high throughput immunoblot method.

    Science.gov (United States)

    Tian, Geng; Tang, Fangrong; Yang, Chunhua; Zhang, Wenfeng; Bergquist, Jonas; Wang, Bin; Mi, Jia; Zhang, Jiandi

    2017-08-29

    The lack of access to an affordable method of high-throughput immunoblot analysis for daily use remains a major challenge for scientists worldwide. Here we propose Quantitative Dot Blot (QDB) analysis to meet this demand. With a defined linear range, QDB analysis fundamentally transforms the traditional immunoblot method into a true quantitative assay. Its convenience in analyzing large numbers of samples also enables bench scientists to examine protein expression levels across multiple parameters. In addition, the small amount of sample lysate needed for analysis means significant savings in research resources and effort. This method was evaluated at both the cellular and tissue levels, with unexpected observations that would otherwise be hard to achieve using conventional immunoblot methods such as Western blot analysis. Using the QDB technique, we observed a significant age-dependent alteration in CAPG protein expression level in TRAMP mice. We believe that the adoption of QDB analysis will have an immediate impact on biological and biomedical research by providing much-needed high-throughput information at the protein level in this "Big Data" era.

  12. Oligonucleotide Functionalised Microbeads: Indispensable Tools for High-Throughput Aptamer Selection

    Directory of Open Access Journals (Sweden)

    Lewis A. Fraser

    2015-12-01

    Full Text Available The functionalisation of microbeads with oligonucleotides has become an indispensable technique for high-throughput aptamer selection in SELEX protocols. In addition to simplifying the separation of binding and non-binding aptamer candidates, microbeads have facilitated the integration of other technologies such as emulsion PCR (ePCR) and Fluorescence Activated Cell Sorting (FACS) to high-throughput selection techniques. Within these systems, monoclonal aptamer microbeads can be individually generated and assayed to assess aptamer candidate fitness thereby helping eliminate stochastic effects which are common to classical SELEX techniques. Such techniques have given rise to aptamers with 1000 times greater binding affinities when compared to traditional SELEX. Another emerging technique is Fluorescence Activated Droplet Sorting (FADS) whereby selection does not rely on binding capture allowing evolution of a greater diversity of aptamer properties such as fluorescence or enzymatic activity. Within this review we explore examples and applications of oligonucleotide functionalised microbeads in aptamer selection and reflect upon new opportunities arising for aptamer science.

  13. High-Throughput Yeast-Based Reporter Assay to Identify Compounds with Anti-inflammatory Potential.

    Science.gov (United States)

    Garcia, G; Santos, C Nunes do; Menezes, R

    2016-01-01

    The association between altered proteostasis and inflammatory responses has been increasingly recognized; therefore, the identification and characterization of novel compounds with anti-inflammatory potential will certainly have a great impact on the therapeutics of protein-misfolding diseases such as degenerative disorders. Although cell-based screens are powerful approaches to identify potential therapeutic compounds, establishing robust inflammation models amenable to high-throughput screening remains a challenge. To bridge this gap, we have exploited the use of yeasts as a platform to identify lead compounds with anti-inflammatory properties. The yeast cell model described here relies on the high degree of homology between the mammalian and yeast Ca(2+)/calcineurin pathways, which converge on the activation of the orthologous proteins NFAT and Crz1, respectively. It consists of a recombinant yeast strain encoding the lacZ gene under the control of Crz1-recognition elements to facilitate the identification of compounds interfering with Crz1 activation through the easy monitoring of β-galactosidase activity. Here, we describe in detail a protocol optimized for high-throughput screening of compounds with potential anti-inflammatory activity as well as a protocol to validate the positive hits using an alternative β-galactosidase substrate.

  14. High throughput integrated thermal characterization with non-contact optical calorimetry

    Science.gov (United States)

    Hou, Sichao; Huo, Ruiqing; Su, Ming

    2017-10-01

    Commonly used thermal analysis tools such as calorimeters and thermal conductivity meters are separate instruments limited by low throughput, where only one sample is examined at a time. This work reports an infrared-based optical calorimetry method, with its theoretical foundation, which provides an integrated solution to characterize the thermal properties of materials with high throughput. By capturing time-domain temperature information from spatially distributed samples, this method allows a single device (infrared camera) to determine the thermal properties of both phase change systems (melting temperature and latent heat of fusion) and non-phase change systems (thermal conductivity and heat capacity). This method further allows these thermal properties of multiple samples to be determined rapidly, remotely, and simultaneously. In this proof-of-concept experiment, the thermal properties of a panel of 16 samples, including melting temperatures, latent heats of fusion, heat capacities, and thermal conductivities, were determined in 2 min with high accuracy. Given the high thermal, spatial, and temporal resolutions of the advanced infrared camera, this method has the potential to revolutionize the thermal characterization of materials by providing an integrated solution with high throughput, high sensitivity, and short analysis time.

  15. Bifrost: a Modular Python/C++ Framework for Development of High-Throughput Data Analysis Pipelines

    Science.gov (United States)

    Cranmer, Miles; Barsdell, Benjamin R.; Price, Danny C.; Garsden, Hugh; Taylor, Gregory B.; Dowell, Jayce; Schinzel, Frank; Costa, Timothy; Greenhill, Lincoln J.

    2017-01-01

    Large radio interferometers have data rates that render long-term storage of raw correlator data infeasible, thus motivating development of real-time processing software. For high-throughput applications, processing pipelines are challenging to design and implement. Motivated by science efforts with the Long Wavelength Array, we have developed Bifrost, a novel Python/C++ framework that eases the development of high-throughput data analysis software by packaging algorithms as black box processes in a directed graph. This strategy to modularize code allows astronomers to create parallelism without code adjustment. Bifrost uses CPU/GPU 'circular memory' data buffers that enable ready introduction of arbitrary functions into the processing path for 'streams' of data, and allow pipelines to automatically reconfigure in response to astrophysical transient detection or input of new observing settings. We have deployed and tested Bifrost at the latest Long Wavelength Array station, in Sevilleta National Wildlife Refuge, NM, where it handles throughput exceeding 10 Gbps per CPU core.
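
    The design can be pictured as processing blocks connected by bounded buffers in a directed graph; the sketch below is a generic Python illustration of that pattern and does not use Bifrost's actual API:

      import queue
      import threading

      def block(func, inbox, outbox):
          # Wrap a processing function as a pipeline stage that reads items from
          # one bounded buffer and writes results to the next; None shuts it down.
          def run():
              while True:
                  item = inbox.get()
                  if item is None:
                      outbox.put(None)
                      return
                  outbox.put(func(item))
          return threading.Thread(target=run, daemon=True)

      source, mid, sink = queue.Queue(8), queue.Queue(8), queue.Queue(8)
      for stage in (block(lambda x: x * 2, source, mid),
                    block(lambda x: x + 1, mid, sink)):
          stage.start()
      for value in (1, 2, 3, None):
          source.put(value)
      while (item := sink.get()) is not None:
          print(item)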

  16. DESIGN OF LOW EPI AND HIGH THROUGHPUT CORDIC CELL TO IMPROVE THE PERFORMANCE OF MOBILE ROBOT

    Directory of Open Access Journals (Sweden)

    P. VELRAJKUMAR

    2014-04-01

    Full Text Available This paper focuses on a pass-logic-based design that yields a low Energy Per Instruction (EPI) and high-throughput COordinate Rotation DIgital Computer (CORDIC) cell for robotic exploration applications. The basic components of the CORDIC cell, namely the register, multiplexer and proposed adder, are designed using pass-transistor logic (PTL). The proposed adder is implemented in a bit-parallel iterative CORDIC circuit; the circuits are designed using the DSCH2 VLSI CAD tool and their layouts are generated with the Microwind 3 VLSI CAD tool. The propagation delay, area and power dissipation are calculated from the simulated results for the proposed adder-based CORDIC cell. The EPI, throughput and effect of temperature are calculated from the generated layout. The output parameters of the generated layout are analysed using the BSIM4 advanced analyzer. The simulated results of the proposed adder-based CORDIC circuit are compared with those of other adder-based CORDIC circuits. From this analysis, it was found that the proposed adder-based CORDIC circuit dissipates less power and offers a faster response, lower EPI and higher throughput.
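
    For reference, the CORDIC iteration itself uses only shifts, adds and a small angle table; a behavioural sketch in Python (independent of the pass-transistor-logic implementation described above) is:

      import math

      def cordic_sin_cos(angle, iterations=24):
          # Classic CORDIC rotation mode: returns (cos(angle), sin(angle)) for
          # |angle| <= pi/2 using only additions and scaling by powers of two.
          atan_table = [math.atan(2.0 ** -i) for i in range(iterations)]
          gain = 1.0
          for i in range(iterations):
              gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # pre-compensate the CORDIC gain
          x, y, z = gain, 0.0, angle
          for i in range(iterations):
              d = 1.0 if z >= 0.0 else -1.0
              x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
              z -= d * atan_table[i]
          return x, y

      print(cordic_sin_cos(math.pi / 6))   # approximately (0.866, 0.500)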

  17. High-throughput dental biofilm growth analysis for multiparametric microenvironmental biochemical conditions using microfluidics.

    Science.gov (United States)

    Lam, Raymond H W; Cui, Xin; Guo, Weijin; Thorsen, Todd

    2016-04-26

    Dental biofilm formation is not only a precursor to tooth decay, but also induces more serious systemic health problems such as cardiovascular disease and diabetes. Understanding the conditions promoting colonization and subsequent biofilm development involving complex bacterial coaggregation is particularly important. In this paper, we report a high-throughput microfluidic 'artificial teeth' device offering control of multiple microenvironmental factors (e.g. nutrients, growth factors, dissolved gases, and seeded cell populations) for quantitative characterization of long-term dental bacterial growth and biofilm development. This 'artificial teeth' device contains multiple (up to 128) incubation chambers to perform parallel cultivation and analyses (e.g. biofilm thickness, viable-dead cell ratio, and spatial distribution of multiple bacterial species) of bacteria samples under a matrix of different combinations of microenvironmental factors, further revealing possible developmental mechanisms of dental biofilms. Specifically, we applied the 'artificial teeth' to investigate the growth of two key dental bacteria, Streptococci species and Fusobacterium nucleatum, in biofilms under different dissolved gas conditions and sucrose concentrations. Together, this high-throughput microfluidic platform can provide extended applications for general biofilm research, including screening of the biofilm properties developing under combinations of specified growth parameters such as seeding bacteria populations, growth medium compositions, medium flow rates and dissolved gas levels.

  18. High-throughput heterogeneous integration of diverse nanomaterials on a single chip for sensing applications.

    Directory of Open Access Journals (Sweden)

    Samuel MacNaughton

    Full Text Available There is a large variety of nanomaterials, each with unique electronic, optical and sensing properties. However, there is currently no paradigm for integration of different nanomaterials on a single chip in a low-cost, high-throughput manner. We present a high-throughput integration approach based on spatially controlled dielectrophoresis, executed sequentially for each nanomaterial type, to realize a scalable array of individually addressable assemblies of graphene, carbon nanotubes, metal oxide nanowires and conductive polymers on a single chip. This is the first time that such a diversity of nanomaterials has been assembled on the same layer of a single chip. The resolution of assembly can range from mesoscale to microscale and is limited only by the size and spacing of the underlying electrodes on the chip used for assembly. While many applications are possible, the utility of such an array is demonstrated with an example application of a chemical sensor array for detection of volatile organic compounds at sub-parts-per-million sensitivity.

  19. Oligonucleotide Functionalised Microbeads: Indispensable Tools for High-Throughput Aptamer Selection.

    Science.gov (United States)

    Fraser, Lewis A; Kinghorn, Andrew B; Tang, Marco S L; Cheung, Yee-Wai; Lim, Bryce; Liang, Shaolin; Dirkzwager, Roderick M; Tanner, Julian A

    2015-12-01

    The functionalisation of microbeads with oligonucleotides has become an indispensable technique for high-throughput aptamer selection in SELEX protocols. In addition to simplifying the separation of binding and non-binding aptamer candidates, microbeads have facilitated the integration of other technologies such as emulsion PCR (ePCR) and Fluorescence Activated Cell Sorting (FACS) to high-throughput selection techniques. Within these systems, monoclonal aptamer microbeads can be individually generated and assayed to assess aptamer candidate fitness thereby helping eliminate stochastic effects which are common to classical SELEX techniques. Such techniques have given rise to aptamers with 1000 times greater binding affinities when compared to traditional SELEX. Another emerging technique is Fluorescence Activated Droplet Sorting (FADS) whereby selection does not rely on binding capture allowing evolution of a greater diversity of aptamer properties such as fluorescence or enzymatic activity. Within this review we explore examples and applications of oligonucleotide functionalised microbeads in aptamer selection and reflect upon new opportunities arising for aptamer science.

  20. Quantitative Live-Cell Confocal Imaging of 3D Spheroids in a High-Throughput Format.

    Science.gov (United States)

    Leary, Elizabeth; Rhee, Claire; Wilks, Benjamin T; Morgan, Jeffrey R

    2018-02-01

    Accurately predicting the human response to new compounds is critical to a wide variety of industries. Standard screening pipelines (including both in vitro and in vivo models) often lack predictive power. Three-dimensional (3D) culture systems of human cells, a more physiologically relevant platform, could provide a high-throughput, automated means to test the efficacy and/or toxicity of novel substances. However, the challenge of obtaining high-magnification confocal z stacks of 3D spheroids and understanding their respective quantitative limitations must be overcome first. To address this challenge, we developed a method to form spheroids of reproducible size at precise spatial locations across a 96-well plate. Spheroids of variable radii were labeled with four different fluorescent dyes and imaged with a high-throughput confocal microscope. 3D renderings of the spheroids had a complex bowl-like appearance. We systematically analyzed these confocal z stacks to determine the depth of imaging and the effect of spheroid size and dyes on quantitation. Furthermore, we have shown that the loss of fluorescence signal with imaging depth can be addressed through the use of ratio imaging. Overall, understanding both the limitations of confocal imaging and the tools to correct for these limits is critical for developing accurate quantitative assays using 3D spheroids.

  1. Generalized schemes for high throughput manipulation of the Desulfovibrio vulgaris Hildenborough genome

    Energy Technology Data Exchange (ETDEWEB)

    Chhabra, S.R.; Butland, G.; Elias, D.; Chandonia, J.-M.; Fok, V.; Juba, T.; Gorur, A.; Allen, S.; Leung, C.-M.; Keller, K.; Reveco, S.; Zane, G.; Semkiw, E.; Prathapam, R.; Gold, B.; Singer, M.; Ouellet, M.; Sazakal, E.; Jorgens, D.; Price, M.; Witkowska, E.; Beller, H.; Hazen, T.C.; Biggin, M.; Auer, M.; Wall, J.; Keasling, J.

    2011-07-15

    The ability to conduct advanced functional genomic studies of the thousands of sequenced bacteria has been hampered by the lack of available tools for making high-throughput chromosomal manipulations in a systematic manner that can be applied across diverse species. In this work, we highlight the use of synthetic biological tools to assemble custom suicide vectors with reusable and interchangeable DNA “parts” to facilitate chromosomal modification at designated loci. These constructs enable an array of downstream applications including gene replacement and creation of gene fusions with affinity purification or localization tags. We employed this approach to engineer chromosomal modifications in a bacterium that has previously proven difficult to manipulate genetically, Desulfovibrio vulgaris Hildenborough, to generate a library of over 700 strains. Furthermore, we demonstrate how these modifications can be used for examining metabolic pathways, protein-protein interactions, and protein localization. The ubiquity of suicide constructs in gene replacement throughout biology suggests that this approach can be applied to engineer a broad range of species for a diverse array of systems biological applications and is amenable to high-throughput implementation.

  2. Identification of adiponectin receptor agonist utilizing a fluorescence polarization based high throughput assay.

    Directory of Open Access Journals (Sweden)

    Yiyi Sun

    Full Text Available Adiponectin, the adipose-derived hormone, plays an important role in the suppression of metabolic disorders that can result in type 2 diabetes, obesity, and atherosclerosis. It has been shown that up-regulation of adiponectin or adiponectin receptor has a number of therapeutic benefits. Given that it is hard to convert the full size adiponectin protein into a viable drug, adiponectin receptor agonists could be designed or identified using high-throughput screening. Here, we report on the development of a two-step screening process to identify adiponectin agonists. First step, we developed a high throughput screening assay based on fluorescence polarization to identify adiponectin ligands. The fluorescence polarization assay reported here could be adapted to screening against larger small molecular compound libraries. A natural product library containing 10,000 compounds was screened and 9 hits were selected for validation. These compounds have been taken for the second-step in vitro tests to confirm their agonistic activity. The most active adiponectin receptor 1 agonists are matairesinol, arctiin, (-)-arctigenin and gramine. The most active adiponectin receptor 2 agonists are parthenolide, taxifoliol, deoxyschizandrin, and syringin. These compounds may be useful drug candidates for hypoadiponectin related diseases.

  3. Unraveling long non-coding RNAs through analysis of high-throughput RNA-sequencing data

    Directory of Open Access Journals (Sweden)

    Rashmi Tripathi

    2017-06-01

    Full Text Available Extensive genome-wide transcriptome studies mediated by high-throughput sequencing techniques have revolutionized the study of genetics and epigenetics at unprecedented resolution. This research has revealed that, besides protein-coding RNAs, a large proportion of the mammalian transcriptome consists of regulatory non-protein-coding RNAs, the number of which encoded within the human genome remains uncertain. In the past, these non-coding RNAs were often dismissed as "dark matter" or "junk". Breaking this myth, RNA-seq, a recently developed experimental technique, is now widely used for studying non-coding RNAs, which have gained the limelight due to their physiological and pathological significance. The longest members of the ncRNA family, long non-coding RNAs, act as stable and functional parts of the genome, providing important clues about varied biological events, from cellular to structural processes, that govern the complexity of an organism. Here, we review the most recent and influential computational approaches developed to identify and quantify long non-coding RNAs, serving as a guide for users choosing appropriate tools for their specific research. Keywords: Transcriptome, High throughput sequencing, Genetic and epigenetic, Long non-coding RNA, RNA-sequencing, RNA-seq

  4. High throughput generation and trapping of individual agarose microgel using microfluidic approach

    KAUST Repository

    Shi, Yang

    2013-02-28

    Microgels are biocompatible polymeric materials that have been widely used as micro-carriers in materials synthesis, drug delivery and cell biology applications. However, high-throughput generation of individual microgels for on-site analysis in a microdevice remains a challenge. Here, we present a simple and stable droplet microfluidic system that realizes high-throughput generation and trapping of individual agarose microgels, based on the synergetic effect of surface tension and hydrodynamic forces in microchannels, and use it for real-time 3-D cell culture. The established system is mainly composed of droplet generators with a flow-focusing T-junction and an array of individual trap structures. The whole process, including independent agarose microgel formation, immobilization in the trapping array and gelation in situ via cooling, can be carried out entirely on the integrated microdevice. The performance of this system was demonstrated by successfully encapsulating and culturing adenoid cystic carcinoma (ACCM) cells in the gelated agarose microgels. This approach is simple and easy to operate, and can not only generate micro-carriers with different components in parallel but also monitor cell behavior in a 3D matrix in real time. It can also be extended to applications in materials synthesis and tissue engineering. © 2013 Springer-Verlag Berlin Heidelberg.

  5. Semiautomated Alignment of High-Throughput Metabolite Profiles with Chemometric Tools

    Directory of Open Access Journals (Sweden)

    Ze-ying Wu

    2017-01-01

    Full Text Available The rapid increase in the use of metabolite profiling/fingerprinting techniques to resolve complicated issues in metabolomics has stimulated demand for data processing techniques, such as alignment, to extract detailed information. In this study, a new and automated method was developed to correct the retention time shift of high-dimensional and high-throughput data sets. Information from the target chromatographic profiles was used to determine the standard profile as a reference for alignment. A novel, piecewise data partition strategy was applied for the determination of the target components in the standard profile as markers for alignment. An automated target search (ATS) method was proposed to find the exact retention times of the selected targets in other profiles for alignment. The linear interpolation technique (LIT) was employed to align the profiles prior to pattern recognition, comprehensive comparison analysis, and other data processing steps. In total, 94 metabolite profiles of ginseng were studied, including the most volatile secondary metabolites. The method used in this article could be an essential step in the extraction of information from high-throughput data acquired in the study of systems biology, metabolomics, and biomarker discovery.
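
    The alignment step itself can be pictured as a piecewise-linear warp of each profile's retention-time axis onto the standard profile, anchored at the matched target components; the sketch below is a simplified stand-in for the ATS/LIT procedure, with made-up anchor times:

      import numpy as np

      def align_profile(times, intensities, anchors_sample, anchors_reference):
          # Warp the sample's time axis onto the reference axis by piecewise-linear
          # interpolation between matched target retention times, then resample the
          # intensities back onto the original uniform grid.
          warped_times = np.interp(times, anchors_sample, anchors_reference)
          return np.interp(times, warped_times, intensities)

      times = np.linspace(0.0, 30.0, 3001)                       # minutes
      profile = np.exp(-0.5 * ((times - 12.3) / 0.05) ** 2)      # one shifted peak
      aligned = align_profile(times, profile,
                              anchors_sample=[0.0, 12.3, 30.0],  # hypothetical targets
                              anchors_reference=[0.0, 12.0, 30.0])
      print(round(times[np.argmax(aligned)], 2))                 # peak moves to ~12.0 min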

  6. Bayesian analysis of high-throughput quantitative measurement of protein-DNA interactions.

    Directory of Open Access Journals (Sweden)

    David D Pollock

    Full Text Available Transcriptional regulation depends upon the binding of transcription factor (TF) proteins to DNA in a sequence-dependent manner. Although many experimental methods address the interaction between DNA and proteins, they generally do not comprehensively and accurately assess the full binding repertoire (the complete set of sequences that might be bound with at least moderate strength). Here, we develop and evaluate through simulation an experimental approach that allows simultaneous high-throughput quantitative analysis of TF binding affinity to thousands of potential DNA ligands. Tens of thousands of putative binding targets can be mixed with a TF, and both the pre-bound and bound target pools sequenced. A hierarchical Bayesian Markov chain Monte Carlo approach determines posterior estimates for the dissociation constants, sequence-specific binding energies, and free TF concentrations. A unique feature of our approach is that dissociation constants are jointly estimated from their inferred degree of binding and from a model of binding energetics, depending on how many sequence reads are available and the explanatory power of the energy model. Careful experimental design is necessary to obtain accurate results over a wide range of dissociation constants. This approach, which we call Simultaneous Ultra high-throughput Ligand Dissociation EXperiment (SULDEX), is theoretically capable of rapid and accurate elucidation of an entire TF-binding repertoire.
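
    A much-simplified sketch of the underlying inference (a flat-prior grid posterior for a single ligand, not the hierarchical MCMC model of the paper) assumes the bound fraction follows theta = [TF]/(Kd + [TF]) and that the bound-pool read count is binomially distributed; the read counts and TF concentration below are arbitrary toy values:

      import numpy as np

      def kd_posterior(bound_reads, total_reads, tf_conc, kd_grid):
          # Unnormalized log-likelihood of a binomial bound/unbound split, with the
          # bound fraction given by simple equilibrium binding: theta = TF/(Kd + TF).
          theta = tf_conc / (kd_grid + tf_conc)
          log_like = (bound_reads * np.log(theta)
                      + (total_reads - bound_reads) * np.log1p(-theta))
          post = np.exp(log_like - log_like.max())   # flat prior over the grid
          return post / post.sum()

      kd_grid = np.logspace(-9, -5, 400)             # candidate Kd values (molar)
      post = kd_posterior(bound_reads=350, total_reads=1000, tf_conc=1e-7, kd_grid=kd_grid)
      print(f"posterior mean Kd ~ {np.sum(kd_grid * post):.2e} M")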

  7. Identification of Adiponectin Receptor Agonist Utilizing a Fluorescence Polarization Based High Throughput Assay

    Science.gov (United States)

    Sun, Yiyi; Zang, Zhihe; Zhong, Ling; Wu, Min; Su, Qing; Gao, Xiurong; Zan, Wang; Lin, Dong; Zhao, Yan; Zhang, Zhonglin

    2013-01-01

    Adiponectin, the adipose-derived hormone, plays an important role in the suppression of metabolic disorders that can result in type 2 diabetes, obesity, and atherosclerosis. It has been shown that up-regulation of adiponectin or adiponectin receptor has a number of therapeutic benefits. Given that it is hard to convert the full size adiponectin protein into a viable drug, adiponectin receptor agonists could be designed or identified using high-throughput screening. Here, we report on the development of a two-step screening process to identify adiponectin agonists. First step, we developed a high throughput screening assay based on fluorescence polarization to identify adiponectin ligands. The fluorescence polarization assay reported here could be adapted to screening against larger small molecular compound libraries. A natural product library containing 10,000 compounds was screened and 9 hits were selected for validation. These compounds have been taken for the second-step in vitro tests to confirm their agonistic activity. The most active adiponectin receptor 1 agonists are matairesinol, arctiin, (-)-arctigenin and gramine. The most active adiponectin receptor 2 agonists are parthenolide, taxifoliol, deoxyschizandrin, and syringin. These compounds may be useful drug candidates for hypoadiponectin related diseases. PMID:23691032

  8. Novel method for the high-throughput processing of slides for the comet assay.

    Science.gov (United States)

    Karbaschi, Mahsa; Cooke, Marcus S

    2014-11-26

    Single cell gel electrophoresis (the comet assay) continues to gain popularity as a means of assessing DNA damage. However, the assay's low sample throughput and laborious sample workup procedure are limiting factors in its application. "Scoring", or individually determining DNA damage levels in 50 cells per treatment, is time-consuming, but with the advent of high-throughput scoring, the limitation is now the ability to process significant numbers of comet slides. We have developed a novel method by which multiple slides may be manipulated and undergo electrophoresis in batches of 25 rather than individually; importantly, it retains the use of standard microscope comet slides, which are the assay convention. This decreases assay time by 60%, and benefits from an electrophoresis tank with a substantially smaller footprint and more uniform orientation of gels during electrophoresis. Our high-throughput variant of the comet assay greatly increases the number of samples analysed while decreasing assay time, the number of individual slide manipulations, reagent requirements and the risk of damage to slides. The compact nature of the electrophoresis tank is of particular benefit to laboratories where bench space is at a premium. This novel approach is a significant advance on the current comet assay procedure.

  9. Recent advances in quantitative high throughput and high content data analysis.

    Science.gov (United States)

    Moutsatsos, Ioannis K; Parker, Christian N

    2016-01-01

    High-throughput screening has become a basic technique with which to explore biological systems. Advances in technology, including increased screening capacity, as well as methods that generate multiparametric readouts, are driving the need for improvements in the analysis of data sets derived from such screens. This article covers the recent advances in the analysis of high-throughput screening data sets from arrayed samples, as well as the recent advances in the analysis of cell-by-cell data sets derived from image or flow cytometry applications. Screening multiple genomic reagents targeting any given gene creates additional challenges, and so methods that prioritize individual gene targets have been developed. The article reviews many of the open-source data analysis methods that are now available and which are helping to define a consensus on the best practices to use when analyzing screening data. As data sets become larger and more complex, the need for easily accessible data analysis tools will continue to grow. The presentation of such complex data sets, to facilitate quality control monitoring and interpretation of the results, will require the development of novel visualizations. In addition, advanced statistical and machine learning algorithms that can help identify patterns, correlations and the best features in massive data sets will be required. The ease of use of these tools will be important, as they will need to be used iteratively by laboratory scientists to improve the outcomes of complex analyses.

  10. Construction and analysis of high-density linkage map using high-throughput sequencing data.

    Directory of Open Access Journals (Sweden)

    Dongyuan Liu

    Full Text Available Linkage maps enable the study of important biological questions. The construction of high-density linkage maps has become more feasible since the advent of next-generation sequencing (NGS), which eases SNP discovery and high-throughput genotyping of large populations. However, the marker number explosion and genotyping errors from NGS data challenge the computational efficiency and linkage map quality of linkage study methods. Here we report the HighMap method for constructing high-density linkage maps from NGS data. HighMap employs an iterative ordering and error correction strategy based on a k-nearest neighbor algorithm and a Monte Carlo multipoint maximum likelihood algorithm. A simulation study shows HighMap can create a linkage map with three times as many markers as ordering-only methods while offering more accurate marker orders and stable genetic distances. Using HighMap, we constructed a common carp linkage map with 10,004 markers. The singleton rate was less than one-ninth of that generated by JoinMap4.1. Its total map distance was 5,908 cM, consistent with reports on low-density maps. HighMap is an efficient method for constructing high-density, high-quality linkage maps from high-throughput population NGS data. It will facilitate genome assembly, comparative genomic analysis, and QTL studies. HighMap is available at http://highmap.biomarker.com.cn/.
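
    One small ingredient of any such map construction, converting pairwise recombination fractions into genetic distances, can be written down directly; the sketch below shows the standard Haldane and Kosambi mapping functions (it is not HighMap's ordering or error-correction algorithm):

      import math

      def haldane_cm(r):
          # Haldane mapping function: recombination fraction -> map distance in cM.
          return -50.0 * math.log(1.0 - 2.0 * r)

      def kosambi_cm(r):
          # Kosambi mapping function, which accounts for crossover interference.
          return 25.0 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

      for r in (0.01, 0.10, 0.25):
          print(f"r = {r:.2f}: Haldane {haldane_cm(r):5.1f} cM, Kosambi {kosambi_cm(r):5.1f} cM")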

  11. A BSL-4 high-throughput screen identifies sulfonamide inhibitors of Nipah virus.

    Science.gov (United States)

    Tigabu, Bersabeh; Rasmussen, Lynn; White, E Lucile; Tower, Nichole; Saeed, Mohammad; Bukreyev, Alexander; Rockx, Barry; LeDuc, James W; Noah, James W

    2014-04-01

    Nipah virus is a biosafety level 4 (BSL-4) pathogen that causes severe respiratory illness and encephalitis in humans. To identify novel small molecules that target Nipah virus replication as potential therapeutics, Southern Research Institute and Galveston National Laboratory jointly developed an automated high-throughput screening platform that is capable of testing 10,000 compounds per day within BSL-4 biocontainment. Using this platform, we screened a 10,080-compound library using a cell-based, high-throughput screen for compounds that inhibited the virus-induced cytopathic effect. From this pilot effort, 23 compounds were identified with EC50 values ranging from 3.9 to 20.0 μM and selectivities >10. Three sulfonamide compounds with EC50 values <12 μM were further characterized for their point of intervention in the viral replication cycle and for broad antiviral efficacy. Development of HTS capability under BSL-4 containment changes the paradigm for drug discovery for highly pathogenic agents because this platform can be readily modified to identify prophylactic and postexposure therapeutic candidates against other BSL-4 pathogens, particularly Ebola, Marburg, and Lassa viruses.

  12. High-throughput ocular artifact reduction in multichannel electroencephalography (EEG) using component subspace projection.

    Science.gov (United States)

    Ma, Junshui; Bayram, Sevinç; Tao, Peining; Svetnik, Vladimir

    2011-03-15

    After a review of the ocular artifact reduction literature, a high-throughput method designed to reduce the ocular artifacts in multichannel continuous EEG recordings acquired at clinical EEG laboratories worldwide is proposed. The proposed method belongs to the category of component-based methods, and does not rely on any electrooculography (EOG) signals. Based on the concept that all ocular artifact components exist in a signal component subspace, the method can uniformly handle all types of ocular artifacts, including eye-blinks, saccades, and other eye movements, by automatically identifying ocular components from decomposed signal components. This study also proposes an improved strategy to objectively and quantitatively evaluate artifact reduction methods. The evaluation strategy uses real EEG signals to synthesize realistic simulated datasets with different amounts of ocular artifacts. The simulated datasets enable us to objectively demonstrate that the proposed method outperforms some existing methods when no high-quality EOG signals are available. Moreover, the results of the simulated datasets improve our understanding of the involved signal decomposition algorithms, and provide us with insights into the inconsistency regarding the performance of different methods in the literature. The proposed method was also applied to two independent clinical EEG datasets involving 28 volunteers and over 1000 EEG recordings. This effort further confirms that the proposed method can effectively reduce ocular artifacts in large clinical EEG datasets in a high-throughput fashion. Copyright © 2011 Elsevier B.V. All rights reserved.
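
    A generic, much-simplified illustration of component-subspace projection (not the authors' automatic identification procedure) decomposes the multichannel recording, flags components whose time courses track the frontal channels as a crude proxy for ocular activity, zeroes them and back-projects; the channel indices, threshold and toy data below are assumptions:

      import numpy as np
      from sklearn.decomposition import FastICA

      def remove_ocular_components(eeg, frontal_idx=(0, 1), corr_threshold=0.7, seed=0):
          # Decompose (samples x channels) EEG into independent components, zero
          # those whose time courses correlate strongly with the frontal channels,
          # and project the remaining subspace back to the channel domain.
          ica = FastICA(n_components=eeg.shape[1], random_state=seed)
          sources = ica.fit_transform(eeg)                        # samples x components
          frontal = eeg[:, list(frontal_idx)].mean(axis=1)
          ocular = [abs(np.corrcoef(frontal, s)[0, 1]) > corr_threshold for s in sources.T]
          sources[:, ocular] = 0.0
          return sources @ ica.mixing_.T + ica.mean_

      rng = np.random.default_rng(1)
      eeg = rng.standard_normal((2500, 8))                        # 10 s at 250 Hz, 8 channels
      blink = np.zeros(2500); blink[500:560] = 40.0               # blink-like transient
      eeg[:, 0] += blink; eeg[:, 1] += 0.8 * blink
      print(np.abs(remove_ocular_components(eeg)[:, 0]).max())    # blink deflection largely suppressed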

  13. High-Throughput Molecular Simulations of Metal Organic Frameworks for CO2 Separation: Opportunities and Challenges

    Directory of Open Access Journals (Sweden)

    Ilknur Erucar

    2018-02-01

    Full Text Available Metal organic frameworks (MOFs) have emerged as great alternatives to traditional nanoporous materials for CO2 separation applications. MOFs are porous materials that are formed by self-assembly of transition metals and organic ligands. The most important advantage of MOFs over well-known porous materials is the possibility to generate multiple materials with varying structural properties and chemical functionalities by changing the combination of metal centers and organic linkers during the synthesis. This leads to a large diversity of materials with various pore sizes and shapes that can be efficiently used for CO2 separations. Since the number of synthesized MOFs has already reached several thousand, experimental investigation of each MOF at the lab-scale is not practical. High-throughput computational screening of MOFs is a great opportunity to identify the best materials for CO2 separation and to gain molecular-level insights into the structure–performance relationships. This type of knowledge can be used to design new materials with the desired structural features that can lead to extraordinarily high CO2 selectivities. In this mini-review, we focused on developments in high-throughput molecular simulations of MOFs for CO2 separations. After reviewing the current studies on this topic, we discussed the opportunities and challenges in the field and addressed the potential future developments.

  14. High-throughput Cloning and Expression of Integral Membrane Proteins in Escherichia coli

    Science.gov (United States)

    Bruni, Renato

    2014-01-01

    Recently, several structural genomics centers have been established and a remarkable number of three-dimensional structures of soluble proteins have been solved. For membrane proteins, the number of solved structures has trailed significantly behind that of their soluble counterparts, not least because over-expression and purification of membrane proteins are much more arduous processes. By using high-throughput technologies, a large number of membrane protein targets can be screened simultaneously and a greater number of expression and purification conditions can be employed, leading to a higher probability of successfully determining membrane protein structures. This unit describes the cloning, expression and screening of membrane proteins using high-throughput methodologies developed in our laboratory. Basic Protocol 1 deals with the cloning of inserts into expression vectors by ligation-independent cloning. Basic Protocol 2 describes the expression and purification of the target proteins on a miniscale. Lastly, for the targets that express at the miniscale, Basic Protocols 3 and 4 outline the methods employed for expression and purification of targets at the midi-scale, as well as a procedure for detergent screening and identification of detergent(s) in which the target protein is stable. PMID:24510647

  15. Characterizing ncRNAs in human pathogenic protists using high-throughput sequencing technology

    Directory of Open Access Journals (Sweden)

    Lesley Joan Collins

    2011-12-01

    Full Text Available ncRNAs are key genes in many human diseases, including cancer and viral infection, and provide critical functions in pathogenic organisms such as fungi, bacteria, viruses and protists. Until now, the identification and characterization of ncRNAs associated with disease has been slow or inaccurate, requiring many years of testing to understand complicated RNA and protein gene relationships. High-throughput sequencing now offers the opportunity to characterize miRNAs, siRNAs, snoRNAs and long ncRNAs on a genomic scale, making it faster and easier to clarify how these ncRNAs contribute to the disease state. However, this technology is still relatively new, and ncRNA discovery is not yet a high-priority application with streamlined bioinformatics support. Here we summarize background concepts and practical approaches for ncRNA analysis using high-throughput sequencing, and how it relates to understanding human disease. As a case study, we focus on the parasitic protists Giardia lamblia and Trichomonas vaginalis, where large evolutionary distance has made it difficult to compare ncRNAs with those from model eukaryotes. A combination of biological, computational and sequencing approaches has enabled easier classification of ncRNA classes such as snoRNAs, and has also aided the identification of novel classes. It is hoped that a higher level of understanding of ncRNA expression and interaction may aid the development of less harsh treatments for protist-based diseases.

  16. glbase: a framework for combining, analyzing and displaying heterogeneous genomic and high-throughput sequencing data

    Directory of Open Access Journals (Sweden)

    Andrew Paul Hutchins

    2014-01-01

    Full Text Available Genomic datasets and the tools to analyze them have proliferated at an astonishing rate. However, such tools are often poorly integrated with each other: each program typically produces its own custom output in a variety of non-standard file formats. Here we present glbase, a framework that uses a flexible set of descriptors to quickly parse non-binary data files. glbase includes many functions to intersect two lists of data, including operations on genomic interval data and support for efficient random access to huge genomic data files. Many glbase functions can produce graphical outputs, including scatter plots, heatmaps, boxplots and other common analytical displays of high-throughput data such as RNA-seq, ChIP-seq and microarray expression data. glbase is designed to rapidly bring biological data into a Python-based analytical environment to facilitate analysis and data processing. In summary, glbase is a flexible and multifunctional toolkit that allows the combination and analysis of high-throughput data (especially next-generation sequencing and genome-wide data) and has been instrumental in the analysis of complex data sets. glbase is freely available at http://bitbucket.org/oaxiom/glbase/.
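
    Because glbase's own API is not reproduced here, the sketch below shows only the kind of genomic interval intersection such a framework performs (for example, overlapping ChIP-seq peaks with promoter regions), implemented as a generic sort-and-sweep in plain Python. The function names, example coordinates and half-open coordinate convention are illustrative, not glbase calls.

      # Generic intersection of two lists of genomic intervals.
      def overlaps(a, b):
          """a, b: (chrom, start, end) tuples; half-open coordinates assumed."""
          return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

      def intersect_intervals(list_a, list_b):
          """Return pairs of overlapping intervals using a simple sort-and-sweep."""
          list_a = sorted(list_a)
          list_b = sorted(list_b)
          hits, j = [], 0
          for a in list_a:
              # Skip intervals that end before a starts (or lie on an earlier chromosome).
              while j < len(list_b) and (list_b[j][0], list_b[j][2]) <= (a[0], a[1]):
                  j += 1
              k = j
              while k < len(list_b) and list_b[k][0] == a[0] and list_b[k][1] < a[2]:
                  if overlaps(a, list_b[k]):
                      hits.append((a, list_b[k]))
                  k += 1
          return hits

      peaks = [("chr1", 100, 250), ("chr1", 900, 1100), ("chr2", 50, 80)]
      promoters = [("chr1", 200, 400), ("chr2", 10, 60)]
      print(intersect_intervals(peaks, promoters))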

  17. Acanthamoeba castellanii: A new high-throughput method for drug screening in vitro.

    Science.gov (United States)

    Ortega-Rivas, Antonio; Padrón, José M; Valladares, Basilio; Elsheikha, Hany M

    2016-12-01

    Despite its significant public health impact, there is no specific antiprotozoal therapy for the prevention and treatment of Acanthamoeba castellanii infection. There is a need for new and efficient anti-Acanthamoeba drugs that are less toxic and can reduce treatment duration and frequency of administration. In this context, a new, rapid and sensitive assay is required for high-throughput activity testing and screening of new therapeutic compounds. A colorimetric assay based on sulforhodamine B (SRB) staining has been developed for anti-Acanthamoeba drug susceptibility testing and adapted to a 96-well microtiter plate format. Under these conditions, chlorhexidine was tested to validate the assay using two clinical strains of A. castellanii (the Neff strain, T4 genotype [IC50 4.68±0.6 μM], and a T3 genotype strain [IC50 5.69±0.9 μM]). These results were in good agreement with those obtained by the conventional Alamar Blue assay, the OCR cytotoxicity assay and manual cell counting. Our new assay offers an inexpensive and reliable method that complements current assays by enhancing high-throughput anti-Acanthamoeba drug screening capabilities. Copyright © 2016 Elsevier B.V. All rights reserved.
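
    For readers unfamiliar with how IC50 values such as those above are extracted from a 96-well readout, the sketch below fits a four-parameter logistic (Hill) curve to normalized absorbance versus drug concentration with SciPy. The concentrations and viability fractions are hypothetical placeholders, not data from this study.

      # Estimate an IC50 from a hypothetical dose-response curve.
      import numpy as np
      from scipy.optimize import curve_fit

      def four_param_logistic(conc, bottom, top, ic50, hill):
          """Four-parameter logistic: response falls from 'top' to 'bottom' around ic50."""
          return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

      conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])          # uM, hypothetical
      viability = np.array([0.98, 0.95, 0.84, 0.61, 0.22, 0.06])  # fraction of control

      popt, _ = curve_fit(four_param_logistic, conc, viability,
                          p0=[0.0, 1.0, 5.0, 1.0], maxfev=10000)
      bottom, top, ic50, hill = popt
      print(f"Estimated IC50 = {ic50:.2f} uM (Hill slope {hill:.2f})")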

  18. High-Throughput Phase-Field Design of High-Energy-Density Polymer Nanocomposites.

    Science.gov (United States)

    Shen, Zhong-Hui; Wang, Jian-Jun; Lin, Yuanhua; Nan, Ce-Wen; Chen, Long-Qing; Shen, Yang

    2017-11-22

    Understanding the dielectric breakdown behavior of polymer nanocomposites is crucial to the design of high-energy-density dielectric materials with reliable performance. It is, however, challenging to predict breakdown behavior because of the complicated factors involved in this highly nonequilibrium process. In this work, a comprehensive phase-field model is developed to investigate the breakdown behavior of polymer nanocomposites under electrostatic stimuli. It is found that the breakdown strength and breakdown path depend significantly on the microstructure of the nanocomposite. The predicted breakdown strengths for polymer nanocomposites with specific microstructures agree with existing experimental measurements. Using this phase-field model, a high-throughput calculation is performed to seek the optimal microstructure. Based on this calculation, a sandwich microstructure is designed for a PVDF-BaTiO3 nanocomposite, in which the upper and lower layers are filled with parallel nanosheets and the middle layer is filled with vertical nanofibers; its energy density is enhanced to 2.44 times that of the pure PVDF polymer. The present work provides a computational approach for understanding electrostatic breakdown and is expected to stimulate future experimental efforts on synthesizing polymer nanocomposites with novel microstructures to achieve high performance. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
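
    The phase-field model itself is far beyond a short example, but the final screening step can be illustrated: given an effective permittivity and a simulated breakdown strength for each candidate microstructure, rank the candidates by the linear-dielectric energy density U = 0.5·ε0·εr·E_b². Real PVDF-based composites are nonlinear ferroelectrics, so this formula, the candidate names and the numbers below are illustrative assumptions only, not results from the paper.

      # Rank hypothetical microstructures by linear-dielectric energy density.
      EPS0 = 8.854e-12  # vacuum permittivity, F/m

      candidates = {
          # name: (relative permittivity, breakdown strength in V/m) -- hypothetical
          "parallel nanosheets only": (12.0, 4.0e8),
          "vertical nanofibers only": (15.0, 3.2e8),
          "sandwich (sheets / fibers / sheets)": (14.0, 4.3e8),
      }

      def linear_energy_density(eps_r, e_b):
          """Energy density in J/m^3 for an ideal linear dielectric."""
          return 0.5 * EPS0 * eps_r * e_b ** 2

      ranked = sorted(candidates.items(),
                      key=lambda kv: linear_energy_density(*kv[1]), reverse=True)
      for name, (eps_r, e_b) in ranked:
          u = linear_energy_density(eps_r, e_b) / 1e6  # MJ/m^3 (numerically equal to J/cm^3)
          print(f"{name}: U = {u:.2f} MJ/m^3")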

  19. Automated cleaning and pre-processing of immunoglobulin gene sequences from high-throughput sequencing

    Directory of Open Access Journals (Sweden)

    Miri Michaeli

    2012-12-01

    Full Text Available High-throughput sequencing (HTS) yields tens of thousands to millions of sequences that require a large amount of pre-processing work to clean various artifacts. Such cleaning cannot be performed manually. Existing programs are not suitable for immunoglobulin (Ig) genes, which are variable and often highly mutated. This paper describes Ig-HTS-Cleaner (Ig High Throughput Sequencing Cleaner), a program containing a simple cleaning procedure that successfully deals with pre-processing of Ig sequences derived from HTS, and Ig-Indel-Identifier (Ig Insertion-Deletion Identifier), a program for identifying legitimate and artifact insertions and/or deletions (indels). Our programs were designed for analyzing Ig gene sequences obtained by 454 sequencing, but they are applicable to all types of sequences and sequencing platforms. Ig-HTS-Cleaner and Ig-Indel-Identifier have been implemented in Java and saved as executable JAR files, supported on Linux and MS Windows. No special requirements are needed to run the programs, except for correctly constructing the input files as explained in the text. The programs' performance has been tested and validated on real and simulated data sets.
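
    As a rough illustration of this kind of pre-processing (not the actual rules implemented in Ig-HTS-Cleaner or Ig-Indel-Identifier), the sketch below filters reads on length and ambiguous bases, and flags an indel as a probable artifact when it lies in or next to a homopolymer run, a well-known 454 error mode. The thresholds are arbitrary placeholders.

      # Toy read filtering and artifact-indel flagging for HTS Ig sequences.
      def passes_basic_filters(seq, min_len=250, max_ambiguous=0):
          """Reject reads that are too short or contain more than max_ambiguous N's."""
          return len(seq) >= min_len and seq.upper().count("N") <= max_ambiguous

      def homopolymer_length(seq, pos):
          """Length of the homopolymer run in seq that contains position pos."""
          base = seq[pos]
          left = pos
          while left > 0 and seq[left - 1] == base:
              left -= 1
          right = pos
          while right + 1 < len(seq) and seq[right + 1] == base:
              right += 1
          return right - left + 1

      def indel_is_likely_artifact(seq, indel_pos, min_run=3):
          """Treat an indel adjacent to a homopolymer of >= min_run bases as a probable artifact."""
          window = range(max(0, indel_pos - 1), min(len(seq), indel_pos + 2))
          return any(homopolymer_length(seq, p) >= min_run for p in window)

      read = "ACGTTTTTGCATACG"
      print(passes_basic_filters(read, min_len=10))       # True
      print(indel_is_likely_artifact(read, indel_pos=5))  # True: inside the TTTTT run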

  20. Histopathology reveals correlative and unique phenotypes in a high-throughput mouse phenotyping screen.

    Science.gov (United States)

    Adissu, Hibret A; Estabel, Jeanne; Sunter, David; Tuck, Elizabeth; Hooks, Yvette; Carragher, Damian M; Clarke, Kay; Karp, Natasha A; Newbigging, Susan; Jones, Nora; Morikawa, Lily; White, Jacqueline K; McKerlie, Colin

    2014-05-01

    The Mouse Genetics Project (MGP) at the Wellcome Trust Sanger Institute aims to generate and phenotype over 800 genetically modified mouse lines over the next 5 years to gain a better understanding of mammalian gene function and to provide an invaluable resource to the scientific community for follow-up studies. Phenotyping includes the generation of a standardized biobank of paraffin-embedded tissues for each mouse line, but histopathology is not routinely performed. In collaboration with the Pathology Core of the Centre for Modeling Human Disease (CMHD), we report the utility of histopathology in a high-throughput primary phenotyping screen. Histopathology was assessed in an unbiased selection of 50 mouse lines with (n=30) or without (n=20) clinical phenotypes detected by the standard MGP primary phenotyping screen. Our findings revealed that histopathology added correlative morphological data in 19 of the 30 lines (63.3%) in which the primary screen detected a phenotype. In addition, seven of the 50 lines (14%) presented significant histopathology findings that were not associated with or predicted by the standard primary screen. Three of these seven lines had no clinical phenotype detected by the standard primary screen. Incidental and strain-associated background lesions were present in all mutant lines, with good concordance to wild-type controls. These findings demonstrate the complementary and unique contribution of histopathology to high-throughput primary phenotyping of mutant mice.