van den Berg, Noëlani; Crampton, Bridget G; Hein, Ingo; Birch, Paul R J; Berger, Dave K
Efficient construction of cDNA libraries enriched for differentially expressed transcripts is an important first step in many biological investigations. We present a quantitative procedure for screening cDNA libraries constructed by suppression subtractive hybridization (SSH). The methodology was applied to two independent SSH experiments from pearl millet and banana. Following two-color cyanine dye labeling and hybridization of subtracted tester with either unsubtracted driver or unsubtracted tester cDNAs to the SSH libraries arrayed on glass slides, two values were calculated for each clone: an enrichment ratio 1 (ER1) and an enrichment ratio 2 (ER2). Graphical representation of ER1 and ER2 enabled the identification of clones likely to represent up-regulated transcripts. The degree of normalization of each clone by the SSH process was determined from the ER2 values, thereby indicating whether clones represented rare or abundant transcripts. Differential expression of pearl millet and banana clones identified from both libraries by this quantitative approach was verified by inverse Northern blot analysis.
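The ER1/ER2 screen described above reduces to a per-clone ratio computation on the two-channel hybridization signals. A minimal sketch in Python; the clone data, channel assignments, and the up-regulation cutoff are illustrative assumptions, not the authors' exact definitions:

```python
# Hypothetical sketch of the ER1/ER2 screening logic. Each hybridization
# gives per-clone two-channel intensities; the ratios are the screen.

def enrichment_ratios(sub_tester_vs_driver, sub_tester_vs_tester):
    """Each argument: dict clone_id -> (subtracted_signal, unsubtracted_signal)."""
    ratios = {}
    for clone in sub_tester_vs_driver:
        s, u = sub_tester_vs_driver[clone]
        er1 = s / u  # subtracted tester vs. unsubtracted driver
        s, u = sub_tester_vs_tester[clone]
        er2 = s / u  # subtracted tester vs. unsubtracted tester
        ratios[clone] = (er1, er2)
    return ratios

def likely_upregulated(ratios, er1_cutoff=2.0):
    # Clones whose subtracted signal exceeds the driver signal by the
    # cutoff are candidate up-regulated transcripts (cutoff is invented).
    return [c for c, (er1, _) in ratios.items() if er1 >= er1_cutoff]
```

Clones flagged by ER1 are candidate up-regulated transcripts; the corresponding ER2 value then indicates how strongly the SSH process normalized that clone (rare vs. abundant transcript).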
Botella, Eric; Fogg, Mark; Jules, Matthieu; Piersma, Sjouke; Doherty, Geoff; Hansen, Annette; Denham, Emma L.; Le Chat, Ludovic; Veiga, Patrick; Bailey, Kirra; Lewis, Peter J.; van Dijl, Jan Maarten; Aymerich, Stephane; Wilkinson, Anthony J.; Devine, Kevin M.
Plasmid pBaSysBioll was constructed for high-throughput analysis of gene expression in Bacillus subtilis. It is an integrative plasmid with a ligation-independent cloning (LIC) site, allowing the generation of transcriptional gfpmut3 fusions with desired promoters. Integration is by a Campbell-type
Mochizuki, Yuki; Suzuki, Takeru; Fujimoto, Kenzo; Nemoto, Naoto
cDNA display is a powerful in vitro display technology used to explore functional peptides and proteins from a huge library by in vitro selection. In addition to expediting the in vitro selection cycle by using cDNA display, easy and rapid functional analysis of selected candidate clones is crucial for high-throughput screening of functional peptides and proteins. In this report, a versatile puromycin linker employing an ultrafast photo-cross-linker, 3-cyanovinylcarbazole nucleoside, is introduced. Its utility for both in vitro selection using cDNA display and protein-protein interaction analysis using a surface plasmon resonance (SPR) system is described. Using this versatile puromycin linker, we demonstrated model in vitro selection of the FLAG epitope and an SPR-based assay to measure the dissociation constant between the B domain of protein A and immunoglobulin G. Improvement of the puromycin linker as described herein should make the cDNA display method easier to utilize for the design of protein- or peptide-based affinity reagents. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Background: The field of plasmid-based functional proteomics requires the rapid assay of proteins expressed from plasmid libraries. Automation is essential, since large sets of mutant open reading frames are being cloned for evaluation. To date, no integrated automated platform is available to carry out the entire process, including production of plasmid libraries, expression of cloned genes, and functional testing of expressed proteins. Results: We used a functional proteomic assay in a multiplexed setting on an integrated plasmid-based robotic workcell for high-throughput screening of mutants of cellulase F, an endoglucanase from the anaerobic fungus Orpinomyces PC-2. This allowed us to identify plasmids containing optimized clones expressing mutants with improved activity at lower pH. A plasmid library of mutagenized clones of the celF gene with targeted variations in the last four codons was constructed by site-directed PCR mutagenesis and transformed into Escherichia coli. A robotic picker integrated into the workcell was used to inoculate medium in a 96-well deep-well plate, combining the transformants into a multiplexed set in each well, and the plate was incubated on the workcell. Plasmids were prepared from the multiplexed culture on the liquid-handler component of the workcell and used for in vitro transcription/translation. The multiplexed expressed recombinant proteins were screened for improved activity and stability in an azo-carboxymethylcellulose plate assay. The multiplexed wells containing mutants with improved activity were identified and linked back to the corresponding multiplexed cultures stored in glycerol. Spread plates were prepared from the glycerol stocks and the workcell was used to pick single colonies from the spread plates, prepare plasmid, produce recombinant protein, and assay for activity. The screening assay and subsequent deconvolution of the multiplexed wells resulted in identification of improved Cel
The ability of plasmids to propagate in Saccharomyces cerevisiae has been instrumental in defining eukaryotic chromosomal control elements. Stable propagation demands both plasmid replication, which requires a chromosomal replication origin (i.e., an ARS), and plasmid distribution to dividing cells, which requires either a chromosomal centromere for segregation or a plasmid-partitioning element. While our knowledge of yeast ARSs and centromeres is relatively advanced, we know less about chromosomal regions that can function as plasmid-partitioning elements. The Rap1 protein-binding site (RAP1) present in transcriptional silencers and telomeres of budding yeast is a known plasmid-partitioning element that functions to anchor a plasmid to the inner nuclear membrane (INM), which in turn facilitates plasmid distribution to daughter cells. This Rap1-dependent INM anchoring also has an important chromosomal role in higher-order chromosomal structures that enhance transcriptional silencing and telomere stability. Thus, plasmid partitioning can reflect fundamental features of chromosome structure and biology, yet a systematic screen for plasmid-partitioning elements has not been reported. Here, we couple deep sequencing with competitive growth experiments of a plasmid library containing thousands of short ARS fragments to identify new plasmid-partitioning elements. Competitive growth experiments were performed with libraries that differed only in the presence or absence of a centromere. Comparisons of the behavior of ARS fragments in the two experiments allowed us to identify sequences that were likely to drive plasmid partitioning. In addition to the silencer RAP1 site, we identified 74 new putative plasmid-partitioning motifs predicted to act as binding sites for DNA-binding proteins enriched for roles in negative regulation of gene expression and G2/M-phase-associated biology. These data expand our knowledge of chromosomal elements that may
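The centromere-present vs. centromere-absent comparison described above can be sketched as a per-fragment retention score. The read counts and log2 scoring below are illustrative assumptions; the study's actual statistics are not given in this abstract:

```python
# Illustrative sketch: fragments whose abundance is maintained in the
# centromere-free library (relative to the CEN control library) are
# candidate partitioning elements. All counts are invented.
import math

def partition_score(counts_nocen, counts_cen):
    """counts_*: dict fragment -> (reads at start, reads at end) of the
    competitive growth experiment."""
    scores = {}
    for frag in counts_nocen:
        s0, s1 = counts_nocen[frag]
        c0, c1 = counts_cen[frag]
        # log2 fold-retention in each library, then their difference
        retention_nocen = math.log2(s1 / s0)
        retention_cen = math.log2(c1 / c0)
        scores[frag] = retention_nocen - retention_cen
    return scores
```

Fragments with scores near zero persist even without a centromere and so behave as partitioning candidates; fragments that wash out only in the centromere-free library score strongly negative.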
Peeters, Ben; de Leeuw, Olav
Reverse genetics systems for non-segmented negative-strand RNA viruses rely on co-transfection of a plasmid containing the full-length viral cDNA and helper plasmids encoding essential viral replication proteins. Here, a system is presented in which virus can be rescued from a single plasmid
Federal Laboratory Consortium — Argonne's high throughput facility provides highly automated and parallel approaches to materials and materials chemistry development. The facility allows scientists...
The ideal chemical testing approach will provide complete coverage of all relevant toxicological responses. It should be sensitive and specific. It should identify the mechanism/mode-of-action (with dose dependence). It should identify responses relevant to the species of interest. Responses should ideally be translated into tissue-, organ-, and organism-level effects. It must be economical and scalable. Using a High Throughput Transcriptomics platform within US EPA provides broader coverage of biological activity space and toxicological MOAs and helps fill the toxicological data gap. Slide presentation at the 2016 ToxForum on using High Throughput Transcriptomics at US EPA for broader coverage of biological activity space and toxicological MOAs.
Beernink, Peter T [Walnut Creek, CA]; Coleman, Matthew A [Oakland, CA]; Segelke, Brent W [San Ramon, CA]
Methods, compositions, and kits for the cell-free production and analysis of proteins are provided. The invention allows for the production of proteins from prokaryotic or eukaryotic sequences, including human cDNAs, using PCR and IVT methods, and detection of the proteins through fluorescence or immunoblot techniques. This invention can be used to identify optimized PCR and IVT conditions, codon usages and mutations. The methods are readily automated and can be used for high-throughput analysis of protein expression levels, interactions, and functional states.
Mujovic, Selman; Foster, John
The troublesome emergence of new classes of micro-pollutants, such as pharmaceuticals and endocrine disruptors, poses challenges for conventional water treatment systems. In an effort to address these contaminants and to support water reuse in drought-stricken regions, new technologies must be introduced. The interaction of water with plasma rapidly mineralizes organics by inducing advanced oxidation in addition to other chemical, physical and radiative processes. The primary barrier to the implementation of plasma-based water treatment is scale-up of the process volume. In this work, we investigate a potentially scalable, high-throughput plasma water reactor that utilizes a packed-bed, dielectric-barrier-like geometry to maximize the plasma-water interface. Here, the water serves as the dielectric medium. High-speed imaging and emission spectroscopy are used to characterize the reactor discharges. Changes in methylene blue concentration and basic water parameters are mapped as a function of plasma treatment time. Experimental results are compared to electrostatic and plasma chemistry computations, which provide insight into the reactor's operation so that efficiency can be assessed. Supported by NSF (CBET 1336375).
Olivarius, Signe; Plessy, Charles; Carninci, Piero
We present a high-throughput method for investigating the transcriptional start sites of genes of interest, which we named Deep-RACE (deep rapid amplification of cDNA ends). Taking advantage of the latest sequencing technology, it allows the parallel analysis of multiple genes and is free...
Ligterink, Wilco; Hilhorst, Henk W.M.
High-throughput analysis of seed germination for phenotyping large genetic populations or mutant collections is very labor intensive and would highly benefit from an automated setup. Although very often used, the total germination percentage after a nominated period of time is not very
Gesley, M.; Puri, R.
A high-throughput spectral image microscopy system is configured for rapid detection of rare cells in large populations. To overcome the rate limits of flow cytometry and the need for fluorophore tags, the system architecture integrates sample mechanical handling, signal processors, and optics in a non-confocal version of light absorption and scattering spectroscopic microscopy. Spectral images with native contrast do not require the use of exogenous stain to render cells with submicron resolution. Structure may be characterized without restriction to cell clusters of differentiation.
Background: Establishing a suitable level of exogenous gene expression in mammalian cells in general, and embryonic stem (ES) cells in particular, is an important aspect of understanding pathways of cell differentiation, signal transduction and cell physiology. Despite its importance, this process remains challenging because of the poor correlation between the presence of introduced exogenous DNA and its transcription. Consequently, many transfected cells must be screened to identify those with an appropriate level of expression. To improve the screening process, we investigated the utility of the human interleukin 12 (IL-12) p40 cDNA as a reporter gene for studies of mammalian gene expression and for high-throughput screening of engineered mouse embryonic stem cells. Results: A series of expression plasmids were used to study the utility of IL-12 p40 as an accurate reporter of gene activity. These studies included a characterization of the IL-12 p40 expression system in terms of: (i) a time course of IL-12 p40 accumulation in the medium of transfected cells; (ii) the dose-response relationship between the input DNA and IL-12 p40 mRNA levels and IL-12 p40 protein secretion; (iii) the utility of IL-12 p40 as a reporter gene for analyzing the activity of cis-acting genetic elements; (iv) expression of the IL-12 p40 reporter protein driven by an IRES element in a bicistronic mRNA; (v) the utility of IL-12 p40 as a reporter gene in a high-throughput screening strategy to identify successfully transformed mouse embryonic stem cells; (vi) demonstration of pluripotency of IL-12 p40-expressing ES cells in vitro and in vivo; and (vii) germline transmission of the IL-12 p40 reporter gene. Conclusion: IL-12 p40 showed several advantages as a reporter gene in terms of sensitivity and ease of the detection procedure. The IL-12 p40 assay was rapid and simple, inasmuch as the reporter protein secreted from the transfected cells was accurately measured by ELISA using
Michael I Miller
This paper describes neuroinformatics technologies at 1 mm anatomical scale based on high-throughput 3D functional and structural imaging technologies of the human brain. The core is an abstract pipeline for converting functional and structural imagery into their high-dimensional neuroinformatic representations, an index containing O(1E3-1E4) discriminating dimensions. The pipeline is based on advanced image analysis coupled to digital knowledge representations in the form of dense atlases of the human brain at gross anatomical scale. We demonstrate the integration of these high-dimensional representations with machine learning methods, which have become the mainstay of other fields of science including genomics as well as social networks. Such high-throughput facilities have the potential to alter the way medical images are stored and utilized in radiological workflows. The neuroinformatics pipeline is used to examine cross-sectional and personalized analyses of neuropsychiatric illnesses in clinical applications as well as longitudinal studies. We demonstrate the use of high-throughput machine learning methods for supporting (i) cross-sectional image analysis to evaluate the health status of individual subjects with respect to the population data, and (ii) integration of image and non-image information for diagnosis and prognosis.
Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy.
Hartley, John G.; Govindaraju, Lakshmi
Many people in the semiconductor industry bemoan the high costs of masks and view mask cost as one of the significant barriers to bringing new chip designs to market. All that is needed is a viable maskless technology and the problem will go away. Numerous sites around the world are working on maskless lithography but inevitably, the question asked is "Wouldn't a one wafer per hour maskless tool make a really good mask writer?" Of course, the answer is yes; the hesitation you hear in the answer isn't based on technology concerns, it's financial. The industry needs maskless lithography because mask costs are too high. Mask costs are too high because mask pattern generators (PGs) are slow and expensive. If mask PGs become much faster, mask costs go down, the maskless market goes away and the PG supplier is faced with an even smaller tool demand from the mask shops. Technical success becomes financial suicide - or does it? In this paper we present the results of a model that examines some of the consequences of introducing high-throughput maskless pattern generation. Specific features in the model include tool throughput for masks and wafers, market segmentation by node for masks and wafers, and mask cost as an entry barrier to new chip designs. How does the availability of low-cost masks and maskless tools affect the industry's tool makeup, and what is the ultimate potential market for high-throughput maskless pattern generators?
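The feedback loop the model explores can be caricatured in a few lines: faster pattern generators mean fewer write hours, cheaper masks, and more design starts. All parameter names and numbers below are invented for illustration and are not taken from the authors' model:

```python
# Toy version of the cost feedback described above (all values invented).

def mask_cost(writer_cost_per_hr, write_hours, yield_frac=0.9):
    # Slower writers (more write hours) and poor yield both raise mask cost.
    return writer_cost_per_hr * write_hours / yield_frac

def design_starts(budget, masks_per_set, cost_per_mask):
    # Entry barrier: a design starts only if a full mask set fits the budget.
    return int(budget // (masks_per_set * cost_per_mask))
```

Halving write hours halves mask cost, which roughly doubles the number of affordable design starts at a fixed budget: the mechanism by which a faster PG shrinks its own maskless market.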
Dusheyko, Serge; Furman, Craig; Pangilinan, Jasmyn; Shapiro, Harris; Tu, Hank
Metagenome data sets present a qualitatively different assembly problem than traditional single-organism whole-genome shotgun (WGS) assembly. The unique aspects of such projects include the presence of a potentially large number of distinct organisms and their representation in the data set at widely different fractions. In addition, multiple closely related strains could be present, which would be difficult to assemble separately. Failure to take these issues into account can result in poor assemblies that either jumble together different strains or fail to yield useful results. The DOE Joint Genome Institute has sequenced a number of metagenomic projects and plans to considerably increase this number in the coming year. As a result, the JGI has a need for high-throughput tools and techniques for handling metagenome projects. We present the techniques developed to handle metagenome assemblies in a high-throughput environment. This includes a streamlined assembly wrapper based on the JGI's in-house WGS assembler, Jazz. It also includes the selection of sensible defaults targeted for metagenome data sets, as well as quality-control automation for cleaning up the raw results. While analysis is ongoing, we will discuss preliminary assessments of the quality of the assembly results (http://fames.jgi-psf.org).
Environmental chemicals can elicit endocrine disruption by altering steroid hormone biosynthesis and metabolism (steroidogenesis) causing adverse reproductive and developmental effects. Historically, a lack of assays resulted in few chemicals having been evaluated for effects on steroidogenesis. The steroidogenic pathway is a series of hydroxylation and dehydrogenation steps carried out by CYP450 and hydroxysteroid dehydrogenase enzymes, yet the only enzyme in the pathway for which a high-throughput screening (HTS) assay has been developed is aromatase (CYP19A1), responsible for the aromatization of androgens to estrogens. Recently, the ToxCast HTS program adapted the OECD validated H295R steroidogenesis assay using human adrenocortical carcinoma cells into a high-throughput model to quantitatively assess the concentration-dependent (0.003-100 µM) effects of chemicals on 10 steroid hormones including progestagens, androgens, estrogens and glucocorticoids. These results, in combination with two CYP19A1 inhibition assays, comprise a large dataset amenable to clustering approaches supporting the identification and characterization of putative mechanisms of action (pMOA) for steroidogenesis disruption. In total, 514 chemicals were tested in all CYP19A1 and steroidogenesis assays. 216 chemicals were identified as CYP19A1 inhibitors in at least one CYP19A1 assay. 208 of these chemicals also altered hormone levels in the H295R assay, suggesting 96% sensitivity in the
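The 96% figure quoted above is a simple conditional proportion over the CYP19A1 inhibitors; a sketch of the calculation, using only the counts given in the abstract:

```python
# Sensitivity here: fraction of chemicals active in the reference
# (CYP19A1 inhibition) assays that were also active in the H295R assay.

def sensitivity(n_confirmed, n_reference):
    return n_confirmed / n_reference

# From the abstract: 208 of the 216 CYP19A1 inhibitors also altered
# hormone levels in the H295R steroidogenesis assay.
h295r_sensitivity = sensitivity(208, 216)
```

208/216 is about 0.963, i.e., the quoted 96% sensitivity.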
Lu, Guoxin [Iowa State Univ., Ames, IA (United States)
High-throughput screening (HTS) techniques are now applied in many research fields. Robotic microarray printing and automated microtiter plate handling allow HTS to be performed in both heterogeneous and homogeneous formats, with minimal sample required for each assay element. In this dissertation, new HTS techniques for enzyme activity analysis were developed. First, patterns of immobilized enzyme on nylon screen were detected by a multiplexed capillary system. The imaging resolution is limited by the outer diameter of the capillaries; in order to obtain finer images, capillaries with smaller outer diameters can be used to form the imaging probe. Application of capillary electrophoresis allows separation of the product from the substrate in the reaction mixture, so the product need not differ from the substrate in its optical properties. UV absorption detection allows nearly universal detection of organic molecules. Thus, no modifications of either the substrate or the product molecules are necessary. This technique has the potential to be used in screening of local distribution variations of specific biomolecules in a tissue or in screening of multiple immobilized catalysts. Another high-throughput screening technique was developed by directly monitoring the light intensity of the immobilized-catalyst surface using a scientific charge-coupled device (CCD). Briefly, the surface of the enzyme microarray is focused onto a scientific CCD using an objective lens. By carefully choosing the detection wavelength, generation of product on an enzyme spot can be seen by the CCD. Analyzing the light-intensity change over time on an enzyme spot gives information on the reaction rate. The same microarray can be used many times. Thus, high-throughput kinetic studies of hundreds of catalytic reactions are made possible. Finally, we studied the fluorescence emission spectra of ADP and obtained the detection limits for ADP under three different
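The CCD-based kinetic readout described above amounts to fitting intensity vs. time for each spot. A minimal least-squares sketch, with background correction and intensity-to-concentration calibration omitted for brevity:

```python
# Estimate an initial reaction rate as the slope of a straight-line fit
# to (time, spot intensity) pairs. Plain least squares, no dependencies.

def initial_rate(times, intensities):
    n = len(times)
    mt = sum(times) / n
    mi = sum(intensities) / n
    num = sum((t - mt) * (i - mi) for t, i in zip(times, intensities))
    den = sum((t - mt) ** 2 for t in times)
    return num / den  # intensity units per unit time
```

Applied to every spot on the array, this yields one rate per immobilized catalyst, which is what makes the parallel kinetic comparison possible.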
Saccharomyces cerevisiae (budding yeast) is a powerful eukaryotic model organism ideally suited to high-throughput genetic analyses, which time and again has yielded insights that further our understanding of cell biology processes conserved in humans. Lithium acetate (LiAc) transformation of yeast with DNA for the purposes of exogenous protein expression (e.g., plasmids) or genome mutation (e.g., gene mutation, deletion, epitope tagging) is a useful and long-established method. However, a reliable and optimized high-throughput transformation protocol that runs almost no risk of human error has not been described in the literature. Here, we describe such a method that is broadly transferable to most liquid-handling high-throughput robotic platforms, which are now commonplace in academic and industry settings. Using our optimized method, we are able to comfortably transform approximately 1200 individual strains per day, allowing complete transformation of typical genomic yeast libraries within 6 days. In addition, use of our protocol for gene knockout purposes also provides a potentially quicker, easier and more cost-effective approach to generating collections of double mutants than the popular and elegant synthetic genetic array methodology. In summary, our methodology will be of significant use to anyone interested in high-throughput molecular and/or genetic analysis of yeast.
Li, Xianqiang; Jiang, Xin; Yaoi, Takuro
Transcription factors are a group of proteins that modulate the expression of genes involved in many biological processes, such as cell growth and differentiation. Alterations in transcription factor function are associated with many human diseases, and therefore these proteins are attractive potential drug targets. A key issue in the development of such therapeutics is the generation of effective tools that can be used for high throughput discovery of the critical transcription factors involved in human diseases, and the measurement of their activities in a variety of disease or compound-treated samples. Here, a number of innovative arrays and 96-well format assays for profiling and measuring the activities of transcription factors will be discussed.
Pardo-Martin, Carlos; Allalou, Amin; Medina, Jaime; Eimon, Peter M; Wählby, Carolina; Fatih Yanik, Mehmet
Most gene mutations and biologically active molecules cause complex responses in animals that cannot be predicted by cell culture models. Yet animal studies remain too slow and their analyses are often limited to only a few readouts. Here we demonstrate high-throughput optical projection tomography with micrometre resolution and hyperdimensional screening of entire vertebrates in tens of seconds using a simple fluidic system. Hundreds of independent morphological features and complex phenotypes are automatically captured in three dimensions with unprecedented speed and detail in semitransparent zebrafish larvae. By clustering quantitative phenotypic signatures, we can detect and classify even subtle alterations in many biological processes simultaneously. We term our approach hyperdimensional in vivo phenotyping. To illustrate the power of hyperdimensional in vivo phenotyping, we have analysed the effects of several classes of teratogens on cartilage formation using 200 independent morphological measurements, and identified similarities and differences that correlate well with their known mechanisms of action in mammals.
Shukla, Abhinav A; Rameez, Shahid; Wolfe, Leslie S; Oien, Nathan
The ability to conduct multiple experiments in parallel significantly reduces the time that it takes to develop a manufacturing process for a biopharmaceutical. This is particularly significant before clinical entry, because process development and manufacturing are on the "critical path" for a drug candidate to enter clinical development. High-throughput process development (HTPD) methodologies can be similarly impactful during late-stage development, both for developing the final commercial process as well as for process characterization and scale-down validation activities that form a key component of the licensure filing package. This review examines the current state of the art for HTPD methodologies as they apply to cell culture, downstream purification, and analytical techniques. In addition, we provide a vision of how HTPD activities across all of these spaces can integrate to create a rapid process development engine that can accelerate biopharmaceutical drug development.
Waage, Johannes Eichler
The recent advent of high throughput sequencing of nucleic acids (RNA and DNA) has vastly expanded research into the functional and structural biology of the genome of all living organisms (and even a few dead ones). With this enormous and exponential growth in biological data generation come equally large demands in data handling, analysis and interpretation, perhaps defining the modern challenge of the computational biologist of the post-genomic era. The first part of this thesis consists of a general introduction to the history, common terms and challenges of next generation sequencing, focusing on oft encountered problems in data processing, such as quality assurance, mapping, normalization, visualization, and interpretation. Presented in the second part are scientific endeavors representing solutions to problems of two sub-genres of next generation sequencing. For the first flavor, RNA-sequencing...
Waage, Johannes Eichler
The recent advent of high throughput sequencing of nucleic acids (RNA and DNA) has vastly expanded research into the functional and structural biology of the genome of all living organisms (and even a few dead ones). With this enormous and exponential growth in biological data generation come equally large demands in data handling, analysis and interpretation, perhaps defining the modern challenge of the computational biologist of the post-genomic era. The first part of this thesis consists of a general introduction to the history, common terms and challenges of next generation sequencing. For the second flavor, DNA-seq, a study presenting genome-wide profiling of transcription factor CEBP/A in liver cells undergoing regeneration after partial hepatectomy (article IV) is included.
Woodruff, Kristina; Maerkl, Sebastian J.
Mammalian synthetic biology could be augmented through the development of high-throughput microfluidic systems that integrate cellular transfection, culturing, and imaging. We created a microfluidic chip that cultures cells and implements 280 independent transfections at up to 99% efficiency. The chip can perform co-transfections, in which the number of cells expressing each protein and the average protein expression level can be precisely tuned as a function of input DNA concentration and synthetic gene circuits can be optimized on chip. We co-transfected four plasmids to test a histidine kinase signaling pathway and mapped the dose dependence of this network on the level of one of its constituents. The chip is readily integrated with high-content imaging, enabling the evaluation of cellular behavior and protein expression dynamics over time. These features make the transfection chip applicable to high-throughput mammalian protein and synthetic biology studies. PMID:27030663
AOARD Grant 144064 / FA2386-14-1-4064, "High Throughput Catalyst Screening via Surface Plasmon Spectroscopy": final report covering 26 June 2014 to 25 March 2015, dated July 15, 2015.
Protein X-ray crystallography recently celebrated its 50th anniversary. The structures of myoglobin and hemoglobin determined by Kendrew and Perutz provided the first glimpses into the complex protein architecture and chemistry. Since then, the field of structural molecular biology has experienced extraordinary progress and now more than 55000 protein structures have been deposited into the Protein Data Bank. In the past decade many advances in macromolecular crystallography have been driven by world-wide structural genomics efforts. This was made possible because of third-generation synchrotron sources, structure phasing approaches using anomalous signal, and cryo-crystallography. Complementary progress in molecular biology, proteomics, hardware and software for crystallographic data collection, structure determination and refinement, computer science, databases, robotics and automation improved and accelerated many processes. These advancements provide the robust foundation for structural molecular biology and assure strong contribution to science in the future. In this report we focus mainly on reviewing structural genomics high-throughput X-ray crystallography technologies and their impact.
Protein X-ray crystallography recently celebrated its 50th anniversary. The structures of myoglobin and hemoglobin determined by Kendrew and Perutz provided the first glimpses into the complex protein architecture and chemistry. Since then, the field of structural molecular biology has experienced extraordinary progress and now over 53,000 protein structures have been deposited into the Protein Data Bank. In the past decade many advances in macromolecular crystallography have been driven by world-wide structural genomics efforts. This was made possible because of third-generation synchrotron sources, structure phasing approaches using anomalous signal and cryo-crystallography. Complementary progress in molecular biology, proteomics, hardware and software for crystallographic data collection, structure determination and refinement, computer science, databases, robotics and automation improved and accelerated many processes. These advancements provide the robust foundation for structural molecular biology and assure strong contribution to science in the future. In this report we focus mainly on reviewing structural genomics high-throughput X-ray crystallography technologies and their impact. PMID:19765976
Hancock, Matthew; Shephard, Elizabeth A
We describe a method for high-throughput analysis of protein-binding sites in DNA using 96-well plates and capillary electrophoresis. The genomic DNA or plasmid DNA to be analyzed is amplified using fluorescent primers, incubated with an appropriate nuclear extract and treated with DNase I. Separation of the DNase I-generated fragments and co-analysis of their base sequences identify the position of protein-binding sites in a DNA fragment. The method is applicable to the identification of base changes, e.g., single-nucleotide polymorphisms (SNPs), that eliminate protein binding to DNA.
Lundqvist, Magnus; Edfors, Fredrik; Sivertsson, Åsa
present a robust automated protocol for restriction enzyme based SPC and its performance for the cloning of >60 000 unique human gene fragments into expression vectors. In addition, we report on SPC-based single-strand assembly for applications where exact control of the sequence between fragments......We describe solid-phase cloning (SPC) for high-throughput assembly of expression plasmids. Our method allows PCR products to be put directly into a liquid handler for capture and purification using paramagnetic streptavidin beads and conversion into constructs by subsequent cloning reactions. We...
Full Text Available Abstract Background The variations within an individual's HLA (Human Leukocyte Antigen genes have been linked to many immunological events, e.g. susceptibility to disease, response to vaccines, and the success of blood, tissue, and organ transplants. Although the microarray format has the potential to achieve high-resolution typing, this has yet to be attained due to inefficiencies of current probe design strategies. Results We present a novel three-step approach for the design of high-throughput microarray assays for HLA typing. This approach first selects sequences containing the SNPs present in all alleles of the locus of interest and next calculates the number of base changes necessary to convert a candidate probe sequences to the closest subsequence within the set of sequences that are likely to be present in the sample including the remainder of the human genome in order to identify those candidate probes which are "ultraspecific" for the allele of interest. Due to the high specificity of these sequences, it is possible that preliminary steps such as PCR amplification are no longer necessary. Lastly, the minimum number of these ultraspecific probes is selected such that the highest resolution typing can be achieved for the minimal cost of production. As an example, an array was designed and in silico results were obtained for typing of the HLA-B locus. Conclusion The assay presented here provides a higher resolution than has previously been developed and includes more alleles than previously considered. Based upon the in silico and preliminary experimental results, we believe that the proposed approach can be readily applied to any highly polymorphic gene system.
High throughput toxicokinetics (HTTK) is an approach that allows for rapid estimations of TK for hundreds of environmental chemicals. HTTK-based reverse dosimetry (i.e, reverse toxicokinetics or RTK) is used in order to convert high throughput in vitro toxicity screening (HTS) da...
Ghany, Mohamed A. Abd El; El-Moursy, Magdy A.; Ismail, Mohammed
In this chapter, the high throughput NoC architecture is proposed to increase the throughput of the switch in NoC. The proposed architecture can also improve the latency of the network. The proposed high throughput interconnect architecture is applied on different NoC architectures. The architecture increases the throughput of the network by more than 38% while preserving the average latency. The area of high throughput NoC switch is decreased by 18% as compared to the area of BFT switch. The...
Van den Berg, N
Full Text Available subtractive hybridization (SSH). The methodology was applied to two independent SSHs from pearl millet and banana. Following two-colour cyanin dye labelling and hybridization of subtracted tester with either unsubtracted driver or unsubtracted tester c...
Shapiro, Adam; Jahic, Haris; Prasad, Swati; Ehmann, David; Thresher, Jason; Gao, Ning; Hajec, Laurel
The degree of supercoiling of DNA is vital for cellular processes, such as replication and transcription. DNA topology is controlled by the action of DNA topoisomerase enzymes. Topoisomerases, because of their importance in cellular replication, are the targets of several anticancer and antibacterial drugs. In the search for new drugs targeting topoisomerases, a biochemical assay compatible with automated high-throughput screening (HTS) would be valuable. Gel electrophoresis is the standard method for measuring changes in the extent of supercoiling of plasmid DNA when acted upon by topoisomerases, but this is a low-throughput and laborious method. A medium-throughput method was described previously that quantitatively distinguishes relaxed and supercoiled plasmids by the difference in their abilities to form triplex structures with an immobilized oligonucleotide. In this article, the authors describe a homogeneous supercoiling assay based on triplex formation in which the oligonucleotide strand is labeled with a fluorescent dye and the readout is fluorescence anisotropy. The new assay requires no immobilization, filtration, or plate washing steps and is therefore well suited to HTS for inhibitors of topoisomerases. The utility of this assay is demonstrated with relaxation of supercoiled plasmid by Escherichia coli topoisomerase I, supercoiling of relaxed plasmid by E. coli DNA gyrase, and inhibition of gyrase by fluoroquinolones and nalidixic acid.
National Aeronautics and Space Administration — Busek is developing a high throughput nominal 100-W Hall Effect Thruster. This device is well sized for spacecraft ranging in size from several tens of kilograms to...
As high throughput screening (HTS) approaches play a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models for this purpose are becoming increasingly more sophisticated...
National Aeronautics and Space Administration — Busek Co. Inc. proposes to develop a high throughput, nominal 100 W Hall Effect Thruster (HET). This HET will be sized for small spacecraft (< 180 kg), including...
Shubhakar, A.; Reiding, K.R.; Gardner, R.A.; Spencer, D.I.R.; Fernandes, D.L.; Wuhrer, M.
This review covers advances in analytical technologies for high-throughput (HTP) glycomics. Our focus is on structural studies of glycoprotein glycosylation to support biopharmaceutical realization and the discovery of glycan biomarkers for human disease. For biopharmaceuticals, there is increasing
de Boer, Jan; van Blitterswijk, Clemens
This complete, yet concise, guide introduces you to the rapidly developing field of high throughput screening of biomaterials: materiomics. Bringing together the key concepts and methodologies used to determine biomaterial properties, you will understand the adaptation and application of materomics
High throughput methodologies such as microarrays, mass spectrometry and plate-based small molecule screens are increasingly used to facilitate discoveries from gene function to drug candidate identification. These large-scale experiments are typically carried out over the course...
Kim, Hyunsung John
High throughput sequencing methods have fundamentally shifted the manner in which biological experiments are performed. In this dissertation, conventional and novel high throughput sequencing and bioinformatics methods are applied to immunology and diagnostics. In order to study rare subsets of cells, an RNA sequencing method was first optimized for use with minimal levels of RNA and cellular input. The optimized RNA sequencing method was then applied to study the transcriptional differences ...
Hughes, Stephen R; Butt, Tauseef R; Bartolett, Scott; Riedmuller, Steven B; Farrelly, Philip
The molecular biological techniques for plasmid-based assembly and cloning of gene open reading frames are essential for elucidating the function of the proteins encoded by the genes. High-throughput integrated robotic molecular biology platforms that have the capacity to rapidly clone and express heterologous gene open reading frames in bacteria and yeast and to screen large numbers of expressed proteins for optimized function are an important technology for improving microbial strains for biofuel production. The process involves the production of full-length complementary DNA libraries as a source of plasmid-based clones to express the desired proteins in active form for determination of their functions. Proteins that were identified by high-throughput screening as having desired characteristics are overexpressed in microbes to enable them to perform functions that will allow more cost-effective and sustainable production of biofuels. Because the plasmid libraries are composed of several thousand unique genes, automation of the process is essential. This review describes the design and implementation of an automated integrated programmable robotic workcell capable of producing complementary DNA libraries, colony picking, isolating plasmid DNA, transforming yeast and bacteria, expressing protein, and performing appropriate functional assays. These operations will allow tailoring microbial strains to use renewable feedstocks for production of biofuels, bioderived chemicals, fertilizers, and other coproducts for profitable and sustainable biorefineries. Published by Elsevier Inc.
Zhang, Ying; Mallapragada, Surya K; Narasimhan, Balaji
The dissolution behavior of polystyrene (PS) in biodiesel was studied by developing a novel high throughput approach based on Fourier-transform infrared (FTIR) microscopy. A multiwell device for high throughput dissolution testing was fabricated using a photolithographic rapid prototyping method. The dissolution of PS films in each well was tracked by following the characteristic IR band of PS and the effect of PS molecular weight and temperature on the dissolution rate was simultaneously investigated. The results were validated with conventional gravimetric methods. The high throughput method can be extended to evaluate the dissolution profiles of a large number of samples, or to simultaneously investigate the effect of variables such as polydispersity, crystallinity, and mixed solvents. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Morgan, R E; Westwood, N J
High throughput technologies continue to develop in response to the challenges set by the genome projects. This article discusses how the techniques of both high throughput screening (HTS) and synthesis can influence research in parasitology. Examples of the use of targeted and phenotype-based HTS using unbiased compound collections are provided. The important issue of identifying the protein target(s) of bioactive compounds is discussed from the synthetic chemist's perspective. This article concludes by reviewing recent examples of successful target identification studies in parasitology.
Full Text Available Development of next generation batteries requires a breakthrough in materials. Traditional one-by-one method, which is suitable for synthesizing large number of sing-composition material, is time-consuming and costly. High throughput and combinatorial experimentation, is an effective method to synthesize and characterize huge amount of materials over a broader compositional region in a short time, which enables to greatly speed up the discovery and optimization of materials with lower cost. In this work, high throughput and combinatorial materials synthesis technologies for lithium ion battery research are discussed, and our efforts on developing such instrumentations are introduced.
Goda, Keisuke; Fard, Ali; Malik, Omer; Fu, Gilbert; Quach, Alan; Jalali, Bahram
We report high-throughput optical coherence tomography (OCT) that offers 1,000 times higher axial scan rate than conventional OCT in the 800 nm spectral range. This is made possible by employing photonic time-stretch for chirping a pulse train and transforming it into a passive swept source. We demonstrate a record high axial scan rate of 90.9 MHz. To show the utility of our method, we also demonstrate real-time observation of laser ablation dynamics. Our high-throughput OCT is expected to be useful for industrial applications where the speed of conventional OCT falls short.
Hoflack, Lieve; De Groeve, Manu; Desmet, Tom; Van Gerwen, Peter; Soetaert, Wim
A calorimetric assay is described for the high-throughput screening of enzymes that produce inorganic phosphate. In the current example, cellobiose phosphorylase (EC 188.8.131.52) is tested for its ability to synthesise rare disaccharides. The generated phosphate is measured in a high-throughput calorimeter by coupling the reaction to pyruvate oxidase and catalase. This procedure allows for the simultaneous analysis of 48 reactions in microtiter plate format and has been validated by comparison with a colorimetric phosphate assay. The proposed assay has a coefficient of variation of 3.14% and is useful for screening enzyme libraries for enhanced activity and substrate libraries for enzyme promiscuity.
This work demonstrates the detection method for a high throughput droplet based agglutination assay system. Using simple hydrodynamic forces to mix and aggregate functionalized microbeads we avoid the need to use magnetic assistance or mixing structures. The concentration of our target molecules was estimated by agglutination strength, obtained through optical image analysis. Agglutination in droplets was performed with flow rates of 150 µl/min and occurred in under a minute, with potential to perform high-throughput measurements. The lowest target concentration detected in droplet microfluidics was 0.17 nM, which is three orders of magnitude more sensitive than a conventional card based agglutination assay.
Sheppod, Timothy; Satterfield, Brent; Hukari, Kyle W.; West, Jason A. A.; Hux, Gary A.
The advancement of DNA cloning has significantly augmented the potential threat of a focused bioweapon assault, such as a terrorist attack. With current DNA cloning techniques, toxin genes from the most dangerous (but environmentally labile) bacterial or viral organism can now be selected and inserted into robust organism to produce an infinite number of deadly chimeric bioweapons. In order to neutralize such a threat, accurate detection of the expressed toxin genes, rather than classification on strain or genealogical decent of these organisms, is critical. The development of a high-throughput microarray approach will enable the detection of unknowns chimeric bioweapons. The development of a high-throughput microarray approach will enable the detection of unknown bioweapons. We have developed a unique microfluidic approach to capture and concentrate these threat genes (mRNA's) upto a 30 fold concentration. These captured oligonucleotides can then be used to synthesize in situ oligonucleotide copies (cDNA probes) of the captured genes. An integrated microfluidic architecture will enable us to control flows of reagents, perform clean-up steps and finally elute nanoliter volumes of synthesized oligonucleotides probes. The integrated approach has enabled a process where chimeric or conventional bioweapons can rapidly be identified based on their toxic function, rather than being restricted to information that may not identify the critical nature of the threat.
Bach, Søren Spanner; Bassard, Jean-Étienne; Andersen-Ranberg, Johan; Møldrup, Morten Emil; Simonsen, Henrik Toft; Hamberger, Björn
To respond to the rapidly growing number of genes putatively involved in terpenoid metabolism, a robust high-throughput platform for functional testing is needed. An in planta expression system offers several advantages such as the capacity to produce correctly folded and active enzymes localized to the native compartments, unlike microbial or prokaryotic expression systems. Two inherent drawbacks of plant-based expression systems, time-consuming generation of transgenic plant lines and challenging gene-stacking, can be circumvented by transient expression in Nicotiana benthamiana. In this chapter we describe an expression platform for rapid testing of candidate terpenoid biosynthetic genes based on Agrobacterium mediated gene expression in N. benthamiana leaves. Simultaneous expression of multiple genes is facilitated by co-infiltration of leaves with several engineered Agrobacterium strains, possibly making this the fastest and most convenient system for the assembly of plant terpenoid biosynthetic routes. Tools for cloning of expression plasmids, N. benthamiana culturing, Agrobacterium preparation, leaf infiltration, metabolite extraction, and automated GC-MS data mining are provided. With all steps optimized for high throughput, this in planta expression platform is particularly suited for testing large panels of candidate genes in all possible permutations.
The molecular biological techniques for plasmid-based assembly and cloning of synthetic assembled gene open reading frames are essential for elucidating the function of the proteins encoded by the genes. These techniques involve the production of full-length cDNA libraries as a source of plasmid-bas...
Yang, Luzhu; Wang, Yanjun; Li, Baoxin; Jin, Yan
Formation of the G-quadruplex in the human telomeric DNA is an effective way to inhibit telomerase activity. Therefore, screening ligands of G-quadruplex has potential applications in the treatment of cancer by inhibit telomerase activity. Although several techniques have been explored for screening of telomeric G-quadruplexes ligands, high-throughput screening method for fast screening telomere-binding ligands from the large compound library is still urgently needed. Herein, a label-free fluorescence strategy has been proposed for high-throughput screening telomere-binding ligands by using DNA-copper nanoparticles (DNA-CuNPs) as a signal probe. In the absence of ligands, human telomeric DNA (GDNA) hybridized with its complementary DNA (cDNA) to form double stranded DNA (dsDNA) which can act as an efficient template for the formation of DNA-CuNPs, leading to the high fluorescence of DNA-CuNPs. In the presence of ligands, GDNA folded into G-quadruplex. Single-strdanded cDNA does not support the formation of DNA-CuNP, resulting in low fluorescence of DNA-CuNPs. Therefore, telomere-binding ligands can be high-throughput screened by monitoring the change in the fluorescence of DNA-CuNPs. Thirteen traditional chinese medicines were screened. Circular dichroism (CD) measurements demonstrated that the selected ligands could induce single-stranded telomeric DNA to form G-quadruplex. The telomere repeat amplification protocol (TRAP) assay demonstrated that the selected ligands can effectively inhibit telomerase activity. Therefore, it offers a cost-effective, label-free and reliable high-throughput way to identify G-quadruplex ligands, which holds great potential in discovering telomerase-targeted anticancer drugs. Copyright Â© 2016 Elsevier B.V. All rights reserved.
Himbergen, H.M.P. van; Nijkerk, M.D.; Jager, P.W.H. de; Hosman, T.C.; Kruit, P.
A new concept for high throughput defect detection with multiple parallel electron beams is described. As many as 30 000 beams can be placed on a footprint of a in.2, each beam having its own microcolumn and detection system without cross-talk. Based on the International Technology Roadmap for
Geertsma, Eric R.; Poolman, Bert
We developed a generic method for high-throughput cloning in bacteria that are less amenable to conventional DNA manipulations. The method involves ligation-independent cloning in an intermediary Escherichia coli vector, which is rapidly converted via vector-backbone exchange (VBEx) into an
High-throughput screening (HTS) studies are providing a rich source of data that can be applied to chemical profiling to address sensitivity and specificity of molecular targets, biological pathways, cellular and developmental processes. EPA’s ToxCast project is testing 960 uniq...
High-throughput screening (HTS) studies are providing a rich source of data that can be applied to profile thousands of chemical compounds for biological activity and potential toxicity. EPA’s ToxCast™ project, and the broader Tox21 consortium, in addition to projects worldwide,...
Ye, Fei; Samuels, David C; Clark, Travis; Guo, Yan
Next-generation sequencing, also known as high-throughput sequencing, has greatly enhanced researchers' ability to conduct biomedical research on all levels. Mitochondrial research has also benefitted greatly from high-throughput sequencing; sequencing technology now allows for screening of all 16,569 base pairs of the mitochondrial genome simultaneously for SNPs and low level heteroplasmy and, in some cases, the estimation of mitochondrial DNA copy number. It is important to realize the full potential of high-throughput sequencing for the advancement of mitochondrial research. To this end, we review how high-throughput sequencing has impacted mitochondrial research in the categories of SNPs, low level heteroplasmy, copy number, and structural variants. We also discuss the different types of mitochondrial DNA sequencing and their pros and cons. Based on previous studies conducted by various groups, we provide strategies for processing mitochondrial DNA sequencing data, including assembly, variant calling, and quality control. Copyright © 2014 Elsevier B.V. and Mitochondria Research Society. All rights reserved.
de Jong, R.N.; Daniëls, M.; Kaptein, R.|info:eu-repo/dai/nl/074334603; Folkers, G.E.|info:eu-repo/dai/nl/162277202
Structural and functional genomics initiatives significantly improved cloning methods over the past few years. Although recombinational cloning is highly efficient, its costs urged us to search for an alternative high throughput (HTP) cloning method. We implemented a modified Enzyme Free Cloning
High-throughput metabolomic assays that allow simultaneous targeted screening of hundreds of metabolites have recently become available in kit form. Such assays provide a window into understanding changes to biochemical pathways due to chemical exposure or disease, and are usefu...
The main topic of this thesis is the investigation of the synergies between High-Throughput Experimentation (HTE) and Chemometric Optimization methodologies in Catalysis research and of the use of such methodologies to maximize the advantages of using HTE methods. Several case studies were analysed
De Masi, Federico; Chiarella, P.; Wilhelm, H.
Recent advances in proteomics research underscore the increasing need for high-affinity monoclonal antibodies, which are still generated with lengthy, low-throughput antibody production techniques. Here we present a semi-automated, high-throughput method of hybridoma generation and identification...
Nierychlo, Marta; Larsen, Poul; Jørgensen, Mads Koustrup
S rRNA gene amplicon sequencing has been developed over the past few years and is now ready to use for more comprehensive studies related to plant operation and optimization thanks to short analysis time, low cost, high throughput, and high taxonomic resolution. In this study we show how 16S r...
Full Text Available Plasmids, extrachromosomal DNA, were identified in bacteria pertaining to family of Enterobacteriacae for the very first time. After that, they were discovered in almost every single observed strain. The structure of plasmids is made of circular double chain DNA molecules which are replicated autonomously in a host cell. Their length may vary from few up to several hundred kilobase (kb. Among the bacteria, plasmids are mostly transferred horizontally by conjugation process. Plasmid replication process can be divided into three stages: initiation, elongation, and termination. The process involves DNA helicase I, DNA gyrase, DNA polymerase III, endonuclease, and ligase.Plasmids contain genes essential for plasmid function and their preservation in a host cell (the beginning and the control of replication. Some of them possess genes whichcontrol plasmid stability. There is a common opinion that plasmids are unnecessary fora growth of bacterial population and their vital functions; thus, in many cases they can be taken up or kicked out with no lethal effects to a plasmid host cell. However,there are numerous biological functions of bacteria related to plasmids. Plasmids identification and classification are based upon their genetic features which are presented permanently in all of them, and these are: abilities to preserve themselves in a host cell and to control a replication process. In this way, plasmids classification among incompatibility groups is performed. The method of replicon typing, which is based on genotype and not on phenotype characteristics, has the same results as in compatibility grouping.
Kuksa, Pavel P; Leung, Yuk Yee; Vandivier, Lee E; Anderson, Zachary; Gregory, Brian D; Wang, Li-San
RNA molecules are often altered post-transcriptionally by the covalent modification of their nucleotides. These modifications are known to modulate the structure, function, and activity of RNAs. When reverse transcribed into cDNA during RNA sequencing library preparation, atypical (modified) ribonucleotides that affect Watson-Crick base pairing will interfere with reverse transcriptase (RT), resulting in cDNA products with mis-incorporated bases or prematurely terminated RNA products. These interactions with RT can therefore be inferred from mismatch patterns in the sequencing reads, and are distinguishable from simple base-calling errors, single-nucleotide polymorphisms (SNPs), or RNA editing sites. Here, we describe a computational protocol for the in silico identification of modified ribonucleotides from RT-based RNA-seq read-out using the High-throughput Analysis of Modified Ribonucleotides (HAMR) software. HAMR can identify these modifications transcriptome-wide with single nucleotide resolution, and also differentiate between different types of modifications to predict modification identity. Researchers can use HAMR to identify and characterize RNA modifications using RNA-seq data from a variety of common RT-based sequencing protocols such as Poly(A), total RNA-seq, and small RNA-seq.
Shapland, Elaine B; Holmes, Victor; Reeves, Christopher D; Sorokin, Elena; Durot, Maxime; Platt, Darren; Allen, Christopher; Dean, Jed; Serber, Zach; Newman, Jack; Chandran, Sunil
In recent years, next-generation sequencing (NGS) technology has greatly reduced the cost of sequencing whole genomes, whereas the cost of sequence verification of plasmids via Sanger sequencing has remained high. Consequently, industrial-scale strain engineers either limit the number of designs or take short cuts in quality control. Here, we show that over 4000 plasmids can be completely sequenced in one Illumina MiSeq run for less than $3 each (15× coverage), which is a 20-fold reduction over using Sanger sequencing (2× coverage). We reduced the volume of the Nextera tagmentation reaction by 100-fold and developed an automated workflow to prepare thousands of samples for sequencing. We also developed software to track the samples and associated sequence data and to rapidly identify correctly assembled constructs having the fewest defects. As DNA synthesis and assembly become a centralized commodity, this NGS quality control (QC) process will be essential to groups operating high-throughput pipelines for DNA construction.
Full Text Available Abstract Background The recent development within high-throughput technologies for expression profiling has allowed for parallel analysis of transcriptomes and proteomes in biological systems such as comparative analysis of transcript and protein levels of tissue regulated genes. Until now, such studies of have only included microarray or short length sequence tags for transcript profiling. Furthermore, most comparisons of transcript and protein levels have been based on absolute expression values from within the same tissue and not relative expression values based on tissue ratios. Results Presented here is a novel study of two porcine tissues based on integrative analysis of data from expression profiling of identical samples using cDNA microarray, 454-sequencing and iTRAQ-based proteomics. Sequence homology identified 2.541 unique transcripts that are detectable by both microarray hybridizations and 454-sequencing of 1.2 million cDNA tags. Both transcript-based technologies showed high reproducibility between sample replicates of the same tissue, but the correlation across these two technologies was modest. Thousands of genes being differentially expressed were identified with microarray. Out of the 306 differentially expressed genes, identified by 454-sequencing, 198 (65% were also found by microarray. The relationship between the regulation of transcript and protein levels was analyzed by integrating iTRAQ-based proteomics data. Protein expression ratios were determined for 354 genes, of which 148 could be mapped to both microarray and 454-sequencing data. A comparison of the expression ratios from the three technologies revealed that differences in transcript and protein levels across heart and muscle tissues are positively correlated. Conclusion We show that the reproducibility within cDNA microarray and 454-sequencing is high, but that the agreement across these two technologies is modest. We demonstrate that the regulation of transcript
Gregoire, John M.; Xiang, Chengxiang; Liu, Xiaonao; Marcin, Martin; Jin, Jian
High throughput electrochemical techniques are widely applied in material discovery and optimization. For many applications, the most desirable electrochemical characterization requires a three-electrode cell under potentiostat control. In high throughput screening, a material library is explored by either employing an array of such cells, or rastering a single cell over the library. To attain this latter capability with unprecedented throughput, we have developed a highly integrated, compact scanning droplet cell that is optimized for rapid electrochemical and photoeletrochemical measurements. Using this cell, we screened a quaternary oxide library as (photo)electrocatalysts for the oxygen evolution (water splitting) reaction. High quality electrochemical measurements were carried out and key electrocatalytic properties were identified for each of 5456 samples with a throughput of 4 s per sample.
Shi-Gang, Ling; Jian, Gao; Rui-Juan, Xiao; Li-Quan, Chen
The rapid evolution of high-throughput theoretical design schemes to discover new lithium battery materials is reviewed, including high-capacity cathodes, low-strain cathodes, anodes, solid state electrolytes, and electrolyte additives. With the development of efficient theoretical methods and inexpensive computers, high-throughput theoretical calculations have played an increasingly important role in the discovery of new materials. With the help of automatic simulation flow, many types of materials can be screened, optimized and designed from a structural database according to specific search criteria. In advanced cell technology, new materials for next generation lithium batteries are of great significance to achieve performance, and some representative criteria are: higher energy density, better safety, and faster charge/discharge speed. Project supported by the National Natural Science Foundation of China (Grant Nos. 11234013 and 51172274) and the National High Technology Research and Development Program of China (Grant No. 2015AA034201).
Tanackovic, Vanja; Rydahl, Maja Gro; Pedersen, Henriette Lodberg; Motawia, Mohammed Saddik; Shaik, Shahnoor Sultana; Mikkelsen, Maria Dalgaard; Krunic, Susanne Langgaard; Fangel, Jonatan Ulrik; Willats, William George Tycho; Blennow, Andreas
In this study we introduce the starch-recognising carbohydrate binding module family 20 (CBM20) from Aspergillus niger for screening biological variations in starch molecular structure using high throughput carbohydrate microarray technology. Defined linear, branched and phosphorylated maltooligosaccharides, pure starch samples including a variety of different structures with variations in the amylopectin branching pattern, amylose content and phosphate content, enzymatically modified starches and glycogen were included. Using this technique, different important structures, including amylose content and branching degrees could be differentiated in a high throughput fashion. The screening method was validated using transgenic barley grain analysed during development and subjected to germination. Typically, extreme branching or linearity were detected less than normal starch structures. The method offers the potential for rapidly analysing resistant and slowly digested dietary starches.
Otis, Richard A.; Liu, Zi-Kui
One foundational component of the integrated computational materials engineering (ICME) and Materials Genome Initiative is the computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts will be presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput, first-principles calculations and the CALPHAD method along with their potential propagations to downstream ICME modeling and simulations.
Volety, Kalpana K; Huyberechts, Guido P J
This Research Article presents a strategy to identify the optimum compositions in metal alloys with certain desired properties in a high-throughput screening environment, using a multiobjective optimization approach. In addition to identifying the optimum compositions in a primary screening, the strategy also points to regions of the compositional space where further exploration in a secondary screening could be carried out. The strategy for the primary screening combines two multiobjective optimization approaches, namely Pareto optimality and desirability functions. The experimental data used in the present study were collected from over 200 different compositions belonging to four different alloy systems. The metal alloys (comprising Fe, Ti, Al, Nb, Hf, Zr) were synthesized and screened using high-throughput technologies. The advantages of this approach over simpler traditional approaches, such as ranking and calculating figures of merit, are discussed.
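The Pareto-optimality half of the screening strategy can be sketched in a few lines: a composition is retained if no other composition is at least as good in every objective and strictly better in at least one. This is an illustrative sketch, not the authors' implementation; the alloy names and objective scores below are hypothetical, and both objectives are assumed to be maximized.

```python
def dominates(a, b):
    """True if candidate a dominates b (all objectives to be maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated subset of (name, objectives) pairs."""
    return [
        (name, objs) for name, objs in candidates
        if not any(dominates(other, objs) for _, other in candidates if other != objs)
    ]

# Hypothetical compositions scored on two objectives, e.g. (strength, corrosion resistance).
alloys = [("A", (0.9, 0.2)), ("B", (0.5, 0.5)), ("C", (0.4, 0.4)), ("D", (0.1, 0.95))]
front = pareto_front(alloys)
print([name for name, _ in front])
```

In the full strategy the surviving front would then be ranked with desirability functions, which map each objective onto a common 0-1 scale before aggregation.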
Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique
A high-throughput multiplex assay for the detection of genetically modified organisms (GMOs) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") covering taxon-specific endogenous reference genes, GMO constructs, screening targets, construct-specific and event-specific targets, and donor organisms. The assay avoids certain shortcomings of the multiplex PCR-based methods already in widespread use for GMO detection, and demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.
Park, Chan Young; Tambe, Dhananjay; Chen, Bohao; Lavoie, Tera; Dowell, Maria; Simeonov, Anton; Maloney, David J; Marinkovic, Aleksandar; Tschumperlin, Daniel J; Burger, Stephanie; Frykenberg, Matthew; Butler, James P; Stamer, W Daniel; Johnson, Mark; Solway, Julian; Fredberg, Jeffrey J; Krishnan, Ramaswamy
When cellular contractile forces are central to pathophysiology, these forces comprise a logical target of therapy. Nevertheless, existing high-throughput screens are limited to upstream signaling intermediates with poorly defined relationship to such a physiological endpoint. Using cellular force as the target, here we screened libraries to identify novel drug candidates in the case of human airway smooth muscle cells in the context of asthma, and also in the case of Schlemm's canal endothelial cells in the context of glaucoma. This approach identified several drug candidates for both asthma and glaucoma. We attained rates of 1000 compounds per screening day, thus establishing a force-based cellular platform for high-throughput drug discovery.
Compton, JL; Luo, JC; Ma, H.; Botvinick, E; Venugopalan, V
We introduce an optical platform for rapid, high-throughput screening of exogenous molecules that affect cellular mechanotransduction. Our method initiates mechanotransduction in adherent cells using single laser-microbeam generated microcavitation bubbles without requiring flow chambers or microfluidics. These microcavitation bubbles expose adherent cells to a microtsunami, a transient microscale burst of hydrodynamic shear stress, which stimulates cells over areas approaching 1 mm2. We demo...
The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.
Schatz, Michael C; Trapnell, Cole; Delcher, Arthur L; Varshney, Amitabh
The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.
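The input/output contract of the exact-matching kernel — every position at which each query occurs exactly in the reference — can be illustrated without a suffix tree or a GPU. The sketch below uses plain substring search as a CPU stand-in; it is not MUMmerGPU code, only a model of the same behaviour.

```python
# Toy stand-in for the exact-matching kernel: report every position at which
# each query occurs exactly in the reference. MUMmerGPU computes this with a
# suffix tree on the GPU; str.find here sketches the same contract on the CPU.

def exact_matches(reference, queries):
    hits = {}
    for q in queries:
        positions, start = [], reference.find(q)
        while start != -1:
            positions.append(start)
            start = reference.find(q, start + 1)  # step by 1 to allow overlaps
        hits[q] = positions
    return hits

ref = "ACGTACGTGACG"
print(exact_matches(ref, ["ACG", "GTG"]))
```

A suffix tree answers each such query in time proportional to the query length rather than the reference length, which is what makes parallelizing over many independent queries on the GPU worthwhile.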
Klesmith, Justin R; Whitehead, Timothy A
A central challenge in the field of metabolic engineering is the efficient identification of a metabolic pathway genotype that maximizes specific productivity over a robust range of process conditions. Here we review current methods for optimizing specific productivity of metabolic pathways in living cells. New tools for library generation, computational analysis of pathway sequence-flux space, and high-throughput screening and selection techniques are discussed.
Curtarolo, Stefano; Hart, Gus L W; Nardelli, Marco Buongiorno; Mingo, Natalio; Sanvito, Stefano; Levy, Ohad
High-throughput computational materials design is an emerging area of materials science. By combining advanced thermodynamic and electronic-structure methods with intelligent data mining and database construction, and exploiting the power of current supercomputer architectures, scientists generate, manage and analyse enormous data repositories for the discovery of novel materials. In this Review we provide a current snapshot of this rapidly evolving field, and highlight the challenges and opportunities that lie ahead.
Goecks, Jeremy; Eberhard, Carl; Too, Tomithy; Nekrutenko, Anton; Taylor, James
Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, numerous challenges arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy. Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics.
Budowle, Bruce; Connell, Nancy D.; Bielecka-Oder, Anna; Rita R Colwell; Corbett, Cindi R.; Fletcher, Jacqueline; Forsman, Mats; Kadavy, Dana R; Markotic, Alemka; Morse, Stephen A.; Murch, Randall S; Sajantila, Antti; Schemes, Sarah E; Ternus, Krista L; Turner, Stephen D
High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results a...
In recent years, the food industry has made progress in improving safety testing methods focused on microbial contaminants in order to promote food safety. However, food industry toxicologists must also assess the safety of food-relevant chemicals including pesticides, direct additives, and food contact substances. With the rapidly growing use of new food additives, as well as innovation in food contact substance development, an interest in exploring the use of high-throughput chemical safety testing approaches has emerged. Currently, the field of toxicology is undergoing a paradigm shift in how chemical hazards can be evaluated. Since there are tens of thousands of chemicals in use, many of which have little to no hazard information and there are limited resources (namely time and money) for testing these chemicals, it is necessary to prioritize which chemicals require further safety testing to better protect human health. Advances in biochemistry and computational toxicology have paved the way for animal-free (in vitro) high-throughput screening which can characterize chemical interactions with highly specific biological processes. Screening approaches are not novel; in fact, quantitative high-throughput screening (qHTS) methods that incorporate dose-response evaluation have been widely used in the pharmaceutical industry. For toxicological evaluation and prioritization, it is the throughput as well as the cost- and time-efficient nature of qHTS that makes it
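The dose-response evaluation that distinguishes qHTS from single-concentration screening is commonly modelled with a Hill (four-parameter logistic) curve. The sketch below evaluates such a curve; the parameter values are hypothetical and purely illustrative, not drawn from any specific assay.

```python
# qHTS scores each compound by a full concentration-response curve rather than
# a single-dose readout. The Hill equation is one standard model for that curve;
# the parameters used below are hypothetical, for illustration only.

def hill(conc, bottom, top, ac50, slope):
    """Response at a given concentration (conc and ac50 in the same units)."""
    return bottom + (top - bottom) / (1.0 + (ac50 / conc) ** slope)

# At conc == ac50 the response sits exactly halfway between bottom and top,
# regardless of the slope parameter.
print(hill(1.0, 0.0, 100.0, 1.0, 1.2))
```

Fitting this model across a concentration series yields the AC50 and efficacy values that qHTS uses to prioritize compounds for follow-up testing.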
Herskovic, Jorge R; Subramanian, Devika; Cohen, Trevor; Bozzo-Silva, Pamela A; Bearden, Charles F; Bernstam, Elmer V
Electronic Health Records aggregated in Clinical Data Warehouses (CDWs) promise to revolutionize Comparative Effectiveness Research and suggest new avenues of research. However, the effectiveness of CDWs is diminished by the lack of properly labeled data. We present a novel approach that integrates knowledge from the CDW, the biomedical literature, and the Unified Medical Language System (UMLS) to perform high-throughput phenotyping. In this paper, we automatically construct a graphical knowledge model and then use it to phenotype breast cancer patients. We compare the performance of this approach to using MetaMap when labeling records. MetaMap's overall accuracy at identifying breast cancer patients was 51.1% (n=428); recall=85.4%, precision=26.2%, and F1=40.1%. Our unsupervised graph-based high-throughput phenotyping had an accuracy of 84.1%; recall=46.3%, precision=61.2%, and F1=52.8%. We conclude that our approach is a promising alternative for unsupervised high-throughput phenotyping.
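The reported F1 scores are the harmonic mean of precision and recall, so they can be recomputed from the abstract's own figures as a quick consistency check (agreement is to within rounding of the reported precision and recall values).

```python
# F1 is the harmonic mean of precision and recall; recomputing it from the
# reported figures confirms the abstract's numbers to within rounding.

def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(100 * f1(0.262, 0.854), 1))  # MetaMap baseline
print(round(100 * f1(0.612, 0.463), 1))  # graph-based phenotyping
```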
Background: Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results: We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions: Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.
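The split-and-submit pattern described above can be sketched generically: divide a task list into chunks, run the chunks on a pool, and merge the outputs. In this sketch a thread pool stands in for a Condor pool, and simulate_chunk is a hypothetical placeholder for a COPASI model-analysis task, not COPASI's API.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(param_values):
    # Hypothetical stand-in for a COPASI simulation/analysis task.
    return [v * v for v in param_values]

def split(values, n_chunks):
    """Split a task list into n_chunks roughly equal, contiguous parts."""
    k, m = divmod(len(values), n_chunks)
    chunks, i = [], 0
    for c in range(n_chunks):
        size = k + (1 if c < m else 0)
        chunks.append(values[i:i + size])
        i += size
    return chunks

params = list(range(10))
# A thread pool stands in for the Condor pool; each chunk runs independently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate_chunk, split(params, 4)))
merged = [r for chunk in results for r in chunk]  # reassemble in task order
print(merged)
```

The transparency the abstract describes amounts to exactly this: the user submits one logical task, and the splitting, scheduling, and merging happen behind the web interface.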
Chance, Mark R; Fiser, Andras; Sali, Andrej; Pieper, Ursula; Eswar, Narayanan; Xu, Guiping; Fajardo, J Eduardo; Radhakannan, Thirumuruhan; Marinkovic, Nebojsa
Structural genomics has as its goal the provision of structural information for all possible ORF sequences through a combination of experimental and computational approaches. The access to genome sequences and cloning resources from an ever-widening array of organisms is driving high-throughput structural studies by the New York Structural Genomics Research Consortium. In this report, we outline the progress of the Consortium in establishing its pipeline for structural genomics, and some of the experimental and bioinformatics efforts leading to structural annotation of proteins. The Consortium has established a pipeline for structural biology studies, automated modeling of ORF sequences using solved (template) structures, and a novel high-throughput approach (metallomics) to examining the metal binding of purified protein targets. The Consortium has so far produced 493 purified proteins from >1077 expression vectors. A total of 95 have resulted in crystal structures, and 81 are deposited in the Protein Data Bank (PDB). Comparative modeling of these structures has generated >40,000 structural models. We also initiated a high-throughput metal analysis of the purified proteins; this has determined that 10%-15% of the targets contain a stoichiometric structural or catalytic transition metal atom. The progress of the structural genomics centers in the U.S. and around the world suggests that the goal of providing useful structural information on nearly all ORF domains will be realized. This projected resource will provide structural biology information important to understanding the function of most proteins of the cell.
Visser, Daniel F
...Frames from Metagenomic Libraries Generated from Extremophilic Organisms: Application of Metagenomics and High Throughput Screening for Novel Enzyme Isolation. Fritha Hennessy, Daniel F. Visser, Varsha P. Chhiba, Francisco Lakay, Esta van Heerden. Libraries were screened for enzyme activities, in particular proteases and lipases. Plasmid DNA was isolated from resultant hits, then sequenced and analysed using BLAST. Conclusions: high variation in hits; duplicate results; smaller inserts still gave activity despite small ORFs; weak correlation...
Matthijs Rudolf Albert Welkers
High-throughput sequencing (HTS) of viral samples provides important information on the presence of viral minority variants. However, detection and accurate quantification are limited by the capacity to distinguish biological from artificial variation. In this study, errors related to the Illumina HiSeq2000 library generation and HTS process were investigated by determining minority variant frequencies in an influenza A/WSN/1933(H1N1) virus reverse-genetics plasmid pool. Errors related to amplification and sequencing were determined using the same plasmid pool, by generation of infectious virus using reverse genetics followed by duplicate reverse-transcriptase PCR (RT-PCR) amplification and HTS in the same sequence run. Results showed that after 'best practice' quality control (QC), within the plasmid pool, one minority variant with a frequency >0.5% was identified, while 84 and 139 were identified in the RT-PCR amplified samples, indicating that RT-PCR amplification artificially increased variation. Detailed analysis showed that artifactual minority variants could be identified by two major technical characteristics: their predominant presence in a single read orientation and an uneven distribution of mismatches over the length of the reads. We demonstrate that with the addition of two QC steps, 95% of the artifactual minority variants could be identified. When our analysis approach was applied to 3 clinical samples, 68% of the initially identified minority variants were identified as artifacts. Our study clearly demonstrated that, without additional QC steps, overestimation of viral minority variants is very likely to occur, mainly as a consequence of the required RT-PCR amplification step. The improved ability to detect and correct for artifactual minority variants increases data resolution and could aid both past and future studies incorporating HTS. The source code has been made available through SourceForge (https://sourceforge.net/projects/mva-ngs).
U.S. Department of Health and Human Services, Food and Drug Administration. Notice: "Ultra High Throughput Sequencing for Clinical Diagnostic Applications--Approaches To Assess Analytical Validity." The notice announces a public discussion of approaches to assess the analytical validity of ultra high throughput sequencing for clinical diagnostic applications.
Tillich, Ulrich M; Wolter, Nick; Schulze, Katja; Kramer, Dan; Brödel, Oliver; Frohme, Marcus
High-throughput cultivation and screening methods allow a parallel, miniaturized and cost efficient processing of many samples. These methods however, have not been generally established for phototrophic organisms such as microalgae or cyanobacteria. In this work we describe and test high-throughput methods with the model organism Synechocystis sp. PCC6803. The required technical automation for these processes was achieved with a Tecan Freedom Evo 200 pipetting robot. The cultivation was performed in 2.2 ml deepwell microtiter plates within a cultivation chamber outfitted with programmable shaking conditions, variable illumination, variable temperature, and an adjustable CO2 atmosphere. Each microtiter-well within the chamber functions as a separate cultivation vessel with reproducible conditions. The automated measurement of various parameters such as growth, full absorption spectrum, chlorophyll concentration, MALDI-TOF-MS, as well as a novel vitality measurement protocol, have already been established and can be monitored during cultivation. Measurement of growth parameters can be used as inputs for the system to allow for periodic automatic dilutions and therefore a semi-continuous cultivation of hundreds of cultures in parallel. The system also allows the automatic generation of mid and long term backups of cultures to repeat experiments or to retrieve strains of interest. The presented platform allows for high-throughput cultivation and screening of Synechocystis sp. PCC6803. The platform should be usable for many phototrophic microorganisms as is, and be adaptable for even more. A variety of analyses are already established and the platform is easily expandable both in quality, i.e. with further parameters to screen for additional targets and in quantity, i.e. size or number of processed samples.
Bulaevskaya, V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sales, A. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
The need for adaptive sampling arises in the context of high throughput data because the rates of data arrival are many orders of magnitude larger than the rates at which they can be analyzed. A very fast decision must therefore be made regarding the value of each incoming observation and its inclusion in the analysis. In this report we discuss one approach to adaptive sampling, based on the new data point’s similarity to the other data points being considered for inclusion. We present preliminary results for one real and one synthetic data set.
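One simple realization of the similarity criterion is a distance-to-retained-set filter: an incoming observation is kept only if it lies far enough, here in Euclidean distance, from everything already retained. This is an illustrative sketch of the general idea, not the report's algorithm; the threshold and data points are assumptions.

```python
import math

def adaptive_sample(stream, threshold):
    """Keep each incoming point only if it is at least `threshold` away
    (Euclidean distance) from every point retained so far."""
    retained = []
    for point in stream:
        if all(math.dist(point, kept) >= threshold for kept in retained):
            retained.append(point)
    return retained

# Near-duplicate points are dropped; genuinely novel points are kept.
stream = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (1.05, 1.0), (3.0, 0.0)]
print(adaptive_sample(stream, 0.5))
```

The per-point cost here grows with the retained set, which is why practical high-rate implementations would pair this rule with an approximate nearest-neighbour index or a bounded reservoir.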
Poisot, Timothée; Péquin, Bérangère; Gravel, Dominique
High-throughput sequencing is becoming increasingly important in microbial ecology, yet it is surprisingly under-used to generate or test biogeographic hypotheses. In this contribution, we highlight how adding these methods to the ecologist toolbox will allow the detection of new patterns, and will help our understanding of the structure and dynamics of diversity. Starting with a review of ecological questions that can be addressed, we move on to the technical and analytical issues that will benefit from an increased collaboration between different disciplines.
Sankaran, Sindhuja; Khot, Lav R.; Quirós, Juan; Vandemark, George J.; McGee, Rebecca J.
In plant breeding, one of the biggest obstacles to genetic improvement is the lack of proven rapid methods for measuring plant responses under field conditions. Therefore, the major objective of this research was to evaluate the feasibility of utilizing high-throughput remote sensing technology for rapid measurement of phenotyping traits in legume crops. The responses of several chickpea and pea varieties to the environment were assessed with an unmanned aerial vehicle (UAV) integrated with multispectral imaging sensors. Our preliminary assessment showed that the vegetation indices were strongly correlated with the phenotyping traits.
Picardi, Ernesto; Pesole, Graziano
The reliable detection of RNA editing sites from massive sequencing data remains challenging and, although several methodologies have been proposed, no computational tools have been released to date. Here, we introduce REDItools, a suite of Python scripts to perform high-throughput investigation of RNA editing using next-generation sequencing data. REDItools are written in the Python programming language and freely available at http://code.google.com/p/reditools/. Supplementary data are available at Bioinformatics online.
Mancia, Filippo; Love, James
Structural genomics approaches for integral membrane proteins have been postulated for over a decade, yet specific efforts lag years behind their soluble counterparts. Indeed, high-throughput methodologies for production and characterization of prokaryotic integral membrane proteins are only now emerging, while large-scale efforts for eukaryotic ones are still in their infancy. Presented here is a review of recent literature on actively ongoing structural genomics initiatives for membrane proteins, with a focus on those implementing techniques intended to increase the rate of success for this class of macromolecules. Copyright © 2011 Elsevier Ltd. All rights reserved.
Engmark, Mikael; De Masi, Federico; Laustsen, Andreas Hougaard
Insight into the epitopic recognition pattern of polyclonal antivenoms is a strong tool for accurate prediction of antivenom cross-reactivity and provides a basis for the design of novel antivenoms. In this work, a high-throughput approach was applied to characterize linear epitopes in 966 individual toxins from pit vipers (Crotalidae) using the ICP Crotalidae antivenom. Due to the abundance of snake venom metalloproteinases and phospholipase A2s in the venoms used for production of the investigated antivenom, this study focuses on these toxin families.
Barsdell, Ben; Price, Daniel; Cranmer, Miles; Garsden, Hugh; Dowell, Jayce
Bifrost is a stream processing framework that eases the development of high-throughput processing CPU/GPU pipelines. It is designed for digital signal processing (DSP) applications within radio astronomy. Bifrost uses a flexible ring buffer implementation that allows different signal processing blocks to be connected to form a pipeline. Each block may be assigned to a CPU core, and the ring buffers are used to transport data to and from blocks. Processing blocks may be run on either the CPU or GPU, and the ring buffer will take care of memory copies between the CPU and GPU spaces.
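The block-and-ring model described above can be sketched in plain Python: concurrent processing blocks exchange data through bounded buffers. Here queue.Queue stands in for Bifrost's ring buffers and threads stand in for blocks pinned to CPU cores; the block functions are illustrative, not Bifrost's API.

```python
import queue
import threading

STOP = object()  # sentinel marking end-of-stream

def source(out_ring):
    """Produce samples into the first ring."""
    for sample in range(5):
        out_ring.put(sample)
    out_ring.put(STOP)

def scale(in_ring, out_ring, factor):
    """A processing block: read from one ring, write to the next."""
    while (item := in_ring.get()) is not STOP:
        out_ring.put(item * factor)
    out_ring.put(STOP)

def sink(in_ring, results):
    """Consume the final ring."""
    while (item := in_ring.get()) is not STOP:
        results.append(item)

# Bounded queues model ring buffers: a full ring back-pressures its producer.
ring_a, ring_b = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
results = []
blocks = [
    threading.Thread(target=source, args=(ring_a,)),
    threading.Thread(target=scale, args=(ring_a, ring_b, 10)),
    threading.Thread(target=sink, args=(ring_b, results)),
]
for b in blocks:
    b.start()
for b in blocks:
    b.join()
print(results)
```

In Bifrost the analogous ring buffers additionally manage CPU/GPU memory spaces, so a block running on the GPU can read from a ring whose producer ran on the CPU without explicit copy calls in user code.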
Huang, G M
The progress trends in automated DNA sequencing operation are reviewed. Technological development in sequencing instruments, enzymatic chemistry and robotic stations has resulted in ever-increasing capacity of sequence data production. This progress leads to a higher demand on laboratory information management and data quality assessment. High-throughput laboratories face the challenge of organizational management, as well as technology management. Engineering principles of process control should be adopted in this biological data manufacturing procedure. While various systems attempt to provide solutions to automate different parts of, or even the entire process, new technical advances will continue to change the paradigm and provide new challenges.
This paper reviews high-throughput screening enzyme assays developed in our laboratory over the last ten years. These enzyme assays were initially developed for the purpose of discovering catalytic antibodies by screening cell culture supernatants, but have proved generally useful for testing enzyme activities. Examples include TLC-based screening using acridone-labeled substrates, fluorogenic assays based on the β-elimination of umbelliferone or nitrophenol, and indirect assays such as the back-titration method with adrenaline and the copper-calcein fluorescence assay for amino acids.
Sinclair, Michael B.; Cowie, Jim R. (New Mexico State University, Las Cruces, NM); Van Benthem, Mark Hilary; Wylie, Brian Neil; Davidson, George S.; Haaland, David Michael; Timlin, Jerilyn Ann; Aragon, Anthony D. (University of New Mexico, Albuquerque, NM); Keenan, Michael Robert; Boyack, Kevin W.; Thomas, Edward Victor; Werner-Washburne, Margaret C. (University of New Mexico, Albuquerque, NM); Mosquera-Caro, Monica P. (University of New Mexico, Albuquerque, NM); Martinez, M. Juanita (University of New Mexico, Albuquerque, NM); Martin, Shawn Bryan; Willman, Cheryl L. (University of New Mexico, Albuquerque, NM)
High throughput instruments and analysis techniques are required in order to make good use of the genomic sequences that have recently become available for many species, including humans. These instruments and methods must work with tens of thousands of genes simultaneously, and must be able to identify the small subsets of those genes that are implicated in the observed phenotypes, or, for instance, in responses to therapies. Microarrays represent one such high throughput method, which continue to find increasingly broad application. This project has improved microarray technology in several important areas. First, we developed the hyperspectral scanner, which has discovered and diagnosed numerous flaws in techniques broadly employed by microarray researchers. Second, we used a series of statistically designed experiments to identify and correct errors in our microarray data to dramatically improve the accuracy, precision, and repeatability of the microarray gene expression data. Third, our research developed new informatics techniques to identify genes with significantly different expression levels. Finally, natural language processing techniques were applied to improve our ability to make use of online literature annotating the important genes. In combination, this research has improved the reliability and precision of laboratory methods and instruments, while also enabling substantially faster analysis and discovery.
Reichelt, Wieland N; Kaineder, Andreas; Brillmann, Markus; Neutsch, Lukas; Taschauer, Alexander; Lohninger, Hans; Herwig, Christoph
The expression of pharmaceutically relevant proteins in Escherichia coli frequently triggers inclusion body (IB) formation caused by protein aggregation. In the scientific literature, substantial effort has been devoted to the quantification of IB size. However, the particle-based methods used up to this point to analyze the physical properties of representative numbers of IBs lack sensitivity and/or orthogonal verification. Using high pressure freezing and automated freeze substitution for transmission electron microscopy (TEM), the cytosolic inclusion body structure was preserved within the cells. TEM imaging in combination with manual grey scale image segmentation allowed the quantification of the relative area covered by the inclusion body within the cytosol. As a high-throughput method, nanoparticle tracking analysis (NTA) enables one to derive the diameter of inclusion bodies in cell homogenate from a measurement of their Brownian motion. The NTA analysis of fixed (glutaraldehyde) and non-fixed IBs suggests that high pressure homogenization annihilates the native physiological shape of IBs. Nevertheless, the ratio of particle counts of non-fixed and fixed samples could potentially serve as a factor for particle stickiness. In this contribution, we establish image segmentation of TEM pictures as an orthogonal method to size biological particles in the cytosol of cells. More importantly, NTA has been established as a particle-based, fast and high-throughput method (1000-3000 particles), thus constituting a much more accurate and representative analysis than currently available methods. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
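NTA sizes particles by converting their measured Brownian motion into a hydrodynamic diameter via the Stokes-Einstein relation, d = kB·T / (3π·η·D). A minimal sketch of that conversion (function name and default temperature/viscosity are illustrative assumptions, not values from the paper):

```python
import math


def hydrodynamic_diameter(msd_um2, dt_s, temp_k=296.15, viscosity_pa_s=0.932e-3):
    """Estimate hydrodynamic diameter (m) from a 2D mean-squared displacement.

    NTA tracks particles in the focal plane, so <r^2> = 4*D*dt in 2D.
    Stokes-Einstein then gives d = kB*T / (3*pi*eta*D).

    msd_um2: mean-squared displacement per frame, in um^2
    dt_s:    frame interval, in seconds
    """
    kb = 1.380649e-23  # Boltzmann constant, J/K
    d_coeff = (msd_um2 * 1e-12) / (4.0 * dt_s)  # diffusion coefficient, m^2/s
    return kb * temp_k / (3.0 * math.pi * viscosity_pa_s * d_coeff)
```

For example, a particle diffusing 0.614 um^2 per 33 ms frame in water-like buffer comes out near 100 nm.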
List, Markus; Elnegaard, Marlene Pedersen; Schmidt, Steffen
High-throughput screening (HTS) has become an indispensable tool for the pharmaceutical industry and for biomedical research. A high degree of automation allows for experiments in the range of a few hundred up to several hundred thousand to be performed in close succession. The basis for such screens are molecular libraries, that is, microtiter plates with solubilized reagents such as siRNAs, shRNAs, miRNA inhibitors or mimics, and sgRNAs, or small compounds, that is, drugs. These reagents are typically condensed to provide enough material for covering several screens. Library plates thus need to be serially diluted before they can be used as assay plates. This process, however, leads to an explosion in the number of plates and samples to be tracked. Here, we present SAVANAH, the first tool to effectively manage molecular screening libraries across dilution series. It conveniently links...
Xu, Weihong; Seok, Junhee; Mindrinos, Michael N.; Schweitzer, Anthony C.; Jiang, Hui; Wilhelmy, Julie; Clark, Tyson A.; Kapur, Karen; Xing, Yi; Faham, Malek; Storey, John D.; Moldawer, Lyle L.; Maier, Ronald V.; Tompkins, Ronald G.; Wong, Wing Hung; Davis, Ronald W.; Xiao, Wenzhong; Toner, Mehmet; Warren, H. Shaw; Schoenfeld, David A.; Rahme, Laurence; McDonald-Smith, Grace P.; Hayden, Douglas; Mason, Philip; Fagan, Shawn; Yu, Yong-Ming; Cobb, J. Perren; Remick, Daniel G.; Mannick, John A.; Lederer, James A.; Gamelli, Richard L.; Silver, Geoffrey M.; West, Michael A.; Shapiro, Michael B.; Smith, Richard; Camp, David G.; Qian, Weijun; Tibshirani, Rob; Lowry, Stephen; Calvano, Steven; Chaudry, Irshad; Cohen, Mitchell; Moore, Ernest E.; Johnson, Jeffrey; Baker, Henry V.; Efron, Philip A.; Balis, Ulysses G. J.; Billiar, Timothy R.; Ochoa, Juan B.; Sperry, Jason L.; Miller-Graziano, Carol L.; De, Asit K.; Bankey, Paul E.; Herndon, David N.; Finnerty, Celeste C.; Jeschke, Marc G.; Minei, Joseph P.; Arnoldo, Brett D.; Hunt, John L.; Horton, Jureta; Cobb, J. Perren; Brownstein, Bernard; Freeman, Bradley; Nathens, Avery B.; Cuschieri, Joseph; Gibran, Nicole; Klein, Matthew; O'Keefe, Grant
A 6.9 million-feature oligonucleotide array of the human transcriptome [Glue Grant human transcriptome (GG-H array)] has been developed for high-throughput and cost-effective analyses in clinical studies. This array allows comprehensive examination of gene expression and genome-wide identification of alternative splicing as well as detection of coding SNPs and noncoding transcripts. The performance of the array was examined and compared with mRNA sequencing (RNA-Seq) results over multiple independent replicates of liver and muscle samples. Compared with RNA-Seq of 46 million uniquely mappable reads per replicate, the GG-H array is highly reproducible in estimating gene and exon abundance. Although both platforms detect similar expression changes at the gene level, the GG-H array is more sensitive at the exon level. Deeper sequencing is required to adequately cover low-abundance transcripts. The array has been implemented in a multicenter clinical program and has generated high-quality, reproducible data. Considering the clinical trial requirements of cost, sample availability, and throughput, the GG-H array has a wide range of applications. An emerging approach for large-scale clinical genomic studies is to first use RNA-Seq to sufficient depth for the discovery of transcriptome elements relevant to the disease process, followed by high-throughput and reliable screening of these elements on thousands of patient samples using custom-designed arrays. PMID:21317363
A number of cellular proteins localize to discrete foci within cells, for example DNA repair proteins, microtubule organizing centers, P bodies or kinetochores. It is often possible to measure the fluorescence emission from tagged proteins within these foci as a surrogate for the concentration of that specific protein. We wished to develop tools that would allow quantitation of fluorescence foci intensities in high-throughput studies. As proof of principle we have examined the kinetochore, a large multi-subunit complex that is critical for the accurate segregation of chromosomes during cell division. Kinetochore perturbations lead to aneuploidy, which is a hallmark of cancer cells. Hence, understanding kinetochore homeostasis and regulation is important for a global understanding of cell division and genome integrity. The 16 budding yeast kinetochores colocalize within the nucleus to form a single focus. Here we have created a set of freely-available tools to allow high-throughput quantitation of kinetochore foci fluorescence. We use this ‘FociQuant’ tool to compare methods of kinetochore quantitation and we show proof of principle that FociQuant can be used to identify changes in kinetochore protein levels in a mutant that affects kinetochore function. This analysis can be applied to any protein that forms discrete foci in cells.
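Focus intensity quantitation of this kind typically sums the pixel values around a focus and subtracts a local background estimate taken from a surrounding annulus. The sketch below is an illustrative NumPy reimplementation of that idea, not the actual FociQuant code; the function name and radii are assumptions:

```python
import numpy as np


def focus_intensity(img, center, r_focus=3, r_bg=6):
    """Background-corrected integrated intensity of one fluorescent focus.

    Sums pixels within r_focus of the focus centre and subtracts the
    median of an annulus (r_focus < r <= r_bg) as the local background.
    img: 2D array of pixel intensities; center: (row, col) of the focus.
    """
    y, x = np.ogrid[:img.shape[0], :img.shape[1]]
    r2 = (y - center[0]) ** 2 + (x - center[1]) ** 2
    core = r2 <= r_focus ** 2
    annulus = (r2 > r_focus ** 2) & (r2 <= r_bg ** 2)
    bg_per_px = np.median(img[annulus])  # robust local background estimate
    return float(img[core].sum() - bg_per_px * core.sum())
```

Run over all segmented foci in an image stack, this yields the per-focus intensities that serve as a proxy for local protein abundance.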
Jonas Loskyll, Klaus Stoewe and Wilhelm F Maier
We review the state of the art and explain the need for better SO2 oxidation catalysts for the production of sulfuric acid. A high-throughput technology has been developed for the study of potential catalysts in the oxidation of SO2 to SO3. High-throughput methods are reviewed and the problems encountered with their adaptation to the corrosive conditions of SO2 oxidation are described. We show that while emissivity-corrected infrared thermography (ecIRT) can be used for primary screening, it is prone to errors because of the large variations in the emissivity of the catalyst surface. UV-visible (UV-Vis) spectrometry was selected instead as a reliable analysis method of monitoring the SO2 conversion. Installing plain sugar absorbents at reactor outlets proved valuable for the detection and quantitative removal of SO3 from the product gas before the UV-Vis analysis. We also overview some elements used for prescreening and those remaining after the screening of the first catalyst generations.
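When the UV-Vis absorbance at the monitored wavelength is proportional to SO2 concentration (Beer-Lambert regime), conversion can be read directly from inlet and outlet absorbances. A hedged sketch of that back-calculation (function name is hypothetical, not from the paper):

```python
def so2_conversion(abs_inlet, abs_outlet):
    """SO2 conversion from UV-Vis absorbance, assuming Beer-Lambert linearity.

    Absorbance is proportional to SO2 concentration at the monitored
    wavelength, so conversion X = 1 - c_out/c_in = 1 - A_out/A_in.
    """
    if abs_inlet <= 0:
        raise ValueError("inlet absorbance must be positive")
    return 1.0 - abs_outlet / abs_inlet
```

For a screening run, evaluating this per channel gives the relative ranking of catalysts in each library generation.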
Annala, M J; Parker, B C; Zhang, W; Nykter, M
Fusion genes are hybrid genes that combine parts of two or more original genes. They can form as a result of chromosomal rearrangements or abnormal transcription, and have been shown to act as drivers of malignant transformation and progression in many human cancers. The biological significance of fusion genes together with their specificity to cancer cells has made them into excellent targets for molecular therapy. Fusion genes are also used as diagnostic and prognostic markers to confirm cancer diagnosis and monitor response to molecular therapies. High-throughput sequencing has enabled the systematic discovery of fusion genes in a wide variety of cancer types. In this review, we describe the history of fusion genes in cancer and the ways in which fusion genes form and affect cellular function. We also describe computational methodologies for detecting fusion genes from high-throughput sequencing experiments, and the most common sources of error that lead to false discovery of fusion genes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Bellasio, Chandra; Fini, Alessio; Ferrini, Francesco
Starch is the most important long-term reserve in trees, and the analysis of starch is therefore a useful source of physiological information. Currently published protocols for wood starch analysis impose several limitations, such as long procedures and a neutralization step. The high-throughput standard protocols for starch analysis in food and feed represent a valuable alternative. However, they have not been optimised or tested with woody samples. These have particular chemical and structural characteristics, including the presence of interfering secondary metabolites, low reactivity of starch, and low starch content. In this study, a standard method for starch analysis used for food and feed (AOAC standard method 996.11) was optimised to improve precision and accuracy for the analysis of starch in wood. Key modifications were introduced in the digestion conditions and in the glucose assay. The optimised protocol was then evaluated through 430 starch analyses of standards at known starch content, matrix polysaccharides, and wood collected from three organs (roots, twigs, mature wood) of four species (coniferous and flowering plants). The optimised protocol proved to be remarkably precise and accurate (3%), suitable for high-throughput routine analysis (35 samples a day) of specimens with a starch content between 21 µg and 40 mg. Samples may include lignified organs of coniferous and flowering plants and non-lignified organs, such as leaves, fruits and rhizomes. PMID:24523863
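In AOAC-style enzymatic assays, starch is hydrolyzed to glucose and the glucose reading is converted back to starch using the anhydroglucose factor 162/180 ≈ 0.9 (each glucose unit in starch lacks one water). A simplified sketch of that back-calculation; the function signature is illustrative and the optimised protocol involves additional corrections not shown here:

```python
def starch_content(glucose_ug, sample_mg, dilution=1.0):
    """Starch content (% w/w) back-calculated from assayed glucose.

    glucose_ug: glucose measured in the digest, in micrograms
    sample_mg:  dry mass of the wood specimen, in milligrams
    dilution:   any dilution applied before the glucose assay
    """
    # 162/180 converts free glucose to anhydroglucose (the starch monomer).
    starch_ug = glucose_ug * dilution * (162.0 / 180.0)
    return 100.0 * starch_ug / (sample_mg * 1000.0)  # % of dry mass
```

For instance, 9000 µg of assayed glucose from a 10 mg specimen corresponds to 81% starch by dry weight.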
Jiang, Huawei; Xu, Zhen; Aluru, Maneesha R; Dong, Liang
We report on the development of a vertical and transparent microfluidic chip for high-throughput phenotyping of Arabidopsis thaliana plants. Multiple Arabidopsis seeds can be germinated and grown hydroponically over more than two weeks in the chip, thus enabling large-scale and quantitative monitoring of plant phenotypes. The novel vertical arrangement of this microfluidic device not only allows for normal gravitropic growth of the plants but also, more importantly, makes it convenient to continuously monitor phenotypic changes in plants at the whole organismal level, including seed germination and root and shoot growth (hypocotyls, cotyledons, and leaves), as well as at the cellular level. We also developed a hydrodynamic trapping method to automatically place single seeds into seed holding sites of the device and to avoid potential damage to seeds that might occur during manual loading. We demonstrated general utility of this microfluidic device by showing clear visible phenotypes of the immutans mutant of Arabidopsis, and we also showed changes occurring during plant-pathogen interactions at different developmental stages. Arabidopsis plants grown in the device maintained normal morphological and physiological behaviour, and distinct phenotypic variations consistent with a priori data were observed via high-resolution images taken in real time. Moreover, the timeline for different developmental stages for plants grown in this device was highly comparable to growth using a conventional agar plate method. This prototype plant chip technology is expected to lead to the establishment of a powerful experimental and cost-effective framework for high-throughput and precise plant phenotyping.
The growing need for rapid and accurate approaches for large-scale assessment of phenotypic characters in plants becomes more and more obvious in studies looking into relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands or tens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which warrants much more rapid data acquisition, higher accuracy of the assessment of phenotypic features, measurement of new parameters of these features, and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integration of genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between the genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.
Todd, Douglas W; Philip, Rohit C; Niihori, Maki; Ringle, Ryan A; Coyle, Kelsey R; Zehri, Sobia F; Zabala, Leanne; Mudery, Jordan A; Francis, Ross H; Rodriguez, Jeffrey J; Jacob, Abraham
Zebrafish animal models lend themselves to behavioral assays that can facilitate rapid screening of ototoxic, otoprotective, and otoregenerative drugs. Structurally similar to human inner ear hair cells, the mechanosensory hair cells on their lateral line allow the zebrafish to sense water flow and orient head-to-current in a behavior called rheotaxis. This rheotaxis behavior deteriorates in a dose-dependent manner with increased exposure to the ototoxin cisplatin, thereby establishing itself as an excellent biomarker for anatomic damage to lateral line hair cells. Building on work by our group and others, we have built a new, fully automated high-throughput behavioral assay system that uses automated image analysis techniques to quantify rheotaxis behavior. This novel system consists of a custom-designed swimming apparatus and imaging system consisting of network-controlled Raspberry Pi microcomputers capturing infrared video. Automated analysis techniques detect individual zebrafish, compute their orientation, and quantify the rheotaxis behavior of a zebrafish test population, producing a powerful, high-throughput behavioral assay. Using our fully automated biological assay to test a standardized ototoxic dose of cisplatin against varying doses of compounds that protect or regenerate hair cells may facilitate rapid translation of candidate drugs into preclinical mammalian models of hearing loss.
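The core of a rheotaxis score is the fraction of detected fish whose heading points into the current. A hypothetical sketch of that final scoring step (the authors' actual image analysis pipeline, which also detects fish and estimates their orientation from infrared video, is more involved):

```python
def rheotaxis_fraction(headings_deg, flow_from_deg=0.0, tol_deg=45.0):
    """Fraction of detected fish oriented into the current.

    headings_deg:  heading of each detected fish, in degrees.
    flow_from_deg: direction the current comes from; a fish performing
                   rheotaxis points toward this direction.
    tol_deg:       angular tolerance for counting a fish as oriented.
    """
    def ang_diff(a, b):
        # Smallest absolute difference between two angles, wrapped to [0, 180].
        return abs((a - b + 180.0) % 360.0 - 180.0)

    if not headings_deg:
        return 0.0
    oriented = [h for h in headings_deg if ang_diff(h, flow_from_deg) <= tol_deg]
    return len(oriented) / len(headings_deg)
```

Tracking this fraction across cisplatin doses would reproduce the dose-dependent deterioration the abstract describes.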
First principles methodologies have grown in accuracy and applicability to the point where large databases can be built, shared, and analyzed with the goal of predicting novel compositions, optimizing functional properties, and discovering unexpected relationships between the data. In order to be useful to a large community of users, data should be standardized, validated, and distributed. In addition, tools to easily manage large datasets should be made available to effectively lead to materials development. Within the AFLOW consortium we have developed a simple framework to expand, validate, and mine data repositories: the MTFrame. Our minimalistic approach complements AFLOW and other existing high-throughput infrastructures and aims to integrate data generation with data analysis. We present a few examples from our work on materials for energy conversion. Our intent is to pinpoint the usefulness of high-throughput methodologies to guide the discovery process by quantitatively structuring the scientific intuition. This work was supported by ONR-MURI under Contract N00014-13-1-0635 and the Duke University Center for Materials Genomics.
Barone, A; Di Matteo, A; Carputo, D; Frusciante, L
Tomato (Solanum lycopersicum) is considered a model plant species for a group of economically important crops, such as potato, pepper, and eggplant, since it exhibits a reduced genome size (950 Mb), a short generation time, and routine transformation technologies. Moreover, it shares with the other Solanaceous plants the same haploid chromosome number and a high level of conserved genomic organization. Finally, many genomic and genetic resources are currently available for tomato, and the sequencing of its genome is in progress. These features make tomato an ideal species for theoretical studies and practical applications in the genomics field. The present review describes how structural genomics assists the selection of new varieties resistant to pathogens that cause damage to this crop. Many molecular markers highly linked to resistance genes and cloned resistance genes are available and could be used for high-throughput screening of multiresistant varieties. Moreover, a new genomics-assisted breeding approach for improving fruit quality is presented and discussed. It relies on the identification of genetic mechanisms controlling the trait of interest through functional genomics tools. Following this approach, polymorphisms in major gene sequences responsible for variability in the expression of the trait under study are then exploited for tracking favourable allele combinations simultaneously in breeding programs using high-throughput genomic technologies. This aims at pyramiding, in the genetic background of commercial cultivars, alleles that increase their performance. In conclusion, tomato breeding strategies supported by advanced technologies are expected to target increased productivity and lower costs of improved genotypes even for complex traits.
Robinson, J Paul; Rajwa, Bartek; Patsekin, Valery; Davisson, Vincent Jo
Introduction Flow cytometry has been around for over 40 years, but only recently has the opportunity arisen to move into the high-throughput domain. The technology is now available and is highly competitive with imaging tools under the right conditions. Flow cytometry has, however, been a technology that has focused on its unique ability to study single cells, and appropriate analytical tools are readily available to handle this traditional role of the technology. Areas covered Expansion of flow cytometry to a high-throughput (HT) and high-content technology requires both advances in hardware and analytical tools. The historical perspective of flow cytometry operation is discussed, as well as how the field has changed and what the key changes have been. The authors provide a background and compelling arguments for moving toward HT flow, where there are many innovative opportunities. With alternative approaches now available for flow cytometry, there will be a considerable number of new applications. These opportunities show strong capability for drug screening and functional studies with cells in suspension. Expert opinion There is no doubt that HT flow is a rich technology awaiting acceptance by the pharmaceutical community. It can provide a powerful phenotypic analytical toolset that has the capacity to change many current approaches to HT screening. The previous restrictions on the technology, based on its reduced capacity for sample throughput, are no longer a major issue. Overcoming this barrier has transformed a mature technology into one that can focus on systems biology questions not previously considered possible. PMID:22708834
Huang, Mingtao; Joensson, Haakan N; Nielsen, Jens
Cell factory development is critically important for efficient biological production of chemicals, biofuels, and pharmaceuticals. Many rounds of the Design-Build-Test-Learn cycle may be required before an engineered strain meets the specific metrics required for industrial application. The bioindustry prefers products in secreted form (secreted products or extracellular metabolites) as this can lower the cost of downstream processing, reduce the metabolic burden on cell hosts, and allow necessary modification of the final products, such as biopharmaceuticals. Yet, products in secreted form result in the disconnection of phenotype from genotype, which limits throughput in the Test step when identifying desired variants from large libraries of mutant strains. In droplet microfluidic screening, single cells are encapsulated in individual droplets, enabling high-throughput processing and sorting of single cells or clones. Encapsulation in droplets allows this technology to overcome the throughput limitations present in traditional methods for screening by extracellular phenotypes. In this chapter, we describe a protocol/guideline for high-throughput droplet microfluidic screening of yeast libraries for higher protein secretion. This protocol can be adapted to screening by a range of other extracellular products from yeast or other hosts.
High-throughput methodologies are a very useful computational tool to explore the space of binary and ternary oxides. We use these methods to search for new and improved transparent conducting oxides (TCOs). TCOs exhibit both visible transparency and good carrier mobility and underpin many energy and electronic applications (e.g. photovoltaics, transparent transistors). We find several potential new n-type and p-type TCOs with a low effective mass. Combining different ab initio approaches, we characterize candidate oxides by their effective mass (mobility), band gap (transparency) and dopability. We present several compounds, not considered previously as TCOs, and discuss the chemical rationale for their promising properties. This analysis is useful to formulate design strategies for future high mobility oxides and has led to follow-up studies including preliminary experimental characterization of a p-type TCO candidate with unexpected chemistry. G. Hautier, A. Miglio, D. Waroquiers, G.-M. Rignanese, and X. Gonze, ``How Does Chemistry Influence Electron Effective Mass in Oxides? A High-Throughput Computational Analysis'', Chem. Mater. 26, 5447 (2014). G. Hautier, A. Miglio, G. Ceder, G.-M. Rignanese, and X. Gonze, ``Identification and design principles of low hole effective mass p-type transparent conducting oxides'', Nature Commun. 4, 2292 (2013).
Compton, Jonathan Lee
inhibitor to IP3-induced Ca2+ release. This capability opens the way to a high-throughput screening platform for molecules that modulate cellular mechanotransduction. We have applied this approach to screen the effects of a small set of small molecules in a 96-well plate in less than an hour. These detailed studies offer a basis for the design, development, and implementation of a novel assay to rapidly screen the effect of small molecules on cellular mechanotransduction at high throughput.
Pangilinan, Jasmyn; Shapiro, Harris; Tu, Hank; Platt, Darren
The DOE Joint Genome Institute has sequenced over 50 eukaryotic genomes, ranging in size from 15 MB to 1.6 GB, over a wide range of organism types. In the course of doing so, it has become clear that a substantial fraction of these data sets contains bonus organisms, usually prokaryotes, in addition to the desired genome. While some of these additional organisms are extraneous contamination, they are sometimes symbionts, and so can be of biological interest. Therefore, it is desirable to assemble the bonus organisms along with the main genome. This transforms the problem into one of metagenomic assembly, which is considerably more challenging than traditional whole-genome shotgun (WGS) assembly. The different organisms will usually be present at different sequence depths, which is difficult to handle in most WGS assemblers. In addition, with multiple distinct genomes present, chimerism can produce cross-organism combinations. Finally, there is no guarantee that only a single bonus organism will be present. For example, one JGI project contained at least two different prokaryotic contaminants, plus a 145 kb plasmid of unknown origin. We have developed techniques to routinely identify and handle such bonus organisms in a high-throughput sequencing environment. Approaches include screening and partitioning the unassembled data, and iterative subassemblies. These methods are applicable not only to bonus organisms, but also to desired components such as organelles. These procedures have the additional benefit of identifying, and allowing for the removal of, cloning artifacts such as E. coli and spurious vector inclusions.
Lu, Wenjing; Wang, Jidong; Wu, Qiong; Sun, Jiashu; Chen, Yiping; Zhang, Lu; Zheng, Chunsheng; Gao, Wenna; Liu, Yi; Jiang, Xingyu
We develop a micro-pipette tip-based nucleic acid test (MTNT) for high-throughput sample-to-answer detection of both DNA and RNA from crude samples including cells, bacteria, and solid plants, without the need for sample pretreatment or complex operation. MTNT consists of micro-pipette tips and embedded solid phase nucleic acid extraction membranes, and fully integrates the functions of nucleic acid extraction from crude samples, loop-mediated isothermal amplification (LAMP) of nucleic acids, and visual readout of assays. The total assaying time for DNA or RNA from a variety of crude samples ranges from 90 to 160 min. The limit of detection (LOD) of MTNT is 2 copies of plasmids containing the target nucleic acid fragments of Ebola virus, and 8 CFU of Escherichia coli carrying Ebola virus-derived plasmids. MTNT can also detect CK-19 mRNA from as few as 2 cancer cells without complicated procedures such as RNA extraction and purification. We further demonstrate MTNT in a high-throughput format using an eight-channel pipette and a homemade mini-heater, with a maximum throughput of 40 samples. Compared with other point-of-care (POC) nucleic acid tests (NAT), MTNT can assay both DNA and RNA directly from liquid (cells/bacteria/blood) or solid (plant) samples in a straightforward, sensitive, high-throughput, and containment-free manner, suggesting considerable promise for low-cost and POC NAT in remote areas.
Wu, Pei-Hsun; Hale, Christopher M; Chen, Wei-Chiang; Lee, Jerry S H; Tseng, Yiider; Wirtz, Denis
High-throughput ballistic injection nanorheology is a method for the quantitative study of cell mechanics. Cell mechanics are measured by ballistic injection of submicron particles into the cytoplasm of living cells and tracking the spontaneous displacement of the particles at high spatial resolution. The trajectories of the cytoplasm-embedded particles are transformed into mean-squared displacements, which are subsequently transformed into frequency-dependent viscoelastic moduli and time-dependent creep compliance of the cytoplasm. This method allows for the study of a wide range of cellular conditions, including cells inside a 3D matrix, cells subjected to shear flows and biochemical stimuli, and cells in a live animal. Ballistic injection lasts < 1 min and is followed by overnight incubation. Multiple particle tracking for one cell lasts < 1 min. Forty cells can be examined in < 1 h. PMID:22222790
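The first transformation in this pipeline, particle trajectories to mean-squared displacements, can be sketched as follows. This is an illustrative time-averaged MSD for a single 2D track, not the authors' code:

```python
import numpy as np


def mean_squared_displacement(track, max_lag=None):
    """Time-averaged MSD of one 2D particle trajectory.

    track: (N, 2) array of x, y positions sampled at uniform time steps.
    Returns the MSD at lags 1 .. max_lag (in squared length units);
    downstream analysis fits these to extract viscoelastic moduli.
    """
    track = np.asarray(track, dtype=float)
    n = len(track)
    max_lag = max_lag or n - 1
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = track[lag:] - track[:-lag]           # displacements at this lag
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd
```

For a particle moving at constant velocity the MSD grows as lag squared, while pure diffusion gives linear growth, which is how the local power law of the MSD informs the viscoelastic interpretation.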
Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh Kumar; Sarkar, Soumik
Advances in automated and high-throughput imaging technologies have resulted in a deluge of high-resolution images and sensor data of plants. However, extracting patterns and features from this large corpus of data requires the use of machine learning (ML) tools to enable data assimilation and feature identification for stress phenotyping. Four stages of the decision cycle in plant stress phenotyping and plant breeding activities where different ML approaches can be deployed are (i) identification, (ii) classification, (iii) quantification, and (iv) prediction (ICQP). We provide here a comprehensive overview and user-friendly taxonomy of ML tools to enable the plant community to correctly and easily apply the appropriate ML tools and best-practice guidelines for various biotic and abiotic stress traits. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sapkota, Rumakanta; Nicolaisen, Mogens
Culture-independent studies using next generation sequencing have revolutionized microbial ecology; however, oomycete ecology in soils is severely lagging behind. The aim of this study was to improve and validate standard techniques for using high-throughput sequencing as a tool for studying oomycete... agricultural fields in Denmark, and 11 samples from carrot tissue with symptoms of Pythium infection. Sequence data from the Pythium and Phytophthora mock communities showed that our strategy successfully detected all included species. Taxonomic assignments of OTUs from 26 soil samples showed that 95... the usefulness of the method not only in soil DNA but also in a plant DNA background. In conclusion, we demonstrate a successful approach for pyrosequencing of oomycete communities using ITS1 as the barcode sequence with well-known primers for oomycete DNA amplification.
Scientific experiments are producing huge amounts of data, and they continue increasing the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte-scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centres is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to loss of CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...
Efficient parallel screening of combinatorial libraries is one of the most challenging aspects of the high-throughput (HT) heterogeneous catalysis workflow. Today, a number of methods have been used in HT catalyst studies, including various optical, mass-spectrometry, and gas-chromatography techniques. Of these, rapid-scanning Fourier-transform infrared (FTIR) imaging is one of the fastest and most versatile screening techniques. Here, the new design of the 16-channel HT reactor is presented and test results for its accuracy and reproducibility are shown. The performance of the system was evaluated through the oxidation of CO over commercial Pd/Al2O3 and cobalt oxide nanoparticles synthesized with different reducer-reductant molar ratios, surfactant types, metal and surfactant concentrations, synthesis temperatures, and ramp rates.
The structure of nanoscale materials is difficult to study because crystallography, the gold-standard for structure studies, no longer works at the nanoscale. New tools are needed to study nanostructure. Furthermore, it is important to study the evolution of nanostructure of complex nanostructured materials as a function of various parameters such as temperature or other environmental variables. These are called parametric studies because an environmental parameter is being varied. This means that the new tools for studying nanostructure also need to be extended to work quickly and on large numbers of datasets. This thesis describes the development of new tools for high throughput studies of complex and nanostructured materials, and their application to studying the structural evolution of bulk MnAs, and of MnAs nanoparticles, as a function of temperature. The tool for high throughput analysis of the bulk material was developed as part of this PhD thesis work and is called SrRietveld. A large part of making a new tool is validating it; we did this for SrRietveld by carrying out a high-throughput study of the uncertainties produced by the program, using different ways of estimating the uncertainty. This tool was applied to study structural changes in MnAs as a function of temperature. We were also interested in studying different MnAs nanoparticles fabricated through different methods because of their applications in information storage. PDFgui, an existing tool for analyzing nanoparticles using pair distribution function (PDF) refinement, was used in these cases. By comparing the results from SrRietveld and PDFgui, we obtained more comprehensive structural information about MnAs. The layout of the thesis is as follows. First, the background knowledge about material structures is given. The conventional crystallographic analysis is introduced in both theoretical and practical ways. For high throughput study, the next-generation Rietveld analysis program: Sr
Selekman, Joshua A; Qiu, Jun; Tran, Kristy; Stevens, Jason; Rosso, Victor; Simmons, Eric; Xiao, Yi; Janey, Jacob
High-throughput (HT) techniques built upon laboratory automation technology and coupled to statistical experimental design and parallel experimentation have enabled the acceleration of chemical process development across multiple industries. HT technologies are often applied to interrogate wide, often multidimensional experimental spaces to inform the design and optimization of any number of unit operations that chemical engineers use in process development. In this review, we outline the evolution of HT technology and provide a comprehensive overview of how HT automation is used throughout different industries, with a particular focus on chemical and pharmaceutical process development. In addition, we highlight the common strategies of how HT automation is incorporated into routine development activities to maximize its impact in various academic and industrial settings.
Steed, Chad A [ORNL]; Potok, Thomas E [ORNL]; Patton, Robert M [ORNL]; Goodall, John R [ORNL]; Maness, Christopher S [ORNL]; Senter, James K [ORNL]
The scale, velocity, and dynamic nature of large scale social media systems like Twitter demand a new set of visual analytics techniques that support near real-time situational awareness. Social media systems are credited with escalating social protest during recent large scale riots. Virtual communities form rapidly in these online systems, and they occasionally foster violence and unrest, which is conveyed in the users' language. Techniques for analyzing broad trends over these networks or reconstructing conversations within small groups have been demonstrated in recent years, but state-of-the-art tools are inadequate at supporting near real-time analysis of these high throughput streams of unstructured information. In this paper, we present an adaptive system to discover and interactively explore these virtual networks, as well as detect sentiment, highlight change, and discover spatio-temporal patterns.
The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...
Mass spectrometry (MS) remains under-utilized for the analysis of expressed proteins because it is inaccessible to the non-specialist, and sample turnaround from service labs is slow. Here, we describe 3.5 min liquid chromatography-mass spectrometry (LC-MS) and 16 min LC-MS/MS methods which are tailored to the validation and characterization of recombinant proteins in a high throughput structural biology pipeline. We illustrate the type and scope of MS data typically obtained from a 96-well expression and purification test for both soluble and integral membrane proteins (IMPs), and describe their utility in the selection of constructs for scale-up structural work, leading to cost and efficiency savings. We propose that the value of MS data lies in how quickly it becomes available, and that this can fundamentally change the way in which it is used.
Fox, Sandra; Farr-Jones, Shauna; Sopchak, Lynne; Boggs, Amy; Nicely, Helen Wang; Khoury, Richard; Biros, Michael
High-throughput screening (HTS) has become an important part of drug discovery at most pharmaceutical and many biotechnology companies worldwide, and use of HTS technologies is expanding into new areas. Target validation, assay development, secondary screening, ADME/Tox, and lead optimization are among the areas in which there is an increasing use of HTS technologies. It is becoming fully integrated within drug discovery, both upstream and downstream, which includes increasing use of cell-based assays and high-content screening (HCS) technologies to achieve more physiologically relevant results and to find higher quality leads. In addition, HTS laboratories are continually evaluating new technologies as they struggle to increase their success rate for finding drug candidates. The material in this article is based on a 900-page HTS industry report involving 54 HTS directors representing 58 HTS laboratories and 34 suppliers.
Myers, David R.; Qiu, Yongzhi; Fay, Meredith E.; Tennenbaum, Michael; Chester, Daniel; Cuadrado, Jonas; Sakurai, Yumiko; Baek, Jong; Tran, Reginald; Ciciliano, Jordan C.; Ahn, Byungwook; Mannino, Robert G.; Bunting, Silvia T.; Bennett, Carolyn; Briones, Michael; Fernandez-Nieves, Alberto; Smith, Michael L.; Brown, Ashley C.; Sulchek, Todd; Lam, Wilbur A.
Haemostasis occurs at sites of vascular injury, where flowing blood forms a clot, a dynamic and heterogeneous fibrin-based biomaterial. Paramount in the clot's capability to stem haemorrhage are its changing mechanical properties, the major drivers of which are the contractile forces exerted by platelets against the fibrin scaffold. However, how platelets transduce microenvironmental cues to mediate contraction and alter clot mechanics is unknown. This is clinically relevant, as overly softened and stiffened clots are associated with bleeding and thrombotic disorders. Here, we report a high-throughput hydrogel-based platelet-contraction cytometer that quantifies single-platelet contraction forces in different clot microenvironments. We also show that platelets, via the Rho/ROCK pathway, synergistically couple mechanical and biochemical inputs to mediate contraction. Moreover, highly contractile platelet subpopulations present in healthy controls are conspicuously absent in a subset of patients with undiagnosed bleeding disorders, and therefore may function as a clinical diagnostic biophysical biomarker.
Rydahl, Maja Gro
Plant cell walls are composed of an interlinked network of polysaccharides, glycoproteins and phenolic polymers. When addressing the diverse polysaccharides in green plants, including land plants and the ancestral green algae, there are significant overlaps in the cell wall structures. Yet......, there are noteworthy differences in the less evolved species of algae as compared to land plants. The dynamic process orchestrating the deposition of these biopolymers both in algae and higher plants, is complex and highly heterogeneous, yet immensely important for the development and differentiation of the cell...... of green algae, during the development into land plants. Hence, there is a pressing need for rethinking the glycomic toolbox, by developing new and high-throughput (HTP) technology, in order to acquire information of the location and relative abundance of diverse cell wall polymers. In this dissertation...
Geissmann, Quentin; Garcia Rodriguez, Luis; Beckwith, Esteban J; French, Alice S; Jamasb, Arian R; Gilestro, Giorgio F
Here, we present the use of ethoscopes, which are machines for high-throughput analysis of behavior in Drosophila and other animals. Ethoscopes provide a software and hardware solution that is reproducible and easily scalable. They perform, in real-time, tracking and profiling of behavior by using a supervised machine learning algorithm, are able to deliver behaviorally triggered stimuli to flies in a feedback-loop mode, and are highly customizable and open source. Ethoscopes can be built easily by using 3D printing technology and rely on Raspberry Pi microcomputers and Arduino boards to provide affordable and flexible hardware. All software and construction specifications are available at http://lab.gilest.ro/ethoscope.
Hasan, Molla; Kumar, Golden
Thermoplastic embossing of metallic glasses promises direct imprinting of metal nanostructures using templates. However, embossing high-aspect-ratio nanostructures faces unworkable flow resistance due to friction and non-wetting conditions at the template interface. Herein, we show that these inherent challenges of embossing can be reversed by thermoplastic drawing using templates. The flow resistance not only remains independent of wetting but also decreases with increasing feature aspect-ratio. Arrays of assembled nanotips, nanowires, and nanotubes with aspect-ratios exceeding 1000 can be produced through controlled elongation and fracture of metallic glass structures. In contrast to embossing, the drawing approach generates two sets of nanostructures upon final fracture; one set remains anchored to the metallic glass substrate while the second set is assembled on the template. This method can be readily adapted for high-throughput fabrication and testing of nanoscale tensile specimens, enabling rapid screening of size-effects in mechanical behavior.
Barbash, Shahar; Soreq, Hermona
Classification analysis based on high throughput data is a common feature in neuroscience and other fields of science, with a rapidly increasing impact on both basic biology and disease-related studies. The outcome of such classifications often serves to delineate novel biochemical mechanisms in health and disease states, identify new targets for therapeutic interference, and develop innovative diagnostic approaches. Given the importance of this type of study, we screened 111 recently-published high-impact manuscripts involving classification analysis of gene expression, and found that 58 of them (53%) based their conclusions on a statistically invalid method which can lead to bias in a statistical sense (lower true classification accuracy than the reported classification accuracy). In this report we characterize the potential methodological error and its scope, investigate how it is influenced by different experimental parameters, and describe statistically valid methods for avoiding such classification mistakes. PMID:23346359
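A common error of this kind is selection bias: choosing discriminative features on the full dataset before cross-validating the classifier. The abstract does not name the specific error, so the stdlib-Python simulation below is illustrative only (toy nearest-centroid classifier, labels independent of features; all function names are hypothetical). With random labels, true accuracy is chance, yet selecting features outside the leave-one-out loop reports inflated accuracy:

```python
import random
import statistics

def make_data(n=40, p=500, seed=7):
    # Random features; labels alternate and are independent of the features,
    # so any classifier's true accuracy is ~50%.
    rng = random.Random(seed)
    X = [[rng.gauss(0, 1) for _ in range(p)] for _ in range(n)]
    y = [i % 2 for i in range(n)]
    return X, y

def top_features(X, y, k=10):
    # Rank features by absolute mean difference between the two classes.
    scores = []
    for j in range(len(X[0])):
        a = [X[i][j] for i in range(len(X)) if y[i] == 0]
        b = [X[i][j] for i in range(len(X)) if y[i] == 1]
        scores.append((abs(statistics.mean(a) - statistics.mean(b)), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

def nearest_centroid_loo(X, y, feats=None, reselect=False):
    # Leave-one-out accuracy of a nearest-centroid classifier.
    # reselect=True re-runs feature selection inside each fold (valid);
    # a fixed feats list selected on ALL data leaks the test label (invalid).
    n, hits = len(X), 0
    for i in range(n):
        tr = [r for r in range(n) if r != i]
        f = top_features([X[r] for r in tr], [y[r] for r in tr]) if reselect else feats
        cents = {c: [statistics.mean(X[r][j] for r in tr if y[r] == c) for j in f]
                 for c in (0, 1)}
        d = {c: sum((X[i][j] - cents[c][k]) ** 2 for k, j in enumerate(f))
             for c in (0, 1)}
        hits += (min(d, key=d.get) == y[i])
    return hits / n

X, y = make_data()
biased = nearest_centroid_loo(X, y, feats=top_features(X, y))  # selection outside CV
unbiased = nearest_centroid_loo(X, y, reselect=True)           # selection inside CV
```

The biased estimate comes out far above chance even though the labels carry no signal, which is exactly the gap between reported and true accuracy the authors describe.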
The recent advent of high throughput sequencing of nucleic acids (RNA and DNA) has vastly expanded research into the functional and structural biology of the genome of all living organisms (and even a few dead ones). With this enormous and exponential growth in biological data generation come...... equally large demands in data handling, analysis and interpretation, perhaps defining the modern challenge of the computational biologist of the post-genomic era. The first part of this thesis consists of a general introduction to the history, common terms and challenges of next generation sequencing......). For the second flavor, DNA-seq, a study presenting genome wide profiling of transcription factor CEBP/A in liver cells undergoing regeneration after partial hepatectomy (article IV) is included....
Rathjen, Tina; Pais, Helio; Sweetman, Dylan; Moulton, Vincent; Munsterberg, Andrea; Dalmay, Tamas
High throughput Solexa sequencing technology was applied to identify microRNAs in somites of developing chicken embryos. We obtained 651,273 reads, from which 340,415 were mapped to the chicken genome representing 1701 distinct sequences. Eighty-five of these were known microRNAs and 42 novel miRNA candidates were identified. Accumulation of 18 of 42 sequences was confirmed by Northern blot analysis. Ten of the 18 sequences are new variants of known miRNAs and eight short RNAs are novel miRNAs. Six of these eight have not been reported by other deep sequencing projects. One of the six new miRNAs is highly enriched in somite tissue suggesting that deep sequencing of other specific tissues has the potential to identify novel tissue specific miRNAs.
Richendrfer, Holly; Créton, Robbert
We have created a novel high-throughput imaging system for the analysis of behavior in 7-day-old zebrafish larvae in multi-lane plates. This system measures spontaneous behaviors and the response to an aversive stimulus, which is shown to the larvae via a PowerPoint presentation. The recorded images are analyzed with an ImageJ macro, which automatically splits the color channels, subtracts the background, and applies a threshold to identify the placement of individual larvae in the lanes. We can then import the coordinates into an Excel sheet to quantify swim speed, preference for edge or side of the lane, resting behavior, thigmotaxis, distance between larvae, and avoidance behavior. Subtle changes in behavior are easily detected using our system, making it useful for behavioral analyses after exposure to environmental toxicants or pharmaceuticals.
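Once per-frame centroid coordinates have been extracted, measures such as swim speed and inter-larva distance reduce to simple geometry. A minimal Python sketch of two such measures (the authors compute these in Excel; function names here are hypothetical):

```python
import math

def swim_speeds(track, dt=1.0):
    # Per-frame swim speed from successive (x, y) centroid positions
    # of one larva; dt is the time between frames.
    return [math.dist(a, b) / dt for a, b in zip(track, track[1:])]

def mean_pairwise_distance(positions):
    # Average distance between all pairs of larvae in a single frame,
    # a simple proxy for group dispersion.
    n = len(positions)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(math.dist(positions[i], positions[j]) for i, j in pairs) / len(pairs)
```

For example, a larva moving from (0, 0) to (3, 4) in one frame yields a speed of 5.0 distance units per frame.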
Wu, Henry; Mayeshiba, Tam; Morgan, Dane
We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world. PMID:27434308
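The reported error metric is straightforward to reproduce. A short sketch of a weighted RMS error between calculated and experimental activation barriers (the weighting scheme here is illustrative, not the one used in the study):

```python
import math

def weighted_rmse(calc, expt, weights):
    # Weighted RMS error (eV) between calculated and experimental
    # activation barriers; weights might reflect measurement confidence.
    num = sum(w * (c - e) ** 2 for c, e, w in zip(calc, expt, weights))
    return math.sqrt(num / sum(weights))
```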
Pedersen, Marlene Lemvig; Block, Ines; List, Markus
RNAs with known killing effects as a model system to demonstrate that RPPA-based protein quantification can serve as substitute readout of cell viability, hereby reliably reflecting toxicity. In terms of automation, cell exposure, protein harvest, serial dilution and sample reformatting were performed using...... beneficially in automated high-throughput toxicity testing. An advantage of using RPPAs is that, in addition to the baseline toxicity readout, they allow testing of multiple markers of toxicity, such as inflammatory responses, which do not necessarily cumulate in cell death. We used transfection of si...... a robotic screening platform. Furthermore, we automated sample tracking and data analysis by developing a bundled bioinformatics tool named “MIRACLE”. Automation and RPPA-based viability/toxicity readouts enable rapid testing of large sample numbers, while granting the possibility for flexible consecutive...
Schmidt, Peter M; Abdo, Michael; Butcher, Rebecca E; Yap, Min-Yin; Scotney, Pierre D; Ramunno, Melanie L; Martin-Roussety, Genevieve; Owczarek, Catherine; Hardy, Matthew P; Chen, Chao-Guang; Fabri, Louis J
Monoclonal antibodies (mAbs) have become the fastest growing segment in the drug market with annual sales of more than 40 billion US$ in 2013. The selection of lead candidate molecules involves the generation of large repertoires of antibodies from which to choose a final therapeutic candidate. Improvements in the ability to rapidly produce and purify many antibodies in sufficient quantities reduce the lead time for selection, which ultimately impacts the speed with which an antibody may transition through the research stage and into product development. Miniaturization and automation of chromatography using micro columns (RoboColumns(®) from Atoll GmbH) coupled to an automated liquid handling instrument (ALH; Freedom EVO(®) from Tecan) has been a successful approach to establish high throughput process development platforms. Recent advances in transient gene expression (TGE) using the high-titre Expi293F™ system have enabled recombinant mAb titres of greater than 500 mg/L. These relatively high protein titres reduce the volume required to generate several milligrams of individual antibodies for initial biochemical and biological downstream assays, making TGE in the Expi293F™ system ideally suited to high throughput chromatography on an ALH. The present publication describes a novel platform for purifying Expi293F™-expressed recombinant mAbs directly from cell-free culture supernatant on a Perkin Elmer JANUS-VariSpan ALH equipped with a plate shuttle device. The purification platform allows automated 2-step purification (Protein A-desalting/size exclusion chromatography) of several hundred mAbs per week. The new robotic method can purify mAbs with high recovery (>90%) at sub-milligram level with yields of up to 2 mg from 4 mL of cell-free culture supernatant. Copyright © 2016 Elsevier B.V. All rights reserved.
Yu, Sheng; Chakrabortty, Abhishek; Liao, Katherine P; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi
Phenotyping algorithms are capable of accurately identifying patients with specific phenotypes from within electronic medical records systems. However, developing phenotyping algorithms in a scalable way remains a challenge due to the extensive human resources required. This paper introduces a high-throughput unsupervised feature selection method, which improves the robustness and scalability of electronic medical record phenotyping without compromising its accuracy. The proposed Surrogate-Assisted Feature Extraction (SAFE) method selects candidate features from a pool of comprehensive medical concepts found in publicly available knowledge sources. The target phenotype's International Classification of Diseases, Ninth Revision and natural language processing counts, acting as noisy surrogates to the gold-standard labels, are used to create silver-standard labels. Candidate features highly predictive of the silver-standard labels are selected as the final features. Algorithms were trained to identify patients with coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis using various numbers of labels to compare the performance of features selected by SAFE, a previously published automated feature extraction for phenotyping procedure, and domain experts. The out-of-sample area under the receiver operating characteristic curve and F-score from SAFE algorithms were remarkably higher than those from the other two, especially at small label sizes. SAFE advances high-throughput phenotyping methods by automatically selecting a succinct set of informative features for algorithm training, which in turn reduces overfitting and the needed number of gold-standard labels. SAFE also potentially identifies important features missed by automated feature extraction for phenotyping or experts.
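The silver-standard idea can be sketched compactly: threshold the noisy surrogates (ICD-9 and NLP mention counts) so that only the confident extremes receive silver labels, then rank candidate features against those labels. The stdlib-Python sketch below is a simplification under assumed data shapes; the thresholds, scoring rule, and all names are hypothetical, not SAFE's actual procedure:

```python
import statistics

def silver_labels(icd_counts, nlp_counts, icd_thr=3, nlp_thr=3):
    # Surrogate counts stand in for gold-standard chart review:
    # clearly high counts -> silver positive, zero counts -> silver negative,
    # everyone in between stays unlabeled.
    labels = {}
    for pid in icd_counts:
        i, n = icd_counts[pid], nlp_counts[pid]
        if i >= icd_thr and n >= nlp_thr:
            labels[pid] = 1
        elif i == 0 and n == 0:
            labels[pid] = 0
    return labels

def select_features(features, labels, k=2):
    # Rank candidate features by mean difference between the silver classes
    # (assumes both classes are non-empty); keep the top k.
    scores = {}
    for name, values in features.items():
        pos = [values[p] for p in labels if labels[p] == 1]
        neg = [values[p] for p in labels if labels[p] == 0]
        scores[name] = abs(statistics.mean(pos) - statistics.mean(neg))
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Because labeling uses only surrogate counts, no chart review is needed at this stage, which is what makes the approach scalable.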
Andrew, Gehring; Charles, Barnett; Chu, Ted; DebRoy, Chitrita; D'Souza, Doris; Eaker, Shannon; Fratamico, Pina; Gillespie, Barbara; Hegde, Narasimha; Jones, Kevin; Lin, Jun; Oliver, Stephen; Paoli, George; Perera, Ashan; Uknalis, Joseph
Many rapid methods have been developed for screening foods for the presence of pathogenic microorganisms. Rapid methods that have the additional ability to identify microorganisms via multiplexed immunological recognition have the potential for classification or typing of microbial contaminants thus facilitating epidemiological investigations that aim to identify outbreaks and trace back the contamination to its source. This manuscript introduces a novel, high throughput typing platform that employs microarrayed multiwell plate substrates and laser-induced fluorescence of the nucleic acid intercalating dye/stain SYBR Gold for detection of antibody-captured bacteria. The aim of this study was to use this platform for comparison of different sets of antibodies raised against the same pathogens as well as demonstrate its potential effectiveness for serotyping. To that end, two sets of antibodies raised against each of the “Big Six” non-O157 Shiga toxin-producing E. coli (STEC) as well as E. coli O157:H7 were array-printed into microtiter plates, and serial dilutions of the bacteria were added and subsequently detected. Though antibody specificity was not sufficient for the development of an STEC serotyping method, the STEC antibody sets performed reasonably well exhibiting that specificity increased at lower capture antibody concentrations or, conversely, at lower bacterial target concentrations. The favorable results indicated that with sufficiently selective and ideally concentrated sets of biorecognition elements (e.g., antibodies or aptamers), this high-throughput platform can be used to rapidly type microbial isolates derived from food samples within ca. 80 min of total assay time. It can also potentially be used to detect the pathogens from food enrichments and at least serve as a platform for testing antibodies. PMID:23645110
Background: Phloem-feeding insects are among the most devastating pests worldwide. They not only cause damage by feeding from the phloem, thereby depleting the plant of photo-assimilates, but also by vectoring viruses. Until now, the main way to prevent such problems has been the frequent use of insecticides. Applying resistant varieties would be a more environmentally friendly and sustainable solution; for this, resistant sources need to be identified first. Up to now there were no methods suitable for high throughput phenotyping of plant germplasm to identify sources of resistance towards phloem-feeding insects. Results: In this paper we present a high throughput screening system to identify plants with increased resistance against aphids. Its versatility is demonstrated using an Arabidopsis thaliana activation tag mutant line collection. The system consists of the green peach aphid Myzus persicae (Sulzer) and the circulative virus Turnip yellows virus (TuYV). In an initial screening, with one plant representing one mutant line, 13 virus-free mutant lines were identified by ELISA. Using seeds produced from these lines, the putative candidates were re-evaluated and characterized, resulting in nine lines with increased resistance towards the aphid. Conclusions: This M. persicae-TuYV screening system is an efficient, reliable and quick procedure to identify, among thousands of mutated lines, those resistant to aphids. In our study, nine mutant lines with increased resistance against the aphid were selected among 5160 mutant lines in just 5 months by one person. The system can be extended to other phloem-feeding insects and circulative viruses to identify insect-resistant sources from several collections, including for example genebanks and artificially prepared mutant collections.
Richard, Cecile Ai; Hickey, Lee T; Fletcher, Susan; Jennings, Raeleen; Chenu, Karine; Christopher, Jack T
Water availability is a major limiting factor for wheat (Triticum aestivum L.) production in rain-fed agricultural systems worldwide. Root system architecture has important functional implications for the timing and extent of soil water extraction, yet selection for root architectural traits in breeding programs has been limited by a lack of suitable phenotyping methods. The aim of this research was to develop low-cost high-throughput phenotyping methods to facilitate selection for desirable root architectural traits. Here, we report two methods, one using clear pots and the other using growth pouches, to assess the angle and the number of seminal roots in wheat seedlings: two proxy traits associated with the root architecture of mature wheat plants. Both methods revealed genetic variation for seminal root angle and number in the panel of 24 wheat cultivars. The clear pot method provided higher heritability and higher genetic correlations across experiments compared to the growth pouch method. In addition, the clear pot method was more efficient, requiring less time, space, and labour than the growth pouch method. Therefore, the clear pot method was considered the most suitable for large-scale, high-throughput screening of seedling root characteristics in crop improvement programs. The clear pot method could be easily integrated in breeding programs targeting drought tolerance to rapidly enrich breeding populations with desirable alleles. For instance, selection for narrow root angle and a high number of seminal roots could lead to deeper root systems with higher branching at depth. Such root characteristics are highly desirable in wheat to cope with anticipated future climate conditions, particularly where crops rely heavily on stored soil moisture at depth, including some Australian, Indian, South American, and African cropping regions.
Barone, A; Di Matteo, A; Carputo, D; Frusciante, L
Tomato (Solanum lycopersicum) is considered a model plant species for a group of economically important crops, such as potato, pepper, and eggplant, since it has a relatively small genome (950 Mb), a short generation time, and routine transformation technologies. Moreover, it shares with the other Solanaceous plants the same haploid chromosome number and a high level of conserved genomic organization. Finally, many genomic and genetic resources are currently available for tomato, and the sequencing of its genome is in progress. These features make tomato an ideal species for theoretical studies and practical applications in the genomics field. The present review describes how structural genomics assists the selection of new varieties resistant to pathogens that cause damage to this crop. Many molecular markers highly linked to resistance genes and cloned resistance genes are available and could be used for high-throughput screening of multiresistant varieties. Moreover, a new genomics-assisted breeding approach for improving fruit quality is presented and discussed. It relies on the identification of genetic mechanisms controlling the trait of interest through functional genomics tools. Following this approach, polymorphisms in major gene sequences responsible for variability in the expression of the trait under study are then exploited for tracking favourable allele combinations simultaneously in breeding programs using high-throughput genomic technologies. This aims at pyramiding, in the genetic background of commercial cultivars, alleles that increase their performance. In conclusion, tomato breeding strategies supported by advanced technologies are expected to target increased productivity and lower costs of improved genotypes even for complex traits. PMID:19721805
High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general purpose computation on a graphics processing unit (GPU) provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin – Madison, which can be leveraged for genomic selection in terms of central processing unit (CPU) capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of
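The pipelining idea above, evaluating traits concurrently instead of one after another, can be sketched with Python's standard concurrency tools. Here a thread pool stands in for an HTC batch scheduler dispatching independent jobs to cluster nodes; the trait "model fit" is a toy placeholder and all names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_trait(trait, n_markers=1000):
    # Placeholder for fitting a genomic-prediction model for one trait;
    # a real job would train on marker genotypes and phenotype records.
    effect_sum = sum(i * 0.001 for i in range(n_markers))
    return trait, effect_sum

def run_pipeline(traits, workers=4):
    # Dispatch each trait evaluation to a worker, the way an HTC
    # scheduler farms independent jobs out across a cluster, instead
    # of evaluating the traits sequentially on one processor.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(evaluate_trait, traits))

results = run_pipeline(["yield", "height", "fertility"])
```

Because the per-trait evaluations are independent, wall-clock time scales with the slowest job rather than the sum of all jobs, which is the throughput gain the pipeline targets.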
Mohanraj, Bhavana; Hou, Chieh; Meloni, Gregory R; Cosgrove, Brian D; Dodge, George R; Mauck, Robert L
Articular cartilage enables efficient and near-frictionless load transmission, but suffers from poor inherent healing capacity. As such, cartilage tissue engineering strategies have focused on mimicking both compositional and mechanical properties of native tissue in order to provide effective repair materials for the treatment of damaged or degenerated joint surfaces. However, given the large number of design parameters available (e.g. cell sources, scaffold designs, and growth factors), it is difficult to conduct combinatorial experiments on engineered cartilage. This is particularly exacerbated when mechanical properties are a primary outcome, given the long time required for testing of individual samples. High-throughput screening is utilized widely in the pharmaceutical industry to rapidly and cost-effectively assess the effects of thousands of compounds for therapeutic discovery. Here we adapted this approach to develop a high-throughput mechanical screening (HTMS) system capable of measuring the mechanical properties of up to 48 materials simultaneously. The HTMS device was validated by testing various biomaterials and engineered cartilage constructs and by comparing the HTMS results to those derived from conventional single-sample compression tests. Further evaluation showed that the HTMS system was capable of distinguishing and identifying 'hits', or factors that influence the degree of tissue maturation. Future iterations of this device will focus on reducing data variability, increasing force sensitivity and range, as well as scaling up to even larger (96-well) formats. This HTMS device provides a novel tool for cartilage tissue engineering, freeing experimental design from the limitations of mechanical testing throughput. © 2013 Published by Elsevier Ltd.
Mandracchia, B.; Bianco, V.; Wang, Z.; Paturzo, M.; Bramanti, A.; Pioggia, G.; Ferraro, P.
Here we introduce a compact holographic microscope embedded onboard a Lab-on-a-Chip (LoC) platform. A wavefront division interferometer is realized by writing a polymer grating onto the channel to extract a reference wave from the object wave impinging on the LoC. A portion of the beam reaches the samples flowing along the channel path, carrying their information content to the recording device, while one of the diffraction orders from the grating acts as an off-axis reference wave. Polymeric micro-lenses are delivered onto the chip by Pyro-ElectroHydroDynamic (Pyro-EHD) inkjet printing techniques. Thus, all the required optical components are embedded onboard a pocket device, and fast, non-iterative reconstruction algorithms can be used. We use our device in combination with a novel high-throughput technique, named Space-Time Digital Holography (STDH). STDH exploits the samples' motion inside microfluidic channels to obtain a synthetic hologram, mapped in a hybrid space-time domain, and with intrinsic useful features. Indeed, a single Linear Sensor Array (LSA) is sufficient to build up a synthetic representation of the entire experiment (i.e. the STDH) with unlimited Field of View (FoV) along the scanning direction, independently of the magnification factor. The throughput of the imaging system is dramatically increased, as STDH provides unlimited FoV and refocusable imaging of samples inside the liquid volume with no need for hologram stitching. To test our embedded STDH microscopy module, we counted, imaged and tracked in 3D, with high throughput, red blood cells moving inside the channel volume under non-ideal flow conditions.
Background: Estrogen receptor α (ERα) is a transcription factor whose activity is affected by multiple regulatory cofactors. In an effort to identify the human genes involved in the regulation of ERα, we constructed a high-throughput, cell-based, functional screening platform by linking a response element (ERE) with a reporter gene. This allowed the cellular activity of ERα, in cells cotransfected with the candidate gene, to be quantified in the presence or absence of its cognate ligand E2. Results: From a library of 570 human cDNA clones, we identified zinc finger protein 131 (ZNF131) as a repressor of ERα-mediated transactivation. ZNF131 is a typical member of the BTB/POZ family of transcription factors, and shows both ubiquitous expression and a high degree of sequence conservation. The luciferase reporter gene assay revealed that ZNF131 inhibits ligand-dependent transactivation by ERα in a dose-dependent manner. Electrophoretic mobility shift assay clearly demonstrated that the interaction between ZNF131 and ERα interrupts or prevents ERα binding to the estrogen response element (ERE). In addition, ZNF131 was able to suppress the expression of pS2, an ERα target gene. Conclusion: We suggest that the functional screening platform we constructed can be applied for high-throughput screening of candidate ERα-related genes. This in turn may provide new insights into the underlying molecular mechanisms of ERα regulation in mammalian cells.
Weiszmann, R.; Hammonds, A.S.; Celniker, S.E.
We describe a high-throughput protocol for RNA in situ hybridization (ISH) to Drosophila embryos in a 96-well format. cDNA or genomic DNA templates are amplified by PCR, and then digoxigenin-labeled ribonucleotides are incorporated into antisense RNA probes by in vitro transcription. The quality of each probe is evaluated before ISH using an RNA probe quantification (dot blot) assay. RNA probes are hybridized to fixed, mixed-staged Drosophila embryos in 96-well plates. The resulting stained embryos can be examined and photographed immediately or stored at 4 °C for later analysis. Starting with fixed, staged embryos, the protocol takes 6 d from probe template production through hybridization. Preparation of fixed embryos requires a minimum of 2 weeks to collect embryos representing all stages. The method has been used to determine the expression patterns of over 6,000 genes throughout embryogenesis.
Santos, Carla S
Background: Pine wilt disease (PWD), caused by the pinewood nematode (PWN; Bursaphelenchus xylophilus), damages and kills pine trees and is causing serious economic damage worldwide. Although the ecological mechanism of infestation is well described, the plant’s molecular response to the pathogen is not well known. This is due mainly to the lack of genomic information and the complexity of the disease. High-throughput sequencing is now an efficient approach for detecting the expression of genes in non-model organisms, thus providing valuable information in spite of the lack of the genome sequence. In an attempt to unravel genes potentially involved in the pine defense against the pathogen, we hereby report the high-throughput comparative sequence analysis of infested and non-infested stems of Pinus pinaster (very susceptible to PWN) and Pinus pinea (less susceptible to PWN). Results: Four cDNA libraries from infested and non-infested stems of P. pinaster and P. pinea were sequenced in a full 454 GS FLX run, producing a total of 2,083,698 reads. The putative amino acid sequences encoded by the assembled transcripts were annotated according to Gene Ontology, to assign Pinus contigs into Biological Process, Cellular Component and Molecular Function categories. Most of the annotated transcripts (25.4–39.7%) corresponded to Picea genes, whereas a smaller percentage (1.8–12.8%) matched Pinus genes, probably a consequence of more public genomic information being available for Picea than for Pinus. The comparative transcriptome analysis showed that when P. pinaster was infested with PWN, the genes malate dehydrogenase, ABA- and water deficit stress-related genes, and PAR1 were highly expressed, while in PWN-infested P. pinea the highly expressed genes were ricin B-related lectin and genes belonging to the SNARE and high mobility group families. Quantitative PCR experiments confirmed the differential gene expression between the two pine species.
Cregan, Perry B
Background: Single nucleotide polymorphisms (SNPs) as defined here are single-base sequence changes or short insertions/deletions between or within individuals of a given species. As a result of their abundance and the availability of high-throughput analysis technologies, SNP markers have begun to replace other traditional markers such as restriction fragment length polymorphisms (RFLPs), amplified fragment length polymorphisms (AFLPs), and simple sequence repeats (SSRs) or microsatellite markers for fine mapping and association studies in several species. For SNP discovery from chromatogram data, several bioinformatics programs have to be combined to generate an analysis pipeline. Results have to be stored in a relational database to facilitate interrogation through queries or to generate data for further analyses such as determination of linkage disequilibrium and identification of common haplotypes. Although these tasks are routinely performed by several groups, an integrated open-source SNP discovery pipeline that can be easily adapted by new groups interested in SNP marker development is currently unavailable. Results: We developed SNP-PHAGE (SNP discovery Pipeline with additional features for identification of common haplotypes within a sequence tagged site (Haplotype Analysis) and GenBank (dbSNP) submissions). This tool was applied for analyzing sequence traces from diverse soybean genotypes to discover over 10,000 SNPs. This package was developed on the UNIX/Linux platform, written in Perl, and uses a MySQL database. Scripts to generate a user-friendly web interface are also provided, with common queries for preliminary data analysis. A machine learning tool developed by this group for increasing the efficiency of SNP discovery is integrated as a part of this package as an optional feature. The SNP-PHAGE package is being made available open source at http://bfgl.anri.barc.usda.gov/ML/snp-phage/. Conclusion: SNP-PHAGE provides a bioinformatics
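As a rough illustration of the relational-storage step described above, the sketch below loads a few SNP records into an in-memory SQLite database (standing in for SNP-PHAGE's MySQL backend; the table layout and STS identifiers are hypothetical, not the package's actual schema) and runs the kind of summary query that supports downstream haplotype analysis:

```python
import sqlite3

# In-memory SQLite stands in for the pipeline's MySQL backend; the table
# layout and STS identifiers below are invented for illustration only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE snp (
    sts TEXT, position INTEGER, allele_a TEXT, allele_b TEXT)""")
con.executemany("INSERT INTO snp VALUES (?, ?, ?, ?)", [
    ("STS_001", 142, "A", "G"),
    ("STS_001", 388, "C", "T"),
    ("STS_002", 97,  "G", "T"),
])
# A summary query of the kind that feeds haplotype analysis: SNPs per STS.
snps_per_sts = con.execute(
    "SELECT sts, COUNT(*) FROM snp GROUP BY sts ORDER BY sts").fetchall()
```

In a real deployment the Perl scripts would issue equivalent SQL against MySQL; the point is only that storing discovery results relationally makes such interrogation a one-line query.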
Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel
High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized
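The sequential-versus-pipelined contrast described above can be sketched in a few lines of Python. The trait names and per-trait workload are invented placeholders, and a thread pool stands in for the separate worker processes or cluster nodes (e.g., batch jobs under an HTC scheduler) that an actual deployment would use:

```python
from concurrent.futures import ThreadPoolExecutor

def predict_trait(trait):
    # Stand-in for one computationally heavy genomic-prediction task,
    # e.g. training a whole-genome model for a single trait.
    return trait, sum(i * i for i in range(10_000)) % 97

TRAITS = ["milk_yield", "fertility", "longevity", "feed_efficiency"]

def run_sequential(traits=TRAITS):
    # Traits evaluated one after another: total time grows linearly.
    return dict(predict_trait(t) for t in traits)

def run_pipelined(traits=TRAITS):
    # The same tasks dispatched concurrently; on a cluster these would be
    # batch jobs on separate nodes rather than threads in one process.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return dict(pool.map(predict_trait, traits))
```

Both variants produce identical predictions; only the wall-clock time to obtain them differs, which is exactly the throughput gain the abstract describes.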
Sang, Fuming; Yang, Yang; Yuan, Lin; Ren, Jicun; Zhang, Zhizhou
Hot start (HS) PCR is an excellent alternative for high-throughput real time PCR due to its ability to prevent nonspecific amplification at low temperature. Development of a cost-effective and simple HS PCR technique to guarantee high-throughput PCR specificity and consistency still remains a great challenge. In this study, we systematically investigated the HS characteristics of QDs triggered in real time PCR with EvaGreen and SYBR Green I dyes by the analysis of amplification curves, standard curves and melting curves. Two different kinds of DNA polymerases, Pfu and Taq, were employed. Here we showed that high specificity and efficiency of real time PCR were obtained in a plasmid DNA and an error-prone two-round PCR assay using QD-based HS PCR, even after an hour preincubation at 50 °C before real time PCR. Moreover, the results obtained by QD-based HS PCR were comparable to a commercial Taq antibody DNA polymerase. However, no obvious HS effect of QDs was found in real time PCR using Taq DNA polymerase. The findings of this study demonstrated that a cost-effective high-throughput real time PCR based on QD triggered HS PCR could be established with high consistency, sensitivity and accuracy.
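The standard-curve analysis mentioned above reduces to a simple calculation: Ct values fall linearly with the log of template amount, and the slope of that line gives the amplification efficiency. A minimal sketch with invented, perfectly linear Ct values (not data from this study):

```python
# Ct values fall linearly with log10(template copies); the amplification
# efficiency follows from the slope: E = 10**(-1/slope) - 1 (E = 1 is 100%).
log10_copies = [3, 4, 5, 6, 7]
ct_values = [30.1, 26.8, 23.5, 20.2, 16.9]  # invented, perfectly linear data

n = len(ct_values)
mean_x = sum(log10_copies) / n
mean_y = sum(ct_values) / n
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(log10_copies, ct_values))
         / sum((x - mean_x) ** 2 for x in log10_copies))
efficiency = 10 ** (-1.0 / slope) - 1  # ~1.01 (about 101%) for slope -3.3
```

A slope near -3.32 corresponds to the theoretical doubling of product each cycle, which is why standard-curve slopes are the usual quality check in high-throughput real time PCR.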
Sessions, Allen; Burke, Ellen; Presting, Gernot; Aux, George; McElver, John; Patton, David; Dietrich, Bob; Ho, Patrick; Bacwaden, Johana; Ko, Cynthia; Clarke, Joseph D; Cotton, David; Bullis, David; Snell, Jennifer; Miguel, Trini; Hutchison, Don; Kimmerly, Bill; Mitzel, Theresa; Katagiri, Fumiaki; Glazebrook, Jane; Law, Marc; Goff, Stephen A
A collection of Arabidopsis lines with T-DNA insertions in known sites was generated to increase the efficiency of functional genomics. A high-throughput modified thermal asymmetric interlaced (TAIL)-PCR protocol was developed and used to amplify DNA fragments flanking the T-DNA left borders from approximately 100,000 transformed lines. A total of 85,108 TAIL-PCR products from 52,964 T-DNA lines were sequenced and compared with the Arabidopsis genome to determine the positions of T-DNAs in each line. Predicted T-DNA insertion sites, when mapped, showed a bias against predicted coding sequences. Predicted insertion mutations in genes of interest can be identified using Arabidopsis Gene Index name searches or by BLAST (Basic Local Alignment Search Tool) search. Insertions can be confirmed by simple PCR assays on individual lines. Predicted insertions were confirmed in 257 of 340 lines tested (76%). This resource has been named SAIL (Syngenta Arabidopsis Insertion Library) and is available to the scientific community at www.tmri.org.
Purpose: Evaluation of carcinogenic mechanisms serves a critical role in IARC monograph evaluations, and can lead to “upgrade” or “downgrade” of the carcinogenicity conclusions based on human and animal evidence alone. Three recent IARC monograph Working Groups (110, 112, and 113) pioneered analysis of high throughput in vitro screening data from the U.S. Environmental Protection Agency’s ToxCast program in evaluations of carcinogenic mechanisms. Methods: For monograph 110, ToxCast assay data across multiple nuclear receptors were used to test the hypothesis that PFOA acts exclusively through the PPAR family of receptors, with activity profiles compared to several prototypical nuclear receptor-activating compounds. For monographs 112 and 113, ToxCast assays were systematically evaluated and used as an additional data stream in the overall evaluation of the mechanistic evidence. Specifically, ToxCast assays were mapped to 10 “key characteristics of carcinogens” recently identified by an IARC expert group, and chemicals’ bioactivity profiles were evaluated both in absolute terms (number of relevant assays positive for bioactivity) and relative terms (ranking with respect to other compounds evaluated by IARC, using the ToxPi methodology). Results: PFOA activates multiple nuclear receptors in addition to the PPAR family in the ToxCast assays. ToxCast assays offered substantial coverage for 5 of the 10 “key characteristics,” with the greates
Compton, Jonathan L.; Luo, Justin C.; Ma, Huan; Botvinick, Elliot; Venugopalan, Vasan
We introduce an optical platform for rapid, high-throughput screening of exogenous molecules that affect cellular mechanotransduction. Our method initiates mechanotransduction in adherent cells using single laser-microbeam generated microcavitation bubbles without requiring flow chambers or microfluidics. These microcavitation bubbles expose adherent cells to a microtsunami, a transient microscale burst of hydrodynamic shear stress, which stimulates cells over areas approaching 1 mm². We demonstrate microtsunami-initiated mechanosignalling in primary human endothelial cells. This observed signalling is consistent with G-protein-coupled receptor stimulation, resulting in Ca²⁺ release by the endoplasmic reticulum. Moreover, we demonstrate the dose-dependent modulation of microtsunami-induced Ca²⁺ signalling by introducing a known inhibitor to this pathway. The imaging of Ca²⁺ signalling and its modulation by exogenous molecules demonstrates the capacity to initiate and assess cellular mechanosignalling in real time. We utilize this capability to screen the effects of a set of small molecules on cellular mechanotransduction in 96-well plates using standard imaging cytometry.
Dieckman, L J; Hanly, W C; Collart, E R
High-throughput approaches for gene cloning and expression require the development of new, nonstandard tools for use by molecular biologists and biochemists. We have developed and implemented a series of methods that enable the production of expression constructs in 96-well plate format. A screening process is described that facilitates the identification of bacterial clones expressing soluble protein. Application of the solubility screen then provides a plate map that identifies the location of wells containing clones producing soluble proteins. A series of semi-automated methods can then be applied for validation of solubility and production of freezer stocks for the protein production group. This process provides an 80% success rate for the identification of clones producing soluble protein and results in a significant decrease in the level of effort required for the labor-intensive components of validation and preparation of freezer stocks. This process is customized for large-scale structural genomics programs that rely on the production of large amounts of soluble proteins for crystallization trials.
Hyun, Woo Jin
Printed electronics is an emerging field for manufacturing electronic devices with low cost and minimal material waste for a variety of applications including displays, distributed sensing, smart packaging, and energy management. Moreover, its compatibility with roll-to-roll production formats and flexible substrates is desirable for continuous, high-throughput production of flexible electronics. Despite the promise, however, the roll-to-roll production of printed electronics is quite challenging due to web movement hindering accurate ink registration and high-fidelity printing. In this talk, I will present a promising strategy for roll-to-roll production using a novel printing process that we term SCALE (Self-aligned Capillarity-Assisted Lithography for Electronics). By utilizing capillarity of liquid inks on nano/micro-structured substrates, the SCALE process facilitates high-resolution and self-aligned patterning of electrically functional inks with greatly improved printing tolerance. I will show the fabrication of key building blocks (e.g. transistor, resistor, capacitor) for electronic circuits using the SCALE process on plastics.
Berlec, Aleš; Štrukelj, Borut
Biliverdin is an intermediate of heme degradation with an established role in veterinary clinical diagnostics of liver-related diseases. The need for chromatographic assays has so far prevented its wider use in diagnostic laboratories. The current report describes a simple, fast, high-throughput, and inexpensive assay based on the interaction of biliverdin with infrared fluorescent protein (iRFP), which yields functional protein exhibiting infrared fluorescence. The assay is linear in the range of 0-10 µmol/l of biliverdin, with a limit of detection of 0.02 µmol/l and a limit of quantification of 0.03 µmol/l. The assay is accurate, with relative error less than 0.15, and precise, with coefficient of variation less than 5%, in the concentration range of 2-9 µmol/l of biliverdin. More than 95% of biliverdin was recovered from biological samples by simple dimethyl sulfoxide extraction. There was almost no interference by hemin, although bilirubin caused an increase in the measured biliverdin concentration, probably due to spontaneous oxidation of bilirubin to biliverdin. The newly developed biliverdin assay is appropriate for reliable quantification of large numbers of samples in veterinary medicine.
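The linearity, LOD, and LOQ figures above come from standard calibration-curve statistics. A minimal sketch, using invented signal values rather than the assay's actual data, with the common ICH-style estimates LOD ≈ 3.3·σ/S and LOQ ≈ 10·σ/S (σ = residual standard deviation, S = slope):

```python
def linear_fit(xs, ys):
    # Ordinary least-squares slope and intercept for the calibration curve.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

# Hypothetical calibration points: fluorescence signal vs. biliverdin (µmol/l).
conc = [0, 2, 4, 6, 8, 10]
signal = [0.05, 2.1, 4.0, 6.1, 7.9, 10.0]

slope, intercept = linear_fit(conc, signal)
residual_sd = (sum((y - (slope * x + intercept)) ** 2
                   for x, y in zip(conc, signal)) / (len(conc) - 2)) ** 0.5

lod = 3.3 * residual_sd / slope   # limit of detection estimate
loq = 10.0 * residual_sd / slope  # limit of quantification estimate
```

With real instrument data the same two lines of arithmetic yield the reported detection and quantification limits; only σ and S change.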
Schumacher, Jörn; The ATLAS collaboration; Vandelli, Wainer
HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS target exclusively the HPC community and are not well suited to DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs (and this has been done), but it requires a non-negligible effort and expert knowledge. On the other hand, message services like 0MQ have gained popularity in the HEP community. Such APIs make it possible to build distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on to...
Schuck, Brittany W; MacArthur, Ryan; Inglese, James
Reporter-biased artifacts, i.e., compounds that interact directly with the reporter enzyme used in a high-throughput screening (HTS) assay rather than the biological process or pharmacology being interrogated, are now widely recognized to reduce the efficiency and quality of HTS used for chemical probe and therapeutic development. Furthermore, narrow or single-concentration HTS perpetuates false negatives during primary screening campaigns. Titration-based HTS, or quantitative HTS (qHTS), and coincidence reporter technology can be employed to reduce false negatives and false positives, respectively, thereby increasing the quality and efficiency of primary screening efforts, where the number of compounds investigated can range from tens of thousands to millions. The three protocols described here allow for generation of a coincidence reporter (CR) biocircuit to interrogate a biological or pharmacological question of interest, generation of a stable cell line expressing the CR biocircuit, and qHTS using the CR biocircuit to efficiently identify high-quality biologically active small molecules. © 2017 by John Wiley & Sons, Inc.
Method: Taking advantage of the current rapid development in imaging systems and computer vision algorithms, we present HPGA, a high-throughput phenotyping platform for plant growth modeling and functional analysis, which produces a better understanding of energy distribution with regard to the balance between growth and defense. HPGA has two components, PAE (Plant Area Estimation) and GMA (Growth Modeling and Analysis). In PAE, by taking the complex leaf overlap problem into consideration, the area of every plant is measured from top-view images in four steps. Given the abundant measurements obtained with PAE, in the second module, GMA, a nonlinear growth model is applied to generate growth curves, followed by functional data analysis. Results: Experimental results on the model plant Arabidopsis thaliana show that, compared to an existing approach, HPGA reduces the error rate of measuring plant area by half. The application of HPGA on the cfq mutant plants under fluctuating light reveals the correlation between low photosynthetic rates and small plant area (compared to wild type), which raises the hypothesis that knocking out cfq changes the sensitivity of the energy distribution under fluctuating light conditions to repress leaf growth. Availability: HPGA is available at http://www.msu.edu/~jinchen/HPGA. PMID:24565437
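The growth-modeling step can be illustrated with a logistic curve fit. The sketch below generates synthetic per-day plant-area measurements (standing in for the areas a module like PAE would produce; all numbers are invented) and recovers the generating parameters by a coarse grid search; a real analysis would use a proper nonlinear least-squares routine:

```python
import math

def logistic(t, K, r, t0):
    # Logistic growth model: area saturates at carrying capacity K.
    return K / (1.0 + math.exp(-r * (t - t0)))

# Synthetic per-day plant areas, standing in for image-derived measurements.
days = list(range(1, 15))
areas = [logistic(t, 40, 0.6, 7) for t in days]

def fit_logistic(ts, ys):
    # Coarse grid search over (K, r, t0); a real analysis would use a
    # proper nonlinear least-squares routine instead.
    best, best_sse = None, float("inf")
    for K in (20, 30, 40, 50):
        for r in (0.2, 0.4, 0.6, 0.8):
            for t0 in (5, 6, 7, 8):
                sse = sum((logistic(t, K, r, t0) - y) ** 2
                          for t, y in zip(ts, ys))
                if sse < best_sse:
                    best, best_sse = (K, r, t0), sse
    return best
```

On this noiseless synthetic data the search recovers the generating parameters exactly; functional data analysis would then operate on the fitted curves rather than the raw areas.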
Building scientific confidence in the development and evaluation of read-across remains an ongoing challenge. Approaches include establishing systematic frameworks to identify sources of uncertainty and ways to address them. One source of uncertainty is related to characterizing biological similarity. Many research efforts are underway, such as structuring mechanistic data in adverse outcome pathways and investigating the utility of high-throughput (HT)/high-content (HC) screening data. A largely untapped resource for read-across to date is the biomedical literature. This information has the potential to support read-across by facilitating the identification of valid source analogues with similar biological and toxicological profiles, as well as providing the mechanistic understanding for any prediction made. A key challenge in using biomedical literature is to convert and translate its unstructured form into a computable format that can be linked to chemical structure. We developed a novel text-mining strategy to represent literature information for read-across. Keywords were used to organize literature into toxicity signatures at the chemical level. These signatures were integrated with HT in vitro data and curated chemical structures. A rule-based algorithm assessed the strength of the literature relationship, providing a mechanism to rank and visualize the signatures as literature ToxPIs (LitToxPIs). LitToxPIs were developed for over 6,000 chemicals for a varie
Araus, José Luis; Cairns, Jill E
Constraints in field phenotyping capability limit our ability to dissect the genetics of quantitative traits, particularly those related to yield and stress tolerance (e.g., yield potential as well as increased drought and heat tolerance and nutrient efficiency). The development of effective field-based high-throughput phenotyping platforms (HTPPs) remains a bottleneck for future breeding advances. However, progress in sensors, aeronautics, and high-performance computing is paving the way. Here, we review recent advances in field HTPPs, which should combine, at an affordable cost, high capacity for data recording, scoring and processing, and non-invasive remote sensing methods, together with automated environmental data collection. Laboratory analyses of key plant parts may complement direct phenotyping under field conditions. Improvements in user-friendly data management, together with more powerful interpretation of results, should increase the use of field HTPPs, thereby increasing the efficiency of crop genetic improvement to meet the needs of future generations. Copyright © 2013 Elsevier Ltd. All rights reserved.
Mahajan, Vinit B; Skeie, Jessica M; Assefnia, Amir H; Mahajan, Maryann; Tsang, Stephen H
The mouse eye is an important genetic model for the translational study of human ophthalmic disease. Blinding diseases in humans, such as macular degeneration, photoreceptor degeneration, cataract, glaucoma, retinoblastoma, and diabetic retinopathy have been recapitulated in transgenic mice.(1-5) Most transgenic and knockout mice have been generated by laboratories to study non-ophthalmic diseases, but genetic conservation between organ systems suggests that many of the same genes may also play a role in ocular development and disease. Hence, these mice represent an important resource for discovering new genotype-phenotype correlations in the eye. Because these mice are scattered across the globe, it is difficult to acquire, maintain, and phenotype them in an efficient, cost-effective manner. Thus, most high-throughput ophthalmic phenotyping screens are restricted to a few locations that require on-site, ophthalmic expertise to examine eyes in live mice. (6-9) An alternative approach developed by our laboratory is a method for remote tissue-acquisition that can be used in large or small-scale surveys of transgenic mouse eyes. Standardized procedures for video-based surgical skill transfer, tissue fixation, and shipping allow any lab to collect whole eyes from mutant animals and send them for molecular and morphological phenotyping. In this video article, we present techniques to enucleate and transfer both unfixed and perfusion fixed mouse eyes for remote phenotyping analyses.
BACKGROUND: Large efforts have recently been made to automate the sample preparation protocols for massively parallel sequencing in order to match the increasing instrument throughput. Still, the size selection through agarose gel electrophoresis separation is a labor-intensive bottleneck of these protocols. METHODOLOGY/PRINCIPAL FINDINGS: In this study, a method for automatic library preparation and size selection on a liquid handling robot is presented. The method utilizes selective precipitation of certain sizes of DNA molecules onto paramagnetic beads for cleanup and selection after standard enzymatic reactions. CONCLUSIONS/SIGNIFICANCE: The method is used to generate libraries for de novo and re-sequencing on the Illumina HiSeq 2000 instrument with a throughput of 12 samples per instrument in approximately 4 hours. The resulting output data show quality scores and pass filter rates comparable to manually prepared samples. The sample size distribution can be adjusted for each application, and is suitable for all high-throughput DNA processing protocols seeking to control size intervals.
Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John
Recently, machine vision has been gaining attention in food science as well as in the food industry concerning food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only for estimation and even prediction of food quality but also for detection of adulteration. Toward these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity and data reproducibility, lower the cost of information extraction, and speed up quality assessment, without human intervention. The outcome of image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
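The core clustering step, fitting a Gaussian Mixture Model to pixel values, can be sketched with a tiny two-component EM on 1-D intensities. The pixel values here are synthetic, and a real pipeline would fit multivariate mixtures to multispectral data (e.g. with a library implementation) rather than this toy:

```python
import math

def em_gmm_1d(xs, iters=50):
    # Two-component 1-D Gaussian mixture fitted by EM: a toy version of
    # the unsupervised clustering used to segment image pixel values.
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel.
        resp = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return sorted(mu)

# Synthetic intensities: a dark "background" mode and a bright "sample" mode.
pixels = [10 + 0.1 * i for i in range(20)] + [60 + 0.1 * i for i in range(20)]
means = em_gmm_1d(pixels)
```

Segmentation then amounts to assigning each pixel to the component with the higher responsibility, separating sample regions from background without any labeled training data.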
The risk posed to human health by any of the thousands of untested anthropogenic chemicals in our environment is a function of both the potential hazard presented by the chemical and the possibility of being exposed. Without the capacity to make quantitative, albeit uncertain, forecasts of exposure, the putative risk of adverse health effects from a chemical cannot be evaluated. We use Bayesian methodology to infer ranges of exposure intakes that are consistent with biomarkers of chemical exposures identified in urine samples from the U.S. population by the National Health and Nutrition Examination Survey (NHANES). We perform linear regression on inferred exposure for demographic subsets of NHANES demarcated by age, gender, and weight, using high-throughput chemical descriptors gleaned from databases and chemical structure-based calculators. We find that five of these descriptors are capable of explaining roughly 50% of the variability across chemicals for all the demographic groups examined, including children aged 6-11. For the thousands of chemicals with no other source of information, this approach allows rapid and efficient prediction of average exposure intake of environmental chemicals. The methods described in this manuscript provide a substantially improved methodology for high-throughput screening (HTS) of human exposure to environmental chemicals. The manuscript includes a ranking of 7785 environmental chemicals with respect to potential human exposure, including most of the Tox21 in vit
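The regression step can be illustrated with ordinary least squares on synthetic data. The descriptor matrix, coefficients, and noise level below are invented, chosen so that the descriptors explain about half of the variance, echoing the reported ~50%; none of this is the study's actual data.

```python
# Sketch of the regression step: ordinary least squares of (log) inferred
# exposure on a handful of high-throughput chemical descriptors.
import numpy as np

rng = np.random.default_rng(1)
n_chem = 200
# Hypothetical standardized descriptors (e.g. production volume, logP, flags).
X = rng.normal(size=(n_chem, 5))
beta_true = np.array([0.8, -0.5, 0.3, 0.0, 0.2])
log_exposure = X @ beta_true + rng.normal(scale=1.0, size=n_chem)

# Fit OLS with an intercept and report R^2.
A = np.column_stack([np.ones(n_chem), X])
coef, *_ = np.linalg.lstsq(A, log_exposure, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((log_exposure - pred) ** 2) / np.sum(
    (log_exposure - log_exposure.mean()) ** 2)
print(round(r2, 2))   # roughly half the variance explained, by construction
```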
Zhou, Mingyan; Liu, Hongwei; Kilduff, James E; Langer, Robert; Anderson, Daniel G; Belfort, Georges
A novel method for synthesis and screening of fouling-resistant membrane surfaces was developed by combining a high-throughput platform (HTP) approach with photoinduced graft polymerization (PGP) for facile modification of commercial poly(aryl sulfone) membranes. This method is an inexpensive, fast, simple, reproducible, and scalable approach to identify fouling-resistant surfaces appropriate for a specific feed. In this research, natural organic matter (NOM)-resistant surfaces were synthesized and identified from a library of 66 monomers. Surfaces were prepared via graft polymerization onto poly(ether sulfone) (PES) membranes and were evaluated using an assay involving NOM adsorption, followed by pressure-driven filtration. In this work new and previously tested low-fouling surfaces for NOM are identified, and their ability to mitigate NOM and protein (bovine serum albumin) fouling is compared. The best-performing monomers were the zwitterion [2-(methacryloyloxy)ethyl]dimethyl-(3-sulfopropyl)ammonium hydroxide, and diacetone acrylamide, a neutral monomer containing an amide group. Other excellent surfaces were synthesized from amides, amines, basic monomers, and long-chain poly(ethylene) glycols. Bench-scale studies conducted for selected monomers verified the scalability of HTP-PGP results. The results and the synthesis and screening method presented here offer new opportunities for choosing new membrane chemistries that minimize NOM fouling.
AVA Solar has developed a very low cost solar photovoltaic (PV) manufacturing process and has demonstrated the significant economic and commercial potential of this technology. This I & I Category 3 project provided significant assistance toward accomplishing these milestones. The original goals of this project were to design, construct and test a production prototype system, fabricate PV modules and test the module performance. The original module manufacturing costs in the proposal were estimated at $2/Watt. The objectives of this project have been exceeded. An advanced processing line was designed, fabricated and installed. Using this automated, high throughput system, high efficiency devices and fully encapsulated modules were manufactured. AVA Solar has obtained two rounds of private equity funding, expanded to 50 people, and initiated the development of a large scale factory for 100+ megawatts of annual production. Modules will be manufactured at an industry leading cost, which will enable AVA Solar's modules to produce power that is cost-competitive with traditional energy resources. With low manufacturing costs and the ability to scale manufacturing, AVA Solar has been contacted by some of the largest customers in the PV industry to negotiate long-term supply contracts. The current market for PV has continued to grow at 40%+ per year for nearly a decade and is projected to reach $40-$60 billion by 2012. Currently, a crystalline silicon raw material supply shortage is limiting growth and raising costs. Our process does not use silicon, eliminating these limitations.
List, Markus; Elnegaard, Marlene Pedersen; Schmidt, Steffen; Christiansen, Helle; Tan, Qihua; Mollenhauer, Jan; Baumbach, Jan
High-throughput screening (HTS) has become an indispensable tool for the pharmaceutical industry and for biomedical research. A high degree of automation allows experiments in the range of a few hundred up to several hundred thousand to be performed in close succession. The basis for such screens are molecular libraries: microtiter plates with solubilized reagents such as siRNAs, shRNAs, miRNA inhibitors or mimics, sgRNAs, or small compounds, i.e. drugs. These reagents are typically concentrated to provide enough material for several screens. Library plates thus need to be serially diluted before they can be used as assay plates. This process, however, leads to an explosion in the number of plates and samples to be tracked. Here, we present SAVANAH, the first tool to effectively manage molecular screening libraries across dilution series. It conveniently links sample information from the library to experimental results from the assay plates. All results can be exported to the R statistical environment or piped into HiTSeekR (http://hitseekr.compbio.sdu.dk) for comprehensive follow-up analyses. In summary, SAVANAH supports the HTS community in managing and analyzing HTS experiments with an emphasis on serially diluted molecular libraries.
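The dilution-tracking problem SAVANAH addresses can be sketched in a few lines: every daughter well must stay linked to its library sample while its concentration is divided by the dilution factor. The class, plate names, and sample identifiers here are hypothetical, not SAVANAH's data model.

```python
# Toy model of serial dilution tracking: each assay plate derives from a
# library plate by a fixed dilution factor, and every well remains
# traceable back to its library sample.
from dataclasses import dataclass

@dataclass(frozen=True)
class Well:
    plate: str
    position: str        # e.g. "A01"
    sample: str          # reagent identifier, e.g. an siRNA catalogue ID
    conc_um: float       # concentration in micromolar

def dilute_plate(parent, new_plate_name, factor):
    """Create a daughter plate: same layout, concentration / factor."""
    return [Well(new_plate_name, w.position, w.sample, w.conc_um / factor)
            for w in parent]

library = [Well("LIB-001", "A01", "siRNA-TP53", 100.0),
           Well("LIB-001", "A02", "siRNA-KRAS", 100.0)]

# Two 1:10 dilution steps: 100 uM library stock -> 1 uM assay plate.
assay = dilute_plate(dilute_plate(library, "DIL-1", 10), "ASSAY-1", 10)
for w in assay:
    print(w.plate, w.position, w.sample, w.conc_um)
```

The frozen dataclass makes each well an immutable record, so a dilution step always creates new wells instead of mutating the parent plate.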
Paul Daniel Phillips
Due to their low cost, speed, and unmatched ability to explore large numbers of compounds, high-throughput virtual screening and molecular docking engines have become widely utilized by computational scientists. It is generally accepted that docking engines, such as AutoDock, produce reliable qualitative results for ligand-macromolecular receptor binding, and molecular docking results are commonly reported in the literature in the absence of complementary wet lab experimental data. In this investigation, three variants of the sixteen amino acid peptide α-conotoxin MII were docked to a homology model of the α3β2-nicotinic acetylcholine receptor. DockoMatic version 2.0 was used to perform a virtual screen of each peptide ligand against the receptor for ten docking trials consisting of 100 AutoDock cycles per trial. The results were analyzed for both variation in the calculated binding energy obtained from AutoDock and the orientation of bound peptide within the receptor. The results show that, while no clear correlation exists between a consistent ligand binding pose and the calculated binding energy, AutoDock is able to determine a consistent positioning of bound peptide in the majority of trials when at least ten trials were evaluated.
Fiume, Marc; Williams, Vanessa; Brook, Andrew; Brudno, Michael
The advent of high-throughput sequencing (HTS) technologies has made it affordable to sequence many individuals' genomes. Simultaneously, the computational analysis of the large volumes of data generated by the new sequencing machines remains a challenge. While a plethora of tools are available to map the resulting reads to a reference genome, and to conduct primary analysis of the mappings, it is often necessary to visually examine the results and underlying data to confirm predictions and understand the functional effects, especially in the context of other datasets. We introduce Savant, the Sequence Annotation, Visualization and ANalysis Tool, a desktop visualization and analysis browser for genomic data. Savant was developed for visualizing and analyzing HTS data, with special care taken to enable dynamic visualization in the presence of gigabases of genomic reads and references the size of the human genome. Savant supports the visualization of genome-based sequence, point, interval and continuous datasets, and multiple visualization modes that enable easy identification of genomic variants (including single nucleotide polymorphisms, structural and copy number variants) and functional genomic information (e.g. peaks in ChIP-seq data) in the context of genomic annotations. Savant is freely available at http://compbio.cs.toronto.edu/savant.
Yi, Yunhai; You, Xinxin; Bian, Chao; Chen, Shixi; Lv, Zhao; Qiu, Limei; Shi, Qiong
Widespread existence of antimicrobial peptides (AMPs) has been reported in various animals with comprehensive biological activities, which is consistent with the important roles of AMPs as the first line of host defense system. However, no big-data-based analysis on AMPs from any fish species is available. In this study, we identified 507 AMP transcripts on the basis of our previously reported genomes and transcriptomes of two representative amphibious mudskippers, Boleophthalmus pectinirostris (BP) and Periophthalmus magnuspinnatus (PM). The former is predominantly aquatic with less time out of water, while the latter is primarily terrestrial with extended periods of time on land. Within these identified AMPs, 449 sequences are novel; 15 were reported in BP previously; 48 are identically overlapped between BP and PM; 94 were validated by mass spectrometry. Moreover, most AMPs presented differential tissue transcription patterns in the two mudskippers. Interestingly, we discovered two AMPs, hemoglobin β1 and amylin, with high inhibitions on Micrococcus luteus. In conclusion, our high-throughput screening strategy based on genomic and transcriptomic data opens an efficient pathway to discover new antimicrobial peptides for ongoing development of marine drugs.
The discovery of new therapeutic options against Trypanosoma cruzi, the causative agent of Chagas disease, stands as a fundamental need. Currently, there are only two drugs available to treat this neglected disease, which represents a major public health problem in Latin America. Both available therapies, benznidazole and nifurtimox, have significant toxic side effects, and their efficacy against the life-threatening symptomatic chronic stage of the disease is variable. Thus, there is an urgent need for new, improved anti-T. cruzi drugs. With the objective of reliably accelerating the drug discovery process against Chagas disease, several advances have been made in the last few years. The availability of engineered reporter gene expressing parasites triggered the development of phenotypic in vitro assays suitable for high throughput screening (HTS), as well as the establishment of new in vivo protocols that allow faster experimental outcomes. Recently, automated high content microscopy approaches have also been used to identify new parasitic inhibitors. These in vitro and in vivo early drug discovery approaches, which will hopefully contribute to bringing better anti-T. cruzi drug entities in the near future, are reviewed here.
Brenan, Colin J H; Roberts, Douglas; Hurley, James
The increasing emphasis in life science research on utilization of genetic and genomic information underlies the need for high-throughput technologies capable of analyzing the expression of multiple genes or the presence of informative single nucleotide polymorphisms (SNPs) in large-scale, population-based applications. Human disease research, disease diagnosis, personalized therapeutics, environmental monitoring, blood testing, and identification of genetic traits impacting agricultural practices, both in terms of food quality and production efficiency, are a few areas where such systems are in demand. This has stimulated the need for PCR technologies that preserve the intrinsic analytical benefits of PCR yet enable higher throughputs without increasing time to answer, labor and reagent expenses, or workflow complexity. An example of such a system, based on a high-density array of nanoliter PCR assays, is described here. Functionally equivalent to a microtiter plate, the nanoplate system makes possible up to 3,072 simultaneous end-point or real-time PCR measurements in a device the size of a standard microscope slide. Methods for SNP genotyping with end-point TaqMan PCR assays and quantitative measurement of gene expression with SYBR Green I real-time PCR are outlined, and illustrative data showing system performance are provided.
Elsliger, Marc André; Deacon, Ashley M; Godzik, Adam; Lesley, Scott A; Wooley, John; Wüthrich, Kurt; Wilson, Ian A
The Joint Center for Structural Genomics (JCSG) high-throughput structural biology pipeline has delivered more than 1000 structures to the community over the past ten years. The JCSG has made a significant contribution to the overall goal of the NIH Protein Structure Initiative (PSI) of expanding structural coverage of the protein universe, as well as making substantial inroads into structural coverage of an entire organism. Targets are processed through an extensive combination of bioinformatics and biophysical analyses to efficiently characterize and optimize each target prior to selection for structure determination. The pipeline uses parallel processing methods at almost every step in the process and can adapt to a wide range of protein targets, from bacterial to human. The construction, expansion and optimization of the JCSG gene-to-structure pipeline over the years have resulted in many technological and methodological advances. The vast number of targets and the enormous amounts of associated data processed through the multiple stages of the experimental pipeline required the development of a variety of valuable resources that, wherever feasible, have been converted to free-access web-based tools and applications.
Johnson, Marjory J.; Townsend, Jeffrey N.
File-transfer rates for ftp are often reported to be relatively slow, compared to the raw bandwidth available in emerging gigabit networks. While a major bottleneck is disk I/O, protocol issues impact performance as well. Ftp was developed and optimized for use over the TCP/IP protocol stack of the Internet. However, TCP has been shown to run inefficiently over ATM. In an effort to maximize network throughput, data-transfer protocols can be developed to run over UDP or directly over IP, rather than over TCP. If error-free transmission is required, techniques for achieving reliable transmission can be included as part of the transfer protocol. However, selected image-processing applications can tolerate a low level of errors in images that are transmitted over a network. In this paper we report on experimental work to develop a high-throughput protocol for unreliable data transfer over ATM networks. We attempt to maximize throughput by keeping the communications pipe full, but still keep packet loss under five percent. We use the Bay Area Gigabit Network Testbed as our experimental platform.
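The transfer scheme above can be modeled in miniature: fixed-size packets carry sequence numbers, nothing is retransmitted, and the receiver reassembles whatever arrives, leaving gaps where packets were lost. Packet loss is simulated with a seeded random drop rather than induced on a real ATM link; the packet size and loss rate are arbitrary choices for the sketch.

```python
# Toy model of unreliable image transfer: sequence-numbered packets,
# no ACKs or retransmits, loss tolerated and measured at the receiver.
import random

PACKET = 64  # payload bytes per packet (arbitrary for the sketch)

def send_unreliable(data, loss_rate, rng):
    packets = [(i, data[i * PACKET:(i + 1) * PACKET])
               for i in range((len(data) + PACKET - 1) // PACKET)]
    # Each packet is independently dropped with probability loss_rate.
    return [p for p in packets if rng.random() >= loss_rate]

def reassemble(received, total_len):
    image = bytearray(total_len)            # missing packets stay zeroed
    for seq, payload in received:
        image[seq * PACKET:seq * PACKET + len(payload)] = payload
    return bytes(image)

rng = random.Random(42)
data = bytes(range(256)) * 64               # 16 KiB stand-in for an image
received = send_unreliable(data, 0.05, rng)
loss = 1 - len(received) / ((len(data) + PACKET - 1) // PACKET)
out = reassemble(received, len(data))
print(f"loss={loss:.1%}, matching bytes={sum(a == b for a, b in zip(out, data))}")
```

For loss-tolerant image applications, the zeroed gaps correspond to the small fraction of degraded pixels the paper accepts in exchange for keeping the pipe full.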
Budowle, Bruce; Connell, Nancy D; Bielecka-Oder, Anna; Colwell, Rita R; Corbett, Cindi R; Fletcher, Jacqueline; Forsman, Mats; Kadavy, Dana R; Markotic, Alemka; Morse, Stephen A; Murch, Randall S; Sajantila, Antti; Schmedes, Sarah E; Ternus, Krista L; Turner, Stephen D; Minot, Samuel
High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results allow analysis and interpretation, significantly influencing the course and/or focus of an investigation, and can impact the response of the government to an attack having individual, political, economic or military consequences. Interpretation of the results of microbial forensic analyses relies on understanding the performance and limitations of HTS methods, including analytical processes, assays and data interpretation. The utility of HTS must be defined carefully within established operating conditions and tolerances. Validation is essential in the development and implementation of microbial forensics methods used for formulating investigative leads and attribution. HTS strategies vary, requiring guiding principles for HTS system validation. Three initial aspects of HTS, irrespective of chemistry, instrumentation or software, are: 1) sample preparation, 2) sequencing, and 3) data analysis. Criteria that should be considered for HTS validation for microbial forensics are presented here. Validation should be defined in terms of the specific application, and the criteria described here comprise a foundation for investigators to establish, validate and implement HTS as a tool in microbial forensics, enhancing public safety and national security.
Turner, G. B.; Decker, S. R.; Tucker, M. P.; Law, C.; Doeppke, C.; Sykes, R. W.; Davis, M. F.; Ziebell, A.
This was a poster displayed at the Symposium. Advances on previous high throughput screening of biomass recalcitrance methods have resulted in improved conversion and replicate precision. Changes in plate reactor metallurgy, improved preparation of control biomass, species-specific pretreatment conditions, and enzymatic hydrolysis parameters have reduced overall coefficients of variation to an average of 6% for sample replicates. These method changes have improved plate-to-plate variation of control biomass recalcitrance and improved confidence in sugar release differences between samples. With smaller errors, plant researchers can have a higher degree of assurance that more low-recalcitrance candidates can be identified. Significant changes in the plate reactor, control biomass preparation, pretreatment conditions and enzyme formulation have reduced sample and control replicate variability. Reactor plate metallurgy significantly impacts sugar release: aluminum leaching into the reaction during pretreatment degrades sugars and inhibits enzyme activity. Removal of starch and extractives significantly decreases control biomass variability. New enzyme formulations give more consistent and higher conversion levels, but required re-optimization for switchgrass. Pretreatment time and temperature (severity) should be adjusted to specific biomass types, i.e. woody vs. herbaceous. Desalting of enzyme preps to remove low-molecular-weight stabilizers improved conversion levels, likely due to water activity impacts on enzyme structure and substrate interactions, but was not attempted here due to the need to continually desalt and validate precise enzyme concentration and activity.
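The precision metric quoted above, the coefficient of variation (CV), is simply the replicate standard deviation divided by the replicate mean. The sugar-release values below are invented for illustration, not data from the poster.

```python
# Coefficient of variation for a set of replicate measurements.
import statistics

def cv_percent(replicates):
    """CV (%) = 100 * sample standard deviation / mean."""
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)

glucose_release = [412.0, 398.0, 431.0, 405.0]   # hypothetical mg/g replicates
print(f"CV = {cv_percent(glucose_release):.1f}%")
```

A screening run would flag sample sets whose CV exceeds the ~6% average reported above for re-assay.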
The completion of genome sequencing for several organisms has created a great demand for genomic tools that can systematically analyze the growing wealth of data. In contrast to the classical reverse genetics approach of creating specific knockout cell lines or animals, which is time-consuming and expensive, RNA-mediated interference (RNAi) has emerged as a fast, simple, and cost-effective technique for gene knockdown on a large scale. Since its discovery as a gene silencing response to double-stranded RNA (dsRNA) with homology to endogenous genes in Caenorhabditis elegans (C. elegans), RNAi technology has been adapted to various high-throughput screens (HTS) for genome-wide loss-of-function (LOF) analysis. Biochemical insights into the endogenous mechanism of RNAi have led to advances in RNAi methodology, including RNAi molecule synthesis, delivery, and sequence design. In this article, we will briefly review these various RNAi library designs and discuss the benefits and drawbacks of each library strategy.
Konieczny, A; Jensen, E O; Marcker, K A
Poly(A)+ RNA isolated from root nodules of yellow lupin (Lupinus luteus, var. Ventus) has been used as a template for the construction of a cDNA library. The ds cDNA was synthesized and inserted into the Hind III site of plasmid pBR 322 using synthetic Hind III linkers. Clones containing sequences ... its nucleotide sequence was consistent with the known amino acid sequence of lupin Lb II. The cloned lupin Lb cDNA hybridized to poly(A)+ RNA from nodules only, which is in accordance with the general concept that leghemoglobin is expressed exclusively in nodules. Publication date: 1987.
Nelson, Bryant C; Wright, Christa W; Ibuki, Yuko; Moreno-Villanueva, Maria; Karlsson, Hanna L; Hendriks, Giel; Sims, Christopher M; Singh, Neenu; Doak, Shareen H
The rapid development of the engineered nanomaterial (ENM) manufacturing industry has accelerated the incorporation of ENMs into a wide variety of consumer products across the globe. Unintentionally or not, some of these ENMs may be introduced into the environment or come into contact with humans or other organisms, resulting in unexpected biological effects. It is thus prudent to have rapid and robust analytical metrology in place that can be used to critically assess and/or predict the cytotoxicity, as well as the potential genotoxicity, of these ENMs. Many of the traditional genotoxicity test methods [e.g. unscheduled DNA synthesis assay, bacterial reverse mutation (Ames) test, etc.] for determining the DNA damaging potential of chemical and biological compounds are not suitable for the evaluation of ENMs, due to a variety of methodological issues ranging from potential assay interferences to problems centered on low sample throughput. Recently, a number of sensitive, high-throughput genotoxicity assays/platforms (CometChip assay, flow cytometry/micronucleus assay, flow cytometry/γ-H2AX assay, automated 'Fluorimetric Detection of Alkaline DNA Unwinding' (FADU) assay, ToxTracker reporter assay) have been developed, based on substantial modifications and enhancements of traditional genotoxicity assays. These new assays have been used for the rapid measurement of DNA damage (strand breaks), chromosomal damage (micronuclei) and for detecting upregulated DNA damage signalling pathways resulting from ENM exposures. In this critical review, we describe and discuss the fundamental measurement principles and measurement endpoints of these new assays, as well as the modes of operation, analytical metrics and potential interferences, as applicable to ENM exposures. An unbiased discussion of the major technical advantages and limitations of each assay for evaluating and predicting the genotoxic potential of ENMs is also provided. Published by Oxford University Press.
Finak, Greg; Jiang, Wenxin; Krouse, Kevin; Wei, Chungwen; Sanz, Ignacio; Phippard, Deborah; Asare, Adam; De Rosa, Stephen C; Self, Steve; Gottardo, Raphael
Flow cytometry studies in clinical trials generate very large datasets and are usually highly standardized, focusing on endpoints that are well defined a priori. Staining variability of individual markers is not uncommon and complicates manual gating, requiring the analyst to adapt gates for each sample, which is unwieldy for large datasets. It can lead to unreliable measurements, especially if a template-gating approach is used without further correction to the gates. In this article, a computational framework is presented for normalizing the fluorescence intensity of multiple markers in specific cell populations across samples that is suitable for high-throughput processing of large clinical trial datasets. Previous approaches to normalization have been global, applied to all cells or to data with debris removed, and provided no mechanism to handle specific cell subsets. This approach integrates tightly with the gating process, so that normalization is performed during gating and is local to the specific cell subsets exhibiting variability. This improves peak alignment and the performance of the algorithm. The performance of this algorithm is demonstrated on two clinical trial datasets from the HIV Vaccine Trials Network (HVTN) and the Immune Tolerance Network (ITN). In the ITN dataset we show that local normalization combined with template gating can account for sample-to-sample variability as effectively as manual gating. In the HVTN dataset, it is shown that local normalization mitigates false-positive vaccine response calls in an intracellular cytokine staining assay. In both datasets, local normalization performs better than global normalization. The normalization framework allows the use of template gates even in the presence of sample-to-sample staining variability, mitigates the subjectivity and bias of manual gating, and decreases the time necessary to analyze large datasets. © 2013 International Society for Advancement of Cytometry.
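The local normalization idea can be sketched for a single marker in a single gated cell subset: estimate each sample's dominant intensity peak and shift it onto the template's peak. The published method aligns density landmarks within gated subsets; the histogram-mode estimator and the synthetic intensities here are simplifications, not the actual algorithm.

```python
# Sketch of per-sample peak alignment for one marker in a gated subset.
import numpy as np

def peak(x, bins=80):
    """Crude peak estimate: center of the fullest histogram bin."""
    counts, edges = np.histogram(x, bins=bins)
    i = counts.argmax()
    return 0.5 * (edges[i] + edges[i + 1])

def align_to_template(sample, template_peak):
    """Shift a sample so its dominant peak lands on the template's peak."""
    return sample + (template_peak - peak(sample))

rng = np.random.default_rng(7)
template = rng.normal(3.0, 0.3, 20000)   # reference sample, one marker
shifted = rng.normal(3.6, 0.3, 20000)    # same population, staining shift

normalized = align_to_template(shifted, peak(template))
print(round(peak(normalized) - peak(template), 2))
```

After alignment the residual offset is on the order of a histogram bin width, versus the 0.6-unit staining shift before normalization; doing this per gated subset, rather than on all events, is what makes the correction "local".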
Soufan, Othman; Ba-alawi, Wail; Afeef, Moataz; Essack, Magbubah; Rodionov, Valentin; Kalnis, Panos; Bajic, Vladimir B
High-throughput screening (HTS) experiments provide a valuable resource that reports biological activity of numerous chemical compounds relative to their molecular targets. Building computational models that accurately predict such activity status (active vs. inactive) in specific assays is a challenging task given the large volume of data and the frequently small proportion of active compounds relative to inactive ones. We developed a method, DRAMOTE, to predict the activity status of chemical compounds in HTS activity assays. For a class of HTS assays, our method achieves considerably better results than the current state-of-the-art solutions. We achieved this by modifying a minority oversampling technique. To demonstrate that DRAMOTE performs better than the other methods, we carried out a comprehensive comparison analysis with several other methods and evaluated them on data from 11 PubChem assays through 1,350 experiments that involved approximately 500,000 interactions between chemicals and their target proteins. As an example of potential use, we applied DRAMOTE to develop robust models for predicting FDA-approved drugs that have a high probability of interacting with the thyroid stimulating hormone receptor (TSHR) in humans. Our findings are further partially and indirectly supported by 3D docking results and literature information. The results based on approximately 500,000 interactions suggest that DRAMOTE performed the best and that it can be used for developing robust virtual screening models. The datasets and implementation of all solutions are available as a MATLAB toolbox online at www.cbrc.kaust.edu.sa/dramote and can be found on Figshare.
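DRAMOTE's key ingredient is a modified minority oversampling technique. As a generic illustration of the underlying idea (a SMOTE-like scheme, not the paper's specific modification), the sketch below synthesizes new minority-class "active" points by interpolating between a minority sample and its nearest minority-class neighbour.

```python
# SMOTE-like minority oversampling: new points lie on segments between
# minority samples, so the synthetic class stays inside the minority cloud.
import numpy as np

def oversample_minority(X_min, n_new, rng):
    """Generate n_new synthetic points between minority nearest-neighbour pairs."""
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Nearest minority neighbour of point i (excluding itself).
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        d[i] = np.inf
        j = d.argmin()
        lam = rng.random()                     # position along the segment
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

rng = np.random.default_rng(3)
actives = rng.normal(0, 1, size=(20, 5))       # scarce "active" compounds
new_points = oversample_minority(actives, 80, rng)
print(new_points.shape)
```

Balancing 20 actives with 80 synthetic points lets a downstream classifier train on an even class split instead of a 20-vs-thousands imbalance.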
Xia, Menghang; Huang, Ruili; Witt, Kristine L.; Southall, Noel; Fostel, Jennifer; Cho, Ming-Hsuang; Jadhav, Ajit; Smith, Cynthia S.; Inglese, James; Portier, Christopher J.; Tice, Raymond R.; Austin, Christopher P.
Background: The propensity of compounds to produce adverse health effects in humans is generally evaluated using animal-based test methods. Such methods can be relatively expensive, low-throughput, and associated with pain suffered by the treated animals. In addition, differences in species biology may confound extrapolation to human health effects. Objective: The National Toxicology Program and the National Institutes of Health Chemical Genomics Center are collaborating to identify a battery of cell-based screens to prioritize compounds for further toxicologic evaluation. Methods: A collection of 1,408 compounds previously tested in one or more traditional toxicologic assays were profiled for cytotoxicity using quantitative high-throughput screening (qHTS) in 13 human and rodent cell types derived from six common targets of xenobiotic toxicity (liver, blood, kidney, nerve, lung, skin). Selected cytotoxicants were further tested to define response kinetics. Results: qHTS of these compounds produced robust and reproducible results, which allowed cross-compound, cross-cell type, and cross-species comparisons. Some compounds were cytotoxic to all cell types at similar concentrations, whereas others exhibited species- or cell type–specific cytotoxicity. Closely related cell types and analogous cell types in human and rodent frequently showed different patterns of cytotoxicity. Some compounds inducing similar levels of cytotoxicity showed distinct time dependence in kinetic studies, consistent with known mechanisms of toxicity. Conclusions: The generation of high-quality cytotoxicity data on this large library of known compounds using qHTS demonstrates the potential of this methodology to profile a much broader array of assays and compounds, which, in aggregate, may be valuable for prioritizing compounds for further toxicologic evaluation, identifying compounds with particular mechanisms of action, and potentially predicting in vivo biological response. PMID:18335092
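A qHTS readout reduces each compound to a concentration-response curve, commonly summarized by a half-maximal activity concentration (AC50). A minimal sketch, using simulated Hill-curve data rather than the study's measurements, estimates AC50 by log-linear interpolation between the two tested concentrations that bracket the half-maximal response.

```python
# Estimate AC50 from a simulated concentration-response series.
import math

def hill(conc, ac50, top=100.0, n=1.0):
    """Hill curve: response rises from 0 to top, half-maximal at ac50."""
    return top / (1 + (ac50 / conc) ** n)

concs = [10 ** e for e in range(-3, 4)]        # 7-point titration, uM
resp = [hill(c, ac50=1.0) for c in concs]      # simulated % response

def ac50_interp(concs, resp, half=50.0):
    """Log-linear interpolation at the half-maximal crossing."""
    for (c0, r0), (c1, r1) in zip(zip(concs, resp), zip(concs[1:], resp[1:])):
        if r0 < half <= r1:
            f = (half - r0) / (r1 - r0)
            return 10 ** (math.log10(c0) + f * (math.log10(c1) - math.log10(c0)))
    return None                                # curve never reaches half-max

print(ac50_interp(concs, resp))                # recovers the simulated AC50 of 1.0
```

Returning None for curves that never cross half-maximal mirrors how inactive compounds drop out of an AC50 ranking.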
Tsujita, Tadayuki; Kawaguchi, Shin-ichi; Dan, Takashi; Baird, Liam; Miyata, Toshio; Yamamoto, Masayuki
The induction of anti-hypoxic stress enzymes and proteins has the potential to be a potent therapeutic strategy to prevent the progression of ischemic heart, kidney or brain diseases. To realize this idea, small chemical compounds, which mimic hypoxic conditions by activating the PHD-HIF-α system, have been developed. However, to date, none of these compounds were identified by monitoring the transcriptional activation of hypoxia-inducible factors (HIFs). Thus, to facilitate the discovery of potent inducers of HIF-α, we have developed an effective high-throughput screening (HTS) system to directly monitor the output of HIF-α transcription. We generated a HIF-α-dependent reporter system that responds to hypoxic stimuli in a concentration- and time-dependent manner. This system was developed through multiple optimization steps, resulting in the generation of a construct that consists of the secretion-type luciferase gene (Metridia luciferase, MLuc) under the transcriptional regulation of an enhancer containing seven copies of a 40-bp hypoxia responsive element (HRE) upstream of a mini-TATA promoter. This construct was stably integrated into the human neuroblastoma cell line, SK-N-BE(2)c, to generate a reporter system, named SKN:HRE-MLuc. To improve this system and to increase its suitability for the HTS platform, we incorporated the next generation luciferase, Nano luciferase (NLuc), whose longer half-life provides us with flexibility in the use of this reporter. We thus generated a stably transformed clone with NLuc, named SKN:HRE-NLuc, and found that it showed significantly improved reporter activity compared to SKN:HRE-MLuc. In this study, we have successfully developed the SKN:HRE-NLuc screening system as an efficient platform for future HTS.
Ni, Jing [Iowa State Univ., Ames, IA (United States)]
This work describes several research projects aimed towards developing new instruments and novel methods for high throughput chemical and biological analysis. Approaches are taken in two directions. The first direction takes advantage of well-established semiconductor fabrication techniques and applies them to miniaturize instruments that are workhorses in analytical laboratories. Specifically, the first part of this work focused on the development of micropumps and microvalves for controlled fluid delivery. The mechanism of these micropumps and microvalves relies on the electrochemically-induced surface tension change at a mercury/electrolyte interface. A miniaturized flow injection analysis device was integrated and flow injection analyses were demonstrated. In the second part of this work, microfluidic chips were also designed, fabricated, and tested. Separations of two fluorescent dyes were demonstrated in microfabricated channels, based on an open-tubular liquid chromatography (OT LC) or an electrochemically-modulated liquid chromatography (EMLC) format. A reduction in instrument size can potentially increase analysis speed, and allow exceedingly small amounts of sample to be analyzed under diverse separation conditions. The second direction explores the surface enhanced Raman spectroscopy (SERS) as a signal transduction method for immunoassay analysis. It takes advantage of the improved detection sensitivity as a result of surface enhancement on colloidal gold, the narrow width of Raman band, and the stability of Raman scattering signals to distinguish several different species simultaneously without exploiting spatially-separated addresses on a biochip. By labeling gold nanoparticles with different Raman reporters in conjunction with different detection antibodies, a simultaneous detection of a dual-analyte immunoassay was demonstrated. Using this scheme for quantitative analysis was also studied and preliminary dose-response curves from an immunoassay of a
Background The advent of high-throughput and cost-effective genotyping platforms has made genome-wide association (GWA) studies a reality. While the primary focus has been on reducing genotyping error, the problems associated with missing calls are largely overlooked. Results To probe the effect of missing calls on GWA studies, we demonstrated experimentally the prevalence and severity of the problem of missing call bias (MCB) in four genotyping technologies (Affymetrix 500 K SNP array, SNPstream, TaqMan, and Illumina Beadlab). Subsequently, we showed theoretically that MCB leads to biased conclusions in the subsequent analyses, including estimation of allele/genotype frequencies, the measurement of HWE, and association tests under various modes of inheritance. We showed that MCB usually leads to power loss in association tests, and that this loss is greater than would be caused by an equivalent unbiased reduction in sample size. We also compared the bias in allele frequency estimation and in association tests introduced by MCB with that introduced by genotyping errors. Our results illustrate that in most cases the bias can be greatly reduced by increasing the call-rate at the cost of a higher genotyping error rate. Conclusion The commonly used 'no-call' procedure for observations of borderline quality should be modified. If the objective is to minimize the bias, the cut-offs for call-rate and genotyping error rate should be properly coupled in GWA studies. We suggest that the current QC cut-off for call-rate should be increased, while the cut-off for genotyping error rate can be relaxed accordingly.
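The allele-frequency bias that MCB introduces can be illustrated with a small worked example; the genotype counts and the genotype-dependent missingness rates below are hypothetical, chosen only to show the mechanism.

```python
def allele_freq(counts):
    """Frequency of allele A from genotype counts {'AA': n, 'Aa': n, 'aa': n}."""
    n = sum(counts.values())
    return (2 * counts['AA'] + counts['Aa']) / (2.0 * n)

# Hypothetical true sample: 360 AA, 480 Aa, 160 aa  ->  p(A) = 0.6
true_counts = {'AA': 360, 'Aa': 480, 'aa': 160}

# Missing calls preferentially hit heterozygotes (a typical MCB pattern):
# 20% of Aa calls are dropped vs. only 2% of homozygote calls.
biased = {'AA': int(360 * 0.98), 'Aa': int(480 * 0.80), 'aa': int(160 * 0.98)}

print(allele_freq(true_counts))  # 0.6
print(allele_freq(biased))       # drifts away from 0.6
```

Because the surviving homozygote classes are unequal, the estimated frequency shifts, which is exactly the kind of bias that then propagates into HWE tests and association statistics.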
High-throughput screening (HTS) experiments provide a valuable resource that reports biological activity of numerous chemical compounds relative to their molecular targets. Building computational models that accurately predict such activity status (active vs. inactive) in specific assays is a challenging task given the large volume of data and frequently small proportion of active compounds relative to the inactive ones. We developed a method, DRAMOTE, to predict activity status of chemical compounds in HTS activity assays. For a class of HTS assays, our method achieves considerably better results than the current state-of-the-art solutions. We achieved this by modification of a minority oversampling technique. To demonstrate that DRAMOTE is performing better than the other methods, we performed a comprehensive comparison analysis with several other methods and evaluated them on data from 11 PubChem assays through 1,350 experiments that involved approximately 500,000 interactions between chemicals and their target proteins. As an example of potential use, we applied DRAMOTE to develop robust models for predicting FDA approved drugs that have high probability to interact with the thyroid stimulating hormone receptor (TSHR) in humans. Our findings are further partially and indirectly supported by 3D docking results and literature information. The results based on approximately 500,000 interactions suggest that DRAMOTE has performed the best and that it can be used for developing robust virtual screening models. The datasets and implementation of all solutions are available as a MATLAB toolbox online at www.cbrc.kaust.edu.sa/dramote and can be found on Figshare.
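DRAMOTE's specific modification is not detailed above, but the general idea of minority oversampling can be sketched as follows: synthetic "active" compounds are generated by interpolating between existing actives in feature space (a SMOTE-style scheme). All names, coordinates, and parameters here are illustrative assumptions, not the DRAMOTE algorithm itself.

```python
import random

def oversample_minority(minority, n_new, k=3, seed=0):
    """SMOTE-style sketch: synthesize points by interpolating between a random
    minority sample and one of its k nearest minority-class neighbours."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist2(x, m))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # position along the segment between x and nb
        synthetic.append(tuple(xi + gap * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic

# Four hypothetical active compounds described by two descriptors each:
actives = [(0.1, 0.2), (0.15, 0.25), (0.2, 0.1), (0.3, 0.3)]
new_pts = oversample_minority(actives, n_new=4)
print(len(new_pts))  # 4 synthetic actives
```

Because each synthetic point lies on a segment between two real actives, the oversampled set stays inside the convex hull of the observed minority class.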
Genick, Christine Clougherty; Barlier, Danielle; Monna, Dominique; Brunner, Reto; Bé, Céline; Scheufler, Clemens; Ottl, Johannes
For approximately a decade, biophysical methods have been used to validate positive hits selected from high-throughput screening (HTS) campaigns with the goal to verify binding interactions using label-free assays. By applying label-free readouts, screen artifacts created by compound interference and fluorescence are discovered, enabling further characterization of the hits for their target specificity and selectivity. The use of several biophysical methods to extract this type of high-content information is required to prevent the promotion of false positives to the next level of hit validation and to select the best candidates for further chemical optimization. The typical technologies applied in this arena include dynamic light scattering, turbidimetry, resonance waveguide, surface plasmon resonance, differential scanning fluorimetry, mass spectrometry, and others. Each technology can provide different types of information to enable the characterization of the binding interaction. Thus, these technologies can be incorporated in a hit-validation strategy not only according to the profile of chemical matter that is desired by the medicinal chemists, but also in a manner that is in agreement with the target protein's amenability to the screening format. Here, we present the results of screening strategies using biophysics with the objective of evaluating the approaches, discussing the advantages and challenges, and summarizing the benefits in reference to lead discovery. In summary, the biophysics screens presented here demonstrated various hit rates from a list of ~2000 preselected, IC50-validated hits from HTS (an IC50 is the inhibitor concentration at which 50% inhibition of activity is observed). There are several lessons learned from these biophysical screens, which will be discussed in this article. © 2014 Society for Laboratory Automation and Screening.
Forsberg, Christina; Jansson, Linda; Ansell, Ricky; Hedman, Johannes
Tape-lifting has, since its introduction in the early 2000s, become a well-established sampling method in forensic DNA analysis. Sampling is quick and straightforward, while the following DNA extraction is more challenging due to the "stickiness", rigidity and size of the tape. We have developed, validated and implemented a simple and efficient direct lysis DNA extraction protocol for adhesive tapes that requires limited manual labour. The method uses Chelex beads and is applied with SceneSafe FAST tape. This direct lysis protocol provided higher mean DNA yields than PrepFiler Express BTA on Automate Express, although the differences were not significant when using clothes worn in a controlled fashion as reference material (p=0.13 and p=0.34 for T-shirts and button-down shirts, respectively). Through in-house validation we show that the method is fit-for-purpose for application in casework, as it provides high DNA yields and amplifiability, as well as good reproducibility and DNA extract stability. After implementation in casework, the proportion of extracts with DNA concentrations above 0.01 ng/μL increased from 71% to 76%. Apart from providing higher DNA yields compared with the previous method, the introduction of the developed direct lysis protocol also reduced the amount of manual labour by half and doubled the potential throughput for tapes at the laboratory. Generally, simplified manual protocols can serve as a cost-effective alternative to sophisticated automation solutions when the aim is to enable high-throughput DNA extraction of complex crime scene samples. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Hutcheson, Joshua A.; Powless, Amy J.; Majid, Aneeka A.; Claycomb, Adair; Fritsch, Ingrid; Balachandran, Kartik; Muldoon, Timothy J.
Imaging cells in a microfluidic chamber with an area scan camera is difficult due to motion blur and data loss during frame readout causing discontinuity of data acquisition as cells move at relatively high speeds through the chamber. We have developed a method to continuously acquire high-resolution images of cells in motion through a microfluidics chamber using a high-speed line scan camera. The sensor acquires images in a line-by-line fashion in order to continuously image moving objects without motion blur. The optical setup comprises an epi-illuminated microscope with a 40X oil immersion, 1.4 NA objective and a 150 mm tube lens focused on a microfluidic channel. Samples containing suspended cells fluorescently stained with 0.01% (w/v) proflavine in saline are introduced into the microfluidics chamber via a syringe pump; illumination is provided by a blue LED (455 nm). Images were taken of samples at the focal plane using an ELiiXA+ 8k/4k monochrome line-scan camera at a line rate of up to 40 kHz. The system's line rate and fluid velocity are tightly controlled to reduce image distortion and are validated using fluorescent microspheres. Image acquisition was controlled via MATLAB's Image Acquisition toolbox. Data sets comprise discrete images of every detectable cell which may be subsequently mined for morphological statistics and definable features by a custom texture analysis algorithm. This high-throughput screening method, comparable to cell counting by flow cytometry, provided efficient examination including counting, classification, and differentiation of saliva, blood, and cultured human cancer cells.
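The requirement that the system's line rate and fluid velocity be "tightly controlled" follows from a simple relation: the camera must read one line in the time a cell moves one object-plane pixel, or the image is stretched or compressed. A minimal sketch of that relation, with illustrative numbers (the sensor pixel pitch is an assumption, not given above):

```python
def required_line_rate_hz(flow_velocity_um_s, sensor_pixel_um, magnification):
    """Line rate needed so each acquired line maps to one object-plane pixel,
    avoiding stretch/compression of cells moving through the channel."""
    object_pixel_um = sensor_pixel_um / magnification  # pixel pitch at the sample
    return flow_velocity_um_s / object_pixel_um

# e.g. a hypothetical 5 um sensor pixel behind a 40X objective samples 0.125 um
# at the object plane; a 5 mm/s flow then requires a 40 kHz line rate.
print(required_line_rate_hz(5000.0, 5.0, 40.0))  # 40000.0 Hz
```

Running the same calculation in reverse (solving for velocity at a fixed line rate) is how fluorescent microspheres of known size can be used to validate the calibration.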
Brown, James M; Horner, Neil R; Lawson, Thomas N; Fiegel, Tanja; Greenaway, Simon; Morgan, Hugh; Ring, Natalie; Santos, Luis; Sneddon, Duncan; Teboul, Lydia; Vibert, Jennifer; Yaikhom, Gagarine; Westerberg, Henrik; Mallon, Ann-Marie
High-throughput phenotyping is a cornerstone of numerous functional genomics projects. In recent years, imaging screens have become increasingly important in understanding gene-phenotype relationships in studies of cells, tissues and whole organisms. Three-dimensional (3D) imaging has risen to prominence in the field of developmental biology for its ability to capture whole embryo morphology and gene expression, as exemplified by the International Mouse Phenotyping Consortium (IMPC). Large volumes of image data are being acquired by multiple institutions around the world that encompass a range of modalities, proprietary software and metadata. To facilitate robust downstream analysis, images and metadata must be standardized to account for these differences. As an open scientific enterprise, making the data readily accessible is essential so that members of biomedical and clinical research communities can study the images for themselves without the need for highly specialized software or technical expertise. In this article, we present a platform of software tools that facilitate the upload, analysis and dissemination of 3D images for the IMPC. Over 750 reconstructions from 80 embryonic lethal and subviable lines have been captured to date, all of which are openly accessible at mousephenotype.org. Although designed for the IMPC, all software is available under an open-source licence for others to use and develop further. Ongoing developments aim to increase throughput and improve the analysis and dissemination of image data. Furthermore, we aim to ensure that images are searchable so that users can locate relevant images associated with genes, phenotypes or human diseases of interest. © The Author 2016. Published by Oxford University Press.
Ivo D Dinov
Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate and disseminate novel scientific methods, computational resources and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval and aggregation. Computational processing involves the necessary software, hardware and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical and phenotypic data and meta-data. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure.
Small RNAs (sRNAs) of 20 to 25 nucleotides (nt) in length maintain genome integrity and control gene expression in a multitude of developmental and physiological processes. Although RNA silencing has been primarily studied in model plants, the advent of high-throughput sequencing technologies has enabled profiling of the sRNA component of more than 40 plant species. Here, we used deep sequencing and molecular methods to report the first inventory of sRNAs in olive (Olea europaea L.). sRNA libraries prepared from juvenile and adult shoots revealed that the 24-nt class dominates the sRNA transcriptome and atypically accumulates to levels never seen in other plant species, suggesting an active role of heterochromatin silencing in the maintenance and integrity of its large genome. A total of 18 known miRNA families were identified in the libraries. Also, 5 other sRNAs derived from potential hairpin-like precursors remain plausible miRNA candidates. RNA blots confirmed miRNA expression and suggested tissue- and/or developmental-specific expression patterns. Target mRNAs of conserved miRNAs were computationally predicted among the olive cDNA collection and experimentally validated through endonucleolytic cleavage assays. Finally, we used expression data to uncover genetic components of the miR156, miR172 and miR390/TAS3-derived trans-acting small interfering RNA (tasiRNA) regulatory nodes, suggesting that these interactive networks controlling developmental transitions are fully operational in olive.
Gonzalo H Villarino
Salinity and drought stress are the primary causes of crop losses worldwide. In sodic saline soils, sodium chloride (NaCl) disrupts normal plant growth and development. The complex interactions of plant systems with abiotic stress have made RNA sequencing a more holistic and appealing approach to study transcriptome-level responses in a single cell and/or tissue. In this work, we determined the Petunia transcriptome response to NaCl stress by sequencing leaf samples and assembling 196 million Illumina reads with Trinity software. Using our reference transcriptome we identified more than 7,000 genes that were differentially expressed within 24 h of acute NaCl stress. The proposed transcriptome can also be used as an excellent tool for biological and bioinformatics research in the absence of an available Petunia genome, and it is available at the SOL Genomics Network (SGN; http://solgenomics.net). Genes related to regulation of reactive oxygen species, transport, and signal transduction, as well as novel and undescribed transcripts, were among those differentially expressed in response to salt stress. The candidate genes identified in this study can be applied as markers for breeding or to genetically engineer plants to enhance salt tolerance. Gene Ontology analyses indicated that most of the NaCl damage occurred by 24 h, inducing genotoxicity and affecting transport and organelles due to the high concentration of Na+ ions. Finally, we report a modification to the library preparation protocol whereby cDNA samples were bar-coded with non-HPLC-purified primers, without affecting the quality and quantity of the RNA-seq data. The methodological improvement presented here could substantially reduce the cost of sample preparation for future high-throughput RNA sequencing experiments.
Tombuloglu, Guzin; Tombuloglu, Huseyin; Sakcali, M Serdal; Unver, Turgay
Boron (B) is an essential micronutrient for optimum plant growth. However, above a certain threshold B is toxic and causes yield loss in agricultural lands. While a number of studies have been conducted to understand the B tolerance mechanism, a transcriptome-wide approach for B-tolerant barley is performed here for the first time. A high-throughput RNA-Seq (cDNA) sequencing technology (Illumina) was used with barley (Hordeum vulgare), yielding 208 million clean reads. In total, 256,874 unigenes were generated and assigned to known peptide databases: Gene Ontology (GO) (99,043), Swiss-Prot (38,266), Clusters of Orthologous Groups (COG) (26,250), and the Kyoto Encyclopedia of Genes and Genomes (KEGG) (36,860), as determined by BLASTx search. According to the digital gene expression (DGE) analyses, 16% and 17% of the transcripts were found to be differentially regulated in root and leaf tissues, respectively. Most of them were involved in cell wall, stress response, membrane, protein kinase and transporter mechanisms. Some of the genes detected as highly expressed in root tissue are phospholipases, predicted divalent heavy-metal cation transporters, formin-like proteins and calmodulin/Ca(2+)-binding proteins. In addition, chitin-binding lectin precursor, ubiquitin carboxyl-terminal hydrolase, and serine/threonine-protein kinase AFC2 genes were indicated to be highly regulated in leaf tissue upon excess B treatment. Some pathways, such as the Ca(2+)-calmodulin system, are activated in response to B toxicity. The differential regulation of 10 transcripts was confirmed by qRT-PCR, revealing the tissue-specific responses against B toxicity and their putative function in B-tolerance mechanisms. Copyright © 2014. Published by Elsevier B.V.
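Differential regulation "confirmed by qRT-PCR" is typically quantified with the 2^-ΔΔCt method; the sketch below shows that calculation with hypothetical Ct values (the reference gene, cycle numbers, and 100% primer efficiency are assumptions, not the study's data).

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCt method.
    Each ΔCt normalizes the target gene to a reference (housekeeping) gene;
    ΔΔCt then compares treated vs. control. Assumes ~100% primer efficiency."""
    ddct = (ct_target_treated - ct_ref_treated) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# A transcript whose Ct drops by 2 cycles relative to the reference gene under
# excess-B treatment is ~4-fold up-regulated (illustrative Ct values):
print(fold_change_ddct(22.0, 18.0, 24.0, 18.0))  # 4.0
```

Values below 1.0 from the same formula indicate down-regulation, which is how tissue-specific responses in root vs. leaf would be read off.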
Andersen, Lars Dyrskjøt; Zieger, Karsten; Ørntoft, Torben Falck
individually contributed to the management of the disease. However, the development of high-throughput techniques for simultaneous assessment of a large number of markers has allowed classification of tumors into clinically relevant molecular subgroups beyond those possible by pathological classification. Here, we review the recent advances in high-throughput molecular marker identification for superficial and invasive bladder cancers.
High-throughput techniques strive to identify new biomarkers that will be useful for the diagnosis, treatment, and prevention of tuberculosis (TB). However, their analysis and interpretation pose considerable challenges. Recent developments in the high-throughput detection of host biomarkers in TB are reported in this review.
Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays
DE DeGroot, RS Thomas, and SO Simmons
National Center for Computational Toxicology, US EPA, Research Triangle Park, NC USA
The EPA’s ToxCast program utilizes a wide variety of high-throughput s...
High-throughput genomic assays empower us to study the entire human genome in a short time at reasonable cost. Formalin-fixed, paraffin-embedded (FFPE) tissue processing remains the most economical approach for longitudinal tissue specimen storage. Therefore, the ability to apply high-throughput genomic applications to FFPE specimens can expand clinical assays and discovery. Many studies have measured the accuracy and repeatability of data generated from FFPE specimens using high-throughput genomic assays. Together, these studies demonstrate feasibility and provide crucial guidance for future studies using FFPE specimens. Here, we summarize the findings of these studies and discuss the limitations of high-throughput data generated from FFPE specimens across several platforms that include microarray, high-throughput sequencing, and NanoString.
Yang, Chao; Che, You; Qi, Yan; Liang, Peixin; Song, Cunjiang
This study aimed to investigate the effect of the fast cooling process on the microbiological community in chilled fresh pork during storage. We established a culture-independent method to study viable microbes in raw pork. Tray-packaged fresh pork and chilled fresh pork were completely spoiled after 18 and 49 d in aseptic bags at 4 °C, respectively. 16S/18S ribosomal RNAs were reverse transcribed to cDNA to characterize the activity of viable bacteria/fungi in the 2 types of pork. Both cDNA and total DNA were analyzed by high-throughput sequencing, which revealed that viable Bacteroides sp. were the most active genus in rotten pork, although viable Myroides sp. and Pseudomonas sp. were also active. Moreover, viable fungi were only detected in chilled fresh pork. The sequencing results revealed that the fast cooling process could suppress the growth of microbes present initially in the raw meat to extend its shelf life. Our results also suggested that fungi associated with pork spoilage could not grow well in aseptic tray-packaged conditions. © 2016 Institute of Food Technologists®.
Huang, Rui; Chen, Hui; Zhong, Chao; Kim, Jae Eung; Zhang, Yi-Heng Percival
Coenzyme engineering that changes NAD(P) selectivity of redox enzymes is an important tool in metabolic engineering, synthetic biology, and biocatalysis. Here we developed a high throughput screening method to identify mutants of 6-phosphogluconate dehydrogenase (6PGDH) from a thermophilic bacterium Moorella thermoacetica with reversed coenzyme selectivity from NADP+ to NAD+. Colonies of a 6PGDH mutant library growing on the agar plates were treated with heat to minimize background noise, that is, to deactivate intracellular dehydrogenases, degrade inherent NAD(P)H, and disrupt the cell membrane. The melted agarose solution containing a redox dye tetranitroblue tetrazolium (TNBT), phenazine methosulfate (PMS), NAD+, and 6-phosphogluconate was carefully poured on colonies, forming a second semi-solid layer. More active 6PGDH mutants were examined via an enzyme-linked TNBT-PMS colorimetric assay. Positive mutants were recovered by direct extraction of plasmid from dead cell colonies followed by plasmid transformation into E. coli TOP10. By utilizing this double-layer screening method, six positive mutants were obtained from two-round saturation mutagenesis. The best mutant 6PGDH A30D/R31I/T32I exhibited a 4,278-fold reversal of coenzyme selectivity from NADP+ to NAD+. This screening method could be widely used to detect numerous redox enzymes, particularly for thermophilic ones, which can generate NAD(P)H reacted with the redox dye TNBT. PMID:27587230
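A "fold reversal of coenzyme selectivity" is conventionally computed from the ratio of catalytic efficiencies (kcat/Km) toward each coenzyme, comparing mutant to wild type. The sketch below uses hypothetical efficiencies chosen for illustration, not the paper's measured values.

```python
def selectivity_ratio(kcat_km_nad, kcat_km_nadp):
    """Coenzyme selectivity expressed as (kcat/Km)_NAD+ / (kcat/Km)_NADP+."""
    return kcat_km_nad / kcat_km_nadp

def fold_reversal(wt_nad, wt_nadp, mut_nad, mut_nadp):
    """Fold change in coenzyme selectivity between mutant and wild type."""
    return selectivity_ratio(mut_nad, mut_nadp) / selectivity_ratio(wt_nad, wt_nadp)

# Hypothetical catalytic efficiencies (arbitrary units): a wild type 100-fold
# selective for NADP+ and a mutant 40-fold selective for NAD+ gives a
# 4,000-fold reversal, the same kind of figure reported for 6PGDH A30D/R31I/T32I.
print(round(fold_reversal(1.0, 100.0, 20.0, 0.5), 3))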
Chanock Stephen J
Background The accuracy and precision of estimates of DNA concentration are critical factors for efficient use of DNA samples in high-throughput genotype and sequence analyses. We evaluated the performance of spectrophotometric (OD) DNA quantification and compared it to two fluorometric quantification methods, the PicoGreen® assay (PG) and a novel real-time quantitative genomic PCR assay (QG) specific to a region at the human BRCA1 locus. Twenty-two lymphoblastoid cell line DNA samples with an initial concentration of ~350 ng/μL were diluted to 20 ng/μL. DNA concentration was estimated by OD and further diluted to 5 ng/μL. The concentrations of multiple aliquots of the final dilution were measured by the OD, QG and PG methods. The effects of manual and robotic laboratory sample handling procedures on the estimates of DNA concentration were assessed using variance components analyses. Results The OD method was the DNA quantification method most concordant with the reference sample among the three methods evaluated. A large fraction of the total variance for all three methods (36.0–95.7%) was explained by sample-to-sample variation, whereas the amount of variance attributable to sample handling was small (0.8–17.5%). Residual error (3.2–59.4%), corresponding to un-modelled factors, contributed more to the total variation than the sample handling procedures. Conclusion The application of a specific DNA quantification method to a particular molecular genetic laboratory protocol must take into account the accuracy and precision of the specific method, as well as the requirements of the experimental workflow with respect to sample volumes and throughput. While OD was the most concordant and precise DNA quantification method in this study, the information provided by the quantitative PCR assay regarding the suitability of DNA samples for PCR may be an essential factor for some protocols, despite the decreased concordance and
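The variance components analysis mentioned above can be sketched, for a balanced one-way design, with the classical method-of-moments estimators derived from the ANOVA mean squares. The replicate readings below are made up for illustration; a real analysis would also model the handling factors.

```python
def variance_components(groups):
    """One-way random-effects variance components (method of moments) for a
    balanced design: `groups` is one list of replicate measurements per DNA
    sample. Returns (between-sample variance, residual variance)."""
    k = len(groups)
    n = len(groups[0])  # replicates per sample (balanced design assumed)
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    ms_between = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    ms_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    # E[MS_between] = n * var_between + var_within  =>  solve for var_between
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_between, ms_within

# Three samples, triplicate concentration reads (ng/uL, made-up numbers):
reads = [[5.1, 5.0, 5.2], [4.6, 4.7, 4.5], [5.4, 5.5, 5.3]]
vb, vw = variance_components(reads)
print(vb > vw)  # True: sample-to-sample variation dominates residual noise
```

A partition like this is what lets the study attribute most of the total variance to sample-to-sample variation rather than to handling.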
Eckmann David M
Background Anesthetic sensitivity is determined by the interaction of multiple genes. Hence, a dissection of genetic contributors would be aided by precise and high-throughput behavioral screens. Traditionally, anesthetic phenotyping has addressed only induction of anesthesia, evaluated with dose-response curves, while ignoring potentially important data on emergence from anesthesia. Methods We designed and built a controlled-environment apparatus to permit rapid phenotyping of twenty-four mice simultaneously. We used the loss of righting reflex to indicate anesthetic-induced unconsciousness. After fitting the data to a sigmoidal dose-response curve with variable slope, we calculated the MACLORR (EC50), the Hill coefficient, and the 95% confidence intervals bracketing these values. Upon termination of the anesthetic, Emergence timeRR was determined and expressed as the mean ± standard error for each inhaled anesthetic. Results In agreement with several previously published reports, we find that the MACLORR of halothane, isoflurane, and sevoflurane in 8–12 week old C57BL/6J mice is 0.79% (95% confidence interval = 0.78–0.79%), 0.91% (95% confidence interval = 0.90–0.93%), and 1.96% (95% confidence interval = 1.94–1.97%), respectively. Hill coefficients for halothane, isoflurane, and sevoflurane are 24.7 (95% confidence interval = 19.8–29.7), 19.2 (95% confidence interval = 14.0–24.3), and 33.1 (95% confidence interval = 27.3–38.8), respectively. After roughly 2.5 MACLORR • hr exposures, mice take 16.00 ± 1.07, 6.19 ± 0.32, and 2.15 ± 0.12 minutes to emerge from halothane, isoflurane, and sevoflurane, respectively. Conclusion This system enabled assessment of inhaled anesthetic responsiveness with a higher precision than that previously reported. It is broadly adaptable for delivering an inhaled therapeutic (or toxin) to a population while monitoring its vital signs, motor reflexes, and providing precise control
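Fitting "a sigmoidal dose-response curve with variable slope" to quantal loss-of-righting-reflex data can be sketched with a two-point logit estimate of the EC50 (the MACLORR) and the Hill coefficient. This is a back-of-envelope shortcut, not the study's fitting procedure, and the input fractions are illustrative, merely chosen to land near the halothane values reported above.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def hill_fit_two_point(c1, f1, c2, f2):
    """Two-point Hill fit from quantal data: f1, f2 are the fractions of mice
    showing loss of righting reflex at concentrations c1 < c2 (in %, atm, etc.).
    Returns (EC50, Hill coefficient). A real fit would use every tested dose.
    For f = c^n / (c^n + EC50^n):  logit(f) = n * ln(c / EC50)."""
    n = (logit(f2) - logit(f1)) / (math.log(c2) - math.log(c1))
    ec50 = c1 * math.exp(-logit(f1) / n)
    return ec50, n

# Illustrative halothane-like data points (not the study's raw numbers):
ec50, n = hill_fit_two_point(0.75, 0.2, 0.85, 0.9)
print(round(ec50, 2), round(n, 1))
```

The very steep slope (Hill coefficient in the tens) is what makes anesthetic induction behave almost like a threshold, which is why narrow confidence intervals on the EC50 are achievable.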
Naouale El Yamani
The unique physicochemical properties of engineered nanomaterials (NMs) have accelerated their use in diverse industrial and domestic products. Although their presence in consumer products represents a major concern for public health safety, their potential impact on human health is poorly understood. There is therefore an urgent need to clarify the toxic effects of NMs and to elucidate the mechanisms involved. In view of the large number of NMs currently being used, high-throughput (HTP) screening technologies are clearly needed for efficient assessment of toxicity. The comet assay is the most used method in nanogenotoxicity studies and has great potential for increasing throughput as it is fast, versatile and robust; simple technical modifications of the assay make it possible to test many compounds (NMs) in a single experiment. The standard gel of 70-100 μL contains thousands of cells, of which only a tiny fraction are actually scored. Reducing the gel to a volume of 5 μL, with just a few hundred cells, allows twelve gels to be set on a standard slide, or 96 as a standard 8x12 array. For the 12-gel format, standard slides precoated with agarose are placed on a metal template and gels are set on the positions marked on the template. The HTP comet assay, incorporating digestion of DNA with formamidopyrimidine DNA glycosylase (FPG) to detect oxidised purines, has recently been applied to study the potential induction of genotoxicity by NMs via reactive oxygen. In the NanoTEST project we investigated the genotoxic potential of several well-characterized metal and polymeric nanoparticles with the comet assay. All in vitro studies were harmonized; i.e. NMs were from the same batch, and identical dispersion protocols, exposure times, concentration ranges, culture conditions, and time-courses were used. As a kidney model, Cos-1 fibroblast-like kidney cells were treated with different concentrations of iron oxide NMs, and cells embedded in minigels (12
Background: siRNA technology is a promising tool for gene therapy of vascular disease. Due to the multitude of reagents and cell types, RNAi experiment optimization can be time-consuming. In this study, adherent cell cytometry was used to rapidly optimize siRNA transfection in human aortic vascular smooth muscle cells (AoSMC). Methods: AoSMC were seeded at a density of 3000-8000 cells/well of a 96-well plate. 24 hours later, AoSMC were transfected with either non-targeting unlabeled siRNA (50 nM) or non-targeting labeled siRNA, siGLO Red (5 or 50 nM), using no transfection reagent, HiPerfect or Lipofectamine RNAiMax. For counting cells, Hoechst nuclei stain or Cell Tracker green was used. For data analysis, an adherent cell cytometer (Celigo®) was used. Data were normalized to the transfection-reagent-alone group and expressed as red pixel count/cell. Results: After 24 hours, none of the transfection conditions led to cell loss. Red fluorescence counts were normalized to the AoSMC count. RNAiMax was more potent than HiPerfect or no transfection reagent at 5 nM siGLO Red (4.12 ± 1.04 vs. 0.70 ± 0.26 vs. 0.15 ± 0.13 red pixels/cell) and at 50 nM siGLO Red (6.49 ± 1.81 vs. 2.52 ± 0.67 vs. 0.34 ± 0.19). Fluorescence expression results supported gene knockdown achieved by using MARCKS-targeting siRNA in AoSMCs. Conclusion: This study underscores that RNAi delivery depends heavily on the choice of delivery method. Adherent cell cytometry can be used as a high-throughput screening tool for the optimization of RNAi assays. This technology can accelerate in vitro cell assays and thus save costs.
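The per-well normalization described above (red pixel count divided by cell count, then expressed relative to the reagent-alone control) can be sketched as follows. This is a minimal illustration, not the Celigo analysis pipeline; the well values below are hypothetical.

```python
# Sketch of the normalization described in the abstract: fluorescence
# (red pixel count) is divided by the cell count in each well, then
# expressed relative to the mean of the transfection-reagent-alone
# control wells. All numbers below are invented for illustration.

def red_pixels_per_cell(red_pixels, cell_count):
    """Fluorescence signal normalized to the number of cells in a well."""
    return red_pixels / cell_count

def normalize_to_control(sample_wells, control_wells):
    """Express each sample well relative to the mean reagent-alone control."""
    control_mean = sum(
        red_pixels_per_cell(r, c) for r, c in control_wells
    ) / len(control_wells)
    return [red_pixels_per_cell(r, c) / control_mean for r, c in sample_wells]

# Hypothetical wells: (red pixel count, cell count)
controls = [(300, 3000), (330, 3300)]      # reagent alone, no labeled siRNA
samples = [(12000, 3000), (20000, 4000)]   # siGLO Red + transfection reagent

ratios = normalize_to_control(samples, controls)
```

The control-relative ratio makes wells with different seeding densities directly comparable, which is the point of the red pixel/cell metric in the study.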
Harrison, Joe J; Turner, Raymond J; Ceri, Howard
Background: Microbial biofilms exist all over the natural world, a distribution that is paralleled by metal cations and oxyanions. Despite this, very few studies have examined how biofilms withstand exposure to these toxic compounds. This article describes a batch culture technique for biofilm and planktonic cell metal susceptibility testing using the MBEC assay. This device is compatible with standard 96-well microtiter plate technology. As part of this method, a two-part, metal-specific neutralization protocol is summarized. This procedure minimizes residual biological toxicity arising from the carry-over of metals from challenge to recovery media. Neutralization consists of treating cultures with a chemical compound known to react with or to chelate the metal. Treated cultures are plated onto rich agar to allow metal complexes to diffuse into the recovery medium while bacteria remain on top to recover. Two difficulties associated with metal susceptibility testing were the focus of two applications of this technique. First, assays were calibrated to allow comparisons of the susceptibility of different organisms to metals. Second, the effects of exposure time and growth medium composition on the susceptibility of E. coli JM109 biofilms to metals were investigated. Results: This high-throughput method generated 96 statistically equivalent biofilms in a single device and thus allowed for comparative and combinatorial experiments of media, microbial strains, exposure times and metals. By adjusting growth conditions, it was possible to examine biofilms of different microorganisms that had similar cell densities. In one example, Pseudomonas aeruginosa ATCC 27853 was up to 80 times more resistant to heavy metalloid oxyanions than Escherichia coli TG1. Further, biofilms were up to 133 times more tolerant to tellurite (TeO32-) than corresponding planktonic cultures. Regardless of the growth medium, the tolerance of biofilm and planktonic cell E. coli JM109 to metals
Yamauchi, Fumio; Kato, Koichi; Iwata, Hiroo
Functional characterization of human genes is one of the most challenging tasks in current genomics. Owing to the large number of newly discovered genes, high-throughput methodologies are greatly needed to express each gene in parallel in living cells. To develop a method that allows efficient transfection of plasmids into adherent cells in a spatially and temporally specific manner, we studied electric pulse-triggered gene transfer using a plasmid-loaded electrode. A plasmid was loaded on a gold e...
Sales, Kevin C; Rosa, Filipa; Cunha, Bernardo R; Sampaio, Pedro N; Lopes, Marta B; Calado, Cecília R C
Escherichia coli is one of the most widely used host microorganisms for the production of recombinant products, such as heterologous proteins and plasmids. However, genetic, physiological and environmental factors influence plasmid replication and cloned gene expression in a highly complex way. To control and optimize the performance of the recombinant expression system, it is very important to understand this complexity. Therefore, the development of rapid, highly sensitive and economical analytical methodologies, which enable the simultaneous characterization of heterologous product synthesis and physiological cell behavior under a variety of culture conditions, is highly desirable. To this end, the metabolic profile of recombinant E. coli cultures producing the pVAX-lacZ model plasmid was analyzed by rapid, economical and high-throughput Fourier transform mid-infrared (FT-MIR) spectroscopy. The main goal of the present work is to show that simultaneous multivariate data analysis by principal component analysis (PCA) and direct spectral analysis can represent a very interesting tool to monitor E. coli culture processes and acquire relevant information according to current quality regulatory guidelines. While PCA allowed capturing the energetic metabolic state of the cells, e.g. by identifying different C-source consumption phases, direct FT-MIR spectral analysis allowed obtaining valuable biochemical and metabolic information along the cell culture, e.g. on lipids, RNA, and protein synthesis and turnover metabolism. The information achieved by multivariate data and direct spectral analyses complement each other and may contribute to understanding the complex interrelationships between recombinant cell metabolism and the bioprocess environment, towards the design of more economical and robust processes according to the Quality by Design framework. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 33:285-298, 2017.
Dyer, Philip; Dean, Samuel; Sunter, Jack
Improvements in mass spectrometry, sequencing and bioinformatics have generated large datasets of potentially interesting genes. Tagging these proteins can give insights into their function by determining their localization within the cell and enabling interaction partner identification. We recently published a fast and scalable method to generate Trypanosoma brucei cell lines that express a tagged protein from the endogenous locus. The method was based on a plasmid we generated that, when coupled with long primer PCR, can be used to modify a gene to encode a protein tagged at either terminus. This allows the tagging of dozens of trypanosome proteins in parallel, facilitating the large-scale validation of candidate genes of interest. This system can be used to tag proteins for localization (using a fluorescent protein, epitope tag or electron microscopy tag) or biochemistry (using tags for purification, such as the TAP (tandem affinity purification) tag). Here, we describe a protocol to perform the long primer PCR and the electroporation in 96-well plates, with the recovery and selection of transgenic trypanosomes occurring in 24-well plates. With this workflow, hundreds of proteins can be tagged in parallel; this is an order-of-magnitude improvement over our previous protocol, and genome-scale tagging is now possible.
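The long-primer approach above relies on primers that fuse a gene-specific homology arm to a constant sequence annealing to the tagging plasmid. The sketch below illustrates that construction only; the 80 nt arm length is a common choice, and the annealing sequences and gene sequence are placeholders, not the published ones.

```python
# Illustrative construction of long tagging primers: each primer is a
# gene-specific homology arm (here 80 nt) fused to a constant sequence
# that anneals to the tagging plasmid. All sequences below are
# placeholders for illustration, not the published primer designs.

ARM_LEN = 80  # homology arm length; actual protocols vary

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def tagging_primers(upstream, downstream, fwd_anneal, rev_anneal):
    """Forward primer: last ARM_LEN nt before the insertion site + fwd anneal.
    Reverse primer: reverse complement of the first ARM_LEN nt after the
    insertion site + rev anneal."""
    fwd = upstream[-ARM_LEN:] + fwd_anneal
    rev = revcomp(downstream[:ARM_LEN]) + rev_anneal
    return fwd, rev

up = "ACGT" * 30    # placeholder 120 nt upstream of the insertion site
down = "TTGCA" * 24 # placeholder 120 nt downstream of the insertion site
fwd, rev = tagging_primers(up, down, "GGTACC", "GAGCTC")
```

Generating such primer pairs programmatically for a gene list is what makes tagging dozens of targets in parallel practical.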
With the severe acute respiratory syndrome epidemic of 2003 and renewed attention on avian influenza pandemics, new surveillance systems are needed for the earlier detection of emerging infectious diseases. We applied a "next-generation" parallel sequencing platform for viral detection in nasopharyngeal and fecal samples collected during seasonal influenza virus (Flu) infections and norovirus outbreaks from 2005 to 2007 in Osaka, Japan. Random RT-PCR was performed to amplify RNA extracted from 0.1-0.25 ml of nasopharyngeal aspirates (N = 3) and fecal specimens (N = 5), and more than 10 μg of cDNA was synthesized. Unbiased high-throughput sequencing of these 8 samples yielded 15,298-32,335 (average 24,738) reads in a single 7.5 h run. In nasopharyngeal samples, although whole-genome analysis was not possible because the majority (>90%) of reads were host genome-derived, 20-460 Flu reads were detected, which was sufficient for subtype identification. In fecal samples, bacteria and host cells were removed by centrifugation, yielding 484-15,260 reads of norovirus sequence (78-98% of the whole genome was covered), except for one specimen that was under-detectable by RT-PCR. These results suggest that our unbiased high-throughput sequencing approach is useful for directly detecting pathogenic viruses without prior genetic information. Although its cost and technological availability make it unlikely that this system will very soon become the diagnostic standard worldwide, it could be useful for the earlier discovery of novel emerging viruses and bioterrorism agents, which are difficult to detect with conventional procedures.
Kwon, Keehwan; Nagarajan, Rajesh; Stivers, James T
Vaccinia type I DNA topoisomerase exhibits a strong site-specific ribonuclease activity when provided with a DNA substrate that contains a single uridine ribonucleotide within a duplex DNA containing the sequence 5' CCCTU 3'. The reaction involves two steps: attack of the active-site tyrosine nucleophile of topo I at the 3' phosphodiester of the uridine nucleotide to generate a covalent enzyme-DNA adduct, followed by nucleophilic attack of the uridine 2'-hydroxyl to release the covalently tethered enzyme. Here we report the first continuous spectroscopic assay for topoisomerase that allows monitoring of the ribonuclease reaction under multiple-turnover conditions. The assay is especially robust for high-throughput screening applications because sensitive molecular beacon technology is utilized, and the topoisomerase is released during the reaction to allow turnover of multiple substrate molecules by a single molecule of enzyme. Direct computer simulation of the fluorescence time courses was used to obtain the rate constants for substrate binding and release, covalent complex formation, and formation of the 2',3'-cyclic phosphodiester product of the ribonuclease reaction. The assay allowed rapid screening of a 500-member chemical library, from which several new inhibitors of topo I were identified with IC50 values in the range of 2-100 μM. Three of the most potent hits from the high-throughput screening were also found to inhibit plasmid supercoil relaxation by the enzyme, establishing the utility of the assay in identifying inhibitors of the biologically relevant DNA relaxation reaction. One of the most potent inhibitors of the vaccinia enzyme, 3-benzo[1,3]dioxol-5-yl-2-oxopropionic acid, did not inhibit the closely related human enzyme. The inhibitory mechanism of this compound is unique and involves a step required for recycling the enzyme for steady-state turnover.
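An IC50 like those reported above can be estimated from a percent-inhibition dose-response curve by log-linear interpolation between the two concentrations bracketing 50% inhibition. A minimal sketch follows; the dilution series and inhibition values are hypothetical, not data from this screen.

```python
import math

# Minimal IC50 estimate by log-linear interpolation between the two
# concentrations that bracket 50% inhibition. The inhibition values
# below are hypothetical, not the screening data from the study.

def ic50(concs_uM, pct_inhibition):
    """Interpolate the concentration giving 50% inhibition.
    Assumes concentrations sorted ascending and inhibition rising with dose."""
    for k in range(len(concs_uM) - 1):
        c1, c2 = concs_uM[k], concs_uM[k + 1]
        i1, i2 = pct_inhibition[k], pct_inhibition[k + 1]
        if i1 <= 50 <= i2:
            frac = (50 - i1) / (i2 - i1)
            # interpolate on a log-concentration axis
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical two-fold dilution series (μM) and measured % inhibition
concs = [1.25, 2.5, 5, 10, 20, 40]
inhib = [10, 22, 41, 63, 80, 91]
est = ic50(concs, inhib)
```

In practice a four-parameter logistic fit is the standard approach; the interpolation above is only a quick screening-grade estimate.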
Background: In recent years, several high-throughput cDNA sequencing projects have been funded worldwide with the aim of identifying and characterizing the structure of complete novel human transcripts. However, some of these cDNAs are error-prone due to frameshifts and stop codon errors caused by low sequence quality, or to cloning of truncated inserts, among other reasons. Therefore, accurate CDS prediction from these sequences first requires the identification of potentially problematic cDNAs in order to speed up the subsequent annotation process. Results: cDNA2Genome is an application for the automatic high-throughput mapping and characterization of cDNAs. It utilizes current annotation data and the most up-to-date databases, especially in the case of ESTs and mRNAs, in conjunction with a vast number of approaches to gene prediction in order to perform a comprehensive assessment of the cDNA exon-intron structure. The final result of cDNA2Genome is an XML file containing all relevant information obtained in the process. This XML output can easily be used for further analysis, such as program pipelines or the integration of results into databases. The web interface to cDNA2Genome also presents this data in HTML, where the annotation is additionally shown in graphical form. cDNA2Genome has been implemented under the W3H task framework, which allows the combination of bioinformatics tools in tailor-made analysis task flows as well as the sequential or parallel computation of many sequences for large-scale analysis. Conclusions: cDNA2Genome represents a new, versatile and easily extensible approach to the automated mapping and annotation of human cDNAs. The underlying approach allows sequential or parallel computation of sequences for high-throughput analysis of cDNAs.
techniques. Cells were patterned on the nanochannel array and electroporated collectively in parallel, injected with cargo in the Z-direction. Dose control was demonstrated by varying the external pulse duration at high throughput. The electrophoretically expedited delivery of large-molecular-weight plasmids was demonstrated with large numbers of primary cells simultaneously, which cannot be achieved with BEP or MEP. Two clinically valuable case studies were performed with our 3D NEP platform for living cell sensing and interrogation. (1) For in vitro transfection of primary cardiomyocytes, we studied the dose effects of miR-29 on mitochondrial changes and the suppression of the Mcl-1 gene in adult mouse cardiomyocytes by precisely controlling the injected miR-29 dose. (2) Glioma stem cells (GSCs), a cell type hypothesized to be highly aggressive and to lead to relapses of glioblastoma in the human brain, were studied at single-cell resolution on the 3D NEP platform. The developed 3D NEP system moves towards clinically oriented and user-friendly tools for life science applications. Batch-treated cells with controlled dosage delivery provide a useful tool for single-cell analysis. The pioneering experiments in this work have demonstrated 3D NEP for applications in cell reprogramming, adoptive immunotherapy, in vitro cardiomyocyte transfection and glioma stem cell studies.
Erica B Schleifman
Molecular profiling of tumor tissue to detect alterations, such as oncogenic mutations, plays a vital role in determining treatment options in oncology. Hence, there is an increasing need for a robust and high-throughput technology to detect oncogenic hotspot mutations. Although commercial assays are available to detect genetic alterations in single genes, often only a limited amount of tissue is available from patients, requiring multiplexing to allow for simultaneous detection of mutations in many genes using low DNA input. Even though next-generation sequencing (NGS) platforms provide powerful tools for this purpose, they face challenges such as high cost, large DNA input requirements, complex data analysis, and long turnaround times, limiting their use in clinical settings. We report the development of the next-generation mutation multi-analyte panel (MUT-MAP), a high-throughput microfluidic panel for detecting 120 somatic mutations across eleven genes of therapeutic interest (AKT1, BRAF, EGFR, FGFR3, FLT3, HRAS, KIT, KRAS, MET, NRAS, and PIK3CA) using allele-specific PCR (AS-PCR) and TaqMan technology. This mutation panel requires as little as 2 ng of high-quality DNA from fresh-frozen tissue or 100 ng of DNA from formalin-fixed paraffin-embedded (FFPE) tissue. Mutation calling, including an automated data analysis process, has been implemented to run 88 samples per day. Validation of this platform using plasmids showed robust signal and low cross-reactivity in all of the newly added assays, and mutation calls in cell line samples were found to be consistent with the Catalogue of Somatic Mutations in Cancer (COSMIC) database, allowing for direct comparison of our platform to Sanger sequencing. High correlation with NGS when compared to the SuraSeq500 panel run on the Ion Torrent platform in an FFPE dilution experiment showed assay sensitivity down to 0.45%. This multiplexed mutation panel is a valuable tool for high-throughput biomarker discovery in
National Aeronautics and Space Administration — In response to Topic S3.04 "Propulsion Systems," Busek Co. Inc. will develop a high throughput Hall effect thruster with a nominal peak power of 1-kW and wide...
Montalbano, Antonino; Canver, Matthew C; Sanjana, Neville E
The clustered regularly interspaced short palindromic repeats (CRISPR)-Cas nuclease system is a powerful tool for genome editing, and its simple programmability has enabled high-throughput genetic and epigenetic studies. These high-throughput approaches offer investigators a toolkit for functional interrogation of not only protein-coding genes but also noncoding DNA. Historically, noncoding DNA has lacked the detailed characterization that has been applied to protein-coding genes in large part because there has not been a robust set of methodologies for perturbing these regions. Although the majority of high-throughput CRISPR screens have focused on the coding genome to date, an increasing number of CRISPR screens targeting noncoding genomic regions continue to emerge. Here, we review high-throughput CRISPR-based approaches to uncover and understand functional elements within the noncoding genome and discuss practical aspects of noncoding library design and screen analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
Kuo, Tzu-Chi; Malvadkar, Niranjan A; Drumright, Ray; Cesaretti, Richard; Bishop, Matthew T
At The Dow Chemical Company, high-throughput research is an active area for developing new industrial coatings products. Using the principles of automation (i.e., using robotic instruments), parallel processing (i.e., preparing, processing, and evaluating samples in parallel), and miniaturization (i.e., reducing sample size), high-throughput tools for synthesizing, formulating, and applying coating compositions have been developed at Dow. In addition, high-throughput workflows for measuring various coating properties, such as cure speed, hardness development, scratch resistance, impact toughness, resin compatibility, pot life, and surface defects, have also been developed in-house. These workflows correlate well with the traditional coatings tests, but they do not necessarily mimic those tests. The use of such high-throughput workflows in combination with smart experimental designs allows accelerated discovery and commercialization.
Sozzani, Rosangela; Benfey, Philip N
High-throughput phenotyping approaches (phenomics) are being combined with genome-wide genetic screens to identify alterations in phenotype that result from gene inactivation. Here we highlight promising technologies for 'phenome-scale' analyses in multicellular organisms. PMID:21457493
Vázquez-Baeza, Yoshiki; Pirrung, Meg; Gonzalez, Antonio; Knight, Rob
As microbial ecologists take advantage of high-throughput sequencing technologies to describe microbial communities across ever-increasing numbers of samples, new analysis tools are required to relate...
Vervoort, Yannick; Linares, Alicia Gutiérrez; Roncoroni, Miguel; Liu, Chengxun; Steensels, Jan; Verstrepen, Kevin J
Genetic engineering and screening of large numbers of cells or populations is a crucial bottleneck in today's systems biology and applied (micro)biology. Instead of using standard methods in bottles, flasks or 96-well plates, scientists are increasingly relying on high-throughput strategies that miniaturize their experiments to the nanoliter and picoliter scale and the single-cell level. In this review, we summarize different high-throughput, system-wide genome engineering and screening strategies for microbes. More specifically, we emphasize the use of multiplex automated genome engineering (MAGE) and CRISPR/Cas systems for high-throughput genome engineering and the application of (lab-on-chip) nanoreactors for high-throughput single-cell or population screening. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Wählby, Carolina; Kamentsky, Lee; Liu, Zihan H; Riklin-Raviv, Tammy; Conery, Annie L; O'Rourke, Eyleen J; Sokolnicki, Katherine L; Visvikis, Orane; Ljosa, Vebjorn; Irazoqui, Javier E; Golland, Polina; Ruvkun, Gary; Ausubel, Frederick M; Carpenter, Anne E
We present a toolbox for high-throughput screening of image-based Caenorhabditis elegans phenotypes. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems. This WormToolbox is available through the open-source CellProfiler project and enables objective scoring of whole-worm high-throughput image-based assays of C. elegans for the study of diverse biological pathways that are relevant to human disease.
Inoue, Masaya; Inoue, Sozo; Nishida, Takeshi
In this paper, we propose a method for high-throughput human activity recognition from raw accelerometer data using a deep recurrent neural network (DRNN), and investigate various architectures and their combinations to find the best parameter values. Here, "high throughput" refers to a short recognition time. We investigated various parameters and architectures of the DRNN using a training dataset of 432 trials with 6 activity classes from 7 people. The maximum recognition ...
Luft, JR; Snell, EH; DeTitta, GT
Introduction: X-ray crystallography provides the majority of our structural biological knowledge at a molecular level and in terms of pharmaceutical design is a valuable tool to accelerate discovery. It is the premier technique in the field, but its usefulness is significantly limited by the need to grow well-diffracting crystals. It is for this reason that high-throughput crystallization has become a key technology that has matured over the past 10 years through the field of structural genomics. Areas covered: The authors describe their experiences in high-throughput crystallization screening in the context of structural genomics and the general biomedical community. They focus on the lessons learnt from the operation of a high-throughput crystallization screening laboratory, which to date has screened over 12,500 biological macromolecules. They also describe the approaches taken to maximize the success while minimizing the effort. Through this, the authors hope that the reader will gain an insight into the efficient design of a laboratory and protocols to accomplish high-throughput crystallization on a single-, multiuser-laboratory or industrial scale. Expert opinion: High-throughput crystallization screening is readily available but, despite the power of the crystallographic technique, getting crystals is still not a solved problem. High-throughput approaches can help when used skillfully; however, they still require human input in the detailed analysis and interpretation of results to be more successful. PMID:22646073
Blanca, José; Cañizares, Joaquín; Roig, Cristina; Ziarsolo, Pello; Nuez, Fernando; Picó, Belén
Cucurbita pepo belongs to the Cucurbitaceae family. The "Zucchini" types rank among the highest-valued vegetables worldwide, and other C. pepo and related Cucurbita spp. are food staples and rich sources of fat and vitamins. A broad range of genomic tools are today available for other cucurbits, which have become models for the study of different metabolic processes. However, these tools are still lacking in the Cucurbita genus, thus limiting gene discovery and the process of breeding. We report the generation of a total of 512,751 C. pepo EST sequences, using 454 GS FLX Titanium technology. ESTs were obtained from normalized cDNA libraries (root, leaf, and flower tissue) prepared using two varieties with contrasting phenotypes for plant, flowering and fruit traits, representing the two C. pepo subspecies: subsp. pepo cv. Zucchini and subsp. ovifera cv. Scallop. De novo assembly was performed to generate a collection of 49,610 Cucurbita unigenes (average length of 626 bp) that represent the first transcriptome of the species. Over 60% of the unigenes were functionally annotated and assigned to one or more Gene Ontology terms. The distributions of Cucurbita unigenes followed tendencies similar to those reported for Arabidopsis or melon, suggesting that the dataset may represent the whole Cucurbita transcriptome. About 34% of unigenes were detected to have known orthologs in Arabidopsis or melon, including genes potentially involved in disease resistance, flowering and fruit quality. Furthermore, a set of 1,882 unigenes with SSR motifs and 9,043 high-confidence SNPs between Zucchini and Scallop were identified, of which 3,538 SNPs met criteria for use with high-throughput genotyping platforms, and 144 could be detected as CAPS. A set of markers was validated, with 80% of them being polymorphic in a set of variable C. pepo and C. moschata accessions. We present the first broad survey of gene sequences and allelic variation in C. pepo, where limited prior genomic
Zhang, Yonghua [Iowa State Univ., Ames, IA (United States)
Sample preparation has been one of the major bottlenecks for many high-throughput analyses. The purpose of this research was to develop new sample preparation and integration approaches for DNA sequencing, PCR-based DNA analysis and combinatorial screening of homogeneous catalysis, based on multiplexed capillary electrophoresis with laser-induced fluorescence or imaging UV absorption detection. The author first introduced a method to integrate the front-end tasks into DNA capillary-array sequencers. Protocols for directly sequencing plasmids from a single bacterial colony in fused-silica capillaries were developed. After the colony was picked, lysis was accomplished in situ in the plastic sample tube using either a thermocycler or a heating block. Upon heating, the plasmids were released while chromosomal DNA and membrane proteins were denatured and precipitated to the bottom of the tube. After adding enzyme and Sanger reagents, the resulting solution was aspirated into the reaction capillaries by a syringe pump, and cycle sequencing was initiated. No deleterious effect on the reaction efficiency, the on-line purification system, or the capillary electrophoresis separation was observed, even though the crude lysate was used as the template. Multiplexed on-line DNA sequencing data from 8 parallel channels allowed base calling up to 620 bp with an accuracy of 98%. The entire system can be automatically regenerated for repeated operation. For PCR-based DNA analysis, it was demonstrated that capillary electrophoresis with UV detection can be used for DNA analysis starting from clinical samples without purification. After PCR amplification using cheek cell, blood or HIV-1 gag DNA, the reaction mixtures were injected into the capillary either on-line or off-line by base stacking. The protocol was also applied to capillary array electrophoresis. The use of cheaper detection, and the elimination of DNA sample purification before or after the PCR reaction, will make this approach an
José Miguel Blanca
Melon (Cucumis melo L.) ranks among the highest-valued fruit crops worldwide. Some genomic tools are available for this crop, including a Sanger transcriptome. We report the generation of 689,054 high-quality expressed sequence tags (ESTs) from two 454 sequencing runs, using normalized and non-normalized complementary DNA (cDNA) libraries prepared from four genotypes belonging to the two subspecies and the main commercial types. The 454 ESTs were combined with the available Sanger ESTs and de novo assembled into 53,252 unigenes. Over 63% of the unigenes were functionally annotated with Gene Ontology (GO) terms and 21% had known orthologs in Arabidopsis thaliana (L.) Heynh. The annotation distribution followed tendencies similar to those previously reported, suggesting that the dataset represents a fairly complete melon transcriptome. Furthermore, we identified a set of 3,298 unigenes with microsatellite motifs and 14,417 sequences with single nucleotide variants, of which 11,655 single nucleotide polymorphisms met criteria for use with high-throughput genotyping platforms, and 453 could be detected as cleaved amplified polymorphic sequences (CAPS). A set of markers was validated, 90% of them being polymorphic in a number of variable accessions. This transcriptome provides an invaluable new tool for biological research, more so as it includes transcripts not described previously. It is being used for genome annotation and has provided a large collection of markers that will speed up the process of breeding new melon varieties.
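Scanning unigenes for microsatellite (SSR) motifs, as done in the transcriptome surveys above, amounts to finding short tandem repeats in each sequence. A minimal regex-based sketch follows; the rule used here (a 2-6 nt unit repeated at least five times) is a simple illustrative convention, not necessarily the criteria applied in these studies.

```python
import re

# Sketch of an SSR (microsatellite) scan: find runs where a short unit
# of 2-6 nt is tandemly repeated. The threshold of >= 5 repeats is an
# illustrative convention, not the criterion from the study.

def find_ssrs(seq, min_repeats=5):
    """Return (start, unit, full_repeat) for each tandem repeat found.

    The lazy quantifier {2,6}? prefers the shortest repeat unit, and the
    backreference \\1 requires exact tandem copies of that unit."""
    pattern = re.compile(r"([ACGT]{2,6}?)\1{%d,}" % (min_repeats - 1))
    return [(m.start(), m.group(1), m.group(0)) for m in pattern.finditer(seq)]

# Toy sequence: a (AC)6 dinucleotide repeat and an (ATC)5 trinucleotide repeat
seq = "GGG" + "AC" * 6 + "TTT" + "ATC" * 5 + "GG"
hits = find_ssrs(seq)
```

Dedicated tools (e.g. MISA-style scanners) apply unit-length-specific repeat thresholds, but the core pattern-matching step is the same idea.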
Miyazaki, Nobuo; Kiyose, Norihiko; Akazawa, Yoko; Takashima, Mizuki; Hagihara, Yosihisa; Inoue, Naokazu; Matsuda, Tomonari; Ogawa, Ryu; Inoue, Seiya; Ito, Yuji
The antigen-binding domain of camelid dimeric heavy chain antibodies, known as VHH or Nanobody, has much potential in pharmaceutical and industrial applications. To establish an isolation process for antigen-specific VHHs, a VHH phage library with a diversity of 8.4 × 10(7) was constructed from cDNA of peripheral blood mononuclear cells of an alpaca (Lama pacos) immunized with a fragment of IZUMO1 (IZUMO1PFF) as a model antigen. By conventional biopanning, 13 antigen-specific VHHs were isolated. The amino acid sequences of these VHHs, designated as N-group VHHs, were very similar to each other (>93% identity). To find more diverse antibodies, we performed high-throughput sequencing (HTS) of the VHH genes. By comparing the frequency of each sequence before and after biopanning, we identified sequences whose frequencies were increased by biopanning. The top 100 such sequences were subjected to phylogenetic tree analysis. In total, 75% of them belonged to the N-group VHHs, but the others were phylogenetically distant from the N-group (non-N-group). Two of three VHHs selected from the non-N-group showed sufficient antigen-binding ability. These results suggest that biopanning followed by HTS provides a useful method for finding minor and diverse antigen-specific clones that cannot be identified by conventional biopanning. © The Authors 2015. Published by Oxford University Press on behalf of the Japanese Biochemical Society. All rights reserved.
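The before/after frequency comparison described in this abstract amounts to ranking sequences by fold enrichment. The sketch below is an illustrative reconstruction, not the authors' actual pipeline; the function name and the pseudocount (used so sequences absent before panning do not divide by zero) are assumptions.

```python
from collections import Counter

def enrichment_ranking(before, after, pseudocount=1):
    """Rank sequences by how much their frequency rose after biopanning.

    before, after: lists of sequence reads from the pre- and
    post-panning HTS runs. Returns sequences sorted by fold enrichment.
    """
    nb, na = len(before), len(after)
    cb, ca = Counter(before), Counter(after)
    scores = {}
    for seq in ca:
        freq_after = ca[seq] / na
        freq_before = (cb.get(seq, 0) + pseudocount) / (nb + pseudocount)
        scores[seq] = freq_after / freq_before
    return sorted(scores, key=scores.get, reverse=True)

before = ["A"] * 50 + ["B"] * 49 + ["C"]
after = ["A"] * 10 + ["B"] * 10 + ["C"] * 80
print(enrichment_ranking(before, after)[0])  # "C": rare before, dominant after
```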
The importance of single-cell level data is increasingly appreciated, and significant advances in this direction have been made in recent years. Common to these technologies is the need to physically segregate individual cells into containers, such as wells or chambers of a microfluidics chip. High-throughput Single-Cell Labeling (Hi-SCL) in drops is a novel method that uses drop-based libraries of oligonucleotide barcodes to index individual cells in a population. The use of drops as containers, and a microfluidics platform to manipulate them en masse, yields a highly scalable methodological framework. Once tagged, labeled molecules from different cells may be mixed without losing the cell-of-origin information. Here we demonstrate an application of the method for generating RNA-sequencing data for multiple individual cells within a population. Barcoded oligonucleotides are used to prime cDNA synthesis within drops. Barcoded cDNAs are then combined and subjected to second-generation sequencing. The data are deconvoluted based on the barcodes, yielding single-cell mRNA expression data. In a proof-of-concept set of experiments we show that this method yields data comparable to other existing methods, but with unique potential for assaying very large numbers of cells.
Cabrera-Bosquet, Llorenç; Crossa, José; von Zitzewitz, Jarislav; Serret, María Dolors; Araus, José Luis
Genomic selection (GS) and high-throughput phenotyping have recently been captivating the interest of the crop breeding community from both the public and private sectors worldwide. Both approaches promise to revolutionize the prediction of complex traits, including growth, yield and adaptation to stress. Whereas high-throughput phenotyping may help to improve the understanding of crop physiology, the most powerful techniques for high-throughput field phenotyping are empirical rather than analytical, and in this respect comparable to genomic selection. Despite the fact that the two methodological approaches represent the extremes of what is understood as the breeding process (phenotype versus genome), they both treat the targeted traits (e.g. grain yield, growth, phenology, plant adaptation to stress) as a black box instead of dissecting them into a set of secondary (i.e. physiological) traits putatively related to the target trait. GS and high-throughput phenotyping share this empirical approach, enabling breeders to use a genome profile or phenotype without understanding the underlying biology. This short review discusses the main aspects of both approaches and focuses on the case of genomic selection of maize flowering traits, and on near-infrared spectroscopy (NIRS) and plant spectral reflectance as high-throughput field phenotyping methods for complex traits such as crop growth and yield. © 2012 Institute of Botany, Chinese Academy of Sciences.
Bashalkhanov, Stanislav; Rajora, Om P
High-throughput DNA isolation from plants is a major bottleneck for most studies requiring large sample sizes. A variety of protocols have been developed for DNA isolation from plants. However, many species, including conifers, have high contents of secondary metabolites that interfere with the extraction process or the subsequent analysis steps. Here, we describe a procedure for high-throughput DNA isolation from conifers. We developed a high-throughput DNA extraction protocol for conifers using an automated liquid handler and a modified Qiagen MagAttract Plant Kit protocol. The modifications involve changes to the buffer system and improvements that nearly double the number of samples processed per kit, significantly reducing the overall costs. We describe two versions of the protocol: one for medium-throughput (MTP) and another for high-throughput (HTP) DNA isolation. The HTP version works from start to end in the industry-standard 96-well format, while the MTP version provides higher DNA yields per sample processed. We have successfully used the protocol for DNA extraction and genotyping of thousands of individuals of several spruce species and a pine species. A high-throughput system for DNA extraction from conifer needles and seeds has been developed and validated. The quality of the isolated DNA was comparable with that obtained from two commonly used methods: the silica-spin column and the classic CTAB protocol. Our protocol provides a fully automatable and cost-effective solution for processing large numbers of conifer samples.
Tenchini, M.L.; Bossi, E.; Marchetti, L.; Malcovati, M. (Universita di Milano Via Viotti (Italy)); Lorenzetti, R. (M.M.D.R.I. Via R. Lepetit, Gerenzano (Italy))
A clone for C-reactive protein (CRP) has been isolated from a human liver cDNA library; this clone harbors a plasmid, pC81, which has an insert of 1631 bp. When compared to genomic and cDNA sequences published to date, pC81 reveals homologies and differences that might help to clarify the structure of this gene and the presence of allelic variants in man.
Leek, Jeffrey T; Johnson, W Evan; Parker, Hilary S; Jaffe, Andrew E; Storey, John D
Heterogeneity and latent variables are now widely recognized as major sources of bias and variability in high-throughput experiments. The best-known source of latent variation in genomic experiments is batch effects, which arise when samples are processed on different days, in different groups, or by different people. However, there are also many other variables that may have a major impact on high-throughput measurements. Here we describe the sva package for identifying, estimating and removing unwanted sources of variation in high-throughput experiments. The sva package supports surrogate variable estimation with the sva function, direct adjustment for known batch effects with the ComBat function, and adjustment for batch and latent variables in prediction problems with the fsva function.
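As a minimal illustration of what batch adjustment does, the sketch below removes per-batch mean shifts for each gene. This is only the location step: the sva/ComBat implementations in R additionally estimate surrogate variables and shrink location/scale estimates with empirical Bayes, so this is a conceptual sketch rather than the package's algorithm.

```python
def center_batches(matrix, batches):
    """Simplified batch-effect adjustment: remove per-batch mean shifts.

    matrix: list of rows (one per gene); columns are samples.
    batches: batch label for each sample (column).
    Each batch's column mean is moved back to the gene's grand mean.
    """
    n = len(matrix[0])
    adjusted = [row[:] for row in matrix]
    for row in adjusted:
        grand = sum(row) / n
        for b in set(batches):
            idx = [i for i, lab in enumerate(batches) if lab == b]
            shift = sum(row[i] for i in idx) / len(idx) - grand
            for i in idx:
                row[i] -= shift
    return adjusted

# Two batches with a constant offset collapse onto the grand mean:
print(center_batches([[1, 1, 3, 3]], ["a", "a", "b", "b"]))  # [[2.0, 2.0, 2.0, 2.0]]
```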
Gao, Qiang; Yue, Guidong; Li, Wenqi; Wang, Junyi; Xu, Jiaohui; Yin, Ye
High-throughput sequencing is a revolutionary technological innovation in DNA sequencing. This technology has an ultra-low cost per base of sequencing and an overwhelmingly high data output. High-throughput sequencing has brought novel research methods and solutions to the research fields of genomics and post-genomics. Furthermore, this technology is leading to a new molecular breeding revolution that has landmark significance for scientific research and enables us to launch multi-level, multi-faceted, and multi-extent studies in the fields of crop genetics, genomics, and crop breeding. In this paper, we review progress in the application of high-throughput sequencing technologies to plant molecular breeding studies. © 2012 Institute of Botany, Chinese Academy of Sciences.
Yang, Wanneng; Duan, Lingfeng; Chen, Guoxing; Xiong, Lizhong; Liu, Qian
The functional analysis of the rice genome has entered into a high-throughput stage, and a project named RICE2020 has been proposed to determine the function of every gene in the rice genome by the year 2020. However, as compared with the robustness of genetic techniques, the evaluation of rice phenotypic traits is still performed manually, and the process is subjective, inefficient, destructive and error-prone. To overcome these limitations and help rice phenomics more closely parallel rice genomics, reliable, automatic, multifunctional, and high-throughput phenotyping platforms should be developed. In this article, we discuss the key plant phenotyping technologies, particularly photonics-based technologies, and then introduce their current applications in rice (wheat or barley) phenomics. We also note the major challenges in rice phenomics and are confident that these reliable high-throughput phenotyping tools will give plant scientists new perspectives on the information encoded in the rice genome. Copyright © 2013 Elsevier Ltd. All rights reserved.
Liu, Wenming; Wang, Jinyi
Three-dimensional (3D) tumor culture miniaturized platforms are important for biomimetic model construction and pathophysiological studies. Controllable, high-throughput production of 3D tumors is desirable to make cell-based manipulation dynamic and efficient at the microscale. Moreover, a reusable 3D culture platform is convenient for researchers. In this chapter, we describe a dynamically controlled 3D tumor manipulation and culture method using pneumatic microstructure-based microfluidics, which has potential high-throughput applications in tissue engineering, tumor biology, and clinical medicine.
Berger, Bettina; de Regt, Bas; Tester, Mark
Remote sensing and spectral reflectance measurements of plants have long been used to assess the growth and nutrient status of plants in a noninvasive manner. With improved imaging and computer technologies, these approaches can now be used at high throughput for more extensive physiological and genetic studies. Here, we present an example of how high-throughput imaging can be used to study the growth of plants exposed to different nutrient levels. In addition, leaf color can be used to estimate the chlorophyll and nitrogen status of the plant.
Hook, K Delaney; Chambers, John T; Hili, Ryan
We have developed a novel high-throughput screening platform for the discovery of small-molecule catalysts for bond-forming reactions. The method employs an in vitro selection for bond formation using amphiphilic DNA-encoded small molecules charged with the reaction substrate, which enables selections to be conducted in a variety of organic or aqueous solvents. Using the amine-catalysed aldol reaction as a catalytic model and high-throughput DNA sequencing as a selection read-out, we demonstrate the 1200-fold enrichment of a known aldol catalyst from a library of 16.7 million uncompetitive library members.
Jason R. Hattrick-Simpers
With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. One unresolved issue is the transformation of structural characterization data into phase mappings. This difficulty is related to the complex information present in diffraction and spectroscopic data and its variation with composition and processing. We review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have on the generation of phase diagrams and beyond.
Totir, Monica; Echols, Nathaniel; Nanao, Max; Gee, Christine L; Moskaleva, Alisa; Gradia, Scott; Iavarone, Anthony T; Berger, James M; May, Andrew P; Zubieta, Chloe; Alber, Tom
Structural biology and structural genomics projects routinely rely on recombinantly expressed proteins, but many proteins and complexes are difficult to obtain by this approach. We investigated native source proteins for high-throughput protein crystallography applications. The Escherichia coli proteome was fractionated, purified, crystallized, and structurally characterized. Macro-scale fermentation and fractionation were used to subdivide the soluble proteome into 408 unique fractions of which 295 fractions yielded crystals in microfluidic crystallization chips. Of the 295 crystals, 152 were selected for optimization, diffraction screening, and data collection. Twenty-three structures were determined, four of which were novel. This study demonstrates the utility of native source proteins for high-throughput crystallography.
Schlegel, Daniel R; Crowner, Chris; Lehoullier, Frank; Elkin, Peter L
Secondary use of clinical data for research requires a method to process the data quickly so that researchers can extract cohorts efficiently. We present two advances in the High Throughput Phenotyping NLP system that support the aim of truly high-throughput processing of clinical data, inspired by a characterization of the linguistic properties of such data. We discuss semantic indexing to store and generalize partially processed results and the use of compositional expressions for ungrammatical text, along with a set of initial timing results for the system.
Csiszar, Susan A.; Ernstoff, Alexi; Fantke, Peter
We demonstrate the application of a high-throughput modeling framework to estimate exposure to chemicals used in personal care products (PCPs). As a basis for estimating exposure, we use the product intake fraction (PiF), defined as the mass of chemical taken in by an individual or population per mass...... intakes were associated with body lotion. Bioactive doses derived from high-throughput in vitro toxicity data were combined with the estimated PiFs to demonstrate an approach to estimate bioactive equivalent chemical content and to screen chemicals for risk....
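Given a PiF, the per-chemical intake estimate reduces to a simple product. The sketch below uses hypothetical argument names and values, not data or formulas from the study; it only illustrates how a PiF converts product use into chemical intake.

```python
def daily_intake(product_use_g_per_day, chemical_mass_fraction, pif):
    """Estimated daily chemical intake via a personal care product.

    intake = product use rate x chemical mass fraction x PiF,
    where PiF is the mass of chemical taken in per mass of chemical
    used in the product. All argument names are illustrative.
    """
    chemical_used = product_use_g_per_day * chemical_mass_fraction
    return chemical_used * pif

# 10 g/day of lotion containing 1% of a chemical with PiF = 0.3:
print(daily_intake(10.0, 0.01, 0.3))  # 0.03 g/day taken in
```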
Mateusz G Adamski
Over the past decade rapid advances have occurred in the understanding of RNA expression and its regulation. Quantitative polymerase chain reaction (qPCR) has become the gold standard for quantifying gene expression. Microfluidic next-generation, high-throughput qPCR now permits the detection of transcript copy number in thousands of reactions simultaneously, dramatically increasing the sensitivity over standard qPCR. Here we present a gene expression analysis method applicable to both standard qPCR and high-throughput qPCR. This technique is adjusted to the input sample quantity (e.g., the number of cells) and is independent of control gene expression. It is efficiency-corrected and, with the use of a universal reference sample of commercial complementary DNA (cDNA), permits the normalization of results between different batches and between different instruments, regardless of potential differences in transcript amplification efficiency. Modifications of the input quantity method include (1) the achievement of absolute quantification and (2) a non-efficiency-corrected analysis. When compared to other commonly used algorithms, the input quantity method proved to be valid. This method is of particular value for clinical studies of whole blood and circulating leukocytes, where cell counts are readily available.
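The core of an efficiency-corrected, input-normalized analysis can be sketched in a few lines. This is an illustrative reading of the approach, not the authors' published formula: the function name, argument names, and exact form are assumptions.

```python
def input_normalized_expression(ct, efficiency, ct_ref, n_cells):
    """Efficiency-corrected expression, normalized to input quantity.

    Relative quantity versus a universal reference sample is
    (1 + E) ** (Ct_ref - Ct), where E is the amplification efficiency
    (E = 1 means perfect doubling per cycle). Dividing by the number
    of input cells makes results comparable across batches and
    instruments without relying on a control gene.
    """
    relative_quantity = (1.0 + efficiency) ** (ct_ref - ct)
    return relative_quantity / n_cells

# One cycle earlier than the reference at perfect efficiency = 2x quantity:
print(input_normalized_expression(19.0, 1.0, 20.0, 1000))  # 0.002
```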
Clarke, Paul; Cuív, Páraic O; O'Connell, Michael
Since its initial description, the yeast two-hybrid (Y2H) system has been widely used for the detection and analysis of protein-protein interactions. Mating-based strategies have been developed permitting its application to automated proteomic interaction mapping projects using both exhaustive and high-throughput strategies. More recently, a number of prokaryotic two-hybrid (P2H) systems have been developed but, despite the many advantages such Escherichia coli-based systems have over the Y2H system, they have not yet been widely implemented for proteomic interaction mapping. This may be largely because high-throughput strategies employing bacterial transformation are not as amenable to automation as Y2H mating-based strategies. Here, we describe the construction of novel conjugative P2H system vectors. These vectors carry a mobilization element of the IncPα-group plasmid RP4 and can therefore be mobilized with high efficiency from an E. coli donor strain encoding all of the required transport functions in trans. We demonstrate how these vectors permit the exploitation of bacterial conjugation for technically simplified and automated proteomic interaction mapping strategies in E. coli, analogous to the mating-based strategies developed for the Y2H system.
Here we aimed to develop a capillary electrophoresis-based high-throughput multiplex polymerase chain reaction (PCR) system for the simultaneous detection of nine pathogens in swine. Nine pairs of specific primers and a set of universal primers were designed, and the multiplex PCR was established. The specificity and cross-reactivity of the assay were examined, and the detection limit was determined using serial 10-fold dilutions of plasmids containing the target sequences. The assay was further tested using 144 clinical samples. The nine specific amplification peaks were observed, and the assay had a high degree of specificity, without nonspecific amplification. The simultaneous detection limit for the nine viruses reached 10,000 copies μL−1 when all of the premixed viral targets were present. Seventy-seven of the clinical samples tested positive for at least one of the viruses; the principal viral infections in the clinical samples were porcine circovirus type 2 and porcine reproductive and respiratory syndrome virus. This approach has much potential for the further development of high-throughput detection tools for the diagnosis of diseases in animals.
Arredondo-Alonso, Sergio; Willems, Rob J; van Schaik, Willem; Schürch, Anita C
To benchmark algorithms for automated plasmid sequence reconstruction from short-read sequencing data, we selected 42 publicly available complete bacterial genome sequences spanning 12 genera, containing 148 plasmids. We predicted plasmids from short-read data with four programs (PlasmidSPAdes, Recycler, cBar and PlasmidFinder) and compared the outcome to the reference sequences. PlasmidSPAdes reconstructs plasmids based on coverage differences in the assembly graph. It reconstructed most of the reference plasmids (recall=0.82), but approximately a quarter of the predicted plasmid contigs were false positives (precision=0.75). PlasmidSPAdes merged 84% of the predictions from genomes with multiple plasmids into a single bin. Recycler searches the assembly graph for sub-graphs corresponding to circular sequences and correctly predicted small plasmids, but failed with long plasmids (recall=0.12, precision=0.30). cBar, which applies pentamer frequency analysis to detect plasmid-derived contigs, showed a recall and precision of 0.76 and 0.62, respectively. However, cBar only categorizes contigs as plasmid-derived and does not bin the different plasmids. PlasmidFinder, which searches for replicons, had the highest precision (1.0), but was restricted by the contents of its database and the contig length obtained from de novo assembly (recall=0.36). PlasmidSPAdes and Recycler detected putative small plasmids, but the reconstruction of large plasmids (>50 kbp) containing repeated sequences remains challenging and limits the high-throughput analysis of plasmids from short-read whole-genome sequencing data.
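The recall and precision figures quoted in this benchmark follow the standard definitions; for plasmid prediction they can be computed over predicted versus reference contig sets. The sketch below assumes contigs are identified by name, which is a simplification of the sequence-level comparison the study performs.

```python
def recall_precision(predicted, reference):
    """Recall and precision of a set of predicted plasmid contigs.

    recall    = true positives / reference set size
    precision = true positives / predicted set size
    """
    predicted, reference = set(predicted), set(reference)
    tp = len(predicted & reference)
    recall = tp / len(reference) if reference else 0.0
    precision = tp / len(predicted) if predicted else 0.0
    return recall, precision

r, p = recall_precision({"p1", "p2", "chr_frag"}, {"p1", "p2", "p3", "p4"})
print(r, p)  # 0.5 and 2/3: two of four reference plasmids found, one false positive
```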
Smart, Mariette; Huddy, Robert J; Cowan, Don A; Trindade, Marla
To access the genetic potential contained in large metagenomic libraries, suitable high-throughput functional screening methods are required. Here we describe a high-throughput screening approach which enables the rapid identification of metagenomic library clones expressing functional accessory lignocellulosic enzymes. The high-throughput nature of this method hinges on the multiplexing of both the E. coli metagenomic library clones and the colorimetric p-nitrophenyl linked substrates which allows for the simultaneous screening for β-glucosidases, β-xylosidases, and α-L-arabinofuranosidases. This method is readily automated and compatible with high-throughput robotic screening systems.
Ladirat, S.E.; Schols, H.A.; Nauta, A.; Schoterman, M.H.C.; Keijser, B.J.F.; Montijn, R.C.; Gruppen, H.; Schuren, F.H.J.
Antibiotic treatments can lead to a disruption of the human microbiota. In this in vitro study, the impact of antibiotics on adult intestinal microbiota was monitored in a new high-throughput approach: a fermentation screening platform was coupled with a phylogenetic microarray analysis.
Naelapaa, Kaisa; van de Streek, Jacco; Rantanen, Jukka
High-throughput crystallisation and characterisation platforms provide an efficient means to carry out solid-form screening during the pre-formulation phase. To determine the crystal structures of identified new solid phases, however, usually requires independent crystallisation trials to produce...... obtained only during high-energy processing such as spray drying or milling....
Pedersen, Marlene Lemvig; Block, Ines; List, Markus
Reverse Phase Protein Array (RPPA)-based readout format integrated into robotic siRNA screening. This technique would allow post-screening high-throughput quantification of protein changes. Recently, breast cancer stem cells (BCSCs) have attracted much attention, as a tumor- and metastasis-driving subpopulation...
Biology is increasingly data driven by virtue of the development of high-throughput technologies, such as DNA and RNA sequencing. Computational biology and bioinformatics are scientific disciplines that cross-over between the disciplines of biology, informatics and statistics; which is clearly
US EPA’s ToxCast program is generating data in high-throughput screening (HTS) and high-content screening (HCS) assays for thousands of environmental chemicals, for use in developing predictive toxicity models. Currently the ToxCast screening program includes over 1800 unique c...
Jian Chen; Chengcheng Xue; Yang Zhao; Deyong Chen; Min-Hsien Wu; Junbo Wang
This article reviews recent developments in microfluidic impedance flow cytometry for high-throughput electrical property characterization of single cells. Four major perspectives of microfluidic impedance flow cytometry for single-cell characterization are included in this review: (1) early developments of microfluidic impedance flow cytometry for single-cell electrical property characterization; (2) microfluidic impedance flow cytometry with enhanced sensitivity; (3) microfluidic impedance ...
Vries, Johannes G. de; Vries, André H.M. de
The use of high-throughput experimentation (HTE) in homogeneous catalysis research for the production of fine chemicals is an important breakthrough. Whereas in the past stoichiometric chemistry was often preferred because of time-to-market constraints, HTE allows catalytic solutions to be found
Shakoor, Nadia; Lee, Scott; Mockler, Todd C
Effective implementation of technology that facilitates accurate and high-throughput screening of thousands of field-grown lines is critical for accelerating crop improvement and breeding strategies for higher yield and disease tolerance. Progress in the development of field-based high throughput phenotyping methods has advanced considerably in the last 10 years through technological progress in sensor development and high-performance computing. Here, we review recent advances in high throughput field phenotyping technologies designed to inform the genetics of quantitative traits, including crop yield and disease tolerance. Successful application of phenotyping platforms to advance crop breeding and identify and monitor disease requires: (1) high resolution of imaging and environmental sensors; (2) quality data products that facilitate computer vision, machine learning and GIS; (3) capacity infrastructure for data management and analysis; and (4) automated environmental data collection. Accelerated breeding for agriculturally relevant crop traits is key to the development of improved varieties and is critically dependent on high-resolution, high-throughput field-scale phenotyping technologies that can efficiently discriminate better performing lines within a larger population and across multiple environments. Copyright © 2017. Published by Elsevier Ltd.
Jensen, Anders A.
receptors in this assay were found to be in good agreement with those from electrophysiology studies of the receptors expressed in Xenopus oocytes or mammalian cell lines. Hence, this high throughput screening assay will be of great use in future pharmacological studies of glycine receptors, particular...
Background: Preparation of large quantities of high-quality genomic DNA from a large number of plant samples is a major bottleneck for most genetic and genomic analyses, such as genetic mapping, TILLING (Targeting Induced Local Lesions IN Genomes), and next-generation sequencing directly from sheared genomic DNA. A variety of DNA preparation methods and commercial kits are available. However, they are either low-throughput, low-yield, or costly. Here, we describe a method for high-throughput genomic DNA isolation from sorghum [Sorghum bicolor (L.) Moench] leaves and dry seeds with high yield, high quality, and affordable cost. Results: We developed a high-throughput DNA isolation method by combining a high-yield CTAB extraction method with an improved cleanup procedure based on the MagAttract kit. The method yielded large quantities of high-quality DNA from both lyophilized sorghum leaves and dry seeds. The DNA yield was improved by nearly 30-fold with 4 times less consumption of MagAttract beads. The method can also be used in other plant species, including cotton leaves and pine needles. Conclusion: A high-throughput system for DNA extraction from sorghum leaves and seeds was developed and validated. The main advantages of the method are low cost, high yield, high quality, and high throughput. One person can process two 96-well plates in a working day at a cost of $0.10 per sample for magnetic beads plus other consumables that other methods also require.
Kemmeren, Patrick; Kockelkorn, Thessa T J P; Bijma, Theo; Donders, Rogier; Holstege, Frank C P
Determining gene function is an important challenge arising from the availability of whole genome sequences. Until recently, approaches based on sequence homology were the only high-throughput method for predicting gene function. Use of high-throughput generated experimental data sets for determining gene function has been limited for several reasons. Here a new approach is presented for integration of high-throughput data sets, leading to prediction of function based on relationships supported by multiple types and sources of data. This is achieved with a database containing 125 different high-throughput data sets describing phenotypes, cellular localizations, protein interactions and mRNA expression levels from Saccharomyces cerevisiae, using a bit-vector representation and information content-based ranking. The approach takes characteristic and qualitative differences between the data sets into account, is highly flexible, efficient and scalable. Database queries result in predictions for 543 uncharacterized genes, based on multiple functional relationships each supported by at least three types of experimental data. Some of these are experimentally verified, further demonstrating their reliability. The results also generate insights into the relative merits of different data types and provide a coherent framework for functional genomic datamining. Free availability over the Internet. firstname.lastname@example.org http://www.genomics.med.uu.nl/pub/pk/comb_gen_network.
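The bit-vector, information-content-based ranking described above can be illustrated with a small sketch: genes are profiles of features drawn from many data sets, and shared rare features contribute more evidence of a functional relationship than shared common ones. The function below is an illustrative rendering of that idea, not the database's actual scoring scheme.

```python
import math
from collections import Counter

def information_content_scores(profiles):
    """Score gene pairs by shared high-throughput features.

    profiles: gene -> set of features (phenotypes, localizations,
    interaction partners, expression clusters, ...). Each shared
    feature is weighted by -log2(frequency), so rare shared features
    contribute more to the pair's score.
    """
    genes = list(profiles)
    n = len(genes)
    freq = Counter(f for s in profiles.values() for f in s)
    scores = {}
    for i, a in enumerate(genes):
        for b in genes[i + 1:]:
            shared = profiles[a] & profiles[b]
            scores[(a, b)] = sum(-math.log2(freq[f] / n) for f in shared)
    return scores

profiles = {"g1": {"common", "rare"}, "g2": {"common", "rare"},
            "g3": {"common"}, "g4": {"common"}}
s = information_content_scores(profiles)
print(s[("g1", "g2")] > s[("g1", "g3")])  # True: the shared rare feature counts more
```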
Motivation: The large and diverse high-throughput chemical screening efforts carried out by the US EPA ToxCast program require an efficient, transparent, and reproducible data pipeline. Summary: The tcpl R package and its associated MySQL database provide a generalized platform fo...
Au, K.; Folkers, G.E.; Kaptein, R.
A collaborative project between two Structural Proteomics In Europe (SPINE) partner laboratories, York and Oxford, aimed at high-throughput (HTP) structure determination of proteins from Bacillus anthracis, the aetiological agent of anthrax and a biomedically important target, is described. Based
Alquezar-Planas, David E; Fordyce, Sarah Louise
Since the development of so-called "next generation" high-throughput sequencing in 2005, this technology has been applied to a variety of fields. Such applications include disease studies, evolutionary investigations, and ancient DNA. Each application requires a specialized protocol to ensure tha...
The US EPA’s ToxCast program is designed to assess chemical perturbations of molecular and cellular endpoints using a variety of high-throughput screening (HTS) assays. However, existing HTS assays have limited or no xenobiotic metabolism which could lead to a mischaracteri...
Hudson, R. Tod, Michael J. Hemmer, Kimberly A. Salinas, Sherry S. Wilkinson, James Watts, James T. Winstead, Peggy S. Harris, Amy Kirkpatrick and Calvin C. Walker. In press. Plasma Protein Profiling as a High Throughput Tool for Chemical Screening Using a Small Fish Model (Abstra...
Hartmann, Anja; Czauderna, Tobias; Hoffmann, Roberto; Stein, Nils; Schreiber, Falk
In the last few years high-throughput analysis methods have become state-of-the-art in the life sciences. One of the latest developments is automated greenhouse systems for high-throughput plant phenotyping. Such systems allow the non-destructive screening of plants over a period of time by means of image acquisition techniques. During such screening different images of each plant are recorded and must be analysed by applying sophisticated image analysis algorithms. This paper presents an image analysis pipeline (HTPheno) for high-throughput plant phenotyping. HTPheno is implemented as a plugin for ImageJ, an open source image processing software. It provides the possibility to analyse colour images of plants which are taken in two different views (top view and side view) during a screening. Within the analysis different phenotypical parameters for each plant such as height, width and projected shoot area of the plants are calculated for the duration of the screening. HTPheno is applied to analyse two barley cultivars. HTPheno, an open source image analysis pipeline, supplies a flexible and adaptable ImageJ plugin which can be used for automated image analysis in high-throughput plant phenotyping and therefore to derive new biological insights, such as determination of fitness.
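The phenotypical parameters HTPheno reports (height, width, projected shoot area) are straightforward to derive once a plant image has been segmented into a binary mask. The sketch below shows only that parameter-extraction step in pixel units, on a plain nested-list mask; it is an illustration of the idea, not HTPheno's ImageJ implementation.

```python
def shoot_parameters(mask):
    """Height, width and projected shoot area from a binary plant mask.

    mask: 2D list of 0/1 pixels, 1 = plant. Height and width are the
    extents of the bounding box; area is the count of plant pixels.
    """
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    area = sum(sum(row) for row in mask)
    height = rows[-1] - rows[0] + 1 if rows else 0
    width = max(cols) - min(cols) + 1 if cols else 0
    return height, width, area

mask = [[0, 0, 0],
        [0, 1, 1],
        [0, 1, 0]]
print(shoot_parameters(mask))  # (2, 2, 3)
```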
Langerak, A.W.; Bruggemann, M.; Davi, F.; Darzentas, N.; Dongen, J.J. van; Gonzalez, D.; Cazzaniga, G.; Giudicelli, V.; Lefranc, M.P.; Giraud, M.; Macintyre, E.A.; Hummel, M.; Pott, C.; Groenen, P.J.T.A.; Stamatopoulos, K.
Analysis and interpretation of Ig and TCR gene rearrangements in the conventional, low-throughput way have their limitations in terms of resolution, coverage, and biases. With the advent of high-throughput, next-generation sequencing (NGS) technologies, a deeper analysis of Ig and/or TCR (IG/TR)
Liu, Yun; Grossman, Jeffrey C
Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.
Schmidt-Hansberg, B.; Sanyal, M.; Grossiord, N.; Galagan, Y.O.; Baunach, M.; Klein, M.F.G.; Colsmann, A.; Scharfer, P.; Lemmer, U.; Dosch, H.; Michels, J.J; Barrena, E.; Schabel, W.
The rapidly increasing power conversion efficiencies of organic solar cells are an important prerequisite for low-cost photovoltaics fabricated at high throughput. In this work we suggest indane as a non-halogenated replacement for the commonly used halogenated solvent o-dichlorobenzene. Indane
Zomer, A.L.; Burghout, P.J.; Bootsma, H.J.; Hermans, P.W.M.; Hijum, S.A.F.T. van
High-throughput analysis of genome-wide random transposon mutant libraries is a powerful tool for (conditional) essential gene discovery. Recently, several next-generation sequencing approaches, e.g. Tn-seq/INseq, HITS and TraDIS, have been developed that accurately map the site of transposon
High-throughput screening (HTPS) assays to detect inhibitors of thyroperoxidase (TPO), the enzymatic catalyst for thyroid hormone (TH) synthesis, are not currently available. Herein we describe the development of a HTPS TPO inhibition assay. Rat thyroid microsomes and a fluores...
Raboy, Victor; Johnson, Amy; Bilyeu, Kristin
High-throughput/low-cost/low-tech methods for phytic acid determination that are sufficiently accurate and reproducible would be of value in plant genetics, crop breeding and in the food and feed industries. Variants of two candidate methods, those described by Vaintraub and Lapteva (Anal Biochem...
Human ATAD5 is an excellent biomarker for identifying genotoxic compounds because ATAD5 protein levels increase post-transcriptionally following exposure to a variety of DNA damaging agents. Here we report a novel quantitative high-throughput ATAD5-luciferase assay that can moni...
D. Lee Taylor; Michael G. Booth; Jack W. McFarland; Ian C. Herriott; Niall J. Lennon; Chad Nusbaum; Thomas G. Marr
High throughput sequencing methods are widely used in analyses of microbial diversity but are generally applied to small numbers of samples, which precludes characterization of patterns of microbial diversity across space and time. We have designed a primer-tagging approach that allows pooling and subsequent sorting of numerous samples, which is directed to...
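A primer-tagging scheme like the one described implies a demultiplexing step after sequencing, in which pooled reads are sorted back to their source samples by their tag. A minimal sketch, assuming fixed-length 5' tags and exact matching (both assumptions for illustration, not details taken from the abstract):

```python
from collections import defaultdict

def demultiplex(reads, tag_to_sample, tag_len=8):
    """Sort pooled reads to samples by a fixed-length 5' primer tag.

    Reads whose tag is not in the lookup table go to 'unassigned'.
    Exact matching only; real pipelines usually tolerate mismatches.
    """
    bins = defaultdict(list)
    for read in reads:
        tag, insert = read[:tag_len], read[tag_len:]
        bins[tag_to_sample.get(tag, "unassigned")].append(insert)
    return dict(bins)
```

In practice tag design must also guarantee a minimum edit distance between tags so that sequencing errors cannot convert one sample's tag into another's.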
Xin, Zhanguo; Chen, Junping
Preparation of large quantities of high-quality genomic DNA from many plant samples is a major bottleneck for most genetic and genomic analyses, such as genetic mapping, TILLING (Targeting Induced Local Lesions IN Genomes), and next-generation sequencing directly from sheared genomic DNA. A variety of DNA preparation methods and commercial kits are available; however, they are either low throughput, low yield, or costly. Here, we describe a method for high throughput genomic DNA isolation from sorghum [Sorghum bicolor (L.) Moench] leaves and dry seeds with high yield, high quality, and affordable cost. We developed a high throughput DNA isolation method by combining a high-yield CTAB extraction method with an improved cleanup procedure based on the MagAttract kit. The method yielded large quantities of high-quality DNA from both lyophilized sorghum leaves and dry seeds. The DNA yield was improved nearly 30-fold while consuming four times fewer MagAttract beads. The method can also be used in other plant species, including cotton leaves and pine needles. A high throughput system for DNA extraction from sorghum leaves and seeds was developed and validated. The main advantages of the method are low cost, high yield, high quality, and high throughput. One person can process two 96-well plates in a working day at a magnetic-bead cost of $0.10 per sample, plus other consumables that alternative methods also require.
Blume, H; Bourenkov, G P; Kosciesza, D; Bartunik, H D
The wiggler beamline BW6 at DORIS has been optimized for de-novo solution of protein structures on the basis of MAD phasing. Facilities for automatic data collection, rapid data transfer and storage, and online processing have been developed which provide adequate conditions for high-throughput applications, e.g., in structural genomics.
Jain, S.; Sondervan, D.; Rizzu, P.; Bochdanovits, Z.; Caminada, D.; Heutink, P.
Genomic approaches provide enormous amounts of raw data with regard to genetic variation, the diversity of RNA species, and the protein complement. High-throughput (HT) and high-content (HC) cellular screens are ideally suited to contextualize the information gathered from other "omic" approaches into
Raijada, Dhara; Cornett, Claus; Rantanen, Jukka
The present study puts forward a miniaturized high-throughput platform to understand influence of excipient selection and processing on the stability of a given drug compound. Four model drugs (sodium naproxen, theophylline, amlodipine besylate and nitrofurantoin) and ten different excipients were...
A high-throughput MALDI-TOF mass spectrometric assay is described for measuring chitinolytic enzyme activity. The assay uses unmodified chitin oligosaccharide substrates, and is readily achievable on a microliter scale (2 µL total volume, containing 2 µg of substrate and 1 ng of protein). The speed a...
Poulsen, Esben Guldahl; Nielsen, Sofie V.; Pietras, Elin J.
that are not genetically tractable as, for instance, a yeast model system. Here, we describe a method relying on high-throughput cellular imaging of cells transfected with a targeted siRNA library to screen for components involved in degradation of a protein of interest. This method is a rapid and cost-effective tool...
Ståhlman, Marcus; Ejsing, Christer S.; Tarasov, Kirill
we describe a novel high-throughput shotgun lipidomic platform based on 96-well robot-assisted lipid extraction, automated sample infusion by microfluidic-based nanoelectrospray ionization, and quantitative multiple precursor ion scanning analysis on a quadrupole time-of-flight mass spectrometer...
Moreira Teixeira, Liliana; Leijten, Jeroen Christianus Hermanus; Sobral, J.; Jin, R.; van Apeldoorn, Aart A.; Feijen, Jan; van Blitterswijk, Clemens; Dijkstra, Pieter J.; Karperien, Hermanus Bernardus Johannes
Cell-based cartilage repair strategies such as matrix-induced autologous chondrocyte implantation (MACI) could be improved by enhancing cell performance. We hypothesised that micro-aggregates of chondrocytes generated in high-throughput prior to implantation in a defect could stimulate cartilaginous
McMichael, Gai L; Gibson, Catherine S; O'Callaghan, Michael E; Goldwater, Paul N; Dekker, Gustaaf A; Haan, Eric A; MacLennan, Alastair H
We sought a convenient and reliable method for collection of genetic material that is inexpensive and noninvasive and suitable for self-collection and mailing and a compatible, commercial DNA extraction protocol to meet quantitative and qualitative requirements for high-throughput single nucleotide polymorphism (SNP) multiplex analysis on an automated platform. Buccal swabs were collected from 34 individuals as part of a pilot study to test commercially available buccal swabs and DNA extraction kits. DNA was quantified on a spectrofluorometer with PicoGreen dsDNA prior to testing the DNA integrity with predesigned SNP multiplex assays. Based on the pilot study results, the Catch-All swabs and Isohelix buccal DNA isolation kit were selected for our high-throughput application and extended to a further 1140 samples as part of a large cohort study. The average DNA yield in the pilot study (n=34) was 1.94 ± 0.54 µg with a 94% genotyping pass rate. For the high-throughput application (n=1140), the average DNA yield was 2.44 ± 1.74 µg with a ≥93% genotyping pass rate. The Catch-All buccal swabs are a convenient and cost-effective alternative to blood sampling. Combined with the Isohelix buccal DNA isolation kit, they provided DNA of sufficient quantity and quality for high-throughput SNP multiplex analysis.
United States Environmental Protection Agency researchers have developed a Stochastic Human Exposure and Dose Simulation High-Throughput (SHEDS-HT) model for use in prioritization of chemicals under the ExpoCast program. In this research, new methods were implemented in SHEDS-HT...
Gego, A.; Silvie, O.; Franetich, J.F.; Farhati, K.; Hannoun, L.; Luty, A.J.F.; Sauerwein, R.W.; Boucheix, C.; Rubinstein, E.; Mazier, D.
Plasmodium liver stages represent potential targets for antimalarial prophylactic drugs. Nevertheless, there is a lack of molecules active on these stages. We have now developed a new approach for the high-throughput screening of drug activity on Plasmodium liver stages in vitro, based on an
Tiendrebeogo, Regis W; Adu, Bright; Singh, Susheel K
distinction of early ring stages of Plasmodium falciparum from uninfected red blood cells (uRBC) remains a challenge. METHODS: Here, a high-throughput, three-parameter (tri-colour) flow cytometry technique based on mitotracker red dye, the nucleic acid dye coriphosphine O (CPO) and the leucocyte marker CD45...
Spero, Richard Chasen; Vicci, Leandra; Cribb, Jeremy; Bober, David; Swaminathan, Vinay; O’Brien, E. Timothy; Rogers, Stephen L.; Superfine, R.
In the past decade, high throughput screening (HTS) has changed the way biochemical assays are performed, but manipulation and mechanical measurement of micro- and nanoscale systems have not benefited from this trend. Techniques using microbeads (particles ∼0.1–10 μm) show promise for enabling high throughput mechanical measurements of microscopic systems. We demonstrate instrumentation to magnetically drive microbeads in a biocompatible, multiwell magnetic force system. It is based on commercial HTS standards and is scalable to 96 wells. Cells can be cultured in this magnetic high throughput system (MHTS). The MHTS can apply independently controlled forces to 16 specimen wells. Force calibrations demonstrate forces in excess of 1 nN, predicted force saturation as a function of pole material, and power-law dependence of F ∼ r^(−2.7±0.1). We employ this system to measure the stiffness of S2R+ Drosophila cells. MHTS technology is a key step toward a high throughput screening system for micro- and nanoscale biophysical experiments.
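A calibration exponent like the F ∼ r^(−2.7±0.1) reported above is the kind of quantity typically recovered by a linear least-squares fit in log-log space. A generic sketch of that step (illustrative, not the authors' calibration code):

```python
import numpy as np

def powerlaw_exponent(r, force):
    """Estimate b in force ~ A * r**b by linear least squares on
    log(force) vs log(r); the slope of the fitted line is b."""
    slope, _intercept = np.polyfit(np.log(r), np.log(force), 1)
    return slope
```

Fitting in log space weights relative rather than absolute errors, which is usually what is wanted when forces span orders of magnitude.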
Subject selection is essential and has become the rate-limiting step for harvesting knowledge to advance healthcare through clinical research. Present manual approaches inhibit researchers from conducting deep and broad studies and drawing confident conclusions. High-throughput clinical phenotyping (HTCP), a recently proposed approach, leverages…
Brod, F.C.A.; Dijk, van J.P.; Voorhuijzen, M.M.; Dinon, A.Z.; Guimarães, L.H.S.; Scholtens, I.M.J.; Arisi, A.C.M.; Kok, E.J.
The ever-increasing production of genetically modified crops generates a demand for high-throughput DNA-based methods for the enforcement of genetically modified organisms (GMO) labelling requirements. The application of standard real-time PCR will become increasingly costly with the growth of the
Brueckner, L. (Laura); Van Arensbergen, J. (Joris); Akhtar, W. (Waseem); L. Pagie (Ludo); B. van Steensel (Bas)
Background: Chromatin proteins control gene activity in a concerted manner. We developed a high-throughput assay to study the effects of the local chromatin environment on the regulatory activity of a protein of interest. The assay combines a previously reported multiplexing strategy
Tschiersch, Henning; Junker, Astrid; Meyer, Rhonda C; Altmann, Thomas
Automated plant phenotyping has been established as a powerful new tool in studying plant growth, development and response to various types of biotic or abiotic stressors. Respective facilities mainly apply non-invasive imaging based methods, which enable the continuous quantification of the dynamics of plant growth and physiology during developmental progression. However, especially for plants of larger size, integrative, automated and high throughput measurements of complex physiological parameters such as photosystem II efficiency determined through kinetic chlorophyll fluorescence analysis remain a challenge. We present the technical installations and the establishment of experimental procedures that allow the integrated high throughput imaging of all commonly determined PSII parameters for small and large plants using kinetic chlorophyll fluorescence imaging systems (FluorCam, PSI) integrated into automated phenotyping facilities (Scanalyzer, LemnaTec). Besides determination of the maximum PSII efficiency, we focused on implementation of high throughput amenable protocols recording PSII operating efficiency (ΦPSII). Using the presented setup, this parameter is shown to be reproducibly measured in differently sized plants despite the corresponding variation in distance between plants and light source that caused small differences in incident light intensity. Values of ΦPSII obtained with the automated chlorophyll fluorescence imaging setup correlated very well with conventionally determined data using a spot-measuring chlorophyll fluorometer. The established high throughput operating protocols enable the screening of up to 1080 small and 184 large plants per hour, respectively. The application of the implemented high throughput protocols is demonstrated in screening experiments performed with large Arabidopsis and maize populations assessing natural variation in PSII efficiency. The incorporation of imaging systems suitable for kinetic chlorophyll
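The PSII parameters discussed above have standard definitions in kinetic chlorophyll fluorescence analysis: the dark-adapted maximum efficiency Fv/Fm = (Fm − F0)/Fm and the operating efficiency in the light ΦPSII = (Fm′ − Fs)/Fm′. A minimal sketch of these calculations (the standard formulas, not the FluorCam or LemnaTec software):

```python
def fv_fm(f0: float, fm: float) -> float:
    """Maximum PSII efficiency from a dark-adapted measurement:
    Fv/Fm = (Fm - F0) / Fm."""
    return (fm - f0) / fm

def phi_psii(fs: float, fm_prime: float) -> float:
    """PSII operating efficiency under actinic light:
    PhiPSII = (Fm' - Fs) / Fm'."""
    return (fm_prime - fs) / fm_prime
```

In an imaging setup these formulas are evaluated per pixel over the plant mask and then averaged per plant, which is what makes the measurement robust to modest differences in incident light intensity.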
Rajora Om P
Background High throughput DNA isolation from plants is a major bottleneck for most studies requiring large sample sizes. A variety of protocols have been developed for DNA isolation from plants. However, many species, including conifers, have high contents of secondary metabolites that interfere with the extraction process or the subsequent analysis steps. Here, we describe a procedure for high-throughput DNA isolation from conifers. Results We have developed a high-throughput DNA extraction protocol for conifers using an automated liquid handler and modifying the Qiagen MagAttract Plant Kit protocol. The modifications involve a change to the buffer system and improving the protocol so that it almost doubles the number of samples processed per kit, which significantly reduces the overall costs. We describe two versions of the protocol: one for medium-throughput (MTP) and another for high-throughput (HTP) DNA isolation. The HTP version works from start to end in the industry-standard 96-well format, while the MTP version provides higher DNA yields per sample processed. We have successfully used the protocol for DNA extraction and genotyping of thousands of individuals of several spruce and a pine species. Conclusion A high-throughput system for DNA extraction from conifer needles and seeds has been developed and validated. The quality of the isolated DNA was comparable with that obtained from two commonly used methods: the silica-spin column and the classic CTAB protocol. Our protocol provides a fully automatable and cost-effective solution for processing large numbers of conifer samples.
Yoon, Ju-Yeon; Cho, In-Sook; Choi, Gug-Seoun; Choi, Seung-Kook
Chrysanthemum stunt viroid (CSVd), a noncoding infectious RNA molecule, causes serious economic losses in chrysanthemum for 3 or 4 years after its first infection. Monomeric cDNA clones of CSVd isolate SK1 (CSVd-SK1) were constructed in the plasmids pGEM-T Easy vector and pUC19 vector. Linear positive-sense transcripts synthesized in vitro from the full-length monomeric cDNA clones of CSVd-SK1 could systemically infect tomato seedlings and chrysanthemum plants, suggesting that the linear CSVd RNA transcribed from the cDNA clones could be replicated as efficiently as circular CSVd in host species. However, direct inoculation of plasmid cDNA clones containing full-length monomeric cDNA of CSVd-SK1 failed to infect tomato and chrysanthemum, and linear negative-sense transcripts from the plasmid DNAs were not infectious in the two plant species. The cDNA sequences of the progeny viroid in systemically infected tomato and chrysanthemum showed a few substitutions at specific nucleotide positions, but there were no deletions or insertions in the sequences of the CSVd progeny from tomato and chrysanthemum plants.
Redman Julia C
Background Medicago truncatula is a model legume species that is currently the focus of an international genome sequencing effort. Although several different oligonucleotide and cDNA arrays have been produced for genome-wide transcript analysis of this species, intrinsic limitations in the sensitivity of hybridization-based technologies mean that transcripts of genes expressed at low levels cannot be measured accurately with these tools. Amongst such genes are many encoding transcription factors (TFs), which are arguably the most important class of regulatory proteins. Quantitative reverse transcription-polymerase chain reaction (qRT-PCR) is the most sensitive method currently available for transcript quantification, and one that can be scaled up to analyze transcripts of thousands of genes in parallel. Thus, qRT-PCR is an ideal method to tackle the problem of TF transcript quantification in Medicago and other plants. Results We established a bioinformatics pipeline to identify putative TF genes in Medicago truncatula and to design gene-specific oligonucleotide primers for qRT-PCR analysis of TF transcripts. We validated the efficacy and gene-specificity of over 1000 TF primer pairs and utilized these to identify sets of organ-enhanced TF genes that may play important roles in organ development or differentiation in this species. This community resource will be developed further as more genome sequence becomes available, with the ultimate goal of producing validated, gene-specific primers for all Medicago TF genes. Conclusion High-throughput qRT-PCR using a 384-well plate format enables rapid, flexible, and sensitive quantification of all predicted Medicago transcription factor mRNAs. This resource has been utilized recently by several groups in Europe, Australia, and the USA, and we expect that it will become the 'gold standard' for TF transcript profiling in Medicago truncatula.
Alabi, Olufemi J; Zheng, Yun; Jagadeeswaran, Guru; Sunkar, Ramanjulu; Naidu, Rayapati A
Grapevine leafroll disease (GLRD) is one of the most economically important virus diseases of grapevine (Vitis spp.) worldwide. In this study, we used high-throughput sequencing of cDNA libraries made from small RNAs (sRNAs) to compare profiles of sRNA populations recovered from own-rooted Merlot grapevines with and without GLRD symptoms. The data revealed the presence of sRNAs specific to Grapevine leafroll-associated virus 3, Hop stunt viroid (HpSVd), Grapevine yellow speckle viroid 1 (GYSVd-1) and Grapevine yellow speckle viroid 2 (GYSVd-2) in symptomatic grapevines and sRNAs specific only to HpSVd, GYSVd-1 and GYSVd-2 in nonsymptomatic grapevines. In addition to 135 previously identified conserved microRNAs in grapevine (Vvi-miRs), we identified 10 novel and several candidate Vvi-miRs in both symptomatic and nonsymptomatic grapevine leaves based on the cloning of miRNA star sequences. Quantitative real-time reverse transcriptase-polymerase chain reaction (RT-PCR) of selected conserved Vvi-miRs indicated that individual members of an miRNA family are differentially expressed in symptomatic and nonsymptomatic leaves. The high-resolution mapping of sRNAs specific to an ampelovirus and three viroids in mixed infections, the identification of novel Vvi-miRs and the modulation of certain conserved Vvi-miRs offers resources for the further elucidation of compatible host-pathogen interactions and for the provision of ecologically relevant information to better understand host-pathogen-environment interactions in a perennial fruit crop. © 2012 THE AUTHORS. MOLECULAR PLANT PATHOLOGY © 2012 BSPP AND BLACKWELL PUBLISHING LTD.
Li, Fenglei [Iowa State Univ., Ames, IA (United States)
The purposes of our research were: (1) to develop an economical, easy-to-use, automated, high-throughput system for large-scale protein crystallization screening; (2) to develop a new protein crystallization method with high screening efficiency, low protein consumption and complete compatibility with the high-throughput screening system; and (3) to determine the structure of lactate dehydrogenase complexed with NADH by X-ray protein crystallography to study its inherent structural properties. Firstly, we demonstrated that large-scale protein crystallization screening can be performed in a high-throughput manner at low cost and with easy operation. The overall system integrates liquid dispensing, crystallization and detection, and serves as a whole solution to protein crystallization screening. The system can dispense protein and multiple different precipitants at nanoliter scale and in parallel. A new detection scheme, native fluorescence, has been developed in this system to form a two-detector system with a visible-light detector for detecting protein crystallization screening results. This detection scheme is capable of eliminating common false positives by distinguishing protein crystals from inorganic crystals in a high-throughput and non-destructive manner. The entire system, from liquid dispensing and crystallization to crystal detection, is essentially parallel, high throughput and compatible with automation. The system was successfully demonstrated by lysozyme crystallization screening. Secondly, we developed a new crystallization method with high screening efficiency, low protein consumption and compatibility with automation and high throughput. In this crystallization method, a gas-permeable membrane is employed to achieve the gentle evaporation required by protein crystallization. Protein consumption is significantly reduced to nanoliter scale for each condition and thus permits exploring more conditions in a phase diagram for a given amount of protein. In addition
The selectivity of the adaptive immune response is based on the enormous diversity of T and B cell antigen-specific receptors. The immune repertoire, the collection of T and B cells with functional diversity in the circulatory system at any given time, is dynamic and reflects the essence of immune selectivity. In this article, we review recent advances in immune repertoire studies of infectious diseases achieved by traditional techniques and high-throughput sequencing techniques. High-throughput sequencing techniques enable the determination of the complementarity-determining regions of lymphocyte receptors with unprecedented efficiency and scale. This progress in methodology enhances the understanding of immunologic changes during pathogen challenge, and also provides a basis for further development of novel diagnostic markers, immunotherapies and vaccines.
High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy.
Bai, John G; Yeo, Woon-Hong; Chung, Jae-Hyun
One of the critical challenges in nanostructured biosensors is to manufacture an addressable array of nanopatterns at low cost. The addressable array (1) provides multiplexing for biomolecule detection and (2) enables direct detection of biomolecules without labeling and amplification. To fabricate such an array of nanostructures, current nanolithography methods are limited by the lack of either high throughput or high resolution. This paper presents a high-resolution and high-throughput nanolithography method using the compensated shadow effect in high-vacuum evaporation. The approach enables the fabrication of uniform nanogaps down to 20 nm in width across a 100 mm silicon wafer. The nanogap pattern is used as a template for the routine fabrication of zero-, one-, and two-dimensional nanostructures with a high yield. The method can facilitate the fabrication of nanostructured biosensors on a wafer scale at a low manufacturing cost.
Yoshimoto, Nobuo; Kida, Akiko; Jie, Xu; Kurokawa, Masaya; Iijima, Masumi; Niimi, Tomoaki; Maturana, Andrés D.; Nikaido, Itoshi; Ueda, Hiroki R.; Tatematsu, Kenji; Tanizawa, Katsuyuki; Kondo, Akihiko; Fujii, Ikuo; Kuroda, Shun'ichi
When establishing the most appropriate cells from the huge numbers of a cell library for practical use of cells in regenerative medicine and production of various biopharmaceuticals, cell heterogeneity often found in an isogenic cell population limits the refinement of clonal cell culture. Here, we demonstrated high-throughput screening of the most suitable cells in a cell library by an automated undisruptive single-cell analysis and isolation system, followed by expansion of isolated single cells. This system enabled establishment of the most suitable cells, such as embryonic stem cells with the highest expression of the pluripotency marker Rex1 and hybridomas with the highest antibody secretion, which could not be achieved by conventional high-throughput cell screening systems (e.g., a fluorescence-activated cell sorter). This single cell-based breeding system may be a powerful tool to analyze stochastic fluctuations and delineate their molecular mechanisms.
Lapington, J S; Miller, G M; Ashton, T J R; Jarron, P; Despeisse, M; Powolny, F; Howorth, J; Milnes, J
High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies; small pore microchanne...
Hura, Greg L.; Menon, Angeli L.; Hammel, Michal; Rambo, Robert P.; Poole II, Farris L.; Tsutakawa, Susan E.; Jenney Jr, Francis E.; Classen, Scott; Frankel, Kenneth A.; Hopkins, Robert C.; Yang, Sungjae; Scott, Joseph W.; Dillard, Bret D.; Adams, Michael W. W.; Tainer, John A.
We present an efficient pipeline enabling high-throughput analysis of protein structure in solution with small angle X-ray scattering (SAXS). Our SAXS pipeline combines automated sample handling of microliter volumes, temperature and anaerobic control, rapid data collection and data analysis, and couples structural analysis with automated archiving. We subjected 50 representative proteins, mostly from Pyrococcus furiosus, to this pipeline and found that 30 were multimeric structures in solution. SAXS analysis allowed us to distinguish aggregated and unfolded proteins, define global structural parameters and oligomeric states for most samples, identify shapes and similar structures for 25 unknown structures, and determine envelopes for 41 proteins. We believe that high-throughput SAXS is an enabling technology that may change the way that structural genomics research is done.
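One routine step in automated SAXS analysis of the kind described is estimating a global structural parameter such as the radius of gyration from the low-angle data via a Guinier fit. A minimal sketch using the standard Guinier approximation (illustrative only, not the pipeline's actual analysis code):

```python
import numpy as np

def guinier_rg(q, intensity):
    """Radius of gyration from the Guinier approximation
    ln I(q) = ln I(0) - (Rg**2 / 3) * q**2, fit by linear least
    squares. Assumes q is already restricted to the Guinier region
    (roughly q * Rg < 1.3); aggregation shows up as upturns here."""
    slope, _ = np.polyfit(np.asarray(q) ** 2, np.log(intensity), 1)
    return float(np.sqrt(-3.0 * slope))
```

Checking linearity of ln I(q) vs q² in this region is also a quick way to flag the aggregated or unfolded samples mentioned above.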
Georgiou, George; Ippolito, Gregory C; Beausang, John; Busse, Christian E; Wardemann, Hedda; Quake, Stephen R
Efforts to determine the antibody repertoire encoded by B cells in the blood or lymphoid organs using high-throughput DNA sequencing technologies have been advancing at an extremely rapid pace and are transforming our understanding of humoral immune responses. Information gained from high-throughput DNA sequencing of immunoglobulin genes (Ig-seq) can be applied to detect B-cell malignancies with high sensitivity, to discover antibodies specific for antigens of interest, to guide vaccine development and to understand autoimmunity. Rapid progress in the development of experimental protocols and informatics analysis tools is helping to reduce sequencing artifacts, to achieve more precise quantification of clonal diversity and to extract the most pertinent biological information. That said, broader application of Ig-seq, especially in clinical settings, will require the development of a standardized experimental design framework that will enable the sharing and meta-analysis of sequencing data generated by different laboratories.
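The "quantification of clonal diversity" mentioned above is often summarized with entropy-based indices over clonotype abundances. A minimal sketch of the plug-in Shannon estimate, one common choice among several (the abstract does not prescribe a particular estimator, and real studies must also correct for sampling depth):

```python
import math

def shannon_diversity(clonotype_counts):
    """Plug-in Shannon entropy of a clonotype abundance table
    (clonotype -> read or cell count). Higher values indicate a
    more even, diverse repertoire; a monoclonal expansion gives 0."""
    total = sum(clonotype_counts.values())
    return -sum((n / total) * math.log(n / total)
                for n in clonotype_counts.values() if n > 0)
```

For k equally abundant clonotypes this returns log(k), its maximum, which is why normalized entropy (clonality = 1 − H/log k) is also widely reported.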
Filer, Dayne L; Kothiya, Parth; Setzer, R Woodrow; Judson, Richard S; Martin, Matthew T
Large high-throughput screening (HTS) efforts are widely used in drug development and chemical toxicity screening. Wide use and integration of these data can benefit from an efficient, transparent and reproducible data pipeline. Summary: The tcpl R package and its associated MySQL database provide a generalized platform for efficient storage, normalization and dose-response modeling of large high-throughput and high-content chemical screening data. The novel dose-response modeling algorithm has been tested against millions of diverse dose-response series, and robustly fits data with outliers and cytotoxicity-related signal loss. tcpl is freely available on the Comprehensive R Archive Network under the GPL-2 license.
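To make the dose-response modeling step above concrete, here is a minimal sketch of a Hill-model fit for a single concentration series. The parameter names (top, ac50, hill_coef), the synthetic data, and the crude grid search are illustrative assumptions, not the tcpl package's actual algorithm or parameterization.

```python
# Illustrative Hill-model fit for one dose-response series.
# All names and values below are hypothetical; tcpl's own fitting
# procedure is more sophisticated (e.g. maximum likelihood with
# outlier-robust error models).

def hill(conc, top, ac50, hill_coef):
    """Response = top / (1 + (ac50 / conc) ** hill_coef)."""
    return top / (1.0 + (ac50 / conc) ** hill_coef)

# Synthetic dose-response series (hypothetical concentrations and units)
concs = [0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0]
obs = [hill(c, top=100.0, ac50=0.5, hill_coef=1.2) for c in concs]

# Crude grid search over candidate AC50 values, minimizing squared error
best_ac50, best_err = None, float("inf")
for candidate in (g / 100.0 for g in range(1, 301)):
    err = sum((hill(c, 100.0, candidate, 1.2) - o) ** 2
              for c, o in zip(concs, obs))
    if err < best_err:
        best_ac50, best_err = candidate, err

print(best_ac50)  # recovers the simulated AC50 of 0.5
```

In a real pipeline the top and Hill coefficient would be fit jointly with the AC50, and competing models (e.g. constant and gain-loss) compared by an information criterion.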
Lee, Tsung-Hua; Chang, Jo-Shu; Wang, Hsiang-Yu
Microalgae have emerged as one of the most promising feedstocks for biofuels and bio-based chemical production. However, due to the lack of effective tools enabling rapid and high-throughput analysis of the content of microalgae biomass, the efficiency of screening and identification of microalgae with desired functional components from the natural environment is usually quite low. Moreover, the real-time monitoring of the production of target components from microalgae is also difficult. Recently, research efforts focusing on overcoming this limitation have started. In this review, the recent development of high-throughput methods for analyzing microalgae cellular contents is summarized. The future prospects and impacts of these detection methods in microalgae-related processing and industries are also addressed. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Arend, Daniel; Lange, Matthias; Pape, Jean-Michel; Weigelt-Fischer, Kathleen; Arana-Ceballos, Fernando; Mücke, Ingo; Klukas, Christian; Altmann, Thomas; Scholz, Uwe; Junker, Astrid
With the implementation of novel automated, high-throughput methods and facilities in recent years, plant phenomics has developed into a highly interdisciplinary research domain integrating biology, engineering and bioinformatics. Here we present a dataset from a non-invasive high-throughput plant phenotyping experiment, which uses image- and image-analysis-based approaches to monitor the growth and development of 484 Arabidopsis thaliana (thale cress) plants. The result is a comprehensive dataset of images and extracted phenotypic features. Such datasets require detailed documentation, standardized description of experimental metadata, and sustainable data storage and publication in order to ensure the reproducibility of experiments, data reuse and comparability within the scientific community. The dataset presented here has therefore been annotated using the standardized ISA-Tab format, following the recently published recommendations for the semantic description of plant phenotyping experiments.
Satbhai, Santosh B; Göschl, Christian; Busch, Wolfgang
The central question of genetics is how a genotype determines the phenotype of an organism. Genetic mapping approaches are a key for finding answers to this question. In particular, genome-wide association (GWA) studies have been rapidly adopted to study the architecture of complex quantitative traits. This was only possible due to the improvement of high-throughput and low-cost phenotyping methodologies. In this chapter we provide a detailed protocol for obtaining root trait data from the model species Arabidopsis thaliana using the semiautomated, high-throughput phenotyping pipeline BRAT (Busch-lab Root Analysis Toolchain) for early root growth under the stress condition of iron deficiency. Extracted root trait data can be directly used to perform GWA mapping using the freely accessible web application GWAPP to identify marker polymorphisms associated with the phenotype of interest.
Potyrailo, Radislav A.; Mirsky, Vladimir M.
New sensors with improved performance characteristics are needed for applications as diverse as bedside continuous monitoring, tracking of environmental pollutants, monitoring of food and water quality, monitoring of chemical processes, and safety in industrial, consumer, and automotive settings. Typical requirements in sensor improvement are selectivity, long-term stability, sensitivity, response time, reversibility, and reproducibility. Design of new sensing materials is the important cornerstone in the effort to develop new sensors. Often, sensing materials are too complex to predict their performance quantitatively in the design stage. Thus, combinatorial and high-throughput experimentation methodologies provide an opportunity to generate new required data to discover new sensing materials and/or to optimize existing material compositions. The goal of this chapter is to provide an overview of the key concepts of experimental development of sensing materials using combinatorial and high-throughput experimentation tools, and to promote additional fruitful interactions between computational scientists and experimentalists.
Bose, Partha Pratim; Kumar, Prakash
In high-throughput biotechnology and structural biology, molecular cloning is an essential prerequisite for attaining high yields of recombinant protein. However, a rapid, cost-effective, easy clone-screening protocol is still required to identify colonies carrying the desired insert, along with a cross-check method to certify expression of the desired protein as the end product. We report an easy, fast, sensitive and cheap visual clone screening and protein expression cross-check protocol employing a gold nanoparticle-based plasmonic detection phenomenon. This non-gel, non-PCR-based visual detection technique can be used for high-throughput clone screening followed by determination of the expression of the desired protein. Copyright © 2016 Elsevier Inc. All rights reserved.
Lim, Sungwon; Chen, Bob; Kariolis, Mihalis S; Dimov, Ivan K; Baer, Thomas M; Cochran, Jennifer R
Affinity maturation of protein-protein interactions requires iterative rounds of protein library generation and high-throughput screening to identify variants that bind with increased affinity to a target of interest. We recently developed a multipurpose protein engineering platform, termed μSCALE (Microcapillary Single Cell Analysis and Laser Extraction). This technology enables high-throughput screening of libraries of millions of cells expressing protein variants, based on their binding properties or functional activity. Here, we demonstrate the first use of the μSCALE platform for affinity maturation of a protein-protein binding interaction. In this proof-of-concept study, we engineered an extracellular domain of the Axl receptor tyrosine kinase to bind tighter to its ligand Gas6. Within 2 weeks, two iterative rounds of library generation and screening resulted in engineered Axl variants with a 50-fold decrease in kinetic dissociation rate, highlighting the use of μSCALE as a new tool for directed evolution.
Oprea, Tudor I; Bologa, Cristian G; Edwards, Bruce S; Prossnitz, Eric R; Sklar, Larry A
An empirical scheme to evaluate and prioritize screening hits from high-throughput screening (HTS) is proposed. Negative scores are given when chemotypes found in the HTS hits are present in annotated databases such as MDDR and WOMBAT, or for testing positive in toxicity-related experiments reported in TOXNET. Positive scores are given for higher measured biological activities, for testing negative in the toxicity-related literature, and for good overlap when profiled against drug-related properties. Particular emphasis is placed on estimating aqueous solubility to prioritize in vivo experiments. This empirical scheme is given as an illustration to assist the decision-making process in selecting chemotypes and individual compounds for further experimentation when confronted with multiple hits from high-throughput experiments. The decision-making process is discussed for a set of G-protein coupled receptor antagonists and validated on a literature example for dihydrofolate reductase inhibition.
Bedenbaugh, John E; Kim, Sungtak; Sasmaz, Erdem; Lauterbach, Jochen
Portable power technologies for military applications necessitate the production of fuels similar to LPG from existing feedstocks. Catalytic cracking of military jet fuel to form a mixture of C₂-C₄ hydrocarbons was investigated using high-throughput experimentation. Cracking experiments were performed in a gas-phase, 16-sample high-throughput reactor. Zeolite ZSM-5 catalysts with low Si/Al ratios (≤25) demonstrated the highest production of C₂-C₄ hydrocarbons at moderate reaction temperatures (623-823 K). ZSM-5 catalysts were optimized for JP-8 cracking activity to LPG through varying reaction temperature and framework Si/Al ratio. The reducing atmosphere required during catalytic cracking resulted in coking of the catalyst and a commensurate decrease in conversion rate. Rare earth metal promoters for ZSM-5 catalysts were screened to reduce coking deactivation rates, while noble metal promoters reduced onset temperatures for coke burnoff regeneration.
Structural biology and structural genomics projects routinely rely on recombinantly expressed proteins, but many proteins and complexes are difficult to obtain by this approach. We investigated native source proteins for high-throughput protein crystallography applications. The Escherichia coli proteome was fractionated, purified, crystallized, and structurally characterized. Macro-scale fermentation and fractionation were used to subdivide the soluble proteome into 408 unique fractions, of which 295 yielded crystals in microfluidic crystallization chips. Of these 295 crystals, 152 were selected for optimization, diffraction screening, and data collection. Twenty-three structures were determined, four of which were novel. This study demonstrates the utility of native source proteins for high-throughput crystallography.
Jain, Shushant; Sondervan, David; Rizzu, Patrizia; Bochdanovits, Zoltan; Caminada, Daniel; Heutink, Peter
Genomic approaches provide enormous amounts of raw data with regard to genetic variation, the diversity of RNA species, and the protein complement. High-throughput (HT) and high-content (HC) cellular screens are ideally suited to contextualize the information gathered from other "omic" approaches into networks and can be used for the identification of therapeutic targets. Current methods used for HT-HC screens are laborious, time-consuming, and prone to human error. The authors thus developed an automated high-throughput system with an integrated fluorescent imager for HC screens, called the AI.CELLHOST. The implementation of user-defined culturing and assay plate setup parameters allows parallel operation of multiple screens in diverse mammalian cell types. The authors demonstrate that such a system is able to successfully maintain different cell lines in culture for extended periods of time, as well as to significantly increase the throughput, accuracy, and reproducibility of HT and HC screens.
Lai, Yiu Wai; Krause, Michael; Savan, Alan; Thienhaus, Sigurd; Koukourakis, Nektarios; Hofmann, Martin R; Ludwig, Alfred
A high-throughput characterization technique based on digital holography for mapping film thickness in thin-film materials libraries was developed. Digital holographic microscopy is used for fully automatic measurements of the thickness of patterned films with nanometer resolution. The method has several significant advantages over conventional stylus profilometry: it is contactless and fast, substrate bending is compensated, and the experimental setup is simple. Patterned films prepared by different combinatorial thin-film approaches were characterized to investigate and demonstrate this method. The results show that this technique is valuable for the quick, reliable and high-throughput determination of the film thickness distribution in combinatorial materials research. Importantly, it can also be applied to thin films that have been structured by shadow masking.
Microdroplets offer unique compartments for accommodating a large number of chemical and biological reactions in tiny volume with precise control. A major concern in droplet-based microfluidics is the difficulty to address droplets individually and achieve high throughput at the same time. Here, we have combined an improved cartridge sampling technique with a microfluidic chip to perform droplet screenings and aggressive reaction with minimal (nanoliter-scale) reagent consumption. The droplet composition, distance, volume (nanoliter to subnanoliter scale), number, and sequence could be precisely and digitally programmed through the improved sampling technique, while sample evaporation and cross-contamination are effectively eliminated. Our combined device provides a simple model to utilize multiple droplets for various reactions with low reagent consumption and high throughput. © 2012 American Chemical Society.
Zhang, Tong; Zhang, Xu-Xiang; Ye, Lin
The overuse or misuse of antibiotics has accelerated antibiotic resistance, creating a major challenge for the public health in the world. Sewage treatment plants (STPs) are considered as important reservoirs for antibiotic resistance genes (ARGs) and activated sludge characterized with high microbial density and diversity facilitates ARG horizontal gene transfer (HGT) via mobile genetic elements (MGEs). However, little is known regarding the pool of ARGs and MGEs in sludge microbiome. In this study, the transposon aided capture (TRACA) system was employed to isolate novel plasmids from activated sludge of one STP in Hong Kong, China. We also used Illumina Hiseq 2000 high-throughput sequencing and metagenomics analysis to investigate the plasmid metagenome. Two novel plasmids were acquired from the sludge microbiome by using TRACA system and one novel plasmid was identified through metagenomics analysis. Our results revealed high levels of various ARGs as well as MGEs for HGT, including integrons, transposons and plasmids. The application of the TRACA system to isolate novel plasmids from the environmental metagenome, coupled with subsequent high-throughput sequencing and metagenomic analysis, highlighted the prevalence of ARGs and MGEs in microbial community of STPs.
Bugni, Tim S; Richards, Burt; Bhoite, Leen; Cimbora, Daniel; Harper, Mary Kay; Ireland, Chris M
There is a need for diverse molecular libraries for phenotype-selective and high-throughput screening. To make marine natural products (MNPs) more amenable to newer screening paradigms and shorten discovery time lines, we have created an MNP library characterized online using MS. To test the potential of the library, we screened a subset of the library in a phenotype-selective screen to identify compounds that inhibited the growth of BRCA2-deficient cells.
Van Nostrand, Joy D.; Liang, Yuting; He, Zhili; Li, Guanghe; Zhou, Jizhong
GeoChip is a comprehensive functional gene array that targets key functional genes involved in the geochemical cycling of N, C, and P, sulfate reduction, metal resistance and reduction, and contaminant degradation. Studies have shown the GeoChip to be a sensitive, specific, and high-throughput tool for microbial community analysis that has the power to link geochemical processes with microbial community structure. However, several challenges remain regarding the development and applications of microarrays for microbial community analysis.
Schwämmle, Veit; Vaudel, Marc
Cell signaling and functions heavily rely on post-translational modifications (PTMs) of proteins. Their high-throughput characterization is thus of utmost interest for multiple biological and medical investigations. In combination with efficient enrichment methods, peptide mass spectrometry analysis allows the quantitative comparison of thousands of modified peptides over different conditions. However, the large and complex datasets produced pose multiple data interpretation challenges, ranging from spectral interpretation to statistical and multivariate analyses. Here, we present a typical workflow to interpret such data.
Hoang, Thi-My-Nhung; Vu, Hong-Lien; Le, Ly-Thuy-Tram; Nguyen, Chi-Hung; Molla, Annie
Based on in vitro assays, we performed a High Throughput Screening (HTS) to identify kinase inhibitors among 10,000 small chemical compounds. In this didactic paper, we describe step-by-step the approach to validate the hits as well as the major pitfalls encountered in the development of active molecules. We propose a decision tree that could be adapted to most in vitro HTS.
Asp, Torben; Studer, Bruno; Lübberstedt, Thomas
Gene-associated single nucleotide polymorphisms (SNPs) are of major interest for genome analysis and breeding applications in the key grassland species perennial ryegrass. High-throughput 454 Titanium transcriptome sequencing was performed on two genotypes, which previously have been used in the VrnA mapping population. Here we report on large-scale SNP discovery and the construction of a genetic map enabling QTL fine mapping, map-based cloning, and comparative genomics in perennial ryegrass.
Soufan, Othman; Ba-Alawi, Wail; Afeef, Moataz; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B.
Background Mining high-throughput screening (HTS) assays is key for enhancing decisions in the area of drug repositioning and drug discovery. However, many challenges are encountered in the process of developing suitable and accurate methods for extracting useful information from these assays. Virtual screening and the wide variety of databases, methods and solutions proposed to date have not completely overcome these challenges. This study is based on a multi-label classification (MLC) technique...
Sastry, Anand; Monk, Jonathan M.; Tegel, Hanna
Motivation: The Human Protein Atlas (HPA) enables the simultaneous characterization of thousands of proteins across various tissues to pinpoint their spatial location in the human body. This has been achieved through transcriptomics and high-throughput immunohistochemistry-based approaches, where over 40 000 unique human protein fragments have been expressed in E. coli. These datasets enable quantitative tracking of entire cellular proteomes and present new avenues for understanding molecular-level properties influencing expression and solubility. Results: Combining computational biology...
Rabner, Arthur; Shacham, Yosi
This paper discusses possible methods for on-chip fluorescent imaging for integrated biosensors. The integration of optical and electro-optical accessories, according to the suggested methods, can improve the performance of fluorescence imaging. It can boost the signal-to-background ratio by a few orders of magnitude in comparison to conventional discrete setups. The methods presented in this paper are oriented towards building reproducible arrays for high-throughput micro total analysis systems...
In this communication, we report a newly developed cell patterning methodology using a silicon-based stencil, which exhibits advantages such as easy handling, reusability, a hydrophilic surface and mature fabrication technologies. Cell arrays obtained by this method were used to investigate cell growth under a temperature gradient, which demonstrated the possibility of studying cell behavior in a high-throughput assay. This journal is © The Royal Society of Chemistry 2011.
Klein, Marlise I.; Xiao, Jin; Lu, Bingwen; Delahunty, Claire M.; Yates, John R.; Koo, Hyun
Biofilms formed on tooth surfaces are comprised of mixed microbiota enmeshed in an extracellular matrix. Oral biofilms are constantly exposed to environmental changes, which influence the microbial composition, matrix formation and expression of virulence. Streptococcus mutans and sucrose are key modulators associated with the evolution of virulent-cariogenic biofilms. In this study, we used a high-throughput quantitative proteomics approach to examine how S. mutans produces relevant proteins...
Xie, Yang; Ahn, Chul
Large-scale sequencing, copy number, mRNA, and protein data hold great promise for biomedical research, while posing great challenges to data management and data analysis. Integrating different types of high-throughput data from diverse sources can increase the statistical power of data analysis and provide deeper biological understanding. This chapter uses two biomedical research examples to illustrate why there is an urgent need to develop reliable and robust methods for integrating...
Ortega-Rivas, Antonio; Padrón, José M; Valladares, Basilio; Elsheikha, Hany M
Despite significant public health impact, there is no specific antiprotozoal therapy for prevention and treatment of Acanthamoeba castellanii infection. There is a need for new and efficient anti-Acanthamoeba drugs that are less toxic and can reduce treatment duration and frequency of administration. In this context a new, rapid and sensitive assay is required for high-throughput activity testing and screening of new therapeutic compounds. A colorimetric assay based on sulforhodamine B (SRB) ...
Lundegaard, Claus; Lund, Ole; Nielsen, Morten
Prediction methods as well as experimental methods for T-cell epitope discovery have developed significantly in recent years. High-throughput experimental methods have made it possible to perform full-length protein scans for epitopes restricted to a limited number of MHC alleles. The high costs ... discovery. We expect prediction methods as well as experimental validation methods to continue to develop, and that we will soon see clinical trials of products whose development has been guided by prediction methods.
Marchand, P; Makwana, N. M.; Tighe, C. J.; Gruar, R. I.; Parkin, I. P.; Carmalt, C. J.; Darr, J. A.
A high-throughput optimization and subsequent scale-up methodology has been used for the synthesis of conductive tin-doped indium oxide (known as ITO) nanoparticles. ITO nanoparticles with up to 12 at % Sn were synthesized using a laboratory-scale (15 g/hour by dry mass) continuous hydrothermal synthesis process, and the as-synthesized powders were characterized by powder X-ray diffraction, transmission electron microscopy, energy-dispersive X-ray analysis, and X-ray photoelectron spectroscopy...
Starita, Lea M; Fields, Stanley
Deep mutational scanning is a highly parallel method that uses high-throughput sequencing to track changes in >10(5) protein variants before and after selection to measure the effects of mutations on protein function. Here we outline the stages of a deep mutational scanning experiment, focusing on the construction of libraries of protein sequence variants and the preparation of Illumina sequencing libraries. © 2015 Cold Spring Harbor Laboratory Press.
Recently, the discovery of new drugs has used new concepts and modern techniques instead of the conventional ones. With the development of scientific knowledge, molecular biology and modern techniques have come to play an important role in the investigation and discovery of new drugs. Many methods and modern techniques are used in the discovery of new drugs, e.g., genetic engineering, recombinant DNA, radioligand binding assay techniques, HTS (High Throughput Screening) techniques, and mass ligand...
Oxidative stress is one of the key mechanisms linking ambient particulate matter (PM) exposure with various adverse health effects. The oxidative potential of PM has been used to characterize the ability of PM to induce oxidative stress. The hydroxyl radical (•OH) is the most destructive radical produced by PM. However, there is currently no high-throughput approach which can rapidly measure PM-induced •OH for a large number of samples with an automated system. This study evaluated four existing molecular probes (disodium terephthalate, 3′-(p-aminophenyl)fluorescein, coumarin-3-carboxylic acid, and sodium benzoate) for their applicability to measure •OH induced by PM in a high-throughput cell-free system using fluorescence techniques, based on both our experiments and on an assessment of the physicochemical properties of the probes reported in the literature. Disodium terephthalate (TPT) was the most applicable molecular probe to measure •OH induced by PM, due to its high solubility, the high stability of the corresponding fluorescent product (i.e., 2-hydroxyterephthalic acid), its high yield compared with the other molecular probes, and its stable fluorescence intensity over a wide range of pH environments. TPT was applied in a high-throughput format to measure PM (NIST 1648a)-induced •OH in phosphate buffered saline. The formed fluorescent product was measured at designated time points up to 2 h. The fluorescent product of TPT had a detection limit of 17.59 nM. The soluble fraction of PM contributed approximately 76.9% of the •OH induced by total PM, and the soluble metal ions of PM contributed 57.4% of the overall •OH formation. This study provides a promising cost-effective high-throughput method to measure •OH induced by PM on a routine basis.
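As a side note on how a fluorescence detection limit like the 17.59 nM reported above is commonly derived, the sketch below applies the conventional 3σ/slope rule to blank-well readings. The blank values and calibration slope are illustrative assumptions, not data from the study.

```python
# Hypothetical blank-well fluorescence readings (arbitrary units) and an
# assumed calibration slope; the 3*sigma/slope convention is a common way
# such a detection limit is estimated.
blank_fluorescence = [101.2, 99.8, 100.5, 100.9, 99.6, 100.0]

n = len(blank_fluorescence)
mean = sum(blank_fluorescence) / n
# Sample standard deviation of the blanks
sd = (sum((x - mean) ** 2 for x in blank_fluorescence) / (n - 1)) ** 0.5

slope = 0.1  # fluorescence units per nM of fluorescent product (assumed)
lod_nM = 3 * sd / slope
print(f"estimated LOD: {lod_nM:.2f} nM")
```

With real calibration data the slope would come from a linear fit of fluorescence against known concentrations of the fluorescent product.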
Nandy, Subir Kumar; Knudsen, Peter Boldsen; Rosenkjær, Alexander
By redesigning the established methylene blue reduction test for bacteria and yeast, we present a cheap and efficient methodology for quantitative physiology of eukaryotic cells applicable to high-throughput systems. Validation of the method in fermenters and high-throughput systems proved equivalent, displaying reduction curves that interrelated directly with CFU counts. For growth rate estimation, the methylene blue reduction test (MBRT) proved superior, since the discriminatory nature of the method allowed for the quantification of metabolically active cells only, excluding dead cells...
Emanuel, George; Moffitt, Jeffrey R; Zhuang, Xiaowei
We report a high-throughput screening method that allows diverse genotypes and corresponding phenotypes to be imaged in individual cells. We achieve genotyping by introducing barcoded genetic variants into cells as pooled libraries and reading the barcodes out using massively multiplexed fluorescence in situ hybridization. To demonstrate the power of image-based pooled screening, we identified brighter and more photostable variants of the fluorescent protein YFAST among 60,000 variants.
Althammer, Sonja Daniela; González-Vallinas Rostes, Juan, 1983-; Ballaré, Cecilia Julia; Beato, Miguel; Eyras Jiménez, Eduardo
Motivation: High-throughput sequencing (HTS) has revolutionized gene regulation studies and is now fundamental for the detection of protein–DNA and protein–RNA binding, as well as for measuring RNA expression. With increasing variety and sequencing depth of HTS datasets, the need for more flexible and memory-efficient tools to analyse them is growing. Results: We describe Pyicos, a powerful toolkit for the analysis of mapped reads from diverse HTS experiments: ChIP-Seq, either punctuated or broad...
Schwämmle, Veit; Braga, Thiago Verano; Roepstorff, Peter
The investigation of post-translational modifications (PTMs) represents one of the main research focuses for the study of protein function and cell signaling. Mass spectrometry instrumentation with increasing sensitivity, improved protocols for PTM enrichment, and recently established pipelines for high-throughput experiments allow large-scale identification and quantification of several PTM types. This review addresses the concurrently emerging challenges for the computational analysis of the resulting data and presents PTM-centered approaches for spectra identification, statistical analysis...
Hassanein, Mohamed; Weidow, Brandy; Koehler, Elizabeth; Bakane, Naimish; Garbett, Shawn; Shyr, Yu; Quaranta, Vito
Purpose Metabolism, and especially glucose uptake, is a key quantitative cell trait that is closely linked to cancer initiation and progression. Therefore, developing high-throughput assays for measuring glucose uptake in cancer cells would be desirable for simultaneous comparisons of multiple cell lines and microenvironmental conditions. This study was designed with two specific aims in mind: the first was to develop and validate a high-throughput screening method for quantitative assessment of glucose uptake in “normal” and tumor cells using the fluorescent 2-deoxyglucose analog 2-[N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl)amino]-2-deoxyglucose (2-NBDG), and the second was to develop an image-based, quantitative, single-cell assay for measuring glucose uptake using the same probe to dissect, at higher resolution, the full spectrum of metabolic variability within populations of tumor cells in vitro. Procedure The kinetics of population-based glucose uptake was evaluated for MCF10A mammary epithelial and CA1d breast cancer cell lines, using 2-NBDG and a fluorometric microplate reader. Glucose uptake for the same cell lines was also examined at the single-cell level using high-content automated microscopy coupled with semi-automated cell-cytometric image analysis approaches. Statistical treatments were also implemented to analyze intra-population variability. Results Our results demonstrate that the high-throughput fluorometric assay using 2-NBDG is a reliable method to assess population-level kinetics of glucose uptake in cell lines in vitro. Similarly, single-cell image-based assays and analyses of 2-NBDG fluorescence proved an effective and accurate means for assessing glucose uptake, which revealed that breast tumor cell lines display intra-population variability that is modulated by growth conditions. Conclusions These studies indicate that 2-NBDG can be used to aid in the high-throughput analysis of the influence of chemotherapeutics on glucose uptake in cancer cells.
Lagus, Todd P.; Edd, Jon F.
Microfluidic encapsulation methods have been previously utilized to capture cells in picoliter-scale aqueous, monodisperse drops, providing confinement from a bulk fluid environment with applications in high throughput screening, cytometry, and mass spectrometry. We describe a method to not only encapsulate single cells, but to repeatedly capture a set number of cells (here we demonstrate one- and two-cell encapsulation) to study both isolation and the interactions between cells in groups of ...
William R. Kenealy; Thomas W. Jeffries
High-throughput screening requires simple assays that give reliable quantitative results. A microplate assay was developed for reducing sugar analysis that uses a 2,2'-bicinchoninic-based protein reagent. Endo-1,4-β-D-xylanase activity against oat spelt xylan was detected at activities of 0.002 to 0.011 IU ml−1. The assay is linear for sugar...
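A reducing-sugar microplate assay of this kind is typically quantified against a linear standard curve. The sketch below fits hypothetical glucose standards by least squares and converts a sample absorbance into enzyme activity (IU = µmol reducing sugar released per minute); all numbers are illustrative, not the paper's data.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Hypothetical glucose standards (umol per well) vs absorbance
std_umol = [0.0, 0.05, 0.10, 0.20, 0.40]
std_abs  = [0.02, 0.11, 0.20, 0.39, 0.77]
slope, intercept = linear_fit(std_umol, std_abs)

# Convert one sample absorbance to activity (IU ml-1 of enzyme)
sample_abs, minutes, ml_enzyme = 0.30, 30.0, 0.05
umol = (sample_abs - intercept) / slope
activity = umol / minutes / ml_enzyme
print(f"{activity:.4f} IU/ml")
```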
Proteomics for biomarker validation needs high-throughput instrumentation to analyze huge sets of clinical samples quantitatively and reproducibly, in minimal time and without manual experimental errors. Sample preparation, a vital step in proteomics, plays a major role in the identification and quantification of proteins from biological samples. Tryptic digestion, a major checkpoint in sample preparation for mass spectrometry-based proteomics, needs to be more accurate with rapid processing time. The present study focuses on establishing a high-throughput automated online system for proteolytic digestion and desalting of proteins from biological samples, quantitatively and qualitatively, in a reproducible manner. The study compares online protein digestion and desalting of BSA with the conventional off-line (in-solution) method, validated on a real sample for reproducibility. Proteins were identified using the SEQUEST database search engine and the data were quantified using IDEALQ software. The results show that the online system, capable of handling high-throughput samples in 96-well format, carries out protein digestion and peptide desalting efficiently in a reproducible and quantitative manner. Label-free quantification showed a clear increase of peptide quantities with increasing concentration, with better linearity than the off-line method. We therefore suggest that inclusion of this online system in a proteomic pipeline will be effective for protein quantification in comparative proteomics, where quantification is crucial.
Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal
High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917
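A toy illustration of one digital trait such a pipeline can extract: projected shoot area as the count of pixels passing a simple vegetation-index threshold. This is a generic sketch, not Image Harvest's actual algorithm; the tiny image and threshold are hypothetical.

```python
def projected_shoot_area(image, threshold=0.2):
    """Count 'plant' pixels whose excess-green index exceeds a
    threshold; a toy stand-in for a shoot-area digital trait."""
    area = 0
    for row in image:
        for r, g, b in row:
            excess_green = 2 * g - r - b     # simple vegetation index
            if excess_green > threshold:
                area += 1
    return area

# Tiny hypothetical RGB image (values in [0, 1]); a real pipeline
# would read camera frames and process many images in parallel
image = [[(0.8, 0.8, 0.8), (0.2, 0.7, 0.1), (0.1, 0.6, 0.2)],
         [(0.7, 0.9, 0.8), (0.3, 0.8, 0.2), (0.8, 0.7, 0.9)]]
print(projected_shoot_area(image))   # pixels classified as plant
```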
Schuster, Andre; Bruno, Kenneth S.; Collett, James R.; Baker, Scott E.; Seiboth, Bernhard; Kubicek, Christian P.; Schmoll, Monika
The ascomycete fungus, Trichoderma reesei (anamorph of Hypocrea jecorina), represents a biotechnological workhorse and is currently one of the most proficient cellulase producers. While strain improvement was traditionally accomplished by random mutagenesis, a detailed understanding of cellulase regulation can only be gained using recombinant technologies. Results: Aiming at high efficiency and high throughput methods, we present here a construction kit for gene knock out in T. reesei. We provide a primer database for gene deletion using the pyr4, amdS and hph selection markers. For high throughput generation of gene knock outs, we constructed vectors using yeast mediated recombination and then transformed a T. reesei strain deficient in non-homologous end joining (NHEJ) by spore electroporation. This NHEJ-defect was subsequently removed by crossing of mutants with a sexually competent strain derived from the parental strain, QM9414. Conclusions: Using this strategy and the materials provided, high throughput gene deletion in T. reesei becomes feasible. Moreover, with the application of sexual development, the NHEJ-defect can be removed efficiently and without the need for additional selection markers. The same advantages apply for the construction of multiple mutants by crossing of strains with different gene deletions, which is now possible with considerably less hands-on time and minimal screening effort compared to a transformation approach. Consequently this toolkit can considerably boost research towards efficient exploitation of the resources of T. reesei for cellulase expression and hence second generation biofuel production.
Magennis, E P; Hook, A L; Davies, M C; Alexander, C; Williams, P; Alexander, M R
Controlling the colonisation of materials by microorganisms is important in a wide range of industries and clinical settings. To date, the underlying mechanisms that govern the interactions of bacteria with material surfaces remain poorly understood, limiting the ab initio design and engineering of biomaterials to control bacterial attachment. Combinatorial approaches involving high-throughput screening have emerged as key tools for identifying materials to control bacterial attachment. Assessment of the hundreds of different materials screened by these methods can be aided by computational modelling, an approach that can develop an understanding of the rules used to predict bacterial attachment to surfaces of non-toxic synthetic materials. Here we outline our view on the state of this field and the challenges and opportunities in this area for the coming years. This opinion article on high-throughput screening methods reflects one aspect of how the field of biomaterials research has developed and progressed. The piece takes the reader through key developments in biomaterials discovery, particularly focusing on the need to reduce bacterial colonisation of surfaces. Such bacteria-resistant surfaces are increasingly required in this age of antibiotic resistance. The influence and origin of high-throughput methods are discussed, with insights into the future of biomaterials development, where computational methods may drive materials development into new fertile areas of discovery. New biomaterials will exhibit responsiveness to adapt to the biological environment and promote better integration and reduced rejection or infection. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Park, Sungjin; Lee, Myung-ryul; Pyo, Soon-Jin; Shin, Injae
Carbohydrate-protein interactions play important biological roles in living organisms. For the most part, biophysical and biochemical methods have been used for studying these biomolecular interactions. Less attention has been given to the development of high-throughput methods to elucidate recognition events between carbohydrates and proteins. In the current effort to develop a novel high-throughput tool for monitoring carbohydrate-protein interactions, we prepared carbohydrate microarrays by immobilizing maleimide-linked carbohydrates on thiol-derivatized glass slides and carried out lectin binding experiments by using these microarrays. The results showed that carbohydrates with different structural features selectively bound to the corresponding lectins with relative binding affinities that correlated with those obtained from solution-based assays. In addition, binding affinities of lectins to carbohydrates were also quantitatively analyzed by determining IC(50) values of soluble carbohydrates with the carbohydrate microarrays. To fabricate carbohydrate chips that contained more diverse carbohydrate probes, solution-phase parallel and enzymatic glycosylations were performed. Three model disaccharides were in parallel synthesized in solution-phase and used as carbohydrate probes for the fabrication of carbohydrate chips. Three enzymatic glycosylations on glass slides were consecutively performed to generate carbohydrate microarrays that contained the complex oligosaccharide, sialyl Le(x). Overall, these works demonstrated that carbohydrate chips could be efficiently prepared by covalent immobilization of maleimide-linked carbohydrates on the thiol-coated glass slides and applied for the high-throughput analyses of carbohydrate-protein interactions.
Bugelski, P J; Atif, U; Molton, S; Toeg, I; Lord, P G; Morgan, D G
Recent advances in combinatorial chemistry and high throughput screens for pharmacologic activity have created an increasing demand for in vitro high throughput screens for toxicological evaluation in the early phases of drug discovery. To develop a strategy for such a screen, we have conducted a data-mining study of the National Cancer Institute's Developmental Therapeutics Program (DTP) cytotoxicity database. Using hierarchical cluster analysis, we confirmed that the different tissues of origin and individual cell lines showed differential sensitivity to compounds in the DTP Standard Agents database. Surprisingly, however, approaching the data globally, linear regression analysis showed that the differences were relatively minor. Comparison with the literature on acute toxicity in mice showed that the predictive power of growth inhibition was marginally superior to that of cell death. This data-mining study suggests that, in designing a strategy for high throughput cytotoxicity screening: a single cell line, the choice of which may not be critical, can be used as a primary screen; a single end point may be an adequate measure; and a cut-off value for 50% growth inhibition between 10(-6) and 10(-8) M may be a reasonable starting point for accepting a cytotoxic compound for scale-up and further study.
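The "differences were relatively minor" finding can be illustrated with a correlation of sensitivity profiles between two cell lines across a compound panel: a Pearson r near 1 means the lines rank compounds almost identically, supporting a single-cell-line primary screen. The profiles below are hypothetical, not DTP data.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) *
                           sum((y - mb) ** 2 for y in b))

# Hypothetical -log10(GI50) profiles for 8 compounds in two cell lines
line_a = [6.1, 7.4, 5.2, 8.0, 6.6, 5.9, 7.1, 6.3]
line_b = [6.0, 7.1, 5.5, 7.8, 6.4, 6.2, 6.9, 6.5]
r = pearson(line_a, line_b)
print(f"r = {r:.3f}")
```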
Abstract Background The differentiation of naive T and B cells into memory lymphocytes is essential for immunity to pathogens. Therapeutic manipulation of this cellular differentiation program could improve vaccine efficacy and the in vitro expansion of memory cells. However, chemical screens to identify compounds that induce memory differentiation have been limited by (1) the lack of reporter-gene or functional assays that can distinguish naive and memory-phenotype T cells at high throughput and (2) a suitable cell line representative of naive T cells. Results Here, we describe a method for gene-expression-based screening that allows primary naive and memory-phenotype lymphocytes to be discriminated based on complex gene signatures corresponding to these differentiation states. We used ligation-mediated amplification and a fluorescent, bead-based detection system to quantify simultaneously 55 transcripts representing naive and memory-phenotype signatures in purified populations of human T cells. The use of a multi-gene panel allowed better resolution than any constituent single gene. The method was precise, correlated well with Affymetrix microarray data, and could be easily scaled up for high throughput. Conclusion This method provides a generic solution for high-throughput differentiation screens in primary human T cells where no single-gene or functional assay is available. This screening platform will allow the identification of small molecules, genes or soluble factors that direct memory differentiation in naive human lymphocytes.
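One common way to collapse a multi-gene panel into a differentiation readout is a composite signature score: mean expression of the memory-signature genes minus mean expression of the naive-signature genes. The sketch below uses hypothetical gene names and log2 values purely for illustration; it is not the study's 55-gene panel or scoring method.

```python
import statistics

def signature_score(expr, memory_genes, naive_genes):
    """Composite score: mean memory-signature expression minus
    mean naive-signature expression (log2 units)."""
    mem = statistics.mean(expr[g] for g in memory_genes)
    nai = statistics.mean(expr[g] for g in naive_genes)
    return mem - nai

# Hypothetical markers and log2 expression for two sorted samples
memory_genes = ["IL7R", "CCR5", "GZMK"]
naive_genes  = ["SELL", "CCR7", "LEF1"]
memory_cell = {"IL7R": 8.1, "CCR5": 7.4, "GZMK": 6.9,
               "SELL": 4.2, "CCR7": 5.0, "LEF1": 4.6}
naive_cell  = {"IL7R": 5.2, "CCR5": 3.9, "GZMK": 3.1,
               "SELL": 8.8, "CCR7": 9.1, "LEF1": 8.0}

print(signature_score(memory_cell, memory_genes, naive_genes))  # positive
print(signature_score(naive_cell, memory_genes, naive_genes))   # negative
```

Averaging over a panel rather than reading a single gene is what gives the screen its improved resolution between the two states.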
Sastry, Anand; Monk, Jonathan; Tegel, Hanna; Uhlen, Mathias; Palsson, Bernhard O; Rockberg, Johan; Brunk, Elizabeth
The Human Protein Atlas (HPA) enables the simultaneous characterization of thousands of proteins across various tissues to pinpoint their spatial location in the human body. This has been achieved through transcriptomics and high-throughput immunohistochemistry-based approaches, where over 40 000 unique human protein fragments have been expressed in E. coli. These datasets enable quantitative tracking of entire cellular proteomes and present new avenues for understanding molecular-level properties influencing expression and solubility. Combining computational biology and machine learning identifies protein properties that hinder the HPA high-throughput antibody production pipeline. We predict protein expression and solubility with accuracies of 70% and 80%, respectively, based on a subset of key properties (aromaticity, hydropathy and isoelectric point). We guide the selection of protein fragments based on these characteristics to optimize high-throughput experimentation. We present the machine learning workflow as a series of IPython notebooks hosted on GitHub (https://github.com/SBRG/Protein_ML). The workflow can be used as a template for analysis of further expression and solubility datasets. Supplementary data are available at Bioinformatics online.
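Two of the key features named above, aromaticity and hydropathy, can be computed directly from a fragment's sequence; the sketch below uses the standard Kyte-Doolittle scale for GRAVY and the F/W/Y fraction for aromaticity. The fragment sequence is hypothetical, and this is a generic feature computation, not the repository's exact code (isoelectric point is omitted for brevity).

```python
# Kyte-Doolittle hydropathy values per residue
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
      "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
      "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
      "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def gravy(seq):
    """Grand average of hydropathy (mean Kyte-Doolittle value)."""
    return sum(KD[aa] for aa in seq) / len(seq)

def aromaticity(seq):
    """Fraction of aromatic residues (F, W, Y)."""
    return sum(seq.count(aa) for aa in "FWY") / len(seq)

fragment = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # hypothetical fragment
print(f"GRAVY: {gravy(fragment):.2f}, "
      f"aromaticity: {aromaticity(fragment):.2f}")
```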
Festa, Fernanda; Steel, Jason; Bian, Xiaofang; Labaer, Joshua
The study of protein function usually requires the use of a cloned version of the gene for protein expression and functional assays. This strategy is particularly important when the information available regarding function is limited. The functional characterization of the thousands of newly identified proteins revealed by genomics requires faster methods than traditional single-gene experiments, creating the need for fast, flexible and reliable cloning systems. Collections of open reading frame (ORF) clones can be coupled with high-throughput proteomics platforms, such as protein microarrays and cell-based assays, to answer biological questions. In this tutorial we provide the background for DNA cloning, discuss the major high-throughput cloning systems (Gateway® Technology, Flexi® Vector Systems, and Creator™ DNA Cloning System) and compare them side-by-side. We also report an example of a high-throughput cloning study and its application in functional proteomics. This Tutorial is part of the International Proteomics Tutorial Programme (IPTP12). Details can be found at http://www.proteomicstutorials.org. PMID:23457047
Vierna, J; Doña, J; Vizcaíno, A; Serrano, D; Jovani, R
High-throughput DNA barcoding has become essential in ecology and evolution, but some technical questions still remain. Increasing the number of PCR cycles above the routine 20-30 cycles is a common practice when working with old-type specimens, which yield small amounts of DNA, or when facing annealing issues with the primers. However, increasing the number of cycles can raise the number of artificial mutations due to polymerase errors. In this work, we sequenced 20 COI libraries on the Illumina MiSeq platform. Libraries were prepared with 40, 45, 50, 55, and 60 PCR cycles from four individuals belonging to four species of four genera of cephalopods. We found no relationship between the number of PCR cycles and the number of mutations, despite using a nonproofreading polymerase. Moreover, even when using a high number of PCR cycles, the resulting number of mutations was low enough not to be an issue in the context of high-throughput DNA barcoding (but may still remain an issue in DNA metabarcoding due to chimera formation). We conclude that the common practice of increasing the number of PCR cycles should not negatively impact the outcome of a high-throughput DNA barcoding study in terms of the occurrence of point mutations.
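For intuition on why extra cycles added few mutations: a rough upper bound on polymerase-induced errors per final amplicon is per-base error rate × amplicon length × number of cycles (each surviving lineage is copied roughly once per cycle). The 1e-5 errors/base/cycle rate below is an assumed, typical non-proofreading value, not a figure from the paper.

```python
def expected_mutations(error_rate, length_bp, cycles):
    """Rough upper bound on mean polymerase-induced errors per
    final amplicon: each lineage is copied ~once per cycle."""
    return error_rate * length_bp * cycles

# Assumed Taq-like error rate on the standard 658-bp COI barcode
for c in (40, 45, 50, 55, 60):
    m = expected_mutations(1e-5, 658, c)
    print(f"{c} cycles: ~{m:.2f} expected errors per read")
```

Even at 60 cycles the expectation stays well below one error per read under these assumptions, consistent with the paper's observation that point mutations were not an issue.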
Deantonio, Cecilia; Sedini, Valentina; Cesaro, Patrizia; Quasso, Fabio; Cotella, Diego; Persichetti, Francesca; Santoro, Claudio; Sblattero, Daniele
Over the last few years, high-throughput protein production (HTPP) has played a crucial role in functional proteomics. High-quality, high-yield and fast recombinant protein production is critical for new HTPP technologies. Escherichia coli is usually the expression system of choice for protein production thanks to its fast growth, ease of handling and high yields of protein produced. Even though shake-flask cultures are widely used, there is an increasing need for easy-to-handle, lab-scale, high-throughput systems. In this article we describe a novel minifermenter system suitable for HTPP. The Air-Well minifermenter system consists of a homogeneous air-sparging device that includes an air diffusion system, and a stainless-steel 96-needle plate integrated with a 96-deep-well plate where cultures take place. This system provides aeration to achieve higher optical-density growth compared with classical shaking growth, without the decrease in pH value and bacterial viability. Moreover, the yield of recombinant protein is up to 3-fold higher, with a considerable improvement in the amount of full-length proteins. High-throughput production of hundreds of proteins in parallel can be obtained by sparging air in a continuous and controlled manner. The system is modular and can be easily modified and scaled up to meet the demands of HTPP.
Jasinski, Sophie; Lécureuil, Alain; Durandet, Monique; Bernard-Moulin, Patrick; Guerche, Philippe
Seed storage compounds are of crucial importance for human diet, feed and industrial uses. In oleo-proteaginous species like rapeseed, seed oil and protein are the qualitative determinants that confer economic value on the harvested seed. To date, although the biosynthesis pathways of oil and storage protein are rather well known, the factors that determine how these types of reserves are partitioned in seeds remain to be identified. With the aim of implementing a quantitative genetics approach, requiring phenotyping of hundreds of plants, our first objective was to establish near-infrared reflectance spectroscopic (NIRS) predictive equations in order to estimate oil, protein, carbon, and nitrogen content in Arabidopsis seed at high throughput. Our results demonstrated that NIRS is a powerful non-destructive, high-throughput method to assess the content of these four major components in Arabidopsis seed. With this tool in hand, we analyzed Arabidopsis natural variation for these four components and illustrated that they all displayed a wide range of variation. Finally, NIRS was used to map QTL for these four traits using seeds from the Arabidopsis thaliana Ct-1 × Col-0 recombinant inbred line population. Some QTL co-localized with QTL previously identified, but others mapped to chromosomal regions never identified so far for such traits. This paper illustrates the usefulness of NIRS predictive equations to perform accurate high-throughput phenotyping of Arabidopsis seed content, opening new perspectives in gene identification following QTL mapping and genome-wide association studies.
Prashar, Ankush; Yildiz, Jane; McNicol, James W; Bryan, Glenn J; Jones, Hamlyn G
The rapid development of genomic technology has made high throughput genotyping widely accessible but the associated high throughput phenotyping is now the major limiting factor in genetic analysis of traits. This paper evaluates the use of thermal imaging for the high throughput field phenotyping of Solanum tuberosum for differences in stomatal behaviour. A large multi-replicated trial of a potato mapping population was used to investigate the consistency in genotypic rankings across different trials and across measurements made at different times of day and on different days. The results confirmed a high degree of consistency between the genotypic rankings based on relative canopy temperature on different occasions. Genotype discrimination was enhanced both through normalising data by expressing genotype temperatures as differences from image means and through the enhanced replication obtained by using overlapping images. A Monte Carlo simulation approach was used to confirm the magnitude of genotypic differences that it is possible to discriminate. The results showed a clear negative association between canopy temperature and final tuber yield for this population, when grown under ample moisture supply. We have therefore established infrared thermography as an easy, rapid and non-destructive screening method for evaluating large population trials for genetic analysis. We also envisage this approach as having great potential for evaluating plant response to stress under field conditions.
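The normalization step described above, expressing each genotype's canopy temperature as a deviation from its image mean, can be sketched in a few lines. The temperatures below are hypothetical; the point is that genotype rankings survive image-to-image drift (e.g. a warmer time of day) once the image mean is subtracted.

```python
def normalize_by_image_mean(image_temps):
    """Express each genotype's plot temperature as a deviation from
    its image mean, removing image-to-image environmental drift."""
    mean = sum(image_temps.values()) / len(image_temps)
    return {g: t - mean for g, t in image_temps.items()}

# Hypothetical mean canopy temperatures (deg C), same genotypes,
# two thermal images taken ~3 degrees apart in ambient conditions
image1 = {"G1": 24.8, "G2": 26.1, "G3": 25.3}
image2 = {"G1": 27.9, "G2": 29.2, "G3": 28.4}

print(normalize_by_image_mean(image1))
print(normalize_by_image_mean(image2))
```

The normalized deviations agree across the two images even though the raw temperatures differ by ~3 °C, which is why this step enhances genotype discrimination.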
Liu, Yun; Grossman, Jeffrey
Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. Previously we have predicted a new class of functional materials that have the potential to address these challenges. Recently, we have developed an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening algorithm we have developed can run through large numbers of molecules composed of earth-abundant elements, and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical design principles to guide further STF materials design through the correlation between isomerization enthalpy and structural properties.
Carlson, H. K.; Coates, J. D.; Deutschbauer, A. M.
The selective perturbation of complex microbial ecosystems to predictably influence outcomes in engineered and industrial environments remains a grand challenge for geomicrobiology. In some industrial ecosystems, such as oil reservoirs, sulfate reducing microorganisms (SRM) produce hydrogen sulfide which is toxic, explosive and corrosive. Current strategies to selectively inhibit sulfidogenesis are based on non-specific biocide treatments, bio-competitive exclusion by alternative electron acceptors or sulfate-analogs which are competitive inhibitors or futile/alternative substrates of the sulfate reduction pathway. Despite the economic cost of sulfidogenesis, there has been minimal exploration of the chemical space of possible inhibitory compounds, and very little work has quantitatively assessed the selectivity of putative souring treatments. We have developed a high-throughput screening strategy to target SRM, quantitatively ranked the selectivity and potency of hundreds of compounds and identified previously unrecognized SRM selective inhibitors and synergistic interactions between inhibitors. Once inhibitor selectivity is defined, high-throughput characterization of microbial community structure across compound gradients and identification of fitness determinants using isolate bar-coded transposon mutant libraries can give insights into the genetic mechanisms whereby compounds structure microbial communities. The high-throughput (HT) approach we present can be readily applied to target SRM in diverse environments and more broadly, could be used to identify and quantify the potency and selectivity of inhibitors of a variety of microbial metabolisms. Our findings and approach are relevant for engineering environmental ecosystems and also to understand the role of natural gradients in shaping microbial niche space.
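Quantitative ranking of inhibitor selectivity, as described above, is often summarized as a selectivity index: the ratio of the dose that inhibits a non-target organism to the dose that inhibits the target SRM. The compound names and IC50 values below are hypothetical illustrations, not results from the screen.

```python
def selectivity_index(ic50_nontarget, ic50_target):
    """>1 means the compound inhibits the target (here, SRM) at
    lower doses than the non-target community member."""
    return ic50_nontarget / ic50_target

# Hypothetical IC50 values (uM) against an SRM (target) and a
# non-target heterotroph, per compound
compounds = {"nitrate": (5000.0, 2500.0),
             "perchlorate": (8000.0, 400.0),
             "compound_X": (900.0, 3.0)}

ranked = sorted(compounds.items(),
                key=lambda kv: selectivity_index(*kv[1]), reverse=True)
for name, (nt, t) in ranked:
    print(f"{name}: selectivity {selectivity_index(nt, t):.0f}x")
```

Ranking hundreds of compounds by such an index is what turns a raw inhibition screen into a quantitative selectivity comparison.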
Kalantar, T H; Tucker, C J; Zalusky, A S; Boomgaard, T A; Wilson, B E; Ladika, M; Jordan, S L; Li, W K; Zhang, X; Goh, C G
Cationic cellulosic polymers find wide utility as benefit agents in shampoo. Deposition of these polymers onto hair has been shown to mend split ends, improve appearance and wet combing, and provide controlled delivery of insoluble actives. The deposition is thought to be enhanced by the formation of a polymer/surfactant complex that phase-separates from the bulk solution upon dilution. A standard characterization method has been developed to characterize coacervate formation upon dilution, but the test is prohibitive in both time and material. We have developed a semi-automated high-throughput workflow to characterize the coacervate-forming behavior of different shampoo formulations. A procedure that allows testing of real-use shampoo dilutions without first formulating a complete shampoo was identified. This procedure was adapted to a Tecan liquid handler by optimizing the parameters for liquid dispensing as well as for mixing. The high-throughput workflow enabled preparation and testing of hundreds of formulations with different types and levels of cationic cellulosic polymers and surfactants, and for each formulation a haze diagram was constructed. Optimal formulations, and their dilutions, that give substantial coacervate formation (determined by haze measurements) were identified. Results from this high-throughput workflow were shown to reproduce standard haze and bench-top turbidity measurements, and the workflow has the advantages of using less material and allowing more variables to be tested with significant time savings.
Cribb, Jeremy; Osborne, Lukas D.; Hsiao, Joe Ping-Lin; Vicci, Leandra; Meshram, Alok; O'Brien, E. Tim; Spero, Richard Chasen; Taylor, Russell; Superfine, Richard
In the last decade, the emergence of high throughput screening has enabled the development of novel drug therapies and elucidated many complex cellular processes. Concurrently, the mechanobiology community has developed tools and methods to show that the dysregulation of biophysical properties and the biochemical mechanisms controlling those properties contribute significantly to many human diseases. Despite these advances, a complete understanding of the connection between biomechanics and disease will require advances in instrumentation that enable parallelized, high throughput assays capable of probing complex signaling pathways, studying biology in physiologically relevant conditions, and capturing specimen and mechanical heterogeneity. Traditional biophysical instruments are unable to meet this need. To address the challenge of large-scale, parallelized biophysical measurements, we have developed an automated array high-throughput microscope system that utilizes passive microbead diffusion to characterize mechanical properties of biomaterials. The instrument is capable of acquiring data on twelve-channels simultaneously, where each channel in the system can independently drive two-channel fluorescence imaging at up to 50 frames per second. We employ this system to measure the concentration-dependent apparent viscosity of hyaluronan, an essential polymer found in connective tissue and whose expression has been implicated in cancer progression.
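Passive microbead diffusion gives apparent viscosity through the Stokes-Einstein relation: fitting the mean-squared displacement (MSD = 4Dτ for 2D tracking) yields the diffusion coefficient D, and η = kT/(6πDr). The MSD slope and bead radius below are hypothetical, and this is a generic back-of-envelope sketch rather than the instrument's analysis pipeline.

```python
import math

def apparent_viscosity(msd_slope_m2_per_s, bead_radius_m, temp_k=298.0):
    """Stokes-Einstein estimate from 2D passive bead tracking:
    MSD = 4*D*tau, so D = slope/4; eta = kT / (6*pi*D*r)."""
    k_b = 1.380649e-23              # Boltzmann constant, J/K
    diffusion = msd_slope_m2_per_s / 4.0
    return k_b * temp_k / (6.0 * math.pi * diffusion * bead_radius_m)

# Hypothetical MSD slope for a 0.5-um-radius bead in dilute hyaluronan
eta = apparent_viscosity(msd_slope_m2_per_s=3.5e-13, bead_radius_m=0.5e-6)
print(f"apparent viscosity ~ {eta * 1000:.2f} mPa*s")
```

Repeating this fit per channel is what lets a twelve-channel instrument map concentration-dependent viscosity in parallel.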
Yennawar, Neela H; Fecko, Julia A; Showalter, Scott A; Bevilacqua, Philip C
Many labs have conventional calorimeters where denaturation and binding experiments are set up and run one at a time. While these systems are highly informative about biopolymer folding and ligand interaction, they require considerable manual intervention for cleaning and setup. As such, the throughput for such setups is typically limited to a few runs a day. With a large number of experimental parameters to explore, including different buffers, macromolecule concentrations, temperatures, ligands, mutants, controls, replicates, and instrument tests, the need for high-throughput automated calorimeters is on the rise. Lower sample-volume requirements and reduced user intervention time compared with manual instruments have improved the turnover of calorimetry experiments in a high-throughput format, where 25 or more runs can be conducted per day. The cost and effort needed to maintain high-throughput equipment typically demand that these instruments be housed in a multiuser core facility. We describe here the steps taken to successfully start and run an automated biological calorimetry facility at Pennsylvania State University. Scientists from various departments at Penn State, including Chemistry, Biochemistry and Molecular Biology, Bioengineering, Biology, Food Science, and Chemical Engineering, are benefiting from this core facility. Samples studied include proteins, nucleic acids, sugars, lipids, synthetic polymers, small molecules, natural products, and virus capsids. This facility has led to a higher throughput of data, which has been leveraged into grant support, attracted new faculty hires and has led to some exciting publications. © 2016 Elsevier Inc. All rights reserved.
Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal
High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.
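As a rough illustration of the kind of digital trait such frameworks extract (this is not Image Harvest's actual pipeline; the thresholding rule and margin below are hypothetical), a "digital biomass" proxy can be computed by segmenting plant pixels from an RGB image and counting them:

```python
import numpy as np

def digital_area(rgb, green_margin=20):
    """Count 'plant' pixels as a crude digital-biomass trait.

    A pixel is called plant if its green channel exceeds both red and
    blue by `green_margin` (a hypothetical threshold; production
    pipelines use far more robust segmentation).
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (g - r > green_margin) & (g - b > green_margin)
    return int(mask.sum())

# Tiny synthetic image: two plant-green pixels on a gray background.
img = np.full((3, 3, 3), 120, dtype=np.uint8)
img[1, 1] = (60, 180, 60)
img[1, 2] = (50, 200, 40)
print(digital_area(img))  # 2
```

Traits like this, computed per image over a time course, are what downstream association mapping consumes.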
Atefi, Ehsan; Lemmo, Stephanie; Fyffe, Darcy; Luker, Gary D.; Tavana, Hossein
This paper presents a new 3D culture microtechnology for high throughput production of tumor spheroids and validates its utility for screening anti-cancer drugs. We use two immiscible polymeric aqueous solutions and microprint a submicroliter drop of the “patterning” phase containing cells into a bath of the “immersion” phase. Selecting proper formulations of biphasic systems using a panel of biocompatible polymers results in the formation of a round drop that confines cells to facilitate spontaneous formation of a spheroid without any external stimuli. Adapting this approach to robotic tools enables straightforward generation and maintenance of spheroids of well-defined size in standard microwell plates and biochemical analysis of spheroids in situ, which is not possible with existing techniques for spheroid culture. To enable high throughput screening, we establish a phase diagram to identify minimum cell densities within specific volumes of the patterning drop to result in a single spheroid. Spheroids show normal growth over long-term incubation and dose-dependent decrease in cellular viability when treated with drug compounds, but present significant resistance compared to monolayer cultures. The unprecedented ease of implementing this microtechnology and its robust performance will benefit high throughput studies of drug screening against cancer cells with physiologically-relevant 3D tumor models. PMID:25411577
Baumann, Pascal; Hahn, Tobias; Hubbuch, Jürgen
Upstream processes are rather complex to design and the productivity of cells under suitable cultivation conditions is hard to predict. The method of choice for examining the design space is to execute high-throughput cultivation screenings in micro-scale format. Various predictive in silico models have been developed for many downstream processes, leading to a reduction of time and material costs. This paper presents a combined optimization approach based on high-throughput micro-scale cultivation experiments and chromatography modeling. The overall optimal system need not be the one with the highest product titers, but the one resulting in an overall superior process performance in up- and downstream. The methodology is presented in a case study for the Cherry-tagged enzyme Glutathione-S-Transferase from Escherichia coli SE1. The Cherry-Tag™ (Delphi Genetics, Belgium), which can be fused to any target protein, allows for direct product analytics by simple VIS absorption measurements. High-throughput cultivations were carried out in a 48-well format in a BioLector micro-scale cultivation system (m2p-Labs, Germany). The downstream process optimization for a set of randomly picked upstream conditions producing high yields was performed in silico using a chromatography modeling software developed in-house (ChromX). The suggested in silico-optimized operational modes for product capturing were validated subsequently. The overall best system was chosen based on a combination of excellent up- and downstream performance. © 2015 Wiley Periodicals, Inc.
Liu, Zhen; Xu, Jian-hong
High-throughput sequencing technology has dramatically improved the efficiency of DNA sequencing and greatly decreased its cost, while also offering better specificity, higher sensitivity and greater accuracy. Therefore, it has been applied to research on genetic variation, transcriptomics and epigenomics. Recently, this technology has been widely employed in studies of transposable elements and has achieved fruitful results. In this review, we summarize the application of high-throughput sequencing technology to the field of transposable elements, including the estimation of transposon content, preference of target sites and distribution, insertion polymorphism and population frequency, identification of rare copies, transposon horizontal transfer, as well as transposon tagging. We also briefly introduce the major common sequencing strategies and algorithms, their advantages and disadvantages, and the corresponding solutions. Finally, we envision the developing trends of high-throughput sequencing technology, especially third-generation sequencing, and its application to transposon studies in the future, hopefully providing a comprehensive understanding and reference for related researchers.
Amplicon-based sequencing strategies that include 16S rRNA and functional genes, alongside "meta-omics" analyses of communities of microorganisms, have allowed researchers to pose questions and find answers to "who" is present in the environment and "what" they are doing. Next-generation sequencing approaches that aid microbial ecology studies of agricultural systems are fast gaining popularity among agronomy, crop, soil, and environmental science researchers. Given the rapid development of these high-throughput sequencing techniques, researchers with no prior experience will desire information about the best practices that can be used before actually starting high-throughput amplicon-based sequence analyses. We have outlined items that need to be carefully considered in experimental design, sampling, basic bioinformatics, sequencing of mock communities and negative controls, acquisition of metadata, and in standardization of reaction conditions as per experimental requirements. Not all considerations mentioned here may pertain to a particular study. The overall goal is to inform researchers about considerations that must be taken into account when conducting high-throughput microbial DNA sequencing and sequence analysis.
The rapid development of genomic technology has made high-throughput genotyping widely accessible, but the associated high-throughput phenotyping is now the major limiting factor in genetic analysis of traits. This paper evaluates the use of thermal imaging for the high-throughput field phenotyping of Solanum tuberosum for differences in stomatal behaviour. A large multi-replicated trial of a potato mapping population was used to investigate the consistency of genotypic rankings across different trials and across measurements made at different times of day and on different days. The results confirmed a high degree of consistency between the genotypic rankings based on relative canopy temperature on different occasions. Genotype discrimination was enhanced both through normalising data by expressing genotype temperatures as differences from image means and through the enhanced replication obtained by using overlapping images. A Monte Carlo simulation approach was used to confirm the magnitude of genotypic differences that it is possible to discriminate. The results showed a clear negative association between canopy temperature and final tuber yield for this population when grown under ample moisture supply. We have therefore established infrared thermography as an easy, rapid and non-destructive screening method for evaluating large population trials for genetic analysis. We also envisage this approach as having great potential for evaluating plant response to stress under field conditions.
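The normalization step described above, expressing each genotype's canopy temperature as a deviation from its image mean so that measurements taken under different ambient conditions become comparable, can be sketched as follows (data structure and values are hypothetical):

```python
def normalized_canopy_temps(temps_by_image):
    """Express each genotype's canopy temperature as a deviation from
    the mean of the image it appears in.

    temps_by_image: dict mapping image id -> {genotype: canopy temp (deg C)}.
    Returns {genotype: [deviations, one per image]}.
    """
    out = {}
    for temps in temps_by_image.values():
        image_mean = sum(temps.values()) / len(temps)
        for genotype, t in temps.items():
            out.setdefault(genotype, []).append(t - image_mean)
    return out

# Two images of the same plots under different ambient temperatures.
obs = {
    "img_morning":   {"gA": 24.0, "gB": 26.0},
    "img_afternoon": {"gA": 29.0, "gB": 31.0},
}
dev = normalized_canopy_temps(obs)
print(dev["gA"], dev["gB"])  # [-1.0, -1.0] [1.0, 1.0]
```

After normalization the genotypic ranking (gA consistently cooler than gB) is stable across images even though absolute temperatures differ by 5 °C.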
Grau, Jan; Posch, Stefan; Grosse, Ivo; Keilwagen, Jens
De novo motif discovery has been an important challenge of bioinformatics for the past two decades. Since the emergence of high-throughput techniques like ChIP-seq, ChIP-exo and protein-binding microarrays (PBMs), the focus of de novo motif discovery has shifted to runtime and accuracy on large data sets. For this purpose, specialized algorithms have been designed for discovering motifs in ChIP-seq or PBM data. However, none of the existing approaches work perfectly for all three high-throughput techniques. In this article, we propose Dimont, a general approach for fast and accurate de novo motif discovery from high-throughput data. We demonstrate that Dimont yields a higher number of correct motifs from ChIP-seq data than any of the specialized approaches and achieves a higher accuracy for predicting PBM intensities from probe sequence than any of the approaches specifically designed for that purpose. Dimont also reports the expected motifs for several ChIP-exo data sets. Investigating differences between in vitro and in vivo binding, we find that for most transcription factors, the motifs discovered by Dimont are in good accordance between techniques, but we also find notable exceptions. We also observe that modeling intra-motif dependencies may increase accuracy, which indicates that more complex motif models are a worthwhile field of research.
Bardet-Biedl syndrome (BBS) is an autosomal recessive disorder with significant genetic heterogeneity. BBS is linked to mutations in 17 genes, which contain more than 200 coding exons. Currently, BBS is diagnosed by direct DNA sequencing for mutations in these genes, which, because of the large genomic screening region, is both time-consuming and expensive. In order to develop a practical method for the clinical diagnosis of BBS, we have developed high-throughput targeted exome sequencing (TES) for genetic diagnosis. Five typical BBS patients were recruited and screened for mutations in a total of 144 known genes responsible for inherited retinal diseases, a hallmark symptom of BBS. The genomic DNA of these patients and their families was subjected to high-throughput DNA re-sequencing. Deep bioinformatics analysis was carried out to filter the massive sequencing data, which were further confirmed through co-segregation analysis. TES successfully revealed mutations in BBS genes in each patient and family member. Six pathological mutations, including five novel mutations, were revealed in the genes BBS2, MKKS, ARL6 and MKS1. This study represents the first report of targeted exome sequencing in BBS patients and demonstrates that high-throughput TES is an accurate and rapid method for the genetic diagnosis of BBS.
Abbott, Sarah K; Jenner, Andrew M; Mitchell, Todd W; Brown, Simon H J; Halliday, Glenda M; Garner, Brett
We have developed a protocol suitable for high-throughput lipidomic analysis of human brain samples. The traditional Folch extraction (using chloroform and glass-glass homogenization) was compared to a high-throughput method combining methyl-tert-butyl ether (MTBE) extraction with mechanical homogenization utilizing ceramic beads. This high-throughput method significantly reduced sample handling time and increased efficiency compared to glass-glass homogenizing. Furthermore, replacing chloroform with MTBE is safer (less carcinogenic/toxic), with lipids dissolving in the upper phase, allowing for easier pipetting and the potential for automation (i.e., robotics). Both methods were applied to the analysis of human occipital cortex. Lipid species (including ceramides, sphingomyelins, choline glycerophospholipids, ethanolamine glycerophospholipids and phosphatidylserines) were analyzed via electrospray ionization mass spectrometry and sterol species were analyzed using gas chromatography mass spectrometry. No differences in lipid species composition were evident when the lipid extraction protocols were compared, indicating that MTBE extraction with mechanical bead homogenization provides an improved method for the lipidomic profiling of human brain tissue.
Brouzes, Eric; Medkova, Martina; Savenelli, Neal; Marran, Dave; Twardowski, Mariusz; Hutchison, J Brian; Rothberg, Jonathan M; Link, Darren R; Perrimon, Norbert; Samuels, Michael L
We present a droplet-based microfluidic technology that enables high-throughput screening of single mammalian cells. This integrated platform allows for the encapsulation of single cells and reagents in independent aqueous microdroplets (1 pL to 10 nL volumes) dispersed in an immiscible carrier oil, and enables the digital manipulation of these reactors at very high throughput. Here, we validate a full droplet screening workflow by conducting a droplet-based cytotoxicity screen. To perform this screen, we first developed a droplet viability assay that permits the quantitative scoring of cell viability and growth within intact droplets. Next, we demonstrated the high viability of encapsulated human monocytic U937 cells over a period of 4 days. Finally, we developed an optically coded droplet library enabling the identification of droplet composition during the assay read-out. Using the integrated droplet technology, we screened a drug library for its cytotoxic effect against U937 cells. Taken together, our droplet microfluidic platform is modular, robust, uses no moving parts, and has a wide range of potential applications including high-throughput single-cell analyses, combinatorial screening, and facilitating small sample analyses.
Najafov, Jamil; Najafov, Ayaz
Modern high-throughput screening methods allow researchers to generate large datasets that potentially contain important biological information. However, oftentimes, picking relevant hits from such screens and generating testable hypotheses requires training in bioinformatics and the skills to efficiently perform database mining. There are currently no tools available to the general public that allow users to cross-reference their screen datasets with published screen datasets. To this end, we developed CrossCheck, an online platform for high-throughput screen data analysis. CrossCheck is a centralized database that allows effortless comparison of the user-entered list of gene symbols with 16,231 published datasets. These datasets include published data from genome-wide RNAi and CRISPR screens, interactome proteomics and phosphoproteomics screens, cancer mutation databases, low-throughput studies of major cell signaling mediators, such as kinases, E3 ubiquitin ligases and phosphatases, and gene ontological information. Moreover, CrossCheck includes a novel database of predicted protein kinase substrates, which was developed using proteome-wide consensus motif searches. CrossCheck dramatically simplifies high-throughput screen data analysis and enables researchers to dig deep into the published literature and streamline data-driven hypothesis generation. CrossCheck is freely accessible as a web-based application at http://proteinguru.com/crosscheck.
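The core operation such a tool performs, intersecting a user gene list with each published dataset and scoring the overlap, can be sketched as below. This is a generic caricature, not CrossCheck's implementation; the dataset names, the fixed gene universe of 20,000, and the plain hypergeometric enrichment test are all illustrative assumptions:

```python
from math import comb

def cross_check(user_genes, datasets, universe_size=20000):
    """Cross-reference a user gene list with published screen datasets.

    For each dataset, report the overlapping genes and a hypergeometric
    enrichment p-value P(overlap >= observed). `universe_size` is an
    assumed genome-wide background (illustrative, not CrossCheck's).
    """
    user = set(user_genes)
    results = {}
    for name, genes in datasets.items():
        ds = set(genes)
        k = len(user & ds)               # observed overlap
        n, big_k = len(user), len(ds)
        # Tail probability of drawing >= k dataset genes in n draws.
        p = sum(comb(big_k, i) * comb(universe_size - big_k, n - i)
                for i in range(k, min(n, big_k) + 1)) / comb(universe_size, n)
        results[name] = (sorted(user & ds), p)
    return results

res = cross_check(["TP53", "ATM", "CHEK2"],
                  {"ddr_screen": ["ATM", "CHEK2", "BRCA1"]})
print(res["ddr_screen"][0])  # ['ATM', 'CHEK2']
```

A two-of-three overlap against a three-gene set drawn from a 20,000-gene background is already highly significant, which is why even small curated datasets are informative in this kind of cross-referencing.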
Klümper, Uli; Dechesne, Arnaud; Smets, Barth F.
The transfer of conjugal plasmids is the main bacterial process of horizontal gene transfer to potentially distantly related bacteria. These extrachromosomal, circular DNA molecules host genes that code for their own replication and transfer to other organisms. Because additional accessory genes...... transfer, the ability of a plasmid to invade a mixed community is crucial. The main parameter that controls the possible extent of horizontal plasmid transfer (HGT) in a bacterial community is the in situ community permissiveness for the considered plasmid. Permissiveness describes the fraction...... a gfp-tagged plasmid in a mCherry red fluorescently tagged donor strain repressing gfp expression. We take advantage of fluorescent marker genes to microscopically detect plasmid transfer events and use subsequent high-throughput fluorescence-activated cell sorting (FACS) to isolate...
Two recently developed isolation methods have shown promise when recovering pure community plasmid DNA (metamobilomes/plasmidomes), which is useful in conducting culture-independent investigations into plasmid ecology. However, both methods employ multiple displacement amplification (MDA) to ensure suitable quantities of plasmid DNA for high-throughput sequencing. This study demonstrates that MDA greatly favors smaller circular DNA elements (<10 Kbp). Throughout the study, we used two model plasmids, a 4.4 Kbp cloning vector (pBR322) and a 56 Kbp conjugative plasmid (pKJK10), to represent the lower and upper plasmid size ranges, respectively. Subjecting a mixture of these plasmids to the overall isolation protocol revealed a 34-fold over-amplification of pBR322 after MDA. To address this bias, we propose the addition of an electroelution step that separates different plasmid size ranges prior to MDA in order to reduce size-dependent competition during incubation. Subsequent analyses of metamobilome data from wastewater spiked with the model plasmids showed in silico recovery of pKJK10 to be very poor with the established method, with a 1,300-fold overrepresentation of pBR322. Conversely, complete recovery of pKJK10 was enabled with the new modified protocol, although considerable care must be taken during electroelution to minimize cross-contamination between samples. For further validation, non-spiked wastewater metamobilomes were mapped to more than 2,500 known plasmid genomes. This displayed an overall recovery of plasmids well into the upper size range (median size: 30 kilobases) with the modified protocol. Analysis of de novo assembled metamobilome data also suggested distinctly better recovery of larger plasmids, as gene functions associated with these plasmids, such as conjugation, were exclusively encoded in the data output generated through the modified protocol. Thus, with the suggested modification, access to a large uncharacterized pool of
Kooiker, Maarten; Xue, Gang-Ping
The construction of full-length cDNA libraries allows researchers to study gene expression and protein interactions and undertake gene discovery. Recent improvements allow the construction of high-quality cDNA libraries from small amounts of mRNA. In parallel, these improvements allow for the incorporation of adapters into the cDNA, at both the 5' and 3' ends. The 3' adapter is attached to the oligo-dT primer that is used by the reverse transcriptase, whereas the 5' adapter is incorporated through the template-switching properties of the MMLV reverse transcriptase. This allows directional cloning and eliminates inefficient steps like adapter ligation, phosphorylation, and methylation. Another important step in the construction of high-quality cDNA libraries is normalization. The difference in expression levels between genes can be several orders of magnitude, so it is essential that the cDNA library is normalized. With a recently discovered enzyme, duplex-specific nuclease, it is possible to normalize the cDNA library, based on the fact that more abundant molecules are more likely to reanneal after denaturation than rare molecules.
Background: Dense genetic maps, together with the efficiency and accuracy of their construction, are integral to genetic studies and marker-assisted selection for plant breeding. High-throughput multiplex markers that are robust and reproducible can contribute to both efficiency and accuracy. Multiplex markers are often dominant and so have low information content; this, coupled with the pressure to find alternatives to radio-labelling, has led us to adapt the SSAP (sequence-specific amplified polymorphism) marker method from a 33P labelling procedure to fluorescently tagged markers analysed on an automated ABI 3730 xl platform. This method is illustrated for multiplexed SSAP markers based on retrotransposon insertions of pea and is applicable for the rapid and efficient generation of markers from genomes where repetitive-element sequence information is available for primer design. We cross-reference SSAP markers previously generated using the 33P manual PAGE system to fluorescent peaks, and use these high-throughput fluorescent SSAP markers for further genetic studies in Pisum. Results: The optimal conditions for the fluorescent-labelling method used a triplex set of primers in the PCR. These included a fluorescently labelled specific primer together with its unlabelled counterpart, plus an adapter-based primer with two bases of selection on the 3' end. The introduction of the unlabelled specific primer helped to optimise the fluorescent signal across the range of fragment sizes expected and eliminated the need for extensive dilutions of PCR amplicons. The software (GeneMarker Version 1.6) used for the high-throughput data analysis provided an assessment of amplicon size in nucleotides, peak areas and fluorescence intensity in a table format, thus providing additional information content for each marker. The method has been tested in a small-scale study with 12 pea accessions, resulting in 467 polymorphic fluorescent SSAP markers, of which
Background: In a high-throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While the usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high-throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude; cell populations can have variances that depend on their mean fluorescence intensities and may exhibit heavily skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and in gating cell populations across the range of the data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high-throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high-throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results: We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter
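The simplest of the transformations discussed, the hyperbolic arcsine, illustrates why these functions suit cytometry data: they are near-linear around zero (so negative, compensated values remain defined) and logarithmic for large values. The sketch below shows the transform with its adjustable cofactor parameter; the cofactor value of 150 is just a common rule-of-thumb choice for conventional flow data, not a recommendation from the abstract:

```python
import numpy as np

def arcsinh_transform(x, cofactor=150.0):
    """Hyperbolic arcsine transform for cytometry-like data.

    Near-linear for |x| << cofactor, logarithmic for x >> cofactor,
    and defined for negative values. The cofactor is the adjustable
    parameter whose choice the abstract's maximum-likelihood criteria
    are designed to optimize.
    """
    return np.arcsinh(np.asarray(x, dtype=float) / cofactor)

y = arcsinh_transform([-100.0, 0.0, 100.0, 1e5])
print(np.round(y, 3))
```

Note the symmetry around zero and the compression of the large value, exactly the behaviour that makes skewed, multi-decade cytometry distributions gateable on one axis.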
Hodzic, Jasmina; Dingjan, Ilse; Maas, Mariëlle Jp; van der Meulen-Muileman, Ida H; de Menezes, Renee X; Heukelom, Stan; Verheij, Marcel; Gerritsen, Winald R; Geldof, Albert A; van Triest, Baukelien; van Beusechem, Victor W
Radiotherapy is one of the mainstays in the treatment for cancer, but its success can be limited due to inherent or acquired resistance. Mechanisms underlying radioresistance in various cancers are poorly understood and available radiosensitizers have shown only modest clinical benefit. There is thus a need to identify new targets and drugs for more effective sensitization of cancer cells to irradiation. Compound and RNA interference high-throughput screening technologies allow comprehensive enterprises to identify new agents and targets for radiosensitization. However, the gold standard assay to investigate radiosensitivity of cancer cells in vitro, the colony formation assay (CFA), is unsuitable for high-throughput screening. We developed a new high-throughput screening method for determining radiation susceptibility. Fast and uniform irradiation of batches up to 30 microplates was achieved using a Perspex container and a clinically employed linear accelerator. The readout was done by automated counting of fluorescently stained nuclei using the Acumen eX3 laser scanning cytometer. Assay performance was compared to that of the CFA and the CellTiter-Blue homogeneous uniform-well cell viability assay. The assay was validated in a whole-genome siRNA library screening setting using PC-3 prostate cancer cells. On 4 different cancer cell lines, the automated cell counting assay produced radiation dose response curves that followed a linear-quadratic equation and that exhibited a better correlation to the results of the CFA than did the cell viability assay. Moreover, the cell counting assay could be used to detect radiosensitization by silencing DNA-PKcs or by adding caffeine. In a high-throughput screening setting, using 4 Gy irradiated and control PC-3 cells, the effects of DNA-PKcs siRNA and non-targeting control siRNA could be clearly discriminated. We developed a simple assay for radiation susceptibility that can be used for high-throughput screening. This will
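The linear-quadratic model mentioned above describes surviving fraction as S(D) = exp(-(αD + βD²)). A minimal sketch of fitting α and β to dose-response data, here simulated rather than taken from the paper's assay, since -ln S is linear in D and D², a plain least-squares fit suffices:

```python
import numpy as np

# Hypothetical surviving fractions at graded doses; a real screen would
# derive these from automated nuclei counts relative to unirradiated wells.
dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
surv = np.exp(-(0.2 * dose + 0.03 * dose**2))   # simulated with alpha=0.2, beta=0.03

# Fit -ln S = alpha*D + beta*D^2 by linear least squares
# (no intercept term, since S(0) = 1 by construction).
A = np.column_stack([dose, dose**2])
alpha, beta = np.linalg.lstsq(A, -np.log(surv), rcond=None)[0]
print(round(alpha, 3), round(beta, 3))  # 0.2 0.03
```

In a screening context, a radiosensitizer shifts the fitted α and/or β upward relative to control wells.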
Chen, Ding-Qiang; Wu, Ai-Wu; Yang, Ling; Su, Dan-Hong; Lin, Yong-Ping; Hu, Yan-Wei; Zheng, Lei; Wang, Qian
Klebsiella pneumoniae strain KP01 carrying blaGES-5 was identified from a patient in Guangzhou, China. High-throughput sequencing assigned blaGES-5 to a 28.5-kb nonconjugative plasmid, pGES-GZ. A 13-kb plasmid backbone sequence on pGES-GZ was found to share high sequence identities with plasmids from Gram-negative nonfermenters. A novel class 1 integron carrying a gene cassette array of orf28-orf28-blaGES-5 was identified on pGES-GZ, within which orf28 encoded a hypothetical protein possibly correlated to fosfomycin resistance. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Ebersbach, Gitte; Gerdes, Kenn; Charbon, Gitte Ebersbach
Bacterial plasmids encode partitioning (par) loci that ensure ordered plasmid segregation prior to cell division. par loci come in two types: those that encode actin-like ATPases and those that encode deviant Walker-type ATPases. ParM, the actin-like ATPase of plasmid R1, forms dynamic filaments...... that segregate plasmids paired at mid-cell to daughter cells. Like microtubules, ParM filaments exhibit dynamic instability (i.e., catastrophic decay) whose regulation is an important component of the DNA segregation process. The Walker box ParA ATPases are related to MinD and form highly dynamic, oscillating...... filaments that are required for the subcellular movement and positioning of plasmids. The role of the observed ATPase oscillation is not yet understood. However, we propose a simple model that couples plasmid segregation to ParA oscillation. The model is consistent with the observed movement...
Turner, Helen C; Sharma, P; Perrier, J R; Bertucci, A; Smilenov, L; Johnson, G; Taveras, M; Brenner, D J; Garty, G
At the Center for High-Throughput Minimally Invasive Radiation Biodosimetry, we have developed a rapid automated biodosimetry tool (RABiT); this is a completely automated, ultra-high-throughput robotically based biodosimetry workstation designed for use following a large-scale radiological event, to perform radiation biodosimetry measurements based on a fingerstick blood sample. High throughput is achieved through purpose-built robotics, sample handling in filter-bottomed multi-well plates and innovations in high-speed imaging and analysis. Currently, we are adapting the RABiT technologies for use in laboratory settings, for applications in epidemiological and clinical studies. Our overall goal is to extend the RABiT system to directly measure the kinetics of DNA repair proteins. The design of the kinetic/time-dependent studies is based on repeated, automated sampling of lymphocytes from a central reservoir of cells housed in the RABiT incubator as a function of time after the irradiation challenge. In the present study, we have characterized the DNA repair kinetics of the following repair proteins: γ-H2AX, 53-BP1, ATM kinase and MDC1 at multiple times (0.5, 2, 4, 7 and 24 h) after irradiation with 4 Gy γ rays. In order to provide a consistent dose exposure at time zero, we have developed an automated capillary irradiator to introduce DNA DSBs into fingerstick-size blood samples within the RABiT. To demonstrate the scalability of the laboratory-based RABiT system, we have initiated a population study using γ-H2AX as a biomarker.
Hardcastle, Thomas J
High-throughput data are now commonplace in biological research. Rapidly changing technologies and applications mean that novel methods for detecting differential behaviour that account for a 'large P, small n' setting are required at an increasing rate. The development of such methods is, in general, being done on an ad hoc basis, requiring further development cycles and a lack of standardization between analyses. We present here a generalized method for identifying differential behaviour within high-throughput biological data through empirical Bayesian methods. This approach is based on our baySeq algorithm for identification of differential expression in RNA-seq data based on a negative binomial distribution, and in paired data based on a beta-binomial distribution. Here we show how the same empirical Bayesian approach can be applied to any parametric distribution, removing the need for lengthy development of novel methods for differently distributed data. Comparisons with existing methods developed to address specific problems in high-throughput biological data show that these generic methods can achieve equivalent or better performance. A number of enhancements to the basic algorithm are also presented to increase flexibility and reduce computational costs. The methods are implemented in the R baySeq (v2) package, available on Bioconductor: http://www.bioconductor.org/packages/release/bioc/html/baySeq.html. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.
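The core idea, comparing how well a "no difference" model and a "group-specific" model explain a gene's counts under a parametric distribution, can be caricatured as below. This is a heavily simplified sketch, not baySeq itself: baySeq integrates over empirically estimated priors and returns posterior probabilities, whereas here the negative-binomial size parameter is fixed and a raw log-likelihood ratio is returned:

```python
from math import lgamma, log

def nb_loglik(counts, mu, size=10.0):
    """Negative-binomial log-likelihood with mean `mu` and a fixed size
    (inverse-dispersion) parameter; baySeq instead estimates such
    hyperparameters empirically across the whole data set."""
    ll = 0.0
    for k in counts:
        ll += (lgamma(k + size) - lgamma(size) - lgamma(k + 1)
               + size * log(size / (size + mu))
               + k * log(mu / (size + mu)))
    return ll

def score_differential(group_a, group_b):
    """Log-likelihood ratio: group-specific means vs. one shared mean.
    Larger values favour differential expression."""
    pooled = group_a + group_b
    l_same = nb_loglik(pooled, sum(pooled) / len(pooled))
    l_diff = (nb_loglik(group_a, sum(group_a) / len(group_a))
              + nb_loglik(group_b, sum(group_b) / len(group_b)))
    return l_diff - l_same

# A clearly differential gene scores far higher than a flat one.
print(score_differential([5, 7, 6], [40, 55, 48]) >
      score_differential([5, 7, 6], [6, 8, 5]))  # True
```

The generalization the abstract describes amounts to swapping `nb_loglik` for the log-likelihood of any other parametric distribution (e.g. beta-binomial for paired data) while keeping the same model-comparison machinery.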
Lapington, J. S.; Fraser, G. W.; Miller, G. M.; Ashton, T. J. R.; Jarron, P.; Despeisse, M.; Powolny, F.; Howorth, J.; Milnes, J.
High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies; small pore microchannel plate devices with very high time resolution, and high-speed multi-channel ASIC electronics developed for the LHC at CERN, provides the necessary building blocks for a high-throughput detector system with up to 1024 parallel counting channels and 20 ps time resolution. We describe the detector and electronic design, discuss the current status of the HiContent project and present the results from a 64-channel prototype system. In the absence of an operational detector, we present measurements of the electronics performance using a pulse generator to simulate detector events. Event timing results from the NINO high-speed front-end ASIC captured using a fast digital oscilloscope are compared with data taken with the proposed electronic configuration which uses the multi-channel HPTDC timing ASIC.
Micalessi, Isabel M; Boulet, Gaëlle A V; Bogers, Johannes J; Benoy, Ina H; Depuydt, Christophe E
The establishment of the causal relationship between high-risk human papillomavirus (HR-HPV) infection and cervical cancer and its precursors has resulted in the development of HPV DNA detection systems. Currently, real-time PCR assays for the detection of HPV, such as the RealTime High Risk (HR) HPV assay (Abbott) and the cobas® 4800 HPV Test (Roche Molecular Diagnostics), are commercially available. However, none of them enables the detection and typing of all HR-HPV types in a clinical high-throughput setting. This paper describes the laboratory workflow and the validation of a type-specific real-time quantitative PCR (qPCR) assay for high-throughput HPV detection, genotyping and quantification. This assay is routinely applied in a liquid-based cytology screening setting (700 samples in 24 h) and has been used in many epidemiological and clinical studies. The TaqMan-based qPCR assay enables the detection of 17 HPV genotypes and β-globin in seven multiplex reactions. These HPV types include all 12 high-risk types (HPV16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59), three probably high-risk types (HPV53, 66 and 68), one low-risk type (HPV6) and one undetermined risk type (HPV67). An analytical sensitivity of ≤100 copies was obtained for all the HPV types. The analytical specificity of each primer pair was 100%, and intra- and inter-run variability was evaluated. This real-time PCR approach enables detection of 17 HPV types, identification of the HPV type and determination of the viral load in a single sensitive assay suitable for high-throughput screening.
Klukas, Christian; Chen, Dijun; Pape, Jean-Michel
High-throughput phenotyping is emerging as an important technology to dissect phenotypic components in plants. Efficient image processing and feature extraction are prerequisites to quantify plant growth and performance based on phenotypic traits. Issues include data management, image analysis, and result visualization of large-scale phenotypic data sets. Here, we present Integrated Analysis Platform (IAP), an open-source framework for high-throughput plant phenotyping. IAP provides user-friendly interfaces, and its core functions are highly adaptable. Our system supports image data transfer from different acquisition environments and large-scale image analysis for different plant species based on real-time imaging data obtained from different spectra. To manage the huge amount of data, we utilized a common data structure for efficient storage and organization of both input and result data. We implemented a block-based method for automated image processing to extract a representative list of plant phenotypic traits. We also provide tools for built-in data plotting and result export. For validation of IAP, we performed an example experiment that contains 33 maize (Zea mays 'Fernandez') plants, which were grown for 9 weeks in an automated greenhouse with nondestructive imaging. Subsequently, the image data were subjected to automated analysis with the maize pipeline implemented in our system. We found that the computed digital volume and number of leaves correlate with our manually measured data with high accuracy (up to 0.98 and 0.95, respectively). In summary, IAP provides a comprehensive set of functionalities for import/export, management, and automated analysis of high-throughput plant phenotyping data, and its analysis results are highly reliable. © 2014 American Society of Plant Biologists. All Rights Reserved.
Tanabata, Takanari; Shibaya, Taeko; Hori, Kiyosumi; Ebana, Kaworu; Yano, Masahiro
Seed shape and size are among the most important agronomic traits because they affect yield and market price. To obtain accurate seed size data, a large number of measurements are needed because there is little difference in size among seeds from one plant. To promote genetic analysis and selection for seed shape in plant breeding, efficient, reliable, high-throughput seed phenotyping methods are required. We developed SmartGrain software for high-throughput measurement of seed shape. This software uses a new image analysis method to reduce the time taken in the preparation of seeds and in image capture. Outlines of seeds are automatically recognized from digital images, and several shape parameters, such as seed length, width, area, and perimeter length, are calculated. To validate the software, we performed a quantitative trait locus (QTL) analysis for rice (Oryza sativa) seed shape using backcrossed inbred lines derived from a cross between japonica cultivars Koshihikari and Nipponbare, which showed small differences in seed shape. SmartGrain removed areas of awns and pedicels automatically, and several QTLs were detected for six shape parameters. The allelic effect of a QTL for seed length detected on chromosome 11 was confirmed in advanced backcross progeny; the cv Nipponbare allele increased seed length and, thus, seed weight. High-throughput measurement with SmartGrain reduced sampling error and made it possible to distinguish between lines with small differences in seed shape. SmartGrain could accurately recognize seed not only of rice but also of several other species, including Arabidopsis (Arabidopsis thaliana). The software is free to researchers.
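Shape parameters of the kind described above (seed area, perimeter, length, and width) can be computed from a binary seed mask. The sketch below is an illustration and not SmartGrain's algorithm: it assumes a cleanly segmented mask and uses the principal axes of the pixel coordinates as a proxy for seed length and width.

```python
import numpy as np

def seed_shape_params(mask):
    """Basic shape parameters from a 2-D boolean seed mask: area
    (pixel count), perimeter (boundary-pixel count), and length/width
    from the principal axes of the pixel coordinates."""
    mask = np.asarray(mask, dtype=bool)
    area = int(mask.sum())
    # Boundary pixels: foreground pixels with at least one background
    # 4-neighbour (padding keeps the border comparison in bounds).
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    # Principal axes of the centred pixel coordinates give an
    # orientation-independent length/width estimate.
    ys, xs = np.nonzero(mask)
    coords = np.column_stack([ys, xs]).astype(float)
    coords -= coords.mean(axis=0)
    evals = np.sort(np.linalg.eigvalsh(np.cov(coords.T)))[::-1]
    length, width = 4 * np.sqrt(evals)  # ~full axis lengths for an ellipse
    return {"area": area, "perimeter": perimeter,
            "length": float(length), "width": float(width)}
```

For an axis-aligned 10 x 20 pixel rectangle this yields an area of 200, a perimeter of 56 boundary pixels, and a length estimate exceeding the width estimate.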
Single-channel optical density measurements of population growth are the dominant large-scale phenotyping methodology for bridging the gene-function gap in yeast. However, a substantial amount of the genetic variation induced by single allele, single gene or double gene knock-out technologies fails to manifest in detectable growth phenotypes under conditions readily testable in the laboratory. Thus, new high-throughput phenotyping technologies capable of providing information about molecular level consequences of genetic variation are sorely needed. Here we report a protocol for high-throughput Fourier transform infrared spectroscopy (FTIR) for measuring biochemical fingerprints of yeast strains. It includes high-throughput cultivation for FTIR spectroscopy, FTIR measurements and spectral pre-treatment to increase measurement accuracy. We demonstrate its capacity to distinguish not only yeast genera, species and populations, but also strains that differ only by a single gene, its excellent signal-to-noise ratio and its relative robustness to measurement bias. Finally, we illustrated its applicability by determining the FTIR signatures of all viable Saccharomyces cerevisiae single gene knock-outs corresponding to lipid biosynthesis genes. Many of the examined knock-out strains showed distinct, highly reproducible FTIR phenotypes despite having no detectable growth phenotype. These phenotypes were confirmed by conventional lipid analysis and could be linked to specific changes in lipid composition. We conclude that the introduced protocol is robust to noise and bias, possible to apply on a very large scale, and capable of generating biologically meaningful biochemical fingerprints that are strain specific, even when strains lack detectable growth phenotypes. Thus, it has a substantial potential for application in the molecular functionalization of the yeast genome.
Salas Fernandez, Maria G; Bao, Yin; Tang, Lie; Schnable, Patrick S
Recent advances in omics technologies have not been accompanied by equally efficient, cost-effective, and accurate phenotyping methods required to dissect the genetic architecture of complex traits. Even though high-throughput phenotyping platforms have been developed for controlled environments, field-based aerial and ground technologies have only been designed and deployed for short-stature crops. Therefore, we developed and tested Phenobot 1.0, an auto-steered and self-propelled field-based high-throughput phenotyping platform for tall dense canopy crops, such as sorghum (Sorghum bicolor). Phenobot 1.0 was equipped with laterally positioned and vertically stacked stereo RGB cameras. Images collected from 307 diverse sorghum lines were reconstructed in 3D for feature extraction. User interfaces were developed, and multiple algorithms were evaluated for their accuracy in estimating plant height and stem diameter. Tested feature extraction methods included the following: (1) User-interactive Individual Plant Height Extraction (UsIn-PHe) based on dense stereo three-dimensional reconstruction; (2) Automatic Hedge-based Plant Height Extraction (Auto-PHe) based on dense stereo 3D reconstruction; (3) User-interactive Dense Stereo Matching Stem Diameter Extraction; and (4) User-interactive Image Patch Stereo Matching Stem Diameter Extraction (IPaS-Di). Comparative genome-wide association analysis and ground-truth validation demonstrated that both UsIn-PHe and Auto-PHe were accurate methods to estimate plant height, while Auto-PHe had the additional advantage of being a completely automated process. For stem diameter, IPaS-Di generated the most accurate estimates of this biomass-related architectural trait. In summary, our technology was proven robust to obtain ground-based high-throughput plant architecture parameters of sorghum, a tall and densely planted crop species. © 2017 American Society of Plant Biologists. All Rights Reserved.
Varshney, Gaurav K; Pei, Wuhong; LaFave, Matthew C; Idol, Jennifer; Xu, Lisha; Gallardo, Viviana; Carrington, Blake; Bishop, Kevin; Jones, MaryPat; Li, Mingyu; Harper, Ursula; Huang, Sunny C; Prakash, Anupam; Chen, Wenbiao; Sood, Raman; Ledin, Johan; Burgess, Shawn M
The use of CRISPR/Cas9 as a genome-editing tool in various model organisms has radically changed targeted mutagenesis. Here, we present a high-throughput targeted mutagenesis pipeline using CRISPR/Cas9 technology in zebrafish that will make possible both saturation mutagenesis of the genome and large-scale phenotyping efforts. We describe a cloning-free single-guide RNA (sgRNA) synthesis, coupled with streamlined mutant identification methods utilizing fluorescent PCR and multiplexed, high-throughput sequencing. We report germline transmission data from 162 loci targeting 83 genes in the zebrafish genome, in which we obtained a 99% success rate for generating mutations and an average germline transmission rate of 28%. We verified 678 unique alleles from 58 genes by high-throughput sequencing. We demonstrate that our method can be used for efficient multiplexed gene targeting. We also demonstrate that phenotyping can be done in the F1 generation by inbreeding two injected founder fish, significantly reducing animal husbandry and time. This study compares germline transmission data from CRISPR/Cas9 with those of TALENs and ZFNs and shows that CRISPR/Cas9 is sixfold more efficient than the other techniques. We show that the majority of published "rules" for efficient sgRNA design do not effectively predict germline transmission rates in zebrafish, with the exception of a GG or GA dinucleotide genomic match at the 5' end of the sgRNA. Finally, we show that predicted off-target mutagenesis is of low concern for in vivo genetic studies. © 2015 Varshney et al.; Published by Cold Spring Harbor Laboratory Press.
Background: The emergence of influenza strains that are resistant to commonly used antivirals has highlighted the need to develop new compounds that target viral gene products or host mechanisms that are essential for effective virus replication. Existing assays to identify potential antiviral compounds often use high-throughput screening assays that target specific viral replication steps. To broaden the search for antivirals, cell-based replication assays can be performed, but these are often labor intensive and have limited throughput. Results: We have adapted a traditional virus neutralization assay to develop a practical, cell-based, high-throughput screening assay. This assay uses viral neuraminidase (NA) as a read-out to quantify influenza replication, thereby offering an assay that is both rapid and sensitive. In addition to identification of inhibitors that target either viral or host factors, the assay allows simultaneous evaluation of drug toxicity. Antiviral activity was demonstrated for a number of known influenza inhibitors, including amantadine, which targets the M2 ion channel; zanamivir, which targets NA; ribavirin, which targets IMP dehydrogenase; and bis-indolyl maleimide, which targets protein kinase A/C. Amantadine-resistant strains were identified by comparing IC50 values with that of the wild-type virus. Conclusion: Antivirals with specificity for a broad range of targets are easily identified in an accelerated viral inhibition assay that uses NA as a read-out of replication. This assay is suitable for high-throughput screening to identify potential antivirals or can be used to identify drug-resistant influenza strains.
Ali, I.; Wasenmüller, U.; Wehn, N.
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards where each standard defines multiple modulation schemes, there is a need to have an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in double iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms. We implement these architectures on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture based on the best low complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray mapped 16-QAM modulation on Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
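As a concrete illustration of low-complexity soft-output demapping, the widely used max-log approximation computes each bit's LLR from the difference of squared distances to the nearest constellation points having that bit equal to 0 and 1. The Gray mapping table below is one common 16-QAM labeling and an assumption for illustration, not necessarily the mapping evaluated in the paper.

```python
import itertools

# Gray-mapped 16-QAM: the first two bits select the I amplitude,
# the last two bits select the Q amplitude (adjacent levels differ
# in exactly one bit).
GRAY_PAM = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

CONSTELLATION = {
    bits: complex(GRAY_PAM[bits[:2]], GRAY_PAM[bits[2:]])
    for bits in itertools.product((0, 1), repeat=4)
}

def maxlog_llrs(symbol, noise_var=1.0):
    """Max-log LLR per bit: (min squared distance over bit=1 points)
    minus (min squared distance over bit=0 points), scaled by the
    noise variance. Positive LLR means bit 0 is more likely."""
    llrs = []
    for k in range(4):
        d0 = min(abs(symbol - s) ** 2
                 for b, s in CONSTELLATION.items() if b[k] == 0)
        d1 = min(abs(symbol - s) ** 2
                 for b, s in CONSTELLATION.items() if b[k] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs
```

A noiseless symbol at a constellation point yields LLR signs matching its bit label, which is the property hardware demappers exploit with per-axis piecewise-linear shortcuts instead of the full 16-point search.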
Moreira Teixeira, L S; Leijten, J C H; Sobral, J; Jin, R; van Apeldoorn, A A; Feijen, J; van Blitterswijk, C; Dijkstra, P J; Karperien, M
Cell-based cartilage repair strategies such as matrix-induced autologous chondrocyte implantation (MACI) could be improved by enhancing cell performance. We hypothesised that micro-aggregates of chondrocytes generated at high throughput prior to implantation in a defect could stimulate cartilaginous matrix deposition and remodelling. To address this issue, we designed a micro-mould to enable controlled high-throughput formation of micro-aggregates. Morphology, stability, gene expression profiles and chondrogenic potential of micro-aggregates of human and bovine chondrocytes were evaluated and compared to single cells cultured in micro-wells and in 3D after encapsulation in Dextran-Tyramine (Dex-TA) hydrogels in vitro and in vivo. We successfully formed micro-aggregates of human and bovine chondrocytes with highly controlled size, stability and viability within 24 hours. Micro-aggregates of 100 cells presented a superior balance in Collagen type I and Collagen type II gene expression over single cells and micro-aggregates of 50 and 200 cells. Matrix metalloproteinases 1, 9 and 13 mRNA levels were decreased in micro-aggregates compared to single cells. Histological and biochemical analysis demonstrated enhanced matrix deposition in constructs seeded with micro-aggregates cultured in vitro and in vivo, compared to single-cell-seeded constructs. Whole genome microarray analysis and single gene expression profiles using human chondrocytes confirmed increased expression of cartilage-related genes when chondrocytes were cultured in micro-aggregates. In conclusion, we succeeded in controlled high-throughput formation of micro-aggregates of chondrocytes. Compared to single-cell-seeded constructs, seeding of constructs with micro-aggregates greatly improved neo-cartilage formation. Therefore, micro-aggregation prior to chondrocyte implantation in current MACI procedures may effectively accelerate hyaline cartilage formation.
Clarke, Shannon M; Henry, Hannah M; Dodds, Ken G; Jowett, Timothy W D; Manley, Tim R; Anderson, Rayna M; McEwan, John C
Accurate pedigree information is critical to animal breeding systems to ensure the highest rate of genetic gain and management of inbreeding. The abundance of available genomic data, together with development of high throughput genotyping platforms, means that single nucleotide polymorphisms (SNPs) are now the DNA marker of choice for genomic selection studies. Furthermore the superior qualities of SNPs compared to microsatellite markers allows for standardization between laboratories; a property that is crucial for developing an international set of markers for traceability studies. The objective of this study was to develop a high throughput SNP assay for use in the New Zealand sheep industry that gives accurate pedigree assignment and will allow a reduction in breeder input over lambing. This required two phases of development--firstly, a method of extracting quality DNA from ear-punch tissue performed in a high throughput cost efficient manner and secondly a SNP assay that has the ability to assign paternity to progeny resulting from mob mating. A likelihood based approach to infer paternity was used where sires with the highest LOD score (log of the ratio of the likelihood given parentage to likelihood given non-parentage) are assigned. An 84 "parentage SNP panel" was developed that assigned, on average, 99% of progeny to a sire in a problem where there were 3,000 progeny from 120 mob mated sires that included numerous half sib sires. In only 6% of those cases was there another sire with at least a 0.02 probability of paternity. Furthermore dam information (either recorded, or by genotyping possible dams) was absent, highlighting the SNP test's suitability for paternity testing. Utilization of this parentage SNP assay will allow implementation of progeny testing into large commercial farms where the improved accuracy of sire assignment and genetic evaluations will increase genetic gain in the sheep industry.
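The likelihood-based paternity assignment described above can be sketched as follows: at each biallelic SNP, the probability of the progeny genotype given a candidate sire (Mendelian transmission from the sire plus a random maternal allele at the population frequency, since dam information is absent) is compared to its probability under Hardy-Weinberg for an unrelated sire, and the per-SNP log10 ratios are summed into a LOD score. The genotyping-error term and all function names are illustrative assumptions, not the study's implementation.

```python
import math

def transmission_prob(progeny_g, sire_g, p):
    """P(progeny genotype | sire genotype, population allele freq p)
    at one biallelic SNP; genotypes are coded as copies of allele A."""
    t = sire_g / 2.0            # chance the sire transmits allele A
    if progeny_g == 2:
        return t * p
    if progeny_g == 0:
        return (1 - t) * (1 - p)
    return t * (1 - p) + (1 - t) * p

def hw_prob(progeny_g, p):
    """Genotype probability under Hardy-Weinberg (unrelated random sire)."""
    return {2: p * p, 1: 2 * p * (1 - p), 0: (1 - p) ** 2}[progeny_g]

def lod_score(progeny, sire, freqs, error=0.01):
    """Sum of per-SNP log10 likelihood ratios, parentage vs non-parentage.
    A small genotyping-error term avoids -infinity on a single mismatch."""
    lod = 0.0
    for y, g, p in zip(progeny, sire, freqs):
        l_par = (1 - error) * transmission_prob(y, g, p) + error * hw_prob(y, p)
        lod += math.log10(l_par / hw_prob(y, p))
    return lod

def assign_sire(progeny, candidate_sires, freqs):
    """Return the index of the candidate sire with the highest LOD score."""
    scores = [lod_score(progeny, s, freqs) for s in candidate_sires]
    return max(range(len(scores)), key=scores.__getitem__), scores
```

A sire incompatible at several SNPs accumulates strongly negative per-SNP ratios and is outscored by the true sire, which is the behaviour that lets a modest panel resolve mob-mated paternity.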
Hoedjes, K M; Steidle, J L M; Werren, J H; Vet, L E M; Smid, H M
Most of our knowledge on learning and memory formation results from extensive studies on a small number of animal species. Although features and cellular pathways of learning and memory are highly similar in this diverse group of species, there are also subtle differences. Closely related species of parasitic wasps display substantial variation in memory dynamics and can be instrumental to understanding both the adaptive benefit of and mechanisms underlying this variation. Parasitic wasps of the genus Nasonia offer excellent opportunities for multidisciplinary research on this topic. Genetic and genomic resources available for Nasonia are unrivaled among parasitic wasps, providing tools for genetic dissection of mechanisms that cause differences in learning. This study presents a robust, high-throughput method for olfactory conditioning of Nasonia using a host encounter as reward. A T-maze olfactometer facilitates high-throughput memory retention testing and employs standardized odors of equal detectability, as quantified by electroantennogram recordings. Using this setup, differences in memory retention between Nasonia species were shown. In both Nasonia vitripennis and Nasonia longicornis, memory was observed up to at least 5 days after a single conditioning trial, whereas Nasonia giraulti lost its memory after 2 days. This difference in learning may be an adaptation to species-specific differences in ecological factors, for example, host preference. The high-throughput methods for conditioning and memory retention testing are essential tools to study both ultimate and proximate factors that cause variation in learning and memory formation in Nasonia and other parasitic wasp species. © 2012 The Authors. Genes, Brain and Behavior © 2012 Blackwell Publishing Ltd and International Behavioural and Neural Genetics Society.
Herington, Jennifer L.; Swale, Daniel R.; Brown, Naoko; Shelton, Elaine L.; Choi, Hyehun; Williams, Charles H.; Hong, Charles C.; Paria, Bibhash C.; Denton, Jerod S.; Reese, Jeff
The uterine myometrium (UT-myo) is a therapeutic target for preterm labor, labor induction, and postpartum hemorrhage. Stimulation of intracellular Ca2+-release in UT-myo cells by oxytocin is a final pathway controlling myometrial contractions. The goal of this study was to develop a dual-addition assay for high-throughput screening of small-molecule compounds, which could regulate Ca2+-mobilization in UT-myo cells, and hence, myometrial contractions. Primary murine UT-myo cells in 384-well plates were loaded with a Ca2+-sensitive fluorescent probe, and then screened for inducers of Ca2+-mobilization and inhibitors of oxytocin-induced Ca2+-mobilization. The assay exhibited robust screening statistics (Z′ = 0.73) and DMSO tolerance, and was validated for high-throughput screening against 2,727 small molecules from the Spectrum and NIH Clinical I and II collections of well-annotated compounds. The screen revealed a hit rate of 1.80% for agonist and 1.39% for antagonist compounds. Concentration-dependent responses of hit compounds demonstrated an EC50 of less than 10 μM for 21 hit antagonist compounds, compared to only 7 hit agonist compounds. Subsequent studies focused on hit antagonist compounds. Based on the percent inhibition and functional annotation analyses, we selected 4 confirmed hit antagonist compounds (benzbromarone, dipyridamole, fenoterol hydrobromide and nisoldipine) for further analysis. Using an ex vivo isometric contractility assay, each compound significantly inhibited uterine contractility, with different potencies (IC50). Overall, these results demonstrate for the first time that high-throughput small-molecule screening of myometrial Ca2+-mobilization is an ideal primary approach for discovering modulators of uterine contractility. PMID:26600013
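The Z′ factor quoted above is the standard screening-assay quality statistic, computed from the means and standard deviations of the positive and negative controls; a minimal sketch (control values are invented for illustration):

```python
import statistics

def z_prime(pos_controls, neg_controls):
    """Z' factor for assay quality:
    1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    mp, mn = statistics.mean(pos_controls), statistics.mean(neg_controls)
    sp, sn = statistics.stdev(pos_controls), statistics.stdev(neg_controls)
    return 1 - 3 * (sp + sn) / abs(mp - mn)
```

Values above about 0.5, such as the 0.73 reported here, are conventionally taken to indicate an assay with enough separation between controls for high-throughput screening.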
Stiller, Mathias; Knapp, Michael; Stenzel, Udo; Hofreiter, Michael; Meyer, Matthias
Although the emergence of high-throughput sequencing technologies has enabled whole-genome sequencing from extinct organisms, little progress has been made in accelerating targeted sequencing from highly degraded DNA...
Reusch, D.; Haberger, M.; Kailich, T.; Heidenreich, A.K.; Kampe, M.; Bulau, P.; Wuhrer, M.
The Fc glycosylation of therapeutic antibodies is crucial for their effector functions and their behavior in pharmacokinetics and pharmacodynamics. To monitor the Fc glycosylation in bioprocess development and characterization, high-throughput techniques for glycosylation analysis are needed. Here,
U.S. Environmental Protection Agency — Under the ExpoCast program, United States Environmental Protection Agency (EPA) researchers have developed a high-throughput (HT) framework for estimating aggregate...
U.S. Environmental Protection Agency — httk: High-Throughput Toxicokinetics Functions and data tables for simulation and statistical analysis of chemical toxicokinetics ("TK") using data obtained from...
State of the Art High-Throughput Approaches to Genotoxicity: Flow Micronucleus, Ames II, GreenScreen and Comet (Presented by Dr. Marilyn J. Aardema, Chief Scientific Advisor, Toxicology; Dr. Leon Stankowski; et al.) (6/28/2012)
Zarkevich, N. A.; Johnson, D. D.; Pecharsky, V. K.
The high-throughput search paradigm adopted by the newly established caloric materials consortium—CaloriCool®—with the goal to substantially accelerate discovery and design of novel caloric materials is briefly discussed. We begin by describing material selection criteria based on known properties, followed by fast heuristic estimates and ab initio calculations, all of which have been implemented in a set of automated computational tools and measurements. We also demonstrate how theoretical and computational methods serve as a guide for experimental efforts by considering a representative example from the field of magnetocaloric materials.
An improved bubble-electrospinning setup, consisting of a cone-shaped air nozzle, a copper solution reservoir connected directly to the power generator, and a high-speed rotating copper wire drum as a collector, was successfully developed for high-throughput preparation of aligned nanofibers. The influence of drum rotation speed on the morphology and properties of the obtained nanofibers was investigated. The results showed that the alignment degree, diameter distribution, and properties of the nanofibers improved with increasing drum rotation speed.
The tendency of mycobacteria to aggregate poses a challenge for their use in microplate-based assays. Good dispersions have been difficult to achieve in the high-throughput screening (HTS) assays used in the search for novel antibacterial drugs to treat tuberculosis and other related diseases. Here we describe a method using filtration to overcome the problem of variability resulting from aggregation of mycobacteria. This method consistently yielded higher reproducibility and lower variability than conventional methods, such as settling under gravity and vortexing.
Zhu, Ping Jun; Hobson, John P; Southall, Noel; Qiu, Cunping; Thomas, Craig J; Lu, Jiamo; Inglese, James; Zheng, Wei; Leppla, Stephen H; Bugge, Thomas H; Austin, Christopher P; Liu, Shihui
Here, we report the results of a quantitative high-throughput screen (qHTS) measuring the endocytosis and translocation of a beta-lactamase-fused lethal factor and the identification of small molecules capable of obstructing the process of anthrax toxin internalization. Several small molecules protected RAW264.7 macrophages and CHO cells from anthrax lethal toxin, and also protected cells from an LF-Pseudomonas exotoxin fusion protein and diphtheria toxin. Further efforts demonstrated that these compounds impaired the PA heptamer pre-pore to pore conversion in cells expressing the CMG2 receptor, but not the related TEM8 receptor, indicating that these compounds likely interfere with toxin internalization.
Thorne, Robert E. (Inventor); Berejnov, Viatcheslav (Inventor); Kalinin, Yevgeniy (Inventor)
In one embodiment, a crystallization and screening plate comprises a plurality of cells open at a top and a bottom, a frame that defines the cells in the plate, and at least two films. The first film seals a top of the plate and the second film seals a bottom of the plate. At least one of the films is patterned to strongly pin the contact lines of drops dispensed onto it, fixing their position and shape. The present invention also includes methods and other devices for manual and high-throughput protein crystal growth.
Theda, Christiane; Gibbons, Katy; Defor, Todd E; Donohue, Pamela K; Golden, W Christopher; Kline, Antonie D; Gulamali-Majid, Fizza; Panny, Susan R; Hubbard, Walter C; Jones, Richard O; Liu, Anita K; Moser, Ann B; Raymond, Gerald V
X-linked adrenoleukodystrophy (ALD) is characterized by adrenal insufficiency and neurologic involvement with onset at variable ages. Plasma very long chain fatty acids are elevated in ALD; even in asymptomatic patients. We demonstrated previously that liquid chromatography tandem mass spectrometry measuring C26:0 lysophosphatidylcholine reliably identifies affected males. We prospectively applied this method to 4689 newborn blood spot samples; no false positives were observed. We show that high throughput neonatal screening for ALD is methodologically feasible. Copyright © 2013 Elsevier Inc. All rights reserved.
Ioanna Skilitsi, Anastasia; Turko, Timothé; Cianfarani, Damien; Barre, Sophie; Uhring, Wilfried; Hassiepen, Ulrich; Léonard, Jérémie
Time-resolved fluorescence detection for robust sensing of biomolecular interactions is developed by implementing time-correlated single photon counting in high-throughput conditions. Droplet microfluidics is used as a promising platform for the very fast handling of low-volume samples. We illustrate the potential of this very sensitive and cost-effective technology in the context of an enzymatic activity assay based on fluorescently-labeled biomolecules. Fluorescence lifetime detection by time-correlated single photon counting is shown to enable reliable discrimination between positive and negative control samples at a throughput as high as several hundred samples per second.
Shafer, Aaron B A; Northrup, Joseph M; Wikelski, Martin; Wittemyer, George; Wolf, Jochen B W
Recent advancements in animal tracking technology and high-throughput sequencing are rapidly changing the questions and scope of research in the biological sciences. The integration of genomic data with high-tech animal instrumentation comes as a natural progression of traditional work in ecological genetics, and we provide a framework for linking the separate data streams from these technologies. Such a merger will elucidate the genetic basis of adaptive behaviors like migration and hibernation and advance our understanding of fundamental ecological and evolutionary processes such as pathogen transmission, population responses to environmental change, and communication in natural populations.
… been mainly driven by the development of High-Throughput DNA Sequencing (HTS) technologies, but also by the implementation of novel molecular tools tailored to the manipulation of ultra-short and damaged DNA molecules. Our ability to retrieve traces of genetic material has tremendously improved, pushing … the development and testing of innovative molecular approaches aiming at improving the amount of informative HTS data one can recover from ancient DNA extracts. We have characterized important ligation and amplification biases in the sequencing library building and enrichment steps, which can impede further …
Chen, Jian; Xue, Chengcheng; Zhao, Yang; Chen, Deyong; Wu, Min-Hsien; Wang, Junbo
This article reviews recent developments in microfluidic impedance flow cytometry for high-throughput electrical property characterization of single cells. Four major perspectives of microfluidic impedance flow cytometry for single-cell characterization are included in this review: (1) early developments of microfluidic impedance flow cytometry for single-cell electrical property characterization; (2) microfluidic impedance flow cytometry with enhanced sensitivity; (3) microfluidic impedance and optical flow cytometry for single-cell analysis and (4) integrated point of care system based on microfluidic impedance flow cytometry. We examine the advantages and limitations of each technique and discuss future research opportunities from the perspectives of both technical innovation and clinical applications.
Chen, Yu-Chih; Yoon, Euisik
Three-dimensional (3D) cell culture is critical in studying cancer pathology and drug response. Though 3D cancer sphere culture can be performed in low-adherent dishes or well plates, unregulated cell aggregation may skew the results. In contrast, microfluidic 3D culture allows precise control of cell microenvironments and provides higher throughput by orders of magnitude. In this chapter, we look into engineering innovations in a microfluidic platform for high-throughput cancer cell sphere formation and review the implementation methods in detail.
Hussain, Rohanah; Jávorfi, Tamás; Rudd, Timothy R.; Siligardi, Giuliano
The sample compartment for high-throughput synchrotron radiation circular dichroism (HT-SRCD) has been developed to satisfy an increased demand for protein characterisation, in terms of folding and binding interaction properties, not only in the traditional field of structural biology but also in the growing research area of materials science, with the potential to reduce measurement time by 80%. As the understanding of protein behaviour in different solvent environments has increased dramatically, so has the development of proteins with novel functions, such as recombinant proteins modified for purposes ranging from harvesting solar energy to metabolomics for cleaning up heavy-metal and organic-molecule pollution; there is therefore a need to characterise these systems rapidly.
The use of high-throughput screening allowed for the optimization of reaction conditions for the palladium-catalyzed asymmetric decarboxylative alkylation reaction of enolate-stabilized enol carbonates. Changing from our standard reaction conditions to a non-polar reaction solvent and to an electron-deficient PHOX derivative as ligand improved the enantioselectivity for the alkylation of a ketal-protected, 1,3-diketone-derived enol carbonate from 28% ee to 84% ee. Similar improvements in enantioselectivity were seen for a β-keto-ester-derived and an α-phenyl cyclohexanone-derived enol carbonate.
We propose statistical methods for comparing phenomics data generated by the Biolog Phenotype Microarray (PM) platform for high-throughput phenotyping. Instead of the routinely used visual inspection of data with no sound inferential basis, we develop two approaches. The first approach is based on quantifying the distance between mean or median curves from two treatments and then applying a permutation test; we also consider a permutation test applied to areas under mean curves. The second approach employs functional principal component analysis. Properties of the proposed methods are investigated on both simulated data and data sets from the PM platform.
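The first approach described above (a distance between group mean curves combined with a permutation test) can be sketched as follows. This is an illustrative outline, not the authors' implementation; the function names and simulated growth-curve data are assumptions:

```python
import numpy as np

def curve_distance(group_a, group_b):
    # L2 distance between the mean response curves of two treatment groups;
    # rows are replicate wells, columns are time points.
    return np.sqrt(np.sum((group_a.mean(axis=0) - group_b.mean(axis=0)) ** 2))

def permutation_test(group_a, group_b, n_perm=999, seed=0):
    # Permute treatment labels and compare the observed distance against
    # the permutation distribution of distances.
    rng = np.random.default_rng(seed)
    observed = curve_distance(group_a, group_b)
    pooled = np.vstack([group_a, group_b])
    n_a = len(group_a)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if curve_distance(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)  # add-one-corrected p-value
```

The area-under-mean-curve variant mentioned in the abstract would replace `curve_distance` with the difference of trapezoidal areas under the two mean curves.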
Conery, Annie L; Larkins-Ford, Jonah; Ausubel, Frederick M; Kirienko, Natalia V
In recent history, the nematode Caenorhabditis elegans has provided a compelling platform for the discovery of novel antimicrobial drugs. In this protocol, we present an automated, high-throughput C. elegans pathogenesis assay, which can be used to screen for anti-infective compounds that prevent nematodes from dying due to Pseudomonas aeruginosa. New antibiotics identified from such screens would be promising candidates for treatment of human infections, and also can be used as probe compounds to identify novel targets in microbial pathogenesis or host immunity. Copyright © 2014 John Wiley & Sons, Inc.
Lundberg, Martin; Thorsen, Stine Buch; Assarsson, Erika
A high throughput protein biomarker discovery tool has been developed based on multiplexed proximity ligation assays (PLA) in a homogeneous format, in the sense of no washing steps. The platform consists of four 24-plex panels profiling 74 putative biomarkers with sub-pM sensitivity, each consuming … specificity, even in multiplex, by its dual recognition feature, its proximity requirement, and most importantly by using unique sequence-specific reporter fragments on both antibody-based probes. To illustrate the potential of this protein detection technology, a pilot biomarker research project …
Kim, Marlene T; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
Publicly available bioassay data often contain errors. Curating massive bioassay data, especially high-throughput screening (HTS) data, for Quantitative Structure-Activity Relationship (QSAR) modeling requires the assistance of automated data curation tools. Automated data curation tools are beneficial to users, especially those without prior computer skills, because many platforms have been developed and optimized based on standardized requirements. As a result, users do not need to extensively configure the curation tool prior to the application procedure. In this chapter, a freely available automatic tool to curate and prepare HTS data for QSAR modeling purposes is described.
Fahlgren, Noah; Gehan, Malia A; Baxter, Ivan
Anticipated population growth, shifting demographics, and environmental variability over the next century are expected to threaten global food security. In the face of these challenges, crop yield for food and fuel must be maintained and improved using fewer input resources. In recent years, genetic tools for profiling crop germplasm have benefited from rapid advances in DNA sequencing, and now similar advances are needed to improve the throughput of plant phenotyping. We highlight recent developments in high-throughput plant phenotyping using robotic-assisted imaging platforms and computer vision-assisted analysis tools. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Zhang, S. H.; Zhang, R. F.
The elastic properties are fundamental and important for crystalline materials, as they relate to other mechanical properties, various thermodynamic qualities, and some critical physical properties. However, a complete set of experimentally determined elastic properties is available for only a small subset of known materials, and an automated scheme for deriving elastic properties that is adapted to high-throughput computation is in high demand. In this paper, we present the AELAS code, an automated program for calculating second-order elastic constants of both two-dimensional and three-dimensional single-crystal materials with any symmetry, designed mainly for high-throughput first-principles computation. Derivations of other general elastic properties, such as the Young's, bulk and shear moduli and Poisson's ratio of polycrystalline materials, the Pugh ratio, Cauchy pressure, elastic anisotropy, and the elastic stability criterion, are also implemented in this code. The implementation has been critically validated by extensive evaluations and tests on a broad class of two-dimensional and three-dimensional materials, proving its efficiency and capability for high-throughput screening of materials with targeted mechanical properties. Program Files doi: http://dx.doi.org/10.17632/f8fwg4j9tw.1. Licensing provisions: BSD 3-Clause. Programming language: Fortran. Nature of problem: to automate the calculation of second-order elastic constants and the derivation of other elastic properties for two-dimensional and three-dimensional materials with any symmetry via high-throughput first-principles computation. Solution method: the space-group number is first determined by the SPGLIB code and the structure is then redefined as a unit cell in IEEE format. Secondly, based on the determined space-group number, a set of distortion modes is automatically specified and the distorted structure files are generated
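The polycrystal derivations listed in the abstract (bulk and shear moduli, Young's modulus, Poisson's ratio, Pugh ratio, Cauchy pressure) follow standard formulas once the stiffness matrix is known. A minimal Voigt-average sketch is shown below in Python for illustration only; AELAS itself is Fortran, and the cubic Cu-like constants here are assumed example values:

```python
import numpy as np

def voigt_moduli(C):
    """Voigt-average polycrystalline moduli from a 6x6 stiffness matrix C (GPa)."""
    C = np.asarray(C, dtype=float)
    K = (C[0, 0] + C[1, 1] + C[2, 2] + 2 * (C[0, 1] + C[0, 2] + C[1, 2])) / 9.0
    G = (C[0, 0] + C[1, 1] + C[2, 2] - (C[0, 1] + C[0, 2] + C[1, 2])
         + 3 * (C[3, 3] + C[4, 4] + C[5, 5])) / 15.0
    E = 9 * K * G / (3 * K + G)               # Young's modulus
    nu = (3 * K - 2 * G) / (2 * (3 * K + G))  # Poisson's ratio
    return {"K": K, "G": G, "E": E, "nu": nu,
            "pugh": G / K,                    # Pugh ratio (ductility indicator)
            "cauchy": C[0, 1] - C[3, 3]}      # Cauchy pressure (cubic crystals)

# Cubic crystal: three independent constants C11, C12, C44 (assumed values)
C11, C12, C44 = 168.3, 122.1, 75.7
C = np.zeros((6, 6))
C[0, 0] = C[1, 1] = C[2, 2] = C11
C[0, 1] = C[0, 2] = C[1, 2] = C[1, 0] = C[2, 0] = C[2, 1] = C12
C[3, 3] = C[4, 4] = C[5, 5] = C44
moduli = voigt_moduli(C)
```

A production code such as AELAS would pair this with the Reuss average and elastic stability checks; the Voigt branch alone suffices to illustrate the derivation chain.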
Boso, Gianluca; Tosi, Alberto; Zappa, Franco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano (Italy)]; Mora, Alberto Dalla [Dipartimento di Fisica, Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano (Italy)]
We present the design and characterization of a high-throughput gated photon counter able to count electrical pulses occurring within two well-defined and programmable detection windows. We extensively characterized and validated this instrument up to 100 Mcounts/s and with detection window width down to 70 ps. This instrument is suitable for many applications and proves to be a cost-effective and compact alternative to time-correlated single-photon counting equipment, thanks to its easy configurability, user-friendly interface, and fully adjustable settings via a Universal Serial Bus (USB) link to a remote computer.
Moller, Isabel Eva; Sørensen, Iben; Bernal Giraldo, Adriana Jimena
We describe here a methodology that enables the occurrence of cell-wall glycans to be systematically mapped throughout plants in a semi-quantitative, high-throughput fashion. The technique (comprehensive microarray polymer profiling, or CoMPP) integrates the sequential extraction of glycans from … analysis of mutant and wild-type plants, as demonstrated here for the Arabidopsis thaliana mutants fra8, mur1 and mur3. CoMPP was also applied to Physcomitrella patens cell walls and was validated by carbohydrate linkage analysis. These data provide new insights into the structure and functions of plant …
Kampmann, Marie-Louise; Buchard, Anders; Børsting, Claus
Here, we demonstrate that punches from buccal swab samples preserved on FTA cards can be used for high-throughput DNA sequencing, also known as massively parallel sequencing (MPS). We typed 44 reference samples with the HID-Ion AmpliSeq Identity Panel using washed 1.2 mm punches from FTA cards with buccal swabs and compared the results with those obtained with DNA extracted using the EZ1 DNA Investigator Kit. Concordant profiles were obtained for all samples. Our protocol includes simple punch, wash, and PCR steps, reducing cost and hands-on time in the laboratory. Furthermore, it facilitates automation of DNA sequencing.
Bugel, Sean M; Tanguay, Robert L; Planchart, Antonio
The evolutionary conservation of genomic, biochemical and developmental features between zebrafish and humans is gradually coming into focus with the end result that the zebrafish embryo model has emerged as a powerful tool for uncovering the effects of environmental exposures on a multitude of biological processes with direct relevance to human health. In this review, we highlight advances in automation, high-throughput (HT) screening, and analysis that leverage the power of the zebrafish embryo model for unparalleled advances in our understanding of how chemicals in our environment affect our health and wellbeing.
Yu, Yanbao; Smith, Madeline; Pieper, Rembert
The stop-and-go-extraction tips (StageTips) have been widely used in shotgun proteomics to clean/desalt peptide samples prior to LC-MS/MS analysis. Here, an extremely simple and high-throughput StageTip protocol is described. In this protocol, an adaptor is introduced to the StageTip, making it readily available for bench-top centrifugation. Each spin step (with 200 μL buffer loaded) takes around 2 min at 4,000 r…
Kim, Sung-Hou [Moraga, CA]; Kim, Rosalind [Moraga, CA]; Jancarik, Jamila [Walnut Creek, CA]
An optimum solubility screen is provided in which a panel of buffers and many additives are used to obtain the most homogeneous and monodisperse protein condition for protein crystallization. The present methods are useful for proteins that aggregate and cannot be concentrated prior to setting up crystallization screens. A high-throughput method using the hanging-drop method and vapor diffusion equilibrium with a panel of twenty-four buffers is further provided. Using the present methods, 14 poorly behaving proteins were screened; 11 of the proteins showed highly improved dynamic light scattering results allowing concentration of the proteins, and 9 were crystallized.
Dalby, Brian; Cates, Sharon; Harris, Adam; Ohki, Elise C; Tilkins, Mary L; Price, Paul J; Ciccarone, Valentina C
Lipofectamine 2000 is a cationic liposome-based reagent that provides high transfection efficiency and high levels of transgene expression in a range of mammalian cell types in vitro using a simple protocol. Optimum transfection efficiency and subsequent cell viability depend on a number of experimental variables such as cell density, liposome and DNA concentrations, liposome-DNA complexing time, and the presence or absence of media components such as antibiotics and serum. The importance of these factors in Lipofectamine 2000-mediated transfection will be discussed together with some specific applications: transfection of primary neurons, high throughput transfection, and delivery of small interfering RNAs. Copyright 2003 Elsevier Inc.
Kang, Aram; Meadows, Corey W.; Canu, Nicolas
ATP requirements and isopentenyl diphosphate (IPP) toxicity pose immediate challenges for engineering bacterial strains to overproduce commodities utilizing IPP as an intermediate. To overcome these limitations, we developed an "IPP-bypass" isopentenol pathway using the promiscuous activity … the endogenous non-mevalonate pathway, we developed a high-throughput screening platform that correlated promiscuous PMD activity toward MVAP with cellular growth. Successful identification of mutants that altered PMD activity demonstrated the sensitivity and specificity of the screening platform. Strains …
This article reviews recent developments in droplet microfluidics enabling high-throughput single-cell analysis. Five key aspects in this field are included in this review: (1) prototype demonstration of single-cell encapsulation in microfluidic droplets; (2) technical improvements of single-cell encapsulation in microfluidic droplets; (3) microfluidic droplets enabling single-cell proteomic analysis; (4) microfluidic droplets enabling single-cell genomic analysis; and (5) integrated microfluidic droplet systems enabling single-cell screening. We examine the advantages and limitations of each technique and discuss future research opportunities by focusing on key performances of throughput, multifunctionality, and absolute quantification.
Ryvkin, Paul; Leung, Yuk Yee; Ungar, Lyle H.; Gregory, Brian D.; Wang, Li-San
Recent advances in high-throughput sequencing allow researchers to examine the transcriptome in more detail than ever before. Using a method known as high-throughput small RNA-sequencing, we can now profile the expression of small regulatory RNAs such as microRNAs and small interfering RNAs (siRNAs) with a great deal of sensitivity. However, there are many other types of small RNAs …
Yang, Jing; Mei, Ying; Hook, Andrew L.; Taylor, Michael; Urquhart, Andrew J.; Bogatyrev, Said R.; Langer, Robert; Anderson, Daniel G.; Davies, Martyn C.; Alexander, Morgan R.
High throughput materials discovery using combinatorial polymer microarrays to screen for new biomaterials with new and improved function is established as a powerful strategy. Here we combine this screening approach with high throughput surface characterisation (HT-SC) to identify surface structure-function relationships. We explore how this combination can help to identify surface chemical moieties that control protein adsorption and subsequent cellular response. The adhesion of human embry...
Woods, D; Crampton, J; Clarke, B; Williamson, R
A recombinant library has been constructed using the plasmid pAT153 and double stranded cDNA prepared from normal human lymphocyte poly(A)+ RNA. Transformation conditions were optimized to yield approximately 200,000 recombinants per microgram of double stranded cDNA. Statistical analysis as well as sequence complexity analysis of the inserted sequences indicates that the cDNA library is representative of > 99% of the poly(A)+ RNA present in the normal human lymphocyte.
Tracey M Filzen
High throughput mRNA expression profiling can be used to characterize the response of cell culture models to perturbations such as pharmacologic modulators and genetic perturbations. As profiling campaigns expand in scope, it is important to homogenize, summarize, and analyze the resulting data in a manner that captures significant biological signals in spite of various noise sources such as batch effects and stochastic variation. We used the L1000 platform for large-scale profiling of 978 representative genes across thousands of compound treatments. Here, a method is described that uses deep learning techniques to convert the expression changes of the landmark genes into a perturbation barcode that reveals important features of the underlying data, performing better than the raw data in revealing important biological insights. The barcode captures compound structure and target information, and predicts a compound's high throughput screening promiscuity, to a higher degree than the original data measurements, indicating that the approach uncovers underlying factors of the expression data that are otherwise entangled or masked by noise. Furthermore, we demonstrate that visualizations derived from the perturbation barcode can be used to more sensitively assign functions to unknown compounds through a guilt-by-association approach, which we use to predict and experimentally validate the activity of compounds on the MAPK pathway. The demonstrated application of deep metric learning to large-scale chemical genetics projects highlights the utility of this and related approaches to the extraction of insights and testable hypotheses from big, sometimes noisy data.
Tong, Ziqiu; Rajeev, Gayathri; Guo, Keying; Ivask, Angela; McCormick, Scott; Lombi, Enzo; Priest, Craig; Voelcker, Nicolas H
With advances in nanotechnology, particles with various sizes, shapes, surface chemistries and compositions can be easily produced. Nano- and microparticles have been extensively explored in many industrial and clinical applications. Ensuring that the particles themselves do not possess any toxic effects on the biological system is of paramount importance. This paper describes a proof-of-concept method in which a microfluidic system is used in conjunction with a cell microarray technique to streamline the analysis of particle-cell interactions in a high-throughput manner. Polymeric microparticles with different particle surface functionalities were first used to investigate the efficiency of particle-cell adhesion under dynamic flow. Silver nanoparticles (AgNPs, 10 nm in diameter) perfused at different concentrations (0 to 20 μg/ml) in parallel streams over the cells in the microchannel exhibited higher toxicity compared to static culture in the 96-well plate format. The developed microfluidic system can be easily scaled up to accommodate a larger number of microchannels for high-throughput analysis of the potential toxicity of a wide range of particles in a single experiment.
Oldenburg Delene J
Background: The amount of DNA in the chloroplasts of some plant species has been shown recently to decline dramatically during leaf development. A high-throughput method of DNA detection in chloroplasts is now needed in order to facilitate the further investigation of this process using large numbers of tissue samples. Results: The DNA-binding fluorophores 4',6-diamidino-2-phenylindole (DAPI), SYBR Green I (SG), SYTO 42, and SYTO 45 were assessed for their utility in flow cytometric analysis of DNA in Arabidopsis chloroplasts. Fluorescence microscopy and real-time quantitative PCR (qPCR) were used to validate flow cytometry data. We found neither DAPI nor SYTO 45 suitable for flow cytometric analysis of chloroplast DNA (cpDNA) content, but did find changes in cpDNA content during development by flow cytometry using SG and SYTO 42. The latter dye provided more sensitive detection, and the results were similar to those from the fluorescence microscopic analysis. Differences in SYTO 42 fluorescence were found to correlate with differences in cpDNA content as determined by qPCR using three primer sets widely spaced across the chloroplast genome, suggesting that the whole genome undergoes copy number reduction during development, rather than selective reduction/degradation of subgenomic regions. Conclusion: Flow cytometric analysis of chloroplasts stained with SYTO 42 is a high-throughput method suitable for determining changes in cpDNA content during development and for sorting chloroplasts on the basis of DNA content.
Label-free and real-time detection technologies can dramatically reduce the time and cost of pharmaceutical testing and development. However, to reach their full promise, these technologies need to be adaptable to high-throughput automation. To demonstrate the potential of single-walled carbon nanotube field-effect transistors (SWCNT-FETs) for high-throughput peptide-based assays, we have designed circuits arranged in an 8 × 12 (96-well) format that are accessible to standard multichannel pipettors. We performed epitope mapping of two HIV-1 gp160 antibodies using an overlapping gp160 15-mer peptide library coated onto non-functionalized SWCNTs. The 15-mer peptides did not require a linker to adhere to the non-functionalized SWCNTs, and binding data were obtained in real time for all 96 circuits. Despite some sequence differences in the HIV strains used to generate these antibodies and the overlapping peptide library, respectively, our results using these antibodies are in good agreement with known data, indicating that peptides immobilized onto SWCNTs are accessible and that linear epitope mapping can be performed in minutes using SWCNT-FETs.
Jung, Sang-Kyu; Qu, Xiaolei; Aleman-Meza, Boanerges; Wang, Tianxiao; Riepe, Celeste; Liu, Zheng; Li, Qilin; Zhong, Weiwei
The booming nanotech industry has raised public concerns about the environmental health and safety impact of engineered nanomaterials (ENMs). High-throughput assays are needed to obtain toxicity data for the rapidly increasing number of ENMs. Here we present a suite of high-throughput methods to study nanotoxicity in intact animals using Caenorhabditis elegans as a model. At the population level, our system measures food consumption of thousands of animals to evaluate population fitness. At the organism level, our automated system analyzes hundreds of individual animals for body length, locomotion speed, and lifespan. To demonstrate the utility of our system, we applied this technology to test the toxicity of 20 nanomaterials under four concentrations. Only fullerene nanoparticles (nC60), fullerol, TiO2, and CeO2 showed little or no toxicity. Various degrees of toxicity were detected from different forms of carbon nanotubes, graphene, carbon black, Ag, and fumed SiO2 nanoparticles. Aminofullerene and UV irradiated nC60 also showed small but significant toxicity. We further investigated the effects of nanomaterial size, shape, surface chemistry, and exposure conditions on toxicity. Our data are publicly available at the open-access nanotoxicity database www.QuantWorm.org/nano. PMID:25611253
Taggart, David J.; Camerlengo, Terry L.; Harrison, Jason K.; Sherrer, Shanen M.; Kshetry, Ajay K.; Taylor, John-Stephen; Huang, Kun; Suo, Zucai
Cellular genomes are constantly damaged by endogenous and exogenous agents that covalently and structurally modify DNA to produce DNA lesions. Although most lesions are mended by various DNA repair pathways in vivo, a significant number of damage sites persist during genomic replication. Our understanding of the mutagenic outcomes derived from these unrepaired DNA lesions has been hindered by the low throughput of existing sequencing methods. Therefore, we have developed a cost-effective high-throughput short oligonucleotide sequencing assay that uses next-generation DNA sequencing technology for the assessment of the mutagenic profiles of translesion DNA synthesis catalyzed by any error-prone DNA polymerase. The vast amount of sequencing data produced were aligned and quantified by using our novel software. As an example, the high-throughput short oligonucleotide sequencing assay was used to analyze the types and frequencies of mutations upstream, downstream and at a site-specifically placed cis–syn thymidine–thymidine dimer generated individually by three lesion-bypass human Y-family DNA polymerases. PMID:23470999
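The quantification step described above, tallying mutation types and frequencies at and around a lesion site, can be illustrated with a toy sketch. This is not the authors' alignment software; the function name and the four-base example reads are assumptions, and real data would first require read alignment and indel handling:

```python
from collections import Counter

def mutation_profile(reference, reads):
    # Per-position substitution frequencies from equal-length aligned reads:
    # for each position, count bases that differ from the reference base.
    counts = [Counter() for _ in reference]
    for read in reads:
        for i, (ref_base, base) in enumerate(zip(reference, read)):
            if base != ref_base:
                counts[i][base] += 1
    n = len(reads)
    return [{base: c / n for base, c in pos.items()} for pos in counts]

# Toy example: a G->A substitution opposite a lesion site in 2 of 4 reads
profile = mutation_profile("ACGT", ["ACGT", "ACAT", "ACAT", "ACGT"])
```

Positions with an empty dictionary were read back faithfully; non-empty entries give the substitution spectrum at that position, which is the kind of per-site summary the assay's software reports for lesion-bypass products.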
Julie D. Thompson
The recent availability of the complete genome sequences of a large number of model organisms, together with the immense amount of data being produced by the new high-throughput technologies, means that we can now begin comparative analyses to understand the mechanisms involved in the evolution of the genome and their consequences in the study of biological systems. Phylogenetic approaches provide a unique conceptual framework for performing comparative analyses of all this data, for propagating information between different systems and for predicting or inferring new knowledge. As a result, phylogeny-based inference systems are now playing an increasingly important role in most areas of high throughput genomics, including studies of promoters (phylogenetic footprinting), interactomes (based on the presence and degree of conservation of interacting proteins), and in comparisons of transcriptomes or proteomes (phylogenetic proximity and co-regulation/co-expression). Here we review the recent developments aimed at making automatic, reliable phylogeny-based inference feasible in large-scale projects. We also discuss how evolutionary concepts and phylogeny-based inference strategies are now being exploited in order to understand the evolution and function of biological systems. Such advances will be fundamental for the success of the emerging disciplines of systems biology and synthetic biology, and will have wide-reaching effects in applied fields such as biotechnology, medicine and pharmacology.
Tattaris, Maria; Reynolds, Matthew P.; Chapman, Scott C.
Remote sensing (RS) of plant canopies permits non-intrusive, high-throughput monitoring of plant physiological characteristics. This study compared three RS approaches: a low-flying UAV (unmanned aerial vehicle), proximal sensing, and satellite-based imagery. Two physiological traits were considered, canopy temperature (CT) and a vegetation index (NDVI), to determine the most viable approaches for large scale crop genetic improvement. The UAV-based platform achieves plot-level resolution while measuring several hundred plots in one mission via high-resolution thermal and multispectral imagery measured at altitudes of 30–100 m. The satellite measures multispectral imagery from an altitude of 770 km. Information was compared with proximal measurements using IR thermometers and an NDVI sensor at a distance of 0.5–1 m above plots. For robust comparisons, CT and NDVI were assessed on panels of elite cultivars under irrigated and drought conditions, in different thermal regimes, and on un-adapted genetic resources under water deficit. Correlations between airborne data and yield/biomass at maturity were generally higher than equivalent proximal correlations. NDVI was derived from high-resolution satellite imagery for only larger sized plots (8.5 × 2.4 m) due to restricted pixel density. Results support use of UAV-based RS techniques for high-throughput phenotyping for both precision and efficiency. PMID:27536304
Paul N. Schofield
Recent advances in gene knockout techniques and the in vivo analysis of mutant mice, together with the advent of large-scale projects for systematic mouse mutagenesis and genome-wide phenotyping, have allowed the creation of platforms for the most complete and systematic analysis of gene function ever undertaken in a vertebrate. The development of high-throughput phenotyping pipelines for these and other large-scale projects allows investigators to search and integrate large amounts of directly comparable phenotype data from many mutants, on a genomic scale, to help develop and test new hypotheses about the origins of disease and the normal functions of genes in the organism. Histopathology has a venerable history in the understanding of the pathobiology of human and animal disease, and presents complementary advantages and challenges to in vivo phenotyping. In this review, we present evidence for the unique contribution that histopathology can make to a large-scale phenotyping effort, using examples from past and current programmes at Lexicon Pharmaceuticals and The Jackson Laboratory, and critically assess the role of histopathology analysis in high-throughput phenotyping pipelines.
Lucena Severino A
Background: The description of new hydrolytic enzymes is an important step in the development of techniques which use lignocellulosic materials as a starting point for fuel production. Sugarcane bagasse, which is subjected to pre-treatment, hydrolysis and fermentation for the production of ethanol in several test refineries, is the most promising source of raw material for the production of second-generation renewable fuels in Brazil. One problem when screening hydrolytic activities is that the activity against commercial substrates, such as carboxymethylcellulose, does not always correspond to the activity against the natural lignocellulosic material. Besides that, the macroscopic characteristics of the raw material, such as insolubility and heterogeneity, hinder its use for high-throughput screenings. Results: In this paper, we present the preparation of a colloidal suspension of particles obtained from sugarcane bagasse, with minimal chemical change to the lignocellulosic material, and demonstrate its use for high-throughput assays of hydrolases using Brazilian termites as the screened organisms. Conclusions: Important differences between the use of the natural substrate and commercial cellulase substrates, such as carboxymethylcellulose or crystalline cellulose, were observed. This suggests that wood-feeding termites, in contrast to litter-feeding termites, might not be the best source for enzymes that degrade sugarcane biomass.
Farias-Hesson, Eveline; Erikson, Jonathan; Atkins, Alexander; Shen, Peidong; Davis, Ronald W; Scharfe, Curt; Pourmand, Nader
Next-generation sequencing platforms are powerful technologies, providing gigabases of genetic information in a single run. An important prerequisite for high-throughput DNA sequencing is the development of robust and cost-effective preprocessing protocols for DNA sample library construction. Here we report the development of a semi-automated sample preparation protocol to produce adaptor-ligated fragment libraries. Using a liquid-handling robot in conjunction with carboxy-terminated magnetic beads, we labeled each library sample with a unique 6 bp DNA barcode, which allowed multiplex sample processing and sequencing of 32 libraries in a single run using Applied Biosystems' SOLiD sequencer. We applied our semi-automated pipeline to targeted medical resequencing of nuclear candidate genes in individuals affected by mitochondrial disorders. This novel method is capable of preparing as many as 32 DNA libraries in 2.01 days (8-hour workdays) for emulsion PCR/high-throughput DNA sequencing, an 8-fold increase in sample preparation throughput.
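The 6 bp barcoding scheme above enables multiplexed sequencing because reads can later be binned by their barcode prefix. A hypothetical sketch of that demultiplexing step (the barcode sequences and library names are invented for illustration):

```python
# Illustrative barcode -> library map; real runs would have up to 32 entries.
BARCODES = {"ACGTAC": "lib01", "TTGCAA": "lib02"}

def demultiplex(reads, barcodes, bc_len=6):
    """Group reads by their leading barcode; unmatched reads go to 'undetermined'."""
    bins = {lib: [] for lib in barcodes.values()}
    bins["undetermined"] = []
    for read in reads:
        lib = barcodes.get(read[:bc_len], "undetermined")
        bins[lib].append(read[bc_len:])  # strip the barcode before binning
    return bins

bins = demultiplex(["ACGTACGGGT", "TTGCAACCCA", "NNNNNNAAAA"], BARCODES)
print(bins["lib01"])  # ['GGGT']
```

Real pipelines additionally tolerate barcode mismatches; exact matching keeps the sketch minimal.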
Lei, Cheng; Ugawa, Masashi; Nozawa, Taisuke; Ideguchi, Takuro; Di Carlo, Dino; Ota, Sadao; Ozeki, Yasuyuki; Goda, Keisuke
Particle analysis is an effective method in analytical chemistry for sizing and counting microparticles such as emulsions, colloids, and biological cells. However, conventional methods for particle analysis, which fall into two extreme categories, have severe limitations. Sieving and Coulter counting are capable of analyzing particles with high throughput, but because they lack detailed information such as morphological and chemical characteristics, they can only provide statistical results with low specificity. On the other hand, CCD or CMOS image sensors can be used to analyze individual microparticles with high content, but due to their slow charge readout, the frame rate (and hence the throughput) is significantly limited. Here, by integrating a time-stretch optical microscope with a three-color fluorescent analyzer on top of an inertial-focusing microfluidic device, we demonstrate an optofluidic particle analyzer with a sub-micrometer spatial resolution down to 780 nm and a high throughput of 10,000 particles/s. In addition to its morphological specificity, the particle analyzer provides chemical specificity to identify chemical expressions of particles via fluorescence detection. Our results indicate that we can identify different species of microparticles with high specificity without sacrificing throughput. Our method holds promise for high-precision statistical particle analysis in the chemical industry and pharmaceutics.
Martin, Helen J; Reynolds, James C; Riazanskaia, Svetlana; Thomas, C L Paul
The non-invasive nature of volatile organic compound (VOC) sampling from skin makes this a priority in the development of new screening and diagnostic assays. Evaluation of recent literature highlights the tension between the analytical utility of ambient ionisation approaches for skin profiling and the practicality of undertaking larger campaigns (higher statistical power), or undertaking research in remote locations. This study describes how VOCs may be sampled from skin and recovered from a polydimethylsilicone sampling coupon and analysed by thermal desorption (TD) interfaced to secondary electrospray ionisation (SESI) time-of-flight mass spectrometry (MS) for the high-throughput screening of volatile fatty acids (VFAs) from human skin. Analysis times were reduced by 79% compared to gas chromatography-mass spectrometry (GC-MS) methods, and limits of detection in the range 300 to 900 pg cm(-2) for VFA skin concentrations were obtained. Using body odour as a surrogate model for clinical testing, 10 Filipino participants (5 high and 5 low odour) were sampled in Manila; the samples were returned to the UK and screened by TD-SESI-MS and TD-GC-MS for malodour precursors, with >95% agreement between the two analytical techniques. Eight additional VFAs were also identified by both techniques, with chain lengths of 4 to 15 carbons observed. TD-SESI-MS appears to have significant potential for the high-throughput targeted screening of volatile biomarkers in human skin.
Damoiseaux, R; George, S; Li, M; Pokhrel, S; Ji, Z; France, B; Xia, T; Suarez, E; Rallo, R; Mädler, L; Cohen, Y; Hoek, EMV; Nel, A
Nanomaterials hold great promise for medical, technological and economic benefits. Knowledge concerning the toxicological properties of these novel materials is typically lacking. At the same time, it is becoming evident that some nanomaterials could have a toxic potential in humans and the environment. Animal-based systems lack the capacity to cope with the abundance of novel nanomaterials being produced, and thus we have to employ in vitro methods with high throughput to manage the rush logistically, and use high-content readouts wherever needed in order to gain more depth of information. Towards this end, high-throughput screening (HTS) and high-content screening (HCS) approaches can be used to speed up safety analysis on a scale commensurate with the rate of expansion of new materials and new properties. The insights gained from HTS/HCS should aid our understanding of the tenets of nanomaterial hazard at the biological level, as well as assist the development of safe-by-design approaches. This review aims to provide a comprehensive introduction to the HTS/HCS methodology employed for safety assessment of engineered nanomaterials (ENMs), including data analysis and prediction of potentially hazardous material properties. Given the current pace of nanomaterial development, HTS/HCS is a potentially effective means of keeping up with the rapid progress in this field; we have literally no time to lose. PMID:21301704
MicroRNAs are short single-stranded RNA molecules (18-25 nucleotides). Because of their ability to silence gene expression, they can be used to diagnose and treat tumors. Experimental construction of microRNA libraries is the most important step in identifying microRNAs from animal tissues. Although there are many commercial kits with special protocols for constructing microRNA libraries, this chapter provides reliable, high-throughput, and affordable protocols for microRNA library construction. The high-throughput capability of our protocols comes from a double-concentration (3 and 15%, thickness 1.5 mm) polyacrylamide gel electrophoresis (PAGE), which can directly extract microRNA-size RNAs from up to 400 μg of total RNA (enough for two microRNA libraries). The reliability of our protocols is assured by a third PAGE, which selects PCR products of microRNA-size RNAs ligated with 5' and 3' linkers using a miRCat™ kit. A MathCAD program is also provided to automatically search for short RNAs inserted between the 5' and 3' linkers in thousands of sequencing text files.
Kebschull, Justus M.; Zador, Anthony M.
PCR permits the exponential and sequence-specific amplification of DNA, even from minute starting quantities. PCR is a fundamental step in preparing DNA samples for high-throughput sequencing. However, there are errors associated with PCR-mediated amplification. Here we examine the effects of four important sources of error—bias, stochasticity, template switches and polymerase errors—on sequence representation in low-input next-generation sequencing libraries. We designed a pool of diverse PCR amplicons with a defined structure, and then used Illumina sequencing to search for signatures of each process. We further developed quantitative models for each process, and compared predictions of these models to our experimental data. We find that PCR stochasticity is the major force skewing sequence representation after amplification of a pool of unique DNA amplicons. Polymerase errors become very common in later cycles of PCR but have little impact on the overall sequence distribution as they are confined to small copy numbers. PCR template switches are rare and confined to low copy numbers. Our results provide a theoretical basis for removing distortions from high-throughput sequencing data. In addition, our findings on PCR stochasticity will have particular relevance to quantification of results from single cell sequencing, in which sequences are represented by only one or a few molecules. PMID:26187991
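The PCR stochasticity identified above as the dominant source of skew can be illustrated with a simple branching-process simulation (a generic model for illustration, not the paper's fitted one): each molecule duplicates with some per-cycle efficiency, so pools starting from single template molecules drift to widely different copy numbers.

```python
import random

def simulate_pcr(molecules, cycles, efficiency, seed=0):
    """Branching-process sketch of PCR: each molecule duplicates with
    probability `efficiency` per cycle; low-input amplicons diverge."""
    rng = random.Random(seed)
    counts = list(molecules)
    for _ in range(cycles):
        counts = [n + sum(rng.random() < efficiency for _ in range(n))
                  for n in counts]
    return counts

# Ten amplicons, each starting from a single template molecule
final = simulate_pcr([1] * 10, cycles=15, efficiency=0.8)
print(min(final), max(final))  # identical inputs, very different copy numbers
```

The early cycles dominate the final spread, which is why single-molecule inputs (as in single-cell sequencing) are most affected.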
Tian, Geng; Tang, Fangrong; Yang, Chunhua; Zhang, Wenfeng; Bergquist, Jonas; Wang, Bin; Mi, Jia; Zhang, Jiandi
The lack of an affordable method of high-throughput immunoblot analysis for daily use remains a big challenge for scientists worldwide. We propose here Quantitative Dot Blot (QDB) analysis to meet this demand. With a defined linear range, QDB analysis fundamentally transforms the traditional immunoblot method into a true quantitative assay. Its convenience in analyzing large numbers of samples also enables bench scientists to examine protein expression levels across multiple parameters. In addition, the small amount of sample lysate needed for analysis means significant savings in research resources and effort. This method was evaluated at both the cellular and tissue levels, with unexpected observations that would otherwise be hard to achieve using conventional immunoblot methods like Western blot analysis. Using the QDB technique, we observed a significant age-dependent alteration in CAPG protein expression level in TRAMP mice. We believe that the adoption of QDB analysis will have an immediate impact on biological and biomedical research by providing much-needed high-throughput information at the protein level in this "Big Data" era.
Garcia, G; Santos, C Nunes do; Menezes, R
The association between altered proteostasis and inflammatory responses has been increasingly recognized; therefore, the identification and characterization of novel compounds with anti-inflammatory potential will certainly have a great impact on the therapeutics of protein-misfolding diseases such as degenerative disorders. Although cell-based screens are powerful approaches to identify potential therapeutic compounds, establishing robust inflammation models amenable to high-throughput screening remains a challenge. To bridge this gap, we have exploited the use of yeasts as a platform to identify lead compounds with anti-inflammatory properties. The yeast cell model described here relies on the high degree of homology between the mammalian and yeast Ca(2+)/calcineurin pathways, which converge on the activation of the orthologous proteins NFAT and Crz1, respectively. It consists of a recombinant yeast strain encoding the lacZ gene under the control of Crz1-recognition elements to facilitate the identification of compounds interfering with Crz1 activation through easy monitoring of β-galactosidase activity. Here, we describe in detail a protocol optimized for high-throughput screening of compounds with potential anti-inflammatory activity, as well as a protocol to validate positive hits using an alternative β-galactosidase substrate.
Hou, Sichao; Huo, Ruiqing; Su, Ming
Commonly used thermal analysis tools such as calorimeters and thermal conductivity meters are separate instruments limited by low throughput, where only one sample is examined at a time. This work reports an infrared-based optical calorimetry method, with its theoretical foundation, which provides an integrated solution to characterize the thermal properties of materials with high throughput. By taking time-domain temperature information from spatially distributed samples, this method allows a single device (an infrared camera) to determine the thermal properties of both phase-change systems (melting temperature and latent heat of fusion) and non-phase-change systems (thermal conductivity and heat capacity). The method further allows these thermal properties to be determined for multiple samples rapidly, remotely, and simultaneously. In this proof-of-concept experiment, the thermal properties of a panel of 16 samples, including melting temperatures, latent heats of fusion, heat capacities, and thermal conductivities, were determined in 2 min with high accuracy. Given the high thermal, spatial, and temporal resolution of advanced infrared cameras, this method has the potential to revolutionize the thermal characterization of materials by providing an integrated solution with high throughput, high sensitivity, and short analysis time.
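One way a time-domain temperature trace yields a melting temperature is plateau detection: during steady heating, melting absorbs the latent heat and the heating rate dips. A toy sketch of that idea on synthetic data (this is an illustration of the principle, not the authors' algorithm):

```python
def melting_point(times, temps):
    """Return the temperature at which the heating rate dT/dt is smallest,
    i.e. the plateau temperature of the melting transition."""
    rates = [(temps[i + 1] - temps[i]) / (times[i + 1] - times[i])
             for i in range(len(temps) - 1)]
    i_min = rates.index(min(rates))
    return temps[i_min]

# Synthetic trace: heating at ~5 K/s with a plateau at 330 K
t = list(range(12))
T = [300, 305, 310, 315, 320, 325, 330, 330, 331, 336, 341, 346]
print(melting_point(t, T))  # 330
```

With one camera frame per time point, the same per-pixel analysis runs on every sample in the field of view at once, which is where the throughput comes from.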
Cranmer, Miles; Barsdell, Benjamin R.; Price, Danny C.; Garsden, Hugh; Taylor, Gregory B.; Dowell, Jayce; Schinzel, Frank; Costa, Timothy; Greenhill, Lincoln J.
Large radio interferometers have data rates that render long-term storage of raw correlator data infeasible, thus motivating development of real-time processing software. For high-throughput applications, processing pipelines are challenging to design and implement. Motivated by science efforts with the Long Wavelength Array, we have developed Bifrost, a novel Python/C++ framework that eases the development of high-throughput data analysis software by packaging algorithms as black box processes in a directed graph. This strategy to modularize code allows astronomers to create parallelism without code adjustment. Bifrost uses CPU/GPU ’circular memory’ data buffers that enable ready introduction of arbitrary functions into the processing path for ’streams’ of data, and allow pipelines to automatically reconfigure in response to astrophysical transient detection or input of new observing settings. We have deployed and tested Bifrost at the latest Long Wavelength Array station, in Sevilleta National Wildlife Refuge, NM, where it handles throughput exceeding 10 Gbps per CPU core.
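The "black box processes in a directed graph" design can be sketched in miniature: stages expose a uniform interface and compose into a pipeline without touching each other's internals (illustrative only; Bifrost's actual API, GPU support, and ring-buffer transport differ):

```python
# Minimal sketch of composable black-box pipeline stages.
class Block:
    def __init__(self, func):
        self.func = func  # stream -> stream transformation

    def __rshift__(self, other):          # chain blocks with `>>`
        return Block(lambda stream: other.func(self.func(stream)))

    def run(self, stream):
        return list(self.func(stream))

read      = Block(lambda stream: iter(stream))
calibrate = Block(lambda stream: (x * 2.0 for x in stream))   # stand-in DSP stage
detect    = Block(lambda stream: (x for x in stream if x > 5))  # stand-in detector

pipeline = read >> calibrate >> detect
print(pipeline.run([1, 2, 3, 4]))  # [6.0, 8.0]
```

Because each stage only sees a stream, stages can be swapped or inserted without code changes elsewhere, which is the modularity the framework trades on.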
This paper focuses on a pass-logic-based design that yields a low Energy Per Instruction (EPI) and high-throughput COordinate Rotation DIgital Computer (CORDIC) cell for robotic exploration applications. The basic components of the CORDIC cell, namely the register, multiplexer and proposed adder, are designed using pass transistor logic (PTL). The proposed adder is implemented in a bit-parallel iterative CORDIC circuit designed using the DSCH2 VLSI CAD tool, and the layouts are generated with the Microwind 3 VLSI CAD tool. The propagation delay, area and power dissipation are calculated from the simulated results for the proposed adder-based CORDIC cell. The EPI, throughput and effect of temperature are calculated from the generated layout, whose output parameters are analysed using the BSIM4 advanced analyzer. The simulated results of the proposed adder-based CORDIC circuit are compared with other adder-based CORDIC circuits. From this analysis, it was found that the proposed adder-based CORDIC circuit dissipates less power, responds faster, and achieves lower EPI and higher throughput.
Lam, Raymond H W; Cui, Xin; Guo, Weijin; Thorsen, Todd
Dental biofilm formation is not only a precursor to tooth decay, but also induces more serious systemic health problems such as cardiovascular disease and diabetes. Understanding the conditions promoting colonization and subsequent biofilm development involving complex bacterial coaggregation is particularly important. In this paper, we report a high-throughput microfluidic 'artificial teeth' device offering control of multiple microenvironmental factors (e.g. nutrients, growth factors, dissolved gases, and seeded cell populations) for quantitative characterization of long-term dental bacteria growth and biofilm development. This 'artificial teeth' device contains multiple (up to 128) incubation chambers to perform parallel cultivation and analyses (e.g. biofilm thickness, viable-dead cell ratio, and spatial distribution of multiple bacterial species) of bacteria samples under a matrix of different combinations of microenvironmental factors, further revealing possible developmental mechanisms of dental biofilms. Specifically, we applied the 'artificial teeth' to investigate the growth of two key dental bacteria, Streptococci species and Fusobacterium nucleatum, in biofilms under different dissolved gas conditions and sucrose concentrations. Together, this high-throughput microfluidic platform can support extended applications in general biofilm research, including screening of biofilm properties developing under combinations of specified growth parameters such as seeding bacteria populations, growth medium compositions, medium flow rates and dissolved gas levels.
There is a large variety of nanomaterials, each with unique electronic, optical and sensing properties. However, there is currently no paradigm for the integration of different nanomaterials on a single chip in a low-cost, high-throughput manner. We present a high-throughput integration approach based on spatially controlled dielectrophoresis, executed sequentially for each nanomaterial type, to realize a scalable array of individually addressable assemblies of graphene, carbon nanotubes, metal oxide nanowires and conductive polymers on a single chip. This is the first time that such a diversity of nanomaterials has been assembled on the same layer of a single chip. The resolution of assembly can range from mesoscale to microscale and is limited only by the size and spacing of the underlying on-chip electrodes used for assembly. While many applications are possible, the utility of such an array is demonstrated with an example application: a chemical sensor array for detection of volatile organic compounds at sub-parts-per-million sensitivity.
Fraser, Lewis A; Kinghorn, Andrew B; Tang, Marco S L; Cheung, Yee-Wai; Lim, Bryce; Liang, Shaolin; Dirkzwager, Roderick M; Tanner, Julian A
The functionalisation of microbeads with oligonucleotides has become an indispensable technique for high-throughput aptamer selection in SELEX protocols. In addition to simplifying the separation of binding and non-binding aptamer candidates, microbeads have facilitated the integration of other technologies such as emulsion PCR (ePCR) and Fluorescence Activated Cell Sorting (FACS) to high-throughput selection techniques. Within these systems, monoclonal aptamer microbeads can be individually generated and assayed to assess aptamer candidate fitness thereby helping eliminate stochastic effects which are common to classical SELEX techniques. Such techniques have given rise to aptamers with 1000 times greater binding affinities when compared to traditional SELEX. Another emerging technique is Fluorescence Activated Droplet Sorting (FADS) whereby selection does not rely on binding capture allowing evolution of a greater diversity of aptamer properties such as fluorescence or enzymatic activity. Within this review we explore examples and applications of oligonucleotide functionalised microbeads in aptamer selection and reflect upon new opportunities arising for aptamer science.
Leary, Elizabeth; Rhee, Claire; Wilks, Benjamin T; Morgan, Jeffrey R
Accurately predicting the human response to new compounds is critical to a wide variety of industries. Standard screening pipelines (including both in vitro and in vivo models) often lack predictive power. Three-dimensional (3D) culture systems of human cells, a more physiologically relevant platform, could provide a high-throughput, automated means to test the efficacy and/or toxicity of novel substances. However, the challenge of obtaining high-magnification, confocal z stacks of 3D spheroids and understanding their respective quantitative limitations must be overcome first. To address this challenge, we developed a method to form spheroids of reproducible size at precise spatial locations across a 96-well plate. Spheroids of variable radii were labeled with four different fluorescent dyes and imaged with a high-throughput confocal microscope. 3D renderings of the spheroid had a complex bowl-like appearance. We systematically analyzed these confocal z stacks to determine the depth of imaging and the effect of spheroid size and dyes on quantitation. Furthermore, we have shown that this loss of fluorescence can be addressed through the use of ratio imaging. Overall, understanding both the limitations of confocal imaging and the tools to correct for these limits is critical for developing accurate quantitative assays using 3D spheroids.
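The ratio-imaging correction mentioned above works because depth-dependent attenuation in a spheroid scales both fluorescence channels similarly, so their pixelwise ratio stays comparable across z-depth. A minimal sketch with toy single-pixel "images" (all values are invented for illustration):

```python
def ratio_image(ch_a, ch_b, eps=1e-6):
    """Pixelwise ratio of two fluorescence channels.
    `eps` guards against division by zero in background pixels."""
    return [[a / (b + eps) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(ch_a, ch_b)]

# Two depths: raw intensity halves with depth, but the ratio is preserved
shallow = ratio_image([[80.0]], [[40.0]])
deep    = ratio_image([[40.0]], [[20.0]])
print(shallow[0][0], deep[0][0])  # both ≈ 2.0
```

A quantity encoded as a channel ratio therefore remains usable deep into the z stack even where absolute intensities have decayed.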
Chhabra, S.R.; Butland, G.; Elias, D.; Chandonia, J.-M.; Fok, V.; Juba, T.; Gorur, A.; Allen, S.; Leung, C.-M.; Keller, K.; Reveco, S.; Zane, G.; Semkiw, E.; Prathapam, R.; Gold, B.; Singer, M.; Ouellet, M.; Sazakal, E.; Jorgens, D.; Price, M.; Witkowska, E.; Beller, H.; Hazen, T.C.; Biggin, M.; Auer, M.; Wall, J.; Keasling, J.
The ability to conduct advanced functional genomic studies of the thousands of sequenced bacteria has been hampered by the lack of available tools for making high-throughput chromosomal manipulations in a systematic manner that can be applied across diverse species. In this work, we highlight the use of synthetic biological tools to assemble custom suicide vectors with reusable and interchangeable DNA “parts” to facilitate chromosomal modification at designated loci. These constructs enable an array of downstream applications including gene replacement and creation of gene fusions with affinity purification or localization tags. We employed this approach to engineer chromosomal modifications in a bacterium that has previously proven difficult to manipulate genetically, Desulfovibrio vulgaris Hildenborough, to generate a library of over 700 strains. Furthermore, we demonstrate how these modifications can be used for examining metabolic pathways, protein-protein interactions, and protein localization. The ubiquity of suicide constructs in gene replacement throughout biology suggests that this approach can be applied to engineer a broad range of species for a diverse array of systems biological applications and is amenable to high-throughput implementation.
Extensive genome-wide transcriptome studies mediated by high-throughput sequencing techniques have revolutionized the study of genetics and epigenetics at unprecedented resolution. This research has revealed that, besides protein-coding RNAs, a large proportion of the mammalian transcriptome comprises regulatory non-protein-coding RNAs, the number encoded within the human genome remaining enigmatic. Earlier views categorized these non-coding RNAs as "dark matter" and "junk". Breaking this myth, RNA-seq, a recently developed experimental technique, is now widely used to study non-coding RNAs, which have acquired the limelight due to their physiological and pathological significance. The longest members of the ncRNA family, long non-coding RNAs, act as stable and functional parts of a genome, providing important clues about varied biological events such as the cellular and structural processes governing the complexity of an organism. Here, we review the most recent and influential computational approaches developed to identify and quantify long non-coding RNAs, helping users choose appropriate tools for their specific research. Keywords: Transcriptome, High throughput sequencing, Genetic and epigenetic, Long non-coding RNA, RNA-sequencing, RNA-seq
Microgels are biocompatible polymeric materials that have been widely used as micro-carriers in materials synthesis, drug delivery and cell biology applications. However, high-throughput generation of individual microgels for on-site analysis in a microdevice still remains a challenge. Here, we present a simple and stable droplet microfluidic system realizing high-throughput generation and trapping of individual agarose microgels, based on the synergetic effect of surface tension and hydrodynamic forces in microchannels, and use it for 3-D cell culture in real time. The established system is mainly composed of droplet generators with a flow-focusing T-junction and a series of arrayed individual trap structures. The whole process, including independent agarose microgel formation, immobilization in the trapping array and gelation in situ via cooling, can be completed on the integrated microdevice. The performance of this system was demonstrated by successfully encapsulating and culturing adenoid cystic carcinoma (ACCM) cells in the gelated agarose microgels. This established approach is simple and easy to operate; it can not only generate micro-carriers with different components in parallel, but also monitor cell behavior in a 3D matrix in real time. It can also be extended to applications in materials synthesis and tissue engineering. © 2013 Springer-Verlag Berlin Heidelberg.
The rapid increase in the use of metabolite profiling/fingerprinting techniques to resolve complicated issues in metabolomics has stimulated demand for data processing techniques, such as alignment, to extract detailed information. In this study, a new and automated method was developed to correct the retention time shift of high-dimensional and high-throughput data sets. Information from the target chromatographic profiles was used to determine the standard profile as a reference for alignment. A novel, piecewise data partition strategy was applied for the determination of the target components in the standard profile as markers for alignment. An automated target search (ATS) method was proposed to find the exact retention times of the selected targets in other profiles for alignment. The linear interpolation technique (LIT) was employed to align the profiles prior to pattern recognition, comprehensive comparison analysis, and other data processing steps. In total, 94 metabolite profiles of ginseng were studied, including the most volatile secondary metabolites. The method described in this article could be an essential step in the extraction of information from high-throughput data acquired in the study of systems biology, metabolomics, and biomarker discovery.
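The linear-interpolation alignment step can be sketched as a piecewise-linear mapping anchored at the matched target retention times: between anchors, each profile's time axis is stretched or compressed onto the standard profile's axis. The function name and numbers below are illustrative:

```python
def align_time(t, targets_profile, targets_standard):
    """Map a retention time t from this profile onto the standard profile's
    time axis, interpolating linearly between matched anchor peaks."""
    pairs = sorted(zip(targets_profile, targets_standard))
    for (t0, s0), (t1, s1) in zip(pairs, pairs[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return s0 + frac * (s1 - s0)
    return t  # outside the anchored range: leave unchanged

# A target eluting at 12.0 min in this profile matches 12.4 min in the standard
print(align_time(11.0, [10.0, 12.0], [10.0, 12.4]))  # 11.2
```

Applying this mapping to every point of a chromatogram brings corresponding peaks into register before pattern recognition.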
David D Pollock
Transcriptional regulation depends upon the binding of transcription factor (TF) proteins to DNA in a sequence-dependent manner. Although many experimental methods address the interaction between DNA and proteins, they generally do not comprehensively and accurately assess the full binding repertoire (the complete set of sequences that might be bound with at least moderate strength). Here, we develop and evaluate through simulation an experimental approach that allows simultaneous high-throughput quantitative analysis of TF binding affinity to thousands of potential DNA ligands. Tens of thousands of putative binding targets can be mixed with a TF, and both the pre-bound and bound target pools sequenced. A hierarchical Bayesian Markov chain Monte Carlo approach determines posterior estimates for the dissociation constants, sequence-specific binding energies, and free TF concentrations. A unique feature of our approach is that dissociation constants are jointly estimated from their inferred degree of binding and from a model of binding energetics, depending on how many sequence reads are available and the explanatory power of the energy model. Careful experimental design is necessary to obtain accurate results over a wide range of dissociation constants. This approach, which we call Simultaneous Ultra high-throughput Ligand Dissociation EXperiment (SULDEX), is theoretically capable of rapid and accurate elucidation of an entire TF-binding repertoire.
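Setting aside the hierarchical Bayesian machinery, the relation underlying this kind of estimate is the binding isotherm theta = TF / (TF + Kd). A back-of-envelope sketch of inverting it from read fractions in the two pools (the scaling via a pool-average bound fraction is a simplification I am assuming for illustration; all names and numbers are hypothetical):

```python
def estimate_kd(pre_frac, bound_frac, mean_theta, tf_conc):
    """Point-estimate Kd for one sequence from its share of reads in the
    pre-bound and bound pools, given the pool-average bound fraction and
    the free TF concentration. Inverts theta = TF / (TF + Kd)."""
    theta = mean_theta * bound_frac / pre_frac  # enrichment scales the bound fraction
    return tf_conc * (1.0 - theta) / theta

# Hypothetical numbers: a sequence 2x enriched in the bound pool
kd = estimate_kd(pre_frac=0.01, bound_frac=0.02, mean_theta=0.25, tf_conc=10.0)
print(kd)  # 10.0 (same units as tf_conc)
```

The full method instead estimates all Kd values jointly with the free TF concentration and an energy model, which shares statistical strength across sequences with few reads.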
Sun, Yiyi; Zang, Zhihe; Zhong, Ling; Wu, Min; Su, Qing; Gao, Xiurong; Zan, Wang; Lin, Dong; Zhao, Yan; Zhang, Zhonglin
Adiponectin, the adipose-derived hormone, plays an important role in the suppression of metabolic disorders that can result in type 2 diabetes, obesity, and atherosclerosis. It has been shown that up-regulation of adiponectin or the adiponectin receptor has a number of therapeutic benefits. Given that it is hard to convert the full-size adiponectin protein into a viable drug, adiponectin receptor agonists could be designed or identified using high-throughput screening. Here, we report on the development of a two-step screening process to identify adiponectin agonists. As a first step, we developed a high-throughput screening assay based on fluorescence polarization to identify adiponectin ligands. The fluorescence polarization assay reported here could be adapted to screening against larger small-molecule compound libraries. A natural product library containing 10,000 compounds was screened and 9 hits were selected for validation. These compounds were then taken into second-step in vitro tests to confirm their agonistic activity. The most active adiponectin receptor 1 agonists are matairesinol, arctiin, (-)-arctigenin and gramine. The most active adiponectin receptor 2 agonists are parthenolide, taxifoliol, deoxyschizandrin, and syringin. These compounds may be useful drug candidates for hypoadiponectin-related diseases. PMID:23691032
Karbaschi, Mahsa; Cooke, Marcus S
Single cell gel electrophoresis (the comet assay) continues to gain popularity as a means of assessing DNA damage. However, the assay's low sample throughput and laborious sample workup procedure are limiting factors to its application. "Scoring", or individually determining DNA damage levels in 50 cells per treatment, is time-consuming, but with the advent of high-throughput scoring, the limitation is now the ability to process significant numbers of comet slides. We have developed a novel method by which multiple slides may be manipulated and undergo electrophoresis in batches of 25 rather than individually; importantly, it retains the use of standard microscope comet slides, which are the assay convention. This decreases assay time by 60% and benefits from an electrophoresis tank with a substantially smaller footprint and more uniform orientation of gels during electrophoresis. Our high-throughput variant of the comet assay greatly increases the number of samples analysed and decreases assay time, the number of individual slide manipulations, reagent requirements and the risk of damage to slides. The compact nature of the electrophoresis tank is of particular benefit to laboratories where bench space is at a premium. This novel approach is a significant advance on the current comet assay procedure.
Götz, Stefan; García-Gómez, Juan Miguel; Terol, Javier; Williams, Tim D.; Nagaraj, Shivashankar H.; Nueda, María José; Robles, Montserrat; Talón, Manuel; Dopazo, Joaquín; Conesa, Ana
Functional genomics technologies have been widely adopted in the biological research of both model and non-model species. An efficient functional annotation of DNA or protein sequences is a major requirement for the successful application of these approaches, as functional information on gene products is often the key to the interpretation of experimental results. Therefore, there is an increasing need for bioinformatics resources which are able to cope with large amounts of sequence data, produce valuable annotation results and are easily accessible to laboratories where functional genomics projects are being undertaken. We present the Blast2GO suite as an integrated and biologist-oriented solution for the high-throughput and automatic functional annotation of DNA or protein sequences based on the Gene Ontology vocabulary. The most outstanding Blast2GO features are: (i) the combination of various annotation strategies and tools controlling type and intensity of annotation, (ii) the numerous graphical features such as the interactive GO-graph visualization for gene-set function profiling or descriptive charts, (iii) the general sequence management features and (iv) high-throughput capabilities. We used the Blast2GO framework to carry out a detailed analysis of annotation behaviour through homology transfer and its impact in functional genomics research. Our aim is to offer biologists useful information to take into account when addressing the task of functionally characterizing their sequence data. PMID:18445632
Moutsatsos, Ioannis K; Parker, Christian N
High throughput screening has become a basic technique with which to explore biological systems. Advances in technology, including increased screening capacity, as well as methods that generate multiparametric readouts, are driving the need for improvements in the analysis of data sets derived from such screens. This article covers the recent advances in the analysis of high throughput screening data sets from arrayed samples, as well as the recent advances in the analysis of cell-by-cell data sets derived from image or flow cytometry applications. Screening multiple genomic reagents targeting any given gene creates additional challenges, and so methods that prioritize individual gene targets have been developed. The article reviews many of the open source data analysis methods that are now available and which are helping to define a consensus on the best practices to use when analyzing screening data. As data sets become larger and more complex, the need for easily accessible data analysis tools will continue to grow. The presentation of such complex data sets, to facilitate quality control monitoring and interpretation of the results, will require the development of novel visualizations. In addition, advanced statistical and machine learning algorithms that can help identify patterns, correlations and the best features in massive data sets will be required. The ease of use for these tools will be important, as they will need to be used iteratively by laboratory scientists to improve the outcomes of complex analyses.
Linkage maps enable the study of important biological questions. The construction of high-density linkage maps has become more feasible since the advent of next-generation sequencing (NGS), which eases SNP discovery and high-throughput genotyping of large populations. However, the marker number explosion and genotyping errors from NGS data challenge the computational efficiency of linkage study methods and the quality of the resulting maps. Here we report the HighMap method for constructing high-density linkage maps from NGS data. HighMap employs an iterative ordering and error correction strategy based on a k-nearest neighbor algorithm and a Monte Carlo multipoint maximum likelihood algorithm. Simulation studies show HighMap can create a linkage map with three times as many markers as ordering-only methods while offering more accurate marker orders and stable genetic distances. Using HighMap, we constructed a common carp linkage map with 10,004 markers. The singleton rate was less than one-ninth of that generated by JoinMap4.1. Its total map distance was 5,908 cM, consistent with reports on low-density maps. HighMap is an efficient method for constructing high-density, high-quality linkage maps from high-throughput population NGS data. It will facilitate genome assembly, comparative genomic analysis, and QTL studies. HighMap is available at http://highmap.biomarker.com.cn/.
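The k-nearest-neighbor error-correction idea can be sketched as follows: for each marker and individual, compare the genotype call against the calls at the k markers nearest on the map, and replace clear outliers with the neighborhood consensus. The code below is a hypothetical simplification, not HighMap's implementation (which iterates correction with multipoint-likelihood ordering); map distance is approximated here by marker index distance.

```python
from collections import Counter

def knn_correct(genotypes, k=4):
    """Correct likely genotyping errors by k-nearest-marker consensus.

    genotypes: list of marker rows in map order; each row is a list of
    genotype calls per individual ('A', 'B', 'H', or '-' for missing).
    Returns a corrected copy: a call that contradicts a unanimous vote of
    its k nearest flanking markers is replaced by that consensus call.
    """
    n_mark = len(genotypes)
    corrected = [row[:] for row in genotypes]
    for m in range(n_mark):
        # k nearest markers by (index-distance proxy for) map position
        neighbors = sorted((i for i in range(n_mark) if i != m),
                           key=lambda i: abs(i - m))[:k]
        for ind in range(len(genotypes[m])):
            calls = [genotypes[i][ind] for i in neighbors
                     if genotypes[i][ind] != '-']
            if not calls:
                continue
            call, votes = Counter(calls).most_common(1)[0]
            # conservative rule: correct only when all informative
            # neighbors agree and the focal call disagrees
            if votes == len(calls) and genotypes[m][ind] not in (call, '-'):
                corrected[m][ind] = call
    return corrected
```

A lone 'B' flanked by 'A' calls on both sides, the classic NGS singleton that inflates map length, is flipped to 'A', while genuine crossover boundaries (where neighbors disagree) are left untouched.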
Tigabu, Bersabeh; Rasmussen, Lynn; White, E Lucile; Tower, Nichole; Saeed, Mohammad; Bukreyev, Alexander; Rockx, Barry; LeDuc, James W; Noah, James W
Nipah virus is a biosafety level 4 (BSL-4) pathogen that causes severe respiratory illness and encephalitis in humans. To identify novel small molecules that target Nipah virus replication as potential therapeutics, Southern Research Institute and Galveston National Laboratory jointly developed an automated high-throughput screening platform that is capable of testing 10,000 compounds per day within BSL-4 biocontainment. Using this platform, we screened a 10,080-compound library using a cell-based, high-throughput screen for compounds that inhibited the virus-induced cytopathic effect. From this pilot effort, 23 compounds were identified with EC50 values ranging from 3.9 to 20.0 μM and selectivities >10. Three sulfonamide compounds with EC50 values <12 μM were further characterized for their point of intervention in the viral replication cycle and for broad antiviral efficacy. Development of HTS capability under BSL-4 containment changes the paradigm for drug discovery for highly pathogenic agents because this platform can be readily modified to identify prophylactic and postexposure therapeutic candidates against other BSL-4 pathogens, particularly Ebola, Marburg, and Lassa viruses.
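Hit triage in such screens typically reduces to two numbers per compound: an EC50 from the dose-response curve and a selectivity index (SI = CC50/EC50, with SI > 10 used as the cutoff above). The snippet below is an illustrative stand-in that locates the 50% crossing by log-linear interpolation rather than fitting a full four-parameter logistic model, which is what screening software normally does.

```python
import math

def ec50_interp(concs, responses):
    """Estimate EC50 as the concentration where the response crosses 50%.

    concs: ascending concentrations; responses: percent inhibition (0-100).
    Interpolates linearly in log10(concentration) between the bracketing
    points. A crude substitute for a proper logistic fit.
    """
    for (c1, r1), (c2, r2) in zip(zip(concs, responses),
                                  zip(concs[1:], responses[1:])):
        if r1 < 50 <= r2:
            t = (50 - r1) / (r2 - r1)
            lo, hi = math.log10(c1), math.log10(c2)
            return 10 ** (lo + t * (hi - lo))
    return None  # curve never crosses 50% in the tested range

def selectivity_index(cc50, ec50):
    """SI = CC50 / EC50; higher means a wider window between antiviral
    activity and cytotoxicity."""
    return cc50 / ec50
```

A compound with EC50 = 10 uM and CC50 = 200 uM has SI = 20 and would pass the >10 selectivity filter described above.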
Ma, Junshui; Bayram, Sevinç; Tao, Peining; Svetnik, Vladimir
After a review of the ocular artifact reduction literature, a high-throughput method designed to reduce the ocular artifacts in multichannel continuous EEG recordings acquired at clinical EEG laboratories worldwide is proposed. The proposed method belongs to the category of component-based methods, and does not rely on any electrooculography (EOG) signals. Based on a concept that all ocular artifact components exist in a signal component subspace, the method can uniformly handle all types of ocular artifacts, including eye-blinks, saccades, and other eye movements, by automatically identifying ocular components from decomposed signal components. This study also proposes an improved strategy to objectively and quantitatively evaluate artifact reduction methods. The evaluation strategy uses real EEG signals to synthesize realistic simulated datasets with different amounts of ocular artifacts. The simulated datasets enable us to objectively demonstrate that the proposed method outperforms some existing methods when no high-quality EOG signals are available. Moreover, the results of the simulated datasets improve our understanding of the involved signal decomposition algorithms, and provide us with insights into the inconsistency regarding the performance of different methods in the literature. The proposed method was also applied to two independent clinical EEG datasets involving 28 volunteers and over 1000 EEG recordings. This effort further confirms that the proposed method can effectively reduce ocular artifacts in large clinical EEG datasets in a high-throughput fashion. Copyright © 2011 Elsevier B.V. All rights reserved.
Metal organic frameworks (MOFs) have emerged as great alternatives to traditional nanoporous materials for CO2 separation applications. MOFs are porous materials that are formed by self-assembly of transition metals and organic ligands. The most important advantage of MOFs over well-known porous materials is the possibility to generate multiple materials with varying structural properties and chemical functionalities by changing the combination of metal centers and organic linkers during the synthesis. This leads to a large diversity of materials with various pore sizes and shapes that can be efficiently used for CO2 separations. Since the number of synthesized MOFs has already reached several thousand, experimental investigation of each MOF at the lab scale is not practical. High-throughput computational screening of MOFs is a great opportunity to identify the best materials for CO2 separation and to gain molecular-level insights into the structure–performance relationships. This type of knowledge can be used to design new materials with the desired structural features that can lead to extraordinarily high CO2 selectivities. In this mini-review, we focused on developments in high-throughput molecular simulations of MOFs for CO2 separations. After reviewing the current studies on this topic, we discussed the opportunities and challenges in the field and addressed the potential future developments.
Recently, several structural genomics centers have been established and a remarkable number of three-dimensional structures of soluble proteins have been solved. For membrane proteins, the number of structures solved has been significantly trailing those for their soluble counterparts, not least because over-expression and purification of membrane proteins is a much more arduous process. By using high throughput technologies, a large number of membrane protein targets can be screened simultaneously and a greater number of expression and purification conditions can be employed, leading to a higher probability of successfully determining the structure of membrane proteins. This unit describes the cloning, expression and screening of membrane proteins using high throughput methodologies developed in our laboratory. Basic Protocol 1 deals with the cloning of inserts into expression vectors by ligation-independent cloning. Basic Protocol 2 describes the expression and purification of the target proteins on a mini-scale. Lastly, for the targets that express at the mini-scale, Basic Protocols 3 and 4 outline the methods employed for the expression and purification of targets at the midi-scale, as well as a procedure for detergent screening and identification of detergent(s) in which the target protein is stable. PMID:24510647
Lesley Joan Collins
ncRNAs are key genes in many human diseases including cancer and viral infection, as well as providing critical functions in pathogenic organisms such as fungi, bacteria, viruses and protists. Until now, the identification and characterization of ncRNAs associated with disease has been slow or inaccurate, requiring many years of testing to understand complicated RNA and protein gene relationships. High-throughput sequencing now offers the opportunity to characterize miRNAs, siRNAs, snoRNAs and long ncRNAs on a genomic scale, making it faster and easier to clarify how these ncRNAs contribute to the disease state. However, this technology is still relatively new, and ncRNA discovery is not an application of high priority for streamlined bioinformatics. Here we summarize background concepts and practical approaches for ncRNA analysis using high-throughput sequencing, and how it relates to understanding human disease. As a case study, we focus on the parasitic protists Giardia lamblia and Trichomonas vaginalis, where large evolutionary distance has meant difficulties in comparing ncRNAs with those from model eukaryotes. A combination of biological, computational and sequencing approaches has enabled easier classification of ncRNA classes such as snoRNAs, but has also aided the identification of novel classes. It is hoped that a higher level of understanding of ncRNA expression and interaction may aid in the development of less harsh treatment for protist-based diseases.
Andrew Paul Hutchins
Genomic datasets and the tools to analyze them have proliferated at an astonishing rate. However, such tools are often poorly integrated with each other: each program typically produces its own custom output in a variety of non-standard file formats. Here we present glbase, a framework that uses a flexible set of descriptors that can quickly parse non-binary data files. glbase includes many functions to intersect two lists of data, including operations on genomic interval data and support for the efficient random access to huge genomic data files. Many glbase functions can produce graphical outputs, including scatter plots, heatmaps, boxplots and other common analytical displays of high-throughput data such as RNA-seq, ChIP-seq and microarray expression data. glbase is designed to rapidly bring biological data into a Python-based analytical environment to facilitate analysis and data processing. In summary, glbase is a flexible and multifunctional toolkit that allows the combination and analysis of high-throughput data (especially next-generation sequencing and genome-wide data), and which has been instrumental in the analysis of complex data sets. glbase is freely available at http://bitbucket.org/oaxiom/glbase/.
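A representative operation for a toolkit like this is intersecting two lists of genomic intervals, e.g. ChIP-seq peaks against gene promoters. The sketch below is hypothetical and does not reproduce glbase's actual API; it shows the basic sort-and-sweep overlap test that such toolkits wrap (real implementations add indexing for random access into huge files).

```python
def intersect(list_a, list_b):
    """Return pairs of overlapping genomic intervals.

    Intervals are (chrom, start, end) tuples with half-open coordinates.
    list_b is grouped by chromosome and sorted by start so the sweep can
    stop early once no further interval can overlap.
    """
    by_chrom = {}
    for iv in list_b:
        by_chrom.setdefault(iv[0], []).append(iv)
    for ivs in by_chrom.values():
        ivs.sort(key=lambda iv: iv[1])
    hits = []
    for chrom, start, end in list_a:
        for b_chrom, b_start, b_end in by_chrom.get(chrom, []):
            if b_start >= end:
                break  # sorted: every later interval starts even further right
            if b_end > start:
                hits.append(((chrom, start, end), (b_chrom, b_start, b_end)))
    return hits
```

From such overlap pairs, downstream functions can count peaks per feature or feed directly into the plotting routines the abstract describes.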
Ortega-Rivas, Antonio; Padrón, José M; Valladares, Basilio; Elsheikha, Hany M
Despite significant public health impact, there is no specific antiprotozoal therapy for prevention and treatment of Acanthamoeba castellanii infection. There is a need for new and efficient anti-Acanthamoeba drugs that are less toxic and can reduce treatment duration and frequency of administration. In this context a new, rapid and sensitive assay is required for high-throughput activity testing and screening of new therapeutic compounds. A colorimetric assay based on sulforhodamine B (SRB) staining has been developed for anti-Acanthamoeba drug susceptibility testing and adapted to a 96-well microtiter plate format. Under these conditions chlorhexidine was tested to validate the assay using two clinical strains of A. castellanii (Neff strain, T4 genotype [IC50 4.68±0.6 μM] and T3 genotype [IC50 5.69±0.9 μM]). These results were in good agreement with those obtained by the conventional Alamar Blue assay, OCR cytotoxicity assay and manual cell counting method. Our new assay offers an inexpensive and reliable method, which complements current assays by enhancing high-throughput anti-Acanthamoeba drug screening capabilities. Copyright © 2016 Elsevier B.V. All rights reserved.
Shen, Zhong-Hui; Wang, Jian-Jun; Lin, Yuanhua; Nan, Ce-Wen; Chen, Long-Qing; Shen, Yang
Understanding the dielectric breakdown behavior of polymer nanocomposites is crucial to the design of high-energy-density dielectric materials with reliable performances. It is however challenging to predict the breakdown behavior due to the complicated factors involved in this highly nonequilibrium process. In this work, a comprehensive phase-field model is developed to investigate the breakdown behavior of polymer nanocomposites under electrostatic stimuli. It is found that the breakdown strength and path significantly depend on the microstructure of the nanocomposite. The predicted breakdown strengths for polymer nanocomposites with specific microstructures agree with existing experimental measurements. Using this phase-field model, a high-throughput calculation is performed to seek the optimal microstructure. Based on the high-throughput calculation, a sandwich microstructure for PVDF-BaTiO3 nanocomposite is designed, where the upper and lower layers are filled with parallel nanosheets and the middle layer is filled with vertical nanofibers. It has an enhanced energy density of 2.44 times that of the pure PVDF polymer. The present work provides a computational approach for understanding the electrostatic breakdown, and it is expected to stimulate future experimental efforts on synthesizing polymer nanocomposites with novel microstructures to achieve high performances. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
High throughput sequencing (HTS) yields tens of thousands to millions of sequences that require a large amount of pre-processing work to clean various artifacts. Such cleaning cannot be performed manually. Existing programs are not suitable for immunoglobulin (Ig) genes, which are variable and often highly mutated. This paper describes Ig-HTS-Cleaner (Ig High Throughput Sequencing Cleaner), a program containing a simple cleaning procedure that successfully deals with pre-processing of Ig sequences derived from HTS, and Ig-Indel-Identifier (Ig Insertion – Deletion Identifier), a program for identifying legitimate and artifact insertions and/or deletions (indels). Our programs were designed for analyzing Ig gene sequences obtained by 454 sequencing, but they are applicable to all types of sequences and sequencing platforms. Ig-HTS-Cleaner and Ig-Indel-Identifier have been implemented in Java and saved as executable JAR files, supported on Linux and MS Windows. No special requirements are needed in order to run the programs, except for correctly constructing the input files as explained in the text. The programs' performance has been tested and validated on real and simulated data sets.
Adissu, Hibret A; Estabel, Jeanne; Sunter, David; Tuck, Elizabeth; Hooks, Yvette; Carragher, Damian M; Clarke, Kay; Karp, Natasha A; Newbigging, Susan; Jones, Nora; Morikawa, Lily; White, Jacqueline K; McKerlie, Colin
The Mouse Genetics Project (MGP) at the Wellcome Trust Sanger Institute aims to generate and phenotype over 800 genetically modified mouse lines over the next 5 years to gain a better understanding of mammalian gene function and provide an invaluable resource to the scientific community for follow-up studies. Phenotyping includes the generation of a standardized biobank of paraffin-embedded tissues for each mouse line, but histopathology is not routinely performed. In collaboration with the Pathology Core of the Centre for Modeling Human Disease (CMHD) we report the utility of histopathology in a high-throughput primary phenotyping screen. Histopathology was assessed in an unbiased selection of 50 mouse lines with (n=30) or without (n=20) clinical phenotypes detected by the standard MGP primary phenotyping screen. Our findings revealed that histopathology added correlating morphological data in 19 of 30 lines (63.3%) in which the primary screen detected a phenotype. In addition, seven of the 50 lines (14%) presented significant histopathology findings that were not associated with or predicted by the standard primary screen. Three of these seven lines had no clinical phenotype detected by the standard primary screen. Incidental and strain-associated background lesions were present in all mutant lines with good concordance to wild-type controls. These findings demonstrate the complementary and unique contribution of histopathology to high-throughput primary phenotyping of mutant mice.
Yus, Eva; Yang, Jae-Seong; Sogues, Adrià; Serrano, Luis
Quantitative analysis of the sequence determinants of transcription and translation regulation is relevant for systems and synthetic biology. To identify these determinants, researchers have developed different methods of screening random libraries using fluorescent reporters or antibiotic resistance genes. Here, we have implemented a generic approach called ELM-seq (expression level monitoring by DNA methylation) that overcomes the technical limitations of such classic reporters. ELM-seq uses DamID (Escherichia coli DNA adenine methylase as a reporter coupled with methylation-sensitive restriction enzyme digestion and high-throughput sequencing) to enable in vivo quantitative analyses of upstream regulatory sequences. Using the genome-reduced bacterium Mycoplasma pneumoniae, we show that ELM-seq has a large dynamic range and causes minimal toxicity. We use ELM-seq to determine key sequences (known and putatively novel) of promoter and untranslated regions that influence transcription and translation efficiency. Applying ELM-seq to other organisms will help us to further understand gene expression and guide synthetic biology.
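The readout of a Dam-based reporter can be reduced to a per-variant methylation score: the more Dam a regulatory variant drives, the larger the fraction of reporter GATC sites recovered in the methylated state after methylation-sensitive digestion and sequencing. The scoring function below is an illustrative assumption about how such counts might be summarized, not the published ELM-seq pipeline; the pseudocount is a standard smoothing choice for low-count variants.

```python
import math

def elm_score(methylated, unmethylated, pseudocount=1.0):
    """Expression proxy for one upstream-sequence variant.

    methylated / unmethylated: hypothetical read counts supporting the
    methylated vs unmethylated GATC state at the reporter locus for this
    variant. Returns the log2 methylation odds; higher values mean more
    Dam was expressed, i.e. a stronger promoter/5'-UTR.
    """
    return math.log2((methylated + pseudocount) /
                     (unmethylated + pseudocount))
```

Ranking library variants by this score recovers the relative strengths of the regulatory sequences, with the log transform keeping strong and weak variants on a comparable scale.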
Lou, Dianne I.; Hussmann, Jeffrey A.; McBee, Ross M.; Acevedo, Ashley; Andino, Raul; Press, William H.; Sawyer, Sara L.
A major limitation of high-throughput DNA sequencing is the high rate of erroneous base calls produced. For instance, Illumina sequencing machines produce errors at a rate of ~0.1–1 × 10⁻² per base sequenced. These technologies typically produce billions of base calls per experiment, translating to millions of errors. We have developed a unique library preparation strategy, "circle sequencing," which allows for robust downstream computational correction of these errors. In this strategy, DNA templates are circularized, copied multiple times in tandem with a rolling circle polymerase, and then sequenced on any high-throughput sequencing machine. Each read produced is computationally processed to obtain a consensus sequence of all linked copies of the original molecule. Physically linking the copies ensures that each copy is independently derived from the original molecule and allows for efficient formation of consensus sequences. The circle-sequencing protocol precedes standard library preparations and is therefore suitable for a broad range of sequencing applications. We tested our method using the Illumina MiSeq platform and obtained errors in our processed sequencing reads at a rate as low as 7.6 × 10⁻⁶ per base sequenced, dramatically improving the error rate of Illumina sequencing and putting its error rate on par with low-throughput, but highly accurate, Sanger sequencing. Circle sequencing also had substantially higher efficiency and lower cost than existing barcode-based schemes for correcting sequencing errors. PMID:24243955
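The computational half of circle sequencing is, at its core, a per-position majority vote across the tandem copies within each read. The sketch below assumes the copy (unit) length is known and that the read begins exactly at a copy boundary; the real pipeline must infer the unit length and phase from the read itself, so this is a simplified illustration rather than the published method.

```python
from collections import Counter

def rolling_circle_consensus(read, unit_len):
    """Collapse a read of tandem copies into a majority-vote consensus.

    read: the raw base-call string containing repeated copies of one
    circularized template. unit_len: the (assumed known) copy length.
    Trailing partial copies still vote on the positions they cover.
    """
    consensus = []
    for pos in range(unit_len):
        # gather the base called at this template position in every copy
        bases = [read[i] for i in range(pos, len(read), unit_len)]
        consensus.append(Counter(bases).most_common(1)[0][0])
    return "".join(consensus)
```

Because a polymerase or sequencer error appears in only one copy while the true base appears in all the others, the vote suppresses independent errors, which is why the processed error rate falls by orders of magnitude.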
Marko, D; Briglia, N; Summerer, S; Petrozza, A; Cellini, F; Iannacone, R
High-throughput phenotyping has opened whole new perspectives for crop improvement and better understanding of quantitative traits in plants. Generation of loss-of-function and gain-of-function plant mutants requires processing and imaging a large number of plants in order to determine unknown gene functions and phenotypic changes generated by genetic modifications or selection of new traits. The use of phenomics for the evaluation of transgenic lines contributed significantly to the identification of plants more tolerant to biotic/abiotic stresses and, furthermore, helped in the identification of unknown gene functions. In this chapter we describe the high-throughput phenotyping (HTP) platform working in our facility, drawing the general protocol and showing some examples of data obtainable from the platform. Tomato transgenic plants over-expressing the arginine decarboxylase 2 gene, which is involved in the polyamine biosynthetic pathway, were analyzed through our HTP facility for their tolerance to abiotic stress, and significant differences in water content and in the ability to recover after drought stress were highlighted. This demonstrates the applicability of this methodology to the plant polyamine field.