Cregan Perry B
Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) as defined here are single-base sequence changes, or short insertions/deletions, between or within individuals of a given species. As a result of their abundance and the availability of high-throughput analysis technologies, SNP markers have begun to replace other traditional markers such as restriction fragment length polymorphisms (RFLPs), amplified fragment length polymorphisms (AFLPs) and simple sequence repeats (SSRs, or microsatellite markers) for fine mapping and association studies in several species. For SNP discovery from chromatogram data, several bioinformatics programs have to be combined to generate an analysis pipeline. Results have to be stored in a relational database to facilitate interrogation through queries or to generate data for further analyses, such as determination of linkage disequilibrium and identification of common haplotypes. Although these tasks are routinely performed by several groups, an integrated open-source SNP discovery pipeline that can be easily adapted by new groups interested in SNP marker development is currently unavailable. Results We developed SNP-PHAGE (SNP discovery Pipeline with additional features for identification of common haplotypes within a sequence tagged site (Haplotype Analysis) and GenBank (dbSNP) submissions). This tool was applied to sequence traces from diverse soybean genotypes to discover over 10,000 SNPs. The package was developed on a UNIX/Linux platform, is written in Perl and uses a MySQL database. Scripts to generate a user-friendly web interface are also provided, with common queries for preliminary data analysis. A machine learning tool developed by this group for increasing the efficiency of SNP discovery is integrated into the package as an optional feature. The SNP-PHAGE package is being made available open source at http://bfgl.anri.barc.usda.gov/ML/snp-phage/. Conclusion SNP-PHAGE provides a bioinformatics
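The core SNP-calling step such a pipeline performs can be illustrated with a minimal sketch. The sequences below are hypothetical; the real SNP-PHAGE pipeline works from quality-trimmed chromatogram traces with a MySQL backend, not plain strings:

```python
# Minimal sketch of SNP detection across aligned sequences from several
# genotypes: an alignment column is a candidate SNP when more than one
# base occurs there. Hypothetical sequences for illustration only.
def find_snps(aligned):
    """Return (position, alleles) for each polymorphic alignment column."""
    snps = []
    for pos in range(len(aligned[0])):
        bases = {seq[pos] for seq in aligned} - {"-"}  # ignore gap characters
        if len(bases) > 1:
            snps.append((pos, sorted(bases)))
    return snps

genotypes = ["ACGTACGT",
             "ACGAACGT",
             "ACGTACCT"]
print(find_snps(genotypes))  # [(3, ['A', 'T']), (6, ['C', 'G'])]
```

A production pipeline would additionally filter candidates by base-quality scores and read depth before loading them into the database.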
Willumsen, Niels J; Bech, Morten; Olesen, Søren-Peter
Proper function of ion channels is crucial for all living cells. Ion channel dysfunction may lead to a number of diseases, so-called channelopathies, and a number of common diseases, including epilepsy, arrhythmia, and type II diabetes, are primarily treated by drugs that modulate ion channels....... A cornerstone in current drug discovery is high-throughput screening assays, which allow examination of the activity of specific ion channels, though only to a limited extent. Conventional patch clamp remains the sole technique with sufficiently high time resolution and sensitivity for the precise and direct...... characterization of ion channel properties. However, patch clamp is a slow, labor-intensive, and thus expensive, technique. New techniques combining the reliability and high information content of patch clamping with the virtues of the high-throughput philosophy are emerging and are predicted to make a number of ion...
Mitrovic, Slobodan; Becerra, Natalie; Cornell, Earl; Guevarra, Dan; Haber, Joel; Jin, Jian; Jones, Ryan; Kan, Kevin; Marcin, Martin; Newhouse, Paul; Soedarmadji, Edwin; Suram, Santosh; Xiang, Chengxiang; Gregoire, John; High-Throughput Experimentation Team
In this talk I will present the status of the High-Throughput Experimentation (HTE) project of the Joint Center for Artificial Photosynthesis (JCAP). JCAP is an Energy Innovation Hub of the U.S. Department of Energy with a mandate to deliver a solar fuel generator based on an integrated photoelectrochemical cell (PEC). However, efficient and commercially viable catalysts and light absorbers for the PEC do not yet exist. The mission of HTE is to provide accelerated discovery through combinatorial synthesis and rapid screening of material properties. The HTE pipeline also features high-throughput material characterization using x-ray diffraction and x-ray photoemission spectroscopy (XPS). I will present the currently operating pipeline and focus on our combinatorial XPS efforts to build the largest free database of spectra from mixed-metal oxides, nitrides, sulfides and alloys. This work was performed at the Joint Center for Artificial Photosynthesis, a DOE Energy Innovation Hub, supported through the Office of Science of the U.S. Department of Energy under Award No. DE-SC0004993.
Jones, Neil Christopher
High throughput data acquisition technology has inarguably transformed the landscape of the life sciences, in part by making possible---and necessary---the computational disciplines of bioinformatics and biomedical informatics. These fields focus primarily on developing tools for analyzing data and generating hypotheses about objects in nature, and it is in this context that we address three pressing problems in the fields of the computational life sciences which each require computing capaci...
Thomson, S.; Hoffmann, C.; Johann, T.; Wolf, A.; Schmidt, H.-W.; Farrusseng, D.; Schueth, F.
Full text: The use of combinatorial chemistry to obtain new materials has been developed extensively by the pharmaceutical and biochemical industries, but such approaches have been slow to impact the field of heterogeneous catalysis. The reasons for this lie in the difficulties associated with the synthesis, characterisation and determination of the catalytic properties of such materials. In many synthetic and catalytic reactions, the conditions used are difficult to emulate using High Throughput Experimentation (HTE). Furthermore, the ability to screen these catalysts simultaneously in real time requires the development and/or modification of characterisation methods. Clearly, there is a need for both high-throughput synthesis and screening of new and novel reactions, and we describe several new concepts that help to achieve these goals. Although such problems have impeded the development of combinatorial catalysis, the fact remains that many highly attractive processes still exist for which no suitable catalysts have been developed. The ability to decrease the time needed to evaluate catalysts is therefore essential, and this makes the use of high-throughput techniques highly desirable. In this presentation we will describe the synthesis, catalytic testing, and novel screening methods developed at the Max Planck Institute. Automated synthesis procedures, performed with a modified Gilson pipette robot, will be described, as will the development of 16- and 49-sample fixed-bed reactors and 25- and 29-sample three-phase reactors for catalytic testing. We will also present new techniques for the characterisation of catalysts and catalytic products using standard IR microscopy and infrared focal plane array detection, respectively.
Rohman, Mattias; Wingfield, Jonathan
In order to detect a biochemical analyte with a mass spectrometer (MS) it is necessary to ionize the analyte of interest. The analyte can be ionized by a number of different mechanisms; one common method is electrospray ionization (ESI). Droplets of analyte are sprayed through a highly charged field, the droplets pick up charge, and this charge is transferred to the analyte. High levels of salt in the assay buffer can steal charge from the analyte and suppress the MS signal. To avoid this suppression, salt is often removed from the sample prior to injection into the MS. Traditional ESI MS relies on liquid chromatography (LC) to remove the salt and reduce matrix effects; however, this is a lengthy process. Here we describe the use of RapidFire™ coupled to a triple-quadrupole MS for high-throughput screening. This system uses solid-phase extraction to de-salt samples prior to injection, reducing processing time such that a sample is injected into the MS approximately every 10 s.
Simm, Jaak; Klambauer, Günter; Arany, Adam; Steijaert, Marvin; Wegner, Jörg Kurt; Gustin, Emmanuel; Chupakhin, Vladimir; Chong, Yolanda T; Vialard, Jorge; Buijnsters, Peter; Velter, Ingrid; Vapirev, Alexander; Singh, Shantanu; Carpenter, Anne E; Wuyts, Roel; Hochreiter, Sepp; Moreau, Yves; Ceulemans, Hugo
In both academia and the pharmaceutical industry, large-scale assays for drug discovery are expensive and often impractical, particularly for the increasingly important physiologically relevant model systems that require primary cells, organoids, whole organisms, or expensive or rare reagents. We hypothesized that data from a single high-throughput imaging assay can be repurposed to predict the biological activity of compounds in other assays, even those targeting alternate pathways or biological processes. Indeed, quantitative information extracted from a three-channel microscopy-based screen for glucocorticoid receptor translocation was able to predict assay-specific biological activity in two ongoing drug discovery projects. In these projects, repurposing increased hit rates by 50- to 250-fold over that of the initial project assays while increasing the chemical structure diversity of the hits. Our results suggest that data from high-content screens are a rich source of information that can be used to predict and replace customized biological assays. Copyright © 2018 Elsevier Ltd. All rights reserved.
Shou, Wilson Z; Zhang, Jun
Bioanalysis with LC-MS/MS has been established as the method of choice for quantitative determination of drug candidates in biological matrices in drug discovery and development. LC-MS/MS bioanalytical support for drug discovery, especially early discovery, often requires high-throughput (HT) analysis of large numbers of samples (hundreds to thousands per day) generated from many structurally diverse compounds (tens to hundreds per day) with a very quick turnaround time, in order to provide important activity and liability data to move discovery projects forward. Another important consideration for discovery bioanalysis is its fit-for-purpose quality requirement, which depends on the particular experiments being conducted at this stage and is usually not as stringent as that required in bioanalysis supporting drug development. These attributes make HT discovery bioanalysis an ideal candidate for software and automation tools that eliminate manual steps, remove bottlenecks, improve efficiency and reduce turnaround time while maintaining adequate quality. In this article we review various recent developments that facilitate automation of individual bioanalytical procedures, such as sample preparation, MS/MS method development, sample analysis and data review, as well as fully integrated software tools that manage the entire bioanalytical workflow in HT discovery bioanalysis. In addition, software tools supporting the emerging high-resolution accurate-mass MS bioanalytical approach are also discussed.
Li, Bin; Kaye, Steven S; Riley, Conor; Greenberg, Doron; Galang, Daniel; Bailey, Mark S
The lack of a high capacity hydrogen storage material is a major barrier to the implementation of the hydrogen economy. To accelerate discovery of such materials, we have developed a high-throughput workflow for screening of hydrogen storage materials in which candidate materials are synthesized and characterized via highly parallel ball mills and volumetric gas sorption instruments, respectively. The workflow was used to identify mixed imides with significantly enhanced absorption rates relative to Li2Mg(NH)2. The most promising material, 2LiNH2:MgH2 + 5 atom % LiBH4 + 0.5 atom % La, exhibits the best balance of absorption rate, capacity, and cycle-life, absorbing >4 wt % H2 in 1 h at 120 °C after 11 absorption-desorption cycles.
Joslin, John; Gilligan, James; Anderson, Paul; Garcia, Catherine; Sharif, Orzala; Hampton, Janice; Cohen, Steven; King, Miranda; Zhou, Bin; Jiang, Shumei; Trussell, Christopher; Dunn, Robert; Fathman, John W; Snead, Jennifer L; Boitano, Anthony E; Nguyen, Tommy; Conner, Michael; Cooke, Mike; Harris, Jennifer; Ainscow, Ed; Zhou, Yingyao; Shaw, Chris; Sipes, Dan; Mainquist, James; Lesley, Scott
The goal of high-throughput screening is to enable screening of compound libraries in an automated manner to identify quality starting points for optimization. This often involves screening a large diversity of compounds in an assay that preserves a connection to the disease pathology. Phenotypic screening is a powerful tool for drug identification, in that assays can be run without prior understanding of the target and with primary cells that closely mimic the therapeutic setting. Advanced automation and high-content imaging have enabled many complex assays, but these are still relatively slow and low throughput. To address this limitation, we have developed an automated workflow that is dedicated to processing complex phenotypic assays for flow cytometry. The system can achieve a throughput of 50,000 wells per day, resulting in a fully automated platform that enables robust phenotypic drug discovery. Over the past 5 years, this screening system has been used for a variety of drug discovery programs, across many disease areas, with many molecules advancing quickly into preclinical development and into the clinic. This report will highlight a diversity of approaches that automated flow cytometry has enabled for phenotypic drug discovery.
Gintjee, Thomas J J; Magh, Alvin S H; Bertoni, Carmen
Centers for the screening of biologically active compounds and genomic libraries are becoming common in the academic setting, and they have given researchers devoted to developing strategies for the treatment of disease, or interested in studying a biological phenomenon, unprecedented access to libraries that, until a few years ago, were accessible only to pharmaceutical companies. As a result, new drugs and genetic targets have now been identified for the treatment of Duchenne muscular dystrophy (DMD), the most prominent of the neuromuscular disorders affecting children. Although the work is still at an early stage, the results obtained to date are encouraging and demonstrate the importance that these centers may have in advancing therapeutic strategies for DMD as well as other diseases. This review summarizes the status of, and progress made toward, the development of a cure for this disorder using high-throughput screening (HTS) technologies as the main source of discovery. As more academic institutions gain access to HTS as a valuable discovery tool, the identification of new biologically active molecules is likely to grow. In addition, the presence in the academic setting of experts in different aspects of the disease will offer the opportunity to develop novel assays capable of identifying new targets to be pursued as potential therapeutic options. These assays will represent an excellent resource for pharmaceutical companies screening larger libraries, providing the opportunity to establish strong collaborations between the private and academic sectors and maximizing the chances of bringing new drugs for the treatment of DMD into the clinic.
Ufarté, Lisa; Bozonnet, Sophie; Laville, Elisabeth; Cecchini, Davide A; Pizzut-Serin, Sandra; Jacquiod, Samuel; Demanèche, Sandrine; Simonet, Pascal; Franqueville, Laure; Veronese, Gabrielle Potocki
Activity-based metagenomics is one of the most efficient approaches to boost the discovery of novel biocatalysts from the huge reservoir of uncultivated bacteria. In this chapter, we describe a highly generic procedure of metagenomic library construction and high-throughput screening for carbohydrate-active enzymes. Applicable to any bacterial ecosystem, it enables the swift identification of functional enzymes that are highly efficient, alone or acting in synergy, to break down polysaccharides and oligosaccharides.
Blanca, José; Cañizares, Joaquín; Roig, Cristina; Ziarsolo, Pello; Nuez, Fernando; Picó, Belén
Cucurbita pepo belongs to the Cucurbitaceae family. The "Zucchini" types rank among the highest-valued vegetables worldwide, and other C. pepo and related Cucurbita spp. are food staples and rich sources of fat and vitamins. A broad range of genomic tools are today available for other cucurbits, which have become models for the study of different metabolic processes. However, these tools are still lacking in the Cucurbita genus, thus limiting gene discovery and breeding. We report the generation of a total of 512,751 C. pepo EST sequences, using 454 GS FLX Titanium technology. ESTs were obtained from normalized cDNA libraries (root, leaf, and flower tissue) prepared using two varieties with contrasting phenotypes for plant, flowering and fruit traits, representing the two C. pepo subspecies: subsp. pepo cv. Zucchini and subsp. ovifera cv. Scallop. De novo assembly was performed to generate a collection of 49,610 Cucurbita unigenes (average length 626 bp) that represent the first transcriptome of the species. Over 60% of the unigenes were functionally annotated and assigned to one or more Gene Ontology terms. The distributions of Cucurbita unigenes followed tendencies similar to those reported for Arabidopsis or melon, suggesting that the dataset may represent the whole Cucurbita transcriptome. About 34% of the unigenes were found to have known orthologs in Arabidopsis or melon, including genes potentially involved in disease resistance, flowering and fruit quality. Furthermore, a set of 1,882 unigenes with SSR motifs and 9,043 high-confidence SNPs between Zucchini and Scallop were identified, of which 3,538 SNPs met criteria for use with high-throughput genotyping platforms, and 144 could be detected as CAPS. A set of markers was validated, 80% of which were polymorphic in a set of variable C. pepo and C. moschata accessions. We present the first broad survey of gene sequences and allelic variation in C. pepo, where limited prior genomic
Pistachio (Pistacia vera L.) trees from the National Clonal Germplasm Repository (NCGR) and orchards in California were surveyed for viruses and virus-like agents by high-throughput sequencing (HTS). Analyses of 60 trees including clonal UCB-1 hybrid rootstock (P. atlantica × P. integerrima) identif...
.... The major focus thus far has been the implementation of a reliable and robust high-throughput screen for blockers specific to BoNT, using Neuro2A cells in which BoNTA forms channels with properties similar to those previously characterized in lipid bilayers. The immediate task during the present reporting period involved the detailed characterization of the channel and chaperone activity of BoNTA on Neuro2A cells.
Asmild, Margit; Oswald, Nicholas; Krzywkowski, Karen M
by developing two lines of automated patch clamp products: a traditional pipette-based system called Apatchi-1, and a silicon chip-based system, QPatch. The degree of automation spans from semi-automation (Apatchi-1), where a trained technician interacts with the system in a limited way, to complete automation...... (QPatch 96), where the system works continuously and unattended until screening of a full compound library is completed. The performance of the systems ranges from medium to high throughput....
Microscale High-Throughput Experimentation as an Enabling Technology in Drug Discovery: Application in the Discovery of (Piperidinyl)pyridinyl-1H-benzimidazole Diacylglycerol Acyltransferase 1 Inhibitors.
Cernak, Tim; Gesmundo, Nathan J; Dykstra, Kevin; Yu, Yang; Wu, Zhicai; Shi, Zhi-Cai; Vachal, Petr; Sperbeck, Donald; He, Shuwen; Murphy, Beth Ann; Sonatore, Lisa; Williams, Steven; Madeira, Maria; Verras, Andreas; Reiter, Maud; Lee, Claire Heechoon; Cuff, James; Sherer, Edward C; Kuethe, Jeffrey; Goble, Stephen; Perrotto, Nicholas; Pinto, Shirly; Shen, Dong-Ming; Nargund, Ravi; Balkovec, James; DeVita, Robert J; Dreher, Spencer D
Miniaturization and parallel processing play an important role in the evolution of many technologies. We demonstrate the application of miniaturized high-throughput experimentation methods to resolve synthetic chemistry challenges on the frontlines of a lead optimization effort to develop diacylglycerol acyltransferase (DGAT1) inhibitors. Reactions were performed on ∼1 mg scale using glass microvials, providing a miniaturized high-throughput experimentation capability that was used to study a challenging SNAr reaction. The availability of robust synthetic chemistry conditions discovered in these miniaturized investigations enabled the development of structure-activity relationships that ultimately led to the discovery of soluble, selective, and potent inhibitors of DGAT1.
Dhara A Patel
Full Text Available Most current strategies for antiviral therapeutics target the virus specifically and directly, but an alternative approach to drug discovery might be to enhance the immune response to a broad range of viruses. Based on clinical observations in humans and successful genetic strategies in experimental models, we reasoned that an improved interferon (IFN) signaling system might better protect against viral infection. Here we aimed to identify small-molecular-weight compounds that might mimic this beneficial effect and improve antiviral defense. Accordingly, we developed a cell-based high-throughput screening (HTS) assay to identify small molecules that enhance IFN signaling pathway components. The assay is based on a phenotypic screen for increased IFN-stimulated response element (ISRE) activity in a fully automated and robust format (Z' > 0.7). Application of this assay system to a library of 2240 compounds (including 2160 already approved or approvable drugs) led to the identification of 64 compounds with significant ISRE activity. From these, we chose the anthracycline antibiotic idarubicin for further validation and mechanistic study, based on its activity in the sub-µM range. We found that the action of idarubicin in increasing ISRE activity was shared by other members of this drug class and was independent of cytotoxic or topoisomerase-inhibitory effects, as well as of endogenous IFN signaling or production. We also observed that this compound conferred a consequent increase in IFN-stimulated gene (ISG) expression and a significant antiviral effect over a similar dose range in a cell-culture system inoculated with encephalomyocarditis virus (EMCV). The antiviral effect was also found at compound concentrations below those observed for cytotoxicity. Taken together, our results provide proof of concept for using activators of components of the IFN signaling pathway to improve IFN efficacy and antiviral immune defense, as well as a validated HTS approach to identify
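The Z'-factor quoted above (Z' > 0.7) is the standard plate-quality statistic for HTS assays. A minimal sketch of its computation, using hypothetical control-well readings, is:

```python
# Z'-factor: separation between positive and negative control wells.
# Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
# Z' > 0.5 is conventionally a robust screen; the assay above reports > 0.7.
import statistics

def z_prime(pos, neg):
    """Compute the Z'-factor from positive/negative control readings."""
    sd_p, sd_n = statistics.stdev(pos), statistics.stdev(neg)
    mu_p, mu_n = statistics.mean(pos), statistics.mean(neg)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Hypothetical fluorescence readings from one plate's control columns
pos_wells = [980, 1010, 995, 1005, 990]
neg_wells = [110, 95, 105, 100, 102]
print(round(z_prime(pos_wells, neg_wells), 3))  # 0.941
```

Wide control separation with tight scatter drives Z' toward 1; overlapping controls push it toward 0 or below, flagging an unreliable plate.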
Thornburg, Christopher C; Britt, John R; Evans, Jason R; Akee, Rhone K; Whitt, James A; Trinh, Spencer K; Harris, Matthew J; Thompson, Jerell R; Ewing, Teresa L; Shipley, Suzanne M; Grothaus, Paul G; Newman, David J; Schneider, Joel P; Grkovic, Tanja; O'Keefe, Barry R
The US National Cancer Institute's (NCI) Natural Product Repository is one of the world's largest and most diverse collections of natural products, containing over 230,000 unique extracts derived from plant, marine, and microbial organisms collected from biodiverse regions throughout the world. Importantly, this national resource is available to the research community for the screening of extracts and the isolation of bioactive natural products. However, despite the success of natural products in drug discovery, compatibility issues that make extracts challenging for liquid-handling systems, extended timelines that complicate natural product-based drug discovery efforts, and the presence of pan-assay interfering compounds have reduced enthusiasm for the high-throughput screening (HTS) of crude natural product extract libraries in targeted assay systems. To address these limitations, the NCI Program for Natural Product Discovery (NPNPD), a newly launched national program to advance natural product discovery technologies and facilitate the discovery of structurally defined, validated lead molecules ready for translation, will create a prefractionated library from over 125,000 natural product extracts, with the aim of producing a publicly accessible, HTS-amenable library of >1,000,000 fractions. This library, representing perhaps the largest accumulation of natural product-based fractions in the world, will be made available free of charge in 384-well plates for screening against all disease states, in an effort to reinvigorate natural product-based drug discovery.
Martins da Silva, Sarah J; Brown, Sean G; Sutton, Keith; King, Louise V; Ruso, Halil; Gray, David W; Wyatt, Paul G; Kelly, Mark C; Barratt, Christopher L R; Hope, Anthony G
Can pharma drug discovery approaches be utilized to transform the investigation of novel therapeutics for male infertility? High-throughput screening (HTS) is a viable approach to much-needed drug discovery for male factor infertility. There is both huge demand and a genuine clinical need for new treatment options for infertile men. However, the time, effort and resources required for drug discovery are currently exorbitant, owing to the unique cellular, physical and functional properties of human spermatozoa and the lack of an appropriate assay platform. Spermatozoa were obtained from healthy volunteer research donors and subfertile patients undergoing IVF/ICSI at a hospital-assisted reproductive techniques clinic between January 2012 and November 2016. An HTS assay was developed and validated using intracellular calcium ([Ca2+]i) as a surrogate for motility in human spermatozoa. Calcium fluorescence was detected using a Flexstation microplate reader (384-well platform) and compared with responses evoked by progesterone, a compound known to modify a number of biologically relevant behaviours in human spermatozoa. Hit compounds identified in a single-point drug screen (10 μM) of an ion channel-focussed library assembled by the University of Dundee Drug Discovery Unit were rescreened to confirm potency using standard 10-point half-logarithm concentration curves, and tested for purity and integrity using liquid chromatography and mass spectrometry. Hit compounds were grouped by structure-activity relationships, and five representative compounds were then further investigated for direct effects on spermatozoa using computer-assisted sperm assessment, a sperm penetration assay and whole-cell patch clamping. Of the 3242 ion channel library ligands screened, 384 compounds (11.8%) elicited a statistically significant increase in calcium fluorescence, greater than 3× the median absolute deviation above baseline. Seventy-four compounds eliciting ≥50% increase
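The hit threshold this screen applies (signal more than 3× the median absolute deviation above baseline) can be sketched as follows; the well values are hypothetical:

```python
# Robust hit calling: a well is a "hit" when its signal exceeds the
# baseline median by more than 3x the median absolute deviation (MAD),
# matching the 3x-MAD criterion of the screen. Values are illustrative.
import statistics

def mad(values):
    """Median absolute deviation from the median."""
    med = statistics.median(values)
    return statistics.median(abs(v - med) for v in values)

def call_hits(signals, baseline):
    """Indices of wells whose signal clears median(baseline) + 3 * MAD."""
    threshold = statistics.median(baseline) + 3 * mad(baseline)
    return [i for i, s in enumerate(signals) if s > threshold]

baseline = [100, 102, 98, 101, 99, 100, 103, 97]   # e.g. vehicle-control wells
signals = [101, 140, 99, 250, 104]                 # compound wells
print(call_hits(signals, baseline))  # [1, 3]
```

The MAD is preferred over the standard deviation here because a handful of strongly active wells would otherwise inflate the spread estimate and mask genuine hits.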
Wu, Shihua; Yang, Lu; Gao, Yuan; Liu, Xiaoyue; Liu, Feiyan
A multi-channel counter-current chromatography (CCC) method has been designed and fabricated for the high-throughput fractionation of natural products without complications sometimes encountered with other conventional chromatographic systems, such as irreversible adsorptive constituent losses and deactivation, tailing of solute peaks, and contamination. It has multiple independent CCC channels, each of which connects independent separation column(s) by parallel flow tubes; thus the multi-channel CCC apparatus can run two or more independent chromatographic processes simultaneously. Furthermore, a high-throughput CCC fractionation method for natural products has been developed by combining a new three-channel CCC apparatus with conventional parallel chromatographic devices, including pumps, sample injectors, effluent detectors and collectors. Its performance has been demonstrated on the fractionation of ethyl acetate extracts of three natural materials, Solidago canadensis, Suillus placidus, and Trichosanthes kirilowii, which were found to be potently cytotoxic to tumor cell lines in the course of screening for antitumor candidates. By combining biological screening with preparative high-performance liquid chromatography (HPLC) purification, 22.8 mg of 6β-angeloyloxykolavenic acid and 29.4 mg of 6β-tigloyloxykolavenic acid from S. canadensis, 25.3 mg of suillin from S. placidus, and 6.8 mg of 23,24-dihydrocucurbitacin B from T. kirilowii were isolated as the major cytotoxic principles from each 1000 mg of crude ethyl acetate extract. Their chemical structures were characterized by electrospray ionization mass spectrometry and one- and two-dimensional nuclear magnetic resonance. The overall results indicate that multi-channel CCC is very useful for the high-throughput fractionation of natural products for drug discovery, in spite of the solvent-balancing requirement and the lower resolution of the shorter CCC columns.
Gedye, Craig A; Hussain, Ali; Paterson, Joshua; Smrke, Alannah; Saini, Harleen; Sirskyj, Danylo; Pereira, Keira; Lobo, Nazleen; Stewart, Jocelyn; Go, Christopher; Ho, Jenny; Medrano, Mauricio; Hyatt, Elzbieta; Yuan, Julie; Lauriault, Stevan; Meyer, Mona; Kondratyev, Maria; van den Beucken, Twan; Jewett, Michael; Dirks, Peter; Guidos, Cynthia J; Danska, Jayne; Wang, Jean; Wouters, Bradly; Neel, Benjamin; Rottapel, Robert; Ailles, Laurie E
Cell surface proteins have a wide range of biological functions, and are often used as lineage-specific markers. Antibodies that recognize cell surface antigens are widely used as research tools, diagnostic markers, and even therapeutic agents. The ability to obtain broad cell surface protein profiles would thus be of great value in a wide range of fields. There are however currently few available methods for high-throughput analysis of large numbers of cell surface proteins. We describe here a high-throughput flow cytometry (HT-FC) platform for rapid analysis of 363 cell surface antigens. Here we demonstrate that HT-FC provides reproducible results, and use the platform to identify cell surface antigens that are influenced by common cell preparation methods. We show that multiple populations within complex samples such as primary tumors can be simultaneously analyzed by co-staining of cells with lineage-specific antibodies, allowing unprecedented depth of analysis of heterogeneous cell populations. Furthermore, standard informatics methods can be used to visualize, cluster and downsample HT-FC data to reveal novel signatures and biomarkers. We show that the cell surface profile provides sufficient molecular information to classify samples from different cancers and tissue types into biologically relevant clusters using unsupervised hierarchical clustering. Finally, we describe the identification of a candidate lineage marker and its subsequent validation. In summary, HT-FC combines the advantages of a high-throughput screen with a detection method that is sensitive, quantitative, highly reproducible, and allows in-depth analysis of heterogeneous samples. The use of commercially available antibodies means that high quality reagents are immediately available for follow-up studies. HT-FC has a wide range of applications, including biomarker discovery, molecular classification of cancers, or identification of novel lineage specific or stem cell markers.
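The unsupervised hierarchical clustering step described above can be sketched in a few lines. This is a minimal, self-contained illustration (plain single-linkage agglomeration over hypothetical antigen-intensity profiles), not the informatics pipeline actually used in the study:

```python
import math

def distance(a, b):
    # Euclidean distance between two marker-intensity profiles
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(profiles, n_clusters):
    # Start with each sample in its own cluster, then repeatedly merge the
    # closest pair of clusters (minimum pairwise sample distance).
    clusters = [[i] for i in range(len(profiles))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(distance(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return [sorted(c) for c in clusters]

# Hypothetical intensities for 4 samples across 3 surface antigens
profiles = [
    [9.0, 1.0, 0.5],   # tumor-like
    [8.5, 1.2, 0.4],   # tumor-like
    [0.3, 7.9, 6.0],   # stromal-like
    [0.2, 8.1, 6.2],   # stromal-like
]
print(single_linkage(profiles, 2))  # -> [[0, 1], [2, 3]]
```

Real HT-FC analyses operate on hundreds of antigens and typically use established tools (e.g. `scipy.cluster.hierarchy`); the toy profiles and cluster count here are invented for illustration.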
Buongiorno Nardelli, Marco
High-Throughput Quantum-Mechanics computation of materials properties by ab initio methods has become the foundation of an effective approach to materials design, discovery and characterization. This data-driven approach to materials science currently presents the most promising path to the development of advanced technological materials that could solve or mitigate important social and economic challenges of the 21st century. In particular, the rapid proliferation of computational data on materials properties presents the possibility to complement and extend materials property databases where the experimental data is lacking and difficult to obtain. Enhanced repositories such as AFLOWLIB open novel opportunities for structure discovery and optimization, including uncovering of unsuspected compounds, metastable structures and correlations between various properties. The practical realization of these opportunities depends almost exclusively on the design of efficient algorithms for electronic structure simulations of realistic material systems beyond the limitations of the current standard theories. In this talk, I will review recent progress in theoretical and computational tools, and in particular, discuss the development and validation of novel functionals within Density Functional Theory and of local basis representations for effective ab-initio tight-binding schemes. Marco Buongiorno Nardelli is a pioneer in the development of computational platforms for theory/data/applications integration rooted in his profound and extensive expertise in the design of electronic structure codes and in his vision for sustainable and innovative software development for high-performance materials simulations. His research activities range from the design and discovery of novel materials for 21st century applications in renewable energy, environment, nano-electronics and devices, the development of advanced electronic structure theories and high-throughput techniques in
Treatment of tuberculosis, like other infectious diseases, is increasingly hindered by the emergence of drug resistance. Drug discovery efforts would be facilitated by facile screening tools that incorporate the complexities of human disease. Mycobacterium marinum-infected zebrafish larvae recapitulate key aspects of tuberculosis pathogenesis and drug treatment. Here, we develop a model for rapid in vivo drug screening using fluorescence-based methods for serial quantitative assessment of drug efficacy and toxicity. We provide proof-of-concept that both traditional bacterial-targeting antitubercular drugs and newly identified host-targeting drugs would be discovered through the use of this model. We demonstrate the model’s utility for the identification of synergistic combinations of antibacterial drugs and demonstrate synergy between bacterial- and host-targeting compounds. Thus, the platform can be used to identify new antibacterial agents and entirely new classes of drugs that thwart infection by targeting host pathways. The methods developed here should be widely applicable to small-molecule screens for other infectious and noninfectious diseases.
Mordwinkin, Nicholas M; Burridge, Paul W; Wu, Joseph C
Drug attrition rates have increased in past years, resulting in growing costs for the pharmaceutical industry and consumers. The reasons for this include the lack of in vitro models that correlate with clinical results and poor preclinical toxicity screening assays. The in vitro production of human cardiac progenitor cells and cardiomyocytes from human pluripotent stem cells provides an amenable source of cells for applications in drug discovery, disease modeling, regenerative medicine, and cardiotoxicity screening. In addition, the ability to derive human-induced pluripotent stem cells from somatic tissues, combined with current high-throughput screening and pharmacogenomics, may help realize the use of these cells to fulfill the potential of personalized medicine. In this review, we discuss the use of pluripotent stem cell-derived cardiomyocytes for drug discovery and cardiotoxicity screening, as well as current hurdles that must be overcome for wider clinical applications of this promising approach.
Heinle, Lance; Peterkin, Vincent; de Morais, Sonia M; Jenkins, Gary J; Badagnani, Ilaria
A high throughput, semi-automated clearance screening assay in hepatocytes was developed allowing a scientist to generate data for 96 compounds in one week. The 384-well format assay utilizes a Thermo Multidrop Combi and an optimized LC-MS/MS method. The previously reported LC-MS/MS method reduced the analytical run time by 3-fold, down to 1.2 min injection-to-injection. The Multidrop was able to deliver hepatocytes to 384-well plates with minimal viability loss. Comparison of results from the new 384-well and historical 24-well assays yielded a correlation of 0.95. In addition, results obtained for 25 marketed drugs with various metabolism pathways had a correlation of 0.75 when compared with literature values. Precision was maintained in the new format as 8 compounds tested in ≥39 independent experiments had coefficients of variation ≤21%. The ability to predict in vivo clearances using the new stability assay format was also investigated using 22 marketed drugs and 26 AbbVie compounds. Correction of intrinsic clearance values with binding to hepatocytes (in vitro data) and plasma (in vivo data) resulted in a higher in vitro to in vivo correlation when comparing 22 marketed compounds in human (0.80 vs 0.35) and 26 AbbVie Discovery compounds in rat (0.56 vs 0.17), demonstrating the importance of correcting for binding in clearance studies. This newly developed high throughput, semi-automated clearance assay allows for rapid screening of Discovery compounds to enable Structure Activity Relationship (SAR) analysis based on high quality hepatocyte stability data in sufficient quantity and quality to drive the next round of compound synthesis.
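The binding correction described above is, in the standard well-stirred liver model, a one-line computation. A hedged sketch follows; the drug numbers and the hepatic blood flow default are illustrative values, not figures from the paper:

```python
def predicted_hepatic_clearance(cl_int, fu_inc, fu_p, q_h=20.7):
    """Well-stirred model prediction of in vivo hepatic clearance.

    cl_int : intrinsic clearance from the hepatocyte assay (mL/min/kg,
             already scaled up from per-million-cells units)
    fu_inc : fraction unbound in the hepatocyte incubation
    fu_p   : fraction unbound in plasma
    q_h    : hepatic blood flow (mL/min/kg; 20.7 is a common human value)
    """
    cl_int_u = cl_int / fu_inc          # correct for binding in the incubation
    return q_h * fu_p * cl_int_u / (q_h + fu_p * cl_int_u)

# Hypothetical compound: moderate turnover, 50% unbound in the incubation,
# 10% unbound in plasma -> prediction is bounded by hepatic blood flow.
print(round(predicted_hepatic_clearance(50.0, 0.5, 0.1), 2))
```

Correcting the intrinsic clearance for incubation and plasma binding before scaling, as in this sketch, is what produced the improved in vitro-in vivo correlations reported above.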
Lorenz, Daniel A; Song, James M; Garner, Amanda L
MicroRNAs (miRNA) play critical roles in human development and disease. As such, the targeting of miRNAs is considered attractive as a novel therapeutic strategy. A major bottleneck toward this goal, however, has been the identification of small molecule probes that are specific for select RNAs and methods that will facilitate such discovery efforts. Using pre-microRNAs as proof-of-concept, herein we report a conceptually new and innovative approach for assaying RNA-small molecule interactions. Through this platform assay technology, which we term catalytic enzyme-linked click chemistry assay or cat-ELCCA, we have designed a method that can be implemented in high throughput, is virtually free of false readouts, and is general for all nucleic acids. Through cat-ELCCA, we envision the discovery of selective small molecule ligands for disease-relevant miRNAs to promote the field of RNA-targeted drug discovery and further our understanding of the role of miRNAs in cellular biology.
...efficiency of drug discovery and make a potential impact on modern pharmaceutical industries. SUBJECT TERMS: ODOC carriers, barcode, split-mix... approach. Array technologies can construct high densities of molecules in an array format on a solid substrate (microchip), from which the chemical... and-play microfluidic packaging scheme, known as Microflego – 3D Microfluidic Assembly, to facilely establish complex 3D microfluidic networks using
Abstract Background The primary goal of genetic linkage analysis is to identify genes affecting a phenotypic trait. After localisation of the linkage region, efficient genetic dissection of the disease-linked loci requires that functional variants are identified across the loci. These functional variations are difficult to detect due to the extent of genetic diversity and, to date, incomplete cataloguing of the large number of variants present both within and between populations. Massively parallel sequencing platforms offer unprecedented capacity for variant discovery; however, the number of samples analysed is still limited by cost per sample. Some progress has been made in reducing the cost of resequencing using either multiplexing methodologies or through the utilisation of targeted enrichment technologies which provide the ability to resequence genomic areas of interest rather than full genome sequencing. Results We developed a method that combines current multiplexing methodologies with a solution-based target enrichment method to further reduce the cost of resequencing where region-specific sequencing is required. Our multiplex/enrichment strategy produced high quality data with nominal reduction of sequencing depth. We undertook a genotyping study and were successful in the discovery of novel SNP alleles in all samples at uniplex, duplex and pentaplex levels. Conclusion Our work describes the successful combination of a targeted enrichment method and index barcode multiplexing to reduce costs, time and labour associated with processing large sample sets. Furthermore, we have shown that the sequencing depth obtained is adequate for credible SNP genotyping analysis at uniplex, duplex and pentaplex levels.
Mullen Michael P
Abstract Background The central role of the somatotrophic axis in animal post-natal growth, development and fertility is well established. Therefore, the identification of genetic variants affecting quantitative traits within this axis is an attractive goal. However, large sample numbers are a pre-requisite for the identification of genetic variants underlying complex traits and although technologies are improving rapidly, high-throughput sequencing of large numbers of complete individual genomes remains prohibitively expensive. Therefore using a pooled DNA approach coupled with target enrichment and high-throughput sequencing, the aim of this study was to identify polymorphisms and estimate allele frequency differences across 83 candidate genes of the somatotrophic axis, in 150 Holstein-Friesian dairy bulls divided into two groups divergent for genetic merit for fertility. Results In total, 4,135 SNPs and 893 indels were identified during the resequencing of the 83 candidate genes. Nineteen percent (n = 952) of variants were located within 5' and 3' UTRs. Seventy-two percent (n = 3,612) were intronic and 9% (n = 464) were exonic, including 65 indels and 236 SNPs resulting in non-synonymous substitutions (NSS). Significant differences in allele frequencies between the two groups were observed for a subset of variants; allele frequencies for 43 of these SNPs were also estimated independently using Sequenom® MassARRAY. No significant differences (P > 0.1) were observed between the two methods for any of the 43 SNPs across both pools (i.e., 86 tests in total). Conclusions The results of the current study support previous findings of the use of DNA sample pooling and high-throughput sequencing as a viable strategy for polymorphism discovery and allele frequency estimation. Using this approach we have characterised the genetic variation within genes of the somatotrophic axis and related pathways, central to mammalian post-natal growth and development and subsequent lactogenesis and fertility. We have identified a large number of variants segregating at significantly different frequencies between cattle groups divergent for calving
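Allele-frequency estimation from pooled resequencing reduces to read counting, and a between-group differential can be screened with a two-proportion statistic. A minimal sketch (the read counts are hypothetical, and the study's actual statistical treatment may differ):

```python
import math

def pool_allele_freq(alt_reads, total_reads):
    # Allele frequency in a DNA pool, estimated directly from read counts
    return alt_reads / total_reads

def two_pool_z(alt1, n1, alt2, n2):
    # Two-proportion z-statistic for an allele-frequency differential
    # between two pools (normal approximation)
    p1, p2 = alt1 / n1, alt2 / n2
    p = (alt1 + alt2) / (n1 + n2)       # pooled frequency under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical SNP: 30/100 alternate reads in the high-fertility pool
# vs 10/100 in the low-fertility pool
print(round(two_pool_z(30, 100, 10, 100), 2))
```

Note that the simple z-statistic treats reads as independent draws; real pooled-sequencing analyses also model the unequal contribution of individuals to the pool, which adds variance beyond binomial sampling.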
Subedi, Amit; Shimizu, Takeshi; Ryo, Akihide; Sanada, Emiko; Watanabe, Nobumoto; Osada, Hiroyuki
Peptidyl prolyl cis/trans isomerization by Pin1 regulates various oncogenic signals during cancer progression, and its inhibition through multiple approaches has established Pin1 as a therapeutic target. However, lack of simplified screening systems has limited the discovery of potent Pin1 inhibitors. We utilized phosphorylation-dependent binding of Pin1 to its specific substrate to develop a screening system for Pin1 inhibitors. Using this system, we screened a chemical library, and identified a novel selenium derivative as a Pin1 inhibitor. Based on structure-activity guided chemical synthesis, we developed more potent Pin1 inhibitors that inhibited cancer cell proliferation. -- Highlights: • Novel screening for Pin1 inhibitors based on Pin1 binding is developed. • A novel selenium compound is discovered as a Pin1 inhibitor. • Activity-guided chemical synthesis of selenium derivatives resulted in potent Pin1 inhibitors.
Randi Holm Jensen
Viral infections cause many different diseases stemming from both well-characterized viral pathogens and emerging viruses, and the search for novel viruses continues to be of great importance. High-throughput sequencing is an important technology for this purpose. However, viral nucleic acids often constitute a minute proportion of the total genetic material in a sample from infected tissue. Techniques to enrich viral targets in high-throughput sequencing have been reported, but the sensitivity of such methods is not well established. This study compares different library preparation techniques targeting both DNA and RNA with and without virion enrichment. By optimizing the selection of intact virus particles, both by physical and enzymatic approaches, we assessed the effectiveness of the specific enrichment of viral sequences as compared to non-enriched sample preparations by selectively looking for and counting read sequences obtained from shotgun sequencing. Using shotgun sequencing of total DNA or RNA, viral targets were detected at concentrations corresponding to the predicted level, providing a foundation for estimating the effectiveness of virion enrichment. Virion enrichment typically produced a 1000-fold increase in the proportion of DNA virus sequences. For RNA virions the gain was less pronounced, with a maximum 13-fold increase. This enrichment varied between the different sample concentrations, with no clear trend. Although less sequencing was required to identify target sequences, it was not evident from our data that a lower detection level was achieved by virion enrichment compared to shotgun sequencing.
White Frank F
Abstract Background Eight diverse sorghum (Sorghum bicolor L. Moench) accessions were subjected to short-read genome sequencing to characterize the distribution of single-nucleotide polymorphisms (SNPs). Two strategies were used for DNA library preparation. Missing SNP genotype data were imputed by local haplotype comparison. The effects of library type and genomic diversity on SNP discovery and imputation are evaluated. Results Alignment of eight genome equivalents (6 Gb) to the public reference genome revealed 283,000 SNPs at ≥82% confirmation probability. Sequencing from libraries constructed to limit sequencing to start at defined restriction sites led to genotyping 10-fold more SNPs in all 8 accessions, and correctly imputing 11% more missing data, than from semirandom libraries. The SNP yield advantage of the reduced-representation method was less than expected, since up to one fifth of reads started at noncanonical restriction sites and up to one third of restriction sites predicted in silico to yield unique alignments were not sampled at near-saturation. For imputation accuracy, the availability of a genomically similar accession in the germplasm panel was more important than panel size or sequencing coverage. Conclusions A sequence quantity of 3 million 50-base reads per accession using a BsrFI library would conservatively provide satisfactory genotyping of 96,000 sorghum SNPs. For most reliable SNP-genotype imputation in shallowly sequenced genomes, germplasm panels should consist of pairs or groups of genomically similar entries. These results may help in designing strategies for economical genotyping-by-sequencing of large numbers of plant accessions.
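Imputation from a genomically similar accession can be caricatured as copying each missing call from the most similar panel member. A toy sketch (genotypes encoded as single characters with '.' for missing; the study's actual method compares local haplotypes rather than whole-accession similarity):

```python
MISSING = "."

def similarity(a, b):
    # Fraction of SNP sites at which both accessions are genotyped and agree
    shared = [(x, y) for x, y in zip(a, b) if MISSING not in (x, y)]
    if not shared:
        return 0.0
    return sum(x == y for x, y in shared) / len(shared)

def impute(target, panel):
    # Fill each missing call from the most similar accession that has a call
    result = list(target)
    ranked = sorted(panel, key=lambda acc: similarity(target, acc),
                    reverse=True)
    for i, g in enumerate(result):
        if g == MISSING:
            for acc in ranked:
                if acc[i] != MISSING:
                    result[i] = acc[i]
                    break
    return "".join(result)

# Hypothetical 6-SNP panel: the first accession matches the target best,
# so it donates the two missing calls
print(impute("A.GT.C", ["AAGTCC", "ATGACC", "CCGTAA"]))  # -> AAGTCC
```

This illustrates the paper's conclusion: imputation quality hinges on having a genomically similar entry in the panel, since that entry supplies the missing calls.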
Panavas, Tadas; Lu, Jin; Liu, Xuesong; Winkis, Ann-Marie; Powers, Gordon; Naso, Michael F; Amegadzie, Bernard
Expressed protein libraries are becoming a critical tool for new target discovery in the pharmaceutical industry. In order to get the most meaningful and comprehensive results from protein library screens, it is essential to have library proteins in their native conformation with proper post-translation modifications. This goal is achieved by expressing untagged human proteins in a human cell background. We optimized the transfection and cell culture conditions to maximize protein expression in a 96-well format so that the expression levels were comparable with the levels observed in shake flasks. For detection purposes, we engineered a 'tag after stop codon' system. Depending on the expression conditions, it was possible to express either native or tagged proteins from the same expression vector set. We created a human secretion protein library of 1432 candidates and a small plasma membrane protein set of about 500 candidates. Utilizing the optimized expression conditions, we expressed and analyzed both libraries by SDS-PAGE gel electrophoresis and Western blotting. Two thirds of secreted proteins could be detected by Western-blot analyses; almost half of them were visible on Coomassie stained gels. In this paper, we describe protein expression libraries that can be easily produced in mammalian expression systems in a 96-well format, with one protein expressed per well. The libraries and methods described allow for the development of robust, high-throughput functional screens designed to assay for protein specific functions associated with a relevant disease-specific activity. Copyright © 2011 Elsevier Inc. All rights reserved.
Paytubi, Sonia; de La Cruz, Mercedes; Tormo, Jose R.; Martín, Jesús; González, Ignacio; González-Menendez, Victor; Genilloud, Olga; Reyes, Fernando; Vicente, Francisca; Madrid, Cristina; Balsalobre, Carlos
In this report, we describe a High-Throughput Screening (HTS) assay to identify compounds that inhibit biofilm formation or cause the disintegration of an already formed biofilm using the Salmonella Enteritidis 3934 strain. Initially, we developed a new methodology for growing Salmonella biofilms suitable for HTS platforms. The biomass associated with biofilm at the solid-liquid interface was quantified by staining both with resazurin and crystal violet, to detect living cells and total biofilm mass, respectively. For a pilot project, a subset of 1120 extracts from the Fundación MEDINA's collection was examined to identify molecules with antibiofilm activity. This is the first validated HTS assay of microbial natural product extracts which allows for the detection of four types of activities which are not mutually exclusive: inhibition of biofilm formation, detachment of the preformed biofilm and antimicrobial activity against planktonic cells or biofilm-embedded cells. Currently, several extracts have been selected for further fractionation and purification of the active compounds. In one of the natural extracts, patulin has been identified as a potent molecule with antimicrobial activity against both planktonic cells and cells within the biofilm. These findings provide a proof of concept that the developed HTS can lead to the discovery of new natural compounds with antibiofilm activity against Salmonella and its possible use as an alternative to antimicrobial therapies and traditional disinfectants. PMID:28303128
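HTS assay validation of the kind described above is conventionally summarized with the Z'-factor computed from positive- and negative-control wells. A brief sketch; the statistic is standard HTS practice rather than a criterion taken from this study, and the well readings are invented:

```python
from statistics import mean, stdev

def z_prime(pos, neg):
    # Z'-factor: standard statistic for judging HTS assay quality from
    # control-well readings; > 0.5 is conventionally an excellent window.
    return 1 - 3 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

# Hypothetical crystal-violet readings: untreated biofilm (positive
# controls) vs wells with a known biofilm disruptor (negative controls)
print(round(z_prime([100, 102, 98, 100], [10, 11, 9, 10]), 2))
```

A large separation between control means relative to their spread, as in this example, is what makes a plate-based biofilm readout usable for screening natural-product extract libraries.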
Gai, Xiaowu; Perin, Juan C; Murphy, Kevin; O'Hara, Ryan; D'arcy, Monica; Wenocur, Adam; Xie, Hongbo M; Rappaport, Eric F; Shaikh, Tamim H; White, Peter S
Recent studies have shown that copy number variations (CNVs) are frequent in higher eukaryotes and associated with a substantial portion of inherited and acquired risk for various human diseases. The increasing availability of high-resolution genome surveillance platforms provides opportunity for rapidly assessing research and clinical samples for CNV content, as well as for determining the potential pathogenicity of identified variants. However, few informatics tools for accurate and efficient CNV detection and assessment currently exist. We developed a suite of software tools and resources (CNV Workshop) for automated, genome-wide CNV detection from a variety of SNP array platforms. CNV Workshop includes three major components: detection, annotation, and presentation of structural variants from genome array data. CNV detection utilizes a robust and genotype-specific extension of the Circular Binary Segmentation algorithm, and the use of additional detection algorithms is supported. Predicted CNVs are captured in a MySQL database that supports cohort-based projects and incorporates a secure user authentication layer and user/admin roles. To assist with determination of pathogenicity, detected CNVs are also annotated automatically for gene content, known disease loci, and gene-based literature references. Results are easily queried, sorted, filtered, and visualized via a web-based presentation layer that includes a GBrowse-based graphical representation of CNV content and relevant public data, integration with the UCSC Genome Browser, and tabular displays of genomic attributes for each CNV. To our knowledge, CNV Workshop represents the first cohesive and convenient platform for detection, annotation, and assessment of the biological and clinical significance of structural variants. CNV Workshop has been successfully utilized for assessment of genomic variation in healthy individuals and disease cohorts and is an ideal platform for coordinating multiple associated
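The segmentation step at the core of CNV detection can be illustrated with plain (non-circular) binary segmentation of log2-ratio values. CNV Workshop's actual detector is a genotype-specific extension of Circular Binary Segmentation with statistical significance testing, so this is only a toy mean-shift version:

```python
def best_breakpoint(values):
    # Split point maximizing the between-segment difference in mean log2 ratio
    best_i, best_score = None, 0.0
    for i in range(1, len(values)):
        left, right = values[:i], values[i:]
        score = abs(sum(left) / len(left) - sum(right) / len(right))
        if score > best_score:
            best_i, best_score = i, score
    return best_i, best_score

def segment(values, threshold=0.4):
    # Recursive binary segmentation: keep splitting while the mean shift
    # at the best breakpoint exceeds the threshold
    i, score = best_breakpoint(values)
    if i is None or score < threshold:
        return [values]
    return segment(values[:i], threshold) + segment(values[i:], threshold)

# Hypothetical probe log2 ratios: copy-neutral run followed by a gain
print(segment([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]))
```

Real CBS additionally tests each candidate split by permutation and considers "circular" two-breakpoint splits, which is what lets it call focal events embedded in long copy-neutral stretches.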
Grünzner, S.; Reddavide, F. V.; Steinfelder, C.; Cui, M.; Busek, M.; Klotzbach, U.; Zhang, Y.; Sonntag, F.
The fast development of DNA-encoded chemical libraries (DECL) in the past 10 years has received great attention from pharmaceutical industries. It applies the selection approach for small-molecule drug discovery. Because of the limited choices of DNA-compatible chemical reactions, most DNA-encoded chemical libraries have a narrow structural diversity and low synthetic yield. There is also a poor correlation between the ranking of compounds resulting from analyzing the sequencing data and the affinity measured through biochemical assays. By combining DECL with a dynamic chemical library, the resulting DNA-encoded dynamic library (EDCCL) explores the thermodynamic equilibrium of reversible reactions as well as the advantages of DNA-encoded compounds for manipulation/detection, thus leading to enhanced signal-to-noise ratio of the selection process and higher library quality. However, the library dynamics are caused by the weak interactions between the DNA strands, which also result in relatively low affinity of the bidentate interaction, as compared to a stable DNA duplex. To take advantage of both stably assembled dual-pharmacophore libraries and EDCCLs, we extended the concept of EDCCLs to heat-induced EDCCLs (hi-EDCCLs), in which the heat-induced recombination process of stable DNA duplexes and affinity capture are carried out separately. To replace the extremely laborious and repetitive manual process, a fully automated device will facilitate the use of DECL in drug discovery. Herein we describe a novel lab-on-a-chip platform for high throughput drug discovery with hi-EDCCL. A microfluidic system with integrated actuation was designed which is able to provide a continuous sample circulation by reducing the volume to a minimum. It consists of a cooled and a heated chamber for constant circulation. The system is capable of generating stable temperatures above 75 °C in the heated chamber to melt the double strands of the DNA and less than 15 °C in the cooled chamber
Shubhakar, A.; Reiding, K.R.; Gardner, R.A.; Spencer, D.I.R.; Fernandes, D.L.; Wuhrer, M.
This review covers advances in analytical technologies for high-throughput (HTP) glycomics. Our focus is on structural studies of glycoprotein glycosylation to support biopharmaceutical realization and the discovery of glycan biomarkers for human disease. For biopharmaceuticals, there is increasing
High throughput methodologies such as microarrays, mass spectrometry and plate-based small molecule screens are increasingly used to facilitate discoveries from gene function to drug candidate identification. These large-scale experiments are typically carried out over the course...
Isabel K Macdonald
BACKGROUND: The National Lung Screening Trial showed that CT screening for lung cancer led to a 20% reduction in mortality. However, CT screening has a number of disadvantages including low specificity. A validated autoantibody assay is available commercially (EarlyCDT®-Lung) to aid in the early detection of lung cancer and risk stratification in patients with pulmonary nodules detected by CT. Recent advances in high throughput (HTP) cloning and expression methods have been developed into a discovery pipeline to identify biomarkers that detect autoantibodies. The aim of this study was to demonstrate the successful clinical application of this strategy to add to the EarlyCDT-Lung panel in order to improve its sensitivity and specificity (and hence positive predictive value, PPV). METHODS AND FINDINGS: Serum from two matched independent cohorts of lung cancer patients were used (n = 100 and n = 165). Sixty-nine proteins were initially screened on an abridged HTP version of the autoantibody ELISA using protein prepared on small scale by a HTP expression and purification screen. Promising leads were produced in shake flask culture and tested on the full assay. These results were analyzed in combination with those from the EarlyCDT-Lung panel in order to provide a set of re-optimized cut-offs. Five proteins that still displayed cancer/normal differentiation were tested for reproducibility and validation on a second batch of protein and a separate patient cohort. Addition of these proteins resulted in an improvement in the sensitivity and specificity of the test from 38% and 86% to 49% and 93% respectively (PPV improvement from 1 in 16 to 1 in 7). CONCLUSION: This is a practical example of the value of investing resources to develop a HTP technology. Such technology may lead to improvement in the clinical utility of the EarlyCDT-Lung test, and so further aid the early detection of lung cancer.
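The sensitivity/specificity/PPV arithmetic quoted above follows from Bayes' rule once a screening prevalence is fixed. A sketch (the ~2.4% prevalence is a back-calculated assumption chosen so the quoted 1-in-16 and 1-in-7 PPVs are reproduced; it is not a figure stated in the paper):

```python
def panel_metrics(tp, fn, tn, fp, prevalence):
    # Sensitivity and specificity from case/control counts; PPV is then
    # recomputed at the screening prevalence via Bayes' rule, because a
    # matched case/control cohort does not reflect population prevalence.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = sensitivity * prevalence / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    return sensitivity, specificity, ppv

# Counts chosen to mirror the quoted 49% sensitivity / 93% specificity
sens, spec, ppv = panel_metrics(tp=49, fn=51, tn=93, fp=7, prevalence=0.024)
print(round(1 / ppv))  # roughly 1 positive call in 7 is a true cancer
```

With the original 38%/86% panel at the same assumed prevalence, the same formula gives a PPV near 1 in 16, matching the quoted improvement.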
J chain is a small polypeptide responsible for immunoglobulin (Ig) polymerization and transport of Igs across mucosal surfaces in higher vertebrates. We identified a J chain in dipnoid fish, the African lungfish (Protopterus dolloi), by high throughput sequencing of the transcriptome. P. dolloi J chain is 161 aa long and contains six of the eight Cys residues present in mammalian J chain. Phylogenetic studies place the lungfish J chain closer to tetrapod J chain than to the coelacanth or nurse shark sequences. J chain expression occurs in all P. dolloi immune tissues examined and it increases in the gut and kidney in response to an experimental bacterial infection. Double fluorescent in-situ hybridization shows that 88.5% of IgM⁺ cells in the gut co-express J chain, a significantly higher percentage than in the pre-pyloric spleen. Importantly, J chain expression is not restricted to the B-cell compartment since gut epithelial cells also express J chain. These results improve our current view of J chain from a phylogenetic perspective.
Abstract Background Malaria, a major public health issue in developing nations, is responsible for more than one million deaths a year. The most lethal species, Plasmodium falciparum, causes up to 90% of fatalities. Drug resistant strains to common therapies have emerged worldwide and recent artemisinin-based combination therapy failures hasten the need for new antimalarial drugs. Discovering novel compounds to be used as antimalarials is expedited by the use of a high-throughput screen (HTS) to detect parasite growth and proliferation. Fluorescent dyes that bind to DNA have replaced expensive traditional radioisotope incorporation for HTS growth assays, but do not give additional information regarding the parasite stage affected by the drug or an indication of the drug's mode of action. Live cell imaging with RNA dyes, which correlates with cell growth and proliferation, has been limited by the availability of successful commercial dyes. Results After screening a library of newly synthesized styryl dyes, we discovered three RNA binding dyes that provide morphological details of live parasites. Utilizing an inverted confocal imaging platform, live cell imaging of parasites increases parasite detection, improves the spatial and temporal resolution of the parasite under drug treatments, and can resolve morphological changes in individual cells. Conclusion This simple one-step technique is suitable for automation in a microplate format for novel antimalarial compound HTS. We have developed a new P. falciparum RNA high-content imaging growth inhibition assay that is robust as well as time- and energy-efficient.
McDonald, Peter R; Roy, Anuradha; Chaguturu, Rathnam
The University of Kansas High-Throughput Screening (KU HTS) core is a state-of-the-art drug-discovery facility with an entrepreneurial open-service policy, which provides centralized resources supporting public- and private-sector research initiatives. The KU HTS core was established in 2002 at the University of Kansas with support from an NIH grant and the state of Kansas. It collaborates with investigators from national and international academic, nonprofit and pharmaceutical organizations in executing HTS-ready assay development and screening of chemical libraries for target validation, probe selection, hit identification and lead optimization. This is part two of a contribution from the KU HTS laboratory.
Simmons, Katie J; Gotfryd, Kamil; Billesbølle, Christian B
Membrane proteins are intrinsically involved in both human and pathogen physiology, and are the target of 60% of all marketed drugs. During the past decade, advances in the studies of membrane proteins using X-ray crystallography, electron microscopy and NMR-based techniques led to the e...... this is a virtual high-throughput screening (vHTS) technique initially developed for soluble proteins. This paper describes the application of this technique to the discovery of inhibitors of the leucine transporter (LeuT), a member of the neurotransmitter:sodium symporter (NSS) family....
McDonald, Peter R; Roy, Anuradha; Chaguturu, Rathnam
The University of Kansas High-Throughput Screening (KU HTS) core is a state-of-the-art drug-discovery facility with an entrepreneurial open-service policy, which provides centralized resources supporting public- and private-sector research initiatives. The KU HTS core applies pharmaceutical industry project-management principles in an academic setting by bringing together multidisciplinary teams to fill critical scientific and technology gaps, using an experienced team of industry-trained researchers and project managers. The KU HTS proactively engages in supporting grant applications for extramural funding, intellectual-property management and technology transfer. The KU HTS staff further provides educational opportunities for the KU faculty and students to learn cutting-edge technologies in drug-discovery platforms through seminars, workshops, internships and course teaching. This is the first instalment of a two-part contribution from the KU HTS laboratory.
Background: The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has depended upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle but important deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods that identify novel and biologically relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks: restoring pre-defined, biologically meaningful phenotypes, differentiating novel phenotypes from known ones, and distinguishing novel phenotypes from each other. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens. Results: Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian mixture model (GMM) is employed to estimate the distribution of each existing phenotype and is then used as the reference distribution in gap statistics. This method is broadly applicable to a number of different types of
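The reference-distribution idea above can be sketched minimally: a cell is flagged as a candidate novel phenotype when its likelihood is low under every known phenotype model. The 1-D Gaussian, the threshold, and the example phenotype parameters below are illustrative assumptions, not the paper's actual multivariate GMM or gap-statistic machinery.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian used as the reference distribution for one
    known phenotype (the paper fits multivariate GMMs; 1-D is for clarity)."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def is_novel(feature, phenotypes, threshold=1e-3):
    """Flag a cell as a candidate novel phenotype when its likelihood falls
    below `threshold` under every known phenotype model (the threshold is an
    illustrative choice, not the paper's criterion)."""
    return all(gaussian_pdf(feature, mu, sigma) < threshold for mu, sigma in phenotypes)

# Two hypothetical known phenotypes, each summarized as (mean, std) of a feature
known = [(0.0, 1.0), (5.0, 1.0)]
print(is_novel(12.0, known))  # far from both clusters → True
print(is_novel(0.1, known))   # well inside the first cluster → False
```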
Turchetto, Jeremy; Sequeira, Ana Filipa; Ramond, Laurie; Peysson, Fanny; Brás, Joana L A; Saez, Natalie J; Duhoo, Yoan; Blémont, Marilyne; Guerreiro, Catarina I P D; Quinton, Loic; De Pauw, Edwin; Gilles, Nicolas; Darbon, Hervé; Fontes, Carlos M G A; Vincentelli, Renaud
Animal venoms are complex molecular cocktails containing a wide range of biologically active disulphide-reticulated peptides that target, with high selectivity and efficacy, a variety of membrane receptors. Disulphide-reticulated peptides have evolved to display improved specificity and low immunogenicity, and they show much higher resistance to degradation than linear peptides. These properties make venom peptides attractive candidates for drug development. However, recombinant expression of reticulated peptides containing disulphide bonds is challenging, especially when associated with the production of large libraries of bioactive molecules for drug screening. To date, no comprehensive recombinant libraries of natural venom peptides, as an alternative to artificial synthetic chemical libraries, are accessible for high-throughput screening to identify novel therapeutics. In the accompanying paper, an efficient system for the expression and purification of oxidized disulphide-reticulated venom peptides in Escherichia coli is described. Here we report the development of a high-throughput automated platform, adaptable to the production of other families, to generate the largest ever library of recombinant venom peptides. The peptides were produced in the periplasm of E. coli using redox-active DsbC as a fusion tag, thus allowing the efficient formation of correctly folded disulphide bridges. TEV protease was used to remove the fusion tags and recover the animal venom peptides in the native state. Globally, within nine months, out of a total of 4992 synthetic genes encoding a representative diversity of venom peptides, a library containing 2736 recombinant disulphide-reticulated peptides was generated. The data revealed that the animal venom peptides produced in the bacterial host were natively folded and thus putatively biologically active. Overall, this study reveals that high-throughput expression of animal venom peptides in E. coli can generate large
High-dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high-throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell-type abundance with a phenotype, changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data are the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell counts or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g., multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g., plots of aggregated signals).
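Before any regression, a workflow of this kind reduces each sample to per-cluster cell counts or proportions, which then serve as the response variable. A minimal sketch of that step, with hypothetical cluster labels (the pipeline itself is R/Bioconductor; Python is used here only for illustration):

```python
from collections import Counter

def cluster_abundances(assignments):
    """Proportion of a sample's cells falling in each cluster: the kind of
    per-sample summary used as the response in abundance regressions."""
    counts = Counter(assignments)
    total = len(assignments)
    return {cluster: n / total for cluster, n in counts.items()}

# Hypothetical per-cell cluster labels for one sample
labels = ["T", "T", "B", "NK", "B", "B"]
abundances = cluster_abundances(labels)
print(abundances)  # B cells make up half of this sample
```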
Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán.
Discovering new OLED emitters requires many experiments to synthesize candidates and test performance in devices. Large scale computer simulation can greatly speed this search process but the problem remains challenging enough that brute force application of massive computing power is not enough to successfully identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists actively collaborated such that experimental feedback was regularly utilized to update and shape the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation to prevent wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, this effort managed to navigate an area of molecular space and identify hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.
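The surrogate-model triage step described above amounts to ranking candidates by a cheap predicted score and passing only the best on to expensive quantum-chemistry simulation. A minimal sketch, where the candidate names and scoring function are toy stand-ins, not the study's trained model:

```python
def prioritize(candidates, predict_score, budget):
    """Keep only the top-`budget` candidates by a surrogate model's score,
    so expensive simulation is not wasted on likely poor performers."""
    return sorted(candidates, key=predict_score, reverse=True)[:budget]

# Hypothetical candidate structures with a toy scoring function
mols = ["A", "B", "C", "D"]
score = {"A": 0.2, "B": 0.9, "C": 0.5, "D": 0.7}.get
print(prioritize(mols, score, budget=2))  # → ['B', 'D']
```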
Tang, Huaping; Shen, Ding Ren; Han, Yong-Hae; Kong, Yan; Balimane, Praveen; Marino, Anthony; Gao, Mian; Wu, Sophie; Xie, Dianlin; Soars, Matthew G; O'Connell, Jonathan C; Rodrigues, A David; Zhang, Litao; Cvijic, Mary Ellen
Transporter proteins are known to play a critical role in affecting the overall absorption, distribution, metabolism, and excretion characteristics of drug candidates. In addition to efflux transporters (P-gp, BCRP, MRP2, etc.) that limit absorption, there has been a renewed interest in influx transporters at the renal (OATs, OCTs) and hepatic (OATPs, BSEP, NTCP, etc.) organ level that can cause significant clinical drug-drug interactions (DDIs). Several of these transporters are also critical for hepatobiliary disposition of bilirubin and bile acid/salts, and their inhibition is directly implicated in hepatic toxicities. Regulatory agencies took action to address transporter-mediated DDI with the goal of ensuring drug safety in the clinic and on the market. To meet regulatory requirements, advanced bioassay technology and automation solutions were implemented for high-throughput transporter screening to provide structure-activity relationship within lead optimization. To enhance capacity, several functional assay formats were miniaturized to 384-well throughput including novel fluorescence-based uptake and efflux inhibition assays using high-content image analysis as well as cell-based radioactive uptake and vesicle-based efflux inhibition assays. This high-throughput capability enabled a paradigm shift from studying transporter-related issues in the development space to identifying and dialing out these concerns early on in discovery for enhanced mechanism-based efficacy while circumventing DDIs and transporter toxicities.
Zhang, Douglas; Lee, Junmin; Kilian, Kristopher A
Cells in tissue receive a host of soluble and insoluble signals in a context-dependent fashion, where integration of these cues through a complex network of signal transduction cascades will define a particular outcome. Biomaterials scientists and engineers are tasked with designing materials that can at least partially recreate this complex signaling milieu towards new materials for biomedical applications. In this progress report, recent advances in high throughput techniques and high content imaging approaches that are facilitating the discovery of efficacious biomaterials are described. From microarrays of synthetic polymers, peptides and full-length proteins, to designer cell culture systems that present multiple biophysical and biochemical cues in tandem, it is discussed how the integration of combinatorics with high content imaging and analysis is essential to extracting biologically meaningful information from large scale cellular screens to inform the design of next generation biomaterials. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Xiang, X D
Combinatorial materials synthesis methods and high-throughput evaluation techniques have been developed to accelerate the process of materials discovery and optimization and phase-diagram mapping. Analogous to integrated circuit chips, integrated materials chips containing thousands of discrete different compositions or continuous phase diagrams, often in the form of high-quality epitaxial thin films, can be fabricated and screened for interesting properties. Microspot x-ray method, various optical measurement techniques, and a novel evanescent microwave microscope have been used to characterize the structural, optical, magnetic, and electrical properties of samples on the materials chips. These techniques are routinely used to discover/optimize and map phase diagrams of ferroelectric, dielectric, optical, magnetic, and superconducting materials.
Yusuf, Noor Hydayaty Md; Ong, Wen Dee; Redwan, Raimi Mohamed; Latip, Mariam Abd; Kumar, S Vijay
MicroRNAs (miRNAs) are a class of small, endogenous non-coding RNAs that negatively regulate gene expression, resulting in the silencing of target mRNA transcripts through mRNA cleavage or translational inhibition. MiRNAs play significant roles in various biological and physiological processes in plants. However, the miRNA-mediated gene regulatory network in pineapple, the model tropical non-climacteric fruit, remains largely unexplored. Here, we report a complete list of pineapple mature miRNAs obtained from high-throughput small RNA sequencing and precursor miRNAs (pre-miRNAs) obtained from ESTs. Two small RNA libraries were constructed from pineapple fruits and leaves, respectively, using Illumina's Solexa technology. Sequence similarity analysis using miRBase revealed 579,179 reads homologous to 153 miRNAs from 41 miRNA families. In addition, a pineapple fruit transcriptome library consisting of approximately 30,000 EST contigs constructed using Solexa sequencing was used for the discovery of pre-miRNAs. In all, four pre-miRNAs were identified (MIR156, MIR399, MIR444 and MIR2673). Furthermore, the same pineapple transcriptome was used to dissect the function of the miRNAs in pineapple by predicting their putative targets in conjunction with their regulatory networks. In total, 23 metabolic pathways were found to be regulated by miRNAs in pineapple. The use of high-throughput sequencing in pineapples to unveil the presence of miRNAs and their regulatory pathways provides insight into the repertoire of miRNA regulation used exclusively in this non-climacteric model plant. Copyright © 2015 Elsevier B.V. All rights reserved.
Federal Laboratory Consortium — Argonne's high-throughput facility provides highly automated and parallel approaches to material and materials chemistry development. The facility allows scientists...
Rounge, Trine B; Lauritzen, Marianne; Langseth, Hilde; Enerly, Espen; Lyle, Robert; Gislefoss, Randi E
The impacts of long-term storage and varying preanalytical factors on the quality and quantity of DNA and miRNA from archived serum have not been fully assessed. Preanalytical and analytical variations and degradation may introduce bias in representation of DNA and miRNA and may result in loss or corruption of quantitative data. We have evaluated DNA and miRNA quantity, quality, and variability in samples stored up to 40 years using one of the oldest prospective serum collections in the world, the Janus Serumbank, a biorepository dedicated to cancer research. miRNAs are present and stable in archived serum samples frozen at -25°C for at least 40 years. Long-time storage did not reduce miRNA yields; however, varying preanalytical conditions had a significant effect and should be taken into consideration during project design. Of note, 500 μL serum yielded sufficient miRNA for qPCR and small RNA sequencing and on average 650 unique miRNAs were detected in samples from presumably healthy donors. Of note, 500 μL serum yielded sufficient DNA for whole-genome sequencing and subsequent SNP calling, giving a uniform representation of the genomes. DNA and miRNA are stable during long-term storage, making large prospectively collected serum repositories an invaluable source for miRNA and DNA biomarker discovery. Large-scale biomarker studies with long follow-up time are possible utilizing biorepositories with archived serum and state-of-the-art technology. ©2015 American Association for Cancer Research.
BACKGROUND: Paspalum dilatatum Poir. (common name dallisgrass) is a native grass species of South America, with special relevance to dairy and red meat production. P. dilatatum exhibits higher forage quality than other C4 forage grasses and is tolerant to frost and water stress. This species is predominantly cultivated as an apomictic monoculture, with an inherent high risk that biotic and abiotic stresses could potentially devastate productivity. Therefore, advanced breeding strategies that characterise and use available genetic diversity, or assess germplasm collections effectively, are required to deliver advanced cultivars for production systems. However, there are limited genomic resources available for this forage grass species. RESULTS: Transcriptome sequencing using second-generation sequencing platforms was performed on pooled RNA from different tissues (stems, roots, leaves and inflorescences) at the final reproductive stage of P. dilatatum cultivar Primo. A total of 324,695 sequence reads were obtained, corresponding to c. 102 Mbp. The sequences were assembled, generating 20,169 contigs with a combined length of 9,336,138 nucleotides. The contigs were BLAST-analysed against the fully sequenced grass species Oryza sativa subsp. japonica and Brachypodium distachyon, the closely related Sorghum bicolor, and the foxtail millet (Setaria italica) genomes, as well as against the UniRef90 protein database, allowing a comprehensive gene ontology analysis to be performed. The contigs generated from the transcript sequencing were also analysed for the presence of simple sequence repeats (SSRs). A total of 2,339 SSR motifs were identified within 1,989 contigs, and corresponding primer pairs were designed. Empirical validation of a cohort of 96 SSRs was performed, with 34% being polymorphic between sexual and apomictic biotypes. CONCLUSIONS: The development of genetic and genomic resources for P. dilatatum will contribute to gene discovery and expression
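SSR detection of the kind described reduces to scanning assembled contigs for tandemly repeated short motifs. A minimal regex-based sketch; the unit sizes and repeat threshold here are illustrative assumptions, not the study's actual SSR criteria:

```python
import re

def find_ssrs(seq, min_repeats=5):
    """Return (motif, start, total_length) for di- and trinucleotide tandem
    repeats of at least `min_repeats` units (thresholds are illustrative)."""
    hits = []
    for unit in (2, 3):
        # Capture a unit of fixed size, then require it to repeat via backreference
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, min_repeats - 1))
        for m in pattern.finditer(seq):
            hits.append((m.group(1), m.start(), len(m.group(0))))
    return hits

# Hypothetical contig fragment containing an (AG)6 dinucleotide repeat
print(find_ssrs("TTTAGAGAGAGAGAGACCC"))  # → [('AG', 3, 12)]
```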
Annang, F; Pérez-Moreno, G; García-Hernández, R; Cordon-Obras, C; Martín, J; Tormo, J R; Rodríguez, L; de Pedro, N; Gómez-Pérez, V; Valente, M; Reyes, F; Genilloud, O; Vicente, F; Castanys, S; Ruiz-Pérez, L M; Navarro, M; Gamarro, F; González-Pacanowska, D
African trypanosomiasis, leishmaniasis, and Chagas disease are 3 neglected tropical diseases for which current therapeutic interventions are inadequate or toxic. There is an urgent need to find new lead compounds against these diseases. Most drug discovery strategies rely on high-throughput screening (HTS) of synthetic chemical libraries using phenotypic and target-based approaches. Combinatorial chemistry libraries contain hundreds of thousands of compounds; however, they lack the structural diversity required to find entirely novel chemotypes. Natural products, in contrast, are a highly underexplored pool of unique chemical diversity that can serve as excellent templates for the synthesis of novel, biologically active molecules. We report here a validated HTS platform for the screening of microbial extracts against the 3 diseases. We have used this platform in a pilot project to screen a subset (5976) of microbial extracts from the MEDINA Natural Products library. Tandem liquid chromatography-mass spectrometry showed that 48 extracts contain potentially new compounds that are currently undergoing de-replication for future isolation and characterization. Known active components included actinomycin D, bafilomycin B1, chromomycin A3, echinomycin, hygrolidin, and nonactins, among others. The report here is, to our knowledge, the first HTS of microbial natural product extracts against the above-mentioned kinetoplastid parasites. © 2014 Society for Laboratory Automation and Screening.
Isgut, Monica; Rao, Mukkavilli; Yang, Chunhua; Subrahmanyam, Vangala; Rida, Padmashree C G; Aneja, Ritu
Modern drug discovery efforts have had mediocre success rates with increasing developmental costs, and this has encouraged pharmaceutical scientists to seek innovative approaches. Recently with the rise of the fields of systems biology and metabolomics, network pharmacology (NP) has begun to emerge as a new paradigm in drug discovery, with a focus on multiple targets and drug combinations for treating disease. Studies on the benefits of drug combinations lay the groundwork for a renewed focus on natural products in drug discovery. Natural products consist of a multitude of constituents that can act on a variety of targets in the body to induce pharmacodynamic responses that may together culminate in an additive or synergistic therapeutic effect. Although natural products cannot be patented, they can be used as starting points in the discovery of potent combination therapeutics. The optimal mix of bioactive ingredients in natural products can be determined via phenotypic screening. The targets and molecular mechanisms of action of these active ingredients can then be determined using chemical proteomics, and by implementing a reverse pharmacokinetics approach. This review article provides evidence supporting the potential benefits of natural product-based combination drugs, and summarizes drug discovery methods that can be applied to this class of drugs. © 2017 Wiley Periodicals, Inc.
Mordwinkin, Nicholas M.; Burridge, Paul W.; Wu, Joseph C.
Drug attrition rates have increased in past years, resulting in growing costs for the pharmaceutical industry and consumers. The reasons for this include the lack of in vitro models that correlate with clinical results, and poor preclinical toxicity screening assays. The in vitro production of human cardiac progenitor cells and cardiomyocytes from human pluripotent stem cells provides an amenable source of cells for applications in drug discovery, disease modeling, regenerative medicine, and ...
Takamiya, Mari; Sakurai, Masaaki; Teranishi, Fumie; Ikeda, Tomoko; Kamiyama, Tsutomu; Asai, Akira
A high-throughput RapidFire mass spectrometry assay is described for elongation of very long-chain fatty acids family 6 (Elovl6). Elovl6 is a microsomal enzyme that regulates the elongation of C12-16 saturated and monounsaturated fatty acids. Elovl6 may be a new therapeutic target for fat metabolism disorders such as obesity, type 2 diabetes, and nonalcoholic steatohepatitis. To identify new Elovl6 inhibitors, we developed a high-throughput fluorescence screening assay in 1536-well format. However, a number of false positives caused by fluorescence interference were identified. To pick out the real active compounds among the primary hits from the fluorescence assay, we developed a RapidFire mass spectrometry assay and a conventional radioisotope assay. These assays have the advantage of detecting the main products directly without using fluorescently labeled substrates. As a result, 276 compounds (30%) of the primary hits (921 compounds) from the fluorescence ultra-high-throughput screen were identified as common active compounds in the two assays. It is concluded that both methods are very effective at eliminating false positives. Compared with the radioisotope method, which uses an expensive ¹⁴C-labeled substrate, the RapidFire mass spectrometry method using unlabeled substrates is a high-accuracy, high-throughput method. In addition, some of the hit compounds selected from the screen inhibited cellular fatty acid elongation in HEK293 cells transiently expressing Elovl6. This result suggests that these compounds may be promising lead candidates for therapeutic drugs. Ultra-high-throughput fluorescence screening followed by a RapidFire mass spectrometry assay was a suitable strategy for lead discovery against Elovl6.
Highlights:
• A novel assay for elongation of very-long-chain fatty acids 6 (Elovl6) is proposed.
• The RapidFire mass spectrometry (RF-MS) assay is useful for selecting real screening hits.
• The RF-MS assay is proved to be beneficial because of
Howes, Amy L; Richardson, Robyn D; Finlay, Darren; Vuori, Kristiina
3-dimensional (3D) culture models have the potential to bridge the gap between monolayer cell culture and in vivo studies. To benefit anti-cancer drug discovery from 3D models, new techniques are needed that enable their use in high-throughput (HT) screening amenable formats. We have established miniaturized 3D culture methods robust enough for automated HT screens. We have applied these methods to evaluate the sensitivity of normal and tumorigenic breast epithelial cell lines against a panel of oncology drugs when cultured as monolayers (2D) and spheroids (3D). We have identified two classes of compounds that exhibit preferential cytotoxicity against cancer cells over normal cells when cultured as 3D spheroids: microtubule-targeting agents and epidermal growth factor receptor (EGFR) inhibitors. Further improving upon our 3D model, superior differentiation of EC50 values in the proof-of-concept screens was obtained by co-culturing the breast cancer cells with normal human fibroblasts and endothelial cells. Further, the selective sensitivity of the cancer cells towards chemotherapeutics was observed in 3D co-culture conditions, rather than as 2D co-culture monolayers, highlighting the importance of 3D cultures. Finally, we examined the putative mechanisms that drive the differing potency displayed by EGFR inhibitors. In summary, our studies establish robust 3D culture models of human cells for HT assessment of tumor cell-selective agents. This methodology is anticipated to provide a useful tool for the study of biological differences within 2D and 3D culture conditions in HT format, and an important platform for novel anti-cancer drug discovery.
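The EC50 differentiation between 2D and 3D cultures rests on the standard logistic dose-response model: a compound's potency is the dose at which the measured response is half-maximal. A minimal sketch with illustrative parameters (top = 1, bottom = 0; not the screen's actual fitting procedure):

```python
def viability(dose, ec50, hill=1.0):
    """Two-parameter logistic dose-response: fraction of viable cells at a
    given drug dose, with half-maximal effect at dose == ec50."""
    return 1.0 / (1.0 + (dose / ec50) ** hill)

# At the EC50, viability is exactly 50% by construction
print(round(viability(1.0, ec50=1.0), 3))  # → 0.5
# A hypothetical 3D-selective compound: lower EC50 against spheroids
print(viability(0.5, ec50=0.2) < viability(0.5, ec50=2.0))  # → True
```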
Ling Shi-Gang; Gao Jian; Xiao Rui-Juan; Chen Li-Quan
The rapid evolution of high-throughput theoretical design schemes to discover new lithium battery materials is reviewed, including high-capacity cathodes, low-strain cathodes, anodes, solid-state electrolytes, and electrolyte additives. With the development of efficient theoretical methods and inexpensive computers, high-throughput theoretical calculations have played an increasingly important role in the discovery of new materials. With the help of automatic simulation flows, many types of materials can be screened, optimized and designed from a structural database according to specific search criteria. In advanced cell technology, new materials for next-generation lithium batteries are of great significance for achieving better performance, and some representative screening criteria are higher energy density, better safety, and faster charge/discharge speed. (topical review)
The development of high-performance lithium-ion batteries requires the discovery of new materials and the optimization of key components. In contrast to the traditional one-by-one method, high-throughput methods can synthesize and characterize a large number of compositionally varying samples, accelerating the discovery, development and optimization of materials. Because of rapid progress in thin-film and automatic-control technologies, thousands of compounds with different compositions can now be synthesized rapidly, even in a single experiment. However, the lack of rapid or combinatorial characterization technologies to match high-throughput synthesis methods limits the application of high-throughput technology. Here, we review a series of representative high-throughput characterization methods used in lithium batteries, including high-throughput structural and electrochemical characterization methods and rapid measuring technologies based on synchrotron light sources.
Background: Extensive computational and database tools are available to mine genomic and genetic databases for model organisms, but little genomic data is available for many species of ecological or agricultural significance, especially those with large genomes. Genome surveys using conventional sequencing techniques are powerful, particularly for detecting sequences present in many copies per genome. However, these methods are time-consuming and have potential drawbacks. High-throughput 454 sequencing provides an alternative method by which much information can be gained quickly and cheaply from high-coverage surveys of genomic DNA. Results: We sequenced 78 million base pairs of randomly sheared soybean DNA which passed our quality criteria. Computational analysis of the survey sequences provided global information on the abundant repetitive sequences in soybean. The sequence was used to determine the copy number across regions of large genomic clones or contigs and to discover higher-order structures within satellite repeats. We have created an annotated, online database of sequences present in multiple copies in the soybean genome. The low bias of pyrosequencing against repeat sequences is demonstrated by the overall composition of the survey data, which matches well with past estimates of repetitive DNA content obtained by DNA re-association kinetics (Cot analysis). Conclusion: This approach provides a potential aid to conventional or shotgun genome assembly by allowing rapid assessment of copy number in any clone or clone-end sequence. In addition, we show that partial sequencing can provide access to partial protein-coding sequences.
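The clone copy-number assessment described is, at heart, a coverage ratio: read depth over a clone relative to the genome-wide expected depth. A sketch with made-up numbers (the counts below are hypothetical, not the paper's data):

```python
def copy_number(clone_hits, clone_len, total_reads, genome_size):
    """Copy-number estimate as a coverage ratio: reads-per-base over the
    clone divided by reads-per-base expected genome-wide. Read length
    cancels out of the ratio. Values well above 1 suggest repetitive DNA."""
    clone_depth = clone_hits / clone_len
    genome_depth = total_reads / genome_size
    return clone_depth / genome_depth

# Hypothetical survey: 500 of 300,000 reads hit a 100 kb clone,
# against a ~1.1 Gb genome
print(round(copy_number(500, 100_000, 300_000, 1_100_000_000), 1))  # → 18.3
```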
Eltahan, Rana; Guo, Fengguang; Zhang, Haili; Xiang, Lixin; Zhu, Guan
Cryptosporidium parvum is a water-borne and food-borne apicomplexan pathogen. It is one of the top four diarrheal-causing pathogens in children under the age of five in developing countries, and an opportunistic pathogen in immunocompromised individuals. Unlike other apicomplexans, C. parvum lacks the Krebs cycle and cytochrome-based respiration, thus relying mainly on glycolysis to produce ATP. In this study, we characterized the primary biochemical features of the C. parvum glucose-6-phosphate isomerase (CpGPI) and determined its Michaelis constant towards fructose-6-phosphate (Km = 0.309 mM, Vmax = 31.72 nmol/μg/min). We also discovered that ebselen, an organoselenium drug, was a selective inhibitor of CpGPI by high-throughput screening of 1200 known drugs. Ebselen acted on CpGPI as an allosteric noncompetitive inhibitor (IC50 = 8.33 μM; Ki = 36.33 μM), while complete inhibition of CpGPI activity was not achieved. Ebselen could also inhibit the growth of C. parvum in vitro (EC50 = 165 μM) at concentrations nontoxic to host cells, albeit with a relatively small in vitro safety window of 4.2 (cytotoxicity TC50 on HCT-8 cells = 700 μM). Additionally, ebselen might also target other enzymes in the parasite, leading to the parasite growth reduction. Therefore, although ebselen is useful in studying the inhibition of CpGPI enzyme activity, further proof is needed to chemically and/or genetically validate CpGPI as a drug target. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
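The kinetic picture in the abstract can be written out directly. The Km, Vmax and Ki values below come from the abstract, but the classical noncompetitive rate law (Vmax scaled, Km unchanged) is an assumption for illustration: the authors report allosteric, partial inhibition, which this simple form does not capture.

```python
def cpgpi_rate(s, i=0.0, vmax=31.72, km=0.309, ki=36.33):
    """Michaelis-Menten rate (nmol/μg/min) with classical noncompetitive
    inhibition: Vmax is scaled by 1/(1 + [I]/Ki), Km is unchanged.
    s and i are in mM; constants are taken from the abstract."""
    return (vmax * s / (km + s)) / (1.0 + i / ki)

# At S = Km and no inhibitor, the rate is exactly Vmax / 2
print(round(cpgpi_rate(0.309), 2))  # → 15.86
# At [I] = Ki, the noncompetitive model halves the rate again
print(round(cpgpi_rate(0.309, i=36.33), 2))  # → 7.93
```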
Thrash, Adam; Arick, Mark; Peterson, Daniel G
The quality of data generated by high-throughput DNA sequencing tools must be rapidly assessed in order to determine how useful the data may be in making biological discoveries; higher quality data leads to more confident results and conclusions. Due to the ever-increasing size of data sets and the importance of rapid quality assessment, tools that analyze sequencing data should quickly produce easily interpretable graphics. Quack addresses these issues by generating information-dense visualizations from FASTQ files at a speed far surpassing other publicly available quality assurance tools in a manner independent of sequencing technology. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
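Quack's input is plain FASTQ, whose per-record structure is four lines: header, sequence, a '+' separator, and Phred-encoded qualities. A minimal reader computing mean per-read quality, shown as an illustration of the file format rather than of Quack's implementation (Phred+33 encoding assumed):

```python
def read_fastq(lines):
    """Yield (name, seq, quals) tuples from FASTQ text; Phred+33 assumed."""
    it = iter(lines)
    for header in it:
        seq = next(it).strip()
        next(it)                                   # '+' separator line
        quals = [ord(c) - 33 for c in next(it).strip()]
        yield header.strip().lstrip('@'), seq, quals

def mean_quality(quals):
    """Arithmetic mean of per-base Phred scores for one read."""
    return sum(quals) / len(quals)

records = list(read_fastq([
    "@read1", "ACGT", "+", "IIII",                 # 'I' encodes Phred 40
]))
```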
Li, Li-Ping; Feng, Bao-Sheng; Yang, Jian-Wang; Chang, Cui-Lan; Bai, Yu; Liu, Hu-Wei
The development of rapid screening and identification techniques is of great importance for drug discovery, doping control, forensic identification, food safety and quality control. Ambient mass spectrometry (AMS) allows rapid and direct analysis of various samples in open air with little sample preparation. Recently, its applications in high-throughput screening have been in rapid progress. During the past decade, various ambient ionization techniques have been developed and applied in high-throughput screening. This review discusses typical applications of AMS, including DESI (desorption electrospray ionization), DART (direct analysis in real time), EESI (extractive electrospray ionization), etc., in high-throughput screening (HTS).
The comet assay is a sensitive and versatile method for assessing DNA damage in cells. In the traditional version of the assay, there are many manual steps involved and few samples can be treated in one experiment. High-throughput modifications have been developed during recent years, and they are reviewed and discussed here. These modifications include accelerated scoring of comets; other important elements that have been studied and adapted to high throughput are cultivation and manipulation of cells or tissues before and after exposure, and freezing of treated samples until comet analysis and scoring. High-throughput methods save time and money, but they are useful for other reasons as well: large-scale experiments become practicable that otherwise would not be (e.g., analysis of many organs from exposed animals, and human biomonitoring studies), and automation gives more uniform sample treatment and less dependence on operator performance. The high-throughput modifications now available vary widely in their versatility, capacity, complexity and costs. The bottleneck for further increase of throughput appears to be the scoring.
Keenan, Martine; Alexander, Paul W; Chaplin, Jason H; Abbott, Michael J; Diao, Hugo; Wang, Zhisen; Best, Wayne M; Perez, Catherine J; Cornwall, Scott M J; Keatley, Sarah K; Thompson, R C Andrew; Charman, Susan A; White, Karen L; Ryan, Eileen; Chen, Gong; Ioset, Jean-Robert; von Geldern, Thomas W; Chatelain, Eric
Inhibitors of Trypanosoma cruzi with novel mechanisms of action are urgently required to diversify the current clinical and preclinical pipelines. Increasing the number and diversity of hits available for assessment at the beginning of the discovery process will help to achieve this aim. We report the evaluation of multiple hits generated from a high-throughput screen to identify inhibitors of T. cruzi and from these studies the discovery of two novel series currently in lead optimization. Lead compounds from these series potently and selectively inhibit growth of T. cruzi in vitro and the most advanced compound is orally active in a subchronic mouse model of T. cruzi infection. High-throughput screening of novel compound collections has an important role to play in diversifying the trypanosomatid drug discovery portfolio. A new T. cruzi inhibitor series with good drug-like properties and promising in vivo efficacy has been identified through this process.
The ideal chemical testing approach will provide complete coverage of all relevant toxicological responses. It should be sensitive and specific. It should identify the mechanism/mode-of-action (with dose-dependence). It should identify responses relevant to the species of interest. Responses should ideally be translated into tissue-, organ-, and organism-level effects. It must be economical and scalable. Using a High Throughput Transcriptomics platform within US EPA provides broader coverage of biological activity space and toxicological MOAs and helps fill the toxicological data gap. Slide presentation at the 2016 ToxForum on using High Throughput Transcriptomics at US EPA for broader coverage of biological activity space and toxicological MOAs.
Cooper, Khershed P.
Interest in nano-scale manufacturing research and development is growing. The reason is to accelerate the translation of discoveries and inventions of nanoscience and nanotechnology into products that would benefit industry, the economy and society. Ongoing research in nanomanufacturing is focused primarily on developing novel nanofabrication techniques for a variety of applications: materials, energy, electronics, photonics, biomedical, etc. Our goal is to foster the development of high-throughput methods of fabricating nano-enabled products. Large-area parallel processing and high-speed continuous processing are high-throughput means for mass production. An example of large-area processing is step-and-repeat nanoimprinting, by which nanostructures are reproduced again and again over a large area, such as a 12-inch wafer. Roll-to-roll processing is an example of continuous processing, by which it is possible to print and imprint multi-level nanostructures and nanodevices on a moving flexible substrate. The big pay-off is high-volume production and low unit cost. However, the anticipated cost benefits can only be realized if the increased production rate is accompanied by high yields of high-quality products. To ensure product quality, we need to design and construct manufacturing systems such that the processes can be closely monitored and controlled. One approach is to bring cyber-physical systems (CPS) concepts to nanomanufacturing. CPS involves the control of a physical system such as manufacturing through modeling, computation, communication and control. Such a closely coupled system will involve in-situ metrology and closed-loop control of the physical processes guided by physics-based models and driven by appropriate instrumentation, sensing and actuation. This paper will discuss these ideas in the context of controlling high-throughput manufacturing at the nano-scale.
Havrilla, George J.; Miller, Thomasin C.
Micro-x-ray fluorescence (MXRF) is a useful characterization tool for high-throughput screening of combinatorial libraries. Due to the increasing threat of use of chemical warfare (CW) agents both in military actions and against civilians by terrorist extremists, there is a strong push to improve existing methods and develop means for the detection of a broad spectrum of CW agents in a minimal amount of time to increase national security. This paper describes a combinatorial high-throughput screening technique for CW receptor discovery to aid in sensor development. MXRF can screen materials for elemental composition at the mesoscale level (tens to hundreds of micrometers). The key aspect of this work is the use of commercial MXRF instrumentation coupled with the inherent heteroatom elements within the target molecules of the combinatorial reaction to provide rapid and specific identification of lead species. The method is demonstrated by screening an 11-mer oligopeptide library for selective binding of the degradation products of the nerve agent VX. The identified oligopeptides can be used as selective molecular receptors for sensor development. The MXRF screening method is nondestructive, requires minimal sample preparation or special tags for analysis, and the screening time depends on the desired sensitivity.
A cryopump with a unique method of regeneration which allows continuous operation at high throughput has been constructed and tested. Deuterium was pumped continuously at a throughput of 30 Torr·L/s at a speed of 2000 L/s and a compression ratio of 200. Argon was pumped at a throughput of 60 Torr·L/s at a speed of 1275 L/s. To produce continuous operation of the pump, a method of regeneration that does not thermally cycle the pump is employed. A small chamber (the "snail") passes over the pumping surface and removes the frost from it either by mechanical action with a scraper or by local heating. The material removed is topologically in a secondary vacuum system with low conductance into the primary vacuum; thus, the exhaust can be pumped at pressures up to an effective compression ratio determined by the ratio of the pumping speed to the leakage conductance of the snail. The pump, which is all-metal-sealed and dry and which regenerates every 60 s, would be an ideal system for pumping tritium. Potential fusion applications are for pumped limiters, for repeating pneumatic pellet injection lines, and for the centrifuge pellet injector spin tank, all of which will require pumping tritium at high throughput. Industrial applications requiring ultraclean pumping of corrosive gases at high throughput, such as the reactive ion etch semiconductor process, may also be feasible.
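The figures quoted above are tied together by the standard vacuum identity Q = S·p (throughput equals pumping speed times inlet pressure), so the stated deuterium numbers imply an inlet pressure of 30/2000 = 0.015 Torr. A quick unit-bookkeeping check (the identity is conventional vacuum practice, not an analysis from the paper):

```python
def inlet_pressure(q_torr_l_s, s_l_s):
    """Q = S * p  ->  p = Q / S, pressure in Torr."""
    return q_torr_l_s / s_l_s

p_d2 = inlet_pressure(30.0, 2000.0)   # deuterium: 30 Torr.L/s at 2000 L/s
p_ar = inlet_pressure(60.0, 1275.0)   # argon: 60 Torr.L/s at 1275 L/s
```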
Beernink, Peter T [Walnut Creek, CA; Coleman, Matthew A [Oakland, CA; Segelke, Brent W [San Ramon, CA
Methods, compositions, and kits for the cell-free production and analysis of proteins are provided. The invention allows for the production of proteins from prokaryotic or eukaryotic sequences, including human cDNAs, using PCR and IVT methods and detecting the proteins through fluorescence or immunoblot techniques. This invention can be used to identify optimized PCR and IVT conditions, codon usages and mutations. The methods are readily automated and can be used for high-throughput analysis of protein expression levels, interactions, and functional states.
Gale, Molly; Yan, Qin
Lysine demethylases (KDMs) are epigenetic regulators whose dysfunction is implicated in the pathology of many human diseases including various types of cancer, inflammation and X-linked intellectual disability. Particular demethylases have been identified as promising therapeutic targets, and tremendous efforts are being devoted toward developing suitable small-molecule inhibitors for clinical and research use. Several High-throughput screening strategies have been developed to screen for small-molecule inhibitors of KDMs, each with advantages and disadvantages in terms of time, cost, effort, reliability and sensitivity. In this Special Report, we review and evaluate the High-throughput screening methods utilized for discovery of novel small-molecule KDM inhibitors.
Development of next-generation batteries requires a breakthrough in materials. The traditional one-by-one approach to synthesizing single-composition materials is time-consuming and costly. High-throughput and combinatorial experimentation is an effective way to synthesize and characterize a large number of materials over a broader compositional region in a short time, which greatly speeds up the discovery and optimization of materials at lower cost. In this work, high-throughput and combinatorial materials synthesis technologies for lithium-ion battery research are discussed, and our efforts on developing such instrumentation are introduced.
Zhu, Yanhui; Zhang, Zhiyun; Zhang, Meng; Mais, Dale E; Wang, Ming-Wei
Throughout the centuries, traditional Chinese medicine has been a rich resource in the development of new drugs. Modern drug discovery, which relies increasingly on automated high throughput screening and quick hit-to-lead development, however, is confronted with the challenges of the chemical complexity associated with natural products. New technologies for biological screening as well as library building are in great demand in order to meet the requirements. Here we review the developments in these techniques under the perspective of their applicability in natural product drug discovery. Methods in library building, component characterizing, biological evaluation, and other screening methods including NMR and X-ray diffraction are discussed.
The pharmaceutical industry has come under increasing pressure due to regulatory restrictions on the marketing and pricing of drugs, competition, and the escalating costs of developing new drugs. These forces can be addressed by the identification of novel targets, reductions in the development time of new drugs, and increased productivity. Emphasis has been placed on identifying and validating new targets and on lead generation: the response from industry has been very evident in genomics and high throughput screening, where new technologies have been applied, usually coupled with a high degree of automation. The combination of numerous new potential biological targets and the ability to screen large numbers of compounds against many of these targets has generated the need for large diverse compound collections. To address this requirement, high-throughput chemistry has become an integral part of the drug discovery process. Copyright 2002 Wiley-Liss, Inc.
Farmer, Jenny; Jacobs, Donald
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample-size-invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
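The scoring idea above rests on a classical fact: if a trial CDF F is correct, the transformed values u_(k) = F(x_(k)) of the sorted sample behave like uniform order statistics, with E[u_(k)] = k/(n+1). A minimal sketch of a quantile-residual check built on that fact (an illustration of the principle only, not the authors' scoring function):

```python
def quantile_residuals(sample, cdf):
    """Residuals u_(k) - k/(n+1) for a trial CDF; near zero when cdf fits."""
    n = len(sample)
    u = sorted(cdf(x) for x in sample)          # probability-integral transform
    return [u_k - k / (n + 1) for k, u_k in enumerate(u, start=1)]

# Perfect-fit example: these points sit exactly at the uniform order-statistic
# expectations 1/4, 2/4, 3/4, so residuals against the uniform CDF vanish.
data = [0.25, 0.5, 0.75]
res = quantile_residuals(data, lambda x: x)
```

Large or systematically signed residuals flag an atypical fit, which is the same signal the paper's universal scoring function quantifies in a sample-size-invariant way.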
Dusheyko, Serge; Furman, Craig; Pangilinan, Jasmyn; Shapiro, Harris; Tu, Hank
Metagenome data sets present a qualitatively different assembly problem than traditional single-organism whole-genome shotgun (WGS) assembly. The unique aspects of such projects include the presence of a potentially large number of distinct organisms and their representation in the data set at widely different fractions. In addition, multiple closely related strains could be present, which would be difficult to assemble separately. Failure to take these issues into account can result in poor assemblies that either jumble together different strains or which fail to yield useful results. The DOE Joint Genome Institute has sequenced a number of metagenomic projects and plans to considerably increase this number in the coming year. As a result, the JGI has a need for high-throughput tools and techniques for handling metagenome projects. We present the techniques developed to handle metagenome assemblies in a high-throughput environment. This includes a streamlined assembly wrapper, based on the JGI's in-house WGS assembler, Jazz. It also includes the selection of sensible defaults targeted for metagenome data sets, as well as quality control automation for cleaning up the raw results. While analysis is ongoing, we will discuss preliminary assessments of the quality of the assembly results (http://fames.jgi-psf.org).
Mujovic, Selman; Foster, John
The troublesome emergence of new classes of micro-pollutants, such as pharmaceuticals and endocrine disruptors, poses challenges for conventional water treatment systems. In an effort to address these contaminants and to support water reuse in drought stricken regions, new technologies must be introduced. The interaction of water with plasma rapidly mineralizes organics by inducing advanced oxidation in addition to other chemical, physical and radiative processes. The primary barrier to the implementation of plasma-based water treatment is process volume scale up. In this work, we investigate a potentially scalable, high throughput plasma water reactor that utilizes a packed bed dielectric barrier-like geometry to maximize the plasma-water interface. Here, the water serves as the dielectric medium. High-speed imaging and emission spectroscopy are used to characterize the reactor discharges. Changes in methylene blue concentration and basic water parameters are mapped as a function of plasma treatment time. Experimental results are compared to electrostatic and plasma chemistry computations, which will provide insight into the reactor's operation so that efficiency can be assessed. Supported by NSF (CBET 1336375).
In the event of an intentional or accidental release of ionizing radiation in a densely populated area, timely assessment and triage of the general population for radiation exposure is critical. In particular, a significant number of victims may sustain radiation injury, which increases mortality and worsens the overall prognosis of victims from radiation trauma. Availability of a high-throughput noninvasive in vivo biodosimetry tool for assessing the radiation exposure is of particular importance for timely diagnosis of radiation injury. In this study, we describe the potential NMR techniques in evaluating the radiation injury. NMR is the most versatile technique that has been extensively used in the diverse fields of science since its discovery. NMR and biomedical sciences have been going hand in hand since its application in clinical imaging as MRI and metabolic profiling of biofluids was identified. We have established an NMR based metabonomic and in vivo spectroscopy approach to analyse and identify metabolic profile to measure metabolic fingerprint for radiation exposure. NMR spectroscopy experiments were conducted on urine and serum samples collected from mice irradiated with different doses of radiation. Additionally, in vivo NMR spectroscopy was also performed in different region of brains post irradiation in animal model. A number of metabolites associated with energy metabolism, gut flora metabolites, osmolytes, amino acids and membrane metabolism were identified in serum and urine metabolome. Our results illustrated a metabolic fingerprint for radiation exposure that elucidates perturbed physiological functions. Quantitative as well as multivariate analysis/assessment of these metabolites demonstrated dose and time dependent toxicological effect. In vivo spectroscopy from brain showed radiation induced changes in hippocampus region indicating whole body radiation had striking effect on brain metabolism as well. The results of the present work lay a
Mohanraj, Bhavana; Hou, Chieh; Meloni, Gregory R; Cosgrove, Brian D; Dodge, George R; Mauck, Robert L
Articular cartilage enables efficient and near-frictionless load transmission, but suffers from poor inherent healing capacity. As such, cartilage tissue engineering strategies have focused on mimicking both compositional and mechanical properties of native tissue in order to provide effective repair materials for the treatment of damaged or degenerated joint surfaces. However, given the large number design parameters available (e.g. cell sources, scaffold designs, and growth factors), it is difficult to conduct combinatorial experiments of engineered cartilage. This is particularly exacerbated when mechanical properties are a primary outcome, given the long time required for testing of individual samples. High throughput screening is utilized widely in the pharmaceutical industry to rapidly and cost-effectively assess the effects of thousands of compounds for therapeutic discovery. Here we adapted this approach to develop a high throughput mechanical screening (HTMS) system capable of measuring the mechanical properties of up to 48 materials simultaneously. The HTMS device was validated by testing various biomaterials and engineered cartilage constructs and by comparing the HTMS results to those derived from conventional single sample compression tests. Further evaluation showed that the HTMS system was capable of distinguishing and identifying 'hits', or factors that influence the degree of tissue maturation. Future iterations of this device will focus on reducing data variability, increasing force sensitivity and range, as well as scaling-up to even larger (96-well) formats. This HTMS device provides a novel tool for cartilage tissue engineering, freeing experimental design from the limitations of mechanical testing throughput. © 2013 Published by Elsevier Ltd.
Soufan, Othman; Ba Alawi, Wail; Afeef, Moataz A.; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B.
Mining high-throughput screening (HTS) assays is key for enhancing decisions in the area of drug repositioning and drug discovery. However, many challenges are encountered in the process of developing suitable and accurate methods
High-Throughput Screening to Identify Compounds That Increase Fragile X Mental Retardation Protein Expression in Neural Stem Cells Differentiated From Fragile X Syndrome Patient-Derived Induced Pluripotent Stem Cells.
Kumari, Daman; Swaroop, Manju; Southall, Noel; Huang, Wenwei; Zheng, Wei; Usdin, Karen
Fragile X syndrome (FXS), the most common form of inherited cognitive disability, is caused by a deficiency of the fragile X mental retardation protein (FMRP). In most patients, the absence of FMRP is due to an aberrant transcriptional silencing of the fragile X mental retardation 1 (FMR1) gene. FXS has no cure, and the available treatments only provide symptomatic relief. Given that FMR1 gene silencing in FXS patient cells can be partially reversed by treatment with compounds that target repressive epigenetic marks, restoring FMRP expression could be one approach for the treatment of FXS. We describe a homogeneous and highly sensitive time-resolved fluorescence resonance energy transfer assay for FMRP detection in a 1,536-well plate format. Using neural stem cells differentiated from an FXS patient-derived induced pluripotent stem cell (iPSC) line that does not express any FMRP, we screened a collection of approximately 5,000 known tool compounds and approved drugs using this FMRP assay and identified 6 compounds that modestly increase FMR1 gene expression in FXS patient cells. Although none of these compounds resulted in clinically relevant levels of FMR1 mRNA, our data provide proof of principle that this assay combined with FXS patient-derived neural stem cells can be used in a high-throughput format to identify better lead compounds for FXS drug development. In this study, a specific and sensitive fluorescence resonance energy transfer-based assay for fragile X mental retardation protein detection was developed and optimized for high-throughput screening (HTS) of compound libraries using fragile X syndrome (FXS) patient-derived neural stem cells. The data suggest that this HTS format will be useful for the identification of better lead compounds for developing new therapeutics for FXS. This assay can also be adapted for FMRP detection in clinical and research settings. ©AlphaMed Press.
Blondelle, Sylvie E; Lohner, Karl
While long established as a lead compound discovery process in for-profit companies, high-throughput screening is becoming more popular in basic and applied research settings in academia. The development of combinatorial libraries, combined with easy and less expensive access to new technologies, has greatly contributed to the implementation of high-throughput screening in academic laboratories. While such techniques were earlier applied to simple assays involving single targets or based on binding affinity, they have now been extended to more complex systems such as whole cell-based assays. In particular, the urgent need for new antimicrobial compounds that would overcome the rapid rise of drug-resistant microorganisms, where multiple-target assays or cell-based assays are often required, has led scientists to focus on high-throughput technologies. Based on their existence in natural host defense systems and their different mode of action relative to commercial antibiotics, antimicrobial peptides represent a new hope for discovering novel antibiotics against multi-resistant bacteria. The ease of generating peptide libraries in different formats has allowed a rapid adaptation of high-throughput assays to the search for novel antimicrobial peptides. Similarly, the availability nowadays of high-quantity and high-quality antimicrobial peptide data has permitted the development of predictive algorithms to facilitate the optimization process. This review summarizes the various library formats that lead to de novo antimicrobial peptide sequences, as well as the latest structural knowledge and optimization processes aimed at improving the peptides' selectivity.
Pietiainen, Vilja; Saarela, Jani; von Schantz, Carina; Turunen, Laura; Ostling, Paivi; Wennerberg, Krister
The High Throughput Biomedicine (HTB) unit at the Institute for Molecular Medicine Finland FIMM was established in 2010 to serve as a national and international academic screening unit providing access to state of the art instrumentation for chemical and RNAi-based high throughput screening. The initial focus of the unit was multiwell plate based chemical screening and high content microarray-based siRNA screening. However, over the first four years of operation, the unit has moved to a more flexible service platform where both chemical and siRNA screening is performed at different scales primarily in multiwell plate-based assays with a wide range of readout possibilities with a focus on ultraminiaturization to allow for affordable screening for the academic users. In addition to high throughput screening, the equipment of the unit is also used to support miniaturized, multiplexed and high throughput applications for other types of research such as genomics, sequencing and biobanking operations. Importantly, with the translational research goals at FIMM, an increasing part of the operations at the HTB unit is being focused on high throughput systems biological platforms for functional profiling of patient cells in personalized and precision medicine projects.
Pedersen, Marlene Lemvig; Block, Ines; List, Markus
High-throughput screening is extensively applied for identification of drug targets and drug discovery, and recently it has found entry into toxicity testing. Reverse phase protein arrays (RPPAs) are widely used for quantification of protein markers. We reasoned that RPPAs can also be utilized beneficially in automated high-throughput toxicity testing. An advantage of using RPPAs is that, in addition to the baseline toxicity readout, they allow testing of multiple markers of toxicity, such as inflammatory responses, which do not necessarily culminate in cell death. We used transfection of siRNAs with known killing effects as a model system to demonstrate that RPPA-based protein quantification can serve as a substitute readout of cell viability, hereby reliably reflecting toxicity. In terms of automation, cell exposure, protein harvest, serial dilution and sample reformatting were performed using...
Shubhakar, Archana; Reiding, Karli R; Gardner, Richard A; Spencer, Daniel I R; Fernandes, Daryl L; Wuhrer, Manfred
This review covers advances in analytical technologies for high-throughput (HTP) glycomics. Our focus is on structural studies of glycoprotein glycosylation to support biopharmaceutical realization and the discovery of glycan biomarkers for human disease. For biopharmaceuticals, there is increasing use of glycomics in Quality by Design studies to help optimize glycan profiles of drugs with a view to improving their clinical performance. Glycomics is also used in comparability studies to ensure consistency of glycosylation both throughout product development and between biosimilars and innovator drugs. In clinical studies there is also expanding interest in the use of glycomics (for example in Genome Wide Association Studies) to follow changes in glycosylation patterns of biological tissues and fluids with the progress of certain diseases. These include cancers, neurodegenerative disorders and inflammatory conditions. Despite rising activity in this field, there are significant challenges in performing large-scale glycomics studies. The requirement is accurate identification and quantitation of individual glycan structures. However, glycoconjugate samples are often very complex and heterogeneous and contain many diverse branched glycan structures. In this article we cover HTP sample preparation and derivatization methods, sample purification, robotization, optimized glycan profiling by UHPLC, MS and multiplexed CE, as well as hyphenated techniques and automated data analysis tools. Throughout, we summarize the advantages and challenges with each of these technologies. The issues considered include reliability of the methods for glycan identification and quantitation, sample throughput, labor intensity, and affordability for large sample numbers.
Zhang, Haohai; Chan, Leo Li-Ying; Rice, William; Kassam, Nasim; Longhi, Maria Serena; Zhao, Haitao; Robson, Simon C; Gao, Wenda; Wu, Yan
Hybridoma screening is a critical step for antibody discovery, which necessitates prompt identification of potential clones from hundreds to thousands of hybridoma cultures against the desired immunogen. Technical issues associated with ELISA- and flow cytometry-based screening limit accuracy and diminish high-throughput capability, increasing time and cost. Conventional ELISA screening with coated antigen is also impractical for difficult-to-express hydrophobic membrane antigens or multi-chain protein complexes. Here, we demonstrate novel high-throughput screening methodology employing the Celigo Image Cytometer, which avoids nonspecific signals by contrasting antibody binding signals directly on living cells, with and without recombinant antigen expression. The image cytometry-based high-throughput screening method was optimized by detecting the binding of hybridoma supernatants to the recombinant antigen CD39 expressed on Chinese hamster ovary (CHO) cells. Next, the sensitivity of the image cytometer was demonstrated by serial dilution of purified CD39 antibody. Celigo was used to measure antibody affinities of commercial and in-house antibodies to membrane-bound CD39. This cell-based screening procedure can be completely accomplished within one day, significantly improving throughput and efficiency of hybridoma screening. Furthermore, measuring direct antibody binding to living cells eliminated both false positive and false negative hits. The image cytometry method was highly sensitive and versatile, and could detect positive antibody in supernatants at concentrations as low as ~5 ng/mL, with concurrent Kd binding affinity coefficient determination. We propose that this screening method will greatly facilitate antibody discovery and screening technologies. Copyright © 2017 Elsevier B.V. All rights reserved.
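The Kd values reported from such titrations read directly off the one-site equilibrium binding model, in which the fraction of antigen sites occupied is C/(Kd + C) for antibody concentration C. A minimal sketch of that model (the textbook isotherm, with illustrative concentrations, not data from the study):

```python
def fraction_bound(conc, kd):
    """One-site equilibrium binding: theta = C / (Kd + C)."""
    return conc / (kd + conc)

# At C = Kd, exactly half the sites are occupied; this is what a serial
# dilution series is fit against to extract Kd from binding signal.
theta_half = fraction_bound(5.0, 5.0)     # illustrative Kd = 5 (arbitrary units)
theta_high = fraction_bound(45.0, 5.0)    # 9x Kd -> 90% occupancy
```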
Nas, R.J.M.; Berkel, van C.H.
This paper presents a hardware design for a scalable, high throughput, configurable LFSR. High throughput is achieved by producing L consecutive outputs per clock cycle with a clock cycle period that, for practical cases, increases only logarithmically with the block size L and the length of the
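The idea of emitting L consecutive LFSR outputs per clock cycle can be sketched in software. The following is a minimal, hedged Python model of a Fibonacci LFSR stepped L bits per "cycle"; the width, tap positions, and seed are illustrative assumptions, not the paper's hardware design, which computes the block combinationally rather than iteratively:

```python
def lfsr_block(state, taps, n, L):
    """Advance an n-bit Fibonacci LFSR by L steps and return (new_state,
    output_bits). taps lists the tapped bit positions XORed into the
    feedback. Illustrative software model of a block-output LFSR."""
    out = []
    for _ in range(L):
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        out.append(state & 1)                    # low bit is the output
        state = (state >> 1) | (fb << (n - 1))   # shift down, feed back on top
    return state, out

# Example: 4-bit LFSR, taps [3, 0] (x^4 + x^3 + 1, maximal length),
# producing L = 8 output bits per "clock cycle"
state, bits = lfsr_block(0b1001, [3, 0], 4, 8)
```

A hardware implementation would unroll these L iterations into parallel XOR trees, which is where the logarithmic growth of the cycle period with L comes from.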
Sobral Romao, J.I.; Baiao Barata, David; Habibovic, Pamela; Mul, Guido; Baltrusaitis, Jonas
We present a novel high throughput photocatalyst efficiency assessment method based on 96-well microplates and UV-Vis spectroscopy. We demonstrate the reproducibility of the method using methyl orange (MO) decomposition, and compare kinetic data obtained with those provided in the literature for
Bakan, Ahmet; Lazo, John S; Wipf, Peter; Brummond, Kay M; Bahar, Ivet
Dual-specificity phosphatases (DSPs) are important, but poorly understood, cell signaling enzymes that remove phosphate groups from tyrosine and serine/threonine residues on their substrate. Deregulation of DSPs has been implicated in cancer, obesity, diabetes, inflammation, and Alzheimer’s disease. Due to their biological and biomedical significance, DSPs have increasingly become the subject of drug discovery high-throughput screening (HTS) and focused compound library development efforts. P...
Michael I Miller
This paper describes neuroinformatics technologies at 1 mm anatomical scale based on high throughput 3D functional and structural imaging technologies of the human brain. The core is an abstract pipeline for converting functional and structural imagery into high-dimensional neuroinformatic representations containing on the order of 10^3-10^4 discriminating dimensions. The pipeline is based on advanced image analysis coupled to digital knowledge representations in the form of dense atlases of the human brain at gross anatomical scale. We demonstrate the integration of these high-dimensional representations with machine learning methods, which have become the mainstay of other fields of science including genomics as well as social networks. Such high throughput facilities have the potential to alter the way medical images are stored and utilized in radiological workflows. The neuroinformatics pipeline is used to examine cross-sectional and personalized analyses of neuropsychiatric illnesses in clinical applications as well as longitudinal studies. We demonstrate the use of high throughput machine learning methods for supporting (i) cross-sectional image analysis to evaluate the health status of individual subjects with respect to the population data, and (ii) integration of image and non-image information for diagnosis and prognosis.
Background: The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results: This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the Compute Unified Device Architecture (CUDA) from NVIDIA to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high-end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion: MUMmerGPU is a low-cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.
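MUMmerGPU's core task is exact matching of many query reads against one indexed reference, with each query handled by an independent GPU thread. The serial stand-in below sketches just that task in Python; `exact_matches` is a hypothetical helper using naive string search in place of the suffix-tree traversal, not the tool's actual API:

```python
def exact_matches(reference, queries, min_len=10):
    """Report all exact occurrences of each query in the reference,
    skipping queries shorter than min_len. Each query is independent,
    which is what makes the real workload embarrassingly parallel."""
    hits = {}
    for q in queries:
        if len(q) < min_len:
            continue
        positions, start = [], reference.find(q)
        while start != -1:
            positions.append(start)
            start = reference.find(q, start + 1)  # allow overlapping matches
        hits[q] = positions
    return hits

# Toy reference and two short "reads"
ref = "ACGTACGTGGTACGT"
hits = exact_matches(ref, ["ACGT", "GGT"], min_len=3)
```

On the GPU, the loop over queries is replaced by one thread per query, all walking the same shared suffix tree in device memory.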
Disrupted intracellular calcium homeostasis is believed to occur early in the cascade of events leading to Alzheimer's disease (AD) pathology. In particular, familial AD mutations linked to the presenilins result in exaggerated agonist-evoked calcium release from the endoplasmic reticulum (ER). Here we report the development of a fully automated high-throughput calcium imaging assay utilizing a genetically encoded FRET-based calcium indicator at single-cell resolution for compound screening. The established high-throughput screening assay offers several advantages over conventional high-throughput calcium imaging technologies. We employed this assay for drug discovery in AD by screening compound libraries consisting of over 20,000 small molecules followed by structure-activity-relationship analysis. This led to the identification of Bepridil, a calcium channel antagonist drug, in addition to four further lead structures capable of normalizing the potentiated FAD-PS1-induced calcium release from the ER. Interestingly, it has recently been reported that Bepridil can reduce Aβ production by lowering BACE1 activity. Indeed, we also detected lowered Aβ, increased sAPPα and decreased sAPPβ fragment levels upon Bepridil treatment. The latter findings suggest that Bepridil may provide a multifactorial therapeutic modality for AD by simultaneously addressing multiple aspects of the disease.
Duong-Thi, Minh-Dao; Bergström, Maria; Fex, Tomas; Isaksson, Roland; Ohlson, Sten
Fragment screening, an emerging approach for hit finding in drug discovery, has recently been proven effective by its first approved drug, vemurafenib, for cancer treatment. Techniques such as nuclear magnetic resonance, surface plasmon resonance, and isothermal titration calorimetry, with their own pros and cons, have been employed for screening fragment libraries. As an alternative approach, screening based on high-performance liquid chromatography separation has been developed. In this work, we present weak affinity LC/MS as a method to screen fragments under high-throughput conditions. Affinity-based capillary columns with immobilized thrombin were used to screen a collection of 590 compounds from a fragment library. The collection was divided into 11 mixtures (each containing 35 to 65 fragments) and screened by MS detection. The primary screening was performed at a throughput of more than 3500 fragments per day. Thirty hits were defined, which subsequently entered a secondary screening using an active site-blocked thrombin column for confirmation of specificity. One hit showed selective binding to thrombin with an estimated dissociation constant (KD) in the 0.1 mM range. This study shows that affinity LC/MS is characterized by high throughput, ease of operation, and low consumption of target and fragments, and therefore it promises to be a valuable method for fragment screening.
Beeman, Katrin; Baumgärtner, Jens; Laubenheimer, Manuel; Hergesell, Karlheinz; Hoffmann, Martin; Pehl, Ulrich; Fischer, Frank; Pieck, Jan-Carsten
Mass spectrometry (MS) is known for its label-free detection of substrates and products from a variety of enzyme reactions. Recent hardware improvements have increased interest in the use of matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MS for high-throughput drug discovery. Despite interest in this technology, several challenges remain and must be overcome before MALDI-MS can be integrated as an automated "in-line reader" for high-throughput drug discovery. Two such hurdles include in situ sample processing and deposition, as well as integration of MALDI-MS for enzymatic screening assays that usually contain high levels of MS-incompatible components. Here we adapt our c-MET kinase assay to optimize for MALDI-MS compatibility and test its feasibility for compound screening. The pros and cons of the Echo (Labcyte) as a transfer system for in situ MALDI-MS sample preparation are discussed. We demonstrate that this method generates robust data in a 1536-grid format. We use the MALDI-MS to directly measure the ratio of c-MET substrate and phosphorylated product to acquire IC50 curves and demonstrate that the pharmacology is unaffected. The resulting IC50 values correlate well between the common label-based capillary electrophoresis and the label-free MALDI-MS detection method. We predict that label-free MALDI-MS-based high-throughput screening will become increasingly important and more widely used for drug discovery.
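The ratio-based MALDI readout described above can be normalized into a per-well percent-inhibition value before IC50 fitting. The sketch below is illustrative only: the helper names, peak intensities, and control conversion values are invented for the example, not taken from the paper's assay:

```python
def conversion(substrate_intensity, product_intensity):
    # Product fraction of total peak intensity; using a ratio cancels
    # shot-to-shot variation in absolute MALDI signal.
    return product_intensity / (substrate_intensity + product_intensity)

def percent_inhibition(sub, prod, neutral_conv, full_inhib_conv=0.0):
    """Normalize one well's substrate/product conversion against the
    uninhibited (neutral) and fully inhibited control conversions."""
    conv = conversion(sub, prod)
    return 100.0 * (neutral_conv - conv) / (neutral_conv - full_inhib_conv)

# Hypothetical well: substrate peak 600 counts, phospho-product peak 400,
# uninhibited controls convert 50% of substrate
inhib = percent_inhibition(600.0, 400.0, neutral_conv=0.5)
```

Dose-wise percent-inhibition values computed this way feed directly into a standard four-parameter IC50 fit, which is how the label-free readout ends up comparable to the capillary-electrophoresis numbers.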
Ingham, C.J.; Sprenkels, A.J.; van Hylckama Vlieg, J.E.T.; Bomer, Johan G.; de Vos, W.M.; van den Berg, Albert
The invention relates to the field of microbiology. Provided is a method which is particularly powerful for High Throughput Screening (HTS) purposes. More specific a high throughput method for determining heterogeneity or interactions of microorganisms is provided.
Slide presentation at the SETAC annual meeting on High-Throughput Screening and Modeling Approaches to Identify Steroidogenesis Disruptors.
Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy.
Ligterink, Wilco; Hilhorst, Henk W M
High-throughput analysis of seed germination for phenotyping large genetic populations or mutant collections is very labor intensive and would highly benefit from an automated setup. Although very often used, the total germination percentage after a nominated period of time is not very informative as it lacks information about start, rate, and uniformity of germination, which are highly indicative of such traits as dormancy, stress tolerance, and seed longevity. The calculation of cumulative germination curves requires information about germination percentage at various time points. We developed the GERMINATOR package: a simple, highly cost-efficient, and flexible procedure for high-throughput automatic scoring and evaluation of germination that can be implemented without the use of complex robotics. The GERMINATOR package contains three modules: (I) design of experimental setup with various options to replicate and randomize samples; (II) automatic scoring of germination based on the color contrast between the protruding radicle and seed coat on a single image; and (III) curve fitting of cumulative germination data and the extraction, recap, and visualization of the various germination parameters. GERMINATOR is a freely available package that allows the monitoring and analysis of several thousands of germination tests, several times a day by a single person.
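The abstract's point that a single end-point germination percentage hides start, rate, and uniformity can be made concrete with the parameters GERMINATOR extracts from a cumulative curve. The helper below is a hypothetical minimal sketch (not the package's own curve-fitting module): it interpolates the times to 25/50/75% of maximum germination, giving rate (t50) and uniformity (t75 − t25):

```python
def germination_params(times, cum_pct):
    """Summary parameters from a cumulative germination curve:
    maximum germination, interpolated t50 (rate), and t75 - t25
    (uniformity). Linear interpolation stands in for curve fitting."""
    gmax = max(cum_pct)

    def t_at(frac):
        target = frac * gmax
        for (t0, y0), (t1, y1) in zip(zip(times, cum_pct),
                                      zip(times[1:], cum_pct[1:])):
            if y0 <= target <= y1:
                if y1 == y0:
                    return t0
                return t0 + (t1 - t0) * (target - y0) / (y1 - y0)
        return None

    t25, t50, t75 = t_at(0.25), t_at(0.50), t_at(0.75)
    return {"gmax": gmax, "t50": t50, "uniformity": t75 - t25}

# Synthetic 12-hourly cumulative counts for one seed batch
params = germination_params([0, 12, 24, 36, 48, 60], [0, 5, 40, 80, 95, 96])
```

Two batches with identical final percentages can differ sharply in t50 and uniformity, which is exactly the information a single end-point score discards.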
Environmental chemicals can elicit endocrine disruption by altering steroid hormone biosynthesis and metabolism (steroidogenesis) causing adverse reproductive and developmental effects. Historically, a lack of assays resulted in few chemicals having been evaluated for effects on steroidogenesis. The steroidogenic pathway is a series of hydroxylation and dehydrogenation steps carried out by CYP450 and hydroxysteroid dehydrogenase enzymes, yet the only enzyme in the pathway for which a high-throughput screening (HTS) assay has been developed is aromatase (CYP19A1), responsible for the aromatization of androgens to estrogens. Recently, the ToxCast HTS program adapted the OECD validated H295R steroidogenesis assay using human adrenocortical carcinoma cells into a high-throughput model to quantitatively assess the concentration-dependent (0.003-100 µM) effects of chemicals on 10 steroid hormones including progestagens, androgens, estrogens and glucocorticoids. These results, in combination with two CYP19A1 inhibition assays, comprise a large dataset amenable to clustering approaches supporting the identification and characterization of putative mechanisms of action (pMOA) for steroidogenesis disruption. In total, 514 chemicals were tested in all CYP19A1 and steroidogenesis assays. 216 chemicals were identified as CYP19A1 inhibitors in at least one CYP19A1 assay. 208 of these chemicals also altered hormone levels in the H295R assay, suggesting 96% sensitivity in the
Traumatic joint injuries initiate acute degenerative changes in articular cartilage that can lead to progressive loss of load-bearing function. As a result, patients often develop post-traumatic osteoarthritis (PTOA), a condition for which there currently exists no biologic interventions. To address this need, tissue engineering aims to mimic the structure and function of healthy, native counterparts. These constructs can be used to not only replace degenerated tissue, but also build in vitro, pre-clinical models of disease. Towards this latter goal, this thesis focuses on the design of a high throughput system to screen new therapeutics in a micro-engineered model of PTOA, and the development of a mechanically-responsive drug delivery system to augment tissue-engineered approaches for cartilage repair. High throughput screening is a powerful tool for drug discovery that can be adapted to include 3D tissue constructs. To facilitate this process for cartilage repair, we built a high throughput mechanical injury platform to create an engineered cartilage model of PTOA. Compressive injury of functionally mature constructs increased cell death and proteoglycan loss, two hallmarks of injury observed in vivo. Comparison of this response to that of native cartilage explants, and evaluation of putative therapeutics, validated this model for subsequent use in small molecule screens. A primary screen of 118 compounds identified a number of 'hits' and relevant pathways that may modulate pathologic signaling post-injury. To complement this process of therapeutic discovery, a stimuli-responsive delivery system was designed that used mechanical inputs as the 'trigger' mechanism for controlled release. The failure thresholds of these mechanically-activated microcapsules (MAMCs) were influenced by physical properties and composition, as well as matrix mechanical properties in 3D environments. TGF-beta released from the system upon mechano-activation stimulated stem cell
Turner, Howard W.; Volpe, Anthony F., Jr.; Weinberg, W. H.
With the discovery of abundant and low cost crude oil in the early 1900's came the need to create efficient conversion processes to produce low cost fuels and basic chemicals. Enormous investment over the last century has led to the development of a set of highly efficient catalytic processes which define the modern oil refinery and which produce most of the raw materials and fuels used in modern society. Process evolution and development has led to a refining infrastructure that is both dominated and enabled by modern heterogeneous catalyst technologies. Refineries and chemical manufacturers are currently under intense pressure to improve efficiency, adapt to increasingly disadvantaged feedstocks including biomass, lower their environmental footprint, and continue to deliver their products at low cost. This pressure creates a demand for new and more robust catalyst systems and processes that can accommodate them. Traditional methods of catalyst synthesis and testing are slow and inefficient, particularly in heterogeneous systems where the structure of the active sites is typically complex and the reaction mechanism is at best ill-defined. While theoretical modeling and a growing understanding of fundamental surface science help guide the chemist in designing and synthesizing targets, even in the most well understood areas of catalysis, the parameter space that one needs to explore experimentally is vast. The result is that the chemist using traditional methods must navigate a complex and unpredictable diversity space with a limited data set to make discoveries or to optimize known systems. We describe here a mature set of synthesis and screening technologies that together form a workflow that breaks this traditional paradigm and allows for rapid and efficient heterogeneous catalyst discovery and optimization. We exemplify the power of these new technologies by describing their use in the development and commercialization of a novel catalyst for the
Yeow, Jonathan; Joshi, Sanket; Chapman, Robert; Boyer, Cyrille Andre Jean Marie
Translating controlled/living radical polymerization (CLRP) from batch to the high throughput production of polymer libraries presents several challenges in terms of both polymer synthesis and characterization. Although recently there have been significant advances in the field of low volume, high throughput CLRP, techniques able to simultaneously monitor multiple polymerizations in an "online" manner have not yet been developed. Here, we report our discovery that 5,10,15,20-tetraphenyl-21H,23H-porphine zinc (ZnTPP) is a self-reporting photocatalyst that can mediate PET-RAFT polymerization as well as report on monomer conversion via changes in its fluorescence properties. This enables the use of a microplate reader to conduct high throughput "online" monitoring of PET-RAFT polymerizations performed directly in 384-well, low volume microtiter plates. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Lu, Guoxin [Iowa State Univ., Ames, IA (United States)
High-throughput screening (HTS) techniques have been applied to many research fields. Robotic microarray printing and automated microtiter plate handling allow HTS to be performed in both heterogeneous and homogeneous formats, with minimal sample required for each assay element. In this dissertation, new HTS techniques for enzyme activity analysis were developed. First, patterns of immobilized enzyme on nylon screen were detected by a multiplexed capillary system. The imaging resolution is limited by the outer diameter of the capillaries; to obtain finer images, capillaries with smaller outer diameters can be used to form the imaging probe. Application of capillary electrophoresis allows separation of the product from the substrate in the reaction mixture, so the product need not differ from the substrate in its optical properties. UV absorption detection allows nearly universal detection of organic molecules. Thus, no modifications of either the substrate or the product molecules are necessary. This technique has the potential to be used in screening of local distribution variations of specific bio-molecules in a tissue or in screening of multiple immobilized catalysts. Another high-throughput screening technique was developed by directly monitoring the light intensity of the immobilized-catalyst surface using a scientific charge-coupled device (CCD). Briefly, the surface of the enzyme microarray is focused onto a scientific CCD using an objective lens. By carefully choosing the detection wavelength, generation of product on an enzyme spot can be seen by the CCD. Analyzing the light intensity change over time on an enzyme spot gives information on the reaction rate, and the same microarray can be used many times. Thus, high-throughput kinetic studies of hundreds of catalytic reactions are made possible. Finally, we studied the fluorescence emission spectra of ADP and obtained the detection limits for ADP under three different
De Masi, Federico; Chiarella, P.; Wilhelm, H.
Recent advances in proteomics research underscore the increasing need for high-affinity monoclonal antibodies, which are still generated with lengthy, low-throughput antibody production techniques. Here we present a semi-automated, high-throughput method of hybridoma generation and identification....... Monoclonal antibodies were raised to different targets in single batch runs of 6-10 wk using multiplexed immunisations, automated fusion and cell-culture, and a novel antigen-coated microarray-screening assay. In a large-scale experiment, where eight mice were immunized with ten antigens each, we generated...
Mancia, Filippo; Love, James
Structural genomics approaches on integral membrane proteins have been postulated for over a decade, yet specific efforts lag years behind their soluble counterparts. Indeed, high throughput methodologies for production and characterization of prokaryotic integral membrane proteins are only now emerging, while large-scale efforts for eukaryotic ones are still in their infancy. Presented here is a review of recent literature on actively ongoing structural genomics of membrane protein initiatives, with a focus on those implementing techniques aimed at increasing our rate of success for this class of macromolecules. Copyright © 2011 Elsevier Ltd. All rights reserved.
Shukla, Abhinav A; Rameez, Shahid; Wolfe, Leslie S; Oien, Nathan
The ability to conduct multiple experiments in parallel significantly reduces the time that it takes to develop a manufacturing process for a biopharmaceutical. This is particularly significant before clinical entry, because process development and manufacturing are on the "critical path" for a drug candidate to enter clinical development. High-throughput process development (HTPD) methodologies can be similarly impactful during late-stage development, both for developing the final commercial process as well as for process characterization and scale-down validation activities that form a key component of the licensure filing package. This review examines the current state of the art for HTPD methodologies as they apply to cell culture, downstream purification, and analytical techniques. In addition, we provide a vision of how HTPD activities across all of these spaces can integrate to create a rapid process development engine that can accelerate biopharmaceutical drug development.
Suram, Santosh K; Haber, Joel A; Jin, Jian; Gregoire, John M
High-throughput experimental methodologies are capable of synthesizing, screening and characterizing vast arrays of combinatorial material libraries at a very rapid rate. These methodologies strategically employ tiered screening wherein the number of compositions screened decreases as the complexity, and very often the scientific information obtained from a screening experiment, increases. The algorithm used for down-selection of samples from a higher-throughput screening experiment to a lower-throughput screening experiment is vital in achieving information-rich experimental materials genomes. The fundamental science of material discovery lies in the establishment of composition-structure-property relationships, motivating the development of advanced down-selection algorithms which consider the information value of the selected compositions, as opposed to simply selecting the best-performing compositions from a high throughput experiment. Identification of property fields (composition regions with distinct composition-property relationships) in high throughput data enables down-selection algorithms to employ advanced selection strategies, such as the selection of representative compositions from each field or selection of compositions that span the composition space of the highest-performing field. Such strategies would greatly enhance the generation of data-driven discoveries. We introduce an informatics-based clustering of composition-property functional relationships using a combination of information theory and multitree genetic programming concepts for identification of property fields in a composition library. We demonstrate our approach using a complex synthetic composition-property map for a 5 at. % step ternary library consisting of four distinct property fields and finally explore the application of this methodology for capturing relationships between composition and catalytic activity for the oxygen evolution reaction for 5429 catalyst compositions in a
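The notion of a "property field" — a composition region with its own composition-property relationship — can be illustrated with a deliberately tiny 1-D toy. The function below is a stand-in of my own, far simpler than the paper's information-theoretic, genetic-programming clustering: it just searches for the breakpoint that best splits a composition axis into two segments, each fit by its own linear trend:

```python
def split_property_fields(x, y):
    """Return the composition value where splitting the 1-D
    composition-property data into two separately fit linear 'fields'
    minimizes total squared error (toy illustration only)."""
    def sse(xs, ys):
        # Residual sum of squares of a least-squares line through (xs, ys)
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((a - mx) ** 2 for a in xs)
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        slope = sxy / sxx if sxx else 0.0
        return sum((b - my - slope * (a - mx)) ** 2 for a, b in zip(xs, ys))

    best_k = min(range(2, len(x) - 2),
                 key=lambda k: sse(x[:k], y[:k]) + sse(x[k:], y[k:]))
    return x[best_k]

# Synthetic library: property rises up to x = 0.5, then falls
xs = [i / 10 for i in range(11)]
ys = [v if v <= 0.5 else 1.0 - v for v in xs]
boundary = split_property_fields(xs, ys)
```

A down-selection strategy could then pick representative compositions from each side of the recovered boundary rather than only the top performers.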
Mohanraj, Bhavana; Meloni, Gregory R.; Mauck, Robert L.; Dodge, George R.
(1) Objective A number of in vitro models of post-traumatic osteoarthritis (PTOA) have been developed to study the effect of mechanical overload on the processes that regulate cartilage degeneration. While such frameworks are critical for the identification of therapeutic targets, existing technologies are limited in their throughput capacity. Here, we validate a test platform for high-throughput mechanical injury incorporating engineered cartilage. (2) Method We utilized a high throughput mechanical testing platform to apply injurious compression to engineered cartilage and determined their strain and strain rate dependent responses to injury. Next, we validated this response by applying the same injury conditions to cartilage explants. Finally, we conducted a pilot screen of putative PTOA therapeutic compounds. (3) Results Engineered cartilage response to injury was strain dependent, with a 2-fold increase in GAG loss at 75% compared to 50% strain. Extensive cell death was observed adjacent to fissures, with membrane rupture corroborated by marked increases in LDH release. Testing of established PTOA therapeutics showed that pan-caspase inhibitor (ZVF) was effective at reducing cell death, while the amphiphilic polymer (P188) and the free-radical scavenger (NAC) reduced GAG loss as compared to injury alone. (4) Conclusions The injury response in this engineered cartilage model replicated key features of the response from cartilage explants, validating this system for application of physiologically relevant injurious compression. This study establishes a novel tool for the discovery of mechanisms governing cartilage injury, as well as a screening platform for the identification of new molecules for the treatment of PTOA. PMID:24999113
Zapata, Pedro; Basak, Pratyay; Carson Meredith, J.
Combinatorial and high-throughput techniques have been successfully used for efficient and rapid property screening in multiple fields. The use of these techniques can be an advantageous new approach to assay ionic conductivity and accelerate the development of novel materials in research areas such as fuel cells. A high-throughput ionic conductivity (HTC) apparatus is described and applied to screening candidate polymer electrolyte membranes for fuel cell applications. The device uses a miniature four-point probe for rapid, automated point-to-point AC electrochemical impedance measurements in both liquid and humid air environments. The conductivity of Nafion 112 HTC validation standards was within 1.8% of the manufacturer's specification. HTC screening of 40 novel Kynar poly(vinylidene fluoride) (PVDF)/acrylic polyelectrolyte (PE) membranes focused on varying the Kynar type (5x) and PE composition (8x) using reduced sample sizes. Two factors were found to be significant in determining the proton conducting capacity: (1) Kynar PVDF series: membranes containing a particular Kynar PVDF type exhibited statistically identical mean conductivity as other membranes containing different Kynar PVDF types that belong to the same series or family. (2) Maximum effective amount of polyelectrolyte: increments in polyelectrolyte content from 55 wt% to 60 wt% showed no statistically significant effect in increasing conductivity. In fact, some membranes experienced a reduction in conductivity.
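The conductivity values screened by the HTC apparatus come from a standard geometric reduction of the four-point impedance measurement. As a minimal sketch (the formula is the generic in-plane relation sigma = L / (R·A); the probe spacing and membrane dimensions below are invented example numbers, not the apparatus specification):

```python
def in_plane_conductivity(r_ohm, probe_spacing_cm, width_cm, thickness_cm):
    """Proton conductivity (S/cm) from an in-plane four-point measurement:
    sigma = L / (R * A), where L is the inner-probe spacing and
    A = width * thickness is the conduction cross-section."""
    area = width_cm * thickness_cm
    return probe_spacing_cm / (r_ohm * area)

# Hypothetical membrane: R = 850 ohm between inner probes spaced 0.425 cm,
# sample 1 cm wide and 50 um (0.005 cm) thick
sigma = in_plane_conductivity(850.0, 0.425, 1.0, 0.005)
```

Because thickness enters the denominator directly, thickness mapping across a membrane library matters as much as the impedance measurement itself for point-to-point screening.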
Devender, Devender; Shen, Xumin; Duggan, Mark; Singh, Sunil; Rullan, Jonathan; Choo, Jae; Mehta, Sohan; Tang, Teck Jung; Reidy, Sean; Holt, Jonathan; Kim, Hyung Woo; Fox, Robert; Sohn, D. K.
Realizing sensitive, high throughput and robust overlay measurement is a challenge in current 14nm and advanced upcoming nodes with transition to 300mm and upcoming 450mm semiconductor manufacturing, where slight deviation in overlay has significant impact on reliability and yield [1]. Exponentially increasing number of critical masks in multi-patterning litho-etch, litho-etch (LELE) and subsequent LELELE semiconductor processes require even tighter overlay specification [2]. Here, we discuss limitations of current image- and diffraction-based overlay measurement techniques to meet these stringent processing requirements due to sensitivity, throughput and low contrast [3]. We demonstrate a new electrical measurement based technique where resistance is measured for a macro with intentional misalignment between two layers. Overlay is quantified by a parabolic fitting model to resistance where minima and inflection points are extracted to characterize overlay control and process window, respectively. Analyses using transmission electron microscopy show good correlation between actual overlay performance and overlay obtained from fitting. Additionally, excellent correlation of overlay from electrical measurements to existing image- and diffraction-based techniques is found. We also discuss challenges of integrating electrical measurement based approach in semiconductor manufacturing from Back End of Line (BEOL) perspective. Our findings open up a new pathway for accessing simultaneous overlay as well as process window and margins from a robust, high throughput and electrical measurement approach.
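The parabolic fitting step — resistance measured at programmed misalignments, with the vertex giving the actual overlay error — can be sketched directly. This is a minimal three-point exact fit for illustration (the paper fits many offsets by least squares; the offsets and resistance values below are invented):

```python
def parabola_vertex(d, r):
    """Fit R = a*x^2 + b*x + c exactly through three (offset, resistance)
    points and return the vertex x* = -b / (2a): the electrically
    inferred overlay error."""
    (x0, x1, x2), (y0, y1, y2) = d, r
    # Second divided difference gives the quadratic coefficient a
    a = ((y2 - y0) / (x2 - x0) - (y1 - y0) / (x1 - x0)) / (x2 - x1)
    # Linear coefficient from the Newton-form expansion
    b = (y1 - y0) / (x1 - x0) - a * (x0 + x1)
    return -b / (2 * a)

# Macro measured at -10, 0, +10 nm programmed offsets; resistances follow
# R = (x - 3)^2 + const, i.e. a true overlay error of +3 nm
overlay = parabola_vertex((-10.0, 0.0, 10.0), (169.0, 9.0, 49.0))
```

In practice fitting many offsets damps measurement noise, and the inflection behavior away from the vertex is what the abstract uses to characterize the process window.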
Todd, Douglas W; Philip, Rohit C; Niihori, Maki; Ringle, Ryan A; Coyle, Kelsey R; Zehri, Sobia F; Zabala, Leanne; Mudery, Jordan A; Francis, Ross H; Rodriguez, Jeffrey J; Jacob, Abraham
Zebrafish animal models lend themselves to behavioral assays that can facilitate rapid screening of ototoxic, otoprotective, and otoregenerative drugs. Structurally similar to human inner ear hair cells, the mechanosensory hair cells on their lateral line allow the zebrafish to sense water flow and orient head-to-current in a behavior called rheotaxis. This rheotaxis behavior deteriorates in a dose-dependent manner with increased exposure to the ototoxin cisplatin, thereby establishing itself as an excellent biomarker for anatomic damage to lateral line hair cells. Building on work by our group and others, we have built a new, fully automated high-throughput behavioral assay system that uses automated image analysis techniques to quantify rheotaxis behavior. This novel system consists of a custom-designed swimming apparatus and imaging system consisting of network-controlled Raspberry Pi microcomputers capturing infrared video. Automated analysis techniques detect individual zebrafish, compute their orientation, and quantify the rheotaxis behavior of a zebrafish test population, producing a powerful, high-throughput behavioral assay. Using our fully automated biological assay to test a standardized ototoxic dose of cisplatin against varying doses of compounds that protect or regenerate hair cells may facilitate rapid translation of candidate drugs into preclinical mammalian models of hearing loss.
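Once each fish's orientation has been computed from the infrared video, the population-level behavior must be reduced to a single score. The metric below is a hypothetical definition of a "rheotaxis index" (the fraction of fish heading within a tolerance of upstream); the paper's exact scoring may differ:

```python
import math

def rheotaxis_index(headings_deg, current_from_deg=0.0, tol_deg=45.0):
    """Fraction of detected fish oriented head-to-current, i.e. whose
    heading lies within tol_deg of the upstream direction. Angle
    wrap-around is handled via atan2 of the difference."""
    upstream = math.radians(current_from_deg)
    oriented = 0
    for h in headings_deg:
        d = math.radians(h) - upstream
        diff = math.degrees(math.atan2(math.sin(d), math.cos(d)))
        if abs(diff) <= tol_deg:
            oriented += 1
    return oriented / len(headings_deg)

# Five detected fish; three within 45 degrees of upstream (0 degrees)
score = rheotaxis_index([5, -30, 170, 44, 90])
```

Tracking this index over cisplatin dose then yields the dose-response biomarker for lateral-line hair-cell damage that the assay is built around.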
As high throughput screening (HTS) plays a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models designed to quantify potential adverse effects based on HTS data will benefit from additional data sources that connect the magnitude of perturbation from the in vitro system to a level of concern at the organism or population level. The adverse outcome pathway (AOP) concept provides an ideal framework for combining these complementary data. Recent international efforts under the auspices of the Organisation for Economic Co-operation and Development (OECD) have resulted in an AOP wiki designed to house formal descriptions of AOPs suitable for use in regulatory decision making. Recent efforts have built upon this to include an ontology describing the AOP with linkages to biological pathways, physiological terminology, and taxonomic applicability domains. Incorporation of an AOP network tool developed by the U.S. Army Corps of Engineers also allows consideration of cumulative risk from chemical and non-chemical stressors. Biomarkers are an important complement to formal AOP descriptions, particularly when dealing with susceptible subpopulations or lifestages in human health risk assessment. To address the issue of non-chemical stressors that may modify effects of criteria air pollutants, a novel method was used to integrate blood gene expression data with hema
Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
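The hit-call logic described here can be illustrated in miniature: resample the concentration-response points with replacement, refit a model to each resample, and read a percentile confidence interval off the distribution of fitted parameters. The sketch below uses a plain log-linear fit in place of ToxCast's actual model family; every name and number is illustrative.

```python
import math
import random

def fit_slope(points):
    """Ordinary least-squares slope of response vs. log10(concentration)."""
    xs = [math.log10(c) for c, _ in points]
    ys = [r for _, r in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

def bootstrap_slope_ci(points, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the fitted slope."""
    rng = random.Random(seed)
    slopes = []
    for _ in range(n_boot):
        sample = [rng.choice(points) for _ in points]
        # degenerate resamples (a single concentration) carry no slope information
        if len({c for c, _ in sample}) < 2:
            continue
        slopes.append(fit_slope(sample))
    slopes.sort()
    lo = slopes[int(alpha / 2 * len(slopes))]
    hi = slopes[int((1 - alpha / 2) * len(slopes)) - 1]
    return lo, hi

# Synthetic curve: response rises with concentration, plus fixed "noise" offsets.
data = [(c, 10.0 * math.log10(c) + e)
        for c, e in zip([0.1, 0.3, 1, 3, 10, 30],
                        [1.2, -0.8, 0.5, -1.1, 0.9, -0.3])]
lo, hi = bootstrap_slope_ci(data)
print(lo, hi)   # interval on the fitted slope
print(lo > 0)   # soft "hit" call: CI excludes zero
```

A hit call made this way carries its uncertainty with it: instead of a hard cutoff on a single fit, the interval shows how much the noise in the concentration-response data could move the parameter.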
Discovery of new antitumoral and antibacterial drugs from Brazilian plant extracts using high throughput screening
Riad Naim Younes
Full Text Available Plants have played a significant role in the treatment of cancer and infectious diseases for the last four decades. The discovery and introduction to market of paclitaxel, the vinca alkaloids, etoposide, and many antibacterial drugs support drug discovery programs based on natural products. Natural products have been rediscovered as important tools for drug development despite advances in combinatorial chemistry, due to their complex molecular structures able to interact with mammalian cell targets. The Brazilian flora, the most diverse in the world, has become an interesting place to prospect for new chemical leads or hits due to its species diversity and associated chemical richness. Screening programs have been established in Brazil as a strategy to identify potentially active substances. High throughput screening techniques allow for the analysis of large numbers of extracts in a relatively short period of time, and can be considered one of the most efficient ways of finding new leads from natural products. An updated review of the current status of the biological screening program is presented, and recent results from new antitumoral and antibacterial chemical leads are discussed.
Full Text Available With the rapidly increasing availability of High-Throughput Screening (HTS) data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD) have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework, BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure activity relationship (QSAR) models are built using Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Decision Trees (DTs), and Kohonen networks (KNs). Problem-specific descriptor optimization protocols are assessed, including Sequential Feature Forward Selection (SFFS) and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed.
Full Text Available Abstract Background The variations within an individual's HLA (Human Leukocyte Antigen) genes have been linked to many immunological events, e.g. susceptibility to disease, response to vaccines, and the success of blood, tissue, and organ transplants. Although the microarray format has the potential to achieve high-resolution typing, this has yet to be attained due to inefficiencies of current probe design strategies. Results We present a novel three-step approach for the design of high-throughput microarray assays for HLA typing. This approach first selects sequences containing the SNPs present in all alleles of the locus of interest. It then calculates the number of base changes necessary to convert a candidate probe sequence into the closest subsequence within the set of sequences likely to be present in the sample (including the remainder of the human genome), in order to identify those candidate probes which are "ultraspecific" for the allele of interest. Due to the high specificity of these sequences, it is possible that preliminary steps such as PCR amplification are no longer necessary. Lastly, the minimum number of these ultraspecific probes is selected such that the highest resolution typing can be achieved for the minimal cost of production. As an example, an array was designed and in silico results were obtained for typing of the HLA-B locus. Conclusion The assay presented here provides a higher resolution than has previously been developed and includes more alleles than previously considered. Based upon the in silico and preliminary experimental results, we believe that the proposed approach can be readily applied to any highly polymorphic gene system.
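The "ultraspecific" criterion can be pictured as a minimum-mismatch search: for each candidate probe, find the background window that needs the fewest base changes to match it, and keep the probe only if that minimum is large. A minimal sketch using a sliding-window Hamming distance (the real pipeline would work at genome scale with indexed search, and the threshold here is an arbitrary illustration):

```python
def min_mismatches(probe, background):
    """Fewest base mismatches between `probe` and any equal-length
    window of `background` (Hamming distance over a sliding window)."""
    k = len(probe)
    best = k
    for i in range(len(background) - k + 1):
        window = background[i:i + k]
        d = sum(1 for a, b in zip(probe, window) if a != b)
        best = min(best, d)
    return best

def is_ultraspecific(probe, background, min_distance=3):
    """Keep a probe only if every background window needs at least
    `min_distance` base changes to match it."""
    return min_mismatches(probe, background) >= min_distance

bg = "ACGTACGTTTGCACGT"
print(min_mismatches("ACGT", bg))       # → 0 (an exact match exists)
print(is_ultraspecific("GGGG", bg, 3))  # → True (no window within 2 mismatches)
```

The higher the minimum distance to everything else in the sample, the less plausible a cross-hybridization event becomes, which is what lets the authors suggest skipping PCR pre-amplification.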
The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...
Presentation at the Retinoids Review 2nd workshop in Brussels, Belgium on the application of high throughput screening and modeling to the retinoid system
High throughput toxicokinetics (HTTK) is an approach that allows for rapid estimations of TK for hundreds of environmental chemicals. HTTK-based reverse dosimetry (i.e., reverse toxicokinetics or RTK) is used to convert high throughput in vitro toxicity screening (HTS) da...
High-throughput screening (HTS) generates in vitro data for characterizing potential chemical hazard. TK models are needed to allow in vitro to in vivo extrapolation (IVIVE) to real-world situations. The U.S. EPA has created a public tool (R package “httk”) for high throughput tox...
Herbst, John; Anthony, Monique; Stewart, Jeremy; Connors, David; Chen, Taosheng; Banks, Martyn; Petrillo, Edward W; Agler, Michele
In order to identify potential cytochrome P-450 3A4 (drug-metabolizing enzyme) inducers at an early stage of the drug discovery process, a cell-based transactivation high-throughput luciferase reporter assay for the human pregnane X receptor (PXR) in HepG2 cells has been implemented and multiplexed with a viability end point for data interpretation, as part of a Lead Profiling portfolio of assays. As a routine part of Lead Profiling operations, assays are periodically evaluated for utility as well as for potential improvements in technology or process. We used a recent evaluation of our PXR-transactivation assay as a model for the application of Lean Thinking-based process analysis to lab-bench assay optimization and automation. This resulted in the development of a 384-well multiplexed homogeneous assay simultaneously detecting PXR transactivation and HepG2 cell cytotoxicity. In order to multiplex fluorescent and luminescent read-outs, modifications to each assay were necessary, which included optimization of multiple assay parameters such as cell density, plate type, and reagent concentrations. Subsequently, a set of compounds including known cytotoxic compounds and PXR inducers were used to validate the multiplexed assay. Results from the multiplexed assay correlate well with those from the singleplexed assay formats measuring PXR transactivation and viability separately. Implementation of the multiplexed assay for routine compound profiling provides improved data quality, sample conservation, cost savings, and resource efficiencies.
Peralta-Yahya, Pamela; Carter, Brian T; Lin, Hening; Tao, Haiyan; Cornish, Virginia W
Efficient enzymatic hydrolysis of lignocellulosic material remains one of the major bottlenecks to cost-effective conversion of biomass to ethanol. Improvement of glycosylhydrolases, however, is limited by existing medium-throughput screening technologies. Here, we report the first high-throughput selection for cellulase catalysts. This selection was developed by adapting chemical complementation to provide a growth assay for bond cleavage reactions. First, a URA3 counter selection was adapted to link chemical dimerizer activated gene transcription to cell death. Next, the URA3 counter selection was shown to detect cellulase activity based on cleavage of a tetrasaccharide chemical dimerizer substrate and decrease in expression of the toxic URA3 reporter. Finally, the utility of the cellulase selection was assessed by isolating cellulases with improved activity from a cellulase library created by family DNA shuffling. This application provides further evidence that chemical complementation can be readily adapted to detect different enzymatic activities for important chemical transformations for which no natural selection exists. Because of the large number of enzyme variants that selections can now test as compared to existing medium-throughput screens for cellulases, this assay has the potential to impact the discovery of improved cellulases and other glycosylhydrolases for biomass conversion from libraries of cellulases created by mutagenesis or obtained from natural biodiversity.
Scientific experiments are producing huge amounts of data, and they continue to increase the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centres is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...
Marcellin, Esteban; Nielsen, Lars Keld
The emergence of inexpensive, base-perfect genome editing is revolutionising biology. Modern industrial biotechnology exploits the advances in genome editing in combination with automation, analytics and data integration to build high-throughput automated strain engineering pipelines, also known as biofoundries. Biofoundries replace the slow and inconsistent artisanal processes used to build microbial cell factories with an automated design–build–test cycle, considerably reducing the time needed to deliver commercially viable strains. Testing, and hence learning, remains relatively shallow, but recent advances in analytical chemistry promise to increase the depth of characterization possible. Analytics combined with models of cellular physiology in automated systems biology pipelines should enable deeper learning and hence a steeper pitch of the learning cycle. This review explores the progress...
Full Text Available High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general purpose computation on a graphics processing unit (GPU) provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin – Madison, which can be leveraged for genomic selection, in terms of central processing unit (CPU) capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of
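The pipelining idea can be sketched in miniature with chained generators: each stage starts consuming batches as soon as the previous stage emits them, rather than waiting for the whole dataset to clear a stage. All stage names and the toy "model" below are placeholders, not a real genomic-selection workflow:

```python
def load_genotypes(markers):
    """Stage 1: stream marker batches instead of materialising them all."""
    for batch in markers:
        yield batch

def fit_model(batches):
    """Stage 2: hypothetical per-batch model fit (here just a sum of effects)."""
    for batch in batches:
        yield sum(batch)

def predict_merit(fits, scale=0.1):
    """Stage 3: turn fitted effects into breeding-value predictions."""
    for f in fits:
        yield round(f * scale, 3)

# Connecting generators forms a pipeline: each batch flows through all
# three stages without the whole dataset finishing any single stage first.
marker_batches = [[1, 2, 3], [4, 5], [6]]
pipeline = predict_merit(fit_model(load_genotypes(marker_batches)))
print(list(pipeline))  # → [0.6, 0.9, 0.6]
```

On a cluster, the same structure is typically realised by wiring batch jobs together (shell, Perl or R scripts, as the abstract notes), with each stage running on different nodes so throughput rises even though per-trait latency does not.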
Verdirame, Maria; Veneziano, Maria; Alfieri, Anna; Di Marco, Annalise; Monteagudo, Edith; Bonelli, Fabio
Turbulent Flow Chromatography (TFC) is a powerful approach for on-line extraction in bioanalytical studies. It improves sensitivity and reduces sample preparation time, two factors that are of primary importance in drug discovery. In this paper the application of the ARIA system to the analytical support of in vivo pharmacokinetics (PK) and in vitro drug metabolism studies is described, with an emphasis on high throughput optimization. For PK studies, a comparison between acetonitrile plasma protein precipitation (APPP) and TFC was carried out. Our optimized TFC methodology gave better S/N ratios and a lower limit of quantification (LOQ) than conventional procedures. A robust, high throughput analytical method to support hepatocyte metabolic stability screening of new chemical entities was developed by hyphenating TFC with mass spectrometry. An in-loop dilution injection procedure was implemented to overcome one of the main issues with TFC, namely the early elution of hydrophilic compounds, which results in low recoveries. A comparison between off-line solid phase extraction (SPE) and TFC was also carried out, and recovery, sensitivity (LOQ), matrix effect and robustness were evaluated. The use of two parallel columns in the configuration of the system provided a further increase in throughput.
Mandracchia, B.; Bianco, V.; Wang, Z.; Paturzo, M.; Bramanti, A.; Pioggia, G.; Ferraro, P.
Here we introduce a compact holographic microscope embedded onboard a Lab-on-a-Chip (LoC) platform. A wavefront division interferometer is realized by writing a polymer grating onto the channel to extract a reference wave from the object wave impinging on the LoC. A portion of the beam reaches the samples flowing along the channel path, carrying their information content to the recording device, while one of the diffraction orders from the grating acts as an off-axis reference wave. Polymeric micro-lenses are deposited in front of the chip by Pyro-ElectroHydroDynamic (Pyro-EHD) inkjet printing techniques. Thus, all the required optical components are embedded onboard a pocket device, and fast, non-iterative reconstruction algorithms can be used. We use our device in combination with a novel high-throughput technique, named Space-Time Digital Holography (STDH). STDH exploits the samples' motion inside microfluidic channels to obtain a synthetic hologram, mapped in a hybrid space-time domain, with intrinsically useful features. Indeed, a single Linear Sensor Array (LSA) is sufficient to build up a synthetic representation of the entire experiment (i.e. the STDH) with unlimited Field of View (FoV) along the scanning direction, independently of the magnification factor. The throughput of the imaging system is dramatically increased as STDH provides unlimited FoV and refocusable imaging of samples inside the liquid volume with no need for hologram stitching. To test our embedded STDH microscopy module, we counted, imaged and tracked in 3D, with high throughput, red blood cells moving inside the channel volume under non-ideal flow conditions.
Full Text Available Many rapid methods have been developed for screening foods for the presence of pathogenic microorganisms. Rapid methods that have the additional ability to identify microorganisms via multiplexed immunological recognition have the potential for classification or typing of microbial contaminants, thus facilitating epidemiological investigations that aim to identify outbreaks and trace back the contamination to its source. This manuscript introduces a novel, high throughput typing platform that employs microarrayed multiwell plate substrates and laser-induced fluorescence of the nucleic acid intercalating dye/stain SYBR Gold for detection of antibody-captured bacteria. The aim of this study was to use this platform for comparison of different sets of antibodies raised against the same pathogens as well as demonstrate its potential effectiveness for serotyping. To that end, two sets of antibodies raised against each of the “Big Six” non-O157 Shiga toxin-producing E. coli (STEC) as well as E. coli O157:H7 were array-printed into microtiter plates, and serial dilutions of the bacteria were added and subsequently detected. Though antibody specificity was not sufficient for the development of an STEC serotyping method, the STEC antibody sets performed reasonably well, exhibiting that specificity increased at lower capture antibody concentrations or, conversely, at lower bacterial target concentrations. The favorable results indicated that with sufficiently selective and ideally concentrated sets of biorecognition elements (e.g., antibodies or aptamers), this high-throughput platform can be used to rapidly type microbial isolates derived from food samples within ca. 80 min of total assay time. It can also potentially be used to detect the pathogens from food enrichments and at least serve as a platform for testing antibodies.
Schückel, Julia; Kracun, Stjepan Kresimir; Willats, William George Tycho
for this is that advances in genome and transcriptome sequencing, together with associated bioinformatics tools allow for rapid identification of candidate CAZymes, but technology for determining an enzyme's biochemical characteristics has advanced more slowly. To address this technology gap, a novel high-throughput assay...... CPH and ICB substrates are provided in a 96-well high-throughput assay system. The CPH substrates can be made in four different colors, enabling them to be mixed together and thus increasing assay throughput. The protocol describes a 96-well plate assay and illustrates how this assay can be used...... for screening the activities of enzymes, enzyme cocktails, and broths....
Mercier, Kelly A.; Powers, Robert
High-throughput screening (HTS) using NMR spectroscopy has become a common component of the drug discovery effort and is widely used throughout the pharmaceutical industry. NMR provides additional information about the nature of small molecule-protein interactions compared to traditional HTS methods. In order to achieve comparable efficiency, small molecules are often screened as mixtures in NMR-based assays. Nevertheless, an analysis of the efficiency of mixtures and a corresponding determination of the optimum mixture size (OMS) that minimizes the amount of material and instrumentation time required for an NMR screen has been lacking. A model for calculating OMS based on the application of the hypergeometric distribution function to determine the probability of a 'hit' for various mixture sizes and hit rates is presented. An alternative method for the deconvolution of large screening mixtures is also discussed. These methods have been applied in a high-throughput NMR screening assay using a small, directed library
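The hypergeometric argument can be made concrete: if a library of N compounds contains K actives, the probability that a random mixture of m compounds contains at least one hit is 1 − C(N−K, m)/C(N, m). A small illustration (the library size and hit rate are arbitrary; the actual OMS calculation in the paper also weighs material and instrument-time costs):

```python
from math import comb

def p_at_least_one_hit(n_library, n_hits, mixture_size):
    """Hypergeometric probability that a random mixture of `mixture_size`
    compounds drawn from a library of `n_library` (containing `n_hits`
    actives) includes at least one active: 1 - C(N-K, m) / C(N, m)."""
    if mixture_size > n_library - n_hits:
        return 1.0  # the mixture is too large to avoid every active
    return 1.0 - comb(n_library - n_hits, mixture_size) / comb(n_library, mixture_size)

# For a 1% hit-rate library, the chance of a hit grows with mixture size,
# which is what drives the trade-off behind the optimum mixture size.
for m in (1, 5, 10, 20):
    print(m, round(p_at_least_one_hit(1000, 10, m), 3))
```

Larger mixtures save instrument time per compound but raise the chance that a mixture "hits" and must be deconvoluted, so the optimum balances these two costs.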
Gurard-Levin, Zachary A; Scholle, Michael D; Eisenberg, Adam H; Mrksich, Milan
High-throughput screening is a common strategy used to identify compounds that modulate biochemical activities, but many approaches depend on cumbersome fluorescent reporters or antibodies and often produce false-positive hits. The development of "label-free" assays addresses many of these limitations, but current approaches still lack the throughput needed for applications in drug discovery. This paper describes a high-throughput, label-free assay that combines self-assembled monolayers with mass spectrometry, in a technique called SAMDI, as a tool for screening libraries of 100,000 compounds in one day. This method is fast, has high discrimination, and is amenable to a broad range of chemical and biological applications.
Brod, F.C.A.; Dijk, van J.P.; Voorhuijzen, M.M.; Dinon, A.Z.; Guimarães, L.H.S.; Scholtens, I.M.J.; Arisi, A.C.M.; Kok, E.J.
The ever-increasing production of genetically modified crops generates a demand for high-throughput DNA-based methods for the enforcement of genetically modified organisms (GMO) labelling requirements. The application of standard real-time PCR will become increasingly costly with the growth of the
Tanackovic, Vanja; Rydahl, Maja Gro; Pedersen, Henriette Lodberg
In this study we introduce the starch-recognising carbohydrate binding module family 20 (CBM20) from Aspergillus niger for screening biological variations in starch molecular structure using high throughput carbohydrate microarray technology. Defined linear, branched and phosphorylated...
Biology is increasingly data driven by virtue of the development of high-throughput technologies, such as DNA and RNA sequencing. Computational biology and bioinformatics are scientific disciplines that cross-over between the disciplines of biology, informatics and statistics; which is clearly
Wu, Xiao-Lin; Beissinger, Timothy M; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J M; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel
High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high-throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized
Fu, Jiaqi; Fernandez, Daniel; Ferrer, Marc; Titus, Steven A; Buehler, Eugen; Lal-Nag, Madhu A
The widespread use of two-dimensional (2D) monolayer cultures for high-throughput screening (HTS) to identify targets in drug discovery has led to attrition in the number of drug targets being validated. Solid tumors are complex, aberrantly growing microenvironments that harness structural components from stroma, nutrients fed through vasculature, and immunosuppressive factors. Increasing evidence of stromally-derived signaling broadens the complexity of our understanding of the tumor microenvironment while stressing the importance of developing better models that reflect these interactions. Three-dimensional (3D) models may be more sensitive to certain gene-silencing events than 2D models because of their components of hypoxia, nutrient gradients, and increased dependence on cell-cell interactions, and are therefore more representative of in vivo interactions. Colorectal cancer (CRC) and breast cancer (BC) models composed of epithelial cells only, termed single-cell-type tumor spheroids (SCTS), and multi-cell-type tumor spheroids (MCTS), containing fibroblasts, were developed for RNAi HTS in 384-well microplates with flat-bottom wells for 2D screening and round-bottom, ultra-low-attachment wells for 3D screening. We describe the development of a high-throughput assay platform that can assess physiologically relevant phenotypic differences between screening in 2D, 3D SCTS, and 3D MCTS formats in the context of different cancer subtypes. This assay platform represents a paradigm shift in how we approach drug discovery that can reduce the attrition rate of drugs that enter the clinic.
Salvo-Chirnside, Eliane; Kane, Steven; Kerr, Lorraine E
Abstract The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated p...
Thienhaus, Sigurd; Hamann, Sven; Ludwig, Alfred
Versatile high-throughput characterization tools are required for the development of new materials using combinatorial techniques. Here, we describe a modular, high-throughput test stand for the screening of thin-film materials libraries, which can carry out automated electrical, magnetic and magnetoresistance measurements in the temperature range of −40 to 300 °C. As a proof of concept, we measured the temperature-dependent resistance of Fe–Pd–Mn ferromagnetic shape-memory alloy materials libraries, revealing reversible martensitic transformations and the associated transformation temperatures. Magneto-optical screening measurements of a materials library identify ferromagnetic samples, whereas resistivity maps support the discovery of new phases. A distance sensor in the same setup allows stress measurements in materials libraries deposited on cantilever arrays. A combination of these methods offers a fast and reliable high-throughput characterization technology for searching for new materials. Using this approach, a composition region has been identified in the Fe–Pd–Mn system that combines ferromagnetism and martensitic transformation.
Wang, Daqian; Ding, Lili; Zhang, Wei; Zhang, Enyao; Yu, Xinglong; Luo, Zhaofeng; Ou, Huichao
A new high-throughput surface plasmon resonance (SPR) biosensor based on differential interferometric imaging is reported. The two SPR interferograms of the sensing surface are imaged on two CCD cameras. The phase difference between the two interferograms is 180°. The refractive index related factor (RIRF) of the sensing surface is calculated from the two simultaneously acquired interferograms. The simulation results indicate that the RIRF exhibits a linear relationship with the refractive index of the sensing surface and is unaffected by the noise, drift and intensity distribution of the light source. Affinity and kinetic information can be extracted in real time from continuously acquired RIRF distributions. Refractometry experiments show that the dynamic detection range of the SPR differential interferometric imaging system can be over 0.015 refractive index units (RIU), with a refractive index resolution down to 0.45 RU (1 RU = 1 × 10−6 RIU). Imaging and protein microarray experiments demonstrate the capability for high-throughput detection, and aptamer experiments demonstrate that the SPR sensor based on differential interferometric imaging is well suited to high-throughput aptamer kinetic evaluation. These results suggest that this biosensor has the potential to be utilized in proteomics and drug discovery after further improvement.
Dowsey, Andrew W; Dunn, Michael J; Yang, Guang-Zhong
The quest for high-throughput proteomics has revealed a number of challenges in recent years. Whilst substantial improvements in automated protein separation with liquid chromatography and mass spectrometry (LC/MS), aka 'shotgun' proteomics, have been achieved, large-scale open initiatives such as the Human Proteome Organization (HUPO) Brain Proteome Project have shown that maximal proteome coverage is only possible when LC/MS is complemented by 2D gel electrophoresis (2-DE) studies. Moreover, both separation methods require automated alignment and differential analysis to relieve the bioinformatics bottleneck and so make high-throughput protein biomarker discovery a reality. The purpose of this article is to describe a fully automatic image alignment framework for the integration of 2-DE into a high-throughput differential expression proteomics pipeline. The proposed method is based on robust automated image normalization (RAIN) to circumvent the drawbacks of traditional approaches. These use symbolic representation at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in modelling and alignment. In RAIN, a third-order volume-invariant B-spline model is incorporated into a multi-resolution schema to correct for geometric and expression inhomogeneity at multiple scales. The normalized images can then be compared directly in the image domain for quantitative differential analysis. Through evaluation against an existing state-of-the-art method on real and synthetically warped 2D gels, the proposed analysis framework demonstrates substantial improvements in matching accuracy and differential sensitivity. High-throughput analysis is established through an accelerated GPGPU (general purpose computation on graphics cards) implementation. Supplementary material, software and images used in the validation are available at http://www.proteomegrid.org/rain/.
Barker, Gregory A; Calzada, Joseph; Herzer, Sibylle; Rieble, Siegfried
High throughput process development offers unique approaches to explore complex process design spaces with relatively low material consumption. Batch chromatography is one technique that can be used to screen chromatographic conditions in a 96-well plate. Typical batch chromatography workflows examine variations in buffer conditions or compare multiple resins in a given process, as opposed to assessing protein loading conditions in combination with other factors. A modification to the batch chromatography paradigm is described here, in which experimental planning, programming, and a staggered loading approach increase the multivariate space that can be explored with a liquid handling system. The iterative batch chromatography (IBC) approach treats every well in a 96-well plate as an individual experiment, wherein protein loading conditions can be varied alongside other factors such as wash and elution buffer conditions. As all of these factors are explored in the same experiment, the interactions between them are characterized and the number of follow-up confirmatory experiments is reduced. This in turn improves statistical power and throughput. Two examples of the IBC method are shown, and the impact of the load conditions is assessed in combination with the other factors explored. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sallon, C; Soulet, D; Tremblay, Y
Weibel's research has shown that any alteration of pulmonary structure affects function. Demonstrating this required a quantitative analysis of lung structures called morphometry, made possible by stereology, a set of methods based on principles of geometry and statistics. His work has helped to better understand the morphological harmony of the lung, which is essential for its proper functioning. An imbalance leads to pathophysiology such as chronic obstructive pulmonary disease in adults and bronchopulmonary dysplasia in neonates. It is by studying this imbalance that new therapeutic approaches can be developed. These advances are achievable only through morphometric analytical methods that are increasingly precise and focused, in particular thanks to the high-throughput automation of these methods. This review compares an automated method that we developed in the laboratory with semi-manual methods of morphometric analysis. The automation of morphometric measurements is a fundamental asset in the study of pulmonary pathophysiology because it ensures robustness, reproducibility and speed. This tool will thus contribute significantly to accelerating the development of new drugs. Copyright © 2017 SPLF. Published by Elsevier Masson SAS. All rights reserved.
Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E
Scientific experiments are producing huge amounts of data, and the size of their datasets and the total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with petabyte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components of High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking, to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out features relevant to running a successful data storage and processing service in an intensive HTC environment.
Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John
Recently, machine vision has been gaining attention in food science as well as in the food industry for food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only for estimation and even prediction of food quality but also for detection of adulteration. Toward these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility and low-cost information extraction, and to speed quality assessment, without human intervention. The outcome of the image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we demonstrate its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
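The segmentation idea described above, fitting a Gaussian mixture to pixel values and assigning each pixel to a component, can be illustrated with a minimal one-dimensional EM fit. This is a sketch only: real multispectral pipelines operate on multi-band pixel vectors and typically use library implementations.

```python
import math
import random

def em_gmm_1d(xs, iters=50):
    """Minimal 2-component 1-D Gaussian mixture fitted by EM.
    Illustrative stand-in for the multispectral GMM segmentation above."""
    mu = [min(xs), max(xs)]          # crude but effective initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel value
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: update mixture weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return mu, var, pi

# Segment synthetic "pixel intensities" drawn from dark and bright regions.
random.seed(0)
xs = ([random.gauss(0.2, 0.05) for _ in range(200)]
      + [random.gauss(0.8, 0.05) for _ in range(200)])
mu, var, pi = em_gmm_1d(xs)
```

Each pixel would then be labeled with the component of highest responsibility, giving the unsupervised segmentation.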
Zarkevich, N. A.; Johnson, D. D.; Pecharsky, V. K.
The high-throughput search paradigm adopted by the newly established caloric materials consortium, CaloriCool®, with the goal of substantially accelerating the discovery and design of novel caloric materials, is briefly discussed. We begin by describing material selection criteria based on known properties, followed by fast heuristic estimates and ab initio calculations, all of which have been implemented in a set of automated computational tools and measurements. We also demonstrate how theoretical and computational methods serve as a guide for experimental efforts by considering a representative example from the field of magnetocaloric materials.
Lundberg, Martin; Thorsen, Stine Buch; Assarsson, Erika
A high-throughput protein biomarker discovery tool has been developed based on multiplexed proximity ligation assays (PLA) in a homogeneous format, i.e., with no washing steps. The platform consists of four 24-plex panels profiling 74 putative biomarkers with sub-pM sensitivity, each consuming...... sequences are united by DNA ligation upon simultaneous target binding, forming a PCR amplicon. Multiplex PLA thereby converts multiple target analytes into real-time PCR amplicons that are individually quantified using microfluidic high-capacity qPCR in nanoliter volumes. The assay shows excellent......
Conery, Annie L; Larkins-Ford, Jonah; Ausubel, Frederick M; Kirienko, Natalia V
In recent years, the nematode Caenorhabditis elegans has provided a compelling platform for the discovery of novel antimicrobial drugs. In this protocol, we present an automated, high-throughput C. elegans pathogenesis assay, which can be used to screen for anti-infective compounds that prevent nematodes from dying due to Pseudomonas aeruginosa. New antibiotics identified from such screens would be promising candidates for treatment of human infections, and also can be used as probe compounds to identify novel targets in microbial pathogenesis or host immunity. Copyright © 2014 John Wiley & Sons, Inc.
Kwon, S.W.; Park, K.M.; Kim, J.G.; Kim, I.T.; Park, S.B., E-mail: email@example.com [Korea Atomic Energy Research Inst. (Korea, Republic of)
It is very important to increase the throughput of the salt separation system owing to the high uranium content of spent nuclear fuel and the high salt fraction of uranium dendrites in pyroprocessing. In this study, a multilayer porous crucible system was proposed to increase the throughput of the salt distiller. An integrated sieve-crucible assembly was also investigated for practical use of the porous crucible system. Salt evaporation behaviors were compared between a conventional nonporous crucible and the porous crucible. A two-step weight reduction took place in the porous crucible, whereas in the nonporous crucible the salt weight decreased only at high temperature, by distillation. The first weight reduction in the porous crucible was caused by liquid salt penetrating out through the perforated crucible as the temperature was raised to the distillation temperature. Multilayer porous crucibles have the benefit of expanding the evaporation surface area.
Kodzius, Rimantas; Castro, David; Foulds, Ian G.
This work demonstrates a detection method for a high-throughput droplet-based agglutination assay system. Using simple hydrodynamic forces to mix and aggregate functionalized microbeads, we avoid the need for magnetic assistance or mixing structures. The concentration of our target molecules was estimated from agglutination strength, obtained through optical image analysis. Agglutination in droplets was performed at flow rates of 150 µl/min and occurred in under a minute, with the potential to perform high-throughput measurements. The lowest target concentration detected in droplet microfluidics was 0.17 nM, three orders of magnitude more sensitive than a conventional card-based agglutination assay.
Thousands of chemicals have been profiled by high-throughput screening programs such as ToxCast and Tox21; these chemicals are tested in part because most of them have limited or no data on hazard, exposure, or toxicokinetics. Toxicokinetic models aid in predicting tissue concent...
Thousands of chemicals have been profiled by high-throughput screening (HTS) programs such as ToxCast and Tox21. These chemicals are tested in part because there are limited or no data on hazard, exposure, or toxicokinetics (TK). TK models aid in predicting tissue concentrations ...
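As a hedged illustration of the kind of calculation a toxicokinetic model performs, the sketch below uses a generic one-compartment model (illustrative only, not the specific ToxCast/Tox21 or httk implementation); parameter values are made up:

```python
import math

def css_infusion(dose_rate_mg_per_h, clearance_L_per_h):
    """Screening-level steady-state plasma concentration (mg/L) for a
    constant dose rate: Css = rate / CL. A generic approximation."""
    return dose_rate_mg_per_h / clearance_L_per_h

def conc_iv_bolus(dose_mg, volume_L, clearance_L_per_h, t_h):
    """One-compartment IV bolus: C(t) = (Dose/V) * exp(-(CL/V) * t)."""
    ke = clearance_L_per_h / volume_L       # elimination rate constant (1/h)
    return dose_mg / volume_L * math.exp(-ke * t_h)

# Hypothetical chemical: 100 mg dose, V = 40 L, CL = 5 L/h.
c0 = conc_iv_bolus(100, 40, 5, 0)           # initial concentration, mg/L
half_life = math.log(2) / (5 / 40)          # elimination half-life, h
```

Models of this shape let HTS hit concentrations (in vitro) be compared against predicted tissue or plasma concentrations (in vivo), which is the purpose the abstracts describe.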
Geertsma, Eric R.; Poolman, Bert
We developed a generic method for high-throughput cloning in bacteria that are less amenable to conventional DNA manipulations. The method involves ligation-independent cloning in an intermediary Escherichia coli vector, which is rapidly converted via vector-backbone exchange (VBEx) into an
de Jong, R.N.; Daniëls, M.; Kaptein, R.; Folkers, G.E.
Structural and functional genomics initiatives significantly improved cloning methods over the past few years. Although recombinational cloning is highly efficient, its costs urged us to search for an alternative high throughput (HTP) cloning method. We implemented a modified Enzyme Free Cloning
Fiers, M.W.E.J.; Burgt, van der A.; Datema, E.; Groot, de J.C.W.; Ham, van R.C.H.J.
Background - Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses
Yoo, Daniel; Provchy, Justin; Park, Cynthia; Schulz, Craig; Walker, Kenneth
As the pace of drug discovery accelerates there is an increased focus on screening larger numbers of protein therapeutic candidates to identify those that are functionally superior and to assess manufacturability earlier in the process. Although there have been advances toward high throughput (HT) cloning and expression, protein purification is still an area where improvements can be made to conventional techniques. Current methodologies for purification often involve a tradeoff between HT automation or capacity and quality. We present an ÄKTA combined with an autosampler, the ÄKTA-AS, which has the capability of purifying up to 240 samples in two chromatographic dimensions without the need for user intervention. The ÄKTA-AS has been shown to be reliable with sample volumes between 0.5 mL and 100 mL, and the innovative use of a uniquely configured loading valve ensures reliability by efficiently removing air from the system as well as preventing sample cross contamination. Incorporation of a sample pump flush minimizes sample loss and enables recoveries ranging from the low tens of micrograms to milligram quantities of protein. In addition, when used in an affinity capture-buffer exchange format the final samples are formulated in a buffer compatible with most assays without requirement of additional downstream processing. The system is designed to capture samples in 96-well microplate format allowing for seamless integration of downstream HT analytic processes such as microfluidic or HPLC analysis. Most notably, there is minimal operator intervention to operate this system, thereby increasing efficiency, sample consistency and reducing the risk of human error. Copyright © 2014 Elsevier B.V. All rights reserved.
The C1a isoenzyme of horseradish peroxidase (HRP) is an industrially important heme-containing enzyme that uses hydrogen peroxide to oxidize a wide variety of inorganic and organic compounds for practical applications, including synthesis of fine chemicals, medical diagnostics, and bioremediation. To develop an ultra-high-throughput screening system for HRP, we successfully produced active HRP in an Escherichia coli cell-free protein synthesis system by adding the disulfide bond isomerase DsbC and optimizing the concentrations of hemin and calcium ions and the temperature. The biosynthesized HRP was fused with a single-chain Cro (scCro) DNA-binding tag at its N- and C-termini. The addition of the scCro tag at both ends increased the solubility of the protein. Next, HRP and its fusion proteins were successfully synthesized in a water droplet emulsion using hexadecane as the oil phase and SunSoft No. 818SK as the surfactant. The HRP fusion proteins were displayed on microbeads bearing double-stranded DNA containing the scCro binding sequence, via scCro-DNA interactions. The activities of the immobilized HRP fusion proteins were detected with a tyramide-based fluorogenic assay using flow cytometry. Moreover, a model microbead library containing wild-type hrp (WT) and inactive mutant (MUT) genes was screened using fluorescence-activated cell sorting, efficiently enriching the WT gene from a 1:100 (WT:MUT) library. The technique described here could serve as a novel platform for the ultra-high-throughput discovery of more useful HRP mutants and other heme-containing peroxidases.
Reproducible and efficient high-throughput phenotyping approaches, combined with advances in genome sequencing, are facilitating the discovery of genes affecting plant performance. Salinity tolerance is a desirable trait that can be achieved through breeding, where most efforts have aimed at selecting plants that effectively exclude ions from the shoot. To determine overall plant performance under salt stress, it is helpful to investigate several plant traits collectively in one experimental setup. Hence, we developed a quantitative phenotyping protocol using a high-throughput phenotyping system, with RGB and chlorophyll fluorescence (ChlF) imaging, which captures the growth, morphology, color and photosynthetic performance of Arabidopsis thaliana plants in response to salt stress. We optimized our salt treatment by controlling the soil-water content prior to introducing salt stress. We investigated these traits over time in two accessions in soil at 150, 100, or 50 mM NaCl, and found that plants subjected to 100 mM NaCl showed the most prominent responses in the absence of symptoms of severe stress. In these plants, salt stress induced significant changes in rosette area and morphology, but less prominent changes in rosette coloring and photosystem II efficiency. Clustering of ChlF traits with plant growth of nine accessions maintained at 100 mM NaCl revealed that in the early stage of salt stress, salinity tolerance correlated with non-photochemical quenching processes, and during the later stage, plant performance correlated with quantum yield. This integrative approach allows the simultaneous analysis of several phenotypic traits. In combination with various genetic resources, the phenotyping protocol described here is expected to increase our understanding of plant performance and stress responses, ultimately identifying genes that improve plant performance in salt stress conditions.
Ni, Jing [Iowa State Univ., Ames, IA (United States)
This work describes several research projects aimed at developing new instruments and novel methods for high throughput chemical and biological analysis. Approaches are taken in two directions. The first direction takes advantage of well-established semiconductor fabrication techniques and applies them to miniaturize instruments that are workhorses in analytical laboratories. Specifically, the first part of this work focused on the development of micropumps and microvalves for controlled fluid delivery. The mechanism of these micropumps and microvalves relies on the electrochemically-induced surface tension change at a mercury/electrolyte interface. A miniaturized flow injection analysis device was integrated and flow injection analyses were demonstrated. In the second part of this work, microfluidic chips were also designed, fabricated, and tested. Separations of two fluorescent dyes were demonstrated in microfabricated channels, based on an open-tubular liquid chromatography (OT LC) or an electrochemically-modulated liquid chromatography (EMLC) format. A reduction in instrument size can potentially increase analysis speed, and allow exceedingly small amounts of sample to be analyzed under diverse separation conditions. The second direction explores surface enhanced Raman spectroscopy (SERS) as a signal transduction method for immunoassay analysis. It takes advantage of the improved detection sensitivity as a result of surface enhancement on colloidal gold, the narrow width of Raman bands, and the stability of Raman scattering signals to distinguish several different species simultaneously without exploiting spatially-separated addresses on a biochip. By labeling gold nanoparticles with different Raman reporters in conjunction with different detection antibodies, a simultaneous detection of a dual-analyte immunoassay was demonstrated. Using this scheme for quantitative analysis was also studied and preliminary dose-response curves from an immunoassay of a
Brown, James M; Horner, Neil R; Lawson, Thomas N; Fiegel, Tanja; Greenaway, Simon; Morgan, Hugh; Ring, Natalie; Santos, Luis; Sneddon, Duncan; Teboul, Lydia; Vibert, Jennifer; Yaikhom, Gagarine; Westerberg, Henrik; Mallon, Ann-Marie
High-throughput phenotyping is a cornerstone of numerous functional genomics projects. In recent years, imaging screens have become increasingly important in understanding gene-phenotype relationships in studies of cells, tissues and whole organisms. Three-dimensional (3D) imaging has risen to prominence in the field of developmental biology for its ability to capture whole embryo morphology and gene expression, as exemplified by the International Mouse Phenotyping Consortium (IMPC). Large volumes of image data are being acquired by multiple institutions around the world that encompass a range of modalities, proprietary software and metadata. To facilitate robust downstream analysis, images and metadata must be standardized to account for these differences. As an open scientific enterprise, making the data readily accessible is essential so that members of biomedical and clinical research communities can study the images for themselves without the need for highly specialized software or technical expertise. In this article, we present a platform of software tools that facilitate the upload, analysis and dissemination of 3D images for the IMPC. Over 750 reconstructions from 80 embryonic lethal and subviable lines have been captured to date, all of which are openly accessible at mousephenotype.org. Although designed for the IMPC, all software is available under an open-source licence for others to use and develop further. Ongoing developments aim to increase throughput and improve the analysis and dissemination of image data. Furthermore, we aim to ensure that images are searchable so that users can locate relevant images associated with genes, phenotypes or human diseases of interest. © The Author 2016. Published by Oxford University Press.
Seamon, Kyle J; Light, Yooli K; Saada, Edwin A; Schoeniger, Joseph S; Harmon, Brooke
The RNA-guided DNA nuclease Cas9 is now widely used for the targeted modification of genomes of human cells and various organisms. Despite the extensive use of Clustered Regularly Interspaced Palindromic Repeats (CRISPR) systems for genome engineering and the rapid discovery and engineering of new CRISPR-associated nucleases, there are no high-throughput assays for measuring enzymatic activity. The current laboratory and future therapeutic uses of CRISPR technology have a significant risk of accidental exposure or clinical off-target effects, underscoring the need for therapeutically effective inhibitors of Cas9. Here, we develop a fluorescence assay for monitoring Cas9 nuclease activity and demonstrate its utility with S. pyogenes (Spy), S. aureus (Sau), and C. jejuni (Cje) Cas9. The assay was validated by quantitatively profiling the species specificity of published anti-CRISPR (Acr) proteins, confirming the reported inhibition of Spy Cas9 by AcrIIA4 and Cje Cas9 by AcrIIC1 and no inhibition of Sau Cas9 by either anti-CRISPR. To identify drug-like inhibitors, we performed a screen of 189 606 small molecules for inhibition of Spy Cas9. Of 437 hits (0.2% hit rate), six were confirmed as Cas9 inhibitors in a direct gel electrophoresis secondary assay. The high-throughput nature of this assay makes it broadly applicable for the discovery of additional Cas9 inhibitors or the characterization of Cas9 enzyme variants.
Sinclair, Ian; Stearns, Rick; Pringle, Steven; Wingfield, Jonathan; Datwani, Sammy; Hall, Eric; Ghislain, Luke; Majlof, Lars; Bachman, Martin
High-throughput, direct measurement of substrate-to-product conversion by label-free detection, without the need for engineered substrates or secondary assays, could be considered the "holy grail" of drug discovery screening. Mass spectrometry (MS) has the potential to be part of this ultimate screening solution, but is constrained by the limitations of existing MS sample introduction modes that cannot meet the throughput requirements of high-throughput screening (HTS). Here we report data from a prototype system (Echo-MS) that uses acoustic droplet ejection (ADE) to transfer femtoliter-scale droplets in a rapid, precise, and accurate fashion directly into the MS. The acoustic source can load samples into the MS from a microtiter plate at a rate of up to three samples per second. The resulting MS signal displays a very sharp attack profile and ions are detected within 50 ms of activation of the acoustic transducer. Additionally, we show that the system is capable of generating multiply charged ion species from simple peptides and large proteins. The combination of high speed and low sample volume has significant potential within not only drug discovery, but also other areas of the industry. © 2015 Society for Laboratory Automation and Screening.
Lee, Dennis; Barnes, Stephen
The need for new pharmacological agents is unending. Yet the drug discovery process has changed substantially over the past decade and continues to evolve in response to new technologies. There is presently a high demand to reduce discovery time by improving specific lab disciplines and developing new technology platforms in the area of cell-based assay screening. Here we present the developmental concept and early stage testing of the Ab-Sniffer, a novel fiber optic fluorescence device for high-throughput cytotoxicity screening using an immobilized whole cell approach. The fused silica fibers are chemically functionalized with biotin to provide interaction with fluorescently labeled, streptavidin functionalized alginate-chitosan microspheres. The microspheres are also functionalized with Concanavalin A to facilitate binding to living cells. By using lymphoma cells and rituximab in an adaptation of a well-known cytotoxicity protocol we demonstrate the utility of the Ab-Sniffer for functional screening of potential drug compounds rather than indirect, non-functional screening via binding assay. The platform can be extended to any assay capable of being tied to a fluorescence response including multiple target cells in each well of a multi-well plate for high-throughput screening.
A disintegrin and metalloprotease with thrombospondin type I motifs-1 (ADAMTS1) plays a crucial role in inflammatory joint diseases, and its inhibitors are potential candidates for anti-arthritis drugs. For the purposes of drug discovery, we report the development and validation of a fluorescence resonance energy transfer (FRET) assay for high-throughput screening (HTS) of ADAMTS1 inhibitors. A FRET substrate was designed for quantitative assay of ADAMTS1 activity and enzyme kinetics studies. The assay was developed into a 50-µL, 384-well format for high-throughput screening of ADAMTS1 inhibitors, with an overall Z′ factor of 0.89. ADAMTS1 inhibitors were screened against a diverse library of 40,960 compounds with the established HTS system. Four structurally related hits, the naturally occurring compounds kuwanon P, kuwanon X, albafuran C and mulberrofuran J, extracted from the Chinese herb Morus alba L., were identified for further investigation. The results suggest that this FRET assay is an excellent tool, not only for measuring ADAMTS1 activity but also for discovering novel ADAMTS1 inhibitors by HTS.
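The Z′ factor quoted above is a standard assay-quality statistic, Z′ = 1 − 3(σ_pos + σ_neg)/|μ_pos − μ_neg|, where μ and σ are the means and standard deviations of positive and negative control wells. A minimal sketch with made-up control readings:

```python
import statistics

def z_prime(pos_controls, neg_controls):
    """Z' factor for HTS assay quality:
    Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above ~0.5 are conventionally taken to indicate an excellent assay."""
    sp = statistics.stdev(pos_controls)
    sn = statistics.stdev(neg_controls)
    sep = abs(statistics.mean(pos_controls) - statistics.mean(neg_controls))
    return 1 - 3 * (sp + sn) / sep

# Tight, well-separated control readings (arbitrary units) give Z' near 1.
pos = [100.0, 101.0, 99.0, 100.5, 99.5]
neg = [10.0, 10.5, 9.5, 10.2, 9.8]
z = z_prime(pos, neg)
```

A Z′ of 0.89, as reported for the ADAMTS1 assay, corresponds to a wide separation band between controls relative to their scatter.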
Montmayeur, Anna M.; Schmidt, Alexander; Zhao, Kun; Magaña, Laura; Iber, Jane; Castro, Christina J.; Chen, Qi; Henderson, Elizabeth; Ramos, Edward; Shaw, Jing; Tatusov, Roman L.; Dybdahl-Sissoko, Naomi; Endegue-Zanga, Marie Claire; Adeniji, Johnson A.; Oberste, M. Steven; Burns, Cara C.
The poliovirus (PV) is currently targeted for worldwide eradication and containment. Sanger-based sequencing of the viral protein 1 (VP1) capsid region is currently the standard method for PV surveillance. However, the whole-genome sequence is sometimes needed for higher resolution global surveillance. In this study, we optimized whole-genome sequencing protocols for poliovirus isolates and FTA cards using next-generation sequencing (NGS), aiming for high sequence coverage, efficiency, and throughput. We found that DNase treatment of poliovirus RNA followed by random reverse transcription (RT), amplification, and the use of the Nextera XT DNA library preparation kit produced significantly better results than other preparations. The average viral reads per total reads, a measurement of efficiency, was as high as 84.2% ± 15.6%. PV genomes covering >99 to 100% of the reference length were obtained and validated with Sanger sequencing. A total of 52 PV genomes were generated, multiplexing as many as 64 samples in a single Illumina MiSeq run. This high-throughput, sequence-independent NGS approach facilitated the detection of a diverse range of PVs, especially those in vaccine-derived polioviruses (VDPV), circulating VDPV, or immunodeficiency-related VDPV. In contrast to results from previous studies on other viruses, our results showed that filtration and nuclease treatment did not discernibly increase the sequencing efficiency of PV isolates. However, DNase treatment after nucleic acid extraction to remove host DNA significantly improved the sequencing results. This NGS method has been successfully implemented to generate PV genomes for molecular epidemiology of the most recent PV isolates. Additionally, the ability to obtain full PV genomes from FTA cards will aid in facilitating global poliovirus surveillance. PMID:27927929
Svensson, Fredrik; Afzal, Avid M; Norinder, Ulf; Bender, Andreas
Iterative screening has emerged as a promising approach to increase the efficiency of screening campaigns compared to traditional high throughput approaches. By learning from a subset of the compound library, inferences on what compounds to screen next can be made by predictive models, resulting in more efficient screening. One way to evaluate screening is to consider the cost of screening compared to the gain associated with finding an active compound. In this work, we introduce a conformal predictor coupled with a gain-cost function with the aim to maximise gain in iterative screening. Using this setup we were able to show that by evaluating the predictions on the training data, very accurate predictions on what settings will produce the highest gain on the test data can be made. We evaluate the approach on 12 bioactivity datasets from PubChem training the models using 20% of the data. Depending on the settings of the gain-cost function, the settings generating the maximum gain were accurately identified in 8-10 out of the 12 datasets. Broadly, our approach can predict what strategy generates the highest gain based on the results of the cost-gain evaluation: to screen the compounds predicted to be active, to screen all the remaining data, or not to screen any additional compounds. When the algorithm indicates that the predicted active compounds should be screened, our approach also indicates what confidence level to apply in order to maximize gain. Hence, our approach facilitates decision-making and allocation of the resources where they deliver the most value by indicating in advance the likely outcome of a screening campaign.
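The cost-gain bookkeeping behind the three possible decisions can be sketched as follows. The function and numbers are illustrative only, not the authors' conformal-prediction machinery; in their setup the expected hit counts would come from model predictions at a chosen confidence level.

```python
def expected_gain(n_pred_active, expected_hits, n_remaining, total_hits_remaining,
                  gain_per_hit, cost_per_compound):
    """Compare the three strategies described above (illustrative bookkeeping):
      1) screen only the compounds predicted to be active,
      2) screen all remaining compounds,
      3) stop screening.
    Returns (best gain, strategy name)."""
    g_pred = expected_hits * gain_per_hit - n_pred_active * cost_per_compound
    g_all = total_hits_remaining * gain_per_hit - n_remaining * cost_per_compound
    g_none = 0.0
    return max((g_pred, "screen predicted"),
               (g_all, "screen all"),
               (g_none, "stop"))

# Hypothetical campaign: an enriched predicted-active set beats screening all.
gain, strategy = expected_gain(n_pred_active=500, expected_hits=60,
                               n_remaining=10_000, total_hits_remaining=100,
                               gain_per_hit=100.0, cost_per_compound=1.0)
```

When the cost per compound rises or the model's enrichment falls, the maximum shifts toward "screen all" or "stop", mirroring the decision support the abstract describes.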
Woolf Peter J
Background: A key goal of drug discovery is to increase the throughput of small molecule screens without sacrificing screening accuracy. High-throughput screening (HTS) in drug discovery involves testing a large number of compounds in a biological assay to identify active compounds. Normally, molecules from a large compound library are tested individually to identify the activity of each molecule. Usually a small number of compounds are found to be active; however, the presence of false positive and false negative testing errors suggests that this one-drug one-assay screening strategy can be significantly improved. Pooling designs are testing schemes that test mixtures of compounds in each assay, thereby generating a screen of the whole compound library in fewer tests. By repeatedly testing compounds in different combinations, pooling designs also allow for error correction. These pooled designs, for specific experiment parameters, can be simply and efficiently created using the Shifted Transversal Design (STD) pooling algorithm. However, drug screening imposes a number of constraints that require specific modifications if this pooling approach is to be useful for practical screen designs. Results: In this paper, we introduce a pooling strategy called poolHiTS (Pooled High-Throughput Screening), which is based on the STD algorithm. In poolHiTS, we implement a limit on the number of compounds that can be mixed in a single assay. In addition, we show that the STD-based pooling strategy is limited in the error correction that it can achieve. Due to the mixing constraint, we show that it is more efficient to split a large library into smaller blocks of compounds, which are then tested using an optimized strategy repeated for each block. We package the optimal block selection algorithm into poolHiTS. The MATLAB code for the poolHiTS algorithm and the corresponding decoding strategy is also provided. Conclusion: We have produced a practical version
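The STD construction itself is compact: writing item i in base q (digits d_c), its pool in layer j is (sum over c of d_c * j^c) mod q, which guarantees any two distinct items share only a few layers. A minimal sketch follows, with error-free decoding only; poolHiTS's mixing constraint and error correction are not modeled here.

```python
def std_pools(n, q, layers):
    """Shifted Transversal Design (illustrative): item i's pool in layer j is
    (sum_c d_c * j**c) mod q, where d_c are the base-q digits of i."""
    assert layers <= q
    digits = []
    for i in range(n):
        d, x = [], i
        while x:
            d.append(x % q)
            x //= q
        digits.append(d or [0])
    design = []
    for j in range(layers):
        layer = [sum(d * pow(j, c, q) for c, d in enumerate(digits[i])) % q
                 for i in range(n)]
        design.append(layer)
    return design  # design[j][i] = pool index of item i in layer j

def decode(design, positive_pools):
    """Error-free decoding: a candidate is any item all of whose pools
    tested positive."""
    n = len(design[0])
    return [i for i in range(n)
            if all(design[j][i] in positive_pools[j] for j in range(len(design)))]

# 9 compounds, q = 3, 3 layers (9 assays of pooled mixtures instead of 9
# individual tests); compound 5 is the only active one.
design = std_pools(9, 3, 3)
positives = [{design[j][5]} for j in range(3)]
hits = decode(design, positives)
```

Because any two of the nine items collide in at most one of the three layers, the single active compound is recovered uniquely from the pattern of positive pools.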
Davidson, Edgar; Doranz, Benjamin J
Characterizing the binding sites of monoclonal antibodies (mAbs) on protein targets, their 'epitopes', can aid in the discovery and development of new therapeutics, diagnostics and vaccines. However, the speed of epitope mapping techniques has not kept pace with the increasingly large numbers of mAbs being isolated. Obtaining detailed epitope maps for functionally relevant antibodies can be challenging, particularly for conformational epitopes on structurally complex proteins. To enable rapid epitope mapping, we developed a high-throughput strategy, shotgun mutagenesis, that enables the identification of both linear and conformational epitopes in a fraction of the time required by conventional approaches. Shotgun mutagenesis epitope mapping is based on large-scale mutagenesis and rapid cellular testing of natively folded proteins. Hundreds of mutant plasmids are individually cloned, arrayed in 384-well microplates, expressed within human cells, and tested for mAb reactivity. Residues are identified as a component of a mAb epitope if their mutation (e.g. to alanine) does not support candidate mAb binding but does support that of other conformational mAbs or allows full protein function. Shotgun mutagenesis is particularly suited for studying structurally complex proteins because targets are expressed in their native form directly within human cells. Shotgun mutagenesis has been used to delineate hundreds of epitopes on a variety of proteins, including G protein-coupled receptor and viral envelope proteins. The epitopes mapped on dengue virus prM/E represent one of the largest collections of epitope information for any viral protein, and results are being used to design better vaccines and drugs. © 2014 John Wiley & Sons Ltd.
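The epitope call described above amounts to a simple per-residue filter: the candidate mAb loses binding to the mutant while conformational control mAbs still bind, showing the mutant protein is folded. A hedged sketch follows; the thresholds and names are illustrative, not taken from the paper.

```python
# Illustrative thresholds (fractions of wild-type reactivity); the
# actual cutoffs used in shotgun mutagenesis screens may differ.
LOSS_CUTOFF = 0.30   # candidate mAb binding considered lost below this
FOLD_CUTOFF = 0.70   # control mAbs must retain at least this much binding

def is_epitope_residue(candidate_binding, control_bindings):
    """Call a mutated residue part of the epitope when the candidate mAb
    loses reactivity while conformational control mAbs still bind
    (i.e., the loss is specific, not global misfolding)."""
    folded = all(b >= FOLD_CUTOFF for b in control_bindings)
    lost = candidate_binding <= LOSS_CUTOFF
    return folded and lost

print(is_epitope_residue(0.05, [0.90, 0.85]))  # True: specific loss of binding
print(is_epitope_residue(0.05, [0.10, 0.20]))  # False: mutant likely misfolded
```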
Reiser, Vladimír; Smith, Ryan C; Xue, Jiyan; Kurtz, Marc M; Liu, Rong; Legrand, Cheryl; He, Xuanmin; Yu, Xiang; Wong, Peggy; Hinchcliffe, John S; Tanen, Michael R; Lazar, Gloria; Zieba, Renata; Ichetovkin, Marina; Chen, Zhu; O'Neill, Edward A; Tanaka, Wesley K; Marton, Matthew J; Liao, Jason; Morris, Mark; Hailman, Eric; Tokiwa, George Y; Plump, Andrew S
With expanding biomarker discovery efforts and increasing costs of drug development, it is critical to maximize the value of mass-limited clinical samples. The main limitation of available methods is the inability to isolate and analyze, from a single sample, molecules requiring incompatible extraction methods. Thus, we developed a novel semiautomated method for tissue processing and tissue milling and division (TMAD). We used a SilverHawk atherectomy catheter to collect atherosclerotic plaques from patients requiring peripheral atherectomy. Tissue preservation by flash freezing was compared with immersion in RNAlater®, and tissue grinding by traditional mortar and pestle was compared with TMAD. Comparators were protein, RNA, and lipid yield and quality. Reproducibility of analyte yield from aliquots of the same tissue sample processed by TMAD was also measured. The quantity and quality of biomarkers extracted from tissue prepared by TMAD was at least as good as that extracted from tissue stored and prepared by traditional means. TMAD enabled parallel analysis of gene expression (quantitative reverse-transcription PCR, microarray), protein composition (ELISA), and lipid content (biochemical assay) from as little as 20 mg of tissue. The mean correlation was r = 0.97 in molecular composition (RNA, protein, or lipid) between aliquots of individual samples generated by TMAD. We also demonstrated that it is feasible to use TMAD in a large-scale clinical study setting. The TMAD methodology described here enables semiautomated, high-throughput sampling of small amounts of heterogeneous tissue specimens by multiple analytical techniques with generally improved quality of recovered biomolecules.
Kupershmidt, Ilya; Su, Qiaojuan Jane; Grewal, Anoop; Sundaresh, Suman; Halperin, Inbal; Flynn, James; Shekar, Mamatha; Wang, Helen; Park, Jenny; Cui, Wenwu; Wall, Gregory D; Wisotzkey, Robert; Alag, Satnam; Akhtari, Saeid; Ronaghi, Mostafa
The investigation of the interconnections between the molecular and genetic events that govern biological systems is essential if we are to understand the development of disease and design effective novel treatments. Microarray and next-generation sequencing technologies have the potential to provide this information. However, taking full advantage of these approaches requires that biological connections be made across large quantities of highly heterogeneous genomic datasets. Leveraging the increasingly huge quantities of genomic data in the public domain is fast becoming one of the key challenges in the research community today. We have developed a novel data mining framework that enables researchers to use this growing collection of public high-throughput data to investigate any set of genes or proteins. The connectivity between molecular states across thousands of heterogeneous datasets from microarrays and other genomic platforms is determined through a combination of rank-based enrichment statistics, meta-analyses, and biomedical ontologies. We address data quality concerns through dataset replication and meta-analysis and ensure that the majority of the findings are derived using multiple lines of evidence. As an example of our strategy and the utility of this framework, we apply our data mining approach to explore the biology of brown fat within the context of the thousands of publicly available gene expression datasets. Our work presents a practical strategy for organizing, mining, and correlating global collections of large-scale genomic data to explore normal and disease biology. Using a hypothesis-free approach, we demonstrate how a data-driven analysis across very large collections of genomic data can reveal novel discoveries and evidence to support existing hypotheses.
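A toy version of the kind of rank-based enrichment statistic such frameworks combine: score a gene set against a dataset reduced to a ranked gene list, using the mean rank of the set versus its null expectation. This is a sketch of the general idea, not the framework's actual scoring method.

```python
import math

def enrichment_z(ranked_genes, gene_set):
    """Z-score of the mean rank of gene_set (rank 1 = top of the list)
    against uniformly scattered ranks; strongly negative z means the
    set is enriched near the top of the ranking."""
    n = len(ranked_genes)
    ranks = [i + 1 for i, g in enumerate(ranked_genes) if g in gene_set]
    m = len(ranks)
    mean_rank = sum(ranks) / m
    null_mean = (n + 1) / 2
    # Variance of the mean of m ranks drawn without replacement from 1..n.
    null_sd = math.sqrt((n + 1) * (n - m) / (12 * m))
    return (mean_rank - null_mean) / null_sd

ranked = [f"g{i}" for i in range(100)]          # a toy ranked list
print(enrichment_z(ranked, {"g0", "g1", "g2", "g3"}))  # ~ -3.4: top-enriched
```

Aggregating such z-scores across many independent datasets (meta-analysis) is what lets multiple weak lines of evidence reinforce one another.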
Vinner, Lasse; Mourier, Tobias; Friis-Nielsen, Jens
-stringency in-solution hybridization method enables discovery of hitherto unknown viral sequences by high-throughput sequencing. The sensitivity was sufficient to detect retroviral sequences in clinical samples. We used this method to conduct an investigation for novel retroviruses in samples from three cancer types. In accordance with recent studies, our investigation revealed no retroviral infections in human B-cell lymphoma cells, cutaneous T-cell lymphoma or colorectal cancer...
The Discovery Dome is a portable full-dome theater that plays professionally-created science films. Developed by the Houston Museum of Natural Science and Rice University, this inflatable planetarium offers a state-of-the-art visual learning experience that can address many different fields of science for any grade level. It surrounds students with roaring dinosaurs, fascinating planets, and explosive storms - all immersive, engaging, and realistic. Dickinson State University has chosen to utilize its Discovery Dome to address Earth Science education at two levels. University courses across the science disciplines can use the Discovery Dome as part of their curriculum. The digital shows immerse the students in various topics ranging from astronomy to geology to weather and climate. The dome has proven to be a valuable tool for introducing new material to students as well as for reinforcing concepts previously covered in lectures or laboratory settings. The Discovery Dome also serves as an amazing science public-outreach tool. University students are trained to run the dome, and they travel with it to schools and libraries around the region. During the 2013-14 school year, our Discovery Dome visited over 30 locations. Many of the schools visited are in rural settings which offer students few opportunities to experience state-of-the-art science technology. The school kids are extremely excited when the Discovery Dome visits their community, and they will talk about the experience for many weeks. Traveling with the dome is also very valuable for the university students who get involved in the program. They become very familiar with the science content, and they gain experience working with teachers as well as the general public. They get to share their love of science, and they get to help inspire a new generation of scientists.
Usui, Takuji; Noble, Daniel W A; O'Dea, Rose E; Fangmeier, Melissa L; Lagisz, Malgorzata; Hesselson, Daniel; Nakagawa, Shinichi
Zebrafish are increasingly used as a vertebrate model organism for various traits including swimming performance, obesity and metabolism, necessitating high-throughput protocols to generate standardized phenotypic information. Here, we propose a novel and cost-effective method for exercising zebrafish, using a coffee plunger and magnetic stirrer. To demonstrate the use of this method, we conducted a pilot experiment to show that this simple system provides repeatable estimates of maximal swim performance (intra-class correlation [ICC] = 0.34-0.41) and observe that exercise training of zebrafish on this system significantly increases their maximum swimming speed. We propose this high-throughput and reproducible system as an alternative to traditional linear chamber systems for exercising zebrafish and similarly sized fishes.
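The repeatability figure quoted above (ICC = 0.34-0.41) can be estimated from repeated swim trials with a one-way random-effects ICC. A minimal sketch follows; the trial data are invented for illustration and the formula is the standard one-way ICC(1), which may not be the exact variant used in the study.

```python
def icc_oneway(groups):
    """One-way random-effects ICC(1).
    groups: one list of trial measurements per fish (equal trial counts)."""
    k = len(groups[0])                       # trials per fish
    n = len(groups)                          # number of fish
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    ms_between = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical maximal swim speeds (cm/s), two trials per fish.
trials = [[32, 34], [28, 27], [40, 38], [30, 33]]
print(round(icc_oneway(trials), 2))  # -> 0.91 for these toy data
```

An ICC near 1 means fish differ consistently from one another across trials; values like 0.34-0.41 indicate moderate repeatability with substantial within-fish trial variation.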
Schwämmle, Veit; Braga, Thiago Verano; Roepstorff, Peter
The investigation of post-translational modifications (PTMs) represents one of the main research focuses for the study of protein function and cell signaling. Mass spectrometry instrumentation with increasing sensitivity improved protocols for PTM enrichment and recently established pipelines...... for high-throughput experiments allow large-scale identification and quantification of several PTM types. This review addresses the concurrently emerging challenges for the computational analysis of the resulting data and presents PTM-centered approaches for spectra identification, statistical analysis...
Persson, Nils E; Rafshoon, Joshua; Naghshpour, Kaylie; Fast, Tony; Chu, Ping-Hsun; McBride, Michael; Risteen, Bailey; Grover, Martha; Reichmanis, Elsa
High-throughput discovery of process-structure-property relationships in materials through an informatics-enabled empirical approach is an increasingly utilized technique in materials research due to the rapidly expanding availability of data. Here, process-structure-property relationships are extracted for the nucleation, growth, and deposition of semiconducting poly(3-hexylthiophene) (P3HT) nanofibers used in organic field effect transistors, via high-throughput image analysis. This study is performed using an automated image analysis pipeline combining existing open-source software and new algorithms, enabling the rapid evaluation of structural metrics for images of fibrillar materials, including local orientational order, fiber length density, and fiber length distributions. We observe that microfluidic processing leads to fibers that pack with unusually high density, while sonication yields fibers that pack sparsely with low alignment. This is attributed to differences in their crystallization mechanisms. P3HT nanofiber packing during thin film deposition exhibits behavior suggesting that fibers are confined to packing in two-dimensional layers. We find that fiber alignment, a feature correlated with charge carrier mobility, is driven by increasing fiber length, and that shorter fibers tend to segregate to the buried dielectric interface during deposition, creating potentially performance-limiting defects in alignment. Another barrier to perfect alignment is the curvature of P3HT fibers; we propose a mechanistic simulation of fiber growth that reconciles both this curvature and the log-normal distribution of fiber lengths inherent to the fiber populations under consideration.
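One of the structural metrics mentioned, local orientational order, can be sketched as a 2D nematic order parameter computed over detected fiber angles: S = 1 for perfect alignment, S near 0 for isotropic orientations. This is an illustration of the metric's idea, not the paper's exact image-analysis pipeline.

```python
import math

def order_parameter(angles_rad):
    """2D nematic order parameter over fiber angles.
    Doubling the angles makes the measure invariant to fiber polarity
    (a fiber at theta and theta + pi is the same fiber)."""
    c = sum(math.cos(2 * a) for a in angles_rad) / len(angles_rad)
    s = sum(math.sin(2 * a) for a in angles_rad) / len(angles_rad)
    return math.hypot(c, s)

aligned = [0.0, 0.05, -0.04, 0.02]                       # nearly parallel fibers
isotropic = [0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4]  # evenly spread
print(order_parameter(aligned))    # close to 1
print(order_parameter(isotropic))  # 0 for this symmetric set
```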
Brito Palma, Bernardo; Fisher, Charles W; Rueff, José; Kranendonk, Michel
The formation of reactive metabolites through biotransformation is the suspected cause of many adverse drug reactions. Testing for the propensity of a drug to form reactive metabolites has increasingly become an integral part of lead-optimization strategy in drug discovery. DNA reactivity is one undesirable facet of a drug or its metabolites and can lead to increased risk of cancer and reproductive toxicity. Many drugs are metabolized by cytochromes P450 in the liver and other tissues, and these reactions can generate hard electrophiles. These hard electrophilic reactive metabolites may react with DNA and may be detected in standard in vitro genotoxicity assays; however, the majority of these assays fall short due to the use of animal-derived organ extracts that inadequately represent human metabolism. The current study describes the development of bacterial systems that efficiently detect DNA-damaging electrophilic reactive metabolites generated by human P450 biotransformation. These assays use a GFP reporter system that detects DNA damage through induction of the SOS response and a GFP reporter to control for cytotoxicity. Two human CYP1A2-competent prototypes presented here have appropriate characteristics for the detection of DNA-damaging reactive metabolites in a high-throughput manner. The advantages of this approach include a short assay time (120-180 min) with real-time measurement, sensitivity to small amounts of compound, and adaptability to a microplate format. These systems are suitable for high-throughput assays and can serve as prototypes for the development of future enhanced versions.
Heap, Rachel E; Hope, Anthony G; Pearson, Lesley-Anne; Reyskens, Kathleen M S E; McElroy, Stuart P; Hastie, C James; Porter, David W; Arthur, J Simon C; Gray, David W; Trost, Matthias
Matrix-assisted laser desorption/ionization time-of-flight (MALDI TOF) mass spectrometry has become a promising alternative for high-throughput drug discovery as new instruments offer high speed, flexibility and sensitivity, and the ability to measure physiological substrates label-free. Here we developed and applied high-throughput MALDI TOF mass spectrometry to identify inhibitors of the salt-inducible kinase (SIK) family, which are interesting drug targets in the field of inflammatory disease as they control production of the anti-inflammatory cytokine interleukin-10 (IL-10) in macrophages. Using peptide substrates in in vitro kinase assays, we show that hit identification in the MALDI TOF kinase assay correlates with indirect ADP-Hunter kinase assays. Moreover, we show that both techniques generate comparable IC50 data for a number of hit compounds and known inhibitors of SIK kinases. We further take these inhibitors to a fluorescence-based cellular assay using the SIK activity-dependent translocation of CRTC3 into the nucleus, thereby providing a complete assay pipeline for the identification of SIK kinase inhibitors in vitro and in cells. Our data demonstrate that MALDI TOF mass spectrometry is fully applicable to high-throughput kinase screening, providing label-free data comparable to that of current high-throughput fluorescence assays.
Vierna, J; Doña, J; Vizcaíno, A; Serrano, D; Jovani, R
High-throughput DNA barcoding has become essential in ecology and evolution, but some technical questions still remain. Increasing the number of PCR cycles above the routine 20-30 cycles is a common practice when working with old-type specimens, which provide little amounts of DNA, or when facing annealing issues with the primers. However, increasing the number of cycles can raise the number of artificial mutations due to polymerase errors. In this work, we sequenced 20 COI libraries in the Illumina MiSeq platform. Libraries were prepared with 40, 45, 50, 55, and 60 PCR cycles from four individuals belonging to four species of four genera of cephalopods. We found no relationship between the number of PCR cycles and the number of mutations despite using a nonproofreading polymerase. Moreover, even when using a high number of PCR cycles, the resulting number of mutations was low enough not to be an issue in the context of high-throughput DNA barcoding (but may still remain an issue in DNA metabarcoding due to chimera formation). We conclude that the common practice of increasing the number of PCR cycles should not negatively impact the outcome of a high-throughput DNA barcoding study in terms of the occurrence of point mutations.
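The intuition being tested above, that polymerase errors should accumulate with cycle number, can be put in first-order form. This is a back-of-envelope sketch only; the error rate and amplicon length below are illustrative values, not figures from the study.

```python
def expected_mutations_per_read(error_rate, cycles, amplicon_len):
    """First-order expectation: each cycle contributes on average one
    round of copying with the polymerase's per-base error rate.
    Real amplification dynamics (exponential lineages, chimeras) are
    more complicated; this is only an order-of-magnitude estimate."""
    per_base = error_rate * cycles          # expected errors per base
    return per_base * amplicon_len          # expected errors per read

# A Taq-like error rate of ~1e-5 errors/base/cycle over a ~650 bp COI barcode:
print(expected_mutations_per_read(1e-5, 60, 650))  # ~0.39 errors per read
```

Even at 60 cycles this estimate stays below one error per read, consistent with the study's finding that extra cycles need not compromise high-throughput barcoding.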
Nierychlo, Marta; Larsen, Poul; Jørgensen, Mads Koustrup
16S rRNA gene amplicon sequencing has been developed over the past few years and is now ready to use for more comprehensive studies related to plant operation and optimization thanks to short analysis time, low cost, high throughput, and high taxonomic resolution. In this study we show how 16S rRNA gene amplicon sequencing can be used to reveal factors of importance for the operation of full-scale nutrient removal plants related to settling problems and floc properties. Using optimized DNA extraction protocols, indexed primers and our in-house Illumina platform, we prepared multiple samples...... be correlated to the presence of the species that are regarded as "strong" and "weak" floc formers. In conclusion, 16S rRNA gene amplicon sequencing provides a high-throughput approach for a rapid and cheap community profiling of activated sludge that in combination with multivariate statistics can be used...
Manipulation of gene expression on a genome-wide level is one of the most important systematic tools in the post-genome era. Such manipulations have largely been enabled by expression cloning approaches using sequence-verified cDNA libraries, large-scale RNA interference libraries (shRNA or siRNA), and zinc finger nuclease technologies. More recently, the CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) and CRISPR-associated (Cas9)-mediated gene editing technology has been described that holds great promise for future use of this technology in genomic manipulation. It has been suggested that the CRISPR system has the potential to be used in high-throughput, large-scale loss-of-function screening. Here we discuss some of the challenges in engineering CRISPR/Cas genomic libraries and some of the aspects that need to be addressed in order to use this technology on a high-throughput scale.
Otis, Richard A.; Liu, Zi-Kui
One foundational component of the integrated computational materials engineering (ICME) and Materials Genome Initiative is the computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts will be presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput, first-principles calculations and the CALPHAD method along with their potential propagations to downstream ICME modeling and simulations.
Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique
A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxon-specific endogenous reference genes, from GMO constructs (screening, construct-specific, and event-specific targets), and from donor organisms. This assay avoids certain shortcomings of multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.
Engmark, Mikael; De Masi, Federico; Laustsen, Andreas Hougaard
Insight into the epitopic recognition pattern for polyclonal antivenoms is a strong tool for accurate prediction of antivenom cross-reactivity and provides a basis for design of novel antivenoms. In this work, a high-throughput approach was applied to characterize linear epitopes in 966 individual toxins from pit vipers (Crotalidae) using the ICP Crotalidae antivenom. Due to an abundance of snake venom metalloproteinases and phospholipase A2s in the venoms used for production of the investigated antivenom, this study focuses on these toxin families.
Václavík, Jan; Melich, Radek; Pintr, Pavel; Pleštil, Jan
Affordable, long-wave infrared hyperspectral imaging calls for the use of an uncooled FPA with high-throughput optics. This paper describes the design of the optical part of a stationary hyperspectral imager in a spectral range of 7-14 µm with a field of view of 20°×10°. The imager employs a push-broom method implemented with a scanning mirror. High throughput and a demand for simplicity and rigidity led to a fully refractive design with highly aspheric surfaces and off-axis positioning of the detector array. The design was optimized to exploit the machinability of infrared materials by the SPDT method and simple assembly.
Balajee, Adayabalam S.; Escalona, Maria; Smith, Tammy; Ryan, Terri; Dainiak, Nicholas
Accidental or intentional radiological or nuclear (R/N) disasters constitute a major threat around the globe that can affect tens, hundreds or thousands of humans. Currently available cytogenetic biodosimeters are time-consuming and laborious to perform, making them impractical for triage scenarios. Therefore, it is imperative to develop high-throughput techniques that will enable timely assessment of personalized dose for making an appropriate 'life-saving' clinical decision.
Human induced pluripotent stem cells, hydrogels, 3D culture, electrophysiology, high-throughput assay. ...image the 3D rat dorsal root ganglion (DRG) cultures with sufficiently low background as to detect electrically-evoked depolarization events... of voltage-sensitive dyes. We have made substantial progress in Task 4.1. We have fabricated neural fiber tracts from DRG explants and...
The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.
de Groot, Joost CW
Abstract Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: (1) a web-based graphical user interface (GUI) that enables a pipeline operator to manage the system; (2) the Scheduler, which forms the functional core of the system and which tracks what data enters the system and determines what jobs must be scheduled for execution; and (3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines.
Sagar, Dodderi Manjunatha; Korshoj, Lee Erik; Hanson, Katrina Bethany; Chowdhury, Partha Pratim; Otoupal, Peter Britton; Chatterjee, Anushree; Nagpal, Prashant
Optical techniques for molecular diagnostics or DNA sequencing generally rely on small molecule fluorescent labels, which utilize light with a wavelength of several hundred nanometers for detection. Developing a label-free optical DNA sequencing technique will require nanoscale focusing of light, a high-throughput and multiplexed identification method, and a data compression technique to rapidly identify sequences and analyze genomic heterogeneity for big datasets. Such a method should identify characteristic molecular vibrations using optical spectroscopy, especially in the "fingerprinting region" from ≈400-1400 cm⁻¹. Here, surface-enhanced Raman spectroscopy is used to demonstrate label-free identification of DNA nucleobases with multiplexed 3D plasmonic nanofocusing. While nanometer-scale mode volumes prevent identification of single nucleobases within a DNA sequence, the block optical technique can identify A, T, G, and C content in DNA k-mers. The content of each nucleotide in a DNA block can be a unique and high-throughput method for identifying sequences, genes, and other biomarkers as an alternative to single-letter sequencing. Additionally, coupling two complementary vibrational spectroscopy techniques (infrared and Raman) can improve block characterization. These results pave the way for developing a novel, high-throughput block optical sequencing method with lossy genomic data compression using k-mer identification from multiplexed optical data acquisition. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
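The distinction between single-letter sequencing and block content identification can be made concrete: a content signature records only how many of each base a k-mer contains, collapsing all orderings. The toy below counts how many distinct signatures exist versus distinct sequences; it illustrates the lossy compression inherent in content-based identification, not the paper's spectroscopic method.

```python
from collections import Counter
from itertools import product

def content_signature(kmer):
    """(count_A, count_C, count_G, count_T) for a k-mer; order-blind."""
    c = Counter(kmer)
    return tuple(c[b] for b in "ACGT")

k = 4
seqs = ["".join(p) for p in product("ACGT", repeat=k)]
signatures = {content_signature(s) for s in seqs}
# 4^4 = 256 ordered 4-mers collapse to C(7,3) = 35 content classes.
print(len(seqs), len(signatures))  # -> 256 35
```

So a block readout cannot distinguish, e.g., ACGT from TGCA, but a sequence of overlapping block signatures can still fingerprint genes and biomarkers, which is the compression trade-off the abstract describes.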
Proteomics for biomarker validation needs high-throughput instrumentation to analyze huge sets of clinical samples quantitatively and reproducibly, in minimal time and without manual experimental errors. Sample preparation, a vital step in proteomics, plays a major role in the identification and quantification of proteins from biological samples. Tryptic digestion, a major checkpoint in sample preparation for mass spectrometry-based proteomics, needs to be more accurate with rapid processing time. The present study focuses on establishing a high-throughput automated online system for proteolytic digestion and desalting of proteins from biological samples, quantitatively and qualitatively, in a reproducible manner. The present study compares online protein digestion and desalting of BSA with the conventional off-line (in-solution) method and validates the approach on a real sample for reproducibility. Proteins were identified using the SEQUEST database search engine and the data were quantified using IDEALQ software. The present study shows that the online system, capable of handling high-throughput samples in 96-well format, carries out protein digestion and peptide desalting efficiently in a reproducible and quantitative manner. Label-free quantification showed a clear increase of peptide quantities with increasing concentration, with better linearity than the off-line method. Hence we suggest that inclusion of this online system in the proteomic pipeline will be effective for quantification of proteins in comparative proteomics, where quantification is crucial.
Klimas, Aleksandra; Yu, Jinzhu; Ambrosi, Christina M.; Williams, John C.; Bien, Harold; Entcheva, Emilia
In the last two decades, many drug withdrawals from the market were due to cardiac toxicity, where unintended interactions with ion channels disrupt the heart's normal electrical function. Consequently, all new drugs must undergo preclinical testing for cardiac liability, adding to an already expensive and lengthy process. Recognition that proarrhythmic effects often result from drug action on multiple ion channels demonstrates a need for integrative and comprehensive measurements. Additionally, patient-specific therapies relying on emerging technologies employing stem-cell-derived cardiomyocytes (e.g. induced pluripotent stem-cell-derived cardiomyocytes, iPSC-CMs) require better screening methods to become practical. However, a high-throughput, cost-effective approach for cellular cardiac electrophysiology has not been feasible. Optical techniques for manipulation and recording provide a contactless means of dynamic, high-throughput testing of cells and tissues. Here, we consider the requirements for all-optical electrophysiology for drug testing, and we implement and validate OptoDyCE, a fully automated system for all-optical cardiac electrophysiology. We demonstrate the high-throughput capabilities using multicellular samples in 96-well format by combining optogenetic actuation with simultaneous fast high-resolution optical sensing of voltage or intracellular calcium. The system can also be implemented using iPSC-CMs and other cell types by delivery of optogenetic drivers, or through the modular use of dedicated light-sensitive somatic cells in conjunction with non-modified cells. OptoDyCE provides a truly modular and dynamic screening system, capable of fully automated acquisition of high-content information integral for improved discovery and development of new drugs and biologics, as well as providing a means of better understanding of electrical disturbances in the heart.
Cribbes, Scott; Kessel, Sarah; McMenemy, Scott; Qiu, Jean; Chan, Leo Li-Ying
Three-dimensional (3D) tumor models have been increasingly used to investigate and characterize cancer drug compounds. The ability to perform high-throughput screening of 3D multicellular tumor spheroids (MCTS) can highly improve the efficiency and cost-effectiveness of discovering potential cancer drug candidates. Previously, the Celigo Image Cytometer has demonstrated a novel method for high-throughput screening of 3D multicellular tumor spheroids. In this work, we employed the Celigo Image Cytometer to examine the effects of 14 cancer drug compounds on 3D MCTS of the glioblastoma cell line U87MG in 384-well plates. Using parameters such as MCTS diameter and invasion area, growth and invasion were monitored for 9 and 3 d, respectively. Furthermore, fluorescent staining with calcein AM, propidium iodide, Hoechst 33342, and caspase 3/7 was performed at day 9 posttreatment to measure viability and apoptosis. Using the kinetic and endpoint data generated, we created a novel multiparametric drug-scoring system for 3D MCTS that can be used to identify and classify potential drug candidates earlier in the drug discovery process. Furthermore, the combination of quantitative and qualitative image data can be used to delineate differences between drugs that induce cytotoxic and cytostatic effects. The 3D MCTS-based multiparametric scoring method described here can provide an alternative screening method to better qualify tested drug compounds.
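A multiparametric score of the kind described reduces several normalized assay readouts to one number per compound. The sketch below is a generic weighted average with invented metric names and weights; it illustrates the idea of combining kinetic and endpoint readouts, not the authors' actual scoring system.

```python
def drug_score(metrics, weights):
    """Weighted average of assay readouts.
    metrics: dict of readouts already normalized to [0, 1], where 1 is
    the desired effect (e.g. 1 - relative spheroid growth)."""
    total = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total

# Hypothetical normalized readouts for one candidate compound:
candidate = {"growth_inhibition": 0.8, "invasion_block": 0.6,
             "apoptosis_induction": 0.9, "low_offtarget_tox": 0.7}
weights = {"growth_inhibition": 2, "invasion_block": 1,
           "apoptosis_induction": 2, "low_offtarget_tox": 1}
print(drug_score(candidate, weights))  # ~0.78 on a 0-1 scale
```

Comparing such scores across compounds, and inspecting the individual terms, is what allows cytotoxic effects (high apoptosis) to be separated from cytostatic ones (growth inhibition without death), as the abstract describes.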
Kim, Young Hwan; Park, Geun Il; Lee, Jung Won; Jung, Jae Hoo; Kim, Ki Ho; Lee, Yong Soon; Lee, Do Youn; Kim, Su Sung
KAERI is developing a pyro-process. As a piece of process equipment, a high-throughput vol-oxidizer that can handle several tens of kg HM/batch was developed to supply U3O8 powders to an electrolytic reduction (ER) reactor. To increase the reduction yield, UO2 pellets should be converted into uniform powders. In this paper, we aim to evaluate the operability of the high-throughput vol-oxidizer. The evaluation comprised three tests: a mechanical motion test, a heating test, and a hull separation test. Using a control system, mechanical motion tests of the vol-oxidizer were conducted and heating rates were analyzed. Separation tests of hulls were also conducted to determine the recovery rate. The test results will be applied to the operability assessment of the vol-oxidizer. A study on the characteristics of the volatile gas produced during the vol-oxidation process is not included in this work.
Dong, Huina; Zu, Xin; Zheng, Ping; Zhang, Dawei
Adenosine is a major local regulator of tissue function and is industrially useful as a precursor for the production of medicinal nucleoside substances. High-throughput screening of adenosine overproducers is important for industrial microorganism breeding. An enzymatic assay of adenosine was developed by combining adenosine deaminase (ADA) with the indophenol method. ADA catalyzes the cleavage of adenosine to inosine and NH3, and the latter can be accurately determined by the indophenol method. The assay system was optimized to deliver good performance and could tolerate the addition of inorganic salts and many nutrient components to the assay mixtures. Adenosine could be accurately determined by this assay using 96-well microplates. Spike and recovery tests showed that this assay can accurately and reproducibly determine increases in adenosine in fermentation broth without any pretreatment to remove proteins and potentially interfering low-molecular-weight molecules. This assay was also applied to high-throughput screening for high adenosine-producing strains. The high selectivity and accuracy of the ADA assay provides rapid and high-throughput analysis of adenosine in large numbers of samples. PMID:25580842
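The quantitation step implied above is a standard-curve calculation: ADA converts adenosine to NH3 1:1, the indophenol dye absorbance is read in a plate reader, and unknowns are interpolated against known standards. A hedged sketch in Python; the standard concentrations and absorbance values below are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical indophenol standard curve: dye absorbance for known
# adenosine standards (ADA converts adenosine 1:1 to NH3).
standards_mM = [0.0, 0.5, 1.0, 2.0]
absorbance = [0.02, 0.27, 0.52, 1.02]
a, b = fit_line(standards_mM, absorbance)

def adenosine_mM(A):
    """Invert the standard curve for an unknown well's absorbance."""
    return (A - b) / a

print(round(adenosine_mM(0.52), 2))  # 1.0 (mM)
```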
Mazis, A.; Hiller, J.; Morgan, P.; Awada, T.; Stoerger, V.
High throughput plant phenotyping is increasingly being used to assess morphological and biophysical traits of economically important crops in agriculture. In this study, the potential application of this technique in natural resources management, through the characterization of woody plant regeneration, establishment, growth, and responses to water and nutrient manipulations, was assessed. Two woody species were selected for this study, Quercus prinoides and Quercus bicolor. Seeds were collected from trees growing at the edge of their natural distribution in Nebraska and Missouri, USA. Seeds were germinated in the greenhouse and transferred to the Nebraska Innovation Campus Lemnatec3D High Throughput facility at the University of Nebraska-Lincoln. Seedlings subjected to water and N manipulations were imaged two or three times a week using four cameras (Visible, Fluorescence, Infrared and Hyperspectral) throughout the growing season. Traditional leaf- to plant-level ecophysiological measurements were concurrently acquired to assess the relationship between these two techniques. These include gas exchange (LI-6400 and LI-6800, LI-COR Inc., Lincoln, NE), chlorophyll content, optical characteristics (Ocean Optics USB200), water and osmotic potentials, leaf area and weight, and carbon isotope ratio. In the presentation, we highlight results on the potential use of high throughput plant phenotyping techniques to assess the morphology and physiology of woody species, including responses to water availability and nutrient manipulation, and its broader application under field conditions and natural resources management. We also explore the different capabilities imaging provides for modeling plant physiological and morphological growth and how it can complement current techniques.
The MELOX plant in the south of France, together with the La Hague reprocessing plant, are the two industrial facilities in charge of closing the nuclear fuel cycle in France. Started up in 1995, MELOX has since accumulated solid know-how in recycling plutonium recovered from spent uranium fuel into MOX: a fuel blend comprised of both uranium and plutonium oxides. By converting recovered Pu into a proliferation-resistant material that can readily be used to power a civil nuclear reactor, MOX fabrication offers a sustainable solution for safely taking advantage of plutonium's high energy content. Being the first large-capacity industrial facility dedicated to MOX fuel fabrication, MELOX distinguishes itself from the first-generation MOX plants with high capacity (around 200 tHM versus around 40 tHM) and several unique operational features designed to improve productivity, reliability and flexibility while maintaining high safety standards. Providing an exemplary reference for high throughput MOX fabrication with 1,000 tHM produced since start-up, the unique process and technologies implemented at MELOX are currently inspiring other MOX plant construction projects (in Japan with the J-MOX plant, in the US and in Russia as part of the weapon-grade plutonium inventory reduction). Spurred by the growing international demand, MELOX has embarked upon an ambitious production development and diversification plan. Starting from an annual level of 100 tons of heavy metal (tHM), MELOX's demonstrated production capacity is continuously increasing: MELOX is now aiming for a minimum of 140 tHM by the end of 2005, with the ultimate ambition of reaching the full capacity of the plant (around 200 tHM) in the near future. With regard to its activity, MELOX also remains deeply committed to sustainable development in a consolidated involvement within the AREVA group. The French minister of Industry, on August 26th 2005, acknowledged the benefits of MOX fuel production at MELOX: 'In
Naouale El Yamani
The unique physicochemical properties of engineered nanomaterials (NMs) have accelerated their use in diverse industrial and domestic products. Although their presence in consumer products represents a major concern for public health safety, their potential impact on human health is poorly understood. There is therefore an urgent need to clarify the toxic effects of NMs and to elucidate the mechanisms involved. In view of the large number of NMs currently being used, high throughput (HTP) screening technologies are clearly needed for efficient assessment of toxicity. The comet assay is the most used method in nanogenotoxicity studies and has great potential for increasing throughput as it is fast, versatile and robust; simple technical modifications of the assay make it possible to test many compounds (NMs) in a single experiment. The standard gel of 70-100 μL contains thousands of cells, of which only a tiny fraction are actually scored. Reducing the gel to a volume of 5 μL, with just a few hundred cells, allows twelve gels to be set on a standard slide, or 96 as a standard 8x12 array. For the 12-gel format, standard slides precoated with agarose are placed on a metal template and gels are set on the positions marked on the template. The HTP comet assay, incorporating digestion of DNA with formamidopyrimidine DNA glycosylase (FPG) to detect oxidised purines, has recently been applied to study the potential induction of genotoxicity by NMs via reactive oxygen. In the NanoTEST project we investigated the genotoxic potential of several well-characterized metal and polymeric nanoparticles with the comet assay. All in vitro studies were harmonized; i.e. NMs were from the same batch, and identical dispersion protocols, exposure time, concentration range, culture conditions, and time-courses were used. As a kidney model, Cos-1 fibroblast-like kidney cells were treated with different concentrations of iron oxide NMs, and cells embedded in minigels (12
Beneyton, Thomas; Wijaya, I. Putu Mahendra; Postros, Prexilia; Najah, Majdi; Leblond, Pascal; Couvent, Angélique; Mayot, Estelle; Griffiths, Andrew D.; Drevelle, Antoine
Filamentous fungi are an extremely important source of industrial enzymes because of their capacity to secrete large quantities of proteins. Currently, functional screening of fungi is associated with low throughput and high costs, which severely limits the discovery of novel enzymatic activities and better production strains. Here, we describe a nanoliter-range droplet-based microfluidic system specially adapted for the high-throughput screening (HTS) of large filamentous fungi libraries for secreted enzyme activities. The platform allowed (i) compartmentalization of single spores in ~10 nL droplets, (ii) germination and mycelium growth and (iii) high-throughput sorting of fungi based on enzymatic activity. A 10^4-clone UV-mutated library of Aspergillus niger was screened based on α-amylase activity in just 90 minutes. Active clones were enriched 196-fold after a single round of microfluidic HTS. The platform is a powerful tool for the development of new production strains with low cost, space and time footprint and should bring enormous benefit for improving the viability of biotechnological processes.
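The reported 196-fold enrichment is a simple ratio of active-clone fractions before and after sorting. A sketch follows; the before/after counts are invented numbers chosen to reproduce a 196-fold factor, not data from the paper.

```python
def enrichment_factor(pos_before, total_before, pos_after, total_after):
    """Fold-enrichment of active clones after one round of sorting:
    the active fraction after sorting divided by the fraction before."""
    f_before = pos_before / total_before
    f_after = pos_after / total_after
    return f_after / f_before

# Illustrative: 0.5% active clones before sorting, 98% after one round.
print(round(enrichment_factor(5, 1000, 98, 100), 1))  # 196.0
```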
The high-throughput framework AFLOW that has been developed and used successfully over the last decade is improved to include fully integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform with the conventions and prescriptions given in the International Tables for Crystallography (ITC). A standard cell choice with standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined and their labels and site symmetry are provided. The symmetry code makes no assumptions on the input cell orientation, origin, or reduction and has been integrated into the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis and an examination of the algorithms' scaling with cell size and symmetry are also reported.
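As a toy illustration of what lattice symmetry characterization involves (not AFLOW's actual algorithm), one can brute-force the point-group order of a Bravais lattice by counting the integer rotation matrices that preserve its metric tensor G, i.e. R^T G R = G. A cubic lattice admits 48 such operations; stretching the c axis reduces the count.

```python
from itertools import product

def lattice_point_group_order(G):
    """Count integer matrices R with entries in {-1, 0, 1} that
    preserve the lattice metric tensor G, i.e. R^T G R == G.
    Brute force over all 3^9 candidate matrices; sufficient for
    the diagonal metrics used in this illustration."""
    ops = 0
    for flat in product((-1, 0, 1), repeat=9):
        R = [flat[0:3], flat[3:6], flat[6:9]]
        # GR = G @ R, then RtGR = R^T @ GR
        GR = [[sum(G[i][k] * R[k][j] for k in range(3)) for j in range(3)]
              for i in range(3)]
        RtGR = [[sum(R[k][i] * GR[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
        if RtGR == G:
            ops += 1
    return ops

cubic = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]       # a = b = c, all angles 90°
tetragonal = [[1, 0, 0], [0, 1, 0], [0, 0, 4]]  # c = 2a breaks cubic symmetry
print(lattice_point_group_order(cubic))         # 48
print(lattice_point_group_order(tetragonal))    # 16
```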
Microfluidic systems have been regarded as a potential platform for high-throughput screening technology in drug discovery due to their low sample consumption, high integration, and easy operation. The handling of small-volume liquid is an essential operation in microfluidic systems, especially in investigating large-scale combination conditions. Here, we develop a nanoliter centrifugal liquid dispenser (NanoCLD) coupled with superhydrophobic microwell array chips for high-throughput cell-based assays at the nanoliter scale. The NanoCLD consists of a plastic stock block with an array of drilled through-holes, a reagent microwell array chip (reagent chip), and an alignment bottom assembled together in a fixture. A simple centrifugation at 800 rpm can dispense ~160 nL of reagents into microwells in 5 min. The dispensed reagents are then delivered to cells by sandwiching the reagent chip upside down with another microwell array chip (cell chip) on which cells are cultured. A gradient of doxorubicin is then dispensed to the cell chip using the NanoCLD to validate the feasibility of performing drug tests on our microchip platform. This novel nanoliter-volume liquid dispensing method is simple, easy to operate, and especially suitable for repeatedly dispensing many different reagents simultaneously to microwells.
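For context on how gentle the 800 rpm dispensing spin is, the standard relative-centrifugal-force formula RCF = 1.118e-5 · r[cm] · rpm² can be applied. The 10 cm rotor radius below is an assumed value for illustration, not one given in the abstract.

```python
def rcf_from_rpm(rpm, radius_cm):
    """Relative centrifugal force (in multiples of g) for a rotor
    spinning at `rpm` with the sample at `radius_cm` from the axis.
    Standard formula: RCF = 1.118e-5 * r[cm] * rpm^2."""
    return 1.118e-5 * radius_cm * rpm ** 2

# The paper's 800 rpm dispensing spin with an assumed 10 cm radius:
print(round(rcf_from_rpm(800, 10.0), 1), "x g")  # 71.6 x g
```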
Collins, Lesley Joan
ncRNAs are key genes in many human diseases including cancer and viral infection, as well as providing critical functions in pathogenic organisms such as fungi, bacteria, viruses, and protists. Until now the identification and characterization of ncRNAs associated with disease has been slow or inaccurate requiring many years of testing to understand complicated RNA and protein gene relationships. High-throughput sequencing now offers the opportunity to characterize miRNAs, siRNAs, small nucleolar RNAs (snoRNAs), and long ncRNAs on a genomic scale, making it faster and easier to clarify how these ncRNAs contribute to the disease state. However, this technology is still relatively new, and ncRNA discovery is not an application of high priority for streamlined bioinformatics. Here we summarize background concepts and practical approaches for ncRNA analysis using high-throughput sequencing, and how it relates to understanding human disease. As a case study, we focus on the parasitic protists Giardia lamblia and Trichomonas vaginalis, where large evolutionary distance has meant difficulties in comparing ncRNAs with those from model eukaryotes. A combination of biological, computational, and sequencing approaches has enabled easier classification of ncRNA classes such as snoRNAs, but has also aided the identification of novel classes. It is hoped that a higher level of understanding of ncRNA expression and interaction may aid in the development of less harsh treatment for protist-based diseases. PMID:22303390
Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual 3D tumor spheroid, and then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and a quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with the uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and diverse-quality images. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model.
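A common way to turn the measured major and minor axial lengths into a volume is to model the spheroid as an ellipsoid of revolution, V = (π/6)·L·W². Whether SpheroidSizer uses exactly this convention is not stated in the abstract, so treat this as an illustrative sketch.

```python
import math

def spheroid_volume(major_um, minor_um):
    """Volume of a tumor spheroid modeled as a prolate ellipsoid of
    revolution about its major axis: V = (pi/6) * L * W^2, with both
    axial lengths in micrometers."""
    return math.pi / 6.0 * major_um * minor_um ** 2

# A spheroid measuring 400 um (major) by 300 um (minor) across:
v = spheroid_volume(400.0, 300.0)
print(round(v / 1e6, 2), "x 10^6 um^3")  # 18.85 x 10^6 um^3
```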
The RAS Drug Discovery group aims to develop assays that will reveal aspects of RAS biology upon which cancer cells depend. Successful assay formats are made available for high-throughput screening programs to yield potentially effective drug compounds.
Boger, Dale L
.... This entails the high throughput synthesis of DNA binding agents related to distamycin, their screening for binding to androgen response elements using a new high throughput DNA binding screen...
KJ Allan, David F Stojdl, SL Swift; Children's Hospital of Eastern Ontario (CHEO) Research Institute, Department of Biology, Microbiology and Immunology, and Department of Pediatrics, University of Ottawa, Ottawa, ON, Canada. Abstract: High-throughput screens can rapidly scan and capture large amounts of information across multiple biological parameters. Although many screens have been designed to uncover potential new therapeutic targets capable of crippling viruses that cause disease, there have been relatively few directed at improving the efficacy of viruses that are used to treat disease. Oncolytic viruses (OVs) are biotherapeutic agents with an inherent specificity for treating malignant disease. Certain OV platforms – including those based on herpes simplex virus, reovirus, and vaccinia virus – have shown success against solid tumors in advanced clinical trials. Yet, many of these OVs have only undergone minimal engineering to solidify tumor specificity, with few extra modifications to manipulate additional factors. Several aspects of the interaction between an OV and a tumor-bearing host have clear value as targets to improve therapeutic outcomes. At the virus level, these include delivery to the tumor, infectivity, productivity, oncolysis, bystander killing, spread, and persistence. At the host level, these include engaging the immune system and manipulating the tumor microenvironment. Here, we review the chemical- and genome-based high-throughput screens that have been performed to manipulate such parameters during OV infection and analyze their impact on therapeutic efficacy. We further explore emerging themes that represent key areas of focus for future research. Keywords: oncolytic, virus, screen, high-throughput, cancer, chemical, genomic, immunotherapy
Worley, Bradley; Sisco, Nicholas J.; Powers, Robert
NMR ligand-affinity screens are vital to drug discovery: they are routinely used to screen fragment-based libraries and to verify chemical leads from high-throughput assays and virtual screens. NMR ligand-affinity screens are also a highly informative first step towards identifying functional epitopes of unknown proteins, as well as elucidating the biochemical functions of protein-ligand interactions at their binding interfaces. While simple one-dimensional 1H NMR experiments are capable of indicating binding through a change in ligand line shape, they are plagued by broad, ill-defined background signals from protein 1H resonances. We present an uncomplicated method for subtraction of protein background in high-throughput ligand-based affinity screens, and show that its performance is maximized when phase-scatter correction is applied prior to subtraction.
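The idea of scaled background subtraction can be sketched numerically: estimate a scale factor for the protein-only reference over ligand-free regions of the spectrum, then subtract the scaled reference from the mixture spectrum. This toy version omits the phase-scatter correction step and uses invented five-point "spectra".

```python
def subtract_background(mixture, reference, baseline_idx):
    """Subtract a scaled protein-only reference spectrum from a
    protein+ligand spectrum. The scale factor is a least-squares fit
    computed only over `baseline_idx`, points assumed free of ligand
    signal -- a simplified stand-in for the subtraction method above."""
    num = sum(mixture[i] * reference[i] for i in baseline_idx)
    den = sum(reference[i] ** 2 for i in baseline_idx)
    scale = num / den
    return [m - scale * r for m, r in zip(mixture, reference)]

# Toy spectra: a broad protein hump plus one sharp ligand peak at index 2
protein = [1.0, 2.0, 3.0, 2.0, 1.0]
mixture = [2.0, 4.0, 11.0, 4.0, 2.0]  # 2x protein background + ligand peak of 5
clean = subtract_background(mixture, protein, baseline_idx=[0, 1, 3, 4])
print([round(x, 2) for x in clean])  # [0.0, 0.0, 5.0, 0.0, 0.0]
```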
Ngo, Tony; Coleman, James L J; Smith, Nicola J
Orphan G protein-coupled receptors represent an underexploited resource for drug discovery but pose a considerable challenge for assay development because their cognate G protein signaling pathways are often unknown. In this methodological chapter, we describe the use of constitutive activity, that is, the inherent ability of receptors to couple to their cognate G proteins in the absence of ligand, to inform the development of high-throughput screening assays for a particular orphan receptor. We specifically focus on a two-step process, whereby constitutive G protein coupling is first determined using yeast Gpa1/human G protein chimeras linked to growth and β-galactosidase generation. Coupling selectivity is then confirmed in mammalian cells expressing endogenous G proteins and driving accumulation of transcription factor-fused luciferase reporters specific to each of the classes of G protein. Based on these findings, high-throughput screening campaigns can be performed on the already miniaturized mammalian reporter system.
Bliznetsov, Vladimir; Manickam, Anbumalar; Ranganathan, Nagarajan; Chen, Junwei
This note describes a new high-throughput process of polyimide etching for the fabrication of MEMS devices with an organic sacrificial layer approach. Using dual-frequency superimposed capacitively coupled plasma we achieved a vertical profile of polyimide with an etching rate as high as 3.5 µm min⁻¹. After the fabrication of vertical structures in the polyimide material, additional steps were performed to fabricate structural elements of MEMS by depositing a SiO2 layer and performing a release etch of the polyimide. (technical note)
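At the reported etch rate, the time to clear a sacrificial layer is a one-line calculation; the 10 µm layer thickness below is an illustrative value, not one from the note.

```python
def etch_time_min(depth_um, rate_um_per_min=3.5):
    """Minutes to etch through a polyimide layer of `depth_um`
    micrometers at the reported 3.5 um/min etch rate."""
    return depth_um / rate_um_per_min

# Clearing an assumed 10 um sacrificial layer:
print(round(etch_time_min(10.0), 2), "min")  # 2.86 min
```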
Studholme, David J; Glover, Rachel H; Boonham, Neil
The new sequencing technologies are already making a big impact in academic research on medically important microbes and may soon revolutionize diagnostics, epidemiology, and infection control. Plant pathology also stands to gain from exploiting these opportunities. This manuscript reviews some applications of these high-throughput sequencing methods that are relevant to phytopathology, with emphasis on the associated computational and bioinformatics challenges and their solutions. Second-generation sequencing technologies have recently been exploited in genomics of both prokaryotic and eukaryotic plant pathogens. They are also proving to be useful in diagnostics, especially with respect to viruses.
Picardi, Ernesto; Pesole, Graziano
The reliable detection of RNA editing sites from massive sequencing data remains challenging and, although several methodologies have been proposed, no computational tools have been released to date. Here, we introduce REDItools, a suite of Python scripts to perform high-throughput investigation of RNA editing using next-generation sequencing data. REDItools are written in the Python programming language and freely available at http://code.google.com/p/reditools/. Supplementary data are available at Bioinformatics online.
Zhao, H.; Xu, L.; Jiang, H.; Shi, S.; Chen, D.
Hyperspectral and three-dimensional measurement can obtain the intrinsic physicochemical properties and external geometrical characteristics of objects, respectively. Currently, a variety of sensors are integrated into a system to collect spectral and morphological information in agriculture. However, previous experiments were usually performed with several commercial devices on a single platform, and inadequate registration and synchronization among instruments often resulted in a mismatch between the spectral and 3D information of the same target. Moreover, a narrow field of view (FOV) extends working hours in farms. Therefore, we propose a high-throughput prototype that combines stereo vision and grating dispersion to simultaneously acquire hyperspectral and 3D information.
The creation of a high-throughput screening facility within an organization is a difficult task, requiring a substantial investment of time, money, and organizational effort. Major issues to consider include the selection of equipment, the establishment of data analysis methodologies, and the formation of a group having the necessary competencies. If done properly, it is possible to build a screening system in incremental steps, adding new pieces of equipment and data analysis modules as the need grows. Based upon our experience with the creation of a small screening service, we present some guidelines to consider in planning a screening facility.
Amin, A.; Bockelman, B.; Letts, J.; Levshina, T.; Martin, T.; Pi, H.; Sfiligoi, I.; Thomas, M.; Wüerthwein, F.
The Hadoop distributed file system (HDFS) has become more popular in recent years as a key building block of integrated grid storage solutions in the field of scientific computing. Wide Area Network (WAN) data transfer is one of the important data operations for large high energy physics experiments to manage, share, and process petabyte-scale datasets in a highly distributed grid computing environment. In this paper, we present our experience of high-throughput WAN data transfer with an HDFS-based Storage Element. Two protocols, GridFTP and Fast Data Transfer (FDT), are used to characterize the network performance of WAN data transfer.
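Orders of magnitude for petabyte-scale WAN transfers follow from simple arithmetic; the 10 Gb/s sustained rate below is an assumed figure for illustration, not a measured result from the paper.

```python
def transfer_time_hours(data_tb, throughput_gbps, efficiency=1.0):
    """Wall-clock hours to move `data_tb` terabytes (decimal TB)
    over a link sustaining `throughput_gbps` gigabits per second,
    optionally derated by an efficiency factor."""
    bits = data_tb * 8e12  # 1 TB = 10^12 bytes = 8e12 bits
    seconds = bits / (throughput_gbps * 1e9 * efficiency)
    return seconds / 3600.0

# Moving a 1 PB (1000 TB) dataset over a sustained 10 Gb/s path:
print(round(transfer_time_hours(1000, 10), 1), "hours")  # 222.2 hours
```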
This paper reviews high-throughput screening enzyme assays developed in our laboratory over the last ten years. These enzyme assays were initially developed for the purpose of discovering catalytic antibodies by screening cell culture supernatants, but have proved generally useful for testing enzyme activities. Examples include TLC-based screening using acridone-labeled substrates, fluorogenic assays based on the β-elimination of umbelliferone or nitrophenol, and indirect assays such as the back-titration method with adrenaline and the copper-calcein fluorescence assay for amino acids.
Wang, Yuhong; Huang, Ruili
High-throughput screening (HTS) makes it possible to collect cellular response data from a large number of cell lines and small molecules in a timely and cost-effective manner. The errors and noise in the microplate-formatted data from HTS have unique characteristics, and they can generally be grouped into three categories: run-wise (temporal, across multiple plates), plate-wise (background pattern, single plate), and well-wise (single well). In this chapter, we describe a systematic solution for identifying and correcting such errors and noise, based mainly on pattern recognition and digital signal processing technologies.
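One common plate-wise correction, offered here as a simplified stand-in for the methods the chapter describes, is a robust z-score: subtract the plate median and divide by the scaled median absolute deviation, so that a shifted background does not mask genuine hit wells.

```python
import statistics

def robust_zscores(plate):
    """Per-plate robust z-scores: subtract the plate median and divide
    by the MAD scaled by 1.4826 (so it approximates a standard
    deviation for normal data). Damps plate-wise background shifts
    before wells are compared across runs."""
    values = [v for row in plate for v in row]
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) * 1.4826
    return [[(v - med) / mad for v in row] for row in plate]

plate = [[100.0, 102.0, 98.0],
         [101.0, 99.0, 160.0],   # one strong hit well
         [100.0, 103.0, 97.0]]
z = robust_zscores(plate)
print(round(z[1][2], 1))  # 20.2 -- the hit well stands out clearly
```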
Casalino, Laura; Magnani, Dario; De Falco, Sandro; Filosa, Stefania; Minchiotti, Gabriella; Patriarca, Eduardo J; De Cesare, Dario
The use of Embryonic Stem Cells (ESCs) holds considerable promise both for drug discovery programs and the treatment of degenerative disorders in regenerative medicine approaches. Nevertheless, the successful use of ESCs is still limited by the lack of efficient control of ESC self-renewal and differentiation capabilities. In this context, the possibility to modulate ESC biological properties and to obtain homogenous populations of correctly specified cells will help developing physiologically relevant screens, designed for the identification of stem cell modulators. Here, we developed a high throughput screening-suitable ESC neural differentiation assay by exploiting the Cell(maker) robotic platform and demonstrated that neural progenies can be generated from ESCs in complete automation, with high standards of accuracy and reliability. Moreover, we performed a pilot screening providing proof of concept that this assay allows the identification of regulators of ESC neural differentiation in full automation.
Increasing efficiency and declining cost of generating whole transcriptome profiles has made high-throughput transcriptomics a practical option for chemical bioactivity screening. The resulting data output provides information on the expression of thousands of genes and is amenab...
Wu, Bainan; Barile, Elisa; De, Surya K; Wei, Jun; Purves, Angela; Pellecchia, Maurizio
In recent years the ever so complex field of drug discovery has embraced novel design strategies based on biophysical fragment screening (fragment-based drug design; FBDD) using nuclear magnetic resonance spectroscopy (NMR) and/or structure-guided approaches, most often using X-ray crystallography and computer modeling. Experience from recent years unveiled that these methods are more effective and less prone to artifacts compared to biochemical high-throughput screening (HTS) of large collection of compounds in designing protein inhibitors. Hence these strategies are increasingly becoming the most utilized in the modern pharmaceutical industry. Nonetheless, there is still an impending need to develop innovative and effective strategies to tackle other more challenging targets such as those involving protein-protein interactions (PPIs). While HTS strategies notoriously fail to identify viable hits against such targets, few successful examples of PPIs antagonists derived by FBDD strategies exist. Recently, we reported on a new strategy that combines some of the basic principles of fragment-based screening with combinatorial chemistry and NMR-based screening. The approach, termed HTS by NMR, combines the advantages of combinatorial chemistry and NMR-based screening to rapidly and unambiguously identify bona fide inhibitors of PPIs. This review will reiterate the critical aspects of the approach with examples of possible applications.
The increasing number of people suffering from metabolic syndrome and obesity is becoming a serious problem, not only in developed countries but also in developing countries. However, there are few agents currently approved for the treatment of obesity. Those that are available are mainly appetite suppressants and gastrointestinal fat blockers. We have developed a simple and rapid method for measuring the feeding volume of Danio rerio (zebrafish). This assay can be used to screen appetite suppressants and enhancers. In this study, zebrafish were fed viable paramecia that were fluorescently labeled, and feeding volume was measured using a 96-well microplate reader. Gene expression analysis of brain-derived neurotrophic factor (bdnf), knockdown of appetite-regulating genes (neuropeptide Y, preproinsulin, melanocortin 4 receptor, agouti-related protein, and cannabinoid receptor 1), and the administration of clinical appetite suppressants (fluoxetine, sibutramine, mazindol, phentermine, and rimonabant) revealed the similarity among mechanisms regulating appetite in zebrafish and mammals. In combination with behavioral analysis, we were able to evaluate adverse effects on locomotor activity from gene knockdown and chemical treatments. In conclusion, we have developed an assay that uses zebrafish, which can be applied to high-throughput screening and target gene discovery for appetite suppressants and enhancers.
Loskyll, Jonas; Stoewe, Klaus; Maier, Wilhelm F
We review the state of the art and explain the need for better SO2 oxidation catalysts for the production of sulfuric acid. A high-throughput technology has been developed for the study of potential catalysts in the oxidation of SO2 to SO3. High-throughput methods are reviewed and the problems encountered in adapting them to the corrosive conditions of SO2 oxidation are described. We show that while emissivity-corrected infrared thermography (ecIRT) can be used for primary screening, it is prone to errors because of the large variations in the emissivity of the catalyst surface. UV-visible (UV-Vis) spectrometry was selected instead as a reliable analytical method for monitoring SO2 conversion. Installing plain sugar absorbents at the reactor outlets proved valuable for the detection and quantitative removal of SO3 from the product gas before UV-Vis analysis. We also give an overview of some elements used for prescreening and of those remaining after screening of the first catalyst generations.
A number of cellular proteins localize to discrete foci within cells, for example DNA repair proteins, microtubule organizing centers, P bodies, or kinetochores. It is often possible to measure the fluorescence emission from tagged proteins within these foci as a surrogate for the concentration of that specific protein. We wished to develop tools that would allow quantitation of fluorescence foci intensities in high-throughput studies. As proof of principle we examined the kinetochore, a large multi-subunit complex that is critical for the accurate segregation of chromosomes during cell division. Kinetochore perturbations lead to aneuploidy, which is a hallmark of cancer cells. Hence, understanding kinetochore homeostasis and regulation is important for a global understanding of cell division and genome integrity. The 16 budding yeast kinetochores colocalize within the nucleus to form a single focus. Here we have created a set of freely available tools to allow high-throughput quantitation of kinetochore foci fluorescence. We use this 'FociQuant' tool to compare methods of kinetochore quantitation and show proof of principle that FociQuant can be used to identify changes in kinetochore protein levels in a mutant that affects kinetochore function. This analysis can be applied to any protein that forms discrete foci in cells.
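As an illustration of the kind of measurement such foci-quantitation tools automate, the following sketch integrates a focus's fluorescence above the local background in a synthetic image. The function and its parameters are hypothetical, not FociQuant's actual API.

```python
import numpy as np

def focus_intensity(image, center, radius=3, bg_radius=6):
    """Integrated focus intensity with local background subtraction.

    Sums pixels within `radius` of `center`, then subtracts the median
    of a surrounding annulus (radius..bg_radius) scaled by the focus
    pixel count. Illustrative sketch of a common foci-quantitation
    recipe, not the FociQuant implementation.
    """
    yy, xx = np.indices(image.shape)
    dist = np.hypot(yy - center[0], xx - center[1])
    focus = dist <= radius
    annulus = (dist > radius) & (dist <= bg_radius)
    background = np.median(image[annulus])
    return float(image[focus].sum() - background * focus.sum())

# Synthetic 21x21 frame: flat background of 10 plus a bright 3x3 focus
# adding 50 counts per pixel above background.
img = np.full((21, 21), 10.0)
img[9:12, 9:12] += 50.0
print(round(focus_intensity(img, (10, 10))))  # → 450  (9 px * 50 counts)
```

Using the median of the annulus rather than the mean makes the background estimate robust to a neighboring focus spilling a few bright pixels into the annulus.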
Krycer, James R; Diskin, Ciana; Nelson, Marin E; Zeng, Xiao-Yi; Fazakerley, Daniel J; James, David E
Research into cellular metabolism has become more high-throughput, with typical cell-culture experiments being performed in multiwell plates (microplates). This format presents a challenge when trying to collect gaseous products, such as carbon dioxide (CO2), which requires a sealed environment and a vessel separate from the biological sample. To address this limitation, we developed a gas trapping protocol using perforated plastic lids in sealed cell-culture multiwell plates. We used this trap design to measure CO2 production from glucose and fatty acid metabolism, as well as hydrogen sulfide production from cysteine-treated cells. Our data clearly show that this gas trap can be applied to liquid and solid gas-collection media and can be used to study gaseous product generation by both adherent cells and cells in suspension. Since our gas traps can be adapted to multiwell plates of various sizes, they present a convenient, cost-effective solution that can accommodate the trend toward high-throughput measurements in metabolic research.
Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin
Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems, such as DVB-S2, WiMAX, Wi-Fi, and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup over its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
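The full sum-product decoder is too involved for an abstract-sized example, but its iterative check-then-update structure, which GPU implementations parallelize across check and variable nodes, can be illustrated with a much simpler hard-decision bit-flipping decoder on a toy (7,4) parity-check matrix. This is a stand-in sketch, not the paper's decoder.

```python
import numpy as np

def bit_flip_decode(H, r, max_iters=10):
    """Hard-decision bit-flipping decoding (a simple stand-in for the
    sum-product algorithm; illustrative only).

    Each iteration computes the syndrome H @ c mod 2 and flips the bit
    involved in the most unsatisfied parity checks, until the syndrome
    is zero or the iteration budget runs out.
    """
    c = r.copy()
    for _ in range(max_iters):
        syndrome = H @ c % 2
        if not syndrome.any():
            return c, True            # valid codeword reached
        fails = H.T @ syndrome        # failed-check count per bit
        c[np.argmax(fails)] ^= 1      # flip the worst offender
    return c, False

# (7,4) Hamming parity-check matrix used here as a toy stand-in.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # satisfies H @ c = 0 mod 2
received = codeword.copy()
received[2] ^= 1                             # inject a single bit error
decoded, ok = bit_flip_decode(H, received)
print(ok, bool((decoded == codeword).all()))  # → True True
```

Production sum-product decoders replace the hard flip with soft log-likelihood message passing between check and variable nodes, which is exactly the per-edge parallelism a GPU exploits.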
Elsliger, Marc-André; Deacon, Ashley M.; Godzik, Adam; Lesley, Scott A.; Wooley, John; Wüthrich, Kurt; Wilson, Ian A.
The Joint Center for Structural Genomics (JCSG) high-throughput structural biology pipeline has delivered more than 1000 structures to the community over the past ten years and has made a significant contribution to the overall goal of the NIH Protein Structure Initiative (PSI) of expanding structural coverage of the protein universe, as well as making substantial inroads into structural coverage of an entire organism. Targets are processed through an extensive combination of bioinformatics and biophysical analyses to efficiently characterize and optimize each target prior to selection for structure determination. The pipeline uses parallel processing methods at almost every step and can adapt to a wide range of protein targets, from bacterial to human. The construction, expansion, and optimization of the JCSG gene-to-structure pipeline over the years have resulted in many technological and methodological advances. The vast number of targets and the enormous amounts of associated data processed through the multiple stages of the experimental pipeline required the development of a variety of valuable resources that, wherever feasible, have been converted to free-access web-based tools and applications.
Chokhawala, Harshal A; Huang, Shengshu; Lau, Kam; Yu, Hai; Cheng, Jiansong; Thon, Vireak; Hurtado-Ziola, Nancy; Guerrero, Juan A; Varki, Ajit; Chen, Xi
Although the vital roles of structures containing sialic acid in biomolecular recognition are well documented, limited information is available on how sialic acid structural modifications, sialyl linkages, and the underlying glycan structures affect the binding or the activity of sialic acid-recognizing proteins and related downstream biological processes. A novel combinatorial chemoenzymatic method has been developed for the highly efficient synthesis of biotinylated sialosides containing different sialic acid structures and different underlying glycans in 96-well plates from biotinylated sialyltransferase acceptors and sialic acid precursors. By transferring the reaction mixtures to NeutrAvidin-coated plates and assaying for the yields of enzymatic reactions using lectins recognizing sialyltransferase acceptors but not the sialylated products, the biotinylated sialoside products can be directly used, without purification, for high-throughput screening to quickly identify the ligand specificity of sialic acid-binding proteins. For a proof-of-principle experiment, 72 biotinylated α2,6-linked sialosides were synthesized in 96-well plates from 4 biotinylated sialyltransferase acceptors and 18 sialic acid precursors using a one-pot three-enzyme system. High-throughput screening assays performed in NeutrAvidin-coated microtiter plates show that whereas Sambucus nigra lectin binds to α2,6-linked sialosides with high promiscuity, human Siglec-2 (CD22) is highly selective for a number of sialic acid structures and the underlying glycans in its sialoside ligands.
The growing need for rapid and accurate approaches to large-scale assessment of phenotypic characters in plants is becoming more and more obvious in studies of the relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for genome analysis. Nowadays, any genetic experiment involves data on thousands or tens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which warrants much more rapid data acquisition, higher accuracy in the assessment of phenotypic features, measurement of new parameters of these features, and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system, designed to solve the problem of integrating genotypic and phenotypic data and environmental parameters, and to analyze the relationships between genotype and phenotype in wheat. The system consolidates miscellaneous data on a plant, storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.
Sjostrom, Staffan L.; Bai, Yunpeng; Huang, Mingtao
A high-throughput method for single-cell screening by microfluidic droplet sorting is applied to a whole-genome mutated yeast cell library, yielding improved production hosts for secreted industrial enzymes. The sorting method is validated by enriching a yeast strain 14-fold based on its α-amylase production, close to the theoretical maximum enrichment. Furthermore, a 10^5-member yeast cell library is screened, yielding a clone with a more than 2-fold increase in α-amylase production. The increase in enzyme production results from an improvement of the cellular functions of the production host...
Background: Mining high-throughput screening (HTS) assays is key to enhancing decisions in drug repositioning and drug discovery. However, many challenges are encountered in developing suitable and accurate methods for extracting useful information from these assays. Virtual screening and the wide variety of databases, methods, and solutions proposed to date have not completely overcome these challenges. This study is based on a multi-label classification (MLC) technique for modeling correlations between several HTS assays, meaning that a single prediction represents a subset of assigned correlated labels instead of one label. Thus, the devised method provides an increased probability of more accurate predictions for compounds that were not tested in particular assays. Results: Here we present DRABAL, a novel MLC solution that incorporates structure learning of a Bayesian network as a step to model dependency between the HTS assays. In this study, DRABAL was used to process more than 1.4 million interactions of over 400,000 compounds and analyze the existing relationships between five large HTS assays from the PubChem BioAssay Database. Compared to different MLC methods, DRABAL significantly improves the F1Score by about 22% on average. We further illustrated the usefulness and utility of DRABAL by screening FDA-approved drugs and reporting ones that have a high probability of interacting with several targets, thus enabling drug-multi-target repositioning. Specifically, DRABAL suggests the drug thiabendazole as a common activator of the NCP1 and Rab-9A proteins, both assayed to identify treatment modalities for Niemann–Pick type C disease. Conclusion: We developed a novel MLC solution based on a Bayesian active learning framework to overcome the challenge of lacking fully labeled training data and to exploit actual dependencies between existing HTS assays.
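DRABAL's Bayesian-network structure learning is beyond an abstract-sized sketch, but the binary-relevance baseline that dependency-aware MLC methods improve upon, one independent classifier per assay label with label correlations ignored, can be illustrated with toy data. Everything below (the nearest-centroid classifiers, the 2-D "fingerprints") is a hypothetical illustration, not DRABAL's method.

```python
import numpy as np

class BinaryRelevance:
    """One independent nearest-centroid classifier per label.

    This is the naive multi-label baseline: each assay label is
    predicted separately, so correlations between assays are ignored.
    """
    def fit(self, X, Y):
        self.centroids = []
        for j in range(Y.shape[1]):
            pos = X[Y[:, j] == 1].mean(axis=0)  # centroid of actives
            neg = X[Y[:, j] == 0].mean(axis=0)  # centroid of inactives
            self.centroids.append((pos, neg))
        return self

    def predict(self, X):
        preds = []
        for pos, neg in self.centroids:
            d_pos = np.linalg.norm(X - pos, axis=1)
            d_neg = np.linalg.norm(X - neg, axis=1)
            preds.append((d_pos < d_neg).astype(int))
        return np.stack(preds, axis=1)

# Toy compound "fingerprints" (2-D) and two correlated assay labels.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
Y = np.array([[0, 0], [0, 0], [1, 1], [1, 1]])
model = BinaryRelevance().fit(X, Y)
pred = model.predict(np.array([[0.05, 0.05], [1.0, 0.95]]))
print(pred.tolist())  # → [[0, 0], [1, 1]]
```

A dependency-aware method like DRABAL would instead condition each label's prediction on the others, which is what lifts performance when assays are correlated.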
Van Duren, Jeroen K [Intermolecular, Inc., San Jose, CA (United States); Koch, Carl [North Carolina State Univ., Raleigh, NC (United States); Luo, Alan [The Ohio State Univ., Columbus, OH (United States); Sample, Vivek [Arconic, Pittsburgh, PA (United States); Sachdev, Anil [General Motors, Detroit, MI (United States)
The primary limitation of today's lightweight structural alloys is that specific yield strengths (SYS) higher than 200 MPa·cc/g (a typical value for titanium alloys) are extremely difficult to achieve, especially at a cost lower than $5/kg (a typical value for magnesium alloys). Recently, high-entropy alloys (HEA) have shown promising SYS, yet the large composition space of HEA makes screening compositions complex and time-consuming. Over the course of this 2-year project we started from 150 billion compositions and reduced the number of potential low-density (<5 g/cc), low-cost (<$5/kg) high-entropy alloy (LDHEA) candidates that are single-phase, disordered, solid-solution (SPSS) to a few thousand compositions. This was accomplished by means of machine learning to guide the design of SPSS LDHEA, based on a combination of recursive partitioning, an extensive experimental HEA database compiled from 24 literature sources, and 91 calculated parameters serving as phenomenological selection rules. Machine learning shows an accuracy of 82% in identifying which compositions of a separate, smaller experimental HEA database are SPSS HEA. Calculation of Phase Diagrams (CALPHAD) shows an accuracy of 71-77% for the alloys supported by the CALPHAD database, although 30% of the compiled HEA database is not supported by CALPHAD. In addition to machine learning and CALPHAD, a third tool was developed to aid the design of SPSS LDHEA: phase diagrams were calculated by constructing the Gibbs free energy convex hull from easily accessible enthalpy and entropy terms. Surprisingly, its accuracy was 78%. Pursuing these LDHEA candidates by high-throughput experimental methods resulted in SPSS LDHEA composed of transition metals (e.g., Cr, Mn, Fe, Ni, Cu) alloyed with Al, yet the high concentration of Al necessary to bring the mass density below 5.0 g/cc makes these materials hard and brittle body-centered-cubic (BCC) alloys. A related, yet multi-phase BCC alloy, based
An improved bubble-electrospinning setup, consisting of a cone-shaped air nozzle, a copper solution reservoir connected directly to the power generator, and a high-speed rotating copper-wire drum as collector, was presented for the high-throughput preparation of aligned nanofibers. The influence of drum rotation speed on the morphology and properties of the obtained nanofibers was investigated. The results showed that the alignment degree, diameter distribution, and properties of the nanofibers improved with increasing drum rotation speed.
DiBartolomeo, Franklin J; Ge, Ning; Trinkle, Christine A
This work introduces microscale dual roller casting (MDRC), a novel high-throughput fabrication method for creating continuous micropatterned surfaces using thermosetting polymers. MDRC utilizes a pair of rotating, heated cylindrical molds with microscale surface patterns to cure a continuous microstructured film. Using unmodified polydimethylsiloxane as the thermosetting polymer, we were able to create optically transparent, biocompatible surfaces with submicron patterning fidelity. Compared to other roll-to-roll fabrication processes, this method offers increased flexibility in the types of materials and topography that can be generated, including dual-sided patterning, embedded materials, and tunable film thickness.
Koo, John; Binns, Brant; Miller, Timothy; Krause, Stephen; Skinner, Wesley; Mullin, James [Applied Materials, Inc., Varian Semiconductor Equipment Business Unit, 35 Dory Road, Gloucester, Massachusetts 01930 (United States)]
In this paper, we introduce the Solion ion source for high-throughput solar cell doping. As the source power is increased to enable higher throughput, negative effects degrade the lifetime of the plasma chamber and the extraction electrodes. In order to improve efficiency, we have explored a wide range of electron energies and determined the conditions that best suit production. To extend the lifetime of the source we have developed an in situ cleaning method using only existing hardware. With these combinations, source lifetimes of >200 h for phosphorus and >100 h for boron ion beams have been achieved while maintaining 1100 cell-per-hour production.
Efficient parallel screening of combinatorial libraries is one of the most challenging aspects of the high-throughput (HT) heterogeneous catalysis workflow. Today, a number of methods are used in HT catalyst studies, including various optical, mass-spectrometry, and gas-chromatography techniques. Of these, rapid-scanning Fourier-transform infrared (FTIR) imaging is one of the fastest and most versatile screening techniques. Here, a new design of a 16-channel HT reactor is presented and test results for its accuracy and reproducibility are shown. The performance of the system was evaluated through the oxidation of CO over commercial Pd/Al2O3 and cobalt oxide nanoparticles synthesized with different reducer-reductant molar ratios, surfactant types, metal and surfactant concentrations, synthesis temperatures, and ramp rates.
Hintsala, Eric D.; Hangen, Ude; Stauffer, Douglas D.
Standard nanoindentation tests are "high throughput" compared to nearly all other mechanical tests, such as tension or compression. However, the typical rates of tens of tests per hour can be significantly improved. Higher testing rates enable otherwise impractical studies requiring several thousand indents, such as high-resolution property mapping and detailed statistical studies. Care must be taken, however, to avoid systematic errors in the measurement, including the choice of indentation depth/spacing to avoid overlap of plastic zones, pileup, and the influence of neighboring microstructural features in the material being tested. Furthermore, since fast loading rates are required, strain rate sensitivity must also be considered. A review of these effects is given, with emphasis placed on making complementary standard nanoindentation measurements to address these issues. Experimental applications of the technique are presented, including mapping of welds, microstructures, and composites with varying length scales, along with a study of the effect of surface roughness on nominally homogeneous specimens.
Gay, E.C.; Miller, W.E.; Laidler, J.J.
A high-throughput electrorefining process is being adapted to treat spent N-Reactor fuel for ultimate disposal in a geologic repository. Anodic dissolution tests were performed with unirradiated N-Reactor fuel to determine the type of fragmentation necessary to provide fuel segments suitable for this process. Based on these tests, a conceptual design was produced for a plant-scale electrorefiner. In this design, the diameter of an electrode assembly is about 1.07 m (42 in.). Three of these assemblies in an electrorefiner would accommodate a 3-metric-ton batch of N-Reactor fuel, which would be processed at a rate of 42 kg of uranium per hour.
MacBeath, Gavin; Schreiber, Stuart L.
Systematic efforts are currently under way to construct defined sets of cloned genes for high-throughput expression and purification of recombinant proteins. To facilitate subsequent studies of protein function, we have developed miniaturized assays that accommodate extremely low sample volumes and enable the rapid, simultaneous processing of thousands of proteins. A high-precision robot designed to manufacture complementary DNA microarrays was used to spot proteins onto chemically derivatized glass slides at extremely high spatial densities. The proteins attached covalently to the slide surface yet retained their ability to interact specifically with other proteins, or with small molecules, in solution. Three applications for protein microarrays were demonstrated: screening for protein-protein interactions, identifying the substrates of protein kinases, and identifying the protein targets of small molecules.
Ramani, Vijay; Qiu, Ruolan; Shendure, Jay
We present an unbiased method to globally resolve RNA structures through pairwise contact measurements between interacting regions. RNA proximity ligation (RPL) uses proximity ligation of native RNA followed by deep sequencing to yield chimeric reads with ligation junctions in the vicinity of structurally proximate bases. We apply RPL in both baker's yeast (Saccharomyces cerevisiae) and human cells and generate contact probability maps for ribosomal and other abundant RNAs, including yeast snoRNAs, the RNA subunit of the signal recognition particle and the yeast U2 spliceosomal RNA homolog. RPL measurements correlate with established secondary structures for these RNA molecules, including stem-loop structures and long-range pseudoknots. We anticipate that RPL will complement the current repertoire of computational and experimental approaches in enabling the high-throughput determination of secondary and tertiary RNA structures.
The potential of distributed processing systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of distributed computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of high-throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that today range from a desktop computer to the tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...
Nguyen, Viet-Anh; Lió, Pietro; Koukolíková-Nicola, Zdena; Bagnoli, Franco
High-throughput data analyses are becoming common in biology, communications, economics, and sociology. The vast amounts of data are usually represented in the form of matrices and can be considered as knowledge networks. Spectra-based approaches have proved useful in extracting hidden information within such networks and for estimating missing data, but these methods are based essentially on linear assumptions. The physical models of matching, when applicable, often suggest non-linear mechanisms that may sometimes be identified as noise. The use of non-linear models in data analysis, however, may require the introduction of many parameters, which lowers the statistical weight of the model. Depending on the quality of the data, a simpler linear analysis may be more suitable than more complex approaches. In this paper, we show how a simple non-parametric Bayesian model may be used to explore the role of non-linearities and noise in synthetic and experimental data sets.
Wu, Henry; Mayeshiba, Tam; Morgan, Dane
We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world.
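To put the quoted 0.176 eV RMS activation-barrier error in perspective, dilute-solute diffusivity follows an Arrhenius law, so a barrier offset translates into a multiplicative diffusivity error. The sketch below uses illustrative values, not entries from the actual database.

```python
import math

def diffusivity(D0, Q_eV, T_K):
    """Arrhenius dilute-solute diffusivity D = D0 * exp(-Q / (kB * T)).

    D0 is the prefactor (cm^2/s), Q_eV the activation barrier in eV
    (the quantity whose RMS error the abstract quotes), T_K the
    temperature in kelvin.
    """
    k_B = 8.617333262e-5  # Boltzmann constant in eV/K
    return D0 * math.exp(-Q_eV / (k_B * T_K))

# A 0.176 eV barrier error changes the predicted diffusivity by a
# constant factor at fixed temperature (the prefactor cancels):
T = 1000.0
ratio = diffusivity(1.0, 1.0, T) / diffusivity(1.0, 1.0 + 0.176, T)
print(round(ratio, 1))  # → 7.7, i.e. roughly a 7.7x diffusivity error at 1000 K
```

The factor shrinks at higher temperature and grows at lower temperature, which is why barrier accuracy matters most for low-temperature diffusion predictions.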
Tsapatsaris, Nikolaos; Beesley, Angela M.; Weiher, Norbert; Tatton, Helen; Schroeder, Sven L. M.; Dent, Andy J.; Mosselmans, Frederick J. W.; Tromp, Moniek; Russu, Sergio; Evans, John; Harvey, Ian; Hayama, Shu
We outline and demonstrate the feasibility of high-throughput (HT) in situ XAFS for synchrotron radiation studies. An XAS data acquisition and control system for the analysis of dynamic materials libraries under controlled temperature and gaseous environments has been developed. The system is compatible with the 96-well industry standard and is coupled to multi-stream quadrupole mass spectrometry (QMS) analysis of reactor effluents. An automated analytical workflow generates data quickly compared to traditional individual spectrum acquisition and analyzes it in quasi-real time using an HT data analysis tool based on IFEFFIT. The system was used for the automated characterization of a library of 91 catalyst precursors containing ternary combinations of Cu, Pt, and Au on γ-Al2O3, and for the in situ characterization of Au catalysts supported on Al2O3 and TiO2.
Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh Kumar; Sarkar, Soumik
Advances in automated and high-throughput imaging technologies have resulted in a deluge of high-resolution images and sensor data of plants. However, extracting patterns and features from this large corpus of data requires machine learning (ML) tools to enable data assimilation and feature identification for stress phenotyping. The four stages of the decision cycle in plant stress phenotyping and plant breeding activities where different ML approaches can be deployed are (i) identification, (ii) classification, (iii) quantification, and (iv) prediction (ICQP). We provide here a comprehensive overview and user-friendly taxonomy of ML tools to enable the plant community to correctly and easily apply the appropriate ML tools and best-practice guidelines for various biotic and abiotic stress traits.
Linask, Kaari L; Lo, Cecilia W
The use of mouse models is rapidly expanding in biomedical research. This has dictated the need for the rapid genotyping of mutant mouse colonies for more efficient utilization of animal holding space. We have established a high-throughput protocol for mouse genotyping using two robotics workstations: a liquid-handling robot to assemble PCR and a microfluidics electrophoresis robot for PCR product analysis. This dual-robotics setup incurs lower start-up costs than a fully automated system while still minimizing human intervention. Essential to this automation scheme is the construction of a database containing customized scripts for programming the robotics workstations. Using these scripts and the robotics systems, multiple combinations of genotyping reactions can be assembled simultaneously, allowing even complex genotyping data to be generated rapidly with consistency and accuracy. A detailed protocol, database, scripts, and additional background information are available at http://dir.nhlbi.nih.gov/labs/ldb-chd/autogene/.
Here, we present the use of ethoscopes, which are machines for high-throughput analysis of behavior in Drosophila and other animals. Ethoscopes provide a software and hardware solution that is reproducible and easily scalable. They perform real-time tracking and profiling of behavior using a supervised machine learning algorithm, can deliver behaviorally triggered stimuli to flies in a feedback-loop mode, and are highly customizable and open source. Ethoscopes can be built easily using 3D printing technology and rely on Raspberry Pi microcomputers and Arduino boards to provide affordable and flexible hardware. All software and construction specifications are available at http://lab.gilest.ro/ethoscope.
Kobayashi, Hirofumi; Lei, Cheng; Wu, Yi; Mao, Ailin; Jiang, Yiyue; Guo, Baoshan; Ozeki, Yasuyuki; Goda, Keisuke
In the last decade, high-content screening based on multivariate single-cell imaging has been proven effective in drug discovery to evaluate drug-induced phenotypic variations. Unfortunately, this method inherently requires fluorescent labeling which has several drawbacks. Here we present a label-free method for evaluating cellular drug responses only by high-throughput bright-field imaging with the aid of machine learning algorithms. Specifically, we performed high-throughput bright-field imaging of numerous drug-treated and -untreated cells (N = ~240,000) by optofluidic time-stretch microscopy with high throughput up to 10,000 cells/s and applied machine learning to the cell images to identify their morphological variations which are too subtle for human eyes to detect. Consequently, we achieved a high accuracy of 92% in distinguishing drug-treated and -untreated cells without the need for labeling. Furthermore, we also demonstrated that dose-dependent, drug-induced morphological change from different experiments can be inferred from the classification accuracy of a single classification model. Our work lays the groundwork for label-free drug screening in pharmaceutical science and industry.
Su, Hui [Iowa State Univ., Ames, IA (United States)
Laser-induced fluorescence detection is one of the most sensitive detection techniques and has found numerous applications in various areas. The purpose of this research was to develop detection approaches based on laser-induced fluorescence in two different areas: heterogeneous catalyst screening and single-cell studies. First, we introduced laser-induced fluorescence imaging (LIFI) as a high-throughput screening technique for heterogeneous catalysts, to explore its use in the discovery and study of various heterogeneous catalyst systems. The scheme is based on the fact that the creation or destruction of chemical bonds alters the fluorescence properties of suitably designed molecules. By irradiating the region immediately above the catalytic surface with a laser, the fluorescence intensity of a selected product or reactant can be imaged by a charge-coupled device (CCD) camera to follow catalytic activity as a function of time and space. By screening the catalytic activity of vanadium pentoxide catalysts in the oxidation of naphthalene, we demonstrated that LIFI has good detection performance and the spatial and temporal resolution needed for high-throughput screening of heterogeneous catalysts. The sample packing density can reach up to 250 x 250 subunits/cm2 for 40-μm wells. This experimental setup can also screen solid catalysts via near-infrared thermography detection.
Artemenko, Natalia V; Campbell, Matthew P; Rudd, Pauline M
Recently, an automated high-throughput HPLC platform has been developed that can be used to fully sequence and quantify low concentrations of N-linked sugars released from glycoproteins, supported by an experimental database (GlycoBase) and analytical tools (autoGU). However, commercial packages that support the operation of HPLC instruments and data storage lack platforms for the extraction of large volumes of data. The lack of resources and agreed formats in glycomics is now a major limiting factor that restricts the development of bioinformatic tools and automated workflows for high-throughput HPLC data analysis. GlycoExtractor is a web-based tool that interfaces with a commercial HPLC database/software solution to facilitate the extraction of large volumes of processed glycan profile data (peak number, peak areas, and glucose unit values). The tool allows the user to export a series of sample sets to a set of file formats (XML, JSON, and CSV) rather than a collection of disconnected files. This approach not only reduces the amount of manual refinement required to export data into a suitable format for data analysis but also opens the field to new approaches for high-throughput data interpretation and storage, including biomarker discovery and validation and monitoring of online bioprocessing conditions for next generation biotherapeutics.
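The export step described above, producing one connected XML/JSON/CSV document per sample set rather than a collection of disconnected files, might look like the following sketch. The record layout (sample name, peak number, peak area, glucose-unit value) follows the quantities named in the abstract, but the function and schema are assumed for illustration and are not GlycoExtractor's actual interface.

```python
import csv
import io
import json

def export_profiles(samples, fmt="json"):
    """Export glycan profile records as a single connected document.

    Each sample record holds a list of peaks with peak number, peak
    area, and glucose-unit (gu) value. Supported formats: json, csv.
    Hypothetical schema, not GlycoExtractor's.
    """
    if fmt == "json":
        return json.dumps(samples, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=["sample", "peak", "area", "gu"])
        writer.writeheader()
        for s in samples:          # flatten nested peaks into rows
            for p in s["peaks"]:
                writer.writerow({"sample": s["sample"], **p})
        return buf.getvalue()
    raise ValueError(f"unsupported format: {fmt}")

profiles = [{"sample": "IgG-1",
             "peaks": [{"peak": 1, "area": 12.3, "gu": 5.8},
                       {"peak": 2, "area": 44.1, "gu": 6.4}]}]
print(export_profiles(profiles, "csv"))
```

Keeping all samples in one document is what enables the downstream uses the abstract mentions, such as feeding whole profile sets into biomarker-discovery or bioprocess-monitoring pipelines without manual file stitching.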
Schmidt, Peter M; Abdo, Michael; Butcher, Rebecca E; Yap, Min-Yin; Scotney, Pierre D; Ramunno, Melanie L; Martin-Roussety, Genevieve; Owczarek, Catherine; Hardy, Matthew P; Chen, Chao-Guang; Fabri, Louis J
Monoclonal antibodies (mAbs) have become the fastest growing segment in the drug market, with annual sales of more than 40 billion US$ in 2013. The selection of lead candidate molecules involves the generation of large repertoires of antibodies from which to choose a final therapeutic candidate. Improvements in the ability to rapidly produce and purify many antibodies in sufficient quantities reduce the lead time for selection, which ultimately impacts the speed with which an antibody may transition through the research stage and into product development. Miniaturization and automation of chromatography using micro columns (RoboColumns® from Atoll GmbH) coupled to an automated liquid handling instrument (ALH; Freedom EVO® from Tecan) has been a successful approach to establishing high-throughput process development platforms. Recent advances in transient gene expression (TGE) using the high-titre Expi293F™ system have enabled recombinant mAb titres of greater than 500 mg/L. These relatively high protein titres reduce the volume required to generate several milligrams of individual antibodies for initial biochemical and biological downstream assays, making TGE in the Expi293F™ system ideally suited to high-throughput chromatography on an ALH. The present publication describes a novel platform for purifying Expi293F™-expressed recombinant mAbs directly from cell-free culture supernatant on a Perkin Elmer JANUS-VariSpan ALH equipped with a plate shuttle device. The purification platform allows automated two-step purification (Protein A, then desalting/size-exclusion chromatography) of several hundred mAbs per week. The new robotic method can purify mAbs with high recovery (>90%) at the sub-milligram level, with yields of up to 2 mg from 4 mL of cell-free culture supernatant.
Cai, Yingying; Xia, Miaomiao; Dong, Huina; Qian, Yuan; Zhang, Tongcun; Zhu, Beiwei; Wu, Jinchuan; Zhang, Dawei
As a very important coenzyme in cell metabolism, vitamin B12 (cobalamin, VB12) has been widely used in the food and medicine fields. The complete biosynthesis of VB12 requires approximately 30 genes, but overexpression of these genes did not result in the expected increase in VB12 production. High-yield VB12-producing strains are usually obtained by mutagenesis treatments, so an efficient screening approach is urgently needed. With the help of engineered strains with varied capacities for VB12 production, a riboswitch library was constructed and screened, and the btuB element from Salmonella typhimurium was identified as the best regulatory device. A flow cytometry high-throughput screening system based on the btuB riboswitch was developed to identify positive mutants with high efficiency. Mutagenesis of Sinorhizobium meliloti (S. meliloti) was optimized using the novel technique of atmospheric and room-temperature plasma (ARTP). Finally, the mutant S. meliloti MC5-2 was obtained and considered a candidate for industrial applications. After 7 days' cultivation on a rotary shaker at 30 °C, the VB12 titer of S. meliloti MC5-2 reached 156 ± 4.2 mg/L, which was 21.9% higher than that of the wild-type strain S. meliloti 320 (128 ± 3.2 mg/L). The genome of S. meliloti MC5-2 was sequenced, and gene mutations were identified and analyzed. To our knowledge, this is the first time a riboswitch element has been used in S. meliloti. The flow cytometry high-throughput screening system was successfully developed, and a high-yield VB12-producing strain was obtained. The identified and analyzed gene mutations provide useful information for developing high-yield strains by metabolic engineering. Overall, this work provides a useful high-throughput screening method for developing high-VB12-yield strains.
Shannon M Clarke
Accurate pedigree information is critical to animal breeding systems, to ensure the highest rate of genetic gain and management of inbreeding. The abundance of available genomic data, together with the development of high-throughput genotyping platforms, means that single nucleotide polymorphisms (SNPs) are now the DNA marker of choice for genomic selection studies. Furthermore, the superior qualities of SNPs compared to microsatellite markers allow for standardization between laboratories, a property that is crucial for developing an international set of markers for traceability studies. The objective of this study was to develop a high-throughput SNP assay for use in the New Zealand sheep industry that gives accurate pedigree assignment and will allow a reduction in breeder input over lambing. This required two phases of development: firstly, a method of extracting quality DNA from ear-punch tissue in a high-throughput, cost-efficient manner, and secondly, a SNP assay with the ability to assign paternity to progeny resulting from mob mating. A likelihood-based approach to infer paternity was used, in which the sire with the highest LOD score (log of the ratio of the likelihood given parentage to the likelihood given non-parentage) is assigned. An 84-SNP "parentage panel" was developed that assigned, on average, 99% of progeny to a sire in a problem where there were 3,000 progeny from 120 mob-mated sires that included numerous half-sib sires. In only 6% of those cases was there another sire with at least a 0.02 probability of paternity. Furthermore, dam information (either recorded, or obtained by genotyping possible dams) was absent, highlighting the SNP test's suitability for paternity testing. Utilization of this parentage SNP assay will allow implementation of progeny testing on large commercial farms, where the improved accuracy of sire assignment and genetic evaluations will increase genetic gain in the sheep industry.
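The LOD computation described above can be sketched numerically. The following is a simplified, hypothetical illustration, not the study's actual software: biallelic SNPs are coded as counts of one allele, the dam genotype is unknown (dam alleles drawn from Hardy-Weinberg population frequencies), and a small genotyping-error floor avoids infinite negative scores on single apparent exclusions:

```python
import math

def lod_score(offspring, sire, freqs, err=0.01):
    """Summed log10 likelihood ratio of parentage vs. non-parentage.

    offspring, sire: genotypes per SNP as counts of the 'B' allele (0/1/2).
    freqs: population frequencies of 'B' per SNP.
    err: small error rate blending in the null model, so one mistyped
    SNP does not drive the score to -infinity (a hypothetical choice).
    """
    lod = 0.0
    for o, g, p in zip(offspring, sire, freqs):
        t = g / 2.0  # probability the alleged sire transmits allele B
        # P(offspring genotype | alleged sire, dam from population)
        given_sire = [(1 - t) * (1 - p),
                      t * (1 - p) + (1 - t) * p,
                      t * p][o]
        # P(offspring genotype | random parents): Hardy-Weinberg
        given_random = [(1 - p) ** 2, 2 * p * (1 - p), p ** 2][o]
        given_sire = (1 - err) * given_sire + err * given_random
        lod += math.log10(given_sire / given_random)
    return lod
```

In a mob-mating problem, each progeny would be scored against every candidate sire and assigned to the one with the highest (positive) total LOD; a true sire accumulates positive per-SNP terms while an excluded sire accumulates strongly negative ones.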
Moreira Teixeira, L S; Leijten, J C H; Sobral, J; Jin, R; van Apeldoorn, A A; Feijen, J; van Blitterswijk, C; Dijkstra, P J; Karperien, M
Cell-based cartilage repair strategies such as matrix-induced autologous chondrocyte implantation (MACI) could be improved by enhancing cell performance. We hypothesised that micro-aggregates of chondrocytes generated in high-throughput prior to implantation in a defect could stimulate cartilaginous matrix deposition and remodelling. To address this issue, we designed a micro-mould to enable controlled high-throughput formation of micro-aggregates. Morphology, stability, gene expression profiles and chondrogenic potential of micro-aggregates of human and bovine chondrocytes were evaluated and compared to single cells cultured in micro-wells and in 3D after encapsulation in Dextran-Tyramine (Dex-TA) hydrogels in vitro and in vivo. We successfully formed micro-aggregates of human and bovine chondrocytes with highly controlled size, stability and viability within 24 hours. Micro-aggregates of 100 cells presented a superior balance in Collagen type I and Collagen type II gene expression over single cells and micro-aggregates of 50 and 200 cells. Matrix metalloproteinases 1, 9 and 13 mRNA levels were decreased in micro-aggregates compared to single cells. Histological and biochemical analysis demonstrated enhanced matrix deposition in constructs seeded with micro-aggregates cultured in vitro and in vivo, compared to single-cell seeded constructs. Whole genome microarray analysis and single gene expression profiles using human chondrocytes confirmed increased expression of cartilage-related genes when chondrocytes were cultured in micro-aggregates. In conclusion, we succeeded in controlled high-throughput formation of micro-aggregates of chondrocytes. Compared to single cell-seeded constructs, seeding of constructs with micro-aggregates greatly improved neo-cartilage formation. Therefore, micro-aggregation prior to chondrocyte implantation in current MACI procedures may effectively accelerate hyaline cartilage formation.
Herrmann, Anne-Kathrin; Grimm, Dirk
Over fifty years after its initial description, adeno-associated virus (AAV) remains one of the most exciting but also most elusive study objects in basic and applied virology. On the one hand, its simple structure not only facilitates investigations into virus biology but, combined with the availability of numerous natural AAV variants with distinct infection efficiencies and specificities, also makes AAV a preferred substrate for the engineering of gene delivery vectors. On the other hand, it is striking to witness a recent flurry of reports that highlight and partially close persistent gaps in our understanding of AAV virus and vector biology. This is all the more perplexing considering that recombinant AAVs have already been used in >160 clinical trials and have recently been commercialized as gene therapeutics. Here, we discuss a reason for these advances in AAV research, namely, the advent and application of powerful high-throughput technology for the dissection of AAV-host interactions and the optimization of AAV gene therapy vectors. As relevant examples, we focus on the discovery of (i) a "new" cellular AAV receptor, AAVR, (ii) host restriction factors for AAV entry, and (iii) AAV capsid determinants that mediate trafficking through the blood-brain barrier. While (i) and (ii) are prototypes of extra- or intracellular AAV host factors that were identified via high-throughput screening, (iii) exemplifies the power of molecular evolution to investigate the virus itself. In the future, we anticipate that these and other key technologies will continue to accelerate the dissection of AAV biology and will yield a wealth of new designer viruses for clinical use.
Trujillano, Daniel; Perez, Belén; González, Justo; Tornador, Cristian; Navarrete, Rosa; Escaramis, Georgia; Ossowski, Stephan; Armengol, Lluís; Cornejo, Verónica; Desviat, Lourdes R; Ugarte, Magdalena; Estivill, Xavier
Genetic diagnostics of phenylketonuria (PKU) and tetrahydrobiopterin (BH4)-deficient hyperphenylalaninemia (BH4DH) rely on methods that scan for known mutations or on laborious molecular tools that use Sanger sequencing. We have implemented a novel and much more efficient strategy based on high-throughput multiplex-targeted resequencing of four genes (PAH, GCH1, PTS, and QDPR) that, when affected by loss-of-function mutations, cause PKU and BH4DH. We validated this approach in a cohort of 95 samples with previously known PAH, GCH1, PTS, and QDPR mutations and one control sample. Pooled barcoded DNA libraries were enriched using a custom NimbleGen SeqCap EZ Choice array and sequenced using a HiSeq2000 sequencer. The combination of several robust bioinformatics tools allowed us to detect all known pathogenic mutations (point mutations, short insertions/deletions, and large genomic rearrangements) in the 95 samples, without detecting spurious calls in these genes in the control sample. We then used the same capture assay in a discovery cohort of 11 uncharacterized HPA patients using a MiSeq sequencer. In addition, we report the precise characterization of the breakpoints of four genomic rearrangements in PAH, including a novel deletion of 899 bp in intron 3. Our study is a proof of principle that high-throughput targeted resequencing is ready to replace classical molecular methods for the differential genetic diagnosis of hyperphenylalaninemias, allowing the establishment of specifically tailored treatments a few days after birth.
Brod, Fábio Cristiano Angonesi; van Dijk, Jeroen P; Voorhuijzen, Marleen M; Dinon, Andréia Zilio; Guimarães, Luis Henrique S; Scholtens, Ingrid M J; Arisi, Ana Carolina Maisonnave; Kok, Esther J
The ever-increasing production of genetically modified crops generates a demand for high-throughput DNA-based methods for the enforcement of genetically modified organism (GMO) labelling requirements. The application of standard real-time PCR will become increasingly costly as the number of GMOs potentially present in an individual sample grows. The present work presents the results of an innovative approach to the analysis of genetically modified crops by DNA-based methods: the use of a microfluidic dynamic array as a high-throughput multi-detection system. In order to evaluate the system, six test samples with an increasing degree of complexity were prepared, preamplified and subsequently analysed in the Fluidigm system. Twenty-eight assays targeting different DNA elements, GM events and species-specific reference genes were used in the experiment. The large majority of the assays tested gave the expected results. The power of low-level detection was assessed, and elements present at concentrations as low as 0.06% were successfully detected. The approach proposed in this work presents the Fluidigm system as a suitable and promising platform for GMO multi-detection.
Changes in protein glycosylation are related to different diseases and have potential as diagnostic and prognostic disease biomarkers. Transferrin (Tf) glycosylation changes are a common marker for congenital disorders of glycosylation. However, the biological interindividual variability of Tf N-glycosylation and the genes involved in glycosylation regulation are not known. Therefore, a high-throughput Tf isolation method and large-scale glycosylation studies are needed in order to address these questions. Due to their unique chromatographic properties, chromatographic monoliths enable a very fast analysis cycle, thus significantly increasing sample preparation throughput. Here, we describe the characterization of novel immunoaffinity-based monolithic columns in a 96-well plate format for specific high-throughput purification of human Tf from blood plasma. We optimized the isolation and glycan preparation procedure for subsequent ultra-performance liquid chromatography (UPLC) analysis of Tf N-glycosylation and managed to increase the sensitivity approximately three-fold compared to the initial experimental conditions, with very good reproducibility.
Reid, Alex; Evans, Fiona; Mulholland, Vincent; Cole, Yvonne; Pickup, Jon
Potato cyst nematode (PCN) is a damaging soilborne pest of potatoes which can cause major crop losses. In 2010, a new European Union directive (2007/33/EC) on the control of PCN came into force. Under the new directive, seed potatoes can only be planted on land which has been found to be free from PCN infestation following an official soil test. A major consequence of the new directive was the introduction of a new harmonized soil sampling rate resulting in a threefold increase in the number of samples requiring testing. To manage this increase with the same staffing resources, we have replaced the traditional diagnostic methods. A system has been developed for the processing of soil samples, extraction of DNA from float material, and detection of PCN by high-throughput real-time PCR. Approximately 17,000 samples are analyzed each year using this method. This chapter describes the high-throughput processes for the production of float material from soil samples, DNA extraction from the entire float, and subsequent detection and identification of PCN within these samples.
Cheng, I-Fang; Froude, Victoria E; Zhu, Yingxi; Chang, Hsueh-Chia; Chang, Hsien-Chang
We present a high-throughput (maximum flow rate approximately 10 μl/min, or linear velocity approximately 3 mm/s) continuous bio-particle sorter based on 3D traveling-wave dielectrophoresis (twDEP) at an optimum AC frequency of 500 kHz. The high-throughput sorting is achieved with a sustained twDEP particle force normal to the continuous through-flow, which is applied over the entire chip by a single 3D electrode array. The design allows continuous fractionation of micron-sized particles into different downstream sub-channels based on differences in their twDEP mobility on both sides of the cross-over. Conventional DEP is integrated upstream to focus the particles into a single levitated queue, to allow twDEP sorting by mobility difference and to minimize sedimentation and field-induced lysis. The 3D electrode array design minimizes the offsetting effect of nDEP (negative DEP, with particle force towards regions with weak fields) on twDEP, such that both forces increase monotonically with voltage to further increase the throughput. Effective focusing and separation of red blood cells from debris-filled heterogeneous samples are demonstrated, as well as size-based separation of poly-dispersed liposome suspensions into two distinct bands at 2.3-4.6 μm and 1.5-2.7 μm, at the highest throughput recorded in hand-held chips of 6 μl/min.
Noyes, Aaron; Godavarti, Ranga; Titchener-Hooker, Nigel; Coffman, Jonathan; Mukhopadhyay, Tarit
The rapid development of purification processes for polysaccharide vaccines is constrained by a lack of analytical tools: current technologies for the measurement of polysaccharide recovery and process-related impurity clearance are complex, time-consuming, and generally not amenable to high throughput process development (HTPD). HTPD is envisioned to be central to the improvement of existing polysaccharide manufacturing processes, through the identification of critical process parameters that potentially impact the quality attributes of the vaccine, and to the development of de novo processes for clinical candidates, across the spectrum of downstream processing. The availability of a fast and automated analytics platform will expand the scope, robustness, and evolution of Design of Experiments (DOE) studies. This paper details recent advances in improving the speed, throughput, and success of in-process analytics at the micro-scale. Two methods, based on modifications of existing procedures, are described for the rapid measurement of polysaccharide titre in microplates without the need for heating steps. A simplification of a commercial endotoxin assay is also described that features a single measurement at room temperature. These assays, along with existing assays for protein and nucleic acids, are qualified for deployment in the high throughput screening of polysaccharide feedstreams. Assay accuracy, precision, robustness, interference, and ease of use are assessed and described. In combination, these assays are capable of measuring the product concentration and impurity profile of a microplate of 96 samples in less than one day. This body of work relies on the evaluation of a combination of commercially available and clinically relevant polysaccharides to ensure maximum versatility and reactivity of the final assay suite. Together, these advancements reduce overall process time by up to 30-fold and significantly reduce sample volume over current practices.
Abidi, Nada; Franke, Raimo; Findeisen, Peter; Klawonn, Frank
To better understand the dynamics of the underlying processes in cells, it is necessary to take measurements over a time course. Modern high-throughput technologies are often used for this purpose, measuring the behavior of cell products like metabolites, peptides, proteins, RNA or mRNA at different points in time. Compared to classical time series, the number of time points is usually very limited, and the measurements are taken at irregular time intervals. The main reasons for this are the costs of the experiments and the fact that the dynamic behavior usually shows a strong reaction and fast changes shortly after a stimulus and then slowly converges to a certain stable state. Another reason might simply be missing values. It is common to repeat the experiments and to have replicates in order to carry out a more reliable analysis. The ideal assumptions that the initial stimulus really started at exactly the same time for all replicates and that the replicates are perfectly synchronized are seldom satisfied. Therefore, there is a need to first adjust or align the time-resolved data before further analysis is carried out. Dynamic time warping (DTW) is one of the common alignment techniques for time series data with equidistant time points. In this paper, we modified the DTW algorithm so that it can align sequences with measurements at different, non-equidistant time points with large gaps in between. This type of data is usually known as time-resolved data, characterized by irregular time intervals between measurements as well as non-identical time points for different replicates. The new algorithm can be easily used to align time-resolved data from high-throughput experiments and to overcome existing problems such as scarce time points and noise in the measurements. We propose a modified method of DTW that adapts to the requirements imposed by time-resolved data through the use of monotone cubic interpolation splines.
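The idea above can be sketched in two steps: resample each replicate's irregularly spaced measurements onto a common grid, then align with DTW. This is only a bare-bones illustration, not the authors' algorithm; for brevity it uses linear interpolation where the paper uses monotone cubic splines, and the alignment step is the textbook dynamic programming recursion:

```python
def resample(times, values, grid):
    """Interpolate (times, values) onto a common grid.
    Linear interpolation keeps this sketch dependency-free; the paper
    itself uses monotone cubic interpolation splines instead."""
    out, j = [], 0
    for t in grid:
        # advance to the knot interval containing t
        while j < len(times) - 2 and times[j + 1] < t:
            j += 1
        t0, t1 = times[j], times[j + 1]
        v0, v1 = values[j], values[j + 1]
        w = (t - t0) / (t1 - t0)
        out.append(v0 + w * (v1 - v0))
    return out

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],
                                 cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return cost[len(a)][len(b)]
```

Two replicates measured at different, non-equidistant time points would each be passed through `resample` onto the union of their grids before `dtw` compares them.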
Clutterbuck, Abigail L.; Smith, Julia R.; Allaway, David; Harris, Pat; Liddell, Susan; Mobasheri, Ali
This study employed a targeted high-throughput proteomic approach to identify the major proteins present in the secretome of articular cartilage. Explants from equine metacarpophalangeal joints were incubated alone or with interleukin-1beta (IL-1β, 10 ng/ml), with or without carprofen, a non-steroidal anti-inflammatory drug, for six days. After tryptic digestion of culture medium supernatants, the resulting peptides were separated by HPLC and detected in a Bruker amaZon ion trap instrument. The five most abundant peptides in each MS scan were fragmented and the fragmentation patterns compared to mammalian entries in the Swiss-Prot database, using the Mascot search engine. Tryptic peptides originating from aggrecan core protein, cartilage oligomeric matrix protein (COMP), fibronectin, fibromodulin, thrombospondin-1 (TSP-1), clusterin (CLU), cartilage intermediate layer protein-1 (CILP-1), chondroadherin (CHAD) and matrix metalloproteinases MMP-1 and MMP-3 were detected. Quantitative western blotting confirmed the presence of CILP-1, CLU, MMP-1, MMP-3 and TSP-1. Treatment with IL-1β increased MMP-1, MMP-3 and TSP-1, and decreased the CLU precursor, but did not affect CILP-1 and CLU levels. Many of the proteins identified have well-established extracellular matrix functions and are involved in early repair/stress responses in cartilage. This high-throughput approach may be used to study the changes that occur in the early stages of osteoarthritis.
Jung, Sang-Kyu; Qu, Xiaolei; Aleman-Meza, Boanerges; Wang, Tianxiao; Riepe, Celeste; Liu, Zheng; Li, Qilin; Zhong, Weiwei
The booming nanotech industry has raised public concerns about the environmental health and safety impact of engineered nanomaterials (ENMs). High-throughput assays are needed to obtain toxicity data for the rapidly increasing number of ENMs. Here we present a suite of high-throughput methods to study nanotoxicity in intact animals using Caenorhabditis elegans as a model. At the population level, our system measures food consumption of thousands of animals to evaluate population fitness. At the organism level, our automated system analyzes hundreds of individual animals for body length, locomotion speed, and lifespan. To demonstrate the utility of our system, we applied this technology to test the toxicity of 20 nanomaterials under four concentrations. Only fullerene nanoparticles (nC60), fullerol, TiO2, and CeO2 showed little or no toxicity. Various degrees of toxicity were detected from different forms of carbon nanotubes, graphene, carbon black, Ag, and fumed SiO2 nanoparticles. Aminofullerene and UV irradiated nC60 also showed small but significant toxicity. We further investigated the effects of nanomaterial size, shape, surface chemistry, and exposure conditions on toxicity. Our data are publicly available at the open-access nanotoxicity database www.QuantWorm.org/nano.
Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid
The Discrete Wavelet Transform (DWT) is increasingly prominent in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme is an alternative DWT implementation with lower computational complexity and reduced resource requirements. The JPEG2000 standard introduces two lifting-scheme-based filter banks: the 5/3 and the 9/7. In this paper, a high-throughput, two-channel DWT architecture for both JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process the incoming samples simultaneously, with minimum memory requirements for each channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirement make this architecture a suitable choice for real-time applications such as Digital Cinema.
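The 5/3 filter bank mentioned above is conventionally implemented as two integer lifting steps (a predict step followed by an update step, with symmetric boundary extension), which is what makes the transform cheap and exactly reversible. A minimal one-dimensional software sketch of those steps follows; the paper's contribution is the pipelined two-channel hardware architecture, which this sketch does not model:

```python
def dwt53_forward(x):
    """One level of the reversible JPEG2000 5/3 lifting transform.
    x must have even length; returns (approximation, detail)."""
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    # Predict: detail = odd - floor((left even + right even) / 2)
    # (index clamping realizes symmetric extension at the boundary)
    d = [odd[i] - ((even[i] + even[min(i + 1, n - 1)]) >> 1)
         for i in range(n)]
    # Update: approx = even + floor((left detail + right detail + 2) / 4)
    s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
         for i in range(n)]
    return s, d

def dwt53_inverse(s, d):
    """Exactly undo dwt53_forward by reversing the lifting steps."""
    n = len(d)
    even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    odd = [d[i] + ((even[i] + even[min(i + 1, n - 1)]) >> 1)
           for i in range(n)]
    x = [0] * (2 * n)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step only reads samples from the other half-band, the inverse can subtract and add the identical terms back, giving perfect integer reconstruction with no multipliers; this locality is also what makes the scheme amenable to the pipelined, low-memory hardware the paper describes.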
Karbaschi, Mahsa; Cooke, Marcus S
Single-cell gel electrophoresis (the comet assay) continues to gain popularity as a means of assessing DNA damage. However, the assay's low sample throughput and laborious sample workup procedure are limiting factors in its application. "Scoring", or individually determining DNA damage levels in 50 cells per treatment, is time-consuming, but with the advent of high-throughput scoring, the limitation is now the ability to process significant numbers of comet slides. We have developed a novel method by which multiple slides may be manipulated, and undergo electrophoresis, in batches of 25 rather than individually; importantly, the method retains the use of standard microscope comet slides, which are the assay convention. This decreases assay time by 60% and benefits from an electrophoresis tank with a substantially smaller footprint and a more uniform orientation of gels during electrophoresis. Our high-throughput variant of the comet assay greatly increases the number of samples analysed, and decreases assay time, the number of individual slide manipulations, reagent requirements, and the risk of damage to slides. The compact nature of the electrophoresis tank is of particular benefit to laboratories where bench space is at a premium. This novel approach is a significant advance on the current comet assay procedure.
Moutsatsos, Ioannis K; Parker, Christian N
High throughput screening has become a basic technique with which to explore biological systems. Advances in technology, including increased screening capacity as well as methods that generate multiparametric readouts, are driving the need for improvements in the analysis of data sets derived from such screens. This article covers recent advances in the analysis of high throughput screening data sets from arrayed samples, as well as recent advances in the analysis of cell-by-cell data sets derived from image or flow cytometry applications. Screening multiple genomic reagents targeting any given gene creates additional challenges, and so methods that prioritize individual gene targets have been developed. The article reviews many of the open-source data analysis methods that are now available and which are helping to define a consensus on the best practices to use when analyzing screening data. As data sets become larger and more complex, the need for easily accessible data analysis tools will continue to grow. The presentation of such complex data sets, to facilitate quality-control monitoring and interpretation of the results, will require the development of novel visualizations. In addition, advanced statistical and machine learning algorithms that can help identify patterns, correlations and the best features in massive data sets will be required. Ease of use will be important, as these tools will need to be used iteratively by laboratory scientists to improve the outcomes of complex analyses.
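One widely used building block in the analysis of arrayed screening data is per-plate normalization with robust statistics, so that strong hits and outlier wells do not distort the scale. A small sketch of robust z-scores follows; this is a standard technique offered as background, not a method attributed to any specific tool from the article:

```python
import statistics

def robust_z(plate):
    """Per-plate robust z-scores: (x - median) / (1.4826 * MAD).
    The median and median absolute deviation resist the outliers that
    active wells produce; 1.4826 rescales MAD to match the standard
    deviation for normally distributed inactives."""
    med = statistics.median(plate)
    mad = statistics.median(abs(x - med) for x in plate)
    scale = 1.4826 * mad
    return [(x - med) / scale for x in plate]
```

Wells whose |robust z| exceeds a chosen cutoff (3 is a common choice) would then be flagged as candidate hits; because the location and scale come from each plate's own inactive majority, the scores are comparable across plates.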
Abstract Background Pathogen diagnostic assays based on polymerase chain reaction (PCR) technology provide high sensitivity and specificity. However, the design of these diagnostic assays is computationally intensive, requiring high-throughput methods to identify unique PCR signatures against the ever-increasing number of available sequenced genomes. Results We present the Tool for PCR Signature Identification (TOPSI), a high-performance computing pipeline for the design of PCR-based pathogen diagnostic assays. The TOPSI pipeline efficiently designs PCR signatures common to multiple bacterial genomes by obtaining the shared regions through pairwise alignments between the input genomes. TOPSI successfully designed PCR signatures common to 18 Staphylococcus aureus genomes in less than 14 hours using 98 cores on a high-performance computing system. Conclusions TOPSI is a computationally efficient, fully integrated tool for high-throughput design of PCR signatures common to multiple bacterial genomes. TOPSI is freely available for download at http://www.bhsai.org/downloads/topsi.tar.gz.
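The shared-region step can be illustrated in miniature. As a toy stand-in for TOPSI's pairwise genome alignments (it is not the pipeline's actual aligner, and `min_len` is an invented threshold), Python's standard-library difflib can report exact blocks common to two sequences:

```python
from difflib import SequenceMatcher

def shared_regions(genome_a, genome_b, min_len=8):
    """Exact blocks common to two sequences, as (start_a, start_b, length).
    A signature-design pipeline would intersect such shared regions
    across all input genomes before designing primers within them."""
    sm = SequenceMatcher(None, genome_a, genome_b, autojunk=False)
    return [(m.a, m.b, m.size)
            for m in sm.get_matching_blocks() if m.size >= min_len]
```

Running this pairwise over every genome in the input set, then keeping only regions present in all pairs, yields candidate loci for signature design, which is the structure of the approach the abstract describes.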
Reid, Deseree J.; Diesing, Jessica M.; Miller, Matthew A.; Perry, Scott M.; Wales, Jessica A.; Montfort, William R.; Marty, Michael T.
The expansion of native mass spectrometry (MS) methods for both academic and industrial applications has created a substantial need for analysis of large native MS datasets. Existing software tools are poorly suited for high-throughput deconvolution of native electrospray mass spectra from intact proteins and protein complexes. The UniDec Bayesian deconvolution algorithm is uniquely well suited for high-throughput analysis due to its speed and robustness but was previously tailored towards individual spectra. Here, we optimized UniDec for deconvolution, analysis, and visualization of large data sets. This new module, MetaUniDec, centers around the Hierarchical Data Format 5 (HDF5) file format for storing datasets, which significantly improves speed, portability, and file size. It also includes code optimizations to improve speed and a new graphical user interface for visualization, interaction, and analysis of data. To demonstrate the utility of MetaUniDec, we applied the software to analyze automated collision voltage ramps with a small bacterial heme protein and large lipoprotein nanodiscs. Upon increasing collisional activation, bacterial heme-nitric oxide/oxygen binding (H-NOX) protein shows a discrete loss of bound heme, and nanodiscs show a continuous loss of lipids and charge. By using MetaUniDec to track changes in peak area or mass as a function of collision voltage, we explore the energetic profile of collisional activation in an ultra-high mass range Orbitrap mass spectrometer.
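The kind of ramp analysis described can be illustrated with a short sketch that traces one species' share of total deconvolved peak area across collision voltages. This is a minimal stand-in that assumes peaks have already been deconvolved into (mass, area) pairs; it does not use MetaUniDec's actual HDF5 schema or API:

```python
def species_fraction(ramp, target_mass, tol=2.0):
    """For each collision voltage, report the target species' share of
    total deconvolved peak area -- a simple way to trace e.g. discrete
    heme loss across an activation ramp.
    `ramp` maps voltage -> list of (mass, area) peaks."""
    profile = {}
    for voltage, peaks in sorted(ramp.items()):
        total = sum(area for _, area in peaks)
        hit = sum(area for mass, area in peaks
                  if abs(mass - target_mass) <= tol)
        profile[voltage] = hit / total if total else 0.0
    return profile
```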
Ganciclovir and valganciclovir are antiviral agents used for the treatment of cytomegalovirus retinitis. The conventional method for administering ganciclovir in cytomegalovirus retinitis patients is repeated intravitreal injections. In order to obviate the possible detrimental effects of repeated intraocular injections, to improve compliance and to eliminate systemic side-effects, we investigated tuning the release of the ganciclovir pro-drug valganciclovir from thin films of poly(lactic-co-glycolic acid) (PLGA), polycaprolactone (PCL), or mixtures of both, as a step towards prototyping periocular valganciclovir implants. To investigate the drug release, we established and evaluated a high-throughput fluorescence-based quantification screening assay for the detection of valganciclovir. Our protocol allows quantifying as little as 20 ng of valganciclovir in 96-well polypropylene plates and a 50× faster analysis compared to traditional HPLC measurements. This improvement can hence be extrapolated to other polyester-matrix thin-film formulations using a high-throughput approach. The acidic microenvironment within the polyester matrix was found to protect valganciclovir from degradation, with resultant increases in the half-life of the drug in the periocular implant to 100 days. Linear release profiles were obtained using the pure polyester polymers for the 10-day and 60-day formulations; however, gross phase separation between PCL and acid-terminated PLGA prevented tuning within these timeframes due to the phase separation of the polymer, the valganciclovir, or both.
Paul Daniel Phillips
Due to low cost, speed, and an unmatched ability to explore large numbers of compounds, high throughput virtual screening and molecular docking engines have become widely utilized by computational scientists. It is generally accepted that docking engines, such as AutoDock, produce reliable qualitative results for ligand-macromolecular receptor binding, and molecular docking results are commonly reported in the literature in the absence of complementary wet-lab experimental data. In this investigation, three variants of the sixteen-amino-acid peptide α-conotoxin MII were docked to a homology model of the α3β2-nicotinic acetylcholine receptor. DockoMatic version 2.0 was used to perform a virtual screen of each peptide ligand against the receptor for ten docking trials consisting of 100 AutoDock cycles per trial. The results were analyzed for both variation in the calculated binding energy obtained from AutoDock and the orientation of the bound peptide within the receptor. The results show that, while no clear correlation exists between a consistent ligand binding pose and the calculated binding energy, AutoDock is able to determine a consistent positioning of the bound peptide in the majority of trials when at least ten trials are evaluated.
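Aggregating binding energies across repeated docking trials, as done above, can be sketched in a few lines. This is illustrative only, not DockoMatic's actual analysis code; it simply takes the best (most negative) energy per trial and summarizes the spread:

```python
from statistics import mean, stdev

def summarize_trials(energies_by_trial):
    """Summarize AutoDock-style binding energies (kcal/mol) across
    repeated docking trials: best (most negative) energy per trial,
    then the mean and spread of those per-trial best energies."""
    best = [min(trial) for trial in energies_by_trial]
    return {"best_overall": min(best),
            "mean_best": mean(best),
            "stdev_best": stdev(best) if len(best) > 1 else 0.0}
```

A small standard deviation of the per-trial best energies is one simple indicator that repeated trials are converging, which is why running at least ten trials matters.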
Giollo, Manuel; Minervini, Giovanni; Scalzotto, Marta; Leonardi, Emanuela; Ferrari, Carlo; Tosatto, Silvio C E
Over the last decade, we have witnessed an incredible growth in the amount of available genotype data due to high throughput sequencing (HTS) techniques. This information may be used to predict phenotypes of medical relevance and pave the way towards personalized medicine. Blood phenotypes (e.g. ABO and Rh) are purely genetic traits that have been extensively studied for decades, with over thirty blood groups currently known. Given the public availability of blood group data, it is of interest to predict these phenotypes from HTS data, which may translate into more accurate blood typing in clinical practice. Here we propose BOOGIE, a fast predictor for the inference of blood groups from single nucleotide variant (SNV) databases. We focus on the prediction of thirty blood groups ranging from the well-known ABO and Rh to the less studied Junior or Diego. BOOGIE correctly predicted the blood group with 94% accuracy for the Personal Genome Project whole-genome profiles where good-quality SNV annotation was available. Additionally, our tool produces a high-quality haplotype phase, which is of interest in the context of ethnicity-specific polymorphisms or traits. The versatility and simplicity of the analysis make it easily interpretable and allow easy extension of the protocol towards other phenotypes. BOOGIE can be downloaded from http://protein.bio.unipd.it/download/.
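The dominance logic underlying ABO prediction can be illustrated with a toy rule. This sketch is purely didactic: real predictors such as BOOGIE infer alleles from SNV haplotypes against curated tables, whereas this function takes allele labels as given:

```python
def abo_from_alleles(allele1, allele2):
    """Toy ABO caller from two allele labels ('A', 'B', or 'O').
    Illustrates only the dominance logic: A and B are codominant,
    O is recessive. Not BOOGIE's actual inference."""
    alleles = {allele1, allele2}
    if alleles == {"A", "B"}:
        return "AB"
    if "A" in alleles:
        return "A"
    if "B" in alleles:
        return "B"
    return "O"
```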
Wleklinski, Michael; Loren, Bradley P; Ferreira, Christina R; Jaman, Zinia; Avramova, Larisa; Sobreira, Tiago J P; Thompson, David H; Cooks, R Graham
We report the high throughput analysis of reaction mixture arrays using methods and data handling routines originally developed for biological tissue imaging. Desorption electrospray ionization (DESI) mass spectrometry (MS) is applied in a continuous on-line process at rates approaching 10⁴ reactions per hour at area densities of up to 1 spot per mm² (6144 spots per standard microtiter plate), with the sprayer moving at ca. 10⁴ microns per second. Data are analyzed automatically by MS using in-house software to create ion images of selected reagents and products as intensity plots in standard array format. Amine alkylation reactions were used to optimize system performance on PTFE membrane substrates using methanol as the DESI spray/analysis solvent. The short reaction times allow screening of processes such as N-alkylation and Suzuki coupling reactions, as reported herein. Products and by-products were confirmed by on-line MS/MS upon rescanning of the array.
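Folding per-spot intensities into the standard array format mentioned above (an "ion image" as an intensity plot) can be sketched simply. This is an illustrative stand-in, assuming spots arrive in scan order, not the in-house software's actual code:

```python
def ion_image(intensities, n_cols=48):
    """Fold a flat list of per-spot ion intensities (scan order) into a
    plate-format grid, e.g. 48 columns for a 6144-spot microtiter
    layout (128 x 48). Short final rows are padded with None."""
    grid = [intensities[i:i + n_cols]
            for i in range(0, len(intensities), n_cols)]
    if grid and len(grid[-1]) < n_cols:
        grid[-1] += [None] * (n_cols - len(grid[-1]))
    return grid
```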
High-throughput screening (HTS) for potential thyroid-disrupting chemicals requires a system of assays to capture multiple molecular-initiating events (MIEs) that converge on perturbed thyroid hormone (TH) homeostasis. Screening for MIEs specific to TH-disrupting pathways is limited in the US EPA ToxCast screening assay portfolio. To fill one critical screening gap, the Amplex UltraRed-thyroperoxidase (AUR-TPO) assay was developed to identify chemicals that inhibit TPO, as decreased TPO activity reduces TH synthesis. The ToxCast Phase I and II chemical libraries, comprising 1,074 unique chemicals, were initially screened using a single high concentration to identify potential TPO inhibitors. Chemicals positive in the single-concentration screen were retested in concentration-response. Due to the high false-positive rates typically observed with loss-of-signal assays such as AUR-TPO, we also employed two additional assays in parallel to identify possible sources of nonspecific assay signal loss, enabling stratification of roughly 300 putative TPO inhibitors based upon selective AUR-TPO activity. A cell-free luciferase inhibition assay was used to identify nonspecific enzyme inhibition among the putative TPO inhibitors, and a cytotoxicity assay using a human cell line was used to estimate the cellular tolerance limit. Additionally, the TPO inhibition activities of 150 chemicals were compared between the AUR-TPO and an orthogonal peroxidase oxidation assay…
Kirklin, S.; Saal, James E.; Hegde, Vinay I.; Wolverton, C.
The search for high-strength alloys and precipitation hardened systems has largely been accomplished through Edisonian trial and error experimentation. Here, we present a novel strategy using high-throughput computational approaches to search for promising precipitate/alloy systems. We perform density functional theory (DFT) calculations of an extremely large space of ∼200,000 potential compounds in search of effective strengthening precipitates for a variety of different alloy matrices, e.g., Fe, Al, Mg, Ni, Co, and Ti. Our search strategy involves screening phases that are likely to produce coherent precipitates (based on small lattice mismatch) and are composed of relatively common alloying elements. When combined with the Open Quantum Materials Database (OQMD), we can computationally screen for precipitates that either have a stable two-phase equilibrium with the host matrix, or are likely to precipitate as metastable phases. Our search produces (for the structure types considered) nearly all currently known high-strength precipitates in a variety of fcc, bcc, and hcp matrices, thus giving us confidence in the strategy. In addition, we predict a number of new, currently-unknown precipitate systems that should be explored experimentally as promising high-strength alloy chemistries.
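The small-lattice-mismatch coherency screen described above can be sketched as a fractional-mismatch filter. The 5% cutoff and the single-lattice-parameter comparison are simplifying assumptions for illustration; the actual DFT-based workflow compares full crystal structures against each matrix:

```python
def coherent_candidates(matrix_a, precipitates, max_mismatch=0.05):
    """Keep candidate precipitate phases whose lattice parameter lies
    within `max_mismatch` (fractional) of the host matrix -- a crude
    proxy for the small-mismatch coherency screen.
    `precipitates` maps phase name -> lattice parameter (angstrom)."""
    hits = []
    for name, a in precipitates.items():
        mismatch = abs(a - matrix_a) / matrix_a
        if mismatch <= max_mismatch:
            hits.append((name, round(mismatch, 4)))
    return sorted(hits, key=lambda t: t[1])
```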
Disruption of steroidogenesis by environmental chemicals can result in altered hormone levels causing adverse reproductive and developmental effects. A high-throughput assay using H295R human adrenocortical carcinoma cells was used to evaluate the effect of 2,060 chemical samples on steroidogenesis via HPLC-MS/MS quantification of 10 steroid hormones, including progestagens, glucocorticoids, androgens, and estrogens. The study employed a three-stage screening strategy. The first stage established the maximum tolerated concentration (MTC; >70% viability) per sample. The second stage quantified changes in hormone levels at the MTC, while the third stage performed concentration-response (CR) screening on a subset of samples. At all stages, cells were pre-stimulated with 10 µM forskolin for 48 h to induce steroidogenesis, followed by chemical treatment for 48 h. Of the 2,060 chemical samples evaluated, 524 were selected for six-point CR screening, based in part on significantly altering at least 4 hormones at the MTC. CR screening identified 232 chemical samples with concentration-dependent effects on 17β-estradiol and/or testosterone, with 411 chemical samples showing an effect on at least one hormone across the steroidogenesis pathway. Clustering of the concentration-dependent chemical-mediated steroid hormone effects grouped chemical samples into five distinct profiles generally representing putative mechanisms of action, including CYP17A1 and HSD3B inhibition…
The risk posed to human health by any of the thousands of untested anthropogenic chemicals in our environment is a function of both the potential hazard presented by the chemical and the possibility of being exposed. Without the capacity to make quantitative, albeit uncertain, forecasts of exposure, the putative risk of adverse health effects from a chemical cannot be evaluated. We used Bayesian methodology to infer ranges of exposure intakes that are consistent with biomarkers of chemical exposures identified in urine samples from the U.S. population by the National Health and Nutrition Examination Survey (NHANES). We perform linear regression on inferred exposure for demographic subsets of NHANES demarked by age, gender, and weight, using high-throughput chemical descriptors gleaned from databases and chemical structure-based calculators. We find that five of these descriptors are capable of explaining roughly 50% of the variability across chemicals for all the demographic groups examined, including children aged 6-11. For the thousands of chemicals with no other source of information, this approach allows rapid and efficient prediction of average exposure intake of environmental chemicals. The methods described in this manuscript provide a substantially improved approach for high-throughput screening of human exposure to environmental chemicals. The manuscript includes a ranking of 7785 environmental chemicals with respect to potential human exposure, including most of the Tox21 in vitro…
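The "descriptors explaining roughly 50% of the variability" claim is an R² statement. The sketch below shows a one-descriptor least-squares fit and its R²; it is a minimal illustration, not the study's multivariate Bayesian analysis:

```python
def ols_r2(x, y):
    """R^2 of a one-descriptor least-squares fit -- a minimal analogue
    of regressing inferred exposure on a single chemical descriptor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot
```

An R² near 0.5 over many chemicals, as reported above, means the descriptor set accounts for about half the between-chemical variance in inferred exposure.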
Yi, Yunhai; You, Xinxin; Bian, Chao; Chen, Shixi; Lv, Zhao; Qiu, Limei; Shi, Qiong
Widespread existence of antimicrobial peptides (AMPs) has been reported in various animals with comprehensive biological activities, which is consistent with the important role of AMPs as the first line of the host defense system. However, no big-data-based analysis of AMPs from any fish species is available. In this study, we identified 507 AMP transcripts on the basis of our previously reported genomes and transcriptomes of two representative amphibious mudskippers, Boleophthalmus pectinirostris (BP) and Periophthalmus magnuspinnatus (PM). The former is predominantly aquatic with less time out of water, while the latter is primarily terrestrial with extended periods of time on land. Within these identified AMPs, 449 sequences are novel; 15 were reported in BP previously; 48 are identically overlapped between BP and PM; 94 were validated by mass spectrometry. Moreover, most AMPs presented differential tissue transcription patterns in the two mudskippers. Interestingly, we discovered two AMPs, hemoglobin β1 and amylin, with high inhibition of Micrococcus luteus. In conclusion, our high-throughput screening strategy based on genomic and transcriptomic data opens an efficient pathway to discover new antimicrobial peptides for ongoing development of marine drugs.
Purpose: Evaluation of carcinogenic mechanisms serves a critical role in IARC monograph evaluations and can lead to “upgrade” or “downgrade” of the carcinogenicity conclusions based on human and animal evidence alone. Three recent IARC monograph Working Groups (110, 112, and 113) pioneered analysis of high throughput in vitro screening data from the U.S. Environmental Protection Agency’s ToxCast program in evaluations of carcinogenic mechanisms. Methods: For monograph 110, ToxCast assay data across multiple nuclear receptors were used to test the hypothesis that PFOA acts exclusively through the PPAR family of receptors, with activity profiles compared to several prototypical nuclear receptor-activating compounds. For monographs 112 and 113, ToxCast assays were systematically evaluated and used as an additional data stream in the overall evaluation of the mechanistic evidence. Specifically, ToxCast assays were mapped to 10 “key characteristics of carcinogens” recently identified by an IARC expert group, and chemicals’ bioactivity profiles were evaluated both in absolute terms (number of relevant assays positive for bioactivity) and relative terms (ranking with respect to other compounds evaluated by IARC, using the ToxPi methodology). Results: PFOA activates multiple nuclear receptors in addition to the PPAR family in the ToxCast assays. ToxCast assays offered substantial coverage for 5 of the 10 “key characteristics,” with the greatest…
Miller, Neil A; Kingsmore, Stephen F; Farmer, Andrew; Langley, Raymond J; Mudge, Joann; Crow, John A; Gonzalez, Alvaro J; Schilkey, Faye D; Kim, Ryan J; van Velkinburgh, Jennifer; May, Gregory D; Black, C Forrest; Myers, M Kathy; Utsey, John P; Frost, Nicholas S; Sugarbaker, David J; Bueno, Raphael; Gullans, Stephen R; Baxter, Susan M; Day, Steve W; Retzel, Ernest F
High-throughput DNA sequencing has enabled systems biology to begin to address areas in health, agricultural and basic biological research. Concomitant with the opportunities is an absolute necessity to manage significant volumes of high-dimensional and inter-related data and analysis. Alpheus is an analysis pipeline, database and visualization software for use with massively parallel DNA sequencing technologies that feature multi-gigabase throughput characterized by relatively short reads, such as Illumina-Solexa (sequencing-by-synthesis), Roche-454 (pyrosequencing) and Applied Biosystems' SOLiD (sequencing-by-ligation). Alpheus enables alignment to reference sequence(s), detection of variants and enumeration of sequence abundance, including expression levels in transcriptome sequence. Alpheus is able to detect several types of variants, including non-synonymous and synonymous single nucleotide polymorphisms (SNPs), insertions/deletions (indels), premature stop codons, and splice isoforms. Variant detection is aided by the ability to filter variant calls based on consistency, expected allele frequency, sequence quality, coverage, and variant type in order to minimize false positives while maximizing the identification of true positives. Alpheus also enables comparisons of genes with variants between cases and controls or bulk segregant pools. Sequence-based differential expression comparisons can be developed, with data export to SAS JMP Genomics for statistical analysis.
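Variant-call filtering of the kind described can be sketched as below. The field names and thresholds are illustrative assumptions for this sketch, not Alpheus's actual schema or defaults:

```python
def filter_variants(calls, min_coverage=10, min_quality=20, min_freq=0.2):
    """Keep variant calls passing coverage, quality and observed
    allele-frequency filters -- the kind of thresholds used to trade
    off false positives against true-positive recovery.
    Each call is a dict with 'coverage', 'quality', 'alt_reads'."""
    kept = []
    for v in calls:
        if (v["coverage"] >= min_coverage
                and v["quality"] >= min_quality
                and v["alt_reads"] / v["coverage"] >= min_freq):
            kept.append(v)
    return kept
```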
Schumacher, Jörn; The ATLAS collaboration; Vandelli, Wainer
HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS target the HPC community exclusively and are not well suited for DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs (and this has been done), but it requires a non-negligible effort and expert knowledge. On the other hand, message services like 0MQ have gained popularity in the HEP community. Such APIs allow building distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on top of…
High-throughput sequencing (HTS) is becoming the state-of-the-art technology for typing of microbial isolates, especially in clinical samples. Yet, its application is still in its infancy for monitoring and outbreak investigations of foods. Here we review the published literature, covering not only bacterial but also viral and eukaryotic food pathogens, to assess the status and potential of HTS implementation to inform stakeholders, improve food safety and reduce outbreak impacts. The developments in sequencing technology and bioinformatics have outpaced the capacity to analyze and interpret the sequence data. The influence of sample processing, nucleic acid extraction and purification, harmonized protocols for generation and interpretation of data, and properly annotated and curated reference databases including non-pathogenic “natural” strains are other major obstacles to the realization of the full potential of HTS in analytical food surveillance, epidemiological and outbreak investigations, and in complementing preventive approaches for the control and management of foodborne pathogens. Despite significant obstacles, the progress achieved in capacity and broadening of the application range over the last decade is impressive and unprecedented, as illustrated by the chosen examples from the literature. Large consortia, often with broad international participation, are making coordinated efforts to cope with many of the mentioned obstacles. Further rapid progress can therefore be expected in the next decade.
Peikon, Ian D; Kebschull, Justus M; Vagin, Vasily V; Ravens, Diana I; Sun, Yu-Chi; Brouzes, Eric; Corrêa, Ivan R; Bressan, Dario; Zador, Anthony M
The function of a neural circuit is determined by the details of its synaptic connections. At present, the only available method for determining a neural wiring diagram with single-synapse precision (a 'connectome') is based on imaging methods that are slow, labor-intensive and expensive. Here, we present SYNseq, a method for converting the connectome into a form that can exploit the speed and low cost of modern high-throughput DNA sequencing. In SYNseq, each neuron is labeled with a unique random nucleotide sequence (an RNA 'barcode'), which is targeted to the synapse using engineered proteins. Barcodes in pre- and postsynaptic neurons are then associated through protein-protein crosslinking across the synapse, extracted from the tissue, and joined into a form suitable for sequencing. Although our failure to develop an efficient barcode joining scheme precludes the widespread application of this approach, we expect that with further development SYNseq will enable tracing of complex circuits at high speed and low cost. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
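Once barcode pairs are sequenced, calling connections reduces to tallying joined reads. The read format and the read-count threshold below are assumptions made for this sketch, not the SYNseq protocol's actual encoding:

```python
from collections import Counter

def call_connections(joined_reads, min_reads=2):
    """Tally (presynaptic, postsynaptic) barcode pairs from joined
    SYNseq-style reads and keep pairs observed at least `min_reads`
    times, as repeated pairs are evidence for a real connection.
    Read format 'PREBARCODE|POSTBARCODE' is assumed for illustration."""
    counts = Counter(tuple(r.split("|")) for r in joined_reads)
    return {pair: n for pair, n in counts.items() if n >= min_reads}
```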
Turner, G. B.; Decker, S. R.; Tucker, M. P.; Law, C.; Doeppke, C.; Sykes, R. W.; Davis, M. F.; Ziebell, A.
This was a poster displayed at the Symposium. Advances on previous high-throughput methods for screening biomass recalcitrance have resulted in improved conversion and replicate precision. Changes in plate-reactor metallurgy, improved preparation of control biomass, species-specific pretreatment conditions, and adjusted enzymatic hydrolysis parameters have reduced overall coefficients of variation to an average of 6% for sample replicates. These method changes have reduced plate-to-plate variation in control-biomass recalcitrance and improved confidence in sugar-release differences between samples. With smaller errors, plant researchers can be more confident that low-recalcitrance candidates will be identified. In summary, significant changes in plate reactor, control-biomass preparation, pretreatment conditions and enzyme have substantially reduced sample and control replicate variability. Reactor-plate metallurgy significantly affects sugar release: aluminum leaching into the reaction during pretreatment degrades sugars and inhibits enzyme activity. Removal of starch and extractives significantly decreases control-biomass variability. New enzyme formulations give more consistent and higher conversion levels, although they required re-optimization for switchgrass. Pretreatment time and temperature (severity) should be adjusted to the specific biomass type, i.e., woody vs. herbaceous. Desalting of enzyme preparations to remove low-molecular-weight stabilizers would likely improve conversion levels further, due to water-activity effects on enzyme structure and substrate interactions, but was not attempted here because of the need to continually desalt and validate precise enzyme concentration and activity.
Hyun, Woo Jin
Printed electronics is an emerging field for manufacturing electronic devices with low cost and minimal material waste for a variety of applications including displays, distributed sensing, smart packaging, and energy management. Moreover, its compatibility with roll-to-roll production formats and flexible substrates is desirable for continuous, high-throughput production of flexible electronics. Despite the promise, however, the roll-to-roll production of printed electronics is quite challenging due to web movement hindering accurate ink registration and high-fidelity printing. In this talk, I will present a promising strategy for roll-to-roll production using a novel printing process that we term SCALE (Self-aligned Capillarity-Assisted Lithography for Electronics). By utilizing capillarity of liquid inks on nano/micro-structured substrates, the SCALE process facilitates high-resolution and self-aligned patterning of electrically functional inks with greatly improved printing tolerance. I will show the fabrication of key building blocks (e.g. transistor, resistor, capacitor) for electronic circuits using the SCALE process on plastics.
Michael P Friedmann
Enzymes are capable of directing complex stereospecific transformations and of accelerating reaction rates by many orders of magnitude. As even the simplest known enzymes comprise thousands of atoms, the question arises as to how such exquisite catalysts evolved. A logical predecessor would be shorter peptides, but they lack the defined structure and size that are apparently necessary for enzyme functions. However, some very short peptides are able to assemble into amyloids, thereby forming a well-defined tertiary structure called the cross-β-sheet, which bestows unique properties upon the peptides. We have hypothesized that amyloids could have been the catalytically active precursors to modern enzymes. To test this hypothesis, we designed an amyloid peptide library that could be screened for catalytic activity. Our approach, amenable to high-throughput methodologies, allowed us to find several peptides and peptide mixtures that form amyloids with esterase activity. These results indicate that amyloids, with their stability in a wide range of conditions and their potential as catalysts with low sequence specificity, would indeed be fitting precursors to modern enzymes. Furthermore, our approach can be efficiently expanded in library size, screening conditions, and target activity to yield novel amyloid catalysts with potential applications in aqueous-organic mixtures, at high temperature, and in other extreme conditions that could be advantageous for industrial applications.
Sørensen, Lasse Maretty
High-throughput sequencing has the potential to answer many of the big questions in biology and medicine. It can be used to determine the ancestry of species, to chart complex ecosystems and to understand and diagnose disease. However, going from raw sequencing data to biological or medical insight… By estimating the genotypes on a set of candidate variants obtained from both a standard mapping-based approach as well as de novo assemblies, we are able to find considerably more structural variation than previous studies… for reconstructing transcript sequences from RNA sequencing data. The method is based on a novel sparse prior distribution over transcript abundances and is markedly more accurate than existing approaches. The second chapter describes a new method for calling genotypes from a fixed set of candidate variants… The method queries the reads using a graph representation of the variants and thereby mitigates the reference bias that characterises standard genotyping methods. In the last chapter, we apply this method to call the genotypes of 50 deeply sequenced parent-offspring trios from the GenomeDenmark project…
Röst, Hannes L; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars
Targeted mass spectrometry comprises a set of powerful methods to obtain accurate and consistent protein quantification in complex samples. To fully exploit these techniques, a cross-platform and open-source software stack based on standardized data exchange formats is required. We present TAPIR, a fast and efficient Python visualization software for chromatograms and peaks identified in targeted proteomics experiments. The input formats are open, community-driven standardized data formats (mzML for raw data storage and TraML encoding the hierarchical relationships between transitions, peptides and proteins). TAPIR is scalable to proteome-wide targeted proteomics studies (as enabled by SWATH-MS), allowing researchers to visualize high-throughput datasets. The framework integrates well with existing automated analysis pipelines and can be extended beyond targeted proteomics to other types of analyses. TAPIR is available for all computing platforms under the 3-clause BSD license at https://github.com/msproteomicstools/msproteomicstools. © The Author 2015. Published by Oxford University Press. All rights reserved.
Mann, Sarah K; Czuba, Ewa; Selby, Laura I; Such, Georgina K; Johnston, Angus P R
The internalization of nanoparticles into cells is critical for effective nanoparticle-mediated drug delivery. To investigate the kinetics and mechanism of internalization of nanoparticles into cells, we have developed a DNA molecular sensor, termed the Specific Hybridization Internalization Probe (SHIP). Self-assembling polymeric 'pHlexi' nanoparticles were functionalized with a Fluorescent Internalization Probe (FIP), and the interactions with two different cell lines (3T3 and CEM cells) were studied. The kinetics of internalization were quantified, and chemical inhibitors of energy-dependent endocytosis (sodium azide), dynamin-dependent endocytosis (Dyngo-4a) and macropinocytosis (5-(N-ethyl-N-isopropyl)amiloride (EIPA)) were used to study the mechanism of internalization. Nanoparticle internalization kinetics were significantly faster in 3T3 cells than in CEM cells. We have shown that ~90% of the nanoparticles associated with 3T3 cells were internalized, compared to only 20% of the nanoparticles associated with CEM cells. Nanoparticle uptake was via a dynamin-dependent pathway, and the nanoparticles were trafficked to lysosomal compartments once internalized. SHIP is able to distinguish nanoparticles associated with the outer cell membrane from nanoparticles that are internalized. This study demonstrates that the assay can be used to probe the kinetics of nanoparticle internalization and the mechanisms by which nanoparticles are taken up by cells. This information is fundamental for engineering more effective nanoparticle delivery systems. The SHIP assay is a simple and high-throughput technique that could have wide application in therapeutic delivery research.
Giuseppina Li Pira
Mapping of antigenic peptide sequences from proteins of relevant pathogens recognized by T helper (Th) cells and by cytolytic T lymphocytes (CTLs) is crucial for vaccine development. In fact, mapping of T-cell epitopes provides useful information for the design of peptide-based vaccines and of peptide libraries to monitor specific cellular immunity in protected individuals, patients and vaccinees. Nevertheless, epitope mapping is a challenging task. In fact, large panels of overlapping peptides need to be tested with lymphocytes to identify the sequences that induce a T-cell response. Since numerous peptide panels from antigenic proteins are to be screened, lymphocytes available from human subjects are a limiting factor. To overcome this limitation, high-throughput (HTP) approaches based on miniaturization and automation of T-cell assays are needed. Here we consider the most recent applications of the HTP approach to T-cell epitope mapping. The alternative or complementary use of in silico prediction and experimental epitope definition is discussed in the context of the recent literature. The currently used methods are described with special reference to the possibility of applying the HTP concept to make epitope mapping an easier procedure in terms of time, workload, reagents, cells and overall cost.
Identifying chemicals that provide a specific function within a product, yet have minimal impact on the human body or environment, is the goal of most formulation chemists and engineers practicing green chemistry. We present a methodology to identify potential chemical functional substitutes from large libraries of chemicals using machine learning based models. We collect and analyze publicly available information on the function of chemicals in consumer products or industrial processes to identify a suite of harmonized function categories suitable for modeling. We use structural and physicochemical descriptors for these chemicals to build 41 quantitative structure–use relationship (QSUR) models for harmonized function categories using random forest classification. We apply these models to screen a library of nearly 6400 chemicals with available structure information for potential functional substitutes. Using our Functional Use database (FUse), we could identify uses for 3121 chemicals; 4412 predicted functional uses had a probability of 80% or greater. We demonstrate the potential application of the models to high-throughput (HT) screening for “candidate alternatives” by merging the valid functional substitute classifications with hazard metrics developed from HT screening assays for bioactivity. A descriptor set could be obtained for 6356 Tox21 chemicals that have undergone a battery of HT in vitro bioactivity screening assays. By applying QSURs, we wer
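The final step described above, merging high-confidence QSUR predictions with hazard metrics to surface candidate alternatives, can be sketched as a simple filter-and-join. All field names and values below are hypothetical, not data from FUse or Tox21:

```python
def candidate_alternatives(predictions, hazard, p_min=0.80):
    """Keep (chemical, function, probability) predictions at or above
    p_min and attach a hazard score where one exists; the 0.80 cut-off
    mirrors the probability threshold quoted in the abstract."""
    out = []
    for chem, func, prob in predictions:
        if prob >= p_min and chem in hazard:
            out.append({"chemical": chem, "function": func,
                        "probability": prob, "hazard": hazard[chem]})
    # least hazardous candidates first
    return sorted(out, key=lambda r: r["hazard"])

preds = [("chem_A", "surfactant", 0.91),
         ("chem_B", "surfactant", 0.55),   # below the 0.80 cut-off
         ("chem_C", "surfactant", 0.88)]
scores = {"chem_A": 0.7, "chem_C": 0.2}    # hypothetical bioactivity-based hazard
ranked = candidate_alternatives(preds, scores)
```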
Building scientific confidence in the development and evaluation of read-across remains an ongoing challenge. Approaches include establishing systematic frameworks to identify sources of uncertainty and ways to address them. One source of uncertainty is related to characterizing biological similarity. Many research efforts are underway such as structuring mechanistic data in adverse outcome pathways and investigating the utility of high throughput (HT)/high content (HC) screening data. A largely untapped resource for read-across to date is the biomedical literature. This information has the potential to support read-across by facilitating the identification of valid source analogues with similar biological and toxicological profiles as well as providing the mechanistic understanding for any prediction made. A key challenge in using biomedical literature is to convert and translate its unstructured form into a computable format that can be linked to chemical structure. We developed a novel text-mining strategy to represent literature information for read-across. Keywords were used to organize literature into toxicity signatures at the chemical level. These signatures were integrated with HT in vitro data and curated chemical structures. A rule-based algorithm assessed the strength of the literature relationship, providing a mechanism to rank and visualize the signature as literature ToxPIs (LitToxPIs). LitToxPIs were developed for over 6,000 chemicals for a varie
Wagner, Stefanie; Lagane, Frédéric; Seguin-Orlando, Andaine; Schubert, Mikkel; Leroy, Thibault; Guichoux, Erwan; Chancerel, Emilie; Bech-Hebelstrup, Inger; Bernard, Vincent; Billard, Cyrille; Billaud, Yves; Bolliger, Matthias; Croutsch, Christophe; Čufar, Katarina; Eynaud, Frédérique; Heussner, Karl Uwe; Köninger, Joachim; Langenegger, Fabien; Leroy, Frédéric; Lima, Christine; Martinelli, Nicoletta; Momber, Garry; Billamboz, André; Nelle, Oliver; Palomo, Antoni; Piqué, Raquel; Ramstein, Marianne; Schweichel, Roswitha; Stäuble, Harald; Tegel, Willy; Terradas, Xavier; Verdin, Florence; Plomion, Christophe; Kremer, Antoine; Orlando, Ludovic
Reconstructing the colonization and demographic dynamics that gave rise to extant forests is essential for forecasting forest responses to environmental change. Classical approaches to mapping how populations of trees changed through space and time largely rely on pollen distribution patterns, and only a limited number of studies have exploited DNA molecules preserved in wooden tree archaeological and subfossil remains. Here, we advance such analyses by applying high-throughput DNA sequencing (HTS) to wood archaeological and subfossil material for the first time, using a comprehensive sample of 167 European white oak waterlogged remains spanning a large temporal (from 550 to 9,800 years) and geographical range across Europe. The successful characterization of the endogenous DNA and exogenous microbial DNA of 140 (~83%) samples helped identify environmental conditions favouring long-term DNA preservation in wood remains, and began to unveil the first trends in the DNA decay process in wood material. Additionally, the maternally inherited chloroplast haplotypes of 21 samples from three periods of human-induced forest use (Neolithic, Bronze Age and Middle Ages) were found to be consistent with those of modern populations growing in the same geographic areas. Our work paves the way for further studies aiming to use ancient DNA preserved in wood to reconstruct the micro-evolutionary response of trees to climate change and human forest management. © 2018 John Wiley & Sons Ltd.
1. Current research - PhD work on discovery of new allergens - Postdoctoral work on Transcriptional Start Sites a) Tag based technologies allow higher throughput b) CAGE technology to define promoters c) CAGE data analysis to understand Transcription - Wo
Rodríguez-Dévora, Jorge I; Reyna, Daniel; Xu, Tao; Zhang, Bimeng; Shi, Zhidong
In the pharmaceutical industry, new drugs are tested to find appropriate compounds for therapeutic purposes for contemporary diseases. Unfortunately, novel compounds are expensive to develop and current target evaluation processes have limited throughput, increasing the cost and time of drug development. This work shows the development of a novel inkjet-based deposition method for assembling a miniature drug-screening platform, which can realistically and inexpensively evaluate biochemical reactions in picoliter-scale volumes at high speed. As proof of concept, using a modified Hewlett Packard model 5360 compact disc printer, green fluorescent protein-expressing Escherichia coli cells along with alginate gel solution were arrayed on a coverslip chip at a repeatable volume of 180 ± 26 picoliters per droplet; subsequently, different antibiotic droplets were patterned on the spots of cells to evaluate inhibition of the bacteria for antibiotic screening. The proposed platform was compared to the current screening process, validating its effectiveness. The viability and basic function of the printed cells were evaluated, resulting in cell viability above 98% and insignificant or no DNA damage to transfected human kidney cells. Based on the reduction of investment and compound volume achieved by this platform, this technique has the potential to improve the drug discovery process at its target evaluation stage.
Heusinkveld, Harm J.; Westerink, Remco H.S.
Calcium plays a crucial role in virtually all cellular processes, including neurotransmission. The intracellular Ca2+ concentration ([Ca2+]i) is therefore an important readout in neurotoxicological and neuropharmacological studies. Consequently, there is an increasing demand for high-throughput measurements of [Ca2+]i, e.g. using multi-well microplate readers, in hazard characterization, human risk assessment and drug development. However, changes in [Ca2+]i are highly dynamic, thereby creating challenges for high-throughput measurements. Nonetheless, several protocols are now available for real-time kinetic measurement of [Ca2+]i in plate reader systems, though the results of such plate reader-based measurements have been questioned. In view of the increasing use of plate reader systems for measurements of [Ca2+]i, a careful evaluation of current technologies is warranted. We therefore performed an extensive set of experiments, using two cell lines (PC12 and B35) and two fluorescent calcium-sensitive dyes (Fluo-4 and Fura-2), for comparison of a linear plate reader system with single cell fluorescence microscopy. Our data demonstrate that the use of plate reader systems for high-throughput real-time kinetic measurements of [Ca2+]i is associated with many pitfalls and limitations, including erroneous sustained increases in fluorescence, limited sensitivity and lack of single cell resolution. Additionally, our data demonstrate that probenecid, which is often used to prevent dye leakage, effectively inhibits the depolarization-evoked increase in [Ca2+]i. Overall, the data indicate that the use of current plate reader-based strategies for high-throughput real-time kinetic measurements of [Ca2+]i is associated with caveats and limitations that require further investigation. - Research highlights: → The use of plate readers for high-throughput screening of intracellular Ca2+ is associated with many pitfalls and limitations. → Single cell
Cheng, Xiaoliang; Hiras, Jennifer; Deng, Kai; Bowen, Benjamin; Simmons, Blake A; Adams, Paul D; Singer, Steven W; Northen, Trent R
Production of biofuels via enzymatic hydrolysis of complex plant polysaccharides is a subject of intense global interest. Microbial communities are known to express a wide range of enzymes necessary for the saccharification of lignocellulosic feedstocks and serve as a powerful reservoir for enzyme discovery. However, the growth temperature and conditions that yield high cellulase activity vary widely, and the throughput to identify optimal conditions has been limited by the slow handling and conventional analysis. A rapid method that uses small volumes of isolate culture to resolve specific enzyme activity is needed. In this work, a high throughput nanostructure-initiator mass spectrometry (NIMS)-based approach was developed for screening a thermophilic cellulolytic actinomycete, Thermobispora bispora, for β-glucosidase production under various growth conditions. Media that produced high β-glucosidase activity were found to be I/S + glucose or microcrystalline cellulose (MCC), Medium 84 + rolled oats, and M9TE + MCC at 45°C. Supernatants of cell cultures grown in M9TE + 1% MCC cleaved 2.5 times more substrate at 45°C than at all other temperatures. While T. bispora is reported to grow optimally at 60°C in Medium 84 + rolled oats and M9TE + 1% MCC, approximately 40% more conversion was observed at 45°C. This high throughput NIMS approach may provide an important tool in discovery and characterization of enzymes from environmental microbes for industrial and biofuel applications.
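The conversion figures reported above are the natural readout of a mass-spectrometry activity screen: the fraction of substrate signal converted to product. A hedged sketch of that calculation (the intensity values are illustrative, not data from the study):

```python
def fractional_conversion(substrate_ion, product_ion):
    """Estimate enzymatic conversion as the product fraction of the
    total substrate + product ion intensity, a common way to read out
    activity from a NIMS-style screen (intensity names illustrative)."""
    total = substrate_ion + product_ion
    if total <= 0:
        raise ValueError("no signal detected")
    return product_ion / total

# Invented intensities echoing the reported 2.5-fold difference between
# the 45 °C culture and cultures grown at other temperatures.
conv_45 = fractional_conversion(substrate_ion=500.0, product_ion=500.0)
conv_60 = fractional_conversion(substrate_ion=800.0, product_ion=200.0)
```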
Christie-Oleza, Joseph A
Abstract Background The structural and functional annotation of genomes is now heavily based on data obtained using automated pipeline systems. The key to an accurate structural annotation consists of blending similarities between closely related genomes with biochemical evidence of the genome interpretation. In this work we applied high-throughput proteogenomics to Ruegeria pomeroyi, a member of the Roseobacter clade, an abundant group of marine bacteria, as a seed for the annotation of the whole clade. Results A large dataset of peptides from R. pomeroyi was obtained after searching over 1.1 million MS/MS spectra against a six-frame translated genome database. We identified 2006 polypeptides, of which thirty-four were encoded by open reading frames (ORFs) that had not previously been annotated. From the pool of 'one-hit-wonders', i.e. those ORFs specified by only one peptide detected by tandem mass spectrometry, we could confirm the probable existence of five additional new genes after proving that the corresponding RNAs were transcribed. We also identified the most N-terminal peptide of 486 polypeptides, of which sixty-four had originally been wrongly annotated. Conclusions By extending these re-annotations to the other thirty-six Roseobacter isolates sequenced to date (twenty different genera), we propose the correction of the assigned start codons of 1082 homologous genes in the clade. In addition, we also report the presence of novel genes within operons encoding determinants of the important tricarboxylic acid cycle, a feature that seems to be characteristic of some Roseobacter genomes. The detection of their corresponding products in large amounts raises the question of their function. Their discovery points to a possible mode of protein evolution relying on high expression of orphan genes in bacteria: their putative poor efficiency could be counterbalanced by a higher level of expression. Our proteogenomic analysis will increase
Lindsey, Benson E; Rivero, Luz; Calhoun, Chistopher S; Grotewold, Erich; Brkljacic, Jelena
Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and
Today there are more than 80,000 chemicals in commerce and the environment. The potential human health risks are unknown for the vast majority of these chemicals as they lack human health risk assessments, toxicity reference values and risk screening values. We aim to use computational toxicology and quantitative high throughput screening (qHTS) technologies to fill these data gaps, and begin to prioritize these chemicals for additional assessment. By coupling qHTS data with adverse outcome pathways (AOPs) we can use ontologies to make predictions about potential hazards and to identify those assays which are sufficient to infer these same hazards. Once those assays are identified, we can use bootstrap natural spline-based metaregression to integrate the evidence across multiple replicates or assays (if a combination of assays are together necessary to be sufficient). In this pilot, we demonstrate how we were able to identify that benzo[k]fluoranthene (B[k]F) may induce DNA damage and steatosis using qHTS data and two separate AOPs. We also demonstrate how bootstrap natural spline-based metaregression can be used to integrate the data across multiple assay replicates to generate a concentration-response curve. We used this analysis to calculate an internal point of departure of 0.751 µM and risk-specific concentrations of 0.378 µM for both 1:1,000 and 1:10,000 additive risk for B[k]F induced DNA damage based on the p53 assay. Based on the available evidence, we
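The bootstrap step described above can be sketched in miniature: resample assay replicates, fit the concentration-response relation, and read off the concentration at a response threshold. In the sketch below, simple linear interpolation stands in for the natural-spline metaregression, and all concentrations and responses are invented for illustration:

```python
import random

def interp_concentration(concs, responses, threshold):
    """Concentration at which a monotone response first crosses the
    threshold, by linear interpolation between adjacent test points."""
    for (c0, r0), (c1, r1) in zip(zip(concs, responses),
                                  zip(concs[1:], responses[1:])):
        if r0 <= threshold <= r1:
            return c0 + (threshold - r0) * (c1 - c0) / (r1 - r0)
    return None  # threshold never reached

def bootstrap_pod(replicates, concs, threshold, n_boot=200, seed=1):
    """Median point of departure across bootstrap resamples of assay
    replicates (a stand-in for the spline-based metaregression)."""
    rng = random.Random(seed)
    pods = []
    for _ in range(n_boot):
        sample = [rng.choice(replicates) for _ in replicates]
        mean_resp = [sum(rep[i] for rep in sample) / len(sample)
                     for i in range(len(concs))]
        pod = interp_concentration(concs, mean_resp, threshold)
        if pod is not None:
            pods.append(pod)
    pods.sort()
    return pods[len(pods) // 2]

concs = [0.1, 0.3, 1.0, 3.0]          # µM, illustrative
reps = [[0.0, 0.20, 0.9, 1.6],        # invented reporter fold changes
        [0.1, 0.30, 1.1, 1.8],
        [0.0, 0.25, 1.0, 1.7]]
pod = bootstrap_pod(reps, concs, threshold=0.5)
```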
Borodin, Oleg; Olguin, Marco; Spear, Carrie E; Leiter, Kenneth W; Knap, Jaroslaw
High throughput screening of solvents and additives with potential applications in lithium batteries is reported. The initial test set is limited to carbonate- and phosphate-based compounds and focused on their electrochemical properties. Solvent stability towards first and second reduction and oxidation is reported from density functional theory (DFT) calculations performed on isolated solvents surrounded by implicit solvent. The reorganization energy is estimated from the difference between vertical and adiabatic redox energies and found to be especially important for the accurate prediction of reduction stability. A majority of tested compounds had the second reduction potential higher than the first reduction potential, indicating that the second reduction reaction might play an important role in passivation layer formation. Similarly, the second oxidation potential was smaller than the first oxidation potential for a significant subset of tested molecules. A number of potential sources of error introduced during screening of the electrolyte electrochemical properties were examined. The formation of lithium fluoride during reduction of semifluorinated solvents such as fluoroethylene carbonate and the H-transfer during oxidation of solvents were found to shift the electrochemical potential by 1.5–2 V and could shrink the electrochemical stability window by as much as 3.5 V when such reactions are included in the screening procedure. The initial oxidation reaction of ethylene carbonate and dimethyl carbonate at the surface of the completely de-lithiated LiNi0.5Mn1.5O4 high voltage spinel cathode was examined using DFT. Depending on the molecular orientation at the cathode surface, a carbonate molecule either exhibited deprotonation or was found bound to the transition metal via its carbonyl oxygen.
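The reorganization-energy estimate described above is simply the difference of two computed energies. A one-line sketch (the numerical values are illustrative, not results from the paper; units are assumed to be eV):

```python
def reorganization_energy(e_vertical, e_adiabatic):
    """Reorganization energy estimated as the difference between the
    vertical redox energy (fixed geometry) and the adiabatic redox
    energy (relaxed geometry), as in the screening described above.
    Inputs and result share the same energy unit."""
    return e_vertical - e_adiabatic

# Illustrative numbers in eV: a vertical electron-attachment energy of
# -1.1 eV and an adiabatic one of -1.6 eV imply a reorganization
# energy of 0.5 eV (the geometry relaxation after electron transfer).
lam = reorganization_energy(e_vertical=-1.1, e_adiabatic=-1.6)
```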
Soufan, Othman; Ba Alawi, Wail; Afeef, Moataz A.; Essack, Magbubah; Rodionov, Valentin; Kalnis, Panos; Bajic, Vladimir B.
High-throughput screening (HTS) experiments provide a valuable resource that reports the biological activity of numerous chemical compounds relative to their molecular targets. Building computational models that accurately predict such activity status (active vs. inactive) in specific assays is a challenging task given the large volume of data and the frequently small proportion of active compounds relative to inactive ones. We developed a method, DRAMOTE, to predict the activity status of chemical compounds in HTS activity assays. For a class of HTS assays, our method achieves considerably better results than the current state-of-the-art solutions. We achieved this by modifying a minority oversampling technique. To demonstrate that DRAMOTE performs better than the other methods, we carried out a comprehensive comparison with several other methods and evaluated them on data from 11 PubChem assays through 1,350 experiments that involved approximately 500,000 interactions between chemicals and their target proteins. As an example of potential use, we applied DRAMOTE to develop robust models for predicting FDA-approved drugs that have a high probability of interacting with the thyroid stimulating hormone receptor (TSHR) in humans. Our findings are further partially and indirectly supported by 3D docking results and literature information. The results based on approximately 500,000 interactions suggest that DRAMOTE performed best and that it can be used for developing robust virtual screening models. The datasets and implementation of all solutions are available as a MATLAB toolbox online at www.cbrc.kaust.edu.sa/dramote and can be found on Figshare.
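The class-imbalance problem and the oversampling remedy mentioned above can be illustrated with a naive interpolating oversampler. This is a generic stand-in for the idea, not DRAMOTE's actual modification, which the abstract does not specify:

```python
import random

def oversample_minority(features, labels, minority=1, seed=0):
    """Naive minority oversampling for an imbalanced HTS dataset:
    synthesize new points by interpolating between random pairs of
    active compounds until the two classes are balanced (a simplified
    SMOTE-like sketch; feature vectors are plain lists of floats)."""
    rng = random.Random(seed)
    minority_pts = [f for f, y in zip(features, labels) if y == minority]
    majority_n = sum(1 for y in labels if y != minority)
    new_f, new_y = list(features), list(labels)
    while sum(1 for y in new_y if y == minority) < majority_n:
        a, b = rng.choice(minority_pts), rng.choice(minority_pts)
        t = rng.random()
        new_f.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
        new_y.append(minority)
    return new_f, new_y

# Six inactives vs two actives -> balanced 6 vs 6 after oversampling.
X = [[0.0, 0.0], [0.1, 0.0], [0.2, 0.1], [0.9, 0.8],
     [0.0, 0.2], [0.1, 0.1], [0.3, 0.0], [0.8, 0.9]]
y = [0, 0, 0, 1, 0, 0, 0, 1]
Xb, yb = oversample_minority(X, y)
```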
Rusch, Terry L.; Petsinger, Jeremy; Christensen, Carl; Vaske, David A.; Brumley, Robert L., Jr.; Luckey, John A.; Weber, James L.
A new scanning fluorescence detector (SCAFUD) was developed for high-throughput genotyping of short tandem repeat polymorphisms (STRPs). Fluorescent dyes are incorporated into relatively short DNA fragments via the polymerase chain reaction (PCR) and are separated by electrophoresis in short, wide polyacrylamide gels (144 lanes with well-to-read distances of 14 cm). Excitation light from an argon laser with primary lines at 488 and 514 nm is introduced into the gel through a fiber optic cable, dichroic mirror, and 40X microscope objective. Emitted fluorescent light is collected confocally through a second fiber. The confocal head is translated across the bottom of the gel at 0.5 Hz. The detection unit utilizes dichroic mirrors and band pass filters to direct light with 10 - 20 nm bandwidths to four photomultiplier tubes (PMTs). PMT signals are independently amplified with variable gain and then sampled at a rate of 2500 points per scan using a computer-based A/D board. LabView software (National Instruments) is used for instrument operation. Currently, three fluorescent dyes (Fam, Hex and Rox) are simultaneously detected, with peak detection wavelengths of 543, 567, and 613 nm, respectively. The detection limit for fluorescein-labeled primers is about 100 attomoles. Planned SCAFUD upgrades include rearrangement of the laser head geometry, use of additional excitation lasers for simultaneous detection of more dyes, and the use of detector arrays instead of individual PMTs. Extensive software has been written for automatic analysis of SCAFUD images. The software enables background subtraction, band identification, multiple-dye signal resolution, lane finding, band sizing and allele calling. Whole genome screens are currently underway to search for loci influencing such complex diseases as diabetes, asthma, and hypertension. Seven production SCAFUDs are currently in operation. Genotyping output for the coming year is projected to be about one million total genotypes (DNA
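From the figures quoted above (2500 A/D samples per detector pass, 144 lanes, scans at 0.5 Hz), the per-lane sampling density follows directly. A trivial sketch, assuming the samples cover the gel width uniformly:

```python
def samples_per_lane(samples_per_scan=2500, lanes=144):
    """Average number of A/D samples falling on each lane per detector
    pass (figures taken from the abstract; assumes uniform coverage of
    the gel width, which the abstract does not state explicitly)."""
    return samples_per_scan / lanes

# At 0.5 Hz each pass takes 2 s, so each lane is sampled roughly
# 17 times every 2 seconds.
per_lane = samples_per_lane()
```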
Londoño-Velasco, Elizabeth; Martínez-Perafán, Fabián; Carvajal-Varona, Silvio; García-Vallejo, Felipe; Hoyos-Giraldo, Luz Stella
Occupational exposure as a painter is associated with DNA damage and development of cancer. The comet assay has been widely adopted as a sensitive and quantitative tool for DNA damage assessment at the individual cell level in populations exposed to genotoxic agents. The aim of this study was to assess the application of the high-throughput comet assay to determine DNA damage in car spray painters. The study population included 52 car spray painters and 52 unexposed subjects. A significant increase in the median %TDNA was observed in the exposed group (p < 0.05). The results showed an increase in DNA breaks in car spray painters exposed to organic solvents and paints; furthermore, they demonstrated the applicability of the high-throughput comet assay in a study of occupational exposure to genotoxic agents.
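A comparison of %TDNA medians between exposed and control groups, of the kind reported above, can be sketched with a generic permutation test. The values below are illustrative and are not the study's data:

```python
import random
import statistics

def median_diff_pvalue(exposed, control, n_perm=2000, seed=42):
    """Two-sided permutation p-value for the difference in median
    %TDNA between two groups (a generic nonparametric test; the
    study's own statistical method is not reproduced here)."""
    obs = abs(statistics.median(exposed) - statistics.median(control))
    pooled = list(exposed) + list(control)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        d = abs(statistics.median(pooled[:len(exposed)]) -
                statistics.median(pooled[len(exposed):]))
        if d >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing

# Invented %TDNA values with a clear exposed-vs-control separation.
painters = [12.1, 15.3, 14.8, 13.9, 16.2, 15.0]
controls = [6.2, 7.1, 5.9, 6.8, 7.4, 6.5]
p = median_diff_pvalue(painters, controls)
```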
Sumudu P. Leelananda
The process of drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, helping to expedite this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power, has made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.
Bosco, Filippo; Hwu, En-Te; Chen, Ching-Hsiu
Sensors are crucial in many daily operations including security, environmental control, human diagnostics and patient monitoring. Screening and online monitoring require reliable and high-throughput sensing. We report on the demonstration of a high-throughput label-free sensor platform utilizing...
Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays. DE DeGroot, RS Thomas, and SO Simmons. National Center for Computational Toxicology, US EPA, Research Triangle Park, NC, USA. The EPA’s ToxCast program utilizes a wide variety of high-throughput s...
Filošević, Ana; Al-Samarai, Sabina; Andretić Waldowski, Rozi
Drosophila melanogaster can be used to identify genes with novel functional roles in neuronal plasticity induced by repeated consumption of addictive drugs. Behavioral sensitization is a relatively simple behavioral output of plastic changes that occur in the brain after repeated exposure to drugs of abuse. The development of screening procedures for genes that control behavioral sensitization has stalled due to a lack of high-throughput behavioral tests that can be used in genetically tractable organisms such as Drosophila. We have developed a new behavioral test, FlyBong, which combines delivery of volatilized cocaine (vCOC) to individually housed flies with objective quantification of their locomotor activity. There are two main advantages of FlyBong: it is high-throughput, and it allows comparisons of the locomotor activity of individual flies before and after single or multiple exposures. At the population level, exposure to vCOC leads to a transient and concentration-dependent increase in locomotor activity, representing sensitivity to an acute dose. A second exposure leads to a further increase in locomotion, representing locomotor sensitization. We validate FlyBong by showing that locomotor sensitization at either the population or individual level is absent in mutants for the circadian genes period (per), Clock (Clk), and cycle (cyc). The locomotor sensitization that is present in timeless (tim) and pigment dispersing factor (pdf) mutant flies is in large part not cocaine specific, but derives from increased sensitivity to warm air. Circadian genes are not only an integral part of the neural mechanism required for the development of locomotor sensitization; they also modulate the intensity of locomotor sensitization as a function of the time of day. The motor-activating effects of cocaine are sexually dimorphic and require a functional dopamine transporter. FlyBong is a new and improved method for inducing and measuring locomotor sensitization.
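Quantifying locomotor sensitization from activity counts of the kind FlyBong records amounts to comparing baseline-corrected responses to the first and second exposures. A generic sketch (the score definition and all values are illustrative; the paper's exact metric may differ):

```python
def sensitization_score(first_exposure, second_exposure, baseline):
    """Locomotor sensitization as the extra activity evoked by a
    second drug exposure relative to the first, both corrected for
    baseline locomotion (activity in arbitrary counts, e.g. beam
    crossings; a generic score, not FlyBong's published metric)."""
    response_1 = first_exposure - baseline
    response_2 = second_exposure - baseline
    return response_2 - response_1

# A fly whose activity rises from 20 to 35 counts over a baseline of 5
# shows a positive sensitization score of 15.
s = sensitization_score(first_exposure=20, second_exposure=35, baseline=5)
```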
Recent and rapid technological advances in the molecular sciences have dramatically increased the ability to carry out high-throughput studies characterized by big data production. This, in turn, has exposed a gap between data yield and data analysis. Indeed, big data management is becoming an increasingly important aspect of many fields of molecular research, including the study of human diseases. The challenge now is to identify, within the huge amount of data obtained, that which is of clinical relevance. In this context, issues related to data interpretation, sharing and storage need to be assessed and standardized. Once this is achieved, the integration of data from different -omic approaches will improve the diagnosis, monitoring and therapy of diseases by allowing the identification of novel, potentially actionable biomarkers in view of personalized medicine.
Tindall, Andrew J; Waller, Jade; Greenwood, Mark; Gould, Peter D; Hartwell, James; Hall, Anthony
Over the last two decades, the development of high-throughput techniques has enabled us to probe the plant circadian clock, a key coordinator of vital biological processes, in ways previously impossible. With the circadian clock increasingly implicated in key fitness and signalling pathways, this has opened up new avenues for understanding plant development and signalling. Our tool-kit has been constantly improving through continual development and novel techniques that increase throughput, reduce costs and allow higher resolution on the cellular and subcellular levels. With circadian assays becoming more accessible and relevant than ever to researchers, in this paper we offer a review of the techniques currently available before considering the horizons in circadian investigation at ever higher throughputs and resolutions.
Littlefair, Joanne E; Clare, Elizabeth L
Society faces the complex challenge of supporting biodiversity and ecosystem functioning, while ensuring food security by providing safe traceable food through an ever-more-complex global food chain. The increase in human mobility brings the added threat of pests, parasites, and invaders that further complicate our agro-industrial efforts. DNA barcoding technologies allow researchers to identify both individual species, and, when combined with universal primers and high-throughput sequencing techniques, the diversity within mixed samples (metabarcoding). These tools are already being employed to detect market substitutions, trace pests through the forensic evaluation of trace "environmental DNA", and to track parasitic infections in livestock. The potential of DNA barcoding to contribute to increased security of the food chain is clear, but challenges remain in regulation and the need for validation of experimental analysis. Here, we present an overview of the current uses and challenges of applied DNA barcoding in agriculture, from agro-ecosystems within farmland to the kitchen table.
Brenu, Ekua W.; Ashton, Kevin J.; Batovska, Jana; Staines, Donald R.; Marshall-Gradisnik, Sonya M.
Background MicroRNAs (miRNAs) are known to regulate many biological processes and their dysregulation has been associated with a variety of diseases including Chronic Fatigue Syndrome/Myalgic Encephalomyelitis (CFS/ME). The recent discovery of stable and reproducible miRNA in plasma has raised the possibility that circulating miRNAs may serve as novel diagnostic markers. The objective of this study was to determine the role of plasma miRNA in CFS/ME. Results Using Illumina high-throughput sequencing we identified 19 miRNAs that were differentially expressed in the plasma of CFS/ME patients in comparison to non-fatigued controls. Following RT-qPCR analysis, we were able to confirm the significant up-regulation of three miRNAs (hsa-miR-127-3p, hsa-miR-142-5p and hsa-miR-143-3p) in the CFS/ME patients. Conclusion Our study is the first to identify circulating miRNAs from CFS/ME patients and also to confirm three differentially expressed circulating miRNAs in CFS/ME patients, providing a basis for further study to find useful CFS/ME biomarkers. PMID:25238588
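The RT-qPCR confirmation step described above typically quantifies relative expression with the comparative Ct (2^-ΔΔCt) method. A minimal sketch with hypothetical Ct values and a hypothetical endogenous reference (the study's actual normalizer is not stated here):

```python
def fold_change(ct_target_patient, ct_ref_patient, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method.

    Each target Ct is first normalized to a reference gene (dCt), then the
    patient dCt is compared to the control dCt (ddCt).
    """
    d_ct_patient = ct_target_patient - ct_ref_patient
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_patient - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: the miRNA crosses threshold two cycles earlier
# (relative to the reference) in patients than in controls.
fc = fold_change(24.0, 20.0, 26.0, 20.0)
print(fc)  # 4.0, i.e. four-fold up-regulation
```

A fold change above 1 indicates up-regulation in patients, as reported for the three confirmed miRNAs.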
Background Until recently, only a small number of low- and mid-throughput methods had been used for single nucleotide polymorphism (SNP) discovery and genotyping in grapevine (Vitis vinifera L.). However, following completion of the sequence of the highly heterozygous genome of Pinot Noir, it has become possible to identify millions of electronic SNPs (eSNPs), providing a valuable source for high-throughput genotyping methods. Results Herein we report the first application of the SNPlex™ genotyping system in grapevine, aiming at the anchoring of a eukaryotic genome. This approach combines robust SNP detection with automated assay readout and data analysis. 813 candidate eSNPs were developed from non-repetitive contigs of the assembled genome of Pinot Noir and tested in 90 progeny of a Syrah × Pinot Noir cross. 563 new SNP-based markers were obtained and mapped. The efficiency rate of 69% was enhanced to 80% when multiple displacement amplification (MDA) methods were used to prepare genomic DNA for the SNPlex assay. Conclusion Unlike other SNP genotyping methods used to investigate thousands of SNPs in a few genotypes, or a few SNPs in around a thousand genotypes, the SNPlex genotyping system represents a good compromise for investigating several hundred SNPs in a hundred or more samples simultaneously. Therefore, the use of the SNPlex assay, coupled with whole genome amplification (WGA), is a good solution for future applications in well-equipped laboratories.
Yong, K J; Scott, D J
Directed evolution is a powerful method for engineering proteins towards user-defined goals and has been used to generate novel proteins for industrial processes, biological research and drug discovery. Typical directed evolution techniques include cellular display, phage display, ribosome display and water-in-oil compartmentalization, all of which physically link individual members of diverse gene libraries to their translated proteins. This allows screening or selection for a desired protein function and subsequent isolation of the encoding gene from diverse populations. For biotechnological and industrial applications there is a need to engineer proteins that are functional under conditions not compatible with these techniques, such as high temperatures and harsh detergents. Cellular High-throughput Encapsulation Solubilization and Screening (CHESS) is a directed evolution method originally developed to engineer detergent-stable G protein-coupled receptors (GPCRs) for structural biology. With CHESS, library-transformed bacterial cells are encapsulated in detergent-resistant polymers to form capsules, which serve to contain mutant genes and their encoded proteins upon detergent-mediated solubilization of cell membranes. Populations of capsules can be screened like single cells, enabling rapid isolation of genes encoding detergent-stable protein mutants. To demonstrate the general applicability of CHESS to other proteins, we have characterized the stability and permeability of CHESS microcapsules and employed CHESS to generate thermostable, sodium dodecyl sulfate (SDS)-resistant green fluorescent protein (GFP) mutants, the first soluble proteins to be engineered using CHESS. © 2014 Wiley Periodicals, Inc.
Desmarais, Samantha M.; Tropini, Carolina; Miguel, Amanda; Cava, Felipe; Monds, Russell D.; de Pedro, Miguel A.; Huang, Kerwyn Casey
The bacterial cell wall is a network of glycan strands cross-linked by short peptides (peptidoglycan); it is responsible for the mechanical integrity of the cell and shape determination. Liquid chromatography can be used to measure the abundance of the muropeptide subunits composing the cell wall. Characteristics such as the degree of cross-linking and average glycan strand length are known to vary across species. However, a systematic comparison among strains of a given species has yet to be undertaken, making it difficult to assess the origins of variability in peptidoglycan composition. We present a protocol for muropeptide analysis using ultra performance liquid chromatography (UPLC) and demonstrate that UPLC achieves resolution comparable with that of HPLC while requiring orders of magnitude less injection volume and a fraction of the elution time. We also developed a software platform to automate the identification and quantification of chromatographic peaks, which we demonstrate has improved accuracy relative to other software. This combined experimental and computational methodology revealed that peptidoglycan composition was approximately maintained across strains from three Gram-negative species despite taxonomical and morphological differences. Peptidoglycan composition and density were maintained after we systematically altered cell size in Escherichia coli using the antibiotic A22, indicating that cell shape is largely decoupled from the biochemistry of peptidoglycan synthesis. High-throughput, sensitive UPLC combined with our automated software for chromatographic analysis will accelerate the discovery of peptidoglycan composition and the molecular mechanisms of cell wall structure determination. PMID:26468288
Lu, Mei; Chan, Brian M; Schow, Peter W; Chang, Wesley S; King, Chadwick T
With currently available assay formats using either immobilized protein (ELISA, enzyme-linked immunosorbent assay) or immunostaining of fixed cells for primary monoclonal antibody (mAb) screening, researchers often fail to identify and characterize antibodies that recognize the native conformation of cell-surface antigens. Therefore, screening using live cells has become an integral and important step contributing to the successful identification of therapeutic antibody candidates, and developing high-throughput screening (HTS) technologies using live cells has become a major priority for therapeutic mAb discovery and development. We have developed a novel technique called Multiplexed Fluorescent Cell Barcoding (MFCB), a flow cytometry-based method built upon the Fluorescent Cell Barcoding (FCB) technique and the Luminex fluorescent bead array system, but applicable to high-throughput mAb screens on live cells. Using this technique in our system, we can simultaneously identify or characterize the antibody-antigen binding of up to nine uniquely fluorescently labeled cell populations in the time that it would normally take to process a single population. This has significantly reduced the time needed to identify potential lead candidates. This new technology enables investigators to conduct large-scale primary hybridoma screens using flow cytometry. This in turn has allowed us to screen antibodies more efficiently than before and to streamline the identification and characterization of lead molecules. Copyright © 2017 Elsevier B.V. All rights reserved.
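Fluorescent cell barcoding generally works by labeling each cell population with a distinct combination of dye intensities; two dyes at three levels give the nine populations mentioned. A toy deconvolution sketch, with hypothetical dye levels (not the actual MFCB reagents or gating strategy):

```python
import math

# Hypothetical dye intensity levels (arbitrary fluorescence units); 3 x 3 = 9 barcodes
LEVELS = [100.0, 1000.0, 10000.0]

def nearest_level(intensity):
    """Index of the closest barcoding level, compared in log space
    since dye intensities span decades."""
    return min(range(len(LEVELS)),
               key=lambda k: abs(math.log10(intensity) - math.log10(LEVELS[k])))

def barcode(dye_a, dye_b):
    """Map a cell's two dye intensities to one of nine populations (0-8)."""
    return nearest_level(dye_a) * len(LEVELS) + nearest_level(dye_b)

print(barcode(120.0, 9000.0))   # 2: low dye A, high dye B
print(barcode(1500.0, 800.0))   # 4: mid dye A, mid dye B
```

After deconvolution, antibody binding can be read out per population from the antigen-detection channel, which is what allows nine cell lines to share one tube.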
Low, Ying Wei Ivan; Blasco, Francesca; Vachaspati, Prakash
Lipophilicity is one of the molecular properties assessed in early drug discovery. Direct measurement of the octanol-water distribution coefficient (logD) requires an analytical method with a large dynamic range or multistep dilutions, as the analyte's concentrations span several orders of magnitude. In addition, the water/buffer and octanol phases have very different polarities, which can lead to matrix effects that affect the LC-MS response and hence to erroneous logD values. Most compound libraries use DMSO stocks, as this greatly reduces the sample requirement, but the presence of DMSO has been shown to underestimate the lipophilicity of the analyte. The present work describes the development of an optimised shake-flask logD method using a deep-well 96-well plate that addresses the issues related to matrix effects, DMSO concentration and incubation conditions, and is also amenable to high throughput. Our results indicate that equilibrium can be achieved within 30 min by flipping the plate on its side, while even 0.5% DMSO is not tolerated in the assay. This study uses the matched-matrix concept to minimise errors in analysing the two phases, namely buffer and octanol, by LC-MS. Copyright © 2016 Elsevier B.V. All rights reserved.
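In a shake-flask experiment, logD is the base-10 logarithm of the ratio of analyte concentration in the octanol phase to that in the buffer phase; with matched-matrix LC-MS calibration, peak areas can stand in for concentrations once dilutions are corrected. A minimal sketch with invented peak areas and dilution factors:

```python
import math

def log_d(area_octanol, area_buffer, dilution_octanol=1.0, dilution_buffer=1.0):
    """Shake-flask logD from LC-MS peak areas of the two phases.

    Assumes matched-matrix calibration, so the MS response factors cancel;
    the dilution factors correct for any pre-injection dilution of a phase.
    """
    c_oct = area_octanol * dilution_octanol
    c_buf = area_buffer * dilution_buffer
    return math.log10(c_oct / c_buf)

# Hypothetical areas; the octanol phase was diluted 100x before injection
# because lipophilic analytes concentrate there.
print(round(log_d(5.0e4, 2.0e4, dilution_octanol=100.0), 2))  # logD ~ 2.4
```

The large dynamic range mentioned in the abstract comes from exactly this ratio: a logD of 4 means a 10,000-fold concentration difference between the phases.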
Peralta-Yahya, Pamela; Carter, Brian T.; Lin, Hening; Tao, Haiyan; Cornish, Virginia W.
Efficient enzymatic hydrolysis of lignocellulosic material remains one of the major bottlenecks to cost-effective conversion of biomass to ethanol. Improvement of glycosylhydrolases, however, is limited by existing medium-throughput screening technologies. Here, we report the first high-throughput selection for cellulase catalysts. This selection was developed by adapting chemical complementation to provide a growth assay for bond cleavage reactions. First, a URA3 counter-selection was adapted to link chemical-dimerizer-activated gene transcription to cell death. Next, the URA3 counter-selection was shown to detect cellulase activity based on cleavage of a tetrasaccharide chemical dimerizer substrate and the resulting decrease in expression of the toxic URA3 reporter. Finally, the utility of the cellulase selection was assessed by isolating cellulases with improved activity from a cellulase library created by family DNA shuffling. This application provides further evidence that chemical complementation can be readily adapted to detect different enzymatic activities for important chemical transformations for which no natural selection exists. Because selections can test far larger numbers of enzyme variants than existing medium-throughput screens for cellulases, this assay has the potential to impact the discovery of improved cellulases and other glycosylhydrolases for biomass conversion from libraries created by mutagenesis or obtained from natural biodiversity. PMID:19053460
Collman, Benjamin M; Miller, Jonathan M; Seadeek, Christopher; Stambek, Julie A; Blackburn, Anthony C
In recent years, high throughput (HT) screening has become the most widely used approach for early phase salt screening and selection in a drug discovery/development setting. The purpose of this study was to compare a rational approach for salt screening and selection to those results previously generated using a HT approach. The rational approach involved a much smaller number of initial trials (one salt synthesis attempt per counterion) that were selected based on a few strategic solubility determinations of the free form combined with a theoretical analysis of the ideal solvent solubility conditions for salt formation. Salt screening results for sertraline, tamoxifen, and trazodone using the rational approach were compared to those previously generated by HT screening. The rational approach produced similar results to HT screening, including identification of the commercially chosen salt forms, but with a fraction of the crystallization attempts. Moreover, the rational approach provided enough solid from the very initial crystallization of a salt for more thorough and reliable solid-state characterization and thus rapid decision-making. The crystallization techniques used in the rational approach mimic larger-scale process crystallization, allowing smoother technical transfer of the selected salt to the process chemist.
Sloane, A.J.; Duff, J.L.; Hopwood, F.G.; Wilson, N.L.; Smith, P.E.; Hill, C.J.; Packer, N.H.; Williams, K.L.; Gooley, A.A.; Cole, R.A.; Cooley, P.W.; Wallace, D.B.
We describe a 'chemical printer' that uses piezoelectric pulsing for rapid and accurate microdispensing of picolitre volumes of fluid for proteomic analysis of 'protein macroarrays'. Unlike positive-transfer and pin-transfer systems, our printer dispenses fluid in a non-contact process that ensures the fluid source cannot be contaminated by substrate during a printing event. We demonstrate automated delivery of enzyme and matrix solutions for on-membrane protein digestion and subsequent peptide mass fingerprinting (pmf) analysis directly from the membrane surface using matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry (MS). This approach bypasses the more commonly used multi-step procedures, permitting more rapid protein identification. We also highlight the advantage of printing different chemistries onto an individual protein spot for multiple microscale analyses. This ability is particularly useful when detailed characterisation of a rare and valuable sample is required. Using a combination of PNGase F and trypsin, we have mapped sites of N-glycosylation using on-membrane digestion strategies. We also demonstrate the ability to print multiple serum samples in a micro-ELISA format and rapidly screen a protein macroarray of human blood plasma for pathogen-derived antigens. We anticipate that the 'chemical printer' will be a major component of proteomic platforms for high-throughput protein identification and characterisation, with widespread applications in biomedical and diagnostic discovery.
de la Iglesia, Diana; García-Remesal, Miguel; de la Calle, Guillermo; Kulikowski, Casimir; Sanz, Ferran; Maojo, Víctor
The Human Genome Project and the explosion of high-throughput data have transformed the areas of molecular and personalized medicine, which are producing a wide range of studies and experimental results and providing new insights for developing medical applications. Research in many interdisciplinary fields is resulting in data repositories and computational tools that support a wide diversity of tasks: genome sequencing, genome-wide association studies, analysis of genotype-phenotype interactions, drug toxicity and side effects assessment, prediction of protein interactions and diseases, development of computational models, biomarker discovery, and many others. The authors of the present paper have developed several inventories covering tools, initiatives and studies in different computational fields related to molecular medicine: medical informatics, bioinformatics, clinical informatics and nanoinformatics. With these inventories, created by mining the scientific literature, we have carried out several reviews of these fields, providing researchers with a useful framework to locate, discover, search and integrate resources. In this paper we present an analysis of the state-of-the-art as it relates to computational resources for molecular medicine, based on results compiled in our inventories, as well as results extracted from a systematic review of the literature and other scientific media. The present review is based on the impact of their related publications and the available data and software resources for molecular medicine. It aims to provide information that can be useful to support ongoing research and work to improve diagnostics and therapeutics based on molecular-level insights.
Drug ototoxicity research has relied traditionally on animal models for the discovery and development of therapeutic interventions. More than 50 years of research, however, has delivered few, if any, successful clinical strategies for preventing or ameliorating the ototoxic effects of common pharmacological drugs such as aminoglycoside…
Background With the advances in DNA sequencer-based technologies, it has become possible to automate several steps of the genotyping process, leading to increased throughput. To efficiently handle the large amounts of genotypic data generated and help with quality control, there is a strong need for a software system that can help with the tracking of samples and the capture and management of data at different steps of the process. Such systems, while serving to manage the workflow precisely, also encourage good laboratory practice by standardizing protocols and recording and annotating data from every step of the workflow. Results A laboratory information management system (LIMS) has been designed and implemented at the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) that meets the requirements of a moderately high-throughput molecular genotyping facility. The application is designed as modules and is simple to learn and use. The application leads the user through each step of the process, from starting an experiment to storing output data from the genotype detection step with auto-binning of alleles, thus ensuring that every DNA sample is handled in an identical manner and all the necessary data are captured. The application keeps track of DNA samples and generated data. Data entry into the system is through forms and file uploads. The LIMS provides functions to trace back to the electrophoresis gel files or sample source for any genotypic data and for repeating experiments. The LIMS is presently being used to capture high-throughput SSR (simple-sequence repeat) genotyping data from the legume (chickpea, groundnut and pigeonpea) and cereal (sorghum and millets) crops of importance in the semi-arid tropics. Conclusion A laboratory information management system is available that has been found useful in the management of microsatellite genotype data in a moderately high-throughput genotyping facility.
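Auto-binning of alleles, mentioned for the genotype detection step, amounts to clustering raw fragment-size estimates into discrete allele calls. A simplified sketch (tolerance and sizes invented; the LIMS's actual binning algorithm is not described in the abstract):

```python
def bin_alleles(sizes, tolerance=0.5):
    """Group raw fragment sizes (bp) into allele bins.

    A size within `tolerance` bp of the running mean of the current bin is
    merged into it; the bin mean is rounded to name the allele.
    """
    bins = []  # each bin is a list of raw sizes
    for s in sorted(sizes):
        if bins and abs(s - sum(bins[-1]) / len(bins[-1])) <= tolerance:
            bins[-1].append(s)
        else:
            bins.append([s])
    return {round(sum(b) / len(b)): b for b in bins}

# Hypothetical SSR fragment sizes from several samples
raw = [172.1, 171.9, 172.3, 174.0, 174.2, 178.1]
print(sorted(bin_alleles(raw)))  # [172, 174, 178]
```

Binning like this is what makes allele calls comparable across gels and runs, which is why the LIMS ties it to sample tracking and trace-back.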
The 2013 Curriculum has been introduced in schools appointed as implementers. For the English subject, this curriculum demands that students improve their skills. One of the suggested methods for achieving this is discovery learning, since it is considered appropriate for increasing students' ability, especially to fulfill minimum…
Duarte, José M; Barbier, Içvara; Schaerli, Yolanda
Synthetic biologists increasingly rely on directed evolution to optimize engineered biological systems. Applying an appropriate screening or selection method for identifying the potentially rare library members with the desired properties is a crucial step for success in these experiments. Special challenges include substantial cell-to-cell variability and the requirement to check multiple states (e.g., being ON or OFF depending on the input). Here, we present a high-throughput screening method that addresses these challenges. First, we encapsulate single bacteria into microfluidic agarose gel beads. After incubation, they harbor monoclonal bacterial microcolonies (e.g., expressing a synthetic construct) and can be sorted according to their fluorescence by fluorescence-activated cell sorting (FACS). We determine enrichment rates and demonstrate that we can measure the average fluorescent signals of microcolonies containing phenotypically heterogeneous cells, obviating the problem of cell-to-cell variability. Finally, we apply this method to sort a pBAD promoter library at ON and OFF states.
Song, Xiaohang; Cvetkovski, Darko; Hälsig, Tim; Rave, Wolfgang; Fettweis, Gerhard; Grass, Eckhard; Lankl, Berthold
The evolution to ultra-dense next generation networks requires a massive increase in throughput and deployment flexibility. Therefore, novel wireless backhaul solutions that can support these demands are needed. In this work we present an approach for a millimeter wave line-of-sight MIMO backhaul design, targeting transmission rates in the order of 100 Gbit/s. We provide theoretical foundations for the concept showcasing its potential, which are confirmed through channel measurements. Furthermore, we provide insights into the system design with respect to antenna array setup, baseband processing, synchronization, and channel equalization. Implementation in a 60 GHz demonstrator setup proves the feasibility of the system concept for high throughput backhauling in next generation networks.
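For a symmetric uniform-linear-array LoS MIMO link, a classic design rule from the line-of-sight MIMO literature places the antennas at spacing d = sqrt(λR/N) to orthogonalize the channel at link range R. A quick calculation for a hypothetical 60 GHz hop (the parameters are invented, not the demonstrator's actual geometry):

```python
import math

def optimal_ula_spacing(freq_hz, link_range_m, n_antennas):
    """Antenna spacing that orthogonalizes a symmetric broadside ULA
    LoS MIMO channel: d = sqrt(lambda * R / N)."""
    wavelength = 3.0e8 / freq_hz
    return math.sqrt(wavelength * link_range_m / n_antennas)

# 60 GHz carrier (wavelength 5 mm), 100 m hop, 4 antennas per side
d = optimal_ula_spacing(60e9, 100.0, 4)
print(round(d, 3))  # 0.354 (meters)
```

The sub-meter spacing illustrates why millimeter-wave frequencies make spatially multiplexed LoS backhaul practical: at lower carrier frequencies the same rank-N channel would require arrays many meters wide.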
Li, B; Chan, E C Y
We present an approach to customize the sample submission process for high-throughput purification (HTP) of combinatorial parallel libraries using preparative liquid chromatography electrospray ionization mass spectrometry. In this study, Visual Basic and Visual Basic for Applications programs were developed using Microsoft Visual Basic 6 and Microsoft Excel 2000, respectively. These programs are subsequently applied for the seamless electronic submission and handling of data for HTP. Functions were incorporated into these programs so that medicinal chemists can perform on-line verification of the purification status and on-line retrieval of post-purification data. The application of these user-friendly and cost-effective programs in our HTP technology has greatly increased our work efficiency by reducing paperwork and manual manipulation of data.
Abalde-Cela, Sara; Gould, Anna; Liu, Xin; Kazamia, Elena; Smith, Alison G; Abell, Chris
Ethanol production by microorganisms is an important renewable energy source. Most processes involve fermentation of sugars from plant feedstock, but there is increasing interest in direct ethanol production by photosynthetic organisms. To facilitate this, a high-throughput screening technique for the detection of ethanol is required. Here, a method for the quantitative detection of ethanol in a microdroplet-based platform is described that can be used for screening cyanobacterial strains to identify those with the highest ethanol productivity levels. The detection of ethanol by enzymatic assay was optimized both in bulk and in microdroplets. In parallel, the encapsulation of engineered ethanol-producing cyanobacteria in microdroplets and their growth dynamics in microdroplet reservoirs were demonstrated. The combination of modular microdroplet operations including droplet generation for cyanobacteria encapsulation, droplet re-injection and pico-injection, and laser-induced fluorescence, were used to create this new platform to screen genetically engineered strains of cyanobacteria with different levels of ethanol production.
Cruz, J.; Hooshmand Zadeh, S.; Graells, T.; Andersson, M.; Malmström, J.; Wu, Z. G.; Hjort, K.
Inertial focusing is a promising microfluidic technology for the concentration and separation of particles by size. However, the pressure required increases strongly as particle size decreases. Theory and experimental results for larger particles were used to scale down the phenomenon and find the conditions that focus 1 µm particles. High-pressure experiments in robust glass chips were used to demonstrate the alignment. We show that the technique works for 1 µm spherical polystyrene particles and for Escherichia coli, without harming the bacteria at 50 µl min-1. The potential to focus bacteria, simplicity of use and high throughput make this technology interesting for healthcare applications, where concentration and purification of a sample may be required as an initial step.
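A widely cited rule of thumb for inertial focusing (due to Di Carlo and co-workers) is that the ratio of particle diameter a to channel hydraulic diameter Dh should be at least about 0.07, which is why focusing 1 µm particles demands very small channels and hence the high pressures mentioned above. A back-of-the-envelope check with invented channel dimensions:

```python
def hydraulic_diameter(width_m, height_m):
    """Hydraulic diameter of a rectangular channel: 2wh / (w + h)."""
    return 2.0 * width_m * height_m / (width_m + height_m)

def focuses(particle_diameter_m, width_m, height_m, ratio_threshold=0.07):
    """Rule-of-thumb inertial focusing criterion: a / Dh >= ~0.07."""
    return particle_diameter_m / hydraulic_diameter(width_m, height_m) >= ratio_threshold

# A 1 um particle needs Dh of roughly 14 um or less by this criterion
print(focuses(1e-6, 15e-6, 10e-6))  # True  (Dh = 12 um)
print(focuses(1e-6, 50e-6, 50e-6))  # False (Dh = 50 um)
```

Shrinking the channel cross-section to satisfy this ratio raises the hydraulic resistance sharply, consistent with the paper's use of pressure-tolerant glass chips.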
In this paper we describe the design, implementation and performance of Trans4SCIF, a user-level socket-like transport library for the Intel Xeon Phi coprocessor. The Trans4SCIF library is primarily intended for high-throughput applications. It uses RDMA transfers over the native SCIF support in a way that is transparent to the application, which has the illusion of using conventional stream sockets. We also discuss the integration of Trans4SCIF with the ZeroMQ messaging library, used extensively by several applications running at CERN. We show that this can lead to a substantial, up to 3x, increase in application throughput compared to the default TCP/IP transport option.
Fanzio, Paola; Cagliani, Alberto; Peterffy, Kristof G.
The patterning of conductive polymers is a major challenge in the implementation of these materials in several research and industrial applications, spanning from photovoltaics to biosensors. Within this context, we have developed a reliable technique to pattern a thin layer of the conductive polymer poly(3,4-ethylenedioxythiophene) (PEDOT) by means of a low-cost, high-throughput soft embossing process. We were able to reproduce a functional conductive pattern with a minimum dimension of 1 µm and to fabricate electrically decoupled electrodes. Moreover, the conductivity of the PEDOT films has been characterized, finding that a post-processing treatment with ethylene glycol allows an increase in conductivity and a decrease in water solubility of the PEDOT film. Finally, cyclic voltammetry demonstrates that the post-treatment also ensures the electrochemical activity of the film. Our…
Tai, Mitchell; Ly, Amanda; Leung, Inne; Nayar, Gautam
The burgeoning pipeline for new biologic drugs has increased the need for high-throughput process characterization to efficiently use process development resources. Breakthroughs in highly automated and parallelized upstream process development have led to technologies such as the 250-mL automated mini bioreactor (ambr250™) system. Furthermore, developments in modern design of experiments (DoE) have promoted the use of definitive screening design (DSD) as an efficient method to combine factor screening and characterization. Here we utilize the 24-bioreactor ambr250™ system with 10-factor DSD to demonstrate a systematic experimental workflow to efficiently characterize an Escherichia coli (E. coli) fermentation process for recombinant protein production. The generated process model is further validated by laboratory-scale experiments and shows how the strategy is useful for quality by design (QbD) approaches to control strategies for late-stage characterization. © 2015 American Institute of Chemical Engineers.
Vinogradov, Alexander A; Gates, Zachary P; Zhang, Chi; Quartararo, Anthony J; Halloran, Kathryn H; Pentelute, Bradley L
A methodology to achieve high-throughput de novo sequencing of synthetic peptide mixtures is reported. The approach leverages shotgun nanoliquid chromatography coupled with tandem mass spectrometry-based de novo sequencing of library mixtures (up to 2000 peptides) as well as automated data analysis protocols to filter away incorrect assignments, noise, and synthetic side-products. For increasing the confidence in the sequencing results, mass spectrometry-friendly library designs were developed that enabled unambiguous decoding of up to 600 peptide sequences per hour while maintaining greater than 85% sequence identification rates in most cases. The reliability of the reported decoding strategy was additionally confirmed by matching fragmentation spectra for select authentic peptides identified from library sequencing samples. The methods reported here are directly applicable to screening techniques that yield mixtures of active compounds, including particle sorting of one-bead one-compound libraries and affinity enrichment of synthetic library mixtures performed in solution.
Ronquist, Scott; Meixner, Walter; Rajapakse, Indika; Snyder, John
The human genome is dynamic in structure, complicating researchers' attempts at fully understanding it. Time-series fluorescence in situ hybridization (FISH) imaging has increased our ability to observe genome structure, but due to cell-type and experimental variability these data are often noisy and difficult to analyze. Furthermore, computational analysis techniques are needed for homolog discrimination and canonical framework detection in time-series images. In this paper we introduce novel ideas for nucleus imaging analysis, present findings extracted using dynamic genome imaging, and propose an objective algorithm for high-throughput, time-series FISH imaging. While a canonical framework could not be detected beyond statistical significance in the analyzed dataset, a mathematical framework for detection has been outlined, with extension to 3D image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
Liu, Rong; Hassan, Taimur; Rallo, Robert; Cohen, Yoram
The increasing utilization of high-throughput screening (HTS) in toxicity studies of engineered nanomaterials (ENMs) requires tools for rapid and reliable processing and analysis of large HTS datasets. To meet this need, a web-based platform for HTS data analysis tools (HDAT) was developed that provides statistical methods suitable for ENM toxicity data. As a publicly available computational nanoinformatics infrastructure, HDAT provides different plate normalization methods, various HTS summarization statistics, self-organizing map (SOM)-based clustering analysis, and visualization of raw and processed data using both heat maps and SOMs. HDAT has been successfully used in a number of HTS studies of ENM toxicity, thereby enabling analysis of toxicity mechanisms and development of structure–activity relationships for ENM toxicity. The online approach afforded by HDAT should encourage standardization of and future advances in HTS, as well as facilitate convenient inter-laboratory comparisons of HTS datasets.
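Plate normalization of the kind such platforms provide can be illustrated with a simple plate-wise Z-score transform. This is a minimal sketch only; the function name and interface are illustrative and are not HDAT's actual API, which the abstract does not specify.

```python
import statistics

def z_score_normalize(plate_readings):
    """Convert raw well readings from one plate into plate-wise Z scores,
    so that readings from plates measured on different days or instruments
    become comparable. Illustrative sketch, not HDAT's implementation."""
    mu = statistics.mean(plate_readings)
    sd = statistics.stdev(plate_readings)
    return [(x - mu) / sd for x in plate_readings]
```

After this transform every plate has mean 0 and unit standard deviation, which is the property downstream clustering steps such as SOMs typically rely on.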
Schaible, Eric; Jimenez, Jessica; Church, Matthew; Lim, Eunhee; Stewart, Polite; Hexemer, Alexander
Grazing-incidence small-angle X-ray scattering (GISAXS) and grazing-incidence wide-angle X-ray scattering (GIWAXS) are important techniques for characterizing thin films. In order to meet rapidly increasing demand, the SAXS/WAXS beamline at the Advanced Light Source (beamline 7.3.3) has implemented a fully automated, high-throughput system to conduct SAXS, GISAXS and GIWAXS measurements. An automated robot arm transfers samples from a holding tray to a measurement stage. Intelligent software aligns each sample in turn, and measures each according to user-defined specifications. Users mail in trays of samples on individually barcoded pucks, and can download and view their data remotely. Data will be pipelined to the NERSC supercomputing facility and will be available to users via a web portal that facilitates highly parallelized analysis.
Zeng, Wei; Fisher, Alison L; Musson, Donald G; Wang, Amy Qiu
A novel method was developed and assessed to extend the lifetime of extraction columns in high-throughput liquid chromatography (HTLC) for bioanalysis of human plasma samples. In this method, a 15% acetic acid solution and 90% THF were used as mobile phases to clean up, respectively, the proteins in human plasma samples and the residual lipids from the extraction and analytical columns. The 15% acetic acid solution weakens the interactions between proteins and the stationary phase of the extraction column and increases protein solubility in the mobile phase. The 90% THF mobile phase prevents the accumulation of lipids and thus reduces the potential damage to the columns. Using this novel method, the extraction column lifetime has been extended to about 2000 direct plasma injections; this is the first time that high-concentration acetic acid and THF have been used in HTLC for on-line cleanup and extension of extraction column lifetime.
Cognitive Radio Vehicular Ad-hoc Networks (CR-VANETs) exploit cognitive radios to allow vehicles to access unused channels in their radio environment. Thus, CR-VANETs not only suffer from the traditional CR problems, especially spectrum sensing, but also face new challenges due to the highly dynamic nature of VANETs. In this paper, we present a low-delay, high-throughput radio environment assessment scheme for CR-VANETs that can be easily incorporated into the IEEE 802.11p standard developed for VANETs. Simulation results show that the proposed scheme significantly reduces the time to obtain the radio environment map and increases CR-VANET throughput.
Su, Hui [Iowa State Univ., Ames, IA (United States)]
Laser-induced fluorescence detection is one of the most sensitive detection techniques and has found enormous applications in various areas. The purpose of this research was to develop detection approaches based on laser-induced fluorescence in two different areas: heterogeneous catalyst screening and single-cell studies. First, the author introduced laser-induced fluorescence imaging (LIFI) as a high-throughput screening technique for heterogeneous catalysts, exploring its use in the discovery and study of various heterogeneous catalyst systems. The scheme is based on the fact that the creation or destruction of chemical bonds alters the fluorescence properties of suitably designed molecules. By irradiating the region immediately above the catalytic surface with a laser, the fluorescence intensity of a selected product or reactant can be imaged by a charge-coupled device (CCD) camera to follow catalytic activity as a function of time and space. By screening the catalytic activity of vanadium pentoxide catalysts in the oxidation of naphthalene, the author demonstrated that LIFI has good detection performance and the spatial and temporal resolution needed for high-throughput screening of heterogeneous catalysts. The sample packing density can reach up to 250 × 250 subunits/cm² for 40-μm wells. This experimental set-up can also screen solid catalysts via near-infrared thermography detection. In the second part of this dissertation, the author used laser-induced native fluorescence coupled with capillary electrophoresis (LINF-CE) and microscope imaging to study single-cell degranulation. On the basis of good temporal correlation with events observed through an optical microscope, individual peaks in the fluorescence electropherograms were identified as serotonin released from the granular core on contact with the surrounding fluid.
Image-based high-throughput plant phenotyping in the greenhouse has the potential to relieve the bottleneck currently presented by phenotypic scoring, which limits the throughput of gene discovery and crop improvement efforts. Numerous studies have employed automated RGB imaging to characterize biomass and growth of agronomically important crops. The objective of this study was to investigate the utility of hyperspectral imaging for quantifying chemical properties of maize and soybean plants in vivo. These properties included leaf water content, as well as concentrations of the macronutrients nitrogen (N), phosphorus (P), potassium (K), magnesium (Mg), calcium (Ca), and sulfur (S), and the micronutrients sodium (Na), iron (Fe), manganese (Mn), boron (B), copper (Cu), and zinc (Zn). Hyperspectral images were collected from 60 maize and 60 soybean plants, each subjected to varying levels of either water deficit or nutrient limitation stress, with the goal of creating a wide range of variation in the chemical properties of plant leaves. Plants were imaged on an automated conveyor belt system using a hyperspectral imager with a spectral range from 550 to 1,700 nm. Images were processed to extract a reflectance spectrum from each plant, and partial least squares regression models were developed to correlate spectral data with chemical data. Among all the chemical properties investigated, water content was predicted with the highest accuracy [R² = 0.93 and RPD (Ratio of Performance to Deviation) = 3.8]. All macronutrients were also quantified satisfactorily (R² from 0.69 to 0.92, RPD from 1.62 to 3.62), with N predicted best, followed by P, K, and S. The micronutrient group showed lower prediction accuracy (R² from 0.19 to 0.86, RPD from 1.09 to 2.69) than the macronutrient group. Cu and Zn were best predicted, followed by Fe and Mn. Na and B were the only two properties that hyperspectral imaging was not able to quantify satisfactorily (R² < 0.3 and RPD < 1.2). This study suggested
Drewes, Stephan; Straková, Petra; Drexler, Jan F; Jacob, Jens; Ulrich, Rainer G
Rodents are distributed throughout the world and interact with humans in many ways. They provide vital ecosystem services, some species are useful models in biomedical research, and some are kept as pets. However, many rodent species can have adverse effects, such as damage to crops and stored produce, and they are of health concern because of the transmission of pathogens to humans and livestock. The first rodent viruses were discovered by isolation approaches and resulted in breakthrough knowledge in immunology, molecular and cell biology, and cancer research. In addition to rodent-specific viruses, rodent-borne viruses cause a large number of zoonotic diseases. The most prominent examples are reemerging outbreaks of human hemorrhagic fever caused by arena- and hantaviruses. In addition, rodents are reservoirs for vector-borne pathogens, such as tick-borne encephalitis virus and Borrelia spp., and may carry human pathogenic agents but likely are not involved in their transmission to humans. Nowadays, next-generation sequencing, or high-throughput sequencing (HTS), is revolutionizing the speed of novel virus discovery, but other molecular approaches, such as generic RT-PCR/PCR and rolling circle amplification techniques, contribute significantly to this rapidly ongoing process. However, the current knowledge still represents only the tip of the iceberg when comparing the known human viruses to those known for rodents, the mammalian taxon with the largest number of species. The diagnostic potential of HTS-based metagenomic approaches is illustrated by their use in the discovery and complete genome determination of novel borna- and adenoviruses as causative disease agents in squirrels. In conclusion, HTS, in combination with conventional RT-PCR/PCR-based approaches, has resulted in a drastically increased knowledge of the diversity of rodent viruses. Future improvements of the workflows used, including bioinformatics analysis, will further
Whitehurst, Charles E; Annis, D Allen
Advances in combinatorial chemistry and genomics have inspired the development of novel affinity selection-based screening techniques that rely on mass spectrometry to identify compounds that preferentially bind to a protein target. Of the many affinity selection-mass spectrometry techniques so far documented, only a few solution-based implementations that separate target-ligand complexes away from unbound ligands persist today as routine high throughput screening platforms. Because affinity selection-mass spectrometry techniques do not rely on radioactive or fluorescent reporters or enzyme activities, they can complement traditional biochemical and cell-based screening assays and enable scientists to screen targets that may not be easily amenable to other methods. In addition, by employing mass spectrometry for ligand detection, these techniques enable high throughput screening of massive library collections of pooled compound mixtures, vastly increasing the chemical space that a target can encounter during screening. Of all drug targets, G protein coupled receptors yield the highest percentage of therapeutically effective drugs. In this manuscript, we present the emerging application of affinity selection-mass spectrometry to the high throughput screening of G protein coupled receptors. We also review how affinity selection-mass spectrometry can be used as an analytical tool to guide receptor purification, and further used after screening to characterize target-ligand binding interactions, enabling the classification of orthosteric and allosteric binders.
Li, Xiaofei; Wu, Yuhua; Li, Jun; Li, Yunjing; Long, Likun; Li, Feiwu; Wu, Gang
The rapid increase in the number of genetically modified (GM) varieties has led to a demand for high-throughput methods to detect genetically modified organisms (GMOs). We describe a new dynamic array-based high-throughput method to simultaneously detect 48 targets in 48 samples on a Fluidigm system. The test targets included species-specific genes, common screening elements, most of the Chinese-approved GM events, and several unapproved events. The 48 TaqMan assays successfully amplified products from both single-event samples and complex samples with a GMO DNA amount of 0.05 ng, and displayed high specificity. To improve the sensitivity of detection, a preamplification step for the 48 pooled targets was added to enrich the amount of template before performing the dynamic chip assays. This dynamic chip-based method allowed the synchronous high-throughput detection of multiple targets in multiple samples. Thus, it represents an efficient, qualitative method for GMO multi-detection.
Xu, Like; Ouyang, Weiying; Qian, Yanyun; Su, Chao; Su, Jianqiang; Chen, Hong
Antibiotic resistance genes (ARGs) are present in surface water and often cannot be completely eliminated by drinking water treatment plants (DWTPs). Improper elimination of ARG-harboring microorganisms contaminates the water supply and can lead to animal and human disease. Therefore, it is of utmost importance to determine the most effective ways by which DWTPs can eliminate ARGs. Here, we tested water samples from two DWTPs and their distribution systems and detected the presence of 285 ARGs, 8 transposases, and intI-1 by utilizing high-throughput qPCR. The prevalence of ARGs differed between the two DWTPs, one of which employed conventional water treatments while the other had advanced treatment processes. The relative abundance of ARGs increased significantly after treatment with biological activated carbon (BAC), raising the number of detected ARGs from 76 to 150. Furthermore, the final chlorination step enhanced the relative abundance of ARGs in the finished water generated from both DWTPs. The total enrichment of ARGs varied from 6.4- to 109.2-fold in tap water compared to finished water, among which beta-lactam resistance genes displayed the highest enrichment. Six transposase genes were detected in tap water samples, with the transposase gene TnpA-04 showing the greatest enrichment (up to 124.9-fold). We observed significant positive correlations between ARGs and mobile genetic elements (MGEs) across the distribution systems, indicating that transposases and intI-1 may contribute to antibiotic resistance in drinking water. To our knowledge, this is the first study to investigate the diversity and abundance of ARGs in drinking water treatment systems in China utilizing high-throughput qPCR techniques. Copyright © 2016 Elsevier Ltd. All rights reserved.
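Fold-enrichment figures like those above are typically derived from qPCR cycle-threshold (Ct) values with the 2^-ΔΔCt method, normalizing each target gene to a reference gene such as 16S rRNA. A minimal sketch of that arithmetic follows; the function and the example Ct values are illustrative, not data from this study.

```python
def fold_change(ct_target_a, ct_ref_a, ct_target_b, ct_ref_b):
    """Relative abundance of a gene in sample A versus sample B by the
    2^-ΔΔCt method. Each target Ct is first normalized to a reference
    gene measured in the same sample (e.g. 16S rRNA for ARG studies)."""
    delta_a = ct_target_a - ct_ref_a   # ΔCt in sample A (e.g. tap water)
    delta_b = ct_target_b - ct_ref_b   # ΔCt in sample B (e.g. finished water)
    return 2 ** -(delta_a - delta_b)   # lower ΔCt means more template
```

For example, a target that sits 5 cycles closer to its reference in tap water than in finished water corresponds to a 32-fold enrichment.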
Abstract Background In a high-throughput environment, the use of degenerate PCR primers is an important strategy for PCR amplifying and sequencing a large set of viral isolates from populations that are potentially heterogeneous and continuously evolving. Degenerate primers allow for the PCR amplification of a wider range of viral isolates with only one set of pre-mixed primers, thus increasing amplification success rates and minimizing the need for genome finishing activities. To successfully select the large set of degenerate PCR primers necessary to tile across an entire viral genome and to maximize their success, this process is best performed computationally. Results We have developed a fully automated degenerate PCR primer design system that plays a key role in the J. Craig Venter Institute's (JCVI) high-throughput viral sequencing pipeline. A consensus viral genome, or a set of consensus segment sequences in the case of a segmented virus, is specified using IUPAC ambiguity codes in the consensus template sequence to represent the allelic diversity of the target population. PCR primer pairs are then selected computationally to produce a minimal amplicon set capable of tiling across the full length of the specified target region. As part of the tiling process, primer pairs are computationally screened to meet the criteria for successful PCR with one of two described amplification protocols. The actual sequencing success rates for primers designed for measles virus, mumps virus, human parainfluenza virus 1 and 3, human respiratory syncytial virus A and B, and human metapneumovirus are described, where >90% of designed primer pairs were able to consistently and successfully amplify >75% of the isolates. Conclusions Augmenting our previously developed and published JCVI Primer Design Pipeline, we achieved similarly high sequencing success rates with only minor software modifications. The recommended methodology for the construction of the consensus
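The degeneracy implied by IUPAC ambiguity codes in a consensus template can be enumerated directly. The sketch below shows how a degenerate primer maps to its concrete sequence variants and how large the synthesized primer mix becomes; it is illustrative and is not the JCVI pipeline's code.

```python
from itertools import product

# Standard IUPAC nucleotide ambiguity codes
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_degenerate(primer):
    """All concrete primer sequences encoded by a degenerate primer."""
    return ["".join(p) for p in product(*(IUPAC[b] for b in primer.upper()))]

def degeneracy(primer):
    """Number of distinct sequences present in the synthesized primer mix."""
    n = 1
    for base in primer.upper():
        n *= len(IUPAC[base])
    return n
```

Keeping this count low is one of the screening criteria a degenerate-primer designer must balance against coverage of the target population's allelic diversity.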
Jakob D Wikstrom
The pancreatic beta cell is unique in its response to nutrients by increased fuel oxidation. Recent studies have demonstrated that oxygen consumption rate (OCR) may be a valuable predictor of islet quality and long-term nutrient responsiveness. To date, high-throughput and user-friendly assays for islet respiration are lacking. The aim of this study was to develop such an assay and to examine the bioenergetic efficiency of rodent and human islets. The XF24 respirometer platform was adapted to islets by the development of a 24-well plate specifically designed to confine islets. The islet plate generated data with low inter-well variability and enabled stable measurement of oxygen consumption for hours. The F1F0 ATP synthase blocker oligomycin was used to assess uncoupling, while rotenone together with myxothiazol/antimycin was used to measure the level of non-mitochondrial respiration. The use of oligomycin in islets was validated by reversing its effect in the presence of the uncoupler FCCP. Respiratory leak averaged 59% and 49% of basal OCR in islets from C57Bl6/J and FVB/N mice, respectively. In comparison, the respiratory leak of INS-1 cells and C2C12 myotubes was measured at 38% and 23%, respectively. Islets from a cohort of human donors showed a respiratory leak of 38%, significantly lower than that of mouse islets. The assay for islet respiration presented here provides a novel tool that can be used to study islet mitochondrial function in a relatively high-throughput manner. The data obtained in this study show that rodent islets are less bioenergetically efficient than human islets, as well as INS-1 cells.
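Leak percentages of this kind follow from partitioning basal OCR using the oligomycin and rotenone/myxothiazol/antimycin responses. A hedged sketch of the arithmetic is below; the exact formula and the example OCR values are illustrative assumptions, not taken from the paper.

```python
def respiratory_leak_percent(basal_ocr, oligomycin_ocr, non_mito_ocr):
    """Proton-leak respiration, i.e. the oligomycin-insensitive part of
    mitochondrial respiration, expressed as a percentage of basal OCR.

    basal_ocr:      OCR before any inhibitor
    oligomycin_ocr: OCR after blocking the F1F0 ATP synthase
    non_mito_ocr:   OCR remaining after rotenone + myxothiazol/antimycin
    """
    leak = oligomycin_ocr - non_mito_ocr  # mitochondrial but not ATP-linked
    return 100.0 * leak / basal_ocr
```

With a basal OCR of 100, an oligomycin plateau of 69, and a non-mitochondrial floor of 10 (all in the same arbitrary units), the leak comes out at 59% of basal, matching the order of magnitude reported for C57Bl6/J islets.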
Park, Daniel Sang-Won; Chen, Pin-Chuan; You, Byoung Hee; Kim, Namwon; Park, Taehyun; Lee, Tae Yoon; Soper, Steven A; Nikitopoulos, Dimitris E; Murphy, Michael C; Datta, Proyag; Desta, Yohannes
A high-throughput, multi-well (96) polymerase chain reaction (PCR) platform, based on a continuous-flow (CF) mode of operation, was developed. Each CFPCR device was confined to a footprint of 8 × 8 mm², matching the footprint of a well on a standard micro-titer plate. While several CFPCR devices have been demonstrated, this is the first example of a high-throughput multi-well continuous-flow thermal reactor configuration. Verification of the feasibility of the multi-well CFPCR device was carried out at each stage of development, from manufacturing to demonstrating sample amplification. The multi-well CFPCR devices were fabricated by micro-replication in polymers, in this case polycarbonate to accommodate the peak temperatures during thermal cycling, using double-sided hot embossing. One side of the substrate contained the thermal reactors and the opposite side was patterned with structures to enhance thermal isolation of the closely packed constant-temperature zones. A 99 bp target from a λ-DNA template was successfully amplified in a prototype multi-well CFPCR device with a total reaction time as low as ~5 min at a flow velocity of 3 mm s⁻¹ (15.3 s cycle⁻¹), with a relatively low amplification efficiency compared to a bench-top thermal cycler for a 20-cycle device; reducing the flow velocity to 1 mm s⁻¹ (46.2 s cycle⁻¹) gave a seven-fold improvement in amplification efficiency. Amplification efficiencies increased at all flow velocities for 25-cycle devices with the same configuration.
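In a continuous-flow thermal cycler, cycle time is simply the channel length traversed per cycle divided by the flow velocity. The quick check below assumes each cycle loop is about 45.9 mm of channel, a figure inferred from the reported numbers rather than stated in the abstract.

```python
def seconds_per_cycle(loop_length_mm, velocity_mm_per_s):
    """Residence time for one thermal cycle in a continuous-flow PCR chip:
    the sample spends loop_length / velocity seconds per denature-anneal-
    extend loop, instead of waiting for a block to change temperature."""
    return loop_length_mm / velocity_mm_per_s

def total_reaction_minutes(n_cycles, loop_length_mm, velocity_mm_per_s):
    """Total amplification time for an n-cycle serpentine device."""
    return n_cycles * seconds_per_cycle(loop_length_mm, velocity_mm_per_s) / 60.0
```

Under that assumed loop length, 3 mm s⁻¹ gives 15.3 s per cycle and a 20-cycle device finishes in about 5.1 min, consistent with the reported ~5 min total reaction time.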
Georgantzoglou, Antonios; Merchant, Michael J; Jeynes, Jonathan C G; Mayhead, Natalie; Punia, Natasha; Butler, Rachel E; Jena, Rajesh
Charged particle therapy is increasingly becoming a valuable tool in cancer treatment, mainly due to the favorable interaction of particle radiation with matter. Its application is still limited due, in part, to a lack of data regarding the radiosensitivity of certain cell lines to this radiation type, especially to high-linear energy transfer (LET) particles. From the earliest days of radiation biology, the clonogenic survival assay has been used to provide radiation response data. This method produces reliable data, but it is not optimized for high-throughput microbeam studies with high-LET radiation, where high levels of cell killing lead to a very low probability of maintaining cells' clonogenic potential. A new method is therefore proposed in this paper, which could potentially allow these experiments to be conducted in a high-throughput fashion. Cells are seeded in special polypropylene dishes and bright-field illumination provides cell visualization. Digital images are obtained and cell detection is applied based on corner detection, generating individual cell targets as x-y points. These points in the dish are then irradiated individually by a micron-field-size high-LET microbeam. Post-irradiation, time-lapse imaging follows the cells' response. All irradiated cells are tracked by linking trajectories across all time-frames, based on finding their nearest position. Cell divisions are detected based on cell appearance and individual cell temporary corner density. The number of divisions anticipated is low due to the high probability of cell killing from high-LET irradiation. Survival curves are produced based on cells' capacity to divide at least four to five times. The process is repeated for a range of radiation doses. Validation shows the efficiency of the proposed cell detection and tracking method in finding cell divisions.
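The trajectory-linking step described, matching each cell to its nearest position in the next frame, can be sketched as a greedy nearest-neighbour assignment. This is an illustrative sketch under assumed parameters (the gating distance, in pixels, is invented); the abstract does not give the authors' implementation details.

```python
import math

def link_cells(prev_frame, next_frame, max_dist=20.0):
    """Greedily link cell centroids (x, y) in one frame to the nearest
    unclaimed centroid in the next frame, within max_dist pixels.
    Returns {index_in_prev: index_in_next}; cells with no match inside
    the gate are dropped (e.g. cells that died or left the field)."""
    links, used = {}, set()
    for i, p in enumerate(prev_frame):
        best_j, best_d = None, max_dist
        for j, q in enumerate(next_frame):
            if j in used:
                continue
            d = math.dist(p, q)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            links[i] = best_j
            used.add(best_j)
    return links
```

Applying this frame by frame yields per-cell trajectories; a division would then appear as one trajectory gaining a nearby unmatched detection in the next frame.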
…and molecular physical structure into the prediction of the macroscopic constitutive properties and behaviour of the polymers. GIM uses a mean field… Cβ and Cg are related to the loss of DOFs over the beta and glass transitions, where R is the molar gas constant and C is defined by: (2) C = NR (6.7T θ1… The compression yield behaviour of polymethyl methacrylate over a wide range of temperatures and strain-rates, Journal of Materials Science 8 (7…
Blixt, Ola; Cló, Emiliano; Nudelman, Aaron Samuel
Biomarker microarrays are becoming valuable tools for serological screening of disease-associated autoantibodies. Post-translational modifications (PTMs) such as glycosylation extend the range of protein function, and a variety of glycosylated proteins are known to be altered in disease progressi...
Veneman, Wouter J.; Stockhammer, Oliver W.; de Boer, Leonie; Zaat, Sebastian A. J.; Meijer, Annemarie H.; Spaink, Herman P.
Staphylococcus epidermidis bacteria are a major cause of biomaterial-associated infections in modern medicine. Yet there is little known about the host responses against this normally innocent bacterium in the context of infection of biomaterials. In order to better understand the factors involved
Thomas J.J. Gintjee; Alvin S.H. Magh; Carmen Bertoni
Centers for the screening of biologically active compounds and genomic libraries are becoming common in the academic setting and have given researchers devoted to developing strategies for the treatment of diseases, or interested in studying a biological phenomenon, unprecedented access to libraries that, until a few years ago, were accessible only to pharmaceutical companies. As a result, new drugs and genetic targets have now been identified for the treatment of Duchenne muscular dyst...
Newsam John M.
We attempt to take a strategic view of the development and application of HTE techniques across a broad spectrum of chemical, material and earth sciences, looking for unifying assumptions and approaches. We consider why much of the development of HTE technologies and techniques, as well as the majority of their application, has taken place in industry or in institutes or centers working closely with industry. And we look for commonalities and synergies across diverse HTE application areas, taking examples from the energy, catalysis, formulations and biotechnology fields.
‘Rapid Apple Decline’ (RAD) is a newly emerging problem of young, dwarf apple trees in the northeastern USA. The affected trees show trunk necrosis, bark cracking and canker formation before collapsing in the summer. In this study, a new luteovirus and three common viruses were identified from apple...
Singh, Ragini; Ramachandran, Vasanthi; Shandil, Radha; Sharma, Sreevalli; Khandelwal, Swati; Karmarkar, Malancha; Kumar, Naveen; Solapure, Suresh; Saralaya, Ramanatha; Nanduri, Robert; Panduga, Vijender; Reddy, Jitendar; Prabhakar, K. R.; Rajagopalan, Swaminathan; Rao, Narasimha; Narayanan, Shridhar; Anandkumar, Anand; Datta, Santanu
There are currently 18 drug classes for the treatment of tuberculosis, including those in the development pipeline. An in silico simulation enabled combing through the innumerably large search space to derive multidrug combinations. Through the use of ordinary differential equations (ODEs), we constructed an in silico kinetic platform in which the major metabolic pathways in Mycobacterium tuberculosis and the mechanisms of the antituberculosis drugs were integrated into a virtual proteome. The optimized model was used to evaluate 816 triplets from the set of 18 drugs. The experimentally derived cumulative fractional inhibitory concentration (∑FIC) value was within twofold of the model prediction. Bacterial enumeration revealed that a significant number of combinations that were synergistic for growth inhibition were also synergistic in their bactericidal effect. The in silico-based screen provided new starting points for testing in a mouse model of tuberculosis, in which two novel triplets and five novel quartets were significantly superior to the reference drug triplet of isoniazid, rifampin, and ethambutol (HRE) or the quartet of HRE plus pyrazinamide (HREZ). PMID:26149995
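The cumulative fractional inhibitory concentration used to score combinations is a simple sum over the component drugs. A hedged sketch follows; the interpretation thresholds shown are a common checkerboard-assay convention, not necessarily the one used in this study.

```python
def sum_fic(mic_in_combination, mic_alone):
    """Cumulative FIC (∑FIC) for a drug combination: each drug's MIC when
    used in the combination divided by its MIC alone, summed over drugs."""
    return sum(c / a for c, a in zip(mic_in_combination, mic_alone))

def interpret(sfic):
    """Common (but not universal) interpretation of ∑FIC values."""
    if sfic <= 0.5:
        return "synergistic"
    if sfic > 4.0:
        return "antagonistic"
    return "additive/indifferent"
```

For a pair whose MICs each drop to a quarter of their stand-alone values in combination, ∑FIC is 0.5, the conventional synergy cutoff.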
Lundegaard, Claus; Lund, Ole; Nielsen, Morten
and limitations regarding the number of proteins and MHC alleles that can feasibly be handled by such experimental methods have made in silico prediction models of high interest. MHC binding prediction methods are today of very high quality and can predict MHC binding peptides with high accuracy. This is possible...
of metal ions. Toyocamycin is an adenosine analog that has shown tumoricidal activity by inhibiting RNA synthesis and inducing apoptosis (44) and… copolymer esters as flow improvers of waxy crude oil. J. Petrol. Sci. Eng. 65:139–146. 20. Kyeremateng SO, Amado E, Kressler J. 2007. Synthesis and… MA. 2003. Mechanism of fluconazole resistance in Candida albicans biofilms: phase-specific role of efflux pumps and membrane sterols. Infect. Immun
Epstein-Barr Virus (EBV) latent infection is associated with several human malignancies and is a causal agent of lymphoproliferative diseases during immunosuppression. While inhibitors of herpesvirus DNA polymerases, like ganciclovir, reduce EBV lytic cycle infection, these treatments have limited efficacy for treating latent infection. EBNA1 is an EBV-encoded DNA-binding protein required for viral genome maintenance during latent infection. Here, we report the identification of a new class of small molecules that inhibit EBNA1 DNA binding activity. These compounds were identified by virtual screening of 90,000 low-molecular-mass compounds using computational docking programs with the solved crystal structure of EBNA1. Four structurally related compounds were found to inhibit EBNA1-DNA binding in biochemical assays with purified EBNA1 protein. The compounds showed 20-100 μM inhibition of EBNA1 in fluorescence polarization assays and were further validated for inhibition using electrophoretic mobility shift assays. These compounds exhibited no significant inhibition of an unrelated DNA-binding protein. Three of these compounds inhibited EBNA1 transcription activation function in cell-based assays and reduced EBV genome copy number when incubated with a Burkitt lymphoma cell line. These experiments provide a proof of principle that virtual screening can be used to identify specific inhibitors of EBNA1 that may have potential for the treatment of EBV latent infection.
This program examines innovative approaches and powerful new technologies to identify selective and potent agents directed at preventing or relieving the neuroparalytic toxic actions of botulinum toxin A (BoNTA)...
Polyploid species play significant roles in agriculture and food production. Many crop species are polyploid, such as potato, wheat, strawberry, and sugarcane. Genotyping has been a daunting task for genetic studies of polyploid crops, which lag far behind diploid crop species. The single nucleotide polymorphism (SNP) array is considered one of the most high-throughput, relatively cost-efficient, and automated genotyping approaches. However, there are significant challenges for SNP identification in complex, polyploid genomes, which have seriously slowed SNP discovery and array development in polyploid species. Ploidy is a significant factor impacting SNP quality and the validation rates of SNP markers in SNP arrays, which have proven to be a very important tool for genetic studies and molecular breeding. In this review, we (1) discuss the pros and cons of SNP arrays in general for high-throughput genotyping, (2) present the challenges of and solutions to SNP calling in polyploid species, (3) summarize the SNP selection criteria and considerations of SNP array design for polyploid species, (4) illustrate SNP array applications in several different polyploid crop species, (5) discuss challenges, available software, and accuracy comparisons for genotype calling based on SNP array data in polyploids, and finally (6) provide a series of SNP array design and genotype calling recommendations. This review presents a complete overview of SNP array development and applications in polyploid crops, which will benefit research in the molecular breeding and genetics of crops with complex genomes.
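The genotype-calling difficulty in polyploids comes from having to resolve intermediate allele dosages rather than the three clusters of a diploid. A toy sketch of dosage assignment from two-channel array intensities follows; it is illustrative only, since real polyploid callers fit mixture models to the intensity clusters rather than rounding a ratio.

```python
def call_dosage(ref_intensity, alt_intensity, ploidy):
    """Assign the alternate-allele dosage (0..ploidy) whose expected signal
    fraction is closest to the observed one. A tetraploid must separate
    five classes (AAAA, AAAB, AABB, ABBB, BBBB) instead of a diploid's
    three, which is why call quality drops as ploidy rises."""
    frac = alt_intensity / (ref_intensity + alt_intensity)
    return round(frac * ploidy)
```

In a tetraploid the class centers sit only a quarter of the signal range apart, so the same intensity noise that is harmless in a diploid readily flips calls between adjacent dosages.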
Zhang, Xiao-Yong; Wang, Guang-Hua; Xu, Xin-Ya; Nong, Xu-Hua; Wang, Jie; Amin, Muhammad; Qi, Shu-Hua
The present study investigated the fungal diversity of four different deep-sea sediments from the Okinawa Trough using high-throughput Illumina sequencing of the nuclear ribosomal internal transcribed spacer-1 (ITS1). A total of 40,297 fungal ITS1 sequences clustered into 420 operational taxonomic units (OTUs) at 97% sequence similarity, and 170 taxa were recovered from these sediments. Most ITS1 sequences (78%) belonged to the phylum Ascomycota, followed by Basidiomycota (17.3%), Zygomycota (1.5%) and Chytridiomycota (0.8%), with a small proportion (2.4%) belonging to unassigned fungal phyla. Compared with previous studies of fungal diversity in deep-sea sediments based on culture-dependent approaches and clone library analysis, the present results suggest that Illumina sequencing has dramatically accelerated the discovery of the fungal communities of deep-sea sediments. Furthermore, our results revealed that Sordariomycetes was the most diverse and abundant fungal class in this study, challenging the traditional view that the diversity of Sordariomycetes phylotypes is low in deep-sea environments. In addition, more than 12 taxa, accounting for 21.5% of sequences, were found to be rarely reported as deep-sea fungi, suggesting that the deep-sea sediments of the Okinawa Trough harbor a plethora of fungal communities different from those of other deep-sea environments. To our knowledge, this study is the first exploration of fungal diversity in deep-sea sediments from the Okinawa Trough using high-throughput Illumina sequencing.
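The phylum-level percentages above follow from a simple relative-abundance calculation over assigned reads. A minimal sketch in Python, using per-phylum read counts reconstructed to match the reported totals (the exact counts are an illustration, not the study's raw OTU table):

```python
# Per-phylum ITS1 read counts reconstructed to match the reported
# percentages (illustrative; not the study's raw data)
counts = {
    "Ascomycota": 31432,
    "Basidiomycota": 6971,
    "Zygomycota": 604,
    "Chytridiomycota": 322,
    "unassigned": 968,
}

total = sum(counts.values())  # 40,297 sequences, as reported
rel_abundance = {phylum: round(100 * n / total, 1)
                 for phylum, n in counts.items()}
```

With these counts, `rel_abundance` reproduces the 78% / 17.3% / 1.5% / 0.8% / 2.4% breakdown quoted in the abstract.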
Massey, Andrew J
Determining and understanding drug target engagement is critical for drug discovery. This can be challenging within living cells, as selective readouts are often unavailable. Here we describe a novel method for measuring target engagement in living cells based on the principle of altered protein thermal stabilization or destabilization in response to ligand binding. This assay (HCIF-CETSA) utilizes high-content, high-throughput single-cell immunofluorescent detection to determine target protein levels following heating of adherent cells in a 96-well plate format. We have used target engagement of Chk1 by potent small molecule inhibitors to validate the assay. Target engagement measured by this method was subsequently compared to target engagement measured by two alternative methods (autophosphorylation and CETSA). The HCIF-CETSA method appeared robust, and a good correlation was observed between target engagement measured by this method and by CETSA for the selective Chk1 inhibitor V158411. However, these EC50 values were 23- and 12-fold greater than the autophosphorylation IC50. The described method is therefore a valuable advance on the CETSA method, allowing the high-throughput determination of target engagement in adherent cells.
Priest, David G; Tanaka, Nobuyuki; Tanaka, Yo; Taniguchi, Yuichi
High-throughput microscopy of bacterial cells has elucidated fundamental cellular processes, including cellular heterogeneity and cell division homeostasis. Polydimethylsiloxane (PDMS)-based microfluidic devices provide advantages, including precise positioning of cells and high throughput; however, device fabrication is time-consuming and requires specialised skills. Agarose pads are a popular alternative, but cells often clump together, which hinders single-cell quantitation. Here, we imprint agarose pads with micro-patterned 'capsules' to trap individual cells, and 'lines' to direct cellular growth outwards in a straight line. We implement this micro-patterning in multi-pad devices called CapsuleHotel and LineHotel for high-throughput imaging. CapsuleHotel provides ~65,000 capsule structures per mm² that isolate individual Escherichia coli cells. In contrast, LineHotel provides ~300 line structures per mm that direct the growth of micro-colonies. With CapsuleHotel, a quantitative single-cell dataset of ~10,000 cells across 24 samples can be acquired and analysed in under 1 hour. LineHotel allows tracking the growth of >10 micro-colonies across 24 samples simultaneously for up to 4 generations. These easy-to-use devices can be provided in kit format and will accelerate discoveries in diverse fields ranging from microbiology to systems and synthetic biology.
Hassig, Christian A; Zeng, Fu-Yue; Kung, Paul; Kiankarimi, Mehrak; Kim, Sylvia; Diaz, Paul W; Zhai, Dayong; Welsh, Kate; Morshedian, Shana; Su, Ying; O'Keefe, Barry; Newman, David J; Rusman, Yudi; Kaur, Harneet; Salomon, Christine E; Brown, Susan G; Baire, Beeraiah; Michel, Andrew R; Hoye, Thomas R; Francis, Subhashree; Georg, Gunda I; Walters, Michael A; Divlianska, Daniela B; Roth, Gregory P; Wright, Amy E; Reed, John C
Antiapoptotic Bcl-2 family proteins are validated cancer targets composed of six related proteins. From a drug discovery perspective, these are challenging targets that exert their cellular functions through protein-protein interactions (PPIs). Although several isoform-selective inhibitors have been developed using structure-based design or high-throughput screening (HTS) of synthetic chemical libraries, no large-scale screen of natural product collections has been reported. A competitive displacement fluorescence polarization (FP) screen of nearly 150,000 natural product extracts was conducted against all six antiapoptotic Bcl-2 family proteins using fluorochrome-conjugated peptide ligands that mimic functionally relevant PPIs. The screens were conducted in 1536-well format and displayed satisfactory overall HTS statistics, with Z'-factor values ranging from 0.72 to 0.83 and a hit confirmation rate between 16% and 64%. Confirmed active extracts were orthogonally tested in a luminescent assay for caspase-3/7 activation in tumor cells. Active extracts were resupplied, and effort toward the isolation of pure active components was initiated through iterative bioassay-guided fractionation. Several previously described altertoxins were isolated from a microbial source, and the pure compounds demonstrate activity in both Bcl-2 FP and caspase cellular assays. The studies demonstrate the feasibility of ultra-high-throughput screening using natural product sources and highlight some of the challenges associated with this approach. © 2014 Society for Laboratory Automation and Screening.
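The Z'-factor quoted above (0.72 to 0.83) is a standard HTS assay-quality statistic computed from positive and negative control wells. A minimal sketch with made-up control readouts (the values are illustrative, not from the screen):

```python
from statistics import mean, stdev

def z_prime(pos, neg):
    """Z'-factor assay quality metric:
    1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above ~0.5 indicate an excellent HTS assay."""
    return 1 - 3 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

# Made-up control readouts (e.g. FP signal in millipolarization units)
pos_ctrl = [200, 205, 195, 202, 198]
neg_ctrl = [50, 52, 48, 51, 49]
quality = z_prime(pos_ctrl, neg_ctrl)  # ~0.89 for these toy values
```

The metric rewards both a wide separation between control means and tight control variability, which is why it is reported alongside hit confirmation rates.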
Asunción Salmeán, Armando
...a proof of concept that it is possible to use Comprehensive Microarray Polymer Profiling (CoMPP) as a tool for other extracellular matrices, such as those of marine animals, and not only for algal or plant cell walls. Thus, we discovered fucoidan and cellulose epitopes in several tissues of various marine animals from...... in cell development. Another part of this work focused on the development of a novel methodology for the discovery of unknown algal polysaccharides and the characterization of carbohydrate-binding proteins. This was based on the coevolution between algae and marine saprophytic microorganisms, which use the algal...
Call, Douglas F.; Logan, Bruce E.
There is great interest in studying exoelectrogenic microorganisms, but existing methods can require expensive electrochemical equipment and specialized reactors. We developed a simple system for conducting high-throughput bioelectrochemical...
Schwämmle, Veit; Vaudel, Marc
Cell signaling and functions heavily rely on post-translational modifications (PTMs) of proteins. Their high-throughput characterization is thus of utmost interest for multiple biological and medical investigations. In combination with efficient enrichment methods, peptide mass spectrometry analy...
Embryonic vascular disruption is an important adverse outcome pathway (AOP) given the knowledge that chemical disruption of early cardiovascular system development leads to broad prenatal defects. High throughput screening (HTS) assays provide potential building blocks for AOP d...
High-throughput DNA sequencing approaches have enabled direct interrogation of chromatin samples from mammalian cells. We are beginning to develop a genome-wide description of nuclear function during development, but further data collection, refinement, and integration are needed.
Ernstsen, Christina Lundgaard; Login, Frédéric H.; Jensen, Helene Halkjær
Quantification of intracellular bacterial colonies is useful in strategies directed against bacterial attachment, subsequent cellular invasion and intracellular proliferation. An automated, high-throughput microscopy-method was established to quantify the number and size of intracellular bacteria...
Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander
Plant root systems are key drivers of plant function and yield. They are also under-explored targets for meeting global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have focused solely on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images, combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual and low-throughput. Here, we present an open-source phenomics platform, "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons", enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field-based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyberinfrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high-volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots.
Nyshadham, Chandramouli; Hansen, Jacob; Oses, Corey; Curtarolo, Stefano; Hart, Gus
In 2006 an unexpected new superalloy, Co3[Al,W], was discovered. This new alloy is cobalt-based, in contrast to conventional superalloys, which are nickel-based. Inspired by this discovery, we performed first-principles calculations, searching through 2224 ternary metallic systems of the form A3[B0.5C0.5], where A = Ni/Co/Fe and [B, C] = all binary combinations of 40 different elements chosen from the periodic table. We found 175 new systems that are better than the Co3[Al,W] superalloy. Of these, 75 are entirely new: they have never been reported in the experimental literature. These 75 new potential superalloys are good candidates for further experiments. Our calculations are consistent with the current experimental literature where data exist. Work supported under ONR (MURI N00014-13-1-0635).
Raijada, Dhara; Cornett, Claus; Rantanen, Jukka
The present study puts forward a miniaturized high-throughput platform for understanding the influence of excipient selection and processing on the stability of a given drug compound. Four model drugs (sodium naproxen, theophylline, amlodipine besylate and nitrofurantoin) and ten different excipients were...... for chemical degradation. The proposed high-throughput platform can be used during early drug development to simulate typical processing-induced stress at a small scale and to understand possible phase transformation behaviour and the influence of excipients on this....
Chen, Hui; Jiang, Wen
The oral microbiome is one of the most diverse habitats in the human body and is closely related to oral health and disease. As sequencing techniques have developed, high-throughput sequencing has become a popular approach for oral microbial analysis. Oral bacterial profiles have been studied to explore the relationship between microbial diversity and oral diseases such as caries and periodontal disease. This review describes the application of high-throughput sequencing for characterizati...
Morse, Alison M; Calabro, Kaitlyn R; Fear, Justin M; Bloom, David C; McIntyre, Lauren M
High-throughput sequencing (HTS) has resulted in data for a number of herpes simplex virus (HSV) laboratory strains and clinical isolates. Knowledge of these sequences has been critical for investigating viral pathogenicity. However, the assembly of complete herpesviral genomes, including HSV, is complicated by the existence of large repeat regions and arrays of smaller reiterated sequences that are commonly found in these genomes. In addition, the inherent genetic variation in populations of isolates of viruses and other microorganisms presents an additional challenge to many existing HTS sequence assembly pipelines. Here, we evaluate two approaches for the identification of genetic variants in HSV-1 strains using Illumina short-read sequencing data. The first, a reference-based approach, identifies variants from reads aligned to a reference sequence; the second, a de novo assembly approach, identifies variants from reads aligned to de novo assembled consensus sequences. Of critical importance for both approaches is the reduction in the number of low-complexity regions through the construction of a non-redundant reference genome. We compared the variants identified by the two methods. Our results indicate that approximately 85% of variants are identified regardless of the approach. The reference-based approach captures an additional 15%, representing variants divergent from the HSV-1 reference, possibly due to viral passage. Reference-based approaches are significantly less labor-intensive and identify variants across the genome, whereas de novo assembly-based approaches are limited to regions where contigs have been successfully assembled. In addition, regions of poor-quality assembly can lead to false variant identification in de novo consensus sequences. For viruses with a well-assembled reference genome, a reference-based approach is recommended.
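The ~85% agreement between the two pipelines is, at its core, a set comparison over called variants. A toy sketch, with hypothetical variant tuples standing in for real calls:

```python
# Toy variant calls as (position, ref, alt) tuples from each pipeline;
# the coordinates and alleles are invented for illustration
ref_based = {(101, "A", "G"), (250, "C", "T"), (377, "G", "A"), (512, "T", "C")}
de_novo = {(101, "A", "G"), (250, "C", "T"), (377, "G", "A")}

shared = ref_based & de_novo       # variants called by both approaches
ref_only = ref_based - de_novo     # e.g. regions with no assembled contig
frac_shared = len(shared) / len(ref_based | de_novo)  # 0.75 here
```

Variants unique to one set then prompt inspection of alignment or assembly quality at those positions, as the abstract describes for poorly assembled regions.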
Choi, Su-Lim; Rha, Eugene; Lee, Sang Jun; Kim, Haseong; Kwon, Kilkoang; Jeong, Young-Su; Rhee, Young Ha; Song, Jae Jun; Kim, Hak-Sung; Lee, Seung-Goo
Large-scale screening of enzyme libraries is essential for the development of cost-effective biological processes, which will be indispensable for the production of sustainable biobased chemicals. Here, we introduce a genetic circuit termed the Genetic Enzyme Screening System that is highly useful for high-throughput enzyme screening from diverse microbial metagenomes. The circuit consists of two AND logics. The first AND logic, whose two inputs are the target enzyme and its substrate, is responsible for the accumulation of a phenol compound in the cell. Then, the phenol compound and its inducible transcription factor, whose activation turns on the expression of a reporter gene, interact in the other logic gate. We confirmed that an individual cell harboring this genetic circuit can present approximately 100-fold higher cellular fluorescence than the negative control and can be easily quantified by flow cytometry depending on the amounts of phenolic derivatives. The high sensitivity of the genetic circuit enables the rapid discovery of novel enzymes from metagenomic libraries, even for genes that show marginal activities in a host system. The crucial feature of this approach is that this single system can be used to screen a variety of enzymes that produce a phenol compound from their respective synthetic phenyl substrates, including cellulase, lipase, alkaline phosphatase, tyrosine phenol-lyase, and methyl parathion hydrolase. Consequently, the highly sensitive and quantitative nature of this genetic circuit, along with flow cytometry techniques, could provide a widely applicable toolkit for discovering and engineering novel enzymes at the single-cell level.
Joosen, Ronny V L; Kodde, Jan; Willems, Leo A J; Ligterink, Wilco; van der Plas, Linus H W; Hilhorst, Henk W M
Over the past few decades seed physiology research has contributed to many important scientific discoveries and has provided valuable tools for the production of high-quality seeds. An important instrument for this type of research is the accurate quantification of germination; however, gathering cumulative germination data is a very laborious task that is often prohibitive to the execution of large experiments. In this paper we present the germinator package: a simple, highly cost-efficient and flexible procedure for high-throughput automatic scoring and evaluation of germination that can be implemented without the use of complex robotics. The germinator package contains three modules: (i) design of the experimental setup, with various options to replicate and randomize samples; (ii) automatic scoring of germination based on the color contrast between the protruding radicle and the seed coat in a single image; and (iii) curve fitting of cumulative germination data and the extraction, recap and visualization of the various germination parameters. The curve-fitting module enables analysis of general cumulative germination data and can be used for all plant species. We show that the automatic scoring system works for Arabidopsis thaliana and Brassica spp. seeds, but it is likely to be applicable to other species as well. In this paper we show the accuracy, reproducibility and flexibility of the germinator package. We have successfully applied it to evaluate natural variation in salt tolerance in a large population of recombinant inbred lines and were able to identify several quantitative trait loci for salt tolerance. Germinator is a low-cost package that allows the monitoring of several thousands of germination tests, several times a day, by a single person.
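Curve fitting of cumulative germination data, as in module (iii), typically means fitting a sigmoid to the fraction germinated over time. A minimal sketch of the idea (not the germinator package's own implementation; the logistic form, parameter values, and grid are assumptions) using a plain grid search:

```python
import math

def sigmoid(t, gmax, t50, k):
    # Cumulative fraction germinated by time t (logistic curve):
    # gmax = final germination fraction, t50 = time to half-maximum, k = rate
    return gmax / (1 + math.exp(-k * (t - t50)))

# Hypothetical scoring times (hours) and noiseless toy observations
times = [12, 24, 36, 48, 60, 72, 84]
obs = [sigmoid(t, 0.95, 45, 0.15) for t in times]

# Coarse grid search for t50 and k by least squares (gmax fixed at 0.95)
best = min(
    ((t50, k) for t50 in range(30, 61) for k in (0.05, 0.10, 0.15, 0.20)),
    key=lambda p: sum((sigmoid(t, 0.95, p[0], p[1]) - o) ** 2
                      for t, o in zip(times, obs)),
)
```

Parameters such as t50 extracted this way are exactly the kind of germination traits the package recaps and visualizes per sample.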
Oliver N F King
Full Text Available Small-molecule modulators of epigenetic processes are currently sought as basic probes of biochemical mechanisms and as starting points for the development of therapeutic agents. N(ε)-Methylation of lysine residues on histone tails is one of a number of post-translational modifications that together enable transcriptional regulation. Histone lysine demethylases antagonize the action of histone methyltransferases in a site- and methylation-state-specific manner. N(ε)-Methyllysine demethylases that use 2-oxoglutarate as co-factor are associated with diverse human diseases, including cancer, inflammation and X-linked mental retardation; they are proposed as targets for the therapeutic modulation of transcription. There are few reports on the identification of templates that are amenable to development as potent inhibitors in vivo, and large diverse collections have yet to be exploited for the discovery of demethylase inhibitors. High-throughput screening of a ∼236,000-member collection of diverse molecules arrayed as dilution series was used to identify inhibitors of the JMJD2 (KDM4) family of 2-oxoglutarate-dependent histone demethylases. Initial screening hits were prioritized by a combination of cheminformatics, counterscreening using a coupled assay enzyme, and orthogonal confirmation of inhibition by mass spectrometric assays. Follow-up studies were carried out on one of the series identified, the 8-hydroxyquinolines, which were shown by crystallographic analyses to inhibit by binding to the active-site Fe(II) and to modulate demethylation at the H3K9 locus in a cell-based assay. These studies demonstrate that diverse compound screening can yield novel inhibitors of 2OG-dependent histone demethylases and provide starting points for the development of potent and selective agents to interrogate epigenetic regulation.
Elder, L. E.; Hull, P. M.; Hsiang, A. Y.; Kahanamoku, S.
The era of Big Data has ushered in the potential to collect population-level information in a manageable time frame. Taxon-free morphological trait analysis, referred to as ecometrics, can be used to examine and compare ecological dynamics between communities with entirely different species compositions. Until recently, population-level studies of morphology were difficult because of the time-intensive task of collecting measurements. To overcome this, we implemented advances in imaging technology and created software to automate measurements. This high-throughput set of methods collects assemblage-scale data, with methods tuned to foraminiferal samples (e.g., light objects on a dark background). Methods include serial focused dark-field microscopy and custom software (Automorph) to batch-process images, extract 2D and 3D shape parameters and frames, and implement landmark-free geometric morphometric analyses. Informatics pipelines were created to store, catalog and share images through the Yale Peabody Museum (YPM; peabody.yale.edu). We openly share software and images to enhance future data discovery. In less than a year we have generated over 25 TB of high-resolution semi-3D images for this initial study. Here, we take the first step towards developing ecometric approaches for open-ocean microfossil communities with a calibration study of community shape in recent sediments. We will present an overview of the 'shape' of modern planktonic foraminiferal communities from 25 Atlantic core-top samples (23 sites in the North and Equatorial Atlantic; 2 sites in the South Atlantic). In total, more than 100,000 microfossils and fragments were imaged from these sites' sediment cores, an unprecedented morphometric sample set. Correlates of community shape, including diversity, temperature, and latitude, will be discussed. These methods have also been applied to images of limpets and fish teeth to date, and have the potential to be used on modern taxa to extract meaningful...
Truong Daniel D
Full Text Available Abstract Background Although the c.904_906delGAG mutation in Exon 5 of TOR1A typically manifests as early-onset generalized dystonia, DYT1 dystonia is genetically and clinically heterogeneous. Recently, another Exon 5 mutation (c.863G>A) has been associated with early-onset generalized dystonia, and some ΔGAG mutation carriers present with late-onset focal dystonia. The aim of this study was to identify TOR1A Exon 5 mutations in a large cohort of subjects with mainly non-generalized primary dystonia. Methods High-resolution melting (HRM) was used to examine the entire TOR1A Exon 5 coding sequence in 1014 subjects with primary dystonia (422 spasmodic dysphonia, 285 cervical dystonia, 67 blepharospasm, 41 writer's cramp, 16 oromandibular dystonia, 38 other primary focal dystonia, 112 segmental dystonia, 16 multifocal dystonia, and 17 generalized dystonia) and 250 controls (150 neurologically normal and 100 with other movement disorders). Diagnostic sensitivity and specificity were evaluated in an additional 8 subjects with known ΔGAG DYT1 dystonia and 88 subjects with ΔGAG-negative dystonia. Results HRM of TOR1A Exon 5 showed high (100%) diagnostic sensitivity and specificity. HRM was rapid and economical. HRM reliably differentiated the TOR1A ΔGAG and c.863G>A mutations. Melting curves were normal in 250/250 controls and 1012/1014 subjects with primary dystonia. The two subjects with shifted melting curves were found to harbor the classic ΔGAG deletion: (1) a non-Jewish Caucasian female with childhood-onset multifocal dystonia and (2) an Ashkenazi Jewish female with adolescent-onset spasmodic dysphonia. Conclusion First, HRM is an inexpensive, diagnostically sensitive and specific, high-throughput method for mutation discovery. Second, Exon 5 mutations in TOR1A are rarely associated with non-generalized primary dystonia.
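The diagnostic sensitivity and specificity reported above reduce to simple ratios over the validation counts (8 known ΔGAG carriers, 88 mutation-negative subjects). A minimal sketch:

```python
def sensitivity(tp, fn):
    # True-positive rate: fraction of mutation carriers correctly flagged
    return tp / (tp + fn)

def specificity(tn, fp):
    # True-negative rate: fraction of mutation-negative subjects called normal
    return tn / (tn + fp)

# Validation counts from the cohort above: all 8 known ΔGAG carriers were
# detected and all 88 ΔGAG-negative subjects gave normal melting curves
sens = sensitivity(tp=8, fn=0)   # 1.0, i.e. 100% sensitivity
spec = specificity(tn=88, fp=0)  # 1.0, i.e. 100% specificity
```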
Zhu, Xiangcheng; Zheng, Qiang; Yang, Hu; Cai, Jin; Huang, Lei; Duan, Yanwen; Xu, Zhinan; Cen, Peilin
Inkjet dispensing technology is a promising fabrication methodology widely applied in drug discovery. Its automated, programmable characteristics and high-throughput efficiency make this approach potentially very useful for miniaturizing the design patterns of assays and drug screening. Various custom-made inkjet dispensing systems, as well as specialized bio-inks and substrates, have been developed and applied to meet the increasing demands of basic drug discovery studies. The incorporation of other modern technologies has further exploited the potential of inkjet dispensing technology in drug discovery and development. This paper reviews and discusses recent developments and practical applications of inkjet dispensing technology in several areas of drug discovery and development, including fundamental assays of cells and proteins, microarrays, biosensors, tissue engineering, and basic biological and pharmaceutical studies. Progress in a number of areas of research, including biomaterials, inkjet mechanical systems and modern analytical techniques, as well as the exploration and accumulation of profound biological knowledge, has enabled different inkjet dispensing technologies to be developed and adapted for high-throughput pattern fabrication and miniaturization. This in turn presents a great opportunity to propel inkjet dispensing technology further into drug discovery.
Kohn, Joachim; Welsh, William J.; Knight, Doyle
This paper attempts to illustrate both the need for new approaches to biomaterials discovery and the significant promise inherent in the use of combinatorial and computational design strategies. The key observation of this Leading Opinion Paper is that the biomaterials community has been slow to embrace advanced biomaterials discovery tools such as combinatorial methods, high-throughput experimentation, and computational modeling, in spite of the significant promise shown by these discovery tools in materials science, medicinal chemistry and the pharmaceutical industry. It seems that the complexity of living cells and their interactions with biomaterials has been a conceptual as well as a practical barrier to the use of advanced discovery tools in biomaterials science. However, with the continued increase in computer power, the goal of predicting the biological response of cells in contact with biomaterials surfaces is within reach. Once combinatorial synthesis, high-throughput experimentation, and computational modeling are integrated into the biomaterials discovery process, a significant acceleration is possible in the pace of development of improved medical implants, tissue regeneration scaffolds, and gene/drug delivery systems. PMID:17644176
Microarray profiling of chemical-induced effects is being increasingly used in medium and high-throughput formats. In this study, we describe computational methods to identify molecular targets from whole-genome microarray data using as an example the estrogen receptor α (ERα), ...
Aeffner, Famke; Wilson, Kristin; Bolon, Brad; Kanaly, Suzanne; Mahrt, Charles R; Rudmann, Dan; Charles, Elaine; Young, G David
Historically, pathologists perform manual evaluation of H&E- or immunohistochemically-stained slides, which can be subjective, inconsistent, and, at best, semiquantitative. As the complexity of staining and the demand for increased precision of manual evaluation grow, the pathologist's assessment will include automated analyses (i.e., "digital pathology") to increase the accuracy, efficiency, and speed of diagnosis and hypothesis testing, and will serve as an important biomedical research and diagnostic tool. This commentary introduces the many roles for pathologists in designing and conducting high-throughput digital image analysis. Pathology review is central to the entire course of a digital pathology study, including experimental design, sample quality verification, specimen annotation, analytical algorithm development, and report preparation. The pathologist performs these roles by reviewing work undertaken by technicians and scientists with training and expertise in image analysis instruments and software. These roles require regular, face-to-face interactions between team members and the lead pathologist. Traditional pathology training is suitable preparation for entry-level participation on image analysis teams. The future of pathology is very exciting, with the expanding utilization of digital image analysis set to expand pathology roles in research and drug development, with increasing and new career opportunities for pathologists. © 2016 by The Author(s).
Full Text Available We have developed a high-density microarray platform consisting of nano-biofilms of Candida albicans. A robotic microarrayer was used to print yeast cells of C. albicans encapsulated in a collagen matrix, at volumes as low as 50 nL, onto surface-modified microscope slides. Upon incubation, the cells grow into fully formed "nano-biofilms". The morphological and architectural complexity of these biofilms was evaluated by scanning electron and confocal scanning laser microscopy. The extent of biofilm formation was determined using a microarray scanner from changes in fluorescence intensities due to FUN 1 metabolic processing. This staining technique was also adapted for antifungal susceptibility testing, which demonstrated that, similar to regular biofilms, cells within the on-chip biofilms displayed elevated levels of resistance against antifungal agents (fluconazole and amphotericin B). Thus, results from structural analyses and antifungal susceptibility testing indicated that, despite miniaturization, these biofilms display the typical phenotypic properties associated with the biofilm mode of growth. In its final format, the C. albicans biofilm chip (CaBChip) is composed of 768 equivalent and spatially distinct nano-biofilms on a single slide; multiple chips can be printed and processed simultaneously. Compared to current methods for the formation of microbial biofilms, namely the 96-well microtiter plate model, this fungal biofilm chip has advantages in terms of miniaturization and automation, which combine to cut reagent use and analysis time, minimize labor-intensive steps, and dramatically reduce assay costs. Such a chip should accelerate the antifungal drug discovery process by enabling rapid, convenient and inexpensive screening of hundreds to thousands of compounds simultaneously.
Abstract Background Mining high-throughput screening (HTS) assays is key to enhancing decisions in the areas of drug repositioning and drug discovery. However, many challenges are encountered in the process of developing suitable and accurate methods for extracting useful information from these assays. Virtual screening and the wide variety of databases, methods and solutions proposed to date have not completely overcome these challenges. This study is based on a multi-label classification (MLC) technique for modeling correlations between several HTS assays, meaning that a single prediction represents a subset of assigned correlated labels instead of one label. Thus, the devised method provides an increased probability of more accurate predictions for compounds that were not tested in particular assays. Results Here we present DRABAL, a novel MLC solution that incorporates structure learning of a Bayesian network as a step to model dependency between the HTS assays. In this study, DRABAL was used to process more than 1.4 million interactions of over 400,000 compounds and analyze the existing relationships between five large HTS assays from the PubChem BioAssay Database. Compared to different MLC methods, DRABAL significantly improves the F1 score by about 22%, on average. We further illustrated the usefulness and utility of DRABAL by screening FDA-approved drugs and reporting ones that have a high probability of interacting with several targets, thus enabling drug-multi-target repositioning. Specifically, DRABAL suggests the drug thiabendazole as a common activator of the NPC1 and Rab-9A proteins, both of which are targets of assays designed to identify treatment modalities for Niemann-Pick type C disease. Conclusion We developed a novel MLC solution based on a Bayesian active learning framework to overcome the challenge of lacking fully labeled training data and to exploit actual dependencies between the HTS assays. The solution is motivated by the need to model dependencies between...
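The F1 score used above to compare MLC methods is commonly micro-averaged over all assay labels. A minimal sketch with toy label sets (the assay names and predictions are invented for illustration):

```python
def micro_f1(true_sets, pred_sets):
    """Micro-averaged F1: pool true positives, false positives and false
    negatives across all samples before computing precision and recall."""
    tp = fp = fn = 0
    for t, p in zip(true_sets, pred_sets):
        tp += len(t & p)   # labels correctly predicted
        fp += len(p - t)   # labels predicted but not true
        fn += len(t - p)   # labels true but missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy data: each set holds the HTS assays a compound is active in
true_labels = [{"A1", "A2"}, {"A3"}, {"A1"}]
pred_labels = [{"A1"}, {"A3", "A4"}, {"A1"}]
score = micro_f1(true_labels, pred_labels)  # 0.75 for this toy data
```

Micro-averaging weights every label decision equally, which suits imbalanced multi-label HTS data where some assays have far fewer actives than others.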
Albert, Océane; Reintsch, Wolfgang E.; Chan, Peter; Robaire, Bernard
STUDY QUESTION Can we make the comet assay (single-cell gel electrophoresis) for human sperm a more accurate and informative high throughput assay? SUMMARY ANSWER We developed a standardized automated high throughput comet (HT-COMET) assay for human sperm that improves its accuracy and efficiency, and could be of prognostic value to patients in the fertility clinic. WHAT IS KNOWN ALREADY The comet assay involves the collection of data on sperm DNA damage at the level of the single cell, allowing the use of samples from severe oligozoospermic patients. However, this makes comet scoring a low throughput procedure that renders large cohort analyses tedious. Furthermore, the comet assay comes with an inherent vulnerability to variability. Our objective is to develop an automated high throughput comet assay for human sperm that will increase both its accuracy and efficiency. STUDY DESIGN, SIZE, DURATION The study comprised two distinct components: a HT-COMET technical optimization section based on control versus DNAse treatment analyses (n = 3–5), and a cross-sectional study on 123 men presenting to a reproductive center with sperm concentrations categorized as severe oligozoospermia, oligozoospermia or normozoospermia. PARTICIPANTS/MATERIALS, SETTING, METHODS Sperm chromatin quality was measured using the comet assay: on classic 2-well slides for software comparison; on 96-well slides for HT-COMET optimization; after exposure to various concentrations of a damage-inducing agent, DNAse, using HT-COMET; on 123 subjects with different sperm concentrations using HT-COMET. Data from the 123 subjects were correlated to classic semen quality parameters and plotted as single-cell data in individual DNA damage profiles. MAIN RESULTS AND THE ROLE OF CHANCE We have developed a standard automated HT-COMET procedure for human sperm. It includes automated scoring of comets by a fully integrated high content screening setup that compares well with the most commonly used semi
Krska, Shane W; DiRocco, Daniel A; Dreher, Spencer D; Shevlin, Michael
The structural complexity of pharmaceuticals presents a significant challenge to modern catalysis. Many published methods that work well on simple substrates often fail when attempts are made to apply them to complex drug intermediates. The use of high-throughput experimentation (HTE) techniques offers a means to overcome this fundamental challenge by facilitating the rational exploration of large arrays of catalysts and reaction conditions in a time- and material-efficient manner. Initial forays into the use of HTE in our laboratories for solving chemistry problems centered around screening of chiral precious-metal catalysts for homogeneous asymmetric hydrogenation. The success of these early efforts in developing efficient catalytic steps for late-stage development programs motivated the desire to increase the scope of this approach to encompass other high-value catalytic chemistries. Doing so, however, required significant advances in reactor and workflow design and automation to enable the effective assembly and agitation of arrays of heterogeneous reaction mixtures and retention of volatile solvents under a wide range of temperatures. Associated innovations in high-throughput analytical chemistry techniques greatly increased the efficiency and reliability of these methods. These evolved HTE techniques have been utilized extensively to develop highly innovative catalysis solutions to the most challenging problems in large-scale pharmaceutical synthesis. Starting with Pd- and Cu-catalyzed cross-coupling chemistry, subsequent efforts expanded to other valuable modern synthetic transformations such as chiral phase-transfer catalysis, photoredox catalysis, and C-H functionalization. As our experience and confidence in HTE techniques matured, we envisioned their application beyond problems in process chemistry to address the needs of medicinal chemists. Here the problem of reaction generality is felt most acutely, and HTE approaches should prove broadly enabling
Rabin, O.; Lee, S.Y.
Small clusters of nanoparticles are ideal substrates for SERS measurements, but the SERS signal enhancement by a particular cluster is strongly dependent on its structural characteristics and the measurement conditions. Two methods for high-throughput assembly of silver nanocubes into small clusters at predetermined locations on a substrate are presented. These fabrication techniques make it possible to study both the structure and the plasmonic properties of hundreds of nanoparticle clusters. The variations in SERS enhancement factors from cluster to cluster were analyzed and correlated with cluster size and configuration, and laser frequency and polarization. Using Raman instruments with 633 nm and 785 nm lasers and linear clusters of nanocubes, an increase in the reproducibility of the enhancement and an increase in the average enhancement values were achieved by increasing the number of nanocubes in the cluster, up to 4 nanocubes per cluster. By examining the effect of cluster configuration, it is shown that linear clusters with nanocubes attached in a face-to-face configuration are not as effective SERS substrates as linear clusters in which nanocubes are attached along an edge.
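The abstract does not state the definition used for the enhancement factor; the conventional substrate-averaged definition, which cluster-to-cluster comparisons of this kind typically rely on, normalizes the SERS and reference (non-enhanced) intensities by the number of molecules probed in each measurement:

```latex
\mathrm{EF} \;=\; \frac{I_{\mathrm{SERS}} / N_{\mathrm{SERS}}}{I_{\mathrm{ref}} / N_{\mathrm{ref}}}
```

where \(I_{\mathrm{SERS}}\) and \(I_{\mathrm{ref}}\) are the measured Raman intensities on the cluster and on a reference sample, and \(N_{\mathrm{SERS}}\) and \(N_{\mathrm{ref}}\) are the corresponding numbers of molecules contributing to each signal.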
Proxy-based accelerated discovery of Fischer–Tropsch catalysts. Electronic supplementary information (ESI) available: details of synthesis, analysis and testing, validation experiments for high-throughput XRD and gas treatment, details of statistical analysis and calculations, tabulation of synthesis parameters and XRD results, alternatives to Fig. 3 highlighting different data points, FTS testing results displayed graphically. See DOI: 10.1039/c4sc02116a
Boldrin, Paul; Gallagher, James R.; Combes, Gary B.; Enache, Dan I.; James, David; Ellis, Peter R.; Kelly, Gordon; Claridge, John B.
Development of heterogeneous catalysts for complex reactions such as Fischer–Tropsch synthesis of fuels is hampered by difficult reaction conditions, slow characterisation techniques such as chemisorption and temperature-programmed reduction and the need for long term stability. High-throughput (HT) methods may help, but their use has until now focused on bespoke micro-reactors for direct measurements of activity and selectivity. These are specific to individual reactions and do not provide more fundamental information on the materials. Here we report using simpler HT characterisation techniques (XRD and TGA) along with ageing under Fischer–Tropsch reaction conditions to provide information analogous to metal surface area, degree of reduction and thousands of hours of stability testing time for hundreds of samples per month. The use of this method allowed the identification of a series of highly stable, high surface area catalysts promoted by Mg and Ru. In an advance over traditional multichannel HT reactors, the chemical and structural information we obtain on the materials allows us to identify the structural effects of the promoters and their effects on the modes of deactivation observed. PMID:29560180
Zhang, Liqiang; Su, Fengyu; Kong, Xiangxing; Lee, Fred; Day, Kevin; Gao, Weimin; Vecera, Mary E.; Sohr, Jeremy M.; Buizer, Sean; Tian, Yanqing; Meldrum, Deirdre R
Extracellular pH has a strong effect on cell metabolism and growth. Precisely detecting extracellular pH with high throughput is critical for cell metabolism research and fermentation applications. In this research, a series of ratiometric fluorescent pH-sensitive polymers is developed and the ps-pH-neutral is characterized as the best one for exclusive detection of extracellular pH. Poly(N-(2-hydroxypropyl)methacrylamide) (PHPMA) is used as the host polymer to increase the water solubility ...
Asli N Goktug
Full Text Available High-throughput RNA interference (RNAi) screening has become a widely used approach to elucidating gene functions. However, analysis and annotation of large data sets generated from these screens has been a challenge for researchers without a programming background. Over the years, numerous data analysis methods were produced for plate quality control and hit selection and implemented by a few open-access software packages. Recently, strictly standardized mean difference (SSMD) has become a widely used method for RNAi screening analysis, mainly due to its better control of false negative and false positive rates and its ability to quantify RNAi effects on a statistical basis. We have developed GUItars to enable researchers without a programming background to use SSMD as both a plate quality and a hit selection metric to analyze large data sets. The software is accompanied by an intuitive graphical user interface for an easy and rapid analysis workflow. SSMD analysis methods have been provided to the users along with the traditionally used z-score, normalized percent activity, and t-test methods for hit selection. GUItars is capable of analyzing large-scale data sets from screens with or without replicates. The software is designed to automatically generate and save numerous graphical outputs known to be among the most informative high-throughput data visualization tools, capturing plate-wise and screen-wise performances. Graphical outputs are also written in HTML format for easy access, and a comprehensive summary of screening results is written into tab-delimited output files. With GUItars, we demonstrated a robust SSMD-based analysis workflow on a 3840-gene small interfering RNA (siRNA) library and identified 200 siRNAs that increased and 150 siRNAs that decreased the assay activities with moderate to stronger effects. GUItars enables rapid analysis and illustration of data from large- or small-scale RNAi screens using SSMD and other traditional analysis
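SSMD itself is a simple statistic: the difference between the sample and control means, scaled by the square root of the sum of their variances (assuming independence). A minimal sketch, not GUItars code, with hypothetical normalized plate readouts:

```python
import statistics

def ssmd(sample, control):
    """Strictly standardized mean difference:
    (mean_s - mean_c) / sqrt(var_s + var_c), assuming the two
    groups are independent; uses sample variances."""
    ms, mc = statistics.mean(sample), statistics.mean(control)
    vs, vc = statistics.variance(sample), statistics.variance(control)
    return (ms - mc) / (vs + vc) ** 0.5

# Hypothetical normalized assay readouts for one well group vs. negative controls
sample = [0.9, 1.1, 1.0, 1.2]
control = [0.1, 0.2, 0.15, 0.05]
print(round(ssmd(sample, control), 2))  # 6.41
```

Hits are then selected by thresholding the per-well (or per-siRNA) SSMD, which is what gives the method its control over false positive and false negative rates compared with a raw fold-change cutoff.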
Sand, Salomon; Parham, Fred; Portier, Christopher J.; Tice, Raymond R.; Krewski, Daniel
Background: The National Research Council’s vision for toxicity testing in the 21st century anticipates that points of departure (PODs) for establishing human exposure guidelines in future risk assessments will increasingly be based on in vitro high-throughput screening (HTS) data. Objectives: The aim of this study was to compare different PODs for HTS data. Specifically, benchmark doses (BMDs) were compared to the signal-to-noise crossover dose (SNCD), which has been suggested as the lowest dose applicable as a POD. Methods: Hill models were fit to > 10,000 in vitro concentration–response curves, obtained for > 1,400 chemicals tested as part of the U.S. Tox21 Phase I effort. BMDs and lower confidence limits on the BMDs (BMDLs) corresponding to extra effects (i.e., changes in response relative to the maximum response) of 5%, 10%, 20%, 30%, and 40% were estimated for > 8,000 curves, along with BMDs and BMDLs corresponding to additional effects (i.e., absolute changes in response) of 5%, 10%, 15%, 20%, and 25%. The SNCD, defined as the dose where the ratio between the additional effect and the difference between the upper and lower bounds of the two-sided 90% confidence interval on absolute effect was 1, 0.67, and 0.5, respectively, was also calculated and compared with the BMDLs. Results: The BMDL40, BMDL25, and BMDL18, defined in terms of extra effect, corresponded to the SNCD1.0, SNCD0.67, and SNCD0.5, respectively, at the median. Similarly, the BMDL25, BMDL17, and BMDL13, defined in terms of additional effect, corresponded to the SNCD1.0, SNCD0.67, and SNCD0.5, respectively, at the median. Conclusions: The SNCD may serve as a reference level that guides the determination of standardized BMDs for risk assessment based on HTS concentration–response data. The SNCD may also have application as a POD for low-dose extrapolation. Citation: Sand S, Parham F, Portier CJ, Tice RR, Krewski D. 2017. Comparison of points of departure for health risk assessment based on
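For a Hill concentration-response model of the form used above, the BMD for an extra effect p (a fractional change relative to the maximal response) has a closed form, since f(d) = top * d^n / (d^n + AC50^n) = p * top can be solved for d directly. A sketch under that assumption (not the Tox21 fitting code; parameter values are invented):

```python
def hill_bmd(ac50, n, p):
    """Benchmark dose for extra effect p under the Hill model
    f(d) = top * d**n / (d**n + ac50**n), i.e. solve f(BMD) = p * top.
    Rearranging gives BMD = ac50 * (p / (1 - p)) ** (1 / n)."""
    assert 0 < p < 1, "extra effect must be a fraction of the maximum"
    return ac50 * (p / (1 - p)) ** (1 / n)

# Hypothetical curve: AC50 = 10 uM, Hill coefficient n
print(hill_bmd(ac50=10.0, n=1.0, p=0.5))           # 10.0 (BMD50 = AC50 by construction)
print(round(hill_bmd(ac50=10.0, n=2.0, p=0.1), 3))  # 3.333
```

Note the closed form applies to BMDs defined in terms of extra effect; BMDs defined by an absolute (additional) change in response, and the BMDL confidence limits, require the fitted top parameter and its uncertainty.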
Muñoz-Torres, Pau M; Rokć, Filip; Belužic, Robert; Grbeša, Ivana; Vugrek, Oliver
Mass spectrometry (MS) encompasses a group of high-throughput techniques used to increase knowledge about biomolecules. These techniques produce a large amount of data, typically presented as a list of hundreds or thousands of proteins. Filtering those data efficiently is the first step for extracting biologically relevant information. The filtering may be enriched by merging previous data with data obtained from public databases, resulting in an accurate list of proteins which meet the predetermined conditions. In this article we present msBiodat Analysis Tool, a web-based application designed to bring big data analysis to proteomics. With this tool, researchers can easily select the most relevant information from their MS experiments using an easy-to-use web interface. An interesting feature of the msBiodat Analysis Tool is the possibility of selecting proteins by their annotation in Gene Ontology using their Gene ID, Ensembl or UniProt codes. The msBiodat Analysis Tool is a web-based application that allows researchers with any level of programming experience to benefit from efficient database querying. Its versatility and user-friendly interface make it easy to perform fast and accurate data screening using complex queries. Once the analysis is finished, the result is delivered by e-mail. The msBiodat Analysis Tool is freely available at http://msbiodata.irb.hr.
Huang Jeffrey T-J
Full Text Available Abstract Background The use of selective reaction monitoring (SRM)-based LC-MS/MS analysis for the quantification of phosphorylation stoichiometry has been increasing rapidly. At the same time, the number of sites that can be monitored in a single LC-MS/MS experiment is also increasing. The manual processes associated with running these experiments have highlighted the need for computational assistance to quickly design MRM/SRM candidates. Results PChopper has been developed to predict peptides that can be produced via enzymatic protein digests; this includes single-enzyme digests and combinations of enzymes. It also allows digests to be simulated in 'batch' mode and can combine information from these simulated digests to suggest the most appropriate enzyme(s) to use. PChopper also allows users to define the characteristics of their target peptides, and can automatically identify phosphorylation sites that may be of interest. Two application end points are available for interacting with the system; the first is a web-based graphical tool, and the second is an API endpoint based on HTTP REST. Conclusions Service-oriented architecture was used to rapidly develop a system that can consume and expose several services. A graphical tool was built to provide an easy-to-follow workflow that allows scientists to quickly and easily identify the enzymes required to produce multiple peptides in parallel via enzymatic digests in a high-throughput manner.
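PChopper's own prediction rules are not given in the abstract; the core idea of in-silico digest simulation can be sketched with the classic trypsin rule (cleave C-terminal to K or R, except when the next residue is P). A minimal single-enzyme illustration, not PChopper code:

```python
def trypsin_digest(protein):
    """Return tryptic peptides for a protein sequence: cleave after
    K or R unless the following residue is P (classic trypsin rule,
    ignoring missed cleavages)."""
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        at_end = i == len(protein) - 1
        if aa in "KR" and not at_end and protein[i + 1] != "P":
            peptides.append(protein[start:i + 1])
            start = i + 1
    peptides.append(protein[start:])  # C-terminal peptide
    return peptides

# Toy sequence: note no cleavage after R when followed by P
print(trypsin_digest("AKRPGKAR"))  # ['AK', 'RPGK', 'AR']
```

A multi-enzyme "batch" simulation of the kind PChopper offers would run several such rule sets over the same sequence and score which enzyme (or combination) isolates the phosphorylation sites of interest into detectable peptides.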
Aide, Nicolas [Francois Baclesse Cancer Centre, Nuclear Medicine Department, Caen Cedex (France); Caen University, BioTICLA team, EA 4656, IFR 146, Caen (France); Visser, Eric P. [Radboud University Nijmegen Medical Center, Nuclear Medicine Department, Nijmegen (Netherlands); Lheureux, Stephanie [Caen University, BioTICLA team, EA 4656, IFR 146, Caen (France); Francois Baclesse Cancer Centre, Clinical Research Unit, Caen (France); Heutte, Natacha [Francois Baclesse Cancer Centre, Clinical Research Unit, Caen (France); Szanda, Istvan [King's College London, Division of Imaging Sciences and Biomedical Engineering, London (United Kingdom); Hicks, Rodney J. [Peter MacCallum Cancer Centre, Centre for Molecular Imaging, East Melbourne (Australia)
Over the last decade, small-animal PET imaging has become a vital platform technology in cancer research. With the development of molecularly targeted therapies and drug combinations requiring evaluation of different schedules, the number of animals to be imaged within a PET experiment has increased. This paper describes experimental design requirements to reach statistical significance, based on the expected change in tracer uptake in treated animals as compared to the control group, the number of groups that will be imaged, and the expected intra-animal variability for a given tracer. We also review how high-throughput studies can be performed in dedicated small-animal PET, high-resolution clinical PET systems and planar positron imaging systems by imaging more than one animal simultaneously. Customized beds designed to image more than one animal in large-bore small-animal PET scanners are described. Physics issues related to the presence of several rodents within the field of view (i.e. deterioration of spatial resolution and sensitivity as the radial and the axial offsets increase, respectively, as well as a larger effect of attenuation and the number of scatter events), which can be assessed by using the NEMA NU 4 image quality phantom, are detailed. (orig.)
Jiang, Rongzhong; Rong, Charles; Chu, Deryn
A 40-member array of direct methanol fuel cells (with stationary fuel and convective air supplies) was generated by electrically connecting the fuel cells in series. High-throughput analysis of these fuel cells was realized by fast screening of voltages between the two terminals of a fuel cell at constant-current discharge. A large number of voltage-current curves (200) were obtained by screening the voltages through multiple small current steps. A Gaussian distribution was used to statistically analyze the large number of experimental data. The standard deviation (σ) of the voltages of these fuel cells increased linearly with discharge current. The voltage-current curves at various fuel concentrations were simulated with an empirical equation of voltage versus current and a linear equation of σ versus current. The simulated voltage-current curves fitted the experimental data well. With increasing methanol concentration from 0.5 to 4.0 M, the Tafel slope of the voltage-current curves (at σ = 0.0) changed from 28 to 91 mV dec⁻¹, the cell resistance from 2.91 to 0.18 Ω, and the power output from 3 to 18 mW cm⁻².
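The abstract does not spell out the empirical voltage-current equation; a common form for such fits is V = E0 - b·log10(I) - R·I, with Tafel slope b and ohmic resistance R, which is linear in its parameters and so can be recovered by ordinary least squares. A sketch on synthetic, noiseless data (all parameter values hypothetical):

```python
import numpy as np

# Assumed empirical form (not stated in the abstract):
#   V = E0 - b*log10(I) - R*I
e0_true, b_true, r_true = 0.60, 0.060, 0.25   # hypothetical V, V/dec, ohm
current = np.linspace(0.01, 0.5, 50)          # discharge current, A
voltage = e0_true - b_true * np.log10(current) - r_true * current

# Design matrix columns match the model terms: [1, -log10(I), -I]
A = np.column_stack([np.ones_like(current), -np.log10(current), -current])
e0, b, r = np.linalg.lstsq(A, voltage, rcond=None)[0]
print(round(e0, 3), round(b, 3), round(r, 3))  # 0.6 0.06 0.25
```

Fitting this per cell, and then regressing the Gaussian σ of the 40 cell voltages against current, would reproduce the two-equation simulation described above.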
Reis, Monica; McDonald, David; Nicholson, Lindsay; Godthardt, Kathrin; Knobel, Sebastian; Dickinson, Anne M; Filby, Andrew; Wang, Xiao-Nong
Mesenchymal stromal cells (MSCs) are a promising cell source to develop cell therapy for many diseases. Human platelet lysate (PLT) is increasingly used as an alternative to foetal calf serum (FCS) for clinical-scale MSC production. To date, the global surface protein expression of PLT-expended MSCs (MSC-PLT) is not known. To investigate this, paired MSC-PLT and MSC-FCS were analysed in parallel using high-throughput flow cytometry for the expression of 356 cell surface proteins. MSC-PLT showed differential surface protein expression compared to their MSC-FCS counterpart. Higher percentage of positive cells was observed in MSC-PLT for 48 surface proteins, of which 13 were significantly enriched on MSC-PLT. This finding was validated using multiparameter flow cytometry and further confirmed by quantitative staining intensity analysis. The enriched surface proteins are relevant to increased proliferation and migration capacity, as well as enhanced chondrogenic and osteogenic differentiation properties. In silico network analysis revealed that these enriched surface proteins are involved in three distinct networks that are associated with inflammatory responses, carbohydrate metabolism and cellular motility. This is the first study reporting differential cell surface protein expression between MSC-PLT and MSC-FSC. Further studies are required to uncover the impact of those enriched proteins on biological functions of MSC-PLT.
Liu, Ning; Liu, Yang; Yang, YuHan; He, Lan; Ouyang, Jin
High-throughput screening (HTS) is often required in screening for enzyme inhibitor drugs. Mass spectrometry (MS) provides a powerful method for high-throughput screening of enzyme inhibitors because of its high speed, sensitivity and label-free operation. However, most MS methods need a complicated sampling interface system, and overall throughput is limited by sample loading in these cases. In this study, we develop a simple interface which couples a droplet-segmented system to a venturi easy ambient sonic-spray ionization mass spectrometer. It is fabricated by using a single capillary to act as both sampling probe and emitter, which simplifies the construction, reduces the cost and shortens the sampling time. Samples drawn in by the venturi effect are segmented into nanoliter plugs by air; the plugs can then be detected by MS directly. This system eliminates the need for the flow injection widely used in classic schemes. The new system is applied to screen angiotensin-converting enzyme inhibitors. High throughput was achieved in analyzing 96 samples at 1.6 s per sample, with plug formation at 0.5 s per sample. Carry-over between samples was less than 5%, and the peak height RSD was 2.92% (n = 15). Dose-response curves of 3 known inhibitors were also measured to validate its potential in drug discovery. The calculated IC50 values agreed well with reported values. Copyright © 2016 Elsevier B.V. All rights reserved.
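The abstract does not describe how the IC50 values were extracted from the dose-response curves; one simple, commonly used estimate is log-linear interpolation between the two concentrations bracketing 50% residual activity. A sketch under that assumption, with an entirely hypothetical inhibitor data set:

```python
import math

def ic50_from_curve(concs, activities):
    """Estimate IC50 by log-linear interpolation between the two
    concentrations bracketing 50% activity. Assumes concentrations are
    ascending and activities (%) decrease with concentration."""
    for i in range(len(concs) - 1):
        c1, a1, c2, a2 = concs[i], activities[i], concs[i + 1], activities[i + 1]
        if a1 >= 50 >= a2:
            frac = (a1 - 50) / (a1 - a2)  # fractional distance to 50% activity
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% activity not bracketed by the data")

# Hypothetical dose-response: concentration (uM) vs. % residual enzyme activity
concs = [0.01, 0.1, 1.0, 10.0, 100.0]
acts = [98, 85, 60, 30, 8]
print(round(ic50_from_curve(concs, acts), 2))  # 2.15
```

A full four-parameter logistic fit would use all points rather than just the bracketing pair, but the interpolation above is often adequate for rank-ordering inhibitors in a screen.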
the link between high throughput metabolomics data generated on different analytical platforms, discover important metabolites deriving from the digestion processes in the gut, and automate metabolic pathway discovery from mass spectrometry. PLS (partial least squares) based chemometric methods were...
Cabrera-Bosquet, Llorenç; Crossa, José; von Zitzewitz, Jarislav; Serret, María Dolors; Araus, José Luis
Genomic selection (GS) and high-throughput phenotyping have recently been captivating the interest of the crop breeding community from both the public and private sectors world-wide. Both approaches promise to revolutionize the prediction of complex traits, including growth, yield and adaptation to stress. Whereas high-throughput phenotyping may help to improve understanding of crop physiology, most powerful techniques for high-throughput field phenotyping are empirical rather than analytical and comparable to genomic selection. Despite the fact that the two methodological approaches represent the extremes of what is understood as the breeding process (phenotype versus genome), they both consider the targeted traits (e.g. grain yield, growth, phenology, plant adaptation to stress) as a black box instead of dissecting them as a set of secondary traits (i.e. physiological) putatively related to the target trait. Both GS and high-throughput phenotyping have in common their empirical approach enabling breeders to use genome profile or phenotype without understanding the underlying biology. This short review discusses the main aspects of both approaches and focuses on the case of genomic selection of maize flowering traits and near-infrared spectroscopy (NIRS) and plant spectral reflectance as high-throughput field phenotyping methods for complex traits such as crop growth and yield. © 2012 Institute of Botany, Chinese Academy of Sciences.
Cuperlovic-Culf, M; Culf, A S
The metabolic profile is a direct signature of phenotype and biochemical activity following any perturbation. Metabolites are small molecules present in a biological system, including natural products as well as drugs and their metabolism by-products, depending on the biological system studied. Metabolomics can provide activity information about possible novel drugs and drug scaffolds, indicate interesting targets for drug development and suggest binding partners of compounds. Furthermore, metabolomics can be used for the discovery of novel natural products and in drug development. Metabolomics can enhance the discovery and testing of new drugs and provide insight into the on- and off-target effects of drugs. This review focuses primarily on the application of metabolomics in the discovery of active drugs from natural products, the analysis of chemical libraries, and the computational analysis of metabolic networks. Metabolomics methodology, both experimental and analytical, is developing fast. At the same time, databases of compounds are ever growing with the inclusion of more molecular and spectral information. An increasing number of systems are being represented by very detailed metabolic network models. Combining these experimental and computational tools with high-throughput drug testing and drug discovery techniques can provide new promising compounds and leads.
Zimmermann, Boris; Bağcıoğlu, Murat; Tafinstseva, Valeria; Kohler, Achim; Ohlson, Mikael; Fjellheim, Siri
The two factors defining male reproductive success in plants are pollen quantity and quality, but our knowledge about the importance of pollen quality is limited due to methodological constraints. Pollen quality in terms of chemical composition may be either genetically fixed for high performance independent of environmental conditions, or it may be plastic to maximize reproductive output under different environmental conditions. In this study, we validated a new approach for studying the role of the chemical composition of pollen in adaptation to local climate. The approach is based on high-throughput Fourier transform infrared (FTIR) characterization and biochemical interpretation of pollen chemical composition in response to environmental conditions. The study covered three grass species, Poa alpina, Anthoxanthum odoratum, and Festuca ovina. For each species, plants were grown from seeds of three populations with wide geographic and climate variation. Each individual plant was divided into four genetically identical clones which were grown in different controlled environments (high and low levels of temperature and nutrients). In total, 389 samples were measured using a high-throughput FTIR spectrometer. The biochemical fingerprints of pollen were species and population specific, and plastic in response to different environmental conditions. The response was most pronounced for temperature, influencing the levels of proteins, lipids, and carbohydrates in pollen of all species. Furthermore, there is considerable variation in the plasticity of the chemical composition of pollen among species and populations. The use of high-throughput FTIR spectroscopy provides fast, cheap, and simple assessment of the chemical composition of pollen. In combination with controlled-condition growth experiments and multivariate analyses, FTIR spectroscopy opens up for studies of the adaptive role of pollen that until now have been difficult with available methodology. The approach can easily be
Rohde, Christopher B.; Zeng, Fei; Gilleland, Cody; Samara, Chrysanthi; Yanik, Mehmet F.
In recent years, the advantages of using small invertebrate animals as model systems for human disease have become increasingly apparent and have resulted in three Nobel Prizes in medicine or chemistry during the last six years for studies conducted on the nematode Caenorhabditis elegans (C. elegans). The availability of a wide array of species-specific genetic techniques, along with the transparency of the worm and its ability to grow in minute volumes, makes C. elegans an extremely powerful model organism. We present a suite of technologies for complex high-throughput whole-animal genetic and drug screens. We demonstrate a high-speed microfluidic sorter that can isolate and immobilize C. elegans in a well-defined geometry, an integrated chip containing individually addressable screening chambers for incubation and exposure of individual animals to biochemical compounds, and a device for delivery of compound libraries in standard multiwell plates to microfluidic devices. The immobilization stability obtained by these devices is comparable to that of chemical anesthesia, and the immobilization process does not affect lifespan, progeny production, or other aspects of animal health. This high stability enables the use of a variety of key optical techniques, which we demonstrate with femtosecond-laser nanosurgery and three-dimensional multiphoton microscopy. Used alone or in various combinations, these devices facilitate a variety of high-throughput assays using whole animals, including mutagenesis, RNAi and drug screens at subcellular resolution, as well as high-throughput high-precision manipulations such as femtosecond-laser nanosurgery for large-scale in vivo neural degeneration and regeneration studies.
He, Ji; Dai, Xinbin; Zhao, Xuechun
BLAST searches are widely used for sequence alignment. The search results are commonly adopted for various functional and comparative genomics tasks such as annotating unknown sequences, investigating gene models and comparing two sequence sets. Advances in sequencing technologies pose challenges for high-throughput analysis of large-scale sequence data. A number of programs and hardware solutions exist for efficient BLAST searching, but there is a lack of generic software solutions for mining and personalized management of the results. Systematically reviewing the results and identifying information of interest remains tedious and time-consuming. Personal BLAST Navigator (PLAN) is a versatile web platform that helps users to carry out various personalized pre- and post-BLAST tasks, including: (1) query and target sequence database management, (2) automated high-throughput BLAST searching, (3) indexing and searching of results, (4) filtering results online, (5) managing results of personal interest in favorite categories, (6) automated sequence annotation (such as NCBI NR and ontology-based annotation). PLAN integrates, by default, the Decypher hardware-based BLAST solution provided by Active Motif Inc. with a greatly improved efficiency over conventional BLAST software. BLAST results are visualized by spreadsheets and graphs and are full-text searchable. BLAST results and sequence annotations can be exported, in part or in full, in various formats including Microsoft Excel and FASTA. Sequences and BLAST results are organized in projects, the data publication levels of which are controlled by the registered project owners. In addition, all analytical functions are provided to public users without registration. PLAN has proved a valuable addition to the community for automated high-throughput BLAST searches, and, more importantly, for knowledge discovery, management and sharing based on sequence alignment results. The PLAN web interface is platform
Fang, Xin; Wang, Xin; Yang, Shaoguo; Meng, Fanjing; Wang, Xiaolei; Wei, Hua; Chen, Tingtao
Growing evidence indicates that diseases of the central nervous system are strongly affected by fecal microbes. However, little work has been done to explore the interaction between amyotrophic lateral sclerosis (ALS) and fecal microbes. In the present study, a high-throughput sequencing method was used to compare the intestinal microbial diversity of healthy people and ALS patients. Principal coordinate analysis, Venn diagrams and the unweighted pair-group method using arithmetic averages (UPGMA) showed clear microbial differences between healthy people (group H) and ALS patients (group A), and the average ratios of Bacteroides, Faecalibacterium, Anaerostipes, Prevotella, Escherichia, and Lachnospira at the genus level between ALS patients and healthy people were 0.78, 2.18, 3.41, 0.35, 0.79, and 13.07, respectively. Furthermore, the decreased Firmicutes/Bacteroidetes ratio at the phylum level using LEfSe (LDA > 4.0), together with the significantly increased genus Dorea (harmful microorganisms) and significantly reduced genera Oscillibacter, Anaerostipes, and Lachnospiraceae (beneficial microorganisms) in ALS patients, indicated that an imbalance in intestinal microflora constitution is strongly associated with the pathogenesis of ALS.
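The genus-level ratios reported above are simple quotients of relative abundances between the two groups. In this sketch the read counts are invented (chosen so that three of the ratios happen to match the reported values of 0.78, 2.18 and 0.35); it is an illustration of the calculation, not the study's data.

```python
# Relative abundance of each genus within a group's total read count
def relative_abundance(counts):
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

# Invented genus-level read counts for ALS patients (A) and healthy people (H)
group_A = {"Bacteroides": 390, "Faecalibacterium": 218, "Prevotella": 35, "other": 357}
group_H = {"Bacteroides": 500, "Faecalibacterium": 100, "Prevotella": 100, "other": 300}

ra_A, ra_H = relative_abundance(group_A), relative_abundance(group_H)
# Ratio A/H per genus, the quantity reported in the abstract
ratios = {g: ra_A[g] / ra_H[g] for g in ("Bacteroides", "Faecalibacterium", "Prevotella")}
print({g: round(r, 2) for g, r in ratios.items()})
```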
Dellamorte, Joseph C.; Vijay, Rohit; Snively, Christopher M.; Barteau, Mark A.; Lauterbach, Jochen
A high-throughput parallel reactor system has been designed and constructed to improve the reliability of results from large-diameter catalysts such as monoliths. The system, which is expandable, consists of eight quartz reactors, 23.5 mm in diameter. The eight reactors were designed with separate K-type thermocouples and radiant heaters, allowing independent measurement and control of each reactor temperature. This design gives steady-state temperature distributions over the eight reactors within 0.5 °C of a common setpoint from 50 to 700 °C. Analysis of the effluent from these reactors is performed using rapid-scan Fourier transform infrared (FTIR) spectroscopic imaging. The integration of this technique into the reactor system allows a chemically specific, truly parallel analysis of the reactor effluents with a time resolution of approximately 8 s. The capabilities of this system were demonstrated by investigating the effect of catalyst preparation conditions on the direct epoxidation of ethylene, i.e., on the ethylene conversion and the ethylene oxide selectivity. The ethylene, ethylene oxide, and carbon dioxide concentrations were calibrated based on spectra from FTIR imaging using univariate and multivariate chemometric techniques. The results from this analysis showed that the calcination conditions significantly affect the ethylene conversion, with a threefold increase in conversion when the catalyst was calcined for 3 h versus 12 h at 400 °C.
Ma, Biao; Zou, Yilin; Xie, Xuan; Zhao, Jinhua; Piao, Xiangfan; Piao, Jingyi; Yao, Zhongping; Quinto, Maurizio; Wang, Gang; Li, Donghao
A novel high-throughput, solvent-saving and versatile integrated two-dimensional microscale carbon fiber/active carbon fiber system (2DμCFs), which allows a simple and rapid separation of compounds into low-polarity, medium-polarity and high-polarity fractions, has been coupled with ambient ionization mass spectrometry (ESI-Q-TOF-MS and ESI-QqQ-MS) for screening and quantitative analyses of real samples. 2DμCFs led to a substantial reduction of interference and minimization of ionization suppression effects, thus increasing the sensitivity and the screening capabilities of the subsequent MS analysis. The method has been applied to the analysis of Schisandra chinensis extracts, obtaining with a single injection a simultaneous determination of 33 compounds of different polarities, such as organic acids, lignans, and flavonoids, in less than 7 min, at low pressures and using small solvent amounts. The method was also validated using 10 model compounds, giving limits of detection (LODs) ranging from 0.3 to 30 ng mL⁻¹, satisfactory recoveries (from 75.8 to 93.2%) and reproducibilities (relative standard deviations, RSDs, from 1.40 to 8.06%). Copyright © 2017 Elsevier B.V. All rights reserved.
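The recovery and RSD figures used to validate the method are simple summary statistics. A minimal sketch, with hypothetical replicate measurements for a hypothetical 100 ng/mL spike (neither the values nor the spike level come from the study):

```python
import statistics

def recovery_percent(measured_mean, spiked_amount):
    """Spike recovery: measured mean as a percentage of the known spiked amount."""
    return 100.0 * measured_mean / spiked_amount

def rsd_percent(replicates):
    """Relative standard deviation: sample SD over mean, as a percentage."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical replicate measurements (ng/mL) of a 100 ng/mL spike
reps = [92.1, 90.4, 94.0, 91.5]
print(round(recovery_percent(statistics.mean(reps), 100.0), 1))
print(round(rsd_percent(reps), 2))
```

Note that `statistics.stdev` is the sample (n-1) standard deviation, which is the usual choice for a small number of replicates.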
Switzar, Linda; van Angeren, Jordy; Pinkse, Martijn; Kool, Jeroen; Niessen, Wilfried M A
A high-throughput sample preparation protocol based on the use of 96-well molecular weight cutoff (MWCO) filter plates was developed for shotgun proteomics of cell lysates. All sample preparation steps, including cell lysis, buffer exchange, protein denaturation, reduction, alkylation and proteolytic digestion, are performed in a 96-well plate format, making the platform extremely well suited for processing large numbers of samples and directly compatible with functional assays for cellular proteomics. In addition, the use of a single plate for all sample preparation steps following cell lysis reduces potential sample losses and allows for automation. The MWCO filter also enables sample concentration, thereby increasing overall sensitivity, and the implementation of washing steps involving organic solvents, for example, to remove cell membrane constituents. The optimized protocol allowed for higher throughput with improved sensitivity in terms of the number of identified cellular proteins when compared to an established protocol employing gel-filtration columns. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Drinking water safety is increasingly perceived as one of the top global environmental issues. Plankton is commonly used as a bioindicator of water quality in lakes and reservoirs. Recently, DNA sequencing technology has been applied to bioassessment. In this study, we compared the effectiveness of a 16S and 18S rRNA high-throughput sequencing method (HTS) and the traditional optical microscopy method (TOM) in the bioassessment of drinking water quality. Five stations reflecting different habitats and hydrological conditions in Danjiangkou Reservoir, one of the largest drinking water reservoirs in Asia, were sampled in May 2016. Non-metric multi-dimensional scaling (NMDS) analysis showed that plankton assemblages varied among the stations and that the spatial patterns revealed by the two methods were consistent. The correlation between TOM and HTS in a symmetric Procrustes analysis was 0.61, revealing overall good concordance between the two methods. Procrustes analysis also showed that site-specific differences between the two methods varied among the stations. Station Heijizui (H), a site heavily influenced by two tributaries, had the largest difference, while station Qushou (Q), a confluence site close to the outlet dam, had the smallest difference between the two methods. Our results show that DNA sequencing has the potential to provide consistent identification of taxa and reliable bioassessment in a long-term biomonitoring and assessment program for drinking water reservoirs.
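The Procrustes comparison of two ordinations can be sketched with SciPy. The station coordinates below are invented, and converting the Procrustes disparity to a correlation-like concordance via a square root follows the convention of vegan's protest(); both are assumptions for illustration, not this study's code.

```python
import numpy as np
from scipy.spatial import procrustes

# Toy 2-D ordinations of five stations (e.g., NMDS scores from microscopy
# vs. sequencing); coordinates are invented for illustration.
tom = np.array([[0.0, 0.0], [1.0, 0.2], [2.1, 0.1], [3.0, 1.0], [4.0, 1.1]])
hts = np.array([[0.1, 0.0], [1.1, 0.3], [2.0, 0.2], [2.9, 1.1], [4.1, 1.0]])

# procrustes() standardizes both configurations, then rotates/scales the
# second onto the first; disparity is the remaining sum of squared differences.
_, _, disparity = procrustes(tom, hts)
concordance = np.sqrt(1.0 - disparity)  # Procrustes correlation, protest()-style
print(round(disparity, 4), round(concordance, 4))
```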
Taubert, Martin; Grob, Carolina; Howat, Alexandra M; Burns, Oliver J; Chen, Yin; Neufeld, Josh D; Murrell, J Colin
Methylotrophs are microorganisms ubiquitous in the environment that can metabolize one-carbon (C1) compounds as carbon and/or energy sources. The activity of these prokaryotes impacts biogeochemical cycles within their respective habitats and can determine whether these habitats act as sources or sinks of C1 compounds. Due to the high importance of C1 compounds, not only in biogeochemical cycles, but also for climatic processes, it is vital to understand the contributions of these microorganisms to carbon cycling in different environments. One of the most challenging questions when investigating methylotrophs, but also in environmental microbiology in general, is which species contribute to the environmental processes of interest, or "who does what, where and when?" Metabolic labeling with C1 compounds substituted with (13)C, a technique called stable isotope probing, is a key method to trace carbon fluxes within methylotrophic communities. The incorporation of (13)C into the biomass of active methylotrophs leads to an increase in the molecular mass of their biomolecules. For DNA-based stable isotope probing (DNA-SIP), labeled and unlabeled DNA is separated by isopycnic ultracentrifugation. The ability to specifically analyze DNA of active methylotrophs from a complex background community by high-throughput sequencing techniques, i.e. targeted metagenomics, is the hallmark strength of DNA-SIP for elucidating ecosystem functioning, and a protocol is detailed in this chapter.
Shiga toxins 1 and 2 (Stx1 and Stx2) from Shiga toxin-producing E. coli (STEC) bacteria were simultaneously detected with a newly developed, high-throughput antibody microarray platform. The proteinaceous toxins were immobilized and sandwiched between biorecognition elements (monoclonal antibodies) and pooled horseradish peroxidase (HRP)-conjugated monoclonal antibodies. Following the reaction of HRP with the precipitating chromogenic substrate (metal-enhanced 3,3′-diaminobenzidine tetrahydrochloride, or DAB), the formation of a colored product was quantitatively measured with an inexpensive flatbed page scanner. The colorimetric ELISA microarray was demonstrated to detect Stx1 and Stx2 at levels as low as ~4.5 ng/mL within ~2 h of total assay time, with a narrow linear dynamic range of ~1–2 orders of magnitude and saturation levels well above background. Stx1 and/or Stx2 produced by various strains of STEC were also detected following the treatment of cultured cells with mitomycin C (a toxin-inducing antibiotic) and/or B-PER (a cell-disrupting, protein extraction reagent). Semi-quantitative detection of Shiga toxins was sporadic among various STEC strains following incubation with mitomycin C; however, further treatment with B-PER generally resulted in detection of, or increased detection of, Stx1 relative to Stx2 produced by STECs inoculated into either axenic broth culture or culture broth containing ground beef.
Pronk, Sander [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Pall, Szilard [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Schulz, Roland [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Larsson, Per [Univ. of Virginia, Charlottesville, VA (United States); Bjelkmar, Par [Science for Life Lab., Stockholm (Sweden); Stockholm Univ., Stockholm (Sweden); Apostolov, Rossen [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Shirts, Michael R. [Univ. of Virginia, Charlottesville, VA (United States); Smith, Jeremy C. [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kasson, Peter M. [Univ. of Virginia, Charlottesville, VA (United States); van der Spoel, David [Science for Life Lab., Stockholm (Sweden); Uppsala Univ., Uppsala (Sweden); Hess, Berk [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Lindahl, Erik [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Stockholm Univ., Stockholm (Sweden)
Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on a massive scale in clusters, web servers, distributed computing or cloud resources. Here we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations.
Terpene synthases catalyze the formation of a variety of terpene chemical structures. Systematic mutagenesis studies have been effective in providing insights into the characteristic and complex mechanisms of C–C bond formation and in exploring the enzymatic potential for inventing new chemical structures. In addition, there is growing demand to increase terpene synthase activity in heterologous hosts, given the maturation of metabolic engineering and host breeding for terpenoid synthesis. We have developed a simple screening method for the cellular activities of terpene synthases that scores their substrate consumption based on the color loss of cells harboring carotenoid pathways. We demonstrate that this method can detect the activities of various terpene synthase or prenyltransferase genes in a high-throughput manner, irrespective of the product type, enabling mutation analysis and directed evolution of terpene synthases. We also report the possibility of a substrate-specific screening system for terpene synthases that takes advantage of the substrate-size specificity of C30 and C40 carotenoid pathways.
Zhang, Xuehai; Huang, Chenglong; Wu, Di; Qiao, Feng; Li, Wenqiang; Duan, Lingfeng; Wang, Ke; Xiao, Yingjie; Chen, Guoxing; Liu, Qian; Xiong, Lizhong; Yang, Wanneng; Yan, Jianbing
With increasing demand for novel traits in crop breeding, the plant research community faces the challenge of quantitatively analyzing the structure and function of large numbers of plants. A clear goal of high-throughput phenotyping is to bridge the gap between genomics and phenomics. In this study, we quantified 106 traits from a maize (Zea mays) recombinant inbred line population (n = 167) across 16 developmental stages using an automatic phenotyping platform. Quantitative trait locus (QTL) mapping with a high-density genetic linkage map, including 2,496 recombinant bins, was used to uncover the genetic basis of these complex agronomic traits, and 988 QTLs were identified for all investigated traits, including three QTL hotspots. Biomass accumulation and final yield were predicted using a combination of dissected traits in the early growth stage. These results reveal the dynamic genetic architecture of maize plant growth and will enhance ideotype-based maize breeding and prediction. © 2017 American Society of Plant Biologists. All Rights Reserved.
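A single-marker association scan of the kind underlying QTL mapping can be sketched with simulated data. Everything here is an illustrative assumption rather than the study's method: genotypes are random, the trait is constructed to depend on marker 2, and markers are scored by squared correlation with the trait.

```python
import numpy as np

rng = np.random.default_rng(0)
n_lines, n_markers = 167, 5  # population size echoes the abstract; marker count is toy

# Simulated genotype codes (0/1/2) and a trait driven by marker 2 plus noise
genotypes = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)
trait = 2.0 * genotypes[:, 2] + rng.normal(0.0, 1.0, n_lines)

# Score each marker by r^2 between its genotype codes and the trait
r2 = np.array([np.corrcoef(genotypes[:, m], trait)[0, 1] ** 2
               for m in range(n_markers)])
best = int(np.argmax(r2))
print(best, r2.round(3))
```

Real QTL mapping on a binned linkage map adds interval testing and genome-wide significance thresholds, but the per-marker association score is the same basic quantity.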
Amit U Sinha
Increasing use of high-throughput genomic-scale assays requires effective visualization and analysis techniques to facilitate data interpretation. Moreover, existing tools often require programming skills, which discourages bench scientists from examining their own data. We have created iCanPlot, a compelling platform for visual data exploration based on the latest technologies. Using the recently adopted HTML5 Canvas element, we have developed a highly interactive tool to visualize tabular data and identify interesting patterns in an intuitive fashion without the need for any specialized computing skills. A module for gene-set overlap analysis has been implemented on the Google App Engine platform: when the user selects a region of interest in the plot, the genes in the region are analyzed on the fly. The visualization and analysis are amalgamated for a seamless experience. Further, users can easily upload their data for analysis, which also makes it simple to share the analysis with collaborators. We illustrate the power of iCanPlot by showing an example of how it can be used to interpret histone modifications in the context of gene expression.
Xu Feng; Wu Jinhui; Wang Shuqi; Gurkan, Umut Atakan; Demirci, Utkan; Durmus, Naside Gozde
Screening for effective therapeutic agents from millions of drug candidates is costly, time consuming, and often faces concerns due to the extensive use of animals. To improve cost effectiveness, and to minimize animal testing in pharmaceutical research, in vitro monolayer cell microarrays with multiwell plate assays have been developed. Integration of cell microarrays with microfluidic systems has facilitated automated and controlled component loading, significantly reducing the consumption of the candidate compounds and the target cells. Even though these methods significantly increased the throughput compared to conventional in vitro testing systems and in vivo animal models, the cost associated with these platforms remains prohibitively high. In addition, there is a need for three-dimensional (3D) cell-based drug-screening models that can mimic the in vivo microenvironment and the functionality of the native tissues. Here, we present the state-of-the-art microengineering approaches that can be used to develop 3D cell-based drug-screening assays. We highlight the 3D in vitro cell culture systems with live cell-based arrays, microfluidic cell culture systems, and their application to high-throughput drug screening. We conclude that among the emerging microengineering approaches, bioprinting holds great potential to provide repeatable 3D cell-based constructs with high temporal, spatial control and versatility.
Murawski, Robert W.; Tchorowski, Nicole; Golden, Bert
As the data rate requirements for space communications increase, significant stress is placed not only on the wireless satellite communication links, but also on the ground networks that forward data from end users to remote ground stations. These wide area network (WAN) connections add delay and jitter to the end-to-end satellite communication link, effects which can have significant impacts on the wireless communication link. It is imperative that any ground communication protocol can react to these effects so that the ground network does not become a bottleneck in the communication path to the satellite. In this paper, we present our SCENIC Emulation Lab testbed, which was developed to test the CCSDS SLE protocol implementations proposed for use on future NASA communication networks. Our results show that in the presence of realistic levels of network delay, high-throughput SLE communication links can experience significant data rate throttling. Based on our observations, we present some insight into why this data throttling happens, and trace the probable issue back to non-optimal blocking communication which is supported by the CCSDS SLE API recommended practices. These issues were reported to the SLE implementation developers, who, based on our reports, developed a new release of SLE which we show fixes the SLE blocking issue and greatly improves the protocol throughput. In this paper, we also discuss future developments for our end-to-end emulation lab and how these improvements can be used to develop and test future space communication technologies.
Althammer, Sonja; González-Vallinas, Juan; Ballaré, Cecilia; Beato, Miguel; Eyras, Eduardo
High-throughput sequencing (HTS) has revolutionized gene regulation studies and is now fundamental for the detection of protein-DNA and protein-RNA binding, as well as for measuring RNA expression. With the increasing variety and sequencing depth of HTS datasets, the need for more flexible and memory-efficient tools to analyse them is growing. We describe Pyicos, a powerful toolkit for the analysis of mapped reads from diverse HTS experiments: ChIP-Seq, with either punctuated or broad signals, CLIP-Seq and RNA-Seq. We prove the effectiveness of Pyicos in selecting significant signals and show that its accuracy is comparable and sometimes superior to that of methods specifically designed for each particular type of experiment. Pyicos facilitates the analysis of a variety of HTS data types through its flexibility and memory efficiency, providing a useful framework for data integration into models of regulatory genomics. Open-source software, with tutorials and protocol files, is available at http://regulatorygenomics.upf.edu/pyicos or as a Galaxy server at http://regulatorygenomics.upf.edu/galaxy. Contact: email@example.com. Supplementary data are available at Bioinformatics online.
Chiu, Han-Chen; Hannemann, Holger; Heesom, Kate J.; Matthews, David A.; Davidson, Andrew D.
Disease caused by dengue virus is a global health concern with up to 390 million individuals infected annually worldwide. There are no vaccines or antiviral compounds available to either prevent or treat dengue disease which may be fatal. To increase our understanding of the interaction of dengue virus with the host cell, we analyzed changes in the proteome of human A549 cells in response to dengue virus type 2 infection using stable isotope labelling in cell culture (SILAC) in combination with high-throughput mass spectrometry (MS). Mock and infected A549 cells were fractionated into nuclear and cytoplasmic extracts before analysis to identify proteins that redistribute between cellular compartments during infection and reduce the complexity of the analysis. We identified and quantified 3098 and 2115 proteins in the cytoplasmic and nuclear fractions respectively. Proteins that showed a significant alteration in amount during infection were examined using gene enrichment, pathway and network analysis tools. The analyses revealed that dengue virus infection modulated the amounts of proteins involved in the interferon and unfolded protein responses, lipid metabolism and the cell cycle. The SILAC-MS results were validated for a select number of proteins over a time course of infection by Western blotting and immunofluorescence microscopy. Our study demonstrates for the first time the power of SILAC-MS for identifying and quantifying novel changes in cellular protein amounts in response to dengue virus infection. PMID:24671231
Luo, Wei; Shabbir, Faizan; Gong, Chao; Gulec, Cagatay; Pigeon, Jeremy; Shaw, Jessica; Greenbaum, Alon; Tochitsky, Sergei; Joshi, Chandrashekhar [Electrical Engineering Department, University of California, Los Angeles, California 90095 (United States); Ozcan, Aydogan, E-mail: firstname.lastname@example.org [Electrical Engineering Department, University of California, Los Angeles, California 90095 (United States); Bioengineering Department, University of California, Los Angeles, California 90095 (United States); California NanoSystems Institute (CNSI), University of California, Los Angeles, California 90095 (United States)
We demonstrate a high-throughput charged particle analysis platform, which is based on lensfree on-chip microscopy for rapid ion track analysis using allyl diglycol carbonate, i.e., CR-39 plastic polymer as the sensing medium. By adopting a wide-area opto-electronic image sensor together with a source-shifting based pixel super-resolution technique, a large CR-39 sample volume (i.e., 4 cm × 4 cm × 0.1 cm) can be imaged in less than 1 min using a compact lensfree on-chip microscope, which detects partially coherent in-line holograms of the ion tracks recorded within the CR-39 detector. After the image capture, using highly parallelized reconstruction and ion track analysis algorithms running on graphics processing units, we reconstruct and analyze the entire volume of a CR-39 detector within ∼1.5 min. This significant reduction in the entire imaging and ion track analysis time not only increases our throughput but also allows us to perform time-resolved analysis of the etching process to monitor and optimize the growth of ion tracks during etching. This computational lensfree imaging platform can provide a much higher throughput and more cost-effective alternative to traditional lens-based scanning optical microscopes for ion track analysis using CR-39 and other passive high energy particle detectors.
Baker, Ryan; Logan, Savannah; Dudley, Christopher; Parthasarathy, Raghuveer
The zebrafish is a model organism with a variety of useful properties; it is small and optically transparent, it reproduces quickly, it is a vertebrate, and there are a large variety of transgenic animals available. Because of these properties, the zebrafish is well suited to study using a variety of optical technologies including light sheet fluorescence microscopy (LSFM), which provides high-resolution three-dimensional imaging over large fields of view. Research progress, however, is often not limited by optical techniques but instead by the number of samples one can examine over the course of an experiment, which in the case of light sheet imaging has so far been severely limited. Here we present an integrated fluidic circuit and microscope which provides rapid, automated imaging of zebrafish using several imaging modes, including LSFM, Hyperspectral Imaging, and Differential Interference Contrast Microscopy. Using this system, we show that we can increase our imaging throughput by a factor of 10 compared to previous techniques. We also show preliminary results visualizing zebrafish immune response, which is sensitive to gut microbiota composition, and which shows a strong variability between individuals that highlights the utility of high throughput imaging. National Science Foundation, Award No. DBI-1427957.
Hiraki, Masahiko; Yamada, Yusuke; Chavas, Leonard M. G.; Wakatsuki, Soichi; Matsugaki, Naohiro
A special liquid-nitrogen Dewar with double the capacity for the sample-exchange robot has been created at AR-NE3A at the Photon Factory, allowing continuous fully automated data collection. In this work, this new system is described and the stability of its calibration is discussed. Photon Factory Automated Mounting system (PAM) protein crystal exchange systems are available at the following Photon Factory macromolecular beamlines: BL-1A, BL-5A, BL-17A, AR-NW12A and AR-NE3A. The beamline AR-NE3A has been constructed for high-throughput macromolecular crystallography and is dedicated to structure-based drug design. The PAM liquid-nitrogen Dewar can store a maximum of three SSRL cassettes, so users have to interrupt their experiments and replace the cassettes when using four or more of them during their beam time; investigation showed that four or more cassettes were indeed used at AR-NE3A alone. For continuous automated data collection, the size of the liquid-nitrogen Dewar for the AR-NE3A PAM was therefore increased, doubling its capacity. To check the calibration with the new Dewar and cassette stand, calibration experiments were performed repeatedly. Compared with the current system, the parameters of the new system are shown to be stable.
Ibrahim B. Salisu
As genetically modified crops gain attention globally, their approval and commercialization require accurate and reliable diagnostic methods for transgenic content. These diagnostic techniques fall into two major groups: identification of (1) transgenic DNA and (2) transgenic proteins in GMOs and their products. Conventional methods such as PCR (polymerase chain reaction) and enzyme-linked immunosorbent assay (ELISA) have routinely been employed for DNA- and protein-based quantification, respectively. Although these techniques (PCR and ELISA) are convenient and productive, there is a need for more advanced technologies that allow high-throughput detection and quantification of GM events, as increasingly complex GMOs are produced day by day. Recent approaches such as microarray, capillary gel electrophoresis, digital PCR and next-generation sequencing are therefore promising owing to their accuracy and precise detection of transgenic content. The present article is a brief comparative study of all such detection techniques on the basis of their advent, feasibility, accuracy, and cost effectiveness. However, detection of specific events, contamination by different events, and determination of fusion as well as stacked-gene proteins remain critical issues for these emerging technologies to address in the future.
Rodland, Karin D.; Adkins, Joshua N.; Ansong, Charles; Chowdhury, Saiful M.; Manes, Nathan P.; Shi, Liang; Yoon, Hyunjin; Smith, Richard D.; Heffron, Fred
New improvements to mass spectrometry include increased sensitivity, better analysis of the collected data and, most important from the standpoint of this review, much higher throughput, allowing analysis of many samples in a single day. This short review describes how host-pathogen interactions can be dissected by mass spectrometry using Salmonella as a model system. The approach allowed direct identification of the majority of annotated Salmonella proteins, how expression changed under various in vitro growth conditions, and how this relates to virulence and expression within host cells. One of the most significant findings is that a very high percentage of all annotated genes (>20%) are regulated post-transcriptionally. In addition, new and unexpected interactions have been identified for several Salmonella virulence regulators that involve protein-protein interactions, suggesting additional functions of the regulators in coordinating virulence expression. Overall, high-throughput mass spectrometry provides a new view of pathogen-host interactions, emphasizing the protein products and defining how protein interactions determine the outcome of infection.
Gooding, Edward; Deutsch, Erik R.; Huehnerhoff, Joseph; Hajian, Arsen R.
Raman spectral imaging is increasingly becoming the tool of choice for field-based applications such as threat, narcotics and hazmat detection; air, soil and water quality monitoring; and material identification. Conventional fiber-coupled point-source Raman spectrometers effectively interrogate a small sample area and identify bulk samples via spectral library matching. However, these devices are very slow at mapping over macroscopic areas. In addition, the spatial averaging performed by instruments that collect binned spectra, particularly when used in combination with orbital raster scanning, tends to dilute the spectra of trace particles in a mixture. Our design, employing free-space line illumination combined with area imaging, reveals both the spectral and spatial content of heterogeneous mixtures. This approach is well suited to applications such as trace-particle detection of explosives and narcotics in fingerprints. The patented High Throughput Virtual Slit (HTVS) is an innovative optical design that enables compact, inexpensive handheld Raman spectral imagers. HTVS-based instruments achieve significantly higher spectral resolution than can be obtained with conventional designs of the same size. Alternatively, they can be used to build instruments with resolution comparable to large spectrometers but substantially smaller size, weight and unit cost, all while maintaining high sensitivity. When used in combination with laser line imaging, this design eliminates sample photobleaching and unwanted photochemistry while greatly enhancing mapping speed, all with high selectivity and sensitivity. We will present spectral image data and discuss applications that are made possible by low-cost HTVS-enabled instruments.
The threat of oil pollution increases with the expansion of oil exploration and production activities, as well as industrial growth around the world. Use of sorbents is a common method of dealing with oil spills. In this work, an advanced sorbent technology is described. A series of non-woven cellulose acetate (CA) nanofibrous mats with a 3D fibrous structure were synthesized by a novel high-throughput electrospinning technique. The precursors were solutions of CA in acetic acid-acetone at various concentrations. Among them, 15.0% CA exhibits a superhydrophobic surface, with a water contact angle of 128.95°. Its oil sorption capacity is many times higher than that of the best commercial sorbent available on the market. It also showed good buoyancy on water, both as a dry mat and as an oil-saturated mat. In addition, it is biodegradable, readily available and easily manufactured, making the CA nanofibrous mat an excellent candidate oil sorbent for treating oil spills in water.
Martin, H; Bournique, B; Blanchi, B; Lerche-Langrand, C
The purpose of this study was to optimize cryopreservation conditions of rat liver slices in a high-throughput format, with focus on reproducibility. A statistical design of 32 experiments was performed and intracellular lactate dehydrogenase (LDHi) activity and antipyrine (AP) metabolism were evaluated as biomarkers. At freezing, modified University of Wisconsin solution was better than Williams'E medium, and pure dimethyl sulfoxide was better than a cryoprotectant mixture. The best cryoprotectant concentrations were 10% for LDHi and 20% for AP metabolism. Fetal calf serum could be used at 50 or 80%, and incubation of slices with the cryoprotectant could last 10 or 20 min. At thawing, 42 degrees C was better than 22 degrees C. After thawing, 1h was better than 3h of preculture. Cryopreservation increased the interslice variability of the biomarkers. After cryopreservation, LDHi and AP metabolism levels were up to 84 and 80% of fresh values. However, these high levels were not reproducibly achieved. Two factors involved in the day-to-day variability of LDHi were identified: the incubation time with the cryoprotectant and the preculture time. In conclusion, the statistical design was very efficient to quickly determine optimized conditions by simultaneously measuring the role of numerous factors. The cryopreservation procedure developed appears suitable for qualitative metabolic profiling studies.
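The factorial screening logic described above can be sketched in a few lines of Python; the factor names, levels, and responses below are illustrative assumptions based on the abstract, not the published data:

```python
from itertools import product

# Hypothetical two-level factors inspired by the study design
# (names, levels and responses are illustrative, not the published data).
factors = {
    "dmso_pct": (10, 20),        # cryoprotectant concentration
    "serum_pct": (50, 80),       # fetal calf serum
    "incubation_min": (10, 20),  # incubation with cryoprotectant
    "thaw_temp_c": (22, 42),     # thawing temperature
    "preculture_h": (1, 3),      # preculture after thawing
}

def full_factorial(factors):
    """Enumerate every combination of low/high levels (2^k runs)."""
    names = list(factors)
    for levels in product(*(factors[n] for n in names)):
        yield dict(zip(names, levels))

def main_effect(runs, responses, factor, factors):
    """Average response at the high level minus average at the low level."""
    low, high = factors[factor]
    hi_vals = [r for run, r in zip(runs, responses) if run[factor] == high]
    lo_vals = [r for run, r in zip(runs, responses) if run[factor] == low]
    return sum(hi_vals) / len(hi_vals) - sum(lo_vals) / len(lo_vals)

runs = list(full_factorial(factors))  # 2^5 = 32 runs, as in the study
# Toy response: pretend thawing at 42 C adds 15 points of LDHi recovery
responses = [60 + (15 if run["thaw_temp_c"] == 42 else 0) for run in runs]
print(len(runs))                                             # 32
print(main_effect(runs, responses, "thaw_temp_c", factors))  # 15.0
```

Because every factor combination appears equally often, each main effect can be estimated independently from the same 32 runs, which is why such designs "quickly determine optimized conditions by simultaneously measuring the role of numerous factors".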
Linman, Matthew J.; Cheng, Quan Jason
Surface plasmon resonance (SPR) is a surface optical technique that measures minute changes in refractive index at a metal-coated surface. It has become increasingly popular in the study of biological and chemical analytes because of its label-free measurement feature. In addition, SPR allows for both quantitative and qualitative assessment of binding interactions in real time, making it ideally suited for probing weak interactions that are often difficult to study with other methods. This chapter presents the biosensor development in the last 3 years or so utilizing SPR as the principal analytical technique, along with a concise background of the technique itself. While SPR has demonstrated many advantages, it is a nonselective method and so, building reproducible and functional interfaces is vital to sensing applications. This chapter, therefore, focuses mainly on unique surface chemistries and assay approaches to examine biological interactions with SPR. In addition, SPR imaging for high-throughput screening based on microarrays and novel hyphenated techniques involving the coupling of SPR to other analytical methods is discussed. The chapter concludes with a commentary on the current state of SPR biosensing technology and the general direction of future biosensor research.
Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R; Smith, Jeremy C; Kasson, Peter M; van der Spoel, David; Hess, Berk; Lindahl, Erik
Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed at massive scale in clusters, web servers, distributed computing or cloud resources. Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. GROMACS is open source, free software available from http://www.gromacs.org. Supplementary data are available at Bioinformatics online.
Moy, Terence I.; Conery, Annie L.; Larkins-Ford, Jonah; Wu, Gang; Mazitschek, Ralph; Casadei, Gabriele; Lewis, Kim; Carpenter, Anne E.; Ausubel, Frederick M.
The nematode Caenorhabditis elegans is a unique whole animal model system for identifying small molecules with in vivo anti-infective properties. C. elegans can be infected with a broad range of human pathogens, including Enterococcus faecalis, an important human nosocomial pathogen with a mortality rate of up to 37% that is increasingly acquiring resistance to antibiotics. Here, we describe an automated, high throughput screen of 37,200 compounds and natural product extracts for those that enhance survival of C. elegans infected with E. faecalis. The screen uses a robot to accurately dispense live, infected animals into 384-well plates, and automated microscopy and image analysis to generate quantitative, high content data. We identified 28 compounds and extracts that were not previously reported to have antimicrobial properties, including 6 structural classes that cure infected C. elegans animals but do not affect the growth of the pathogen in vitro, thus acting by a mechanism of action distinct from antibiotics currently in clinical use. Our versatile and robust screening system can be easily adapted for other whole animal assays to probe a broad range of biological processes. PMID:19572548
Mazzio, E; Deiab, S; Park, K; Soliman, KFA
Age-related increases in monoamine oxidase B (MAO-B) may contribute to CNS neurodegenerative diseases. Moreover, MAO-B inhibitors are used in the treatment of idiopathic Parkinson disease as preliminary monotherapy or as adjunct therapy with L-dopa. To date, meager natural sources of MAO-B inhibitors have been identified, and the relative strength, potency and rank of many plants relative to standard drugs such as selegiline (L-deprenyl, Eldepryl) are not known. In this work, we developed and utilized a high-throughput enzyme microarray format to screen and evaluate 905 natural product extracts (0.025-0.7 mg/ml) for inhibition of human MAO-B derived from BTI-TN-5B1-4 cells infected with recombinant baculovirus. The protein sequence of the purified enzyme was confirmed using 1D gel electrophoresis and matrix-assisted laser desorption ionization time-of-flight tandem mass spectrometry, and enzyme activity was confirmed by substrate conversion (3 mM benzylamine) to H2O2 and benzaldehyde. Of the 905 natural extracts tested, the lowest IC50s were obtained for Comfrey, Bringraj, Skullcap, Kava-kava, Wild Indigo, Gentian and Green Tea. In conclusion, the data reflect relative potency information, by rank, for commonly used herbs and plants that contain human MAO-B inhibitory properties in their natural form. PMID:22887993
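IC50 values like those ranked above are commonly estimated by log-linear interpolation between the two doses that bracket 50% inhibition; the dose-response numbers below are hypothetical, not the published extract data:

```python
import math

def ic50(concentrations, pct_inhibition):
    """Estimate IC50 by log-linear interpolation between the two doses
    that bracket 50% inhibition. Doses must be in ascending order."""
    for (c1, i1), (c2, i2) in zip(
        zip(concentrations, pct_inhibition),
        zip(concentrations[1:], pct_inhibition[1:]),
    ):
        if i1 < 50 <= i2:
            frac = (50 - i1) / (i2 - i1)
            log_c = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** log_c
    raise ValueError("response does not cross 50% inhibition")

# Hypothetical dose-response for one extract (mg/ml vs % MAO-B inhibition)
doses = [0.025, 0.05, 0.1, 0.2, 0.4, 0.7]
inhibition = [5, 15, 30, 50, 75, 90]
print(round(ic50(doses, inhibition), 3))  # 0.2
```

Interpolating on a log-concentration axis matches the roughly sigmoidal shape of dose-response curves better than linear interpolation on raw concentrations.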
Kyle C Wilcox
Despite their value as sources of therapeutic drug targets, membrane proteomes are largely inaccessible to high-throughput screening (HTS) tools designed for soluble proteins. An important example comprises the membrane proteins that bind amyloid β oligomers (AβOs). AβOs are neurotoxic ligands thought to instigate the synapse damage that leads to Alzheimer's dementia. At present, the identities of initial AβO binding sites are highly uncertain, largely because of extensive protein-protein interactions that occur following attachment of AβOs to surface membranes. Here, we show that AβO binding sites can be obtained in a state suitable for unbiased HTS by encapsulating the solubilized synaptic membrane proteome into nanoscale lipid bilayers (Nanodiscs). This method gives a soluble membrane protein library (SMPL), a collection of individualized synaptic proteins in a soluble state. Proteins within SMPL Nanodiscs showed enzymatic and ligand binding activity consistent with conformational integrity. AβOs were found to bind SMPL Nanodiscs with high affinity and specificity, with binding dependent on intact synaptic membrane proteins, and selective for the higher molecular weight oligomers known to accumulate at synapses. Combining SMPL Nanodiscs with a mix-incubate-read chemiluminescence assay provided a solution-based HTS platform to discover antagonists of AβO binding. Screening a library of 2700 drug-like compounds and natural products yielded one compound that potently reduced AβO binding to SMPL Nanodiscs, synaptosomes, and synapses in nerve cell cultures. Although not a therapeutic candidate, this small molecule inhibitor of synaptic AβO binding will provide a useful experimental antagonist for future mechanistic studies of AβOs in Alzheimer's model systems. Overall, results provide proof of concept for using SMPLs in high-throughput screening for AβO binding antagonists, and illustrate in general how a SMPL Nanodisc system can facilitate drug discovery.
Ion channels are involved in many physiological processes and are attractive targets for therapeutic intervention. Their functional properties vary according to their subunit composition, which in turn varies in a developmental and tissue-specific manner and as a consequence of pathophysiological events. Understanding this diversity requires functional analysis of ion channel properties in large numbers of individual cells. Functional characterisation of ligand-gated channels involves quantitating agonist and drug dose-response relationships using electrophysiological or fluorescence-based techniques. Electrophysiology is limited by low throughput, and high-throughput fluorescence-based functional evaluation generally does not enable characterization of the functional properties of each individual cell. Here we describe a fluorescence-based assay that characterizes functional channel properties at single-cell resolution in high-throughput mode. It is based on progressive receptor activation and iterative fluorescence imaging and delivers >100 dose-responses in a single well of a 384-well plate, using α1-3 homomeric and αβ heteromeric glycine receptor (GlyR) chloride channels as a model system. We applied this assay with transiently transfected HEK293 cells co-expressing halide-sensitive yellow fluorescent protein and different GlyR subunit combinations. Glycine EC50 values of different GlyR isoforms were highly correlated with published electrophysiological data and confirm previously reported pharmacological profiles for the GlyR inhibitors picrotoxin, strychnine and lindane. We show that inter- and intra-well variability is low and that clustering of functional phenotypes permits identification of drugs with subunit-specific pharmacological profiles. As this method dramatically improves the efficiency with which ion channel populations can be characterized in the context of cellular heterogeneity, it should facilitate systems
With the severe acute respiratory syndrome epidemic of 2003 and renewed attention on avian influenza pandemics, new surveillance systems are needed for the earlier detection of emerging infectious diseases. We applied a "next-generation" parallel sequencing platform for viral detection in nasopharyngeal and fecal samples collected during seasonal influenza virus (Flu) infections and norovirus outbreaks from 2005 to 2007 in Osaka, Japan. Random RT-PCR was performed to amplify RNA extracted from 0.1-0.25 ml of nasopharyngeal aspirates (N = 3) and fecal specimens (N = 5), and more than 10 microg of cDNA was synthesized. Unbiased high-throughput sequencing of these 8 samples yielded 15,298-32,335 (average 24,738) reads in a single 7.5 h run. In nasopharyngeal samples, although whole-genome analysis was not possible because the majority (>90%) of reads were host genome-derived, 20-460 Flu reads were detected, which was sufficient for subtype identification. In fecal samples, bacteria and host cells were removed by centrifugation, yielding 484-15,260 reads of norovirus sequence (78-98% of the whole genome was covered), except for one specimen that was undetectable by RT-PCR. These results suggest that our unbiased high-throughput sequencing approach is useful for directly detecting pathogenic viruses without advance genetic information. Although its cost and technological availability make it unlikely that this system will soon become the diagnostic standard worldwide, it could be useful for the earlier discovery of novel emerging viruses and bioterrorism agents, which are difficult to detect with conventional procedures.
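Genome coverage figures such as "78-98% of the whole genome was covered" can be computed by merging the intervals spanned by aligned reads; the read intervals and genome length below are illustrative, not the actual data:

```python
def genome_coverage(read_intervals, genome_length):
    """Fraction of the genome covered by at least one read.
    read_intervals: (start, end) pairs, 0-based half-open."""
    merged = []
    for start, end in sorted(read_intervals):
        if merged and start <= merged[-1][1]:
            # Overlapping or adjacent read: extend the current block
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    covered = sum(end - start for start, end in merged)
    return covered / genome_length

# Toy alignments against a 7,500 nt norovirus-sized genome
reads = [(0, 3000), (2500, 5000), (6000, 7000)]
print(genome_coverage(reads, 7500))  # 0.8
```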
Mrzic, Aida; Lermyte, Frederik; Vu, Trung Nghia; Valkenborg, Dirk; Laukens, Kris
Using mass spectrometry, the analysis of known metabolite structures has become feasible in a systematic high-throughput fashion. Nevertheless, the identification of previously unknown structures remains challenging, partially because many unidentified variants originate from known molecules that underwent unexpected modifications. Here, we present a method for the discovery of unknown metabolite modifications and conjugate metabolite isoforms in a high-throughput fashion. The method is based on user-controlled in-source fragmentation, which is used to induce loss of weakly bound modifications. This is followed by the comparison of product ions from in-source fragmentation and collision-induced dissociation (CID). Diagonal MS2-MS3 matching allows the detection of unknown metabolite modifications, as well as substructure similarities. As the method relies heavily on the advantages of in-source fragmentation and its ability to 'magically' elucidate unknown modifications, we have named it inSourcerer as a portmanteau of in-source and sorcerer. The method was evaluated using a set of 15 different cytokinin standards. Product ions from in-source fragmentation and CID were compared. Hierarchical clustering revealed that good matches are due to the presence of common substructures. Plant leaf extract, spiked with a mix of all 15 standards, was used to demonstrate the method's ability to detect these standards in a complex mixture, as well as confidently identify compounds already present in the plant material. Here we present a method that incorporates a classic liquid chromatography/mass spectrometry (LC/MS) workflow with fragmentation models and computational algorithms. The assumptions upon which the concept of the method was built were shown to be valid, and the method showed that in-source fragmentation can be used to pinpoint structural similarities and indicate the occurrence of a modification. Copyright © 2017 John Wiley & Sons, Ltd.
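Matching product-ion lists from in-source fragmentation against CID spectra can be sketched as a tolerance-based comparison of m/z values; the spectra and tolerance below are hypothetical, not taken from the paper:

```python
def matched_fraction(ions_a, ions_b, tol=0.01):
    """Fraction of product ions in list A that find a partner in list B
    within an m/z tolerance (greedy one-to-one matching)."""
    remaining = sorted(ions_b)
    hits = 0
    for mz in sorted(ions_a):
        for i, cand in enumerate(remaining):
            if abs(mz - cand) <= tol:
                hits += 1
                del remaining[i]  # each ion in B may match only once
                break
    return hits / len(ions_a) if ions_a else 0.0

# Hypothetical spectra: MS2 of an in-source fragment vs CID MS3 of the parent
insource_ms2 = [136.06, 119.04, 92.02, 65.01]
cid_ms3 = [136.06, 119.05, 92.02, 57.03]
print(matched_fraction(insource_ms2, cid_ms3, tol=0.02))  # 0.75
```

A high matched fraction suggests the two precursors share a common substructure, which is the intuition behind the diagonal MS2-MS3 comparison described above.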
Lee, Moo-Yeal; Dordick, Jonathan S; Clark, Douglas S
Due to poor drug candidate safety profiles that are often identified late in the drug development process, the clinical progression of new chemical entities to pharmaceuticals remains hindered, thus resulting in the high cost of drug discovery. To accelerate the identification of safer drug candidates and improve the clinical progression of drug candidates to pharmaceuticals, it is important to develop high-throughput tools that can provide early-stage predictive toxicology data. In particular, in vitro cell-based systems that can accurately mimic the human in vivo response and predict the impact of drug candidates on human toxicology are needed to accelerate the assessment of drug candidate toxicity and human metabolism earlier in the drug development process. The in vitro techniques that provide a high degree of human toxicity prediction will be perhaps more important in cosmetic and chemical industries in Europe, as animal toxicity testing is being phased out entirely in the immediate future. We have developed a metabolic enzyme microarray (the Metabolizing Enzyme Toxicology Assay Chip, or MetaChip) and a miniaturized three-dimensional (3D) cell-culture array (the Data Analysis Toxicology Assay Chip, or DataChip) for high-throughput toxicity screening of target compounds and their metabolic enzyme-generated products. The human or rat MetaChip contains an array of encapsulated metabolic enzymes that is designed to emulate the metabolic reactions in the human or rat liver. The human or rat DataChip contains an array of 3D human or rat cells encapsulated in alginate gels for cell-based toxicity screening. By combining the DataChip with the complementary MetaChip, in vitro toxicity results are obtained that correlate well with in vivo rat data.
Mindy I Davis
Phosphoinositide kinases regulate diverse cellular functions and are important targets for therapeutic development for diseases such as diabetes and cancer. Preparation of the lipid substrate is crucial for the development of a robust and miniaturizable lipid kinase assay. Enzymatic assays for phosphoinositide kinases often use lipid substrates prepared from lyophilized lipid preparations by sonication, which results in variability in liposome size from preparation to preparation. Herein, we report a homogeneous 1536-well luciferase-coupled bioluminescence assay for PI5P4Kα. The substrate preparation is novel and allows the rapid production of a DMSO-containing substrate solution without the need for lengthy liposome preparation protocols, thus enabling the scale-up of this traditionally difficult type of assay. The Z'-factor value was greater than 0.7 for the PI5P4Kα assay, indicating its suitability for high-throughput screening applications. Tyrphostin AG-82 had been identified as an inhibitor of PI5P4Kα by assessing the degree of phospho transfer of γ-32P-ATP to PI5P; its inhibitory activity against PI5P4Kα was confirmed in the present miniaturized assay. From a pilot screen of a library of bioactive compounds, another tyrphostin, I-OMe tyrphostin AG-538 (I-OMe-AG-538), was identified as an ATP-competitive inhibitor of PI5P4Kα with an IC50 of 1 µM, affirming the suitability of the assay for inhibitor discovery campaigns. This homogeneous assay may apply to other lipid kinases and should help in the identification of leads for this class of enzymes by enabling high-throughput screening efforts.
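The Z'-factor quoted above is the standard assay-quality statistic Z' = 1 - 3(σ_pos + σ_neg)/|μ_pos - μ_neg|, with values above 0.5 generally considered suitable for HTS; the control readings below are hypothetical:

```python
from statistics import mean, stdev

def z_prime(positive, negative):
    """Z'-factor for assay quality:
    1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above 0.5 indicate an assay suitable for HTS."""
    sep = abs(mean(positive) - mean(negative))
    return 1 - 3 * (stdev(positive) + stdev(negative)) / sep

# Hypothetical luminescence readings for positive and negative controls
pos = [100, 102, 98, 101, 99]
neg = [10, 11, 9, 10, 10]
print(round(z_prime(pos, neg), 2))  # 0.92
```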
de Masson, Adele; O'Malley, John T; Elco, Christopher P; Garcia, Sarah S; Divito, Sherrie J; Lowry, Elizabeth L; Tawa, Marianne; Fisher, David C; Devlin, Phillip M; Teague, Jessica E; Leboeuf, Nicole R; Kirsch, Ilan R; Robins, Harlan; Clark, Rachael A; Kupper, Thomas S
Mycosis fungoides (MF), the most common cutaneous T cell lymphoma (CTCL), is a malignancy of skin-tropic memory T cells. Most MF cases present as early stage (stage IA/B, limited to the skin), and these patients typically have a chronic, indolent clinical course. However, a small subset of early-stage cases develops progressive and fatal disease. Because outcomes can be so different, early identification of this high-risk population is an urgent unmet clinical need. We evaluated the use of next-generation high-throughput DNA sequencing of the T cell receptor β gene (TCRB) in lesional skin biopsies to predict progression and survival in a discovery cohort of 208 patients with CTCL (177 with MF) from a 15-year longitudinal observational clinical study. We compared these data to the results in an independent validation cohort of 101 CTCL patients (87 with MF). The tumor clone frequency (TCF) in lesional skin, measured by high-throughput sequencing of the TCRB gene, was an independent prognostic factor of both progression-free and overall survival in patients with CTCL, and MF in particular. In early-stage patients, a TCF of >25% in the skin was a stronger predictor of progression than any other established prognostic factor (stage IB versus IA, presence of plaques, high blood lactate dehydrogenase concentration, large-cell transformation, or age). The TCF therefore may accurately predict disease progression in early-stage MF. Early identification of patients at high risk for progression could help identify candidates who may benefit from allogeneic hematopoietic stem cell transplantation before their disease becomes treatment-refractory. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
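The tumor clone frequency and the >25% progression cutoff described above can be sketched as follows; the TCRB clonotype labels and read counts are hypothetical:

```python
def tcf(clone_counts):
    """Tumor clone frequency: fraction of sequenced T cells carrying the
    most abundant TCRB rearrangement in a lesional skin sample."""
    total = sum(clone_counts.values())
    return max(clone_counts.values()) / total

def high_risk(clone_counts, threshold=0.25):
    """Flag samples whose TCF exceeds the reported 25% progression cutoff."""
    return tcf(clone_counts) > threshold

# Hypothetical TCRB sequencing counts (CDR3 rearrangement -> read count)
sample = {"CASSLGQGYEQYF": 400, "CASSPDRGNTIYF": 50, "CASSQETQYF": 550}
print(round(tcf(sample), 2), high_risk(sample))  # 0.55 True
```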
Michael J Smout
BACKGROUND: Helminth parasites cause untold morbidity and mortality to billions of people and livestock. Anthelmintic drugs are available, but resistance is a problem in livestock parasites and is a looming threat for human helminths. Testing the efficacy of available anthelmintic drugs and development of new drugs is hindered by the lack of objective high-throughput screening methods. Currently, drug effect is assessed by observing motility or development of parasites using laborious, subjective, low-throughput methods. METHODOLOGY/PRINCIPAL FINDINGS: Here we describe a novel application of a real-time cell monitoring device (xCELLigence) that can simply and objectively assess anthelmintic effects by measuring parasite motility in real time in a fully automated high-throughput fashion. We quantitatively assessed motility and determined real-time IC50 values of different anthelmintic drugs against several developmental stages of major helminth pathogens of humans and livestock, including larval Haemonchus contortus and Strongyloides ratti, and adult hookworms and blood flukes. The assay enabled quantification of the onset of egg hatching in real time and the impact of drugs on hatch rate, as well as discriminating between the effects of drugs on motility of drug-susceptible and -resistant isolates of H. contortus. CONCLUSIONS/SIGNIFICANCE: Our findings indicate that this technique will be suitable for discovery and development of new anthelmintic drugs as well as for detection of phenotypic resistance to existing drugs for the majority of helminths and other pathogens where motility is a measure of pathogen viability. The method is also amenable to other purposes where motility is assessed, such as gene silencing or antibody-mediated killing.
Sapkota, Rumakanta; Nicolaisen, Mogens
... communities. The well-known primer sets ITS4, ITS6 and ITS7 were used in the study in a semi-nested PCR approach to target the internal transcribed spacer (ITS) 1 of ribosomal DNA in a next generation sequencing protocol. These primers have been used in similar studies before, but with limited success. We were able to increase the proportion of retrieved oomycete sequences dramatically, mainly by increasing the annealing temperature during PCR. The optimized protocol was validated using three mock communities and the method was further evaluated using total DNA from 26 soil samples collected from different ... the usefulness of the method not only in soil DNA but also in a plant DNA background. In conclusion, we demonstrate a successful approach for pyrosequencing of oomycete communities using ITS1 as the barcode sequence with well-known primers for oomycete DNA amplification.
Abstract Background Protein-protein interaction data used in the creation or prediction of molecular networks is usually obtained from large scale or high-throughput experiments. This experimental data is liable to contain a large number of spurious interactions. Hence, there is a need to validate the interactions and filter out the incorrect data before using them in prediction studies. Results In this study, we use a combination of 3 genomic features – structurally known interacting Pfam domains, Gene Ontology annotations and sequence homology – as a means to assign reliability to the protein-protein interactions in Saccharomyces cerevisiae determined by high-throughput experiments. Using Bayesian network approaches, we show that protein-protein interactions from high-throughput data supported by one or more genomic features have a higher likelihood ratio and hence are more likely to be real interactions. Our method has a high sensitivity (90%) and good specificity (63%). We show that 56% of the interactions from high-throughput experiments in Saccharomyces cerevisiae have high reliability. We use the method to estimate the number of true interactions in the high-throughput protein-protein interaction data sets in Caenorhabditis elegans, Drosophila melanogaster and Homo sapiens to be 27%, 18% and 68% respectively. Our results are available for searching and downloading at http://helix.protein.osaka-u.ac.jp/htp/. Conclusion A combination of genomic features that include sequence, structure and annotation information is a good predictor of true interactions in large and noisy high-throughput data sets. The method has a very high sensitivity and good specificity and can be used to assign a likelihood ratio, corresponding to the reliability, to each interaction.
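The Bayesian scoring idea above can be sketched as a naive product of per-feature likelihood ratios. The conditional probabilities below are invented placeholders, not the values estimated in the study:

```python
# Naive-Bayes-style likelihood ratio for an interaction supported by a set of
# genomic features (placeholder probabilities, not the study's estimates).
def likelihood_ratio(features, p_given_true, p_given_false):
    """Multiply per-feature ratios P(f | true) / P(f | spurious)."""
    lr = 1.0
    for f in features:
        lr *= p_given_true[f] / p_given_false[f]
    return lr

# Hypothetical feature reliabilities.
p_true = {"pfam_domain": 0.30, "go_overlap": 0.60, "homology": 0.20}
p_false = {"pfam_domain": 0.03, "go_overlap": 0.20, "homology": 0.02}

lr_one = likelihood_ratio(["go_overlap"], p_true, p_false)   # ~3
lr_all = likelihood_ratio(list(p_true), p_true, p_false)     # ~300
```

Under these toy numbers an interaction supported by all three features scores a combined ratio of about 300, whereas GO-annotation overlap alone contributes only a factor of about 3; a cutoff on the combined ratio then separates reliable from unsupported high-throughput interactions.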
Sastry, Anand; Monk, Jonathan M.; Tegel, Hanna
and machine learning identifies protein properties that hinder the HPA high-throughput antibody production pipeline. We predict protein expression and solubility with accuracies of 70% and 80%, respectively, based on a subset of key properties (aromaticity, hydropathy and isoelectric point). We guide the selection of protein fragments based on these characteristics to optimize high-throughput experimentation. Availability and implementation: We present the machine learning workflow as a series of IPython notebooks hosted on GitHub (https://github.com/SBRG/Protein_ML). The workflow can be used as a template ...
Andersen, Lars Dyrskjøt; Zieger, Karsten; Ørntoft, Torben Falck
Bladder cancer is the fifth most common neoplasm in industrialized countries. Due to frequent recurrences of the superficial form of this disease, bladder cancer ranks as one of the most common cancers. Despite the description of a large number of tumor markers for bladder cancers, none have individually contributed to the management of the disease. However, the development of high-throughput techniques for simultaneous assessment of a large number of markers has allowed classification of tumors into clinically relevant molecular subgroups beyond those possible by pathological classification. Here, we review the recent advances in high-throughput molecular marker identification for superficial and invasive bladder cancers.
Pedersen, Marlene Lemvig; Block, Ines; List, Markus
... into automated robotic high-throughput screens, which allows subsequent protein quantification. In this integrated solution, samples are directly forwarded to automated cell lysate preparation and preparation of dilution series, including reformatting to a protein spotter-compatible format after the high-throughput screening. Tracking of huge sample numbers and data analysis from a high-content screen to RPPAs is accomplished via MIRACLE, a custom-made software suite developed by us. To this end, we demonstrate that the RPPAs generated in this manner deliver reliable protein readouts and that GAPDH and TFR levels can ...
Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) does not usually allow adequate sampling of the relevant regions of their configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts, rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high dimensional dynamical systems, and optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
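The adaptive-redistribution loop can be caricatured in a few lines: run short trajectories, histogram the visited states, and restart the next generation of walkers from the least-visited bins. This toy 1-D random walk (pure stdlib; not the authors' method, and with no dimensionality reduction) only illustrates the resource-reallocation idea:

```python
# Toy count-based adaptive ensemble sampling: reseed walkers from rare bins
# so unexplored regions receive more computational effort (illustrative only).
import random
from collections import Counter

random.seed(0)

def short_trajectory(x0, steps=50):
    """A 1-D random walk standing in for a short MD segment."""
    xs, x = [], x0
    for _ in range(steps):
        x += random.gauss(0.0, 0.1)
        xs.append(x)
    return xs

def adaptive_rounds(n_rounds=5, n_walkers=10):
    counts, seeds = Counter(), [0.0] * n_walkers
    for _ in range(n_rounds):
        visited = []
        for s in seeds:
            visited.extend(short_trajectory(s))
        counts.update(round(x, 1) for x in visited)   # 0.1-wide state bins
        # restart the next generation from the least-visited bins so far
        ordered = [b for b, _ in counts.most_common()]
        seeds = (ordered[::-1] + [0.0] * n_walkers)[:n_walkers]
    return counts

counts = adaptive_rounds()
```

In a real application the fixed-width binning would be replaced by a learned low-dimensional projection of the configurational space, and the random walks by actual MD segments.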
Abstract Background Genetic markers are pivotal to modern genomics research; however, discovery and genotyping of molecular markers in oat has been hindered by the size and complexity of the genome, and by a scarcity of sequence data. The purpose of this study was to generate oat expressed sequence tag (EST) information, develop a bioinformatics pipeline for SNP discovery, and establish a method for rapid, cost-effective, and straightforward genotyping of SNP markers in complex polyploid genomes such as oat. Results Based on cDNA libraries of four cultivated oat genotypes, approximately 127,000 contigs were assembled from approximately one million Roche 454 sequence reads. Contigs were filtered through a novel bioinformatics pipeline to eliminate ambiguous polymorphism caused by subgenome homology, and 96 in silico SNPs were selected from 9,448 candidate loci for validation using high-resolution melting (HRM) analysis. Of these, 52 (54%) were polymorphic between parents of the Ogle1040 × TAM O-301 (OT) mapping population, with 48 segregating as single Mendelian loci, and 44 being placed on the existing OT linkage map. Ogle and TAM amplicons from 12 primers were sequenced for SNP validation, revealing complex polymorphism in seven amplicons but general sequence conservation within SNP loci. Whole-amplicon interrogation with HRM revealed insertions, deletions, and heterozygotes in secondary oat germplasm pools, generating multiple alleles at some primer targets. To validate marker utility, 36 SNP assays were used to evaluate the genetic diversity of 34 diverse oat genotypes. Dendrogram clusters corresponded generally to known genome composition and genetic ancestry. Conclusions The high-throughput SNP discovery pipeline presented here is a rapid and effective method for identification of polymorphic SNP alleles in the oat genome. The current-generation HRM system is a simple and highly-informative platform for SNP genotyping. These techniques provide
AUTHOR|(CDS)2145966; Gribaudo, Marco
The IT infrastructures of companies and research centres are implementing new technologies to satisfy the increasing need of computing resources for big data analysis. In this context, resource profiling plays a crucial role in identifying areas where the improvement of the utilisation efficiency is needed. In order to deal with the profiling and optimisation of computing resources, two complementary approaches can be adopted: the measurement-based approach and the model-based approach. The measurement-based approach gathers and analyses performance metrics executing benchmark applications on computing resources. Instead, the model-based approach implies the design and implementation of a model as an abstraction of the real system, selecting only those aspects relevant to the study. This Thesis originates from a project carried out by the author within the CERN IT department. CERN is an international scientific laboratory that conducts fundamental research in the domain of elementary particle physics. The p...
Repin, Mikhail; Turner, Helen C.; Garty, Guy; Brenner, David J.
Here we describe the general concept of the combined use of plates and tubes in racks compatible with the American National Standards Institute/Society for Laboratory Automation and Screening microplate formats as next-generation platforms for increasing the throughput of bio-dosimetry assays. These platforms can be used at different stages of bio-dosimetry assays, starting from blood collection into micro-tubes organised in standardised racks and ending with the cytogenetic analysis of samples in standardised multi-well and multichannel plates. Robotically friendly platforms can be used for different bio-dosimetry assays in minimally equipped laboratories and on cost-effective automated universal biotech systems. (authors)
mapping. In Chapter 1, it was examined whether combining phage display, a traditional epitope mapping approach, with HTS would improve the method. The developed approach was successfully used to map Ara h 1 epitopes in sera from patients with peanut allergy. Notably, the sera represented difficult...... proliferation advantages. Finally, in Chapter 4, a different emerging technology, next-generation peptide microarrays, was applied for epitope mapping of major peanut allergens using sera from allergic patients. New developments in the peptide microarray have enabled a greatly increased throughput....... In this study, these improvements were utilized to characterize epitopes at high resolution, i.e. determine the importance of each residue for antibody binding, for all major peanut allergens. Epitope reactivity among patients often converged on known epitope hotspots, however the binding patterns were somewhat...
Suyanti, Retno Dwi; Purba, Deby Monika
The objective of this research is to measure the increase in students' achievement under the discovery learning model based on lesson study. Besides that, this research was also conducted to assess the cognitive aspect. The research was conducted in three schools, including SMA N 3 Medan. The population is all the students in SMA N 11 Medan, taken by purposive random sampling. The research instruments are achievement test instruments that have been validated. The research data were analyzed statistically using MS Excel. The results show that the achievement of students taught by the discovery learning model based on lesson study is higher than that of students taught by the direct instructional method. This can be seen from the average gain and is also confirmed by t-test: the normalized gain in the experimental class of SMA N 11 is (0.74±0.12) and in the control class (0.45±0.12); at significance level α = 0.05, Ha is accepted and Ho is rejected, where tcount > ttable in SMA N 11 (9.81 > 1.66). The cognitive aspect with the greatest improvement across the three schools is C2, where SMA N 11 scored 0.84 (high). The lesson study observation sheets from SMA N 11 show 92% of students working together, while 67% were less active in using media.
Baudet, D.; Braux, B.; Prieur, O.; Hughes, R.; Wilkinson, M.; Latunde-Dada, K.; Jahns, J.; Lohmann, U.; Fey, D.; Karafolas, N.
For the next generation of High-Throughput (HTP) Telecommunications Satellites, space end users' needs will result in higher link speeds and an increase in the number of channels; up to 512 channels running at 10 Gbit/s. By keeping electrical interconnections based on copper, the constraints in terms of power dissipation, number of electrical wires and signal integrity will become too demanding. The replacement of the electrical links by optical links is the best-adapted solution, as it provides high-speed links with low power consumption and no EMC/EMI. But replacing all electrical links of an On Board Payload (OBP) by optical links is challenging. It is not simply a matter of replacing electrical components with optical ones; rather, the whole concept and architecture have to be rethought to achieve a highly reliable and high-performance optical solution. In this context, this paper will present the concept of an Innovative OBP Optical Architecture. The optical architecture was defined to meet the critical requirements of the application: signal speed, number of channels, space reliability, power dissipation, optical signal crossing and component availability. The resulting architecture is challenging and the need for new developments is highlighted. But this innovative optically interconnected architecture will substantially outperform standard electrical ones.
Riniker, Sereina; Wang, Yuan; Jenkins, Jeremy L; Landrum, Gregory A
Modern high-throughput screening (HTS) is a well-established approach for hit finding in drug discovery that is routinely employed in the pharmaceutical industry to screen more than a million compounds within a few weeks. However, as the industry shifts to more disease-relevant but more complex phenotypic screens, the focus has moved to piloting smaller but smarter chemically/biologically diverse subsets followed by an expansion around hit compounds. One standard method for doing this is to train a machine-learning (ML) model with the chemical fingerprints of the tested subset of molecules and then select the next compounds based on the predictions of this model. An alternative approach would be to take advantage of the wealth of bioactivity information contained in older (full-deck) screens using so-called HTS fingerprints, where each element of the fingerprint corresponds to the outcome of a particular assay, as input to machine-learning algorithms. We constructed HTS fingerprints using two collections of data: 93 in-house assays and 95 publicly available assays from PubChem. For each source, an additional set of 51 and 46 assays, respectively, was collected for testing. Three different ML methods, random forest (RF), logistic regression (LR), and naïve Bayes (NB), were investigated for both the HTS fingerprint and a chemical fingerprint, Morgan2. RF was found to be best suited for learning from HTS fingerprints yielding area under the receiver operating characteristic curve (AUC) values >0.8 for 78% of the internal assays and enrichment factors at 5% (EF(5%)) >10 for 55% of the assays. The RF(HTS-fp) generally outperformed the LR trained with Morgan2, which was the best ML method for the chemical fingerprint, for the majority of assays. In addition, HTS fingerprints were found to retrieve more diverse chemotypes. Combining the two models through heterogeneous classifier fusion led to a similar or better performance than the best individual model for all assays
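The modeling setup above can be sketched with scikit-learn. Everything below is synthetic (randomly generated assay-outcome fingerprints with signal planted in two columns), intended only to show the shape of the approach, not the paper's data or results:

```python
# Sketch: treat each compound's outcomes across past HTS assays as a binary
# fingerprint and train a random forest to predict activity in a new assay,
# scoring with ROC AUC as in the study. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_compounds, n_assays = 600, 40

# Synthetic HTS fingerprint: one column per historical assay (1 = active hit).
X = rng.binomial(1, 0.1, size=(n_compounds, n_assays))
# Make the new assay's label depend on two of the old assays plus noise.
logits = X[:, 0] * 2.0 + X[:, 1] * 1.5 - 1.0
y = (logits + rng.normal(0, 0.5, n_compounds) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
# With signal planted in columns 0 and 1, AUC comes out well above chance.
```

Swapping `RandomForestClassifier` for `LogisticRegression` over a Morgan-type chemical fingerprint matrix would mirror the paper's baseline comparison between HTS fingerprints and chemical fingerprints.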
Pandey, Piyush; Ge, Yufeng; Stoerger, Vincent; Schnable, James C
Image-based high-throughput plant phenotyping in greenhouse has the potential to relieve the bottleneck currently presented by phenotypic scoring which limits the throughput of gene discovery and crop improvement efforts. Numerous studies have employed automated RGB imaging to characterize biomass and growth of agronomically important crops. The objective of this study was to investigate the utility of hyperspectral imaging for quantifying chemical properties of maize and soybean plants in vivo. These properties included leaf water content, as well as concentrations of macronutrients nitrogen (N), phosphorus (P), potassium (K), magnesium (Mg), calcium (Ca), and sulfur (S), and micronutrients sodium (Na), iron (Fe), manganese (Mn), boron (B), copper (Cu), and zinc (Zn). Hyperspectral images were collected from 60 maize and 60 soybean plants, each subjected to varying levels of either water deficit or nutrient limitation stress with the goal of creating a wide range of variation in the chemical properties of plant leaves. Plants were imaged on an automated conveyor belt system using a hyperspectral imager with a spectral range from 550 to 1,700 nm. Images were processed to extract reflectance spectrum from each plant and partial least squares regression models were developed to correlate spectral data with chemical data. Among all the chemical properties investigated, water content was predicted with the highest accuracy [R2 = 0.93 and RPD (Ratio of Performance to Deviation) = 3.8]. All macronutrients were also quantified satisfactorily (R2 from 0.69 to 0.92, RPD from 1.62 to 3.62), with N predicted best followed by P, K, and S. The micronutrients group showed lower prediction accuracy (R2 from 0.19 to 0.86, RPD from 1.09 to 2.69) than the macronutrient groups. Cu and Zn were best predicted, followed by Fe and Mn. Na and B were the only two properties that hyperspectral imaging was not able to quantify satisfactorily (R2 ... plant chemical traits. Future
Wallace, M.; Metson, S.; Holcombe, L.; Anderson, M.; Newbold, D.; Brook, N.
Landslides are an increasing problem in developing countries. Multiple landslides can be triggered by heavy rainfall resulting in loss of life, homes and critical infrastructure. Through computer simulation of individual slopes it is possible to predict the causes, timing and magnitude of landslides and estimate the potential physical impact. Geographical scientists at the University of Bristol have developed software that integrates a physically-based slope hydrology and stability model (CHASM) with an econometric model (QUESTA) in order to predict landslide risk over time. These models allow multiple scenarios to be evaluated for each slope, accounting for data uncertainties, different engineering interventions, risk management approaches and rainfall patterns. Individual scenarios can be computationally intensive, however each scenario is independent and so multiple scenarios can be executed in parallel. As more simulations are carried out the overhead involved in managing input and output data becomes significant. This is a greater problem if multiple slopes are considered concurrently, as is required both for landslide research and for effective disaster planning at national levels. There are two critical factors in this context: generated data volumes can be in the order of tens of terabytes, and greater numbers of simulations result in long total runtimes. Users of such models, in both the research community and in developing countries, need to develop a means for handling the generation and submission of landslide modelling experiments, and the storage and analysis of the resulting datasets. Additionally, governments in developing countries typically lack the necessary computing resources and infrastructure. Consequently, knowledge that could be gained by aggregating simulation results from many different scenarios across many different slopes remains hidden within the data. To address these data and workload management issues, University of Bristol particle
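Because each scenario is independent, the embarrassingly parallel part is simple to express; the sketch below uses a thread pool and an invented stand-in `simulate` function (not CHASM/QUESTA) purely to show the submission pattern:

```python
# Sketch: fan independent slope/rainfall scenarios out over a worker pool.
# `simulate` is an invented toy model, not the CHASM/QUESTA codes.
from concurrent.futures import ThreadPoolExecutor

def simulate(scenario):
    """Stand-in for one scenario run; returns a toy factor of safety."""
    slope_angle, rainfall_mm = scenario
    return {"scenario": scenario,
            "factor_of_safety": 2.0 - 0.01 * slope_angle - 0.001 * rainfall_mm}

# A small scenario grid: slope angle (degrees) x rainfall event (mm).
scenarios = [(angle, rain) for angle in (20, 30, 40) for rain in (50, 150, 300)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, scenarios))

at_risk = [r for r in results if r["factor_of_safety"] < 1.7]
```

For CPU-bound slope simulations a `ProcessPoolExecutor` or a batch/grid submission system would replace the thread pool, and results would be written to shared storage rather than held in memory, which is exactly where the data-management issues described above begin.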
Haslam, Carl; Hellicar, John; Dunn, Adrian; Fuetterer, Arne; Hardy, Neil; Marshall, Peter; Paape, Rainer; Pemberton, Michelle; Resemannand, Anja; Leveridge, Melanie
Mass spectrometry (MS) offers a label-free, direct-detection method, in contrast to fluorescent or colorimetric methodologies. Over recent years, solid-phase extraction-based techniques, such as the Agilent RapidFire system, have emerged that are capable of analyzing samples in high-throughput screening (HTS). Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF) offers an alternative for high-throughput MS detection. However, sample preparation and deposition onto the MALDI target, as well as interference from matrix ions, have been considered limitations for the use of MALDI for screening assays. Here we describe the development and validation of assays for both small-molecule and peptide analytes using MALDI-TOF coupled with nanoliter liquid handling. Using the JMJD2c histone demethylase and acetylcholinesterase as model systems, we have generated robust data in a 1536 format and also increased sample deposition to 6144 samples per target. Using these methods, we demonstrate that this technology can deliver fast sample analysis time with low sample volume, and data comparable to that of current RapidFire assays. © 2015 Society for Laboratory Automation and Screening.
Salvo-Chirnside, Eliane; Kane, Steven; Kerr, Lorraine E
The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated protocols for the extraction of total RNA from 9-day-old Arabidopsis seedlings in a 96 well plate format using silica membrane-based methodology. Consistent and reproducible yields of high quality RNA are isolated averaging 8.9 μg total RNA per sample (~20 mg plant tissue). The purified RNA is suitable for subsequent qPCR analysis of the expression of over 500 genes in triplicate from each sample. Using the automated procedure, 192 samples (2 × 96 well plates) can easily be fully processed (samples homogenised, RNA purified and quantified) in less than half a day. Additionally, we demonstrate that plant samples can be stored in RNAlater at -20°C (but not 4°C) for 10 months prior to extraction with no significant effect on RNA yield or quality. Furthermore, disrupted samples can be stored in the lysis buffer at -20°C for at least 6 months prior to completion of the extraction procedure providing a flexible sampling and storage scheme to facilitate complex time series experiments.
Lu, Zhi-Yan; Guo, Xiao-Jue; Li, Hui; Huang, Zhong-Zi; Lin, Kuang-Fei; Liu, Yong-Di
A high-throughput screening system for moderately halophilic phenol-degrading bacteria from various habitats was developed to replace the conventional strain screening owing to its high efficiency. Bacterial enrichments were cultivated in 48 deep well microplates instead of shake flasks or tubes. Measurement of phenol concentrations was performed in 96-well microplates instead of using the conventional spectrophotometric method or high-performance liquid chromatography (HPLC). The high-throughput screening system was used to cultivate forty-three bacterial enrichments and gained a halophilic bacterial community E3 with the best phenol-degrading capability. Halomonas sp. strain 4-5 was isolated from the E3 community. Strain 4-5 was able to degrade more than 94% of the phenol (500 mg·L−1 starting concentration) over a range of 3%–10% NaCl. Additionally, the strain accumulated the compatible solute, ectoine, with increasing salt concentrations. PCR detection of the functional genes suggested that the largest subunit of multicomponent phenol hydroxylase (LmPH) and catechol 1,2-dioxygenase (C12O) were active in the phenol degradation process. PMID:26020478
Candida albicans, the most common human pathogenic fungus, can establish a persistent lethal infection in the intestine of the microscopic nematode Caenorhabditis elegans. The C. elegans-C. albicans infection model was previously adapted to screen for antifungal compounds. Modifications to this screen have been made to facilitate a high-throughput assay including co-inoculation of nematodes with C. albicans and instrumentation allowing precise dispensing of worms into assay wells, eliminating two labor-intensive steps. This high-throughput method was utilized to screen a library of 3,228 compounds represented by 1,948 bioactive compounds and 1,280 small molecules derived via diversity-oriented synthesis. Nineteen compounds were identified that conferred an increase in C. elegans survival, including most known antifungal compounds within the chemical library. In addition to seven clinically used antifungal compounds, twelve compounds were identified which are not primarily used as antifungal agents, including three immunosuppressive drugs. This assay also allowed the assessment of the relative minimal inhibitory concentration, the effective concentration in vivo, and the toxicity of the compound in a single assay.
Kuo, Yung; Park, Kyoungwon; Li, Jack; Ingargiola, Antonino; Park, Joonhyuck; Shvadchak, Volodymyr; Weiss, Shimon
Monitoring membrane potential in neurons requires sensors with minimal invasiveness, high spatial and temporal (sub-ms) resolution, and large sensitivity for enabling detection of sub-threshold activities. While organic dyes and fluorescent proteins have been developed to possess voltage-sensing properties, photobleaching, cytotoxicity, low sensitivity, and low spatial resolution have obstructed further studies. Semiconductor nanoparticles (NPs), as prospective voltage sensors, have shown excellent sensitivity based on the quantum-confined Stark effect (QCSE) at room temperature and at the single-particle level. Both theory and experiment have shown that their voltage sensitivity can be increased significantly via material, bandgap, and structural engineering. Based on theoretical calculations, we synthesized one of the optimal candidates for voltage sensors: 12 nm type-II ZnSe/CdS nanorods (NRs) with an asymmetrically located seed. The voltage sensitivity and spectral shift were characterized in vitro with spectrally resolved microscopy, using electrodes grown by thin-film deposition which "sandwich" the NRs. We characterized multiple batches of such NRs and iteratively modified the synthesis to achieve higher voltage sensitivity (ΔF/F > 10%), larger spectral shift (>5 nm), better homogeneity, and better colloidal stability. Using a high-throughput screening method, we were able to compare the voltage sensitivity of our NRs with commercial spherical quantum dots (QDs) with single-particle statistics. Our method of high-throughput screening with a spectrally resolved microscope also provides a versatile tool for studying single-particle spectroscopy under field modulation.
Lundqvist, Magnus; Edfors, Fredrik; Sivertsson, Åsa
We describe solid-phase cloning (SPC) for high-throughput assembly of expression plasmids. Our method allows PCR products to be put directly into a liquid handler for capture and purification using paramagnetic streptavidin beads and conversion into constructs by subsequent cloning reactions. We ...
Wetmore, Barbara A.
High-throughput in vitro toxicity screening provides an efficient way to identify potential biological targets for environmental and industrial chemicals while conserving limited testing resources. However, reliance on the nominal chemical concentrations in these in vitro assays as an indicator of bioactivity may misrepresent potential in vivo effects of these chemicals due to differences in clearance, protein binding, bioavailability, and other pharmacokinetic factors. Development of high-throughput in vitro hepatic clearance and protein binding assays and refinement of quantitative in vitro-to-in vivo extrapolation (QIVIVE) methods have provided key tools to predict xenobiotic steady state pharmacokinetics. Using a process known as reverse dosimetry, knowledge of the chemical steady state behavior can be incorporated with HTS data to determine the external in vivo oral exposure needed to achieve internal blood concentrations equivalent to those eliciting bioactivity in the assays. These daily oral doses, known as oral equivalents, can be compared to chronic human exposure estimates to assess whether in vitro bioactivity would be expected at the dose-equivalent level of human exposure. This review will describe the use of QIVIVE methods in a high-throughput environment and the promise they hold in shaping chemical testing priorities and, potentially, high-throughput risk assessment strategies
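The reverse-dosimetry arithmetic can be sketched in a few lines. The steady-state model and all parameter values below are simplified illustrations (a crude unbound hepatic-plus-renal clearance term), not the published QIVIVE equations:

```python
# Reverse-dosimetry sketch: convert an in vitro AC50 into an "oral equivalent"
# daily dose, assuming Css scales linearly with dose. All parameters and the
# clearance model are illustrative simplifications, not measured values.
def css_per_unit_dose(cl_int_l_per_h, fub, gfr_l_per_h=6.7):
    """Steady-state plasma conc (uM) per 1 mg/kg/day oral dose, for a 70-kg
    adult and a 100 g/mol compound; unbound hepatic + renal clearance."""
    dose_rate_umol_per_h = (1.0 * 70 / 100) * 1000 / 24   # mg/kg/day -> umol/h
    clearance = fub * (cl_int_l_per_h + gfr_l_per_h)      # L/h, very simplified
    return dose_rate_umol_per_h / clearance               # uM per (mg/kg/day)

def oral_equivalent(ac50_um, cl_int_l_per_h, fub):
    """Daily oral dose (mg/kg/day) producing a Css equal to the assay AC50."""
    return ac50_um / css_per_unit_dose(cl_int_l_per_h, fub)

# Example: AC50 = 2 uM, intrinsic clearance 10 L/h, 20% unbound in plasma.
dose = oral_equivalent(2.0, 10.0, 0.2)
```

The resulting oral equivalent (here roughly 0.23 mg/kg/day under these toy parameters) would then be compared against chronic human exposure estimates to decide whether in vitro bioactivity is expected at realistic exposure levels.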
Knudsen, Peter Boldsen
producing the heterologous model polyketide, 6-methylsalicylic acid (6-MSA). An automated methodology for high throughput screening focusing on growth rates, together with a fully automated method for quantitative physiological characterisation in microtiter plates, was established for yeast. Full...
Pei, Y.T.; Eivani, A.R.; Zaharia, T.; Kazantis, A.V.; Sanden, van de M.C.M.; De Hosson, J.T.M.
Flexible hydrogenated amorphous carbon (a-C:H) thin film coated on rubbers has shown outstanding protection of rubber seals from friction and wear. This work concentrates on the potential advances of expanding thermal plasma (ETP) process for a high throughput deposition of a-C:H thin films in
Hoogenboom, R.; Fijten, M.W.M.; Abeln, C.H.; Schubert, U.S.
Gel permeation chromatography (GPC) and gas chromatography (GC) were successfully introduced into a high-throughput workflow. The feasibility and limitations of online GPC with a high-speed column were evaluated by measuring polystyrene standards and comparing the results with regular offline GPC.
an der Heiden, M.R.; Plenio, H.; Immel, S.; Burello, E.; Rothenberg, G.; Hoefsloot, H.C.J.
A method is presented for the high-throughput monitoring of reaction kinetics in homogeneous catalysis, running up to 25 coupling reactions in a single reaction vessel. This method is demonstrated and validated on the Sonogashira reaction, analyzing the kinetics for almost 500 coupling reactions.
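For pseudo-first-order behavior, extracting a rate constant per substrate from such multiplexed concentration-time data reduces to a log-linear fit. The data below are synthetic (two hypothetical aryl halides with invented rate constants), not the paper's measurements:

```python
# Sketch: per-substrate pseudo-first-order rate constants from a log-linear
# fit of synthetic concentration-time data (illustrative, not the paper's).
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 120, 13)                  # sampling times, minutes

def fit_k(conc, t=t):
    """Slope of ln[substrate] vs t gives -k for first-order decay."""
    slope, _intercept = np.polyfit(t, np.log(conc), 1)
    return -slope

true_k = {"ArI_1": 0.030, "ArBr_2": 0.008}   # hypothetical substrates, 1/min
fits = {}
for name, k in true_k.items():
    conc = 1.0 * np.exp(-k * t) * rng.normal(1.0, 0.01, t.size)  # 1% noise
    fits[name] = fit_k(conc)
```

With per-substrate labels (e.g. distinct masses in GC-MS detection), the same fit runs once per coupling reaction, which is how a single vessel can yield kinetics for many reactions at once.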
Yang, Kesong; Oses, Corey; Curtarolo, Stefano
Modeling Disordered Materials with a High-Throughput Ab-Initio Approach
Brueckner, L. (Laura); Van Arensbergen, J. (Joris); Akhtar, W. (Waseem); L. Pagie (Ludo); B. van Steensel (Bas)
Background: Chromatin proteins control gene activity in a concerted manner. We developed a high-throughput assay to study the effects of the local chromatin environment on the regulatory activity of a protein of interest. The assay combines a previously reported multiplexing strategy
Adams, Jonathan D; Ebbesen, Christian L.; Barnkob, Rune
-slide format using low-cost, rapid-prototyping techniques. This high-throughput acoustophoresis chip (HTAC) utilizes a temperature-stabilized, standing ultrasonic wave, which imposes differential acoustic radiation forces that can separate particles according to size, density and compressibility. The device...